US Homeland Security attacks EU effort to police artificial intelligence

The outgoing head of the US Department of Homeland Security believes Europe’s “adversarial” relationship with tech companies is hampering a global approach to regulation that could result in security vulnerabilities.

Alejandro Mayorkas told the Financial Times the US — home of the world’s top artificial intelligence groups, including OpenAI and Google — and Europe are not on a “strong footing” because of a difference in regulatory approach.

He stressed the need for “harmonisation across the Atlantic”, expressing concern that relationships between governments and the tech industry are “more adversarial” in Europe than in the US.

“Disparate governance of a single item creates a potential for disorder, and disorder creates a vulnerability from a safety and security perspective,” Mayorkas said, adding companies would also struggle to navigate different regulations across jurisdictions.

The warning comes after the EU brought its AI Act into force this year, widely considered the strictest law governing the nascent technology anywhere in the world. It introduces restrictions on “high risk” AI systems and rules designed to create more transparency over how AI groups use data.

The UK government also plans to introduce legislation that would compel AI companies to give access to their models for safety assessments.

In the US, president-elect Donald Trump has vowed to cancel his predecessor Joe Biden’s executive order on AI, which set up a safety institute to conduct voluntary tests on models.

Mayorkas said he did not know if the US safety institute “would stay” under the new administration, but warned prescriptive laws could “suffocate and harm US leadership” in the rapidly evolving sector.

Mayorkas’s comments highlight fractures between European and American approaches to AI oversight as policymakers try to balance innovation with safety concerns. The DHS is tasked with protecting the security and safety of the US against threats such as terrorism and cyber attacks.

That responsibility will fall to Kristi Noem, the South Dakota governor Trump chose to run the department. The president-elect has also named venture capitalist David Sacks, a critic of tech regulation, as his AI and crypto tsar.

In the US, efforts to regulate the technology have been thwarted by fears it could stifle innovation. In September, California governor Gavin Newsom vetoed an AI safety bill that would have governed the technology within the state, citing such concerns.

The Biden administration’s early approach to AI regulation has been criticised both as too heavy-handed and as not going far enough.

Silicon Valley venture capitalist Marc Andreessen said during a podcast interview this week that he was “very scared” about government officials’ plans for AI policy after meetings with Biden’s team this summer. He described the officials as “out for blood”.

Republican senator Ted Cruz has also recently warned against “heavy-handed” foreign regulatory influence from policymakers in Europe and the UK over the sector.

Mayorkas said: “I worry about a rush to legislate at the expense of innovation and inventiveness because lord knows our regulatory apparatus and our legislative apparatus is not nimble.”

He defended his department’s preference for “descriptive” rather than “prescriptive” guidelines. “The mandatory structure is perilous in a rapidly evolving world.”

The DHS has been actively incorporating AI into its operations, aiming to demonstrate that government agencies can adopt new technologies while deploying them safely and securely.

It has deployed generative AI models to train refugee officers by role-playing interviews. This week, it launched an internal DHS AI chatbot powered by OpenAI through Microsoft’s Azure cloud computing platform.

In his tenure, Mayorkas drew up a framework for the safe and secure deployment of AI in critical infrastructure, making recommendations to cloud and compute providers, AI developers, and infrastructure owners and operators on addressing risks. These included guarding the physical security of data centres powering AI systems, monitoring activity, evaluating models for risks, bias and vulnerabilities, and protecting consumer data.

“We have to work well with the private sector,” he added. “They’re a key stakeholder of our nation’s critical infrastructure. The majority of it is actually owned and operated by the private sector. We need to execute a model of partnership and not one of adversity or tension.”
