If governments and tech industry leaders need to work together to build careful controls around today’s increasingly powerful AI systems, the US Department of Defense has given an object lesson in how not to go about it. Last week, the Pentagon lashed out at Anthropic after the AI company refused to yield full control over how its technology is used. The agency took steps to bar Anthropic from the military’s supply chains. It was the kind of intemperate retaliation that has become typical of US President Donald Trump’s administration.
Rival tech company OpenAI, meanwhile, reached a hurried agreement with the defence department that appeared to bow to the terms that Anthropic had rejected — only to try to put additional guardrails around its technology after a storm of protest. Its chief Sam Altman confessed that rushing in just as Anthropic was being hung out to dry by the government had made his company look “opportunistic and sloppy”.
This episode will not reassure the public that measured consideration is being given to the government’s use of a hugely powerful technology. But at least it has highlighted the urgent need for more competent, democratic accountability.
Anthropic and OpenAI deserve some credit for trying to put limits around their technology, in this case trying to make sure it cannot be used for mass surveillance of Americans or in autonomous weapons that operate beyond human control. Each company’s approach, however, had its weaknesses. Nor can either be a substitute for a more open political process — as both companies have themselves argued.
Fearing that current US laws are not strong enough, Anthropic insisted on retaining a veto over how its AI is used. Bowing to a supplier’s terms of service like this would be hard for any government to stomach, but particularly when it comes to a technology critical to national security. What happens if a company’s leadership or priorities change and previous uses are blocked?
OpenAI, by contrast, appeared to accede to the defence department’s demand to use the technology however it wished, as long as it stays within the law. But days later OpenAI tightened its contract terms to try to make up for potential gaps in the law, and its lawyers are still not sure the contract is watertight.
Whether companies would be willing to stand firm in similar contract disputes is another question. Another weakness of bilateral deals like these is that they leave the Pentagon in a strong position to play AI companies off against one another, as it appeared to do with Anthropic and OpenAI. Anthropic chief Dario Amodei is now reportedly making a last-ditch attempt to strike a deal with the US defence department.
In future cases, with a greater degree of trust and a more temperate approach on all sides, the government and the AI companies could come out with a more workable solution. OpenAI said the Pentagon wanted to set up a broader industry and government working group to consider the issues. But this can never be a substitute for full democratic deliberation. If AI continues to advance at its current pace, democracies are about to come under even greater stress, whether from the disruption to jobs or a tide of AI-generated disinformation.
Only action by Congress can assure appropriate safeguards. In the case of national defence, that means requiring as much transparency as the dictates of security allow, and drawing clear red lines that reflect the new realities of AI systems that are both powerful and, at least for now, not entirely predictable. The public’s collective future security should not depend on the consciences of a handful of tech bosses and military officials.