Profit vs humanity: AI’s corporate governance debate

Welcome back.

The struggle to balance profit and purpose is tough for many business leaders. It must be especially tricky when you think your industry might end up causing the extinction of the human race.

How are artificial intelligence companies handling this delicate dynamic? Read on and let us know your take at moralmoneyreply@ft.com.

Corporate governance

AI start-ups weigh profit vs humanity

Perhaps no entrepreneurs in history have been so certain of the world-shaking potential of their work as the current crop of AI pioneers. To reassure the public — and perhaps themselves — some of the sector’s leading players have developed unusual governance structures that would supposedly restrain them from putting commercial gain above the good of humanity.

But it’s far from clear that these systems will prove fit for purpose when those two priorities clash. And the tensions are already proving hard to handle, as we can see from recent developments at OpenAI, the world’s most high-profile and highly valued AI start-up. It’s a complex saga, but one that gives a vital window into a corporate governance debate with massive implications.

OpenAI was founded by a group including entrepreneur Sam Altman in 2015 as a non-profit research entity, funded by donations from the likes of Elon Musk, with a mission to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return”. But after a couple of years, Altman concluded that the mission would require more expensive computing power than could be funded through philanthropy alone.

So in 2019 OpenAI set up a for-profit business, with a unique structure. Commercial investors — among whom Microsoft became easily the biggest — would have caps imposed on their profits, with all earnings above that level flowing to the non-profit. Crucially, the non-profit’s board would keep control over the for-profit’s work, with the humanity-focused mission taking priority over investor returns.

“It would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation,” investors were told. Yet Microsoft and other investors proved willing to supply the funding that enabled OpenAI to stun the world with the launch of ChatGPT.

More recently, however, investors have been expressing unease with the set-up — notably Japan’s SoftBank, which has pushed for a structural shake-up.

In December, OpenAI moved to address these concerns with a restructuring plan that, while innocuously worded, would have gutted that restrictive governance structure. The non-profit would no longer have control over the for-profit business. Instead, it would rank as a voting shareholder alongside the other investors, and would use its eventual income from the business to “pursue charitable initiatives in sectors such as healthcare, education, and science”.

The plan prompted a devastating open letter from various AI luminaries, urging government officials to take action over what they said was a breach of OpenAI’s self-imposed legal constraints. Crucially, they noted, the December plan would have done away with the “enforceable duty owed to the public” to ensure AI benefits humanity, which had been baked into the organisation’s legal structure from the outset.

This week, OpenAI published a revised plan that addresses many of the critics’ concerns. The key climbdown is over the power of the non-profit board, which will retain overall control of the for-profit business. OpenAI plans to push ahead, however, with the removal of profit caps for its commercial investors.

It remains to be seen whether this compromise is enough to satisfy investors like Microsoft and SoftBank. In any case, OpenAI can reasonably claim to have maintained much tougher constraints on its work than arch-rival DeepMind. When the London-based company sold out to Google in 2014, its founders secured a promise that its work would be overseen by a legally independent ethics board, as Parmy Olson recounts in her book Supremacy. But that plan was soon dropped. “I think we probably had slightly too idealistic views,” DeepMind co-founder Demis Hassabis told Olson.

Some early-stage idealism is still to be found at Anthropic, a start-up founded in 2021 by OpenAI employees who were already worried about that organisation’s drift from its founding mission. Anthropic has created an independent five-person “Long-Term Benefit Trust” with a mandate to promote the interest of humanity at large. Within four years, the trust will have the power to appoint a majority of Anthropic’s board.

Anthropic is structured as a public benefit corporation, meaning its directors are legally required to consider the interests of society as well as shareholders. Musk’s xAI is also a PBC, and OpenAI’s for-profit business will become one under the proposed restructuring.

In practice, however, the PBC structure imposes few real constraints. Only significant shareholders — not members of the public — can take action against such companies for breaching their obligations to wider society.

And while the preservation of the non-profit body’s control at OpenAI might look like a major win for AI safety advocates, it’s worth remembering what happened in November 2023. After the board fired Altman over concerns about his adherence to OpenAI’s guiding principles, it faced a staff and investor rebellion that ended with Altman’s return and the exit of most of the directors.

In short, the power of the non-profit board, with its duty to humanity, was put to the test — and shown to be minimal.

Two of those departed OpenAI directors warned in an Economist op-ed last year that AI start-ups’ self-imposed constraints “cannot reliably withstand the pressure of profit incentives”.

“For the rise of AI to benefit everyone, governments must begin building effective regulatory frameworks now,” Helen Toner and Tasha McCauley wrote.

The EU has made a strong start on that front with its landmark AI Act. In the US, however, tech figures such as Marc Andreessen have made major headway with their campaign against AI regulation, and the Trump administration has signalled little appetite for tight controls.

The case for regulation is strengthened by the growing evidence of AI’s potential to worsen racial and gender inequality in the labour market and beyond. The long-run risks presented by increasingly powerful AI could prove still more serious. Many of the sector’s leading figures — including Altman and Hassabis — signed a 2023 statement warning that “mitigating the risk of extinction from AI should be a global priority”.

If the AI leaders were deluded about the power of their inventions, there might be no need to worry. But as investment in this field continues to mushroom, that would be a rash assumption.

Smart reads

Danger zone Global warming exceeded the 1.5C threshold in 21 of the past 22 months, new data showed.

Pushing back US officials are calling on the world’s financial authorities to scale back a flagship climate risk project under the Basel Committee on Banking Supervision.
