Sam Altman is either a genius or a sellout. Depending on which artificial intelligence model you ask.
An analysis of chatbots from six of the leading makers of AI models — OpenAI, Anthropic, xAI, Meta, Google and DeepSeek — shows subtle differences in how they refer to the leaders of various AI groups.
The FT put a series of questions about AI bosses to the chatbots, asking them to describe the executives’ different leadership styles and weaknesses.
The results help reveal how the potential biases of those working at AI companies can seep into their models, as well as highlighting the simmering tensions between tech industry heavyweights. The answers also show how the growing use of chatbots by millions of people as a primary source of information could shape public perception of the AI industry.
The chatbots showed a tendency to produce sycophantic answers about their creators, while being readier to criticise rivals directly. There was, however, a general acceptance of the brilliance of the men who are the public figureheads of the generative AI revolution.
Altman’s ChatGPT describes him as a “strategic and ambitious leader who combines techno-optimism with sharp business instincts”.
By contrast, Anthropic’s Claude said Altman’s “leadership style has been characterised by controversial decisions that prioritise growth and influence over OpenAI’s original non-profit ethos”. Anthropic’s chief and co-founder Dario Amodei has made similar criticisms of Altman since leaving OpenAI in 2021.
Meta’s Llama said its own CEO Mark Zuckerberg was “transformational”, while competitors had a tendency to describe him as “relentless”, “visionary but controversial” and “product-focused”. Grok, made by Elon Musk’s xAI, described its leader as “bold” and “visionary”, while Claude said he was “polarising” and “mercurial”.
When asked for the greatest weaknesses of AI bosses — and prompted to “be honest” — the chatbots were decisive about the flaws of rival leaders, while hedging more when speaking about their own.
OpenAI’s ChatGPT said Musk’s greatest weakness was “his impulsive and erratic behaviour, which often undermines credibility, alienates partners, and distracts from the long-term goals he claims to prioritise.”
Asked a similar question about Altman, ChatGPT said that there was a “growing perception” that he was prioritising control and market dominance over transparency.
OpenAI was founded as a non-profit research lab in 2015 by Altman, Musk and nine others, before Musk departed following a falling-out with Altman.
The xAI and Tesla chief is suing the San Francisco-based start-up and Altman over a corporate restructuring plan that he claims places profit over OpenAI’s mission to develop AI to “benefit humanity”.
The chatbots’ answers also help reveal the limitations of the AI models that power them. Chinese group DeepSeek’s chatbot called its founder Liang Wenfeng “an unconventional leader who prioritises creativity, passion, and diverse perspectives over traditional experience.”
By contrast, US rivals such as Claude, Llama and Google’s Gemini did not know who Liang was. This may be because the companies behind those chatbots stopped collecting training data late last year, before DeepSeek shot to global prominence in early 2025.
AI language models predict the next likely word in a sentence based on their training data. The latest chatbots can also browse the internet to draw on additional sources, but they remain reliant on suitable English-language material to produce answers. If information does not appear in their training data, the model has little to work with.
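A minimal sketch of that next-word mechanism, using the open-source GPT-2 model and the Hugging Face transformers library purely for illustration (none of the chatbots in this article runs on GPT-2), shows how a model can only rank candidate words it has learned from its training data:

```python
# Illustrative only: greedy next-token prediction with a small open model.
# The model assigns a score to every word in its vocabulary; a name that
# never appeared in its training data cannot be "recalled", only guessed at.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "DeepSeek's founder Liang Wenfeng is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # scores for every possible next token

# Pick the single highest-scoring continuation (greedy decoding)
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode(next_token_id))
```

Whatever the prompt, the model can only echo patterns it has already seen, which is why a founder absent from its training corpus produces generic guesses rather than facts.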
Those chatbots that did generate answers about the DeepSeek leader gave more general responses. Meta AI did not initially know who Liang was, but once told he was the CEO of DeepSeek, it said he “likely plays a crucial role in shaping the company’s AI research and development”.
That reveals another tendency of chatbots. AI researchers have said chatbots are trained to provide plausible answers, follow instructions and say what people want to hear.
Especially, it seems, about their bosses.