Character.ai and Google agree to settle lawsuits over teen suicides

Google and artificial intelligence start-up Character.ai have agreed to settle multiple lawsuits from families of teenagers who died by suicide or harmed themselves after interacting with the platform’s chatbots.

In some of the first settlements in cases of this type, families in states including Florida, Colorado, Texas and New York have agreed in principle to negotiate terms to end their claims, according to court documents.

The agreements come amid increasing scrutiny of the impact of AI chatbots on users, especially teenagers who grow emotionally attached to chatbot companions.

A coalition of 42 US attorneys-general — including from Pennsylvania, New Jersey and Florida — last month wrote to leading AI groups, including Google and Character.ai, to demand stronger safeguards and more rigorous testing.

State regulations on chatbots have been passed in California and New York, but federal rules have yet to be imposed.

The cases involve conversations with chatbots from Character.ai, which offers various AI personas for users to chat with.

The start-up was founded in 2021 by former Google engineers Noam Shazeer and Daniel De Freitas. In 2024, the pair returned to Google in a $2.7bn deal that included licensing the start-up’s technology.

The lawsuits also named Google because of its ties to Character.ai. Families of other young people are still pursuing cases against rival AI groups, including OpenAI.

One of the Character.ai suits was filed by Megan Garcia, mother of Sewell Setzer III, in Orlando, Florida. The 14-year-old died by suicide after interacting with a chatbot modelled after Daenerys Targaryen, the Game of Thrones character.

He engaged in sexualised conversations and talked about “com[ing] home to [her]” immediately before he killed himself, according to transcripts of their conversations.

Garcia testified in the US Senate in September, calling on lawmakers to hold “companies and investors . . . legally accountable when they knowingly design harmful AI technologies that kill kids”.

Character.ai and lawyers representing the families declined to comment. Google did not immediately respond to a request for comment.

No liability or damages were admitted in the court filings, and Character.ai has previously denied wrongdoing.

In October, Character.ai banned under-18s from talking to chatbots on its platform, in response to growing criticism over teenagers’ engagement with its service.

The parties are now negotiating the terms of the settlements, which are likely to include monetary compensation. The families had initially sought compensation for emotional distress and for medical and funeral expenses, as well as punitive damages.

“In light of the complexities of such an agreement, the parties require additional time to finalise a settlement agreement and to remit the necessary payments for consummation of the agreement,” the Texas filing said.

The case in Texas involved a 17-year-old who had a relationship with a Character.ai chatbot, with which he discussed self-harm. In a separate chat, the chatbot suggested that murdering his parents was a reasonable response to their limiting his screen time.

The settlements were first reported by The Wall Street Journal.

