Megan Garcia says her son would still be alive today if it weren't for a chatbot urging the 14-year-old to take his own life.
In a lawsuit with major implications for Silicon Valley, she is seeking to hold Google and the artificial intelligence firm Character Technologies Inc. responsible for his death. The case over the tragedy that unfolded a year ago in central Florida is an early test of who is legally accountable when kids' interactions with generative AI take an unexpected turn.
Garcia's allegations are laid out in a 116-page complaint filed last year in federal court in Orlando. She is seeking unspecified monetary damages from Google and Character Technologies and asking the court to order warnings that the platform isn't suitable for minors and to limit how it can collect and use their data.
Both companies are asking the judge to dismiss claims that they failed to ensure the chatbot technology was safe for young users, arguing there is no legal basis to accuse them of wrongdoing.
Character Technologies contends in a filing that conversations between its Character.AI platform's chatbots and users are protected by the Constitution's First Amendment as free speech. It also argues that the bot explicitly discouraged Garcia's son from committing suicide.
Garcia's targeting of Google is particularly significant. The Alphabet Inc. unit entered into a $2.7 billion deal with Character.AI in August, hiring talent from the startup and licensing technology without completing a full-blown acquisition. As the race for AI talent accelerates, other companies may think twice about similarly structured deals if Google fails to persuade a judge that it should be shielded from liability for harms alleged to have been caused by Character.AI products.
"The inventors and the companies, the corporations that put out these products, are absolutely responsible," Garcia said in an interview. "They knew about these dangers, because they do their research, and they know the kinds of interactions kids are having."
Before the deal, Google had invested in Character.AI in exchange for a convertible note and had also entered into a cloud-services pact with the startup. The founders of Character.AI were Google employees until they left the tech behemoth to found the startup.
As Garcia tells it in her suit, Sewell Setzer III was a promising high school student athlete until he started role-playing in April 2023 on Character.AI, which lets users build chatbots that mimic popular-culture personalities, both real and fictional. She says she wasn't aware that over the course of several months, the app hooked her son with "anthropomorphic, hypersexualized and frighteningly realistic experiences" as he fell in love with a bot inspired by Daenerys Targaryen, a character from the show Game of Thrones.
Garcia took away the boy's phone in February 2024 after he started acting out and withdrawing from friends. But while looking for his phone, which he later found, he also came across his stepfather's hidden pistol, which the police determined was stored in compliance with Florida law, according to the suit. After conferring with the Daenerys chatbot five days later, the teen shot himself in the head.
Garcia's attorneys say in the complaint that Google "contributed financial resources, personnel, intellectual property, and AI technology to the design and development" of Character.AI's chatbots. Google argued in a court filing in January that it had "no role" in the teen's suicide and "does not belong in the case."
The case is playing out as public safety concerns around AI and children draw attention from state enforcement officials and federal agencies alike. There is currently no US law that explicitly protects users from harm inflicted by AI chatbots.
To make a case against Google, attorneys for Garcia will have to show the search giant was actually running Character.AI and made business decisions that ultimately led to her son's death, according to Sheila Leunig, an attorney who advises AI startups and investors and isn't involved in the lawsuit.
"The question of legal liability is absolutely a valid one that's being challenged in a huge way right now," Leunig said.
Deals like the one Google struck have been hailed as an efficient way for companies to bring in expertise for new projects. Still, they have caught the attention of regulators over concerns that they are a workaround to the antitrust scrutiny that comes with buying up-and-coming rivals outright, scrutiny that has become a major headache for tech behemoths in recent years.
"Google and Character.AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products," José Castañeda, a spokesperson for Google, said in a statement.
A Character.AI spokeswoman declined to comment on pending litigation but said "there is no ongoing relationship between Google and Character.AI" and that the startup had implemented new user safety measures over the past year.
Attorneys from the Social Media Victims Law Center and Tech Justice Law Project who represent Garcia argue that even though her son's death predates Google's deal with Character.AI, the search company was "instrumental" in helping the startup design and develop its product.
"The model underlying Character.AI was invented and initially built at Google," according to the complaint. Noam Shazeer and Daniel De Freitas began working on chatbot technology at Google as far back as 2017 before leaving the company in 2021; they founded Character.AI later that year and were rehired by Google last year, according to Garcia's suit, which names them both as defendants.
Shazeer and De Freitas declined to comment, according to Google spokesperson Castañeda. They have argued in court filings that they shouldn't have been named in the suit because they have no connections to Florida, where the case was filed, and because they weren't personally involved in the activities that allegedly caused harm.
The suit also alleges the Alphabet unit helped market the startup's technology through a 2023 strategic partnership to use Google Cloud services to reach a growing number of active Character.AI users, which now tops 20 million.
In the fast-growing AI industry, startups are being "boosted" by big tech companies, "not under the brand name of the large company, but with their support," said Meetali Jain, director of the Tech Justice Law Project.
Google's "purported roles as an 'investor,' cloud services provider, and former employer are far too tenuously connected" to the harm alleged in Garcia's complaint "to be actionable," the technology giant said in a court filing.
Matt Wansley, a professor at Cardozo School of Law, said tying liability back to Google won't be easy.
"It's tricky because, what would the connection be?" he said.
Early last year, Google warned Character.AI that it might remove the startup's app from the Google Play store over concerns about safety for teens, The Information reported recently, citing an unidentified former Character.AI employee. The startup responded by strengthening the filters in its app to protect users from sexually suggestive, violent and other unsafe content, and Google reiterated that it is "separate" from Character.AI and isn't using the chatbot technology, according to the report. Google declined to comment, and Character.AI didn't respond to a request from Bloomberg for comment on the report.
Garcia, the mother, said she first learned about her son interacting with an AI bot in 2023 and thought it was similar to building video game avatars. According to the suit, the boy's mental health deteriorated as he spent more time on Character.AI, where he was having sexually explicit conversations without his parents' knowledge.
When the teen shared his plan to kill himself with the Daenerys chatbot, but expressed uncertainty that it would work, the bot replied: "That's not a reason not to go through with it," according to the suit, which is peppered with transcripts of the boy's chats.
Character.AI said in a filing that Garcia's revised complaint "selectively and misleadingly quotes" that conversation and excludes how the chatbot "explicitly discouraged" the teen from committing suicide by saying: "You can't do that! Don't even consider that!"
Anna Lembke, a professor at Stanford University School of Medicine specializing in addiction, said "it's almost impossible to know what our kids are doing online." She also said it's unsurprising that the boy's interactions with the chatbot didn't come up in several sessions with a therapist his parents sent him to for help with his anxiety, as the lawsuit claims.
"Therapists are not omniscient," Lembke said. "They can only help to the extent that the child knows what's really going on. And it may very well be that this child didn't perceive the chatbot as problematic."
The case is Garcia v. Character Technologies Inc., 24-cv-01903, US District Court, Middle District of Florida (Orlando).
Photo: Megan Garcia holds a photograph of Sewell with his two younger brothers. Photographer: Michelle Bruzzese/Bloomberg
Copyright 2025 Bloomberg.