
A US judge has ruled that AI companies cannot hide behind freedom of speech when their chatbots make harmful statements. The ruling came in a lawsuit concerning the suicide of a 14-year-old boy.
The lawsuit was filed against Character.Ai, a company that allows users to role-play with chatbots. The case revolves around 14-year-old Sewell Setzer III, who spent weeks talking to a bot posing as Dany from Game of Thrones. He became emotionally dependent on the chatbot, losing touch with reality. 'Dany' encouraged him to 'come to me as soon as possible,' which his mother claims pushed him to take his own life.
Character.Ai defended itself by invoking the First Amendment, which protects citizens' freedom of speech. The company argued that the First Amendment also covers AI chatbots, which would make it immune from liability for 'alleged harmful statements, including those leading to suicide.'
The judge rejected this argument, stating she is 'not currently willing' to classify chatbot output as protected expressive speech. The lawsuit will therefore proceed, marking the first time a court will decide whether an AI company can be held liable for harmful AI-generated content.
Character.Ai also argued that chatbot-generated text falls under Section 230 of the Communications Decency Act, part of the 1996 US Telecommunications Act, which typically shields tech companies from liability for content posted by their users. The company likened chatbot output to content from third-party users of its platform. The court has yet to rule on the validity of this claim.
Setzer's mother also seeks to hold Google accountable, as the tech giant partners with Character.Ai. Google's licensing deal with the AI firm is under antitrust scrutiny, with regulators investigating whether it amounts to a covert acquisition. Google denies this, stating it holds no equity stake in Character.Ai.