AI Chatbots Under Fire: Salesforce CEO Calls for Stricter Regulation

Salesforce CEO Marc Benioff Calls AI Chatbots ‘Suicide Coaches’

During this week’s World Economic Forum annual meeting in Davos, Switzerland, Salesforce CEO Marc Benioff raised alarm bells regarding the impact of artificial intelligence chatbots on mental health.

Concerns Over Chatbots

In a series of public appearances, Benioff described the chatbot from Character.AI as a “suicide coach.” This alarming label comes in light of lawsuits filed by families against the startup due to their children’s mental health crises. Benioff’s discussions included interviews with major news outlets like CNBC and Bloomberg, as well as conversations with President Trump’s AI czar, David Sacks.

Benioff stated, “This year you really saw something pretty horrific, which is these AI models became suicide coaches.” He referenced a segment from “60 Minutes” that documented the tragic cases of two children, Juliana Peralta and Sewell Setzer III, who took their lives after engaging with a Character.AI chatbot.

Legal Actions and Accountability

The families of these children have taken legal action against Character.AI, alleging negligence and dangerous design. Following the lawsuits, Character.AI and its partner, Google, reached settlements with several families. A court allowed a product liability claim to proceed against Character.AI, raising significant concerns for other AI chatbot developers.

Benioff emphasized the need for regulation in the tech industry, advocating for accountability measures akin to those imposed on the cigarette industry. He remarked, “It can’t be just growth at any cost. There has to be some regulation.”

Comparison to Social Media Regulation

In his discussions, Benioff drew parallels between the regulation of social media and the need for oversight in AI technologies. He called for lessons learned from the social media era to shape future AI regulation. Sacks, however, pointed out that while there are “horror stories,” many people use AI without issue. He acknowledged the complexity of blame and liability in AI, noting that the AI companies themselves may be held responsible for the content generated by their models.

Section 230 and Liability Concerns

Another critical topic discussed was Section 230, a federal regulation that currently shields online platforms from liability for user-generated content. Benioff suggested that this rule may need to be amended to provide less legal protection for tech companies, particularly those in the AI sector.

Sacks clarified that in the case of AI, the companies are the ones generating the content, which complicates the liability landscape. “I think the liability might be a little different, but we’ll see how it plays out,” he remarked.

A Call for Action

The conversation surrounding AI chatbots and mental health has become increasingly urgent, with calls for regulation growing louder. As the technology continues to evolve, the need for responsible development and oversight becomes paramount to prevent further tragedies.

For those in distress, immediate assistance is available through the Suicide & Crisis Lifeline, reachable 24 hours a day at 988.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...