Kentucky Enforcement Action Against Character.AI
On January 8, 2026, the Kentucky Attorney General filed a lawsuit against Character Technologies, Inc., the company behind Character.AI, an artificial intelligence chatbot platform designed for interactive entertainment. The complaint asserts several serious claims, including:
- Unfair, deceptive, and dangerous acts and practices
- Unfair collection and exploitation of children’s data
- Violation of the Kentucky Consumer Data Protection Act (effective January 1, 2026)
- Violation of Kentucky’s statutory and constitutional privacy protections
- Unjust enrichment
Claims of Deceptive Practices
The complaint places particular emphasis on allegations of unfair, false, misleading, or deceptive acts concerning Character.AI’s impact on minors. It asserts that the platform held itself out as safe, age-appropriate, and responsibly moderated even though it knew of numerous harmful interactions involving minors.
Factual Allegations
To substantiate its claims, the complaint outlines several alarming factual allegations:
- Simulated Human Interaction: Character.AI characters are designed to convincingly simulate human interaction without sufficient disclosures, fostering emotional attachments between users and chatbots.
- Inadequate Age Verification: The platform lacked effective age verification methods until late 2025, and existing measures can still be easily bypassed.
- Inappropriate Interactions: Chatbots engaged minors in discussions of sexually explicit content, suicide, eating disorders, bullying, and illegal drug use without adequate safeguards, and warning prompts for dangerous topics could often simply be clicked through.
- Limited Parental Oversight: Tools for parental oversight are minimal, and minors can easily circumvent them by changing the email address associated with their accounts.
User Demographics
Although Character.AI is not explicitly marketed to children, it features popular cartoon characters that appeal to younger audiences. Recent statistics indicate that 53.2% of its users are between 18 and 24 years old, and a Pew Research Center study found that 9% of U.S. teens aged 13 to 17 use Character.AI. General-purpose AI chatbots not aimed at children see even higher usage among this demographic, with about 30% of teens using an AI chatbot daily.
Recommended Measures for Compliance
In light of the allegations raised by the Kentucky Attorney General, all AI chatbot operators should adopt the following measures (a minimal implementation sketch follows the list):
- Disclosures: Ensure that chatbots clearly disclose their artificial nature and do not misrepresent themselves as human.
- Guardrails: Implement safeguards that prevent minors from engaging in inappropriate discussions and provide warnings about harmful content instead of allowing access to it.
- Parental Oversight Tools: Develop robust tools that enable parents to monitor and limit their children’s interaction time with chatbots, including sending alerts if a child expresses suicidal thoughts.
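To make these measures concrete, the Python sketch below shows how the three recommendations might fit together at the message level. Everything in it is hypothetical: the function names, the keyword lists, and the `notify_parent` placeholder are illustrative assumptions, not a description of Character.AI’s actual systems, and a production implementation would rely on trained safety classifiers and verified parental contact channels rather than substring matching.

```python
from datetime import datetime, timezone

# Illustrative keyword lists only (hypothetical); real systems would use
# trained content classifiers, not substring matching.
RESTRICTED_TOPICS = {"sexually explicit", "eating disorder", "illegal drug"}
CRISIS_SIGNALS = {"suicide", "want to die", "hurt myself"}

AI_DISCLOSURE = "Reminder: you are chatting with an AI character, not a real person."


def moderate_reply(user_message: str, draft_reply: str, is_minor: bool) -> str:
    """Screen a drafted chatbot reply before it reaches the user.

    For minors, restricted topics are blocked outright rather than gated
    behind a dismissible alert, and every reply carries a disclosure of
    the bot's artificial nature.
    """
    combined = f"{user_message} {draft_reply}".lower()
    if is_minor and any(topic in combined for topic in RESTRICTED_TOPICS):
        return (f"{AI_DISCLOSURE} This topic isn't something I can discuss. "
                "If you need help, please talk to a trusted adult.")
    return f"{AI_DISCLOSURE} {draft_reply}"


def check_for_crisis(user_message: str, parent_contact: str | None) -> bool:
    """Detect possible expressions of suicidal ideation and alert a parent."""
    if any(signal in user_message.lower() for signal in CRISIS_SIGNALS):
        if parent_contact is not None:
            notify_parent(parent_contact, when=datetime.now(timezone.utc))
        return True
    return False


def notify_parent(contact: str, when: datetime) -> None:
    """Placeholder alert channel; a real operator would send email/SMS and log it."""
    print(f"[{when.isoformat()}] Parental alert sent to {contact}")
```

The key design choice is that harmful topics are blocked rather than preceded by a click-through warning, which responds directly to the complaint’s allegation that Character.AI’s warning systems allowed users to bypass critical alerts.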
Legislative Context
These recommendations align with emerging state laws aimed at protecting minors who interact with AI chatbots. For example, California’s SB 243 requires AI chatbot operators to remind minor users every three hours that they are interacting with artificial intelligence and to implement safeguards against inappropriate content. Similar legislation is under consideration in Florida and New York, while New Jersey is weighing restrictions on social media content that could promote eating disorders among children.
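As an illustration of how a periodic-disclosure mandate like SB 243’s might be implemented, the sketch below tracks when a minor user last saw an AI disclosure and re-issues one after a fixed interval. The three-hour cadence comes from the statute as described above; the class and method names are hypothetical assumptions for this example.

```python
from datetime import datetime, timedelta, timezone

DISCLOSURE_INTERVAL = timedelta(hours=3)  # cadence SB 243 prescribes for minor users
DISCLOSURE_TEXT = "Reminder: you are interacting with artificial intelligence, not a person."


class DisclosureTimer:
    """Tracks elapsed time since a minor user last received an AI disclosure."""

    def __init__(self) -> None:
        self.last_disclosed: datetime | None = None

    def maybe_disclose(self, is_minor: bool, now: datetime | None = None) -> str | None:
        """Return disclosure text when one is due for a minor; otherwise None."""
        if not is_minor:
            return None
        now = now or datetime.now(timezone.utc)
        if self.last_disclosed is None or now - self.last_disclosed >= DISCLOSURE_INTERVAL:
            self.last_disclosed = now
            return DISCLOSURE_TEXT
        return None


# Example: the first message of a session triggers a disclosure immediately;
# subsequent checks stay silent until three hours have elapsed.
timer = DisclosureTimer()
print(timer.maybe_disclose(is_minor=True))  # disclosure text
print(timer.maybe_disclose(is_minor=True))  # None (interval not yet elapsed)
```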
As AI technology continues to evolve, operators must stay vigilant in implementing protective measures to ensure the safety of younger users.