New Jersey Moves to Ban AI in Mental Health Therapy

A new bill proposed in New Jersey seeks to prohibit the use of artificial intelligence (AI) as a substitute for licensed mental health professionals. This legislative move comes amid growing concerns regarding the safety and efficacy of AI in providing mental health support.

Legislative Background

On June 17, 2025, the Assembly Science, Innovation and Technology Committee in New Jersey advanced a measure that would ban AI-driven therapy offered as a substitute for licensed care, with violations enforceable under the state’s Consumer Fraud Act. Companies that violate the law would face fines of up to $10,000 for a first offense and up to $20,000 for subsequent violations.

The Rise of AI in Mental Health

The legislation arrives at a time when many individuals are turning to AI chatbots for mental health advice and treatment. These chatbots offer a low-cost alternative that does not require insurance, but the trend raises important questions about the quality of care such technologies provide.

According to a report from Mental Health America, approximately one in three people in the U.S. reside in areas with a shortage of mental health professionals, with low-income communities and communities of color being disproportionately affected. This gap in available resources has led some to consider AI-based solutions as a potential remedy.

Concerns About AI Therapy

Despite the accessibility that AI therapy chatbots offer, they carry inherent risks. A study from Stanford University found that while these chatbots can make therapy more accessible, they do not match the effectiveness of human therapists. The study also found that the chatbots showed greater stigma toward conditions such as alcoholism and schizophrenia than toward conditions like depression.

Regulatory Environment

The proposed bill signals a proactive effort by state lawmakers to regulate AI technologies and mitigate potential harms. However, the initiative could face obstacles at the federal level: President Donald Trump has proposed a “big beautiful bill” that includes a 10-year moratorium on state and local AI laws, which, if enacted, would undercut state-level efforts to manage AI’s impact on mental health.

Previous Legislative Actions

New Jersey’s legislative body has already taken steps to regulate AI technologies. In April, Governor Phil Murphy signed a law criminalizing the creation and dissemination of deceptive AI-generated media, commonly referred to as deepfakes, with penalties of up to five years in prison for offenders. At least 20 states enacted similar laws regulating political deepfakes in the previous year, indicating a growing trend toward tighter control of AI technologies.

Conclusion

The proposed ban on AI therapy in New Jersey reflects a critical evaluation of technology’s role in mental health care. While AI may make support more accessible for many, the potential risks and ethical concerns demand careful regulation. As the debate continues, the mental health field must balance innovation with the imperative to provide safe and effective care.
