Counseling by Chatbot: New Jersey Legislators Act to Ban AI in Therapy

A new bill proposed in New Jersey seeks to prohibit the use of artificial intelligence (AI) as a substitute for licensed mental health professionals. This legislative move comes amid growing concerns regarding the safety and efficacy of AI in providing mental health support.

Legislative Background

On June 17, 2025, New Jersey's Assembly Science, Innovation and Technology Committee advanced a measure that would make offering AI therapy in place of a licensed professional a violation of the state's Consumer Fraud Act. Companies that violate the law would face significant penalties: fines of up to $10,000 for a first offense and up to $20,000 for subsequent violations.

The Rise of AI in Mental Health

The legislation arrives at a time when many individuals are increasingly turning to AI chatbots for mental health advice and treatment. These chatbots present a low-cost and insurance-free alternative for those seeking help. However, this trend raises important questions about the quality of care provided by such technologies.

According to a report from Mental Health America, approximately one in three people in the U.S. reside in areas with a shortage of mental health professionals, with low-income communities and communities of color being disproportionately affected. This gap in available resources has led some to consider AI-based solutions as a potential remedy.

Concerns About AI Therapy

Despite the accessibility that AI therapy chatbots offer, they carry inherent risks. A study conducted by Stanford University found that while these chatbots can make therapy more accessible, they do not match the effectiveness of human therapists. The study also indicated that AI chatbots expressed greater stigma toward conditions such as alcoholism and schizophrenia than toward conditions like depression.

Regulatory Environment

The proposed bill signifies a proactive approach by state lawmakers to regulate AI technologies and mitigate potential harms. However, the initiative could face challenges from broader federal action. Notably, President Donald Trump's proposed "big beautiful bill" includes a 10-year moratorium on state and local AI laws, a provision that would complicate state-level efforts to manage AI's impact on mental health.

Previous Legislative Actions

New Jersey's legislature has already taken steps to regulate AI technologies. In April, Governor Phil Murphy signed a law criminalizing the creation and dissemination of deceptive AI-generated media, commonly referred to as deepfakes, with penalties of up to five years in prison for offenders. At least 20 states enacted similar laws in the previous year to regulate political deepfakes, indicating a growing trend toward tighter control of AI technologies.

Conclusion

The proposed ban on AI in therapy in New Jersey reflects a critical evaluation of the role of technology in mental health. While AI may offer accessible support for many, the potential risks and ethical concerns call for careful oversight and regulation. As the debate continues, the mental health field must balance innovation with the imperative to provide safe and effective care.
