New York’s AI Legislation: Key Changes Employers Must Know

Q1 2025: New York’s Legislative Landscape for Artificial Intelligence

In the first quarter of 2025, New York State made significant strides toward regulating artificial intelligence (AI), joining states such as Colorado, Connecticut, New Jersey, and Texas. On January 8, 2025, bills aimed at regulating the use of AI decision-making tools were introduced in both the New York State Senate and the Assembly.

The NY AI Act

The proposed NY AI Act (Bill S1169) focuses on curbing algorithmic discrimination by establishing rules for the deployment of certain AI systems, especially in employment contexts. The act gives citizens a private right of action, allowing them to sue technology companies for violations. Additionally, the New York AI Consumer Protection Act (Bill A768) would amend the general business law to prevent AI algorithms from discriminating against protected classes.

Senator Kristen Gonzalez, who introduced the NY AI Act, emphasized that “a growing body of research shows that AI systems that are deployed without adequate testing, sufficient oversight, and robust guardrails can harm consumers and deny historically disadvantaged groups the full measure of their civil rights and liberties.” The act defines consumers broadly, encompassing all New York state residents, including employers and employees.

Key features of the NY AI Act include:

  • A definition of algorithmic discrimination as any unjustified differential treatment based on a protected characteristic.
  • Requirements for deployers—entities utilizing high-risk AI systems—to disclose their use of such systems to consumers five business days in advance.
  • Rights for consumers to opt out of automated decision-making and to obtain meaningful human review of consequential decisions.
  • Mandatory audits of high-risk AI systems before deployment and thereafter every 18 months.

The Protection Act

On the same day, Assembly Member Alex Bores introduced the New York AI Consumer Protection Act (the Protection Act), which shares the NY AI Act's objectives but places particular emphasis on preventing algorithmic discrimination across protected classes. The act requires a bias and governance audit, in which an independent auditor evaluates AI systems for disparate impact on employees based on protected characteristics.

If enacted, the Protection Act would take effect on January 1, 2027 and would require deployers of high-risk AI decision systems to:

  • Notify consumers that such a system is in use and clearly state the purpose of its decision-making role.
  • Implement and maintain a risk management program to mitigate known risks of algorithmic discrimination.

New York City Local Law 144 (Int. No. 1894-A)

While these state-level bills remain pending, New York City employers must already comply with Local Law 144 of 2021 (Int. No. 1894-A), which has been enforced since July 5, 2023. The law protects job candidates from discriminatory bias when automated employment decision tools (AEDTs) are used, requiring bias audits and advance notice to candidates that an AEDT will be used. Penalties for non-compliance range from $500 to $1,500 per violation, with no cap on total civil penalties.
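To make the bias-audit requirement concrete: the rules implementing Local Law 144 center on selection rates and impact ratios, where each category's selection rate is divided by the selection rate of the most-selected category. Below is a minimal Python sketch of that arithmetic; the impact_ratios helper and the sample numbers are illustrative assumptions, not language from the law or an auditor's actual methodology.

    from collections import defaultdict

    def impact_ratios(records):
        """Compute selection rates and impact ratios per category.

        records: iterable of (category, selected) pairs, where selected
        is True if the AEDT selected or advanced the candidate.
        Impact ratio = category selection rate / highest selection rate.
        """
        totals = defaultdict(int)
        chosen = defaultdict(int)
        for category, selected in records:
            totals[category] += 1
            if selected:
                chosen[category] += 1
        rates = {c: chosen[c] / totals[c] for c in totals}
        best = max(rates.values())
        return {c: (rates[c], rates[c] / best) for c in rates}

    # Hypothetical screening outcomes: 48 of 100 male and 38 of 100
    # female candidates advanced by the tool.
    data = ([("male", True)] * 48 + [("male", False)] * 52
            + [("female", True)] * 38 + [("female", False)] * 62)
    for cat, (rate, ratio) in impact_ratios(data).items():
        print(f"{cat}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")

The law itself sets no numeric pass/fail threshold for an impact ratio; the familiar four-fifths benchmark comes from federal EEOC guidance, and an independent auditor would interpret low ratios in context.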

Takeaways for Employers

As New York moves towards stricter regulations around AI, employers should proactively prepare for compliance with existing laws and upcoming legislation. Key recommendations include:

  • Assessing AI Systems: Identify any AI systems in use, particularly those involved in consequential employment decisions.
  • Reviewing Data Management Policies: Ensure data-handling practices comply with applicable data security and protection standards.
  • Preparing for Audits: Familiarize yourself with audit requirements and begin preparing for potential audits of high-risk AI systems.
  • Developing Internal Processes: Establish channels through which employees can report suspected AI system violations.
  • Monitoring Legislation: Stay informed about proposed bills and continuously review federal guidance.

As the landscape for AI legislation evolves, employers must remain vigilant to navigate the complexities of compliance and the ethical implications of AI technologies.
