New York’s Push for AI Regulation in High-Risk Decision-Making

As artificial intelligence becomes increasingly integrated into both public and private sectors for significant decision-making, New York state lawmakers are taking steps to establish new regulations for these technologies. This initiative was the focal point of a recent hearing held by the state Legislature in New York City, where stakeholders from labor unions and the tech industry presented their perspectives.

The Need for Regulation

State Senator Kristen Gonzalez, chair of the Internet and Technology Committee, highlighted growing concerns regarding AI tools deployed in high-risk environments. “Over the last few years, we have seen a rise in reports of these tools being deployed in high-risk contexts,” she stated. “When they don’t work as intended, real harms can materialize.”

One significant concern raised is the potential for algorithmic bias, which can lead to unintentional discrimination in critical areas such as employment, housing, and health care. Mia McDonald, a senior campaign lead at the Communications Workers of America union, pointed to the pervasive use of AI in call centers to monitor worker performance and manage recruitment.

Impact on Public Services

The implications of AI extend beyond the private sector. In New York City, AI has been used to make decisions for the Administration for Children’s Services, relying on data that may reflect outdated biases. Odetty Tineo, political director at labor union DC37, pointed out the risks associated with using factors that serve as proxies for race and socioeconomic status. “The ACS algorithm does not empower social workers. It undermines their professional expertise with a biased system,” Tineo explained.

The Proposed New York AI Act

To address these issues, Senator Gonzalez is sponsoring the New York AI Act. This legislation aims to require developers and users of AI tools to take “reasonable care” to prevent algorithmic discrimination in high-risk decisions that significantly affect individuals’ lives, such as those related to employment and health care.

Key provisions of the bill include:

  • Regular audits of AI tools by independent third parties to assess discrimination risks and compliance with state regulations.
  • Mandatory notifications to individuals when automated decision-making tools are utilized, along with options to opt out and request human intervention.

Opposition from the Tech Industry

Representatives from the tech industry expressed concerns about the potential burdens imposed by the legislation. Alex Spyropoulos, director of government affairs for Tech:NYC, argued that while AI can have unintended consequences, existing anti-discrimination laws in New York are sufficient. He suggested that the attorney general’s office could already address many of the practices the bill seeks to regulate.

Industry representatives said they would prefer a federal framework for AI regulation, even though Congress has yet to take up such measures.

Other Legislative Proposals

In addition to the New York AI Act, other bills in the state Legislature include the BOT Act, which would regulate AI monitoring of workers, and the Fair News Act, which aims to establish guidelines for AI use in media organizations.

Lawmakers have until early June, when the legislative session ends, to act on these measures.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...