January 2026 Brings a New Phase of AI Rules Across the United States, Europe, and China
As 2026 begins, governments in the United States, the European Union, and China are rolling out or refining policies that will reshape how artificial intelligence is developed and used, creating what many companies now see as a far more demanding global regulatory climate. Firms that rely on AI for decisions in areas such as lending, housing, healthcare, and employment are entering a period of heightened legal and operational risk.
United States: State-Level Pressure
In the United States, the most immediate pressure is coming from the states rather than Washington. Lawmakers are focusing on what they call “high-risk” or “consequential” uses of AI, meaning systems that can significantly affect people’s lives. California is leading this effort through regulations adopted under the California Consumer Privacy Act, which require businesses using automated decision-making technology (ADMT) to give consumers advance notice, allow them to opt out, and explain how those systems are used. Although enforcement does not begin until January 1, 2027, companies are already being urged to prepare.
Similarly, the Colorado AI Act is set to take effect on June 30, 2026. It will require AI developers and deployers to take reasonable steps to prevent algorithmic discrimination, maintain formal risk-management programs, issue notices, and conduct impact assessments. However, lawmakers are expected to revisit the statute during the current legislative session, so its final form could still change before that date.
State attorneys general are also becoming more aggressive. Scrutiny of AI-related practices increased sharply in 2025 and is expected to remain intense this year. In Pennsylvania, for instance, the attorney general announced a May 2025 settlement with a property management company accused of using an AI system in ways that contributed to unsafe housing and delayed repairs. In Massachusetts, the attorney general reached a $2.5 million settlement in July 2025 with a student loan company over claims that its AI-driven lending practices unfairly disadvantaged historically marginalized borrowers.
Cybersecurity Concerns
Cybersecurity has emerged as another major front. AI-powered tools are now being used by both companies and criminals, raising the stakes for data protection and operational resilience. The Securities and Exchange Commission’s Division of Examinations has stated that cybersecurity and operational resiliency, including AI-driven threats to data integrity and risks from third-party vendors, will be a priority in fiscal year 2026. Companies may also face new expectations around how boards disclose their oversight of AI governance as part of managing material cyber risks.
Europe: Implementing the AI Act
Across the Atlantic, the European Union is grappling with how to put its landmark AI Act into practice. The European Commission is set to miss the law’s February 2 deadline for releasing guidance on Article 6, which determines whether an AI system is considered “high-risk” and therefore subject to tougher compliance and documentation rules. The Commission is still integrating months of feedback and plans to release a new draft of the high-risk guidelines for further consultation by the end of January, with final adoption possibly coming in March or April.
This uncertainty has fueled debate over whether parts of the AI Act should be delayed. Enforcers and companies alike have warned that they are not ready to implement the most complex provisions, even though the law entered into force in August 2024. This argument underpins the Commission’s proposed Digital Omnibus package on AI, which would narrow what counts as a high-risk use and delay the corresponding obligations by up to 16 months.
During a January 26 hearing of the European Parliament’s civil liberties committee, European Commission Deputy Director-General Renate Nikolay explained why more time is needed, stating, “These standards are not ready, and that’s why we allowed ourselves in the AI omnibus to give us a bit more time to work on either guidelines or specification or standards, so that we can provide this legal certainty for the sector, for the innovators.”
China: Balancing Speed and Control
In China, the focus is less on delays and more on balancing speed with control. In late January, President Xi Jinping addressed senior Communist Party officials, portraying artificial intelligence as a transformative force akin to the steam engine, electricity, and the internet. He warned that China must not let the technology “spiral out of control” and urged leaders to act early and decisively to prevent problems.
The government wants AI to drive economic growth while also preserving social stability and the party’s authority. Chinese AI companies are being pushed to innovate quickly while complying with an expanding web of rules. For example, when Zhipu AI, a fast-growing developer of large language models, filed for a Hong Kong listing in December, it cautioned investors about the heavy burden of meeting multiple AI-related regulations.
Conclusion
The developments in January 2026 illustrate how fragmented and demanding the global AI rulebook is becoming. In the United States, state-level laws and enforcement actions are setting the pace. In Europe, regulators are still negotiating how to apply a sweeping new framework. In China, the government is attempting to harness AI’s economic power without losing political control.