Category: Artificial Intelligence Regulation

Texas Implements Groundbreaking AI Governance Law

Texas has enacted the Responsible Artificial Intelligence Governance Act (TRAIGA), which will take effect on January 1, 2026, introducing new regulations for the use of AI by companies and government entities. The law prohibits certain uses of AI, requires disclosure when AI is used in healthcare, and creates an AI regulatory “sandbox.”

Texas Implements Landmark AI Governance Legislation

Texas has enacted the Responsible Artificial Intelligence Governance Act (TRAIGA), which will take effect on January 1, 2026. This legislation aims to regulate AI technologies, prohibiting their use for harmful purposes and requiring transparency when AI interacts with consumers.

Texas Leads the Way in New AI Governance Model

Texas has enacted the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which introduces a new framework for regulating AI by limiting punitive measures against companies while expanding the state’s investigatory powers. The law focuses on AI uses rather than the models themselves, establishing specific prohibitions and creating a regulatory Sandbox Program for companies to test AI systems.

Colorado’s Groundbreaking AI Law Sets New Compliance Standards

Analysts observe that Colorado’s upcoming AI law, which takes effect on February 1, 2026, stands out for its comprehensive requirements, mandating that businesses adopt risk management programs for high-risk AI systems. The legislation positions Colorado as a leader in state-level AI regulation, in sharp contrast to the more fragmented laws adopted by other states.

States Lead the Charge in AI Regulation

States across the U.S. are rapidly enacting their own AI regulations following the removal of a proposed federal moratorium on state AI laws, producing a fragmented patchwork of rules that businesses must navigate. Key states such as California and Colorado are focusing on issues like algorithmic discrimination and data transparency, presenting both challenges and opportunities for enterprises in the AI sector.

Colorado Sets New Standards for AI Regulation

Colorado’s new AI law, set to take effect on February 1, 2026, requires businesses to implement risk management programs for high-risk AI systems, making it the most comprehensive AI legislation in the U.S. Analysts warn that the compliance requirements could be complex and burdensome for companies operating in multiple states.

AI Governance: Shaping California’s Future

The California Council on Science & Technology is hosting the first public dialogue on the findings of the new AI Frontiers Report, which focuses on AI governance and its implications for national and international standards. This event will feature speakers from prominent institutions and aims to address questions about California’s future in AI development.

AI Regulation: Balancing Innovation and Oversight

Experts discuss the implications of the House-passed H.R. 1, which would pause state and local regulation of artificial intelligence for ten years. The article examines the benefits and drawbacks of AI regulation, highlighting the need for careful oversight that balances innovation against potential social impacts.

Copyright Challenges in the Age of Generative AI

The Data (Use and Access) Act (the “DUA Act”) was enacted in the UK on 19 June 2025, reforming the UK General Data Protection Regulation and addressing copyright issues related to generative AI. The Act’s passage followed extensive parliamentary debate, particularly over the use of copyright works to train AI models without consent.

Europe’s AI Act: A New Era of Regulation for Artificial Intelligence

The European Union’s Artificial Intelligence Act, often referred to as the ‘new GDPR’, has come into effect, marking a significant shift towards the regulation of AI technologies. This comprehensive legislation aims to ensure that AI systems are safe, respect fundamental rights, and promote trustworthy innovation within the EU market.
