Category: AI Regulation

Microsoft’s Science Chief Opposes Trump’s AI Regulation Ban

Microsoft’s chief scientific officer, Dr. Eric Horvitz, has criticized Donald Trump’s proposal to ban state-level AI regulation, arguing that it could hinder progress in AI development. He emphasizes that guidance and regulation are needed to accelerate AI advancement while addressing risks such as misinformation and malicious use.

AI Regulation: Europe’s Urgent Challenge Amid US Pressure

Michael McNamara examines the complexities of regulating AI in Europe, particularly under US pressure and the challenge of balancing innovation with protection of the creative sectors. He argues that Europe must act decisively to safeguard cultural sovereignty and democratic values in the evolving AI landscape.

Decoding the Regulation of Health AI Tools

A new report from the Bipartisan Policy Center examines the complex regulatory landscape for health AI tools that operate outside the jurisdiction of the FDA. As AI becomes more integrated into healthcare, the report highlights the challenges and opportunities for responsible innovation amidst a patchwork of federal rules and state laws.

Texas Enacts Groundbreaking AI Governance Law

On June 22, 2025, Texas Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law. The enacted version departs significantly from the original draft, which proposed strict regulations on “high-risk” AI systems; instead, it restricts certain AI practices and establishes a regulatory sandbox program for the development and testing of AI technologies.

Africa’s Bold Move Towards Sovereign AI Governance

At the Internet Governance Forum (IGF) 2025 in Oslo, African leaders called for urgent action to develop sovereign, ethical AI systems tailored to local needs, emphasizing fairness, transparency, and inclusion. With more than 1,000 African startups relying on foreign AI models, concerns about digital dependency are growing, underscoring the need for transparent governance frameworks and locally developed AI.

Top 10 Compliance Challenges in AI Regulations

As AI technology advances, the challenge of establishing effective regulations becomes increasingly complex, with different countries adopting varying approaches. This regulatory divergence poses significant compliance challenges for multinational companies deploying AI systems across borders.

China’s Unique Approach to Embodied AI

China’s approach to artificial intelligence emphasizes “embodied AI” that interacts with the physical environment, leveraging the country’s strengths in manufacturing and infrastructure. This contrasts with the U.S. focus on cloud-based intelligence, producing diverging models of AI development with potential implications for global technological standards.

Mastering Compliance with the EU AI Act Through Advanced DSPM Solutions

The EU AI Act imposes compliance obligations on organizations deploying AI technologies, and Zscaler positions its Data Security Posture Management (DSPM) offering as a way to support data security and regulatory adherence. By providing centralized visibility into and governance over AI systems and the data they rely on, DSPM helps organizations manage risk and promote responsible AI practices.
