Decoding the Regulation of Health AI Tools

A new report from the Bipartisan Policy Center examines the complex regulatory landscape for health AI tools that operate outside the jurisdiction of the FDA. As AI becomes more integrated into healthcare, the report highlights the challenges and opportunities for responsible innovation amidst a patchwork of federal rules and state laws.

Texas Takes the Lead: New AI Governance Law Unveiled

The Texas Responsible Artificial Intelligence Governance Act (TRAIGA), passed on May 31, 2025, establishes disclosure requirements for AI developers and deployers while outlining prohibited uses of AI and civil penalties for violations. The law takes effect on January 1, 2026, as part of a growing trend among states to legislate on artificial intelligence.

Texas Enacts Groundbreaking AI Governance Law

On June 22, 2025, Texas Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law. The enacted version departs significantly from the original draft, which proposed strict regulations on “high-risk” AI systems; instead, it restricts certain AI practices and establishes a regulatory sandbox program for the development and testing of AI technologies.

G7 Summit Fails to Address Urgent AI Governance Needs

At the recent G7 summit in Canada, discussions primarily focused on economic opportunities related to AI, while governance issues for AI systems were notably overlooked. This shift towards prioritizing AI innovation and competition raises concerns about the risks associated with advanced AI capabilities that no single nation can manage alone.

Africa’s Bold Move Towards Sovereign AI Governance

At the Internet Governance Forum (IGF) 2025 in Oslo, African leaders called for urgent action to develop sovereign, ethical AI systems tailored to local needs, emphasizing fairness, transparency, and inclusion. With over 1,000 African startups relying on foreign AI models, growing concerns about digital dependency underscore the need for transparent governance frameworks and homegrown AI development.

Top 10 Compliance Challenges in AI Regulations

As AI technology advances, the challenge of establishing effective regulations becomes increasingly complex, with different countries adopting varying approaches. This regulatory divergence poses significant compliance challenges for multinational companies deploying AI systems across borders.

China’s Unique Approach to Embodied AI

China’s approach to artificial intelligence emphasizes the development of “embodied AI,” which interacts with the physical environment, leveraging the country’s strengths in manufacturing and infrastructure. This contrasts with the U.S. focus on cloud-based intelligence, leading to diverging models of AI development and potential implications for global technological standards.

Workday Sets New Standards in Responsible AI Governance

Workday has received dual third-party accreditations for its AI Governance Program, underscoring its commitment to responsible and transparent AI. Dr. Kelly Trindle, Chief Responsible AI Officer, said the recognition affirms Workday’s leadership in the critical area of AI governance.

AI Ethics Amid US-China Tensions: A Call for Global Standards

As the US-China tech rivalry intensifies, a UN agency is advocating for global AI ethics standards, highlighted during UNESCO’s Global Forum on the Ethics of Artificial Intelligence in Bangkok. Despite the absence of major AI companies and delegations from the US and China, over 1,000 participants discussed the importance of collaborative frameworks to ensure AI serves the collective good.