Category: AI Governance

Texas Takes the Lead: New AI Governance Law Unveiled

The Texas Responsible Artificial Intelligence Governance Act (TRAIGA), passed on May 31, 2025, establishes disclosure requirements for AI developers and deployers while outlining prohibited uses of AI and civil penalties for violations. Part of a growing trend of state AI legislation, the law takes effect on January 1, 2026.

Texas Enacts Groundbreaking AI Governance Law

On June 22, 2025, Texas Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law. The enacted version significantly scales back the original draft, which had proposed strict regulations on “high-risk” AI systems; the final law restricts certain AI practices and establishes a regulatory sandbox program for the development and testing of AI technologies.

G7 Summit Fails to Address Urgent AI Governance Needs

At the recent G7 summit in Canada, discussions focused primarily on the economic opportunities of AI, while governance of AI systems was notably overlooked. This shift toward prioritizing AI innovation and competition raises concerns about risks from advanced AI capabilities that no single nation can manage alone.

Africa’s Bold Move Towards Sovereign AI Governance

At the Internet Governance Forum (IGF) 2025 in Oslo, African leaders called for urgent action to develop sovereign, ethical AI systems tailored to local needs, emphasizing fairness, transparency, and inclusion. The fact that over 1,000 African startups rely on foreign AI models raises concerns about digital dependency, underscoring the need for transparent governance frameworks and local AI development.

China’s Unique Approach to Embodied AI

China’s approach to artificial intelligence emphasizes the development of “embodied AI,” which interacts with the physical environment, leveraging the country’s strengths in manufacturing and infrastructure. This contrasts with the U.S. focus on cloud-based intelligence, leading to diverging models of AI development and potential implications for global technological standards.

Workday Sets New Standards in Responsible AI Governance

Workday has received dual third-party accreditations for its AI Governance Program, highlighting its commitment to responsible and transparent AI. Dr. Kelly Trindle, Chief Responsible AI Officer, said the recognition affirms Workday’s leadership in the critical area of AI governance.

New Safeguard Tiers for Responsible AI in Amazon Bedrock

Amazon Bedrock Guardrails now offers safeguard tiers, allowing organizations to implement customizable safety controls for their generative AI applications. This tiered approach enables companies to select appropriate safeguards based on specific needs, balancing safety and performance across various use cases.
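The tier selection described above can be sketched in code. The following is a minimal, hypothetical illustration only: it assembles a guardrail configuration with a per-policy tier setting, but the field names (e.g. `tierConfig`, `tierName`) and tier labels are assumptions for illustration and are not verified against the current Bedrock Guardrails API.

```python
# Sketch: selecting a safeguard tier for a generative AI guardrail.
# NOTE: field names ("tierConfig", "tierName") and tier labels are
# illustrative assumptions, not a confirmed Bedrock API contract.

def build_guardrail_request(name: str, tier: str) -> dict:
    """Assemble a create_guardrail-style request with a content-policy tier."""
    if tier not in ("CLASSIC", "STANDARD"):  # assumed tier names
        raise ValueError(f"unknown safeguard tier: {tier}")
    return {
        "name": name,
        "blockedInputMessaging": "Sorry, I can't help with that request.",
        "blockedOutputsMessaging": "Sorry, I can't share that response.",
        "contentPolicyConfig": {
            "tierConfig": {"tierName": tier},  # assumed tier selector
            "filtersConfig": [
                {"type": "HATE", "inputStrength": "HIGH",
                 "outputStrength": "HIGH"},
                {"type": "VIOLENCE", "inputStrength": "MEDIUM",
                 "outputStrength": "MEDIUM"},
            ],
        },
    }

# A latency-sensitive chat app might pick the lighter tier,
# while a compliance-heavy workload picks the stronger one.
chat_request = build_guardrail_request("support-chat", "CLASSIC")
audit_request = build_guardrail_request("claims-review", "STANDARD")
```

In a real deployment, a dictionary like this would be passed to the provider's guardrail-creation API; the point here is only that the tier becomes one more knob in the safety configuration, chosen per use case.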

Texas Takes Charge: New AI Governance Law Enacted

On June 22, 2025, Texas Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law, asserting the state’s right to legislate on consumer protection and AI use. The law aims to promote responsible AI development while safeguarding individuals from associated risks and ensuring transparency in AI interactions.

Tech Giants Push Back: Delaying the EU’s AI Act

Meta and Apple are urging the European Union to delay implementation of its landmark AI Act, arguing that the current timeline could hinder innovation and overwhelm businesses. With the world’s first comprehensive AI regulation set to take effect in August 2025, many companies remain unprepared for compliance, raising fears of stifled growth, particularly among smaller firms.
