Category: Regulatory Frameworks for AI

Implementing AI Governance for EU Compliance

AI governance involves establishing rules and policies to ensure responsible AI use and to mitigate the risks associated with AI deployment. The EU AI Act categorizes AI systems by risk level and outlines the compliance measures companies must take when using high-risk AI applications.

ASEAN’s Evolving AI Governance Landscape

The Association of Southeast Asian Nations (ASEAN) is making progress toward AI governance through an innovation-friendly approach, but growing AI-related risks highlight the need for more binding regulations. This brief examines ASEAN’s collective efforts in AI regulation and the varying domestic strategies of its member states.

Texas Takes the Lead: New AI Governance Law Unveiled

The Texas Responsible Artificial Intelligence Governance Act (TRAIGA), passed on May 31, 2025, establishes disclosure requirements for AI developers and deployers while outlining prohibited uses of AI and civil penalties for violations. The law takes effect on January 1, 2026, as part of a growing trend among states to legislate on artificial intelligence.

AI Act Under Fire: Urgent Call for Caution in Europe

The Computer & Communications Industry Association (CCIA Europe) has cautioned EU leaders about the risks of implementing the AI Act without a finalized framework, stressing that critical provisions for general-purpose AI models still lack essential guidance. The association warns that a rushed rollout could jeopardize the EU’s AI ambitions and hinder innovation in the sector.

Texas Implements Landmark AI Governance Legislation

Texas has enacted the Responsible Artificial Intelligence Governance Act (TRAIGA), which will take effect on January 1, 2026. This legislation aims to regulate AI technologies, prohibiting their use for harmful purposes and requiring transparency when AI interacts with consumers.

Guidance for General-Purpose AI Models Under the AI Act

The European Commission has published guidance for general-purpose AI models under the AI Act, clarifying the obligations for providers and defining what constitutes a general-purpose AI model. The guidance outlines compliance expectations, particularly for advanced models that pose systemic risks, with implementation beginning in August 2025.

Simultaneous Strategies for AI Governance

Responsible artificial intelligence (AI) policies and broader national AI strategies must be developed in tandem to keep them aligned with their intended purposes and core values. Bhutan’s distinctive approach, grounded in the philosophy of Gross National Happiness, positions it as a noteworthy contributor to global AI governance discussions.

European Commission Unveils AI Code of Practice for General-Purpose Models

On July 10, 2025, the European Commission published the final version of the General-Purpose AI Code of Practice, which aims to provide a framework for compliance with certain provisions of the EU AI Act. While adherence to the AI Code is voluntary, organizations that choose to comply can benefit from a reduced administrative burden and greater legal certainty.
