Category: AI Governance

Strengthening AI Governance for Fair Credit Access in Kenya

Kenya is at a critical juncture in using artificial intelligence (AI) for financial inclusion, but expert Jimmie Mwangi warns that without strong governance, AI-driven credit scoring may exacerbate existing inequalities. He emphasizes the need for ethical standards and transparency in AI systems to ensure fair credit access for all, particularly for unbanked and underserved populations.

Read More »

Governance Challenges for Multi-Agent AI Systems

The article discusses the urgent need for governance frameworks to manage the interactions of multi-agent AI systems, highlighting the risks posed by their autonomous decision-making capabilities. It draws parallels with maritime governance, emphasizing the importance of transparency, accountability, and safety protocols to ensure responsible deployment of AI technologies.

Read More »

Addressing AI-Driven Online Threats with Safety by Design

The rapid growth of artificial intelligence (AI) is reshaping the digital landscape, amplifying existing online harms and introducing new safety risks, particularly through deepfakes. A safety-by-design governance approach is needed to address these AI-facilitated harms by establishing interventions at each stage of the online-harm lifecycle.

Read More »

Texas Takes the Lead: New AI Governance Law Unveiled

The Texas Responsible Artificial Intelligence Governance Act (TRAIGA), passed on May 31, 2025, establishes disclosure requirements for AI developers and deployers while outlining prohibited uses of AI and civil penalties for violations. The law takes effect on January 1, 2026, as part of a growing trend among states to legislate on artificial intelligence.

Read More »

Texas Enacts Groundbreaking AI Governance Law

On June 22, 2025, Texas Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law. The enacted version departs significantly from the original draft, which proposed strict regulation of “high-risk” AI systems; instead, it restricts certain AI practices and establishes a regulatory sandbox program for the development and testing of AI technologies.

Read More »

G7 Summit Fails to Address Urgent AI Governance Needs

At the recent G7 summit in Canada, discussions focused primarily on the economic opportunities of AI, while AI governance was notably overlooked. This shift toward prioritizing AI innovation and competition raises concerns about risks from advanced AI capabilities that no single nation can manage alone.

Read More »

Africa’s Bold Move Towards Sovereign AI Governance

At the Internet Governance Forum (IGF) 2025 in Oslo, African leaders called for urgent action to develop sovereign and ethical AI systems tailored to local needs, emphasizing fairness, transparency, and inclusion. With over 1,000 African startups relying on foreign AI models, concerns about digital dependency are mounting, underscoring the need for transparent governance frameworks and homegrown AI development.

Read More »

China’s Unique Approach to Embodied AI

China’s approach to artificial intelligence emphasizes the development of “embodied AI,” which interacts with the physical environment, leveraging the country’s strengths in manufacturing and infrastructure. This contrasts with the U.S. focus on cloud-based intelligence, leading to diverging models of AI development and potential implications for global technological standards.

Read More »

Workday Sets New Standards in Responsible AI Governance

Workday has recently received dual third-party accreditations for its AI Governance Program, highlighting its commitment to responsible and transparent AI. Dr. Kelly Trindle, Chief Responsible AI Officer, emphasized that this recognition affirms Workday’s leadership in the critical area of AI governance.

Read More »

New Safeguard Tiers for Responsible AI in Amazon Bedrock

Amazon Bedrock Guardrails now offers safeguard tiers, allowing organizations to implement customizable safety controls for their generative AI applications. This tiered approach enables companies to select appropriate safeguards based on specific needs, balancing safety and performance across various use cases.

Read More »