Southeast Asia’s Unique Approach to AI Safety Governance

Southeast Asia’s approach to AI safety governance combines localized regulation with regional coordination, reflecting the region’s diverse cultural and political landscape. A new report outlines recent developments across all 11 countries, highlighting the need to harmonize AI strategies and build capacity to foster an inclusive and safe AI future.

Comparing AI Action Plans: U.S. vs. China

In July, both the United States and China unveiled their national AI Action Plans, showcasing different approaches to AI development and governance. Despite their contrasting ideologies, the two nations are converging on similar strategies to accelerate domestic AI adoption, promote global diffusion, and manage AI risks without hindering innovation.

Private Governance: The Future of AI Regulation

Private governance and regulatory sandboxes are essential for promoting democracy, efficiency, and innovation in AI regulation. This approach allows for agile and accountable experimentation that can outperform state-led initiatives while preserving individual liberty and fostering a vibrant market environment.

Egypt Champions Ethical AI for Inclusive Development

Egypt’s Minister of Planning and Economic Development, Rania Al-Mashat, emphasized the importance of robust governance frameworks for artificial intelligence to ensure it benefits society ethically and sustainably. During her remarks at the Tokyo International Conference on African Development, she highlighted Africa’s unique opportunity to leverage AI for inclusive development, given the continent’s youthful population and expanding digital economy.

Strengthening AI Governance for Fair Credit Access in Kenya

Kenya is at a critical juncture in utilizing artificial intelligence (AI) for financial inclusion, but expert Jimmie Mwangi warns that without strong governance, AI-driven credit scoring may exacerbate existing inequalities. He emphasizes the need for ethical standards and transparency in AI systems to ensure fair credit access for all, particularly for the unbanked and underserved populations.

Governance Challenges for Multi-Agent AI Systems

The article discusses the urgent need for governance frameworks to manage the interactions of multi-agent AI systems, highlighting the risks posed by their autonomous decision-making capabilities. It draws parallels with maritime governance, emphasizing the importance of transparency, accountability, and safety protocols to ensure responsible deployment of AI technologies.

Addressing AI-Driven Online Threats with Safety by Design

The rapid growth of artificial intelligence (AI) is reshaping the digital landscape, amplifying existing online harms and introducing new safety risks, particularly through deepfakes. A safety-by-design governance approach is needed to address these AI-facilitated harms by establishing interventions at each stage of the online harm lifecycle.

Critical Evaluations of AI Compliance Under the EU Act

The EU’s Artificial Intelligence Act introduces new obligations for organizations deploying general-purpose AI models, set to take effect in August. Dealmakers must strengthen their due diligence processes to ensure compliance and to understand the risks AI systems pose in the context of mergers and acquisitions.

Microsoft’s Science Chief Opposes Trump’s AI Regulation Ban

Microsoft’s chief scientist, Dr. Eric Horvitz, has criticized Donald Trump’s proposal to ban state-level AI regulations, arguing that it could hinder progress in AI development. He emphasizes the need for guidance and regulation to accelerate the advancement of AI while addressing potential risks, such as misinformation and malicious uses.