Category: Artificial Intelligence Governance

Virginia’s Governor Rejects Controversial AI Regulation

On March 24, 2025, Virginia’s Governor vetoed House Bill 2094, which aimed to establish regulations for businesses developing “high-risk” AI systems, citing concerns over its potential to hinder innovation. This decision reflects ongoing debates about how best to regulate AI technology at both state and federal levels.

German Coalition Divided on AI Regulation and Digital Sovereignty

Leaked coalition documents reveal a rift between the CDU/CSU and the SPD over AI regulation and digital sovereignty as they negotiate a new German government platform. While both parties agree that regulation should support data center development, they differ on the ambition and specifics of the policies, with the SPD pushing for a 50% open-source target by 2029.

Navigating the Ethics of AI: A Call for Responsibility

As artificial intelligence technologies increasingly influence our lives, the ethical responsibilities in their development and use become paramount. Responsible AI aims to balance technological advancement with ethical values to maximize benefits while minimizing risks.

Quantum AI: The Urgent Need for Global Regulation

As the integration of quantum computing and AI accelerates, establishing global regulation is crucial to prevent misuse and ensure these technologies benefit humanity. The potential for both positive and negative outcomes underscores the urgency of creating a responsible framework for quantum AI.

Building Inclusive AI for a Diverse Future

In an AI-driven era, it is essential to ensure that AI solutions are accessible and inclusive for the disabled community, as over 380 million working-age adults live with disabilities globally. However, a significant lack of high-quality disability data in AI development poses risks of perpetuating existing barriers for these individuals.

Unpacking the AI Act’s Emotional Recognition Loophole

The article discusses the implications of the EU AI Act's ban on emotion recognition technologies (ERTs), highlighting a potential loophole that permits identifying emotional expressions without inferring individuals' internal emotional states. Despite acknowledging the technical limitations of ERTs, the regulation may not adequately protect users from such technologies should they become fully functional in the future.

EIOPA’s Insights on AI Governance in Insurance

On February 12, 2025, the European Insurance and Occupational Pensions Authority (EIOPA) published a consultation on its draft opinion regarding artificial intelligence (AI) governance and risk management. The Opinion provides guidance for insurance undertakings on the responsible use of AI systems in the insurance value chain, emphasizing the importance of proportionality in governance and risk management measures.

Harnessing AI for Global Good

At SXSW 2025, Dr. Rumman Chowdhury emphasized the importance of viewing artificial intelligence through a diverse lens and highlighted the need for responsible AI practices that empower users. She advocates a shift from passive acceptance of technology to active participation, calling for systems that allow individuals to make informed choices about the algorithms that affect their lives.
