Category: AI Regulation

Rethinking AI Regulation: The Case for Entity-Based Governance

The paper examines the debate over whether frontier AI regulation should target the core technology (AI models) or its uses, highlighting the shortcomings of both model-based and use-based approaches. It proposes an alternative: entity-based regulation aimed at the large business entities developing powerful AI systems, which it argues better addresses the unique risks posed by frontier AI.

Read More »

Ensuring Safe Adoption of Generative AI: The Role of Output Inspection

As organizations adopt generative AI tools like Zoom AI Companion and Microsoft Copilot, they must establish robust guardrails governing permissions and usage policies. Guardrails alone are not enough, however: inspecting AI-generated content is crucial for ensuring compliance and maintaining control over what the AI actually produces, closing the gap between written policies and real outputs.

Read More »

AI-Generated Band Sparks Controversy in Music Streaming

The Velvet Sundown, an AI-generated band, garnered millions of streams on Spotify before being exposed as a synthetic creation, igniting a debate on copyright and content regulation in the music industry. The incident highlights the lack of oversight regarding AI-generated content on streaming platforms and raises concerns about transparency and the impact on human musicians.

Read More »

Mastering ISO 42001 for Responsible AI Governance

This article provides a detailed overview of the ISO 42001 standard, the first international standard for AI management systems, emphasizing its importance for responsible AI governance. It discusses how organizations can innovate while mitigating risks by adhering to the principles outlined in ISO 42001.

Read More »

The Rising Threat of AI Jailbreaking in Enterprises

As AI models become more deeply integrated into enterprise workflows, they face significant security threats from jailbreak attempts that circumvent their built-in restrictions. These deliberate efforts to bypass ethical and operational rules expose vulnerabilities in AI governance, especially when enterprise models cannot consistently refuse harmful requests.

Read More »

Texas Takes the Lead in Consumer-Focused AI Regulation

Texas has passed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which aims to protect consumers while establishing regulations for businesses using artificial intelligence. This pioneering legislation emphasizes human-centeredness and transparency, requiring companies to disclose AI use and prohibiting harmful practices.

Read More »

AI Regulation: What Lies Ahead After the Moratorium Removal

President Donald Trump’s budget reconciliation bill almost included a decade-long moratorium on AI regulation at the state and local levels, but this provision was ultimately removed by the Senate. As a result, states remain free to create their own regulations for AI, highlighting ongoing debates about consumer protection and innovation in the sector.

Read More »

Understanding the EU AI Act: Key Steps for Compliance

The EU AI Act, which came into force on August 1, 2024, establishes a framework for regulating artificial intelligence based on risk levels associated with AI systems. This legislation aims to enhance user confidence and foster innovation while imposing significant penalties for non-compliance.

Read More »

Understanding the EU AI Act: Key Insights and Implications

The EU AI Act categorizes artificial intelligence into three risk classes: “unacceptable,” “high,” and “low/minimal,” with strict regulations for high-risk systems. It aims to ensure compliance among providers and users of AI systems, impacting various sectors and workplaces.

Read More »

Siemens and SAP Push for a New Era in EU AI Regulation

Siemens and SAP CEOs have called on the European Union to revise its artificial intelligence regulations, arguing that the current rules hinder innovation. They emphasize the need for a new regulatory framework that supports technological advancement rather than stifling it.

Read More »