Category: AI Regulation

Understanding the EU AI Act Risk Pyramid

The EU AI Act employs a risk-based approach to regulate AI systems, categorizing them into four tiers based on the level of risk they present to safety, rights, and societal values. At the top are unacceptable risk systems that are banned outright, while lower tiers include high-risk, limited risk, and minimal risk systems that require varying degrees of oversight and transparency.
Harnessing Agentic AI: Current Rules and Future Implications

AI companies, including Meta and OpenAI, assert that existing regulations can effectively govern the emerging field of agentic AI, which allows AI systems to perform tasks autonomously. These companies emphasize the importance of applying current safety processes and legal frameworks to protect businesses and consumers as they adopt this technology.
OpenAI Calls for Streamlined AI Regulations in Europe

OpenAI is urging the EU to simplify AI regulations to foster innovation and maintain global competitiveness, warning that complex rules could drive investment to less democratic regions. The organization emphasizes the need for alignment between regulatory efforts and growth initiatives to create a unified strategy for AI development in Europe.
Bridging the Gaps in AI Governance

As we stand at a critical juncture in AI’s development, a governance challenge is emerging that could stifle innovation and deepen global digital divides. The current AI governance landscape is a fragmented patchwork of regulations, making the global deployment of AI systems increasingly difficult and costly.
Balancing Data Protection and AI Regulation

This article examines the intersection of data protection law and AI regulation, emphasizing that organizations processing personal data through AI systems must comply with the GDPR. It highlights the challenges and responsibilities businesses face as they navigate the evolving landscape of AI legislation, particularly with the implementation of the EU AI Act.
Regulating Emotion AI in the Workplace: Challenges and Implications

The EU AI Act imposes strict rules on emotion recognition systems, classifying them as either “High Risk” or “Prohibited Use” depending on the context. From February 2025, the Act prohibits AI systems that infer emotions in workplace and educational settings, except for specific medical or safety reasons.
Delays in the EU AI Act: Standards Development Pushed to 2026

The development of technical standards for the EU’s AI Act is behind schedule, with completion now expected to extend into 2026. This delay may impact manufacturers’ ability to demonstrate compliance with the regulations aimed at ensuring the safety and trustworthiness of high-risk AI applications.
UK’s AI Regulation: Balancing Growth and Oversight

The U.K. has paused its work on artificial intelligence (AI) regulation, caught between the deregulatory approach of the U.S. and the stringent AI Act of the EU. The delay leaves organizations without the clarity and consistency they seek in the evolving landscape of AI governance.