EU’s AI Oversight: A Race Against Time

With less than three months until the deadline, many EU member states are still undecided on which authorities will oversee compliance with the AI Act. Delays in appointing these regulators could lead to uncertainty for businesses required to comply with the new rules.

Philips’ Insights on the EU AI Act’s Impact on Medical Innovation

Philips’ Chief Innovation Officer, Shez Partovi, emphasizes the need for the EU AI Act to ensure patient safety while fostering innovation in the AI medical device sector. He expresses concern that overlapping regulatory requirements could hinder the growth of smaller companies and urges a balance that promotes trust without stifling innovation.

AI Compliance: Copyright Challenges in the EU AI Act

The EU AI Act emphasizes the importance of copyright compliance for generative AI models, particularly regarding the use of vast datasets for training. It requires general-purpose AI providers to implement policies that respect copyright protections and ensure transparency about the content used in their training processes.
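
To make the transparency point concrete, the Python sketch below shows one way a provider might keep a machine-readable manifest of training-data sources and generate a summary suitable for disclosure. The record fields, names, and output format are illustrative assumptions; the Act does not prescribe this structure.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class TrainingSource:
    """One entry in an illustrative training-data manifest (all field names are assumptions)."""
    name: str                # human-readable label for the dataset or crawl
    origin: str              # where the content was obtained (URL, vendor, internal corpus)
    licence: str             # licence or legal basis claimed for use in training
    opt_out_respected: bool  # whether machine-readable rights reservations were honoured

def summarise(sources: list[TrainingSource]) -> str:
    """Produce a publishable JSON summary of the training content used."""
    report = {
        "total_sources": len(sources),
        "all_opt_outs_respected": all(s.opt_out_respected for s in sources),
        "sources": [asdict(s) for s in sources],
    }
    return json.dumps(report, indent=2)

if __name__ == "__main__":
    manifest = [
        TrainingSource("public-web-crawl-2024", "web crawl", "text-and-data-mining exception", True),
        TrainingSource("licensed-news-archive", "vendor agreement", "commercial licence", True),
    ]
    print(summarise(manifest))
```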

EU AI Act: Enhancing Incident Management Compliance

The EU AI Act introduces new incident response and reporting requirements for providers of high-risk AI systems, mandating the reporting of serious incidents within 72 hours. This legislation aims to protect consumers while encouraging companies to adopt structured incident management practices.
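
As a rough illustration of what structured incident handling can look like, the sketch below records a detection timestamp and computes the reporting deadline from it. The 72-hour window simply follows the summary above; the Act's actual deadlines vary by incident type, so the constant is a placeholder, not legal guidance.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Assumption: the 72-hour window follows the summary above; the Act's actual
# deadlines differ by incident type, so this constant is a placeholder only.
REPORTING_WINDOW = timedelta(hours=72)

def reporting_deadline(detected_at: datetime) -> datetime:
    """Latest time by which the incident report should be filed."""
    return detected_at + REPORTING_WINDOW

def is_overdue(detected_at: datetime, now: Optional[datetime] = None) -> bool:
    """True if the reporting window for this incident has already closed."""
    now = now or datetime.now(timezone.utc)
    return now > reporting_deadline(detected_at)

if __name__ == "__main__":
    detected = datetime(2025, 3, 3, 9, 30, tzinfo=timezone.utc)
    print("Report due by:", reporting_deadline(detected).isoformat())
    print("Overdue now?", is_overdue(detected))
```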

AI Act Compliance: Strategic Insights for Businesses

The EU Artificial Intelligence Act (AI Act) is the first comprehensive legal framework regulating the use of artificial intelligence across the European Union, establishing obligations for companies both inside and outside the Union. It adopts a risk-based approach, requiring compliance frameworks that address legal, technical, and ethical considerations in the deployment of AI systems.
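
A minimal Python sketch of that risk-based idea follows: risk tiers map to progressively heavier obligations. The tier names mirror the commonly cited categories (unacceptable, high, limited, minimal), but the obligation lists are simplified illustrations rather than the Act's full requirements.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # heaviest pre-market and ongoing obligations
    LIMITED = "limited"            # mainly transparency duties
    MINIMAL = "minimal"            # largely outside the Act's specific obligations

# Simplified, non-exhaustive illustration of how obligations scale with risk.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not place the system on the EU market"],
    RiskTier.HIGH: [
        "risk management system",
        "technical documentation",
        "human oversight",
        "conformity assessment",
    ],
    RiskTier.LIMITED: ["inform users they are interacting with AI"],
    RiskTier.MINIMAL: ["no tier-specific obligations"],
}

def compliance_checklist(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for item in compliance_checklist(RiskTier.HIGH):
        print("-", item)
```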

Adapting to the EU AI Act: Essential Insights for Insurers

The EU AI Act introduces new accountability measures for organizations using AI, particularly in high-risk sectors like insurance. Insurers must conduct a Fundamental Rights Impact Assessment (FRIA) to evaluate potential biases and ensure the responsible use of AI in underwriting and pricing.
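
The sketch below suggests one hypothetical shape for an FRIA record attached to an insurance pricing model, with a simple completeness check. Field names and the completeness rule are assumptions made for illustration; the Act describes the required content of the assessment only at a high level.

```python
from dataclasses import dataclass, field

@dataclass
class FRIARecord:
    """Minimal sketch of a Fundamental Rights Impact Assessment record.

    Field names and the completeness rule are illustrative assumptions,
    not the Act's prescribed template.
    """
    system_name: str
    intended_purpose: str
    affected_groups: list[str] = field(default_factory=list)
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """Count-based simplification: at least one mitigation per identified risk."""
        return bool(self.identified_risks) and len(self.mitigations) >= len(self.identified_risks)

if __name__ == "__main__":
    fria = FRIARecord(
        system_name="motor-pricing-model-v2",
        intended_purpose="risk-based premium calculation",
        affected_groups=["applicants in low-income postcodes"],
        identified_risks=["proxy discrimination via postcode-derived features"],
        mitigations=["remove postcode-derived features and monitor pricing disparities"],
    )
    print("FRIA complete:", fria.is_complete())
```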

AI Regulation: China’s Blueprint for Global Governance

The article examines how the Global South, and China in particular, is making significant strides in developing and regulating artificial intelligence (AI), challenging the notion that these countries play a passive role in global affairs. It argues that proactive regulation is needed to balance innovation with ethical considerations, and it highlights the potential for cross-border cooperation among Global South nations in navigating AI’s challenges.

EU Commission’s Contingency Plans for AI Standards Delays

The European Commission is prepared to provide alternative solutions if technical standards for the EU’s AI Act are delayed, as the main standardization bodies have announced that the standards will now be ready in 2026 instead of August 2025. The Commission emphasizes that while these standards are not mandatory, they will significantly ease compliance efforts for providers of high-risk AI systems.
