Category: Artificial Intelligence Regulation

European Commission Abandons AI Liability Directive Amid Industry Pressure

The European Commission has decided not to pursue the AI Liability Directive, citing a lack of consensus among member states; sustained industry pressure against additional AI regulation contributed to the withdrawal. The decision, announced in the Commission's 2025 work programme, has drawn criticism over the resulting gaps in the EU's regulatory landscape for artificial intelligence.


AI Act Implementation: What You Need to Know

As of February 2, 2025, the first provisions of the AI Act apply across Europe, prohibiting practices deemed to pose unacceptable risk, such as emotion recognition in the workplace. Businesses must prepare for stricter requirements scaled to the risk level of their AI systems, including mandatory conformity assessments and regular audits for high-risk AI.


Exploring Environmental Safeguards in the AI Act

The paper assesses the levels of environmental protection established by the Artificial Intelligence Act (AIA) and its relationship with EU environmental law. It highlights the challenges and opportunities presented by AI technologies in achieving sustainability while addressing potential environmental risks.


EU AI Act Enforces Initial Compliance Requirements

The first requirements of the European Union AI Act came into effect on February 2, 2025, banning AI systems engaged in prohibited practices and mandating sufficient AI literacy among the staff of providers and deployers. Clifford Chance is actively preparing for the global AI Action Summit in Paris, focusing on delivering trustworthy AI under the new regulations.


EU Bans AI Systems with Unacceptable Risks

The AI Act's prohibition on systems posing an "unacceptable risk," covering practices such as social scoring and harmful manipulation, is now in force. The Act categorizes AI systems into four risk levels, with high-risk systems requiring strict compliance and conformity assessments before they can be placed on the market.


The EU AI Act: Pioneering Ethical AI Development

The European AI Act is a regulatory framework proposed by the European Commission to ensure that AI is developed and used in an ethical, transparent, and accountable manner. It categorizes AI applications by risk level, from minimal to high, and mandates correspondingly greater scrutiny and oversight at each tier.


European AI Regulation: A New Era of Responsible Innovation

The European regulation on artificial intelligence (AI) came into force on August 1, 2024, aiming to promote the responsible development and deployment of AI within the EU. It establishes clear requirements for developers and deployers based on the level of risk each AI system poses.


Texas Takes the Lead in AI Regulation with TRAIGA

The Texas Responsible AI Governance Act (TRAIGA) proposes comprehensive regulations for AI, focusing on high-risk systems and their developers. It aims to prohibit certain AI uses while providing limited rights for private litigants and exemptions for small businesses.


AI Act Exemptions: A Threat to Rights and Security?

The European Union's AI Act, which aims to regulate high-risk AI systems, officially bans applications deemed to pose an "unacceptable risk," including biometric categorization and emotion recognition. However, national security exemptions could allow law enforcement to bypass these bans, raising concerns about potential abuse and the protection of individual rights.
