AI Compliance Essentials: Understanding New Regulations

AI Compliance: A Quick Reminder

Artificial intelligence (AI) is reshaping modern society, automating routine human activities and, in doing so, enhancing efficiency and productivity. Like any technological development, AI presents both benefits and risks. Concerns include potential biases, privacy intrusions, and ethical dilemmas.

According to the Artificial Intelligence Index Report 2024, a 2023 survey found that 66% of respondents anticipate AI will significantly change their lives in the near future, while 54% believe its benefits outweigh its downsides. However, public sentiment is mixed: 52% reported feeling nervous about AI products and services, a 13 percentage-point increase from 2022. Globally, the most significant concerns revolve around AI being misused for harmful purposes (49%), its impact on employment (49%), and potential violations of privacy (45%).

Regulatory Efforts

Authorities around the globe are trying to keep pace with AI’s rapid development and mitigate associated risks and public concerns through regulations. A landmark example is the EU AI Act, which constitutes the world’s first AI-focused legal framework for the development, deployment, and use of AI systems and general-purpose AI models. The EU AI Act came into effect on August 1, 2024, with the first set of impactful rules taking effect on February 2, 2025, focusing on (1) prohibited AI systems and (2) AI literacy obligations.

Prohibited AI Systems

Under the EU AI Act, certain AI systems are prohibited due to an unacceptable risk to fundamental rights. These include AI systems that:

  • use subliminal, manipulative, or deceptive techniques that distort behavior and impair decision-making, causing significant harm;
  • exploit vulnerabilities related to age, disability, or socioeconomic status to distort behavior, leading to harm;
  • use biometric categorization to infer or deduce membership in sensitive groups;
  • apply social scoring that results in unfair or detrimental treatment based on behavior or personal traits;
  • assess the risk of a person committing a criminal offense based solely on profiling or personality traits;
  • create facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage;
  • infer emotions in the workplace or educational institutions;
  • conduct real-time remote biometric identification in publicly accessible spaces for law enforcement purposes.

These prohibitions come with certain qualifiers, as well as safety- and enforcement-related exemptions. To ensure the consistent and uniform application of the EU AI Act in this respect, in February 2025, the European Commission published two draft guidelines: (1) The Guidelines on AI System Definition and (2) The Guidelines on Prohibited AI Practices.

AI Literacy and Compliance

AI literacy is another crucial aspect of the EU AI Act and forms part of its governance framework. In practice, it means that employers must ensure that employees involved in AI deployment understand how these systems work, the risks they carry, and the challenges they may present.

Article 4 of the EU AI Act mandates that providers and deployers of AI systems must take measures to ensure their associated personnel possess sufficient AI literacy “taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.” The goal of this obligation is to foster a culture of responsible AI use, supporting compliance and innovation.

This article serves as a brief reminder of the necessity to comply with regulatory requirements. For details on specific compliance obligations and the steps to be taken, further research on the EU AI Act is recommended.
