Complying with the New EU AI Regulations

Here Comes Compliance with the EU AI Act

Artificial intelligence regulation reached a milestone particularly relevant to corporate compliance officers: on February 2, 2025, the first five articles of the EU AI Act took effect.

This signifies the formal beginning of the era of AI compliance. Companies that utilize AI and operate within Europe, or develop and sell AI systems used in Europe, may find themselves subject to regulatory enforcement. Therefore, it is imperative to start incorporating compliance-aware policies and procedures into your company’s AI adoption strategy as soon as possible.

Understanding Article 4 of the EU AI Act

Article 4 outlines that all providers and deployers of AI systems must:

“Take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training, and the context in which the AI systems are to be used.”

The Definition of “AI Literacy”

AI literacy encompasses the skills, knowledge, and understanding that allow providers, deployers, and affected persons to make informed decisions regarding AI systems. It also promotes awareness about the opportunities, risks, and potential harm associated with AI.

In essence, companies must ensure that employees are trained to understand the risks posed by AI. From this straightforward requirement arises a host of practical challenges.

The Importance of AI Governance

The principal challenge is this: developing the necessary AI literacy within an organization is impossible without a clear understanding of how the company is using AI. This issue is compounded by the ease with which employees can integrate artificial intelligence into their daily tasks.

Take, for example, DeepSeek, a Chinese generative AI app that unexpectedly surged in popularity. The privacy risks associated with DeepSeek remain largely unknown, as do the potential cybersecurity threats it may pose to organizations.

Before contemplating the policies, procedures, and training necessary to achieve the required AI literacy, management teams must establish governance mechanisms that guide employee AI usage.

For instance, a large corporation could set up an “AI usage board,” comprising leaders from various operational functions who collaborate with risk management teams (compliance, privacy, HR, legal, IT security) to define rules for AI adoption. Decisions may include which AI systems to use, the tasks suitable for AI, and ensuring that customer-facing AI systems clearly inform users that they are interacting with AI.
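To make the board's decisions concrete, those rules can be captured in a machine-readable register that procurement or IT tooling consults before an AI tool is used. The following is a minimal, hypothetical sketch; the tool names, task labels, and field names are all illustrative assumptions, not part of the Act or any specific product.

```python
# Hypothetical sketch of an internal "approved AI tool" register, of the kind
# an AI usage board might maintain. All tool names, tasks, and fields below
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class AIToolPolicy:
    name: str
    approved_tasks: frozenset   # tasks the board has cleared this tool for
    customer_facing: bool       # if True, users must be told they are interacting with AI

REGISTRY = {
    "internal-copilot": AIToolPolicy(
        name="internal-copilot",
        approved_tasks=frozenset({"code-review", "drafting"}),
        customer_facing=False,
    ),
}

def is_use_approved(tool: str, task: str) -> bool:
    """Return True only if the tool is registered and cleared for this task."""
    policy = REGISTRY.get(tool)
    return policy is not None and task in policy.approved_tasks
```

A check like `is_use_approved("internal-copilot", "code-review")` returns `True`, while any unregistered tool or unapproved task returns `False`, which defaults new AI uses to "not yet reviewed" rather than "allowed".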

Ethics and Corporate Culture

Ethics, tone at the top, and corporate culture should be integral to these discussions. Senior management must convey a commitment to the ethical use of AI, even amid uncertainties regarding specific ethical concerns. The AI governance board should facilitate this dialogue.

A strong culture of ethics demonstrates that while AI is beneficial, it will be adopted cautiously, ethically, and in compliance with regulations. That message in turn fosters responsible AI usage and makes the required AI literacy easier to achieve.

Examining Article 5 of the EU AI Act

Article 5 addresses prohibited AI practices, the top tier of the Act's risk-based framework: uses deemed so harmful that they are outright banned.

Many of these prohibited uses will not surprise Western executives. For instance, the law forbids AI that:

  • Deploys “subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques” that materially distort a person’s decision-making ability.
  • Monitors a person to predict the risk of criminal behavior based solely on profiling or assessing personality traits.
  • Infers a person’s emotions in workplace or educational settings, with exceptions for medical or safety-related reasons.

While not all prohibited uses need to be enumerated here, the critical takeaway for compliance officers is that organizations require clear policies regarding which AI uses will not be adopted, alongside procedures to ensure compliance.

It is plausible that contractors or business partners may use AI in prohibited ways on behalf of your company. Thus, strong policies, contract management, and third-party monitoring capabilities are essential. Additionally, robust training for employees will be required to ensure they understand the risks associated with third-party AI usage and their role in mitigating these risks.

As more provisions of the EU AI Act take effect, additional tiers of AI usage will come into play; the lower the risk associated with the use case, the less oversight is required. This will present further challenges for corporate ethics and compliance teams, which will need processes to assess where each use case falls and to implement controls proportionate to its risk.

Ultimately, a successful AI compliance program will continue to rely on the fundamentals of a strong ethics and compliance framework while navigating the complexities of this new landscape.
