Understanding the European AI Act: Key Insights for Employers

Artificial intelligence (AI) is evolving rapidly and is increasingly integrated into everyday business operations. Recognizing this trend, the European Union has introduced the AI Act, a comprehensive legal framework with which all companies operating in the EU must comply.

The AI Act entered into force on August 1, 2024, and its first obligations, including the AI literacy requirement and the ban on prohibited practices, have applied since February 2, 2025. Non-compliance can lead to substantial fines, making it crucial for employers to understand their responsibilities. UK businesses are not bound by the Act domestically, but those engaging with the EU market must meet these requirements from the applicable dates.

Key Requirements for Employers

According to legal experts, there are two primary obligations for employers under the AI Act: the establishment of an AI policy and the prohibition of certain AI systems.

1. Mandatory AI Policy

Employers are required to develop an AI policy that sets out how they will ensure their employees are AI literate, meaning that employees understand both the potential applications of AI and the risks associated with it.

It is important to note that not every employee needs to be an AI expert; rather, all personnel involved with AI systems should have the knowledge to make informed decisions. This includes everyone from AI system providers to end users.

2. Prohibited AI Systems

The AI Act explicitly bans AI systems that undermine fundamental European norms and values, such as those that infringe on fundamental rights. For instance, AI systems used for social scoring, or for emotion recognition in workplaces and educational settings, are now prohibited.

Employers must audit their AI systems to ensure compliance and cease using any prohibited technologies. Violations of the prohibitions can lead to fines of up to EUR 35 million or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher.

Preparing for Compliance

As the AI Act applies to all employers in the EU, organizations of every size need to prepare adequately. The law covers any organization whose employees use AI technologies in the course of their work.

Member states are responsible for enforcement, and their national penalty regimes only take effect on August 2, 2025. Businesses are nonetheless encouraged to act now, as violations committed before that date may still be sanctioned once enforcement begins.

The Future in the UK

Even for companies not operating within the EU, similar domestic regulation may be on the horizon: the UK has signalled interest in its own approach to governing AI. Organizations should prepare for the possibility of future legislation that aligns with the goals of the AI Act.

Ensuring AI Literacy

To fulfill their obligations, employers must ensure their workforce is adequately trained in AI. This may involve general training on basic AI principles, tailored to employees' roles and responsibilities. Employers should also plan for ongoing education, as AI literacy is not a one-off requirement but must keep pace with the technology.

Conclusion

The European AI Act represents a significant shift in how businesses must engage with AI technologies. By establishing clear policies and ensuring compliance, employers can mitigate risks and harness the benefits of AI responsibly.
