Understanding the EU AI Act: Essential Steps for Compliance in 2025

The European Union has taken a historic step by enacting the AI Act, the world’s first comprehensive regulation of artificial intelligence. As of August 2024, the AI Act is in force, and its implementation is rolling out in stages. For companies using or developing AI systems, this regulation brings significant obligations – and now is the time to act.

What Has Happened So Far?

August 1, 2024: The EU AI Act officially entered into force. This marked the start of a phased implementation timeline.

February 2, 2025: The first obligations became legally binding, in particular:

  • The ban on AI systems with unacceptable risk, including:
    • Social scoring systems
    • Real-time remote biometric identification in publicly accessible spaces (with narrow exceptions for law enforcement)
    • Emotion recognition in workplaces and schools
    • Manipulative or exploitative AI (especially affecting vulnerable groups)
  • AI literacy requirements, ensuring that staff who work with AI systems have an appropriate level of competence and awareness.

Where Are We Now?

As of May 2025, we are in the early implementation phase, with the next key milestone approaching in August 2025. During this phase:

  • Companies must evaluate their AI systems to determine their risk classification (unacceptable, high-risk, limited-risk, or minimal-risk).
  • Providers of General Purpose AI (GPAI) models – such as foundation models or large language models – must prepare for transparency and risk management obligations by August 2025.
  • The European AI Office has been set up within the European Commission, and national supervisory authorities are being designated to monitor compliance and provide guidance.

What’s Coming Next?

August 2, 2025:

  • Transparency obligations for GPAI providers take effect (a documentation sketch follows this list). They must:
    • Provide technical documentation on the capabilities and limitations of their models.
    • Publish a sufficiently detailed summary of the content used for training, with particular attention to copyright.
    • Conduct model evaluations and risk assessments, with stricter duties for models posing systemic risk.
  • EU member states must have their national AI regulators in place.
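
In practice, much of this transparency work comes down to maintaining structured, machine-readable model documentation. The sketch below shows one way such a record could look in Python; the ModelDocumentation class and its field names are illustrative assumptions for this post, not a schema prescribed by the Act.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json


@dataclass
class ModelDocumentation:
    """Illustrative GPAI transparency record (field names are assumptions)."""
    model_name: str
    provider: str
    release_date: date
    capabilities: list[str]            # what the model is intended to do
    limitations: list[str]             # known failure modes and constraints
    training_data_summary: str         # public summary of training content
    copyright_policy_url: str          # how EU copyright reservations are honored
    evaluation_results: dict[str, float] = field(default_factory=dict)


doc = ModelDocumentation(
    model_name="example-llm-7b",
    provider="Example AI GmbH",
    release_date=date(2025, 8, 1),
    capabilities=["text generation", "summarization"],
    limitations=["may produce inaccurate statements", "weaker non-English coverage"],
    training_data_summary="Public web text and licensed corpora; see published summary.",
    copyright_policy_url="https://example.com/copyright-policy",
    evaluation_results={"toxicity_benchmark": 0.02},
)

# Serialize the record so it can be published or shared with authorities.
print(json.dumps(asdict(doc), default=str, indent=2))
```

Keeping this information as structured data rather than prose makes it straightforward to regenerate public summaries whenever a model is updated.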

August 2, 2026:

  • Full obligations for high-risk AI systems apply, including:
    • Rigorous conformity assessments
    • Quality management systems
    • Data governance, monitoring, and human oversight processes (a logging sketch follows this list)
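
The monitoring and traceability duties are the most directly automatable of these. The Act requires high-risk systems to log events automatically but does not prescribe a format, so the following is only a minimal sketch: the audited decorator, the JSON log layout, and the credit-scoring example are all hypothetical.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone
from functools import wraps

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")


def audited(model_name: str, model_version: str):
    """Wrap a prediction function so every call leaves an audit record."""
    def decorator(predict):
        @wraps(predict)
        def wrapper(payload: dict):
            result = predict(payload)
            audit_log.info(json.dumps({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "model": model_name,
                "version": model_version,
                # Hash instead of storing raw input, to limit personal data in logs.
                "input_sha256": hashlib.sha256(
                    json.dumps(payload, sort_keys=True).encode()
                ).hexdigest(),
                "output": result,
            }))
            return result
        return wrapper
    return decorator


@audited(model_name="credit-scorer", model_version="1.4.2")
def score_applicant(payload: dict) -> float:
    return 0.5  # placeholder model logic for the sketch


score_applicant({"income": 52000, "employment_years": 4})
```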

August 2, 2027:

  • High-risk AI systems embedded in regulated products (e.g., in medical devices or industrial equipment) must comply with the AI Act in conjunction with sectoral laws.

What Companies Need to Do Now

To avoid disruption and ensure compliance, businesses using or developing AI systems should act proactively:

  1. Classify Your AI Systems
    • Map all AI use cases across your organization (a minimal inventory sketch follows this list).
    • Determine their risk category under the AI Act:
      • Unacceptable (prohibited)
      • High-risk (strict obligations)
      • Limited-risk (transparency duties)
      • Minimal-risk (voluntary codes of conduct)
  2. Build Governance Structures
    • Assign clear responsibilities for AI compliance, risk management, and oversight.
    • Set up cross-functional teams (e.g., Legal, IT, Data Science, Compliance).
  3. Prepare for Documentation
    • Create or update your technical documentation, data sheets, and model logs.
    • If you’re a provider or deployer of high-risk systems, ensure auditability, explainability, and traceability of models.
  4. Conduct AI Risk Assessments
    • Evaluate potential harm to health, safety, fundamental rights, and societal well-being.
    • Develop mitigation strategies and incident handling procedures.
  5. Train Your People
    • Ensure staff understand AI risks and their responsibilities under the AI Act.
    • Consider role-based training for developers, product managers, legal, and compliance teams.
  6. Engage with Regulators
    • Monitor developments from the European AI Office and your national authority.
    • Participate in consultations, industry groups, or pilot programs.
  7. Plan for Audits and Enforcement
    • Set up internal audit capabilities for AI systems.
    • Regularly review and update processes, especially as the EU Commission issues delegated acts or clarifications.
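
To make step 1 concrete, a company-wide AI inventory can record each use case together with its risk tier and the reasoning behind the classification. This is a minimal sketch: the RiskTier and AIUseCase names are invented for illustration, and real classification requires legal analysis against the Act's annexes, not a lookup table.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The AI Act's four risk tiers."""
    UNACCEPTABLE = "prohibited"
    HIGH = "strict obligations"
    LIMITED = "transparency duties"
    MINIMAL = "voluntary codes of conduct"


@dataclass
class AIUseCase:
    """One entry in a company-wide AI inventory (illustrative fields)."""
    name: str
    owner: str        # team responsible for the system
    purpose: str
    tier: RiskTier
    rationale: str    # why this tier was assigned; keep it for audits


inventory = [
    AIUseCase(
        name="CV screening assistant",
        owner="HR",
        purpose="Rank incoming job applications",
        tier=RiskTier.HIGH,      # employment uses are listed as high-risk
        rationale="Annex III covers employment and worker-management systems.",
    ),
    AIUseCase(
        name="Customer support chatbot",
        owner="Customer Service",
        purpose="Answer product questions",
        tier=RiskTier.LIMITED,   # users must be told they are talking to AI
        rationale="Interacts with natural persons; transparency duty applies.",
    ),
]

# Surface the highest-risk systems first so compliance work is prioritized.
for uc in sorted(inventory, key=lambda u: list(RiskTier).index(u.tier)):
    print(f"{uc.tier.name:12} {uc.name} ({uc.owner})")
```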

Final Thoughts

The EU AI Act is more than just a legal obligation – it’s a framework that encourages trustworthy, human-centric AI development. Businesses that move early to align their systems, governance, and culture with the regulation will not only reduce risk, but also gain competitive advantage in the rapidly evolving AI landscape.

Preparation is no longer optional – it’s a strategic necessity. What are your thoughts and approaches to getting ready for the EU AI Act?
