Decoding the AI Act: A Practical Guide to Compliance and Risk Management

Artificial intelligence is rapidly transforming how businesses operate, demanding a new understanding of legal and ethical considerations. Navigating this complex landscape requires careful attention, particularly with the advent of comprehensive regulations designed to govern the development, deployment, and use of AI. This analysis delves into the core principles of a landmark piece of legislation and examines how organizations can effectively adapt their practices to ensure ongoing adherence. Focusing on practical strategies and key obligations, we outline a path for building robust internal controls and maintaining responsible AI systems.

What are the key objectives and core principles underpinning the AI Act?

The EU AI Act, proposed in April 2021, entered into force on August 1, 2024 and establishes a uniform legal framework for AI systems across the EU. It balances innovation with the need to protect fundamental rights and personal data.

Key Objectives:

  • Safeguarding Fundamental Rights: Ensures AI systems respect fundamental EU rights, with a focus on ethical considerations.
  • Promoting Innovation: Encourages the development and deployment of trustworthy AI technologies.
  • Fostering Trust: Builds public confidence in AI systems.

Core Principles:

The AI Act adopts a risk-based approach, categorizing AI systems based on their potential risk level. This approach dictates the level of regulatory oversight and compliance requirements.

  • Risk Categorization:
    • Unacceptable Risk: AI systems violating fundamental EU rights are prohibited.
    • High Risk: Systems impacting health, safety, or fundamental rights require conformity assessments and ongoing monitoring.
    • Limited Risk: Transparency requirements apply, such as disclosing AI interaction.
    • Minimal Risk: No specific requirements.
  • General Purpose AI (GPAI): An additional category for AI models trained on large datasets, capable of performing various tasks. GPAI models considered to have “systemic risk” face increased obligations.

Furthermore, the AI Act recognizes different roles within the AI value chain, including providers, deployers, distributors, importers, and authorized representatives. Each role has distinct responsibilities and compliance requirements.

Internal auditors should understand the risk category of each AI system and their organization’s role in its value chain to ensure compliance with the AI Act. These roles can evolve, impacting compliance obligations.

How should organizations approach achieving and maintaining compliance with the varying obligations defined by the AI Act?

The AI Act introduces a risk-based approach, categorizing AI systems into unacceptable, high, limited, and minimal risk levels, along with a category for General Purpose AI (GPAI). Compliance efforts must be tailored to these risk categories and the specific role an organization plays in the AI value chain (Provider, Deployer, etc.). Here’s a breakdown for legal-tech professionals, compliance officers, and policy analysts:

Understanding Your Role and Risk Profile

First, organizations need to identify which role they occupy for each AI system they utilize and determine the applicable risk category. Be careful: a deployer becomes a provider when it substantially modifies an AI system or places it on the market under its own name or trademark. Requirements also vary depending on whether the organization is established inside or outside the EU and whether it acts as a provider, deployer, distributor, importer, or authorized representative.

Establishing Foundational Compliance Measures

To prepare, internal auditors can approach AI Act compliance like any other compliance project, with a particular focus on the evaluation, audit, and governance of distinct AI processes. Make note of the upcoming deadlines (a simple tracker sketch follows the list):

  • February 2, 2025: Focus on AI literacy training for staff, create an inventory of AI systems, classify systems by risk, and cease use/remove unacceptable-risk AI.
  • August 2, 2025: Address GPAI compliance, including understanding relevant regulatory bodies and establishing transparency mechanisms. (Note: compliance is pushed back to 2027 for pre-existing GPAI systems).
  • August 2, 2026: Implement risk assessment, management, and accountability systems for high-risk AI, and transparency policies for limited-risk systems.
  • August 2, 2027: Apply GPAI measures to all systems and ensure integrated AI components meet high-risk AI obligations.
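To keep track of which of these obligations are already in force, compliance teams can maintain a simple internal tracker. The sketch below is a minimal, illustrative Python example: the milestone descriptions are paraphrased from the list above, and the date handling reflects one assumed way a team might implement such a tracker, not anything prescribed by the Act.

```python
from datetime import date

# Illustrative milestone dates and paraphrased obligations from the rollout
# described above; verify details against the Act itself.
AI_ACT_MILESTONES = {
    date(2025, 2, 2): "AI literacy, AI inventory, risk classification, removal of unacceptable-risk AI",
    date(2025, 8, 2): "GPAI obligations: relevant regulatory bodies identified, transparency mechanisms in place",
    date(2026, 8, 2): "High-risk risk management and accountability; limited-risk transparency policies",
    date(2027, 8, 2): "GPAI measures for pre-existing models; high-risk rules for integrated AI components",
}

def obligations_in_force(today: date) -> list[str]:
    """Return the milestone obligations whose deadlines have already passed."""
    return [text for deadline, text in sorted(AI_ACT_MILESTONES.items()) if deadline <= today]

for item in obligations_in_force(date(2026, 1, 1)):
    print("In force:", item)
```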

Key Obligations and Requirements

Several obligations apply based on AI model type:

  • AI Literacy: Ensure sufficient AI literacy among those dealing with AI systems.
  • AI Registry: Build an internal registry of all AI systems the organization uses or places on the market; high-risk AI systems must also be registered in the EU's central database for high-risk AI systems. A minimal registry sketch follows this list.
  • AI Risk Assessment: Assess every system in the registry against the risk classification method prescribed by the AI Act.
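A minimal sketch of what such an internal registry might look like is shown below, assuming a simple in-memory structure; the RiskCategory and ValueChainRole enums and the record fields are illustrative choices, not formats prescribed by the AI Act.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    # The four risk tiers plus GPAI, following the Act's risk-based approach.
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"
    GPAI = "general_purpose"

class ValueChainRole(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    DISTRIBUTOR = "distributor"
    IMPORTER = "importer"
    AUTHORIZED_REPRESENTATIVE = "authorized_representative"

@dataclass
class AISystemRecord:
    """One entry in the organization's internal AI registry (illustrative fields)."""
    name: str
    purpose: str
    role: ValueChainRole
    risk_category: RiskCategory
    registered_in_eu_database: bool = False  # relevant for high-risk systems
    notes: list[str] = field(default_factory=list)

# Example entry: an assumed CV-screening tool used in recruitment.
registry = [
    AISystemRecord(
        name="cv-screening-tool",
        purpose="Rank job applications",
        role=ValueChainRole.DEPLOYER,
        risk_category=RiskCategory.HIGH,
    )
]

# Flag high-risk systems not yet registered in the EU database.
to_register = [r for r in registry
               if r.risk_category is RiskCategory.HIGH and not r.registered_in_eu_database]
print([r.name for r in to_register])
```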

For High-Risk AI systems, organizations must implement robust risk management systems, focus on data and data governance, maintain technical documentation, and implement record-keeping and transparency measures. Human oversight must be integrated into the design, and accuracy, robustness, and cybersecurity measures must meet the required standards to ensure consistent AI outputs.
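Record-keeping in particular lends itself to automation. The sketch below assumes a high-risk system that appends one machine-readable record per inference so that operation can be traced and audited later; the log schema and field names are assumptions for illustration, not a format mandated by the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative audit trail for a high-risk AI system: each prediction is recorded
# with a timestamp, model version, and input reference so it can be traced later.
logger = logging.getLogger("ai_audit_trail")
logging.basicConfig(filename="ai_audit_trail.jsonl", level=logging.INFO, format="%(message)s")

def log_inference(model_version: str, input_ref: str, output_summary: str,
                  human_reviewed: bool) -> None:
    """Append one machine-readable record per inference (assumed schema)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,            # reference to the input, not the raw data
        "output_summary": output_summary,
        "human_reviewed": human_reviewed,  # supports the human-oversight requirement
    }
    logger.info(json.dumps(record))

log_inference("v1.3.0", "application-2024-0042", "score=0.81, shortlisted", human_reviewed=True)
```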

Providers established outside the EU must appoint an authorised representative within the EU for high-risk AI systems. A conformity assessment must also be carried out and a CE marking of conformity affixed.

Limited Risk AI systems incur transparency obligations, such as informing users that they are interacting with an AI. Generated outputs (synthetic audio, image, video, or text content) must be marked in a machine-readable format disclosing that the content was artificially generated or manipulated.
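For text outputs, one simple way to approximate a machine-readable disclosure is to wrap the generated content in a small metadata envelope, as sketched below; the field names are assumptions chosen for illustration, and established provenance standards (such as C2PA for media content) may be more appropriate in practice.

```python
import json
from datetime import datetime, timezone

def wrap_generated_text(text: str, model_name: str) -> str:
    """Wrap AI-generated text in an illustrative machine-readable envelope.

    The schema here is an assumption: the AI Act requires machine-readable
    marking of synthetic content but does not prescribe this particular format.
    """
    envelope = {
        "ai_generated": True,
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "content": text,
    }
    return json.dumps(envelope, ensure_ascii=False)

print(wrap_generated_text("Quarterly summary of claims data.", model_name="internal-llm-v2"))
```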

As with limited-risk systems, providers of General Purpose AI models face transparency obligations, including informing concerned natural persons that they are interacting with an AI system. Providers must also determine whether their GPAI models qualify as having systemic risk, which triggers additional obligations.

Internal Audit’s Role

Internal audit departments should develop frameworks to assess AI usage within the organization, offer recommendations that deliver benefits and mitigate risks, and lead by example by assessing their own use of AI. Audit skills are typically built through internal and external training and knowledge sharing, so departments should plan dedicated training activities to ensure they are adequately skilled to provide assurance on AI.

Considering broader EU legislation

AI systems may also fall under other EU legislation, such as DORA and the CSRD/CSDDD, particularly in relation to third-party suppliers, environmental impact, and cybersecurity resilience. Consider how AI systems may bring the business within the scope of this wider set of regulations.

What major requirements are associated with the utilization and auditing of AI systems within organizations?

The EU AI Act introduces a tiered framework of requirements for organizations utilizing AI systems, with the stringency dependent on the risk level associated with the AI in question. Internal auditors will therefore be integral to ensuring their companies' compliance with the new regulations.

Risk-Based Obligations:

Here’s what the different risk levels entail:

  • Unacceptable Risk: AI systems deemed to violate fundamental EU rights and values are prohibited. These include AI used to manipulate individuals, cause significant harm, or create discriminatory outcomes.
  • High Risk: AI systems impacting health, safety, or fundamental rights face stringent requirements. Providers must establish risk management systems, ensure data quality and governance, maintain technical documentation, and provide transparency to deployers. Human oversight and cybersecurity resilience are also essential.
  • Limited Risk: Chatbots or AI-content generators (text or images) fall into this category. Transparency requirements are key here, informing users they are interacting with an AI system.
  • Minimal Risk: Examples include AI-enabled video games and spam filters; no specific requirements apply.

Beyond these categories, General Purpose AI (GPAI) models that are foundational for other systems face distinct transparency requirements. If deemed to have “systemic risk” (based on computational power or impact), GPAI models are subject to additional scrutiny, including model evaluations and cybersecurity measures.
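The compute-based presumption is concrete enough to check programmatically: the Act presumes systemic risk when the cumulative compute used to train a GPAI model exceeds 10^25 floating-point operations, a threshold the Commission can adjust over time. The sketch below is a trivial illustration of that check, not a complete classification procedure.

```python
# Presumption of systemic risk for GPAI models: cumulative training compute
# above 10^25 FLOPs (treat the constant as configurable, since it may be updated).
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if the model meets the training-compute presumption."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Example: a model trained with roughly 5e25 FLOPs would be presumed systemic risk.
print(presumed_systemic_risk(5e25))  # True
```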

Role-Based Obligations in the AI Value Chain:

Obligations also vary based on an organization’s role:

  • Providers: Those developing and placing AI systems on the EU market bear the most responsibility; they must ensure compliance with the AI Act’s most stringent requirements.
  • Deployers: Organizations using AI systems are responsible for proper usage and adherence to provider guidelines, including ensuring human oversight and maintaining input data quality.

Key Action Items for Internal Auditors:

To ensure comprehensive compliance with the AI Act, there are critical items for internal audit functions to note:

  • AI Literacy: Ensure staff possesses sufficient AI literacy to understand the operation and use of AI systems appropriately.
  • AI Inventory: Establish and maintain a comprehensive AI registry of systems used across the organization, including subsidiaries.
  • Risk Classification: Classify all AI systems according to the AI Act’s defined risk categories.
  • Impact Assessments: Perform fundamental rights impact assessments for certain high-risk AI systems, outlining potential harms and mitigation strategies.
  • Post-Market Monitoring: Implement a plan for continuous data collection, documentation, and analysis of AI system performance (a minimal monitoring sketch follows this list).
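As a worked example of the post-market monitoring item above, a deployer might periodically aggregate performance indicators and flag degradations for follow-up. The metric names and thresholds in the sketch below are assumptions chosen for illustration, not values set by the Act.

```python
from dataclasses import dataclass

@dataclass
class MonitoringSample:
    period: str        # e.g. "2026-Q1"
    accuracy: float    # observed accuracy on a labelled sample
    complaints: int    # user complaints attributed to the system

# Illustrative thresholds an organization might set for itself.
MIN_ACCURACY = 0.90
MAX_COMPLAINTS = 5

def flag_for_review(samples: list[MonitoringSample]) -> list[str]:
    """Return the periods where performance degraded enough to warrant review."""
    return [s.period for s in samples
            if s.accuracy < MIN_ACCURACY or s.complaints > MAX_COMPLAINTS]

history = [
    MonitoringSample("2026-Q1", accuracy=0.93, complaints=1),
    MonitoringSample("2026-Q2", accuracy=0.88, complaints=7),
]
print(flag_for_review(history))  # ['2026-Q2']
```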

Ultimately, integrating the appropriate procedures is key to navigating risk and ensuring your organisation’s compliance.

Navigating the intricacies of the AI Act demands a proactive and nuanced approach. Success hinges on understanding your specific role within the AI ecosystem, meticulously assessing the risk profile of each AI system used, and embracing comprehensive compliance measures. By prioritizing AI literacy, establishing a robust system inventory, and conducting thorough risk assessments, organizations can lay the groundwork for responsible AI adoption. Beyond these foundational steps, continuous post-market monitoring and adapting to evolving legal interpretations will be paramount. Ultimately, this isn’t just about ticking boxes; it’s about fostering a culture of responsible innovation, where the power of AI is harnessed ethically and in accordance with fundamental rights.
