Understanding the 2024 EU AI Act: Key Implications and Compliance

The EU AI Act is a significant regulatory framework aimed at harmonizing the development, deployment, and use of artificial intelligence (AI) within the European Union. This comprehensive regulation, which entered into force on August 1, 2024, with its obligations applying in stages over the following years, seeks to ensure safety, protect fundamental rights, and promote innovation while preventing market fragmentation.

Scope of the AI Act

The AI Act covers a broad range of AI applications across various sectors, including healthcare, finance, insurance, transportation, and education. It applies to providers and deployers of AI systems within the EU, as well as those outside the EU whose AI systems impact the EU market. Exceptions include AI systems used for military, defense, or national security purposes, and those developed solely for scientific research.

An “AI system” is defined as a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment. For explicit or implicit objectives, it infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

AI Literacy

The Act emphasizes the importance of AI literacy for providers and deployers. They must take measures to ensure that their staff, and other persons operating AI systems on their behalf, have a sufficient level of skills and understanding to engage with AI technologies responsibly. This obligation includes ongoing training and education tailored to specific sectors and use cases.

Risk-Based Approach

To introduce a proportionate and effective set of binding rules for AI systems, the AI Act adopts a pre-defined risk-based approach. This approach tailors the type and content of the rules based on the intensity and scope of the risks that AI systems can generate. The Act prohibits certain unacceptable AI practices while setting requirements for high-risk AI systems and general-purpose AI models.
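This tiered structure can be summarized in code. The following is an illustrative sketch, not the Act's legal classification procedure; the tier names are our shorthand for the categories the Act describes:

```python
from enum import Enum, auto

class RiskTier(Enum):
    """Simplified risk tiers under the Act's risk-based approach."""
    UNACCEPTABLE = auto()  # prohibited practices, banned outright
    HIGH = auto()          # high-risk systems, strict requirements
    LIMITED = auto()       # transparency obligations only
    MINIMAL = auto()       # no additional obligations

def may_be_placed_on_market(tier: RiskTier) -> bool:
    """Unacceptable-risk practices are banned; every other tier may be
    deployed subject to the obligations attached to that tier."""
    return tier is not RiskTier.UNACCEPTABLE
```

The key design point of the Act mirrored here is that only one tier is an outright ban; the others scale obligations to risk rather than prohibiting deployment.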

Prohibited AI Practices

The AI Act prohibits certain AI practices deemed to pose unacceptable risks to fundamental rights, safety, and public interests. These include:

  • AI systems using subliminal techniques to manipulate behavior;
  • Exploiting vulnerabilities of specific groups, such as children or individuals with disabilities;
  • Social scoring based on personal characteristics leading to discriminatory outcomes;
  • Predicting criminal behavior based solely on profiling;
  • Untargeted scraping for facial recognition databases;
  • Emotion recognition in workplaces and educational institutions, except for medical or safety reasons;
  • Biometric categorization to infer sensitive attributes, except for lawful law enforcement purposes;
  • Real-time remote biometric identification in public spaces for law enforcement.

High-Risk AI Systems

The Act establishes common rules for high-risk AI systems to ensure consistent and high-level protection of public interests related to health, safety, and fundamental rights. Requirements include:

  • Establishing a risk management system;
  • Ensuring data quality and governance;
  • Maintaining technical documentation and logging capabilities;
  • Providing transparent information and human oversight;
  • Ensuring accuracy, robustness, and cybersecurity;
  • Implementing a quality management system.
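The requirements above lend themselves to a self-assessment record. The sketch below is a hypothetical compliance checklist mirroring that list; the field names are illustrative and are not terms defined in the Act:

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskChecklist:
    """Tracks which high-risk system requirements have been satisfied."""
    risk_management_system: bool = False
    data_quality_and_governance: bool = False
    technical_documentation_and_logging: bool = False
    transparency_and_human_oversight: bool = False
    accuracy_robustness_cybersecurity: bool = False
    quality_management_system: bool = False

    def outstanding(self) -> list[str]:
        """Names of requirements not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

check = HighRiskChecklist(risk_management_system=True)
print(check.outstanding())  # five remaining requirements
```

A fresh checklist reports all six requirements as outstanding; marking one satisfied removes it from the list.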

General Purpose AI Models

The Act includes specific rules for general-purpose AI models, particularly those with systemic risks. Providers must notify the EU Commission if their models meet high-impact capability thresholds and prepare comprehensive technical documentation.
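The Act ties the high-impact presumption for general-purpose models to training compute: a model is presumed to have high-impact capabilities when the cumulative compute used for its training exceeds 10^25 floating-point operations. A minimal sketch of that check (the constant name is ours, not the Act's):

```python
# Presumption threshold from the Act: cumulative training compute
# above 1e25 floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True when a model crosses the Act's compute presumption."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(5e25))  # True
print(presumed_systemic_risk(1e24))  # False
```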

Governance, Compliance, and Regulatory Aspects

The AI Act mandates transparency to ensure public trust and prevent misuse of AI technologies. Providers and deployers must inform individuals when they are interacting with AI systems and maintain detailed documentation. High-risk AI systems face stricter transparency requirements, and systems that generate synthetic content must mark their outputs as AI-generated to prevent misinformation.

Penalties

The AI Act imposes significant penalties for non-compliance. For prohibited practices, fines can reach EUR 35 million or 7% of total worldwide annual turnover in the preceding financial year, whichever is higher. Most other infringements can incur fines of up to EUR 15 million or 3% of the offender’s total worldwide annual turnover, whichever is higher.
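The “whichever is higher” rule makes the fine ceiling a simple maximum of a fixed amount and a percentage of turnover. A minimal sketch of that arithmetic (the function and category names are ours):

```python
def max_fine_eur(infringement: str, worldwide_turnover_eur: float) -> float:
    """Upper bound of the administrative fine: the higher of a fixed
    amount and a percentage of total worldwide annual turnover."""
    if infringement == "prohibited_practice":
        return max(35_000_000, 0.07 * worldwide_turnover_eur)
    # most other infringements
    return max(15_000_000, 0.03 * worldwide_turnover_eur)

# A company with EUR 1 billion turnover engaging in a prohibited practice:
print(max_fine_eur("prohibited_practice", 1_000_000_000))  # 70000000.0
```

For small firms the fixed amount dominates; for large firms the turnover percentage does, which is why the ceiling scales with company size.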

Conclusion

The EU AI Act aims to create a trustworthy and human-centric AI ecosystem by balancing innovation with the protection of fundamental rights and public interests. By adhering to the Act’s requirements, businesses can ensure the safe and ethical development and deployment of AI technologies.

More Insights

Transforming Corporate Governance: The Impact of the EU AI Act

This research project investigates how the EU Artificial Intelligence Act is transforming corporate governance and accountability frameworks, compelling companies to reconfigure responsibilities and...

Harnessing AI for Effective Risk Management

Artificial intelligence is becoming essential for the risk function, helping chief risk officers (CROs) to navigate compliance and data governance challenges. With a growing number of organizations...

Senate Reverses Course on AI Regulation Moratorium

In a surprising turn, the U.S. Senate voted overwhelmingly to eliminate a provision that would have imposed a federal moratorium on state regulations of artificial intelligence for the next decade...

Bridging the 83% Compliance Gap in Pharmaceutical AI Security

The pharmaceutical industry is facing a significant compliance gap regarding AI data security, with only 17% of companies implementing automated controls to protect sensitive information. This lack of...

AI-Driven Cybersecurity: Bridging the Accountability Gap

As organizations increasingly adopt AI to drive innovation, they face a dual challenge: while AI enhances cybersecurity measures, it simultaneously facilitates more sophisticated cyberattacks. The...

Thailand’s Comprehensive AI Governance Strategy

Thailand is drafting principles for artificial intelligence (AI) legislation aimed at establishing an AI ecosystem and enhancing user protection from potential risks. The legislation will remove legal...

Texas Implements Groundbreaking AI Regulations in Healthcare

Texas has enacted comprehensive AI governance laws, including the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) and Senate Bill 1188, which establish a framework for responsible AI...

AI Governance: Balancing Innovation and Oversight

Riskonnect has launched its new AI Governance solution, enabling organizations to manage the risks and compliance obligations of AI technologies while fostering innovation. The solution integrates...