AI Literacy: Building Effective Programs
The EU AI Act introduced an AI literacy obligation that became effective on February 2, 2025. The obligation applies to providers and deployers of AI systems with any connection to the European Union.
While the AI Act does not specify what compliance entails, the Commission’s AI Office has provided guidance through a series of Questions & Answers, outlining expectations regarding AI literacy.
The Obligation
Providers and deployers of AI systems are required to “take measures to ensure, to their best extent, a sufficient level of AI literacy” for their staff and individuals operating AI systems on their behalf (Article 4). This obligation emphasizes equipping relevant personnel with “the necessary notions” to make informed decisions regarding AI systems.
The requirement encompasses the need for informed deployment, awareness of AI’s opportunities and risks, and understanding potential harms associated with AI.
Who Needs to Be AI Literate?
The obligation extends to a wide range of individuals, including providers, deployers, and affected persons. It also includes any staff or contractors involved in the operation and use of AI systems, highlighting the need for comprehensive training across all levels of the organization.
What Constitutes a “Sufficient” Level of AI Literacy?
The Commission has refrained from imposing strict requirements, leaving the interpretation of “sufficient” to organizations. The expected depth of training scales with risk: entities using high-risk AI systems may need to implement “additional measures” to ensure that their employees are well informed about the associated risks. This is particularly important for those responsible for ensuring human oversight of AI operations.
Even personnel who only use generative AI must receive training on relevant risks, such as hallucination—a phenomenon where AI generates plausible but incorrect or nonsensical information.
Organizations with employees possessing deep technical knowledge should still evaluate their understanding of risks, legal considerations, and ethical aspects related to AI.
The Importance of Human Oversight
There is no exemption for “human-in-the-loop” scenarios; in fact, AI literacy is even more critical for individuals in these roles. Genuine oversight requires a thorough understanding of the AI systems being supervised.
Consequences of Non-Compliance
Enforcement of the AI literacy obligation will be managed by market surveillance authorities, with powers set to come into effect on August 2, 2026. The AI Act does not specify fines for non-compliance with the AI literacy obligation, but there are indications that member states may impose specific penalties through national legislation. The Commission also notes the potential for private enforcement, allowing individuals to sue for damages, although the Act does not establish a right to compensation.
Shaping an AI Literacy Program
To develop an effective AI literacy program, organizations should consider the following steps:
- Identify Stakeholders: Determine who is involved in AI usage, including governance members, developers, service providers, clients, and affected individuals.
- Assess Knowledge Gaps: Evaluate what each group already knows and what they need to learn. For instance, governance committee members may require deeper insights into AI functionality, while data scientists might focus on legal and ethical issues.
- Choose Appropriate Mediums: Select suitable formats for training, such as workshops for governance members and e-learning modules for occasional generative AI users.
- Schedule Training: Plan when training sessions will occur, ensuring alignment with the obligation timeline.
- Track Attendance: Implement measures to monitor participation and ensure high completion rates.
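The stakeholder-mapping and attendance-tracking steps above lend themselves to simple internal tooling. The sketch below is a minimal, hypothetical illustration (all class and function names are invented for this example, not part of any standard or the Commission's guidance) of how an organization might model stakeholder groups, their required training modules, and completion rates:

```python
from dataclasses import dataclass, field

@dataclass
class TrainingModule:
    name: str
    medium: str  # e.g. "workshop" for governance members, "e-learning" for casual users
    completed_by: set = field(default_factory=set)  # member IDs who finished it

@dataclass
class StakeholderGroup:
    name: str
    members: set            # member IDs in this group
    required_modules: list  # TrainingModule objects this group must complete

def completion_rate(group: StakeholderGroup, module: TrainingModule) -> float:
    """Fraction of the group's members who have completed the module."""
    if not group.members:
        return 0.0
    return len(group.members & module.completed_by) / len(group.members)

def outstanding(group: StakeholderGroup) -> set:
    """Members with at least one required module still incomplete."""
    return {m for m in group.members
            if any(m not in mod.completed_by for mod in group.required_modules)}
```

For example, a generative-AI risk module assigned to a governance committee of two, with one completion recorded, would report a 50% completion rate and flag the remaining member for follow-up. A real program would of course sit in an HR or learning-management system; the point is only that the steps in the list map naturally onto trackable data.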
Although the Commission’s guidance specifically addresses the AI literacy obligation under the EU AI Act, the importance of AI literacy extends to all organizations utilizing AI. Establishing a robust AI governance program is essential for managing the legal and organizational risks associated with AI deployment.