First Provisions of EU AI Act Now Apply
The first provisions of the EU AI Act took effect on February 2, 2025, marking a significant milestone in the governance of artificial intelligence in Europe. Companies that provide or deploy AI systems are now required to ensure a sufficient level of AI literacy within their operations, and certain AI practices are now prohibited.
Understanding AI Literacy
According to the Act, AI literacy encompasses the skills, knowledge, and understanding necessary for the various stakeholders involved in AI systems. These include providers (those placing AI systems on the market), deployers (those using these systems), and affected persons. The goal is to enable these groups to make informed decisions about the deployment of AI technology and to be aware of both its opportunities and its risks.
Article 4 of the Act requires providers and deployers to take measures to ensure, to the best of their ability, that their staff and others involved in the operation and use of AI systems on their behalf possess a sufficient level of AI literacy. Factors such as technical knowledge, experience, education and training, and the context in which the AI systems are used must be taken into account.
Implementing AI Literacy in Organizations
Organizations are left to interpret what the requirements of AI literacy mean in practical terms. The Autoriteit Persoonsgegevens (AP), the Dutch Data Protection Authority, has issued guidance to help organizations navigate these requirements. The guidance emphasizes that there is no universal solution for achieving adequate AI literacy; instead, organizations should tailor their strategies based on the specifics of their workforce and the degree of risk associated with the AI systems in use.
To facilitate AI literacy, organizations are encouraged to develop a multi-year plan that follows a four-step process:
- Step 1 – Identify: This step involves creating an inventory of all AI systems used within the organization and documenting the roles of personnel alongside their AI knowledge and skills.
- Step 2 – Determine Goals: Organizations should establish AI literacy goals based on risk levels. Not every employee requires the same depth of knowledge, but those involved with specific AI systems should be adequately informed about their risks and functionalities.
- Step 3 – Execute: After setting goals, organizations must implement appropriate strategies and actions. AI literacy should be prioritized at all organizational levels, with responsibilities potentially defined in roles such as an AI officer.
- Step 4 – Evaluate: Regular analysis of whether AI literacy objectives are being met is crucial. This could involve annual surveys to assess the effectiveness of the measures implemented.
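For organizations that want to operationalize this, the four steps above could be tracked in a simple internal registry. The following is a minimal sketch only, using hypothetical names and risk tiers loosely inspired by the Act's risk-based approach; it is not a prescribed compliance tool.

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    """Step 1: one entry in the inventory of AI systems in use."""
    name: str
    risk: str             # hypothetical tier: "minimal", "limited", or "high"
    operators: list[str]  # roles that work with this system

@dataclass
class LiteracyPlan:
    systems: list[AISystem] = field(default_factory=list)
    # Step 1 (continued): documented skill level per role (0 = none, 3 = expert)
    role_skills: dict[str, int] = field(default_factory=dict)
    # Step 2: target literacy level per risk tier; higher risk demands more depth
    goals: dict[str, int] = field(
        default_factory=lambda: {"minimal": 1, "limited": 2, "high": 3}
    )

    def gaps(self) -> list[tuple[str, str]]:
        """Step 4: report (role, system) pairs where the documented skill
        level falls short of the goal for that system's risk tier."""
        shortfalls = []
        for system in self.systems:
            target = self.goals[system.risk]
            for role in system.operators:
                if self.role_skills.get(role, 0) < target:
                    shortfalls.append((role, system.name))
        return shortfalls

plan = LiteracyPlan(
    systems=[
        AISystem("cv-screening", "high", ["recruiter"]),
        AISystem("support-chatbot", "limited", ["support-agent"]),
    ],
    role_skills={"recruiter": 1, "support-agent": 2},
)
print(plan.gaps())  # the recruiter needs training for the high-risk system
```

Step 3 (execute) would then consist of the training measures that close the reported gaps, with the registry re-run periodically, for instance alongside the annual survey mentioned above.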
Continuous Process of AI Literacy
The guidance clarifies that AI literacy is not a final objective but a continuous process. Organizations must routinely evaluate and update their literacy measures, especially given the rapid advancements in technology that introduce new opportunities and risks.
It remains to be seen how regulatory bodies in other EU member states will provide guidance on compliance with these literacy obligations, and organizations should stay informed in their respective jurisdictions.
Future Implications of the AI Act
While the initial provisions of the AI Act are now in effect, the majority of its obligations will not apply until August 2, 2026, with some provisions commencing even later. The rules concerning general-purpose AI models, however, will apply from August 2, 2025, subject to certain exceptions. In addition, various codes of practice and templates are expected to be finalized in the coming months.
Given the phased implementation timeline and the practical compliance steps it entails, companies should continue to monitor developments as the EU AI Act and its accompanying guidance evolve.