AI Policy Deadline: What Employees Need to Know

The deadline for employers to implement an Artificial Intelligence (AI) policy and training in the workplace is fast approaching. But what does this mean for employees?

The EU’s AI Act – the first legal framework on AI – requires that Belgian employers introduce several measures by 2 February 2025 to improve knowledge of and use of AI technologies.

“The regulation includes several elements but the core aim is to ensure humans are in the loop,” a legal expert stated. “Employees should understand that AI is useful but human influence and common sense are still needed.”

By 2 February, all Belgian organisations must have an AI policy in place establishing guidelines for AI use within the organisation: which applications may be used, by whom, and how.

The act also bans various AI applications that violate European fundamental norms and values. This includes AI systems for “social scoring” (judging people based on their social behaviour) and AI systems for emotion recognition in the workplace. Companies must identify and eliminate these by 2 February. Employers developing or deploying prohibited AI risk fines of up to €35 million or, if the offender is a company, 7% of its total global annual turnover, whichever is higher.

What Does This Mean for Employees?

Employers must also take measures to ensure staff have “an adequate level of AI literacy”. This obligation covers all employees who work with AI systems, from providers to end users. “It covers more than just IT professionals,” the expert explained.

AI literacy refers to the skills, knowledge and understanding needed to deploy AI systems and be aware of the opportunities and risks. “This doesn’t mean every employee needs to be an AI expert,” the expert clarified. But employees should have the knowledge and skills to make informed decisions, to know what works and to recognise potential risks and harms.

More broadly, the law should serve to create awareness. “Conversations should take place about what AI will mean for employees. The aim is to ensure people are informed so that they can learn to deal with it.” This debate should carry over into social consultations between management and unions. “I can imagine that some people are worried by this new AI act and the increasing use of AI.”

The AI Act does not provide details on how companies should train staff. It is up to employers to identify what staff need based on their existing knowledge and skills, allowing training to be organised on a case-by-case basis. Plans can also detail how to ensure people remain AI-literate. “After all, AI literacy is not static. If an employee changes positions, the tools they use change or they start using new tools.”

Retroactive Sanctioning

Despite the approaching deadline, few employers are aware that the regulations apply to all employers from this date. While the law is introduced at EU level, it is up to the Member States to ensure it is implemented and to oversee sanctioning.

“I imagine that a special body will be created, similar to a Privacy Commission, focusing on this task,” the expert noted.

The national enforcement framework, which will oversee the fining of non-compliant companies, is not due until 2 August 2025. “But we do not recommend waiting until then, as fines can be applied retroactively.”
