AI Policy Deadline: What Employees Need to Know

The deadline for employers to implement an Artificial Intelligence (AI) policy and training in the workplace is fast approaching. But what does this mean for employees?

The EU’s AI Act – the first legal framework on AI – requires Belgian employers to introduce several measures by 2 February 2025 to improve employees’ knowledge and use of AI technologies.

“The regulation includes several elements but the core aim is to ensure humans are in the loop,” a legal expert stated. “Employees should understand that AI is useful but human influence and common sense are still needed.”

All Belgian organisations must have an AI policy in place, establishing guidelines for AI use within the organisation by 2 February. This includes which applications can be used, by whom, and how.

The act also bans various AI applications that violate European fundamental norms and values. This includes AI systems for “social scoring” (judging people based on their social behaviour) and AI systems for emotion recognition in the workplace. Companies must identify and ban these by 2 February. Employers developing or deploying prohibited AI risk fines of up to €35 million, or 7% of the total global annual turnover if the offender is a company.

What Does This Mean for Employees?

Employers must also take measures to ensure staff have “an adequate level of AI literacy”. This includes all employees who work with AI systems, from providers to end users. “It covers more than just IT professionals,” the expert explained.

AI literacy refers to the skills, knowledge and understanding needed to deploy AI systems and be aware of the opportunities and risks. “This doesn’t mean every employee needs to be an AI expert,” the expert clarified. But employees should have the knowledge and skills to make informed decisions, to know what works and to recognise potential risks and harms.

More broadly, the law should serve to create awareness. “Conversations should take place about what AI will mean for employees. The aim is to ensure people are informed so that they can learn to deal with it.” This debate should flow over into social consultations between management and unions. “I can imagine that some people are worried by this new AI act and the increasing use of AI.”

The AI Act does not provide details on how companies should train staff. It is up to employers to identify what staff need based on their existing knowledge and skills, allowing training to be organised on a case-by-case basis. Plans can also detail how to ensure people remain AI-literate. “After all, AI literacy is not static. If an employee changes positions, the tools they use change or they start using new tools.”

Retroactive Sanctioning

Despite the approaching deadline, few employers are aware that the regulations apply to them from this date. While the law was introduced by the EU, it is up to the Member States to ensure it is implemented and to oversee sanctioning.

“I imagine that a special body will be created, similar to a Privacy Commission, focusing on this task,” the expert noted.

The national enforcement framework, which will oversee the fining of non-compliant companies, is not due until 2 August 2025. “But we do not recommend waiting until then, as fines can apply retroactively.”
