Regulating Emotion Recognition: Challenges in the Workplace

EU AI Act – Spotlight on Emotional Recognition Systems in the Workplace

Emotion recognition artificial intelligence, commonly referred to as Emotion AI, has gained significant traction across sectors, particularly in the workplace. The technology draws on a range of biometric data, including facial expressions, keystrokes, tone of voice, and behavioral mannerisms, to identify, infer, and analyze emotions. Emerging from the field of affective computing, which has its roots in the 1990s, Emotion AI combines insights from natural language processing, psychology, and sociology.

Recent advances in computing power and the proliferation of sophisticated sensor technology have enabled these systems to process vast amounts of data. As a result, the Emotion AI market is projected to grow from roughly USD 3 billion in 2024 to USD 7 billion over the next five years.

Emotion AI finds application in multiple contexts, including the detection of potential conflict or harm in public spaces such as train stations and construction sites. It also plays a growing role in the technology and consumer goods sectors, where customer insight and hyper-personalized sales strategies are paramount.

Organizations, including emerging start-ups, are striving to leverage Emotion AI to predict consumer desires. Notably, an Australian start-up is beta testing what it claims to be the world’s first emotion language model, aimed at real-time emotional tracking. Others are developing therapeutic chatbots utilizing Emotion AI to enhance mental health support.

Regulatory Landscape: EU AI Act

With the advent of Emotion AI, regulatory scrutiny has intensified. The EU AI Act, in force since August 1, 2024, classifies Emotion AI applications as either "High Risk" or "Prohibited" depending on the context of use, imposing stringent requirements on the former and banning the latter.

Notably, any Emotion AI that falls within the Prohibited category is effectively banned in the EU. Under Article 5(1)(f) of the EU AI Act, applicable from February 2, 2025, the use of AI systems to infer emotions in workplace and educational settings is prohibited, except for medical or safety reasons.
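The structure of the Article 5(1)(f) test, as described above, reduces to a simple decision rule: prohibited context plus no medical/safety purpose means the deployment is banned. The sketch below encodes only that structure for illustration; the field names are hypothetical and this is not a compliance tool or legal advice.

```python
# Illustrative sketch of the Article 5(1)(f) structure: emotion
# inference is prohibited in workplace and educational contexts
# unless deployed for medical or safety reasons.
# Context/purpose labels are hypothetical simplifications.

PROHIBITED_CONTEXTS = {"workplace", "education"}
PERMITTED_PURPOSES = {"medical", "safety"}

def emotion_ai_prohibited(context: str, purpose: str) -> bool:
    """Return True if the deployment falls under the Art. 5(1)(f) ban."""
    return context in PROHIBITED_CONTEXTS and purpose not in PERMITTED_PURPOSES
```

On this simplified reading, emotion inference for performance reviews in a workplace would be prohibited, while the same technology deployed for worker safety monitoring would fall outside the ban (though it may still be High Risk).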

On February 4, 2025, the European Commission published the “Guidelines on prohibited artificial intelligence practices”, which detail the definitions and parameters surrounding the use of Emotion AI.

Use of Emotion AI in Workplace Settings – Case Studies

Case Study 1: Sentiment Analysis on Sales Calls

In the first case study, a global tech company’s Chief Revenue Officer seeks to implement software that enables uniform sales training across international teams. This software would analyze sales calls, comparing metrics from high and low performers. Key metrics such as dialogue switches, talk-to-listen ratio, and emotional sentiment of both the customer and the sales representative are tracked to enhance engagement.
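Metrics like the talk-to-listen ratio and dialogue switches can be computed directly from speaker-labelled call segments. The sketch below shows one way to derive them; the segment format is an illustrative assumption, not any specific vendor's API, and the emotional-sentiment scoring is deliberately omitted since that is the regulated component.

```python
# Illustrative sketch: deriving two of the call metrics mentioned above
# from speaker-labelled segments. The (speaker, start_sec, end_sec)
# tuple format is a hypothetical simplification.

def talk_to_listen_ratio(segments, rep_label="rep"):
    """Seconds the sales rep spoke divided by seconds others spoke."""
    rep_time = sum(end - start for spk, start, end in segments if spk == rep_label)
    other_time = sum(end - start for spk, start, end in segments if spk != rep_label)
    if other_time == 0:
        return float("inf")  # rep talked the entire call
    return rep_time / other_time

def dialogue_switches(segments):
    """Count how often the speaker changes between consecutive segments."""
    speakers = [spk for spk, _, _ in segments]
    return sum(1 for a, b in zip(speakers, speakers[1:]) if a != b)
```

For a call where the rep speaks for 45 seconds and the customer for 15, the ratio is 3.0; coaching tools typically flag such lopsided calls. These timing metrics, unlike sentiment inference, involve no emotion recognition.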

While the software primarily focuses on customer sentiment, it has the potential to assess the emotions of sales agents, which raises concerns about its implications for performance reviews and employee relations. If an employee consistently ranks low due to the software’s assessments, it may impact their engagement and lead to grievances, complicating the legal landscape surrounding its use.

Case Study 2: AI in Recruitment

The second case study involves a consultancy firm aiming to streamline its recruitment process for remote roles. The firm plans to use AI-powered interview scheduling software that assesses candidates’ facial expressions, voice tone, and other non-verbal cues to gauge enthusiasm and confidence. However, this use of Emotion AI during the hiring process falls within the Prohibited category of the EU AI Act.

Given that the workplace encompasses both physical and virtual spaces, the guidelines specify that the use of emotion recognition systems during recruitment or probation periods is strictly prohibited. The potential for bias and inaccuracies in AI assessments poses significant risks to candidates, particularly marginalized groups.

Conclusions and Recommendations

The implementation of the EU AI Act necessitates heightened vigilance among businesses regarding their AI practices, particularly those involving employees or job applicants. Establishing appropriate governance systems, including internal training, education, and robust audits, will be essential for compliance.

As organizations prepare for the full enforcement of the EU AI Act, it is critical to ensure that Emotion AI applications in customer interactions comply with regulations for High-Risk AI Systems. The provisions concerning High-Risk AI will come into effect in August 2026, and further guidance from the European Commission is anticipated.

Failing to comply with the regulations surrounding Prohibited AI Systems could result in fines of up to EUR 35 million or 7% of a company's total worldwide annual turnover, whichever is higher, potentially coupled with GDPR penalties of up to 4% of turnover, for a combined exposure of up to 11% of turnover. Such repercussions underscore the importance of implementing effective AI governance now.
