EU AI Act – Spotlight on Emotion Recognition Systems in the Workplace
Emotion recognition artificial intelligence, commonly referred to as Emotion AI, has gained significant traction across sectors, particularly in the workplace. The technology draws on a range of biometric and behavioral data, including facial expressions, keystrokes, tone of voice, and other mannerisms, to identify, infer, and analyze emotions. Emotion AI emerged from the field of affective computing, which has its roots in the 1990s, and integrates insights from natural language processing, psychology, and sociology.
Recent advances in computing power and the proliferation of sophisticated sensor technology have enabled these systems to process vast amounts of data. As a result, the Emotion AI market is projected to grow from roughly USD 3 billion in 2024 to USD 7 billion by 2029.
Emotion AI is applied in multiple contexts, including the detection of potential conflict or harm in public spaces such as train stations and construction sites. It also plays a growing role in the technology and consumer goods sectors, where customer insight and hyper-personalized sales strategies are paramount.
Organizations, including emerging start-ups, are striving to leverage Emotion AI to predict consumer desires. Notably, an Australian start-up is beta testing what it claims to be the world’s first emotion language model, aimed at real-time emotional tracking. Others are developing therapeutic chatbots utilizing Emotion AI to enhance mental health support.
Regulatory Landscape: EU AI Act
With the advent of Emotion AI, regulatory scrutiny has intensified. The EU AI Act, which entered into force on August 1, 2024, imposes stringent requirements on Emotion AI applications, classifying them as either “High Risk” or “Prohibited” depending on the context in which they are used.
Notably, any Emotion AI that falls within the Prohibited category is effectively banned in the EU. Under Article 5(1)(f) of the EU AI Act, applicable from February 2, 2025, the use of AI systems to infer emotions in workplace and educational settings is prohibited, except for medical or safety reasons.
On February 4, 2025, the European Commission published the “Guidelines on prohibited artificial intelligence practices”, which detail the definitions and parameters surrounding the use of Emotion AI.
Use of Emotion AI in Workplace Settings – Case Studies
Case Study 1: Sentiment Analysis on Sales Calls
In the first case study, a global tech company’s Chief Revenue Officer seeks to implement software that enables uniform sales training across international teams. This software would analyze sales calls, comparing metrics from high and low performers. Key metrics such as dialogue switches, talk-to-listen ratio, and emotional sentiment of both the customer and the sales representative are tracked to enhance engagement.
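For illustration only, the sketch below shows how two of these conversational metrics, the talk-to-listen ratio and dialogue switches, might be computed from a diarized call transcript. The transcript structure and field names here are hypothetical and do not correspond to any particular vendor’s product; assessing sentiment would require a separate model, which is exactly the component that raises the legal questions discussed next.

    from dataclasses import dataclass

    @dataclass
    class Turn:
        speaker: str      # "agent" or "customer"
        seconds: float    # duration of this speaking turn
        text: str         # transcribed speech

    def call_metrics(turns: list[Turn]) -> dict:
        agent_time = sum(t.seconds for t in turns if t.speaker == "agent")
        customer_time = sum(t.seconds for t in turns if t.speaker == "customer")
        # A "dialogue switch" is a change of speaker between consecutive turns.
        switches = sum(1 for a, b in zip(turns, turns[1:]) if a.speaker != b.speaker)
        # Per-turn sentiment would come from a separate classifier (not shown);
        # scoring the agent's turns, rather than the customer's, is what risks
        # inferring employee emotions within the meaning of Article 5(1)(f).
        return {
            "talk_to_listen_ratio": agent_time / customer_time if customer_time else None,
            "dialogue_switches": switches,
        }

    call = [
        Turn("agent", 12.0, "Thanks for joining today."),
        Turn("customer", 30.0, "We are evaluating a few vendors."),
        Turn("agent", 8.0, "Understood. Let me walk you through pricing."),
    ]
    print(call_metrics(call))  # {'talk_to_listen_ratio': 0.666..., 'dialogue_switches': 2}

Note that the same per-turn data that yields these neutral conversational metrics can just as easily be aggregated per employee, which is how a customer-analytics tool drifts into employee emotion inference.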
While the software primarily focuses on customer sentiment, it can equally assess the emotions of the sales agents themselves, which raises concerns about its implications for performance reviews and employee relations; to the extent the tool infers employees’ emotions, it risks falling within the Article 5(1)(f) prohibition described above. If an employee consistently ranks low due to the software’s assessments, this may damage their engagement and give rise to grievances, further complicating the legal position on its use.
Case Study 2: AI in Recruitment
The second case study involves a consultancy firm aiming to streamline its recruitment process for remote roles. The firm plans to use AI-powered interview screening software that assesses candidates’ facial expressions, voice tone, and other non-verbal cues to gauge enthusiasm and confidence. This use of Emotion AI during the hiring process, however, falls within the Prohibited category of the EU AI Act.
Given that the workplace encompasses both physical and virtual spaces, the guidelines specify that the use of emotion recognition systems during recruitment or probation periods is strictly prohibited. The potential for bias and inaccuracy in AI assessments poses significant risks to candidates, particularly those from marginalized groups.
Conclusions and Recommendations
The implementation of the EU AI Act necessitates heightened vigilance among businesses regarding their AI practices, particularly those involving employees or job applicants. Establishing appropriate governance systems, including internal training, education, and robust audits, will be essential for compliance.
As organizations prepare for the full enforcement of the EU AI Act, it is critical to ensure that Emotion AI applications in customer interactions comply with the requirements for High-Risk AI systems. The provisions concerning High-Risk AI will apply from August 2, 2026, and further guidance from the European Commission is anticipated.
Failing to comply with the rules on Prohibited AI systems can result in fines of up to EUR 35 million or 7% of a company’s total worldwide annual turnover, whichever is higher. These may be compounded by penalties under the GDPR of up to 4% of turnover, for a combined exposure of up to 11% of turnover. Such repercussions underscore the importance of implementing effective AI governance now.
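As a purely illustrative calculation of that exposure, the sketch below assumes a hypothetical company with EUR 10 billion in worldwide annual turnover; the statutory figures are upper caps, and actual fines are set case by case by regulators.

    def max_ai_act_fine(turnover_eur: float) -> float:
        # EU AI Act cap for prohibited practices: EUR 35 million or 7% of
        # total worldwide annual turnover, whichever is higher.
        return max(35_000_000.0, 0.07 * turnover_eur)

    def max_gdpr_fine(turnover_eur: float) -> float:
        # GDPR Article 83(5) cap: EUR 20 million or 4% of total worldwide
        # annual turnover, whichever is higher.
        return max(20_000_000.0, 0.04 * turnover_eur)

    turnover = 10_000_000_000.0  # hypothetical EUR 10bn annual turnover
    print(max_ai_act_fine(turnover))                            # 700,000,000 (7%)
    print(max_gdpr_fine(turnover))                              # 400,000,000 (4%)
    print(max_ai_act_fine(turnover) + max_gdpr_fine(turnover))  # 1,100,000,000 (11%)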