Regulating Emotion AI in the Workplace: Challenges and Implications

EU AI Act – Spotlight on Emotional Recognition Systems in the Workplace

Emotion Recognition Artificial Intelligence (Emotion AI) refers to AI technologies that use biometric and other data sets, such as facial expressions, keystrokes, tone of voice, and behavioral mannerisms, to identify, infer, and analyze emotions. Originating from the field of affective computing in the 1990s, this multidisciplinary domain draws on natural language processing, psychology, and sociology.

With the recent surge in computational power and the proliferation of sophisticated sensor technologies in devices and the Internet of Things (IoT), Emotion AI has gained significant traction. The market for Emotion AI is projected to expand from USD 3 billion in 2024 to USD 7 billion within five years.

Emotion AI is increasingly implemented across various sectors, not only to detect potential conflicts, crimes, or harm in public spaces like train stations and construction sites but also in technology and consumer goods sectors. Here, detailed customer insights, hyper-personalized sales, and nuanced market segmentation represent the holy grail for businesses.

A range of organizations—beyond traditional tech giants—are striving to unlock the key to predicting customer desires. For instance, an Australian start-up is beta testing what it claims to be the world’s first emotion language model, designed to track emotions in real time. Meanwhile, others are developing therapeutic chatbots powered by Emotion AI to aid individuals in improving their mental health.

However, the deployment of Emotion AI is now heavily regulated. The EU AI Act, which entered into force on August 1, 2024, imposes stringent requirements on Emotion AI, placing it in either the “High-Risk” or “Prohibited” category depending on the context of its application.

Emotion AI that falls within the Prohibited category is effectively banned in the EU. Starting February 2, 2025, Article 5(1)(f) of the EU AI Act forbids “the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and educational institutions, … except where the use is intended for medical or safety reasons.”

On February 4, 2025, the European Commission published the “Guidelines on prohibited artificial intelligence practices” (Communication C(2025) 884 final) to provide clarity on the parameters of these definitions.

Use of Emotion AI in Workplace Settings – Case Studies

To illustrate the impact of the new rules, two practical applications of Emotion AI in workplace settings are examined:

Case Study 1: Sentiment Analysis on Sales Calls

Consider a busy sales team at a tech company racing to meet month-end targets for new customer outreach and deal closings. The US-based Chief Revenue Officer wants to roll out software that enables uniform sales training across the global team. The software compares calls held by star performers against those of lower performers, ranks the sales team monthly, and celebrates top sellers.

The call recording and analysis software aims to determine key success factors for sales calls and ultimately drive revenue. It tracks metrics like the number of dialogue switches, talk-to-listen ratios, and the timing of pricing discussions. Notably, while the software focuses on customer sentiment, it also has the potential to analyze the sales representative’s emotions, identifying a range of sentiments including enthusiasm.
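Metrics of this kind are straightforward to compute from a diarized call transcript, and computing them does not itself involve emotion inference. The following is an illustrative sketch only, not any vendor’s actual implementation; the transcript format of (speaker, duration-in-seconds) turns is an assumption made for demonstration:

```python
# Illustrative sketch: conversational metrics a sales-analytics tool might
# track. The (speaker, duration_seconds) transcript format is a hypothetical
# simplification, not a real product's data model.

def call_metrics(turns):
    """Return the number of dialogue switches and the rep's talk-to-listen ratio."""
    # A "dialogue switch" is any point where the speaker changes.
    switches = sum(1 for a, b in zip(turns, turns[1:]) if a[0] != b[0])
    rep_time = sum(d for speaker, d in turns if speaker == "rep")
    customer_time = sum(d for speaker, d in turns if speaker == "customer")
    ratio = rep_time / customer_time if customer_time else float("inf")
    return {"dialogue_switches": switches, "talk_to_listen_ratio": ratio}

transcript = [
    ("rep", 40), ("customer", 20), ("rep", 30),
    ("customer", 50), ("rep", 10),
]
metrics = call_metrics(transcript)
print(metrics["dialogue_switches"])          # → 4
print(round(metrics["talk_to_listen_ratio"], 2))  # → 1.14
```

Note that nothing in this sketch touches biometric data or emotional states; it is the additional sentiment layer described above that brings such software within the scope of the Act.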

Case Study 2: AI-Powered Recruitment in a Consultancy Firm

A consultancy firm wishes to widen its recruitment reach by adopting an entirely remote application and onboarding process. The firm seeks to leverage AI to schedule interviews through a platform that includes innovative features aimed at mitigating human bias during interviews.

The technology records interviews, produces transcripts, and offers insights for decision-makers, all while evaluating candidates’ facial expressions, voice tones, and other non-verbal cues to assess enthusiasm or disengagement.

Article 5 of the EU AI Act and Guidelines

Despite the popularity of Emotion AI in the tech market, there is a lack of scientific consensus on the reliability of emotion recognition systems. The EU AI Act reflects this concern, stating in Recital (44) that “expression of emotions varies considerably across cultures and situations, and even within a single individual.” This statement underscores the rationale behind categorically banning AI systems intended to detect emotional states in workplace-related situations.

Article 3(39) of the EU AI Act defines an “emotion recognition system” as an AI system “for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.” While the prohibition in Article 5(1)(f) does not specifically use the term “emotion recognition system,” the Guidelines clarify that both emotion recognition and emotion inference are encompassed by the prohibition.

It is important to note that identifying an emotion requires processing biometric data (such as facial images or voice recordings) and matching it against pre-programmed or learned emotional states. This means that simply observing an expression, such as “the candidate is smiling,” does not trigger the prohibition; concluding “the candidate is happy” on the basis of such processing, however, does.

While the prohibition in Article 5(1)(f) allows exceptions only for medical and safety reasons, the Guidelines extend permissible use cases to emotion recognition systems used for training purposes, provided that the results are not shared with HR, do not affect assessments or promotions, and have no “impact on the work relationship.” Nonetheless, this training exemption is not explicitly stated in the Act itself.

Conclusion – What Should You Do Next?

The introduction of the EU AI Act necessitates heightened vigilance regarding AI practices, particularly in applications involving employees or job applicants.

Organizations must establish appropriate governance systems, including internal training, education, and robust due diligence and audits, to identify any potential prohibited uses of AI.

For businesses utilizing emotion recognition systems in customer contexts, compliance with High-Risk AI Systems regulations is also crucial. The relevant provisions of the EU AI Act concerning High-Risk AI will take effect in August 2026, and further guidance from the European Commission regarding definitions and obligations for High-Risk Emotion AI is anticipated.

Violating the provisions regarding Prohibited AI Systems in the EU could result in severe penalties, with fines reaching the higher of EUR 35,000,000 or 7% of the organization’s total worldwide annual turnover. Coupled with potential GDPR fines, organizations may face penalties amounting to up to 11% of their turnover. The reputational damage associated with non-compliance is also significant, making it imperative for organizations to act promptly on their AI governance strategies.
