Regulating Emotion AI in the Workplace: Challenges and Implications

EU AI Act – Spotlight on Emotional Recognition Systems in the Workplace

Emotion Recognition Artificial Intelligence (Emotion AI) refers to AI technologies that use biometric and other data, such as facial expressions, keystrokes, tone of voice, and behavioral mannerisms, to identify, infer, and analyze emotions. Originating in the field of affective computing in the 1990s, this multidisciplinary domain draws on natural language processing, psychology, and sociology.

With the recent surge in computational power and the proliferation of sophisticated sensor technologies in devices and the Internet of Things (IoT), Emotion AI has gained significant traction. The market for Emotion AI is projected to expand from USD 3 billion in 2024 to USD 7 billion within five years.

Emotion AI is increasingly implemented across various sectors, not only to detect potential conflicts, crimes, or harm in public spaces like train stations and construction sites but also in technology and consumer goods sectors. Here, detailed customer insights, hyper-personalized sales, and nuanced market segmentation represent the holy grail for businesses.

A range of organizations, beyond the traditional tech giants, are striving to predict customer desires. For instance, an Australian start-up is beta testing what it claims to be the world's first emotion language model, designed to track emotions in real time. Meanwhile, others are developing therapeutic chatbots powered by Emotion AI to help individuals improve their mental health.

However, the deployment of Emotion AI is now heavily regulated. The EU AI Act, which entered into force on August 1, 2024, imposes stringent requirements on Emotion AI, placing it in either the "High Risk" or the "Prohibited Use" category depending on the context of its application.

Emotion AI that falls within the Prohibited category is effectively banned in the EU. Starting February 2, 2025, Article 5(1)(f) of the EU AI Act forbids “the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and educational institutions, … except where the use is intended for medical or safety reasons.”

On February 4, 2025, the European Commission published the “Guidelines on prohibited artificial intelligence practices” (Communication C(2025) 884 final) to provide clarity on the parameters of these definitions.

Use of Emotion AI in Workplace Settings – Case Studies

To illustrate the impact of the new rules, two practical applications of Emotion AI in workplace settings are examined:

Case Study 1: Sentiment Analysis on Sales Calls

Consider a busy sales team at a tech company pushing to meet month-end targets for new customer outreach and deal closings. The US-based Chief Revenue Officer seeks to implement software that enables uniform sales training across the global team. This software compares calls held by star performers against those of lower performers, ranks the sales team monthly, and celebrates top sellers.

The call recording and analysis software aims to determine key success factors for sales calls and ultimately drive revenue. It tracks metrics like the number of dialogue switches, talk-to-listen ratios, and the timing of pricing discussions. Notably, while the software focuses on customer sentiment, it also has the potential to analyze the sales representative’s emotions, identifying a range of sentiments including enthusiasm.
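The conversational metrics described above, unlike emotion inference, involve no biometric classification, and they are straightforward to compute. A minimal sketch, assuming a hypothetical diarized transcript represented as a list of speaker-labeled segments (the `Segment` structure and speaker labels are illustrative, not taken from any specific product):

```python
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str      # "rep" or "customer" (illustrative labels)
    duration: float   # seconds of continuous speech

def call_metrics(segments: list[Segment]) -> dict:
    """Compute the rep's talk-to-listen ratio and the number of dialogue switches."""
    rep_time = sum(s.duration for s in segments if s.speaker == "rep")
    customer_time = sum(s.duration for s in segments if s.speaker == "customer")
    # A "dialogue switch" is counted each time the speaker changes between segments.
    switches = sum(
        1 for prev, cur in zip(segments, segments[1:]) if prev.speaker != cur.speaker
    )
    ratio = rep_time / customer_time if customer_time else float("inf")
    return {"talk_to_listen": ratio, "dialogue_switches": switches}

call = [Segment("rep", 40), Segment("customer", 20),
        Segment("rep", 30), Segment("customer", 10)]
print(call_metrics(call))  # talk_to_listen = 70/30, dialogue_switches = 3
```

The point of the sketch is the regulatory contrast: tracking who talks and for how long stays on the safe side of the line, whereas the same product's ability to classify the representative's enthusiasm from voice data is what pulls it toward Article 5(1)(f).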

Case Study 2: AI-Powered Recruitment in a Consultancy Firm

A consultancy firm wishes to widen its recruitment reach by adopting an entirely remote application and onboarding process. The firm seeks to leverage AI to schedule interviews through a platform that includes innovative features aimed at mitigating human bias during interviews.

The technology records interviews, produces transcripts, and offers insights for decision-makers, all while evaluating candidates’ facial expressions, voice tones, and other non-verbal cues to assess enthusiasm or disengagement.

Article 5 of the EU AI Act and Guidelines

Despite the popularity of Emotion AI in the tech market, there is a lack of scientific consensus on the reliability of emotion recognition systems. The EU AI Act reflects this concern, stating in Recital (44) that “expression of emotions varies considerably across cultures and situations, and even within a single individual.” This statement underscores the rationale behind categorically banning AI systems intended to detect emotional states in workplace-related situations.

Article 3(39) of the EU AI Act defines “emotion recognition systems” as AI systems “for the purpose of identifying and inferring emotions or intentions of natural persons on the basis of their biometric data.” While the prohibition in Article 5(1)(f) does not specifically mention the term “emotion recognition system,” the Guidelines clarify that both “emotion recognition” and “emotion inference” are encompassed by this prohibition.

It is important to note that identifying an emotion requires processing biometric data (such as facial images or voice recordings) and comparing it against pre-programmed or learned emotion categories. This means that simply observing an expression, such as "the candidate is smiling," does not trigger the prohibition; concluding "the candidate is happy" on the basis of the system's training does.

While the prohibition in Article 5(1)(f) allows exceptions for medical and safety-related contexts, the Guidelines extend permissible use to emotion recognition systems deployed for training purposes, provided the results are not shared with HR and do not influence assessments, promotions, or the work relationship in any way. Nonetheless, this training exemption is not explicitly stated in the Act itself.
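The structure of Article 5(1)(f) can be summarized as a simple decision rule: emotion inference in workplace or educational contexts is prohibited unless it falls under the medical or safety carve-outs; elsewhere, it is generally treated as high-risk. A deliberately simplified sketch of that triage logic (illustrative only, not legal advice; the context and purpose labels are assumptions, and real assessments turn on facts this function cannot capture):

```python
def emotion_ai_triage(context: str, purpose: str) -> str:
    """
    Rough first-pass triage of an emotion-inference AI system under the
    EU AI Act. Grossly simplified and for illustration only.
    """
    prohibited_contexts = {"workplace", "education"}
    exempt_purposes = {"medical", "safety"}  # carve-outs named in Article 5(1)(f)

    if context in prohibited_contexts and purpose not in exempt_purposes:
        return "prohibited"
    # Outside the banned contexts (or within a carve-out), emotion
    # recognition generally remains a high-risk system under the Act.
    return "high-risk"

print(emotion_ai_triage("workplace", "sales-performance"))  # prohibited
print(emotion_ai_triage("workplace", "safety"))             # high-risk
```

Note how the training-purposes exemption described in the Guidelines would not fit this rule as written, which is precisely why its absence from the Act itself matters for compliance teams.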

Conclusion – What Should You Do Next?

The introduction of the EU AI Act necessitates heightened vigilance regarding AI practices, particularly in applications involving employees or job applicants.

Organizations must establish appropriate governance systems, including internal training, education, and robust due diligence and audits, to identify any potential prohibited uses of AI.

For businesses utilizing emotion recognition systems in customer contexts, compliance with High-Risk AI Systems regulations is also crucial. The relevant provisions of the EU AI Act concerning High-Risk AI will take effect in August 2026, and further guidance from the European Commission regarding definitions and obligations for High-Risk Emotion AI is anticipated.

Violating the provisions on Prohibited AI Systems in the EU can result in severe penalties, with fines reaching the higher of EUR 35,000,000 or 7% of the organization's total worldwide annual turnover. Combined with potential GDPR fines (up to 4% of turnover), total exposure can reach 11% of turnover. The reputational damage associated with non-compliance is also significant, making it imperative for organizations to act promptly on their AI governance strategies.
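The penalty cap is a simple maximum of a fixed amount and a turnover percentage, which means that for large organizations the percentage dominates. A quick sketch of the arithmetic (the turnover figure is a made-up example):

```python
def max_prohibited_ai_fine(turnover_eur: float) -> float:
    """Penalty cap for prohibited-AI violations under the EU AI Act:
    the higher of EUR 35,000,000 or 7% of total worldwide annual turnover."""
    return max(35_000_000, 0.07 * turnover_eur)

# For a hypothetical company with EUR 2 billion in annual turnover,
# 7% (EUR 140M) exceeds the EUR 35M floor.
print(f"{max_prohibited_ai_fine(2_000_000_000):,.0f}")  # 140,000,000
```

The fixed EUR 35M floor ensures the cap still bites for smaller organizations whose 7% figure would otherwise fall below it.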
