Neurotechnologies and the EU AI Act: Legal Implications and Challenges

As the global debate surrounding neurorights intensifies, the EU Artificial Intelligence Act introduces a new dimension to the regulatory framework governing neurotechnologies within the European Union.

The AI Act comes into play when an AI system, whether on its own or as a component of a product, is placed on the market or put into service in the EU, regardless of where the provider is established. It also applies to providers and deployers located outside the EU whenever the output produced by the system is intended to be used in the EU.

These obligations apply in addition to existing laws with which operators may already have to comply, such as the EU Medical Device Regulation and the General Data Protection Regulation (GDPR).

However, certain exceptions exist, notably for AI systems developed and used solely for scientific research; for pre-market research and testing (excluding testing in real-world conditions); for military, defense, or national security purposes; or for purely personal, non-professional use.

Definition of an AI System

According to the act, an AI system is “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” This broad definition encompasses the complex machine learning models increasingly used in neuroscience.

For instance, in cognitive and computational neuroscience, AI is used to extract features from brain signals and translate brain activity into actionable outputs. One example is the use of convolutional neural networks to decode motor intentions from electroencephalography (EEG) data, enabling actions such as moving a robotic arm. Similarly, generative adversarial networks can reconstruct visual stimuli from brain activity.
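
To make the decoding step concrete, the sketch below shows the general shape of such a pipeline: a small convolutional network that maps a raw EEG window to motor-intention classes. It is a minimal illustration, not any specific published decoder; the channel count, window length, and class labels are assumptions made for the example.

```python
# Minimal sketch of the kind of pipeline described above: a convolutional
# network mapping raw EEG windows to motor-intention classes. Channel count,
# sampling rate, and labels are illustrative assumptions.
import torch
import torch.nn as nn

N_CHANNELS = 64   # assumed EEG electrode count
N_SAMPLES = 500   # assumed 2 s window at 250 Hz
N_CLASSES = 2     # assumed motor-imagery classes (e.g. left vs right)

class EEGConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # temporal convolution along the time axis
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),
            nn.BatchNorm2d(16),
            nn.ELU(),
            # spatial convolution across all electrodes
            nn.Conv2d(16, 32, kernel_size=(N_CHANNELS, 1)),
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d((1, 50)),
        )
        self.classifier = nn.Linear(32 * (N_SAMPLES // 50), N_CLASSES)

    def forward(self, x):            # x: (batch, 1, channels, samples)
        h = self.features(x)
        return self.classifier(h.flatten(1))

# Smoke test on random data standing in for a recorded EEG window.
model = EEGConvNet()
window = torch.randn(1, 1, N_CHANNELS, N_SAMPLES)
intent_logits = model(window)        # decoded intention scores, e.g. to steer a robotic arm
print(intent_logits.shape)           # torch.Size([1, 2])
```

A real decoder would be trained on labelled recordings and calibrated per user, but the structure, and hence the likely qualification as an AI system under the act, is essentially this.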

Importantly, an AI system can function independently or as a component of a product, meaning it does not need to be physically integrated into the hardware to be classified as such.

Regulatory Implications for Neurotechnologies

The AI Act explicitly prohibits AI systems employing subliminal techniques that materially distort human behavior and undermine free choice, potentially causing significant harm to individuals or groups. Neurotechnologies may facilitate such techniques, as suggested in Recital 29, which highlights concerns regarding machine-brain interfaces and advanced techniques like dream-hacking and brain spyware.

Dream Hacking

Research indicates that it may be possible to induce lucid dreaming through technologies such as smartwatches or sleep masks connected to smartphones. These devices can detect REM sleep and trigger a lucid state via sensory cues. However, this research is still in its infancy, and challenges in real-world deployment and data interpretation raise questions about the actual risk posed by dream hacking.
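
As a rough illustration of that mechanism, the loop below polls a simulated sleep-stage estimate and delivers a gentle cue whenever REM is detected. The sensor and actuator are stand-ins for a device-specific SDK, and whether such cues reliably induce lucidity is precisely the open question noted above.

```python
# Illustrative-only cueing loop: detect REM, deliver a gentle sensory cue.
# The sleep-stage reader and cue player are placeholders, not a real device API.
import random
import time

def read_sleep_stage():
    # Stand-in for a wearable's sleep-stage estimate; a real device would
    # derive this from heart rate, motion, or EEG.
    return random.choice(["wake", "nrem", "rem"])

def play_audio_cue():
    print("playing gentle audio cue")   # stand-in for a light or sound actuator

def lucid_cueing_loop(polls=10, poll_seconds=1):
    for _ in range(polls):
        if read_sleep_stage() == "rem":
            play_audio_cue()
        time.sleep(poll_seconds)

lucid_cueing_loop()
```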

Brain Spyware

Examples from the European Commission’s guidelines on prohibited AI practices illustrate how AI-enabled neurotechnologies could be exploited: a system that lets players control aspects of a game through their brain activity might also reveal sensitive information, such as personal bank details, without the user’s awareness. Studies suggest that, under controlled conditions, attackers could infer user passwords from brainwaves; this is not mind reading, but it does expose a real cybersecurity vulnerability.
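
The sketch below shows, on purely synthetic data, why such inference is plausible in principle: event-related potentials such as the P300 tend to be larger for personally meaningful stimuli, so flashing digits while recording EEG can rank which digit a user recognizes. The signal model, trial counts, and analysis window are assumptions made for the demonstration, not a working attack.

```python
# Synthetic demonstration of the ERP side channel described above: targets get a
# P300-like bump ~400 ms after the flash, so averaging trials per digit and
# scoring the 300-500 ms window reveals the "private" digit. All data is fake.
import numpy as np

rng = np.random.default_rng(0)
FS = 250                    # assumed sampling rate (Hz)
digits = list(range(10))
secret_digit = 7            # the "private" item, for simulation only

def simulate_epoch(is_target):
    """1 s EEG epoch after a flashed digit; targets get an extra ~400 ms bump."""
    t = np.arange(FS) / FS
    epoch = rng.normal(0, 1.0, FS)
    if is_target:
        epoch += 3.0 * np.exp(-((t - 0.4) ** 2) / 0.005)   # crude P300-like peak
    return epoch

# Average 30 trials per digit, then score the 300-500 ms window.
scores = {}
for d in digits:
    epochs = [simulate_epoch(d == secret_digit) for _ in range(30)]
    avg = np.mean(epochs, axis=0)
    scores[d] = avg[int(0.3 * FS):int(0.5 * FS)].mean()

print("inferred digit:", max(scores, key=scores.get))   # 7 on this synthetic data
```

Real recordings are far noisier and require per-user calibration, which is why the practical risk is debated even though the principle is sound.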

Emotion Recognition Systems

The AI Act prohibits the use of emotion recognition systems (ERS) in workplaces or educational institutions, except for medical or safety reasons. In other settings, their use is classified as high-risk. ERSs can infer emotions and intentions, including a wide array of feelings such as happiness, sadness, and anxiety.

While EEG is the only neurotechnology explicitly mentioned in connection with ERSs, the rules could extend to any neurotechnology used to detect emotions. For example, neuromarketing may employ fMRI or EEG to gauge consumer sentiment, while educational settings might use EEG to monitor student stress levels.
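
For a sense of how simple the core of such a system can be, the sketch below builds a toy ERS: band-power features extracted from EEG windows feed a logistic-regression classifier that outputs an emotion label. The frequency bands, labels, and synthetic data are illustrative assumptions, not a description of any deployed product.

```python
# Toy emotion-recognition pipeline: EEG window -> band-power features -> label.
# Bands, labels, and data are assumptions for illustration only.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 250  # assumed sampling rate (Hz)
rng = np.random.default_rng(1)

def band_power_features(eeg_window):
    """Mean power in alpha (8-13 Hz) and beta (13-30 Hz) bands per channel."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=FS)
    alpha = psd[:, (freqs >= 8) & (freqs < 13)].mean(axis=1)
    beta = psd[:, (freqs >= 13) & (freqs < 30)].mean(axis=1)
    return np.concatenate([alpha, beta])

def synth_window(stressed):
    """Synthetic 2 s, 8-channel window; 'stressed' windows get extra beta power."""
    w = rng.normal(size=(8, 2 * FS))
    if stressed:
        t = np.arange(2 * FS) / FS
        w += 0.8 * np.sin(2 * np.pi * 20 * t)   # extra 20 Hz (beta) activity
    return w

# Labels are placeholders for whatever the ERS claims to read: 0 = calm, 1 = stressed.
y = rng.integers(0, 2, size=100)
X = np.stack([band_power_features(synth_window(lbl)) for lbl in y])

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X[:3]))   # inferred emotional state per window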

Biometric Categorization

The AI Act also prohibits systems that categorize individuals on the basis of their biometric data in order to infer sensitive attributes, such as race or political beliefs, unless the categorization is ancillary to another commercial service and strictly necessary for objective technical reasons. Neurotechnologies combined with other modalities could reveal exactly this kind of sensitive information, which raises significant ethical concerns.

Categorizing individuals by other sensitive attributes, such as health or genetic data, is classified as high-risk rather than prohibited. This distinction becomes crucial when neurotechnologies are used to infer conditions such as Parkinson’s disease or a person’s mental health status.

Ultimately, the same AI system may fall under multiple prohibited or high-risk categories of the AI Act, requiring providers and deployers of neurotechnologies to comprehensively assess both intended and reasonably foreseeable use cases.
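
The triage helper below illustrates that point in code: given a simplified description of a neurotech use case, it collects every AI Act category that may apply rather than stopping at the first match. The flags and category strings are a deliberate simplification of the act, and of course not legal advice.

```python
# Illustrative-only triage: one system can trigger several AI Act categories at
# once, so the assessment must collect all of them. Simplified, not legal advice.
from dataclasses import dataclass

@dataclass
class UseCase:
    subliminal_manipulation: bool = False        # Art. 5 prohibition
    emotion_recognition: bool = False
    setting: str = "other"                       # "workplace", "education", "other"
    medical_or_safety_purpose: bool = False
    biometric_categorisation_sensitive: bool = False

def ai_act_categories(u: UseCase) -> list[str]:
    cats = []
    if u.subliminal_manipulation:
        cats.append("prohibited: subliminal techniques (Art. 5)")
    if u.biometric_categorisation_sensitive:
        cats.append("prohibited: biometric categorisation of sensitive traits (Art. 5)")
    if u.emotion_recognition:
        if u.setting in ("workplace", "education") and not u.medical_or_safety_purpose:
            cats.append("prohibited: emotion recognition at work/school (Art. 5)")
        else:
            cats.append("high-risk: emotion recognition (Annex III)")
    return cats or ["assess further: may still be high-risk or limited-risk"]

# An EEG game that both reads emotions at school and infers sensitive traits
# lands in two categories at once:
print(ai_act_categories(UseCase(emotion_recognition=True, setting="education",
                                biometric_categorisation_sensitive=True)))
```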
