Neurotechnologies and the EU AI Act: Legal Implications and Challenges

As the global debate surrounding neurorights intensifies, the EU Artificial Intelligence Act introduces a new dimension to the regulatory framework governing neurotechnologies within the European Union.

The AI Act applies when an AI system, whether on its own or as a component of a product, is placed on the market in the EU, regardless of where the provider is established. It also applies to providers and deployers located outside the EU whenever the output produced by the system is used in the EU.

These obligations supplement existing laws that operators may already adhere to, such as the EU Medical Device Regulation and the General Data Protection Regulation (GDPR).

However, certain exemptions exist, notably for AI systems developed and used solely for scientific research and development; for pre-market research, development, and testing (excluding testing in real-world conditions); for military, defense, or national security purposes; and for purely personal, non-professional use.

Definition of an AI System

According to the act, an AI system is “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” This broad definition encompasses the complex machine learning algorithms increasingly used in neuroscience.

For instance, in cognitive and computational neuroscience, AI is employed to extract features from brain signals and translate brain activity into actionable outputs. One example is the use of convolutional neural networks to decode motor intentions from electroencephalography (EEG) data, enabling actions such as moving a robotic arm. Similarly, generative adversarial networks can reconstruct visual stimuli from brain activity.
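As a concrete illustration, below is a minimal sketch (in PyTorch) of the kind of convolutional network used to decode motor intentions from EEG epochs. The channel count, epoch length, class labels, and architecture are illustrative assumptions, not a reference to any specific published model.

```python
# Minimal, illustrative EEG motor-intention decoder (assumed shapes and labels).
import torch
import torch.nn as nn

N_CHANNELS = 32   # assumed number of EEG electrodes
N_SAMPLES = 256   # assumed samples per epoch (e.g., 1 s at 256 Hz)
N_CLASSES = 4     # e.g., left hand, right hand, feet, rest

class EEGDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal convolution: learns frequency-like filters per channel.
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),
            nn.BatchNorm2d(16),
            nn.ELU(),
            # Spatial convolution: mixes information across all electrodes.
            nn.Conv2d(16, 32, kernel_size=(N_CHANNELS, 1)),
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 8)),
            nn.Flatten(),
        )
        self.classifier = nn.Linear(32 * (N_SAMPLES // 8), N_CLASSES)

    def forward(self, x):
        # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x))

model = EEGDecoder()
epoch = torch.randn(1, 1, N_CHANNELS, N_SAMPLES)  # one synthetic EEG epoch
logits = model(epoch)
print(logits.argmax(dim=1))  # predicted motor-intention class
```

The temporal-then-spatial convolution pattern mirrors widely used EEG decoding architectures; a real system would of course be trained on labeled recordings before its outputs could drive a robotic arm.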

Importantly, an AI system can function independently or as a component of a product, meaning it does not need to be physically integrated into the hardware to be classified as such.

Regulatory Implications for Neurotechnologies

The AI Act explicitly prohibits AI systems that deploy subliminal techniques materially distorting human behavior and undermining free choice in ways that cause, or are reasonably likely to cause, significant harm to individuals or groups. Neurotechnologies may facilitate such techniques, as suggested in Recital 29, which highlights concerns about machine-brain interfaces and advanced techniques such as dream hacking and brain spyware.

Dream Hacking

Research indicates that it may be possible to induce lucid dreaming using technologies such as smartwatches or sleep masks connected to smartphones. These devices can detect REM sleep and deliver sensory cues to trigger a lucid state. However, this research is still in its infancy, with challenges in real-world deployment and data interpretation, raising questions about the actual risks associated with dream hacking.
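The detect-and-cue loop such devices implement can be illustrated with a toy sketch. The sensor readings and the REM heuristic below are entirely hypothetical stand-ins; real products rely on validated sleep-staging models over actigraphy, heart rate, or EEG.

```python
# Toy detect-and-cue loop; all sensors and thresholds are hypothetical.
import random
import time

def read_heart_rate() -> float:
    """Stand-in for a wearable's heart-rate sensor (hypothetical)."""
    return random.uniform(50, 80)

def read_movement() -> float:
    """Stand-in for an accelerometer-based movement score (hypothetical)."""
    return random.uniform(0.0, 1.0)

def probably_rem(heart_rate: float, movement: float) -> bool:
    # Crude assumed heuristic: elevated heart rate combined with muscle
    # atonia (very low movement) is characteristic of REM sleep.
    return heart_rate > 65 and movement < 0.1

def deliver_cue():
    # On a real device this might flash a mask's LEDs or play a soft tone.
    print("Delivering sensory cue to prompt lucidity...")

for _ in range(10):      # poll a few times for demonstration
    if probably_rem(read_heart_rate(), read_movement()):
        deliver_cue()
    time.sleep(0.1)      # a real system would poll every few seconds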

Brain Spyware

Examples from the European Commission’s guidelines on prohibited AI practices illustrate how AI-enabled neurotechnologies could be exploited: a system that lets users control aspects of a game through brain activity could, at the same time, extract sensitive information, such as personal bank details, without the user’s awareness. Studies suggest that under controlled conditions hackers could infer user passwords from brainwaves; this is not mind-reading, but it does highlight genuine cybersecurity vulnerabilities.
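The attack concept behind those studies can be sketched as follows: flash candidate stimuli (here, digits of a PIN) while recording EEG, and look for an elevated P300-like response, an event-related potential associated with recognizing a meaningful stimulus. Everything below is synthetic and illustrative; real experiments require many trials and careful ERP analysis.

```python
# Synthetic sketch of a P300-based probing attack (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
SAMPLE_RATE = 256                      # assumed Hz
P300_WINDOW = slice(int(0.25 * SAMPLE_RATE), int(0.5 * SAMPLE_RATE))

def record_epoch(stimulus: int, users_digit: int) -> np.ndarray:
    """Simulated post-stimulus EEG epoch: a recognized stimulus evokes a
    larger positive deflection around 300 ms (purely synthetic)."""
    epoch = rng.normal(0, 1, SAMPLE_RATE)
    if stimulus == users_digit:
        epoch[P300_WINDOW] += 1.5      # synthetic "recognition" response
    return epoch

USERS_DIGIT = 7                        # the secret the attacker wants to infer

# Flash each candidate digit repeatedly; average the P300-window amplitude.
scores = {}
for digit in range(10):
    epochs = [record_epoch(digit, USERS_DIGIT) for _ in range(40)]
    scores[digit] = np.mean([e[P300_WINDOW].mean() for e in epochs])

print("Inferred digit:", max(scores, key=scores.get))
```

Even at modest single-trial accuracy, repeated probing of this kind narrows the search space for a secret, which is why consumer brain-computer interfaces raise cybersecurity rather than mind-reading concerns.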

Emotion Recognition Systems

The AI Act prohibits the use of emotion recognition systems (ERSs) in workplaces and educational institutions, except for medical or safety reasons. In other settings, their use is classified as high-risk. ERSs can infer emotions and intentions, covering a wide array of feelings such as happiness, sadness, and anxiety.

While EEG is the neurotechnology explicitly mentioned in connection with ERSs, the rules could extend to any neurotechnology used to detect emotions. For example, neuromarketing may employ fMRI or EEG to gauge consumer sentiment, while educational settings might use EEG to monitor student stress levels.
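To illustrate the technical core of such systems, the sketch below computes the spectral band-power features that underpin many EEG-based emotion classifiers and fits a simple model on synthetic data. The frequency bands, channel count, labels, and classifier choice are assumptions for illustration only.

```python
# Band-power feature extraction for EEG emotion recognition (synthetic data).
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

SAMPLE_RATE = 128                       # assumed Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch: np.ndarray) -> np.ndarray:
    """epoch: (channels, samples) -> flat vector of per-band mean power."""
    freqs, psd = welch(epoch, fs=SAMPLE_RATE, nperseg=SAMPLE_RATE)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
             for lo, hi in BANDS.values()]
    return np.concatenate(feats)

# Synthetic stand-in data: 100 epochs of 14-channel EEG with binary labels
# (e.g., "stressed" vs "calm") -- purely illustrative, not real recordings.
rng = np.random.default_rng(1)
X = np.stack([band_powers(rng.normal(size=(14, SAMPLE_RATE)))
              for _ in range(100)])
y = rng.integers(0, 2, size=100)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X[:5]))               # predicted emotional-state labels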

Biometric Categorization

The AI Act also prohibits systems that categorize individuals on the basis of their biometric data to infer sensitive attributes, such as race or political beliefs, unless the categorization is ancillary to another commercial service and strictly necessary for objective technical reasons. Neurotechnologies combined with other modalities could reveal such sensitive information, which raises significant ethical concerns.

Categorizing individuals by other sensitive attributes, such as health or genetic data, is classified as high-risk rather than prohibited. This distinction becomes particularly relevant where neurotechnologies are used to infer conditions such as Parkinson’s disease or a person’s mental health status.

Ultimately, the same AI system may fall under multiple high-risk or prohibited categories of the AI Act, necessitating a comprehensive assessment by providers and deployers of neurotechnologies of the intended purpose and reasonably foreseeable uses of their systems.
