Neurotechnologies under the EU AI Act: Where Law Meets Science
As the global debate surrounding neurorights intensifies, the EU Artificial Intelligence Act introduces a new dimension to the regulatory framework governing neurotechnologies within the European Union.
The AI Act comes into play when an AI system, whether used on its own or as a component of a product, is placed on the market or put into service in the EU, regardless of where the provider is established. It also applies to providers and deployers located outside the EU where the output produced by the system is used within the EU.
These obligations supplement existing laws that operators may already adhere to, such as the EU Medical Device Regulation and the General Data Protection Regulation (GDPR).
However, certain exemptions exist, notably for AI systems developed and used solely for scientific research and development, for pre-market research and testing (other than testing in real-world conditions), for military, defense, or national security purposes, or for purely personal, non-professional use.
Definition of an AI System
According to the act, an AI system is “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” This broad definition encompasses the complex machine learning algorithms increasingly used in neuroscience.
For instance, in cognitive and computational neuroscience, AI is employed to extract features from brain signals and translate brain activity into actionable outputs. One example is the use of convolutional neural networks to decode intended movements from electroencephalography (EEG) data, enabling actions such as moving a robotic arm. Similarly, generative adversarial networks can be used to reconstruct visual stimuli from brain activity.
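To make this concrete, the sketch below shows, under simplified assumptions, the kind of convolutional network referenced above: a small PyTorch model that maps multichannel EEG windows to motor-intention classes. The channel count, window length, class labels, and architecture are illustrative assumptions, and the input is synthetic rather than real EEG.

```python
# Minimal sketch (illustrative only): a 1D CNN that maps multichannel EEG
# windows to motor-imagery classes (e.g., "left hand" vs "right hand").
import torch
import torch.nn as nn

N_CHANNELS = 32   # assumed EEG channel count
N_SAMPLES = 256   # assumed samples per window (e.g., 1 s at 256 Hz)
N_CLASSES = 2     # e.g., left-hand vs right-hand motor imagery

class EEGMotorCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 16, kernel_size=7, padding=3),  # temporal filters
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time dimension
        )
        self.classifier = nn.Linear(32, N_CLASSES)

    def forward(self, x):
        # x: (batch, channels, samples)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)  # logits over motor-imagery classes

if __name__ == "__main__":
    model = EEGMotorCNN()
    fake_eeg = torch.randn(8, N_CHANNELS, N_SAMPLES)  # synthetic stand-in for real EEG
    print(model(fake_eeg).shape)  # torch.Size([8, 2])
```

A trained model of this shape is what turns raw brain signals into the “predictions” and “decisions” the AI Act’s definition refers to, which is why such decoders fall squarely within its scope.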
Importantly, an AI system can function independently or as a component of a product, meaning it does not need to be physically integrated into the hardware to be classified as such.
Regulatory Implications for Neurotechnologies
The AI Act explicitly prohibits AI systems employing subliminal techniques that materially distort human behavior and undermine free choice, potentially causing significant harm to individuals or groups. Neurotechnologies may facilitate such techniques, as suggested in Recital 29, which highlights concerns regarding machine-brain interfaces and advanced techniques like dream-hacking and brain spyware.
Dream Hacking
Research indicates that it may be possible to induce lucid dreaming through technologies such as smartwatches or sleep masks connected to smartphones. These devices detect REM sleep and then attempt to trigger a lucid state by delivering sensory cues. However, this research is still in its infancy, and challenges in real-world deployment and data interpretation raise questions about the actual risk posed by dream hacking.
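As a rough illustration of the closed-loop logic such consumer setups rely on, the sketch below pairs a hypothetical REM heuristic with cue delivery. The thresholds and the WearableSample fields are invented for illustration; real systems use validated sleep-staging models rather than fixed cut-offs.

```python
# Minimal sketch (illustrative, not a real product API) of a detect-then-cue loop.
from dataclasses import dataclass
from typing import Optional

@dataclass
class WearableSample:
    heart_rate_bpm: float
    movement_index: float  # 0 = completely still, 1 = large movements

def looks_like_rem(sample: WearableSample) -> bool:
    # Hypothetical heuristic: elevated heart rate with near-total stillness
    # (muscle atonia) is treated as a proxy for REM sleep.
    return sample.heart_rate_bpm > 65 and sample.movement_index < 0.05

def closed_loop_step(sample: WearableSample) -> Optional[str]:
    # If REM is suspected, deliver a gentle sensory cue (light, sound, vibration)
    # intended to be incorporated into the dream.
    if looks_like_rem(sample):
        return "deliver_cue:soft_audio_tone"
    return None

print(closed_loop_step(WearableSample(heart_rate_bpm=72, movement_index=0.01)))
```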
Brain Spyware
Examples from the guidelines illustrate how AI-enabled neurotechnologies, such as headsets that let users control aspects of a game through their brain activity, could be exploited to extract sensitive information, like personal bank details, without the user’s awareness. While studies suggest that, under controlled conditions, attackers could infer user passwords or PINs from brainwaves, this does not constitute mind-reading; rather, it highlights cybersecurity vulnerabilities.
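The sketch below illustrates, on purely synthetic data, the attack pattern such studies describe: flashing candidate items (here, fake PIN digits) while recording EEG and scoring each item for an evoked recognition response. The response window, amplitudes, and scoring rule are assumptions standing in for a calibrated classifier.

```python
# Minimal sketch (illustrative only) of evoked-response "brain spyware":
# a malicious app flashes candidate digits and looks for a stronger
# recognition response (a P300-like bump) to the one the user knows.
import numpy as np

rng = np.random.default_rng(0)
candidates = list(range(10))  # candidate PIN digits shown to the user
true_digit = 7                # digit the (simulated) user recognizes

def simulated_erp(digit: int) -> np.ndarray:
    # One synthetic EEG epoch per stimulus; the recognized digit gets a small
    # extra positive deflection roughly 300 ms after the stimulus.
    epoch = rng.normal(0.0, 1.0, size=200)
    if digit == true_digit:
        epoch[55:75] += 1.5  # assumed P300 window and amplitude
    return epoch

def p300_score(epoch: np.ndarray) -> float:
    # Crude score: mean amplitude in the assumed window. A real attack would
    # use a classifier trained on the victim's own calibration data.
    return float(epoch[55:75].mean())

scores = {d: np.mean([p300_score(simulated_erp(d)) for _ in range(20)])
          for d in candidates}
print("attacker's best guess:", max(scores, key=scores.get))
```

The point of the toy example is that the system never “reads a thought”; it only measures an involuntary signal correlated with recognition, which is exactly the kind of misuse the prohibition on exploitative subliminal techniques is meant to capture.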
Emotion Recognition Systems
The AI Act prohibits the use of emotion recognition systems (ERSs) in workplaces and educational institutions, except for medical or safety reasons. In other settings, their use is classified as high-risk. ERSs are designed to infer emotions or intentions, covering a wide array of affective states such as happiness, sadness, and anxiety.
While EEG is explicitly mentioned for ERSs, this could extend to all neurotechnologies used for detecting emotions. For example, neuromarketing may employ fMRI or EEG to gauge consumer sentiment, while educational settings might use EEG to monitor student stress levels.
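As a simplified illustration of how an EEG-based ERS of this kind typically works, the sketch below extracts frequency-band power features from synthetic EEG windows and trains a basic classifier to flag “stress.” The sampling rate, frequency bands, and labels are assumptions; the data are generated, not recorded.

```python
# Minimal sketch (illustrative only): band-power features + a linear classifier,
# the classic recipe behind EEG-based emotion/stress recognition.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
FS = 128           # assumed sampling rate (Hz)
N_WINDOWS = 200    # synthetic 1-second single-channel EEG windows

def log_band_power(window: np.ndarray, low: float, high: float) -> float:
    # Log power in a frequency band via a simple periodogram.
    freqs = np.fft.rfftfreq(window.size, d=1.0 / FS)
    psd = np.abs(np.fft.rfft(window)) ** 2
    mask = (freqs >= low) & (freqs < high)
    return float(np.log(psd[mask].mean() + 1e-12))

# Synthetic data: "stressed" windows get extra beta-band (13-30 Hz) activity.
X, y = [], []
for _ in range(N_WINDOWS):
    stressed = int(rng.integers(0, 2))
    t = np.arange(FS) / FS
    window = rng.normal(0, 1, FS) + stressed * 0.8 * np.sin(2 * np.pi * 20 * t)
    X.append([log_band_power(window, 4, 8),     # theta
              log_band_power(window, 8, 13),    # alpha
              log_band_power(window, 13, 30)])  # beta
    y.append(stressed)

clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```

Whether such a pipeline runs in a marketing study or a classroom is precisely what determines, under the AI Act, whether it is prohibited, high-risk, or merely regulated.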
Biometric Categorization
The AI Act also prohibits systems that categorize individuals based on their biometric data to deduce or infer sensitive information, such as their race or political opinions, unless the categorization is purely ancillary to another commercial service and strictly necessary for objective technical reasons. Neurotechnologies combined with other modalities could potentially reveal such sensitive information, which raises significant ethical concerns.
While categorizing individuals by health or genetic data is classified as high-risk rather than prohibited, this category becomes particularly relevant where neurotechnologies are used to infer conditions such as Parkinson’s disease or a person’s mental health status.
Ultimately, the same AI system may fall under multiple high-risk or prohibited categories within the AI Act, necessitating a comprehensive assessment of intended and reasonably foreseeable uses by providers and deployers of neurotechnologies.