Navigating the Future: EU Seeks Input on AI Regulations and Prohibited Uses

Consultation on AI System Definitions and Prohibited Uses under the EU AI Act

The European Union has launched a consultation to gather input on key elements of its Artificial Intelligence Act (AI Act), specifically the definition of AI systems and the applications that are prohibited. The consultation is part of the EU’s wider effort to develop compliance guidance for the new law, which entered into force on August 1, 2024.

Consultation Details

The consultation period extends until December 11, 2024. Stakeholders are encouraged to provide feedback on the definitions of AI systems and suggest examples of software that should be excluded from the law’s scope. This initiative follows a landmark agreement reached in December 2023 regarding the AI Act.

Overview of the AI Act

The AI Act introduces a comprehensive regulatory framework impacting businesses globally across various sectors. Its primary objectives are to promote human-centric and trustworthy AI, while ensuring high safety standards and protecting fundamental rights.

Classification of AI Systems

Under the AI Act, AI systems are categorized into four risk levels:

  • Unacceptable Risk
  • High Risk
  • Specific Transparency Risk
  • Minimal Risk

High-risk systems include safety-critical applications in sectors such as critical infrastructure, employment, law enforcement, and the administration of justice. Systems posing a specific transparency risk, such as chatbots and digital assistants, must meet certain disclosure obligations.
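The four-tier structure can be pictured as a simple taxonomy. The sketch below is purely illustrative: the tier names come from the Act, but the example use-case mapping is an assumption for demonstration, not a legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from highest to lowest."""
    UNACCEPTABLE = "unacceptable"            # banned outright
    HIGH = "high"                            # strict conformity requirements
    SPECIFIC_TRANSPARENCY = "transparency"   # disclosure obligations
    MINIMAL = "minimal"                      # no specific obligations

# Illustrative, non-authoritative mapping of example use cases to tiers.
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.SPECIFIC_TRANSPARENCY,
    "spam_filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up the risk tier for a known example use case."""
    return EXAMPLE_TIERS[use_case]
```

In practice, a system's actual tier depends on a legal assessment of its intended purpose, not a static lookup table.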

Transparency Requirements

Beginning August 2, 2026, providers must inform users when they are interacting with an AI system, unless this is obvious from the context. Additional transparency obligations apply to systems involving:

  • Emotion Recognition
  • Biometric Categorization
  • Deepfakes

This reflects an increasing concern regarding the manipulation of media and personal data through AI technologies.
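For a chatbot provider, the basic disclosure obligation can be satisfied with a simple wrapper around the system's reply. The sketch below is a minimal illustration; the notice wording and the `generate` callable are assumptions for demonstration, as the Act does not prescribe specific text.

```python
def respond(message: str, generate) -> str:
    """Prefix an AI-generated reply with a disclosure notice.

    `generate` is any callable that produces the model's reply.
    The wording of the notice is illustrative, not mandated text.
    """
    DISCLOSURE = "[You are chatting with an AI assistant.] "
    return DISCLOSURE + generate(message)

# Example usage with a stubbed-out reply generator:
reply = respond("What are your opening hours?", lambda m: "We open at 9am.")
print(reply)  # [You are chatting with an AI assistant.] We open at 9am.
```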

Prohibited AI Applications

The Act explicitly bans specific AI applications that are considered to pose unacceptable risks, including:

  • China-style social scoring systems
  • Real-time remote facial recognition in publicly accessible spaces, subject to narrow law-enforcement exceptions

The ongoing consultation welcomes comprehensive feedback on these prohibited uses, with the European Commission set to publish guidance on defining AI systems and banned applications in early 2025.

Implementation Framework

For effective implementation of the AI Act, EU Member States are required to establish or designate three types of authorities:

  • Market Surveillance Authorities
  • Notifying Authorities
  • National Public Authorities responsible for enforcing fundamental rights

Member States have discretion to structure these authorities in a manner that fits their national regulatory frameworks, as illustrated by Spain’s centralized approach and Finland’s proposed decentralized model. This flexibility aims to ensure consistent enforcement across the EU while accommodating differences in national administrative structures.

Conclusion

The EU’s consultation represents a crucial opportunity for stakeholders in the AI sector to influence the regulatory landscape. By providing feedback on the definitions and applications of AI systems, participants can help shape a framework that balances innovation with the need for safety and ethical standards in artificial intelligence.
