Why Some Experts Believe We Should Ban Artificial Intelligence: Insights from the EU AI Act

Introduction to the EU AI Act

The European Union’s Artificial Intelligence Act (EU AI Act) marks a significant milestone in AI regulation, aiming to protect citizens’ rights while fostering innovation. Signed on June 13, 2024 and in force since August 1, 2024, the Act is being phased in gradually, with its first provisions, including the prohibitions on certain AI practices, applying from February 2, 2025. It is designed to provide a comprehensive framework for the development and deployment of artificial intelligence across Europe. The Act’s importance is underscored by its potential to set global standards in AI governance, following in the footsteps of the GDPR and the Digital Markets Act.

Banned AI Applications

Biometric Categorization and Facial Image Scraping

One of the most controversial aspects of the EU AI Act is its ban on certain AI applications, which has led some experts to argue for a broader movement to ban artificial intelligence. The Act prohibits biometric categorization systems that classify individuals based on sensitive characteristics such as race, gender, or sexual orientation. Additionally, the untargeted scraping of facial images from the internet or CCTV to create databases is strictly forbidden. These measures aim to protect citizens’ privacy and prevent potential abuse by AI systems.

Emotion Recognition and Social Scoring

Further restrictions under the Act include bans on emotion recognition technologies in workplaces and schools, unless used for medical or safety reasons. Social scoring systems, which could lead to discriminatory practices, are also prohibited for both public and private purposes. These prohibitions highlight the EU’s commitment to preventing AI from manipulating human behavior or exploiting vulnerabilities.

Predictive Policing and AI in Law Enforcement

The Act also addresses the use of AI in law enforcement. Predictive policing based solely on profiling is banned, reflecting concerns over bias and fairness in AI-driven law enforcement. However, narrow exemptions remain: real-time remote biometric identification in publicly accessible spaces, for instance, is permitted for law enforcement only in tightly defined situations, such as searching for victims of serious crimes, and subject to prior authorization. These conditions underscore the need for a balanced approach to AI regulation.

Regulatory Framework

Risk-Based Approach

The EU AI Act introduces a risk-based approach to AI regulation, categorizing systems into four levels: unacceptable risk, high risk, limited risk, and minimal risk. This approach ensures that high-risk AI applications are subject to stricter regulations, while allowing innovation in lower-risk areas. This structured framework is crucial for organizations developing and deploying AI technologies, as it provides clear guidelines on compliance requirements.
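The four tiers can be pictured as a simple triage step at the start of a compliance workflow. The sketch below is purely illustrative: the tier assignments are hypothetical examples, and real classification requires legal analysis of the Act's Article 5 prohibitions and Annex III high-risk categories, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g. social scoring)
    HIGH = "high"                  # strict obligations before deployment
    LIMITED = "limited"            # mainly transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical example mapping for illustration only; real classification
# depends on detailed legal criteria in the Act and its annexes.
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "untargeted_face_scraping": RiskTier.UNACCEPTABLE,
    "medical_diagnosis_support": RiskTier.HIGH,
    "exam_proctoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return the illustrative tier for a use case, defaulting to HIGH
    so that unclassified systems receive the strictest internal review."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.HIGH)
```

Defaulting unknown systems to the high-risk tier is a conservative design choice: it forces a review rather than silently treating a novel use case as low risk.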

Obligations for Providers and Deployers

Under the Act, AI providers and deployers are required to adhere to several obligations, including technical documentation, data quality maintenance, human oversight, and transparency. These requirements aim to ensure that AI systems are safe, reliable, and free from bias. Providers must also ensure that their staff possess sufficient AI literacy, underscoring the importance of education and training in the AI sector.

Enforcement and Penalties

Enforcement of the EU AI Act is managed by national market surveillance authorities, with the European AI Office overseeing general-purpose AI models. Penalties are tiered: fines of up to EUR 35 million or 7% of worldwide annual turnover for engaging in prohibited AI practices, and up to EUR 15 million or 3% of worldwide turnover for most other violations, including those by general-purpose AI model providers. This robust enforcement mechanism is designed to ensure adherence to the Act’s provisions and promote accountability among AI developers and users.

Real-World Examples and Case Studies

Biometric Surveillance

Biometric surveillance has raised significant privacy concerns, leading to its prohibition under the EU AI Act. Cases such as Clearview AI, which was fined by several European data protection authorities for scraping facial images from the web without consent, illustrate how such practices can infringe on individuals’ rights. The Act’s ban on these technologies is a response to these challenges, emphasizing the need for ethical AI deployment.

AI in Healthcare

High-risk AI systems in healthcare, such as those used for diagnosis and treatment recommendations, are subject to stringent regulations under the Act. These systems must demonstrate compliance with safety and transparency standards, ensuring that they provide accurate and unbiased results.

AI in Education

The use of AI in educational settings is another area of focus. The Act mandates compliance with strict guidelines to protect students’ privacy and prevent discrimination. Educational institutions must ensure that their AI systems adhere to these standards to provide a safe learning environment.

Actionable Insights and Best Practices

Establishing Risk Management Systems

To comply with the EU AI Act, businesses should establish comprehensive risk management systems to assess and mitigate AI risks. This involves identifying potential risks associated with AI applications and implementing strategies to address them effectively.
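A common starting point for such a system is a risk register that scores each identified risk by likelihood and impact so that mitigation effort goes to the worst risks first. The sketch below is a minimal, hypothetical illustration of that idea, not a compliance tool; the example risks, scales, and mitigations are invented for demonstration.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain) -- illustrative scale
    impact: int       # 1 (negligible) .. 5 (severe)   -- illustrative scale
    mitigation: str = "unassigned"

    @property
    def score(self) -> int:
        """Simple likelihood-times-impact score for prioritization."""
        return self.likelihood * self.impact

# Hypothetical example entries, for illustration only.
register = [
    Risk("Training data under-represents a protected group", 3, 5,
         "Bias audit before each release"),
    Risk("Model drift degrades accuracy in production", 4, 3,
         "Monthly performance monitoring"),
]

# Review the highest-scoring risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:>2}] {risk.description} -> {risk.mitigation}")
```

In practice the register would also record owners, review dates, and links to the technical documentation the Act requires, but the prioritization logic stays the same.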

Ensuring Data Quality and Transparency

  • Maintain high-quality data inputs for AI systems.
  • Ensure transparency in AI operations to build trust and accountability.
  • Regularly review and update data quality practices to comply with evolving regulations.
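The first bullet can be made concrete with automated completeness checks on incoming data. The helper below is an assumed, minimal sketch: real data-governance obligations under the Act go well beyond completeness (representativeness, bias examination, documented provenance), and the field names here are hypothetical.

```python
import math

def check_data_quality(records, required_fields):
    """Flag records whose required fields are missing or non-finite.

    Returns a list of (record_index, field, problem) tuples. This is a
    bare-minimum illustration of a data-quality gate, not a substitute
    for the Act's full data-governance requirements.
    """
    issues = []
    for i, rec in enumerate(records):
        for field in required_fields:
            value = rec.get(field)
            if value is None:
                issues.append((i, field, "missing"))
            elif isinstance(value, float) and not math.isfinite(value):
                issues.append((i, field, "non-finite"))
    return issues
```

Running such a check on every ingestion batch, and logging the results, also produces the kind of audit trail that supports the transparency obligations discussed above.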

Implementing Human Oversight

Integrating human oversight into AI decision-making processes is crucial for ensuring ethical AI deployment. This involves assigning human operators to monitor AI systems and intervene when necessary to prevent errors or biases.
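One widely used pattern for this is a confidence-gated decision: the system acts automatically only when the model is confident, and routes borderline cases to a human reviewer. The snippet below sketches that pattern under assumed inputs; the threshold, score semantics, and labels are all hypothetical and would need to be set per use case.

```python
def decide(score: float, threshold: float = 0.9):
    """Gate an automated decision behind a confidence threshold.

    `score` is an assumed model confidence in [0, 1] that the case should
    be approved. Confident cases are decided automatically; everything in
    the uncertain middle band is escalated to a human reviewer.
    """
    if score >= threshold:
        return ("approve", "automated")
    if score <= 1 - threshold:
        return ("reject", "automated")
    return ("pending", "human_review")
```

The key design property is that the uncertain band, where AI errors and biases are most likely, never produces a final decision without a person in the loop.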

Challenges & Solutions

Compliance Challenges

Navigating the complexities of the EU AI Act can be challenging for organizations, particularly in balancing AI innovation with data privacy requirements. The regulatory landscape is constantly evolving, requiring businesses to stay informed and adapt to new developments.

Solutions

  • Engage with national supervising authorities for guidance on compliance.
  • Develop internal AI ethics frameworks to align with regulatory standards.
  • Invest in AI governance tools and platforms to manage compliance effectively.

Latest Trends & Future Outlook

Emerging AI Technologies

The rapid advancement of generative AI and other emerging technologies presents new challenges and opportunities for AI regulation. The EU AI Act is expected to influence global regulatory frameworks, setting a precedent for future developments.

Global AI Governance

The EU AI Act’s impact on global AI governance is significant, as it serves as a model for other regions developing their regulatory frameworks. This influence is likely to grow as more countries adopt similar approaches to AI regulation.

Future Developments

As the EU AI Act continues to evolve, businesses must prepare for potential updates and changes to the regulatory landscape. Staying informed and proactive in compliance efforts will be essential for navigating future developments.

Conclusion

While the EU AI Act represents a balanced approach to AI regulation, some experts believe that certain applications warrant a broader movement to ban artificial intelligence. The Act’s provisions aim to protect citizens’ rights and promote ethical AI deployment, setting a global standard for AI governance. As AI technologies continue to evolve, it is crucial for businesses and policymakers to stay informed and engaged in the ongoing dialogue surrounding AI regulation.
