Missteps in AI Deployment: Lessons from the European Parliament

On April 3, 2025, it was reported that the European Parliament had selected Claude, an AI model developed by Anthropic, to power a new historical archive that lets citizens ask questions about the Parliament's history and receive AI-generated answers. The decision was positioned as a step toward a more trustworthy alternative to giants such as Meta, Google, and OpenAI.

However, a closer examination reveals significant problems with the reliability of the technology the Parliament chose. Claude, like other large language models, has a documented propensity to generate false outputs, which raises concerns about the integrity of the information the archive provides to the public.

Blind Faith in Anthropic’s AI Constitution

Claude operates according to principles outlined in what Anthropic terms Constitutional AI. The company claims these principles, centered on being “helpful, honest, and harmless,” are designed to align the model's outputs with ethical standards. No independent assessment, however, has verified the efficacy or trustworthiness of these criteria. The Parliament's Archive Unit nevertheless appears to accept Anthropic's claims without scrutiny.

Notably, Members of the European Parliament previously advocated for provisions in the AI Act to mitigate the risks of generative AI systems. Ironically, the Parliament's Archive Unit opted to rely on Anthropic's assertions rather than on the safeguards established by the AI Act. Documents reveal that the chosen model was justified merely by its purported Constitutional AI approach, which was said to ensure compliance with human rights standards.

Concerns Over Data Processing and Accuracy

Despite such claims, serious questions arise about how the Parliament assessed compliance with the GDPR and the AI Act. Anthropic has acknowledged that it trains its models on web-scraped data, raising potential legal issues around the inclusion of sensitive personal information.

Moreover, the nature of large language models (LLMs) such as Claude poses risks beyond legality: these models are known to memorize training data and to produce outputs that are misleading or simply wrong. The Parliament's decision to deploy an LLM without thoroughly investigating alternatives, such as conventional retrieval over the digitized archive itself, is a troubling oversight.

Results from Testing and Deployment

When subjected to testing, Claude's performance was found lacking. In a set of thirty questions posed in French, Claude misidentified the first President of the European Commission, naming “Robert Schuman 7,” which is not a person but an address in Brussels (the first President was in fact Walter Hallstein). The incident underscores how unreliable Claude can be when asked for accurate historical information.

Despite these shortcomings, the Parliament has committed to using Claude without adequately assessing the risks of public deployment. The underlying system was designed for a business-to-business context, yet it has been made publicly accessible, where its confident-sounding errors risk being taken as fact.

Governance and Control Issues

Further complicating matters is the governance structure surrounding the AI deployment. The European Commission acts as an intermediary for cloud services, but there is no direct contract between the Parliament and Anthropic. This raises questions about accountability and control, particularly when the Parliament relies heavily on Amazon’s infrastructure and AI models.

The lack of a formal contract with Anthropic means the European Parliament may not have the oversight needed to ensure the AI operates within accepted ethical and legal frameworks. The head of the Parliament's Archive Unit has remarked on the need for ongoing control over generative AI solutions, yet the current reliance on external providers such as Amazon indicates precisely a loss of that control.

Conclusion

The experience of the European Parliament serves as a cautionary tale about the deployment of generative AI technologies. The issues surrounding the selection of Claude highlight the need for rigorous assessments and adherence to established guidelines before implementing AI systems that can impact public access to information. As generative AI continues to evolve, the lessons learned from this case should inform future decisions in both public and private sectors.
