DeepSeek Adapts AI for Italy Amid Regulatory Scrutiny

Chinese artificial intelligence company DeepSeek has announced plans to launch a national version of its chatbot, tailored to Italian regulatory requirements. The move follows growing scrutiny from Italy's competition watchdog, the AGCM, which has stepped up its oversight of artificial intelligence (AI), including the phenomenon of fabricated output known as hallucinations.

Regulatory Landscape in Italy

Italy is regarded as one of the most stringent countries in the European Union on AI regulation. The AGCM frequently investigates major tech companies, including Meta and Google, and its probes often end in fines for noncompliance and in crackdowns on issues such as sports-streaming piracy.

A significant challenge lies in defining what constitutes a search engine. The term was traditionally associated with platforms such as Google or Yahoo, but the advent of AI chatbots has broadened it: these systems pull data from many sources and can, at times, assemble misleading narratives.

DeepSeek’s Commitment to Reducing Hallucinations

DeepSeek itself has acknowledged that hallucinations are a global problem; as the AGCM put it: “[DeepSeek] has stated that the phenomenon of AI model hallucinations is a global challenge that cannot be entirely eliminated.” In light of this, DeepSeek has committed to efforts to reduce hallucinations, a move the regulator has welcomed.

However, how effective these measures will prove remains uncertain. DeepSeek has begun a series of workshops to educate its staff on Italian law and compliance requirements, and the company is expected to submit a detailed report to the AGCM formalizing its commitments. Failure to comply with these stipulations could result in a fine of up to €10 million (approximately $11.7 million).

Technical Improvements and User Interface Changes

According to Fang Liang, a spokesperson for Concordia AI, modifications to the user interface and to terms and conditions are relatively straightforward, but technical improvements pose a greater challenge. That gap underscores the complexity of bringing generative AI up to the required standards.

Implications for DeepSeek’s Market Presence

Hallucinations are not unique to DeepSeek; they affect all generative AI systems. Researchers at leading organizations such as OpenAI have raised concerns that existing training methodologies often reward guesswork rather than the acknowledgment of uncertainty.
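The incentive problem described above can be illustrated with a toy expected-score calculation. This is a hedged, illustrative sketch, not code from DeepSeek or OpenAI; the function, parameters, and numbers are hypothetical. The point is simply that under accuracy-only grading, where an abstention earns nothing, even a low-confidence guess scores higher in expectation than "I don't know."

```python
# Illustrative sketch (hypothetical, not from the article): why
# accuracy-only grading rewards guessing over admitting uncertainty.

def expected_score(p_correct: float, abstains: bool,
                   credit_for_abstain: float = 0.0) -> float:
    """Expected score on one question under simple 1/0 grading.

    p_correct: the model's chance of answering correctly if it guesses.
    abstains: if True, the model answers "I don't know" instead.
    credit_for_abstain: score granted for abstaining
        (0 under accuracy-only grading).
    """
    if abstains:
        return credit_for_abstain
    # Guessing earns 1 point with probability p_correct, else 0.
    return p_correct

# A model that is only 20% sure still scores better by guessing
# than by abstaining when abstention earns nothing:
guess = expected_score(0.2, abstains=False)   # expected 0.2
idk = expected_score(0.2, abstains=True)      # expected 0.0
assert guess > idk

# Granting partial credit for honest abstention flips the incentive
# on low-confidence questions:
idk_credit = expected_score(0.2, abstains=True, credit_for_abstain=0.25)
assert idk_credit > guess
```

Under the first scoring rule, a training or evaluation pipeline that only counts correct answers implicitly pushes models toward confident guesses, which is one framing of why hallucinations persist.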

The AGCM has made clear that DeepSeek must improve the transparency of its disclosures about the risks of hallucinations. That commitment could pave the way for the chatbot's return to the Italian market, after it was pulled from app stores in January of last year over data-handling concerns. Reinstatement will depend largely on whether regulators find the company's transparency measures satisfactory, as well as on how the service is classified under the EU's Digital Services Act.

In conclusion, DeepSeek’s adaptation to Italian regulations marks a significant step in addressing the challenges of AI hallucinations while navigating the complex regulatory landscape in Europe.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...