DeepSeek Tunes Its AI for Italy After Hallucination Probe
Chinese artificial intelligence company DeepSeek has announced plans to launch a national version of its chatbot tailored to Italian regulatory requirements. The move follows growing scrutiny from Italy's competition and consumer watchdog, the AGCM, which has been stepping up enforcement around artificial intelligence (AI), particularly over the phenomenon known as hallucinations, in which a model presents fabricated information as fact.
Regulatory Landscape in Italy
Italy is recognized as one of the most stringent countries in the European Union when it comes to AI oversight. The AGCM regularly investigates major tech companies, including Meta and Google, and has fined them for noncompliance, while Italy's communications regulator, AGCOM, has led separate crackdowns on issues such as sports streaming piracy.
A significant complication lies in defining what counts as a search engine. Traditionally, the term meant platforms like Google or Yahoo, but AI chatbots have blurred that line: they synthesize answers from many sources, sometimes producing misleading narratives, and how such services are classified determines which obligations apply to them under EU law.
DeepSeek’s Commitment to Reducing Hallucinations
The AGCM acknowledges that hallucinations are an industry-wide problem, relaying DeepSeek’s own position through a spokesperson: “[DeepSeek] has stated that the phenomenon of AI model hallucinations is a global challenge that cannot be entirely eliminated.” Against that backdrop, DeepSeek has committed to measures aimed at reducing hallucinations, a commitment the regulator has welcomed.
How effective those measures will be remains to be seen. DeepSeek has begun a series of workshops to educate its staff on Italian law and compliance requirements, and the company is expected to submit a detailed report to the AGCM formalizing its commitments. Failure to meet these stipulations could bring a fine of up to €10 million (approximately $11.7 million).
Technical Improvements and User Interface Changes
According to Fang Liang, a spokesperson for Concordia AI, modifications to the user interface and to terms and conditions are relatively straightforward, but genuine technical improvements are harder: reducing hallucinations means changing how a model is trained, grounded, or constrained, not just how its output is presented.
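As a concrete, if simplified, illustration of what a technical mitigation can look like (a generic sketch, not DeepSeek’s disclosed approach), some providers gate answers on the model’s own confidence and fall back to an explicit uncertainty message when that confidence is low. The function names and the 0.75 threshold below are hypothetical.

```python
# Minimal sketch of confidence-gated answering, one common hallucination
# mitigation. All names and the threshold value are illustrative; they are
# not part of any DeepSeek or AGCM specification.

import math

CONFIDENCE_THRESHOLD = 0.75  # assumed cutoff; in practice tuned per product


def sequence_confidence(token_logprobs: list[float]) -> float:
    """Turn per-token log-probabilities into a rough whole-answer confidence.

    Uses the geometric mean of token probabilities, one simple proxy among many.
    """
    if not token_logprobs:
        return 0.0
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)


def answer_with_abstention(answer: str, token_logprobs: list[float]) -> str:
    """Return the model's answer only if its confidence clears the threshold."""
    confidence = sequence_confidence(token_logprobs)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    return "I'm not certain about this; please verify with a primary source."


# Example: a confident answer passes through, a shaky one is replaced.
print(answer_with_abstention("Rome is the capital of Italy.", [-0.05, -0.02, -0.01]))
print(answer_with_abstention("The fine was exactly 9.3 million euro.", [-1.2, -0.9, -1.5]))
```

Even in this toy version the trade-off is visible: a higher threshold suppresses more hallucinations but also withholds more correct answers, which is part of why such changes are harder to get right than a revised terms-of-service page.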
Implications for DeepSeek’s Market Presence
Hallucinations are not unique to DeepSeek; they affect generative AI systems across the industry. Researchers at leading organizations, including OpenAI, have argued that common training and evaluation methodologies reward confident guessing rather than an honest acknowledgment of uncertainty.
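That concern about incentives can be made concrete with a little arithmetic. If an evaluation awards a point for a correct answer and nothing for either a wrong answer or an abstention, guessing always has non-negative expected value and the model is never rewarded for saying it does not know; penalizing confident errors and giving partial credit for abstaining flips that calculation. The numbers below are illustrative, not taken from any published benchmark.

```python
# Toy expected-value comparison: why accuracy-only scoring rewards guessing.
# Probabilities and reward values here are illustrative assumptions.

def expected_score(p_correct: float, reward_correct: float,
                   reward_wrong: float, reward_abstain: float,
                   abstain: bool) -> float:
    """Expected score of either guessing or abstaining on a single question."""
    if abstain:
        return reward_abstain
    return p_correct * reward_correct + (1 - p_correct) * reward_wrong


p = 0.3  # assumed chance the model guesses correctly on a hard question

# Accuracy-only scoring: 1 point if right, 0 if wrong, 0 for abstaining.
print(expected_score(p, 1.0, 0.0, 0.0, abstain=False))  # 0.3 -> guessing wins
print(expected_score(p, 1.0, 0.0, 0.0, abstain=True))   # 0.0

# Scoring that penalizes confident errors and credits abstention.
print(expected_score(p, 1.0, -1.0, 0.2, abstain=False))  # -0.4 -> guessing loses
print(expected_score(p, 1.0, -1.0, 0.2, abstain=True))   # 0.2
```

Under the first scheme a model should always guess; under the second, abstaining becomes the better policy once its chance of being right drops low enough.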
The AGCM has made clear that DeepSeek must be more transparent in its disclosures about the risks of hallucinations. That commitment could pave the way for DeepSeek’s return to the Italian market, following the removal of its chatbot from app stores in January of last year over data-handling concerns. Reinstatement will depend heavily on whether regulators find the company’s transparency measures satisfactory, and on how the service is classified under the EU’s Digital Services Act.
DeepSeek’s adaptation to Italian requirements marks a notable step in confronting AI hallucinations while navigating Europe’s complex regulatory landscape.