Italy Closes AI Probes After DeepSeek, Mistral Add Disclaimers

Key Takeaways

- Italy's AGCM closed probes into DeepSeek, Mistral AI, and Scaleup over AI hallucination risks after accepting binding commitments
- All three companies will add permanent disclaimers about hallucination risks to their chatbot services
- DeepSeek agreed to invest in technology to reduce hallucinations while acknowledging current tech cannot prevent them entirely
Italy's competition watchdog closed investigations into three AI companies on Thursday after each agreed to warn users about the risks of AI hallucinations. DeepSeek from China, Mistral AI from France, and Turkey's Scaleup Yazilim Hizmetleri all faced scrutiny over what the regulator called unfair commercial practices.
The resolution sets a template for how European regulators might handle AI accuracy concerns: binding commitments rather than fines, and consumer disclosure rather than technical mandates.
What the Companies Agreed To
The AGCM, which enforces both antitrust law and consumer protection in Italy, targeted all three companies over the risk that their chatbots generate inaccurate or misleading content. AI hallucinations occur when language models produce confident-sounding text that is factually wrong or completely fabricated.
Under the binding commitments, the three companies will add permanent disclaimers to their chatbot services. These warnings will appear on their websites and apps, informing users that the AI may produce inaccurate content.
DeepSeek went further than the other two. The Chinese company also agreed to invest in technology to reduce hallucination risks. At the same time, it acknowledged that current technology cannot prevent hallucinations entirely. That admission could prove useful for regulators who want to set realistic expectations for AI accuracy.
Scaleup operates NOVA AI, a cross-platform chatbot service. Its commitments included clarifying that NOVA provides a single interface for accessing multiple chatbots. The company will make clear that it does not aggregate or process responses from those underlying chatbots. This distinction matters because users might otherwise assume NOVA is filtering or verifying information across sources.
A Pattern for European AI Enforcement
Italy has been more aggressive than most European countries in scrutinizing AI services. The AGCM temporarily banned ChatGPT in 2023 over privacy concerns before OpenAI implemented changes to satisfy regulators. This latest action extends that approach to a broader set of AI providers.
The focus on hallucinations represents a new angle. Previous European actions against AI companies centered on data protection and privacy. This case frames AI accuracy as a consumer rights issue. When chatbots generate false information without warning, the regulator argues, they may violate rules against unfair commercial practices.
Accepting binding commitments rather than issuing fines suggests the AGCM wanted to establish disclosure norms quickly. Litigation over penalties could have dragged on for years. Instead, users will see disclaimers within whatever timeframe the commitments specify.
Why These Three Companies
The selection of DeepSeek, Mistral, and Scaleup is notable. DeepSeek burst onto the global stage in early 2025 with models that rivaled OpenAI's at a fraction of the cost. Mistral is Europe's homegrown AI champion, backed by hundreds of millions in venture funding. Scaleup operates a chatbot aggregator that may have attracted attention precisely because it sits between users and multiple AI providers.
The AGCM did not target OpenAI or Google in this round of investigations. That could reflect the timing of when complaints were filed, or it could indicate that those companies already have sufficient disclaimers in place.
The Hallucination Problem
AI hallucinations remain an unsolved technical challenge. Large language models generate text by predicting the most likely next word based on their training data. They have no mechanism to verify whether their outputs are factually accurate. A model can state a false date, invent a nonexistent court case, or fabricate a quotation with the same confidence it uses for accurate information.
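The mechanism described above can be illustrated with a toy sketch. This is not any company's actual model; it is a hypothetical bigram predictor with made-up counts, included only to show the core loop real language models share: pick a statistically likely next token, with no step that checks the result against facts.

```python
# Toy bigram "language model": for each word, counts of words seen after it.
# Illustrative only -- real LLMs use neural networks over subword tokens,
# but the loop is the same: choose a likely continuation, never verify facts.
bigram_counts = {
    "the":       {"capital": 3, "court": 1},
    "capital":   {"of": 4},
    "of":        {"australia": 2, "france": 2},
    "australia": {"is": 2},
    "is":        {"sydney": 3, "canberra": 1},  # the wrong answer is more "likely"
}

def generate(start, max_words=6):
    words = [start]
    while len(words) < max_words:
        nexts = bigram_counts.get(words[-1])
        if not nexts:
            break
        # Greedy decoding: always take the most frequent continuation.
        words.append(max(nexts, key=nexts.get))
    return " ".join(words)

print(generate("the"))  # "the capital of australia is sydney" -- fluent, confident, false
```

Because the model optimizes for likely-sounding sequences rather than truth, the false sentence comes out with exactly the same confidence as a true one would.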
DeepSeek's acknowledgment that current technology cannot prevent hallucinations entirely is an unusual public admission from an AI company. Most providers prefer to emphasize ongoing improvements rather than fundamental limitations. That candor may have helped satisfy Italian regulators.
The disclaimer approach treats hallucinations as a known risk that users should understand, similar to how financial services warn about investment risk. It shifts some responsibility to users to verify important information rather than trusting chatbot outputs blindly.
Frequently Asked Questions
What are AI hallucinations?
AI hallucinations occur when language models generate text that sounds confident but contains inaccurate, misleading, or completely fabricated information. The models predict likely word sequences without any mechanism to verify factual accuracy.
Which companies were investigated by Italy's antitrust authority?
Italy's AGCM investigated DeepSeek from China, Mistral AI from France, and Scaleup Yazilim Hizmetleri from Turkey. All three agreed to binding commitments, and the investigations were closed.
What did the AI companies agree to do?
All three agreed to add permanent disclaimers to their chatbot services warning users about hallucination risks. DeepSeek also committed to investing in technology to reduce inaccurate outputs.
Can AI hallucinations be completely prevented?
No. DeepSeek acknowledged as part of its commitments that current technology cannot prevent hallucinations entirely. This is a fundamental limitation of how large language models generate text.
Does this affect OpenAI or Google AI services?
This investigation did not target OpenAI or Google. The AGCM focused specifically on DeepSeek, Mistral AI, and Scaleup in this round of enforcement.
Source: Tech-Economic Times / ET
Huma Shazia
Senior AI & Tech Writer