
Italy Closes AI Probes After DeepSeek, Mistral Add Disclaimers

Huma Shazia · 30 April 2026 at 3:48 pm · 4 min read

Key Takeaways

  • Italy's AGCM closed probes into DeepSeek, Mistral AI, and Scaleup over AI hallucination risks after accepting binding commitments
  • All three companies will add permanent disclaimers about hallucination risks to their chatbot services
  • DeepSeek agreed to invest in technology to reduce hallucinations while acknowledging current tech cannot prevent them entirely

Italy's competition watchdog closed investigations into three AI companies on Thursday after each agreed to warn users about the risks of AI hallucinations. DeepSeek from China, Mistral AI from France, and Turkey's Scaleup Yazilim Hizmetleri all faced scrutiny over what the regulator called unfair commercial practices.

The resolution sets a template for how European regulators might handle AI accuracy concerns: binding commitments rather than fines, and disclosure to consumers rather than technical mandates.

What the Companies Agreed To

The AGCM, which enforces both antitrust law and consumer protection in Italy, targeted all three companies over the risk that their chatbots generate inaccurate or misleading content. AI hallucinations occur when language models produce confident-sounding text that is factually wrong or completely fabricated.

Under the binding commitments, the three companies will add permanent disclaimers to their chatbot services. These warnings will appear on their websites and apps, informing users that the AI may produce inaccurate content.
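The commitments describe what users must see, not how services should surface it. As a purely hypothetical sketch, a service could attach a persistent disclaimer to every response payload so the interface renders it alongside each answer; the field names and wording below are invented for illustration and do not come from any company's commitments.

```python
# Hypothetical sketch: attaching a permanent hallucination disclaimer
# to every chatbot response. Nothing here reflects how DeepSeek,
# Mistral, or Scaleup will actually implement their disclaimers.

DISCLAIMER = (
    "This AI assistant may produce inaccurate or fabricated content. "
    "Verify important information independently."
)

def wrap_response(model_output: str) -> dict:
    """Return the model output alongside a disclaimer the UI must
    render with every answer, not just once at sign-up."""
    return {
        "answer": model_output,
        "disclaimer": DISCLAIMER,  # shown persistently in the UI
    }

if __name__ == "__main__":
    print(wrap_response("Rome became Italy's capital in 1871."))
```

The key property a regulator would likely look for is that the disclaimer travels with every answer rather than being buried once in a terms-of-service page.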

DeepSeek went further than the other two. The Chinese company also agreed to invest in technology to reduce hallucination risks. At the same time, it acknowledged that current technology cannot prevent hallucinations entirely. That admission could prove useful for regulators who want to set realistic expectations for AI accuracy.

Scaleup operates NOVA AI, a cross-platform chatbot service. Its commitments included clarifying that NOVA provides a single interface for accessing multiple chatbots. The company will make clear that it does not aggregate or process responses from those underlying chatbots. This distinction matters because users might otherwise assume NOVA is filtering or verifying information across sources.
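The distinction Scaleup agreed to clarify can be illustrated with a hypothetical sketch: a pass-through interface forwards a prompt to one chosen backend and returns its answer unchanged, while an aggregator would merge or cross-check answers from several backends. The function names and backends below are invented for contrast; nothing here reflects NOVA's actual code.

```python
# Hypothetical contrast between a pass-through router (what NOVA says
# it is) and an aggregator (what NOVA says it is NOT). Backends are
# stand-ins for underlying chatbot APIs.

from typing import Callable

Backend = Callable[[str], str]

def route(prompt: str, backend: Backend) -> str:
    """Pass-through: one backend, answer returned verbatim.
    No filtering, merging, or verification happens here."""
    return backend(prompt)

def aggregate(prompt: str, backends: list[Backend]) -> str:
    """Aggregation: query several backends and combine the answers.
    This is the behavior users might wrongly assume NOVA performs."""
    answers = [b(prompt) for b in backends]
    return "\n---\n".join(answers)  # naive merge, shown for contrast
```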

A Pattern for European AI Enforcement

Italy has been more aggressive than most European countries in scrutinizing AI services. Its data protection authority, the Garante, temporarily banned ChatGPT in 2023 over privacy concerns before OpenAI implemented changes to satisfy regulators. This latest action extends that scrutiny to a broader set of AI providers.

The focus on hallucinations represents a new angle. Previous European actions against AI companies centered on data protection and privacy. This case frames AI accuracy as a consumer rights issue. When chatbots generate false information without warning, the regulator argues, they may violate rules against unfair commercial practices.

Accepting binding commitments rather than issuing fines suggests the AGCM wanted to establish disclosure norms quickly. Litigation over penalties could have dragged on for years. Instead, users will see disclaimers within whatever timeframe the commitments specify.

Why These Three Companies

The selection of DeepSeek, Mistral, and Scaleup is notable. DeepSeek burst onto the global stage in early 2025 with models that rivaled OpenAI's at a fraction of the cost. Mistral is Europe's homegrown AI champion, backed by hundreds of millions in venture funding. Scaleup operates a chatbot aggregator that may have attracted attention precisely because it sits between users and multiple AI providers.

The AGCM did not target OpenAI or Google in this round of investigations. That could reflect the timing of when complaints were filed, or it could indicate that those companies already have sufficient disclaimers in place.


The Hallucination Problem

AI hallucinations remain an unsolved technical challenge. Large language models generate text by predicting the most likely next word based on their training data. They have no mechanism to verify whether their outputs are factually accurate. A model can state a false date, invent a nonexistent court case, or fabricate a quotation with the same confidence it uses for accurate information.
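To make that concrete, here is a minimal, illustrative generation loop using the small open gpt2 model from Hugging Face's transformers library, chosen purely as an example; none of the investigated companies' models are shown. At each step the loop takes the highest-probability next token, and no step consults any source of facts.

```python
# Minimal sketch of autoregressive text generation. The model only
# scores which token is most likely to come next; nothing in this
# loop checks whether the emitted text is factually true.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The capital of Italy is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(5):
        logits = model(ids).logits[:, -1, :]           # next-token scores
        next_id = logits.argmax(dim=-1, keepdim=True)  # most likely token
        ids = torch.cat([ids, next_id], dim=-1)        # append, continue

print(tokenizer.decode(ids[0]))  # fluent output, never fact-checked
```

Because the objective is fluency rather than truth, a plausible-sounding wrong answer and a correct one are produced by exactly the same mechanism.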

DeepSeek's acknowledgment that current technology cannot prevent hallucinations entirely is an unusual public admission from an AI company. Most providers prefer to emphasize ongoing improvements rather than fundamental limitations. That candor may have helped satisfy Italian regulators.

The disclaimer approach treats hallucinations as a known risk that users should understand, similar to how financial services warn about investment risk. It shifts some responsibility to users to verify important information rather than trusting chatbot outputs blindly.



Frequently Asked Questions

What are AI hallucinations?

AI hallucinations occur when language models generate text that sounds confident but contains inaccurate, misleading, or completely fabricated information. The models predict likely word sequences without any mechanism to verify factual accuracy.

Which companies were investigated by Italy's antitrust authority?

Italy's AGCM investigated DeepSeek from China, Mistral AI from France, and Scaleup Yazilim Hizmetleri from Turkey. All three agreed to binding commitments and the investigations were closed.

What did the AI companies agree to do?

All three agreed to add permanent disclaimers to their chatbot services warning users about hallucination risks. DeepSeek also committed to investing in technology to reduce inaccurate outputs.

Can AI hallucinations be completely prevented?

No. DeepSeek acknowledged as part of its commitments that current technology cannot prevent hallucinations entirely. This is a fundamental limitation of how large language models generate text.

Does this affect OpenAI or Google AI services?

This investigation did not target OpenAI or Google. The AGCM focused specifically on DeepSeek, Mistral AI, and Scaleup in this round of enforcement.


Source: Tech-Economic Times / ET

Huma Shazia

Senior AI & Tech Writer
