A European study finds Mistral AI's Le Chat to be the least data-intrusive AI chatbot, while Meta AI, Google's Gemini, and Microsoft's Copilot rank as the worst at protecting user information.
Mistral AI's Le Chat Leads the Pack in AI Data Privacy, Meta AI Lags Behind
In a comprehensive European study assessing data privacy among leading artificial intelligence chatbots, French firm Mistral AI's Le Chat has emerged as the most privacy-conscious platform. The research, conducted by Incogni, a service specialising in the removal of personal information, provides a rare independent evaluation of how popular AI chatbots handle sensitive user data.
The study scrutinised widely used generative AI platforms, including OpenAI's ChatGPT, Meta AI, Google's Gemini, Microsoft's Copilot, xAI's Grok, Anthropic's Claude, Inflection AI's Pi AI, and China-based DeepSeek.
Each platform was rated on a privacy scale from zero (most privacy-friendly) to one (least privacy-friendly), based on 11 key criteria. The assessment examined how models are trained, what data they collect, how transparently they operate, and whether user-generated content is shared with third parties.
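The article does not detail how Incogni combines those 11 criteria into a single figure. Purely as an illustration of how such a composite zero-to-one rating could be produced, the hypothetical Python sketch below averages per-criterion scores with equal weights; the criterion names and the equal weighting are assumptions, not the study's actual method.

    # Hypothetical sketch: criterion names, scores and equal weighting are
    # illustrative assumptions, not Incogni's published methodology.
    def composite_privacy_score(criterion_scores: dict[str, float]) -> float:
        """Average per-criterion scores (0 = most privacy-friendly,
        1 = least privacy-friendly) into a single 0-1 rating."""
        if not criterion_scores:
            raise ValueError("at least one criterion score is required")
        return sum(criterion_scores.values()) / len(criterion_scores)

    # Example with three of the (assumed) eleven criteria scored for one platform.
    example = {
        "data_collected": 0.2,
        "shared_with_third_parties": 0.8,
        "prompts_used_for_training": 0.1,
    }
    print(round(composite_privacy_score(example), 2))  # 0.37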
Why Mistral AI's Le Chat Tops the Privacy Rankings
According to Incogni’s findings, Le Chat from Mistral AI sets itself apart by collecting only "limited" personal information and demonstrating strong consideration for privacy-specific concerns in AI systems. The model is among the few that restrict user-generated requests from being shared beyond essential service providers.
Pi AI, developed by Inflection AI, follows a similar approach in limiting data sharing, offering users greater peace of mind when interacting with the platform.
The report highlights Le Chat’s commitment to privacy as a potential benchmark for the AI industry, where growing concerns about how personal data is harvested and exploited have led to increased regulatory scrutiny.
ChatGPT, Grok and Claude: Middle Ground on Privacy
OpenAI's ChatGPT ranked second overall, praised for its clear privacy policy, which informs users about how their data is used. However, the study raised concerns about the platform's training practices and the extent to which user data feeds into OpenAI's wider product offerings.

xAI's Grok, the chatbot from billionaire entrepreneur Elon Musk's AI venture, claimed third place, although researchers flagged transparency issues and the extent of its data collection. Similarly, Anthropic's Claude performed moderately well but faced criticism over how user interactions with the model are handled.
Meta AI, Gemini and Copilot: Bottom of the Class
At the opposite end of the spectrum, Meta AI was ranked the most privacy-invasive chatbot, followed closely by Google's Gemini and Microsoft's Copilot.
The study found that these platforms give users minimal control over how their prompts are used, offering little or no way to opt out of having their conversations fed back into model training.
Incogni warned that the practices of companies at the bottom of the ranking raise significant privacy concerns, particularly for individuals and organisations seeking to use AI tools without compromising sensitive information.
Growing Demand for AI Privacy Transparency
The report comes at a time when global regulators and privacy advocates are increasingly scrutinising the AI industry. With generative AI tools becoming embedded in everyday life, the demand for greater transparency, data control, and user protection is expected to intensify.
Mistral AI's Le Chat, by prioritising privacy, has set an example in an evolving landscape where safeguarding personal data is becoming as crucial as technological innovation itself.
