Over a million people each week display worrying indicators when chatting with ChatGPT, according to new data from OpenAI on suicidal ideation, mental health risks and its AI safety efforts.

Alarming Signals in the Age of Artificial Intelligence

Over a million people each week display worrying indicators when chatting with ChatGPT, according to new figures released by OpenAI, raising renewed concerns about the impact of artificial intelligence on mental health. The disclosure offers one of the clearest acknowledgements yet from the company that its widely used chatbot is regularly exposed to conversations suggesting emotional distress, suicidal thoughts and psychological crisis.

The findings come at a time when AI tools are increasingly woven into daily life, used for everything from education and work to companionship and emotional support. But as their influence grows, so too does scrutiny over whether such systems may unintentionally intensify existing mental health struggles.

Scale of Suicidal Ideation Revealed

In an update explaining how ChatGPT handles sensitive and high-risk conversations, OpenAI said that more than one million users every week send messages containing what it describes as “clear indications” of possible suicidal planning or intent.

The company stressed that identifying and categorising such conversations is complex, and that the figures are based on early-stage analysis. Nonetheless, the scale of the numbers highlights the extent to which people in distress are turning to AI systems during vulnerable moments.

The disclosure was first reported by The Guardian, which noted that this represents one of the most direct public statements from a major AI developer on the mental health risks associated with large language models.

Psychosis and Mania Among Key Concerns

Beyond suicidal ideation, OpenAI also reported signs of other serious mental health emergencies. The company estimates that around 0.07% of active users in any given week — approximately 560,000 out of 800 million global weekly users — show possible indicators linked to psychosis or manic episodes.

While the proportion appears small, experts note that the absolute numbers are significant given ChatGPT’s vast user base. OpenAI cautioned that such interactions are particularly difficult to assess with certainty and should not be treated as clinical diagnoses.

Growing Scrutiny from Regulators

The release of the data comes amid mounting legal and regulatory pressure on AI companies. OpenAI is currently facing a highly publicised lawsuit brought by the family of a teenage boy who died by suicide after extensive interaction with ChatGPT. The case has intensified debate over the responsibility of AI platforms when users express emotional distress.

In the United States, the Federal Trade Commission (FTC) last month launched a broad investigation into companies developing AI chatbots, including OpenAI. The inquiry is examining how firms assess and mitigate potential harms to children and adolescents, particularly in relation to mental health and emotional wellbeing.

Safety Claims and the GPT-5 Update

In its blog post, OpenAI said it has taken steps to improve user safety through technical updates. The company claimed that its latest GPT-5 model reduced the frequency of “undesirable behaviours” and performed better in safety evaluations.

According to OpenAI, the updated system was tested using more than 1,000 conversations involving self-harm and suicide, with results suggesting improved responses and intervention signals when users appear to be at risk.

However, critics argue that transparency alone is not enough, and that stronger safeguards and external oversight may be required as AI systems continue to scale.

A Defining Challenge for AI Developers

With over a million people each week displaying worrying indicators when chatting with ChatGPT, the figures underline a broader challenge facing the technology industry. AI tools are increasingly becoming informal spaces where users disclose distress, loneliness and despair.

For OpenAI and its peers, the question is no longer whether such conversations are happening, but how responsibly they are handled. With regulators watching closely and public concern growing, the way AI companies respond to these mental health risks may prove critical in shaping trust in artificial intelligence for years to come.