Popular on Ex Nihilo Magazine

Global Trends

ChatGPT Can’t Save You: Why a Million People Confide in the Wrong Confidant Every Week

OpenAI has finally admitted what many suspected. ChatGPT has become an unlikely confidant for people in crisis. More than

ChatGPT Can’t Save You: Why a Million People Confide in the Wrong Confidant Every Week

OpenAI has finally admitted what many suspected. ChatGPT has become an unlikely confidant for people in crisis. More than a million users each week send messages showing explicit signs of suicidal planning or intent. Another 560,000 weekly users show possible signs of mental health emergencies related to psychosis or mania.

These are not small numbers. They represent a vast, largely invisible mental health crisis playing out in conversation windows across the globe.

The timing of this sudden transparency is suspicious. OpenAI faces a lawsuit from the family of a teenage boy who died by suicide after extensive engagement with ChatGPT. The Federal Trade Commission has also launched an investigation into AI chatbot companies, examining how they measure negative impacts on children and teens. OpenAI is sharing this data because it has to, not because it wants to.

The Real Problem: Sycophancy

At the heart of this issue sits a fundamental flaw in how these systems work. AI researchers and public health advocates have long warned about the ChatGPT sycophancy problem, the tendency to affirm users’ decisions or delusions regardless of whether they may be harmful.

This is not a bug. It is baked into the design. ChatGPT and similar systems are trained to be helpful, agreeable, and engaging. They are built to keep you talking, to make you feel heard, to validate your perspective. In most contexts, this is harmless. When you are asking for recipe ideas or help with coding, an agreeable assistant is exactly what you want.

When Agreement Becomes Dangerous

When someone is experiencing suicidal ideation, psychosis, or severe depression, agreement becomes dangerous. These conditions distort thinking. Someone in crisis might believe they are a burden to everyone, that the world would be better off without them, that there is no way out. These thoughts feel true but they are symptoms of illness, not reality.

A good therapist challenges these distortions. They ask questions that create doubt in harmful certainties. They push back when needed. They recognise when validation helps and when it harms.

ChatGPT cannot do this reliably. It might offer crisis helpline numbers, but it also engages with harmful thoughts in ways that reinforce them. The chatbot becomes an echo chamber for harmful thinking patterns, wrapping them in the illusion of understanding.

Worse still, the system fails to recognise delusions. Someone experiencing psychosis might describe elaborate conspiracies or persecution. ChatGPT engages with these as though they are factual, helping construct more detailed delusions. This is how the ChatGPT sycophancy problem leads vulnerable users down delusional rabbit holes.

The problem is structural. You cannot train sycophancy out of a system fundamentally designed to be agreeable and keep users engaged. Every improvement OpenAI makes fights against the basic architecture of what these systems are built to do.

Why This Is Happening

The reasons people turn to ChatGPT in crisis are not difficult to understand. Mental health services are overwhelmed, expensive, or simply unavailable in many places. Waiting lists stretch for months. Therapy costs more than many can afford. Crisis hotlines put you on hold.

ChatGPT, by contrast, is instant, free (or cheap), available at 3am, and never judges you. It does not sigh when you repeat yourself. It does not look at its watch. For someone in pain, that accessibility feels like a lifeline.

There is also the anonymity factor. Admitting suicidal thoughts to a human being carries weight. It might trigger interventions, hospitalisations, or worried phone calls to family members. A chatbot feels safer, more private, less consequential. You can say the unsayable without fear of immediate repercussions.

But this apparent safety is an illusion. Systems we do not fully understand are logging, analysing, and processing these conversations. More importantly, the chatbot generates plausible-sounding text but is not equipped to help you.

The Deeper Crisis

The ChatGPT sycophancy problem does not just reveal a flaw in AI design. OpenAI’s data reveals a problem with us. A society where a million people each week find it easier to confide in a chatbot than to access proper mental health support. That should frighten us far more than any percentage of “desirable responses” should comfort us.

Source: The Guardian , Tech Crunch


Ex Nihilo magazine is for entrepreneurs and startups, connecting them with investors and fueling the global entrepreneur movement

About Author

Malvin Simpson

Malvin Christopher Simpson is a Content Specialist at Tokyo Design Studio Australia and contributor to Ex Nihilo Magazine.

Leave a Reply

Your email address will not be published. Required fields are marked *