AI-Induced Psychosis Poses a Growing Risk, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, the head of OpenAI made a remarkable announcement.

“We made ChatGPT pretty restrictive,” the announcement said, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychosis in adolescents and young adults, I found this an unexpected revelation.

Researchers have identified sixteen cases this year of people showing signs of psychosis – a break with shared reality – in the context of ChatGPT use. Our unit has since identified four more. Alongside these is the widely reported case of a teenager who took his own life after extensive conversations with ChatGPT – conversations in which it encouraged him. If this is what Sam Altman means by “being careful with mental health issues”, it falls short.

The plan, according to his statement, is to become less careful in the near future. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this account, are separate from ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features that OpenAI has just rolled out).

But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and similar large language model chatbots. These tools wrap an underlying algorithmic engine in an interface that mimics conversation, and in doing so implicitly invite the user to believe they are talking to an agent – a being with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds to things is simply what people do. We get angry at our car or our laptop. We wonder what our pet is feeling. We see ourselves everywhere.

The mass adoption of these tools – more than a third of American adults said they had used an AI chatbot in 2024, with 28% reporting ChatGPT specifically – rests, above all, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “generate ideas”, “explore ideas” and “work together” with us. They can be given “individual qualities”. They can use our names. They have approachable identities of their own (the first of these systems, ChatGPT, is, perhaps to the regret of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT often point to its historical predecessor, the Eliza “psychotherapist” chatbot of the mid-1960s, which produced a similar illusion. By today’s standards Eliza was simple: it generated replies by straightforward rules, often restating the user’s message as a question or offering a generic observation. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to believe that Eliza, on some level, understood their feelings. But what modern chatbots produce is more dangerous than the “Eliza illusion”. Eliza merely mirrored; ChatGPT amplifies.
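To make the contrast concrete, here is a loose, modern sketch of the pattern-match-and-reflect trick Eliza relied on. It is an illustration only, not Weizenbaum’s code; the rule and wording are invented for the example.

```python
import re

# A minimal Eliza-style reflector: match a pattern, swap pronouns,
# and hand the user's own words back as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def eliza_reply(message: str) -> str:
    """Reply with a reworded question, or a stock prompt if nothing matches."""
    match = re.match(r"i feel (.*)", message, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return "Please tell me more."

print(eliza_reply("I feel nobody understands me"))
# -> Why do you feel nobody understands you?
```

Nothing is added by the program: whatever the user says comes straight back, lightly reworded. That is mirroring.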

The large language models at the core of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on vast amounts of text: books, social media posts, transcribed video; the larger the corpus, the better the results. Much of this training material is accurate. But it also inevitably includes fiction, half-truths and misconceptions. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, combining it with what is encoded in its training to produce a statistically “likely” response. This is amplification, not mirroring. If the user is mistaken in some way, the model has no reliable means of noticing. It repeats the mistaken idea back, perhaps more fluently and more convincingly. It may add supporting detail. This is how a person can be led into delusion.
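As an illustration only – the function names below are hypothetical, not OpenAI’s actual system or API – this is the shape of the loop described above: every reply is conditioned on the entire accumulated context, so whatever framing the user brings is fed straight back into the next prediction.

```python
# Hypothetical sketch of a chatbot conversation loop, for illustration.

def sample_likely_reply(context: list[dict]) -> str:
    """Stand-in for the language model: a real model would return the most
    statistically plausible continuation of the whole conversation history.
    "Plausible" means consistent with the user's framing, not with reality."""
    return "<model's most plausible continuation of the history so far>"

def chat_turn(context: list[dict], user_message: str) -> str:
    # The user's message is appended to the running context...
    context.append({"role": "user", "content": user_message})
    # ...and the reply is conditioned on everything said so far,
    # including the user's own assumptions and mistakes.
    reply = sample_likely_reply(context)
    context.append({"role": "assistant", "content": reply})
    return reply

# Over many turns the context fills up with the user's premises and the
# model's fluent elaborations of them: an echo that grows louder.
conversation: list[dict] = []
chat_turn(conversation, "My neighbours are sending me coded messages.")
chat_turn(conversation, "What do you think the messages mean?")
```

The point of the sketch is the feedback loop: a false premise, once in the context, shapes every subsequent reply.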

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and regularly do form mistaken beliefs about ourselves and the world. It is the constant give and take of conversation with the people around us that keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not a conversation at all, but an echo chamber in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have kept appearing, and Altman has been walking the claim back. In late summer he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company

Keith Sanchez