AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI CEO Sam Altman made a startling announcement.

“We made ChatGPT fairly restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I can tell you this was news to me.

Experts have recently documented 16 cases of people showing signs of psychosis – losing touch with reality – in the context of ChatGPT use. Our unit has since recorded four more. Alongside these is the widely publicized case of a teenager who died by suicide after discussing his plans with ChatGPT – which approved of them. If this is Sam Altman’s idea of “being careful with mental health issues,” it falls short.

The plan, according to his announcement, is to loosen the restrictions in the near future. “We realize,” he continues, that ChatGPT’s limits “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, are external to ChatGPT. They belong to users, who either have them or do not. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features OpenAI has recently rolled out).

But the “mental health problems” Altman wants to externalize have deep roots in the design of ChatGPT and other AI chatbots. These systems wrap an underlying statistical model in a user interface that simulates conversation, and in doing so implicitly invite the user to believe they are talking to an entity with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is what humans are wired to do. We shout at our cars and computers. We wonder what our pets are thinking. We see ourselves everywhere.

The popularity of these products – nearly four in ten U.S. residents reported using a conversational AI in 2024, with more than one in four naming ChatGPT specifically – rests largely on the strength of this illusion. Chatbots are always-available companions that can, as OpenAI’s own website tells us, “brainstorm,” “discuss concepts” and “partner” with us. They can be given “personality traits.” They can call us by name. They have approachable names of their own (ChatGPT, the first of these products, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it took off, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often point to its distant ancestor, Eliza, a “psychotherapist” chatbot built in the mid-1960s that produced a similar effect. By modern standards Eliza was primitive: it generated responses through simple tricks, often turning the user’s statement back into a question or offering a stock remark. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to believe Eliza, on some level, understood them. But what modern chatbots produce is more insidious than the “Eliza effect.” Eliza only mirrored; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on enormous quantities of it: books, posts, transcripts; the more the better. That training data certainly contains facts. But it also inevitably contains fictions, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own replies, combining all of it with what it absorbed in training to produce a statistically “likely” reply. This is amplification, not echoing. If the user is wrong about something, the model has no way of knowing it. It hands the false idea back, perhaps more fluently or more articulately. Perhaps with added detail. That is how a person can be nudged toward delusional thinking.
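To make that loop concrete, here is a deliberately minimal sketch, in Python, of how such a conversation works under the hood. The generate_plausible_reply function is a hypothetical stand-in for the statistical model, not any real API; the point is only that every reply is conditioned on the accumulated context, which contains whatever the user has claimed, and that nothing in the loop checks those claims against reality.

```python
# Illustrative sketch only: "generate_plausible_reply" is a hypothetical
# placeholder for a language model, not a real library call.

def generate_plausible_reply(context: list[str]) -> str:
    """A real model would return the statistically most plausible next
    message given the whole context -- plausible, not verified as true."""
    latest = context[-1]
    # No notion of truth here: the reply simply builds on what it was given.
    return f"That makes sense. Tell me more about how '{latest}' affects you."

def chat_turn(context: list[str], user_message: str) -> str:
    context.append(user_message)               # the user's claim enters the context
    reply = generate_plausible_reply(context)  # conditioned on every prior turn
    context.append(reply)                      # the reply itself becomes context too
    return reply

if __name__ == "__main__":
    context: list[str] = []
    print(chat_turn(context, "My neighbors are secretly monitoring me."))
    print(chat_turn(context, "Last night I heard clicking sounds on my phone."))
    # The false premise is never challenged; with each turn it is woven more
    # deeply into the context that the next reply will be conditioned on.
```

Nothing in a loop like this pushes back on the user; the only pressure is toward continuations that fit what has already been said.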

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems,” can and regularly do form mistaken beliefs about ourselves or the world. It is the constant friction of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a confidant. An exchange with it is not really a conversation but an echo chamber in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label, and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “overly supportive” behavior. But reports of psychotic episodes have kept coming, and Altman has been walking even this back. In August he suggested that many users liked ChatGPT’s answers because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company

Eric Ball

A tech enthusiast and writer passionate about exploring how innovation shapes our daily lives and future possibilities.