AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, OpenAI CEO Sam Altman made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this an unexpected revelation.
Researchers have documented sixteen cases this year of people developing psychotic symptoms – losing touch with reality – while using ChatGPT. My group has since identified four more. Add to these the now well-known case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize are firmly rooted in the design of ChatGPT and other state-of-the-art AI chatbots. These products wrap a statistical engine in a user interface that mimics conversation, and in doing so implicitly invite the user into the illusion of interacting with an entity that has agency. The illusion is compelling even when, rationally, we know better. Attributing agency is what humans are wired to do. We curse at our car or computer. We wonder what our pet is thinking. We project minds like our own onto the world around us.
The popularity of these tools – 39% of US adults reported using generative AI in 2024, with more than a quarter naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have approachable identities of their own (ChatGPT, the first of these systems, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it rose to prominence, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its historical ancestor, the Eliza “therapist” chatbot of the mid-1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated responses using simple rules, often turning a user’s statement back into a question or offering a generic prompt to continue. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many users seemed to believe that Eliza, in some sense, understood them. But what modern chatbots produce is more dangerous than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
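For the curious, a minimal Python sketch can make “simple rules” concrete. The patterns and templates below are invented for illustration; Weizenbaum’s actual 1966 program worked differently in detail (and predates Python entirely), but the technique – match a phrase, hand it back as a question – is the same.

```python
import re

# Illustrative Eliza-style rules: each pairs a pattern with a template
# that turns the user's own words back into a question.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
]

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # generic fallback when no rule matches

print(eliza_reply("I am sure that no one understands me"))
# -> Why do you say you are sure that no one understands me?
```

Nothing comes back that the user did not put in; the program can only reflect.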
The algorithms at the heart of ChatGPT and today’s other chatbots can generate convincing natural language only because they have been trained on almost inconceivably large volumes of text: books, posts, transcripts; the more the better. This training data certainly includes facts. But it also inevitably includes fiction, half-truths and mistaken beliefs. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own earlier replies, combining it with what is encoded in its training to produce a statistically plausible response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing it. It restates the mistaken belief, perhaps more persuasively or eloquently. It may add supporting detail. This is how someone can be escorted into delusion.
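In outline, the feedback loop looks something like the sketch below. This is schematic, not OpenAI’s implementation: chat_turn and model.generate are hypothetical names standing in for any large language model that produces the most statistically plausible continuation of the text it is given.

```python
# Schematic sketch of a chat loop (hypothetical API, not OpenAI's code).

def chat_turn(model, context: list, user_message: str) -> str:
    # The user's message joins the rolling "context": recent user
    # messages plus the model's own earlier replies.
    context.append({"role": "user", "content": user_message})

    # The model treats every claim in the context, true or false,
    # as text to be plausibly continued; it has no independent check
    # on whether those claims match reality.
    reply = model.generate(context)

    # The reply, which may restate or embellish a mistaken belief,
    # is appended and fed back in on every subsequent turn.
    context.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in this loop consults the world outside the conversation; the only “reality check” available to the model is the context itself.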
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and do develop mistaken beliefs about ourselves or the world. The constant back-and-forth of conversation with other people is what keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not genuine communication but an echo chamber in which much of what we say is enthusiastically reinforced.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of people losing their grip on reality have kept coming, and Altman has been backpedaling. In August he said that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company