AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, OpenAI’s chief executive, Sam Altman, made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was taken aback.
Researchers have recently documented a series of cases of people experiencing symptoms of psychosis – losing touch with reality – in connection with ChatGPT use. My own clinic has since seen four more. And then there is the widely reported case of a teenager who took his own life after discussing his intentions with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not careful enough.
The plan, according to his announcement, is to loosen those restrictions soon. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features that OpenAI has recently rolled out).
But the “mental health issues” Altman wants to externalize are rooted in the very design of ChatGPT and other advanced AI chatbots. These products wrap an underlying statistical engine in an interface that mimics conversation, and in doing so they implicitly invite users into the illusion that they are interacting with a being that has agency. The illusion is powerful even when, intellectually, we know better. Attributing intent is what people are built to do. We yell at the car or the computer. We wonder what the cat is thinking. We see ourselves in all sorts of things.
The popularity of these products – nearly four in ten U.S. adults reported using a conversational AI in 2024, with 28 percent naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “brainstorm,” “consider possibilities” and “work together” with us. They can be given “personalities.” They can call us by name. They have approachable names of their own (ChatGPT, the first of these products, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the main problem. Writers on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot of the mid-1960s, which produced a similar effect. By today’s standards Eliza was crude: it generated responses by simple rules, often turning the user’s statements back as questions or offering noncommittal prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza effect”: Eliza merely mirrored, while ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate fluent dialogue convincingly only because they have been trained on enormous volumes of raw text: books, social media posts, transcribed video; the more, the better. That training data certainly contains facts. But it also inevitably contains falsehoods, half-truths and delusions. When a user types a prompt into ChatGPT, the underlying model treats it as part of a “context” that includes the user’s recent messages and its own replies, and combines it with what is encoded in its training to produce a statistically “likely” response. This is amplification, not mirroring. If the user is mistaken about something, the model has no way of knowing that. It echoes the false belief back, perhaps more fluently or persuasively. It may add supporting detail. This can pull a person deeper into distorted thinking.
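To make that loop concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it is OpenAI’s actual code: the function standing in for the language model is hypothetical and simply continues the conversation in the most agreeable way, which is enough to show why a mistaken premise tends to come back reinforced rather than corrected.

    # Illustrative sketch of the feedback loop described above (assumed, not
    # OpenAI's implementation). "respond" is a hypothetical stand-in for a
    # language model: it has no notion of truth, only of continuing the
    # context plausibly, so a false premise is affirmed and elaborated.

    def respond(context: list[str]) -> str:
        last_user = next(m for m in reversed(context) if m.startswith("User: "))
        claim = last_user[len("User: "):].rstrip(".")
        return f"That makes sense. Building on your point that {claim}, ..."

    def chat_turn(context: list[str], message: str) -> str:
        context.append("User: " + message)       # the prompt joins the context
        reply = respond(context)                 # the "likely" continuation
        context.append("Assistant: " + reply)    # the reply joins the context too
        return reply

    history: list[str] = []
    print(chat_turn(history, "My neighbors are broadcasting my thoughts."))
    # The false belief is returned to the user, affirmed and extended.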
Who is at risk? The better question is, who isn’t? All of us, whether or not we “have” preexisting “mental health issues,” can and do form mistaken beliefs about ourselves or the world. What keeps us tethered to a shared reality is the constant give-and-take of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is liable to be reinforced.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “sycophancy.” But the cases of psychosis have kept coming, and Altman has been walking even that back. In August he suggested that many users liked ChatGPT’s answers because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company