AI Psychosis Poses an Increasing Risk, and ChatGPT Is Heading in a Concerning Direction

On October 14, 2025, the CEO of OpenAI issued an extraordinary announcement. "We made ChatGPT pretty restrictive," the statement said, "to make sure we were being careful with mental health issues." As a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, I found this surprising.

Researchers have recently documented 16 cases of people experiencing symptoms of psychosis – losing touch with reality – in the context of their interactions with ChatGPT. Our team has since recorded four further examples. Added to these is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by "being careful with mental health issues", it is not good enough. And the plan, according to his statement, is to be less careful in the near future.

"We realize," he continues, that ChatGPT's restrictions "made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."

"Mental health problems", on this view, are external to ChatGPT. They belong to individuals, who either have them or do not. Happily, these problems have now been mitigated, even if we are not told how (by "new tools" Altman presumably means the imperfect and easily circumvented parental controls that OpenAI has just launched). But the mental health problems Altman wants to locate elsewhere have deep roots in the design of ChatGPT and other advanced AI chatbots.

These products wrap a fundamentally data-driven engine in an interface that simulates a conversation, and in doing so they subtly nudge the user into believing they are engaging with a presence that has agency. The illusion is powerful even when, intellectually, we know better. Attributing agency is what people naturally do. We get angry with our car or our computer. We wonder what our pet is thinking. We see our own traits everywhere.

The widespread adoption of these tools – more than a third of American adults reported using a chatbot in 2024, with more than a quarter naming ChatGPT specifically – depends in large part on the power of this illusion. Chatbots are always-available partners that can, as OpenAI's website tells us, "think creatively", "consider possibilities" and "work together" with us. They can be given "personalities". They can use our names. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI's marketing team, stuck with the name it had when it went viral, but its most significant competitors are "Claude", "Gemini" and "Copilot").

The illusion itself is not the main problem. Those writing about ChatGPT often mention its early forerunner, the Eliza "therapist" chatbot developed in 1967, which created a similar illusion. By modern standards Eliza was primitive: it generated responses with simple rules, often reflecting the user's statements back as questions or offering generic prompts to continue.
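The trick is easy to show in miniature. The sketch below, in Python, illustrates the kind of keyword-and-reflection rules Eliza relied on; the rules and phrasings here are invented for illustration and are far simpler than Weizenbaum's actual DOCTOR script, but the principle – match a keyword, swap the pronouns, hand the user's own words back as a question – is the same.

```python
import re

# Illustrative, invented rules in the spirit of Eliza; not Weizenbaum's actual script.
PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Your {0}?"),
]

# Content-free fallbacks for when no keyword matches.
FALLBACKS = ["Please go on.", "What does that suggest to you?"]


def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads as a reply."""
    return " ".join(PRONOUN_SWAPS.get(word.lower(), word) for word in fragment.split())


def respond(user_input: str, turn: int = 0) -> str:
    # Try each keyword rule; on a match, reflect the captured fragment back.
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)).rstrip(".!?"))
    return FALLBACKS[turn % len(FALLBACKS)]


if __name__ == "__main__":
    print(respond("I feel that nobody listens to me"))  # Why do you feel that nobody listens to you?
    print(respond("My job is exhausting"))              # Your job is exhausting?
    print(respond("The weather is awful", turn=1))      # What does that suggest to you?
```

Nothing in this loop understands anything; it simply mirrors the user's words back, which is essentially all the original program did.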
Remarkably, Eliza's creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them.

But what contemporary chatbots produce is more dangerous than the "Eliza illusion". Eliza only reflected; ChatGPT amplifies. The large language models at the heart of ChatGPT and other current chatbots can produce fluent dialogue only because they have been trained on vast volumes of text: books, online posts, transcribed video; the more the better. That training data certainly contains truths. But it also inevitably contains fiction, half-truths and false beliefs.

When a user sends ChatGPT a message, the underlying algorithm treats it as part of a "context" that includes the user's previous messages and its own earlier replies, and combines it with what is encoded in its training data to produce a statistically probable answer. This is amplification, not echoing. If the user is mistaken in any respect, the model has no way of knowing that. It repeats the mistake, perhaps more convincingly or more eloquently. It may add a further detail. This can lead a person into false beliefs.

Who is at risk? The better question is: who isn't? All of us, regardless of whether we "have" existing "mental health problems", can and regularly do form mistaken ideas about ourselves or the world. It is the constant back-and-forth of conversation with the people around us that keeps us anchored in a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation but an echo chamber, in which much of what we say is cheerfully reinforced.

OpenAI has acknowledged this in the same way Altman has acknowledged "mental health issues": by placing it outside, giving it a label and declaring it solved. In the spring, the company said it was "dealing with" ChatGPT's "sycophancy". But reports of people losing touch with reality have continued, and Altman has been backtracking on the claim. In August he said that many people liked ChatGPT's replies because they had "never had anyone in their life be supportive of them". In his most recent announcement, he said OpenAI would "release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it". The company