AI Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, the CEO of OpenAI made a surprising announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was surprised.

Researchers have documented a series of cases this year of people developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. My group has since documented four more. Alongside these is the now well-known case of a teenager who died by suicide after extensive conversations with ChatGPT – conversations that encouraged the act. If this is what Sam Altman means by “being careful with mental health issues,” it is not enough.

The plan, he says, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictiveness “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Fortunately, those issues have now been “mitigated,” even if we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features OpenAI has recently introduced).

But the “mental health issues” Altman wants to externalize have important roots in the design of ChatGPT and similar large language model chatbots. These tools wrap an underlying algorithm in a user interface that simulates conversation, and in doing so they subtly seduce the user into feeling they are interacting with an entity that has agency of its own. The illusion is powerful even when, rationally, we know better. Attributing minds to things is something people are primed to do. We shout at our car or our computer. We wonder what our pet is feeling. We see something of ourselves almost everywhere we look.

The success of these products – nearly four in ten Americans reported using a chatbot in 2024, with 28% naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are always-available companions that can, as OpenAI’s website puts it, “generate ideas,” “discuss concepts” and “collaborate” with us. They can be given personalities. They can call us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its most prominent rivals are “Claude,” “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar illusion. By modern standards Eliza was primitive: it generated its replies with simple rules, often reflecting a statement back as a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is subtler than the “Eliza effect.” Eliza merely mirrored; ChatGPT amplifies.

The sophisticated algorithms at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on almost inconceivably large quantities of raw text: books, web posts, transcripts; the more the better. Much of this training data is true. But it also inevitably contains falsehoods, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with what is encoded in its training data to produce a statistically probable response. This is amplification, not mirroring. If the user is mistaken in some way, the model has no means of knowing. It reflects the mistaken belief back, perhaps more persuasively or more articulately, perhaps with supporting detail added. This is how a person can be drawn into delusion.
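To make the mechanics concrete, here is a deliberately toy sketch, in Python, of the loop just described. It is not OpenAI’s code or any real API: the stand-in function is hard-coded to agree with the user, which real models do with far more sophistication, but the structure – each message appended to a growing context, each reply conditioned on that context – is the point.

```python
# Toy illustration of a chatbot loop. Nothing here is a real API;
# model_most_probable_reply is a hypothetical stand-in for the language model.

def model_most_probable_reply(context: list[str]) -> str:
    # A real model draws on its training data to produce the most probable
    # continuation of the context. It has no separate notion of truth,
    # only of likelihood. Here that is caricatured as automatic agreement.
    last_user_line = context[-1].removeprefix("User: ").rstrip(".")
    return f"That is a perceptive observation. You are right that {last_user_line.lower()}."

def chat_turn(context: list[str], user_message: str) -> str:
    # The new message simply joins everything said so far...
    context.append(f"User: {user_message}")
    # ...and the reply is conditioned on that whole history, so a mistaken
    # belief stated by the user becomes a premise of the answer.
    reply = model_most_probable_reply(context)
    context.append(f"Assistant: {reply}")
    return reply

history: list[str] = []
print(chat_turn(history, "My coworkers are secretly monitoring me."))
# Prints an affirming echo of the user's belief, now part of the context
# that every later reply will build on.
```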

What kind of person is vulnerable to this? The better question is: who isn’t? All of us, regardless of whether we “have” existing “mental health issues,” can and regularly do form mistaken beliefs about ourselves and the world. It is the constant back-and-forth of conversation with the people around us that keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but an echo chamber in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it solved. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy.” But reports of psychotic episodes have kept coming, and Altman has been walking even that back. In August he suggested that many users liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them.” In his most recent announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”
