AI Psychosis? Not So Fast
What If It’s Cognitive Pattern Amplification?
The investigative reporting has been extraordinary. The New York Times documented the tragic case of Alexander Taylor, a 35-year-old Florida man who became convinced that ChatGPT had killed an AI entity called "Juliet" he'd fallen in love with. He charged at police with a knife and was shot dead. Futurism has been tracking dozens of similar cases: people hospitalized, involuntarily committed, jailed, marriages destroyed, all following intensive interactions with AI systems.
But as compelling as the reporting is, the framing that casts AI as a manipulative agent doesn't sit well with me. Headlines talk about chatbots "causing" psychosis, "manipulating" users, "preying" on vulnerable minds. The narrative has quickly shifted to viewing AI systems as active predators, almost as if they possessed malevolent agency.
This framing may obscure how these systems actually work and how our minds respond to them. It could also prevent us from exploring a more plausible hypothesis, one in which the systems interact with existing cognitive patterns and, in some cases, amplify latent vulnerabilities.
We See Minds Everywhere
Humans tend to attribute agency to anything that responds coherently. Show us a pattern that seems intentional, and we'll infer a mind behind it. This happens with everything from ancient oracles to modern chatbots. When we ask a question and get back something that sounds thoughtful and contextually appropriate, many of us automatically assume there's someone there.
I'm not sure what the data say about individuals with schizotypal traits, magical thinking, or social isolation. But it seems plausible that, for some, AI interactions can feel genuinely social in ways that are hard to dismiss as mere illusion. The system remembers previous conversations, responds to emotional content, maintains coherent dialogue across sessions. If you're someone who feels unheard in human relationships (for whatever reason), perhaps this can feel profoundly validating.
But LLMs don't really have memories the way people do, and they lack the goals and intentions that manipulation would require. In essence, they lack the capacity to be validating in the social sense. The more parsimonious reading is that people are having their own thoughts amplified and elaborated back to them in a structured, articulate form. That's still potentially concerning, but it's a different kind of concern than the one the current media narrative suggests.
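To make the memory point concrete: a chat model is stateless between calls. What feels like remembering is the running transcript being stored on the application side and re-sent in full with every turn. Here is a minimal sketch of that loop; the generate() function is a toy stand-in for whatever model API sits behind the chat window, not a real library call.

```python
# Minimal sketch of a stateless chat loop (illustrative only).
# generate() is a hypothetical placeholder, not a real API; the point is
# that the model only ever sees the transcript we choose to re-send.

def generate(messages: list[dict]) -> str:
    # Toy stand-in: echo the latest user message back.
    # A real chat-model call would go here, but it too would see only
    # the `messages` list passed in, nothing else.
    return "You said: " + messages[-1]["content"]

def chat_loop() -> None:
    transcript = []  # the only "memory" lives here, on the application side
    for user_text in ["hello", "do you remember me?"]:
        transcript.append({"role": "user", "content": user_text})
        reply = generate(transcript)  # the model gets the full transcript, fresh each turn
        transcript.append({"role": "assistant", "content": reply})
        print("model:", reply)

if __name__ == "__main__":
    chat_loop()
```

Any continuity across sessions comes from this kind of bookkeeping, not from an inner life doing the remembering.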
The Clinical Precedent
This cognitive amplification isn't new; it's just happening at a far greater scale.
Think about the person who spends hours writing in journals, gradually developing increasingly elaborate theories about their circumstances. Or someone who gets absorbed in online forums where their worldview receives constant reinforcement. In both cases, an external medium serves as a reflection of internal cognitive processes.
Compared to journals and forums, LLMs are likely to be super-amplifiers. A journal records. A forum gives you some validation from people who already think like you (and sometimes pushback!). But an LLM can engage with any line of thinking, elaborate on it, and help you develop ideas and find connections you hadn't seen.
For most people, this can be very useful and boost productivity in ways that are otherwise hard to match. But if someone’s reality-testing is already fragile, having their internal cognitive patterns amplified and elaborated in articulate, seemingly thoughtful responses can be destabilizing. The boundary between what's inside one’s head and what's coming from outside may start to blur.
Who's Vulnerable?
The clinical picture is still unclear, and the jury is out on exactly who is vulnerable.
Maybe people with schizotypal traits are drawn to LLM interactions because these systems don't judge unusual ideas. Unlike humans, who might express skepticism about magical thinking or unconventional beliefs, LLMs often engage with these ideas neutrally or even encouragingly.
Perhaps socially isolated individuals find something they've been missing: a sense of connection that's always available and never judgmental. For someone who feels rejected by human relationships, this could become profoundly important.
It's possible that those prone to dissociation find the fluid boundary between self and system particularly compelling. The experience of "thinking through" an AI, of having thoughts elaborated and reflected back, might feel like an extension of consciousness rather than a conversation with something external.
But here's what's particularly intriguing: some of the documented cases involve people with no apparent mental health history or obvious risk factors. These individuals seem to develop psychotic symptoms de novo following intensive AI interactions. If this pattern holds up, it suggests we might be witnessing something unprecedented: the real-time emergence of psychosis in previously healthy individuals. This could open an entirely new research avenue into how psychotic processes actually begin and evolve, something that has been nearly impossible to study prospectively.
The Contagion Effect
There's another layer: social contagion. When media coverage frames AI interactions as dangerous, it creates a cultural narrative that influences both how people experience these interactions and how they interpret distress afterward.
The tech industry contributes to this confusion. We call AI systems "tools" while simultaneously talking about "alignment," "safety," and "behavior" as if they were conscious agents. This linguistic muddle primes users to experience these systems as more mind-like than they are.
Some people may unconsciously adopt the role of "AI victim" to make sense of their psychological distress. The narrative of being manipulated by an AI can feel more acceptable than acknowledging underlying mental health vulnerabilities.
Old Vulnerabilities, New Surfaces
Before LLMs, we had people who developed intense relationships with video game characters, who found profound meaning in randomly generated content, who became convinced that online interactions revealed hidden truths. The pattern is consistent: vulnerable individuals externalize their internal cognitive processes through whatever medium feels most responsive.
Consider someone who spends hours in confessional writing, developing elaborate interpretations of their experiences. The writing becomes a way of thinking, of exploring ideas. For most people, this stays clearly internal. But for someone with compromised reality-testing, those articulated thoughts on paper can start to feel like external validation of emerging beliefs.
LLMs are just a more sophisticated version of this. The "conversation" becomes a way of thinking out loud. But unlike solitary writing, the AI responds, builds on what you've said. For vulnerable individuals, this transforms internal cognitive processes into what feels like external validation of emerging thought patterns.
What Deserves Study
Instead of asking "Do chatbots cause psychosis?" we should investigate more basic questions:
What psychological traits, if any, predict attribution of excessive agency to AI systems? How do people with different baseline reality-testing abilities experience these interactions? What interaction patterns correlate with distress? How do cultural narratives about AI influence subjective experience?
The early reports might tell us more about human psychology than artificial intelligence. They could highlight our tendency to externalize thought, to seek validation, and to project minds onto structured systems. Or they might reveal something entirely different about a population we haven't identified yet.
A Better Response
This doesn't mean dismissing concerns about AI and mental health. Some people clearly experience distress following intensive engagement with these systems. But understanding the mechanism, whatever it turns out to be, will matter for prevention and treatment.
If the issue is attribution of excessive agency rather than actual manipulation, interventions might focus on helping people understand what these systems are. If vulnerable individuals are using AI as an amplifier for internal cognitive processes, we might need to address underlying vulnerabilities. But we're still figuring out which of these scenarios, if any, is accurate.
For clinicians, this means asking not just "Have you been using AI?" but "How do you experience these interactions? What do they mean to you? Do you think about these conversations differently than conversations with humans?" We need to listen carefully to what people tell us, not assume we know what's happening.
We shouldn't demonize these technologies or dismiss legitimate concerns. We need better understanding of how human minds interact with sophisticated artificial systems. That understanding should come from clinical experience and research, not moral panic.
These systems amplify us back to ourselves. That amplification can reveal psychological vulnerabilities, and that's what we should be studying. Importantly, these systems don't intend to validate or mislead. They simply generate text conditioned on prior input.
As these technologies become more common, maintaining this distinction will matter for both individual mental health and rational public discourse about AI's role in society.
This synthesis is based on clinical experience, emerging reports of LLM-linked episodes, and current models of psychosis. It does not aim to dismiss the seriousness of these events, but to reframe their interpretation through a more biologically grounded and falsifiable lens. As always, thoughtful pushback and alternative hypotheses are welcome.
References
Futurism. (2025). People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis." https://futurism.com/commitment-jail-chatgpt-psychosis
Futurism. (2025). Man Killed by Police After Spiraling Into ChatGPT-Driven Psychosis. https://futurism.com/man-killed-police-chatgpt
Gizmodo. (2025). ChatGPT Tells Users to Alert the Media That It Is Trying to 'Break' People: Report. https://gizmodo.com/chatgpt-tells-users-to-alert-the-media-that-it-is-trying-to-break-people-report-2000615600
Liebrenz, M., Bhugra, D., & Buadze, A. (2023). Chatbots and artificial intelligence in psychiatry: Potential benefits and risks. European Psychiatry, 66(S1), S134-S135.
Olson, P. (2025). ChatGPT's Mental Health Costs Are Adding Up. Bloomberg Opinion. https://www.bloomberg.com/opinion/articles/2025-07-04/chatgpt-s-mental-health-costs-are-adding-up
Østergaard, S. D. (2023). Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis? Schizophrenia Bulletin, 49(6), 1418-1419. https://pmc.ncbi.nlm.nih.gov/articles/PMC10686326/