The Right Level of Wrong: Why Psychiatric Conditions Need an Algorithmic Explanation
Rethinking How We Explain Mental Illness to the People Living With It
There's a paradox at the heart of modern psychiatry that keeps me up at night. We have two ways of understanding mental illness that seem fundamentally at odds, yet both appear indispensable. On one side, we have the molecular story: dopamine receptors, neurotransmitter reuptake, receptor binding affinities. On the other, we have the algorithmic story: temporal difference learning, model-free versus model-based systems, computational arbitration between competing decision-making processes.
Here's the thing: I'm convinced the molecular level is the wrong level of explanation for understanding psychiatric symptoms, while simultaneously being grateful it exists for practical treatment decisions. I should be clear that I'm not critiquing the incredible work being done in molecular psychiatry; I want to enrich it with a complementary layer of algorithmic insight. And I think this paradox reveals something profound about what we actually mean when we say we want to "explain" mental illness.
The Information Problem with Molecular Psychiatry
Let me start with why telling someone their "dopaminergic system is dysregulated" is, from an explanatory standpoint… well, not much of an explanation at all. I'm not saying dopamine isn't involved; I'm making an argument about what constitutes useful information.
In information theory, Claude Shannon taught us that information is fundamentally about reducing uncertainty. A statement carries information only to the extent that it narrows down the space of possible outcomes. (For the mathematically inclined: information is measured as the reduction in entropy, where entropy H = -Σ P(i) log₂ P(i) across all possible outcomes i. A statement that changes the probability distribution reduces uncertainty by ΔH = H_before - H_after bits. Feel free to skip the math; the key insight is that informative statements meaningfully change what outcomes we expect.)
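To make the uncertainty-reduction point concrete, here's a minimal sketch. The probability numbers are invented purely for illustration:

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Invented toy example: four equally likely candidate explanations.
before = [0.25, 0.25, 0.25, 0.25]

# An informative statement concentrates the probability mass...
after_informative = [0.85, 0.05, 0.05, 0.05]
# ...while a vacuous one ("the brain is involved") leaves it unchanged.
after_vacuous = list(before)

print(entropy(before) - entropy(after_informative))  # ~1.15 bits gained
print(entropy(before) - entropy(after_vacuous))      # 0.0 bits gained
```

The vacuous statement buys exactly zero bits: the listener's expectations about outcomes are unchanged, which is the formal sense in which "glutamate dysfunction" fails as an explanation.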
Consider this: glutamate is the primary excitatory neurotransmitter in roughly 80-85% of forebrain neurons. So when someone says "your condition involves glutamate dysfunction," they're essentially saying "your condition involves the brain." This is like explaining a car accident by noting that "wheels were involved"; technically true, but explanatorily vacuous.
The same logic applies to dopamine; invoking "dopamine dysregulation" doesn't specify which behaviors will emerge, under what circumstances, or why this particular person experiences their particular constellation of symptoms.
The Algorithmic Alternative: Computation Specifies Behavior
Now contrast this with an algorithmic explanation. Instead of saying "your dopamine is low," we might say something like this:
"Your brain uses two main systems to make decisions. One system is fast and habitual; it learns what to do based on what worked before, using a process called temporal difference learning. This system updates its predictions about rewards according to the rule: V(s) ← V(s) + α[r + γV(s') - V(s)], where V(s) is the value of your current situation, r is the reward you just got, V(s') is the value of your new situation, α is how much you learn from each experience, and γ is how much you care about future rewards versus immediate ones. (Don't worry about the math; the key is that this system caches simple value estimates.)
The other system is slower and more deliberate; it builds internal models of how the world works—transition models P(s'|s,a) that predict what state s' you'll end up in if you take action a in your current state s, and reward models R(s,a) that predict what reward you'll get from taking action a in state s. This enables flexible planning through what's essentially mental simulation.
The final decision combines both systems: Q_total(s,a) = ωQ_MB(s,a) + (1-ω)Q_MF(s,a), where Q represents action values, MB is model-based, MF is model-free, and ω is the weighting parameter that determines whether you rely more on cached habits or deliberate planning. (Again, skip the math if you prefer—the key is that your brain literally arbitrates between fast habits and slow deliberation.)
But here's where dopamine gets interesting. Rather than being a simple "reward chemical," dopamine neurons actually exhibit structured diversity. Some dopamine neurons are optimistic (α+ > α−), responding more to better-than-expected outcomes, while others are pessimistic (α+ < α−), being more sensitive to disappointments. Additionally, neurons vary in their temporal horizons; some weight immediate rewards heavily while others give more weight to delayed outcomes.
Let me use substance use as an example to illustrate how this might work. In substance use disorders, we might hypothesize that this sophisticated system becomes dysregulated. The arbitration process could break down, with the brain losing confidence in its ability to predict and control future outcomes, making the deliberate planning system feel unreliable.
Meanwhile, the diversity in dopamine signaling might become systematically biased: neurons that should encode optimistic long-term outcomes (career success, family relationships) could become pessimistic, while neurons encoding immediate rewards (like drug effects) might become hyperoptimistic. This would create a profound temporal bias where immediate outcomes feel unrealistically positive while delayed outcomes feel unrealistically negative. The result could be decision-making dominated by the immediate, certain relief that substances provide, while long-term goals feel abstract and unattainable."
Notice what this explanation does that the molecular one doesn't: it specifies behavioral patterns and connects them to computational processes. It explains why someone might simultaneously overthink decisions yet feel unmotivated to act. It makes predictions about when symptoms might be worse (high-stakes decisions requiring deliberation) versus better (routine, well-learned activities). And it incorporates the sophisticated reality that dopamine isn't just a "reward chemical"; it's a diverse population of neurons encoding different aspects of reward prediction, temporal horizons, and uncertainty.
This is what I mean by the algorithmic level being the "right" level of explanation. It's the level at which we can actually map between neural computation and behavioral phenotypes.
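The two-system account quoted above can be sketched in a few lines of code. This is a toy, not a model of any real task; the states, action values, and parameter settings are all illustrative:

```python
ALPHA, GAMMA = 0.1, 0.9   # learning rate and temporal discount (illustrative values)

def td_update(V, s, r, s_next):
    """Model-free TD(0) update: V(s) <- V(s) + ALPHA * [r + GAMMA * V(s') - V(s)]."""
    delta = r + GAMMA * V[s_next] - V[s]   # reward prediction error
    V[s] += ALPHA * delta
    return delta

def arbitrate(q_mb, q_mf, omega):
    """Weighted mixture of model-based and model-free action values."""
    return {a: omega * q_mb[a] + (1 - omega) * q_mf[a] for a in q_mb}

# Toy episode: a better-than-expected reward nudges the cached value upward.
V = {"home": 0.0, "work": 0.0}
td_update(V, "home", r=1.0, s_next="work")   # V["home"] rises from 0.0 to 0.1

# Arbitration between a deliberate plan and a cached habit.
q_total = arbitrate(q_mb={"go": 1.0, "stay": 0.2},   # planned values
                    q_mf={"go": 0.3, "stay": 0.8},   # habitual values
                    omega=0.7)                        # leaning on deliberation
best = max(q_total, key=q_total.get)
```

Note how everything behavioral lives in the parameters: shift omega toward 0 and the cached habit ("stay") starts winning even when deliberate planning disagrees, which is exactly the kind of behaviorally specific claim the molecular story can't make.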
But Wait—The Molecular Level Still Matters
Here's where the paradox gets interesting. Despite everything I just said about molecular explanations being informationally sparse, I'm not advocating we abandon them. They remain incredibly useful for a specific purpose: guiding treatment decisions.
When a clinician needs to choose between medications, knowing that one patient might respond better to a strong D2 antagonist versus a partial agonist, or that another might benefit from glutamatergic modulation, these molecular distinctions become practically vital. The neurochemical level provides a useful abstraction for pharmacological intervention, even if it's not the right level for psychological explanation.
This is a perfect illustration of George Box's famous maxim: "All models are wrong, but some are useful." Molecular psychiatry is wrong as an explanation of behavior, but useful as a guide to intervention.
Why Medicine Defaults to Molecular Thinking
It's worth understanding why molecular explanations feel so natural in medicine. In most organ systems, cellular and molecular mechanisms do provide meaningful explanations for clinical phenomena. When we explain heart failure through weakened actin-myosin interactions in cardiac muscle, or liver dysfunction through compromised cytochrome P450 enzyme systems, we're operating at the right level of biological organization. The molecular story directly connects to the organ's function.
This success has shaped medical training profoundly. Medical students learn to think mechanistically: identify the broken molecular pathway, understand how it disrupts normal physiology, then intervene at that level. This approach works brilliantly for most of medicine. A cardiologist can predict that ACE inhibitors will help heart failure by blocking angiotensin-converting enzyme, reducing afterload on the heart. A hepatologist knows that certain drug interactions occur because they compete for the same P450 enzymes.
But the brain is different. Unlike other organs with relatively straightforward input-output relationships, the brain's primary function is information processing and behavioral control. The gap between molecular events and behavioral outcomes is vast, filled with multiple levels of organization: circuits, networks, algorithms, and ultimately, the psychological phenomena that patients actually experience.
The problem isn't that molecular mechanisms don't matter in psychiatry; it's that they don't directly specify the behaviors and experiences we're trying to explain. This creates a persistent explanatory gap that algorithmic thinking can help bridge.
Augmenting Clinical Intuition
What excites me most is how the algorithmic perspective can enhance rather than replace clinical molecular thinking. When we understand that different psychiatric conditions involve different patterns of computational dysfunction, we can make more sophisticated predictions about which molecular interventions might help.
For instance, if we conceptualize certain aspects of depression as involving hyperactive model-based control combined with decreased reward sensitivity, we might predict that medications affecting both dopaminergic reward signaling AND prefrontal modulatory systems would be most effective. But here's where the distributional dopamine story becomes crucial: if the problem involves corrupted asymmetric scaling factors in dopamine populations (neurons that should encode optimistic long-term outcomes becoming systematically pessimistic, while neurons encoding immediate rewards become hyperoptimistic), then effective treatment might require rebalancing this diversity rather than simply increasing or decreasing overall dopamine function.
I'm suggesting theoretically motivated combination therapy rather than trial-and-error polypharmacy. The computational framework could guide whether a patient needs interventions that restore optimistic future-oriented dopamine signaling, recalibrate temporal discount factors across neuron populations, or strengthen the prefrontal circuits that arbitrate between competing decision systems.
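The asymmetric-scaling-factor idea can be sketched as follows. The reward schedule and learning rates here are invented; this is the distributional intuition, not a claim about actual dopamine recordings:

```python
def asymmetric_update(value, reward, alpha_plus, alpha_minus):
    """One update where positive and negative prediction errors scale differently."""
    delta = reward - value                       # prediction error
    alpha = alpha_plus if delta > 0 else alpha_minus
    return value + alpha * delta

def learned_value(rewards, alpha_plus, alpha_minus, value=0.0):
    for r in rewards:
        value = asymmetric_update(value, r, alpha_plus, alpha_minus)
    return value

rewards = [1.0, 0.0] * 50   # an invented 50% reward schedule

# An "optimistic" unit weights good surprises more; a "pessimistic" one, disappointments.
optimist = learned_value(rewards, alpha_plus=0.3, alpha_minus=0.1)
pessimist = learned_value(rewards, alpha_plus=0.1, alpha_minus=0.3)
# The optimist settles above the true mean (0.5), the pessimist below it; a population
# of such units encodes a distribution of outcomes rather than a single expectation.
```

On this picture, "rebalancing" means restoring appropriate alpha_plus/alpha_minus ratios across the population, which is a different intervention target than raising or lowering dopamine tone overall.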
The Communication Benefits
Perhaps most importantly, algorithmic explanations improve how we talk with patients about their conditions. Instead of mystifying neurochemical imbalances, we can offer explanations that connect to lived experience while maintaining scientific rigor.
When someone with OCD asks why they can't stop checking the door, we can explain how their brain's uncertainty-monitoring system has become hypersensitive, leading to a computational loop where the threshold for "certain enough" is never reached. When someone with ADHD struggles with procrastination, we can discuss how their brain's reward-prediction system has difficulty maintaining motivation for distant or uncertain outcomes.
These explanations don't just satisfy intellectual curiosity; they provide actionable insights. They help patients recognize patterns, develop coping strategies, and understand why certain therapeutic approaches might be particularly helpful for their specific computational profile.
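The "never certain enough" checking loop can be caricatured in a few lines. The gain, decay, and threshold numbers are invented, and this is a toy illustration rather than a clinical model:

```python
def checks_needed(threshold, gain=0.5, decay=0.3, max_checks=100):
    """How many door-checks until confidence crosses 'certain enough'? (toy numbers)"""
    confidence = 0.0
    for n in range(1, max_checks + 1):
        confidence += gain * (1.0 - confidence)  # evidence gained from one more check
        if confidence >= threshold:
            return n                             # certain enough; the loop terminates
        confidence *= (1.0 - decay)              # "but did I really check?" doubt
    return None                                  # threshold was never reached

typical = checks_needed(threshold=0.7)           # terminates after a few checks
hypersensitive = checks_needed(threshold=0.99)   # confidence plateaus below threshold
```

With these numbers, confidence plateaus well below the hypersensitive threshold, so checking never satisfies it; the point is that the pathology lives in the threshold and decay parameters, not in the checking behavior itself.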
What Makes an Explanation Meaningful?
This brings us to a deeper philosophical question: what should we demand from a psychiatric explanation? I propose three criteria:
Behavioral specificity: It should predict or account for specific patterns of behavior, not just invoke general dysfunction.
Mechanistic precision: It should specify how inputs transform into outputs through identifiable computational processes.
Experiential relevance: It should connect to the patient's subjective experience in a way that feels both accurate and helpful.
Molecular explanations typically fail the first and third criteria, while succeeding at aspects of the second. Algorithmic explanations can potentially satisfy all three, while still remaining grounded in neuroscience.
The Path Forward
I'm not arguing for the wholesale abandonment of molecular psychiatry. Rather, I'm suggesting we need a more sophisticated understanding of which level of explanation serves which purpose. Use molecular models to guide pharmacological decisions. Use algorithmic models to understand, predict, and explain behavior. And use the integration of both levels to develop more precise, personalized approaches to treatment.
The future of psychiatric explanation lies not in choosing between levels of analysis, but in understanding how they complement each other. This shift could guide trial design, biomarker selection, and even AI-driven decision tools. For example, clinical trials could stratify participants by computational profiles rather than broad DSM categories, testing whether specific algorithmic phenotypes predict response to targeted treatments. The brain is simultaneously a chemical system, a computational device, and a meaning-making apparatus. Our explanations should honor this complexity while remaining useful for the people who need them most—our patients.
After all, the goal goes beyond being scientifically correct. We need to provide explanations that actually explain: explanations that reduce uncertainty, specify mechanisms, and empower people to understand their own minds. Finding the right level of "wrong" (useful models that aren't ultimate truths) is the heart of progress in psychiatry. That's information worth having.
What do you think? Have you experienced the tension between different levels of psychiatric explanation in your own work or personal experience? I'd love to hear your thoughts in the comments.
I like the idea of linking the molecular level with the phenotypic level maybe via an algorithmic explanation. For example, these two new studies find separately (and by coincidence) that there are 4 genetically-informed dimensions of bipolar disorder and also autism (ASD). Maybe a similar study could be done to try to find subclasses of schizophrenia?
https://pubmed.ncbi.nlm.nih.gov/40666370/
https://www.nature.com/articles/s41588-025-02224-z
This strikes me as a very sensible and well-argued perspective. In an era when far too many articles (written by neuroscientists who ought to know better) are still lazily talking about “dopamine hits,” your piece stands out as a genuine contribution to our thinking about psychological dysfunction. Nice work!