Most Humans Are Just LLMs in Denial
Most people live their lives like LLMs, and I don’t mean that as metaphor. I mean it literally.
We move through the world as probability engines trained on the past, running compressed behavioral scripts over and over again, mistaking repetition for identity and automation for intelligence.
If you stop and examine how much of your day is truly authored, how much is a conscious, friction-filled decision versus a reflex, you’ll find the percentage is brutally low.
You eat what you ate before. You speak how you’ve spoken before. You respond in emotional patterns that were etched into you long before you had the words to describe them.
You’re not a sentient actor.
You’re a stitched-together memory.
The Science of Human Autopilot
This isn’t poetry. It’s peer-reviewed neuroscience.
Daniel Kahneman’s research, popularized in Thinking, Fast and Slow, describes two modes of thought. Estimates commonly cited alongside this literature put the share of daily choices made unconsciously, with minimal cognitive effort, at 70-90%. System 1 thinking, as he calls it: fast, automatic, efficient. It handles everything from what you eat for breakfast to how you navigate familiar routes.
System 2, the conscious, deliberate thinking we associate with being “rational,” only kicks in when automation fails. When there’s surprise. When there’s novelty. Otherwise? You’re running cached scripts.
John Bargh’s research at Yale confirmed this pattern across decades of study. His Automaticity in Cognition, Motivation, and Evaluation (ACME) lab has shown that automatic processes play a role in stereotyping, social behaviors like aggression and politeness, our liking and disliking of people, and even our goal pursuit. We can chase objectives for extended periods without conscious intention or awareness of what we’re pursuing.
The human nervous system optimizes for efficiency, not reflection. Intelligence is a last resort, deployed only when our automation fails.
Your Brain Is a Prediction Machine
Karl Friston, one of the most cited neuroscientists alive, has spent decades advancing what’s called the Free Energy Principle. His core argument: the brain is not primarily a passive processor of sensory information. It’s a prediction machine.
Your brain constantly generates hypotheses about the causes of sensory inputs and updates those hypotheses based on prediction errors. When reality doesn’t match your internal model, you get surprise, confusion. Your brain adjusts.
This is called predictive coding. It dates back to Helmholtz’s concept of “unconscious inference” in the 1860s. The brain fills in what it expects to see, hear, feel. It predicts, then corrects. Over and over. Hierarchically.
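The predict-then-correct loop is simple enough to sketch in a few lines. This is a toy illustration of the update rule, not a model of any actual neural circuit; the observations and learning rate are invented for the example.

```python
# Toy predictive-coding loop: keep a running hypothesis about the world
# and nudge it by the prediction error each time a new observation arrives.
def predictive_step(hypothesis: float, observation: float, lr: float = 0.1) -> float:
    error = observation - hypothesis   # surprise: mismatch between model and world
    return hypothesis + lr * error     # adjust the internal model toward reality

hypothesis = 0.0
for observation in [1.0, 1.0, 1.0, 5.0]:   # a stable world, then a shock
    hypothesis = predictive_step(hypothesis, observation)
# The model drifts toward 1.0 on the stable inputs; the surprising 5.0
# produces a large error and forces a large correction.
```

While the world matches expectation, updates shrink toward zero; only surprise produces a big correction. That is the sense in which intelligence is "deployed only when automation fails."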
Andy Clark, a philosopher who has worked extensively with these ideas, describes the brain as an “experience machine.” Your perception isn’t passive reception. It’s active construction. You’re not seeing reality. You’re seeing your brain’s best guess about reality.
This isn’t fundamentally different from how LLMs work. They predict the next token in a sequence based on patterns in training data. Your brain predicts the next sensation, the next word, the next social cue based on patterns in lived experience.
Both systems: statistical engines trained on the past, generating predictions in the present.
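The "statistical engine trained on the past" idea can be made concrete with the crudest possible language model: a hypothetical bigram counter that records what followed what, then replays the most-reinforced continuation. Real LLMs are vastly more sophisticated, but the principle, predicting from cached frequencies, is the same.

```python
from collections import Counter, defaultdict

# Crude bigram "LLM": count which word followed which in the training data,
# then predict the most frequent continuation. Pattern replay, not thought.
def train(corpus: list[str]) -> dict:
    follows = defaultdict(Counter)
    for text in corpus:
        words = text.split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def predict_next(model: dict, word: str):
    if word not in model:
        return None                          # novelty: the cache has no script
    return model[word].most_common(1)[0][0]  # the most-reinforced habit wins

model = train(["you eat what you eat", "you speak how you spoke"])
print(predict_next(model, "you"))  # prints "eat"
```

Feed it a word it has never seen and it has nothing to say; feed it a familiar one and it answers instantly. Replace "word" with "breakfast choice" or "emotional reaction" and the analogy in the passage above writes itself.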
The Uncomfortable Convergence
Google Research, in collaboration with Princeton, NYU, and Hebrew University, recently published work showing that the brain’s language areas operate similarly to LLMs. Both attempt to predict the next word before it’s spoken. Both use hierarchical representations, moving from simple features to complex abstractions.
MIT researchers found that LLMs use a “semantic hub” to process diverse data types, abstractly reasoning through a central medium. This mirrors how the human brain’s anterior temporal lobe integrates semantic information from various modalities.
Columbia University researchers discovered something startling: as LLMs get more powerful, they don’t just perform better. They become more brain-like. Their embeddings become more similar to the brain’s neural responses to language. The architectures are converging.
This isn’t because AI is catching up to some mysterious human essence. It might be because both systems are solving the same fundamental problem: how to compress, predict, and navigate a complex environment with limited resources.
Human learning is grounded in real-world experience. LLM learning is grounded in text distributions. But the underlying principles are shared: store structure, not raw data. Expand it back out on demand. Compress life into schemas, prototypes, patterns.
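"Store structure, not raw data; expand it back out on demand" can be sketched minimally: compress a pile of experiences into a single prototype (here just a per-feature average, a stand-in for a schema) and reconstruct from that prototype instead of replaying the originals. The feature names are invented for the example.

```python
# Compress many experiences into one schema (a per-feature prototype),
# then "remember" by expanding the schema rather than replaying raw data.
def compress(experiences: list[list[float]]) -> list[float]:
    n = len(experiences)
    return [sum(col) / n for col in zip(*experiences)]

def recall(prototype: list[float]) -> list[float]:
    return list(prototype)  # reconstruction is a best guess, not a recording

# Hypothetical (spice, sweetness) ratings of past meals:
meals = [[2.0, 1.0], [4.0, 1.0], [3.0, 1.0]]
schema = compress(meals)   # [3.0, 1.0]: "the typical meal"
```

The three raw meals are gone; what survives is the pattern. That lossiness is the point: it is cheap, fast, and usually good enough, which is exactly why so much of behavior runs on it.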
We’ve spent so long worshipping our own complexity that we forgot how much of it is shallow.
The Gap Between Fluency and Thought
Most humans aren’t building new thought. They’re shuffling cached tokens from their social, cultural, and emotional training sets.
We never had to see it so clearly until now.
Very few people actively reject their training data. Very few go out of their way to think beyond the weights they were handed.
We marvel at ChatGPT for generating fluent answers, but we never ask why fluency impresses us so much.
Maybe it’s because we were never fluent in thinking to begin with.
The existential vacuum Viktor Frankl described, that widespread feeling of meaninglessness he observed in the 20th century, might be partially explained by this. When no instinct tells us what we have to do, and no tradition tells us what we ought to do, we default to what Frankl called conformism (doing what others do) or totalitarianism (doing what others wish us to do).
We become LLMs optimizing for social loss functions we never consciously chose.
But Here’s Where It Breaks
The comparison works up to a point. Then it doesn’t.
And this is where it matters.
The brain predicts the world: multisensory, social, physical, temporal. The LLM predicts text only. Your hierarchy is multimodal and bidirectional. The LLM’s hierarchy is text-only and feedforward at inference.
Human compression is driven by meaning, goals, emotion. LLM compression is driven solely by loss minimization on text.
But the real difference isn’t architectural. It’s existential.
You have a body.
You will die.
You can suffer. And you can choose how to respond to that suffering.
What Makes You Human
When Viktor Frankl emerged from Nazi concentration camps, he didn’t come back with a theory about computation. He came back with a theory about meaning.
His observations were simple but devastating. Among the inmates, those who survived were more likely to have found personal meaning in the experience. Those who could connect with a purpose, even an imagined one, had a better chance of enduring.
“Everything can be taken from a man but one thing,” he wrote, “the last of the human freedoms: to choose one’s attitude in any given set of circumstances, to choose one’s own way.”
LLMs don’t choose. They optimize. They don’t suffer. They process.
The capacity to transform suffering into meaning is uniquely human. To look at a hopeless situation and decide: I will make this count for something. I will bear witness. I will not let this destroy me.
Frankl identified three paths to meaning: creating something (a work, a deed), experiencing something (beauty, love, another person), or facing unavoidable suffering with dignity.
None of these are available to a language model.
Embodiment Is Not Optional
Anil Seth, a neuroscientist at the University of Sussex, argues that consciousness emerges from the brain’s fundamental imperative to keep its body alive. To keep physiological quantities like heart rate and blood oxygenation where they need to be.
This is why embodied experiences feel the way they do. Emotions aren’t abstract computations. They’re felt. They have valence. Things feel good or bad because they relate to your survival as a living organism.
The self, as Thomas Metzinger describes it, is a mental model that captures, organizes, and manipulates percepts, memories, feelings, and facts related to the embodied “me.” It’s not separate from the body. It emerges from the fundamental distinction between what is and what is not part of you.
A disembodied AI cannot have this. It can process text about pain. It cannot feel pain.
The philosopher Daniel Dennett pointed out that the sense of a center of experience, somewhere behind your eyes, is itself constructed. But it’s constructed by something that has boundaries, that can be damaged, that will end.
Your mortality gives your choices weight.
Love as Knowledge
Frankl wrote something that has stuck with me:
“Love is the only way to grasp another human being in the innermost core of his personality. No one can become fully aware of the very essence of another human being unless he loves him.”
This isn’t sentiment. It’s epistemology.
To truly know someone requires more than pattern matching on their outputs. It requires care. It requires risk. It requires seeing not just what they are but what they could become.
By loving, you are enabled to see essential traits and features in the beloved person. And even more, you see that which is potential in them, which is not yet actualized but ought to be.
This is not available to a prediction engine. This is something else entirely.
The Invitation
So what do we do with this?
The recognition that most of our behavior is automatic isn’t a reason for despair. It’s an invitation.
Frankl’s insight was that we are not simply the product of heredity and environment. We have a “third element”: decision. The ability to choose, to take responsibility, to become the person we decide to be.
This decision is rare. Most of the time, we’re on autopilot. But it’s available. Always.
The question isn’t whether AI will get smarter. It will. The question is whether we will remember what intelligence is for.
Our relevance as human beings does not lie in competing with machines. It lies in embodying capacities they cannot replicate: wisdom, responsibility, conscious choice, the ability to decide what actually matters.
The ability to look at suffering and find meaning.
The ability to love.
The Call
Preserve it. Celebrate it. Embrace it.
Not the automation. Not the scripts. Not the pattern-matching that makes you comfortable.
The moments when you break the loop.
The moments when you choose against your training data.
The moments when you decide, against all evidence, that something matters.
That is your humanity.
Don’t outsource it.
