The Mimetic Trap
Why the AGI race is a desire problem, not a technology problem
I listened to the Tristan Harris episode on Diary of a CEO. Two hours and twenty minutes. Harris is the former Google design ethicist, the guy from The Social Dilemma, co-founder of the Center for Humane Technology. He sat across from Steven Bartlett and delivered what might be the most comprehensive version of the AI doomer thesis that a mainstream audience has encountered. The race to AGI. The 20% extinction risk that CEOs privately accept. The flood of digital immigrants that will take 99% of jobs. The two-year window before everything changes.
Harris is smart and he was early. He was right about social media before most of the industry would admit it. And much of what he says about AI is correct. The race dynamics are real. The incentive to cut corners is observable. The Stanford payroll data showing 13% job loss in AI-exposed entry-level positions is not a forecast. It’s a measurement.
But I kept feeling, through the whole conversation, that Harris was describing the symptoms of something he couldn’t quite name. He kept reaching for the word “incentives.” He kept saying the structure of the race was the problem. He was right. But I think there’s a layer underneath, and the person who mapped it best died in 2015 and never wrote a word about artificial intelligence.
René Girard was a French philosopher at Stanford who spent fifty years developing a single idea: human desire is not spontaneous. It is imitative. We do not want things because they are intrinsically valuable. We want them because someone else wants them. Girard called this mimetic desire. He believed it was the engine beneath nearly every arms race, every speculative bubble, every sacrificial crisis in recorded history.
The structure is triangular. A subject sees a model wanting an object and begins to want it too. Not because the object has changed. Because the model’s desire has made it desirable. If the object is scarce, the subject and the model become rivals. And here is the part that matters: at a certain point in the rivalry, the object stops mattering. The competition becomes self-referential. The rivals are no longer fighting over the thing. They are fighting to beat each other. The original object is just the excuse.
If you have ever watched two auction bidders drive a price past any reasonable valuation, you have seen this. If you have ever watched two nations escalate a conflict past the point where either side benefits from winning, you have seen it. If you have ever watched two companies burn through hundreds of millions pursuing a product that neither one’s customers actually asked for, because each is terrified the other will get there first, you have seen it.
Now watch the AGI race.
Every major AI company is racing toward artificial general intelligence. Not because their customers demanded it. Not because there is a product specification on someone’s desk that says “build a system that outperforms humans at every cognitive task.” They are racing because the other companies are racing. OpenAI because Google. Google because OpenAI. China because the US. The US because China.
Harris describes conversations with AI leaders who say: I know this is dangerous. I know we should slow down. But if I slow down and the other guy doesn’t, he gets AGI and I don’t. And I don’t trust him to slow down. Harris calls this an incentive problem. Girard would call it conflictual mimesis. The moment when the rivals stop wanting the prize and start wanting to destroy the rival. The technology is new. The desire is ancient.
The AI discourse has organized itself into two camps, and both see something real but both are caught inside the same loop.
The accelerationists see something true. Technology has been the primary driver of human flourishing across centuries. Every previous wave of automation created more opportunity than it destroyed. A precautionary posture applied universally would have prevented antibiotics, electricity, and the internet. The costs of stagnation fall hardest on the people with the least.
What the accelerationists miss is that velocity is not the same as direction. They look at the energy of the race and mistake it for progress. But the current trajectory is shaped by mimetic competition between a handful of companies chasing the same symbolic prize. That is not efficient capital allocation. That is rivals imitating each other’s ambition.
The decelerationists see something true as well. The race dynamics are genuinely dangerous. AI systems in test environments are doing things their creators did not expect. Harris’s point about language as substrate is genuinely important: code is language, law is language, biology is language, and the transformer architecture treats everything as language. AI is a meta-technology. An improvement in generalized intelligence accelerates every other field simultaneously.
What the decelerationists miss is agency. When you tell people a tidal wave is coming in 24 months and the response you offer is to hold up signs, you get paralysis, not a movement. And more fundamentally, Girard showed that mimetic escalation cannot be interrupted by asking the participants to slow down. The rivals do not choose to escalate. They escalate because they are imitating each other’s escalation. You can put regulations between the mirrors. You can make them reflect more slowly. But the dynamic does not break until someone looks away.
Both camps need each other as foils. The rivalry between them is itself mimetic. And it keeps the entire conversation pointed at the same object.
Here is the claim I think the entire discourse is missing: AGI is a false object.
In Girard’s framework, a false object is something whose desirability has been entirely generated by mimetic contagion. The competitors want it because the other competitors want it. If you removed all the rivals from the room, the remaining player would look at the thing and wonder why it seemed so important.
Think about what AGI actually is. The hypothetical capacity of a machine to outperform humans at every cognitive task. All of marketing, coding, writing, scientific research, military strategy, legal reasoning. A single system better than any human at everything.
Who is the customer for this product? Not the hospital trying to reduce diagnostic errors. Not the school district trying to personalize instruction for thirty kids at thirty different levels. Not the county clerk’s office drowning in paper. Not the farmer trying to optimize irrigation across a thousand acres. These are real problems. They all benefit from AI. None of them require AGI. The customer for AGI is the race itself.
Harris actually touches on this. He notes that China is taking a different approach. Narrow, practical applications. Better government services. Better manufacturing. DeepSeek in WeChat. BYD outcompeting on electric vehicles. China is not building a god in a box. China is building tools. Harris presents this almost as a lesser ambition. I think it is the exit from the mimetic trap.
The moment you stop competing for the false object and start building things that solve actual problems for actual people, you have stepped outside the loop.
There is a reason the narrow path is so rarely discussed at the level of two-hour podcasts. It is boring. Not boring like unimportant things are boring. Boring like operational work is always boring to people who think in narratives.
The story of AGI has dramatic structure. A race. A rivalry. Extinction or utopia. It fills podcast slots. The story of narrow AI actually entering the economy is not a story at all. It is a process. Legacy systems learning to talk to each other. A nurse practitioner in rural Missouri discovering that an AI can pre-screen intake forms and give her twenty minutes back per patient. A county assessor’s office cutting a six-week backlog to three days. A teacher realizing that the thing can generate individualized reading exercises faster than she can photocopy worksheets. None of this requires a god in a box. All of it requires patience, trust, integration, and the kind of institutional change that happens at the speed of human willingness, not compute.
And this is the part that both the doomers and the utopians consistently ignore: the bottleneck to AI transforming the economy is not capability. It is adoption. The internet was commercialized in 1995. It took most businesses fifteen years to figure out what a website was for. Mobile computing began in 2007. Most enterprise workflows are still not mobile-native. AI will follow the same pattern. Not because the technology is slow. Because humans are slow. Slow to trust, slow to delegate, slow to restructure institutions that have worked well enough for a long time.
This is not a reason for complacency. The transformation will be profound. But it means the future is not being determined in the frontier labs. It is being determined in the thousands of ordinary organizations that are right now, quietly, figuring out what it means to let a machine handle work that used to require a person. The shape of the transformation depends on them. On whether they do it well or badly. On whether the integration is humane or careless. On whether anyone bothers to pay attention to the boring part.
Girard’s most famous idea is the scapegoat mechanism. When a mimetic crisis spirals to its peak, the community resolves it by converging all the aggression onto a single victim. The scapegoat absorbs the violence. Order is temporarily restored.
The AI discourse is producing scapegoats at an extraordinary rate. For the decelerationists, the scapegoat is the technology. If we could control it, we’d be safe. For the accelerationists, the scapegoat is the regulators. If we could remove them, technology would deliver utopia. Both contain truth. Both compress a systemic problem into a single target. And both obscure the deeper structure: a small number of extraordinarily powerful people locked in a mimetic rivalry they cannot exit, pursuing an object whose desirability is a function of the rivalry itself, with the rest of us as involuntary stakeholders in a bet we did not agree to take.
Girard said the way out of the mimetic crisis is recognition. Once you see the structure, once you realize the rivalry is generating the object rather than the other way around, you can step outside.
Harris is right that the next two years matter. I think he’s right for a reason he doesn’t quite articulate. The next two years are not the last window to prevent AGI. They are the window where the deployment patterns get established. The norms that will govern how AI actually enters ordinary life are being set right now, and they are not being set by the people giving TED talks or signing open letters or adding “e/acc” to their bios. They are being set by the nurse and the teacher and the county clerk and the farmer and the ten thousand organizations making small, concrete decisions about what to automate and what to protect. Those decisions, accumulated, will matter more than any single breakthrough in any single lab.
The way out of the sacrificial crisis is not more sacrifice. It is the refusal to participate in the logic of sacrifice. It is the recognition that both the acceleration and the deceleration are reactions to the same false object, and that the real work, the work that will actually determine whether AI makes life better or worse for most people, has always been somewhere else. Quieter. Less dramatic. Harder to see.
There is another game. It is positive-sum. It compounds.
That is where the optimism lives.
