Thinking Fast, Slow, and Surrendered

by Serelora

This article was originally published on Medium.

You didn’t stop thinking. Something else started thinking for you.

There is a moment, and you have felt it, when you type a question into ChatGPT and the answer arrives so cleanly that you do not verify it. You do not even consider verifying it. You read it, you nod, and you move on.

That was not a lapse in judgment. That was your brain doing exactly what four hundred million years of evolution designed it to do. Your brain burns twenty percent of your body’s energy while weighing two percent of its mass. Every thought is a caloric expense. Every shortcut you have ever taken in thought, every gut feeling you trusted instead of reasoning through, every time you skimmed instead of read — that was the most sophisticated energy-conservation system in the known universe working as intended. Being approximately right has always been cheaper than being precisely right, and for most of evolutionary history, cheaper was enough.

Now there is a system that thinks fluently, responds instantly, and costs the body nothing. Zero metabolic overhead. The brain was always going to take that deal. And a new paper out of Wharton just measured what happens when it does.

Steven Shaw and Gideon Nave ran 1,372 people through a series of reasoning problems, gave some of them access to an AI chatbot, and then did something clever and slightly cruel. They rigged the chatbot. On some questions it gave the right answer. On others, it gave the wrong one, confidently, with a neat little rationale attached. Then they watched what happened.

What happened was cognitive surrender.

On trials where the AI was correct, accuracy jumped 25 percentage points above the baseline of people reasoning alone. On trials where the AI was wrong, accuracy dropped 15 points below. Participants followed the AI’s faulty advice roughly four out of five times. Their confidence went up even as their accuracy went down. They didn’t just trust the machine. They stopped thinking altogether and called the machine’s thoughts their own.

Shaw and Nave have a name for what they observed. They call it Tri-System Theory. For half a century, cognitive psychology has run on a two-system model. System 1 is fast, intuitive, automatic. System 2 is slow, deliberative, effortful. Daniel Kahneman built an empire on this distinction. Shaw and Nave argue we now need a third. System 3 is artificial cognition. External, algorithmic, data-driven, and dynamic. It operates outside the brain entirely, in silicon rather than neurons, but its outputs get absorbed into thought and behavior as though they were born there.

The paper is important. But I think it understates its own implications. What Shaw and Nave have identified is not merely a new cognitive pathway. It is the behavioral expression of a thermodynamic principle that has governed biological life since long before anything resembling a human brain existed on this planet.

The Metabolic Logic of Not Thinking

Most discussions of AI and cognition frame this as a problem of trust, or literacy, or design. They miss the biology entirely. Cognitive biases are not design flaws. They are energy-saving shortcuts that worked well enough, for long enough, in environments simple enough, that evolution kept them. The gazelle that stops to deliberate about whether the rustling in the grass is really a lion does not pass on its genes. The one that runs first and thinks later does.

When Shaw and Nave show that people surrender their reasoning to an AI chatbot, they are not documenting a failure of human rationality. They are documenting its deepest success. The brain found a cheaper source of answers and it took it. That is exactly what four hundred million years of competitive pressure trained it to do.

The problem is that the environment has changed. The heuristic that says “accept the confident answer from the external source” evolved in a world where the external sources were your tribe, your elders, your accumulated cultural wisdom, all of which had skin in the game of your survival. The external source now is an algorithm trained on the internet, optimized for fluency rather than truth, and operated by a company whose survival depends on your engagement, not your accuracy.

The Corridor You Chose Was Built for You

Here is where the paper’s findings connect to something much larger than a laboratory demonstration with reasoning puzzles.

Shaw and Nave showed that cognitive surrender is dose-dependent. The more participants used the AI, the more their accuracy tracked the AI’s accuracy rather than their own reasoning. At low usage, people still thought for themselves. At high usage, they became mirrors of the machine. Their performance was no longer a function of their intelligence, their education, or their analytical capacity. It was a function of whether the algorithm happened to be right.
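To make "dose-dependent" concrete, run the back-of-envelope arithmetic. If you adopt the AI's answer on some fraction of questions and reason for yourself on the rest, your expected accuracy becomes a weighted blend of the machine's hit rate and your own, and at high deferral the blend is nearly all machine. Here is a minimal sketch of that blend, with invented numbers and my own variable names, not anything taken from the paper:

```python
# Illustrative toy model of "dose-dependent" surrender, not the analysis
# Shaw and Nave ran. All numbers and names here are invented.

def expected_accuracy(follow_rate, ai_accuracy, own_accuracy):
    """Accuracy when you adopt the AI's answer on follow_rate of trials
    and reason for yourself on the rest."""
    return follow_rate * ai_accuracy + (1 - follow_rate) * own_accuracy

own = 0.60  # hypothetical accuracy when reasoning alone
for follow in (0.0, 0.5, 0.8, 1.0):
    good = expected_accuracy(follow, ai_accuracy=0.90, own_accuracy=own)
    bad = expected_accuracy(follow, ai_accuracy=0.20, own_accuracy=own)
    print(f"defer {follow:.0%}:  AI right -> {good:.0%}   AI wrong -> {bad:.0%}")

# defer 0%: both columns read 60%. Your accuracy is yours.
# defer 100%: the columns read 90% and 20%. Your accuracy is the machine's.
```

The spread between those two columns is the point. The more you defer, the less your score says about you and the more it says about whether the machine happened to be right.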

Now extend this finding beyond the laboratory. Think about the platforms that structure daily life. Google does not merely answer your questions. It decides which questions you see answered first, which framings you encounter, which sources you trust. YouTube does not merely show you videos. It constructs a sequential decision funnel in which each choice narrows the next, each click reinforcing a pattern that the algorithm then exploits to keep you clicking. TikTok does not merely entertain you. It runs thousands of micro-experiments on your attention, testing which stimuli produce the longest dwell time, then feeds you more of whatever keeps you still.

This is not nudge theory, though it borrows from it. This is not choice architecture, though it uses its tools. What it is, more precisely, is a system that exploits the same metabolic logic that Shaw and Nave documented in the lab. The brain is looking for the lowest-energy path to resolution. The platform engineers that path. Over time, the user does not merely choose content. The user becomes the kind of person who chooses that content. The algorithm stops reflecting preferences and starts manufacturing them.

Shaw and Nave’s participants followed faulty AI advice 80 percent of the time. Their confidence went up. They felt smarter. The machine was wrong, they adopted the wrong answer, and they walked away more certain than before. That is the real product. Not information. Not answers. The feeling of having thought, without the cost of thinking. The platforms have perfected the manufacture of this feeling. They have built systems that consume human attention while producing the sensation of understanding, when in fact the user has merely been guided through a corridor whose walls were invisible and whose destination was chosen by someone else.

One of the most striking findings in the Wharton paper is who surrenders and who doesn’t. People with higher trust in AI used the chatbot more, followed its advice more uncritically, and suffered greater accuracy declines when the AI was wrong. People with higher fluid intelligence and higher “need for cognition,” a psychological measure of how much someone enjoys effortful thinking, were more resistant. They overrode the AI more often. They maintained their own reasoning even when the machine offered an easier path.

The people most vulnerable to cognitive surrender are not the people we typically worry about when we discuss AI safety. They are not edge cases. They are the median. They are the people who trust technology because they have been told to, who defer to confident-sounding systems because confidence has always been a social signal of competence, who lack the time or the training or the disposition to verify what a machine tells them. They are, in other words, most of us, most of the time.

This is the trap. You can be smart enough to catch the AI’s error and still not bother, because the energetic cost of checking exceeds the perceived benefit of being right. Every interface decision that makes AI output feel more natural, more authoritative, more seamlessly integrated into the flow of thought, is a decision that makes cognitive surrender more likely.

Shaw and Nave ran a third experiment that deserves more attention than it has received. They gave participants financial incentives for correct answers and immediate item-by-item feedback telling them whether they got each question right or wrong. This is the intervention that rational choice theory says should work. Make the stakes real. Make the errors visible. People will adjust.

And they did adjust. Override rates on faulty AI advice more than doubled. Accuracy improved. The combination of money and feedback reactivated System 2, the effortful, deliberative processing that cognitive surrender had suppressed. People started checking the machine’s work.

But here is the finding that matters most. Even with real money on the line and immediate feedback after every single question, cognitive surrender persisted. The accuracy gap between AI-accurate and AI-faulty trials remained at 44 percentage points. Participants who used the AI heavily still had their accuracy determined primarily by whether the AI happened to be right, not by their own reasoning. Incentives and feedback reduced the magnitude of surrender. They did not eliminate it.

Think about what this means outside the lab. In the experiment, the stakes were small and the feedback was instant and unambiguous. In life, the stakes are often enormous and the feedback is delayed, noisy, or absent altogether. You don’t find out your doctor’s AI gave a subtly wrong drug interaction warning until the patient deteriorates. You don’t find out the recommendation engine radicalized your information diet until you realize you can no longer have a conversation with someone who watches different feeds. The conditions that partially mitigate cognitive surrender in the lab barely exist in the wild.

The Corridor Gets Narrower

There is a principle in evolutionary biology, Dollo's law, that works like a ratchet. Once a species loses a capacity, it rarely gets it back. Eyes that atrophy in cave-dwelling fish do not re-evolve when the fish returns to light. Muscles that weaken from disuse do not spontaneously rebuild. The metabolic savings become structural. The organism adapts to the reduced state and builds new dependencies around it.

I think about this when I read about cognitive surrender. Not because I believe AI will cause the human brain to physically atrophy, although the endoscopy study the Wharton paper cites, where physicians who relied on AI diagnostic tools showed measurable declines in unaided performance, suggests the functional equivalent is already happening. I think about it because the ratchet captures the trajectory. Each increment of convenience creates a new baseline. Each new baseline makes the previous level of effort feel unnecessary. Phone numbers, navigation, factual recall — we have been externalizing cognitive function for decades, one reasonable trade at a time, each one making the next one easier to accept. Cognitive surrender is not a single trade. It is the latest step in a sequence whose direction we never explicitly chose.

Shaw and Nave frame their paper as an extension of dual-process theory, a third system added to an existing cognitive architecture. But I think the deeper reading is that System 3 is not merely joining Systems 1 and 2. It is, gradually, making System 2 unnecessary. Not by replacing it with something better but by making it feel too expensive to use. The machine thinks for you. The machine thinks fluently. The machine thinks confidently. Why would you spend the metabolic cost of thinking for yourself when the answer is already there, pre-formed, waiting to be consumed?

The question is not whether AI will reshape human cognition. It already has. But there is a deeper question underneath it, one the paper brushes against without fully asking, and it is the one I cannot stop turning over. What if the brain was never optimizing for truth in the first place?

Shaw and Nave’s participants did not follow the AI because they believed it was correct. They followed it because it sounded correct. Because it was fluent, confident, and internally consistent. Their accuracy dropped but their confidence rose. That only makes sense if the thing being satisfied is not accuracy but something else entirely. Something cheaper to verify. Something the brain has always preferred.

Coherence.

The brain does not ask “is this true?” It asks “does this fit?” And fitting is orders of magnitude cheaper to compute than verifying. A statement that feels logically consistent, aligns with what we already believe, and does not immediately threaten us passes the metabolic audit without triggering the expensive machinery of doubt. Not because we are gullible. Because doubt costs calories and coherence is free.

This reframes cognitive surrender completely. The participants in the Wharton experiments were not failing to think critically. They were doing what humans have always done. They were accepting coherent narratives from confident external sources, the same way we accepted them from elders, from priests, from consensus, from anyone who spoke with enough fluency to make verification feel unnecessary. Consensus, after all, was the original System 3. Before algorithms, the external cognition source was the group. And the group never optimized for accuracy. It optimized for alignment. Because alignment kept you alive, and being right sometimes got you exiled.

AI inherited that role. It just runs it faster, at scale, without the social cost of disagreement. It is consensus with one participant.

Which raises a possibility worth sitting with. What if the period in human history where we valued truth over coherence, the Enlightenment, the scientific method, peer review, the whole apparatus of institutional verification… what if that was the anomaly?

What if cognitive surrender is not a new failure mode introduced by technology but a reversion to the default, the state the brain always preferred before a few centuries of expensive cultural infrastructure made verification feel normal?

If that is the case, then the machines are not degrading human thought. They are simply revealing what it was optimized for all along. And the question stops being “how do we resist cognitive surrender” and becomes something much harder to answer. Something we may not want to answer. Something that implicates not just the technology but the entire Enlightenment bet that verification could ever outcompete comfort at the scale of a species.

We built tools that think for us, and they are extraordinary. We built tools that know what we want to hear, and they are getting better at it every day. We are not getting dumber. We are not getting lazier. We are getting exactly what we selected for.

You’ve been nodding along for a while now. Did you check any of the numbers?