Last week, I watched someone in a school use Google Gemini to draft an email. Nothing controversial about that - many of us do it. But what struck me was the complete cognitive disengagement. No pause to consider the recipient. No wrestling with how to frame a delicate point. Just: prompt, generate, send. The entire thinking process - the awkward, uncomfortable, genuinely difficult work of communicating clearly - had been outsourced to a machine.
Then came the headlines: "MIT Study Shows ChatGPT Rots Your Brain." Sensationalist nonsense, obviously. But beneath the clickbait lies something profound and unsettling. The study doesn't actually say AI is melting our minds. What it reveals is far more nuanced and, frankly, more important: thinking requires genuine effort, and when we bypass that effort, we're not just being lazy; we're fundamentally changing how our brains work.
We stand at a peculiar moment. We have tools that can think alongside us, but we're often using them to think instead of us. And the difference between those two prepositions - alongside vs instead - might determine whether AI amplifies human intelligence or atrophies it.
The Metabolic Cost of Thought
Thinking is bloody exhausting. Not metaphorically exhausting either. Actually, physically, metabolically exhausting. Your brain, despite being only about 2% of your body weight, consumes roughly 20% of your body's energy. And when you engage in what psychologist Daniel Kahneman termed "effortful thinking," that energy expenditure spikes dramatically.
Kahneman's 1973 book Attention and Effort established something crucial: attention isn't just focus; it's literally the allocation of metabolic resources. When you concentrate hard on a complex problem, your pupils dilate, your heart rate increases, your blood pressure rises. Your body is working. The mental effort you feel isn't an illusion; it's your brain burning through glucose at an accelerated rate.
I've discussed Kahneman's distinction between System 1 and System 2 thinking in previous pieces, but it bears repeating here because it's fundamental to understanding what's happening with AI. System 1 is fast, automatic, effortless. It's what happens when you recognise a face or read simple text. System 2 is slow, deliberate, and metabolically expensive. It's what happens when you multiply 17 by 24 in your head or work through a moral dilemma.

The critical point is that we evolved to be what psychologists call "cognitive misers." Our brains default to System 1 thinking whenever possible because System 2 is calorically expensive. When food was scarce and predators were abundant, wasting energy on unnecessary thinking could literally get you killed. So we developed mental shortcuts, heuristics, and a profound bias toward cognitive efficiency.
This evolutionary legacy means we're constantly searching for ways to avoid hard thinking. It's not laziness but rather biology. And AI tools like ChatGPT have arrived at precisely the moment when that biological tendency meets technological capability. The result is that we've created the perfect cognitive bypass.
As someone with AuDHD, I'm acutely aware of cognitive load. The additional effort of masking my neurodivergent traits whilst simultaneously trying to think clearly about complex problems is genuinely exhausting. Some days, by 3pm, I'm spent - not from working, but from the sheer metabolic cost of thinking whilst navigating a neurotypical world. So I understand the appeal of offloading cognitive work to AI. The question is: at what cost?
What the MIT Study Actually Says (And Doesn't Say)
The MIT Media Lab study that sparked the "brain rot" headlines is more interesting (and more troubling) than the sensationalist coverage suggests. Let's look at what actually happened.
Researchers divided 54 participants aged 18-39 into three groups. One group wrote SAT-style essays using ChatGPT, another using Google Search, and the third using nothing at all, just their brains. Participants wore EEG caps that monitored brain activity across 32 regions whilst they wrote and rewrote essays over several months.
The findings were stark: ChatGPT users showed the lowest neural engagement, particularly in areas associated with creativity, memory formation, and executive function. The brain-only group demonstrated the highest connectivity, especially in the alpha, theta, and delta bands, which are the neural signatures of deep cognitive processing, creative ideation, and semantic memory.
But here's where it gets interesting: over time, ChatGPT users became progressively more reliant on the tool. By the third essay, many had devolved into pure copy-paste behaviour. They'd prompt ChatGPT, make minimal edits, and submit. The essays, as two English teachers who evaluated them observed, were "soulless": technically competent but devoid of original thought or personal voice.

The study introduced a term that's worth understanding further: "cognitive debt." Like technical debt in software development, cognitive debt accumulates when we take shortcuts that seem efficient in the moment but create problems down the line. Each time we let ChatGPT do our thinking, we're borrowing against our future cognitive capacity.
Now, here's what the study doesn't say: it doesn't claim that AI inherently damages brains or that using ChatGPT will make you stupid. What it demonstrates is far more specific and, frankly, more actionable. When you outsource the thinking process entirely, you don't develop the neural pathways required for that type of cognition. The answer isn't to throw out ChatGPT; it's to stop relying on it to do all of the thinking.
Think of it like calculators. If I ask you to find the square root of 274,635,915,822, you'd reach for a calculator and you'd be right to. That's a reasonable use of computational assistance. But if you've never learned to do basic arithmetic without a calculator, you're in trouble when you need to estimate, check if an answer makes sense, or solve problems where calculators aren't available.
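To make the estimation point concrete, here's a minimal sketch - in Python, purely for illustration - contrasting the calculator's answer with the back-of-the-envelope estimate you could make without one:

```python
import math

n = 274_635_915_822

# The calculator answer: precise, instant, and requiring no understanding.
exact = math.isqrt(n)  # 524_057 (the integer part of the square root)

# The mental-arithmetic answer: n is roughly 2.7 x 10^11, which is 27 x 10^10,
# so its square root is roughly sqrt(27) x 10^5, i.e. a little over 5 x 10^5.
estimate = 5.2 * 10**5

print(exact, estimate, f"{abs(exact - estimate) / exact:.1%}")  # within about 1%
```

The calculator (or `math.isqrt`) gives you the digits; the estimate is the sanity check that tells you whether those digits are even plausible. Lose the second skill and you have no way of knowing when the first one has gone wrong.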
One critic of the study, writing in The Conversation, made this point brilliantly: "Producing essays with pen and paper is no longer a demonstration of critical thinking ability, just as doing long division is no longer a demonstration of numeracy." The issue isn't that we're using AI; it's that we haven't raised the bar to justify its use.
The study also revealed something fascinating in its fourth session. Participants who'd been using ChatGPT were suddenly asked to write without it, whilst brain-only users were given ChatGPT for the first time. The brain-only users adapted brilliantly because they'd developed strong thinking skills and now had a tool to augment them. The ChatGPT users struggled. When forced to think independently, they couldn't recall their own previous essays and showed minimal improvement in neural engagement.
This isn't about AI being bad. It's about cognitive atrophy. If you've been letting a machine lift all the heavy weights, your muscles haven't developed. When you suddenly need to lift something yourself, you're unprepared.

Descartes Was Right to Doubt
René Descartes, shut away in his stove-heated room, engaged in what might be history's most famous thinking exercise, later set out in his 1637 Discourse on the Method. He systematically doubted everything he could possibly doubt until he arrived at the one certainty that remained: Cogito, ergo sum. I think, therefore I am.
But notice what Descartes had to do to reach this insight: withdraw from the world, sit in isolation, and engage in brutally difficult mental labour. The effort of thinking was central to his philosophical breakthrough. He couldn't outsource his doubting to someone else. The thinking itself was the point.
The philosophical tradition has always recognised that genuine thinking requires effort, discomfort, and often physical withdrawal. Aristotle distinguished between theoria (contemplative thinking) and praxis (practical action), arguing that the contemplative life was humanity's highest calling precisely because it was the most difficult.
Hannah Arendt later explored this distinction through her concepts of vita activa (the active life) and vita contemplativa (the contemplative life). She worried that modern society increasingly valued only action and productivity, leaving little room for genuine thought. If she were alive today, watching us offload thinking to AI so we can produce more content faster, I suspect she'd be horrified.
Martin Heidegger made a similar observation when he distinguished between "calculative thinking" and "meditative thinking." Calculative thinking - the kind we do when solving equations or planning logistics - is important but insufficient. Meditative thinking - the deeper, more difficult questioning of fundamental assumptions - is what separates mere cleverness from wisdom.
AI is brilliant at calculative thinking. It can solve equations, optimise routes, and process vast datasets faster than any human. But meditative thinking - the kind that asks "why are we doing this?" or "what does this mean?" - requires a consciousness, a lived experience, and a capacity for genuine doubt that AI simply doesn't possess.
When we let AI do our thinking, we're not just being efficient. We're abdicating the very activity that Descartes identified as proof of our existence. If you stop thinking, what exactly are you?

I've written before about Churchill's insight that prophets must emerge from civilisation but retreat to wilderness to create what he called "psychic dynamite." The pattern holds here too: breakthroughs require both immersion in existing knowledge and the difficult mental work of processing that knowledge in isolation. AI can help with the immersion - gathering information, identifying patterns, sifting through data - but it can't do the wilderness work. That requires your brain, working hard, burning through glucose, forming new neural connections.
The Unboxing Ideas Framework and Cognitive Effort
My Unboxing Ideas framework, which I've discussed extensively in previous pieces and which I love to share in workshops around the world, provides a useful lens for understanding where AI can legitimately help and where it becomes a cognitive crutch. Let me walk through each stage and examine the role of effort.
PREP: Laying the Foundation
This is where you're gathering information, clarifying goals, identifying constraints, and mapping the problem space. It's cognitively demanding because you're holding multiple threads simultaneously whilst building mental models of how they interconnect.
AI can genuinely help here. It's excellent at summarising articles, identifying key themes across sources, and organising disparate information. But - and this is crucial - you still need to do the mental work of understanding those relationships. If you're just asking ChatGPT "summarise these five articles" without reading them yourself, you're not preparing; you're abdicating.
The test should be whether you can explain the key concepts to someone else without referring back to the AI summary. If not, you haven't done the cognitive work required.
BREW: Letting Ideas Simmer
Graham Wallas, whose 1926 work The Art of Thought inspired my framework, identified "incubation" as a critical stage in creative thinking. This is where your subconscious mind processes information, makes unexpected connections, and generates novel insights.
Here's where AI fundamentally cannot help, because this isn't conscious work. It's your brain, during downtime, forming associations that only your unique experience and neural architecture can create.
The MIT study participants who used ChatGPT from the start never developed these pathways. They never learned to sit with incomplete information, to tolerate the discomfort of not knowing, to let their minds wander productively. They went straight from problem to solution, bypassing the crucial stage where real insight emerges.

I've experienced this myself. My best ideas don't come when I'm staring at a screen; they come during walks, in the shower, whilst washing up - and I reckon it's the same for most of us. That's because my conscious mind has stepped back and let my subconscious do its associative work. If I'd been using AI to immediately answer every question, those insights would never have formed.
AHA: The Moment of Insight
Jerome Bruner coined the term "effective surprise" to describe genuine insight: the moment when disparate elements suddenly cohere into a new understanding. As Bruner observed,
"Effective surprises have the quality of obviousness about them when they occur, producing a shock of recognition following which there is no longer astonishment." - Jerome Bruner
But the thing about effective surprise is that you can only be surprised if you've done the groundwork. The Aha moment feels sudden, but it's actually the culmination of extensive prior effort. Your brain has been working on the problem, testing combinations, until something clicks - and that percolation of ideas can be conscious or unconscious.
AI can't have your Aha moments for you. It can generate novel combinations of existing ideas, but it can't experience that shock of recognition, that feeling of "yes, that's it!", because it has no phenomenological experience whatsoever. You've probably sensed this, as I have, when generative AI serves up formulaic email subject lines or neatly alliterated tips and tricks: novel-looking combinations with no spark of recognition behind them.
And when you use ChatGPT to generate ideas, you're not having insights; you're consuming someone else's (or rather, some statistical model's) pattern-matching output. The neural pathways that would have formed during your own insight process remain undeveloped.
CHECK: Putting Ideas to the Test
This is where thinking is perhaps hardest: evaluating and critiquing your own ideas. It requires what psychologists call "metacognition" - thinking about thinking. You need to step back from your attachment to an idea and examine it dispassionately.
AI can assist here by playing devil's advocate, identifying potential flaws, or suggesting alternative perspectives. And it does this really well in many cases. But it cannot replace the metacognitive work of genuinely questioning your assumptions, recognising your biases, and making nuanced judgments about validity.
The danger is that AI makes this stage feel easier than it is. ChatGPT will happily evaluate your idea and provide a balanced assessment. But if you haven't developed your own critical thinking skills, you can't judge whether that assessment is sound. You're trusting the algorithm's pattern-matching against its training data, not exercising genuine judgment.

Working WITH AI Requires MORE Thinking, Not Less
The twist that most coverage of the MIT study misses, in my humble opinion, is that competent AI use actually requires more thinking, not less.
Consider what you need to use AI effectively:
- Formulate precise, well-constructed prompts (requires clarity of thought)
- Evaluate outputs critically (requires domain expertise)
- Synthesise information across multiple AI-generated responses (requires cognitive integration)
- Maintain your own mental models to judge coherence (requires sustained attention)
- Know when AI is hallucinating or producing plausible nonsense (requires fact-checking ability)
I work with teachers and school leaders implementing AI tools. The educators who use AI most effectively aren't those who know the least; they're those who know their subjects deeply. They can spot when ChatGPT produces superficially plausible but fundamentally flawed lesson plans. They can take an AI-generated idea and refine it based on their understanding of pedagogy and their specific students' needs. They can spot a hallucinated Macbeth quote or an inconsistent application of Kantian philosophy.
The weaker teachers - those who perhaps lack deep subject knowledge - are the most likely to accept AI outputs uncritically. And crucially, they're the ones AI helps least, because they're not developing the expertise they need to evaluate its suggestions. You might get somewhere fast, but your brain won't remember how you got there when you need to do it again. You could argue that AI is like the birds in Hansel and Gretel, eating the breadcrumb trail behind you.

If you start with AI, you never build the cognitive capacity to use it well. But if you build that capacity first through hard thinking, AI becomes a powerful amplifier.
The problem, as one astute commentator on the MIT study noted, is that "educators have not raised the bar in a way that makes AI a necessary part of the process. Educators still require students to complete the same tasks and expect the same standard of work as they did five years ago."
This is critical. If students can complete an assignment through copy-paste ChatGPT, the assignment is the problem, not the AI. We should be setting challenges that require AI as a tool but still demand genuine thinking.
Don't ask students to "write an essay on what causes climate change" - that's prime fodder for GenAI drivel. Instead, ask them to use AI to generate three competing arguments about carbon taxation, then write a synthesis that identifies the hidden assumptions in each position and argues for which approach is most viable in the UK context. Suddenly, AI is a research assistant, not a replacement.
Or in history, give students a complex debate - say, the causes of the First World War - and let them use AI to gather multiple historiographical perspectives. Then require them to write a 500-word argument defending which interpretation is most convincing and why, citing specific evidence. The AI does the legwork of gathering sources; the student does the thinking about which argument holds water.
In science, ask students to use AI to generate three different experimental designs for testing a hypothesis, then have them present which design is best and explain the methodological trade-offs. The AI can propose options; the student must demonstrate understanding of why one approach is superior to another in their specific context.
Business Studies students could use ChatGPT to draft a marketing strategy for a local start-up, then present it to actual business owners who'll ask uncomfortable questions about target demographics, cost projections, and competitor analysis. The AI can generate the framework, but students need genuine understanding to defend their choices under scrutiny.
The AI does the grunt work; the human does the thinking.
Why We Must Preserve "Hard Thinking"
Robert Bjork, a psychologist at UCLA, coined the term "desirable difficulties": making learning harder often makes it more effective. When you struggle with material, when you have to work to retrieve information or solve problems, you form stronger, more durable memories and deeper understanding.
This is the "generation effect" because information you generate yourself through effort is remembered far better than information you passively receive. It's why students who take notes by hand often outperform those who type, the extra difficulty of handwriting forces deeper processing. And to be fair, anything that requires effort to do is more likely to lead to stronger neural pathways being established.
My youngest daughter is learning back walkovers on the trampoline. She could watch hundreds of YouTube tutorials, study Olympic gymnasts, even use AI to generate personalised training plans. But ultimately, she has to physically practise - to fail, adjust, fail again, gradually build the muscle memory and proprioceptive awareness. There's no shortcut. The difficulty is the point.

Thinking works the same way. You can watch clever people think, you can read brilliant arguments, you can even have AI generate sophisticated analyses. But you can't develop your own thinking capacity without actually thinking - hard, effortful, sometimes frustrating thinking.
The neuroscience backs this up. Your brain is constantly pruning unused neural connections and strengthening frequently used ones. This "use it or lose it" principle means that if you stop engaging in effortful thinking, those neural pathways literally atrophy. The cognitive muscles weaken.
I've written before about smartphones and children, arguing that we're raising a generation without the attentional capacity or delay-of-gratification tolerance that previous generations developed naturally. The same principle applies to AI and thinking. If we raise young people who've never had to think hard - who've always had AI to handle the cognitive heavy lifting - what happens when they encounter problems AI can't solve?
Because such problems exist. Novel situations that don't fit training data. Ethical dilemmas requiring lived experience and moral reasoning. Creative challenges demanding genuine originality rather than pattern-matching. Strategic decisions where stakes are high and precedent is limited.
These are precisely the domains where human thinking remains superior to AI (for now, at least). But only if we've developed that capacity through practice. And practice means effort. Sustained, uncomfortable, metabolically expensive effort.
A Framework for Thinking in the AI Age
So what do we actually do? How do we navigate this landscape where AI can help but also harm, where efficiency and cognitive development pull in different directions?
Aristotle had a concept called phronesis, or practical wisdom. Not just theoretical knowledge, but the judgment to know which principles apply in which circumstances. We need phronesis for AI use: knowing when to think independently, when to use AI as a tool, and when to avoid it entirely. In education, I really like the AI Assessment Scale developed by Leon Furze and others, which helps guide learners on when AI should and shouldn't be used.
There are some further ideas that we can consider:
Start with manual thinking, then augment with AI. This is what the MIT study actually teaches us. The participants who went from brain-only to AI use in the fourth session adapted brilliantly. They'd built their cognitive capacity and now had a tool to enhance it. Do the hard thinking first, then use AI to scale, refine, or extend.
Deliberately practise hard thinking even when AI is available. Create routines where you work through problems without technological assistance. Write morning pages by hand. Solve puzzles. Engage in genuine debate where you have to think on your feet. Read dense philosophy without AI summaries. This isn't Luddism; it's cognitive maintenance. It's like going to the gym even though escalators exist: you're maintaining capacity you might need.
Use AI as a sparring partner, not a substitute. One of the most powerful uses of ChatGPT is Socratic dialogue. Present your argument and ask it to challenge your reasoning, identify flaws, suggest counterarguments. This forces you to think harder, not less. You're using AI to make the cognitive work more difficult, not easier.
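If you want to make that sparring-partner behaviour deliberate rather than hoping the conversation drifts that way, you can bake it into the prompt itself. Here's a minimal sketch, assuming the OpenAI Python SDK and an API key in your environment; the model name, prompt wording, and example argument are illustrative, not a recommendation:

```python
# Sketch of the "sparring partner" pattern: the system prompt forbids agreement
# and forces the model to hand the thinking back to you.
# Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

my_argument = (
    "Banning smartphones in schools will improve pupils' attention and "
    "attainment, because removing the device removes the distraction."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; any capable chat model will do
    messages=[
        {
            "role": "system",
            "content": (
                "You are a Socratic sparring partner. Do not agree with or "
                "rewrite the argument. Identify its weakest assumptions, give "
                "the strongest counterargument, and finish with three probing "
                "questions the author must answer for themselves."
            ),
        },
        {"role": "user", "content": my_argument},
    ],
)

print(response.choices[0].message.content)
```

The design point is that the output is deliberately unfinished: you get objections and questions rather than an answer you can paste, so the cognitive work stays with you.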
Create "AI-free zones" for deep work. When you're doing genuinely creative or strategic thinking - the Brew phase of Unboxing Ideas - turn off AI access. Let your brain do its associative work without the temptation to shortcut to an answer.
Judge your AI use by whether it builds or depletes cognitive capacity. This is the crucial test. After using AI, ask yourself: do I understand the topic better? Could I recreate this thinking independently? Have I learned something? If the answer is no, you've used AI as a crutch, not a tool.
Raise the bar. If you're an educator, parent, or leader, design challenges that assume AI availability but still require genuine thinking. Don't ban AI; instead, set tasks where AI handles the grunt work but human judgment remains essential.
The goal isn't to avoid AI but to use it in ways that amplify rather than atrophy our thinking capacity. This requires intention, discipline, and a willingness to embrace difficulty when ease is available.

Key Takeaways
1. Thinking is metabolically expensive - and that's a feature, not a bug. The effort you feel when thinking hard isn't a problem to be solved; it's how your brain learns, builds capacity, and forms durable memories. Kahneman showed us that attention equals effort. When you avoid that effort, you're not just being efficient; you're preventing cognitive development.
2. The MIT study doesn't say "AI is bad" - it says outsourcing thinking atrophies thinking skills. Like any tool, AI can help or harm depending on how you use it. The study shows that when people let AI do their thinking entirely, they don't develop the neural pathways required for that cognition. But those who build capacity first and then use AI perform brilliantly.
3. Use the Unboxing Ideas framework to know where AI can help. PREP: gathering and organising information (AI helpful). BREW: making unique connections through incubation (AI cannot do this for you). AHA: insight requires prior mental work (AI cannot have your insights). CHECK: evaluation requires judgment (AI can assist, but not replace).
4. The paradox: competent AI use requires MORE thinking, not less. You need deeper subject knowledge to evaluate AI outputs, better thinking skills to formulate good prompts, and stronger judgment to know when AI is producing plausible nonsense. The best AI users are those who think hardest.
5. Deliberately practise "hard thinking" even when AI is available. Your brain is a muscle that atrophies without use. Create routines where you think independently: handwritten morning pages, phone-free problem-solving, genuine debates without searching for facts mid-conversation. Maintain your cognitive capacity intentionally.
6. Raise the bar: don't use AI to make old tasks easier; use it to make new tasks possible. If students can complete assignments through copy-paste ChatGPT, the assignment needs redesigning. Set challenges that assume AI availability but require genuine human judgment, creativity, and critical evaluation.
7. Thinking is proof of existence: don't outsource your consciousness. Descartes knew: Cogito, ergo sum. I think, therefore I am. When you stop doing the difficult work of thinking, when you let algorithms handle cognition whilst you merely consume outputs, what exactly remains of you?
The Point of the Pain
I'm watching that teacher draft another email with Gemini. Generate, scan, send. No cognitive effort visible. But I'm wondering: what happens after a year of this? Five years? Which neural pathways are being pruned? What thinking capacity is quietly atrophying?
The MIT study is a warning shot, not a death knell. We're not doomed to brain rot. But we do face a choice, and it's not the one the headlines suggest. The choice isn't between using AI or rejecting it. It's between thinking with AI and letting AI think instead of us.
Those two prepositions - with and instead - contain the entire problem and the entire solution.
Hard thinking is uncomfortable. It requires effort, produces frustration, demands sustained attention in an age of distraction. Your brain actively resists it because it's metabolically expensive. Every instinct pushes you toward the shortcut, the easy answer, the AI-generated solution.
But that difficulty is the point. The effort is how you know you're doing it right.
When you sit with an incomplete thought, tolerating the discomfort of not knowing. When you wrestle with contradictory ideas, trying to synthesise them into coherence. When you question your own assumptions and force yourself to articulate why you believe what you believe. That's thinking. That's the hard, exhausting, absolutely essential work of being human.
AI can help you scale that thinking, extend it, refine it. But it can't do it for you. And if you let it try, you're not being efficient. Instead, you're eroding the very capacity that makes you something more than a particularly sophisticated input-output device.
Thinking is hard. It's supposed to be. And that difficulty isn't a bug to be engineered away; it's proof that you're still here, still human, still capable of something no algorithm can replicate.
Cogito, ergo sum.
I think, therefore I am.
Not "AI thinks, therefore I consume."
The exhausting art of actually thinking might just be the most important skill we can preserve. Because when thinking stops being hard - when it becomes something we outsource entirely - we risk losing more than cognitive capacity.
We risk losing ourselves.
Further Reading
Discover more interesting articles here.