Everyone's Learning AI. Nobody's Learning to Think

March 20, 2026

LinkedIn's 2026 Skills Report tells us what employers want. It accidentally reveals something much more uncomfortable. Every February, the professional development world performs a familiar ritual. LinkedIn publishes its Skills on the Rise report, the algorithm surfaces it, and thousands of people spend twelve minutes feeling pleasantly anxious before adding 'prompt engineering' to their profile and considering themselves prepared for the future. Tick. Done. Progress achieved.

I've watched this cycle for long enough to find it genuinely unsettling (and it isn't just the LinkedIn report either - the WEF, PISA, Google and others all release something similar). Not because the research is bad, but because of how consistently it gets misread. The 2026 edition, covering the fastest-growing skills across the UK and US labour markets, is actually a fascinating document. It tells you something real about where the economy is heading. But I wonder whether we are drawing the right conclusions from it.

The right conclusion isn't 'learn to use AI tools faster.' It's something considerably more difficult to act on, considerably harder to measure, and considerably less likely to feature in anyone's CPD programme. The right conclusion is this: what employers are actually desperate for, and what training budgets are almost entirely failing to develop, is judgement.

Not AI skills. Not even critical thinking in the vague, aspirational way that phrase gets deployed. Genuine, operational, calibrated judgement under conditions of uncertainty. The ability to make a good call when the information is incomplete, the stakes are real, and the model can't decide for you.

That's what the list says, if you're willing to read it properly. Let's do that.

AI Generated Image. Midjourney Prompt: learn skills faster and faster

What the List Actually Says (Once You Stop Being Dazzled)

LinkedIn's methodology tracks two things: how many professionals add a given skill to their profile year-on-year, and how many professionals with that skill actually get hired. The 2026 results cluster into eight broad categories. On the technical side: AI engineering, AI business strategy, operational efficiency, risk and compliance. On the human side: executive and stakeholder communications, leadership and people management, business revenue growth, financial operations.
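LinkedIn doesn't publish the formula behind the list, but the shape of it is easy to imagine. Here is a purely hypothetical sketch of how a 'rise score' might blend those two signals - every function name and number below is my own invention for illustration, not LinkedIn's method:

```python
# Hypothetical illustration only - LinkedIn's actual methodology is not
# public. This blends the two signals the report describes: year-on-year
# growth in profile additions, and the hiring rate among people listing
# the skill.

def rise_score(additions_prev: int, additions_now: int,
               hire_rate: float, growth_weight: float = 0.5) -> float:
    """Blend profile-addition growth with a hiring signal into one score."""
    growth = (additions_now - additions_prev) / additions_prev
    return growth_weight * growth + (1 - growth_weight) * hire_rate

# Toy numbers, invented for illustration:
skills = {
    "AI engineering": rise_score(10_000, 18_000, hire_rate=0.30),
    "Executive communication": rise_score(25_000, 33_000, hire_rate=0.45),
}
for skill, score in sorted(skills.items(), key=lambda kv: -kv[1]):
    print(f"{skill}: {score:.2f}")
```

Whatever the real weighting, notice what both inputs have in common: they are self-reported or recruiter-driven signals. That matters, and I come back to it below.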

The dominant cultural reading of this list is predictable. AI is everywhere, so learn AI. The robots are coming for your job, so prove you can operate the robots. LinkedIn's own COO, Dan Shapero, leaned into this framing, saying: 

"Those who embrace AI, are curious with the technology, and use it in their daily work will be seen as the future leaders at each company." Dan Shapero

Fair enough. But look at what's sitting alongside the AI categories. Risk compliance management. Data governance. Responsible AI. Executive communication. Cross-functional team leadership. Stakeholder relations. This isn't a list of people who know how to use software. This is a list of people who know what to do after the software has finished doing its thing - people who can evaluate the output, navigate the political complexity of implementation, and make a defensible decision when the context doesn't fit the training data.

“There's still nuance that AI can't do.” Andrew Seaman

Seaman was referring to cross-team collaboration, client relations, and leadership training. But that throwaway observation deserves far more weight than it received in the coverage. 'Nuance that AI can't do' isn't a consolation prize for the technically squeamish. It's a precise description of the most economically scarce human capability in an AI-saturated environment. And nobody is seriously asking why we're so bad at developing it.

The Confusion at the Heart of the 'AI Skills' Narrative

Let me start with the thing that bothers me most about how organisations are responding to this research. The LinkedIn data is real. The skills on the list are genuinely in demand. But the response - courses, certifications, tool-specific training - is solving for the wrong problem.

AI Generated Image. Midjourney Prompt: solving for the wrong problem

Knowing how to operate a piece of software and knowing when, whether, and how to act on its output are not the same cognitive activity. One is trainable in an afternoon. The other takes years of deliberate exposure to genuinely difficult situations. Yet the vast majority of AI upskilling investment is aimed at the former while expecting it to produce the latter.

I've written previously about what genuine AI literacy actually requires - understanding what these systems are, how they function, and what they fundamentally cannot do (FRiDEAS #70 covers this in depth, including the philosophical dimension of AI as agent rather than tool). But what strikes me more and more is that the conversation has barely moved beyond tool proficiency. The debate is still largely: 'Which platform should we use?' and 'How do we prompt it effectively?' rather than 'How do we decide what to trust, when to override it, and what to do when it confidently gives us the wrong answer?'

This is where my AI Cards come in - they were designed precisely to create space for that more uncomfortable, more important conversation. Not 'can you use AI?' but 'can you think critically in the presence of AI?' The LinkedIn data, read properly, is making the same argument. The market is not primarily asking for people who can use the tools. It is asking for people who can govern them, question them, and function well in the ambiguous space where the tool runs out.

The distinction matters enormously for how organisations think about development. You cannot fix a judgement deficit with a software tutorial. And yet that's essentially what's happening at scale.

The Fox Knows Many Things: Tetlock, Berlin, and the Judgement Question

In 1953, the philosopher Isaiah Berlin - an Oxford don who had a particular gift for making complicated ideas feel vivid - published a celebrated essay called 'The Hedgehog and the Fox.' He borrowed the central image from the Greek poet Archilochus: 'The fox knows many things, but the hedgehog knows one big thing.' Berlin used it to distinguish between two types of thinkers. Hedgehogs organise the world around a single central vision or governing principle. Foxes draw on many ideas, many sources, many angles, remaining sceptical of any one framework's ability to explain everything.

I've drawn on Berlin's ideas in other FRiDEAS pieces (Frogs, Birds, Hedgehogs, Foxes and Gameboys), so this isn't entirely new territory for regular readers. But the specific fox/hedgehog distinction hasn't been the focus before, and it becomes crucial here.

Fifty years after Berlin's essay, the psychologist Philip Tetlock took that metaphor and ran an extraordinary experiment with it. Over nearly two decades, Tetlock tracked the predictions of 284 political and economic experts - academics, think-tank analysts, government advisers - asking them to forecast specific, verifiable outcomes across international affairs and economics. He gathered 28,000 forecasts and then, patiently, checked them against what actually happened.
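The checking itself deserves a sentence, because it is what separates Tetlock's work from punditry about punditry. Forecasts were scored with the Brier score, a rule that punishes confident wrongness far more heavily than cautious wrongness. A minimal sketch (the numbers are mine, invented for illustration - they are not Tetlock's data):

```python
# Brier score: the mean squared gap between a forecast probability and the
# outcome (1 if the event happened, 0 if it didn't). Lower is better:
# 0 is perfect, and hedging everything at 50/50 scores 0.25.

def brier(forecasts: list[float], outcomes: list[int]) -> float:
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident hedgehog: near-certain calls, right only half the time.
hedgehog = brier([0.95, 0.95, 0.95, 0.95], [1, 0, 1, 0])  # ~0.45
# A calibrated fox: hedged calls that track what actually happens.
fox = brier([0.70, 0.60, 0.65, 0.40], [1, 1, 1, 0])       # ~0.13
print(f"hedgehog: {hedgehog:.2f}  fox: {fox:.2f}")
```

The scoring rule is the point: under it, sounding certain is not rewarded. Being right, in proportion to how right you claimed to be, is.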

The results were uncomfortable. Most expert forecasters were, in Tetlock's memorable phrase, no more accurate than a dart-throwing chimpanzee. But buried in the aggregate failure was one consistent pattern. A subset of forecasters significantly and repeatedly outperformed everyone else. And what distinguished them wasn't their domain expertise. It wasn't their academic credentials. It wasn't even their intelligence, at least not primarily.

“The strongest predictor of rising into the ranks of superforecasters is perpetual beta — the degree to which one is committed to belief updating and self-improvement. It is roughly three times as powerful a predictor as its closest rival, intelligence.” Philip Tetlock

What distinguished the best forecasters was how they thought, not what they knew. They were foxes. They drew on multiple frameworks. They updated their views readily when evidence changed. They held their own opinions loosely. They were, in Berlin's terms, not committed to one big idea. And the hedgehogs - the confident specialists with a dominant framework - consistently underperformed, particularly on long-range predictions within their own area of supposed expertise.
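'Updated their views readily' has a textbook formalisation that is worth seeing once, because it shows how unmystical the fox habit is. Bayes' rule specifies exactly how far a given piece of evidence should move a belief. A toy sketch, with invented numbers:

```python
# Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio.
# The fox habit, made mechanical: treat each piece of evidence as a nudge
# to existing odds, not as a threat to be explained away.

def update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability after one piece of evidence."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

belief = 0.60                       # starting view: 60% likely
for lr in [2.0, 0.5, 3.0]:          # evidence for, against, strongly for
    belief = update(belief, lr)
    print(f"evidence with LR={lr}: belief is now {belief:.2f}")
```

Nobody does this arithmetic in their head, and Tetlock's foxes didn't either. The disposition it encodes - hold a number loosely, and move it when the evidence says so - is the thing that transfers.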

Now look at LinkedIn's 2026 list again. Responsible AI. Data governance. Cross-functional leadership. Stakeholder communication. Risk compliance. These are not hedgehog skills. You cannot do any of them well if you are only capable of seeing a situation through one lens. They require holding multiple perspectives simultaneously, knowing which one to weight when they conflict, and being willing to change your mind when the context shifts.

That is the fox profile. And Tetlock's research suggests, empirically, that it is the profile most correlated with good decisions under genuine uncertainty. The labour market, apparently, has figured this out before most training departments have.

AI Generated Image. Midjourney Prompt: hedgehogs

Why Organisations Keep Buying Hedgehog Solutions to Fox Problems

If the evidence is this clear - if what the market wants is demonstrably the fox profile, and if that profile is demonstrably associated with better outcomes - why is the overwhelming majority of investment going into tool-specific, certifiable, hedgehog-friendly training?

The answer is structural, and it is honestly a bit embarrassing for those of us who work in organisational development and education. Fox skills are genuinely difficult to measure. A prompt engineering certification produces a certificate. Judgement under uncertainty produces nothing you can put in a dashboard.

This is partly a measurement artefact in the LinkedIn data itself, which is worth acknowledging. The methodology tracks skills that people add to their profiles and skills associated with successful hires. Both signals are easy to game. People add skills they want to have, not only skills they demonstrably possess. Hiring managers select for signals they can evaluate quickly in an interview - and judgement, by its nature, only becomes visible over time and in genuinely difficult situations. LinkedIn's data captures intention and signal. It cannot fully capture the real-world complexity of what actually makes someone effective in a role.

I touched on something adjacent to this in FRiDEAS #39 on Supercommunicators, drawing on Charles Duhigg's work. His central finding - that effective communication depends on first understanding which type of conversation you are in - is relevant here too. The communication skills on LinkedIn's list (executive communication, stakeholder management, relationship development) are not about articulation. They are about contextual reading. About knowing when to push and when to listen, when to assert and when to question. That requires the same cognitive flexibility that makes a superforecaster. And it cannot be taught by sending someone on a presentation skills course.

The measurement problem is compounded by a cultural one. Organisations that reward confident certainty will systematically select against the fox profile. A leader who says 'I'm not sure, let me look at this from a few different angles' sounds weak in cultures that prize decisiveness. And yet Tetlock's data is unambiguous: the confident hedgehog is more satisfying to listen to and more frequently wrong. We have, in many organisations, optimised our selection processes to find people who sound like they know what they're doing rather than people who actually do.

The Education Question Nobody in This Space Wants to Answer

There is an uncomfortable implication here for education, and I want to name it directly even though this piece isn't primarily aimed at teachers and school leaders.

The skills that LinkedIn's 2026 data identifies as the fastest-growing, most in-demand, most economically significant are precisely the skills that most formal education systems are worst at developing and, in many cases, actively work against.

AI Generated Image. Midjourney Prompt: Adaptability, cross-functional thinking, the ability to navigate ambiguity and hold multiple perspectives simultaneously

Adaptability, cross-functional thinking, the ability to navigate ambiguity and hold multiple perspectives simultaneously - none of these feature meaningfully in GCSE mark schemes. The Ofsted inspection framework rewards clear processes, consistent delivery, and demonstrable outcomes. These are, structurally, hedgehog metrics. They measure performance within a defined system, not the capacity to function well when the system is unclear or absent.

The XP School in Doncaster is one of the few UK examples I'd point to where expedition-based learning creates the conditions for genuine fox development - students working on real-world problems that have no clean answer key, building the tolerance for ambiguity that Tetlock identifies as central to good judgement. It is not coincidental that this model sits uncomfortably within standard accountability frameworks.

I have written previously about our failure to let students experience real failure as part of the curriculum (FRiDEAS #86 touches on this in the context of leadership under uncertainty). The same argument applies here. You cannot develop calibrated judgement in an environment where every significant decision is made by someone else and every task has a predetermined correct answer. Fox thinking is forged in genuine uncertainty. We have largely designed that uncertainty out of formal education, and the LinkedIn data is showing us the bill.

So What Do You Actually Do About It?

I am aware this piece has been more diagnosis than prescription. That is partly intentional - the diagnosis is more important than it looks, and it's being largely ignored in favour of purchasing the next AI platform licence. But let me offer three moves that I think are genuinely worth making.

1. Reframe what 'AI readiness' means in your organisation.

Stop measuring AI readiness as tool proficiency. Start measuring it as the capacity to make good decisions in the presence of AI outputs, including knowing when to trust them and when to challenge them. This is a different question from 'can your team use ChatGPT?' and it requires different development activity. Creating structured opportunities to interrogate AI outputs, debate their assumptions, and practise the kind of contextual judgement that Tetlock's foxes demonstrate - that is where the real developmental work lies.

2. Make fox thinking visible and valued.

In most organisations, the person who synthesises across domains, who pulls together perspectives from finance, operations, HR and strategy and finds the underlying pattern, is not the most celebrated person in the room. The most celebrated person is the one with the confident answer. This is backwards, and Tetlock's research suggests it's also expensive in terms of decision quality. Deliberately creating the conditions where multi-perspective thinking gets surfaced - forums for genuinely difficult problems with no pre-agreed answer, cross-functional teams with genuine autonomy - is not just a cultural nicety. It is a strategic development investment.

3. Be honest about what you cannot measure, and resist the temptation to stop valuing it.

Berlin's hedgehog is seductive precisely because the world feels more manageable when organised around a single clear principle. Measurement frameworks, KPIs, certification pathways - these are all hedgehog tools. They are genuinely useful. But they become dangerous when organisations conclude that what cannot be measured in this way doesn't exist or doesn't matter. Judgement, contextual wisdom, the capacity to hold uncertainty without collapsing into false certainty - these are real, they are valuable, and the labour market is pricing them accordingly. The fact that they resist easy quantification is not a reason to abandon them. It is a reason to think harder about how you develop and recognise them.

Tetlock's superforecasters were not exceptional because they had access to better data. They were exceptional because of how they related to uncertainty, with curiosity rather than anxiety, with openness rather than defensiveness, with the intellectual humility to know that their current view might be wrong. That disposition is neither innate nor accidental. It is cultivated. And the question worth sitting with, after you've finished reading LinkedIn's 2026 list and updating your profile accordingly, is whether anything in your organisation's culture and development practice is genuinely cultivating it.

AI Generated Image. Midjourney Prompt: super forecasters

Key Takeaways

  1. LinkedIn's 2026 Skills on the Rise list is not primarily a call to upskill in AI tools. Read carefully, it is a call for better judgement under uncertainty - the capacity to navigate ambiguity, synthesise across domains, and make defensible decisions when AI reaches the limits of its usefulness.
  2. Philip Tetlock's superforecasting research provides the empirical foundation for what this kind of judgement actually looks like. The fox profile - drawing on multiple frameworks, updating views in response to evidence, holding opinions loosely - consistently outperforms the hedgehog, even in the hedgehog's specialist domain. The strongest predictor of forecasting accuracy is not intelligence but 'perpetual beta': commitment to belief updating.
  3. The dominant response to LinkedIn's data - tool-specific AI training and certification - is a hedgehog solution to a fox problem. It is measurable, deliverable, and largely insufficient.
  4. The measurement trap is real: fox skills are hard to quantify, which creates systematic pressure to underinvest in them. Organisations that can only see what they can measure will continue hiring for signal and losing on substance.
  5. For educational leaders: the skills LinkedIn identifies as most economically valuable are the ones most at odds with how formal education systems - and particularly UK inspection frameworks - currently define and reward success. This is a structural problem that deserves structural thinking, not another CPD session.
  6. Reframe AI readiness as judgement capacity rather than tool proficiency - let's not make this about ticking boxes but about how people use the boxes they have ticked.
  7. Make cross-domain, multi-perspective thinking visible and valued in your culture - what is seen and heard in a culture shows how much we value it.
  8. Resist the temptation to stop valuing what resists easy measurement - what matters isn’t always measurable and what can be measured doesn’t always matter.

The fox/hedgehog distinction isn't a judgement about intelligence. Tetlock is clear on that. Some of the worst forecasters in his research were also the most credentialled. It's a judgement about cognitive style, and cognitive style can shift. The question worth carrying away from this piece isn't 'am I a fox or a hedgehog?' It's whether the environments you lead, design, or learn within are creating any genuine pressure to become more fox-like. If the honest answer is no, that's worth knowing and probably worth doing something about before LinkedIn publishes the 2027 list.
