
What the Algorithm Knows

May 15, 2026

The phone ban debate is a distraction. The real question is who - or what - is now the authoritative source of knowledge for the people in your institution. And whether you’ve noticed. I’ll say it again: not all screen time is equal. 

There is something revealing about the way educated, intelligent people discuss the question of phones and social media for young people. The debate has all the hallmarks of a moral panic with decent intentions: compelling anecdotes, contested statistics, newspaper columns reaching for the language of public health emergency, politicians reaching for the language of newspaper columns. Jonathan Haidt publishes The Anxious Generation. Rishi Sunak issues guidance on school phone bans. The Online Safety Act crawls through Parliament. Everyone agrees something must be done, and disagrees vigorously about what.

I’ve written before about the smartphone question and the genuine complexity underneath the apparent simplicity of “just ban them” (see The Digital Family, The Myth of Digital Disruption? and Navigating the Push-Pull Continuum of Technology). I am not going to regurgitate that here. What I want to do instead is pull back from the device itself and ask a harder, less comfortable question - one that the phone ban debate almost never touches.

The question is not whether young people should have phones. It is this:

In the absence of any deliberate institutional decision about epistemic authority, who or what is actually deciding what your staff, your students, and your communities believe to be true? 

And if the answer is “an algorithm optimised for engagement,” what exactly have you done about it?

AI Generated Image. Midjourney Prompt: algorithm optimised for engagement

The debate we’re having (and the one we’re not)

The phone ban conversation operates almost entirely within two frameworks: mental health and safeguarding. Both are legitimate. The evidence on adolescent mental health and heavy social media use is contested in its causal claims but striking in its correlations - particularly for teenage girls. Safeguarding concerns around harmful content, unsolicited imagery, and predatory contact are real and documented.

But there is a question neither framework asks. It is not “is this device damaging children’s wellbeing?” It is not even “is this content appropriate?” It is “what is this system teaching people to believe, and on whose authority?”

That is an epistemic question. And the reason institutions - schools, universities, NHS trusts, councils, businesses - are so poorly equipped to answer it is that they have never been asked to think epistemically about technology before. They have been asked to think about safeguarding. About digital literacy, usually in the thinnest sense of “spot a phishing email.” About screen time. Not about the architecture of belief.

That is the gap this piece is concerned with.

AI Generated Image. Midjourney Prompt: filter bubbles with social media icons inside each bubble

Pariser’s filter bubble, properly understood

In 2011, the internet activist Eli Pariser published The Filter Bubble, a book that named something people had dimly sensed but not clearly articulated. He had noticed, almost accidentally, that his Facebook feed had quietly stopped showing him posts from his conservative friends. Not because he had unfollowed them. Because the algorithm had learned, from his clicking behaviour, that he preferred not to engage with their content - and had simply removed it, without asking, without telling him, and without any obvious mechanism for him to notice or reverse the decision.

Pariser’s key observation was that this represented a fundamental shift in the architecture of information. The previous model - newspapers, broadcast television, the BBC - was built on editorial judgement. Someone decided what mattered, what context it needed, and what the public ought to know. That model was not neutral; it embedded all manner of assumptions about class, race, geography, and cultural value. But it was, crucially, accountable. You could argue with the editor. You could cancel the subscription. You could know, at least in principle, that a human being had made a choice.

The new model replaced editorial judgement with personalisation. The logic was captured in a line Pariser made famous - spoken not by him but by Facebook’s founder:

“A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa.” Mark Zuckerberg, quoted in The Filter Bubble

The algorithm does not ask what you should know. It asks what you are most likely to click on. These are not the same question. They have never been the same question. But in the decade and more since Pariser wrote those words, the conflation of the two has become so total that most people - including most institutional leaders - have stopped noticing the difference.

AI Generated Image. Midjourney Prompt: clickbait

The filter bubble is not a side-effect of the attention economy. It is the product. Every platform that monetises attention is structurally incentivised to show you more of what you already believe, already like, already agree with. Not because they are malicious, but because disagreement, friction, and challenge are, empirically, worse for engagement metrics than confirmation and comfort.
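
To make that structural claim concrete, here is a deliberately minimal sketch in Python - with invented names like predicted_engagement and rank_feed, and made-up multipliers, not any platform’s actual code - of what an engagement-optimised ranker does. The point is what the objective function omits.

```python
# A minimal, illustrative sketch of engagement-optimised ranking.
# Names and multipliers are invented; no real platform works this simply.
from dataclasses import dataclass

@dataclass
class Item:
    topic: str
    challenges_user: bool  # conflicts with what the user already believes?

def predicted_engagement(item: Item, click_history: dict[str, float]) -> float:
    """Estimate click probability from past clicks on the same topic."""
    base = click_history.get(item.topic, 0.05)  # cold-start default
    # Assumed here: friction depresses engagement, confirmation inflates it.
    return base * (0.5 if item.challenges_user else 1.2)

def rank_feed(items: list[Item], click_history: dict[str, float]) -> list[Item]:
    # Note what is absent: no term for accuracy, importance, or what the
    # user should know. The only objective is predicted engagement.
    return sorted(items, key=lambda i: predicted_engagement(i, click_history),
                  reverse=True)

history = {"politics": 0.8, "health": 0.3}
feed = rank_feed([Item("politics", False), Item("politics", True),
                  Item("health", False)], history)
print([(i.topic, i.challenges_user) for i in feed])
# [('politics', False), ('politics', True), ('health', False)]
```

Nothing in the scoring asks whether any of it is true; the ranking rewards whatever the user has already shown a taste for.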

From gatekeeping to optimisation - the institutional blind spot

For most of the twentieth century, the question of epistemic authority - who gets to say what counts as knowledge and why - was answered by institutions. In education, this meant the curriculum, the teacher, the examination board, the textbook publisher, the library. In public life, it meant the BBC, the broadsheet press, the learned society, the professional association. None of these were perfect arbiters. All embedded their own biases. But they were, in the sociologist’s sense, accountable - subject to scrutiny, challenge, correction, and reform.

AI Generated Image. Midjourney Prompt: gatekeeper with a sword guarding libraries of information

Cathy O’Neil’s Weapons of Math Destruction (2016) offers the sharpest framework for understanding what replaced them. O’Neil, a mathematician who spent time on Wall Street before becoming one of its most rigorous critics, describes a class of algorithms characterised by three features: they are opaque (you cannot see how they work), they are self-reinforcing (they amplify existing patterns rather than correcting them), and they operate at scale (affecting millions of people simultaneously). Her examples range from recidivism-prediction software used in criminal sentencing to the university rankings that distort institutional behaviour across the entire higher education sector.

“The privileged, we’ll see time and again, are processed more by people, the masses by machines.” Cathy O’Neil

The platforms governing what young people - and, let us not pretend otherwise, most adults - see, read, share, and believe are precisely this kind of system. They are opaque: no user can see the weighting behind their feed. They are self-reinforcing: the more you engage with a type of content, the more of it you receive, regardless of whether that content is accurate, harmful, or simply wrong. And they operate at extraordinary scale: TikTok alone has over a billion active users.

What makes this specifically an institutional problem, rather than merely a social one, is the question of what O’Neil calls the feedback loop. When an algorithm shows a young person content that confirms a particular belief about health, identity, politics, or the world, and that person engages with it, the algorithm intensifies the signal. The young person does not know this is happening. Their institution - the school, the university, the employer - almost certainly has no view on it at all.
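
O’Neil’s loop can be written down, as a thought experiment, in a dozen lines. The update rules below - the 1.05 and 0.97 multipliers, the slight engagement bias towards one topic - are invented for illustration; this is a sketch of the dynamic, not a model of any real recommender.

```python
# A toy simulation of the feedback loop: engagement intensifies the signal.
# All numbers are assumptions chosen to make the dynamic visible.
import random

def simulate_feed(steps: int = 1000, seed: int = 0) -> dict[str, float]:
    rng = random.Random(seed)
    topics = ["health", "identity", "politics", "sport", "science"]
    weights = {t: 1.0 for t in topics}  # start with a uniform feed
    for _ in range(steps):
        total = sum(weights.values())
        shown = rng.choices(topics, [weights[t] / total for t in topics])[0]
        # The user engages marginally more with one topic than the rest.
        engaged = rng.random() < (0.6 if shown == "health" else 0.4)
        weights[shown] *= 1.05 if engaged else 0.97
    total = sum(weights.values())
    return {t: round(weights[t] / total, 3) for t in topics}

print(simulate_feed())
# Under these assumed rules, "health" climbs well past its 0.2 starting
# share: a small behavioural bias compounds into a dominant feed.
```

The person inside the loop experiences none of this as a decision. The feed simply, gradually, becomes the world.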

The philosopher Miranda Fricker introduced the concept of “epistemic injustice” - the idea that people can be wronged specifically as knowers, in their capacity to give and receive knowledge. I explored this in the context of algorithmic decision-making in The Missing Link. But there is a dimension that extends beyond what Fricker mapped. The algorithmic system does not merely devalue certain people’s testimony. It actively shapes what people believe is available to testify about. It doesn’t just silence voices; it narrows the territory in which voices can be heard.

This is optimisation. It is not the same as education. The inability to distinguish between the two is the institutional blind spot.

The shared knowledge problem

Schools, universities, and most organisations are built on a premise that is so embedded as to be almost invisible: that their members share a common knowledge base. It’s not about agreement - disagreement is the lifeblood of genuine intellectual institutions - but a common set of texts, frameworks, and epistemic standards. You can argue about the interpretation of a poem if you have both read the poem. You can debate a historical claim if you share some understanding of what counts as historical evidence. You can have a professional disagreement about clinical practice if you are drawing on a common body of research literature.

The philosopher Philip Kitcher, in his work on the social structure of scientific knowledge, argues that a healthy epistemic community requires a well-ordered relationship between expert and public knowledge - not that everyone knows the same things, but that there are shared standards for what makes knowledge legitimate, and shared mechanisms for revising it. This is not an abstract philosophical nicety. It is the functional precondition for institutions to work.

“The rational layman will recognize [sic] that, in matters about which there is good reason to believe that there is expert opinion, he ought (methodologically) not to make up his own mind.” Philip Kitcher

What algorithmic personalisation does, at scale and over time, is disaggregate this. When every student in a classroom - and every member of a staff team, and every governor, and every parent - is operating inside their own curated epistemic environment, the shared territory for argument, evidence, and revision shrinks. Not because anyone has chosen this. Because the architecture of the platforms they use every day is designed to individualise rather than collectivise knowledge.
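
The shrinkage is easy to illustrate in the same toy terms. In the sketch below - all names and numbers invented - three pupils receive a personalised top-three feed ranked by their own engagement history, and the intersection, the material they could argue about together, comes out empty.

```python
# Illustrative only: how much shared territory survives personalisation?
def top_k(history: dict[str, float], catalogue: list[str], k: int = 3) -> set[str]:
    """One user's personalised top-k topics, ranked by their own history."""
    return set(sorted(catalogue, key=lambda t: history.get(t, 0.0),
                      reverse=True)[:k])

catalogue = ["poem", "election", "vaccine", "football", "crypto",
             "climate", "celebrity", "war"]

# Three pupils with only mildly different starting interests (made up).
histories = [
    {"poem": 0.2, "election": 0.5, "vaccine": 0.4, "football": 0.9},
    {"celebrity": 0.8, "crypto": 0.6, "election": 0.3},
    {"climate": 0.7, "war": 0.6, "vaccine": 0.5},
]

feeds = [top_k(h, catalogue) for h in histories]
print(set.intersection(*feeds))
# set() - no common text left to argue about, and no one chose that.
```

The same classroom, the same curriculum, and three non-overlapping epistemic environments before the first lesson of the day.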

AI Generated Image. Midjourney Prompt: individualised knowledge siloes

This is the dimension the phone ban debate almost never reaches. The guidance on school phone bans was at least implicitly gesturing at a problem about the school as a shared environment. But the framing was behavioural - phones as distraction, as social harm, as safeguarding risk. The epistemic dimension - what does it mean for a school to be a knowledge community when every pupil’s knowledge is being individually curated by a commercial algorithm - was never made explicit.

The irony is pointed. The institution that bans phones during the school day and then sends its pupils home to five unmediated hours of algorithmically curated content has not addressed the epistemic question. It has merely relocated it.

What institutions actually control (and mostly ignore)

There is a version of this argument that ends in despair - or, worse, in a call for platform regulation so comprehensive that it becomes a substitute for institutional thinking. That is not the argument. Platform regulation matters, but it is slow, contested, and perpetually outpaced by the thing it is trying to regulate. The Online Safety Act, whatever its eventual impact, is not going to answer the epistemic question for any school or organisation.

What leaders actually control - and largely ignore - is their institution’s epistemic posture. This is not digital literacy in the thin sense, although that matters. It is the deeper question of whether the institution has any coherent account of what makes knowledge authoritative, and whether it shares that account with its community.

Some schools are beginning to treat media provenance - where does this claim come from, who made it, what were they optimising for - as a subject of genuine curriculum attention, rather than a bolt-on to PSHE. Although its work has now been suspended, the Stanford History Education Group’s civic online reasoning programme documented some of the most effective approaches to helping young people evaluate digital information. Its conclusion was that the key is not teaching them to evaluate the content of a source in isolation, but to do what professional fact-checkers do: leave the page quickly, search laterally, and ask what other sources say about this source. It is a habit of humility, not a checklist.

Some organisations are starting to ask, at a structural level, what the information diet of their staff actually looks like - not in a surveillance sense, but in the sense of deliberately creating conditions for exposure to heterodox views, conflicting evidence, and sources outside the algorithmic default. This is essentially what a well-curated reading programme, a decent staff CPD library, or an intellectually serious meeting culture has always been. The novelty is recognising that the default, in the absence of deliberate institutional intervention, is now the algorithm’s choice rather than anyone’s.

The question leaders rarely ask is not “what are our policies on phone use?” but “do we have any epistemic strategy at all?” The former is a behavioural management question. The latter is an institutional identity question. The distinction is not trivial.

AI Generated Image. Midjourney Prompt: information diet

Authority in the age of the algorithm

The philosopher of technology John Danaher - whose work on algocracy, or rule by algorithm, I drew on in previous articles - has argued that the deepest problem with algorithmic systems is not that they make bad decisions, but that they make decisions for reasons that are structurally inaccessible to the people they affect. You cannot argue with the feed. You cannot ask it to justify itself. You cannot appeal to a higher authority within the system, because the system does not work that way.

“The debate about algorithmic governance (or as I prefer ‘algocracy’) has been gathering pace over the past couple of years. As computer-coded algorithms become ever more woven into the fabric of economic and political life, and as the network of data-collecting devices that feed these algorithms grows, we can expect that pace to quicken.” John Danaher

This creates a peculiar problem for institutions whose legitimacy depends on accountable authority. A teacher can be questioned. A textbook can be challenged. An examination board can be scrutinised, campaigned against, reformed. An algorithm that has decided, on the basis of your clicking behaviour and demographic profile, what you are likely to believe and what you are therefore likely to be shown next - that is not accountable in any of these senses. It is, in Danaher’s framing, a form of governance without government.

AI Generated Image. Midjourney Prompt: unstoppable algorithms

The question for educational and organisational leaders is not whether to declare war on the algorithm. That is a category error. The question is whether they claim any countervailing epistemic authority - or whether they cede the field entirely to systems optimised for engagement rather than understanding.

Claiming epistemic authority is not the same as being authoritarian. It does not mean dictating what people believe. It means being explicit about what standards of evidence and argument the institution holds itself to, modelling those standards in practice, and creating conditions in which members of the community can develop the habit of asking, before they accept a claim: who made this, and what were they optimising for?

The phone ban debate will continue. It will produce more guidance, more research, more moral urgency, and probably some legislation. Some of that may be genuinely useful. But none of it will answer the question that matters most for institutions: in a world where the default epistemic authority is a commercial algorithm, what authority do you claim? And what are you doing to make it worth claiming?

Key Takeaways

  1. The phone ban debate addresses symptoms rather than causes - the real issue is epistemic, not behavioural, and banning devices during the school day relocates the problem rather than resolving it.
  2. Algorithmic personalisation is not a side-effect of social media platforms; it is the product - systems optimised for engagement are structurally incapable of being optimised simultaneously for understanding.
  3. Knowing about the filter bubble has not fixed it. Eli Pariser’s concept, now well over a decade old, has proved more structurally consequential than most institutional leaders have absorbed - the shift from editorial accountability to algorithmic optimisation changed the architecture of belief, not just the volume of information.
  4. Institutions are built on a shared epistemic commons - common texts, common standards of evidence, common frameworks for argument - and algorithmic personalisation disaggregates this commons at scale without anyone choosing for it to happen.
  5. Accuracy is not the metric; engagement is. What Cathy O’Neil calls the feedback loop in algorithmic systems means that misinformation, extremism, and factual error are not aberrations to be filtered out; they are predictable outputs of an optimisation process built on that metric.
  6. Leaders who want to address the epistemic challenge need to be explicit about what makes knowledge authoritative - asking not “what is our phone policy?” but “what is our epistemic strategy?” - then modelling that standard in practice and teaching the habit of asking who made a claim and what they were optimising for.

The algorithm does not know what your institution values. It does not know what your students need to understand. It knows what they clicked on last Tuesday, and it will use that to determine what they see next. The question of whether you - as a leader, an educator, an employer - have any meaningful counter-offer to that is, in the end, not a digital question at all. It is a question about whether you believe your institution stands for something worth knowing, and whether you are prepared to say so clearly enough that a seventeen-year-old’s feed cannot drown it out.
