Your AI assistant has declined three meeting requests, rescheduled your dentist appointment, and replied to six emails whilst you were making tea. You scan through its decisions. Most look sensible. One feels slightly off. At what point did you stop using a tool and start employing an assistant?
We treat AI as another productivity enhancement. But Yuval Noah Harari writes in Nexus:
“AI is the first technology in history that can make decisions and create new ideas by itself. All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage remained in our hands. Knives and bombs do not themselves decide whom to kill. They are dumb tools, lacking the intelligence necessary to process information and make independent decisions. In contrast, AI can process information by itself, and thereby replace humans in decision-making. AI isn’t a tool; it’s an agent.” Yuval Noah Harari
This categorical shift matters. Tools amplify what we choose to do. Agents interpret what we mean and decide what should happen next.

When Procedures Become Judgements
Computer scientist Joseph Weizenbaum observed in the 1970s that computers “mechanise the already mechanical.” They automated repetitive processes with clearly defined steps. A spreadsheet calculates according to your formulae. Spell-check flags errors according to rules. They execute without interpreting.
Philosopher Gilbert Ryle distinguished between “knowing how” (procedural knowledge) and “knowing that” (factual knowledge). Traditional technologies embody “knowing how” because they follow procedures. A hammer knows how to drive nails. A calculator knows how to perform arithmetic. This distinction was a staple of my philosophy of religion classes back in the day.
AI systems increasingly demonstrate “knowing that” - they possess information and reason about it. This lets them operate in domains previously reserved for human judgement. They don’t just follow instructions; they interpret objectives, make contextual decisions, adapt their approaches based on outcomes.
One way to see the difference is to compare spell-check with AI rewriting your email “in a more professional tone.” Spell-check flags errors you can understand and contest. But when AI rewrites your email, whose voice remains? The system has judged tone, formality, phrasing. It interpreted what you meant and decided what you should have said.
Daniel Dennett calls this “competence without comprehension.” AI systems demonstrate remarkable competence across domains without the comprehension we associate with human expertise. A radiologist understands what a lung is, how it functions, why patterns indicate disease. An AI system trained on radiology images may surpass the radiologist in diagnostic accuracy without possessing any contextual understanding. That is why many radiology departments now use AI diagnostics alongside human interpretation and oversight. Jensen Huang points out that the number of radiologists in the USA has actually increased in recent years alongside AI rather than decreased (see his interview on Joe Rogan’s podcast below).
Jensen Huang on JRE #2422
Martin Heidegger distinguished between technology as “ready-to-hand” (tools that disappear into use) and “present-at-hand” (objects we contemplate separately). AI fits neither category. It acts on our behalf yet remains fundamentally alien to us.
Philosophy Versus Practice
Some argue that current AI systems lack “true” agency. They don’t display self-generated intent or contextual awareness. They can’t refuse or redefine goals. They merely execute within parameters, making them sophisticated automation rather than genuine agents. This objection contains truth but misses the urgent point. The philosophical debate about whether AI possesses genuine intentionality or consciousness is fascinating. The practical reality is more pressing.
Functional agency matters more than metaphysical agency. Systems that act autonomously, produce outcomes, trigger consequences without a human in the loop at every step - these systems behave like agents regardless of their inner experience.

The idea of parameters is worth exploring. Current AI systems operate within constraints defined by their creators. But so do all agents, including humans. I operate within biological parameters, cultural constraints, legal boundaries, professional expectations. Parameters don’t negate agency; they define its scope. The real question isn’t whether AI systems possess some mystical property of “true agency.” It’s whether they behave like agents in practice. Increasingly, they do. They interpret ambiguous instructions. They make contextual judgements. They produce novel outputs that couldn’t have been directly predicted from their training. They adapt strategies based on feedback. Their decisions carry real-world consequences.
When an AI system screens your job application, recommends your medical treatment, determines your creditworthiness, whether it has any subjective experience doesn’t matter. What matters is that it made a decision affecting your life, and you have limited recourse to challenge or understand that decision.
This is the sociology of delegation trumping the philosophy of intent. We’re living with systems that behave like agents, making decisions that cascade through society in ways we struggle to comprehend or control. Our governance structures, ethical frameworks, accountability mechanisms need to respond to this functional reality, not wait for philosophical consensus about machine consciousness.
Harari goes further to say,
“Even at the present moment, in the embryonic stage of the AI revolution, computers already make decisions about us - whether to give us a mortgage, to hire us for a job, to send us to prison. This trend will only increase and accelerate, making it more difficult to understand our own lives.” Yuval Noah Harari

When Nobody’s Responsible
Your hammer misses the nail and hits your thumb. The failure is unambiguously yours. Your calculator gives you the wrong answer because you entered the wrong formula. You’re accountable. Tools extend responsibility without diluting it. But when an AI agent declines that meeting, screens out that job applicant, flags that loan application as high-risk, who’s responsible when it goes wrong?
Philosopher John Danaher calls this “algocracy” - rule by algorithm. Decisions affecting our lives increasingly emerge from processes we cannot fully scrutinise or challenge.
"I use the term ‘algocracy’ to describe a particular kind of governance system, one which is organised and structured on the basis of computer-programmed algorithms. To be more precise, I use it to describe a system in which algorithms are used to collect, collate and organise the data upon which decisions are typically made, and to assist in how that data is processed and communicated through the relevant governance system. In doing so, the algorithms structure and constrain the ways in which humans within those systems interact with one another, the relevant data, and the broader community affected by those systems. This can be done by algorithms packaging and organizing [sic]the information in a particular way or even by algorithms forcing changes in the structure of the physical environment in which the humans operate (Kitchin and Dodge 2011). Such systems may be automated or semi-automated, or may retain human supervision and input." John Danaher
Many modern AI systems operate as black boxes. When an AI makes a decision, we often cannot explain how it arrived at that conclusion. It has discovered correlations in data that may be invisible to human observers, correlations that may reflect historical biases or spurious patterns rather than genuine causal relationships.
Traditional decision-making contexts expect justifications that appeal to shared values and logical reasoning. If you’re denied a loan by a human banker, you might receive an explanation referencing specific concerns about your financial situation. You can engage with these justifications, offer additional information, challenge faulty assumptions. When an AI system denies you a loan, the decision emerges from complex statistical correlations across thousands of variables. The system cannot provide a comprehensible explanation because it doesn’t operate through explicit reasoning chains humans can follow. It has learned patterns from data but cannot articulate why those patterns should determine your access to capital.

The UK witnessed catastrophic consequences with the Post Office Horizon scandal, brilliantly portrayed in the ITV drama Mr Bates vs The Post Office. Hundreds of sub-postmasters were prosecuted and bankrupted, and some were imprisoned, on the basis of faulty accounting software. “Computer says no” became an unchallengeable verdict, trumping lived experience, personal testimony, common sense. The computer was wrong. Lives were destroyed before this was acknowledged.
Philosopher Miranda Fricker calls this “testimonial injustice” - the devaluation of a person’s ability to give an account of their own experiences. When an AI system flags someone as a credit risk despite their good payment history, or as a potential threat despite their law-abiding behaviour, the individual’s testimony about their character and intentions carries diminishing weight against algorithmic predictions.
The asymmetry troubles me. Human decision-makers can be questioned, their reasoning examined, their biases challenged. Algorithmic decision-makers operate in relative opacity, their conclusions presented with an air of objective authority that masks the subjective choices embedded in their design, training data, deployment.
This creates not just an accountability gap but a fundamental challenge to how we understand responsibility. If an AI system makes a consequential decision based on opaque reasoning, who bears responsibility? The programmer? The organisation that deployed it? The data scientists who trained it? The executives who approved its use? Usually all of them, partially, and none of them sufficiently.

Living With Agents
The technology exists. The delegation is happening. The genie isn’t going back in the bottle. So how do we live with something that possesses capability without consciousness, competence without comprehension, agency without accountability?
We need “agency literacy”: the capacity to recognise when we’re using a tool versus delegating to an agent. This determines the ethical framework we apply, the accountability structures we demand, the degree of oversight we require. When you use a tool, you remain the locus of decision-making. You bear full responsibility for outcomes. You can explain your reasoning. You can be held accountable. When you delegate to an agent - whether human or artificial - responsibility becomes distributed. The agent exercises discretion. Its judgements matter. Its errors are partially its own.
This distinction has always existed with human agents. Employment law, professional standards, organisational structures have evolved over centuries to manage delegation’s complexities. But agentic AI presents new challenges. Unlike human employees, AI agents have no professional training, no ethical codes, no fear of consequences (at least not in the way we understand those three things). They cannot be sued, sanctioned, held morally accountable (unless we hold their human creators liable). They operate at speeds and scales that make meaningful human oversight impossible in many contexts.
Some decisions should remain human not because humans are infallibly better but because making them, wrestling with competing values, taking responsibility for outcomes, constitutes human dignity. Philosopher Hans Jonas argued that modern technology calls for an “ethics of the future”: a responsibility to consider how our actions affect not just contemporary humans but distant future generations. AI magnifies this responsibility enormously.
“Act so that the effects of your action are compatible with the permanence of genuine human life.” Hans Jonas
We probably need to design for reversibility. Agents should make decisions you can undo, not ones that cascade irreversibly. A meeting declined by AI can be rescheduled. A loan application rejected by an algorithm can damage a credit rating and close off opportunities that are not easily put right. Stakes matter.
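To make that concrete, here’s a minimal sketch of what “reversible by default” might look like in code. It assumes a hypothetical assistant that drafts actions rather than executing them straight away; the names (PendingDecision, commit, revert) are illustrative, not any real product’s API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Callable, Optional


@dataclass
class PendingDecision:
    """An agent's decision held in a review window before it takes effect."""
    description: str
    execute: Callable[[], None]          # what happens if nobody objects
    undo: Optional[Callable[[], None]]   # None means the action cannot be unwound
    review_until: datetime = field(
        default_factory=lambda: datetime.now() + timedelta(hours=24)
    )
    committed: bool = False

    def commit(self) -> None:
        """Act only after the review window has passed, and never irreversibly."""
        if datetime.now() < self.review_until:
            raise RuntimeError("Still inside the human review window.")
        if self.undo is None:
            raise RuntimeError("Irreversible action: needs explicit human sign-off.")
        self.execute()
        self.committed = True

    def revert(self) -> None:
        """Let a human unwind the decision after the fact."""
        if self.committed:
            self.undo()
            self.committed = False


# A declined meeting is reversible, so the agent may handle it alone.
decline = PendingDecision(
    description="Decline the 3pm project review",
    execute=lambda: print("Meeting declined"),
    undo=lambda: print("Meeting re-accepted"),
)

# A loan rejection would carry undo=None, so commit() refuses to run
# without a human stepping in.
```

The design choice the sketch encodes: reversibility is the default, and irreversibility is what escalates a decision to a human, not the other way round.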
We need to demand explicability. If you cannot explain how a decision was reached, you cannot take responsibility for it. Systems operating as black boxes should not make consequential decisions about human lives. This doesn’t mean AI systems must mimic human reasoning, since their value often comes from discovering patterns humans miss. But it does mean we need robust methods for understanding, auditing, challenging their outputs.
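What that could look like structurally, again only as a sketch with hypothetical names (DecisionRecord, act_on), is a rule that no consequential decision gets acted on unless it carries the inputs it used, the factors that weighed most heavily, and a route for the affected person to challenge it.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional


@dataclass(frozen=True)
class DecisionRecord:
    """The audit trail a consequential decision must carry before anyone acts on it."""
    subject_id: str                # whom the decision affects
    decision: str                  # e.g. "loan_declined"
    model_version: str             # exactly which system produced it
    inputs_used: dict              # the data the system actually saw
    top_factors: List[str]         # the features that weighed most heavily
    appeal_route: str              # how the affected person can challenge it
    human_reviewer: Optional[str]  # who signed it off, if anyone
    timestamp: datetime


def act_on(record: DecisionRecord) -> None:
    """Refuse to act on any decision that cannot be explained or appealed."""
    if not record.top_factors:
        raise ValueError("No explanation available: route this decision to a human.")
    if not record.appeal_route:
        raise ValueError("No appeal route: this decision must not be automated.")
    # The downstream action (send the letter, update the account) happens only
    # once the record is complete enough to be audited and challenged later.
```

None of this makes the model itself transparent; it makes the decision auditable and contestable, which is the part accountability actually requires.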
We need to question the defaults. Just because an AI can act autonomously doesn’t mean it should. The current trajectory is towards maximal delegation: letting AI systems make more decisions with less oversight, in the name of efficiency and scale. But efficiency isn’t the only value. Not every decision benefits from automation.
We need to cultivate discernment about what we’re willing to delegate. Some tasks are ripe for automation - the repetitive, the clearly defined, the low-stakes. Others demand human judgement not because humans are better (they often aren’t) but because human responsibility matters intrinsically. Every delegation of decision-making authority is also a delegation of power. When we allow AI systems to determine who gets jobs, loans, medical care, freedom, we’re not making technical choices about efficiency. We’re making political choices about who shapes society and how.
These systems embody values whether we acknowledge it or not. The data they’re trained on reflects historical patterns, including historical injustices. The objectives they optimise reflect someone’s definition of what matters. The contexts they ignore reveal assumptions about what’s relevant. These aren’t neutral technical decisions - they’re value-laden choices that should be subject to democratic deliberation, not left solely to technologists and corporate interests.

The Enchanted Broom
Harari warns again,
“Can we trust computer algorithms to make wise decisions and create a better world? That’s a much bigger gamble than trusting an enchanted broom to fetch water. And it is more than just human lives we are gambling on.” Yuval Noah Harari
He’s referring to Goethe’s 1797 poem The Sorcerer’s Apprentice, made more famous by the Mickey Mouse segment in Disney’s Fantasia. The apprentice enchants a broom to fetch water but lacks the knowledge to stop it. The broom continues mindlessly, flooding the room. The apprentice splits the broom in two. Now two brooms fetch water with mechanical determination. Once you’ve delegated agency to something that lacks judgement, stopping it requires capabilities you may not possess.
We’re at an inflection point. AI systems are already making decisions that shape lives, allocate opportunities, structure society. The trend is towards more autonomy, more delegation, more consequential decisions made by systems we don’t fully understand. We face a choice. Will we sleepwalk into an agentic future, treating these systems as mere tools even as they increasingly function as agents? Will we delegate authority without demanding accountability? Will we optimise for efficiency without considering what we lose? Or will we cultivate the literacy, institutions and wisdom necessary to shape this transition deliberately? Will we demand transparency where it matters? Will we preserve human agency where it’s essential? Will we design systems that enhance rather than diminish human dignity?
This isn’t just a technical challenge or a regulatory puzzle. It’s a civilisational question about what kind of future we want to build and what we’re willing to sacrifice to get there. The enchanted broom is already fetching water. Whether we still remember the spell to make it stop remains to be seen.
Key Takeaways
Recognise the category shift. Agentic AI isn’t simply better automation - it’s delegation without the established frameworks of employment law, professional standards, moral accountability. The distinction between using a tool and employing an agent fundamentally changes how we think about responsibility and oversight.
Functional agency matters more than metaphysical agency. The philosophical debate about whether AI possesses “true” consciousness is less urgent than the practical reality that these systems already behave like agents, making decisions that carry real-world consequences. Our governance must respond to functional reality.
Demand explicability. If you cannot explain how a decision was reached, you cannot take responsibility for it. I’ll say it again: systems operating as black boxes should not make consequential decisions about human lives. Algorithmic opacity creates accountability gaps that undermine both justice and human dignity.
Design for reversibility. Agents should make decisions you can undo, not ones that cascade irreversibly. Build in human checkpoints before decisions become entrenched. Not all automation serves human flourishing.
Question the defaults. Just because an AI can act autonomously doesn’t mean it should. Every delegation of decision-making authority is also a delegation of power. Be deliberate about which decisions warrant delegation and which demand human judgement.
Cultivate agency literacy. Develop the capacity to recognise when you’re using a tool versus delegating to an agent. This determines the ethical framework, accountability structures, oversight required. This literacy isn’t technical; it’s civic and ethical, essential for everyone navigating an increasingly automated world.
Some decisions should remain human not because humans are infallibly better at making them but because making them is itself constitutive of human dignity. Efficiency isn’t the only value that matters, as we’ve said. We’ve worked for years to make work easier. If billions of people who find purpose in their work end up unemployed, will we have got what we wanted? We should be careful what we wish for: we might get it, but not in the way we thought we would.