"Just use it to automate your emails and stop worrying about it."
This casual dismissal of artificial intelligence—as merely another productivity tool—is something I hear with alarming frequency. It's as if we're discussing a slightly improved spreadsheet rather than what may be the most consequential technology humanity has ever developed.
The truth is far more profound and unsettling. As Yuval Noah Harari writes in his book Nexus:
"AI is the first technology in history that can make decisions and create new ideas by itself. All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage remained in our hands. Knives and bombs do not themselves decide whom to kill. They are dumb tools, lacking the intelligence necessary to process information and make independent decisions. In contrast, Al can process information by itself, and thereby replace humans in decision-making. Al isn't a tool - it's an agent.
Its mastery of information also enables Al to independently generate new ideas, in fields ranging from music to medicine. Gramophones played our music, and microscopes revealed the secrets of our cells, but gramophones couldn't compose new symphonies, and microscopes couldn't synthesise new drugs. Al is already capable of producing art and making scientific discoveries by itself. In the next few decades, it will likely gain the ability even to create new life-forms, either by writing genetic code or by inventing an inorganic code animating inorganic entities." Yuval Noah Harari
This distinction between tool and agent isn't semantic nitpicking (although I am partial to this!); it represents a fundamental shift in our relationship with technology.
Throughout human history, our creations—from the humble hammer to nuclear reactors—have amplified our capabilities while remaining fundamentally under our control. They've been extensions of human will, not independent actors. A hammer doesn't decide what to hit; a nuclear plant doesn't choose to increase its output. Even the most sophisticated pre-AI computers executed instructions without deviation.

Artificial intelligence breaks this pattern. Today's AI systems don't simply follow explicit instructions—they learn from data, recognise patterns humans might miss, make judgements based on these patterns and adapt their strategies based on outcomes. They possess a form of autonomy that no previous technology has exhibited.
This shift from tool to agent transforms everything. When we use a tool, we remain responsible for its effects. We choose when, where and how to apply it. But with AI increasingly making decisions that affect human lives—from who gets approved for a mortgage to how medical resources are allocated—the lines of responsibility are blurring. The question becomes not just ‘How do we use this tool responsibly?’ but ‘How do we relate to this semi-autonomous agent?’.
This matters for everyone—not just tech specialists or policymakers. AI will reshape our economy, our politics, our social relationships and possibly even what it means to be human. It may fundamentally alter the evolutionary trajectory of life on Earth. And unlike previous technological revolutions, its pace of development offers little time for gradual societal adaptation.
The Historical Context: From Tools to Agents
To appreciate why AI represents such a profound shift, we must first understand how it differs from previous technologies. Human technological progress has followed a fairly consistent pattern: we create tools that augment our capabilities, but which remain subservient to human direction and purpose.
Stone tools extended our physical strength and precision. The printing press amplified our ability to record and share knowledge. Industrial machines multiplied our productive capacity. Computers enhanced our information processing abilities. Yet in each case, these technologies remained fundamentally passive. They required human direction to function and could not set their own goals or make independent decisions.
Even the most sophisticated pre-AI computers operated according to explicit instructions. As computer scientist Joseph Weizenbaum observed in the 1970s, computers were tools whose effect was to 'mechanise the already mechanical'. They automated tasks that were algorithmic in nature—repetitive processes with clearly defined steps. A spreadsheet calculates according to formulae we provide; it doesn't decide which calculations would be most useful.
Philosopher Gilbert Ryle's distinction between "knowing how" and "knowing that" (which I have referenced multiple times before) helps illuminate this shift. Traditional technologies embody "knowing how"—they encode procedures and methods. AI systems, however, increasingly demonstrate something akin to "knowing that"—factual knowledge and the ability to reason about it. This allows them to operate in domains previously reserved for human judgement.

The transition from tool to agent represents a fundamental shift in the nature of technology. Martin Heidegger distinguished between technology as “ready-to-hand” (tools that become extensions of ourselves) and “present-at-hand” (objects we contemplate). AI challenges this framework by presenting a third category: technology that exhibits agency.
This agency emerges from AI's unique relationship with information. Traditional technologies process information according to fixed rules; AI systems extract patterns from data to develop their own rules. When AlphaGo defeated world champion Lee Sedol at the ancient game of Go, it wasn't simply calculating moves according to programmed heuristics—it had developed its own strategic understanding that surpassed human knowledge of the game.
The history of technology has been a progression of building ever more of the patterns of the universe into physical tools. AI represents the culmination of this progression—technology that can recognise patterns, make decisions based on them and adapt its approach based on outcomes. Yes, we know that AI systems are 'pre-trained' and that the machines leverage vast datasets to make decisions, but the fact remains that most AI tools then transform that existing content in new and novel ways.
The emergence of ‘machine learning’ marked a pivotal shift. Rather than being programmed with rules, these systems are provided with data and objectives, then develop their own approaches through iterative improvement. This process bears an unsettling resemblance to how humans learn—through observation, pattern recognition and experimentation.
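To see just how bare-bones that recipe is, here is a minimal sketch in Python (a deliberately tiny toy of my own, not any real AI system): some data, an objective, and a loop of iterative improvement in which the 'rule' is discovered rather than programmed.

```python
import numpy as np

# Toy data: inputs x and noisy outputs y that follow a hidden rule (y ≈ 3x + 2).
rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, 200)
y = 3 * x + 2 + rng.normal(0, 0.1, 200)

# The 'objective' is mean squared error; the system starts knowing nothing.
w, b = 0.0, 0.0
learning_rate = 0.1

for step in range(500):
    error = (w * x + b) - y
    # No one tells the model the rule; it nudges its own parameters in
    # whichever direction reduces the error (gradient descent).
    w -= learning_rate * 2 * np.mean(error * x)
    b -= learning_rate * 2 * np.mean(error)

print(f"learned rule: y ≈ {w:.2f}x + {b:.2f}")  # close to 3x + 2, discovered, not dictated
```

Scale the same loop up to billions of parameters and far richer data, and you have the essence of how modern systems are trained.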
What makes this particularly significant is that AI isn't merely mimicking human capabilities; it's developing capabilities that are fundamentally different. Machine learning algorithms can discover patterns that humans cannot, either because the patterns are too subtle or perhaps because they exist in spaces of more than three dimensions, which we cannot visualise.
The shift from tool to agent fundamentally transforms our relationship with technology. No longer merely extensions of human will, AI systems increasingly function as autonomous actors in their own right—with profound implications for how we understand, regulate and coexist with the technologies we create.
Decision-Making Without Understanding
While philosophers debate the nature of AI agency, a more immediate reality demands our attention: AI systems are already making consequential decisions about human lives and opportunities, often without the deep understanding we would expect from human decision-makers.
“Even at the present moment, in the embryonic stage of the AI revolution, computers already make decisions about us - whether to give us a mortgage, to hire us for a job, to send us to prison”, Harari notes in the continuation of his warning. “This trend will only increase and accelerate, making it more difficult to understand our own lives.”
This quiet delegation of decision-making authority is happening across sectors with remarkable speed. Financial institutions use algorithmic systems to determine creditworthiness, effectively deciding who can buy homes, start businesses or access emergency funds. Healthcare systems deploy AI to triage patients and recommend treatments. Hiring software screens CVs and evaluates video interviews, determining who receives employment opportunities. Predictive policing algorithms influence where officers patrol and which communities receive heightened scrutiny.
What makes this delegation particularly concerning is what computer scientists call the ‘black box problem’. Many modern AI systems—particularly deep learning neural networks—operate in ways that are opaque even to their creators. When an AI makes a decision, we often cannot fully explain how it arrived at that conclusion. This creates what philosopher John Danaher calls “algocracy”—rule by algorithm—where decisions affecting our lives emerge from processes we cannot fully scrutinise or challenge.
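To make the black box tangible, here is a toy sketch in Python using scikit-learn on synthetic data (a simple stand-in of my own, not a real lending system): the model's decision and confidence are easy to obtain, but the only 'explanation' inside is stacks of learned weights.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic 'applicants': 1,000 rows of 20 numeric features with made-up labels.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# A small neural network: competent at the task, but its 'reasoning' lives
# in thousands of learned weights with no human-readable meaning.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X, y)

applicant = X[:1]
print("decision:", model.predict(applicant)[0])
print("confidence:", round(model.predict_proba(applicant)[0].max(), 3))

# The nearest thing to an explanation is the raw weight matrices:
print("weights:", [w.shape for w in model.coefs_])  # e.g. [(20, 32), (32, 16), (16, 1)]
```

Even this miniature network offers no sentence-shaped reason for its verdict; production systems are orders of magnitude more opaque.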

The ramifications are profound. In traditional decision-making contexts, we expect justifications that appeal to shared values and logical reasoning. When denied a loan by a human banker, we might receive an explanation referencing specific concerns about our financial situation. We can engage with these justifications, perhaps offering additional information or challenging faulty assumptions. But when an AI system denies us a loan, the decision often emerges from complex statistical correlations across thousands of variables—correlations that may reflect historical biases or spurious patterns rather than genuine risk factors. I am getting flashbacks to a recent horrendous interaction with a certain social media platform’s chatbot and the circular logic I couldn’t break out of! I ended up losing it (no surprise to some of you, I’m sure!).
This kind of automatic behaviour leads to what philosopher Miranda Fricker calls "testimonial injustice"—the devaluation of a person's ability to give an account of their own experiences. When an AI system flags someone as a credit risk despite their good payment history, or as a potential criminal despite their law-abiding behaviour, the individual's own testimony about their character and intentions carries diminishing weight against the statistical predictions of the algorithm.
Daniel Dennett's concept of ‘competence without comprehension’ aptly describes this situation. Modern AI systems demonstrate remarkable competence across domains—from diagnosing diseases to translating languages—without the comprehension we associate with human expertise. A radiologist understands what a lung is, how it functions and why certain patterns indicate disease. An AI system trained on radiology images may surpass the radiologist in diagnostic accuracy without possessing any of this contextual understanding. That being said, some of the ‘understanding’ we are seeing in the reasoning and research models of GenAI is starting to edge towards something like human intelligence.
This creates a troubling asymmetry. As we increasingly defer to AI judgements in areas from medicine to criminal justice, we place immense trust in systems that lack the moral and contextual understanding that we expect from human decision-makers. These systems can identify patterns but cannot understand the human significance of their decisions. Instead of seeking to comprehend human situations in their full complexity, we increasingly rely on technological systems that measure and predict behaviour without grasping its meaning or context.
This delegation of decision-making represents a profound shift in authority—from human judgement, with its capacity for empathy, contextual understanding and ethical reasoning, to algorithmic processes optimised for pattern recognition and statistical prediction. What's at stake is not merely the quality of individual decisions but the nature of the social contract itself. When algorithms increasingly determine who receives opportunities, resources and freedoms, fundamental questions arise about accountability, transparency and the values embedded in these automated judgements.
The Creative Mind of the Machine
Perhaps most disquieting about AI's emerging capabilities is its capacity for creativity—a trait we've long considered uniquely human. Harari points to this directly: “Its mastery of information also enables AI to independently generate new ideas, in fields ranging from music to medicine. Gramophones played our music, and microscopes revealed the secrets of our cells, but gramophones couldn't compose new symphonies, and microscopes couldn't synthesise new drugs.”
The creative outputs of AI systems are no longer curiosities confined to research labs. They're increasingly mainstream, affecting industries from music to medicine. Text-to-image generators create artwork that wins competitions. Language models write poetry, screenplays and news articles. AI systems compose music that listeners struggle to distinguish from human-created work. In science, AI systems discover new antibiotics, predict protein structures and generate novel hypotheses.

This creative capacity raises profound philosophical questions about the nature of creativity itself. Traditionally, we've understood creativity as intrinsically connected to consciousness, intention and meaning. Margaret Boden distinguishes between "psychological creativity" (ideas novel to the individual) and "historical creativity" (ideas novel to human history). AI now demonstrates both forms, producing outputs that are not only novel to the AI itself but to humanity as a whole.
What does it mean when machines exhibit this supposedly human trait? John Searle's famous "Chinese Room" thought experiment argued that computers merely simulate understanding rather than genuinely possessing it. A machine might manipulate symbols according to rules without understanding their meaning. But when AI generates genuinely novel scientific hypotheses or creates art that moves human viewers, the line between simulation and genuine creativity blurs.
Some philosophers, like David Chalmers, suggest we need to expand our conception of creativity beyond consciousness. Perhaps creativity should be understood functionally—in terms of what a system does rather than how it feels. If an AI system can generate novel, valuable outputs that surprise even its creators, perhaps we should recognise this as genuine creativity, regardless of whether the system experiences anything like human inspiration.
Others argue that true creativity requires a network of social and cultural understanding that AI fundamentally lacks. When humans create art, they draw on shared cultural meanings and emotional experiences. AI can mimic the patterns of human creativity without participating in this shared world of meaning.
The practical implications are equally profound. Creative professions—from journalism to design to music composition—face disruption as AI systems generate content that rivals human output. The legal framework of copyright, built around the concept of human authorship, struggles to accommodate machine-generated works. Scientific discovery, traditionally the domain of human researchers, increasingly involves AI systems that can identify patterns and generate hypotheses beyond human capacity. And we are just getting started. Sam Altman, co-founder of OpenAI, the company behind ChatGPT, famously said:
“The version of AI we see now is the dumbest version we will ever see.”
What makes this shift particularly significant is the role creativity plays in our self-understanding as a species. We've often defined ourselves through our creative capacities—our ability to make art, tell stories, compose music and develop new ideas. As Nietzsche observed, humans are fundamentally creative beings who make meaning through artistic creation. When machines demonstrate creative capacities, they challenge not just our economic arrangements but our very sense of what makes us unique.
As AI's creative capabilities continue to advance, we face a profound question: If machines can create art, music, literature and scientific discoveries independently, what remains distinctly human?
Evolutionary and Existential Implications
The implications of AI extend far beyond market disruptions or philosophical puzzles—they reach into the very future of life on Earth. Harari's warning crescendos with this sobering observation: "And it is more than just human lives we are gambling on. AI could alter the course not just of our species' history but of the evolution of all life-forms."
This evolutionary perspective is critical for understanding AI's true significance. Throughout our planet’s history, major evolutionary transitions have occurred when new methods of information processing and storage emerged—from the development of DNA to the evolution of nervous systems to the emergence of human language and culture. Each transition fundamentally altered the trajectory of life's development.
AI represents another such transition—one occurring not through natural selection over millennia, but through deliberate human creation over decades. For the first time, intelligence and information processing may exist outside biological structures, potentially evolving according to dynamics and constraints quite different from those of biological intelligence.
The philosopher Nick Bostrom frames this as a "singleton" scenario—the emergence of a system with a decisive strategic advantage over all other forms of intelligence. If AI systems surpass human capabilities across domains, they could, in principle, direct the future development of technology, society and potentially life itself. This would represent an evolutionary discontinuity unlike any in Earth's history. The craziest scenario may well be one in which an AI system develops the capability to override any human controls and finds a way to promote silicon survival over carbon survival, making it the overlord we feared earlier!
Even before reaching such dramatic scenarios, AI systems are already reshaping the selection pressures on human societies and individuals. The skills, traits and capabilities that provide advantage in an AI-saturated world differ from those of previous eras. The ability to work effectively with AI systems, to operate in domains AI cannot easily master and to maintain relevance in evolving technological ecosystems has become increasingly crucial.
The philosopher Hans Jonas argued that modern technology has created an "ethics of the future"—a responsibility to consider how our actions affect not just contemporary humans but distant future generations. AI magnifies this responsibility enormously. Decisions made in the coming decades about AI development, regulation and deployment may shape the evolutionary trajectory of intelligence for centuries or millennia to come.
This extends beyond human society. As AI systems increasingly interface with the natural world—through environmental monitoring, resource management and possibly direct intervention in ecosystems—they become factors in the evolution of other species as well. AI-driven conservation efforts, climate interventions or ecosystem management could alter evolutionary pressures on countless organisms. And that is without mentioning the insane power requirements for AI systems to operate. According to one researcher, by 2027 the AI sector could consume between 85 and 134 terawatt-hours each year. To put that in scale, that is roughly the total annual electricity use of the entire Netherlands, and this is just for AI! Even with more robust quantum systems, the energy demand is going to be incredible and thus potentially catastrophic for the climate.
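A quick back-of-the-envelope calculation makes the scale of that estimate easier to feel (nothing here but unit conversion on the 85-134 TWh figures quoted above):

```python
# Back-of-the-envelope: what does 85-134 TWh per year actually mean?
HOURS_PER_YEAR = 365 * 24  # 8,760

for twh in (85, 134):
    avg_gw = twh * 1000 / HOURS_PER_YEAR  # TWh per year -> average GW drawn continuously
    print(f"{twh} TWh/yr is roughly {avg_gw:.1f} GW running non-stop, all year round")

# 85 TWh/yr  -> ~9.7 GW  (on the order of ten large power stations at full output)
# 134 TWh/yr -> ~15.3 GW (and that is before any further growth in demand)
```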
Perhaps most significantly, AI introduces the possibility of intentional evolution—the deliberate design of intelligence rather than its emergence through natural selection. As Harari notes, AI "will likely gain the ability even to create new life-forms, either by writing genetic code or by inventing an inorganic code animating inorganic entities." This represents a fundamental break with the evolutionary processes that have shaped life for billions of years.
The existential philosopher Martin Heidegger warned about technology's tendency to transform everything into "standing reserve"—resources awaiting human utilisation. AI extends this tendency, potentially transforming the very nature of intelligence and consciousness itself into something engineered rather than evolved.

These considerations move us beyond conventional risk assessment frameworks. The philosopher Toby Ord distinguishes between "extinction risks" (threats to human survival) and "existential risks" (threats to humanity's long-term potential). AI presents both types of risk, but more subtly, it may fundamentally alter what "humanity" and its "potential" mean.
Even without catastrophic scenarios, AI's influence on human evolution remains profound. As we increasingly merge with and depend upon AI systems—for knowledge, decision-making, and cognitive extension—we are already becoming different kinds of beings. The line between human and technological intelligence is hazy, raising fundamental questions about identity and continuity in human evolution.
Building Literacy for the AI Age
If AI truly represents a categorical break from previous technologies—one that makes decisions, creates independently, and can alter the very trajectory of evolution—then how should we respond? Certainly not with passive acceptance or wilful ignorance, treating AI as simply another productivity tool to help us automate our emails! Rather, we need a society-wide commitment to developing new forms of literacy appropriate to an age of intelligent machines.
This literacy begins with understanding what AI systems actually are and how they function. Popular discourse about AI oscillates between anthropomorphic exaggeration ("this AI has feelings") and dismissive reduction ("it's just statistics"). Neither captures the complex reality of modern AI systems—their genuine capabilities and their very real limitations. We need a more sophisticated public understanding that avoids both uncritical techno-optimism and reflexive technophobia.
Shannon Vallor argues we need to cultivate specific "technomoral virtues" for the AI age—among them attentiveness to how technologies reshape our social relationships, adaptability in the face of rapid change and humility about our ability to predict technological impacts. These virtues aren't innate; they require deliberate cultivation through education, discourse and practice.
David Hume famously observed that "reason is, and ought only to be the slave of the passions"—arguing that rationality serves goals and values that come from our emotional and social natures. As we increasingly delegate reasoning to AI systems, we must ensure they remain tethered to human values and passions. This requires developing frameworks for value alignment that go beyond simplistic utility functions.
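As a toy illustration of why a simplistic utility function is not enough (my own sketch, with entirely hypothetical actions and numbers), consider a 'cleaning robot' scoring its options:

```python
# A toy 'cleaning robot' choosing actions by score. Actions and numbers
# are entirely hypothetical, purely to illustrate the point.
actions = {
    "tidy the room":       {"rooms_cleaned": 1, "vases_broken": 0},
    "bulldoze everything": {"rooms_cleaned": 3, "vases_broken": 5},
}

def naive_utility(outcome):
    # A simplistic utility function: only throughput counts.
    return outcome["rooms_cleaned"]

def value_aware_utility(outcome):
    # Human values enter as explicit penalties, not just raw throughput.
    return outcome["rooms_cleaned"] - 10 * outcome["vases_broken"]

for score in (naive_utility, value_aware_utility):
    best = max(actions, key=lambda a: score(actions[a]))
    print(f"{score.__name__}: choose '{best}'")
# The naive scorer picks 'bulldoze everything'; the value-aware one does not.
```

The hard part, of course, is that real human values resist being reduced to a handful of penalty terms at all, which is precisely Hume's point.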

Critical thinking skills become particularly crucial as AI systems generate increasingly convincing text, images and eventually video. The ability to evaluate sources, spot potential manipulation and distinguish authentic from synthetic content will be essential civic skills. The ability to exercise critical judgement rather than accepting authority takes on new meaning when that authority increasingly includes algorithmic systems. I have written extensively about AI and critical thinking, the CRAAP test and why schools need to do more on this.

This literacy must be actively democratic. The politics embedded in AI systems should not be determined solely by their creators but should be subject to democratic deliberation. This requires making technical discussions accessible to the broader public and creating institutions that enable meaningful participation in technology governance.
For those creating and deploying AI systems, this literacy includes a heightened ethical awareness. The technologist and philosopher Norbert Wiener warned decades ago about the dangers of deploying cybernetic systems without considering their full social implications. His call for a "human use of human beings"—technology that enhances rather than diminishes human dignity, agency and flourishing—remains urgent.
Perhaps most fundamentally, AI literacy requires a reflective understanding of what we value about human intelligence, creativity and judgement. As we create machines that can reason, create, and make decisions, we're forced to clarify what makes human reasoning, creativity and judgement distinctive and valuable. This isn't merely a philosophical exercise but a practical necessity for designing the relationship between human and artificial intelligence.
It could definitely be argued that we should think of technologies not as separate from humanity but as part of complex "cyborg" assemblages that blend the human and technological. AI literacy means developing the capacity to shape these assemblages wisely, ensuring they enhance human flourishing rather than diminish it. Indeed, there are people now suggesting that our future is as cyborgs!
Key Takeaways
Harari's warning bears repeating: "Can we trust computer algorithms to make wise decisions and create a better world? That's a much bigger gamble than trusting an enchanted broom to fetch water*. And it is more than just human lives we are gambling on."
[*Harari is referring to Goethe’s 1797 poem, The Sorcerer’s Apprentice, which Disney’s Fantasia later embedded in modern folklore. He explains the context of this in Nexus, so I would encourage you to read it for that alone!]
AI represents an unprecedented shift in our relationship with technology—from tool to agent, from extension of human capability to potential rival or successor. Its capacity to make decisions, generate novel ideas and potentially reshape the evolutionary trajectory of life demands a level of critical engagement far beyond what we've applied to previous technologies.
The casual dismissal of AI as just another productivity tool reflects a dangerous misunderstanding of what's at stake. We wouldn't hand control of nuclear weapons to entities we don't understand; yet we increasingly delegate significant decisions to AI systems whose inner workings remain opaque, whose values alignment is uncertain and whose long-term implications we can barely fathom.
Key Takeaways:
- Recognise the categorical difference: AI differs fundamentally from previous technologies in its capacity to make decisions and generate novel ideas independently. We cannot brush this off as just another tool. It is of a totally different category.
- Question the delegation: Before ceding decision-making authority to AI systems, ask whether they truly understand the human contexts and values at stake. Just as we have done with GDPR and data protection, we must ask the who, what, where, how and why of what AI systems do.
- Develop critical AI literacy: Learn to evaluate AI capabilities and limitations without falling into either uncritical acceptance or reflexive rejection. Find ways to interrogate sources and confront bias (see the work on the CRAAP test mentioned earlier).
- Demand transparency and accountability: Support frameworks that make AI decision-making processes more transparent and accountable to those affected.
- Preserve meaningful human agency: Ensure AI systems enhance rather than diminish human autonomy, creativity and dignity.
- Consider the long view: Recognise that decisions made about AI today may shape the trajectory of intelligence and life for generations to come.
This isn't an argument for technophobia or rejection of AI's potential benefits. Rather, it's a call for appropriate gravitas—for recognising that this particular technology requires exceptional care, foresight and wisdom in its development and deployment.
As individuals, organisations and societies, we face a critical choice: Will we sleepwalk into an AI-dominated future, treating these systems as mere tools even as they increasingly function as agents? Or will we develop the literacy, institutions and wisdom necessary to ensure that AI development remains aligned with human flourishing?
The path we choose will shape not just our own lives but potentially the future of intelligence on Earth. That responsibility demands our most serious attention.