The AI Paradox: Are We Efficiently Rushing Nowhere?

October 11, 2024

Artificial Intelligence: the buzzword that seems to have come from nowhere and got everyone from Number 10 to your nan's bridge club in a tizzy. It promises a brave new world of efficiency, where machines take care of the drudgery, leaving us humans free to pursue higher pleasures. Sounds brilliant, doesn't it? But I'm not so sure. As someone who's spent a while knee-deep in the AI trenches - from helping headteachers leverage chatbots to showing cops how to spot deepfakes - I'm starting to wonder if we're not just efficiently rushing towards... well, nowhere in particular.

Don't get me wrong: I love GenAI, most of the time. I use it EVERY DAY and often wonder how I ever got anything done before it! I also know that it pays a lot of my salary as a keynote speaker, workshop facilitator and coach. In fact, I did some quick maths, and 75% of my income over the last two years has centred on AI!

So we have a situation where AI is writing our emails, generating our reports, and even planning our social media content. We've saved hours. But what are we doing with all this freed-up time? If you're like most of us, you're filling it with more work. More emails. More meetings. More 'productivity'. We've become so obsessed with doing that we've forgotten about being. I will use 'we' intentionally throughout this piece; I am as guilty as hell. I often write FRiDEAS from the point of view that I am on a journey of discovery - and this is another one of those areas, I'm afraid.

What if we used AI not just to do more, but to create space for what makes us uniquely human? Instead of using AI to let us churn out more content or crunch ever more numbers, what if we used it to free up time for writing poetry, having a natter with our neighbour, or simply watching the clouds go by? We are navigating an AI-augmented landscape and we must ask ourselves: Are we using AI to enhance our humanity, or are we slowly, efficiently, algorithmically erasing it?

The Promise vs. The Reality

When AI burst onto the scene, it came wrapped in shiny promises of a leisure-filled future. "Let the robots do the work," they said, "and humans will be free to pursue their passions." It was a tantalising vision, reminiscent of the economist John Maynard Keynes' 1930 prediction of a 15-hour working week by 2030. AI, we were told, would be the key to unlocking this utopia. (Side note: maybe Keynes was right, but not in the way we wanted!)

Fast forward to today, and the reality looks somewhat different. We're working longer hours than ever, with UK employees clocking an average of 42.5 hours per week, significantly above the EU average. The promise of AI-enabled leisure seems to have evaporated into a cloud of push notifications and never-ending Zoom calls.

So, where did we go wrong?

The truth is, AI has delivered on many of its promises. It's made us more efficient, more productive, and more capable of handling complex tasks. The problem lies not with the technology itself, but with how we've chosen to use it.

Instead of leveraging AI to reduce our workload, we've used it to amplify it. Email AI helps us send more emails. Productivity AI encourages us to squeeze more tasks into our day. Even our 'downtime' is optimised, with AI-powered apps suggesting how to make the most of our precious few moments of leisure.

We are using our greatly enhanced productivity to produce more and more things that nobody much wants or needs. And in the process, we are using up the world.

The misuse of AI's potential stems from a fundamental misunderstanding of its purpose. We've treated AI as a tool for doing more, rather than a means of creating space for being more.

Consider the case of a UK marketing agency I recently worked with. They implemented AI tools to streamline their content creation process, promising their team that this would free up time for more creative, strategic work. Six months later, their output had tripled, but employee satisfaction had plummeted. The freed-up time had been immediately filled with demands for more content, more campaigns, more metrics. Not only that, but they went from needing three front-end developers to one. I am not in the game of bashing people who are doing things differently - and heck, I have probably put more people out of jobs than I would like - but are we really thinking holistically and societally about this?

If you have read more than one of these articles, you will know I often quote Peter Drucker - always in a way that is positive. But this statement, from his 1999 book Management Challenges for the 21st Century, concerns me:

“The most important, and indeed the truly unique, contribution of management in the 20th century was the fifty-fold increase in the productivity of the manual worker in manufacturing.”

Peter Drucker

Drucker frames that fifty-fold increase as a triumph - but at what cost?

This scenario is playing out across industries. Teachers using AI to mark work find themselves assigned more classes. Lawyers using AI for research are expected to take on more cases. The efficiency dividend is being pocketed by organisations, not distributed to workers in the form of leisure time (on the whole - there are some outliers, I believe.)

The reality we're facing is a far cry from the promise of AI. Instead of ushering in an era of reduced work and increased fulfilment, we've created a hamster wheel that spins ever faster. We're efficiently rushing towards burnout, stress, and a diminished sense of human value.

As we grapple with this reality, it's crucial to reassess our relationship with AI. We need to ask ourselves: Are we using this technology in a way that aligns with our human needs and values? Or have we become servants to the very tools meant to serve us? (Some people, like Geoffrey Hinton and Mo Gawdat, might suggest that these questions are futile because the genie is out of the bottle and AI has already gone too far towards making us its slave, but I have to have hope.)

The promise of AI remains potent. But realising that promise will require a fundamental shift in how we view productivity, work, and the role of technology in our lives. It's time to reclaim the original vision of AI - not as a taskmaster, but as a liberator of human potential.

The Efficiency Trap

In the UK, we've long prided ourselves on our ability to 'keep calm and carry on'. But in our AI-augmented world, it seems we've twisted this ethos into 'keep calm and carry on doing more'. We've fallen into what I call the 'efficiency trap' - a vicious cycle where increased efficiency doesn't lead to more free time, but to heightened expectations and ever-growing to-do lists.

A 2017 UK government-commissioned review estimated that AI could add £630 billion to the British economy by 2035. Brilliant news, isn't it? But here's the question we're not asking: at what cost to our collective wellbeing?

The efficiency trap works like this: AI helps us complete a task faster. Instead of using that saved time for rest, reflection, or creativity, we immediately fill it with more tasks. Our productivity rises, so expectations rise with it. Soon, we're doing more work than ever, just to keep pace with these new, AI-enabled standards.
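
For the spreadsheet-minded, here is a deliberately crude toy sketch of that loop in Python. Every number is invented purely for illustration; the point is the shape of the curve, not the figures.

```python
# A toy illustration of the efficiency trap: each year AI shaves 30% off the time
# a task takes, but expectations expand so the 40-hour week is always refilled.
# All figures are invented for illustration only.
WEEK_HOURS = 40
hours_per_task = 2.0  # how long one report, deck or analysis takes today

print(f"{'Year':>4} {'Hrs/task':>9} {'Tasks expected':>15} {'Hours worked':>13}")
for year in range(1, 6):
    hours_per_task *= 0.7                         # AI makes each task 30% faster...
    tasks_expected = WEEK_HOURS / hours_per_task  # ...and the saved time is instantly refilled
    print(f"{year:>4} {hours_per_task:>9.2f} {tasks_expected:>15.1f} {WEEK_HOURS:>13}")

# By year five each task takes about 20 minutes and expected output has roughly
# sextupled - yet the hours worked haven't moved an inch.
```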

I've seen this play out in countless organisations. A financial services firm in Manchester implemented an AI system to automate data analysis. The result? Their analysts, instead of having more time to think strategically, found themselves drowning in requests for more frequent and detailed reports. The goalposts had shifted, and the race to meet new expectations began. “We can take on more clients” became the MO.

This trap isn't just a corporate phenomenon. It's seeping into our personal lives too. How many of us use AI-powered productivity apps, only to find ourselves cramming more activities into our 'optimised' schedules? We're becoming efficiency junkies, chasing the next hit of productivity at the expense of genuine human experiences.

The philosopher Bertrand Russell, in his essay In Praise of Idleness, wrote:

"The modern man thinks that everything ought to be done for the sake of something else, and never for its own sake."

Bertrand Russell

AI-generated image. Midjourney prompt: Human on a hamster wheel in a suit doing email --ar 16:9

Russell penned this in 1932, but it seems eerily prescient in our AI-driven age.

The efficiency trap also has profound psychological implications. As we outsource more tasks to AI, we risk losing the sense of accomplishment that comes from overcoming challenges. The Hungarian-American psychologist Mihaly Csikszentmihalyi's concept of 'flow' (which I have referenced before and recommend everyone read up on) - that state of deep engagement and satisfaction we experience when fully absorbed in a task - becomes harder to achieve when AI is doing the heavy lifting.

“Most enjoyable activities are not natural; they demand an effort that initially one is reluctant to make. But once the interaction starts to provide feedback to the person's skills, it usually begins to be intrinsically rewarding.”

Mihaly Csikszentmihalyi

Moreover, this relentless pursuit of efficiency is changing our perception of time itself. We're developing what social scientists call 'time anxiety' - a constant feeling that we're not doing enough, not being productive enough, not keeping up with the AI-enhanced Joneses.

To break free from this trap, we need to fundamentally reframe our relationship with efficiency. Instead of asking "How can AI help me do more?", we should be asking "How can AI help me do what I do, but better?" It's about using AI not to cram more into our lives, but to create space for what truly matters.

Efficiency isn't an end in itself. It's a means to an end. And that end should be a life richer in meaning, connection, and genuine human experiences - not just in completed tasks and ticked boxes.

The efficiency trap is real, but it's not inescapable. By consciously choosing how we use AI, we can turn this tool of efficiency into a tool for effectiveness - one that enhances our humanity rather than diminishes it.

The Unintended Consequences of AI Adoption

As AI weaves itself into the fabric of our society, we're beginning to see ripple effects that few could have predicted. These unintended consequences are reshaping our world in ways both subtle and profound, often catching us off guard.

In education, AI-assisted learning tools promised to revolutionise teaching. And in many ways, they have. But as a headteacher in Bristol recently confided to me, "We're seeing a worrying trend. Our students are becoming brilliant at finding information, but they're struggling to think critically about it." I have talked elsewhere about the need for critical thinking as a core element of an evolving education system - my preference is to use the CRAAP framework developed by librarians at California State University, Chico.

The implications are stark. Are we inadvertently creating a generation of information regurgitators rather than independent thinkers? The challenge now is to use AI not just as an information dispenser, but as a tool to foster deeper understanding and creative problem-solving.

As I have mentioned, I have had the privilege of working with a lot of small businesses as part of the Extraordinary Collective Digital Boost programme over the last three years. It has become super clear that in digital marketing, AI-generated content is becoming increasingly sophisticated. But as one marketing manager of a decent-sized Liverpool SME put it, "Our content is more consistent now, but it's lost its spark. We're struggling to maintain our brand's unique voice." This homogenisation of content raises questions about authenticity in an AI-augmented world. How do we preserve the human touch that resonates with consumers when algorithms are doing the talking?

The business world is grappling with its own set of unforeseen challenges. AI's role in decision-making is expanding, but with it comes a shift in the nature of leadership. Many UK managers are finding themselves in uncharted territory, balancing the analytical power of AI with their own experience and intuition. This isn't just about job security; it's about the fundamental nature of human judgement in the corporate world.

As one CEO of a UK Multi-Academy Trust told me, "AI gives us data-driven insights, but it can't replicate human intuition built on years of experience. We're having to redefine what leadership means in this new landscape." The question becomes: how do we strike a balance between AI's analytical power and the irreplaceable value of human wisdom?

Perhaps nowhere are the unintended consequences of AI more evident than in public services. In policing, AI-powered predictive tools were hailed as a game-changer for crime prevention. However, plenty of anecdotal examples highlight concerns about these systems perpetuating biases and undermining community trust. It's simple statistics: if an AI system is trained on historical data that is systemically flawed, its predictive and generative outputs will inevitably be skewed.
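
To make that point concrete, here is a deliberately simplified toy sketch in Python. The two areas, the patrol counts and the offence rates are all invented, and the "model" is nothing more than proportional allocation; it bears no relation to any real policing system, but it shows how skewed historical records get handed straight back as a forecast.

```python
# Toy sketch: a naive "hotspot" model trained on where police historically looked,
# not on where crime actually happened. All figures are invented.
import random

random.seed(42)

# Both areas have the SAME true underlying offence rate...
TRUE_OFFENCE_RATE = {"Area A": 0.05, "Area B": 0.05}

# ...but historical patrols were concentrated on Area A, so far more offences
# were *recorded* there. This is the "systemically flawed" training data.
historical_patrols = {"Area A": 9000, "Area B": 1000}
recorded_offences = {
    area: sum(random.random() < TRUE_OFFENCE_RATE[area] for _ in range(patrols))
    for area, patrols in historical_patrols.items()
}

# The "predictive" model: allocate tomorrow's patrols in proportion to recorded offences.
total = sum(recorded_offences.values())
allocation = {area: round(count / total, 2) for area, count in recorded_offences.items()}

print("Recorded offences:", recorded_offences)
print("Suggested patrol allocation:", allocation)
# Roughly 90% of patrols are sent back to Area A, despite identical true rates -
# and the next round of "data" will be even more lopsided.
```

The only difference between the two areas is where officers historically looked - and that difference is exactly what the 'prediction' reproduces and then amplifies.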

In social care, AI is being used to assess needs and allocate resources. But as a social worker in Yorkshire pointed out, "The system is efficient, but it's missing the nuances. We're seeing cases where the AI recommendation doesn't align with what we, as humans, know is best for the individual."

These examples highlight a crucial point: AI is not just a tool we use; it's a force that's actively shaping our society, often in ways we didn't anticipate. It's changing how we learn, how we communicate, how we make decisions, and how we care for one another. The challenge we face is not just about managing AI, but about redefining our roles in an AI-augmented world. We need to be proactive in shaping AI's impact, rather than merely reacting to it.

We must constantly ask ourselves: Are we using AI in a way that enhances our human capabilities, or are we unwittingly diminishing them? Are we allowing AI to make our world more efficient at the expense of making it less human?

The unintended consequences of AI adoption serve as a reminder that technology is not destiny. The future of AI in our society is not predetermined; it's a future we must actively choose and shape. And in making these choices, we must prioritise not just what AI can do, but what it means for our shared humanity.

The Philosophical Implications

It wouldn't be one of my posts without a philosophical musing, would it?! As we hurtle towards this seemingly inevitable future, we find ourselves grappling with philosophical questions that would make even the Ancients scratch their heads. At the heart of this existential natter is a deceptively simple question: What does it mean to be human in an age where machines can think?

The ancient Greeks pondered the nature of humanity, but I doubt Aristotle ever imagined a world where an AI could write a passable imitation of the 'Nicomachean Ethics'. Yet here we are, forced to reassess our understanding of intelligence, creativity, and consciousness itself.

The nature of work, for example, is a concept that's been central to human identity since we first picked up a tool. As AI increasingly takes on cognitive tasks we once thought uniquely human, we're forced to confront uncomfortable truths. If a machine can do your job better than you, what value do you bring? It's a question that's keeping many of us awake at night. (By the way, a November 2023 TIME magazine article shows that this is exactly the trajectory we're on…)

TIME Magazine, November 6, 2023: https://time.com/6300942/ai-progress-charts/

"Today, for the mass of humanity, science and technology embody 'miracle, mystery, and authority'. Science promises that the most ancient human fantasies will at last be realised. Sickness and ageing will be abolished; scarcity and poverty will be no more; the species will become immortal. Like Christianity in the past, the modern cult of science lives on the hope of miracles. But to think that science can transform the human lot is to believe in magic. Time retorts to the illusions of humanism with the reality: frail, deranged, undelivered humanity. Even as it enables poverty to be diminished and sickness to be alleviated, science will be used to refine tyranny and perfect the art of war.”

John Gray, Straw Dogs

The philosopher John Gray, never one to sugarcoat things, puts it bluntly, eh? ⬆️

The future of work is not a future of leisure. It seems to be one in which machines do more of the work humans have customarily done, while humans are left to do the shadow-work of tending to the machines.

But perhaps we're looking at this all wrong. Maybe the rise of AI isn't a threat to human value, but an opportunity to redefine it. After all, if machines can take care of the mundane, couldn't that free us to focus on what makes us uniquely human? Empathy, creativity, moral reasoning - these are areas where, at least for now, humans still have the edge.

Yet even here, AI is blurring the lines. Now that an AI can compose a symphony or create art, what does that say about human creativity? The philosopher Margaret Boden of the University of Sussex has spent decades pondering this question. She argues that while AI can be creative, it lacks the self-awareness and emotional depth that characterise human creativity.

“Now, it is true that there are programs which can write poetry – although I don’t know of any AI programs that can write good poetry – and produce very interesting and, in a few cases, I would say, very aesthetically satisfying graphics, including coloured paintings. There are even programs which can write prose – for example, news reports describing a football game.

But, if you think of a report about a football match, I don’t think that there’s any system, at the moment, that could visually recognise what was so special about that goal by David Beckham against Crystal Palace [in 1996] when he scored from inside his own half. And even if it could realise how special it was, could it find the language to describe it?

If somebody were to try to describe that goal, they aren’t going to just say, “Oh, Beckham then scored a goal from the other side of the pitch.” They’re going to write more than that, because it was very special. And they’d not only have to have a good understanding of football, they’d have to have a good understanding of language – which, at the moment, these programs don’t have. They don’t understand language at all. They just either use canned phrases or they rely on statistics for word clusters – words that tend to appear together in human written prose – to pick their words, but they don’t understand any of the language that they use."

Margaret Boden

AI-generated image. Midjourney prompt: A futuristic neural network emerging from a minimalist human silhouette, with glowing synapses and data streams, set against a stark white background. Holographic UI elements displaying brainwave patterns float nearby. Hyper-detailed, sleek design, vibrant neon accents --ar 16:9

This leads us to perhaps the most profound philosophical implication of AI: the nature of consciousness itself. As our AIs become more sophisticated, the line between simulated and genuine consciousness becomes increasingly blurry. The philosopher David Chalmers calls this "the hard problem of consciousness" - and it's a problem that's only getting harder.

Are we creating sentient beings? And if we are, what ethical obligations do we have towards them? It's not just a question for sci-fi novels anymore. As we develop more advanced AI systems, we may need to grapple with the rights of artificial entities. It's enough to make Jeremy Bentham's head spin (which, incidentally, is still preserved at University College London - talk about philosophical zombies!).

The philosophical implications of AI aren't just academic exercises. They have real-world consequences for how we structure our society, our economy, and our lives. If AI can do most jobs better than humans, how do we distribute resources? How do we find meaning? How do we avoid creating a world where, as Yuval Noah Harari warns, we have "the rise of the useless class"?

These are heavy questions, and there are no easy answers. But one thing is clear: as we steer into this brave new world, we need philosophy more than ever. We need to think deeply about what we value, what gives our lives meaning, and what kind of future we want to create.

In the end, the rise of AI doesn't just challenge our notions of work and intelligence - it challenges our very conception of what it means to be human. And that is a philosophical puzzle worthy of our full attention.

Reclaiming Our Humanity

I know I've painted a rather gloomy picture of our AI-driven future. But fear not, for all is not lost! It's time we stop being passengers on this runaway AI train and grab the controls. It's time to reclaim our humanity.

First things first, let's bin this notion that efficiency is the be-all and end-all. We're not robots. We're messy, emotional, irrational beings - and that's what makes us brilliant. So instead of using AI to squeeze every last drop of productivity out of our day, why not use it to create space for what makes us human?

I did this very thing the other day. I put my bare feet on the grass in Harrogate and sat still for an hour. A full 60 minutes. A full 3,600 seconds. And I pretty much counted every one of them, I think! It was HARD. But I am glad I did it. No AirPods. No phone. No book. No latest podcast to catch up on, no news article to digest. I knew that the AI in my calendar would not allow anyone to book time in that hour. I had my emails set to auto-reply. I left my phone in the car with Do Not Disturb on, which automatically tells anyone who messages me that my notifications are snoozed! And I am going to do it more, however uncomfortable it made me feel.

Imagine using AI to handle your email inbox, not so you can answer even more emails, but so you can spend an hour reading a novel or playing a game with the kids. Let AI take care of your routine work tasks, not so you can take on more projects, but so you can mentor a colleague or volunteer in your community.

This isn't pie-in-the-sky thinking. Some forward-thinking UK companies are already exploring ways to use technology to enhance employee wellbeing and customer service, rather than simply to increase productivity. The key is to focus on using AI and other technologies in ways that support human values and interactions, not replace them.

But it's not just about how we work. It's about how we live. We need to start seeing AI as a tool for enhancing our humanity, not replacing it. That means using AI to augment our decision-making, not to make decisions for us. It means leveraging AI to expand our creativity, not to create in our stead.

Instead of fretting about AI writing the next Booker Prize winner, why not use AI as a tool for inspiration?

“The technology you use impresses no one. The experience you create with it is everything.”

Sean Gerety

In education, instead of using AI to spoon-feed information to students, let's use it to free up teachers to foster critical thinking and emotional intelligence. These uniquely human skills will be more crucial than ever in an AI-dominated world. We don’t need more AI-generated assignments, written by AI that students use, to be marked by AI teacher tools and fed through AI data analysis to make AI predictions on what the students need to learn next.

And in our personal lives, let's use AI to create more time for genuine human connection. Imagine AI handling your schedule and to-do list, not to cram in more activities, but to ensure you have time for those leisurely Sunday lunches with family or impromptu pints with mates.

But none of this will happen automatically. We need to make conscious choices about how we implement and interact with AI. We need to be proactive in shaping our AI-augmented future, rather than passively accepting whatever comes our way.

This means having hard conversations about the role of AI in our society. It means setting boundaries and being willing to say "no" to AI solutions that don't align with our values. It means reimagining our education systems to prioritise the skills that make us uniquely human. It means finding tools that help us…become more human.

Most importantly, it means remembering that we are not powerless in the face of technological change. We have agency. We have choice. We have the ability to shape our tools, rather than being shaped by them.

Before I give you my final takeaway thoughts, I couldn't go through a post about working with AI to enhance our lives without referencing Ethan Mollick's brilliant book, Co-Intelligence, again. I think it is a seminal piece for helping us understand our role alongside AI. I don't apologise for quoting him at length, but I 100% urge you to read his book in full. He sets out four rules for co-intelligence with AI:

Rule 1: Always invite AI to the Table

Rule 2: Be the human in the loop

Rule 3: Treat AI like a person (but tell it what kind of person it is)

Rule 4: Assume this is the worst AI you will ever use.

“As imperfect as the analogy is, working with AI is easiest if you think of it like an alien person rather than a human-built machine. You must establish a clear and specific AI persona, defining who the AI is and what problems it should tackle.”

Ethan Mollick, Co-Intelligence
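
As a tiny illustration of Rule 3 in practice, here is a hedged sketch of what "telling the AI what kind of person it is" can look like using the OpenAI Python client. The persona wording, the model choice and the prompt are my own invented examples rather than anything Mollick prescribes, and any chat-capable model or provider would do just as well.

```python
# A hypothetical sketch of Mollick's Rule 3: give the AI a clear, specific persona
# before asking it to work. Assumes the official OpenAI Python client
# (pip install openai) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = (
    "You are a blunt but kind writing coach for a UK school leader. "
    "You value clarity over jargon, you challenge waffle, and you always end "
    "with one question that makes the human do the thinking."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any chat model would do
    messages=[
        {"role": "system", "content": persona},  # Rule 3: tell it who it is
        {"role": "user", "content": "Here's my draft newsletter intro - what would you cut?"},
    ],
)

# Rule 2: be the human in the loop - read, judge and edit before using any of it.
print(response.choices[0].message.content)
```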

My Takeaways

1. Mind the AI Gap: Be aware of the growing disconnect between AI's promised benefits and the reality of its implementation. Efficiency doesn't always equate to effectiveness or improved quality of life.

2. Redefine Productivity: Challenge the notion that doing more is always better. Use AI to create space for meaningful work and leisure, not just to cram more tasks into your day.

3. Preserve Human Skills: Focus on developing and valuing uniquely human capabilities like empathy, creativity, and critical thinking. These skills will become increasingly important in an AI-dominated world.

4. Implement AI Ethically: When adopting AI solutions, consider the broader implications and unintended consequences. Prioritise systems that enhance human decision-making rather than replace it entirely.

5. Cultivate Digital Wisdom: Develop the ability to critically evaluate AI-generated information and recommendations. Remember, AI is a tool, not an infallible oracle.

6. Reclaim Your Time: Use AI to automate mundane tasks, but be intentional about how you use the time saved. Prioritise activities that enhance your wellbeing and human connections. What fills your cup, as Action Jackson would say?

7. Shape the Future: Engage in discussions about AI's role in society. Your voice matters in determining how these technologies are developed and implemented.

The goal isn't to resist AI, but to harness it in ways that amplify our humanity rather than diminish it, to create a future where technology serves human flourishing, not the other way around.

We're in the thick of an AI revolution, and it's not going anywhere. I have said multiple times before that I think this is more disruptive than any other revolution in history - including the printing press, the Industrial Revolution, the internet and social media - because it has the ability to amend itself, independent of human intervention. But here's the thing: we don't have to let it run roughshod over our lives.

AI is a tool, not a tyrant (for now at least). It's up to us to decide how we use it. If you want to use AI to churn out more pointless reports, go ahead. But why not use it to free up time for a game of padel or to finally write that novel you've been banging on about for years?

The trick is to use AI to enhance our humanity, not replace it. Let's use it to do the boring bits so we can focus on what makes us human - creativity, empathy, and the ability to have a good laugh.
