Beyond Button Pressing

February 6, 2026

We’re drowning in prompt engineering tutorials and AI literacy courses, aren’t we? Every week brings another webinar on “10 ChatGPT Hacks for Busy Educators” or “How to Write the Perfect AI Prompt in 5 Steps”. The message is clear and consistent: if you’re not using AI to speed up your workflow, you’re being left behind. Learn to prompt better, work faster, automate more.

But there’s something that’s been keeping me up at night. In all these sessions about optimising prompts and maximising efficiency, I’m noticing what’s conspicuously absent from a lot of the conversations. Nobody seems to be asking whether the task should be automated. Nobody’s examining what gets lost when AI generates our first drafts. Nobody’s questioning who benefits most from this relentless push toward algorithmic assistance.

We’ve confused technical competence with critical thinking. We’ve mistaken knowing how to use ChatGPT for understanding what it means to think alongside AI. The gap between what we’re teaching and what we actually need has never been wider.

I’ve spent the last six months creating something that makes me deeply uncomfortable. Not another guide to better prompting. Not a course on AI productivity hacks. Instead, I’ve made a set of cards designed to provoke the questions we’ve been avoiding. Questions that make people squirm. Questions without tidy answers. Questions that demand we stop optimising and start thinking.

AI Generated Image. Midjourney Prompt: critical thinking

The Theatre of AI Literacy

I have delivered hundreds of staff training sessions on AI and I see a familiar pattern. First, there’s an expectation of a dazzling demonstration of what the tools can do. Watch how quickly it generates a lesson plan! See how it produces personalised feedback in seconds! Look at this perfectly formatted report created from bullet points! Then comes the inevitable tutorial on prompt engineering. Be specific. Give it context. Tell it your role. Iterate until you get what you want.

The implicit promise is seductive, and I have fallen for it more times than I can count: if we master these tools, we’ll reclaim our time. Work smarter, not harder. Do in minutes what used to take hours. It’s efficiency porn dressed up as professional development.

What’s missing from this theatre (I seem to be talking about this everywhere at the minute!) is any serious interrogation of what we’re actually doing. We’re teaching people to drive faster without teaching them about traffic laws, road safety, or where they’re actually going. We’re optimising the mechanics whilst ignoring the meaning.

I have been considering what most AI literacy training actually covers. Prompt engineering dominates the agenda. How to structure requests, how to get better outputs, how to refine results through iteration. Tool features come next. Which models do what, which platforms offer which capabilities, how to integrate them into your existing workflow. Efficiency gains round out the curriculum. Time saved, tasks automated, workload reduced. 

And all of this is awesome. I love it. We need to find better ways to save time, cash and stress, but this isn’t literacy. It’s operationalisation. We’re creating competent users, not critical thinkers. The distinction matters enormously.

Real literacy would ask different questions entirely. Not just how to use AI, but when and whether to use it. Not just what it can do, but what it can’t do and shouldn’t do. Not just how to get better outputs, but what we lose by outsourcing certain cognitive tasks to algorithms.

AI Generated Image. Midjourney Prompt: outsourcing cognition to algorithms

I see this gap everywhere in my consultancy work. Teachers who can generate resources brilliantly but haven’t considered what happens when students grow up believing that thinking means prompting. Leaders who produce impressive AI-assisted reports but haven’t examined the assumptions baked into those algorithmic recommendations. Schools racing to implement AI policies without asking whose interests those policies actually serve.

The cost isn’t immediately visible. Students still submit work. Teachers still meet deadlines. Reports still get written. But something crucial is disappearing beneath the surface. The messy, uncertain, exploratory thinking that precedes good writing. The struggle that builds understanding. The human judgement that can’t be automated without losing something essential.

We’re sleepwalking into dependence whilst congratulating ourselves on our digital fluency. The new cards I have created emerged from a simple realisation: if we’re not asking the right questions about AI, we’re not doing education. We’re doing compliance.

What the Cards Actually Are

The set contains 48 cards organised around four themes:

Thinking with AI. 

Creating with AI. 

Learning with AI. 

Leading with AI. 

Each card poses a single provocative question. No answers are provided, because providing answers would defeat the purpose entirely. I do share my own thoughts on how I would approach each question, but they are only suggestions, and that distinction matters.

The Thinking with AI cards probe how AI shapes our reasoning processes. When an AI answer sounds persuasive, what reasoning steps are missing or assumed rather than made explicit? What problem are you actually asking AI to think about, and what problem might it be solving instead?

These questions are deliberately uncomfortable. They force you to examine the gap between fluency and logic, between sounding right and being right. They make visible the assumptions we’re importing when we accept AI-generated explanations without scrutiny.

The Creating with AI cards examine what happens to creative and intellectual work when AI enters the process. Where has AI removed the need for a first draft, and what thinking disappeared with it? If I had to teach this without AI, what would I do differently? 

These aren’t abstract philosophical musings. They’re practical questions about daily decisions. When you use AI to generate feedback comments, what are you actually authoring and what are you curating? When speed replaces care, what does that cost your students?

The Learning with AI cards focus on what changes for learners when AI becomes ubiquitous. What is this learner learning because of AI, not just with it? What does struggle look like when AI is always available? How does AI affect what learners remember versus what they can retrieve?

The brutal honesty of these questions is intentional. We need to confront what’s actually happening in classrooms where AI assistance is normalised. Not the idealised version we present in policy documents, but the messy reality of how tools reshape learning.

The Leading with AI cards address systemic and ethical dimensions that leaders cannot avoid. What risks are being normalised because AI feels inevitable? How is AI reshaping trust between leaders, staff, and learners?

These questions matter because decisions made at leadership level cascade throughout organisations. When a head teacher decides to implement AI-powered behaviour monitoring or data analytics, that choice affects everyone. But how often do we examine the assumptions embedded in those systems?

AI Generated Image. Midjourney Prompt: leadership cascade

Each card includes a QR code linking to a dedicated page on my website. These pages provide deeper context. Short explainers ground the question. Classroom and leadership scenarios make it concrete. Practical activities offer ways to explore the question with students, staff, or leadership teams. Technical notes define key terms without jargon. Additional resources point to further reading and tools.

But the pages also include response boxes. Space for guided reflection. Options to upload video responses. A prompt asking what you’d share to help others. And crucially, a section displaying community responses where your thinking becomes part of the collective resource.

This isn’t a product you buy and shelve. It’s infrastructure for sustained thinking. The cards are provocations. The QR codes are gateways. The monthly webinars are where the real work happens. (Yes, if you get the cards, you get access to the monthly webinars where I walk through using these in an organisation.)

Why Physical Cards Matter

In a world of infinite digital content, physical cards might seem almost perverse. Why not just a PDF? Why not a mobile app? Why something you can drop, lose, or spill coffee on?

The physicality is precisely the point. Cards demand you slow down. You can’t accidentally swipe past an uncomfortable question. You have to pick it up, read it aloud if you’re in a group, sit with it. The friction is the feature.

AI Generated Image. Midjourney Prompt: friction in thinking

I’ve used question cards in keynotes and workshops for years now and the dynamic never fails to surprise me. When you ask a room of educators to physically choose a card, something shifts. The random selection means you can’t curate your way to comfortable questions. You get what you get and you have to engage with it.

Cards create conversation in ways that screens don’t. You can spread them across a table in a staff meeting. Pull one out during a planning session. Hand them to colleagues and ask them to choose the question that makes them most uncomfortable. The awkwardness of reading a question aloud is precisely what makes it work. Discomfort is where thinking happens.

There’s also the shuffle principle. You can’t “complete” a set of cards. There’s no progression from beginner to advanced, no sense that you’ve mastered the material once you’ve been through them all. Different questions become relevant at different moments. Last month’s card that seemed abstract suddenly becomes urgent when you’re making a specific decision about AI implementation.

Physical cards constrain you in productive ways. You can’t have them all in front of you simultaneously. You can’t search for keywords. You can’t skip to the ones you think you’ll find easiest. The limitation forces genuine encounter with questions you might otherwise avoid.

This matters more than it might seem. Digital resources promise boundlessness and deliver forgettability. We bookmark things we never return to. We save PDFs we never open. We sign up for courses we never finish. The infinite archive is also the infinite deferral.

Cards sit on your desk. They occupy physical space in staff rooms and leadership meetings. They become objects that carry meaning beyond their text. People develop favourites. Teams develop rituals around them. The material presence creates mental presence in ways that digital resources rarely achieve.

I’m not romanticising physical media for its own sake. I’m making a pragmatic argument about attention and engagement. When everything is available all the time, nothing feels urgent. When you hold a question in your hand, you have to decide whether to engage with it or put it down. That decision itself is valuable.

Critical Thinking, Not Prompt Engineering

The fundamental shift we need moves us from AI literacy to AI criticality. From prompt engineering to question crafting. From tool mastery to systemic thinking.

AI literacy as currently conceived focuses on operational competence. Can you write an effective prompt? Do you understand how different models work? Can you integrate AI tools into your workflow? These are useful skills but they’re not sufficient. Technical competence creates an illusion of readiness whilst leaving the most important questions unexamined.

The prompt engineering trap is particularly insidious (and I have to be careful not to fall into it too often myself). The entire framing positions you as a consumer optimising your inputs to get better outputs. Learn to prompt correctly and AI will serve you better. This perspective keeps you focused on the mechanics whilst ignoring the architecture.

But the real skill isn’t writing better prompts. The real skill is developing better judgement about when not to prompt at all. Knowing which tasks genuinely benefit from AI assistance and which lose something essential when automated. Understanding not just how to delegate to algorithms but whether delegation serves your actual goals.

I want to consider three brief scenarios that illustrate the difference between literacy and criticality.

  1. A teacher uses AI to generate personalised feedback comments for student essays. The AI literacy question asks how to prompt the system to produce more specific and constructive feedback. The critical thinking question asks what student learning requires that AI cannot provide. When does feedback from a person matter more than speed or scale? What does the student lose if they never receive feedback shaped by someone who knows their development over time?
  2. A curriculum leader uses AI to generate lesson plans for a new unit. The literacy question focuses on providing enough context to get a coherent plan. The critical thinking question examines which parts of planning involve pedagogical judgement that shouldn’t be automated. What thinking disappears when you skip straight to the plan? How does starting with an AI-generated structure shape what you end up teaching?
  3. A head teacher uses AI to analyse behaviour data and generate intervention recommendations. The literacy question asks how to set parameters and interpret outputs. The critical thinking question probes what contextual knowledge this decision requires that AI cannot possess. Whose perspective is missing from the algorithmic recommendation? What happens when data patterns override professional judgement about individual students?

These aren’t hypothetical scenarios. They’re happening in schools across the country right now. The difference between approaching them through an AI literacy lens versus an AI criticality lens determines whether we end up with thoughtful integration or thoughtless automation.

The philosophical stake here is higher than it might initially appear. We risk creating a generation of students who understand thinking as equivalent to prompting. Who believe that knowledge work means knowing how to extract information from AI systems rather than how to construct understanding through sustained engagement with ideas.

Gilbert Ryle’s distinction between knowing how and knowing that becomes relevant here. Traditional technologies embodied knowing how. They encoded procedures and methods. AI systems increasingly demonstrate something closer to knowing that. They contain factual knowledge and can reason about it. This shift changes what human capability looks like and what we need to develop in students.

“It is, however, one thing to know how to apply such concepts, quite another to know how to correlate them with one another and with concepts of other sorts. Many people can talk sense with concepts but cannot talk sense about them; they know by practice how to operate with concepts, anyhow inside familiar fields, but they cannot state the logical regulations governing their use. They are like people who know their way about their own parish, but cannot construct or read a map of it, much less a map of the region or continent in which their parish lies.” Gilbert Ryle

If AI can generate competent prose, create lesson plans, produce feedback comments, and synthesise research, what becomes of human expertise? Not a redundant question about whether humans remain valuable, but a practical question about which capabilities we need to deliberately cultivate because they won’t develop automatically in an AI-saturated environment.

The answer involves judgement, contextual sensitivity, ethical reasoning, and the capacity to sit with ambiguity. These capabilities don’t emerge from prompt engineering practice. They develop through exactly the kind of sustained, uncomfortable questioning that the cards are designed to provoke.

This is where the monthly webinars become crucial. Because these questions don’t have tidy answers that I can provide. They require collective sense-making.

AI Generated Image. Midjourney Prompt: sense making

The Community Dimension

The monthly webinars aren’t an add-on or a bonus feature. They’re central to how this whole thing works. These questions are too complex and too contextual for individual contemplation alone. We need to think together.

Different contexts reveal different dimensions of the same question. A primary school teacher brings concerns about early literacy development. A university lecturer worries about academic integrity. A multi-academy trust leader grapples with system-wide policy. A parent asks about their child’s relationship with AI homework help. Each perspective enriches the others.

The webinars create rhythm and accountability. It’s easy to buy a set of cards, use them once or twice with initial enthusiasm, then let them gather dust. Monthly sessions create expectation and momentum. You know you’ll be back in conversation with others who are wrestling with the same questions. That anticipation changes how you engage with the cards throughout the month.

More importantly, the webinars allow for the evolution of questions. AI develops rapidly. What seems urgent this month might be superseded by new capabilities or new concerns next month. The community helps identify emerging issues and adjust focus accordingly. The cards are version one. The collective thinking shapes what comes next.

The sessions aren’t structured as presentations where I hold forth about AI ethics. They’re facilitated discussions. Small group breakouts around specific card themes. Case study sharing where people describe what happened when they used a particular question in a staff meeting or leadership retreat. Resource sharing where someone recommends a useful article or tool they’ve discovered.

The video and text response features serve a similar purpose. When you submit your thinking through the QR codes, you’re not just documenting your process. You’re contributing to a public repository of perspectives. Not consensus, because consensus would be false and boring. But considered disagreement. Multiple ways of approaching the same question.

This matters because the dominant narratives about AI in education come from technology companies and consultancies with vested interests. We need counter-narratives built by practitioners thinking carefully about what serves students and learning. The webinars and response features create space for those narratives to emerge.

Access is straightforward. When you purchase a set of cards, you get permanent access to monthly webinars. Not a six-week course with an end date. Ongoing participation for as long as you find it valuable. Recordings available for those who can’t attend live. A commitment to keeping the conversation open rather than extracting value and disappearing.

The community dimension also serves another function. It reminds us that none of us are the expert here. Everyone is figuring this out as we go. The head teacher who’s implemented AI tools across their school doesn’t have it all sorted. They’ve just encountered different problems than someone who hasn’t started yet. Humility is essential when the territory is this new and the stakes are this high.

Why I’m Uncomfortable About This

I do need to be honest about something though. The whole enterprise makes me deeply uncomfortable. I’m selling a product about critical thinking. There’s an inherent irony in commodifying questions designed to challenge commodification.

The cards themselves could become just another thing to buy and not use. Another item in the category of professional development theatre. Look, we’ve got the AI ethics cards in our staff room! We’re taking this seriously! Meanwhile, nothing actually changes about how AI gets implemented or what questions get asked before decisions are made.

AI Generated Image. Midjourney Prompt: thinking traps

I’m trying to avoid several traps and I’m not sure I’ll succeed.

The expert trap is the most obvious. I don’t have all the answers to these questions. Nobody does. That’s the entire point. The cards are provocations, not prescriptions. But the very act of creating them positions me as someone who knows better. Someone with special insight into the right questions to ask.

The truth is more mundane. These questions emerged from conversations with hundreds of educators in workshops and consultancy sessions. From observing my daughters navigate school in an AI age. From my own stumbling attempts to think clearly about tools I use daily but don’t fully understand. From synthesising research I didn’t conduct. The questions aren’t mine in any meaningful sense. They’re collective concerns given a particular form.

The consumption trap is equally worrying. These cards only work if you actually use them uncomfortably. Reading a question is not the same as grappling with a question. Nodding along is not the same as letting a question disrupt your assumptions. The value lives in the squirming, not in the recognising.

I can’t control how people use the cards. Someone could buy them, take a quick photo for social media to signal their critical engagement with AI, then never touch them again. The cards become a badge rather than a tool. A symbol of virtue rather than a generator of discomfort.

The righteousness trap lurks beneath the others. There’s a temptation to feel morally superior for asking critical questions about AI. To position yourself as more thoughtful than those teachers who just want to speed up their marking. To imagine that interrogating AI implementation makes you one of the good ones.

But everyone is trying to do right by their students and their colleagues. The teacher using AI to write feedback comments might be drowning in workload and seeing AI as a lifeline. The leader implementing algorithmic systems might genuinely believe they’re using data to promote equity. Good intentions are everywhere. Critical questions aren’t about judging people. They’re about examining systems and revealing assumptions that might otherwise remain invisible.

So why am I doing this despite the discomfort? Because silence feels worse. Because “wait and see” is a luxury that students don’t have. Because someone needs to create resources that prioritise questioning over optimising. Because if not me and you and all of us, then who?

Which brings me to the principle that sits underneath everything I do. I am not The Ideas Guy. We are all The Ideas Guy. It’s the tagline I tried to convey in my book, and it’s never been more true than right now. The insights won’t come from me. They’ll emerge from the community using these cards. From educators brave enough to ask uncomfortable questions in uncomfortable meetings. From leaders willing to pause before implementing the next AI solution. From students who learn to interrogate algorithmic outputs rather than accepting them at face value.

Collective intelligence beats individual expertise every single time. Especially when we’re navigating territory this uncertain.

The Invitation

Here’s what I’m actually asking you to do.

  1. Get the cards. They’re available now at theideasguy.io with everything included. Physical cards that you can hold and spread across a table. QR codes linking to the full ecosystem of resources on every question. Access to monthly webinars where we think through these issues together. Response features where your perspective becomes part of the collective resource.
  2. Use them uncomfortably. In the next staff meeting, pull out one card and actually discuss it. Not in a perfunctory way where someone reads it aloud and everyone nods and you move on. Really discuss it. Apply it to a specific decision your school is facing. Let the awkwardness be productive. Let people disagree. Let the question reveal assumptions you didn’t know were operating.
  3. Join the webinars. Bring your specific contexts and questions. Learn from others’ experimentation. Share what you’re discovering. These aren’t one-way presentations. They’re collaborative sense-making sessions. Your contribution matters as much as anyone else’s.
  4. Share what you find. Use the video response option through the QR codes. Tag me on social media when you use the cards in ways that surprise you. Tell me when a question falls flat or reveals something you weren’t expecting. Build the collective resource by making your thinking visible to others.

Success here doesn’t look like everyone agreeing on the right answers. Success looks like more people asking better questions before implementing AI. Not perfect policies but thoughtful hesitation before rushing to automation. Not AI rejection but AI integration with eyes wide open.

The cards are part of something larger. Other resources are coming. A growing community of practice around critical AI adoption. Educators leading this conversation rather than just reacting to narratives shaped by technology companies and venture capital interests.

The future of education in an AI age won’t be determined by the technology. It’ll be determined by the questions we ask about the technology. And by our courage to sit with uncomfortable answers. Let’s ask better questions together.

AI Generated Image. Midjourney Prompt: asking questions

Seven Principles for Critical AI Thinking

Question before optimising. Understand the problem before automating the solution. The first question should never be “How can AI help with this?” It should be “What am I actually trying to achieve and why?”

Value judgement over efficiency. Some things shouldn’t be faster. Some processes require time, struggle, and human attention precisely because those elements create value. Speed is not always virtue.

Preserve productive struggle. Learning requires difficulty. When AI removes all friction, it often removes the cognitive work that builds understanding. We need to distinguish between pointless struggle and necessary challenge.

Make thinking visible. Show students and colleagues the reasoning that AI hides. Model the judgement calls, the evaluative criteria, the contextual considerations that algorithms cannot capture. Your thinking is the curriculum.

Distribute benefits equitably. Ask who gains and who loses from each AI implementation. Whose workload actually gets reduced? Whose skills get devalued? Who becomes dependent on tools they don’t control?

Maintain human accountability. Don’t let “the algorithm decided” replace “I decided”. When AI systems inform high-stakes decisions about students, someone must remain responsible for outcomes. That someone needs to be human.

Build collective intelligence. These questions are too big for individual answers. We need communities of practice where people share what they’re learning, challenge each other’s assumptions, and develop wisdom together.

The future we’re building right now won’t emerge from the technology alone. It’ll emerge from how we choose to think about and with that technology. From the questions we ask and the conversations we’re willing to have.

I’m not The Ideas Guy. We are all The Ideas Guy. Let’s prove it.
