Education
20 min read

Why Most Schools Get Stuck at the Foundation

February 20, 2026

A school leadership team gathers around a conference table, laptop open to their shiny new AI policy document. They've ticked all the boxes - drafted acceptable use guidelines, run a twilight CPD session on Copilot (because "they are Microsoft" and that's the right tool), even trialled the tool with Year 10 English. Six months later, nothing has fundamentally changed. Teachers still aren't using AI tools with any confidence. Students remain confused about what's actually allowed. The "transformation" exists only in PowerPoint slides and policy documents collecting digital dust in shared drives.

This isn't just failure. It's the predictable outcome of treating AI integration like a procurement exercise rather than organisational change. And it's happening in schools across the world right now.

The problem is architectural. Schools are attempting to build AI integration without understanding that it requires three distinct but interdependent layers - Foundation, People, and Practice. Miss any one, and the whole structure collapses. Build them in the wrong order, and you've wasted time and goodwill you can't afford to lose.

Most schools never make it past the first layer. They get stuck in policy-writing purgatory, endlessly refining documents whilst teachers wonder when they'll actually be allowed to try anything. Others skip straight to the shiny practice layer, buying subscriptions and mandating use before anyone understands why or how. Both approaches fail, just in different ways.

This piece examines why AI implementation efforts stall, what's required to reach meaningful practice, and how to build this architecture properly the first time. Because if we're going to integrate AI into education - and we are - we might as well do it right.

AI Generated Image. Midjourney Prompt: policy writing purgatory

The Architectural Problem

Walk into any school in November 2025 and ask about its AI strategy. You'll hear about comprehensive policy documents that nobody's read, new tool subscriptions that nobody uses with confidence, or honest admissions that they're "waiting for clearer guidance from the DfE". All three are making the same mistake: treating AI integration as a project rather than organisational transformation.

Projects have endpoints. You draft a policy, tick the box, move on. Transformation is ongoing, iterative, and requires fundamental shifts in how people think and work. AI integration sits firmly in the transformation category, yet most schools are approaching it with project management mindsets.

The parallels with broader digital transformation are striking. Successful companies maintained what made them valuable whilst using technology to amplify existing strengths. Adobe didn't abandon creative software when moving to the cloud; it enhanced it. IKEA didn't try to become Amazon; it made IKEA easier to access. Schools attempting AI integration need the same principle: becoming a better version of yourself using new tools, not becoming something you're not.

But schools default to policy-first approaches for a reason worth naming: risk aversion bordering on paralysis. Ofsted inspections, league tables, parental complaints, media scrutiny. In this context, the safest move feels like having comprehensive documentation that proves you've thought about safeguarding, data protection, and academic integrity, even if that documentation doesn't actually improve practice.

AI Generated Image. Midjourney Prompt: ducks in a row

This defensive posture leads to what people call the Cart-Horse Fallacy: buying tools before understanding what problems you're solving. Schools purchase AI subscriptions after one conference presentation, impressed by demonstrations that bear little relation to their actual teaching contexts. Then they wonder why nobody uses them effectively. It's deciding you need a Ferrari before working out where you need to go. (We discussed this parallel on a recent Edufuturists podcast with Alex More.)

The consequence isn't just wasted money; it's wasted trust. Every failed initiative, every mandated change that goes nowhere, every policy that exists only on paper - these erode staff confidence in leadership's ability to navigate change effectively. With teacher retention and recruitment already fragile, burning through goodwill on poorly planned AI implementation is strategic malpractice.

So what does getting it right actually look like? It starts with understanding that you're building three interdependent layers, and whilst you can work on them simultaneously, each requires different approaches, timescales, and success measures.

The Foundation Layer: The Unglamorous Groundwork

The WiFi revelation usually happens about three months in. Someone tries to use an AI tool with thirty Year 9s simultaneously, and the network collapses. Turns out your infrastructure was designed for occasional internet searches, not continuous cloud-based interaction.

This is where foundation work actually begins, not with inspiring vision statements, but with honest assessment of whether your building can support what you're proposing.

Policies That Enable Rather Than Restrict

Most school AI policies read like legal disclaimers - extensive lists of prohibited uses. "Students must not use AI to complete homework." "Staff must not input student data." These tell people what they can't do without helping them understand what they should do.

The alternative is developing enabling frameworks. We should consider the difference:

Restrictive: "Students may not use AI tools to complete any assessed work unless explicitly permitted."

Enabling: "When considering AI use in assessment, ask: What skills is this designed to evaluate? Would AI prevent or support demonstration of those skills? Can we use Furze et al’s AI Assessment Scale?”

The second approach trusts teachers to think contextually. It acknowledges that AI might be appropriate in some contexts (researching sources) whilst undermining assessment purpose in others (demonstrating written English). Most importantly, it teaches decision-making rather than compliance.

Developing these frameworks requires wrestling with genuinely hard questions: When is AI scaffolding versus dishonesty? If AI provides better feedback than time-poor teachers, is using it sound or problematic? How do we handle equity when some students have better home AI access?

These questions lack neat answers. That's why you need frameworks - guidelines for navigating complexity rather than pretending it doesn't exist.

Safeguarding and Infrastructure Reality

Students are already using AI tools. Let me say that louder for those in the back. The question isn't whether they'll encounter ChatGPT; it's whether schools will teach them to use these tools wisely.

Data protection concerns are legitimate. GDPR and UK data protection law create genuine constraints. But the solution isn't refusing to engage; it's teaching what information never goes into AI systems whilst demonstrating safe use with anonymised content.

Technical infrastructure matters too, brutally so. Many schools lack the bandwidth for thirty students using AI simultaneously. Filtering systems block tools or make them unusable. Device access varies wildly. Integration with existing systems proves challenging. And you can't ban mobile phones whilst expecting students to develop sophisticated digital literacy. Pick a lane.

AI Generated Image. Midjourney Prompt: pick a lane

Auditing What You Have

Before implementing anything new, audit what's happening. Most leaders discover AI use is more widespread than realised. Teachers experimenting quietly, students using tools for homework, departments developing practices nobody knows about.

The audit should cover: 

  • digital skills baseline (genuine competency, not application claims)
  • current tool use (what's embedded, where are frustrations)
  • hidden champions (quiet practitioners testing carefully)
  • resource gaps (time, training capacity, specialist knowledge).

This audit work isn't admin; it's essential architecture. Yet foundation alone achieves nothing without the people layer.

The People Layer: Where Everything Actually Happens

You can have perfect policy and flawless infrastructure. Your WiFi could be impeccable, your frameworks carefully considered, your safeguarding protocols watertight. Without buy-in from the people who actually teach, your AI integration exists only on paper. It’s another initiative that leadership (or more probably, one leader) championed and staff quietly ignored until it faded into institutional memory.

The people layer is where theoretical possibility meets practical reality. It's where good ideas either take root or wither. And it's where most schools make their biggest mistakes, treating cultural change as if it's a training problem that can be solved with a few CPD sessions.

Strategic Champions, Not Just Enthusiasts

Every school has enthusiasts. Those staff members who get excited about new technology and volunteer for everything are some of my favourite folks. These people have their place, but they're not automatically your AI champions. In fact, relying solely on enthusiasts creates predictable problems. They're often seen as "not normal teachers" by their colleagues. Their enthusiasm can come across as evangelism, triggering resistance rather than curiosity. And they may lack credibility in departments where scepticism runs deep.

Effective AI champions are strategically selected across several dimensions:

Subject diversity: You need voices from English, Maths, Science, Humanities, Arts, PE, SEND provision because AI's relevance and application looks completely different in different contexts. The Maths teacher using AI to generate differentiated problem sets faces different challenges than the English teacher using it to support students with essay structure. If your champion network is all humanities teachers, don't expect buy-in from Science and Maths.

Career stage variety: The NQT experimenting with AI brings different insights than the head of department with twenty years' experience. Both perspectives matter. The early-career teacher might see possibilities that veterans dismiss, but the veteran might identify pitfalls that enthusiasm misses.

Sceptic inclusion: This is crucial and counterintuitive. You need thoughtful critics in your champion network, not just believers. The teacher who's genuinely worried about academic integrity isn't blocking progress; they're identifying real problems that need addressing. Include them in the conversation, and you benefit from their concerns. Exclude them, and they become the voice of underground resistance.

Diverse teaching contexts: The teacher working with top set Year 13 faces different challenges than the colleague supporting bottom set Year 9, who faces different challenges again from the one running intervention groups. All these contexts matter.

The champion role isn't "selling AI to sceptics". It’s translating between abstract possibility and concrete practice in specific contexts. These people don't need to have all the answers. They need curiosity, credibility with their colleagues, and the psychological safety to experiment honestly.

Creating that psychological safety is harder than it sounds. British school culture, for all its many strengths, can struggle with explicit discussion of failure. We talk about "learning from mistakes" whilst maintaining systems that punish them. For AI integration to work, staff need genuine permission to try things that might not work, document what went wrong, and share those insights without fearing it'll appear in their performance management review.

AI Generated Image. Midjourney Prompt: learning from mistakes

Confidence-Building vs Training

The typical school response to AI integration is booking a twilight session titled "Introduction to ChatGPT" or "AI in the Classroom". Ninety minutes of presentation, maybe some hands-on practice, perhaps a handout with "50 AI prompts for teachers". Everyone dutifully attends, nods along, and then returns to business as usual because they've been given information without developing confidence. I know because I deliver them!

Training teaches "how to use a tool". Confidence-building develops "how to think about what this tool might do in my specific context with my particular students addressing my unique challenges." The difference is fundamental.

Consider two approaches to supporting teachers with AI:

Training approach: Here's ChatGPT. Here's how you write a prompt. Here are some example uses. Any questions? Great, here's a handout for reference.

Confidence-building approach: What's something you find time-consuming or frustrating in your teaching right now? Let's explore whether AI might help with that specific challenge. Let's try some prompts together. What works? What doesn't? What concerns does this raise for you? What would you need to feel comfortable trying this with your class?

The second approach takes longer. It's harder to scale. It can't be packaged into a neat CPD session. But it actually changes practice because it engages with teachers' real challenges rather than theoretical possibilities.

For me, the most valuable AI skill isn't knowing the "right prompts"; it's developing the judgement to determine when AI use is appropriate, what to do with AI-generated output, and how to critically evaluate whether the tool is actually improving things.

Building this judgement requires what psychologists call "scaffolded autonomy" - enough structure to feel safe, enough freedom to develop genuine capability. This might look like subject-specific working groups where teachers explore AI together, protected time for experimentation without expectation of immediate classroom application, or structured protocols for trying new approaches.

Student Voice: They're Already Using It

The conversation about AI in schools often proceeds as if students are passive recipients of whatever adults decide. This is peculiar given that many students are already using AI tools regularly - for homework, for research, for writing support, for creative projects, in Snapchat (yes, it’s there as their #1 friend on the list if you didn’t know). The question isn't whether students will use AI. It's whether schools will be part of the conversation about how to use it well.

Student perspectives matter for several reasons. First, they're often more digitally fluent than staff, not because of generational magic but because they've grown up adapting to new platforms constantly. They understand intuitively how these tools work and where they fall short in ways that take adults longer to discover.

Second, students can articulate how AI is actually being used in their peer groups, which is often quite different from how adults imagine it's being used. The teacher worried about students using ChatGPT to write entire essays might discover that most students use it to get unstuck on introductions or to generate ideas they then develop themselves. Understanding actual use patterns matters for developing appropriate responses.

Third, involving students creates opportunities for co-creation. Some schools have established student AI advisory groups who help shape policy, test new approaches, and provide feedback on what's working. This isn't tokenistic consultation; it's recognising that students have genuine expertise in how these tools function in their lived experience.

The "cheating" narrative needs addressing head-on before we finish here. Yes, some students will use AI inappropriately to avoid learning. This has always been true with every tool ever invented. Some students copy from textbooks, from Google, from each other. AI doesn't create academic dishonesty; it makes existing dishonesty easier to scale. The solution isn't banning the tool but redesigning assessment to make mere output reproduction less valuable than genuine understanding. As my friend Darren Coxon says, "If a student can use ChatGPT to do their homework, why are they doing that homework?"

Parent Engagement and Managing Resistance

The parent conversation about AI triggers predictable anxieties: privacy concerns, over-reliance fears, beliefs that AI ends "real learning", confusion about what AI actually is, the dreaded screen-time debacle. The temptation is reassurance - "Don't worry, we've got this" - but reassurance without information creates false comfort.

What parents need is translation: helping them understand what AI integration does and doesn't mean. This requires acknowledging genuine concerns whilst providing context. When students can articulate what they're learning about AI, how they're using it, and what critical thinking they're developing, parents get clearer pictures than official communications provide.

Meanwhile, in any change process, some staff will resist. The instinct is to view resistance as a problem: stubborn colleagues who "don't get it". This is almost always wrong. Resistance is feedback. People resisting often see genuine problems that enthusiasm missed.

The teacher worried about AI exacerbating inequalities isn't always being difficult. Perhaps they're identifying real concerns. Students with unlimited home internet access and premium subscriptions benefit more than those relying on school tools. What's your plan?

The head of department concerned about AI undermining writing development isn't necessarily blocking progress. Perhaps they're raising pedagogical questions. If students use AI to generate drafts they edit, are they developing writing skills or editing skills? Both have value, but they're different.

Creating space for legitimate critique without derailing progress requires distinguishing "this is open for discussion" from "this is happening, let's discuss how". We are integrating AI. That’s decided. How we do it remains genuinely open to influence.

The people layer is ongoing, not a phase you complete. It requires constant tending, communication, and adjustment based on feedback. Staff confidence built over months can be shattered by a single poorly-handled incident. Trust accumulates slowly and erodes quickly.

The Practice Layer: Disciplined Experimentation

This is where most schools either never arrive, or arrive too early and wonder why everything falls apart. They've skipped the foundation work, ignored the people layer, and jumped straight to "let's all use ChatGPT for lesson planning" without understanding that practice without architecture is just chaos.

The schools that reach this layer successfully do so because they've built the foundations that make practice possible. Their infrastructure works. Their frameworks enable rather than restrict. Their staff feel supported rather than coerced. And they understand that practice doesn't mean universal adoption. It means disciplined experimentation that generates learning applicable to their specific context.

AI Generated Image. Midjourney Prompt: building foundations

Micro-Pilots: Starting Small on Purpose

Whole-school rollout of AI tools fails predictably and for obvious reasons. You mandate use before understanding what works, impose practice before developing capability, and create resentment by forcing engagement before building confidence. It's the educational equivalent of requiring everyone to run a marathon without training because marathons are healthy.

The alternative is micro-pilots, which are small-scale experiments with volunteer participants, clear success criteria, time-bound trials, and structured reflection. This isn't just a compromise to make change more palatable. It's the most effective way to learn what actually works in your specific context with your particular students and your unique constraints.

A micro-pilot might involve five teachers across different subjects using AI tools for differentiation in one unit of work over half a term. Or three departments exploring AI-assisted assessment feedback for four weeks. Or a single year group using AI for research skills in a defined project. The scale matters less than the structure. These are genuine experiments designed to generate learning, not just demonstrations to prove a predetermined conclusion.

The design principles for effective pilots are remarkably consistent:

Volunteer participation: People who want to try something learn more from it than people forced to try it. Mandated pilots create compliance at best, resentment at worst. Neither generates useful insight.

Clear learning questions: What specifically are you testing? Not "does AI work" but "can AI-generated practice questions reduce teacher workload whilst maintaining quality" or "does AI support for essay planning improve student writing confidence". Specific questions generate specific answers.

Defined boundaries: Time limits (we're trying this for six weeks), scope limits (only with Year 9, only in this topic), resource limits (using free tools only). Boundaries create psychological safety and remind everyone that this is an experiment, not a permanent change.

Multiple measures: Don't just track whether the tool was used. Measure teacher workload impact, student engagement, learning outcomes, unexpected discoveries, and things that didn't work. The failures teach as much as the successes.

Structured reflection: Build in time for participants to document their experience, discuss with each other, and identify patterns. Without reflection, pilots generate data but not learning.

The beauty of micro-pilots is that they make it safe to fail. When a whole-school initiative collapses, it's embarrassing and expensive. When a micro-pilot doesn't work, it's called "learning" and gets documented for future reference. This changes the emotional stakes of experimentation entirely.

Evaluation That Actually Teaches

Most evaluation in schools serves accountability purposes, proving to someone (governors, Ofsted, parents) that you did what you said you'd do. This has its place. But the evaluation that drives improvement serves a different function: it teaches you what to do next.

Evaluating AI pilots requires moving beyond engagement metrics - "70% of participating teachers used the tool at least once" - to questions that actually matter:

Did it improve learning? Not just "did students like it" but "did they learn more, better, or differently than they would have otherwise". This is hard to isolate because so many factors influence learning, but careful comparison between pilot and non-pilot groups can reveal patterns.

Was it sustainable for teachers? A tool that produces brilliant results but requires three extra hours of prep per lesson isn't sustainable. You need honest assessment of workload impact, which means creating conditions where teachers can admit "this takes longer than traditional approaches" without fearing it reflects badly on their capability.

What unintended consequences emerged? Every intervention creates ripple effects beyond its intended impact. Perhaps AI-generated resources reduced teacher planning time but also reduced their intimate understanding of content. Perhaps student reliance on AI support developed confidence with challenging texts but reduced willingness to struggle independently. These discoveries matter enormously.

What surprised you? The most valuable insights often come from unexpected discoveries. The AI tool you thought would help with differentiation turns out to be brilliant for generating assessment exemplars. The approach that seemed promising for bottom sets actually works better with top sets. If your evaluation doesn't capture surprises, it's not asking the right questions.

What failed and why? This question separates genuine evaluation from performance theatre. If every pilot "succeeds", you're not experimenting, you're demonstrating. Useful evaluation documents dead ends as thoroughly as breakthroughs because understanding why something doesn't work prevents future waste of time and effort.

AI Generated Image. Midjourney Prompt: pilot project

Iteration and Implementation

The power of micro-pilots lies not in the pilots themselves but in what you do with the learning they generate. This is where iteration becomes essential: the disciplined process of using insights from experiments to inform next steps, adjusting approaches based on evidence, and building institutional memory of what works in your context.

When does a successful pilot become wider practice? Not just "it worked" but "it worked sufficiently well, for enough of the success criteria, with sustainable effort, that broader adoption makes sense". When do you pivot the pilot (easy for me to say!), adjusting the approach based on insights but not abandoning the underlying idea? And when do you cut losses and move on entirely?

Implementation requires phasing that reflects reality rather than aspiration. Not everyone is ready at the same time. Not every context suits immediate adoption. Phasing should consider:

Readiness over hierarchy: Just because a department has certain status doesn't mean they're ready for AI integration. The Art department experimenting enthusiastically might implement successfully before the Maths department struggling with staffing issues.

Subject-specific considerations: What works in English doesn't automatically transfer to Science. AI tools that excel at generating creative writing prompts might be useless for practical science demonstrations.

Age-appropriate adaptation: Using AI with Year 2 requires completely different approaches than with Year 8. Younger students need more scaffolding, closer supervision, and explicit teaching about what AI is and isn't.

Workload sequencing: Implementing AI tools during exam season, inspection preparation, or reporting deadlines is asking for resentment and failure.

Better to have three departments implementing well than ten struggling badly. Small, incremental improvements across the organisation compound over time to create significant change.

How the Layers Connect

The three layers aren't sequential phases. I think of them as interdependent systems operating simultaneously. Foundation without people generates dust-gathering policies. People without foundation creates chaos. Practice without either leads to reckless experimentation or surface compliance. Get all three working together, and sustainable transformation emerges.

Common Failure Patterns

Foundation-stuck schools have comprehensive policies and extensive documentation. Eighteen months in, they're still refining policies whilst teachers wonder if they'll ever get to experiment. Perfection becomes an excuse for avoiding the messiness of practice.

People-skipping schools purchase subscriptions and mandate adoption. Teachers receive logins and deadlines without confidence or capability. Surface compliance follows - tools technically used but without genuine engagement.

Practice-rushing schools move to implementation before considering foundation or people work. They pilot without learning questions, implement without support, scale before understanding what works. Chaos results.

What Success Looks Like

Foundation success: Staff make informed decisions using frameworks when confronted with novel situations. Infrastructure supports rather than blocks AI use.

People success: Confident experimentation and honest feedback. Staff articulate when AI helps their challenges and when it won't. Concerns raised constructively.

Practice success: Improved outcomes with sustainable workload. Documented learning about what works. Thoughtful integration where appropriate, not universal adoption.

Different Paths, Different Outcomes

One MAT I worked with spent autumn 2024 on foundation - audit, WiFi upgrades, policy framework through consultation, staff confidence survey. January 2025 began people work - champions including sceptics, subject-specific exploration. Easter 2025 started micro-pilots: five volunteers, one half-term, careful evaluation. Science and History scaled based on results; English modified approach; Maths abandoned that application; RE tried different use case. Eighteen months in: meaningful integration across departments, students developing critical literacy, infrastructure supporting use.

A primary I supported took a different approach. The headteacher attended a summer conference, purchased an AI platform and announced an initiative in September 2024. Forty-five minutes of training from the company rep. The WiFi couldn't cope. The platform didn't integrate with the MIS. Content didn't align with the curriculum. Usage dropped to almost nothing by Christmas. Budget wasted, trust damaged.

The difference wasn't resources - the primary actually spent more. The difference was respecting process. The MAT built foundations before practice, engaged people before mandating, ran genuine experiments. Longer, but sustainable.

AI Generated Image. Midjourney Prompt: the muddled middle

Starting Where You Are

Most schools aren't starting from zero. They're in the muddled middle - some foundation work done, some people engagement attempted, some practice experiments running, nothing quite cohering.

If you're foundation-stuck: Set a deadline for moving to people and practice work. Foundation never feels "complete". Start with your hidden champions: the staff quietly experimenting rather than waiting for permission.

If you skipped foundation: Go back without destroying momentum. Conduct the audit you should've done initially. Use practice experience to inform foundation development.

If you're people-poor: Slow practice rollout to invest in capability-building. Identify why people aren't engaged. Is it fear, confusion, exhaustion, principled objection, lack of resource or something else? Create lower-stakes voluntary exploration.

If you're somewhere in the middle: Map where you are in each layer. Identify the biggest gaps limiting progress. Prioritise addressing the gaps that most constrain everything else.

Whatever your starting point: infrastructure must function, frameworks must enable decisions, experimentation needs support, students must be involved, failure must be safe, evaluation must be honest, adjustment must be possible. Read that again if you need to!

Key Takeaways

  1. Build in order, work in parallel. Foundation first, people ongoing, practice emerges. You can work on all three simultaneously, but trying to build practice without foundation or people layers is asking for failure.

  2. Foundation isn't admin, it's architecture. Policy, ethics, infrastructure, and baseline assessment aren't bureaucratic obstacles. They’re actually the structures that enable everything else. But documentation isn't transformation.

  3. People work never ends. Training is a moment. Confidence-building is ongoing. Create networks of champions, provide psychological safety for experimentation, and recognise that resistance often contains valuable feedback.

  4. Start smaller than you’re comfortable with. Five volunteer teachers beats fifty mandated ones. Micro-pilots with genuine evaluation beat grand rollouts with crossed fingers. Learn through doing, scale based on evidence.

  5. Measure what matters. Not "how many used AI" but "did it improve learning and was it sustainable for teachers?" Track unexpected discoveries as carefully as planned outcomes. Document failures as thoroughly as successes.

  6. Your context is unique. Learn from others but build for your school. Borrowed frameworks need translation, not just duplication. What works elsewhere might not work here, and vice versa.

  7. AI serves pedagogy, not vice versa. Technology should amplify what makes your school valuable, not replace it. The goal is better education, not more technology.

That leadership team from the opening doesn't have to stay stuck. They can acknowledge their approach needs adjustment, build foundations properly, engage people rather than mandate action, run genuine experiments and learn from them. It's messier than we hope. It takes longer than we want. Some assumptions prove wrong. Some pilots fail. Some staff remain sceptical. But real transformation happens in classrooms, in conversations, in capability being built.

The difference between schools that successfully integrate AI and those that don't isn't time or resources. It's understanding that you're not implementing tools; you're building organisational capacity to learn and adapt in a world where AI exists. That requires architecture, not enthusiasm. Patience, not speed. Honesty about what's working. And respect for the fact that foundation, people, and practice are different challenges requiring different approaches.

Most schools stay stuck at foundation because they think getting policy right is AI integration. Or they rush to practice and wonder why nothing sticks. Schools getting this right understand: transformation isn't a project with an endpoint. It's a capability you build enabling ongoing adaptation.

Three layers. Each essential. Each interdependent. Build them properly, and you create something sustainable. Skip any one, and you're building on sand whilst pretending it's bedrock.

The ground is shifting. AI isn't going away. Students already use these tools. The question isn't whether schools will integrate AI; it's whether they'll do so thoughtfully, building foundations supporting practice, engaging people so they feel ownership, and learning through disciplined experimentation rather than chaotic implementation.

That's harder than buying subscriptions and announcing transformation. It's also the only approach that actually works.
