enable great insights

Your home for insight, perspective, and decision-making patterns to help you navigate complexity with confidence. Practical guidance drawn from real-world experience.

No hype. No silver bullets. Just clarity when you need it most.

The AI Illusion

Why 'Doing More' Isn't the Same as Getting Smarter

Why clarity of intent matters more than the number of AI initiatives.

This is the fifth edition of Enable Great Conversations - a series unpacking the leadership challenges behind technology decisions, exploring how clarity and confidence can be built through open conversation and experience.


The Race to Do Something

AI is everywhere. Discussed in boardrooms, product roadmaps, vendor pitches, and news headlines... The pressure to act is real, with a consistent underlying message: if you're not deploying AI, you're already losing ground.

So organisations are moving: pilots, demos, and experimentation. There's visible momentum, tangible activity, and the reassuring sense in most workplaces that something is happening.

But there's a question I routinely find myself asking: is any of it delivering outcomes?

The tools are powerful - the possibilities are real - but the outcomes (based on the feedback I hear) are often underwhelming. This isn't because AI doesn't work or isn't worth pursuing, but because many organisations have mistaken activity for advantage. They're racing to "do something" with AI, driven by pressure rather than need, and the illusion of progress is masking a deeper problem.

The Illusion: Progress Without Purpose

The challenge with AI adoption isn't a lack of effort, it's a lack of direction.

Walk into most organisations today and you'll find evidence of AI: teams experimenting with generative tools, departments running pilots, routine deployments of AI assistants like ChatGPT and Copilot. There's real energy, genuine curiosity, and the perception of momentum. The problem is that much of that activity doesn't surface in anything coherent or measurable.

Consider a typical scenario: an organisation launches multiple AI pilots across different departments. Marketing tests content generation tools, Operations explores predictive analytics, Customer Service experiments with chatbots, and so on. Each initiative makes sense in isolation, generates internal buzz, and collectively they create the impression of an organisation embracing innovation.

These pilots deliver interesting results, but scaling them is difficult. You need alignment of data across systems, robust governance, and consistency on tooling and approach. What looks like momentum can often just be motion – leaving organisations with fragmented (sometimes costly) efforts that don't create capability.

Recent research supports what many leaders are experiencing first-hand. Studies show that while AI experimentation is widespread, actual production deployment remains rare. MIT's research found that about 95% of generative AI implementations fall short, with only 5% reaching production. Meanwhile, US Census data shows that AI adoption among large companies declined in 2025 - dropping from 14% earlier in the year to 12% by late summer.

The corporate psychology behind this is understandable. No one wants to be left behind. Boards are asking about AI strategy. Competitors are making noise about their deployments. The instinct is to act, to experiment, and to demonstrate progress, but AI adoption without strategic intent doesn't create capability, it just creates noise.

The Hidden Cost: AI Debt

This is where the real problem emerges: AI debt.

Like technical debt or (as we’ve recently discussed here) leadership debt, AI debt accumulates quietly. It's the compound effect of unaligned experimentation: fragmented tools that don't integrate, disconnected data that prevents scaling, duplicated efforts across teams, and shadow AI initiatives that create risk without governance...

At first, this debt is invisible; pilots are cheap, and the organisational impact feels minimal. However, the moment you try to scale, integrate, or build something coherent on top of disparate initiatives, the debt becomes real.

MIT's research revealed the scale of this challenge. While 40% of companies purchased official AI subscriptions, workers from over 90% of companies reported using personal AI tools - creating a shadow AI economy that operates outside formal governance. Simultaneously, enterprise-grade AI systems are being quietly rejected: 60% of companies evaluate tools, only 20% reach pilot stage, and just 5% reach production (mostly failing due to immature workflows and a lack of contextual adaptation).

Every misaligned pilot adds to a growing burden of complexity that leaders have to reconcile at some point. The organisations that moved fastest are now discovering they've built a landscape of disconnected initiatives that are expensive to maintain and impossible to scale (early-stage Power Platform anyone?!).

This is a familiar leadership pattern: the temptation to prioritise action over alignment, value speed over thoughtfulness, and mistake visible activity for meaningful progress.

Building Real Capability

So what does progress look like when it's real?

The leaders getting AI right aren't the ones with the most pilots or the longest list of tools… They're the ones who started by asking better questions:

  • "What problem are we solving, and is AI the right solution?"
  • "Is the tooling we're considering ready for the outcome we're trying to drive?"
  • "What foundational changes to our data, governance, and skills will allow this to work effectively?"
  • "Where should AI enhance decision-making, and where should we retain the human aspect of what we do?"

These questions sound basic, but they force clarity before commitment, and ensure that effort aligns with intent.

BCG's research demonstrates this pattern clearly. Companies successfully extracting value from AI pursue on average only about half as many opportunities as less advanced peers. They focus on the most promising initiatives and expect more than twice the ROI. They're doing less, but achieving more, because every move is deliberate.

AI maturity isn't about scale or volume, it's about intent. Organisations building durable capability share common characteristics:

  • They've defined their data foundations first
  • They've established clear ownership and accountability
  • They've built ethical guardrails that create trust rather than fear
  • They've treated AI as a capability to develop, not a product to purchase or sell

This isn't about being conservative or slow. It's about being strategic: there's a difference between deploying AI everywhere and deploying AI where it matters.

The Leadership Challenge

This creates a difficult position for technology leaders.

Boards want measurable progress and visible momentum. Teams want the freedom to explore and experiment. Vendors are pitching solutions that promise immediate value. Leaders sit in the middle of all this - balancing innovation with integrity, and speed with sustainability.

Many succumb to the illusion because saying "no" or "not yet" feels risky. It's easier to approve another pilot than to pause and ask whether it aligns with strategy, and more comfortable to show activity rather than admit that you're still building foundations. The pressure to demonstrate AI adoption is significant, and the path of least resistance is to let teams experiment broadly.

True leadership means applying discernment. It means slowing things down to align them properly, and recognising that the most effective technology leaders aren't those moving fastest; they're those ensuring that every move still points in the right direction. You need to challenge assumptions, identify gaps in foundations, and provide the perspective needed to distinguish genuine progress from well-packaged noise.

The Real Opportunity

The opportunity with AI is significant, but the true value (in my opinion) lies not in automation for its own sake, but in integration. Using AI to enhance decision quality, to surface insights faster, to free teams from repetitive work so they can focus on judgment and creativity… When applied thoughtfully, AI is a genuine force multiplier.

The organisations that will win with AI won't be those who deployed the most tools or ran the most pilots. They'll be the ones who thought the best. Who built foundations before scaling. Who treated AI as a capability to develop rather than a checkbox to tick and recognised that advantage comes from coherence, not volume.

This is the pattern that runs through all effective technology leadership: clarity beats speed, confidence comes from alignment, and the right question often unlocks the right answer (if you haven't already, have a read of "What Great Leadership Looks Like"). AI amplifies this truth. Get the foundations right, and AI becomes transformative. Rush without direction, and it risks becoming another source of complexity and frustration.

The illusion isn't that AI doesn't work, it's that activity alone creates value. The mature approach recognises that doing more isn't the same as getting smarter, and the organisations that grasp this distinction are the ones building sustainable advantage while others are still counting pilots…

The question now is simple: which kind of organisation will you be?


Enable Great Conversations

The best decisions don't happen in isolation. They happen in conversation - with trusted peers, experienced advisors, and teams who know what it’s really like.

That's what Enable Great Conversations is about: a series exploring the real moments – the ambiguous ones, the uncomfortable ones, the ones that don’t fit neatly in a playbook - where leadership is tested, and clarity is found. Each release aims to capture a single insight, decision, or challenge that helps move organisations from noise to clarity.

There are many more of these moments worth unpacking, and we'll continue to explore them in the weeks and months ahead. We hope you'll follow along - join the conversation in the comments below, or via the Enable Great page.