The gap between AI ambition and AI reality is growing. Boards want ROI within twelve months. Tech leaders know that’s not how transformational technology works. And caught in the middle? Everyone trying to make AI actually deliver value.
At our AI Reality Check roundtable, we gathered senior leaders from financial services, media, pharmaceuticals, law and consulting to tackle the hard questions about what’s working, what’s failing and why.
The 70% problem
Let’s start with the uncomfortable truth: 70% of AI projects fail in their first year. That’s not a technology problem. It’s an expectation problem.
Ollie Whiting, CEO of La Fosse, put it in historical context. The desktop PC took over a decade to achieve meaningful productivity gains. The web followed a similar pattern. We’re four years into the AI revolution and somehow expecting instant transformation.
“The impatience of boards, investors and shareholders to get ROI over the line in such a short space of time is one of the key reasons for failure. We’re letting history repeat itself and wandering a bit blindly into this.”
Who actually owns AI governance?
Ask ten organisations who’s responsible for AI governance and you’ll get ten different answers. Legal thinks they own it. Security thinks they own it. The CTO wants centralised control. Individual business units are just getting on with it.
The roundtable revealed a common pattern: ambitious governance forums that aim to track every AI initiative, while reality falls short. As one participant from Pacific Life Re put it, the intention is good but the execution is fragmented. Different territories have different regulatory understandings, and when something goes through legal and compliance first, the immediate answer is often no.
The Guardian’s Chief AI Officer shared their approach: principles first, product second, monitoring third. They’ve published AI principles, built governance into their product development and continuously take the temperature of both staff and readers. Media organisations face particular scrutiny, with readers anxious to know whether AI is involved in journalism.
The AI veneer is cracking
Remember when every company rushed to build mobile apps in the early 2010s? Those apps were essentially mobile websites, and they quickly revealed all the cracks in back-end infrastructure. Five years of data infrastructure spending followed.
We’re about to see the same pattern with AI. Organisations are throwing agents onto badly designed processes and wondering why they don’t deliver value. The shiny AI tool you bought last quarter? It’s probably falling over because the whole end-to-end process hasn’t been designed.
Anu Doll, Founder of Synexra, provided the strategic anchor for the session, arguing that AI’s true value lies in weaponising a firm’s competitive moats through an Agentic Operating Model. Her framework identifies the high-leverage capabilities where intelligence creates genuine market distinction rather than mere efficiency. By bridging the “Autonomy Gap” (the distance between strategic ambition and foundational readiness), Synexra ensures that infrastructure, data and governance are hardened to support “Autonomous Flow,” transitioning teams from task executors to Intelligence Orchestrators of unique, high-growth value chains.
The democratisation imperative
Here’s a stark reality from La Fosse: 40-50% of a recruiter’s working week is spent on tasks that could be automated. Freed from those tasks, recruiters could double their productivity while doing work they actually enjoy. But they can’t, because AI hasn’t been democratised.
Too much energy is being spent on centralised governance and not enough on getting AI into end users’ hands with the right guardrails. The desktop PC only delivered productivity gains when everyone had one on their desk. The web only transformed business when it was democratised. AI will follow the same pattern.
One participant, working with Soho House, described the advantage of smaller organisations: no labyrinthine governance structures, no siloed AI officers blocking everything. Instead, they’re showing business people how tools like Claude work, planting seeds and watching ideas develop. That’s where the real ROI comes from.
The leadership learning gap
Research shared at the roundtable revealed a troubling lack of trust in board-level AI decision-making among tech professionals. Part of this is communication. Does your front-line team know about the AI training the exec team did over Christmas? Probably not.
But it’s also about humility. As one CEO put it, leaders need to admit they might not have the answers they had for the last decade. The CTO who doesn’t understand business processes is destined to fail. The Chief AI Officer who only knows AI and not the heritage of technology is equally doomed.
The consensus: AI literacy must be mandatory from top to bottom. Cross-functional leadership isn’t optional. Gone are the days of siloed executives who only understand their own domain.
Don’t forget the humans
When ROI is measured in headcount saved and roles reduced, employees get scared. Redundancy announcements and layoffs erode psychological safety, regardless of the productivity gains promised.
But reframe the conversation around personal productivity, around how many hours a week each person can save, and something shifts. People feel empowered. They want to perform better. They engage with the tools rather than fearing them.
This isn’t soft thinking. It’s fundamental to successful AI adoption. The organisations that crack this balance between transformation and cultural safety will be the ones that succeed.
What’s next?
This roundtable was the start of an ongoing conversation. We’re committed to bringing together leaders who are navigating AI implementation in the real world.
If you want to be part of the next discussion, or if you’re wrestling with AI challenges in your organisation, get in touch. Sometimes the best insights come from people facing the same problems.