Last night, we brought together a room of senior tech, data, and transformation leaders at The Conduit in London to do something we’ve needed to do for a while: have an honest conversation about AI. 

Not the hype version. Not the horror stories or the miracle stories. The real one. 

Lucy Kemp, our Chief Marketing Officer, opened the evening with some hard-hitting stats. Earlier this year, we commissioned independent research with 2,000 UK tech workers to find out how AI is actually being used inside organisations. What came back was striking. 92% of tech workers are using AI, and a quarter of them are spending half their working day on it. Yet while 52% of the C-suite believe their AI strategy matches what’s actually happening on the ground, only 16% of their teams agree. 

That gap, between what leaders think is happening and what’s actually happening, was the starting point for the event. 

Our CEO, Ollie Whiting, hosted the panel. Ollie has spent the last two years in conversations with industry leaders and government, largely focused on AI regulation, governance, and the impact of AI on employment. He set the tone early. “Depending on who you’re speaking to, we are on the edge of the most exciting technological revolution ever. Or we’re on the edge of the demise of humanity. Tonight is a reality check that we are hopefully not at either end of those scales.” 

He also offered a useful lens on why La Fosse pays close attention to this. “We’re often considered canaries in the mine when it comes to the economy. When the economy is down, recruitment and staffing is down. When the economy’s up, we’re booming.” That position, right at the intersection of technology and talent, is exactly why the research matters to us, and why we wanted to share it with the leaders in the room. 

The panel, Natalie Cramp (Partner, JMAN Group), Kevin Cassar (Chief Data and AI Officer, TalkTalk), Ravi Bhalla (Director of Enterprise AI, incoming at M&G), and Monica Verma (Founder and CEO, Monica Talks Cyber), did not disappoint. 

Here are the key takeaways from the evening. 

The thing leaders aren’t paying enough attention to

We opened with a direct question: what is happening in AI right now that leaders in this room are not paying enough attention to? 

Kevin led with decision-making. As organisations give powerful AI tools to their people, those tools are shaping the information that reaches the top. “How many of you are actually thinking about how AI is changing the decision-making process?” he asked the room. Fewer than a quarter of hands went up. Kevin’s point was direct: AI tends to agree with the way a question is framed, and if you pose the same question in reverse, it will agree with that too. “The harder the question we pose to people, the less robust the situation is,” he said. That bias is working its way up the chain, and most leadership teams have not confronted it yet. 

Natalie called out the operating model. “I still think, even where people think they’re thinking about people, they’re missing the whole operating model,” she said. “Organisations are not spending enough time, or having the right people in the room. The Chief People Officer is almost never in that room.” She pointed to what is already happening at the entry level: there are a fifth as many entry-level vacancies today as there were six years ago. The question of how you train the senior leaders of the future, when the junior roles that used to be the training ground are disappearing, is one most organisations have not answered. “That is going to bite in two years,” Natalie said. “And it takes a long time to do the work and get it right.” 

What leaders are worrying about that they probably shouldn’t

Monica was direct. “People are panicking about hype that they don’t understand is hype.” Her prime example: the ever-receding arrival of AGI. “Every frontier AI company has been telling us AGI is only six months away. And that’s been happening for more than three years,” she said. “No, AGI is not happening.” 

What she thinks organisations should be doing instead is categorising what AI is actually capable of. “Categorise AI’s capabilities into three things,” she said. “What it is great at versus what it is good at versus what it sucks at.” Pattern recognition: genuinely excellent. Natural language processing: good. And the third category? “What it sucks at is reasoning, context, ethics, repercussions, actual decision-making. Which is what we’re using it for.” That mismatch between AI’s real strengths and how organisations are deploying it is one of the most important things leaders can get clear on. 

What good governance actually looks like

Ollie opened the governance section with a challenge to the room. “Governance is an ancient Greek word for how to steer and navigate a ship. What does it actually look like beyond a policy document, which is what it’s really considered as in today’s world?” 

Monica’s answer was built on five pillars she has developed over six years: data readiness and validation, strategy and leadership, culture and literacy, governance and implementation, and controls and engineering. The most fundamental, and the most overlooked, is data. “The biggest fundamental crux to AI has actually nothing to do with AI, but it’s everything to do with data,” she said. “How many of you actually have data governance in place?” Four hands went up in a room of over 60 people. “You put your data in AI models. They have never been validated. They’ve never been classified. Now suddenly none of your access management controls work.” Samsung banned generative AI tools across the business after engineers leaked proprietary code through ChatGPT. Monica’s view is that the 73% figure is probably an undercount. 

Ravi reframed the governance question entirely. “How many of us think of governance as a hindrance versus governance as an enabler?” he asked. His answer came from direct experience. While working in financial services, he and his team developed an AI tool to identify vulnerable customers through voice detection, spotting changes in tonality, pace, and specific words. Building it meant navigating a genuine conflict between FCA requirements and EU AI Act prohibitions. “If we did not have the conversation, that would have been a big disaster,” he said. “We would have had a regulatory fine rather than a regulatory win.” Working with legal and compliance from the start, rather than bringing them in after the fact, was what made the tool possible. “Use governance as an enabler,” he said. “See where the challenges are. Work with your colleagues, mitigate them.” 

Natalie added the point that has to come before any framework is built. “Before you get into your governance, you need to have the conversation about opportunity and risk,” she said. “You have to have decided as an organisation where you sit on the curve of taking the opportunity and living with some of the risk. The answer is going to be different for every single organisation.” She used UCAS as an example: a body trusted with millions of students’ data that has to take a far more cautious approach than a high-growth private equity asset. “There’s not a right or wrong, but there is a right for you as a leadership team.” 

Treating AI like a new hire

One of the most practical sections of the evening came when Ollie turned the conversation to implementation. “My favourite topic,” he said, before asking the panel what treating AI like a human actually means in practice. 

Natalie’s answer was clear. “People don’t think about AI like they would a person. And I think people are missing a trick as a result.” She drew a direct parallel with onboarding. “We’re bringing it in and expecting it to nail it on day one. With a person that you hired, you would induct them. In an ideal world, they would have an onboarding process and be given a load of information about the business in order to succeed in it. We don’t necessarily give our AI that information.” She argued for putting AI agents on the org chart alongside the human team, giving them names, setting KPIs, and treating underperformance the same way you would with any member of staff. “If they’re not hitting their KPIs, they should be on a performance improvement plan in the same way as you would with your team.” 

Kevin built on this with a point about delegation. In the same way an organisation sets financial delegations of authority, it needs to define where it is prepared to delegate decisions to AI and where it is not. “We’re going to have to have delegations of authority to AI,” he said. “Where are we prepared to delegate? And I’m not seeing that proper operating model work going on in depth in enough organisations.” 

Monica added the accountability dimension. “Just because you have trained an agent and given it KPIs, that accountability doesn’t go away. That is crucial.” She pointed to the SolarWinds case, in which the former CISO was personally charged by the SEC following a breach that had nothing to do with AI. “Think about AI agents,” she said. “An AI agent will never be accountable because AI is not conscious. It will be your employee, but it’s not sentient. A court is not going to hold an AI accountable. Somebody, some human, is going to be accountable.” 

Ollie brought it back to a simple question that most organisations still cannot answer well. “We wouldn’t set up humans for failure. So why would we do it with our agents?” 

Stop chasing AI, start measuring it

The final section was shaped by the audience, with questions focused on what good actually looks like when you are trying to move forward without a clear roadmap. 

Kevin’s answer was practical. Start by measuring what you already have before deploying anything new. “What AI tools have we deployed? What value do they add? How do we measure those? If you can’t do that, I wouldn’t be deploying anything else.” He made a compelling case for making AI boring. “If you had a finance function which has identified a two to three million pound cost saving over three years by reducing its finance close by seven days, that’s quite a tangible thing. How many times do we see organisations set up another AI lab to do what exactly?” The value is in the unglamorous, specific, measurable work. Not the next shiny deployment. 

Natalie’s framing built on that. The mistake most organisations are making is treating AI as the strategy rather than a tool inside it. “Your organisational strategy should not be changing every three months. AI should be one tool in your toolbox as to how you make that happen.” She argued for a three-year organisational strategy with quarterly AI sprints operating inside it. “The AI strategy is still too focused on tools. It’s not yet about how we set up to take the opportunity.” 

What’s next

This was part two of a three-part story. Part one was understanding what is actually going on in organisations. Part two, last night, was exploring what we should do about it. 

Part three lands in October. We are calling it Beyond Headcount: Workforce 2027. We have gathered a group of AI and workforce planning experts to tackle the next set of questions: what should hiring plans look like next year, what are the roles organisations are not yet hiring for, and what does the human-plus-AI organisation actually look like at scale. As Lucy put it on the night: “The current AI narrative of laying off all your people and replacing them with AI just isn’t working. There is a different way through it and we are going to map it for you.” 

If you want to be in the room for that conversation, watch this space. 

In the meantime, you can download the full AI in the Workforce white paper here.