Agentic AI Consulting: Why Discovery Is the Work, Not the Phase Before It

The Interview That Showed Me What I Was Missing
Three interviews. That was the time and cost allocation. The MD, a head of operations, and one frustrated team lead who'd emailed ahead of time to make sure she got on the schedule. Three people, selected thoughtfully, back-to-back across a single afternoon, and from those conversations I was expected to understand how a 140-person business was actually functioning.
I remember driving away thinking something felt off. Nothing specific that had been said. More about what hadn't, and who hadn't been in the room.
It took a few more engagements before I understood the pattern clearly enough to name it. The receptionist who'd been manually re-entering the same data into two separate systems every working day for two years wasn't in my interview schedule. Neither was the team lead who'd quietly built a shadow ChatGPT workflow to handle a chunk of her team's reporting (with confidential data), without any IT sign-off, because it saved her three hours a week and solved a problem nobody else had bothered to address.
The ops staff who'd developed workarounds so ingrained they'd stopped thinking of them as workarounds weren't there either. To them, that was just how the job worked; to their line managers, it was a hidden process and fragmented data in the hands of a few.
The real picture of any organisation doesn't live in the boardroom or executive office. It lives with the people doing the work. Senior voices tell you the version they've constructed, which is often coherent, often optimistic, and almost always incomplete. The answers you need are distributed across the whole building. And the standard discovery process isn't designed to reach them.
Consulting Has a Sampling Problem
Consulting engagements typically sample three to five voices and call it discovery. The MD, a head of ops, maybe one team lead who was vocal enough to get included. Everyone else's reality stays invisible, not because anyone is being careless, but because the economics of human-led discovery make comprehensive reach structurally impossible.
Time is finite. Calendars don't cooperate. Interviewing 50 people properly across a client organisation costs time and money most projects don't have, and creates scheduling disruption most clients can't absorb. So it doesn't happen. The sample stays small, the picture stays partial, and the strategy that follows is built on incomplete foundations. That's not a failing of any individual consultant. It's the structural limit of the model.
This is standard practice across the industry. The same pattern that keeps organisations stuck at the pilot stage also keeps their discovery compressed. It's worth sitting with that.
Why This Matters More for AI Work
For most consulting engagements, a limited sample is a constraint you can manage around. For AI work, it's a different kind of problem.
AI is the biggest cultural shift most organisations will go through in a generation. That's the considered position of serious change frameworks, from Kotter's established change model to the approach we use at 360 Strategy. Both put people as the engine of change. Both are clear that mobilisation doesn't work top-down. And both identify the same two factors most consistently documented to derail adoption: resistance and fear.
You can't mobilise people you haven't heard. You can't identify resistance you haven't found. You can't address fear you didn't know existed.
Top-down AI rollouts breed exactly these conditions. A leadership team that has already bought into the plan, three enthusiastic early adopters from the original workshop, and below them a much larger group of people who were never consulted, never heard, and are now watching a change programme arrive from above with no clear picture of what it means for their role, their workflow, or their job security.
The cultural groundwork that determines whether the whole programme succeeds or fails gets quietly skipped. Every serious piece of change research points to the same conclusion: you cannot skip this step and expect the adoption to hold. Discovery isn't the phase before the work. For AI work in particular, it's the foundation the whole programme stands on. And it has to reach everyone, not just the leadership team who already agree with the plan.
Interviewing 30, 50, or 200 people properly costs time and money clients don't have, and causes scheduling disruption most organisations can't absorb. So it doesn't happen.
What Gets Missed
When discovery is compressed, five categories of insight tend to disappear.
Process inefficiencies: the manual work that's been there so long nobody questions it. Re-entry, duplication, workarounds built around systems that were never properly integrated, still running years later because nobody mapped the problem at the operational level.
Hidden data: knowledge that exists informally, held in people's heads rather than documented anywhere. The kind that walks out of the building when someone leaves, and only gets noticed once it's gone.
Shadow AI: tools that staff have adopted independently, outside IT governance, because they solve a genuine problem the organisation hasn't officially addressed. This is more common than most boards realise, and it carries risks that haven't been formally assessed.
Competency gaps: who can actually use what, at what level, and where the training gaps will slow or stall the programme once it lands.
Pockets of resistance: the people who won't actively oppose the change but will quietly undermine it. Not bad actors, usually. People with legitimate concerns that were never surfaced, so they were never addressed.
None of these are exotic findings. In almost every engagement, they're present. The only variable is whether discovery is structured to reach them.
The Rooms the Consultant Never Entered
Here's the uncomfortable truth the industry doesn't talk about: most AI transformations fail in the rooms the consultant never entered. The post-mortems blame the tech, the change manager, the budget. They almost never blame the discovery, because nobody can prove what wasn't found. But that's where the failure starts. In the conversations that didn't happen. With the people who were never asked. Holding the workarounds, the fears, and the shadow tools the strategy deck didn't account for.
Which is why I've stopped treating discovery as the phase before the work. Discovery is the work. By the time you've genuinely heard every voice in the organisation, the change programme has already started, because being heard is the first act of being mobilised. Everything after that is execution.
Agents Change the Maths
The discovery problem has always been economic rather than methodological. Consultants know they should speak to more people. Clients know they should make room for it. The problem is that doing it properly at scale costs too much in time, money, and operational disruption for most engagements to accommodate.
Agents change what's economically possible. An adaptive interviewing agent that runs in parallel across an organisation, at any hour that suits the respondent, without interviewer bias, without fatigue, and at a fraction of the cost of equivalent human time, restructures the calculation entirely. The interview that was uneconomic across 200 staff becomes trivially affordable. The cultural mapping that was structurally impossible within normal project constraints becomes standard practice.
This isn't a marginal improvement. The economic constraint that has defined how discovery gets done, and therefore how much gets missed, is removed. That changes everything downstream: the insight, the cultural groundwork, the change programme it feeds, and the probability that the AI investment actually sticks.
The shift from tool-based to agent-based consulting changes more than workflow. It changes what's economically possible at every stage of an engagement. Discovery is where that difference shows up first.
Voxant™
Voxant is the agent I've built for exactly this purpose. It conducts turn-by-turn adaptive voice interviews, deployed across a client's organisation, at whatever scale the engagement requires.
It follows the thread. If a respondent mentions a workaround in passing, Voxant asks about it. If someone expresses hesitation about a change, it explores why. The adaptivity isn't a feature, it's the mechanism that makes the insight possible. A fixed survey asks the same questions to everyone and gets the same categories of answer. Voxant listens to what's actually being said and responds to it, the way a skilled human interviewer would, without the time and cost that make skilled human interviewers impractical at scale.
The result is a picture of the organisation that no human-led discovery process, working within normal project constraints, could replicate. Pattern recognition across dozens or hundreds of conversations simultaneously. Signals that only emerge at scale, not from three interviews in a boardroom.
360 Strategy provides AI consulting in Scotland and across the UK. Voxant is one of the agents I deploy to do that work properly.
Ready to hear what your organisation isn't telling you? Book a discovery call with 360 Strategy.
Tools Versus Agents
Most AI consultants sell access to tools. Better prompts, the right platforms, workflow integrations. There's genuine value in that work and I'm not dismissive of it. The harder, more consequential work is building agents that take actions, adapt to what they encounter, and do real jobs autonomously at scale.
Voxant is one. There will be more.
The economics of consulting are about to invert. The consultants winning the next decade aren't the ones with the biggest teams. They're the ones with the best agents. I'm not waiting for that future. Voxant is how I'm already operating in it.
Frequently Asked Questions
What is agentic AI consulting?
Agentic AI consulting moves beyond advising clients on which tools to use. It involves building and deploying autonomous agents that take actions, adapt to what they find, and do real work independently. The distinction matters because most AI consultants are still operating in a tool-access model. Agentic consultants are building the infrastructure that replaces it.
What is discovery in an AI consulting engagement?
Discovery is the phase where a consultant maps the organisation before designing any AI programme. It covers existing processes, data, staff capabilities, cultural readiness, and potential points of resistance. Done properly, it determines whether the programme that follows has any chance of succeeding. Done poorly, or skipped entirely, it's the most common reason AI investments fail to deliver.
Why do most AI transformation programmes fail?
The short answer is that they skip the groundwork. Most failures trace back to discovery that was too narrow, covering three to five senior voices, and a change programme designed without understanding how the wider organisation actually operates. By the time the resistance surfaces, the budget is committed and the momentum is gone.
What is shadow AI, and why does it matter for AI programmes?
Shadow AI refers to tools and workflows staff have adopted independently, outside any IT governance or approval process. It's more common than most organisations realise, and it carries unassessed risks around data security, compliance, and model reliability. It also signals something useful: the problems staff solved on their own are often the problems most worth addressing in a formal AI programme.
What is Voxant?
Voxant is an adaptive voice interviewing agent built for organisational discovery. It conducts turn-by-turn interviews with staff at every level of a client organisation, adapts its questions based on the responses it receives, and surfaces patterns that no human-led process, working within normal project constraints, could replicate at the same scale or cost.
How is Voxant different from a staff survey?
A survey asks fixed questions and returns fixed answers. Voxant conducts genuine interviews and follows the thread. If a respondent mentions something unexpected, Voxant asks about it. The insight that emerges from an adaptive conversation is categorically different from what a structured questionnaire can surface, which is precisely why surveys miss the things that matter most.
Why does AI change management require a different approach to discovery?
AI is the most significant cultural change most organisations will go through in a generation. Every serious change framework, from Kotter to 360 Strategy's own model, puts people as the engine of change. That means you cannot design an effective AI programme without understanding the fears, capabilities, and informal workarounds of the people who will determine whether it succeeds. Top-down rollouts that skip this step breed the two things most reliably documented to kill adoption: resistance and fear.
What does agent-based consulting mean for the economics of an engagement?
Traditional consulting economics are constrained by human time. Discovery at scale, interviewing 50 or 200 people properly, is simply unaffordable in most engagements. Agents remove that constraint. An adaptive interviewing agent runs in parallel across an organisation, without fatigue, without bias, and at a fraction of the cost of equivalent human time. Discovery that was structurally impossible within a normal project budget becomes standard.
Mark Evans MBA is founder of 360 Strategy, a leading growth strategy and AI consultancy based in Scotland.