
The Agentic Shift: Why Most AI Consultants Are Still Thinking in 2019


AI in business is no longer only about automation. It is about agents that perceive, decide, and learn. This piece explains the hierarchy of agents and why consulting must catch up.


By Mark Evans MBA, CMgr FCMI, aka Rogue Entrepreneur


IBM recently reported a USD 3.5 billion productivity gain after applying AI across more than seventy business areas, embedding digital agents deep inside its own operations (CIO, 2025). These systems now run processes that once needed teams of people. They read signals, interpret context, and act with intent. They are no longer passive tools but participants.


And yet most consultants still talk as if they are in 2019. They talk about digital transformation while outsourcing the work to developers they do not fully understand.


They sell the idea of progress but stay stuck in the past. You can see it in their language: automation, efficiency, scale, all the old comfort words of yesterday.


What they miss is that AI has quietly changed shape. It is no longer about rules and triggers. It is about perception, reasoning, and feedback. Those who grasp that difference will build leaner, faster, more adaptive organisations. Those who do not will keep automating the wrong things beautifully.


Understanding Agentic Architecture

An AI agent is not a tool you switch on. It is a system that senses, interprets, and acts to achieve a goal (IBM, 2025). Its power sits in how it perceives, how it remembers, and how it learns from what it does.


And when you see it like that, you start to understand why most firms are getting it wrong. They chase features, not structure. They confuse movement with progress. But there is a hierarchy in all this, a progression of capability that defines how intelligent a system really is.


Each layer builds on the one before it. Reaction becomes memory. Memory becomes planning. Planning becomes evaluation. Evaluation becomes learning. That ladder, once you see it, changes everything.


Figure 1. The Hierarchy of Agentic Intelligence in Business Systems.


The figure shows a five-step pyramid. At the base sits the Reflex Agent, reacting instantly. Above that the Model-Based Agent, remembering and predicting. Then the Goal-Based Agent, planning with purpose. Higher still the Utility-Based Agent, weighing trade-offs. And at the top, the Learning Agent, adapting through experience.


Arrows on the left mark how complexity and autonomy increase. Arrows on the right show how business value and risk rise. It is a map of maturity. Each layer adds more perception, reasoning, and accountability. It helps a leader see where their business sits and where it may need to climb next.



1. Reflex Agents: The Fast Reactors

A reflex agent is simple. It waits for a signal, follows a rule, and executes. No context. No learning. Just "if this, then that."


It can be useful. A warehouse sensor that switches the lights on when movement is detected. A finance bot that flags a missing entry. Small, efficient helpers that keep things tidy.


However, that's where it ends. When the world changes, they cannot keep up. They repeat the same action, even when it no longer fits the moment. Reflex systems survive only in predictable worlds, and business is rarely predictable anymore.


2. Model-Based Agents: The Ones That Remember

Now imagine the next step: systems that remember. A model-based agent builds an internal picture of its world. It knows what has happened, it sees what is happening now, and it can anticipate what is likely to come next.


Think of a logistics AI planning delivery routes. It remembers road closures, weather patterns, driver habits, and delivery times. It knows that if it takes the same route again tomorrow, something may have changed.


This kind of system does not just react. It reasons. It adjusts its decisions based on how the environment evolves and how its own actions shape that change. And yet, even here, the intelligence remains contained. It remembers but it does not yet aim.


3. Goal-Based Agents: The Planners

A goal-based agent introduces direction. It has a destination, and every decision is shaped by how close it moves the system towards that outcome.


Consider a self-driving car. It does not simply react to traffic lights or road markings. It simulates future scenarios, tests possible turns, and chooses the path that leads to its goal with the least risk. The same logic applies in finance. A goal-based agent may simulate ten different cost strategies before choosing the one that reaches a target margin.


This is where machines start to behave strategically. They look ahead, test assumptions, and act with intent. It is not just cause and effect anymore. It is foresight.
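The "simulate, then choose" loop can be reduced to a toy example. The cost strategies, figures, and target margin below are all invented for illustration:

```python
# A goal-based agent simulates candidate actions and keeps only those that
# reach its goal. Strategy names and numbers are illustrative.

TARGET_MARGIN = 0.20
revenue = 1_000_000

strategies = {  # name -> projected total cost under that strategy
    "renegotiate_suppliers": 850_000,
    "reduce_overtime": 790_000,
    "outsource_logistics": 830_000,
}

def margin(cost: float) -> float:
    return (revenue - cost) / revenue

# Simulate every strategy, discard any that miss the goal, pick the best.
viable = {name: margin(c) for name, c in strategies.items()
          if margin(c) >= TARGET_MARGIN}
best = max(viable, key=viable.get)
print(best, round(viable[best], 2))  # reduce_overtime 0.21
```

Only one of the three simulated strategies clears the target margin here, which is the point: the goal filters the options before anything is executed.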


4. Utility-Based Agents: The Evaluators

Utility-based agents take things further. They do not just choose actions that achieve a goal. They weigh which outcomes are best overall. They bring judgement into play.

Imagine a retail AI balancing delivery time, cost, and customer satisfaction. It tests each route and picks the one that creates the most value across all three. It does not chase a single objective; it balances competing priorities.


And that is where you begin to see how closely this mirrors leadership thinking. It is not about efficiency at all costs. It is about balance, trade-offs, and consequences.


Of course, the quality of those decisions depends on the quality of the model beneath them. A flawed scoring function can make a clever system dangerously confident. And that is the real risk. When machines evaluate, they can be right for the wrong reasons or wrong with perfect logic.
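A minimal sketch of that evaluation step, with invented routes and weights. Note that the weights are the scoring function, so they carry exactly the "right for the wrong reasons" risk: change them and a confident system confidently picks something else.

```python
# A utility-based agent scores whole outcomes, not single objectives.
# Routes, weights, and figures are all illustrative.

routes = {
    # name: (delivery_hours, cost_gbp, predicted_satisfaction 0..1)
    "express":  (4, 30.0, 0.95),
    "standard": (24, 8.0, 0.80),
    "economy":  (72, 3.0, 0.55),
}

def utility(hours: float, cost: float, satisfaction: float) -> float:
    """Weighted trade-off: reward satisfaction, penalise time and cost.
    These weights embody the judgement; a bad weighting is a flawed model."""
    return 1.0 * satisfaction - 0.01 * hours - 0.02 * cost

best = max(routes, key=lambda name: utility(*routes[name]))
print(best)  # standard: not the fastest, not the cheapest, the best balance
```

Under these particular weights the middle option wins; express is punished for cost, economy for time. The agent's "judgement" is only as good as that one small function.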


5. Learning Agents: The Improvers

A learning agent goes a step further. It does not wait for a developer to change its code. It learns through experience. It adapts.


It has four moving parts working together. The Performance element does what it knows. The Learning element improves that knowledge through feedback. The Critic watches results and scores them against expectation. The Problem generator keeps curiosity alive by testing new actions.


Think of a sales AI analysing thousands of customer calls. It identifies what tone, timing, or sequence drives the best results and keeps adjusting. Every outcome becomes a lesson. Every failure becomes data.


However, learning takes time. It can be messy. It needs structure, patience, and clean data to make sense of it all. Without that, learning systems just spiral into noise. And yet, when done right, they become alive in the best sense of the word: adaptive, aware, and always improving.
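The four moving parts can be shown together in a toy bandit-style learner that discovers which message tone earns replies. Everything here, the tones, the hidden reply rates, the exploration rate, is invented for illustration:

```python
# The four parts of a learning agent, reduced to a toy. Names and numbers
# are illustrative; the "true" reply rates stand in for the hidden world.
import random

random.seed(0)
scores = {"formal": 0.0, "casual": 0.0}  # learned knowledge (running averages)
counts = {"formal": 0, "casual": 0}
TRUE_REPLY_RATE = {"formal": 0.3, "casual": 0.6}  # unknown to the agent

def performance_element() -> str:
    """Act on current knowledge: pick the tone believed to work best."""
    return max(scores, key=scores.get)

def problem_generator() -> str:
    """Keep curiosity alive: sometimes try a random tone instead."""
    return random.choice(list(scores))

def critic(tone: str) -> float:
    """Score the outcome against expectation (1 = reply, 0 = silence)."""
    return 1.0 if random.random() < TRUE_REPLY_RATE[tone] else 0.0

def learning_element(tone: str, reward: float):
    """Fold feedback into knowledge as a running average."""
    counts[tone] += 1
    scores[tone] += (reward - scores[tone]) / counts[tone]

for step in range(2000):
    tone = problem_generator() if random.random() < 0.1 else performance_element()
    learning_element(tone, critic(tone))

print(performance_element())  # with enough feedback, "casual" wins
```

Every outcome becomes a lesson in the literal sense: the running average shifts a little with each call, and the occasional curious experiment stops the agent settling on a wrong early impression.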


The Rise of Multi-Agent Systems

When you begin to connect these agents, something new happens. Intelligence stops being isolated and starts becoming collaborative. A logistics agent can now talk to a pricing agent. A compliance agent can align with a marketing one. Each specialises in its own domain, but together they operate as a coordinated network.


And that, considering where business is going, may be the real story of AI. Not a single clever agent, but many working in harmony under shared goals and governance.

Salesforce Agentforce 360 is already built around that idea. Each agent carries a specific role, but they operate within one trusted platform, connected by rules of cooperation and accountability (Reuters, 2025). The architecture feels closer to a business ecosystem than a piece of software.


And yet, even in these systems, a human still matters. Machines can optimise, but they cannot judge what should be optimised. They can weigh a trade off but not question its ethics. A well designed agent network still needs a human in the loop, not to control every action, but to set the boundaries of reason.


Why Consulting Has Fallen Behind

This is where the consulting world has lost its footing. Too many firms still treat AI as a technology project rather than a cognitive system. They sell digital roadmaps without understanding how intelligence actually behaves. They talk transformation, but the work stays trapped in PowerPoint.


The failure is not just technical. It is structural, cognitive, and operational all at once.

  • Structurally, strategy sits apart from execution. Slides replace systems.

  • Cognitively, there is little understanding of how learning loops work, so feedback never gets designed into the model.

  • Operationally, everything remains siloed. Automation exists, but intelligence does not.


And because of that, most AI projects plateau. They look efficient on paper but hollow in practice. They automate without thinking. They execute without understanding. And that, in truth, is why so many digital strategies fade after launch.


Raising the Bar for AI Consulting

The next generation of consultants must be builders as well as thinkers. They need to understand how these systems perceive, decide, and evolve. They must know how to govern them, not just advise around them.


That shift demands a new literacy, the ability to translate technical behaviour into business language. A good consultant should be able to explain why an agent made a choice, how feedback altered its reasoning, and what governance keeps it accountable.


They also need to recognise that adaptability is the real metric of success. Not just cost savings or efficiency gains, but how fast a system learns and how safely it adapts. PwC's Trust and Transparency in AI study (2025) makes that point clear: ethics and oversight can no longer be bolted on later. They must be designed in from the start.


This is the space where 360 Strategy operates. We do not separate advice from implementation. Strategy, system design, and governance all happen together. Because intelligence without accountability is not strategy, it is risk.


A Language for Boards and Leaders

At board level, clarity matters more than novelty. Leaders do not need more noise about AI; they need frameworks that connect decisions to value.


Three things now stand out.

  1. Enterprises such as IBM and Salesforce have already proven measurable returns through agentic design (CIO, 2025; Reuters, 2025).

  2. Agentic systems are no longer research. They are (under human supervision) operational, embedded, and scaling fast.

  3. Most consultants still think in a linear world, advising on automation while the market moves toward intelligence.


Boards should no longer ask, "Do we use AI?" The real question is, "How intelligent are our systems, and how well are they governed?"


Because, in truth, the competitive edge will not come from who has the biggest model or the flashiest platform. It will come from who understands how to design agency and how to keep it accountable.


The Road Ahead

The next decade of AI may be remembered as the decade of agency. Progress will depend less on scale and more on orchestration, how humans and machines learn to work together.


Even the most advanced agents remain limited by what they can perceive. They understand the data they see, but not the world beyond it. And that is why governance, ethics, and human oversight still matter. A machine can simulate wisdom, but only people can define purpose.


The consultants who fail to grasp this shift will fade fast. The ones who thrive will treat intelligence as living infrastructure, systems that learn, adapt, and grow under the same principles that guide good leadership.


Because transformation is not about awareness anymore. It is about competence. It is about building intelligence you can trust. And the firms that master that will shape not just their own future, but the way business itself begins to think.


Book a free 30 minute meeting and get AI Agent ready. https://calendly.com/mark-733/30min



Top 10 FAQs


1. What is an AI agent? A system that perceives, decides, and acts to achieve a defined goal.

2. How does it differ from automation? Automation follows static rules; agents adapt using context, memory, and feedback.

3. Why do agents matter for business? They enable continuous optimisation and faster, data led decisions.

4. Which industries lead in adoption? Logistics, finance, and customer operations, with IBM and Salesforce setting the pace.

5. How do learning agents improve? Through feedback loops that measure performance and refine behaviour.

6. What risks exist? Bias, drift, and opacity if systems lack clear governance and human oversight.

7. Can SMEs use agentic AI? Yes. Scaled down, modular frameworks make adoption practical and affordable.

8. How should consultants evolve? They must build and govern systems, not just advise from a distance.

9. How is success measured? By adaptability, decision quality, and sustained business value.

10. What is next for agentic AI? Networks of specialised agents collaborating under shared governance to create intelligence that scales safely.
