
AI Agent Readiness, Part Two: Data and Technology That Actually Move

Updated: Nov 19

[Illustration: Agent Readiness]

By Mark Evans MBA, CMgr FCMI, aka Rogue Entrepreneur


AI Agent Readiness: what happens after culture

Across eight months of client audits, one truth keeps repeating. Leaders want AI. Ideas are not the problem. The will to fix the data and the plumbing usually is. That gap explains why pilots stall, why value slides into next quarter, and why the same steering groups keep meeting without delivering anything useful.


In my previous article we dealt with culture. Culture is the first gate. It decides whether people will actually work with AI or quietly reject it, and it decides whether leadership will protect change or kill it. That article covers trust, behaviour, fear, incentives and the internal politics of AI agent readiness. This piece assumes that cultural groundwork is in place and moves to the other two weak points: data and technology.


The current policy view in the UK is blunt. Skills England has warned that AI skills gaps are now a drag on growth, and that closing those gaps could unlock up to four hundred billion pounds for the UK economy by 2030 (Skills England, 2025b). The message from government is that capability, not hype, is the limiting factor. The UK has started publishing tools to help employers build that capability, not just talk about AI (Skills England, 2025a).


The quiet winners already act this way. They use a plain evaluation system and a practical way of building. This lets them change models quickly, make decisions with confidence, and learn in public. More of them are now opening the black box and telling boards how they are doing it.


This article focuses on data and technology. It reflects what we are seeing inside real organisations. It adds balanced advice so you can act without buying into hype.

Why firms get stuck


In my experience over the past 12 months, most companies fall into one of three patterns:


  • The magpie. Chases shiny AI demos. Talks loudly about agents. Avoids the work on data, access, and process. Ends up in pilot hell.

  • The overwhelmed. Sees the size of the legacy mess, freezes, and gets trapped in analysis. Burns months. No delivery.

  • The mountaineer. Starts a huge foundations programme. Tries to fix everything. Produces diagrams and governance decks. No live value.


We have seen the pendulum swing inside the same company. They start like magpies to impress the board. They crash into ugly data reality. They panic. They swing all the way to mountaineer mode and announce a full clean-up. Now nothing goes live. That panic reaction is not leadership but fear dressed up as structure.


All three patterns miss the timing. In a fast market, perfection loses to progress. Blind progress also hurts. The answer sits somewhere in the middle.


Intentional opportunism

This is the operating position we now recommend.

Start now, and start smart. Pick one or two visible use cases that carry real value for real users. Treat them as learning engines. In parallel, build a few foundations that you will use again and again. Not either or. Both.


Why it works:

  • Early use cases buy trust with the people who will fund the next stage.

  • They expose real constraints in your data, access, permissions, and approval path.

  • They teach the organisation what AI and agents can and cannot do.

  • They show commercial movement, not theatre.


This is exactly where UK firms are feeling pressure. Government is not talking about vanity pilots. It is now giving employers structured readiness tools like an AI Skills Framework, an AI Skills Adoption Pathway, and an AI Adoption Checklist because the barrier is no longer "can we buy AI." The barrier is "can our people safely use it, prove value, and stay inside the rules" (Skills England, 2025a).


With that signal in mind, you invest in the foundations that change your slope. You avoid boiling the ocean. You avoid a grand design that never reaches the front line.


Data readiness in the real world

Data is the loudest blocker in our audits. Not which model. Not which cloud vendor. Data.

What we keep seeing:


  • Fragmented sources. The same customer appears under three different names. The same product family has different labels in sales, finance, and service.

  • Access friction. Legal and privacy fear shuts down access for everyone. Or there are no guardrails at all and data is already leaking through shadow tools.

  • Tribal knowledge. A few experts hold the real process in their heads. Nothing is documented. An agent cannot learn from silence.


These are not edge cases. Across the UK, many firms are already improvising. Seventy-one percent of UK workers say they use unapproved AI tools at work, and half of them say they do it every week. A fifth admit they even use unsanctioned AI on finance tasks. Only around a third say they worry about data privacy. That is shadow AI, and it is already live inside British companies. It is happening because people are under pressure to deliver, but the organisation has not given them a safe, approved way to do it.


Here are the five data readiness moves that work in practice.

Use AI to reduce pain, not to skip accountability. You can now use natural language entity matching, retrieval, and semantic search to stitch data that was never designed to talk. This gives you leverage. It does not remove the need for ownership, quality and audit. The National Institute of Standards and Technology frames trustworthy AI as something you govern across the whole lifecycle (NIST, 2023). You measure data quality, you control access, and you understand the risk tolerance behind each use.
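
To make the stitching step concrete, here is a minimal sketch assuming nothing more than two lists of customer names that were never designed to match. The names, threshold and auto-merge rule are all invented for illustration; a production pipeline would typically use embeddings or an LLM for the matching and route low-confidence pairs to a named owner for review.

```python
# Minimal fuzzy entity matching sketch. Illustrative only: the names,
# threshold and auto-merge rule are hypothetical, and real deployments
# should keep a human review queue for low-confidence pairs.
from difflib import SequenceMatcher

sales_names = ["Acme Engineering Ltd", "Borders Fabrication", "Caird & Sons"]
finance_names = ["ACME Engineering Limited", "Borders Fab Ltd", "Caird and Sons"]

def similarity(a: str, b: str) -> float:
    """Crude normalised string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

MATCH_THRESHOLD = 0.75  # above: auto-merge; below: send to a human queue

for s in sales_names:
    best = max(finance_names, key=lambda f: similarity(s, f))
    score = similarity(s, best)
    verdict = "auto-merge" if score >= MATCH_THRESHOLD else "human review"
    print(f"{s!r} -> {best!r} (score {score:.2f}, {verdict})")
```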


Turn tacit know-how into steps. Ask a subject matter expert to record their screen while narrating what they do and why. Feed that video and audio to your large language model to draft a standard operating procedure. Give it back to the expert to correct. You now have working instructions you can give to an agent. This replaces tribal knowledge with documented flow in hours, not months.
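
As a sketch of that flow, the snippet below transcribes the narrated recording and asks a model to draft the procedure. It assumes the OpenAI Python SDK; the file name, model choices and prompt are placeholders, and the expert's correction pass remains the step that makes the output trustworthy.

```python
# Hypothetical sketch: narrated screen recording -> draft SOP for expert review.
# Assumes the OpenAI Python SDK; file name, models and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Transcribe the expert's narration from the recording's audio track.
with open("walkthrough_audio.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# 2. Ask a model to turn the narration into numbered, agent-usable steps.
draft = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Turn this narrated walkthrough into a "
         "standard operating procedure: numbered steps, inputs, checks, owner."},
        {"role": "user", "content": transcript.text},
    ],
)

print(draft.choices[0].message.content)  # goes back to the expert to correct
```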


Focus on the vital few sources. Do not try to connect every source in the company. Identify the data that drives revenue protection or service outcomes and expose just that through a clean access layer. Add basic metadata. Name an owner. This is sometimes described as context engineering. It is not marketing language. It is controlled availability of the right context to the right agent at the right time.
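
Here is a minimal sketch of what that clean access layer can look like, assuming a simple in-process registry. The source names, owners and roles are invented for illustration; the point is that every exposed source has basic metadata, a named owner, and a controlled path to it.

```python
# Illustrative access layer: a registry of the vital few sources.
# All names, owners and roles below are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataSource:
    name: str
    owner: str         # named human accountable for quality
    description: str   # basic metadata an agent can read
    tags: tuple = ()

REGISTRY = {
    "invoicing": DataSource("invoicing", "finance.lead@example.com",
                            "Historic invoices by customer and month", ("revenue",)),
    "crm_accounts": DataSource("crm_accounts", "sales.ops@example.com",
                               "Active accounts and contacts", ("revenue",)),
}

def get_source(name: str, requester_role: str) -> DataSource:
    """Controlled availability: only registered sources, only known roles."""
    if requester_role not in {"sales_agent", "finance_agent"}:
        raise PermissionError(f"Role {requester_role!r} is not approved")
    return REGISTRY[name]  # KeyError means the source was never onboarded
```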


Only clean deeply where the return is clear. This is where most data lake projects went wrong. They tried to fix everything. You do not need everything for your first two high value use cases. You need the sources that unlock more than one path. Prioritise by commercial impact, not by elegance.


Build new systems agent ready by default. If you are standing up a new system or rebuilding a core process, design it so that an agent can work with it later. Logical structure. Rich metadata. Clear naming. Role based permissions. Standard operating procedures captured and versioned. Ownership defined. This is basic design for future automation.
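
As an illustration, an agent ready by default system can declare those properties up front in a single descriptor. Every name below is hypothetical; the point is that structure, ownership, permissions and SOP versions are stated at design time rather than reverse-engineered later.

```python
# Hypothetical "agent ready by default" descriptor for a new system.
SYSTEM_SPEC = {
    "name": "order_intake",
    "owner": "ops.director@example.com",            # ownership defined
    "metadata": {                                    # rich metadata, clear naming
        "entities": ["customer", "order", "product"],
        "naming": "snake_case, singular entity names",
    },
    "permissions": {                                 # role based, read/write split
        "read": ["sales_agent", "finance_agent", "support_agent"],
        "write": ["order_service"],  # agents never write directly here
    },
    "sops": {"create_order": "v3", "amend_order": "v1"},  # captured and versioned
}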


All of this sits under governance. You still need strict read and write rules. You still anonymise where required. You still create a secure sandbox where teams can experiment without risking a leak. Safe exploration beats shadow use. This matters because staff are clearly not waiting for permission. They are already bringing their own AI into finance, sales and customer communication, and most of them are doing it with no formal guardrail.


Technology readiness is a set of dials

Technology is not one big choice but a set of dials. You can set each dial with intent.


Centralised or localised. Local teams need the autonomy to build simple agents that help with their own tasks. A central team should handle complex, shared problems that no single team can solve. Maintain central control over write access to sensitive systems. Define boundaries based on value, complexity, risk, and data sensitivity.


Point tools or platform. Different builders need different paths. A point and click builder so non technical teams can create basic agents. A low code layer so operations can integrate systems. A full code path so engineers can extend and harden. Bring in good vertical tools where a vendor already solves one repeatable pain point better than you will, such as legal response, customer service triage, or coding assist.


Buy, adapt, or build. If a tool covers most of the job, buy and adapt it. If one of your core vendors will clearly deliver the feature in the next release cycle, patch and wait instead of burning months on something that will be replaced. Build when there is no credible option in sight, or when having it built around your process is itself an advantage. A blended approach is normal. You build on top of what you buy.


Shared utilities that everyone can trust. Stand up internal services for common needs. Secure retrieval over finance and sales data. Task routing. Identity. Guardrails. Monitoring. Give every team a safe set of bricks so they do not invent their own risky workaround. This is also where you enforce permissions and audit trail.


Access and permissions. Make the read path and the write path explicit. Read access for sensitive data can be controlled and logged. Write access to finance, customers, and safety critical systems should never sit with a lone team. It should sit behind central approval with audit.
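
A small sketch of that split, assuming a logged read path and an allow-listed write path. The agent names are invented, and a real system would persist the audit trail in a tamper-evident store rather than just logging it.

```python
# Illustrative read/write split with an audit trail. Names are hypothetical.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent.audit")

CENTRALLY_APPROVED_WRITERS = {"billing_agent_v2"}  # assumed central allow-list

def read_sensitive(agent_id: str, record_id: str) -> None:
    """Read path: permitted for known agents, but always logged."""
    audit.info("READ %s by %s at %s", record_id, agent_id,
               datetime.now(timezone.utc).isoformat())

def write_sensitive(agent_id: str, record_id: str) -> None:
    """Write path: blocked unless the agent sits behind central approval."""
    if agent_id not in CENTRALLY_APPROVED_WRITERS:
        audit.warning("WRITE DENIED %s by %s", record_id, agent_id)
        raise PermissionError(f"{agent_id} has no central write approval")
    audit.info("WRITE %s by %s", record_id, agent_id)
```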


Evaluation is the lever

Evaluation is where speed becomes safe.

The companies that move with confidence are not guessing. They write down how they judge a change. Product, risk, compliance, and engineering all look at the same evidence. The evaluation process itself becomes an internal asset. It is not for show. It is competitive advantage.


There are three layers.

  1. Model and system tests. Accuracy is not enough. You also check calibration, robustness, bias, safety, efficiency and cost to serve. You evaluate the full system in realistic scenarios, not just the model in isolation. Work such as Holistic Evaluation of Language Models shows why single headline scores are misleading and why broad, scenario based testing is more reliable when you need to compare or swap models (Liang et al., 2022).

  2. Live experiments. Offline scores suggest. Live tests decide. You run controlled A B or interleaving tests in production with clear exposure logging, stop rules, and rollback. Where it makes sense, you can even use agents to test agents, so you can check responses at scale, then escalate edge cases to humans. Evaluation is a running activity, not a tick box.

  3. Policy as code. Before anything moves beyond pilot, it must pass promotion gates. Those gates are encoded in policy. The gates are tied to risk levels. Safe changes get a lighter path. Moderate changes get the standard path. Sensitive changes trigger full review. You keep the audit trail tight. This is the discipline described in current AI governance work such as the NIST AI Risk Management Framework and the ISO 42001 AI management system standard, which both frame AI as something you manage, monitor, and continually improve rather than something you deploy and forget (NIST, 2023; ISO, 2023). A minimal sketch of such gates follows this list.
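
Here is that minimal sketch of promotion gates as policy in code. The tier names, gate steps and evidence model are assumptions for illustration; the pattern simply encodes the manage-and-monitor discipline that NIST AI RMF and ISO 42001 describe, not any specific product.

```python
# Hypothetical "policy as code" sketch: promotion gates keyed to risk tiers.
GATES_BY_RISK = {
    "low":      ["offline_eval"],
    "moderate": ["offline_eval", "live_ab_test"],
    "high":     ["offline_eval", "live_ab_test", "human_review",
                 "compliance_signoff"],
}

def promotion_path(change_risk: str, passed: set) -> str:
    """Promote only when every gate for the tier has recorded evidence."""
    required = GATES_BY_RISK[change_risk]
    missing = [g for g in required if g not in passed]
    return "promote" if not missing else f"blocked: missing {missing}"

print(promotion_path("moderate", {"offline_eval"}))                   # blocked
print(promotion_path("moderate", {"offline_eval", "live_ab_test"}))   # promote
```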


When these layers are in place, change stops feeling dangerous. You can see what matters within hours. You can decide within days. You can roll back in minutes.


Project Looking Glass

Figures in this section are pilot targets, informed by eight months of audits to date. They are not guarantees.


Project Looking Glass was a 90 day client pilot for an anonymised manufacturing business. We built an account tracker that spots drops in spend from long term, existing customers using historic invoicing data. The system then creates targeted follow up tasks inside Salesforce, with suggested next actions and context for the sales, QC, finance and customer teams.
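
To show the shape of the core check, here is an illustrative spend-drop flag: compare the latest quarter against a trailing baseline per account. The column names, the 20 percent threshold and the toy data are assumptions, not the client build, and the real system pushed flags into Salesforce tasks rather than printing them.

```python
# Illustrative spend-drop detection in the spirit of Looking Glass.
# Data, columns and threshold are hypothetical.
import pandas as pd

invoices = pd.DataFrame({
    "account": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "quarter": ["Q1", "Q2", "Q3", "Q4"] * 2,
    "spend":   [100, 110, 105, 60, 80, 82, 79, 81],
})

pivot = invoices.pivot(index="account", columns="quarter", values="spend")
baseline = pivot[["Q1", "Q2", "Q3"]].mean(axis=1)   # trailing average
latest = pivot["Q4"]
drop_pct = (baseline - latest) / baseline

flagged = drop_pct[drop_pct > 0.20]  # accounts down more than 20% vs baseline
for account, pct in flagged.items():
    print(f"Account {account}: spend down {pct:.0%}; create follow-up task in CRM")
```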


Scope was tight on purpose. Two core data sources. One pilot region. One playbook. Weekly evaluation. The build was agent ready by default, with named owners and an audit trail. Total project cost for the pilot was £28k ex VAT. Ongoing technical support is available on a monthly contract.


Pilot targets included an 8 percent increase in spend on flagged key accounts, a 12 percent revenue uplift in the pilot segment, which equates to around four hundred and sixty thousand pounds, and around a 2 percent EBITDA improvement. We continue to track precision, action rate, win back rate, and time to first contact every week with the executive team.


This shift matches the mood we are seeing in UK boardrooms. Two thirds of UK enterprises say AI is already driving meaningful productivity gains, yet only around 38 percent say they are actually prioritising AI upskilling (IBM, 2025). Boards are starting to ask less about pilots and more about retention, margin and recovery of lost revenue. They are asking why value is not landing faster if the benefit is already visible.


The point of Looking Glass was not the typical shiny magpie project to satisfy FOMO. It was controlled revenue recovery, identified by a skilled strategic consultant who looks at the business first (not IT or tools), identifies the bleed, and maps out the fix that sticks. It shows value to the board, and it forces discipline around access, permissions, and evaluation.


"We never realised we were losing sales to a competitor. We kept throwing more money at marketing for new clients and forgot our most valued ones. Mark helped us see the leak and fix it. It's changed how we run the business."Managing Director, Scotland

How 360 Strategy works

Most AI consultants start with tools. We do not.


At 360 Strategy we start with culture and with the business itself. We look at how the company really runs. We look at how people behave under pressure. We look at where money is leaking and where trust will snap if you move too fast. This is exactly where most UK firms say they are stuck. Over half of UK SMEs report that they do not have the internal skills to move AI from talk to delivery. Fewer than one in eight have trained their teams in how to use AI at all. A clear majority are now asking for external guidance on how to deploy AI safely without creating legal or reputational risk (Institute of Coding, 2025).


Our AI readiness evaluation does not ask only whether you can plug in a model. It asks:


  • Will the people who actually do the work accept this in front of customers?

  • Does this move support your plan to protect margin and grow in the next 12 to 18 months?

  • Can we prove value without lighting up compliance?

  • Can the agent see enough of the truth to act, and can you defend that access?

  • Can you evaluate, approve, and roll back without drama?


This matters because most firms are already under strain. They are not asking for a lab experiment. They are asking to keep customers, defend margin, and stop drift. We treat AI as a controlled way to do that, not as a stunt.


Only one quarter to move the needle

Here is what you can do in the next ninety days.

  1. Deliver one or two real use cases. Not shiny magpie projects. Ask whether each one targets Productivity, Efficiency or Opportunity (PEO). Pick work that protects revenue or improves service. Stand up a clean access layer over the top data sources and name the owners.

  2. Create a shared evaluation kit with agreed scenarios, a minimum metric set, and promotion gates tied to risk levels.

  3. By the end of the quarter you move from talk to evidence. You also know, with certainty, where to invest next.


Leadership questions that keep teams honest:


  • Can we trace the lineage and quality of the data feeding this use case?

  • Do our tests cover calibration, robustness, bias, safety, efficiency and cost to serve?

  • Do we have scenarios that reflect real users and edge cases?

  • Are model cards written and maintained for every production model? Model cards are structured reports that document purpose, limits, performance, and risk areas for a model, and are widely recommended for responsible deployment (Mitchell et al., 2019). A minimal skeleton follows this list.

  • Can we run a live test with guardrails and rollback inside a week?

  • Are promotion gates encoded and linked to risk tiers?

  • Can product, risk, and engineering all see the same evaluation evidence in one place?
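
As promised above, here is a minimal model card skeleton in the spirit of Mitchell et al. (2019). The field values are placeholders and the exact sections vary by organisation; the point is that purpose, limits, performance and risk live in one maintained, reviewable record.

```python
# Hypothetical model card skeleton; sections loosely follow Mitchell et al. (2019).
MODEL_CARD = {
    "model": "account-churn-flagger v1.2",
    "intended_use": "Flag spend drops in existing accounts for human follow-up",
    "out_of_scope": "Automated pricing or credit decisions",
    "performance": {"precision": 0.81, "recall": 0.67, "eval_set": "2024 invoices"},
    "limitations": "Trained on one region; seasonal accounts over-flag in Q1",
    "ethical_risks": "Could deprioritise small accounts if left untuned",
    "owner": "data.lead@example.com",
    "last_reviewed": "2025-10-01",
}
```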


If too many answers are no or unsure, your switch costs will stay high.


Sum up

There is no way around the drudge work. You still have to collect evidence, clean data, and encode rules for release. The difference now is that you do not need to fix everything to start.


Choose well. Move. Learn. Build the few foundations that raise your slope. Do that, and you move from talking about AI to proving controlled value with AI.


Frequently Asked Questions


What does an AI consultant do?

A traditional AI consultant talks about tools. 360 Strategy does not start there. We look at culture, behaviour, and commercial pressure first. Then we pick one high value use case that can be delivered safely and measured.


How do I know if my business is ready for AI?

You are ready to start if you can access core data without a legal fight, you have one team willing to run a real use case, and a senior owner will back it and not smother it in process.


Why does data matter more than the model?

Most firms are not blocked by the model. They are blocked by messy data. If finance and sales do not even agree on which customers are considered active, you cannot expect an agent to warn you when spend is dropping.


What is Project Looking Glass?

Looking Glass was a 90 day pilot. It tracked spend drops in key accounts across products and pushed targeted follow ups into Salesforce.


How fast can we see value?

With a tight scope, and owners in the room, you can see movement in one quarter. The mistake is trying to rebuild the whole organisation before you start. The smarter path is one controlled use case with proper evaluation, then scale.


Is this only for large enterprises?

No. Most pressure right now is inside Scottish founder led and mid market firms. They are losing revenue quietly at the edges. They need control and recovery, not a Silicon Valley style lab.

 

References

IBM (2025) IBM Report: Two-Thirds of UK Firms Gain from AI – Reskilling Key to Unlocking Greater Productivity. London: IBM UK. Available at: https://newsroom.ibm.com/2025-10-28-IBM-Report-Two-Thirds-of-UK-Firms-Gain-from-AI-Reskilling-Key-to-Unlocking-Greater-Productivity (Accessed: 30 October 2025).


Institute of Coding (2025) SMEs call for more AI training support. London: Institute of Coding. Available at: https://www.fenews.co.uk/skills/smes-call-for-more-ai-training-support/ (Accessed: 3 November 2025). See also: https://www.techradar.com/pro/many-smbs-say-they-cant-get-to-grips-with-ai-need-more-training (Accessed: 31 October 2025).


International Organization for Standardization (2023) ISO/IEC 42001: Artificial intelligence management system. Geneva: International Organization for Standardization. Available at: https://www.iso.org/standard/42001 (Accessed: 31 October 2025).


Liang, P., Bommasani, R., Hudson, D.A., Welbl, J., et al. (2022) Holistic Evaluation of Language Models. arXiv. Available at: https://arxiv.org/abs/2211.09110 (Accessed: 1 November 2025).



Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I.D. and Gebru, T. (2019) Model Cards for Model Reporting. arXiv. Available at: https://arxiv.org/abs/1810.03993 (Accessed: 1 November 2025).


National Institute of Standards and Technology (2023) Artificial Intelligence Risk Management Framework (AI RMF 1.0). Gaithersburg, MD: National Institute of Standards and Technology. Available at: https://www.nist.gov/itl/ai-risk-management-framework (Accessed: 31 October 2025).


Skills England (2025a) AI skills for the UK workforce. London: Skills England. Published 29 October 2025. Available at: https://www.gov.uk/government/publications/ai-skills-for-the-uk-workforce (Accessed: 1 November 2025).


Skills England (2025b) Help for UK businesses to fill £400bn AI skills gap. London: Skills England. Published 29 October 2025. Available at: https://www.gov.uk/government/news/help-for-uk-businesses-to-fill-400bn-ai-skills-gap (Accessed: 30 October 2025).
