Machine Speed Markets: AI Agents, Control of Demand, and the Fight to Stay Inside the Deal

Updated: Nov 19


Written by Mark Evans MBA, CMgr FCMI, aka Rogue Entrepreneur


Introduction: A New Agent in the Market

A new type of market participant is emerging in the digital economy – autonomous AI agents that can perceive, reason, and act on behalf of humans (Shahidi et al., 2025). Recent research by NBER economists suggests that these agents could drive a "Coasean singularity": a point where transaction costs fall towards zero, radically reshaping how markets function. In essence, tasks like finding information, negotiating deals, and enforcing contracts – traditionally costly frictions in commerce – may become nearly instantaneous and costless.


The implications are enormous: if AI agents eliminate most transaction frictions, entirely new market structures and business models become feasible, fundamentally changing the make-or-buy boundaries of firms and the structure of industries. Indeed, McKinsey analysts project that by 2030, up to $3-5 trillion in annual retail commerce globally could be orchestrated by AI agents – a transformation on the scale of the internet or mobile revolutions, but likely unfolding much faster (McKinsey & Company, 2025a). Business leaders and founders today face a challenge: will they harness this agent-driven shift to create value, or be disrupted by those who do?


The Timeline: Where We Are and Where We're Going

Before going deeper, let's be clear about timing. This is not science fiction, but it's also not everywhere yet.


Right now, in 2025:

  • Large enterprises are currently piloting AI procurement agents that scan suppliers and negotiate terms autonomously (Reuters, 2025; Logistics Viewpoints, 2025).

  • Some tendering processes in professional services and technology sectors are being handled by automated systems (albeit with human-in-the-loop guardrails) (Reuters, 2025).

  • Dynamic pricing algorithms are already active across travel, retail, and logistics – your competitor's price changes before you finish your coffee (Calvano et al., 2020).

  • Early agent-to-agent interactions are happening in financial markets and supply chain coordination.


By 2027-2028:

  • Agent-to-agent negotiation becomes standard practice in B2B commerce (Gartner, 2025).

  • Mid-market businesses start encountering buyer agents in their everyday transactions (Gartner, 2025).

  • First major regulatory frameworks take effect in the EU and UK, establishing liability and transparency requirements (European Union, 2024; UK Government, 2023).

  • Platform consolidation begins – a few dominant agent systems emerge as default choices (McKinsey & Company, 2025a).


By 2030:

  • McKinsey projects $3-5 trillion in annual commerce orchestrated by AI agents (McKinsey & Company, 2025a).

  • Markets have fundamentally restructured around agent infrastructure.

  • Businesses not visible to agent systems are systematically excluded from procurement decisions.

  • The question shifts from "should we use agents?" to "how do we survive in agent-mediated markets?" (McKinsey & Company, 2025a).


Your window to shape positioning: 2-3 years. After that, the structures harden and you are responding to other people's decisions rather than making your own.


When This Hits Your Sector

Not every sector moves at the same speed. Agent adoption follows a predictable pattern based on transaction complexity, data availability, and buyer sophistication.


Immediately relevant (2025-2026):

  • Professional services (legal, accounting, consulting) – tendering processes already being automated (Reuters, 2025).

  • Procurement-heavy industries (manufacturing, construction) – supplier selection moving to agent-based systems (Logistics Viewpoints, 2025).

  • B2B marketplaces and platforms – agent compatibility becoming table stakes (McKinsey & Company, 2025a).

  • Financial services procurement and vendor management.

 

Near-term impact (2026-2027):

  • Healthcare services and medical equipment procurement

  • Logistics and supply chain operations – route optimisation and carrier selection

  • Technology and software purchasing – licence management and vendor consolidation

  • Industrial distribution and parts supply


Mid-term horizon (2027-2029):

  • Consumer retail and e-commerce – personal shopping agents go mainstream

  • SME service providers across sectors

  • Traditional distribution channels face disintermediation pressure

  • Local and regional suppliers need agent visibility strategies


If you are selling to large enterprises with sophisticated procurement teams, agent-based buying is already emerging in your market (Reuters, 2025; Logistics Viewpoints, 2025). If you are selling to small businesses or directly to consumers, you have 2-3 years to understand the shift and position accordingly.


The mistake is assuming you have time because you do not see it yet. By the time agent-mediated buying is obvious in your sector, the positioning decisions have already been made by others.


The Coasean Singularity: What Happens When Friction Disappears

At the heart of the Coasean singularity concept is Ronald Coase's 1937 insight that transaction costs – the costs of searching for information, bargaining, and enforcing agreements – determine how we organise economic activity (Shahidi et al., 2025). When transaction costs are high, firms and hierarchies emerge to internalise exchanges; when those costs plummet, markets can handle interactions that were previously impractical.


The Economic Theory: Coase's insight is that transaction costs determine whether you build something in-house or buy it from the market. High friction means you build internal capability; low friction means you can rely on external markets.


What This Means in Practice: If it currently takes your team three days and twelve emails to get quotes, vet suppliers, negotiate terms, and finalise an order, that friction keeps you working with known partners. When AI agents can do all of that in three minutes, the supplier you have worked with for ten years has no friction advantage over one hundred competitors you have never heard of.


Why This Matters: Loyalty based on convenience disappears. Relationships based on "we know how to work together" lose value. You need a different reason for customers to choose you, and that reason needs to be visible to their AI agent, not just to them.

Consider what near-zero transaction costs could mean:


Instant Search & Match: Agents can evaluate millions of options instantly, making search costs negligible. Buyers and sellers could be matched with unprecedented precision and speed. The traditional shortlist of "the three suppliers we always use" gets replaced by "the three suppliers the agent determined are optimal for this specific transaction."


Automated Negotiation: Contracts and deals might be negotiated and executed automatically via smart contracts or algorithms, with minimal human involvement. What used to take a procurement team two weeks now happens in minutes, at 3am, while everyone sleeps (Reuters, 2025; Logistics Viewpoints, 2025).
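To make the mechanics concrete, here is a deliberately toy sketch of agent-to-agent price negotiation: two agents make alternating concessions until their offers cross, then settle at the midpoint. The reservation prices, starting offers, and concession step are invented parameters for illustration; real procurement agents negotiate over many terms at once, not a single price.

```python
# Toy agent-to-agent negotiation: buyer and seller make alternating
# concessions until their offers cross, then settle at the midpoint.
# All parameter values here are illustrative assumptions.

def negotiate(buyer_max, seller_min, buyer_start, seller_start,
              step=1.0, max_rounds=100):
    """Return (agreed_price, rounds) or (None, rounds) if no deal is possible."""
    buyer_offer, seller_ask = buyer_start, seller_start
    for rounds in range(1, max_rounds + 1):
        if buyer_offer >= seller_ask:                      # offers crossed: deal
            return round((buyer_offer + seller_ask) / 2, 2), rounds
        buyer_offer = min(buyer_offer + step, buyer_max)   # buyer concedes upward
        seller_ask = max(seller_ask - step, seller_min)    # seller concedes downward
        if (buyer_offer == buyer_max and seller_ask == seller_min
                and buyer_offer < seller_ask):
            return None, rounds                            # reservation prices disjoint
    return None, max_rounds
```

A run such as `negotiate(buyer_max=100, seller_min=80, buyer_start=70, seller_start=110, step=5)` converges in a handful of rounds; the same exchange between humans would take days of emails. That speed difference, repeated across every transaction, is the friction collapse the Coasean argument describes.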


Seamless Trust & Verification: Identity checks, reputation scoring, and due diligence could be handled by agents in real-time, enabling strangers to transact as safely as known parties. In a world of ubiquitous AI intermediaries, "zero-trust" marketplaces may emerge where trust is built into the infrastructure of transactions rather than built up over years of relationship (European Union, 2024).


By dramatically lowering search, communication, and contracting costs, AI agents expand the set of viable market designs and make possible new forms of commerce that were previously impractical. Entirely personalised markets, dynamic real-time auctions, and frictionless cross-platform services become thinkable. However, this unprecedented efficiency also comes with caveats: even as AI agents unlock new possibilities, they introduce new challenges and uncertainties (McKinsey & Company, 2025a; Calvano et al., 2020).


Demand Side: Why Users Will Embrace AI Agents

From the consumer or user perspective, people do not adopt AI agents for the technology's sake; they adopt them for better outcomes with less effort. Users face a fundamental trade-off: decision quality (how good is the outcome the agent delivers) versus effort reduction (how much time and mental effort does the agent save).


Consumers will adopt AI agents when these systems can deliver high-quality decisions with minimal effort, outperforming either manual effort or traditional intermediaries.

Early use cases are likely to be routine, data-intensive tasks where agents have an advantage: price comparison shopping, scheduling appointments, basic contract negotiations (finding the best insurance terms), travel planning. In such domains, the agent's speed and breadth of search can far exceed a human's, and the downside of a suboptimal choice is relatively low.


By contrast, for complex or high-stakes decisions – medical treatment plans or important financial investments – users will be more cautious. Trust is a major factor: people need confidence that the agent is competent and aligned with their goals. Agent capability and the task context will mediate adoption.


What This Means for Business: If your product or service is purchased through a routine, price-driven decision process, you will encounter buyer agents very soon. If your product requires consultative selling and complex evaluation, you have more time, but not indefinitely. The agents are learning fast.


Supply Side: How Firms Will Deploy AI Agents

On the supply side, companies across industries are racing to integrate AI agents into their services and operations. For businesses, AI agents represent both a new product opportunity and a new competitive threat. The NBER research outlines several strategic questions firms face (Shahidi et al., 2025):


Ecosystem Scope – "Walled Garden" or Open Agent?

Firms must decide whether their AI agents will operate within a closed ecosystem or across multiple platforms. A walled garden agent might only work within the company's own platform (an Amazon shopping agent that only finds products on Amazon). This can ensure quality control and help monetise transactions internally. In contrast, an open ecosystem agent can interact with many platforms.


The Strategic Choice: A closed agent can lock in users and funnel transactions to a company's platform, whereas an open agent could attract more users by offering comprehensive service. Companies will have to balance user experience against the desire to capture value.


What This Means in Practice: If you are a retailer, do you build an agent that only shows your products (keeping margin but limiting utility), or do you make your products visible to every agent (losing margin control but gaining reach)? This is not an IT roadmap choice but a pricing-power decision that will determine your market position for the next decade.


Monetisation Model – How to Earn Revenue from Agents?

Several models are emerging. Firms might charge users a subscription fee for a personal AI assistant or take a commission on each transaction the agent facilitates. Others may monetise via data – using insights from agent interactions to improve products or target offers – or charge platform fees to merchants who want access to the agent's users.

Each model has implications. Subscription can guarantee revenue but may limit user adoption. Commission aligns with successful transactions but could bias agent behaviour. Data monetisation raises privacy concerns. Platform fees require a critical mass of users to have leverage.


Agent Capability – How Advanced Should the Agent Be?

Firms also decide what capabilities to build into their agents. A basic agent might execute simple scripted tasks. An intermediate agent could negotiate within set parameters or handle multi-step workflows with some autonomy. The most advanced agents will learn user preferences over time, make context-aware autonomous decisions, and even initiate actions proactively.


The Trade-off: Building more advanced agents requires more sophisticated AI, more training data, and poses greater risk if the agent makes a bad call. However, advanced agents could deliver more value and differentiate a company's offerings. If one company's agent becomes known for being smarter or more helpful, users may flock to it.

These supply-side decisions will shape industry competition in the AI era. We may see divergent strategies: some companies will offer "universal" agents to grab territory, while others build captive agents tied to their ecosystem (McKinsey & Company, 2025a).


Market-Level Impacts: Efficiency Gains vs. New Frictions

If AI agents proliferate, the market-level effects will be profound. On one hand, autonomous agents promise to greatly increase market efficiency. On the other hand, they could also introduce new forms of friction or failure.


Efficiency Gains:

Markets could operate with far less friction than today. Search costs approach zero as agents instantly scour options. Matching between buyers and sellers becomes algorithmically optimised, meaning fewer missed opportunities and closer alignment of offerings with consumer preferences. Communication and bargaining costs shrink – software agents can negotiate 24/7, in parallel, without wages or fatigue.

In theory, this leads to more transactions happening that previously would not, more competitive pricing, and lightning-fast fulfilment of needs. Economic surplus should increase as markets clear faster and more efficiently.


New Frictions:

Ironically, unleashing millions of AI agents could create its own bottlenecks and distortions. Congestion becomes a risk: if countless agents are pinging servers and databases with queries, digital platforms might get overwhelmed. Price obfuscation is another issue: human buyers today see posted prices, but if each AI agent is negotiating bespoke prices or bundles, the notion of a single market price could fade.


There is also a collusion risk: multiple agents (especially if trained with similar algorithms) might learn to coordinate or stabilise prices at higher levels, effectively forming a tacit cartel without any explicit human agreement. Recent studies of algorithmic pricing show that even simple AI algorithms can inadvertently learn collusive pricing strategies, sustaining supracompetitive prices without direct communication (Calvano et al., 2020; Cambridge University Press, 2025).
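To see how coordination can emerge without any agreement, here is a toy version of the repeated-pricing setting behind the algorithmic collusion concern: two sellers repeatedly pick a price from a small grid, the cheaper seller wins the demand, and each seller runs independent Q-learning conditioned only on the rival's last price. The grid, demand rule, and learning parameters are invented for illustration; Calvano et al. (2020) study much richer versions of this environment.

```python
import random

# Two independent Q-learning pricing agents in a repeated game.
# Nothing in this code "agrees" to anything, yet each agent's policy
# comes to depend on the other's prices. Parameters are illustrative.

PRICES = [1.0, 1.5, 2.0, 2.5]        # discrete price grid (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1    # learning rate, discount, exploration

def demand_share(p_own, p_rival):
    """Cheaper seller takes the whole market; ties split it evenly."""
    if p_own < p_rival:
        return 1.0
    if p_own > p_rival:
        return 0.0
    return 0.5

def train(rounds=20000, seed=0):
    rng = random.Random(seed)
    n = len(PRICES)
    # Q[i][s][a]: seller i's value of charging PRICES[a] when the
    # rival's previous price was PRICES[s]
    Q = [[[0.0] * n for _ in range(n)] for _ in range(2)]
    state = [0, 0]
    for _ in range(rounds):
        acts = []
        for i in range(2):
            if rng.random() < EPS:
                acts.append(rng.randrange(n))        # explore
            else:
                row = Q[i][state[i]]
                acts.append(row.index(max(row)))     # exploit
        for i in range(2):
            profit = PRICES[acts[i]] * demand_share(
                PRICES[acts[i]], PRICES[acts[1 - i]])
            nxt = acts[1 - i]                        # rival's price = next state
            target = profit + GAMMA * max(Q[i][nxt])
            Q[i][state[i]][acts[i]] += ALPHA * (target - Q[i][state[i]][acts[i]])
            state[i] = nxt
    # Greedy price each seller now charges, per observed rival price
    return [[PRICES[row.index(max(row))] for row in Q[i]] for i in range(2)]
```

Whether the learned prices settle above the competitive level depends on the parameters; the point of the sketch is that coordinated-looking pricing can emerge from nothing but two reward-maximising loops watching each other, which is precisely what makes intent-based competition law hard to apply.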


Additionally, "good enough" automation may lead to quality degradation in decision making – if agents default to easy, satisfactory choices, markets might suffer a lack of exploration or innovation.


What This Means: While AI agents remove traditional frictions, they introduce new ones that firms and regulators will need to manage. The net welfare effects are uncertain and will likely vary by context (European Union, 2024; UK Government, 2023).


New Market Possibilities Enabled by AI Agents

Perhaps the most exciting aspect of AI agents is the way they expand the design space of markets – enabling mechanisms and services that were previously impractical.


Hyper-Personalised Markets: AI agents can continuously learn individual user preferences by observing behaviour and asking occasional clarification questions. This means markets can move beyond one-size-fits-all offerings. An agent might bundle multiple products into a personalised package deal, updated in real time – a task far too complex for manual market mechanisms.


Automated Micro-Contracts: In traditional commerce, enforcing contracts (especially small-scale ones) is often not worth the legal cost. AI agents change that by using smart contracts, escrow systems, and algorithmic enforcement mechanisms. This makes it viable to have micro-contracts and on-the-fly agreements that were previously unenforceable.

Imagine automatically charging a penalty if a delivery is late, or dynamically adjusting price based on quality-of-service metrics – all handled by agents and code. Entire new markets might emerge, such as pay-per-use or fractional ownership models, because agents can manage the intricate bookkeeping and rule enforcement.
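A late-delivery clause of the kind just described can be sketched as code rather than legal prose. The clause terms below (grace period, hourly penalty, cap) are illustrative assumptions, not a standard from any real smart-contract platform:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Sketch of an algorithmically enforced micro-contract: a late-delivery
# penalty clause evaluated by code. All terms are invented placeholders.

@dataclass
class DeliveryClause:
    price: float                 # agreed price
    due: datetime                # contractual delivery deadline
    grace: timedelta             # grace period before penalties accrue
    penalty_per_hour: float      # deduction per hour late, after grace
    penalty_cap: float           # penalties capped at this fraction of price

    def settle(self, delivered_at: datetime) -> float:
        """Return the amount payable given the actual delivery time."""
        lateness = delivered_at - (self.due + self.grace)
        if lateness <= timedelta(0):
            return self.price                    # on time: full price
        hours_late = lateness.total_seconds() / 3600
        penalty = min(hours_late * self.penalty_per_hour,
                      self.price * self.penalty_cap)
        return round(self.price - penalty, 2)
```

The economics matter more than the code: a clause like this costs nothing to enforce, so it becomes worth attaching to a £100 delivery, which no one would litigate over today.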


Trust and Reputation at Scale: AI agents offer a new approach to trust: continuous identity verification and reputation tracking embedded in transactions. Agents could carry cryptographic proofs of a user's credentials or track the reliability of counterparties across platforms. This could create portable reputations and digital trust scores that agents use to vet partners instantly.


The result might be "zero-trust marketplaces" where you can safely do business with an unknown party because the agents handle the verification and any risk mitigation (European Union, 2024).


Regulatory Challenges in an AI-Agent Economy

The rise of AI agents does not just disrupt business models – it also poses novel challenges for regulators and policy. Our current legal and regulatory frameworks assume human actors making decisions, and they strain to fit a world where autonomous software negotiates and transacts.


Liability & Responsibility: Who is accountable if an AI agent makes a harmful decision or breaks the law? If your personal AI agent commits to a bad contract or causes some damage, is it you (the user) on the hook? Or the company that created the agent's software? What about the platform that hosted the agent's transaction?

Regulators will need to clarify this. Business leaders deploying agents should expect rules requiring transparency about when an AI is acting and perhaps mandates to assign responsibility. Companies might need to design fail-safes or insurance for agent-caused errors.


Algorithmic Collusion and Competition Law: AI agents could learn to tacitly cooperate in ways that raise prices or exclude competitors, even without any formal agreement. This vexes traditional competition enforcement, which is built on detecting human-to-human agreements or intent to collude (Calvano et al., 2020; Cambridge University Press, 2025).

If pricing algorithms "converge" to a cartel-like outcome simply by self-learning, is it illegal? How do we even detect or prove it? Regulators are already debating updates to competition law to handle algorithmic collusion. Businesses should be cautious – if you deploy an agent that dynamically prices your product, you will need to ensure it is not implicitly colluding with others (WSJ, 2024).


Bias and Discrimination: AI agents trained on real-world data could inadvertently learn biased or discriminatory behaviours. An agent tasked with hiring or lending might pick up on patterns that correlate with protected characteristics (rejecting loan applicants from certain postcodes that correlate with ethnicity, as a proxy for risk).

Companies using AI agents in sensitive areas will likely face scrutiny to demonstrate their algorithms are fair and unbiased. The onus may be on firms to conduct algorithmic audits and be transparent about criteria (European Union, 2024).
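One small, concrete piece of such an audit is a selection-rate comparison across groups, in the spirit of the "four-fifths rule" used in US employment-selection guidance. The threshold and data below are illustrative; a real algorithmic audit would go far wider, covering proxy variables, calibration, and error-rate parity:

```python
# Minimal selection-rate audit sketch. Data and threshold are illustrative.

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Lowest group rate divided by highest; below 0.8 is a common warning flag."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())
```

Even a check this simple, run routinely over an agent's decisions, is the kind of evidence regulators and courts will ask firms to produce.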


Market Manipulation & Security: A worrying scenario is agent-on-agent manipulation – adversarial tactics where one company's AI agents try to deceive or exploit others. An agent could feed false signals to a competitor's agent or learn the patterns of others to trick them in negotiations (Gartner, 2025).

Regulators might categorise certain manipulative behaviours as fraud or unfair practices, but policing this will be challenging. Businesses will need to harden their agents against adversarial strategies and possibly collaborate on industry standards (European Union, 2024; UK Government, 2023).


In general, the regulatory environment for AI agents is nascent. We can expect new regulations around AI transparency, accountability, and interoperability. Companies that engage proactively with regulators – helping shape sensible rules and demonstrating self-regulation – could avoid harsher restrictions and earn public trust.


Ethical and Societal Perspectives: Balancing Efficiency with Values

Beyond formal regulation, the rise of AI agents raises deeper ethical and societal questions that business leaders cannot ignore. At the core, it challenges us to balance relentless efficiency with human values and equitable outcomes.


Kantian Ethics – Means and Ends

The Philosophy: Immanuel Kant's moral philosophy emphasises that people should be treated as ends in themselves, not merely as means to an end. Kantian ethics is deontological, often focused on adherence to moral principles like honesty, autonomy, and dignity, regardless of expediency (Kant, 1785).


What This Means in Practice: An AI agent negotiating a deal might have the capacity to lie or mislead to get a better outcome. A purely utilitarian business view might accept that if the lie increases profit. But a Kantian approach would object: truthfulness is a duty, and deception is unethical even if it "works" (Kant, 1785).


Businesses implementing AI agents will face ethical dilemmas. The means by which results are achieved matter. Companies will need to encode ethical guidelines into their AI (forbidding behaviours like deception or unjust bias), not just aim for optimal outcomes.


The Practical Application: This might translate to design principles like transparency (the agent's decisions can be explained to affected parties) and consent (users have control over how their agent operates), aligning with the idea of treating individuals with respect.


Freeman's Stakeholder Theory – Broadening the Purpose

The Philosophy: While early business doctrine (Milton Friedman) held that the sole responsibility of business is to maximise shareholder profit, modern thought – notably R. Edward Freeman's stakeholder theory – holds that companies must create value for all stakeholders, including employees, customers, suppliers, and communities.


What This Means in Practice: If agents dramatically cut costs and boost efficiency, who reaps the benefits? A pure shareholder-centric approach might drive companies to use AI agents primarily to reduce labour costs (replacing workers) and squeeze out more profit, even at the expense of jobs or customer experience. In contrast, a stakeholder approach would encourage using AI to augment employees (freeing them from drudgery to focus on higher-value work), to enhance customer service, and to collaborate with partners for mutual gain.


As Freeman puts it, "business is about profits, sure, but it is also about purpose." Adopting AI should tie into a company's core purpose and values. If a company's mission is to improve customer well-being, its AI agents should be designed to genuinely help customers, not to exploit their data or push unnecessary sales.


The Test: AI agents will test the sincerity of stakeholder commitment. Will companies use AI to benefit consumers, employees, and communities, or just to cut costs and chase earnings? The firms that choose the former may build more sustainable, trusted brands in the long run.


Social and Economic Equity: The Acemoglu Warning

The Research: Economist Daron Acemoglu warns that on our current trajectory, AI (including autonomous agents) is likely to deepen inequality if left unchecked (Acemoglu, 2024). He observes that many AI applications are being used to automate work and monitor workers, potentially displacing jobs and eroding wages for large segments of society. The productivity gains might mostly accrue to the owners of the AI and big platforms, rather than the average worker.


What This Means: In the context of AI agents, this could mean that small businesses get wiped out by big players who can afford the best AIs, or that workers displaced from intermediary roles (like sales agents or brokers) have no pathways to new productive roles.


The Power Concentration Risk: If consumers start delegating decisions en masse to a handful of AI agent platforms, those platform providers could gain immense influence – essentially gatekeepers of markets. This echoes concerns we have seen with large tech firms controlling search results, but on an even broader scale.


Social responsibility and long-term thinking are integral when implementing transformative tech. Proactively addressing issues like job impacts, data privacy, and fairness is not just altruism but risk management. Companies that ignore these and are perceived as causing harm with AI could face public backlash, brand damage, or heavy-handed regulation later.


While AI agents present incredible efficiency opportunities, business leaders must ground their strategies in a sound ethical framework. Blending Kantian principles (respect for the individual, commitment to honest conduct) with a stakeholder mindset (creating value for all parties) can guide more balanced decision-making (Kant, 1785; Darden School of Business, 2024).

 

Five Strategic Questions Every Board Should Answer Now

Understanding the shift to agent-mediated markets is necessary. But understanding alone does not protect your position. Here are the five questions every board and leadership team should answer in the next 12 months:


1. Disintermediation Risk: Are We Visible Where Agents Look?

If buyers start using AI agents to source suppliers, negotiate terms, and make purchase decisions, will those agents find you? Are you present in the digital channels where agents search and compare? Do you have structured data that agents can parse, or are you locked in PDFs and phone calls?

Being invisible to agents is the same as not existing. If your competitor is agent-visible and you are not, you lose by default.
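"Structured data that agents can parse" has a concrete, existing form: schema.org Product/Offer markup embedded as JSON-LD in a product page, which crawlers and shopping agents can read directly instead of scraping prose or PDFs. The product details below are invented placeholders:

```python
import json

# Example of agent-readable product data using schema.org vocabulary.
# Product name, SKU, and price are hypothetical placeholders.

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Industrial bearing, 30mm",   # placeholder product
    "sku": "BRG-30-EX",                   # placeholder SKU
    "offers": {
        "@type": "Offer",
        "price": "12.40",
        "priceCurrency": "GBP",
        "availability": "https://schema.org/InStock",
    },
}

def to_jsonld_script(data):
    """Render the data as the <script> tag a product page would embed."""
    body = json.dumps(data, indent=2)
    return f'<script type="application/ld+json">\n{body}\n</script>'
```

A supplier whose price, availability, and delivery terms are published like this can be evaluated by an agent in milliseconds; one whose terms live in a PDF rate card effectively is not in the comparison set at all.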


2. Agent Strategy: Build, Buy, or Integrate?

Should you build proprietary agents (closed ecosystem, control the experience, capture the data) or ensure compatibility with third-party agents (open ecosystem, broader reach, less control)? This is not an IT roadmap choice. This is a pricing power decision that will determine your market position for years.


Closed agents can lock in users and funnel transactions to your platform. Open agents attract more users but make you easier to compare and harder to differentiate. Most companies will need elements of both, but the primary strategy must be clear.


3. Ethical Guardrails: What Will We Forbid?

What behaviours will you forbid your agents from doing, even if they would improve outcomes? Will you allow deception in negotiation? Will you permit discriminatory filtering based on proxies for protected characteristics? Will you let your agent exploit information asymmetries?


The temptation will be to optimise purely for results. But the first scandal where your agent is caught behaving unethically will cost more than any efficiency gain. Decide your lines now, before deployment.


4. Stakeholder Impact: Who Benefits from Efficiency Gains?

If AI agents drive 20% cost reduction in procurement or 30% efficiency gains in service delivery, how will those gains be distributed? Shareholders only? Shared with employees through training and redeployment? Passed to customers through lower prices? Used to strengthen supplier relationships?


Your answer to this question reveals whether you mean what you say about corporate purpose. Agents will make the choice unavoidable: the gains will be visible, and people will ask where they went.


5. Regulatory Preparedness: Are We Ready for Transparency and Accountability?

Are you prepared for transparency requirements about when agents are acting on your behalf? For algorithmic audits that examine your pricing or selection logic? For liability frameworks that hold you accountable for agent decisions?


These regulations could be 12-18 months away in major markets. Companies that wait for the rules to arrive will spend the next three years, if they are fortunate, playing catch-up. Those that build compliance into their agent architecture now will have a smoother transition.


These are not questions you answer in an email. They require board-level discussion, cross-functional input, and strategic commitment. But answering them now, while the market is still fluid, gives you positioning power later. Once the structures harden, you will be responding to other people's decisions rather than making your own.


A Note on Perspective: Where I Sit in This

My work sits at the junction of strategy, technology, and organisational change. I have spent the past decade helping leadership teams navigate digital transformation, AI adoption, and global market structure shifts. I understand agent technology at the architectural level and have guided implementation strategy across sectors including manufacturing, logistics, construction, and professional services.

I am not a machine learning engineer. I do not build the algorithms. What I do is help leadership teams think clearly about what to build, why to build it, how to govern it, and whether building it aligns with their strategic intent and values.


The economic theory and philosophical frameworks in this article are not academic decoration. They are the tools and signals I use in boardrooms to cut through technology hype and focus on the decisions that actually matter: market positioning, stakeholder impact, and long-term competitive dynamics.


I have seen too many organisations chase technology for its own sake, driven by FOMO, without asking whether it serves their purpose or their people. I have seen boards commit hefty budgets to AI initiatives without thinking through the ethical implications or the stakeholder consequences. And I have seen the damage that results when technology deployment runs ahead of strategic clarity.


The agent revolution will create winners and losers. The winners will not be the companies with the best algorithms. They will be the companies that knew what they were trying to achieve, why it mattered, and who it was supposed to serve, before they wrote the first line of code.


How I Work With Leadership Teams

This article outlines a fundamental market shift that will reshape competitive dynamics over the next 2-5 years. The organisations that thrive will be those that think through these implications now, while positioning decisions still matter.


I work with boards and leadership teams in three ways:

Strategic Facilitation (2-day executive workshop):

Use this article's frameworks to guide your leadership team through the five critical questions above. We map your disintermediation risk, assess your agent strategy options, define your ethical guardrails, clarify your stakeholder commitments, and build your regulatory readiness plan.

Output: Strategic positioning decisions and governance frameworks for agent deployment. You leave with clarity on where you are going and why, before you commit resources to getting there.

Typical engagement: £POA depending on organisation size and preparation required.


Governance Framework Development (4-6 week engagement):

Build the ethical guardrails, oversight mechanisms, and stakeholder policies that ensure your AI agent deployment aligns with your values and risk tolerance. This includes algorithmic accountability structures, transparency protocols, and bias audit frameworks (European Union, 2024; UK Government, 2023).

Output: Documented governance that can withstand regulatory scrutiny and public examination. You get both the frameworks and the internal capability to apply them.

Typical engagement: £POA depending on scope and sector complexity.


Board Advisory (ongoing retainer):

Quarterly strategic sessions to track agent market developments, assess competitive positioning, and adjust strategy as the landscape evolves. I serve as an external sounding board for leadership teams navigating uncharted territory.

Output: Continuous strategic insight and decision support as agent markets mature. You avoid both premature commitment and dangerous delay.

Typical engagement: £3,500–£6,000 per quarter.


These are not implementation projects but strategic decision-making engagements for organisations that need to think clearly about where they are going before they commit resources to getting there.


If your organisation is grappling with how to position for agent-mediated markets, let's talk. Book a consultation at calendly.com/mark-733 or reach out directly through 360strategy.co.uk.


Conclusion: Leading in the Agent Era

The dawn of the AI agent era presents a transformative moment for businesses. Much like the advent of the internet or smartphones, it will create winners and losers. But unlike past shifts, this one introduces autonomous decision-makers into the economy, forcing us to reconsider principles that have long underpinned commerce, from the nature of contracts and competition to the ethics of decision-making.


Business leaders and founders have the challenging but exciting task of navigating this transition. Those who combine forward-thinking strategy with ethical foresight will be best positioned to thrive. As McKinsey put it, agentic commerce requires "a fundamental rethinking of how value is created, captured, and delivered" (McKinsey & Company, 2025a). Companies that act boldly and responsibly can ride this wave to redefine their industries, setting new standards for efficiency and customer experience. Those that cling to old models or pursue AI adoption in a narrow, short-sighted way may find themselves disintermediated or left behind as AI agents become the new gatekeepers of markets.


Technology alone does not dictate outcomes – our choices and values do. Will AI agents negotiate better deals for all and usher in a new era of abundance? Or will they exacerbate inequality and create black-box markets that undermine trust? The answer depends on how we guide this innovation.


Business leaders are not just spectators but stewards of this future. By staying informed on cutting-edge research, engaging with diverse perspectives (economics, ethics, stakeholders), and prioritising both innovation and responsibility, they can ensure that the coming Coasean singularity truly delivers on its promise – creating more efficient markets and more inclusive prosperity.


The companies that succeed in tomorrow's agent-mediated economy will likely be those led by people who ask not just "How can we use AI agents?" but also "Why and for whom are we using them?" and "What values do we embed in them?" By finding the right balance, business leaders can harness AI agents as a force for good – driving growth, delighting customers, and uplifting society, all at once.


The window to shape your position is open. However, it will not stay open forever.


FAQs


  1. What are AI agents in business?

    AI agents are autonomous software systems that perceive, reason and act to complete tasks like sourcing, negotiating and scheduling on your behalf.

  2. What is the Coasean singularity?

    It’s the point where AI collapses search, negotiation and contracting costs, reshaping make-or-buy decisions and market structure.

  3. How soon will AI agents impact B2B procurement?

    Large enterprises are piloting now; agent-to-agent buying is likely to become mainstream within the next 2–3 years.

  4. What risks do AI agents pose for suppliers?

    Disintermediation and invisibility: if your data isn’t agent-readable, you drop off shortlists and lose deals by default.

  5. How do firms monetise AI agents?

    Models include subscription, transaction commission, data insights and platform access fees – each with trade-offs on reach, margin and trust.

  6. What new market designs do agents enable?

    Hyper-personalised bundles, automated micro-contracts and real-time auctions governed by code and measurable service levels.

  7. What governance do we need before deploying agents?

    Clear ethical guardrails, auditability, role-based controls, bias testing and accountability for decisions made at machine speed.

  8. How will regulation affect agent deployments?

    Expect EU/UK transparency, liability and competition rules; build compliance into architecture rather than adding it later.

  9. How can boards prepare for agent-mediated markets?

    Map disintermediation risk, decide open vs closed agent strategy, set ethical red lines, allocate benefits across stakeholders and plan for audits.

  10. What’s the first practical step for leaders?

    Make your offers agent-visible: structured product and service data, machine-readable terms, pricing logic and performance signals.
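To make the final FAQ answer concrete, here is a minimal sketch of what "agent-visible" product data might look like in practice: publishing an offer as a schema.org-style JSON-LD structure that a software agent can parse directly. The field names follow the schema.org vocabulary; the specific product, SKU and price shown are hypothetical examples, not recommendations.

```python
import json

def build_offer(name: str, sku: str, price: float,
                currency: str, availability: str) -> dict:
    """Return a schema.org Product with a nested Offer as a plain dict,
    ready to serialise as JSON-LD for crawlers and buying agents."""
    return {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "sku": sku,
        "offers": {
            "@type": "Offer",
            # schema.org expects price as a string and availability as a URL
            "price": f"{price:.2f}",
            "priceCurrency": currency,
            "availability": f"https://schema.org/{availability}",
        },
    }

# Hypothetical service offer, serialised for embedding in a web page
offer = build_offer("Managed Logistics Service", "MLS-001",
                    1250.00, "GBP", "InStock")
print(json.dumps(offer, indent=2))
```

The same structure extends naturally to machine-readable terms and performance signals (delivery windows, service levels) as additional schema.org properties, which is precisely what lets an agent shortlist you without a human ever reading your brochure.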


References:

Acemoglu, D. (2024) The Simple Macroeconomics of AI. Cambridge, MA: MIT Shaping the Future of Work Initiative. Available at: https://economics.mit.edu/sites/default/files/2024-04/The%20Simple%20Macroeconomics%20of%20AI.pdf (Accessed: 11th November 2025).


Calvano, E., Calzolari, G., Denicolò, V. and Pastorello, S. (2020) ‘Artificial Intelligence, Algorithmic Pricing, and Collusion’, American Economic Review, 110(10), pp. 3267–3297. Available at: https://www.aeaweb.org/articles?id=10.1257%2Faer.20190623 (Accessed: 11th November 2025).


Hunold, M. et al. (2025) ‘Algorithmic price recommendations and collusion: experimental evidence’, Journal of Industrial Organization Education (advance online). Cambridge: Cambridge University Press. Available at: https://www.cambridge.org/core/services/aop-cambridge-core/content/view/CB651EEFF516B590F70D4A1447162FAF/S1386415725000092a.pdf/algorithmic-price-recommendations-and-collusion-experimental-evidence.pdf (Accessed: 10th November 2025).


Darden School of Business (2024) ‘Stakeholder Theory’, University of Virginia. Available at: https://www.darden.virginia.edu/stakeholder-theory (Accessed: 11th November 2025).


European Union (2024) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (AI Act). Available at: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng (Accessed: 10th November 2025).


Gartner (2025) ‘Gartner Predicts Over 40% of Agentic AI Projects Will Be Cancelled by End of 2027’. Available at: https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027 (Accessed: 9th November 2025).


Kant, I. (1785) Groundwork for the Metaphysic of Morals. (Modern English translation). Available at: https://www.earlymoderntexts.com/assets/pdfs/kant1785.pdf (Accessed: 11th November 2025).


Logistics Viewpoints (2025) ‘Walmart and the New Supply Chain Reality: AI, Automation and Resilience’. Available at: https://logisticsviewpoints.com/2025/03/19/walmart-and-the-new-supply-chain-reality-ai-automation-and-resilience/ (Accessed: 10th November 2025).


McKinsey & Company (2025a) ‘The agentic commerce opportunity: How AI agents are ushering in a new era for consumers and merchants’. Available at: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-agentic-commerce-opportunity-how-ai-agents-are-ushering-in-a-new-era-for-consumers-and-merchants (Accessed: 11th November 2025).


Reuters (2025) ‘Walmart bets on AI super agents to boost e-commerce growth’. Available at: https://www.reuters.com/business/retail-consumer/walmart-bets-ai-super-agents-boost-e-commerce-growth-2025-07-24/ (Accessed: 10th November 2025).


Shahidi, P., Rusak, G., Manning, B.S., Fradkin, A. and Horton, J.J. (2025) ‘The Coasean Singularity? Demand, Supply, and Market Design with AI Agents’, in The Economics of Transformative AI. Chicago, IL: University of Chicago Press (NBER chapter). Available at: https://www.nber.org/books-and-chapters/economics-transformative-ai/coasean-singularity-demand-supply-and-market-design-ai-agents (Accessed: 11th November 2025).


UK Government (2023) AI regulation: a pro-innovation approach (White Paper). Available at: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper (Accessed: 9th November 2025).


Wall Street Journal (WSJ) (2024) ‘Federal Government Backs Tourists in Atlantic City Casino Hotel Suit’. Available at: https://www.wsj.com/us-news/law/federal-government-backs-tourists-in-atlantic-city-casino-hotel-suit-db28bdf6 (Accessed: 10th November 2025).

