AI agents are booming, deployments are not

Published on 10/02/2026 | Written by Heather Wright


Who will be the first CIO fired over agentic fails?…

Agentic AI is on the rise, with multi-agent workflows exploding (from a very low base) in recent months despite warnings of lawsuits, fines and CIO dismissals arising from poorly managed deployments. But despite the hype, agents are far from ‘everywhere’ yet.

A new report, The State of AI Agents, says only 19 percent of organisations have deployed the bots, and typically in narrow, limited-scope use cases rather than broad, cross-functional rollouts.

“By 2030, poor agent governance could cost CIOs their jobs.”

Nonetheless, the Databricks report, which is based on data gathered from more than 20,000 global organisations using the company’s data intelligence platform, is breathless about the potential and growth. It says use of multi-agent systems soared 327 percent in four months, with enterprises transitioning from single chatbots to multi-agent systems built on domain intelligence and capable of planning, deciding and acting across end-to-end business workflows. One caveat, however, is the report’s opaqueness: figures are vague, usually given only as percentages.

The Deloitte AI Institute’s 2026 State of AI in the Enterprise report is also bolshy about the future of agents, saying autonomous AI agents are ‘racing’ into the enterprise, transforming AI from a source of information and insights into a system that can do real work. It says 85 percent of respondents in its survey of more than 3,000 director to C-suite leaders directly involved in their companies’ AI initiatives expect to customise agents to fit the unique needs of their business. (Adding a note of caution, however, the report also notes that while AI is delivering productivity ‘for most’, it’s delivering business reimagination for few.)

What’s new isn’t just better models, it’s the architecture, The State of AI Agents says, with agentic architectures often pairing foundation models with enterprise context and tools to independently plan and take action. Rather than the ‘see what sticks’ approach adopted by many early on, enterprises are now homing in on their strategies, with MIT’s Building a High Performance Data and AI Organisation report highlighting that more than half of leaders see agentic AI as a force multiplier for operational performance and decision making.

Organisations are using multiple model families to align use cases with the best-performing LLMs – and to protect against vendor lock-in. Seventy-eight percent of companies are using two or more LLM model families, with 59 percent using three or more as of October 2025 – up from just 36 percent in July 2025.

Databricks says its Supervisor Agent, which creates systems composed of multiple agents working together to complete tasks across specialised domains, has become the number one agent use case on its Agent Bricks platform, used to build, evaluate and deploy agents.

Information extraction was the second most common agent use case, enabling companies to convert unlabelled text into structured tables and tap that data for AI initiatives – pulling product details, prices and descriptions from supplier PDFs, for example, even when they’re formatted differently.

While market intelligence and strategic analytics was the top AI use case in Asia Pacific (mirroring Europe, the Middle East and Africa, and North America), customer engagement is a key testing ground for the tools, with tasks such as customer support, customer advocacy, onboarding, personalised marketing content, customer interaction summarisation and customer sentiment analysis representing 40 percent of AI use cases.

That ties in with an earlier Deloitte Digital report, which identified that nine in 10 customer experience leaders believe AI has the potential to improve customer experience.

While broad adoption of agentic AI is limited, some local organisations are already well on their way to adopting the technology.

NSW Health is reportedly leveraging AI agents to predict supply shortages and monitor inventory levels across its hospital network. The system analyses usage patterns, delivery schedules and demand forecasts to automatically reorder essential supplies, reducing waste and ensuring medical supplies are always available.

ASB has deployed AI agent ‘virtual assistants’ to provide real-time financial assistance, performing customer validation and handling complex queries, with the ability to pass to a human rep when things get too complicated.

ANZ, meanwhile, has just announced it has deployed Salesforce’s Agentforce in its new CRM tool, which will consolidate data from 20 different platforms, supporting task automation and streamlining workflows.

And University of Auckland researchers are developing a nationwide research platform to build agents tailored to New Zealand’s needs and strengths and test them in realistic simulations. The project has received $250,000 in seed funding for the Aotearoa Agentic AI Platform: A Productivity Multiplier and will seek further funding this year.

It is one of the proposals to be considered by the New Zealand Institute for Advanced Technology to establish a new national platform for AI research.

But while agentic tools can open the door to new capabilities, they also bring technical complexity and inherent risks, both in terms of governance and in the danger of a money pit that never moves from experimentation to production.

IDC’s FutureScape: Worldwide Agentic Artificial Intelligence 2026 Predictions goes as far as to warn that by 2030 up to 20 percent of G1000 organisations will face lawsuits, fines and CIO dismissals due to high-profile disruptions tied to poor AI agent governance. (On the flip side, the report also forecasts that 60 percent of G2000 CEOs will use agentic AI to inform strategic decisions, leveraging autonomous systems to simulate outcomes and guide boardroom planning.)

The solution, according to The State of AI Agents, is unified governance: a layer defining how data is used, setting guardrails and rate limits, and establishing structured accountability within organisations to help ensure decisions stay aligned with evolving ethical standards.

Evaluations, using frameworks to systematically measure, test and improve the quality and reliability of AI models at all stages of deployment, go hand in hand with governance. “While governance provides the guardrails and control panel for agents, evaluations monitor and measure agent behaviour throughout their lifecycle, enabling governance to adapt in real time as agents learn or environments change.”

But the report warns that evaluations must go beyond general benchmarks: custom benchmarks specific to an organisation’s data and target tasks are required.

“Domain-specific evaluations validate knowledge and decisions grounded in enterprise data, allowing teams to tie evaluation metrics to business KPIs (such as CSAT, handle time and revenue lift), making improvements actionable. This continuous evaluation transforms AI agents from static tools into learning systems that improve over time and have the potential to unlock scalable business impact.”
