What Is AI Agent Sprawl?
AI agent sprawl is the uncontrolled proliferation of AI agents across an enterprise: the number of deployed agents grows faster than the organisation's ability to govern, secure, or even inventory them. An agent counts as sprawl when it operates without an assigned owner, documented permissions, or security review, or when it continues running after its original use case has been retired.
On April 28, 2026, Gartner released research identifying six steps to manage AI agent sprawl and projecting that by 2028, the average Fortune 500 enterprise will manage more than 150,000 agents, up from fewer than 15 in 2025. That trajectory creates a governance challenge unlike anything enterprise IT has faced before. This article explains both the problem and Gartner's recommended framework for addressing it.
How Serious Is the AI Agent Sprawl Problem in 2026?
The scale of the problem is already significant. According to 2026 data, the average enterprise manages 37 deployed AI agents, and Gartner reports that more than 80% of Fortune 500 companies using AI agents have no strategy to manage them. The critical gap is between deployment velocity and governance maturity.
The security consequences are measurable. A 2026 Gravitee survey found that 88% of organisations reported confirmed or suspected AI security incidents. Only 14.4% have full security approval for their existing AI deployments. Only 24.4% have complete visibility into how their agents are communicating with each other.
The root cause is structural. Most agents are being built by business teams using low-code and no-code tools such as Microsoft Copilot Studio, Salesforce Agentforce, and similar platforms. These tools are deliberately designed to make agent creation fast and accessible. The governance infrastructure to match that deployment speed simply does not yet exist in most organisations.
The same 2026 Gravitee State of AI Agent Security report found that 81% of teams are already past the planning phase on agentic AI. The gap between executive confidence and actual controls is the defining security problem of 2026.
What Are Gartner's 6 Steps to Manage AI Agent Sprawl?
Gartner's April 28, 2026 press release identifies the following six steps as the core framework for organisations seeking to bring AI agent sprawl under control. Each step builds on the previous one and addresses a distinct failure mode in how enterprises currently manage their agentic AI deployments.
Step 1: Establish agent governance and policies. Set clear rules for when and how agents are built, who can create and share them, and what connectors are permitted. Without explicit governance policies, every business unit defaults to its own interpretation of acceptable agent behaviour, permissions, and data access. Policy must precede proliferation.
Step 2: Build a centralised agent inventory. Use AI Trust, Risk, and Security Management (AI TRiSM) tools to discover and categorise all agents across the organisation, including those built with sanctioned tools and those running as shadow AI. Gartner's data suggests most enterprises have significantly more agents running than IT teams believe, making discovery the most critical first governance action.
Step 3: Define agent identity, permissions, and lifecycle. Manage each agent's identity, permission model, and access controls. Establish a formal review and retirement process so redundant or expired agents are removed rather than left running indefinitely with active credentials. This step directly addresses the credential sprawl problem identified by Strata Identity's 2026 research, where only 23% of enterprises have a formal agent identity management strategy.
Step 4: Develop AI information governance. Govern what information each agent can access. Ensure a process exists to keep that data current, manage permissions to prevent oversharing, and archive data when it is no longer needed by the agent. This step applies data governance principles to the agentic context — something most organisations have not yet updated their data policies to address.
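Oversharing detection amounts to comparing what each agent is granted against what its documented use case needs. This is a minimal sketch under that assumption; the scope strings and the `oversharing_report` function are hypothetical, and real platforms express permissions in their own models.

```python
def oversharing_report(agent_scopes: dict[str, set[str]],
                       documented_needs: dict[str, set[str]]) -> dict[str, set[str]]:
    """Scopes each agent holds beyond its documented data needs."""
    report: dict[str, set[str]] = {}
    for agent_id, granted in agent_scopes.items():
        excess = granted - documented_needs.get(agent_id, set())
        if excess:
            report[agent_id] = excess
    return report
```

Any agent that appears in the report is a candidate for scope reduction or archival of the data it no longer needs.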
Step 5: Monitor and remediate agent behaviour. Establish ongoing visibility into agent usage. Ensure policy compliance, detect anomalous behaviour, and correct agents that exceed their intended scope or risk tolerance. This step requires tooling that goes beyond static policy documentation, into real-time monitoring of what agents are actually doing versus what they were designed to do.
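The "designed versus actual" comparison can be sketched as a diff between an agent's declared action scope and its observed activity log. The function name and action labels below are illustrative assumptions, not a real monitoring API.

```python
def scope_violations(declared: set[str], observed: list[str]) -> list[str]:
    """Actions an agent actually performed that fall outside its declared scope."""
    return [action for action in observed if action not in declared]
```

In practice this check runs continuously against audit logs, and any non-empty result triggers an alert or an automated permission freeze.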
Step 6: Foster a culture of responsible AI usage. Support the workforce with training programmes and a community of practice to drive adoption and amplify best practices on agent management. Governance frameworks without cultural adoption fail at the user layer — business teams that understand why governance matters are the most effective last line of defence against rogue agent creation.
Why Does Agent Sprawl Happen So Quickly?
Three structural dynamics accelerate sprawl in enterprise environments.
Low-code democratisation. Platforms like Microsoft Copilot Studio, Salesforce Agentforce, and ServiceNow's AI agent builder allow non-technical business users to create and deploy functional agents in hours. The barrier to agent creation has dropped from months of engineering work to an afternoon. There is no equivalent reduction in the governance effort required to manage those agents safely.
Departmental autonomy without central visibility. Marketing, finance, HR, and operations teams each build agents for their own workflows, typically without notifying IT until something goes wrong. Each agent represents a new set of data connections, API access grants, and credential assignments that the central security team has no visibility into until a discovery sweep is run.
No formal retirement process. Agents built for a specific project, campaign, or seasonal workflow are rarely actively decommissioned. They continue running, holding data access and executing actions on schedules, long after their use case has ended. Gartner estimates that by 2028, a significant proportion of the 150,000 agents per Fortune 500 enterprise will be legacy agents running without active owners.
What Are the Real Risks of Unmanaged AI Agents?
Agent sprawl creates four categories of concrete organisational risk.
Data exfiltration and unauthorised access. Agents with broad or unclear data access permissions, particularly those provisioned with shared human credentials, represent persistent high-privilege access accounts that are rarely included in standard access reviews. A single compromised agent with administrative credentials can expose data at a scale no individual human actor could match.
Compliance failures. In Hong Kong, the HKMA, SFC, and Digital Policy Office have each issued substantive AI governance guidance in the past 18 months. An organisation that cannot produce an inventory of its AI agents, document their data access, or demonstrate policy controls will struggle to satisfy regulatory audit requirements. The Personal Data (Privacy) Ordinance implications of agents processing personal data without documented governance are equally significant.
Operational incidents from agent conflicts. Multiple agents with overlapping responsibilities, conflicting instructions, or access to the same data and systems can produce unexpected outcomes at machine speed. A 2026 Gravitee survey found that only 24.4% of organisations have full visibility into agent-to-agent communications, meaning most cannot detect or diagnose these conflicts before they impact operations.
Reputational and legal liability. When an unmanaged agent makes a decision that harms a customer, vendor, or business partner, the question of accountability falls back on the organisation. Demonstrating that responsible governance was in place requires documentation of agent identity, permissions, monitoring, and policy compliance that most organisations currently cannot produce.
What Agent Sprawl Looks Like in a Typical Hong Kong Enterprise
Consider a Hong Kong financial services firm with 800 employees. Its IT team is aware of 12 deployed AI agents. A discovery sweep using an AI TRiSM tool reveals 41 additional agents running across the organisation: customer service bots built by the contact centre team on Copilot Studio, document processing agents deployed by legal, a compliance monitoring agent configured by the risk team, and several workflow automation agents built by individual business analysts for their own productivity.
Of those 41 shadow agents, 23 hold active connections to the firm's document management system. Fourteen are provisioned with credentials that have not been rotated in over six months. Eight have no designated owner following staff turnover. Three have been running continuously since a project that concluded in Q4 2025.
None of this is exceptional. It is the median enterprise in 2026. The Gartner framework exists precisely because this situation is the norm, not the outlier, and the standard IT governance processes built for managed applications were not designed to catch it.
Which Tools Are Emerging to Address Agent Sprawl?
Several platforms now target the agent sprawl problem directly.
Microsoft Agent 365 (generally available May 1, 2026) provides discovery, lifecycle governance, and runtime security controls across Microsoft 365, Windows, AWS Bedrock, and Google Cloud environments. It directly implements Steps 2, 3, and 5 of Gartner's framework within the Microsoft ecosystem.
AI TRiSM platforms from vendors such as Reco.ai and Gravitee are designed specifically for cross-vendor agent discovery and policy enforcement. These tools are appropriate for organisations running agents across multiple non-Microsoft platforms.
Google's Agent Identity framework assigns each agent a unique cryptographic ID with defined authorisation policies. Agent Gateway enforces security policies and provides protection against prompt injection, tool poisoning, and data leakage at the platform level.
No single tool addresses all six steps of Gartner's framework. Steps 1, 4, and 6 — policy establishment, information governance, and culture building — require organisational process changes that tools alone cannot drive. The effective response combines tooling for discovery and monitoring with deliberate governance process design.
Ready to Get Your AI Agent Environment Under Control?
Most Hong Kong enterprises underestimate how many AI agents are already running in their organisation. UD has partnered with Hong Kong businesses for 28 years. Whether you are starting with an AI readiness assessment, building your agent governance framework, or preparing for regulatory scrutiny, we'll walk you through every step — from inventory and policy design to deployment governance and ongoing monitoring.