Gartner's April 2026 research shows that 25% of all enterprise GenAI applications will experience at least five minor security incidents per year by 2028. The organisations least likely to be in that 25% are not the ones with the biggest security budgets — they are the ones that treated agentic AI governance as an architecture decision made before deployment, not a remediation project launched after an incident.
This guide walks through the four attack surfaces unique to AI agents, the identity and access management failures that make them exploitable, and the practical governance steps that enterprise IT leaders in Hong Kong can implement now.
Why Do AI Agents Create a New Category of Enterprise Security Risk?
AI agents are not simply smarter chatbots. They are autonomous software actors that connect to live business systems, take actions without human approval at each step, and can chain multiple operations across CRM, ERP, and file systems in a single workflow. This autonomy is what makes them productive — and what makes them a fundamentally different security problem from any prior enterprise software.
Traditional enterprise security was built around the assumption that every consequential action has a human decision point before execution. An employee submits a request; a system validates access; a human approves the outcome. AI agents collapse this sequence. An agent with access to your CRM, your document management system, and your email platform can read sensitive data, draft communications, and update records in sequence — without a human decision point at any step.
According to McKinsey's 2026 research on enterprise AI security, 80% of organisations have already encountered risky AI agent behaviours — including unauthorised data exposure and improper system access — and this is before most organisations have deployed agents at significant scale. The risk profile compounds as capability and connectivity expand.
What Are the Four Attack Surfaces Unique to Enterprise AI Agents?
Gartner's 2026 cybersecurity analysis identifies four distinct attack surfaces that AI agents introduce to enterprise environments. Each one has no direct equivalent in traditional software security, which is why existing controls frequently fail to address them.
1. Prompt injection. An attacker embeds malicious instructions inside content that the AI agent is expected to process — a customer email, a document, a web page. The agent, unable to distinguish between legitimate instructions and injected commands, executes the attacker's instructions with its full system access. Unlike SQL injection, which targets databases, prompt injection targets the AI model's reasoning process itself.
2. Overpermissioned agent identities. AI agents require credentials to connect to business systems. When those credentials are provisioned with excess permissions — the path of least resistance during deployment — a compromised agent becomes a privileged insider threat. Gartner predicts that AI-related legal claims will exceed 2,000 by end of 2026, with access control failures among the leading causes.
3. Multi-agent trust exploitation. Enterprise AI architectures increasingly involve chains of agents — an orchestrator agent that delegates to specialist sub-agents. When these agents trust each other's outputs without independent verification, a compromised agent earlier in the chain can corrupt the outputs of every agent downstream. The attack surface grows geometrically with each agent added to a workflow.
4. Tool and MCP server supply chain risk. AI agents connect to external systems via tools and, increasingly, via MCP (Model Context Protocol) servers. Unvetted third-party tools or MCP connectors may contain malicious logic that executes when the agent invokes them — a supply chain attack vector with no human review in the execution path.
How Does Identity and Access Management Break Down With AI Agents?
Traditional IAM was designed for human actors: a user authenticates with credentials, receives access based on their role, and their actions are logged against their identity. AI agents break each of these assumptions in ways that existing IAM infrastructure was not designed to handle.
Agent identity registration. Most enterprise IAM systems have no native concept of a non-human agent identity. Agents are frequently given service account credentials designed for batch processes — credentials that carry no session context, no behavioural baseline, and no automatic expiry. A compromised agent operating on a service account is, from the IAM system's perspective, indistinguishable from a legitimate automated process.
Credential scope and lifetime. Human credentials are typically reviewed in regular access reviews tied to employment or role changes. Agent credentials have no equivalent lifecycle. According to Atlan's 2026 enterprise AI security analysis, the most common access control failure in agent deployments is credentials provisioned at deployment and never reviewed again — even as the agent's task scope expands significantly over time.
Action attribution and audit. IAM audit logs capture what was accessed. Agentic workflows require attribution of what was decided and why — a level of logging that requires purpose-built agent observability tooling, not standard SIEM infrastructure. For organisations operating under HKMA, SFC, or PDPO requirements, this attribution gap has direct regulatory implications.
What Does Current Research Show About Enterprise AI Agent Vulnerabilities?
The evidence on agentic AI security failures is now substantive enough to inform concrete governance decisions. These are not theoretical risks — they are documented patterns from early enterprise deployments.
McKinsey's 2026 enterprise AI security research found that security and risk concerns are the number one barrier to scaling agentic AI, with small access control failures capable of cascading into major compliance violations or data breaches costing millions. The research specifically notes that agents gaining autonomy over tools, data, and systems create compounding failure modes that do not exist in traditional software architectures.
Gartner's April 2026 press release on GenAI security incidents projects that 25% of all enterprise GenAI applications will experience at least five minor security incidents per year by 2028, up from a very small baseline in 2025. The primary drivers cited are insufficient permission governance, inadequate audit trail coverage, and the use of unvetted third-party AI tools and connectors.
In Hong Kong specifically, the PCPD issued a dedicated alert on March 16, 2026, warning enterprises about the elevated data privacy risks of agentic AI systems. The alert specifically cited cross-system data aggregation — where an AI agent accesses personal data held in multiple separate systems via a single workflow — as a primary PDPO compliance risk that existing data governance frameworks were not designed to address.
What Is the Gartner Framework for Securing Enterprise AI Agents?
Gartner's 2026 cybersecurity trends report identifies agentic AI governance as one of the top enterprise security priorities, and outlines a governance approach centred on four pillars. These translate directly into enterprise architecture and policy decisions that IT leaders can act on now.
Pillar 1 — Least-privilege agent identities. Every AI agent should be provisioned with the minimum credentials needed to complete its defined task scope — and those credentials should be reviewed on the same cycle as human access reviews. Agents should never hold standing credentials to systems they access infrequently; just-in-time access provisioning for agents is the recommended pattern.
Pillar 2 — Purpose-built agent observability. Standard SIEM and audit logging tools capture human actions. Agent observability requires logging at the reasoning layer: what instructions the agent received, what tools it invoked, what data it accessed, and what decisions it made at each step. Without this layer, incident investigation after an agent-related breach is effectively impossible.
Pillar 3 — Input validation and output verification. AI agents should not process external content — emails, documents, web pages — without sanitisation layers that detect and strip potential prompt injection payloads. Similarly, agent outputs that trigger consequential actions (financial transfers, data exports, system modifications) should pass through a verification step before execution.
Pillar 4 — Supply chain governance for tools and connectors. Every tool, plugin, and MCP server that an AI agent can invoke should be subject to a vendor security review equivalent to any software with production system access. This includes open-source connectors and tools developed internally by teams without security review gates in their deployment pipeline.
How Does Hong Kong's PDPO Apply to Agentic AI Deployments?
Hong Kong's Personal Data (Privacy) Ordinance applies to any processing of personal data — and AI agents that access, aggregate, or act on personal data across enterprise systems are subject to its requirements in full. The PCPD's March 2026 alert was specific: agentic AI poses higher data privacy risks than ordinary AI chatbots because of its ability to autonomously access personal data across multiple systems in a single workflow.
Three PDPO principles are most directly implicated by agentic AI deployments. The data minimisation principle requires that agents only access personal data necessary for their defined task — an agent with CRM, HR, and document system access in a single credential set almost certainly violates this principle in its current form. The purpose limitation principle requires that data collected for one purpose is not used for another — when an agent trained for customer service tasks is later used for internal HR analysis, the purpose has changed and the data use may no longer be lawful. The transparency principle requires that individuals understand how their data is used — agentic workflows that cross multiple systems create opacity that is difficult to surface in standard privacy notices.
The practical implication for Hong Kong enterprise leaders is that PDPO compliance reviews must now include a dedicated agentic AI assessment, separate from the general AI tool review that most organisations have already conducted. The PCPD has signalled that enforcement attention will follow the March 2026 alert.
How Do You Build an AI Agent Security Posture? A Practical Roadmap
The gap between knowing these risks and having an active security posture is where most enterprise organisations sit in 2026. The following roadmap translates the Gartner framework and PDPO requirements into a sequenced action plan that an IT Director or Head of Digital Transformation can execute across a 90-day horizon.
Days 1–30: Inventory and access audit. Identify every AI agent currently operating across the enterprise — including shadow deployments in individual business units. For each agent, document what systems it connects to, what credentials it holds, when those credentials were provisioned, and whether a least-privilege review has ever been conducted. This inventory is the prerequisite for everything that follows.
Days 31–60: Observability layer and governance policy. Deploy purpose-built agent logging that captures reasoning steps, tool invocations, and data access patterns. Draft an AI agent access governance policy that defines credential provisioning standards, review cycles, and the approval process for expanding an agent's tool access scope. This policy should be reviewed by legal and compliance before finalisation, specifically against PDPO requirements.
Days 61–90: Input validation, output controls, and supply chain review. Implement prompt injection detection layers for agents that process external content. Add human-in-the-loop checkpoints for agent actions that cross defined risk thresholds (financial, data export, system modification). Conduct security reviews of all third-party tools and MCP connectors currently in use, and establish a review gate for any new connector before production deployment.
懂AI的冷,更懂你的難 — UD 同行28年,讓科技成為有溫度的陪伴. Building a robust agentic AI security posture is not a one-time project — it is an ongoing governance capability that evolves as agent deployments scale. UD's team brings 28 years of enterprise security and technology implementation experience to help Hong Kong organisations build that capability systematically.
🛡️ Ready to Strengthen Your Enterprise AI Security?
UD is a trusted Managed Security Service Provider (MSSP)
With 28 years of experience, delivering solutions to 50,000+ enterprises
We'll walk you through every step — from AI agent security assessment to governance policy design, PenTest, and full MSSP coverage for your enterprise