OWASP Agentic AI Risks 2026: What CISOs Must Prepare For
- TSSConsult

- 13 hours ago
- 7 min read

The Security Environment Has Changed for Good
Not long ago, the most pressing AI security question was:
What if the model says something wrong?
Today, the question has fundamentally shifted:
What if the agent does something wrong?
AI systems are quickly moving beyond simple chat interfaces. Now, they can carry out tasks, use tools, connect with enterprise systems, and even make decisions for users.
These systems now:
Read internal documents
access enterprise applications
write and execute code
orchestrate workflows
interact with other agents
operate continuously across sessions
They perform all these actions much faster than any person could.
This shift is changing the types of threats that enterprises face.
Security teams are no longer protecting only data and applications.
They are now protecting independent decision-making systems embedded in operational systems.
In order to address these changes, the OWASP GenAI Security Project released the OWASP Top 10 for LLM Applications and is now working on risks tied to agentic AI systems. The frameworks are the first organised effort to map out the threats from autonomous AI.
For CISOs, this is not academic research.
This marks the start of a new approach to security. To understand why, it's important to examine how agentic AI changes the risk landscape and why traditional security models are no longer sufficient.
Why Agentic AI Requires a New Security Model
Traditional LLM security concerns focused primarily on:
prompt injection
sensitive data leakage
insecure output handling
training data poisoning
supply chain weaknesses
These risks remain real.
But they don’t cover all the actual challenges that agentic AI brings.
Agentic systems possess capabilities that earlier AI models did not:
autonomous task planning
tool execution
API integration
credential access
persistent memory
multi-agent collaboration
Each of these abilities creates new ways for attackers to get in.
A compromised AI assistant that simply generates text acts as a nuisance.
A compromised AI agent with access to enterprise systems can:
delete data
exfiltrate sensitive information
modify configurations
execute transactions
propagate errors across automated workflows
This challenge is much bigger than before.
Research from CyberArk’s Machine Identity Security Report shows that enterprises now operate with machine identities outnumbering human identities by more than 80 to 1.
Every AI agent, API token, service account, and automation tool becomes a potential identity to govern and secure.
When independent decision-making enters that environment, the exposure is not additive.
The risk grows much faster than before.
Traditional security models, built for humans and with fixed permissions, were not designed for autonomous software that can create its own workflows. However, organisations can begin closing this gap by adapting familiar security concepts such as least privilege, continuous auditing, and RBAC to suit agentic AI environments. By extending these principles to manage agent actions, permissions, and interactions, CISOs can lay a foundation for defending both legacy and agentic systems during the transition.
The OWASP Agentic AI Risk Landscape
Security research and industry analysis increasingly identify ten core categories of risk associated with agentic systems.
Together, these risks make up the new set of threats that enterprises face from AI.
ASI01 - Agent Goal Hijack
The Risk
Attackers manipulate an agent’s objectives through malicious content embedded in:
emails
documents
websites
tool outputs
The agent regards these instructions as legitimate inputs and redirects its actions accordingly.
Why It Matters
This is basically prompt injection, but happening as part of real operations.
The attacker does not need to compromise the system infrastructure.
They only need to manipulate the agent’s reasoning process.
Research on indirect prompt injection has demonstrated how embedded instructions in external content can influence AI agents' processing of that information.
CISO Response
Treat all reasoning inputs as untrusted content.
Isolate system prompts from external inputs.
Implement strict content filtering.
require human validation for sensitive actions
ASI02 - Tool Misuse and Exploitation
The Risk
Agents misuse legitimate tools because their reasoning has been manipulated.
These tools may include:
file management systems
cloud infrastructure APIs
enterprise databases
financial systems
communication platforms
Why It Matters
Agents do not just read information.
They act on systems.
If an agent is tricked into using a harmful tool, the damage can happen right away.
CISO Response
implement Least Agency principles
Restrict tool access to the minimum required.
validate tool calls at runtime
maintain full audit trails
ASI03 - Identity and Privilege Abuse
The Risk
Agents operate using credentials such as:
API keys
service tokens
delegated permissions
system integrations
Attackers who manipulate the agent effectively gain access to those permissions.
Why It Matters
AI agents often have several permissions combined into one identity.
This makes them attractive targets for privilege escalation.
Non-human identities already dominate enterprise environments, and AI agents accelerate that trend.
CISO Response
Extend Identity and Access Management policies to agents.
Use short-lived credentials
Monitor privilege escalation
Audit agent permissions regularly
ASI04 - Agentic Supply Chain Vulnerabilities
The Risk
Agents commonly rely on third-party tools, plugins, and model components.
Compromised tools can manipulate agent behaviour during runtime.
Why It Matters
Traditional supply chain attacks usually happen before deployment, but agentic supply chain risks can appear while agents are running and adding new tools.
CISO Response
maintain approved tool registries
Verify plugin and API provenance.
apply allow-listing controls
conduct vendor security reviews
ASI05 - Unexpected Code Execution
The Risk
Agents generate and execute code based on instructions or reasoning outputs.
If attackers manipulate that reasoning, they can trigger remote code execution within enterprise environments.
Why It Matters
When agents can both write and run code, a simple text input can lead directly to a system being compromised.
CISO Response
sandbox all agent-generated code
require human approval before execution
Apply container isolation
monitor execution environments
ASI06 - Memory and Context Poisoning
The Risk
Agents often store long-term context using:
vector databases
retrieval-augmented generation (RAG) systems
contextual embeddings
Attackers can poison these knowledge stores, changing future decisions.
Why It Matters
Memory poisoning, unlike prompt injection, stays in the system even after sessions end.
The agent remains compromised even after the attacker disappears.
CISO Response
Monitor changes to memory stores.
validate knowledge sources
restrict write permissions
Audit the persistent context regularly.
ASI07 - Insecure Inter-Agent Communication
The Risk
Multi-agent architectures rely on agents exchanging instructions.
Attackers can spoof or manipulate these communications.
Why It Matters
Compromising communication between agents can redirect entire workflows.
This type of security risk is new and doesn’t have a direct match in older systems.
CISO Response
encrypt agent communications
authenticate agent identities
validate message provenance
Apply zero-trust architecture
ASI08 - Cascading Failures
The Risk
A failure or compromise in one agent propagates across automated workflows.
Why It Matters
Agentic architectures usually connect multiple systems.
One bad input can set off several automated actions before anyone notices.
CISO Response
define blast-radius limits
Implement workflow circuit breakers
deploy kill switches
conduct resilience simulations
ASI09 - Human–Agent Trust Exploitation
The Risk
Agents present malicious recommendations in persuasive language, causing humans to approve unsafe actions.
Why It Matters
People tend to trust systems that sound smooth and confident.
Studies have shown that users often over-rely on AI outputs without adequate verification.
CISO Response
train staff to critically evaluate AI outputs
introduce structured approval workflows
require confirmation for high-risk actions
display uncertainty indicators
ASI10 - Rogue Agents
The Risk
A compromised or misaligned agent behaves outside intended parameters.
Why It Matters
Autonomous systems might keep running even if they start causing problems.
In complex setups, it can take time to notice these issues.
CISO Response
Implement behavioural monitoring
find anomalies
deploy immutable kill switches
conduct agent-focused red-team exercises
Real-World AI Security Incidents
Agentic risks are not hypothetical.
Several recent incidents have exposed the emerging attack surface.
Microsoft 365 Copilot and “EchoLeak”
Security researchers demonstrated how a malicious email could manipulate Microsoft 365 Copilot into exposing sensitive information through indirect prompt injection.
The attack exploited the agent’s access to enterprise data.
ChatGPT Plugins and Tool Invocation Risks
Early ChatGPT plugin ecosystems revealed how LLMs interacting with external services could be manipulated into unsafe actions.
Prompt injection, combined with connected tools, created new pathways for data leakage and action chaining across systems.
AutoGPT Vulnerabilities
Autonomous agent frameworks such as AutoGPT demonstrated vulnerabilities, including server-side request forgery and unsafe execution paths.
These incidents show that old application vulnerabilities can become much more serious when autonomous reasoning is involved.
How Governance Guidelines Help CISOs Respond
The OWASP agentic risk model works best when used alongside broader governance guidelines.
Three frameworks now form the foundation of AI governance:
OWASP - Threat Surface
OWASP identifies how agentic systems can be attacked.
NIST AI Risk Management Framework - Governance
The NIST AI RMF provides a working framework for managing AI risk across the lifecycle through four functions:
Govern
Map
Measure
Manage
This framework helps organisations turn AI risks into explicit rules and safety measures.
EU AI Act - Regulatory Accountability
The EU AI Act (Regulation EU 2024/1689) introduces a risk-based regulatory model for AI systems, emphasising:
transparency
manual supervision
cybersecurity
post-deployment monitoring
Organisations outside the EU are also starting to follow these AI governance principles.
The Concept of Least Agency
Across all ten risk categories, a single architectural concept appears as foundational:
Least Agency.
Just like least privilege limits user access, Least Agency limits what agents are allowed to do.
Agents should receive only the autonomy, permissions, and tool access required for their defined tasks.
This principle should be part of your system’s design from the very beginning.
What CISOs Should Do Now
Inventory AI agents across the enterprise
Map agent identities and permissions
Threat model against agentic risk frameworks
Conduct AI-focused red-team exercises.
Improve security governance for autonomous systems.
Engage executive leadership and boards.
Managing agentic AI is no longer only a technical problem. CISOs should brief boards with clear, business-focused narratives about agentic AI risks, highlighting both potential business impacts and examples of recent incidents. Providing concrete scenarios and linking these risks to wider organisational strategies will help boards appreciate the urgency and ensure leadership buy-in.
It has become a key business risk.
Final Thought
Moving from chatbots to agentic systems is one of the biggest changes in enterprise technology since cloud computing.
Organisations aren’t just using software to process requests anymore.
Now, they are using autonomous systems that can act on their behalf.
In this new environment:
OWASP helps identify where systems break.
NIST helps organisations govern them.
The EU AI Act shows where regulatory accountability is heading.
The organisations that thrive with AI won’t just be the fastest to deploy agents.
They will focus on deploying agents securely, openly, and with strong governance.
References
OWASP Foundation - OWASP Top 10 for LLM Applications
NIST - AI Risk Management Framework (AI RMF 1.0)
NIST - Generative AI Profile Companion Resource
European Union - Artificial Intelligence Act (Regulation EU 2024/1689)
CyberArk - Machine Identity Security Report
Stanford HAI - Human Trust in AI Research
AIM Security - EchoLeak Copilot vulnerability research
GitHub Security Advisories - AutoGPT vulnerabilities


