SailPoint has unveiled a new report titled “AI agents: The new attack surface”, warning companies of the fast-growing security risks posed by autonomous AI systems. The report reveals that while 82% of organisations already use AI agents, only 44% have policies in place to secure them. This security gap is particularly alarming considering that 98% of organisations plan to expand their use of agentic AI within the next year, despite 96% of tech professionals acknowledging these agents as a growing risk.

“Agentic AI is both a powerful force for innovation and a potential risk,” said Chandra Gnanasambandam, EVP of Product and CTO at SailPoint. “These autonomous agents are transforming the nature of work, but they also introduce a new attack surface. They often operate with broad access to sensitive systems and data, yet have limited oversight. That combination of high privilege and low visibility creates a prime target for attackers.”

Agentic AI systems, often referred to as AI agents, are autonomous programs that perceive their environment, make decisions, and take actions to fulfil specific goals. Unlike traditional machine identities, these agents may self-modify, spawn sub-agents, and require broad access to sensitive systems.
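The perceive-decide-act cycle described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the loop structure, not anything from SailPoint's report; all names and the thermostat-style example are invented for clarity.

```python
# Hypothetical sketch of an agent's perceive-decide-act loop.
# The "environment" and goal here are illustrative only.

def perceive(environment):
    # Observe the current state of the environment.
    return environment["temperature"]

def decide(observation, goal):
    # Choose an action that moves the observation toward the goal.
    if observation < goal:
        return "heat"
    if observation > goal:
        return "cool"
    return "idle"

def act(environment, action):
    # Apply the chosen action back to the environment.
    if action == "heat":
        environment["temperature"] += 1
    elif action == "cool":
        environment["temperature"] -= 1

def run_agent(environment, goal, max_steps=10):
    # Loop until the goal is met or the step budget runs out.
    for _ in range(max_steps):
        action = decide(perceive(environment), goal)
        if action == "idle":
            break
        act(environment, action)
    return environment["temperature"]
```

Real agentic systems replace each of these functions with far more capable components (language models, tool calls, sub-agent spawning), which is precisely why their access footprint is so much larger than a traditional machine identity's.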

The report emphasises that 72% of respondents believe AI agents pose even greater risks than machine identities, citing key concerns such as their ability to access privileged data (60%), perform unintended actions (58%), share sensitive information (57%), and share inaccurate information (55%).

As organisations accelerate AI integration, they’re also creating a rapidly expanding—and largely unprotected—attack surface. These agents interact with financial records, customer data, legal documents, and intellectual property, yet many operate with little to no oversight.

The risks are not hypothetical

As many as 80% of companies say their AI agents have already taken unintended actions, with 39% accessing unauthorised systems and 33% sharing inappropriate data. In addition, 23% of respondents reported incidents where AI agents were tricked into revealing access credentials, highlighting a real and present danger.

Governance and identity security must evolve to keep pace. As AI agents become critical parts of business operations, organisations must treat them as a distinct identity type, subject to governance at least as strict as that applied to human users. SailPoint’s report calls for comprehensive solutions that can discover, monitor, and manage AI agents with zero standing privilege, unified visibility, and full auditability.
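The "zero standing privilege" model the report advocates can be sketched as follows: the agent identity holds no credentials at rest, requests a short-lived grant scoped to a single resource per task, and has every grant and access decision written to an audit log. This is a simplified, hypothetical illustration; the function names, scopes, and log format are invented, not SailPoint's implementation.

```python
# Hypothetical sketch of zero standing privilege for an AI agent
# identity: short-lived, narrowly scoped grants plus full auditability.
import time
import uuid

AUDIT_LOG = []  # append-only record of every grant and access decision

def issue_grant(agent_id, scope, ttl_seconds=60):
    # Mint a short-lived grant scoped to exactly one resource.
    grant = {
        "id": str(uuid.uuid4()),
        "agent": agent_id,
        "scope": scope,
        "expires": time.time() + ttl_seconds,
    }
    AUDIT_LOG.append(("grant_issued", agent_id, scope))
    return grant

def use_grant(grant, resource):
    # Enforce expiry and scope on every access attempt.
    if time.time() > grant["expires"]:
        AUDIT_LOG.append(("denied_expired", grant["agent"], resource))
        return False
    if resource != grant["scope"]:
        AUDIT_LOG.append(("denied_out_of_scope", grant["agent"], resource))
        return False
    AUDIT_LOG.append(("access_granted", grant["agent"], resource))
    return True
```

In this model an agent tricked into acting outside its task scope is denied and the attempt is logged, addressing the unauthorised-access and credential-exposure incidents the report describes.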
