July 31, 2025
AI Agents Might Be the Reason Fraud Detection Tools Are Going Blind

The rapid rise of consumer-facing AI agents is undermining the very systems designed to detect online fraud, and most businesses aren’t prepared for what’s coming.
Transmit Security’s new report, Blinded by the Agent, reveals that traditional fraud detection technologies are failing in the face of AI-powered agents that now act on behalf of users. Agents built on platforms such as OpenAI’s ChatGPT can perform a range of tasks, from navigating websites to completing transactions, without exhibiting any of the human signals fraud systems were built to recognise.
The result is a growing blind spot in digital security. As these agents become more common, they strip away the behavioural clues, such as typing speed, mouse movement, and device familiarity, that systems use to verify legitimate users. The report notes that current fraud prevention tools, including behavioural biometrics, device fingerprinting, and bot detection, were designed to track human activity. When the human disappears, so does the protection those tools provide.
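To make the problem concrete, here is a minimal sketch of the kind of behavioural heuristic described above. The field names (mouse_moves, key_intervals_ms, known_device) are hypothetical and the thresholds arbitrary; real products use far richer models, but the failure mode is the same: an agent-driven session simply produces no human signal to score.

```python
def looks_human(session: dict) -> bool:
    """Crude check for the human signals a fraud engine typically expects."""
    # Some mouse activity during the session
    has_mouse_activity = len(session.get("mouse_moves", [])) > 5

    # Humans type with irregular timing; scripted input is uniform or absent
    intervals = session.get("key_intervals_ms", [])
    has_natural_typing = len(intervals) > 3 and (max(intervals) - min(intervals)) > 40

    # The device has been seen with this account before
    on_known_device = session.get("known_device", False)

    return has_mouse_activity and has_natural_typing and on_known_device


# An AI agent driving a headless browser emits none of these signals,
# so a perfectly legitimate agent-assisted purchase scores like a bot.
agent_session = {"mouse_moves": [], "key_intervals_ms": [], "known_device": False}
print(looks_human(agent_session))  # False -> declined or challenged
```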
In addition, more than 60% of online traffic to retailers is already generated by bots. With AI agents now being adopted by consumers, that figure is expected to exceed 90% in the near future.
“Fraud controls today were built for a world where humans click the buttons. But now, AI is clicking them for us — and the systems can’t tell the difference between AI operated by legitimate users and AI operated by fraudsters,” said Mickey Boodaei, CEO and Co-Founder of Transmit Security.
Fraudsters Are Hiding Behind Legitimate Agents
Fraudsters have taken notice, shifting tactics to exploit legitimate agents as a cover. Because these agents operate within the rules, they pass through most fraud detection layers unchecked, rendering even advanced systems ineffective.
The impact is twofold. First, fraud losses are projected to rise sharply, by as much as 500% in the coming years, as organisations lose visibility into who’s really behind each transaction. Second, legitimate customers using AI agents are increasingly facing false declines, as existing systems struggle to differentiate between good and bad behaviour. This not only increases friction but also damages the customer experience.
Adding to the pressure, fraud teams are expected to see their workload double or even triple over the next 12 to 18 months just to keep up with the changing threat landscape. Much of this stems from the fact that AI agents often run from cloud-based environments, constantly changing locations and device characteristics. That breaks device fingerprinting models, which rely on consistent identifiers.
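To see why rotating cloud infrastructure defeats fingerprinting, consider a simplified fingerprint built by hashing a handful of session attributes. The attributes and values below are illustrative, not any vendor’s actual signals, but they show how an agent that gets a fresh container, IP, and environment per task never presents a stable identifier to trust.

```python
import hashlib


def fingerprint(attrs: dict) -> str:
    """Simplified device fingerprint: a hash over a few session attributes."""
    canonical = "|".join(f"{key}={attrs[key]}" for key in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]


# A returning human on their own laptop yields the same fingerprint each visit.
laptop = {"user_agent": "Mozilla/5.0 ...", "ip": "203.0.113.7",
          "screen": "1440x900", "timezone": "Europe/London"}
print(fingerprint(laptop) == fingerprint(laptop))  # True

# A cloud-hosted agent changes IP, environment, and version between runs,
# so every session looks like a brand-new, never-seen device.
run_1 = {"user_agent": "agent/1.2", "ip": "198.51.100.23",
         "screen": "1920x1080", "timezone": "UTC"}
run_2 = {"user_agent": "agent/1.3", "ip": "192.0.2.144",
         "screen": "1280x720", "timezone": "UTC"}
print(fingerprint(run_1) == fingerprint(run_2))  # False -> no stable identifier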
Similarly, behavioural biometrics become irrelevant when no human is interacting with the interface. Meanwhile, bot detection systems are being forced to whitelist popular AI agents, even though they can’t verify whether the end user is a real customer or a fraudster hiding behind automation.
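The whitelisting dilemma can be sketched in a few lines. The agent signatures below are invented for the example, and real bot management platforms combine many more signals, but the blind spot is structural: recognising the agent platform says nothing about the person instructing it.

```python
# Hypothetical allowlist of recognised AI agent signatures (made up for illustration).
KNOWN_AGENT_SIGNATURES = {"ExampleAIAgent", "ExampleShoppingAssistant"}


def admit(user_agent_header: str) -> str:
    """Decide how to treat a request based only on its declared agent identity."""
    if any(signature in user_agent_header for signature in KNOWN_AGENT_SIGNATURES):
        # The platform is recognised and allowed through, but nothing here
        # reveals whether the person who instructed the agent is a genuine
        # customer or a fraudster working with stolen credentials.
        return "allow"
    return "challenge"


print(admit("Mozilla/5.0 (compatible; ExampleAIAgent/1.0)"))  # allow -> friend or foe unknown
```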