AI Scams Cost Businesses Up to $1M a Year


Artificial intelligence is proving to be a double-edged sword for business. While companies embrace it to improve operations and customer experience, fraudsters are using the same technology to launch more sophisticated attacks, costing organisations serious money.

A new report from Fingerprint reveals that 41% of all fraud attempts are AI-powered, with nearly every organisation surveyed suffering financial losses over the past year. In fact, 30% of companies reported annual losses of up to $1 million linked to AI-driven scams.

The Smarter the Technology, the Smarter the Fraud

The State of AI Fraud and Privacy Report, based on a survey of 300 fraud and technology leaders, reveals how AI has made cybercrime more sophisticated: harder to spot and harder still to prevent. Fraudsters use large language models to generate convincing phishing emails, while automated bot attacks mimic human behaviour, causing heavy losses for both customers and companies.

Ninety-nine percent of respondents reported losses from AI-enabled attacks in the past 12 months, with the average annual cost per company sitting around $414,000. The impact isn’t only financial. Ninety-three percent of fraud teams say operations have been disrupted, with many struggling to keep up with manual reviews and false positives.

The strain is particularly visible in the B2B SaaS industry, where 62% of organisations have seen a sharp rise in manual fraud checks as bots overwhelm existing systems.
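The kind of check that bots overwhelm can be as simple as a login-velocity rule: anything over a threshold gets kicked to a human. A minimal sketch (the window and threshold here are made-up values for illustration, not from the report):

```python
from collections import defaultdict, deque

# Flag any account with more than MAX_ATTEMPTS login attempts
# inside a WINDOW-second sliding window for manual review.
WINDOW, MAX_ATTEMPTS = 60, 5
attempts = defaultdict(deque)

def needs_manual_review(account: str, now: float) -> bool:
    q = attempts[account]
    q.append(now)
    # Evict attempts that fell out of the sliding window.
    while q and now - q[0] > WINDOW:
        q.popleft()
    return len(q) > MAX_ATTEMPTS
```

A bot firing six attempts in a few seconds trips the rule immediately; the trouble, as the survey suggests, is that every trip lands in a human review queue.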

Privacy Regulations Tighten the Net

Adding to the challenge, privacy-first technologies are making it harder for fraud teams to do their job. Shifts like Apple’s Intelligent Tracking Prevention, along with VPNs and stricter browser privacy settings, have dismantled the digital fingerprints companies once relied on to verify users.
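To see why these shifts are so disruptive, consider the naive fingerprint many systems relied on: a hash over a handful of browser and network attributes. This simplified sketch (real fingerprinting uses far more signals than the ones shown) demonstrates how a single change, such as a VPN swapping the IP address, breaks the match entirely:

```python
import hashlib
import json

def naive_fingerprint(attributes: dict) -> str:
    """Hash a canonicalised set of browser/network attributes into one ID."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

baseline = {"user_agent": "Mozilla/5.0 ...", "timezone": "Europe/London",
            "screen": "1920x1080", "ip": "203.0.113.7"}

# The same user returns through a VPN: one changed signal, different "identity".
via_vpn = dict(baseline, ip="198.51.100.42")
```

When tracking prevention caps attribute access and VPNs rotate addresses, the hash changes on every visit and the fingerprint stops identifying anyone.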

More than three-quarters of respondents (76%) said these privacy tools have impacted their ability to detect fraud, and 40% say identification accuracy has dropped significantly as a result.

Among sectors, financial services remains the biggest target for AI-driven scams. Fifty-four percent of banks report facing AI-powered fraud attempts, the highest of any industry. Yet banks also appear slower to modernise, with just one-third exploring AI-based fraud detection tools, though other reporting suggests banks aren't sitting entirely still when it comes to fighting back with AI of their own.

Fintechs, by contrast, are moving faster, with 52% saying they're already testing AI-powered defences. Still, almost half of them have faced attacks involving synthetic identities or forged documents, a growing trend among cybercriminals using generative AI to create convincing fake profiles.

SaaS Firms Feel the Weight of Scale

For SaaS providers, the issue isn't just detection, it's scale. High volumes of logins and privacy-conscious users make it difficult to distinguish between real and fraudulent activity. That has led to more credential stuffing, session spoofing, and bot-driven account takeovers, all requiring time-consuming manual reviews. Two-thirds of SaaS leaders still express confidence in their tools but admit they're struggling to manage the workload these evolving threats create.

Privacy-First Identification Could Be the Answer

Despite the mounting costs, companies are not standing still: 90% of organisations plan to adopt more persistent, privacy-compliant identification tools within the next year.

Dan Pinto, CEO and co-founder of Fingerprint, said: “At the same time, privacy regulations are rightfully shifting to give consumers more control. How do you stop sophisticated, automated threats when the old methods of identifying users are becoming obsolete? The answer must be a move toward more advanced, privacy-compliant identification methods.”

The change is part of a larger industry movement toward frictionless security, moving away from legacy passwords and multi-factor authentication towards device intelligence that silently authenticates trusted users.
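In server-side terms, that frictionless model amounts to a simple decision: only challenge a login when the device isn't already trusted. A minimal sketch, assuming a hypothetical device_id supplied by a client-side device-intelligence SDK (neither the identifier scheme nor the API is from the report):

```python
# Known (user, device) pairs, e.g. populated after a past successful MFA.
TRUSTED_DEVICES = {("alice", "dev-7f3a")}

def requires_challenge(user: str, device_id: str) -> bool:
    """Return True when a login should fall back to an MFA challenge.
    Trusted devices are verified silently, with no user-visible friction."""
    return (user, device_id) not in TRUSTED_DEVICES
```

A returning user on a recognised device sails through; an unrecognised device triggers step-up authentication, keeping friction only where the risk is.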