October 14, 2025
AI Scams Cost Businesses Up to $1M a Year

Artificial intelligence is proving to be a double-edged sword for business. While companies embrace it to improve operations and customer experience, fraudsters are using the same technology to launch more sophisticated attacks, costing organisations serious money.
A new report from Fingerprint reveals that 41% of all fraud attempts are AI-powered, with nearly every organisation surveyed suffering financial losses over the past year. In fact, 30% of companies reported annual losses of up to $1 million linked to AI-driven scams.
The Smarter the Technology, the Smarter the Fraud
The State of AI Fraud and Privacy Report reveals how AI has made cybercrime more sophisticated, harder to spot, and even harder to prevent. Fraudsters use large language models to generate phishing emails, while automated bot attacks mimic human behaviour, causing huge losses for both customers and companies.
Ninety-nine percent of respondents reported losses from AI-enabled attacks in the past 12 months, with the average annual cost per company sitting around $414,000. The impact isn’t only financial. Ninety-three percent of fraud teams say operations have been disrupted, with many struggling to keep up with manual reviews and false positives.
The strain is particularly visible in the B2B SaaS industry, where 62% of organisations have seen a sharp rise in manual fraud checks as bots overwhelm existing systems.
Regulations Tighten the Net
Adding to the challenge, privacy-first technologies are making it harder for fraud teams to do their job. Shifts like Apple’s Intelligent Tracking Prevention, along with VPNs and stricter browser privacy settings, have dismantled the digital fingerprints companies once relied on to verify users.
More than 70% of respondents said these privacy tools have impacted their ability to detect fraud, and 40% claim identification accuracy has dropped significantly as a result.
Among sectors, financial services remains the biggest target for AI-driven scams. Fifty-four percent of banks report facing AI-powered fraud attempts, the highest rate of any industry. Banks also appear slower to modernise, with just one-third exploring AI-based fraud detection tools. However, one report revealed that banks aren't entirely sitting still when it comes to fighting back with AI of their own.
Fintechs, by contrast, are moving faster, as 52% say they’re already testing AI-powered defences. Still, almost half of them have faced attacks involving synthetic identities or forged documents, a growing trend among cybercriminals using generative AI to create convincing fake profiles.
Privacy-First Identification Could Be the Answer
Despite the rising costs, 90% of businesses intend to implement more persistent, privacy-friendly identification solutions over the course of the next year.
Dan Pinto, CEO and co-founder of Fingerprint, said: “At the same time, privacy regulations are rightfully shifting to give consumers more control. How do you stop sophisticated, automated threats when the old methods of identifying users are becoming obsolete? The answer must be a move toward more advanced, privacy-compliant identification methods.”
The change is part of a larger industry movement toward frictionless security, moving away from legacy passwords and multi-factor authentication towards device intelligence that silently authenticates trusted users.