Pindrop has just released its 2025 Voice Intelligence & Security Report, and the findings are both staggering and urgent. The report sheds light on a rapidly evolving fraud landscape dominated by AI-powered deepfakes and synthetic voice attacks that are targeting organisations across industries at an unprecedented scale.

“Voice fraud is no longer a future threat—it’s here, and it’s scaling at a rate that no one could have predicted,” said Vijay Balasubramaniyan, CEO and co-founder of Pindrop. “Deepfakes, synthetic voice tech, and AI-driven scams are reshaping the fraud landscape. The numbers are staggering, and the tactics are growing more sophisticated by the day.”

One of the most alarming revelations: deepfake voice fraud attempts skyrocketed by over 1,300% in 2024. What was once a rare event—about once a month—has become a daily threat, now averaging seven incidents per day. Contact centres, which often serve as the front lines of customer interaction, have become a primary battleground, seeing the highest fraud rates in six years.

Pindrop’s analysis of over 1.2 billion customer calls in 2024 revealed sharp increases in fraudulent activity. Synthetic voice attacks surged by 475% at insurance companies and 149% at banks, while synthetic call volumes overall jumped 173% between Q1 and Q4. Compounding the challenge, there was a 61% spike in attempts to steal personally identifiable information (PII) and bank credentials, and a 26% overall increase in fraud attempts, far exceeding initial projections.

Retail is a Vulnerable Sector

The report notes that fraud attempts in retail have now doubled, with one in every 127 calls deemed fraudulent—a rate five times higher than in the financial sector. This dramatic shift signals that attackers are branching out beyond traditional financial targets, aiming at sectors where defences are often weaker.

But the threats aren’t stopping at the contact centre. The report paints a broader, more troubling picture: AI is fueling fraud across digital channels. Pindrop highlighted the growing use of spoofing-as-a-service tools, AI-enhanced phishing techniques, and voice modulation software. Attackers are also leveraging breached PII and underground tutorials to bypass security controls that many organisations still rely on.

Recruitment Processes are Under Siege

Pindrop reports a rise in deepfake job applicants—individuals using AI-generated voices and avatars to deceive hiring managers during remote interviews. These tactics threaten to undermine corporate hiring pipelines and raise serious concerns about trust and identity in virtual interactions.

In response to these evolving threats, Pindrop has expanded its solutions with Pindrop® Pulse™ for Meetings, a product designed to detect deepfakes in voice and video in real time. This innovation aims to protect businesses during sensitive communications where identity verification is critical.

The fraud landscape is becoming more complex and more AI-driven. The data suggests that AI now powers over 42.5% of all fraud attempts, with nearly one-third considered successful. Face swap attacks have spiked by 704%, and mobile web injection fraud rose by 255% in 2023, according to insights from partner firm iProov.

Looking ahead to 2025, the forecast is even more concerning. Pindrop expects deepfake call volumes to rise another 155%, with deepfake-related fraud growing by 162% and comprising a larger share of total fraud activity. Retail fraud, which climbed 107% in 2024, is on track to double again, potentially reaching one fraudulent call in every 56. And for contact centres, the financial impact is daunting: they could face over $44.5 billion in fraud exposure in the coming year.
