The advent of AI has given rise to a tsunami of fake voice and video content, threatening the security of contact centre operations. Contact centre managers might think ‘deepfake’ content won’t happen to them, but it is a real and present danger. Ben Colman, co-founder and CEO of Reality Defender, an AI-driven deepfake detection company, has witnessed the escalating danger first-hand.

“Deepfake technology has been evolving for a few years, but what’s changed over the last year is that it’s now publicly measurable. We have clients across various sectors actively tracking deepfake fraud attempts daily,” Colman told Customer Experience Magazine.

According to a recent whitepaper from identity authentication firm authID, deepfake-driven fraud is experiencing explosive growth. AI-powered scams now make up nearly half of all fraud attempts in the financial sector.

“The main trend is the move from voice-based fraud to video-based,” Colman explained.

Creating such content now requires minimal technical skill, which significantly broadens the potential for widespread malicious use.

“With advancements allowing deepfakes to be created locally on a computer, technologies like Zoom and Teams are also increasingly vulnerable. Even basic online information can be leveraged.”

The escalating threat to contact centres 

Ben Colman, co-founder and CEO of Reality Defender

Traditional security measures in call centres, often centred on verifying personal information, are increasingly inadequate against convincing audio and video impersonations. These dated checks only prove that somebody holds someone else’s information, not that they are who they claim to be.

“We detect whether the voice or face itself is genuine,” explained Colman. Reality Defender shifts the focus from verifying who someone claims to be to checking the authenticity of their biometric data.

AuthID’s report details two key attack methods: presentation attacks, in which a deepfake is presented to a camera, and injection attacks, in which deepfake images are inserted directly into verification software, bypassing traditional security layers. That even human reviewers struggle to identify presentation fakes in 99% of cases highlights the sophistication of these techniques.

Additionally, the financial consequences can be severe, as evidenced by the reported $25 million wire transfer orchestrated via a deepfake video call.

The role of AI in proactive defence 

Reality Defender’s technology integrates into existing call centre workflows, analysing audio and video in real-time to identify AI-generated characteristics.
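Reality Defender has not published how its real-time analysis is implemented, but a minimal sketch can illustrate the general pattern of chunked, in-call scoring described above. Everything below is hypothetical: `score_chunk` stands in for an unspecified detection model, and the window size, alert threshold, and file name are illustrative assumptions rather than Reality Defender parameters.

```python
# Hypothetical sketch of real-time, chunked deepfake scoring in a call flow.
# score_chunk() is a placeholder for an unspecified detection model; the
# window length and alert threshold are illustrative assumptions only.
import wave

CHUNK_SECONDS = 2        # assumed analysis window per scoring pass
ALERT_THRESHOLD = 0.8    # assumed probability above which a call is flagged

def score_chunk(pcm_bytes: bytes) -> float:
    """Placeholder: return a probability in [0, 1] that audio is AI-generated."""
    return 0.0  # a real system would run a trained detection model here

def monitor_call(path: str) -> None:
    """Read a call recording in fixed-size chunks and flag suspect audio."""
    with wave.open(path, "rb") as call:
        frames_per_chunk = call.getframerate() * CHUNK_SECONDS
        position = 0
        while True:
            pcm = call.readframes(frames_per_chunk)
            if not pcm:
                break
            score = score_chunk(pcm)
            if score >= ALERT_THRESHOLD:
                print(f"Possible deepfake at {position * CHUNK_SECONDS}s "
                      f"(score {score:.2f}) - escalate for review")
            position += 1

if __name__ == "__main__":
    monitor_call("inbound_call.wav")  # hypothetical recording
```

In a live deployment the same loop would run on the streaming audio of an active call rather than a saved file, so an agent could be alerted while the caller is still on the line.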

“We’re the only company focusing exclusively on deepfake detection,” claimed Colman.

Unlike systems that handle sensitive personal data, Reality Defender’s approach focuses solely on identifying the artificial origin of the media, mitigating privacy concerns. “We focus on detection, not data processing. In addition, our clients retain full control of their data within their infrastructure,” clarified Colman.

An optimistic outlook on AI security   

Although AI is currently accelerating the growth of deepfake content, the technology has a vital role to play in securing call centres.

“While AI empowers malicious actors, its potential for defence is even greater,” said Colman. “Our AI-powered detection models can analyse vast amounts of data and identify subtle anomalies indicative of AI manipulation at scale.”

“We’ve expanded our detection capabilities to include audio, video, images, and text, with real-time analysis for audio and video,” he added.

Reality Defender’s strategic partnerships help boost these capabilities even further. “We’re also integrating with third-party platforms to broaden the reach of our detection capabilities,” he said.

“Our collaborations with generative AI companies also provide us with early access to their models, allowing us to proactively develop detection methods,” he added.
