How AI technology is being used to fight the voice fraud threat

Dr. Nikolay Gaubitch, director of research at Pindrop, explains how the voice fraud threat goes largely undetected by most organisations' cyber protections

Even in a world with a plethora of digital communication channels, voice remains one of the most important (and natural) ways for people to connect with others. An unfathomable number of calls are made every day, from sales and marketing activity to customer service, and people simply catching up with friends and family. But as with all forms of useful technology, the telephony channel is continually targeted by fraudsters looking to exploit the system. 

Dr. Nikolay Gaubitch is director of research at Pindrop, a company whose story began when co-founder Vijay Balasubramaniyan was travelling in India and tried to order a new suit from a local tailor. His bank immediately flagged the international transaction as suspicious and called him to verify the purchase, but Vijay had no way to prove his identity over the phone, and the bank cancelled his order. The experience led Vijay to establish Pindrop in 2011, in order to find a better way for people to authenticate over the phone.

With the telephony channel now firmly at the fore of the fraud landscape, Gaubitch provides an engaging overview of the ins and outs of voice fraud - and why businesses should be taking it extremely seriously.

Why does voice fraud fall under the radar in many organisations?

In an increasingly remote working environment, call centres have become an important channel for organisations to connect with their customers. However, in the digital age we live in, the security and protection of the telephony channel has often not been a top priority for businesses. 

With the majority of business being carried out online, organisations have long secured their digital channels, helped by the wide range of options available in the market. However, the telephone channel rarely has the same level of protection or regulation. 


What tactics and techniques do fraudsters typically use to target organisations?

Most commonly, fraudsters rely on social engineering techniques. They often pose as their victim with the objective of obtaining the information required to perform malicious attacks. This information is typically gathered online, over the telephone, or in its rawest form from your rubbish bin. They then use the telephony channel to impersonate a legitimate customer, either to verify the gathered information or to trick the agent into carrying out fraudulent transactions. 

Taking this a step further, some fraudsters carry out what we call intercept attacks, where they are on the phone to both an organisation's call centre and the victim at the same time. This technique enables fraudsters to gather the relevant data in real time and authenticate as the customer through the traditional method of knowledge-based authentication (KBA), where they must provide information such as their mother's maiden name or month of birth. 


Can organisations call on technology to help them to detect fraud?

Absolutely! When we talk about using technology to combat fraud it’s useful to look at two sides of the coin – fraud detection and authentication. 

When combating voice fraud, it’s vital to look at both stopping the fraudsters and ensuring good customer experience isn’t compromised. Stopping fraudsters in a timely manner without impacting the experiences of genuine callers is not something humans can accomplish alone. This is where technology, and more specifically artificial intelligence (AI) and machine learning (ML), can play a vital role.

No human can be expected to monitor for signs of fraud across the hundreds of calls they may take in a day. Instead, organisations can implement an anti-fraud solution that runs on AI and machine learning in their call centres. 

When it comes to detecting fraud, the technology can passively analyse audio, voice, behaviour, and metadata from every call, with the aim of spotting the subtle signs that indicate a potential fraudster. 
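To make that idea concrete, the sketch below shows, in Python, one hypothetical way per-call signals of the kind described here (audio, voice, behaviour and metadata) could be combined into a single risk score. The signal names, weights and threshold are illustrative assumptions for this article, not a description of Pindrop's actual models.

# Hypothetical sketch: combine per-call risk signals into one score.
# All feature names, weights and the flagging threshold are assumptions.

from dataclasses import dataclass

@dataclass
class CallSignals:
    audio_anomaly: float    # 0-1: unusual codec/noise profile for the claimed origin
    voice_mismatch: float   # 0-1: distance from the enrolled customer's voiceprint
    behaviour_risk: float   # 0-1: e.g. hesitant keypad entry, scripted answers
    metadata_risk: float    # 0-1: e.g. spoofed caller ID, unexpected carrier or country

def call_risk_score(s: CallSignals) -> float:
    """Weighted combination of per-call risk signals into a single score."""
    weights = {"audio": 0.25, "voice": 0.35, "behaviour": 0.2, "metadata": 0.2}
    return (weights["audio"] * s.audio_anomaly
            + weights["voice"] * s.voice_mismatch
            + weights["behaviour"] * s.behaviour_risk
            + weights["metadata"] * s.metadata_risk)

suspicious = CallSignals(audio_anomaly=0.7, voice_mismatch=0.8,
                         behaviour_risk=0.4, metadata_risk=0.9)
score = call_risk_score(suspicious)
print(f"risk score: {score:.2f}", "-> flag for review" if score > 0.6 else "-> continue normally")

In practice, a production system would learn such weightings from labelled fraud data rather than hand-coding them, but the weighted-combination shape of the decision is the same.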

For authentication, the technology can be used in addition to or instead of the traditional KBA I mentioned earlier. Such technology can determine the caller’s identity quickly and seamlessly by creating unique multi-factor credentials based on the device, voice, and behaviour of the customer. This gives the call agent peace of mind and the confidence that they are speaking to a legitimate customer. The key benefit here is that the call agent can service the customer faster and in a more personalised way, rather than treating them as a potential fraudster. 
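As an illustration of the multi-factor idea, here is a minimal sketch, assuming each factor (device, voice and behaviour) has already been compared against the customer's enrolled profile. The two-of-three rule and the fallback to KBA are assumptions made for the example, not Pindrop's product logic.

# Hypothetical sketch: turn per-factor match results into an authentication outcome.

from typing import Dict

def authenticate_caller(factor_match: Dict[str, bool]) -> str:
    """Return an authentication outcome from per-factor match results."""
    matched = sum(factor_match.get(f, False) for f in ("device", "voice", "behaviour"))
    if matched >= 2:
        return "authenticated"        # agent can skip security questions
    if matched == 1:
        return "step_up_to_kba"       # fall back to knowledge-based questions
    return "route_to_fraud_review"

print(authenticate_caller({"device": True, "voice": True, "behaviour": False}))  # -> authenticated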

Fraud detection and customer authentication complement each other. If a fraudster attempts to trick the authentication system, the fraud detection element will step in, and vice versa. 
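A toy routing function can illustrate that complementarity, taking a risk score from detection and an outcome from authentication like those in the sketches above; the threshold and routing rules are again purely illustrative.

# Hypothetical sketch: detection and authentication back each other up.

def route_call(risk_score: float, auth_outcome: str) -> str:
    """Combine fraud-detection and authentication results into a routing decision."""
    if risk_score > 0.6:
        return "fraud_review"          # detection overrides even a passed authentication
    if auth_outcome == "authenticated":
        return "serve_normally"
    return "step_up_verification"      # low risk but weak authentication: ask more questions

print(route_call(0.72, "authenticated"))  # -> fraud_review
print(route_call(0.15, "authenticated"))  # -> serve_normally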

