Scam email cyber attacks increase after rise of ChatGPT

Darktrace research has revealed a 135% increase in ‘novel social engineering’ attacks in 2023 amidst the widespread availability of ChatGPT

Darktrace, a global leader in cyber security AI, has revealed that its researchers observed a 135% increase in ‘novel social engineering attacks’ from January to February 2023, corresponding with the widespread adoption of ChatGPT.  

These novel social engineering attacks use sophisticated linguistic techniques, including increased text volume, punctuation, and sentence length, and arrive with no links or attachments. Darktrace’s research suggests that generative AI is providing an avenue for threat actors to craft sophisticated and targeted attacks at speed and scale.
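
Darktrace has not published its detection logic, but the surface markers cited in the research are straightforward to compute. The Python sketch below is a hypothetical illustration (the function name and feature choices are our own, not Darktrace’s) of how text volume, punctuation density, sentence length and the presence of links might be extracted from an email body:

```python
import re

PUNCT = set(",.;:!?'\"()-")

def linguistic_features(body: str) -> dict:
    """Extract simple surface features from an email body: text volume,
    punctuation density, average sentence length, and link presence."""
    words = body.split()
    sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
    n_punct = sum(ch in PUNCT for ch in body)
    return {
        "word_count": len(words),
        "punctuation_per_100_words": 100 * n_punct / max(len(words), 1),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "has_links": bool(re.search(r"https?://", body)),
    }
```

In practice, features like these would be only one input among many; the research’s point is that generated text scores high on volume and linguistic complexity while carrying none of the traditional payload indicators such as links or attachments.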

In March 2023, Darktrace commissioned a global survey with Censuswide to gather third-party insights into human behaviour around email: how employees worldwide react to potential security threats, how well they understand email security, and the modern technologies being used to transform the threats against them.

“Email security has challenged cyber defenders for almost three decades,” commented Max Heinemeyer, Chief Product Officer at Darktrace. “Since its introduction, many additional communication tools have been added to our working days but for most industries and employees, email remains a staple part of everyone’s job. 

“As such, it remains one of the most useful tools for attackers looking to lure victims into divulging confidential information through communication that exploits trust, blackmails, or promises reward so that threat actors can get to the heart of critical systems, every single day.”

Concerns about generative AI creating email scams

The emergence of ChatGPT has catapulted AI into the mainstream consciousness – nearly a quarter (24%) of respondents in the UK said they have already tried ChatGPT or other generative AI chatbots for themselves – and with it, real concerns have emerged about its implications for cyber defence. Almost three in four employees (73%) are concerned that hackers can use generative AI to create scam emails indistinguishable from genuine communications. 

Emails claiming to come from CEOs or other senior business leaders rank third among the types of email employees are most likely to engage with, cited by almost one in five respondents (19%). Defenders are up against generative AI attacks that are linguistically complex and entirely novel: scams that use techniques and reference topics never seen before.  

Nearly a third of UK employees have sent an important email to the wrong recipient with a similar-looking alias, whether by mistake or due to autocomplete. This rises to over two in five (43%) in the financial services industry and 41% in the legal industry, adding another layer of security risk that isn’t malicious. A self-learning system can spot this error before the sensitive information is shared. Unlike traditional email security tools, self-learning AI in email is not trained on what ‘bad’ looks like; instead, it learns the normal ‘patterns of life’ for each user and each unique organisation. 

By understanding what’s normal, it can determine what doesn’t belong in a particular individual’s inbox. Email security systems get this wrong too often, with 71% of respondents saying that their company’s spam/security filters incorrectly stop important legitimate emails from getting to their inbox. 
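
Darktrace’s models are proprietary, but the “learn what’s normal, flag what isn’t” idea can be sketched in a few lines. The toy Python class below (the class name and z-score threshold are illustrative assumptions, not Darktrace’s method) builds a per-inbox baseline from numeric features, such as those extracted above, and flags messages that deviate sharply from it:

```python
from statistics import mean, stdev

class InboxBaseline:
    """Toy anomaly detector: learns a per-user baseline of numeric email
    features, then flags messages whose features deviate sharply.
    Illustrative only -- real self-learning systems model far richer
    behaviour (senders, timing, relationships, content)."""

    MIN_HISTORY = 10  # need enough mail to define 'normal'

    def __init__(self, z_threshold: float = 3.0):
        self.history: list[dict] = []
        self.z_threshold = z_threshold

    def observe(self, features: dict) -> None:
        """Record a legitimate message's features as part of 'normal'."""
        self.history.append(features)

    def is_anomalous(self, features: dict) -> bool:
        """Flag the message if any feature sits far outside the baseline."""
        if len(self.history) < self.MIN_HISTORY:
            return False
        for key, value in features.items():
            past = [h[key] for h in self.history]
            sigma = stdev(past) or 1.0  # avoid dividing by zero
            if abs(value - mean(past)) / sigma > self.z_threshold:
                return True
        return False
```

The same baseline idea applies to misdirected email: a recipient address that has never appeared in a user’s history is, by definition, outside the learned pattern and can be queried before the message leaves.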

“The email threat landscape is evolving. For 30 years security teams have given employees training on spotting spelling mistakes, suspicious links, and attachments,” Heinemeyer adds. “While we always want to maintain a defence-in-depth strategy, there are increasingly diminishing returns in entrusting employees with spotting malicious emails. At a time when readily available technology allows attackers to rapidly create believable, personalised, novel and linguistically complex phishing emails, humans are more ill-equipped than ever to verify the legitimacy of ‘bad’ emails. Defensive technology needs to keep pace with the changes in the email threat landscape; we have to arm organisations with AI that can do that.”

