Generative AI Changes Everything You Know About Email Cyber Attacks

In March 2023, a global survey of 6,711 employees across the UK, US, France, Germany, Australia, and the Netherlands was commissioned with Censuswide to better understand human behavior around email and potential security threats. Social engineering delivered via email has challenged cyber defenders for almost three decades and remains a profitable business for hackers. AI tools like ChatGPT are making it easier for threat actors to craft sophisticated, targeted attacks.

Darktrace research revealed a 135% increase in ‘novel social engineering attacks’ across thousands of active customers from January to February 2023, a rise that corresponds with the widespread adoption of ChatGPT. The survey results underline the concern: 71% of global employees worry that hackers can use generative AI to create scam emails indistinguishable from genuine communication; nearly 1 in 3 (30%) have fallen for a fraudulent email or text in the past; 70% have noticed an increase in the frequency of scam emails and texts in the last six months; and 87% are concerned about the amount of personal information available about them online that could be used in phishing and other email scams.

The research also revealed that native, cloud, and ‘static AI’ email security tools take an average of thirteen days to detect an attack after it has been launched against a victim. Organizations therefore need self-learning AI that can spot malicious emails before they reach employees and protect against evolving email threats.
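To make the contrast with ‘static AI’ concrete, here is a minimal, hypothetical sketch of the anomaly-detection idea behind self-learning email security. It is not Darktrace's implementation: the library choice (scikit-learn's IsolationForest), the feature names, and all values are illustrative assumptions. The point is that the model learns one organization's normal email traffic rather than matching signatures of known attacks.

```python
from sklearn.ensemble import IsolationForest
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-email features for one organization's normal traffic:
# [hour sent, number of links, similarity of sender domain to a known
#  contact's domain (0-1), whether the sender has emailed before (0/1)]
normal_traffic = np.column_stack([
    rng.normal(11, 2, 500),      # mostly sent during business hours
    rng.poisson(1, 500),         # few links per email
    rng.uniform(0.9, 1.0, 500),  # senders closely match known contacts
    np.ones(500),                # established correspondents
])

# "Self-learning": fit a model of this organization's normal email,
# with no signatures or knowledge of historical attacks required.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A fluent, well-written phish (e.g. LLM-generated) can evade content
# filters, but its metadata may still deviate from the learned norm:
# sent at 3 a.m., several links, lookalike domain, first-time sender.
suspect = np.array([[3, 4, 0.7, 0]])
print(model.predict(suspect))  # -1 => flagged as anomalous
```

The design point the sketch illustrates: because the model is fitted to a specific organization's traffic rather than to a catalogue of past attacks, a novel, AI-generated scam email can still stand out as anomalous even though no signature for it exists.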

Key Points:
• Social engineering has been a challenge for cyber defenders for almost three decades and is a profitable business for hackers.
• AI tools, like ChatGPT, are making it easier for threat actors to craft sophisticated and targeted attacks.
• 71% of global employees are concerned that hackers can use generative AI to create scam emails that are indistinguishable from genuine communication.
• Native, cloud, and ‘static AI’ email security tools take an average of thirteen days to detect an attack after it has been launched against a victim.
• Self-learning AI is needed that can spot malicious emails before they reach employees and protect organizations from evolving email threats.
