Darktrace: Generative AI to Amp Up Cybercriminals' Capabilities
In the longer term, offensive AI will be used throughout the attack life cycle.
Darktrace expects the mass availability of generative AI tools such as ChatGPT to significantly enhance attackers’ capabilities by giving them better tools to generate and automate human-like attacks.
New Darktrace data indicates early signs of attackers using AI and automation to their advantage for purposes such as phishing. The data shows attackers are moving away from executive impersonation and instead prioritizing the impersonation of other business-critical functions, such as IT teams.
Darktrace’s Findings on Generative AI
Key findings from Darktrace include:
Between May and July, Darktrace saw changes in attacks that attempt to abuse trust. While VIP impersonation (phishing emails that mimic senior executives) decreased by 11%, email account takeover attempts increased by 52% and impersonation of the internal IT team increased by 19%. The changes suggest that as employees have become better attuned to the impersonation of senior executives, attackers are pivoting to impersonating IT teams to launch their attacks.
In the same timeframe, Darktrace’s Cyber AI Research Center observed a 59% increase in multistage payload attacks across Darktrace customers. That’s when a malicious email encourages the recipient to follow a series of steps before delivering a payload or attempting to harvest sensitive information. This reflects an increase in QR code phishing attacks as a way to smuggle in malicious links and indicates increasing use of automation in attacks.
Darktrace detected nearly 50,000 more multistage payload attacks in July than in May. Automating these attacks would allow cybercriminals to hit more targets faster.
The newly released data demonstrates a challenge beyond automation and AI: the ever-changing patterns of attackers as they seek to evade defenses.
Darktrace’s Nicole Carignan
Nicole Carignan, Darktrace's vice president of strategic cyber AI, said that while generative AI has opened the door to providing offensive tools to more novice threat actors, the efficacy of these tools will only be as good as those directing them.
“At its infancy, we expect more sophisticated AI attacks to start at the nation-state level,” she said. “In the near term, that might mean an increase in speed and scale, but not necessarily generating new attack methods.”