Darktrace: Generative AI to Amp Up Cybercriminals' Capabilities
In the longer term, offensive AI will be used throughout the attack life cycle.
![Generative AI and security](https://eu-images.contentstack.com/v3/assets/blt10e444bce2d36aa8/blt4ac0e884ae192fba/6537c9700e413931426909ce/Generative-AI.jpg?width=700&auto=webp&quality=80&disable=upscale)
SomYuZu/Shutterstock
In the longer term, malicious hackers will use offensive AI throughout the attack life cycle, said Nicole Carignan, Darktrace's vice president of strategic cyber AI. That includes using natural language processing (NLP) or large language models (LLMs) to understand written language and craft contextualized spear-phishing emails at scale, or image classification to speed up the exfiltration of sensitive documents once an environment is compromised and the attackers are on the hunt for material they can profit from.
“AI will make it possible for machines to deploy unique attacks at scale – always on, continuously morphing at machine speed,” she said.
Defensive AI that detects anomalous behavior at scale has been protecting organizations against sophisticated threat actors and tools for years, Carignan said. Whether an attack is AI-powered, automated or driven by a sophisticated threat actor, AI designed to isolate anomalous, suspicious behavior specific to an organization can detect and defend in machine time.
“It’s also important to remember that one AI technique should not be used to answer every problem or objective, but collaborative and competing AI techniques should be applied to gather all of the relevant information for an investigation, provide additional context, conduct hypothesis-based investigations and evaluate the hypothesis with hundreds or thousands of data points,” she said. “That means finite, specific AI approaches and models with tested accurate outcomes working together as a human would to evaluate various data points – but at scale. Then, with transparency and explainability, a human security operator can evaluate the outcome of the AI tools building trusting relationships with algorithms, as well as feeding back human intelligence into the cycle.”
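To make that idea concrete, here is a minimal sketch of ensemble-style anomaly detection with a simple explainability layer, assuming scikit-learn is available. The telemetry features, baseline data and scoring logic are invented for illustration and are not Darktrace's models:

```python
# Illustrative only: blend two simple anomaly detectors and report which
# features drove the score, so a human analyst can review the call.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

FEATURES = ["bytes_out", "login_hour", "distinct_hosts"]  # hypothetical telemetry

# Baseline of "normal" activity for one organization (toy data).
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[500.0, 10.0, 3.0], scale=[100.0, 2.0, 1.0], size=(1000, 3))

iso = IsolationForest(random_state=0).fit(baseline)
lof = LocalOutlierFactor(novelty=True).fit(baseline)

def score_event(event):
    """Blend two detectors (lower scores = more anomalous) and surface
    per-feature deviation from baseline for a human reviewer."""
    scores = {
        "isolation_forest": float(iso.decision_function([event])[0]),
        "local_outlier_factor": float(lof.decision_function([event])[0]),
    }
    # Explainability: standard deviations each feature sits from the baseline.
    z = (np.asarray(event) - baseline.mean(axis=0)) / baseline.std(axis=0)
    return {"scores": scores, "feature_deviation": dict(zip(FEATURES, np.round(z, 2)))}

# A large overnight transfer touching many hosts should score as anomalous.
print(score_event([5000, 3, 40]))
```

Two independent detectors voting on the same event, plus a human-readable deviation report, mirrors the "collaborative and competing AI techniques" plus "transparency and explainability" pattern described in the quote.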
The need for defenders to be everywhere, all at once, has pushed the adoption of AI in security, Carignan said. The volume and sophistication of threats have grown exponentially in recent years, making it extremely difficult for human security teams to monitor, detect and react to every threat or attempted attack.
“Due to the complexity of modern systems, thousands of micro-decisions now need to be made daily to match an attacker’s spontaneous and erratic behavior, and to stand a fighting chance at not only spotting threats, but prioritizing and containing them,” she said. “As AI and automation enable attackers to operate at increased speed and scale, this is only poised to get more challenging. This has become a job for AI. AI can perform thousands of calculations in real time to detect suspicious behavior and perform the micro decision-making necessary to respond to and contain malicious behavior in seconds. In an ideal scenario, this contains an ongoing incident in a targeted way, drastically reducing potential damage, as well as giving security teams time to respond with a more cohesive remediation action and strategy. Today, thousands of organizations entrust AI to interrupt in-progress, sophisticated attacks without trying to rely on humans to take the sledgehammer out and interrupt wider business operations in the incident response process. Adoption will need to increase in the future as novel threats become the new normal.”
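The targeted-containment-versus-sledgehammer trade-off can be expressed as a tiny decision policy. The sketch below is purely illustrative; the thresholds, event fields and action names are hypothetical, not any real product's response logic:

```python
# Hypothetical containment policy: respond to a suspicious event in the most
# targeted way available, escalating to host isolation only at high confidence.
from dataclasses import dataclass

@dataclass
class Event:
    host: str
    remote_ip: str
    anomaly_score: float  # 0.0 (normal) to 1.0 (highly anomalous)

def contain(event: Event) -> str:
    """Pick the narrowest action that addresses the observed risk."""
    if event.anomaly_score >= 0.95:
        return f"isolate host {event.host}"                           # last resort
    if event.anomaly_score >= 0.80:
        return f"block connection {event.host} -> {event.remote_ip}"  # targeted
    if event.anomaly_score >= 0.60:
        return f"alert analyst about {event.host}"                    # human in the loop
    return "no action"

print(contain(Event("laptop-042", "203.0.113.7", 0.83)))
# -> block connection laptop-042 -> 203.0.113.7
```

The point of the design is that the middle tiers interrupt only the malicious activity, leaving the rest of the business running while analysts build a fuller remediation plan.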
Dave Gerry, CEO at Bugcrowd, said safety and privacy should remain a top concern for any tech company, whether or not it is AI-focused.
“When it comes to AI, ensuring that the model has the necessary safeguards, feedback loop, and most importantly, mechanism for highlighting safety concerns, is critical,” he said. “As organizations rapidly adopt AI for all of the efficiency, productivity and democratization of data benefits, it’s important to ensure that as concerns are identified, there is a reporting mechanism to surface those in the same way a security vulnerability would be identified and reported.”
Bob Janssen, Delinea's vice president of engineering and head of innovation, said that unlike other AI tools, generative AI focuses on creating new data rather than analyzing existing data.
“It enables the development of realistic synthetic data, which can be used for training and testing security models without exposing sensitive information,” he said. “It is a game changer in how organizations address cloud security. It provides realistic synthetic data for testing, simulates sophisticated attack scenarios, and minimizes the risk of exposing sensitive information during development, enhancing overall security measures.”
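As a rough illustration of the synthetic-data idea Janssen describes, the following sketch fabricates authentication-log records that mimic the shape of real telemetry while containing no real identities. Every field name and distribution here is invented:

```python
# Illustrative synthetic-data generator: fake authentication logs with no
# real usernames, IPs or timestamps. All fields and rates are invented.
import random
from datetime import datetime, timedelta

random.seed(42)  # reproducible toy data

def synthetic_auth_log(n: int) -> list:
    """Generate n fake authentication records."""
    records, t = [], datetime(2023, 7, 1)
    for _ in range(n):
        t += timedelta(minutes=random.expovariate(1 / 5))  # ~1 login per 5 min
        records.append({
            "user": f"user{random.randint(1, 200):03d}",   # synthetic identity
            "src_ip": f"10.0.{random.randint(0, 255)}.{random.randint(1, 254)}",
            "timestamp": t.isoformat(),
            "success": random.random() > 0.05,             # ~5% failed logins
        })
    return records

logs = synthetic_auth_log(1000)
print(logs[0])
```

Records like these can seed model training or attack-scenario simulation in development environments without exposing production data.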
Patrick Harr, CEO at SlashNext, said generative AI is a game changer for cybercriminals, who can use it to develop, disseminate and modify attacks very quickly. However, it has also improved security efficacy in organizations.
“With the increase in sophistication and volume of threats attacking organizations on all devices, generative AI-based security provides organizations with a fighting chance at stopping these breaches,” he said.
Darktrace expects that the mass availability of generative AI tools such as ChatGPT will significantly enhance attackers' capabilities by providing better tools to generate and automate human-like attacks.
New Darktrace data indicates early signs of attackers using AI and automation to their advantage for purposes such as phishing. The data shows attackers are pulling away from executive impersonation and instead prioritizing impersonation of other business-critical functions, such as IT teams.
Darktrace’s Findings on Generative AI
Key findings from Darktrace include:

- Between May and July, Darktrace saw changes in attacks that attempt to abuse trust. While VIP impersonation – phishing emails that mimic senior executives – decreased by 11%, email account takeover attempts increased by 52% and impersonation of the internal IT team increased by 19%. The changes suggest that as employees have become better attuned to the impersonation of senior executives, attackers are pivoting to impersonating IT teams to launch their attacks.
- In the same timeframe, Darktrace's Cyber AI Research Center observed a 59% increase in multistage payload attacks across Darktrace customers, in which a malicious email encourages the recipient to follow a series of steps before delivering a payload or attempting to harvest sensitive information. This reflects a rise in QR code phishing as a way to smuggle in malicious links (see the sketch after this list) and indicates increasing use of automation in attacks.
- Darktrace detected nearly 50,000 more multistage payload attacks in July than in May. Automating these attacks would allow cybercriminals to hit more targets faster.
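On the defensive side, QR code phishing can be countered by decoding image attachments before delivery and vetting any embedded links. Below is a minimal sketch of that idea, assuming the third-party pyzbar and Pillow packages; the file name, TLD heuristic and flagging logic are illustrative stand-ins for a real reputation service:

```python
# Illustrative check for QR-code phishing: decode QR codes in an image
# attachment and surface the embedded URLs for vetting.
from pyzbar.pyzbar import decode  # third-party: pip install pyzbar
from PIL import Image             # third-party: pip install Pillow

SUSPICIOUS_TLDS = (".zip", ".top", ".xyz")  # hypothetical heuristic, not a real blocklist

def extract_qr_urls(image_path: str) -> list:
    """Return any URLs embedded in QR codes found in the image."""
    urls = []
    for symbol in decode(Image.open(image_path)):
        payload = symbol.data.decode("utf-8", errors="replace")
        if payload.startswith(("http://", "https://")):
            urls.append(payload)
    return urls

# "attachment.png" is a placeholder for an image pulled from an inbound email.
for url in extract_qr_urls("attachment.png"):
    flagged = url.endswith(SUSPICIOUS_TLDS) or url.startswith("http://")
    print(f"{url} -> {'flag for review' if flagged else 'pass to URL scanner'}")
```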
The newly released data demonstrates a challenge beyond automation and AI: the ever-changing patterns of attackers as they seek to evade defenses.
Carignan said that while generative AI has opened the door to providing offensive tools to more novice threat actors, the efficacy of those tools will only be as good as the people directing them.
“At its infancy, we expect more sophisticated AI attacks to start at the nation-state level,” she said. “In the near term, that might mean an increase in speed and scale, but not necessarily generating new attack methods.”