Ivanti: Everyone Should be Concerned About ChatGPT and Cybersecurity
ChatGPT can make it easier to become a cybercriminal.
![ChatGPT](https://eu-images.contentstack.com/v3/assets/blt10e444bce2d36aa8/blte12bb7e88a7af675/652408bdf45c0542960f3364/ChatGPT.jpg?width=700&auto=webp&quality=80&disable=upscale)
Shutterstock
Channel Futures: Can ChatGPT help with social engineering, making it more convincing and easier to fool people?
Daniel Spicer: It can make it more convincing. ChatGPT doesn’t have the spelling and grammatical errors that we teach people to look for in phishing in the first place. And it has far more realistic examples of legitimate communication to draw on, which makes messages more convincing. But more importantly, ChatGPT becomes a force multiplier. Where a threat actor previously had to sit there and come up with a new phishing scheme every time, spending time and energy preparing each attack, ChatGPT offloads all of that work. So not only is it more convincing, but we’re going to see more phishing attacks because you can generate and change up your tactics faster and faster.
CF: Can ChatGPT also generate malware?
DS: ChatGPT doesn’t understand that it’s producing malware. But it has access to a large amount of information about code. And so you can ask it: hey, how can I encrypt a file, how can I run a file in the background without showing a pop-up to the user, or how do I control this program so that it requires admin permissions? And you start slowly piecing together the parts of a ransomware. File encryption, admin permissions — all of that code is out there.
There are legitimate uses for that code, but ChatGPT doesn’t actually understand that it’s building something malicious. The good news right now is that ChatGPT doesn’t write good code. It still requires someone with some technical knowledge to put the pieces together properly and make sure the result isn’t buggy. One of the things you’ll see on darknet forums is people talking a lot about how to get around some of the buggy things ChatGPT does when it generates code. That is a short-term problem. Generative AI advancements are going to continue to focus on code because of the movement around low-code and no-code within the industry. And when you think about it, for generative AI the barrier to creation is having large data sets. GitHub is just sitting there as a large data set of code that can be used to train a model that will do a better job of writing code and, more importantly, a better job of writing malware. And of course, the end impact for us is that this lowers the barrier to entry for cybercrime.
CF: Does ChatGPT make it even easier to become a cybercriminal?
DS: It is going to make it easier. What we haven’t seen yet is just how far the bar actually lowers. But again, we should consider ChatGPT a large demo experiment. This is a beta. It’s going to continue to get better. So as you think about how generative AI in the hands of a threat actor continues to lower the bar, ChatGPT already does a good job with the first phase of an attack: generating those phishing emails and social engineering campaigns. In the near future we’re going to have a better version of generative AI, but ChatGPT is already making it easier to develop malicious code.
One of the other things it’s going to start lowering the bar on is getting past security defenses. Even with the ransomware kits that providers sell to lower the bar and enable more people to perform ransomware attacks, there are still certain things you need to customize to get past defenders’ tools and capabilities. One of those is taking a ransomware sample, or even command-and-control malware, and packaging and changing it so that it’s not detected by your endpoint detection and response (EDR) or antivirus solutions. Well, when ChatGPT, or whatever the next version of generative AI is, understands code better, it’s no longer a matter of taking something that’s already made and layering obfuscation on top of it. You can take something that’s made, deconstruct it into its original pieces, rearrange them and recompile it, and that’s going to be a whole other level of difficulty for defender tools to pick up on. This improvement to obfuscating and packaging malware is really the next thing I’m concerned about, because I’m not sure that we’re ready for it.
CF: How widespread is ChatGPT use right now? Are we just at the very beginning? Is it just going to get bigger and bigger?
DS: We’re definitely at the very beginning. There’s a lot of monitoring going on in dark web marketplaces and forums, trying to see how threat actors are reacting to the technology. I still don’t think we have fully understood how much the bar is really lowering and which use cases threat actors will pursue. And the challenge really is going to be for defenders. This is purely reactive. We’re in a position where we have to wait and see how threat actors integrate generative AI into their patterns so that we can come up with the appropriate defenses. Unfortunately, this is a very nervous waiting game for a lot of defenders.
CF: Can anything be done to make ChatGPT safer?
DS: I think OpenAI has already attempted this, and their attempts have not been very successful. Honestly, it’s really hard to put the genie back in the bottle, and I don’t know that that’s really what we should be doing here. Instead of trying to retcon what’s already been done, we should be asking how we actually build better defenses, and how we start enabling defenders to really use AI. One of the challenges with generative AI is that it’s not very useful for defenders. We need tools that help us do analysis. Generative AI just listens, responds to you and spits out output; it doesn’t help with analysis or analytics in the same way. So this is a place where defenders, and especially security companies, should be paying more attention: how do we build technology that puts AI in the hands of defenders, not just attackers? But I don’t think there is a way to put the genie back in the bottle on this one.
CF: How do organizations and individuals protect themselves from ChatGPT-associated cybercrime?
DS: So any proper security program needs good threat models, updated not just on a regular basis, like annually, but when critical events happen. And all defenders need to realize that this is a critical event. Even if you haven’t been attacked yet, this is such a change and shift in the industry that now is the time to re-evaluate your threat model, so you can recognize where your phishing training may be lacking and where you may need to make your phishing defenses a little more aggressive in preparation for these attacks. And definitely start evaluating your vendors and working with them to make sure you understand how they’re going to address this threat, because their detection models are going to need significant updates.
This is not something everyone’s going to have a perfect answer to right now. It’s going to require you to check in with your vendors frequently, because we’re at the beginning stage. We haven’t seen full weaponization and utilization of the technology yet. So over the next couple of months, as a CSO, I’m making regular contact with my partners and asking: OK, with ChatGPT out there, how are you combating these new threats, and what are you already seeing, so that we can continue to adjust our security controls appropriately?
CF: We haven’t seen any attacks with ChatGPT at the root, but is that likely to change pretty quickly?
DS: I would still characterize it as proof of concept. The threat actors have a new technology and they’re trying to figure out how to incorporate it into their attack patterns and how to weaponize it. And so I would still say that what we’re seeing now is more proof of concept than part of the regular playbook the threat actor is utilizing. That will change quickly and without notice. So to be proactive about it is really what defenders need to do to the best of their capabilities and make sure that their vendors are being proactive about it as well.
There’s quite the growing buzz around ChatGPT, a chatbot launched by OpenAI last November. But there are also growing concerns about cybersecurity.
It leverages natural language processing (NLP) to analyze verbal input and generate responses, imitating a natural human conversation. It can write anything — letters, song lyrics, research papers, recipes, therapy sessions, poems, essays, outlines, even software code.
This week, OpenAI and Microsoft announced an extension of their partnership.
“This multi-year, multi-billion dollar investment from Microsoft follows their previous investments in 2019 and 2021, and will allow us to continue our independent research and develop artificial intelligence (AI) that is increasingly safe, useful and powerful,” OpenAI said in a blog.
Growing ChatGPT Cybersecurity Concerns
However, there’s increasing concern about cyber threats that will be associated with ChatGPT. Cybercriminals can weaponize ChatGPT in a growing number of ways.
To learn more about ChatGPT and cyber threats, we spoke with Daniel Spicer, Ivanti’s chief security officer. He said “everyone should be concerned” about coming ChatGPT threats.
Ivanti’s Daniel Spicer
“Hats off to the [OpenAI] team, they’ve built a very disruptive technology and it’s going to change industries all over the place,” he said. “From an infosec perspective, our concerns are really about how it can be utilized to aid threat actors. Generative AIs are much more advantageous for threat actors and offensive security than they are for blue teams. But honestly, ChatGPT is really going to be a risk for just about everyone, even as far as threat actors that target individuals with SMS phishing or attacks of that sort. It’s going to really change our lives.”
We couldn’t reach OpenAI for comment on ChatGPT and cybersecurity.
Scroll through our slideshow above for more from Ivanti about ChatGPT and cybersecurity threats.