The Gately Report: Onapsis Says More Emphasis on Cybersecurity Needed in AI, Generative AI
Plus, Lockbit threatens to expose data stolen from CDW.
Blue Planet Studio/Shuterstock
Channel Futures: How is Onapsis making use of AI in its cybersecurity solutions? How has that evolved?
JP Perez-Etchegoyen: We are using ML and AI in a number of different ways, and especially now with the generative models. When large language models (LLM) and generative models started to pick up, we sat down with our leadership and said we cannot afford not leveraging this new technology, so how do we make it possible?
So we basically consulted with lawyers, consulted internally on compliance with IT, and we developed policies and procedures, acceptable use of AI tools and policies, and we trained our employees in a way that they can use AI tools within certain boundaries, understanding the data they provide, understanding the different classifications of data, etc. It’s not the same information that is public, private or confidential.
So we encourage them to use, for example, ChatGPT, but with different boundaries. The different teams on the marketing side, on the research side, even on the development side, they use generative AI from an internal perspective for being more efficient. So we are trying to catch up because if we don’t leverage AI, you are one step behind because others are. So that’s from an internal perspective. Also in our products, we integrate ML and AI to be able to identify for different use cases like anomaly detection and being able to classify activity of users in different ways.
CF: Generative AI continues to dominate headlines. When it comes to AI and cybersecurity, are there pros and cons?
JPPE: From a cybersecurity perspective, AI is still pushed by software. So we are talking about models which heavily depend on the data they were trained on, but still we are talking about software. So all of the vulnerabilities that we know of can affect this, including new ones, for example, on the side of the injection flows, we have prompt injections now. That’s a new type of vulnerability that anyone developing these models should be aware of because if someone can manipulate a prompt and make the model behave in a different way, that’s something you need to consider if you’re exposing that as a service to to users. So how do I secure that, which also involves how you secure the data you use to train it because a big part of the AI service is the data that is fed into it for from a training perspective. You need to protect it. You need to anonymize it and all of those safeguards you put in. We have been working with customers that start implementing AI from a business perspective with the business use cases, and the data that is being used to train those models is some of the most sensitive data for organizations. So how do you extract that data? How do you process it? How do you secure it? How do you restrict access and eventually make sure that you don’t expose personal information? So all of that has to be taken care of from a cybersecurity perspective.
The other side is, how could AI models be used from a bad-guy perspective to target organizations? These generative models are becoming so efficient and so good, they can craft emails, for example. You can ask them to create things that could be looking like real things, from an entire web page to an email to support a phishing campaign, to malware. We start seeing more and more malware, which contains pieces of code that were automatically generated by these models. So these efficiencies that organizations get by using AI are the same that the threat actors get by leveraging AI tools to be more efficient, to create better things faster to target organizations. So that’s a the downside. It’s not only good for for the defenders, but also it can be used by by the threat actors.
CF: Is apprehension over AI and generative AI justified? How do you convince C-suite executives to embrace AI and all that it can do for them?
JPPE: I’m sure that’s a difficult conversation because as with any new technology, there’s hesitation and there’s fear of, are we going to be spearheading this? Are there risks that we are not aware of? But it starts by putting the right boundaries in place. Putting in the right policies, ensuring that your legal teams are on board, that your IT teams are on board, and that compliance teams are on board. All of those teams have to have a say on it. But also it starts with clear business use cases. It’s not like, let’s use AI and let’s find where AI could fit within our needs. No, it has to be driven by a clear need.
For example, we need to automate the analysis of certain business information. Let’s increase the efficiencies of this business process by reducing by 30% the time that it takes analysts to make a decision. So there has to be a clear use case around it. And there are hundreds of those. Organizations are adopting it at light speed. It’s becoming more and more adopted in a number of different ways. But to be able to talk to C-levels, it’s important to give assurance that different areas are involved, that different areas can put boundaries in it, that you you are willing to train the employees, that you’re willing to put policies in place and you’re willing to invest it, but also that there is a clear goal. We want to do it this, this and that way, and the ROI is going to be XYZ because we’re going to be better at doing that, or we are going to be opening up a new channel, etc. That clarity has to be there.
CF: How do you ensure AI-generated insights are dependable, impartial, explainable, ethical, moral and transparent?
JPPE: We have heard and read in multiple places about the biases and ethical considerations on these models. It all boils down to how the model is being trained. It’s not about how the model is being used. So when you are training a model, you need to ensure that the data is properly prepared and filtered, and managed because the biases are on the data itself, not on the model.
It’s not easy because these models are trained on billions of entries. They have billions of parameters, but also they are trained on billions and billions of data points. So you need to start by analyzing, processing and filtering down the data to ensure that the biases are not there.
But this is still very new. All the topics around biases and things like hallucinations, ethical biases — all of that is still developing. The research on LMMs has been skyrocketing for the past year, and a lot of that skyrocketing effect has to do with how we ensure that these models are usable in a way that they don’t introduce biases. So it’s really difficult when you manage unstructured data; for example, text, data coming from the internet and data coming from repositories. So for that you need to implement specific filtering. That’s a little bit less of a problem when you train a model with business data because what you are mapping there is the reality of the business, so you want it to match the reality as much as possible. But when you’re training with text, when you are dealing with text-generation models, that’s where the unstructured data is and it becomes a challenge. But there’s been a lot of research, so I think we’re going to get better and better within the next couple of months and years because this is a real problem and it’s something that is also top of mind for researchers.
CF: Onapsis works with MSSPs, SIs, VARs and more. How is it helping them meet their customers’ needs in terms of AI and cybersecurity?
JPPE: We definitely work with some of the largest integrators. We work with MSSPs to ensure that their customers’ businesses are secure. We do cybersecurity for business applications. So in that case, we deploy technology, and we work with those integrators and consultants to ensure that no underlying risks and vulnerabilities can pop up. So we are still working in terms of how to integrate AI. And it’s through our product on one hand. So our product involves ML and AI to detect anomalies, and to categorize user behaviors. That’s the way we are introducing this technology.
We adopt AI in two ways. On one hand, it’s internally to be more efficient and on the other hand it’s through our product and through the use of our product, how MSSPs and the largest integrators that are working with us are helping their customers integrating this technology.
CF: When it comes to AI and cybersecurity, is it still early days? Any idea what we’re likely to see in the years ahead?
JPPE: It is early in a way because ChatGPT was released less than a year ago and ever since we have seen an explosion and a level of disruption at multiple layers that we haven’t seen in other innovations in the past. It’s still developing. We are seeing cybersecurity researchers focusing on the security of these models. There are many ways where researchers are trying to basically fool models into thinking that they are individuals and they try to get a specific output interfacing with those models. So we are seeing a lot of that. It’s still developing.
But what I do believe is that this year has been a lot of research and innovation. Next year, we’re going to see a lot of adoption, very concrete, very real adoption of these technologies into the businesses. I do expect the majority of businesses are going to be adopting this in multiple different ways. They cannot afford not to do that. And from a security perspective, security has to be not an afterthought. At the same time you say, how can we implement this? How can AI fit into this specific use case? You bring legal, you bring IT and you bring compliance.
Cybersecurity is going to be a part of that conversation because how you protect the models, how you protect the data, how you prevent users from introducing malicious input into that service, that’s going to be an important part as well. So next year, as we see more organizations implementing this, we’re going to see more requirements from the cybersecurity perspective into how to secure that.
CF: What do you find most disturbing about the current threat landscape?
JPPE: I think most disrupting is the evolution of the threat landscape from the perspective of how it has grown, especially over the past three years. Since COVID-19 hit, threat actors have been investing more, automating more and growing, so there’s been more and more activity, more automated exploitation.
An example of that was the Cybersecurity and Infrastructure Security Agency (CISA) releasing the catalog of known exploited vulnerabilities, which basically is a response to there [being] more and more vulnerabilities [that are] actively exploited. And at the end of September, Mandiant released their research around time to exploit, which shows an increase of exploited vulnerabilities as well as a decrease in the time in the window.
Also, threat actors continue to exploit old vulnerabilities. That’s growing and it pays off for them. There are many different ways how these threat actors can capitalize on the compromise so it’s profitable for them. So it’s no wonder more people are doing that.
I think from a threat landscape perspective, it’s the sheer evolution and the growth of that that’s disturbing. I think we need to do more. It’s not only about private companies. Governments need to do more. I think how CISA has been evolving is great. It’s great to see CISA adapting to the presidential directive on cybersecurity and also how other governments follow their steps. But it’s really about all of us being able to invest more and focus more because it’s not getting any better unfortunately.
CF: Onapsis is listed on Inc.’s 2023 list of 5,000 fastest-growing companies for the third consecutive year. What’s fueling that growth?
JPPE: It’s basically the need for securing what matters. Onapsis started 14 years ago because we realized organizations weren’t paying attention to the business-critical applications. Every time we were doing a pen test or an assessment of a company — I’m not talking about small companies, but some of the largest organizations in the world — it was the same thing, complete compromise of their business without even a username or password. So we started realizing this is a big problem in most of the largest organizations in the world.
We started doing a lot of evangelizing and letting people know this is a problem, and we started addressing this with technology. But more and more organizations are realizing that they need to address this, that this is a gap. It’s been a black box for them. And that’s what’s fueling the growth. It’s like the realization of organizations that they have a problem and they need to address it because otherwise … they can make headlines through that.
CF: What can partners and customers expect from Onapsis in the months ahead, into 2024?
JPPE: We work very diligently on the road map, talking to our customers. So we are evolving the platform, integrating more use cases for cybersecurity and bringing solutions to our customers. It’s not just telling them you have a problem here and you have a problem there. It’s helping them really solve those problems.
So over the past year, we have been working with our customers, understanding what their pain points are. And we’re working through our road map to deliver concrete solutions to their cybersecurity problems so they can focus their time on other things and not having to solve and address those problems. Really for us, automation and ROI is key to delivering to them, and that’s where we’re going.
In other cybersecurity news …
The LockBit cybercriminal group reportedly threatened to expose data it stole from CDW as the IT solution provider has refused to pay the ransom.
According to SC Media, Lockbit threatened to leak CDW’s data on Oct. 11.
“As soon as the timer runs out you will be able to see all the information, the negotiations are over and are no longer in progress. We have refused the ridiculous amount offered,” said LockBit spokesperson LockBitSupp.
CDW tells us it’s addressing an “isolated IT security matter associated with data on a few servers dedicated solely to the internal support of Sirius Federal, a small U.S. subsidiary of CDW-G.” These servers, which are non-customer-facing, are isolated from its CDW network and other CDW-G systems.
“Our security protocols detected and contained suspicious activity related to these servers,” it said. “We immediately launched an investigation with the support of leading internal and external cybersecurity experts. In addition, we have contacted appropriate government authorities regarding this matter. Our systems remain fully operational and at no time did we identify evidence of any risk to other CDW systems or any external systems.”
CDW is aware that a third party has made data available on the dark web, which it claims to have taken from this environment.
“As part of the ongoing investigation, we are reviewing this data and will take appropriate action in response – including directly notifying anyone affected, as appropriate,” it said.
Devin Ertel, Menlo Security‘s CISO, said we’re starting to see a growing trend of resistance against ransoms by either refusing to pay or offering less than the requested amount.
“While this won’t stop ransomware attacks, we may be reaching a point where companies are not willing to pay these high demands,” he said.
Zane Bond, head of product at Keeper Security, said cybercriminal groups utilize a host of attack vectors in order to extract monetary value or inflict operational damage against a target. In this case, LockBit is employing a pressure tactic to drive the highest possible payment from CDW by setting a deadline. CDW is one of the largest resellers globally and has enormous amounts of data. Without knowing the details of the attack, the boldness of the threat actor and staggering reported price of this ransom are concerning.
“When faced with a ransomware attack, organizations are faced with a difficult decision – whether or not a ransom should be paid,” he said. “Paying a ransom to stop the release of their data may seem like the simplest solution. However, this only fuels the criminal activity. The payment of a ransom doesn’t guarantee the cybercriminal will decrypt a victim’s files or reinstate access to their systems. In fact, cybercriminals have often received payment and subsequently placed stolen files on the dark web, to further monetize their value.”
Internet-exposed Progress Software WS_FTP servers unpatched against a maximum severity vulnerability are now being targeted in ransomware attacks, according to Sophos X-Ops.
As recently observed by Sophos X-Ops incident responders, threat actors self-described as the Reichsadler Cybercrime Group attempted, unsuccessfully, to deploy ransomware payloads created using a stolen LockBit 3.0 builder.
“The ransomware actors didn’t wait long to abuse the recently reported vulnerability in WS_FTP Server software,” Sophos X-Ops said.
Despite the release of a fix for this vulnerability by Progress Software last month, not all servers have been patched. The threat actors tried to escalate privileges using the open-source GodPotato tool, which enables privilege escalation across Windows client and server platforms.
John Bambenek, principal threat hunter at Netenrich, said the good news is that the patch for this, or the ability, has existed for about two weeks. This means defenders should have had ample time and resources with which to mitigate this vulnerability.
“While there have been attempts to escalate privilege, thus making these attacks more devastating, it appears that the attackers have only really been able to deploy ransomware on the victim’s machine that is running this FTP software itself,” he said. “However, industry sectors that use the software for transferring files remain vulnerable. Of particular concern is the medical sector, where not only file transfers from going between providers are important, the lack of being able to access those records on a timely basis could certainly impact patient care and potentially mortality rates.”
Melissa Bischoping, director of endpoint security research at Tanium, said any vulnerability in a public-facing device like web servers, FTP servers or network infrastructure is an attractive target for a threat actor to compromise. Some organizations may face delayed patching either due to visibility challenges, or delays to avoid disruptive downtime.
“As part of your security strategy, having a plan of action to mitigate and patch vulnerabilities in those critical and exposed services should be part of your vulnerability management planning,” she said. “Once inside your network, attackers will seek to leverage other harvested credentials or vulnerabilities to move through your environment. A defense-in-depth approach coupled with enriched telemetry from endpoint and network devices, will allow teams to respond faster and with more precision, even if an attacker manages to breach one barrier. The goal is always to stop an attack as early in the kill chain as possible, but recognize that there are opportunities to disrupt and detect an attack at all points in the kill chain.”
According to latest research from Proofpoint and Ponemon Institute, the average total cost of a cyberattack experienced by health care organizations is nearly $5 million, a 13% increase from the previous year.
According to the report, which surveyed 653 health care IT and security practitioners, 88% of the surveyed organizations experienced an average of 40 attacks in the past 12 months. Among the organizations that suffered the four most common types of attacks—cloud compromise, ransomware, supply chain and business email compromise (BEC)—an average of 66% reported disruption to patient care. Specifically, 57% reported poor patient outcomes due to delays in procedures and tests, 50% saw an increase in medical procedure complications, and 23% experienced increased patient mortality rates. These numbers reflect last year’s findings, indicating that health care organizations have made little progress in mitigating the risks of cyberattacks on patient safety and wellbeing.
Supply chain attacks are the type of threat most likely to affect patient care. Nearly two-thirds of surveyed organizations suffered a supply chain attack in the past two years. Among those, 77% experienced disruptions to patient care as a result, an increase from 70% in 2022. BEC, by far, is the type of attack most likely to result in poor outcomes due to delayed procedures, followed by ransomware. BEC is also most likely to result in increased medical procedure complications and longer lengths of stay.
“While the health care sector remains highly vulnerable to cybersecurity attacks, I’m encouraged that industry executives understand how a cyber event can adversely impact patient care,” said Ryan Witt, chair of Proofpoint‘s health care customer advisory board. “I’m also more optimistic that significant progress can be made to protect patients from the physical harm that such attacks may cause. Our survey shows that health care organizations are already aware of the cyber risks they face. Now they must work together with their industry peers and embrace governmental support to build a stronger cybersecurity posture—and consequently, deliver the best patient care possible.”
Ted Miracco, CEO of Approov Mobile Security, said the challenges faced by health care organizations in addressing cybersecurity include a lack of cybersecurity expertise, and insufficient budget and staffing.
“These challenges need to be addressed to ensure effective security measures are in place, especially in the critical areas of mobile app and API vulnerabilities, and the persistent phishing and BEC attacks,” he said. “With the average cost of a cyberattack reaching almost $5 million, it makes sense for these organizations to invest ahead of the attack versus spending money to remediate after the patient data has been exfiltrated and other damage has been done.”
Emily Phelps, director of Cyware, said health care is a consistently attractive target for threat actors because of the valuable data they collect and store.
“Adversaries far outnumber available cybersecurity pros, so to mitigate the risks, health care organizations must leverage automation tools that enable lean security teams to efficiently address threats,” she said. “Employees should have regular security awareness training so they are prepared to recognize and avoid common threat tactics. And organizations should consider partnering with security providers that can offer expertise that is difficult to source and retain internally.”
According to latest research from Proofpoint and Ponemon Institute, the average total cost of a cyberattack experienced by health care organizations is nearly $5 million, a 13% increase from the previous year.
According to the report, which surveyed 653 health care IT and security practitioners, 88% of the surveyed organizations experienced an average of 40 attacks in the past 12 months. Among the organizations that suffered the four most common types of attacks—cloud compromise, ransomware, supply chain and business email compromise (BEC)—an average of 66% reported disruption to patient care. Specifically, 57% reported poor patient outcomes due to delays in procedures and tests, 50% saw an increase in medical procedure complications, and 23% experienced increased patient mortality rates. These numbers reflect last year’s findings, indicating that health care organizations have made little progress in mitigating the risks of cyberattacks on patient safety and wellbeing.
Supply chain attacks are the type of threat most likely to affect patient care. Nearly two-thirds of surveyed organizations suffered a supply chain attack in the past two years. Among those, 77% experienced disruptions to patient care as a result, an increase from 70% in 2022. BEC, by far, is the type of attack most likely to result in poor outcomes due to delayed procedures, followed by ransomware. BEC is also most likely to result in increased medical procedure complications and longer lengths of stay.
“While the health care sector remains highly vulnerable to cybersecurity attacks, I’m encouraged that industry executives understand how a cyber event can adversely impact patient care,” said Ryan Witt, chair of Proofpoint‘s health care customer advisory board. “I’m also more optimistic that significant progress can be made to protect patients from the physical harm that such attacks may cause. Our survey shows that health care organizations are already aware of the cyber risks they face. Now they must work together with their industry peers and embrace governmental support to build a stronger cybersecurity posture—and consequently, deliver the best patient care possible.”
Ted Miracco, CEO of Approov Mobile Security, said the challenges faced by health care organizations in addressing cybersecurity include a lack of cybersecurity expertise, and insufficient budget and staffing.
“These challenges need to be addressed to ensure effective security measures are in place, especially in the critical areas of mobile app and API vulnerabilities, and the persistent phishing and BEC attacks,” he said. “With the average cost of a cyberattack reaching almost $5 million, it makes sense for these organizations to invest ahead of the attack versus spending money to remediate after the patient data has been exfiltrated and other damage has been done.”
Emily Phelps, director of Cyware, said health care is a consistently attractive target for threat actors because of the valuable data they collect and store.
“Adversaries far outnumber available cybersecurity pros, so to mitigate the risks, health care organizations must leverage automation tools that enable lean security teams to efficiently address threats,” she said. “Employees should have regular security awareness training so they are prepared to recognize and avoid common threat tactics. And organizations should consider partnering with security providers that can offer expertise that is difficult to source and retain internally.”
So much emphasis has been placed on the innovation fostered by artificial intelligence (AI) and generative AI that cybersecurity has often taken a back seat, and that needs to change.
That’s according to JP Perez-Etchegoyen, CTO at Onapsis. The Onapsis platform delivers vulnerability management, change assurance and continuous compliance for business applications from leading vendors such as SAP, Oracle and others. The platform is powered by the Onapsis Research Labs, the team responsible for the discovery and mitigation of more than 1,000 zero-day vulnerabilities in business applications.
Onapsis’ JP Perez-Etchegoyen
“We are at that point where AI is such a big innovation that has been disrupting so many places that the focus hasn’t been placed on cybersecurity from the get-go,” he said. “It has been placed on, how can we use it? Is it real? Is it going to scale? But what about the data? Are we infringing any copyright? There have been a lot of that biases and ethical considerations. Cybersecurity hasn’t really been a priority for this, but it’s going to be as this matures. And this is more on the front lines of any organization implementing any type of process.”
Innovation Moving Fast with Generative AI
The level of disruption is so big that most of the processes that organizations and individuals are running today are going to be in some way, shape or form crossed by AI, either predictive or generative AI, Perez-Etchegoyen said.
“There are many different use cases, but there’s going to be a level of AI and machine learning (ML) in every scenario, so definitely security,” he said. “And I think next year we’re going to see a lot of presentations in conferences by security researchers. ChatGPT was released at the end of 2022, and this year there was a lot of research on innovation and development, and researchers are starting to look at [cybersecurity]. So next year we’ll see a lot of a conference presentations and headlines around how hackers can compromise these models as this gets more mature and adopted.”
See our slideshow above for more from Onapsis and more cybersecurity news.
Want to contact the author directly about this story? Have ideas for a follow-up article? Email Edward Gately or connect with him on LinkedIn. |
About the Author(s)
You May Also Like