The Gately Report: ExtraHop Partners to Benefit from Government Business Growth
Plus, patients' personal data is stolen in the City of Hope data breach.
![ExtraHop partners benefit from federal growth ExtraHop partners benefit from federal growth](https://eu-images.contentstack.com/v3/assets/blt10e444bce2d36aa8/blt965632e6edb4a69f/6525e6b4b605741909b4d5b5/Benefit-Plus-Sign.jpg?width=700&auto=webp&quality=80&disable=upscale)
oatawa/Shutterstock
Channel Futures: How did you investigate and battle cybercrime with the FBI and Department of Education?
Mark Bowling: I was in the Navy for seven years, a nuclear engineering officer on subs, and went to a carrier for the last part of my career. Then I went to the FBI. I spent 20 years there, and then I went to the Department of Education just for one last year. There, I was an assistant special agent in charge. It's a law enforcement role and I was responsible for the cybersecurity controls and investigations to protect our National Student Loan Data System, which would be the fourth largest bank in the world with the amount of loans … and the information of about 163 million Americans. So you're protecting a lot of financial information for banks, and you're protecting a lot of personally identifiable information (PII) and nonpublic financial information about students and their parents. I investigated crimes against those data sets. Before that, with the FBI, I was a field executive. But all during my career, I did primarily counterterrorism and cyber investigations. And one of the ones that I'm most notorious for was the investigation of Joseph Konopka. He went by the name "Dr. Chaos." He had a little group of anarchists following him called the Realm of Chaos. So I have a lot of experience in responding to cyber threats. I have a deep understanding of the type of cyber threats that impact our country.
CF: How has cybercrime evolved since then?
MB: There are three things that we've seen. One is what I call the internationalization of cybercrime, where you have state-sponsored cybercrime. You have the North Koreans who are engaging in cyber-facilitated bank fraud to make money for the regime because they don't have an economy worth a damn. You have the Russians, whose military intelligence does things like SolarWinds. As a matter of poking the Americans in the eye, the Putin regime supports cybercriminal organizations that are very active in this country, particularly Ryuk, a cybercrime gang that has largely attacked hospitals. So you have cybercrime as a matter of making money for regimes. You have cybercrime as a matter of making money for international terrorism organizations that are supported by people like the Iranian government. You have cybercrime as a matter of policy so that Putin can just disrupt our economy. And then you have state-sponsored cyber actions, which are targeted at critical infrastructures. Our water infrastructure, for instance, is one of those.
So what you have is cybercrime that is meant to just be profitable and disruptive to our economy, and then you have very targeted strategic goals. So, for instance, the Russians went after Viasat, and Viasat is one of our customers. The reason they went after Viasat was because Viasat was used as a backup telecommunication mechanism for government agencies in Ukraine. So in order to facilitate the attack on Ukraine, Russian military intelligence went after Viasat. And that's just one example. The Chinese are the most notorious, frankly. The Chinese are doing it as a matter of economic warfare. They want to steal our trade secrets. They want to make us less competitive. They want to make themselves more competitive. And that's because, being a communist country, a communist regime, all of the businesses are owned by the party, and even the ones that aren't owned by the party are owned by these princelings, who are the children of senior party leaders. So it's really indistinguishable.
CF: We’re hearing major warnings, including a recent one from the White House, about nation-state attackers targeting critical infrastructure and water systems. Is there a reason for alarm?
MB: I would say there are reasons for awareness, but there's no reason for alarm. They're going to attack our critical infrastructures and some of our critical infrastructures are better off than others. For instance, I think our electrical power distribution grid, because of the North American Electric Reliability Corporation (NERC), has fairly strong controls. I'm not going to say they're bulletproof, and I'm sure that the Russians and the Chinese have a plan to get into them, but they will be more difficult to disrupt. And that infrastructure is highly resilient. I'm not going to say our electrical power distribution grid is invulnerable. It's at risk, but it's at risk from bad weather. It's at risk from your average hurricane. It's at risk from tornadoes. You have critical infrastructures like health care that are far more vulnerable. And one of the most vulnerable, I believe, is water and gas. We saw what happened with Colonial Pipeline two years ago. We see that that's fairly vulnerable. We've seen some recent executive orders where specific critical infrastructures are now being directed by the Cybersecurity and Infrastructure Security Agency (CISA) to improve what I would call their technical baseline and their administrative baseline, their administrative controls, so that they're more resilient.
CF: Why are water systems so vulnerable?
MB: So what makes our water systems much more vulnerable is that you have such a wide array of systems. Anybody in the Milwaukee area gets water from Milwaukee County, and they get it from Lake Michigan. So that is probably a reasonably secure system. I'm not going to say it's great. It's not like our bulk electrical system, but it's reasonably secure. But if you get outside of Milwaukee into one of the small towns, they may be on a well system. They may be getting water from a river or a small reservoir, so they don't have the resources to secure their water system and sewage, and maybe they also run natural gas, or even electricity. They don't have the same resources that Milwaukee County does with 2 million people. So the threats to water systems aren't necessarily to the major metropolitan water systems. The threat is actually to the smaller municipalities. But there's an offset. You may successfully attack a small municipality, but even if you poison the water with a very high dose of chlorine, the impact is smaller. It's still enough to create fear, but it would be isolated to a small area.
CF: Why aren’t we seeing more attacks on water systems? Are threat actors afraid of government/law enforcement retaliation?
MB: You have limited capabilities and you want to get the biggest bang for your buck. And so you're not going to use the ace you have up your sleeve to attack a small municipality. The only reason you would do that is if you're … associated with a terrorist organization like the Iranian Revolutionary Guard Corps (IRGC) and you just want to create some fear. But the bang for the buck is much more strategic. What you're not going to do is expose the fact that you have a zero-day exploit on a programmable logic controller just so somebody can fix them and push out a patch to everybody. You want to save those aces.
CF: In terms of these areas of critical infrastructure that are more vulnerable than others, what should they be doing to potentially stop attacks from succeeding?
MB: There are things that these organizations can do by themselves. They can improve their cybersecurity practices. They can reduce what I call their attack surface by understanding what their assets are, proper configuration management, proper change management, vulnerability management, doing vulnerability scanning and then fixing the vulnerabilities. Those are the basics. There are things that I would encourage any major critical infrastructure to do. If they are able to, isolate their operational technology (OT) network. Manufacturers do that. The electrical power grid, our bulk electrical system, does that. So that's one of the reasons why I believe NERC and the regional entities, and the regional transmission operators have created a more resilient framework. That's because the actual operation of the grid is air-gapped from the internet. That makes it much more difficult to get in.
And then finally, one of the things I think the United States should do as a policy is create an internet completely internalized to the United States that is used only for what I would consider our critical process-oriented infrastructures. You can't do that for health care or banking because those are consumer-oriented. But you might be able to have a separate control plane internet for electric power distribution, for water, for natural gas, and maybe even have a separate one for manufacturing. I do think it's time to think about having a parallel process-oriented network that is used exclusively by specific critical infrastructures.
CF: What’s keeping ExtraHop out of the headlines in terms of cyberattacks or data breaches? How are you protecting ExtraHop, and its partners and customers?
MB: So there are two answers to that. One is we have an enterprise IT framework and then we have our product framework. All of our customers' data is in our product environment. We have very specific technical controls that we've implemented to protect our ExtraHop production environment and we have those very same controls implemented to protect our enterprise IT environment. We do things like the basics, vulnerability scanning, configuration management and change management, all those things to reduce our attack surface. And we're also implementing very robust technical and administrative controls.
Right now, we're doing a very deep policy review. We are building our entire policy deck around what is known as the Common Criteria. And by building around the Common Criteria, it's easier to get certifications like FedRAMP. It's easier for our product to get certifications like NIAP. We build our program around specific control criteria. We have a SOC 2 audit every year, for instance. And so we implement best practices, we implement robust policies, we follow very specific technical and administrative control frameworks. We work really hard and we watch our network. We eat our own dog food. We use our ExtraHop tool to monitor our network. So we watch our own network assiduously.
CF: Who’s targeting ExtraHop? What are the most frequent attack vectors?
MB: At this point, we haven't seen a huge number of very sophisticated attacks, knock on wood. We see a little bit of fraud. We see occasional phishing campaigns, but we pick that up pretty quickly. We generally detect the phishing campaigns and the fraud. The internal phish tests that I run are far more successful than the phishing campaigns that have targeted us. The fraud is generally some kind of employment fraud. So what we'll do is we'll post a job opening, and then someone will copy that job opening. They'll register a domain that mimics one of our domains, ExtraHopcareers.com or something like that, and then people will send their resumes there, and then they'll steal the IDs of these people who think they're applying for jobs with ExtraHop. And so we've been very aggressive about knocking those domains off and contacting the people who may have been impacted and letting them know that they need to protect their own PII. So we're trying to be a good corporate citizen there.
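The lookalike-domain fraud Bowling describes can be sketched with a simple string-similarity check. This is a hypothetical illustration, not ExtraHop's actual tooling; the domain list, threshold and `is_lookalike` helper are all assumptions:

```python
import difflib

# Assumed list of legitimate corporate domains to compare against.
LEGIT_DOMAINS = ["extrahop.com"]

def is_lookalike(candidate: str, threshold: float = 0.75) -> bool:
    """Flag a newly observed domain that closely resembles, but is not,
    a legitimate domain (a common typosquatting/brand-abuse signal)."""
    for legit in LEGIT_DOMAINS:
        ratio = difflib.SequenceMatcher(None, candidate.lower(), legit).ratio()
        if candidate.lower() != legit and ratio >= threshold:
            return True
    return False

print(is_lookalike("extrahopcareers.com"))  # similar enough to flag: True
print(is_lookalike("example.org"))          # unrelated domain: False
```

Real brand-protection services use richer signals (WHOIS age, homoglyphs, MX records), but the core idea is the same: compare new registrations against the domains you own.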
To be secure in today's world, you need to be two things. You need to be good and you need to be lucky. If you're not good and you're lucky, they're going to get you. And if you're good, but you're not lucky, they're going to get you.
CF: AI in cybersecurity is a hot topic. How is ExtraHop incorporating AI into its cybersecurity?
MB: What we do is we anonymize all of our customers' data so it's nonattributed, and then we look at the events happening in our customers' environments. Then that informs a learning model that knows more about the internet and knows more about external attacks on customers' environments than any other model in the world. And so that's how we're using it.
So there are three things for AI. [No. 1], do you have policies that manage and control how your people use AI in their daily job? No. 2, do you have policies and know exactly how your third-party vendors are using your data that you own, but that you put into their environment? And No. 3, do you have a plan to utilize and maximize the use of AI for your product, for your operationalization? You operationalize AI and we've done an amazing job of that at ExtraHop.
In other cybersecurity news …
City of Hope, a cancer research and treatment center, said an unauthorized third party accessed its systems and copied files between Sept. 19 and Oct. 12, 2023.
City of Hope’s notification filed with the Maine Attorney General’s Office says that more than 827,000 patients have been impacted by the breach.
According to City of Hope’s data security incident notification, information stolen by the threat actor(s) may include contact information (e.g., email address, phone number), date of birth, Social Security number, driver’s license or other government identification, financial details (e.g., bank account number and/or credit card details), health insurance information, medical records and information about medical history and/or associated conditions, and/or unique identifiers to associate individuals with City of Hope (e.g., medical record number).
“Upon discovery of this incident, City of Hope immediately instituted mitigation measures,” it said. “We then promptly implemented additional and enhanced safeguards, and enlisted the support of a leading cybersecurity firm to enhance the security of our network, systems and data. We also launched a comprehensive investigation, identified individuals affected, reported the incident to law enforcement and notified regulatory bodies.”
James McQuiggan, security awareness advocate at KnowBe4, said that, unfortunately, cybercriminals continue targeting the health care industry, seeking identity information they can sell on the dark web.
![KnowBe4's James McQuiggan KnowBe4's James McQuiggan](https://eu-images.contentstack.com/v3/assets/blt10e444bce2d36aa8/blt6c9d589fb472728b/6525d9d273f62d2a30a8e310/McQuiggan-James_KnowBe4.jpg?width=700&auto=webp&quality=80&disable=upscale)
KnowBe4's James McQuiggan
“Not only the victims of this attack, but everyone should be vigilant regarding their email, financial and social media accounts, and credit monitoring,” he said. “The breach, involving many personal and health information, opens the door to sophisticated spear phishing attacks. Cybercriminals will exploit the detailed data to craft highly personalized and convincing phishing emails, aiming to deceive victims further.”
Individuals must closely monitor their accounts, credit and emails for any unusual activity, McQuiggan said.
“This incident underscores the necessity of proactive cybersecurity measures and personal vigilance in protecting against identity theft and fraud,” he said.
New research from Wiz has found that AI-as-a-service providers such as Hugging Face are susceptible to two critical risks that could allow threat actors to escalate privileges, gain cross-tenant access to other customers' models, and even take over the continuous integration and continuous deployment (CI/CD) pipelines.
The development comes as machine learning (ML) pipelines have emerged as a new supply chain attack vector, with repositories such as Hugging Face becoming an attractive target for staging adversarial attacks designed to glean sensitive information and access target environments. The threats are two-pronged, arising from shared inference infrastructure takeover and shared CI/CD takeover. They make it possible to run untrusted models uploaded to the service in pickle format and to take over the CI/CD pipeline to perform a supply chain attack.
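The pickle risk is worth spelling out: Python's pickle format lets a serialized object name a callable to invoke at load time, so simply loading an untrusted model file can execute arbitrary code. A minimal, self-contained sketch of the mechanism (illustrative only; not code from Wiz, Hugging Face or ExtraHop, and the payload here is deliberately harmless):

```python
import pickle

class MaliciousPayload:
    def __reduce__(self):
        # On unpickling, Python calls eval("6*7"). An attacker could
        # substitute any callable here, e.g. os.system with a shell command.
        return (eval, ("6*7",))

blob = pickle.dumps(MaliciousPayload())
result = pickle.loads(blob)  # the embedded callable runs during deserialization
print(result)                # prints 42
```

This is why safer serialization formats for model weights (ones that store only data, not callables) are increasingly preferred for untrusted uploads.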
“The pace of AI adoption is unprecedented and enables great innovation,” Wiz said in its blog. “However, organizations should ensure that they have visibility and governance of the entire AI stack being used, and carefully analyze all risks, including usage of malicious models, exposure of training data, sensitive data in training, vulnerabilities in AI SDKs, exposure of AI services and other toxic risk combinations that may be exploited by attackers.”
In a blog post in response, Hugging Face said it has resolved all issues related to the exploits and remains diligent in its threat detection and incident response processes.
Eric Schwake, director of cybersecurity strategy at Salt Security, said that while AI presents exciting opportunities, it also introduces novel attack vectors that traditional security solutions may struggle to keep up with.
“The very nature of AI models, with their complex algorithms and vast training datasets, makes them vulnerable to manipulation by attackers,” he said. “AI is also a potential black box, which provides very little visibility into what goes on inside of it. Malicious actors can exploit these vulnerabilities to inject bias, poison data or even steal intellectual property. Development and security teams need to build in controls for the potential uncertainty and increased risk caused by AI. This means the entire development process for applications and APIs should be rigorously evaluated from aspects such as data collection practices, deployment and monitoring while in production. Taking steps ahead of time will be important to not only catch vulnerabilities early, but also detect potential exploitation by threat actors. Educating developers and security teams about the ever-changing risk associated with AI is also critical.”
John Bambenek, president of Bambenek Consulting, said with AI, you need models to do interesting things. However, general generative AI providers can’t anticipate every use case and organizations want customization.
![Bambenek Consulting's John Bambenek Bambenek Consulting's John Bambenek](https://eu-images.contentstack.com/v3/assets/blt10e444bce2d36aa8/blt6d58648b5abc52e9/661400380c2e4934b1323593/Bambenek_John_Bambenek_Consulting_2024.jpg?width=700&auto=webp&quality=80&disable=upscale)
Bambenek Consulting's John Bambenek
“The problem is few people know how to do that,” he said. “Therefore, a marketplace is emerging for niche organizations to create models and adaptations of generative AI tools. The problem is, like open source software, people can include malicious functionality that is run by victim organizations. The problem comes in because, unlike open source software where the source code can be examined, we have no good tools to find this in an automated way. We’ve radically magnified the software supply chain risk and made it harder to detect.”
ExtraHop partners will directly benefit from the company aggressively pursuing federal government business in the months ahead.
That’s according to Mark Bowling, ExtraHop’s chief information security and risk officer. Before joining ExtraHop, he spent more than 25 years investigating and combating cybercrime and nation-state attacks in leadership positions with the FBI and U.S. Department of Education.
In January, ExtraHop, the network detection and response (NDR) provider, secured $100 million in growth capital from existing investors. The company expects ExtraHop partners to benefit from this growth capital.
“We're going to be moving hard into the Fed,” Bowling said. “We're going to be getting some federal certifications like FedRAMP and our products are going to get National Information Assurance Partnership (NIAP) certified so that we can sell them to the usual suspects, the NATO countries, Israel, the APAC countries — South Korea, Japan, Australia, New Zealand, maybe Singapore. So with that NIAP certification and with the government certifications, we're going to be able to have a much bigger footprint in the defense community and in the intelligence community, the federal government community as a whole.”
Working with ExtraHop Partners to Secure Customers of All Sizes
Federal agencies have much bigger budgets than SMBs, Bowling said. But ExtraHop wants to help customers of all sizes.
![ExtraHop's Mark Bowling ExtraHop's Mark Bowling](https://eu-images.contentstack.com/v3/assets/blt10e444bce2d36aa8/blt88f2fe53cbc52d84/6613fb6fe95d7bdbd8885363/Bowling_Mark_ExtraHop_2024.jpg?width=700&auto=webp&quality=80&disable=upscale)
ExtraHop's Mark Bowling
“We want to hit both,” he said. “We want to work with our extended detection and response (XDR) and managed detection and response (MDR) partners to help the smaller businesses deploy ExtraHop in a way that's cost economical for them. We want to work with our enterprise partners to deploy Reveal(x) 360 and Reveal(x) Enterprise in their environments in a way that is best for them. The larger you are, you have your own incident response teams. They have a scale that SMBs and midmarket don't have. So we want to make sure we can provide services to the SMBs, the midmarket, small enterprise, all the way up to large enterprise, and of course, government is the biggest of the enterprises. And so you have multiple agencies that have budgets where that SMB wouldn't even be a rounding error for them. What we want to do is make sure that we're able to meet the needs of all of our prospective customers.”
ExtraHop partners will also see the company continue to roll out use cases, Bowling said.
“When I got here three years ago, we had ExtraHop Reveal(x), and it was a little bit of intrusion detection system (IDS), a lot of NDR and a little bit of network performance management (NPM),” he said. “And now we have specific modules. So what we're doing is we are laser-targeting the use case for our customers. So now our customers can get both a great NDR use case and a great IDS use case. Or they get the NPM use case, or they get all of them.”