The Cybersecurity Shift: The Best Defense Is a Good Offense

Retaliating against attackers was once off the table, but now the battleground has shifted again.

Pam Baker

February 18, 2019

13 Min Read
Defense to offense strategy

The U.S. federal government depends on the private sector to help protect critical infrastructure. That’s no small feat for utilities and companies to accomplish, given the increasing frequency, intensity and variations of attacks from nation states and bad actors. With frustrations running high, the idea of retaliating or attacking pre-emptively inevitably comes to mind. But the idea was tabled in the past due to several restrictive circumstances ranging from legal liabilities to technical difficulties. Now the battlefront is changing again, and so is the technology in the arsenal, reigniting dreams of shifting security from defense to offense.


Garrison’s Henry Harrison

“Twenty years ago, there were two worlds: the national security world, which faced sophisticated threats from other nation states, and the commercial world, which faced threats from low-level criminals and hobbyists,” said Henry Harrison, co-founder and CTO of cybersecurity firm Garrison. “For the commercial world, it was an innocent age, but that time has passed. Today, the threats have converged. Nation states do not restrict their activity to attacking other governments, and high-end criminals share many of the same sophisticated capabilities as nation states, due in part to the rise in technologies like AI. Commercial organizations are facing a dramatically different set of adversaries than they have in the past.”

War Fronts and Changing Rules of Engagement

Back in the 1990s and early 2000s, hackers earned their street cred by battling other hackers. Sometimes it was friendly fire met with an amused retort when successfully defended. Other times, it was a serious battle of wits and code designed to cripple or destroy the other side.


Cylance’s Scott Scheferman

“The culture, laws and even the mythos of that era were much different than they are today,” said Scott Scheferman, senior director of global services and strategic adviser at Cylance. “Admins and hackers would openly brag about such encounters and wins with their peers and management.”

“The cat and mouse game was very much ‘on point’ then,” said Scheferman, who spent more than 17 years consulting for DoD/IC cyberoperations prior to joining the commercial side. “Attackers anticipated the return fire and would bait the original targets with honeypots, malicious files or simple readme.txt files akin to a ‘nice try.’”

Since then, the war front has changed considerably. Scheferman says today’s hazards include, but aren’t limited to: liability, litigation, false-flag attribution, legality, geopolitical/nation-state-level visibility and the incredibly complex “many-to-many” challenges on the technology, attribution, tooling, policy and business-risk fronts.


IOActive’s John Sheehy

Successfully maneuvering around those hazards to strike an opponent is almost impossible.

“In general, ‘hacking back’ is a bad idea for non-governmental entities,” said John Sheehy, director of strategic services at IOActive. “There are many reasons for this.”

First, Sheehy says, are the legal issues which are complex and easily snarled by extenuating circumstances. “This is further complicated by even moderately sophisticated attackers normally jumping through multiple jurisdictions and multiple third-party systems.”

But there are legal quagmires in U.S. jurisdictions that are highly problematic too.

“Over the years, law enforcement has not been consistent in its application of the Computer Fraud and Abuse Act, which is how they define hacking,” said Jonathan Couch, senior vice president of strategy at ThreatQuotient. “Organizations are given a little bit of leeway when they are recovering their own data, but there are limits: unauthorized access to systems not owned by that company is where the line typically sits.”

“Security teams can go on the offense any time they want, but they most likely won’t be legally protected,” he said.


ThreatQuotient’s Jonathan Couch

Second on the list of reasons not to go on the offense is a very high and totally unacceptable error rate.

“Such actions cloud all participants’ understanding of contested space and their adversaries’ intentions,” said Sheehy. “This is a dangerous recipe for near instantaneous escalation.”

Escalations that could easily “result in injuries and deaths to otherwise uninvolved parties.”

Third on the list of reasons why hacking back is a certain path to catastrophe is the private sector’s woeful inability to make high-assurance attack attributions. Simply put, nation states have vast resources, both technical and nontechnical, at their disposal to identify attackers. Even so, nation states are still severely challenged in making correct attributions within a short timeframe. Thus, it’s not something even a very large and well-funded organization could easily pull off.

“No nongovernmental organization has or will ever have such resources to draw upon,” said Sheehy. “Moreover, it’s a fantasy to expect that high-assurance attribution can be performed by relying only upon network sensors operating at the targeted organization. For example, just because a particular malware tool chain and TTPs have been used, does not mean that it is the original threat actor behind this latest event.”

And that snarl of misattributions also likely leads to a worse problem: fouling the investigative work of federal agencies. “A mob with keyboards and mice is no better than one with pitchforks and torches,” said Sheehy.

Thus, for years it appeared that cybersecurity pros had arrived at a stalemate in which the hazards permanently forced one choice: defensive plays. But as it turns out, those hazards are not permanent after all. The war front has moved, and the obstacles didn’t move with it. Which is not to say that new hazards haven’t appeared. This is a war zone, after all. Danger is omnipresent.

Changing Locations Changes the Rules and the Odds

Location became a defining factor in the offense-versus-defense debate. Indeed, a new perspective on geography changed the rules of engagement entirely.

“There is a marked difference between attacking an adversary on their soil, versus attacking them on your own ground once they have invaded,” said Scheferman. “The latter is the area we need to focus on most going forward. The cyber realm is not like the kinetic realm of domestic vs. foreign soil and ‘taking the fight to the enemy’s soil.’ Moreover, as we head into 2019 and beyond there is a wealth of new ways we can proactively target an attacker inside our own environment.”

In other words, by going on the offensive within an organization’s own perimeter, many of the hazards that exist in attacking an adversary on their own turf simply melt away.

What’s more, it means you’re attacking an aggressor who has few or no defenses of their own while they are in offense mode. As is said of football, “The best defense is a damn good offense.” Not exactly easy to do, but the defender’s odds just got a whole lot better.

However, this strategy calls for more than moving the battlefront. It requires a new arsenal, too.

“With the renaissance of machine-learning and AI now in full swing, the best way to defeat an adversary is against the backdrop of time itself: knowing and being where the adversary is going to be, before they get there; and moreover, having the ability to take autonomous, automated machine-speed (near real-time) actions the moment they arrive. This becomes the most impactful form of attacking the adversary: restoring the advantage of time to the defender,” said Scheferman.

Defend Forward

If this looks like offense wrapped in a defensive play, that’s because it is. And that is the direction modern-day security strategies seem to be leaning.

“Government cybersecurity most certainly should go on the offensive, but in a defensive way,” said Henry B. Whitaker, FSO at Dexter Edward, the maker of Fognigma software.

Indeed, the federal government has signaled a shift in its posture, and in how private organizations should similarly change their security mindset.

“The 2018 Department of Defense Cyber Strategy, released on Sept. 18, 2018, signaled a shift in how organizations think about cyberdefense,” said Whitaker. The new directive is for cybersecurity to “defend forward to disrupt or halt malicious cyberactivity at its source.”

“In other words, rather than waiting for a threat to attack, defending forward allows for very strategic offensive actions to weaken or nullify the attacker at their source before an attack occurs,” explained Whitaker. “Defending forward inherently carries with it national security risks, so private organizations should never attempt it on their own but they can and should be integral in providing government support.”

The defensive part of this offensive play refers to a purposeful focus on causing few to no collateral effects, in order to limit the unintended consequences of an uncontrolled attack launch. Whitaker cited the example of the NotPetya attack, which quickly grew out of control and became a global epidemic.

The chief weapon that a defend-forward strategy depends on is artificial intelligence (AI). Technically, that means machine learning and deep learning, two subsets of AI that are taking the cybersecurity space by storm. But this is a long way from AI as most people imagine it: a machine that thinks like a human. Perhaps it is more precise to think of current forms of AI as “assisted” or “augmented” intelligent support for human talent, rather than as “artificial” intelligence akin to a sentient being.

In any case, this new offense-is-the-new-defense strategy is not possible without AI in one or more of its fledgling forms. However, AI is weaponized in this case, and like any weapon it serves the foe as well as the friend.

AI — From Fledgling Defender to Attacking War Hawk

AI is currently a fledgling in the security space and subject to errors and being hacked itself.

“AI and machine learning are almost entirely applied to identifying malicious activity on the network and remediating it,” said ThreatQuotient’s Couch. “There has been some work on applying AI to cyberthreat intelligence to identify attackers, but that is in its early stages.”

“The biggest problem with applying AI to a lot of issues is that you need models to train the systems, but in the cyberworld, things just aren’t that advanced or well-documented yet,” he said. “If we believe it was a Chinese intrusion team, is that actually who conducted the attack? We don’t have any definitive answers on that front, so we can never close the loop on what happened well enough to train these systems to learn.”
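
To picture the kind of pipeline Couch describes, here is a minimal, hypothetical sketch: a model learns what “normal” network behavior looks like and triggers a remediation action when traffic deviates from it. The feature names, values and quarantine hook are invented for illustration and do not describe any particular vendor’s product.

```python
# Toy sketch of ML-based network detection plus automated remediation.
# Feature names, values and the quarantine hook are invented for
# illustration; real deployments use far richer telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline flows: [bytes_out_kb, duration_s, dest_port_entropy, failed_logins]
normal_flows = rng.normal(loc=[50, 30, 1.0, 0.2],
                          scale=[20, 10, 0.3, 0.4],
                          size=(1000, 4))

# Learn what "normal" looks like; anything far outside it gets flagged.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

def quarantine(host):
    """Hypothetical remediation hook; in practice this would call an EDR or NAC API."""
    print(f"isolating host {host}")

new_flows = {
    "10.0.0.5": [48, 28, 1.1, 0],      # looks like baseline traffic
    "10.0.0.9": [900, 400, 3.5, 12],   # exfiltration-like volume, many failed logins
}

for host, features in new_flows.items():
    if detector.predict([features])[0] == -1:  # -1 means "anomalous"
        quarantine(host)
```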


Pondurance’s Curtis Brazzell

But even when we do have the models to successfully train the machines, we will likely never be able to fight off nation states and bad actors with AI alone. The odds are good that it will always be subject to making errors and to being hacked. Trusting AI to fight and win our battles for the good of country or company, without human oversight, is a dangerous and costly mistake.

“Could AI be tricked by using a valid proxy or another method to force a defensive target to instead attack an innocent victim?” asked Curtis Brazzell, managing security consultant at cybersecurity firm Pondurance, contemplating possible scenarios. “What would that look like? Could you be liable for damages? Could attackers leverage this as a new DoS attack?”

Further, most experts agree that AI wars are eventually coming. While one AI will obviously try to subdue the other in these machine battles, it may do so by absorbing it and its owner’s secrets (remember, all that training data reflects what the trainer knows) or by “flipping it” against its owner in an unexpected counterattack aimed at the owner’s shortcomings which it learned from the training data. Here again lies the danger in escalating events by going on the offense instead of sticking to defensive tactics. But perhaps most terrifying of all is that we may not realize what the AI is actually doing until it is far too late.

“Numerous research papers have shown these systems can be exploited or manipulated to produce misclassifications,” said IOActive’s Sheehy. “Most disturbing is that there is no way to accurately determine exactly why a machine learning/AI system made a particular decision.”

“Second, tightly coupled systems cause effects to propagate very quickly. It is a fundamental design failure to allow systems without deterministic decision-making to be able to cause high-impact effects, such as a computer network attack,” he said.
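
Sheehy’s first point is easy to show in miniature. The toy sketch below uses a made-up linear “malware score” model with invented weights and features; it illustrates how a small, targeted nudge to the input (the gradient-sign idea described in the adversarial machine learning literature) can flip a classification from malicious to benign. Real evasion attacks against commercial classifiers are more involved, but the principle is the same.

```python
import numpy as np

# Toy linear "malware score" model: w.x + b > 0 means "malicious".
# Weights and features are invented purely for illustration.
w = np.array([0.8, -0.4, 1.2, 0.3])
b = -0.5

def score(x):
    return float(w @ x + b)

# A sample the model correctly flags as malicious.
x = np.array([1.0, 0.2, 0.9, 0.1])
print("original score:", score(x))       # positive -> flagged

# For a linear model, the gradient of the score w.r.t. the input is just w.
# Nudging each feature a small step against the gradient's sign lowers the
# score -- the core of gradient-sign (FGSM-style) evasion.
epsilon = 0.6
x_adv = x - epsilon * np.sign(w)
print("perturbed score:", score(x_adv))  # drops below zero -> evades detection
```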

But a sound argument can be made that security isn’t about eliminating risks, as much as everyone would like it to be. Instead, it is about managing risks, i.e., reducing risks to an acceptable level. The problem is in defining what’s acceptable. The definition in the eyes of the defender, the attacker, stockholders and the law can vary widely.

Still there is no doubt that even as a fledgling, AI is rapidly making headway in security. The temptation to strike back grows with AI’s capabilities. A strong strategy can perhaps temper that eagerness and deliver an effective battle plan.

Cylance’s Scheferman says he sees three ways AI can be used in offensive strategies:

1. Predictive Time Advantage. Algorithms based on a predictive model can examine millions of features in microseconds and use them to identify with extremely high confidence that code is malicious, preventing it from ever executing. (A minimal sketch of this pre-execution scoring idea appears after the list.)

“This is not only meant to be a real time advantage over the adversary,” he said. “It is also meant to demotivate them over the coming years, much akin to the way they’ve historically demotivated us as leaders in our industry, to the point we say things like ‘It’s a matter of when, not if, I will be compromised.’”

2. Localized security intelligence placed on edge devices. There isn’t time to wait on analytics in the cloud to analyze the threat and get back to security professionals before the threat actor has done damage. To speed response times, the cloud and the edge must be used in smarter ways.

“In a microsecond universe, intelligence must exist exactly where the adversary is operating: in memory on the local stack,” Scheferman said. “To achieve this requires a shift in how we think about the cloud. Use it to create models, to train on, to enrich upon after the fact, to orchestrate, to do a lot of wonderful things that only can be done in the cloud. But don’t use it as the primary source of intelligence needed to enable actions to take place at the edge.”

Another advantage to this tactic, at least initially, is you’ll surprise adversaries. This is usually their domain and they won’t expect to see you there.

3. Forget who, find the what. AI can focus on many things in microseconds to establish whether an attack is imminent or not — and it can do so without bothering with identifying who is behind it.

“The industry has gotten the concept of attribution and identity all wrong the last several years,” said Scheferman. “When under attack, knowing ‘who’ an adversary is matters not. But, knowing ‘what’ an adversary is, and whether they are masquerading as one of your authenticated users, for example, matters more than anything.”

“Find it, destroy it” is a better offensive play than “Knock-knock, who’s there?”

“These are three forward-leaning, largely AI-enabled ways that defenders can proactively attack adversaries midstream, during an attack,” said Scheferman. “In this way, and really only in this way, we can finally go on the offensive against the adversary: by beating them at their own game and surprising them in our ability to anticipate and outmaneuver, and do so without a tether to the cloud.”
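
Items 1 and 2 amount to a pre-execution classifier that lives on the endpoint. The sketch below shows one generic way such a pipeline could look, assuming a model is trained centrally and then evaluated locally; the features, data and threshold are invented for illustration, and this is not a description of Cylance’s (or any vendor’s) actual engine.

```python
# Sketch of "predictive time advantage" plus edge-local intelligence:
# score a file's static features before execution, using a model trained
# elsewhere and shipped to the endpoint, so no cloud round trip is needed
# at decision time. Feature names, data and the threshold are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in for cloud-side training data: synthetic static features
# [entropy, import_count, section_count, packer_score] for benign/malicious files.
benign = rng.normal(loc=[5.0, 120, 5, 0.1], scale=[1.0, 40, 2, 0.1], size=(500, 4))
malicious = rng.normal(loc=[7.5, 20, 9, 0.8], scale=[0.8, 15, 3, 0.2], size=(500, 4))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

# "Cloud" step: train the model. In Scheferman's framing, the cloud builds
# and enriches models; it is not queried at the moment of decision.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

BLOCK_THRESHOLD = 0.9  # arbitrary confidence cutoff, for illustration only

def should_block(static_features):
    """Edge-local, pre-execution decision: block if the model is confident it's malicious."""
    p_malicious = model.predict_proba([static_features])[0][1]
    return p_malicious >= BLOCK_THRESHOLD

# Features of a hypothetical file that is about to run on the endpoint.
candidate = [7.8, 18, 10, 0.9]
print("block execution:", should_block(candidate))
```

The design choice here mirrors Scheferman’s point: the decision happens in microseconds on the local stack, while the cloud is reserved for training, enrichment and orchestration after the fact.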


About the Author

Pam Baker

A prolific writer and analyst, Pam Baker’s published work appears in many leading print and online publications including Security Boulevard, PCMag, Institutional Investor magazine, CIO, TechTarget, Linux.com and InformationWeek, as well as many others. Her latest book is “Data Divination: Big Data Strategies.” She’s also a popular speaker at technology conferences as well as specialty conferences such as the Excellence in Journalism events and a medical research and healthcare event at the NY Academy of Sciences.
