Monday, July 18, 2016

A review of Bellovin/Landau/Lin's paper on Cyber Weapons

Limiting the Undesired Impact of Cyber Weapons: Technical Requirements and Policy Implications

Steven M. Bellovin, Susan Landau, and Herbert S. Lin
Acknowledgements: We are grateful for comments from Thomas Berson, Joseph Nye, and Michael Sulemeyer, which improved the paper. 

In case you're curious: This paper went off the rails, in my opinion, in a couple of areas. The first is that, definitionally, putting "cyber weapons" into both CNE and CNA buckets makes life hard throughout the piece. Then, when it tries to move into non-proliferation, it compounds the confusion, resulting in conclusions not supported by historical evidence. 

What I look for in these papers is the simple phrase "When backdoors are inserted into systems, they use NOBUS technology when possible". Without that phrase, I feel like we're missing an important call for realistic norms.

I left some of the line numbers in below for clarity, although you may find it annoying. Quotes from the paper are in Italics.

A note on terminology: This paper defines a cyber weapon to be a software-based IT artifact or tool that can cause destructive, damaging, or degrading effects on the system or network against which it is directed.  A cyber weapon has two components: a penetration component and a payload component. The penetration component is the mechanism through which the weapon gains access to the system to be attacked. The payload component is the mechanism that actually accomplishes what the weapons is supposed to do—destroy data, interrupt communications, exfiltrate information, causing computer-controlled centrifuges to speed up, and so on. 

This is not a great definition, and the whole paper has to hinge on it. Aside from making the model extremely simplistic ("Penetrate and Payload") versus what we would consider normal technical reality ("initial foothold, persistence, elevation of privilege, lateral movement, C2 connectivity, large-scale data exfiltration protocols, resistance to discovery, etc."), this definition conjoins the exfiltration of data, which is never CNA, with activities that are clearly CNA. And that's a big problem for the paper as a whole going forward. Is Gauss a cyber weapon? So many things fall into these buckets that are clearly non-issues, or that are technology so widespread it cannot be discussed strategically in this context, that the rest of the paper is hard to hold together.
I have my own definition, of course, which I think is broader and more accurate (if by accurate you mean "if you use this definition you will be better at conducting cyber war").

104 That said, indiscriminate targeting is not an inherent characteristic of all cyberattacks. A number of cyberattacks we have seen to date—those on Estonia, Georgia, Iran, Sony—have been carefully targeted and have, for various reasons, not spread damage beyond the original target. In three of these cases (Estonia, Georgia, and Iran), the cyberattacks would appear to have helped accomplish the goal of the 109 attackers—and did so without loss of life. None of the four attacks were “inherently indiscriminate.” 

Would we not say that the Sony attack accomplished its goal? Its goal may simply have been deterrence and announcement of capability. The paper misses an opportunity here to talk about how different discrimination is in the cyber domain. For example, it's very hard to make traditional "weapons" target only people with red shirts on, and only on Tuesdays. But these sorts of complex automated decisions are the heart of a modern toolkit, if for no other reason than covertness and OPSEC.

128 What is technically necessary for a cyber weapon to be capable of being used discriminately? (A cyber weapon that can be used discriminately will be called a targetable cyber weapon.) Two conditions must be met. 
  • Condition 1: The cyber weapon must be capable of being directed against explicitly designated targets; that is, the cyber weapon must be targetable. 
  • Condition 2: The weapon’s direct effects on the intended targets must minimize the creation of negative effects on other systems that the attacker has not explicitly targeted.  

This could have been a good place to mention how different cyber is when it comes to predictability. When everything is probabilistic (as in the Quantum world), making clear distinctions as to what is "targeted," or even what the "target" is, can be difficult in a way policy makers are not used to. For example, "people who visit the Playpen website" is a clear, but probabilistic, target. We don't even know how many there are ahead of time, or where or who they are. And that's just the tip of the iceberg when it comes to the complexity of this issue. 

Iran apparently retaliated for Stuxnet by attacking the Saudi Aramco oil company. This cyber weapon, named Shamoon, erased the disks of infected computers.   

I obviously can't speak for the Iranian MOIS/IRGC, but we should probably be talking about this instead:

407 might be provided with capabilities that enable it to directed to a specified set of IP   

Just a typo ("to be directed"). But while we're proofreading...

Possible cooperative measures can relate to locations in cyberspace—for example, 459 nations might publish the IP addresses associated with the information systems of protected entities, 460 45 and cyber weapons would never be aimed at those IP addresses. 461 Alternatively, systems may have machine-readable tags (such as a special file kept in common locations) that identify it as a protected system. Cyber weapons could be designed to check for such tags as part of their assessment of the environment.  

I am extremely pro watermarking - but we still need to separate CNE and CNA activities in any such scheme. A special file in special locations is something that offers particularly bad OPSEC...:)
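To make the OPSEC point concrete: the paper's tag scheme reduces to a payload stat()-ing a handful of well-known paths before firing. A minimal sketch (the paths and names here are invented for illustration, not from the paper):

```python
import os

# Hypothetical "protected entity" marker scheme along the lines the paper
# proposes: a machine-readable tag kept in well-known locations.
TAG_LOCATIONS = [
    "/etc/protected-entity.tag",
    "C:\\protected-entity.tag",
]

def is_marked_protected(paths=TAG_LOCATIONS):
    """Return True if any well-known tag file exists on this system."""
    return any(os.path.exists(p) for p in paths)

# The OPSEC problem: a defender can plant the tag on honeypots to deflect
# attacks, and any payload that checks these exact paths hands forensics
# a trivial, stable signature to hunt for.
```

Note how little work the check does and how much it leaks: both the defender and the forensics team get more out of this file than the attacker does.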

464 Conversely, another cooperative measure is to designate military entities as such in cyberspace. For example, lists of IP addresses of military facilities (more specifically, entities that would constitute valid military targets in the event of hostilities) could be published. Such a list would serve the same function that uniforms serve—the visual differentiation between a soldier (a valid military target during hostilities) and a non-soldier  

489 A scheme to label nonprotected entities also creates problems. For example, public 490 lists of the IP addresses of military computing facilities is an invitation to the outside 491 world to target those facilities. The legal need to differentiate unprotected 492 (military) and protected (civilian) arises only in the context of the laws of war, 493 which themselves come into force only during armed conflict. Thus, we confront the 494 absurdity of a situation in which prior to the outbreak of hostilities, lists of military 495 entities are kept secret, but upon initiation of hostilities, they are made public 

Honestly, does it not make more sense to simply invest in defensive measures for (or even air-gap) the systems you don't want hacked? Do we have to attempt to mirror every aspect of previous law? Or is this a subtle way to point out that we cannot do so?

The paper could also have pointed out the fundamental issue that IP addresses are not anything close to what a military planner is used to as a definitive military space. They are often translated and re-used. Ranges that are "enemy IP ranges" serve one person one day and someone else the next. IP addresses can of course serve multiple people at the same time... the issues are endless! Until we get to the point where we can admit "CYBER IS DIFFERENT" we are spinning in circles on these issues. Maybe this paper is a step in that direction?
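For what it's worth, the mechanics of the paper's published-IP-list scheme are trivial; the hard part is everything around them. A minimal sketch of the membership check an attacker's tooling would run (hypothetical ranges, drawn from documentation address space):

```python
import ipaddress

# Hypothetical published list of "protected" ranges, invented for
# illustration (these are RFC 5737 documentation networks).
PROTECTED_RANGES = [
    ipaddress.ip_network(n) for n in ("203.0.113.0/24", "198.51.100.0/25")
]

def is_protected(addr: str) -> bool:
    """Check a candidate target address against the published list."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PROTECTED_RANGES)

# The check itself is a few lines -- the unsolved part is that the mapping
# from address to host changes constantly (NAT, CDNs, DHCP, cloud
# reassignment), so yesterday's "military" address may serve a hospital today.
```

The code is the easy 1%; keeping the list accurate, trusted, and un-gamed is the other 99%.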

The knowledge that a system is vulnerable may encourage more attempts at break-ins than might have existed otherwise. Such knowledge does not increase risks of proliferation through reuse, but may, however, increase risk of proliferation by demonstrating certain approaches are plausible. Indeed, the Iranian cyber effort that began after the Stuxnet attack is an example of this.

This is a weak connection. The Iranians had many reasons to invest in a Cyber Offense program. 

 561 Presenting an actual methodology for implementing an attack increases the chance of proliferation. Such a presentation may take the form of a theoretical approach,  but with sufficient detail to make clear the technique works; it may go one step further and provide code for a worked-out example.   
574 Presenting a methodology without running code may count as proliferation 575 depending on how quickly the idea can be made to work in practice. That is not 576 always easy to predict  

Very dangerous ideas in the last two paragraphs. The idea that presenting ideas in computer security, even offensive ideas, could be considered "proliferation" is a call toward regulation in a very, very bad way. This is what gets you the "technology" controls in Wassenaar, which would have the effect of killing off every security conference and fundamentally changing how security research is done. The notion that some ideas are "good" and some are "bad" is, weirdly, a theme unique to policy circles in US academia, which should know better. 

In addition, manufacture of cyber weapons is much cheaper and faster than typical kinetic weapons. Once a cyber weapon has been reverse engineered and its mechanism of deployment and use recovered, making more of the same is far easier than the manufacture of other types of weapons.  

We know from the historical record that this is not a real risk, and yet you continue to see it espoused as a significant danger. Has there been another Stuxnet, repurposed from the original code? Cyber tools are different in that they are essentially SYSTEMS, not just sets of code. Has having all of Hacking Team's source code on GitHub resulted in ANYTHING? Nobody can point to this weird repurposing theory causing any real issues. But you see it a lot in papers promoting regulation efforts. 

627 The reverse engineering of Stuxnet, the only sophisticated cyber weapon that we 628 have described, took much longer than that. But the reverse engineering was 629 accomplished by a handful of researchers in private-sector firms working 630 essentially in odd bits of time. Stuxnet changed the game. It was the first example of 631 a high-quality engineering team approach to developing a cyber weapon. 

That paragraph is not true in many ways. I'm not sure where it comes from, to be honest. Every penetration testing company did its own reverse engineering of Stuxnet and Flame, etc. It's just not true that this was the first high-quality intrusion tool, etc.

675 If disclosure of the penetration mechanisms were done entirely publicly, any potential victim could patch its systems to prevent the weapon from being used against it.  

Really? This whole section draws on a suspect hypothesis that is not backed up at all. In fact, in many cases we know that public disclosure does not change vulnerability rates. The Stuxnet case itself is a classic counter-example to the policies hinted at in this section of the paper. Why the authors chose to drag vulnerability disclosure politics into the paper is a mystery. 

687 Contrary to a belief held in many quarters, cyber weapons as a class are not inherently indiscriminate. But designing a cyber weapons [SIC] so that it can be used in a targeted manner is technically demanding and requires a great deal of intelligence about the targets against which it will be directed. 

I feel like this last part of the conclusion is at odds with the paper in some ways. If using a "cyber weapon" is so technically demanding and requires such specific knowledge of the targets, why do we care so much about proliferation?   

But the core issue of the paper is mixing CNE and CNA efforts. Without that central issue being untangled, it's hard to make policy cases that are coherent. 

Tuesday, July 12, 2016

When is a Cyber Attack an Act of War?

Politics aside, one lesson we can draw from the ongoing debate over Hillary Clinton’s private email server is this - in the years ahead, questions of US national security will increasingly be tied to digital assets.

Just as the concept of a bank has evolved from “a physical place where you keep your money” to a software services provider that conducts financial transactions, so too are countries becoming increasingly defined by code, rather than physical, tangible assets. The United States and other countries are reaching a point where they have a far greater presence in cyberspace than they do on land, sea and air. The sovereignty, integrity, and viability of countries will increasingly depend on cybersecurity issues.

For many, this raises a key question, which members of Congress are now starting to press US military leaders to answer: at what point does a cyber attack constitute an act of war? And how should we respond?

The problem with this question is that it’s impossible to answer. The bottom line is that we can’t define a digital act of war with neat red lines, the way we can define a missile strike as an act of war. There are too many variables to account for in cyber activity, which would ultimately affect how the US government and military would interpret a cyber attack.

To illustrate the problem, consider this: when does an attack on an electric utility cross the line? Is it when a state-sponsored group turns off the power in Denver for a week? What if they only turned it off for one minute? How many lights do you have to turn off in order for it to be considered an act of war?

This seems academic, or far-fetched, or perhaps simply hair-splitting, but it has come up before in real-world situations. When the Iranians were DDoSing our financial infrastructure, we had to address whether to respond. How big does a DDoS have to be to constitute an "attack"? Did the DDoS really have the kind of effect on our financial community that would require a response, or was it simply uncomfortably expensive for private companies to ameliorate? For the executives at those financial companies and the national security team addressing the issue, these questions were anything but academic.

Strategic Uncertainty

If Iran had assassinated Adelson by turning off his pacemaker, instead of hacking his casino, would we have responded as a nation? One valuable answer is “Perhaps”. Strategic uncertainty can provide cover for inaction and action both. But right now it is the result of a muddled national cyber strategy, with no clear answers forthcoming.

What does it mean if we can't clearly define an act of cyber war in any technical way? This problem reverberates through the strategic thought space in other ways too.

What is “Critical Infrastructure”?

The generally accepted view on cyber war is that when hackers cause physical damage to a critical infrastructure facility, this crosses the threshold and could trigger a military response.

This is part of the Pentagon’s own understanding of cyber war - what it refers to as the “equivalency principle.” If a cyber attack is equivalent to a traditional military action, then it crosses the line. Or as one military official put it: “If you shut down our power grid, maybe we will put a missile down one of your smokestacks.”

But what exactly is “critical infrastructure?” We seem to view it as primarily hard assets tied to things like energy production and military readiness. But in reality critical infrastructure is far more than that - it’s everything that makes the economy run and the country function. Therefore, the banking system is also critical infrastructure; so too is the news media, US election system, Justice system, and of course the computer systems used to manage and run those systems.

We also need to think of the US Constitution and Bill of Rights as part of our country's critical infrastructure. That might sound strange to some, but we're now in an age when a foreign government can easily target US citizens and companies for saying things it doesn't like. Case in point: North Korea's 2014 hack of Sony Pictures and its threats against US theater chains over "The Interview," a film it opposed. Similarly, earlier that same year, Iranian hackers caused widespread damage to Las Vegas Sands Corp. because of its CEO's criticism of the Iranian nuclear program.

Critical infrastructure is a far bigger category than it first appears. For this reason, we need to stop focusing on what was attacked and instead consider the effect of that attack.

Red Lines Won’t Work

Another problem in defining cyber war is the notion of “red lines.” This is a concept rooted in a past where air superiority was the dominant consideration and military planners literally drew red lines around objects on a map.

It is extremely difficult to draw a red line in cyberspace. Every government entity, military, company or organization has a broad and confusing presence in the digital world. For example, a hospital has its physical network hardware on site, as well as cloud-based storage in server vaults in other states or countries. The medical equipment it uses, which is increasingly connected to the web, also has back-end web servers and applications managed by outside companies in various other regions throughout the world. For US corporations, mapping cyber assets is even more complex, as you have vast networks and resources stretching over a wide range of countries and states. No US company is just a US company - even the smallest companies have supply chains, customers, and employees all over the world.

You can draw a red line around a hospital’s on-site network, but what if an attacker hits a third-party cloud solutions provider, which stores critical patient data within its systems, but of course, also processes military support information and financial trading information? All internet infrastructure is multi-use.

This feature of the cyber domain reiterates that we are making a mistake if we focus our attention on what access was gained to what systems, rather than on the effects of the attacks, which can vary widely even from the same networks.

What a Country Should Be Allowed to Hack

There’s real concern that a foreign intelligence service may have been able to breach the “homebrew” private email server that Hillary Clinton used when she was Secretary of State. Whether or not that did in fact happen, it is well within the bounds of “acceptable” nation-state activity.

Countries have the “right” to hack each other to fulfill traditional espionage goals. However, the key question in evaluating these attacks is how the data is handled. If data is stolen or communications monitored, all for the purposes of internal consumption by the other country, that is a permissible act. However, if the hackers try to manipulate or destroy data, or they dump it publicly to manipulate certain outcomes (like a Presidential election), that is when cyber espionage exceeds those established boundaries that we (as the US) would like to live by.

The Problem with Deterrence
The real mistake in cyber war planning is to focus on “deterrence”, a notional trap introduced by our fascination with Nuclear parallels.  

While much thought has gone into developing “deterrence” strategies for cyber, the real key to long-term national viability in the space is developing resilience strategies on top of the infrastructure we need to guide our daily lives. We have, for example, asked our power grid to do something impossible: be able to withstand any attack, and in the event of failure, be repairable within hours. Instead, solar power, generators and short-term household batteries need to be a key component of any cyber-defense strategy, much as hard drive backups are a key component at a tactical level against a Saudi Aramco-like attack.

We also have to invest in a country-wide cloud-computing infrastructure that can host key elements of our democracy too important to lose or have manipulated by a crafty adversary. Imagine a way for local governments to leverage the Federal Government's information security expertise - or even just for every Federal Agency to have the same level of security as the most secured ones.

And no strategy works when we don't know what we would even consider worth a response. The current cloudiness as to whether a US company being blackmailed by a nation-state is even worth a US response, or whether manipulation of our electoral system counts as an offensive action, indicates we are not yet ready to enumerate the norms our country wants to live under in the cyber domain - something that should worry all of us.