Monday, July 18, 2016

A review of Bellovin/Landau/Lin's paper on Cyber Weapons

Limiting the Undesired Impact of Cyber Weapons: Technical Requirements and Policy Implications

Steven M. Bellovin, Susan Landau, and Herbert S. Lin
Acknowledgements: We are grateful for comments from Thomas Berson, Joseph Nye, and Michael Sulmeyer, which improved the paper.

 
In case you're curious: This paper went off the rails, in my opinion, in a couple of areas. The first is that definitionally, putting "cyber weapons" into both CNE and CNA buckets makes life hard throughout the piece. Then, when it tries to move into non-proliferation, it compounds that confusion, resulting in conclusions not supported by historical evidence.

What I look for in these papers is the simple phrase "When backdoors are inserted into systems, they use NOBUS technology when possible". Without that phrase, I feel like we're missing an important call for realistic norms.

I left some of the paper's line numbers in below for clarity, although you may find them annoying. Quotes from the paper are in italics.


A note on terminology: This paper defines a cyber weapon to be a software-based IT artifact or tool that can cause destructive, damaging, or degrading effects on the system or network against which it is directed. A cyber weapon has two components: a penetration component and a payload component. The penetration component is the mechanism through which the weapon gains access to the system to be attacked. The payload component is the mechanism that actually accomplishes what the weapon is supposed to do—destroy data, interrupt communications, exfiltrate information, cause computer-controlled centrifuges to speed up, and so on.

This is not a great definition, and yet the whole paper hinges on it. Aside from making the model extremely simplistic ("Penetrate and Payload") versus what we would consider normal technical reality ("initial foothold, persistence, elevation of privilege, lateral movement, C2 connectivity, large-scale exfiltration protocols, resistance to discovery, etc."), this definition conjoins the exfiltration of data, which is never CNA, with activities that are clearly CNA. And that's a big problem for the paper as a whole going forward. Is Gauss a cyber weapon? So many things fall into these buckets that are clearly non-issues, or that are technology so widespread it cannot be discussed strategically in this context, that the rest of the paper is hard to hold together.
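To make the contrast concrete, here is a minimal sketch (my own framing, not the paper's) of the stage model an operator actually works with, and where the paper's two buckets land on it:

from enum import Enum, auto

class OperationStage(Enum):
    """Stages of a real intrusion, versus the paper's two-part model."""
    INITIAL_FOOTHOLD = auto()      # roughly the paper's "penetration component"
    PERSISTENCE = auto()
    PRIVILEGE_ESCALATION = auto()
    LATERAL_MOVEMENT = auto()
    C2_CONNECTIVITY = auto()
    EXFILTRATION = auto()          # CNE -- but the paper folds this into "payload"
    DESTRUCTIVE_EFFECT = auto()    # CNA -- also "payload"; this is the conflation
    ANTI_FORENSICS = auto()

# Everything after initial access becomes "payload" in the paper's model,
# which is exactly how pure espionage (CNE) and destruction (CNA) end up
# in the same definitional bucket.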
 
I have my own definition of course, which I think is broader and more accurate (if by "accurate" you mean "if you use this definition you will be better at conducting cyber war"). https://www.youtube.com/watch?v=GiV6am2lNTQ

104 That said, indiscriminate targeting is not an inherent characteristic of all cyberattacks. A number of cyberattacks we have seen to date—those on Estonia, Georgia, Iran, Sony—have been carefully targeted and have, for various reasons, not spread damage beyond the original target. In three of these cases (Estonia, Georgia, and Iran), the cyberattacks would appear to have helped accomplish the goal of the attackers—and did so without loss of life. None of the four attacks were “inherently indiscriminate.”

Would we not say that the Sony attack accomplished its goal? Its goal may simply have been deterrence and the announcement of capability. The paper misses an opportunity here to talk about how different discrimination is in the cyber domain. For example, it's very hard to make traditional "weapons" target only people with red shirts on, and only on Tuesdays. But this sort of complex automated decision is the heart of a modern toolkit, if for no other reason than covertness and OPSEC.
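To make that concrete, here's a minimal sketch (entirely my own illustration; none of these values come from the paper or any real tool) of the kind of environment-keyed gate a modern implant runs before doing anything, which is exactly the "red shirts on Tuesdays" discrimination you can't build into a kinetic weapon:

import datetime
import socket

# Hypothetical target criteria -- illustrative values only.
TARGET_DOMAIN_SUFFIX = ".example-target.mil"
ALLOWED_WEEKDAY = 1  # Tuesday (Monday == 0)

def should_engage() -> bool:
    # Environment-keyed targeting gate: stay dormant unless the host
    # matches explicit criteria. This is a discrimination mechanism,
    # but it's also covertness/OPSEC: non-targets see no behavior at all.
    hostname = socket.getfqdn()
    if not hostname.endswith(TARGET_DOMAIN_SUFFIX):
        return False
    if datetime.date.today().weekday() != ALLOWED_WEEKDAY:
        return False
    return True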

128 What is technically necessary for a cyber weapon to be capable of being used discriminately? (A cyber weapon that can be used discriminately will be called a targetable cyber weapon.) Two conditions must be met. 
  • Condition 1: The cyber weapon must be capable of being directed against explicitly designated targets; that is, the cyber weapon must be targetable. 
  • Condition 2: The weapon’s direct effects on the intended targets must minimize the creation of negative effects on other systems that the attacker has not explicitly targeted.  

This could have been a good place to mention how different cyber is when it comes to predictability. When everything is probabilistic (as in the Quantum world), making clear distinctions as to what is "targeted", or even what the "target" is, can be difficult in a way policy makers are not used to. For example, "people who visit the Playpen website" is a clear, but probabilistic, target. We don't even know how many there are ahead of time, or where or who they are. And that's just the tip of the iceberg when it comes to the complexity of this issue.
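In code terms (again, my own illustration; the field names and URL are hypothetical): the "target" here is a predicate evaluated at encounter time, not a list you can enumerate, so the size and membership of the target set is unknowable in advance:

from typing import Callable

Visitor = dict  # e.g. {"referrer": "...", "session_id": "..."}

def make_target_predicate(watched_site: str) -> Callable[[Visitor], bool]:
    # "Anyone who visits watched_site" -- the set this defines is only
    # discovered as visitors arrive; you cannot count or locate its
    # members ahead of time.
    def is_target(visitor: Visitor) -> bool:
        return visitor.get("referrer", "").startswith(watched_site)
    return is_target

is_target = make_target_predicate("https://playpen.example")
print(is_target({"referrer": "https://playpen.example/forum"}))  # True
print(is_target({"referrer": "https://news.example/"}))          # False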

Iran apparently retaliated for Stuxnet by attacking the Saudi Aramco oil company. This cyber weapon, named Shamoon, erased the disks of infected computers.   

I obviously can't speak for the Iranian MOIS/IRGC, but the paper should probably be talking about this instead: https://www.theguardian.com/world/2012/apr/23/iranian-oil-ministry-cyber-attack

407 might be provided with capabilities that enable it to directed to a specified set of IP   

Just a typo ("to be directed"). But while we're proofreading...

459 Possible cooperative measures can relate to locations in cyberspace—for example, nations might publish the IP addresses associated with the information systems of protected entities, and cyber weapons would never be aimed at those IP addresses. Alternatively, systems may have machine-readable tags (such as a special file kept in common locations) that identify it as a protected system. Cyber weapons could be designed to check for such tags as part of their assessment of the environment.

I am extremely pro-watermarking, but we still need to separate CNE and CNA activities in any such scheme. And a special file in special locations is something that offers particularly bad OPSEC... :)
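Just to make the paper's proposal concrete, a minimal sketch under my own assumptions (the paper specifies no format, so the tag paths here are hypothetical), which also shows the OPSEC problem: a compliant implant has to touch well-known paths that any file-access monitoring will flag:

import os

# Hypothetical tag locations -- the paper proposes "a special file kept
# in common locations" but specifies no actual path or format.
PROTECTED_TAG_PATHS = [
    "/etc/protected-entity.tag",
    "C:\\ProgramData\\protected-entity.tag",
]

def is_protected_system() -> bool:
    # The check a compliant cyber weapon would run before deploying its
    # payload. Reading these well-known paths is observable, which is
    # exactly the bad OPSEC noted above.
    return any(os.path.exists(p) for p in PROTECTED_TAG_PATHS)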

464 Conversely, another cooperative measure is to designate military entities as such in cyberspace. For example, lists of IP addresses of military facilities (more specifically, entities that would constitute valid military targets in the event of hostilities) could be published. Such a list would serve the same function that uniforms serve—the visual differentiation between a soldier (a valid military target during hostilities) and a non-soldier  

489 A scheme to label nonprotected entities also creates problems. For example, public lists of the IP addresses of military computing facilities is an invitation to the outside world to target those facilities. The legal need to differentiate unprotected (military) and protected (civilian) arises only in the context of the laws of war, which themselves come into force only during armed conflict. Thus, we confront the absurdity of a situation in which prior to the outbreak of hostilities, lists of military entities are kept secret, but upon initiation of hostilities, they are made public.

Honestly, does it not make more sense to simply invest in defensive measures for (or even air-gap) the systems you don't want hacked? Do we have to attempt to mirror every aspect of previous law? Or is this a subtle way of pointing out that we cannot do so?

The paper could also have pointed out the fundamental issue that IP addresses are nothing close to what a military planner is used to as a definitive military space. They are often translated and re-used. Ranges that are "enemy IP ranges" serve one person one day and someone else the next. IP addresses can, of course, serve multiple people at the same time... the issues are endless! Until we get to the point where we can admit "CYBER IS DIFFERENT" we are spinning in circles on these issues. Maybe this paper is a step in that direction?
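A quick sketch of why (my own illustration; the CIDR blocks below are RFC 5737 documentation ranges standing in for a published list that doesn't actually exist): the membership check is trivial, but the answer changes underneath you:

import ipaddress

# Stand-in for "published military IP space" -- documentation ranges only.
PUBLISHED_MILITARY_RANGES = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/25"),
]

def is_designated_military(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in PUBLISHED_MILITARY_RANGES)

# The hard part isn't the lookup. NAT, CDNs, and address reuse mean the
# host behind 198.51.100.7 today may be someone else's web server
# tomorrow -- the published list is stale the moment it ships.
print(is_designated_military("198.51.100.7"))  # True (today, anyway)
print(is_designated_military("192.0.2.1"))     # False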

The knowledge that a system is vulnerable may encourage more attempts at break-ins than might have existed otherwise. Such knowledge does not increase risks of proliferation through reuse, but may, however, increase risk of proliferation by demonstrating certain approaches are plausible. Indeed, the Iranian cyber effort that began after the Stuxnet attack is an example of this.

This is a weak connection. The Iranians had many reasons to invest in a Cyber Offense program. 

561 Presenting an actual methodology for implementing an attack increases the chance of proliferation. Such a presentation may take the form of a theoretical approach, but with sufficient detail to make clear the technique works; it may go one step further and provide code for a worked-out example.
 
574 Presenting a methodology without running code may count as proliferation depending on how quickly the idea can be made to work in practice. That is not always easy to predict.

Very dangerous ideas in the last two paragraphs. The notion that presenting ideas in computer security, even offensive ideas, could be considered "proliferation" is a call for regulation of a very, very bad kind. This is what gets you the "technology" controls in Wassenaar, which would have the effect of killing off every security conference and fundamentally changing how security research is done. That some ideas are "good" and some are "bad" is a theme weirdly unique to policy circles in US academia, which should know better.

In addition, manufacture of cyber weapons is much cheaper and faster than typical kinetic weapons. Once a cyber weapon has been reverse engineered and its mechanism of deployment and use recovered, making more of the same is far easier than the manufacture of other types of weapons.  

We know from the historical record that this is not a real risk, and yet you continue to see it espoused as a significant danger. Has there been another Stuxnet, repurposed from the original code? Cyber tools are different in that they are essentially SYSTEMS, not just sets of code. Has having all of Hacking Team's source code on GitHub resulted in ANYTHING? Nobody can point to this weird repurposing theory causing any real issues. But you see it a lot in papers that are promoting regulation efforts.

627 The reverse engineering of Stuxnet, the only sophisticated cyber weapon that we have described, took much longer than that. But the reverse engineering was accomplished by a handful of researchers in private-sector firms working essentially in odd bits of time. Stuxnet changed the game. It was the first example of a high-quality engineering team approach to developing a cyber weapon.

That paragraph is not true in many ways, and I'm not sure where it comes from, to be honest. Every penetration testing company did its own reverse engineering of Stuxnet, Flame, etc. And it's simply not true that this was the first high-quality intrusion tool.

675 If disclosure of the penetration mechanisms were done entirely publicly, any potential victim could patch its systems to prevent the weapon from being used against it.  

Really? This whole section draws on a suspect hypothesis that is not backed up at all. In fact, in many cases we know that public disclosure does not change vulnerability rates. The Stuxnet case itself is a classic counter-example to the policies hinted at in this section of the paper. Why the authors chose to drag vulnerability disclosure politics into the paper is a mystery.

687 Contrary to a belief held in many quarters, cyber weapons as a class are not inherently indiscriminate. But designing a cyber weapons [SIC] so that it can be used in a targeted manner is technically demanding and requires a great deal of intelligence about the targets against which it will be directed. 

I feel like this last part of the conclusion is at odds with the paper in some ways. If using a "cyber weapon" is so technically demanding and requires such specific knowledge of the targets, why do we care so much about proliferation?   

But the core issue of the paper is its mixing of CNE and CNA efforts. Until that central issue is untangled, it's hard to make policy cases that are coherent.
