Friday, August 26, 2016

The Unintended Consequences of Software Liabilities

"Pacemakers".

People love the idea of holding software companies' feet to the fire when it comes to security. You hear a lot about software liabilities, and how "inevitable" they are, at CFR meetings and other policy forums. You hear about mandatory FDA-enforced or Commerce-enforced recalls for cars or other IoT devices with software vulnerabilities.

But if you do that, you make it so every hacker in the world can figure out the cost of a disclosed vulnerability, which means shorting stock becomes the best bug bounty in the world. "Why not just control all vulnerability disclosure?" the policy makers then say. Fantastic idea. I wonder if THAT will have any unintended consequences?


Monday, August 22, 2016

Data is not Analysis in Vulnerability Equities

If you haven't read Matt Tait's and my piece on why we think the VEP has severe problems, please do so! We love heckling. :) That said, government 0day policy is the last area we should be focusing on as a control measure. It's insanely complex and we don't have the data to do it right, which is the case I think you'll see me making in the next few months.

I want to point out that many of the policy and academic papers I've read (some of which are referenced in the above piece) both over-simplify the idea of bug collision (reducing it to "sparse" or "dense", terms which make no real technical sense) and come to the opposite conclusion about vulnerability overlap from every technical person I know, many of whom have decades of experience holding 0day.

Below are some Twitter notes from Stefan Esser, Halvar, Grugq, Argp, and others, who point out that while anecdotal evidence of a lack of overlap is not conclusive in any way, it's interesting that everyone in the business seems to have the same basic experience. 

To wit, the most common way vulnerabilities get "killed" appears to be because of coincidental code refactor. 

And of course, sometimes it's not a vulnerability, but a CLASS of vulnerabilities that you are trying to measure. Most big research firms have new classes of bugs and new exploit techniques that are not seen or used publicly. There are no clear lines here, but at a certain point, what you're trying to measure is math. Why is there no Math Equities Process for the government? It's because MATH is not as sexy as 0day (aka, not as clearly impactful on Microsoft's bottom line and marketing message?). 

Even if you had all the data, normalizing it, analyzing it and understanding it would be a complex, difficult endeavor. And beyond that, making a sane policy choice is even harder. But until then, we have to admit that our policy choices are a bit...insane. :)
 




Tuesday, August 16, 2016

Why EQGRP Leak is Russia

"Cyber Stalingrad Statue has opinions!"

First off, it's not a "hack" of a command and control box that resulted in this leak. Assuming it's real (I cannot confirm or deny anything here - largely because I don't know), it's almost certainly human intelligence - someone walked out of a secure area with a USB key. So let's go down the list of factors that make it "Almost Certainly Russia".

  1. Timing: Seems almost certain to be related to the DNC hacks. High-level US political officials seemed quite upset about the DNC hacks, which no doubt resulted in a covert response, which this is then likely a counter-response to. As Snowden put it: Somebody is sending a message that they know about USG efforts to influence elections and governments via cyber. 
  2. Mention of corruption and elections in the text of the release feels classically Russian.
  3. Ability to keep something this big quiet for three years (the leak is just post-Snowden) is probably limited to those with operational security expertise or a desire to leverage those bugs for themselves
  4. Information results from HUMINT, not simple hack of a C2 box as suggested (not that even that would be easy). Level of difficulty: Very Experienced Nation State. 
    1. Alternate possibility: someone was sitting on a redirector box and the most incompetent person on Earth uploaded this ops disk to it to make their lives easy. Still means someone was hiding on this box who knows what they're doing in an unusually skilled way. 
    2. Alternate, believable opinion on this from the Grugq: here.
  5. No team of "hackers" would want to piss off Equation Group this much. That's the kind of cojones that only come from having a nation state protecting you.
  6. Wikileaks also has the data (they claim)
"Conventional Wisdom from Russian Intel!"


Monday, July 18, 2016

A review of Bellovin/Landau/Lin's paper on Cyber Weapons

Limiting the Undesired Impact of Cyber Weapons: Technical Requirements and Policy Implications

Steven M. Bellovin, Susan Landau, and Herbert S. Lin
Acknowledgements: We are grateful for comments from Thomas Berson, Joseph Nye, and Michael Sulmeyer, which improved the paper. 

 
In case you're curious: This paper went off the rails in my opinion in a couple areas. The first is that definitionally, putting "cyber weapons" into both CNE and CNA buckets makes life hard throughout the piece. Then, when it tries to move into non-proliferation, it compounds the confusion to result in conclusions not supported by historical evidence. 

What I look for in these papers is the simple phrase "When backdoors are inserted into systems, they use NOBUS technology when possible". Without that phrase, I feel like we're missing an important call for realistic norms.

I left some of the line numbers in below for clarity, although you may find it annoying. Quotes from the paper are in Italics.


A note on terminology: This paper defines a cyber weapon to be a software-based IT artifact or tool that can cause destructive, damaging, or degrading effects on the system or network against which it is directed.  A cyber weapon has two components: a penetration component and a payload component. The penetration component is the mechanism through which the weapon gains access to the system to be attacked. The payload component is the mechanism that actually accomplishes what the weapons is supposed to do—destroy data, interrupt communications, exfiltrate information, causing computer-controlled centrifuges to speed up, and so on. 

This is not a great definition, and the whole paper has to hinge on it. Aside from making the model extremely simplistic ("Penetrate and Payload") versus what we would consider normal technical reality ("initial foothold, persistence, elevation of privilege, lateral movement, C2 connectivity, large-scale data exfiltration protocols, resistance to discovery, etc."), this definition conjoins the exfiltration of data, which is never CNA, with activities that are clearly CNA. And that's a big problem for the paper as a whole going forward. Is Gauss a cyber weapon? So many things fall into these buckets that are clearly non-issues, or that are technology so widespread it cannot be discussed strategically in this context, that the rest of the paper becomes hard to hold together.
 
I have my own definition of course, which I think is broader and more accurate (if by "accurate" you mean "if you use this definition you will be better at conducting cyber war"): https://www.youtube.com/watch?v=GiV6am2lNTQ

104 That said, indiscriminate targeting is not an inherent characteristic of all cyberattacks. A number of cyberattacks we have seen to date—those on Estonia, Georgia, Iran, Sony—have been carefully targeted and have, for various reasons, not spread damage beyond the original target. In three of these cases (Estonia, Georgia, and Iran), the cyberattacks would appear to have helped accomplish the goal of the 109 attackers—and did so without loss of life. None of the four attacks were “inherently indiscriminate.” 

Would we not say that the Sony attack accomplished its goal? Its goal may simply have been deterrence and announcement of capability. The paper misses an opportunity here to talk about how different discrimination is in the cyber domain. For example, it's very hard to make traditional "weapons" target only people with red shirts on and only on Tuesdays. But these sort of complex automated decisions are the heart of a modern toolkit, if for no other reason than for covertness and OPSEC.

128 What is technically necessary for a cyber weapon to be capable of being used discriminately? (A cyber weapon that can be used discriminately will be called a targetable cyber weapon.) Two conditions must be met. 
  • Condition 1: The cyber weapon must be capable of being directed against explicitly designated targets; that is, the cyber weapon must be targetable. 
  • Condition 2: The weapon’s direct effects on the intended targets must minimize the creation of negative effects on other systems that the attacker has not explicitly targeted.  

This could have been a good place to mention how different cyber is when it comes to predictability. When everything is probabilistic (as in the Quantum world), making clear distinctions as to what is "targeted" or even what is the "target" can be difficult in a way policy makers are not used to. For example, "people who visit the Playpen website" is a clear, but probabilistic, target. We don't even know how many there are ahead of time, or where or who they are. And that's just the tip of the iceberg when it comes to the complexity of this issue. 
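To make the predictability point concrete, here's a back-of-the-envelope sketch. When the "target" is a predicate over whoever shows up, the size of the affected set is a random variable, not a known quantity; the numbers below are invented purely for illustration.

```python
# Probabilistic targeting: a watering-hole style operation hits everyone who
# matches a predicate, so both in-scope and out-of-scope impact are estimates.
# All figures here are hypothetical.
def expected_impact(visits, p_in_scope):
    """Expected in-scope hits and out-of-scope (collateral) hits."""
    in_scope = visits * p_in_scope
    out_of_scope = visits * (1 - p_in_scope)
    return in_scope, out_of_scope

# Even with a 97% accurate predicate, 10,000 visits yield ~300 collateral hits,
# and the planner learns the true numbers only after the fact.
hits, collateral = expected_impact(visits=10_000, p_in_scope=0.97)
```

The point is not the arithmetic; it's that "was this discriminate?" can only be answered in expectation before the operation, which is exactly what traditional targeting doctrine does not contemplate.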

Iran apparently retaliated for Stuxnet by attacking the Saudi Aramco oil company. This cyber weapon, named Shamoon, erased the disks of infected computers.   

I obviously can't speak for Iranian MOIS/IRGC but should probably be talking about this instead: https://www.theguardian.com/world/2012/apr/23/iranian-oil-ministry-cyber-attack

407 might be provided with capabilities that enable it to directed to a specified set of IP   

Just a typo ("to be directed"). But while we're proofreading...

Possible cooperative measures can relate to locations in cyberspace—for example, 459 nations might publish the IP addresses associated with the information systems of protected entities, 460 45 and cyber weapons would never be aimed at those IP addresses. 461 Alternatively, systems may have machine-readable tags (such as a special file kept in common locations) that identify it as a protected system. Cyber weapons could be designed to check for such tags as part of their assessment of the environment.  

I am extremely pro watermarking - but we still need to separate CNE and CNA activities in any such scheme. A special file in special locations is something that offers particularly bad OPSEC...:)
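To show how little is actually being proposed here, the paper's "machine-readable tag" scheme amounts to something like the sketch below (the tag path is my invention; the paper names no format). Note both failure modes: any defender can plant the tag on arbitrary hosts, and any tool that checks for it hands defenders a high-fidelity behavioral signature to hunt for.

```python
import os

# Hypothetical tag location; the paper only says "a special file kept in
# common locations", so this path is illustrative, not a real convention.
PROTECTED_TAG = "etc/.protected-entity"

def is_protected(root="/"):
    """Return True if the host carries the 'protected entity' tag file.

    A tool honoring the scheme would check this before acting; a defender
    can trivially spoof it, and the check itself is detectable.
    """
    return os.path.exists(os.path.join(root, PROTECTED_TAG))
```

Which is to say: the scheme is a one-line file-existence check, spoofable by anyone with write access, and the OPSEC cost falls entirely on the attacker who complies.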

464 Conversely, another cooperative measure is to designate military entities as such in cyberspace. For example, lists of IP addresses of military facilities (more specifically, entities that would constitute valid military targets in the event of hostilities) could be published. Such a list would serve the same function that uniforms serve—the visual differentiation between a soldier (a valid military target during hostilities) and a non-soldier  

489 A scheme to label nonprotected entities also creates problems. For example, public 490 lists of the IP addresses of military computing facilities is an invitation to the outside 491 world to target those facilities. The legal need to differentiate unprotected 492 (military) and protected (civilian) arises only in the context of the laws of war, 493 which themselves come into force only during armed conflict. Thus, we confront the 494 absurdity of a situation in which prior to the outbreak of hostilities, lists of military 495 entities are kept secret, but upon initiation of hostilities, they are made public 

Honestly, does it not make more sense to simply invest in defensive measures for (or even air-gap) the systems you don't want hacked? Do we have to attempt to mirror every aspect of previous law? Or is this a subtle way to point out that we cannot do so?

The paper could also have pointed out the fundamental issue that IP addresses are not anything close to what a military planner is used to when it comes to a definitive military space. They are often translated and re-used. Spaces which are "enemy IP ranges" serve one person one day and someone else the next. IP addresses can of course serve multiple people at the same time... the issues are endless! Until we get to the point where we can admit "CYBER IS DIFFERENT" we are spinning in circles on these issues. Maybe this paper is a step in that direction?

The knowledge that a system is vulnerable may encourage more attempts at break-ins than might have existed otherwise. Such knowledge does not increase risks of proliferation through reuse, but may, however, increase risk of proliferation by demonstrating certain approaches are plausible. Indeed, the Iranian cyber effort that began after the Stuxnet attack is an example of this.

This is a weak connection. The Iranians had many reasons to invest in a Cyber Offense program. 

 561 Presenting an actual methodology for implementing an attack increases the chance of proliferation. Such a presentation may take the form of a theoretical approach,  but with sufficient detail to make clear the technique works; it may go one step further and provide code for a worked-out example.   
 
574 Presenting a methodology without running code may count as proliferation 575 depending on how quickly the idea can be made to work in practice. That is not 576 always easy to predict  

Very dangerous ideas in the last two paragraphs. The idea that presenting ideas in computer security, even offensive ideas, could be considered "proliferation" is a call towards regulation in a very, very bad way. This is what gets you the "technology" controls in Wassenaar, which would have the effect of killing off every security conference and fundamentally changing how security research is done. The notion that some ideas are "good" and some are "bad" is, weirdly, a theme unique to policy circles in US academia, which should know better. 

In addition, manufacture of cyber weapons is much cheaper and faster than typical kinetic weapons. Once a cyber weapon has been reverse engineered and its mechanism of deployment and use recovered, making more of the same is far easier than the manufacture of other types of weapons.  

We know from the historical record that this is not a real risk, and yet you continue to see it espoused as a significant danger. Has there been another Stuxnet, repurposed from the original code? Cyber tools are different in that they are essentially SYSTEMS, not just sets of code. Has having all of Hacking Team's source code on GitHub resulted in ANYTHING? Nobody can point to this weird repurposing theory causing any real issues. But you see it a lot in papers promoting regulation efforts. 

627 The reverse engineering of Stuxnet, the only sophisticated cyber weapon that we 628 have described, took much longer than that. But the reverse engineering was 629 accomplished by a handful of researchers in private-sector firms working 630 essentially in odd bits of time. Stuxnet changed the game. It was the first example of 631 a high-quality engineering team approach to developing a cyber weapon. 

That paragraph is not true in many ways. I'm not sure where it comes from, to be honest. Every penetration testing company did its own reverse engineering of Stuxnet and Flame etc. It's just not true that this is the first high quality intrusion tool...etc.

675 If disclosure of the penetration mechanisms were done entirely publicly, any potential victim could patch its systems to prevent the weapon from being used against it.  

Really? This whole section draws on a suspect hypothesis that is not backed up at all. In fact, in many cases we know that public disclosure does not change vulnerability rates. The Stuxnet case itself is a classic counter-example to the hinted-at policies in this section of the paper. Why the authors chose to drag vulnerability disclosure politics into the paper is a mystery. 

687 Contrary to a belief held in many quarters, cyber weapons as a class are not inherently indiscriminate. But designing a cyber weapons [SIC] so that it can be used in a targeted manner is technically demanding and requires a great deal of intelligence about the targets against which it will be directed. 

I feel like this last part of the conclusion is at odds with the paper in some ways. If using a "cyber weapon" is so technically demanding and requires such specific knowledge of the targets, why do we care so much about proliferation?   

But the core issue of the paper is mixing CNE and CNA efforts. Without that central issue being untangled, it's hard to make policy cases that are coherent. 

Tuesday, July 12, 2016

When is a Cyber Attack an Act of War?

Politics aside, one lesson we can draw from the ongoing debate over Hillary Clinton’s private email server is this - in the years ahead, questions of US national security will increasingly be tied to digital assets.

Just as the concept of a bank has evolved from “a physical place where you keep your money” to a software services provider that conducts financial transactions, so too are countries becoming increasingly defined by code, rather than physical, tangible assets. The United States and other countries are reaching a point where they have a far greater presence in cyberspace than they do on land, sea and air. The sovereignty, integrity, and viability of countries will increasingly depend on cybersecurity issues.

For many, this raises a key question, which members of Congress are now starting to press US military leaders to answer: at what point does a cyber attack constitute an act of war? And how should we respond?

The problem with this question is that it’s impossible to answer. The bottom line is that we can’t define a digital act of war with neat red lines, the way we can define a missile strike as an act of war. There are too many variables to account for in cyber activity, which would ultimately affect how the US government and military would interpret a cyber attack.

To illustrate the problem, consider this: when does an attack on an electric utility cross the line? Is it when a state-sponsored group turns off the power in Denver for a week? What if they only turned it off for one minute? How many lights do you have to turn off in order for it to be considered an act of war?

This seems academic, or far fetched, or perhaps simply hair-splitting, but it has come up before in real-world situations. When the Iranians were DDoSing our financial infrastructure, we had to address whether to respond. How big of a DDoS constitutes an "attack"? Did the DDoS really have the kind of effect on our financial community that would require a response, or was it simply uncomfortably expensive for private companies to ameliorate? For the executives at those financial companies and the national security team addressing the issue, these questions were anything but academic.

Strategic Uncertainty

If Iran had assassinated Adelson by turning off his pacemaker, instead of hacking his casino, would we have responded as a nation? One valuable answer is “Perhaps”. Strategic uncertainty can provide cover for inaction and action both. But right now it is the result of a muddled national cyber strategy, with no clear answers forthcoming.

What does it mean if we can't clearly define what an act of cyber war is in any technical way? This problem reverberates through the strategic thought space in other ways too.


What is “Critical Infrastructure”?

The generally accepted view on cyber war is that when hackers cause physical damage to a critical infrastructure facility, this crosses the threshold and could trigger a military response.

This is part of the Pentagon’s own understanding of cyber war - what it refers to as the “equivalency principle.” If a cyber attack is equivalent to a traditional military action, then it crosses the line. Or as one military official put it: “If you shut down our power grid, maybe we will put a missile down one of your smokestacks.”

But what exactly is “critical infrastructure?” We seem to view it as primarily hard assets tied to things like energy production and military readiness. But in reality critical infrastructure is far more than that - it’s everything that makes the economy run and the country function. Therefore, the banking system is also critical infrastructure; so too is the news media, US election system, Justice system, and of course the computer systems used to manage and run those systems.

We also need to think of the US Constitution and Bill of Rights as part of our country’s critical infrastructure. That might sound strange to some, but we’re now in an age when a foreign government can easily target US citizens and companies for saying things it doesn’t like. Case in point is North Korea’s 2014 hack of Sony Pictures and its threats to US theater chains over “The Interview,” a film it opposed. And earlier that same year, Iranian hackers caused widespread damage to the Las Vegas Sands Corp. because of its CEO’s criticism of the Iranian nuclear program.

Critical infrastructure is a far bigger category than it first appears. For this reason, we need to stop focusing on what was attacked and instead consider the effect of that attack.

Red Lines Won’t Work

Another problem in defining cyber war is the notion of “red lines.” This is a concept rooted in a past where air superiority was the dominant consideration and military planners literally drew red lines around objects on a map.

It is extremely difficult to draw a red line in cyberspace. Every government entity, military, company or organization has a broad and confusing presence in the digital world. For example, a hospital has its physical network hardware on site, as well as cloud-based storage in server vaults in other states or countries. The medical equipment it uses, which is increasingly connected to the web, also has back-end web servers and applications managed by outside companies in various other regions throughout the world. For US corporations, mapping cyber assets is even more complex, as you have vast networks and resources stretching over a wide range of countries and states. No US company is just a US company - even the smallest companies have supply chains, customers, and employees all over the world.

You can draw a red line around a hospital’s on-site network, but what if an attacker hits a third-party cloud solutions provider, which stores critical patient data within its systems, but of course, also processes military support information and financial trading information? All internet infrastructure is multi-use.

This feature of the cyber domain re-iterates that we are making a mistake if we focus our attention on what access was gained to what systems, rather than the effects of the attacks, which can vary widely even from the same networks.

What a Country Should Be Allowed to Hack

There’s real concern that a foreign intelligence service may have been able to breach the “homebrew” private email server that Hillary Clinton used when she was Secretary of State. Whether or not that did in fact happen, that kind of collection is well within the bounds of “acceptable” nation-state activity.

Countries have the “right” to hack each other to fulfill traditional espionage goals. However, the key question in evaluating these attacks is how the data is handled. If data is stolen or communications monitored, all for the purposes of internal consumption by the other country, that is a permissible act. However, if the hackers try to manipulate or destroy data, or they dump it publicly to manipulate certain outcomes (like a Presidential election), that is when cyber espionage exceeds those established boundaries that we (as the US) would like to live by.

The Problem with Deterrence
The real mistake in cyber war planning is to focus on “deterrence”, a notional trap introduced by our fascination with Nuclear parallels.  

While much thought has gone into developing “deterrence” strategies for cyber, the real key to long-term national viability in the space is developing resilience strategies on top of the infrastructure we need to guide our daily lives. We have, for example, asked our power grid to do something impossible: be able to withstand any attack, and in the event of failure, be repairable within hours. Instead, solar power, generators and short-term household batteries need to be a key component of any cyber-defense strategy, much as hard drive backups are a key component at a tactical level against a Saudi Aramco-like attack.

We also have to invest in a country-wide cloud-computing infrastructure that can host key elements of our democracy too important to lose or have manipulated by a crafty adversary. Imagine a way for local governments to leverage the Federal Government’s information security expertise - or even just for every Federal Agency to have the same level of security as the most secured ones.

And no strategy works when we don’t know what we would even consider worth a response. The current cloudiness as to whether a US company being blackmailed by a nation-state is even worth a US response, or whether manipulation of our electoral system counts as an offensive action, indicates we are not yet ready to enumerate the norms our country wants to live under in the cyber domain - something that should worry all of us.  





Friday, June 24, 2016

Can Google do Cyber Deterrence?

http://smallwarsjournal.com/jrnl/art/law-of-armed-conflict-attribution-and-the-challenges-of-deterring-cyber-attacks

I want to post a few of my issues with this paper. First of all, it is not a good sign when you start lumping all of CNO together when talking about cyber deterrence, or when a lot of your paper is quotes from various ex-government management types leading to a sort of policy telephone game. And when you listen to Fred Kaplan talk about cyber deterrence as a result of his book, (00:33 here) he says we're only beginning to ask the right questions.

I will disagree with a cogent example: Google.

Google practices strategic cyber deterrence against many nation states using all the tools explained in Joshua Tromp's paper. Once the CEO realized they had been had by the Chinese Government, who were themselves looking for State dissidents, he poured an insane amount of resources into the problem, and to this day Google operates a capability that outclasses most nation states when it comes to deterrence.

We can compare Google's access to information to a nation state's SIGINT arm, but it's obvious that they could, if they so desire, unmask the efforts of any country's intelligence services with a quick look at their massive database of human behavior and location.  Likewise, once the hacking was discovered Google pulled out of China, which puts economic and social pressure on the Chinese government. And they increased the cost for activity against them by massively improving their own internal defensive efforts, buying companies who had groundbreaking technology in the sector, and making sure to build out cooperation with US intelligence.

It's also easy to forget how Google is now warning users if they are being targeted by nation states via Phishing attacks or password guessing. This level of attention means that if you target Google and they catch you, you might lose the ability to target people THROUGH Google. How long before your Android phone warns you that you're being followed by state security in Beijing by tracking your phone and theirs?

So to sum up:

  • Google increased their CND investment 
  • They operated in concert with other state actors to increase social costs of Chinese cyber offensive operations
  • They maintain a strategic deterrence in their ability to unmask HUMINT efforts by the Chinese

Of course, now that the deterrence engine is in place, they can also operate it at some level against the US Government.


FireEye's recent graph is very interesting - although indicting people is strategically dangerous, it may also work.

Ok, so back to Joshua's paper. It is full of stuff like this:

It all SOUNDS legit, but you can't make policy or strategic decisions on this kind of "data".

Just to take one example from that paragraph, "The nations that are the most powerful are actually the most vulnerable to cyber-attacks". This is not really true. While yes, it is hard to affect Afghanistan's government via cyber, having a full-take of their cell phone network lets you control it as well as anything else could. And would you rather go up against Google or your local dentist when it comes to cyber war?

Basically, repeating all the "things people know" about the cyber domain and then trying to draw deterrence out of that grand picture does not provide a way of really looking at the problem. It may be that without clearance it is impossible to draw an accurate picture, using metrics, of how well deterrence works in the field, but even if it is possible, we would need a more focused analysis of the problem than is presented in the paper.

Vulnerabilities Resist Categorization

Policy is a lot about categories of things. Recently I was reading a paper which categorized exploits in a way that rang weirdly to me, since I spend all my time thinking about exploits.

The title of this paper could be "Regulating this stuff is going to be mindbogglingly hard" but click here to read the full thing:

Here's what I want to say about that: You cannot sort exploits into any distinct categories without oodles of work and a lot of hand-waving that makes it useless for regulation. I'm not saying this to pick on this particular paper or its categories, which derive from one of Mikko's more morose ruminations on the subject. It's a common problem and a real issue with designing intelligent policy in this space.


Let's talk briefly about this one vulnerability to explain how hard this can be. The Spooler exploit was two phased:

  1. You could use a bug in the remote procedure call (RPC) endpoint to write arbitrary files to arbitrary places on the disk as "Admin". This is useful, but is not remote code execution (and you cannot overwrite existing files).
  2. Writing MOF files into certain very specific places will allow you to execute code (this is not a bug or vulnerability, but a very rarely known feature). It's interesting to note that the original Metasploit exploit for this issue used the ATSVC technique for this part, as the authors didn't know about the MOF technique, whereas CANVAS and Stuxnet both used MOF files.

Also, the Spooler bug was not an "0day" in the sense people most often use the term. While unknown in the wider community (and to Microsoft), it was published in a Russian magazine years before.  

Imagine trying to ban 0day exploits that "allow for remote code execution". Would the Spooler vulnerability fall into that regulation? Perhaps only when combined with the knowledge of MOF files or the ATSVC technique? What about when you realize that the vulnerability was already in a Russian magazine? What about how the code it allows you to execute is VBScript, and not native code? And how it only affects Windows systems that share printers or have a specific configuration?
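You can see the problem if you try to write the regulation down. Here's a toy sketch (the attribute names are mine, not any real taxonomy) of the Spooler issue as a set of flags, with two equally defensible "ban" predicates that give opposite answers:

```python
# A toy illustration of why exploit categorization resists regulation.
# Attribute names are invented for this sketch; they are not a real taxonomy.
spooler = {
    "remote_arbitrary_file_write": True,   # phase 1: the RPC endpoint bug
    "code_exec_alone": False,              # a file write is not code execution
    "code_exec_with_mof_technique": True,  # phase 2: the MOF "feature" chains it
    "publicly_documented": True,           # published in a Russian magazine
    "payload_is_native_code": False,       # you get VBScript, not native code
}

def banned_as_0day_rce(vuln):
    # Strict reading: must be undisclosed AND be RCE on its own.
    return (not vuln["publicly_documented"]) and vuln["code_exec_alone"]

def banned_if_chains_count(vuln):
    # Loose reading: any component that can be chained into code execution.
    return vuln["code_exec_with_mof_technique"]
```

The two predicates disagree about the exact same bug, and which fields "count" is exactly the hand-waving that makes these categories useless as a regulatory instrument.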

There's a thousand different categorizations you could apply to exploits, is what I'm saying, and none of them are universally applicable or even technically correct in a majority of cases. Right now the policy world tries to paper over this with legalistic jargon, but the physics of the problem are not going to change to make it any easier, unfortunately.