Friday, June 24, 2016

Can Google do Cyber Deterrence?

http://smallwarsjournal.com/jrnl/art/law-of-armed-conflict-attribution-and-the-challenges-of-deterring-cyber-attacks

I want to post a few of my issues with this paper. First of all, it is not a good sign when you lump all of CNO together while talking about cyber deterrence, or when much of your paper is quotes from various ex-government management types, creating a sort of policy telephone game. And when you listen to Fred Kaplan talk about cyber deterrence as a result of his book (00:33 here), he says we're only beginning to ask the right questions.

I will disagree with a cogent example: Google.

Google practices strategic cyber deterrence against many nation states using all the tools explained in Joshua Tromp's paper. Once the CEO realized they had been had by the Chinese Government, which was itself hunting dissidents, he poured an insane amount of resources into the problem, and to this day Google operates a capability that outclasses most nation states when it comes to deterrence.

We can compare Google's access to information to a nation state's SIGINT arm, and it's obvious that they could, if they so desired, unmask the efforts of any country's intelligence services with a quick look at their massive database of human behavior and location. Likewise, once the hacking was discovered, Google pulled out of China, which put economic and social pressure on the Chinese government. And they increased the cost of activity against them by massively improving their own internal defensive efforts, buying companies with groundbreaking technology in the sector, and making sure to build out cooperation with US intelligence.

It's also easy to forget that Google now warns users if they are being targeted by nation states via phishing attacks or password guessing. This level of attention means that if you target Google and they catch you, you might lose the ability to target people THROUGH Google. How long before your Android phone warns you that you're being followed by state security in Beijing, by tracking your phone and theirs?

So to sum up:

  • Google increased their CND investment 
  • They operated in concert with other state actors to increase social costs of Chinese cyber offensive operations
  • They maintain a strategic deterrence in their ability to unmask HUMINT efforts by the Chinese

Of course, now that the deterrence engine is in place, they can also operate it at some level against the US Government.


FireEye's recent graph is very interesting - although indicting people is strategically dangerous, it may also work.

Ok, so back to Joshua's paper. It is full of stuff like this:

It all SOUNDS legit, but you can't make policy or strategic decisions on this kind of "data".

Just to take one example from that paragraph, "The nations that are the most powerful are actually the most vulnerable to cyber-attacks". This is not really true. While yes, it is hard to affect Afghanistan's government via cyber, having a full-take of their cell phone network lets you control it as well as anything else could. And would you rather go up against Google or your local dentist when it comes to cyber war?

Basically, the paper repeats all the "things people know" about the cyber domain, and then tries to draw deterrence out of that grand picture, which does not provide a way of really looking at the problem. It may be that without a clearance it is impossible to draw an accurate picture, using metrics, of how well deterrence works in the field; but even if it is possible, we would need a more focused analysis of the problem than is presented in the paper.

Vulnerabilities Resist Categorization

Policy is a lot about categories of things. Recently I was reading a paper which categorized exploits in a way that rang weirdly to me, since I spend all my time thinking about exploits.

The title of this paper could be "Regulating this stuff is going to be mindbogglingly hard" but click here to read the full thing:

Here's what I want to say about that: You cannot sort exploits into any distinct categories without oodles of work and a lot of hand-waving that makes it useless for regulation. I'm not saying this to pick on this particular paper or its categories, which derive from one of Mikko's more morose ruminations on the subject. It's a common problem and a real issue with designing intelligent policy in this space.


Let's talk briefly about one vulnerability to explain how hard this can be. The Spooler exploit had two phases:

  1. You could use a bug in the remote procedure call (RPC) endpoint to write arbitrary files to arbitrary places on the disk as "Admin". This is useful, but is not remote code execution (you cannot overwrite existing files, only create new ones).
  2. Writing MOF files into certain very specific places will allow you to execute code (this is not a bug or vulnerability, but a little-known feature). It's interesting to note that the original Metasploit exploit for this issue used AT files for this part of the exploit, as they didn't know about the MOF technique, whereas CANVAS and Stuxnet both used MOF files.

Also, the Spooler bug was not an "0day" in the sense that people most often use the term. While unknown to the wider community (and to Microsoft), it was published in a Russian magazine years before.

Imagine trying to ban 0day exploits that "allow for remote code execution". Would the Spooler vulnerability fall into that regulation? Perhaps only when combined with the knowledge of MOF files or the ATSVC technique? What about when you realize that the vulnerability was already in a Russian magazine? What about the fact that the code it allows you to execute is VBScript, and not native code? And that it only affects Windows systems that share printers or have a specific configuration?

What I'm saying is that there are a thousand different categorizations you could apply to exploits, and none of them are universally applicable or even technically correct in a majority of cases. Right now the policy world tries to ignore this with legalistic jargon, but the physics of the problem are not going to change to make it any easier, unfortunately.
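To make the problem concrete, here's a sketch of what a naive "ban 0day RCE" rule has to decide about the Spooler case. The field names and the classifier are invented for illustration; the point is that the same vulnerability gets three defensible answers depending on how you describe it.

```python
# A sketch of why "ban 0day exploits that allow remote code execution"
# is hard to operationalize. Field names are invented for illustration.

def is_regulated(vuln: dict) -> bool:
    """Naive regulatory test: is this an 0day that allows RCE?"""
    return vuln["zero_day"] and vuln["remote_code_execution"]

# The Spooler bug, described three different (defensible) ways:

spooler_alone = {
    "zero_day": False,               # published in a Russian magazine years earlier
    "remote_code_execution": False,  # only writes files; can't overwrite existing ones
}

spooler_plus_mof = {
    "zero_day": False,               # still previously published...
    "remote_code_execution": True,   # ...but file-write + the MOF feature = code exec
}

spooler_as_used = {
    "zero_day": True,                # unknown to Microsoft and the wider community
    "remote_code_execution": True,   # as chained in Stuxnet / CANVAS
}

for name, v in [("alone", spooler_alone),
                ("plus MOF", spooler_plus_mof),
                ("as used", spooler_as_used)]:
    print(f"Spooler ({name}): regulated = {is_regulated(v)}")
```

Same bug, three answers - and that's before you argue about whether a "feature" like MOF autocompilation counts as part of the exploit at all.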

Wednesday, June 22, 2016

Useful Fundamental Metrics for Cyber Power


Quoting from his article:
How do we define cyber power?  In other words, how do we measure who is stronger (or weaker) in the exercise of cyber force?  In this domain we lack any equivalent to counting tanks or airplanes.  What are the alternative measures?

If you're measuring cyber power, you can measure it in a number of different ways:
  • Exploitation
  • Implantation
  • Exfiltration and Analysis
  • Integration into other capabilities (HUMINT, for example)
  • Achieved Effect
From an offensive practitioner's perspective, it doesn't really matter which of these you analyze, but looking at implant technology is probably easiest. Exploitation is more of a statistical game than anything else, and relies on complex amortization and depreciation concepts we don't want as part of any kind of simplistic capability measurement. Measuring effect is often a matter of measuring the level of policy aggression of a particular service. Which brings us to the proper scope of any capability measurement (service level, not country level): the NSA can obviously be a "5" while the FBI is a "3". Measuring on a per-country basis occludes that distinction.
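The service-vs-country scoping point can be sketched in a few lines. All the numbers here are hypothetical, just to show how averaging up to a country-level number destroys the per-service distinction:

```python
# Sketch: score capability per *service*, not per country, on the five
# dimensions above (1-5 scale). All scores are hypothetical.

DIMENSIONS = ["exploitation", "implantation", "exfiltration",
              "integration", "achieved_effect"]

services = {
    "NSA": dict(zip(DIMENSIONS, [5, 5, 5, 5, 5])),  # the "5"
    "FBI": dict(zip(DIMENSIONS, [3, 3, 3, 3, 3])),  # the "3"
}

def service_score(svc: dict) -> float:
    return sum(svc.values()) / len(svc)

# Rolling services up to one country-level number occludes the spread:
us_country_score = sum(service_score(s) for s in services.values()) / len(services)
print(us_country_score)  # 4.0 - neither a "5" nor a "3"
```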

I want to refer to this chart from a few blog posts back, and build a simple plan for making a metric that solves this problem for you. There are probably a hundred different dimensions along which you could measure "implant capability" - I drew upon my experience designing, writing, and using implants to make this chart and simplify it to some base components.

As a funny note: designing a system to do anonymous deconfliction is useful both between intra-government services (FBI/NSA/.mil) and between yourself and your adversaries. For example, perhaps both China and the US are on an ISIS commander's laptop, and they want to coordinate without officially coordinating...
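One way such a scheme could work - this is purely a sketch, and the salt, identifiers, and protocol are all assumptions, not anything fielded - is for each party to submit only salted hashes of the machines it is on, so overlap can be detected without either side revealing its target list:

```python
# Sketch of anonymous deconfliction: each party submits salted hashes
# of the machine identifiers it has implants on. A match reveals only
# the overlap, not the rest of either party's target list.
# The shared salt and identifier format are assumptions for illustration.
import hashlib

SHARED_SALT = b"negotiated-out-of-band"

def blind(machine_id: str) -> str:
    return hashlib.sha256(SHARED_SALT + machine_id.encode()).hexdigest()

party_a = {blind(m) for m in ["laptop-7731", "server-0042"]}
party_b = {blind(m) for m in ["laptop-7731", "phone-9000"]}

overlap = party_a & party_b
print(len(overlap))  # 1 - both sides learn they share one box, nothing more
```

A real design would need something stronger (salted hashes are dictionary-attackable if the identifier space is small - private set intersection is the usual fix), but the shape of the idea is that simple.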

Comments from one of the more actively attacked companies after I posted this chart were: "Russia's SIGINT team going against us is at 4 or 5 on every category, according to our internal incident response." Because the chart is essentially exponential, not linear, that indicates an extreme peak of capability and expense. It's easy, though, to confuse exponential cost with exponential effect - which is a fancy way of saying that in the cyber domain you sometimes have to pay a lot of money for an incremental effect.

If you just take "Sourcing", "Networking", and "Persistence", you get a nice graph of the capabilities of different services from publicly available information, or from things you might have lying around if you are an incident response firm or counter-intelligence agency.
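The exponential-cost point is easy to demonstrate with toy numbers. Here each step up the 1-5 scale is treated as roughly 10x the expense - the base is a guess, and the category scores are made up, but the shape of the result is what matters:

```python
# Sketch: convert 1-5 ordinal scores on "Sourcing", "Networking", and
# "Persistence" into a rough cost index, treating each step up the
# scale as ~10x the expense. The base and scores are illustrative.

def cost_index(scores: dict, base: float = 10.0) -> float:
    return sum(base ** s for s in scores.values())

crimeware  = {"sourcing": 2, "networking": 2, "persistence": 1}
top_sigint = {"sourcing": 5, "networking": 5, "persistence": 4}

# A "4 or 5 on every category" actor is orders of magnitude more
# expensive to field than a midrange one - but not orders of magnitude
# more *effective*: exponential cost, incremental effect.
print(cost_index(top_sigint) / cost_index(crimeware))  # 1000.0
```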

Measuring capabilities based on public information is an important part of this method.

Tuesday, June 21, 2016

The Vulnerabilities Equities Process is not a Panacea

http://belfercenter.ksg.harvard.edu/files/vulnerability-disclosure-web-final3.pdf



I wanted to post this paper and discuss a few things in it and how Policy-writ-large tends to make mistakes in this area, simply by oversimplifying. (For reference, these posts un-simplify the issues a bit: Post 1, Post 2)

First, I want to include their bios, since they are relevant. Then we can discuss some of the recommendations of the paper, and the larger strategic issues, and some of the strange implications implicit in their paper.

TL;DR: Ari Schwartz was on the NSC and did policy work at the White House. 

TL;DR: Robert Knake was on the NSC and did policy work at the White House.



The Vulnerabilities Equities Process implies at its root that we have a coherent national strategy when it comes to cyber issues (which we arguably do not). And the fundamental weakness of it all is that it implies you have enough centralized knowledge to make gritty decisions on particular vulnerabilities and their possible future effects, which is a massive tasking all on its own.

But assuming all of that is palatable, let's look at just one of the recommendations and examine it for "unintended consequences". 

Page 15 has a number of bullet points of this type.

This is the kind of blanket recommendation the paper is fond of, but which has huge consequences if taken seriously. For example, the Government uses tax dollars to buy vulnerabilities for the express purpose of accomplishing strategic needs. If you restrict purchases of vulnerabilities or offensive information tools to only those for which you can obtain full rights, then you might as well in-house the whole effort.

Requiring full exclusive rights to the things you buy drives up their price. And in many cases, because those rights are not available, or the seller chooses not to make them available, it drives the seller to other markets. So the obvious corollary to this bullet point is that you then have to mandate strict controls on the sale or transfer of vulnerability information to parties other than the USG. This would be ten times as futile, intrusive, and expensive as the proposed Wassenaar cyber regulations, which have already been essentially killed in this country. All this in the misbegotten dream of "draining the endless swamp" of vulnerabilities in software.

The Government is not a catch-all vulnerability bounty program for all of the world's information technology, nor does it have the budget to pretend to be. But for some reason, certain policy makers think it should take on this role, as if budgetary constraints and mission timeliness were not things.

The budgetary issue is hardly the only problem with the VEP, which is emblematic of the struggle US Government policy has in figuring out whether every cyber vulnerability is a strategic, systemic risk. But before we rush to cement the VEP policies in place, we need to figure out the implications of the policy proposals on the table, because on their face, they are not wise or well thought through.



Friday, June 17, 2016

A way forward for Microsoft and Friends


You can watch Jan Neutze from Microsoft talk to NATO at CyCon 2016 and hear him repeat the talking points we have heard over and over from Microsoft. They are, in bullet point form, as follows:

  • Please don't trojan our software, because it hurts the trust relationship we have between ourselves and our customers. "Loss of trust is the single biggest concern we all have."
  • We deserve a voice (and a veto) on your offensive operations (and the norms around them)
  • We don't want to bear the cost of all the cyberwar every government wants to do, or all the regulatory regimes they can think up. 
  • "States should be discriminant in their actions and not create a mass event" (including loss of trust)
He goes on to say that of course he really likes the Vulnerability Equities Process - lots of people think this is the answer to limiting our offensive capabilities. But the Vulnerability Equities Process is not cost-free. In some cases, the resources people assume we have for developing additional capability in this space just don't exist at all. What they're really arguing for is an unrealistic unilateral curtailing of our offensive capability. But some of his requests are reasonable when you consider what we expect from other countries as well - in particular, how do you limit events such that they are "discriminant" and not "mass events"?

But let's get back to the first and most important issue: Microsoft's trust relationship with their customers. There's a reason they call the Microsoft security group "Trustworthy Computing". Bill Gates's memo was a life-changing event for the company, and remains a large influence to this day.

So let's get to the kinds of offensive interactions possible with Microsoft, like examining the kinds of chemical interactions possible with Oxygen:

  1. The Government could force Microsoft (or ask them nicely) to backdoor their software in a way that looks like a normal vulnerability
  2. The Government could force Microsoft (or ask them nicely) to push a particular trojaned version of software to a particular customer 
  3. The Government can do research on their own or buy vulnerabilities and use those against Microsoft customers in the wild
  4. The Government can do supply chain attacks, as machines go to Microsoft customers
  5. The Government can introduce vulnerabilities in cryptographic specs that Microsoft then implements
Of all those things, the one Microsoft should be complaining about least is number 3! Yet you hear constant, unrealistic whining.

A simple requirement that would make a great norm would be that any introduced backdoors be "Nobody but US". With that simple rule, the risks of a "mass event" are almost gone, and Microsoft can be more comfortable that offensive use is "discriminant" - which is not an unreasonable request.




Monday, June 13, 2016

What Playpen Hath Wrought

Before looking at the Playpen miasma that resulted from the FBI's use of a "NIT" (aka a Remote Access Trojan) to unmask users of a website with illicit material, it's good to look at the larger picture of how an offensive capability usually evolves. Few people in the policy space have looked at the technical capabilities of many actors to see these patterns, which I'll go over now. When trying to write laws and regulations that respect our legal traditions but account for the realities of hacking technology as used by law enforcement, it's important to know where these capabilities are heading, instead of writing policy for where they are now.


Law Enforcement, like all remote access organizations, follows a predictable tree of technology paths when developing their capabilities. If you look at the Russian Sofacy group or any other signals intelligence group, you will see the same basic path from left to right on the below graph of features.


Just to put one of them into context: why would you start with symmetric cryptography, or none at all, as the graphic indicates and as the FBI is doing, according to reporting? The answer is reliability. When operating in the field, reliability is an extremely elusive property, and doing complex things such as mutual authentication and encryption in a reliable way is prohibitively expensive for young groups just starting down the path. This is as true for the Israeli, Russian, or Chinese teams as it is for the FBI.
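To see why, consider how little can go wrong with a fixed-key symmetric scheme. This sketch is purely illustrative - a repeating-key XOR standing in for "simple symmetric crypto", not anything from a real tool:

```python
# Sketch: why young teams start with simple symmetric crypto. A fixed
# shared key is a handful of lines with almost nothing that can fail
# in the field. Repeating-key XOR here is a stand-in for illustration,
# not a real implant's scheme.

def xor_obfuscate(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

KEY = b"fixed-shared-key"
msg = b"beacon: host=target-77 status=alive"

# Symmetric round-trips deterministically: no handshake to fail.
assert xor_obfuscate(xor_obfuscate(msg, KEY), KEY) == msg

# Mutual authentication, by contrast, drags in certificates, key
# exchange, clock skew, and revocation - each one a new way for an
# implant to die silently in the field.
```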


Some of these things you cannot rush with massive injections of money. This is why it is hard to bootstrap a unified Cyber Command or Law Enforcement capability quickly. Often you do not even know what capability you need to build (a massive testing framework that includes real-iron machines, not just VMware ESX! A warehouse-sized collection of old Unix hardware! A global-sized web crawler!) - these things are not obvious until after you have failed in the field and learned from your failures, which takes time. Operating at scale is hard to test for unless you are truly operating at scale. We are not investing billions of dollars in "Cyber Ranges" for fun.



Going right on this chart adds exponential cost and allows for meeting higher operational tempo and OPSEC requirements. If someone is going to die if you screw up, then you want to be as far right as possible - but you probably won't have the Infinity Budget to get where you want to be on this chart, any more than you can afford the San Fran apartment with enough room to hold your three kids and their stuff.


A follow-on article will suggest some preemptive ways to regulate Law Enforcement use of this technology that jibe with how it is obviously going to evolve. This is important because the option of "let the FBI do whatever it wants" is just as bad as "never use hacking tools to solve crimes".