Friday, September 28, 2018

Forecasting vs Policy Work

No castle in Game of Thrones is complete without an extremely accurate map room! Apparently satellite imagery was available to all at a good price point.

Like many people in the business I'm a fan of the work of "StratFor", which is an ex-spook shop that does what they call "Strategic Forecasting" of geopolitical change. If you read their work carefully, a large amount of their methodology is an attempt to avoid a bias towards assuming that national or political leaders matter. 

Their play is this: if you just assume that every country has a set of resources and goals, and that it will act in its best interests regardless of who gets voted President, then over a long enough term you have a much better chance of making accurate predictions.

It's an attempt to discover and analyze the emergent behavior inherent in the system, as opposed to getting caught up doing game theory and Monte Carlo simulations until the end of time. Using this mindset produces vastly different results from most predictive methods, and the cyber tilt on the playing field is notable. Early StratFor predictions used fundamentals such as aging populations or shrinking workforces in various countries to argue that those countries would need to vastly increase their unskilled labor pools by importing workers; modern predictions, of course, treat that as a gap automation will fill.

But you can still look at the fundamentals: what resources do countries have, what are their geopolitical strengths and weaknesses, and how will they maintain their position using those resources? Geopolitical positioning has been altered by the Internet, of course, as everything has. And a large internet company is its own kind of resource.

This is why when a paper comes out saying that Germany will have a strong VEP leaning towards disclosure any decent forecaster is going to look at that as an oddity. We are now, and have been for a long time, in a great-powers competition meta. Germany needs to ramp up as soon as possible on both its defensive and offensive capabilities. The real question is how close it gets to the 5EYES in order to do so. You can make these predictions without looking at all at who's in charge, or what the politics are.

The one hole, of course, that seems obvious in retrospect, is that non-state actors are vastly more important than any Westeros map can capture. Everyone asks about the Cyber-9/11 and then goes on to talk about Russia and China as if it were a Taliban plot that hit the WTC. In other words, we may be looking in the wrong direction entirely.

Tuesday, September 18, 2018

Equities issues are collectives

One of the great differences between people who've dealt with exploits their whole lives and people in the national security policy space who are just starting with exploits is the concept of an exploit being a singular thing. If you've tried to hack into a lot of networks, you generally view your capabilities as a probabilistic function. Making a one-by-one decision on how releasing any particular vulnerability to the vendor would affect your future SIGINT is insane, because equities are a "collective" noun.
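To make the "collective" point concrete, here is a minimal sketch of capability as a probabilistic function. The numbers are entirely hypothetical (twenty bugs, each with an assumed 30% chance of working against a given target, treated as independent); the point is only that disclosing any single bug barely moves the aggregate.

```python
import math

def capability(bug_success_probs):
    """Probability that at least one exploit in the collection works
    against a given target (independence assumed for this sketch)."""
    p_all_fail = math.prod(1 - p for p in bug_success_probs)
    return 1 - p_all_fail

# Hypothetical stockpile: twenty bugs, each with a 30% chance of
# working against any particular target network.
stockpile = [0.30] * 20

before = capability(stockpile)
after = capability(stockpile[:-1])  # disclose one bug to the vendor

print(f"capability with 20 bugs: {before:.4f}")
print(f"capability with 19 bugs: {after:.4f}")
```

Under these toy assumptions the aggregate capability stays above 99.8% either way, which is why reasoning bug-by-bug about SIGINT impact misses how practitioners actually think about their toolchains.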

LINK (This equities issue argument made here about the Trump admin declassifying FBI texts is familiar to those of us who follow the VEP)

As you can see above the "presumption of public disclosure" line feels almost stolen directly out of one of Stanford or Belfer's VEP papers.

Monday, September 10, 2018

Why Overwatch?

So we've done a number of Overwatch-related posts on this blog, and I wanted to talk about the method behind the madness. First, consider what you typically see when you read cyber policy papers: simple game theory inspired by the arguments around nuclear deterrence.

The problem with this kind of work is that no matter how many variables people add to these models, they don't capture the nature of either cyber offense or cyber defense in a way that can start to predict real world behavior.

Practitioners have other frameworks and models (c.f. Matt Monte's book), and the one I've chosen is Overwatch for the following reasons:

  • Overwatch is extremely popular in the hacking community and almost universally well understood, even at the highest levels (more so than other sports, such as Football or Basketball). It's possible this is because Overwatch's themes and story resonate strongly in this day and age, for reasons beyond this blog.
  • As an E-sport, tactical development in Overwatch is directly measured and both teams are on identical ground (no amount of steroids can overcome a bad strategy)
  • The diverse character set and abilities explore nicely the entire space of possibilities and translate well to the cyber war domain
  • Overwatch analysis has a rich, coherent and well understood terminology set (Shotcallers, "Sustain", win-condition, Deathballs, meta changes, team-comp, wombo-combos, etc.). 

This keynote explains our model for adversarial action in the cyber domain using Overwatch analogies.

Immunity is not the only team to use this kind of language to develop an analysis framework for extremely complex systems. An extremely popular series of biology videos on YouTube right now is Tier Zoo, whose creator discusses various animals as if they were playable Overwatch character classes. The key thing here: this is a much more illuminating way to classify survival strategies than you might have imagined. And of course, it demonstrates that the model works at the most complex level available (aka, the real world).

Treating cyber security offense and defense as discrete automata may still provide some value for policy decision making, but it is more likely that an Overwatch-based model will be able to provide predictive value - much as simple expert systems have now been replaced for complex decision making by deep learning algorithms.

Friday, September 7, 2018

Paper Review: The Security Risks of Government Hacking by Riana Pfefferkorn

Ok, so after I did a review of the German VEP paper, Riana pointed me at her paper. Academics have thick skins as a rule, so I went through and live-tweeted a response, but on reflection she is owed a deeper review.

First of all, I am often chided for lumping all policy people together, or being overly derogatory towards the efforts of policy people in this area who are not also subject matter experts. But it has not gone unnoticed that there are fundamental problems with study in the area, most recently this article on CFR and this Twitter thread.

When you talk to major funders in this area they complain that "Every paper I read is both the same, and entirely irrelevant". And the reasons why get dissected pretty well by that CFR post as quoted below:
There are three categories of factors that make scholarly cyber conflict research a significantly more challenging task than its nuclear era counterparts: (1) the characteristics of the threat space, (2) data availability constraints, and (3) the state of political science as a discipline.
Historically, and luckily, when non-subject-matter experts attempt to draw conclusions in this field they make glaring and weird mistakes about the history of the subject. Most often this is in an attempt to back up the trope that cyber tools are extremely likely to get caught, and that once caught they are used against everyone else. (You can see another example of someone without any technical experience doing the same kind of thing here.)

Below (from page 9 of the paper) Riana makes some very odd claims:
In another example, after nation-state actors widely believed to be the United States and Israel unleashed the so-called Stuxnet malware to undermine Iran’s nuclear program, new malware which was in part identical to the Stuxnet code appeared on the internet.25 Researchers also discovered additional types of malware that used Stuxnet’s USB port infection technique to spread to computers.26
The reality is of course more complex, but it worries me that when reading the released reports on Stuxnet, Duqu, and Gauss, she did not appear to understand the sweep of how things fit together. The technical concepts of how code works cannot be avoided when making policy claims of this nature, and when this sort of thing is broken from the very beginning, it tends to invalidate the paper's other arguments as well.

Likewise, when talking about bug rediscovery, it's impossible to discuss these things by giving equal weight to two papers with completely different results. It's like CNN trying to give equal weight to a climate change denier and an atmospheric scientist.

But that's what we see in Riana's paper.
Trey Herr, found rediscovery rates of 14% to 17% for vulnerabilities in browser software and 22% for bugs in the Android mobile operating system.5 After their conclusions were criticized as inaccurate, Schneier and Herr updated their paper, revising their rediscovery rates slightly upward and concluding that “rediscovery takes place more often than previously thought.”6 On the other hand, the RAND Corporation issued a report analyzing a different set of data and put the rediscovery rate at only about 5% per year.7 
I'm fairly sure they revised their rates downwards, not upwards? It doesn't matter, though. It's impossible to draw the kinds of conclusions you would want from any of these numbers, as she goes on to state a few paragraphs later:
Ultimately, experts do not precisely know the rediscovery rate for any specific vulnerability or class of vulnerabilities, and aren’t going to know anytime soon. 
Then there are paragraphs which try to push a political agenda, but don't have a grasp on the history of how vulnerabilities have been handled. None of the claims here can be substantiated, and many of them are pure fantasy.
Today we have companies that are in the business of developing and selling 0-days, with no intention of revealing the flaw to the vendor so that it may be fixed. 0-days are generally used by state actors, may not be very common, and are not the biggest security problem out there. The existence of a market for 0-days may incentivize the discovery of more vulnerabilities. Some think that could lead to better security overall, so long as the government buying the 0-day ultimately discloses it to the vendor to be fixed. But that assumes 0-days are relatively rare; if they are plentiful, then an active 0-day market could be harmful.
The market for bugs has always been a smaller part of the larger community of people who find, use, and trade bugs, which existed long before there were governments in the game. The commercial consulting market dwarfs the government market, and is largely in the same exact business.

And governments are not a free bug-bounty program - they don't buy bugs to then disclose them to a vendor. That would be an exceedingly poor use of tax money.

Some parts of the paper, of course, highlight common-sense areas where there are wide policy gaps.
Judges issue hacking warrants ex parte based on the assurances of the government, but those representations may not capture the hacking campaign’s impact on people for whom there is no probable cause to believe they have committed any crime. As its use of hacking techniques continues and expands, it will be important for the government to narrowly tailor hacking campaigns to minimize impact on innocent users and to explain the expected impact accurately to the authorizing judge. 
Most substantially, I thought the paper represented a cautionary note against using government hacking as a policy bulwark against government-mandated backdoors, which are, on their face, a simpler, less chaotic policy.

The problem, however, is that without a deep understanding of the technical details, this kind of paper can only misrepresent and over-abstract the risks on both sides. In that sense, it does more to muddy the issue than clarify it, even as its conclusion claims to want to further the discussion.

Wednesday, September 5, 2018

The German VEP

Most policy work is still done in the reverse of logic, with gestaltian leaps of faith covered in heaping gobs of wishful thinking as to cause and effect. Vulnerability Equities Process papers are especially susceptible to this, because the Mozilla and Microsoft lobbyist teams are punching well above their weight and have turned the VEP into a "moral human rights" issue in the EU, trying to get codified in law what they could not get in the United States, because what's good for Mozilla is not necessarily good national policy.

This is especially true for Germany! Germany is a huge industrial state, at great risk of information operations and more direct cyber attack, that manufactures factories, yet it is notably behind on its own offensive capabilities. Sven Herpig's proposed VEP policy (for Germany, and the EU in general) would be like trying to catch up in the America's Cup yacht race without pulling up your anchor.

However, that is gestaltian thinking at work! And I have been trying to propose a more rigorous process for looking at policy papers. And it is this:

  • Convert proposed language into flowchart
  • Use boolean algebra to simplify the flowchart (see below for the 4d4 Wassenaar flowchart, rearranged to demonstrate the real structure)
    • Look at whether any parts of the flowchart imply other parts (for example, all places that STORE data in a database also obviously INDEX it, etc.) Sometimes what looks like a large technical differentiation chart can be reduced by inference.
  • Make a spreadsheet of scenarios for regression testing all proposed changes to the text
  • Use Git or other version tracking to attach rationales and other notes to proposed changes in language
  • Look at the total return on investment of the proposal, given the regression testing results
  • When people adjust the language, don't let them assume an effect, but go through the entire regression test again with the new language. THIS FINDS WEIRD THINGS.
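The regression-testing step above can be sketched in a few lines. This is a hypothetical illustration only: the scenario fields and the draft rules below are invented for the example, not taken from any actual VEP text. Each draft of the proposed language becomes a boolean predicate over scenario attributes, and a full re-run diffs two drafts across the whole scenario table.

```python
# Invented scenario table: each row is one situation the proposed
# language must decide (True = disclose to vendor, False = retain).
SCENARIOS = [
    {"name": "browser 0day, active op", "in_use": True,  "third_party": False},
    {"name": "purchased bug, NDA",      "in_use": False, "third_party": True},
    {"name": "shelved kernel bug",      "in_use": False, "third_party": False},
]

def draft_a(s):
    # Hypothetical rule: "Disclose unless the bug is in active operational use."
    return not s["in_use"]

def draft_b(s):
    # Hypothetical revision: also exempt third-party-acquired bugs.
    return not s["in_use"] and not s["third_party"]

def regression_diff(old, new, scenarios):
    """Return the names of scenarios whose outcome changes between drafts."""
    return [s["name"] for s in scenarios if old(s) != new(s)]

print(regression_diff(draft_a, draft_b, SCENARIOS))
# The purchased-bug scenario flips from "disclose" to "retain": exactly
# the kind of weird thing a full re-run is meant to surface.
```

Running the whole table through every revision, rather than letting people assume the effect of a wording change, is the point of the last bullet above.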

This diagram is much more useful for running sample scenarios through than the language in WA itself, imho.

So when looking at VEP proposals in general, it's possible to say uncontroversially that this is a new area for governments, and that law in particular has been "not great" at dealing with rapidly changing environments in the cyber domain. It is therefore always fascinating when, without doing this kind of work, and without testing their VEP for a decade or so, people want to enshrine a particular VEP in their law. (Sven's proposal sunsets the law after five years, but most laws in this space get automatically renewed, so that is of little comfort in terms of malleability.) If your update process is crazy expensive and difficult, it makes more sense to test everything for many years before shipping it, right?

Regardless, let's look at Sven's proposal in detail, as promised.

The first thing is he pre-frames the argument with his title:
Weighing Temporary Retention versus Immediate Disclosure of 0-Day Vulnerabilities

But those are not the only options. Obviously indefinite retention is an option, as are many other things that happen in practice but make for poor policy papers: having the government distribute patches itself, special-purpose workarounds, or any number of other creative things that enable NOBUS.

So one sample scenario spreadsheet to help make decisions about the proposed German VEP is here. It is not comprehensive, but it's similar to the ones I've made for export control language proposals.

There are a lot of negative results in the spreadsheet, and if someone who is pro-VEP wants to take a crack at it, I'm happy to send over editing rights to the document. Obviously, though, my opinion is that the results are negative because, on the whole, this policy proposal is a bad idea for Germany and the EU.

That said, I think "not addressing the negative repercussions" is the strategy of choice for the lobbyist teams who want to push this sort of thing forwards. But there's a reason particular gaps and flexibilities were built into the US VEP, and it's not that they didn't think of all the various issues, or just really like keeping 0day in a giant hoard like a dragon sitting on a gold pile in a cave somewhere underneath Ft. Meade.

Additional CinemaSIN:
Why do policy people just do "Absent data, we can assume X" as a thing?

This is an especially terrible paragraph in the German VEP paper.

And for bonus negative points:

I think it's worth pointing out that making decisions on bad data is in no way better than making decisions on no data. In fact, it's probably worse, since bad data gives you unwarranted confidence in the decisions you make with it.

And, for the record, I engage in Cyber Policy work because we've been wandering in the desert for what seems like 40 years, and it's time to head in a direction of some sort.

Saturday, September 1, 2018

Joe Nye's latest Norms Piece

The US cyber world appears in disarray. Between the Chinese and the Russians getting super aggressive, our constant bleating about cyber norms sounds like the distress signal a lone sheep sends out when the rest of the flock has been lost to wolves.  The latest example of this is Joe Nye's paper this week on the Normative Restraints on Cyber Conflict.

The waffling starts in the abstract with a Trumpian "Many observers have called for laws and norms to manage the growing cyber threat." Norms are deeply about established practice, and the one thing you can count on any norms paper (and there are a LOT of them) to do is carefully avoid any deep discussion of what the established practices really are.

I'm not going to pull punches: This Joseph Nye paper is a boatload of wishful thinking. It's an exemplary example of extraneous exposition and I read it carefully so you don't have to. But where I know the outside world hears weakness and withdrawal, I have different ears and hear distant Wakandan drums.

You wouldn't know it from our policy papers, but this is a period of reconstitution. While change and re-balance are no doubt part and parcel of the cyber landscape, I think the world will be surprised at what uncloaks when this interlude is over.