Tuesday, November 13, 2018

The true meaning of the Paris Call for Trust in Cyberspace

I often find it hard to explain anything in the cyber policy realm without pointing out how weird an idea "copyright" is. The easiest way to read Cat's article in the Washington Post is that the PR minions of most big companies want to make it seem like some sort of similar global controls over cyber vulnerabilities and their use are a natural thing, or at least as natural as copyright. In some sense it's a coalition of the kinda-willing, but that's all the PR people need, since this argument is getting played out largely in newspapers.

But just to take one bullet point from the Paris text:
  • Develop ways to prevent the proliferation of malicious ICT tools and practices intended to cause harm;

"What... would that mean?" you have to ask yourself.

You can paraphrase what software (and other) companies want, which is to find a way to ameliorate what the industry calls "technical debt" by changing the global environment. If governments assumed the burden of preventing hacking, that would allow companies to take greater risks in the cyber realm. I liken it to the credit card companies making it law enforcement's problem that they built an entire industry on the idea of everyone having a secret number small enough to memorize that you would give to everyone you wanted to pay money to.

From the WP article:
This could make way for other players on the global stage. France and the United Kingdom, Jordan said, are now emerging as leaders in the push to develop international cybersecurity norms. But the absence of the United States also reflects the Trump administration’s aversion to signing on to global pacts, instead favoring a transactional approach to issues, Singer said.

It's not so much "transactional" as it is "practical and workable", because to have a real agreement in cyber you need more trust than is typical of most arrangements. This is driven by the much reduced visibility into capabilities that is part and parcel of the domain, for which frankly I could probably find a supporting quote in Singer's new book :).

Aside from really asking yourself what it would MEAN IN REAL PRACTICAL TERMS for humanitarian law to apply to the cyber domain, you also have to ask yourself if all the parties in any particular group would AGREE on those meanings.

And then, as a follow up, ask yourself what the norms are that the various countries really live by, as a completely non-aspirational practicality, and especially the UK and France.



Wednesday, October 24, 2018

Book Review: LikeWar (Peter W. Singer, Emerson T. Brooking)

TL;DR


Buy it here!

Summary


There are some great stories in this book, from Mike Flynn's real role pre-Trump Admin to a hundred other people's stories as they manipulated social media, or tried to prevent its manipulation, in order to have real-world effects. The book draws a compelling narrative. It's well written and it holds your interest.

That said, it feels whitewashed throughout. There's something almost ROMANTIC about the people interviewed through much of it. But the particular take the authors have on the problem illustrates an interesting schism between the technical community and the academic community. For example, in the technical community, the minute you say something like this, people give you horrible looks, as if you were giving a physics lecture and somehow wanted to tie your science to a famous medieval alchemist:

Highlight (yellow) - Location 307

Carl von Clausewitz was born a couple of centuries before the internet,
but he would have implicitly understood almost everything it is doing to
conflict today.
What's next? OODA Loops?!? Sheesh.

In a way though, it's good that the book started this way, as it's almost a flare to say exactly what perspective the book is coming from.

Two other early pieces of the book also stuck out:
Highlight (yellow) - Location 918

For it has also become a colossal information battlefield, one
that has obliterated centuries’ worth of conventional wisdom
about what is secret and what is known.
And:
Highlight (yellow) - Location 3627

And in this sort of war, Western democracies find themselves
at a distinct disadvantage. Shaped by the Enlightenment,
they seek to be logical and consistent. Built upon notions of transparency,
In other words: this book has an extreme and subjective view of government and industry, and an American perspective. Its goal is often less to explain LikeWar than to decry its effects on US geopolitical dominance. We have a million cleared individuals in the US. Are we really built on notions of transparency? This would have been worth examining, but it does not fit the tenor of the authors' work here.

The book does bring humor to its subject though, and many of the stories within are fleshed out beyond what even someone who lived through them would remember, such as a detailed view of AOL's early attempts to censor the Internet:

Highlight (yellow) - Location 4203

AOL recognized two truths that every web company would
eventually confront. The first was that the internet was a teeming
hive of scum and villainy.

Missing in Action

That said, anyone who lived through any of the pieces of this book will find lots missing. Unnoticed is the outsized role of actual hackers in the stories that fill this book. It's not a coincidence where w00w00 or Mendax ended up, although it goes unmentioned. And the role of porn and alternative websites is barely touched upon. How the credit card companies have controlled FetLife would be right in line with what this book should cover, yet I doubt the authors have heard of FL (or, for that matter, could name the members of w00w00). Nor is Imgur mentioned. It also goes unrecognized that the same social network the intelligence community uses to publish their policies (Tumblr) is 90% used for browsing pornography.

Clay Shirky, the first real researcher into this topic, who gets one mention in the book (iirc), pointed out that whenever you create a social network of any kind, it becomes a dating site. This is one of those axioms that can produce predictive effects on the subject matter at hand. Sociology itself has been revolutionized by the advent of big data from Big Dating. The very shape of human society has changed, as the spread of STDs has pointed out. And the shape of society changes War, so this book could have illustrated that.

At the most basic level, examining social networks involves looking for network effects - the same thing that drives most dating sites to create fake profiles and bots so they can convince people to pay for their service. These are primal features of the big networks - how to get big and stay big. As Facebook loses relevance, Instagram gains it, and as Instagram loses it... none of this sweep made it into the book. Some topics were too dirty, perhaps?

Conclusion


Like many books coming out, this book is a reflexive reaction to the 2016 election and nowhere is that more evident than in the conclusion.

Some statements are impossible to justify:
Highlight (yellow) - Location 4488

Like them or hate them, the majority of today’s most
prominent social media companies and voices will
continue to play a crucial role in public life for years to come.
Other statements are bizarre calls for a global or at least American censorship regime:

Highlight (yellow) - Location 4577

In a democracy, you have a right to your opinion, but no
right to be celebrated for an ugly, hateful opinion, especially
if you’ve spread lie after lie.
The following paragraph is not really true, but also telling:
Highlight (yellow) - Location 4621

Of the major social media companies, Reddit is the only one that preserved the known fake Russian accounts for public examination. By wiping clean this crucial evidence, the firms are doing the digital equivalent of bringing a vacuum cleaner to the scene of a crime. They are not just preventing

The authors, like many people, see the big social networks as criminal conspirators, responsible for a host of social ills. But for generations we have "Taught the Controversy" when it comes to evolution in our schools, so it's hard to be confused about why the population finds it hard to verify facts.

Instead of trying to adjust our government and society to technological change, we've tried to stymie it. Why romanticize the past, which was governed by Network News, the least trustworthy arbiters of Truth possible? We've moved beyond the TV age into the Internet age, and this book is a mournful paean to the old gods, rightfully toppled by disintermediation.

Still worth a read though.

Tuesday, October 2, 2018

"Own your data"

In today's edition of "trying to figure out what things in the cyber policy world really mean" I want to highlight this extremely insightful thread on "Owning your data".


Obviously you're never going to get AccessNow and FS-ISAC or any other group to agree on what that means. But sometimes it's worth noting that a particular term one of the policy groups is pushing doesn't really mean anything at all, or (as in the case of "surveillance software") encompasses a lot more than they want you to think it does.

Friday, September 28, 2018

Forecasting vs Policy Work

No castle in Game of Thrones is complete without an extremely accurate map room! Apparently satellite imagery was available to all at a good price point.


Like many people in the business I'm a fan of the work of "StratFor", which is an ex-spook shop that does what they call "Strategic Forecasting" of geopolitical change. If you read their work carefully, a large amount of their methodology is an attempt to avoid a bias towards assuming that national or political leaders matter. 

Their play is this: if you just assume that every country has a set of resources and goals, and that it will act in its best interests regardless of who gets voted President, then over a long enough term you have a much better chance of making accurate predictions.

It's an attempt to discover and analyze the emergent behavior inherent in the system, as opposed to getting caught up doing game theory and Monte Carlo simulations until the end of time. Using this mindset produces vastly different results from most predictive methods, and the cyber tilt on the playing field is notable. Early StratFor predictions used fundamentals such as aging populations or shrinking workforces in various countries, and indicated they would need to vastly increase unskilled labor pools by importing workers; modern predictions, of course, look at this as a gap automation will fill.

But you can still look at the fundamentals - what resources do countries have, what are their geopolitical strengths and weaknesses and how will they be able to maintain their position using their resources. Geopolitical positioning has been altered by the Internet, of course, as everything has. And a large internet company is its own kind of resource. 

This is why when a paper comes out saying that Germany will have a strong VEP leaning towards disclosure any decent forecaster is going to look at that as an oddity. We are now, and have been for a long time, in a great-powers competition meta. Germany needs to ramp up as soon as possible on both its defensive and offensive capabilities. The real question is how close it gets to the 5EYES in order to do so. You can make these predictions without looking at all at who's in charge, or what the politics are.

The one hole, of course, that seems obvious in retrospect, is that non-state actors are vastly more important than any Westeros map can capture. Everyone asks about the Cyber-9/11 and then goes on to talk about Russia and China, as if it had been a Taliban plot that hit the WTC. In other words, we may be looking in the wrong direction entirely.


Tuesday, September 18, 2018

Equities issues are collectives

One of the great differences between people who've dealt with exploits their whole lives and people in the national security policy space just starting with exploits is the concept of an exploit being a singular thing. If you've tried to hack into a lot of networks, you generally view your capabilities as a probabilistic function. The concept of making a one-by-one decision on how releasing any particular vulnerability to the vendor would affect your future SIGINT is an insane one, since the equities issue is a "collective" noun.
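To make that concrete, here is a minimal sketch of capability as a probabilistic function of a whole portfolio. The stage names and probabilities are entirely made up for illustration; the point is that the value of any single bug only exists relative to everything else you hold:

```python
# Minimal sketch, with made-up numbers: capability as a probability over a
# whole portfolio of exploits, grouped by the operational stage they serve.

stages = {
    "initial_access": [0.6, 0.4],        # two exploits cover this stage
    "privilege_escalation": [0.7],
    "persistence": [0.5, 0.5, 0.3],
}

def stage_coverage(probs):
    """P(at least one exploit for this stage still works)."""
    p_all_fail = 1.0
    for p in probs:
        p_all_fail *= (1.0 - p)
    return 1.0 - p_all_fail

def capability(portfolio):
    """P(a full chain can be assembled across every stage)."""
    total = 1.0
    for probs in portfolio.values():
        total *= stage_coverage(probs)
    return total

print(f"Collective capability: {capability(stages):.2f}")

# A one-by-one equities decision prices a single bug in isolation, but its
# real value is the *change* in the collective number when it disappears:
reduced = dict(stages)
reduced["persistence"] = stages["persistence"][1:]  # disclose one bug
print(f"After disclosing one persistence bug: {capability(reduced):.2f}")
```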

LINK (This equities-issue argument, made here about the Trump admin declassifying FBI texts, is familiar to those of us who follow the VEP.)

As you can see above, the "presumption of public disclosure" line feels almost stolen directly from one of Stanford's or Belfer's VEP papers.

Monday, September 10, 2018

Why Overwatch?



So we've done a number of Overwatch-related posts on this blog, and I wanted to talk about the method behind the madness. First of all, consider what you see when you read cyber policy papers: simple game theory inspired by the arguments around nuclear deterrence.


The problem with this kind of work is that no matter how many variables people add to these models, they don't capture the nature of either cyber offense or cyber defense in a way that can start to predict real world behavior.

Practitioners have other frameworks and models (c.f. Matt Monte's book), and the one I've chosen is Overwatch for the following reasons:


  • Overwatch is extremely popular in the hacking community and almost universally well understood, even at the highest levels (more so than other sports, such as Football or Basketball). It's possible this is because Overwatch's themes and story resonate strongly in this day and age, for reasons beyond this blog.
  • As an E-sport, tactical development in Overwatch is directly measured and both teams are on identical ground (no amount of steroids can overcome a bad strategy)
  • The diverse character set and abilities nicely explore the entire space of possibilities and translate well to the cyber war domain
  • Overwatch analysis has a rich, coherent and well understood terminology set (Shotcallers, "Sustain", win-condition, Deathballs, meta changes, team-comp, wombo-combos, etc.). 


This keynote explains our model for adversarial action in the cyber domain using Overwatch analogies.

Immunity is not the only team to use this kind of language to develop an analysis framework for extremely complex systems. An extremely popular series of biology videos on YouTube right now is TierZoo, whose creator discusses various animals as if they were playable Overwatch character classes. The key thing here being: this is a much more illuminating way to classify survival strategies than you might have imagined. And of course, it demonstrates the model works at the most complex levels available (aka, the real world).

Treating cyber security offense and defense as discrete automata may still provide some value for policy decision making, but it is more likely that an Overwatch-based model will be able to provide predictive value - much as simple expert systems have now been replaced for complex decision making by deep learning algorithms.


Friday, September 7, 2018

Paper Review: The Security Risks of Government Hacking by Riana Pfefferkorn

https://twitter.com/StanfordCIS/status/1037401854324264961

Ok, so after I did a review of the German VEP paper, Riana pointed me at her paper. Academics have thick skins as a rule, so I went through and live tweeted a response, but she is owed a deeper review, on reflection.

First of all, I am often chided for lumping all policy people together, or being overly derogatory towards the efforts of policy people in this area who are not also subject matter experts. But it has not gone unnoticed that there are fundamental problems with study in the area, most recently this article on CFR and this Twitter thread.

When you talk to major funders in this area they complain that "Every paper I read is both the same, and entirely irrelevant". And the reasons why get dissected pretty well by that CFR post as quoted below:
There are three categories of factors that make scholarly cyber conflict research a significantly more challenging task than its nuclear era counterparts: (1) the characteristics of the threat space, (2) data availability constraints, and (3) the state of political science as a discipline.
Historically, and luckily, when non-subject-matter experts attempt to draw conclusions in this field, they make glaring and weird mistakes about the history of the subject. Most often this happens in an attempt to back up the trope that cyber tools are extremely likely to get caught, and, once caught, get used against everyone else. (You can see another example of someone without any technical experience doing the same kind of thing here.)

Below (from page 9 of the paper) Riana makes some very odd claims:
In another example, after nation-state actors widely believed to be the United States and Israel unleashed the so-called Stuxnet malware to undermine Iran’s nuclear program, new malware which was in part identical to the Stuxnet code appeared on the internet.25 Researchers also discovered additional types of malware that used Stuxnet’s USB port infection technique to spread to computers.26
The reality is of course more complex, but it worries me that, in reading the released reports on Stuxnet, Duqu, and Gauss, she did not appear to understand the sweep of how things fit together. The technical concepts of how code works cannot be avoided when making policy claims of this nature, and getting them wrong from the very beginning has the problem of invalidating other arguments in the paper.

Likewise, when talking about bug rediscovery, it's impossible to discuss these things by giving equal weight to two papers with completely different results. It's like CNN trying to give equal weight to a climate change denier and an atmospheric scientist.

But that's what we see in Riana's paper.
Trey Herr, found rediscovery rates of 14% to 17% for vulnerabilities in browser software and 22% for bugs in the Android mobile operating system.5 After their conclusions were criticized as inaccurate, Schneier and Herr updated their paper, revising their rediscovery rates slightly upward and concluding that “rediscovery takes place more often than previously thought.”6 On the other hand, the RAND Corporation issued a report analyzing a different set of data and put the rediscovery rate at only about 5% per year.7 
I'm fairly sure they revised their rates downwards, not upwards? It doesn't matter though. It's impossible to draw the kinds of conclusions you would want from any of these numbers, as she goes on to state a few paragraphs later:
Ultimately, experts do not precisely know the rediscovery rate for any specific vulnerability or class of vulnerabilities, and aren’t going to know anytime soon. 
Then there are paragraphs which try to push a political agenda, but don't have a grasp on the history of how vulnerabilities have been handled. None of the claims here can be substantiated, and many of them are pure fantasy.
Today we have companies that are in the business of developing and selling 0-days, with no intention of revealing the flaw to the vendor so that it may be fixed. 0-days are generally used by state actors, may not be very common, and are not the biggest security problem out there. The existence of a market for 0-days may incentivize the discovery of more vulnerabilities. Some think that could lead to better security overall, so long as the government buying the 0-day ultimately discloses it to the vendor to be fixed. But that assumes 0-days are relatively rare; if they are plentiful, then an active 0-day market could be harmful.
The market for bugs has always been a smaller part of the larger community of people who find, use, and trade bugs, which existed long before there were governments in the game. The commercial consulting market dwarfs the government market, and is largely in the same exact business.

And governments are not a free bug-bounty program - they don't buy bugs to then disclose them to a vendor. That would be an exceedingly poor use of tax money.

Some parts of the paper, of course, highlight common-sense areas where there are wide policy gaps.
Judges issue hacking warrants ex parte based on the assurances of the government, but those representations may not capture the hacking campaign’s impact on people for whom there is no probable cause to believe they have committed any crime. As its use of hacking techniques continues and expands, it will be important for the government to narrowly tailor hacking campaigns to minimize impact on innocent users and to explain the expected impact accurately to the authorizing judge. 
Most substantially, I thought the paper represented a cautionary note against using Government Hacking as a policy bulwark against government-mandated backdoors, which are, on their face, a simpler, less chaotic policy.

The problem, however, is that without deeply understanding the technical details, this kind of paper can only misrepresent and over-abstract the risks on both sides. In that sense, it does more to muddy the issue than clarify it, even as it claims in the conclusion to want to further the discussion.


Wednesday, September 5, 2018

The German VEP

Most policy work is still done in the reverse of logic, with gestaltian leaps of faith covered in heaping gobs of wishful thinking as to cause and effect. Vulnerability Equities Process papers are especially susceptible to this, because the Mozilla and Microsoft lobbyist teams are punching well above their weight and have turned it into a "moral human rights" issue in the EU, to try to get codified in law what they could not get done in the United States - because what's good for Mozilla is not necessarily good national policy.

This is especially true for Germany though! Germany is a huge industrial state at great risk from information operations and more direct cyber attack - a state that manufactures factories - yet notably behind on its own offensive capabilities. Sven Herpig's proposed VEP policy (for Germany, and the EU in general) would be like trying to catch up in the America's Cup yacht race without pulling up your anchor.

However, that is gestaltian thinking at work! And I have been trying to propose a more rigorous process for looking at policy papers. And it is this:

  • Convert proposed language into a flowchart
  • Use boolean algebra to simplify the flowchart (see below for the 4d4 Wassenaar flowchart, rearranged to demonstrate the real structure)
    • Look at whether any parts of the flowchart imply other parts (for example, all places that STORE data in a database also obviously INDEX it, etc.). Sometimes what looks like a large technical differentiation chart can be reduced by inference.
  • Make a spreadsheet of scenarios for regression testing all proposed changes to the text (see the sketch below)
  • Use GIT or other version tracking to attach rationales and other notes to proposed changes in language
  • Look at the total return on investment of the proposal, given the regression testing results
  • When people adjust the language, don't let them assume an effect, but go through the entire regression test again with the new language. THIS FINDS WEIRD THINGS.

This diagram is much more useful for running sample scenarios through than the language in WA itself, imho.
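As a minimal sketch of what that regression testing can look like in practice (the scenario names, attributes, and draft predicates below are all invented for illustration): encode each draft of the control language as a predicate over scenario attributes, run the same scenario list through every draft, and diff the outcomes.

```python
# Toy regression harness for proposed control language. Each draft of the
# text is encoded as a boolean predicate over scenario attributes; changing
# the language means re-running every scenario and diffing the results.

scenarios = [
    {"name": "AV vendor ships signature update",
     "bypasses_mitigation": False, "commercial": True},
    {"name": "Pentest shop sells implant with C2",
     "bypasses_mitigation": True, "commercial": True},
    {"name": "Academic publishes ASLR-bypass PoC",
     "bypasses_mitigation": True, "commercial": False},
]

def draft_v1(s):
    # "Controls software that bypasses mitigations"
    return s["bypasses_mitigation"]

def draft_v2(s):
    # Same text plus a proposed carve-out for non-commercial research
    return s["bypasses_mitigation"] and s["commercial"]

for s in scenarios:
    v1, v2 = draft_v1(s), draft_v2(s)
    marker = "  <-- CHANGED" if v1 != v2 else ""
    print(f"{s['name']}: v1={v1}, v2={v2}{marker}")
```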

So when looking at VEP proposals in general, it's possible to say uncontroversially that this is a new area for governments, and that law in particular has been "not great" at dealing with rapidly changing environments in the cyber domain. It is therefore always fascinating when, without doing this kind of work, and without testing their VEP for a decade or so, people want to enshrine a particular VEP into their law. (Sven's proposal sunsets the law after five years - but most laws in this space get automatically renewed, so that is of little comfort in terms of malleability.) If your update process is crazy expensive and difficult, it makes more sense to test everything for many years before shipping it, right?

Regardless, let's look at Sven's proposal in detail, as promised.

The first thing is he pre-frames the argument with his title:
Weighing Temporary Retention versus Immediate Disclosure of 0-Day Vulnerabilities

But those are not the only options. Obviously indefinite retention is an option, as are many other things that happen in practice but make poor policy papers: having the Government distribute patches themselves, or special-purpose workarounds, or any number of other creative things that enable NOBUS, for example.

So one sample scenario spreadsheet to help make decisions about the proposed German VEP is here. It is not comprehensive, but it's similar to the ones I've made for export control language proposals.

There are a lot of negative results in the spreadsheet, but if someone who is pro-VEP wants to take a crack at it, I'm happy to send over the editing rights to the document - although obviously my opinion is that this is because, on the whole, this policy proposal is a bad idea for Germany and the EU.

That said, I think "not addressing the negative repercussions" is the strategy of choice for the lobbyist teams who want to push this sort of thing forward. But there's a reason particular gaps and flexibilities were built into the US VEP, and it's not that they didn't think of all the various issues, or just really like keeping 0day in a giant hoard like a dragon sitting on a gold pile in a cave somewhere underneath Ft. Meade.

----
Additional CinemaSIN:
Why do policy people just do "Absent data, we can assume X" as a thing?

This is an especially terrible paragraph in the German VEP paper.



And for bonus negative points:

I think it's worth pointing out that making decisions on bad data is in no way better than making decisions on no data. In fact, it's probably worse, since data - even bad data - gives you unwarranted confidence in your decisions.

And, for the record, I engage in Cyber Policy work because we've been wandering in the desert for what seems like 40 years, and it's time to head in a direction of some sort.

Saturday, September 1, 2018

Joe Nye's latest Norms Piece


The US cyber world appears to be in disarray. Between the Chinese and the Russians getting super aggressive, our constant bleating about cyber norms sounds like the distress signal a lone sheep sends out when the rest of the flock has been lost to wolves. The latest example of this is Joe Nye's paper this week on Normative Restraints on Cyber Conflict.

The waffling starts in the Abstract with a Trumpian "Many observers have called for laws and norms to manage the growing cyber threat". Norms are deeply about established practice and the one thing you can count on any norms paper (and there are a LOT of them) to do is carefully avoid any deep discussion as to what the established practices really are.

I'm not going to pull punches: this Joseph Nye paper is a boatload of wishful thinking. It's an exemplar of extraneous exposition, and I read it carefully so you don't have to. But where I know the outside world hears weakness and withdrawal, I have different ears and hear distant Wakandan drums.

You wouldn't know it from our policy papers, but this is a period of reconstitution. While change and rebalance are no doubt part and parcel of the cyber landscape, I think the world will be surprised at what uncloaks when this interlude is over.

Thursday, August 16, 2018

Classification/Clearance and the traditional CIA triad

I will start with my conclusion: It is far past time to completely throw out our classification/clearance system for something more modern, of which many potential examples exist. We have never had the resources and political will to do so, and suffer copiously as a result.

Even if we had the memory eraser from MIB, the clearance system doesn't fit our modern needs.

In addition to failing to scale to the millions of people we now have cleared - each a specialized individual case! - the clearance system directly conflicts with the basics of the information security axioms we use to govern other complex systems. In particular, to refer to the CIA triad - we know (c.f. SNOWDEN, etc.) that the system almost encourages large scale compromise of confidential information, and not in a way that more "insider threat" programs can really prevent.

We also know that our ability to do national-security-sensitive work is strangled by an inability to get new people into the system - the best people tend to leave because the requirements are onerous, and once you've left your clearance behind to venture briefly into the commercial world, it's unlikely you'll ever get it back. What good is a system in the modern world that takes two years to make a decision on someone's trustworthiness?

And the integrity of the classification system only works when we realize it is a relationship and a community, and not a gateway of privilege. If it's possible to lose your clearance for political reasons, or simply because you lost a job, or because you had any minor personal issue, then it's impossible to get reliable assessments for your intelligence community as a whole. Information does not naturally come with a meta-data label of sensitivity or scope - in fact, nearly the opposite is true, as we know from our long-used exploitation of unclassified traffic for national security purposes.

The clearance system is the hulking shadow in the back of any conference meeting on how to meet our strategic needs in the cyber domain. Even discussing norms is impossible without a better and more nuanced system for understanding and managing information-based national security risk.

Many people would say there is no better system, but even some version of the traffic-light protocol might be a more workable option - and may, more realistically, be what we use now anyway, if we care to admit it.
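For the unfamiliar, here is a minimal sketch of what makes a TLP-style scheme attractive. The labels are the standard TLP levels, but the sharing rule below is my own simplification; the point is that the whole thing is mechanical enough to check in a dozen lines, unlike a multi-year clearance adjudication:

```python
# Sketch of a TLP-style labeling scheme. Labels are the standard TLP
# levels; the sharing rule is a deliberate simplification for illustration.
from enum import Enum

class TLP(Enum):
    CLEAR = "clear"   # no limit on disclosure
    GREEN = "green"   # share within the community
    AMBER = "amber"   # share within recipient organizations only
    RED = "red"       # named recipients only, no further sharing

def may_share(label: TLP, named: bool = False, same_org: bool = False,
              in_community: bool = False) -> bool:
    """Can this piece of information be passed to this recipient?"""
    rules = {
        TLP.CLEAR: True,
        TLP.GREEN: in_community,
        TLP.AMBER: same_org,
        TLP.RED: named,
    }
    return rules[label]

# A partner analyst in the same sharing community:
print(may_share(TLP.GREEN, in_community=True))   # True
print(may_share(TLP.RED, in_community=True))     # False
```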

----
https://www.cfr.org/report/sharing-classified-cyber-threat-information-private-sector
https://twitter.com/jimsciutto/status/1029810782186496000


Friday, July 13, 2018

When is a "Search" minus the Quantum Theory, for Law Professors



One issue with reactions to Carpenter is that they tend to assume we can get clarity around how technological change affects what a "search" is by making up various artificial models of how telephony systems and search processes work. The principal example of this sort of model is Orin Kerr's description in his Lawfare piece.


Example random model Orin made up. :)
If you want a better example of how complicated this sort of thing is, I recommend this Infiltrate talk on the subject of how Regin (allegedly the Brits) searched a particular cellular network, covertly.



If you want to find out when a particular search started or ended, you almost always have to develop a lot of expertise in quantum mechanics, starting with Heisenberg, but quickly moving into the theory of computation, etc. This is a good hobby in and of itself, but probably more than a law professor wants to take on.

So I recommend a shortcut. A search is anything that can tell a reasonable person whether or not someone is gay. It's simple and future-proof and applies to most domains.

Thursday, July 12, 2018

The Senate Meltdown/Spectre Hearing


You can browse directly to the debacle here. Everything from beginning to end of this was a nightmarish pile of people grandstanding about the wrong things.

Let's start with the point that if you're going to get upset about a bug, Meltdown and Spectre are SUPER COOL, but that does not make them SUPER IMPORTANT. In the time it took Immunity to write up a really good version of, and exploit for, this, maybe fifty other local privilege escalation bugs came out for basically every platform they affected. And they are hardly the first new bugclass to come along. I guarantee you every major consulting company out there has a half dozen private bugclasses. People always say "You need to be able to handle an 0day on any resilient system," and the same thing is true for bugclasses.

I'm going to quote the National Journal here.
Chairman John Thune said he “hesitates” to craft legislation that would require U.S. companies to promptly hand over information on new cyber-vulnerabilities to the government, or to deny that same information to Chinese firms.
“You’d like to see that happen sort of organically, which is what we tried to suggest today and which many of the panelists indicated is happening in a better way, a more structured way,” Thune told reporters after the hearing.
Nearly every part of this not-veiled threat is a bad idea. Assuming they could come up with a definition of "cyber-vulnerability", the companies involved do most of this work overseas. They would no doubt make sure to give this information to every government at the same time. Now we are in a race to see who can take advantage of it first?

There's a reason Intel didn't even bother to show up to this hearing. One of them is that they can't afford to be seen taking sides with the USG in public. Which is precisely why this conversation happens over beers in a bar somewhere, instead of us counter-productively trying to browbeat them on live TV for no good reason. And we have to deal with the fact that sometimes we don't get what we want.



Here's a list of things we could have learned:

  • Bugs that private companies discover are not classified information protected and owned by the USG
  • There are consequences to our adversarial relationship with the community and with industry
  • No matter how much we blather on about coordinated disclosure systems and public private partnerships, companies have other competing interests they are not going to sacrifice just because it would be nice for the USG


Sunday, June 24, 2018

Sanger's "The Perfect Weapon" [CITATION NEEDED]

Book Link.

Everyone is very excited about the "revelation" that, in order to do their APT1 paper, Mandiant (according to Sanger) hacked back. But that's not the only stunner in the book. He also points to a WMD-level cyber capability leveraged against both Iran and Russia by the United States. There are a ton of unsubstantiated claims in the book, and the conclusion is a call for "Cyber Arms Control" which feels unsupported and unspecified. But Sanger has clearly drunk deeply of the Microsoft Kool-Aid.

But to the point of the (alleged) hack-back: We should have long ago developed a public policy for this, since everyone agrees it is happening, but we seem unable to do so even in the broadest strokes. I think part of the problem is that we are always asking ourselves what we want the cyber norms to be, instead of what they actually are. I'm not sure why. It seems like an obvious place to start.

Margin notes from my screenshots of the book (images not reproduced here):

  • WMD theory has a pretty heavy emphasis on countervalue attacks...
  • This is the only mention of Kaspersky in the book - a noted absence...
  • This is... a threat of a WMD via Cyber.
  • Is this new?
  • This is a chilling projection.
  • This is not good reporting right here.
  • Sheesh.
  • Hahahahah. DO THEY?



Cypherpunks: The Vast Conflict



I've been carefully reading Richard Danzig's latest piece, Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority. I want to put it in context: first, Richard Danzig is one of the best policy writers and one of the deepest American policy thinkers currently active. Second, this paper is the product of a deeply conservative government reaction to the ascendant Cypherpunk movement, and in that sense it is leading in the wrong direction.

Ok, that sounds melodramatic. Let me sum up the paper thusly:
  • New branches of science introduce upheaval and each comes, as a party gift, with a new weapon of mass destruction and general revolution in how war works. 
  • We used to get one a century or so, which was possible to adapt to, like a volcano that erupted every so often
    • We built treaties and political theory and tried not to kill everyone on the planet using the magic of advanced diplomacy
  • Now we are getting many new apocalyptic threats at a time
    • AI
    • 3d-Printing
    • Drones
    • Cyber War
    • Gene editing techniques
    • Nanotechnology
  • Rate of new world-changing tech is INCREASING OVER TIME.
    • Our ability to create new international political structures to adapt to new threats appears moribund


Most legal policy experts look askance at the "libertarian" views of the computer science community they have been thrust into contact with, as if they were Japanese commuters pressed onto a rush hour train. But the computer science world is less big-L Libertarian than philosophically Cypherpunkian, tied to the simple belief that the advance of technology is, in sum, always net positive for human liberty. Where society conflicts with the new technologies available to humanity, society should change instead of trying to restrict the march of technology.

Hence, where government experts are scared of disintermediation, as evidenced by a paranoia over Facebook's electoral reach, the computer world sees instead that newspapers were themselves centralized control over the human mind, and worthy of being discarded to the dustbin of history.

Where the FBI sees a coming crisis in the "Going Dark" saga, they find exactly no fertile ground in the technology sector, as if the field they would plant their ideas in was first salted, and then sent into space on one of Elon's rockets.

The US Government and various NGOs were both surprised and shocked at the unanimity and lack of deference of the technological community with regards to the Wassenaar cyber controls, or the additional cryptographic controls the FBI wants. This resistance comes not from a "Libertarian" political stance, but from the deep current of cypherpunkism in the community.

These days, not only do Cypherpunks "write code", to quote Tim May's old maxim, but they also "have data". The pushback around Project Maven can be described on a traditional political platter, but also on a tribal "US vs THEM" map projection.

Examine the conversation around autonomous weapons. Of course an autonomous, armed flying drone swarm can be set to kill anyone in a particular building. This is at least as geographically discriminatory as a bomb. Talks to restrict this technology, even at the highest level of principle, so far restrict only an empty set of current and future systems.

Part of this is the smaller market power of governments in general for advanced technology. A selfie drone is essentially 99.999% the same as a militarized drone, and this trend is now true for everything from the silicon on up; some parts of the US Govt have started to realize their sudden weakness.

As Danzig's paper points out, the platitude that having a "human in the loop" to control automated systems is going to work is clearly false. Likewise, he argues that our addiction to classification hamstrings us when it comes to understanding systemic risk.

 The natural tendency within the national security establishment is to minimize the visibility of these issues and to avoid engagement with potentially disruptive outside actors. But this leaves technology initiatives with such a narrow a base of support that they are vulnerable to overreaction when accidents or revelations occur. The intelligence agencies should have learned this lesson when they had only weak public support in the face of backlash when their cyber documents and tools were hacked.
But his solution is anything but. We're in a race, and there's no way to get out of it based around the idea of slowing down technological development.

Monday, June 18, 2018

Policy Bugclass: False inequivalencies

I'm going to leave it up to your imagination why this picture perfectly encapsulates every moment someone suggests two random cyber things are different when they are actually the same.


We try to maintain a list of policy-world "bugclasses" for the cyber domain:
  1. Assuming Data or Execution is bound to a physical location
  2. Assuming code has a built-in "Intent"
  3. Building policy/law in legal language instead of in Code (i.e. policy that does not work at wire-speed is often irrelevant)
  4. False inequivalences
In this article I want to talk a little bit about False Inequivalences, since they are probably the most prevalent type of bugclass that you run into, and you see them everywhere - in export control, in national security law, in policy in general.

For example, export control law (5a1j) likes to try to draw distinctions between the ability to store and the ability to search, or (4d4) the ability to run a command, and the ability to gather and exfiltrate information. In national security policy papers you'll often see a weird distinction between the ability to gather information and the ability to destroy information. Another, more subtle error is a sort of desire to have "networks" which are distinct. Technologists look upon the domain name system as a weak abstraction, but for some reason policy experts have decided that there are strict and discernible boundaries to networks that are worth porting various International Law conventions over to.
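As a toy illustration (all names invented) of why the store/search distinction collapses in practice: any system that can store records and run code gets "search" for free, so a control that permits one but restricts the other doesn't actually draw a line anywhere.

```python
# Toy illustration: "search" is a trivial wrapper around "store" plus
# iteration, so the two capabilities are equivalent in any practical sense.

stored = []

def store(record):
    """The 'uncontrolled' capability: keep a record."""
    stored.append(record)

def search(predicate):
    """The 'controlled' capability, implemented in one line on top of store."""
    return [r for r in stored if predicate(r)]

store({"user": "alice", "proto": "ftp"})
store({"user": "bob", "proto": "http"})
print(search(lambda r: r["user"] == "alice"))
```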

This bugclass is a real danger, because explaining why two things are "provably equivalent in any real practical sense" annoys lawyers, whose entire lifespans have been spent splitting hairs in language, and who think that hairsplitting, as a tool, can produce consistent and useful global policy.

More specifically, we need to find a way to revise a lot of our legal code to accept this reality: Title 10 and Title 50 need to merge. Foreign and domestic surveillance practices need to merge. The list goes on and on...


Tuesday, June 5, 2018

Security, Moore's Law, and Cheap Complexity

https://www.err.ee/836236/video-google-0-projekti-tarkvarainseneri-ettekanne-cyconil

To paraphrase Thomas Dullien's CyCon talk:
  • We add 3 ARM computers per year per person on Earth right now. 
  • The only somewhat secure programs we know of focus entirely on containing complexity
  • Software is a mechanism to create a simplified machine from a complex CPU - exploits are mechanisms to unlock this complexity
  • We write software for computers that don't exist yet because we design hardware and software at the same time.
  • We've gotten significantly better at security in the past 15 years, but we've been outpaced by the exponential increase in complexity
  • Every device is now a "Network of Computers" - intra-device lateral movement is very interesting
  • It's much cheaper to use something complicated to emulate something simple than vice versa, in the age of general purpose cheap CPUs. This generates massive economies of scale, but at a cost...insecurity.
  • The economics of chip manufacturing means CPU and memory providers are driven to sell the hardware they can get away with selling - some percentage of the transistors in a chip are bad, and the chip maker is strongly motivated to ship the least reliable CPU that the customer cannot detect
    • When there are only a few hundred atoms in a transistor, three or four more make a big difference
  • Until Rowhammer the link between hardware reliability and security was not clear to Electrical Engineers.
  • You cannot write real world secure programs that operate on hardware you cannot trust
  • Computers are deterministic in the abstract sense, but they are really only deterministic MOST of the time. Engineers work really hard to make it so you can ignore the physics of a chip. But it's still happening.
    • Determinism has to be fought for in computers, and is not a given.
  • The impossibility of inspectability in the digital sphere
    • Everything has firmware, none of which we can really have any assurance of
    • Average laptop has ~40 CPUs all with different firmware
    • Local attackers can use physics to induce transient faults, which bypasses crypto verification, which then means nobody can get you out
  • If control of a device has ever been disputed, it can never be ascertained if it is back in control. This is counter our standard intuition for how objects work.
  • The same forces that drive IT's success drive IT's insecurity.
  • Halvar loves SECCOMP_STRICT sandbox and wants to make it useful, but of course, making it useful will probably break it
  • Computers will look very different from today's architectures in fifteen years - more different than they did fifteen years ago. Engineers are now focused on designing parallel machines, since Moore's law is over for single-cores. 
  • All the insane complexity we can pump into computation systems is essentially in your pocket. 
  • It's still early days in computers. How good was humanity at building bridges seventy years after we started?

Tuesday, May 29, 2018

What is the high ground in cyberspace?

I can't even begin to get into how crazy hilarious most of the proposed cyber norms are. Usually the response is "What does the technical community know?" and then, a few years later, "Hmm. That didn't work." - even though it was entirely predictable.

High Ground (C.F. Thomas Dullien)

High ground in cyber is high-traffic sites! Facebook and Google are "unsinkable aircraft carriers" in that sense, but any site which has a huge traffic share is high ground. Most of them have very low security, and there are lots of mountain ranges we don't acknowledge the existence of.

This screencap from Matt Tait's 2018 INFILTRATE keynote talks about update providers as strategic risks...
RedTube and other major porn sites have a wider reach than the New York Times ever will. Gaming sites are equally high ground. Dating sites are clearly high ground. There's what you think people do on the Internet versus what they really do, almost everywhere you look, which is why good strategists are holding themselves to the hard data they get from historical operations, and not just making up fanciful cyber norms in Tallinn.

I think it's counter-intuitive to grasp that almost everything your computer does when it reaches out is "get more code to execute". Software Updates are the obvious one, but a web page is also just code executing. PDFs are code executing. Word documents are code executing. New TF2 maps are code executing. NVidia's driver download page is exceptionally high ground.

In other words, there's nothing your computer does that is not "updates" when it comes to understanding your strategic risk.

Team Composition


We covered team compositions as applied to cyber operations quite heavily in our talk at T2 in Finland. To quickly summarize: Dive Tanks are going to be implants that are more "RAT"-like. These typically are entirely in userspace, and operate in the grey zones and chaotic areas of your operating system. Main tanks tend to be kernelspace or below. Obviously your implant strategy changes everything about what else you incorporate into your operations.

Win Condition



In Overwatch, one win condition is "we have a ranged DPS on the high ground, unopposed". Knowing the win conditions is important because it keeps you from wasting time and "feeding" your opponents when the battle is already lost. In cyber operations, feeding your opponents is quite simply using new exploits and implants when your current ones have already been caught. This is why a good team will immediately remove all their implants and cease operations once they even get a hint that they were discovered.

Unlike in Overwatch, the win condition in cyber is usually who is more covert than the other person. You don't have to remove your opponent from the field, you just have to make it irrelevant they are there.

Conclusion

Keeping your strategy as simple as possible allows for a high tempo of operations with predictable and scalable results. Create a proper toolkit composition, execute the right tactical positioning based on your composition, and understand your win condition, and you will end up a grandmaster. :)

Thursday, May 24, 2018

When our countermeasures have limits

Countermeasures are flashy. But do they work?

So the FBI took over the domain VPNFilter was using for C2. VPNFilter also used a number of Photobucket accounts for C2, which we can assume have been disabled by Photobucket.


Hmm. Why did they do so many? Do we assume that every deployed region would have the same exact list?

Here's my question: How would you build something like this that was take-down resistant? Sinan's old paper from 2008 on PINK has some of the answers. But just knowing that seizing a domain is useless should change our mindset...

As a quick note: that last sentence of the FBI affidavit is gibberish.

From what I can tell from public information, the VPNFilter implants did not have a simple public-key related access method. But they may have a secret implant they installed only in select locations which does have one. Cisco and the FBI both are citing passive collection and a few implants from VirusTotal and from one nice woman in PA. We do know the attackers have a dedicated C2 for Ukrainian targets. 



My point is this: Our current quiver of responses can't remove botnets from IoT devices. The only reasonable next move is to do a larger survey of attacker implants - ideally to all of them, using the same methods the attackers did (we have to hope they didn't patch each box). This requires a policy framework that allows for DHS to go on the offense without user permission, and worldwide.

Tuesday, May 22, 2018

Exploits as Fundamental Metrics for Cyber Power


If you're measuring cyber power, you can measure it in a number of different ways:

  • Exploitation (this article!)
  • Integration into other capabilities (HUMINT, for example)
  • Achieved Effect (so much of IL wants to look here, but it is very hard)
In a previous article on this site we built a framework around software implants as a metric for measuring sophistication in capability. (Also see this Ben Buchanan piece for Belfer.)

Since there are no parades of cyber combat platforms through downtown DC, or even announcements in Janes, non-practitioners have tried to tag any effort which includes "0days" as sophisticated - and, in the case of export control, too sophisticated to be traded without controls. This typically shows up as the concept of "Bypassing Authorization" being treated as some sort of red line.

But from a strategic standpoint we have for years tried to read the development and expenditure of 0day as a declaration of capabilities befitting a State-level opponent. This is of course a mistake, and one part of that mistake is thinking of all 0days as equal from an information-carrying perspective as regards capabilities.

So what, then, do practitioners look for when gauging 0day for nation-state-level sophistication, if not simply the use of any 0day?

Here is my personal list:
  • Scalable CONOPS
  • Toolchain Integration 
  • Cohesive OPSEC
  • Historical Effort and Timescales
Without going into each one of those in detail, I want to highlight some features that you'll see in State-level exploits. Notably, there is no red line on the "sophistication" of an exploit technique that differentiates "State" from "amateur". On the contrary, when you have enough bugs, you pick the ones that are easiest to exploit and fit best into your current CONOPS. Bugs with the complexity level of strawberry pudding recipes tend to be unreliable in the wild, even if they are perfectly good in the lab environment.

A notable exception is remote heap overflows, which for a long time were absent from public discourse. These tend to be convoluted by nature. And it's these that also typically demonstrate the hallmarks of a professional exploit that has had time to mature properly. In particular, continuation-of-execution problems are solved, the exploit will back off if it detects instability in the target, the exploit will use same-path stagers, you'll see PPS detection and avoidance, and the exploit will be isolated properly on its own infrastructure and toolkit. What you're looking for is the parts of an exploit that required a testing effort significantly beyond what a commercial entity would invest in.

One particular hallmark is of course the targeting not of the newest and greatest targets, but of the older and more esoteric versions. A modern exploit that also targets SCO UnixWare, or Windows 2000, is a key tell of a sophisticated effort with a long historical tail.

There is a vast uneducated public perception that use of any 0day at all in an operation, or 4 or 5 at once, indicates a "state effort". However, the boundaries around state and military use of exploits are more often in the impressions of the toolkits they fit into than in the exploits themselves. While exploits, being the least visible parts of any operation, are sometimes the hardest to build metrics around, it's worth knowing that the very fact that 0days exist as part of a toolchain is not the needed metric for strategic analysis, nor the one practitioners in the field use.

Tuesday, May 8, 2018

What is an "Observable Characteristic" in Software Export Control?

Note: This is a living document partially written for those new to export controls - if you think I misunderstood something let me know and I'll address it within!
---------------------------------------------------------------------------------------------------------


I want to highlight this Twitter thread here, which goes over 4D4 ("Intrusion Software") in a bit of detail. I feel like many proponents of 4D4 complain that the rest of us, who have concerns, don't properly understand export control frameworks. I would posit there IS NO UNDERSTANDING OF EXPORT CONTROL FRAMEWORKS BY DESIGN :). But to be more specific about the concerns, the following bite-sized bit is the most important part:


Being able to look, objectively, at a piece of hardware and say "This is a stealth coating because it has the following manufacturing characteristics" is a different category than being able to look at a piece of software and saying "This bypasses ASLR and DEP". Deep down, while radio frequencies are in general going to be a universal thing, and performance can be measured, the export control language applied to software exists in a huge fog! What does it mean to "bypass a mitigation"?

The Issues of End Use Controls


What this results in is END USE controls. In other words, instead of saying "We want to ban antennas that can emit the following level of power," we are writing controls that say "We want to ban software that CAN BE USED for the following thing." This means that instead of looking at the software to control it, you end up looking at the marketing, so the controls are littered with marketing language ("Carrier Grade Speed!") and do not have functions, characteristics, or performance levels of any kind.

Sometimes you see long lists of functionalities in software controls, as if this is going to be a definitive characteristic if you add enough of them. For example, 5a1j ("Surveillance software") is essentially:

  • Collects network information
    • parses it and stores the metadata about it (aka, FTP usernames and such) into a DB
  • Indexes that information (why else would you have this in a DB?)
  • Can visualize and graph relations between users (based on the information you indexed)
  • Can search the DB using "selectors" ("again, why else is it in a DB?")
This is what modern breach detection software is - a product category that did not really exist when 5a1j was formulated. But each of the pieces DID exist, and given a market opportunity, they got put together as you would expect. In other words: long lists of functions are not enough to make a good control (especially when all the functions you are describing are commoditized in the ELK Docker image).
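To underline how commoditized each listed function is, here is a toy sketch using nothing but the Python standard library (the schema and selector are invented; the visualize/graph-relations piece is left out, but it's one more commodity library away):

```python
# Toy sketch of the 5a1j functional list: collect, store, index, and
# search network metadata - each "controlled" function is a few lines of
# commodity code (or one ELK container).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE meta (src TEXT, dst TEXT, proto TEXT, user TEXT)")
db.execute("CREATE INDEX idx_user ON meta(user)")  # "indexes that information"

def collect_and_store(record):
    """'Collects network information' and 'stores the metadata' in a DB."""
    db.execute("INSERT INTO meta VALUES (?, ?, ?, ?)", record)

def search(selector):
    """'Can search the DB using selectors'."""
    return db.execute(
        "SELECT * FROM meta WHERE user = ?", (selector,)).fetchall()

collect_and_store(("10.0.0.5", "192.0.2.1", "ftp", "alice"))
print(search("alice"))
```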


Performance levels are typically the big differentiator. The general rule is that if you cannot define a performance level, you are writing a terrible regulation, because it will apply broadly to a huge population you don't care about and have massive side effects - but the international community has typically just ignored this for software controls (because it is hard). Part of the difficulty is that performance levels in the cyber domain go up a lot faster than in most manufacturing areas. The other issue is that for the controls people seem to want, there are no clear metrics on performance. For example, with 5a1j there is nothing that differentiates the speeds/processing/storage that Goldman Sachs (or even a small university) would use from a country's backbone ISP.

Another thing to watch out for is the contrast between controls on "Software" and the controls on "Technology". Usually these controls go hand in hand. They'll say "We control this antenna, but also we control technology for making those antennas so you can't just sell a Powerpoint on how to make them to China either". In software, this gets a lot more difficult. Adding an exception to a technology control does not fix the software control...

What we are learning is that software export controls work best when tied to an industry standard. This does not describe the current cyber-tools regulations (4d4 or 5a1j), however. We do know that end-use-based controls are not good even with very large exceptions carved into them, for reasons which might require another whole paper, but which seem obvious in retrospect when looking at the regulations as "laws" which need to be applied on an objective basis.

Impact


I got flak last Sunday for Tweeting that export controls "ban" an item, which they clearly do not. However, the effect of export controls is similar - largely a slower, more painful, and silent death rather than a strict ban. I.e., export controls are less a bullet to the head and more a chronic but fatal disease for your domestic industry. This is partially because licensing has an extremely high opportunity cost for US businesses, which raises the expense of doing business up and down the supply chain.

There's a common misconception among export control proponents that when used loosely (aka, automatic licensing with only reporting requirements), export control is "cost free" for businesses. Nothing could be further from the truth. Even very small companies (aka startups) are now international companies, and having to understand their risks from export control regimes can be prohibitively expensive with such broadly (and poorly) designed controls as 4d4 or 5a1j.

More strategically, no proponent of strict export control regimes wants to look at their cost and efficacy. Do they even work? Do they work at a reasonable cost? For how long do they work? Do we have a realistic mechanism for removal of controls once they become ineffective? These are all questions we should answer before we implement controls. The long term impacts are recorded at policy meetings in sarcastic anecdotes - "We don't even build <controlled system> in the US anymore, we just buy it from China - that export control did its job!" 

Sadly, this means that export controls are almost certainly having the exact opposite effect from what is desired. This could probably be addressed by having a quite strict "Foreign availability" rule when designing new regulations. After all, what is the point of putting restrictions on our exports when the same or similar technology is available from non WA members? Any real stress on these issues is mysteriously missing from discussions around the cyber-tools regulations. :)

Unilateralism


The goal of the Wassenaar Arrangement and other similar agreements is of course to avoid the problem of unilateral controls, which are an obvious problem. What they don't want to hear is that the implementation differences between countries are large enough that the controls are unilateral anyway. To have truly non-unilateral controls you need one body making licensing decisions - and by design WA does not work like that.

The gaps in implementation are large enough that entire concepts appear in one country that don't exist in others - most prominently, the "Deemed Export" ruleset, which says that if I sell an Iranian an iPhone in Miami, that is the same as exporting it to Iran and I need to get a license.

Goals, both Stated and Unstated

The stated goal of export controls is avoiding technology transfer for national security purposes! (Note that "human rights issues" are not a stated goal for the WA).

The unstated goals are more complex. For example, we get a lot of intelligence value out of the quarterly reports from software companies for all of their international customers. There's probably also limited intel value in the licensing documents themselves ("Please have your Chinese customer fill out a form - in English - stating what they are using your product for!") Obviously the US likes having a database somewhere of every foreign engineer who has accessed a lithography plant, I guess. Because this stuff is unstated, it's hard to argue against in terms of ROI but I would say that for most of this you can get a ton better value by having a private conversation with the companies involved, which is a good first step towards building the kinds of relationships we always claim we want between the USG and industry. As stated previously, the costs imposed by even a "reporting only" licensing scheme are enormous.

When I talk to Immunity customers (large financials) about 5a1j, they assume that the reason the USG wants reporting on all breach detection systems sold overseas is so they can better hack them. It's hard to argue with that. This is a high reputational cost that the USG pays among industry for intelligence that is probably of little real value.

The other unstated goal is leverage. Needless to say with a complicated enough export control regime, nearly every company will eventually be in violation. Likewise, blanket export licenses can massively reduce your opportunity cost, and many countries are happy to issue them in various special cases. Again, I think from a government's perspective it is better long term to develop fruitful bilateral relationships.

A lot of these issues are especially true with "end use"-centric controls - which rely on information that the SELLER or BUILDER of the technology has no way to know ahead of time.

And the last-but-not-least unstated goal is to control the sale of 0day. Governments are mostly aligned in their desire to do this, although few of them understand what that means, what the side effects would be, or how this would really play out. But parts of the rest of their strategy (VEP) only really work when these controls go into place, so they have a strong drive to write the controls and see how things work out later. It is this particular unstated but easily visible goal that I think is the largest threat to the security industry currently.

Conclusions

I tried in this document to start painting the landscape as to where cyber tool export controls can go wrong. Part of my goal joining ISTAC was to stop making the mistakes we made with 5a1j and 4d4 by putting things into a bit of a more coherent policy framework. Hopefully this document will be useful to other technical and policy groups as together we find a way to navigate this tricky policy area.

----------------
Resources/notes:

To be fair, we've known a lot of these issues for a very long time, and we simply have not fixed them:

From this very good, very old book.