Monday, February 18, 2019

Review: Bytes, Bombs, and Spies

It should have been titled Bytes, Bombes, and Spies. A lost opportunity for historical puns!


In my opinion, this book proved the exact opposite of its thesis, and because of that it predicts, like the groundhog, another 20 years of cyber winter. The book's overall thesis is that it's possible to think intelligently about cyber policy without a computer science degree or a clearance; that is, that the same strategic policy frameworks we derived for the Cold War can carry us into a global war of disintermediation. You can hence judge this book on the coherence of its answers to the questions it manages to pose.

It's no mistake that the best chapter in the book, David Aucsmith's dissection of the entire landscape, is also its most radical. Everything is broken, he explains, and we might have to reset our entire understanding to begin to fix it. You can read his thoughts online here.

Westphalia is no longer the strongest force, perhaps.


Jason Healey also did some great work in his chapter, if for no other reason than he delved into his own disillusionment more than usual.



Yeah, about that...

But those sorts of highlights are rare (in cyber policy writing in general but also in this book). 

Read any Richard Haass article, or his book, and you will see personified the dead philosophy of the Cold War reaching up from its well-deserved grave: stability at any cost, at the price of justice or truth or innovation. At the cost of anything and everything. This is the old mind-killer speaking, the dread of species-ending nuclear annihilation.

What that generation of policy thinkers fears more than anything is destabilization. And that filters into this book as well.

Is stability the same as control?

Every policy thinker in the space recognizes now, if only to bemoan them, the vast differences between the old way and the new:

The domain built out of exceptions...
But then many of the chapters fade into incoherence.

This is just bad.

Are we making massive policy differentiations based on the supposed intent of code again? Yes, yes we are. Pages 180 and 270 of the book disagree even on the larger strategic intent of one of the most important historical cyber attacks, Shamoon, which is alternately described as a response to a wiper attack and as retaliation for Stuxnet. Both cannot be correct, and it's weird the editors didn't catch this.

What are your rules of engagement if not code running at wire speed, perhaps in unknowable ways, the way AI is wont to do? And even if it isn't AI, can you truly understand the emergent systems that are your codebase, or are you just fooling yourself?

There are bad notes to the book as well: every chapter that goes over the imagined order of operations for what an offensive cyber operation would look like, and which US military units would do what, has a short shelf life, although this is possibly the only book where you'll currently find that kind of analysis.

But any time you see this definition for cyber weapons, you are reading nonsense of the exact type that indicates the authors should go get that computer science degree they assume isn't needed, or at least start writing as if their audience has one:

Why do people use this completely broken mental model?

Likewise, one chapter focused entirely on how bad people felt when surveyed about their theoretical feelings around a cyber attack. Surveys are a bad way to do science in general, and the entire social science club has moved on from them and started treating humans like the great apes we are.

That chapter does have one of my favorite bits, though, when it examines how out of sorts the Tallinn Manual is:

"Our whole process is wrong but ... whatevs!"

So here's the question for people who've also read the whole book: Did we move forward in any unit larger than a Planck length? And if not, what would it take to get us some forward motion? 



Tuesday, February 5, 2019

Iconoclastic Doctrine

I gave a talk at a private intelligence and law enforcement function a few months ago, and I wanted to write up part of it for the blog. It went as follows:


Hacking, and hence cyber war, is in many ways the discipline of controlled iconoclasm. There are really two philosophies among mature hackers. First, there are people who want to be far ahead of the world, who prove how smart they are by finding bugs and using them in unexpected ways: essentially characters who put all their stats into vulnerability development. Then there are operators, who can take literally any security weakness, no matter how small, and walk it all the way to domain admin. The generational split here, for '90s hackers, used to be cross-site scripting, which was viewed largely like a vegan burger at a Texas Bar-B-Q.

There are other generational splits that never happened for hackers, but which I see affecting cyber policy. When you read about hackers in the news, you often hear not that they are 400-pound bedridden Russophiles, but that they are transgender:


If you're not part of the hacker community, like, say, if you do government policy for a living, it's easy to miss how much higher the percentage of transgender people is at all levels of the information security world than in the outside world. Hacking requires the mental courage to go your own direction and the clarity to examine society's norms and discard the ones that don't work for you. If you're looking for cyber talent, you're better off with an ad on Fictionmania than on the Super Bowl broadcast.

Even in religious countries, the majority of the offensive cyber force is atheist. And support for the transgender community in the fifth domain is a generation beyond what it is in the other four.

In summary, assuming that your goal is to "ensure US/Allied freedom of action in cyberspace and deny the same to our adversaries," the transgender ban is like an NBA team banning tall people because they wear larger clothes. It's strategically stupid, and it spills over into the IC and the entire military-industrial construct. It has national-security-level implications, and not good ones.


Project the Raven

It's impossible to miss the reporting on Project Raven that came out yesterday, even though I feel like we've already heard a bit about the UAE's efforts from other reports. I have to admit, I never judge people from media reports. A friend of mine says that you're not really serving your country until your name gets dragged through the mud in the NYT.

Of course, one thing that strikes you as you read these reports is that their significance in the media is somewhat belied by the fact that the whole effort appears to fit INTO A VILLA.

I find this imbalance holds in a lot of areas of computer security. The number of hours it takes to implement an export control on some item is ten times larger than the number of hours it takes to rebuild that whole item overseas in a non-controlled country. This kind of wacky ratio isn't true for, say, stealth technology or things that go boom really big.

If you watch the following talk from 2012, you'll see I make a prediction, and it is this: I think non-state actors are where the game is. I think that is the big, underlying change that we are trying desperately to ignore.


Friday, January 25, 2019

There is no Escalatory Ladder in the Matrix



I'm still reading Bytes, Bombs and Spies, specifically the chapter where they define "OCO" to mean "Offensive Cyber Operations" and then riff on the concept of how command and control and escalatory ladders would exist in wartime, who would authorize what, how doctrine would evolve, etc.

I know the book is meant, at some level, to show that policy can be strategically thought out in the unclassified space, but the deeper I get into it, the less I feel it makes that point. What does a "constrained regional scope" mean in cyberspace? (Note: it means nothing.)

If anything, what this book points out is how little value you can get from traditional political-science terms and concepts. An escalatory ladder makes little sense in a domain where a half-decade of battlefield preparation and pre-placement is required for attacks, where attacks have a nebulous connection to their effects, where deniability is a dominant characteristic, where intelligence gathering and kinetic effect require the same access, and where emergent behavior during offensive operations happens far beyond human reaction time.


Wednesday, January 23, 2019

Bytes, Bombs, and Spies

A shibboleth for whether you should be doing cyber policy work. :)

ISR has to be anticipatory, comprehensive, and real-time. And because of that, I'm reading the new compendium of essays edited by Herb Lin and Amy Zegart. The book is dense and long, so I'm not done with it, but so far I get a very different feeling from it than the one I think they intended.

Start by listening to this podcast:
https://www.hoover.org/news/hoover-scholars-examine-cyber-warfare-new-book

In it, a few interesting questions come to light. For example, they say things like this:

  1. They do not know why, unlike with nuclear, the scientific community has not gone into policy work with cyber (a sociology-of-knowledge problem)
  2. They do not think you need a "CS degree" to work on cyber policy (in comparison to nuclear work, which was somehow more technical)
Obviously I disagree with both of those things.

There are things you always hear in this kind of podcast:
  • A hazy enumeration of the ways the cyber domain is different
  • War versus not-war legal hairsplitting 
  • Either wishful thinking or doleful laments about cyber norms
  • Massive over-parsing of vague throwaway comments from government officials
For example, the Differences of the Cyber Domain (from this podcast):
  • Intangible - it manipulates information
  • It's man-made! The laws of physics don't constrain information weapons so much as imagination does.
  • Target dependence
  • The accumulation problem - lots of copies of the same malware don't help. Exploits are time-delimited in a way that is quite different from capabilities in the real world.
Ok, so some real talk here: the first thing a hacker learns is that code and data are the same thing, and both are just state space under the covers. This is why, when you build a fuzzer, you measure "code coverage" while knowing you're only estimating what you REALLY want to explore, which is state-space coverage. It's why your exploits can use the thread environment block to store code, and why every language complex enough to be useful has an injection bugclass. I have a whole post coming soon about shellcode encoder-decoders to really drive the history of this thing home. ADMutate, anyone?
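
To make that duality concrete, here is a minimal illustrative sketch in Python of an XOR encoder-decoder in the spirit of ADMutate-style polymorphic shellcode (the payload bytes are placeholders, not real shellcode):

# Code-as-data in miniature: the same bytes are "data" to the encoder
# and "code" to whatever eventually interprets them.

def xor_encode(payload: bytes, key: int) -> bytes:
    # Treat code as data: transform every byte so that signature
    # matching on the original pattern fails.
    return bytes(b ^ key for b in payload)

def xor_decode(encoded: bytes, key: int) -> bytes:
    # The decoder stub a real exploit would prepend: the same
    # operation run in reverse, turning "data" back into "code".
    return bytes(b ^ key for b in encoded)

payload = b"\x90\x90\xcc"          # placeholder bytes standing in for code
for key in (0x01, 0x55, 0xAA):     # a new key per use means a new signature
    encoded = xor_encode(payload, key)
    assert xor_decode(encoded, key) == payload
    print(hex(key), encoded.hex())

Each key produces different bytes on the wire from the same underlying payload, which is exactly why signature-based defenses struggle here.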

Anyways, once you understand that in your gut, the way a hacker does, the cyber domain is not at all confusing. It becomes predictable, which is another word for computable.

Deep down, the most ironic thing is Lin's statements about the different physics of the physical and cyber domains, because the theory of computation is ALSO the physics of quantum mechanics, which is what we learned when we were building nuclear bombs.


Friday, January 4, 2019

VEP: Centralized Control is Centralized Risk




When we talk about the strategic risks of a Vulnerability Equities Process, people get absorbed in the information security risks. But the risks of any centralized inter-agency control group also include:

  • Time delay on decision-making. Operational tempo is a thing. And you can't use a capability and then kill it on purpose on a regular basis without massive impact on your opsec, although this point appears to be lost on a few former members of the VEP.
    • For teams that need a fast operational tempo (and are capable of it), any time delay can be fatal. 
    • For teams that don't operate like that, you either have to invest in pre-positioning capability you are not going to use (which is quite expensive) or further delay integration until the decision about whether to use it has been made.
  • One-size-fits-all capability building. While there may be plenty of talented individuals in the VEP process, it is unlikely they are all subject-matter experts at the size and scale that would be needed for a truly universal process. For example, the SIGINT usefulness of a SAP XSS may be ... very high for some specialized team.
  • Having multiple arms allows for simpler decision making by each arm, similar to the way an octopus thinks. 
  • Static processes are unlikely to work for the future. Even without enshrining a VEP in law, any bureaucratic engine has massive momentum. A buggy system stays buggy forever, just like a legacy font-rendering library. 
It may not even result in different decisions than a distributed system would. For example: any bug bought or found by a SIGINT team is likely to be useful for SIGINT, and retained. Otherwise your SIGINT team is wasting its time and money, right?

Likewise, any bug found externally, say through a government bug bounty, is likely to be disclosed.

Here's a puzzle for you: what happens if your SIGINT team finds a bug in IIS 9, an RCE that is hard to exploit, and they work on it for a while and produce something super useful? But then, a bit later, that same bug comes in through the bug bounty program DHS has set up, but as an almost useless DoS or information leak. How do you handle the disparity between what YOU know about a bug (its exploitability, for example) and what the public knows?
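
As a toy model of that disparity, here is a sketch in Python (the names, fields, and decision rule are all invented for illustration, not drawn from any real VEP):

from dataclasses import dataclass

@dataclass
class Assessment:
    source: str        # "SIGINT" or "bug bounty"
    impact: str        # what this assessor believes the bug allows
    exploitable: bool  # whether this assessor has a working exploit

def vep_decision(assessments: list) -> str:
    # Toy policy: disclose anything that arrived externally; retain
    # anything only the SIGINT team knows how to weaponize.
    external = any(a.source == "bug bounty" for a in assessments)
    weaponized = any(a.source == "SIGINT" and a.exploitable for a in assessments)
    if external and weaponized:
        # The disparity the puzzle asks about: disclosing burns a working
        # capability, retaining means sitting on a publicly known bug.
        return "conflict: no clean answer"
    return "disclose" if external else "retain"

print(vep_decision([
    Assessment("SIGINT", "RCE", exploitable=True),
    Assessment("bug bounty", "DoS / info leak", exploitable=False),
]))  # -> conflict: no clean answer

The point of the sketch is that any single-output process hits the "conflict" branch as soon as internal and external knowledge about the same bug diverge.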

Outputs

This leads you into thinking: why is the only output available from the VEP disclosure to a vendor? Why are we not feeding things to NATO, and to our DHS anti-virus systems, and building our own patches for strategic deployment, and using 0days to test our own systems (aka the NSA Red Team)? There are a ton of situations where you would want to issue advisories to the public, or just for internal government use, or to lots of people who are not vendors.

During the VEP Conference you heard this as an undercurrent. People were almost baffled by how useless it was to just give bugs to vendors, since that didn't seem to reduce systemic security risk nearly enough. But that was the only option they had thought of? It seems short-sighted.




Thursday, December 20, 2018

VEP: Handling This Patch's Meta Changes

We may be about to enter the Healer meta in Cyber War for Season 3. What does that mean for you?

The Meta Change


Google caught an 0day this week, and Microsoft issued an out-of-band patch. Kaspersky has been catching and killing 0days recently as well, at a much higher pace than you would expect, something over one a month. If that doesn't surprise you, it's probably because you're accustomed to seeing defensive companies catch only implants or lateral movement and then helplessly shrug their shoulders when asked how the hackers got in.

But we're about to enter a healer meta. In Overwatch, that is when the value of defensive abilities overwhelms the damage you can do over time. So your offense is forced into one of two options:

  • High Burst Damage Heroes (such as snipers, Pharah, ultimate-economy playstyles, etc.)
  • Diving healers
In cyber war, what this means is that when effective defensive instrumentation gets widely deployed (and used), attackers are forced to do one of three things:
  • Use worms and other automated techniques that execute your mission faster than any possible response (burst damage)
  • Operate at layers that are not instrumented (i.e., lower into firmware, or higher into app-level attacks)
  • Modify the security layers themselves (i.e., you dive the healer by injecting into the AV and filtering what it sends)

VEP

There were two arguments at the Carnegie conference around what you should do in a VEP during a healer meta. One was that since you knew bugs were going to get caught, you should make sure you killed them as soon as possible after use (which reduces the threat to yourself from your own bugs). The other was that you were going to need a huge stockpile of bugs and should stick to killing only the bugs you knew for sure HAD gotten caught, and only if your attribution had already been broken by your adversary.

These are very different approaches and have vastly different strategic needs.

Of course, the third take is to avoid the problem of 0days getting caught by focusing on the basics of countering the meta, as outlined above. But a nation-state can (and will) also use other arms to restrict the spread of defensive technology or erode its effectiveness (e.g., by attacking defensive-technology supply chains, or by using export control to create delineations between HAVES and HAVE-NOTS).