Tuesday, February 5, 2019

Iconoclastic Doctrine

I gave a talk at a private intelligence and law enforcement function a few months ago, and I wanted to write up part of it for the blog. It went as follows:


Hacking, and hence cyber war, is in many ways the discipline of controlled iconoclasm. There are really two philosophies for mature hackers. There are people who want to be far ahead of the world, who prove how smart they are by finding bugs and using them in unexpected ways: essentially characters who put all their stats into vulnerability development. Then there are operators, who can take literally any security weakness, no matter how small, and walk it all the way to domain admin. The generational split here, for 90s hackers, used to be cross-site scripting, which was viewed largely like a vegan burger at a Texas Bar-B-Q.

There are other generational splits that never happened for hackers, but which I see affecting cyber policy. When you read about hackers in the news, you often hear not that they are 400-pound bedridden Russophiles, but that they are transgender:


If you're not part of the hacker community (say, if you do government policy for a living), it's easy to miss how much higher the percentage of transgender people is at all levels of the information security world than in the outside world. Hacking requires the mental courage to go your own direction and the clarity to examine society's norms and discard the ones that don't work for you. If you're looking for cyber talent, you're better off with an ad on Fictionmania than on the Super Bowl broadcast.

Even in religious countries, the majority of the offensive cyber force is atheistic. And support for the transgender community in the fifth domain is a generation beyond what it is in the other four.

In summary, assuming that your goal is to "ensure US/Allied freedom of action in cyberspace and deny the same to our adversaries," the transgender ban is like an NBA team banning tall people because they wear larger clothes. It's strategically stupid, and it spills over into the IC and the entire military-industrial construct. It has national-security-level implications, and not good ones.


Project Raven

It's impossible to miss the reporting on Project Raven that came out yesterday, even though on one hand I feel like we've already heard a bit about the UAE's efforts from other reports. I have to admit, I never judge people from media reports. A friend of mine says that you're not really serving your country until your name gets dragged through the mud in the NYT.

Of course, one thing that strikes you as you read these reports is that their significance in the media is somewhat belied by the fact that the whole effort appears to fit INTO A VILLA.

I find this imbalance holds in a lot of areas of computer security. The number of hours it takes to implement an export control on some item is ten times larger than the number of hours it takes to rebuild that same item overseas in a non-controlled country. This kind of wacky ratio isn't true for, say, stealth technology or things that go boom really big.

If you watch the following talk from 2012, you'll see I make a prediction, and it is this: I think non-state actors are where the game is. I think that is the big, underlying change that we are trying desperately to ignore.

[Embedded video: the 2012 talk]

Friday, January 25, 2019

There is no Escalatory Ladder in the Matrix



I'm still reading Bytes, Bombs and Spies, specifically the chapter where they define "OCO" to mean "Offensive Cyber Operations" and then riff on how command and control and escalatory ladders would exist in wartime, who would authorize what, how doctrine would evolve, etc.

I know the book is meant, at some level, to show that policy can be strategically thought out in the unclassified space, but the deeper I get into it, the less I feel it makes that point. What does a "constrained regional scope" mean in cyberspace? (Note: it means nothing.)

If anything, what this book points out is how little value you can get from traditional political-science terms and concepts. An escalatory ladder makes little sense in a domain where a half-decade of battlefield preparation and pre-placement is required for attacks, where attacks have a nebulous connection to effect, where deniability is a dominant characteristic, where intelligence gathering and kinetic effect require the same access, and where emergent behavior during offensive operations happens far beyond human reaction time.


Wednesday, January 23, 2019

Bytes Bombs and Spies

A shibboleth for whether you should be doing cyber policy work. :)

"ISR has to be anticipatory, comprehensive, and real time." And because of that, I'm reading the new compendium of essays edited by Herb Lin and Amy Zegart. The book is dense and long, so I'm not done with it, but so far I get a very different feeling from it than the one I think they intended.

Start by listening to this podcast:
https://www.hoover.org/news/hoover-scholars-examine-cyber-warfare-new-book

In it, a few interesting questions come to light. For example, they say things like this:

  1. He does not know why, unlike with nuclear, the scientific community has not gone into policy with cyber (a sociology-of-knowledge problem)
  2. He does not think you need a "CS degree" to work on cyber policy (in comparison to nuclear work, which was somehow more technical)
Obviously I disagree with both of those things.

There are things you always hear in this kind of podcast:
  • A hazy enumeration of the ways the cyber domain is different
  • War versus not-war legal hairsplitting 
  • Either wishful thinking or doleful laments about cyber norms
  • Massive over-parsing of vague throwaway comments from government officials
For example, the Differences of the Cyber Domain (from this podcast):
  • Intangible - manipulates information
  • It's man made! Laws of physics don't constrain information-weapons so much as imagination.
  • Target dependence
  • The accumulation problem - lots of copies of the same malware don't help you. Exploits are time-limited in a way that is quite different from capabilities in the real world.
Ok, so some real talk here: the first thing a hacker learns is that code and data are the same thing, and both are just state space under the covers. This is why, when you build a fuzzer, you can measure "code coverage" while knowing you're only estimating what you REALLY want to explore, which is state-space coverage. It's why your exploits can use the thread environment block to store code, and why every language complex enough to be useful has an injection bugclass. I have a whole post coming soon about shellcode encoder-decoders to really drive the history of this thing home. ADMutate, anyone?
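
To make that concrete, here's a toy Python sketch, entirely my own illustration (not from the book or the podcast): the first function shows an injection bugclass as data crossing into the code plane, and the second shows why code coverage is only a proxy for the state space a fuzzer actually wants to explore.

    import random

    # 1. Code and data are the same thing: an injection bugclass is what
    # you get when "data" crosses over into the code plane.
    def calc(expr):
        return eval(expr)  # BUG: attacker-controlled data runs as code

    print(calc("1 + 2"))                      # intended use: arithmetic
    print(calc("__import__('os').getpid()")) # "data" that now executes

    # 2. Code coverage only estimates state-space coverage: the first
    # input gives check() 100% line coverage, but the fuzzer has then
    # explored exactly one of the 2**32 states that matter.
    def check(x):
        return (x * 2654435761) % 2**32 == 0xDEADBEEF

    hits = 0
    for _ in range(100000):  # a trivial random fuzzer
        if check(random.getrandbits(32)):
            hits += 1
    print("line coverage: 100%; states explored: 100000 of 2**32; hits:", hits)

Full line coverage on the first iteration, essentially zero state-space coverage after a hundred thousand more. That gap is the whole game.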

Anyways, once you understand that in your gut, the way a hacker does, the cyber domain is not at all confusing. It becomes predictable, which is another word for computable.

Deep down, the most ironic thing is Lin's statements about the different physics of the physical and cyber domains, because the theory of computation is ALSO the physics of quantum mechanics, which is what we learned when we were building nuclear bombs.


Friday, January 4, 2019

VEP: Centralized Control is Centralized Risk




When we talk about the strategic risks of a Vulnerability Equities Process, people get absorbed in the information security risks. But the risks of any centralized inter-agency control group also include:

  • Time delay on decision making. Operational tempo is a thing. And you can't use a capability and then kill it on purpose on a regular basis without massive impact to your opsec, although this point appears to be lost on a few former members of the VEP. 
    • For teams that need a fast operational tempo (and are capable of it), any time delay can be fatal. 
    • For teams that don't operate like that, you either have to invest in pre-positioning capability you are not going to use (which is quite expensive) or further delay integration until the decision about whether to use it has been made.
  • One-size-fits-all capability building. While there may be plenty of talented individuals in the VEP process, it is unlikely they are all subject matter experts at the size and scale a truly universal process would need. E.g., the SIGINT usefulness of a SAP XSS may be ... very high for some specialized team.
  • Having multiple arms allows for simpler decision making by each arm, similar to the way an octopus thinks. 
  • Static processes are unlikely to work for the future. Even without enshrining a VEP in law, any bureaucratic engine has massive momentum. A buggy system stays buggy forever, just like a legacy font-rendering library. 
Centralization may not even result in different decisions than a distributed system would produce. For example: any bug bought or found by a SIGINT team is likely to be useful for SIGINT, and retained. Otherwise your SIGINT team is wasting its time and money, right?

Likewise, any bug found externally, say through a government bug bounty, is likely to be disclosed.

Here's a puzzle for you: what happens if your SIGINT team finds a bug in IIS, an RCE that is hard to exploit, and they work on it for a while and produce something super useful. But then, a bit later, that same bug comes in through the bug bounty program DHS has set up, but as an almost useless DoS or information leak? How do you handle the disparity between what YOU know about a bug (its exploitability, for example) and what the public knows?

Outputs

This leads you to a question: why is disclosure to a vendor the only output available from the VEP? Why are we not feeding things to NATO, and to our DHS anti-virus systems, and building our own patches for strategic deployment, and using 0days for testing our own systems (aka NSA Red Team)? There are a ton of situations where you would want to issue advisories to the public, or just for internal government use, or to lots of people who are not vendors.

During the VEP conference you heard this as an undercurrent. People were almost baffled by how useless it was to just give bugs to vendors, since that didn't seem to improve systemic security risks nearly enough. But that was the only option they had thought of? It seems short-sighted.




Thursday, December 20, 2018

VEP: Handling This Patch's Meta Changes

We may be about to enter the Healer meta in Cyber War for Season 3. What does that mean for you?

The Meta Change


Google caught an 0day this week, and Microsoft issued an out-of-band patch. Kaspersky has been catching and killing 0day recently as well, at a much higher pace than you would expect, something over one a month. If that doesn't surprise you, it's probably because you're accustomed to seeing defensive companies catch only implants or lateral movement and then helplessly shrug their shoulders when asked how the hackers got in.

But we're about to enter a healer-meta. In Overwatch, that is when the value of defensive abilities overwhelms the damage you can do over time. So your offense is forced into one of two options:

  • High Burst Damage Heroes (such as snipers, Pharah, ultimate-economy playstyles, etc.)
  • Diving healers
In cyber war, what this means is that when effective defensive instrumentation gets widely deployed (and used), attackers are forced to do one of three things:
  • Use worms and other automated techniques that execute your mission faster than any possible response (burst damage)
  • Operate at layers that are not instrumented (aka, lower into firmware, higher into app-level attacks)
  • Modify the security layers themselves (aka, you dive the healer by injecting into the AV and filtering what it sends)

VEP

At the Carnegie conference there were two arguments about what you should do in a VEP during a Healer Meta. One was that since you knew bugs were going to get caught, you should make sure you killed them as soon as possible after use (which reduces the threat to yourself from your own bugs). The other was that you were going to need a huge stockpile of bugs, and that you should stick to killing only bugs that you knew for sure HAD gotten caught, and only if your attribution was already broken by your adversary.

These are very different approaches and have vastly different strategic needs.

Of course, the third take is to avoid the problem of 0day getting caught altogether by focusing on the basics of countering the meta as outlined above. But a nation-state can (and will) also use other arms to restrict the spread of defensive technology or erode its effectiveness (e.g., by attacking defensive technology supply chains, or by using export control to create delineations between HAVES and HAVE-NOTS).






Wednesday, December 19, 2018

The Brussels Winter VEP Conference


So recently I went to a conference on vulnerability equities held under the Chatham House Rule, which means I can't say WHO SAID anything or who was there. But they did publish an agenda, so if you've been following the VEP discussion, your best guess is probably right.

Anyways, here (in click-bait format like Jenny Nicholson) are my top three things that are literally and mathematically irrational about the VEP, as informed by the discussion at the conference:

1. A lot of the questions you are supposed to answer in order to make the VEP decision are 100% unknowable.

Questions typically include:
  • How many targets use a particular software?
  • How many friendly people use a software platform?
  • Will the Chinese find this bug easily or not?
  • etc.

Some panel members thought a partial solution might be for every technology company to give all their customer survey information to the government, which could help answer questions like "Do we need to protect, or hack, more of the people who are vulnerable to this bug?" This is a bad idea, and you could sense the people in the room laughing internally at it, although it is already partially the goal of Export Control regulations.

Needless to say, if you are making your decisions based on a bunch of questions you have NO ANSWERS TO, you are making RANDOM decisions. And some of the questions are obviously unknowable because they involve the future. For example, the answer to "Do our opponents use the latest version of Weblogic?" is always "not at the moment, but the future is an unknown quantum interplay between dark energy and dark matter that may decide whether the universe continues to expand and also whether the system administrator in Tehran upgrades to something vulnerable to this particular deserialization issue!" An even better example is the question "How hard is this bug for the Chinese to find?", because if you KNEW WHAT BUGS THE CHINESE COULD FIND IN THE FUTURE, you would not be worrying about cyber war problems so much as how to deal with the crippling level of depression that comes with having a brain the size of a planet.

Although, ironically, the VEP will tell the Chinese how hard it is for US to find particular bugclasses, so we have THAT going for us at least.


2. Voting does not resolve equities issues. One of the panelists suggested that if you take every bug and rank its usefulness from 1 to 10, and then take its negative impact and rank that from 1 to 10, you can draw a nice diagram like the one below.

Then (they posit) you can just look at the equities decisions you've made, draw a simple line with some sort of slope between the yeas and the nays, and you've "made progress" (tm).

[Diagram: bug usefulness vs. negative impact, with a line dividing retain from disclose]

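For concreteness, here is a minimal sketch of that proposal in Python, with invented bugs and invented scores (none of this came from the panel): rank usefulness and impact, then let a line with some slope make the retain-or-disclose call.

    # A toy model of the panelist's proposal, with made-up numbers.
    # Each bug gets (SIGINT usefulness, negative impact), both 1 to 10.
    bugs = {
        "weblogic-deserialization": (9, 4),
        "sap-xss": (6, 2),
        "iis-rce": (8, 9),
        "router-dos": (2, 7),
    }

    def decide(usefulness, impact, slope=1.0):
        # "Draw a simple line with some sort of slope": retain anything
        # above the line, disclose anything below it.
        return "retain" if usefulness > slope * impact else "disclose"

    for name, (u, i) in bugs.items():
        print(f"{name:26} usefulness={u} impact={i} -> {decide(u, i)}")

It runs, it draws a line, it makes decisions. The problem is the inputs: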
Except that in reality, every number on the graph is somewhere on the axis between "would stop World War III if we could use it for SIGINT" and "would end all commerce over the Internet as we know it, resulting in the second Great Depression." I.e., every number is zero, infinity, or both zero AND infinity at the same time, using a set of irrational numbers that can only be graphed on the side of a twelve-dimensional Klein bottle. Voting among stakeholders does not solve this fundamental unit-comparison issue, to say the least.

What if a bug has no use, but the bugclass it belongs to is one you rely on for other ops? The complications are literally an endless Talmudic whirlpool into the abyss.

For example, I am continually mystified by certain high-level officials' misunderstanding of the basics of OPSEC when you give a bug out. They seem to think that you can USE a bug operationally before you put it through the VEP, then decide to kill it, and not suffer huge OPSEC risks (including to attribution). They often justify this with the idea that "sometimes bugs get caught in the wild or die by themselves," which is TRUE. In that sense, yes, every operational use of an exploit is an equities decision, one that you take for OPSEC reasons. This is why GOOD OPERATORS use one whole toolchain per target if possible. And if you think that's overkill, then maybe you've underestimated the difficulty of your future target set.

Also note that no one in government policy wants to use this process to measure the impact of the VEP over time, although I'm not sure what units you would measure your operational loss in, other than human lives. Likewise, there's only one output from the VEP, "give bug to vendor," as opposed to a multi-output system including "write and publish our own patch," which seems like a better choice if you want options for when you disagree with a vendor's triage or timeline.

3. No government in Europe is dumb enough in this geopolitical environment to do the VEP for real. It may happen that every Western government signs or sets up some document that assigns a ton of unanswerable rote paperwork per bug to their already small technical and cleared teams, if for no other reason than that Microsoft and Mozilla and the Software Alliance all have legitimate soft power that can influence public policy. I mention them in particular because they funded this conference, and following the money is a thing I once heard about. As a positive bonus note: VEPs are great cover for killing OTHER people's bugs once you catch them in the wild.

But the EU technical teams were also there at the conference, along with the government policy people responsible for getting their cyber war game from D-level to A-level. You can imagine the post-Snowden meetings all across Europe, in rooms with no electronic devices, where elected officials looked at their teams and said "What exactly do they mean, 'SSL Added and Removed Here'?!? We need to 'Get Gud', as the teens are saying. Pronto."

Does anyone realistically think that they're going to hamstring themselves? Because I talked to them there and I'm pretty sure they're not going to. (insert SHRUG emoji!)



And here's the actual strategic implication that they know, but don't want to say: your best people will leave if you implement the VEP seriously. There are those Sardaukar for whom it is not about money, who are with you for life, as long as you have a mutual understanding that their work is on mission, all warheads in foreheads. And to them, the VEP is anathema.

And then there are people out for fame and money, and those people are going to get stolen by a random company anyway, because why would they ever stay and be a glorified bug bounty hunter?

I mean, every country is different. It's possible I'm misjudging cultures and talent pools. Or not. But if you are running a country's VEP program, you have to be pretty confident that I'm wrong about that to move forward. This is the kind of thing you'd want to start asking about in your exit interviews.

Oh, and as a final note: One of the submitted talks to INFILTRATE required an equities decision. Cool 0day, very old, and you should come and see the talk even though we haven't officially announced it yet. :)