Thursday, February 28, 2019

RAND Paper: Qualitatively

A new RAND paper came out this morning at 0-dark-thirty on the benefits and issues of private-sector attribution of malicious cyber activity for the USG, which is a weird way to frame the topic. Read it here. A few things stuck out at me when I read the paper, the first of which is that the conclusion says exactly nothing, which is not a good sign. To quote it in full:

Conclusion
After analysis and reflection, we believe that the private sector provides valuable capabilities that augment and support USG interests regarding investigation and attribution of malicious cyber activity. The capabilities and reach of the private sector is obviously strong and broad, and it offers additional information and insights that can bolster existing USG capabilities to detect and manage nation-state and criminal threats. 
Specifically, there are opportunities for increased collaboration between public and private sector that can (and should) leverage personal relationships between former colleagues. And there may be more opportunities for more formal, structured, or frequent interactions. However, as was mentioned during our interviews, a collaboration that is too close or structured could well backfire. And so careful and thoughtful, but deliberate interactions will likely produce the best results for detecting and managing malicious cyber activity directed toward U.S. persons and businesses. 
Here is an alternative conclusion: Google, FireEye, and CrowdStrike are both more trusted and better at cyber-domain attribution than the US Government ever will be. It is almost certain that the future of this space for the USG is to feed information to private companies and let them do the heavy lifting on both the attribution and deterrence sides.

The other major issue with the paper is the methodology of asking fifteen experts what they think, writing it down, and then attempting to draw metrics out of it, as detailed on page 15.

Expert Interviews
In order to better understand the significance that the growing capability of private sector attribution may have for the USG, we performed qualitative research by interviewing 15 senior subject matter experts to explore 4 main topics
I love that phrase "qualitative research" because THAT'S NOT A THING. I don't understand how anyone designs a paper like that in 2019. Half the paper discusses what those 15 SSMEs said, which might have made an interesting Washington Post article with unnamed sources, but it is not a whitepaper.

Part of the issue when looking at attribution is that trust is often about personal reputations more than institutional reputations. This is why nobody cares what any particular government agency says (especially today), but if Rob Joyce or Alex Stamos puts his name on something, people pay attention. And it's not especially relevant when a government issues an attribution note if that information doesn't change anything, except for a cyber insurance company that wasn't going to pay out for an incident anyway, or as a speaking indictment for a group of contractors in Russia who now just can't come to DEF CON.

Wednesday, February 27, 2019

AI and Reasoning about a new Meta

Both regulatory action (including export controls) and efforts to promote activity in AI have been hilarious jokes. Part of this is the nature of the problem: AI and ML are a general set of techniques that became possible when we had enough of both compute and data in one place.

Regulating the data that feeds into AI makes more sense at first than trying to regulate TensorFlow chips, which are at best a cost-savings mechanism; or the ML algorithms, since we honestly don't know how they work anyway; or even the final product, which can be trivially reproduced by anyone with the data.

But regulating "large useful databases" runs into a number of other problems. How much data is "too much"? Do you only regulate labeled databases? What if someone exports and then COMBINES two databases? What if someone exports the labels separately from the databases?
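The combination problem is easy to demonstrate. Below is a toy sketch in which two exports, each falling under a hypothetical size threshold, are joined on a shared record ID to reconstitute exactly the labeled database a per-export rule was meant to control. Every field name and number here is invented for illustration:

```python
# Hypothetical rule: exports under this row count are "unregulated".
THRESHOLD = 1000

# Export 1: features only, no labels -- under the threshold.
features = {i: {"age": 20 + i % 50, "zip": 90000 + i % 100} for i in range(900)}

# Export 2: labels only, keyed by the same record IDs -- also under the threshold.
labels = {i: {"diagnosis": "positive" if i % 7 == 0 else "negative"}
          for i in range(900)}

assert len(features) < THRESHOLD and len(labels) < THRESHOLD

# One dictionary join later, the "regulated" labeled database exists again.
combined = {i: {**features[i], **labels[i]} for i in features if i in labels}
print(len(combined), combined[0])
```

The point of the sketch is that the join is a one-liner: any rule scoped to individual exports has to reason about what an adversary can reassemble afterward.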

The larger question then becomes: Can we build a stable human society that does not depend on controlling the spread of technology? 

Or even an easier, but still frustrating question: Is the shape of the way human societies work predictable, based on what we know of new technology? 

Every so often they introduce a new character into Overwatch, or manipulate the values of various characters' skills. Professional teams then analyze the changes and try to figure out how to gain non-obvious advantages. The resulting "working set" of characters is called "the Meta". But why can't this be calculated ahead of time? Why did it take some random team named "GOATS" to realize you don't need any DPS characters at all to be unstoppable?


This problem has been gnawing at me for a while. It seems easy at first, but then it requires a model complex enough that you're afraid you're just going to Monte Carlo it, which feels like failure. It's possible there's a cleverer AI-based solution.
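The brute-force approach can be sketched in a few lines: a toy Monte Carlo over team compositions, where the roles, stats, and "synergy" rule are all invented stand-ins for a real game model, not actual Overwatch values:

```python
import itertools
import random

# Hypothetical role values and roster counts -- not real game data.
ROLES = {"tank": 1.0, "dps": 1.2, "support": 0.9}
ROSTER = [("tank", 3), ("dps", 4), ("support", 3)]  # (role, copies available)

def comps(team_size=3):
    """Enumerate every distinct role composition of a given team size."""
    chars = [role for role, n in ROSTER for _ in range(n)]
    return set(itertools.combinations(sorted(chars), team_size))

def fight(a, b, rng):
    """One noisy simulated fight; returns True if team a wins."""
    def power(team):
        base = sum(ROLES[c] for c in team)
        # Assumed synergy rule: stacked duplicate roles add a flat bonus each.
        synergy = sum(0.3 for i, c in enumerate(team) if c in team[:i])
        return base + synergy + rng.gauss(0, 0.5)
    return power(a) > power(b)

def find_meta(trials=2000, seed=0):
    """Pit random compositions against each other; return the top winner."""
    rng = random.Random(seed)
    all_comps = list(comps())
    wins = {c: 0 for c in all_comps}
    for _ in range(trials):
        a, b = rng.sample(all_comps, 2)
        wins[a if fight(list(a), list(b), rng) else b] += 1
    return max(wins, key=wins.get)

print(find_meta())
```

Even this toy version shows why the approach feels like failure: the answer is only as good as the hand-written payoff model, and a real game's payoff surface is exactly the thing you don't know how to write down.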

I fall into the crowd of people who think AI is the solution to everything - if for no other reason than that the more you look at science itself, the more you think we will need an artificial brain to help us make forward progress. These days you have to micro-specialize in any field. But a Scientist-AI would not have to, and I feel like that's the race every country SHOULD be in, as opposed to trying to figure out how to integrate AI into battle planning or specialized sensor networks. In that sense, the Meta leans toward teams who have an AI to help them figure out the Meta...

Tuesday, February 26, 2019

International Humanitarian Cyber Law

CYBER FIRST AID FOR CYBER INTERNATIONAL LAW ANALYSIS

So Friday I attended a meeting with the Red Cross in DC aimed at discussing the issues around extending the tenets of international humanitarian law (IHL) to the cyber domain. I'm going to assume it was under a ruleset that prevents me from naming names or specific positions. But I wanted to write down some takeaways to help future analysis of proposals to interpret IHL in particular ways. As one lawyer said, "If you give me the choice of writing the law or interpreting the law, I take interpreting ten out of ten times."

It was pointed out that we do, as a country, take IHL into account when planning cyber ops, as we do with operations in any domain. But that doesn't mean there's nearly as cut-and-dried a port of the existing concepts as many international humanitarian lawyers want.

You'll find yourself, as you think about this stuff, asking many questions, for example:

  • Can we create a standard interpretation of the law that works on both easy targets and hard targets? The difference is often that with an easy target, you have direct control of your implants and your operation, and the effect you are trying to have may also be very direct and simple. With a more complex target, you may be launching a worm into a system and hoping for an effect, and your targeting may have to be more diffuse to give you any hope at all.
  • Can we create a useful interpretation of IHL when none of the terms are defined when it comes to cyber? If North Korea takes out Sony Pictures Entertainment, what is the proportional response in the cyber domain? What does it mean to be indiscriminate in the cyber domain?
  • Is data a "Civilian Object"? And of course, if it is, does that mean governments can't use cloud hosting shared with civilians? The implications are an endless fractal of regulatory pain.
  • Is OPSEC or IHL driving any particular policy? Since everything is handled covertly in this space, it's impossible to tell whether we refrained from an action for OPSEC or IHL reasons. And if there is a collision, does OPSEC win? Because hiding from attribution may require not looking like your op was written by a legal team in Brussels... This is a particularly hard problem because in other domains we tend to have a massive advantage, but in cyber we are essentially facing peers, and we may not have the overwhelming force necessary to take all the precautions the IHL lawyers would prefer.
  • How do you apply this in a space that has extremely fuzzy attribution at best? (Attendees wanted to basically live on the hopes that attribution would become a mostly solved problem - which I personally found hilarious)
  • How well does this all work when no two states agree on anything in the space? Obviously meetings like this are an attempt to get some level of agreement between parties, but it may not be possible to get agreements that any large set of states agree on.
Many lawyers in this space, even the aggressively pro-IHL ones, assume that "access" and "ops" are quite different, and most say that all access is fine: access whatever you want, as long as you don't turn it off on purpose. I find this interesting, given that access really does carry risk. It's impossible to test your router trojans perfectly, and sometimes they take whole countries offline by mistake.

So much of whether something is deemed to violate IHL is about the "intent" of the attacker, which is typically going to be completely opaque. Various lawyers will also point out that different aspects of the law apply depending on whether an armed conflict is already happening, which I frankly think is avoiding the issues at stake and hoping to come to an agreement on principles nobody really feels they will ever have to put into practice.

Activists (and citizens) probably have a different opinion on access vs. ops, at least based on the reaction to Snowden. They might claim that even accessing their naked pictures violates, if not IHL, then some universal right to privacy. But this starts reaching into the issues we have with espionage law, how we define domestic traffic, and a whole host of other things that are equally unsettled and unsettling.

Monday, February 18, 2019

Review: Bytes, Bombs, and Spies

It should have been titled Bytes, Bombes, and Spies. A lost opportunity for historical puns!


In my opinion, this book proved the exact opposite of its thesis, and because of that it predicts, like the groundhog, another 20 years of cyber winter. I say that without yet having mentioned the book's overall thesis, which is that it's possible to think intelligently about cyber policy without a computer science degree or a clearance - that is, that it's possible to carry the strategic policy frameworks we derived for the Cold War into a global war of disintermediation. You can hence judge this book on the coherence of its responses to the questions it manages to pose.

It's no mistake that the best chapter in the book, David Aucsmith's dissection of the entire landscape, is also its most radical. Everything is broken, he explains, and we might have to reset our entire understanding to begin to fix it. You can read his thoughts online here.

Westphalia is no longer the strongest force, perhaps.


Jason Healey also did some great work in his chapter, if for no other reason than he delved into his own disillusionment more than usual.



Yeah, about that...

But those sorts of highlights are rare (in cyber policy writing in general but also in this book). 

Read any Richard Haass article, or his book, and you will see personified the dead philosophy of the Cold War reaching up from its well-deserved grave: stability at any cost, at the price of justice or truth or innovation. At the cost of anything and everything. This is the old mind-killer speaking - the dread of species-ending nuclear annihilation.

What that generation of policy thinkers fears more than anything is destabilization. And that filters into this book as well.

Is stability the same as control?

Every policy thinker in the space now recognizes, if only to bemoan them, the vast differences between the old way and the new:

The domain built out of exceptions...
But then many of the chapters fade into incoherence.

This is just bad.

Are we making massive policy differentiations based on the supposed intent of code again? Yes, yes we are. Pages 180 and 270 of the book disagree even on the larger strategic intent of one of the most important historical cyber attacks, Shamoon, which is alternately described as a response to a wiper attack and as retaliation for Stuxnet. Both cannot be correct, and it's weird the editors didn't catch this.

What are your rules of engagement if not code running at wire speed, perhaps in unknowable ways, the way AI is wont to do? And even if it's not AI, can you truly understand the emergent systems that are your codebase, or are you just fooling yourself?

There are bad notes in the book as well: every chapter that goes over the imagined order of operations of an offensive cyber operation, and which US military units would do what, has a short shelf life - although this is possibly the only book where you'll currently find that kind of analysis.

But any time you see this definition of cyber weapons, you are reading nonsense, of the exact type that indicates the authors should go get that computer science degree they assume isn't needed, or at least start writing as if their audience has one:

Why do people use this completely broken mental model?

Likewise, one chapter focused entirely on how bad people felt when surveyed about their theoretical feelings about a cyber attack. Surveys are a bad way to do science in general; the entire social-science club has moved on from them and started treating humans like the great apes we are.

That chapter does have one of my favorite bits, though, when it examines how out of sorts the Tallinn Manual is:

"Our whole process is wrong but ... whatevs!"

So here's the question for people who've also read the whole book: Did we move forward in any unit larger than a Planck length? And if not, what would it take to get us some forward motion? 



Tuesday, February 5, 2019

Iconoclastic Doctrine

I gave a talk at a private intelligence and law enforcement function a few months ago, and I wanted to write up part of it for the blog. It went as follows:


Hacking, and hence cyber war, is in many ways the discipline of controlled iconoclasm. There are really two philosophies among mature hackers. First, there are people who want to be far ahead of the world, who prove how smart they are by finding bugs and using them in unexpected ways - essentially characters who put all their stats into vulnerability development. Then there are operators, who can take literally any security weakness, no matter how small, and walk it all the way to domain admin. The generational split here, for '90s hackers, used to be cross-site scripting, which was viewed largely like a vegan burger at a Texas Bar-B-Q.

There are other generational splits that never happened for hackers, but which I see affecting cyber policy. When you read about hackers in the news, you often hear not that they are 400-pound bedridden Russophiles, but that they are transgender:


If you're not part of the hacker community - say, if you do government policy for a living - it's easy to miss how much higher the percentage of transgender people is at all levels of the information security world than in the outside world. Hacking requires the mental courage to go your own direction and the clarity to examine society's norms and discard the ones that don't work for you. If you're looking for cyber talent, you're better off with an ad on Fictionmania than on the Super Bowl broadcast.

Even in religious countries, the majority of the offensive cyber force is atheistic. And support for the transgender community in the fifth domain is a generation beyond what it is in the other four.

In summary, assuming your goal is to "ensure US/Allied freedom of action in cyberspace and deny the same to our adversaries," the transgender ban is like an NBA team banning tall people because they wear larger clothes. It's strategically stupid, and it spills over into the IC and the entire military-industrial construct. It has national-security-level implications, and not good ones.


Project the Raven

It's impossible to miss the reporting on Project Raven that came out yesterday, even though on one hand I feel like we've already heard a bit about the UAE's efforts from other reports. I have to admit, I never judge people based on media reports. A friend of mine says you're not really serving your country until your name gets dragged through the mud in the NYT.

Of course, one thing that strikes you as you read these reports is that their significance in the media is somewhat belied by the fact that the whole effort appears to fit INTO A VILLA.

I find this imbalance holds in a lot of areas of computer security. The number of hours it takes to implement an export control on some item is ten times larger than the number of hours it takes to rebuild that whole item overseas in a non-controlled country. This kind of wacky ratio isn't true for, say, stealth technology or things that go boom really big.

If you watch the following talk, from 2012, you'll see me make a prediction, and it is this: I think non-state actors are where the game is. I think that is the big, underlying change we are trying desperately to ignore.