Wednesday, September 28, 2016

Some old training materials on anti-attribution :)

The thing here is the top page: what information is visible OBLIQUELY, versus information your adversary has to see at the time of attack, versus information left behind on a target. The oblique information is the ... fun part.

As a concrete example: Some teams are good at exploiting race conditions in the Windows kernel - and that information filters through a country's various teams slowly as people leave and join different companies and agencies. But even if the entire toolchain is completely new and unknown, if I see that they got in via a Windows kernel race condition very early on, I can assess which team that came from. Anyways, the slides above are from the part of the class that taught the operational security value of not exposing that data, or at least knowing when it was exposed.

Does that make sense? I feel like the best way to learn all of this stuff if you're a policymaker is from reading Cryptonomicon.

Or read these two pages from DKM's The Last Dancer:

You should read that book anyways.

Friday, September 23, 2016

Holes in the Math

When I worked with the NSA, I worked with mathematicians more than anything else. This prezi, from 2011, is available here.

Not to drill holes in a dead horse, but the VEP was never meant to be real. It was always a mirage meant to assuage big domestic software companies. You can tell this by what is missing from it: Bugclasses and Math.

The NSA is the powerhouse it is precisely because it has a wide aperture on the capabilities it uses against a laser-focused mission. That, and sixty years of hiring the best mathematicians in the country and putting them to work on advancing a private library of mathematical tools for attacking problems in the signals and computing areas. 

A bugclass is something that is hard to define, but it's very important in these policy debates. The quintessential easy-to-understand bugclass is the format string vulnerability. In short, the standard library everyone uses in C used to support string arguments that looked like this:

printf("Hello, %s! I am happy to meet you!",name);

The first argument in that call is known as the "format string", as it takes commands like %s, meaning "read a string off the next argument". However, it also took the command %n, which meant "write the number of characters printed so far through a pointer taken from the next argument". You could then, as an attacker, look for places in any program where you controlled what a format string would say, and put %n's into it, to either crash the program or control variables on the stack, leading to control of the program.

These are called "Format String bugs" and were easy to find, and easy to attack, and easy to remove. Now they are almost non-existent in real-world programs. 

But look at the VEP. What does it mean if I tell you "There is a class of vulnerabilities where you can put %n's into format strings and own programs with it"? That's not a "vulnerability". It's a class of vulnerabilities. This distinction is super important.

Many bugs are exploited not because they are useful individually, but because they are examples of bugclasses that MAY be useful in the future. In this sense they are part of a spectrum, at one end of which is the uber-expensive world-owning mathematical advancements that only the NSA can do. 

In other words, the VEP is a calculated sham because if it were there to do strategic work, it would have to recognize the full spectrum of offensive activity. It would be less about vulnerabilities, and more about Math, or even bugclasses. But those never even come up in discussion. They aren't relevant to a PR conversation.

Thursday, September 22, 2016

The Information Singularity of Vulnerability Discovery Collisions

Corrections of corrections of corrections.

So I want to point out a great example of how not understanding the technology - in fact, not having a deep background in the technology - can make it impossible to do valuable analysis on a policy problem.

If you've read Mailyn's posts, which are a response to Matt Tait and me when we talked about the VEP being a pure PR exercise, you'll note she repeats a common claim that vulnerabilities we are using are often found by our adversaries and then used against US interests.

In her first post, she claimed MS08-067 was an example of this, but failed to realize, for lack of a technology background, that Stuxnet only included it after it was made public. In her second post, she claimed the LNK bug in Stuxnet was a clear example of this, being found used in the wild by AV company VirusBlokAda. The problem with her "corrected" analysis is that what VirusBlokAda found was...Stuxnet itself! In other words, she makes the ultimate circular argument about vulnerability discovery collisions without even realizing it. Her post has since been edited to claim that LNK makes her argument in some unknown way that is unclear and unsupported (and untrue, in my opinion).

It's not just Mailyn. The Schwartz and Knake paper does the same thing: assumes vulnerability collisions are a known common effect in our world. But the truth is no doubt infinitely more complex. And without a deep understanding of the technology, it's impossible to talk about the policy issues around vulnerabilities.

This is the difference between Access Now-style activism, where things are true if you WANT them to be true, and science-based policy, which requires understanding the subject matter at hand.

Below is another example, from Cyber War vs Cyber Realities, a book out of Oxford University Press (not from Oxford University itself) that I'm peer-reviewing right now.

What these types of authors would love to say is "We are policy people, not technologists". Which is fine, but in this case, we are doing highly technical policy work, where knowing Stuxnet when you see it is important. Knowing what parts of your policies are more complex than a simple statement is how you get to good policy, and without that, we are lost in this bizarre information singularity.

Tuesday, September 20, 2016

The Chinese Get Real

I want to point out that rising Chinese cyber power is necessarily going to allow them to make concessions, and even reach agreement on acceptable behavior, with the United States, and that in the medium term, China will be an ally of the United States with regards to cyber issues, as opposed to the dire adversary it is currently perceived as.

As a bit of color, the Stern Stewart Summit I attended last week also had a Chinese attendee. Of course, at that high financial level what the Chinese are interested in is ZTE and Huawei, which are essentially blacklisted from the American and many allied markets. As one attendee put it: There are "PR problems", and "problem problems". Huawei and ZTE have a "problem problem". 

In a sense, they are casualties of the old cold war in cyber between the US and China. This was defined by a more aggressive, but more primitive and expansive Chinese effort, and a more subtle but more advanced American effort.

But now, things have changed. The Chinese have reached a technological and capability tipping point and are now putting out top-notch public cyber security results, and therefore feel more confident about giving up ubiquitous presence on US networks in exchange for normalizing relations. As another signature: conferences. It is no accident Qihoo360 bought SyScan, a top-notch technical conference in information security, and is now growing it domestically and internationally. (As opposed to the much more cloistered XCon.)

Keen Security Lab and the Qihoo360 Marvel Team are top-notch teams, working in the open like a normal western security research team would. This is a huge and new show of confidence by the Chinese.

A little while ago FireEye posted a graph listing their detection over time of Chinese hackers inside US commercial systems. I annotated it a few times with some ideas of what it could represent.

You can see deterrent actions and the subsequent reductions if you squint right. The horizontal lines delimit the two types of cyber espionage China has been conducting, with strategic espionage (the kind the US does as well) being the floor of activity.

That's one mental model which can help you understand the US-China cyber rapprochement on the economic espionage issue, which the US finds extremely important. The other issue of course is the ongoing ban of ZTE and Huawei, which I think the Chinese thought they could simply avoid. Quotes from the Huawei CEO have indicated he did not think it would have an impact, and yet Huawei and ZTE are nowhere to be seen in the US market, and US partners of theirs are considered laughingstocks at sales conferences.

Here's another possibility for that graph though that just takes into account skillset increases by the Chinese team, something I think is easy to forget about:
Because it was inevitable that the previous model of wide economic espionage was going to get them caught, the C team had outlived its usefulness, and its mission was closed.

To sum up: You don't have to hack EVERYTHING if you can hack ANYTHING, and the Chinese are showing signs that they've moved to that level. This allows them to make alliance with the United States on issues of mutual importance in the cyber arena. 

Monday, September 19, 2016

The Stern Stewart Summit, Germany, And Beds.

I have only one real beef with Europe, and that is I don't understand how an advanced economic bloc could fail to make bedsheets that are queen-sized. They seem to prefer shoving two small beds together and just calling it a day.

That said, I participated in the annual Stern Stewart Summit last week, held at Schloss Elmau near Munich. The Summit started off as a collection of CFOs interested in purely financial matters, and continues this tradition but is now broadened to include the CEOs and high-level executives of almost every big German company. It is also attended by a variety of political figures, including the American Ambassador to Germany and, in this case, Eric Cantor (former Majority Leader of the US House), former Canadian P.M. Joe Clark, a high-level team from Jordan, one Chinese representative, and many others.

It is an intimate event, of about 250 people, but cleverly arranged so you end up meeting and knowing everyone.

This is the kind of event that most people, other than me, wore suits to the lounges in, just in case they saw someone they knew. I was the youngest person there by almost a decade.
The Summit is run under the Chatham House Rule and therefore I did not take pictures and cannot quote people. But let me say a few things came up repeatedly, which I will list in an unordered way below:

  • Pessimism about the overall state of the EU
    • Can a monetary union exist without a unified fiscal policy? (People did not think so.)
    • The pressures of low growth and increased immigration/terrorism risks are overwhelmingly impactful
    • No end to war and unrest is in sight, and no policies to correct these things are available.
  • Generational gaps in the workforce are causing problems
  • Adapting to Industry 4.0 and technological changes is a struggle, particularly as the EU's tradition of slow change conflicts with the new business models (Uber, BnB, electric cars) that threaten it

I gave a modified version of this:


Version 2

Let me start by saying that for my entire adult life I’ve been studying how to break into computer networks and systems. Luckily for me I did this during the growth and now omnipresence of the Internet, first at the NSA and then in private industry. What that means, realistically, is I’ve broken into almost everything, like a lot of people who came of age during that time. The company I help run, Immunity, based in the blistering tropical crossroads of Miami, now secures large banks and manufacturing organizations.

I have fifteen minutes and I did have a very corporate, very boring speech for you set up where I talked about current day risks to your companies and some strategic things you can do to maybe help. But on the flight over, I reconsidered. What I really want to do is tell you about a threat you haven’t yet seen coming.

Like anyone who started their career in intelligence, I find reading news reports stultifying because I believe nothing in them, but I want to pick some pieces out to illustrate a trend. The first one is the meaning of Stuxnet, which you have all read about in the Economist or perhaps Wired magazine, especially those of you who work at Siemens. Despite the muddled messages coming out of NATO, cyber war is, in my opinion, very real, and while other wars were all about mechanized destruction, cyber war is all about mechanized covertness. Our confusion about Stuxnet is a clear example of the dangers there.

And because of that covertness, the messages that normally would function as deterrence and capability announcement are muddled and our global policy on cyber security is muddled as a result. As Thomas Dullien, the famous German cyber strategist now working for Google points out: The US is keen on using new technological fronts in war, such as drones or cyber, and then screaming for international norms the minute someone else catches up.

So back to Stuxnet, or OLYMPIC GAMES, or whatever you want to call it - it was not just a cyber effort against Iran’s nuclear capability but the announcement of a team. A rather huge team that has been playing World Cup-level soccer on the cyber battlefield for a decade and a half. It is in that sense a formidable and hungry teenager - perhaps one that I helped give birth to.

When I was a teenager I was in Fairfax, Virginia, a few minutes from Washington DC, and when I go back now I don’t even recognize it. The past fifteen years have seen it boom with giant crystalline structures - massive glass houses for rootkit writers and exploit developers - hackers, in all but name. The Iraq and Afghanistan wars were also cyber wars in ways that are just beginning to come out.

Not incidentally, the same thing is true when you go to Beijing and visit their Center for Internet Security. You can look across the table at the Chinese hacker team there and see in their eyes that they’ve hacked everything. It’s a weird thing, that look in their eyes. Knowing the world’s secrets because you’ve had to manually pull them out of mail spools for hours at a time.

I’ve seen it in Chinese, Russian, German, Italian, French, and American hackers. You can’t throw a rock from my childhood house without hitting someone who specializes in some kind of router exploitation technique. Look, here’s the really scary part of the story: The world is now full of hackers. The last fifteen years have created a cadre on all sides. We have poured our money into it in a way that we didn’t pour our money into advanced nuclear reactors. We live in a world that is safer in some ways because of it, and much more dangerous in other ways. You have been colonized with a crew that may or may not share your values.

Even before we get into the minutia of how you can protect yourself, I have to redefine something in your heads: The very nature of a cyber weapon. Because hackers don’t think of them the way you hear about them in the news. Imagine, if you will, that cyberspace was a terrain, and that weapons were anything that could change the surface structure of that terrain.

When the details of NSA’s QUANTUM were released by Edward Snowden, people focused on the exploitation - the breaking into of German phones is sexy. But the shaping of the flow of data is the key to QUANTUM and the most beautiful part. The art of cyber war is about controlling and understanding information paths. Or to paraphrase our initial speaker paraphrasing Napoleon: Cyber war is about controlling your opponent’s internal chaos by using their computers against them.

Ironically, the most common kind of cyber weapon is the ability to disseminate information. The Pirate Bay, for one, which now even has a political party attached. Wikileaks, for another. It’s no accident that both were started by elite but independent hacker teams. Think Guccifer and the Russian Business Network as other examples.

The other definition that I hope to change in your mind today is that of a computer.  In particular there is an anecdote that Thomas Watson, president of IBM in 1943, said there might be a world market for maybe five computers. This is one of those statements that is tritely amusing if you look at your desktops and mobile phones.

But to a hacker, he was not wrong in his assessment of the market! Right now, in fact, there are probably fewer computers. We know, because we give all the real computers human names. You may have heard of them: Azure, Alexa, Google, Siri, and possibly an NSA computer in that giant plant in Utah. Isn’t it an interesting accident that all the real computers are American?

What I’m saying is: If you cannot seamlessly scale your computation, along with everything that implies in terms of redundancy, accounting, data transfers, parallelism APIs, and storage management - then you have a pocket calculator good for games and trivia and pictures of cats, not a computer.

If you don’t have a computer, it is much harder to break into networks, for technical reasons that are beyond the scope of this talk. It’s also harder to protect privacy when all your information is stored overseas, and so I see a frustration in Europe at the power of the multinationals that own real computers - a frustration which I think has less to do with privacy, perhaps, and more to do with a concrete sense of a loss of national power in a new domain.

If you read Sebastian Dullien, a German economist who is among other things a Senior Policy Fellow at the European Council on Foreign Relations, and you SHOULD be reading him, then you know he has written about how to create European champions in the digital space, as he calls it. In other words, a European Google or Facebook or Apple. How does one create the kinds of investments that would make them? Because without them - without any native European computers by a hacker’s definition, large-scale computing minds - European companies are dependent on research done everywhere else to secure themselves.

It is easy to misread Dullien’s work and other economists’ work as a call for European protectionism - to use EU data protection standards as an area-denial weapon against US Internet companies, which is exactly how they are viewed in the States. But it is also a warning: if protectionism doesn’t work, then EU companies will be largely left undefended.
To draw a painful analogy: the immune-system response to almost every security problem is “segmentation” - think firewalls and export control. But a better answer is extremely close cooperation, which echoes our other conversations here at the Summit.

Let me tell you this: the current-day threat is real. Right now, I’m 100% sure there are North Korean hackers inside German banks, trying to leverage their access for massive wire transfer fraud. Russians are preparing the battlefield as well. I know they’re doing this, because that’s what I would do. German intellectual property is being stolen by the Chinese the same way it was from American companies: on a massive, industry-ending scale, but dealt with, as I’ve heard from members of the Summit, with resignation.

In the future we will have to deal with the fact that we’ve trained thousands and thousands of people in the dark arts of cyber war, with the skills to infiltrate any network and sometimes even the desire. But right now, you need to take the crufty twenty-year-old technology installed everywhere that runs your businesses and find a way to just make it to tomorrow without bleeding out.

And putting my commercial hat on, let me say that this is not a problem you can just throw money at and get clear results. Cyber security is a community of snake oil driven by marketing and slick sales. The only way forward is as if you were at the top of the Alps - every step must be tested, and you are roped to the people next to you so they can catch you if you fall.

To put this into practical terms: When you purchase security products, you must commit to testing that they perform their function as if you were the adversary! This means a focus from the beginning on learning and valuing the offensive side of information security, which acts as a guiding light to your defensive efforts. Resist the urge to demonize the hackers among you, or the tools they use. I have spent the last two years of my life arguing with European diplomats about whether penetration testing tools should be included in the Wassenaar arms control agreements, and let me say, that’s a backwards step into a crevasse.

And you must find a way to broadly share intelligence on threats you see with your peers in industry. This sounds easy. But revealing your threats requires coming to grips with the regulators who are going to see it as a weakness and want to penalize you, and it means your direct competition will get inside information on your operations.

When you’re doing serious mountaineering they say “There is no privacy at the top of a mountain”. And, based on some personal experience, they really really mean it. Today my goal is to help the business community recognize they are at the top of the mountain when it comes to information security, and they have a long, painful, march back down to base camp. And it’s in many ways just you - the Government is not going to come to your aid.

In conclusion - we have come an unbelievably long way in the past fifteen years, and we recognize that cyber security touches everything we do, usually in a pretty painful way. But I think it’s important to see that we’re all still new at it. We’re still feeling our way around, and building relationships and learning what works and what doesn’t the hard way. I look forward to chatting with you all, and again, thank you for having me.

Monday, September 12, 2016

An old dailydave post on cyber attribution, and some notes

What I wanted to call out today was how technology is customized by people who use it. So while everyone can run the same rootkit or exploits or tools/methodologies of any kind, they are almost certainly going to modify them over time. Hacking groups evolve over time like everything else, and you can do biology-like tree diagrams of how that happens. VxClass, from Halvar Flake, but now a Google tool, does this at scale on implants, but it's true for all parts of the technology domain we live in.

That time signature shows the movement of information through an organization and between organizations as clearly as DNA does. Currently I'm reading this new paper by Herb Lin and you can't see that inside the paper.

The original post from 2013 follows:

We had this whole section in the early Unethical Hacking classes where
we talked about attribution, and anti-attribution methodology. To
summarize it, we realized that there are some things that can be
trivially changed by an exploit team - obviously the strings inside the
trojans are the best example of these. Or the emails they register their
cover accounts with. These mean nothing.

But there is meta-data they cannot change easily. What follows we call
the tripod of cyber attribution:

1. Knowledge of particular vulnerabilities, exploits, or techniques.
This produces a "chain"-like time-based fingerprint that is extremely
difficult to spoof, since you would need to replicate the entire Chinese
technology tree to pretend to be Chinese. Simply stealing some exploits
won't do, because you'll never have an exploit or exploit technique
BEFORE they go public with it. And you can also add "time to mature and
deploy a technology" to your analysis, making it a very robust
indicator. This is also true of operator methodologies, analysis
techniques, and attack surfaces.

2. Targeting. This is hard to change because it results not from
technological restrictions, but from policy restrictions and turf wars.
If you're not allowed by the Politburo to steal Chinese data, then you
won't. Faking this is possible, but it's somewhat complex. This, of
course, is why it's also dangerous to do "collision prevention" on your
rootkits. If you never catch Rootkits A and Q on the same box, ever in
the history of time, then A and Q are from the same team (or allied teams).

3. Dissemination. It's hard to pretend to be Russian if the data you are
stealing from Dow Chemicals ends up in Chinese state-owned enterprise's
product lines. This is one reason economic espionage efforts are so
dangerous to groups trying to hide attribution.

In any case, completely extraneous to this topic: Lurene did a podcast
you should listen to in your car or whatever.
It's kind of like eavesdropping on two random people in a Starbucks in
DC who are talking about cyber - which .... is any two random people in
a Starbucks in DC, according to my sampling. :>


Monday, September 5, 2016

The Tech Does Not Support the VEP

Mailyn wrote a little rebuttal to Matt and my piece on the VEP. Here's the thing: I get told off quite a lot for asking policy people to understand the technology at a fairly deep level before trying to argue the merits of the VEP. And here's why:
From Mailyn's "rebuttal"...

I love this idea that you can just study policy at Harvard and understand this issue. But you can't. MS08-067 (the bug used in Conficker) appeared in version 1.001 of Stuxnet, which was compiled in 2009, after the bug was patched. Now, that's "as far as we know" - it's possible it was used for other things before it went into Stuxnet.

I know why she wrote that though - because there was a Spooler bug that was used in Stuxnet that was also made "public" by a Russian newspaper and nobody noticed. So it was technically not 0day, but not patched, a category of bug that proponents of the VEP would like to pretend does not exist.

Not only that, but Matt and I do not propose "Bulk Disclosure" and we do not claim that the US does not have an interest in a secure Internet for commerce - we simply claim that the VEP is a pure PR move that cannot hope to accomplish its stated goals and does great harm while doing so (and in addition is bad PR!).

Thursday, September 1, 2016

The high bug overlap race!

Assuming you have high bug overlap in a certain area (Windows Internet Explorer bugs, for example), what are the additional questions you want to ask to make policy around whether releasing vulnerabilities is of high enough value to consider?

  1. Does releasing this vulnerability give my adversaries a temporary but significant advantage? If I release a Windows vulnerability to Microsoft, does that information get handled in a secure way, or can it be exploited by a foreign service to attack American systems for the months before it gets turned into a patch and then deployed by American companies?
  2. Does releasing this vulnerability demonstrate a sensitive capability we need to keep secret, such as a new kind of bug-class analysis engine?
  3. Can your opponent recover their stock of vulnerabilities faster than you can get them patched, such that releasing the vulnerabilities would not have any positive effect? I.e., finding high overlap just means you've decided to RACE your opponent's bug-finding team. And you might not win.

Feel free to send me more. :)

Updates Below!