Sunday, February 28, 2016

A plausible platform for cyber norms

While we discuss "cyber norms" with other States from time to time, I think it is best to start byte-sized and build a platform for reciprocal trust that mirrors our technical capabilities.

For example, last year we had a problem where we accused Russia of a state-sponsored attack on JP Morgan. The United States defines financial utilities and companies as critical infrastructure, and it is easy to see how a simple malware incident can result in serious consequences. Likewise, we find ourselves trying to draw very subtle lines in the sand when it comes to penetrations of power plants and other utilities.

Watermarking implants can help solve these issues - in particular, the issue of not knowing whether an intrusion is the work of a known, responsible actor following accepted norms, or of a rogue nation or third party.

Watermarking does not have to solve the attribution problem - the watermarks can be shared marks that attribute an implant (or "trojan" in common parlance) to a group of nations: Russia, China, Israel, Germany, France, the 5Eyes, and so on. These nations can share a watermarking protocol, which would give them a technical platform for "Red Phone" activities, or for higher-level norms, including "off limits" targets or activities.

Take, for example, the 5Eyes penetration of Belgacom. If the Cyber Group has agreed that its members will not perpetrate credit card or financial fraud and will not conduct economic espionage, then Belgacom's exposure is much reduced when it discovers a trojan on its network that has been "signed" by a participating nation state.

This proposal increases everyone's safety, and a follow-on paper is potentially available for readers interested in the technical details of how to build signing protocols for watermarks that are shared, covert, and non-transferable.
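That follow-on paper is not reproduced here, but to make the idea concrete, here is a minimal, hypothetical sketch of the verification side of such a scheme - assuming nothing more than a pre-shared group key and an HMAC over the implant body (OpenSSL's libcrypto supplies the HMAC). A real protocol for covert, non-transferable watermarks would be considerably more involved; this only shows the check a defender holding the group key could run against a recovered implant.

/*
 * Hypothetical sketch only: a shared-key watermark check for implants.
 * The key, the names, and the layout are all invented for illustration;
 * nothing here is from a real deployment.
 *
 * Build (assumption): gcc watermark.c -lcrypto
 */
#include <stdio.h>
#include <openssl/crypto.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

#define MARK_LEN 32  /* SHA-256 output size */

/* Compute the watermark over the implant body with the shared group key. */
static void compute_mark(const unsigned char *key, size_t key_len,
                         const unsigned char *body, size_t body_len,
                         unsigned char mark[MARK_LEN])
{
    unsigned int out_len = MARK_LEN;
    HMAC(EVP_sha256(), key, (int)key_len, body, body_len, mark, &out_len);
}

/* A defender holding the group key can check whether a recovered implant
 * carries a valid mark, i.e. was planted by a participating state. */
static int mark_is_valid(const unsigned char *key, size_t key_len,
                         const unsigned char *body, size_t body_len,
                         const unsigned char claimed[MARK_LEN])
{
    unsigned char expected[MARK_LEN];
    compute_mark(key, key_len, body, body_len, expected);
    return CRYPTO_memcmp(expected, claimed, MARK_LEN) == 0;
}

int main(void)
{
    const unsigned char key[]  = "pre-shared group key (placeholder)";
    const unsigned char body[] = "implant bytes would go here";
    unsigned char mark[MARK_LEN];

    compute_mark(key, sizeof(key) - 1, body, sizeof(body) - 1, mark);
    printf("mark valid: %d\n",
           mark_is_valid(key, sizeof(key) - 1, body, sizeof(body) - 1, mark));
    return 0;
}

Note that a plain HMAC like this is neither covert nor non-transferable - anyone holding the group key could forge it - which is exactly why the real protocol work is the interesting part.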

(Think of this proposal as the opposite of the Tallinn Manual, which simply ports current laws of war, in one fell swoop, into "cyberized" versions, including such hilarious nonsense as banning cyber-booby-traps, whatever those are. :> )

A Brief Introduction to Ancient History for Policy People

The technical community is often amazed by how little the policy world knows about the history of software vulnerabilities. So I want to use this post to introduce a few members of the pre-historical world, much as a Disney movie introduces you to a large plant-eating reptile.

Both hackers and dinosaurs are equally adorable!

Let me put it bluntly: the people in these groups are now in places of influence in both Government and Industry all over the world. As in many fields, it takes decades to gain a deep understanding of the issues involved in information security. This post is designed to give policy people the context they need to understand historical tribal factions that remain important today. All of these groups were notable for performing security research at a level that eclipsed most nation-states of their time (and perhaps of today as well).

The CEO of Duo Security has an interesting take on the D&D alignment chart, which maps historical groups onto modern ones.

TESO


You may not recognize the pseudonyms "Halvar Flake" or "Zip" or "Stealth" or "Caddis". But I guarantee you would recognize their real names and the capabilities they have built in recent years, and those of us who were active in the '90s recognized TESO as the premium brand of quality exploits. In many ways, TESO and ADM changed public perceptions of exploits: they showed that exploits could be crafted with real science and artistic care, to a level of quality that went beyond mere "proof of concept" and was operationally useful in the wild.

TESO was largely a European group. But they were respected world-wide.

w00w00




Was Duke w00w00 or ADM?
People know a little about w00w00 because a few of its members did very well in the media space (the founder of Napster was a w00w00 hacker, and WhatsApp was written by one as well. Hacker billionaires!). But that undervalues the research and influence of the other members of this largely American-based group.

ADM 

This group's name is short for the French "Association De Malfaiteurs" - roughly, "association of mobsters". The group has French roots but, like all hacker groups, was international. ADM was also known for high-quality exploits in the "remote Unix hacking" arena.

ADM did a lot of research into early exploit automation (cf. ADMHack) - integrating many exploits into one package that made intelligent decisions as it tried to exploit a given network. They did one known defacement: of the DEFCON website.

-------------
ADMmountd.c
-------------

/*
 *
 *
 * Linux rpc.mountd 2.2beta29 exploit
 *
 * Coded by plaguez, Antilove, Mikasoft at the ADM Party (7/98)
 *
 * Credits:
 *    - DiGiT for finding the vulnerability
 * Compile: rpcgen mount.x ; gcc exmnt.c
 */

Are those names you know? They should be. Plaguez inspired some of my early shellcode, and ADM was another one of those teams (no picture is available) that were far ahead of their time. You would know the real names of these hackers if I mentioned them here, which I am not rude enough to do.


GOBBLES



"GOBBLES were auditing the Roxen webserver packages for holes that can be
used to comprimise servers so that GOBBLES could have the holes patched so
that no servers could be comprimised."

Enigmatically, GOBBLES was famous both for the sense of humor and deliberately broken English in their exploits - which were often aimed at targets chosen purely for comedic effect - and for poking fun at the developing security industry, its hypocrisy, and its lack of skill. Their most famous work was the exploit Apache Nosejob, which exploited a rather tricky overflow in the Apache web server on the "secure" OS OpenBSD, using a vulnerability previously declared unexploitable by the ISS X-Force researchers who discovered it.

They became famous via posts of humorous "Advisories" to the Bugtraq mailing list, but below is a video of their DEFCON talk (the famous "Wolves Among Us" talk), which added to their popularity by examining cultural issues in the security community itself. You'll also notice the famously tall Stephen Watt making an appearance.




l0pht Heavy Industries

This group produced now-Government executives and helped start the security consulting industry.

This sprawling Boston-based group is famous for many things, including the hackers who released l0phtcrack and Back Orifice 2000 (an early Windows RAT). They later sold themselves as a company to @stake (which I joined when I left the NSA), and they testified in front of Congress on cyber issues, highlighting the risks long before they were a political hot potato.

One odd fact: this crew also started the practice of issuing formal "Advisories" for security vulnerabilities - the very practice that GOBBLES was well known for making fun of.


Phenoelit


Since I am currently at a NATO workshop with a member of Phenoelit, talking about policy with Government officials, I cannot avoid pointing out that this German-based team still, in fact, exists and is doing good work in the space. They also run the ph-neutral hacking conference, which is unique in having no "talks". Their most famous member, "FX", is well known for doing router hacking before it was cool enough for Alex Wheeler to do. Router hacking is still important! Think of it as "Internet of Things" work, but before the marketing droids got their beady little eyes on it.

Phrack and Phrack High Council (PHC)/Project Mayhem

These two are very different but easy to confuse. Phrack magazine is a well-known research publication in the space, whereas PHC was known for hacking other hackers - especially "white hats" - and releasing their private information.

Conclusion

Not listed here are cDc, LSD.pl, SYNNERGY, 8lgm, and many others, each of which remains highly influential in the space. If you are annoyed that you are missing, please feel free to send me a paragraph.



Sunday, February 21, 2016

Cyberwar and Breaking the Fourth Wall

Immunity is a company, but corporate survival required that, long ago, we develop a brain trust to understand large, sweeping ideas about cyber war.

In particular, we attributed the Sony Pictures attack to North Korea very early on; we have a different understanding of what a cyber weapon is; and we see the current conflict between Apple and the FBI in a very different light than most people do.

When my friends from Apple ask me for the Immunity take on the lawsuit, the honest answer is that the lawsuit is a tiny part of an ongoing re-alignment between Governments and the tech industry we all rely on, much as the war in Iraq was a realignment of the balance between the Sunnis, Kurds, and Shia.

We see everything post-Snowden, including this lawsuit, as a failure of the US Government to understand how the Internet has changed national sovereignty, and as a complete failure to recognize that it is in the middle of an insurgency - one that requires counterinsurgency tactics, not simple legal efforts. This is not a popular position, needless to say.

Insurgencies are always a battle of ideas, would be our response.

And of course, the most famous insurgency of all time had a bit to do with the limits to search and seizure. But let's take a look at the optics for a sec:



The FBI's position on Apple is the most telling thing, because it is every government's position, in the sense that Apple's desire to protect their reputation isn't worth two cents of consideration by the FBI. As far as the FBI is concerned, a 1% chance of finding anything useful on that iPhone is worth a 99% chance of destroying Apple's reputation or international market position. That's not what a COIN scholar would call "Sharing the risk".

But not wrong. :)

Let me put it more broadly: the FBI doesn't think the tech industry's opinion matters, because the tech industry is part of a much larger population that the FBI represents. But if you took a national survey of Iraq as to how the country should be run, all the oil profits and national decisions would of course rest with the Shia in Baghdad. The FBI's lawsuit against Apple is exactly that national referendum. They may win it legally. They may win it in the court of public opinion. But they will have already lost it in the places that matter, precisely because the FBI doesn't think those places matter.

The CEO of Twitter thinks your strategy is bullshit, Jim Comey.
So even if they win a court case based on a law created before electricity, prosecuting it is a stupid, stupid idea by an FBI team that doesn't even realize what kind of war it is fighting. Has anyone asked them what happens if they win? What happens when every customer, instead of getting a U2 album on their phone, gets free end-to-end encryption that inter-operates with Google's Android and does voice as well? All Apple and Google have to do is put Signal at the top of both app stores and wait.

What the FBI needs to realize, at the top level, is that a slow, managed landing from the golden age of surveillance is preferable to a sudden, crashing exit in which you find yourself fighting a tech war you cannot win, against all the former government engineers you can no longer attract to work for you.

But it may be too late to do the smart thing. Our only hope of resolution might be to buy everyone at the FBI a copy of Snow Crash. Honestly, it boggles my mind when policy people haven't read that book. If you're reading this and you do policy work, spend the two hours it takes to look at Neal Stephenson's take on how this ends up, and tell me why he's wrong. :)



Tuesday, February 16, 2016

Why 0day is a nebulous concept, part 1!

There is an inherent problem with taking slang that comes out of the technical community and then attaching legal meanings to it. But of course, the legal profession is not without its hubris and thinks it can pretty much define anything. Sometimes they even try to redefine mathematical constants such as Pi.

In that way, law is not so much a science as engineering. So, not to pick on any particular lawyer, but I want to quote some brief Twitter exchanges to help illustrate the concept.


What I read from that exchange: my analysis of how OPSEC decisions are made is entirely aligned with everyone else's in the field, but we don't usually let lawyers in on it, because we would have to start the discussion from scratch. :)

I enjoyed her responses a lot more knowing she had no idea what my background was. Someday Susan and I will have a beer and a good laugh about it.

Let's talk about a better mental model for lawyers to use when they are talking about the wild and wonderful world of vulnerabilities! It may help them understand why the concept of "0day" is so slippery in real life - and why even "exploit" and "vulnerability" are slippery. (cf. this Phrack paper for some historical details on terminology dating to 2002, terminology that was already widely used within the world of security engineers.)


Here are some key concepts:

  1. Code flaws are often used to create multiple primitives, and multiple primitives are used to create exploit logic - you can combine them in lots of exciting ways, like when you create cookies. 
  2. 0day is a label built on assumptions about what other people do or don't know. It is a model of the mind, not a scientific principle you can hang regulation on.
  3. Exploit engineers don't generally use the term "payload" - and incident response people use it to mean "trojan stage" or "dropper", which is confusing.
So in this sense, when lawyers say they handle the term "vulnerability" just fine, never you mind, what they mean is "We don't know whether it means a code flaw, an exploit primitive, the use of that primitive in an exploit, or what." And when they say "0day" they are expecting you to be omniscient, which is optimistic at best.
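To make the first concept above concrete, here is a contrived, hypothetical C fragment (invented for illustration, not taken from any real product): a single code flaw - one missing bounds check - yields two different primitives, an out-of-bounds read (an information leak) and an out-of-bounds write, and the "exploit" is nothing more than the logic that combines them.

/* Contrived example: one code flaw, several primitives.
 * The flaw is a single missing bounds check on 'idx'; what the bug
 * "is" depends entirely on how an attacker chooses to use it. */
#include <stdio.h>
#include <stdint.h>

struct session {
    uint8_t scratch[64];
    uint8_t is_admin;   /* sits in memory right after 'scratch' */
};

/* Used one way, the flaw is a read primitive (information leak)... */
static uint8_t get_byte(struct session *s, int idx)
{
    return s->scratch[idx];            /* no bounds check: OOB read */
}

/* ...used another way, the very same flaw is a write primitive. */
static void set_byte(struct session *s, int idx, uint8_t v)
{
    s->scratch[idx] = v;               /* no bounds check: OOB write */
}

int main(void)
{
    struct session s = {0};

    /* "Exploit logic" is just the combination of primitives:
     * leak the neighboring byte, then overwrite it. */
    printf("leaked is_admin = %u\n", get_byte(&s, 64));
    set_byte(&s, 64, 1);
    printf("is_admin now    = %u\n", s.is_admin);
    return 0;
}

So "the vulnerability" is the missing check, the primitives are the read and the write, and the exploit is whatever logic an attacker wraps around them - three different things that the single word "0day" flattens together.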

Friday, February 12, 2016

The value of an 0day stockpile to the country versus the value of feeling self-righteous



I wanted to follow on from yesterday by discussing Susan Hennessey's post on the NSA, in which, like a storybook character, she "Speaks For The Trees". She's a former NSA lawyer and she quotes the current head of TAO, and I find both of those things funny, but there are some very clear misconceptions, in industry and in her post, that I want to clear up.

I am quoting from https://www.lawfareblog.com/good-defense-good-offense-nsa-myths-and-merger below:
Second, there is a mistaken belief that it is not possible to both disclose and exploit a discovered vulnerability. Rob Joyce, head of NSA’s Tailored Access Operations, recently noted that, contrary to popular belief, it is generally more productive for NSA to exploit known vulnerabilities than zero-days.
Rob Joyce and Susan Hennessey are both wrong, and if they disagree they are welcome to come to INFILTRATE to point out why :). While yes, you don't need 0days to hack, there is a clear OPSEC advantage to having them - and, once you have them, to not giving them up. Likewise, situations change, and should modern defenses live up to their promise, we will rue the day we decided to empty our "stockpiles" of vulnerabilities. Thirdly, it is obvious to the technical community (although not to lawyers and policy makers) that 0days are not a simple commodity like grain or oil: they are often highly correlated, composed of smaller parts and techniques, and uniquely non-fungible. Also, it is unproven in the public world whether our vulnerabilities have any significant overlap with Chinese and Russian stockpiles.

Based on all of these things, we should treat with caution any claim that having the NSA "lean towards defense" in its handling of 0days would be beneficial even in the slightest.

This camo does not protect me from being found by Russian network analysts, but it does get me dates!


It is obvious to any experienced "operator" (as someone who hacks things for a living is known) that even when a target is unpatched, using a known vulnerability risks an IDS or AV or other defensive mechanism SILENTLY detecting you and warning the target. The worst-case scenario is not being blocked. The worst-case scenario is being detected without knowing you have been detected!

A brief understanding of how operations and defense play together is important. This is not NSA-specific, but imagine you, as a nation-state attacker, use your shiny new IIS 0day against a Russian target. Russia keeps full packet logs of its entire country's network and has for many years. If you give that vulnerability to Microsoft, and it is fixed, Russia will then go backwards in time and look for all possible exploitation that would fit that pattern. Perhaps this is the goal of the next generation of EINSTEIN as well? ;>

Nation-grade hacking, the way the US does it, requires expensive implants, so if an implant (like FLAME) is discovered, not only will you lose access to that host, you may lose access to a thousand other hosts, and of course have to deploy an entirely new toolchain. You will, in a sense, have your entire toolchain rolled up in a destructive manner, similar to having one of your spies discovered, and their case officer found and silently tracked for several years until all your other spies are found.

Because of this, even when an 0day is no longer being used, nation-state hackers are loath to give it to a vendor for a fix, even if there may be some minor ancillary benefit. This point is, for whatever reason, completely lost in the policy dialog at the moment.

The Chinese often do the opposite, having made a different OPSEC calculation. They use cheap implants that are, for the most part, highly replaceable, and so when they realize they have been discovered, they release their exploits widely to avoid attribution. This is not the American way. We need the NSA to stockpile more 0day, not less, to accomplish our long-term strategic goals.


Thursday, February 11, 2016

0days


Via Nicholas Weaver

From his latest post on "Trust and the NSA Reorganization":
Put simply, a zero-day is just more powerful than an older exploit. When the offense team knows the value is about to rapidly diminish—and the time dimension means IA is more likely to bear a temporary risk—it's not difficult to imagine the efforts taken to exploit the vulnerability while it is still unpatchable. It is true that, in this scenario, the damage of early disclosure through offensive use is limited, because another attacker would need time to weaponize the exploit before a patch is released publicly, and there is little such an attack could do to change the patch schedule.
So many over-simplifications in one paragraph, and normally I wouldn't care, but people keep doing it and so I want to move us forward a bit. (Excuse the pun :>)


0days, like atoms, are not simplistic and contain many mysterious and fun moving parts!

For anyone who has lived with 0days their whole adult life, listening to lawyers pontificate about them is painfully awkward, like a modern physicist trying to discuss wave-particle interactions with a Middle Ages alchemist.

Clearly 0days are an intoxicant of the highest order, but I'd like to demonstrate some quick subtleties tied to their underlying wave-particle nature, which the simplistic views of them cannot capture.

Let's play, as Einstein did, a quick thought experiment that even lawyers can understand. :) I chose for this example the simplest thing I could imagine, but it still demonstrates the complexity of modern-day 0day physics.

You have a piece of code with a null pointer dereference in it. This code is in a library that handles images or some other common utility, and is widely shared.
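As a contrived, hypothetical stand-in for that kind of flaw (invented here for illustration, not any specific real bug), it might look something like this:

/* Hypothetical example of the flaw in question: a widely shared
 * image-handling routine that trusts its allocation to succeed. */
#include <stdlib.h>
#include <string.h>

struct image {
    unsigned char *pixels;
    size_t         len;
};

/* If malloc() fails - say, because 'len' is attacker-influenced and
 * enormous - 'pixels' is NULL, and the memcpy() below dereferences it.
 * Whether that is a harmless crash, a local privilege escalation, or
 * remote code execution depends entirely on where this library ends
 * up running, as the list below spells out. */
struct image *load_image(const unsigned char *data, size_t len)
{
    struct image *img = malloc(sizeof(*img));
    if (img == NULL)
        return NULL;
    img->pixels = malloc(len);        /* return value never checked */
    img->len    = len;
    memcpy(img->pixels, data, len);   /* possible NULL pointer dereference */
    return img;
}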

The following things are all true:

  • When in the Kernel, this is a local privilege escalation (with a high criticality!)
  • But in modern Windows kernels, this may be entirely mitigated to a local crash (or not, hard to know without close investigation by a super-expert)
  • In a remote service, this vulnerability can only cause a crash
  • Except on certain architectures that sometimes map things at very low addresses (MIPS, for example), in which case it can allow remote code execution (very highest criticality!)
  • In userspace, this null pointer dereference is usually just a crash of the lowest criticality
In the same way that particles can decay into many different other particles and energies, we can track how that vulnerability changes over time. The most simplistic, and completely wrong, view is "Windows of Vulnerability". This is the one lawyers and defenders often cling to, as they don't know any better.

Let's say, for example, Microsoft fixes the null pointer dereference in their kernel, but that code is shared and continues to exist in a media player that many people use on Linux. In addition, they fix it, not with a security advisory, but in a service pack, while continuing to maintain and patch systems running under the older service pack, which is in common use.

Is that vulnerability an 0day, because on systems running the old service pack, it continues to be of high criticality? 

What if Microsoft, instead of fixing the null pointer dereference itself, removes the code path that reaches it from userland? Is that vulnerability fixed, or is it still an 0day? 

What if they DO issue an advisory for it, but Linus Torvalds completely ignores it, and continues to ship mainline kernels with the buggy code, which are exploitable but only on certain Linux kernel configurations? Is that still an "0day" in your terms? Did the bug "die"? 

What if they fix it on all versions of Windows, and issue an advisory, but completely fail to properly patch it? So it is known but unknown? Or is that a new vulnerability spawned out of the destruction of the old one?


What if no patch is ever issued, and nobody ever fixes it, but the product goes out of maintenance and is replaced by other products?

What if only the NSA and the Russians and Chinese know about this null pointer dereference, is it still an 0day?

What if I told you that 0day-reality was more complex and interesting than it first appeared?



Until you have asked all these questions in all their forms - crashed thousands of vulnerability-particles together to understand their underlying nature - it is impossible to make informed decisions about what to do with them to protect yourself, or how to build cool nano-machines out of them, or even what words to use when talking about them. Basically, if you are still talking about archaic "windows of vulnerability" or "weaponization", you are wrong at the vocabulary and conceptual level, before you have even reached the policy decisions you're trying to offer.

This is the metric you can use to see where you're at: Do you know what a write4 primitive is? Can you tell me how you would transform that into an information leak primitive?
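For readers who want the flavor of the answer - a deliberately simplified, hypothetical sketch with invented names, not a recipe from any real exploit - one classic move is to aim the write at the length field of an object the target will later send back to you, so that the next reply spills adjacent memory:

/* Deliberately simplified illustration of turning a write4
 * (write-what-where) primitive into an information leak: corrupt the
 * length field of a buffer the target echoes back, so that the reply
 * spills the memory sitting after it. All names are invented. */
#include <stdio.h>
#include <stdint.h>

struct target_state {
    char     data[16];   /* what the service echoes back */
    uint32_t len;        /* how many bytes it echoes */
    char     secret[16]; /* adjacent memory we would like to read */
};

static struct target_state g = { "hello", 5, "SECRET-TOKEN" };

/* Stand-in for the write primitive the exploit already has. */
static void write4(uintptr_t where, uint32_t what)
{
    *(uint32_t *)where = what;
}

/* Stand-in for the target's normal behavior. */
static void send_reply(void)
{
    fwrite(g.data, 1, g.len, stdout);
    putchar('\n');
}

int main(void)
{
    send_reply();                         /* normal: prints "hello" */

    /* Aim the write at the length field rather than at code:
     * the next reply now returns the whole struct, secret included.
     * The write primitive has become a read primitive. */
    write4((uintptr_t)&g.len, sizeof(g));
    send_reply();
    return 0;
}

The details differ wildly across real targets, but that transformation - reshaping one primitive into another by picking the right thing to corrupt - is the day-to-day work of exploit engineering.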

The offensive community is happy to help, so come find us at INFILTRATE and we'll start the process. :)