Friday, May 26, 2017

Platform Security

COM SECURITY TALK from INFILTRATE 2017: https://vimeo.com/214856542

Ok, so I have a concept that I've tried to explain a bunch of times and
failed every time. It's this: not just codebases decompose, but whole
platforms do too. And when a platform cracks, everything built on it
has to be replaced from scratch. Immunity has already gone through our
data, like every other consulting company has, and found that following
an SDL process is ten times less of an indicator of future security than
the initial choice of platform to build a product on.

It's easier for people to understand the continual chain of
vulnerabilities as these discrete events. They look at the CyberUL work
and think they can assess software risk. But platform risk is harder.

Some signs of cracking are:

  * New bugclasses start to be found on a regular basis
  * Vulnerability criticality is regularly "catastrophic," as bugclasses
    that used to be low risk turn out to be extremely high risk when
    combined together
  * Remediations become much more difficult than "simply patch," and
    bugs are often marked "won't fix"
  * Knowing whether you are even vulnerable is sometimes too much work,
    even for experts
  * Mitigations at first seem useful, but then demonstrate that they do
    more harm than good

From an attacker's standpoint, being able to smell a broken platform is
like knowing where a dead whale is before anyone else - there is about
to be a feeding frenzy. Whole careers will live and die like brittle
stars upon the bloated decomposing underwater corpses of Java and .Net.
Microsoft Windows is the same thing. I want to point out that two years
ago, when Microsoft Research gave their talk at INFILTRATE, initially
nobody took any notice. But some of us pushed research into it, because
we knew it was about the cracking of an entire platform - probably the
most important platform in the world, Active Directory.

From a defensive standpoint, what I see is that people are in denial
that this process even exists. They think patching works. They want to believe.

From an architectural standpoint, Windows is only two things: COM and
the Win32 API. Forshaw has broken both of them, and not in ways that can
be fixed. What does that mean? Anyways, watch the video. :)

Thursday, May 25, 2017

The PATCH Act

The PATCH Act is well meaning, but it handles strategic security issues at the wrong scope, and without the information needed to solidify the US Government's response to longer-term systemic risks.

Specifically, we know the following things:
  • Patched vulnerabilities can still result in massive security events (such as Wannacry)
  • Vulnerabilities we know about are sometimes, but not often, independently discovered by our adversaries (per the RAND paper)
  • Exploits DO sometimes get caught (usually one at a time)
  • Exploits lately have been leaking (wholesale)
  • Understanding the risks or technical details of any one vulnerability is a massive undertaking
  • Exploits are composed of multiple vulnerabilities, each with their own complex story and background
  • Other governments are unlikely to give vulnerabilities to US companies through any similar system

We also know what we don’t know:
  • We don’t know which vulnerabilities we will need in the future
  • We don’t know what vulnerabilities our adversaries will find and use in the future
  • We often don’t know what mitigations will and won’t work in the real world (you would THINK patching would work, but Wannacry exists!)
  • We don't know how our supply chain will react to us giving vulnerabilities to vendors

The PATCH Act defines vulnerabilities quite broadly for this reason: we don’t know what types of things will have impact, or what we will need to react to, in the future. But that breadth is also a sign that we are not ready for a legislative solution.

Imagine setting up the exact system described in the Act, but only for Internet Explorer vulnerabilities. As you run this imaginary system through its paces, you immediately discover how hard it is to get any value out of it. That's not a good sign for a new law. Proponents of the PATCH Act say it is a "light touch," but anything that handles every vulnerability the United States government uses, from every possible dimension, is by definition a giant process. One, in this case, we don't know will be effective.

Another question is how we build a defensive whole-of-government framework - for example, should the head of the GSA be read in on our vulnerability knowledge (in aggregate, if not of individual vulnerabilities) so they can guide future purchasing decisions?

In order for our IC to continue in the field of computer exploitation, we will have to get some hold on wholesale leakers of our most sensitive technology. This does not mean “tracking down leakers” but building systems and processes resistant to leaking. It is about information segmentation and taking operators out of the system as much as possible.

This is true in all intelligence fields and may require re-engineering many of our internal processes. But assuming we can do that, and that efforts are already underway to do so, we still have to handle the fact that exploits get caught occasionally, that other people find and use exploits, and that even after a patch, we have complex strategic issues to deal with.


In that sense, having a vendor produce and distribute a patch is only part of the complete breakfast when it comes to our strategic security needs. It is less about “defense vs offense” and more about handling the complex situations that occur when using this kind of technology. We would be wise to build an emerging strategy around that understanding before any legislation like the PATCH Act forces us down a path.

Tuesday, May 23, 2017

Cover and Wannacry

I went to a dinner party once, not in the US, and ended up talking to a retired HUMINT official, also not from the US. I asked him some dumb questions, as Americans do. One of which was, "What's it like trying to make friends with people you hate?"

What he said was that there's always something you can find to really like about a person. You just dwell on that and the camaraderie is natural.

The other question I asked him was if it stressed him out, all the cover and hiding and stuff. And what he said was that after a while he never ever worried about hiding from the adversary. He only worried about getting back-stabbed by the incompetents on his own team. Generally, people who think they are helping, but instead are fucking you, and your whole team, over. This, to be fair, is how I think of all the well-meaning people trying to give vulnerabilities to Microsoft because they feel left out and they want to be part of the cool kids club.

But here's also the thing about cover: people are good at it. It's hard to admit, because there's a natural impulse to think that what you are catching is at least the B team. But maybe it's the D- team. Maybe there is an exponential scale of capability beyond the fishpond you know about and have listed on the Symantec pages. Maybe the part of the picture you see in the Symantec blog posts analyzing trojans by MD5 signature is missing crucial pieces of the puzzle.

So what I like to do to look at these things is have a larger strategic structure in mind, and then say "How does this fit into the REALM of possibilities", instead of "What does this lead to from the evidence as presented".

The realm of possibilities is quite interesting here. In addition to being a worm, Wannacry had a Tor C2 in it. And the reporting on Wannacry very much makes it seem like a disconnected event. But what if Wannacry is part of a fabric of attacks? What if the ransom money is meaningless - just something to hook onto for the press, so that the reporting isn't "North Korean worm targets everyone... for no apparent reason"? Because that headline would have meant everyone doing deep analysis. Nobody does deep analysis of what ransomware does, except to try to decrypt the data.

Sometimes you give a worm "something" to do that is not the main goal. People aren't really analyzing Wannacry for C2 operations that much - mostly they just remove it. In this way, a nation-state attack can be cloaked as simple crimeware that happens to be run by a nation-state.

And in the case of Wannacry, there are two goals, either of which might be the main goal if you put a real cyber warfare strategist in charge, which I assume they did:
1. Access to information and networks that are hard to reach
2. Testing self-replicating infrastructure and methodology

The main goal is not "make 100k" because this is a team which steals millions of dollars per op. It would have made MORE sense for them to have shared their kill-switch information with Huawei, Tencent and Qihoo360 first, or soon after launch. . . and I bet we find they tried to do just that.
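
The kill-switch itself was trivially cheap to act on, which is part of why quietly sharing it would have made sense. Here's a minimal sketch, in Python, of the widely reported logic; the domain is a placeholder, not the real one:

```python
import urllib.request

# Placeholder, NOT the actual hardcoded domain from the WannaCry binary.
KILL_SWITCH_URL = "http://killswitch-placeholder.invalid/"

def kill_switch_tripped() -> bool:
    """The widely reported check: one HTTP GET to a hardcoded, unregistered
    domain. If the request succeeded, the worm exited quietly."""
    try:
        urllib.request.urlopen(KILL_SWITCH_URL, timeout=5)
        return True    # domain answered: stand down
    except OSError:    # URLError subclasses OSError
        return False   # domain unreachable: the worm kept going

if __name__ == "__main__":
    print("kill switch tripped" if kill_switch_tripped() else "worm proceeds")
```

Registering one domain flips the behavior globally. That is exactly the kind of information you would quietly hand to Huawei, Tencent, and Qihoo360 first if coverage inside China were part of the plan.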


Monday, May 22, 2017

Hack back and the Yamamoto Chapter



So, I've tried my best to get the policy world to read Cryptonomicon, because it's important if you want to understand modern cyber policy in any sort of context. Weirdly, for an obviously over-educated crew that likes to read a lot, Cryptonomicon is not on the reading list.

But if you have time, just read this one short chapter: here.

What happens when you hang out with US spooks who don't know each other and Europeans at the same party is that you see an interesting feedback loop. Because US spooks have a natural tendency to play not just "stupid" but exactly half as smart as whoever they are talking to. This leads to a bemused European watching on as two US spooks each land on the lowest common denominator of explaining how they have actually never seen a computer in real life, and hope to one day meet a hacker who can tell them how this newfangled technology like "mice" works.

HOLD ME BACK BEFORE I HACK XNU MACH KERNELS IN MY RAGE


But if you are doing cyber policy work, you cannot help but notice there has been a truly odd number of papers essentially advocating hack-back coming from various arms of the policy world most connected with the "deeper state". I've listed a few recent links below.


In order to parse this properly - to "unpack" it, in the parlance of the policy world - you have to have hacked a few thousand networks personally, I think. And like any penetration testing company knows: Network Security is a rare thing. 

But it is exceptionally rare outside the United States. Here, we have West Coast charlatans selling us snake oil boxes and solutions which typically cause more problems than they help. But we've also invested heavily in education, and process. You have to LEARN how to have a penetration test. It has to hurt a bit the first few times. Developers hate it, managers hate the cost and delays. Network engineers hate designing around new nonsense requirements. 

Penetration testing, and security services in general are not an easy service to know you need, and know how to consume. You have to learn what code security looks like, and how to test your third party vendors, and, frankly, you have to learn how to give a shit to the point where you'll pay insane money for people to tell you that you suck in new and fascinating ways, without getting upset.

Most of the world doesn't want to go through this painful process. And in this case, I mean most of the developed world: Korea is still trying to get over how every banking app there uses ActiveX. Japan has a weird addiction to ignoring security while being technologically very advanced. China has a huge problem with pirated software and the great firewall of China. The Europeans wish they could regulate the whole hacking problem away. The Russians spend their time talking about kick-backs for recommending various security software, rather than penetration-testing results. 

In other words, their offensive teams are much more experienced than their defensive teams, and while this is changing (Qihoo360! Tencent!), it is still new. They haven't had time to make as many mistakes as the US has on defense. They haven't learned how to care as much.

There are spots of brightness everywhere - you'll find clued-up people doing their best to secure their enterprises in innovative ways all over the world. It's no accident that all of Europe was on chip-and-PIN ten years before Target got hacked.

What you really want is this map, but normalized by the number of Internet-connected Windows boxes, so you get a percentage mapping. The map would look even more extreme in that case. Also, only if it uses the correct Peters projection!

US Policy is to always say the following sentence until you believe it: "We are the most at risk nation for cyber attacks because we have adopted technology the most!" It's hilarious when people believe us.

Because if you've been in the biz, you know the truth, which is that overall, as Wannacry demonstrated (see above), there's a real security gap between nations. And I'd like to tie it together by pointing out that when US policy teams talk about hack-back, the not-so-subtle subtext is: "We are holding back. BlackHat alone had 9000 people at it last year. I swear to god, I could build a top-notch hacking team by going into any random Starbucks in Fairfax and yelling out loud 'I will give this hard-to-find legacy SPARC TADPOLE LAPTOP to the first person to write my name on Strana.ru's front page' without having to fill out Teaming Agreement paperwork."

BlackHat and RSA are a peacock's tail of beautiful, useless fitness-function announcements. No other country has anything like them in this space.

So when we talk about hack back, what we're saying is that we may very well build a working hack-back policy into our national strategy to combat what we consider unfair economic espionage. But we're also saying this: "Your companies are secured with raw hope and duct tape, and you know we have a colossally massive back-bench of people waiting to go active if we just give them a mission. We are playing pretty stupid and helpless, but... don't fuck with us."


Friday, May 19, 2017

the enemy gets a vote




The little known corollary to General (now Secretary) Mattis’s comment on war is that your supply chain also gets a vote. People look at the ShadowBrokers to Wannacry-worm unofficial "technology transfer program" and think it is the Vulnerability Equities worst case scenario. But it’s really not.

The worst case scenario is that an exploit leaks that is composed of GCHQ parts, with some NSA add-ons, some CIA add-ons, and a piece that you bought from a third-party vendor under a special license. I'm not going to get into the fact that exploits DO get caught sometimes, and probably more often now that breach detection software is getting popular. But let's just look at the proposed PATCH law and other proposals from the simplest angle.

Most of the proposals for how to re-organize the VEP assume you can browbeat your third-party vendors (and GCHQ, GCSB, etc.!) into accepting that, on your whim, you can send their vulnerabilities to a vendor for patching. This is simply not true - any more than the idea that you could GPL the Windows source code if you felt like it.

The thing is this: The exploit vendors also get a vote on these matters. And if you kill their bugs or exploit techniques or simply have bad OPSEC and get caught a lot they tend to vote by simply not selling you the good vulnerabilities. I cannot overstate how much we need our foreign second party partners in this space, and even more than that, how much we need our supply chain. Not only is the signals intelligence enabled through active network attack inescapably necessary for the safety of the country, but we are trying to build up CyberCom, enable Law Enforcement, and recover from the leaks and damage Snowden did.

In simple terms, yes, exploits save lives. They are not weapons, but they can be powerful tools. I have, and I cannot be more literal than this, seen it with my own eyes. You don't have to believe me.

Ironically, in order to determine which vulnerabilities present the most risk to us and just in general combat threats in cyberspace, we will probably have to hack into foreign services, which is going to require that we have even more capability in this space.

To sum up:
  • If you enforce sending vulnerabilities which are not public to vendors via a law, we will lose our best people from the NSA, and they will go work for private industry.
  • If we cannot protect our second party partner's technology they will stop giving it to us.
  • If we give bought bugs to vendors, they will stop selling them to us. Not just that one exploit vendor. Once the USG has a reputation for operating in this way, word will get out and the entire pipeline will dry up causing massive harm to our operational capability.
  • We need that technology because we do need to recover our capability in this space, for strategic reasons.
Of course, the general rule of thumb in intelligence operations is to protect your sources and methods at all costs. And that includes your exploit vendors. I know there are public talks out there that claim you can do network operations entirely without 0day - but the truth is much more complex, and our need for 0day is not going to be replaced by only using exploits which have patches available. Nor is it tenable to use exploits "just for a little while" and then give them to a vendor.

But there are better ideas than the VEP available. One idea is simply to fund a bug bounty out of the Commerce Department for things we find strategic (i.e., not just for vulnerabilities, which is something Microsoft and Apple should fund, but explicitly for exploits and toolkits other countries are using against us).

Likewise, the IC can be more open about what exploits we know get caught, and having custom-built mitigation expertise available ahead of time for corporations can limit the damage of a leak or an exploit getting caught, at the cost of attribution. This may include writing and distributing third party patches, IDS signatures, and implant removal tools.

And having sensors on as many networks as possible can help discover which of your vulnerabilities have been caught or stolen.

One interesting possibility, if we close off our exploit pipeline, is that we will instead be forced into wholesale outsourcing of operations themselves - something I think we should be careful about. Finally, before we codify the VEP into any sort of law, we should look for similar efforts from Russia and China to materialize out of the norms process, something we have not seen even a glimmer of yet.

-----
Layer Cake "Golden Rules" quotes for those without YouTube. :)
o Always work in a small team
o Keep a very low profile
o Only deal with people who come recommended
o Never be too greedy

Tuesday, May 9, 2017

Heritage Paper on Hack Back

https://www.lawfareblog.com/active-cyber-defense-aka-hackback

There's very little difference I can find between the Heritage Foundation paper from Paul Rosenzweig et al. and this CyberSecPolitics post. But I think it's a good idea, if for no other reason than to set up a functional capability that can scale, one that takes money from private industry to pay for the defense of private industry, while being managed in a granular way by the government.


Sunday, May 7, 2017

The Teams

One thing I learned from INFILTRATE 2017 is that while there are some new players on the field, the majority of the people are on teams with two-decade-old lineages. This has massive implications, which maybe I'll go into later.

But with that in mind, I will ignore Mark Dowd's hilarious self-introduction, and simply say this: There are a lot of people I would advise even a well resourced nation state not to mess with in the cyber domain. Dowd is on the top of that list. So watch the below video, when you get a chance.

Network Centric Warfare



I feel like people have a small view of Network Centric Warfare. I feel like part of this problem is the old-school thought-memes that are "command and control". Even in the Wassenaar debate you see this come up over and over again. They want to ban, not the trojans but the "Command and Control" that generates and controls those trojans.

This is because the people who write the regulations are the same people who wrote the Wikipedia page on network centric warfare. Because in their head, the Internet allows for FASTER command and control. "It's an evolution," they say. "It increases the SPEED of command."

This is entirely wrong. What the Internet did - what network centric warfare did - was change the very nature of how combat systems work. Instead of "command and control" you have something very different: you have networks of networks. You have "publish and subscribe," you have "broadcast," you have "direct to the front line from the back-end analysis systems." You have, in a word, emergent and reactive systems.

Ask yourself, what percentage of the sensors do I have to take over to BE the command and control system? What part of how I do C2 is an illusion?
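
If "publish and subscribe" sounds abstract, here's a toy sketch in Python (all names are invented) of why it inverts the old model. Nothing in it is a commander: any node that can publish on the topic the shooters subscribe to IS, functionally, the command system.

```python
from collections import defaultdict
from typing import Callable

# A toy publish/subscribe bus - the point, not the implementation, matters.
class Bus:
    def __init__(self) -> None:
        self.subs: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
        self.subs[topic].append(handler)

    def publish(self, topic: str, msg: str) -> None:
        for handler in self.subs[topic]:
            handler(msg)

bus = Bus()
# Shooters and analysts react directly to sensor data; no commander in the loop.
bus.subscribe("contact-reports", lambda m: print(f"fires cell reacts to: {m}"))
bus.subscribe("contact-reports", lambda m: print(f"analysis cell logs: {m}"))

# Whoever publishes here IS the C2 - including an adversary who owns the sensors.
bus.publish("contact-reports", "vehicle column at grid 1234")
```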

And nowhere is this more true than in the cyber domain. There are two schools of thought. One: "It's an evolution of what we have already, be it EW or Psyops or SIGINT or HUMINT or whatever." And the other: this is a revolution. It's as different as a leaf-cutting ant colony is from a centipede. It is night and day. It is the difference between H.R. Giger and John James Audubon. It is like 50 Shades of Grey vs. The Fault in Our Stars.

I honestly don't know any way to make this clearer and if you've read this blog (or ever dated me) you are probably sick of me bringing up ants all the time. But no analogy to network centric warfare is more direct than the one of how social insects work.

Wednesday, May 3, 2017

What you don't know can't hurt you.

Ranking Risk in Vulnerability Management

Prioritization gaps are a hard thing to explain, or maybe a very easy thing to explain but a hard thing to put a metric on. So let's start by telling the story of every enterprise vulnerability management organization everywhere: it has enough resources available to test and patch five vulnerabilities, and there are fifty advisories, all marked critical, on the systems it runs. So, as in the graphic above, you see people developing ranking systems which look at each vulnerability and ask, "Is this important enough that I have to spend time really working on it?" In some cases there IS no patch and you're creating a custom mitigation, but either way, the testing process is expensive.

Look at the factors they use and then think of the ways you can defeat those as an attacker, because any gap between what the attacker prioritizes and what the defender prioritizes is really its own vulnerability. And if I can PREDICT or CONTROL the factors a defender uses (say, by controlling public information), then I, as the attacker, can always win.

For example, if I attack QUICKLY then the vulnerability remediation prioritization tree is always off, because I will be done attacking before you have a chance to notice that something is under "active attack".
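
To make the gap concrete, here's a minimal sketch of the kind of ranking tree I mean. The fields and weights are invented for illustration, but notice that every input is public or attacker-predictable:

```python
# Hypothetical defender scoring tree; fields and weights are invented,
# but they mirror common prioritization checklists.
def defender_priority(vuln: dict) -> float:
    score = vuln["cvss"]                        # public information
    if vuln["exploit_public"]:                  # public information
        score += 3
    if vuln["under_active_attack"]:             # public, and always laggy
        score += 5
    if vuln["perceived_difficulty"] == "hard":  # public folklore
        score -= 2
    return score

# The attacker's side of the same ledger: pick a bug you specialize in
# (cheap for you) that the defender's tree scores low, and move before
# the "under_active_attack" bit ever flips.
def attacker_pick(vulns: list[dict], my_specialties: set[str]) -> dict:
    candidates = [v for v in vulns if v["bugclass"] in my_specialties]
    return min(candidates, key=defender_priority)
```

If I can predict that function, I can optimize against it - and attacking quickly, as above, is just making sure the biggest weight never fires in time.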


This should go without saying...
Likewise, there are exploits I can specialize in, which makes them "easy" for me even if publicly they are known to be "hard" - Windows kernel heap overflows, for example. I can invest in understanding a particular attack surface, which means I can apply, say, font parsing bugs to lateral movement problems, which you may not realize is even a possibility.

And of course, I can invest in understanding the patch system Microsoft has, and knowing which bugs they won't patch or which bugs they have messed up the patches for. 

The point here is that as an attacker I invest in understanding and attacking vulnerability management itself, as a process - both the global one, and the specific ones in use by my targets.

Wednesday, April 26, 2017

Technical Paper Review: Lighting the Dark Web

There was a bit of a 1-vs-1 between two law professors on the subject of NITs, which I think is worth a read. Ahmed Ghappour contends that the FBI's current legal practice around NITs is a thorn in the side of international legal norms, on the summary basis of "hacking random computers in other countries has issues." Orin Kerr, on the other hand, is like, "It's all cool, yo." Both papers are drafts.

There are several technical holes that need to be discussed. Although Ghappour's paper goes into depth on the details of what a NIT is, the reality is that "NIT" is not a technical specification but simply a description of the end user (aka Law Enforcement), and the definition is simply "malware that helps you investigate." In other words, legal analysis that assumes or presumes that a "NIT" is in some way a special technique, separate from intrusion techniques as they are used elsewhere, feels highly deceptive to a technical crowd.

Current known LE practice is to take over a web domain that is utilized exclusively by "bad people" and then attack those bad people from that trusted website with a client-side exploit, which then does some very simple things to locate them.

But building legal rules on this scenario is short-sighted because future scenarios are almost certain to be much more complex and require essentially the same level of toolkit and methodology that penetration testing or SIGINT attacks carried out by TAO have. For example, future targets could be using the Grugq's "Portal" firewall.

Likewise, a key issue is revealed in Orin's draft text:
In this particular context, it is doubtful that the rule expressed in the 1987 Restatement is viewed by states today as extending to the use of NITs. First, the 1987 rule is focused on conduct that occurs exclusively in a foreign state. Yet use of a NIT is not necessarily such conduct; it is entirely possible that the use of a NIT results in conduct solely within the territory of the state employing the NIT. To put it another way, application of the 1987 rule in the manner suggested by Ghappour results in a state being prohibited from using a NIT even to pursue criminal conduct in its own territory. The 1987 rule had no (and was intended to have no) such effect.
There are three ways to think about hosts whose location you do not know:

  • They default to somewhere within your borders.
  • They default to somewhere NOT within your borders.
  • They are, in fact, NEITHER within nor without your borders, but are
    handled in a special way, much as Microsoft and Google would prefer,
    because of rule 2.

From original paper by Ahmed Ghappour:
The legal process for network investigative techniques presumes search targets are territorially located, which is not at all accurate. Indeed, most potential targets on the dark web are outside the territorial United States.27 Approximately 80% of the computers on the dark web are located outside the United States.28
So as far as I can tell only in special circumstances should the default warrant process really be valid. Just because this results in a situation LE does not like, where Tor-services are not domestically warrantable under current legal frameworks, does not mean we should pretend this is not the case.

And of course, computer networks have many more complexities than are addressed. For example, what happens when your intrusion discovers that what you reasonably THOUGHT was a domestic computer is, in fact, a foreign computer? There are many ways this can happen: load balancers and various types of proxies can redirect connections from one computer to another transparently, for example.

Keep in mind that IP addresses are often ephemeral - the very process of uniquely identifying a machine is extremely difficult technically, especially with remote techniques (speaking from the experience of anyone who has built a large-scale scanner).

Orin's paper talks about attacks (CNA):
To be sure, the FBI’s existing hacking techniques, properly executed, do not rise to the level of a cyber “armed attack,” which would permit a state to respond with force under Article 51 of the U.N. Charter.43
While an inadvertent "computer attack," meaning "damage or destruction," is unlikely under current methodologies, it is nonetheless technically possible, and it becomes more likely in the future as techniques become necessarily more invasive. Collateral damage is a very real threat - there were a lot of legitimate customers on MegaUpload, for example. There is real risk here of international incident. Orin's paper currently states "There is no sign that the USG or the American public was offended by the foreign search," but there are easy ways to see scenarios where this would not be the case.

For example, BDSM is illegal in the UK but very legal in the States. Should the UK's Law Enforcement officers execute a UK NIT warrant collecting the list of Fetlife.com users to search for UK citizens, we can see immediate conflict with American perspectives.

The Playpen story in Orin's paper, where we discovered via a NIT that the server was in Iceland, then collected it with an MLAT, is instructive. What's our plan when we discover the server is in Iran? Likewise, we had already conducted a search of the Icelandic server BEFORE we knew it was in Iceland, where we had a good legal relationship.

Orin's paper continues:
"But he does not point to any instances in which the ongoing practice actually caused the reaction he fears"

  1. Friction may be covert and handled quietly. Not seeing the friction does not mean there isn't any.
  2. We may self-limit to child porn and mega-crime for a reason. What about when it's not that? What about the norms we set?


As another side note about how hard this is getting to be in practice, check out what happens when Law Enforcement asks an unwilling party to help them with an investigation, as seen today on the Tor mailing list:
https://lists.torproject.org/pipermail/tor-relays/2017-April/012217.html

That response shows the built-in gravitational forces that are going to require that Law Enforcement step up to the level of the NSA's or CIA's teams.

Lastly, Orin's paper has a VERY strange ending:
NITs are very expensive to develop and require a great deal of technical sophistication to use. Drafting an NIT warrant requires considerable legal sophistication and the evaluation of significant legal ambiguities. Use of NITs may lead to disclosure of their details in subsequent litigation, potentially depriving the government of future access to computers by using that same vulnerability. 
So far the Government has successfully prevented any disclosure of the vulnerabilities used (and here, of course, we have a built-in confusion of "vulnerability" and "malware/implant" with the term "NIT"). Likewise, there's no technical reason the FBI cannot scale to the level of the NSA, given sufficient funding. Orin seems to be implying there's an operational security issue here, when it's really a resources issue. The FBI COULD, in theory, use a new exploit and implant and toolchain for every single investigation. This is, in fact, the most secure way to do this kind of work.

Keep in mind, Law Enforcement, especially local Law Enforcement, often leaks the things they find out in order to pressure people, even people who are not suspects. For example, here is an article about them leaking the porn browsing history of a non-suspect.

Thursday, April 20, 2017

Making a NEW and NextGen AntiVirus Company out of DHS instead of an old and busted one


So I have yet another exciting policy proposal, based on how the USG can't trust any software vendor's remediation process to be beyond the control of the FSB. :)

You can see in the DHS a tiny shadow of an anti-virus company. EINSTEIN and Threat Intelligence and incident response, and managed penetration testing - the whole works. But we're kinda doing it without realizing what we're building. And why not develop real next-gen infosec companies instead?

In fact, the way using secret USG information would work best is if we could use it ALL AT ONCE. Instead of publishing reports and giving the Russians time to upgrade all their trojans as various companies react at different times, we could FLASH UNINSTALL every variant of a single Russian trojan, as if we were FireEye, on any company that opts in to our system.

Also, why should we rely on Microsoft's patches when we can, as soon as we need to, make our own USG-developed patches with something like 0patch.com? Not doing this seems like being horribly unprepared for real-world events like leaks, no?

Why can't I sign up to the DHS "behavioral analysis" AI endpoint protection for my company, which has a neural network trained not just on open-source malware, but on the latest captured Russian trojans? 

Think Next Gen people! :)

Alternative Theories

Fact 1: ShadowBrokers release was either "Old-Day" or "Patched"
Fact 2: Microsoft PR claims no individual or organization told them (found them all internally, eh?)

And of course, Fact 3: the US-CERT response to the ShadowBroker's earlier announcements.

So there are a lot of possibilities here that remain unexplored. I know the common thought (say, on Risky.biz) is that the Vulnerability Equities Process jumped into action, and helped MS with these bugs and then the patches came out JUST IN TIME.

Question: Why would the US not publicize, as Susan Hennessey has suggested, this effort from the VEP?

Fact 4: The SB release was on Friday, three short days after MS Patch Tuesday.

One possibility is that the SB team tested all their bugs in a trivial way by running them against patched targets, then released when nothing worked anymore. But no pro team works this way, because a lot of the time "patches" break exploits by mistake, and with a minor change you can re-enable your access.

Another possibility is that the ShadowBroker's team reverse engineered everything in the patch, realized their stolen bugs were really and truly fixed, and then released. That's some oddly fast RE work.

Maybe the SB has a source or access inside the USG team that makes up the VEP, or is connected in some way (they had to get this information somehow!), and was able to say definitively that these bugs were getting fixed, without having to do any reverse engineering.

If the SB is FSB, then it seems likely that they have a source inside Microsoft or access to the patch or security or QA team, and were able to get advanced notice of the patches. This presents some further dilemmas and "Strategy Opportunities". Or, as someone pointed out, they could have access to MAPP, assuming these bugs went through the MAPP process.

One thing I think is missed in the discussion is that Microsoft's security strategy is, in many ways, subordinate to a PR strategy. This makes sense if you think of Microsoft as a company out to make money. What if we take the Microsoft statement to Reuters at their word, and also note that Microsoft has the best and oldest non-state intelligence service available in this space? In other words, maybe they did not get their vulnerability information from the VEP.

There are a ton of unanswered questions and weird timings with this release, which I don't see explored, but maybe Grugq will do a more thorough piece. I wanted to explore this much of it to point out one quick thing: the USG cannot trust the integrity of Microsoft's networks or decision makers when it comes to national security interests.


Wednesday, April 19, 2017

0-12 and some duct tape

In a recent podcast Susan Hennessey at about seven minutes in says:
"...The authors here are from rather different communities, attorneys, private industry, non-legal policy areas, technical people, and again and again when we talk about cyber policy issues there's this concern that lawyers don't know enough about technology or technologists don't know enough about policy and there's this idea that there's this mythical person that's going to emerge that knows how to code and knows the law and has this really sharp policy and political sensibility and we're going to have this cabbage patch and then cyber security will be fixed - that's never struck me as particularly realistic. . . ."

"I've heard technologists say many many times in the policy space that if you've never written a line of code you should put duct tape over your mouth when it comes to these discussions"

Rob Lee, who has a background in SCADA security, responds with tact saying "Maybe we can at least drive the policy discussion with things that are at least a bit technically feasible."

He adds "You don't have to be technical, but you do have to be informed by the technical community and its priorities".

He's nicer than I am. But I'm also writing a paper with Sandro for NATO policy makers, and the thesis of "what I want policy makers to know about cyber war" has been bugging me for weeks. So here goes:

  1. Non-state actors are as important as States
  2. Data and computation don't happen in any particular geo-political place, which has wide ramifications, and you're not going to like them
  3. We do not know what makes for secure code or secure networks. We literally have no idea what helps and what doesn't help. So trying to apply standards, or even looking for "due diligence" on security practices, is often futile (c.f. the FTC case on the HTC phones)
  4. Almost all the useful historical data on cyber is highly classified, and this makes it hard to make policy, and if you don't have data, you should not make policy (c.f. the Vulnerability Equities Process) because what you're doing is probably super wrong
  5. Surveillance software is the exact same thing as intrusion detection software
  6. Intrusion software is the exact same thing as security assessment and penetration testing software
  7. Packets cannot be "American or Foreign" which means a lot of our intel community is using outdated laws and practices
  8. States cannot hope to control or even know what cyber operations take place "within their borders" because the very concept makes almost no sense
  9. Releasing information on vulnerabilities has far-ranging consequences, both in the future and for your past operations, and it's unlikely to be useful to have simple policies on these sorts of things
  10. No team is entirely domestic anymore - every organization and company is multi-national to the core
  11. In the cyber world, academia is almost entirely absent from influential thought leadership. This was not the case in the nuclear age when our policy structures were born, and all the top nuclear scientists worked at Universities. The culture of cyber thinkers (and hence doers) is a strange place, and in ways that will both astonish and annoy you, but also in ways which are strategically relevant.
  12. Give up thinking about "Defense" and "Offense" and start thinking about what is being controlled by what, or in other words what thing is being informed or instrumented or manipulated by what other thing
  13. Monitoring and manipulation are basically the same thing and have the same risks
  14. Software does not have a built in "intent". In fact, code and data are the same thing. Think of it this way, if I control everything you see and hear, can I control what you do? That's because code and data are the same, like energy and matter.
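
A throwaway illustration of point 14, in Python: the string below is "just data" right up until the runtime is told otherwise. This is the whole client-side exploitation story in miniature.

```python
# "Data" arriving over the wire...
payload = "print('I was data a moment ago')"

# ...is indistinguishable from code until something interprets it.
exec(payload)  # the same bytes, now executing
```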

If I had to answer Susan's question, I'd give the less tactful version of Rob's answer, which is that in fact we are now in a place where those cabbage patch dolls are becoming prominent. Look at John De Long, who was a technologist sitting next to me before he became a lawyer, and Lily Ablon, and Ryan Speers, Rob Joyce, and a host of others, all of whom had deep technological experience before they became policy people. The other side of the story is that every Belfer Center post-grad or "Cyber Law and Policy Professor" with no CS experience of any kind has to leave the field and go spend some time doing bug bounties or pen testing or incident response for a while to get some chops.

But think of it this way, the soccer game's score is 0-12, and not in your favor. Wouldn't you want to change the lineup for the second half?

Monday, April 17, 2017

Fusion Centers


So the Grugq does great stand-up - his timing and sense of using words is amazing. But it is important to remember that when I met him, a million years ago, he was not pontificating. He was, as I was, working for @stake and writing Solaris kernel rootkits on the side. Since then he's spent a couple of decades sitting in cyber-land, getting written up by Forbes, and hanging out in Asia talking to actual hackers about stuff. My point is that he's a native in the lingo, unlike quite a lot of other people who write and talk about the subject.

Which is why I found his analysis of Chinese Fusion Centers (see roughly 35 minutes in) very interesting. Because if you're building cyber norms or trying to enforce them, you have to understand the mechanisms other countries use to govern their cyber capabilities all the way to the ground floor. It's not all "confidence building measures" and other International Relations Alchemy. I haven't been able to find any other open source information on how this Fusion Center process works in China, which is why I am pointing you at this talk. [UPDATE: here is one, maybe this, also this book]

Likewise, the perception among foreign SIGINT programs that the US has decided to gerrymander the cyber norms process is fascinating. "What we are good at is SUPER OK, and what you are good at is NOT GOOD CYBER NORMS" is the US position, according to the rest of the world, especially when it comes to our stance on economic espionage over cyber. This is an issue we need to address.


Saturday, April 15, 2017

VEP: When disclosure is not disclosure.

True story, yo.

I want to tell a personal story of tragedy and woe to illustrate a subtle point that apparently is not well known in the policy sect. That point is that sometimes, even when an entire directory of tools and exploits leaks, your bugs still survive, hiding in plain sight.

A bunch of years ago, one of my 0days leaked in a tarball of other things, and became widely available. At the time, we used it as training - porting it to newer versions of an OS or to a related OS was a sort of fun practice for new people, and also useful.

And when it leaked, I assumed the jig was up. Everyone would play with it, and not just kill that bug, but the whole technique around the exploitation and the attack surface it resided in.

And yet, it never happened. Fifteen years later only one person has even realized what it was, and when he contacted us, we sent him a more recent version of the exploit, and then he sent back a much better version, in his own style, and then he STFU about it forever.

I see this aspect in the rest of the world too - the analysis of a leaked mailspool or toolset is more work than the community at large is going to put into it. People are busy. Figuring out which vulnerability some exploit targets and how requires extreme expertise and effort in most cases.

So I have this to say: Just because your adversary or even the entire world has a copy of your exploit, does not mean it is 100% burnt. And you have to add this kind of difficult calculus to any VEP decision. It happens all the time, and I've seen the effects up close.

ShadowBrokers, the VEP, and You

Quoting Nicolas Weaver in his latest Lawfare article about the ShadowBrokers' Windows 0day release, which has a few common thematic errors as they relate to the VEP:
This dump also provides significant ammunition for those concerned with the US government developing and keeping 0-day exploits. Like both previous Shadow Brokers dumps, this batch contains vulnerabilities that the NSA clearly did not disclose even after the tools were stolen. This means either that the NSA can’t determine which tools were stolen—a troubling possibility post-Snowden—or that the NSA was aware of the breach but failed to disclose to vendors despite knowing an adversary had access. I’m comfortable with the NSA keeping as many 0-days affecting U.S. systems as they want, so long as they are NOBUS (Nobody But Us). Once the NSA is aware an adversary knows of the vulnerabilities, the agency has an obligation to protect U.S. interests through disclosure.

This is a common feeling. The idea that "when you know an adversary has it, you should release it to the vendor". And of course, hilariously, this is what happened in this particular case, where we learned a few interesting things.

"No individual or organization has contacted us..."

"Yet mysteriously all the bugs got patched right before the ShadowBroker's release!"
We also learned, from his tweets chiding the USG for not helping MS, that either the Russians have not penetrated the USG->Microsoft communication channel and Microsoft's security team, or else Snowden was kept out of the loop.

This is silly, because codenames are by definition unclassified; having a LIST OF CODENAMES and claiming you have the actual exploits does not mean anything has really leaked.

The side-understanding here is that the USG has probably penetrated the ShadowBrokers to some extent. Not only were they certain that the ShadowBrokers had the real data, but they also seem to have known their timeframe for leaking it... assuming the ShadowBrokers didn't do their release after noticing many of the bugs were patched.

And this is the information feed that is even more valuable than the exploits: what parts of your adversary have you penetrated? Because if we send every bug the Russians have to MS, then the Russians know we've penetrated their comms. That's why a "kill all bugs we know the Russians have" rule, as @ncweaver posits and as is often held up as "common-sense policy," is dangerous and unrealistic without taking into consideration the extremely complex OPSEC requirements of your sources. Any patch is an information feed from you, about your most sensitive operations, to your enemy. We can do this only with extreme caution.

Of course the other possibility, looking at this timeline carefully, is that the ShadowBrokers IS the USG. Because the world of mirrors is a super fun place, is why. :)




Tuesday, April 11, 2017

"Don't capture the flag"

Technically Rooted Norms


In Lawfare I critiqued an existing (and ridiculous) norms proposal from the Carnegie Endowment for International Peace. But many people find my own proposal a bit vague, so I want to un-vague it up a bit here, on a more technical blog. :)

Let's start with a high level proposal and work down into some exciting details as follow from the original piece:
"To that end, I propose a completely different approach to this particular problem. Instead of getting the G20 to sign onto a doomed lofty principle of non-interference, let’s give each participating country 50 cryptographic tokens a year, which they can distribute as they see fit, even to non-participating states. When any offensive teams participating in the scheme see such tokens on a machine or network service, they will back off. 
While I hesitate to provide a full protocol spec for this proposal in a Lawfare post, my belief is that we do have the capability to do this, from both a policy and technical capacity. The advantages are numerous. For example, this scheme works at wire speed, and is much less likely to require complex and ambiguous legal interpretation."

FAQ for "Don't Capture the Flag" System


Q: I’m not sure how your proposal works. Banks pick their most sensitive data sets, the ones they really can’t afford to have attacked, and put a beacon on those sets so attackers know when they’ve found the crown jewels? But it all works out for the best because a lot of potential attackers have agreed to back off when they do find the crown jewels? ;-)

A: Less a beacon than a cryptographic signature, really. But of course for a working system you need something essentially steganographic, along with decoys, and a revocation system, and many other slightly more complex but completely workable features that your local NSA or GCHQ person could whip up in 20 minutes on a napkin using things lying around on GitHub.
Also, ideally you want a system that could be sent via the network as well as stored on hosts. In addition, just because you have agreed upon a scheme with SOME adversaries doesn't mean you publish it for all adversaries to read.
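
For the curious, here is a minimal sketch of what the napkin version might look like. To be clear: the token format, the fields, and the choice of Ed25519 via the Python cryptography library are my assumptions for illustration, not a spec. A real system would layer on the steganographic encoding, decoys, and revocation mentioned above.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

SIG_LEN = 64  # Ed25519 signatures are always 64 bytes

# The scheme authority holds the signing key; participating offensive
# teams get the verification key (shared quietly, not published).
authority_key = Ed25519PrivateKey.generate()
verify_key = authority_key.public_key()

def issue_token(country: str, year: int, serial: int) -> bytes:
    """Mint one of a country's 50 annual "back off" tokens."""
    msg = f"{country}|{year}|{serial}".encode()
    return msg + authority_key.sign(msg)

def check_token(token: bytes) -> bool:
    """What an offensive team runs when it trips over a token on a box."""
    msg, sig = token[:-SIG_LEN], token[-SIG_LEN:]
    try:
        verify_key.verify(sig, msg)
        return True   # valid flag: back off
    except InvalidSignature:
        return False  # forgery or decoy: fair game

flag = issue_token("NL", 2017, 7)
assert check_token(flag)
```

Note what this deliberately leaves exposed: anyone holding the verification key can confirm a flag is real, which is exactly the "treasure map" objection the next question raises.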

Q: I think the problem is that all it takes for the system to produce a bad outcome is one non-compliant actor, who can treat the flags not as “keep out” signs but as “treasure here” signs. I’d like a norm system in which we had 80% compliance, but not at the cost of tipping the other 20% off whenever they found a file that maximized their leverage.

A: I agree of course, and to combat this you have a few features:
1. Enough tokens that you have the ability to put some on honeypots
2. Leaks, which as much as we hate them would provide transparency on this subject retrospectively, and of course, our IC will monitor for transgressions in our anti-hacker operations
3. The fact that knowing whether something is important is often super easy anyway. It's not like we are confused about where the important financial systems are in a network.

Ok, so that's that! Hopefully that helps, or at least gives the scheme's critics more to chew on. :)

Wednesday, March 29, 2017

Stewart Baker with Michael Daniel on ze Podcasts

I want to put a quick note out about the latest Steptoe Cyberlaw podcast, which is usually interesting because Stewart Baker is a much better interviewer than most when he's up against people like this. He's informed, of course, as the US's best-known high-powered lawyer in the space. But he's also willing to push back against the people on his show and ask harder questions than almost any other public interviewer.

TFW: "I know shit and I have opinions"

http://www.steptoecyberblog.com/2017/03/27/steptoe-cyberlaw-podcast-interview-with-michael-daniels/

The whole interview is good, and Michael Daniel's skillset is very much (and always was) managing and understanding the physics of moving large government organizations around for the better. His comments on the interview are totally on point when it comes to how to handle moving government agencies to the cloud. Well worth the time!

More to the point of this blog, however: 47 minutes into the podcast, Stewart Baker says, basically, that he thinks the VEP is bullshit, and everyone he knows (which is everyone) thinks the VEP is bullshit. Daniel says about the VEP not that it works in any particular way, but that he is a "believer," and, to be fair, his position is "moderate" in some ways. In particular, he acknowledges that there is a legitimate national security interest in exploitation. But he cannot address any of the real issues with the VEP at a technical level. In summary: he has no cogent defense of the VEP other than a nebulous ideology.


Wednesday, March 1, 2017

Control of DNS versus the Security of DNS

"We're getting beat up by kids, captain!"


So instead of futile and counterproductive efforts trying to regulate all vulnerabilities out of the IoT market, we need to understand that our policies for national cybersecurity may have to let go of certain control points we have, in order to build a resilient internet.

In particular, central points of failure like DNS are massive weak points for attacks run by 19-year-olds in charge of botnets.

But why is DNS still so centralized when decentralized versions like Convergence have been built? The answer is: Control.

Having DNS centralized means big businesses and governments can fight over trademarked DNS names, it means PirateBay.com can be seized by the FBI. It is a huge boon for monitoring of global internet activity.

None of the replacements offer these "features." So we as a government have to decide: do we want a controllable naming system on the Internet, or a system resistant to attack from 19-year-olds? It's hard to admit, but DNSSEC solved the wrong problem.

Tuesday, February 21, 2017

Some hard questions for team Stanford


These Stanford panels have gotten worse, is a phrase I never thought I'd say. But the truly painful hour of reality TV above needs jazzing up more than the last season of Glee, so here is my attempt to help, with some questions that might be useful to ask next time. But before I do, a quick Twitter conversation with Aaron Portnoy, who used to work at Exodus. I mention him specifically because Logan Brown, the CEO of Exodus, is the one person on the panel who has experience with the subject matter.

Aaron worked at Exodus before their disclosure policy change (aka, business model pivot). This followup is also interesting.

Let's take a look at why these panels happen - based on the very technical methodology of checking who sponsors them, as displayed by the sad printouts taped to the table...

At one point Oren, CEO of Area1, is like, "Isn't the government supposed to help defend us? Why do they ever use exploits?" - assuming all defense and equities issues are limited to one domain and business model (his), even though his whole company's pitch is that THEY can protect you.

The single most poisonous idea that keeps getting hammered through these panels by people without operational experience of any kind is the idea that the government will use a vulnerability and then give it to vendors. The only way to break through to people how much of a non-starter this is, is to look at it from the other direction, with some sample devil's advocate questions:

Some things are obvious even to completely random Twitter users... yet are never really brought up at Stanford panels on the subject.

  1. What are the OPSEC issues with this plan?
  2. How do we handle non-US vendors, including Russian/Chinese/Iranian vendors?
  3. How do we handle our exploit supply chain? 
  4. Are vulnerabilities linked?
  5. What impact will this really have, and do we have any hard data to support this impact on our security?
  6. Should we assume that defense will always be at a disadvantage and hence stockpiling exploit capability is not needed?
  7. Why are we so intent on this with software vulnerabilities and not the US advantage in cryptographic math? Should we require the NSA to publish their math journals as well?
  8. What do we do when vulnerability vendors refuse to sell to us if their vulns are at risk of exposure?
  9. What do we do when the price for vulnerabilities goes up 100x? Is this a wise use of taxpayer money?

Just a start. :)


Friday, February 17, 2017

Just cause deterrence is different in cyber doesn't mean it doesn't exist

Are there Jedi out there the Empire cannot defeat?

That's a long title for a blog post. But ask yourself, as I had to ask Mara Tam today: Do we always have escalatory dominance over non-state players in cyber?  I'm not sure we do.

What does that mean for cyber deterrence or for our overall strategy or for the Tallinn team's insistence that only States need be taken into account in their legal analysis? (Note: Not good things.)


That said, Immunity's deterrence against smaller states has always been: I will spend the next ten years building a team and a full toolchain to take you on if you mess with our people and we catch you, which we might. Having a very very long timeline of action is of great value in cyber.

Thursday, February 16, 2017

DETERRENCE: Drop other people's warez

I'll take: Famous old defacements for $100, Alex


I had this whole blogpost written - it had Apache-Scalp in it, and some comments on my attempts at dating, and Fluffy Bunny, and was all about how whimsical defacement had a certain value in terms of expressing advanced capability, and hence in terms of deterrence. "Whimsy as a force multiplier!"

But then Bas came over and pointed out that I was super wrong. Not only are defacements usually useless, but they are not the Way. In most domains, deterrence is about showing what you can do. In cyber, deterrence is showing what other people can do.

The Russians and the US have been performing different variations on this theme. The ShadowBrokers release is a 10 out of 10 on this scale, and our efforts to out their trojans, methodologies, and team members via press releases are similar, but perhaps less effective overall.

If you are still on the fence over whether the VEP is a good idea: The Russians can release an entire tree of stolen exploits and trojans because:

  1. Our exploits don't overlap with theirs
  2. Our persistence techniques, exfiltration techniques, and hooking techniques that we use in our implants, where they are not public, don't overlap with theirs.
  3. Or maybe they filtered it out so techniques they still use don't get burnt?


Tuesday, February 14, 2017

Cover Visas

There is absolutely no steganography in this picture of a fire!

So the problem with making it so the only way to get from Iraq to the US is being a cooperating asset is that you put our assets' families at risk. We need a huge number of people who got green cards purely from a lottery or from extended family chains, so that when we want to offer someone an "expedited magical spy green card" we can, and his or her family won't automatically get kneecapped.

This is one of those strategic dilemmas. What if it's 100% true that there's someone bad coming in, because why not? It may literally be impossible to vet people at the border. But if you NEED a permeable border to build your local HUMINT network, and without one you are completely blind in-country, you may have to just bear that risk.

At some level, building cover traffic is important, and also one of the most difficult things in SIGINT. Keep in mind that, as far as anyone can tell, public research into steganography died as soon as it became clear digital watermarks were not the answer to DRM for the big media labels - for the simple reason that the way to remove any theoretical digital watermark on a song is to mp3-encode it.


Saturday, February 11, 2017

The TAO Strategy's Weakness: Hal Fucking Martin the Third



I want everyone to watch the video above, but think of it in terms of how to build a cyber war grand strategy. 21-year-old aggressive-as-fuck me thought that the whole strategy of TAO was stupid. But I couldn't say why, because I was all raw id the way 21-year-olds are. "Scale is good," people intuitively think - "we need to be able to do this with a massive body of people we can train up."

40-year-old me has proposed an insane idea - as different from the way we do things now as a eukaryote is from the bacteria and archaea we evolved from. I cloak it in "hack back" or "active defense," but the truth is that it stems from a single philosophy I've held my whole life, one that dates to when TESO and ADM were ripping their way through the Mesozoic Internet.

It is this simple phrase: You should not use the exploit if you cannot write it. The truth is, I cannot write the exploits that Scrippie writes. But I for sure understand them. Let that be our bar then - a nucleus composed of small teams of people who understand the exploits they are using, but don't share them or any of their other infrastructure with other teams.

We talk a little bit about dwell time here. But we are now in an age when the time between a hacker getting into your system and having full access to, analysis of, and exfiltration of your data is essentially zero. How does your strategy of "hunting" handle that era? And this applies to our and other countries' cyber offense teams more than anywhere else. We have a knife made out of pure information, and all the SAPs in the world can't save us with the current structure we have.

In summary, how many separate exploit and implant and infrastructure and methodology chains do we really need to obtain dominance over this space? "So many", as Bri would say.

Friday, February 10, 2017

Shouting into the void *ptr;

Getting old people off Office is less a technical problem than a political one.


So a couple of other hackers with deep expertise in exploitation and offensive operations and I often go to a USG policy forum, which will remain unnamed, and we propose strange things. One of those strange things can best be titled: Insecure at any price, the Microsoft story.

What this means is exactly what you're seeing in the latest EO: Get off Microsoft on your desktop. You cannot secure it. Despite Jason Healey's obsession with innovations from Silicon Valley, sometimes you have to say: There are things we cannot build with.

I will list them below:

  • Microsoft Office (Google Docs 100 times better anyways)
  • Microsoft Windows
  • OS X
  • PHP
  • ASP (ASP.NET good, old ASP bad)
  • Ruby on Rails (not sure how they made this so insecure, but they did)
  • Sharepoint. NEVER USE SHAREPOINT. It's a security nightmare because XSS exists.
  • Wordpress.
But the same is true of protocols. SMTP needs to be almost no part of your business. If you regularly use SMTP and email in your business structure, you are failing, and we already have replacements in the messaging space that do everything email does, but better.

Imagine two hackers sitting with policy lawyers and we say "Use Chromebooks, Use iPads" and that's what you're reading in the latest EO. That's how you solve OPM-hacking type issues. Of course, it is likely to simply be a coincidence. You never know where the info from these policy meetings ends up. It is only slightly more substantive than literally shouting into the void.

Tallinn 2.0 is the Bowling Green Massacre of Cyber War Law


Above is the Atlantic Council livestream of the Tallinn Manual 2.0 launch. Look, no one can deny that Mike Schmitt is a genius, but the Tallinn Manual is more mirage than oasis. Let me sum it up: they can't AGREE on whether the Russian IO work on the US election was anything in particular, and they already acknowledge that they don't have solidity on what state sovereignty means in cyberspace. In other words, wtf the Treaty of Westphalia has to do with information warfare, if anything, is still an unanswered question, no matter how many of "the best lawyers in the world" you put in a room in Tallinn.

That means that, despite his opening statement at EVERY EVENT HE'S EVER AT, the Internet is literally an ungoverned space, with a sort of militant "rule of the strong" applying at best. That's what the Russian efforts this fall mean.

That doesn't mean his efforts are wasted - the US DoD and other states LOVE a manual that can allow them to rationalize their actions, and that's why this is on the desktops of specialist lawyers across the space. Right now CYA in cyber costs fifty bucks on Amazon. But deep down, if you can't agree on the lines or definitions of anything, then you don't have a process that produces consistent results.

"We captured all reasonable views and put them in the manual." What is this, the Talmud of cyber war?

But these are just my opinions (and yes, they are shared among the high level International Law specialists in this space I've talked to at the pool), and the hard part of this release is how little criticism processes like this have. These sorts of events are love-fests, not working groups.