Friday, May 26, 2017
Platform Security
COM SECURITY TALK from INFILTRATE 2017: https://vimeo.com/214856542

Ok, so I have a concept that I've tried to explain a bunch of times and failed every time. It's not just codebases that decompose, but whole platforms. And when a platform cracks, everything built on it has to be replaced from scratch. Immunity has already gone through our data, like every other consulting company, and found that an SDL process is ten times less of an indicator of future security than the initial choice of platform to build a product on.

It's easier for people to understand the continual chain of vulnerabilities as discrete events. They look at the CyberUL work and think they can assess software risk. But platform risk is harder. Some signs of cracking are:

* New bugclasses start to be found on a regular basis
* Vulnerability criticality is regularly "catastrophic", as bugclasses that used to be low risk are now known to be extremely high risk when combined together
* Remediations become much more difficult than "simply patch", and bugs are often marked "won't fix"
* Even knowing whether you are vulnerable is sometimes too much work, even for experts
* Mitigations seem useful at first, but then demonstrate that they do more harm than good

From an attacker's standpoint, being able to smell a broken platform is like knowing where a dead whale is before anyone else: there is about to be a feeding frenzy. Whole careers will live and die like brittle stars upon the bloated, decomposing underwater corpses of Java and .NET. Microsoft Windows is the same thing.

I want to point out that two years ago, when Microsoft Research gave their talk at INFILTRATE, initially nobody took any notice. But some of us pushed research on it, because we knew it was about the cracking of an entire platform - probably the most important platform in the world, Active Directory.

From a defensive standpoint, what I see is people in denial that this process even exists. They think patching works. They want to believe.

From an architectural standpoint, Windows is only two things: COM and the Win32 API. Forshaw has broken both of them, and not in ways that can be fixed. What does that mean? Anyways, watch the video. :)
Thursday, May 25, 2017
The PATCH Act
The PATCH Act is well meaning, but it handles strategic security issues at the wrong scope, and without the information needed to solidify any longer-term US Government response to systemic risks.
Specifically, we know the following things:
- Patched vulnerabilities can still result in massive security events (such as Wannacry)
- Vulnerabilities we know about are sometimes, but not often, independently discovered by our adversaries (see the RAND paper)
- Exploits DO sometimes get caught (usually one at a time)
- Exploits lately have been leaking (wholesale)
- Understanding the risks or technical details of any one vulnerability is a massive undertaking
- Exploits are composed of multiple vulnerabilities, each with their own complex story and background
- Other governments are unlikely to give vulnerabilities to US companies through any similar system
We also know what we don’t know:
- We don’t know which vulnerabilities we will need in the future
- We don’t know what vulnerabilities our adversaries will find and use in the future
- We often don’t know what mitigations will and won’t work in the real world (you would THINK patching would work, but Wannacry exists!)
- We don't know how our supply chain will react to us giving vulnerabilities to vendors
The PATCH Act defines vulnerabilities quite broadly for this reason: we don’t know what types of things will have impact, and which we will need to react to, in the future. But this is also a sign that we are not ready for a legislative solution.
Imagine setting up the exact system described in the Act, but only for Internet Explorer vulnerabilities. As you run this imaginary system through its paces, you immediately discover how hard it is to get any value out of it. That’s not a good sign for a new law. Proponents of the PATCH Act say it is a "light touch", but anything that handles every vulnerability the United States government uses, from every possible dimension, is by definition a giant process - one that, in this case, we don't know will be effective.
Another question is how we build a defensive whole-of-government framework - for example, should the head of the GSA be read in on our vulnerability knowledge (in aggregate, if not of individual vulnerabilities) so they can guide future purchasing decisions?
In order for our IC to continue in the field of computer exploitation, we will have to get some hold on wholesale leakers of our most sensitive technology. This does not mean “tracking down leakers” but building systems and processes resistant to leaking. It is about information segmentation and taking operators out of the system as much as possible.
This is true in all intelligence fields and may require re-engineering many of our internal processes. But assuming we can do that - and efforts are already underway to do so - we still have to handle the fact that exploits get caught occasionally, that other people find and use exploits, and that even after a patch, we have complex strategic issues to deal with.
In that sense, having a vendor produce and distribute a patch is only part of the complete breakfast of our strategic security needs. It is less about "defense vs. offense" and more about handling the complex situations that occur when using this kind of technology. We would be wise to build an emerging strategy around that understanding before legislation like the PATCH Act forces us down a path.
Tuesday, May 23, 2017
Cover and Wannacry
I went to a dinner party once, not in the US, and ended up talking to a retired HUMINT official, also not from the US. I asked him some dumb questions, as Americans do. One of which was, "What's it like trying to make friends with people you hate?"
What he said was that there's always something you can find to really like about a person. You just dwell on that and the camaraderie is natural.
The other question I asked him was if it stressed him out, all the cover and hiding and stuff. And what he said was that after a while he never ever worried about hiding from the adversary. He only worried about getting back-stabbed by the incompetents on his own team. Generally, people who think they are helping, but instead are fucking you, and your whole team, over. This, to be fair, is how I think of all the well-meaning people trying to give vulnerabilities to Microsoft because they feel left out and they want to be part of the cool kids club.
But here's also the thing about cover: people are good at it. It's hard to admit, because there's a natural impulse to think that what you are catching is at least the B team. But maybe it's the D- team. Maybe there is an exponential scale of skill beyond the fishpond you know about and have listed on the Symantec pages, and the picture you get from Symantec blog posts analyzing trojans by MD5 signature is missing crucial pieces of the puzzle.
So what I like to do to look at these things is have a larger strategic structure in mind, and then say "How does this fit into the REALM of possibilities", instead of "What does this lead to from the evidence as presented".
The realm of possibilities is quite interesting here. In addition to being a worm, Wannacry had a Tor C2 in it. And the reporting on Wannacry very much makes it seem like a disconnected event. But what if Wannacry is part of a fabric of attacks? What if the ransom money is meaningless - just something for the press to hook onto, so that the reporting isn't "North Korean worm targets everyone... for no apparent reason"? Because that headline would make everyone do deep analysis. Nobody does deep analysis of what ransomware does, except to try to decrypt the data.
Sometimes you give a worm "something" to do that is not the main goal. People aren't really analyzing Wannacry for C2 operations that much - mostly they just remove it. In this way, a nation-state attack can be cloaked as simple crimeware that merely happens to be run by a nation-state.
And in the case of Wannacry, there are two goals, either of which might be the main goal if you put a real cyber warfare strategist in charge, which I assume they did:
1. Access to information and networks that are hard to reach
2. Testing self-replicating infrastructure and methodology
The main goal is not "make 100k", because this is a team that steals millions of dollars per op. It would have made MORE sense for them to share their kill-switch information with Huawei, Tencent, and Qihoo360 first, or soon after launch... and I bet we find they tried to do just that.
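For readers who never looked at a sample, the widely reported kill-switch logic worked roughly like the minimal Python sketch below. This is an illustration only, not the actual worm code: the real sample hardcoded one specific pseudo-random domain, and the constant here is a placeholder.

```python
# Minimal sketch of WannaCry-style kill-switch logic, as described in
# public reporting. Illustrative only: the real worm hardcoded one
# specific pseudo-random domain; this URL is a placeholder.
import urllib.request

KILL_SWITCH_URL = "http://killswitch-placeholder.invalid/"  # hypothetical

def should_propagate() -> bool:
    """The worm only proceeds if the kill-switch domain does NOT answer."""
    try:
        urllib.request.urlopen(KILL_SWITCH_URL, timeout=5)
        return False  # domain answered: someone registered it, stand down
    except OSError:
        return True   # unreachable: proceed as if on a normal network

if __name__ == "__main__":
    print("propagate" if should_propagate() else "exit quietly")
```

The design consequence is the interesting part: whoever knows that domain can register it and turn propagation off, which is why sharing kill-switch information ahead of time matters.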
Monday, May 22, 2017
Hack back and the Yamamoto Chapter
So, I've tried my best to get the policy world to read Cryptonomicon, because it's important if you want to understand modern cyber policy in any sort of context. Weirdly, for an obviously over-educated crew that likes to read a lot, Cryptonomicon is not on the reading list.
But if you have time, just read this one short chapter: here.
What happens when you hang out with US spooks who don't know each other and Europeans at the same party is that you see an interesting feedback loop. Because US spooks have a natural tendency to play not just "stupid" but exactly half as smart as whoever they are talking to. This leads to a bemused European watching on as two US spooks each land on the lowest common denominator of explaining how they have actually never seen a computer in real life, and hope to one day meet a hacker who can tell them how this newfangled technology like "mice" works.
[Figure: HOLD ME BACK BEFORE I HACK XNU MACH KERNELS IN MY RAGE]
But if you are doing cyber policy work, you cannot help but notice there has been a truly odd number of papers essentially advocating hack-back coming from various arms of the policy world most connected with the "deeper state". I've listed a few recent links below.
- http://www.academia.edu/33127707/The_Cyber_Longbow_and_Other_Information_Strategies_U.S._National_Security_and_Cyberspace
- https://cybersecpolitics.blogspot.com/2017/05/heritage-paper-on-hack-back.html
- https://cybersecpolitics.blogspot.com/2016/11/the-gwu-active-defense-report-is-secret.html
In order to parse this properly - to "unpack" it, in the parlance of the policy world - you have to have hacked a few thousand networks personally, I think. And as any penetration testing company knows: network security is a rare thing.
But it is exceptionally rare outside the United States. Here, we have West Coast charlatans selling us snake oil boxes and solutions which typically cause more problems than they solve. But we've also invested heavily in education and process. You have to LEARN how to have a penetration test. It has to hurt a bit the first few times. Developers hate it, managers hate the cost and delays, and network engineers hate designing around new nonsense requirements.
Penetration testing, and security services in general, are not easy to know you need, or to know how to consume. You have to learn what code security looks like, and how to test your third-party vendors, and, frankly, you have to learn how to give a shit to the point where you'll pay insane money for people to tell you that you suck in new and fascinating ways, without getting upset.
Most of the world doesn't want to go through this painful process. And in this case, I mean most of the developed world: Korea is still trying to get over how every banking app there uses ActiveX. Japan has a weird addiction to ignoring security while being technologically very advanced. China has a huge problem with pirated software and the Great Firewall. The Europeans wish they could regulate the whole hacking problem away. The Russians spend their time talking about kick-backs for recommending various security software, rather than penetration-testing results.
In other words, their offensive teams are much more experienced than their defensive teams, and while this is changing (Qihoo360! Tencent!), it is still new. They haven't had time to make as many mistakes as the US has on defense. They haven't learned how to care as much.
There are spots of brightness everywhere - you'll find clued-up people doing their best to secure their enterprises in innovative ways all over the world. It's no accident that all of Europe was on Chip and PIN ten years before Target got hacked.
US Policy is to always say the following sentence until you believe it: "We are the most at risk nation for cyber attacks because we have adopted technology the most!" It's hilarious when people believe us.
Because if you've been in the biz, you know the truth, which is that overall, as Wannacry demonstrated (see above), there's a real security gap between nations. And I'd like to tie it together by pointing out that when the US policy teams talk about hack-back, the not-so-subtle subtext is: "We are holding back. BlackHat alone had 9000 people at it last year. I swear to god, I could build a top-notch hacking team by going into any random Starbucks in Fairfax and yelling out loud 'I will give this hard-to-find legacy SPARC TADPOLE LAPTOP to the first person to write my name on Strana.ru's front page without having to fill out Teaming Agreement paperwork!'"
BlackHat and RSA are a peacock's tail of beautiful, useless fitness-function announcements. No other country has anything like them in this space.
So when we talk about hack-back, what we're saying is that we may very well build a working hack-back policy into our national strategy to combat what we consider unfair economic espionage. But we're also saying this: "Your companies are secured with raw hope and duct tape, and you know we have a colossally massive back-bench of people waiting to go active if we just give them a mission. We are playing pretty stupid and helpless, but... don't fuck with us."
Friday, May 19, 2017
the enemy gets a vote
The little-known corollary to General (now Secretary) Mattis’s comment on war is that your supply chain also gets a vote. People look at the ShadowBrokers-to-Wannacry unofficial "technology transfer program" and think it is the Vulnerability Equities Process worst-case scenario. But it’s really not.
The worst case scenario is that an exploit leaks that is composed of GCHQ parts, with some NSA add-ons, some CIA add-ons, and a piece that you bought from a third-party vendor under a special license. I'm not going to get into the fact that exploits DO get caught sometimes - probably more often now that breach detection software is getting popular. But let's just look at the proposed PATCH law and other proposals from the simplest angle.
Most of the proposals for how to re-organize the VEP assume you can browbeat your third-party vendors (and GCHQ, GCSB, etc.!) into accepting that, on your whim, you can send their vulnerabilities to a vendor for patching. This is simply not true - any more than the idea that you could GPL the Windows source code if you felt like it.
The thing is this: the exploit vendors also get a vote on these matters. And if you kill their bugs or exploit techniques, or simply have bad OPSEC and get caught a lot, they tend to vote by simply not selling you the good vulnerabilities. I cannot overstate how much we need our foreign second-party partners in this space and, even more than that, how much we need our supply chain. Not only is the signals intelligence enabled through active network attack inescapably necessary for the safety of the country, but we are trying to build up CyberCom, enable law enforcement, and recover from the leaks and damage Snowden did.
In simple terms, yes, exploits save lives. They are not weapons, but they can be powerful tools. I have, and I cannot be more literal than this, seen it with my own eyes. You don't have to believe me.
Ironically, in order to determine which vulnerabilities present the most risk to us and just in general combat threats in cyberspace, we will probably have to hack into foreign services, which is going to require that we have even more capability in this space.
To sum up:
- If a law forces us to send non-public vulnerabilities to vendors, we will lose our best people from the NSA, and they will go work for private industry.
- If we cannot protect our second party partner's technology they will stop giving it to us.
- If we give bought bugs to vendors, they will stop selling them to us. Not just that one exploit vendor: once the USG has a reputation for operating this way, word will get out and the entire pipeline will dry up, causing massive harm to our operational capability.
- We need that technology, because we have to recover our capability in this space for strategic reasons.
But there are better ideas than the VEP available. One idea is simply to fund a bug bounty out of the Commerce Department for things we find strategic (i.e., not just for vulnerabilities, which is something Microsoft and Apple should fund, but explicitly for the exploits and toolkits other countries are using against us).
Likewise, the IC can be more open about what exploits we know get caught, and having custom-built mitigation expertise available ahead of time for corporations can limit the damage of a leak or an exploit getting caught, at the cost of attribution. This may include writing and distributing third party patches, IDS signatures, and implant removal tools.
And having sensors on as many networks as possible can help discover which of your vulnerabilities have been caught or stolen.
One interesting possibility is that if we close off our exploit pipeline, we will instead be forced into wholesale outsourcing of operations themselves - something I think we should be careful about. Finally, before we codify the VEP into any sort of law, we should look for similar efforts from Russia and China to materialize out of the norms process - something we have not seen even a glimmer of yet.
-----
Layercake "Golden Rules" quotes for those without YouTube. :)
o Always work in a small team
o Keep a very low profile
o Only deal with people who come recommended
o Never be too greedy
Tuesday, May 9, 2017
Heritage Paper on Hack Back
https://www.lawfareblog.com/active-cyber-defense-aka-hackback
There's very little difference I can find between the Heritage Foundation paper from Paul Rosenzweig et al. and this CyberSecPolitics post. But I think it's a good idea, if for no other reason than that it sets up a functional capability that can scale, takes money from private industry to pay for the defense of private industry, and is managed in a granular way by the government.
Sunday, May 7, 2017
The Teams
One thing I learned from INFILTRATE 2017 is that while there are some new players on the field, the majority of the people are on teams with two-decade-old lineages. This has massive implications, which maybe I'll go into later.
But with that in mind, I will ignore Mark Dowd's hilarious self-introduction, and simply say this: There are a lot of people I would advise even a well resourced nation state not to mess with in the cyber domain. Dowd is on the top of that list. So watch the below video, when you get a chance.
Network Centric Warfare
I feel like people have a small view of Network Centric Warfare. Part of the problem is the old-school thought-meme of "command and control". Even in the Wassenaar debate you see this come up over and over again: they want to ban not the trojans, but the "Command and Control" software that generates and controls those trojans.
This is because the people who write the regulations are the same people who wrote the Wikipedia page on network centric warfare. In their heads, the Internet allows for FASTER command and control. "It's an evolution," they say. "It increases the SPEED of command."
This is entirely wrong. What the internet did - what network centric warfare did - was change the very nature of how combat systems work. Instead of "command and control" you have something very different: you have networks of networks. You have "publish and subscribe", you have "broadcast", you have "direct to the front line from the back-end analysis systems". You have, in a word, emergent and reactive systems.
Ask yourself, what percentage of the sensors do I have to take over to BE the command and control system? What part of how I do C2 is an illusion?
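To make that concrete, here is a toy publish/subscribe sketch in Python - my own illustration, with invented names, not any real military or C2 system. The point it demonstrates is the sensor question above: there is no central commander object at all, so any node that can publish on the right topic effectively IS the command system.

```python
# Toy publish/subscribe fabric (invented illustration, not a real system).
# Note there is no "commander" object: whoever publishes on a topic that
# others act on is, functionally, the command and control system.
from collections import defaultdict
from typing import Callable

class Fabric:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: str) -> None:
        for handler in self._subscribers[topic]:
            handler(message)

fabric = Fabric()
# A front-line "shooter" consumes back-end analysis directly, no HQ in the loop.
fabric.subscribe("targeting", lambda m: print(f"shooter acts on: {m}"))
# A sensor publishes straight to the topic. If an attacker owns enough
# sensors, the attacker is the one publishing the "commands".
fabric.publish("targeting", "hostile emitter at grid 123456")
```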
And nowhere is this more true than in the cyber domain. There are two schools of thought. One: "It's an evolution of what we have already, be it EW or psyops or SIGINT or HUMINT or whatever." And the other: this is a revolution. It's as different as a leaf-cutting ant colony is from a centipede. It is night and day. It is the difference between H.R. Giger and John Audubon. It is like 50 Shades of Grey vs. The Fault in Our Stars.
I honestly don't know any way to make this clearer and if you've read this blog (or ever dated me) you are probably sick of me bringing up ants all the time. But no analogy to network centric warfare is more direct than the one of how social insects work.
Wednesday, May 3, 2017
What you don't know can't hurt you.
[Figure: Ranking Risk in Vulnerability Management]
Prioritization gaps are a hard thing to explain - or maybe a very easy thing to explain, but in a way that is hard to put a metric on. Let's start with the story of every enterprise vulnerability management organization everywhere: it has enough resources to test and patch five vulnerabilities, and there are fifty advisories, all marked critical, on the systems it runs. So, as in the graphic above, you see people developing ranking systems which look at a vulnerability and ask, "Is this important enough that I have to spend time really working on it?" In some cases there IS no patch and you're creating a custom mitigation, but either way, the testing process is expensive.
Look at the factors they use and then think of the ways you can defeat those as an attacker, because any gap between what the attacker prioritizes and what the defender prioritizes is really its own vulnerability. And if I can PREDICT or CONTROL the factors a defender uses (say, by controlling public information), then I, as the attacker, can always win.
For example, if I attack QUICKLY then the vulnerability remediation prioritization tree is always off, because I will be done attacking before you have a chance to notice that something is under "active attack".
[Figure: This should go without saying...]
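To make the gap concrete, here is a naive defender-side ranking function of the kind the graphic above describes. This is a sketch - the factor names and weights are my invention, not any real product's scoring model. Notice that every input is either public information or something an attacker can predict or influence:

```python
# Naive vulnerability-ranking sketch. The factors and weights are invented
# for illustration; the point is that every input is either public or
# attacker-influenced, so the attacker can predict their own queue position.
def priority_score(cvss: float,
                   exploit_public: bool,
                   active_attack_reported: bool,
                   asset_criticality: float) -> float:
    score = cvss * asset_criticality
    if exploit_public:            # defender reacts to PUBLIC exploit code...
        score *= 1.5
    if active_attack_reported:    # ...and to REPORTED attacks, which lag real ones
        score *= 2.0
    return score

# A bug the attacker keeps private, and uses faster than the reporting
# cycle, scores a third of what the "noisy" version does -- same bug,
# different place in the defender's queue.
print(priority_score(7.0, exploit_public=False, active_attack_reported=False,
                     asset_criticality=1.0))   # 7.0  -> waits behind "critical" noise
print(priority_score(7.0, exploit_public=True, active_attack_reported=True,
                     asset_criticality=1.0))   # 21.0 -> patched immediately
```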
Likewise, there are exploits I can specialize in, which makes them "easy" for me even if publicly they are known to be "hard" - Windows kernel heap overflows, for example. I can invest in understanding a particular attack surface, which means I can apply, say, font parsing bugs to lateral movement problems you may not realize are a possibility.
And of course, I can invest in understanding the patch system Microsoft has, and knowing which bugs they won't patch or which bugs they have messed up the patches for.
The point here is that, as an attacker, I invest in understanding and attacking vulnerability management itself as a process - both the global process and the specific processes in use by my targets.