Monday, October 31, 2016

The Cyber Kill Chain Has Killed Your Defensive Strategy

From the latest Active Defense report, which gets its own post later today. :)

So much writing from defensive strategists focuses on these sorts of abstraction levels and includes some version of this exact picture. This is because if you do not run operations or penetration testing teams, you need something to grasp onto to start modeling the kinds of problems you are having. This multi-stage approach seems to work, mentally, so you go with that.

Then you end up with "Breaking the cyber kill chain" and various other things.

Just to critique this one particular chart: What is "Malware Weaponization"? Also, is this chart delineating types of access, or an attacker's progression through time? Because I may not create malware until I have done decent internal reconnaissance of a target network. In fact, I may never establish command and control at all. I may do these things in different orders, or skip a bunch of them, or have it all automated so that it's all basically one step, or any number of things.
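
To make this concrete, here is a toy Python sketch (the stage names and orderings are invented for illustration, not taken from any real operation) of why a linear chart fails as a model. Note in particular that a defense anchored to one link of the chain, like C2 detection, never even fires on an operation that skips that link:

```python
# Toy model: the kill chain as a fixed pipeline vs. invented-but-plausible
# attacker progressions. Nothing here models a real operation.

KILL_CHAIN = ["recon", "weaponize", "deliver", "exploit",
              "install", "command_and_control", "act_on_objectives"]

real_operations = [
    # build the malware AFTER doing internal recon of the target:
    ["deliver", "exploit", "recon", "weaponize", "act_on_objectives"],
    # never establish C2 at all - a smash and grab:
    ["deliver", "exploit", "act_on_objectives"],
    # lateral movement - stages repeat for as long as the op runs:
    ["recon", "exploit", "install", "recon", "exploit", "install"],
]

for op in real_operations:
    in_chart_order = all(KILL_CHAIN.index(a) <= KILL_CHAIN.index(b)
                         for a, b in zip(op, op[1:]))
    c2_visible = "command_and_control" in op
    print(op)
    print("  follows the chart:", in_chart_order,
          "| C2 ever visible to defenders:", c2_visible)
```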

What I'm saying is this: Like in Overwatch, there is a meta-game. If you build your strategy statically, assuming attackers haven't built theirs specifically to defeat you, I'm not sure how you think you can succeed.

Or in practical terms: We are all building CrowdStrike/Mandiant/Cylance/Endgame agent spoofers right now, right? The meta-game in hacking is quite simply "what we choose to build next".

Policy people quiz: WHO IS THIS FAMOUS HACKER? :)

Doing signature checks on all traffic with ISS RealSecure? Here's ADMmutate polymorphic shellcode generation, and so on, forever. (Learning the meta-game requires you to invest in learning the technology, though. Staying up to speed on the meta-game of hacking is a huge factor in the burn-out that causes most people to leave the technical side of security for management. :( )
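
For the policy people, here is a deliberately toy Python sketch of that one round of the meta-game. The "signature" and "payload" bytes are made up, and a real ADMmutate-style encoder also has to emit a decoder stub (omitted here), but the asymmetry is visible even at this scale: the defender's signature is a static sunk cost, and the attacker's counter is a few lines of code:

```python
import os

# Invented bytes: this is not real shellcode and not a real IDS signature.
SIGNATURE = b"\x90\x90\x31\xc0"          # what the IDS greps traffic for
payload   = b"\x90\x90\x31\xc0\x50\x68"  # the attacker's raw payload

def ids_match(traffic: bytes) -> bool:
    # RealSecure-era detection: a static byte-pattern match.
    return SIGNATURE in traffic

def polymorphic_encode(data: bytes) -> bytes:
    # ADMmutate-style idea, vastly simplified: XOR with a random key so the
    # wire bytes differ on every send. (A real encoder prepends a decoder.)
    key = os.urandom(1)[0] or 1          # avoid key 0, which is a no-op
    return bytes([key]) + bytes(b ^ key for b in data)

print(ids_match(payload))                      # True: raw payload is caught
print(ids_match(polymorphic_encode(payload)))  # False: same behavior, new bytes

# The target side decodes with the same XOR:
blob = polymorphic_encode(payload)
assert bytes(b ^ blob[0] for b in blob[1:]) == payload
```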

That "Active Defense" paper deserves a real analysis, and it will get one in the next post.


Sunday, October 30, 2016

Book review: Cyber Insecurity, Harrison and Herr

This book is a collection of various people's current thoughts on cyber policy - which means quality varies. In some cases, as with Rob Lee's chapter, you have an expert with decades of experience giving extremely valuable, and in some cases new, perspectives. In others, Mailyn Fidler's chapter for example, you have a collection of Wired magazine quotes and some half-baked opinions masquerading as analysis. Note that this book is NOT available on Kindle. :(

Let me give you some quick examples from Mailyn's chapter, since this is the chapter that talks most directly about the things we all hold dear, and it's extremely bad.


All the added footnotes make it seem more authoritative, despite being ignorant nonsense.

The ability to find 0days in-house was largely a boutique capability of competent governments? The exact opposite is true. This is the problem with Mailyn writing about vulnerability markets: she cannot recognize blatant falsehoods, and she parrots whatever quotes from random news articles support her built-in prejudices, articles which are themselves of dubious value - in this case one of Morgan Marquis-Boire's Citizen Lab reports (which doesn't support her statement at all that I can see) and an Economist article's quote from a random Marine Colonel.

Regardless, even if it had been supported by two sources more in tune with her argument, the statement is obviously false. We used to recommend people start at 1993 in the Bugtraq mailing list archives and read forward from there to get a sense of our history. It's still a good idea.


Here's what she means: "I have ethical qualms with the whole idea of 0day and want to shut it down any way I can, despite having NO EXPERIENCE IN THE SUBJECT!"


Many of these papers have discussed the hilarious idea that there are SOME bugs that are just TOO DANGEROUS to use for intelligence work, and that we should instead give them to the vendors to fix. Let's talk a little bit about what happens when you send signals to a market that there is a product you absolutely must have. For example, if the VEP decides to kill "all bugs that allow someone to escape the ESXi hypervisor", then the price on the next hypervisor escape is going to be one million dollars. And the next one after that, ten million dollars. And if you won't buy it, the market will use your signal that ESXi vulns are super important to raise the price elsewhere. I'm not an economist, but that's how you turn penicillin into a rare cancer medication!

This is her argument, so let's follow it to the natural conclusion:

For the VEP to be effective when you apply it to vendors, you have to do two things:

  1. Find a way to patch bugs that the members of your market don't know about
    1. Not just pretending they got patched by the vendor's internal team, which assumes the members of this market don't have better intel than you do on these things
    2. Not silently patching them, since everyone diffs patches (see the sketch after this list)
  2. Create an export-control regime that bars vendors from selling to anyone you don't want them to
    1. Which is completely impossible
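
On point 1.2, here is roughly what "everyone diffs patches" means. Real attackers point BinDiff or Diaphora at the before-and-after binaries; this toy Python version diffs invented C source instead, but the principle is identical: the patch itself is a map straight to the bug you were trying to hide:

```python
import difflib

# Both versions of this function are invented for illustration.
before = """\
int parse_len(char *p) {
    int len = atoi(p);
    memcpy(dst, p + 4, len);
    return len;
}"""

after = """\
int parse_len(char *p) {
    int len = atoi(p);
    if (len < 0 || len > MAX_LEN)   /* the "silent" fix */
        return -1;
    memcpy(dst, p + 4, len);
    return len;
}"""

# The diff points a researcher straight at the unannounced bug.
for line in difflib.unified_diff(before.splitlines(), after.splitlines(),
                                 "v1.0/parse.c", "v1.1/parse.c", lineterm=""):
    print(line)
```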

Let's take another suggestion from Mailyn's horrible paper: the idea that centralizing all bug buying in one place is going to save you money. What happens to a market when you artificially restrict the number of customers? The price goes up exponentially, because sellers can no longer balance having multiple customers against their opportunity risk. That, or the sellers build another market, because you've introduced a layer of red tape.

affects -> effects, if we're nitpicking. :) You could take that last sentence and apply it to any policy paper at all, which is a true demonstration of this chapter's level of quality.
This is the main issue with the book: nobody wants to state things in undiplomatic language, so they often say a whole lot of nothing instead. The undiplomatic statement this chapter should have made is this: controlling information in the Internet age is damn near impossible. Bugs exist in software, and we need to get over it and move on with our lives. If we want other countries not to kill journalists by hacking them and then shooting them, we need to be brave and tell them to stop shooting journalists, or we will do more than send them a sternly worded letter.

Friday, October 28, 2016

Risk Assessment is Damn Near Impossible

Difficulties in Assessing Technical Risk


I very much recommend this kind of talk to policy-types: http://www.slideshare.net/PacSecJP/jurczyk-windows-metafilepacsecv2

Notice that even the BEST EXPERTS ON THE SUBJECT are often wrong when assessing this sort of thing.


The main point is that here is a simple counter-example to the VEP being at all realistic: bugs are often collected in libraries (or even in root code that ends up in many different implementations of libraries), and then trickle up to exploitability in one or more products built on top of them. For example, vulnerabilities in EMF/WMF file parsing can be exploited in various versions of IE, Office, the Kernel, or many other end-user-exposed systems.

The result is that no VEP-like process can truly estimate the value (or risk) of any set of vulnerabilities, because the vulnerabilities are quantum in nature - not truly there until you decide to exploit them on a particular configuration of products and platforms.
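
Here is a toy sketch of that valuation problem, with invented products and configurations. The same library bug takes on a different risk level in every product that inherits it, which is exactly what a one-bug-one-decision process like the VEP cannot capture:

```python
# All products, configurations, and scoring rules here are invented.
bug = "heap overflow in an EMF record parser"

products_using_parser = {
    "BrowserA":     {"aslr": True,  "reachable_unauth": True},
    "OfficeSuiteB": {"aslr": True,  "reachable_unauth": False},
    "KernelC":      {"aslr": False, "reachable_unauth": False},
    "ViewerD":      {"aslr": False, "reachable_unauth": True},
}

def exploit_value(cfg: dict) -> str:
    # Caricature of a valuation: same bug, wildly different outcomes
    # depending on mitigations and reachability.
    if cfg["reachable_unauth"] and not cfg["aslr"]:
        return "critical"
    if cfg["reachable_unauth"]:
        return "high (also needs an ASLR bypass)"
    return "low (needs a foothold first)"

for product, cfg in products_using_parser.items():
    print(f"{bug} via {product}: {exploit_value(cfg)}")
```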

Business Risk vs Technical Risk

Penetration testing companies often have reports which assign Low, Medium, High, or Critical risk to any particular finding. These sometimes have numbers assigned to them (these findings go up to 11!), and at many customers a "Critical" finding triggers specific internal processes that require a fix to be in place within 30 days.

So estimating risk is a no-joke kind of operation. Much time is spent arguing over each finding and its level of risk. But the key feature there is that while we can state "all cross-site scripting (XSS) vulnerabilities are of HIGH risk", even then there are many subtleties. For example, a STORED XSS that reaches many users of a web app is clearly dangerous, but some applications don't even have authentication layers, at which point a reflected XSS is totally harmless.
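
To see how context-dependent this gets, here is a toy scoring function for just the XSS case above. The rules and thresholds are invented, and a real report rating involves far more argument than this, but the same finding class swings from Critical to a non-finding purely on context:

```python
# Invented scoring rules - a sketch of context-dependence, not a methodology.
def xss_risk(stored: bool, has_auth: bool, user_count: int) -> str:
    if not has_auth and not stored:
        # No sessions to steal, nothing persists: close to a non-finding.
        return "Info"
    if stored and user_count > 1000:
        return "Critical"   # persists and hits everyone who loads the page
    if stored:
        return "High"
    return "Medium"         # reflected, but there are sessions to ride

print(xss_risk(stored=True,  has_auth=True,  user_count=50_000))  # Critical
print(xss_risk(stored=False, has_auth=False, user_count=0))       # Info
```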

And assuming you know and agree with your customer about technical risk, there is a truly VAST gap between that and agreeing on business risk. Keep in mind, the classification system itself is a designation of BUSINESS RISK to the business of the USG.

The point, as it applies to the VEP, software liability, cyber insurance, and many other policy issues, is simple: even if you understand and agree on the technical risk of a vulnerability (which you won't), you probably still won't agree on the business risk.

Thursday, October 27, 2016

Operational Gaps and the VEP

Operational Gaps as Hackers See Them


There's no doubt that if you spend your time reading the popular press you get a sense that cyber offense teams have an unlimited advantage, like reading about Ronda Rousey in her prime. But if you're a professional attacker then you spend your time with your paranoia dialed to 11, and for good reason. There are two ways to fail as a hacker. The first is to be tactically insecure - for example, as this person was when transferring the Dirty Cow exploit from one box to another in the clear like a total idiot (assuming the story is true):

This is how you get caught by just some random guy. Imagine Khrunichev's data capture capabilities on their internal infrastructure.

The other way to fail is to have a strategic insecurity, and one of the most common is having an "operational gap". The simplest way to understand how professional operators feel about hacking is that you are racing from exposure to exposure. When you SSH into a box with a stolen username and password, to take the simplest example, there is a window of time before you can use a local privilege escalation to get root access and hide the fact that you are there by editing various files such as wtmp and utmp.
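
For the curious, here is a rough read-only Python sketch of why that window matters. The record layout is the common glibc/Linux x86-64 utmp format (it varies by platform), and "mallory" is an invented username. Your login record sits in a file only root can rewrite, so until the privilege escalation lands, the evidence is just there for any admin to read:

```python
import struct

# glibc/Linux x86-64 utmp record: 384 bytes. Layout is platform-specific.
UTMP_FMT  = "hi32s4s32s256shhiii36x"   # ut_type, ut_pid, ut_line, ut_id,
UTMP_SIZE = struct.calcsize(UTMP_FMT)  # ut_user, ut_host, exit, session, tv

def logins(path="/var/log/wtmp", user=b"mallory"):
    """Yield wtmp records for one (invented) user. Read-only: actually
    REMOVING these records requires root, which is the whole point."""
    with open(path, "rb") as f:
        while (rec := f.read(UTMP_SIZE)) and len(rec) == UTMP_SIZE:
            fields = struct.unpack(UTMP_FMT, rec)
            if fields[4].rstrip(b"\x00") == user:
                yield fields

for fields in logins():
    print(fields[0], fields[4], fields[5])  # ut_type, ut_user, ut_host
```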

When a bug like Dirty Cow dies, you may not have one to replace it. That means every time you SSH into a machine and it turns out to be patched, you run the risk of getting caught without being able to clean up your logs. And getting caught once means your entire toolchain can get wrapped up. This is why operational gaps are so dangerous.

The good thing about being tied into everyone else is that when I fall into this crevasse I pull everyone else in with me!


In addition, machines you have implants on get upgraded all the time. Even hardware gets replaced sometimes! So you constantly have to rebuild your infrastructure - the covert network you have built on top of your target's real network requires a lot of maintenance. All of that maintenance means your implants need to be ported to the very latest Windows OS, or your local exploits need to work "even when McAfee is installed", or the HTTP channel you are using for exfiltration needs to support the latest version of the BlueCoat reputational proxy. This constant hive of activity is positively ant-like.

But imagine what would happen to an ant nest if the ants stopped cleaning it for just one day. That's what a gap in your toolchain feels like to a hacker. And add to that the traditional issues we all have with software development: building new parts of your toolchain can be a two-year investment. In most cases there is NO way to rush software development, and hacking is extremely sensitive to this.



Rapid Reaction Teams

That said, the original form of hacking group, even at the nation-state level, was much more vertically integrated, and every nation-state still maintains this kind of team. Hackers tend to cluster into small (10 or so people) groups which build their toolchains (exploits, RATs, implants, C2 infrastructure, analysis toolkits, etc.) on the fly. The important difference here is that with this model the people who BUILD the tools are the same as the people who USE the tools.

This has the advantage of very high OPSEC (a toolchain that is entirely unique and customized to your target set), but also some disadvantages: you cannot maintain a large set of targets, none of your toolkit is tested, and there's sometimes a delay between when you see something you need and when you get it, because you are not preparing for the future. That said, there's LESS of a delay in another sense, because a team like this will often see something they need, build it overnight, and deploy it the next day before the system administrators are even awake. As you can imagine, this is a very powerful way of operating, right up until you screw up and knock the main mail server off the network because your untested kernel implant doesn't cooperate with whatever RAID drivers your target happens to have installed.

In the nation-state arena your Rapid Reaction Team is most often specialized in going after your really hard targets, but every penetration testing company also works like this. Not only that, but most penetration testing companies have a large pile of 0day that they've found on engagements and that they frankly can't be bothered sending to vendors. Sometimes these get written up for conference talks, but usually not.

And of course, the reason you see people go on and on about Cyber Letters of Marque is that many nation-states lean on a collection of private rapid reaction forces for the majority of their capability. Without the US setting norms in this area that we're comfortable with, we're not talking apples to apples with our counterparts.

Capabilities, Costs, and Countermeasures



The difficulty in policy is that we get a lot of our understanding of how "painful" a regulation is, how effective a countermeasure is, or how much a particular capability will cost to build from public, unclassified sources. These sources are either looking at hackers who have gotten caught (aka FANCY BEAR!), or talking to open-source-information people who have a lot of experience with penetration testing companies, but not necessarily with the hybrid Rapid Reaction Team and industrialized cyber group model the modern IC (in any country) actually uses.

Using Exploits against the BORG


Hackers have ALWAYS assumed that defensive tools worked, even when they didn't.
There is also the obvious fact that, despite the optimism you see in policy papers that support the VEP, not all targets are soft targets. We build our Intelligence Community efforts so they can tackle very, very hard targets, in addition to having a high level of reliability against a medium-sized business that happens to sell submarine parts to the Iranians.

But the VEP supporters assume every company is penetrated the same way, and that our ability to do so will last into perpetuity. The Schwartz-Knake paper on the subject throws a fig leaf over the problem of exploit scarcity by just saying "we'll increase the budgets a bit to handle this".

This post tries to get policy makers into the basic paranoid operational mindset a hacker lives in day in and day out, to counteract the super-powers the media likes to portray us as having. Without going into the details of how hacking is done, it's easy to over-simplify these issues. The result is the VEP. Codifying it into law or an executive order, or expanding its reach, would be a massive mistake.

----

Updates:
Ever heard of this company? No? Well I can guarantee they have enough 0day to own the whole Internet. Comforting thoughts!

Chris Rohlf is well known in the industry, but it's worth noting that NOBODY in the technical community thinks the VEP is at all workable. Who did they run this idea through before implementing it?

Monday, October 24, 2016

CyCon debate on the VEP

On Saturday I participated in a panel at CyCon US 2016 with Ari Schwartz and Steven Bellovin on the various merits, or lack thereof, of the Vulnerability Equities Process.
That is not Ari's happy face.
Ari is, as you may remember from earlier posts on this blog, a huge fan of the VEP, and has many suggestions for codifying it in law, or at least in an Executive Order. My goal was to probe a bit and find out the root cause of why he thinks these are good ideas. The answer is simple: ethics.

There's always a surprising contingent of otherwise well-informed people who think that 0days are an ETHICAL issue, as opposed to a technology and policy issue. Much as we used to think that gentlemen don't read each other's mail, many policy makers continue to believe Microsoft's argument that the very nature of holding 0days is a dirty business the government should have no part of, no matter what damage that position causes to our national interests.

This is partially due to Microsoft and other vendors pushing this argument for purely selfish reasons (their ethical arguments always coincide nicely with their business interests), but it's also an argument built out of a kind of weird jealousy and fear of the unknown. If you haven't worked with 0days, they are at once sexy and scary, like before you realize that dominatrices are just middle-aged women with a shitty job.

During the panel Ari insisted over and over that I didn't know wtf I was talking about, that I wasn't there. He got heated as he pondered who the hell this annoying brown dude in an ill-fitting suit was to talk about these things! But nothing can erase the clear truth that none of the important studies on how vulnerabilities work were done before the VEP started. They developed policy based on their gut instincts, without any recognition of how the problem worked in real life.

The only sane policy when it comes to the VEP is not to "lean towards disclosure" but to release vulnerabilities only when they pose a clear and present danger and releasing the vulnerability would solve the problem, not add to it.

As I pointed out on the panel, the VEP is, in the parlance of international relations people, at its heart a transparency and confidence building measure. However, imagine if you went to your wife and told her a really hot girl at work hit on you, but you told her no. Would that help your relationship? In other words, not all transparency and confidence building measures are good ideas. The VEP is exactly like that example: are Microsoft and Google best friends with the USG now, or are they simply wishing we'd handle the crypto issue and National Security Letters like adults?

I spent most of my time on the panel talking about the massive operational security issues with the VEP, both short term and very long term. None of those issues have really been examined in public as far as I can tell, and high-level policy people like Ari have mostly been ignoring them. To make it simple: they are dire. Implementing the VEP the way we have has concretely damaged our most critical efforts in the cyber area.

But worse than that is the very idea of implementing any major policy around vulnerabilities without understanding, at a deep technical level, how they work. If you ask Ari this one question and he cannot answer it, you know we have failed: what percentage of the vulnerabilities we sent through the VEP were ones that our adversaries also had and were using? For bonus points, ask him to name a few bug classes and see what he says.
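
Since nobody has published that number, here is a back-of-envelope Monte Carlo sketch of the collision question. Every parameter below is an assumption I invented for illustration, which is precisely the point: the VEP was built without anyone measuring any of them:

```python
import random

POOL       = 10_000  # assumed exploitable bugs in a target codebase
US_FINDS   = 50      # bugs we find and send through the VEP
THEM_FINDS = 50      # bugs an adversary independently finds
TRIALS     = 10_000

def overlap() -> int:
    # Assumes independent, uniform discovery - almost certainly wrong,
    # but nobody has published a better model.
    ours   = set(random.sample(range(POOL), US_FINDS))
    theirs = set(random.sample(range(POOL), THEM_FINDS))
    return len(ours & theirs)

avg = sum(overlap() for _ in range(TRIALS)) / TRIALS
print(f"expected collisions: {avg:.2f} of {US_FINDS} bugs "
      f"({100 * avg / US_FINDS:.1f}%)")  # ~0.25 bugs under these assumptions
```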

The problem with the VEP is really this: we didn't start by asking the right questions. We started with an ethical judgment that the marketing teams in Silicon Valley hypnotized us into, and we hoped for the best.


Friday, October 14, 2016

The Russia Question

Right now people in the policy space are asking themselves what to do about the Russians hacking the DNC and basically every political operative in DC, and leaking the results to WikiLeaks or via Guccifer 2.0. The best article right now is on FP, which has as its conclusion: something must be done, and soon.

But what?

First of all, one of the big misunderstandings about cyber war is that it is somehow an inherently asymmetric operation. But if you run retaliation against Russia as a thought experiment, you will see just how hard targeting a real attack is. There are two areas of cyber operations that are impossibly expensive at scale: maintenance and targeting.

It is hard to explain just how mind-bogglingly expensive these two items are to policymakers who haven't written a rootkit lately.

There are a few other features of cyber operations that have massive sunk costs (building a real computer is one of them) but just in terms of operational cost, real targeting is going to be something done by T-Rex-sized behemoths. Which is why when the Germans asked me how amazing Israel was in this space I said it was largely marketing, because it is. No country that size can compete at this level.

And the Indominus Rex hybrid mutant mega-fauna in this space is still the USG. Despite the attitude of helplessness policymakers hold publicly, a retaliation is 100% doable, could be done entirely in cyberspace, and probably some operators are sharpening their swords against the aged FreeBSD servers that run strana.ru as we speak.

"MMM. AGING UNIX INFRASTRUCTURE TASTES LIKE CHICKEN!"


Offensive operations take a long time though - always twice as long as you think they will, no matter how much padding you've added, like house renovations. The Bene Gesserit used to just tell their victims "You will be punished" and nothing more, and frankly it is scarier that way.

Let me put it this way though: we manage the IT of our campaigns and election systems like complete idiots.

The only smart solution for securing our election systems is to centralize them and assess them, and even OPEN them so they can be assessed by third parties. Every other option is clear insanity, as every security professional has said for more than a decade.

And having been involved in trying to help with the security of a Presidential campaign, I can say that they are not securable the way they are run now. If you were going to do it right, you'd use the same level of security you would with any other billion-dollar enterprise: you'd have an IT department handing out special-purpose iPhones and ChromeBooks, and you'd have professionals helping you secure things the same way you have professionals making your campaign videos.

If we want the USG to retaliate when the DNC gets owned because our electoral process is that important, we have to start with those two steps.

On another note, some people have asked me why I do so much critique without offering my own cyber war book, and I would say there is a long body of work dating back to 2011 here and here, and of course many Prezis on the subject here. But largely we at Immunity use our model of cyber war to put our money where our mouth is, which is why we now sell INNUENDO and other tools that test for modern threats. :)

Thursday, October 13, 2016

Talking about exploits and bugs!

So I wanted to invite the readers of this blog to an event that's all about discussing vulnerabilities. POLICY PEOPLE ARE WELCOME TO DRINK ON US :)


Sunday, October 9, 2016

Book Review: Cyber War vs Cyber Realities

Book Review of the Day! 

Cyber War vs Cyber Realities
by Brandon Valeriano and Ryan C. Maness

Brandon Valeriano is Senior Lecturer in the Department of Politics and Global Security at the University of Glasgow.

Ryan C. Maness is Visiting Fellow of Security and Resilience Studies at Northeastern University.


Ok, so I want to start by saying that the advantage of giving a book like this some time is that you can see whether the predictions made by the authors came true, or whether the way they did NOT come true exposes a flaw in the book's strategic argument. This book is from August 2015, and already its predictions are demonstrably far off.

And here's why:
This is clearly where the book demonstrates it is about advocacy and not strategic analysis.
What happens in the academic world is that people get SCARED of cyber war. This is especially true of people with no experience in it. When people say they are not "technical" but still want to write about cyber war, you always wonder: how do they know they are not just babbling gibberish? What they end up doing, as in the image above, is wishing the whole cyber war thing would "just go away" and finding rationalizations to that point. This book is a 600-page rationalization, but I read all these things because otherwise they have a tendency to go unchallenged.

This, for example, is what happens when the authors simply quote random sections from the same five papers instead of applying technical understanding. You cannot deface websites with cross-site scripting!
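
For the record, here is a toy (deliberately vulnerable, entirely invented) Python handler showing what reflected XSS actually is. The injected script runs in the visitor's own browser; the content the server sends everyone else, and the files on its disk, never change:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class Vulnerable(BaseHTTPRequestHandler):
    def do_GET(self):
        # BUG: user input echoed back without escaping.
        name = parse_qs(urlparse(self.path).query).get("name", [""])[0]
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(f"<h1>Hello {name}</h1>".encode())

# A victim lured to:
#   http://host:8000/?name=<script>steal(document.cookie)</script>
# runs the attacker's script ("steal" is a hypothetical function) in their
# own session. That is session theft, not defacement: the site is untouched.

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Vulnerable).serve_forever()
```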

The crux of their argument is that the behavior of cyber weapons is too uncertain for them to be used. This is a direct reflection of the authors' failure to get comfortable with the technical details of the subject matter. They then argue that because of this (and not the thousand other factors involved) States have been quite restrained in their use of cyber war technologies. They have a database of various statistics on various cyber conflicts. The weakness here is that it's nearly impossible to create a database of cyber conflict: open source information in this area is spotty at best, and even when available, it's hard to interpret without a deep technical and geopolitical background.

To quote them: "Restraint dominates because of fears of blowback, the normative taboo against civilian harm, and the problem that, once used, a cyber weapon is rendered usable by others." (p. 111)

This is a false analysis. "Fear of blowback" is always an issue when going offensive, but countries seem to think it's not a problem (cf. current Russian activities against the US GOP/DNC) for almost every part of what they do in cyber.

Likewise, it's a common misconception in academia that once used, a cyber weapon can be turned against you. Stuxnet is the clear counter-example! And while "civilian harm" is one thing in the laws of war, the norm is clearly established that countries can hack civilian institutions without regard to consequences, and when they make mistakes and, say, break a country's main gateway router, that's ok too.

The terms you don't read about in academic books like this one are "CNE" and "CNA". CNA is a line that governments cross only sometimes, but the prep work for CNA is done constantly. That's the clear norm!

What this book never does is follow the money trail of cyber war. There is no analysis of how much these efforts cost, or of how those costs and OPSEC risks are interlinked. You just can't do that analysis without deep technical understanding, and without that factor the strategic analysis of this book is useless.

As a side note: SHAMOON WAS MOST LIKELY NOT A REACTION TO STUXNET. (But more likely to an attack on the Iranian Oil Ministry, which would make a lot of sense, right?) When the book talks about this attack it also says: "Yet we see that the impact was not as dramatic as was initially thought." But Shamoon was a message to Saudi Arabia that their oil capacity was at risk. It was not designed to take that oil capacity offline, although Iran likely could have done that. This is another example of the book going off into the weeds, to be honest.

Advocacy rears its ugly head here:



Why would they state that the Government probably knew about Heartbleed? The Government explicitly said it did not, and there is NO evidence, from Snowden or otherwise, that it did. This kind of statement conclusively demonstrates the aim of the book, beyond what I would consider a reasonable doubt.

This is a book of fear. I will close with a snapshot from the conclusion, which is a classically Trumpian argument against listening to the experts in a field:

Thanks for reading! :)

-----------------------------------------

More screenshots and notes:
Real peer review should have caught a lot of things in this book, but the first thing that needs to happen is this: if you are writing a book about cyber war, you cannot be confused about what cross-site scripting is and then blame it on being a "Security Researcher, not a tech professional".

XSS cannot be used to deface websites.

"Intrusions need to be added to software" is gibberish.


Yeah, that's not true.

Cyber methods are clear and evident? Most would...disagree. 
This is hilariously wrong.

Definitely false.

Jesus, this is just not true.

IRONY DURING THIS ELECTION.

... arg.