Thursday, December 28, 2017

A Permanent Revolution




I wanted to end the year on a positive note by highlighting people, some of whom you won't know, who I think represent a new wave of technical cyber policy experts doing great work across the subjects this field needs.


I'm not saying that this team agrees with each other on every issue, but as a whole, the community is changing to be more technical and more reality focused, and that's a good thing, and a lot of progress was made there this year. The vectors are trending up, and the enemy's gate is always down. :)

Wednesday, December 27, 2017

A slow acceptance

It's worth putting the latest Foreign Affairs piece by Susan Hennessey into context.

I'm still curious which line, in theory, the OPM hack crossed.
It's been obvious to many of the readers that part of the reason this blog even exists is because a lot of the members of the offensive community found it perplexing that our strategic policy centers were so off base. Last year I had a whiskey-and-policy meeting with a former govt official in the space and when he asked why I was so worked up over VEP and Wassenaar I said "Because I'm sick of getting our asses handed to us by Wikileaks and a dozen other bit players because we can't figure out where first base is let alone hit a home run once in a while!"

I see Susan Hennessey's piece as a way to begin acclimatizing the policy world to the idea that drastic changes need to take place. Her piece is on deterrence, but every part of the cyber policy community is heavily linked, and in weird ways. You don't get deterrence without making some sort of grand bargain on crypto backdoors, in other words.

The last line is telling. It is exactly worth pointing out that not only did the last policy fail, but that it failed in predictable ways for predictable reasons.

For fifteen years we've had people at the top of the cyber policy food chain who gave only nominal support to the positions their technical community cared deeply about. The State Dept cyber team and the Obama White House cyber team not only failed to see, or care about, the obvious ensuing chaos while signing the Wassenaar Arrangement; they didn't know who to call to ask about it even if they had cared. It's essentially a sign of hostility to the technical community that they would ban penetration testing software without so much as sending a Facebook message to any of the companies in the States that sell penetration testing software. That hostility is the root cause of why we can't have deterrence, or other nice things.

But this has changed. There is hope, as General Leia would say. But that hope comes at the cost of acknowledging not just failure, but why we failed.

Sunday, December 24, 2017

Book Review: On Cyber: Towards An Operational Art for Cyber Conflict

Authors: Greg Conti and David Raymond

Annoyingly and ironically this book is only available in paperback, and not in electronic format.
 
I spent Christmas Eve on the beach re-reading this book. Moments later some seagulls issued a flank attack and stole my apple pastry from me. :(
So I went through this book carefully looking for serious flaws. I came up with a few minor issues instead. But this and Matt Monte's book are the books that should be getting read by teams looking to get up to speed from a military angle. Maybe I would add Relentless Strike as well.

The reason this book works is that résumé matters. You don't see tons of quotes from the authors lifted from the traditional canon of B.S. policy papers or Wired magazine articles. Nothing in this book quotes a NY Times article that everyone in the know has already discounted as a disinformation effort via targeted leaks.

I'm not saying this to be harsh - but it's a fact that almost all the books in this space suffer from a lack of experience in the area. These authors know what they're talking about in both of the domains this book straddles and it would be clear even if you didn't know who they were. The book quotes Dual Core and Dan Geer as easily as Clausewitz. 

If there are gaps in the book, they come from a refusal to go the extra mile philosophically, presumably to avoid ruffling feathers in the policy world. What does it mean that cyber operations can engage in N-dimensional flanking operations? The authors often point to contentious issues in how traditional thought runs without directly naming and shaming. Tell me again how the US copyright regime is in some way technologically different from the Chinese effort against Falun Gong?

When it comes to predictions, the book fails to predict the worm revolution we're in now and is heavily focused on AI and scale, since the US military is so focused on C2-based operations, but that's a myopia that can only be corrected after operational planners have mastered the basics of maneuver in cyber. It's a US focused book, but what else would you expect?

The book also could use more direct examples than it has - if for no other reason than that examples push the concepts better than raw text does. The authors get close to adopting the offensive community's definition of a cyber weapon, but fail to mention Wikileaks, for example. What is a click-script? Why do they exist? I want to ask this book, just to have it written down in a way future operators need to see. There are real gaps here, and I'm not sure if they're intentional efforts at abstraction.

A good cyber operations class for future officers, in the US military and beyond, would do well to expand upon this book's chapters with direct examples from their own experience. But even if all they do is assign this book as required reading, they'll have done pretty well.


Saturday, December 23, 2017

Innocent until Covertly Proven Guilty



Tom Bossert made some interesting publicized comments on the Wannacry worm a few days ago. Some of the media questions were leading and predictable. There was the usual blame-the-NSA VEP nonsense which he pushed back on strongly and (imho) correctly. Likewise, there was the International Law crowd trying to claw back relevance.

Mostly what we learned from the press conference is that Tom Bossert is smart and knows what he's talking about. He also realistically pointed out that the DPRK has done pretty much everything wrong a State can do, and hence we've essentially already emptied our policy toolbox over their heads.

But, of course, he also made a comment on the MalwareTechBlog/Marcus Hutchins case, essentially saying that we got lucky that he registered the Wannacry killswitch domain. Sam Varghese over at ITWire immediately wrote an article claiming I had egg on my face for positing that MalwareTechBlog in fact had prior knowledge of Wannacry and was not being honest about his efforts. In fact, I had bet @riotnymia some INFILTRATE tickets that this would go the other way. Looks like she should book a trip! :)

A more balanced approach was taken by TechBeacon, taking into account Brian Krebs's article.

Marcus himself has been busy calling me stupid on the Internet, which I find amusing, insomuch as I've been around a lot of people in legal trouble over the years: various members of the TJMaxx hacking incident, a bunch I won't mention currently going through legal issues with computer hacking, and, even more oddly, a romantic relationship with someone whose family got accused of murder (and who also hooked up famed 4th Amendment lawyer Orin Kerr with his wife, fwiw, because the legal world is positively tiny).

Here's what I know about all people in those positions: They are essentially driven insane, like portraits shattered by a hammer. Orin, surprisingly, will argue against all evidence that we treat cyber criminals the same in the States as overseas. But we don't. We resolutely torture people and companies accused of hacking based on essentially tea-leaf reading from law enforcement (on one hand) or our intelligence organizations (in the case of nation state attribution).

Kaspersky, of course, is one of those. And it's interesting how the stories change from the newspaper leaks (was involved in an FSB op) to the standing statements from the podiums of government officials across the world, which state only that Kaspersky presents "an unnecessary risk when placed in areas of high trust". What we've learned is that the UK and Lithuania have both also essentially banned Kaspersky.

In other words: We live in a world where nothing is as it seems, except when it is.




Monday, December 18, 2017

The Anagram of Offense

"Stronger, Safer, Together", "Crawl, Walk, Run" and other trite phrases often heard on policy podcasts. :)


So over the weekend I made a few people mad on the Twitters by suggesting that the internet white hat group I Am The Cavalry was wasting its time with its IoT security advocacy, some of which has turned into law, various Commerce Department guidance, FDA regs, etc.

On one hand, more secure IoT devices are obviously good, right? But on the other hand, when the rubber hits the regulatory road, you get a weird mix of "please don't have built-in backdoor passwords on your IoT devices" and "please make all IoT devices updatable". These types of regulations attempt to fix point-problems with existing technology in a way that may or may not introduce bigger systemic risk.

The government has an interest in reducing systemic risk on the Internet as a whole. This is read by various agencies as a license for additional regulatory actions since that is almost the only tool in their box. But everyone on offense realizes we cannot do it the way that I Am The Cavalry wants to.

The Mirai worm is an example of this issue: a couple of kids built a massive IoT botnet that was then used to DDoS various networks. DDoSes are a known issue, typically take one company off the map for a while, and are very hard to prevent, since prevention comes down to doing filtering in a distributed and robust way against an adversary, which is not a fun problem to have.

But when they DDoSed Dyn, a provider of DNS, they caused actual disruption on the internet. But instead of trying to solve the problem of having a centralized weak point running an obsolete protocol that we depend on for literally everything, we've decided to try to make an internet where nodes can be trusted, which we know is impossible!

Additionally, requiring point solutions for IoT devices may introduce more systemic risk than we are comfortable with. Because it's impossible to say "I want SECURE updates to all IoT devices" and have any two experts agree on what that means, we have to say we want them "signed cryptographically". But these updates are coming from places that we know we cannot trust - small vendors are often weak targets, and the supply chain gets only weaker from there.
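As a concrete illustration of how underspecified "signed cryptographically" is, here is a minimal, hypothetical update verifier. It uses HMAC purely to stay dependency-free; a real IoT scheme would use asymmetric signatures so the device holds only a public key, and the key-custody problem flagged in the comments is exactly the small-vendor weakness described above:

```python
import hashlib
import hmac

# Hypothetical sketch: one narrow reading of "signed cryptographically".
# HMAC is symmetric, so this exact design means the signing key must live
# at the vendor AND be reachable from every device build system - i.e.,
# the weak supply chain is still the real problem.

VENDOR_KEY = b"vendor-secret-key"  # a compromise of the vendor leaks this

def sign_update(firmware: bytes) -> bytes:
    """Vendor side: produce a tag over the firmware image."""
    return hmac.new(VENDOR_KEY, firmware, hashlib.sha256).digest()

def verify_update(firmware: bytes, tag: bytes) -> bool:
    """Device side: constant-time check before flashing."""
    expected = hmac.new(VENDOR_KEY, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

firmware = b"\x7fELF...new-firmware-image"
tag = sign_update(firmware)
assert verify_update(firmware, tag)             # legitimate update accepted
assert not verify_update(firmware + b"X", tag)  # tampered image rejected
```

Two experts can both call this "signed" while disagreeing about everything that matters: where the key lives, how it's rotated, and what happens when the vendor itself is popped.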

It is as if we tried to implement regulations to write SECURE PHP code so every Wordpress site didn't become a font of usernames and passwords for hackers. All of these ideas are on their face, a waste of time, which is why the offensive community tends to look at organizations solving problems OTHER than the centralized weak points as a bit silly.

I posed this point to one of the government boards looking at the IoT issue, and was told it was not helpful, but hopefully this blog answers why I wrote them this in the first place. Offensive security is almost always about finding centralized weak points that your adversary has forgotten about, or does not realize need protection, and a lot less about busting through the security layers they have in place. That's the whole ball game, every day, for the last 20 years for most of us in the industry.

An easy example is this: If your team isn't freaking out about this vulnerability in GoAhead Web Server, then they are clearly missing situational awareness.

I understand that instead of "simple" regulatory and legal fixes, this requires shepherding new massive engineering and technical efforts through the political sphere, but I still think if we want to move the dial, we have to engage in a way that truly changes the terrain.

(Secretive Sniff You is a good anagram for Offensive Security :) )

Wednesday, December 6, 2017

A Better Norm for Enforced Backdoors

This is the kind of joke you only can see in a Wonder Woman comic for what should be obvious reasons.

So various people in the government think they can force a private American company to implement a backdoor in their product without a warrant. But they also say they haven't done this yet.

Part of the reason why is that doing classified work in non-classified environments comes with risk - i.e., part of the reason classification systems are effective is that people in the system have signed off on the idea. Threats of prosecution only go so far as a preventative measure against leaks (as we are now hyper-aware).

The other major reason is that, as a matter of policy, forced backdoors are terrible in a way that is visibly obvious to anyone and everyone who has looked at them. We want to claim a "Public Private Partnership", and that's a community-wide thing, and this is a tiny community.

What everyone is going to expect with a public-private partnership is simple: Shared Risk. If you ask the Government if they're going to insure a company for the potential financial harm of any kind of operation, including a backdoor, they'll say "hell no!". But then why would they expect a corporation to go along with it? These sorts of covert operations are essentially financial hacks that tax corporations for governments not wanting to pay the up-front costs of doing R&D on offensive methods, and the companies know it.

The backdoors problem is the kind of equities issue that makes the VEP look like the tiny peanuts it is and it's one with an established norm that the US Government enforces, unlike almost every other area of cyber. Huawei, Kaspersky, and ZTE have all paid the price for being used by their host governments (allegedly). Look at what Kaspersky and Microsoft are saying when faced with this issue: "If asked, we will MOVE OUR ENTIRE COMPANY to another Nation State".

In other words, whoever is telling newspapers that enforced backdoors are even on the table is being highly irresponsible or doesn't understand the equities at stake.

Tuesday, December 5, 2017

The proxy problem to VEP

Ok, so my opinion is the VEP should set very wide and broad guidelines and never try to deal with the specifics of any vulnerability. To be fair, my opinion is that it can ONLY do this, or else it is fooling itself, because the workload involved in the current description of any VEP is really, really high.

One point of data we have is the Chinese Vulnerability reporting team apparently takes a long time on certain bugs. My previous analysis was that they used to take bugs they knew were blown and then give them to various Chinese low-end actors to blast all over the Internet as their way of informally muddying the waters (and protecting their own ecosystem). But a more modern analysis indicates a formal and centralized process perhaps.

So here's what I want to say, as a thought experiment: Many parts of the VEP problem completely map homomorphically to finding a vulnerability and then asking yourself if it is exploitable.

Take the DirtyCow vulnerability, for example. Is it at all exploitable? Does it affect Android? How far back does the vulnerability go? Does it affect GRSec'd systems? What about RHEL? What about stock kernel.org systems with uncommon configurations? What about systems with low memory, or systems with TONS OF MEMORY? What about systems under heavy load? What about future kernels - is this a bug likely to still exist in a year?

Trees have roots and exploits get burned, and there's a strained analogy in here somewhere. :)

The list of questions is endless, and each question takes an experienced Linux kernel exploitation team at least a day to answer. And that's just one bug. Imagine you had a hundred bugs, or a thousand bugs, every year, and you had to answer these questions. Where is this giant team of engineers that, instead of writing more kernel exploits, is answering all these questions for the VEP?
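To make the workload concrete, here's a back-of-the-envelope sketch; every number in it is an illustrative assumption, not a real figure:

```python
# Back-of-the-envelope sketch of the VEP triage workload described above.
# Every number here is an illustrative assumption, not a real figure.

bugs_per_year = 1000            # "a hundred bugs, or a thousand bugs, every year"
questions_per_bug = 8           # exploitability, Android, GRSec, RHEL, configs...
analyst_days_per_question = 1   # "at least a day" per question, per bug

total_days = bugs_per_year * questions_per_bug * analyst_days_per_question

team_size = 5                   # hypothetical kernel-exploitation team
working_days_per_year = 220     # per analyst

years_of_backlog = total_days / (team_size * working_days_per_year)
print(f"{total_days} analyst-days, about {years_of_backlog:.1f} years of work "
      f"for a team of {team_size}")
```

Even with these made-up numbers, the triage queue alone eats several years of a scarce team's time per year of bugs, which is the whole point about precision at the upper reaches of government.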

Every team who has ever had an 0day has seen an advisory come out, and said "Oh, that's our bug" and then when the patch came out, you realized that was NOT your bug at all, just another bug that looked very similar and was even maybe in the same function. Or you've seen patches come out and your exploit stopped working and you thought "I'm patched out" but the underlying root cause was never handled or was handled improperly.

We used to make a game out of second guessing Microsoft's "Exploitability" indexes. "Oh, that's not exploitable? KOSTYA GO PROVE THEM WRONG!"

In other words: I worry about workload a lot with any of these processes that require high levels of technical precision at the upper reaches of government.

Tuesday, November 28, 2017

Matt Tait is Keynoting INFILTRATE 2018!

So I know INFILTRATE is not aimed at the security policy crowd, but Matt Tait, formerly of GCHQ and Google Project Zero, and now a senior fellow at the Robert S. Strauss Center for International Security and Law at the University of Texas at Austin is going to give a keynote this year that I think the audience of this blog will want to attend.

You may, of course, know him only as @pwnallthethings or because he was involved in our running Russian Election drama, but I've honestly never met someone who had both the technical chops that Matt Tait has, and the ability to absorb the legal and policy area, communicate it, and project how it will fold in spacetime in the future.

I spent some time last week talking to him about his speech, and I already know it's good. :)

So if you are not already registered, you should be!


Tuesday, October 31, 2017

The Year of Transparency

I'm just going to quote a small section here of Rob Graham's blog on Kaspersky, ignoring all the stuff where he calls for more evidence, like everyone does, because it's boring and irrelevant.
I believe Kaspersky is guilty, that the company and Eugene himself, works directly with Russian intelligence.

That's because on a personal basis, people in government have given me specific, credible stories -- the sort of thing they should be making public. And these stories are wholly unrelated to stories that have been made public so far.

There's a lot to read from the Kaspersky press release on the subject of their internal inquiry. But the main thing to read from it is that the US information security community has already had a master class on Russian information operations and yet the Russians still think we will fall for it.

If any of you have a middle schooler, you know that they will gradually up the veracity of their lies when they get caught skipping school. "I was on time"->"I was a bit late"->"I missed class because I was sick"->"I just felt like playing the new Overwatch map so I didn't go to school."

In the Kaspersky case we are led to believe that Eugene was completely caught out by these accusations, and at the same time that, in 2014, someone brought him a zip file full of unreleased source code for NSA tools, which he immediately ordered deleted without even looking at it and without asking any detailed questions about the matter.

This is what all parents call: Bullshit.

The US likely has multiple kinds of evidence on KasperskyAV:

  • SIGINT from the Israelis which has KEYLOGS AND SCREENSHOTS of bad things happening INSIDE KASPERSKY HQ (and almost certainly camera video/audio which are not listed in the Kaspersky report but Duqu 2.0 did have a plugin architecture and no modern implant goes without these features)
  • Telemetry from various honeypots set up for Kaspersky analysis. These would be used to demonstrate not just that Kaspersky was "pulling files into the cloud" but HOW and WHEN and using what sorts of signatures. There is a difference to how an operator pulls files versus an automated system, to say the least. What I would have done is feed the Russians intel with codewords from a compromised source and then watched to see if any of those codewords ended up in silent signatures.
  • HUMINT, which is never mentioned anywhere in any public documents, but you have to assume the CIA was not just sitting around in coffee bars wearing tweed jackets all this time wondering what was up with this Kaspersky thing they keep reading about. Needless to say, the US does not go to the lengths it has gone to without at least asking questions of its HUMINT team.
I know the Kaspersky people think I have something against them, which I do not, or that I have inside info, which I also do not. But the tea leaves here literally spell the hilarity out in Cyrillic, which I can, in fact, read. 
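The codeword experiment described in the second bullet above is essentially a canary-token test: seed each channel with a unique marker, then see which markers later surface in signatures you can observe. A minimal sketch, with all names and data invented:

```python
import secrets

# Hypothetical sketch of the codeword-canary test described above: give
# each feed/channel a unique, unguessable codeword, then check which
# codewords later show up in "silent signatures" pulled from telemetry.
# All channel names and observed data here are invented.

def make_canaries(channels):
    """One unique codeword per channel, so a leak identifies its source."""
    return {ch: f"PROJECT-{secrets.token_hex(4).upper()}" for ch in channels}

def leaked_channels(canaries, observed_signatures):
    """Which channels' codewords appeared in observed signature strings?"""
    blob = " ".join(observed_signatures)
    return sorted(ch for ch, word in canaries.items() if word in blob)

canaries = make_canaries(["compromised-source-a", "honeypot-b"])
# Pretend we later pulled this signature string from AV telemetry:
observed = [f"silent-sig: *{canaries['honeypot-b']}*.docx"]
print(leaked_channels(canaries, observed))  # -> ['honeypot-b']
```

The design point is that uniqueness per channel is what turns "our stuff leaked" into "our stuff leaked via this specific path".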




Wednesday, October 11, 2017

The Empire Strikes Back

XKCD needs to calculate the strength of those knee joints in a comic for us.


It's fascinating how much of the community wants to be Mulder when it comes to Kaspersky's claims of innocence. WE WANT TO BELIEVE. And yet, the Government has not given out "proof" that Kaspersky is, in fact, what they claim it is. But they've signaled in literally every way possible what they have in terms of evidence, without showing the evidence itself. This morning Kaspersky retweeted a press release from the BSI which, when translated, does not exonerate the company so much as ask the USG for a briefing, which I'm sure they will get.

Likewise, where there is one intelligence operation, there are no doubt more. Kaspersky also runs Threatpost and a popular security conference. Were those leveraged by Russian intelligence as well? What other shoes are left to drop?

Reports like this rewrite our community's history: Are all AV companies corrupted by their host governments? Is this why Apple refused to allow AV software on the iPhone, because they saw the risk ahead of time and wanted to sell to a global market?

If I was Russian intelligence leveraging KAV I would make it known that if you put a bitcoin wallet on your desktop, and then also bring tools and documents from TAO home to "work from home" and you happen to have KAV installed, your bitcoin wallet would get donations. No communication needed, no risky contacts with shady Russian consulate officials. Nothing convictable as espionage in a court of law. Maybe I would mention this at the bar at Kaspersky SAS in Cancun.

But the questions cut both ways: is the USG going to say they would never ask an American AV company to do this? The international norms process is a trainwreck, and the one thing they hang their hats on is "we've agreed to not attack critical infrastructure" - but defining the trusted computing base of the Internet as a whole was left as a problem for the "techies".

We see now the limitations of this approach to cyber diplomacy, and the price.





Saturday, September 16, 2017

The Warrant Cases are Pyrrhic Victories

The essential question in Trusted Computing has always been "Trusted FROM WHOM?" and the answer right now is from the Government.


Trusted Computing is Complex

So a while back I had two friends who I hung out with all the time, and because we knew almost no women, after we worked a full day at the Fort we would go back to their house and try to code an MP3 decoder, or work on smart card security (free porn!), or any number of random things.

One of my friends, Brandon Baker, went off to Microsoft and ended up building the Hyper-V kernel and working on this little thing called Palladium, which got renamed the Next-Generation Secure Computing Base and then, because of various political pressures relating to creating an entirely new security structure based on hardware PKI, was buried.

But it didn't die - it has been slowly gaining strength and being re-incarnated in various forms, and one of those forms is Azure Confidential Computing.

People have a hard time grasping Palladium because without all the pieces, it is always broken and makes no sense, and most of those pieces are in poorly documented hardware. But the basic idea is: What if Microsoft Windows could run a GPG program that it could not introspect or change in any way, such that your GPG secret key was truly secret, even from the OS, even if a kernel rootkit was installed?

Of course, the initial concept for Palladium was mostly oriented towards DRM, in the sense of having a media player that could remotely authenticate itself to a website and a secured keyboard/screen/speaker such that you couldn't steal the media. This generated little interest in the marketplace and the costs for implementation were enormous, hence the failure to launch.
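Both the GPG case and the DRM case reduce to measurement plus remote attestation: hash the code running in the protected mode, have hardware sign that hash, and let a remote party compare it to a known-good value. A toy sketch, with invented names; a real system roots the signing step in a TPM or enclave with asymmetric keys, where this substitutes an HMAC:

```python
import hashlib
import hmac

# Toy sketch of the remote-attestation idea behind Palladium/NGSCB.
# In real hardware the signing key is fused into silicon and the verifier
# checks an asymmetric signature; HMAC stands in for that step here, and
# all names are invented for illustration.

HARDWARE_KEY = b"fused-into-silicon"   # never visible to the OS or rootkits

def measure(code: bytes) -> bytes:
    """Hash of the program the secure mode is about to run."""
    return hashlib.sha256(code).digest()

def attest(code: bytes):
    """'Hardware' signs the measurement so a remote party can check it."""
    m = measure(code)
    return m, hmac.new(HARDWARE_KEY, m, hashlib.sha256).digest()

def remote_verify(measurement, quote, known_good) -> bool:
    """Remote side: is this really the GPG/media-player binary we expect?"""
    ok_sig = hmac.compare_digest(
        quote, hmac.new(HARDWARE_KEY, measurement, hashlib.sha256).digest())
    return ok_sig and measurement == known_good

gpg = b"trusted gpg binary"
known_good = measure(gpg)          # published by the vendor
m, quote = attest(gpg)
assert remote_verify(m, quote, known_good)                       # genuine
assert not remote_verify(measure(b"rootkit"), quote, known_good) # rejected
```

The part that makes this hard in practice is exactly what the sketch hides: keeping the key and the measurement process out of reach of a kernel that is assumed hostile.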


"Winning" on warrants. The very definition of Pyrrhic Victories.

Law Subsumed by Strategy

There's a sect among the Law Enforcement, national security, and legal community that looks upon Microsoft and Google's court cases on extra-territorial warrant responses as an impingement of the natural rights of the US Nation State.

It's no surprise that the legal arguments are disjointed on both sides. Effectively the US position is that the government should be able to collect whatever data it wants from Google or Microsoft, because the data is accessible from the US, and because they want it. Google and Microsoft have stored that data on overseas servers for many reasons, but partly because their customers, both international and domestic, think the US State no longer has that natural right - that it is as primitive as Prima Nocte. In addition, their employees think the US has failed to go to bat on these issues for Google/Microsoft/etc. in China and the EU. This isn't necessarily true, but it is true that the USG has treated the populations that make up the technology elites as if their opinions are not relevant to the discussion.

Law is not a Trump Card

The problem with making the US Government the primary foe in every technology company's threat model is that those companies can very quickly adapt to new laws by building systems they themselves cannot introspect, which is what Azure Confidential Computing is. But that's just the beginning. Half their teams come from the NSA and CIA technology arms. They know how to cause huge amounts of pain to our system while staying within regulations and laws, and they have buy-in from the very tops of their organizations.

This was all preventable. If we'd had decent people in the executive team killing the Apple lawsuit last year, and finding some way to come to an agreement and end the crypto war, we could have prevented Going Dark from being a primary goal of all of the biggest companies (I.E. even at Financials). We needed to be able to negotiate with them in good faith to maintain a balance of "The Golden Age of Metadata" with what they and their customers wanted.

We didn't have anyone who could do that. As in so many pieces of the cyber-government space, we may have missed our window to prevent the next thread of the international order from unraveling.

Thursday, September 7, 2017

Opaque cyber deterrence efforts

From

Pakistan's Nuclear Policy: A Minimum Credible Deterrence

By Zafar Khan
Figuring out what cyber operations can and can't deter is most similar to figuring out what percentage of your advertising budget you are wasting. That is: you know 90% of your cyber deterrence isn't working, you just don't know which 90%.

That said, so much more of cyber deterrence is based around private companies than we are used to working with in international relations. Kaspersky may or may not have been used for ongoing Russian operations, and the deterrent effect of banning them from the US market will have a long reach. This mix is complicated and multi-faceted. Some of the hackers that ran China's APT1 effort now work for US Anti-Virus companies.

Modern thinkers around deterrence policy often look at only declared overt deterrence, of the type North Korea is currently using. But covert deterrence is equally powerful and useful and much more applicable to offensive cyber operations where there is no like-for-like comparison between targets or operational capability.

But cyber does have deterrent effects - knowing that someone can out your covert operatives by analyzing the OPM and Delta Airline databases can deter a nation-state from operating in certain ways.

The question is whether non-nation-state actors also have opaque cyber deterrence abilities and how to model these effects as part of a larger national security strategy - for example, via Google's Project Zero. And it's possible that the majority of cyber deterrence will at least pretend to be non-nation-state efforts, such as ShadowBrokers.

Technically, deterrence often means the ability to rapidly respond and neutralize offensive cyber tools. Modern technology such as endpoint monitoring, or country-wide network filtering, can provide an effective deterrence effort when provided with input from SIGINT or HUMINT sources that effectively neutralizes potential offensive efforts by our adversaries.

Wednesday, August 23, 2017

The Cyber Coup

Twitter posts are now official government statements - and this introduces interesting follow-on systemic weaknesses to cyber warfare. Do you think we can defend the Twitter infrastructure against a Nation-State? Or even an interested non-nation state player? Is that now the NSA's problem?

Because if not, the payload looks something like this:





UNGGE and Tallinn 2.0 Revisited

https://amp.theguardian.com/world/2017/aug/23/un-cyberwarfare-negotiations-collapsed-in-june-it-emerges

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3024405 (Paper from Mike Schmitt and Liis Vihul on this)

So I want to bring us back to social insects and point out that basically everything we know always turns out to be wrong, but in a weird way. For example, I was taught growing up via whatever biology classes and nature shows I watched, that the bees have a queen, and the queen lays all the eggs and the workers do her bidding via chemical cues or whatever because they are so closely related to her.

But what turns out to be true is a thousand times more complex, because workers can also lay eggs, and often choose to as a strategy. And that means that the simple model in my head of how a nest works is all off-kilter - tons of energy in the nest has to be dedicated to maintaining order. That brings us to their paper:

This right here is where Mike Schmitt and Liis Vihul go wrong... 

Ok, so the paper is very much a last stand defense for the Tallinn process and the rest of the work that Mike and crew have put into stretching the Barney costume of normal international law over the mastodon that is the cyber domain. Look, I've met Mike Schmitt and he has an IQ of something like 250, but he's dead wrong on this whole thing and it's getting painfully obvious to the whole community.

The place he goes most awry is in the paragraph highlighted above: he thinks states have territory and that territory extends into cyberspace, which it just doesn't. I get that the implications of that are complicated and quite scary, but he runs straight off a philosophical cliff when he says that any "physical change", including replacing hard drives, is going to trespass on sovereignty. The real world has the FBI conducting operations all over the world because we don't know where something is once it hits the Tor network, and frankly we don't care. We are going to ./ and let the courts sort them out.

Everyone reading this blog could build scenario after scenario that challenges his arguments around the applicability of various aspects of international law in cyberspace based on his initial fallacy, but until we gather a group of people around and have some sort of intervention ceremony with him it's going to be impossible for him to internalize it.

I don't think he can read Tallinn 2.0 and notice that his bevy of "Experts" have created a document that reads EXACTLY LIKE THE TALMUD, with everyone agreeing on some things, disagreeing on some other things, and making exactly zero sense the whole time when applied to modern cyber operations. This is the kind of thinking that lets them draw nonsensical derivations about there being some sort of physical-component line you could draw between "violates sovereignty" and "PERFECTLY OK" as if Stuxnet never happened.

The authors also warn of any further operations saying that if they get caught:

"This could lead to further “Westphalianization” of the internet, as well as increased data localization, which runs counter to the long-term U.S. policy objective of the free flow of information." 

I'm pretty sure that's already happened. Did we write this paper in a time machine, perhaps?


Tuesday, August 8, 2017

Strategic Plateaus in the Cyber domain

One thing I think that surprises many people who don't play video games is how similar the strategies for them all are. It's as if Chess and Checkers and Go all had the same basic gameplay.

In most online shooters, you have characters with a high "skill ceiling" that require precise aim and maneuvering, and others which have the ability to soak up damage or cause area effects or heal their friends, which generally require more positioning and strategy understanding.

And as new characters are introduced to a game, or existing characters are tweaked, the strategies that work best overall change.



In Overwatch, the most popular game among hackers right now, you have "Dive Comp" and "Deathball Comp". These represent "fast, deadly characters and chaotic rampage" vs. "healthy armored characters and slow advance". If you're going with the right team composition and strategy you can overcome even very serious disadvantages in your "mechanics" (shooting skill, reaction times, etc.). I.e., your team gains an asymmetric advantage until the other teams copy you and catch up.

Which technique works best is generally called "the current meta" and trickles down from the pro players to the very lowest ranks of Overwatch (where nobody should honestly care, but they still really do). New meta shifts in Overwatch, despite the continual changes introduced by every patch, are extremely rare, perhaps once a year! The game designers say this is essentially because people are bad at finding and testing new strategies. It is a rare skill: you almost have to be pretty good at a new strategy already to know if it even really works. I call this a strategic plateau, because it LOOKS like the meta is still one way, but it's really another way, yet to be discovered until someone gets good enough at some new way of operating.

And yet, the cyber domain is even more choppy than any computer game could ever be. Things change at a tremendous rate, and people generally look at the "Cyber Meta" as a static thing! Either we are in the "Botnet Meta" or the "Worm Meta". We either do "Client side attacks" or we do "SQLi attacks". So many people think the cyber meta is what the West Coast's VC funded machine tells them it is at RSA or in Wired Magazine!

Getting this right is a big bet - some might point to recent events by saying it is a bet of global importance. Investment in a high end "Man on the Side" technology stack can run you into the billions. You'd better hope the meta doesn't change until your investment pays off. And what are the strategic differences between TAO-style organizations and the Russian/Chinese way? It's possible to LOSE if you don't understand and adapt to the current up-to-date Meta of the domain you are in, no matter what your other advantages are.

Grugq has a whole talk on this, but everyone is going to divide it differently in their head and be really crazy about it, the way people are when I use Torbjorn on attack. Also, why isn't "Kaspersky" in my spreadsheet yet! :) Also: Do you have a similar spreadsheet? IF SO SHARE.

No matter how you define the "Deathball" or "Dive Comp" of the cyber domain, you also need to analyze in depth how modern changes in the landscape affect them and make them stronger or weaker. "Bitcoin and Wikileaks as a service" may have replaced "Russian Intel" as a threat against giant teams of operators, for example. Endpoint defenses and malware analysis and correlation may have advanced to the point where remote worms have become much stronger in the meta.

But the real fun is in thinking up new comps to run - before QUANTUMINSERT was done, someone had to imagine it fully fledged in their head. Before the Russians could run a destructive worm from a tiny contractor team that hit up an accounting firm, someone already had a certainty in their mind that it would work. And so that's the real question I'm asking everyone here. What's the next meta? What does your dark shadow tell you?



Monday, August 7, 2017

DDIRNSA posts about VEP

Former DDIRNSA (who just retired) posted this today, and it accurately reflects his feelings on the VEP debate, I assume.

https://www.lawfareblog.com/no-us-government-should-not-disclose-all-vulnerabilities-its-possession

There's nothing in there that would surprise someone who regularly reads this blog though - essentially he gives no credence to the argument that we should be giving up all our vulnerabilities to vendors.

Likewise, he appears to be miffed that people are blaming WannaCry/NotPetya on the NSA, as you might expect.

Oh, also I want to mention the things he didn't say would be good compromises, which tend to be offered as "halfway points" from people who have never been in this business. He didn't say "Let's only keep 0day for a few months" or "Let's only keep certain kinds of 0day - the not important ones". All those ideas are terrible, and get offered again and again by various policy arms as if they are going to magically get better over time.


Saturday, August 5, 2017

The Killswitch story feels like bullshit

If you haven't watched the INFILTRATE keynote from Stephen Watt here then you need to do that, especially if you are a lawyer who specializes in cyber law. INFILTRATE is where you hear about issues that affect the community in the future, and you should register now! :)


But let me float my and others' initial feeling when MalwareTech got arrested: The "killswitch" story was clearly bullshit. What I think happened is that MalwareTech had something to do with Wannacry, and he knew about the killswitch, and when Wannacry started getting huge and causing massive amounts of damage (say, to the NHS of his own country) he freaked out and "found the killswitch". This is why he was so upset to be outed by the media.

Being afraid to take the limelight is not a typical "White Hat" behavior, to say the least.

That said, we need to acknowledge the strategic impact law enforcement operations as a whole have on national security cyber capabilities, and how the lighter and friendlier approach of many European nations avoids the issues we are having here in the States.

Pretty much every infosec professional (yes, even the ones in the IC!) knows people who have been indicted for computer crimes now. And in most of those cases, the prosecution has (as in the video above) operated in what is essentially an unfair, merciless way, even for very minor crimes. This has massive strategic implications when you consider that the US Secret Service and FBI often compete with Mandiant for the handling of computer intrusions, and the people making the decisions about which information to share with Law Enforcement have an extremely negative opinion of it.

In other words: Law Enforcement needs to treat hacker cases as if they are the LAPD prosecuting a famous actor in LA. Or at least, that's the smartest thing to do strategically, and something the US does a lot worse than many of our allies.

Wednesday, August 2, 2017

0days: Testing, and Timing

Perfect timing is everything...
So there's another reason why nation states use 0days: Testing. Testing on exploits is particularly hard. All software is hard, but exploits are "things that are not supposed to work" by definition. This means not only are you testing them against a few VMs you have laying around but also against, say, every version of AVG's free virus protection you have ever seen, on every version of Windows possible, with every possible configuration.

As you can imagine, that problem is "exponential" in the way that computer scientists use the term when they mean "a complete freakin' nightmare". Of course, the only REAL test is whether an exploit "works in the wild". This is a whole different level of "working", and "works in the wild" is a label attached to many exploits to say "yes, this one is at that next level of quality".
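The blow-up is easy to see in a toy sketch. Every axis and value below is invented purely for illustration - a real QA matrix has far more axes than this:

```python
# Toy model of an exploit QA matrix: the number of test runs is the
# cross product of every environment axis you care about, and it grows
# multiplicatively each time you add an axis. All values are made up.
from itertools import product

os_versions = ["Win7-SP1", "Win8.1", "Win10-1607", "Win10-1703"]
av_products = ["AVG-Free", "Defender", "Kaspersky", "Symantec", "none"]
mitigations = ["DEP+ASLR", "DEP-only", "legacy"]
locales     = ["en-US", "de-DE", "ja-JP"]

test_matrix = list(product(os_versions, av_products, mitigations, locales))
print(len(test_matrix))  # 4 * 5 * 3 * 3 = 180 runs for ONE exploit revision
```

And that's before you multiply by patch levels, service configurations, and versions of the target software itself.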

And some exploits, even very good exploits, like ETERNALBLUE appears to be, fail the testing. (According to reporting, it was known as "ETERNAL BLUESCREEN" since it often crashed targets.)

When exploits fail the testing phase, you don't give up on them. You pass them off to different exploitation teams for more analysis, you wait for the code in the target process to change (which is often successful). You wait for another bug to be found to combine with this bug. Persistence, which is a key metric of your operational success, is not just about the hacking part, in other words.

And even if your exploit is PERFECT you still have many things to do before you can use it. It needs to be integrated into your toolset. People need to be trained on it. Targets need to be collected and triaged. Operational security notes need to be written. What do you do when the exploit fails? Does it leave logs? How do you clean those up? How do I tune my defenses to detect if the Russians are already using this vulnerability (and have therefore tuned their OWN defenses to detect it)?

All of these things take time and we could, in some cases, be talking several years. Your average penetration testing gig is maybe two weeks long. These processes are similar, but not the same. So be careful extrapolating operational work from penetration testing too much. The Grugq has a good presentation you can read on this evolution here.

Needless to say, attacking with vulnerabilities which are already well known has a negative impact on your OPSEC. But it also may mean the targets you most care about (which have an average patch testing cycle of 14 days) fall out of your reach.

Tuesday, August 1, 2017

Do you need 0days? What about oxygen?

I always enjoy it when people say that you don't need 0days to gather cyber intelligence as a nation state, such as in today's SearchSecurity article about the BlackHat discussion on the VEP.

Technically, you don't need covert intelligence at all. Open Source information can be just as good in many cases. But then, there are also cases (and I'm struggling to avoid bombast here) where covert collection is desired. And from a military standpoint, there are many cases where hidden pre-placement on an enemy network is desired.


The answer to "Do you need 0days" is "Yes."

Intelligence and military work is quite different from penetration testing work. This should go without saying, but let's delve a bit into the "how" to see why exactly 0days are so useful.

First of all, in penetration testing you rarely sit on a target network for months or years collecting data like you do in intelligence. And you rarely need that data to be "untampered with" - i.e., we don't want our signals intelligence collection to be double agents feeding us false data. Implants in general have received a lot less attention in the public penetration testing sphere than in the intelligence sphere. FLAME is still generations ahead of what a typical penetration testing company would use. I say this because our "somewhat similar to FLAME" framework INNUENDO is in that market space, and the people who buy it are typically large banks looking to emulate nation-state threats, not small and midsize penetration testing companies.

The thing is this: Using a non-0day exploit means IDS systems can silently catch you, and then burn and turn your implant network against you. This is a non-trivial risk. Human lives are OFTEN ON THE LINE and when they are not, billion dollar SIGINT programs are.

In intelligence, you need to overcome every network visibility and management tool the defender has, and the defender only has to detect you once. Also in many cases you simply cannot fail when doing intelligence operations in the cyber domain. In penetration testing you can get away with writing a report that says "You have no unpatched vulnerabilities on your system." This is, most of the time, what the customer really wants!

In intelligence work you have a much higher bar. Get in, get out, be undetected, for years at a time, and the consequences for failure are unimaginable. This is where 0days fit in, as part of a mature intelligence capability that takes into account the real risk structure of the world of mirrors.




Monday, July 31, 2017

Rebecca Slayton can write and has a cool name


This is a really great paper you can read here. Highly recommend it for its much more in-depth analysis of how offense is probably expensive. At the end, she goes into a cost/benefit of both offense and defense of Stuxnet - which in my opinion is the weakest part of the paper. You can't say vulnerabilities cost 0 dollars if they came from internal sources. And you can't just "average" the two possibilities you have and come up with a guess for how much something was.

I mean, the whole paper suffers a bit from that: If you're not intimately familiar with putting together offensive programs, there are many many moving pieces you don't account for in your spreadsheet. That's something you learn when running any business really. On the other hand, she's not on Twitter, so maybe she DOES have experience in fedworld and just doesn't want to go into depth?

Also, there's no discussion of opportunity costs. And a delay of three months on releasing a product, whether a web application or a nuclear bomb, can be infinitely expensive.

But aside from that, this is the kind of work the policy world needs to start doing. Five stars, would read again.

I mean, the simpler way of saying this is the NSA mantra, which is that whoever knows the network better, controls it. And defenders obviously have a home-field advantage... :)

Quotas as a Cyber Norm

Originally I chose this picture as a way of illustrating perspective around different problems we have. But now I want a giant scorpion pet! So win-win!

Part of the security community's issue with the VEP is that it addresses a tiny problem that Microsoft has blown all out of proportion for no reason, and distracts attention from the really major and direct policy problems in cyber, namely:

Vulnerabilities have many natural limits - like giant scorpions needing oxygen. If nothing else, it costs money to find them and test and exploit them, even assuming they are infinite in supply, which I assure you they are not. Likewise, vulnerabilities can be mitigated by a company with a good software development practice - there is a way for them to handle that kind of risk. A backdoored cryptographic standard or supply chain attack cannot be mitigated, other than by investing a lot of money in tamper proof bags, which is probably an unreasonable thing to ask Cisco to do. 

Deep down, forcing the bureaucracy to prioritize on actions that have no "cost" to them but high risk for an American company makes a lot more sense than something like the VEP, which imposes a prioritization calculus on something that is already high cost to the government.

Essentially what I'm asking for here is this: Limit the number of times a year we intercept a package from a vendor for backdooring. Maybe even publish that number? We could do this only for certain countries, perhaps? There are so many options, and all of them are better than "We do whatever we want to the supply chain and let the US companies bear those risks."

Likewise, do we have a policy on asking US companies to send backdoored updates to customers? Is it "Whenever it's important as we secretly define it?"

Imagine if China said, "Look, we backdoor two Huawei routers a year for intelligence purposes." Is that an OK norm to set?


Friday, July 28, 2017

A partial retraction of the Belfer paper

https://www.lawfareblog.com/what-you-see-what-you-get-revisions-our-paper-estimating-vulnerability-rediscovery

"Did not ever look at the data - hope the other two data sources are ok?"

As a scientist, I had the same reaction to the Belfer VEP paper that a climate scientist would when confronting a new "science" paper telling me the Earth was going into an ice age. So you can imagine my "surprise" when it got partially retracted a few days after publication, in response to my spending a few spare hours on it the afternoon after it was originally released.

Look, on the face of it, this paper's remaining numbers cannot be right. There are going to be obvious confounding variables and glaring statistical flaws in any version of this document that claims 15% of all bugs collide between two independent bug finders within the conditions this paper uses. They haven't released the new paper and data yet, so we will have to wait to find out what they are. But if you're in the mood for a Michael Bay movie-style trailer, I will say this: The answer is fuzzers, fuzzers, and more fuzzers. A better title for this paper would have been "Modern fuzzers are very complex and have a weird effect on bug tracking systems and also we have some unsubstantiated opinions on the VEP".

The only way to study these sorts of things is to get truly intimate with the data. This requires YEARS OF WORK reading reports daily about vulnerabilities and also experience writing and using exploits. Nobody in policy-land with a political science or international relations background wants to hear that. It sounds like something a total jerk would say. I get that and for sure Ari Schwartz gets that. But also I get that this is not a simple field where we can label a few things with a pocket-sized methodology and expect real data, the way this paper tried to.

An example of how not being connected to the bugs goes horribly wrong has been published on this blog previously, when a different Belfer "researcher" (Mailyn Fidler) had to correct her post on Stuxnet and bug collisions TWICE IN A ROW because she didn't have the understanding of the bugs themselves necessary to know when her thesis was obviously wrong.

As in the case of the current paper, she eventually claimed her "conclusion didn't change" just because the data changed drastically. That's a telling statement that divides evidence-based policy creation from ideological nonsense. Just as an ideological point of reference, Bruce Schneier, one of the current paper's authors, also was one of the people to work with the Guardian on the Snowden Archive.

The perfect paper on bug collision would probably find that the issue is multi-dimensional, and hardly a linear question of a "percentage". And any real effort to determine how this policy works against our real adversaries would be TOP SECRET CODEWORD at a minimum.




Friday, July 21, 2017

Something is very wrong with the Belfer bug rediscovery paper

This is what the paper says about its Chromium data:
Chrome: The Chrome dataset is scraped from bugs collected for Chromium, an open source software project whose code constitutes most of the Chrome browser. On top of Chromium, Google adds a few additional features, such as a PDF viewer, but there is substantial overlap, so we treat this as essentially identical to Chrome. Chrome presented a similar problem to Firefox, so to record only vulnerabilities with a reasonable likelihood of public discovery, we limited our collection to bugs labeled as high or critical severity from the Chromium bug tracker. This portion of the dataset comprises 3,397 vulnerability records of which there are 468 records with duplicates. For Chrome, we coded a vulnerability record as a duplicate if it had been merged with another, where merges were noted in the comments associated with each vulnerability record, or marked as a Duplicate in the Status field.

The problem with this methodology is simply that "merges" do not indicate rediscovery in the database. The vast majority of the findings relied upon for the paper are false positives.



To look at this, I went through their spreadsheet and pulled out those 468 records. Then I examined them on the Chromium bug tracking system. The vast majority of them were "self-duplicates" from the automated fuzzing and crash detection systems.
I'm a Unix hacker so I converted it to CSV and wrote a Python script to look at the data. Happy to share scripts/data.

Looking at just the ones that have a CVE or got a reward makes more sense. There are probably only 45 true positives in this data set (i.e. the ones with CVE numbers). That's 1.3%, which agrees with the numbers from the much cleaner OpenSSL bug database (2.4%) from this paper.



---
Notes:
Example false positives in their data set:

  • This one has a CVE, but doesn't appear to be a true positive other than people noticed things crashed in many different ways from one root cause.
  • Here someone at Google manually found something that clusterfuzz also found.
  • Here is another clear false positive. Here's another. Literally I just take any one of them, and then look at it.
  • Interesting one from Skylined, but also a false positive I think.



Thursday, July 20, 2017

Decoding Kaspersky



http://fortune.com/2017/07/16/kaspersky-russia-us/

Although various internet blowhards are hard at work asking for "More information to be released" regarding why the US is throwing Kaspersky under the bus, that's never going to happen. It's honestly easiest to get in the press by pretending to be in disbelief as to what the United States is doing in situations like this.

I say pretend because it's really pretty clear what the US is saying. They are saying, through leaks and not-so-subtle hints, that Kaspersky was involved in Russian operations. It's not about "being close to the Kremlin" or historical ties between Eugene Kaspersky and the FSB or some kind of DDoS prevention software. Those are not actionable in the way this has been messaged at the highest levels. It's not some sort of nebulous "Russian software" risk. It's about a line being crossed operationally.

The only question is whether you believe Eugene Kaspersky, who denies anything untoward, or the US Intelligence Community, which has used its strongest language and spokespeople as part of this effort and has no plans to release evidence.

And, in this particular case, the UK intel team (which has no doubt seen the evidence) is backing the US up, which is worth noting, and they are doing it in their customary subtle but unmistakable way, by saying at no point was Kaspersky software ever certified by their NCSC.

The question for security consultants, such as Immunity, is how we advise our US-based clients - and looking at the evidence, you would have to advise them to stop using Kaspersky software. Perhaps your clients are better off with VenusTech?

I'm pretty sure this AV company is deceased!

Bugmusment 2017


The Paper itself:


Commentary:
Note that the paper in the selected area would be TS/SCI for both us and China. :)

(To be honest, I don't think it even does this)



Cannot be true.

Ok, so I can see how it went. After the Rand paper on 0day collisions came out, the writers of existing papers trying to point out how evil it was that the Government knew about 0days were a bit up a creek without a paddle or even a boat of any kind.

Because here's the thing: The Rand paper's data agrees with every vulnerability researcher's "gut feeling" on 0day collision. You won't take a 5% over a year number to a penetration testing company and have them say, "NO WAY THAT IS MUCH TOO LOW!"

But if you were to take a 20% number to them, they would probably think something was wrong with your data. Which is exactly what I thought.

So I went to the data! Because UNLIKE the Rand paper, you can check out their GitHub, which is how all science should work. The only problem is, when you dig into the data, it does not say what the paper says it does!

Here is the data! https://github.com/mase-gh/Vulnerability-Rediscovery

From what I can tell, the Chromium data is from fuzzers, which naturally collide a lot - especially since, in most cases, the "rediscovery" I can click on is from the exact same fuzzer, just hitting the same bug over and over in slightly different ways. The Android data I examined manually had almost all collisions from various libstagefright media parsing bugs, which are from fuzzers. A few seemed to be errors. In some cases, a CVE covers more than one bug, which makes it LOOK like there are collisions when there are not. This is a CVE issue more than anything else, but it skews the results significantly.
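To make the distinction concrete, here's a minimal sketch (with entirely invented records) of the check that matters: a duplicate only counts as independent rediscovery if the reports came from different finders, not from one fuzzer hitting the same root cause over and over:

```python
# Invented bug reports: (root_cause, finder). A pair of reports on the
# same root cause from the SAME fuzzer is a self-duplicate, not a
# collision between independent bug finders.
from collections import defaultdict

reports = [
    ("heap-oob-parse_mp4", "libFuzzer-media"),
    ("heap-oob-parse_mp4", "libFuzzer-media"),      # same fuzzer: self-duplicate
    ("uaf-dom-node",       "ClusterFuzz"),
    ("uaf-dom-node",       "external-researcher"),  # independent rediscovery
]

finders_by_bug = defaultdict(set)
for bug, finder in reports:
    finders_by_bug[bug].add(finder)

independent = [b for b, finders in finders_by_bug.items() if len(finders) > 1]
print(independent)  # only 'uaf-dom-node' counts: 1 of 2 bugs, not 2 of 2
```

A naive count of "records with duplicates" would call both bugs collisions; counting distinct finders halves the rate even in this four-row toy.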

Ok, so to sum up:
The data I've looked at manually does not look like it supports the paper. This kind of research is hard specifically because manual analysis of this level of data is time consuming and requires subject matter experts.

It would be worth going in depth into the leaked exploits from ShadowBrokers etc. to see if they support any of the figures used in any of the papers on these subjects. I mean, it's hard not to note that Bruce Schneier has access to the Snowden files. Maybe there are some statistics about exploits in there that the rest of us haven't seen and he's trying to hint at?

This was the paragraph in the paper that worried me the most. There is NO ABILITY TO SCIENTIFICALLY HAVE ANY LEVEL OF PRECISION AS CLAIMED HERE.

Wednesday, July 19, 2017

An important note about 0days

https://twitter.com/ErrataRob/status/886942470113808384

For some reason the idea that patches == exploits - and that therefore any VEP-like program that releases patches is basically also trickling out exploits - is hard to understand if you haven't done it.
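A toy sketch of why this is so: diff the pre- and post-patch source (both snippets are invented here) and the added lines point straight at the vulnerable code path, which is most of the work of rediscovering the bug:

```python
# Invented before/after source for a hypothetical patch. Diffing the two
# versions highlights exactly the bounds check the vendor added, and
# therefore exactly where the overflow was.
import difflib

before = """int copy_name(char *dst, const char *src) {
    strcpy(dst, src);
    return 0;
}"""

after = """int copy_name(char *dst, const char *src) {
    if (strlen(src) >= NAME_MAX) return -1;
    strcpy(dst, src);
    return 0;
}"""

diff = difflib.unified_diff(before.splitlines(), after.splitlines(), lineterm="")
added = [l for l in diff if l.startswith("+") and not l.startswith("+++")]
print(added)  # the one added line is the missing bounds check
```

Real-world patch diffing works on binaries rather than source and takes more tooling, but the principle is the same: the fix is a map to the bug.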



Also, here's a very useful quick note from the head of Project Zero:


Issues with "Indiscriminate Attacks" in the Cyber Domain

c.f.: https://www.cyberscoop.com/petya-malmare-war-crime-tallinn-manual/ 

The fundamental nature of targeting in the cyber domain is very different from conventional military standards. In particular, with enough recon, you can say with a high degree of confidence "Even though I released a worm that will destroy every computer it touches, I don't think it will kill anyone or cause permanent loss of function for vital infrastructure."

For example, if I have SIGINT captures that say that the major hospitals have decent backup and recovery plans, and the country itself has put their power companies on notice to be able to handle computer failures, I may have an understanding of my worm's projected effects that nobody else does or can.

Another clear historical exception is when my destructive payload is only applicable to certain very specific SCADA configurations. Yes, there are going to be some companies that interact poorly with my exploits and rootkit, and will have some temporary damage. But we've all decided that even a worm that wipes every computer is not "destroying vital infrastructure" unless it is targeted specifically at vital infrastructure and in a way that causes permanent damage. Sony Pictures and Saudi Aramco do still exist, after all, and they are not "hardened targets".

The main issue is this: You cannot know, from the worm or public information, what my targeting information has told me, and you cannot even begin to ask until you understand the code. Analyzing Stuxnet took MONTHS OF HARD WORK. And almost certainly, this analysis was only successful because of leprechaun-like luck, and there are still many parts of it which are not well understood.

So combine the inability to determine after the fact whether a worm or other tool was released with minimal chance of death or injury (because you don't know my targeting parameters) with the technical difficulty of examining my code itself for "intent", and International Law frameworks end up on a Tokyo-level shaky foundation. Of course, the added complication is that all of cyber goes over civilian infrastructure - which moots that angle as a differentiating legal analysis.

Many of the big governmental processes try to find a way to attach "intent" to code, and fall on their faces. The Wassenaar Arrangement's cyber regs are one of them. In general, this is a problem International Law and Policy students will say exists in every domain, but in cyber it's a dominant disruptive force.

In other words, we cannot say that NotPetya was an "indiscriminate weapon".