Thursday, December 28, 2017

A Permanent Revolution

I wanted to end the year on a positive note, by highlighting people, some of whom you won't know, who I think represent a new wave of technical cyber policy experts doing great work on the various subjects needed in this area.

I'm not saying that this team agrees with each other on every issue, but as a whole, the community is changing to be more technical and more reality focused, and that's a good thing, and a lot of progress was made there this year. The vectors are trending up, and the enemy's gate is always down. :)

Wednesday, December 27, 2017

A slow acceptance

It's worth putting the latest Foreign Affairs piece by Susan Hennessey into context.

I'm still curious what line, in theory, the OPM hack crossed.
It's been obvious to many readers that part of the reason this blog even exists is that a lot of the offensive community found it perplexing that our strategic policy centers were so off base. Last year I had a whiskey-and-policy meeting with a former government official in the space, and when he asked why I was so worked up over the VEP and Wassenaar I said "Because I'm sick of getting our asses handed to us by Wikileaks and a dozen other bit players because we can't figure out where first base is, let alone hit a home run once in a while!"

I see Susan Hennessey's piece as a way to try to begin to acclimatize the policy world that drastic changes need to take place. Her piece is on deterrence, but every part of the cyber policy community is heavily linked and in weird ways. You don't get deterrence without making some sort of grand bargain on crypto backdoors, in other words.

The last line is telling. It is exactly worth pointing out that not only did the last policy fail, but that it failed in predictable ways for predictable reasons.

For fifteen years we've had people at the top of the cyber policy food chain who gave only nominal support to the positions their technical community cared deeply about. The State Dept cyber team and the Obama White House cyber team not only failed to see, or care about, the obvious ensuing chaos when signing the Wassenaar Arrangement; they didn't know whom to call to ask about it even if they had cared. It's essentially a sign of hostility to the technical community to ban penetration testing software without so much as sending a Facebook message to any of the companies in the States that sell penetration testing software. That hostility is the root cause of why we can't have deterrence, or other nice things.

But this has changed. There is hope, as General Leia would say. But that hope comes at the cost of acknowledging not just failure, but why we failed.

Sunday, December 24, 2017

Book Review: On Cyber: Towards An Operational Art for Cyber Conflict

Authors: Greg Conti and David Raymond

Annoyingly and ironically this book is only available in paperback, and not in electronic format.
I spent Christmas Eve on the beach re-reading this book. Moments later, seagulls executed a flanking attack and stole my apple pastry from me. :(
So I went through this book carefully looking for serious flaws. I came up with a few minor issues instead. But this and Matt Monte's book are the books that should be getting read by teams looking to get up to speed from a military angle. Maybe I would add Relentless Strike as well.

The reason this book works is that résumé matters. You don't see tons of quotes from the authors lifted from the traditional canon of B.S. policy papers or Wired magazine articles. Nothing in this book quotes a NY Times article that everyone in the know has already discounted as a disinformation effort via targeted leaks.

I'm not saying this to be harsh - but it's a fact that almost all the books in this space suffer from a lack of experience in the area. These authors know what they're talking about in both of the domains this book straddles and it would be clear even if you didn't know who they were. The book quotes Dual Core and Dan Geer as easily as Clausewitz. 

If there are gaps in the book, they are in a failure to go the extra mile philosophically, to avoid ruffling feathers in the policy world. What does it mean that cyber operations can engage in N-dimensional flanking operations? The authors often point to contentious issues in traditional thinking without directly naming and shaming. Tell me again how the US copyright regime is in some way technologically different from the Chinese effort against Falun Gong?

When it comes to predictions, the book fails to anticipate the worm revolution we're in now, focusing instead on AI and scale, since the US military is so invested in C2-based operations. But that's a myopia that can only be corrected after operational planners have mastered the basics of maneuver in cyber. It's a US-focused book, but what else would you expect?

The book could also use more direct examples than it has, if for no other reason than that examples carry the concepts better than raw text does. The authors get close to adopting the offensive community's definition of a cyber weapon, but fail to mention Wikileaks, for example. What is a click-script? Why do they exist? I want to ask this book, just to have the answers written down in a way future operators need to see. There are real gaps here, and I'm not sure if they're intentional efforts at abstraction.

A good cyber operations class for future officers, in the US military and beyond, would do well to expand upon this book's chapters with direct examples from their own experience. But even if all they do is assign this book as required reading, they'll have done pretty well.

Saturday, December 23, 2017

Innocent until Covertly Proven Guilty

Tom Bossert made some interesting publicized comments on the Wannacry worm a few days ago. Some of the media questions were leading and predictable. There was the usual blame-the-NSA VEP nonsense which he pushed back on strongly and (imho) correctly. Likewise, there was the International Law crowd trying to claw back relevance.

Mostly what we learned from the press conference is that Tom Bossert is smart and knows what he's talking about. Likewise, he realistically pointed out that DPRK has done pretty much everything wrong a State can do, and hence we've essentially emptied our policy toolbox over their heads already.

But, of course, he also made a comment on the MalwareTechBlog/Marcus Hutchins case, essentially saying that we got lucky that he registered the Wannacry killswitch domain. Sam Varghese over at ITWire immediately wrote an article claiming I had egg on my face for my positing that MalwareTechBlog in fact had prior knowledge of Wannacry and was not being honest about his efforts. In fact, I had bet @riotnymia some INFILTRATE tickets that this would go the other way. Looks like she should book a trip! :)
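For context, the widely reported killswitch behaved roughly like the sketch below: the malware tried to resolve a hardcoded, unregistered domain and stood down if the lookup succeeded. This is a hedged reconstruction for illustration only, not Wannacry's actual code; the domain and function names here are invented.

```python
import socket

# Hypothetical stand-in for the hardcoded killswitch domain. NOT the real one.
KILLSWITCH_DOMAIN = "some-long-unregistered-name.invalid"

def should_detonate(domain, resolve=socket.gethostbyname):
    """Mimic the reported killswitch logic: if the domain resolves
    (i.e., someone registered it), stand down; otherwise proceed."""
    try:
        resolve(domain)
        return False  # Lookup succeeded: killswitch tripped, abort.
    except OSError:
        return True   # Lookup failed: no killswitch, continue.
```

Registering the domain flipped every new infection from "proceed" to "stand down" at once, which is exactly why who registered it, and when, and with what foreknowledge, became the contested question.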

A more balanced approach was taken by TechBeacon, taking into account Brian Krebs's article.

Marcus himself has been busy calling me stupid on the Internet, which I find amusing inasmuch as I've been around a lot of people in legal trouble over the years: various members of the TJMaxx hacking incident, a bunch I won't name who are currently going through computer-hacking cases, and, even more oddly, a romantic relationship with someone whose family was accused of murder (and who also introduced famed 4th Amendment lawyer Orin Kerr to his wife, fwiw, because the legal world is positively tiny).

Here's what I know about all people in those positions: They are essentially driven insane, like portraits shattered by a hammer. Orin, surprisingly, will argue against all evidence that we treat cyber criminals the same in the States as overseas. But we don't. We resolutely torture people and companies accused of hacking based on essentially tea-leaf reading from law enforcement (on one hand) or our intelligence organizations (in the case of nation state attribution).

Kaspersky, of course, is one of those. And it's interesting how the stories change from the newspaper leaks ("was involved in an FSB op") to the standing statements from government officials across the world, which say only that Kaspersky presents "an unnecessary risk when placed in areas of high trust". What we've learned is that the UK and Lithuania have both also essentially banned Kaspersky.

In other words: We live in a world where nothing is as it seems, except when it is.

Monday, December 18, 2017

The Anagram of Offense

"Stronger, Safer, Together", "Crawl, Walk, Run" and other trite phrases often heard in policy podcasts. :)

So over the weekend I made a few people mad on the Twitters by suggesting that the internet white hat group I Am The Cavalry was wasting its time with its IoT security advocacy, some of which has turned into law, various Commerce Department guidance, FDA regs, etc.

On one hand, more secure IoT devices are obviously good, right? But on the other hand, when the rubber hits the regulatory road, you get a weird mix of "please don't ship built-in backdoor passwords on your IoT devices" and "please make all IoT devices updatable". These types of regulations attempt to fix point-problems with existing technology in a way that may or may not introduce bigger systemic risk.

The government has an interest in reducing systemic risk on the Internet as a whole. This is read by various agencies as a license for additional regulatory actions since that is almost the only tool in their box. But everyone on offense realizes we cannot do it the way that I Am The Cavalry wants to.

The Mirai worm is an example of this issue: a couple of kids built a massive IoT botnet that was then used to DDoS various networks. DDoSes are a known issue that typically takes one company off the map for a while, and they are very hard to prevent, since prevention comes down to filtering in a distributed and robust way against an adversary, which is not a fun problem to have.

But when they DDoSed Dyn, a provider of DNS, they caused actual disruption across the internet. Yet instead of trying to solve the problem of having a centralized weak point running an obsolete protocol that we depend on for literally everything, we've decided to try to build an internet where every node can be trusted, which we know is impossible!

Additionally, requiring point solutions for IoT devices may introduce more systemic risk than we are comfortable with. Because it's impossible to say "I want SECURE updates to all IoT devices" and have any two experts agree on what that means, we fall back to saying we want them "cryptographically signed". But these updates come from places we know we cannot trust: small vendors are often weak targets, and the supply chain only gets weaker from there.
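To make the "cryptographically signed" point concrete, here is a minimal integrity-check sketch. It uses a symmetric HMAC purely for brevity (real update schemes use asymmetric signatures, with only the public key on the device); the key and function names are invented for illustration. Note that the systemic-risk argument is visible even in this toy: whoever holds the vendor key, however small and poorly defended that vendor is, can sign anything.

```python
import hashlib
import hmac

# Hypothetical vendor key. In a real scheme this would be an asymmetric
# private key held on the vendor's build infrastructure, which is exactly
# the weak link in the supply chain the post is describing.
VENDOR_KEY = b"small-iot-vendor-build-server-key"

def sign_update(firmware: bytes, key: bytes = VENDOR_KEY) -> bytes:
    """Produce a tag the device checks before flashing the image."""
    return hmac.new(key, firmware, hashlib.sha256).digest()

def verify_update(firmware: bytes, tag: bytes, key: bytes = VENDOR_KEY) -> bool:
    """Constant-time comparison; reject anything the key holder didn't sign."""
    return hmac.compare_digest(sign_update(firmware, key), tag)
```

A regulation can mandate that devices run a check like `verify_update`, but it cannot mandate that the build server holding the key is itself defensible, which is the point about small vendors above.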

It is as if we tried to write regulations mandating SECURE PHP code so that every Wordpress site didn't become a font of usernames and passwords for hackers. All of these ideas are, on their face, a waste of time, which is why the offensive community tends to look at organizations solving problems OTHER than the centralized weak points as a bit silly.

I posed this point to one of the government boards looking at the IoT issue, and was told it was not helpful, but hopefully this blog answers why I wrote them this in the first place. Offensive security is almost always about finding centralized weak points that your adversary has forgotten about, or does not realize need protection, and a lot less about busting through the security layers they have in place. That's the whole ball game, every day, for the last 20 years for most of us in the industry.

An easy example is this: if your team isn't freaking out about this vulnerability in GoAhead Web Server, then it is clearly missing situational awareness.

I understand that instead of "simple" regulatory and legal fixes, this requires shepherding new massive engineering and technical efforts through the political sphere, but I still think if we want to move the dial, we have to engage in a way that truly changes the terrain.

(Secretive Sniff You is a good anagram for Offensive Security :) )

Wednesday, December 6, 2017

A Better Norm for Enforced Backdoors

This is the kind of joke you only can see in a Wonder Woman comic for what should be obvious reasons.

So various people in the government think they can force a private American company to implement a backdoor in its product without a warrant. But they also say they haven't done this yet.

Part of the reason why is that doing classified work in non-classified environments comes with risk; part of the reason classification systems are effective is that the people inside them have signed off on the idea. Threats of prosecution only go so far as a preventative measure against leaks (as we are now hyper-aware).

The other major reason is that, as a matter of policy, forced backdoors are terrible in a way that is obvious to anyone and everyone who has looked at them. We want to be able to claim a "Public Private Partnership", and that partnership is a community-wide thing, and this is a tiny community.

What everyone is going to expect with a public-private partnership is simple: Shared Risk. If you ask the Government if they're going to insure a company for the potential financial harm of any kind of operation, including a backdoor, they'll say "hell no!". But then why would they expect a corporation to go along with it? These sorts of covert operations are essentially financial hacks that tax corporations for governments not wanting to pay the up-front costs of doing R&D on offensive methods, and the companies know it.

The backdoors problem is the kind of equities issue that makes the VEP look like the tiny peanuts it is, and it's one with an established norm that the US Government enforces, unlike almost every other area of cyber. Huawei, Kaspersky, and ZTE have all paid the price for (allegedly) being used by their host governments. Look at what Kaspersky and Microsoft say when faced with this issue: "If asked, we will MOVE OUR ENTIRE COMPANY to another nation state".

In other words, whoever is telling newspapers that enforced backdoors are even on the table is being highly irresponsible or doesn't understand the equities at stake.

Tuesday, December 5, 2017

The proxy problem to VEP

Ok, so my opinion is that the VEP should set very wide and broad guidelines and never try to deal with the specifics of any vulnerability. To be fair, my opinion is that it can ONLY do this, or else it is fooling itself, because the workload involved in any current description of the VEP is really, really high.

One data point we have is that the Chinese vulnerability reporting team apparently takes a long time on certain bugs. My previous analysis was that they took bugs they knew were blown and gave them to various Chinese low-end actors to blast all over the Internet, as their way of informally muddying the waters (and protecting their own ecosystem). But a more modern analysis suggests a formal and centralized process.

So here's what I want to say, as a thought experiment: Many parts of the VEP problem completely map homomorphically to finding a vulnerability and then asking yourself if it is exploitable.

For example, take the DirtyCow vulnerability. Is it at all exploitable? Does it affect Android? How far back does the vulnerability go? Does it affect GRSecced systems? What about RHEL? What about stock systems with uncommon configurations? What about systems with low memory, or systems with TONS OF MEMORY? What about systems under heavy load? What about future kernels - is this a bug likely to still exist in a year?
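Even the first, most mechanical of those questions, "which kernel versions contain the bug at all", is nontrivial triage. A toy sketch of that one step (the version range shown is illustrative; establishing the real range for a bug like DirtyCow is precisely the expensive analyst work being described):

```python
def parse_kernel(version: str) -> tuple:
    """Turn a version string like '4.4.0-31-generic' into (4, 4, 0)
    so ranges can be compared as tuples."""
    base = version.split("-")[0]
    return tuple(int(x) for x in base.split(".")[:3])

# Illustrative introduced/fixed boundaries only, not an authoritative range.
INTRODUCED = (2, 6, 22)
FIXED = (4, 8, 3)

def possibly_affected(version: str) -> bool:
    """First-pass triage: is this mainline version inside the bug window?"""
    return INTRODUCED <= parse_kernel(version) < FIXED
```

And even this is wrong in practice: distro backports (RHEL, Android vendor kernels) mean a naive mainline version check both over- and under-counts, which is the point. Every question on the list above is at least this messy.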

Trees have roots and exploits get burned, and there's a strained analogy in here somewhere. :)

The list of questions is endless, and each one costs an experienced Linux kernel exploitation team at least a day to answer. And that's just one bug. Imagine you had a hundred bugs, or a thousand bugs, every year, and you had to answer these questions for each. Where is this giant team of engineers that, instead of writing more kernel exploits, is answering all these questions for the VEP?

Every team that has ever had an 0day has seen an advisory come out and said "Oh, that's our bug", only to realize when the patch shipped that it was NOT their bug at all, just another bug that looked very similar and was maybe even in the same function. Or you've seen a patch come out and your exploit stopped working, and you thought "I'm patched out", but the underlying root cause was never handled, or was handled improperly.

We used to make a game out of second guessing Microsoft's "Exploitability" indexes. "Oh, that's not exploitable? KOSTYA GO PROVE THEM WRONG!"

In other words: I worry about workload a lot with any of these processes that require high levels of technical precision at the upper reaches of government.