Tuesday, May 29, 2018

What is the high ground in cyberspace?

I can't even begin to get into how crazy most of the proposed cyber norms are. Usually the response to criticism is "What does the technical community know?" and then, a few years later, "Hmm. That didn't work." even though the failure was entirely predictable.

High Ground (cf. Thomas Dullien)

High ground in cyber is high-traffic sites! Facebook and Google are "unsinkable aircraft carriers" in that sense, but any site with a huge traffic share is high ground. Most of them have very low security, and there are entire mountain ranges whose existence we don't acknowledge.

This screencap from Matt Tait's 2018 INFILTRATE keynote talks about update providers as strategic risks...
RedTube and other major porn sites reach a wider audience than the New York Times ever will. Gaming sites are equally high ground. Dating sites are clearly high ground. Almost everywhere you look there's a gap between what you think people do on the Internet and what they really do, which is why good strategists hold themselves to the hard data they get from historical operations instead of making up fanciful cyber norms in Tallinn.

It's counterintuitive, but almost everything your computer does when it reaches out to the network is "get more code to execute". Software updates are the obvious case, but a web page is also just code executing. PDFs are code executing. Word documents are code executing. New TF2 maps are code executing. NVidia's driver download page is exceptionally high ground.

In other words, there's nothing your computer does that is not "updates" when it comes to understanding your strategic risk.
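To make that concrete, here is a minimal sketch of what the core of many auto-updaters reduces to. The URL and every name in it are hypothetical, not any real vendor's code:

```python
# Hypothetical sketch: "checking for updates" is remote code execution
# by design. The server decides what code you run next.
import subprocess
import tempfile
import urllib.request

UPDATE_URL = "https://updates.example.com/latest.py"  # hypothetical endpoint

def check_for_updates() -> None:
    # Fetch "the new version". If the server (or the network path to it)
    # is hostile, this is arbitrary attacker code.
    code = urllib.request.urlopen(UPDATE_URL).read()

    # Write it to disk and execute it, much as many real updaters do.
    with tempfile.NamedTemporaryFile(suffix=".py", delete=False) as f:
        f.write(code)
    subprocess.run(["python3", f.name], check=True)
```

Real updaters add signature checks on top of this, but the strategic point stands: trust terminates at whoever serves, signs, or sits in the path of the update.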

Team Composition


We covered team composition as applied to cyber operations quite heavily in our talk at T2 in Finland. To quickly summarize: dive tanks are going to be implants that are more RAT-like. These typically live entirely in userspace and operate in the grey zones and chaotic areas of your operating system. Main tanks tend to be kernelspace or below. Obviously your implant strategy changes everything about what else you incorporate into your operations.

Win Condition



In Overwatch, one win condition is "we have a ranged DPS on the high ground, unopposed". Knowing the win conditions is important because it keeps you from wasting time and "feeding" your opponents when the battle is already lost. In cyber operations, feeding your opponents is quite simply using new exploits and implants when your current ones have already been caught. This is why a good team will immediately remove all their implants and cease operations once they even get a hint that they were discovered.

Unlike in Overwatch, the win condition in cyber is usually being more covert than your opponent. You don't have to remove your opponent from the field; you just have to make their presence irrelevant.

Conclusion

Keeping your strategy as simple as possible allows for a high tempo of operations with predictable and scalable results. Create a proper toolkit composition, execute the right tactical positioning based on that composition, understand your win condition, and you will end up a grandmaster. :)

Thursday, May 24, 2018

When our countermeasures have limits

Countermeasures are flashy. But do they work?

So the FBI took over the domain VPNFilter was using for C2. VPNFilter also used a number of Photobucket accounts for C2, which we can assume have been disabled by Photobucket.


Hmm. Why did they use so many? Do we assume that every deployed region had the same exact list?

Here's my question: How would you build something like this that was take-down resistant? Sinan's old paper from 2008 on PINK has some of the answers. But just knowing that seizing a domain is useless should change our mindset...
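For illustration, one classic take-down-resistant pattern is a domain generation algorithm (DGA). This is a generic textbook sketch, not VPNFilter's actual scheme:

```python
# Generic DGA sketch: implant and operator both derive today's candidate
# rendezvous domains from the date. Seizing one domain does nothing; the
# defender has to contest every candidate, every day.
import hashlib
from datetime import date

def c2_domains(day: date, count: int = 50) -> list:
    domains = []
    for i in range(count):
        h = hashlib.sha256(f"{day.isoformat()}/{i}".encode()).hexdigest()
        domains.append(h[:12] + ".com")
    return domains

# The implant walks the list until a domain resolves and returns a
# response it can authenticate; the operator only registers one.
print(c2_domains(date.today())[:3])
```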

As a quick note: that last sentence of the FBI affidavit is gibberish.

From what I can tell from public information, the VPNFilter implants did not have a simple public-key-based access method. But the attackers may have a secret implant, installed only in select locations, which does have one. Cisco and the FBI are both citing passive collection plus a few implants obtained from VirusTotal and from one nice woman in PA. We do know the attackers have a dedicated C2 for Ukrainian targets.
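For comparison, here is a minimal sketch of what a public-key access method would look like. This is an assumption for illustration, not VPNFilter's actual design: tasking is only accepted if it verifies against an operator public key baked into the implant, so merely finding the C2 channel doesn't let you drive the botnet.

```python
# Hypothetical sketch of public-key-gated implant tasking, using ed25519
# from the "cryptography" package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Operator side, done once and kept offline; only the public key ships
# inside the implant.
operator_key = Ed25519PrivateKey.generate()
EMBEDDED_PUBKEY = operator_key.public_key()

def handle_tasking(command: bytes, signature: bytes) -> None:
    try:
        EMBEDDED_PUBKEY.verify(signature, command)  # raises if not operator-signed
    except InvalidSignature:
        return  # silently ignore anyone who merely found the C2 channel
    print(f"executing: {command!r}")  # stand-in for the implant's dispatch

cmd = b"report-config"
handle_tasking(cmd, operator_key.sign(cmd))  # accepted
handle_tasking(cmd, b"\x00" * 64)            # dropped
```

This is why a seized C2 domain would be far less useful against such a design: you can observe the implants, but you cannot task them.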



My point is this: our current quiver of responses can't remove botnets from IoT devices. The only reasonable next move is a larger survey of attacker implants, ideally reaching all of them, using the same methods the attackers did (we have to hope they didn't patch each box). This requires a policy framework that allows DHS to go on the offense without user permission, worldwide.

Tuesday, May 22, 2018

Exploits as Fundamental Metrics for Cyber Power


If you're measuring cyber power, you can measure it in a number of different ways:

  • Exploitation (this article!)
  • Integration into other capabilities (HUMINT, for example)
  • Achieved Effect (so much of IL wants to look here, but it is very hard)
In a previous article on this site we built a framework around software implants as a metric for measuring sophistication in capability. (Also see this Ben Buchanan piece for Belfer.)

Since there are no parades of cyber combat platforms through downtown DC, or even announcements in Jane's, non-practitioners have tried to tag any effort which includes "0days" as sophisticated, and in the case of export control, as too sophisticated to be traded without controls. The way this typically appears is through the concept of "bypassing authorization" being treated as some sort of red line.

But from a strategic standpoint we have for years tried to read the development and expenditure of 0day as a declaration of capabilities befitting a State-level opponent. This is of course a mistake, and one part of that mistake is thinking of all 0days as equal from an information-carrying perspective as regards capabilities.

So what then, do practitioners look for when gauging 0day for nation-state-level sophistication, if not simply the use of any 0day?

Here is my personal list:
  • Scalable CONOPS
  • Toolchain Integration 
  • Cohesive OPSEC
  • Historical Effort and Timescales
Without going into each one of those in detail, I want to highlight some features that you'll see in State-level exploits. Notably, there is no red line on the "sophistication" of an exploit technique that differentiates "State" from "amateur". On the contrary, when you have enough bugs, you pick the ones that are easiest to exploit and fit best into your current CONOPS. Bugs with the complexity level of strawberry pudding recipes tend to be unreliable in the wild, even if they are perfectly good in the lab environment.

A notable exception is remote heap overflows, which for a long time were absent from public discourse. These tend to be convoluted by nature, and it's these that typically demonstrate the hallmarks of a professional exploit that has had the time to mature properly. In particular: continuation-of-execution problems are solved, the exploit backs off if it detects instability in the target, the exploit uses same-path stagers, you'll see PPS detection and avoidance, and the exploit will be properly isolated on its own infrastructure and toolkit. What you're looking for are the parts of an exploit that required a testing effort significantly beyond what a commercial entity would invest in.
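To illustrate just one of those hallmarks, here is an abstract sketch of the "back off on instability" behavior. Every function here is a hypothetical placeholder; the point is the testing-heavy control flow, not any payload:

```python
# Abstract sketch: mature exploits treat the target as a resource to be
# preserved, aborting rather than risking a crash that burns the bug.
import time

def target_looks_stable() -> bool:
    """Stand-in for heap-grooming / timing probes against the target."""
    return False  # pessimistic placeholder

def fire_exploit() -> None:
    print("firing")  # placeholder for the actual throw

def careful_attempt(max_probes: int = 3, backoff_seconds: int = 30) -> None:
    for _ in range(max_probes):
        if target_looks_stable():
            fire_exploit()
            return
        time.sleep(backoff_seconds)  # let the target settle, then re-probe
    print("aborting: instability detected, capability preserved")
```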

One particular hallmark is of course the targeting not of the newest and greatest targets, but of the older and more esoteric versions. A modern exploit that also targets SCO UnixWare, or Windows 2000, is a key tell of a sophisticated effort with a long historical tail.

There is a widespread but uneducated public perception that use of any 0day at all in an operation, or of 4 or 5 at once, indicates a "state effort". However, the boundaries around state and military use of exploits lie more often in the toolkits the exploits fit into than in the exploits themselves. Exploits, being the least visible parts of any operation, are often the hardest to build metrics around, but it's worth knowing that the mere existence of 0days in a toolchain is neither the metric strategic analysis needs nor the one practitioners in the field use.

Tuesday, May 8, 2018

What is an "Observable Characteristic" in Software Export Control?

Note: This is a living document partially written for those new to export controls - if you think I misunderstood something let me know and I'll address it within!
---------------------------------------------------------------------------------------------------------


I want to highlight this Twitter thread, which goes over 4D4 ("Intrusion Software") in a bit of detail. Many proponents of 4D4 complain that the rest of us, who have concerns, don't properly understand export control frameworks. I would posit there IS NO UNDERSTANDING OF EXPORT CONTROL FRAMEWORKS, BY DESIGN :). But to be more specific about the concerns, the following bite-sized bit is the most important part:


Being able to look objectively at a piece of hardware and say "this is a stealth coating because it has the following manufacturing characteristics" is a different category from looking at a piece of software and saying "this bypasses ASLR and DEP". Radio frequencies are by and large universal, and performance can be measured, but the export control language applied to software exists in a huge fog. What does it even mean to "bypass a mitigation"?

The Issues of End Use Controls


What this results in is END USE controls. In other words, instead of saying "we want to ban antennas that can emit the following level of power," we write controls that say "we want to ban software that CAN BE USED for the following thing." This means that instead of looking at the software to control it, you end up looking at the marketing, so the controls are littered with marketing language ("Carrier Grade Speed!") and contain no functions, characteristics, or performance levels of any kind.

Sometimes you see long lists of functionalities in software controls, as if they will add up to a definitive characteristic if you just add enough of them. For example, 5A1J ("surveillance software") is essentially:

  • Collects network information
    • Parses it and stores the metadata (e.g., FTP usernames and such) in a DB
  • Indexes that information (why else would you have it in a DB?)
  • Can visualize and graph relations between users (based on the information you indexed)
  • Can search the DB using "selectors" (again, why else is it in a DB?)
This is exactly what modern breach detection software is: a product category that did not really exist when 5A1J was formulated. But each of the pieces DID exist, and given a market opportunity, they got put together as you would expect. In other words, long lists of functions are not enough to make a good control, especially when every function you are describing is commoditized in the ELK Docker image.
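To see how commoditized that list is, here is a toy sketch of the "store, index, search with selectors" items in a few lines of stdlib Python. The records are invented; in practice the collection layer is an off-the-shelf sensor like Zeek feeding ELK:

```python
# Toy sketch: "stores metadata in a DB", "indexes it", and "searches it
# with selectors" are one import and four statements, not a controllable
# technology.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE flows (src TEXT, dst TEXT, proto TEXT, username TEXT)")
db.execute("CREATE INDEX idx_user ON flows (username)")  # "indexes that information"

# Metadata parsed out of traffic by some upstream sensor (invented row).
db.execute("INSERT INTO flows VALUES ('10.0.0.5', '203.0.113.9', 'ftp', 'jsmith')")

# "Can search the DB using selectors."
for row in db.execute("SELECT * FROM flows WHERE username = ?", ("jsmith",)):
    print(row)
```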


Performance levels are typically the big differentiator. The general rule is that if you cannot define a performance level, you are writing a terrible regulation, because it will apply broadly to a huge population you don't care about and have massive side effects. The international community has typically just ignored this for software controls (because it is hard). Part of the difficulty is that performance levels in the cyber domain rise a lot faster than in most manufacturing areas. The other issue is that for the controls people seem to want, there are no clear performance metrics. For example, nothing in 5A1J differentiates the speed, processing, or storage that Goldman Sachs (or even a small university) would use from what a country's backbone ISP would.

Another thing to watch out for is the contrast between controls on "Software" and the controls on "Technology". Usually these controls go hand in hand. They'll say "We control this antenna, but also we control technology for making those antennas so you can't just sell a Powerpoint on how to make them to China either". In software, this gets a lot more difficult. Adding an exception to a technology control does not fix the software control...

What we are learning is that software export controls work best when tied to an industry standard. That does not describe the current cyber-tools regulations (4D4 or 5A1J). We also know that end-use-based controls are not good even with very large exceptions carved into them, for reasons which might require another whole paper but which seem obvious in retrospect when you look at regulations as "laws" which need to be applied on an objective basis.

Impact


I got flak last Sunday for Tweeting that export controls "ban" an item, which strictly speaking they do not. But the effect is similar: a slower, more painful, and silent death rather than a strict ban. I.e., export controls are less a bullet to the head and more a chronic but fatal disease for your domestic industry. This is partially because licensing imposes an extremely high opportunity cost on US businesses, which raises the expense of doing business up and down the supply chain.

There's a common misconception among export control proponents that when applied loosely (aka automatic licensing with only reporting requirements), export control is "cost free" for businesses. Nothing could be further from the truth. Even very small companies (aka startups) are now international companies, and having to understand their risk from export control regimes can be prohibitively expensive under controls as broadly (and poorly) designed as 4D4 or 5A1J.

More strategically, no proponent of strict export control regimes wants to look at their cost and efficacy. Do they even work? Do they work at a reasonable cost? For how long do they work? Do we have a realistic mechanism for removing controls once they become ineffective? These are all questions we should answer before we implement controls. The long-term impacts are recorded at policy meetings in sarcastic anecdotes: "We don't even build <controlled system> in the US anymore, we just buy it from China - that export control did its job!"

Sadly, this means that export controls are almost certainly having the exact opposite effect from the one desired. This could probably be addressed by a quite strict "foreign availability" rule when designing new regulations. After all, what is the point of putting restrictions on our exports when the same or similar technology is available from non-WA members? Any real stress on these issues is mysteriously missing from discussions around the cyber-tools regulations. :)

Unilateralism


The goal of the Wassenaar Arrangement and other similar agreements is of course to avoid unilateral controls, which are an obvious problem. What member states don't want to hear is that the implementation differences between countries are large enough that the controls end up unilateral anyway. To have truly non-unilateral controls you need one body making licensing decisions, and by design the WA does not work like that.

The gaps in implementation are large enough that entire concepts appear in one country that don't exist in others - most prominently, the "Deemed Export" ruleset, which says that if I sell an Iranian an iPhone in Miami, that is the same as exporting it to Iran and I need to get a license.

Goals, both Stated and Unstated

The stated goal of export controls is avoiding technology transfer for national security purposes! (Note that "human rights issues" are not a stated goal for the WA).

The unstated goals are more complex. For example, we get a lot of intelligence value out of the quarterly reports software companies file on all of their international customers. There's probably also limited intel value in the licensing documents themselves ("Please have your Chinese customer fill out a form, in English, stating what they are using your product for!"). And obviously the US likes having a database somewhere of every foreign engineer who has accessed a lithography plant, I guess. Because this stuff is unstated, it's hard to argue against in terms of ROI, but for most of it you could get far better value from a private conversation with the companies involved, which is also a good first step toward building the kinds of relationships we always claim we want between the USG and industry. And as stated previously, the costs imposed by even a "reporting only" licensing scheme are enormous.

When I talk to Immunity customers (large financials) about 5A1J, they assume the reason the USG wants reporting on all breach detection systems sold overseas is so it can better hack them. It's hard to argue with that. This is a high reputational cost that the USG pays with industry for intelligence that is probably of little real value.

The other unstated goal is leverage. Needless to say with a complicated enough export control regime, nearly every company will eventually be in violation. Likewise, blanket export licenses can massively reduce your opportunity cost, and many countries are happy to issue them in various special cases. Again, I think from a government's perspective it is better long term to develop fruitful bilateral relationships.

A lot of these issues are especially true with "end use"-centric controls - which rely on information that the SELLER or BUILDER of the technology has no way to know ahead of time.

And the last but not least unstated goal is controlling the sale of 0day. Governments are mostly aligned in their desire to do this, although few of them understand what that means, what the side effects would be, or how it would really play out. But parts of the rest of their strategy (the VEP) only really work once these controls are in place, so they have a strong drive to write the controls now and see how things work later. It is this particular unstated but easily visible goal that I think is the largest current threat to the security industry.

Conclusions

In this document I tried to start painting the landscape of where cyber-tool export controls go wrong. Part of my goal in joining ISTAC was to stop us from repeating the mistakes we made with 5A1J and 4D4 by putting things into a more coherent policy framework. Hopefully this document will be useful to other technical and policy groups as together we find a way to navigate this tricky policy area.

----------------
Resources/notes:

To be fair, we've known a lot of these issues for a very long time, and we simply have not fixed them:

From this very good, very old book.

How to find good Cyber Security Policy Writing

There are some simple rules to follow to see if a policy piece in this space will be extra painful to read:


  • Does it use "unpack" and not in the context of talking about compression algorithms? 
  • Does it liberally quote a thousand other articles, but without any real understanding of their context? 
  • Does it have obvious misstatements about technical facts?
  • Does the author have no experience in Intel or industry?
  • Does it lean heavily on "data" which could be reasonably considered purely subjective or of shoddy quality?
The "Cyber Strategy" book reviewed on this blog is a good example of this. But the opposite is also true! Lately you have spooks coming out from the shadows to write policy pieces, and the heads of various companies have spent time to do so as well. There are policy teams (both in the US and elsewhere) that have spent time to learn the technology!

You can see examples of some of this work on the Cyber Defense Review. I haven't even read it all yet, but I know a journal carrying an article from Bryson Bort or Shawn Henry is going to have worthwhile perspectives.

For what it's worth, pure legal writing can also have interesting tidbits, like this piece from Mike Schmitt - a leading proponent of International Law's role in this space. Usually the value in a pure policy or legal piece comes when it acknowledges the current issues of the system instead of optimistically whitewashing past efforts.

Mike got pushback (largely from the US) when he and others proposed a "violation of sovereignty" standard. This is because it doesn't work in operational practice. But he still likes the idea because it makes LEGAL decisions quite clear. :)
 



Thursday, May 3, 2018

Book Review: Cyber Strategy by Valeriano, Jensen, Maness



I give this book 0 stars out of 5. To be fair, I give the entire genre of books like this 0 stars out of 5. This is the worst kind of cyber policy writing. The authors have concocted some sort of database of information, culled mostly from news reports as far as I can tell, run some basic statistical analysis on it, and somehow fleshed that out into an entire book by pulling random quotes from other terrible parts of the cyber policy pantheon.

For example, here they both misspell "Regin" and then attribute it to the United States (for no reason).

In general the editing of the book is spotty, but what concerns me more is the underlying dataset, which appears to attribute various efforts to various countries in ways I'm 90% sure are incorrect. If your data is wrong, then your conclusions are essentially random, and your policy prescriptions are mere opinion.


For example, the above simplistic argument against the use of exploits during cyber operations is one you often see in policy-world, but nobody who has ever been involved in an operation takes it seriously. Also, "Tomahawk" is a proper freakin' noun! I don't think anyone but me has even read this book, to be honest.

But to reiterate: IT IS HUGELY RARE TO SEE AN ADVERSARY USE YOUR ATTACKS AGAINST YOU. Everyone will cite ETERNALBLUE, but as far as we know it was reused only once the patch was out. The opsec reason is that USING a bug you caught from someone TELLS them you caught them! Likewise, every group has its own concept of operations, and other people's tools don't always fit it.

I mean, it is fascinating that you didn't see the FLAME exploit turned around - it was reusable - but everyone just assumed it only fit into the FLAME toolchain. It's almost easier to find new bugs than to do the research necessary to reuse old ones.

Books like this always try to back up their arguments with copious quotes to other equally bad books:

On the face of it, Libicki is clearly wrong. But it's possible his quote was used out of context! I'll never know, because the Kindle edition of that book is $66 and I only have so much budget for reading this kind of thing.

Ok so in summary: Don't buy or read this book or books like it. We need NEW thoughts in cyber policy and this is not how you get them.

The Gravity Well


Every particle, even dark matter, can bend spacetime.


Part of the reason policymakers are often confused about resistance to many of the items on their wishlist in the cyber domain is that they've already achieved the impossible: Copyright!

If you think about how amazing it is that nowhere on the Internet can you get Avengers: Infinity War for free, it leads you to ponder the vast array of international agreements, corporate pressure, and technological filtering that makes this possible. Nothing could be more inimical to the nature of the cyber domain than copyright. And yet governments and industry have made it real.

To paraphrase Bruce, "bits being copied is like water being wet", and yet we have somehow made it so that in cyberspace all the waterfalls run upwards, just so it is possible to remove a single picture of Star Wars from the Internet. Why then, policy people ask, can we not just erase all information about computer exploitation and harmful code from the Internet? How much harder can it be?

Allan Friedman once told me that he looks at his work at the Commerce Dept as correcting market failures. But the real market distortion is like a massive gravity well all around us. Copyright is what lets a firewall vendor hide the true nature of its weaknesses by making it illegal to publish "performance comparisons" or "reverse engineer" its protocols. So many of our systemic vulnerabilities come from a system of our own making. And when we try to address them at the edges, by regulating IoT device security, or going on and on about vulnerability disclosure, or revising the CFAA to add just one more exception, it's like trying to chew away a piece of a black hole.

This post is more of a call for legislation than my other posts: we need to address the root of the problem. That means changing what an end-user license can restrict. It is not just that everyone should be able to write about and patch the code running on their own devices; we need to acknowledge that copyright has distorted who can even understand the depths of the risks we all face.


Tuesday, May 1, 2018

The Dark Gulf between the FBI and the Technical Community

https://oig.justice.gov/reports/2018/o1803.pdf

I think it's important that we acknowledge and address that each side in the encryption debate believes not so much that the other side is wrong, but that they are lying liars who will stop at nothing to get their way.

In particular, the contrast between Susan Hennessey's and Stewart Baker's take on the released FBI documents about the iPhone unlocking debacle and the Risky.Biz (and broader technical community) take is quite telling.

Risky.Biz, speaking to its audience, essentially assumed that the FBI, while not misleading the court in words, knew it could have reached out to the contractor base for potential solutions to unlocking Farook's iPhone, and that not doing so was highly dishonest when presenting that there were "no other solutions" besides the nuclear option of forcing Apple to build decryption capability for the FBI.

Susan Hennessey (and others in the national security space) take the FBI at their word: No misleading statements went to the court and "not having the capability in our hands right now" is enough to move forward with the legal case. "Perhaps there is some stovepiping issue but nothing even slightly duplicitous" we hear from that side. "If you only read the whole report, then you'll see!"

Here's what you actually see when you read the whole report: fig leaves of truth covering the massively weird fact that, in the highest-profile case in the country, the FBI didn't ask the head of the ROU to have a contractor look into the problem. It's literally unbelievable. "Oh, I thought we could only work on classified issues" is what's in the report, as if these teams don't have each other's cell phone numbers.

Those two paragraphs of the report are damning, and they validate what the technical community thinks of the FBI.
You see the same dynamic with the Ray Ozzie cryptographic proposal. The national security teams believe the technical community isn't being straightforward about how possible it is to build secure key escrow when public-key encryption clearly exists. They don't believe they are wrong; they believe the technical community knows perfectly well how to build PKI and just won't, for political reasons.

Matt Green points out that the issue is not "can we build PKI?" but "do we want companies to be forced to build PKI and to assume the essentially unbounded liability of maintaining it?" Keep in mind that tech people have watched every PKI system built over the past 20 years fail in some way, and that companies have no desire to hold or control any data beyond what gets them ad revenue.

Of course, Stewart Baker would then say "Unless the Chinese want it, in which case everyone seems to RUSH to find a solution". The technical community, in turn, says "But the USG is supposed to be our friends."

I'm not saying the technical community is right about these issues, although they clearly are: the FBI lied in all but words, and the "going dark" debate is insane in an era when we can identify serial killers from their great-grandparents' DNA.

But even if the FBI were right, we have to look strategically at what it means, in every dimension of this debate, when the US technical community and the FBI don't trust each other even a tiny bit. Without solving this, how can we move forward on any part of the massive problems we face? How do you have public-private partnerships without trust? How do you build cyber war norms? InfraGard can only go so far...