Monday, June 18, 2018

Policy Bugclass: False inequivalencies

I'm going to leave it up to your imagination why this picture perfectly encapsulates every moment someone suggests two random cyber things are different when they are actually the same.


We try to maintain a list of policy-world "bugclasses" for the cyber domain:
  1. Assuming Data or Execution is bound to a physical location
  2. Assuming code has a built-in "Intent"
  3. Building policy/law in legal language instead of in Code (i.e. policy that does not work at wire-speed is often irrelevant)
  4. False inequivalencies
In this article I want to talk a little bit about false inequivalencies, since they are probably the most prevalent bugclass you run into, and you see them everywhere - in export control, in national security law, in policy in general.

For example, export control law (5a1j) likes to draw distinctions between the ability to store and the ability to search, or (4d4) between the ability to run a command and the ability to gather and exfiltrate information. In national security policy papers you'll often see a weird distinction between the ability to gather information and the ability to destroy information. Another, more subtle error is the desire to treat "networks" as distinct things. Technologists look upon the domain name system as a weak abstraction, but for some reason policy experts have decided that there are strict and discernible boundaries to networks, worth porting various International Law conventions over to.
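To make the store/search example concrete, here's a toy Python sketch (all data hypothetical). The "storage" capability and the "search" capability that 5a1j tries to pull apart are one line of code apart:

```python
# Toy sketch (hypothetical data): the gap between "can store" and
# "can search" is one line of code.
records = [
    {"proto": "ftp", "user": "alice", "src": ""},
    {"proto": "ftp", "user": "bob", "src": ""},
]

# The "storage" prong of a control like 5a1j is the list above;
# the "search using selectors" prong is the single line below.
hits = [r for r in records if r["user"] == "alice"]
print(hits)
```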

This bugclass is a real danger, because explaining why two things are "provably equivalent in any real practical sense" annoys lawyers, who have spent their entire careers splitting hairs in language and who think that hairsplitting, as a tool, can produce consistent and useful global policy.

More specifically, we need to find a way to revise a lot of our legal code to accept this reality: Title 10 and Title 50 need to merge. Foreign and domestic surveillance practices need to merge. The list goes on and on...


Tuesday, June 5, 2018

Security, Moore's Law, and Cheap Complexity

https://www.err.ee/836236/video-google-0-projekti-tarkvarainseneri-ettekanne-cyconil

To paraphrase Thomas Dullien's CyCon talk:
  • We add 3 ARM computers per year per person on Earth right now. 
  • The only somewhat secure programs we know of focus entirely on containing complexity
  • Software is a mechanism to create a simplified machine from a complex CPU - exploits are mechanisms to unlock this complexity
  • We write software for computers that don't exist yet because we design hardware and software at the same time.
  • We've gotten significantly better at security in the past 15 years, but we've been outpaced by the exponential increase in complexity
  • Every device is now a "Network of Computers" - intra-device lateral movement is very interesting
  • It's much cheaper to use something complicated to emulate something simple than vice versa, in the age of general purpose cheap CPUs. This generates massive economies of scale, but at a cost...insecurity.
  • The economics of chip manufacturing mean that CPU and memory vendors are driven to sell whatever hardware they can get away with selling - some percentage of the transistors in a chip are bad, and the chip maker is strongly motivated to ship the least reliable CPU whose flaws the customer cannot detect
    • When there are only a few hundred atoms in a transistor, three or four atoms more or less make a big difference
  • Until Rowhammer, the link between hardware reliability and security was not clear to electrical engineers (see the toy sketch after this list)
  • You cannot write real world secure programs that operate on hardware you cannot trust
  • Computers are deterministic in the abstract sense, but they are really only deterministic MOST of the time. Engineers work really hard so that you can ignore the physics of the chip - but the physics is still happening.
    • Determinism has to be fought for in computers; it is not a given.
  • The impossibility of inspectability in the digital sphere
    • Everything has firmware, none of which we can really have any assurance of
    • Average laptop has ~40 CPUs all with different firmware
    • Local attackers can use physics to induce transient faults, which bypasses crypto verification, which in turn means nobody can get the attacker out
  • If control of a device has ever been disputed, you can never be sure it is back under control. This runs counter to our standard intuition for how objects work.
  • The same forces that drive IT's success drive IT's insecurity.
  • Halvar loves SECCOMP_STRICT sandbox and wants to make it useful, but of course, making it useful will probably break it
  • Computers will look very different from today's architectures in fifteen years - more different than they did fifteen years ago. Engineers are now focused on designing parallel machines, since Moore's law is over for single-cores. 
  • All the insane complexity we can pump into computation systems is essentially in your pocket. 
  • It's still early days in computers. How good was humanity at building bridges seventy years after we started?
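As a toy illustration of the hardware-reliability point (a sketch of the consequence, not of Rowhammer itself - Python can't hammer DRAM rows): flip a single bit in a pointer-sized value and it names a completely different region of memory, which is why one unreliable transistor is a security problem and not just a reliability problem.

```python
# Toy sketch: one flipped bit in a pointer-sized value.
addr = 0x7F3A1C400000            # hypothetical mapped address
flipped = addr ^ (1 << 21)       # a single Rowhammer-style bit flip

print(hex(addr), "->", hex(flipped))
print("distance:", abs(flipped - addr), "bytes")  # 2 MiB off target
```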

Tuesday, May 29, 2018

What is the high ground in cyberspace?

I can't even possibly get into how crazy hilarious most of the proposed cyber norms are. Usually the response is "What does the technical community know?" and then a few years later, "Hmm. That didn't work," even though it was entirely predictable.

High Ground (cf. Thomas Dullien)

High ground in cyber is high-traffic sites! Facebook and Google are "unsinkable aircraft carriers" in that sense, but any site which has a huge traffic share is high ground; most of them have very low security, and there are lots of mountain ranges we don't acknowledge the existence of.

This screencap from Matt Tait's 2018 INFILTRATE keynote talks about update providers as strategic risks...
RedTube and other major porn sites have a wider reach than the New York Times ever will. Gaming sites are equally high ground. Dating sites are clearly high ground. There's what you think people do on the Internet versus what they really do, almost everywhere you look, which is why good strategists are holding themselves to the hard data they get from historical operations, and not just making up fanciful cyber norms in Tallinn.

It's counter-intuitive, but almost everything your computer does when it reaches out is "get more code to execute". Software updates are the obvious case, but a web page is also just code executing. PDFs are code executing. Word documents are code executing. New TF2 maps are code executing. NVidia's driver download page is exceptionally high ground.
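As a sketch of what that means in practice (the URL and updater are hypothetical, and real updaters at least attempt signature checks), this is the shape of essentially all network-facing client code:

```python
import urllib.request

# A deliberately naive auto-updater (URL is hypothetical). Everything
# below is "go get more code and run it" - structurally the same as a
# web page, a PDF, a Word macro, or a driver download.
UPDATE_URL = "https://updates.example.com/latest.py"

def update_and_run():
    new_code = urllib.request.urlopen(UPDATE_URL).read()
    # No signature check: whoever controls (or intercepts) this URL
    # controls every machine that runs this function.
    exec(compile(new_code, "<update>", "exec"))
```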

In other words, there's nothing your computer does that is not "updates" when it comes to understanding your strategic risk.

Team Composition


We covered team composition as applied to cyber operations quite heavily in our talk at T2 in Finland. To quickly summarize: Dive Tanks are going to be implants that are more "RAT"-like; these typically live entirely in userspace and operate in the grey zones and chaotic areas of your operating system. Main Tanks tend to be kernelspace or below. Obviously your implant strategy changes everything about what else you incorporate into your operations.

Win Condition



In Overwatch, one win condition is "we have a ranged DPS on the high ground, unopposed". Knowing the win conditions is important because it keeps you from wasting time and "feeding" your opponents when the battle is already lost. In cyber operations, feeding your opponents is quite simply using new exploits and implants when your current ones have already been caught. This is why a good team will immediately remove all their implants and cease operations once they even get a hint that they were discovered.

Unlike in Overwatch, the win condition in cyber is usually being more covert than your opponent. You don't have to remove your opponent from the field; you just have to make their presence irrelevant.

Conclusion

Keeping your strategy as simple as possible allows for a high tempo of operations with predictable and scalable results. Create a proper toolkit composition, execute the right tactical positioning based on that composition, and understand your win condition, and you will end up a grandmaster. :)

Thursday, May 24, 2018

When our countermeasures have limits

Countermeasures are flashy. But do they work?

So the FBI took over the domain VPNFilter was using for C2. VPNFilter also used a number of Photobucket accounts for C2, which we can assume have been disabled by Photobucket.


Hmm. Why did they do so many? Do we assume that every deployed region would have the same exact list?

Here's my question: How would you build something like this that was take-down resistant? Sinan's old paper from 2008 on PINK has some of the answers. But just knowing that seizing a domain is useless should change our mindset...
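One standard design answer - a sketch of the general technique, not a claim about VPNFilter's actual code - is a domain-generation algorithm: derive many rendezvous domains per day from a seed, so a defender must seize every candidate, every day, forever, while the operator only ever needs one of them to resolve.

```python
import datetime
import hashlib

SEED = b"hypothetical-campaign-seed"

def candidate_domains(day, count=50):
    """Derive the day's rendezvous domains from a shared seed.

    A defender has to preregister or seize every candidate, every
    day; the operator only needs one of them to resolve.
    """
    for i in range(count):
        h = hashlib.sha256(SEED + day.isoformat().encode() + bytes([i]))
        yield h.hexdigest()[:16] + ".com"

print(list(candidate_domains(datetime.date(2018, 5, 24)))[:3])
```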

As a quick note: that last sentence of the FBI affidavit is gibberish.

From what I can tell from public information, the VPNFilter implants did not have a simple public-key-based access method. But the attackers may have a secret implant, installed only in select locations, which does have one. Cisco and the FBI are both citing passive collection and a few implants obtained from VirusTotal and from one nice woman in PA. We do know the attackers have a dedicated C2 for Ukrainian targets.
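To illustrate what such an access method would buy an attacker, here is a minimal sketch using PyNaCl (all names hypothetical, and no claim about the real implant): if tasking must be signed by the operator's offline key, seizing the C2 domain lets you talk to the implant but never command it.

```python
from nacl.exceptions import BadSignatureError
from nacl.signing import SigningKey, VerifyKey

# Operator side, done once and kept offline: generate a keypair and
# bake the public half into the implant at build time.
operator_key = SigningKey.generate()
OPERATOR_PUBKEY = operator_key.verify_key.encode()

def implant_accepts(signed_tasking):
    """Implant side: run tasking only if the operator's key signed it.

    Whoever seizes the C2 domain can talk to the implant, but without
    the operator's private key they cannot issue valid tasking.
    """
    try:
        return VerifyKey(OPERATOR_PUBKEY).verify(signed_tasking)
    except BadSignatureError:
        return None

tasking = operator_key.sign(b"collect: /var/log/")
print(implant_accepts(tasking))  # b'collect: /var/log/'
```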



My point is this: Our current quiver of responses can't remove botnets from IoT devices. The only reasonable next move is to do a larger survey of attacker implants - ideally reaching all of them, using the same methods the attackers did (we have to hope they didn't patch each box). This requires a policy framework that allows DHS to go on the offense without user permission, and worldwide.

Tuesday, May 22, 2018

Exploits as Fundamental Metrics for Cyber Power


If you're measuring cyber power, you can measure it in a number of different ways:

  • Exploitation (this article!)
  • Integration into other capabilities (HUMINT, for example)
  • Achieved Effect (so much of IL wants to look here, but it is very hard)
In a previous article on this site we built a framework around software implants as a metric for measuring sophistication in capability. (Also see this Ben Buchanan piece for Belfer.)

Since there are no parades of cyber combat platforms through downtown DC, or even announcements in Janes, non-practitioners have tried to tag any effort which includes "0days" as sophisticated - and, in the case of export control, as too sophisticated to be traded without controls. The way this typically appears is as the concept of "Bypassing Authorization" being some sort of red line.

But from a strategic standpoint we have for years tried to look at the development and expenditure of 0day as a declaration of capabilities befitting a State-level opponent. This is of course a mistake, and one part of that mistake is thinking of all 0days as equal from an information-carrying perspective as regards capabilities.

So what then, do practitioners look for when gauging 0day for nation-state-level sophistication, if not simply the use of any 0day?

Here is my personal list:
  • Scalable CONOPS
  • Toolchain Integration 
  • Cohesive OPSEC
  • Historical Effort and Timescales
Without going into each of those in detail, I want to highlight some features you'll see in State-level exploits. Notably, there is no red line on the "sophistication" of an exploit technique that differentiates "State" from "amateur". On the contrary, when you have enough bugs, you pick the ones that are easiest to exploit and fit best into your current CONOPS. Bugs with the complexity level of strawberry pudding recipes tend to be unreliable in the wild, even if they are perfectly good in the lab environment.

A notable exception is remote heap overflows, which for a long time were absent from public discourse. These tend to be convoluted by nature. And it's these that also typically demonstrate the hallmarks of a professional exploit that has had the time to mature properly. In particular, continuation-of-execution problems are solved, the exploit will back off if it detects instability in the target, the exploit will use same-path stagers, you'll see PPS detection and avoidance, and the exploit will be isolated properly on its own infrastructure and toolkit. What you're looking for is the parts of an exploit that required a significant testing effort beyond that which a commercial entity would invest in.
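A toy sketch of that "back off on instability" hallmark (every name here is hypothetical; the point is the testing discipline, not the code):

```python
import time

def run_stage(attempt_once, target_looks_stable, max_tries=3):
    """Hypothetical wrapper showing the 'back off' discipline.

    The professional tell isn't the exploit technique - it's code
    that would rather abort cleanly and preserve the bug than risk
    crashing the target and handing defenders a log entry.
    """
    for i in range(max_tries):
        if not target_looks_stable():
            return None              # walk away; try again another day
        result = attempt_once()
        if result is not None:
            return result
        time.sleep(2 ** i)           # back off before the next attempt
    return None
```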

One particular hallmark is of course the targeting not of the newest and greatest targets, but of the older and more esoteric versions. A modern exploit that also targets SCO UnixWare, or Windows 2000, is a key tell of a sophisticated effort with a long historical tail.

There is a vast uneducated public perception that use of any 0day at all in an operation, or 4 or 5 at once, indicates a "state effort". However, the boundaries around state and military use of exploits are more often in the impressions of the toolkits they fit into than in the exploits themselves. While exploits, being the least visible parts of any operation, are sometimes the hardest to build metrics around, it's worth knowing that the very fact that 0days exist as part of a toolchain is not the needed metric for strategic analysis, nor the one practitioners in the field use.

Tuesday, May 8, 2018

What is an "Observable Characteristic" in Software Export Control?

Note: This is a living document partially written for those new to export controls - if you think I misunderstood something let me know and I'll address it within!
---------------------------------------------------------------------------------------------------------


I want to highlight this Twitter thread here, which goes over 4D4 ("Intrusion Software") in a bit of detail. I feel like many proponents of 4D4 complain that the rest of us, who have concerns, don't properly understand export control frameworks. I would posit there IS NO UNDERSTANDING OF EXPORT CONTROL FRAMEWORKS BY DESIGN :). But to be more specific about the concerns, the following bite-size bit is the most important part:


Being able to look objectively at a piece of hardware and say "this is a stealth coating because it has the following manufacturing characteristics" is a different category from looking at a piece of software and saying "this bypasses ASLR and DEP". Radio frequencies are in general universal, and performance can be measured, but the export control language applied to software exists in a huge fog! What does it mean to "bypass a mitigation"?

The Issues of End Use Controls


What this results in is END USE controls. In other words, instead of saying "We want to ban antennas that can emit the following level of power," we write controls that say "We want to ban software that CAN BE USED for the following thing." This means that instead of looking at the software to control it, you end up looking at the marketing, so the controls are littered with marketing language ("Carrier Grade Speed!") and do not have functions, characteristics, or performance levels of any kind.

Sometimes you see long lists of functionalities in software controls, as if this is going to be a definitive characteristic if you add enough of them. For example, 5a1j ("Surveillance software") is essentially:

  • Collects network information
    • parses it and stores the metadata about it (aka, FTP usernames and such) into a DB
  • Indexes that information (why else would you have this in a DB?)
  • Can visualize and graph relations between users (based on the information you indexed)
  • Can search the DB using "selectors" (again, why else is it in a DB?)
This is what modern breach detection software is - a product category that did not really exist when 5a1j was formulated. But each of the pieces DID exist, and given a market opportunity, they got put together exactly as you would expect. In other words, long lists of functions are not enough to make a good control (especially when all the functions you are describing are commoditized in the ELK Docker image).
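To make the commoditization point concrete, here is essentially the whole controlled feature list, minus visualization, in a dozen lines of stdlib Python (hypothetical data - a real product is this plus a UI and scale):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE meta (proto TEXT, user TEXT, src TEXT)")

# "Collects network information" / "parses and stores the metadata":
db.executemany("INSERT INTO meta VALUES (?, ?, ?)",
               [("ftp", "alice", ""), ("ftp", "bob", "")])

# "Indexes that information":
db.execute("CREATE INDEX idx_user ON meta(user)")

# "Can search the DB using selectors":
print(db.execute("SELECT * FROM meta WHERE user = ?", ("alice",)).fetchall())
```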


Performance levels are typically the big differentiator. The general rule is that if you cannot define a performance level, you are writing a terrible regulation, because it will apply broadly to a huge population you don't care about and have massive side effects; but the international community has typically just ignored this for software controls (because it is hard). Part of the difficulty is that performance levels in the cyber domain go up a lot faster than in most manufacturing areas. The other issue is that for the controls people seem to want, there are no clear metrics on performance. For example, with 5a1j there is nothing that differentiates the speeds/processing/storage that Goldman Sachs (or even a small university) would use from those of a country's backbone ISP.

Another thing to watch out for is the contrast between controls on "Software" and controls on "Technology". Usually these go hand in hand. They'll say "We control this antenna, but we also control the technology for making those antennas, so you can't just sell China a PowerPoint on how to make them either." In software, this gets a lot more difficult. Adding an exception to a technology control does not fix the software control...

What we are learning is that software export controls work best when tied to an industry standard. This does not describe the current cyber-tools regulations (4d4 or 5a1j), however. We do know that end-use-based controls are not good even with very large exceptions carved into them, for reasons which might require another whole paper but which seem obvious in retrospect when looking at the regulations as "laws" that need to be applied on an objective basis.

Impact


I got flak last Sunday for Tweeting that export controls "ban" an item, which they clearly do not. However, the effect of export controls is similar - largely a slower, more painful, and silent death rather than a strict ban. I.e., export controls are less a bullet to the head and more a chronic but fatal disease for your domestic industry. This is partially because licensing imposes an extremely high opportunity cost on US businesses, which raises the expense of doing business up and down the supply chain.

There's a common misconception among export control proponents that when used loosely (aka, automatic licensing with only reporting requirements), export control is "cost free" for businesses. Nothing could be further from the truth. Even very small companies (aka startups) are now international companies, and having to understand their risks from export control regimes can be prohibitively expensive with such broadly (and poorly) designed controls as 4d4 or 5a1j.

More strategically, no proponent of strict export control regimes wants to look at their cost and efficacy. Do they even work? Do they work at a reasonable cost? For how long do they work? Do we have a realistic mechanism for removal of controls once they become ineffective? These are all questions we should answer before we implement controls. The long term impacts are recorded at policy meetings in sarcastic anecdotes - "We don't even build <controlled system> in the US anymore, we just buy it from China - that export control did its job!" 

Sadly, this means that export controls are almost certainly having the exact opposite effect from what is desired. This could probably be addressed by having a quite strict "foreign availability" rule when designing new regulations. After all, what is the point of putting restrictions on our exports when the same or similar technology is available from non-WA members? Any real stress on these issues is mysteriously missing from discussions around the cyber-tools regulations. :)

Unilateralism


The goal of the Wassenaar Arrangement and other similar agreements is of course to avoid the problem of unilateral controls, which are an obvious problem. What they don't want to hear is that the implementation differences between countries are large enough that the controls are unilateral anyway. To have truly non-unilateral controls you need one body making licensing decisions - and by design WA does not work like that.

The gaps in implementation are large enough that entire concepts appear in one country that don't exist in others - most prominently, the "Deemed Export" ruleset, which says that if I sell an Iranian an iPhone in Miami, that is the same as exporting it to Iran, and I need to get a license.

Goals, both Stated and Unstated

The stated goal of export controls is avoiding technology transfer for national security purposes! (Note that "human rights issues" are not a stated goal for the WA).

The unstated goals are more complex. For example, we get a lot of intelligence value out of the quarterly reports from software companies covering all of their international customers. There's probably also limited intel value in the licensing documents themselves ("Please have your Chinese customer fill out a form - in English - stating what they are using your product for!"). Obviously the US likes having a database somewhere of every foreign engineer who has accessed a lithography plant, I guess. Because this stuff is unstated, it's hard to argue against in terms of ROI, but I would say that for most of this you can get far better value by having a private conversation with the companies involved - which is also a good first step towards building the kinds of relationships we always claim we want between the USG and industry. As stated previously, the costs imposed by even a "reporting only" licensing scheme are enormous.

When I talk to Immunity customers (large financials) about 5a1j, they assume that the reason the USG wants reporting on all breach detection systems sold overseas is so they can better hack them. It's hard to argue with that. This is a high reputational cost that the USG pays among industry for intelligence that is probably of little real value.

The other unstated goal is leverage. Needless to say with a complicated enough export control regime, nearly every company will eventually be in violation. Likewise, blanket export licenses can massively reduce your opportunity cost, and many countries are happy to issue them in various special cases. Again, I think from a government's perspective it is better long term to develop fruitful bilateral relationships.

A lot of these issues are especially true with "end use"-centric controls - which rely on information that the SELLER or BUILDER of the technology has no way to know ahead of time.

And the last-but-not-least unstated goal is to control the sale of 0day. Governments are mostly aligned in their desire to do this, although few of them understand what that means, what the side effects would be, or how this would really play out. But parts of the rest of their strategy (the VEP) only really work when these controls go into place, so they have a strong drive to write the controls now and see how things work later. It is this particular unstated but easily visible goal that I think is the largest threat to the security industry currently.

Conclusions

I tried in this document to start painting the landscape as to where cyber tool export controls can go wrong. Part of my goal joining ISTAC was to stop making the mistakes we made with 5a1j and 4d4 by putting things into a bit of a more coherent policy framework. Hopefully this document will be useful to other technical and policy groups as together we find a way to navigate this tricky policy area.

----------------
Resources/notes:

To be fair, we've known a lot of these issues for a very long time, and we simply have not fixed them:

From this very good, very old book.

How to find good Cyber Security Policy Writing

There are some simple rules to follow to see if a policy piece in this space will be extra painful to read:


  • Does it use "unpack" and not in the context of talking about compression algorithms? 
  • Does it liberally quote a thousand other articles, but without any real understanding of their context? 
  • Does it have obvious misstatements about technical facts?
  • Does the author have no experience in Intel or industry?
  • Does it lean heavily on "data" which could be reasonably considered purely subjective or of shoddy quality?
The "Cyber Strategy" book reviewed on this blog is a good example of this. But the opposite is also true! Lately you have spooks coming out from the shadows to write policy pieces, and the heads of various companies have spent time to do so as well. There are policy teams (both in the US and elsewhere) that have spent time to learn the technology!

You can see examples of some of this work here, on the Cyber Defense Review. I haven't even read it yet, but I know a journal that has an article from Bryson Bort or Shawn Henry is going to have worthwhile perspectives.

For what it's worth, pure legal writing can also have interesting tidbits, like this piece from Mike Schmitt - a leading proponent of International Law's role in this space. Usually the value in a pure policy or legal piece occurs when it acknowledges the current issues of the system instead of optimistically whitewashing past efforts.

Mike got pushback (largely from the US) when he and others proposed a "violation of sovereignty" standard. This is because it doesn't work in operational practice. But he still likes the idea because it makes LEGAL decisions quite clear. :)