Friday, July 13, 2018

When is a "Search", minus the Quantum Theory, for Law Professors



One issue with reactions to Carpenter is that they tend to assume that we can get clarity about how technological change affects what a search is by making up artificial models for how telephony systems and search processes work. The principal example of this sort of model is Orin Kerr's description in his Lawfare piece.


Example random model Orin made up. :)
If you want a better example of how complicated this sort of thing is, I recommend this Infiltrate talk on the subject of how Regin (allegedly the Brits) searched a particular cellular network, covertly.



If you want to find out when a particular search started or ended, you almost always have to develop a lot of expertise in Quantum Mechanics, starting with Heisenberg, but quickly moving into the theory of computation, etc. This is a good hobby in and of itself, but probably more than a law professor wants to take on.

So I recommend a shortcut. A search is anything that can tell a reasonable person whether or not someone is gay. It's simple and future-proof and applies to most domains.

Thursday, July 12, 2018

The Senate Meltdown/Spectre Hearing


You can browse directly to the debacle here. Everything from beginning to end of this was a nightmarish pile of people grandstanding about the wrong things.

Let's start with the point that if you're going to get upset about a bug, Meltdown and Spectre are SUPER COOL but that does not make them SUPER IMPORTANT. In the time it took Immunity to write up a really good analysis of, and exploit for, them, maybe fifty other local privilege escalation bugs came out for basically every platform they affected. And they are hardly the first new bugclass to come along. I guarantee you every major consulting company out there has a half dozen private bugclasses. People always say "You need to be able to handle an 0day on any resilient system", and the same thing is true for bugclasses.

I'm going to quote the National Journal here.
Chairman John Thune said he “hesitates” to craft legislation that would require U.S. companies to promptly hand over information on new cyber-vulnerabilities to the government, or to deny that same information to Chinese firms.
“You’d like to see that happen sort of organically, which is what we tried to suggest today and which many of the panelists indicated is happening in a better way, a more structured way,” Thune told reporters after the hearing.
Nearly every part of this not-so-veiled threat is a bad idea. Assuming they could come up with a definition of "cyber-vulnerability", the companies involved do most of this work overseas. They would no doubt make sure to give this information to every government at the same time. Now we are in a race to see who can take advantage of it first?

There's a reason Intel didn't even bother to show up to this hearing. One of them is that they can't afford to be seen taking sides with the USG in public. Which is precisely why this conversation happens over beers in a bar somewhere instead of us counter-productively trying to browbeat them on live TV for no good reason. And we have to deal with the fact that sometimes we don't get what we want.



Here's a list of things we could have learned:

  • Bugs that private companies discover are not classified information protected and owned by the USG
  • There are consequences to our adversarial relationship with the community and with industry
  • No matter how much we blather on about coordinated disclosure systems and public private partnerships, companies have other competing interests they are not going to sacrifice just because it would be nice for the USG




Sunday, June 24, 2018

Sanger's "The Perfect Weapon" [CITATION NEEDED]

Book Link.

Everyone is very excited about the "revelation" that in order to do their APT1 paper, Mandiant (according to Sanger) hacked back. But that's not the only stunner in the book. He also points to a WMD-level cyber capability leveraged against both Iran and Russia by the United States. There are a ton of unsubstantiated claims in the book, and the conclusion is a call for "Cyber Arms Control" which feels unsupported and unspecified. And Sanger has clearly drunk deeply of the Microsoft Kool-Aid.

But to the point of the (alleged) hack-back: We should have long ago developed a public policy for this, since everyone agrees it is happening, but we seem unable to do so even in the broadest strokes. I think part of the problem is that we are always asking ourselves what we want the cyber norms to be, instead of what they actually are. I'm not sure why. It seems like an obvious place to start.

WMD theory has a pretty heavy emphasis on countervalue attacks....
This is the only mention of Kaspersky in the book - a noted absence...

This is...a threat of a WMD via Cyber.

Is this new?

This is a chilling projection.

This is not good reporting right here.

Sheesh.

Hahahahah. DO THEY?



Cypherpunks: The Vast Conflict



I've been carefully reading Richard Danzig's latest paper, Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority. I want to put this piece in context - first of all, Richard Danzig is one of the best policy writers and one of the deepest American policy thinkers currently active. Secondly, this paper is a product of a deeply conservative government reaction to the ascendant Cypherpunk movement and is, in that sense, leading in the wrong direction.

Ok, that sounds melodramatic. Let me sum up the paper thusly:
  • New branches of science introduce upheaval, and each comes, as a party gift, with a new weapon of mass destruction and a general revolution in how war works.
  • We used to get one a century or so, which was possible to adapt to, like a volcano that erupted every so often
    • We built treaties and political theory and tried not to kill everyone on the planet using the magic of advanced diplomacy
  • Now we are getting many new apocalyptic threats at a time
    • AI
    • 3d-Printing
    • Drones
    • Cyber War
    • Gene editing techniques
    • Nanotechnology
  • The rate of new world-changing tech is INCREASING OVER TIME.
    • Our ability to create new international political structures to adapt to new threats appears moribund


Most legal policy experts look askance at the "libertarian" views of the computer science community they have been thrust into contact with, like a Japanese commuter on the rush-hour train. But the computer science world is less big-L Libertarian than philosophically Cypherpunkian, tied to the simple belief that the advance of technology is, on the whole, always a net positive for human liberty. Where society conflicts with the new technologies available to humanity, society should change instead of trying to restrict the march of technology.

Hence, where government experts are scared of disintermediation, as evidenced by a paranoia over Facebook's electoral reach, the computer world sees instead that newspapers were themselves centralized control over the human mind, and worthy of being discarded to the dustbin of history.

Where the FBI sees a coming crisis in the "Going Dark" saga, they find exactly no fertile ground in the technology sector, as if the field they would plant their ideas in was first salted, and then sent into space on one of Elon's rockets.

The US Government and various NGOs were both surprised and shocked at the unanimity and lack of deference of the technological community with regards to the Wassenaar cyber controls or the additional cryptographic controls the FBI wants. This resistance comes not from a "Libertarian" political stance, but from the deep current of cypherpunkism in the community.

These days, not only do Cypherpunks "write code", to quote Tim May's old maxim, but they also "have data". The pushback around Project Maven can be described on a traditional political platter, but also on a tribal "US vs THEM" map projection.

Examine the conversation around autonomous weapons. Of course an autonomous, armed flying drone swarm can be set to kill anyone in a particular building. This is at least as geographically discriminatory as a bomb. Talks to restrict this technology, even at the highest principle level, so far restrict only an empty set of current and future solutions.

Part of this is the smaller market power of governments in general for advanced technology. A selfie drone is essentially 99.999% the same as a militarized drone, and this trend is now true for everything from the silicon on up, and some parts of the US Govt have started to realize their sudden weakness.

As Danzig's paper points out, the platitude that having a "human in the loop" to control automated systems is going to work is clearly false. Likewise, he argues that our addiction to classification hamstrings us when it comes to understanding systemic risk.

 The natural tendency within the national security establishment is to minimize the visibility of these issues and to avoid engagement with potentially disruptive outside actors. But this leaves technology initiatives with such a narrow base of support that they are vulnerable to overreaction when accidents or revelations occur. The intelligence agencies should have learned this lesson when they had only weak public support in the face of backlash when their cyber documents and tools were hacked.
But his solution is anything but. We're in a race, and there's no way to get out of it based around the idea of slowing down technological development.

Monday, June 18, 2018

Policy Bugclass: False Inequivalences

I'm going to leave it up to your imagination why this picture perfectly encapsulates every moment someone suggests two random cyber things are different that are actually the same.


We try to maintain a list of policy-world "bugclasses" in the cyber domain.
  1. Assuming data or execution is bound to a physical location
  2. Assuming code has a built-in "intent"
  3. Building policy/law in legal language instead of in code (i.e., policy that does not work at wire-speed is often irrelevant)
  4. False inequivalences
In this article I want to talk a little bit about false inequivalences, since they are probably the most prevalent bugclass you run into, and you see them everywhere - in export control, in national security law, in policy in general.

For example, export control law (5a1j) likes to try to draw distinctions between the ability to store and the ability to search, or (4d4) the ability to run a command, and the ability to gather and exfiltrate information. In national security policy papers you'll often see a weird distinction between the ability to gather information and the ability to destroy information. Another, more subtle error is a sort of desire to have "networks" which are distinct. Technologists look upon the domain name system as a weak abstraction, but for some reason policy experts have decided that there are strict and discernible boundaries to networks that are worth porting various International Law conventions over to.
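To see why the stored-versus-searched distinction collapses in practice, here is a minimal sketch (all names are illustrative, not from any real product): once you have the "ability to store" records in any structure, the "ability to search" them is one line of code on top.

```python
# Sketch: in software, "ability to store" and "ability to search" are not
# separable capabilities. All names here are illustrative.

records = []  # the "storage" capability

def store(record):
    records.append(record)

def search(predicate):
    # the "search" capability falls out of storage for free
    return [r for r in records if predicate(r)]

store({"proto": "ftp", "user": "alice"})
store({"proto": "http", "user": "bob"})

ftp_users = search(lambda r: r["proto"] == "ftp")
print(ftp_users)  # [{'proto': 'ftp', 'user': 'alice'}]
```

Any regulation that treats these as distinct capabilities is drawing a line through the middle of a single for-loop.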

This bugclass is a real danger, as explaining why two things are "provably equivalent in any real practical sense" annoys lawyers, whose entire careers have been spent splitting hairs in language, and who think that, as a tool, hairsplitting can produce consistent and useful global policy.

More specifically, we need to find a way to revise a lot of our legal code to accept this reality: Title 10 and Title 50 need to merge. Foreign and domestic surveillance practices need to merge. The list goes on and on...


Tuesday, June 5, 2018

Security, Moore's Law, and Cheap Complexity

https://www.err.ee/836236/video-google-0-projekti-tarkvarainseneri-ettekanne-cyconil

To paraphrase Thomas Dullien's CyCon talk:
  • We add 3 ARM computers per year per person on Earth right now. 
  • The only somewhat secure programs we know of focus entirely on containing complexity
  • Software is a mechanism to create a simplified machine from a complex CPU - exploits are mechanisms to unlock this complexity
  • We write software for computers that don't exist yet because we design hardware and software at the same time.
  • We've gotten significantly better at security in the past 15 years, but we've been outpaced by the exponential increase in complexity
  • Every device is now a "Network of Computers" - intra-device lateral movement is very interesting
  • It's much cheaper to use something complicated to emulate something simple than vice versa, in the age of general purpose cheap CPUs. This generates massive economies of scale, but at a cost...insecurity.
  • The economics of chip manufacturing means CPU and Memory providers are driven to sell the hardware they can get away with selling - some percentage of the transistors in a chip are bad, and the chip maker is strongly motivated to ship the least reliable CPU that the customer cannot detect
    • When there are only a few hundred atoms in a transistor, three or four more makes a big difference
  • Until Rowhammer the link between hardware reliability and security was not clear to Electrical Engineers.
  • You cannot write real world secure programs that operate on hardware you cannot trust
  • Computers are deterministic in the abstract sense, but they are really only deterministic MOST of the time. Engineers work really hard to make it so you can ignore the physics of a chip. But it's still happening.
    • Determinism has to be fought for in computers; it is not a given.
  • The impossibility of inspectability in the digital sphere
    • Everything has firmware, none of which we can really have any assurance of
    • Average laptop has ~40 CPUs all with different firmware
    • Local attackers can use physics to induce transient faults, which bypasses crypto verification, which then means nobody can get you out
  • If control of a device has ever been disputed, it can never be ascertained whether it is back under control. This is counter to our standard intuition for how objects work.
  • The same forces that drive IT's success drive IT's insecurity.
  • Halvar loves SECCOMP_STRICT sandbox and wants to make it useful, but of course, making it useful will probably break it
  • Computers will look very different from today's architectures in fifteen years - more different than they did fifteen years ago. Engineers are now focused on designing parallel machines, since Moore's law is over for single-cores. 
  • All the insane complexity we can pump into computation systems is essentially in your pocket. 
  • It's still early days in computers. How good was humanity at building bridges seventy years after we started?
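One of the points above - that a locally induced transient fault can bypass crypto verification - comes down to a check-then-corrupt problem: verification covers the bytes that were inspected, not the bytes that later run. A toy sketch (the "firmware" and pinned hash are illustrative):

```python
# Sketch of check-then-corrupt: verification covers the bytes that were
# inspected, not the bytes that later run. "Firmware" here is illustrative.
import hashlib

def verify(image: bytes, pinned: str) -> bool:
    return hashlib.sha256(image).hexdigest() == pinned

firmware = bytearray(b"trusted firmware image")
pinned = hashlib.sha256(bytes(firmware)).hexdigest()

assert verify(bytes(firmware), pinned)  # the check passes...

# ...then a locally induced transient fault flips a single bit:
firmware[0] ^= 0x01

# The device now runs bytes that no longer match what was verified,
# and nothing re-checks them.
print(verify(bytes(firmware), pinned))  # False
```

This is why "you cannot write real-world secure programs on hardware you cannot trust": the crypto is fine, but the physics happens after the math.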

Tuesday, May 29, 2018

What is the high ground in cyberspace?

I can't even possibly get into how crazy hilarious most of the proposed cyber norms are. Usually the response is "What does the technical community know?" and then a few years later, "Hmm. That didn't work." - even though it was entirely predictable.

High Ground (C.F. Thomas Dullien)

High ground in cyber is high-traffic sites! Facebook and Google are "unsinkable aircraft carriers" in that sense, but any site which has a huge traffic share is high ground. Most of them have very low security, and there are lots of mountain ranges we don't acknowledge the existence of.

This screencap from Matt Tait's 2018 INFILTRATE keynote talks about update providers as strategic risks...
RedTube and other major porn sites have a wider reach than the New York Times ever will. Gaming sites are equally high ground. Dating sites are clearly high ground. There's what you think people do on the Internet versus what they really do, almost everywhere you look, which is why good strategists hold themselves to the hard data they get from historical operations instead of just making up fanciful cyber norms in Tallinn.

I think it's counter-intuitive to grasp that almost everything your computer does when it reaches out is "get more code to execute". Software Updates are the obvious one, but a web page is also just code executing. PDFs are code executing. Word documents are code executing. New TF2 maps are code executing. NVidia's driver download page is exceptionally high ground.

In other words, there's nothing your computer does that is not "updates" when it comes to understanding your strategic risk.
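The point can be made concrete with a toy model (the channel list is illustrative, not exhaustive): strategically, each of these activities is the same primitive - fetch remote content, execute it.

```python
# Toy model: superficially different activities reduce to the same
# primitive - fetch remote content, execute it. The list is illustrative.

channels = {
    "software update": "fetch binary, run it",
    "web page":        "fetch JavaScript, run it",
    "PDF":             "fetch interpreted content, run it",
    "Word document":   "fetch macros, run them",
    "TF2 map":         "fetch scripted content, run it",
}

def code_execution_channels(channels):
    # every channel whose behavior involves running fetched content
    return sorted(name for name, what in channels.items() if "run" in what)

print(code_execution_channels(channels))  # every channel qualifies
```

Your strategic risk surface is the whole dictionary, not just the entry labeled "software update".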

Team Composition


We covered team compositions as applied to cyber operations quite heavily in our talk at T2 in Finland. To quickly summarize: Dive Tanks are going to be implants that are more "RAT"-like. These typically are entirely in userspace, and operate in the grey zones and chaotic areas of your operating system. Main tanks tend to be kernelspace or below. Obviously your implant strategy changes everything about what else you incorporate into your operations.

Win Condition



In Overwatch, one win condition is "we have a ranged DPS on the high ground, unopposed". Knowing the win conditions is important because it keeps you from wasting time and "feeding" your opponents when the battle is already lost. In cyber operations, feeding your opponents is quite simply using new exploits and implants when your current ones have already been caught. This is why a good team will immediately remove all their implants and cease operations once they even get a hint that they were discovered.

Unlike in Overwatch, the win condition in cyber is usually who is more covert than the other person. You don't have to remove your opponent from the field, you just have to make it irrelevant they are there.

Conclusion

Keeping your strategy as simple as possible allows for a high tempo of operations with predictable and scalable results. Create a proper toolkit composition, execute the right tactical positioning based on your composition, and understand your win condition, and you will end up a grandmaster. :)

Thursday, May 24, 2018

When our countermeasures have limits

Countermeasures are flashy. But do they work?

So the FBI took over the domain VPNFilter was using for C2. VPNFilter also used a number of Photobucket accounts for C2, which we can assume have been disabled by Photobucket.


Hmm. Why did they do so many? Do we assume that every deployed region would have the same exact list?

Here's my question: How would you build something like this that was take-down resistant? Sinan's old paper from 2008 on PINK has some of the answers. But just knowing that seizing a domain is useless should change our mindset...
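One standard takedown-resistant pattern (a sketch of the general technique, not a claim about what VPNFilter's authors or PINK actually did) is a domain generation algorithm: the implant derives a fresh, date-dependent list of rendezvous points, so seizing any single domain accomplishes little. The seed, TLD, and counts below are invented for illustration:

```python
# Sketch of a takedown-resistant rendezvous scheme (a domain generation
# algorithm). Seed, TLD, and counts are illustrative.
import hashlib
from datetime import date

def candidate_domains(seed: str, day: date, count: int = 10) -> list:
    # Implant and operator both derive today's candidate list; defenders
    # must seize or sinkhole *every* candidate to break the channel.
    out = []
    for i in range(count):
        h = hashlib.sha256(f"{seed}:{day.isoformat()}:{i}".encode()).hexdigest()
        out.append(h[:12] + ".example.com")
    return out

today = candidate_domains("campaign-seed", date(2018, 5, 24))
print(len(today), today[0])
```

The operator only has to register one of tomorrow's candidates; the defender has to win every day, everywhere.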

As a quick note: that last sentence of the FBI affidavit is gibberish.

From what I can tell from public information, the VPNFilter implants did not have a simple public-key related access method. But they may have a secret implant they installed only in select locations which does have one. Cisco and the FBI both are citing passive collection and a few implants from VirusTotal and from one nice woman in PA. We do know the attackers have a dedicated C2 for Ukrainian targets. 



My point is this: Our current quiver of responses can't remove botnets from IoT devices. The only reasonable next move is to do a larger survey of attacker implants - ideally to all of them, using the same methods the attackers did (we have to hope they didn't patch each box). This requires a policy framework that allows for DHS to go on the offense without user permission, and worldwide.

Tuesday, May 22, 2018

Exploits as Fundamental Metrics for Cyber Power


If you're measuring cyber power, you can measure it in a number of different ways:

  • Exploitation (this article!)
  • Integration into other capabilities (HUMINT, for example)
  • Achieved Effect (so much of IL wants to look here, but it is very hard)
In a previous article on this site we built a framework around software implants as a metric for measuring sophistication in capability. (Also see this Ben Buchanan piece for Belfer.)

Since there are no parades of cyber combat platforms through downtown DC, or even announcements in Jane's, non-practitioners have tried to tag any effort which includes "0days" as sophisticated - and, in the case of export control, too sophisticated to be traded without controls. The way this typically appears is via the concept of "bypassing authorization" being some sort of red line.

But from a strategic standpoint we have for years tried to look at the development and expenditure of 0day as a declaration of capabilities befitting a State-level opponent. This is of course a mistake, and one part of that mistake is thinking of all 0days as equal from an information-carrying perspective as regards capabilities.

So what then, do practitioners look for when gauging 0day for nation-state-level sophistication, if not simply the use of any 0day?

Here is my personal list:
  • Scalable CONOPS
  • Toolchain Integration 
  • Cohesive OPSEC
  • Historical Effort and Timescales
Without going into each one of those in detail, I want to highlight some features that you'll see in State-level exploits. Notably, there is no red line on the "sophistication" of an exploit technique that differentiates "State" from "amateur". On the contrary, when you have enough bugs, you pick the ones that are easiest to exploit and fit best into your current CONOPS. Bugs with the complexity level of strawberry pudding recipes tend to be unreliable in the wild, even if they are perfectly good in the lab environment.

A notable exception is remote heap overflows, which for a long time were absent from public discourse. These tend to be convoluted by nature. And it's these that also typically demonstrate the hallmarks of a professional exploit that has had the time to mature properly. In particular, continuation-of-execution problems are solved, the exploit will back off if it detects instability in the target, the exploit will use same-path stagers, you'll see PPS detection and avoidance, and the exploit will be isolated properly on its own infrastructure and toolkit. What you're looking for is the parts of an exploit that required a significant testing effort beyond that which a commercial entity would invest in.

One particular hallmark is of course the targeting not of the newest and greatest targets, but of the older and more esoteric versions. A modern exploit that also targets SCO UnixWare, or Windows 2000, is a key tell of a sophisticated effort with a long historical tail.

There is a vast uneducated public perception that use of any 0day at all in an operation, or 4 or 5 at once, indicates a "state effort". However, the boundaries around state and military use of exploits are more often in the impressions of the toolkits they fit into than in the exploits themselves. While exploits, being the least visible parts of any operation, are sometimes the hardest to build metrics around, it's worth knowing that the very fact that 0days exist as part of a toolchain is not the needed metric for strategic analysis, nor the one practitioners in the field use.

Tuesday, May 8, 2018

What is an "Observable Characteristic" in Software Export Control?

Note: This is a living document partially written for those new to export controls - if you think I misunderstood something let me know and I'll address it within!
---------------------------------------------------------------------------------------------------------


I want to highlight this Twitter thread, which goes over 4D4 ("Intrusion Software") in a bit of detail. I feel like many proponents of 4D4 complain that the rest of us, who have concerns, don't properly understand export control frameworks. I would posit there IS NO UNDERSTANDING OF EXPORT CONTROL FRAMEWORKS, BY DESIGN :). But to be more specific about the concerns, the following bite-size bit is the most important part:


Being able to look, objectively, at a piece of hardware and say "this is a stealth coating because it has the following manufacturing characteristics" is a different category from being able to look at a piece of software and say "this bypasses ASLR and DEP". Deep down, while radio frequencies are in general a universal thing, and performance can be measured, the export control language applied to software exists in a huge fog! What does it mean to "bypass a mitigation"?

The Issues of End Use Controls


What this results in is END USE controls. In other words, instead of saying "we want to ban antennas that can emit the following level of power", we write controls that say "we want to ban software that CAN BE USED for the following thing". This means instead of looking at the software to control it, you end up looking at the marketing, so the controls are littered with marketing language ("Carrier Grade Speed!") and do not have functions, characteristics, or performance levels of any kind.

Sometimes you see long lists of functionalities in software controls, as if this is going to be a definitive characteristic if you add enough of them. For example, 5a1j ("Surveillance software") is essentially:

  • Collects network information
    • parses it and stores the metadata about it (aka, FTP usernames and such) into a DB
  • Indexes that information (why else would you have this in a DB?)
  • Can visualize and graph relations between users (based on the information you indexed)
  • Can search the DB using "selectors" (again, why else is it in a DB?)
This is what modern breach detection software is - a product category that did not really exist when 5a1j was formulated. But each of the pieces DID exist, and given a market opportunity, they got put together as you would expect. In other words, long lists of functions are not enough to make a good control (especially when all the functions you are describing are commoditized in the ELK Docker image).
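To illustrate just how commoditized that function list is, here is a toy version of nearly every "controlled capability" in a few lines (field names and the FTP example are invented; graphing relations between users would be one more loop over the same index):

```python
# Toy version of the 5a1j "controlled functions": collect, parse/store,
# index, and selector-search network metadata. Field names are invented.
import re
from collections import defaultdict

db = []  # "stores the metadata into a DB"

def ingest(packet_text: str):
    # "collects network information" and "parses it"
    m = re.search(r"USER (\w+)", packet_text)
    if m:
        db.append({"proto": "ftp", "user": m.group(1)})

def index_by(field: str):
    # "indexes that information"
    idx = defaultdict(list)
    for row in db:
        idx[row[field]].append(row)
    return idx

def select(field: str, value: str):
    # "search the DB using selectors"
    return [r for r in db if r[field] == value]

ingest("USER alice")
ingest("USER bob")
print(select("user", "alice"))  # [{'proto': 'ftp', 'user': 'alice'}]
```

If a control's functional description can be satisfied by a screenful of stdlib Python, it will sweep in vastly more than its intended targets.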


Performance levels are typically the big differentiator. The general rule is that if you cannot define a performance level, you are writing a terrible regulation, because it will apply broadly to a huge population you don't care about and have massive side effects; but the international community has typically just ignored this for software controls (because it is hard). Part of the difficulty here is that performance levels in the cyber domain tend to go up a lot faster than in most manufacturing areas. The other issue is that for the controls people seem to want, there are no clear metrics on performance. For example, with 5a1j there is nothing that differentiates the speeds/processing/storage that Goldman Sachs (or even a small university) would use versus a country backbone ISP.

Another thing to watch out for is the contrast between controls on "Software" and the controls on "Technology". Usually these controls go hand in hand. They'll say "We control this antenna, but also we control technology for making those antennas so you can't just sell a Powerpoint on how to make them to China either". In software, this gets a lot more difficult. Adding an exception to a technology control does not fix the software control...

What we are learning is that software export controls work best when tied to an industry standard. This does not describe the current cyber-tools regulations (4d4 or 5a1j), however. We do know that end use-based controls are not good even with very large exceptions carved into them, for reasons which might require another whole paper, but which seem obvious in retrospect when looking at the regulations as "laws" which need to be applied on an objective basis.

Impact


I got flak last Sunday for Tweeting that export controls "ban" an item, which they clearly do not. However, the effect of export controls is similar - largely a slower, more painful, and silent death rather than a strict ban. I.e., export controls are less a bullet to the head and more a chronic but fatal disease for your domestic industry. This is partially because licensing imposes an extremely high opportunity cost on US businesses, which raises the expense of doing business up and down the supply chain.

There's a common misconception among export control proponents that when used loosely (aka, automatic licensing with only reporting requirements), export control is "cost free" for businesses. Nothing could be further from the truth. Even very small companies (aka startups) are now international companies, and having to understand their risks from export control regimes can be prohibitively expensive with such broadly (and poorly) designed controls as 4d4 or 5a1j.

More strategically, no proponent of strict export control regimes wants to look at their cost and efficacy. Do they even work? Do they work at a reasonable cost? For how long do they work? Do we have a realistic mechanism for removal of controls once they become ineffective? These are all questions we should answer before we implement controls. The long term impacts are recorded at policy meetings in sarcastic anecdotes - "We don't even build <controlled system> in the US anymore, we just buy it from China - that export control did its job!" 

Sadly, this means that export controls are almost certainly having the exact opposite effect from what is desired. This could probably be addressed by having a quite strict "foreign availability" rule when designing new regulations. After all, what is the point of putting restrictions on our exports when the same or similar technology is available from non-WA members? Any real stress on these issues is mysteriously missing from discussions around the cyber-tools regulations. :)

Unilateralism


The goal of the Wassenaar Arrangement and other similar agreements is of course to avoid the problem of unilateral controls, which are an obvious problem. What they don't want to hear is that the implementation differences between countries are large enough that the controls are unilateral anyway. To have truly non-unilateral controls you need one body making licensing decisions - and by design WA does not work like that.

The gaps in implementation are large enough that entire concepts appear in one country that don't exist in others - most prominently, the "Deemed Export" ruleset, which says that if I sell an Iranian an iPhone in Miami, that is the same as exporting it to Iran and I need to get a license.

Goals, both Stated and Unstated

The stated goal of export controls is avoiding technology transfer for national security purposes! (Note that "human rights issues" are not a stated goal for the WA).

The unstated goals are more complex. For example, we get a lot of intelligence value out of the quarterly reports from software companies for all of their international customers. There's probably also limited intel value in the licensing documents themselves ("Please have your Chinese customer fill out a form - in English - stating what they are using your product for!") Obviously the US likes having a database somewhere of every foreign engineer who has accessed a lithography plant, I guess. Because this stuff is unstated, it's hard to argue against in terms of ROI but I would say that for most of this you can get a ton better value by having a private conversation with the companies involved, which is a good first step towards building the kinds of relationships we always claim we want between the USG and industry. As stated previously, the costs imposed by even a "reporting only" licensing scheme are enormous.

When I talk to Immunity customers (large financials) about 5a1j, they assume that the reason the USG wants reporting on all breach detection systems sold overseas is so they can better hack them. It's hard to argue with that. This is a high reputational cost that the USG pays among industry for intelligence that is probably of little real value.

The other unstated goal is leverage. Needless to say with a complicated enough export control regime, nearly every company will eventually be in violation. Likewise, blanket export licenses can massively reduce your opportunity cost, and many countries are happy to issue them in various special cases. Again, I think from a government's perspective it is better long term to develop fruitful bilateral relationships.

A lot of these issues are especially true with "end use"-centric controls - which rely on information that the SELLER or BUILDER of the technology has no way to know ahead of time.

And the last-but-not-least unstated goal is to control the sale of 0day. Governments are mostly aligned in their desire to do this, although few of them understand what that means, what the side effects would be, or how this would really play out. But parts of the rest of their strategy (the VEP) only really work once these controls are in place, so they have a strong drive to write the controls now and see how things work later. It is this particular unstated but easily visible goal that I think is the largest threat to the security industry currently.

Conclusions

I tried in this document to start painting the landscape of where cyber tool export controls can go wrong. Part of my goal in joining ISTAC was to stop making the mistakes we made with 5a1j and 4d4 by putting things into a more coherent policy framework. Hopefully this document will be useful to other technical and policy groups as together we find a way to navigate this tricky policy area.

----------------
Resources/notes:

To be fair, we've known a lot of these issues for a very long time, and we simply have not fixed them:

From this very good, very old book.

How to find good Cyber Security Policy Writing

There are some simple rules to follow to see if a policy piece in this space will be extra painful to read:


  • Does it use "unpack" and not in the context of talking about compression algorithms? 
  • Does it liberally quote a thousand other articles, but without any real understanding of their context? 
  • Does it have obvious misstatements about technical facts?
  • Does the author have no experience in intelligence work or industry?
  • Does it lean heavily on "data" which could be reasonably considered purely subjective or of shoddy quality?
The "Cyber Strategy" book reviewed on this blog is a good example of this. But the opposite is also true! Lately you have spooks coming out from the shadows to write policy pieces, and the heads of various companies have spent time to do so as well. There are policy teams (both in the US and elsewhere) that have spent time to learn the technology!

You can see examples of some of this work here, on the Cyber Defense Review. I haven't even read it yet, but I know a journal that carries articles from Bryson Bort or Shawn Henry is going to have worthwhile perspectives.

For what it's worth pure legal writing can also have interesting tidbits, like this piece from Mike Schmitt - a leading proponent of International Law's role in this space. Usually the value in a pure policy or legal piece occurs when they acknowledge the current issues of the system instead of optimistically whitewashing past efforts. 

Mike got pushback (largely from the US) when he and others proposed a "violation of sovereignty" standard. This is because it doesn't work in operational practice. But he still likes the idea because it makes LEGAL decisions quite clear. :)
 



Thursday, May 3, 2018

Book Review: Cyber Strategy by Valeriano, Jensen, Maness



I give this book 0 stars out of 5. To be fair, I give the entire genre of books like this 0 stars out of 5. This is the worst kind of cyber policy writing. They've concocted some sort of database of information, culled mostly from news reports from what I can tell. Then they do some basic statistical analysis on it and somehow flesh that out into an entire book by pulling random quotes from other terrible parts of the cyber policy pantheon.

For example, here they both misspell "Regin" and then attribute it to the United States (for no reason).

In general the editing of the book itself was spotty - but what concerns me more is the original dataset, which appears to attribute various efforts to countries in ways I'm 90% sure are incorrect. If your data is wrong, then your conclusions are essentially random, and your policy prescriptions are mere opinion.


For example, the above simplistic argument against the use of exploits during cyber operations is one that you often see in policy-world but which nobody who has ever been involved in an operation takes seriously. Also, "Tomahawk" is a proper freakin' noun! I don't think anyone but me has even read this book, to be honest.

But to reiterate: IT IS HUGELY RARE TO SEE AN ADVERSARY USE YOUR ATTACKS AGAINST YOU. Everyone in their head is going to cite ETERNALBLUE but that was used only once the patch was out, as far as we know. The opsec reason for this is that USING a bug you caught from someone TELLS them you caught them! Likewise every group has their own concept of operations and other people's tools don't always fit yours.

I mean it was fascinating you didn't see the FLAME exploit turned around - it was reusable - but everyone just assumed it only fit in the FLAME toolchain. It's almost easier to find new bugs than do the research necessary to re-use old bugs.

Books like this always try to back up their arguments with copious quotes to other equally bad books:

On the face of it, Libicki is clearly wrong. But it's possible his quote was used out of context! I'll never know because the Kindle edition of that book is $66. I only have so much budget to spend reading this kind of thing.

Ok so in summary: Don't buy or read this book or books like it. We need NEW thoughts in cyber policy and this is not how you get them.

The Gravity Well


Every particle, even dark matter, can bend spacetime.


Part of the reason policymakers are often confused about resistance to many of the items on their wishlist in the cyber domain is that they've already achieved the impossible: Copyright!

If you think about how amazing it is that nowhere on the Internet can you go get Avengers: Infinity War for free, then it leads you to also ponder the vast array of international agreements, corporate pressure, and technological filtering that makes this possible. Nothing could be more inimical to the nature of the cyber domain than copyright. And yet: governments and industry have made it real.

To paraphrase Bruce, "Bits being copied is like water being wet" - and yet we have somehow made it so that in cyberspace all waterfalls run upwards, to make it possible to remove a single picture of Star Wars from the Internet. Why then, policy people ask, can we not just erase all information about computer exploitation and harmful code from the Internet? How much harder can it be?

Allan Friedman once said to me that he looks at his work at the Commerce Dept as correcting market failures. But the real market distortion is like a massive gravity well all around us. Copyright is what makes it so that a firewall vendor can hide the true nature of its weaknesses by making it illegal to write up "performance comparisons" or "reverse engineer" its protocols. So many of the systemic vulnerabilities come from a system of our own making. And when we try to address them at the edges, by regulating IoT device security or going on and on about vulnerability disclosure or revising the CFAA to add just one more exception, it's like trying to chew away a piece of a black hole.

This post is a call for legislation more than my other posts are: We need to address the root of the problem. That means changing what an end-user-license can restrict. It is not just that everyone should be able to write about and patch the code running on their devices, but that we need to acknowledge that copyright has distorted who can even understand the depths of the risks we all face.

---
Links/Resources:

Tuesday, May 1, 2018

The Dark Gulf between the FBI and the Technical Community

https://oig.justice.gov/reports/2018/o1803.pdf

I think it's important that we acknowledge and address that each side in the encryption debate does not believe so much that the other side is wrong, but that they are lying liars who will stop at nothing to get their way.

In particular, the contrast between Susan Hennessey/Stewart Baker's take on the FBI documents that got released about the iPhone unlocking debacle and the Risky.biz (and technical community) take are quite telling.

Risky.Biz, when speaking to their audience, essentially assumed that the FBI, while not misleading the court in words, knew that it could have reached out to the contractor base for potential solutions to unlocking Farook's iPhone, and that not doing so was highly dishonest when it claimed there were "no other solutions" besides the nuclear option of forcing Apple to build decryption capability for the FBI.

Susan Hennessey (and others in the national security space) take the FBI at their word: No misleading statements went to the court and "not having the capability in our hands right now" is enough to move forward with the legal case. "Perhaps there is some stovepiping issue but nothing even slightly duplicitous" we hear from that side. "If you only read the whole report, then you'll see!"

Here's what you see when you read the whole report: There are fig leaves of truth covering the massively weird idea that in the highest profile case in the country the FBI didn't ask the head of the ROU to have a contractor maybe look into the problem. It's literally unbelievable. "Oh, I thought we could only work on classified issues" is what's in the report, as if these teams don't have each other's cell phone numbers.

These are two damning paragraphs in the report, and they validate what the technical community thinks of the FBI.

You see the same dynamic with the Ray Ozzie cryptographic proposal. The national security teams that think the technical community isn't being straightforward about how possible it is to build secure key escrow (when clearly public key encryption exists!) don't believe they are wrong - they believe the technical community knows perfectly well how to build PKI and just won't, for political reasons.

Matt Green points out that the issue is not "Can we build PKI?" but "Do we want to force companies to build PKI and assume the essentially unbounded liability of maintaining it?". Keep in mind, tech people have watched every PKI system they've built over the past 20 years fail in some way, and companies have no desire to hold or control any data beyond that which gets them ad revenue.

Of course, Stewart Baker would then say "Unless the Chinese want it, in which case everyone seems to RUSH to find a solution". The technical community, in turn, says "But the USG is supposed to be our friends."

I'm not saying the technical community is right about these issues, although they clearly are: The FBI lied in all but words, and the "going dark" debate is insane when we can identify serial killers from their great grandparent's DNA.

But even if the FBI was right, we have to look strategically at what it means in every dimension of this debate when the US technical community and the FBI both don't trust each other even a tiny bit. Without solving this, how can we move forward on any part of the massive problems we face? How do you have public private partnerships without trust? How do you build cyber war norms? Infragard can only go so far...





Thursday, April 12, 2018

Stealing the Socket for Policy and Profit

One exploit that has fascinated me for more than a couple of years is this one by Yuange of NSFOCUS. When I mentioned this on Twitter, Yuange himself pointed me at this paper, where he describes a bit of his technique and his philosophy. A few things stick out from this exploit:

The first is that he was ahead of his time in adopting the PE-parsing technique for writing portable Windows shellcode. The second is his uniquely Chinese style of writing the entire exploit in C, with the shellcode just "compiled" rather than hand-written. Third, he used an entirely new and innovative method of stealing the socket on IIS by using the built-in ISAPI handler calls. And fourth, he built a micro-backdoor into his exploit shellcode.



I want to highlight the third thing - the socket stealing. But first, I want to look at the work of another well known hacker group: LSD-PL. I can't remember now if their Windows Asmcode paper was the first public example of the PE-parsing technique for Windows shellcode. I remember Oded Horowitz worked in that area before it was public (and also wrote a special purpose linker for Windows which allowed you to write your shellcode in C using Visual Studio).

LSD used a specific technique for their FindSck Asmcode which looks almost exactly like their Unix version. I'll paste it below since a significant portion of the policy community is learning hacker assembly now.

Page 22 of this presentation has the decompilation of this.

In this case they go through every FD from 0 to 0xffff and call getpeername() on it. Then they see if the source port is the one they hardcoded into the shellcode at runtime to look for.
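To make that concrete, here is a user-mode C sketch of the FindSck loop - my reconstruction, not LSD's actual asmcode, and the function name is made up. The exploit connects from a source port it knows; the shellcode then walks every file descriptor, calls getpeername() on each, and keeps the one whose peer port matches.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Sketch of the LSD FindSck idea: walk the FD table and ask the OS
 * who is on the other end of each descriptor.  The FD whose peer
 * port matches the port the exploit connected from is the attacker's
 * own connection -- the "stolen" socket. */
int find_sck(unsigned short wanted_port) {
    for (int fd = 0; fd < 0xffff; fd++) {
        struct sockaddr_in peer;
        socklen_t len = sizeof(peer);
        /* getpeername() fails on FDs that aren't connected sockets */
        if (getpeername(fd, (struct sockaddr *)&peer, &len) != 0)
            continue;
        if (peer.sin_family == AF_INET &&
            ntohs(peer.sin_port) == wanted_port)
            return fd;  /* found our own connection */
    }
    return -1;
}
```

The assembly version is doing exactly this, just without the luxury of libc symbols.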

However, compare that technique to the first GOCode in Apache Nosejob, from hacker comedy group Gobbles. Apache Nosejob was the second version of apache-scalp, which exploited an "impossible" bug found by ISS X-Force researcher Mark Dowd.


As you can see, it's called "GOCode" because on the remote side the shellcode goes through its FDs sending "G" to each of them, and the exploit responds to that G with an O as a simple handshake. This technique is obviously noisier (every socket gets a G, like in some weird Oprah show!) but more resilient in certain kinds of networking environments (e.g. behind NAT).
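A hedged C sketch of that handshake (again my reconstruction, not Gobbles' code) looks like this - note that no port numbers are involved at all, which is why it survives NAT:

```c
#include <sys/socket.h>

/* Sketch of the GOCode handshake: write 'G' down every descriptor we
 * hold; only the attacker's side answers 'O'.  Whichever FD produces
 * the 'O' is the attacker's connection.  Noisy, but no addresses or
 * ports are compared. */
int go_handshake(const int *fds, int nfds) {
    for (int i = 0; i < nfds; i++) {
        char c = 'G';
        if (send(fds[i], &c, 1, MSG_NOSIGNAL) != 1)
            continue;
        /* real shellcode would select() here; the sketch just polls
           once without blocking */
        if (recv(fds[i], &c, 1, MSG_DONTWAIT) == 1 && c == 'O')
            return fds[i];  /* the exploit answered our 'G' */
    }
    return -1;
}
```

A real payload would wait on each descriptor rather than doing a single non-blocking poll; the sketch keeps that part minimal.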

But why are all these somewhat contemporary techniques so different? And why even invest this kind of time and energy in stealing sockets?

Here's what Yuange has to say:


And here is what LSD has to say about that same thing:


One key point from the LSD-pl Windows slides is that they implemented a mini-backdoor in assembly partially to solve the problem all Unix hackers had moving to Windows before Powershell was included by default - the OS feels lobotomized.

Shellcode is called "Shellcode" because a Unix shell is a full-featured programming environment. There are thousands of ways to transfer files from point A to point B given shell access to a 1990's Unix system. This is not nearly as easy on Windows 2000. But LSD and Yuange both realized that the path of least resistance on Windows was to build file transfer into your stage-1 assembly code rather than trying to script up a debug.com wrapper.

Yuange's IIS exploit doesn't "pop cmd.exe" - it has this mini-shell for the operator to use.

So now let's go back to the Yuange exploit and talk about the ISAPI-stealing code as if you are 22-year-old me, puzzling over it. The first thing he does is set an exception handler for all access violations, and then he walks up the stack, testing each address for a possible EXTENSION_CONTROL_BLOCK.


The ECB has a set size (0x90), which it stores as the first DWORD and then the connID field at ecb+8 will always point...right back at the ECB! Once he has found the ECB he now has a connID and the addresses (stored in the ECB) for function pointers to the ReadClient() and WriteClient() that IIS provides every ISAPI.
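Here is a rough C simulation of that check - my reconstruction from the description above, run against a fake buffer rather than a live IIS stack, with a struct layout following the Platform SDK ECB header. The real shellcode does this walk under the exception handler so that unmapped stack addresses don't kill it.

```c
#include <stddef.h>
#include <stdint.h>

/* A fake EXTENSION_CONTROL_BLOCK with just the fields the check
 * needs: cbSize (always 0x90) and ConnID, which IIS sets to the
 * ECB's own address.  The WriteClient/ReadClient function pointers
 * the exploit ultimately wants live further into the struct. */
typedef struct fake_ecb {
    uint32_t cbSize;    /* always 0x90 */
    uint32_t dwVersion;
    void    *ConnID;    /* points right back at the ECB */
} fake_ecb;

/* Scan [base, base+len) for something that looks like an ECB:
 * first DWORD is the fixed size, and the ConnID field is a
 * self-pointer.  Two independent checks make false positives on a
 * stack full of junk very unlikely. */
void *find_ecb(unsigned char *base, size_t len) {
    for (size_t off = 0; off + sizeof(fake_ecb) <= len; off += sizeof(void *)) {
        fake_ecb *cand = (fake_ecb *)(base + off);
        if (cand->cbSize == 0x90 && cand->ConnID == (void *)cand)
            return cand;  /* size and self-pointer both check out */
    }
    return NULL;
}
```

The self-referential ConnID is what makes the heuristic so robust: almost nothing else on the stack is a pointer to itself.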

This means his exploit is going to steal the socket reliably, no matter what ISAPI he targets, and whether or not it is In_Proc or Out_Proc, using SSL or not, even if he is behind several layers of middleware and firewalls and proxies of various sorts. In that sense it is BETTER and more generic than the LSD-PL and GOCode styles for this particular problem set (IIS Exploits).

Generic shellcode platforms are often derided by penetration testers as not worth the effort, but I hope that by reading this article you have gained the foresight to see that for real work - by skilled but small teams who cannot afford a room of Raytheon engineers to architect bespoke solutions for every exploit and operation's microclimate - this was a necessary investment. Kostya summed up a lot of Immunity's experience with this in a BlackHat talk.

Generally further in time from left to right.

If you're completely non-technical, the goal of this kind of analysis can be difficult to understand, but the point is that real teams consider their exploit done only when it "works in the wild" - and socket-stealing and post-exploitation data transfer are a big part of that. Likewise, there are many ways to solve these problems, and different teams chose different ones, in patterns that are telling. Historically, the people who developed these techniques have moved on to interesting places (Yuange is at Tencent, I hear), and if you were not impressed with them in 2001, you may not truly understand the modern landscape.

There was a purpose to hacking in the 2000's beyond getting on stage somewhere. The early hacker groups were run by strong philosophies. Mendez is not the only hacker who had a political bent driven by a strong world-view. What and/or who was the AntiSec movement, for example? You can't spend all of your spare time obsessively reading secrets without being changed, and those twists are evident in modern geopolitics as clearly as glacial troughs, if you have the right eyes for it.

Monday, March 19, 2018

Some CrazyPants ideas for handling Kaspersky

These pants make more sense than some of the ideas posted for handling Kaspersky

So the benefit of being a nation-state, and the hegemon of course, is that you can pretty much do whatever you want. I refer, of course, to last week's Lawfare post on policy options for Kaspersky Labs. The point of the piece, written by a respected and experienced policy person, Andrew Grotto, is that the US has many policy options when dealing with the risk Kaspersky and similar companies pose to US National Security. Complications include private ownership of critical infrastructure, the nature of cyberspace, and of course ongoing confusion as to whether we have punitive or palliative aims in the first place. Another complication is how crazypants all the suggestions are.

He lists six options, the first two dealing with "Critical Infrastructure", where the Government has direct regulatory levers and Kaspersky has a zero percent market share already and always will. The third one is so insane, so utterly bonkers, that I laughed out loud when reading it. It is this:


Ok, so keep in mind that "deemed export" is an area of considerable debate in the US Export Control community, and not something any other country does. While yes, applying the BIS Export Control rule in this case would immediately cause every company that does business in the United States to rush to uninstall KAV, this is not where the story would end.

Instead, we would have a deep philosophical discussion (i.e. Commerce Dept people being hauled in front of Congress) - because for sure not everyone who works at Azure, every backup provider in the world, or literally any software company, is a US Citizen. Because while Kaspersky has deep and broad covert access to the machines they are installed on, they are not the only ones.

We currently interpret these rules extremely laxly, for good reason.

The next suggestion in the piece is adding Kaspersky to the Entity List - essentially blacklisting them without giving a reason. Even ZTE did not get this treatment: they paid a fine and are working their way back into good graces, and that case was highly defensible. And what about the thousands of US businesses that already have Kaspersky installed? The follow-on effects are massive, and the piece ends up recommending against it, since the case against Kaspersky, while logical, is possibly not universally persuasive as a death sentence without further evidence.

Tool number 5 is the FTC bringing legal claims against Kaspersky for "unfair or deceptive acts or practices" - in particular, for pulling innocuous files back to the cloud. Kaspersky's easy defense is going to be "We don't know they are innocuous until we pull them back and analyze them, we make it clear this is what we do, and we are hardly the only company to do so - for example, see this article." I.e., the idea of FTC legal claims is not a good one, and they know it.

The last "Policy Tool" is Treasury Sanctions. Of course we can do this but I assume we would have to blow some pretty specific intel sources and methods to do so.

Ok, so none of the ideas for policy toolkit options are workable, obviously. And as Andrew is hardly new at this, I personally would suggest that this piece came out as a message of some kind. I'm not sure WHAT the message is, or who it is for, but I end with this image to suggest that just because you CAN do something doesn't mean it is a good idea.




What happens if the Russians get false flag right?



There's a lot of interesting and unsolved policy work to be done around the Russian hack of the 2018 Olympics. Some things that stuck out at me were the use of router techniques, the choice of targeting, and of course the attempt to false-flag the operation to the North Koreans. I mean, it's always possible the North Koreans, not shabby at this themselves, rode in behind the Russians or sat next to Russian implants and did their own operation.

There's a lot of ways for this sort of thing to go wrong. Imagine if there had been a simple bug in the router implants, which had caused them to become bricked? Or imagine if the Russians had gotten their technical false flag efforts perfect, and we did a positive attribution to North Korea, or could not properly attribute it at all, but still assumed it was North Korea?

Or what if instead of choosing North Korea, they had chosen Japan, China, or the US or her allies?

What if a more subtle false flag attempt smeared not just a country, but a particular individual, who was then charged criminally, which is the precedent we appear to want to set?

I don't think anyone in the policy community is confident that we have a way to handle any of these kinds of issues. We would rely, I assume, on our standard diplomatic process, which would be slow, unused to the particulars of the cyber domain, and fraught with risks.

It's not that this issue has not been examined - as Allen points out, Herb Lin has talked about it. But we don't have even the glimmers of a policy solution. We have so much policy focus on vulnerability disclosure (driven by what Silicon Valley thinks), but I have seen nothing yet on "At what point will we admit to an operation publicly and contribute to cleanup?" or "How do we prove to the public that an operation is not us or one of our allies?" In particular, I think it is important to note that these issues are not necessarily Government-to-Government issues.

---
Resources:

  • Herb Lin: LINK
  • Technical Watermarking of Implants Proposal: LINK

Tuesday, March 13, 2018

The UK Response to the Nerve Agent Attack

Not only do I think the UK should respond with a cyber attack, I think they will do so in short order.

It's easy to underestimate the Brits because they're constantly drinking tea and complaining about the lorries, but the same team that will change an Al Qaeda magazine into cupcake recipes will turn your power off to make a point.
The Russians have changed their tune entirely today, now asking for a "joint investigation" and not crowing about how the target was an MI6 spy and traitor to the motherland killed as a warning to other traitors (except on Russian TV). I don't think the Brits will buy it. As Matt Tait says in his Lawfare piece, this is the Brits talking at maximum volume, using terminology that gives them ample legal cover for an in-kind military response. Ashley Deeks further points out the subtleties of the international law terminology May chose to use and how it affects potential responses.

For something like this, sanctions go without saying, but they don't exhaust the toolbox. The US often also does indictments, but those are more about sending a message than having an impact. The UK could pressure Russia on the ground in many places (by supporting Ukraine, perhaps?), but that takes a long time and is somewhat risky. Cyber is a much more attractive option for many reasons, which I will put below in an annoying bullet list.

  • Cyber is direct
  • Cyber can be made overt with a tweet or a sharply worded message
  • GCHQ (and her allies) are no doubt extremely well positioned within Russian infrastructure (as was pointed out in this documentary) so operational lag could be minimized or negligible
  • Cyber can be made to be discriminatory and proportional
  • Cyber can be reversible or not as desired
  • Sending this message through cyber provides a future deterrent and capabilities announcement
That answers why the Brits SHOULD use cyber for this. But we think they WILL, because they've already sent that signal via the BBC, and the Russians heard it loud and clear.


Tuesday, March 6, 2018

Why Hospitals are Valid Targets for Cyber

Tallinn 2.0 screenshot that demonstrates which subject lines are valid in spam and which are not. This page has my vote for "Most hilarious page in Tallinn 2.0". CYBER BOOBY TRAPS! It's this kind of thing that makes "Law by analogy" useless, in my opinion.

So often, because CNE and CNA are really only a few keystrokes apart ("rm -rf /", for example), people want to say "hospitals" are not valid targets for CNE, or "power plants" are not valid targets for CNE, or any number of things they've labeled as critical for various purposes.

But the reason you hack a hospital is not to booby trap an MRI machine, but because massive databases of ground truth are extremely valuable. If I have the list of everyone born in Tehran's hospitals for the last fifty years, and they try to run an intelligence officer with a fake name and legend through Immigration, it's going to stand out like a sore thumb.
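As a toy illustration of why that ground-truth database is so powerful: the check itself is nothing more than set membership against records the adversary cannot retroactively forge. Names and layout here are made up; a real system would match on fuzzier keys (date of birth, district, family links).

```c
#include <string.h>

/* Toy version of the "ground truth database" check described above:
 * given every name in the hospital birth records, a legend whose
 * name never appears is the anomaly.  The principle is plain set
 * membership against data the adversary cannot edit after the fact. */
int appears_in_records(const char **records, int n, const char *claimed) {
    for (int i = 0; i < n; i++)
        if (strcmp(records[i], claimed) == 0)
            return 1;   /* legend is consistent with ground truth */
    return 0;           /* stands out like a sore thumb */
}
```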

The same thing is true with hacking United. Not only are the records in and out of Dulles airport extremely valuable for finding people who have worked with the local federal contractors, but doing large-scale analysis of traffic volumes lets you guesstimate budget levels and even figure out covert program subjects. People look at OPM and see only a first-order approximation of the value of that kind of large database. Who cares about the clearance info if you can derive greater things from it?

The Bumble and Tinder databases would be just as useful. If you are chatting with a girl overseas, and she says she doesn't have a Bumble/Tinder account, and you're in the national security field, you're straight up talking to an intelligence officer. And it's hard to fake a profile with a normal size list of matches and conversations... 

And, of course, hacking critical infrastructure and associated Things of the Internet allows for MASINT, even on completely civilian infrastructure. People always underestimate MASINT for some reason. It's not sexy to count things over long periods of time, I guess.

Also, it's a sort of hacker truism that eventually all networks are connected so sometimes you hack things that seem nonsensical to probe for ways into networks that are otherwise heavily monitored.

I highly recommend this book. Sociology is turning into a real science right before our eyes...as is intelligence.
SIGINT was the original big data. But deep down all intelligence is about making accurate predictions. Getting these large databases allows for predictions at a level that surprises even seasoned intelligence people. Hopefully this blog explains why so many cyber "norms" on targeting run into the sand when they meet reality.