Thursday, March 24, 2016

A checklist of scenarios for Wassenaar text judging

We all know by now that I (along with most of the security community) think we would be much better off removing any language related to "intrusion software" from Wassenaar's export control regulations.

But people are putting together lots of texts to examine. In the software world, we do something called "Test Driven Development" - writing the tests first, so you can tell whether your software works by constantly running those tests as you develop it. The list in this blog is solely concerned with "unintended capture". Basically all of these scenarios are banned by the current proposed US implementation - in case you are new to this blog and wondered why the whole industry is up in arms about it. Please help me add to it by sending me emails and tweets!


  • You are Kaspersky and you have captured a sample of Stuxnet. You send this to some researchers in Hungary to help you analyze it
  • You are a penetration tester for E&Y, and you want to buy and use CANVAS or Metasploit Pro while traveling to your customer sites, both domestically and abroad.
  • You are a researcher working for Booz Allen Hamilton, and you find an issue with a commonly used Korean word processor. You send an exploit for this to your Korean friend so he can talk to the vendor for you after making sure it really works.
  • You are Tavis Ormandy, and you travel to Singapore to give a talk on Antivirus Security, including demonstrating some 0day they won't fix
  • You work for Symantec and you turn your head to the left and talk to the H-1B employee who sits next to you about a vulnerability
  • You are a company that does threat intelligence and you send samples of various trojans and their C2's to your customers so they can help protect themselves
  • You run Blackhat and you want to have a conference and do trainings with an international crowd of people without running all your slides or attendees through the NSA for approval.
  • ...


Wednesday, March 23, 2016

A networked war requires a networked peace?

Book Link: https://ccdcoe.org/multimedia/international-cyber-norms-legal-policy-industry-perspectives.html

I have some serious questions about how much "Cyber Excellence" you can get when only one of the authors of your book lists any technical background in their bio. But setting aside the urge to ask why quoting, say, POLITICO or the ECONOMIST counts as making policy suggestions, I wanted to analyse in depth what the book was actually saying.

To be fair, it says a lot of things, but it also says their opposites, leaving a reader wondering which of those paths is going forwards and which is backwards. The book itself primarily seems to reflect internal struggles with whether policymaking around cyber norms is even possible.

This book would have been ten times better if it had focused on two things:

1. Every person in the book who claimed that yes, geography and cyber were totally connected and therefore all sorts of laws were simple to apply to cyber needs to go take five random IP addresses and geolocate them. Then someone should point out to them how onion routing, VPNs, co-hosting, and content delivery networks work. You can tell the people in this book who don't know what they are talking about because they go on and on about the opinions of "scholars" when what they should be doing is learning how to use traceroute. (A quick sketch of that exercise follows this list.)

2. Stuxnet is the acceptable norm. And this book should have focused very clearly on WHY that is so from a technical perspective, because the answer is very interesting, and not at all consistent with the policies espoused in this book (or by the cyber norms crowd in general) :) .
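For what it's worth, here is the exercise from point 1 as an actual script. It is a minimal sketch assuming a Unix-ish machine with the standard whois and traceroute tools installed (both assumptions); the point is just to watch how little geography actually shows up in the output.

# Point 1, as a script: pick five random public IPs and see what
# "geolocation" actually gives you. Assumes a Unix-like system with
# whois and traceroute installed (both are assumptions).
import ipaddress
import random
import subprocess

def random_public_ip():
    # Keep drawing 32-bit numbers until we hit a globally routable,
    # non-multicast address.
    while True:
        ip = ipaddress.IPv4Address(random.getrandbits(32))
        if ip.is_global and not ip.is_multicast:
            return str(ip)

for ip in (random_public_ip() for _ in range(5)):
    print("=== %s ===" % ip)
    # Registry data is the closest thing to free "geolocation", and it
    # describes the registrant, not the machine.
    subprocess.run(["whois", ip])
    # The route tells you more about peering than about geography.
    subprocess.run(["traceroute", "-n", "-q", "1", ip])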



---------------------------------------------------------------------
My notes are in italics below, along with what I felt were telling excerpts of each chapter.

Chapter 2
The Nature of International Law Cyber Norms
Michael N. Schmitt and Liis Vihul

One of the better chapters, but also one of the most ambivalent. Perhaps because of that.

"With respect to the jus ad bellum, the primary terminological obstacle deals with the use of the word ‘attack’. Article 51 of the UN Charter allows states to use force in self-defence in situations amounting to an ‘armed attack’. Not all hostile cyber operations directed at a state rise to this level. As a general matter (the precise threshold is by no means settled), such operations must result in the destruction of property or injury to persons before qualifying as an armed attack that opens the door to a forceful response, whether kinetic or cyber in nature."
- Is destruction of an entire industrial sector over a decade "destruction of property"? How much data destruction is "destruction of property"?

Finally, a similar IHL-based debate is underway as to whether the term ‘civilian object’ extends to data.61 If so interpreted, a cyber operation designed to destroy civilian data would be prohibited by Article 52 of Additional Protocol I, which bans direct attacks against civilian objects. If not, civilian data is a lawful object of attack, except in those circumstances where its loss might cause physical damage to objects or injury to persons. The critical and unresolved fault line in the debate lies between interpretations that limit the term to entities that are tangible, which is arguably the plain meaning of the term ‘object’, and those based on the argument that in contemporary understanding the ordinary meaning of ‘object’ includes data.62

Where does dropping of mail spools fall, I wonder?

 Therefore, it can be difficult to point to a particular state’s cyber practice to support an argument that a norm has emerged. States, including victim states, may be reticent in revealing their knowledge of a cyber operation, because doing so may disclose capabilities that they deem essential to their security. Undisclosed acts cannot, as a practical matter, amount to state practice contributing to the emergence of customary international law.

From an international security perspective, normative clarity is not always helpful. Two recent examples are illustrative. The relative silence of states in reaction to the 2010 Stuxnet operation against Iranian nuclear enrichment centrifuges does not necessarily indicate that states believe that the operation was lawful (assuming for the sake of analysis that it was launched by other states, since only states can violate the prohibition on the use of force set forth in Article 2(4) of the UN Charter). On the contrary, they may have concluded that the attack violated the prohibition on the use of force because it was not in response to an Iranian armed attack pursuant to the treaty and customary law of self-defence. Yet those states may logically have decided that the operation was nevertheless a sensible means of avoiding a pre-emptive and destabilising kinetic attack against the facilities by Israel.



Considered in concert, these factors render improbable the rapid crystallisation of new customary norms to govern cyberspace. Therefore, the normative impact of customary law on cyber conflict is most likely to take place in the guise of interpretation of existing customary norms, and if so, interpretive dilemmas similar to those affecting treaty interpretation will surface.


Chapter 3
Cyber Law Development and the United States Law of War Manual
Sean Watts
Clearly trying to push an agenda but needs to go back and learn traceroute.

In early treatments of the subject, a viewpoint emerged that might be termed Exceptionalist. According to this view, cyberspace represented an unprecedented novelty entirely unlike other domains previously regulated by international law. Exceptionalists imagined an Internet owned and regulated by no one, over which states could not and should not exert sovereignty. Some Exceptionalist views ran so strong that they issued manifesto-like declarations of independence that defied states to intervene.1 They advanced a view that Professor Kristen Eichensehr aptly termed ‘cyber as sovereign’.

In response to Exceptionalists, a view developed that might be termed Sovereigntist. According to the Sovereigntist view, cyberspace, while novel with respect to the conditions that informed the creation of most existing treaties and customs, remains fully subject to international law. The Sovereigntist view continues to recognise sovereign states as both the stewards and subjects of international law in cyberspace.3 Scholars sometimes refer in this respect to a ‘cybered Westphalian age’.4 

These debates concerning the role of international law in managing cyberspace spawned a cottage industry of legal commentary and scholarship seeking to influence and shape future cyber law. Overwhelmingly resolved in favour of Sovereigntists, these debates were in large part conducted by and between non-state actors such as academics, non-governmental organisations, and think tanks.6 They produced commentary and claims that in both quantitative and qualitative terms have dwarfed the input of sovereign states. 

This is the kind of horrible grandiosity this chapter is full of. 

At minimum, the observation confirms the US viewpoint that a number of important regulatory ambiguities and even voids exist under the current legal framework.
...
 Nothing about the structure, composition or operation of cyberspace convinces the Manual’s authors that cyberspace is a legal void or unregulated by existing law.

This whole chapter was written to draw this rather tenuous conclusion, the reader senses immediately.

What the Manual clarifies with respect to cyber operations and what it leaves unresolved should be understood simply as a snapshot of the state of international law cyber norms as well as an indication of a single state’s limited interest in immediately cultivating more developed and meaningful international norms in that area.

Chapter 4
The International Legal Regulation of State-Sponsored Cyber Espionage
Russell Buchan
This chapter was pure fantasy.  

In light of state practice, however, ‘[t]he argument that cyberspace constitutes a law-free zone is no longer taken seriously’.
Here, again, the lady is protesting quite a lot.

By analogy, I would argue that where a state stores confidential information in servers located in another state or transmits such information through cyber infrastructure located in another state, that information represents ‘a crucial dimension of national sovereignty that presupposes the nation state’ and the right to have that information protected from intrusion flows from the general entitlement of states to have their political integrity respected, that is their sovereignty.
The whole chapter is full of this kind of ridiculous legal rationalization. Don't even bother reading it. Did anyone peer review this book?


Chapter 5
Beyond ‘Quasi-Norms’: The Challenges and Potential of Engaging with Norms in Cyberspace
Toni Erskine and Madeline Carr
Rips up the bombastic and confident tone of the previous chapters by pointing out they are not looking at norms, but just normative aspirations (aka, wishful fucking thinking).

It is not at all surprising to think that agents with particular interests or values will seek to impose rules and codes of conduct on practices that further these interests or values. This is a common, and often laudable, occurrence in discussions of cyberspace. Our very simple point is that these preferred principles and proposed rules are not norms. They are normative aspirations.

This tension between the desire to apply domestic law to digital information that does not remain tethered by geography and the promotion of an online experience that transcends territorial borders is a common framework within which justifications for imposing sovereign control are put forward. What is important here is not exactly how these actors account for their failure to adhere to the principle of de-territorialised data, but the perceived need to do so.

Chapter 6
United Nations Group of Governmental Experts: The Estonian Perspective
Marina Kaljurand

A rather sad chapter of helpless indignation.

A major breakthrough on detailed interpretations of international law applicable in cyberspace was not to be expected. However, any consideration that the Group would be able to bring out and agree upon, in addition to the general declaration of 2013, would be a positive development. Estonia recognised that there are complex issues concerning the application of international law, in particular the ‘thresholds’ for a breach of sovereignty, use of force, aggression or armed attack. However, in our view such questions cannot be set theoretically, but rather on a case-by-case basis and taking into account all relevant facts and circumstances. The absence of definitions of these concepts does not mean the impossibility of application of international law.

The preamble of Resolution 58/199 sets a non-exhaustive list of examples of critical infrastructures, such as those used for the generation, transmission and distribution of energy, air and maritime transport, banking and financial services, e-commerce, water supply, food distribution and public health – and the critical information infrastructures that increasingly interconnect and affect their operations.

Estonia sees the 2015 Report as a remarkable achievement. Given the ideological battle and differences in national ICT capabilities, taking the 2013 consensus further was a difficult, but successfully completed task. In particular, Estonia welcomes attention to norms of responsible state behaviour that, in the absence of shared detailed consensus on how international law applies in cyberspace, is a way forward towards building such understanding.


Chapter 7
Patryk Pawlak

CBM's in all flavors and charts. A good chapter - not too technical, but covers some ground. Worth a read.


Chapter 8
Outer Space
Paul Meyer

The following paragraph sets the flavor of badness for the whole chapter. It is like reading Scientology's Dianetics just because a childhood friend made you.
The third common feature is that while military activity is present in both environments, and has been for several years, these environments have not yet been ‘weaponised’ or transformed into active battle zones. In this context, weaponisation means the general introduction into an environment of offensive arms capable of destroying or damaging objects within that same environment. 

The report recommends that a further GGE be created in 2016, although mere continuation of GGE studies may begin to suffer from diminishing returns. It is evident in the cyber security field that as countries move beyond statements of lofty general principles and begin to address specific measures, divisions of views become more pronounced and concrete outcomes more elusive.

Chapter 9
China
Greg Austin
This chapter avoids the obvious conclusions at all costs.

 A legal norm is the result of diplomatic compromise among the states which crafted it. Moral rectitude is in the eye of the beholder. Thus any privileging of one country’s normative position over that of another state – for example suggesting that the US position is preferred over China’s – is a statement of an individual ethical choice not one of political or legal analysis. 

 One import of this was that the membership of the SCO (all authoritarian states) strongly identified with China’s positions on most issues, especially the balance to be struck between state sovereignty and international openness. 

From the GGE:
  • States should not attack each other’s critical infrastructure for the purpose of damaging it;
  • States should not target each other’s cyber emergency response systems; and
  • States should assist in the investigation of cyber attacks and cyber crime launched from their territories when requested to do so by other states.

This is not a commitment to refrain from all use of military cyber assets against each other. Article 4 only says that each country has an equal right of self-defence in cyberspace against ‘unlawful use or unsanctioned interference in the information resources of the other side, particularly through computer attack’. Neither Russia nor China regards cyber espionage or preparations for war in cyberspace as ‘unlawful’ or ‘unsanctioned’. 

One important change has been in China’s sense of urgency in using such norms to restrain countries like the US from more rapid strengthening of what China sees as the US hegemonic position in cyberspace. 

By September 2015, there are increasing signs that China feels obliged to cooperate in cyberspace rather than risk the fabric of its economic ties. China’s economy is almost certainly not immune from serious damage that could be brought on by a US cyber attack. 

Chapter 10
Technological Integrity and the Role of Industry in Emerging Cyber Norms
Ilias Chantzos and Shireen Alam
An argument against govt control of crypto, written pre-Apple lawsuit, I assume.


Technological integrity is a principle that promotes privacy measures and shuns the prospect of hidden functionality. Law enforcement agencies around the world are battling against widespread encryption and asserting that a lack of backdoors is causing criminal – including terrorist – investigations to ‘go dark’.3 However, it is nearly impossible to have the luxury of strict security together with surveillance, since beyond a certain point the ability to survey erodes security.4 In turn, this means that there remains no option for governments to have spying capabilities without creating this opportunity to criminals. 

Some concrete ways in which the cyber security industry plays a role in influencing cyber norms include: 1) developing the latest technologies and their use; 2) monitoring and informing on the evolution of the threat landscape; 3) engaging in Public Private Partnerships (PPP) and capacity-building efforts; 4) assisting law enforcement in fighting cyber crime; and 5) providing technologies and scalable capabilities to enable countries to implement regulations and public policies.


Government agencies at all levels should form meaningful partnerships with the private sector. A single player does not have all the answers, resources, skills, assets or scalable capabilities to counter rapidly growing and evolving cyber threats. Therefore, it is in the interests of all parties to foster different collaboration models that enable the exchange of information, as well as the dissemination of expertise and capacity-building. 

Missing is the idea that governments are often the adversary. :)


Chapter 11
Microsoft whinging.



Friday, March 18, 2016

"I am a Chinese operator"

This piece was originally posted to the DailyDave mailing list (which you should subscribe to!) but I am including it below since it illustrates the concept better than my post here:

So here I am as a Chinese tool developer and operator on one of the
lesser known, but higher skills teams, sitting at my desk drinking
Starbucks, uber-ironically, as I like to do.  We work for the PLA out
of an office in Shanghai, but we don't have a catchy name. Just the
world's most boring cover company that in theory does IT Support for the
local businesses, but in reality does anything but.

I'm finishing up a heap overflow in Flash, technically an integer
overflow, that leads to heap corruption, if you must know. The PLA group
I work for has given me a few million 32-bit key numbers, which
are stored on a laptop that has never been connected to any network, and
is itself stored in a safe in the back room. I open it up, and run a
quick script to find a 32-bit number from the set that has no bad bytes
in it, and also is a NOP for the purposes of this exploit.

I use that as the fill-string for my exploit, and then for my Javascript
obfuscator pick another one of the numbers and use that as my XOR key.
The third one I use inside the shellcode itself. I mark these three
numbers as used in a file so I don't reuse them later. All my other
variable names are unrelated 32-bit numbers, because why not? But this
is a heap overflow, and not an MFC application, so I don't have room to
sign giant cryptographically secure blobs of random numbers with a
private key of any sort.

What I'm hacking today is a concrete company. They compete with the
Chinese concrete companies in many parts of the world, but that's not
the point. They also supply the US Military's Asian bases. So while I
will be pulling down their entire Exchange server, once I get into their
network, which is basically a foregone conclusion, I'm not here for
industrial espionage purposes. Likewise, knowing how much they are
selling goes into our larger economic reports, which are used to make
decisions by the State in terms of interest rates and that sort of
thing. Stuff above my level.

I fire my exploit off at my target three times, to three different
people. One of them succeeds, and I've made my coffee money for the day
(and a bunch more, let's be honest, this is a good gig). I have been
told that if I give any email from this target to my friend who works in
construction, I will of course be fired.

But one of them gets silently caught, and Mandiant includes it in a
report, along with a long detailed description about my trojan, which I
stole from a Russian criminal group. Later, because that concrete
company has been losing a lot of business in Asia, a DHS official is
asked if this intrusion is a potential violation of our agreement. He
looks at the very detailed internal Mandiant report on the initial
intrusion, and runs each interesting constant in the report through his
oracle, forwards and backwards, and he says, "I cannot say whether or
not it is the Chinese or the Russians, but they are CLAIMING to follow
our norms process, at least."
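As an aside, the "quick script" the operator runs above is genuinely tiny. Here is a hedged sketch of what it might look like; the key file, the bad-byte list, and the used-numbers file are all invented for illustration, and the NOP check is exploit-specific so it is left out.

# A guess at the operator's "quick script": pick an unused 32-bit
# constant from the pre-shared set that contains no bad bytes for this
# exploit, then mark it as used. File names and formats are made up.
import struct
from pathlib import Path

BAD_BYTES = {0x00, 0x0a, 0x0d, 0x20, 0x22, 0x27}  # NUL, newlines, space, quotes

def is_clean(value):
    # True if none of the four little-endian bytes is a bad byte.
    return all(b not in BAD_BYTES for b in struct.pack("<I", value))

def pick_constant(key_path="group_keys.txt", used_path="used_keys.txt"):
    keys = [int(line, 16) for line in Path(key_path).read_text().split()]
    used_file = Path(used_path)
    used = set(used_file.read_text().split()) if used_file.exists() else set()
    for value in keys:
        if "%08x" % value not in used and is_clean(value):
            # Record it so the same constant is never reused.
            with used_file.open("a") as f:
                f.write("%08x\n" % value)
            return value
    raise RuntimeError("no clean, unused constants left in the set")

if __name__ == "__main__":
    print("fill/XOR/shellcode constant: 0x%08x" % pick_constant())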

We're the Saudi Arabia of Copyright

American copyright law protects images that are as old as this mouse, for no discernible reason.

"Do not weep; do not wax indignant. Understand." – Baruch Spinoza

The key problem with policy work in this area has been that the underlying behavior of cyber technology is very weird. When you try to understand Quantum Physics, for example, you run into the massive problem that electrons and photons don't act at all like anything you can see in the real world. A "Wave-Particle" that is a little package of probability is not like a ball or a wave or really anything. The same thing is true in cyber norms. What we think of as two different things turn out to be the same thing a lot of the time.

For example, censorship and copyright are linked as closely as electrons and photons, as are security monitoring and surveillance. Efforts to outlaw one using technology immediately run into problems because there is no technical difference between them. As Ann Ganzer learned this year, the same is true with penetration testing and "intrusion software".

In other words: If you think Saudi Arabia's desire to protect their Creator from being insulted, and America's desire to protect "content creators" from being "stolen from" are any different, you are going to be disappointed with how the Internet works. I can't tell you how many times I've been in a very high level policy meeting and heard from otherwise intelligent people "Why can't we have a driver's license for using the Internet?"

A different America would take a strong and principled stand against censorship in general. But we can't. We are the Saudi Arabia of copyright. More than any other nation we rely on what is essentially a global regime of censorship, full of legal and technological tools, to make our economy run. And it produces an obvious blind-spot in our policy work to the objective viewer. It is no accident that as Secretary Clinton was giving speeches about Internet Freedom, she was also fighting as hard as possible to wipe Wikileaks off the Internet with the law.

Corporations (Google/MS/Apple in particular) have not missed this obvious facet of our policy work. They can see that the United States Government and the Chinese Government are directly aligned against them on this issue (and many similar ones). To illustrate it, a segment from the book released yesterday by NATO on cyber norms (page 106) is below:

President Xi engaged with a different conception of freedom to that used by Neelie Kroes. He linked freedom to order by saying that ‘order is the guarantee of freedom’ and therefore, it is necessary to respect sovereign law in cyberspace ‘as it will help protect the legitimate rights and interests of all internet users’.58 In a similar effort to justify exercising domestic law (and thereby infringing the norm of de-territorialised data), US Senator Patrick Leahy made the following comments when introducing a bill designed to prevent ‘foreign-owned and operated’ websites from facilitating intellectual property theft: ‘We cannot excuse the behaviour because it happens online and the owners operate overseas. The Internet needs to be free – not lawless’.

And when I last talked to a Nation State policy-maker (not US), censorship was first on the list of things they wanted to enable on their national infrastructure. "How do we block ISIS from Forums, Twitter posts, etc.?" is a hugely desired ticket item. But what that ticket item requires is more expensive, both in money and Freedom, than any country can afford.


Monday, March 14, 2016

Cyber Norms: The futility of blacklisting critical infrastructure

First I want to quote from an email here: """
Cyber security policy is not a greenfield space!

I did post these to regs@ in December and am guessing you still have not read them. Of interest : Section III, ‘Norms rules, and principles for the responsible behaviour of States’.[1] China and Russia in fact co-authored a Code of Conduct in support of the larger report.[2]
________
[1] pp. 6-7, http://www.un.org/ga/search/view_doc.asp?symbol=A/70/174
[2] pp. 4-6, http://www.un.org/ga/search/view_doc.asp?symbol=A/69/723
"""

If you read those two papers you will see the UN doing their usual "let's all be friends in cyber" ranting about how great it would be if all States avoided conflict on any level. There are a few recurring themes in cyber policy work at this level:

  • Please don't hack critical infrastructure
  • Please control hacking from your own territory, for the love of all that is holy
  • Please censor SOME stuff, which we can all agree on (?) but not TOO much stuff (because of human rights)
  • "Confidence building is important" (but only between Nation States, not with companies or communities)

The next set of blogposts here will discuss all four of these issues, starting with critical infrastructure, and how our previous efforts in the area are doomed to failure and why.


To be fair, previous cyber norms policy has worked at some level. We have a LOT of cooperation between States when it comes to criminal prosecutions and gangs (Eastern Europe and Russia in particular, surprisingly). We do have people to call in Russia who will tell us that the JPMorgan hack "was not them", even if it takes several months. The law enforcement side of cyber norms is far, far ahead of the war and intelligence side.

Critical Infrastructure


That seal is a NY Dam waiting to happen.


First of all, if you've done 15 years of scoping penetration tests or hacking Nation States, you know the idea of blacklisting "critical infrastructure" is bullshit.

A simple truth about the cyber domain: My goal is not to hack your critical infrastructure. My goal is to figure out which infrastructure on your network is critical that you didn't think was critical, and hack that. The stuff you already knew was critical is protected by the NSA. The stuff you didn't think was critical is defended by Symantec or Microsoft Defender. 

Likewise, much as there is no difference technically between penetration testing software and hacking software, there is no difference technically between allowable pre-positioning and intelligence gathering and "trespassing" on critical infrastructure systems.

One idea they have not tried, of course, is placing signed tokens on machines they feel are too sensitive to have Iranian implants on them, because an implant there would offer the chance of critical failure. Then, in theory, an Iranian hacker could check for those signatures before putting the rootkit on the box, right?
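To make that mental exercise concrete: assume the token is just an HMAC over the machine's hostname under a key shared inside the norms group. The path, the key handling, and the token format below are all invented for illustration - this is a sketch of the idea, not anyone's actual scheme.

# Sketch of the "signed do-not-implant token" idea: before installing
# anything, check whether the box carries a valid token issued by the
# norms group. Path, key, and format are assumptions.
import hashlib
import hmac
import socket
from pathlib import Path

GROUP_KEY = bytes.fromhex("00" * 32)          # placeholder shared key
TOKEN_PATH = Path("/etc/norms_group.token")   # hypothetical location

def machine_is_off_limits():
    if not TOKEN_PATH.exists():
        return False
    token = bytes.fromhex(TOKEN_PATH.read_text().strip())
    expected = hmac.new(GROUP_KEY, socket.gethostname().encode(),
                        hashlib.sha256).digest()
    # A valid, host-bound token means "stay off this box".
    return hmac.compare_digest(token, expected)

if __name__ == "__main__":
    print("off limits" if machine_is_off_limits() else "fair game, apparently")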

Going through that mental exercise demonstrates the difficulty of the goal of blacklisting critical infrastructure. Networks are fully connected things by their very nature. And the data layer is even more connected than the network layer.

And that will let us segue in to "Controlling your own territory from the cyber norms perspective", which will be tomorrow's policy blogpost. :)


Friday, March 11, 2016

The CFR 0day Meeting

The San Francisco Style...


Adam Segal and Michael Levy held a Council on Foreign Relations workshop on "Confronting the Zero-Day cyber-security challenge" last week in San Francisco while RSA was on. CFR is a pretty big deal, for those of you who don't spin in the Government policy circles.

I was not attending RSA, but I flew in for the meeting anyway. The meeting also had a lot of the usual faces from Industry and Government that you would see at the NTIA meetings or the Wassenaar meetings. It's a giant traveling information security policy circus - but largely because everyone is afraid of a Wassenaar-level fuckup. The Shadow of that policy mess hung over this meeting like a breath of hot, smoggy DC air.

In any case, I attend the policy meetings so you don't have to!

Let me give you the summary of the conclusions: there is no way to have a coherent diplomatic stance on 0days; software liability is a hard option; and we don't have any of the numbers, metrics, or data that we would need to make informed policy decisions in this area - we are not even sure HOW or WHERE to get those numbers, or what exactly they would measure.

That's not my conclusion, that was basically the conclusion of the group as a whole. It is a much more honest opinion than the CFR wanted, as far as I can tell.

There was one question which I wanted to address again though. And it was this:

Q: Some parts of what you did at the NSA were classified, but let's say you worked on a 0day, how do you know what parts of that were classified? Is the bug class classified? What level of abstraction is actually the secret part? What do you have to get pre-publication reviewed? Anything in exploitation? Anything in Windows exploitation? Anything just on that particular service? Anything that is exactly that bug?

A: We all just use our judgement on that. 

That's not the answer I gave at the time, but it was a much better question than I realized in the haze of jet-lag. I get this question over and over again - "How do you define 0day?". "Surely there is some clear, industry-accepted, clear-as-day idea of what a "vulnerability" is?", the uber-powerful lawyer asked me just yesterday.

But there ISN'T. The only thing anyone knows about 0days is that like porn, they know it when they don't see it. The question this person asked about classification was quite subtly probing the root of that problem.


I'll be honest, I don't recommend the book. I read the whole thing on the plane and like all recent "cyber war history" books it is good if you've never heard of the subject before and need to learn about it, but all the details are at the wrong resolution, because nobody who was really involved, and not already retired, will actually talk. Also, what author DOES NOT KNOW ABOUT FIGHT CLUB?



The All Seeing Eye

Sauron, where did I leave my phone?
So as today's annoyingly hypothetical question let me ask you this: If Google or Apple wanted to disclose our intelligence efforts in China, say, how much damage do you think they could do?

Is the next @Snowden truly someone who works in the intelligence community, or are they someone with access to enough data and computation that they can do massive damage without ever having "clearance"?

Look, all I'm saying is that there are a LOT of reasons for the US IC not to go to war with the only companies on Earth that have a working AI.

Tuesday, March 8, 2016

A technical scheme for "watermarking" intrusions

A Sample Scenario

A commercial security company finds a trojan on one of the servers used by Turkey Point Nuclear Generating Station. While none of the management machinery is compromised (and in fact is not even computerized), the server is responsible both for holding sensitive information and for conducting other sensitive operations, and an analysis by a point team deployed from a National Lab indicates that, had those operations been compromised, there was a possibility of power loss from Turkey Point, although not a nuclear release.

What our policy-makers do in this event is often dictated by whether they know, for a fact, that the trojan was placed there by a participating nation state following acceptable norms, or whether it is potentially the work of a rogue nation or criminal group. Sometimes these situations will matter in the future to the point of "evacuate large cities" versus "clean up and forget about it". Our technical and political protocols as represented in this post are a first-draft attempt to provide an initial, reasonable step towards a solution.

Some other solutions

One other major idea people want to implement is, of course, "no go zones" for intrusion. This is harder than it looks. Most important systems are dual use - collecting intelligence about a power plant is indistinguishable from being in a position to DoS it. So we back down to having the norm of "taking all due care" when on a sensitive system. This is nearly impossible to audit or manage. So for this and other reasons not stated here, we recommend instead that a system of "anonymous Red Phones" be set up.

The Value of Multilateral as Opposed to Bilateral Norms

Assuming you have a perfect way to do the watermarking as described below, if you only have two members of the Norms Group, detecting the watermark provides attribution. Therefore having many members in the norms group is ideal.

Likewise, a group-level anonymous "red phone" can allow for back and forth over a contested issue without running into the attribution issue.



Real World Use Cases


This "international incident" related to the war in Ukraine was in fact, nothing of the sort.


A basic background in watermarking


Every watermarking specialist has spent hours looking at this image and can't even see it anymore, just patterns of high and low frequency data.

Steganography and Watermarking are very similar, but watermarking has one clear major difference, especially when used, as most people want to, to fingerprint movie files or images so you can tell which customer you sent them to.

This is the normal conceptual format for watermarking, which is how visually inspectable watermarks work. This works fine as the smart people at BangBus know very well.


But customers hate seeing watermarks all over the place, and of course visual watermarks are subject to tampering and removal using that advanced "crop" tool in Windows Paint (or more sophisticated techniques I won't go into). So what super smart PhD people do is make an invisible watermark, using this basic format:


And lots of people do really good mathematical work making watermarks (all of which boils down to hiding information in the hair and feathers of Lena). After several pages of math doing statistical modeling, you can add your watermark to compressed data and remain basically invisible to the human eye, while still being recoverable after display or just from the compressed data stream itself (a toy version of the idea is sketched below). You'll note that all schemes like this avoid using EXIF or other tag data parts of the image format, because you can't get a PhD by doing the obvious solution. However, they all have one simple problem, which is that they all fail in the exact same way:



This is just how information theory works, and no amount of PhDing can solve it, in my opinion (and hopefully in yours). For this reason, invisible watermarks historically only work when the world does not know you are doing them. We are not so lucky in our goals (dire music goes here).
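For the curious, here is the deliberately naive version of "hide the mark in the pixels, not the metadata" - a least-significant-bit embed on a fake pixel buffer. Real schemes do the hiding in the frequency domain so the mark survives recompression, but the failure mode above applies to both once the adversary knows what to look for.

# Toy invisible watermark: hide a 32-bit mark in the low bit of the
# first 32 pixel values, then read it back. This is the concept only;
# it does not survive recompression or deliberate tampering.

def embed(pixels, mark):
    out = list(pixels)
    for i in range(32):
        bit = (mark >> i) & 1
        out[i] = (out[i] & ~1) | bit   # overwrite the least significant bit
    return out

def extract(pixels):
    return sum((pixels[i] & 1) << i for i in range(32))

if __name__ == "__main__":
    lena = [128] * 64                  # stand-in for real pixel data
    marked = embed(lena, 0xC0FFEE42)
    assert extract(marked) == 0xC0FFEE42
    # One pass of an adversary who knows the scheme: flatten the low
    # bits and the mark is gone, which is the failure mode above.
    print(hex(extract([p & ~1 for p in marked])))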

What are our design goals and constraints

One key thing is that we don't need to watermark software in particular, but intrusions in general. And intrusions are large, complex things. Some of them involve exploits, some involve software implants ("Trojans"), and some involve hardware implants. Many involve all three, and each of those three components has many sub-components, all of which we expect the skilled intrusion detection team to have access to when they conduct their analysis - but not necessarily immediately.

Significant intrusions get analyzed by teams of experts when they are discovered. But of course, signs of intrusions are being looked for by automated systems all the time. Our goal is to create a system that is detectable by a team of experts, but not by an automated system. An extremely robust system will be detectable just from an incident response report, without any access to the raw intrusion data at all, which has some political advantages.

Our particular technical options

The simplest way to indicate that we are a "nation-state" and not a "criminal group" is to share a private key and embed a cryptographically signed random data blob within as many sections of your intrusion chain as possible. The more places you sign, the more likely you are to be "validatable" by the Nation State Incident Response team (which also has the private key).

Of course, to "hide" this from automated detection techniques, you could make both the Blob and the signature something computed by code that, in some cases, is never even run during normal use.

Encryption routines are common inside implants. Likewise, most implants gather data from the machines they are installed on, for use as a key to encryption routines. This is valuable to them because it makes incident response harder (even INNUENDO does this). This is valuable to us because it means that a signature cannot be stolen from one trojan, and added to another trojan on a different machine.
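A minimal sketch of that combination, assuming the group shares a single key and uses an HMAC as the "signature" (matching the shared-key description above), bound to data gathered from the host - hostname and MAC address here, chosen purely for illustration - so the blob is worthless if lifted onto another machine:

# Sketch of a host-bound mark. The implant embeds
# HMAC(group_key, nonce + host_fingerprint) somewhere in its footprint;
# the incident responders, who also hold the group key, recompute it on
# the compromised machine. Key and fingerprint choice are assumptions.
import hashlib
import hmac
import os
import socket
import uuid

GROUP_KEY = bytes.fromhex("11" * 32)   # placeholder shared group key

def host_fingerprint():
    # Anything stable and machine-specific works; hostname + MAC here.
    return socket.gethostname().encode() + uuid.getnode().to_bytes(6, "big")

def make_mark():
    # Implant side: a random nonce plus a host-bound MAC over it.
    nonce = os.urandom(16)
    tag = hmac.new(GROUP_KEY, nonce + host_fingerprint(),
                   hashlib.sha256).digest()
    return nonce, tag

def verify_mark(nonce, tag):
    # Responder side, run on the same machine: does the mark check out?
    expected = hmac.new(GROUP_KEY, nonce + host_fingerprint(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

if __name__ == "__main__":
    nonce, tag = make_mark()
    print("validates on this host:", verify_mark(nonce, tag))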

Imagine an even tinier protocol, where you simply decide on a large set of 32-bit numbers, and if you see any three of them inside the analysis of your trojan, it is part of the Group. There is plenty of cover data to hide these numbers in. They could be register values computed during an initialization operation, for example. Or even text included within the program as a "forgotten debug variable". This kind of protocol would be more vulnerable to a theoretical "automated detection", but is more resistant to other kinds of analysis (no way to steal a signature if you can't figure out what part of the code is the signature). Likewise, this scheme operates without needing additional information from the host systems. Another benefit to this kind of scheme is that it is applicable to "data in motion" as well as data at rest (aka, Exploits).
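That tinier protocol is small enough to sketch in full. Assume a pre-shared set of 32-bit constants, pull every 32-bit-looking value out of a written incident report, and call the intrusion "in the Group" if three or more match; the constants and the threshold below are placeholders, not real values.

# The "any three constants from the set" protocol. The constants and
# the report parsing are placeholders; the point is that the check can
# run against a written incident report alone.
import re

GROUP_CONSTANTS = {0xDEAD10CC, 0x8BADF00D, 0xFEEDFACE, 0x0D15EA5E}
THRESHOLD = 3

def constants_in_report(report_text):
    # Grab anything that looks like a 32-bit hex constant in the write-up.
    return {int(m, 16) for m in re.findall(r"0x[0-9a-fA-F]{8}", report_text)}

def claimed_by_group(report_text):
    return len(constants_in_report(report_text) & GROUP_CONSTANTS) >= THRESHOLD

if __name__ == "__main__":
    sample = ("XOR key 0x8badf00d, fill string 0xdead10cc, "
              "forgotten debug variable 0x0d15ea5e")
    print(claimed_by_group(sample))    # True: three markers present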

The end result may be a multi-layered scheme, with each layer operating at a different level of confidence and security.

In the end you get a "uniform" for your intrusion efforts, but one that has camouflage and is not transferable to criminal groups.

Social Protocol Design


In addition to a technical design, we also need to decide when and how things such as keys will be distributed, what a revocation looks like, what a "challenge" looks like in case you think someone is overstepping the acceptable norms or failing to sign their work, and so on. I will leave all thoughts of this to another paper, as it largely depends on having a working technical solution first. But this protocol will initially offer at least the possibility of an anonymous group "Red Phone" to avoid crisis when we need it most. A worthy goal?




Monday, March 7, 2016

The Cyber Domain is Different - Part 2

One major difference between cyber and how we handle Nuclear/Chem/Bio is in the origin. If you are a self-defined "policy person" it might be wise to ask yourself how the below group of people, all of them PhD's essential to the beginnings of the Nuclear effort, differed from the people that you know are smart about strategy in the cyber domain:

If you gathered the people essential to building a cyber war doctrine for the US, none of them would have PhDs or teaching positions at Universities. A pretty big difference!

This is a difference extremely under-rated in Government policy circles, as I've seen first hand.



Tuesday, March 1, 2016

How are 90's hackers relevant to policy people, anyways?

TL;DR: All those 90's hackers have built things that warp the Internet in strategically interesting ways.


I had a comment from one of the policy experts who reads this blog. She asked "That was interesting, but how is that relevant to policy people?"

That's a good question! One quick answer to that is that analyzing the "birth" of cyber is a good way to understand why Cyber is not the same as Nuclear/Bio/Chem when it comes to regulation.

The first thing I want to help policy-peeps understand is that a cyber weapon is anything that changes the terrain of cyberspace.


  • This can be by allowing you to offer information without it being blocked by your adversary: think Wikileaks, Pirate Bay, or Tor Servers 
  • It can also be something that allows you to access confidential information (think NSA's QUANTUM)
  • Or it can be something that offers situational awareness (like Shodan or a rack-mount of Qualys servers with a team of people that really know how to use it)
  • Or a program that offers a hardware implant for every router on the market


But in general, real "Cyber Weapons" are very large programs - staffed by ten people minimum each. And they change the fundamental way the network works, as opposed to having a list of features like a commercial product.

And every one of those groups from the 90's knows that, has been in places where they built them, and many of those people continue to build them to this day. This is one of the differences between, say, Nuclear and Cyber. Whereas Nuclear was largely started in one place, Cyber started all over the world at about the same time. Remember that it took a letter from Einstein himself to start the Manhattan Project, because only he understood the ramifications of the theories and had the political push to make it happen.

But it is not a mistake that ten years ago there was a huge exodus of offensive talent from the intelligence community to Microsoft and Google and now they are at the forefront of the strategic war. It is not a mistake that the people involved in those 90's hacker groups have a different understanding of the possibilities of cyber.

And so WhatsApp has strong end-to-end crypto, Napster offered files that were hard to remove from the net, and Wikileaks and Pirate Bay still exist even after massive US Government attempts to blot them from the Internet. What do you think the members of "Hacked By Owls" did after they were done defacing things? Lots.

I could go on, but think as a policy person to yourself: How would the world be different from a policy perspective if every major country on Earth had nuclear technology at first, instead of just the US? Too often the policy world asks itself "How is Cyber similar to Nuclear Weapons?" instead of asking how they are different.

And if you see something in the news that changes the Internet, anything really, ask yourself where it came from. Chances are one of those 90's hacker teams is behind it.