Tuesday, July 16, 2019

Hermaeus Mora

After the war, Robert Oppenheimer remarked that the physicists involved in the Manhattan Project had "known sin". Von Neumann's response was that "sometimes someone confesses a sin in order to take credit for it."
What is an exploit if not a daedric word of power?

There's no written history of the start of the US's cyber capabilities, although there are pretend histories out there, filled with data randomly leaked to reporters about programs that mattered less than people think. Perhaps this will change in fifty or so years. I think the early nuclear community was best analyzed in this book (The Making of the Atomic Bomb) by Richard Rhodes. And of course, the early cryptanalytic community has many good books, but we recently reviewed this one on the blog, and it came to the obvious conclusion that the success of the Allies in cryptography depended primarily on the kind of talent the Germans had exiled or repelled.

This blog is a strategy blog, which means occasional drill-downs into the technical details of policy or technology, but ideally, we want to look at the geopolitical trends that give us the ability to predict the future. That means knowing the history.

But just because there's no written history does not mean there's no history. And one thing I know, without needing to send my blogpost through pre-pub review, is that the early history of Allied Cyber efforts mirrors that of Nuclear and Crypto in that the majority of it was based on the pivotal work of first generation immigrants.

Wednesday, July 10, 2019

International Humanitarian Law and Weird Machines

When you read the International Humanitarian Law work (or export control law, for that matter) in the area of cyber war and cyber-almost-war, you get the feeling they are stuck in the 1940s, but they are being very precise about it. Part of the difficulty of computers is that even from the very beginning everything was shrouded in the blackest of classified mist, to the point where the Brits didn't announce they had cracked Enigma with the earliest computers for thirty years, and then when they did, a lot of Germans did not believe them.

This means that after the war, Turing and others (c.f. the Manhattan Project, which was computationally expensive just like codebreaking) were left writing about computation engines they KNEW WORKED and KNEW WERE IMPORTANT but couldn't say why. And "computation engines" really is the better term for electromechanical devices programmed by moving switches and cables around, until von Neumann and others designed architectures (and machines) with what we now know as RAM.

One Memory to Hold Them All

The key thing in this architecture is that your code is also data in a very practical way. To take it one step further, both code and data map into a state-space, and moving the machine into the weirder parts of that state-space that do what the attacker wants is called "exploitation" (moving through state-spaces does not necessarily have anything to do with executing native code).
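For readers who want the "code is data" point made tangible, here is a minimal sketch in Python. It's an analogy only (the state-space argument applies to any von Neumann machine, not to Python specifically): a function's instructions are an ordinary bytes object, and ordinary data can be compiled into executable code.

```python
def add(a, b):
    return a + b

# The function's instructions are literally a bytes object -- plain data
# sitting in the same memory model as everything else.
raw = add.__code__.co_code
print(type(raw))  # <class 'bytes'>

# And arbitrary data (here, a string) can be compiled into a code object
# and executed -- data becoming code.
source = "result = 6 * 7"
namespace = {}
exec(compile(source, "<data>", "exec"), namespace)
print(namespace["result"])  # 42
```

The attacker's version of this trick is just less polite: instead of calling `compile()`, they steer the machine's state into a region where their data is already treated as instructions.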

In their recent paper, you can see Mike Schmitt and Jeff Biller pull legal theory towards this reality. By recasting cyber capabilities as "communication of code," and hence indirect action, they cut the cord to a lot of international law (some of it from 1907) that was obviously malformed when applied to iOS exploits.

This is a pretty major step for Mike Schmitt in particular, as the primary defender of the "We can make existing international law fit cyber if we just STRETCH IT LIKE SO" school of thought.  In that sense, the paper is well worth a read even if we told you so.


As a bonus, here is the Wassenaar 4.D.4 export control language for "Intrusion Software" binary-simplified and graphed. Notice how the two items that define it are "can extract or modify data" OR "modification of execution to supply external instructions"? All computer programs do that. Essentially the only technical specification that makes any sense is "avoids AV," aka "covertness." It's this kind of regulatory nonsense that is more pain than it could possibly ever be worth, but which is generated automatically when the law and regulatory communities are stuck in the 40s.
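To see how loose those two criteria are, consider this deliberately mundane sketch (the function and filenames are hypothetical, invented for illustration): a bog-standard config-updater-plus-plugin-loader that "modifies data" and "supplies external instructions," and would therefore seem to match the definition as written.

```python
import json
import pathlib
import tempfile

def run_update(config_path, plugin_source):
    # (1) "extraction or modification of data": rewrite a file on the system.
    path = pathlib.Path(config_path)
    path.write_text(json.dumps({"updated": True}))

    # (2) "modification of the standard execution path ... to allow the
    # execution of externally provided instructions": run code supplied
    # from outside the program.
    namespace = {}
    exec(plugin_source, namespace)
    return json.loads(path.read_text()), namespace.get("result")

# Every auto-updater and plugin system on Earth looks roughly like this.
with tempfile.TemporaryDirectory() as d:
    config, result = run_update(d + "/settings.json",
                                "result = 'hello from plugin'")
    print(config, result)  # {'updated': True} hello from plugin
```

Nothing in the snippet is covert, malicious, or export-worthy, which is exactly the problem: the definition captures behavior, and the behavior is universal.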

Monday, July 8, 2019

Book Review: Delusions of Intelligence, R.A. RATCLIFF

So one of my friends told me this story about how he went to an introductory meeting once where a bunch of Americans were presenting to his team (non-Americans) as part of a joint project. They went down the list, with various people talking about their respective responsibilities for helping with various parts of the project. And he turned to his coworker and said, "They all seem very smart and very nice, but to be honest, I thought the Americans would be helping more. This is a super high priority project - we have almost fifty of our best people on it full time, and there's only a dozen or so they could spare?" And his coworker looked at him for a second and said, "Yes, but this is only the liaison team. They aren't doing the work. Each of these people is responsible for coordinating an entire building in Herndon full of people with our efforts."

On page 76, Delusions of Intelligence says "Hut 6 alone had 1300 people working at it, and the total people at Bletchley Park was around 10k, while the US Army Signal Security Agency went from 331 to 26k in the same period." But this is the only mention of force strength I can find in the book. And some quick Googling while losing every single comp game in Overwatch this weekend was not able to turn up anything more specific with regard to the ratio of cryptologic efforts between countries in WWII.

It's relevant to the book's conclusions as well.  To paraphrase:
1. The Germans were hopelessly fractured with their cryptologic efforts vs. a unified and centralized British and Allied approach
2. Early success and high-level support (Churchill) allowed for investment in "big projects" on the Allied side to attack difficult problems which the Germans assumed were impossible (and so did not even attempt)
3. Assuming that cracking mechanical rotor crypto was impossible created a psychological barrier in the Germans that made even major OPSEC lapses on the part of the Allies something to be rationalized away
4. The Germans were obsessed with short term tactical results, and overwhelmed with processing even those. And they assumed mechanical (computational) efforts to aid them would not be fruitful, since "cryptography is done with the human mind".
5. The German war effort was entirely military-minded, whereas the Brits had a fluid "Civilian" and "Civilians in whatever uniform made the most sense at the time" approach.

Some of this was said best in Neal Stephenson's Cryptonomicon where he has a character point out that the war was essentially stamped out in the Bletchley Park Huts, or that for the Japanese to tell their superiors that their codes were broken would be so dishonorable that it was impossible to believe, even if the results of it were obvious.

And the corollaries to American cyber efforts (fractured, with maximum infighting) are hard to ignore. The historical picture of a German cipher network getting partial upgrades over time, which if done all at once would have knocked the Allied efforts out, but done piecemeal were ineffective, can only remind you of similar efforts to modernize USG networks and systems.

To be fair, the book heavily undersells resource constraints and "killing all the smart people seems to be bad for our cryptography team" as causal. 

In any case, ironically this book is only available in paper form, but I highly recommend picking it up for a flight.

Wednesday, July 3, 2019

Cybernetics and American Conceptual Failure

The cybernetics diagram Chris is referring to.

<dave> arg
<dave> I failed at using screen
<dave> it's like hacker 101
<dave> I feel bad
<dave> i pretty much used to ONLY use computers through click-scripts
<dave> which is part of why I never customized anything
<bas> dave: emacs has opened my mind to the non-ascetic tooling lifestyle
<bas> now I'm like "understand? yes please, symbols? yes please, visualization? yes please, hover tooltips? yes please"
<chris> i'll use dave as an example of why americans don't understand cybernetics
<bas> i think you can use dave as an example of why americans don't understand lots of things
<bas> :)
<bas> like "why doesn't anyone care about ants!?"
<chris> So cybernetics spread all over the Soviet Union very rapidly, and in Czechoslovakia, whereas what spread here was systems theory instead of cybernetics.
<chris> SB: How did that happen? It seems like something went kind of awry.
<chris> M: Americans like mechanical machines.
<chris> B: They like tools.
<chris> SB: Material tools more than conceptual tools.
<chris> B: No, because conceptual tools aren’t conceptual tools in America, they’re not part of you.
<chris> that interview is full of treasures
<bas> heh
<chris> so what they're saying is that there is a tendency here for ppl (example is engineers) to focus on the first box
<chris> but not model how the tools they use shape the thoughts they think
<chris> because if you understand cybernetics, then you *do* want agency in that process
<chris> meaning you want to direct your own evolution
<chris> by shaping the tools that end up shaping you
<chris> thus emacs
<bas> feedback loops
<chris> bas: i need to find more ways to link emacs to *
<bas> iterative improvement of workflow and tooling
<bas> in a loop
<miguel> weren't both lisp and emacs created by americans?
<chris> so this is the link: http://www.oikos.org/forgod.htm, the diagram is the crux of the matter
<chris> miguel: there are exceptions to every rule
<dave> chris: Can I paste that whole conversation to my cybersec blog?
<dave> because it's funny
<dave> also: I don't see how I'm the example!
<chris> well example as in you choose not to enter the cybernetic process
<dave> henry, are you sure they NEVER send you any pointers used as unique identifiers?
<dave> I choose not to enter the cybernetic process?
<dave> in terms of, I do not shape my tools, but rather let them shape me?
<chris> by not buying into the emacs paradigm
<dave> ah
<chris> yeh

Saturday, June 15, 2019

Bytes, Bombs and Spies - A guest review

After reading his book review of ‘Bytes, Bombs, and Spies’, Dave was kind enough to offer me a guest blog post to share my own thoughts. First, I think it helps to understand what this book is. It’s not exactly another cyber research/policy book. It’s a look at ‘The strategic dimensions of offensive cyber operations’ through ‘a collection of essays’.

The reason I think this is important to note is that many of its authors contradict one another, whether they intended to or not. Because this book is about offense, I feel obligated to state the obvious: in offense, the details matter. In fact, they’re everything. It’s ‘you can write values k through kN but not beyond 258 bytes from the end of the struct, and the Nth position in your overwrite must have bits 1-4 set’ levels of accuracy or it just won’t work.
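The quoted constraint is rhetorical, but the point (that offense lives or dies on byte-exact details) can be made concrete. Below is a toy sketch; the specific numbers (the 258-byte limit, bits 1-4) are lifted from the sentence above purely for illustration and reflect no real target.

```python
def write_is_viable(offset_from_struct_end, nth_byte):
    """Toy model of an exploit-primitive constraint check."""
    # The write must not land beyond 258 bytes past the end of the struct...
    if offset_from_struct_end > 258:
        return False
    # ...and the byte at position N must have bits 1-4 set
    # (mask 0b00011110, counting bit 0 as least significant).
    required = 0b00011110
    return (nth_byte & required) == required

print(write_is_viable(100, 0x1E))  # True
print(write_is_viable(300, 0x1E))  # False: past the 258-byte limit
print(write_is_viable(100, 0x01))  # False: required bits not set
```

Get any one of these conditions wrong and the capability doesn't degrade gracefully; it simply doesn't work, which is why hand-waving about offense at the architectural level reads so strangely to practitioners.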

I tend to judge books like this based on how many new things I learned, not how many flaws I can find. In that regard this book is fantastic. Many of its authors are people I follow on Twitter and aggressively consume anything they write. They come from various academic, .mil, and .gov backgrounds. But there are also things in this book that give me cause for concern.

Anytime one of the book's essays ventures from abstract thinking into concrete implementation, an experienced technical reader will cringe. Reading terms like ‘the {network, mail server}’ or ‘sysadmins’ makes me think the author did not sit down with an experienced SRE to understand how the cybers work in 2019. The way these simplistic architectures are described will make you nostalgic for a simpler time, back when you were reading that 2001 CCNA exam prep guide. The internet in 2019 is composed of massive platforms and ecosystems run by private companies. Find me a Fortune 500 outside the United States whose infrastructure doesn’t, in part, resolve to an AWS data center in Ashburn, Virginia. Are there people who think a LAN of Win2k boxes with a single AD controller and an Exchange server is powering Gmail?

In the closing paragraphs of the ‘Second Acts In Cyberspace’ chapter, Libicki makes the point that re-architecting is the only solution after a successful attack. But even organizations, public or private, that have the skills to build their own infrastructure build things that look nothing like they did 10 years ago. The platforms that power the modern internet are composed of hundreds of microservices. It’s likely that these design choices were made specifically to meet the precise needs of global scale and cannot be “re-architected” without enormous effort. When DoD tackled Heartbleed, they gave an award and a public nod to the team because the challenge spanned something like 8 million computing devices.

I was especially surprised by ‘The Cartwright Conjecture’ chapter. I will read anything Jason Healey puts his name on, but to me that theory fell apart entirely over the last few years.

“We’ve got to talk about our offensive capabilities … to make them credible so that people know there’s a penalty for attacking the United States” - General James Cartwright

I’ve never really bought into this concept, as it assumes that the United States can deter cyber attacks by showcasing its own cyber capabilities. Take this line in particular: “The bigger your hammer the less you have to swing it”. Did no one question what happens when your adversary takes your hammer and hits you in the face with it? Clearly a ‘stockpile’ of 0days and persistence tooling instills so much fear in our adversaries that they published it and then trolled people on Twitter.

What groups such as the Shadow Brokers have done to the United States is what I have been advocating we should have been doing all along: publicly exposing the technical details of exploits and toolchains seen in the wild against American interests. That’s a ‘defend forward’ strategy I can get behind. Law enforcement does this to some degree, but it's usually after you've been breached. The trolling bit, of course, is unnecessary. One of the things I found particularly interesting about this chapter was its mention of the United States having to co-opt or coerce technology companies, weaponizing them in order to create fear in adversaries. Healey rightly points out the consequences of doing this. I’m not convinced it is needed at all. Our adversaries are likely already threatened by the fact that their own operators have Gmail accounts, or that they have to use operationally compromised systems in the US in order to reach Twitter. Doubling down on a free, secure, and open Internet is probably the best tool we will ever have.

This book is worth reading, and its authors deserve credit for exploring such a hotly debated topic. What I think is lacking from most essays in this book is the understanding that we cannot have a strong offense without assuming some risk on others’ behalf. In 2019 every company is a technology company, and if we are to get serious about defending an economy built on technology then we need to be honest with ourselves: it will come at the cost of a strategic offense.

Chris Rohlf

Friday, June 14, 2019

What does "On Team" mean?


One issue with the VEP is that hackers, those who form the core of your offensive national-grade team, find it extremely off-putting when you kill their bugs. Even the terminology of that statement should give a policy-maker pause. While there are no absolutes in life, a VEP process will only avoid hamstringing your recruitment and retention if it is known internally to lean towards never killing bugs.

This brings us to Will Hurd and political divisions within a country. In many countries (Turkey, for example) the military has a very different cultural dynamic from the political sphere. This is extremely evident for the nascent cyber capabilities of a number of places, which if you are at the right conference or have the right Twitter network, you can ask about directly.

While Twitter is not available in China, Chinese hackers are definitely on it. The same is true for Iranians, and the Iranian team is exposed to a tech culture that is almost universally atheistic, pro-LGBTQ, and with a wider global focus than their domestic policy team. Half the Iranian team is watching Dexter in their spare time for some reason. To be a hacker is to be an outlier, and if your society or political organization does not support outliers, it is hard to recruit them.

This is also easy to see domestically - you hear DHS complain that nobody in the tech community will stand up and propose a good key escrow system. The DoD seems both confused and concerned that one US company after another is refusing to sell them advanced AI. If by structure your government lags on issues like gay rights, you will suffer in this domain.

It is equally true in almost every other country as well. It's hard to predict how these schisms will affect the balance of power in cyberspace. But I think they do.

In other words, I like Will Hurd a lot and I think he's an important voice in the community, but I also, if I had to predict, would say there is a good chance he will not end up keynoting BlackHat.

Tuesday, June 11, 2019



Neal Stephenson and William Gibson and Daniel Keys Moran all treated the problem of disinformation and over-information differently in their books. MINOR SPOILER WARNING btw.

NS's latest book is a cool 900 pages, longer than even those compendiums of cyber policy we usually review on this blog. I read it the entire way to Argentina and realized when we landed I was only 50% of the way through. The whole first section is about a near future where someone runs a successful strategic disinfo-op (using actors and faked media) against a town in America claiming that it has been attacked with nuclear weapons, while using cyber means to cut it off from the rest of the country. This works surprisingly well, and changes the nature of how people interact with the Internet as a result.

Good science fiction grapples with policy problems, in many ways, sooner and more accurately than policy writing. All the consternation over "Deep Fakes" is a proxy grieving process for "Mass media can no longer be used to control the masses". The recent controversy over the NYT's reporting on the Baltimore ransomware attack is perhaps a symptom of ongoing consensus fragmentation. You can, for any given factset, fool SOME of the people ALL of the time. We are essentially tribal in all things.

In other words, we always lived in Unreality, but the rise of the cyber domain means we now live in a Chaotic Unreality which seems to make a lot of people uncomfortable.

To be fair, everything about cyber makes policy people uncomfortable because the whole thing is so weird. The clearest example of this, to me, is Targeting, an aspect of projecting cyber power that is basically ignored. This is why someone ends up hacking a fish tank to later own a casino.

I don't know what makes good targeters. Smaller organizations tend to be better at it, for obvious reasons, and the Cybercoms should probably address this at some point, because having a significant cyber effect is almost always perpendicular to the way a planner thinks it ought to be done.

VEEP covered another angle on this: in one episode the Chinese pay back a debt to a politician not by hacking the election itself, but by messing with the traffic lights and power in North Carolina during the Democratic Primary, in certain districts known to vote against her - which works perfectly well.

Good targeting is just not believing all the things you see around you, because they don't actually exist, as the Matrix pointed out but so few can internalize. In other words, Fall is a good book; I highly recommend it.