Thursday, December 19, 2019

DHS's cyber policy is a straight up casualty of the partisan wars

A great Politico article came out this week on DHS and its rocky history when it comes to executing on its cyber mission. Every aspect of it deserves a read, but it can be summed up in a few bullet points as well, for the lazy:

  • DHS did not have the talent base to pull off a lot of its cyber mission, and probably never will
  • Budgetary scale in DHS to address cyber issues is minimal and probably always will be
  • DHS has forever lost the trust of the constituencies it needs

That last bullet point was hammered home by Kirstjen Nielsen's career implosion as she promoted harsh anti-migration methods, and is underscored by the current DHS Twitter account, which is now a partisan parody of what you would want to see from an organization trying to win cooperation from large technology companies.

Many people, myself included, always wonder at the efforts of government agencies to turn themselves into budget anti-virus companies. But strategically, the one thing DHS or DOJ has to offer is their reputation. When they make an attribution or statement from their official Twitter feed, that has to be believed by everyone. And we don't have that anymore, which is going to have implications up and down the cyber domain.

In some ways, having an independent cyber agency is the only solution. Untainted by the other missions of DHS, without the history of DHS, without the offensive mission of the NSA or military, and perhaps set up in a way that allows private industry to trust it, with respected technical leadership. I don't see this happening any time soon, but it might be something for a future administration to consider.

Monday, December 16, 2019

Are End Use Controls Fit For Use?
(You can see here a classic case of End Use Controls)

You would not know it from public reporting, but Export Control is in a bit of a crisis, and that crisis has a name, and that name is "End Use Controls". This is important because when I started really looking at Export Controls, post the "Intrusion Software Debacle", Export Controls were a sleepy little town at the edge of the wilderness, and now they are the center of everything, with Huawei as the most obvious example.

But deep down, the US has fewer and fewer ways to project strategic power, and export control is taking over the role of many other parts of the diplomatic basket, parts it is not especially suited for. You can sum up the selling point of export control's historical role by saying "Preventing bad things from getting into the hands of bad people" and to a certain extent, that still exists.

But let's take a look at a new article from Ely Ratner, Elizabeth Rosenberg, and Paul Scharre in Foreign Affairs on Countering China.

Let's sum up their recommendations so you don't have to do all the reading:
  • Boost R&D Spending
  • Attract talent by expanding high skilled Visa Program
  • Enable domestic production of 5G by using Tax incentives and government buying power
  • Enhanced Visa Screening to counter espionage and coordination with Academia on a blacklist
  • Adding PLA organizations to the Entity List
  • Blacklisting all PLA-associates from Visas (this conflicts with their other recommendation, obviously)
  • Expanding Export controls based on End-Use
  • Finding new sources of Rare Earth minerals (I assume by asteroid mining? lol)
  • Forcing Chinese companies to comply with US Financial Transparency Rules
  • Promoting BLOCKCHAIN (lol)
  • New Multilateral agreements "Just like TPP but somehow different in that we actually sign them this time"

Wait, WTF? Is this for real?

I don't know how articles like this from CNAS are not meant as the duct-taped-banana of "Countering China" thought-pieces. The bit on export control is the most likely-to-happen bit! Equally as insane as all the other ideas though.

By definition, an end use control does not prevent technology from getting into the hands of bad people. It prevents companies from MARKETING technology as for a specific thing, but the technology itself is going to invariably become ubiquitous. If at any point in your creation of an export control you're saying things like "Well, this technology is so dual use that the only difference between Military and Non-Military use is going to be the description on the task order" then what you're doing is creating a nice way to talk to people informally about the wisdom of their business model and customer-set, more than an actual "Export Control".

These issues are hugely relevant when it comes to understanding strategic contention around cyber tooling, but also around machine learning technology, 3d-printing, biologics, and the next generation of consumer products. Ask yourself, is there a GUIDELINE anywhere for how to create a GOOD end use control for your subject matter? What's the difference between an export control that WORKS and one that DOESN'T? If you don't have such a guideline, then you know the answer is that there is probably no good way to do it.

But if export controls aren't the answer, what is?

Wednesday, December 11, 2019

Crypto Prima Nocta

Yesterday there was a big Senate hearing on Encryption and the witnesses were Matt Tait (hacker), Cyrus Vance (DA NY), Erik Neuenschwander (Apple), and Jay Sullivan (Facebook).

For any dying government policy there's going to be a set of policy experts that advocate keeping it in the interests of Stability. Crypto policy is no different from the principle of Prima Nocta in that way. You can see this in Cyrus Vance's testimony, which harkened back to the balance of power when CALEA was signed into law. CALEA was 25 years ago. Has anything changed since then, do you think? It is the OK BOOMER of surveillance balances.

What's really changed, since today Judaism is being defined as a nationality for some reason, is the public's awareness that maybe giving governments free access to our deepest secrets is not a great idea. What governments always say is "Terrorism, Child Exploitation Materials, Murders and Serious Crimes" but what they mean is "War on Drugs and political resistance".  Senator Kennedy probably was the most pessimistic person on the panel, and literally said "Your companies don't care what we think, do they? They don't trust governments." But it's not the companies that don't trust governments so much as everybody in general.

The Government's (and Matt Tait's) argument is pretty simple: We need a balance that allows the Government access to anything stored on your phone at any time. They'll say "decrypted when presented with a lawful court order" but Apple's policy is even simpler: "No."

There were a couple of obvious fallacies in the pro-Key-Escrow testimony from Matt Tait and Cyrus Vance.

  • Key Escrow (on devices) is doable and easily splittable from the problem of end-to-end encryption on the wire
  • Key Escrow will be secure against modification by people on their own phones
  • Various Senators assumed Apple HAD a magic key, and then decided to delete it, when Apple was super clear they just decided to enable "Full Disk Encryption" instead of "Some Disk Encryption"

I get that Surveillance-Authoritarianism is the pumpkin spice of this decade's political season - you get a bit of it with everything. At one point one of the Senators said "Is Apple willing to take liability for any attack that could have been prevented by a decryptable device?" which is an insane question, since Apple is ALSO not willing to take liability for any damage from an unencrypted device falling into the wrong hands, nor is the Government able to prevent Apple from BEING ATTACKED BY OTHER NATION-STATES or willing to take THAT liability.


It's not possible to do key escrow as a matter of legislative policy. Even assuming a law passed that mandated it, Apple and Google would also have to magically ban any application that disabled it. This is something Apple could do to non-Jailbroken devices the same way they ban VPN services in China, but it's not something Google can do on their platform. And you cannot do key escrow without making devices less safe - Matt Tait is 100% wrong about this being a technologically feasible effort.
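To make the point concrete, here is a toy illustration (deliberately NOT real cryptography: the XOR keystream below is for demonstration only) of why banning apps matters. Any app can layer its own encryption on top of the device with a key the platform never sees, so a device-level escrow key would only ever yield that app's ciphertext, never the plaintext:

```python
# Toy illustration (not real cryptography): an app encrypts its data with
# its own key before storage, so device-level key escrow recovers only
# ciphertext. The "app key" here is a stand-in for a key derived from a
# user passphrase or held in app-controlled storage.
import hashlib
from itertools import count

def keystream(key: bytes):
    # Generate an endless stream of pseudo-random bytes from the key.
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream; the same call both encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

app_key = b"held only by the app, never escrowed"
stored = xor_crypt(b"my secret note", app_key)   # all the device ever holds
assert stored != b"my secret note"               # escrow yields only this
assert xor_crypt(stored, app_key) == b"my secret note"  # only the app can undo it
```

A few lines of widely-understood code are all it takes, which is why mandated escrow implies mandated app censorship.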

We live in a world where you can't even trust hardware (this bug came out DURING the hearing!), so adding special hardware to decrypt your device is a dumb dumb thing to do and Apple knows it.

Lindsey Graham said point blank that if Industry doesn't magically solve this problem for him, then he's going to pass legislation about it, and there was a big bipartisan show on the floor of both support and opposition, which makes it hard to say if there's an actual plausible threat there (unlikely). But the die has already been cast: What Law Enforcement wants out of the cloud, it can have. What it wants from the device, it cannot.

Sunday, December 8, 2019


If you read Richard Haass's book, or listen to the GCSC or Joe Nye, you will hear a lot about the value of stability, both in the world and in Cyberspace. But like the real world, when you enforce stability it is like pushing on a tape bubble. What is stable for one stakeholder is oppression or uncertainty for another.

The old sayings about the Navy protecting the seas for large multinationals to exploit the tiny economies of third world countries are also portable directly to cyberspace, and you can see it in the GCSC's first "Norm".

Why DNS? To a technologist, DNS is one of a suite of aged Internet protocols along with SMTP and HTTP. And of course it gets manipulated in many ways all the time - most importantly, the FBI will blackhole a name that is being used as part of a botnet, for example. Courts also commonly assign domain names to various companies based on their trademarks.

But anyone on the upstream path that is trusted by your browser or operating system can, and often does, manipulate DNS. You may recall the concerns from various ISPs and network providers about DNS over TLS, which would prevent them from monitoring and manipulating their customers' DNS when using certain browsers. Many VPN and security providers filter DNS for you, providing many security benefits.
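The manipulation being described is mechanically trivial. A minimal sketch of what any on-path resolver does when it filters or blackholes names (the domain names, blocklist entries, and sinkhole address below are invented for illustration):

```python
# Minimal sketch of upstream DNS filtering: a resolver on the path between
# you and the internet can rewrite answers before you ever see them.
# Blocklist entries and addresses here are made-up placeholders.

SINKHOLE = "0.0.0.0"
BLOCKLIST = {"botnet-c2.example.com", "adult-content.example.net"}

def resolve(name, upstream):
    """Return the upstream answer unless the name is on the blocklist."""
    if name in BLOCKLIST:
        return SINKHOLE  # blackholed, as the FBI does with botnet C2 domains
    return upstream.get(name, "NXDOMAIN")

# A stand-in for the real upstream resolver's view of the world.
upstream_dns = {
    "example.com": "93.184.216.34",
    "botnet-c2.example.com": "203.0.113.66",
}
print(resolve("example.com", upstream_dns))            # passed through
print(resolve("botnet-c2.example.com", upstream_dns))  # sinkholed
```

Encrypted DNS (DNS over TLS or HTTPS) moves the `resolve()` step out of the ISP's reach, which is exactly why ISPs objected to it.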

The UK's now scuttled war-on-porn was slated to use these exact methods to try to filter out adult content. It would, in other words, have violated this proposed norm.

A better and simpler "Norm" would simply be "no DDoS attacks anymore, please". But the ACTUAL NORM is that lots of governments use DDoS attacks whenever they want to punish a company that is posting content outside their legal reach: China in particular, with its "Great Cannon", but also Iran, the US, and others. DDoS attacks often flood core routers, breaking things in bad ways, and so it's possible this idea of "please don't mess with the core" is an attempt to shoehorn in a bunch of unstated things to stop what every country already does.

Not to mention, many countries spend a lot of time hacking routers, which are the very definition of the core.

In other words, from the very beginning the GCSC proposal has severe challenges. No doubt some handwaving will occur in the name of "a perception, true or not, of forward progress".


Part 2:

What really annoys me is that providing guerrilla uncontrolled internet to people is the best way to effect change in this day and age. Imagine the Hong Kongers having secure internet, unmonitored by the PRC. But this conflicts with the essential values of preventing pirated Disney movies from ever being reachable by the market...

Part 3:

Also, to have workable norms in this space, you have to agree on a shared reality. We don't currently have that in our system.

Thursday, November 14, 2019

A Lot To Feel

Last month I gave two talks on cyber policy, one in Israel and one in Tokyo, Japan.  What I learned was this: The two places could not be further apart!

The Israeli conference was called ICT and you can view their extensive YouTube channel here, but you probably won't so I will summarize it for you in this post in two words: Terrorism bad.

It was election season in Israel, and every single Israeli political figure stepped across the stage of the conference to get interviewed on their positions on the Palestinian Territories. They spoke in great depth, taking tough questions from interviewers about their policies and positions, most of which made any right wing American political figure seem like Bernie Sanders. Surreally, pictures of these politicians were on billboards all over the country, often with Donald Trump standing in a close hug.

I say this not just because Americans cannot leave politics behind at the border. At this conference, partially held in an Israeli Defense University, American and Israeli politicians, spooks, policy makers, and other nations came together to proclaim their continued support in the fight against terror. One of the standout talks was Albania's Minister of Foreign Affairs, which is one of the few talks not on the YouTube channel. It was packed so I stood in the back, ignoring the conspicuous security team that scanned the already extremely well vetted audience the entire time he spoke.

Another amazing talk was Facebook's policy person, who came to explain how not EVERYTHING is its problem and frankly "Just block everything bad as it happens" is an annoying thing for governments to keep asking for.

I want to point out that for many people, when you speak at a conference, you expect them to pay for your flight and hotel. But at the high end of conferences, this does not happen!

At the VERY highest end, the conference itself can be more aptly described as a small roundtable discussion at a five star hotel with a paid-for staff of communications experts to organize everything and elicit collaboration since everyone is a speaker and expert at their topic. Typically at these you'll have current government officials and former government officials still making policy decisions behind the scenes as well as the top executives at various large conglomerates and you'll pay for everything as well as a conference "entry" fee that goes to the organizers. In return you'll get a neat printed out pamphlet with everyone's name and position and a picture of them, which saves time when wheeling and dealing.

At a slightly lower (but still super high) end, you still pay for everything (other than conference entry, which is sponsored by the host government or some large corporation or both) and the talks end up being "panels" with some people choosing to do a powerpoint presentation with their fifteen minute introduction speech and some just rambling.

My feeling on panels is that they are almost always terrible, although there are exceptions, like this one with Jennifer Cafarella discussing ISIS's potential reconstitution. But perhaps because the talks are typically not great, these kinds of conferences attract a special crowd of interested parties, to the point where the local service had someone ask me what I was doing there and what my background was. I answered one hundred percent honestly - that I'm an executive at a computer company that offers hosting, and that I have an expensive side hobby of attending conferences like this because I masochistically enjoy cyber policy debates. Somehow this was so unbelievable that he then followed up with, "...What other cover stories do you use?" to which I replied "college student?" which, of course, is the one all the Israelis use at Defcon every year.

Also speaking at both conferences was Sophia d’Antoine and it was sometimes fun to see people not realize how much background she had on both the technical (her speciality is automated program analysis) and policy aspects.

What I mean by "the two places could not be further apart" is that Japan and Israel are almost orthogonal in their approaches to all the important policy problems in the space. In Japan there was a focus on the rule of law as defined very closely with international norm-creation efforts. Pacifism is a virtue. Of course, should the Japanese decide to go offensive and use their cyber capabilities for deterrence as I and others recommended, the world would shake.

Israel has a very different view on things. Just down the road from the big Microsoft building in the hip tech town of Herzliya is NSO Group, currently being sued by Facebook for selling 0day capabilities, in a silly lawsuit that demonstrates how, with international norms efforts between nation-states having failed, these sorts of things are now hashed out between large corporations.

Probably the most amusing part of ICT for me was when at the end of the panel I was on the moderator asked some simple questions. "What happened to ISIS's cyber capabilities? Why haven't they developed as we thought they would into a force to be reckoned with?" These were Talmudically hypothetical questions of course. Looking out at the crowd you could see the answer in their eyes, an entire conference of deadly spook subclasses dedicated to removing that particular threat from the world.

Friday, October 18, 2019

Unexpected Norms Setters

Paper Review: The unexpected norm-setters: Intelligence agencies in cyberspace

I wanted to do a line by line review of Ilina Georgieva's recent piece on cyber norms because on a brief read-through, I liked a lot of it. That said, the difficulty with reviewing policy pieces is you tend to think the ones that AGREE with you are naturally genius, which is not always the case. So after a more thorough review, there are a lot of serious issues with the piece and these are painfully listed below (if you happen to be Ilina).

To be specific, the paper focuses on the norms implications of NSA's leaked tool TERRITORIAL DISPUTE, which is not really this at all, and it's weird how confused the author sounds trying to describe it:
This article examines a particular technique of infiltrating computer networks to gather intelligence data (i.e., computer network exploitation or CNE), in order to exemplify the norm-setting impact of intelligence agencies.

What TeDi (to use her terminology) really is, is a simple script that you can run once you are on a box to find out if another APT is also installed on that box, complete with a few simple signatures, essentially the simplest and dumbest anti-virus of all time. It is not a "technique of infiltrating computer networks" and the main flaw of the whole paper is that it's impossible to say what the norm implied by TeDi is in a simple sentence. Without a very clear statement as to what the norm is, it's folly to analyze or draw any conclusions.
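To show how little machinery is actually involved, here is a hedged sketch of what a TeDi-style check amounts to: a handful of filesystem signatures for other groups' implants, checked on the box you've just landed on. The group names and artifact paths below are invented placeholders, not real TeDi signatures:

```python
# Sketch of a TeDi-style "is anyone else here?" check: a tiny signature
# table mapping other APTs to artifacts their implants leave behind.
# All names and paths are hypothetical, for illustration only.
import os

APT_SIGNATURES = {
    "APT-A": [r"C:\Windows\System32\driverX.sys"],
    "APT-B": [r"C:\Windows\Temp\svchost32.dll",
              r"C:\ProgramData\updater.dat"],
}

def who_else_is_here(path_exists=os.path.exists):
    """Return the names of any known APTs whose artifacts are present."""
    return [apt for apt, paths in APT_SIGNATURES.items()
            if any(path_exists(p) for p in paths)]
```

That is the entire idea: a lookup table and a loop, which is why calling it a "technique of infiltrating computer networks" misses the mark.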

Another major issue with the paper is purely stylistic, in the sense that many international relations papers will say things like:

The exploitation technique portrays a norm of cyber espionage that is
widely implemented by the intelligence community.

But we have no public evidence that any other group has anything like TeDi, or a clear understanding of what norm it would imply if they did.

The paper also confuses activities taking place because there is a norm of behavior with activities taking place because operational security (OPSEC) measures are part of how you do this business. This includes not leaving your rootkits around to be looked at by your opponents, which is the obvious purpose of TeDi. The attacker running TeDi wants a minimal number of signatures because they:

  • Assume their checks will leak as someone may eventually detect them via some sort of honeypot
  • Know that every check they run is taking time away from running the mission of their operation, and adds potential complicated failure modes to something already difficult to do

Any real critique of the paper would have to put words in the author's mouth - starting with what you propose specifically is the norm implied by the existence of TeDi. That seems a bit like tilting at windmills. A better question around TeDi is probably what it means to the FBI or other defensive domestic teams that these signatures were not shared more widely, but then we don't know that they were not.

It's true, as the paper points out, that almost all discussion of cyber norms is fantasy. Every paper is a broadside focused on trying to make some imaginary opponent believe they should adopt a particular set of rules. Nobody wants to say what the current rules of the game are, perhaps because it means admitting to things they would rather not say out loud.

Thursday, October 3, 2019

CNAS is wrong about how easy regulating AI is

Like, is your plan to control a certain amount of data, or only labeled data? What is an "amount" of data when different formats of data are obviously very different sizes... And we don't know how much data is useful...

Of course, then we get to talk about "dual use" databases. And the efficiency of controlling all non-labeled databases?