Tuesday, February 11, 2020

The Transmission Curve

Imagine everything your company does, but as one big RAR file. Every document, and email, and VoIP call, and Teams message, every password and LDAP entry, every piece of source code in the git repo, and Webex, and document scan, and database of PII, and Salesforce spreadsheet. Everything, no matter how trivial, related to the running of your company. If you're a five-hundred-person company, let's say that you generate about a Petabyte worth of information per year. This is dominated by useless Webex video conference calls, which a hacker could not care less about. A more realistic total cost of ownership (TCO), in terms of bytes, for a five-hundred-person company over one decade is 35 Terabytes (I backed this up with some real-world information and some calculations which I can share as needed - this includes all emails, documents, source code, and phone calls, but no video).

That is currently just over a month of downloading for our hacker friends - but we will be nice and say they only download data at night (i.e., a third of the time). Also, a month is a very long time to be "on target," but the download size stays basically static over the years while increasing network speeds keep pushing the time required down. If you are in the ever-growing box-of-pain (see below), then every time you get hacked, your entire company's IP value walks out the door.
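
A minimal back-of-envelope sketch of that month figure, assuming the 35 Terabyte corpus above and a sustained exfiltration rate of roughly 100 Mbit/s (my assumption; pick your own number):

```python
# Rough check of the "just over a month" claim. The 100 Mbit/s sustained
# exfiltration rate is an assumed, illustrative number.
CORPUS_BYTES = 35e12        # ~35 TB: a decade of emails, docs, source, calls
EXFIL_BPS = 100e6           # assumed sustained exfiltration rate, bits/sec
NIGHT_FRACTION = 1 / 3      # the hacker only pulls data at night

days_full_time = CORPUS_BYTES * 8 / EXFIL_BPS / 86400
print(f"Around the clock: {days_full_time:.0f} days")                   # ~32 days
print(f"Nights only:      {days_full_time / NIGHT_FRACTION:.0f} days")  # ~97 days
```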

Everything in this graph is either my estimate or CrowdStrike's, but just understand that as speeds go up and corporate IP size remains static, the odds of any hacked company being completely downloaded before you catch the pesky hacker go to 1.

Hackers or signals intelligence agencies deal with this question every day in a different form, because 99% of what you see on most networks is useless porn and Windows updates. You want to filter that out on-site and then only send back the good stuff. But as network speeds go up, and storage costs go down, it's often easier to download everything and sort through it later. This is of course similar to the problem a certain large SIGINT group reportedly had.

Following this curve is why I think the endpoint security people's 1/10/60-minute rule is ridiculous, and why humans in the loop for security response are also hilarious. Ask yourself: at what network speed does your company enter the box of pain before the 60 minutes are up?
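
Here's the arithmetic for that threshold, using the same assumed 35 Terabyte corpus (the answer is just algebra, not anyone's measurement):

```python
# At what sustained exfiltration rate does the entire corpus walk out the
# door before the "60" in 1/10/60 (an hour to respond) has elapsed?
CORPUS_BYTES = 35e12
WINDOW_SECONDS = 60 * 60

required_bps = CORPUS_BYTES * 8 / WINDOW_SECONDS
print(f"Box-of-pain threshold: ~{required_bps / 1e9:.0f} Gbit/s")  # ~78 Gbit/s
```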

Thursday, January 23, 2020

AI Cyber Controls and Bezos and MBS

I wanted to link to this post on CFR where I wrote about the right way to look at creating new export controls in a complex technical space.

Also, I'll be talking about cyber export controls at the 10th forum on Global Encryption, Cloud, and Cyber Controls, March 24th and 25th in San Francisco, if you want to come heckle.

But mostly I wanted to write a few words here about the recent Bezos hack, which is still developing:


A good perspective on the "Civil Society" (ugh, what a phrase) take on this sort of thing is this Lawfare article. Like many articles, it leans heavily on export control of spyware as the solution to human rights ills. The first thing you'll notice about this and other policy groups is that they call for "Transparency", a term which is worth dissecting.

In particular, it is ironic that the FTI report on Bezos's phone was generated with the exact technology they want to control! It is the very definition of dual use! And it is incomplete, because the one thing you do not have on your own iPhone is Transparency, so we do not even know for sure what the exploit was that got KSA (allegedly) onto Bezos's phone. In fact, Apple is currently suing Corellium, a company that does help with transparency, under weird parts of the DMCA, after trying to buy them (presumably to stop them from selling their virtualization platform for iOS).

When you hear "Transparency" from Citizen Lab, what they mean is that they want long spreadsheets on basically everyone who buys any dual-use software, based on confusing and inexact export control regulations which would strangle the small companies who work in this space. This would in theory feed into stricter export control rules, or even domestic legislation. It would probably be easier and better to fix the DMCA and our vision of copyright so everyone can do forensics on their own phones and find out when they get hacked.

It's also worth noting that Israel is not a member of the Wassenaar Arrangement of export control nations (nor is China, obviously, although Russia IS a member) and that the Kingdom has extensive offensive resources that go far beyond buying off-the-shelf exploit toolkits. I did a quick open-source Twitter survey a while back, after the UAE Project Raven articles came out, and all I found was good penetration testing and offensive research teams in the KSA.


Wednesday, January 15, 2020

Local PrivEscs that are Remote Code Execution

One thing you will notice if you read yesterday's NSA advisory and the Microsoft advisory is that the NSA advisory had MORE information in it. Despite both organizations being "defenders," this is because software vendors see the world through a completely different lens of systemic risk. Sometimes this means advisories get issued for vulnerabilities that are not really exploitable, but more typically it means the impact of a vulnerability is vastly underrated. This is presumably why Project Zero releases full details at 90 days, instead of letting the vendor do all the public communication, but it's also why most bug bounties include non-disclosure clauses.

In other words, if vendors had their way, an advisory would have less information in it than a fortune cookie.


If you've been in the security research business, then you also know that vendors, and often other researchers, will under-analyze a vulnerability. It's an interesting metric to track which bugs got patched but were called LPEs when you know they are really RCEs. Some companies are known to label every remote heap overflow a "crash/DoS", which becomes a funny meme, but it also has strategic implications for critical infrastructure.

I guess what I'm trying to say is that a disparity in information is a disparity of control, and nothing leverages this more than an operator in the cyber domain.

Thursday, December 19, 2019

DHS's cyber policy is a straight up casualty of the partisan wars

A great Politico article came out this week on DHS and its rocky history when it comes to executing on its cyber mission. Every aspect of it deserves a read, but it can be summed up in a few bullet points as well, for the lazy:

  • DHS did not have the talent base to pull off a lot of its cyber mission, and probably never will
  • Budgetary scale in DHS to address cyber issues is minimal and probably always will be
  • DHS has forever lost the trust of the constituencies it needs

That last bullet point was hammered home by Kirstjen Nielsen's career implosion as she promoted harsh anti-migration methods, and it is emphasized by the current DHS Twitter account, which is now a partisan parody of what you would want to see from an organization trying to get cooperation from large technology companies.

sheesh


argh


Hmmm.


Many people, myself included, always wonder at the efforts of government agencies to turn themselves into budget anti-virus companies. But strategically, the one thing DHS or DOJ has to offer is its reputation. When they make an attribution or statement from their official Twitter feed, that has to be believed by everyone. And we don't have that anymore, which is going to have implications up and down the cyber domain.

In some ways, having an independent cyber agency is the only solution: untainted by the other missions of DHS, without the history of DHS, without the offensive mission of the NSA or the military, and perhaps set up in a way that allows private industry to trust its respected technical leadership. I don't see this happening any time soon, but it might be something for a future administration to consider.

Monday, December 16, 2019

Are End Use Controls Fit For Use?


https://twitter.com/Aristot73/status/1203250494745010177?s=20
(You can see here a classic case of End Use Controls)

You would not know it from public reporting, but Export Control is in a bit of a crisis, and that crisis has a name, and that name is "End Use Controls". This is important because when I started really looking at export controls, after the "Intrusion Software Debacle", they were a sleepy little town at the edge of the wilderness, and now they are the center of everything, with Huawei as the most obvious example.

But deep down, the US has fewer and fewer ways to project strategic power, and export control is taking over the role of many other parts of the diplomatic basket, parts it is not especially suited for. You can sum up the selling point of export control's historical role as "preventing bad things from getting into the hands of bad people", and to a certain extent, that still exists.

But let's take a look at a new article from Ely Ratner, Elizabeth Rosenberg, and Paul Scharre in Foreign Affairs on countering China.


Let's sum up their recommendations so you don't have to do all the reading:
  • Boost R&D Spending
  • Attract talent by expanding the high-skilled visa program
  • Enable domestic production of 5G by using Tax incentives and government buying power
  • Enhanced Visa Screening to counter espionage and coordination with Academia on a blacklist
  • Adding PLA organizations to the Entity List
  • Blacklisting all PLA associates from visas (this conflicts with their other recommendation, obviously)
  • Expanding Export controls based on End-Use
  • Finding new sources of Rare Earth minerals (I assume by asteroid mining? lol)
  • Forcing Chinese companies to comply with US Financial Transparency Rules
  • Promoting BLOCKCHAIN (lol)
  • New Multilateral agreements "Just like TPP but somehow different in that we actually sign them this time"

Wait, WTF? Is this for real?

I don't know how articles like this from CNAS are not meant as the duct-taped-banana of "Countering China" thought pieces. The bit on export control is the most likely-to-happen bit! It's just as insane as all the other ideas, though.

By definition, an end use control does not prevent technology from getting into the hands of bad people. It prevents companies from MARKETING technology for a specific use, but the technology itself will invariably become ubiquitous. If at any point in your creation of an export control you're saying things like "Well, this technology is so dual-use that the only difference between military and non-military use is going to be the description on the task order," then what you're doing is creating a nice way to talk to people informally about the wisdom of their business model and customer set, more than an actual "Export Control".

These issues are hugely relevant when it comes to understanding strategic contention around cyber tooling, but also around machine learning technology, 3D printing, biologics, and the next generation of consumer products. Ask yourself: is there a GUIDELINE anywhere for how to create a GOOD end use control for your subject matter? What's the difference between an export control that WORKS and one that DOESN'T? If you don't have such a guideline, then you know the answer is that there is probably no good way to do it.

But if export controls aren't the answer, what is?


Wednesday, December 11, 2019

Crypto Prima Nocta


Yesterday there was a big Senate hearing on Encryption and the witnesses were Matt Tait (hacker), Cyrus Vance (DA NY), Erik Neuenschwander (Apple), and Jay Sullivan (Facebook).

https://www.judiciary.senate.gov/meetings/encryption-and-lawful-access-evaluating-benefits-and-risks-to-public-safety-and-privacy



For any dying government policy there's going to be a set of policy experts that advocate keeping it in the interests of Stability. Crypto policy is no different from the principle of Prima Nocta in that way. You can see this in Cyrus Vance's testimony, which harkened back to the balance of power when CALEA was signed into law. CALEA was 25 years ago. Has anything changed since then, do you think? It is the OK BOOMER of surveillance balances.

What's really changed, since today Judaism is being defined as a nationality for some reason, is the public's awareness that maybe giving governments free access to our deepest secrets is not a great idea. What governments always say is "Terrorism, Child Exploitation Materials, Murders and Serious Crimes", but what they mean is "War on Drugs and political resistance". Senator Kennedy was probably the most pessimistic person on the panel, and literally said "Your companies don't care what we think, do they? They don't trust governments." But it's not the companies that don't trust governments so much as everybody in general.

The Government's (and Matt Tait's) argument is pretty simple: we need a balance that allows the Government access to anything stored on your phone at any time. They'll say "decrypted when presented with a lawful court order," but Apple's policy is even simpler: "No."



There were a couple of obvious fallacies in the pro-key-escrow testimony from Matt Tait and Cyrus Vance:

  • Key Escrow (on devices) is doable and easily splittable from the problem of end-to-end encryption on the wire
  • Key Escrow will be secure against modification by people on their own phones
  • Various Senators assumed Apple HAD a magic key, and then decided to delete it, when Apple was super clear they just decided to enable "Full Disk Encryption" instead of "Some Disk Encryption"

I get that Surveillance-Authoritarianism is the pumpkin spice of this decade's political season - you get a bit of it with everything. At one point one of the Senators said "Is Apple willing to take liability for any attack that could have been prevented by a decryptable device?" which is an insane question, since Apple is ALSO not willing to take liability for any damage from an unencrypted device falling into the wrong hands, nor is the Government able to prevent Apple from BEING ATTACKED BY OTHER NATION-STATES or willing to take THAT liability.

Lol

It's not possible to do key escrow as a matter of legislative policy. Even assuming a law passed that mandated it, Apple and Google would also have to magically ban any application that worked around it by layering its own encryption. This is something Apple could do to non-jailbroken devices the same way they ban VPN services in China, but it's not something Google can do on their platform. And you cannot do key escrow without making devices less safe - Matt Tait is 100% wrong about this being a technologically feasible effort.
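
To make that "ban any application" point concrete, here's a minimal sketch (Python, using the pyca/cryptography library; purely illustrative, not anyone's actual escrow design) of an app layering its own encryption on top of whatever the device escrows:

```python
# Illustrative only: an app encrypts its data with its own key before the
# (hypothetically escrowed) device storage ever sees it.
from cryptography.fernet import Fernet  # pip install cryptography

app_key = Fernet.generate_key()   # generated in-app, never shared with the OS or vendor
box = Fernet(app_key)

blob = box.encrypt(b"message the user actually wants to keep private")
# Only `blob` hits device storage. An escrowed device key could decrypt the
# disk, but not this blob - unless apps like this one are banned outright.
assert box.decrypt(blob) == b"message the user actually wants to keep private"
```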

We live in a world where you can't even trust hardware (this bug came out DURING the hearing!), so adding special hardware to decrypt your device is a dumb dumb thing to do and Apple knows it.

Lindsey Graham said point blank that if industry doesn't magically solve this problem for him, then he's going to pass legislation about it, and there was a big bipartisan show on the floor of both support and opposition, which makes it hard to say if there's an actual plausible threat there (unlikely). But the die has already been cast: what law enforcement wants out of the cloud, it can have. What it wants from the device, it cannot.



Sunday, December 8, 2019

"Stability"

If you read Richard Haass's book (https://www.cfr.org/book/world-disarray) or listen to the GCSC or Joe Nye, you will hear a lot about the value of stability, both in the world and in cyberspace. But as in the real world, enforcing stability is like pushing on a bubble under tape: what is stable for one stakeholder is oppression or uncertainty for another.

The old sayings about the Navy protecting the seas so that large multinationals can exploit the tiny economies of third-world countries port directly to cyberspace, and you can see it in the GCSC's first "Norm".


Why DNS? To a technologist, DNS is one of a suite of aged Internet protocols, along with SMTP and HTTP. And of course it gets manipulated in many ways all the time - most notably, the FBI will blackhole a name that is being used as part of a botnet, for example. Courts will also routinely assign domain names to various companies based on their trademarks.

But anyone on the upstream path that is trusted by your browser or operating system can, and often does, manipulate DNS. You may recall the concerns from various ISPs and network providers about DNS over TLS, which would prevent them from monitoring and manipulating their customers' DNS when certain browsers are in use. Many VPN and security providers filter DNS for you, providing many security benefits.
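
A small sketch of that visibility difference, using the third-party dnspython library (resolver addresses are just examples): the plain UDP query is readable and rewritable by anyone on the path, while the same query over DNS over TLS is not.

```python
# Sketch: the same DNS question asked two ways. Plain DNS over UDP (port 53)
# travels in cleartext, so any upstream ISP or security box can observe or
# rewrite the answer; DNS over TLS (port 853) denies them that visibility.
import dns.message
import dns.query  # pip install dnspython

query = dns.message.make_query("example.com", "A")

plain = dns.query.udp(query, "8.8.8.8")       # visible to everyone upstream
encrypted = dns.query.tls(query, "1.1.1.1",   # opaque to the upstream path
                          server_hostname="cloudflare-dns.com")

print(plain.answer)
print(encrypted.answer)
```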

The UK's now-scuttled war-on-porn was slated to use these exact methods to try to filter out adult content. It would, in other words, have violated this proposed norm.


A better and simpler "Norm" would simply be "no DDoS attacks anymore, please". But the ACTUAL NORM is that lots of governments use DDoS attacks whenever they want to punish a company that is posting content outside their legal reach: China in particular, with its "Great Cannon", but also Iran, the US, and others. DDoS attacks often flood core routers, breaking things in bad ways, and so it's possible this idea of "please don't mess with the core" is an attempt to shoehorn in a bunch of unstated things to stop what every country already does.

Not to mention, many countries spend a lot of time hacking routers, which are the very definition of the core?

In other words, from the very beginning the GCSC proposal has severe challenges. No doubt some handwaving will occur in the name of "a perception, true or not, of forward progress".

---

Part 2:

What really annoys me is that providing guerrilla, uncontrolled internet to people is the best way to effect change in this day and age. Imagine the Hong Kongers having secure internet, unmonitored by the PRC. But this conflicts with the essential value of keeping pirated Disney movies from ever being reachable by the market...


Part 3:

Also, to have workable norms in this space, you have to agree on a shared reality. We don't currently have that in our system.