Wednesday, April 26, 2017

Technical Paper Review: Lighting the Dark Web

There was a bit of a 1 vs 1 between two law professors on the subject of NITs, which I think is worth a read. Ahmed Ghappour contends that the FBI's current legal practice around NITs is a thorn in the side of international legal norms, on the summary basis of "Hacking random computers in other countries has issues". Orin Kerr, on the other hand, is like "It's all cool, yo." Both papers are drafts.

There are several technical holes that need to be discussed. Although Ghappour's paper goes into depth on the details of what a NIT is, the reality is that "NIT" is not a technical specification but simply a description of the end user (aka Law Enforcement); the definition is simply "malware that helps you investigate". In other words, legal analysis that assumes or presumes that a "NIT" is in some way a special technique, separate from intrusion techniques as they are used elsewhere, feels highly deceptive to a technical crowd.

Current known LE practice is to take over a web domain used exclusively by "bad people" and then attack those bad people from that trusted website with a client-side exploit, which then does some very simple things to locate them.

But building legal rules on this scenario is short-sighted, because future scenarios are almost certain to be much more complex and to require essentially the same toolkit and methodology as penetration testing, or as the SIGINT attacks carried out by TAO. For example, future targets could be using the Grugq's "Portal" firewall.

Likewise, a key issue is revealed in Orin's draft text:
In this particular context, it is doubtful that the rule expressed in the 1987 Restatement is viewed by states today as extending to the use of NITs. First, the 1987 rule is focused on conduct that occurs exclusively in a foreign state. Yet use of a NIT is not necessarily such conduct; it is entirely possible that the use of a NIT results in conduct solely within the territory of the state employing the NIT. To put it another way, application of the 1987 rule in the manner suggested by Ghappour results in a state being prohibited from using a NIT even to pursue criminal conduct in its own territory. The 1987 rule had no (and was intended to have no) such effect.
There are three ways to think about hosts whose location you do not know:

  • They default to somewhere within your borders.
  • They default to somewhere NOT within your borders.
  • They are, in fact, NEITHER within nor outside your borders - but handled in a special way, much like Microsoft and Google would prefer, because of rule 2.

From original paper by Ahmed Ghappour:
The legal process for network investigative techniques presumes search targets are territorially located, which is not at all accurate. Indeed, most potential targets on the dark web are outside the territorial United States.27 Approximately 80% of the computers on the dark web are located outside the United States.28
So as far as I can tell, only in special circumstances should the default warrant process really be valid. Just because this results in a situation LE does not like - where Tor services are not domestically warrantable under current legal frameworks - does not mean we should pretend it is not the case.

And of course, computer networks have many more complexities than are addressed. For example, what happens when your intrusion discovers that what you reasonably THOUGHT was a domestic computer is, in fact, a foreign computer? There are many ways this can happen: load balancers and various types of proxies can redirect connections from one computer to another transparently, for example.

Keep in mind that IP addresses are often ephemeral - the very process of uniquely identifying a machine is extremely difficult technically, especially with remote techniques (ask anyone who has built a large-scale scanner).
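
To make that concrete, here is a minimal sketch of a transparent TCP relay (all addresses here are made up): the IP an investigator connects to - and might name in a warrant - belongs to the forwarder, while the machine actually answering can be anywhere on Earth.

# Minimal sketch of a transparent relay, with hypothetical addresses.
# Clients connect to LISTEN_ADDR and never learn where UPSTREAM is.
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 8080)   # the address the outside world sees
UPSTREAM = ("203.0.113.7", 80)    # the real host - possibly in another country

def pump(src, dst):
    # Copy bytes one way until either side closes.
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass

def handle(client):
    upstream = socket.create_connection(UPSTREAM)
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
    pump(client, upstream)

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(LISTEN_ADDR)
server.listen(5)
while True:
    conn, _ = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()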

Orin's paper talks about attacks (CNA):
To be sure, the FBI’s existing hacking techniques, properly executed, do not rise to the level of a cyber “armed attack,” which would permit a state to respond with force under Article 51 of the U.N. Charter.43
While an inadvertent "computer attack" - meaning "damage or destruction" - is unlikely under current methodologies, it is nonetheless technically possible, and it becomes more likely in the future as techniques become necessarily more invasive. Collateral damage is a very real threat - there were a lot of legitimate customers on MegaUpload, for example. There is real risk of international incident here - Orin's paper currently states "There is no sign that the USG or the American public was offended by the foreign search", but there are easy ways to see scenarios where this would not be the case.

For example, BDSM is illegal in the UK but very legal in the States. Should the UK's Law Enforcement officers execute a UK NIT warrant collecting the list of Fetlife.com users to search for UK citizens, we can see immediate conflict with American perspectives.

The Playpen story in Orin's paper - where we discovered the server was in Iceland via a NIT, then collected it with an MLAT - is instructive. What's our plan when we discover the server is in Iran? Likewise, we had already conducted a search of the Icelandic server BEFORE we knew it was in Iceland, a country we had a good legal relationship with.

Orin's paper continues:
"But he does not point to any instances in which the ongoing practice actually caused the reaction he fears"

  1. Friction may be covert and handled quietly. Not seeing the friction does not mean there isn't any.
  2. We may self-limit to child porn and mega-crime for a reason. What about when it's not that? What about the norms we set?


As another side note about how hard this is getting to be in practice, check out what happens when Law Enforcement asks an unwilling party to help them with an investigation, as seen today on the Tor mailing list:
https://lists.torproject.org/pipermail/tor-relays/2017-April/012217.html

That response shows the built-in gravitational forces that are going to require Law Enforcement to step up to the level of the NSA's or CIA's teams.

Lastly, Orin's paper has a VERY strange ending:
NITs are very expensive to develop and require a great deal of technical sophistication to use. Drafting an NIT warrant requires considerable legal sophistication and the evaluation of significant legal ambiguities. Use of NITs may lead to disclosure of their details in subsequent litigation, potentially depriving the government of future access to computers by using that same vulnerability. 
So far the Government has successfully prevented any disclosure of the vulnerabilities used (and here, of course, we have a built-in confusion of "vulnerability" and "malware/implant" with the term "NIT"). Likewise, there's no technical reason the FBI cannot scale to the level of the NSA, given sufficient funding. Orin seems to be implying there's an operational security issue here, when it's really a resources issue. The FBI COULD, in theory, use a new exploit and implant and toolchain for every single investigation. This is, in fact, the most secure way to do this kind of work.

Keep in mind that Law Enforcement, especially local Law Enforcement, often leaks the things they find in order to pressure people, even people who are not suspects. For example, here is an article about them leaking the porn browsing data of a non-suspect.

Thursday, April 20, 2017

Making a NEW and NextGen AntiVirus Company out of DHS instead of an old and busted one


So I have yet another exciting policy proposal, based on the premise that the USG can't trust any software vendor's remediation process to be beyond the control of the FSB. :)

You can see in the DHS a tiny shadow of an anti-virus company: EINSTEIN, threat intelligence, incident response, managed penetration testing - the whole works. But we're kinda doing it without realizing what we're building. So why not develop real next-gen infosec companies instead?

In fact, the best way to use secret USG information is to use it ALL AT ONCE. Instead of publishing reports and giving the Russians time to upgrade all their trojans as various companies react at different times, we could FLASH UNINSTALL every variant of a single Russian trojan, as if we were FireEye, on any company that opts in to our system.
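
For flavor, here is a hedged sketch of what an opted-in endpoint agent might look like. The enrollment key, quarantine path, and indicator format are all hypothetical, and a real system would obviously need actual key management, transport, and rollback:

# Hypothetical opt-in "flash uninstall" agent: verify a signed list of
# file hashes for every variant of a trojan, then quarantine matches.
import hashlib
import hmac
import json
import os
import shutil

SHARED_KEY = b"hypothetical-enrollment-key"  # stand-in for real key management
QUARANTINE = "/var/quarantine"

def flash_uninstall(payload: bytes, signature: str, scan_root: str) -> None:
    # Reject any indicator list that doesn't verify against the shared key.
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return
    bad_hashes = set(json.loads(payload)["sha256"])
    os.makedirs(QUARANTINE, exist_ok=True)
    # Quarantine every file whose hash matches a known trojan variant.
    for dirpath, _, files in os.walk(scan_root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
            except OSError:
                continue
            if digest in bad_hashes:
                shutil.move(path, os.path.join(QUARANTINE, name))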

Also, why should we rely on Microsoft's patches when we could, as soon as we needed to, make our own USG-developed patches with something like 0patch.com? Not doing this seems like being horribly unprepared for real-world events like leaks, no?

Why can't I sign up for the DHS "behavioral analysis" AI endpoint protection for my company - one with a neural network trained not just on open-source malware, but on the latest captured Russian trojans?
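
The core of that idea is not exotic, by the way. A toy sketch, with entirely made-up behavioral features and training data:

# Illustrative only: classify endpoints by behavioral feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# columns: [files_written, registry_edits, outbound_conns, child_procs]
X = np.array([
    [ 2,  0,  1, 0],   # benign samples...
    [ 5,  1,  2, 1],
    [40, 25, 60, 8],   # trojan samples...
    [35, 30, 80, 6],
])
y = np.array([0, 0, 1, 1])  # 0 = benign, 1 = trojan

model = RandomForestClassifier(n_estimators=100).fit(X, y)
print(model.predict([[38, 22, 70, 7]]))  # -> [1], flagged as trojan-like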

Think Next Gen people! :)

Alternative Theories

Fact 1: ShadowBrokers release was either "Old-Day" or "Patched"
Fact 2: Microsoft PR claims no individual or organization told them (found them all internally, eh?)

And of course, Fact 3: the US-CERT response to the ShadowBroker's earlier announcements.

So there are a lot of possibilities here that remain unexplored. I know the common thought (say, on Risky.biz) is that the Vulnerability Equities Process jumped into action and helped MS with these bugs, and then the patches came out JUST IN TIME.

Question: Why would the US not publicize, as Susan Hennessey has suggested, this effort from the VEP?

Fact 4: The SB release was on Friday, three short days after MS Patch Tuesday.

One possibility is that the SB team tested all their bugs in a trivial way, by running them against patched targets, then released when nothing worked anymore. But no pro team works this way, because a lot of the time "patches" break exploits by mistake, and with a minor change you can re-enable your access.

Another possibility is that the ShadowBroker's team reverse engineered everything in the patch, realized their stolen bugs were really and truly fixed, and then released. That's some oddly fast RE work.

Maybe the SB has a source or access inside the USG team that makes up the VEP, or is connected to it in some way (they had to get this information somehow!), and is able to say definitively that these bugs were getting fixed, without having to do any reverse engineering.

If the SB is FSB, then it seems likely that they have a source inside Microsoft, or access to the patch, security, or QA teams, and were able to get advance notice of the patches. This presents some further dilemmas and "Strategy Opportunities". Or, as someone pointed out, they could have access to MAPP, assuming these bugs went through the MAPP process.

One thing I think is missed in the discussion is that Microsoft's security strategy is, in many ways, subordinate to a PR strategy. This makes sense if you think of Microsoft as a company out to make money. What if we take the Microsoft statement to Reuters at face value, and also note that Microsoft has the best and oldest non-state intelligence service available in this space? In other words, maybe they did not get their vulnerability information from the VEP.

There are a ton of unanswered questions and weird timings with this release, which I don't see explored, but maybe Grugq will do a more thorough piece. I wanted to explore this much in order to point out one quick thing: the USG cannot trust the integrity of Microsoft's networks or decision makers when it comes to national security interests.


Wednesday, April 19, 2017

0-12 and some duct tape

In a recent podcast Susan Hennessey at about seven minutes in says:
"...The authors here are from rather different communities, attorneys, private industry, non-legal policy areas, technical people, and again and again when we talk about cyber policy issues there's this concern that lawyers don't know enough about technology or technologists don't know enough about policy and there's this idea that there's this mythical person that's going to emerge that knows how to code and knows the law and has this really sharp policy and political sensibility and we're going to have this cabbage patch and then cyber security will be fixed - that's never struck me as particularly realistic. . . ."

"I've heard technologists say many many times in the policy space that if you've never written a line of code you should put duct tape over your mouth when it comes to these discussions"

Rob Lee, who has a background in SCADA security, responds with tact saying "Maybe we can at least drive the policy discussion with things that are at least a bit technically feasible."

He adds "You don't have to be technical, but you do have to be informed by the technical community and its priorities".

He's nicer than I am. I'm also writing a paper with Sandro for NATO policy makers, and the thesis - "What I want policy makers to know about cyber war" - has been bugging me for weeks. So here goes:

  1. Non-state actors are as important as States
  2. Data and computation don't happen in any particular geo-political place, which has wide ramifications, and you're not going to like them
  3. We do not know what makes for secure code or secure networks. We literally have no idea what helps and what doesn't help, so trying to apply standards, or even looking for "due diligence" on security practices, is often futile (c.f. the FTC case on the HTC phones)
  4. Almost all the useful historical data on cyber is highly classified, which makes it hard to make policy - and if you don't have data, you should not make policy (c.f. the Vulnerability Equities Process), because what you're doing is probably super wrong
  5. Surveillance software is the exact same thing as intrusion detection software
  6. Intrusion software is the exact same thing as security assessment and penetration testing software
  7. Packets cannot be "American or Foreign" which means a lot of our intel community is using outdated laws and practices
  8. States cannot hope to control or even know what cyber operations take place "within their borders" because the very concept makes almost no sense
  9. Releasing information on vulnerabilities has far-ranging consequences, both in the future and for your past operations, and it's unlikely to be useful to have simple policies on these sorts of things
  10. No team is entirely domestic anymore - every organization and company is multi-national to the core
  11. In the cyber world, academia is almost entirely absent from influential thought leadership. This was not the case in the nuclear age when our policy structures were born, and all the top nuclear scientists worked at Universities. The culture of cyber thinkers (and hence doers) is a strange place, and in ways that will both astonish and annoy you, but also in ways which are strategically relevant.
  12. Give up thinking about "Defense" and "Offense" and start thinking about what is being controlled by what, or in other words what thing is being informed or instrumented or manipulated by what other thing
  13. Monitoring and manipulation are basically the same thing and have the same risks
  14. Software does not have a built-in "intent". In fact, code and data are the same thing. Think of it this way: if I control everything you see and hear, can I control what you do? That's because code and data are the same, like energy and matter (see the toy sketch after this list)
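
Here is that toy illustration of point 14 - the same bytes are "data" or "code" depending only on how you treat them:

# A minimal Python sketch: one string, two interpretations.
payload = "print('I was just a string a moment ago')"

print(len(payload))  # treated as data: just characters to count
exec(payload)        # treated as code: it executes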

If I had to answer Susan's question, I'd give the less tactful version of Rob's answer: in fact, we are now in a place where those cabbage patch dolls are becoming prominent. Look at John De Long, who was a technologist sitting next to me before he became a lawyer, or at Lily Ablon, Ryan Speers, Rob Joyce, and a host of others, who all had deep technological experience before they became policy people. The other side of the story is that every Belfer Center post-grad or "Cyber Law and Policy Professor" with no CS experience of any kind needs to leave the field and go spend some time doing bug bounties or pen testing or incident response for a while, to get some chops.

But think of it this way: the soccer game's score is 0-12, and not in your favor. Wouldn't you want to change the lineup for the second half?

Monday, April 17, 2017

Fusion Centers


So the Grugq does great stand-up - his timing and sense for words are amazing. But it is important to remember that when I met him, a million years ago, he was not pontificating. He was, as I was, working for @stake and writing Solaris kernel rootkits on the side. Since then he's spent a couple of decades sitting in cyber-land, getting written up by Forbes, and hanging out in Asia talking to actual hackers about stuff. My point is that he's a native in the lingo, unlike quite a lot of other people who write and talk about the subject.

Which is why I found his analysis of Chinese Fusion Centers (see roughly 35 minutes in) very interesting. Because if you're building cyber norms or trying to enforce them, you have to understand the mechanisms other countries use to govern their cyber capabilities all the way to the ground floor. It's not all "confidence building measures" and other International Relations Alchemy. I haven't been able to find any other open source information on how this Fusion Center process works in China, which is why I am pointing you at this talk. [UPDATE: here is one, maybe this, also this book]

Likewise, the perspective of foreign SIGINT programs - that the US has decided to gerrymander the cyber norms process - is fascinating. "What we are good at is SUPER OK, and what you are good at is NOT GOOD CYBER NORMS" is the US position, according to the rest of the world, especially when it comes to our stance on economic espionage over cyber. This is an issue we need to address.


Saturday, April 15, 2017

VEP: When disclosure is not disclosure.

True story, yo.

I want to tell a personal story of tragedy and woe to illustrate a subtle point that apparently is not well known in the policy sect. That point is that sometimes, even when an entire directory of tools and exploits leaks, your bugs still survive, hiding in plain sight.

A bunch of years ago, one of my 0days leaked in a tarball of other things and became widely available. At the time, we used it for training - porting it to newer versions of an OS, or to a related OS, was fun practice for new people, and also useful.

And when it leaked, I assumed the jig was up. Everyone would play with it, and not just kill that bug, but the whole technique around the exploitation and the attack surface it resided in.

And yet, it never happened. Fifteen years later only one person has even realized what it was, and when he contacted us, we sent him a more recent version of the exploit, and then he sent back a much better version, in his own style, and then he STFU about it forever.

I see this in the rest of the world too - the analysis of a leaked mailspool or toolset is more work than the community at large is going to put into it. People are busy. Figuring out which vulnerability some exploit targets, and how, requires extreme expertise and effort in most cases.

So I have this to say: just because your adversary, or even the entire world, has a copy of your exploit does not mean it is 100% burnt. You have to add this kind of difficult calculus to any VEP decision. It happens all the time, and I've seen the effects up close.

ShadowBrokers, the VEP, and You

Quoting Nicholas Weaver in his latest Lawfare article about the ShadowBrokers' Windows 0-day release, which contains a few common thematic errors as they relate to the VEP:
This dump also provides significant ammunition for those concerned with the US government developing and keeping 0-day exploits. Like both previous Shadow Brokers dumps, this batch contains vulnerabilities that the NSA clearly did not disclose even after the tools were stolen. This means either that the NSA can’t determine which tools were stolen—a troubling possibility post-Snowden—or that the NSA was aware of the breach but failed to disclose to vendors despite knowing an adversary had access. I’m comfortable with the NSA keeping as many 0-days affecting U.S. systems as they want, so long as they are NOBUS (Nobody But Us). Once the NSA is aware an adversary knows of the vulnerabilities, the agency has an obligation to protect U.S. interests through disclosure.

This is a common feeling: the idea that "when you know an adversary has it, you should release it to the vendor". And of course, hilariously, this is what happened in this particular case, where we learned a few interesting things.

"No individual or organization has contacted us..."

"Yet mysteriously all the bugs got patched right before the ShadowBroker's release!"
We also learned that either the Russians have not penetrated the USG-to-Microsoft communication channel and Microsoft's security team, or else Snowden was kept out of the loop - judging from his tweets chiding the USG for not helping MS.

This is silly, because codenames are by definition unclassified, and having a LIST OF CODENAMES while claiming you have the actual exploits does not mean anything has really leaked.

The side-understanding here is that the USG has probably penetrated ShadowBrokers to some extent. Not only were they certain that ShadowBrokers had the real data, but they also seem to have known their timeframe for leaking it... assuming ShadowBrokers didn't do their release only after noticing many of the bugs were patched.

And this is the information feed that is even more valuable than the exploits: what parts of your adversary have you penetrated? Because if we send every bug the Russians have to MS, then the Russians know we've penetrated their comms. That's why a "kill all bugs we know the Russians have" rule, which @ncweaver posits and which is often held up as "common-sense policy", is dangerous and unrealistic without taking into consideration the extremely complex OPSEC requirements for your sources. Every patch is an information feed from you, about your most sensitive operations, to your enemy. We can disclose only with extreme caution.

Of course the other possibility, looking at this timeline carefully, is that the ShadowBrokers IS the USG. Because the world of mirrors is a super fun place, is why. :)




Tuesday, April 11, 2017

"Don't capture the flag"

Technically Rooted Norms


In Lawfare I critiqued an existing (and ridiculous) norms proposal from the Carnegie Endowment for International Peace. But many people found my own proposal a bit vague, so I want to un-vague it up a bit here, on a more technical blog. :)

Let's start with a high level proposal and work down into some exciting details as follow from the original piece:
"To that end, I propose a completely different approach to this particular problem. Instead of getting the G20 to sign onto a doomed lofty principle of non-interference, let’s give each participating country 50 cryptographic tokens a year, which they can distribute as they see fit, even to non-participating states. When any offensive teams participating in the scheme see such tokens on a machine or network service, they will back off. 
While I hesitate to provide a full protocol spec for this proposal in a Lawfare post, my belief is that we do have the capability to do this, from both a policy and technical capacity. The advantages are numerous. For example, this scheme works at wire speed, and is much less likely to require complex and ambiguous legal interpretation."

FAQ for "Don't Capture the Flag" System


Q: I’m not sure how your proposal works. Banks pick their most sensitive data sets, the ones they really can’t afford to have attacked, and put a beacon on those sets so attackers know when they’ve found the crown jewels? But it all works out for the best because a lot of potential attackers have agreed to back off when they do find the crown jewels? ;-)

A: Less a beacon than a cryptographic signature, really. But of course for a working system you need something essentially steganographic, along with decoys, a revocation system, and many other slightly more complex but completely workable features that your local NSA or GCHQ person could whip up in 20 minutes on a napkin, using things lying around on GitHub.
Also, ideally you want a system where tokens can be sent over the network as well as stored on hosts. In addition, just because you have agreed on the scheme with SOME adversaries doesn't mean you publish it for all adversaries to read.
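
To gesture at how simple the core could be, here is a minimal sketch assuming an HMAC-based design. Keys and country codes are hypothetical, and a deployed scheme would use public-key signatures (so verifiers don't hold the minting key), plus the steganography, decoys, and revocation mentioned above:

# Hypothetical "don't capture the flag" tokens, sketched with HMAC.
import hashlib
import hmac
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # held by the issuing authority

def mint_token(country: str, serial: int) -> bytes:
    # Issue one of a country's 50 annual tokens.
    msg = f"{country}:{serial}".encode()
    tag = hmac.new(ISSUER_KEY, msg, hashlib.sha256).digest()
    return msg + b"|" + tag

def is_flagged(blob: bytes) -> bool:
    # An offensive team checks whether a discovered blob is a valid token.
    msg, _, tag = blob.rpartition(b"|")
    if not msg:
        return False
    expected = hmac.new(ISSUER_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

token = mint_token("NL", 7)
assert is_flagged(token)        # participating teams back off
assert not is_flagged(b"junk")  # everything else is fair game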

Q: I think the problem is that all it takes for the system to produce a bad outcome is one non-compliant actor, who can treat the flags not as “keep out” signs but as “treasure here” signs. I’d like a norm system in which we had 80% compliance, but not at the cost of tipping the other 20% off whenever they found a file that maximized their leverage.

A: I agree of course, and to combat this you have a few features:
1. Enough tokens that you have the ability to put some on honeypots
2. Leaks, which, as much as we hate them, would provide transparency on this subject retrospectively - and of course, our IC will monitor for transgressions in our anti-hacker operations
3. The fact that knowing whether something is important is often super easy anyway. It's not like we are confused about where the important financial systems are in a network.

Ok, so that's that! Hopefully that helps, or gives the scheme's critics more to chew on. :)