Tuesday, December 20, 2016

The Weirding Way

You don't have to believe I know anything about cyber combat or science fiction, but if you read this blog, and haven't read Dune, you're missing out on the philosophy behind how cyber offense works. 
I want to teach the whole Policy World about the Weirding Way in this blogpost. It is hard to explain, but I want to start with this: Scrippie is a better exploit writer than I ever was. I have the good fortune of being able to watch world-class exploit writers do their work. Even now, when I should be selling INFILTRATE tickets, I stand around behind people and talk to them about their exploitation strategies and how they are manipulating a heap overflow to do what they want and what their chances of success are. Sometimes I can help. Mostly I just help by letting them talk it out.

I know that no policy lawyer can read Bratus's paper on Weird Machines. I also know that even Halvar's INFILTRATE keynote on the subject is probably too technical.

But let me tell you about something in the Wassenaar Arrangement that is leading the policy world down the wrong path, a sugar-coated path of simplicity: the idea that computer code has intent, and even a chain of preferred execution!

The reason Scrippie is a better exploit writer than I am is because he flattens the code out in his head. He reads the whole thing, and then inside his head the input parsing routines and the heap allocation routines and even the KERNEL system call routines are all at the same level, literally as if they are all in a line and he is simply calling them with his data.

This is what it takes to do real exploitation in the world where you don't have Javascript around to do your heap grooming for you. Because most policy experts have only really seen clientsides in situations where there is a Javascript interpreter, they have a warped view of how exploitation works in general.

Below, I respond to Nicholas Weaver's Lawfare post, but with <sarcasm>, which translates poorly on Twitter.

Nopes.

Ok, so if you're still with me, I want you to think of it this way: Data is also code. I don't mean "Code can be represented as data because everything is just bytes". I mean, the data I pump into your algorithm controls it as much as the executable code itself does. That's how hackers think of your code and it's closer to the true nature of the code than how the regulators and most academics are thinking right now. It's why every time an academic paper comes out on "ROP/JOP/etc" hackers find it redundant and hilarious.

To make this a Koan: Your computer is a state-space, and our data explores it. When it has no input, your computer program is in all potential quantum states - literally anything is possible because it is Turing complete if it has enough complexity. When we give it data, we collapse that waveform into a particular state of our choosing.
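
To make that koan concrete, here is a minimal sketch (mine, in Python, purely illustrative): a toy parser whose handlers are boring and fixed, but whose input bytes decide which handlers run, in what order, and on what state. The executable code never changes; the data is the program.

    # A toy "weird machine": the code below never changes, but the input
    # bytes choose which routines run, in what order, and with what state.
    # From the attacker's point of view, the input is the real program.
    state = {"buf": bytearray(16), "ptr": 0}

    def op_seek(arg):     # move the write pointer
        state["ptr"] = arg % len(state["buf"])

    def op_write(arg):    # write a byte wherever the pointer sits
        state["buf"][state["ptr"]] = arg

    def op_grow(arg):     # "heap" growth, which changes the layout
        state["buf"].extend(b"\x00" * arg)

    HANDLERS = {0x01: op_seek, 0x02: op_write, 0x03: op_grow}

    def run(data: bytes):
        # The programmer's "intent" here is just to parse tag/value pairs.
        # The intent of the *data* is whatever the attacker wants it to be.
        for tag, val in zip(data[::2], data[1::2]):
            HANDLERS.get(tag, lambda _: None)(val)

    run(bytes([0x03, 0x20, 0x01, 0x1F, 0x02, 0x41]))  # the input collapses the state-space
    print(state["buf"])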

Hopefully that helps?


Monday, December 19, 2016

"Harm"

I've spent nearly three years reading policy papers in cyber security, which is a SMALL community; every conference has the same names. And most papers talk about how to classify the problem and map it to existing problems and then use existing solutions. The GOOD papers (Danzig and Gary) tend to argue the opposite. They are darker, and more painful to read, but also more true and more likely to point toward actual solutions that work.



"Put Simply"

Another thing to watch out for is quick divisions into "phases" of operations. These are vast oversimplifications for the purpose of communicating one particular concept, but you see papers steal classification phases and then run with them as if they are useful in other contexts, which they never are.

Likewise, often the papers that are cited don't support the arguments in the paper, which I always find weird and upsetting, like I'm being cheated by getting a Caribbean lobster instead of a Maine lobster at a restaurant.

11) Dept of Commerce blathers on about stuff unrelated to this paper. This concept needs better support.

The conclusion isn't hard enough on what defines an "activity".

Lots of good reasons to establish presence on SCADA boxes other than direct CNA...

Should probably link to: SIM HEIST



Thursday, December 15, 2016

The DIRNSA trap



New domains, such as cyber, are challenging for leadership. There are always moments where you see people hang on the words of a DIRNSA, especially one who has just exited and is more free to talk (aka, sell whatever solution they are hawking in their retirement). But I want to point out that in most respects these high level people have very little experience in the cyber domain as we know it, and you are better off going to an old 90's hacker-type like Halvar to get strategic advice about what sorts of solutions are going to work next year and the year after that.

Look, I get the magic of the NSA. Being inside the bubble is like living in a crystal ball. I read the presidential daily brief every morning, and then browsed the crypto library or looked at papers on various things far ahead of the outside space. It's like being a Guild Navigator, steeped in Melange.

Folding spacetime is not easy.


Typically, people confuse CLEARANCE with UNDERSTANDING. But reading high level reports and hearing briefings can occlude strategic understanding, especially if you don't have the background to see the whole picture. Obama, towards the end of his term, put in a whole staff on cyber security with no technical or industry experience. Look at Michael Daniel - 17 years at OMB (!) doing financial review of the IC, I'm sure at a very high level of clearance. But he has no technical understanding - he has an undergrad in public policy, not even computer science. This trend ran throughout Obama's appointments, and it has seriously undercut our national policy efforts. It doesn't matter how cleared you get, what SAPs you get read into: you cannot get clarity on these issues that way.

I'm often chided for holding fast to a rule that you cannot operate strategically in this domain without understanding the technology - I used to make it a rule that nobody could use the exploits my team created operationally without being able to write them themselves. And spending some time in industry seems like a requirement for making good policy decisions when nearly everything you do in the cyber domain goes over private networks and software.

A lot of it is just time in grade. A DIRNSA comes from an intel background, but will probably not have 20 years of cyber-hacking under their belt. Your average 90's hacker will. And these days, they all have the clearances and money from their respective governments to use it. We're not playing against amateurs anymore, and we need to stock our bench accordingly.

Wednesday, December 14, 2016

The back and forth of 0day


In case you don't read my twitter feed, I wanted to post a quick blog about this talk. There are a few things in it, but we go over both Authentication and VPN+Wireless as bug categories, and then talk about next-gen targeting for phishing (aka microtargeting using Twitter ads) and a few other things that are policy related.

Dwell Time


I like to think of the amount of useful information in any size organization as a static number. It's like, movies all compress differently, but roughly they are 1G per hour. So at a certain point, due to bandwidth improvements on average across the world, torrents moved from being mostly songs, to mostly movies. The same thing is true for corporations. Can you download an entire midsize corporation's information-sphere overnight, before the incident response team comes into work the next day? Lately we've seen this information-sphere include phone calls, recorded from VOIP systems.

But the point here is that every part of the defense equation changes when you hit "complete compromise" times of about a day. If you assume, not just compromise, but a Snowden-level event every five years, how would you organize the NSA?
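
A back-of-the-envelope version of that math (every number below is my own illustrative assumption, not a measurement):

    # Dwell-time arithmetic with made-up but plausible numbers: a mid-size
    # corporate "information-sphere" of ~5 TB and an exfiltration path
    # that can sustain ~1 Gbps overnight.
    corp_data_tb = 5          # assumed size of the useful data
    exfil_gbps = 1            # assumed sustained exfiltration bandwidth

    bits = corp_data_tb * 8 * 10**12
    hours = bits / (exfil_gbps * 10**9) / 3600
    print(f"hours to copy the whole company: {hours:.1f}")   # ~11.1

    # If that number fits between close of business and the incident
    # response team's morning coffee, every part of the defensive
    # equation changes: you plan for complete compromise, not dwell time.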

Awareness Training


Almost all awareness training happens via "someone sends you an email". We've seen how well this works. Worse, hackers can leverage the entire battery of advertising targeting tools to aim very precise ads at your IT staff, even down to one or two members. Facebook and Twitter are great for this. And because it's not a spam email, your organization's defenses never get to see it.

DoS

We talk a lot at Immunity about how DoS and resource exhaustion are a "medium" severity vulnerability in the reports we often write, and a "critical" in the wild when they get exploited.

What is NG anyways?


Our position is that next-gen is not monitoring, but automated response. This means you have to know ahead of time what it takes to deprovision and reprovision anything on your network.
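
A minimal sketch of what "knowing ahead of time" might look like (the asset types and action names here are hypothetical examples, not a description of any Immunity product):

    # Hypothetical automated-response playbook: the deprovision and
    # reprovision steps are decided before the incident, so the response
    # can run at machine speed instead of meeting speed.
    PLAYBOOK = {
        "workstation": {
            "deprovision": ["disable_ad_account", "isolate_vlan", "snapshot_disk"],
            "reprovision": ["reimage_from_gold", "rejoin_domain", "restore_user_data"],
        },
        "web_server": {
            "deprovision": ["drain_load_balancer", "revoke_tls_keys", "snapshot_disk"],
            "reprovision": ["deploy_from_pipeline", "issue_new_keys", "smoke_test"],
        },
    }

    def respond(asset_type, alert):
        """Return the pre-approved action list for a compromised asset."""
        steps = PLAYBOOK.get(asset_type)
        if steps is None:
            # If you can't answer this question at 3am, it isn't next-gen yet.
            raise KeyError(f"no playbook for {asset_type!r} (alert: {alert})")
        return steps["deprovision"] + steps["reprovision"]

    print(respond("web_server", "beacon to known C2"))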


Monday, December 12, 2016

Sources and Methods

Interesting.

I didn't want to lose this train of thought - but my initial reaction to people in policy places is that they always undervalue the "single server" because from an operator's perspective, there is no such thing. That server is a foothold on a network - probably in a unique position - and the toolchain on that server, and the toolchain that got you onto that server, typically puts every mission you have at risk.

So from that perspective, it's likely that even if it is one server, that a real offensive organization has human lives at risk if that server is deliberately outed. You have to do a massive cleanup job first, equivalent to an enterprise-level forensics job, to cover your tracks. Sometimes that's impossible because you've lost access to part of your toolchain...


Wombo-Combo Cyber Offense


So I'm reviewing a paper on cyber offense resourcing and what I find hard to explain to non-operators is wombo combos. It's not even about "operators" per se. It's about the crucial elements of cyber strategy that evolve from the experience of hackers working in small teams ("islands", if you will). I, like many people, spend a lot of time doing wombo-combos in Overwatch - the standard one being Zarya's gravity bomb, which pulls people all into a group, followed by "Justice Rains From Above", a barrage of missiles from the flying character Pharah that cleans them all up. Obviously the coolest wombo-combos are the weirdest and least expected ones. Many videos have been dedicated to pulling this off while controlling only two members of a six-person team, which is essentially the position almost everyone is in when doing cyber strategy.

If you want to see a basic outline of the overall picture, the old post on metrics around cyber capabilities is useful. This post, in some senses, is the next level down in terms of technical focus.



A wombo-combo is a way of choosing resources that creates instant, dominant synergies. Most cyber offensive organizations come upon these by accident, or the hard way. They end up throwing a bunch of resources at the problem and get lucky by sometimes having a wombo-combo, but typically they fail to realize why they are so successful and eventually disrupt their own synergy. Building these capabilities takes time and forethought, and so it's easy to disrupt them with personnel loss or reorgs.

But good hacker teams do wombo-combos on purpose. The traditional one is PHP + Linux locals. You can get pretty far by specializing in two areas that have great synergies like that, which is something many early hackers groups did instinctively.

So for example, if you specialize in supply chain interception and hardware trojans, then what else do you need to have to generate synergies? Can China completely forgo any iPhone client-side or exploitation capability if it gets a significant advantage in hardware hacking + something else? Maybe all I invest in is XSS + a pile of cheap RATs? What is Singapore's best "punch above your weight" strategy? I mean, the question for everyone in the next few years is going to be "How do I best team with Equation Group so I can get under the security umbrella?", which even stalwarts like Germany would be best off preparing for now, from a technical capabilities perspective.

I deliberately left exploitation out of the original post on attacker metrics, focusing instead only on implants, which are easier to analyze when you're trying to create measurements from publicly available data. But you can see these strategies operating in the wild every day with the right kind of eyes. Of course, a corollary is I think of HUMINT as just another arm of cyber offense, which would probably insult a lot of CIA-types. :)


Thursday, December 1, 2016

How ‘Active Defense’ Would Work

The Problems


In many cases American (and other Western) companies know they have had an intrusion and even who the beneficiary is - but cannot prove it because to do so would require information only available on a remote server in another country, one typically unresponsive to subpoenas from the American court system.

Likewise, large scale botnets and worms such as Mirai can be difficult to combat as no public agency has the authority (and desire) to conduct the necessary international trespasses for the public good. And while penetrating carder forums and child abuse imagery trading websites on the Dark Web can be done by the largest law enforcement agencies, it's time to prepare for a specialist arm that can support all of law enforcement.

In addition, there is a talent problem. Even if there were clear authority for many of these issues, the US Government does not have a spare critical mass of experienced hackers and the management teams necessary to safely mount these sorts of operations.

No intelligence community arm is aimed at defeating economic cyber espionage on behalf of American industry. Nor should this become a priority of the foreign intelligence community’s mission. While the protection of the American industrial base is a strategic goal, there are limited resources within the IC and penetrating Chinese corporations which are not involved in military applications is a problematic thing to do for the NSA and CIA.

Desired End State

The first-order desired goal is the end of widespread economic cyber espionage, which at scale is a national security issue, but individually is a law enforcement issue. No Chinese/Russian company would receive stolen American R&D intellectual property or sales plans if it knew that accepting that information could lead to heavy personal and corporate legal sanctions.

Essentially, we want to have a chilling effect on cyber economic espionage while providing the beginnings of the ability to deal with wide ranging international systemic threats such as the Mirai worm, leveraging the deep bench of penetration testing talent and resources available in the private sector to do this without impacting our intelligence community missions.


Active Defense Done Safely and Legally


Issues and Concerns

Escalation into a cyber war or a trade war is the most commonly cited concern with this kind of structure for normalized hack-back. But there's no reason to assume that "cyber war" will escalate when countries have the option of simply being responsive to law enforcement requests. The key to avoiding escalation in this case is splitting the effort from the traditional IC (which can be involved in battleground preparation operations), and massive transparency as to the scoping and goal of this agency's work.

Another question is why is there a private sector penetration testing company involved at all? Why not do this entirely in-house in a law enforcement agency? The answer is twofold:
  1. Law enforcement agencies have a culture that does not mesh well with cyber teams, to be blunt, which makes it hard for them to maintain the management talent required to run operations as well as you need them to. For example, while initial attacks against child abuse imagery sites and users can be performed somewhat easily, it's reasonable to expect that community to invest in protection and detection mechanisms (as evidenced by them catching the latest Tor Browser 0day when it was used).
  2. There’s a moral hazard issue here - you want American companies to pay for the technical work involved because otherwise every issue becomes the Government’s problem, and there is no incentive to orient their business to security. This is what happened with Credit Cards. Instead of building secured payment infrastructure, Banks relied on the Secret Service to go chase down every 19 year old who got involved in carding.

Of course, if this model works well, the vast majority of these efforts would end with no hacking at all, simply subpoena and information requests between two law enforcement agencies.

How would this model work for other countries?

Other (smaller) countries may not see this model as necessary in terms of the private-public partnership. They may want to make it entirely a law enforcement function, because they can manage the moral hazard issue more directly that way, they don't face the same talent or culture problems, and they have a history of joining LE and SIGINT functions.

But the truth is of course that many smaller countries will simply want to have the American cyber security umbrella also apply to their companies, and will work on bilateral agreements to make this possible.

Are you just suggesting this model because you want to do the work?

It's extremely unlikely that Immunity, a small business located in Miami and Argentina with a huge foreign national component, would be eligible for this kind of business, although those involved might buy INNUENDO and CANVAS and SILICA (as practically everyone in the industry already does). I assure you this will not drastically affect our profitability.

The penetration testing companies in this model have a very particular risk structure which we can fully explore in another paper - i.e. they need to at some level be closer to classified defense contractors than normal penetration testing companies, even though they are doing unclassified work.

How do other countries trust this process is not itself committing espionage?

This is where measures typical in International Relations, such as having, say, Chinese/Russian observers, become part of the process. Likewise, this is a great argument for having the tools and techniques and infrastructure for this effort be completely distinct from intelligence community toolchains, and at some level attributable at a group level using specific technical means.
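
One possible flavor of "specific technical means" (my sketch, not something the report or anyone else has formally proposed): the agency publishes a verification key and signs every tool and piece of infrastructure it deploys, so any third party, including a foreign government, can check "this artifact belongs to the Active Defense effort, not to an intelligence agency" without learning anything else.

    # Hypothetical group-level attribution via signing (uses the Python
    # 'cryptography' package). One way to do it, not an official design.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    agency_key = Ed25519PrivateKey.generate()       # held only by the agency
    published_verifier = agency_key.public_key()    # published to the world

    tool_blob = b"active-defense toolkit build 2016-12-01"
    signature = agency_key.sign(tool_blob)

    try:
        published_verifier.verify(signature, tool_blob)
        print("verified: this artifact is attributable to the Active Defense agency")
    except InvalidSignature:
        print("not theirs")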

What crimes other than straight economic espionage would this model apply to?

We have a problem in that many crimes in cyberspace are viewed very differently by different countries. For example, posts that defame the Royalty of a country on Twitter are viewed as capital crimes by some countries. What do we do when they send us a subpoena to unmask an anonymous poster of such content, which we would consider protected speech? Are they within their rights under this framework to go active against Twitter?

Those are still painful and unanswered questions.

Isn’t this super risky? What if you break something?

We already handle these issues in SIGINT collection quite well - or at least, well enough to not fear arbitrary escalation when we make mistakes. It's possible that having the best technical talent the industry has to offer is a net benefit in this way, as it will reduce unfortunate side effects.

Some Resources On This Subject


Scenario Walk-through

In many cases, a narrative explains these concepts better than anything else. So below is a hypothetical walk-through of how this could function.


---

US SteelCo is in the process of developing a new method for creating high tensile steel girders exposed to tropical environments. The goal is to market them into the Caribbean for anti-hurricane buildings, which are now more in demand due to Global Warming. The methodology of creating these girders involves dousing them in a cooled molybdenum bath at a precise time during the tempering of the steel. They go through months of testing to determine the exact right formula and finally make a breakthrough.

Unbeknownst to them, a Chinese hacker has been waiting for just such a breakthrough and is resident on their main mail server using a variant of a trojan he also sold to the PRC Army team. He pulls the PDF and some notes on the methodology from a triumphant email sent to the management team and then has them translated into Chinese by a Beijing translation service he is friends with. He then sells that information to a Chinese Fusion Center, where it is noticed by a local mega-company, ChinaSteel, which then decides to invest in a brief market exploration of the girders produced by this technique. They have some success locally in the southern Chinese market and then expand into the Pacific and Caribbean tropics.

A few months later a SteelCo sales team has one of its Bahamian customers hand it a sales pamphlet from ChinaSteel that has the exact same parameters for steel girders. It cannot be coincidence. At first, they assume one of their own technology team has left with the valuable formula, but after an internal investigation, done entirely in person in a hotel room offsite, they place a quick phone call to their FBI contact from the local InfraGard meeting, and she sets them up with a meeting at the local Active Defense fusion center, where they present their case.

The board of directors of SteelCo meets as well, and decides to put a budget towards tracking this issue down. Once the DHS officials working at Active Defense look at the evidence, they connect them with a licensed Investigator firm who constructs a simple Word document that pings back to covert infrastructure created for the test. The engineering team at SteelCo fakes a new announcement of an advance in the formula configuration, and then emails it to the SteelCo executive team, where it is caught by the Chinese hacker’s implant.

A simple HTTP connection ping is made from the Beijing-based translator as they work on the new file, urgently passing it onto their customer at ChinaSteel. With this evidence in hand, the Investigator firm packages a request for additional scope to the Active Defense DHS point of contact. The DHS team looks through their history with the Chinese authorities and notes that they have been previously unresponsive to efforts to get information from this exact translation firm.

Once approved, they begin a more thorough exploration of the servers the translation team runs, using a simple phishing document and a custom 0day to penetrate the pirated Windows XP laptop the company uses. Once inside, they find evidence of years of ongoing economic espionage, for both ChinaSteel and many other “customers”.


This evidence then goes, not to US SteelCo, but to the DHS Active Defense team and then onwards to the US Agencies responsible for enforcing legal sanctions. When ChinaSteel’s management team meets later that year to discuss the yearly strategy, they implement a global policy to not use information from the Fusion center to shortcut their R&D, as it has damaged both their brand, and their bottom line.
---

Wednesday, November 30, 2016

The event horizon of software liability and cyber insurance

Software liability and cyber insurance seem inevitable but you can never reach them - they are singularities.


There's a gravity in the policy world to try to "solve systemic information security risk" via one of two horrible ideas:

  • Cyber Insurance
  • Software Liabilities

These twin black holes spin around each other, generating gravity waves that can be felt from every other part of the information security universe.

The latest foray into this quixotic adventure is Rob Knake's idea to have the Federal Govt backstop universal cyber insurance - eventually leading to massive SEC-level controls over every company in America:
These are not good ideas. Also, email spoofing is not what anyone does when it comes to phishing in 2016 - which is a weird technical detail to have in this paper at all.
As much as AIG would love to be the middleman in a massive new insurance market for which we have no actuarial data, but where the risk is pushed onto the US Taxpayer, the reality is that there are some risks you cannot insure. Fire insurance was born out of the Great Fire of London, but fire does not choose to burn down only the houses of the insured to cause maximum damage to the taxpayer the way a cyber adversary would. This system would build an additional vulnerability into the state that another state could take advantage of.

From a technical perspective, the idea is also bankrupt. As Rob himself points out, we don't know what WORKS when it comes to securing things, and even if we knew what worked in the past, we would not know that it would continue to work in the future.
The smart thing to do is not try to build a new, trusted email, but just not to trust email. I don't know why Knake is so hot on email spoofing. Also, I want to point out that when an APT does their job right, you never know you took damage. What exactly are we insuring?

And yet, we have seen a burgeoning market for security products which offer guarantees, often backstopped by insurance companies who treat it like a marketing wager, such as this one by Cymmetria. In the end, this may be as good as it gets when it comes to how insurance is going to work in this space.

The following is the most hilariously scary part of the recommendations:
Yes, nobody will have a problem with THAT clause.
The job of protecting against a systemic, massive, 9/11-style attack from a nation state in the cyber domain is rightfully the federal government's. But you can't replace a robust and realistic policy program with Flood Insurance for Cyber. When Keith Alexander went around asking banks to give him access to their incoming traffic via a black box, they all said no, and for good reasons. Rob argues that not only should we go further than a black box doing network inspection, but that this should apply to every company. It's a massive power grab and, luckily for all of us, a non-starter.

Remember, when Rob says this will encourage the adoption of best practices, what he means is "We are going to mandate how you run your networks, even though we cannot secure our own."


Monday, November 28, 2016

Unboxing "0day" for Policy People

Sometimes the bugs come out of the box.


Today's painful realization is that the very term "0day" has put this weird box around the policy brain - especially, for some reason, the European policy brain (including our British friends!) - and obscured how much danger regulation poses to all research in the security space. So I want to offer some Zen koans to help unbox you, so that when Microsoft says they're looking "widely" with their "bounty" program, you know what they mean.

Some things which are 0day, but outside the box:

  • Techniques for undetectable persistence on Windows 10
  • Ways to manipulate a heap on iOS that guarantee a certain heap layout
  • A function pointer that is always at a static location in Google Chrome and is called periodically
  • A way to send a lot of data using DNS through Microsoft Exchange servers (a generic version is sketched after this list)
  • A shellcode that does something useful on Cisco's OS
  • Ways to clean up a process so that it continues nicely after exploitation.
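
To make the DNS bullet concrete, here is a generic sketch (mine; this is textbook DNS tunneling, nothing Exchange-specific - the Exchange angle is exactly the kind of detail that stays valuable and unpublished):

    # Chunk data into DNS labels under an attacker-controlled domain. In a
    # real tunnel the victim resolves each of these names, and the lookups
    # carry the data out to the attacker's nameserver.
    import base64

    def to_dns_queries(data, domain, label_len=50):
        """Encode data as a series of DNS names under the given domain."""
        encoded = base64.b32encode(data).decode().rstrip("=").lower()
        chunks = [encoded[i:i + label_len] for i in range(0, len(encoded), label_len)]
        return [f"{i}.{chunk}.{domain}" for i, chunk in enumerate(chunks)]

    for name in to_dns_queries(b"quarterly sales projections", "attacker-owned.example"):
        print(name)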

If you think "Oh, they have promised not to regulate knowledge in general, just dangerous exploits!" then think again. There are many clauses in the Wassenaar agreements and every other proposed regulation (looking in Ari Schwartz's direction here) that seek to control exactly these things. Hopefully this post helps clarify why every security researcher had a big freakout with the Wassenaar proposals.


Wednesday, November 23, 2016

CIS VEP Panel Commentary


You can be super smart and not understand CNO operational issues because of a lack of experience in the area. And you can be smart and have ethical issues with the very idea of doing CNO. Above is a link to the CIS panel released last week on "Government Hacking" that discusses the VEP where both are on display.

It's hard to address the "ethical" issues around SIGINT collection that make people unhappy. I find it disturbing (as should you) that Ari Schwartz and Rob Knake and the Obama White House decided to do what they did with the VEP, sacrificing years of effort to maintain operational advantage by our IC, because of vague ethical issues with something they don't even understand fully. In the video, you can see Ari's face panic when the question comes in about what a vulnerability "Class" is, something we've written about on this blog. Sinan Eren answers it, much to Ari's relief, because Ari has no idea what a vulnerability class is except in the most general sense. He couldn't name them if his life depended on it. AND LIVES DO DEPEND ON IT.

It's also funny in the video to see Ari's look of surprise when he hears Sinan say "Vulnerabilities don't matter from a defensive perspective - focusing on mitigating factors is what makes the difference from a software security perspective". You can see an epiphany almost start to form in his head, then fade away as he returns to his blind ideology.

Inexperience with operational matters is something we can point out clearly though. You can always tell someone is inexperienced when they say things like "How long should we hold a vulnerability for?" or "You don't even need 0days to attack things!" That second one is true, except against hard targets, or when you cannot afford to get caught. Does that sound like the exact position the IC is in? Yes, yes it does.

This is the probabilistic game every good operator has in their heads. This is why it's not simple. Like a scuba diver measuring their outgassing, a good CNO OPSEC person is also measuring their exposure to other operations across their entire toolchain at all times.

The reason hackers love 0day is not always the high success levels. It's the protection against detection by intermediaries or the target themselves. Likewise, it takes a very long time - sometimes years - to properly test an exploit in the wild. When people say "How long have you had this bug?" the answer from a properly trained operator is always "Not long enough to be comfortable with it".
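
A sketch of the arithmetic behind that instinct (the numbers are mine and purely illustrative):

    # The probabilistic game: if each operation that reuses a piece of
    # your toolchain has some small chance of burning it, exposure
    # compounds across every mission that shares it.
    p_burn_per_op = 0.03      # assumed chance a single op burns a shared tool
    shared_ops = 40           # missions that reuse the same toolchain

    p_toolchain_exposed = 1 - (1 - p_burn_per_op) ** shared_ops
    print(f"chance the shared toolchain is blown: {p_toolchain_exposed:.0%}")  # ~70%

    # Which is why a good OPSEC person is always "measuring their
    # outgassing", and why handing a bug to a vendor is never a free move.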

The saddest part of the VEP video was when Ari says "Just because we've given it to a vendor doesn't mean it's blown!" Everyone in the IC was headslapping as he said that. It demonstrates a complete lack of understanding of how operations are protected that should not be the case in someone making policy that affects the IC.

But it comes out, during the video, that Ari believes we should control the whole vulnerability "market". That was his real goal with the VEP. And that means everyone. It means Ari thinks the entire research community should follow some disclosure law he and his friends think up and ram through Congress, without any understanding of the impact of his "Ethics" on the rest of us. It's the same as the Wassenaar Agreement. And yet the EFF is still trying to support him on this one. And that too, is baffling.


Sunday, November 20, 2016

you have to love it

No matter how many years it's been since I left - two decades now - people still look only at this one thing on my resume.

I want to spend a couple minutes pointing out the massive cultural differences between hackers and everyone else in this industry. Because as I crawl around in the policy world I sense these wide gulfs. The first one I sense is about books. Because hackers, despite coming from many backgrounds, share a common philosophy learned entirely from D&D, Neuromancer, Asimov, Heinlein, Snow Crash, Cryptonomicon, The Long Run, Dune, and all the other hard core science fiction they nestled in as they began the ascent up the mountain towards internalizing a difficult discipline.

Because of this, they are universally paranoid, atheistic, libertarian. And when the policy world tries to define norms without understanding the built-in philosophy of the domain, they run into fierce resistance from the denizens of this space, a lot of it only understandable if they have read the curriculum. I was astonished when Sue and other people I know in the legal/policy sphere hadn't read Snow Crash - I assumed EVERYONE had read it, because in my world, everyone has.

There are apparently moves to fire Rogers as DIRNSA. "Huge if true", as they say. I can't even remember any hints of this kind of action ever happening before.

NSA is far too important to this country to ruin, a shining jewel of technological prowess. We should be proud of it. It has gone through some hard times, most of which were not the fault of the organization. VEP is an example of our White House being ashamed of the NSA's competence. How can you expect to keep people if you are ashamed of them?

That then, is all we should ask as the American people of our next DIRNSA. You don't have to be a geek, or have read any science fiction. But it doesn't hurt. You have to join the culture you are becoming a part of. You cannot lead the NSA without falling in love with it.

And if you are in the cyber policy space as a lawyer, make the effort to read the canon of cyber war, and we hackers will read your books about more recent history.

Many of these are free. :)

<live list follows>

  1. The Long Run PDF 
  2. Snow Crash AMZN
  3. Cryptonomicon AMZN
  4. Dune AMZN

Saturday, November 19, 2016

The State of Cyber Norms

It's worth pointing out that despite the insanely optimistic musings (1,2,3) from everyone in the State Dept about the progress of the international relations world on cyber norms, the reality is a disaster.

The shining light, which State and the Obama administration completely get credit for, is the dissolution of the Chinese State economic espionage strategy. But that ignores the overall picture:

  • The UN GGE process misses clear players (Russia/China) and has at the root the issue of nobody agreeing on any of the definitions of the words they use
  • NATO's Tallinn process has many key problems (i.e. it is largely disconnected from the realities of the domain's technical characteristics) 
  • The US's transparency about our SIGINT process has been met with nothing from its European partners, who continue to batter us with hypocritical cries about privacy post-Snowden
To put it in the clearest possible terms: nobody at State had the foresight to draw a "don't mess with our election" line with the Russians ahead of time, which meant we had to get into a last-minute massive escalation game with them instead. In addition to the lack of progress on any realistic front from our more traditional international efforts, this is the kind of total failure that needs to be publicly recognized.

Here is an example paper exploring what collateral damage might mean. Imagine trying to apply international law or norms to a domain where you don't even know what collateral damage is yet! 



Tuesday, November 8, 2016

New Agencies We Need in the Next Administration



National Cyber Forensics Agency

We need an agency in charge of decryption of phones and analysis of data. It's not just about managing the decryption tools themselves, which are going to remain secret and not handed out to local PDs and FBI offices, but about building the know-how to do forensics and data minimization in a robust way that protects US civil liberties.

This is going to cost a lot more money than I think people are expecting, but we have to do it, and the longer we wait, the more expensive it will be to bootstrap.

National Active Defense Agency

Marketing buzz has ruined the term "Active Defense". But "hack-back" is unpalatably honest. However, if you keep a careful eye on the policy groups, they are quickly finding ways to lay the groundwork for an agency that uses private dollars to hack back against Chinese/Russian C2, and legalize active measures against botnets and worms such as MIRAI.

This is not as hard legally and politically as people sometimes make it sound. You just run it like a penetration testing company, with scope and authority from DHS and money and talent from the private sector. And you make the State Dept sell it overseas, because that's their job and we work with the cyber norms we have, not the ones we want, sometimes.

National CISO

CISO is one of those jobs that destroys people. Thankless, and with the cloud of doom sticking to your pant legs like a toddler's poo everywhere you go. But what we need is not centralization, but clarity of vision and of quality and, frankly, someone to give our executives in Government the straight dope on what they can and can't do with regards to their own IT infrastructure. We need a salesperson who can sell a unified government security fabric to all of the many business units that make up the Federal Government. So far we've concentrated on finding bureaucrats with authorities.

Every big bank has the same federated business model as the USG when it comes to how this sort of information security and IT infrastructure needs to be run. We need to copy their DNA and figure out how to do this, if not right, at least a lot less wrong.

Saturday, November 5, 2016

It's Raining VEP!

This is the technical community's feeling on these papers, summed up in a tweet.

And this is it summed up in...another tweet.


There's this weird trend in the policy world to have a group of law students get together, read a bunch of Wired and NYT articles on all things cyber, quote a few of their own papers on the subject, and call it "research". That's like doing lemur research by looking at the drawings a first grade class does after they watch Madagascar. If they really wanted to do research on the 0day market, they should find some 0day and then sell it on the "market"! Can't? Then STFU about all things vuln markets!

But let's get more substantive with a critique of the latest pro-VEP paper (this time by @jason_healey and co), starting with the most ridiculous recommendation in any VEP paper so far!
Yes, because opportunities just wait around for the VEP process to complete.

Just to revisit how hacking works: your shock troops are small teams building and deploying exploits faster than their hard targets can adjust. Custom-built is what works best for both return on investment and OPSEC. Policies that apply to "every vulnerability" ignore the vast differences in how the Government finds and uses vulnerabilities across its entire mission set. Do we have an entire-Government-wide policy on shovels?

Aside from the data being of "who the fuck knows" accuracy, and the contents of the table being entirely conjecture by law students who've never touched a vulnerability, I can't imagine the purchasing officer who is going to spend 4.4M USD on things they don't get to use. In what world do people think that's how the system works? SEQUESTRATION IS A REAL THING. THIS TABLE IS NOT.

Look, not for nothing, but you need lots more bugs than 45 to get the job done, and beyond that, to get the job done safely. Your instinct should tell you that a team with 8 Solaris implants (cf. the latest EQGRP leak) has more than 15 bugs a year. THINK ABOUT IT: 8 DIFFERENT SOLARIS IMPLANTS. IMAGINE THE REASONS WHY.
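
To put some toy numbers behind that instinct (every figure below is my own assumption, not something from the leak or the paper):

    # A toy attrition model for why "45 bugs" is not a lot. Bugs die to
    # patches, rediscovery, and operational burning, and hard targets need
    # coverage across many platforms and versions.
    platforms = 15              # OSes, browsers, appliances you must cover
    bugs_per_platform = 3       # chain depth, e.g. remote + LPE + persistence
    annual_attrition = 0.4      # assumed fraction of the inventory lost per year

    working_set = platforms * bugs_per_platform
    replacements = working_set * annual_attrition
    print(f"bugs you need in hand at any moment: {working_set}")          # 45
    print(f"new bugs per year just to tread water: {replacements:.0f}")   # 18

    # And that is before redundancy - eight different Solaris implants
    # exist because you assume some of them are already blown.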


THIS is just a description of the ethical argument in favor of unilaterally deciding not to do SIGINT. Not one I think we should use when designing government policy.
Here's what it comes down to: Unrealistic expectations of some sort of ethical purity argument as applied to SIGINT policy. The icing on the cake is the below feeling of betrayal when reality steps in.


I guarantee you that Obama, who is involved in our offensive operations more than any previous President, does not intend what VEP supporters think he intended. The key is the phrase "or the technology sector expects". That's what this is about. The Tech Sector has expectations. They are not being met, nor should they be.

Friday, November 4, 2016

Rule 41 Panel Discussion


So the link above is to the "Government Hacking: Rule 41" panel held on Nov 2, 2016, with Granick, Salgado, Cocker, and several other lawyers heavily involved in the space.

I want to sum this up with two clear things:


  1. Whether you think the Rule 41 changes are substantive (and hence should be delayed and addressed by Congress) or simply a procedural change that should therefore be passed immediately to address major gaps appears to depend entirely upon who is paying you to argue which side, as Granick points out.
  2. The Government (pro Rule Change, particularly Anello at ~1:40) side appears to intentionally confuse the issue. On one hand, they state that it doesn't matter that most of the computers they will be hacking under this order are international, because that's handled by multilateral treaties and cannot be handled with Rules or even by Congress. They acknowledge that there's no current way to handle the international issue by law or within the current MLATs. Not only that, there's no way we'd be comfortable with any other country doing what we're proposing to do under this Rule. On the other hand, Allison Bragg claims that this does not change that we need the rule - as if there is some way we can make "anonymous computers" definitively domestic pursuant to our searches.

Straight up, Allison Bragg claims that the Rule change says that "if you don't know where a computer is, if it is domestic, you'll be able to apply Rule 41 to it". That makes NO SENSE from a technical perspective. And it is something that should be called out a bit further by Granick, Cocker and the others on the panel.

It would not be incorrect to state that Bragg thinks "We need this, no matter what the issues with it are, because otherwise we can't prosecute anything". This is probably short-sighted, especially considering this Rule change is not limited to terrorism or child porn cases, but applies across the whole board of crimes.

There are a lot of other issues in this area that are not REALLY Rule 41 change-related (particularity concerns with the 4thA, etc.) but are slightly touched upon by the panel, which is full of super-strong legal minds who are fun to listen to. Although if you are reading this, you probably don't have the time. :)

Thursday, November 3, 2016

The Ethical Argument against VEP


I promised myself I wouldn't write about VEP anymore on this blog. But we have reached stage 2 in the argument and it's important to note that originally everyone was claiming the VEP was necessary as a function of public safety, to coordinate defense against systemic risks that were pertinent to our national structure. Now they say it is an ethical issue, as Ari eventually did on stage at CyCon.

More correctly, the idea of the Government holding exploits, and in particular the NSA, makes people feel "icky" and when you talk to congressional staffers they don't trust the NSA to make decisions in the best interests of the American people with the exploits they do have and use. People who support the VEP rationalize their feelings of ickyness and distrust into an "Ethics problem".

Mozilla and Microsoft and Google and every other large software company would love to make it seem like the Ethics of the issue basically requires that the government get out of the business of having and using exploits at all. But they don't secure their systems because of ethical issues - they secure them because of market forces.

From a purely ethical standpoint, who knows: it may be that SIGINT is an unethical thing to ever do. Or it may be that it is a proportional and reasonable response to our national security needs. If we want to get out of the SIGINT business, we should just say so.

To put this in concrete terms for Jeff: Going through with the VEP is eventually going to require strict export controls on what you are allowed to say at Defcon. The ethical judgments on that seem to point towards a less free society.

Wednesday, November 2, 2016

Quick note on Tallinn and International Law of Conflict issues in Cyber

Check out what the Marine Colonel Gary Brown says at minute 28 of this CSIS video from last year. Some of the best explanation of Tallinn's flaws I've ever heard. Way better than how I say it. :)

https://www.c-span.org/video/?400091-1/discussion-laws-war

One interesting thing that I don't know whether International Law addresses is that cyber splits the question of whether a State is responsible from the question of WHICH State is responsible.

Tuesday, November 1, 2016

The GWU Active Defense Report is a Secret Argument for Cyber Letters of Marque!

Let's talk about the Active Defense Report from GWU, starting with who wrote it. But, not to bury the lede: this paper is all about an argument FOR CYBER LETTERS OF MARQUE! :)


It's always a bad sign when you have Jane Holl Lute talking about anything computer-related. She spent her entire BlackHat talk saying how little she knew about the subject. This tells me they picked people by who had titles, not knowledge. But they also have Tim Evans and Nate Fick and Stewart Baker and other people who clearly DO have experience in the subject.

The first thing the paper does is, of course, try to define active defense. If we go over the history of the term, it was synonymous with hacking back, and occasionally with doing some other pretty obviously illegal things, until the marketing droids at Crowdstrike, who were using it daily, got tons of heat and had to walk it backwards about ten steps by including a lot of stuff that is not at all active defense in their marketing material and public statements.

This paper is no different.

On the left, stuff that is in no way related to active defense! On the right, stuff that is clearly illegal! For example, "Hunting" is "looking at logs to find patterns". Let's not fall for our own marketing BS.
The smart thing to do with this paper is ignore entirely the Executive Summary where they suggest about a million things for the government to do, including the executive branch, congress, various government agencies, etc. This would be a tremendous effort of ungodly proportions! And I think it occludes the important nature of this paper, which is buried far below.

This section is where the paper starts to acknowledge the issues at hand: Government is failing to provide a protective security umbrella.

This paper drives relentlessly towards one conclusion: We need a new model for Cyber Letters of Marque. It starts subtly.

Note the highlighted section that we call "Foreshadowing".

The paper goes a bit more explicit with why this is important in the next few sections.



Let's talk about that example, because it is clearly "Hack Back", the real and only definition of active defense. But the Government is acknowledging that it uses selective prosecution as its current plan for Cyber Letters of Marque, and needs a new, more explicit model, which it goes on to explain as essentially the exact thing we have in this blog post over here.

The snippet from the Appendix (written by the Center for Democracy and Technology) shows how most people in Government feel about these ideas (uneasy), but they will have to get used to it.

What should make us MORE uneasy, as the paper presents in the conclusion, is the idea of doing nothing, and allowing the norm of "Not acting unless we feel like it" to be the international rule of the road. The reason cyber letters of marque are a good idea is that they explicitly address what is clearly already happening, while allowing for resources from private companies to directly solve the problems private companies are having.

While private companies are not going to be directing hacking against C2 infrastructure in this model, they will be paying for it, and getting their priorities met. This addresses a significant gap in nation-state sovereignty, and the authors of the paper argue that it needs to happen as soon as possible. This paper sneaked these ideas in there in diplomatic terms, which is a very interesting development, to say the least.