Friday, January 4, 2019

VEP: Centralized Control is Centralized Risk




When we talk about the strategic risks of a Vulnerability Equities Process, people get absorbed in the information security risks. But the risks of any centralized inter-agency control group also include:

  • Time delay on decision making. Operational tempo is a thing. And you can't use a capability and then kill it on purpose on a regular basis without massive impact to your opsec, although this point appears to be lost on a few former members of the VEP. 
    • For teams that need a fast operational tempo (and are capable of it), any time delay can be fatal. 
    • For teams that don't operate like that, you either have to invest in pre-positioning capability you are not going to use (which is quite expensive) or further delay integration until the decision about whether to use it has been made.
  • One-size-fits-all capability building. While there may be plenty of talented individuals in the VEP process, it is unlikely they are all subject matter experts at the size and scale that would be needed for a truly universal process. I.e., the SIGINT usefulness of a SAP XSS may be ... very high for some specialized team.
  • Having multiple arms allows for simpler decision making by each arm, similar to the way an octopus thinks. 
  • Static processes are unlikely to work for the future. Even without enshrining a VEP in law, any bureaucratic engine has massive momentum. A buggy system stays buggy forever, just like a legacy font-rendering library. 
It may not even result in different decisions than a distributed system. For example: Any bug bought/found by SIGINT team is likely to be useful for SIGINT, and retained. Otherwise your SIGINT team is wasting its time and money, right?

Likewise, any bug found externally, say through a government bug bounty, is likely to be disclosed.

Here's a puzzle for you: What happens if your SIGINT team finds a bug in IIS 9 which is an RCE, but hard to exploit, and they work on it for a while and produce something super useful. But then, a bit later, that same bug comes in through the bug bounty program DHS has set up, but as an almost useless DoS or information leak? How do you handle the disparity between what YOU know about a bug (exploitability, for example) and what the public knows?
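To make the puzzle concrete: the same underlying bug carries two very different assessments depending on who is looking at it. Here's a minimal sketch of that disparity - every name and severity value is hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class BugAssessment:
    """One party's view of a vulnerability."""
    source: str        # who produced this assessment
    impact: str        # "RCE", "DoS", "info-leak", ...
    exploitable: bool  # does this party have a working exploit?

# Two hypothetical views of the SAME underlying bug:
internal = BugAssessment(source="SIGINT team", impact="RCE", exploitable=True)
public = BugAssessment(source="DHS bug bounty", impact="DoS", exploitable=False)

def views_diverge(a: BugAssessment, b: BugAssessment) -> bool:
    """True when the two parties would likely make different equities calls."""
    return a.impact != b.impact or a.exploitable != b.exploitable

# The VEP has to reconcile these, without revealing what the internal
# assessment knows that the public one doesn't.
print(views_diverge(internal, public))  # → True
```

The point of the sketch is that any per-bug decision process needs a rule for which assessment wins - and picking the internal one leaks information about your capabilities.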

Outputs

This leads you into thinking - why is the only output available from the VEP disclosure to a Vendor? Why are we not feeding things to NATO, and our DHS-AntiVirus systems, and building our own patches for strategic deployment, and using 0days for testing our own systems (aka, NSA Red Team?). There are a ton of situations where you would want to issue advisories to the public, or just to internal government use, or to lots of people that are not vendors.

During the VEP Conference you heard this as an undercurrent. People were almost baffled by how useless it was to just give bugs to vendors, since that didn't seem to improve systemic security risks nearly enough. But that's the only option they had thought of? It seems short-sighted.




Thursday, December 20, 2018

VEP: Handling This Patch's Meta Changes

We may be about to enter the Healer meta in Cyber War for Season 3. What does that mean for you?

The Meta Change


Google caught an 0day this week, and Microsoft issued an out-of-band patch. Kaspersky has been catching and killing 0day as well recently, at a much higher pace than you would expect, something over one a month. If that doesn't surprise you, then it's probably because you're accustomed to seeing defensive companies catch only implants or lateral movement and then helplessly shrug their shoulders when asked how the hackers got in.

But we're about to enter a healer-meta. In Overwatch, that is when the value of defensive abilities overwhelms the damage you can do over time. So your offense is forced into one of two options:

  • High Burst Damage Heroes (such as snipers, Pharah, ultimate-economy playstyles, etc.)
  • Diving healers
In Cyberwar what this means is that when effective defensive instrumentation gets widely deployed (and used) attackers are forced to do one of three things:
  • Use worms and other automated techniques that execute your mission faster than any possible response (burst damage)
  • Operate at layers that are not instrumented (aka, lower into firmware, higher into app-level attacks)
  • Modify the security layers themselves (aka, you dive the healer by injecting into the AV and filtering what it sends)

VEP

There were two arguments at the Carnegie conference around what you should do in a VEP in a Healer Meta. One was that since you knew bugs were going to get caught, you should make sure you killed them as soon as possible after use (which reduces the threat to yourself from your own bugs). The other was that you were going to need a huge stockpile of bugs, and that you should stick to only killing bugs that you knew for sure HAD gotten caught, and only if your attribution was already broken by your adversary.

These are very different approaches and have vastly different strategic needs.

Of course, the third take is to avoid the problem of 0day getting caught by focusing on the basics of countering the meta as outlined above. But a nation state can (and will) also use other arms to restrict the spread of defensive technology or erode its effectiveness (i.e. by attacking defensive technology supply chains, using export control to create delineations between HAVES and HAVE NOTS, etc.).






Wednesday, December 19, 2018

The Brussels Winter VEP Conference


So recently I went to a conference on vulnerability equities which was held under the Chatham House Rule, which means I can't say WHO SAID anything or who was there. But they did publish an agenda, so your best guess is probably right if you've been following the VEP discussion.

Anyways, here (in click-bait format like Jenny Nicholson) are my top three things that are literally and mathematically irrational about the VEP, as informed by the discussion at the conference:

1. A lot of the questions you are supposed to answer in order to make the VEP decision are 100% unknowable.

Questions typically include:
  • How many targets use a particular software?
  • How many friendly people use a software platform?
  • Will the Chinese find this bug easily or not?
  • etc.

Some panel members thought a partial solution might be for every technology company to give all their customer survey information to the government, which could help answer questions like "Do we need to protect or hack more of the people who are vulnerable to this bug?" This is a bad idea, and you could sense the people in the room laughing internally at it, although it is partially already the goal of Export Control regulations.

Needless to say, if you are making your decisions based on a bunch of questions you have NO ANSWERS TO, you are making RANDOM decisions. And some of the questions are obviously unknowable because they involve the future. For example, the answer to "Do our opponents use the latest version of Weblogic?" is always "not at the moment but the future is an unknown quantum interplay between dark energy and dark matter that may decide if the universe continues to expand and also if the system administrator in Tehran upgrades to something vulnerable to this particular deserialize issue!". An even better example is the question of "How hard is this bug for the Chinese to find?" to which if you KNEW WHAT BUGS THE CHINESE COULD FIND IN THE FUTURE you would not be worrying about CyberWar problems so much as how to deal with the crippling level of depression that happens when you have a brain the size of a planet.

Although ironically the VEP will tell the Chinese how hard it is for US to find particular bugclasses, so we have THAT going for us at least.


2. Voting does not resolve equities issues. One of the panelists suggested that if you take every bug, rank its usefulness from 1 to 10, then take its negative impact and rank that from 1 to 10 as well, you can draw a nice diagram like the one below.

Then (they posit) you can just look at the equities decisions you've made, and draw a simple line with some sort of slope between the yay's and the nays and you've "made progress" (tm).



Except that in reality, every number on the graph is somewhere on the axis between "would stop World War III if we could use it for SIGINT" and "would end all commerce over the Internet as we know it, resulting in the second Great Depression". I.e., every number is zero, infinity, or both zero AND infinity at the same time, using a set of irrational numbers that can only be graphed on the side of a twelve-dimensional Klein bottle. Voting amongst stakeholders does not solve this fundamental unit-comparison issue, to say the least.
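Part of why the panelist's scheme is seductive is that it's trivially easy to state in code. A minimal sketch of the "score both axes and draw a separating line" idea - all bug names, scores, and the slope are invented for illustration:

```python
# Each bug gets (usefulness, harm) scores, both nominally 1-10.
bugs = {
    "bug_a": (9, 2),    # very useful, low harm -> "obviously" retain
    "bug_b": (2, 9),    # barely useful, high harm -> "obviously" disclose
    "bug_c": (10, 10),  # "stops WWIII" AND "ends Internet commerce"
}

def decide(usefulness: float, harm: float, slope: float = 1.0) -> str:
    """The panelist's linear separator: retain above the line, disclose below."""
    return "retain" if usefulness > slope * harm else "disclose"

for name, (u, h) in bugs.items():
    print(name, decide(u, h))
# The separator only works if usefulness and harm are commensurable units.
# For bug_c the tie-break is pure arbitrary convention: nothing in the
# scores tells you whether retaining or disclosing is catastrophic.
```

The sketch makes the failure mode visible: the decision for bug_c flips entirely on the arbitrary slope and tie-break, which is exactly the unit-comparison problem voting can't fix.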

What if a bug has no use, but the bugclass it belongs to is one you rely on for other ops? The complications are literally an endless Talmudic whirlpool into the abyss.

For example, I am continually mystified by certain high-level officials' misunderstanding of the basics of OPSEC when you give a bug out. They seem to think that you can USE a bug operationally before you go through the VEP, and then decide to kill it, and not suffer huge risks with OPSEC (including attribution). They often justify this with the idea that "sometimes bugs get caught in the wild or die by themselves", which is TRUE. In that sense, yes, every operational use of an exploit is an equities decision - one that you take for OPSEC reasons. Which is why GOOD OPERATORS use one whole toolchain per target if possible. And if you think that's overkill, then maybe you've underestimated the difficulty of your future target set.

Also note that no one in government policy wants to use this process to measure the impact of the VEP over time - although I'm not sure what units you would measure your operational loss in, other than human lives? Likewise, there's only one output from the VEP, "Give bug to Vendor", as opposed to a multi-output system including "Write and publish our own Patch", which seems like a better choice if you want to have options for when you disagree with a vendor's triage or timeline?

3. No government in Europe is dumb enough in this geopolitical environment to do the VEP for real. It may happen that every Western government signs or sets up some document that assigns a ton of unanswerable rote paperwork per bug to their already small technical and cleared teams, if for no other reason than that Microsoft and Mozilla and the Software Alliance all have legitimate soft power that can influence public policy. I mention them in particular because they funded this conference, and following the money is a thing I once heard about. As a positive bonus note: VEPs are great cover for killing OTHER people's bugs once you catch them in the wild.

But the EU technical teams were also there at the conference, along with the government policy people responsible for getting their cyber war game from D-level to A-level. You can imagine the post-Snowden meetings all across Europe, in rooms with no electronic devices, where elected officials looked at their teams and said "What exactly do they mean, 'SSL Added and Removed Here'?!? We need to 'Get Gud', as the teens are saying. Pronto."

Does anyone realistically think that they're going to hamstring themselves? Because I talked to them there and I'm pretty sure they're not going to. (insert SHRUG emoji!)



And here's the actual strategy implication that they know, but don't want to say: your best people will leave if you implement the VEP seriously. There are those Sardaukar for whom it is not about money, who are with you for life, as long as you have a mutual understanding that their work is on mission, all warheads in foreheads. And to them, the VEP is anathema.

And then there are people out for fame and money, and those people are going to get stolen by a random company anyway, because why would they ever stay and be a glorified bug bounty hunter?

I mean, every country is different. It's possible I'm misjudging cultures and talent pools. Or not. But if you are running a country's VEP program, you have to be pretty confident that I'm wrong about that to move forward. This is the kind of thing you'd want to start asking about in your exit interviews.

Oh, and as a final note: One of the submitted talks to INFILTRATE required an equities decision. Cool 0day, very old, and you should come and see the talk even though we haven't officially announced it yet. :)





Thursday, November 29, 2018

The false path of ReIntermediation

Salsa Shark. . . 

I sent a recent paper on information operations over Twitter to some people for feedback and one of the comments was the following:
I also think there’s a meta question that needs to be answered. 
You say that censoring inauthentic accounts is not the way to go for various reasons...but you still have an info ops problem given that inauthentic accounts are still publishing garbage getting likes, retweets, shares, and driving online activity/conversations. How does your Jiu Jitsu solution remedy that?
And here's the larger picture: I never think re-intermediation is a valid policy solution, but it's the most common reflex in government and policy bodies. The internet was a tidal wave of disintermediation - in technology, in society, in geopolitics. It's hard to explain that, yes, the Russian Government gets to talk to all your people directly without you being able to do anything about it, or that people can talk to each other in encrypted formats now without you being able to listen, or that money or music is going to move around without any real mediation by governments.

The instinct is always to try to force social media companies to just get REALLY good at banning Russian propaganda networks, or legally enforce impossible encryption regulations, or somehow enforce a global copyright regime. To re-intermediate, in other words. It's always the wrong direction. You can't roll back the tide.





Tuesday, November 20, 2018

A Question of Trust

For those of you who have not read Eugene Kaspersky's latest piece, it is here:
https://www.forbes.com/sites/eugenekaspersky/2018/11/19/a-question-of-trust/

I have pasted the most relevant section below.


While obviously Kaspersky's transparency initiative is a good thing, and probably something that should be emulated by other companies in the field, I think it's worth taking a step back to see what metrics you can judge its design on for effectiveness. Many portions of the stated initiative don't seem to be relevant in a security sense - they are for marketing purposes, as cover for people who want to use Kaspersky software and are looking for an excuse.

Some questions, a positive answer to any one of which is fatal to the goals of a Transparency initiative:

  • Can Kaspersky update the software of only one computer, or write a rule that would run on only a subset of computers? 
  • Is the data from computers in France still searchable from Moscow? (And hence, subject to Russian law?)
  • Could Kaspersky install a NOBUS backdoor which would get through the review of the Transparency team in Switzerland and get installed on international customers?
I think the answer to these questions is probably "yes".

The hard problem here is that the goal of a "Trust" initiative of this nature is to be able to protect your customers while provably being unable to see what they are doing, or to target them in any way. The most obvious solution would be for Kaspersky to spin up an entirely independent operation to handle the international market, at the cost of any economies of scale (and also at a reaction-time trade-off). Even that might not solve the third question, although at a certain point you have to admit that you are setting a bar high enough that software from extremely risky development locales is not going to clear it (which sucks for Kaspersky, but is an extremely realistic risk profile, depending on who you talk to!).

As a final note, this talk by a Kaspersky researcher is fantastic:




Tuesday, November 13, 2018

The true meaning of the Paris Call for Trust in Cyberspace


I often find it hard to explain anything in the cyber policy realm without pointing out how weird an idea "copyright" is. The easiest way to read Cat's article in the Washington Post is that the PR minions of most big companies want to make it seem like some sort of similar global controls over cyber vulnerabilities and their use are a natural thing, or at least as natural as copyright. In some sense, it's a coalition of the kinda-willing, but that's all the PR people need, since this argument is getting played out largely in newspapers.

But just to take one bullet point from the Paris text:
  • Develop ways to prevent the proliferation of malicious ICT tools and practices intended to cause harm;

What ... would that mean? you have to ask yourself.

You can paraphrase what software (and other) companies want, which is to find a way to ameliorate what in the industry is called "technical debt" by changing the global environment. If governments could assume the burden of preventing hacking, that would allow for greater risk-taking in the cyber realm by companies. I liken it to the credit card companies making it law enforcement's problem that they built an entire industry on the idea of everyone having a secret number small enough to memorize that you would give to everyone you wanted to pay money to.

From the WP article:
This could make way for other players on the global stage. France and the United Kingdom, Jordan said, are now emerging as leaders in the push to develop international cybersecurity norms. But the absence of the United States also reflects the Trump administration’s aversion to signing on to global pacts, instead favoring a transactional approach to issues, Singer said.

It's not so much "transactional" as it is "practical and workable", because to have a real agreement in cyber you need more trust than is typical of most arrangements. This is driven by the much reduced visibility into capabilities that is part and parcel of the domain, which frankly I could probably find a supporting quote for in Singer's new book :).

Aside from really asking yourself what it would MEAN IN REAL PRACTICAL TERMS for humanitarian law to apply to the cyber domain, you also have to ask yourself if all the parties in any particular group would AGREE on those meanings.

And then, as a follow up, ask yourself what the norms are that the various countries really live by, as a completely non-aspirational practicality, and especially the UK and France.



Wednesday, October 24, 2018

Book Review: LikeWar (Peter W. Singer, Emerson T. Brooking)

TL;DR


Buy it here!

Summary


There are some great stories in this book. From Mike Flynn's real role pre-Trump Admin, to a hundred other people's stories as they manipulated social media or tried to prevent it in order to have real world effects. This book draws a compelling narrative. It's well written and it holds your interest.

That said, it feels whitewashed throughout. There's something almost ROMANTIC about the people interviewed through much of it. But the particular take the authors have on the problem illustrates an interesting schism between the technical community and the academic community. For example, in the technical community, the minute you say something like this, people give you horrible looks, as if you were giving a Physics lecture and somehow wanted to tie your science to a famous medieval Alchemist:

Highlight (yellow) - Location 307

Carl von Clausewitz was born a couple of centuries before the internet,
but he would have implicitly understood almost everything it is doing to
conflict today.
What's next? OODA Loops?!? Sheesh.

In a way though, it's good that the book started this way, as it's almost a flare to say exactly what perspective the book is coming from.

Two other early pieces of the book also stuck out:
Highlight (yellow) - Location 918

For it has also become a colossal information battlefield, one
that has obliterated centuries’ worth of conventional wisdom
about what is secret and what is known.
And:
Highlight (yellow) - Location 3627

And in this sort of war, Western democracies find themselves
at a distinct disadvantage. Shaped by the Enlightenment,
they seek to be logical and consistent. Built upon notions of transparency,
In other words: this book has an extreme and subjective view of government and industry, and an American perspective. Its goal is often less to explain LikeWar than to decry its effects on US geopolitical dominance. We have a million cleared individuals in the US. Are we really built on notions of transparency? This would have been worth examining, but it does not fit with the tenor of the authors' work here.

The book does bring humor to its subject, though, and many of the stories within are fleshed out beyond what even someone who lived through them would remember, such as a detailed view of AOL's early attempts to censor the Internet:

Highlight (yellow) - Location 4203

AOL recognized two truths that every web company would
eventually confront. The first was that the internet was a teeming
hive of scum and villainy.

Missing in Action

That said, anyone who lived through any of the pieces of this book will find lots missing. Unnoticed is the outsized role of actual hackers in the stories that fill this book. It's not a coincidence where w00w00 or Mendax ended up, although it goes unmentioned. And the role of porn and alternative websites is barely touched upon. How the credit card companies have controlled Fetlife would be right in line with what this book should cover, yet I doubt the authors have heard of FL (or, for that matter, could name the members of w00w00). Nor is Imgur mentioned. It also goes unrecognized that the same social network the intelligence community uses to publish its policies (Tumblr) is 90% used for browsing pornography.

Clay Shirky, the first real researcher into this topic, who gets one mention in the book (iirc), pointed out that whenever you create a social network of any kind, it becomes a dating site. This is one of those axioms that can produce predictive effects on the subject matter at hand. Sociology itself has been revolutionized by the advent of big data from Big Dating. The very shape of human society has changed, as the spread of STDs has pointed out. And the shape of society changes War, so this book could be illustrating it.

At its most basic, examining social networks involves looking for network effects - the same thing that drives most dating sites to create fake profiles and bots so they can convince people to pay for their service. These are primal features of the big networks - how to get big and stay big. As Facebook loses relevance, Instagram gains it, and as Instagram loses it... We didn't see any of this sweep in the book. Some topics were too dirty, perhaps?

Conclusion


Like many books coming out, this book is a reflexive reaction to the 2016 election and nowhere is that more evident than in the conclusion.

Some statements are impossible to justify:
Highlight (yellow) - Location 4488

Like them or hate them, the majority of today’s most
prominent social media companies and voices will
continue to play a crucial role in public life for years to come.
Other statements are bizarre calls for a global or at least American censorship regime:

Highlight (yellow) - Location 4577

In a democracy, you have a right to your opinion, but no
right to be celebrated for an ugly, hateful opinion, especially
if you’ve spread lie after lie.
The following paragraph is not really true, but also telling:
Highlight (yellow) - Location 4621

Of the major social media companies, Reddit is the only one that preserved the known fake Russian accounts for public examination. By wiping clean this crucial evidence, the firms are doing the digital equivalent of bringing a vacuum cleaner to the scene of a crime. They are not just preventing

The authors, like many people, see the big social networks as criminal conspirators, responsible for a host of social ills. But we have "Taught the Controversy" when it comes to evolution in our schools for generations now, so it's hard to be confused about why the population finds it hard to verify facts.

Instead of trying to adjust our government and society to technological change, we've tried to stymie it. Why romanticize the past, which was governed by Network News, the least trustworthy arbiters of Truth possible? We've moved beyond the TV age into the Internet age, and this book is a mournful paean to the old gods, rightfully toppled by disintermediation.

Still worth a read though.