Tuesday, October 20, 2020

Projecting Cyber Power via Beneficence



So many articles come out decrying Europe's inability to create another Google or AWS or Azure, or even a DigitalOcean, Oracle Cloud, IBM Cloud, Rackspace, Alibaba, or Tencent. Look, when you list them all out, it's even more obvious how far behind Europe is in this space compared to where it should be.

And of course, projecting power via regulatory action only gets you so far. Governments like to negotiate with other governments, and you see this in cyber policy a lot, but it's worth mentioning that the European populace has a vastly different opinion on the value of Privacy than everyone else. We talk a lot at RSAC about Confidentiality, Integrity, and Availability, but in Europe personal Privacy is in the Triad, so to speak.

I think this is a unique strength. But I also think: Why try to beat the rest of the world at creating giant warehouses full of k8s clusters, when you can just pick almost any vendor now and get roughly the same thing? Moving the bits around and storing them redundantly is the BORING part.

But there are things Silicon Valley categorically, for reasons built into the bones of the system, cannot do. Some of those things hold great power.

Education is the obvious market vertical for Europe. There's massive power projection in being able to provide useful services, as Hezbollah does, as the local city council does. Look at the disaster that is the underfunded US education system, and think about the opportunity there. And in smaller countries, it's even more useful as strength projection. You just need to invest in translation and customer service. The key is NOT to exploit it for the obvious opportunities it would present to an aggressive intelligence service. Trust is as important an element of cyber power as deterrence is in nuclear policy. 

I don't mean to understate the difficulty in doing good customer support across time zones and translation into the specific cultural dialects worldwide, but there's real technical innovation to be done in education as well. And innovation in software scales and has network effects and can provide the basis for a 21st century economy a lot easier than something built purely on advertising and surveillance. 



Wednesday, October 14, 2020

A 2020 Look at the Vulnerability Equities Process

 I stole this picture from Marco Ivaldi's presentation on Solaris. :)


The Vulnerability Equities Process's original sin is that it attempts to address equities on a per-bug basis, when the equities involved are complex and interlinked. You cannot reduce a vulnerability to a list of components with a numerical score and make any kind of sane decision on whether to release it to a vendor or not. The VEP shares this weakness with the often-maligned CVSS vulnerability scoring system.
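To make that concrete, here's a toy sketch. The component names, weights, and vulnerability profiles below are all invented (loosely in the style of CVSS, but not real CVSS metrics); the point is only that a flat numeric score collapses very different equities onto the same value:

```python
# Illustrative only: component names and weights are invented, not
# real CVSS metrics. Two very different bugs, one final number.

def flat_score(components):
    """Reduce a vulnerability to a single number by summing components."""
    return sum(components.values())

remote_browser_rce = {"impact": 4, "exploitability": 3, "scope": 2}
local_config_bug   = {"impact": 2, "exploitability": 4, "scope": 3}

# Both collapse to the same score, even though the release decision for
# a remotely exploitable browser bug and a local misconfiguration
# should look nothing alike.
assert flat_score(remote_browser_rce) == flat_score(local_config_bug)
```

Any single-number scheme has this property: once the components are folded together, the information you actually need for an equities decision is gone.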


That said, an understanding of the equities around sensitive subjects in the cyber security world is valuable for the United States' larger strategic goals. So what this paper tries to do is present some revisions to the policy, first made public under the Obama NSC, that would attempt to harmonize it with our strategic goals.


There are several areas where the VEP policy can be drastically improved, and we will go over each in turn.


Integrating Understanding of Attack Paths 


Scoring individual vulnerabilities is difficult above all because exploits are built not from a single vulnerability but from a chain of them. In a sense, the value (and risk) of a vulnerability is linked to all the other vulnerabilities it enables.
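A minimal sketch of that linkage (the chain names and vulnerability labels here are invented for illustration, not real CVEs): if you model each exploit chain as the set of bugs that must all remain unpatched for it to work end-to-end, the "cost" of releasing one bug is measured in chains, not bugs:

```python
# Hypothetical model: each chain is the set of vulnerabilities that
# must ALL survive patching for the chain to work end-to-end.
chains = {
    "chain_a": {"browser_rce", "sandbox_escape", "kernel_lpe"},
    "chain_b": {"doc_parser_rce", "sandbox_escape", "kernel_lpe"},
    "chain_c": {"browser_rce", "driver_lpe"},
}

def chains_broken_by_patching(vuln, chains):
    """Chains that stop working if `vuln` is released and patched."""
    return {name for name, vulns in chains.items() if vuln in vulns}

# Releasing the sandbox escape kills two full chains; releasing the
# driver LPE kills only one. A per-bug score sees neither fact.
assert len(chains_broken_by_patching("sandbox_escape", chains)) == 2
assert len(chains_broken_by_patching("driver_lpe", chains)) == 1
```

A per-bug process scores "sandbox_escape" and "driver_lpe" in isolation; a chain-aware view shows they have very different downstream value.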


Attack surfaces are another example where it makes sense to be careful when assigning any equity judgement. For example, if we release a vulnerability in a hypervisor’s 3D rendering code, it can be assumed that both the hypervisor vendor and the outside research community will spend time focusing on that area for additional vulnerabilities. This means that even if an initial new vulnerability is not useful for a mission, other vulnerabilities in that same space may be useful, more exploitable, or affect more platforms. It may be worth not releasing a particular vulnerability based on how it may inform the broader research community about attack surfaces.


Exploitability and discoverability also need to be understood in terms of modern automation techniques. Automatic exploit generation, fuzzing, source code analysis and other new static analysis techniques change the profile for how likely a new vulnerability is to be rediscovered by our adversaries and the wider research community. Likewise, we need a model of the sizes and capabilities of our adversaries - if the Chinese have essentially unlimited Microsoft Word exploits, then killing one of ours has little impact on their capabilities.
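A back-of-envelope model makes the stockpile point concrete. All numbers here are invented for illustration and are not an assessment of any real actor's holdings; the model simply treats each exploit's survival of a patch cycle as independent:

```python
# Hypothetical model, invented numbers: adversary capability against a
# target = P(at least one of their n exploits still works).

def p_capability(n_exploits, p_survives=0.8):
    """P(adversary retains at least one working exploit), assuming each
    exploit independently survives a patch cycle with p_survives."""
    return 1 - (1 - p_survives) ** n_exploits

# Marginal impact of burning one exploit out of a stockpile of n:
for n in (1, 2, 10):
    delta = p_capability(n) - p_capability(n - 1)
    print(f"stockpile={n:2d}  capability lost by killing one: {delta:.7f}")
```

Under these toy assumptions, killing the adversary's only exploit removes most of their capability, but killing one out of ten removes almost nothing - which is exactly why a stockpile model belongs in the equities calculus.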





Aligning Equities to Mission Tempo


As we move into adopting persistent engagement, we are going to find more and more that our decisions around exploits cannot wait for a bureaucratic process to complete. For some missions, especially special task forces conducting counter-cyber operations or other high-tempo mission types, we are going to need to have a blanket approval for exploitation use and deal with the VEP process on the backend. On the reverse side, we can special-case scenarios where we know we have been discovered or have found third-party exploitation of a vulnerability. 


Likewise, the risks of some missions affect our understanding of how to use vulnerabilities - in some cases we want to reserve vulnerabilities for only our least risky missions (or vice versa).


Analysis of Supply Chains

 

We clearly need to communicate to our vendors that we have a presumptive denial of release of any vulnerability we purchase. A process that brings our vulnerability suppliers into the discussion would also be a valuable addition. The technical details of the vulnerabilities, the attack surfaces they target, and the potential risks to other areas of research are known best by our suppliers. They may also have the best information on how to design covert mitigations that we can apply to our own systems without revealing information about the vulnerability itself.


The security of our suppliers is also a factor in our equities decisions. Coordinating around security issues is essential for long-term understanding of the equities around vulnerability use and may need some formal processes. Individual vulnerability finders often have their own style fingerprint, or method of analysis or exploitation. These impact attribution and other parts of our toolchain equities up the stack. 


Currently we have no way of measuring how “close” two vulnerabilities are - even bugs that look like they collide in the code from a summary description can often be completely different. With recent advances in exploitation techniques and mitigation bypasses, fixing bugs that look unexploitable or low-impact can have massive detrimental effects on future exploit chains. 


The ability to maintain capability still has many unknowns. This means our decisions must often delve into details that evade a summary analysis.




Communications

We may also want to revise how we communicate to the community when we have released a vulnerability for patching by a vendor. Do we have the ability to inform the public about the details of a particular vulnerability, when our assessment differs from the vendor’s assessment? In some cases we should be releasing and supporting mitigations and point-patches for software ourselves to the general public. The answer here is not calling up a friendly news site, but an official channel that can release and iterate on code (much as we do for Ghidra). 


Measurement of Impact


Implementing any kind of government policy like this without attempting to measure the impact on our operations, and also on the broader security of the community, is difficult. Nevertheless, we should find a way to attach metrics, or even just historical anecdotes, to how the VEP performs over time.