Tuesday, October 20, 2020

Projecting Cyber Power via Beneficence



So many articles come out decrying Europe's inability to create another Google or AWS or Azure, or even a DigitalOcean, Oracle Cloud, IBM Cloud, Rackspace, Alibaba, or Tencent. Look, when you list them all out, it's even more obvious how far behind Europe is in this space compared to where it should be.

And of course, projecting power via regulatory action only gets you so far. Governments like to negotiate with other governments, and you see this in cyber policy a lot, but it's worth mentioning that the European populace has a vastly different opinion on the value of Privacy than the rest of the world does. We talk a lot at RSAC about Confidentiality, Integrity, and Availability, but in Europe personal Privacy is part of the Triad, so to speak.

I think this is a unique strength. But I also think: why try to beat the rest of the world at building giant warehouses full of k8s clusters, when you can pick almost any vendor now and get roughly the same thing? Moving the bits around and storing them redundantly is the BORING part.

But there are things Silicon Valley categorically, for reasons built into the bones of the system, cannot do. Some of those things hold great power.

Education is the obvious market vertical for Europe. There's massive power projection in being able to provide useful services, as Hezbollah does, as the local city council does. Look at the disaster that is the underfunded US education system, and think about the opportunity there. And in smaller countries, it's even more useful as strength projection. You just need to invest in translation and customer service. The key is NOT to exploit it for the obvious opportunities it would present to an aggressive intelligence service. Trust is as important an element of cyber power as deterrence is in nuclear policy. 

I don't mean to understate the difficulty of doing good customer support across time zones, or of translating into specific cultural dialects worldwide, but there's real technical innovation to be done in education as well. And innovation in software scales, has network effects, and can provide the basis for a 21st-century economy far more easily than something built purely on advertising and surveillance.



Wednesday, October 14, 2020

A 2020 Look at the Vulnerability Equities Process

 I stole this picture from Marco Ivaldi's presentation on Solaris. :)


The Vulnerability Equities Process's original sin is that it attempts to address equities on a per-bug basis, but the equities involved are complex and interlinked. You cannot reduce a vulnerability to a list of components with a numerical score and make any kind of sane decision about whether to release it to a vendor. The VEP shares this weakness with the often-maligned CVSS vulnerability scoring system.


That said, an understanding of the equities around sensitive subjects in the cyber security world is valuable for the United States' larger strategic goals. So what this paper tries to do is present some revisions to the policy, first made public under the Obama NSC, that would harmonize it with our strategic goals.


There are several areas where the VEP policy can be drastically improved, and we will go over each in turn.


Integrating Understanding of Attack Paths 


Scoring individual vulnerabilities is difficult mostly because exploits are rarely built from a single vulnerability; they are built from chains of vulnerabilities. In a sense, the value (and risk) of a vulnerability is linked to all the other vulnerabilities it enables.
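
To make this concrete, here is a minimal, entirely hypothetical Python sketch of chain-aware scoring - the bug names, chains, and values below are invented for illustration, not drawn from any real program:

# Hypothetical: each chain is a set of bug IDs that together yield a working exploit.
chains = [
    {"infoleak-A", "oob-write-B"},           # browser RCE
    {"oob-write-B", "lpe-C"},                # sandbox escape
    {"infoleak-A", "oob-write-B", "lpe-C"},  # full chain to SYSTEM
]
chain_value = [5.0, 3.0, 9.0]  # notional mission value of each chain

def bug_value(bug: str) -> float:
    """A bug's value is the total value of every chain that requires it."""
    return sum(v for c, v in zip(chains, chain_value) if bug in c)

for bug in ("infoleak-A", "oob-write-B", "lpe-C"):
    print(bug, bug_value(bug))
# Releasing "oob-write-B" kills all three chains, even though on its own
# it might score as a mediocre memory-corruption primitive.

In a real equities process the chain graph is much larger and mostly unknown, which is exactly why a per-bug score misleads.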


Attack surfaces are another example where it makes sense to be careful when assigning any equity judgement. For example, if we release a vulnerability in a hypervisor’s 3D rendering code, it can be assumed that both the hypervisor vendor and the outside research community will spend time focusing on that area for additional vulnerabilities. This means that even if an initial new vulnerability is not useful for a mission, other vulnerabilities in that same space may be useful, more exploitable, or affect more platforms. It may be worth not releasing a particular vulnerability based on how it may inform the broader research community about attack surfaces.


Exploitability and discoverability also need to be understood in terms of modern automation techniques. Automatic exploit generation, fuzzing, source code analysis, and other new static analysis techniques change the profile for how likely a new vulnerability is to be rediscovered by our adversaries and the wider research community. Likewise, we need a model of the sizes and capabilities of our adversaries - if the Chinese have essentially unlimited Microsoft Word exploits, then killing one of ours has little impact on their capabilities.
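
As a toy illustration of what a rediscovery model could look like (the rates below are assumptions for the sake of the sketch, not measurements):

import math

def p_rediscovered(days: float, annual_rate: float) -> float:
    """P(independent rediscovery within `days`), modeling collisions as a
    Poisson process with `annual_rate` expected collisions per year."""
    return 1.0 - math.exp(-annual_rate * days / 365.0)

# A bug on a heavily fuzzed attack surface vs. one in an obscure, unfuzzed component.
print(p_rediscovered(365, annual_rate=0.8))   # ~55% chance within a year
print(p_rediscovered(365, annual_rate=0.05))  # ~5% chance within a year

The model itself is trivial; the hard part - and the part automation keeps shifting - is estimating the rate.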





Aligning Equities to Mission Tempo


As we move into adopting persistent engagement, we are going to find more and more that our decisions around exploits cannot wait for a bureaucratic process to complete. For some missions, especially special task forces conducting counter-cyber operations or other high-tempo mission types, we are going to need to have a blanket approval for exploitation use and deal with the VEP process on the backend. On the reverse side, we can special-case scenarios where we know we have been discovered or have found third-party exploitation of a vulnerability. 


Likewise, the risks of some missions affect our understanding of how to use vulnerabilities - in some cases we want to reserve certain vulnerabilities for only our least risky missions (or vice versa).


Analysis of Supply Chains

 

We clearly need to communicate to our vendors that we have a presumptive denial of release for any vulnerability we purchase. Likewise, a process that brings our vulnerability suppliers into the discussion would be valuable. The technical details of the vulnerabilities, the attack surfaces they target, and the potential risks to other areas of research are known best by our suppliers. They may also have the best information on how to design covert mitigations that we can apply to our own systems without revealing information about the vulnerability itself.


The security of our suppliers is also a factor in our equities decisions. Coordinating around security issues is essential for long-term understanding of the equities around vulnerability use and may need some formal processes. Individual vulnerability finders often have their own style fingerprint, or method of analysis or exploitation. These impact attribution and other parts of our toolchain equities up the stack. 


Currently we have no way of measuring how “close” two vulnerabilities are - even bugs that look like they collide in the code from a summary description can often be completely different. With recent advances in exploitation techniques and mitigation bypasses, fixing bugs that look unexploitable or low-impact can have massive detrimental effects on future exploit chains. 
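
For a sense of how crude today's proxies are, here is a sketch of the stack-hash bucketing most fuzzing triage pipelines lean on (the frames below are hypothetical) - two crashes can share a bucket and still have completely different root causes, which is exactly the problem:

import hashlib

def crash_bucket(stack_frames: list[str], top_n: int = 3) -> str:
    """Bucket a crash by its top-of-stack frames (a common fuzzer heuristic)."""
    return hashlib.sha256("|".join(stack_frames[:top_n]).encode()).hexdigest()[:12]

crash_a = ["memcpy", "png_read_row", "png_process_data"]
crash_b = ["memcpy", "png_read_row", "png_handle_iCCP"]      # different root cause
print(crash_bucket(crash_a), crash_bucket(crash_b))          # distinct buckets at depth 3
print(crash_bucket(crash_a, 2) == crash_bucket(crash_b, 2))  # True: a "collision" at depth 2

Summary-level comparison is not a real collision test, and neither direction of the error is cheap.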


The ability to maintain capability still has many unknowns. This means our decisions must often delve into details that evade a summary analysis.




Communications

We may also want to revise how we communicate to the community when we have released a vulnerability for patching by a vendor. Do we have the ability to inform the public about the details of a particular vulnerability, when our assessment differs from the vendor’s assessment? In some cases we should be releasing and supporting mitigations and point-patches for software ourselves to the general public. The answer here is not calling up a friendly news site, but an official channel that can release and iterate on code (much as we do for Ghidra). 


Measurement of Impact


Implementing any kind of government policy like this without attempting to measure its impact on our operations, and on the broader security of the community, is difficult. Nevertheless, we should find a way to attach metrics, or even just historical anecdotes, to how the VEP performs over time.


 








Friday, May 29, 2020

Cyber Lunarium Commission #001: The Case for Cyber Letters of Marque



Introducing The Cyber Lunarium Commission

The Cyber Lunarium Commission was established to propose novel approaches to United States cyber strategy grounded in technical and operational realities. The commissioners of the CLC “moonlight” in cyber policy, drawing upon their experiences in government, military, industry, and academia.


The Cyber Lunarium Commission can be reached at cyberlunarium@gmail.com and followed at @CyberLunarium.
---

The United States is losing ground in cyberspace. We are faced with adversaries who have benefited from rapid proliferation of commercial hacking capabilities. We are blocked by outdated legal frameworks, sluggish procurement practices, and a national talent pool we struggle to harness. In order to retain our dominance we must evolve our strategies. One solution may be found within the Constitution itself - letters of marque.


What is a Letter of Marque? 
In 2007, Rep. Ron Paul introduced H.R.3216, the Marque and Reprisal Act of 2007, an act to allow the President to issue letters of marque against Osama bin Laden, al-Qaeda, and co-conspirators involved in 9/11. While this bill never passed, it brought up a fascinating question - do letters of marque have a place in modern conflict? 


In Article I, Section 8, the Constitution establishes Congress’ authority to “grant letters of marque and reprisal.” These letters are commissions allowing holders to engage in privateering - in other words, historically allowing private operators to attack or capture the maritime vessels of adversary or criminal actors, without the need for the government to provide direct command-and-control. Both Revolutionary American forces and the post-Constitutional Convention US Congress employed this authority several times, most notably to fight piracy off the Barbary Coast in 1805, and against British maritime targets during the War of 1812. 


The concept of “cyber letters of marque” (CLoM) comes up every few years in online cyber policy discussions. CLoMs would harness legal reform to allow private operators to conduct limited cyber operations at the direction of the US government and - in some limited cases - hack back.


Why Letters of Marque?
Cyber power depends on our national ability to leverage technical and operational prowess to achieve desired outcomes across political, military, and economic domains, at scale and over multiple concurrent operations. US cyber freedom of action relies on our nation’s ability to not only discover and create vulnerabilities in technology systems, but to then operationalize these accesses, while simultaneously denying our adversaries the ability to do the same.


By stealing US intellectual property, and using their indigenously developed technologies to project power, foreign adversaries are threatening our technical dominance. Domestically, we face a depleted workforce, damaging leaks, and restrictive legal regimes around cyber operations.


The international market for offensive cyber capability is also increasingly moving to “access-as-a-service” (AaaS) offerings. With AaaS, governments or other actors purchase access to compromised devices, or even fully managed cyber operations from private contractors. Successful examples of AaaS include criminal botnet sales, commercialized cyberespionage offered by Indian companies, and high-end mobile hacking operations offered by the controversial NSO Group. In addition to leveraging these companies for intelligence, foreign countries that house AaaS companies gain an experienced cyber workforce and grow their cyber security economy as the companies grow. The United States, by contrast, currently has few ways to utilize its own domestic hackers aside from direct employment with government or government contractors. 

If the US is to regain dominance in cyberspace, we must lean into the winds of change already blowing - leveraging and empowering cyber talent outside of government to operate in cyberspace without fear of prosecution - naturally with appropriate legal oversight. Paired with American free-market ingenuity and robust oversight mechanisms borrowed from existing federal agencies and structures, cyber letters of marque have profound disruptive potential.


Operating Concepts
CLoMs could be employed for a variety of operating concepts, but would never eclipse government operations - instead acting as a force multiplier and enabler. As it stands, the right to conduct cyber operations is reserved for government employees under special legal authorities (Titles 10 and 50).


CLoMs would not be used for high risk operations (e.g., intelligence collection against foreign heads of state, or "left-of-launch" missile defense operations). These letters could provide a valuable tool against targets such as ISIL, or serve as a way to leverage niche or short-term capabilities against targets of opportunity that appear and disappear before a government program could be brought to bear against them. In severe cases, CLoM-authorized operations could even be used as a deterrence measure against foreign organizations that have broken US law and threatened US national security.


CLoM Operating Groups
Operations under CLoM would be carried out by US businesses similar to those already involved in commercial sales of 0day exploits and other offensive cyber capabilities: boutique firms offering deep technical skill, specialized subject matter expertise, and innovative tooling, working in conjunction with traditional defense industrial base companies that manage the less glamorous, manpower-intensive engineering problems.


US companies holding CLoMs could hire cyber talent from the private sector and veterans of the US intelligence community and military, giving them an option, other than working directly for the government or its contractors, to legally work on offensive cyber challenges.


CLoMs would provide indemnity from prosecution in the US legal system for otherwise “illegal” computer hacking activity in violation of the outdated 1986 Computer Fraud and Abuse Act (CFAA) and other pertinent statutes, against non-US entities. As private citizens protected inside the US, CLoM operators would have to assume the risks of foreign prosecution for their actions - though the US would not extradite CLoM holders. 


In order to protect CLoM operators, the specific identities of the groups carrying out these operations would be kept private, but the fact that a CLoM had been issued could be made public in some circumstances (e.g., after operations have taken place successfully, or upon authorization of "hackback"-style CLoMs to project a deterrent effect against would-be attackers).


Funding CLoM Operations
In traditional maritime LoM contexts, operators were allowed to keep seized assets from captured vessels, paying modest taxes on this “treasure” to the government. In cyberspace, capturing real value is much harder - digital files are infinitely and instantly reproducible non-exclusive goods. In CLoM operations, funding would come from agencies benefiting from outsourced private operations - e.g., DoD, CIA, NSA, etc. In limited reprisal contexts (explored further in later posts) funding from third parties or captured value would be possible.


Oversight
Congressional involvement in issuing CLoMs would help normalize cyber operations as a tool of national power, bringing them out of the shadows of classified Executive Branch programs where they have traditionally been housed.


Rather than holding whole-of-Congress referendums for each CLoM, Congress could delegate authority to a select or special committee drawing upon expertise from committees on defense, intelligence, foreign affairs, government oversight, etc. Congressional authorization of CLoMs would ideally also be worked in conjunction with relevant stakeholder agencies across government. 


CLoMs would only be issued to actors deemed trustworthy and qualified. While operations under CLoM would ideally be conducted at the unclassified level, members of CLoM operating companies could be required to maintain clearances to facilitate communication of targeting, deconfliction, and counterintelligence information. 


Granting non-government operators the authority to legally engage in cyber operations may be seen as "norms violating." However, internationally, delegating such authority is in fact the norm - a concept of operations that China has embraced with particular vigor.


Future Reporting
This is the first report of the Cyber Lunarium Commission. Over the coming days, we will publish three additional reports exploring various operating concepts that CLoMs could enable: privatized counter-ISIL cyber operations, access-as-a-service IoT offerings, and limited “hackback”-style reprisal operations against adversaries.

Thursday, May 21, 2020

Chinese Games have Ring0 on Everything

Like many of you, my kids love Doom Eternal, Valorant, Overwatch, Fortnite, Plants Vs Zombies, Team Fortress 2,  and many other video games that involve some shooting stuff but mostly calling each other names over the internet. I, on the other hand, often play a game called "Zoom calls where people try to explain what IS and IS NOT critical infrastructure". 

Back in the day (two decades ago), when Brandon Baker was at Microsoft writing Palladium, which then became the Next-Generation Secure Computing Base, I had a lot of questions, and the easiest way to answer them was "How would you create a GnuPG binary that could run on an untrusted kernel and still encrypt files without the kernel being able to get the keys?" You end up with memory fencing and a chain of trust that comes from Dell, and signing and sealing and trusted peripherals and all that good stuff.

The other thing people penciled out was "Remote Attestation," which was essentially "How do you play Quake 2 online and prove to the SERVER that you're not cheating?" In this sense, Trusted Computing is not so much Trusted BY the User as Trusted AGAINST the User.

Doom Eternal removed its Ring0 anti-cheat, but it's not that competitive a game really, especially compared to Valorant or Plants vs. Zombies.

Because writing game cheats is somehow (in this dystopia) extremely lucrative (see this Immunity presentation on it), game developers have quite logically invested in a budget implementation of Remote Attestation, largely by including mandatory kernel drivers that get installed alongside your game. These kernel drivers sometimes load bytecode from the internet, are encrypted and obfuscated, and have a wide view of what is running on your system - one that you, as the gamer or security analyst, cannot interpret any more than you can interpret the scripts your AV runs.
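
If you want to at least see what is resident on your own machine (without being able to interpret it, per the above), here is a minimal Windows-only Python sketch using only documented Psapi calls - it lists the loaded kernel drivers and nothing more:

import ctypes
from ctypes import wintypes

psapi = ctypes.WinDLL("psapi")
psapi.EnumDeviceDrivers.argtypes = [ctypes.POINTER(ctypes.c_void_p), wintypes.DWORD, ctypes.POINTER(wintypes.DWORD)]
psapi.GetDeviceDriverBaseNameW.argtypes = [ctypes.c_void_p, wintypes.LPWSTR, wintypes.DWORD]

bases = (ctypes.c_void_p * 1024)()  # assume fewer than 1024 loaded drivers
needed = wintypes.DWORD()
if psapi.EnumDeviceDrivers(bases, ctypes.sizeof(bases), ctypes.byref(needed)):
    count = needed.value // ctypes.sizeof(ctypes.c_void_p)
    name = ctypes.create_unicode_buffer(260)
    for base in bases[:count]:
        if psapi.GetDeviceDriverBaseNameW(base, name, len(name)):
            print(f"{(base or 0):#018x}  {name.value}")  # anti-cheat drivers show up here alongside everything else

Listing names is about all you get; the bytecode those drivers pull down and execute stays opaque to you.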

To add to your paranoia, as you probably DON'T know, many of the biggest gaming companies are owned or partly controlled by Tencent, a Chinese conglomerate that is also very active in cyber security, so to speak, even though the studios themselves are often headquartered in the US.

To put it directly, nobody wants to say that Tencent can control nearly every machine in the world via obfuscated bytecode that runs directly in the kernel, but it's not a whole lot of steps between here and there. And aside from direct manipulation, gaming data - which includes lots of PII - offers massive value to any SIGINT organization, has huge implications for running COVCOM networks (cf. the plot of Homeland), and is generally a high value target simply because it is assumed to be such a low value target.

We spend so much of our time trying to define critical infrastructure, but one easy way is to think about your network posture from an attacker's perspective, which hopefully this blogpost did without raising your quarantine-shredded anxiety levels too much. 

-----

League of Legends is owned by Tencent.