Wednesday, December 9, 2020

The Deep Wrong of Kyle on Platform Speech Governance

Kyle Langvardt (@kylelangvardt) recently wrote a piece for Lawfare on Platform Speech Governance - in essence, how and when the Government can make censorship decisions for social media companies. He drives the argument with theories of how the First Amendment is interpreted and applied (he is, in fact, a legal specialist in First Amendment law).

  • Editing (by social media companies) is not speech (because if it were, any regulation would have to pass strict scrutiny, which it probably would not)
  • Code is not speech (because not all language is speech, and therefore government regulation of social media company code is fine)
  • The scale of social media companies means that the speech of their users overrides the companies' own First Amendment rights

Each of these arguments is nonsense, but he makes them because, for him, the ends justify the means - and he says as much quite clearly.


He states directly on his podcast that he does not believe there is a particular ideological intent to content moderation at modern social media companies, but that he would be worried if the Mercer family owned them. Yet we already live in a world where the top media and news companies are owned and controlled by just a few powerful families. He's skeptical that market pressure from the public does anything, because the gravity of network effects is too strong - but this is more a feeling than any kind of data-based analysis. Social media networks go in and out of style all the time, and they add and remove content moderation features as their customers demand.

But let's start at the top: editing is speech, and code is speech. Writing a neural network that scans all of Trump's tweets and downgrades any tweet based on the company's political views is an act of expression. It's highly ironic that a law professor would reach for arguments with such a keyhole-sized view of human expression.

A banana taped to a wall can be art in the same way: it's not just the code itself that is expression, but also my choice to write that particular code.
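
To make the point concrete, here's a toy sketch of what such editorial code might look like (entirely hypothetical - the terms, weights, and names are invented, not any platform's actual system). Every constant in it is an editorial judgment:

```python
# Toy feed-ranking rule (hypothetical; not any real platform's code).
# Every constant below -- which terms to flag, how hard to downrank --
# is an editorial judgment made by whoever wrote it.

DISFAVORED_TERMS = {"election fraud", "rigged"}  # an editorial choice
DOWNRANK_FACTOR = 0.25                           # an editorial choice

def score_post(text: str, base_engagement: float) -> float:
    """Return a ranking score; lower scores sink further down the feed."""
    flagged = any(t in text.lower() for t in DISFAVORED_TERMS)
    return base_engagement * (DOWNRANK_FACTOR if flagged else 1.0)

print(score_post("The election was rigged!", 100.0))  # 25.0
print(score_post("Nice weather today", 100.0))        # 100.0
```

Choosing those terms and that factor is as expressive as a newspaper editor choosing which letters to print.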

It's hard to explain how tortured the paper's arguments are. He throws in a straw man - that Google could claim buying office space in a particular city is an editorial choice - when a better analogy would be a restaurant owner picking the decor and requiring that loud patrons keep their conversations down: a business policy that is also an act of expression.

Apple made a First Amendment argument in the San Bernardino case - essentially that the Government forcing it to write a backdoor would violate its First Amendment rights, because code is expression. A similar argument applies here, perhaps even more clearly.

I also don't see any serious reason why scale matters - even Parler has 10M users. We have no threshold for scale that anyone could agree on, and I don't think we want courts calibrating First Amendment rights to your market share or stock valuation.

What is most worrying about Kyle's paper, however, is not the speciousness of his arguments but the collateral damage of his recommendations. Gutting prior restraint because you are scared of "Viral Content" opens a door to unknown horrors.

The ends, in this case, not only fail to justify the means, but lead to unexplored dangers when it comes to government regulation of public content and the platforms we are allowed to build. For that reason, I highly recommend applying strict scrutiny not just to this paper's recommendations, but to the rest of the Lawfare content moderation project.

-----

Listening to the podcast while you run down the beach is the best way to analyze this piece.

Wednesday, November 25, 2020

Our Top Priority for US Cyber Policy

Progress in cyber policy is mostly apolitical, organic, and international. A mistake we in the US have sometimes made is viewing our cyber policy as purely domestic, when the key feature of the cyber domain is that it transcends borders and is interlinked.

If you look at what works for other countries, one policy effort in a major ally stands out as being something we desperately need to adopt: The UK's NCSC Industry-100 platform.

At its heart, it's very simple. You find talent within private industry and ask them to donate 20% of their time as work for the US Government. In exchange, they get experience they can't get elsewhere, and we hold their clearance.

It requires management, funding, some basic distributed infrastructure, the ability to scale, and the will to enact a different way of recruiting and handling talent. But the follow-on effects would be vastly out of proportion to what we invest, and we need to do it as soon as possible. With this one effort we address clearance issues, counterintelligence, recruitment, training, and industry relationship building. We inform our government and our technical industry at the same time. Instead of talking about public-private partnership, we actually build one.

It's past time. Let's get to work.


Sunday, November 15, 2020

Fifth order effects


There are schools of cyber policy and strategy thought that various countries keep quiet, the way ADM/TESO kept their 0day quiet. When it takes a long time to integrate information warfare into your techniques, operationalize it, test it, and learn from the practice of it, then knowing its relative weight in hybrid warfare before your adversary does is useful enough to hide.

But of course, the same thing is true on the other side. You could call out the United States' primacy in early lessons on ICS hacking as the result of opportunistic investment, or you could see it as the payoff for forethought about the policy implications of ongoing technological change, slowly evolving into the Stuxnet-shaped Stegosaurus Thagomizer that can pummel any society advanced enough to have email.

Persistent engagement might be one of these. Look far enough into the future on it and what you see is a sophisticated regime of communication strategies to reduce signal error between adversaries, sometimes leveraging the information security industry (c.f. USCC sending implants to VirusTotal), but also sometimes USCC silently protecting the ICS networks of Iran and Russia from other intruders.

Recently I did a panel with one of the longest-serving CSOs of a major financial institution that I know of, and one thing that struck me is that at the scale of a large financial institution, your goal is raising the bar ON AVERAGE. As an attacker, my goal is to find ways to create BINARY risk decisions, where if you lose, it's not ON AVERAGE but all at once. Your goal as a defender is to make any offense have a cost that you can mitigate on average.

Phishing is the obvious example. So many training courses (aka scams) have been sold on the metric of reducing your exposure from 5% of attachments clicked to 2%. But anything above 0% is really all the attacker needs. There's a mismatch here in the understood granularity of risk that I still find difficult to explain to otherwise smart people to this day! "It doesn't matter how deep the Thagomizer went into your heart, there's no antibiotics in the Jurassic and you're going to die!" might be my next attempt.
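
Here's the back-of-the-envelope version of that mismatch, assuming (simplistically) that each employee clicks independently:

```python
# Why a "2% click rate" is not a 2% problem: the attacker needs ONE click.
# Simplistic model: independent clicks, one campaign across n employees.

def p_at_least_one_click(click_rate: float, n_targets: int) -> float:
    """Probability that at least one of n phished employees clicks."""
    return 1.0 - (1.0 - click_rate) ** n_targets

for rate in (0.05, 0.02):
    for n in (50, 500):
        print(f"click rate {rate:.0%}, {n} targets -> "
              f"{p_at_least_one_click(rate, n):.3%} chance of at least one click")
# At 2% across 500 employees the attacker still wins ~99.996% of the time.
# The expensive "improvement" from 5% to 2% is invisible at this scale.
```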

But other examples include things like JITs, where any vulnerability can become EVERY vulnerability - from replacing an object to introducing a timing attack. You can't even parse the pseudo-expression that defines a given JIT vulnerability, because it's written in an alien language that only a specialist in x86 code optimization can even pretend to understand, and usually doesn't.


This is true for a large section of the new technology we rely on, especially cloud computing. What we've lost sight of is our understanding of fragility, or conversely of resilience. We no longer have tools to measure it, or we no longer bother to do so. What used to be clear and managed is now more often unclear and unmanaged and un-introspectable. 


Tuesday, November 3, 2020

A second byte at the China apple



Recently I read an interesting paper by Michael Fischerkeller, who works at IDA (a US Govt contractor that does cutting-edge cyber policy work). The first concept in the paper is that the Chinese HAD to implement a massive program of cyber economic espionage in order to avoid a common economic trap that developing countries fall into, the "middle-income trap". 

One thing that always surprises me is that most people have missed the public, declassified announcement the USG made about how central cyber economic espionage is to Chinese strategy - to the point of China operating fusion centers to coordinate the integration of stolen IP into Chinese companies.

It shouldn't surprise anyone on this blog that security policy and economic policy are tightly linked, but it's worth taking a second look at this paper's recommendations and perhaps tweaking them, especially in light of US Government actions against Huawei, which demonstrate a clear path for US power projection.


But our path probably runs more efficiently in a different direction - protecting Intel, AMD, Synopsys, ASML, TSMC, and other firms key to building the chips China desperately needs, and which the US has recently restricted via export control. Because TSMC and ASML are not US companies, we would need to flesh out policy that would enable US "Hunt Forward" teams to operate on their networks proactively, instead of reactively.

And offensive cyber operations could be levied against the fusion centers distributing stolen IP, and against the companies that receive it. "Hacking the hackers" is flashy and sounds good as a defensive operation USCC can run, but as a long-term strategy it might simply train the hackers to have better OPSEC. Deploying an intelligence capability against the fusion centers, or against the companies LIKELY to receive stolen information, may have a better return on investment - especially if that capability can be turned into a deterrent with the push of a button (something we also need to build policy around).

Tuesday, October 20, 2020

Projecting Cyber Power via Beneficence



So many articles come out decrying Europe's inability to create another Google or AWS or Azure, or even a DigitalOcean, Oracle Cloud, IBM Cloud, Rackspace, Alibaba, or Tencent. Look, when you list them out loud, it's even more obvious how far behind Europe is in this space compared to where it should be.

And of course, projecting power via regulatory action only gets you so far. Governments like to negotiate with other governments, and you see this in cyber policy a lot, but it's worth mentioning that the European populace has a vastly different opinion on the value of Privacy than everyone else. We talk a lot at RSAC about Confidentiality, Integrity, and Availability, but in Europe personal Privacy is in the Triad, so to speak.

I think this is a unique strength. But I also think: why try to beat the rest of the world at creating giant warehouses full of k8s clusters, when you can pick almost any vendor now and get roughly the same thing? Moving the bits around and storing them redundantly is the BORING part.

But there are things Silicon Valley categorically, for reasons built into the bones of the system, cannot do. Some of those things hold great power.

Education is the obvious market vertical for Europe. There's massive power projection in being able to provide useful services, as Hezbollah does, as the local city council does. Look at the disaster that is the underfunded US education system, and think about the opportunity there. And in smaller countries, it's even more useful as strength projection. You just need to invest in translation and customer service. The key is NOT to exploit it for the obvious opportunities it would present to an aggressive intelligence service. Trust is as important an element of cyber power as deterrence is in nuclear policy. 

I don't mean to understate the difficulty of doing good customer support across time zones, or of translation into specific cultural dialects worldwide, but there's real technical innovation to be done in education as well. And innovation in software scales, has network effects, and can provide the basis for a 21st-century economy far more easily than something built purely on advertising and surveillance.



Wednesday, October 14, 2020

A 2020 Look at the Vulnerability Equities Process

 I stole this picture from Marco Ivaldi's presentation on Solaris. :)


The Vulnerability Equities Process's original sin is that it attempts to address complex, interlinked equities on a per-bug basis. You cannot reduce a vulnerability to a list of components with a numerical score and make any kind of sane decision on whether to release it to a vendor. The VEP shares this weakness with the often-maligned CVSS vulnerability scoring system.


That said, an understanding of the equities around sensitive subjects in the cyber security world is valuable for the United States' larger strategic goals. So what this paper tries to do is present some revisions to the policy, first made public under the Obama NSC, that would harmonize it with those strategic goals.


There are several areas where the VEP policy can be drastically improved, and we will go over each in turn.


Integrating Understanding of Attack Paths 


Scoring individual vulnerabilities is difficult primarily because exploits are built not from one vulnerability so much as from a chain of vulnerabilities. In a sense, the value (and risk) of a vulnerability is linked to all the other vulnerabilities it enables, as the sketch below illustrates.
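
A toy model makes the problem obvious (the bug names and mission values below are invented for illustration, not any real scoring scheme):

```python
# Toy model: equity value lives in exploit chains, not in individual bugs.
# All names and values are invented for illustration.

CHAINS = [
    # (bugs required to complete the chain, mission value of the full chain)
    (["renderer_uaf", "sandbox_escape", "kernel_lpe"], 100),
    (["word_macro", "kernel_lpe"], 40),
]

def value_lost_if_released(bug: str) -> int:
    """Mission value destroyed if this single bug is released to the vendor."""
    return sum(value for chain, value in CHAINS if bug in chain)

# A local privilege escalation might score as "low severity" in isolation,
# but releasing it kills every chain that runs through it:
for bug in ("renderer_uaf", "kernel_lpe"):
    print(bug, "->", value_lost_if_released(bug))
# renderer_uaf -> 100
# kernel_lpe -> 140
```

A per-bug score sees "kernel_lpe" as one middling bug; a chain-aware view sees it as the keystone of two separate capabilities.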


Attack surfaces are another example where it makes sense to be careful when assigning any equity judgement. For example, if we release a vulnerability in a hypervisor’s 3D rendering code, it can be assumed that both the hypervisor vendor and the outside research community will spend time focusing on that area for additional vulnerabilities. This means that even if an initial new vulnerability is not useful for a mission, other vulnerabilities in that same space may be useful, more exploitable, or affect more platforms. It may be worth not releasing a particular vulnerability based on how it may inform the broader research community about attack surfaces.


Exploitability and discoverability also need to be understood in terms of modern automation. Automatic exploit generation, fuzzing, source code analysis, and other new static analysis techniques change how likely a new vulnerability is to be rediscovered by our adversaries or the wider research community - a crude model of which is sketched below. Likewise, we need a model of the sizes and capabilities of our adversaries: if the Chinese have essentially unlimited Microsoft Word exploits, then killing one of ours has little impact on their capabilities.
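
A crude sketch of what such a model might look like (the rates here are guesses for illustration, not measured numbers, and the independence assumption is itself debatable):

```python
# Crude rediscovery model: treat independent rediscovery as a Poisson
# process with a constant annual rate. Rates below are illustrative guesses.
import math

def p_rediscovered(annual_rate: float, years: float) -> float:
    """P(someone else independently finds the same bug within `years`)."""
    return 1.0 - math.exp(-annual_rate * years)

# If better fuzzing pushes the annual rediscovery rate from 5% to 20%,
# the chance of a collision within three years roughly triples:
for rate in (0.05, 0.20):
    print(f"rate {rate:.0%}/yr -> {p_rediscovered(rate, 3):.0%} within 3 years")
# rate 5%/yr -> 14% within 3 years
# rate 20%/yr -> 45% within 3 years
```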





Aligning Equities to Mission Tempo


As we move into adopting persistent engagement, we are going to find more and more that our decisions around exploits cannot wait for a bureaucratic process to complete. For some missions, especially special task forces conducting counter-cyber operations or other high-tempo mission types, we are going to need to have a blanket approval for exploitation use and deal with the VEP process on the backend. On the reverse side, we can special-case scenarios where we know we have been discovered or have found third-party exploitation of a vulnerability. 


Likewise, the risks of some missions affect our understanding of how to use vulnerabilities - in some cases we want to reserve a vulnerability for only our least risky missions (or vice versa).


Analysis of Supply Chains

 

We clearly need to communicate to our vendors that we have a presumptive denial of release of any vulnerability we purchase. As well, a process that brings our vulnerability suppliers into the discussion would be a valuable addition. The technical details of the vulnerabilities, the attack surfaces they target, and the potential risks to other areas of research are known best by our suppliers. They may also have the best information on how to design covert mitigations that we can apply to our own systems without revealing information about the vulnerability itself. 


The security of our suppliers is also a factor in our equities decisions. Coordinating around security issues is essential for long-term understanding of the equities around vulnerability use and may need some formal processes. Individual vulnerability finders often have their own style fingerprint, or method of analysis or exploitation. These impact attribution and other parts of our toolchain equities up the stack. 


Currently we have no way of measuring how “close” two vulnerabilities are - even bugs whose summary descriptions suggest they collide in the code can turn out to be completely different. With recent advances in exploitation techniques and mitigation bypasses, fixing bugs that look unexploitable or low-impact can have massive detrimental effects on future exploit chains. The naive metric sketched below shows the problem.
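
For flavor, here is the kind of naive "closeness" metric people reach for, and why it misleads (function names invented; real collision analysis requires root-cause comparison):

```python
# Naive bug "closeness": Jaccard similarity over functions in the crash path.
# Illustrative only -- two crashes can share most of a path and still have
# completely different root causes, which is exactly the problem.

def bug_similarity(funcs_a: set, funcs_b: set) -> float:
    """Jaccard similarity of the functions touched by each bug's crash."""
    return len(funcs_a & funcs_b) / len(funcs_a | funcs_b)

crash_a = {"png_read_chunk", "inflate", "memcpy"}
crash_b = {"png_read_chunk", "inflate", "png_handle_iTXt"}

print(f"{bug_similarity(crash_a, crash_b):.2f}")  # 0.50, yet these may be
# entirely unrelated bugs that merely share a parser entry point.
```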


The ability to maintain capability still has many unknowns. This means our decisions must often delve into details that evade a summary analysis.




Communications

We may also want to revise how we communicate to the community when we have released a vulnerability for patching by a vendor. Do we have the ability to inform the public about the details of a particular vulnerability, when our assessment differs from the vendor’s assessment? In some cases we should be releasing and supporting mitigations and point-patches for software ourselves to the general public. The answer here is not calling up a friendly news site, but an official channel that can release and iterate on code (much as we do for Ghidra). 


Measurement of Impact


Implementing any kind of government policy like this without attempting to measure the impact on our operations and also on the broader security of the community is difficult. Nevertheless we should find a way to put metrics, or even just historical anecdotes, on how the VEP performs over time. 

Friday, May 29, 2020

Cyber Lunarium Commission #001: The Case for Cyber Letters of Marque


Introducing The Cyber Lunarium Commission

The Cyber Lunarium Commission was established to propose novel approaches to United States cyber strategy grounded in technical and operational realities. The commissioners of the CLC “moonlight” in cyber policy, drawing upon their experiences in government, military, industry, and academia.


The Cyber Lunarium Commission can be reached at cyberlunarium@gmail.com and followed at @CyberLunarium
---

The United States is losing ground in cyberspace. We are faced with adversaries who have benefited from rapid proliferation of commercial hacking capabilities. We are blocked by outdated legal frameworks, sluggish procurement practices, and a national talent pool we struggle to harness. In order to retain our dominance we must evolve our strategies. One solution may be found within the Constitution itself - letters of marque.


What is a Letter of Marque? 
In 2007, Rep. Ron Paul introduced H.R.3216, the Marque and Reprisal Act of 2007, an act to allow the President to issue letters of marque against Osama bin Laden, al-Qaeda, and co-conspirators involved in 9/11. While this bill never passed, it brought up a fascinating question - do letters of marque have a place in modern conflict? 


In Article I, Section 8, the Constitution establishes Congress’ authority to “grant letters of marque and reprisal.” These letters are commissions allowing holders to engage in privateering - in other words, historically allowing private operators to attack or capture the maritime vessels of adversary or criminal actors, without the need for the government to provide direct command-and-control. Both Revolutionary American forces and the post-Constitutional Convention US Congress employed this authority several times, most notably to fight piracy off the Barbary Coast in 1805, and against British maritime targets during the War of 1812. 


The concept of “cyber letters of marque” (CLoM) comes up every few years in online cyber policy discussions. CLoMs would harness legal reform to allow private operators to conduct limited cyber operations at the direction of the US government and - in some limited cases - hack back.


Why Letters of Marque?
Cyber power depends on our national ability to leverage technical and operational prowess to achieve desired outcomes across political, military, and economic domains, at scale and over multiple concurrent operations. US cyber freedom of action relies on our nation’s ability to not only discover and create vulnerabilities in technology systems, but to then operationalize these accesses, while simultaneously denying our adversaries the ability to do the same.


By stealing US intellectual property, and using their indigenously developed technologies to project power, foreign adversaries are threatening our technical dominance. Domestically, we face a depleted workforce, damaging leaks, and restrictive legal regimes around cyber operations.


The international market for offensive cyber capability is also increasingly moving to “access-as-a-service” (AaaS) offerings. With AaaS, governments or other actors purchase access to compromised devices, or even fully managed cyber operations from private contractors. Successful examples of AaaS include criminal botnet sales, commercialized cyberespionage offered by Indian companies, and high-end mobile hacking operations offered by the controversial NSO Group. In addition to leveraging these companies for intelligence, foreign countries that house AaaS companies gain an experienced cyber workforce and grow their cyber security economy as the companies grow. The United States, by contrast, currently has few ways to utilize its own domestic hackers aside from direct employment with government or government contractors. 

If the US is to regain dominance in cyberspace, we must lean into the winds of change already blowing - leveraging and empowering cyber talent outside of government to operate in cyberspace without fear of prosecution - naturally with appropriate legal oversight. Paired with American free market ingenuity and robust oversight mechanisms borrowed from existing federal agencies and structures, the disruptive potential for cyber letters of marque is profound.


Operating Concepts
CLoMs could be employed for a variety of operating concepts, but would never eclipse government operations - instead acting as a force multiplier and enabler. As it stands, the right to conduct cyber operations is reserved for government employees under special legal authorities (Titles 10 and 50).


CLoMs would not be used for high risk operations (e.g., intelligence collection against foreign heads of state, or “left-of-launch” missile defense operations). These letters could provide a valuable tool against targets such as ISIL, or serve as a way to leverage niche or short-term capabilities against targets of opportunity that appear and disappear before a government program could be leveraged against them. In severe cases, CLoM authorized-operations could even be used as a deterrence measure against foreign organizations that have broken US law and threatened US national security. 


CLoM Operating Groups
Operations under CLoM would be carried out by businesses within the US similar to those involved in commercial sales of 0day exploits and other offensive cyber capabilities. In other words, boutique firms offering deep technical skill, specialized subject matter expertise, and innovative tooling working in conjunction with traditional defense industrial base companies managing less glamorous issues and manpower-intensive engineering problems. 


US companies holding CLoMs could hire private-sector cyber talent and veterans of the US intelligence community and military, providing them an option, beyond direct employment with the government or its contractors, to legally work on offensive cyber challenges. 


CLoMs would provide indemnity from prosecution in the US legal system for otherwise “illegal” computer hacking activity in violation of the outdated 1986 Computer Fraud and Abuse Act (CFAA) and other pertinent statutes, against non-US entities. As private citizens protected inside the US, CLoM operators would have to assume the risks of foreign prosecution for their actions - though the US would not extradite CLoM holders. 


In order to protect CLoM operators, the specific identities of groups carrying out these operations would be kept private, but the fact of a CLoM's issuance could be made public in some circumstances (e.g., after operations have taken place successfully, or upon authorization of "hackback"-style CLoMs to project a deterrent effect against would-be attackers).


Funding CLoM Operations
In traditional maritime LoM contexts, operators were allowed to keep seized assets from captured vessels, paying modest taxes on this “treasure” to the government. In cyberspace, capturing real value is much harder - digital files are infinitely and instantly reproducible non-exclusive goods. In CLoM operations, funding would come from agencies benefiting from outsourced private operations - e.g., DoD, CIA, NSA, etc. In limited reprisal contexts (explored further in later posts) funding from third parties or captured value would be possible.


Oversight
Congressional involvement in issuing CLoMs would help normalize cyber operations as a tool of national power, bringing them out of the shadows of classified Executive Branch programs where they have traditionally been housed.


Rather than holding whole-of-Congress referendums for each CLoM, Congress could delegate authority to a select or special committee drawing upon expertise from committees on defense, intelligence, foreign affairs, government oversight, etc. Congressional authorization of CLoMs would ideally also be worked in conjunction with relevant stakeholder agencies across government. 


CLoMs would only be issued to actors deemed trustworthy and qualified. While operations under CLoM would ideally be conducted at the unclassified level, members of CLoM operating companies could be required to maintain clearances to facilitate communication of targeting, deconfliction, and counterintelligence information. 


Granting authority to legally engage in cyber operations to non-government operators may be seen as "norms violating." However, internationally, delegating such authority is in fact the norm - a concept of operations that China has embraced with particular vigor.


Future Reporting
This is the first report of the Cyber Lunarium Commission. Over the coming days, we will publish three additional reports exploring various operating concepts that CLoMs could enable: privatized counter-ISIL cyber operations, access-as-a-service IoT offerings, and limited “hackback”-style reprisal operations against adversaries.

Thursday, May 21, 2020

Chinese Games have Ring0 on Everything

Like many of you, my kids love Doom Eternal, Valorant, Overwatch, Fortnite, Plants vs. Zombies, Team Fortress 2, and many other video games that involve some shooting stuff but mostly calling each other names over the internet. I, on the other hand, often play a game called "Zoom calls where people try to explain what IS and IS NOT critical infrastructure".

Back in the day (two decades ago), when Brandon Baker was at Microsoft writing Palladium, which then became the Next-Generation Secure Computing Base (NGSCB), I had a lot of questions, and the easiest way to answer them was: "How would you create a GnuPG binary that could run on an untrusted kernel and still encrypt files without the kernel being able to get the keys?" You end up with memory fencing and a chain of trust that comes from Dell, and signing and sealing and trusted peripherals and all that good stuff.

The other thing people penciled out was "Remote Attestation," which essentially asked: "How do you play Quake 2 online and prove to the SERVER that you're not cheating?" In this sense, Trusted Computing is not so much Trusted BY the User as Trusted AGAINST the User.
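
The core handshake is simple enough to sketch. Here's a toy version (an HMAC over a server nonce stands in for a TPM-signed PCR quote; nothing here is a real attestation protocol):

```python
# Toy remote attestation: the client proves to the SERVER what software it
# measured, bound to a fresh nonce. An HMAC stands in for a signed TPM quote.
import hashlib, hmac, os

DEVICE_KEY = os.urandom(32)  # in real life this lives inside the TPM
KNOWN_GOOD = hashlib.sha256(b"game.exe v1.0 + signed anti-cheat driver").digest()

def client_attest(nonce: bytes, measured_state: bytes) -> bytes:
    """Client reports the software state it measured, bound to the nonce."""
    return hmac.new(DEVICE_KEY, nonce + measured_state, hashlib.sha256).digest()

def server_verify(nonce: bytes, quote: bytes) -> bool:
    """Server accepts exactly one software state -- ITS choice, not yours."""
    expected = hmac.new(DEVICE_KEY, nonce + KNOWN_GOOD, hashlib.sha256).digest()
    return hmac.compare_digest(quote, expected)

nonce = os.urandom(16)
print(server_verify(nonce, client_attest(nonce, KNOWN_GOOD)))               # True
print(server_verify(nonce, client_attest(nonce, b"game.exe + cheat.dll")))  # False
```

Note who holds the power: the server decides which machine states are acceptable, which is the whole "Trusted AGAINST the User" point.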

Doom Eternal removed its Ring0 anti-cheat, but it's not that competitive a game really, especially compared to Valorant or Plants vs. Zombies.

Because writing game cheats is somehow (in this dystopia) extremely lucrative (see this Immunity presentation on it), game developers have quite logically invested in a budget implementation of Remote Attestation, largely by including mandatory kernel drivers that get installed alongside your game. These kernel drivers sometimes load bytecode from the internet, are encrypted and obfuscated, and have a wide view of what is running on your system - one you as the gamer or security analyst cannot interpret, any more than you can interpret the scripts your AV runs.

To add to your paranoia, as you probably DON'T know, many of the biggest gaming companies are owned or part-owned by Tencent, a Chinese conglomerate that is also very active in cyber security, so to speak, even when those studios are headquartered in the US. 

To put it directly: nobody wants to say that Tencent can control nearly every machine in the world via obfuscated bytecode that runs directly in the kernel, but it's not a whole lot of steps between here and there. And aside from direct manipulation, gaming data - which includes lots of PII - offers massive value to any SIGINT organization, has huge implications for running COVCOM networks (c.f. the plot of Homeland), and is generally a high-value target simply because it is assumed to be a low-value one. 

We spend so much of our time trying to define critical infrastructure, but one easy way to do it is to look at your network posture from an attacker's perspective - which hopefully this blogpost did, without raising your quarantine-shredded anxiety levels too much. 

-----

League of Legends is owned by Tencent.