- Cyber signals are easily muddled or misconstrued, for example by background noise or ordinary system outages.
- Reliance on "attribution" may make signals delayed (and hence less powerful).
- It's hard to say what a cyber event was intended to signal.
- Most cyber events don't cause big visible effects, which makes them cheap to send (and hence, as signals, basically worthless).
Sunday, March 21, 2021
Thursday, January 21, 2021
Until recently I hadn't realized just how terrible I was at playing video games. And now, after finishing Cyberpunk and watching a bunch of "spoiler" reviews, I realize most people think the goal of these games is to increase some stat numbers so that the already braindead enemy AI is somehow even easier to beat up. Anyways, here's how you play open-world video games, or as they will be known in the future: Games.
Wednesday, December 9, 2020
Kyle Langvardt (@kylelangvardt) recently wrote a piece for Lawfare on Platform Speech Governance - in a sense, how and when the Government can make censorship decisions for social media companies. He drives the argument with theories of how the First Amendment is interpreted and applied (he is, in fact, a legal specialist in First Amendment law).
- Editing (by social media companies) is not speech (because if it is, any regulation has to pass strict scrutiny, which it probably would not)
- Code is not speech (because not all language is speech and therefore govt regulation of social media company code is ok)
- He also includes some argument about the scale of social media companies meaning that the speech of their customers overrides their own first amendment rights
Each of these arguments is nonsense, but he makes them because, for him, the ends justify the means, as he states quite clearly.
He states directly on his podcast that he does not believe there is a particular ideological intent to content moderation at modern social media companies, but that he would be worried if the Mercer family owned them. Yet we live in a world where the top media and news companies have long been owned and controlled by just a few powerful families. He's skeptical that market pressure from the public does anything because the gravity of network effects is too strong, but this is more a feeling than any kind of data-based analysis. Social media networks go in and out of style all the time, and they add and remove content moderation features as their customers pressure them to.
But let's start at the top: editing is speech, and code is also speech. Writing a neural network that scans all of Trump's tweets and downgrades any tweet based on their political views is an act of expression. It's highly ironic that a law professor would reach for arguments with such a keyhole-sized view of human expression.
A banana taped to a wall can be art in the same way. It's not just the code itself that is expression, but also my choice to write that particular code.
It's hard to explain how tortured the arguments in the paper are - he throws in a straw man that Google could potentially claim that buying office space in a particular city is an editorial choice. A better analogy might be a restaurant owner picking their decor and requiring that loud patrons keep their conversations down - a business policy that is itself expression.
Apple made a First Amendment argument in the San Bernardino case - essentially arguing that the Government forcing it to write a backdoor was a violation of its First Amendment rights. A similar argument applies here, perhaps even more clearly.
I also don't think there's any serious reason why scale matters - even Parler has 10M users. I'm not sure we have a threshold for scale anyone could agree on, and I don't think we want the courts interpreting First Amendment rights based on how much market share or stock valuation you have.
What is most worrying about Kyle's paper however, is not the speciousness of his arguments, but the collateral damage of his recommendations. Gutting prior restraint because you are scared of "Viral Content" opens a door to unknown horrors.
The ends, in this case, not only don't justify the means, but lead to unexplored dangers when it comes to government regulation of public content and the platforms we are allowed to build. For that reason, I highly recommend applying strict scrutiny not just to this paper's recommendations, but to the rest of the Lawfare content moderation project.
Listening to the podcast while you run down the beach is the best way to analyze this piece.
Wednesday, November 25, 2020
Progress in cyber policy is mostly apolitical, organic, and international. A mistake we in the US have sometimes made is viewing our cyber policy as purely domestic, when the key feature of the cyber domain itself is that it transcends borders and is interlinked.
If you look at what works for other countries, one policy effort in a major ally stands out as being something we desperately need to adopt: The UK's NCSC Industry-100 platform.
At its heart, it's very simple: you find talent within private industry and ask them to donate 20% of their time as work for the US Government. In exchange, they get experience they can't get elsewhere, and we hold their clearance.
It requires management, funding, some basic distributed infrastructure, the ability to scale, and the will to enact a different way of recruiting and dealing with talent. But the follow-on effects would be vastly out of proportion to what we invest, and we need to do it as soon as possible. With this effort we address clearance issues, counterintelligence, recruitment and training, and industry relationship-building. We inform our government and our technical industry at the same time. Instead of just saying "public-private partnership," we actually build one.
It's past time. Let's get to work.
Sunday, November 15, 2020
There are methods of cyber policy and strategy thought that various countries keep quiet about, the way ADM/TESO kept their 0day. When it takes a long time to integrate information warfare into your techniques, operationalize it, test it, and learn from the practice of it, then knowing its relative weight in hybrid warfare before your adversary does is valuable enough to hide.
But of course, the same thing is true on the other side. You could call out the United States' primacy in early lessons on ICS hacking as the result of opportunistic investment, or you could see it as the payoff for forethought around the policy implications of ongoing technological change, slowly evolving into the Stuxnet-shaped Stegosaurus Thagomizer that pummels any society advanced enough to have email.
Persistent engagement might be one of these. Look far enough into the future on it and what you see is a sophisticated regime of communication strategies to reduce signal error between adversaries, sometimes leveraging the information security industry (cf. USCC sending implants to VirusTotal), but also sometimes USCC silently protecting the ICS networks of Iran and Russia from other intruders.
Recently I did a panel with one of the longest-serving CSOs of a major financial institution that I know of, and one thing that struck me is how, at the scale of a large financial institution, your goal is to raise the bar ON AVERAGE. As an attacker, my goal is to find ways to create BINARY risk decisions, where if you lose, it's not ON AVERAGE but all at once. Your goal as a defender is to make any offense have a cost that you can mitigate on average.
Phishing is the obvious example. So many training courses (aka scams) have been sold that provide a metric on reducing your exposure to phishing from 5% of attachments clicked down to 2%. But anything above 0% is really all the attacker needs. There's a mismatch here in the understanding of the granularity of risk that I still find difficult to explain to otherwise smart people! "It doesn't matter how deep the Thagomizer went into your heart - there are no antibiotics in the Jurassic and you're going to die!" might be my next attempt.
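The binary nature of this risk falls out of some simple arithmetic. A sketch, using illustrative numbers only (the 5%/2% click rates from above and an assumed campaign of 500 phishing emails): even the "trained" organization is still effectively certain to hand the attacker at least one click.

```python
# Illustrative sketch: per-email click rate p, attacker sends n emails.
# Assumes independent clicks, which real campaigns only approximate.
def p_at_least_one_click(p: float, n: int) -> float:
    """Probability that at least one of n phishing emails gets clicked."""
    return 1.0 - (1.0 - p) ** n

# "Untrained" org at 5% vs "trained" org at 2%, over a 500-email campaign:
for rate in (0.05, 0.02):
    print(f"click rate {rate:.0%}: "
          f"P(at least one click over 500 emails) = "
          f"{p_at_least_one_click(rate, 500):.5f}")
```

Both numbers come out as near-certainty: the expensive training moved the per-email metric, but barely moved the binary outcome the attacker actually cares about.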
But other examples include things like JITs, where any vulnerability can become EVERY vulnerability - from replacing an object to introducing a timing attack. You can't even understand the pseudo-expression that defines what a JIT vulnerability is, because it's written in an alien language that only a specialist in x86 code optimization can even pretend to understand, and usually doesn't.
This is true for a large section of the new technology we rely on, especially cloud computing. What we've lost sight of is our understanding of fragility, or conversely of resilience. We no longer have tools to measure it, or we no longer bother to do so. What used to be clear and managed is now more often unclear and unmanaged and un-introspectable.
Tuesday, November 3, 2020
Recently I read an interesting paper by Michael Fischerkeller, who works at IDA (a US Govt contractor that does cutting-edge cyber policy work). The first concept in the paper is that the Chinese HAD to implement a massive program of cyber economic espionage in order to avoid a common economic trap that developing countries fall into: the "middle-income trap".
One thing that always surprises me is that most people have missed the public and declassified announcement that the USG made when it came to how primary the effort of cyber economic espionage was to the Chinese strategy - to the point of having fusion centers to coordinate the integration of stolen IP into Chinese companies.
It shouldn't surprise anyone on this blog that security policy and economic policy are tightly linked, but it's worth taking a second look at this paper's recommendations and perhaps tweaking them. Especially in light of US Government actions against Huawei, which demonstrate a clear path towards US power projection.
Tuesday, October 20, 2020
So many articles come out decrying Europe's inability to create another Google or AWS or Azure or even a DigitalOcean, Oracle Cloud, IBM Cloud, Rackspace, Alibaba, or Tencent. Look, when you list it out loud, it's even more obvious how far behind Europe is in this space compared to where it should be.
And of course, projecting power via regulatory action only gets you so far. Governments like to negotiate with other governments, and you see this in cyber policy a lot, but it's worth mentioning that the European populace has a vastly different opinion on the value of Privacy than everyone else. We talk a lot at RSAC about Confidentiality, Integrity, and Availability, but in Europe personal Privacy is in the Triad, so to speak.
I think this is a unique strength. But I also think: why try to beat the rest of the world at creating giant warehouses full of k8s clusters when you can just pick almost any vendor now and get roughly the same thing? Moving the bits around and storing them redundantly is the BORING part.
But there are things Silicon Valley categorically, for reasons built into the bones of the system, cannot do. Some of those things hold great power.
Education is the obvious market vertical for Europe. There's massive power projection in being able to provide useful services, as Hezbollah does, as the local city council does. Look at the disaster that is the underfunded US education system, and think about the opportunity there. And in smaller countries, it's even more useful as strength projection. You just need to invest in translation and customer service. The key is NOT to exploit it for the obvious opportunities it would present to an aggressive intelligence service. Trust is as important an element of cyber power as deterrence is in nuclear policy.
I don't mean to understate the difficulty of doing good customer support across time zones and translating into specific cultural dialects worldwide, but there's real technical innovation to be done in education as well. And innovation in software scales, has network effects, and can provide the basis for a 21st-century economy far more easily than something built purely on advertising and surveillance.