Thursday, November 29, 2018

The false path of ReIntermediation

Salsa Shark. . . 

I sent a recent paper on information operations over Twitter to some people for feedback and one of the comments was the following:
I also think there’s a meta question that needs to be answered. 
You say that censoring inauthentic accounts is not the way to go for various reasons...but you still have an info ops problem given that inauthentic accounts are still publishing garbage getting likes, retweets, shares, and driving online activity/conversations. How does your Jiu Jitsu solution remedy that?
And here's the larger picture: I never think re-intermediation is a valid policy solution, but it's the most common reflex in government and policy bodies. The internet was a tidal wave of disintermediation - in technology, in society, in geopolitics. It's hard to explain that, yes, the Russian Government gets to talk directly to all your people without you being able to do anything about it, that people can now talk to each other in encrypted form without you being able to listen in, and that money and music will move around without any real mediation by governments.

The instinct is always to try to force social media companies to just get REALLY good at banning Russian propaganda networks, or to legally enforce impossible encryption regulations, or to somehow enforce a global copyright regime. To re-intermediate, in other words. It's always the wrong direction. You can't roll back the tide.

Tuesday, November 20, 2018

A Question of Trust

For those of you who have not read Eugene Kaspersky's latest piece, it is here:

I have pasted the most relevant section below.

While Kaspersky's transparency initiative is obviously a good thing, and probably something that should be emulated by other companies in the field, I think it's worth taking a step back to ask what metrics would let you judge whether its design is effective. Many portions of the stated initiative don't seem relevant in a security sense - they are there for marketing purposes, as cover for people who want to use Kaspersky software and are looking for an excuse.

Some questions, a positive answer to any one of which is fatal to the goals of a Transparency initiative:

  • Can Kaspersky update the software of only one computer, or write a rule that would run on only a subset of computers? 
  • Is the data from computers in France still searchable from Moscow? (And hence, subject to Russian law?)
  • Could Kaspersky install a NOBUS backdoor that would pass the review of the Transparency team in Switzerland and get installed on international customers' machines?
I think the answer to these questions is probably "yes".
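The first question can be made concrete with a small sketch. This is entirely hypothetical code (nothing here reflects Kaspersky's actual update mechanism, and the `Machine` and `should_receive` names are mine): the point is that any update channel that evaluates per-machine predicates can, by construction, deliver a rule to exactly one target.

```python
from dataclasses import dataclass

@dataclass
class Machine:
    machine_id: str
    country: str

def should_receive(rule_filter: dict, m: Machine) -> bool:
    """Generic targeting predicate: every condition in the filter must match."""
    return all(getattr(m, k) == v for k, v in rule_filter.items())

fleet = [
    Machine("a1", "FR"),
    Machine("b2", "FR"),
    Machine("c3", "DE"),
]

# A rule scoped to a single machine is indistinguishable, at the
# distribution layer, from a legitimate country-scoped rollout.
single_target = {"machine_id": "b2"}
targeted = [m.machine_id for m in fleet if should_receive(single_target, m)]
print(targeted)  # -> ['b2']
```

If that's the architecture, reviewing the *content* of rules in Switzerland doesn't help much, because the targeting metadata is where the risk lives.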

The hard problem here is that the goal of a "Trust" initiative of this nature is to protect your customers while being provably unable to see what they are doing, or to target them in any way. The most obvious solution would be for Kaspersky to stand up an entirely independent operation to handle the international market, at the cost of any economies of scale (and at a reaction-time trade-off). Even that might not solve the third question, although at a certain point you have to admit that you are setting a bar high enough that software from extremely risky development locales is not going to clear it (which sucks for Kaspersky, but is an extremely realistic risk profile, depending on who you talk to!).

As a final note, this talk by a Kaspersky researcher is fantastic:

Tuesday, November 13, 2018

The true meaning of the Paris Call for Trust in Cyberspace


I often find it hard to explain anything in the cyber policy realm without pointing out how weird an idea "copyright" is. The easiest way to read Cat's article in the Washington Post is that the PR minions of most big companies want to make it seem like some sort of similar global controls over cyber vulnerabilities and their use are a natural thing, or at least as natural as copyright. In some sense, it's a coalition of the kinda-willing, but that's all the PR people need, since this argument is getting played out largely in newspapers.

But just to take one bullet point from the Paris text:
  • Develop ways to prevent the proliferation of malicious ICT tools and practices intended to cause harm;

What, you have to ask yourself, would that actually mean?

You can paraphrase what software (and other) companies want: a way to ameliorate what the industry calls "technical debt" by changing the global environment. If governments could assume the burden of preventing hacking, that would allow companies to take greater risks in the cyber realm. I liken it to the credit card companies making it law enforcement's problem that they built an entire industry on the idea of a secret number small enough to memorize, which you hand to everyone you want to pay money to.
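The credit card analogy is easy to put numbers on. A back-of-the-envelope sketch (illustrative figures only): a 16-digit card number has a tiny keyspace by cryptographic standards, and unlike a key, it is shared with every merchant you pay.

```python
import math

# For each 15-digit prefix, exactly one check digit passes the Luhn check,
# so only 1 in 10 of all 16-digit strings is a valid card number.
digits = 16
valid_numbers = 10 ** digits // 10

# Effective entropy if every remaining digit were random (it isn't:
# the leading digits identify the issuer, shrinking the space further).
bits = math.log2(valid_numbers)

print(f"valid 16-digit numbers: {valid_numbers:.0e}")
print(f"~{bits:.0f} bits of entropy, versus 128+ for a modern cryptographic key")
```

A "secret" with under 50 bits of entropy that you disclose to every counterparty is not a secret in any cryptographic sense - which is exactly why fraud became an externality for someone else to police.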

From the WP article:
This could make way for other players on the global stage. France and the United Kingdom, Jordan said, are now emerging as leaders in the push to develop international cybersecurity norms. But the absence of the United States also reflects the Trump administration’s aversion to signing on to global pacts, instead favoring a transactional approach to issues, Singer said.

It's not so much "transactional" as it is "practical and workable," because a real agreement in cyber requires more trust than is typical of most arrangements. That trust gap is driven by the much-reduced visibility into capabilities that is part and parcel of the domain, which frankly I could probably find a supporting quote for in Singer's new book :).

Aside from really asking yourself what it would MEAN IN REAL PRACTICAL TERMS for humanitarian law to apply to the cyber domain, you also have to ask yourself if all the parties in any particular group would AGREE on those meanings.

And then, as a follow-up, ask yourself what norms the various countries, especially the UK and France, really live by, as a completely non-aspirational practicality.