Ok, so after I did a review of the German VEP paper, Riana pointed me at her paper. Academics have thick skins as a rule, so I went through and live-tweeted a response, but on reflection she is owed a deeper review.
First of all, I am often chided for lumping all policy people together, or for being overly derogatory toward the efforts of policy people in this area who are not also subject matter experts. But it has not gone unnoticed that there are fundamental problems with scholarship in the area, most recently in this article on CFR and this Twitter thread.
When you talk to major funders in this area, they complain that "Every paper I read is both the same, and entirely irrelevant." The reasons why get dissected pretty well by that CFR post, as quoted below:
There are three categories of factors that make scholarly cyber conflict research a significantly more challenging task than its nuclear era counterparts: (1) the characteristics of the threat space, (2) data availability constraints, and (3) the state of political science as a discipline.
Historically, and luckily, when non-subject-matter experts attempt to draw conclusions in this field, they make glaring and weird mistakes about the history of the subject. Most often this is in an attempt to back up the trope that cyber tools are extremely likely to get caught, and that once caught they are used against everyone else. (You can see another example of someone without any technical experience doing the same kind of thing here.)
Below (from page 9 of the paper) Riana makes some very odd claims:
In another example, after nation-state actors widely believed to be the United States
and Israel unleashed the so-called Stuxnet malware to undermine Iran’s nuclear
program, new malware which was in part identical to the Stuxnet code appeared on
the internet.25 Researchers also discovered additional types of malware that used
Stuxnet’s USB port infection technique to spread to computers.26
The reality is of course more complex, but it worries me that, in reading the released reports on Stuxnet, Duqu, and Gauss, she did not appear to understand the sweep of how things fit together. The technical details of how the code works cannot be avoided when making policy claims of this nature, and getting them wrong from the very beginning undermines the paper's other arguments.
Likewise, when talking about bug rediscovery, you cannot have a useful discussion by giving equal weight to two papers with completely different results. It's like CNN trying to give equal weight to a climate change denier and an atmospheric scientist.
But that's what we see in Riana's paper.
Trey Herr, found rediscovery rates of 14% to 17% for vulnerabilities in browser
software and 22% for bugs in the Android mobile operating system.5 After their
conclusions were criticized as inaccurate, Schneier and Herr updated their paper,
revising their rediscovery rates slightly upward and concluding that “rediscovery
takes place more often than previously thought.”6 On the other hand, the RAND
Corporation issued a report analyzing a different set of data and put the rediscovery
rate at only about 5% per year.7
I'm fairly sure they revised their rates downward, not upward? It doesn't matter, though. It's impossible to draw the kinds of conclusions you would want from any of these numbers, as she herself states a few paragraphs later:
Ultimately, experts do not precisely know the rediscovery rate for any specific
vulnerability or class of vulnerabilities, and aren’t going to know anytime soon.
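To see how little these published estimates actually constrain policy, here is a back-of-the-envelope sketch. It assumes a constant, independent annual rediscovery rate, which is an assumption none of the cited papers justifies; the point is only that the published figures, projected over a stockpile's lifetime, produce wildly divergent pictures.

```python
# Back-of-the-envelope: how far apart the published rediscovery-rate
# estimates land once projected over several years of stockpiling.
# Assumes a constant, independent annual rate -- a modeling assumption
# for illustration, not something any of the cited papers establishes.

def p_rediscovered_within(annual_rate: float, years: int) -> float:
    """Probability a bug is independently rediscovered within `years`,
    given a constant annual rediscovery rate."""
    return 1 - (1 - annual_rate) ** years

# RAND's ~5%/year vs. the 14% and 22% figures attributed to Schneier/Herr
for rate in (0.05, 0.14, 0.22):
    chance = p_rediscovered_within(rate, 5)
    print(f"annual rate {rate:.0%}: chance of rediscovery within 5 years = {chance:.0%}")
```

Run it and the five-year rediscovery odds span roughly a factor of three between the low and high estimates, which is exactly why giving the two papers equal weight tells you nothing actionable.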
Then there are paragraphs that try to push a political agenda but lack a grasp of the history of how vulnerabilities have been handled. None of the claims here can be substantiated, and many of them are pure fantasy.
Today we have companies that are in the business of developing and selling 0-days,
with no intention of revealing the flaw to the vendor so that it may be fixed. 0-days
are generally used by state actors, may not be very common, and are not the biggest
security problem out there. The existence of a market for 0-days may incentivize the
discovery of more vulnerabilities. Some think that could lead to better security
overall, so long as the government buying the 0-day ultimately discloses it to the
vendor to be fixed. But that assumes 0-days are relatively rare; if they are plentiful,
then an active 0-day market could be harmful.
The market for bugs has always been a smaller part of the larger community of people who find, use, and trade bugs, a community that existed long before governments were in the game. The commercial consulting market dwarfs the government market, and it is largely in the exact same business.
And governments are not a free bug-bounty program: they don't buy bugs in order to then disclose them to a vendor. That would be an exceedingly poor use of tax money.
Some parts of the paper, of course, highlight common-sense areas where there are wide policy gaps.
Judges issue hacking warrants ex parte based on the assurances of the government,
but those representations may not capture the hacking campaign’s impact on
people for whom there is no probable cause to believe they have committed any
crime. As its use of hacking techniques continues and expands, it will be important
for the government to narrowly tailor hacking campaigns to minimize impact on
innocent users and to explain the expected impact accurately to the authorizing
Most substantially, I read the paper as a cautionary note against using Government Hacking as a policy bulwark against government-mandated backdoors, which are, on their face, a simpler and less chaotic policy.
The problem, however, is that without a deep understanding of the technical details, this kind of paper can only misrepresent and over-abstract the risks on both sides. In that sense, it does more to muddy the issue than to clarify it, even as its conclusion claims to want to further the discussion.