Friday, July 28, 2017

A partial retraction of the Belfer paper

"Did not ever look at the data - hope the other two data sources are ok?"

As a scientist, I had the same reaction to the Belfer VEP paper that a climate scientist would have when confronting a new "science" paper telling them the Earth was going into an ice age. So you can imagine my "surprise" when it got partially retracted just days after publication, in response to me having a few spare hours the afternoon after it was originally released.

Look, on the face of it, this paper's remaining numbers cannot be right. There are going to be obvious confounding variables and glaring statistical flaws in any version of this document that claims 15% of all bugs collide between two independent bug finders under the conditions this paper uses. They haven't released the new paper and data yet, so we will have to wait to find out what those flaws are. But if you're in the mood for a Michael Bay movie-style trailer, I will say this: the answer is fuzzers, fuzzers, and more fuzzers. A better title for this paper would have been "Modern fuzzers are very complex and have a weird effect on bug tracking systems, and also we have some unsubstantiated opinions on the VEP".

The only way to study these sorts of things is to get truly intimate with the data. That requires YEARS OF WORK reading daily reports about vulnerabilities, as well as experience writing and using exploits. Nobody in policy-land with a political science or international relations background wants to hear that; it sounds like something a total jerk would say. I get that, and for sure Ari Schwartz gets that. But I also get that this is not a simple field where we can label a few things with a pocket-sized methodology and expect real data, the way this paper tried to.

An example of how not being connected to the bugs goes horribly wrong has been published on this blog previously, when a different Belfer "researcher" (Mailyn Fidler) had to correct her post on Stuxnet and bug collisions TWICE IN A ROW, because she lacked the understanding of the bugs themselves necessary to know when her thesis was obviously wrong.

As in the case of the current paper, she eventually claimed her "conclusion didn't change" even though the data changed drastically. That's a telling statement, and it marks the line that divides evidence-based policy creation from ideological nonsense. Just as an ideological point of reference: Bruce Schneier, one of the current paper's authors, was also one of the people who worked with the Guardian on the Snowden Archive.

The perfect paper on bug collision would probably find that the issue is multi-dimensional, and hardly a linear question of a "percentage". And any real effort to determine how this policy works against our real adversaries would be TOP SECRET CODEWORD at a minimum.
