I want to point out that many of the policy and academic papers I've read (some of which are referenced in the above piece) both over-simplify the idea of bug collision (down to "sparse" or "dense", terms which make no real technical sense) and come to the opposite conclusion about vulnerability overlap from every technical person I know, many of whom have decades of experience holding 0day.
Below are some Twitter notes from Stefan Esser, Halvar, Grugq, Argp, and others, who point out that while anecdotal evidence of a lack of overlap is not conclusive in any way, it's interesting that everyone in the business seems to have the same basic experience.
To wit, the most common way vulnerabilities get "killed" appears to be coincidental code refactoring, not independent rediscovery.
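To make the "sparse vs. dense" complaint concrete, here is a minimal sketch (mine, not from the papers in question) of why that binary is too crude. Assume a pool of bugs where each bug i has some probability p_i of being found by any one researcher in a given period; two independent researchers then collide on bug i with probability p_i squared. The function and variable names (`expected_overlap`, `find_probs`) and the specific numbers are hypothetical, chosen only for illustration: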
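```python
# Illustrative sketch: collision (rediscovery) rates depend on the full
# distribution of per-bug discovery probabilities, not on a binary
# "sparse" vs. "dense" label.

def expected_overlap(find_probs):
    """Expected number of bugs found by BOTH of two independent
    researchers, as a fraction of the expected number found by one."""
    found_by_both = sum(p * p for p in find_probs)  # collision on bug i: p_i^2
    found_by_one = sum(find_probs)                  # researcher A's expected finds
    return found_by_both / found_by_one

N = 10_000

# Uniformly "hard" pool: every bug equally unlikely to be found.
uniform = [0.01] * N

# Skewed pool: a few shallow bugs everyone trips over,
# plus a long tail of deep ones almost nobody finds.
skewed = [0.5] * 100 + [0.001] * (N - 100)

print(f"uniform pool overlap: {expected_overlap(uniform):.1%}")  # ~1%
print(f"skewed pool overlap:  {expected_overlap(skewed):.1%}")   # ~42%
```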
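Under these (made-up) numbers, two pools with the same total number of bugs produce wildly different overlap rates, because overlap is a continuous function of how skewed discovery probability is across the pool. That skew is exactly the structure the "sparse or dense" framing throws away, and it's why the anecdotal experience of people holding 0day, who mostly live in the long tail, diverges from the papers' conclusions.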
And of course, sometimes it's not a single vulnerability but a CLASS of vulnerabilities that you are trying to measure. Most big research firms hold new bug classes and new exploit techniques that have never been seen or used publicly. There are no clear lines here, but at a certain point what you're trying to measure is math. Why is there no Math Equities Process for the government? Because math is not as sexy as 0day (aka, not as clearly impactful on Microsoft's bottom line and marketing message?).
Even if you had all the data, normalizing, analyzing, and understanding it would be a complex, difficult endeavor. And beyond that, making a sane policy choice is harder still. Until then, we have to admit that our policy choices are a bit...insane. :)