Friday, January 4, 2019

VEP: Centralized Control is Centralized Risk

When we talk about the strategic risks of a Vulnerability Equities Process, people get absorbed in the information security risks. But the risks of any centralized inter-agency control group also include:

  • Time delay on decision making. Operational tempo is a thing. And you can't use a capability and then kill it on purpose on a regular basis without massive impact on your opsec, although this point appears to be lost on a few former members of the VEP. 
    • For teams that need a fast operational tempo (and are capable of it), any time delay can be fatal. 
    • For teams that don't operate like that, you either have to invest in pre-positioning capability you are not going to use (which is quite expensive) or further delay integration until the decision about whether to use it has been made.
  • One-size-fits-all capability building. While there may be plenty of talented individuals for the VEP process, it is unlikely they are all subject matter experts at the size and scale that would be needed for a truly universal process. E.g., the SIGINT usefulness of a SAP XSS may be ... very high for some specialized team.
  • Having multiple arms allows for simpler decision making by each arm, similar to the way an octopus thinks. 
  • Static processes are unlikely to work for the future. Even without enshrining a VEP in law, any bureaucratic engine has massive momentum. A buggy system stays buggy forever, just like a legacy font-rendering library. 

A centralized process may not even result in different decisions than a distributed system. For example: any bug bought or found by a SIGINT team is likely to be useful for SIGINT, and retained. Otherwise your SIGINT team is wasting its time and money, right?

Likewise, any bug found externally, say through a government bug bounty, is likely to be disclosed.

Here's a puzzle for you: What happens if your SIGINT team finds a bug in IIS 9, an RCE that is hard to exploit, works on it for a while, and produces something super useful, but then, a bit later, that same bug comes in through the bug bounty program DHS has set up, reported as an almost useless DoS or information leak? How do you handle the disparity between what YOU know about a bug (its exploitability, for example) and what the public knows?


This leads you into thinking: why is disclosure to a vendor the only output available from the VEP? Why are we not feeding things to NATO, and our DHS-AntiVirus systems, and building our own patches for strategic deployment, and using 0days for testing our own systems (aka, NSA Red Team)? There are a ton of situations where you would want to issue advisories to the public, or just for internal government use, or to lots of people who are not vendors.

During the VEP Conference you heard this as an undercurrent. People were almost baffled by how useless it was to just give bugs to vendors, since that didn't seem to improve systemic security risks nearly enough. But that's the only option they had thought of? It seems shortsighted.
