Quite recently, Tavis Ormandy released details of a 0-day vulnerability in a prominent piece of software. For this transgression, both he and his employer received a good deal of bad press. Sadly, very few in the professional security research crowd made enough noise about it; on the contrary, one man in particular came down squarely against him. Thankfully, however, we still have Brad Spengler. Last night he posted what none of us had the courage to say. You can find his post in the Daily Dave mailing list archives: http://seclists.org/dailydave/2010/q2/58

I won't rehash the post; I'd much rather you read it yourselves. But I would like to point out the timeline.

June 5) Tavis contacts Microsoft requesting a 60-day patch timeframe.

June 5-9) Tavis and Microsoft argue about the patch timeframe and are unable to come to an agreement.

June 9) Tavis releases the information to the public.

June 11) Microsoft releases an automated FixIt solution.

Tavis did not "give Microsoft 5 days to patch the bug," as various media outlets claimed.

As a few people (@dinodaizovi, @weldpond) have pointed out, this strikes at the heart of the term "Responsible Disclosure". The term is a clever piece of branding by software vendors: it automatically implies that any other method of disclosure is irresponsible. So we must ask, were the actions Tavis took responsible? Would it have been more responsible to allow a company to sit on a serious bug for an extended period of time? The bugs we are discussing are APT-quality bugs, and disclosing them removes ammunition from APT attackers. If your goal is to stop attacks, and bugs are the supply chain that feeds those attacks, then you must make bug and exploit creation prohibitively expensive compared to the return on that investment. This is why OS mitigations are helpful. Removing high-value bugs from the marketplace is what full disclosure is good at.

I'd like to explicitly debunk a couple of myths related to this issue now.

Myth 1) Targets are a commodity. (All targets carry the same value)

At some point, the security posture of common software is no longer about your mother's Windows XP desktop with a CRT monitor from 8 years back. It is not about the money wasted when salespeople's laptops need to be reimaged. It is about real security. It is about the financial information of your public company. It is about the plans for Marine One ending up in the hands of people who shouldn't have them. It is about the stability of our power grid.

Disclosure protects those high-value targets because once a vulnerability becomes public, it is no longer as useful to serious attackers. Defense companies provide detection and prevention mechanisms, researchers provide useful mitigations, and high-end companies are able to arm their response teams with the information necessary to protect their particular environments. The companies with high-value data that are regularly attacked are able to proactively protect themselves. Attackers who have spent significant time evaluating a company's exposure to a particular bug will now find that bug much less useful for a stealthy attack. Yes, you may see an uptick in attacks, but you see a downtick in overall target value. The loss from a 20+ company exploit spree such as "Aurora" is significantly greater than the monetary loss from low-end compromises that can be cleaned up with off-the-shelf anti-virus tools. No one is persistently using advanced exploitation techniques against low-value targets such as Joe's desktop. These attacks are focused on large corporations, government, and military targets, with the goals of industrial espionage and military superiority.

Myth 2) Only Tavis knew about the bug.

The media asks, "How could attackers know about this flaw if Tavis hadn't released it?" Every bug hunter knows this question is ridiculous. Security research, like all scientific research, moves like a flock of birds. I'm relatively sure that Leibniz wasn't spying on Newton's work, but they both developed calculus at the same time. They both had the same environment and the same problem to solve, so they developed the same working solution. I'm sure I'm not the only researcher to have lost bugs to another researcher's reporting. Within the past year I have lost several bugs which, on the market, would have sold for in excess of $65,000. Once those bugs became public, their value dropped to approximately $0, because companies were able to build protections against them. The bugs I lost had lived for more than 5 years, yet they were discovered independently by me and by others within months of each other. Even if no one else had found the bug, there are other ways an attacker could become aware of it. It would be unreasonable to assume that high-end researchers and their companies are not themselves targets of espionage. The value of their research is high, and if an attacker can get a free exploit and know that it won't be patched in the next 60 days, that is a win for the attacker. It is unreasonable to assume that a bug is not known to attackers once it is found by a researcher. Tavis has protected high-value targets by refusing to accept an unreasonable timeline for patching. Tavis has devalued the vulnerability by letting companies know about a threat that they otherwise would have been unaware of. Tavis has acted responsibly.

The long and short of this is that when only a handful of people have information, that information is very valuable and very useful. When everyone has this information, everyone can use it, but its value decreases significantly. Tavis simply devalued this flaw. Yes, what Tavis did means you might have to reimage your mother's computer when you visit at Thanksgiving. But it also means that you won't think twice about whether or not the power will be on when you get there. Despite the branding, what Tavis did was responsible. In this case, "responsible disclosure" wouldn't have been responsible.