As someone with a track record in non-traditional vulnerability disclosure, and given recent industry mishaps related to the world of bug bounties, I’ve been asked for comment on this matter. What started as a small email outlining my professional opinion grew larger, so I decided to make a post out of the whole thing.
I’m not anti-bug bounty programs. Presenting a viable information-sharing business model that encourages industry examination of your security posture is a worthwhile endeavor, and there are expert companies that specialize in their effective implementation.
But although bug bounty programs are considered a component of a “coordinated” vulnerability disclosure program, they are typically driven, along with all the other contractual terms that come into play, by the information acquirer, who is not (yet) motivated to disclose.
Most programs work to a model where the shared information will be disclosed in a timely manner, but how this actually plays out is a little fuzzier. Depending on the details of the contract terms, the information may not be released publicly at all. Many acquirers will use the contract as a tool for strong-arming, denial, and the re-categorization of other types of security events, extortion included.
Contract terms are designed to empower manufacturers to obtain and control information related to security vulnerabilities and incidents in exchange for some level of compensation for the researcher. Hush money, if you will. There is nothing new in this model – confidentiality clauses are commonplace in all sorts of agreements – we just need to recognize that this is what it is. I’m betting there are very few bounty contracts that completely separate payment from confidentiality and disclosure terms, and if they don’t, that could be an exposure for the discloser in and of itself. And this is how bug bounty programs can be used as a tool to fuel a company’s anti-disclosure strategy.
In recent weeks we have seen evidence of bug bounty budgets going the extra mile - potentially being used to avoid compliance with breach notification laws, and/or to disguise extortion payments as bounty payments. You would think a company the size of Uber would take the hit and own it, but no; it looks a bit more like denial and deception. We as an industry love to criticize hospitals paying public ransoms to get their systems back up and running, but at least they a) have the balls to own that decision and b) take responsibility for acting on the disclosure notification requirements as a result.
Bug bounty programs also provide companies with an opportunity to mold the messaging around security issues – delaying release of information, covering up significance/impact by overlaying a rhetoric focused on “improving security”, or hiding bugs behind other bugs.
And then we have compensation levels. The jury is still out on what this research is worth. There is plenty of argument on all sides. On the part of the vendor, there’s the value of the information when you consider the impact should that issue be discovered/exploited/disclosed via a non-bug-bounty method. If you’re looking at it through the lens of cost to the researcher, obviously the minimum amounts should address time and materials to discover the issue, but what about documentation, time spent in follow-up, educating the vendor, or the legal fees associated with reviewing the extensive term sheet you are no doubt about to be presented with? We are also seeing research tools specifically designed to cater to bounty program participants. These tools may drive down bounties and distract researchers from more difficult-to-spot security problems.
Researchers also need to protect themselves. Good luck to those researchers who have to go the extra mile to prove the impact/severity, or even just the existence, of a vulnerability without exploitation because it’s an API endpoint. And if exploitation generates data exfiltration, now you’re running up against breach disclosure laws. When bug hunting becomes the initiation of a company breach, things start getting really hairy.
At the end of the day, reaching mutually agreeable terms around all these points may be right up there with divorce, in terms of the likelihood of maintaining a functional relationship throughout the negotiation process. Even unacceptably slow response times will kill the deal. Other disagreements around scope, regulatory compliance, EULA violations/exemptions, non-compete and confidentiality clauses, and future obligations are enough to send some researchers running to the nearest dark web portal or government agency as a simpler alternative.
With the media seeing bug bounty programs in such a positive light, and so much press going into bounty program launches, we risk them being perceived as a fix-all solution. In reality, they could equally likely be an attempt to redirect public concerns or downplay researcher attention to the products in question.
For a bug the vendor doesn’t see as a security issue, or for an avid disclosure-driven research team who don’t see the value or impact in participating, a “bug bounty” program simply won’t work at all.
So, just as we learned with “responsible disclosure” – in our race to find solutions, and as useful as bug bounty programs may be, let’s be careful not to back ourselves into a corner here. Plus, some re-branding might be in order.