A few weeks ago, Enno Rey published an interesting blog post reflecting on vulnerability disclosure, discussing how the industry needs to adjust the “traditional” practices for disclosing software defects to vendors. If you haven’t read the post, it’s highly recommended: it captures the genuine experience of someone who has been dealing with vulnerabilities for over a decade.
At the end of the post, Enno suggests an open debate, asking the community: “What could that new approach look like?”
As a multi-author blog written by security professionals with different backgrounds, interests and opinions, we decided to contribute our input to this important discussion.
Luca Carettoni - @_ikki
If you believe in the vision of building a secure Internet, disclosing vulnerabilities to the vendor is evidently a strong requirement. Since the traditional model of reporting defects “for free” has demonstrated its limitations, it’s important that we build a sustainable ecosystem in which security researchers can disclose vulnerabilities, receive decent compensation and ultimately hand over the bug to the vendor. Bug bounties and the few vulnerability brokers that do not rely on the secrecy of the information (e.g. ZDI) provide the incentive for disclosing to the vendor while alleviating the pain of the process. We need to increase those opportunities with more programs and higher rewards. Even without outbidding the black market, many researchers will prefer this approach for its ethical implications, resulting in a win-win situation.
If the vendor doesn't care (hey Mary!), digital self-defense in the form of full disclosure is a valid alternative, so that the community can work together on creating mitigations and resilient infrastructure in a timely manner. In these situations, Google's 90-day disclosure deadline is an example of a mechanism used to improve industry response times to security bugs.
Michele Orrù - @antisnatchor
Freedom is the key. I’m tired of regulations and of compliance with rules imposed by people who are not even in the security industry. If I find a bug, I want the freedom to do whatever I want with it, for instance:
- Keep it private and use it during legit penetration tests or red team engagements, then report it to the vendor 12 months later because it’s Unholy Christmas time;
- Sell it as just a PoC, or sit on it until achieving full RCE and then sell it to some broker;
- Just go Full Disclosure and publish as a fake persona to cause mayhem;
- Privately report it to the vendor, helping them fix it, etc.
Let's say I find a bug in a (defensive) security product. I would never report it to the vendor unless they paid a (very) good amount of money. There are tons of security product vendors making millions of dollars selling crap that works so-so and most of the time can be owned remotely, effectively becoming a pivot point into the customer’s network. Why should I help them, for free, to make even more money while they silently patch bugs in their systems?
Moreover, the tired objections – “hey, if you release that 0day, the black market will use it!”, or “hey, isn’t that open source hacking tool very dangerous if used by the wrong people?” – are easily debunked. In my opinion, governments use the black market as a resource when they really need to, just as the Italian government uses the Mafia to get intel or help in certain circumstances. As for open source hacking tools (same as vulnerabilities) being dangerous: how they are used is the key here, and in fact I see a certain analogy between OSS hacking tools and 0days. If someone uses an OSS hacking tool to own a financial institution and gets caught, would you blame the developers of the tool or the guy who did the hack? Same thing for a 0day: would you blame whoever found it, whoever used it, or the vendor? Would you blame Vitaly for discovering and selling the infamous Flash 0day, HackingTeam for weaponizing it to “rule them all”, or Adobe for caring so little about security?
Truth is, education and knowledge are the keys. If we can teach the new generation how to write secure code, how to fuzz during software development and testing, and to never blindly trust input, then we will really increase Internet security. If we keep going down the path of ignorance and security by obscurity, chaos is near.
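To make the fuzzing point concrete, here is a minimal sketch using Go’s built-in fuzzing support (Go 1.18+). Note that parseHeader is a hypothetical stand-in for any routine consuming untrusted input, not code from any real product:

```go
package parser

import (
	"errors"
	"testing"
)

// parseHeader is a hypothetical parser: it checks the length of its
// input instead of blindly trusting it.
func parseHeader(data []byte) (byte, error) {
	if len(data) < 4 {
		return 0, errors.New("header too short")
	}
	return data[3], nil // e.g. a version byte at a fixed offset
}

// FuzzParseHeader feeds mutated inputs to the parser; run it with
// `go test -fuzz=FuzzParseHeader`. Returned errors are expected;
// panics are bugs the fuzzer will report.
func FuzzParseHeader(f *testing.F) {
	f.Add([]byte{0x01, 0x02, 0x03, 0x04}) // seed corpus entry
	f.Fuzz(func(t *testing.T, data []byte) {
		_, _ = parseHeader(data)
	})
}
```

The fuzzer mutates the seed inputs and automatically records any input that crashes the code – exactly the kind of cheap, continuous testing that catches “blindly trusted input” bugs before an attacker does.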
Luca De Fulgentis - @_daath
While full disclosure may not be that ethical in certain circumstances (remember Gobbles' apache-scalp?), I don’t truly believe in so-called “responsible” disclosure either. Being “responsible” implies upholding an ethics that, in turn, implies naming things “right” or “wrong”. My own experience instead leads me to think in terms of what simply “works”, rather than limiting choices – such as disclosing a bug – on the basis of a dualistic paradigm.
I never really understood the term “ethics”, especially applied to the real (security) world. We live in the dark ages of the Internet of Things, where we are observing the rise of “ethical white knights” who build their fame and glory by stealing someone else’s code or shitting on the enemies (of the Internet, of course). While these useless characters only exist because of the “evil” they are trying to banish – and, hopefully, they will get out of the scene now that the evil has been heavily hacked – what really makes me suffer is the term “ethical hacking”.
Ethical hacking’s deliverables are often intended as weapons to fuck up or deceive someone: technology or service providers, colleagues, managers and sometimes even customers. And let me say that most security firms and related professionals out there blindly accept this perverse “game”, even while claiming to be “ethical” or “white-something” – after all, business is business.
Back to the vulnerability disclosure debate: I’m not in the right position to properly identify a model that works, but it sounds like an NP-complete problem, and I don’t think I’m wrong in saying that it can be compared to other well-known issues afflicting mankind.
So the whole topic could be shifted to a completely different level: we had, have and will always have insurmountable constraints – subjects interested only in money, fame or power – that will always mark both the upper and lower bounds of any “improvement”, such as a safer Internet via a robust vulnerability disclosure model. It’s the same as in the old plain physical world. It’s all the same, only the names will change.