Vulnerability Disclosure: what could that new approach look like?

A few weeks ago, Enno Rey published an interesting blog post on vulnerability disclosure, discussing how the industry needs to adjust the “traditional” practices for disclosing software defects to vendors. If you haven't read the post, it’s highly recommended: it reflects the genuine experience of someone who has been dealing with vulnerabilities for over a decade.

At the end of the post, Enno suggests an open debate, asking the community: “What could that new approach look like?”
Being a multi-author blog written by security professionals with different backgrounds, interests and opinions, we decided to provide our input to this important discussion.

Luca Carettoni - @_ikki


If you believe in the vision of building a secure Internet, disclosing vulnerabilities to the vendor is evidently a strong requirement. Since the traditional model of reporting defects “for free” has demonstrated its limitations, it’s important that we build a sustainable ecosystem where security researchers can disclose vulnerabilities, receive decent compensation and ultimately hand over the bug to the vendor. Bug bounties and the few vulnerability brokers that do not rely on the secrecy of the information (e.g. ZDI) are the incentive for disclosing to the vendor, while alleviating the pain of the process. We need to increase those opportunities by having more programs and higher rewards. Even without outbidding the black market, many researchers will prefer this approach for its ethical implications, resulting in a win-win situation.

If the vendor doesn't care (hey Mary!), digital self-defense in the form of full disclosure is a valid alternative, so that the community can work together on creating mitigations and resilient infrastructure in a timely manner. In these situations, Google's 90-day disclosure deadline is an example of a mechanism used to improve industry response times to security bugs.

Michele OrrĂ¹ - @antisnatchor


Freedom is the key. I’m tired of regulations and compliance with rules imposed by people who are not even in the security industry. If I find a bug, I want to have the freedom to do whatever I want with it, for instance:
  • Keep it private and use it during legit penetration tests or red team engagements, then report it to the vendor 12 months later because it’s Unholy Christmas time;
  • Sell it as a PoC, or sit on it until I achieve full RCE and then sell it to some broker;
  • Just go Full Disclosure and publish it under a fake persona to cause mayhem;
  • Privately report it to the vendor, helping them fix it, etc.

Let's say I find a bug in a (defensive) security product. I would never report it to the vendor unless they pay a (very) good amount of money. There are tons of security product vendors who make millions of dollars selling crap that works so-so and, most of the time, can be owned remotely, effectively becoming a pivot point into the customer’s network. Why should I help them make even more money, for free, while they silently patch bugs in their systems?

Moreover, the annoying stories of people saying “hey, if you release that 0day, the black market will use it!”, or “hey, isn’t that open source hacking tool very dangerous if used by the wrong people?” can be demystified very easily. In my opinion, governments use the black market as a resource if they really need to, just like the Italian government uses the Mafia(s) to get intel/help in certain circumstances. As for open source hacking tools (same as vulnerabilities) being dangerous: how they are used is the key here. In fact, I see a certain analogy between OSS hacking tools and 0days. If someone uses an OSS hacking tool to own a financial institution and he gets caught, would you blame the developers of the tool or the guy who did the hack? Same thing for a 0day: would you blame the person who found it, the person who used it, or the vendor? Would you blame Vitaly for discovering and selling the infamous Flash 0day, HackingTeam who weaponized it to “rule-them-all”, or Adobe for caring so little about security?

Truth is, education and knowledge are the keys. If we are able to teach the new generation how to write secure code, how to fuzz during software development and testing, and to never blindly trust input, then we will really increase Internet security. If we continue down the path of ignorance and security by obscurity, chaos is nearer.

Luca De Fulgentis - @_daath


Granted that full disclosure may not be that ethical in certain circumstances (remember Gobbles' apache-scalp?), I don't truly believe in what is named “responsible” disclosure either. Being “responsible” implies withstanding ethics which, in turn, implies naming things as “right” or “wrong”. Instead, my own experience leads me to think in terms of what simply “works”, rather than limiting choices – such as disclosing a bug – on the basis of a dualistic paradigm.

I never really understood the term “ethics”, especially when applied to the real (security) world. We live in the dark ages of the Internet of Things, where we are observing the rise of “ethical white knights” who build their fame and glory by stealing someone else's code or shitting on the enemies (of the Internet, of course). While these useless characters only exist because of the “evil” they are trying to banish – and, hopefully, they will get out of the scene now that the evil has been heavily hacked – what really makes me suffer is the term “ethical hacking”.

Ethical hacking’s deliverables are often intended as weapons to fuck up or deceive someone: technology or service providers, colleagues, managers and sometimes even customers. And let me say that most of the security firms and related professionals out there blindly accept this perverse “game”, even while claiming to be "ethical" or "white-something" – after all, business is business.

Back to the vulnerability disclosure debate: I’m not in the right position to properly identify a model that works, but let me say that it sounds like an NP-complete problem, and I don't think I’m wrong in saying that it can be compared to other well-known issues afflicting mankind.

So the whole topic could be shifted to a completely different level: we had, have and will always have insurmountable constraints, represented by subjects only interested in money, fame or power, that will always mark both the upper and lower bounds of any "improvement" – take, for example, a safer Internet via a robust vulnerability disclosure model. It's the same as the plain old physical world. It’s all the same, only the names will change.


Using Dharma to rediscover Node.js out-of-band write in UTF8 decoder

A month ago, Node.js released a security update for a bug in V8's UTF-8 decoder affecting Buffer to String conversions. Since numerous native functions for networking and I/O are affected, a malicious user could deliver a crafted input to crash a remote Node.js process. A truncated four-byte sequence can be used to create a misalignment in the WriteUtf16Slow function, resulting in a segmentation fault. For more details on the actual vulnerability, have a look at the V8 patch and the original bug report.

Just after the release of the patch, I started experimenting with this vulnerability to create a proof-of-concept:
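A rough sketch of what such a trigger might look like follows; the buffer size, filler bytes and affected versions below are my assumptions, not the original PoC:

```javascript
// Hedged sketch, not the original PoC. Assumption: a vulnerable release
// (the out-of-band write was fixed around Node.js v0.12.6 / io.js v2.3.3).
// Idea: a Buffer whose UTF-8 content ends in a truncated four-byte
// sequence, so the decoder's precomputed length and WriteUtf16Slow disagree.
var size = 0x10000;            // large enough to take the slow decoding path
var buf = new Buffer(size);
buf.fill(0x41);                // harmless ASCII filler
buf[size - 1] = 0xF0;          // lone lead byte of a four-byte sequence
buf.toString('utf8');          // segfaults on vulnerable builds
```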


Around the same time, I noticed that Christoph Diehl from Mozilla published a grammar-based fuzzer named Dharma. The tool parses formal grammar definitions and generates test cases. Although the concept is not new, Mozilla released a neat and efficient implementation.


Can we rediscover the same bug using Dharma? 

As an excuse to play with Dharma, I decided to try to replicate the same Buffer vulnerability. In this post, I will guide you through the setup and execution.

First, we need to create a grammar to define Node's Buffer functions. From the official API doc, I started classifying all APIs into three categories: definitions, permutations (from Buffer to Buffer) and operations (from Buffer to other types).

Based on this model, all test cases will resemble the following template (a hand-written illustration of its shape is sketched below):
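Schematically, each generated case first defines one or more buffers, then mutates them, then converts them. The calls below are hand-picked for illustration, not actual fuzzer output:

```javascript
// Hand-written illustration of the template shape, not Dharma output.
// definitions: create Buffer objects
var buffer0 = new Buffer('6e6962626c65', 'hex');
var buffer1 = new Buffer(16);
// permutations: Buffer-to-Buffer transformations
buffer1 = buffer0.slice(1, 4);
buffer0.copy(buffer1, 0, 0, 2);
// operations: Buffer to other types
buffer1.toString('utf8');
buffer1.readUInt8(0);
```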


The resulting buffer.dg grammar has been merged into the official GitHub repository.
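For a flavor of how these categories map onto Dharma's grammar format, here is a condensed sketch; the section layout follows Dharma's conventions as I understand them, but the specific rules are invented for this example and are much simpler than the real buffer.dg:

```
%section% := value

small_int :=
    %range%(1-64)

encoding :=
    utf8
    hex
    base64

%section% := variable

buffer :=
    new Buffer(+small_int+)

%section% := variance

main :=
    @buffer@.toString("+encoding+")
```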

With Dharma, we can now generate test cases with a simple command:
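The invocation is along these lines; `-grammars` is the documented Dharma flag, while the paths are illustrative:

```bash
# Generate a test case from the grammar to stdout (paths are illustrative).
python dharma.py -grammars grammars/buffer.dg > testcases/1.js
```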


At this point, we just need to execute our test cases and wait for the results. After trying a few different solutions, I ended up using a very simple bash script:
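A minimal reconstruction of such a runner, assuming one generated .js file per test case and illustrative directory names, could be:

```bash
#!/bin/bash
# Minimal runner sketch: execute each generated case and keep the crashers.
# Assumes test cases live in testcases/ and a crashes/ directory exists.
ulimit -c unlimited                  # preserve core dumps for triage
for testcase in testcases/*.js; do
    node "$testcase" > /dev/null 2>&1
    status=$?
    if [ $status -eq 139 ]; then     # 128 + SIGSEGV (11)
        echo "[!] segfault: $testcase"
        cp "$testcase" crashes/
    fi
done
```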


After leaving the fuzzer alone for the night, I came back in the morning to discover a multitude of core dumps. Hidden among thousands of V8::FatalProcessOutOfMemory and SIGILL Illegal instruction errors, I finally discovered a sample that was triggering something interesting.

Looking at the backtrace, we can confirm that we're triggering the same vulnerability. If you're interested, I've uploaded the auto-generated test case.


Now what?!

Node.js' Buffer provides a very powerful API with raw memory allocation capabilities. Ilja van Sprundel outlined some of the risks during a recent webcast, and the latest vulnerability was a clear demonstration of the possible outcomes. Having already spent a few hours building the grammar, I expanded this little fuzzing exercise with the goal of discovering similar vulnerabilities. After a few days of generation/execution and over 400,000 test cases, I have yet to trigger another segmentation fault in Node.js' Buffer. Although this exercise doesn't give us definitive assurance, it is probably a good sign of the maturity of the API. Nonetheless, grammar-based fuzzing is fun and can lead to interesting results.