Responding to DoS attacks at the web layer

Are you ready to respond to DoS attacks at the web layer? In this article, Kevin Beaver shares an anecdote from his own experience while highlighting some important steps to take.

First things first: responding to DoS attacks at the web layer starts with ensuring you have a solid incident response plan in place. But what else do you need to be thinking about? Are DoS attacks at the web layer simple to thwart? Many believe it’s just a matter of getting on the phone with your Internet or hosting provider and having them block the attacks for you. Well, it’s not always that simple. It can be downright impossible if you’re experiencing a distributed DoS (DDoS) attack. Every DoS attack is unique, and so are the web environments they’re attempting to exploit.

During a recent project, I learned first-hand what a seemingly benign DoS attack can do to a business’s web presence.

I have a client who hosts numerous high-visibility websites for various enterprise customers. Over 10 years ago, one of the hosted websites had a vulnerable page that effectively served as an open HTTP proxy. Even though the file was long gone, it somehow made it onto an underground list of known vulnerable web pages. Criminal hackers then decided to try to take advantage of this vulnerable page by making numerous requests that evolved into an outright distributed denial-of-service situation for my client.

My client started to notice high utilization in their web server farm and tons of requests for this page that no longer existed. By the time I got involved, they were getting approximately 12,000 requests for this page every minute. The attack quickly grew to around 20,000 page requests per minute. The cloud service providers wouldn’t want you to know this, but the website and its phantom page were being hosted in one of the prominent cloud environments everyone has heard of – a provider who could (presumably) handle such an attack. That certainly wasn’t the case.

It got to the point where my client was about to lose its biggest customer because that customer’s website was becoming non-functional. Thanks to some niche players with content delivery and cloud-based WAF technologies that can help offload such nasty traffic (namely Imperva, CloudFlare and Incapsula), my client has since seen some relief.

During this project, we learned a lot about web-based DoS attacks and what does and doesn’t work. Here are some findings you may find beneficial:

  1. You might not know a DoS attack is occurring unless or until someone tells you about it. I suspect my client’s problem was ongoing for a while; they only started noticing it once their customer pointed out the problem. This issue is underscored by the Verizon 2013 Data Breach Investigations Report finding that 69% of attacks/intrusions are discovered by external parties. That in itself is not good for business. (A rough per-minute detection sketch follows this list.)
  2. You can reach out to your ISP and your hosting provider to see if they can do anything to help block the attacks. If it’s a DDoS attack from hundreds or thousands of hosts, this could prove difficult.
  3. If you experience a situation similar to my client’s and you believe that the pages (whether real or phantom) are being discovered/listed by the search engines, you can reach out to Google, Bing, and others to request link removal.
  4. You can also send an email to the ‘abuse’ contact listed in the WHOIS records for the attacking host’s IP or domain. It may prove futile, especially during a distributed attack, but it’s something to keep in mind. (A small lookup sketch follows this list.)
  5. You can experiment with various web server configurations and page setups. For example, my client created a bogus page on their website that mimicked the vulnerable page being requested. This ended up reducing their traffic considerably. We assumed this was because the requesting systems were getting an HTTP 200 OK versus an HTTP 404 Not Found or HTTP 403 Forbidden – the latter two of which could’ve been forcing secondary requests for the file and thus generating the extra traffic. (A stub-server sketch follows this list.)
  6. You can use a cloud-based WAF provider such as Imperva, CloudFlare or Incapsula for near-immediate relief without having to make any internal network infrastructure changes. However, if the attacker knows the IP address of your web server, he can simply connect to it directly, bypassing the DNS-based redirection that routes traffic through the provider’s filters before it reaches your network. (A small origin-allowlist sketch follows this list.) This raises the question: how long are these DoS mitigation technologies going to be able to hold out? Given enough DoS attacks coming through their systems, one would think that even they will get bogged down at some point.
  7. Attacks like this may be completely automated, with numerous zombie computers on the other end making the requests, so don’t take it personally or automatically assume it’s a targeted attack.
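
To make point 1 more concrete, here’s a minimal sketch of the kind of check that would have surfaced my client’s problem earlier: counting per-minute requests for a single path in an ordinary access log. It assumes an Apache/nginx combined-format log; the file name, target path, and threshold are hypothetical placeholders you’d tune to your own baseline of “normal.”

```python
#!/usr/bin/env python3
"""Count per-minute requests for a single path in a combined-format log.

A rough sketch only: the log file name, target path, and threshold are
hypothetical placeholders -- adjust them to your own environment.
"""
import re
from collections import Counter

LOG_FILE = "access.log"           # placeholder log location
TARGET_PATH = "/old/proxy.php"    # placeholder for the phantom page
THRESHOLD = 1000                  # requests/minute considered abnormal

# Matches e.g.: 203.0.113.7 - - [22/May/2013:18:36:01 -0400] "GET /old/proxy.php HTTP/1.1" 404 ...
LINE_RE = re.compile(r'\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2}):\d{2} [^\]]+\] "\w+ (\S+)')

per_minute = Counter()
with open(LOG_FILE) as f:
    for line in f:
        m = LINE_RE.search(line)
        if m and m.group(2).split("?")[0] == TARGET_PATH:
            per_minute[m.group(1)] += 1   # key: timestamp truncated to the minute

for minute, count in sorted(per_minute.items()):
    if count >= THRESHOLD:
        print(f"{minute}  {count} requests for {TARGET_PATH} -- possible DoS")
```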
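For point 4, the abuse contact typically lives in the WHOIS record for the attacking IP. Here’s a small sketch that shells out to the standard `whois` command-line tool (assumed to be installed on a Unix-like host) and pulls any abuse-related lines; the IP address shown is a documentation placeholder.

```python
#!/usr/bin/env python3
"""Pull abuse-contact lines from the WHOIS record of an attacking IP.

Sketch only: assumes a Unix-like host with the standard `whois` CLI
installed; the IP address shown is a documentation placeholder.
"""
import subprocess

def abuse_contacts(ip):
    """Return WHOIS lines that mention an abuse contact for this IP."""
    out = subprocess.run(["whois", ip], capture_output=True, text=True).stdout
    return [line.strip() for line in out.splitlines() if "abuse" in line.lower()]

for line in abuse_contacts("203.0.113.7"):   # placeholder attacker IP
    print(line)
```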
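Point 5’s bogus-page trick is usually just a static file or a rewrite rule in the web server, but as a self-contained illustration, here’s a stdlib-only sketch of a stub that answers the phantom URL with a cheap HTTP 200 instead of a 404/403. The path and port are hypothetical placeholders.

```python
#!/usr/bin/env python3
"""Answer a phantom URL with a cheap HTTP 200 instead of a 404/403.

Sketch only: in production this would be a static file or a web server
rewrite rule; the path and port are hypothetical placeholders.
"""
from http.server import BaseHTTPRequestHandler, HTTPServer

PHANTOM_PATH = "/old/proxy.php"   # placeholder for the long-gone page

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.split("?")[0] == PHANTOM_PATH:
            body = b"OK"
            self.send_response(200)   # 200 rather than 404/403 to stop retries
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass   # silence per-request logging during a flood

if __name__ == "__main__":
    HTTPServer(("", 8080), StubHandler).serve_forever()
```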
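And for the direct-to-IP concern in point 6, one common counter is to accept origin traffic only from your WAF/CDN provider’s published IP ranges. This is normally enforced at the firewall, but here’s an application-layer sketch for illustration; the CIDR blocks below are placeholder documentation ranges, not any real provider’s list.

```python
#!/usr/bin/env python3
"""Accept origin traffic only from your WAF/CDN provider's IP ranges.

Sketch only: normally enforced in the firewall; shown at the application
layer for illustration. The CIDR blocks are placeholder documentation
ranges, not any real provider's published list.
"""
import ipaddress

WAF_RANGES = [ipaddress.ip_network(c) for c in ("198.51.100.0/24", "203.0.113.0/24")]

def from_waf(client_ip):
    """True if the connection arrived via the WAF/CDN edge, not directly."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in WAF_RANGES)

print(from_waf("198.51.100.23"))   # True  -- came through the WAF
print(from_waf("192.0.2.99"))      # False -- direct-to-origin hit; drop it
```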

Again, every situation is going to be unique, but it pays to think about how you’re going to address such issues before they arise.

Keep in mind that these types of vulnerabilities are not going to be uncovered by traditional web security scanning and testing. This fact underscores the importance of 1) understanding your network and web application environment, 2) knowing what’s “normal,” and 3) having the means to detect, track down, and block such attacks.


  1. Frank

    The title is “Responding to DoS attacks at the web layer” but doesn’t it make more sense to respond to DoS attacks at the network layer? If the DoS is happening because of bad coding, it’d make more sense to fix the code. If the problem isn’t with coding and is just a network flood, that gets handled lower in the stack, no?

    May 22, 2013 at 6:36 pm
  2. Jag

    Hi Kevin,

    Great summary of what a web admin can experience and expect during a DDoS attack. Having operated a large hosting provider backbone for more than a decade, I can certainly relate to some of the frustrations felt when the hosting provider NOC could not rapidly leverage the overarching network to mitigate an application-layer attack, or even a large network-layer attack. Even worse was the ineffectiveness of monitoring solutions designed to identify large traffic anomalies but unable to identify the more sophisticated, smaller attacks against individual customers. Ultimately, the hosting provider operations team’s primary role is provisioning, maintenance, and ensuring the highest possible uptime for ALL customers; a DDoS attack represents a time investment they cannot give full focus, especially if it is causing collateral damage on shared infrastructure with adjacent customers. A quick cost-benefit analysis will usually lead them to simply blackhole your IP.

    I will offer my thoughts on some of the concerns expressed in point 6. The question of whether a cloud-based WAF/DDoS protection service has enough capacity, since it is always under attack, becomes much easier to calculate and factor, provided the network infrastructure is engineered to provide that service exclusively. By restricting the types of traffic profiles your network will accommodate (in our case we only allow ports 80 and 443 when protecting our customers), you can contract your upstreams (albeit at extra cost and after a lot of negotiation) to discard all other traffic so it never touches your network infrastructure, thereby leveraging their overall capacity as well. It becomes much easier to scale and forecast capacity requirements. Many companies offer DDoS protection alongside their other core products, such as hosting, and capacity becomes a greater concern as they intermingle attack traffic with their regular traffic.

    As for the concern about an attack that bypasses a DNS lookup and targets the IP directly: admittedly we have seen that on the odd occasion, but it is rare. When it does occur, customers usually press their hosting providers to implement some basic rules and, if that fails, ask to be issued a new set of IPs for their web servers, which will remain hidden behind our always-on service. It’s important for web admins to design their sites in a modular fashion so that, should they need to re-IP their web systems, it can be done fairly easily.

    Thanks again for a great article. I look forward to your next blog.

    May 23, 2013 at 10:23 pm
  3. Kevin Beaver

    Good points, Frank. You may be overthinking the title of my post. I’m referring to web-based attacks that come in over HTTP and request known vulnerable pages. So, technically, both the network layer and the application layer are involved.

    The thing is, the “bad code” has long been removed, but someone out there – many people, in fact – still had that page listed as attackable. How do you stop that? It’s tricky.

    August 15, 2013 at 1:33 pm
  4. Kevin Beaver

    Thanks for your thoughtful response, Jag. This is a complex area for sure, and it’s good to see things from your side of the table.

    August 15, 2013 at 1:34 pm