Ways to avoid email floods when running Web vulnerability scans

If you’ve ever run a Web vulnerability scan, you’ve likely experienced this situation. You fire up your scanner, tweak your settings, and click Start. The next thing you know, people in customer service, marketing, IT, etc. are wondering why they’re getting hit with hundreds – often thousands – of emails from the site. You immediately realize it’s your Web vulnerability scanner doing the misdeed. So you stop the scan and discuss some options with everyone. Odds are you couldn’t come up with a good solution for the short term. After all, your auditor or compliance manager is breathing down your neck for the scan results in the name of PCI compliance or whatever. Everyone decides to continue with the scan and just live with the consequences of the email floods.

Sound familiar? That’s the typical scenario I’ve seen. Before you go down this path – again – you have some options for preventing email floods to begin with. Depending on the environment, timing, etc., I’ve found the following to work well:
• Set up a rule in your email server to block, reject, or black-hole email messages coming from your scanner or specific forms
• Code the application with a dummy email account that just sends emails to the bit bucket
• Depending on application logic, you may be able to point email requests back to localhost, where they’re queued or black-holed (see the sketch after this list)
• Set up a CAPTCHA on each form (this will help prevent email floods, but it may also get in the way of effectively testing for input validation problems)
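
To make the black-hole idea concrete, here’s a minimal sketch of a local SMTP sink that accepts every message and throws it away. This assumes the third-party aiosmtpd Python package; the port number and handler name are arbitrary choices, not anything your scanner or mail server requires:

```python
# pip install aiosmtpd -- a local SMTP "sink" that accepts and discards everything
import time

from aiosmtpd.controller import Controller


class BlackholeHandler:
    """Accept every message and drop it on the floor (the bit bucket)."""

    async def handle_DATA(self, server, session, envelope):
        # Log just enough to confirm the scanner is actually triggering the
        # form, then discard the message body entirely.
        print(f"Discarded mail from {envelope.mail_from} to {envelope.rcpt_tos}")
        return "250 Message accepted for delivery"


if __name__ == "__main__":
    # Listen on localhost:1025; point the application's SMTP setting here.
    controller = Controller(BlackholeHandler(), hostname="127.0.0.1", port=1025)
    controller.start()
    try:
        while True:
            time.sleep(1)
    finally:
        controller.stop()
```

Point the application’s (or test environment’s) SMTP setting at 127.0.0.1:1025 and every email the scan triggers ends up in the bit bucket instead of someone’s inbox.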

These may not work in your environment – especially if you’re scanning against production. You have another option: don’t submit form data or, depending on how your scanner handles this, just ignore the forms altogether. I don’t like this approach because you’re overlooking the very root of many XSS, SQL injection, and related input validation issues. To me, if it’s there for the bad guys to access, it needs to be tested in depth.

There’s no really good answer to the dilemma of how to handle email floods other than “it depends.” Given all the different programming languages, methods for form handling, and so on, the possibilities for preventing email floods are endless. The important thing is to think this through beforehand to see what can be done both short-term and over the long haul. Above all, no matter what controls you have in place to prevent email floods, let people know that they could happen. Setting everyone’s expectations during security assessments is half the battle anyway.

I’m curious to know what you do to address this situation. Drop us a comment.


  1. Andre Gironda

    Is this from a crawl run or an attack run of the app scanner?

    It could possibly be both, which is why I generally recommend not crawling and not scanning web applications.

    Instead, using a tool such as Burp Suite, a web application security tester should manually walk the web application. He or she should discover the query strings, parameters, and forms. Then the tester should build test cases that are customized to the specific web application, e.g. by using Intruder to customize where attack strings are sent, which attack strings are sent, and what specific data in the responses to look for or sort by. The rough sketch below illustrates the idea outside of Burp.
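
    A rough sketch of that workflow in plain Python (hypothetical; the URL, parameter names, attack strings, and response fingerprints are placeholders for whatever you discover while walking the app):

```python
# Hand-built test cases: send chosen attack strings to chosen parameters
# and check the responses for specific fingerprints -- the same workflow
# Intruder automates. All names below are placeholder examples.
import requests

TARGET = "http://app.example.com/search"       # found by walking the app
PARAMS = ["q", "category"]                     # parameters worth attacking
PAYLOADS = ["'", "<script>alert(1)</script>"]  # attack strings for this app
FINGERPRINTS = ["SQL syntax", "alert(1)"]      # what to look for in responses

for param in PARAMS:
    for payload in PAYLOADS:
        resp = requests.get(TARGET, params={param: payload}, timeout=10)
        hits = [fp for fp in FINGERPRINTS if fp in resp.text]
        if hits:
            print(f"{param}={payload!r}: matched {hits} (HTTP {resp.status_code})")
```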

    What you have brought up is one of many reasons to avoid using app scanners in typical ways. Perhaps a better way would be to walk or train a proxy before attack strings are sent. Netsparker supports walking through it as a proxy (i.e. it will not crawl). HP Web Proxy sessions can be loaded into WebInspect or QAI. W3AF also has a proxy mode. Many tools work in this way.

    Also, your point about auditors and compliance managers (i.e. PCI DSS) demonstrates a lack of attention to, and respect for, the process of auditing. Did you know that all audits include the capability to fulfill requirements through the use of compensating (or alternate) controls?

    September 11, 2010 at 2:30 am
  2. Hi Dre, thanks for sharing your thoughts.

    Such a situation can happen in both the crawling and scanning stages, though I do not personally think the solution to such a problem is to not scan the website. Today’s web applications are large and complex, and doing a manual audit without the help of an automated scanner can be a very lengthy and inaccurate process. One can also easily miss a number of pages or inputs, so automating the process in such a situation is always of great help.

    Acunetix WVS can also be used as a proxy and trained to browse a website (manual crawling). Such a process is explained in detail here: http://www.acunetix.com/blog/docs/manual-crawling-http-sniffer/

    Another option would be to use the Input Fields settings, where one can pre-define the values that Acunetix WVS should submit when crawling a specific web form: http://www.acunetix.com/vulnerability-scanner/wvsmanual/websecurity-scanner79.htm

    As regards audits, we are just mentioning a situation which security guys frequently encounter; we are not sharing any opinion. If we didn’t respect audits we wouldn’t be developing a tool to help auditors and consultants do their job, i.e. Acunetix WVS :)

    Robert

    September 11, 2010 at 4:53 am
  3. Hey Andre,

    I appreciate your response and ideas. You obviously have a lot of experience and technical skills as an auditor that I admire.

    I suppose that in an ideal world, if you had unlimited time and a very limited set of systems to test, manually assessing every single aspect of an application could be an option. Based on my experience, as with source code analysis, there’s just too much room for error and oversight without good tools.

    Like an F1 engineer trying to visually analyze his race car’s performance without data acquisition, an auto mechanic trying to troubleshoot a problem in today’s modern cars without an OBD-II code reader, or a radiologist forced to use X-rays rather than a CT scan or MRI – without good tools that automate the process and give you details like nothing else can, your data and your results are very, very limited. Most applications today are just too complex to try to tackle page by page, variable by variable.

    Sure, manual analysis is absolutely required where it’s warranted, but there’s not enough time in the day nor enough focus of the human eye to approach everything that way. That’s been my experience, along with that of numerous friends, clients, and colleagues I work with. Everyone’s situation is different though.

    Regarding the checklist audit mentality, I’m just calling it like I see it. Sure, there are some great auditors who are highly technical and very good at what they do. However, all too often – typically in the name of “compliance” – people hurriedly run down their checklists, tell management that all’s well and everyone gets on with their business until next quarter (or year). We all know it’s not that simple.

    Kevin

    September 11, 2010 at 7:55 pm
  4. Considering most applications that will be scanned are likely to have some way to handle configuration of the SMTP server, if/when you do your scanning in a test environment you can point the application at a ‘dummy’ SMTP server internal to your company, but simply not allow that SMTP server to forward to real email addresses or send bounce-backs.

    If an application currently does not allow easy configuration of the SMTP (and/or POP/IMAP) server, that would be a good requirement to add next time through.

    This approach allows you to configure a current solution in a way that is conducive to comprehensive scanning/testing, but without the repercussions of filled mailboxes, false positives, or accidental messages sent outside the company.
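
    As a rough sketch of what “easy configuration of the SMTP server” buys you (hypothetical Python; the environment variable names are made up, and the defaults assume a dummy server like the sink shown in the blog post above):

```python
# Sketch: application mail code with the SMTP host/port pulled from
# configuration, so the test environment can point it at a dummy server
# that never forwards to real addresses.
import os
import smtplib
from email.message import EmailMessage

SMTP_HOST = os.environ.get("SMTP_HOST", "127.0.0.1")   # dummy server in test
SMTP_PORT = int(os.environ.get("SMTP_PORT", "1025"))

msg = EmailMessage()
msg["From"] = "noreply@example.com"
msg["To"] = "support@example.com"
msg["Subject"] = "Contact form submission"
msg.set_content("Form contents here")

# In production the same code talks to the real mail server; only the
# configuration changes between environments.
with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as smtp:
    smtp.send_message(msg)
```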

    September 14, 2010 at 1:02 am
  5. Good idea, dhartford. I agree. The only issue I’ve come across related to this in larger enterprises is when other people have to be involved and change management procedures have to be followed. This underscores the importance of planning your security testing in advance.

    Kevin

    September 14, 2010 at 7:38 pm
  6. Pekka

    How about this: teach the scanner not to submit form data to the forms that are known to flood email boxes? Currently there’s just a single, blanket “do not send form data” setting, which generally isn’t an option.

    Option 1) let the user supply regexes for blacklisted forms, or forbidden parameter/value combinations for specific scripts.

    Option 2) let the user supply a “flood danger fingerprint” for the response, e.g. “thank you for your feedback” or “your registration has been processed”. The scanner would learn relatively quickly which parameters cause floods and would not send thousands of messages anymore.
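
    A rough sketch of how both options could work inside a scanner (hypothetical Python; the hook names, regexes, and fingerprints are made up):

```python
import re

# Option 1: user-supplied blacklist -- skip any form whose action matches.
FORM_BLACKLIST = [re.compile(r"/contact"), re.compile(r"/register")]

# Option 2: "flood danger fingerprints" -- once a form's response matches,
# remember that form and stop submitting to it.
FLOOD_FINGERPRINTS = ["thank you for your feedback",
                      "your registration has been processed"]
flooding_forms = set()

def should_submit(form_action: str) -> bool:
    """Called by the scanner before submitting data to a form."""
    if any(rx.search(form_action) for rx in FORM_BLACKLIST):
        return False
    return form_action not in flooding_forms

def learn_from_response(form_action: str, response_body: str) -> None:
    """Called by the scanner after each form submission."""
    body = response_body.lower()
    if any(fp in body for fp in FLOOD_FINGERPRINTS):
        flooding_forms.add(form_action)  # stop after the first hit
```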

    September 16, 2010 at 2:37 am
  7. Hi Pekka,

    It is technically possible to teach the scanner not to submit data to forms that send emails, but then you would not be security testing those forms.

    That is why the best solution is actually to allow the scanner to do its job and capture the emails, as explained in the blog post.

    Robert

    September 16, 2010 at 3:18 am
  8. Good points, Pekka. My experience has been that if you don’t submit form data, your results can be very limited.

    Kevin

    September 17, 2010 at 7:55 am
  9. None of the information on the Free Scan page warns users about the email attack. There is no mandatory reading of the white paper (it is mentioned only in the Acunetix blog, and there is no recommendation to read the blog BEFORE using the Free Scan). We used the Free Scan; it crashed our site, destroyed our database, and forced us to shut down our mail server.
    Now, months after we made the mistake of using the Acunetix Free Scan, the email attack from Acunetix is starting all over again! You need to warn potential users of the Free Scan about this fact and also advise your legal team to brace for the ensuing legal actions.

    December 10, 2012 at 2:01 pm