Recently a client of mine sent over the results of a web vulnerability scan that one of their customers had run against their production web environment. My client was curious why the results of this third-party scan were different from my findings just a few weeks prior using the same web vulnerability scanner.
Looking at this new vulnerability scan report, it became clear we were comparing apples to oranges. First off, the third party had used a more limited scan policy. Ironically, the policy I used tested for everything yet found fewer issues. Your choice of scan policy alone can dramatically impact the outcome of your web security audit, but that's not the only thing that can skew your results. A few more considerations I shared with my client:
- What’s being looked at: the actual production application or an ad-hoc test environment? The other scan was run against this third party’s own production system, part of a multi-tenanted cloud application. I had looked at a lab environment set up specifically for my assessment, with zero customization. A follow-up assessment of their production environment found nothing new. The key difference here was not the application environment itself (although that could’ve made a difference) but the customization of the application that takes place for each customer.
- What credentials were used for the third-party test? What's different about those user permissions compared to what I was given?
- How was the application’s security policy configured during the third-party scan? How does that differ from what I looked at?
- The third party’s web vulnerability scanner was several weeks newer than the version I originally used. New builds, new vulnerability checks and updated scan policies had been released in the meantime.
Some additional things that can affect what’s uncovered are your scanner’s crawler depth and timeout settings, HTTP request handling, parameter exclusions, and even firewall or IPS controls that affect production differently from a test environment. It may seem like a no-brainer (and it probably should be), but once you start throwing in all of these variables you may very well get different results.
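One way to make these variables concrete is to diff the two scan configurations side by side before comparing the two reports. The sketch below is purely illustrative: the setting names and values are hypothetical stand-ins, not the options of any particular scanner, but the exercise of enumerating the differences is the point.

```python
# Hypothetical scan configurations: every name and value here is made up
# for illustration; real scanners expose equivalent settings under
# different names in their policy editors.

my_scan = {
    "policy": "all_checks",
    "crawl_depth": 10,
    "request_timeout_s": 30,
    "excluded_parameters": [],
    "authenticated": True,
    "scanner_build": "7.0.1",
}

third_party_scan = {
    "policy": "limited_checks",
    "crawl_depth": 4,
    "request_timeout_s": 10,
    "excluded_parameters": ["sessionid", "csrf_token"],
    "authenticated": False,
    "scanner_build": "7.2.0",
}

def diff_configs(mine, theirs):
    """Return only the settings that differ between two configurations."""
    return {k: (mine[k], theirs[k]) for k in mine if mine[k] != theirs[k]}

differences = diff_configs(my_scan, third_party_scan)
for setting, (mine, theirs) in sorted(differences.items()):
    print(f"{setting}: {mine!r} vs {theirs!r}")
```

With this many settings differing, it would be more surprising if the two reports *did* match; listing the deltas up front turns "why are the results different?" into a question you can actually answer.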
Moving forward you can dig down further and uncover the not-so-obvious gotchas. Just be careful: some (often many) scanner findings are false positives or simply don’t matter in the context of your business. As I advised my client, unless this third party is manually validating every single finding, merely running a vulnerability scanner is nowhere near as thorough as an in-depth web application assessment, so be careful what you commit to fixing.
The important thing is to ensure you’re looking at all possible areas of your applications, from all possible angles, on a periodic and consistent basis. You’re not going to get it figured out on the first scan, or maybe even your fiftieth. Just strive to tweak your environment (be it production or test) and customize your scanner so it provides the greatest insight, helping you find the most vulnerabilities in the shortest period of time.