As application security professionals, we want to get as much as possible out of our security assessments. Not only is it expected of us; we’re proud of our work and want to provide the best results and the most value possible. As I’ve written in a previous article about planning web security assessments, having your ducks in a row before you start testing is crucial. But there’s one literal roadblock to web application testing that’s often overlooked, or comes as an afterthought: firewalls and intrusion prevention systems (IPSs). What do you do about those pesky network security controls that keep blocking your scans?
The answer seems obvious: just set up trusting rules so that you have unfettered access to the application. Simple enough, right? Well, not really. The minute you do that, you’re changing the real-world view of the application. Why not just test it as the bad guys see it and be done with it? That’s a fair point, and it echoes the age-old black box/white box/gray box testing debate. I’m just not convinced it’s the best approach for managing overall risk. And in larger organizations with complex (i.e., inefficient) change management procedures, making such a request can take a week or longer. If you only discover that you’re being blocked after your scans begin, you’ll be even further behind the eight ball.
Here’s my take: if I know that firewalls and IPSs may or will block my web vulnerability scans, I’ll usually run my scans anyway to see what happens. You’d be surprised what you can get away with when the scanning is done over SSL/TLS and the network security controls can’t inspect the traffic. I’ll document any such findings. If a firewall or IPS keeps blocking or even slowing my scans and I know I need to keep digging, I’ll then have trusting rules configured. Many will argue that this isn’t real-world. What’s your reality? If a web flaw exists but can’t be found via automated scans because network security controls are blocking them, and a criminal later exploits it during a targeted manual attack, how’s that going to fly? It would’ve been far better to know about that flaw beforehand, which you could have if trusting rules had been in place from the get-go.
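One practical way to tell whether it’s an inline control interfering, rather than the application itself misbehaving, is to compare a harmless baseline request against a probe carrying an obviously attack-like payload. Here’s a rough sketch of that idea in Python; the function names, status-code heuristics, and verdict strings are my own illustrative assumptions, not part of any particular scanner:

```python
"""Hypothetical heuristic for spotting firewall/IPS interference during a scan:
send a benign request and a suspicious-looking one, then compare the outcomes."""
import urllib.error
import urllib.request


def fetch_status(url, timeout=10):
    """Return the HTTP status code for url, or None if the request fails
    outright (connection reset, timeout, DNS failure, and so on)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # got a response, just a non-2xx status
    except (urllib.error.URLError, OSError):
        return None  # dropped entirely - the classic sign of inline blocking


def classify(baseline_status, probe_status):
    """Rough verdict from the two status codes. The code lists and wording
    below are illustrative assumptions, not a standard."""
    if baseline_status is None:
        return "site unreachable - fix connectivity first"
    if probe_status is None:
        return "probe dropped - likely inline blocking"
    if probe_status in (403, 406, 429) and baseline_status == 200:
        return "probe rejected - likely WAF/IPS filtering"
    return "no obvious interference"
```

In practice you’d call `fetch_status` once with a clean URL and once with the same URL plus a suspicious parameter (URL-encoded with `urllib.parse.quote`), then feed both results to `classify`. Running the same pair over HTTPS versus plain HTTP can also reveal whether encryption is what’s keeping the controls blind.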
Setting up trusting rules can provide additional assurance that all’s well behind the scenes for those times when your network security controls don’t (or can’t) provide adequate protection. In the end, management and other stakeholders want to know one thing: where do things stand with web security? Sure, there are tons of variables, and semantics too. But we have to approach this sensibly and logically. Use some common sense and do what’s best. Only you’ll know what that is for your specific situation.