The best way to start addressing this question is to consider the difference in scope between Invasive and Non-Invasive Scans.
A non-invasive scan will, by definition, avoid making any request that could deface the web application or corrupt any of its underlying data.
Generally speaking, a non-invasive scan launches only basic “security” tests against the target, such as text searches, file checks, version checks, and other simple tests, which typically cannot deface the site or corrupt the web application's data.
As an example, one technique that can be used during a non-invasive scan is Fingerprinting. The scanner computes a unique hash for every file at known or discovered locations in the web application, and compares these hashes against hashes of the same files taken at an earlier point in time. Any file whose hash differs from the previously computed one has been changed.
This technique has limited analytical value, however, because it only works for static files.
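To make the fingerprinting idea concrete, here is a minimal sketch in Python. The function names and the flat directory layout are assumptions for illustration, not part of any particular scanner:

```python
import hashlib
from pathlib import Path

def fingerprint(root):
    """Compute a SHA-256 hash for every file found under `root`."""
    hashes = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            rel = str(path.relative_to(root))
            hashes[rel] = hashlib.sha256(path.read_bytes()).hexdigest()
    return hashes

def changed_files(baseline, current):
    """Return the files whose hash differs from the earlier baseline."""
    return sorted(f for f, h in current.items() if baseline.get(f) != h)
```

A scan run today would compare `fingerprint(webroot)` against a baseline saved on a previous run; any file reported by `changed_files` has been modified since then.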
What is a Malicious Hacker trying to do?
Web application developers program their applications to accept legitimate data as input from users.
A malicious hacker, by contrast, tries to invade the web application by submitting bogus data. The goal is to find an input field where the programmer implemented no validation, or validation that is not strong enough. The attacker then exploits that field by submitting bogus data crafted in a way the programmer did not anticipate, making the web application react in unintended ways. This might expose personally identifiable information, or allow the attacker to infer parts of the web application's structure and, in turn, craft a more targeted exploit.
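As a hedged illustration of how missing validation becomes an intrusion point, the Python sketch below contrasts a login query built by string concatenation with a parameterized one. The table, credentials, and function names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unvalidated(name, password):
    # VULNERABLE: user input is concatenated straight into the SQL string,
    # so crafted input can change the meaning of the query
    query = ("SELECT * FROM users WHERE name = '%s' AND password = '%s'"
             % (name, password))
    return conn.execute(query).fetchone() is not None

def login_validated(name, password):
    # SAFE: a parameterized query treats the input as data, never as SQL
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None
```

With the unvalidated version, submitting the classic payload `' OR '1'='1` as the password turns the WHERE clause into a condition that is true for every row, letting the attacker log in without knowing any password; the parameterized version rejects the same payload.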
A web application may also provide forms that collect data from users and then INSERT that data into the database or UPDATE an existing record; it may also have features that let a user delete certain data (such as a shipping address). An attacker could use these same forms and features to corrupt or poison the data by injecting garbage records or deleting valid ones.
Invasive Scans and Effects
An invasive scan is one in which the scanner attempts to evaluate whether a hacker could intrude into the web application using techniques such as those described earlier (and other, possibly far more sophisticated, ones).
The only way a scanner can do this effectively is to simulate the hacker's actions. While Acunetix is scanning the web application, it simulates clicking every button on every page it finds. It also attempts to fill in fields and forms with payloads designed to fully test the field validation implemented by the programmer. If the web application accepts any of this data or any of these form submissions, bogus records might be added to the database, fields of existing records might be overwritten with meaningless data, and some values or records might be deleted.
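A simplified sketch of what this form fuzzing looks like, using a handful of typical probe payloads and a hypothetical (deliberately weak) server-side validator; both are invented for illustration:

```python
# Typical payloads a scanner might submit into each discovered form field
PAYLOADS = [
    "<script>alert(1)</script>",   # Cross-site Scripting probe
    "' OR '1'='1",                 # SQL Injection probe
    "A" * 5000,                    # oversized-input handling probe
]

def weak_validator(value):
    # Hypothetical server-side check that only looks for <script> tags
    return "<script" not in value.lower()

def fuzz_field(validator):
    """Return the payloads the validator fails to reject."""
    return [p for p in PAYLOADS if validator(p)]
```

Run against `weak_validator`, the fuzzer reports that the SQL Injection and oversized-input payloads slip through, which is exactly the kind of finding an invasive scan surfaces.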
Scan Only Simulated or Robustly Backed-Up Environments
Automated web application security scanners are designed to send data that the target web application should not be able to handle. In practice, the scanner is simply following links and forms (a link in an administrator interface, for example, could trigger the deletion of a database record) and submitting bogus data, actions whose end result can reveal a vulnerability. This is why it is always important to launch such scans against test or simulated environments. If a test environment is not available, it is highly recommended to have a robust backup and restore procedure in place so that critical data can be restored quickly should anything go wrong.
Garbage Data Added or Overwritten, and Deleted Records
Effects of Scanning
While scanning the web application, each check may have a different effect. When Acunetix tests for Cross-site Scripting or SQL Injection vulnerabilities, it may inject bogus data into the web application's database in order to confirm the vulnerability.
In a CMS web application, for example, this can result in bogus comments, posts, or articles being created.
If input fields do not have robust validation, an attacker could use this as an attack vector to trigger commands at the Operating System level.
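As a hedged illustration of OS-level command injection, the sketch below contrasts passing user input through a shell with passing it as a plain argument. Here `echo` stands in for whatever diagnostic command (such as a ping or DNS lookup) a real endpoint might run; the function names are invented:

```python
import subprocess

def lookup_unsafe(hostname):
    # VULNERABLE: the input is interpolated into a shell command line,
    # so "host; <anything>" runs a second command of the attacker's choosing
    return subprocess.run("echo %s" % hostname, shell=True,
                          capture_output=True, text=True).stdout

def lookup_safe(hostname):
    # SAFE: the input is passed as a single argument and never parsed
    # by a shell, so it cannot introduce extra commands
    return subprocess.run(["echo", hostname],
                          capture_output=True, text=True).stdout
```

With input like `host; echo INJECTED`, the unsafe version executes two separate commands, while the safe version treats the whole string as literal data.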
Keep in mind that if an Acunetix Scan somehow corrupts database information, this means that it has identified a vulnerability that a malicious hacker could also take advantage of.
If you run a scan on a production environment, you should first ensure that you have a rock-solid backup procedure, and that a restore from backup is known to work and is regularly tested.
If the web application contains a "delete_customer.php" page which could delete records and damage the underlying database, you can adjust your target definition to exclude the path and file concerned.
Mass Email Generation
Effects of Scanning
During the scanning process, Acunetix will submit forms that are intended for enrolling or subscribing users to a notification service, for example. The scanner may trigger an email to be sent each time the form is submitted, and this might create a domino effect if the form triggers an email to a distribution list. This could result in hundreds of emails being delivered during the scanning stage, slowing down the mail server while the scan is running.
To mitigate the effects of such situations, you can identify the path to the submit action of the form, and exclude that path in the target definition.
Adding a CAPTCHA to each form will also help mitigate by preventing a non-human from triggering the form action.
You may also implement rules on the mail server to block emails generated by the web application's forms while the scan is running.
Server Overload or Downtime
Effects of Scanning
By default, Acunetix will scan at a rate of 10 concurrent requests with 0ms delay time to the web application – the fastest setting available. Some targets may not be able to handle the rate, with the result that the web application may slow down or even stop responding.
The worst-case scenario would be that visitors to the web application would not be serviced. Alternatively, bottlenecks may be created that not only affect visitors but also greatly increase the scan time, sometimes to several days.
One other possible effect is log file flooding. The web application may have defense mechanisms against a large number of requests, but still report the attempts to the log files – the log files might fill up to the point that the target runs out of disk space.
To avoid slowing down the web server, you can try reducing the scan rate – you can bring it down to 5 or 2 or even 1 concurrent request(s).
Another option is to increase your server's capacity, allowing the scan to complete faster and produce more reliable results.
If your web application treats the scanner as a threat (for example, through IDS or IPS rules on a firewall), you may be able to resolve this by whitelisting the IP address of your scanner.
If you are using Acunetix Online, you can whitelist 188.8.131.52/32 (a single IP Address)
If you are using Acunetix 360, you can whitelist 184.108.40.206 and 220.127.116.11 (two IP addresses)
Unfortunately, it is not possible to properly scan a website for vulnerabilities using ONLY non-invasive techniques. With non-invasive techniques, we are simply scratching the surface, and avoiding the real potential intrusion points.
Malicious hackers will use any and all tools and vectors available to them when trying to exploit vulnerabilities in a website. If a Web Vulnerability Scanner does not do the same, it will not be able to discover points of entry with the same level of effectiveness as a malicious hacker could.
Remember the following golden rules:
- whenever possible, run your scans against a staging version of your web application rather than the production version
- before you point your scanner at a production web application, make sure that you have all the infrastructure AND data backed up such that you can recover easily and quickly if the scan should adversely affect your website.