Configuring Acunetix to exclude scanning a portion of a website
There are situations where you may need to configure Acunetix to exclude a portion of a web application from crawling and scanning. This might be required if the web application being scanned is too large, or if scanning part of the site might trigger unwanted actions such as submitting data.
Exclude Paths from the Site Structure of an existing Crawl / Scan
One way to prevent Acunetix from scanning parts of a site is to configure exclusions from the Site Structure after a crawl or scan of the site.
To add an Excluded Path using this method:
- Select the "Scans" option from the sidebar
- Click on your scan
- Navigate to the "Site Structure" tab
- Hover the mouse over the relevant path in the Site Structure, and click the "Exclude" option to automatically configure an exclusion for that path
- The exclusion is saved in the Target's settings
Exclude Paths in a Target’s Settings
Another method is to make use of the "Excluded Paths" option, which enables you to specify a list of directories and files to be excluded from a crawl. Multiple paths may be excluded for each Target.
To add an Excluded Path:
- Select "Targets" from the sidebar
- Click on your target to edit it
- Scroll down to the "Navigation" panel in the "Crawling" section
- In the "Excluded Paths" field, enter the paths or files to exclude and click the "Add" icon. Exclusions can be defined using regular expressions, wildcards, or plain strings
- Click the "Save" button when done
Exclusions should start with a forward slash (/) followed by the path relative to the Target URL. For example, to exclude /dir2, which is inside /dir1 on www.example.com, create the exclusion as /dir1/dir2/; the crawler will then ignore /dir2. Note that /dir1 and everything else in it (except /dir2) will still be scanned.
If you have a directory named /dir2 in the root of the site, it will still be scanned, since the exclusion created above applies specifically to the /dir2 directory inside /dir1. Even though the two directories share a name, they are in different locations and are not treated as the same path.
Once a path is excluded from scanning, all of its subdirectories are also excluded: since the directory is not crawled, the scanner cannot discover anything below it. Modifying the previous example slightly, if /dir1 is excluded, the crawler ignores that directory and everything below it, including /dir2.
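The prefix-matching behaviour described above can be sketched in a few lines of Python. This is an illustration of the concept, not Acunetix's actual implementation; `is_excluded` is a hypothetical helper.

```python
# Illustrative sketch of prefix-based path exclusion, using the
# /dir1/dir2/ example from above. Not Acunetix internals.

def is_excluded(path: str, exclusions: list[str]) -> bool:
    """Return True if the path falls under any excluded path prefix."""
    return any(path.startswith(exclusion) for exclusion in exclusions)

exclusions = ["/dir1/dir2/"]

print(is_excluded("/dir1/page.html", exclusions))       # False: /dir1 is still scanned
print(is_excluded("/dir1/dir2/page.html", exclusions))  # True: under the excluded path
print(is_excluded("/dir2/page.html", exclusions))       # False: /dir2 at the root differs
```

The third call shows why the root-level /dir2 is still scanned: the exclusion is matched against the full path from the Target URL, so /dir2/ and /dir1/dir2/ are different prefixes.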
Excluding Paths Based on Regular Expressions
Acunetix also allows path exclusions to contain regular expressions (RegEx). This is useful in situations where you want to exclude a URL pattern rather than a single URL. Acunetix accepts the widely-used Perl Compatible Regular Expressions (PCRE) syntax for defining regular expressions.
The following are examples of regular expressions you can configure in Acunetix to restrict URL patterns.
🔍 RegEx Testing
Before applying an exclusion RegEx in Acunetix, you may wish to test your RegEx in a tool such as Regex101.
- Exclude URLs more than 1 level deep
- Exclude URLs more than 2 levels deep
- Exclude specific directories
- Exclude all URLs (useful when supplying Acunetix with a list of URLs to scan)
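As an illustration of these scenarios, the snippet below tests some example patterns with Python's `re` module, whose syntax is close to PCRE. The patterns (and the directory names `admin` and `logout`) are hypothetical examples, not the exact expressions from the Acunetix documentation; verify any pattern against your own URLs before applying it.

```python
# Hypothetical example patterns for the scenarios listed above.
# Illustrative only -- tested here with Python's re module, whose
# syntax is close to PCRE; verify in a tool such as Regex101 first.
import re

patterns = {
    "more than 1 level deep": r"^/[^/]+/.+",        # matches /dir1/page, not /page
    "more than 2 levels deep": r"^/[^/]+/[^/]+/.+", # matches /dir1/dir2/page
    "specific directories": r"^/(admin|logout)/",   # 'admin'/'logout' are made-up names
    "all URLs": r".*",                              # matches every path
}

url_path = "/dir1/dir2/page.html"
for name, pattern in patterns.items():
    excluded = bool(re.match(pattern, url_path))
    print(f"{name}: {excluded}")
```

Running this against /dir1/dir2/page.html shows the depth-based patterns and the catch-all matching, while the specific-directory pattern does not, since the path is not under /admin/ or /logout/.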