Target Options

Crawling Options

When a target is being scanned, the crawler performs the critical function of searching for all the pages available, so that the scanner can then test them for vulnerabilities. You can adjust the crawler's behaviour to optimise the scan for your site.

Custom User Agent String

Each HTTP request sent by the crawler and scanner contains a "User Agent" string, including information that may identify the browser name and version (for example, Mozilla or Opera), the rendering engine the browser is based on (for example, AppleWebKit), and the type of system the browser is running on (for example, Android).

The web server may present different content depending on the value of the User Agent string; for advanced testing, you may need to run scans with different User Agent strings to make sure that all parts of the target are scanned.

Apart from the default, a number of pre-set options are available; you can also customise the User Agent string to any value of your choice.
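
As an illustration of why the User Agent string matters, the following Python sketch (not part of Acunetix; the URL and User Agent values are placeholders) sends the same request with two different User Agent strings so the responses can be compared:

    import requests

    # Two example User-Agent values (placeholders; any strings can be used).
    user_agents = [
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",  # desktop-style browser
        "Mozilla/5.0 (Linux; Android 13)",            # mobile-style browser
    ]

    for ua in user_agents:
        response = requests.get("http://www.example.com/",
                                headers={"User-Agent": ua})
        # A different status code or body length suggests UA-dependent content.
        print(ua, "->", response.status_code, len(response.text))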

Case Sensitive Paths

By default, Acunetix will try to automatically detect whether the target web server uses case-sensitive URLs. Most, but not all, web servers are case sensitive. In addition, some web applications can be configured to be case sensitive or insensitive using rewrite rules or other mechanisms.

If you need to force the crawling process to be case sensitive (to ensure accuracy and completeness for a target which you know is case sensitive) or case insensitive (to reduce scan time for a target which you know is case insensitive), you can use this option.

Limit Crawling to address and sub-directories only

This option is useful to limit the scope of the scan to part of the web application. By default, the option “Limit Crawling to address and sub-directories only” is enabled for new targets.

This option will limit the scope of the scan up to the last forward slash (/) in the target address.

Note that any target URL WITH a path but WITHOUT a trailing slash will cause the crawler to treat the final part of the path as a FILE and not a FOLDER; as a result, the parent folder of that file becomes the effective target URL. For example:

  • the target URL http://www.example.com/folder1/subfolder1/ WITH the option "Limit Crawling to address and sub-directories only" will scan items beneath /folder1/subfolder1/ (WITHOUT the option, you will be scanning the full domain)
  • the target URL http://www.example.com/folder1/subfolder1 WITH the option "Limit Crawling to address and sub-directories only" will scan items beneath /folder1/ (because subfolder1 is treated as a file, not a folder)

🔍 Limiting Scan Scope - Examples

  • Example 1: Scan the full domain
    Set the target URL to http://www.example.com (WITH or WITHOUT a trailing forward slash); in this case, the option "Limit Crawling to address and sub-directories only" has no effect on the scope of the scan.
  • Example 2: Scan only part of the site or domain
    Set the target URL to http://www.example.com/part1/ (WITH a trailing forward slash), and ENABLE the option "Limit Crawling to address and sub-directories only" to limit the scope of the scan to ONLY resources beneath the /part1/ folder.
    If you DISABLE the option "Limit Crawling to address and sub-directories only", then any path specified in the target URL will be ignored and you will scan the full domain.

Therefore, if your target URL is set to http://www.example.com/task/subtask, you can disable the option "Limit Crawling to address and sub-directories only" to instruct the crawler to also look for resources in http://www.example.com/task/ and http://www.example.com.
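
The "up to the last forward slash" rule is easy to reason about in code. The following Python sketch illustrates the rule described above (it is not Acunetix code, just a model of the behaviour):

    from urllib.parse import urlsplit

    def scope_base(target_url: str) -> str:
        # The scan scope is everything up to (and including) the last "/"
        # in the path; a path with no trailing slash ends in a file name.
        parts = urlsplit(target_url)
        path = parts.path
        base = path[: path.rfind("/") + 1] if "/" in path else "/"
        return f"{parts.scheme}://{parts.netloc}{base}"

    print(scope_base("http://www.example.com/folder1/subfolder1/"))  # .../folder1/subfolder1/
    print(scope_base("http://www.example.com/folder1/subfolder1"))   # .../folder1/
    print(scope_base("http://www.example.com"))                      # http://www.example.com/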

Excluded Paths

You can exclude specific parts of a web application from the scan by specifying paths to exclude based on regular expressions. For example:

  • Exclude a specific folder
    Regular expression: ^\/sub1\/sub2(\/.*)?$
    Will match (will exclude): /sub1/sub2 and /sub1/sub2/abc
    Will NOT match (will NOT exclude): /sub2
  • Exclude URLs more than 2 levels deep
    Regular expression: ^(\/.+){3,}
    Will match (will exclude): /sub1/sub2/sub3 and /sub1/sub2/sub.html?qry=abc
    Will NOT match (will NOT exclude): /sub1, /sub1/sub2 and /sub1/sub2.php?qry=def
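
Before saving an exclusion, you may want to sanity-check the regular expression against sample paths. A small Python sketch using the patterns from the list above:

    import re

    patterns = {
        "exclude specific folder": r"^\/sub1\/sub2(\/.*)?$",
        "exclude URLs more than 2 levels deep": r"^(\/.+){3,}",
    }
    sample_paths = ["/sub1/sub2", "/sub1/sub2/abc", "/sub2",
                    "/sub1/sub2/sub3", "/sub1/sub2/sub.html?qry=abc",
                    "/sub1", "/sub1/sub2.php?qry=def"]

    for name, pattern in patterns.items():
        print(name)
        for path in sample_paths:
            excluded = re.match(pattern, path) is not None
            print(f"  {path}: {'excluded' if excluded else 'kept'}")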

Import Files

You can add import files to your target to guide the crawler, specifying paths for the crawler to add to the scan even if no other page in the target links to them.

If you ENABLE the option labelled "Restrict scans to import files", then the crawler will add to the scan ONLY the paths listed in the import file, ignoring all other parts of the target.

If you DISABLE the option labelled "Restrict scans to import files", then the crawler will crawl the target as usual, and will additionally add the paths listed in the import file EVEN if no other part of the target links to them (orphaned folders/files).

For example, if you create a target with URL http://www.example.com, and use a text import file with the following contents:

http://www.example.com/main/sub1/

http://www.example.com/extra/sub3/

...then, depending on whether the option "Restrict scans to import files" is enabled or disabled, we get the following behaviour:

  • With "Restrict scans to import files" ENABLED:
    Will crawl and scan: http://www.example.com/extra/sub3/ and http://www.example.com/main/sub1/
    Will NOT crawl and scan: http://www.example.com/main/sub2/, http://www.example.com/extra/sub1, http://www.example.com/new/ and http://www.example.com/
  • With "Restrict scans to import files" DISABLED:
    Will crawl and scan: http://www.example.com/, http://www.example.com/extra/sub1, http://www.example.com/extra/sub3/, http://www.example.com/main/sub1/, http://www.example.com/main/sub2/ and http://www.example.com/new/
    Will NOT crawl and scan: (nothing; the whole target is in scope)

HTTP Options

When creating your targets, you can configure additional settings to customise how the scanner will send out web requests.

HTTP Authentication

Web servers may require users to authenticate themselves, presenting the user with a dialog to fill in a username and password. This information is sent to the web server in the "Authorization" header, using the "Basic" scheme.

If your target requires this type of authentication, you can specify the login URL, username and password in the fields provided for Acunetix to use when it encounters an HTTP Authentication request by the web server.
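
For reference, the "Authorization: Basic" value is simply the base64 encoding of username:password. A quick Python illustration, using made-up credentials:

    import base64

    # Made-up credentials, for illustration only.
    username, password = "alice", "s3cret"
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    print(f"Authorization: Basic {token}")
    # Output: Authorization: Basic YWxpY2U6czNjcmV0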

Client Certificate

If your target's web server requires client (browser) authentication via certificates, you can configure your target to use a client certificate file, by performing the following:

  • enable the Client Certificate slider
  • upload the client certificate .crt file
  • enter and confirm the password for the client certificate file
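
If you wish to confirm that the certificate is accepted before configuring the target, you could make a test request outside Acunetix. A sketch using Python's requests library (the filenames are placeholders, and requests expects the certificate and private key as unencrypted PEM files, unlike the password-protected upload described above):

    import requests

    # "client.crt" and "client.key" are placeholder filenames; requests
    # needs the private key as a separate, unencrypted PEM file.
    response = requests.get("https://www.example.com/",
                            cert=("client.crt", "client.key"))
    print(response.status_code)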

Proxy Server

If your network requires traffic to go through an HTTP proxy, or you wish to make Acunetix send HTTP requests through a proxy server to analyse the traffic later on, you can configure your target for this by performing the following:

  • enable the Proxy Server slider
  • set the IP Address or hostname for the proxy server
  • set the listening port for the proxy server
  • if your proxy server requires authentication, also set the username and password for the proxy server
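
You can verify the proxy details with a test request routed through the same proxy before saving them. An illustrative Python sketch with placeholder address, port and credentials:

    import requests

    # Placeholder proxy address, port and credentials.
    proxy = "http://user:pass@192.168.1.10:8080"
    proxies = {"http": proxy, "https": proxy}

    response = requests.get("http://www.example.com/", proxies=proxies)
    print(response.status_code)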

Other Options

Web Technologies

Acunetix can typically detect the web technologies being used by your target, but you can use this option to set which technology or technologies Acunetix will optimise for during the scanning process.

Custom Headers

If your target changes its response behaviour depending on custom headers, you can set custom headers as follows:

  • enable the Custom Headers slider
  • for each custom header you need to set:
      • enter your custom header in the format <headerName>:<headerValue>
      • click the "+" icon
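
To preview how your target responds to the headers you plan to configure, you could replay them outside the scanner. This sketch (placeholder header names, values and URL) parses the same <headerName>:<headerValue> format and sends the result with a request:

    import requests

    # Placeholder headers in the same "<headerName>:<headerValue>" format.
    raw_headers = ["X-Custom-Token:abc123", "X-Feature-Flag:beta"]
    headers = dict(h.split(":", 1) for h in raw_headers)

    response = requests.get("http://www.example.com/", headers=headers)
    print(response.status_code)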

Custom Cookies

You can configure custom cookies to be sent to your target with each request, allowing Acunetix to crawl and scan your target correctly. To set custom cookies for your target:

  • enable the Custom Cookies slider
  • for each path you need to set cookies for:
      • enter the URL for the path
      • enter your cookie values for the URL
      • click the "+" icon
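
Similarly, you can replay cookie values outside the scanner to confirm they behave as expected. The sketch below (placeholder cookie name, value, domain and path) scopes a cookie to a specific path, mirroring the per-path setting above:

    import requests

    session = requests.Session()
    # Placeholder cookie, scoped to a specific path on the target.
    session.cookies.set("sessionid", "abc123",
                        domain="www.example.com", path="/app/")

    response = session.get("http://www.example.com/app/")
    print(response.status_code)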

Issue Tracker Integration

You can pair your target with a previously configured Issue Tracker by doing the following:

  • enable the Issue Tracker slider
  • from the dropdown, select the name of the pre-configured issue tracker configuration you wish to use
  • at the top of the Target Information panel, click the "Save" button

When the scan of your target is completed, you will be able to select the vulnerabilities to submit to your issue tracker.

Allowed Hosts

Your target may start at one domain, but span multiple domains. You can use this option to allow the crawler and scanner to follow links in your web applications across multiple domains, as long as each of those domains is:

  • already configured as targets
  • listed in the Allowed Hosts list

To set up your Allowed Hosts, for each required Allowed Host:

  • click the "Add Host" button
  • select the target with the required domain name

Excluded Hours

Newly created targets are bound to the default excluded hours profile.

You may select a different profile for your target by selecting your desired profile from the dropdown list.

Scanning Engine

If you have a multi-engine setup, every newly created target will use the Main Installation engine by default.

You can choose to have your target scanned by one of the additional scanning engines by enabling the Scanning Engine slider and selecting the desired engine from the dropdown list.

Debug Scans

If you need to troubleshoot the scanning done on a specific target, you can enable debug logging for the target: the Acunetix scanner will log the progress of the scan and any problems encountered during the scan. To do this, enable the "Debug scans for this target" checkbox.

The debug logs are stored in:

  • Linux: /home/acunetix/.acunetix/data/scans/
  • macOS: /Applications/Acunetix.app/Contents/Resources/data/scans/
  • Windows: C:\ProgramData\Acunetix\shared\scans

The Events tab of the scan will indicate the name of the zip file containing the logs for the scan.
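
If you need to locate the latest debug log archive programmatically, here is a small Python sketch using the default locations listed above (a convenience helper, not part of Acunetix):

    import glob
    import os
    import platform

    # Default debug log locations, as listed above.
    log_dirs = {
        "Linux": "/home/acunetix/.acunetix/data/scans/",
        "Darwin": "/Applications/Acunetix.app/Contents/Resources/data/scans/",
        "Windows": r"C:\ProgramData\Acunetix\shared\scans",
    }

    scan_dir = log_dirs.get(platform.system(), log_dirs["Linux"])
    archives = sorted(glob.glob(os.path.join(scan_dir, "*.zip")),
                      key=os.path.getmtime)
    print(archives[-1] if archives else "no scan log archives found")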

 
