
Command line user interface · Changes

Page history
Updated Command-line-user-interface (markdown), authored Aug 02, 2014 by Tasos Laskos
guides/user/Command-line-user-interface.md
View page @ 9ae714bc
<h2 id='cli_help_output'><a href='#cli_help_output'>CLI Help Output</a></h2>
```
$ arachni -h
Arachni - Web Application Security Scanner Framework v1.0
Author: Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>
(With the support of the community and the Arachni Team.)

Documentation: http://arachni-scanner.com/wiki

Usage: ./bin/arachni [options] URL

Generic
  -h, --help  Output this message.
  --version   Show version information.
  --authorized-by EMAIL_ADDRESS
      E-mail address of the person who authorized the scan.
      (It'll make it easier on the sys-admins during log reviews.)
      (Will be used as a value for the 'From' HTTP request header.)

Output
  --verbose  Show verbose output.
  --debug [LEVEL 1-3]  Show debugging information.
  --only-positives  Only output positive results.

Scope
  --scope-include-pattern PATTERN
      Only include resources whose path/action matches PATTERN.
      (Can be used multiple times.)
  --scope-include-subdomains
      Follow links to subdomains.
      (Default: false)
  --scope-exclude-pattern PATTERN
      Exclude resources whose path/action matches PATTERN.
      (Can be used multiple times.)
  --scope-exclude-content-pattern PATTERN
      Exclude pages whose content matches PATTERN.
      (Can be used multiple times.)
  --scope-exclude-binaries
      Exclude non text-based pages.
      (Binary content can confuse passive checks that perform pattern matching.)
  --scope-redundant-path-pattern PATTERN:LIMIT
      Limit crawl on redundant pages like galleries or catalogs.
      (URLs matching PATTERN will be crawled LIMIT amount of times.)
      (Can be used multiple times.)
  --scope-auto-redundant [LIMIT]
      Only follow URLs with identical query parameter names LIMIT amount of times.
      (Default: 10)
  --scope-directory-depth-limit LIMIT
      Directory depth limit.
      (Default: inf)
      (How deep Arachni should go into the site structure.)
  --scope-page-limit LIMIT
      How many pages to crawl and audit.
      (Default: inf)
  --scope-extend-paths FILE
      Add the paths in FILE to the ones discovered by the crawler.
      (Can be used multiple times.)
  --scope-restrict-paths FILE
      Use the paths in FILE instead of crawling.
      (Can be used multiple times.)
  --scope-url-rewrite PATTERN:SUBSTITUTION
      Rewrite URLs based on the given PATTERN and SUBSTITUTION.
      To convert:  http://test.com/articles/some-stuff/23
      to:          http://test.com/articles.php?id=23
      use:         /articles\/[\w-]+\/(\d+)/:articles.php?id=\1
  --scope-dom-depth-limit LIMIT
      How deep to go into the DOM tree of each page, for pages with JavaScript code.
      (Default: 10)
      (Setting it to '0' will disable browser analysis.)
  --scope-https-only
      Forces the system to only follow HTTPS URLs.
      (Default: false)

Audit
  --audit-links  Audit links.
  --audit-forms  Audit forms.
  --audit-cookies  Audit cookies.
  --audit-cookies-extensively
      Submit all links and forms of the page along with the cookie permutations.
      (*WARNING*: This will severely increase the scan-time.)
  --audit-headers  Audit headers.
  --audit-link-template TEMPLATE
      Regular expression with named captures to use to extract input information
      from generic paths.
      To extract the 'input1' and 'input2' inputs from:
          http://test.com/input1/value1/input2/value2
      use:
          /input1\/(?<input1>\w+)\/input2\/(?<input2>\w+)/
      (Can be used multiple times.)
  --audit-with-both-methods
      Audit elements with both GET and POST requests.
      (*WARNING*: This will severely increase the scan-time.)
  --audit-exclude-vector PATTERN
      Exclude input vectors whose name matches PATTERN.
      (Can be used multiple times.)
  --audit-include-vector PATTERN
      Include only input vectors whose name matches PATTERN.
      (Can be used multiple times.)

Input
  --input-value PATTERN:VALUE
      PATTERN to match against input names and VALUE to use for them.
      (Can be used multiple times.)
  --input-values-file FILE
      YAML file containing a Hash object with regular expressions, to match
      against input names, as keys and input values as values.
  --input-without-defaults
      Do not use the system default input values.
  --input-force  Fill-in even non-empty inputs.

HTTP
  --http-user-agent USER_AGENT
      Value for the 'User-Agent' HTTP request header.
      (Default: Arachni/v1.0)
  --http-request-concurrency MAX_CONCURRENCY
      Maximum HTTP request concurrency.
      (Default: 20)
      (Be careful not to kill your server.)
      (*NOTE*: If your scan seems unresponsive try lowering the limit.)
  --http-request-timeout TIMEOUT
      HTTP request timeout in milliseconds.
      (Default: 50000)
  --http-request-redirect-limit LIMIT
      Maximum amount of redirects to follow for each HTTP request.
      (Default: 5)
  --http-request-queue-size QUEUE_SIZE
      Maximum amount of requests to keep in the queue.
      Bigger size means better scheduling and better performance,
      smaller means less RAM consumption.
      (Default: 500)
  --http-request-header NAME=VALUE
      Specify custom headers to be included in the HTTP requests.
      (Can be used multiple times.)
  --http-response-max-size LIMIT
      Do not download response bodies larger than the specified LIMIT, in bytes.
      (Default: inf)
  --http-cookie-jar COOKIE_JAR_FILE
      Netscape-styled HTTP cookiejar file.
  --http-cookie-string COOKIE
      Cookie representation as a 'Cookie' HTTP request header.
  --http-authentication-username USERNAME
      Username for HTTP authentication.
  --http-authentication-password PASSWORD
      Password for HTTP authentication.
  --http-proxy ADDRESS:PORT
      Proxy to use.
  --http-proxy-authentication USERNAME:PASSWORD
      Proxy authentication credentials.
  --http-proxy-type http,http_1_0,socks4,socks5,socks4a
      Proxy type.
      (Default: auto)

Checks
  --checks-list [PATTERN]
      List available checks based on the provided pattern.
      (If no pattern is provided all checks will be listed.)
  --checks CHECK,CHECK2,...
      Comma separated list of checks to load.
      Checks are referenced by their filename without the '.rb' extension,
      use '--checks-list' to list all.
      Use '*' as a check name to load all checks or as a wildcard, like so:
          xss*   to load all XSS checks
          sqli*  to load all SQL injection checks
          etc.
      You can exclude checks by prefixing their name with a minus sign:
          --checks=*,-backup_files,-xss
      The above will load all checks except for the 'backup_files' and 'xss' checks.
      Or mix and match:
          -xss*  to unload all XSS checks.

Plugins
  --plugins-list [PATTERN]
      List available plugins based on the provided pattern.
      (If no pattern is provided all plugins will be listed.)
  --plugin 'PLUGIN:OPTION=VALUE,OPTION2=VALUE2'
      PLUGIN is the name of the plugin as displayed by '--plugins-list'.
      (Plugins are referenced by their filename without the '.rb' extension,
      use '--plugins-list' to list all.)
      (Can be used multiple times.)

Platforms
  --platforms-list  List available platforms.
  --platforms-no-fingerprinting
      Disable platform fingerprinting.
      (By default, the system will try to identify the deployed server-side
      platforms automatically in order to avoid sending irrelevant payloads.)
  --platforms PLATFORM,PLATFORM2,...
      Comma separated list of platforms (by shortname) to audit.
      (The given platforms will be used *in addition* to fingerprinting. In order
      to restrict the audit to these platforms enable the
      '--platforms-no-fingerprinting' option.)

Session
  --login-check-url URL
      URL to use to verify that the scanner is still logged in to the web application.
      (Requires 'login-check-pattern'.)
  --login-check-pattern PATTERN
      Pattern used against the body of the 'login-check-url' to verify that the
      scanner is still logged in to the web application.
      (Requires 'login-check-url'.)

Profiles
  --profile-save-filepath FILEPATH
      Save the current configuration profile/options to FILEPATH.
  --profile-load-filepath FILEPATH
      Load a configuration profile from FILEPATH.

Browser cluster
  --browser-cluster-pool-size SIZE
      Amount of browser workers to keep in the pool and put to work.
  --browser-cluster-job-timeout SECONDS
      Maximum allowed time for each job.
  --browser-cluster-worker-time-to-live LIMIT
      Re-spawn the browser of each worker every LIMIT jobs.
  --browser-cluster-ignore-images
      Do not load images.
  --browser-cluster-screen-width
      Browser screen width.
  --browser-cluster-screen-height
      Browser screen height.

Report
  --report-save-path PATH
      Directory or file path where to store the scan report.
      You can use the generated file to create reports in several formats with
      the 'arachni_report' executable.

Snapshot
  --snapshot-save-path PATH
      Directory or file path where to store the snapshot of a suspended scan.
      You can use the generated file to resume the scan with the
      'arachni_restore' executable.

Timeout
  --timeout HOURS:MINUTES:SECONDS
      Stop the scan after the given duration is exceeded.
  --timeout-suspend
      Suspend after the timeout.
      You can use the generated file to resume the scan with the
      'arachni_restore' executable.
```
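The options listed in the help output combine on a single command line. Below is a minimal sketch of a v1.0 invocation, assuming `arachni` is on the PATH; the target URL `http://test.com`, the file names, and all option values are illustrative placeholders chosen for this example, not recommendations or defaults:

```shell
# Hypothetical seed-values file for --input-values-file:
# keys are regular expressions matched against input names,
# values are the values to fill into matching inputs.
cat > values.yml <<'YAML'
(user|login): john
pass: secret
YAML

# Example scan: load all XSS and SQL injection checks, cap the crawl at
# 200 pages, lower request concurrency for a fragile server, and save
# the scan report for later processing.
arachni http://test.com \
    --checks='xss*,sqli*' \
    --scope-page-limit=200 \
    --http-request-concurrency=10 \
    --input-values-file=values.yml \
    --report-save-path=test.com.afr
```

Per the Report section of the help output, the saved `.afr` file can afterwards be converted into other report formats rather than re-running the scan.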