
RPC client · Changes

Updated RPC-client (markdown), authored Aug 03, 2014 by Tasos Laskos
guides/user/RPC-client.md
View page @ ac4fd021
## Version 0.4.3
## Version 1.0
The RPC client command line interface is similar to the
[[Command line user interface | Command line user interface]].
The differences between the two are:
* The `--server` option -- The URL of the RPC Dispatcher server to connect to in
* The `--dispatcher-url` option -- The URL of the RPC Dispatcher server to connect to in
the form of `host:port`
* Support for distribution options.
* Support for SSL peer verification for the Dispatcher server.
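To illustrate that first difference, here is the same scan launched under each version. The host, port and target URL are placeholder values, and a Dispatcher is assumed to be listening there already (typically started beforehand with `arachni_rpcd`); the commands are echoed rather than executed so the sketch can be tried without the scanner installed.

```shell
# Placeholder Dispatcher address and target URL -- adjust to your setup.
# Drop the leading 'echo' to actually launch a scan.
DISPATCHER="localhost:7331"
TARGET="http://testsite.example/"

# v0.4.3 syntax:
echo arachni_rpc --server "$DISPATCHER" "$TARGET"

# v1.0 syntax:
echo arachni_rpc --dispatcher-url "$DISPATCHER" "$TARGET"
```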
```
Arachni - Web Application Security Scanner Framework v0.4.3
Arachni - Web Application Security Scanner Framework v1.0
Author: Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>
(With the support of the community and the Arachni Team.)
@@ -20,241 +20,281 @@ Arachni - Web Application Security Scanner Framework v0.4.3
Documentation: http://arachni-scanner.com/wiki
Usage: arachni_rpc --server host:port [options] url
Usage: ./bin/arachni_rpc [options] --dispatcher-url HOST:PORT URL
Supported options:
Generic
-h, --help Output this message.
--version Show version information.
General ----------------------
-h
--help Output this.
--version Show version information and exit.
--authorized-by EMAIL_ADDRESS
E-mail address of the person who authorized the scan.
(It'll make it easier on the sys-admins during log reviews.)
(Will be used as a value for the 'From' HTTP request header.)
-v Be verbose.
--debug Show what is happening internally.
(You should give it a shot sometime ;) )
Scope
--scope-include-pattern PATTERN
Only include resources whose path/action matches PATTERN.
(Can be used multiple times.)
--only-positives Echo positive results *only*.
--scope-include-subdomains
Follow links to subdomains.
(Default: false)
--http-req-limit=<integer> Concurrent HTTP requests limit.
(Default: 20)
(Be careful not to kill your server.)
(*NOTE*: If your scan seems unresponsive try lowering the limit.)
--scope-exclude-pattern PATTERN
Exclude resources whose path/action matches PATTERN.
(Can be used multiple times.)
--http-timeout=<integer> HTTP request timeout in milliseconds.
--scope-exclude-content-pattern PATTERN
Exclude pages whose content matches PATTERN.
(Can be used multiple times.)
--cookie-jar=<filepath> Netscape HTTP cookie file, use curl to create it.
--scope-exclude-binaries
Exclude non text-based pages.
(Binary content can confuse passive checks that perform pattern matching.)
--cookie-string='<name>=<value>; <name2>=<value2>'
--scope-redundant-path-pattern PATTERN:LIMIT
Limit crawl on redundant pages like galleries or catalogs.
(URLs matching PATTERN will be crawled LIMIT amount of times.)
(Can be used multiple times.)
Cookies, as a string, to be sent to the web application.
--scope-auto-redundant [LIMIT]
Only follow URLs with identical query parameter names LIMIT amount of times.
(Default: 10)
--user-agent=<string> Specify user agent.
--scope-directory-depth-limit LIMIT
Directory depth limit.
(Default: inf)
(How deep Arachni should go into the site structure.)
--custom-header='<name>=<value>'
--scope-page-limit LIMIT
How many pages to crawl and audit.
(Default: inf)
Specify custom headers to be included in the HTTP requests.
--scope-extend-paths FILE
Add the paths in FILE to the ones discovered by the crawler.
(Can be used multiple times.)
--authed-by=<string> E-mail address of the person who authorized the scan.
(It'll make it easier on the sys-admins during log reviews.)
(Will be used as a value for the 'From' HTTP header.)
--scope-restrict-paths FILE
Use the paths in FILE instead of crawling.
(Can be used multiple times.)
--login-check-url=<url> A URL used to verify that the scanner is still logged in to the web application.
(Requires 'login-check-pattern'.)
--scope-url-rewrite PATTERN:SUBSTITUTION
Rewrite URLs based on the given PATTERN and SUBSTITUTION.
To convert: http://test.com/articles/some-stuff/23 to http://test.com/articles.php?id=23
Use: /articles\/[\w-]+\/(\d+)/:articles.php?id=\1
--login-check-pattern=<regexp>
--scope-dom-depth-limit LIMIT
How deep to go into the DOM tree of each page, for pages with JavaScript code.
(Default: 10)
(Setting it to '0' will disable browser analysis.)
A pattern used against the body of the 'login-check-url' to verify that the scanner is still logged in to the web application.
(Requires 'login-check-url'.)
--scope-https-only Forces the system to only follow HTTPS URLs.
(Default: false)
Profiles -----------------------
--save-profile=<filepath> Save the current run profile/options to <filepath>.
Audit
--audit-links Audit links.
--load-profile=<filepath> Load a run profile from <filepath>.
(Can be used multiple times.)
(You can complement it with more options, except for:
* --modules
* --redundant)
--audit-forms Audit forms.
--show-profile Will output the running profile as CLI arguments.
--audit-cookies Audit cookies.
--audit-cookies-extensively
Submit all links and forms of the page along with the cookie permutations.
(*WARNING*: This will severely increase the scan-time.)
Crawler -----------------------
--audit-headers Audit headers.
-e <regexp>
--exclude=<regexp> Exclude urls matching <regexp>.
--audit-link-template TEMPLATE
Regular expression with named captures to use to extract input information from generic paths.
To extract the 'input1' and 'input2' inputs from:
http://test.com/input1/value1/input2/value2
Use:
/input1\/(?<input1>\w+)\/input2\/(?<input2>\w+)/
(Can be used multiple times.)
--exclude-page=<regexp> Exclude pages whose content matches <regexp>.
(Can be used multiple times.)
--audit-with-both-methods
Audit elements with both GET and POST requests.
(*WARNING*: This will severely increase the scan-time.)
-i <regexp>
--include=<regexp> Include *only* urls matching <regex>.
--audit-exclude-vector PATTERN
Exclude input vectors whose name matches PATTERN.
(Can be used multiple times.)
--redundant=<regexp>:<limit>
Limit crawl on redundant pages like galleries or catalogs.
(URLs matching <regexp> will be crawled <limit> amount of times.)
--audit-include-vector PATTERN
Include only input vectors whose name matches PATTERN.
(Can be used multiple times.)
--auto-redundant=<limit> Only follow <limit> amount of URLs with identical query parameter names.
(Default: inf)
(Will default to 10 if no value has been specified.)
-f
--follow-subdomains Follow links to subdomains.
(Default: off)
--depth=<integer> Directory depth limit.
(Default: inf)
(How deep Arachni should go into the site structure.)
--link-count=<integer> How many links to follow.
(Default: inf)
Input
--input-value PATTERN:VALUE
PATTERN to match against input names and VALUE to use for them.
(Can be used multiple times.)
--redirect-limit=<integer> How many redirects to follow.
(Default: 20)
--input-values-file FILE
YAML file containing a Hash object with regular expressions, to match against input names, as keys and input values as values.
--extend-paths=<filepath> Add the paths in <file> to the ones discovered by the crawler.
(Can be used multiple times.)
--input-without-defaults
Do not use the system default input values.
--restrict-paths=<filepath> Use the paths in <file> instead of crawling.
(Can be used multiple times.)
--input-force Fill-in even non-empty inputs.
--https-only Forces the system to only follow HTTPS URLs.
HTTP
--http-user-agent USER_AGENT
Value for the 'User-Agent' HTTP request header.
(Default: Arachni/v1.0)
Auditor ------------------------
--http-request-concurrency MAX_CONCURRENCY
Maximum HTTP request concurrency.
(Default: 20)
(Be careful not to kill your server.)
(*NOTE*: If your scan seems unresponsive try lowering the limit.)
-g
--audit-links Audit links.
--http-request-timeout TIMEOUT
HTTP request timeout in milliseconds.
(Default: 50000)
-p
--audit-forms Audit forms.
--http-request-redirect-limit LIMIT
Maximum amount of redirects to follow for each HTTP request.
(Default: 5)
-c
--audit-cookies Audit cookies.
--http-request-queue-size QUEUE_SIZE
Maximum amount of requests to keep in the queue.
Bigger size means better scheduling and better performance,
smaller means less RAM consumption.
(Default: 500)
--exclude-cookie=<name> Cookie to exclude from the audit by name.
--http-request-header NAME=VALUE
Specify custom headers to be included in the HTTP requests.
(Can be used multiple times.)
--exclude-vector=<name> Input vector (parameter) not to audit by name.
(Can be used multiple times.)
--http-response-max-size LIMIT
Do not download response bodies larger than the specified LIMIT, in bytes.
(Default: inf)
--audit-headers Audit HTTP headers.
(*NOTE*: Header audits use brute force.
Almost all valid HTTP request headers will be audited
even if there's no indication that the web app uses them.)
(*WARNING*: Enabling this option will result in increased requests,
maybe by an order of magnitude.)
--http-cookie-jar COOKIE_JAR_FILE
Netscape-styled HTTP cookiejar file.
Coverage -----------------------
--http-cookie-string COOKIE
Cookie representation as a 'Cookie' HTTP request header.
--audit-cookies-extensively Submit all links and forms of the page along with the cookie permutations.
(*WARNING*: This will severely increase the scan-time.)
--http-authentication-username USERNAME
Username for HTTP authentication.
--fuzz-methods Audit links, forms and cookies using both GET and POST requests.
(*WARNING*: This will severely increase the scan-time.)
--http-authentication-password PASSWORD
Password for HTTP authentication.
--exclude-binaries Exclude non text-based pages from the audit.
(Binary content can confuse recon modules that perform pattern matching.)
--http-proxy ADDRESS:PORT
Proxy to use.
Modules ------------------------
--http-proxy-authentication USERNAME:PASSWORD
Proxy authentication credentials.
--lsmod=<regexp> List available modules based on the provided regular expression.
(If no regexp is provided all modules will be listed.)
(Can be used multiple times.)
--http-proxy-type http,http_1_0,socks4,socks5,socks4a
Proxy type.
(Default: auto)
-m <modname,modname,...>
--modules=<modname,modname,...>
Checks
--checks-list [PATTERN] List available checks based on the provided pattern.
(If no pattern is provided all checks will be listed.)
Comma separated list of modules to load.
(Modules are referenced by their filename without the '.rb' extension, use '--lsmod' to list all.
Use '*' as a module name to deploy all modules or as a wildcard, like so:
xss* to load all xss modules
sqli* to load all sql injection modules
--checks CHECK,CHECK2,...
Comma separated list of checks to load.
Checks are referenced by their filename without the '.rb' extension, use '--checks-list' to list all.
Use '*' as a check name to load all checks or as a wildcard, like so:
xss* to load all XSS checks
sqli* to load all SQL injection checks
etc.
You can exclude modules by prefixing their name with a minus sign:
--modules=*,-backup_files,-xss
The above will load all modules except for the 'backup_files' and 'xss' modules.
You can exclude checks by prefixing their name with a minus sign:
--checks=*,-backup_files,-xss
The above will load all checks except for the 'backup_files' and 'xss' checks.
Or mix and match:
-xss* to unload all xss modules.)
-xss* to unload all XSS checks.
Reports ------------------------
Plugins
--plugins-list [PATTERN]
List available plugins based on the provided pattern.
(If no pattern is provided all plugins will be listed.)
--lsrep=<regexp> List available reports based on the provided regular expression.
(If no regexp is provided all reports will be listed.)
--plugin 'PLUGIN:OPTION=VALUE,OPTION2=VALUE2'
PLUGIN is the name of the plugin as displayed by '--plugins-list'.
(Plugins are referenced by their filename without the '.rb' extension, use '--plugins-list' to list all.)
(Can be used multiple times.)
--repload=<filepath> Load audit results from an '.afr' report file.
(Allows you to create new reports from finished scans.)
--report='<report>:<optname>=<val>,<optname2>=<val2>,...'
<report>: the name of the report as displayed by '--lsrep'
(Reports are referenced by their filename without the '.rb' extension, use '--lsrep' to list all.)
(Default: stdout)
(Can be used multiple times.)
Platforms
--platforms-list List available platforms.
Plugins ------------------------
--platforms-no-fingerprinting
Disable platform fingerprinting.
(By default, the system will try to identify the deployed server-side platforms automatically
in order to avoid sending irrelevant payloads.)
--lsplug=<regexp> List available plugins based on the provided regular expression.
(If no regexp is provided all plugins will be listed.)
(Can be used multiple times.)
--platforms PLATFORM,PLATFORM2,...
Comma separated list of platforms (by shortname) to audit.
(The given platforms will be used *in addition* to fingerprinting. In order to restrict the audit to
these platforms enable the '--platforms-no-fingerprinting' option.)
--plugin='<plugin>:<optname>=<val>,<optname2>=<val2>,...'
<plugin>: the name of the plugin as displayed by '--lsplug'
(Plugins are referenced by their filename without the '.rb' extension, use '--lsplug' to list all.)
(Can be used multiple times.)
Session
--login-check-url URL URL to use to verify that the scanner is still logged in to the web application.
(Requires 'login-check-pattern'.)
Platforms ----------------------
--login-check-pattern PATTERN
Pattern used against the body of the 'login-check-url' to verify that the scanner is still logged in to the web application.
(Requires 'login-check-url'.)
--lsplat List available platforms.
--no-fingerprinting Disable platform fingerprinting.
(By default, the system will try to identify the deployed server-side platforms automatically
in order to avoid sending irrelevant payloads.)
Profiles
--profile-save-filepath FILEPATH
Save the current configuration profile/options to FILEPATH.
--platforms=<platform,platform,...>
--profile-load-filepath FILEPATH
Load a configuration profile from FILEPATH.
Comma separated list of platforms (by shortname) to audit.
(The given platforms will be used *in addition* to fingerprinting. In order to restrict the audit to
these platforms enable the '--no-fingerprinting' option.)
Proxy --------------------------
Browser cluster
--browser-cluster-pool-size SIZE
Amount of browser workers to keep in the pool and put to work.
(Default: 6)
--proxy=<server:port> Proxy address to use.
--browser-cluster-job-timeout SECONDS
Maximum allowed time for each job.
(Default: 120)
--proxy-auth=<user:passwd> Proxy authentication credentials.
--browser-cluster-worker-time-to-live LIMIT
Re-spawn the browser of each worker every LIMIT jobs.
(Default: 100)
--proxy-type=<type> Proxy type; can be http, http_1_0, socks4, socks5, socks4a
(Default: http)
--browser-cluster-ignore-images
Do not load images.
--browser-cluster-screen-width
Browser screen width.
(Default: 1600)
Distribution -----------------
--browser-cluster-screen-height
Browser screen height.
(Default: 1200)
--server=<address:port> Dispatcher server to use.
(Used to provide scanner Instances.)
Distribution
--dispatcher-url HOST:PORT
Dispatcher server to use.
--spawns=<integer> How many slaves to spawn for a high-performance multi-Instance scan.
--spawns SPAWNS How many slaves to spawn for a high-performance multi-Instance scan.
(When no grid mode has been specified, all slaves will be from the same Dispatcher machine.
When a grid-mode has been specified, this option will be treated as a possible maximum and
not a hard value.)
--grid-mode=<mode> Sets the Grid mode of operation for this scan.
--grid-mode balance,aggregate
Sets the Grid mode of operation for this scan.
Valid modes are:
* balance -- Slaves will be provided by the least burdened Grid Dispatchers.
* aggregate -- In addition to balancing, slaves will all be from Dispatchers
@@ -263,15 +303,20 @@ Arachni - Web Application Security Scanner Framework v0.4.3
--grid Shorthand for '--grid-mode=balance'.
SSL --------------------------
(Do *not* use encrypted keys!)
SSL
--ssl-ca FILE Location of the CA certificate (.pem).
--ssl-private-key FILE Location of the client SSL private key (.pem).
--ssl-certificate FILE Location of the client SSL certificate (.pem).
--ssl-pkey=<file> Location of the SSL private key (.pem)
(Used to verify the client to the servers.)
Report
--report-save-path PATH Directory or file path where to store the scan report.
You can use the generated file to create reports in several formats with the 'arachni_reporter' executable.
--ssl-cert=<file> Location of the SSL certificate (.pem)
(Used to verify the client to the servers.)
--ssl-ca=<file> Location of the CA certificate (.pem)
(Used to verify the servers to the client.)
Timeout
--timeout HOURS:MINUTES:SECONDS
Stop the scan after the given duration is exceeded.
```
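Putting several of the v1.0 option groups above together (Distribution, SSL, Checks and Report), a Grid scan over SSL might be invoked as sketched below. The Dispatcher address, certificate paths and target URL are placeholders; the command is echoed rather than executed so it can be inspected without a running Grid, and the option names are exactly those from the help text above.

```shell
# Placeholder Dispatcher address, PEM file paths and target URL.
# Drop the leading 'echo' to run against a real Grid.
echo arachni_rpc \
  --dispatcher-url dispatcher.example:7331 \
  --grid --spawns 4 \
  --ssl-ca ca.pem \
  --ssl-private-key client-key.pem \
  --ssl-certificate client-cert.pem \
  --checks 'xss*,sqli*' \
  --report-save-path scan.afr \
  http://testsite.example/
```

Note that `--grid` is shorthand for `--grid-mode=balance`, and `--spawns` is treated as a maximum rather than a hard value once a grid mode is in effect.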