# RPC client

Updated RPC-client (markdown), authored Aug 03, 2014 by Tasos Laskos.
Page: guides/user/RPC-client.md @ ac4fd021
## Version 1.0

The RPC client command line interface is similar to the
[[Command line user interface | Command line user interface]].
The differences between the two are:

* The `--dispatcher-url` option -- The URL of the RPC Dispatcher server to connect to,
  in the form of `host:port`.
* Support for distribution options.
* Support for SSL peer verification for the Dispatch server.
```
Arachni - Web Application Security Scanner Framework v1.0
   Author: Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>

           (With the support of the community and the Arachni Team.)

   Documentation: http://arachni-scanner.com/wiki


Usage: ./bin/arachni_rpc [options] --dispatcher-url HOST:PORT URL

Generic
    -h, --help                Output this message.
    --version                 Show version information.

    --authorized-by EMAIL_ADDRESS
        E-mail address of the person who authorized the scan.
        (It'll make it easier on the sys-admins during log reviews.)
        (Will be used as a value for the 'From' HTTP request header.)

Scope
    --scope-include-pattern PATTERN
        Only include resources whose path/action matches PATTERN.
        (Can be used multiple times.)

    --scope-include-subdomains
        Follow links to subdomains.
        (Default: false)

    --scope-exclude-pattern PATTERN
        Exclude resources whose path/action matches PATTERN.
        (Can be used multiple times.)

    --scope-exclude-content-pattern PATTERN
        Exclude pages whose content matches PATTERN.
        (Can be used multiple times.)

    --scope-exclude-binaries
        Exclude non text-based pages.
        (Binary content can confuse passive checks that perform pattern matching.)

    --scope-redundant-path-pattern PATTERN:LIMIT
        Limit crawl on redundant pages like galleries or catalogs.
        (URLs matching PATTERN will be crawled LIMIT amount of times.)
        (Can be used multiple times.)

    --scope-auto-redundant [LIMIT]
        Only follow URLs with identical query parameter names LIMIT amount of times.
        (Default: 10)

    --scope-directory-depth-limit LIMIT
        Directory depth limit.
        (Default: inf)
        (How deep Arachni should go into the site structure.)

    --scope-page-limit LIMIT
        How many pages to crawl and audit.
        (Default: inf)

    --scope-extend-paths FILE
        Add the paths in FILE to the ones discovered by the crawler.
        (Can be used multiple times.)

    --scope-restrict-paths FILE
        Use the paths in FILE instead of crawling.
        (Can be used multiple times.)

    --scope-url-rewrite PATTERN:SUBSTITUTION
        Rewrite URLs based on the given PATTERN and SUBSTITUTION.
        To convert:  http://test.com/articles/some-stuff/23  to  http://test.com/articles.php?id=23
        Use:         /articles\/[\w-]+\/(\d+)/:articles.php?id=\1

    --scope-dom-depth-limit LIMIT
        How deep to go into the DOM tree of each page, for pages with JavaScript code.
        (Default: 10)
        (Setting it to '0' will disable browser analysis.)

    --scope-https-only        Forces the system to only follow HTTPS URLs.
        (Default: false)

Audit
    --audit-links             Audit links.

    --audit-forms             Audit forms.

    --audit-cookies           Audit cookies.

    --audit-cookies-extensively
        Submit all links and forms of the page along with the cookie permutations.
        (*WARNING*: This will severely increase the scan-time.)

    --audit-headers           Audit headers.

    --audit-link-template TEMPLATE
        Regular expression with named captures to use to extract input information from generic paths.
        To extract the 'input1' and 'input2' inputs from:
            http://test.com/input1/value1/input2/value2
        Use:
            /input1\/(?<input1>\w+)\/input2\/(?<input2>\w+)/
        (Can be used multiple times.)

    --audit-with-both-methods
        Audit elements with both GET and POST requests.
        (*WARNING*: This will severely increase the scan-time.)

    --audit-exclude-vector PATTERN
        Exclude input vectors whose name matches PATTERN.
        (Can be used multiple times.)

    --audit-include-vector PATTERN
        Include only input vectors whose name matches PATTERN.
        (Can be used multiple times.)

Input
    --input-value PATTERN:VALUE
        PATTERN to match against input names and VALUE to use for them.
        (Can be used multiple times.)

    --input-values-file FILE
        YAML file containing a Hash object with regular expressions, to match against input names, as keys and input values as values.

    --input-without-defaults  Do not use the system default input values.

    --input-force             Fill-in even non-empty inputs.

HTTP
    --http-user-agent USER_AGENT
        Value for the 'User-Agent' HTTP request header.
        (Default: Arachni/v1.0)

    --http-request-concurrency MAX_CONCURRENCY
        Maximum HTTP request concurrency.
        (Default: 20)
        (Be careful not to kill your server.)
        (*NOTE*: If your scan seems unresponsive try lowering the limit.)

    --http-request-timeout TIMEOUT
        HTTP request timeout in milliseconds.
        (Default: 50000)

    --http-request-redirect-limit LIMIT
        Maximum amount of redirects to follow for each HTTP request.
        (Default: 5)

    --http-request-queue-size QUEUE_SIZE
        Maximum amount of requests to keep in the queue.
        Bigger size means better scheduling and better performance,
        smaller means less RAM consumption.
        (Default: 500)

    --http-request-header NAME=VALUE
        Specify custom headers to be included in the HTTP requests.
        (Can be used multiple times.)

    --http-response-max-size LIMIT
        Do not download response bodies larger than the specified LIMIT, in bytes.
        (Default: inf)

    --http-cookie-jar COOKIE_JAR_FILE
        Netscape-styled HTTP cookiejar file.

    --http-cookie-string COOKIE
        Cookie representation as a 'Cookie' HTTP request header.

    --http-authentication-username USERNAME
        Username for HTTP authentication.

    --http-authentication-password PASSWORD
        Password for HTTP authentication.

    --http-proxy ADDRESS:PORT
        Proxy to use.

    --http-proxy-authentication USERNAME:PASSWORD
        Proxy authentication credentials.

    --http-proxy-type http,http_1_0,socks4,socks5,socks4a
        Proxy type.
        (Default: auto)

Checks
    --checks-list [PATTERN]   List available checks based on the provided pattern.
        (If no pattern is provided all checks will be listed.)

    --checks CHECK,CHECK2,...
        Comma separated list of checks to load.
        Checks are referenced by their filename without the '.rb' extension, use '--checks-list' to list all.
        Use '*' as a check name to load all checks or as a wildcard, like so:
            xss*   to load all XSS checks
            sqli*  to load all SQL injection checks
            etc.

        You can exclude checks by prefixing their name with a minus sign:
            --checks=*,-backup_files,-xss
        The above will load all checks except for the 'backup_files' and 'xss' checks.

        Or mix and match:
            -xss*   to unload all XSS checks.

Plugins
    --plugins-list [PATTERN]
        List available plugins based on the provided pattern.
        (If no pattern is provided all plugins will be listed.)

    --plugin 'PLUGIN:OPTION=VALUE,OPTION2=VALUE2'
        PLUGIN is the name of the plugin as displayed by '--plugins-list'.
        (Plugins are referenced by their filename without the '.rb' extension, use '--plugins-list' to list all.)
        (Can be used multiple times.)

Platforms
    --platforms-list          List available platforms.

    --platforms-no-fingerprinting
        Disable platform fingerprinting.
        (By default, the system will try to identify the deployed server-side platforms automatically
        in order to avoid sending irrelevant payloads.)

    --platforms PLATFORM,PLATFORM2,...
        Comma separated list of platforms (by shortname) to audit.
        (The given platforms will be used *in addition* to fingerprinting. In order to restrict the audit to
        these platforms enable the '--platforms-no-fingerprinting' option.)

Session
    --login-check-url URL     URL to use to verify that the scanner is still logged in to the web application.
        (Requires 'login-check-pattern'.)

    --login-check-pattern PATTERN
        Pattern used against the body of the 'login-check-url' to verify that the scanner is still logged in to the web application.
        (Requires 'login-check-url'.)

Profiles
    --profile-save-filepath FILEPATH
        Save the current configuration profile/options to FILEPATH.

    --profile-load-filepath FILEPATH
        Load a configuration profile from FILEPATH.

Browser cluster
    --browser-cluster-pool-size SIZE
        Amount of browser workers to keep in the pool and put to work.
        (Default: 6)

    --browser-cluster-job-timeout SECONDS
        Maximum allowed time for each job.
        (Default: 120)

    --browser-cluster-worker-time-to-live LIMIT
        Re-spawn the browser of each worker every LIMIT jobs.
        (Default: 100)

    --browser-cluster-ignore-images
        Do not load images.

    --browser-cluster-screen-width
        Browser screen width.
        (Default: 1600)

    --browser-cluster-screen-height
        Browser screen height.
        (Default: 1200)

Distribution
    --dispatcher-url HOST:PORT
        Dispatcher server to use.

    --spawns SPAWNS           How many slaves to spawn for a high-performance multi-Instance scan.
        (When no grid mode has been specified, all slaves will be from the same Dispatcher machine.
        When a grid-mode has been specified, this option will be treated as a possible maximum and
        not a hard value.)

    --grid-mode balance,aggregate
        Sets the Grid mode of operation for this scan.
        Valid modes are:
            * balance   -- Slaves will be provided by the least burdened Grid Dispatchers.
            * aggregate -- In addition to balancing, slaves will all be from Dispatchers
                with unique bandwidth Pipe-IDs to result in application-level line-aggregation.

    --grid                    Shorthand for '--grid-mode=balance'.

SSL
    --ssl-ca FILE             Location of the CA certificate (.pem).

    --ssl-private-key FILE    Location of the client SSL private key (.pem).

    --ssl-certificate FILE    Location of the client SSL certificate (.pem).

Report
    --report-save-path PATH   Directory or file path where to store the scan report.
        You can use the generated file to create reports in several formats with the 'arachni_reporter' executable.

Timeout
    --timeout HOURS:MINUTES:SECONDS
        Stop the scan after the given duration is exceeded.
```
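The `--scope-url-rewrite` option takes a plain regular-expression rewrite in `PATTERN:SUBSTITUTION` form. A minimal Ruby sketch of the example from the help output above (this illustrates the regex semantics, not Arachni's internal implementation):

```ruby
# PATTERN half of --scope-url-rewrite, as shown in the help output.
pattern      = %r{articles/[\w-]+/(\d+)}
# SUBSTITUTION half; \1 refers back to the first capture group.
substitution = 'articles.php?id=\1'

url       = 'http://test.com/articles/some-stuff/23'
rewritten = url.sub(pattern, substitution)
# rewritten is now "http://test.com/articles.php?id=23"
```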
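The `--audit-link-template` TEMPLATE relies on named captures; each capture name becomes an input name and the captured text its value. A short Ruby sketch using the template from the help output:

```ruby
# TEMPLATE from the help output: named captures identify the inputs
# hidden inside a generic path.
template = %r{input1/(?<input1>\w+)/input2/(?<input2>\w+)}

match  = template.match('http://test.com/input1/value1/input2/value2')
inputs = { 'input1' => match[:input1], 'input2' => match[:input2] }
# inputs is now { "input1" => "value1", "input2" => "value2" }
```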
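For `--input-values-file`, the help output describes a YAML Hash whose keys are regular expressions matched against input names. A hedged Ruby sketch of that idea — the file contents and the lookup below are illustrative assumptions, so check them against the Arachni documentation before relying on the exact format:

```ruby
require 'yaml'

# Hypothetical contents of an --input-values-file: keys are treated as
# regular expressions matched against input names, values are fill-ins.
file_contents = <<~YAML
  name: John Doe
  mail: john.doe@example.com
YAML

values = YAML.safe_load(file_contents)

# Sketch of the lookup: the first key matching the input's name wins.
input_name = 'email'
_, value = values.find { |pattern, _| input_name =~ Regexp.new(pattern) }
# value is now "john.doe@example.com"
```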
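The wildcard and minus-sign semantics of `--checks` can be pictured with a small Ruby sketch. The check-name pool here is illustrative (real names come from `--checks-list`), and this is a model of the selection rules, not Arachni's actual loader:

```ruby
# Illustrative pool of check names.
available = %w[xss xss_dom sqli backup_files csrf]

# Equivalent of --checks=*,-backup_files,-xss
spec = %w[* -backup_files -xss]

includes, excludes = spec.partition { |s| !s.start_with?('-') }
excludes = excludes.map { |s| s.delete_prefix('-') }

loaded = available.select do |check|
  includes.any?  { |glob| File.fnmatch?(glob, check) } &&
    excludes.none? { |glob| File.fnmatch?(glob, check) }
end
# loaded is now ["xss_dom", "sqli", "csrf"]
```

Note that `-xss` removes only the `xss` check itself; as the help output says, `-xss*` would be needed to unload every XSS check.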
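The `--timeout` value is a plain `HOURS:MINUTES:SECONDS` duration. A tiny hypothetical helper showing how such a string maps to a number of seconds (for illustration only; Arachni parses this internally):

```ruby
# Hypothetical helper: convert an HOURS:MINUTES:SECONDS string, as
# accepted by --timeout, into a total number of seconds.
def timeout_to_seconds(timeout)
  hours, minutes, seconds = timeout.split(':').map(&:to_i)
  (hours * 3600) + (minutes * 60) + seconds
end

timeout_to_seconds('01:30:00')
# => 5400, i.e. the scan stops after an hour and a half
```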