# RPC client
## Version 0.4.2
The RPC client command line interface is similar to the
[[Command line user interface | Command line user interface]].
The differences between the two are:
* The `--server` option -- the URL of the RPC Dispatcher server to connect to, in the form `host:port`.
* Support for the Grid/distribution options.
* Support for SSL peer verification with the Dispatch server.
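For example, a minimal invocation sketch (the Dispatcher address and target URL are placeholders for your own setup; a Dispatcher commonly listens on port 7331):

```
# Connect to a Dispatcher and scan the target,
# auditing links (-g), forms (-p) and cookies (-c).
arachni_rpc --server localhost:7331 -g -p -c http://example.com/
```

The full help output: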
```
Arachni - Web Application Security Scanner Framework v0.4.2
Author: Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>
(With the support of the community and the Arachni Team.)
Website: http://arachni-scanner.com
Documentation: http://arachni-scanner.com/wiki
Usage: arachni_rpc --server host:port [options] url
Supported options:
General ----------------------
-h
--help Output this.
--version Show version information and exit.
-v Be verbose.
--debug Show what is happening internally.
(You should give it a shot sometime ;) )
--only-positives Echo positive results *only*.
--http-req-limit=<integer> Concurrent HTTP requests limit.
(Default: 20)
(Be careful not to kill your server.)
(*NOTE*: If your scan seems unresponsive try lowering the limit.)
--http-timeout=<integer> HTTP request timeout in milliseconds.
--cookie-jar=<filepath> Netscape HTTP cookie file, use curl to create it.
--cookie-string='<name>=<value>; <name2>=<value2>'
Cookies, as a string, to be sent to the web application.
--user-agent=<string> Specify user agent.
--custom-header='<name>=<value>'
Specify custom headers to be included in the HTTP requests.
(Can be used multiple times.)
--authed-by=<string> Who authorized the scan, include name and e-mail address.
(It'll make it easier on the sys-admins during log reviews.)
(Will be appended to the user-agent string.)
--login-check-url=<url> A URL used to verify that the scanner is still logged in to the web application.
(Requires 'login-check-pattern'.)
--login-check-pattern=<regexp>
A pattern used against the body of the 'login-check-url' to verify that the scanner is still logged in to the web application.
(Requires 'login-check-url'.)
Profiles -----------------------
--save-profile=<filepath> Save the current run profile/options to <filepath>.
--load-profile=<filepath> Load a run profile from <filepath>.
(Can be used multiple times.)
(You can complement it with more options, except for:
* --modules
* --redundant)
--show-profile Will output the running profile as CLI arguments.
Crawler -----------------------
-e <regexp>
--exclude=<regexp> Exclude URLs matching <regexp>.
(Can be used multiple times.)
--exclude-page=<regexp> Exclude pages whose content matches <regexp>.
(Can be used multiple times.)
-i <regexp>
--include=<regexp> Include *only* URLs matching <regexp>.
(Can be used multiple times.)
--redundant=<regexp>:<limit>
Limit crawl on redundant pages like galleries or catalogs.
(URLs matching <regexp> will be crawled <limit> amount of times.)
(Can be used multiple times.)
--auto-redundant=<limit> Only follow <limit> amount of URLs with identical query parameter names.
(Default: inf)
(Will default to 10 if no value has been specified.)
-f
--follow-subdomains Follow links to subdomains.
(Default: off)
--depth=<integer> Directory depth limit.
(Default: inf)
(How deep Arachni should go into the site structure.)
--link-count=<integer> How many links to follow.
(Default: inf)
--redirect-limit=<integer> How many redirects to follow.
(Default: 20)
--extend-paths=<filepath> Add the paths in <file> to the ones discovered by the crawler.
(Can be used multiple times.)
--restrict-paths=<filepath> Use the paths in <file> instead of crawling.
(Can be used multiple times.)
--https-only Forces the system to only follow HTTPS URLs.
Auditor ------------------------
-g
--audit-links Audit links.
-p
--audit-forms Audit forms.
-c
--audit-cookies Audit cookies.
--exclude-cookie=<name> Cookie to exclude from the audit by name.
(Can be used multiple times.)
--exclude-vector=<name> Input vector (parameter) not to audit by name.
(Can be used multiple times.)
--audit-headers Audit HTTP headers.
(*NOTE*: Header audits use brute force.
Almost all valid HTTP request headers will be audited
even if there's no indication that the web app uses them.)
(*WARNING*: Enabling this option will result in increased requests,
maybe by an order of magnitude.)
Coverage -----------------------
--audit-cookies-extensively Submit all links and forms of the page along with the cookie permutations.
(*WARNING*: This will severely increase the scan-time.)
--fuzz-methods Audit links, forms and cookies using both GET and POST requests.
(*WARNING*: This will severely increase the scan-time.)
--exclude-binaries Exclude non text-based pages from the audit.
(Binary content can confuse recon modules that perform pattern matching.)
Modules ------------------------
--lsmod=<regexp> List available modules based on the provided regular expression.
(If no regexp is provided all modules will be listed.)
(Can be used multiple times.)
-m <modname,modname..>
--modules=<modname,modname..>
Comma separated list of modules to load.
(Modules are referenced by their filename without the '.rb' extension, use '--lsmod' to list all.
Use '*' as a module name to deploy all modules or as a wildcard, like so:
xss* to load all xss modules
sqli* to load all sql injection modules
etc.
You can exclude modules by prefixing their name with a minus sign:
--modules=*,-backup_files,-xss
The above will load all modules except for the 'backup_files' and 'xss' modules.
Or mix and match:
-xss* to unload all xss modules.)
Reports ------------------------
--lsrep=<regexp> List available reports based on the provided regular expression.
(If no regexp is provided all reports will be listed.)
(Can be used multiple times.)
--repload=<filepath> Load audit results from an '.afr' report file.
(Allows you to create new reports from finished scans.)
--report='<report>:<optname>=<val>,<optname2>=<val2>,...'
<report>: the name of the report as displayed by '--lsrep'
(Reports are referenced by their filename without the '.rb' extension, use '--lsrep' to list all.)
(Default: stdout)
(Can be used multiple times.)
Plugins ------------------------
--lsplug=<regexp> List available plugins based on the provided regular expression.
(If no regexp is provided all plugins will be listed.)
(Can be used multiple times.)
--plugin='<plugin>:<optname>=<val>,<optname2>=<val2>,...'
<plugin>: the name of the plugin as displayed by '--lsplug'
(Plugins are referenced by their filename without the '.rb' extension, use '--lsplug' to list all.)
(Can be used multiple times.)
Proxy --------------------------
--proxy=<server:port> Proxy address to use.
--proxy-auth=<user:passwd> Proxy authentication credentials.
--proxy-type=<type> Proxy type; can be http, http_1_0, socks4, socks5, socks4a
(Default: http)
Distribution -----------------
--server=<address:port> Dispatcher server to use.
(Used to provide scanner Instances.)
--slaves=<integer> How many slaves to spawn for a high-performance distributed scan.
(Slaves will all be from the same Dispatcher machine.)
(*WARNING*: This feature is experimental.)
--grid Tell the scanner to use the Grid for a High-Performance scan.
(Slaves will all be from the Dispatchers running
on machines with unique bandwidth pipes.)
(*WARNING*: This feature is experimental.)
SSL --------------------------
(Do *not* use encrypted keys!)
--ssl-pkey=<file> Location of the SSL private key (.pem)
(Used to verify the client to the servers.)
--ssl-cert=<file> Location of the SSL certificate (.pem)
(Used to verify the client to the servers.)
--ssl-ca=<file> Location of the CA certificate (.pem)
(Used to verify the servers to the client.)
```
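To illustrate the Distribution and SSL options above, here is a sketch of the corresponding invocations (all hostnames, ports and `.pem` paths are placeholder values for your own setup):

```
# High-performance distributed scan using 3 slaves,
# all spawned from the same Dispatcher.
arachni_rpc --server dispatcher.example.com:7331 --slaves=3 http://example.com/

# Grid scan -- slaves are drawn from Dispatchers running
# on machines with unique bandwidth pipes.
arachni_rpc --server dispatcher.example.com:7331 --grid http://example.com/

# Scan with SSL peer verification between client and servers.
arachni_rpc --server dispatcher.example.com:7331 \
    --ssl-pkey=client.pem --ssl-cert=client-cert.pem --ssl-ca=ca.pem \
    http://example.com/
```

Note that, as the help output warns, both `--slaves` and `--grid` are experimental in this version, and the SSL private key must not be encrypted.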