Recursion is the technique of making a function call itself. This technique provides a way to break complicated problems down into simple problems which are easier to solve.
Recursion may be a bit difficult to understand. The best way to figure out how it works is to experiment with it.
Recursion Example
Adding two numbers together is easy to do, but adding a range of numbers is more complicated. In the following example, recursion is used to add a range of numbers together by breaking it down into the simple task of adding two numbers:
Example
Use recursion to add all of the numbers up to 10.
public class Main {
  public static void main(String[] args) {
    int result = sum(10);
    System.out.println(result);
  }

  public static int sum(int k) {
    if (k > 0) {
      return k + sum(k - 1);
    } else {
      return 0;
    }
  }
}
Try it Yourself »
Example Explained
When the sum() function is called, it adds parameter k to the sum of all numbers smaller than k and returns the result. When k becomes 0, the function just returns 0. When running, the program follows these steps:
10 + sum(9)
10 + ( 9 + sum(8) )
10 + ( 9 + ( 8 + sum(7) ) )
...
10 + 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 + sum(0)
10 + 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 + 0
Since the function does not call itself when k is 0, the program stops there and returns the result.
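The expansion above can be made visible by printing each call as it happens. A small sketch for experimenting (the class name and print format are our own, not part of the original example):

```java
public class SumTrace {
    // Same recursive sum as in the example, but printing each call so the
    // expansion 10 + sum(9), 10 + (9 + sum(8)), ... can be observed.
    public static int sum(int k) {
        if (k > 0) {
            System.out.println("sum(" + k + ") = " + k + " + sum(" + (k - 1) + ")");
            return k + sum(k - 1);
        } else {
            System.out.println("sum(0) = 0");
            return 0;
        }
    }

    public static void main(String[] args) {
        System.out.println("result = " + sum(10)); // prints result = 55
    }
}
```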
Halting Condition
Just as loops can run into the problem of infinite looping, recursive functions can run into the problem of infinite recursion. Infinite recursion is when the function never stops calling itself. Every recursive function should have a halting condition, which is the condition where the function stops calling itself. In the previous example, the halting condition is when the parameter k becomes 0.
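To see why the halting condition matters, try removing it: every call then makes another call, and the JVM eventually runs out of stack space. A minimal sketch of this failure (the class and method names are our own):

```java
public class NoHalt {
    // sum without a halting condition: every call recurses again, so the
    // call stack grows until the JVM throws StackOverflowError.
    public static int badSum(int k) {
        return k + badSum(k - 1); // no "if (k > 0)" check
    }

    public static void main(String[] args) {
        try {
            badSum(10);
        } catch (StackOverflowError e) {
            System.out.println("Infinite recursion: stack overflow");
        }
    }
}
```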
It is helpful to see a variety of examples to better understand the concept. In this example, the function adds a range of numbers between a start and an end. The halting condition for this recursive function is when end is not greater than start:
Example
Use recursion to add all of the numbers from 5 to 10.
public class Main {
  public static void main(String[] args) {
    int result = sum(5, 10);
    System.out.println(result);
  }

  public static int sum(int start, int end) {
    if (end > start) {
      return end + sum(start, end - 1);
    } else {
      return end;
    }
  }
}
Try it Yourself »
Developers should be careful with recursion, as it is quite easy to slip into writing a function that never terminates, or one that uses excessive amounts of memory or processor power. However, when written correctly, recursion can be a very efficient and mathematically elegant approach to programming.
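One way to avoid the memory cost mentioned above is to express the same computation as a loop, which uses a single stack frame instead of one per call. A sketch for comparison (the class name is our own):

```java
public class IterativeSum {
    // Loop version of sum(start, end): same result as the recursive
    // example, but constant stack space regardless of the range size.
    public static int sum(int start, int end) {
        int total = 0;
        for (int i = start; i <= end; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(5, 10)); // prints 45
    }
}
```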
DIRB is a Web Content Scanner. It looks for existing (and/or hidden) Web Objects. It basically works by launching a dictionary-based attack against a web server and analyzing the responses.
DIRB comes with a set of preconfigured attack wordlists for easy usage, but you can use your own custom wordlists. DIRB can also sometimes be used as a classic CGI scanner, but remember that it is a content scanner, not a vulnerability scanner.
DIRB's main purpose is to help in professional web application auditing, especially in security-related testing. It covers some holes not covered by classic web vulnerability scanners: DIRB looks for specific web objects that other generic CGI scanners can't look for. It doesn't search for vulnerabilities, nor does it look for web contents that can be vulnerable.
- Force extensions:
Passing php and html as extensions with the -f/--force-extensions flag will expand the wordlist entry admin into the following dictionary:
admin
admin.php
admin.html
admin/
- Overwrite extensions:
Wordlist entry:
login.html
Passing jsp and jspa as extensions with the -O/--overwrite-extensions flag will generate the following dictionary:
login.jsp
login.jspa
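The -f expansion described above (keep the bare entry, append each extension, and add a directory form) can be sketched as follows. This is an illustration of the rule only, not dirsearch's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class ForceExtensions {
    // Sketch of the -f/--force-extensions rule: for each wordlist entry,
    // emit the entry itself, the entry with each extension appended, and
    // the entry as a directory. Illustration only, not dirsearch's code.
    public static List<String> expand(List<String> words, List<String> exts) {
        List<String> out = new ArrayList<>();
        for (String word : words) {
            out.add(word);                 // admin
            for (String ext : exts) {
                out.add(word + "." + ext); // admin.php, admin.html
            }
            out.add(word + "/");           // admin/
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(expand(List.of("admin"), List.of("php", "html")));
        // [admin, admin.php, admin.html, admin/]
    }
}
```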
Options
Usage: dirsearch.py [-u|--url] target [-e|--extensions] extensions [options]
Options:
--version show program's version number and exit
-h, --help show this help message and exit
Mandatory:
-u URL, --url=URL Target URL(s), can use multiple flags
-l PATH, --url-file=PATH
URL list file
--stdin Read URL(s) from STDIN
--cidr=CIDR Target CIDR
--raw=PATH Load raw HTTP request from file (use '--scheme' flag
to set the scheme)
-s SESSION_FILE, --session=SESSION_FILE
Session file
--config=PATH Path to configuration file (Default:
'DIRSEARCH_CONFIG' environment variable, otherwise
'config.ini')
Dictionary Settings:
-w WORDLISTS, --wordlists=WORDLISTS
Customize wordlists (separated by commas)
-e EXTENSIONS, --extensions=EXTENSIONS
Extension list separated by commas (e.g. php,asp)
-f, --force-extensions
Add extensions to the end of every wordlist entry. By
default dirsearch only replaces the %EXT% keyword with
extensions
-O, --overwrite-extensions
Overwrite other extensions in the wordlist with your
extensions (selected via `-e`)
--exclude-extensions=EXTENSIONS
Exclude extension list separated by commas (e.g.
asp,jsp)
--remove-extensions
Remove extensions in all paths (e.g. admin.php ->
admin)
--prefixes=PREFIXES
Add custom prefixes to all wordlist entries (separated
by commas)
--suffixes=SUFFIXES
Add custom suffixes to all wordlist entries, ignore
directories (separated by commas)
-U, --uppercase Uppercase wordlist
-L, --lowercase Lowercase wordlist
-C, --capital Capital wordlist
General Settings:
-t THREADS, --threads=THREADS
Number of threads
-r, --recursive Brute-force recursively
--deep-recursive Perform recursive scan on every directory depth (e.g.
api/users -> api/)
--force-recursive Do recursive brute-force for every found path, not
only directories
-R DEPTH, --max-recursion-depth=DEPTH
Maximum recursion depth
--recursion-status=CODES
Valid status codes to perform recursive scan, support
ranges (separated by commas)
--subdirs=SUBDIRS Scan sub-directories of the given URL(s) (separated by
commas)
--exclude-subdirs=SUBDIRS
Exclude the following subdirectories during recursive
scan (separated by commas)
-i CODES, --include-status=CODES
Include status codes, separated by commas, support
ranges (e.g. 200,300-399)
-x CODES, --exclude-status=CODES
Exclude status codes, separated by commas, support
ranges (e.g. 301,500-599)
--exclude-sizes=SIZES
Exclude responses by sizes, separated by commas (e.g.
0B,4KB)
--exclude-text=TEXTS
Exclude responses by text, can use multiple flags
--exclude-regex=REGEX
Exclude responses by regular expression
--exclude-redirect=STRING
Exclude responses if this regex (or text) matches
redirect URL (e.g. '/index.html')
--exclude-response=PATH
Exclude responses similar to response of this page,
path as input (e.g. 404.html)
--skip-on-status=CODES
Skip target whenever hit one of these status codes,
separated by commas, support ranges
--min-response-size=LENGTH
Minimum response length
--max-response-size=LENGTH
Maximum response length
--max-time=SECONDS Maximum runtime for the scan
--exit-on-error Exit whenever an error occurs
Request Settings:
-m METHOD, --http-method=METHOD
HTTP method (default: GET)
-d DATA, --data=DATA
HTTP request data
--data-file=PATH File contains HTTP request data
-H HEADERS, --header=HEADERS
HTTP request header, can use multiple flags
--header-file=PATH File contains HTTP request headers
-F, --follow-redirects
Follow HTTP redirects
--random-agent Choose a random User-Agent for each request
--auth=CREDENTIAL Authentication credential (e.g. user:password or
bearer token)
--auth-type=TYPE Authentication type (basic, digest, bearer, ntlm, jwt,
oauth2)
--cert-file=PATH File contains client-side certificate
--key-file=PATH File contains client-side certificate private key
(unencrypted)
--user-agent=USER_AGENT
--cookie=COOKIE
Connection Settings:
--timeout=TIMEOUT Connection timeout
--delay=DELAY Delay between requests
--proxy=PROXY Proxy URL (HTTP/SOCKS), can use multiple flags
--proxy-file=PATH File contains proxy servers
--proxy-auth=CREDENTIAL
Proxy authentication credential
--replay-proxy=PROXY
Proxy to replay with found paths
--tor Use Tor network as proxy
--scheme=SCHEME Scheme for raw request or if there is no scheme in the
URL (Default: auto-detect)
--max-rate=RATE Max requests per second
--retries=RETRIES Number of retries for failed requests
--ip=IP Server IP address
Advanced Settings:
--crawl Crawl for new paths in responses
View Settings:
--full-url Full URLs in the output (enabled automatically in
quiet mode)
--redirects-history
Show redirects history
--no-color No colored output
-q, --quiet-mode Quiet mode
Output Settings:
-o PATH, --output=PATH
Output file
--format=FORMAT Report format (Available: simple, plain, json, xml,
md, csv, html, sqlite)
--log=PATH Log file
Configuration
By default, config.ini inside your dirsearch directory is used as the configuration file, but you can select another file via the --config flag or the DIRSEARCH_CONFIG environment variable.

# If you want to edit dirsearch default configurations, you can
# edit values in this file. Everything after `#` is a comment
# and won't be applied

[general]
threads = 25
recursive = False
deep-recursive = False
force-recursive = False
recursion-status = 200-399,401,403
max-recursion-depth = 0
exclude-subdirs = %%ff/,.;/,..;/,;/,./,../,%%2e/,%%2e%%2e/
random-user-agents = False
max-time = 0
exit-on-error = False
# subdirs = /,api/
# include-status = 200-299,401
# exclude-status = 400,500-999
# exclude-sizes = 0b,123gb
# exclude-text = "Not found"
# exclude-regex = "^403$"
# exclude-redirect = "*/error.html"
# exclude-response = 404.html
# skip-on-status = 429,999

[dictionary]
default-extensions = php,aspx,jsp,html,js
force-extensions = False
overwrite-extensions = False
lowercase = False
uppercase = False
capitalization = False
# exclude-extensions = old,log
# prefixes = .,admin
# suffixes = ~,.bak
# wordlists = /path/to/wordlist1.txt,/path/to/wordlist2.txt

[request]
http-method = get
follow-redirects = False
# headers-file = /path/to/headers.txt
# user-agent = MyUserAgent
# cookie = SESSIONID=123

[connection]
timeout = 7.5
delay = 0
max-rate = 0
max-retries = 1
## By disabling `scheme` variable, dirsearch will automatically identify the URI scheme
# scheme = http
# proxy = localhost:8080
# proxy-file = /path/to/proxies.txt
# replay-proxy = localhost:8000

[advanced]
crawl = False

[view]
full-url = False
quiet-mode = False
color = True
show-redirects-history = False

[output]
## Support: plain, simple, json, xml, md, csv, html, sqlite
report-format = plain
autosave-report = True
autosave-report-folder = reports/
# log-file = /path/to/dirsearch.log
# log-file-size = 50000000
How to use
Here are some examples of how to use dirsearch with the most common arguments. If you need all of them, just use the -h argument.
Simple usage
python3 dirsearch.py -u https://target
python3 dirsearch.py -e php,html,js -u https://target
Pausing progress
dirsearch allows you to pause the scanning progress with CTRL+C. From there, you can save the progress (and continue later), skip the current target, or skip the current sub-directory.
Recursion
- Recursive brute-force continues brute-forcing inside the directories that were found. For example, if dirsearch finds admin/, it will then brute-force admin/* (* is where it brute-forces). To enable this feature, use the -r (or --recursive) flag.
- You can set the max recursion depth with --max-recursion-depth, and the status codes to recurse on with --recursion-status.
- There are 2 more options: --force-recursive and --deep-recursive
- Force recursive: Brute-force recursively all found paths, not just paths ending with /
- Deep recursive: Recursively brute-force every depth of a path (a/b/c => add a/, a/b/)
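The deep-recursive expansion described above (a/b/c => a/, a/b/) can be sketched as follows. This illustrates the rule only; it is not dirsearch's actual code:

```java
import java.util.ArrayList;
import java.util.List;

public class DeepRecursive {
    // Sketch of the --deep-recursive rule: for a found path like a/b/c,
    // also queue every parent depth (a/ and a/b/) for brute-forcing.
    public static List<String> parentDepths(String path) {
        List<String> out = new ArrayList<>();
        String[] parts = path.split("/");
        StringBuilder prefix = new StringBuilder();
        for (int i = 0; i < parts.length - 1; i++) {
            prefix.append(parts[i]).append("/");
            out.add(prefix.toString());
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(parentDepths("a/b/c")); // [a/, a/b/]
    }
}
```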
If there are sub-directories that you do not want to brute-force recursively, use --exclude-subdirs.
Threads
The thread number (-t | --threads) controls the number of concurrent brute-force tasks, so the bigger the thread number is, the faster dirsearch runs. By default, the number of threads is 25, but you can increase it if you want to speed up the progress.
In spite of that, the speed still depends a lot on the response time of the server. As a warning, we advise you to keep the thread number reasonable, because too many threads can cause a DoS (Denial of Service) on the target.
Prefixes / Suffixes
- --prefixes: Add custom prefixes to all entries
index
index.asp
index.aspx
5Wordlist:
index
index.asp
index.aspx
6Generated with prefixes:
index
index.asp
index.aspx
- --suffixes: Add custom suffixes to all entries, ignore directories
```
python3 dirsearch.py -e php -u https://target --suffixes ~
```
Wordlist:
```
index.php
internal
```
Generated with suffixes:
```
index.php
internal
index.php~
internal~
```
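The affix generation can be sketched in Python (a simplified model, not dirsearch's actual implementation; the function name is illustrative):

```python
def apply_affixes(entries, prefixes=(), suffixes=()):
    """Mimic --prefixes/--suffixes: keep the original entries and add
    affixed copies. Suffixes skip directory entries (ending in "/")."""
    out = list(entries)
    out += [p + e for p in prefixes for e in entries]
    out += [e + s for s in suffixes for e in entries if not e.endswith("/")]
    return out

print(apply_affixes(["tools"], prefixes=[".", "admin", "_"]))
# -> ['tools', '.tools', 'admintools', '_tools']
```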
Blacklist
Inside the db folder, there are several "blacklist files". Paths in those files will be filtered from the scan result if they have the same status as mentioned in the filename.

Example: If you add admin.php into db/403_blacklist.txt, then whenever a scan finds admin.php returning status 403, it will be filtered out of the result.

Filters
Use -i | --include-status and -x | --exclude-status to select which response status codes are allowed or not allowed. Both accept comma-separated lists and ranges (e.g. 200,300-399).
For more advanced filters, use --exclude-sizes, --exclude-text, --exclude-regex, --exclude-redirect and --exclude-response.
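The status-code lists these flags accept (e.g. 200,300-399) can be modeled like this (an illustrative parser, not dirsearch's internal code):

```python
def parse_status_codes(spec):
    """Expand a comma-separated status-code list, with ranges, into a set."""
    codes = set()
    for part in spec.split(","):
        if "-" in part:
            low, high = part.split("-")
            codes.update(range(int(low), int(high) + 1))
        else:
            codes.add(int(part))
    return codes

included = parse_status_codes("200,300-399")
print(302 in included, 404 in included)
# -> True False
```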
Raw request
dirsearch allows you to import a raw HTTP request from a file. The content would look something like this:
```
GET /admin HTTP/1.1
Host: admin.example.com
Cookie: SESSIONID=123
```

Since there is no way for dirsearch to know the URI scheme from a raw request, you may need to set it using the --scheme flag. By default, dirsearch attempts to detect the scheme automatically.

Wordlist formats
Supported wordlist formats: uppercase, lowercase, capitalization
Lowercase:
```
admin
index.html
```
Uppercase:
```
ADMIN
INDEX.HTML
```
Capital:
```
Admin
Index.html
```
Exclude extensions
Use -X | --exclude-extensions with an extension list to remove all paths in the wordlist that contain the given extensions.
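The effect on a wordlist can be sketched like this (illustrative code; dirsearch's real matching may differ):

```python
def exclude_extensions(entries, excluded):
    """Drop wordlist entries whose file extension is in the excluded list."""
    kept = []
    for entry in entries:
        name = entry.rstrip("/")
        ext = name.rsplit(".", 1)[1] if "." in name else ""
        if ext not in excluded:
            kept.append(entry)
    return kept

print(exclude_extensions(["admin.php", "test.jsp", "backup"], ["jsp"]))
# -> ['admin.php', 'backup']
```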
```
python3 dirsearch.py -u https://target --exclude-extensions jsp
```
Wordlist:
```
admin.php
test.jsp
```
After:
```
admin.php
```

Scan sub-directories
- From a URL, you can scan a list of sub-directories with --subdirs.
```
python3 dirsearch.py -e php -u https://target --subdirs admin/,folder/
```

Proxies
dirsearch supports SOCKS and HTTP proxies, with two options: a single proxy server or a list of proxy servers.
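Randomizing over a proxy list (the behaviour a proxy-server list enables) can be sketched as follows (illustrative addresses, not dirsearch's internal code):

```python
import random

proxies = ["socks5://10.0.0.1:1080", "http://10.0.0.2:8080"]

def pick_proxy():
    # Each outgoing request gets a randomly chosen proxy from the list.
    return random.choice(proxies)

print(pick_proxy() in proxies)
# -> True
```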
```
python3 dirsearch.py -e php -u https://target --proxy 127.0.0.1:8080
python3 dirsearch.py -e php -u https://target --proxy socks5://10.10.0.1:8080
python3 dirsearch.py -e php -u https://target --proxy-file proxies.txt
```

Reports
Supported report formats: simple, plain, json, xml, md, csv, html, sqlite
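As a sketch, a JSON-style report could be assembled like this (illustrative fields, not dirsearch's exact schema):

```python
import json

# Hypothetical scan findings; field names are placeholders.
results = [
    {"url": "https://target/admin", "status": 301, "content-length": 0},
    {"url": "https://target/index.php", "status": 200, "content-length": 5120},
]

report = json.dumps({"results": results}, indent=2)
print(report)
```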
```
python3 dirsearch.py -e php -u https://target -o report.json --format json
python3 dirsearch.py -e php -u https://target -o report.html --format html
```

More example commands
```
python3 dirsearch.py -e php,html,js -u https://target
python3 dirsearch.py -e php,html,js -u https://target -x 301,500-599
python3 dirsearch.py -e php,html,js -u https://target --random-agent --delay 0.1
cat urls.txt | python3 dirsearch.py --stdin -e php
```

There is more to discover, try it yourself!
Support Docker
Install Docker Linux
To install Docker:
```
curl -fsSL https://get.docker.com | bash
```
To use Docker you need superuser privileges.
Build Image dirsearch
To create the image:
```
docker build -t dirsearch:v0.4.3 .
```
dirsearch is the name of the image and v0.4.3 is the version.
Using dirsearch
To run a scan:
```
docker run -it --rm dirsearch:v0.4.3 -u https://target -e php
```

References
- Comprehensive Guide on Dirsearch by Shubham Sharma
- Comprehensive Guide on Dirsearch Part 2 by Shubham Sharma
- How to Find Hidden Web Directories with Dirsearch by GeeksforGeeks
- Complete guide on using Dirsearch (in Spanish) by ESGEEKS
- How to use Dirsearch to detect web directories by EHacking
- dirsearch how to by VK9 Security
- Find Hidden Web Directories with Dirsearch by Wonder How To
- Brute force directories and files in webservers using dirsearch by Raj Upadhyay
- Live Bug Bounty Recon Session on Yahoo (Amass, crt.sh, dirsearch) w/ @TheDawgyg by Nahamsec
- Dirsearch to find Hidden Web Directories by Irfan Shakeel
- Getting access to 25000 employees details by Sahil Ahamad
- Best Tools For Directory Bruteforcing by Shubham Goyal
- Discover hidden files & directories on a webserver - dirsearch full tutorial by CYBER BYTES
Tips
- The server has requests limit? That's bad, but feel free to bypass it, by randomizing proxy with
# If you want to edit dirsearch default configurations, you can # edit values in this file. Everything after `#` is a comment # and won't be applied [general] threads = 25 recursive = False deep-recursive = False force-recursive = False recursion-status = 200-399,401,403 max-recursion-depth = 0 exclude-subdirs = %%ff/,.;/,..;/,;/,./,../,%%2e/,%%2e%%2e/ random-user-agents = False max-time = 0 exit-on-error = False # subdirs = /,api/ # include-status = 200-299,401 # exclude-status = 400,500-999 # exclude-sizes = 0b,123gb # exclude-text = "Not found" # exclude-regex = "^403$" # exclude-redirect = "*/error.html" # exclude-response = 404.html # skip-on-status = 429,999 [dictionary] default-extensions = php,aspx,jsp,html,js force-extensions = False overwrite-extensions = False lowercase = False uppercase = False capitalization = False # exclude-extensions = old,log # prefixes = .,admin # suffixes = ~,.bak # wordlists = /path/to/wordlist1.txt,/path/to/wordlist2.txt [request] http-method = get follow-redirects = False # headers-file = /path/to/headers.txt # user-agent = MyUserAgent # cookie = SESSIONID=123 [connection] timeout = 7.5 delay = 0 max-rate = 0 max-retries = 1 ## By disabling `scheme` variable, dirsearch will automatically identify the URI scheme # scheme = http # proxy = localhost:8080 # proxy-file = /path/to/proxies.txt # replay-proxy = localhost:8000 [advanced] crawl = False [view] full-url = False quiet-mode = False color = True show-redirects-history = False [output] ## Support: plain, simple, json, xml, md, csv, html, sqlite report-format = plain autosave-report = True autosave-report-folder = reports/ # log-file = /path/to/dirsearch.log # log-file-size = 50000000
- Want to find out config files or backups? Try out `--suffixes ~` and `--prefixes .`
- Want to find only folders/directories? Why not combine `--remove-extensions` and `--suffixes /`!
- The mix of `--cidr`, `-F`, `-q` and a low `--timeout` will reduce most of the noise + false negatives when brute-forcing with a CIDR
- Scan a list of URLs, but don't want to see a 429 flood? `--skip-on-status 429` will help you to skip a target whenever it returns 429
- The server contains large files that slow down the scan? You might want to use the `HEAD` HTTP method instead of `GET`
- Brute-forcing CIDR is slow? Probably you forgot to reduce the request timeout and request retries. Suggest: `--timeout 3 --retries 1`
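Several of these tips map directly onto keys in the default configuration file, so they can be made permanent instead of being passed on every run. A sketch of such an override (the values are illustrative; every key shown exists in the default configuration):

```ini
[general]
# Skip a target as soon as it returns 429 (rate-limited)
skip-on-status = 429

[request]
# HEAD avoids downloading large response bodies
http-method = head

[connection]
# Lower timeout and retries to speed up CIDR brute-forcing
timeout = 3
max-retries = 1
```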
Contribution
We have received a lot of help from many people around the world to improve this tool. Thanks so much to everyone who has helped us so far! See CONTRIBUTORS.md to know who they are.