Multiple filters work together with an AND condition; for example, you can run all templates tagged cve that have critical or high severity and geeknik as the template author.
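Such a combined-filter invocation might look like the following sketch (the target list file name is illustrative):

```shell
# Run templates tagged "cve", with critical or high severity, authored by geeknik
nuclei -l urls.txt -tags cve -severity critical,high -author geeknik
```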
Template fields available to the filtering condition include:

- body — string (containing all request bodies, if any)
- matcher_type — slice of string
- extractor_type — slice of string
- description — string
Also, every key-value pair from the template metadata section is accessible. All fields can be combined with logical operators (|| and &&) and used with DSL helper functions.
Similarly, all filters are supported in workflows as well.
Nuclei has built-in support for automatic template download/update from the nuclei-templates project, which provides a community-contributed list of ready-to-use templates that is constantly updated.
Nuclei checks for new community template releases upon each execution and automatically downloads the latest version when available. Optionally, this feature can be disabled using the -duc CLI flag or the configuration file.
Users can host custom templates in a personal public/private GitHub or GitLab repository, AWS bucket, or Azure Blob Storage container, and run/update them while using Nuclei from any environment, without manually downloading the repository everywhere.
To use this feature, users need to set the following environment variables:
```shell
export GITLAB_SERVER_URL=https://gitlab.com
# The GitLab token must have the read_api and read_repository scope
export GITLAB_TOKEN=XXXXXXXXXX
# Comma separated list of repository IDs (not names)
export GITLAB_REPOSITORY_IDS=12345,67890
```
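For custom templates hosted on GitHub, a similar set of variables applies; the variable names below follow the convention used by Nuclei's custom-template support (verify against your Nuclei version):

```shell
# Comma separated list of GitHub repositories (format: owner/repo)
export GITHUB_TEMPLATE_REPO=org/repo1,org/repo2
# GitHub token with read access to the repositories
export GITHUB_TOKEN=XXXXXXXXXX
```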
Environment variables can also be provided to disable download from default and custom template locations:
```shell
# Disable download from the default nuclei-templates project
export DISABLE_NUCLEI_TEMPLATES_PUBLIC_DOWNLOAD=true
# Disable download from public / private GitHub project(s)
export DISABLE_NUCLEI_TEMPLATES_GITHUB_DOWNLOAD=true
# Disable download from public / private GitLab project(s)
export DISABLE_NUCLEI_TEMPLATES_GITLAB_DOWNLOAD=true
# Disable download from public / private AWS Bucket(s)
export DISABLE_NUCLEI_TEMPLATES_AWS_DOWNLOAD=true
# Disable download from public / private Azure Blob Storage
export DISABLE_NUCLEI_TEMPLATES_AZURE_DOWNLOAD=true
```
Once the environment variables are set, run the following command to download the custom templates:
```shell
nuclei -update-templates
```
This command will clone the repository containing the custom templates to the default nuclei templates directory ($HOME/nuclei-templates/github/).
The directory structure of the custom templates looks as follows:
```shell
tree $HOME/nuclei-templates/
nuclei-templates/
└── github/$GH_REPO_NAME  # Custom templates downloaded from public / private GitHub project
└── gitlab/$GL_REPO_NAME  # Custom templates downloaded from public / private GitLab project
└── s3/$BUCKET_NAME       # Custom templates downloaded from public / private AWS Bucket
└── azure/$CONTAINER_NAME # Custom templates downloaded from public / private Azure Blob Storage
```
Users can then use the custom templates with the -t flag as follows:
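For example (the target URL and repository name are placeholders):

```shell
# Run custom templates downloaded from a GitHub repository
nuclei -u https://example.com -t github/$GH_REPO_NAME
```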
This will display help for the tool. Here are all the switches it supports.
```
Nuclei is a fast, template based vulnerability scanner focusing
on extensive configurability, massive extensibility and ease of use.

Usage:
  nuclei [flags]

Flags:
TARGET:
   -u, -target string[]       target URLs/hosts to scan
   -l, -list string           path to file containing a list of target URLs/hosts to scan (one per line)
   -resume string             resume scan using resume.cfg (clustering will be disabled)
   -sa, -scan-all-ips         scan all the IP's associated with dns record
   -iv, -ip-version string[]  IP version to scan of hostname (4,6) - (default 4)

TEMPLATES:
   -nt, -new-templates                    run only new templates added in latest nuclei-templates release
   -ntv, -new-templates-version string[]  run new templates added in specific version
   -as, -automatic-scan                   automatic web scan using wappalyzer technology detection to tags mapping
   -t, -templates string[]                list of template or template directory to run (comma-separated, file)
   -tu, -template-url string[]            list of template urls to run (comma-separated, file)
   -w, -workflows string[]                list of workflow or workflow directory to run (comma-separated, file)
   -wu, -workflow-url string[]            list of workflow urls to run (comma-separated, file)
   -validate                              validate the passed templates to nuclei
   -nss, -no-strict-syntax                disable strict syntax check on templates
   -td, -template-display                 displays the templates content
   -tl                                    list all available templates

FILTERING:
   -a, -author string[]               templates to run based on authors (comma-separated, file)
   -tags string[]                     templates to run based on tags (comma-separated, file)
   -etags, -exclude-tags string[]     templates to exclude based on tags (comma-separated, file)
   -itags, -include-tags string[]     tags to be executed even if they are excluded either by default or configuration
   -id, -template-id string[]         templates to run based on template ids (comma-separated, file)
   -eid, -exclude-id string[]         templates to exclude based on template ids (comma-separated, file)
   -it, -include-templates string[]   templates to be executed even if they are excluded either by default or configuration
   -et, -exclude-templates string[]   template or template directory to exclude (comma-separated, file)
   -em, -exclude-matchers string[]    template matchers to exclude in result
   -s, -severity value[]              templates to run based on severity. Possible values: info, low, medium, high, critical, unknown
   -es, -exclude-severity value[]     templates to exclude based on severity. Possible values: info, low, medium, high, critical, unknown
   -pt, -type value[]                 templates to run based on protocol type. Possible values: dns, file, http, headless, network, workflow, ssl, websocket, whois
   -ept, -exclude-type value[]        templates to exclude based on protocol type. Possible values: dns, file, http, headless, network, workflow, ssl, websocket, whois
   -tc, -template-condition string[]  templates to run based on expression condition

OUTPUT:
   -o, -output string            output file to write found issues/vulnerabilities
   -sresp, -store-resp           store all request/response passed through nuclei to output directory
   -srd, -store-resp-dir string  store all request/response passed through nuclei to custom directory (default "output")
   -silent                       display findings only
   -nc, -no-color                disable output content coloring (ANSI escape codes)
   -json                         write output in JSONL(ines) format
   -irr, -include-rr             include request/response pairs in the JSONL output (for findings only)
   -nm, -no-meta                 disable printing result metadata in cli output
   -ts, -timestamp               enables printing timestamp in cli output
   -rdb, -report-db string       nuclei reporting database (always use this to persist report data)
   -ms, -matcher-status          display match failure status
   -me, -markdown-export string  directory to export results in markdown format
   -se, -sarif-export string     file to export results in SARIF format

CONFIGURATIONS:
   -config string                 path to the nuclei configuration file
   -fr, -follow-redirects         enable following redirects for http templates
   -fhr, -follow-host-redirects   follow redirects on the same host
   -mr, -max-redirects int        max number of redirects to follow for http templates (default 10)
   -dr, -disable-redirects        disable redirects for http templates
   -rc, -report-config string     nuclei reporting module configuration file
   -H, -header string[]           custom header/cookie to include in all http request in header:value format (cli, file)
   -V, -var value                 custom vars in key=value format
   -r, -resolvers string          file containing resolver list for nuclei
   -sr, -system-resolvers         use system DNS resolving as error fallback
   -dc, -disable-clustering       disable clustering of requests
   -passive                       enable passive HTTP response processing mode
   -fh2, -force-http2             force http2 connection on requests
   -ev, -env-vars                 enable environment variables to be used in template
   -cc, -client-cert string       client certificate file (PEM-encoded) used for authenticating against scanned hosts
   -ck, -client-key string        client key file (PEM-encoded) used for authenticating against scanned hosts
   -ca, -client-ca string         client certificate authority file (PEM-encoded) used for authenticating against scanned hosts
   -sml, -show-match-line         show match lines for file templates, works with extractors only
   -ztls                          use ztls library with autofallback to standard one for tls13
   -sni string                    tls sni hostname to use (default: input domain name)
   -sandbox                       sandbox nuclei for safe templates execution
   -i, -interface string          network interface to use for network scan
   -at, -attack-type string       type of payload combinations to perform (batteringram,pitchfork,clusterbomb)
   -sip, -source-ip string        source ip address to use for network scan
   -config-directory string       override the default config path ($home/.config)
   -rsr, -response-size-read int  max response size to read in bytes (default 10485760)
   -rss, -response-size-save int  max response size to save in bytes (default 1048576)

INTERACTSH:
   -iserver, -interactsh-server string  interactsh server url for self-hosted instance (default: oast.pro,oast.live,oast.site,oast.online,oast.fun,oast.me)
   -itoken, -interactsh-token string    authentication token for self-hosted interactsh server
   -interactions-cache-size int         number of requests to keep in the interactions cache (default 5000)
   -interactions-eviction int           number of seconds to wait before evicting requests from cache (default 60)
   -interactions-poll-duration int      number of seconds to wait before each interaction poll request (default 5)
   -interactions-cooldown-period int    extra time for interaction polling before exiting (default 5)
   -ni, -no-interactsh                  disable interactsh server for OAST testing, exclude OAST based templates

UNCOVER:
   -uc, -uncover                  enable uncover engine
   -uq, -uncover-query string[]   uncover search query
   -ue, -uncover-engine string[]  uncover search engine (shodan,shodan-idb,fofa,censys,quake,hunter,zoomeye,netlas,criminalip) (default shodan)
   -uf, -uncover-field string     uncover fields to return (ip,port,host) (default "ip:port")
   -ul, -uncover-limit int        uncover results to return (default 100)
   -ucd, -uncover-delay int       delay between uncover query requests in seconds (0 to disable) (default 1)

RATE-LIMIT:
   -rl, -rate-limit int               maximum number of requests to send per second (default 150)
   -rlm, -rate-limit-minute int       maximum number of requests to send per minute
   -bs, -bulk-size int                maximum number of hosts to be analyzed in parallel per template (default 25)
   -c, -concurrency int               maximum number of templates to be executed in parallel (default 25)
   -hbs, -headless-bulk-size int      maximum number of headless hosts to be analyzed in parallel per template (default 10)
   -headc, -headless-concurrency int  maximum number of headless templates to be executed in parallel (default 10)

OPTIMIZATIONS:
   -timeout int                        time to wait in seconds before timeout (default 10)
   -retries int                        number of times to retry a failed request (default 1)
   -ldp, -leave-default-ports          leave default HTTP/HTTPS ports (eg. host:80,host:443)
   -mhe, -max-host-error int           max errors for a host before skipping from scan (default 30)
   -nmhe, -no-mhe                      disable skipping host from scan based on errors
   -project                            use a project folder to avoid sending same request multiple times
   -project-path string                set a specific project path
   -spm, -stop-at-first-match          stop processing HTTP requests after the first match (may break template/workflow logic)
   -stream                             stream mode - start elaborating without sorting the input
   -ss, -scan-strategy value           strategy to use while scanning (auto/host-spray/template-spray) (default 0)
   -irt, -input-read-timeout duration  timeout on input read (default 3m0s)
   -nh, -no-httpx                      disable httpx probing for non-url input
   -no-stdin                           disable stdin processing

HEADLESS:
   -headless                    enable templates that require headless browser support (root user on Linux will disable sandbox)
   -page-timeout int            seconds to wait for each page in headless mode (default 20)
   -sb, -show-browser           show the browser on the screen when running templates with headless mode
   -sc, -system-chrome          use local installed Chrome browser instead of nuclei installed
   -lha, -list-headless-action  list available headless actions

DEBUG:
   -debug                    show all requests and responses
   -dreq, -debug-req         show all sent requests
   -dresp, -debug-resp       show all received responses
   -p, -proxy string[]       list of http/socks5 proxy to use (comma separated or file input)
   -pi, -proxy-internal      proxy all internal requests
   -ldf, -list-dsl-function  list all supported DSL function signatures
   -tlog, -trace-log string  file to write sent requests trace log
   -elog, -error-log string  file to write sent requests error log
   -version                  show nuclei version
   -hm, -hang-monitor        enable nuclei hang monitoring
   -v, -verbose              show verbose output
   -profile-mem string       optional nuclei memory profile dump file
   -vv                       display templates loaded for scan
   -svd, -show-var-dump      show variables dump for debugging
   -ep, -enable-pprof        enable pprof debugging server
   -tv, -templates-version   shows the version of the installed nuclei-templates
   -hc, -health-check        run diagnostic check up

UPDATE:
   -un, -update                      update nuclei engine to the latest released version
   -ut, -update-templates            update nuclei-templates to latest released version
   -ud, -update-template-dir string  custom directory to install / update nuclei-templates
   -duc, -disable-update-check       disable automatic nuclei/templates update check

STATISTICS:
   -stats                    display statistics about the running scan
   -sj, -stats-json          write statistics data to an output file in JSONL(ines) format
   -si, -stats-interval int  number of seconds to wait between showing a statistics update (default 5)
   -mp, -metrics-port int    port to expose nuclei metrics on (default 9092)
```
From Nuclei v3.0.0, the -metrics flag has been removed and merged with -stats:
when the -stats flag is used, metrics are available by default at localhost:9092/metrics,
and the port can be configured with the -metrics-port flag.
Nuclei has multiple rate-limit controls: the number of templates to execute in parallel, the number of hosts to scan in parallel for each template, and the global number of requests per second you want Nuclei to make. Here is each flag with a description.
| Flag | Description |
| --- | --- |
| rate-limit | Control the total number of requests to send per second |
| bulk-size | Control the number of hosts to process in parallel for each template |
| c | Control the number of templates to process in parallel |
Feel free to play with these flags to tune your nuclei scan speed and accuracy.
The rate-limit flag takes precedence over the other two flags: the number of
requests per second can't go beyond the value defined for the rate-limit flag,
regardless of the values of the c and bulk-size flags.
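As a sketch, the three controls can be combined as follows (the values are illustrative, not recommendations):

```shell
# Global cap of 300 requests/second, 50 hosts per template, 50 templates in parallel
nuclei -l urls.txt -rl 300 -bs 50 -c 50
```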
Many bug bounty platforms/programs require you to identify your HTTP traffic; this can be achieved by setting a custom header, either in the config file at $HOME/.config/nuclei/config.yaml or with the -H / -header CLI flag.
Setting custom header using config file
```yaml
# Headers to include with each request.
header:
  - 'X-BugBounty-Hacker: h1/geekboy'
  - 'User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) / nuclei'
```
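The same headers can also be supplied per-run on the command line by repeating the -H flag (target list file name is illustrative):

```shell
nuclei -l urls.txt -H 'X-BugBounty-Hacker: h1/geekboy' -H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) / nuclei'
```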
Nuclei supports a variety of methods for excluding/blocking templates from execution. By default, Nuclei excludes the tags/templates listed below to avoid unexpected fuzz-based scans and templates that are not meant for mass scanning; these defaults can easily be overridden via the Nuclei configuration file/flags.
The nuclei-ignore file is not supposed to be updated / edited / removed by the
user; to override the default ignore list, use the Nuclei
configuration file.
The Nuclei engine supports two ways to manually exclude templates from a scan:
Exclude Templates (-exclude-templates/exclude)
The exclude-templates flag is used to exclude single or multiple templates and directories; multiple -exclude-templates flags can be used to provide multiple values.
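A sketch of excluding a directory and a template path (paths are illustrative):

```shell
nuclei -l urls.txt -exclude-templates cves/2020/ -exclude-templates exposures/
```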
Exclude Tags (-exclude-tags/etags)
The exclude-tags flag is used to exclude templates based on defined tags; single or multiple tags can be provided.
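For example (tags are illustrative):

```shell
# Exclude templates tagged "dos" or "fuzz" from the scan
nuclei -l urls.txt -etags dos,fuzz
```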
Nuclei integrates with the uncover module, which supports services like Shodan, Censys, Hunter, ZoomEye, and many more, allowing Nuclei to be executed against hosts returned by these databases.
Here are the uncover options:
```shell
nuclei -h uncover

UNCOVER:
   -uc, -uncover                  enable uncover engine
   -uq, -uncover-query string[]   uncover search query
   -ue, -uncover-engine string[]  uncover search engine (shodan,shodan-idb,fofa,censys,quake,hunter,zoomeye,netlas,criminalip) (default shodan)
   -uf, -uncover-field string     uncover fields to return (ip,port,host) (default "ip:port")
   -ul, -uncover-limit int        uncover results to return (default 100)
   -ucd, -uncover-delay int       delay between uncover query requests in seconds (0 to disable) (default 1)
```
You need to set the API key of the engine you are using as an environment variable in your shell.
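For example, for Shodan (the key and search query below are placeholders):

```shell
export SHODAN_API_KEY=XXXXXXXXXX
nuclei -uc -ue shodan -uq 'ssl:"example.com"' -t cves/
```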
Nuclei fully utilizes available resources to optimize scanning speed. However, when scanning thousands, if not millions, of targets, using default parameter values is bound to cause performance issues (e.g. low RPS, slow scans, killed processes, high RAM consumption) due to limited resources and network I/O. Hence the following parameters need to be tuned based on system configuration and targets.
For enterprises dealing with large-scale scanning, optimizing Nuclei can be a burdensome task, especially when scans change frequently. That’s where Nuclei Enterprise comes in. With its managed offering and dedicated support, Nuclei Enterprise minimizes the burden of optimizing Nuclei on large scans, making it an ideal choice for enterprise-level scanning needs.
| Flag | Short | Description |
| --- | --- | --- |
| scan-strategy | -ss | Scan strategy to use (auto/host-spray/template-spray) |
| bulk-size | -bs | Max number of targets to scan in parallel |
| concurrency | -c | Max number of templates to use in parallel while scanning |
| stream | - | Stream mode: start elaborating without sorting the input |
These are the most common parameters that need tuning. Apart from these,
-rate-limit, -retries, -timeout, and -max-host-error also need to be tuned
based on the targets being scanned.
The scan-strategy option can have three possible values:
host-spray : All templates are iterated over each target.
template-spray : Each template is iterated over all targets.
auto (default) : Currently an alias for template-spray.
Select a scan strategy based on the number of targets; each strategy has its own pros and cons.
When targets < 1000, template-spray should be used. This strategy is slightly faster than host-spray but uses more RAM and does not optimally reuse connections.
When targets > 1000, host-spray should be used. This strategy uses less RAM than template-spray and reuses HTTP connections along with some minor improvements and these are crucial when mass scanning.
Whatever the scan strategy, -concurrency and -bulk-size are crucial for tuning any type of scan. When tuning these parameters, the following points should be noted.
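Putting the above together, a large-target-set run might be tuned like this sketch (the values are illustrative starting points, not recommendations):

```shell
# >1000 targets: host-spray strategy, higher parallelism, capped request rate
nuclei -l targets.txt -ss host-spray -c 50 -bs 100 -rl 300
```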
Since the release of v2.3.2, Nuclei uses goflags for a clean CLI experience and long/short formatted flags.
goflags comes with auto-generated config file support that converts all available CLI flags into a config file; basically, you can define all CLI flags in the config file to avoid repeating them, and they load as defaults for every Nuclei scan.
The default path of the Nuclei config file is $HOME/.config/nuclei/config.yaml; uncomment and configure the flags you wish to run as defaults.
Here is an example config file:
```yaml
# Headers to include with all HTTP request
header:
  - 'X-BugBounty-Hacker: h1/geekboy'

# Directory based template execution
templates:
  - cves/
  - vulnerabilities/
  - misconfiguration/

# Template Filters
tags: exposures,cve
author: geeknik,pikpikcu,dhiyaneshdk
severity: critical,high,medium

# Template Allowlist
include-tags: dos,fuzz # Tag based inclusion (allows overwriting nuclei-ignore list)
include-templates: # Template based inclusion (allows overwriting nuclei-ignore list)
  - vulnerabilities/xxx
  - misconfiguration/xxxx

# Template Denylist
exclude-tags: info # Tag based exclusion
exclude-templates: # Template based exclusion
  - vulnerabilities/xxx
  - misconfiguration/xxxx

# Rate Limit configuration
rate-limit: 500
bulk-size: 50
concurrency: 50
```
Once configured, the config file will be used by default; additionally, a custom config file can be provided using the -config flag.
Nuclei comes with reporting module support as of the v2.3.0 release, with GitHub, GitLab, and Jira integrations; this allows the Nuclei engine to automatically create tickets on the supported platform based on found results.
Supported platforms: GitHub, GitLab, Jira, Markdown, SARIF, Elasticsearch, Splunk HEC.
-rc, -report-config flag can be used to provide a config file to read configuration details of the platform to integrate. Here is an example config file for all supported platforms.
For example, to create tickets on GitHub, create a config file with the following content and replace the appropriate values:
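A minimal sketch of such a GitHub reporting config follows; the key names are assumptions based on the reporting module's example config, so verify them against the config shipped with your Nuclei version:

```yaml
# github contains configuration options for the GitHub issue tracker
github:
  # username of the GitHub user creating the issues
  username: test-username
  # owner of the repository where issues will be created
  owner: test-owner
  # token for the GitHub account (requires repo scope)
  token: test-token
  # project-name is the name of the repository
  project-name: test-project
  # issue-label is the label applied to created issues
  issue-label: 'Nuclei'
  # severity-as-label (optional) also adds the severity as a label
  severity-as-label: true
```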
To store results in Elasticsearch, create a config file with the following content and replace the appropriate values:
```yaml
# elasticsearch contains configuration options for elasticsearch exporter
elasticsearch:
  # IP for elasticsearch instance
  ip: 127.0.0.1
  # Port is the port of elasticsearch instance
  port: 9200
  # IndexName is the name of the elasticsearch index
  index-name: nuclei
```
To forward results to Splunk HEC, create a config file with the following content and replace the appropriate values:
```yaml
# splunkhec contains configuration options for splunkhec exporter
splunkhec:
  # Hostname for splunkhec instance
  host: '$hec_host'
  # Port is the port of splunkhec instance
  port: 8088
  # IndexName is the name of the splunkhec index
  index-name: nuclei
  # SSL enables ssl for splunkhec connection
  ssl: true
  # SSLVerification disables SSL verification for splunkhec
  ssl-verification: true
  # HEC Token for the splunkhec instance
  token: '$hec_token'
```
To forward results to Jira, create a config file with the following content and replace the appropriate values:
The Jira reporting options allow for custom fields, as well as using variables from the Nuclei templates in those custom fields.
The supported variables currently are: $CVSSMetrics, $CVEID, $CWEID, $Host, $Severity, $CVSSScore, $Name
In addition, Jira is strict about custom field entry. If the field is a dropdown, Jira accepts only the case-sensitive exact string, and the API call is slightly different. To support this, there are three types of custom fields:
name is the dropdown value
id is the ID value of the dropdown
freeform accepts any value entered into the custom field
To avoid duplication, the JQL query run can be slightly modified by the config file.
The CLOSED_STATUS can be changed in the Jira template file using the status-not variable.
```
summary ~ TEMPLATE_NAME AND summary ~ HOSTNAME AND status != CLOSED_STATUS
```
```yaml
jira:
  # cloud is the boolean which tells if Jira instance is running in the cloud or on-prem version is used
  cloud: true
  # update-existing is the boolean which tells if the existing, opened issue should be updated or new one should be created
  update-existing: false
  # URL is the jira application url
  url: https://localhost/jira
  # account-id is the account-id of the Jira user or username in case of on-prem Jira
  account-id: test-account-id
  # email is the email of the user for Jira instance
  email: test@test.com
  # token is the token for Jira instance or password in case of on-prem Jira
  token: test-token
  # project-name is the name of the project.
  project-name: test-project-name
  # issue-type is the name of the created issue type (case sensitive)
  issue-type: Bug
  # SeverityAsLabel (optional) sends the severity as the label of the created issue
  # Use custom fields for Jira Cloud instead
  severity-as-label: true
  # Whatever your final status is that you want to use as a closed ticket - Closed, Done, Remediated, etc
  # When checking for duplicates, the JQL query will filter out statuses that match this.
  # If it finds a match _and_ the ticket does have this status, a new one will be created.
  status-not: Closed
  # Customfield supports name, id and freeform. name and id are to be used when the custom field is a dropdown.
  # freeform can be used if the custom field is just a text entry
  # Variables can be used to pull various pieces of data from the finding itself.
  # Supported variables: $CVSSMetrics, $CVEID, $CWEID, $Host, $Severity, $CVSSScore, $Name
  custom_fields:
    customfield_00001:
      name: 'Nuclei'
    customfield_00002:
      freeform: $CVSSMetrics
    customfield_00003:
      freeform: $CVSSScore
```
Similarly, other platforms can be configured. Reporting module also supports basic filtering and duplicate checks to avoid duplicate ticket creation.
```yaml
allow-list:
  severity: high,critical
```
This ensures tickets are only created for issues identified with high and critical severity; similarly, deny-list can be used to exclude issues of a specific severity.
If you are running periodic scans on the same assets, consider the -rdb, -report-db flag, which creates a local copy of valid findings in the given directory; the reporting module uses it to compare results and create tickets for unique issues only.
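A sketch of a periodic scan that persists findings locally (file and directory names are illustrative):

```shell
# Persist findings in a local report database across periodic scans
nuclei -l urls.txt -rc report-config.yaml -rdb ./nuclei-reports
```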
Nuclei supports Markdown export of valid findings with the -me, -markdown-export flag; this flag takes a directory as input for storing the Markdown-formatted reports.
Including request/response pairs in the Markdown report is optional; they are included when the -irr, -include-rr flag is used along with -me.
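For example (the output directory is illustrative):

```shell
# Export Markdown reports, including request/response pairs
nuclei -l urls.txt -me ./markdown-reports -irr
```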
These are not official viewers of Nuclei and Nuclei has no liability
towards any of these options to visualize Nuclei results. These are just
some publicly available options to visualize SARIF files.
Nuclei exposes running scan metrics on local port 9092 when the -metrics flag is used; they can be accessed at localhost:9092/metrics. The default port is configurable using the -metrics-port flag.
Here is an example of running Nuclei with metrics enabled: nuclei -t cves/ -l urls.txt -metrics
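While such a scan is running, the metrics endpoint can be polled, e.g. with curl:

```shell
curl -s localhost:9092/metrics
```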
The Nuclei engine supports a passive mode for HTTP-based templates utilizing file support; with it, HTTP templates can be run against locally stored HTTP response data collected by any other tool.
```shell
nuclei -passive -target http_data
```
Passive mode support is limited to templates that use {{BaseURL}} or {{BaseURL/}} as the base path.
If Nuclei was installed within a Docker container based on the installation instructions,
the executable does not have the context of the host machine. This means that the executable will not be able to access
local files such as those used for input lists or templates. To resolve this, the container should be run with volumes
mapped to the local filesystem to allow access to these files.
This example runs a Nuclei container against a list of URLs, writes the results to a .jsonl file and removes the
container once it has completed.
```shell
# This assumes there's a file called `urls.txt` in the current directory
docker run --rm -v ./:/app/ projectdiscovery/nuclei -l /app/urls.txt -jsonl /app/results.jsonl
# The results will be written to `./results.jsonl` on the host machine once the container has completed
```