waf - Configure the Web Application Firewall (WAF)
waf generic [(protocol_violations | protocol_anomalies | request_limits | http_policy | bad_robots | generic_attacks | xss_attacks | sql_injection_attacks | tight_security | trojans | common_exceptions | outbound) (on | off )]
waf limit [(response | request | assertions | name | value | arguments | files) <limit-value>]
waf errors [(allow | deny)]
waf rweb generic [<site-name> <generic-filter> (on | off)]
waf rweb custom [<site-name> [clear | (load | save) (ftp | sftp | tftp) <file-server> <file-name> [new]]]
waf rweb audit [<site-name> [on | off]]
waf rweb denyurl [(set <site-name> [<url>]) | raz [<site-name>]]
waf rweb errors [raz | add <site-name> (allow | deny) | del <site-name> | show [<site-name>]]
waf rweb bypass [<site-name> [raz | (add | del) (<rule-id>)+]]
waf bypass [raz]
The appliance can filter malicious requests to protect Web servers against unwanted or dangerous accesses, acting as a Web Application Firewall (WAF). This command is used to manage the WAF rules for reversed websites.
The first usage form allows you to activate or deactivate global generic WAF rules applicable to all reversed websites. Generic rules are provided by the OWASP ModSecurity Core Rule Set (www.owasp.org). Please note that generic rules may produce false positive matches. In this case, you can review your Web application for adjustment instead of deactivating the related generic rule. You can use the Web auditing module to inspect Web requests and rule matching (see below).
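As an illustration (using the outbound rule set as an example), the following commands respectively deactivate and reactivate those generic rules for all reversed websites:
waf generic outbound off
waf generic outbound on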
Here is a brief description of generic WAF rules:
protocol_violations: some protocol violations are common in application layer attacks. Validating HTTP requests eliminates a large number of application layer attacks. The purpose of these generic WAF rules is to enforce HTTP RFC requirements that state how the client is supposed to interact with the server. The details are as follows:
* Validate request line against the format specified in the HTTP RFC.
* Identify Invalid URIs Blocked by Apache.
* Identify multipart/form-data name evasion attempts.
* Verify that we’ve correctly processed the request body.
* Strict Multipart Parsing Checks.
* Multipart Unmatched Boundary Check.
* Accept only digits in content length.
* Do not accept GET or HEAD requests with bodies.
* Require Content-Length to be provided with every POST request.
* Deny inbound compressed content.
* Expect header is an HTTP/1.1 protocol feature.
* Pragma Header requires a Cache-Control Header.
* Range Header Checks.
* Broken/Malicious clients often have duplicate or conflicting headers.
* Check URL encodings.
* Check UTF encoding.
* Disallow use of full-width Unicode as decoding evasions may be possible.
* Restrict type of characters sent.
protocol_anomalies: some common HTTP usage patterns are indicative of attacks but may also be used by non-browsers for legitimate purposes. The purpose of these generic WAF rules is to reject requests that lack common headers: all normal web browsers include Host, User-Agent and Accept headers, so their absence implies either an attacker or a legitimate automation client. The details are as follows:
* Missing/Empty Host Header.
* Missing/Empty Accept Header.
* Missing/Empty User-Agent Header.
* Missing Content-Type Header with Request Body.
* Check that the host header is not an IP address.
request_limits: in most cases, you should expect a certain volume for each request on your website. For example, a request with 400 arguments can be suspicious. These generic WAF rules place limits on requests. Limit values can be customized globally for all Web requests (see the second usage form below).
http_policy: few applications require the breadth and depth of the HTTP protocol. On the other hand, many attacks abuse valid but rare HTTP usage patterns. Restricting HTTP protocol usage is therefore effective in blocking many application layer attacks. These generic WAF rules set limitations on the use of HTTP by clients. The details are as follows:
* Allow request methods.
* Restrict which content-types we accept.
* Restrict protocol versions.
* Restrict file extension.
* Restricted HTTP headers.
bad_robots: bad robots detection is based on checking elements easily controlled by the client. As such, a determined attack can bypass those checks. Therefore bad robots detection should not be viewed as a security mechanism against targeted attacks but rather as a nuisance reduction, eliminating most of the random attacks against your website.
generic_attacks: details are listed below:
* OS Command Injection Attacks.
* Coldfusion Injection.
* LDAP Injection.
* SSI injection.
* UPDF XSS.
* Email Injection.
* HTTP Request Smuggling.
* HTTP Response Splitting.
* RFI Attack.
* Prequalify Request Matches.
* Session fixation.
* File Injection.
* Command access.
* Command injection.
* PHP injection.
xss_attacks: details are listed below:
* Script tag based XSS vectors, e.g., <script> alert(1)</script>
* XSS vectors making use of event handlers like onerror, onload, etc., e.g., <body onload="alert(1)">.
* XSS vectors making use of JavaScript URIs, e.g., <p style="background:url(javascript:alert(1))">.
* All types of XSS (Cross Site Scripting).
sql_injection_attacks: details are listed below:
* Detect SQL Comment Sequences.
* SQL Hex Evasion Methods.
* String Termination/Statement Ending Injection Testing.
* SQL Operators.
* SQL Tautologies.
* Detect DB Names.
* SQL Keyword Anomaly Scoring.
* Blind SQL injection.
* SQL injection.
* SQL Injection Character Anomaly Usage.
* PHPIDS - Converted SQLI Filters.
tight_security: The details are as follows:
* Directory Traversal.
trojans: the trojan access detection rule detects access to known Trojans already installed on a Web server. Uploading of Trojans is part of the Antivirus module (see the antivirus command).
common_exceptions: this is used as an exception mechanism to remove common false positives that may be encountered. The details are as follows:
* Exception for Apache SSL pinger.
* Exception for Apache internal dummy connection.
* Exception for Adobe Flash Player.
outbound: inspect outbound data for information leakage types of issues (scanner, web analysis, program errors, source codes...). The details are as follows:
* Zope Information Leakage.
* CF Information Leakage.
* PHP Information Leakage.
* ISA server existence revealed.
* Microsoft Office document properties leakage.
* CF source code leakage.
* IIS default location.
* The application is not available.
* Weblogic information disclosure.
* File or Directory Names Leakage.
* IFrame Injection.
* Generic Malicious JS Detection.
* ASP/JSP source code leakage.
* PHP source code leakage.
* Statistics pages revealed.
* SQL Errors leakage.
* IIS Errors leakage.
* Directory Listing.
The second usage form allows you to specify limits for all Web requests and responses. These limits are applied globally to all responses and requests (when the request_limits generic rule is activated). The following limits can be configured:
* response: maximum response body size in KB.
* request: maximum request body size in KB, excluding the size of any files being transported in the request.
* assertions: maximum number of assertions (attribute=value pairs) or arguments in a request (the separator being '&').
* name: maximum length for an argument name in a request.
* value: maximum length for an argument value in a request.
* arguments: limit on the total length of all arguments (names and values) in a request.
* files: maximum size in KB for combined uploaded files. This value cannot be greater than the uploaded file size limit given during the installation.
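As an illustration, the following commands (with arbitrary example values) would limit request bodies to 128 KB and the number of arguments in a request to 400:
waf limit request 128
waf limit assertions 400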
The third usage form allows you to globally expose or rewrite original error pages sent by reversed websites. By an error page we mean a page with an HTTP status code other than 200. To allow the exposure of original error pages use the keyword allow. To rewrite original error pages use the keyword deny.
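For instance, to globally rewrite original error pages:
waf errors deny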
The fourth usage form allows you to activate or deactivate generic filters for a specific website. A website for which no specific generic filters are defined inherits the global filters.
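For example, with a hypothetical website named www1, the following command would deactivate the generic xss_attacks filter for that website:
waf rweb generic www1 xss_attacks off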
The fifth usage form allows you to load/save custom rule files from/to a file server. Custom rules allow you to apply restrictive controls on allowed or denied requests for a website. The "GET", "HEAD", "POST", "PUT", "DELETE", "CONNECT", "OPTIONS" and "TRACE" HTTP methods can be filtered by defining regular expressions for allowed or denied requests. Please note that HTTP methods other than "GET", "HEAD", "POST" and "OPTIONS" are not allowed by default. To allow blocked methods you should first bypass the default rules using the waf rweb bypass usage form and then create custom rules to allow them.
The process of custom rule definition is straightforward: create a rule file for a specific website and then load it into the system. Because regular expressions may be complex, this process allows you to use your favourite text editor (vi, emacs...) to edit the rule file.
In the custom rule file, each rule definition begins with a line having the following syntax:
rule <rule-name> (allow | deny) (get | head | post | put | delete | connect | options | trace )
followed by optional lines having the following syntax:
(uri | body | ip) "<regular-expression>"
In the custom rule file, each keyword and argument must be separated by blank or tab delimiters.
If a line begins with the rule statement, three mandatory arguments must be specified:
The first argument is the rule identifier. It allows you to identify the rule in audit mode (see below). A rule identifier must be a combination of alphanumeric characters and the characters "_", "-" or "." (the dot) and begin with an alphanumeric character.
The second argument specifies the action (allow | deny) and the third argument is the HTTP method in lowercase. Optional lines begin with one of the following keywords:
* uri: The part after the "/" character in a URL (can be used with POST and GET methods)
* body: Arguments in the body of a POST (can be used only in conjunction with a uri statement and POST method)
* ip: the source IP address of the client making the Web request
followed by a regular expression between quotation marks. Note that a quotation mark in a regular expression should not be preceded by a backslash character here. Regular expressions are based on PCRE (Perl Compatible Regular Expressions).
For instance, to allow the GET on "/", the GET on "/cgi-bin/get-phone.cgi" and the POST on "/cgi-bin/set-phone.cgi" with the arguments "name=<string>&phone=<numbers>", the custom rule file could look like:
rule r1 allow get
uri "^/$"
rule r2 allow get
uri "^/cgi-bin/get-phone\.cgi"
rule r3 allow post
uri "^/cgi-bin/set-phone\.cgi$"
body "^name=[[:print:]]*\&phone=[[:digit:]]*$"
ip "^192\.168\.155\.254$"
rule r4 deny post
uri "^/cgi-bin/set-phone\.cgi$"
rule r5 allow get
rule r6 allow head
rule r7 allow post
After a custom rule file is loaded, its syntax is verified and the loading is confirmed only if no errors are detected. Note that generic filters (when activated) are always applied before custom rules.
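As an illustration, assuming a hypothetical website named www1 and an SFTP server reachable at 192.168.1.10, a custom rule file named www1.rules could be loaded and later saved back as follows (the server address and file name are examples only):
waf rweb custom www1 load sftp 192.168.1.10 www1.rules
waf rweb custom www1 save sftp 192.168.1.10 www1.rules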
The sixth usage form allows you to manage the audit mode for a reversed website. Auditing allows you to inspect HTTP/S request contents and facilitate the filtering rule design process. To activate the audit mode for a website use the keyword on. To deactivate the audit mode use the keyword off. Without specifying a state (on or off), this command prints the audit mode for a website (or for all websites if no website is specified).
Caution: Note that the auditing feature is for debugging purposes only and, in normal circumstances, should not be activated.
When the audit mode is activated for a reversed website and when the administration audit mode is turned on (see the command admin), all related Web request contents can be inspected at the URL https://<admin-ip>:<wadmin-port>.
In audit mode, an HTTP request header and body (for the POST method), the matched filtering rule and the resulting state (allowed or denied) are logged. In this way, the security staff can easily design custom filtering rules for managed websites.
Note that the audit mode is helpful during the filtering rule design process but should not be activated on a production appliance: auditing consumes significant hardware resources and, in terms of security, is not recommended in production.
When the audit mode is deactivated for a website, all auditing data for that website is lost.
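For example, to activate the audit mode for a hypothetical website named www1 while designing its filtering rules, and to deactivate it once the design is finished:
waf rweb audit www1 on
waf rweb audit www1 off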
By default, when access is denied, a generic deny page is displayed. The seventh usage form allows you to set a custom URL to which denied requests for a website are redirected. To set a deny URL for a website, use the keyword denyurl followed by the keyword set, the website name and the URL to redirect to. To reset a website to the default behaviour, use the keyword raz followed by the website name. To reset all websites to the default behaviour, use the keyword raz without a website name. Without optional arguments, this usage form displays the current settings. If a URL is given, it should be in the form (http|https|ftp)://<domain-name>[/<URI>], where a URI may contain alphanumeric characters or any of the following characters: -._~:/?#[]@!$&()*+,;=. Any other character must be encoded using percent-encoding. A percent-encoding has the form %(a-fA-F0-9)(a-fA-F0-9) (use %27 for the quote character).
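As an illustration, the first command below would redirect denied requests for a hypothetical website named www1 to an example URL (to be replaced by a real deny page), while the second command would restore the default deny page for that website:
waf rweb denyurl set www1 https://www.example.com/denied.html
waf rweb denyurl raz www1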
Note: When both the waf and the antivirus modes are activated, the system filters all attempts to upload malware files to protected Web servers (usually files are uploaded using the POST method and the multipart/form-data encoding type).
The eighth usage form allows you to specifically set the exposure or rewriting of original error pages for a reversed website. If no specific rule is defined for a website, the global behaviour is used (see the third usage form). The keywords add, del, show and raz respectively allow you to add a rule, delete a rule, show a rule and erase all rules. To allow the exposure of original error pages use the keyword allow. To rewrite original error pages use the keyword deny.
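For instance, to expose original error pages for a hypothetical website named www1 regardless of the global setting, and to later remove that specific rule:
waf rweb errors add www1 allow
waf rweb errors del www1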
OWASP rules may generate a lot of false positive matches. The term false positive refers to legitimate Web requests denied by the WAF. To avoid this situation, you can review your Web application and modify the content of Web requests so that they no longer trigger false positive matches. In most cases, however, this is not an option because you do not own the application or simply do not want to modify it. In this case, you have the possibility to bypass the rule causing the false positive match, using the ninth usage form.
To bypass an OWASP rule for a given website use the keywords rweb bypass followed by the website name, the keyword add and one or more rule IDs to bypass. To remove a bypass statement use the keyword del instead of the keyword add. To remove all bypass statements for a given website use the keyword raz. To remove all bypass statements for all websites use the tenth usage form.
Please note that a rule ID is a numerical value and is different from the rule name given for custom rules. You can find the IDs of the rules causing false positive matches using the auditing module. See the command admin waudit for further information on the auditing module.
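As an illustration, assuming that the auditing module reports a false positive match caused by a rule with the hypothetical ID 960015 on a website named www1, that rule could be bypassed and, if needed, re-enabled later as follows:
waf rweb bypass www1 add 960015
waf rweb bypass www1 del 960015
The command waf bypass raz removes all bypass statements for all websites (tenth usage form).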
admin (1) antivirus (1) apply (1) domainname (1) hostname (1) ip (1) mode (1) port (1) vlan (1)
CacheGuard Technologies Ltd <www.cacheguard.com>
Send bug reports or comments to the above author.
Copyright (C) 2009-2017 CacheGuard - All rights reserved