Published On: April 11th, 2021 | Last Updated: April 23rd, 2021 | Categories: hardening


Nginx has become very popular in recent years and is now almost on par with Apache in usage statistics. We won't debate the benefits of one over the other because we all already know that Nginx is by far better! This hardening post is a short summary of some features already included in bunkerized-nginx, an open-source project we created to make the hardening process easier. If you're too lazy to do the hardening yourself, you should give it a try!


Unprivileged user

Any service, especially one listening on the network, should not run as root. That way, if a vulnerability is exploited, the attacker won't have full privileges and will have to work harder to escalate them. Use the user directive at the main context so workers run as an unprivileged user:

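A minimal sketch (the "nginx" user name is an assumption — use whichever unprivileged account exists on your system):

```nginx
# /etc/nginx/nginx.conf (main context)
# The master process keeps root to bind ports 80/443,
# but workers drop privileges to this account
user nginx;
```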


Logging

Logging is an important part of running crucial services. It can help you a lot in finding the solution when there is a problem. The generated logs can also be analyzed by additional tools to detect malicious behaviours, for example to report or ban the attacker. Nginx has two types of logs: access and error. The first contains information about each request/response (e.g. status code, client IP, URI, …), while the second is used when nginx fails to do something and/or there are errors in your configuration. You can define your own log format for the access type but not for the error type (see the log_format directive). Here is an example which simply defines a format and writes logs to files (see the access_log and error_log directives):

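A possible sketch (the log paths and the selection of format fields are illustrative):

```nginx
http {
    # Custom access log format: pick the fields you actually need
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';

    access_log /var/log/nginx/access.log main;
    # No custom format possible here, only the file and the minimum severity
    error_log /var/log/nginx/error.log warn;
}
```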

Default server

Many bots scan the internet to find vulnerable web services. They simply send HTTP requests to all IP addresses and analyze the responses, which means they don't send a valid Host header containing your domain name. You can safely deny all access when the Host header is not a known domain. One possible way of doing it is to add a "catch-all" server that returns a 444 status code (which tells nginx to close the connection):

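A sketch of such a catch-all server (the certificate paths are assumptions):

```nginx
# Answers whenever no other server_name matches the Host header
server {
    listen 80 default_server;
    listen 443 ssl default_server;
    server_name _;

    # Any self-signed certificate works here: only scanners will ever see it
    ssl_certificate /etc/nginx/self-signed.crt;
    ssl_certificate_key /etc/nginx/self-signed.key;

    # 444 is a special nginx code: close the connection without replying
    return 444;
}
```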

Note that any self-signed certificate will do the trick because only "bad" clients will receive it. Another important thing is to not use default_server in any other server configuration.

Information leak

Remove headers

Some HTTP headers are really verbose and can give sensitive information to attackers. The rule is simple: the less you tell your enemy, the better. When these headers are not necessary, we can simply remove them. The easiest way to do it is to use an external module called headers-more. Once installed, you have access to the more_clear_headers directive that can be used in http, server and even location contexts. We recommend doing it at the http context so it applies to every server:

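A sketch (the header names are just common examples of verbose headers sent by backends):

```nginx
http {
    # Requires the headers-more module to be compiled in or loaded
    more_clear_headers 'Server';
    more_clear_headers 'X-Powered-By';
    more_clear_headers 'X-AspNet-Version';
}
```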

Remove version

Hiding the version of the service/application you use is always a good practice. With that kind of information, attackers can search for vulnerabilities affecting that specific version. Hiding the version won't patch a vulnerability if it exists, but it can fool some attackers into choosing another target. Set server_tokens to off at the http context:

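```nginx
http {
    # Hide the nginx version in the Server header and in default error pages
    server_tokens off;
}
```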

Error pages

By default, Nginx displays basic error pages for standard HTTP codes like 404, 403, 500, … The problem is that they also contain the string "nginx", and even if you recompile nginx to replace that string, the template is still identifiable. The attacker will know that we use nginx, which isn't a good thing. A quick way to mitigate this is to define your own error pages using the error_page directive at the http or server context:

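A sketch (the page names and root path are illustrative):

```nginx
server {
    # Map common error codes to our own static pages
    error_page 400 403 404 /errors/4xx.html;
    error_page 500 502 503 504 /errors/5xx.html;

    location ^~ /errors/ {
        internal;   # only reachable through error_page, not directly
        root /usr/share/nginx/html;
    }
}
```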

If you follow this best practice, consider doing it for at least the most "common" status codes that an attacker may trigger: 400, 403, 404, 500, …


Let’s Encrypt

With Let's Encrypt you don't have to pay anymore to get a valid HTTPS certificate. There are two major things to note: you need to resolve a "challenge" to prove that you own the corresponding domain, and certificates are only valid for 90 days. Fortunately, they provide a tool called certbot which automates the generation and renewal of the certificates. We will assume that you use the webroot plugin to keep control of the configurations (this way certbot won't edit your configuration files). First, you will need to handle requests sent by Let's Encrypt to verify that you own the domain (the "challenge"):

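A sketch (the domain and webroot path are placeholders):

```nginx
server {
    listen 80;
    server_name www.example.com;

    # Serve the HTTP-01 challenge files written by certbot's webroot plugin
    location ^~ /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
        default_type text/plain;
    }
}
```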

Next, you need to use certbot to generate the certificate. This only needs to be done once:

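Something along these lines (domain and path must match the nginx configuration above; adjust to your setup):

```shell
# One-time certificate generation using the webroot plugin
certbot certonly --webroot --webroot-path /var/www/letsencrypt -d www.example.com
```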

Once the certificate has been generated, you can edit your server configuration to use it and listen for HTTPS connections :

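A sketch, assuming certbot's default layout under /etc/letsencrypt/live/:

```nginx
server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;
}
```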

Last but not least, you need to automate the renewal of certificates before they expire. We can do this with a cron job that executes at midnight:

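A possible /etc/cron.d entry (the reload hook is one way to make nginx pick up the new certificate):

```shell
# Attempt renewal every night at midnight; certbot only renews certificates
# close to expiry, and the deploy hook reloads nginx when one was renewed
0 0 * * * root certbot renew --quiet --deploy-hook "nginx -s reload"
```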

TLS settings

Using HTTPS out-of-the-box is not enough. Under the hood, the TLS protocol is used and the whole security model (authentication, integrity and confidentiality) relies on cryptographic algorithms. If one of the algorithms is not secure, the HTTPS connection won't be either. From TLS versions to ciphersuites, there are some important settings. You can use a tool like the SSL Configuration Generator from Mozilla to help you. Here is a configuration sample which should be a good balance between pure security and compatibility:

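A sketch loosely based on Mozilla's "intermediate" profile (double-check the generator for current recommendations):

```nginx
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
ssl_prefer_server_ciphers off;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
ssl_dhparam /etc/nginx/dhparam;
```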

The /etc/nginx/dhparam file can be downloaded from here.

HTTP redirection

The first time a user visits your website, he may do so over insecure HTTP, for example because he followed an external link missing the "s" in https:// or simply typed the address without it. A good practice is to redirect him to the HTTPS website transparently. One way of doing this in Nginx is to separate the HTTP and HTTPS servers and use the return directive in the HTTP server to redirect to the HTTPS one:

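A sketch (domain is a placeholder):

```nginx
# Plain HTTP server: its only job is to redirect to HTTPS
server {
    listen 80;
    server_name www.example.com;
    return 301 https://www.example.com$request_uri;
}
```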

Security headers

Although some HTTP headers are not mandatory, some of them add extra security, especially for the clients. Here is a non-exhaustive list of headers that you should definitely look into:

  • Strict-Transport-Security : tells clients to always use HTTPS connections
  • X-Frame-Options : defines how a web service can be included in other web services (e.g. iframes)
  • Content-Security-Policy : defines a policy on what can be loaded on the pages (e.g. inline scripts, objects, …)
  • X-Content-Type-Options : tells the client to be strict with MIME types
  • Referrer-Policy : defines when the Referer header should be sent
  • Permissions-Policy : defines which features a web service may request from clients (e.g. geolocation, fullscreen, webcam, …)
  • Set-Cookie (secure, HttpOnly and SameSite flags) : only send cookies over HTTPS (secure), disable cookie access from JavaScript (HttpOnly) and control whether cookies are sent when the client comes from another web service (SameSite)

Some headers (especially Content-Security-Policy) need some research to be adapted to your web service. The more_set_headers directive from the headers-more module lets you easily add these headers:

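A sketch — the values are illustrative and must be adapted (especially the CSP) to your application:

```nginx
more_set_headers "Strict-Transport-Security: max-age=31536000; includeSubDomains";
more_set_headers "X-Frame-Options: DENY";
more_set_headers "X-Content-Type-Options: nosniff";
more_set_headers "Referrer-Policy: strict-origin-when-cross-origin";
more_set_headers "Content-Security-Policy: default-src 'self'";
more_set_headers "Permissions-Policy: geolocation=(), camera=()";
```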

For the cookie header you can use another module called nginx_cookie_flag_module, which provides the set_cookie_flag directive:

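A sketch (check the module's documentation for the exact flag spelling it supports):

```nginx
# "*" applies the flags to every cookie set by the backend
set_cookie_flag * HttpOnly secure SameSite=Lax;
```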


ModSecurity

ModSecurity is a WAF (Web Application Firewall): it analyzes requests and responses and searches for common "hacking" patterns. If it finds something "suspicious" it blocks it to prevent any damage. Even if a WAF can be bypassed with some techniques, it's one of the best protections to annoy attackers. To configure ModSecurity for Nginx, you will need to compile libmodsecurity and then the ModSecurity-nginx plugin. A WAF without any rules is worth nothing, which is why you should use the OWASP CRS (Core Rule Set), which provides rules against common web attacks.
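Once the connector module is built, enabling it can look like this (the rules file path is a convention, not a requirement):

```nginx
# Requires libmodsecurity and the ModSecurity-nginx connector
modsecurity on;
# main.conf typically includes modsecurity.conf plus the OWASP CRS files
modsecurity_rules_file /etc/nginx/modsec/main.conf;
```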

Anti bot


Attackers may use automated tools (e.g. scanners, bruteforcers, …) to find and exploit vulnerabilities on your web service. We can try to slow them down by using two features that come with Nginx: request limiting and connection limiting. The first lets you define how often a client can make requests (e.g. 60 requests/minute, 10 requests/second, …) and the second how many TCP connections are accepted from a single IP address. Here is an example where only 1 request per second for a specific resource is allowed (with a burst of 2) and only 30 TCP connections per IP are allowed:

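A sketch (zone names, sizes and the /login location are illustrative):

```nginx
http {
    # One bucket per client IP, refilled at 1 request/second
    limit_req_zone $binary_remote_addr zone=limitreq:10m rate=1r/s;
    # Counter of simultaneous connections per client IP
    limit_conn_zone $binary_remote_addr zone=limitconn:10m;

    server {
        # At most 30 concurrent TCP connections per IP
        limit_conn limitconn 30;

        location /login {
            # 1 r/s with a burst of 2; excess requests are rejected
            limit_req zone=limitreq burst=2;
        }
    }
}
```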


Fail2ban

When trying to "hack" a web service, the attacker will surely generate some "strange" HTTP status codes. A certain number of "uncommon" status codes within a period of time may be suspicious enough to ban the corresponding IP address. This is exactly what we can do with fail2ban: its main feature is to analyze logs, count occurrences over a period of time and then perform an action.

Include the configuration generated by fail2ban :

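One approach is to have the ban action write deny rules into files that nginx includes (the directory is a convention we chose):

```nginx
http {
    # "deny <ip>;" rules appended by the fail2ban ban action
    include /etc/nginx/fail2ban/*.conf;
}
```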

fail2ban jail :

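A sketch (jail, filter and action names are ours — pick your own, and tune the thresholds):

```ini
# /etc/fail2ban/jail.d/nginx-bad-status.conf
[nginx-bad-status]
enabled  = true
port     = http,https
filter   = nginx-bad-status
action   = nginx-deny
logpath  = /var/log/nginx/access.log
# Ban after 10 "uncommon" status codes within 60 seconds, for 1 hour
maxretry = 10
findtime = 60
bantime  = 3600
```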

fail2ban filter :

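A sketch — the regex assumes a classic combined-style access log format and must be adjusted to yours, as must the list of status codes you consider suspicious:

```ini
# /etc/fail2ban/filter.d/nginx-bad-status.conf
[Definition]
failregex = ^<HOST> -.*"[A-Z]+ [^"]*" (400|403|404|444)
ignoreregex =
```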

fail2ban action :

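A sketch that appends/removes a deny rule in a file nginx includes, then reloads (paths are assumptions; fail2ban substitutes the <ip> tag):

```ini
# /etc/fail2ban/action.d/nginx-deny.conf
[Definition]
actionstart =
actionstop  =
actioncheck =
actionban   = echo "deny <ip>;" >> /etc/nginx/fail2ban/deny.conf && nginx -s reload
actionunban = sed -i '/deny <ip>;/d' /etc/nginx/fail2ban/deny.conf && nginx -s reload
```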

That’s not enough !

Don't rely solely on these tips: use other hardening guides and adapt everything to your needs and web services. If you don't want to do it "by hand", you should really consider using bunkerized-nginx!
