A Comprehensive Web Server Security Guide

A lot of the security advice I read tends to fall into one of two camps: advice tailored to enterprises with full-time security and infrastructure teams, or rapid-fire tool installation aimed at solo web developers without much context.

In this article, we will tread a middle path and look at some ways to manage a web server securely across a variety of common deployment scenarios. My goal is to provide better instrumentation and security understanding for the mixed teams of developers and sysadmins often found in small businesses, startups, and interesting solo projects.

For each topic, I'll go over the tooling in detail, explain when and why you should implement it (and when not to), and provide step-by-step instructions to get it up and running on a Linux web server. The topics are laid out in the order I suggest implementing them, based on ease of implementation and security benefit.

  1. Configuring automated installation of security patches
  2. Setting up a host based firewall
  3. Setting up and configuring a web application firewall
  4. Setting up host based intrusion detection and active response

What I am specifically not covering is generic server or service hardening. Each OS, web application server, and piece of your stack likely has its own hardening best practices, which should be followed separately.

Automated Security Patching

Security patching is the number one thing you can do to protect your server and applications. Always try to keep all running services up to date. Fortunately, we can automatically apply security patches using tools like unattended-upgrades (Debian/Ubuntu) or yum-cron (RHEL/CentOS).

It's great because it truly is set and forget - once you have it running, just continue your normal weekly, monthly, or quarterly patch cycle of upgrading and restarting all services (OS to application). Unattended upgrades will ensure you get security patches as soon as they are available, without any restarts or service interruption.

Everyone should use it - thankfully, it is production safe!

Unattended upgrades on Ubuntu & Debian

It's super simple! Just install with apt and check the configuration files to make sure it's enabled (often it will be after install).

$ sudo apt install unattended-upgrades
$ cat /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
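
If you want more control over how upgrades behave, the main knobs live in /etc/apt/apt.conf.d/50unattended-upgrades. A minimal sketch of the options I would check first - your defaults may differ by release, so treat this as a starting point rather than a drop-in file:

// Only pull packages from the security pocket
Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
};
// Keep automatic reboots off so patching never interrupts service
Unattended-Upgrade::Automatic-Reboot "false";
// Optionally mail a summary to root
Unattended-Upgrade::Mail "root";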

Unattended upgrades on RHEL & CentOS

On Red Hat variants, the preferred package is yum-cron. We generally have to modify the /etc/yum/yum-cron.conf config file to contain the entries shown below to enable automatic updates, but it is very simple.

$ sudo yum install -y yum-cron
$ cat /etc/yum/yum-cron.conf | grep -E 'update_cmd|download_updates|apply_updates'
update_cmd = security
download_updates = yes
apply_updates = yes
$ sudo systemctl start yum-cron.service
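
The command above only starts the service for the current boot. Assuming a systemd-based system, also enable it so it comes back after a reboot:

$ sudo systemctl enable yum-cron.service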

Setting up a Host Based Firewall

Firewalls are the first line of defense against network based attacks and exploits. I find that many smaller teams and early-stage businesses do not deploy firewalls because they can be costly, slow down development, and require expertise to set up and manage.

However, it is very simple and highly effective to get a host based firewall running in just a few minutes, with little risk to your environment. The significant benefits a firewall brings include preventing all sorts of attacks against both existing and future services, and the ability to halt in-progress automated attacks that do not take your firewall into account.

If you don't have a network team designing subnets and DMZs for security, this is a quick and easy way to get many firewall benefits without much administration overhead.

Who should use a host based firewall?

I believe everyone should have a firewall in front of every device and service they control. Host based firewalls, specifically, are best suited to teams with a minimal footprint on the web that cannot yet manage network firewalls.

Use this if you do not have a firewall blocking traffic flows elsewhere in your infrastructure.

Who should not use a host based firewall?

If you have a network or security team that manages subnets and firewalls within the network, then the rules I will describe here are often better done within those teams and processes.

Let's look at the UFW firewall for Linux

I like the UFW host based firewall - it's simple to use, runs on top of the powerful iptables, and is quick to get up and running. It isn't suited to larger network firewall jobs - single host protection is what it is designed for.

Below, I walk through a full install and configuration for a typical webserver.

Installing UFW

UFW comes installed by default on Ubuntu, but if you are missing it you can run sudo apt install ufw. On RHEL variants, it comes from the EPEL repository:

$ sudo yum install -y epel-release
$ sudo yum install -y ufw

Configuring Basic Firewall Rules

Let's configure some rules for our web server. I am assuming this is a basic site that only serves traffic on ports 80 and 443 and allows remote SSH connections, so we'll need port 22 as well.

When configuring this, the order of commands is important - enabling the firewall before allowing port 22 will drop any open SSH connection you have!

The following commands will allow all outgoing traffic and all incoming traffic on these three ports, then start the firewall. At any time you can see the status of the firewall and all active rules by running sudo ufw status verbose.

$ sudo ufw default deny incoming
$ sudo ufw default allow outgoing
$ sudo ufw allow ssh
# Before running this next command, make sure you have allowed SSH!
$ sudo ufw enable
$ sudo ufw allow http
$ sudo ufw allow https

You might wonder why this is necessary if you aren't running any other services on the server. It helps ensure that if someone one day misconfigures a local database to listen externally, or installs and runs redis, or any other service, a conscious choice to update the firewall must be made before any remote connections can happen.

Configuring Advanced Firewall Rules (I promise it's still simple!)

I call these advanced because they need to be thought through. Decide if they make sense for your specific implementation, and think about what other needs your environment might have.

Limit SSH only to known IP address blocks

If possible, it is best to limit SSH to your own company's IP address ranges, or to specific jump hosts on your network with static IPs. This can be risky if you don't control your IPs, though you could fall back to allowing only the address ranges of specific countries.

Make sure that you also remove the rule allowing SSH from anywhere after setting this rule, and that you are currently connected from an allowed IP, or you risk being locked out of SSH.

# Limit to one IP (a jump host, perhaps). Note that 10.x addresses are not publicly routable.
$ sudo ufw allow from 10.10.10.10 to any port 22
# or an IP CIDR range
$ sudo ufw allow from 10.10.10.0/24 to any port 22

Limit outgoing connections

Most servers don't need to make outgoing connections on all ports. Limit outbound traffic to the web and DNS ports - these are generally always needed, since updates are pulled over them. This can help prevent malware that uses hard-coded ports from calling back out of your server if you do end up infected.

Add other rules as needed - and make them as specific as possible (for instance, a database IP and port combination), while not making them so specific that they would require frequent changes.

$ sudo ufw default deny outgoing
$ sudo ufw allow out 53
$ sudo ufw allow out http
$ sudo ufw allow out https
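
As a sketch of the more specific outbound rules mentioned above, a rule for a single database host might look like this (the IP and port are placeholders - substitute your own):

$ sudo ufw allow out to 203.0.113.10 port 5432 proto tcp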

Deleting rules

Need to remove a rule? List them by number and delete the numbered rule.

$ sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 80                         ALLOW IN    Anywhere
$ sudo ufw delete 1

Installing and Configuring a Web Application Firewall

The best way to prevent a web application from being hacked is to quickly install the latest vendor patches and to consistently scan for and fix security bugs as they are found. Bonus points if you can set up server routing rules that only allow admin page access from company IP address ranges.

However, for many businesses, the reality is that vendor software is out of date (sometimes by years), or custom software is left unchanged for long periods because it just works and is costly to maintain.

A web application firewall (WAF) can help mitigate some of this risk by filtering out common attacks like SQL injection and cross-site scripting. It isn't a fail-safe - experienced attackers can almost always bypass one - but it does stop common bot attacks.
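
To give a feel for what a WAF rule actually looks like, here is a purely illustrative ModSecurity rule - the id and message are made up for this example, and the OWASP ruleset we install below is far more thorough:

SecRule ARGS "@contains <script" \
    "id:10001,phase:2,t:lowercase,deny,status:403,msg:'Possible XSS attempt'"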

Who should consider installing a WAF?

If you are running applications which you do not have full control over, it probably makes sense to install and configure a WAF. The extra security protections can easily make up for the administration overhead.

If your application is not routinely being scanned by a variety of static and dynamic security scanners, or management puts off fixing known security vulnerabilities, a WAF can provide a bit of a stopgap mitigation.

Who should avoid a WAF?

If you have complete control over your application and infrastructure (a fully custom developed application), your time may be better spent fixing security bugs and adding a combination of static and dynamic analysis to your build pipeline rather than maintaining a WAF.

Additionally, some applications may break with the introduction of a WAF, so careful testing is needed, along with a minimum monitoring period during which active defense is disabled.

Installing mod_security as our WAF

I like mod_security because it is free, relatively easy to set up and get running, and supports both Apache and NGINX. Below, I walk through installing, configuring, and monitoring mod_security on both kinds of servers.

Installing mod_security with Apache

It's really easy with Apache!

# Debian / Ubuntu
$ sudo apt install libapache2-modsecurity
$ sudo systemctl restart apache2

# RHEL / CentOS
$ sudo yum install mod_security
$ sudo systemctl restart httpd

Install default & OWASP rulesets

The default and OWASP rules are a good starting point. From there, you can determine additional rules that are specific to your environment.

Navigate to the default modsecurity folder (it should have been created and populated on install) - /etc/modsecurity/ for Apache. Then download and rename the rulesets. The first (default) ruleset shown below may have been installed automatically; if so, it is up to you whether to download the latest directly or keep what was installed.

$ cd /etc/modsecurity
# Default rules
$ sudo wget https://raw.githubusercontent.com/SpiderLabs/ModSecurity/v3/master/modsecurity.conf-recommended
$ sudo wget https://raw.githubusercontent.com/SpiderLabs/ModSecurity/v3/master/unicode.mapping
$ sudo mv modsecurity.conf-recommended modsecurity_default.conf
# OWASP rules
$ sudo git clone https://github.com/SpiderLabs/owasp-modsecurity-crs
$ sudo cp owasp-modsecurity-crs/crs-setup.conf.example owasp-modsecurity-crs/crs-setup.conf

Let's now go ahead and actually turn the rules on within our webserver.

Turn the rules on in Apache

Now we need to update the Apache configuration. Open the /etc/apache2/mods-available/security2.conf file and update it to contain the following:

<IfModule security2_module>
        # Default dir for modsecurity's persistent data
        SecDataDir /var/cache/modsecurity

        IncludeOptional /etc/modsecurity/*.conf
        Include /etc/modsecurity/rules/*.conf
        IncludeOptional /etc/modsecurity/owasp-modsecurity-crs/*.conf
        Include /etc/modsecurity/owasp-modsecurity-crs/rules/*.conf
</IfModule>

Test the configuration changes using apachectl configtest and if all looks good restart Apache to enable the WAF.

Actively block caught traffic

I recommend actively blocking malicious traffic. If you feel comfortable doing more than logging attack requests, it's a simple config update. Only do this on production systems once you are confident it won't break existing functionality and have reviewed the logs in /var/log/modsec_audit.log.

In the modsecurity_default.conf file, change the line near the top that reads SecRuleEngine DetectionOnly to SecRuleEngine On.
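
If you prefer to script the change, something like this should do it (the path assumes the Apache layout used above):

$ sudo sed -i 's/^SecRuleEngine DetectionOnly/SecRuleEngine On/' /etc/modsecurity/modsecurity_default.conf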

Restart the web server to have these take effect.

Installing mod_security with NGINX

It's a bit more involved with NGINX, and many guides do not clarify what needs to be in place to get this installed and working.

First, check whether the NGINX you are running was compiled with the --with-compat flag by running nginx -V. Any installation without this flag will need NGINX reinstalled with the flag enabled. As of this writing, most distributions' default packages are not built with --with-compat, so I assume you will have to reinstall NGINX.
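
For example, this quick check prints with-compat only when the flag is present (nginx -V writes to stderr, hence the redirect):

$ nginx -V 2>&1 | grep -o with-compat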

Installing the official NGINX repository & version

There are a couple of options, but I find the easiest is to simply install the version from the official NGINX repository. If you already have NGINX installed, remove it first with sudo apt remove nginx before installing the new one (this will cause a service interruption of a few minutes, but no configuration files will be lost).

On Ubuntu, install the official NGINX package like this:

$ wget https://nginx.org/keys/nginx_signing.key
$ sudo apt-key add nginx_signing.key
$ echo "deb https://nginx.org/packages/mainline/ubuntu/ $(lsb_release -cs) nginx" | sudo tee -a /etc/apt/sources.list.d/nginx.list
$ echo "deb-src https://nginx.org/packages/mainline/ubuntu/ $(lsb_release -cs) nginx" | sudo tee -a /etc/apt/sources.list.d/nginx.list
$ sudo apt update
# Check that the official nginx repository will be used
$ apt policy nginx
nginx:
  Installed: (none)
  Candidate: 1.15.8-1~bionic
  Version table:
     1.15.8-1~bionic 500
        500 https://nginx.org/packages/mainline/ubuntu bionic/nginx amd64 Packages
$ sudo apt install nginx

Building the mod_security module

Unfortunately, we have to build this module manually, but it isn't too hard. NGINX has an excellent walk-through on building and installing the module, so follow their instructions to install mod_security.

The commands I ran to get this running on my latest server are here if you just want to get moving quickly. You may have to install certain build tools such as make first.

# Download and build mod_security - may take 15-20 minutes
$ git clone --depth 1 -b v3/master --single-branch https://github.com/SpiderLabs/ModSecurity
$ cd ModSecurity/
$ git submodule init
$ git submodule update
$ ./build.sh
$ ./configure
$ make
$ sudo make install
$ cd ..
# Grab the NGINX connector module source as well
$ git clone --depth 1 https://github.com/SpiderLabs/ModSecurity-nginx
# Build the module against the NGINX source - update the
# version to match your installed NGINX version (check with nginx -v).
# Start only after the above commands finish successfully.
$ wget http://nginx.org/download/nginx-1.14.0.tar.gz
$ tar zxvf nginx-1.14.0.tar.gz
$ cd nginx-1.14.0/
$ ./configure --with-compat --add-dynamic-module=../ModSecurity-nginx
$ make modules
$ sudo cp objs/ngx_http_modsecurity_module.so /usr/share/nginx/modules/ngx_http_modsecurity_module.so
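
Because this is a dynamic module, NGINX also has to be told to load it. Assuming you copied the .so to /usr/share/nginx/modules as above, add the following near the top of /etc/nginx/nginx.conf, outside any http or server block:

load_module /usr/share/nginx/modules/ngx_http_modsecurity_module.so;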

Install default & OWASP rulesets

The default and OWASP rules are a good starting point. From there, you can determine additional rules that are specific to your environment.

Create the mod_security folder (/etc/nginx/modsec), change into it, then download and rename the rulesets.

$ sudo mkdir -p /etc/nginx/modsec
$ cd /etc/nginx/modsec
# Default rules
$ sudo wget https://raw.githubusercontent.com/SpiderLabs/ModSecurity/v3/master/modsecurity.conf-recommended
$ sudo wget https://raw.githubusercontent.com/SpiderLabs/ModSecurity/v3/master/unicode.mapping
$ sudo mv modsecurity.conf-recommended modsecurity_default.conf
# OWASP rules
$ sudo git clone https://github.com/SpiderLabs/owasp-modsecurity-crs
$ sudo cp owasp-modsecurity-crs/crs-setup.conf.example owasp-modsecurity-crs/crs-setup.conf

Let's now go ahead and actually turn the rules on within our webserver.

Turn the rules on in NGINX

First, let's create a single configuration file that loads all of our other conf files and rules (Still from within our ruleset directory /etc/nginx/modsec).

$ cat main.conf
Include /etc/nginx/modsec/modsecurity_default.conf
Include /etc/nginx/modsec/owasp-modsecurity-crs/crs-setup.conf
Include /etc/nginx/modsec/owasp-modsecurity-crs/rules/*.conf

Now update the NGINX config file at /etc/nginx/conf.d/default.conf to include the following two lines right below the server_name directive.

modsecurity on;
modsecurity_rules_file /etc/nginx/modsec/main.conf;
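
For context, here is a minimal sketch of where those directives land in a server block (the server_name and document root are placeholders for your own site):

server {
    listen 80;
    server_name example.com;

    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;

    location / {
        root /usr/share/nginx/html;
        index index.html;
    }
}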

Test the changes with nginx -t and if everything looks ok, restart the web server!

Actively block caught traffic

I recommend actively blocking malicious traffic. If you feel comfortable doing more than logging attack requests, it's a simple config update. Only do this on production systems once you are confident it won't break existing functionality and have reviewed the logs in /var/log/modsec_audit.log.

In the modsecurity_default.conf file, change the line near the top that reads SecRuleEngine DetectionOnly to SecRuleEngine On.

Restart the web server to have these take effect.

Intrusion Detection and Active Defense

Intrusion detection is the practice of identifying a compromise in progress and trying to automatically stop it, or at least notify someone that it is happening.

When setting up such a system, we can usually also configure automated responses to attacks - automatically blocking IP addresses or even running fully custom scripts - while also sending alerts to admins who can respond quickly to a potential breach.

For small teams without a dedicated security operations center, it can take a lot of effort to set up and tune security alerts and responses in a way that won't irritate team members with false positives and low-criticality alerts.

For this reason, weigh the effort carefully against the benefits of this setup. Furthermore, the logs generated by default are stored on the server itself. In the event of a compromise, those logs can no longer be trusted as accurate, since an attacker may modify them. To get the full benefit of a detection system, you must also set up a separate service to continuously ingest and store critical logs from your monitored servers, which is outside the scope of this article.

Who should use HIDS & alerting?

If you are committed to monitoring logs and tuning alerts beyond default rulesets, or are just starting out in building out a security operations center, this is a great place to start.

In addition, if you want a way to be notified quickly of a potential compromise, this is one of the best tools for the job.

Who should put off using HIDS?

If you are a small team that wants a set-and-forget system, this is probably just going to get in the way and generate noise.

To truly make good use of a HIDS, at least one person on the team should have regular monitoring and tuning of the system as part of their formal responsibilities. It isn't a full-time job for a single server, but it is also not trivial.

Getting up and running with OSSEC HIDS

For now, we will set up the OSSEC detection software with some basic rules and alerts, plus active defense to limit SSH brute force attempts. All logs will remain on the monitored server. From there, trying it out should tell you how much adjustment is needed.

Below, I go through a detailed installation and configuration walk-through, followed by setting up active response to block SSH login attempt brute forcing.

Installing & Configuring OSSEC

I chose OSSEC for this purpose because it is a well-known standard, open source, and pretty easy to get started with.

Installing from a repository makes more sense if you are setting up agents and a central server as part of a larger system like I discussed above, but today we just want to monitor a single isolated server. To do that, we will pull down the release and run the setup manually.

Let's start by downloading and installing OSSEC 3.1 (check for a more recent release if you prefer). You can see a full install log below - I chose to enable everything, but think about what makes sense for your installation.

The important parts are to choose a local install and to enable active response. You can also see that I added my own (fake) IP to the whitelist so that I cannot be banned.

$ wget https://github.com/ossec/ossec-hids/archive/3.1.0.tar.gz
$ tar -xvzf 3.1.0.tar.gz
$ cd ossec-hids-3.1.0/
$ ./install.sh

- What kind of installation do you want (server, agent, local, hybrid or help)? local

  - Local installation chosen.

2- Setting up the installation environment.

 - Choose where to install the OSSEC HIDS [/var/ossec]: 

    - Installation will be made at  /var/ossec .

    - The installation directory already exists. Should I delete it? (y/n) [y]: 

3- Configuring the OSSEC HIDS.

  3.1- Do you want e-mail notification? (y/n) [y]: 
   - What's your e-mail address? null.sweep@nullsweep.com

   - We found your SMTP server as: mail.nullsweep.com.
   - Do you want to use it? (y/n) [y]: 

   --- Using SMTP server:  mail.nullsweep.com.

  3.2- Do you want to run the integrity check daemon? (y/n) [y]: 

   - Running syscheck (integrity check daemon).

  3.3- Do you want to run the rootkit detection engine? (y/n) [y]: 

   - Running rootcheck (rootkit detection).

  3.4- Active response allows you to execute a specific 
       command based on the events received. For example,
       you can block an IP address or disable access for
       a specific user.  
       More information at:
       http://www.ossec.net/en/manual.html#active-response
       
   - Do you want to enable active response? (y/n) [y]: 

     - Active response enabled.
   
   - By default, we can enable the host-deny and the 
     firewall-drop responses. The first one will add
     a host to the /etc/hosts.deny and the second one
     will block the host on iptables (if linux) or on
     ipfilter (if Solaris, FreeBSD or NetBSD).
   - They can be used to stop SSHD brute force scans, 
     portscans and some other forms of attacks. You can 
     also add them to block on snort events, for example.

   - Do you want to enable the firewall-drop response? (y/n) [y]: 

     - firewall-drop enabled (local) for levels >= 6

   - Default white list for the active response:
      - 127.0.0.53

   - Do you want to add more IPs to the white list? (y/n)? [n]: y
   - IPs (space separated): 10.10.10.10

  3.6- Setting the configuration to analyze the following logs:
    -- /var/log/auth.log
    -- /var/log/syslog
    -- /var/log/dpkg.log
    -- /var/log/nginx/access.log (apache log)
    -- /var/log/nginx/error.log (apache log)

 - If you want to monitor any other file, just change 
   the ossec.conf and add a new localfile entry.
   Any questions about the configuration can be answered
   by visiting us online at http://www.ossec.net .
   
   
   --- Press ENTER to continue ---

Now we will want to customize the rules. I've noticed that OSSEC's default log locations are often not quite right. For instance, it was looking for my nginx log in /var/www (despite what the install output said), which does not exist on my server. Instead, I opened /var/ossec/etc/ossec.conf and added the following entry:

  <localfile>
    <log_format>apache</log_format>
    <location>/var/log/nginx/error.log</location>
  </localfile>

Look around in this file and update any other log files that need monitoring. A few other things to consider looking at and updating:

  • The directory list under the syscheck section. Consider adding your web directory, or parts of it, if it does not change frequently (see the sketch after this list).
  • Other important log files: web, database, system, root commands, mail server. Find the full list of supported formats in the documentation (and you can write your own rulesets if so inclined).
  • Email notifications - these can be noisy depending on your configuration, so try them and tune.
  • Active response settings. By default, OSSEC will temporarily block IPs that trigger a ruleset, such as multiple invalid SSH login attempts (see the sketch after this list).
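
To make the first and last bullets concrete, here is a rough sketch of the relevant ossec.conf entries. Treat the directory, level, and timeout values as assumptions to adapt rather than recommended defaults:

  <syscheck>
    <!-- Add a web root that rarely changes to the integrity check -->
    <directories check_all="yes">/var/www/html</directories>
  </syscheck>

  <active-response>
    <!-- Drop offending IPs at the firewall for 600 seconds -->
    <command>firewall-drop</command>
    <location>local</location>
    <level>6</level>
    <timeout>600</timeout>
  </active-response>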

Final Thoughts

There are many ways to secure a server. Automated security patching and a narrow set of firewall rules alone will take you a long way.

Setting up a WAF to protect potentially uncontrolled or vulnerable software can be the difference between being hacked by the next automated WordPress-scanning bot while your business team delays an upgrade, and staying safe until the upgrade can be performed.

And finally, setting up HIDS with alerting can give you an early warning system to know when something has gone wrong so you can respond quickly and get back to business.

I am sure I missed a lot - I would love to hear from you on what has worked well for you.