Chapter 11. Unix Application Servers

Application servers are an obvious target for an attacker. They are often a central repository for all manner of data, be it authentication credentials, intellectual property, or financial data. Being so data rich provides an obvious point for a financially motivated attacker to monetize his attack, and for a politically motivated attacker to steal, destroy, or corrupt data.

Of course in a system architected to have many tiers, application servers may not contain data; however, they will contain application code and serve as an ideal pivot point to other systems. They are typically connected to other systems, such as databases, which places a target on the application servers.

For these reasons, we should seek to ensure that servers are built both to perform their desired function to specification and to withstand attack.

It is always recommended that the infrastructure surrounding an application be configured to defend the server from attack. However, ensuring that a server is as well-defended as possible in its own right is also strongly advised. This way, in the event that any other defensive countermeasures fail or are bypassed—for example, by an attacker using lateral movement from within the infrastructure—the server is still defended as well as is sensibly possible.

The essentials for Windows-based platforms have already been described in Chapter 1, so this chapter will focus on Unix platforms such as Linux, FreeBSD, and Solaris.

The topics covered in this chapter, Unix patch management and operating system hardening principles, are discussed in a deliberately general fashion. Securing Unix application servers, as with most chapters in this book, could be a book unto itself. In order to remain agnostic to the flavor of Unix being used, the topics discussed are deliberately those that are common to most Unix flavors. If you wish to take further precautions and implement features that are common to only a specific few versions, it is certainly worth consulting guides that are specifically written for your operating system.

Keeping Up-to-Date

One of the most effective and yet overlooked aspects of managing Unix servers is patch management. A large number of vulnerabilities in Unix environments occur either as a result of bugs in software that is installed on a system or bugs in the system itself. Thus, many vulnerabilities in an environment can often be remediated purely by keeping a system patched and up-to-date.

Third-Party Software Updates

Unlike Microsoft environments, Unix-based environments typically use a system of package management to install the majority of third-party applications.

Package management and update tools vary depending not only on which flavor of Unix you are running, but also on which distribution you use. For example, Debian Linux and SUSE Linux use two different package management systems, and FreeBSD uses yet another.

Despite the differences, there are common themes surrounding the package management systems. Typically, each host will hold a repository of packages that are available to install on the system via local tools. The system administrator issues commands to the package management system to indicate that she wishes to install, update, or remove packages. The package management system will, depending on configuration, either download and compile, or download a binary of the desired package and its dependencies (libraries and other applications required to run the desired application), and install them on the system.

The various package management systems are so comprehensive in a modern distribution that for many environments it would be unusual to require anything further. Deploying software via package management, as opposed to downloading from elsewhere, is the preference unless there is a compelling reason to do otherwise. This greatly simplifies the issue of staying up-to-date and tracking dependencies.

The same package management system can be used to perform upgrades. As the repository of available packages is updated, new versions of already installed packages appear in the package database. These new version numbers can be compared against the installed version numbers, and a list of applications due for an upgrade can be determined automatically, typically via a single command line.

This ease of upgrade using package management means that unless a robust system of checking for and applying changes is in place for installed applications, the package management system should be used to provide an easy, automated method of updating all packages on Unix application servers. Not only does this remove the need to manually track each application installed on the application servers, along with all their associated dependencies, but it (typically) means that it has already been tested and confirmed to work on that distribution. Of course, individual quirks between systems mean that you cannot be sure that everything will always work smoothly, and so the testing process should remain. However, the testing process may be entered with a good degree of confidence.

To illustrate how this typically works, let’s take a look at the Debian Linux method of patching. First, we can update the repository via a single command; in the case of Debian, apt-get with the argument update:

$ sudo apt-get update
Get:1 http://security.debian.org wheezy/updates Release.gpg [1,554 B]
Get:2 http://security.debian.org wheezy/updates Release [102 kB]                   
Get:3 http://security.debian.org wheezy/updates/main amd64 Packages [347 kB]  
Get:4 http://ftp.us.debian.org wheezy Release.gpg [2,373 B]
Get:5 http://security.debian.org wheezy/updates/main Translation-en [202 kB]
Get:6 http://ftp.us.debian.org unstable Release.gpg [1,554 B]                         
Get:7 http://ftp.us.debian.org wheezy Release [191 kB]                                
Get:8 http://ftp.us.debian.org unstable Release [192 kB]       
Get:9 http://ftp.us.debian.org wheezy/main amd64 Packages [5,838 kB]
Get:10 http://ftp.us.debian.org wheezy/main Translation-en [3,846 kB]
Get:11 http://ftp.us.debian.org unstable/main amd64 Packages/DiffIndex [27.9 kB]
Get:12 http://ftp.us.debian.org unstable/non-free amd64 Packages/DiffIndex [23B]
Get:13 http://ftp.us.debian.org unstable/contrib amd64 Packages/DiffIndex [102B]
Get:14 http://ftp.us.debian.org unstable/contrib Translation-en/DiffIndex [78B]
Get:15 http://ftp.us.debian.org unstable/main Translation-en/DiffIndex [27.9 kB]
Get:16 http://ftp.us.debian.org unstable/non-free Translation-en/DiffIndex [93B]
Get:17 http://ftp.us.debian.org unstable/contrib Translation-en [48.7 kB]
Get:18 http://ftp.us.debian.org unstable/main Translation-en [5,367 kB]
Get:19 http://ftp.us.debian.org unstable/non-free Translation-en [81.3 kB]
Get:20 http://ftp.us.debian.org unstable/main amd64 Packages [7,079 kB]
Get:21 http://ftp.us.debian.org unstable/non-free amd64 Packages [79.2 kB]
Get:22 http://ftp.us.debian.org unstable/contrib amd64 Packages [53.5 kB]          
Fetched 23.5 MB in 13s (1,777 kB/s)                                      

Now that the repository is up-to-date we can use the apt-get command once again, this time with the argument upgrade, to perform upgrades on any packages that have newer versions available than the one that is currently installed:

$ sudo apt-get upgrade
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be upgraded:
  package-1 package-5
2 upgraded, 0 newly installed, 0 to remove and 256 not upgraded.
Need to get 4.0 MB of archives.
After this operation, 1,149 kB of additional disk space will be used.

Do you want to continue [Y/n]?

Here we can see that the system administrator is told that the example packages “package-1” and “package-5” will be upgraded. If she selects yes, the system will automatically download and install those packages.

Although this example uses Debian, the process is almost identical across most Unix systems and is covered in the base documentation for every system that we have seen.

Sometimes applications need to be installed outside of the package management system, either because they are not included in it, or because your organization has particular build and deployment requirements that call for a custom build. If this is the case, it is recommended that someone be tasked with monitoring both new releases of the application and its security mailing list. Subscribing to these lists should provide notification of any vulnerabilities that have been discovered, as vulnerabilities in these applications will not be addressed automatically by updates from the package management system.

Core Operating System Updates

Many, but not all, Unix systems have a delineation between the operating system and applications that are installed on it. As such, the method of keeping the operating system itself up-to-date will often differ from that of the applications. The method of upgrading will vary from operating system to operating system, but the upgrade methods fall into two broad buckets:

Binary update

Commercial operating systems particularly favor the method of applying a binary update; that is, distributing precompiled binary executables and libraries that are copied to disk, replacing the previous versions. Binary updates cannot make use of custom compiler options and must make assumptions about dependencies, but they require less work in general and are fast to install.

Update from source

Many open source operating systems favor updates from source, meaning that they are compiled locally from a copy of the source code and previous versions on disk are replaced by these binaries. Updating from source takes more time and is more complex, however the operating system can include custom compiler optimizations and patches.

There are many debates over which system is better, and each has its pros and cons. For the purposes of this book, however, we will assume that you are sticking with the default of your operating system as the majority of arguments center around topics unrelated to security.

Updates to the operating system are typically less frequent than updates to third-party software. They are also more disruptive, as they typically require a reboot: they often involve an update to the kernel or other subsystems that load only at startup, unlike application updates, which can be applied by restarting the appropriate daemon. Core operating system updates are still advisable, though, as vulnerabilities are often found in operating systems as well as in applications.

As with any other patch of this nature, it is advisable to have a rollback plan in place for any large update such as one for an operating system. In the case of virtualized infrastructure, this can be achieved simply by taking a snapshot of the filesystem prior to upgrade; thus a failed upgrade can be simply rolled back by reverting to the last snapshot. In physical infrastructure this can be more problematic, but most operating systems have mechanisms to cope with this issue, typically by storing a copy of the old binaries and replacing them if required.

Nevertheless, patches to the operating system are often required in order to close security gaps, so you should have a process defined to cope with this. As with applications, the effort to upgrade the operating system is lower the more up-to-date a system already is, so we recommend remaining as current as is reasonable, leaving only small increments to update at any one time.

Hardening a Unix Application Server

The next area to discuss is that of hardening the servers. This is the art of making the most secure configuration possible, without compromising the ability of the system to perform its primary business functions.

This can be a particularly difficult balancing act: restricting access for users and processes must be tempered with the fact that the server must still perform its primary function properly and system administrators must still be able to access the system to perform their duties.

Disable services

Every service (daemon) that runs is executing code on the server. If there is a vulnerability within that code, it is a potential weakness that can be leveraged by an attacker; it is also consuming resources in the form of RAM and CPU cycles.

Many operating systems ship with a number of services enabled by default, many of which you may not use. These services should be disabled to reduce the attack surface on your servers. Of course you should not just start disabling services with reckless abandon—before disabling a service, it is prudent to ascertain exactly what it does and determine if you require it.

There are a number of ways to ascertain which services are running on a Unix system, the easiest of which is to use the ps command to list running services. Exact argument syntax can vary between versions, but the ps ax syntax works on most systems and will list all currently running processes. For minor variations in syntax on your operating system, check the manual page for ps using the command man ps.
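As a quick illustration (exact output will vary from system to system), ps can be combined with standard text tools to produce a deduplicated list of the commands currently running. The -e and -o flags used here are specified by POSIX and so are reasonably portable:

```shell
# List every process, print only the command name, and deduplicate.
# Review anything unfamiliar in this list before deciding whether
# the service behind it should be disabled.
ps -e -o comm= | sort -u
```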

Services should be disabled in startup scripts (rc or init, depending on operating system) unless your system uses systemd, in which case you can refer to the following discussion on systemd. Using the kill command will merely stop the currently running service, which will start once more during a reboot. On Linux the commands are typically one of: rc-update, update-rc.d, or service. On BSD-based systems, you typically edit the file /etc/rc.conf. For example, on several flavors of Linux the service command can be used to stop the sshd service:

service sshd stop

To start sshd (one time):

service sshd start

And to disable it from starting after a reboot:

update-rc.d -f sshd remove

Some Linux distributions have moved toward using systemd as opposed to SysV startup scripts to manage services. systemd can also be used to perform other administrative functions with regard to services, such as reloading configuration and displaying dependency information. To stop sshd (one time):

systemctl stop sshd

To enable sshd upon every reboot:

systemctl enable sshd

And to disable sshd upon further reboots:

systemctl disable sshd

Older Unix operating systems may use inetd or xinetd to manage services rather than rc or init scripts. (x)inetd preserves system resources by being one of the only services running, starting the other services on demand rather than leaving them all running all of the time. If this is the case, services can be disabled by editing the inetd.conf or xinetd.conf file, typically located in the /etc/ directory.
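As a sketch, disabling a service in inetd.conf is usually just a matter of commenting out its line and restarting inetd. The paths and service entries below are illustrative only; consult your system's inetd.conf manual page for the exact format:

```
# Example /etc/inetd.conf lines (paths vary by system). The telnet
# service is disabled by commenting it out; ftp remains enabled.
ftp     stream  tcp  nowait  root  /usr/libexec/ftpd    ftpd -l
#telnet stream  tcp  nowait  root  /usr/libexec/telnetd telnetd
```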

File permissions

Most Unix filesystems have a concept of permissions—that is, designations of which users and groups can read, write, or execute each file. Most also have the SETUID (set user ID upon execution) permission, which allows a nonroot user to execute a file with the permissions of the owning user, typically root. This is used for commands whose normal operation requires root privileges even when run by a nonroot user, such as su or sudo.

Typically, an operating system will set adequate file permissions on the system files during installation. However, as you create files and directories, permissions will be created according to your umask settings. As a general rule, the umask on a system should only be made more restrictive than the default. Cases where a less restrictive umask is required should be infrequent enough that chmod can be used to resolve the issue. Your umask settings can be viewed and edited using the umask command. See man umask1 for further detail on this topic.
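For example, a umask of 077 removes all group and other permissions from newly created files. The following is a sketch to run in a scratch directory; the change applies only to the current shell session:

```shell
# Show the current umask; the octal digits are the permission bits
# that will be *removed* when new files are created.
umask

# Make the umask more restrictive for this shell session only.
umask 077

# Any file created now is accessible only by its owner.
touch example.txt
ls -l example.txt    # permissions show as -rw-------
```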

Incorrect file permissions can leave files readable by users other than those for whom they are intended. Many people wrongly believe that because a user has to be authenticated to log in to a host, leaving world- or group-readable files on disk is not a problem. However, they do not consider that services also run using their own user accounts.

Take, for example, a system running a web server such as Apache, nginx, or lighttpd; these web servers typically run under a user ID of their own, such as “www-data.” If files you create are readable by “www-data,” then the web server, if configured to do so (accidentally or otherwise), has permission to read that file and potentially serve it to a browser. By restricting filesystem-level access, we can prevent this from happening: even if the web server is configured to serve the file, it will no longer have permission to open it.

As an example, in the following, the file test can be read and written to by the owner _www, it can be read and executed by the group staff, and can be read by anybody. This is denoted by the rw-, r-x, and r-- permissions in the directory listing:

$ ls -al test
-rw-r-xr--  1 _www  staff  1228 16 Apr 05:22 test

In a Unix directory listing, the first field consists of 10 characters: the first indicates the file type, and the remaining 9 correspond to read, write, and execute permissions for owner, group, and other (everyone). A hyphen (-) indicates that a permission is not set; a letter indicates that it is set. Other special characters appear less often; for example, an s or S in the execute position signifies that the SETUID flag has been set.

If we wish to ensure that other can no longer see this file, then we can modify the permissions. We can alter them using the chmod command (o= sets the other permissions to nothing):

$ sudo chmod o= test
$ ls -la test
-rw-r-x---  1 _www  staff  1228 16 Apr 05:22 test

Note that the “r” representing the read permission for other is now a “-”2.
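The same permissions can also be expressed in octal notation, where each digit is the sum of read (4), write (2), and execute (1) for owner, group, and other, respectively. The following is equivalent to the listing above:

```shell
# Create a file and set rw- for owner (4+2=6), r-x for group (4+1=5),
# and nothing for other (0).
touch test
chmod 650 test
ls -l test    # permissions show as -rw-r-x---
```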

Host-based firewalls

Many people consider firewalls to be appliances located at strategic points around a network, permitting and denying various types of connection. While this is true, most Unix operating systems also have local firewall software built in so that hosts can firewall themselves. Enabling and configuring this functionality not only offers the server some additional protection should the network firewall fail to operate as expected, but also offers protection against hosts on the local LAN that can communicate with the server directly, rather than via a network appliance firewall.

Typical examples of firewall software in Unix systems are IPTables/NetFilter, ipchains, pf, ipf, and ipfw, the configuration and use of which will vary from platform to platform. The end goal, however, is the same: to create a ruleset that permits all traffic required to successfully complete the server’s tasks and any related administration of the server—and nothing else.

One point to note is that using a stateful firewall on a host will consume RAM and CPU to keep track of sessions and maintain a TCP state table. This is because a stateful firewall does not merely permit and deny packets based on IP address and port number alone, but also tracks features such as TCP handshake status in a state table. On a busy server, a simple packet filter (i.e., permitting and denying based on IP addresses, port numbers, protocols, etc., on a packet-by-packet basis) will consume far fewer resources while still providing an increased level of protection against unwanted connections.
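As a hedged sketch using IPTables (the syntax differs for pf, ipfw, and the other firewall packages mentioned above), a minimal default-deny ruleset for a web server might look like the following. The management network address is an example only, and these commands must be run as root:

```
# Default-deny inbound; allow loopback, established sessions,
# SSH from a management network, and web traffic.
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp -s 192.0.2.0/24 --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
```

Note that the ESTABLISHED,RELATED rule is what makes this ruleset stateful; omitting it and permitting traffic purely by address and port would give the simpler packet-filter behavior described above.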

Managing file integrity

File integrity management tools monitor key files on the filesystem and alert the administrator in the event that they change. These tools can be used to ensure that key system files are not tampered with (as is the case with a rootkit), that files are not added to directories without the administrator’s permission, and that configuration files are not modified, as can be the case with backdoors in web applications, for example.

There are both commercial tools and free/open source tools available through your preferred package management tool. Examples of open source tools that perform file integrity monitoring include Samhain and OSSEC. If you are looking to spend money to obtain extra features, such as integration with your existing management systems, there are also a number of commercial tools available.

Alternatively, if you cannot for whatever reason install file integrity monitoring tools, many configuration management tools can be configured to report on modified configuration files on the filesystem as part of their normal operation. This is not their primary function and does not offer the same level of coverage, and so is not as robust as a dedicated tool. However, if you are in a situation where you cannot deploy security tools but do have configuration management in place, this may be of some use.
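The underlying idea can be illustrated with nothing more than a checksum utility: record a baseline of hashes for key files, then compare later runs against it. The paths below are examples only, and a real deployment would also need to protect the baseline itself from tampering:

```shell
# Record a baseline of checksums for files we want to watch.
sha256sum /etc/passwd /etc/hosts > /root/baseline.sha256

# Later: verify against the baseline. sha256sum -c prints FAILED and
# exits nonzero for any file that has changed since the baseline.
sha256sum -c /root/baseline.sha256
```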

Separate disk partitions

Disk partitions within Unix can be used not only to distribute the filesystem across several physical or logical partitions, but also to restrict certain types of action depending on which partition they are taking place on. 

Options can be placed on each mount point in /etc/fstab.

Note

When editing /etc/fstab to make changes, the changes will not take effect until the partition is either remounted using the umount and/or mount commands or following a reboot.

There are some minor differences between different flavors of Unix with regards to the options, and so consulting the system manual page—using man mount—before using options is recommended.

Some of the most useful and common mount point options, from a security perspective, are:

nodev

Do not interpret character or block special devices on the filesystem. If no device files are expected on a partition, this option should be used; typically only the /dev/ mount point would contain such devices.

nosuid

Do not allow setuid execution. Certain core system functions, such as su and sudo, require setuid execution, so this option should be used carefully. Attackers can use setuid binaries as a method of backdooring a system to quickly obtain root privileges from a standard user account. Setuid execution is probably not required outside of the system-installed bin and sbin directories. You can check for the location of setuid binaries using the following command:

$ sudo find / -perm -4000

Binaries that are specifically setuid root, as opposed to any setuid binary, can be located using the following variant:

$ sudo find / -user root -perm -4000

ro

Mount the filesystem read-only. If data does not need to be written or updated, this option may be used to prevent modification. This removes the ability for an attacker to modify files stored in this location such as config files, static website content, and the like.

noexec

Prevents execution of any type from that particular mount point. This can be set on mount points used exclusively for data and document storage. It prevents an attacker from using this as a location to execute tools he may load onto a system, and it can defeat certain classes of exploit.
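Putting these options together, illustrative /etc/fstab entries might look like the following. The device names, filesystem types, and mount points are examples only; consult man fstab for the format used by your system:

```
# device     mount point  type  options                       dump pass
/dev/sda2    /tmp         ext4  defaults,nodev,nosuid,noexec  0    2
/dev/sda3    /var/www     ext4  defaults,nodev,nosuid,ro      0    2
```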

chroot

chroot alters the apparent root directory of a running process and any child processes. The most important aspect of this is that a process inside the chroot jail cannot access files outside of its new apparent root directory, which is particularly useful for ensuring that a poorly configured or exploited service cannot access anything more than it needs to.

There are two ways in which chroot can be initiated:

  • The process in question can use the chroot system call and chroot itself voluntarily. Typically, these processes will contain chroot options within their configuration files, most notably allowing the user to set the new apparent root directory.
  • The chroot wrapper can be used on the command line when executing the command. Typically this would look something like:
sudo chroot /chroot/dir/ /chroot/dir/bin/binary -args

For details of specific chroot syntax for your flavor of Unix, consult man chroot.3

It should be noted, however, that there is a common misconception that chroot offers some security features that it simply does not. Chroot jails are not impossible to break out of, especially if the process within the chroot jail is running with root privileges. Typically processes that are specifically designed to use chroot will drop their root privileges as soon as possible so as to mitigate this risk. Additionally, chroot does not offer the process any protection from privileged users outside of the chroot on the same system.

Neither of these is a reason to abandon chroot, but they should be considered when designing use cases: chroot is not an impenetrable fortress, but rather a method of further restricting filesystem access.

Mandatory Access Controls

There are various flavors of Unix that support Mandatory Access Controls (MAC), some of the most well-known being SELinux, TrustedBSD, and the grsecurity patches. The method of configuration, granularity, and features of Mandatory Access Controls vary across systems; however, the high-level concepts remain consistent.

MAC allows policies to be enforced that are far more granular in nature than those offered by traditional Unix filesystem permissions. The ability to read, write, and execute files is set in policies with more fine-grained controls, allowing a user to be granted or denied access on a per-file basis rather than all files within the group to which they belong, for example.

Using MAC with a defined policy allows the owner of a system to enforce the principles of least privilege—that is, only permitting access to those files and functions that users require to perform their job and nothing more. This limits their access and reduces the chances of accidental or deliberate abuse from that account.

MAC can also be used with enforcement disabled; that is, operating in a mode in which violations of policy are not blocked, but are logged. This can be used in order to create a more granular level of logging for user activity. The reasons for this will be discussed later in Chapter 20.

Conclusion

Keeping Unix application servers secure does not necessarily require the purchase of additional infrastructure or software. Unix operating systems as a whole are designed to have a large number of useful tools available to the user out of the box, with package management systems to provide supplemental open source tools.

A large number of vulnerabilities can be mitigated simply by keeping patches up-to-date and ensuring that a sensible configuration is used.

1 Type man umask at the command prompt of almost any Unix system.

2 For further reading on this topic, consult your system manual for the commands chmod, chgrp, chown, and ls.

3 Type man chroot at the command prompt of almost any Unix system.