Table of Contents for
Practical UNIX and Internet Security, 3rd Edition


Practical UNIX and Internet Security, 3rd Edition, by Simson Garfinkel, Gene Spafford, and Alan Schwartz. Published by O'Reilly Media, Inc., 2003.
  1. Cover
  2. Practical Unix & Internet Security, 3rd Edition
  3. A Note Regarding Supplemental Files
  4. Preface
  5. Unix “Security”?
  6. Scope of This Book
  7. Which Unix System?
  8. Conventions Used in This Book
  9. Comments and Questions
  10. Acknowledgments
  11. A Note to Would-Be Attackers
  12. I. Computer Security Basics
  13. 1. Introduction: Some Fundamental Questions
  14. What Is Computer Security?
  15. What Is an Operating System?
  16. What Is a Deployment Environment?
  17. Summary
  18. 2. Unix History and Lineage
  19. History of Unix
  20. Security and Unix
  21. Role of This Book
  22. Summary
  23. 3. Policies and Guidelines
  24. Planning Your Security Needs
  25. Risk Assessment
  26. Cost-Benefit Analysis and Best Practices
  27. Policy
  28. Compliance Audits
  29. Outsourcing Options
  30. The Problem with Security Through Obscurity
  31. Summary
  32. II. Security Building Blocks
  33. 4. Users, Passwords, and Authentication
  34. Logging in with Usernames and Passwords
  35. The Care and Feeding of Passwords
  36. How Unix Implements Passwords
  37. Network Account and Authorization Systems
  38. Pluggable Authentication Modules (PAM)
  39. Summary
  40. 5. Users, Groups, and the Superuser
  41. Users and Groups
  42. The Superuser (root)
  43. The su Command: Changing Who You Claim to Be
  44. Restrictions on the Superuser
  45. Summary
  46. 6. Filesystems and Security
  47. Understanding Filesystems
  48. File Attributes and Permissions
  49. chmod: Changing a File’s Permissions
  50. The umask
  51. SUID and SGID
  52. Device Files
  53. Changing a File’s Owner or Group
  54. Summary
  55. 7. Cryptography Basics
  56. Understanding Cryptography
  57. Symmetric Key Algorithms
  58. Public Key Algorithms
  59. Message Digest Functions
  60. Summary
  61. 8. Physical Security for Servers
  62. Planning for the Forgotten Threats
  63. Protecting Computer Hardware
  64. Preventing Theft
  65. Protecting Your Data
  66. Story: A Failed Site Inspection
  67. Summary
  68. 9. Personnel Security
  69. Background Checks
  70. On the Job
  71. Departure
  72. Other People
  73. Summary
  74. III. Network and Internet Security
  75. 10. Modems and Dialup Security
  76. Modems: Theory of Operation
  77. Modems and Security
  78. Modems and Unix
  79. Additional Security for Modems
  80. Summary
  81. 11. TCP/IP Networks
  82. Networking
  83. IP: The Internet Protocol
  84. IP Security
  85. Summary
  86. 12. Securing TCP and UDP Services
  87. Understanding Unix Internet Servers and Services
  88. Controlling Access to Servers
  89. Primary Unix Network Services
  90. Managing Services Securely
  91. Putting It All Together: An Example
  92. Summary
  93. 13. Sun RPC
  94. Remote Procedure Call (RPC)
  95. Secure RPC (AUTH_DES)
  96. Summary
  97. 14. Network-Based Authentication Systems
  98. Sun’s Network Information Service (NIS)
  99. Sun’s NIS+
  100. Kerberos
  101. LDAP
  102. Other Network Authentication Systems
  103. Summary
  104. 15. Network Filesystems
  105. Understanding NFS
  106. Server-Side NFS Security
  107. Client-Side NFS Security
  108. Improving NFS Security
  109. Some Last Comments on NFS
  110. Understanding SMB
  111. Summary
  112. 16. Secure Programming Techniques
  113. One Bug Can Ruin Your Whole Day . . .
  114. Tips on Avoiding Security-Related Bugs
  115. Tips on Writing Network Programs
  116. Tips on Writing SUID/SGID Programs
  117. Using chroot( )
  118. Tips on Using Passwords
  119. Tips on Generating Random Numbers
  120. Summary
  121. IV. Secure Operations
  122. 17. Keeping Up to Date
  123. Software Management Systems
  124. Updating System Software
  125. Summary
  126. 18. Backups
  127. Why Make Backups?
  128. Backing Up System Files
  129. Software for Backups
  130. Summary
  131. 19. Defending Accounts
  132. Dangerous Accounts
  133. Monitoring File Format
  134. Restricting Logins
  135. Managing Dormant Accounts
  136. Protecting the root Account
  137. One-Time Passwords
  138. Administrative Techniques for Conventional Passwords
  139. Intrusion Detection Systems
  140. Summary
  141. 20. Integrity Management
  142. The Need for Integrity
  143. Protecting Integrity
  144. Detecting Changes After the Fact
  145. Integrity-Checking Tools
  146. Summary
  147. 21. Auditing, Logging, and Forensics
  148. Unix Log File Utilities
  149. Process Accounting: The acct/pacct File
  150. Program-Specific Log Files
  151. Designing a Site-Wide Log Policy
  152. Handwritten Logs
  153. Managing Log Files
  154. Unix Forensics
  155. Summary
  156. V. Handling Security Incidents
  157. 22. Discovering a Break-in
  158. Prelude
  159. Discovering an Intruder
  160. Cleaning Up After the Intruder
  161. Case Studies
  162. Summary
  163. 23. Protecting Against Programmed Threats
  164. Programmed Threats: Definitions
  165. Damage
  166. Authors
  167. Entry
  168. Protecting Yourself
  169. Preventing Attacks
  170. Summary
  171. 24. Denial of Service Attacks and Solutions
  172. Types of Attacks
  173. Destructive Attacks
  174. Overload Attacks
  175. Network Denial of Service Attacks
  176. Summary
  177. 25. Computer Crime
  178. Your Legal Options After a Break-in
  179. Criminal Hazards
  180. Criminal Subject Matter
  181. Summary
  182. 26. Who Do You Trust?
  183. Can You Trust Your Computer?
  184. Can You Trust Your Suppliers?
  185. Can You Trust People?
  186. Summary
  187. VI. Appendixes
  188. A. Unix Security Checklist
  189. Preface
  190. Chapter 1: Introduction: Some Fundamental Questions
  191. Chapter 2: Unix History and Lineage
  192. Chapter 3: Policies and Guidelines
  193. Chapter 4: Users, Passwords, and Authentication
  194. Chapter 5: Users, Groups, and the Superuser
  195. Chapter 6: Filesystems and Security
  196. Chapter 7: Cryptography Basics
  197. Chapter 8: Physical Security for Servers
  198. Chapter 9: Personnel Security
  199. Chapter 10: Modems and Dialup Security
  200. Chapter 11: TCP/IP Networks
  201. Chapter 12: Securing TCP and UDP Services
  202. Chapter 13: Sun RPC
  203. Chapter 14: Network-Based Authentication Systems
  204. Chapter 15: Network Filesystems
  205. Chapter 16: Secure Programming Techniques
  206. Chapter 17: Keeping Up to Date
  207. Chapter 18: Backups
  208. Chapter 19: Defending Accounts
  209. Chapter 20: Integrity Management
  210. Chapter 21: Auditing, Logging, and Forensics
  211. Chapter 22: Discovering a Break-In
  212. Chapter 23: Protecting Against Programmed Threats
  213. Chapter 24: Denial of Service Attacks and Solutions
  214. Chapter 25: Computer Crime
  215. Chapter 26: Who Do You Trust?
  216. Appendix A: Unix Security Checklist
  217. Appendix B: Unix Processes
  218. Appendixes C, D, and E: Paper Sources, Electronic Sources, and Organizations
  219. B. Unix Processes
  220. About Processes
  221. Signals
  222. Controlling and Examining Processes
  223. Starting Up Unix and Logging In
  224. C. Paper Sources
  225. Unix Security References
  226. Other Computer References
  227. D. Electronic Resources
  228. Mailing Lists
  229. Web Sites
  230. Usenet Groups
  231. Software Resources
  232. E. Organizations
  233. Professional Organizations
  234. U.S. Government Organizations
  235. Emergency Response Organizations
  236. Index
  237. About the Authors
  238. Colophon
  239. Copyright

One Bug Can Ruin Your Whole Day . . .

The Unix security model makes a tremendous investment in the infallibility of the superuser and in the reliability of software that runs with the privileges of the superuser. If the superuser account is compromised, then the system is left wide open—hence, our many admonitions in this book to protect the superuser account and restrict the number of people who must know the superuser password.

Unfortunately, even if you prevent users from logging into the superuser account, many Unix programs need to run with some sort of administrative privileges. Many of these programs are set up to run with superuser privileges—typically by having them run as SUID root programs, by having the programs launched when the computer starts up, or by having them started by other programs running with superuser privileges (the common manner in which network servers are started). A single bug in any of these complicated programs can compromise the safety of your entire system. Furthermore, the environment and trusted inputs to these programs also need to be protected to prevent unexpected (and unwanted!) behavior.[235] This characteristic is a security architecture design flaw, but it is basic to the design of Unix and is not likely to change.

The Lesson of the Internet Worm

One of the best examples of how a single line of code in a program can result in the compromise of thousands of machines dates back to the pre-dawn of the commercial Internet. The year was 1988, and a graduate student at Cornell University had discovered several significant security flaws in versions of Unix that were widely used on the Internet. Using his knowledge, the student created a program (known as a worm) that would find vulnerable computers, exploit one of these flaws, transfer a copy of itself to the compromised system, and then repeat the process. The program infected between 2,000 and 6,000 computers within hours of being released. While that does not seem like a lot of machines today, in 1988 it represented a substantial percentage of the academic and commercial mail servers on the Internet. The Internet was effectively shut down for two days following the worm’s release.

Although the worm used several techniques for compromising systems, the most effective attack in its arsenal was a buffer overflow attack directed against the Unix fingerd daemon.

The original fingerd program contained these lines of code:

                char line[512];
...
                line[0] = '\0';
                gets(line);

Because the gets( ) function does not check the length of the line read, a program that supplied more than 512 bytes of data would overrun the memory allocated to the line[] array and, ultimately, corrupt the program’s stack frame. The worm contained code that used the stack overflow to cause the fingerd program to execute a shell; because at the time it was standard practice to run fingerd as the superuser, this shell inherited superuser access to the server computer. fingerd didn’t need to run as superuser, but it was spawned as a root process during system startup and never switched to a different user ID.[236] Because fingerd’s standard input and standard output file descriptors were connected to the TCP socket, the remote process that caused the overflow was given complete, interactive control of the system.
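
That root shell was possible only because fingerd still held superuser privileges when the overflow occurred. As a point of comparison, here is a minimal sketch of the pattern footnote 236 alludes to: a daemon that gives up root once its privileged startup work is done. The account name "nobody" and the abbreviated error handling are illustrative assumptions, not code from the original fingerd.

                /* Sketch: give up root privileges after privileged startup
                   work is complete. Assumes an unprivileged account (here
                   "nobody") exists; error handling is abbreviated. */
                #include <stdio.h>
                #include <stdlib.h>
                #include <unistd.h>
                #include <pwd.h>
                #include <grp.h>

                static void drop_privileges(const char *user)
                {
                    struct passwd *pw = getpwnam(user);
                    if (pw == NULL) {
                        fprintf(stderr, "unknown user: %s\n", user);
                        exit(1);
                    }
                    /* Order matters: clear supplementary groups and set the
                       group ID while still root; give up the user ID last. */
                    if (setgroups(0, NULL) != 0 ||
                        setgid(pw->pw_gid) != 0 ||
                        setuid(pw->pw_uid) != 0) {
                        perror("unable to drop privileges");
                        exit(1);
                    }
                }

Had fingerd run this way, the shell spawned by the overflow would have held only the privileges of "nobody" rather than those of root.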

The fix for the fingerd program was simple: replace the gets( ) function with the fgets( ) function. Whereas gets( ) takes one parameter, the buffer, the fgets( ) function takes three arguments: the buffer, the size of the buffer, and the file handle from which to fetch the data:

                fgets(line, sizeof(line), stdin);

When the original fingerd program was written, it was common practice among many programmers to use gets( ) instead of fgets( )—probably because using gets( ) required typing fewer characters each time. Nevertheless, because of the way that the C programming language and the standard I/O library were designed, any program that used gets( ) to fill a buffer on the stack potentially had—and still has—this vulnerability.
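
For comparison, here is a sketch of how the same read is usually written today. fgets( ) never writes past the end of the buffer, but unlike gets( ) it keeps the trailing newline and silently truncates long input, so a careful program handles both cases. The buffer size here simply echoes the fingerd code:

                #include <stdio.h>
                #include <string.h>

                int main(void)
                {
                    char line[512];

                    if (fgets(line, sizeof(line), stdin) == NULL)
                        return 1;             /* end-of-file or read error */

                    size_t len = strlen(line);
                    if (len > 0 && line[len - 1] == '\n') {
                        line[len - 1] = '\0'; /* strip the newline */
                    } else {
                        /* Input exceeded the buffer: drain the rest of the
                           line rather than silently processing a truncated
                           one. */
                        int c;
                        while ((c = getchar()) != EOF && c != '\n')
                            ;
                    }
                    printf("read: %s\n", line);
                    return 0;
                }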

Although it seems like ancient history now, this story continues to illustrate many important lessons:

  • The worm demonstrated that a single flaw in a single innocuous Internet server could compromise the security of an entire system—and, indeed, an entire network.

  • Many of the administrators whose systems were compromised by the worm did not even know what the fingerd program did and had not made a conscious decision to have the service running. Likewise, many of the security flaws that have been discovered in the years since have been with software that was installed by default and not widely used.[237]

  • Although the worm did not use its superuser access to intentionally damage programs or data on computers that it penetrated, the program did result in significant losses. Many of those losses were the result of lost time, lost productivity, and the loss of confidence in the compromised systems. There is no such thing as a “harmless break-in.”

  • The worm showed that flaws in deployed software might lurk for years before being exploited by someone with the right tools and the wrong motives. Indeed, the flaw in the finger code had gone unnoticed for more than six years, from the time of the first Berkeley Unix network software release until the day that the worm ran loose. This illustrates a fundamental lesson: just because a hole has never been discovered in a program does not mean that no hole exists. The fact that a hole has not been exploited today does not guarantee that the hole will not be exploited tomorrow.

Interestingly enough, the fallible human component of secure programming is illustrated by the same example. Shortly after the problem with the gets( ) subroutine was exposed, the Berkeley programming group went through all of its code and eliminated every similar use of the gets( ) call in a network server. Most vendors did the same with their code. Several people, including Spafford in his paper analyzing the operations and effects of the worm, publicly warned that uses of other library calls that wrote to buffers without bounds checks also needed to be examined. These included calls to the sprintf( ) routine, and byte-copy routines such as strcpy( ). However, those admonitions were not heeded.

In late 1995, as we were finishing the second edition of this book, a new security vulnerability in several versions of Unix was widely publicized. It was based on buffer overruns in the syslog library routine. An attacker could carefully craft an argument to a network daemon such that, when an attempt was made to log it using syslog, the message overran the buffer and compromised the system in a manner hauntingly similar to the fingerd problem. After seven years, a close cousin to the fingerd bug was discovered. What underlying library calls contribute to the problem? The sprintf( ) library call does, and so do byte-copy routines such as strcpy( ).
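
The safer, length-checked counterparts have existed in the C library all along. A small sketch of the difference follows; the function and buffer names are illustrative, not taken from any of the vulnerable programs:

                #include <stdio.h>
                #include <string.h>

                void log_user(const char *name)   /* illustrative function */
                {
                    char msg[64];

                    /* Unsafe: both of these write past the end of msg if
                       name is long enough.
                       strcpy(msg, name);
                       sprintf(msg, "user=%s", name); */

                    /* Safe: output is limited to sizeof(msg) bytes,
                       including the terminating NUL. */
                    snprintf(msg, sizeof(msg), "user=%s", name);

                    /* strncpy( ) needs explicit NUL termination when the
                       source fills the buffer exactly. */
                    char copy[64];
                    strncpy(copy, name, sizeof(copy) - 1);
                    copy[sizeof(copy) - 1] = '\0';

                    puts(msg);
                    puts(copy);
                }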

In the summer of 2002, as we were working on the third edition of this book, not one but four separate overflow vulnerabilities were found in the popular OpenSSL security library, based on effectively the same vulnerability. In use on more than a million Internet servers, this SSL library is the basis of the SSL offering used by the Apache web server and all Unix SSL-wrapped mail services.

While many Unix security bugs are the result of poor programming tools and methods, even more regrettable is the failure to learn from old mistakes, and the failure to redesign the underlying operating system or programming languages so that this broad class of attacks will no longer be effective.[238]

An Empirical Study of the Reliability of Unix Utilities

In December 1990, the Communications of the ACM published an article by Miller, Fredrickson, and So entitled “An Empirical Study of the Reliability of Unix Utilities” (Volume 33, issue 12, pp. 32-44). The paper started almost as a joke: a researcher was logged into a Unix computer from home, and the programs he was running kept crashing because of line noise from a poor modem connection. Eventually, Barton Miller, a professor at the University of Wisconsin, decided to subject the Unix utility programs from a variety of different vendors to a selection of random inputs and monitor the results.[239]

What he found

The results were discouraging. Between 25% and 33% of the Unix utilities could be crashed or hung by supplying them with unexpected inputs—sometimes input that was as simple as an end-of-file in the middle of an input line. On at least one occasion, crashing a program tickled an operating system bug and caused the entire computer to crash. Many times, programs would freeze for no apparent reason.
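
To give a flavor of the technique (this is a sketch of the idea, not Miller's actual tool), a random-input generator for piping into a utility under test can be as small as the following; the program name fuzzgen is hypothetical:

                /* fuzzgen: write N random bytes to standard output, for
                   piping into a utility under test, e.g.
                       ./fuzzgen 10000 | ul                                 */
                #include <stdio.h>
                #include <stdlib.h>
                #include <time.h>

                int main(int argc, char *argv[])
                {
                    long n = (argc > 1) ? atol(argv[1]) : 1024;

                    srand((unsigned)time(NULL));
                    while (n-- > 0)
                        putchar(rand() % 256);
                    return 0;
                }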

In 1995 a new team headed by Miller repeated the experiment, this time running a program called Fuzz on nine different Unix platforms. The team also tested Unix network servers, and a variety of X Window System applications (both clients and servers). Here are some of the highlights:

  • According to the 1995 paper, vendors were still shipping a distressingly buggy set of programs: “...the failure rate of utilities on the commercial versions of Unix that we tested (from Sun, IBM, SGI, DEC, and NeXT) ranged from 15-43%.”

  • Unix vendors don’t seem to be overly concerned about bugs in their programs: “Many of the bugs discovered (approximately 40%) and reported in 1990 are still present in their exact form in 1995. The 1990 study was widely published in at least two languages. The code was made freely available via anonymous FTP. The exact random data streams used in our testing were made freely available via FTP. The identification of failures that we found were also made freely available via FTP; these include code fragments with file and line numbers for the errant code. According to our records, over 2000 copies of the...tools and bug identifications were fetched from our FTP sites...It is difficult to understand why a vendor would not partake of a free and easy source of reliability improvements.”

  • The two lowest failure rates in the study were the Free Software Foundation’s GNU utilities (failure rate of 7%) and the utilities included with the freely distributed Linux version of the Unix operating system (failure rate of 9%).[240] Interestingly enough, the Free Software Foundation has strict coding rules that forbid the use of fixed-length buffers. (Miller et al. failed to note that many of the Linux utilities were repackaged GNU utilities.)

There were a few bright points in the 1995 paper. Most notable was the fact that Miller’s group was unable to crash any Unix network server. The group was also unable to crash any X Window System server.

On the other hand, the group discovered that many X clients will readily crash when fed random streams of data. Others will lock up—and in the process, freeze the X server until the programs are terminated.

In 2000, Professor Miller and Justin Forrester ran the Fuzz tests a third time, although this time exclusively against Windows NT. Their testing revealed that they could crash or hang 45% of all programs expecting user input. When they tried sending random Win32 messages to applications (something any user can accomplish), they disrupted 100% of all applications!

Where’s the beef?

Many of the errors that Miller’s group discovered resulted from common programming mistakes with the C programming language: programmers wrote clumsy or confusing code that did the wrong things; programmers neglected to check for array boundary conditions; and programmers assumed that their char variables were of type unsigned, when in fact they were signed.
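
The last of these mistakes is subtle enough to deserve a sketch. On platforms where plain char is signed, any input byte of 0x80 or above becomes a negative value, and a getchar( ) result stored in a char can no longer be distinguished from EOF:

                #include <stdio.h>

                int main(void)
                {
                    int c;   /* must be int, not char, so that EOF remains
                                distinguishable from every input byte */

                    while ((c = getchar()) != EOF) {
                        /* Had c been declared char, a 0xFF byte in the
                           input would compare equal to EOF on signed-char
                           platforms, ending the loop mid-stream. */
                        putchar(c);
                    }
                    return 0;
                }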

While these errors can certainly cause programs to crash when they are fed random streams of data, these errors are exactly the kinds of problems that can be exploited by carefully crafted streams of data to achieve malicious results. Think back to the Internet worm: if tested by the Miller Fuzz program, the original fingerd program would have crashed. But when presented with the carefully crafted stream that was present in the worm, the program gave its attacker a root shell!

What is somewhat frightening about the study is that the tests employed by Miller’s group are among the least comprehensive known to testers: random, black-box testing. Different patterns of input could possibly cause more programs to fail. Inputs made under different environmental circumstances could also lead to abnormal behavior. Other testing methods could expose these problems whereas random testing, by its very nature, would not.

Miller’s group also found that use of several commercially available tools enabled them to discover errors and perform other tests, including discovery of buffer overruns and related memory errors. These tools were readily available; however, vendors were apparently not using them.[241]

Why don’t vendors care more about quality? Well, according to many of them, they do care, but quality does not sell. Writing good code and testing it carefully is not a quick or simple task. It requires extra effort and extra time. The extra time spent on ensuring quality will result in increased cost, and an increase in time-to-market. To date, few customers (possibly including you, gentle reader) have indicated a willingness to pay extra for better-quality software. Vendors have thus put their efforts into what customers are willing to buy, such as new features. Although we believe that most vendors could do a better job in this respect (and some could do a much better job), we must be fair and point the finger at the user population, too.

In some sense, any program you write might fare as well as vendor-supplied software. However, that isn’t good enough if the program is running in a sensitive role and could potentially be abused. Therefore, you must practice good coding habits, and pay special attention to common trouble spots.



[235] Many of these programs should not run as superuser, but instead should run under another user that has a somewhat more restricted set of privileges.

[236] This was common practice at the time. It predated the inetd and its mechanism of spawning servers with other user IDs. It also was a vulnerable paradigm that has led to countless other break-ins over the years, and it still poses a trap for the unwary.

[237] This is not restricted to Unix—it has been common in the Windows family of systems, too.

[238] Some efforts have been made to make Unix fundamentally more resistant to buffer overflow attacks. Modern BSD systems offer a nonexecutable stack—even if an attacker overflows a buffer into stack space, the code they insert cannot be executed. Solaris has made nonexecutable stack available since Version 2.6 (as a kernel option in /etc/system) and automatically enabled it for setuid files in Solaris 9. For Linux systems, the Openwall patches (http://www.openwall.com) can provide similar functionality. However, even with nonexecutable stacks, buffer overflows can be exploited to run arbitrary code, crash privileged code, and otherwise disrupt expected behavior.

[239] The Fuzz archive, including source code and additional papers—including the 1995 paper, “Fuzz Revisited: A Re-examination of the Reliability of UNIX Utilities and Sources,” by Barton Miller et al.—can be found at http://www.cs.wisc.edu/~bart/fuzz/fuzz.html.

[240] We don’t believe that 7% is an acceptable failure rate, either.

[241] In the last decade, several of the firms making these tools went out of business or switched to selling other products. The reason? There were insufficient sales of software-testing tools to remain viable!