Table of Contents for
Practical UNIX and Internet Security, 3rd Edition


Practical UNIX and Internet Security, 3rd Edition, by Simson Garfinkel, Gene Spafford, and Alan Schwartz. Published by O'Reilly Media, Inc., 2003
  1. Cover
  2. Practical Unix & Internet Security, 3rd Edition
  3. A Note Regarding Supplemental Files
  4. Preface
  5. Unix “Security”?
  6. Scope of This Book
  7. Which Unix System?
  8. Conventions Used in This Book
  9. Comments and Questions
  10. Acknowledgments
  11. A Note to Would-Be Attackers
  12. I. Computer Security Basics
  13. 1. Introduction: Some Fundamental Questions
  14. What Is Computer Security?
  15. What Is an Operating System?
  16. What Is a Deployment Environment?
  17. Summary
  18. 2. Unix History and Lineage
  19. History of Unix
  20. Security and Unix
  21. Role of This Book
  22. Summary
  23. 3. Policies and Guidelines
  24. Planning Your Security Needs
  25. Risk Assessment
  26. Cost-Benefit Analysis and Best Practices
  27. Policy
  28. Compliance Audits
  29. Outsourcing Options
  30. The Problem with Security Through Obscurity
  31. Summary
  32. II. Security Building Blocks
  33. 4. Users, Passwords, and Authentication
  34. Logging in with Usernames and Passwords
  35. The Care and Feeding of Passwords
  36. How Unix Implements Passwords
  37. Network Account and Authorization Systems
  38. Pluggable Authentication Modules (PAM)
  39. Summary
  40. 5. Users, Groups, and the Superuser
  41. Users and Groups
  42. The Superuser (root)
  43. The su Command: Changing Who You Claim to Be
  44. Restrictions on the Superuser
  45. Summary
  46. 6. Filesystems and Security
  47. Understanding Filesystems
  48. File Attributes and Permissions
  49. chmod: Changing a File’s Permissions
  50. The umask
  51. SUID and SGID
  52. Device Files
  53. Changing a File’s Owner or Group
  54. Summary
  55. 7. Cryptography Basics
  56. Understanding Cryptography
  57. Symmetric Key Algorithms
  58. Public Key Algorithms
  59. Message Digest Functions
  60. Summary
  61. 8. Physical Security for Servers
  62. Planning for the Forgotten Threats
  63. Protecting Computer Hardware
  64. Preventing Theft
  65. Protecting Your Data
  66. Story: A Failed Site Inspection
  67. Summary
  68. 9. Personnel Security
  69. Background Checks
  70. On the Job
  71. Departure
  72. Other People
  73. Summary
  74. III. Network and Internet Security
  75. 10. Modems and Dialup Security
  76. Modems: Theory of Operation
  77. Modems and Security
  78. Modems and Unix
  79. Additional Security for Modems
  80. Summary
  81. 11. TCP/IP Networks
  82. Networking
  83. IP: The Internet Protocol
  84. IP Security
  85. Summary
  86. 12. Securing TCP and UDP Services
  87. Understanding Unix Internet Servers and Services
  88. Controlling Access to Servers
  89. Primary Unix Network Services
  90. Managing Services Securely
  91. Putting It All Together: An Example
  92. Summary
  93. 13. Sun RPC
  94. Remote Procedure Call (RPC)
  95. Secure RPC (AUTH_DES)
  96. Summary
  97. 14. Network-Based Authentication Systems
  98. Sun’s Network Information Service (NIS)
  99. Sun’s NIS+
  100. Kerberos
  101. LDAP
  102. Other Network Authentication Systems
  103. Summary
  104. 15. Network Filesystems
  105. Understanding NFS
  106. Server-Side NFS Security
  107. Client-Side NFS Security
  108. Improving NFS Security
  109. Some Last Comments on NFS
  110. Understanding SMB
  111. Summary
  112. 16. Secure Programming Techniques
  113. One Bug Can Ruin Your Whole Day . . .
  114. Tips on Avoiding Security-Related Bugs
  115. Tips on Writing Network Programs
  116. Tips on Writing SUID/SGID Programs
  117. Using chroot( )
  118. Tips on Using Passwords
  119. Tips on Generating Random Numbers
  120. Summary
  121. IV. Secure Operations
  122. 17. Keeping Up to Date
  123. Software Management Systems
  124. Updating System Software
  125. Summary
  126. 18. Backups
  127. Why Make Backups?
  128. Backing Up System Files
  129. Software for Backups
  130. Summary
  131. 19. Defending Accounts
  132. Dangerous Accounts
  133. Monitoring File Format
  134. Restricting Logins
  135. Managing Dormant Accounts
  136. Protecting the root Account
  137. One-Time Passwords
  138. Administrative Techniques for Conventional Passwords
  139. Intrusion Detection Systems
  140. Summary
  141. 20. Integrity Management
  142. The Need for Integrity
  143. Protecting Integrity
  144. Detecting Changes After the Fact
  145. Integrity-Checking Tools
  146. Summary
  147. 21. Auditing, Logging, and Forensics
  148. Unix Log File Utilities
  149. Process Accounting: The acct/pacct File
  150. Program-Specific Log Files
  151. Designing a Site-Wide Log Policy
  152. Handwritten Logs
  153. Managing Log Files
  154. Unix Forensics
  155. Summary
  156. V. Handling Security Incidents
  157. 22. Discovering a Break-in
  158. Prelude
  159. Discovering an Intruder
  160. Cleaning Up After the Intruder
  161. Case Studies
  162. Summary
  163. 23. Protecting Against Programmed Threats
  164. Programmed Threats: Definitions
  165. Damage
  166. Authors
  167. Entry
  168. Protecting Yourself
  169. Preventing Attacks
  170. Summary
  171. 24. Denial of Service Attacks and Solutions
  172. Types of Attacks
  173. Destructive Attacks
  174. Overload Attacks
  175. Network Denial of Service Attacks
  176. Summary
  177. 25. Computer Crime
  178. Your Legal Options After a Break-in
  179. Criminal Hazards
  180. Criminal Subject Matter
  181. Summary
  182. 26. Who Do You Trust?
  183. Can You Trust Your Computer?
  184. Can You Trust Your Suppliers?
  185. Can You Trust People?
  186. Summary
  187. VI. Appendixes
  188. A. Unix Security Checklist
  189. Preface
  190. Chapter 1: Introduction: Some Fundamental Questions
  191. Chapter 2: Unix History and Lineage
  192. Chapter 3: Policies and Guidelines
  193. Chapter 4: Users, Passwords, and Authentication
  194. Chapter 5: Users, Groups, and the Superuser
  195. Chapter 6: Filesystems and Security
  196. Chapter 7: Cryptography Basics
  197. Chapter 8: Physical Security for Servers
  198. Chapter 9: Personnel Security
  199. Chapter 10: Modems and Dialup Security
  200. Chapter 11: TCP/IP Networks
  201. Chapter 12: Securing TCP and UDP Services
  202. Chapter 13: Sun RPC
  203. Chapter 14: Network-Based Authentication Systems
  204. Chapter 15: Network Filesystems
  205. Chapter 16: Secure Programming Techniques
  206. Chapter 17: Keeping Up to Date
  207. Chapter 18: Backups
  208. Chapter 19: Defending Accounts
  209. Chapter 20: Integrity Management
  210. Chapter 21: Auditing, Logging, and Forensics
  211. Chapter 22: Discovering a Break-In
  212. Chapter 23: Protecting Against Programmed Threats
  213. Chapter 24: Denial of Service Attacks and Solutions
  214. Chapter 25: Computer Crime
  215. Chapter 26: Who Do You Trust?
  216. Appendix A: Unix Security Checklist
  217. Appendix B: Unix Processes
  218. Appendixes C, D, and E: Paper Sources, Electronic Sources, and Organizations
  219. B. Unix Processes
  220. About Processes
  221. Signals
  222. Controlling and Examining Processes
  223. Starting Up Unix and Logging In
  224. C. Paper Sources
  225. Unix Security References
  226. Other Computer References
  227. D. Electronic Resources
  228. Mailing Lists
  229. Web Sites
  230. Usenet Groups
  231. Software Resources
  232. E. Organizations
  233. Professional Organizations
  234. U.S. Government Organizations
  235. Emergency Response Organizations
  236. Index
  237. About the Authors
  238. Colophon
  239. Copyright

Cost-Benefit Analysis and Best Practices

Time and money are finite. After you complete your risk assessment, you will have a long list of risks—far more than you can possibly address or defend against. You now need a way of ranking these risks to decide which you need to mitigate through technical means, which you will insure against, and which you will simply accept. Traditionally, the decision of which risks to address and which to accept was made using a cost-benefit analysis: a process of assigning a cost to each possible loss, determining the cost of defending against it, determining the probability that the loss will occur, and then determining whether the cost of defending against the risk outweighs the benefit. (See the Cost-Benefit Examples sidebar for some examples.)

Risk assessment and cost-benefit analyses generate a lot of numbers, making the process seem quite scientific and mathematical. In practice, however, putting together these numbers can be a time-consuming and expensive process, and the result is numbers that are frequently soft or inaccurate. That’s why the approach of defining best practices has become increasingly popular, as we’ll discuss in a later section.

The Cost of Loss

Determining the cost of loss can be very difficult. A simple cost calculation considers the cost of repairing or replacing a particular item. A more sophisticated cost calculation can consider the cost of out-of-service equipment, the cost of added training, the cost of additional procedures resulting from a loss, the cost to a company’s reputation, and even the cost to a company’s clients. Generally speaking, including more factors in your cost calculation will increase your effort, but will also increase the accuracy of your calculations.

For most purposes, you do not need to assign an exact value to each possible risk. Normally, assigning a cost range to each item is sufficient. For instance, the loss of a dozen blank diskettes may be classed as “under $500,” while a destructive fire in your computer room might be classed as “over $1,000,000.” Some items may actually fall into the category “irreparable/irreplaceable”; these could include loss of your entire accounts-due database or the death of a key employee.

You may want to assign these costs based on a finer scale of loss than simply “lost/not lost.” For instance, you might want to assign separate costs for each of the following categories (listed in no particular order; a sketch of one way to record such estimates follows the list):

  • Non-availability over a short term (< 7-10 days)

  • Non-availability over a medium term (1-2 weeks)

  • Non-availability over a long term (more than 2 weeks)

  • Permanent loss or destruction

  • Accidental partial loss or damage

  • Deliberate partial loss or damage

  • Unauthorized disclosure within the organization

  • Unauthorized disclosure to some outsiders

  • Unauthorized full disclosure to outsiders, competitors, and the press

  • Replacement or recovery cost
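
The following is a minimal sketch of one way to record such per-category estimates; the asset names, category labels, and dollar figures are illustrative assumptions only:

    # One way to record per-category cost ranges for each asset.
    # Asset names, categories, and dollar figures are illustrative assumptions;
    # None marks an open-ended bound or an "irreparable/irreplaceable" loss.
    cost_of_loss = {
        "blank diskettes": {
            "permanent loss or destruction": (0, 500),           # "under $500"
        },
        "computer room": {
            "permanent loss or destruction": (1_000_000, None),  # "over $1,000,000"
            "non-availability, short term":  (10_000, 50_000),
        },
        "accounts-due database": {
            "permanent loss or destruction": (None, None),       # irreplaceable
        },
    }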

The Probability of a Loss

After you have identified the threats, you need to estimate the likelihood of each occurring. It is usually easiest to express these estimates on a yearly basis.

Quantifying the threat of a risk is hard work. You can obtain some estimates from third parties, such as insurance companies. If the event happens on a regular basis, you can estimate it based on your records. Industry organizations may have collected statistics or published reports. You can also base your estimates on educated guesses extrapolated from past experience. For instance:

  • Your power company can provide an official estimate of the likelihood that your building would suffer a power outage during the next year. They may also be able to quantify the risk of an outage lasting a few seconds versus the risk of an outage lasting minutes or hours.

  • Your insurance carrier can provide you with actuarial data on the probability of death of key personnel based on age, health, smoker/nonsmoker status, weight, height, and other issues.

  • Your personnel records can be used to estimate the probability of key computing employees quitting.

  • Past experience and best guess can be used to estimate the probability of a serious bug being discovered in your software during the next year (100% for some software platforms).

If you expect something to happen more than once per year, then record the number of times that you expect it to happen. Thus, you may expect a serious earthquake only once every 100 years (for a per-year probability of 1% in your list), but you may expect three serious bugs in Microsoft’s Internet Information Server (IIS) to be discovered during the next month (for an adjusted probability of 3,600%).
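
One simple way to put such estimates on a common footing is to convert each one to an expected number of occurrences per year. A minimal sketch in Python, using the two figures from the examples above:

    # Normalize expected frequencies to a per-year rate, as in the examples above.
    def annual_rate(occurrences, period_years):
        """Expected number of occurrences per year."""
        return occurrences / period_years

    earthquake = annual_rate(1, 100)     # once per century -> 0.01, i.e., 1% per year
    iis_bugs   = annual_rate(3, 1 / 12)  # three per month  -> 36.0, i.e., 3,600% per year

    print(f"serious earthquake: {earthquake:,.0%} per year")   # 1% per year
    print(f"serious IIS bugs:   {iis_bugs:,.0%} per year")     # 3,600% per year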

The Cost of Prevention

Finally, you need to calculate the cost of preventing each kind of loss.

For instance, the cost to recover from a momentary power failure is probably only that of personnel “downtime” and the time necessary to reboot. However, the cost of prevention may be that of buying and installing a UPS system.

Costs need to be amortized over the expected lifetime of your approaches, as appropriate. Deriving these costs may reveal secondary costs and credits that should also be factored in. For instance, installing a better fire-suppression system may result in a yearly decrease in your fire insurance premiums and give you a tax benefit for capital depreciation. But spending money on a fire-suppression system means that the money is not available for other purposes, such as increased employee training or even investments.
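
As a rough sketch of how such an annualized cost might be computed (the purchase price, lifetime, upkeep, and insurance credit below are assumed figures, not real quotes):

    # Annualized cost of a prevention measure, amortized over its expected lifetime.
    # All dollar figures and the lifetime below are assumptions for illustration.
    purchase_price   = 15_000.0   # e.g., a fire-suppression system
    expected_life    = 10         # years
    yearly_upkeep    = 500.0      # inspections, recharges, and so on
    insurance_credit = 1_200.0    # yearly premium reduction it earns

    yearly_cost = purchase_price / expected_life + yearly_upkeep - insurance_credit
    print(f"annualized cost of prevention: ${yearly_cost:,.2f}")   # $800.00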

Adding Up the Numbers

At the conclusion of this exercise, you should have a multidimensional matrix consisting of assets, risks, and possible losses. For each loss, you should know its probability, the predicted loss, and the amount of money required to defend against the loss. If you are very precise, you will also have a probability that your defense will prove inadequate.

The process of determining whether each defense should be employed is now straightforward. Multiply each expected loss by the probability of its occurring as a result of each threat, sort the results in descending order, and compare each expected cost of occurrence to its cost of defense.

This comparison results in a prioritized list of things you should address. The list may be surprising. Your goal should be to avoid expensive, probable losses before worrying about less likely, low-damage threats. In many environments, fire and the loss of key personnel are both more likely to occur and more damaging than a break-in over the network. Surprisingly, however, it is break-ins that seem to occupy the attention and budget of most managers. This practice is simply not cost-effective, nor does it provide the highest levels of trust in your overall system.

To figure out what you should do, take the figures that you have gathered for avoidance and recovery to determine how best to address your high-priority items. The way to do this is to add the cost of recovery to the expected average loss, and multiply that by the probability of occurrence. Then, compare the final product with the yearly cost of avoidance. If the cost of avoidance is lower than the risk you are defending against, you would be advised to invest in the avoidance strategy if you have sufficient financial resources. If the cost of avoidance is higher than the risk that you are defending against, then consider doing nothing until after other threats have been dealt with.[21]
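
The following sketch ranks a few threats by annualized exposure, computed as probability times the sum of expected loss and recovery cost, and flags each threat whose yearly avoidance cost is lower than that exposure. Every threat, probability, and dollar amount is invented for illustration:

    # Rank invented threats by annualized exposure and compare each against the
    # yearly cost of avoiding it.  All figures are made up for illustration.
    threats = [
        # (name, yearly probability, expected loss, recovery cost, yearly avoidance cost)
        ("fire in computer room", 0.01, 1_000_000, 250_000,    800),
        ("loss of key sysadmin",  0.10,   100_000,  40_000,  5_000),
        ("network break-in",      0.30,    20_000,  10_000, 12_000),
    ]

    ranked = sorted(
        ((p * (loss + recovery), name, avoid)
         for name, p, loss, recovery, avoid in threats),
        reverse=True,
    )

    for exposure, name, avoidance in ranked:
        decision = "invest in avoidance" if avoidance < exposure else "defer for now"
        print(f"{name:22s}  exposure ${exposure:>9,.0f}  "
              f"avoidance ${avoidance:>7,.0f}/yr  -> {decision}")

With these invented figures, the network break-in, although it tends to draw the most attention, is the one threat whose yearly avoidance cost exceeds its annualized exposure.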

Best Practices

Risk analysis has a long and successful history in the fields of public safety and civil engineering. Consider the construction of a suspension bridge. It’s a relatively straightforward matter to determine how much stress cars, trucks, and severe weather will place on the bridge’s cables. Knowing the anticipated stress, an engineer can compute the chance that the bridge will collapse over the course of its life given certain design and construction choices. Given the bridge’s width, length, height, anticipated traffic, and other factors, an engineer can compute the projected destruction to life, property, and commuting patterns that would result from the bridge’s failure. All of this information can be used to calculate cost-effective design decisions and a reasonable maintenance schedule for the bridge’s owners to follow.

The application of risk analysis to the field of computer security has been less successful. Risk analysis depends on the ability to gauge the expected use of an asset, assess the likelihood of each risk to the asset, identify the factors that enable those risks, and calculate the potential impact of various choices—figures that are devilishly hard to pin down. How do you calculate the risk that an attacker will be able to obtain system administrator privileges on your web server? Does this risk increase over time, as new security vulnerabilities are discovered, or does it decrease over time, as the vulnerabilities are publicized and corrected? Does a well-maintained system become less secure or more secure over time? And how do you calculate the likely damages of a successful penetration? Few statistical, scientific studies have been performed on these questions. Many people think they know the answers to these questions, but research has shown that most people badly estimate risk based on personal experience.

Because of the difficulty inherent in risk analysis, another approach to securing computers, called best practices or due care, has emerged in recent years. This approach consists of a series of recommendations, procedures, and policies that are generally accepted within the community of security practitioners to give organizations a reasonable level of overall security and risk mitigation at a reasonable cost. Best practices can be thought of as “rules of thumb” for implementing sound security measures.

The best practices approach is not without its problems. The biggest problem is that there really is no one set of “best practices” that is applicable to all sites and users. The best practices for a site that manages financial information might have similarities to the best practices for a site that publishes a community newsletter, but the financial site would likely have additional security measures.

Following best practices does not ensure that your system will not suffer a security-related incident. Most best practices require that an organization’s security office monitor the Internet for news of new attacks and download patches from vendors when they are made available.[22] But even if you follow this regimen, an attacker might still be able to use a novel, unpublished attack to compromise your computer system. And if the person monitoring security announcements goes on vacation, attackers gain a head start while needed patches go uninstalled.

The very idea that tens of thousands of organizations could or even should implement the “best” techniques available to secure their computers is problematic. The “best” techniques available are simply not appropriate or cost-effective for all organizations. Many organizations that claim to be following best practices are actually adopting the minimum standards commonly used for securing systems. In practice, most best practices really aren’t.

We recommend a combination of risk analysis and best practices. Starting from a body of best practices, an educated designer should evaluate risks and trade-offs, and pick reasonable solutions for a particular configuration and its management. For instance, servers should be hosted on isolated machines and configured with an operating system and software that provide only the minimum required functionality. The operators should be vigilant for changes, keep up to date on patches, and prepare for the unexpected. Doing this well takes a solid understanding of how the system works, and of what happens when it doesn’t. This is the approach that we will explain in the chapters that follow.

Convincing Management

Security is not free. The more elaborate your security measures become, the more expensive they become. Systems that are more secure may also be more difficult to use, although this need not always be the case.[23] Security can also get in the way of “power users” who wish to perform difficult and sometimes dangerous operations without authentication or accountability. Some of these power users can be politically powerful within your organization.

After you have completed your risk assessment and cost-benefit analysis, you will need to convince your organization’s management of the need to act upon the information. Normally, you would formulate a policy that is then officially adopted. Frequently, this process is an uphill battle. Fortunately, it does not have to be.

The goal of risk assessment and cost-benefit analysis is to prioritize your actions and spending on security. If your business plan is such that you should not have an uninsured risk of more than $10,000 per year, you can use your risk analysis to determine what needs to be spent to achieve this goal. Your analysis can also be a guide as to what to do first, then second, and can identify which things you should relegate to later years.

Another benefit of risk assessment is that it helps to justify to management that you need additional resources for security. Most managers and directors know little about computers, but they do understand risk and cost-benefit analysis.[24] If you can show that your organization is currently facing an exposure to risk that could total $20,000,000 per year (add up all the expected losses plus recovery costs for what is currently in place), then this estimate might help convince management to fund some additional personnel and resources.

On the other hand, going to management with a vague “We’re really likely to see several break-ins on the Internet after the next CERT/CC announcement” is unlikely to produce anything other than mild concern (if that).




[21] Alternatively, you may wish to reconsider your costs.

[22] We are appalled at the number of patches issued for some systems, especially patches for problem classes that have long been known. You should strongly consider risk abatement strategies based on use of software that does not require frequent patches to fix security flaws.

[23] The converse is also not true. PC operating systems are not secure, even though some are difficult to use.

[24] In like manner, few computer security personnel seem to understand risk analysis techniques.