Table of Contents for
Site Reliability Engineering

Site Reliability Engineering, edited by Betsy Beyer, Chris Jones, Jennifer Petoff, and Niall Richard Murphy. Published by O'Reilly Media, Inc., 2016.

  Cover
  Praise for Site Reliability Engineering
  Site Reliability Engineering
  Foreword
  Preface
  I. Introduction
    1. Introduction
    2. The Production Environment at Google, from the Viewpoint of an SRE
  II. Principles
    3. Embracing Risk
    4. Service Level Objectives
    5. Eliminating Toil
    6. Monitoring Distributed Systems
    7. The Evolution of Automation at Google
    8. Release Engineering
    9. Simplicity
  III. Practices
    10. Practical Alerting from Time-Series Data
    11. Being On-Call
    12. Effective Troubleshooting
    13. Emergency Response
    14. Managing Incidents
    15. Postmortem Culture: Learning from Failure
    16. Tracking Outages
    17. Testing for Reliability
    18. Software Engineering in SRE
    19. Load Balancing at the Frontend
    20. Load Balancing in the Datacenter
    21. Handling Overload
    22. Addressing Cascading Failures
    23. Managing Critical State: Distributed Consensus for Reliability
    24. Distributed Periodic Scheduling with Cron
    25. Data Processing Pipelines
    26. Data Integrity: What You Read Is What You Wrote
    27. Reliable Product Launches at Scale
  IV. Management
    28. Accelerating SREs to On-Call and Beyond
    29. Dealing with Interrupts
    30. Embedding an SRE to Recover from Operational Overload
    31. Communication and Collaboration in SRE
    32. The Evolving SRE Engagement Model
  V. Conclusions
    33. Lessons Learned from Other Industries
    34. Conclusion
  A. Availability Table
  B. A Collection of Best Practices for Production Services
  C. Example Incident State Document
  D. Example Postmortem
  E. Launch Coordination Checklist
  F. Example Production Meeting Minutes
  Bibliography
  Index
  About the Authors
  Colophon

Appendix E. Launch Coordination Checklist

This is Google’s original Launch Coordination Checklist, circa 2005, slightly abridged for brevity:

Architecture

  • Architecture sketch, types of servers, types of requests from clients

  • Programmatic client requests

Machines and datacenters

  • Machines and bandwidth, datacenters, N+2 redundancy, network QoS

  • New domain names, DNS load balancing

Volume estimates, capacity, and performance

  • HTTP traffic and bandwidth estimates, launch “spike,” traffic mix, 6 months out

  • Load test, end-to-end test, capacity per datacenter at max latency (see the capacity sketch after this list)

  • Impact on other services we care most about

  • Storage capacity
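
The "N+2 redundancy" item earlier and the "capacity per datacenter at max latency" item above come down to simple arithmetic once the measurements exist. A minimal sketch, assuming hypothetical traffic and capacity numbers; real values would come from the load tests and traffic estimates this checklist asks for.

    import math

    # Hypothetical inputs; in practice these come from load tests and the
    # traffic estimates above (launch spike, traffic mix, 6 months out).
    peak_qps = 120_000           # estimated peak requests per second
    qps_per_datacenter = 25_000  # measured capacity per datacenter at max acceptable latency

    # N datacenters are enough to serve peak load; provisioning N+2 lets the
    # service survive one planned maintenance plus one unplanned failure.
    n = math.ceil(peak_qps / qps_per_datacenter)
    provisioned = n + 2
    print(f"N = {n} datacenters to serve peak; provision N+2 = {provisioned}")

    # Sanity check: with any two datacenters down, capacity still covers peak.
    assert (provisioned - 2) * qps_per_datacenter >= peak_qps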

System reliability and failover

  • What happens when:

    • Machine dies, rack fails, or cluster goes offline

    • Network fails between two datacenters

  • For each type of server that talks to other servers (its backends):

    • How to detect when backends die, and what to do when they die

    • How to terminate or restart without affecting clients or users

    • Load balancing, rate-limiting, timeout, retry and error handling behavior (see the retry sketch after this list)

  • Data backup/restore, disaster recovery
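
For the load balancing, timeout, and retry item above, a minimal sketch of one client-side pattern, assuming a hypothetical call_backend RPC helper and hypothetical backend addresses: bound every call with a deadline, retry with capped exponential backoff and jitter, and spread attempts across backends rather than hammering a dead one.

    import random
    import time

    BACKENDS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]  # hypothetical

    def call_backend(address, request, timeout_s):
        """Placeholder for the real RPC call; assumed to raise on error or timeout."""
        raise NotImplementedError

    def call_with_retries(request, attempts=3, timeout_s=0.5, base_backoff_s=0.1):
        """Try backends in random order with capped exponential backoff and jitter."""
        last_error = None
        for attempt in range(attempts):
            backend = random.choice(BACKENDS)  # trivial stand-in for real load balancing
            try:
                return call_backend(backend, request, timeout_s)
            except Exception as error:  # covers "backend died" and "backend timed out"
                last_error = error
                # Jittered backoff keeps retries from synchronizing into a
                # thundering herd against the surviving backends.
                backoff = min(base_backoff_s * (2 ** attempt), 2.0)
                time.sleep(backoff * random.uniform(0.5, 1.0))
        raise last_error  # error handling: surface the failure instead of hanging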

Monitoring and server management

  • Monitoring internal state, monitoring end-to-end behavior, managing alerts

  • Monitoring the monitoring

  • Financially important alerts and logs

  • Tips for running servers within the cluster environment

  • Don’t crash mail servers by sending yourself email alerts in your own server code
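
One hedged sketch of how to honor the last item, assuming a hypothetical send_email helper: deduplicate alerts by key and cap how many emails a server will send per window, so a crash loop cannot flood the mail system.

    import time
    from collections import defaultdict

    WINDOW_S = 3600     # hypothetical policy: per alert key,
    MAX_PER_WINDOW = 5  # at most 5 emails per hour
    _recent_sends = defaultdict(list)  # alert key -> timestamps of recent emails

    def send_email(recipient, subject, body):
        """Placeholder for the real mail call."""
        raise NotImplementedError

    def alert_by_email(recipient, key, body):
        """Send an alert email unless this alert key has hit its rate limit."""
        now = time.time()
        _recent_sends[key] = [t for t in _recent_sends[key] if now - t < WINDOW_S]
        if len(_recent_sends[key]) >= MAX_PER_WINDOW:
            return False  # suppressed; rely on the monitoring system, not more email
        _recent_sends[key].append(now)
        send_email(recipient, subject=f"[alert] {key}", body=body)
        return True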

Security

  • Security design review, security code audit, spam risk, authentication, SSL

  • Prelaunch visibility/access control, various types of blacklists

Automation and manual tasks

  • Methods and change control to update servers, data, and configs

  • Release process, repeatable builds, canaries under live traffic, staged rollouts
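
A minimal sketch of the canary and staged-rollout item above; the stage fractions, soak time, and the deploy, health-check, and rollback helpers are all hypothetical placeholders for whatever the real release process provides.

    import time

    STAGES = [0.01, 0.10, 0.50, 1.00]  # hypothetical fraction of servers per stage

    def deploy_to_fraction(release, fraction):
        """Placeholder: push `release` to `fraction` of the fleet."""
        raise NotImplementedError

    def release_is_healthy(release):
        """Placeholder: compare canary error rates and latency against baseline."""
        raise NotImplementedError

    def roll_back(release):
        """Placeholder: revert to the previous known-good release."""
        raise NotImplementedError

    def staged_rollout(release, soak_s=1800):
        """Widen the rollout only while the canary stays healthy under live traffic."""
        for fraction in STAGES:
            deploy_to_fraction(release, fraction)
            time.sleep(soak_s)  # let the new code soak under real load
            if not release_is_healthy(release):
                roll_back(release)
                raise RuntimeError(f"{release} failed at {fraction:.0%}; rolled back")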

Growth issues

  • Spare capacity, 10x growth, growth alerts

  • Scalability bottlenecks, linear scaling, scaling with hardware, changes needed

  • Caching, data sharding/resharding
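
The sharding/resharding item is easier to reason about with a concrete scheme in mind. A hedged sketch using consistent hashing (one common choice, not necessarily what a given service uses): adding a shard moves only about 1/N of the keys instead of reshuffling all of them.

    import bisect
    import hashlib

    class ConsistentHashRing:
        """Map keys to shards so that adding or removing a shard moves few keys."""

        def __init__(self, shards, replicas=100):
            self._ring = sorted(
                (self._hash(f"{shard}#{i}"), shard)
                for shard in shards
                for i in range(replicas)
            )

        @staticmethod
        def _hash(value):
            return int(hashlib.md5(value.encode()).hexdigest(), 16)

        def shard_for(self, key):
            index = bisect.bisect(self._ring, (self._hash(key), "")) % len(self._ring)
            return self._ring[index][1]

    # Resharding example: most keys stay put when a fourth shard is added.
    old_ring = ConsistentHashRing(["shard-0", "shard-1", "shard-2"])
    new_ring = ConsistentHashRing(["shard-0", "shard-1", "shard-2", "shard-3"])
    keys = [f"user:{i}" for i in range(10_000)]
    moved = sum(old_ring.shard_for(k) != new_ring.shard_for(k) for k in keys)
    print(f"{moved / len(keys):.0%} of keys moved")  # roughly 25%, not 100%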

External dependencies

  • Third-party systems, monitoring, networking, traffic volume, launch spikes

  • Graceful degradation, how to avoid accidentally overrunning third-party services (see the rate-limiting sketch after this list)

  • Playing nice with syndicated partners, mail systems, services within Google
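
For the graceful-degradation item above, a minimal sketch of one approach, with a hypothetical quota, partner call, and cached fallback: a client-side token bucket caps the rate we send to the third party, and callers degrade to a fallback instead of overrunning it once the budget is spent.

    import time

    class TokenBucket:
        """Self-imposed cap on outbound calls to a third-party service."""

        def __init__(self, rate_per_s, burst):
            self.rate = rate_per_s
            self.capacity = burst
            self.tokens = float(burst)
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    # Hypothetical: the partner can absorb about 50 requests per second from us.
    partner_quota = TokenBucket(rate_per_s=50, burst=50)

    def fetch_partner_data(request, call_partner, cached_fallback):
        """Call the partner only within our quota; otherwise serve a degraded result."""
        if partner_quota.allow():
            return call_partner(request)
        return cached_fallback(request)  # degraded, but we keep serving and stop piling on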

Schedule and rollout planning

  • Hard deadlines, external events, Mondays or Fridays

  • Standard operating procedures for this service, for other services