We have now encapsulated our application’s component services into portable, self-contained Docker images, which can be run as containers. In doing so, we have improved our deployment process by making it:
- Portable: The Docker images can be distributed just like any other file, and they can be run in any environment where Docker is available.
- Predictable/Consistent: The image is self-contained and pre-built, which means it will run in the same way wherever it is deployed.
- Automated: All instructions are specified inside a Dockerfile, meaning our computer can run them like code.
However, despite containerizing our application, we are still running the `docker run` commands manually. Furthermore, we are running a single instance of each container on a single server: if that server fails, our application goes down. And if we need to update our application, there will still be downtime (although a shorter one than before, since deployment is now automated).
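To make the problem concrete, the manual workflow looks something like this (the image names, container names, and ports here are placeholders for illustration, not the exact ones used earlier):

```shell
# Manually start one instance of each service on this one server.
# If this server dies, both containers die with it.
docker run -d --name backend -p 8080:8080 my-app/backend:1.0.0
docker run -d --name frontend -p 80:80 my-app/frontend:1.0.0
```

Every deployment and every update means a human repeating commands like these, and there is no second copy of either container to take over if something fails.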
Therefore, while Docker is part of the solution, it is not the whole solution.
In the next chapter, we will build on this foundation and use a cluster orchestration system, Kubernetes, to manage the running of these containers. Kubernetes lets us create distributed clusters of redundant containers, each deployed on a different server, so that when one server fails, the containers running on the other servers keep the application available. It also lets us update one container at a time, avoiding downtime during deployments.
Overall, Kubernetes will allow us to scale our application to handle heavy loads and to maintain reliable uptime, even in the face of hardware failures.
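As a small preview of what this looks like in practice, here is a minimal sketch of a Kubernetes Deployment manifest. The names and image tag are hypothetical placeholders; the next chapter covers the real details:

```yaml
# A hypothetical Deployment manifest (names and image are placeholders).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3           # run three redundant copies of the container
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: my-app/backend:1.0.0
```

Kubernetes schedules the three replicas across the cluster's servers and restarts any that fail; when the image tag changes, its default rolling-update behavior replaces replicas one at a time, which is how the zero-downtime updates described above are achieved.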