In the users directory, create a file named Dockerfile containing the following:
FROM node:10
ENV DEBUG="users:*"
ENV PORT="3333"
ENV SEQUELIZE_CONNECT="sequelize-docker-mysql.yaml"
ENV REST_LISTEN="0.0.0.0"
RUN mkdir -p /userauth
COPY package.json sequelize-docker-mysql.yaml *.mjs *.js /userauth/
WORKDIR /userauth
RUN apt-get update -y \
    && apt-get -y install curl python build-essential git ca-certificates \
    && npm install --unsafe-perm
EXPOSE 3333
CMD npm run docker
A Dockerfile describes how to install an application into a container image; the instructions it contains are what Docker follows when assembling and building that image. See https://docs.docker.com/engine/reference/builder/ for the full documentation.
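Once this Dockerfile is in place, an image is built from it with docker build, run from the users directory. The userauth tag used here is just an illustrative name for this sketch; substitute whatever name suits your project:
$ cd users
$ docker build -t userauth .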
The FROM command specifies a pre-existing image from which to derive the new image. We talked about this earlier; you build a Docker image by starting from an existing one. The official Node.js Docker image (https://hub.docker.com/_/node/) we're using is derived from debian:jessie. Therefore, the commands available within the container are the ones Debian provides, and we use apt-get to install additional packages. We use Node.js 10 because it supports ES6 modules and the other features we've been using.
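If you want to verify the Debian heritage yourself, one quick check is to run a throwaway container from the base image and print its OS identification file (this assumes Docker can pull node:10 from Docker Hub):
$ docker run --rm node:10 cat /etc/os-release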
The ENV commands define environment variables. In this case, we're setting the same environment variables the user authentication service already uses, plus a new REST_LISTEN variable that we'll take a look at shortly.
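These ENV settings become defaults baked into the image, but each can be overridden when a container is launched. A quick way to see this, using the illustrative userauth tag from above (the trailing env command simply prints the container's environment and exits):
$ docker run --rm -e DEBUG= userauth env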
The RUN commands are where we run the shell commands required to build the image. The first step is to make a /userauth directory that will contain the service source code, and the COPY command then copies files into that directory. Next we need to run npm install so that the service can run, but first we use the WORKDIR command to set the working directory to /userauth so that npm install runs in the right place. In the same RUN command we install the requisite Debian packages so that any Node.js packages containing native code can be built during installation.
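Since WORKDIR records /userauth as the image's working directory, any command run in the container starts there. After building, you can verify that the files landed where expected (userauth is still the hypothetical tag from the build sketch above, and the trailing command overrides the image's default CMD):
$ docker run --rm userauth pwd
$ docker run --rm userauth ls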
It's recommended that you always combine apt-get update and apt-get install in the same command, as done here, because of the Docker build cache. When rebuilding an image, Docker reuses cached layers up to the first instruction that has changed and rebuilds from there. If apt-get update were a separate instruction, its cached layer could be reused even after you change the list of packages, leaving you installing from a stale package index. Combining the two ensures that apt-get update runs any time you change the list of packages to be installed. For a complete discussion, see the documentation at https://docs.docker.com/develop/develop-images/dockerfile_best-practices/.
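On a related note, if you ever suspect a stale cached layer, such as an out-of-date apt package index, the --no-cache option to docker build rebuilds every layer from scratch for that one build:
$ docker build --no-cache -t userauth .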
At the end of this command is npm install --unsafe-perm. The issue here is that these commands run as root. Normally, when npm runs as root, it switches to a nonprivileged user ID, which can cause installation failures; the --unsafe-perm option prevents that switch.
The EXPOSE command informs Docker that the container listens on the named TCP port. By itself, EXPOSE does not make the port reachable from outside the container.
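Making the port reachable from outside the container is a separate, run-time step: you publish it with the -p option to docker run, mapping a host port to the container port. A minimal sketch, assuming the userauth tag from earlier and that the MySQL database the service depends on is already reachable:
$ docker run --rm -p 3333:3333 userauth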
Finally, the CMD command records the default command to launch when a container is started from this image. The RUN commands execute while the image is being built, whereas CMD specifies what executes when the container starts.
We could have installed PM2 in the container, then used a PM2 command to launch the service. But Docker is able to fulfill the same function, because it supports automatically restarting a container if the service process dies. We'll see how to do this later.
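For reference, that automatic-restart behavior comes from Docker's restart policies, chosen with the --restart option when launching a container. A sketch, under the same assumptions as before (illustrative userauth tag, database reachable):
$ docker run -d --restart unless-stopped -p 3333:3333 userauth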