Now that we have a network, we can start connecting containers to that network. And then we'll explore the containers to see how private they are.
Create a script, startdb.sh, containing:
docker run --name db-userauth --env MYSQL_RANDOM_ROOT_PASSWORD=true \
--env MYSQL_USER=userauth --env MYSQL_PASSWORD=userauth \
--env MYSQL_DATABASE=userauth \
--volume `pwd`/my.cnf:/etc/my.cnf \
--volume `pwd`/../userauth-data:/var/lib/mysql \
--network authnet mysql/mysql-server:5.7
On Windows, you will need to name the script startdb.ps1 instead, and put the text all on one line rather than extending the lines with backslashes. Additionally, the volume mounted on /var/lib/mysql must be created separately. Use these commands instead:
docker volume create db-userauth-volume
docker run --name db-userauth --env MYSQL_RANDOM_ROOT_PASSWORD=true --env MYSQL_USER=userauth --env MYSQL_PASSWORD=userauth --env MYSQL_DATABASE=userauth --volume $PSScriptRoot\my.cnf:/etc/my.cnf --volume db-userauth-volume:/var/lib/mysql --network authnet mysql/mysql-server:5.7
When run, the container will be named db-userauth. For a small measure of security, the root password is randomized. We've also defined a database named userauth, accessed by a user named userauth with the password userauth. That's not exactly secure, so feel free to choose better names and passwords. Finally, the container is attached to the authnet network.
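For example, here is a minimal sketch of generating a throwaway password at startup rather than hardcoding one (this assumes openssl is available; the DBPASS variable name is arbitrary):

# generate a random password and pass it to the container
DBPASS=$(openssl rand -base64 12)
echo "userauth password: $DBPASS"
docker run --name db-userauth --env MYSQL_RANDOM_ROOT_PASSWORD=true \
    --env MYSQL_USER=userauth --env MYSQL_PASSWORD="$DBPASS" \
    --env MYSQL_DATABASE=userauth \
    --volume `pwd`/my.cnf:/etc/my.cnf \
    --volume `pwd`/../userauth-data:/var/lib/mysql \
    --network authnet mysql/mysql-server:5.7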
There are two --volume options that we must talk about. In Docker parlance, a volume is a directory inside the container whose contents come from somewhere outside the container. In this case, we're mounting a host directory, userauth-data, as /var/lib/mysql inside the container, and we're mounting a local my.cnf file as /etc/my.cnf inside the container.
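To double-check what actually got mounted, docker inspect can report a running container's mounts; for instance, this quick check prints them as JSON:

$ docker inspect --format '{{json .Mounts}}' db-userauth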
For the Windows version, we have two changes to the --volume mounts. We specify the mount for /etc/my.cnf as $PSScriptRoot\my.cnf:/etc/my.cnf, because that's how you reference a file alongside the script in PowerShell.
For /var/lib/mysql, we referenced a separately created volume. The volume is created using the docker volume create command, and with that command there is no opportunity to control the location of the volume. It's important that the volume lives outside the container, so that the database files survive the destruction/creation cycle of this container.
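If you're curious where Docker put the volume, you can ask; the Mountpoint field in the output gives the location, typically under /var/lib/docker/volumes (on Windows it resides inside the Docker virtual machine):

$ docker volume inspect db-userauth-volume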
Taken together, those settings mean the database files and the configuration file live outside the container and will therefore exist beyond the lifetime of one specific container. To get the my.cnf, you will have to run the container once without the --volume `pwd`/my.cnf:/etc/my.cnf option so you can copy the default my.cnf file into the authnet directory.
Run the script once without that option:
$ sh startdb.sh
... much output
[Entrypoint] GENERATED ROOT PASSWORD: UMyh@q]@j4qijyj@wK4s4SkePIkq
... much output
The output is similar to what we saw earlier, apart from this new line giving the randomized root password.
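If that line scrolls past before you can note the password, it can be recovered from the container's log, since docker logs replays everything the container has written to its output:

$ docker logs db-userauth 2>&1 | grep 'GENERATED ROOT PASSWORD'

Next, inspect the network: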
$ docker network inspect authnet
This will tell you that the db-userauth container is attached to authnet.
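If you want just the attached containers rather than the full JSON, a Go-template filter trims the output down (a quick sketch using the --format option):

$ docker network inspect \
    --format '{{range .Containers}}{{.Name}} {{.IPv4Address}} {{end}}' authnet

Then we can verify the database is running by logging in with the user we configured: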
$ docker exec -it db-userauth mysql -u userauth -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
... much output
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| userauth |
+--------------------+
2 rows in set (0.00 sec)
mysql> use userauth;
Database changed
mysql> show tables;
Empty set (0.00 sec)
We see our database has been created and it's empty. But we did this so we could grab the my.cnf file:
$ docker cp db-userauth:/etc/my.cnf .
$ ls
my.cnf startdb.sh
The docker cp command is used for copying files in and out of containers. If you've used scp, the syntax will be familiar.
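The copy works in the other direction as well; for example, you could push an edited configuration file back into a running container:

$ docker cp my.cnf db-userauth:/etc/my.cnf

Keep in mind that mysqld reads my.cnf only at startup, so a file copied in this way takes effect only after the server restarts. For our purposes, the --volume mount is the better approach.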
Once you have the my.cnf file, there are a great many settings you might want to change. The first specific change to make is commenting out the line reading socket=/var/lib/mysql/mysql.sock, and the second is adding a line reading bind-address = 0.0.0.0. The purpose of these changes is to configure the MySQL service to listen on a TCP port rather than a Unix domain socket, which makes it possible to communicate with the MySQL service from outside the container. The result is:
# socket=/var/lib/mysql/mysql.sock
bind-address = 0.0.0.0
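If you'd rather script the edit than make it by hand, here is a sketch using GNU sed (it assumes the socket= line has no leading whitespace; BSD/macOS sed requires -i '' instead of -i):

$ sed -i -e 's|^socket=|# socket=|' -e '$ a bind-address = 0.0.0.0' my.cnf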
Now stop the db-userauth service, and remove the container, as we did earlier. Edit the startdb script to enable the line mounting /etc/my.cnf into the container, and then restart the container:
$ docker stop db-userauth
db-userauth
$ docker rm db-userauth
db-userauth
$ sh ./startdb.sh
[Entrypoint] MySQL Docker Image 5.7.21-1.1.4
[Entrypoint] Starting MySQL 5.7.21-1.1.4
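To confirm the new configuration took effect, ask the server for the variable (a quick check; passing the password on the command line is convenient here but insecure):

$ docker exec db-userauth mysql -u userauth -puserauth \
    -e "SHOW VARIABLES LIKE 'bind_address';"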
Now, if we inspect the authnet network, we see the following:
$ docker network inspect authnet
"Name": "authnet",
...
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
...
"Containers": {
"Name": "db-userauth",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
...
In other words, the authnet network has the network number 172.18.0.0/16, and the db-userauth container was assigned the address 172.18.0.2. This level of detail is rarely important, but on a first pass it's useful to examine the setup carefully so we understand what we're dealing with.
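In the listings that follow, the # prompt indicates a root shell running inside the container. You can start one with docker exec, since the mysql/mysql-server image includes bash:

$ docker exec -it db-userauth bash

From inside, look at how name resolution is configured: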
# cat /etc/resolv.conf
search attlocal.net
nameserver 127.0.0.11
options ndots:0
As we said earlier, a DNS server runs within the Docker bridge network setup, and domain name resolution is configured with the ndots:0 option. The upshot is that each Docker container's name serves as its DNS hostname:
# mysql -h db-userauth -u userauth -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 33
Server version: 5.7.21 MySQL Community Server (GPL)
And so we can access the MySQL server using the container name as the hostname.
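The same name resolution works from any other container attached to authnet. As a quick demonstration, you could start a throwaway container on the network and run the client from there (this assumes the image's entrypoint passes arbitrary commands through, which the MySQL images do; --rm removes the temporary container when the client exits):

$ docker run -it --rm --network authnet mysql/mysql-server:5.7 \
    mysql -h db-userauth -u userauth -p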