Table of Contents for
Node.js Complete Reference Guide


Node.js Complete Reference Guide, by Diogo Resende. Published by Packt Publishing, 2018.
  1. Node.js Complete Reference Guide
  2. Title Page
  3. Copyright and Credits
  4. Node.js Complete Reference Guide
  5. About Packt
  6. Why subscribe?
  7. Packt.com
  8. Contributors
  9. About the authors
  10. Packt is searching for authors like you
  11. Table of Contents
  12. Preface
  13. Who this book is for
  14. What this book covers
  15. To get the most out of this book
  16. Download the example code files
  17. Conventions used
  18. Get in touch
  19. Reviews
  20. About Node.js
  21. The capabilities of Node.js
  22. Server-side JavaScript
  23. Why should you use Node.js?
  24. Popularity
  25. JavaScript at all levels of the stack
  26. Leveraging Google's investment in V8
  27. Leaner, asynchronous, event-driven model
  28. Microservice architecture
  29. Node.js is stronger for having survived a major schism and hostile fork
  30. Threaded versus event-driven architecture
  31. Performance and utilization
  32. Is Node.js a cancerous scalability disaster?
  33. Server utilization, the business bottom line, and green web hosting
  34. Embracing advances in the JavaScript language
  35. Deploying ES2015/2016/2017/2018 JavaScript code
  36. Node.js, the microservice architecture, and easily testable systems
  37. Node.js and the Twelve-Factor app model
  38. Summary
  39. Setting up Node.js
  40. System requirements
  41. Installing Node.js using package managers
  42. Installing on macOS with MacPorts
  43. Installing on macOS with Homebrew
  44. Installing on Linux, *BSD, or Windows from package management systems
  45. Installing Node.js in the Windows Subsystem for Linux (WSL)
  46. Opening an administrator-privileged PowerShell on Windows
  47. Installing the Node.js distribution from nodejs.org
  48. Installing from source on POSIX-like systems
  49. Installing prerequisites
  50. Installing developer tools on macOS
  51. Installing from source for all POSIX-like systems
  52. Installing from source on Windows
  53. Installing multiple Node.js instances with nvm
  54. Installing nvm on Windows
  55. Native code modules and node-gyp
  56. Node.js versions policy and what to use
  57. Editors and debuggers
  58. Running and testing commands
  59. Node.js's command-line tools
  60. Running a simple script with Node.js
  61. Conversion to async functions and the Promise paradigm
  62. Launching a server with Node.js
  63. NPM – the Node.js package manager
  64. Node.js, ECMAScript 2015/2016/2017, and beyond 
  65. Using Babel to use experimental JavaScript features
  66. Summary
  67. Node.js Modules
  68. Defining a module
  69. CommonJS and ES2015 module formats
  70. CommonJS/Node.js module format
  71. ES6 module format
  72. JSON modules
  73. Supporting ES6 modules on older Node.js versions
  74. Demonstrating module-level encapsulation
  75. Finding and loading CommonJS and JSON modules using require
  76. File modules
  77. Modules baked into Node.js binary
  78. Directories as modules
  79. Module identifiers and pathnames
  80. An example of application directory structure
  81. Finding and loading ES6 modules using import
  82. Hybrid CommonJS/Node.js/ES6 module scenarios
  83. Dynamic imports with import()
  84. The import.meta feature
  85. npm - the Node.js package management system
  86. The npm package format
  87. Finding npm packages
  88. Other npm commands
  89. Installing an npm package
  90. Installing a package by version number
  91. Global package installs
  92. Avoiding global module installation
  93. Maintaining package dependencies with npm
  94. Automatically updating package.json dependencies
  95. Fixing bugs by updating package dependencies
  96. Packages that install commands
  97. Configuring the PATH variable to handle commands installed by modules
  98. Configuring the PATH variable on Windows
  99. Avoiding modifications to the PATH variable
  100. Updating outdated packages you've installed
  101. Installing packages from outside the npm repository
  102. Initializing a new npm package
  103. Declaring Node.js version compatibility
  104. Publishing an npm package
  105. Explicitly specifying package dependency version numbers
  106. The Yarn package management system
  107. Summary
  108. HTTP Servers and Clients
  109. Sending and receiving events with EventEmitters
  110. JavaScript classes and class inheritance
  111. The EventEmitter Class
  112. The EventEmitter theory
  113. HTTP server applications
  114. ES2015 multiline and template strings
  115. HTTP Sniffer – listening to the HTTP conversation
  116. Web application frameworks
  117. Getting started with Express
  118. Setting environment variables in Windows cmd.exe command line
  119. Walking through the default Express application
  120. The Express middleware
  121. Middleware and request paths
  122. Error handling
  123. Calculating the Fibonacci sequence with an Express application
  124. Computationally intensive code and the Node.js event loop
  125. Algorithmic refactoring
  126. Making HTTP Client requests
  127. Calling a REST backend service from an Express application
  128. Implementing a simple REST server with Express
  129. Refactoring the Fibonacci application for REST
  130. Some RESTful modules and frameworks
  131. Summary
  132. Your First Express Application
  133. Promises, async functions, and Express router functions
  134. Promises and error handling
  135. Flattening our asynchronous code
  136. Promises and generators birthed async functions
  137. Express and the MVC paradigm
  138. Creating the Notes application
  139. Your first Notes model
  140. Understanding ES-2015 class definitions
  141. Filling out the in-memory Notes model
  142. The Notes home page
  143. Adding a new note – create
  144. Viewing notes – read
  145. Editing an existing note – update
  146. Deleting notes – destroy
  147. Theming your Express application
  148. Scaling up – running multiple Notes instances
  149. Summary
  150. Implementing the Mobile-First Paradigm
  151. Problem – the Notes app isn't mobile friendly
  152. Mobile-first paradigm
  153. Using Twitter Bootstrap on the Notes application
  154. Setting it up
  155. Adding Bootstrap to application templates
  156. Alternative layout frameworks
  157. Flexbox and CSS Grids
  158. Mobile-first design for the Notes application
  159. Laying the Bootstrap grid foundation
  160. Responsive page structure for the Notes application
  161. Using icon libraries and improving visual appeal
  162. Responsive page header navigation bar
  163. Improving the Notes list on the front page
  164. Cleaning up the Note viewing experience
  165. Cleaning up the add/edit note form
  166. Cleaning up the delete-note window
  167. Building a customized Bootstrap
  168. Pre-built custom Bootstrap themes
  169. Summary
  170. Data Storage and Retrieval
  171. Data storage and asynchronous code
  172. Logging
  173. Request logging with Morgan
  174. Debugging messages
  175. Capturing stdout and stderr
  176. Uncaught exceptions
  177. Unhandled Promise rejections
  178. Using the ES6 module format
  179. Rewriting app.js as an ES6 module
  180. Rewriting bin/www as an ES6 module
  181. Rewriting models code as ES6 modules
  182. Rewriting router modules as ES6 modules
  183. Storing notes in the filesystem
  184. Dynamic import of ES6 modules
  185. Running the Notes application with filesystem storage
  186. Storing notes with the LevelUP data store
  187. Storing notes in SQL with SQLite3
  188. SQLite3 database schema
  189. SQLite3 model code
  190. Running Notes with SQLite3
  191. Storing notes the ORM way with Sequelize
  192. Sequelize model for the Notes application
  193. Configuring a Sequelize database connection
  194. Running the Notes application with Sequelize
  195. Storing notes in MongoDB
  196. MongoDB model for the Notes application
  197. Running the Notes application with MongoDB
  198. Summary
  199. Multiuser Authentication the Microservice Way
  200. Creating a user information microservice
  201. User information model
  202. A REST server for user information
  203. Scripts to test and administer the user authentication server
  204. Login support for the Notes application
  205. Accessing the user authentication REST API
  206. Login and logout routing functions
  207. Login/logout changes to app.js
  208. Login/logout changes in routes/index.mjs
  209. Login/logout changes required in routes/notes.mjs
  210. View template changes supporting login/logout
  211. Running the Notes application with user authentication
  212. Twitter login support for the Notes application
  213. Registering an application with Twitter
  214. Implementing TwitterStrategy
  215. Securely keeping secrets and passwords
  216. The Notes application stack
  217. Summary
  218. Dynamic Client/Server Interaction with Socket.IO
  219. Introducing Socket.IO
  220. Initializing Socket.IO with Express
  221. Real-time updates on the Notes homepage
  222. The Notes model as an EventEmitter class
  223. Real-time changes in the Notes home page
  224. Changing the homepage and layout templates
  225. Running Notes with real-time homepage updates
  226. Real-time action while viewing notes
  227. Changing the note view template for real-time action
  228. Running Notes with real-time updates while viewing a note
  229. Inter-user chat and commenting for Notes
  230. Data model for storing messages
  231. Adding messages to the Notes router
  232. Changing the note view template for messages
  233. Using a Modal window to compose messages
  234. Sending, displaying, and deleting messages
  235. Running Notes and passing messages
  236. Other applications of Modal windows
  237. Summary
  238. Deploying Node.js Applications
  239. Notes application architecture and deployment considerations
  240. Traditional Linux Node.js service deployment
  241. Prerequisite – provisioning the databases
  242. Installing Node.js on Ubuntu
  243. Setting up Notes and user authentication on the server
  244. Adjusting Twitter authentication to work on the server
  245. Setting up PM2 to manage Node.js processes
  246. Node.js microservice deployment with Docker
  247. Installing Docker on your laptop
  248. Starting Docker with Docker for Windows/macOS
  249. Kicking the tires of Docker
  250. Creating the AuthNet for the user authentication service
  251. MySQL container for Docker
  252. Initializing AuthNet
  253. Script execution on Windows
  254. Linking Docker containers
  255. The db-userauth container
  256. Dockerfile for the authentication service
  257. Configuring the authentication service for Docker
  258. Building and running the authentication service Docker container
  259. Exploring Authnet
  260. Creating FrontNet for the Notes application
  261. MySQL container for the Notes application
  262. Dockerizing the Notes application
  263. Controlling the location of MySQL data volumes
  264. Docker deployment of background services
  265. Deploying to the cloud with Docker compose
  266. Docker compose files
  267. Running the Notes application with Docker compose
  268. Deploying to cloud hosting with Docker compose
  269. Summary
  270. Unit Testing and Functional Testing
  271. Assert – the basis of testing methodologies
  272. Testing a Notes model
  273. Mocha and Chai­ – the chosen test tools
  274. Notes model test suite
  275. Configuring and running tests
  276. More tests for the Notes model
  277. Testing database models
  278. Using Docker to manage test infrastructure
  279. Docker Compose to orchestrate test infrastructure
  280. Executing tests under Docker Compose
  281. MongoDB setup under Docker and testing Notes against MongoDB
  282. Testing REST backend services
  283. Automating test results reporting
  284. Frontend headless browser testing with Puppeteer
  285. Setting up Puppeteer
  286. Improving testability in the Notes UI
  287. Puppeteer test script for Notes
  288. Running the login scenario
  289. The Add Note scenario
  290. Mitigating/preventing spurious test errors in Puppeteer scripts
  291. Configuring timeouts
  292. Tracing events on the Page and the Puppeteer instance
  293. Inserting pauses
  294. Avoiding WebSockets conflicts
  295. Taking screenshots
  296. Summary
  297. REST – What You Did Not Know
  298. REST fundamentals
  299. Principle 1 – Everything is a resource
  300. Principle 2 – Each resource is identifiable by a unique identifier
  301. Principle 3 – Manipulate resources via standard HTTP methods
  302. Principle 4 – Resources can have multiple representations
  303. Principle 5 – Communicate with resources in a stateless manner
  304. The REST goals
  305. Separation of the representation and the resource
  306. Visibility
  307. Reliability
  308. Scalability and performance
  309. Working with WADL
  310. Documenting RESTful APIs with Swagger
  311. Taking advantage of the existing infrastructure
  312. Summary
  313. Building a Typical Web API
  314. Specifying the API
  315. Implementing routes
  316. Querying the API using test data
  317. Content negotiation
  318. API versioning
  319. Self-test questions
  320. Summary
  321. Using NoSQL Databases
  322. MongoDB – a document store database
  323. Database modeling with Mongoose
  324. Testing a Mongoose model with Mocha
  325. Creating a user-defined model around a Mongoose model
  326. Wiring up a NoSQL database module to Express
  327. Self-test questions
  328. Summary
  329. Restful API Design Guidelines
  330. Endpoint URLs and HTTP status codes best practices
  331. Extensibility and versioning
  332. Linked data
  333. Summary
  334. Implementing a Full Fledged RESTful Service
  335. Working with arbitrary data
  336. Linking
  337. Implementing paging and filtering
  338. Caching
  339. Supplying the Cache-Control header in Express applications
  340. Discovering and exploring RESTful services
  341. Summary
  342. Consuming a RESTful API
  343. Consuming RESTful services with jQuery
  344. Troubleshooting and identifying problems on the wire
  345. Cross Origin Resource Sharing
  346. Content Delivery Networks
  347. Handling HTTP status codes on the client side
  348. Summary
  349. Securing the Application
  350. Authentication
  351. Basic authentication
  352. Passport
  353. Passport's basic authentication strategy
  354. Passport's OAuth Strategy
  355. Passport's third-party authentication strategies
  356. Authorization
  357. Transport layer security
  358. Self-test questions
  359. Summary
  360. The Age of Microservices
  361. From monolith to microservices
  362. Patterns of microservices
  363. Decomposable
  364. Autonomous
  365. Scalable
  366. Communicable
  367. Summary
  368. Modules and Toolkits
  369. Seneca
  370. Hydra
  371. Summary
  372. Building a Microservice
  373. Using Hydra
  374. Using Seneca
  375. Plugins
  376. Summary
  377. State
  378. State
  379. Storing state
  380. MySQL
  381. RethinkDB
  382. Redis
  383. Conclusion
  384. Security
  385. Summary
  386. Testing
  387. Integrating tests
  388. Using chai
  389. Adding code coverage
  390. Covering all code
  391. Mocking our services
  392. Summary
  393. Design Patterns
  394. Choosing patterns
  395. Architectural patterns
  396. Front Controller
  397. Layered
  398. Service Locator
  399. Observer
  400. Publish-Subscribe
  401. Using patterns
  402. Planning your microservice
  403. Obstacles when developing
  404. Summary
  405. Other Books You May Enjoy
  406. Leave a review - let other readers know what you think

Deploying to cloud hosting with Docker compose

We've verified on our laptop that the services described by the compose file work as intended. Launching the containers is now automated, fixing one of the issues we named earlier. It's now time to see how to deploy to a cloud-hosting provider. This is where we turn to Docker machine.

Docker machine can provision Docker instances inside a VirtualBox host on your laptop, but here we'll provision a Docker system on DigitalOcean. The docker-machine command comes with drivers supporting a long list of cloud-hosting providers, so it's easy to adapt the instructions shown here to another provider simply by substituting a different driver.

After signing up for a DigitalOcean account, click on the API link in the dashboard. We need an API token to grant docker-machine access to the account. Go through the process of creating a token, and save the token string you're given somewhere safe. The Docker website has a tutorial at https://docs.docker.com/machine/examples/ocean/.
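Rather than pasting the token onto the command line each time, you might keep it in a file and read it into a shell variable first. This is a sketch; the file path and the DOTOKEN variable name are our own convention, not something docker-machine requires:

```shell
# Read the DigitalOcean API token from a file kept out of source control
# (the file path and variable name are our own convention)
DOTOKEN=$(cat "$HOME/.config/digitalocean/token")

# Pass the token to docker-machine without typing it directly
docker-machine create --driver digitalocean \
    --digitalocean-size 2gb \
    --digitalocean-access-token "$DOTOKEN" \
    sandbox
```

This also keeps the token string out of your shell history.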

With the token in hand, type the following:

$ docker-machine create --driver digitalocean --digitalocean-size 2gb \
--digitalocean-access-token TOKEN-FROM-PROVIDER \
sandbox
Running pre-create checks...
Creating machine...
(sandbox) Creating SSH key...
(sandbox) Creating Digital Ocean droplet...
(sandbox) Waiting for IP address to be assigned to the Droplet...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env sandbox

The digitalocean driver is, as we said earlier, used with DigitalOcean. The Docker website has a list of drivers at https://docs.docker.com/machine/drivers/.

A lot of information is printed here about things being set up. The most important part is the message at the end: a series of environment variables tell the docker command where to connect to the Docker Engine instance. As the message says, run docker-machine env sandbox:

$ docker-machine env sandbox
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://45.55.37.74:2376"
export DOCKER_CERT_PATH="/home/david/.docker/machine/machines/sandbox"
export DOCKER_MACHINE_NAME="sandbox"
# Run this command to configure your shell:
# eval $(docker-machine env sandbox)

Those are the environment variables used to access the Docker host we just created. You should also visit your cloud-hosting provider's dashboard and see that the host has been created. The output also gives us an instruction to follow:

$ eval $(docker-machine env sandbox) 
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
sandbox * digitalocean Running tcp://45.55.37.74:2376 v18.01.0-ce

This shows that we have a Docker Engine instance running in a host at our chosen cloud-hosting provider.
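Incidentally, a quick way to tell which Docker Engine a given Terminal will talk to is to inspect the DOCKER_HOST variable that docker-machine env sets. This is a small sketch; the fallback message is our own wording:

```shell
# Print the engine this shell targets; an empty DOCKER_HOST means
# the docker command falls back to the local Unix socket
echo "Docker Engine: ${DOCKER_HOST:-local (unix socket)}"
```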

One interesting test at this point is to run docker ps -a in this Terminal, and then to run it in another Terminal that does not have these environment variables set. The first should show that the cloud host has no containers at all, while your local machine may have some (depending on what you currently have running). Next, launch a test container on the remote host:

$ docker run hello-world 
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete
Digest: sha256:66ef312bbac49c39a89aa9bcc3cb4f3c9e7de3788c944158df3ee0176d32b751
Status: Downloaded newer image for hello-world:latest
...
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest f2a91732366c 2 months ago 1.85kB

Here, we've verified that we can launch a container on the remote host.

The next step is to build our containers for the new machine. Because we've switched the environment variables to point to the new server, these commands take effect there rather than on our laptop:

$ docker-compose build
db-userauth uses an image, skipping
db-notes uses an image, skipping
Building notes
Step 1/22 : FROM node:9.5
9.5: Pulling from library/node
f49cf87b52c1: Pull complete
7b491c575b06: Pull complete
b313b08bab3b: Pull complete
51d6678c3f0e: Pull complete
...

Because we changed the environment variables, the build occurs on the sandbox machine rather than on our laptop, as it did previously.

This will take a while because the Docker image cache on the remote machine is empty. Additionally, building the notesapp and userauth containers copies the entire source tree to the server and runs all build steps on the server.
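One way to shorten that upload is a .dockerignore file next to each Dockerfile, so that local artifacts never enter the build context sent to the server. The entries below are a sketch assuming a typical Node.js source tree; adjust them to your project layout:

```
# Keep local artifacts out of the build context
node_modules
npm-debug.log
.git
```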

The build may fail if the host has the default memory size (500 MB on DigitalOcean at the time of writing). If so, the first thing to try is resizing the host's memory to at least 2 GB.

Once the build is finished, launch the containers on the remote machine:

$ docker-compose up 
Creating notes ... done
Recreating db-userauth ... done
Recreating db-notes ... done
Creating notes ...
Attaching to db-userauth, db-notes, userauth, notes

Once the containers start, you should test the userauth container as we've done previously. Unfortunately, the first time you do so, that test will fail. The problem lies in these lines in docker-compose.yml:

  - ../authnet/my.cnf:/etc/my.cnf
...
  - ../frontnet/my.cnf:/etc/my.cnf

In this case, the build occurs on the remote machine, and the docker-machine command does not copy the named file to the server. Hence, when Docker attempts to start the container, the volume mount cannot be satisfied because the file simply isn't there. Fixing this means some surgery on docker-compose.yml, plus two new Dockerfiles.

First, make these changes to docker-compose.yml:

...
  db-userauth:
    build: ../authnet
    container_name: db-userauth
    networks:
      - authnet
    volumes:
      - db-userauth-data:/var/lib/mysql
    restart: always
...
  db-notes:
    build: ../frontnet
    container_name: db-notes
    networks:
      - frontnet
    volumes:
      - db-notes-data:/var/lib/mysql
    restart: always

Instead of running the database containers directly from a stock Docker image, we're now building custom images with a pair of Dockerfiles. Next, we must create those two Dockerfiles.

In authnet, create a file named Dockerfile containing the following:

FROM mysql/mysql-server:5.7
EXPOSE 3306
COPY my.cnf /etc/
ENV MYSQL_RANDOM_ROOT_PASSWORD="true"
ENV MYSQL_USER=userauth
ENV MYSQL_PASSWORD=userauth
ENV MYSQL_DATABASE=userauth
CMD [ "mysqld", "--character-set-server=utf8mb4", \
"--collation-server=utf8mb4_unicode_ci", "--bind-address=0.0.0.0" ]

This carries over certain settings from what had been the db-userauth description in docker-compose.yml. The important change is that we now COPY the my.cnf file into the image rather than relying on a volume mount.

In frontnet, create a Dockerfile containing the following:

FROM mysql/mysql-server:5.7
EXPOSE 3306
COPY my.cnf /etc/
ENV MYSQL_RANDOM_ROOT_PASSWORD="true"
ENV MYSQL_USER=notes
ENV MYSQL_PASSWORD=notes12345
ENV MYSQL_DATABASE=notes
CMD [ "mysqld", "--character-set-server=utf8mb4", \
"--collation-server=utf8mb4_unicode_ci", "--bind-address=0.0.0.0" ]

This is the same, but with a few critical values changed.

After making these changes, we can build the containers and launch them:

$ docker-compose build
... much output
$ docker-compose up --force-recreate
... much output

Now that we have a working build, and can bring up the containers, let's inspect them and verify everything works.

Execute a shell in userauth to test and set up the user database:

$ docker exec -it userauth bash
root@931dd2a267b4:/userauth# PORT=3333 node users-list.js
List [ { id: 'me', username: 'me', provider: 'local',
familyName: 'Einarrsdottir', givenName: 'Ashildr', middleName: '',
emails: '[]', photos: '[]' } ]

As mentioned previously, this verifies that the userauth service works, that the remote containers are set up, and that we can proceed to using the Notes application.

The question is: what URL should we use? The service is not on localhost, because it's running on the remote server. We don't have a domain name assigned, but the server does have an IP address.

Run the following command:

$ docker-machine ip sandbox
45.55.37.74

The docker-machine ip command tells you the IP address, which you should use as the basis of the URL. Hence, in your browser, visit http://IP-ADDRESS:3000.
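You can capture that into a URL in one step. This is a sketch; the NOTES_URL variable name is our own:

```shell
# Build the Notes URL from the machine's public IP address
NOTES_URL="http://$(docker-machine ip sandbox):3000"
echo "$NOTES_URL"
```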

With Notes deployed to the remote server, you should check out all the things we've looked at previously. The bridge networks should exist, as shown previously, with the same limited access between containers. The only public access should be port 3000 on the notes container. 

Remember to set the TWITTER_CALLBACK_HOST environment variable appropriately for your server.
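One place to set it is the notes service's environment section in docker-compose.yml. This is a sketch; substitute your server's actual IP address or hostname:

```
notes:
  environment:
    # Must match the externally visible address of the notes container
    - TWITTER_CALLBACK_HOST=http://45.55.37.74:3000
```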

Because our database containers mount a volume to store the data, let's see where that volume landed on the server:

$ docker volume ls
DRIVER VOLUME NAME
local compose_db-notes-data
local compose_db-userauth-data

Those are the expected volumes, one for each container:

$ docker volume inspect compose_db-notes-data
[
{
"CreatedAt": "2018-02-07T06:30:06Z",
"Driver": "local",
"Labels": {
"com.docker.compose.project": "compose",
"com.docker.compose.volume": "db-notes-data"
},
"Mountpoint": "/var/lib/docker/volumes/compose_db-notes-
data/_data",
"Name": "compose_db-notes-data",
"Options": {},
"Scope": "local"
}
]

Those are the directories, but they're not located on our laptop. Instead, they're on the remote server. Accessing these directories means logging into the remote server to take a look:

$ docker-machine ssh sandbox
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-112-generic x86_64)

* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage

Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud

4 packages can be updated.
0 updates are security updates.

Last login: Wed Feb 7 04:00:29 2018 from 108.213.68.139
root@sandbox:~#

From this point, you can inspect the directories corresponding to these volumes and see that they indeed contain MySQL configuration and data files:

root@sandbox:~# ls /var/lib/docker/volumes/compose_db-notes-data/_data 
auto.cnf client-key.pem ib_logfile1 mysql.sock.lock public_key.pem
ca-key.pem ib_buffer_pool ibtmp1 notes server-cert.pem
ca.pem ibdata1 mysql performance_schema server-key.pem
client-cert.pem ib_logfile0 mysql.sock private_key.pem sys

You'll also find that the Docker command-line tools work on the server, and the process list is especially interesting. Run ps -ef there and look closely: you'll see a process corresponding to every container in the system. These processes run in the host operating system. Docker creates layers of configuration/containment around those processes to create the appearance that each process is running under a different operating system, and with its own system/network configuration files, as specified in the container definition.

The claimed advantage Docker has over virtualization approaches, such as VirtualBox, is that Docker is very lightweight. We see right here why Docker is lightweight: there is no virtualization layer, there is only a containerization process (docker-containerd-shim).

Once you're satisfied that Notes is working on the remote server, you can shut it down and remove it as follows:

$ docker-compose stop
Stopping notes ... done
Stopping userauth ... done
Stopping db-notes ... done
Stopping db-userauth ... done

This shuts down all the containers at once:

$ docker-machine stop sandbox
Stopping "sandbox"...
Machine "sandbox" was stopped.

This shuts down the remote machine. The cloud-hosting provider dashboard will show that the Droplet has stopped.

At this point, you can go ahead and delete the Docker machine instance as well, if you like:

$ docker-machine rm sandbox
About to remove sandbox
Are you sure? (y/n): y
Successfully removed sandbox  

And, if you're truly certain you want to delete the machine, the preceding command does the deed. As soon as you do this, the machine will be erased from your cloud-hosting provider dashboard.