Mocha is a popular, multiparadigm testing framework for Node.js. It features several different styles for describing your tests. We’ll be using the behavior-driven development (BDD) style.
To use Mocha, first we’ll install it with npm, the package manager that comes bundled with Node.js. Next we’ll develop a unit test for the LDJClient class. And finally we’ll use npm to run the test suite.
Installing Node.js packages with npm can be quite easy, which partly explains the abundance of modules available to you. Even so, it’s important to understand what’s going on so you can manage your dependencies.
npm relies on a configuration file called package.json, so let’s create one now. Open a terminal to your networking project and run this:
| | $ npm init -y |
Calling npm init will create a default package.json file. We’ll go through this important file in future chapters; for now let’s move on to installing Mocha by running npm install.
| | $ npm install --save-dev --save-exact mocha@3.4.2 |
| | npm notice created a lockfile as package-lock.json. You should commit this file. |
| | npm WARN networking@1.0.0 No description |
| | npm WARN networking@1.0.0 No repository field. |
| | |
| | + mocha@3.4.2 |
| | added 34 packages in 2.348s |
You can safely ignore the warnings in the output for now. npm is just suggesting that you add some descriptive fields to your package.json.
When the command finishes, it will have made a few changes. You’ll now have a directory called node_modules in your project, which contains Mocha and its dependencies. And if you open your package.json file, you should find a devDependencies section that looks like this:
| | "devDependencies": { |
| | "mocha": "3.4.2" |
| | } |
In Node.js, there are a few different kinds of dependencies. Regular dependencies are used at runtime by your code when you use require to bring in modules. Dev dependencies are programs that your project needs during development. Mocha is the latter kind, and the --save-dev flag (-D for short) told npm to add it to the devDependencies list.
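To make the distinction concrete, here’s a sketch of how the two sections might sit side by side in a package.json. The some-runtime-lib entry is purely hypothetical, since our project doesn’t have any runtime dependencies yet:
| | "dependencies": { |
| |   "some-runtime-lib": "1.0.0" |
| | }, |
| | "devDependencies": { |
| |   "mocha": "3.4.2" |
| | } |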
Note that both dev dependencies and regular runtime dependencies are installed when you run npm install with no additional arguments. If you specifically want only the regular runtime dependencies and not the dev dependencies, you can run npm install with the --production flag or set the NODE_ENV environment variable to production.
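For example, either of these invocations should skip the devDependencies during install:
| | $ npm install --production |
| | $ NODE_ENV=production npm install |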
npm also created a file called package-lock.json. This file contains the exact version of every module that Mocha depends on, transitively. We’ll talk more about this file in a bit, but first it helps to understand semantic versioning and how npm resolves package versions.
The --save-exact (or -E) flag tells npm that we really want the version specified, in this case 3.4.2. By default, npm will use semantic versioning (or SemVer) to try to find the best available, compatible version of a package.[21]
Semantic versioning is a strong convention in the Node.js community, which you should definitely follow when setting version numbers on your packages. A version number consists of three parts joined by dots: the major version, the minor version, and the patch.
To abide by the semantic versioning convention, when you make a change to your code you have to increment the correct part of the version number:
If your code change does not introduce or remove any functionality (like a bug fix), then just increment the patch version.
If your code introduces functionality but doesn’t remove or alter existing functionality, then increment the minor version and reset the patch.
If your code in any way breaks existing functionality, then increment the major version and reset the minor and patch versions.
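As a concrete illustration, npm can bump these numbers for you with its npm version command. Starting from a hypothetical 1.5.7, the three kinds of changes would play out like this:
| | $ npm version patch   # 1.5.7 -> 1.5.8: bug fix only |
| | $ npm version minor   # 1.5.8 -> 1.6.0: new, backward-compatible functionality |
| | $ npm version major   # 1.6.0 -> 2.0.0: breaking change |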
You can omit the --save-exact flag if you’re OK with npm pulling in the latest compatible version. You can even omit the version number entirely when running npm install, in which case npm will pull down the latest released version.
For the purposes of this book, all version numbers will be rigorously specified. This makes the code examples more future-proof and guards against potential violations of semantic versioning in packages we use.
If you leave off the --save-exact flag when installing a module through npm, it will prefix the version number with a caret (^) in your package.json. For example, "^3.4.2" instead of just "3.4.2". The caret means that npm will accept any version greater than or equal to the one you specified, as long as the major version is the same.
For example, if your dependency version is set to ^1.5.7 and the module authors release a new minor version, 1.6.0, then anyone installing your module will get 1.6.0 of your dependency. They would not, however, pick up version 2.0.0, since a major-version bump is assumed to break backward compatibility.
As long as everyone abides by the semantic versioning convention, everything should be fine, since minor versions can only add new functionality without breaking existing functionality. In practice, things don’t always work out that way.
If you want to have some leeway but still be a little more strict, you can use the tilde (~) prefix character instead. To continue the previous example, if your dependency is set to ~1.5.7 and the authors release 1.5.8, your users will get 1.5.8 but would not automatically be upgraded to 1.6.0. Prefixing with ~ is somewhat safer than prefixing with ^ because people are somewhat less likely to introduce breaking changes in patch releases, but you never know.
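To see the two prefixes side by side, here’s a hypothetical dependencies section (the module names are made up for illustration):
| | "dependencies": { |
| |   "caret-example": "^1.5.7", |
| |   "tilde-example": "~1.5.7" |
| | } |
Here ^1.5.7 matches any 1.x.y version at or above 1.5.7, while ~1.5.7 matches only 1.5.x versions at or above 1.5.7.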
Although semantic versioning has been widely adopted by the community, authors sometimes make breaking changes in minor and patch releases prior to major version 1. For example, a project might start at version 0.0.1 and make breaking changes in each of 0.0.2, 0.0.3, and so on. Likewise for projects that go from 0.1.0 to 0.2.0 to 0.3.0, etc. npm accounts for this by ignoring leading zeros when trying to figure out which version to use for caret- and tilde-prefixed version numbers. In practice, that means ^0.1.2 will match 0.1.3 but not 0.2.0, just as ^1.2.3 matches 1.3.0 but not 2.0.0.
My advice: always use --save-exact when installing packages. The downside is that you’ll have to explicitly update the version numbers of packages you depend on to pick up newer versions. But at least you get to tackle this on your own terms instead of having a surprise breakage introduced by an upstream dependency you don’t control.
I have one more parting tip on version numbers before we move on. Even if you meticulously manage your direct dependencies with --save-exact, those dependencies may not be as strict in their own dependencies. This is why the package-lock.json is so important. It cements the versions of the entire dependency tree, including checksums.
If you’re serious about having identical files on disk from one install to the next, you should commit your package-lock.json to your version-control system. When you’re ready to perform updates, use npm outdated to get a report showing which of the modules you depend on have updated versions. Then when you install the latest version of a module, your package-lock.json will have the newly updated tree.
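As a sketch of that workflow, using the Mocha dependency we just installed, you might run something like this to review and then apply an update:
| | $ npm outdated |
| | $ npm install --save-dev --save-exact mocha@latest |
Installing with an explicit version (or the latest tag) updates both your package.json and your package-lock.json.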
By committing the package-lock.json as you develop your project, you create an audit trail that allows you to run the exact same code stack from any point in the past. This can be an invaluable resource when trying to track down bugs, whether they’re in your own code or your dependencies.
That’s enough for now about semantic versioning, package.json, and package-lock.json. Let’s move on to writing unit tests with Mocha.
With Mocha installed, now we’ll develop a unit test that uses it.
Create a subdirectory named test to hold your test-related code. This is the convention for Node.js projects generally, and by default Mocha will look for your tests there.
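From your project directory, that’s a single command:
| | $ mkdir test |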
Next, create a file in your test directory called ldj-client-test.js and add the following code to it.
| | 'use strict'; |
| | const assert = require('assert'); |
| | const EventEmitter = require('events').EventEmitter; |
| | const LDJClient = require('../lib/ldj-client.js'); |
| | |
| | describe('LDJClient', () => { |
| |   let stream = null; |
| |   let client = null; |
| | |
| |   beforeEach(() => { |
| |     stream = new EventEmitter(); |
| |     client = new LDJClient(stream); |
| |   }); |
| | |
| |   it('should emit a message event from a single data event', done => { |
| |     client.on('message', message => { |
| |       assert.deepEqual(message, {foo: 'bar'}); |
| |       done(); |
| |     }); |
| |     stream.emit('data', '{"foo":"bar"}\n'); |
| |   }); |
| | }); |
Let’s step through this code. First we pull in the modules we’ll need, including Node.js’s built-in assert module. This contains useful functions for comparing values.
Next we use Mocha’s describe method to create a named context for our tests involving LDJClient. The second argument to describe is a function that contains the test content.
Inside the callback passed to describe, we first declare two variables with let: stream for the underlying EventEmitter and client for the LDJClient instance. Then in beforeEach, we assign fresh instances to both so that each test starts with a clean slate.
Finally we call it to test a specific behavior of the class. Since our class is asynchronous by nature, we invoke the done callback that Mocha provides to signal when the test has finished.
In the body of the test, we set up a message event handler on the client. This handler uses the assert module’s deepEqual method to check that the payload we received matches our expectations. Finally, we tell our synthetic stream to emit a data event, which causes our message handler to be invoked a few turns of the event loop later.
Now that we have a test written, it’s time to run it!
To run Mocha tests using npm, we have to add an entry to the package.json file. Open your package.json and update the scripts section so that it looks like this:
| | "scripts": { |
| | "test": "mocha" |
| | }, |
Entries in scripts are commands you can invoke from the command line using npm run. For example, running npm run test executes the locally installed mocha command-line tool.
And for test in particular (and a few other scripts), npm has an alias so you can omit the run and just do npm test.
Let’s run it now. Open a terminal and run npm test:
| | $ npm test |
| | |
| | > @ test ./code/networking |
| | > mocha |
| | |
| | |
| | |
| | LDJClient |
| | ✓ should emit a message event from a single data event |
| | |
| | |
| | 1 passing (9ms) |
Great! Our test passed on the first try. Next, let’s convert the chunked test from test-json-service.js into a true Mocha test.
With this infrastructure in place, that’s easy to do. Open your test/ldj-client-test.js file and add the following inside the describe block:
| | it('should emit a message event from split data events', done => { |
| |   client.on('message', message => { |
| |     assert.deepEqual(message, {foo: 'bar'}); |
| |     done(); |
| |   }); |
| |   stream.emit('data', '{"foo":'); |
| |   process.nextTick(() => stream.emit('data', '"bar"}\n')); |
| | }); |
This test breaks up the message into two parts to be emitted by the stream, one after the other. Notice the use of process.nextTick to send the second chunk. This built-in Node.js method schedules a callback to be executed as soon as the currently executing code finishes.
If you’ve done front-end JavaScript programming, you may be familiar with using setTimeout with a delay of 0 for this purpose. The difference between setTimeout(callback, 0) and process.nextTick(callback) is that the latter will execute before the next spin of the event loop. By contrast, setTimeout will wait for the event loop to spin at least once, allowing for other queued callbacks to be executed.
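If you want to see this ordering for yourself, here’s a minimal sketch (separate from our test suite) that prints the synchronous line first, then the nextTick callback, and only then the setTimeout callback:
| | setTimeout(() => console.log('from setTimeout'), 0); |
| | process.nextTick(() => console.log('from nextTick')); |
| | console.log('from the current code'); |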
This test will pass irrespective of which of these methods you use to delay the second chunk, as long as the delay is less than Mocha’s test timeout. By default, this timeout is 2 seconds (2000 ms), but you can change it for the whole suite or on a test-by-test basis.
To set the Mocha timeout for the whole run, use the --timeout flag to specify the timeout in milliseconds. Set the timeout to 0 to disable it entirely.
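For example, you could pass the flag through the test script in your package.json; the 5000 here is just an arbitrary value for illustration:
| | "scripts": { |
| |   "test": "mocha --timeout 5000" |
| | }, |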
If you want to set a specific timeout for a particular test, you can call the timeout method on the object returned by Mocha’s it method. Here’s an example:
| | it('should finish within 5 seconds', done => { |
| |   setTimeout(done, 4500); // Call done after 4.5 seconds. |
| | }).timeout(5000); |
You can also call timeout on the object returned by describe to set a default timeout for a whole group of tests.
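For instance, a suite-wide default could look like this sketch (again with an arbitrary 5000 ms value); individual tests can still override it:
| | describe('LDJClient', () => { |
| |   // ...beforeEach and it blocks as before... |
| | }).timeout(5000); |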
OK, let’s review what we’ve covered before moving on to even bigger things.