Adding Search APIs

With the basic structure of the web services project in place, it’s time to start adding some APIs. First we’ll add APIs for searching the books index, and then we’ll add APIs for creating and manipulating book bundles.

To begin, open a terminal to your b4 project directory and add a new subdirectory called lib. This will house the individual modules that contribute API code for the service.
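
For example, from the b4 project directory:

 $ mkdir lib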

Next, open a text editor and enter the following skeleton code for the search APIs.

 /**
  * Provides API endpoints for searching the books index.
  */
 'use strict';
 const request = require('request');
 module.exports = (app, es) => {
   const url = `http://${es.host}:${es.port}/${es.books_index}/book/_search`;
 };

Save this file as lib/search.js. At the top, we pull in the Request module, which you may recall from Chapter 6, Commanding Databases, where it was central to the development of the esclu program.

Next, we assign a function to module.exports that takes two parameters. The app parameter will be the Express application object, and es will contain the configuration parameters relevant to Elasticsearch, as provided through nconf.

Inside the function, all we’re doing currently is establishing the URL that will be key to performing searches against the books index. Shortly we’ll be adding additional code to this file to implement the APIs.
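
To make that concrete, here is an illustrative sketch of what the es object might hold and the URL that results. The real values come from your nconf configuration file, so treat these as assumptions:

 // Illustrative values only -- the real ones come from nconf.get('es').
 const es = {host: 'localhost', port: 9200, books_index: 'books'};
 // With those values, the url constant evaluates to:
 // 'http://localhost:9200/books/book/_search'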

To use the Request module with this project, go ahead and install it.

 $ npm install --save --save-exact request@2.79.0
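
Because of the --save-exact flag, npm records the exact version rather than a semver range, so the dependencies section of your package.json should now include an entry along these lines (alongside whatever you installed in earlier sections):

 "dependencies": {
   "request": "2.79.0"
 }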

Finally, let’s wire this new module up in server.js. Open that file now, and add the following in the space between the app.get() line and the app.listen() line:

 require('./lib/search.js')(app, nconf.get('es'));

This code brings in the lib/search.js module, then immediately invokes the module function by passing in the Express application object and the Elasticsearch configuration. When you call nconf.get('es'), nconf returns an object that includes all of the settings from es on down.
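
If it helps to see where the line sits, here is a rough, self-contained sketch of a server.js with the new module wired in. The surrounding configuration and route are stand-ins for whatever your file already contains from the earlier sections, not the book's exact code:

 'use strict';
 const express = require('express');
 const nconf = require('nconf');

 // Stand-in configuration; the real project loads these values through nconf
 // from a config file and the environment.
 nconf.defaults({
   port: 60702,
   es: {host: 'localhost', port: 9200, books_index: 'books'},
 });

 const app = express();

 // Placeholder for the route added in the previous section.
 app.get('/api/version', (req, res) => res.status(200).json('0.1.0'));

 // The new line: load lib/search.js and invoke it with the Express app and
 // the Elasticsearch configuration.
 require('./lib/search.js')(app, nconf.get('es'));

 app.listen(nconf.get('port'), () => console.log('Ready.'));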

Once you save the server.js file, nodemon should automatically restart the service. If it fails to start back up for any reason, you should see the relevant exception printed to the console.

However, since lib/search.js currently doesn’t do anything with the Express app, there’s nothing to test with curl. We’ll fix that next.

Using Request with Express

Open lib/search.js using your text editor. Inside the exported module function, after setting up the Elasticsearch url constant, add the following code:

 /**
  * Search for books by matching a particular field value.
  * Example: /api/search/books/authors/Twain
  */
 app.get('/api/search/books/:field/:query', (req, res) => {

 });

This shell establishes an endpoint for the field-search API. The code inside will proceed in two parts.

In the first part we’ll construct a request body—an object that will be serialized as JSON and sent to Elasticsearch. In the second part, we’ll fire off the request to Elasticsearch, handle the eventual response, and forward the results to the upstream requester that hit the API.

Since we’ll be making a request to Elasticsearch, there will be two distinct request/reply pairs that this code will deal with. The first pair is the Express request and response objects called req and res, respectively. To distinguish the Elasticsearch variables from the Express pair, we’ll prefix the Elasticsearch variables with es, as in esReq and esRes.

Add the following code to construct the Elasticsearch request body, esReqBody.

 const esReqBody = {
   size: 10,
   query: {
     match: {
       [req.params.field]: req.params.query
     }
   },
 };

The Elasticsearch request body that we’re constructing conforms to Elasticsearch’s Request Body Search API.[68] It includes a size parameter that limits the number of documents that will be sent back, and a query object describing what kinds of documents we want to find.
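
For instance, when the API is asked for books whose title field matches sawyer (a query we'll try with curl shortly), the body serialized and sent to Elasticsearch would be:

 {
   "size": 10,
   "query": {
     "match": {
       "title": "sawyer"
     }
   }
 }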

Take a moment to observe how the esReqBody.query.match object is created.

 match: {
»  [req.params.field]: req.params.query
 }

When a JavaScript object literal key is surrounded with brackets, like [req.params.field] is here, this is called a computed property name. The expression inside the brackets is evaluated at runtime, and the result is used as the key. In this case, since the expression in brackets is req.params.field, the key used in the match object will be whatever the :field param of the incoming request contained.

For example, say the incoming URL is /api/search/books/authors/Twain. Then the query.match object will have a property called authors whose value is Twain.
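
Here is a tiny standalone illustration of the feature, separate from the book's code (the variable names are made up for the example):

 const field = 'authors';
 const query = 'Twain';
 const match = {[field]: query};
 console.log(match); // => { authors: 'Twain' }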

With the request body ready to go, add this code underneath to issue the request to Elasticsearch and handle the response:

 const options = {url, json: true, body: esReqBody};
 request.get(options, (err, esRes, esResBody) => {

   if (err) {
     res.status(502).json({
       error: 'bad_gateway',
       reason: err.code,
     });
     return;
   }

   if (esRes.statusCode !== 200) {
     res.status(esRes.statusCode).json(esResBody);
     return;
   }

   res.status(200).json(esResBody.hits.hits.map(({_source}) => _source));
 });

This use of request is similar to what we first explored back in Using request to Fetch JSON over HTTP. Here we pass two arguments to request: an options object and a callback to handle the response. Inside the callback function, most of the code covers potential error conditions.

In the first error-handling block, we deal with the case where the connection couldn’t be made at all. If the err object is not null, this means that the connection to Elasticsearch failed before a response could be retrieved. Typically this would be because the Elasticsearch cluster is unreachable—maybe it’s down, or the hostname has been misconfigured. It could also be that the server has run out of file descriptors, but this is less common. For whatever reason, if we couldn’t get a response from Elasticsearch, then the correct HTTP code to send back to the caller is 502 Bad Gateway.
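
For example, if Elasticsearch isn't reachable when a request arrives, the API would answer with a 502 status and a body along these lines (the reason reflects whatever error code Node.js reported, so it varies with the failure):

 {
   "error": "bad_gateway",
   "reason": "ECONNREFUSED"
 }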

In the second error-handling block, we’ve received a response from Elasticsearch, but it came with some HTTP status code other than 200 OK. This could be for any of a variety of reasons, such as a 404 Not Found if, say, the books index has not been created. Or during development, while you’re experimenting to get the right request body for Elasticsearch, you may receive a 400 Bad Request. In any of these cases, we just pass the response more or less straight through to the caller with the same status code and response body.

Finally, if there were no errors, we extract just the _source objects (the underlying documents) from the Elasticsearch response, and report these to the caller as JSON. The _source extraction code deserves a little extra attention. Here it is again:

 esResBody.hits.hits.map(({_source}) => _source)

Note that the repetition of hits.hits is not an accident. This is in fact how Elasticsearch structures query responses (recall the in-depth exploration of these from the last chapter).
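
As a quick reminder of that structure, a search response looks roughly like the following (abridged, with illustrative values; the exact metadata fields depend on your Elasticsearch version). The outer hits object carries result metadata, while its inner hits array holds the matching documents, each wrapped with its _source:

 {
   "took": 5,
   "hits": {
     "total": 10,
     "hits": [
       { "_index": "books", "_type": "book", "_id": "pg74", "_source": { "title": "...", "authors": ["..."] } },
       { "_index": "books", "_type": "book", "_id": "pg76", "_source": { "title": "...", "authors": ["..."] } }
     ]
   }
 }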

The tiny anonymous callback function passed into the map method here uses a technique called destructuring assignment. The pair of curly braces in the parameter to the anonymous function, ({_source}), indicates that we expect an object with a property named _source, and that we want to create a local variable of the same name with the same value.

You can use destructuring assignment when declaring variables, as well. The following code is identical in effect to the code we’ve been discussing.

 esResBody.hits.hits.map(hit => {
   const {_source} = hit;
   return _source;
 })

If you’ve been following along, the new search API code you’ve been filling in should look like the following:

 /**
  * Search for books by matching a particular field value.
  * Example: /api/search/books/authors/Twain
  */
 app.get('/api/search/books/:field/:query', (req, res) => {

   const esReqBody = {
     size: 10,
     query: {
       match: {
         [req.params.field]: req.params.query
       }
     },
   };

   const options = {url, json: true, body: esReqBody};
   request.get(options, (err, esRes, esResBody) => {

     if (err) {
       res.status(502).json({
         error: 'bad_gateway',
         reason: err.code,
       });
       return;
     }

     if (esRes.statusCode !== 200) {
       res.status(esRes.statusCode).json(esResBody);
       return;
     }

     res.status(200).json(esResBody.hits.hits.map(({_source}) => _source));
   });

 });

Save your search.js file if you haven’t already. Provided nodemon is still running, your server should automatically restart and you can try out the API immediately.

Now let’s use curl and jq to list some of Shakespeare’s works.

 $ curl -s localhost:60702/api/search/books/authors/Shakespeare | jq '.[].title'
 "Venus and Adonis"
 "The Second Part of King Henry the Sixth"
 "King Richard the Second"
 "The Tragedy of Romeo and Juliet"
 "A Midsummer Night's Dream"
 "Much Ado about Nothing"
 "The Tragedy of Julius Caesar"
 "As You Like It"
 "The Tragedy of Othello, Moor of Venice"
 "The Tragedy of Macbeth"

Using this API, you can search other fields, as well. For example, you could search for books with Sawyer in the title:

 $ curl -s localhost:60702/api/search/books/title/sawyer | jq '.[].title'
 "Tom Sawyer Abroad"
 "Tom Sawyer, Detective"
 "The Adventures of Tom Sawyer"
 "Tom Sawyer\nKoulupojan historia"
 "Tom Sawyer Abroad"
 "Tom Sawyer, Detective"
 "The Adventures of Tom Sawyer, Part 3."
 "De Lotgevallen van Tom Sawyer"
 "The Adventures of Tom Sawyer"
 "Les Aventures De Tom Sawyer"

If you’re getting results like these, great! It’s time to move on to the next API, which returns suggestions based on a search term.