In this first chapter of Part II of the book, we started working with data from external sources. Acquiring, transforming, storing, and querying data are crucial skills in modern software development with Node.js.
Using Project Gutenberg’s catalog data, you iteratively developed the code and tests to parse and make sense of RDF (XML) files. Along the way you used Mocha to run the tests, and harnessed the expressive power of Chai, an assertion library that facilitates behavior-driven development (BDD).
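For example, a minimal Mocha test with a Chai expect-style assertion looks something like this sketch (the module path, sample file name, and title check are illustrative assumptions, not the chapter’s exact test):

    'use strict';
    const fs = require('fs');
    const expect = require('chai').expect;
    const parseRDF = require('../lib/parse-rdf.js'); // module path is an assumption

    describe('parseRDF', () => {
      it('should produce a book object with a string title', () => {
        const rdf = fs.readFileSync(`${__dirname}/pg132.rdf`, 'utf8'); // sample RDF file
        const book = parseRDF(rdf);
        expect(book).to.be.an('object');
        expect(book).to.have.property('title').that.is.a('string');
      });
    });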
For the nuts and bolts of parsing and querying the XML documents, you used Cheerio, a Node.js module that provides a jQuery-like API. Although we didn’t need many CSS features, we did use some fairly sophisticated selectors to pick out specific elements, then walked the DOM with Cheerio’s traversal methods to extract data.
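As a quick refresher, loading a document in XML mode and grabbing a namespaced element looks like this minimal sketch (the one-element document stands in for a full RDF file):

    'use strict';
    const cheerio = require('cheerio');

    const xml = '<rdf:value>U</rdf:value>'; // stand-in for a full RDF document
    const $ = cheerio.load(xml, { xmlMode: true }); // xmlMode keeps tag case and self-closing tags

    // Colons in namespaced tag names have to be escaped in selectors.
    console.log($('rdf\\:value').text()); // prints "U"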
Once this robust parsing library was complete, we used it in combination with the node-dir module to create rdf-to-bulk.js. This program walks down a directory tree looking for RDF files, parses each one, and collects the resulting output objects. You’ll use this intermediate, bulk data file in the following chapter to populate an Elasticsearch index.
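Its core is a single call to node-dir’s readFiles(), roughly like this sketch (the match pattern and command-line argument handling are assumptions):

    'use strict';
    const dir = require('node-dir');
    const parseRDF = require('./parse-rdf.js'); // the parsing module from this chapter

    const dirname = process.argv[2]; // root directory to walk

    dir.readFiles(dirname, { match: /\.rdf$/ }, // only visit RDF files
      (err, content, next) => { // invoked once per matching file
        if (err) throw err;
        const book = parseRDF(content);
        console.log(JSON.stringify(book)); // one JSON object per line
        next(); // move on to the next file
      },
      err => { // invoked when the walk is finished
        if (err) throw err;
      });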
Finally, you learned how to launch a Node.js program in debug mode and attach Chrome DevTools for interactive, step-through debugging. While there are certainly some kinks that need to be worked out, it sure beats debugging by gratuitous console.log!
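To recreate that setup, launch the program with the inspector enabled and paused on the first line, then browse to chrome://inspect in Chrome and click the inspect link for your process (the directory argument here is a placeholder):

    $ node --inspect-brk rdf-to-bulk.js path/to/rdf/files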
Whereas this chapter was all about manipulating input data and transforming it into a usable form, the next chapter is about storing this data and querying it from a database. In particular, we’re going to use Elasticsearch, a full-text indexing, JSON-based document datastore. With its RESTful, HTTP-based API, working with Elasticsearch will let us use Node.js in new and interesting ways.
If you’d like more practice with the techniques from this chapter, the following tasks ask you to pull even more data out of the RDF files we’ve been working with. Good luck!
When extracting fields from the Project Gutenberg RDF (XML) files, in Traversing the Document, we specifically selected the Library of Congress Subject Headings (LCSH) and stored them in an array called subjects. At that time, we carefully avoided the Library of Congress Classification (LCC) single-letter codes. Recall that the LCC portion of an RDF file looks like this:
    <dcterms:subject>
      <rdf:Description rdf:nodeID="Nfb797557d91f44c9b0cb80a0d207eaa5">
        <dcam:memberOf rdf:resource="http://purl.org/dc/terms/LCC"/>
        <rdf:value>U</rdf:value>
      </rdf:Description>
    </dcterms:subject>
Using your BDD infrastructure built on Mocha and Chai, implement the following:
Add a new assertion to parse-rdf-test.js that checks for book.lcc. It should be a string at least one character long, starting with an uppercase letter of the English alphabet other than I, O, W, X, or Y.
Run the tests to see that they fail.
Add code to your exported module function in parse-rdf.js to make the tests pass.
Hint: When working on the code, use Cheerio to find the <dcam:memberOf> element with an rdf:resource attribute that ends with /LCC. Then traverse up to its parent <rdf:Description>, and read the text of the first descendant <rdf:value> tag. You may want to refer to Chai’s documentation when crafting your new assertions.[45]
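To give you a feel for the shape of a solution (one possibility among several; the selector escaping and the regular expression are assumptions you should verify against your own files), the new test might assert something like this:

    // In parse-rdf-test.js -- possible assertions for the LCC code.
    expect(book).to.have.property('lcc').that.is.a('string');
    expect(book.lcc).to.have.lengthOf.at.least(1);
    expect(book.lcc).to.match(/^[A-HJ-NP-VZ]/); // uppercase, excluding I, O, W, X, and Y

And the extraction in parse-rdf.js, using the $ and book objects already in play there, might look something like this:

    // In parse-rdf.js -- find the LCC node, then walk to its value.
    book.lcc = $('dcam\\:memberOf[rdf\\:resource$="/LCC"]') // memberOf pointing at /LCC
      .parent()            // up to the enclosing <rdf:Description>
      .find('rdf\\:value') // down to its <rdf:value> descendants
      .first()
      .text();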
Most of the metadata in the Project Gutenberg RDF files describes where each book can be downloaded in various formats. For example, here’s the part that shows where to download the plain text of The Art of War:
    <dcterms:hasFormat>
      <pgterms:file rdf:about="http://www.gutenberg.org/ebooks/132.txt.utf-8">
        <dcterms:isFormatOf rdf:resource="ebooks/132"/>
        <dcterms:modified rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">
          2016-09-01T01:20:00.437616</dcterms:modified>
        <dcterms:format>
          <rdf:Description rdf:nodeID="N2293d0caa918475e922a48041b06a3bd">
            <dcam:memberOf rdf:resource="http://purl.org/dc/terms/IMT"/>
            <rdf:value rdf:datatype="http://purl.org/dc/terms/IMT">text/plain</rdf:value>
          </rdf:Description>
        </dcterms:format>
        <dcterms:extent rdf:datatype="http://www.w3.org/2001/XMLSchema#integer">
          343691</dcterms:extent>
      </pgterms:file>
    </dcterms:hasFormat>
Suppose we wanted to include a list of download sources in each JSON object we create from an RDF file. To get an idea of what data you might want, take a look at the Project Gutenberg page for The Art of War.[46]
Consider these questions:
Which fields in the raw data would we want to capture, and which could we discard?
What structure would make the most sense for this data?
What information would you need to be able to produce a table that looked like the one on the Project Gutenberg site?
Once you have an idea of what data you’ll want to extract, try creating a JSON object by hand for this one download source. When you’re happy with your data representation, use your existing continuous testing infrastructure and add a test that checks for this new information.
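For instance, one possible hand-built object for the plain-text source shown earlier (the field names are suggestions, not the only reasonable choice; the values come straight from the XML):

    const format = {
      url: 'http://www.gutenberg.org/ebooks/132.txt.utf-8',
      contentType: 'text/plain',              // from the nested <rdf:value>
      modified: '2016-09-01T01:20:00.437616', // from <dcterms:modified>
      size: 343691,                           // bytes, from <dcterms:extent>
    };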
Finally, extend the book object produced in parse-rdf.js to include this data, making the test pass. You can do it!
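If you get stuck, here’s a sketch of one approach, assuming the field names above (as before, $ and book are the objects already defined in parse-rdf.js):

    // In parse-rdf.js -- collect one entry per <pgterms:file> element.
    book.formats = $('pgterms\\:file').toArray().map(node => {
      const $file = $(node);
      return {
        url: $file.attr('rdf:about'),
        contentType: $file.find('rdf\\:value').text().trim(),
        modified: $file.find('dcterms\\:modified').text().trim(),
        size: Number($file.find('dcterms\\:extent').text()),
      };
    });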