Even though most of us work on projects with source code that is not publicly available, we can all benefit from following open source best practices, many of which still apply in closed-source project development. Pretending all of our code is going to be open source results in better configuration and secret management, better documentation, better interfaces, and more maintainable codebases overall.
In this chapter, we’ll explore open source principles and look at ways to adapt a methodology and set of robustness principles known as The Twelve-Factor App (generally devised for backend development) to modern JavaScript application development, frontend and backend alike.1
When it comes to configuration secrets in closed-source projects, like API keys or HTTPS session decryption keys, it is common for them to be hardcoded in place. In open source projects, these are typically instead obtained through environment variables or encrypted configuration files that aren’t committed to version-control systems alongside our codebase.
In open source projects, this allows the developer to share the vast majority of their application without compromising the security of their production systems. While this might not be an immediate concern in closed-source environments, we need to consider that once a secret is committed to version control, it’s etched into our version history unless we force a rewrite of that history, scrubbing the secrets from existence. Even then, it cannot be guaranteed that a malicious actor hasn’t gained access to these secrets at some point before they were scrubbed from history. Therefore, a better solution to this problem is rotating the secrets that might be compromised, revoking access through the old secrets and starting to use new, uncompromised secrets.
Although this approach is effective, it can be time-consuming when we have several secrets under our belt. When our application is large enough, leaked secrets pose significant risk even when exposed for a short period of time. As such, it’s best to approach secrets with careful consideration by default, and avoid headaches later in the lifetime of a project.
The absolute least we could be doing is giving every secret a unique name and placing them in a JSON file. Any sensitive information or configurable values may qualify as a secret, and this might range from private signing keys used to sign certificates to port numbers or database connection strings:
{"PORT":3000,"MONGO_URI":"mongodb://localhost/mjavascript","SESSION_SECRET":"ditch-foot-husband-conqueror"}
Instead of hardcoding these variables wherever they’re used, or even placing them in a constant at the beginning of the module, we centralize all sensitive information in a single file that can then be excluded from version control. Besides helping us share the secrets across modules, making updates easier, this approach encourages us to isolate information that we previously wouldn’t have considered sensitive, like the work factor used for salting passwords.
Another benefit of going down this road is that, because we have all environment configuration in a central store, we can point our application to a different secret store depending on whether we’re provisioning the application for production, staging, or one of the local development environments used by our developers.
Because we’re purposely excluding the secrets from source version control, we can take many approaches when sharing them, such as using environment variables, storing them in JSON files kept in an Amazon S3 bucket, or using an encrypted repository dedicated to our application secrets.
Using what’s commonly referred to as dot env files is an effective way of securely managing secrets in Node.js applications, and a module called nconf can aid us in setting these up. These files typically contain two types of data: secrets that mustn’t be shared outside execution environments, and configuration values that should be editable and that we don’t want to hardcode.
One concrete and effective way of accomplishing this in real-world environments is using several dot env files, each with a clearly defined purpose. In order of precedence:
.env.defaults.json can be used to define default values that aren’t necessarily overwritten across environments, such as the application listening port, the NODE_ENV variable, and configurable options you don’t want to hardcode into your application code. These default settings should be safe to check into source control.
.env.production.json, .env.staging.json, and others can be used for environment-specific settings, such as the various production connection strings for databases, cookie encoding secrets, API keys, and so on.
.env.json could be your local, machine-specific settings, useful for secrets or configuration changes that shouldn’t be shared with other team members.
Furthermore, you could also accept simple modifications to environment settings through environment variables, such as when executing PORT=3000 node app, which is convenient during development.
We can use the nconf npm package to handle reading and merging all of these sources of application settings with ease.
The following piece of code shows how you could configure nconf to do what we’ve just described: we import the nconf package, and declare configuration sources from highest priority to lowest priority, while nconf will do the merging (higher-priority settings will always take precedence). We then set the actual NODE_ENV environment variable, because libraries rely on this property to decide whether to instrument or optimize their output:
// env
import nconf from 'nconf'

nconf.env()
nconf.file('environment', `.env.${nodeEnv()}.json`)
nconf.file('machine', '.env.json')
nconf.file('defaults', '.env.defaults.json')

process.env.NODE_ENV = nodeEnv() // consistency

function nodeEnv() {
  return accessor('NODE_ENV')
}

function accessor(key) {
  return nconf.get(key)
}

export default accessor
The module also exposes an interface through which we can consume these application settings by making a function call such as env('PORT'). Whenever we need to access one of the configuration settings, we can import env.js and ask for the computed value of the relevant setting, and nconf takes care of the bulk of figuring out which settings take precedence over what, and what the value should be for the current environment:
import env from './env'

const port = env('PORT')
Assuming we have an .env.defaults.json that looks like the following, we could pass in the NODE_ENV flag when starting our staging, test, or production application and get the proper environment settings back, helping us simplify the process of loading up an environment:
{"NODE_ENV":"development"}
We usually find ourselves needing to replicate this sort of logic in the client side. Naturally, we can’t share server-side secrets in the client side, as that’d leak our secrets to anyone snooping through our JavaScript files in the browser. Still, we might want to be able to access a few environment settings such as the NODE_ENV, our application’s domain or port, Google Analytics tracking ID, and similarly safe-to-advertise configuration details.
When it comes to the browser, we could use the exact same files and environment variables, but include a dedicated browser-specific object field, like so:
{"NODE_ENV":"development","BROWSER_ENV":{"MIXPANEL_API_KEY":"some-api-key","GOOGLE_MAPS_API_KEY":"another-api-key"}}
Then, we could write a tiny script like the following to print all of those settings:
// print-browser-env
import env from './env'

const browserEnv = env('BROWSER_ENV')
const prettyJson = JSON.stringify(browserEnv, null, 2)

console.log(prettyJson)
Naturally, we don’t want to mix server-side settings with browser settings. Browser settings are usually accessible to anyone with a user agent, the ability to visit our website, and basic programming skills, meaning we would do well not to bundle highly sensitive secrets with our client-side applications. To resolve the issue, we can have a build step that prints the settings for the appropriate environment to an .env.browser.json file, and then use only that file on the client-side.
We could incorporate this encapsulation into our build process, adding the following command-line call:
node print-browser-env > browser/.env.browser.json
Note that in order for this pattern to work properly, we need to know the environment we’re building for at the time that we compile the browser dot env file. Passing in a different NODE_ENV environment variable would produce different results, depending on our target environment.
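For example, a production build could run the same step with the target environment set explicitly:

NODE_ENV=production node print-browser-env > browser/.env.browser.json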
By compiling client-side configuration settings in this way, we avoid leaking server-side configuration secrets onto the client-side.
Furthermore, we should replicate the env file from the server side to the client side, so that application settings are consumed in much the same way on both sides of the wire:
// browser/env
import env from './env.browser.json'

export default function accessor(key) {
  if (typeof key !== 'string') {
    return env
  }
  return key in env ? env[key] : null
}
There are many other ways of storing our application settings, each with its own associated pros and cons. The approach we just discussed, though, is relatively easy to implement and solid enough to get started. As an upgrade, you might want to look into using AWS Secrets Manager. That way, you’d have a single secret to take care of in team members’ environments, instead of every single secret.
A secret service also takes care of encryption, secure storage, and secret rotation (useful in the case of a data breach), among other advanced features.
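As a rough sketch of how we might consume such a service, here's what reading secrets could look like with the AWS SDK v3 Secrets Manager client. The secret name, region, and the shape of the stored secret are illustrative assumptions, not prescriptions:

// secrets.js: a sketch; assumes the secret value is stored as a JSON string
import { SecretsManagerClient, GetSecretValueCommand } from '@aws-sdk/client-secrets-manager'

const client = new SecretsManagerClient({ region: 'us-east-1' })

export default async function getSecrets(secretId = 'my-app/production') {
  const command = new GetSecretValueCommand({ SecretId: secretId })
  const { SecretString } = await client.send(command)
  return JSON.parse(SecretString)
}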
The reason that we sometimes feel tempted to check our dependencies into source control is so we get the exact same versions across the dependency tree, every time, in every environment.
Including dependency trees in our repositories is not practical, however, given these are typically in the hundreds of megabytes and frequently include compiled assets that are built based on the target environment and operating system.2 The build process itself is environment-dependent, and thus not suitable for a presumably platform-agnostic code repository.
During development, we want to make sure we get nonbreaking upgrades to our dependencies, which can help us resolve upstream bugs, tighten our grip around security vulnerabilities, and leverage new features or improvements. For deployments, however, we want reproducible builds, where installing our dependencies yields the same results every time.
The solution is to include a dependency manifest, indicating the exact versions of the libraries in our dependency tree that we want to be installing. This can be accomplished with npm (starting with version 5) and its package-lock.json manifest, as well as through Facebook’s Yarn package manager and its yarn.lock manifest, either of which we should be publishing to our versioned repository.
Using these manifests across environments ensures that we get reproducible installs of our dependencies. Everyone working with the codebase, as well as every hosted environment, deals with the same package versions, both at the top level (direct dependencies) and at any nesting depth (dependencies of dependencies of dependencies).
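In continuous integration and hosted environments, for instance, we can ask npm to install strictly from the lockfile, failing if the lockfile and package.json have drifted apart:

npm ci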
Every dependency in our application should be explicitly declared in our manifest, relying on globally installed packages or global variables as little as possible—and ideally, not at all. Implicit dependencies involve additional steps across environments; developers and deployment flow alike must take action to ensure that these extra dependencies are installed, beyond what a simple npm install step could achieve. Here’s an example of how a package-lock.json file might look:
{"name":"A","version":"0.1.0",//metadata..."dependencies":{"B":{"version":"0.0.1","resolved":"https://registry.npmjs.org/B/-/B-0.0.1.tgz","integrity":"sha512-DeAdb33F+""dependencies":{"C":{"version":"git://github.com/org/C.git#5c380ae3"}}}}}
Using the information in a package lock file, which contains details about every package we depend upon and all of their dependencies as well, package managers can take steps to install the same bits every time, preserving our ability to quickly iterate and install package updates, while keeping our code safe.
Always installing identical versions of our dependencies—and identical versions of our dependencies’ dependencies—brings us one step closer to having development environments that closely mirror what we do in production. This increases the likelihood that we can swiftly reproduce bugs that occurred in production in our local environments, while decreasing the odds that something that worked during development fails in staging.
On a similar note to that of the preceding section, we should treat our own components no differently than how we treat third-party libraries and modules. Granted, we can make changes to our own code a lot more quickly than we can effect change in third-party code (if that’s at all possible, in some cases). However, when we treat all components and interfaces (including our own HTTP API) as if they were foreign to us, we can focus on consuming and testing against interfaces, while ignoring the underlying implementation.
One way to improve our interfaces is to write detailed documentation about the input that an interface touchpoint expects, and how it affects the output it provides in each case. The process of writing documentation leads to uncovering limitations in the way the interface is designed, and we might decide to change it as a result. Consumers love good documentation because it means less fumbling about with the implementation (or its implementors) to understand how the interface is meant to be consumed, and whether it can accomplish what they need.
Avoiding distinctions helps us write unit tests where we mock dependencies that aren't under test, regardless of whether they were developed in-house or by a third party. When writing tests, we generally assume that third-party modules are well-tested enough that it's not our responsibility to include them in our test cases. The same thinking should apply to first-party modules that just happen to be dependencies of the module we're currently writing tests for.
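As a minimal sketch, assuming a hypothetical invoice module that receives its tax-rates dependency as a parameter, a unit test could stub out that first-party dependency exactly as it would stub a third-party one:

// invoice.test.js: the invoice module and the tax-rates stub are both hypothetical
import assert from 'assert'
import { computeTotal } from './invoice'

const fakeTaxRates = {
  getRate: () => 0.21 // stand-in for our first-party tax-rates module, not under test here
}

assert.strictEqual(computeTotal({ subtotal: 100 }, fakeTaxRates), 121)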
This same reasoning can be applied to security concerns such as input sanitization. Regardless of the kind of application we’re developing, we can’t trust user input unless it’s sanitized. Malicious actors could be angling to take over our servers or our customers’ data, or otherwise inject content onto our web pages. These users might be customers or even employees, so we shouldn’t treat them differently depending on that, when it comes to input sanitization.
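As a bare-bones illustration, here is what escaping user-provided text before embedding it in markup might look like; in practice we'd more likely lean on a vetted sanitization library:

function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;')
}

const userInput = '<img src=x onerror=alert(1)>' // hostile regardless of who sent it
const safeComment = escapeHtml(userInput)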
Putting ourselves in the shoes of the consumer is the best tool to guard against half-baked interfaces. When—as a thought exercise—you stop and think about how you’d want to consume an interface, and the different ways in which you might need to consume it, you end up with a much better interface as a result. This is not to say we want to enable consumers to be able to do just about everything, but we want to make affordances so consuming an interface becomes as straightforward as possible and doesn’t feel like a chore. If consumers are all but required to include long blocks of business logic right after they consume an interface, we need to stop ourselves and ask: would that business logic belong behind the interface rather than at its doorstep?
Build processes have multiple aspects. At the highest level, the shared logic is where we install and compile our assets so that they can be consumed by our runtime application. This can mean installing system or application dependencies, copying files over to a different directory, compiling files into a different language, or bundling them together, among a multitude of other requirements your application might have.
Having clearly defined and delineated build processes is key when it comes to successfully managing an application across development, staging, and production environments. Each of these commonplace environments, and other environments you might encounter, is used for a specific purpose and benefits from being geared toward that purpose.
For development, we focus on enhanced debugging facilities, using development versions of libraries, source maps, and verbose logging levels. We also rely on custom ways of overriding behavior, so that we can easily mimic how the production environment would look. Where possible, we also throw in a real-time debugging server that takes care of restarting our application when code changes, applying CSS changes without refreshing the page, and so on.
In staging, we want an environment that closely resembles production, so we’ll avoid most debugging features. But we might still want source maps and verbose logging to be able to trace bugs with ease. Our primary goal with staging environments generally is to weed out as many bugs as possible before the production push. Therefore, it is vital that these environments represent this middle ground between debugging affordance and production resemblance.
Production focuses more heavily on minification, optimizing images statically to reduce their byte size, and advanced techniques like route-based bundle splitting, where we serve only the modules actually used by the pages a user visits. We might rely on a tree-shaking step, where we statically analyze our module graph and remove functions that aren't being used. Advanced techniques such as critical CSS inlining, where we precompute the most frequently used CSS styles so that we can inline them in the page and defer the rest of the styles to an asynchronous load with a quicker time to interactive, can also be a boon. Security features, such as a hardened Content-Security-Policy that mitigates attack vectors like XSS or CSRF, are often more stringent in production as well.
Testing also plays a significant role when it comes to processes around an application. Testing is typically done in two stages. Locally, developers test before a build, making sure linters don’t produce any errors or that tests aren’t failing. Then, before merging code into the mainline repository, we often run tests in a continuous integration (CI) environment to ensure that we don’t merge broken code into our application. When it comes to CI, we start off by building our application, and then test against that, making sure the compiled application is in order.
For these processes to be effective, they must be consistent. Intermittent test failures feel worse than not having tests for the particular part of our application we’re having trouble testing, because these failures affect every single test job. When tests fail in this way, we can no longer feel confident that a passing build means everything is in order, and this translates directly into decreased morale and increased frustration across the team as well. When an intermittent test failure is identified, the best course of action is to eliminate the intermittence as soon as possible, either by fixing the source of the intermittence, or by removing the test entirely. If the test is removed, make sure to file a ticket so that a well-functioning test is added later. Intermittence in test failures can be a symptom of bad design, and in our quest to fix these failures, we might resolve architecture issues along the way.
As we’ll extensively discuss in the fourth book in the Modular JavaScript series, numerous services can aid with the CI process. Travis offers a quick way to get started integration testing your applications by connecting to your project’s Git repository and running a command of your choosing; an exit code of 0 means the CI job passes, and a different exit code means the CI job failed. Codecov can help out on the code coverage side, ensuring that most code paths in our application logic are covered by test cases. Solutions like WebPagetest, PageSpeed, and Lighthouse can be integrated into the CI process we run on a platform like Travis to ensure that changes to our web applications don’t have a negative impact on performance. Running these hooks on every commit and even in pull request branches can help keep bugs and regressions out of the mainline of your applications, and thus out of staging and production environments.
Note that until this point, we have focused on how we build and test our assets, but not on how we deploy them. These two processes, build and deployment, are closely related but shouldn’t be intertwined. A clearly isolated build process that ends with a packaged application we can easily deploy, and a deployment process that takes care of the specifics regardless of whether you’re deploying to your own local environment or to a hosted staging or production environment, means that, for the most part, we won’t need to worry about environments during our build processes nor at runtime.
We’ve already explored how state, if left unchecked, can lead us straight to the heat death of our applications. Keeping state to a minimum translates directly into applications that are easier to debug. The less global state there is, the less unpredictable the current conditions of an application are at any one point in time, and the fewer surprises we’ll run into while debugging.
One particularly insidious form of state is caching. A cache is a great way to increase performance in an application by avoiding expensive lookups most of the time. When state management tools are used as a caching mechanism, though, we might fall into a trap: different pieces of derived application state end up being computed at different points in time, and different parts of the application then render data from different moments.
Derived state should seldom be treated as state that’s separate from the data it was derived from. When it’s not separate, we might run into situations where the original data is updated, but the derived state is not, so it becomes stale and inaccurate. When, instead, we always compute derived state from the original data, we reduce the likelihood that this derived state will become stale.
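A tiny illustration of the difference:

// Storing derived state alongside the source data means it can go stale:
const user = {
  firstName: 'Ada',
  lastName: 'Lovelace',
  fullName: 'Ada Lovelace' // nothing forces this to change when lastName does
}

// Computing it from the source data whenever it's needed can't go stale:
const getFullName = ({ firstName, lastName }) => `${firstName} ${lastName}`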
State is almost ubiquitous, and practically synonymous with applications, because applications without state aren't particularly useful. The question then arises: how can we better manage state? If we look at applications such as your typical web server, its main job is to receive requests, process them, and send back the appropriate responses. Consequently, web servers associate state with each request, keeping it near the request handlers, the most relevant consumers of request state. There is as little global state as possible when it comes to web servers, with the vast majority of state contained in each request/response cycle instead. This saves web servers a world of trouble when it comes to horizontal scaling, because nodes don't need to communicate with one another to maintain consistency across the cluster. Ultimately, stateless servers defer to a data persistence layer, which is responsible for the application state, acting as the source of truth from which all other state is derived.
When a request results in a long-running job (such as sending out an email campaign, modifying records in a persistent database, etc.), it’s best to hand that off into a separate service that, again, mostly keeps state regarding that job. Separating services into specific needs means we can keep web servers lean and stateless, and improve our flows by adding more servers, persistent queues (so that we don’t drop jobs), and so on. When every task is tethered together through tight coupling and state, it could become challenging to maintain, upgrade, and scale a service over time.
Derived state in the form of caches is common in the world of web servers. In the case of a personal website with books available for download, for instance, we might be tempted to store the PDF representation of each book in a file, so that we don't have to recompile the PDF whenever the corresponding /book route is visited. When the book is updated, we'd recompute the PDF file and flush it to disk again, so that this derived state remains fresh. When our web server ceases to be single-node and we start using a cluster of several nodes, however, it might not be so trivial to broadcast the news about books being updated across nodes, and thus it'd be best to leave derived state to the persistence layer. Otherwise, a web server node might receive the request to update a book, perform the update, and recompute the PDF file on that node, but we'd be failing to invalidate the PDF files being served by other nodes, which would continue serving stale copies of the PDF representation.
A better alternative in such a case is to store derived state in a data store like Redis or Amazon S3, either of which we could update from any web server, and then serve precomputed results from Redis directly. In this way, we’d still be able to access the latency benefits of using precomputed derived state, but at the same time we’d stay resilient when these requests or updates can happen on multiple web server nodes.
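A rough sketch of this approach, assuming a hypothetical compile-book-pdf module and using the redis client package:

// book-pdf.js: illustrative only
import { createClient } from 'redis'
import { compileBookPdf } from './compile-book-pdf'

const redis = createClient()
await redis.connect()

export async function getBookPdf(book) {
  const key = `book-pdf:${book.id}`
  const cached = await redis.get(key)
  if (cached) {
    return Buffer.from(cached, 'base64')
  }
  const pdf = await compileBookPdf(book) // the expensive computation
  await redis.set(key, pdf.toString('base64'))
  return pdf
}

export async function invalidateBookPdf(book) {
  await redis.del(`book-pdf:${book.id}`) // any node can invalidate; every node sees it
}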
Another improvement that could aid in complexity management is to structure applications so that all business logic is contained in a single directory structure (for example, lib/ or services/) acting as a physical layer where we keep all the logic together. In doing so, we’ll open ourselves up for more opportunities to reuse logic, because team members will know to go looking here before reimplementing slightly different functions that perform more or less similar computations for derived state.
Colocation of view components with their immediate counterparts is appealing—that is, keeping each view's main component, child components, controllers, and logic in the same structure. However, doing so in a way that tightly couples business logic to specific components can be detrimental to having a clear understanding of the way an application works as a whole.
Large client-side applications often suffer from not having a single place where logic should be deposited, so that logic ends up spread among components, view controllers, and the API, rather than being mostly handled on the server side and then kept in a single physical location in the client-side code structure. This centralization can be key for newcomers to the team seeking to better understand the way the application flows, because otherwise they'd have to go fishing around our view components and controllers in order to ascertain what's going on. This is a daunting proposition when first dipping our toes in the uncharted shores of a new codebase.
The same case could be made about any other function of our code, as having clearly defined layers in an application can make it straightforward to understand the way an algorithm flows from layer to layer. But we’ll find the biggest rewards to reap when it comes to isolating business logic from the rest of the application code.
Using a state management solution like Redux or MobX, where we isolate all state from the rest of the application, is another option. Regardless of our approach, the most important aspect remains that we stick to clearly isolating the view-rendering aspects in our applications from the business logic aspects as much as possible.
We’ve established the importance of having clearly defined build and deployment processes. In a similar vein, we have the different application environments including development, production, staging, feature branches, SaaS versus on-premises environments, and so on. Environments are divergent by definition. We are going to end up with different features in different environments, whether they are debugging facilities, product features, or performance optimizations.
Whenever we incorporate environment-specific feature flags or logic, we need to pay attention to the discrepancies introduced by these changes. Could the environment-dependent logic be tightened so that the bare-minimum divergence is introduced? Should we isolate the newly introduced logic fork into a single module that takes care of as many aspects of the divergence as possible? Could the flags that are enabled as we’re developing features for a specific environment result in inadvertently introducing bugs into other environments where a different set of flags is enabled?
As with many things in programming, creating these divergences is relatively easy, whereas deleting them can prove far more challenging. This difficulty arises from the unknown situations that we might not typically run into during development or unit testing, but that are still valid situations in our production environments.
As an example, consider the following scenario. We have a production application using Content-Security-Policy rules to mitigate malicious attack vectors. For the development environment, we also add a few extra rules like 'unsafe-inline', which lets our developer tools manipulate the page so that code and style changes are reloaded without requiring a full page refresh, speeding up our precious development productivity and saving time. Our application already has a component that users can leverage to edit programming source code, but we now have a requirement to change that component.
We swap the current component with a new one from our company’s own component framework, so we know it’s battle-tested and works well in other production applications developed in house. We test things in our local development environment, and everything works as expected. Tests pass. Other developers review our code, test locally in their own environments as well, and find nothing wrong with it. We merge our code, and a couple of weeks later deploy to production. Before long, we start getting support requests about the code-editing feature being broken, and need to roll back the changeset that introduced the new code editor.
What went wrong? We didn't notice that the new component doesn't work unless the style-src 'unsafe-inline' directive is present. Given that we allow inline styles in development, catering to our convenient developer tools, this wasn't a problem during development or local testing performed by our teammates. However, when we deploy to production, which follows a stricter set of CSP rules, the 'unsafe-inline' rule is not served, and the component breaks down.
The problem here is that we had a divergence in parity that prevented us from identifying a limitation in the new component: it uses inline styles to position the text cursor. This is at odds with our strict CSP rules, but it can’t be properly identified because our development environment is more lax about CSP than production is.
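One way to at least contain this kind of divergence is to keep it in a single, clearly labeled module, so the difference between environments is explicit rather than scattered throughout the codebase. A sketch, not the setup described above:

// csp-style-src.js
const development = process.env.NODE_ENV === 'development'

export default function styleSrcDirectives() {
  const directives = [`'self'`]
  if (development) {
    // only our live-reloading developer tools need this; production stays strict
    directives.push(`'unsafe-inline'`)
  }
  return directives
}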
As much as possible, we should strive to keep these kinds of divergences to a minimum. If we don't, bugs might find their way to production, and a customer might end up reporting the bug to us. Merely being aware of discrepancies like this is not enough. It's neither practical nor effective to keep these logic gates in your head so that, whenever you're implementing a change, you mentally go through the motions of how the change would differ if your code were running in production instead.
Proper integration testing might catch many of these kinds of mistakes, but that won’t always be the case.
Eager abstraction can result in catastrophe. Conversely, failure to identify and abstract away sources of major complexity can be incredibly costly as well. When we consume complex interfaces directly, but don’t necessarily take advantage of all the advanced configuration options that an interface has to offer, we are missing out on a powerful abstraction we could be using. The alternative is to create a middle layer in front of the complex interface, and have consumers go through that layer instead.
This intermediate layer would be in charge of calling the complex abstraction itself, but offers a simpler interface with fewer configuration options and improved ease of use for the use cases that matter to us. Often, complicated or legacy interfaces demand that we offer up data that could be derived from other parameters being passed into the function call. For example, we might be asked how many adults, how many children, and how many people in total are looking to make a flight booking, even though the latter can be derived from the former. Other examples include expecting fields to be in a particular string format (such as a date string that could be derived from a native JavaScript date instead), using nomenclature that’s relevant to the implementation but not so much to the consumer, or a lack of sensible defaults (required fields that are rarely changed into anything other than a recommended value that isn’t set by default).
When we’re building out a web application that consumes a highly parameterized API in order to search for the cheapest hassle-free flights, for example, and we anticipate consuming this API in a few different ways, it would cost us dearly not to abstract away most of the parameters demanded by the API that do not fit our use case. This middle layer can take care of establishing sensible default values and of converting reasonable data structures such as native JavaScript dates or case-insensitive airport codes into the formats demanded by the API we’re using.
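As a sketch, assuming a made-up provider module and parameter names, such a middle layer might look like this:

// flight-search.js: searchProvider and its parameter names are hypothetical
import { searchProvider } from './providers/acme-flights'

export function searchFlights({ from, to, departure, adults = 1, children = 0 }) {
  return searchProvider({
    origin: from.toUpperCase(), // accept case-insensitive airport codes
    destination: to.toUpperCase(),
    departureDate: departure.toISOString().slice(0, 10), // native Date in, formatted string out
    adults,
    children,
    passengers: adults + children, // derivable, but the provider insists on receiving it
    currency: 'USD' // a sensible default consumers rarely change
  })
}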
In addition, our abstraction could also take care of any follow-up API calls that need to be made in order to hydrate data. For example, a flight search API might return an airline code for each flight, such as AA for American Airlines, but a UI consumer would also need to hydrate AA into a display name for the airline, accompanied by a logo to embed in the user interface, and perhaps even a quick link to its check-in page.
When we call into the backing API every time, with the full query, appeasing its quirks and shortcomings instead of taking the abstracted approach, it not only becomes difficult to maintain an application that consumes those endpoints in more than one place, but it also becomes a challenge down the road, when we want to include results from a different provider (which will, of course, have its own set of quirks and shortcomings). At that point, we would have two separate sets of API calls, one for each provider, each massaging the data to accommodate provider-specific quirks in a module that shouldn't be concerned with such matters, only with the results themselves.
A middle layer could leverage a normalized query from the consumer, such as the one where we took a native date and then formatted it when calling the flight search API, and then adapt that query into either of the backing services that actually produce flight search results. This way, the consumer has to deal with only a single, simplified interface, while having the ability to seamlessly interact with two similar backing services that offer different interfaces.
The same case could, and should, be made for the data structures returned from either of these backing services. By normalizing the data into a structure that contains only information that’s relevant to our consumers, and augmenting it with the derived information they need (such as the airline name and details as explained earlier), consumers can focus on their own concerns while leveraging a data structure that’s close to their needs. At the same time, this normalization empowers our abstraction to merge results from both backing services and treat them as if they came from a single source: the abstraction itself, leaving the backing services as mere implementation details.
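Continuing the sketch, the same layer could normalize and merge the responses, hydrating airline codes into the details our views need; the provider result shapes and the airlines lookup are, again, made up:

import { airlineDetails } from './airlines' // hypothetical lookup: 'AA' -> name, logo, check-in link

function normalize(result, provider) {
  return {
    provider,
    price: result.price,
    departure: new Date(result.departure),
    airline: airlineDetails(result.airlineCode)
  }
}

export function mergeResults(acmeResults, otherResults) {
  return [
    ...acmeResults.map(result => normalize(result, 'acme')),
    ...otherResults.map(result => normalize(result, 'other'))
  ].sort((a, b) => a.price - b.price) // consumers see one list from a single source: our abstraction
}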
When we rely directly on the original responses, we may find ourselves writing view components that are more verbose than they need to be. These components contain logic to pull together the different bits of metadata needed to render our views, map data from the API representation into what we actually want to display, and then map user input back into what the API expects. With a layer in between, we can keep this mapping logic contained in a single place, and leave the rest of our application unencumbered by it.
Mastering modular JavaScript isn’t strictly about following a well-defined set of rules, but rather about being able to put yourself in the shoes of your consumers by planning for feature development that may be coming down the pipe (but not too extensively) and treating documentation with the same respect and care that you should be putting into interface design. The internals, as the implementation details that they are, can always be improved later. Of course, we’ll want to patch—or at least abstract away—those sources of complexity, but it is in their shell that beautiful modules truly shine. Above all, trust your own judgment and don’t let the latest development fads clog your decision making!
1 You can find the original Twelve-Factor App methodology and its documentation online.
2 When we run npm install, npm also executes a rebuild step after the installation ends. The rebuild step recompiles native binaries, building different assets depending on the execution environment and the local machine's operating system.