We covered a lot of material in Part II, which provided a solid understanding of how isomorphic JavaScript applications work. Armed with this knowledge you can now evaluate and amend existing solutions, or create something entirely new to meet your specific needs. However, before you venture out into this brave new world, a bit of reflection on what we have covered will help you to be even better equipped to take the next steps into isomorphic development. We will begin this process with a quick review of what we built in Part II, why we built it, and its limitations.
Throughout Part II we progressively built an application core and a simple example that leveraged it. While this was instrumental in learning the concepts, we don’t recommend using the core in a production application; it was written solely for this book as a learning device. There are better, production-ready libraries for handling common functionality like managing session history, such as history. We intentionally avoided history and other modules in order to focus on the underlying concepts and native APIs rather than on a library’s API. In practice, though, leveraging widely adopted, well-supported open source solutions is a good idea, because these libraries will likely cover edge cases that our own code might not have accounted for.
In Part II we emphasized creating a common request/response lifecycle, which is something that you may or may not need. For instance, if you are going to buy wholesale into a technology such as React, then you might not need this level of abstraction. There are many articles on the Web, like “Exploring Isomorphic JavaScript,” that illustrate how simple it is to create an isomorphic solution using React and other open source libraries.
Just remember the trade-off—the Web and related technologies are going to change, and complete application rewrites are rarely approved by companies (and when they are approved, they frequently fail). If you want to adapt to change, then a very thin layer of structure will allow you to easily test out new technologies and migrate legacy code in phases. Of course, you need to balance abstraction with the expected life of the application, use case details, and other factors.
Rapid change in the Web is evident not only from its history, but also from the community’s current mindset that you should always use a transpiler. We now write code as if change is imminent, because it is: a new version of JavaScript is released annually. And keep in mind that this release schedule represents only a fraction of the change you will face. Patterns, libraries, and tools evolve even more rapidly. So be prepared!
Come gather ‘round people
Wherever you roam
And admit that the waters
Around you have grown
And accept it that soon
You’ll be drenched to the bone
If your time to you
Is worth savin’
Then you better start swimmin’
Or you’ll sink like a stone…

Bob Dylan, “The Times They Are a-Changin’”
While this song was intended to describe an entirely different concept, a good portion of the lyrics aptly describe the daily, weekly, monthly, and yearly struggles a web developer faces: responding to change. There is a constant wave of excitement over the latest browser APIs, newest libraries, language enhancements, and emerging development and application architecture patterns—so much so that it can feel like your worth as a developer and the quality of your applications are sinking in the midst of all the rising change.
Fortunately, even with this ever-increasing rate of change, isomorphic JavaScript applications will remain consistent (at least in terms of their lifecycle), so you have a constant in a sea of variables. This is because their design is based on the Web’s HTTP request/response lifecycle: a user requests a resource at a URL, and the server responds with a payload. The only difference is that this lifecycle runs on both the client and the server, so there are a few key points in the lifecycle that you need to be cognizant of when making architectural and implementation decisions:
There should be a router that can map a URL to a function.
The function should execute in an asynchronous fashion—i.e., there should be a callback that is executed once processing is complete.
Once execution is complete, rendering should occur in an asynchronous manner.
Any data retrieved and used during the execution and rendering phases should be part of the server response.
Any objects and data need to be recreated on the client, so that they are available on the client during runtime when a user is interacting with the application.
The event handlers should be bound, so that the application is interactive.
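Taken together, these points can be sketched as a minimal server-side flow. All names here (createRouter, handleRequest, and so on) are illustrative inventions for this sketch, not part of any library or of the core built in Part II:

```javascript
// Illustrative sketch of the shared lifecycle; all names are invented.

// 1. A router maps a URL to a function.
function createRouter(routes) {
  return function match(url) {
    return routes[url] || null;
  };
}

// 2 and 3. The matched function executes and renders via callbacks,
// so real implementations can fetch data and render asynchronously.
function handleRequest(match, url, done) {
  var handler = match(url);
  if (!handler) {
    return done(new Error('No route matches ' + url));
  }
  handler(function onData(err, data) {
    if (err) {
      return done(err);
    }
    // 4 and 5. Data used while rendering is embedded in the response
    // so the client can recreate the same objects at runtime.
    var html =
      '<h1>' + data.title + '</h1>' +
      '<script>window.__STATE__ = ' + JSON.stringify(data) + '</script>';
    done(null, { html: html, state: data });
  });
  // 6. On the client, event handlers would then be bound to the
  // rendered markup to make the application interactive.
}

// This handler invokes its callback synchronously for brevity;
// in practice it would perform an asynchronous fetch.
var match = createRouter({
  '/products': function (callback) {
    callback(null, { title: 'Products', items: ['widget', 'gadget'] });
  }
});

var captured;
handleRequest(match, '/products', function (err, response) {
  if (err) {
    throw err;
  }
  captured = response;
});
```

On the client, the same router and handler would run again, reading window.__STATE__ back instead of refetching, and then binding event handlers to the already-rendered markup.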
It is that simple. All the rest of the hype is just enhancements and implementation details. How you stitch these steps together and the libraries you use to do so are entirely up to you. For instance, you could loosely glue together open source solutions, or you could create a more formalized lifecycle, as we did in this part of the book, into which you can plug the latest JavaScript libraries.
Just remember that the more patterns and standards you add to your reusable application core, the less flexible it becomes, which makes it more susceptible to breakage over time when adapting to change. For instance, a team that I worked on decided to standardize on Backbone and RequireJS when we created an isomorphic JavaScript framework because these libraries were popular at the time. Since then, new patterns and libraries have emerged. These standards have made it difficult to adapt to change at the application level. The trick is finding the balance that offers value while still being flexible.
The degree to which you add structure to support a formalized lifecycle should depend on how strongly you need to respond to change. That need should in turn be balanced against the expected life of the application: if it is going to be replaced in a few years, then the overhead of a formalized lifecycle might not be worth it. Whether you will be building more than a single application, and therefore need standardization, should also be considered.
If you want the ability to fully respond to change at any time, then creating an application core that allows inversion of control for swapping out libraries should be of high importance to you. Lastly, don’t let anyone tell you what you need or don’t need. You know your use case, your colleagues’ needs, the business’s needs, and the customers’ needs better than anyone.
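As one way to picture that inversion of control, here is a hypothetical sketch in which the application core depends only on a tiny router interface, so the concrete routing library behind it can be swapped without touching the core. The names and the two-method interface are assumptions made for this example, not an API from any real library:

```javascript
// The core depends only on an interface: addRoute(path, handler)
// and match(path). Any library wrapped in an adapter with these two
// methods can be injected, which is the inversion of control.
function createApp(router) {
  return {
    route: function (path, handler) {
      router.addRoute(path, handler);
    },
    dispatch: function (path) {
      var handler = router.match(path);
      return handler ? handler() : 'not found';
    }
  };
}

// One concrete adapter, backed by a plain object lookup.
function simpleRouterAdapter() {
  var routes = {};
  return {
    addRoute: function (path, handler) {
      routes[path] = handler;
    },
    match: function (path) {
      return routes[path];
    }
  };
}

var app = createApp(simpleRouterAdapter());
app.route('/home', function () {
  return 'home page';
});
var result = app.dispatch('/home');
```

A second adapter wrapping a different routing library would only need to expose the same addRoute/match pair, leaving the application core and all route definitions untouched.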
That’s it for Part II. We hope you enjoyed the journey as much as we did. However, we are not done yet. We saved the best for last. Industry experts have graciously donated their time to share their real-world experiences in Part III. I encourage you to read on and absorb every ounce of wisdom they have to share. I did, and as a result I have a much more enriched view of the landscape and the Web to come.