We’re at the end of the book, but not the end of our microservices journey. While most of this book has focused on designing, building, and operationalizing Spring-based microservices using the Spring Cloud technology, we haven’t yet touched on how to build and deploy microservices. Creating a build and deployment pipeline might seem like a mundane task, but in reality it’s one of the most important pieces of your microservices architecture.
Why? Remember, one of the key advantages of a microservices architecture is that microservices are small units of code that can be quickly built, modified, and deployed to production independently of one another. The small size of the service means that new features (and critical bug fixes) can be delivered with a high degree of velocity. Velocity is the key word here because velocity implies that little to no friction exists between making a new feature or fixing a bug and getting your service deployed. Lead times for deployment should be minutes, not days.
To accomplish this, the mechanism that you use to build and deploy your code needs to be
Building a robust and generalized build/deployment pipeline is a significant amount of work, and the pipeline is often designed specifically for the runtime environment your services will run in. It often involves a specialized team of DevOps (developer operations) engineers whose sole job is to generalize the build process so that each team can build their microservices without having to reinvent the entire build process for themselves. Unfortunately, Spring is a development framework and doesn’t offer many capabilities for implementing a build and deployment pipeline.
For this chapter, we’re going to see how to implement a build and deployment pipeline using a number of non-Spring tools. You’re going to take the suite of microservices you’ve been building for this book and do the following:
I want to start our discussion with the end goal in mind: a set of services deployed to Amazon’s Elastic Container Service (ECS). Before we get into all the details of how you’re going to implement a build/deployment pipeline, let’s walk through how the EagleEye services are going to look running in Amazon’s cloud. Then we’ll discuss how to manually deploy the EagleEye services to the AWS cloud. Once that’s done, we will automate the entire process.
Throughout all the code examples in this book, you’ve run all of your applications inside a single virtual machine image, with each individual service running as a Docker container. You’re going to change that now by moving your database server (PostgreSQL) and caching server (Redis) out of Docker and into Amazon’s cloud. All the other services will remain Docker containers running inside a single-node Amazon ECS cluster. Figure 10.1 shows the deployment of the EagleEye services to the Amazon cloud.

Let’s walk through figure 10.1 and dive into more detail:
To set up your Amazon infrastructure, you’re going to need the following:
If you don’t have any experience with Amazon Web Services, I’d suggest setting up an AWS account and installing the tools in the list. I’d also suggest spending time familiarizing yourself with the platform.
If you’re completely new to AWS, I highly recommend you pick up a copy of Michael and Andreas Wittig’s book Amazon Web Services in Action (Manning, 2015). The first chapter of the book (https://www.manning.com/books/amazon-web-services-in-action#downloads) is available for download and includes a well-written tutorial at the end of the chapter on how to sign up and configure your AWS account. Amazon Web Services in Action is a well-written and comprehensive book on AWS. Even though I’ve been working with the AWS environment for years, I still find it a useful resource.
Finally, in this chapter I’ve tried as much as possible to use the free-tier services offered by Amazon. The only place where I couldn’t do this is when setting up the ECS cluster. I used a t2.large server that costs approximately $0.10 per hour to run. Make sure that you shut down your services after you’re done if you don’t want to incur significant costs.
There’s no guarantee that the Amazon resources (PostgreSQL, Redis, and ECS) that I’m using in this chapter will be available if you want to run this code yourself. If you’re going to run the code from this chapter, you need to set up your own GitHub repository (for your application configuration), your own Travis CI account, your own Docker Hub account (for your Docker images), and your own Amazon account, and then modify your application configuration to point to your accounts and credentials.
Before we begin this section, you need to set up and configure your Amazon AWS account. Once this is done, your first task is to create the PostgreSQL database that you’re going to use for your EagleEye services. To do this, you’re going to log in to the Amazon AWS console (https://aws.amazon.com/console/) and do the following:
The first thing the Amazon database creation wizard will ask you is whether this is a production database or a dev/test database. You’re going to create a dev/test database using the free tier. Figure 10.2 shows this screen.

Next, you’re going to set up basic information about your PostgreSQL database and also set the master user ID and password you’re going to use to log into the database. Figure 10.3 shows this screen.

The final step of the wizard is to set up the database security groups, port information, and database backup information. Figure 10.4 shows the contents of this screen.

At this point, your database creation process will begin (it can take several minutes). Once the database is created, navigate back to the RDS dashboard and you’ll see it listed; figure 10.5 shows this screen. From there, you’ll need to configure the EagleEye services to use the new database.

For this chapter, I created a new application profile called aws-dev for each microservice that needs to access the Amazon-based PostgreSQL database. I added the Amazon database connection information to the Spring Cloud Config GitHub repository (https://github.com/carnellj/config-repo). The property files follow the naming convention (service-name)-aws-dev.yml, with one file for each service that uses the new database (the licensing, organization, and authentication services).
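As a sketch of what one of these profile files might contain, assuming the services read standard Spring Boot datasource properties, a licensingservice-aws-dev.yml would point at the RDS endpoint shown in figure 10.5 (the endpoint, database name, and credentials here are placeholders):

spring:
  datasource:
    url: jdbc:postgresql://<your-rds-endpoint>:5432/<database-name>
    username: <master-user-id>
    password: <master-password>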
At this point your database is ready to go (not bad for setting it up in approximately five clicks). Let’s move to the next piece of application infrastructure and see how to create the Redis cluster that your EagleEye licensing service is going to use.
To set up the Redis cluster, you’re going to use the Amazon ElastiCache service. Amazon ElastiCache allows you to build in-memory data caches using Redis or Memcached (https://memcached.org/). For the EagleEye services, you’re going to move the Redis server you were running in Docker to ElastiCache.
To begin, navigate back to the AWS Console’s main page (click the orange cube on the upper left-hand side of the page) and click the ElastiCache link.
From the ElastiCache console, select the Redis link (left-hand side of the screen), and then hit the blue Create button at the top of the screen. This will bring up the ElastiCache/Redis creation wizard.
Figure 10.6 shows the Redis creation screen.

Go ahead and hit the Create button once you’ve filled in all your data. Amazon will then begin the Redis cluster creation process (this will take several minutes), building a single-node Redis server running on the smallest Amazon server instance available. Once the cluster is created, you can click on its name to bring up a detail screen showing the endpoint used by the cluster. Figure 10.7 shows the details of the Redis cluster after it has been created.

The licensing service is the only one of your services to use Redis, so make sure that if you deploy the code examples in this chapter to your own Amazon instance, you modify the licensing service’s Spring Cloud Config files appropriately.
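As a hedged sketch of that change (the property names here are illustrative and must match however the licensing service actually reads its Redis settings), the aws-dev profile would swap the Docker-based Redis host for the ElastiCache endpoint shown in figure 10.7:

redis:
  server: <your-elasticache-endpoint>
  port: 6379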
The final step before you deploy the EagleEye services is to set up an Amazon ECS cluster. Setting up an Amazon ECS cluster provisions the Amazon machines that are going to host your Docker containers. To do this, you’re going to return to the Amazon AWS console and click on the Amazon EC2 Container Service link.
This brings you to the main EC2 Container service page, where you should see a “Getting Started” button.
Click on the “Getting Started” button. This will bring you to the “Select options to Configure” screen shown in figure 10.8.

ECS offers a wizard for setting up an ECS container based on a set of predefined templates; you’re not going to use this wizard. Uncheck the two checkboxes on the screen and click the Cancel button. Once you cancel out of the ECS setup wizard, you should see the “Clusters” tab on the ECS home page. Figure 10.9 shows this screen. Hit the “Create Cluster” button to begin the process of creating an ECS cluster.

Now you’ll see a screen called “Create Cluster” that has three major sections. The first section is going to define the basic cluster information. Here you’re going to enter the
Figure 10.10 shows the screen as I populated it for the test examples in this book.

One of the first tasks you do when you set up an Amazon account is define a key pair for SSHing into any EC2 servers you start. We’re not going to cover setting up a key pair in this chapter, but if you’ve never done this before, I recommend you look at Amazon’s directions regarding this (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html).
Next, you’re going to set up the network configuration for the ECS cluster. Figure 10.11 shows the networking screen and the values you’re configuring.

The first thing to note is the selection of the Amazon Virtual Private Cloud (VPC) that the ECS cluster will run in. By default, the ECS setup wizard offers to set up a new VPC. I’ve chosen to run the ECS cluster in my default VPC, which houses the database server and Redis cluster. In Amazon’s cloud, an Amazon-managed Redis server can only be accessed by servers in the same VPC as the Redis server.
Next, you have to select the subnets in the VPC that you want to give the ECS cluster access to. Because each subnet corresponds to an Amazon availability zone, I usually select all the subnets in the VPC to make the cluster available.
Finally, you have to create a new security group or select an existing Amazon security group to apply to the new ECS cluster. Because you’re running Zuul, you want all traffic to flow through a single port, port 5555. You’re going to configure the new security group being created by the ECS wizard to allow all inbound traffic from the world on that port (0.0.0.0/0 is the CIDR notation for the entire internet).
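The wizard creates this inbound rule for you, but for reference, the equivalent AWS command-line call would look roughly like the following (the security group ID is a placeholder):

aws ec2 authorize-security-group-ingress \
  --group-id <your-security-group-id> \
  --protocol tcp \
  --port 5555 \
  --cidr 0.0.0.0/0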
The last part of the form is the creation of an Amazon IAM role for the ECS container agent that runs on the server. The ECS agent is responsible for communicating with Amazon about the status of the containers running on the server. You’re going to allow the ECS wizard to create an IAM role, called ecsInstanceRole, for you. Figure 10.12 shows this configuration step.

At this point you should see a screen tracking the status of the cluster creation. Once the cluster is created, a blue “View Cluster” button appears. Click it; figure 10.13 shows the screen that appears next.

At this point, you have all the infrastructure you need to successfully deploy the EagleEye microservices.
Right now, you’re doing everything via the Amazon AWS console. In a real environment, you’d script the creation of all this infrastructure using Amazon’s CloudFormation scripting DSL (domain-specific language) or a cloud infrastructure scripting tool like HashiCorp’s Terraform (https://www.terraform.io/). However, that’s a topic unto itself and far outside the scope of this book. If you’re using Amazon’s cloud, you’re probably already familiar with CloudFormation. If you’re new to Amazon’s cloud, I recommend you take the time to learn it before you get too far down the road of setting up core infrastructure via the Amazon AWS console.
Again, I want to point the reader back to Amazon Web Services in Action (Manning, 2015) by Michael and Andreas Wittig. They walk through the majority of Amazon Web Services and demonstrate how to use CloudFormation (with examples) to automate the creation of your infrastructure.
At this point you have the infrastructure set up and can now move into the second half of the chapter. In this second part, you’re going to deploy the EagleEye services to your Amazon ECS container. You’re going to do this in two parts. The first part of your work is for the terminally impatient (like me) and will show how to deploy EagleEye manually to your Amazon instance. This will help you understand the mechanics of deploying the service and see the deployed services running in your container. While getting your hands dirty and manually deploying your services is fun, it isn’t sustainable or recommended.
This is where the second part of this section comes into play. You’re going to automate the entire build and deployment process and take the human being out of the picture. This is your targeted end state and really caps the work you’ve been doing in the book by demonstrating how to design, build, and deploy microservices to the cloud.
To manually deploy your EagleEye services, you’re going to switch gears and move away from the Amazon AWS console. To deploy the EagleEye services, you’re going to use the Amazon ECS command-line client (https://github.com/aws/amazon-ecs-cli). After you’ve installed the ECS command-line client, you need to configure the ecs-cli runtime environment to point at your Amazon credentials, your region, and your target cluster:
ecs-cli configure --region us-west-1 \
--access-key $AWS_ACCESS_KEY \
--secret-key $AWS_SECRET_KEY \
--cluster spmia-tmx-dev
The ecs-cli configure command will set the region where your cluster is located, your Amazon access and secret key, and the name of the cluster (spmia-tmx-dev) you’ve deployed to. If you look at the previous command, I’m using environment variables ($AWS_ACCESS_KEY and $AWS_SECRET_KEY) to hold my Amazon access and secret key.
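Before running ecs-cli configure, you’d set those two variables in your shell with the keys generated for your Amazon account (the values shown are placeholders):

export AWS_ACCESS_KEY=<your-aws-access-key>
export AWS_SECRET_KEY=<your-aws-secret-key>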
I selected the us-west-1 region for purely demonstrative purposes. Depending on the country you’re located in, you might choose an Amazon region more specific to your part of the world.
Next, let’s see how to do a build. Unlike in other chapters, you have to set the build name because the Maven scripts in this chapter are going to be used in the build-deploy pipeline being set up later on in the chapter. You’re going to set an environment variable called $BUILD_NAME. The $BUILD_NAME environment variable is used to tag the Docker image that’s created by the build script. Change to the root directory of the chapter 10 code you downloaded from GitHub and issue the following two commands:
export BUILD_NAME=TestManualBuild
mvn clean package docker:build
This will execute a Maven build using a parent POM located at the root of the project directory. The parent pom.xml is set up to build all the services you’ll deploy in this chapter. Once the Maven build is done executing, you can deploy the Docker images to the ECS instance you set up earlier in section 10.1.3. To do the deployment, issue the following command:
ecs-cli compose --file docker/common/docker-compose.yml up
The ECS command-line client allows you to deploy containers using a docker-compose file. By letting you reuse the docker-compose file from your desktop development environment, Amazon has significantly simplified the deployment of your services to Amazon ECS. After the ECS client has run, you can validate that the services are running and discover the IP address of the server by issuing the following command:
ecs-cli ps
Figure 10.14 shows the output from the ecs-cli ps command.

Note three things from the output in figure 10.14:
At this point you’ve successfully deployed your first set of services to an Amazon ECS client. Now, let’s build on this by looking at how to design a build and deployment pipeline that can automate the process of compiling, packaging, and deploying your services to Amazon.
ECS has limited tools for debugging why a container doesn’t start. If you have problems with an ECS-deployed service starting or staying up, you’ll need to SSH onto the ECS cluster to look at the Docker logs. To do this, add port 22 to the security group that the ECS cluster runs with, and then SSH onto the box as the ec2-user, using the Amazon key pair you defined at the time the cluster was set up (see figure 10.9). Once you’re on the server, you can get a list of all the Docker containers running on it with the docker ps command. Once you’ve located the container that you want to debug, you can run docker logs -f <container id> to tail the logs of the targeted Docker container.
This is a primitive mechanism for debugging an application, but sometimes you only need to log on to a server and see the actual console output to determine what’s going on.
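Put together, the debugging flow looks something like the following sketch (the key pair file, server IP, and container ID are placeholders for your own values):

ssh -i <your-key-pair>.pem ec2-user@<ecs-server-ip>
docker ps
docker logs -f <container id>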
The goal of this chapter is to provide you with the working pieces of a build/deployment pipeline so that you can take these pieces and tailor them to your specific environment.
Let’s start our discussion by looking at the general architecture of your build/deployment pipeline and several of the general patterns and themes that it represents. To keep the examples flowing, I’ve done a few things that I wouldn’t normally do in my own environment, and I’ll call those pieces out accordingly.
Our discussion on deploying microservices is going to begin with a picture you saw way back in chapter 1. Figure 10.15 is a duplicate of the diagram we saw in chapter 1 and shows the pieces and steps involved in building a microservices build and deployment pipeline.

Figure 10.15 should look somewhat familiar, because it’s based on the general build-deploy pattern used for implementing Continuous Integration (CI):
With the build and deployment pipeline (shown in figure 10.15), a similar process is followed up until the code is ready to be deployed. In the build and deployment shown in figure 10.15, you’re going to tack Continuous Delivery (CD) onto the process:
You’ll see from figure 10.15 that I do several types of testing (unit, integration, and platform) during the build and deployment of a service. Three types of testing are typical in a build and deployment pipeline:
Unit tests—Unit tests are run during the build, immediately after the compilation of the service code but before it’s deployed to an environment. They’re designed to run in complete isolation, with each unit test being small and narrow in focus. A unit test should have no dependencies on third-party infrastructure (databases, services, and so on). Usually a unit test’s scope encompasses the testing of a single method or function.
Integration tests—Integration tests are run immediately after packaging the service code. These tests are designed to test an entire workflow or code path and stub or mock out major services or components that would need to be called off box. During an integration test, you might be running an in-memory database to hold data, mocking out third-party service calls, and so on. Third-party dependencies are mocked or stubbed so that any calls that would invoke a remote service never leave the build server.
Platform tests—Platform tests are run right after a service is deployed to an environment. These tests typically test an entire business flow and call all the third-party dependencies that would normally be called in a production system. Platform tests run live in a particular environment and don’t involve any mocked-out services. They’re run to detect integration problems with third-party services that would normally not be caught when a third-party service is stubbed out during an integration test.
This build/deploy process is built on four core patterns. These patterns aren’t my creation but have emerged from the collective experience of development teams building microservice and cloud-based applications. These patterns include
With the concept of immutable servers, we should always be guaranteed that a server’s configuration matches exactly what the machine image for the server says it does. A server should have the option to be killed and restarted from the machine image without any change in the service or microservice’s behavior. This killing and resurrection of servers was termed “Phoenix Server” by Martin Fowler (http://martinfowler.com/bliki/PhoenixServer.html) because when the old server is killed, the new server should rise from the ashes. The Phoenix Server pattern has two key benefits.
First, it exposes and drives configuration drift out of your environment. If you’re constantly tearing down and setting up new servers, you’re more likely to expose configuration drift early. This is a tremendous help in ensuring consistency. I’ve spent way too much of my time and life away from my family on “critical situation” calls caused by configuration drift.
Second, the Phoenix Server pattern helps to improve resiliency by finding situations where a server or service isn’t cleanly recoverable after it has been killed and restarted. Remember, in a microservice architecture your services should be stateless, and the death of a server should be a minor blip. Randomly killing and restarting servers quickly exposes situations where you have state in your services or infrastructure. It’s better to find these situations and dependencies early in your deployment pipeline than when you’re on the phone with an angry customer.
The organization where I work uses Netflix’s Chaos Monkey (https://github.com/Netflix/SimianArmy/wiki/Chaos-Monkey) to randomly select and kill servers. Chaos Monkey is an invaluable tool for testing the immutability and recoverability of your microservice environment: it randomly selects server instances in your environment and kills them. The idea is that you’re looking for services that can’t recover from the loss of a server, and verifying that when a new server is started, it behaves in the same fashion as the server that was killed.
From the general architecture laid out in section 10.3, you can see that there are many moving pieces behind a build/deployment pipeline. Because the purpose of this book is to show you things “in action,” we’re going to walk through the specifics of implementing a build/deployment pipeline for the EagleEye services. Figure 10.16 lays out the different technologies you’re going to use to implement your pipeline:

You might find it a little odd that I wrote the platform tests in Python rather than Java. I did this purposefully. Python (like Groovy) is a fantastic scripting language for writing REST-based test cases. I believe in using the right tool for the job. One of the biggest mind shifts I’ve seen for organizations adopting microservices is that the responsibility for picking the language should lie with the development teams. In too many organizations, I’ve seen a dogmatic embrace of standards (“our enterprise standard is Java . . . and all code must be written in Java”). As a result, I’ve seen development teams jump through hoops to write large amounts of Java code when a 10-line Groovy or Python script would do the job.
The second reason I chose Python is that unlike unit and integration tests, platform tests are truly “black box” tests where you’re acting like an actual API consumer running in a real environment. Unit tests exercise the lowest level of code and shouldn’t have any external dependencies when they run. Integration tests come up a level and test the API, but key external dependencies, like calls to other services, database calls, and so on, are mocked or stubbed out. Platform tests should be truly independent tests of the underlying infrastructure.
Dozens of source control engines and build/deploy engines (both on-premises and cloud-based) can implement your build and deployment pipeline. For the examples in this book, I purposely chose GitHub as the source control repository and Travis CI as the build engine. Git is an extremely popular source control system, and GitHub is one of the largest cloud-based source control repositories available today.
Travis CI is a build engine that integrates tightly with GitHub (it also supports Subversion and Mercurial). It’s extremely easy to use and is completely driven off a single configuration file (.travis.yml) in your project’s root directory. Its simplicity and opinionated nature make it easy to get a simple build pipeline off the ground.
Up to now, all of the code examples in this book could be run solely from your desktop (with the exception of connectivity out to GitHub). For this chapter, if you want to completely follow the code examples, you’ll need to set up your own GitHub, Travis CI, and Docker Hub accounts. We’re not going to walk through how to set up these accounts, but the setup of a personal Travis CI account and its link to your GitHub account can be done right from the Travis CI web page (http://travis-ci.org).
For the purposes of this book (and my sanity), I set up a separate GitHub repository for each chapter in the book. All the source code for the chapter can be built and deployed as a single unit. However, outside this book, I highly recommend that you set up each microservice in your environment with its own repository and its own independent build process. That way each service can be deployed independently of the others. With my build process, I’m deploying all of the services as a single unit only because I wanted to push the entire environment to the Amazon cloud with a single build script and not manage build scripts for each individual service.
At the heart of every service built in this book has been a Maven pom.xml file that’s used to build the Spring Boot service, package it into an executable JAR, and then build a Docker image that can be used to launch the service. Up until this chapter, the compilation and startup of the services occurred by
The question is, how do you repeat this process in Travis CI? It all begins with a single file called .travis.yml. The .travis.yml is a YAML-based file that describes the actions you want taken when Travis CI executes your build. This file is stored in the root directory of your microservice’s GitHub repository. For chapter 10, this file can be found in spmia-chapter10-code/.travis.yml.
When a commit occurs on a GitHub repository Travis CI is monitoring, it will look for the .travis.yml file and then initiate the build process. Figure 10.17 shows the steps your .travis.yml file will undertake when a commit is made to the GitHub repository used to hold the code for this chapter (https://github.com/carnellj/spmia-chapter10).

Now that we’ve walked through the general steps involved in the .travis.yml file, let’s look at the specifics of your .travis.yml file. Listing 10.1 shows the different pieces of the .travis.yml file.
The code annotations in listing 10.1 are lined up with the numbers in figure 10.17.


We’re now going to walk through each of the steps involved in the build process in more detail.
The first part of the .travis.yml file deals with the core runtime configuration of your Travis build. Typically this section of the .travis.yml file will contain Travis-specific functions that do things like
The next listing shows this specific section of the build file.
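Reconstructed here from the attributes described next, it looks roughly like the following sketch (the email address, cache directory, and secure value are placeholders):

language: java
jdk:
- oraclejdk8
cache:
  directories:
  - "$HOME/.m2"
sudo: required
services:
- docker
notifications:
  email:
  - <your-email-address>
branches:
  only:
  - master
env:
  global:
  - secure: <encrypted environment variable>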

The first thing your Travis build script does is tell Travis what primary language is going to be used for performing the build. By specifying the language attribute as java and the jdk attribute as oraclejdk8, Travis will ensure that the JDK is installed and configured for your project.
The next part of your .travis.yml file, the cache.directories attribute, tells Travis to cache the contents of this directory when a build is executed and reuse it across multiple builds. This is extremely useful when dealing with package managers such as Maven, where it can take a significant amount of time to download fresh copies of JAR dependencies every time a build is kicked off. Without the cache.directories attribute set, the build for this chapter can take up to 10 minutes to download all of the dependent JARs.
The next two attributes in listing 10.2 are the sudo attribute and the services attribute.
The sudo attribute is used to tell Travis that your build process will need to use sudo as part of the build. The UNIX sudo command is used to temporarily elevate a user to root privileges. Generally, you use sudo when you need to install third-party tools. You do exactly this later in the build when you need to install the Amazon ECS tools.
The services attribute is used to tell Travis whether you’re going to use certain key services while executing your build. For instance, if your integration tests need a local database available to run, Travis allows you to start a MySQL or PostgreSQL database right on your build box. In this case, you need Docker running to build your Docker images for each of your EagleEye services and push your images to Docker Hub. You’ve set the services attribute to start Docker when the build is kicked off.
The next attribute, notifications, defines the communication channel to use whenever a build succeeds or fails. Right now, you always communicate the build results by setting the notification channel for the build to email. Travis will notify you via email on both the success and the failure of the build. Besides email, Travis CI can notify you via multiple channels, including Slack, IRC, HipChat, or a custom web hook.
The branches.only attribute tells Travis which branches it should build against. For the examples in this chapter, you’re only going to perform a build off the master branch of Git. This prevents you from kicking off a build every time you tag a repo or commit to a branch within GitHub. This is important because GitHub does a callback into Travis every time you tag a repo or create a release. Setting the branches.only attribute to master prevents Travis from going into an endless cycle of builds.
The last part of the build configuration is the setting of sensitive environment variables. In your build process, you might communicate with third-party vendors such as Docker, GitHub, and Amazon. Sometimes you’re communicating via their command-line tools, and other times you’re using their APIs. Regardless, you often have to present sensitive credentials. Travis CI gives you the ability to add encrypted environment variables to protect these credentials.
To add an encrypted environment variable, you must encrypt it using the travis command-line tool on your desktop, in the project directory where you have your source code. To install the Travis command-line tool locally, review the documentation for the tool at https://github.com/travis-ci/travis.rb. For the .travis.yml used in this chapter, I created and encrypted the following environment variables:
Once the travis tool is installed, the following command adds the encrypted environment variable DOCKER_USERNAME to the env.global section of your .travis.yml file:
travis encrypt DOCKER_USERNAME=somerandomname --add env.global
Once this command is run, you should now see in the env.global section of your .travis.yml file a secure attribute tag followed by a long string of text. Figure 10.18 shows what an encrypted environment variable looks like.

Unfortunately, Travis doesn’t label the names of your encrypted environment variables in your .travis.yml file.
Encrypted variables are only good for the single GitHub repository they’re encrypted in and Travis is building against. You can’t cut and paste an encrypted environment variable across multiple .travis.yml files. Your builds will fail to run because the encrypted environment variables won’t decrypt properly.
Even though all our examples use Travis CI as the build tool, all modern build engines allow you to encrypt your credentials and tokens. Please, please, please make sure you encrypt your credentials. Credentials embedded in a source repository are a common security vulnerability. Don’t rely on the belief that your source control repository is secure and therefore the credentials in it are secure.
Wow, the pre-build configuration was huge, but the next section is small. Build engines are often a source of a significant amount of “glue code” scripting to tie together different tools used in the build process. With your Travis script, you need to install two command-line tools:
Each item listed in the before_install section of the .travis.yml file is a UNIX command that will be executed before the build kicks off. The following listing shows the before_install attribute along with the commands that need to be run.

The first thing to do in the build process is install the travis command-line tool on the remote build server:
gem install travis -v 1.8.5 --no-rdoc --no-ri
Later on in the build you’re going to kick off another Travis job via the Travis REST API. You need the travis command line tool to get a token for invoking this REST call.
After you’ve installed the travis tool, you’re going to install the Amazon ecs-cli tool. This is a command-line tool used for deploying, starting, and stopping Docker containers running within Amazon. You install the ecs-cli by first downloading the binary and then changing the permission on the downloaded binary to be executable:
- sudo curl -o /usr/local/bin/ecs-cli https://s3.amazonaws.com/amazon-ecs-cli/ecs-cli-linux-amd64-latest
- sudo chmod +x /usr/local/bin/ecs-cli
The last thing you do in the before_install section of the .travis.yml is set three environment variables in your build. These three environment variables will help drive the behavior of your builds. These environment variables are
The actual values set in these environment variables are
- export BUILD_NAME=chapter10-$TRAVIS_BRANCH-$(date -u "+%Y%m%d%H%M%S")-$TRAVIS_BUILD_NUMBER
- export CONTAINER_IP=52.53.169.60
- export PLATFORM_TEST_NAME="chapter10-platform-tests"
The first environment variable, BUILD_NAME, generates a unique build name that contains the name of the build, followed by the date and time (down to the seconds field) and then the build number in Travis. This BUILD_NAME will be used to tag your source code in GitHub and your Docker image when it’s pushed to the Docker hub repository.
The second environment variable, CONTAINER_IP, contains the IP address of the Amazon ECS virtual machine that your Docker containers will run on. This CONTAINER_IP will be passed later to another Travis CI job that will execute your platform tests.
I’m not assigning a static IP address to the Amazon ECS server that’s spun up. If I tear down the container completely, I’ll be given a new IP. In a real production environment, the servers in your ECS cluster would probably have static (non-changing) IPs assigned to them, and the cluster would sit behind an Amazon Elastic Load Balancer (ELB) and an Amazon Route 53 DNS name so that the actual IP address of the ECS server would be transparent to the services. However, setting up that much infrastructure is outside the scope of the example I’m trying to demonstrate in this chapter.
The third environment variable, PLATFORM_TEST_NAME, contains the name of the build job being executed. We’ll explore its use later in the chapter.
A common requirement in many financial services and healthcare companies is that they have to prove traceability of the deployed software in production, all the way back through all the lower environments, back to the build job that built the software, and then back to when the code was checked into the source code repository. The immutable server pattern really shines in helping organizations meet this requirement. As you saw in our build example, you tagged the source control repository and the container image that’s going to be deployed with the same build name. That build name is unique and tied into a Travis build number. Because you only promote the container image through each environment and each container image is labeled with the build name, you’ve established traceability of that container image back to the source code associated with it. Because the containers are never changed once they’re tagged, you have a strong audit position to show that the deployed code matches the underlying source code repository. Now, if you wanted to play it extra safe, at the time you labeled the project source code, you could also label the application configuration residing in the Spring Cloud Config repository with the same label generated for the build.
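If you wanted to take that extra step, a sketch of it (assuming the application configuration lives in its own Git repository) is simply tagging the config repo with the same build name generated during the build:

git tag $BUILD_NAME
git push origin $BUILD_NAME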
At this point, all the pre-build configuration and dependency installation is complete. To execute your build, you’re going to use the Travis script attribute. Like the before_install attribute, the script attribute takes a list of commands that will be executed. Because these commands are lengthy, I chose to encapsulate each major step in the build into its own shell script and have Travis execute the shell script. The following listing shows the major steps that are going to be undertaken in the build.
script:
- sh travis_scripts/tag_build.sh
- sh travis_scripts/build_services.sh
- sh travis_scripts/deploy_to_docker_hub.sh
- sh travis_scripts/deploy_amazon_ecs.sh
- sh travis_scripts/trigger_platform_tests.sh
Let’s walk through each of the major steps executed in the script step.
The travis_scripts/tag_build.sh script takes care of tagging code in the repository with a build name. For the example here, I’m creating a GitHub release via the GitHub REST API. A GitHub release will not only tag the source control repository, but will also allow you to post things like release notes to the GitHub web page along with whether the source code is a pre-release of the code.
Because the GitHub release API is a REST-based call, you’ll use curl in your shell script to do the actual invocation. The following listing shows the code from the travis_scripts/tag_build.sh script.

This script is simple. The first thing you do is build the target URL for the GitHub release API:
export TARGET_URL="https://api.github.com/repos/carnellj/spmia-chapter10/releases?access_token=$GITHUB_TOKEN"
In the TARGET_URL you’re passing an HTTP query parameter called access_token. This parameter contains a GitHub personal access token set up to specifically allow your script to take action via the REST API. Your GitHub personal access token is stored in an encrypted environment variable called GITHUB_TOKEN. To generate a personal access token, log in to your GitHub account and navigate to https://github.com/settings/tokens. When you generate a token, make sure you cut and paste it right away. When you leave the GitHub screen it will be gone and you’ll need to regenerate it.
The second step in your script is to set up the JSON body for the REST call:
body="{
\"tag_name\": \"$BUILD_NAME\",
\"target_commitish\": \"master\",
\"name\": \"$BUILD_NAME\",
\"body\": \"Release of version $BUILD_NAME\",
\"draft\": true,
\"prerelease\": true
}"
In the previous code snippet, you’re supplying the $BUILD_NAME as the tag_name value and setting basic release notes using the body field.
Once the JSON body for the call is built, executing the call via the curl command is trivial:
curl -k -X POST \
  -H "Content-Type: application/json" \
  -d "$body" \
  $TARGET_URL
The next step in the Travis script attribute is to build the individual services and then create Docker container images for each service. You do this via a small script called travis_scripts/build_services.sh. This script will execute the following command:
mvn clean package docker:build
This Maven command executes the parent Maven spmia-chapter10-code/pom.xml file for all of the services in the chapter 10 code repository. The parent pom.xml executes the individual Maven pom.xml for each service. Each individual service builds the service source code, executes any unit and integration tests, and then packages the service into an executable jar.
The last thing that happens in the Maven build is the creation of a Docker container image that’s pushed to the local Docker repository running on your Travis build machine. The creation of the Docker image is carried out using the Spotify Docker plugin (https://github.com/spotify/docker-maven-plugin). If you’re interested in how the Spotify Docker plug-in works within the build process, please refer to appendix A, “Setting up your desktop environment”. The Maven build process and the Docker configuration are explained there.
At this point in the build, the services have been compiled and packaged and a Docker container image has been created on the Travis build machine. You’re now going to push the Docker container image to a central Docker repository via your travis_scripts/deploy_to_docker_hub.sh script. A Docker repository is like a Maven repository for your created Docker images. Docker images can be tagged and uploaded to it, and other projects can download and use the images.
For this code example, you’re going to use the Docker hub (https://hub.docker.com/). The following listing shows the commands used in the travis_scripts/deploy_to_docker_hub.sh script.
echo "Pushing service docker images to docker hub ...."
docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
docker push johncarnell/tmx-authentication-service:$BUILD_NAME
docker push johncarnell/tmx-licensing-service:$BUILD_NAME
docker push johncarnell/tmx-organization-service:$BUILD_NAME
docker push johncarnell/tmx-confsvr:$BUILD_NAME
docker push johncarnell/tmx-eurekasvr:$BUILD_NAME
docker push johncarnell/tmx-zuulsvr:$BUILD_NAME
The flow of this shell script is straightforward. The first thing you have to do is log in to Docker Hub using the Docker command-line tools and the user credentials of the Docker Hub account the images are going to be pushed to. Remember, your credentials for Docker Hub are stored as encrypted environment variables:
docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
Once the script has logged in, the code pushes each individual microservice’s Docker image, residing in the local Docker repository on the Travis build server, to the Docker Hub repository:
docker push johncarnell/tmx-confsvr:$BUILD_NAME
This command tells the Docker command-line tool to push the tmx-confsvr image, tagged with the value of the $BUILD_NAME environment variable, to the johncarnell account on Docker Hub (the default registry the Docker command-line tools use).
At this point, all of the code has been built and tagged and a Docker image has been created. You’re now ready to deploy your services to the Amazon ECS cluster you created back in section 10.1.3. The work to do this deployment is found in travis_scripts/deploy_to_amazon_ecs.sh. The following listing shows the code from this script.
echo "Launching $BUILD_NAME IN AMAZON ECS"
ecs-cli configure --region us-west-1 \
--access-key $AWS_ACCESS_KEY \
--secret-key $AWS_SECRET_KEY \
--cluster spmia-tmx-dev
ecs-cli compose --file docker/common/docker-compose.yml up
rm -rf ~/.ecs
In the Amazon console, Amazon only shows the name of the state/city/country the region is in and not the actual region name (us-west-1, us-east-1, and so on). For example, if you were to look in the Amazon console and wanted to see the Northern California region, there would be no indication that the region name is us-west-1. For a list of all the Amazon regions (and endpoints for each service), please refer to http://docs.aws.amazon.com/general/latest/gr/rande.html.
Because a new build virtual machine is kicked off by Travis with every build, you need to configure your build environment’s ecs-cli client with your AWS access and secret key. Once that’s complete, you can then kick off a deploy to your ECS cluster using the ecs-cli compose command and a docker-compose.yml file. Your docker-compose.yml is parameterized to use the build name (contained in the environment variable $BUILD_NAME).
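As a sketch of what that parameterization looks like (this single service entry is illustrative, not the complete file), each image tag in the docker-compose.yml references the environment variable:

licensingservice:
  image: johncarnell/tmx-licensing-service:${BUILD_NAME}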
You have one last step in your build process: kicking off a platform test. After every deployment to a new environment, you kick off a set of platform tests that call the microservices in the deployed build and check that the services are functioning properly.
I’ve separated the platform test job from the main build so that it can be invoked independently of the main build. To do this, I use the Travis CI REST API to programmatically invoke the platform tests. The travis_scripts/trigger_platform_tests.sh script does this work. The following listing shows the code from this script.


The first thing you do in listing 10.8 is use the Travis CI command-line tool to log in to Travis CI and get an OAuth2 token you can use to call other Travis REST APIs. You store this OAuth2 token in the $RESULTS environment variable.
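The first two commands of the script look roughly like the following sketch (the exact flags are an assumption based on the Travis command-line tool’s login and token commands):

travis login --org --no-interactive --github-token $GITHUB_TOKEN
export RESULTS=`travis token --org --no-interactive`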
Next, you build the JSON body for the REST API call. Your downstream Travis CI job kicks off a series of Python scripts that test your API. This downstream job expects two environment variables to be set. In the JSON body being built in listing 10.8, you’re passing in two environment variables, $BUILD_NAME and $CONTAINER_IP, that will be passed to your testing job:
\"env\": {
\"global\": [\"BUILD_NAME=$BUILD_NAME\",
\"CONTAINER_IP=$CONTAINER_IP\"]
}
The last action in your script is to invoke the Travis CI build job that runs your platform test scripts. This is done by using the curl command to call the Travis CI REST endpoint for your test job:
curl -s -X POST \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Travis-API-Version: 3" \
  -H "Authorization: token $RESULTS" \
  -d "$body" \
  $TARGET_URL
The platform test scripts are stored in a separate GitHub repository called chapter10-platform-tests (https://github.com/carnellj/chapter10-platform-tests). This repository has three Python scripts that test the Spring Cloud Config server, the Eureka server, and the Zuul server. The Zuul server platform tests also test the licensing and organization services. These tests aren’t comprehensive in the sense of exercising every aspect of the services, but they do exercise enough of each service to ensure it’s functioning.
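To give a flavor of what one of these checks does, here’s a hedged, curl-based equivalent of a single smoke test (the actual tests are written in Python; the route through Zuul shown here is an assumption for illustration):

# Hypothetical smoke test: verify the organization service responds through Zuul
STATUS=$(curl -s -o /dev/null -w "%{http_code}" \
  http://$CONTAINER_IP:5555/api/organization/v1/organizations/<org-id>)
if [ "$STATUS" -ne 200 ]; then
  echo "Platform test failed with HTTP status $STATUS"
  exit 1
fi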
We’re not going to walk through the platform tests. The tests are straightforward and a walk-through of the tests would not add a significant amount of value to this chapter.
As this chapter (and the book) closes out, I hope you’ve gained an appreciation for the amount of work that goes into building a build/deployment pipeline. A well-functioning build and deployment pipeline is critical to the deployment of services. The success of your microservice architecture depends on more than just the code involved in the service: