You can deploy modern Java applications across a variety of platforms, and it is therefore beneficial to understand the fundamentals of how best to package applications using recommended artifact formats and practices. This chapter will walk you through building a JAR step-by-step, and along the way explore issues such as creating manifests, packaging dependencies (and classloading), and making a JAR executable. This information is fundamental for building artifacts for all platforms, even modern serverless ones. After this, you will explore other packaging options, such as fat JARs, skinny JARs, and WARs, as well as lower-level OS artifacts like RPMs, DEBs, machine images, and container images.
This chapter will be much easier to understand if you work through several concrete examples, and to help with this, a simple example project has been created with one dependency: the popular Logback logging framework. Maven will be used in the examples, but we will also mention how similar practices can be applied to other build tools. The example pom.xml for the project can be seen in Example 7-1.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>uk.co.danielbryant.oreillyexamples</groupId>
    <artifactId>builddemo</artifactId>
    <version>0.1.0-SNAPSHOT</version>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>1.2.3</version>
        </dependency>
    </dependencies>
</project>
All this project will do is output a log message to the console. You can see the main class in Example 7-2.
package uk.co.danielbryant.oreillyexamples.builddemo;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingDemo {

    public static final Logger LOGGER = LoggerFactory.getLogger(LoggingDemo.class);

    public static void main(String[] args) {
        LOGGER.info("Hello, (Logging) World!");
    }
}
The directory structure of this project follows the standard Maven project layout. Before you issue any build command, a tree of the root directory looks like Example 7-3.
builddemo $ tree .
.
├── builddemo.iml
├── pom.xml
└── src
    ├── main
    │   ├── java
    │   │   └── uk
    │   │       └── co
    │   │           └── danielbryant
    │   │               └── oreillyexamples
    │   │                   └── builddemo
    │   │                       └── LoggingDemo.java
    │   └── resources
    └── test
        └── java

11 directories, 3 files
If you build and package the Maven project, as shown in Example 7-4, you will see that a JAR file is created and stored within target/builddemo-0.1.0-SNAPSHOT.jar.
builddemo $ mvn package
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building builddemo 0.1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ builddemo ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 0 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ builddemo ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 1 source file to /Users/danielbryant/Documents/dev/daniel-bryant-uk/builddemo/target/classes
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ builddemo ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /Users/danielbryant/Documents/dev/daniel-bryant-uk/builddemo/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ builddemo ---
[INFO] Nothing to compile - all classes are up-to-date
[INFO]
[INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ builddemo ---
[INFO] No tests to run.
[INFO]
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ builddemo ---
[INFO] Building jar: /Users/danielbryant/Documents/dev/daniel-bryant-uk/builddemo/target/builddemo-0.1.0-SNAPSHOT.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.412 s
[INFO] Finished at: 2017-12-04T15:11:22-06:00
[INFO] Final Memory: 14M/48M
[INFO] ------------------------------------------------------------------------
If you try to execute the JAR by using the java -jar command, you will see the following error message:
builddemo $ java -jar target/builddemo-0.1.0-SNAPSHOT.jar
no main manifest attribute, in target/builddemo-0.1.0-SNAPSHOT.jar
For a JAR to be runnable via java -jar, its manifest must contain a Main-Class attribute that identifies the application's entry point. You can easily correct this by configuring the maven-jar-plugin, as shown in Example 7-5.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    ...
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <version>2.6</version>
                <configuration>
                    <archive>
                        <manifest>
                            <addClasspath>true</addClasspath>
                            <mainClass>uk.co.danielbryant.oreillyexamples.builddemo.LoggingDemo</mainClass>
                        </manifest>
                    </archive>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
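With this configuration in place, the generated META-INF/MANIFEST.MF will contain a Main-Class entry that tells the java -jar command where to start execution. A sketch of the resulting manifest follows; the Class-Path values shown are assumptions, based on the addClasspath setting and this project's dependencies, rather than output copied from an actual build:

```
Manifest-Version: 1.0
Main-Class: uk.co.danielbryant.oreillyexamples.builddemo.LoggingDemo
Class-Path: logback-classic-1.2.3.jar logback-core-1.2.3.jar slf4j-api-1.7.25.jar
```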
However, if you package the project and attempt to run the JAR, you will still receive an error:
builddemo $ java -jar target/builddemo-0.1.0-SNAPSHOT.jar
Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory
	at uk.co.danielbryant.oreillyexamples.builddemo.LoggingDemo.<clinit>(LoggingDemo.java:8)
Caused by: java.lang.ClassNotFoundException: org.slf4j.LoggerFactory
	at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:582)
	at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:185)
	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:496)
	... 1 more
This error message is quite helpful: the NoClassDefFoundError shows that the SLF4J and Logback dependencies that are required at runtime have not been included within the JAR. You can confirm this by looking into the JAR file:
builddemo $ jar tf target/builddemo-0.1.0-SNAPSHOT.jar
META-INF/
META-INF/MANIFEST.MF
uk/
uk/co/
uk/co/danielbryant/
uk/co/danielbryant/oreillyexamples/
uk/co/danielbryant/oreillyexamples/builddemo/
uk/co/danielbryant/oreillyexamples/builddemo/LoggingDemo.class
META-INF/maven/
META-INF/maven/uk.co.danielbryant.oreillyexamples/
META-INF/maven/uk.co.danielbryant.oreillyexamples/builddemo/
META-INF/maven/uk.co.danielbryant.oreillyexamples/builddemo/pom.xml
META-INF/maven/uk.co.danielbryant.oreillyexamples/builddemo/pom.properties
The LoggingDemo.class file is present, as is your META-INF/MANIFEST.MF file (which now contains the Main-Class attribute that was missing earlier), but there are no other Java class files, such as those from your dependencies.
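One way to see exactly what the JVM will search at runtime is to print the java.class.path system property. The following ClasspathInspector class is a hypothetical diagnostic (not part of the example project) that you can drop into any project when debugging a NoClassDefFoundError:

```java
// Hypothetical diagnostic class: prints each entry that the application
// class loader will search for classes. Running it alongside your JAR
// (e.g., java -cp target/builddemo-0.1.0-SNAPSHOT.jar ClasspathInspector)
// makes missing dependency JARs immediately visible.
public class ClasspathInspector {
    public static void main(String[] args) {
        // The java.class.path property holds the effective classpath;
        // entries are separated by the OS-specific path separator.
        String classpath = System.getProperty("java.class.path");
        String separator = System.getProperty("path.separator");
        for (String entry : classpath.split(separator)) {
            System.out.println(entry);
        }
    }
}
```

Note that when you launch with java -jar, the classpath is taken solely from the JAR itself (plus its manifest Class-Path entries), which is why a skinny JAR fails to find its dependencies.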
You can create an executable JAR (most commonly referred to as a fat JAR or uber JAR) by using one of several plugins, but the most effective is typically the Maven Shade Plugin. Many modern Java web application frameworks, such as Spring, now include this plugin (or something offering equivalent functionality) by default, and you may not even be aware you are using it. However, it is worth peeking under the covers at how a fat JAR is built, as this can often provide hints for solving bizarre classpath issues!
The Maven Shade plugin can be added to your project pom.xml, as shown in Example 7-6.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    ...
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>3.1.0</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <transformers>
                                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                    <mainClass>uk.co.danielbryant.oreillyexamples.builddemo.LoggingDemo</mainClass>
                                </transformer>
                            </transformers>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
The key points to note in the plugin are contained within the execution tag. The phase specifies in which part of the life cycle this plugin should be executed (which in this case is the package phase), and the goal specifies that the “shade” functionality should be executed. The preceding configuration includes a ManifestResourceTransformer resource transformer that specifies a main class to include within the JAR manifest.
If you now package your project, you will see additional details from the Maven Shade plugin as it explains how the uber JAR is being assembled; see Example 7-7.
builddemo $ mvn clean package
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building builddemo 0.1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] ...
[INFO]
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ builddemo ---
[INFO] Building jar: /Users/danielbryant/Documents/dev/daniel-bryant-uk/builddemo/target/builddemo-0.1.0-SNAPSHOT.jar
[INFO]
[INFO] --- maven-shade-plugin:3.1.0:shade (default) @ builddemo ---
[INFO] Including ch.qos.logback:logback-classic:jar:1.2.3 in the shaded jar.
[INFO] Including ch.qos.logback:logback-core:jar:1.2.3 in the shaded jar.
[INFO] Including org.slf4j:slf4j-api:jar:1.7.25 in the shaded jar.
[INFO] Replacing original artifact with shaded artifact.
[INFO] Replacing /Users/danielbryant/Documents/dev/daniel-bryant-uk/builddemo/target/builddemo-0.1.0-SNAPSHOT.jar with /Users/danielbryant/Documents/dev/daniel-bryant-uk/builddemo/target/builddemo-0.1.0-SNAPSHOT-shaded.jar
[INFO] Dependency-reduced POM written at: /Users/danielbryant/Documents/dev/daniel-bryant-uk/builddemo/dependency-reduced-pom.xml
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.402 s
[INFO] Finished at: 2018-01-03T16:28:25Z
[INFO] Final Memory: 19M/65M
[INFO] ------------------------------------------------------------------------
This all looks great, so now you can try to execute the resulting fat JAR, as shown in Example 7-8.
builddemo $ java -jar target/builddemo-0.1.0-SNAPSHOT.jar
16:28:38.198 [main] INFO uk.co.danielbryant.oreillyexamples.builddemo.LoggingDemo - Hello, (Logging) World!
Success! You can now look into the resulting fat JAR to see all of the dependency class files that have been included by the Maven Shade plugin; see Example 7-9.
builddemo $ jar tf target/builddemo-0.1.0-SNAPSHOT.jar
META-INF/MANIFEST.MF
META-INF/
uk/
uk/co/
uk/co/danielbryant/
uk/co/danielbryant/oreillyexamples/
uk/co/danielbryant/oreillyexamples/builddemo/
uk/co/danielbryant/oreillyexamples/builddemo/LoggingDemo.class
META-INF/maven/
META-INF/maven/uk.co.danielbryant.oreillyexamples/
META-INF/maven/uk.co.danielbryant.oreillyexamples/builddemo/
META-INF/maven/uk.co.danielbryant.oreillyexamples/builddemo/pom.xml
META-INF/maven/uk.co.danielbryant.oreillyexamples/builddemo/pom.properties
ch/
ch/qos/
ch/qos/logback/
ch/qos/logback/classic/
ch/qos/logback/classic/AsyncAppender.class
ch/qos/logback/classic/BasicConfigurator.class
...
org/slf4j/impl/StaticMarkerBinder.class
org/slf4j/impl/StaticMDCBinder.class
META-INF/maven/ch.qos.logback/
META-INF/maven/ch.qos.logback/logback-classic/
META-INF/maven/ch.qos.logback/logback-classic/pom.xml
META-INF/maven/ch.qos.logback/logback-classic/pom.properties
ch/qos/logback/core/
...
META-INF/maven/org.slf4j/slf4j-api/pom.xml
META-INF/maven/org.slf4j/slf4j-api/pom.properties
As you can see, a lot of extra files are included as a result of shading the dependencies into the fat JAR (and we’ve deliberately omitted 600 other classes from the preceding example list). Hopefully, you are starting to understand some of the challenges with managing dependencies with large applications.
It is quite common to get dependency clashes when initially using the Shade plugin to package an artifact. A useful Maven command to know is mvn dependency:tree. Executing this command on a project will show you a tree of all of your dependencies. You can also use the -Dverbose flag to add more details about conflicts, and the -Dincludes=<dependency-name> flag to target specific dependencies. For example:
mvn dependency:tree -Dverbose -Dincludes=commons-collections
If you are using Spring Boot, you have the option of using the Spring Boot Maven plugin in order to create fat JARs rather than the Maven Shade plugin. Including the plugin into your project is super simple; see Example 7-10.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    ...
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
The Spring Boot Maven plugin will repackage a JAR or WAR that is built during the package phase of the Maven life cycle. All you need to do is trigger a regular Maven build.
You can build Spring Boot applications by using the Maven Shade plugin, but you may run into problems, particularly with regard to application entry points and controllers not functioning correctly. We're sure all of these problems can be solved (with enough time and effort), but our advice is that if you are working with Spring Boot, stick to using the Spring Boot Maven plugin unless you have a very good reason not to. Otherwise, you could be exposing yourself to more pain than necessary.
In modern Java web development, the thought of packaging and running applications as anything other than a fat JAR is almost becoming heretical. However, there can be distinct disadvantages to building and deploying these large files. The HubSpot engineering team has written a fantastic blog post that explains some of the challenges they were having when continuously deploying large fat JARs to the AWS cloud.
The blog explains how the team initially used the Maven Shade plugin to build and package applications, but this was turning an application with 70 class files (totalling 210 KB in the original, dependency-free JAR) into a 150+ MB fat JAR. Using Shade to combine 100,000+ files into a single archive was also a slow process, and as the build server copied and deployed the resulting JAR to and from the AWS S3 storage service, this consumed both time and network resources. This was magnified by the fact that the HubSpot team had 100 engineers who were constantly committing code and triggering 1,000–2,000 builds per day; they were generating 50–100 GB of build artifacts per day!
The HubSpot team ultimately created a new Maven plugin: SlimFast. This plugin differs from the Shade plugin, in that it separates the application code from the associated dependencies, and accordingly builds and uploads two separate artifacts. It may sound inefficient to build and upload the application dependencies separately, but this step occurs only if the dependencies have changed. As the dependencies change infrequently, the HubSpot team states that this step is often a no-op; the package dependencies’ JAR file is uploaded to S3 only once.
The HubSpot blog post and corresponding GitHub repository provide comprehensive details, but, in essence, the SlimFast plugin uses the Maven JAR plugin to add a Class-Path manifest entry to the skinny JAR that points to the dependencies JAR file, and generates a JSON file with information about all of the dependency artifacts in S3 so that these can be downloaded later. At deploy time, the HubSpot team downloads all of the application's dependencies, but then caches these artifacts on each of the application servers, so this step is usually a no-op as well. The net result is that at build time only the application's skinny JAR is uploaded, which is only a few hundred kilobytes. At deploy time, only this same skinny JAR needs to be downloaded, which takes a fraction of a second.
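To illustrate the mechanism (the names below are hypothetical sketches, not values taken from the SlimFast source), a skinny JAR's manifest might reference its externally stored dependencies like this:

```
Manifest-Version: 1.0
Main-Class: com.example.MyApplication
Class-Path: lib/slf4j-api-1.7.25.jar lib/logback-classic-1.2.3.jar
```

When such a JAR is launched with java -jar, the class loader resolves the relative Class-Path entries against the JAR's own location, so the dependencies only need to be present (or already cached) in the referenced directory on the application server.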
The SlimFast plugin is currently tied to AWS S3 for the storage of artifacts, but the code is available on GitHub, and the principles can be adapted for any type of external storage (see the following sidebar for other plugin options).
If you are deploying your code to an application server (or potentially some serverless platforms), you may need to package your code as a WAR file. A WAR file is much like a JAR file: it is a zipped collection of files. However, in addition to class files, a WAR file contains the files needed to serve a web application, such as JSP, HTML, and image files, and a required WEB-INF folder that contains web application metadata. If you have been following along in this chapter, you are probably thinking that you could build your own WAR by using any one of the previously mentioned techniques, and you would be correct. However, there exists the even more convenient Maven WAR plugin, shown in Example 7-11.
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <groupId>uk.co.danielbryant.oreillyexamples</groupId>
    <artifactId>builddemo</artifactId>
    <version>0.1.0-SNAPSHOT</version>
    <packaging>war</packaging>
    ...
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>3.2.0</version>
                <configuration>
                    <archive>
                        <manifest>
                            <addClasspath>true</addClasspath>
                        </manifest>
                    </archive>
                </configuration>
            </plugin>
            ...
        </plugins>
    </build>
</project>
You simply need to use Maven to package the application, and a resulting WAR file will be generated within the target folder.
If you include the Spring Boot Maven plugin within your project, you can easily build WAR files, as this plugin automatically includes (and configures) the Maven WAR plugin. You will need to change the packaging to war (as you saw with the standalone Maven WAR plugin), and you will also need to explicitly specify the spring-boot-starter-tomcat dependency within the dependencies section of the POM, marking its scope as provided, in order to ensure that the embedded servlet container doesn't interfere with the servlet container to which the WAR file will be deployed.
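A sketch of the relevant POM changes follows; the coordinates are those described above, and the version is omitted on the assumption that Spring Boot's dependency management supplies it:

```xml
<packaging>war</packaging>
...
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-tomcat</artifactId>
        <scope>provided</scope>
    </dependency>
</dependencies>
```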
The Maven WAR plugin is highly configurable. It is relatively easy to include and exclude specific Java class files (or other files), create Skinny WARs, or quickly spin up the built WAR for testing by using the Jetty embedded application server plugin.
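For example, a minimal configuration sketch for the Jetty plugin is shown below (the version number here is an assumption; check the plugin documentation for a current release), which allows you to launch the packaged WAR locally with mvn jetty:run-war:

```xml
<plugin>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-maven-plugin</artifactId>
    <version>9.4.8.v20171121</version>
</plugin>
```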
When deploying Java applications to the cloud, it can be advantageous to package the resulting build artifacts in OS or VM native artifacts, as this allows you to specify more fine-grained configuration and deployment instructions, and also include additional metadata. For example, by building a Red Hat RPM Package Manager artifact (an RPM) or a Debian Software Package file (a DEB file), you can specify where your fat JAR file can be deployed within the filesystem, create a user to run this, and specify configuration. This provides much more scope for automated installs, and it helps developers and operators to collaborate and capture the required installation instructions and process.
Moving down another layer of abstraction into the machine or VM image, if you build this type of artifact, then in addition to all of the previously mentioned controls, you also have complete control of the entire OS installation and configuration. Netflix popularized the approach to deploying Java applications as complete AWS VM images—referred to as Amazon Machine Images (AMIs)—with its Aminator tooling before the introduction of container technology, which also allows a similar approach.
Creating and deploying an application as a machine or VM image is often referred to as baking an image. Using a cooking analogy, you are effectively putting all of the application deployment ingredients together and baking them in a single action before the food is ready. You will often hear that the counterapproach to this deployment style is frying, and although this stretches the cooking analogy slightly, the key idea is that the application deployment ingredients are added gradually, perhaps in layers. Deploying a Java application by using an RPM or DEB (or even a basic JAR or WAR file) is part of a frying deployment, whereas deploying an application using a machine or VM image is baking.
The advantage of baking is that you are creating immutable deployment artifacts; therefore, it is easier to understand what was deployed, and configuration drift is much less likely (with frying, it is possible that each application could be installed in subtly different ways across a large fleet of machines). The disadvantages are that this process generally takes longer than frying, and the resulting deployment artifacts can be large, which can, in turn, cause storage and network issues. The advantages of frying are that it is generally quicker and more flexible to deploy an application by using a prebuilt base image along with configuration management tooling (Chef, Ansible, Puppet, SaltStack, etc.) to deploy the smaller application layer. Although config management tools attempt to minimize configuration drift, this remains one of the main disadvantages of the frying approach.
The construction of RPM and DEB packages is relatively easy because of the Maven plugins available. The core challenges you may experience are the configuration of the installation process within the package (which requires operational/sysadmin knowledge) and how you will test the Java application outside the OS package.
When you are creating OS artifacts, you will be modifying the OS during installation of the application, and with this comes greater responsibility than simply packaging a JAR artifact. Depending on how the underlying OS is configured, you may (at worst) be able to irreparably damage the OS or render the machine unbootable. Your organization may also have a specific method or configuration for installing software, perhaps for compliance or governance reasons. Because of this, it is always advisable to consult with your organization’s operations or sysadmin team.
In general, it is recommended that you build and deploy your Java application locally as you always have done (e.g., as a fat JAR or WAR), but that all build servers and remote environments (QA, staging, and production) build and deploy using the OS package. This provides minimal hassle (and few tool changes) locally, but will catch any configuration issues early within the build pipeline.
It is generally inadvisable to configure your local build process to build the OS package on every build (mvn package). For a project of medium size or complexity, this will soon start to consume your time waiting for builds to complete, or your system resources such as CPU and storage. This advice does somewhat go against the general principle of making your local development environment as similar to production as possible, but as with everything in software development, this is a trade-off.
An RPM can be built using the RPM Maven plugin, and it is recommended that the RPM is either built on demand (see the preceding warning) or built as a side effect within the Maven life cycle.
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <groupId>uk.co.danielbryant.oreillyexamples</groupId>
    <artifactId>builddemo</artifactId>
    <version>0.1.0-SNAPSHOT</version>
    <packaging>jar</packaging>
    ...
    <build>
        <plugins>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>rpm-maven-plugin</artifactId>
                <version>2.1.5</version>
                <executions>
                    <execution>
                        <id>generate-rpm</id>
                        <goals>
                            <goal>rpm</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
You will often find yourself creating quite complicated application deployment procedures, and the plugin is generally well equipped to handle common use cases. A snippet of a typical plugin configuration (taken from the samples section of the plugin website) is shown in Example 7-13.
<configuration>
    <license>GPL (c) 2005, SWWDC</license>
    <distribution>Trash 2005</distribution>
    <group>Application/Collectors</group>
    <icon>src/main/resources/icon.gif</icon>
    <packager>SWWDC</packager>
    <prefix>/usr/local</prefix>
    <changelogFile>src/changelog</changelogFile>
    <defineStatements>
        <defineStatement>_unpackaged_files_terminate_build 0</defineStatement>
    </defineStatements>
    <mappings>
        <mapping>
            <directory>/usr/local/bin/landfill</directory>
            <filemode>440</filemode>
            <username>dumper</username>
            <groupname>dumpgroup</groupname>
            <sources>
                <source>
                    <location>target/classes</location>
                </source>
            </sources>
        </mapping>
        ...
        <mapping>
            <directory>/usr/local/lib</directory>
            <filemode>750</filemode>
            <username>dumper</username>
            <groupname>dumpgroup</groupname>
            <dependency>
                <includes>
                    <include>jmock:jmock</include>
                    <include>javax.servlet:servlet-api:2.4</include>
                </includes>
                <excludes>
                    <exclude>junit:junit</exclude>
                </excludes>
            </dependency>
        </mapping>
        ...
        <mapping>
            <directory>/usr/local/oldbin</directory>
            <filemode>750</filemode>
            <username>dumper</username>
            <groupname>dumpgroup</groupname>
            <sources>
                <softlinkSource>
                    <location>/usr/local/bin</location>
                </softlinkSource>
            </sources>
        </mapping>
        ...
    </mappings>
    <preinstallScriptlet>
        <script>echo "installing now"</script>
    </preinstallScriptlet>
    <postinstallScriptlet>
        <scriptFile>src/main/scripts/postinstall</scriptFile>
        <fileEncoding>utf-8</fileEncoding>
    </postinstallScriptlet>
    <preremoveScriptlet>
        <scriptFile>src/main/scripts/preremove</scriptFile>
        <fileEncoding>utf-8</fileEncoding>
    </preremoveScriptlet>
</configuration>
The Debian Maven plugin allows simple artifact creation for DEB files, as shown in Example 7-14.
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <groupId>uk.co.danielbryant.oreillyexamples</groupId>
    <artifactId>builddemo</artifactId>
    <version>0.1.0-SNAPSHOT</version>
    <packaging>jar</packaging>
    ...
    <build>
        <plugins>
            <plugin>
                <groupId>net.sf.debian-maven</groupId>
                <artifactId>debian-maven-plugin</artifactId>
                <version>1.0.6</version>
                <configuration>
                    <packageName>my-package</packageName>
                    <packageVersion>1.0.0</packageVersion>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
As with the RPM Maven plugin, the DEB Maven plugin also allows lots of configuration options for installing and configuring your application.
As an alternative to RPMs and DEBs, there exist other mechanisms for creating OS artifacts in order to deploy Java applications. The first is IzPack, which allows you to create installers that can deploy applications to Linux and Solaris, as well as to Microsoft Windows and macOS. The deployment and configuration of a Java application is codified in IzPack by the creation of an installation description XML file; Example 7-15 shows a sample file. This is then read in by an IzPack compiler (which can be invoked via the command line, Maven, or Ant), and an OS-specific runnable installer is created. The installer can run interactively using a Swing GUI or text console or, more usefully for continuous delivery, noninteractively by replaying recorded sessions or properties files.
<izpack:installation version="5.0"
                     xmlns:izpack="http://izpack.org/schema/installation"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="http://izpack.org/schema/installation
                                         http://izpack.org/schema/5.0/izpack-installation-5.0.xsd">
    <info>
        <appname>Test</appname>
        <appversion>0.0</appversion>
        <appsubpath>myapp</appsubpath>
        <javaversion>1.6</javaversion>
    </info>
    <locale>
        <langpack iso3="eng"/>
    </locale>
    <guiprefs width="800" height="600" resizable="no">
        <splash>images/peas_load.gif</splash>
        <laf name="substance">
            <os family="windows"/>
            <os family="unix"/>
            <param name="variant" value="mist-silver"/>
        </laf>
        <laf name="substance">
            <os family="mac"/>
            <param name="variant" value="mist-aqua"/>
        </laf>
        <modifier key="useHeadingPanel" value="yes"/>
    </guiprefs>
    <panels>
        <panel classname="TargetPanel"/>
        <panel classname="PacksPanel"/>
        <panel classname="InstallPanel"/>
        <panel classname="FinishPanel"/>
    </panels>
    <packs>
        <pack name="Test Core" required="yes">
            <description>The core files needed for the application</description>
            <fileset dir="plain" targetdir="${INSTALL_PATH}" override="true"/>
            <parsable targetfile="${INSTALL_PATH}/test.properties"/>
        </pack>
    </packs>
</izpack:installation>
Additional open source tooling in this space includes Launch4j and (Windows-specific) Nullsoft Scriptable Install System (NSIS). Many commercial tools are also available that can be found from a web search.
Packer is an open source tool from HashiCorp for creating identical machine images for multiple platforms from a single source configuration. Packer is lightweight, command-line driven, runs on every major operating system, and is highly performant, creating machine images for multiple platforms in parallel. Packer can provision images by using shell scripts or configuration management tooling like Ansible, Chef, or Puppet. Packer defines a machine image as a single static unit that contains a preconfigured operating system and installed software that is used to quickly create new running machines. Machine image formats change for each platform, and some examples include AMIs for EC2, VMDK/VMX files for VMware, and OVF exports for VirtualBox.
Packer can be installed through most operating system installers, or by visiting the Install Packer web page. The Packer Getting Started web page provides a fantastic introduction to the tool. A sample configuration file in Example 7-16 demonstrates the metadata required in order to build an AWS AMI using provisioners that copy files from a local directory to the image and run a series of scripts. These commands could, for example, copy a JAR file and a simple init script to run the application.
{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
    "region": "us-east-1"
  },
  "builders": [
    {
      "access_key": "{{user `aws_access_key`}}",
      "ami_name": "packer-linux-aws-demo-{{timestamp}}",
      "instance_type": "t2.micro",
      "region": "us-east-1",
      "secret_key": "{{user `aws_secret_key`}}",
      "source_ami_filter": {
        "filters": {
          "virtualization-type": "hvm",
          "name": "ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*",
          "root-device-type": "ebs"
        },
        "owners": ["099720109477"],
        "most_recent": true
      },
      "ssh_username": "ubuntu",
      "type": "amazon-ebs"
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "./welcome.txt",
      "destination": "/home/ubuntu/"
    },
    {
      "type": "shell",
      "inline": ["ls -al /home/ubuntu", "cat /home/ubuntu/welcome.txt"]
    },
    {
      "type": "shell",
      "script": "./example.sh"
    }
  ]
}
A Packer configuration can include multiple builders, so you can easily specify a local VirtualBox build in addition to the AWS builder. Packer is run via the packer command-line tool, and properties can be loaded in via flags or a properties file. An example run of the packer build command is shown in Example 7-17.
$ export AWS_ACCESS_KEY_ID=MYACCESSKEYID
$ export AWS_SECRET_ACCESS_KEY=MYSECRETACCESSKEY
$ packer build firstimage.json
amazon-ebs output will be in this color.

==> amazon-ebs: Prevalidating AMI Name: packer-linux-aws-demo-1507231105
    amazon-ebs: Found Image ID: ami-fce3c696
==> amazon-ebs: Creating temporary keypair: packer_59d68581-e3e6-eb35-4ae3-c98d55cfa04f
==> amazon-ebs: Creating temporary security group for this instance: packer_59d68584-cf8a-d0af-ad82-e058593945ea
==> amazon-ebs: Authorizing access to port 22 on the temporary security group...
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Adding tags to source instance
    amazon-ebs: Adding tag: "Name": "Packer Builder"
    amazon-ebs: Instance ID: i-013e8fb2ced4d714c
==> amazon-ebs: Waiting for instance (i-013e8fb2ced4d714c) to become ready...
==> amazon-ebs: Waiting for SSH to become available...
==> amazon-ebs: Connected to SSH!
==> amazon-ebs: Uploading ./scripts/welcome.txt => /home/ubuntu/
==> amazon-ebs: Provisioning with shell script: /var/folders/8t/0yb5q0_x6mb2jldqq_vjn3lr0000gn/T/packer-shell661094204
    amazon-ebs: total 32
    amazon-ebs: drwxr-xr-x 4 ubuntu ubuntu 4096 Oct  5 19:19 .
    amazon-ebs: drwxr-xr-x 3 root   root   4096 Oct  5 19:19 ..
    amazon-ebs: -rw-r--r-- 1 ubuntu ubuntu  220 Apr  9  2014 .bash_logout
    amazon-ebs: -rw-r--r-- 1 ubuntu ubuntu 3637 Apr  9  2014 .bashrc
    amazon-ebs: drwx------ 2 ubuntu ubuntu 4096 Oct  5 19:19 .cache
    amazon-ebs: -rw-r--r-- 1 ubuntu ubuntu  675 Apr  9  2014 .profile
    amazon-ebs: drwx------ 2 ubuntu ubuntu 4096 Oct  5 19:19 .ssh
    amazon-ebs: -rw-r--r-- 1 ubuntu ubuntu   18 Oct  5 19:19 welcome.txt
    amazon-ebs: WELCOME TO PACKER!
==> amazon-ebs: Provisioning with shell script: ./example.sh
    amazon-ebs: hello
==> amazon-ebs: Stopping the source instance...
    amazon-ebs: Stopping instance, attempt 1
==> amazon-ebs: Waiting for the instance to stop...
==> amazon-ebs: Creating the AMI: packer-linux-aws-demo-1507231105
    amazon-ebs: AMI: ami-f76ea98d
==> amazon-ebs: Waiting for AMI to become ready...
Packer can create images for Microsoft Windows machines and the corresponding cloud instances, and there is also open source support, such as osx-vm-templates, for creating macOS images.
Several additional open source image creation solutions exist, such as Netflix's Aminator (a tool for creating AWS AMIs) and Veewee (a tool for creating Vagrant base boxes, kernel-based virtual machine (KVM) images, and VM images). However, Aminator is AWS-specific and requires that your Java application be packaged as an RPM or DEB before installation, and Veewee doesn't support the majority of the main cloud vendors' image formats and requires Ruby to be installed.
You can also use open source multicloud Java toolkits like jclouds to create machine images, but doing so is outside the scope of this book. If you are interested, you can explore the jclouds ImageApi JavaDoc. Finally, commercial machine-image creation tools are also available; for example, Boxfuse can create AWS AMIs for deploying JVM, Node.js, and Go applications. It offers comprehensive support for a range of Java web frameworks, such as Spring Boot, Dropwizard, and Play, and images are built using a simple (automatable) command-line tool.
If you want to package and deploy applications as machine images, we recommend using HashiCorp Packer, as described in the previous section of this chapter. Packer is a fully open source tool that is configurable and supports multiple platforms. This allows you to look into the internals to see what is happening during a build, tweak the build steps and configuration, and change the output format (with minimal changes) if you move to deploying to a new platform.
The main trade-off with using Packer in comparison with a commercial tool like Boxfuse is user experience. In our opinion, the HashiCorp tools are generally awesome, but they do assume a certain level of operational awareness, which not all developers have.
Deploying Java applications to container platforms like Docker requires not only that the Java application artifact be created, but also that a container image be built.
By packaging your Java artifact within a container image, you will be exposed to potentially new operational concerns. For example, you will have to specify a base image with an operating system and associated tooling to be used as a foundation for your image (or use a Google Distroless base image), and also configure ports to be exposed and the execution method of the JVM and Java application. We recommend consulting your operations or platform team if this is the first time you are doing this, or if you are unsure as to what the values should be.
Building a Docker image requires the creation of a Dockerfile, which is essentially an image manifest that specifies the base operating system, the application artifacts to be added, and associated runtime configuration; see Example 7-18.
FROM openjdk:8-jre
ADD target/productcatalogue-0.0.1-SNAPSHOT.jar app.jar
ADD product-catalogue.yml app-config.yml
EXPOSE 8020
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","app.jar","server","app-config.yml"]
Once you have your Dockerfile, you can build and tag a Docker image by using the commands in Example 7-19.
$ docker build -t danielbryantuk/productcatalogue:1.1 .
Sending build context to Docker daemon  15.56MB
Step 1/5 : FROM openjdk:8-jre
 ---> 8363d7ceb7b7
Step 2/5 : ADD target/productcatalogue-0.0.1-SNAPSHOT.jar app.jar
 ---> 664d4edcb774
Step 3/5 : ADD product-catalogue.yml app-config.yml
 ---> 8c732b560055
Step 4/5 : EXPOSE 8020
 ---> Running in 3955d790a531
 ---> 738157101d64
Removing intermediate container 3955d790a531
Step 5/5 : ENTRYPOINT java -Djava.security.egd=file:/dev/./urandom -jar app.jar server app-config.yml
 ---> Running in 374eb13492e7
 ---> e504828640df
Removing intermediate container 374eb13492e7
Successfully built e504828640df
Successfully tagged danielbryantuk/productcatalogue:1.1
fabric8 is an open source project stewarded by Red Hat that aims to provide an end-to-end platform, from development through to production, for the creation of cloud-native applications and microservices. You can build, test, and deploy your applications via continuous delivery pipelines, and then run and manage them with ChatOps tooling. You will explore fabric8 in more detail later in the book, but in this chapter you will learn about one relevant and useful feature: fabric8 provides a Maven plugin that makes building Docker images easy. The Docker Maven plugin not only allows container images to be built, but also lets you run containers, perhaps during an integration test. Example 7-20 shows how the plugin can be used to build a container image.
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
         http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <groupId>uk.co.danielbryant.oreillyexamples</groupId>
  <artifactId>builddemo</artifactId>
  <version>0.1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
  ...
  <build>
    <plugins>
      <plugin>
        <groupId>io.fabric8</groupId>
        <artifactId>docker-maven-plugin</artifactId>
        <configuration>
          <images>
            <image>
              <alias>service</alias>
              <name>fabric8/docker-demo:${project.version}</name>
              <build>
                <from>java:8</from>
                <assembly>
                  <descriptor>docker-assembly.xml</descriptor>
                </assembly>
                <cmd>
                  <shell>java -jar /maven/service.jar</shell>
                </cmd>
              </build>
            </image>
          </images>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
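The docker-assembly.xml descriptor referenced in the plugin configuration is a standard Maven assembly descriptor that controls which files are copied into the image. A minimal sketch might look like the following; the `include` coordinates and the `service.jar` output name are illustrative assumptions based on the pom above, and the exact values will depend on your project:

```xml
<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2">
  <dependencySets>
    <dependencySet>
      <includes>
        <include>uk.co.danielbryant.oreillyexamples:builddemo</include>
      </includes>
      <outputFileNameMapping>service.jar</outputFileNameMapping>
    </dependencySet>
  </dependencySets>
</assembly>
```

The assembled files are placed under /maven inside the image by default, which is why the plugin's `cmd` runs `java -jar /maven/service.jar`.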
FaaS Java application code can typically be uploaded to the service as either a fat JAR (which shouldn't be executable) or a ZIP file.
When building a fat JAR, the AWS Lambda guide recommends using the Maven Shade plugin. If you look at the example pom.xml file in Example 7-21, you will see the aws-lambda-java-core dependency, which you can reference within your source code (and which does not affect the build life cycle), and you can also see the Shade plugin with the createDependencyReducedPom configuration set to false. This is because a FaaS Java application uploaded to the AWS Lambda service must include all of its dependencies.
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
         http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>doc-examples</groupId>
  <artifactId>lambda-java-example</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>lambda-java-example</name>
  <dependencies>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-lambda-java-core</artifactId>
      <version>1.1.0</version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>2.3</version>
        <configuration>
          <createDependencyReducedPom>false</createDependencyReducedPom>
        </configuration>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
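With a pom.xml like the one above, the application code itself is simply a class implementing the RequestHandler interface from aws-lambda-java-core. The following is a minimal sketch only: so that the snippet is self-contained, a package-private stand-in for RequestHandler is declared here, with the Context parameter simplified to Object; in a real project you would instead import com.amazonaws.services.lambda.runtime.RequestHandler and Context from the aws-lambda-java-core dependency.

```java
// Stand-in for com.amazonaws.services.lambda.runtime.RequestHandler,
// declared here only so this sketch compiles without the AWS library.
// The real interface's second parameter is a Context, not an Object.
interface RequestHandler<I, O> {
    O handleRequest(I input, Object context);
}

// A minimal Lambda-style handler: the Lambda runtime invokes
// handleRequest with the deserialized input event.
public class HelloHandler implements RequestHandler<String, String> {
    @Override
    public String handleRequest(String input, Object context) {
        return "Hello, " + input;
    }
}
```

After running mvn package on such a project, the resulting shaded JAR (containing this class plus all dependencies) is what you upload to the Lambda service.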
The Azure Functions documentation recommends using the maven-dependency-plugin to package all of the relevant class and configuration files appropriately; this can be seen in the pom.xml file generated by the Maven archetype, as shown in Example 7-22.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
         http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>helloworld</groupId>
  <artifactId>ProductCatalogue</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>Azure Java Functions</name>
  <dependencyManagement>
    <dependencies>
      ...
    </dependencies>
  </dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.microsoft.azure.functions</groupId>
      <artifactId>azure-functions-java-library</artifactId>
    </dependency>
    ...
  </dependencies>
  <build>
    <pluginManagement>
      <plugins>
        ...
      </plugins>
    </pluginManagement>
    <plugins>
      ...
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-dependency-plugin</artifactId>
        <executions>
          <execution>
            <id>copy-dependencies</id>
            <phase>prepare-package</phase>
            <goals>
              <goal>copy-dependencies</goal>
            </goals>
            <configuration>
              <outputDirectory>${stagingDirectory}/lib</outputDirectory>
              <overWriteReleases>false</overWriteReleases>
              <overWriteSnapshots>false</overWriteSnapshots>
              <overWriteIfNewer>true</overWriteIfNewer>
              <includeScope>runtime</includeScope>
              <excludeArtifactIds>azure-functions-java-library</excludeArtifactIds>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
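For completeness, the function code itself in such a project is an annotated Java method. The sketch below is illustrative only: it declares a package-private stand-in for the @FunctionName annotation so that it compiles on its own, whereas in a real project this annotation (and trigger annotations such as @HttpTrigger) come from the azure-functions-java-library dependency declared above.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Stand-in for com.microsoft.azure.functions.annotation.FunctionName,
// declared here only so this sketch compiles without the Azure library.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface FunctionName {
    String value();
}

// A minimal Azure Functions-style entry point: the Functions runtime
// discovers the annotated method and invokes it when triggered.
public class HelloFunction {
    @FunctionName("hello")
    public String run(String name) {
        return "Hello, " + name;
    }
}
```

The maven-dependency-plugin configuration shown in the pom copies this class's runtime dependencies into the staging directory that is ultimately zipped and deployed.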
Creating an AWS Lambda or Azure Function artifact is as simple as running mvn package.
In Chapter 8, you will learn how to work locally with AWS Lambda and Azure Functions and how to deploy locally and test a FaaS-based Java application.
In this chapter, you have learned all you need to know about building JAR files. You have also learned about other packaging options that are available, and explored the creation of lower-level deployment artifacts like machine and container images:
Understanding (in-depth) how a JAR file is built is essential. You can use this knowledge when creating artifacts for any platform and when you are debugging build issues like class-loading problems.
You can create executable fat JARs or skinny JARs, depending on your requirements and constraints.
There are Maven (and other build tool) plugins for creating OS artifacts like DEBs and RPMs, and for creating container images.
You can use a tool like HashiCorp's Packer to create and package Java applications into a variety of machine images for test and production deployment across hypervisors like VirtualBox and cloud platforms like AWS or Azure.
FaaS applications are typically packaged in the same way as traditional Java applications, by using JARs and tooling like the Maven Shade or Maven Dependency plugin for managing dependencies.
Now that you have developed a good understanding of building and packaging Java applications, it is time to work on ensuring that your pre-pipeline local development process is as effective as possible. This is the topic of the next chapter.