A Layer 7 (L7) load balancer is similar to an L4 load balancer, but it uses information from the highest layer of the OSI model: the application layer. For web services like our API, that application-layer protocol is the Hypertext Transfer Protocol (HTTP).
An L7 load balancer can route a request based on the URL, HTTP headers (for example, Content-Type), cookies, the contents of the message body, the client's IP address, and other application-level information.
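To make this concrete, here is a minimal sketch of what such L7 routing rules could look like in an HAProxy configuration; the frontend and backend names, the path prefix, and the header value are hypothetical placeholders, not a recommended setup:

```
frontend api_frontend
    bind *:80
    # L7 rule: match on the URL path
    acl is_images path_beg /images
    # L7 rule: match on an HTTP header
    acl wants_json hdr(Accept) -i application/json
    use_backend image_servers if is_images
    use_backend json_servers if wants_json
    default_backend api_servers
```

Each backend referenced by `use_backend` or `default_backend` would be defined in its own `backend` section elsewhere in the same configuration file.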
By working on the application layer, an L7 load balancer has several advantages over an L4 load balancer:
- Smarter: Because L7 load balancers can base their decisions on more information, such as the client's geolocation data, they can offer more sophisticated routing rules than L4 load balancers.
- More capabilities: Because L7 load balancers have access to the message content, they can also alter the message, for example by encrypting and/or compressing the body.
- Cloud load balancing: Because L4 load balancers are typically hardware devices, cloud providers usually do not allow you to configure them. In contrast, L7 load balancers are typically software, which the developer can configure and manage directly.
- Ease of debugging: They can use cookies to keep the same client hitting the same backend server. This is a must if you implement stateful logic such as "sticky" sessions, but it is also advantageous when debugging: you only have to parse the logs of one backend server instead of all of them (see the sketch after this list).
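As a rough sketch of the cookie-based stickiness mentioned above, an HAProxy backend can insert a cookie that pins each client to one server; the server names and addresses below are made up for illustration:

```
backend api_servers
    balance roundrobin
    # Insert a SERVERID cookie on the first response; subsequent
    # requests carrying that cookie go back to the same server
    cookie SERVERID insert indirect nocache
    server api1 10.0.0.11:8080 check cookie api1
    server api2 10.0.0.12:8080 check cookie api2
```

With this in place, all of a given client's requests (and therefore its log lines) land on one backend server, which is what makes per-client debugging easier.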
However, L7 load balancers are not always "better" than their L4 counterparts. They require more system resources and add more latency, because they must inspect and evaluate more of each request. That said, this extra latency is not significant enough for us to worry about.
There are currently a few production-ready L7 load balancers on the market—High Availability Proxy (HAProxy), NGINX, and Envoy. We will look into deploying a distributed load balancer in front of our backend servers later in this chapter.