As the Redis documentation notes,[6] it is preferable to deploy Redis on a physical machine rather than a VM. This is because a VM has higher intrinsic latency, that is, latency we cannot improve upon with any amount of server or application configuration.
The redis-cli utility provides a way to measure intrinsic latency. Simply run the following on your Redis server (not the client), from the Redis source directory. It will measure latency on the machine (the Redis server process does not need to be running) for a period of 30 seconds:
src/redis-cli --intrinsic-latency 30
Running this command (after installing Redis) will return output similar to this:
Max latency so far: 1 microseconds.
Max latency so far: 2 microseconds.
Max latency so far: 44 microseconds.
Max latency so far: 54 microseconds.
Max latency so far: 59 microseconds.
284459835 total runs (avg latency: 0.1055 microseconds / 105.46 nanoseconds per run).
Worst run took 559x longer than the average latency.
We'll run this test on a few different implementations (checking intrinsic latency for a period of 30 seconds), and receive the following results:
Machine #1: AMD A8-5500, quad core, 8GB RAM, Ubuntu 16.04
Machine #2: Intel i7-4870, quad core, 16GB RAM, OSX 10.12
Machine #3: VM hosted on OpenStack cloud, dual core, 16GB RAM, CentOS 7.3
The following table shows intrinsic latencies for the three implementations, measured in microseconds (μs):

| | M1 (metal) | M2 (metal) | M3 (OpenStack) |
|---|---|---|---|
| Max latency | 59 μs | 96 μs | 100 μs |
| Avg latency | 0.1055 μs | 0.0738 μs | 0.1114 μs |
| Worst run relative to avg latency | 559x | 1300x | 898x |
This test measures how much CPU time the redis-cli process is getting (or failing to get). It is important for application architects to run this test on their target platforms to understand the latencies their deployments may be subject to.
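When running this test across several machines, it can be convenient to extract the key figures from the tool's output programmatically. Below is a minimal sketch in Python; the regular expressions are assumptions based on the sample output shown earlier, not a documented output format:

```python
import re

# Sample output from `src/redis-cli --intrinsic-latency 30`
# (abbreviated from the example shown earlier).
output = (
    "Max latency so far: 59 microseconds.\n"
    "284459835 total runs (avg latency: 0.1055 microseconds / "
    "105.46 nanoseconds per run).\n"
    "Worst run took 559x longer than the average latency."
)

def parse_intrinsic_latency(text):
    """Extract the max latency, average latency (both in microseconds),
    and worst-run multiplier from redis-cli intrinsic-latency output."""
    # The tool prints a running "Max latency so far" line; take the largest.
    max_us = max(
        float(m)
        for m in re.findall(r"Max latency so far: (\d+) microseconds", text)
    )
    avg_us = float(
        re.search(r"avg latency: ([\d.]+) microseconds", text).group(1)
    )
    worst_x = int(re.search(r"Worst run took (\d+)x", text).group(1))
    return {"max_us": max_us, "avg_us": avg_us, "worst_x": worst_x}

result = parse_intrinsic_latency(output)
print(result)  # {'max_us': 59.0, 'avg_us': 0.1055, 'worst_x': 559}
```

Running the script against each machine's output makes it straightforward to build a comparison table like the one above.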