As mentioned in the official Elasticsearch Guide (https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#_notes_for_production_use_and_defaults), we must configure the node running Elasticsearch in certain ways when deploying to production. For instance:
- By default, Elasticsearch uses an mmapfs directory to store its indices. However, most systems set a limit of 65530 on mmap counts, which means Elasticsearch may run out of memory for its indices. If we do not change this setting, we'll encounter the following error when trying to run Elasticsearch:
[INFO ][o.e.b.BootstrapChecks ] [6tcspAO] bound or publishing to a non-loopback address, enforcing bootstrap checks
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Therefore, we should change the vm.max_map_count kernel setting to at least 262144. This can be done temporarily by running sysctl -w vm.max_map_count=262144, or permanently by adding the setting to a new file at /etc/sysctl.d/elasticsearch.conf.
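The steps above can be sketched as a short shell session (run the write commands as root; the file name elasticsearch.conf is just a convention for sysctl drop-in files):

```shell
# Inspect the current limit (65530 is the common default that trips the check)
cat /proc/sys/vm/max_map_count

# As root: raise the limit for the running kernel (not persisted across reboots)
sysctl -w vm.max_map_count=262144

# As root: persist the setting in a sysctl drop-in file, then reload
echo 'vm.max_map_count=262144' > /etc/sysctl.d/elasticsearch.conf
sysctl --system
```

After a reboot, any file under /etc/sysctl.d/ is applied automatically, so the drop-in file makes the temporary sysctl -w change permanent.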
- UNIX systems impose an upper limit on the number of open files or, more specifically, the number of file descriptors a process may hold. If a process goes over that limit, any attempt to open a new file fails with the error Too many open files.
There's a global limit for the kernel, stored at /proc/sys/fs/file-max; on most systems, this is a large number like 2424348. There's also a hard and a soft limit per user: hard limits can only be raised by root, while soft limits can be changed by the user, but never above the hard limit. You can check the soft limit on file descriptors by running ulimit -Sn; on most systems, this defaults to 1024. You can check the hard limit by running ulimit -Hn; on my machine, for example, the hard limit is 1048576.
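To see where your own system stands, the three limits described above can be read like this (the exact numbers will vary from machine to machine):

```shell
# Kernel-wide ceiling on open file descriptors, shared by all processes
cat /proc/sys/fs/file-max

# Per-user soft limit on file descriptors (often defaults to 1024)
ulimit -Sn

# Per-user hard limit; the soft limit can be raised at most up to this value
ulimit -Hn
```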
Elasticsearch recommends raising both the soft and the hard limit to at least 65536. This can be done by running ulimit -n 65536 as root before starting Elasticsearch.
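A minimal sketch of raising the limit, assuming a root shell; note that ulimit only affects the current shell and its children, so for a change that survives logins you would typically use pam_limits entries in /etc/security/limits.conf (the user name elasticsearch below is an assumption, not something the guide prescribes):

```shell
# Raise the open-file limit for this shell and any process it starts;
# run as root so the hard limit can be raised as well
ulimit -n 65536

# Verify the new soft and hard limits
ulimit -Sn
ulimit -Hn

# For a persistent, per-user limit, pam_limits reads entries like these from
# /etc/security/limits.conf ("elasticsearch" is an assumed service user name):
#   elasticsearch soft nofile 65536
#   elasticsearch hard nofile 65536
```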
We need to make these changes on every node in our cluster. But first, let's return to our DigitalOcean dashboard to see whether our nodes were created successfully.