While you can use Volumes to persist data for individual Pods, this won't work for our StatefulSet. This is because each of the replica Elasticsearch nodes will try to write to the same files at the same time; only one will succeed, and the others will fail. If you tried, you'd find the Pods stuck in a state like this:
$ kubectl get pods
NAME READY STATUS RESTARTS
elasticsearch-0 1/1 Running 0
elasticsearch-1 0/1 CrashLoopBackOff 7
elasticsearch-2 0/1 CrashLoopBackOff 7
If you use kubectl logs to inspect one of the failing Pods, you'll see the following error message:
$ kubectl logs elasticsearch-1
[WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: failed to obtain node locks, tried [[/usr/share/elasticsearch/data/docker-cluster]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
Basically, before an Elasticsearch instance writes to its data files, it creates a node.lock file. When another instance tries to write to the same files, it detects the node.lock file and aborts.
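For context, here's a rough sketch of the kind of spec that produces this failure, where every replica mounts the same Volume. The claim name es-data-shared, the image tag, and the labels are illustrative, not taken from our manifests:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.17.9
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
      # Every replica mounts this one claim, so all three Elasticsearch
      # nodes write to the same data directory and only the first one
      # manages to obtain the node lock.
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: es-data-shared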
Apart from this issue, attaching Volumes directly to Pods is problematic for another reason: Volumes persist data at the Pod level, but Pods can get rescheduled to other Nodes. When this happens, the "old" Pod is destroyed, along with its associated Volume, and a new Pod is deployed on a different Node with a blank Volume.
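As an illustration, a Pod-level Volume such as the emptyDir in this minimal sketch is created when the Pod is scheduled onto a Node and deleted along with the Pod, so its contents don't follow the Pod to a new Node (the Pod name and image tag here are just placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: elasticsearch-standalone
spec:
  containers:
    - name: elasticsearch
      image: docker.elastic.co/elasticsearch/elasticsearch:7.17.9
      volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
  volumes:
    # An emptyDir lives and dies with the Pod: if the Pod is rescheduled,
    # the replacement starts with an empty directory.
    - name: data
      emptyDir: {}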
Finally, scaling storage this way is also difficult. If the Pod requires more storage, you'll have to destroy the Pod (so it doesn't write anything to the Volume), create a new, larger Volume, copy the contents from the old Volume to the new one, and then restart the Pod.