We can now combine these subdomains into a comma-separated list, and use it as the value for the discovery.zen.ping.unicast.hosts environment variable we are passing into the Elasticsearch containers. Update the manifests/elasticsearch/stateful-set.yaml file to read the following:
env:
- name: discovery.zen.ping.unicast.hosts
  value: "elasticsearch-0.elasticsearch.default.svc.cluster.local,elasticsearch-1.elasticsearch.default.svc.cluster.local,elasticsearch-2.elasticsearch.default.svc.cluster.local"
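If you'd rather not type the list out by hand, one way to generate it is a small coreutils pipeline (a sketch, assuming three replicas in the default namespace and the naming used throughout this chapter):

```shell
# Build the comma-separated list of per-Pod subdomains for replicas 0-2:
# expand each ordinal into its full DNS name, then join the lines with commas
seq 0 2 \
  | sed 's|.*|elasticsearch-&.elasticsearch.default.svc.cluster.local|' \
  | paste -sd, -
```

This prints exactly the value used above, and scales to more replicas by changing the upper bound passed to seq.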
The final stateful-set.yaml should read as follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  replicas: 3
  serviceName: elasticsearch
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      name: elasticsearch
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.3.2
        ports:
        - containerPort: 9200
        - containerPort: 9300
        env:
        - name: discovery.zen.ping.unicast.hosts
          value: "elasticsearch-0.elasticsearch.default.svc.cluster.local,elasticsearch-1.elasticsearch.default.svc.cluster.local,elasticsearch-2.elasticsearch.default.svc.cluster.local"
Now, we can add this StatefulSet to our cluster by running kubectl apply:
$ kubectl apply -f manifests/elasticsearch/stateful-set.yaml
statefulset.apps "elasticsearch" created
We can check that the StatefulSet is deployed by running kubectl get statefulsets:
$ kubectl get statefulsets
NAME            DESIRED   CURRENT   AGE
elasticsearch   3         3         42s
We should also check that the Pods are deployed and running:
$ kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
elasticsearch-0   1/1       Running   0          1m
elasticsearch-1   1/1       Running   0          1m
elasticsearch-2   1/1       Running   0          1m
Note how each Pod now has a name with the structure <statefulset-name>-<ordinal>.
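Because the ordinal naming is deterministic, the Pod names can be derived without querying the cluster at all (a sketch using the StatefulSet name and replica count from our manifest):

```shell
# Derive the Pod names <statefulset-name>-<ordinal> for a StatefulSet
# named "elasticsearch" with three replicas
statefulset=elasticsearch
replicas=3
for i in $(seq 0 $((replicas - 1))); do
  echo "${statefulset}-${i}"
done
```

This is what makes the discovery.zen.ping.unicast.hosts list above predictable: the per-Pod subdomains are known before the Pods even exist.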
Now, let's curl port 9200 of each Pod and see if the Elasticsearch Nodes have discovered each other and have collectively formed a single cluster. We will be using the -o flag of kubectl get pods to extract the IP address of each Pod. The -o flag allows you to specify custom formats for your output. For example, you can get a table of Pod names and IPs:
$ kubectl get pods -l app=elasticsearch -o=custom-columns=NAME:.metadata.name,IP:.status.podIP
NAME              IP
elasticsearch-0   172.17.0.4
elasticsearch-1   172.17.0.5
elasticsearch-2   172.17.0.6
We will run the following command to get the Cluster ID of the Elasticsearch node running on Pod elasticsearch-0:
$ curl -s $(kubectl get pod elasticsearch-0 -o=jsonpath='{.status.podIP}'):9200 | jq -r '.cluster_uuid'
eeDC2IJeRN6TOBr227CStA
kubectl get pod elasticsearch-0 -o=jsonpath='{.status.podIP}' returns the IP address of the Pod, which is then used to curl port 9200 of that IP; the -s flag silences the progress information that cURL normally prints to stdout. Lastly, the JSON returned by Elasticsearch is piped into the jq tool, which extracts the cluster_uuid field from the JSON object.
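To see the jq step in isolation, you can run it against a stub of the JSON that Elasticsearch serves from its root endpoint (the field names below match the real response; the values are copied from the output in this section, and jq is assumed to be installed):

```shell
# Extract cluster_uuid from a stubbed copy of the Elasticsearch root response;
# the -r flag prints the raw string without surrounding quotes
echo '{"name":"eq9YcUz","cluster_name":"docker-cluster","cluster_uuid":"eeDC2IJeRN6TOBr227CStA"}' \
  | jq -r '.cluster_uuid'
```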
The end result gives an Elasticsearch Cluster ID of eeDC2IJeRN6TOBr227CStA. Repeat the same step for the other Pods to confirm that they've successfully performed Node Discovery and are part of the same Elasticsearch Cluster:
$ curl -s $(kubectl get pod elasticsearch-1 -o=jsonpath='{.status.podIP}'):9200 | jq -r '.cluster_uuid'
eeDC2IJeRN6TOBr227CStA
$ curl -s $(kubectl get pod elasticsearch-2 -o=jsonpath='{.status.podIP}'):9200 | jq -r '.cluster_uuid'
eeDC2IJeRN6TOBr227CStA
Perfect! Another way to confirm this is to send a GET /_cluster/state request to any one of the Elasticsearch nodes:
$ curl "$(kubectl get pod elasticsearch-2 -o=jsonpath='{.status.podIP}'):9200/_cluster/state/master_node,nodes/?pretty"
{
  "cluster_name" : "docker-cluster",
  "compressed_size_in_bytes" : 874,
  "master_node" : "eq9YcUzVQaiswrPbwO7oFg",
  "nodes" : {
    "lp4lOSK9QzC3q-YEsqwRyQ" : {
      "name" : "lp4lOSK",
      "ephemeral_id" : "e58QpjvBR7iS15FhzN0zow",
      "transport_address" : "172.17.0.5:9300",
      "attributes" : { }
    },
    "eq9YcUzVQaiswrPbwO7oFg" : {
      "name" : "eq9YcUz",
      "ephemeral_id" : "q7zlTKCqSo2qskkY8oSStw",
      "transport_address" : "172.17.0.4:9300",
      "attributes" : { }
    },
    "77CpcuDDSom7hTpWz8hBLQ" : {
      "name" : "77CpcuD",
      "ephemeral_id" : "-yq7bhphQ5mF5JX4qqXHoQ",
      "transport_address" : "172.17.0.6:9300",
      "attributes" : { }
    }
  }
}
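If you only care about the transport addresses rather than the full state document, jq can iterate over the nodes object too (a sketch run against a trimmed stub of the response above; against a live cluster you would pipe the curl output in instead):

```shell
# List each node's transport_address from a trimmed stub of the
# _cluster/state/master_node,nodes response; .nodes[] iterates over
# the values of the nodes object
echo '{"nodes":{"a":{"transport_address":"172.17.0.5:9300"},"b":{"transport_address":"172.17.0.4:9300"}}}' \
  | jq -r '.nodes[].transport_address'
```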