Kubernetes by Parts: etcd cluster (3)
etcd is a distributed key-value store. Kubernetes uses it to persist all cluster state (not to be confused with application data, which lives elsewhere). In principle, Kubernetes could be backed by a relational database such as PostgreSQL, or by another key-value store such as Redis; Rancher's k3s, for example, defaults to sqlite3 instead. etcd is the younger technology, and specifically in the context of Kubernetes it is more performant and reliable than the alternatives.
- Peering certificates for etcd generated in the previous chapter. Either one certificate for all members, or three unique certificates.
- The CA that issued (signed) those certificates.
In this part, we will deploy an etcd cluster directly onto the three Linux servers you have prepared for the tutorial.
The following text closely follows the official etcd clustering docs: https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/clustering.md#tls Also consult https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/configuration.md (or the man page) for all the configuration flags you set.
First, install etcd from a package (packaged versions tend to lag behind a bit) or download the binaries from an official release: https://github.com/etcd-io/etcd/releases.
Skim through the options in etcd --help to get an idea of what is configurable. Take special interest in these sections:
- Member — options for a single process that may differ between hosts. Obviously you want as few differences as possible to simplify deployment and operations, but nothing in etcd's design requires uniformity.
- Clustering — specifies how the distinct etcd processes running on different hosts connect to form a cluster.
- Security — somewhat complex but necessary configuration. The options follow the client certificate and server TLS + CA validation pattern we will see in all remaining Kubernetes components.
Prepare a set of three commands to start an etcd cluster on your servers. Use the certificates we generated in the previous chapter for authentication and TLS encryption. Configure the processes to automatically form a three-member initial cluster (as opposed to starting a single process and manually adding members via etcdctl member add).
This is the final configuration for first etcd member we used:
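A sketch of such a startup command; the member name, IP addresses, and certificate paths below are placeholders, not the exact values from our setup:

```shell
# Hypothetical names, addresses, and paths -- adjust to your environment.
etcd \
  --name infra0 \
  --data-dir /var/lib/etcd \
  --listen-peer-urls https://0.0.0.0:2380 \
  --listen-client-urls https://0.0.0.0:2379 \
  --initial-advertise-peer-urls https://10.0.0.1:2380 \
  --advertise-client-urls https://10.0.0.1:2379 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster infra0=https://10.0.0.1:2380,infra1=https://10.0.0.2:2380,infra2=https://10.0.0.3:2380 \
  --initial-cluster-state new \
  --cert-file /etc/etcd/server.pem \
  --key-file /etc/etcd/server-key.pem \
  --trusted-ca-file /etc/etcd/ca.pem \
  --client-cert-auth \
  --peer-cert-file /etc/etcd/peer.pem \
  --peer-key-file /etc/etcd/peer-key.pem \
  --peer-trusted-ca-file /etc/etcd/ca.pem \
  --peer-client-cert-auth
```

The other two members differ only in --name and the advertised URLs.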
The listen (bind) peer and client URLs are set to 0.0.0.0, which means listen on all available interfaces. The initial advertisement options announce only the local member. The cluster token is an arbitrary string identifying the cluster on the network. We use the initial cluster option to declare all members before even starting the servers. Alternatively, we could start a single etcd instance, manually add a second member, start the second etcd instance, and so on. With the initial cluster option we don't have to rely on a specific initialization order. The remaining options are the certificates we generated in the previous chapter.
If at this point a member fails to join the cluster with `tls: failed to verify client's certificate: x509: certificate specifies an incompatible key usage`, you generated its certificates with an invalid CA profile that did not grant both the server and client key usages. This is discussed further in the previous chapter.
Use etcdctl to connect to the newly formed cluster and list its members. You will need to specify a CA for etcdctl to validate server certificates, and both the private and public client keys for the etcd cluster to authorize our access.
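For example, using the v3 API (endpoint address and certificate paths are placeholders):

```shell
# Hypothetical endpoint and cert paths -- adjust to your environment.
ETCDCTL_API=3 etcdctl \
  --endpoints https://10.0.0.1:2379 \
  --cacert /etc/etcd/ca.pem \
  --cert /etc/etcd/client.pem \
  --key /etc/etcd/client-key.pem \
  member list
```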
Query the etcd cluster to verify that all three pre-configured members successfully joined and have the expected peering and client addresses. The screenshot is from etcd version 3.4, which introduced the learner capability.
If not all three members are listed or you are refused connection altogether, investigate the logs. Common failures are invalid certificates (invalid signing profile, mismatched SANs, incorrect CA, …) and misconfigured cert paths.
The etcd processes we started are ephemeral and will not restart after a crash or system reboot. Create a systemd service to ensure the etcd process re-launches.
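A minimal unit file sketch, assuming the binary lives at /usr/local/bin/etcd and the flags are collected into a config file (both paths are illustrative):

```ini
# /etc/systemd/system/etcd.service -- hypothetical paths
[Unit]
Description=etcd key-value store
Documentation=https://etcd.io
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file /etc/etcd/etcd.conf.yml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now etcd`.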
Alternative steps and improvements
Instead of downloading binaries directly onto the servers and installing dependencies on the host, it is possible to deploy etcd as a container on each host. One advantage is, as with containerization in general, a somewhat simpler deployment. You will have to install a container runtime (which is otherwise needed only for the kubelet to run pods). Note that nothing requires you to use the same container runtime the kubelet will use, but it makes sense from an operational perspective, so we suggest planning ahead and using the same runtime for both.
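A sketch of running etcd in a container with Docker; the image tag and mounts are illustrative assumptions:

```shell
# Hypothetical image tag and paths -- adjust to your environment.
docker run -d --name etcd \
  --network host \
  --restart unless-stopped \
  -v /var/lib/etcd:/var/lib/etcd \
  -v /etc/etcd:/etc/etcd:ro \
  quay.io/coreos/etcd:v3.4.13 \
  etcd --config-file /etc/etcd/etcd.conf.yml
```

Host networking keeps the peer and client URLs identical to the bare-metal setup.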
Better yet, you can run the etcd containers as Kubernetes pods. The kubelet has a concept of static pods for exactly this purpose. Static pods have their complete specification stored directly on the host, so they can start even without a functional Kubernetes cluster. Traditionally, the kubelet connects to the kube-apiserver, which in turn connects to etcd.
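A static pod is just a pod manifest dropped into the kubelet's manifest directory. A sketch, with the image tag, file paths, and config layout all being assumptions:

```yaml
# /etc/kubernetes/manifests/etcd.yaml -- hypothetical static pod sketch
apiVersion: v1
kind: Pod
metadata:
  name: etcd
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: etcd
    image: quay.io/coreos/etcd:v3.4.13
    command: ["etcd", "--config-file", "/etc/etcd/etcd.conf.yml"]
    volumeMounts:
    - name: etcd-data
      mountPath: /var/lib/etcd
    - name: etcd-config
      mountPath: /etc/etcd
      readOnly: true
  volumes:
  - name: etcd-data
    hostPath:
      path: /var/lib/etcd
  - name: etcd-config
    hostPath:
      path: /etc/etcd
```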
The following timeline illustrates how the etcd cluster is formed with the kubelet:
- At first, no etcd process runs on any server (and thus kube-apiserver is not starting)
- The kubelet process is started, falls into a loop, and does multiple things at once:
  - reads the static pod configuration for etcd and creates the pod
  - reads the static pod configuration for kube-apiserver and creates the pod
  - attempts to connect to kube-apiserver and fails
- kube-apiserver containers flap between running and failed as they cannot connect to etcd
- the kubelet loops and periodically outputs an error informing that it failed to connect to kube-apiserver
- etcd processes start and form a quorum
- kube-apiserver stops flapping and starts exposing the Kubernetes API
- the kubelet connects to the kube-apiserver
We will not use this form of self-hosting, as it results in complex component dependencies. Throughout the series we will repeatedly illustrate broken or missing functionality and only then deploy the components that fix it; self-hosting requires many components at once and as such is harder to learn the main concepts on. We suggest completing the series as-is first and then building a self-hosted cluster.