Kubernetes by Parts: Intro (1)
This tutorial builds on the legacy of Kubernetes The Hard Way by Kelsey Hightower. While KTHW is a great resource, it is aging and does not use many of the newer Kubernetes features that make manual cluster deployment both easier and more robust.
You will find only a few commands and configuration snippets in this tutorial; we aim to explain the concepts. Instead of blindly copying and pasting commands, we encourage you to dive into the manuals and documentation.
You may be tempted to automate this tutorial with Terraform, Ansible, or similar tools. Resist that urge and deploy the cluster manually at first.
Completing this tutorial will leave you with a fully functional, highly available Kubernetes cluster. That said, we advise against deploying production clusters this way, as it is prone to misconfiguration and harder to manage in the long run. Consider using kubeadm instead.
To follow this series, you will need:
- Three or more Linux servers with pre-configured networking. Bare metal, local virtualization, or cloud are all acceptable. The servers do not have to be reachable from the internet.
- All servers must be routable pairwise: every one of the selected servers must be able to communicate with every other server. (The servers do not have to be directly connected. In fact, many cluster components benefit from the servers sitting in distinct geographical locations, which leads to non-trivial routes, often propagated with BGP.) Pod networking is in scope of this tutorial; server connectivity itself is not.
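Before going further, it is worth verifying the pairwise-routability requirement. A small sketch, run on each server in turn; the host names are placeholders for your own machines:

```shell
#!/usr/bin/env bash
# Hypothetical host names -- substitute the names or IPs of your servers.
HOSTS=(node1 node2 node3)

# Print every ordered pair of distinct hosts; each pair must be routable.
host_pairs() {
  local a b
  for a in "${HOSTS[@]}"; do
    for b in "${HOSTS[@]}"; do
      [ "$a" != "$b" ] && echo "$a $b"
    done
  done
}

# Ping each peer once from this machine (run the script on every server
# so both directions of every pair are exercised).
check_connectivity() {
  local failed=0
  while read -r _ target; do
    ping -c 1 -W 2 "$target" >/dev/null 2>&1 \
      || { echo "unreachable: $target"; failed=1; }
  done < <(host_pairs)
  return $failed
}
```

If ICMP is filtered on your network, substitute a TCP check (for example `nc -z`) against a port you know is open.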
You should have a working understanding of:
- Public key cryptography
- Layer 3 TCP/IP networking
- systemd services
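Every component in this series runs as a systemd service, so it helps to have the basic unit-file shape fresh in mind. A minimal, illustrative skeleton; the service name, `ExecStart` path, and flags are placeholders that later parts will fill in:

```ini
# /etc/systemd/system/etcd.service -- illustrative skeleton only.
[Unit]
Description=etcd key-value store
Documentation=https://etcd.io/docs/
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

After placing a unit file, `systemctl daemon-reload` picks it up and `systemctl enable --now <name>` starts it and enables it at boot.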
Container — a process invoked inside a restricted environment with limited access to the host
Container runtime — a service you ask to run a container. Kubelet requests containers from a container runtime whenever it finds a pod scheduled on its node. Examples: Docker, containerd, CRI-O, rkt, frakti, …
CRI, the Container Runtime Interface — an API contract. It lets Kubelet implement the relevant code once, making the choice of runtime transparent. Without CRI, Kubelet would need dedicated code for every runtime (in fact it still carries some, and is phasing it out). Most container runtimes expose a CRI-compliant API.
Server — a bare-metal or virtual computer, a thing you can run a shell in
Node — an abstraction; usually each server is registered to Kubernetes as a node. Another example is virtual-kubelet, which, simply put, registers a cloud service as a node with unlimited resources.
Control plane — servers running etcd or kube-apiserver. Optionally, kubelet may run on some of these servers; such nodes are conventionally labelled
node-role.kubernetes.io/master and tainted node-role.kubernetes.io/master:NoSchedule
Worker nodes — servers running kubelet that are not in the control plane
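To make the label/taint convention above concrete, here is an illustrative excerpt of what such a control-plane Node object might look like; the node name is a placeholder:

```yaml
# Excerpt of a control-plane Node object (illustrative).
apiVersion: v1
kind: Node
metadata:
  name: master-1
  labels:
    node-role.kubernetes.io/master: ""
spec:
  taints:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
```

The taint keeps ordinary workloads off the control plane unless they carry a matching toleration.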
Mission overview
Starting from a plain Linux distribution, we will first install a highly available etcd cluster. We will then deploy the Kubernetes API server connected to that cluster, along with a set of controllers. At this point we will be able to interact with the cluster, but no nodes will be registered and no containers will be executed (let alone scheduled). In a later part, we will register the selected servers to the cluster as nodes by deploying a CRI-compliant runtime and Kubelet. Finally, we will deploy cluster components such as the scheduler, kube-proxy, and DNS.
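To give a flavour of what the first step looks like, here is a sketch of the flags one member of a three-node etcd cluster might be started with. All names, IPs, and the use of HTTPS are assumptions; later parts cover the real values and the TLS setup:

```shell
# Illustrative etcd invocation for one member of a three-node cluster.
# Names and addresses are placeholders.
etcd \
  --name etcd-1 \
  --initial-advertise-peer-urls https://10.0.0.1:2380 \
  --listen-peer-urls https://10.0.0.1:2380 \
  --listen-client-urls https://10.0.0.1:2379,https://127.0.0.1:2379 \
  --advertise-client-urls https://10.0.0.1:2379 \
  --initial-cluster etcd-1=https://10.0.0.1:2380,etcd-2=https://10.0.0.2:2380,etcd-3=https://10.0.0.3:2380 \
  --initial-cluster-state new
```

In practice this command line lives in the `ExecStart` of a systemd unit rather than being typed by hand.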