Kubernetes by Parts: Controllers (5)

So far we have deployed a highly available kube-apiserver backed by etcd and we verified we can connect to the API with highest privileges.

As kube-apiserver’s purpose is to be a REST API wrapper around etcd and to handle authentication and authorization, we have to deploy additional components that act on the data we send to the API. Those components are what makes Kubernetes seem like magic. Pretty much everything is done by kube-controller-manager. Among other things, it:

  • creates Pod objects defined by Jobs, CronJobs, ReplicaSets, DaemonSets, and StatefulSets
  • creates ReplicaSets specified by a Deployment
  • watches Pod readiness and updates Service endpoints accordingly
  • creates a default ServiceAccount when a new Namespace is created, and generates a token for the account
  • monitors health of Nodes and marks them as Ready/NotReady

Fortunately we don’t have to deploy 30 controllers separately as all those controllers are packaged in an application called kube-controller-manager. In essence, all it requires to function properly is access to kube-apiserver with the correct privileges. Upon start, it connects and then loops ad infinitum, reconciling the state of your Kubernetes cluster. This means it can run on any server: either on the same servers as kube-apiserver or any other server.

Download the kube-controller-manager binary from the official source (CHANGELOG-1.17.md) and, as with previous components, skim through --help. Many of the controllers have dedicated option categories; feel free to skip those for now and focus first on connecting the manager to kube-apiserver.

By default, each controller starts with the credentials of whatever identity you configure in the kubeconfig. This essentially means every controller would have unlimited privileges, which is undesirable. Granting each controller only the privileges it requires limits the scope of what an attacked or misbehaving controller can affect. This behavior can be enabled with --use-service-account-credentials, which is only poorly linked from the official kube-controller-manager documentation.
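With that flag set, each controller authenticates as a dedicated ServiceAccount in the kube-system namespace, and kube-apiserver bootstraps a matching system:controller:* ClusterRole for each of them. A sketch of how to inspect those roles once the cluster is up (the deployment-controller role is just one example):

```shell
# List the per-controller ClusterRoles bootstrapped by kube-apiserver.
kubectl get clusterroles | grep '^system:controller:'

# Inspect the privileges of a single controller.
kubectl describe clusterrole system:controller:deployment-controller
```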

Starting the service 🔗︎

Building atop the knowledge from previous chapters, prepare a systemd service that will run kube-controller-manager. Feel free to experiment and only include the options that are logically required. Then start the service and watch logs; you will most likely be informed if you forgot to specify an option.

/etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes by Parts kube-controller-manager
After=syslog.target network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager --kubeconfig /opt/kbp/kubeconfig-kcm.yaml \
  --use-service-account-credentials \
  --service-account-private-key-file /opt/kbp/pki/kubeapi-server-key.pem \
  --root-ca-file /opt/kbp/pki/kubeapi-ca.pem

[Install]
WantedBy=multi-user.target

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
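To confirm the service came up cleanly, check its status and follow the logs; a forgotten option typically shows up here as an error right after startup:

```shell
# Verify the unit is active and inspect recent output.
systemctl status kube-controller-manager

# Follow the logs live while the manager connects to kube-apiserver.
journalctl -u kube-controller-manager -f
```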

If you used the service account credentials option as advised, you very likely omitted the service account private key. Fortunately the process helpfully warns that the option is required.

We included an additional --root-ca-file option. While not strictly necessary at this point, it will be useful later. When set, every service account token Secret is created with an extra ca.crt key, which pods using that service account can read at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt.
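To see this in effect, you can decode the ca.crt key from a token Secret and compare it with the root CA file; a sketch assuming the default namespace (the Secret name suffix is generated, hence the lookup):

```shell
# Grab the first default ServiceAccount token Secret (name varies: default-token-xxxxx).
SECRET=$(kubectl get secrets -o name | grep default-token | head -n 1)

# Decode the embedded CA certificate; it should match /opt/kbp/pki/kubeapi-ca.pem.
kubectl get "$SECRET" -o jsonpath='{.data.ca\.crt}' | base64 -d
```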

The kubeconfig was generated through kubectl as in Part 4, with only the certificates modified (refer to PKI certificates and requirements):

kube-apiserver-client-kcm-csr.json
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "L": "Kubernetes by Parts",
            "O": "Clusterise",
            "ST": "kube-apiserver",
            "OU": "kube-controller-manager"
        }
    ]
}
cfssl gencert -ca=kubeapi-ca.pem -ca-key=kubeapi-ca-key.pem -config=ca-config.json -profile=client kube-apiserver-client-kcm-csr.json | cfssljson -bare kube-apiserver-client-kcm
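For reference, the kubeconfig itself can be assembled with kubectl as in Part 4; a sketch assuming the file names above and a kube-apiserver reachable at https://127.0.0.1:6443 (adjust the server address and paths to your setup):

```shell
# Cluster entry with the embedded CA used to verify kube-apiserver.
kubectl config set-cluster kbp \
  --certificate-authority=kubeapi-ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kubeconfig-kcm.yaml

# Client certificate identifying system:kube-controller-manager.
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-apiserver-client-kcm.pem \
  --client-key=kube-apiserver-client-kcm-key.pem \
  --embed-certs=true \
  --kubeconfig=kubeconfig-kcm.yaml

# Tie cluster and user together and make the context active.
kubectl config set-context default \
  --cluster=kbp \
  --user=system:kube-controller-manager \
  --kubeconfig=kubeconfig-kcm.yaml

kubectl config use-context default --kubeconfig=kubeconfig-kcm.yaml
```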

Verification 🔗︎

It is fairly obvious from logs and API objects when kube-controller-manager starts successfully for the first time against an otherwise empty kube-apiserver. However, to test at least one controller, we can create a ServiceAccount:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kbp-demo
EOF

With the serviceaccount-token controller running, we should see a new Secret of type kubernetes.io/service-account-token in the same namespace. List the secrets, verify the token was successfully created, and clean up the service account afterwards.
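A possible check, assuming the default namespace (the Secret name suffix is generated):

```shell
# The serviceaccount-token controller names the Secret after the account.
kubectl get secrets | grep kbp-demo-token

# Clean up; the controller also removes the token Secret along with the account.
kubectl delete serviceaccount kbp-demo
```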
