Kubernetes by Parts: kube-apiserver (4)

In the previous chapters we set up a highly available etcd cluster with three members. Kube-apiserver is a REST API built on top of etcd, with a solid authentication and authorization layer. To illustrate a point about availability, we will install kube-apiserver on only two servers. In a real production deployment we would ideally have three separate availability zones, with an instance of etcd and kube-apiserver in each.

Certificate architecture overview

Start by downloading the release binaries from the official source, linked from CHANGELOG-1.17.md.
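For reference, a minimal download sketch; the exact URL is an assumption based on the official release mirror pattern, so double-check it against the links in the changelog:

# Download the kube-apiserver binary for the chosen release and install it.
VERSION=v1.17.0
curl -L -o kube-apiserver \
    "https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-apiserver"
chmod +x kube-apiserver
sudo mv kube-apiserver /usr/local/bin/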

As with etcd, skim through kube-apiserver --help or the equivalent generated documentation. I personally prefer the CLI, as it eliminates version mismatches and splits the options into categories, which the web page omits. While we can leave most options at their defaults, we must configure a few flags in the following categories. Refer to the component diagram in part 2 when configuring the certificate flags.

Etcd — We have to tell the process which etcd endpoints it should connect to. Note that we may specify multiple endpoints, so there is no need for a load balancer between etcd and kube-apiserver. (Interestingly, kubeadm without an external etcd installs both etcd and kube-apiserver on the same server and specifies only a single etcd endpoint, localhost.) We will need to pass it an etcd client certificate for authentication and a CA to make sure we are connecting to a known etcd member (the CC cert on the main diagram).

Secure serving — the kube-apiserver REST API TLS certificates (the KS certs on the main diagram).

Authentication — the process of assigning identity. Disable anonymous requests, as there is no reason to keep them enabled and they create an attack vector. Kube-apiserver offers multiple ways to authenticate a request; for now we will use client certificate validation against a CA (--client-ca-file).

Authorization — what should the identified user (or service) be allowed to do? --authorization-mode defaults to AlwaysAllow, which essentially makes every user an admin. The best practice is to use RBAC. We will also include the Node mode, which will be explained further when setting up kubelet. A short sketch of granting RBAC permissions follows.
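Once the server is up and you can authenticate (we get there below), RBAC permissions are granted with role bindings. A hedged sketch with kubectl, using a hypothetical read-only user alice (the user name comes from the CN of her client certificate):

# Bind the built-in "view" ClusterRole to the user "alice",
# giving her read-only access across the cluster.
kubectl create clusterrolebinding alice-view \
    --clusterrole=view \
    --user=alice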

Issuing certificates

Having read the options for kube-apiserver, issue all necessary certificates. Refer to the component diagram in part 2.

etcd-client-kubeapi-csr.json
{
    "CN": "Kubernetes by Parts: shared kube-apiservers as etcd clients",
    "hosts": [
        "127.0.0.1",
        "kubeapi-1.kbp.local",
        "kubeapi-2.kbp.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "L": "Kubernetes by Parts",
            "O": "Clusterise",
            "ST": "etcd",
            "OU": "kube-apiserver as client"
        }
    ]
}
cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=ca-config.json -profile=client etcd-client-kubeapi-csr.json | cfssljson -bare etcd-client-kubeapi

kube-apiserver cert for connecting to etcd
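
Before wiring the certificate into kube-apiserver, you can sanity-check it directly against the cluster. A sketch assuming etcdctl (v3 API) is installed and the etcd hostnames from the previous chapters:

# Probe each member with the freshly issued client certificate.
ETCDCTL_API=3 etcdctl endpoint health \
    --endpoints=https://etcd-1.kbp.local:2379,https://etcd-2.kbp.local:2379,https://etcd-3.kbp.local:2379 \
    --cacert=etcd-ca.pem \
    --cert=etcd-client-kubeapi.pem \
    --key=etcd-client-kubeapi-key.pem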

kubeapi-ca-csr.json
{
    "CN": "Kubernetes by Parts: kube-apiserver CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "ca": {
        "expiry": "8760h"
    },
    "names": [
        {
            "L": "Kubernetes by Parts",
            "O": "Clusterise",
            "ST": "kube-apiserver"
        }
    ]
}
cfssl gencert -initca kubeapi-ca-csr.json | cfssljson -bare kubeapi-ca
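You can inspect the resulting CA, for instance to confirm the one-year validity (8760h) requested above; assuming openssl is available:

# Print the CA subject and validity window.
openssl x509 -in kubeapi-ca.pem -noout -subject -dates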
kubeapi-server-csr.json
{
    "CN": "Kubernetes by Parts: shared kube-apiserver TLS server",
    "hosts": [
        "192.168.2.185",
        "192.168.2.186",
        "kubeapi-1.kbp.local",
        "kubeapi-2.kbp.local",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "L": "Kubernetes by Parts",
            "O": "Clusterise",
            "ST": "kube-apiserver",
            "OU": "TLS server"
        }
    ]
}
cfssl gencert -ca=kubeapi-ca.pem -ca-key=kubeapi-ca-key.pem -config=ca-config.json -profile=server kubeapi-server-csr.json | cfssljson -bare kubeapi-server

kube-apiserver cert for TLS server
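
It is worth double-checking that all the hostnames, service names and IPs made it into the certificate as Subject Alternative Names, since every client will validate the server against them; assuming openssl:

# List the SANs baked into the serving certificate.
openssl x509 -in kubeapi-server.pem -noout -text | grep -A1 'Subject Alternative Name'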

Starting the kube-apiserver process

With the etcd client certs, the kubeapi CA, and the TLS certs in place, we are ready to start the kube-apiserver process. Create a command that will start a secure kube-apiserver with an etcd backend.

start-kubeapiserver.sh
kube-apiserver \
    --etcd-cafile=/opt/kbp/pki/etcd-ca.pem \
    --etcd-certfile=/opt/kbp/pki/etcd-client-kubeapi.pem \
    --etcd-keyfile=/opt/kbp/pki/etcd-client-kubeapi-key.pem \
    --etcd-servers=https://etcd-1.kbp.local:2379,https://etcd-2.kbp.local:2379,https://etcd-3.kbp.local:2379 \
    --tls-cert-file=/opt/kbp/pki/kubeapi-server.pem \
    --tls-private-key-file=/opt/kbp/pki/kubeapi-server-key.pem \
    --anonymous-auth=false \
    --client-ca-file=/opt/kbp/pki/kubeapi-ca.pem \
    --enable-bootstrap-token-auth=true \
    --authorization-mode=Node,RBAC \
    --service-cluster-ip-range=192.168.3.0/24

Kubelet-related options are intentionally omitted for now, for teaching purposes; we will revisit this configuration in a later part of the series.

If the --etcd-servers flag fails with "transport: Error while dialing dial tcp, too many colons in address", make sure you are not quoting the parameter. It seems that --etcd-servers=https://name:port,https://name:port works fine, while --etcd-servers="https://name:port,https://name:port" ends with an error.

If you are interested in what the reconciler options of kube-apiserver do, see the detailed explanation in a previous article.

Connecting to kube-apiserver

To verify we successfully deployed kube-apiserver, we need to generate yet another certificate. Any certificate issued by the CA passed with --client-ca-file is fine. With RBAC, the Organization (O) fields are mapped to groups, while the Common Name (CN) becomes the user name.

Create a kubernetes-admin certificate for authenticating to kube-apiserver. The recommended group mapping is documented at Configure certificates for user accounts.

kube-apiserver-client-admin-csr.json
{
    "CN": "Kubernetes by Parts: admin user",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "L": "Kubernetes by Parts",
            "O": "Clusterise",
            "ST": "kube-apiserver",
            "OU": "admin"
        },
        {
            "O": "system:masters"
        }
    ]
}
cfssl gencert -ca=kubeapi-ca.pem -ca-key=kubeapi-ca-key.pem -config=ca-config.json -profile=client kube-apiserver-client-admin-csr.json | cfssljson -bare kube-apiserver-client-admin
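You can confirm the subject mapping described above and immediately use the certificate for a first authenticated request; a sketch assuming kube-apiserver is already running on the default secure port 6443:

# The O= values become groups (system:masters is the built-in superuser
# group); the CN becomes the user name.
openssl x509 -in kube-apiserver-client-admin.pem -noout -subject

# With anonymous auth disabled, even /healthz requires credentials.
curl --cacert kubeapi-ca.pem \
    --cert kube-apiserver-client-admin.pem \
    --key kube-apiserver-client-admin-key.pem \
    https://kubeapi-1.kbp.local:6443/healthz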

A certificate by itself is not enough for kubectl to successfully connect to a cluster. We need to create a v1/Config object (not to be confused with v1/ConfigMap), more commonly referred to as a kubeconfig. This config is not only used by kubectl; it can be consumed by any component that connects to kube-apiserver, such as the kube-controller-manager and kube-scheduler we will deploy later, among others.

Create a kubeconfig for the newly deployed kube-apiserver, preferably in a working directory for this series (as opposed to the default config in your home directory). You may either create the YAML representation manually (a sketch follows the list below), or let kubectl generate it with a carefully crafted series of commands. The configuration itself has three parts:

  1. a cluster, which lists the server (endpoint) to connect to and the CA used to validate it
  2. credentials: either a certificate pair, a token, or another authentication method
  3. a context, which pairs a cluster with credentials (optionally specifying a default namespace)
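
For illustration, this is roughly what the hand-written version looks like, using the paths and names from this series (the scripted variant below produces an equivalent file):

# A minimal kubeconfig, written out manually as a heredoc.
cat > kbp.yaml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: kbp
  cluster:
    server: https://kubeapi-loadbalancer.kbp.local:6443
    certificate-authority: /opt/kbp/pki/kubeapi-ca.pem
users:
- name: admin
  user:
    client-certificate: /opt/kbp/pki/kube-apiserver-client-admin.pem
    client-key: /opt/kbp/pki/kube-apiserver-client-admin-key.pem
contexts:
- name: kbp-admin
  context:
    cluster: kbp
    user: admin
    namespace: kube-system
current-context: kbp-admin
EOF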

We will use kubectl to generate the kubeconfig. As illustrated in the main architecture diagram, we should set up some form of load balancing for the kube-apiservers. It can be a Layer 3/4 network load balancer, a Layer 7 HTTPS load balancer (not trivial due to certificate validation), DNS load balancing, BGP anycast, or something entirely different. The load balancer setup is out of scope of this series and depends on your environment; a sketch of one option follows. You may continue without a load balancer by connecting to a single kube-apiserver endpoint, but understand that your setup will have a single point of failure.
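To make one of these options concrete, here is a minimal Layer 4 sketch, assuming HAProxy (any TCP-mode balancer works the same way). TLS passes through untouched, so certificate validation still happens end to end against the kube-apiserver serving certs:

# Append a TCP frontend/backend pair for the kube-apiservers.
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
frontend kubeapi
    mode tcp
    bind *:6443
    default_backend kubeapi_servers

backend kubeapi_servers
    mode tcp
    balance roundrobin
    server kubeapi-1 kubeapi-1.kbp.local:6443 check
    server kubeapi-2 kubeapi-2.kbp.local:6443 check
EOF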

create-kubeconfig.sh
export KUBECONFIG=kbp.yaml
kubectl config set-cluster kbp --server=https://kubeapi-loadbalancer.kbp.local:6443 --certificate-authority=/opt/kbp/pki/kubeapi-ca.pem
kubectl config set-credentials admin --client-certificate=/opt/kbp/pki/kube-apiserver-client-admin.pem --client-key=/opt/kbp/pki/kube-apiserver-client-admin-key.pem
kubectl config set-context kbp-admin --cluster=kbp --namespace=kube-system --user=admin
kubectl config use-context kbp-admin

Having done all of the above, you should be able to successfully connect to the API with kubectl. Make sure to export the KUBECONFIG env variable with the correct path if you are switching between shell instances.

We can now issue queries against the kube-apiserver, but since no additional components are running, there will be no available nodes and no containers will run (let alone be scheduled).

Issuing queries against empty kube-apiserver
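
A few queries worth trying at this point (the exact output wording varies between versions):

kubectl get nodes              # no kubelets yet, so "No resources found"
kubectl get namespaces         # the built-in namespaces already exist
kubectl get componentstatuses  # etcd should report Healthy; scheduler and controller-manager are not deployed yet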

We can also write to and read from the API:

Writing to empty kube-apiserver
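
For example, creating and reading back a ConfigMap exercises only kube-apiserver and etcd, so it works even on this bare control plane (the object name is arbitrary; our context defaults to the kube-system namespace):

# Write an object through the API (persisted in etcd)...
kubectl create configmap smoke-test --from-literal=hello=world
# ...read it back...
kubectl get configmap smoke-test -o yaml
# ...and clean up.
kubectl delete configmap smoke-test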

Stabilization

As with etcd, we want the kube-apiserver process to be managed as a systemd unit.

/etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes by Parts kube-apiserver
After=syslog.target
After=network.target

[Service]
# Automatically restart the process if it fails.
Restart=on-failure
RestartSec=5
ExecStart=/usr/local/bin/kube-apiserver \
    --etcd-cafile=/opt/kbp/pki/etcd-ca.pem \
    --etcd-certfile=/opt/kbp/pki/etcd-client-kubeapi.pem \
    --etcd-keyfile=/opt/kbp/pki/etcd-client-kubeapi-key.pem \
    --etcd-servers=https://etcd-1.kbp.local:2379,https://etcd-2.kbp.local:2379,https://etcd-3.kbp.local:2379 \
    --tls-cert-file=/opt/kbp/pki/kubeapi-server.pem \
    --tls-private-key-file=/opt/kbp/pki/kubeapi-server-key.pem \
    --anonymous-auth=false \
    --client-ca-file=/opt/kbp/pki/kubeapi-ca.pem \
    --enable-bootstrap-token-auth=true \
    --authorization-mode=Node,RBAC \
    --service-cluster-ip-range=192.168.3.0/24

[Install]
WantedBy=multi-user.target

Reload systemd, then enable and start the unit:

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver

What’s Up Next?

We deployed a kube-apiserver and verified we can access it via kubectl. We could use this as an API scaffolding for any application; by defining custom objects, we could model an e-shop and persist our products and orders. This is obviously not what kube-apiserver was built for, but it illustrates the versatility of the design.

To run proper Kubernetes workloads (containers) we need to do quite a few additional steps:

  • Register worker servers with kube-apiserver as Nodes. This means deploying Kubelet. There are alternatives, such as virtual-kubelet (https://github.com/virtual-kubelet/virtual-kubelet). You could technically also create the Node objects manually, but Kubelet does many more things, primarily focused on container lifecycle.
  • Deploy foundational controllers (cronjob operator, node health manager, endpoints controller, …), which are all bundled into a single binary, the kube-controller-manager. We will do this before installing worker nodes, as it will allow us to dynamically generate kubelet certificates.
  • Deploy a scheduler, which will automatically assign workload to nodes.
  • Deploy in-cluster components such as DNS and kube-proxy.

We will do all of this in the upcoming chapters.

Chapters