Vendor-Neutral, Self-Managed K8s Cluster

This guide sets up a three-node cluster on Digital Ocean. The purpose is to build a portable cluster independent of any single cloud platform. A self-managed Kubernetes cluster can provide a cost-effective, vendor-neutral environment for development. Segregating development into this kind of environment benefits platform developers by forcing a deeper understanding of the underlying infrastructure without losing the benefits of abstraction. Platform development on a custom Kubernetes cluster ensures portability and provides an opportunity for redundancy through compatibility with multiple vendors.

This cluster uses three VM instances created on Digital Ocean, although any cloud service provider can be used. Each VM instance has 2GB RAM and 2 CPUs; three instances with this configuration cost approximately 40 Euro/month.
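
If you prefer the command line, the droplets can also be provisioned with doctl. The following is a minimal sketch; the instance names, region, size and image slugs, and the SSH key ID are assumptions to adjust for your account:

$ doctl compute droplet create ylcnky-n01 ylcnky-n02 ylcnky-n03 \
  --region fra1 --size s-2vcpu-2gb --image ubuntu-20-04-x64 \
  --ssh-keys YOUR_SSH_KEY_ID
$ doctl compute droplet list --format Name,PublicIPv4,Memory,VCPUs

After the instances are deployed, ssh into each of them.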

$ ssh root@INSTANCE_PUBLIC_IP

1) Install Core Dependencies

Update and upgrade all packages to ensure each server is up to date. Repeat the command on each instance.

$ apt update && apt upgrade -y

Also install the following packages on each instance:

  • apt-transport-https to allow the apt package manager to pull packages from HTTPS endpoints,
  • ca-certificates for SSL/TLS certificates from the Certificate Authorities along with an updater,
  • curl for interacting with HTTP-based endpoints from the command line,
  • gnupg-agent which provides a system daemon for GPG signing and the management of keys, and
  • software-properties-common to support the management of independent software vendor repositories added later on for WireGuard, Docker and Kubernetes

$ apt install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common

2) Install WireGuard VPN

By default, network traffic between the nodes of a Kubernetes cluster is not encrypted. Operating the cluster over a VPN encrypts this traffic. WireGuard is a secure VPN that is easy to install and configure, and all internal cluster traffic can be routed through it. This step describes the process of generating public and private keys for each server instance, adding a WireGuard config file, starting the service, and creating an overlay network to tunnel Kubernetes traffic through.

Add the WireGuard repository, update the package list and install WireGuard. (On Ubuntu 20.04 and later, WireGuard is available from the default repositories and the PPA is no longer needed.)

$ add-apt-repository -y ppa:wireguard/wireguard
$ apt update
$ apt install -y wireguard

WireGuard requires a private and public key pair for each server. The following shell command can be used to generate three key pairs. The private keys (priv) are used for the VPN interface on each server, and the public keys (pub) are used by peers to communicate with it. Save the keys somewhere; they will be used later.

$ for i in {1..3}; do prvk=$(wg genkey); \
echo "$i - priv: $prvk pub: $(echo $prvk | wg pubkey)"; done

Now WireGuard should be configured on each server instance. The following configuration example sets up a new network interface named wg0 with the IP address 10.0.1.1 on the first server; use the IP addresses 10.0.1.2 and 10.0.1.3 for the other two, along with the generated priv and pub keys. Store the file as wg0.conf under the directory /etc/wireguard/

[Interface]
Address = 10.0.1.1
PrivateKey = SERVER_1_PRIVATE_KEY
ListenPort = 51820
[Peer]
PublicKey = SERVER_2_PUBLIC_KEY
AllowedIPs = 10.0.1.2/32
Endpoint = SERVER_2_PRIVATE_IP:51820
[Peer]
PublicKey = SERVER_3_PUBLIC_KEY
AllowedIPs = 10.0.1.3/32
Endpoint = SERVER_3_PRIVATE_IP:51820
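
For reference, the corresponding wg0.conf for the second server would look like the sketch below; substitute your own keys and private IPs.

[Interface]
Address = 10.0.1.2
PrivateKey = SERVER_2_PRIVATE_KEY
ListenPort = 51820
[Peer]
PublicKey = SERVER_1_PUBLIC_KEY
AllowedIPs = 10.0.1.1/32
Endpoint = SERVER_1_PRIVATE_IP:51820
[Peer]
PublicKey = SERVER_3_PUBLIC_KEY
AllowedIPs = 10.0.1.3/32
Endpoint = SERVER_3_PRIVATE_IP:51820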

Next, we should ensure that IP forwarding is enabled. If the command sysctl net.ipv4.ip_forward returns 0, then the following commands should be run on each instance of the cluster

$ echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
$ sysctl -p

After adding the configuration file, start and enable the WireGuard VPN on each instance

$ systemctl start wg-quick@wg0
$ systemctl enable wg-quick@wg0
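
To verify the VPN, check the interface status and ping a peer over its VPN address (run from the first server; adjust the address on the others):

$ wg show wg0
$ ping -c 3 10.0.1.2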

3) Install Docker

Docker is an industry-standard container runtime. In this step, Docker will be installed on each instance of the cluster. We will first add the Docker GPG key and then the repository itself.

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ add-apt-repository -y "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Then, we will update the package list and install the latest version of Docker community edition.

$ apt update
$ apt install -y docker-ce docker-ce-cli containerd.io

The Ubuntu operating system uses systemd to track processes via Linux cgroups. Having two separate cgroup managers (systemd and Docker's default, cgroupfs) can cause instability when managing resources under load. Therefore, we should configure Docker to use the systemd cgroup driver by supplying a configuration file at /etc/docker/daemon.json

{
 "exec-opts": ["native.cgroupdriver=systemd"],
 "log-driver": "json-file",
 "log-opts": {
    "max-size": "100m"
 },
 "storage-driver": "overlay2"
}
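
One way to write this file on each instance is with a heredoc:

$ cat <<EOF >/etc/docker/daemon.json
{
 "exec-opts": ["native.cgroupdriver=systemd"],
 "log-driver": "json-file",
 "log-opts": {
    "max-size": "100m"
 },
 "storage-driver": "overlay2"
}
EOF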

Then create a systemd drop-in directory for Docker, enable the Docker service and restart Docker

$ mkdir -p /etc/systemd/system/docker.service.d
$ systemctl enable docker.service
$ systemctl daemon-reload && systemctl restart docker
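
You can confirm that Docker is now using the systemd cgroup driver; the following should report Cgroup Driver: systemd

$ docker info | grep -i "cgroup driver"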

4) Install Kubernetes Utilities

In addition to Docker, each VM instance needs kubelet and kubeadm, along with kubectl to debug the cluster directly from a node when needed. We will first add the Google GPG key for apt and then the Kubernetes apt repository.

$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
$ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

Next, update the package list so the packages from the Kubernetes repository become available, install them, and hold the installed packages at their current version so they are not upgraded unintentionally

$ apt update
$ apt install -y kubelet kubeadm kubectl
$ apt-mark hold kubelet kubeadm kubectl
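
To confirm the utilities are installed and held, you can check their versions and the hold status:

$ kubeadm version -o short
$ kubectl version --client --short
$ apt-mark showhold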

5) Install Master Node

To install the master node, we will use the kubeadm installer on node 1 (your first VM instance), passing the private VPN IP address of the master node with the flag --apiserver-advertise-address, and adding a public address with the flag --apiserver-cert-extra-sans for inclusion in the generated TLS certificate. The domain api.cluster.dev1.ylcnky.com (use your own domain) is assigned a DNS A record with the public IP address of the master node.

$ kubeadm init \
--apiserver-advertise-address=10.0.1.1 \
--apiserver-cert-extra-sans=api.cluster.dev1.ylcnky.com

This process can take a few minutes to complete. After a successful installation, the message Your Kubernetes control-plane has initialized successfully! will be shown along with instructions for configuring kubectl. Finally, run the following commands to set up .kube/config

$ mkdir -p $HOME/.kube
$ cp /etc/kubernetes/admin.conf $HOME/.kube/config
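
At this point kubectl should be able to reach the API server. Note that the master node will likely report NotReady until a pod network is installed in the next step.

$ kubectl cluster-info
$ kubectl get nodes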

Next, a pod network is necessary for communication between pods on the cluster. Weave Net is a good option that can be installed with a single command. Install Weave Net with kubectl on the master node

$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=10.32.0.0/16"

For network visualization and monitoring, run the following command

$ kubectl apply -f https://cloud.weave.works/k8s/scope.yaml?k8s-version=$(kubectl version | base64 | tr -d '\n')

We should route all Kubernetes service network (10.96.0.0/16) traffic over the WireGuard VPN. Run the following command on each node of the cluster, changing the source IP address accordingly

$ ip route add 10.96.0.0/16 dev wg0 src 10.0.1.1 # on master node
$ ip route add 10.96.0.0/16 dev wg0 src 10.0.1.2 # on worker-1
$ ip route add 10.96.0.0/16 dev wg0 src 10.0.1.3 # on worker-2
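
You can verify that the route is in place with:

$ ip route | grep 10.96.0.0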

We should also persist this route with the following systemd unit file on each node in the cluster, replacing 10.0.1.1 with the VPN IP address of the corresponding server

$ cat <<EOF >/etc/systemd/system/overlay-route.service
[Unit]
Description=Overlay network route for Wireguard
After=wg-quick@wg0.service
[Service]
Type=oneshot
User=root
# Use this node's VPN IP address as the src (10.0.1.1, 10.0.1.2 or 10.0.1.3)
ExecStart=/sbin/ip route add 10.96.0.0/16 dev wg0 src 10.0.1.1
[Install]
WantedBy=multi-user.target
EOF

Finally, enable the new service on each node of the cluster

$ systemctl enable overlay-route.service
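
You can confirm the unit is enabled (it will re-add the route at every boot, after the WireGuard interface comes up):

$ systemctl is-enabled overlay-route.service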

6) Join Worker Nodes

The second and third VM instances are the Kubernetes worker nodes. ssh into each server and run the following kubeadm join command.

$ kubeadm join 10.0.1.1:6443 --token YOUR_TOKEN --discovery-token-ca-cert-hash YOUR_HASH

You can generate YOUR_TOKEN and YOUR_HASH with the following command on the master node. Copy the output of this command and run it on each worker node.

$ kubeadm token create --print-join-command

Now the worker nodes have joined the cluster, but kubectl works only on the master node. If you run kubectl get nodes on either worker node, you will get the following error

$ kubectl get nodes

The connection to the server localhost:8080 was refused - did you specify the right host or port?

This is because there is no admin.conf file on the workers and KUBECONFIG is not set. The admin.conf file is created in /etc/kubernetes by the kubeadm init command, and kubeadm does not copy it to the worker nodes for you. To set the configuration for the worker nodes (or your local machine), copy the admin.conf file from the master node, changing the domain and config file name to your own. (Simply replace ylcnky with your preference.)

$ scp root@api.cluster.dev1.ylcnky.com:/root/.kube/config ~/.kube/ylcnky-dev1
$ cd ~/.kube
$ nano ylcnky-dev1

This will open the configuration file

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    server: https://10.0.1.1:6443 ## Change this with your domain --> https://api.cluster.dev1.ylcnky.com:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes ## Change this with the name you want --> ylcnky-dev1
current-context: kubernetes-admin@kubernetes ## Also change the current-context --> ylcnky-dev1
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    client-key-data: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Finally, export the path of the config file in the KUBECONFIG environment variable

$ export KUBECONFIG=~/.kube/ylcnky-dev1
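
The export only lasts for the current shell session. To make it persistent, you can append it to your shell profile (assuming bash):

$ echo 'export KUBECONFIG=~/.kube/ylcnky-dev1' >> ~/.bashrc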

7) Add DNS to Domain

Add DNS A records for the domain name you have. The cluster has two A records for the wildcard *.dev1, each pointing to the public IP address of one of the worker nodes. After the DNS change is active on your domain, you can check the nodes in your cluster with the following command.

$ kubectl get nodes

NAME         STATUS   ROLES    AGE   VERSION
ylcnky-n01   Ready    master   1h   v1.19.2
ylcnky-n02   Ready    <none>   1h   v1.19.2
ylcnky-n03   Ready    <none>   1h   v1.19.2
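
You can also confirm that the wildcard record resolves to the worker nodes' public IPs (the hostname app.dev1.ylcnky.com is just an example; any name under *.dev1 should resolve):

$ dig +short app.dev1.ylcnky.com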

The instructions given so far set up a three-node cluster. We now have a vendor-neutral, three-node Kubernetes cluster up and running. This tutorial continues with the definition of Kubernetes manifests for the essential resources needed in later phases of development.

This cluster will mostly be used for various projects grouped as K8s-MLOps, K8s-DevOps and K8s-Data-Platform. Therefore, the resources will be generated as needed for the planned upcoming projects.

Follow the second part of this tutorial for Kubernetes Manifests
