Kubernetes seems to have taken the IT infrastructure world by storm, with seemingly every company providing its own distribution. However, if you’ve tried to provision and control your own kubeadm-managed cluster, you probably discovered that you almost need a PhD in kube-ology to make sense of the various options, settings, parameters, and configurations that are available and how they affect each other.
Instead of trying to make heads or tails of kubeadm, let’s look at k3s. K3s is a kubernetes distribution developed by Rancher, initially targeting IoT and edge deployments. It is kubernetes simplified, which it accomplishes by removing some features from the core platform, such as some alpha features and some out-of-tree plugins. It also defaults to using an internal, embedded sqlite3 database instead of etcd to store the cluster configuration. Nevertheless, the final product is a fully certified kubernetes distribution, with support for single-node, single-master, and multi-master deployments.
Why not minikube, microk8s, or one of the many other options?
Simply put, minikube and microk8s are designed specifically for development-type workloads. They are meant to run on a single machine, sometimes within docker containers. On the other hand, while k3s can be used in containers, it was built with the goal of running production workloads, whether in a large datacenter or on the Raspberry Pi in your closet. k3s can also run as a true cluster, with the workers separated from the master(s).
Kind (Kubernetes in Docker) is designed for kubernetes developers to test and develop kubernetes itself, so the emphasis is on tooling and functionality to exercise kubernetes, not on running workloads.
For our example, we’ll do a single-master, multiple-worker setup, which is likely to be sufficient for most hobbyist and some enterprise applications. The procedure to install the k3s server—the kubernetes master—is remarkably simple. For our examples, we’ll install on a set of CentOS 7 servers, but the instructions should be broadly applicable to other Linux distributions.
The master node
# ensure that firewalld is removed, as k3s requires iptables
yum remove firewalld
# install and start iptables
yum install iptables-services
systemctl enable iptables && systemctl start iptables
# install selinux policies
# the k3s-selinux policies are currently being tested, so expect the package to change
yum install container-selinux selinux-policy-targeted https://rpm.rancher.io/k3s-selinux-0.1.1-rc1.el7.noarch.rpm
# install k3s, we purposely remove servicelb as we'll install metallb later on
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--no-deploy=servicelb" sh -
As plenty of people have commented in the past, piping curl straight into a shell is not ideal, so feel free to download the script and inspect it manually before executing it. I would love to see k3s get packaged as an RPM or deb in the future, though.
Once k3s is installed, you should soon have a running, production-ready cluster at your fingertips! You can check by running the following kubectl command:
sudo k3s kubectl get nodes
k3s embeds its own version of kubectl, but there is nothing that prohibits you from downloading and installing that yourself—exercise left to the reader. The last thing to do before continuing is to copy the kubeconfig file out of the k3s configuration directory as it is restricted to the root user.
# copy the kubeconfig file out of the k3s configuration directory
mkdir ~/.kube/
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $UID:$GID ~/.kube/config
# edit the kubeconfig and replace 127.0.0.1 with the external ip of your server
# feel free to use your own preferred editor.
vim ~/.kube/config
Now test the configuration file. You should be able to access the cluster without problems.
kubectl get nodes
You can copy that kubeconfig file and place it on any machine that you want to use to connect to the cluster, provided that the machine can reach the k3s server.
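For example, a sketch of pulling the kubeconfig down to a workstation (the user and host names are placeholders):

```shell
# Fetch the kubeconfig from the k3s server (user@k3s-server is a placeholder)
mkdir -p ~/.kube
scp user@k3s-server:~/.kube/config ~/.kube/k3s-config
# Point kubectl at the copied file for this shell session
export KUBECONFIG=~/.kube/k3s-config
kubectl get nodes
```

Placing the file at `~/.kube/config` instead makes it the default, with no `KUBECONFIG` variable needed.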
The worker nodes
Installing the worker nodes—k3s agents—is extremely simple. You simply need to point them to the k3s server and give them the correct token to authenticate. Let’s start by obtaining the k3s secret from the k3s server.
sudo cat /var/lib/rancher/k3s/server/node-token
Next, on the worker nodes, install the agent.
# ensure that firewalld is removed, as k3s requires iptables
yum remove firewalld
# install and start iptables
yum install iptables-services
systemctl enable iptables && systemctl start iptables
# install selinux policies
# the k3s-selinux policies are currently being tested, so expect the package to change
yum install container-selinux selinux-policy-targeted https://rpm.rancher.io/k3s-selinux-0.1.1-rc1.el7.noarch.rpm
# install the k3s agent, pointing it at the server and authenticating with the node token
curl -sfL https://get.k3s.io | K3S_URL="https://IP_OR_HOSTNAME_OF_K3S_SERVER:6443" K3S_TOKEN="VALUE_OF_K3S_TOKEN" sh -
After a minute or so, check to make sure that the new worker correctly joined the cluster. You should see a new entry when listing the nodes in the cluster. N.B.: make sure that you have configured the kubeconfig on the machine where you run this command.
kubectl get nodes
As we have multiple nodes in our cluster, we need a way to route traffic from “outside” to the actual pods—the containers—running our services. Normally we’d use the native load balancers provided by the IaaS platform to route the traffic into the cluster, but that option is not always available. Enter MetalLB.
MetalLB is a load balancer designed to work in bare-metal kubernetes clusters. It uses either BGP or gratuitous ARP announcements to advertise routes to the rest of the network, allowing for both failover and routing of external traffic to the cluster. For this installation, we’ll use k3s’s automatic manifest application to install MetalLB. The automatic manifest application works by continuously monitoring the /var/lib/rancher/k3s/server/manifests folder on the server and applying any manifest present to the cluster.
# Download the namespace definition
sudo curl -sL https://raw.githubusercontent.com/google/metallb/v0.9.3/manifests/namespace.yaml -o /var/lib/rancher/k3s/server/manifests/metallb-namespace.yaml
# Download the metallb manifests
sudo curl -sL https://raw.githubusercontent.com/google/metallb/v0.9.3/manifests/metallb.yaml -o /var/lib/rancher/k3s/server/manifests/metallb-manifest.yaml
After a few seconds, MetalLB will be installed, but sitting idle. Next, we need to configure it. We’ll use the Layer 2 setup, as configuring BGP is not for the faint of heart.
First, we create a secret value that will be used for memberlist-backed dead peer detection. This mechanism is optional, but strongly recommended.
# you only need to run this once.
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
Finally, we create a configuration file that we place at /var/lib/rancher/k3s/server/manifests/metallb-config.yaml.
MetalLB will pick up the configuration and automatically start assigning IPs from the configured range to kubernetes services of type LoadBalancer.
Modify the list of addresses in the address pool and ensure that no other device on your network can claim them.
If you are using DHCP on your network, make sure that the addresses available for metallb are not in the pool used by DHCP.
# /var/lib/rancher/k3s/server/manifests/metallb-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # Define the addresses here
      - 192.168.1.240-192.168.1.250
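To verify that MetalLB is handing out addresses, you can create a Service of type LoadBalancer and check that it receives an external IP from the pool. A minimal sketch, using nginx purely as an illustrative workload (the names are arbitrary):

```yaml
# smoke-test: expose an nginx pod through a MetalLB-assigned address
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-test
spec:
  type: LoadBalancer
  selector:
    app: nginx-test
  ports:
  - port: 80
    targetPort: 80
```

After applying this with kubectl apply -f, listing the service with kubectl get svc nginx-test should show an EXTERNAL-IP drawn from the address pool you configured above.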
With this done, you now have kubernetes running and ready to execute whatever workload you throw at it.