Install a Kubernetes Cluster with kubeadm

We are going to set up a Kubernetes cluster with kubeadm.
We have already created three Ubuntu VMs for the K8s nodes: one Control plane node and two Worker nodes.

 

The following instructions should be performed on all three nodes.

 

1. Update the OS:

sudo apt update && sudo apt upgrade -y

 

2. Create a configuration file to load the kernel modules required by containerd:

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

 

3. Load modules:

sudo modprobe overlay
sudo modprobe br_netfilter
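
To confirm that both modules are loaded, you can run:

lsmod | grep -E 'overlay|br_netfilter'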

 

4. Set system configurations for Kubernetes networking:

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

 

5. Apply new settings:

sudo sysctl --system
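
To verify that the settings took effect:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables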

 

6. Add the Docker repository and install containerd (the keyrings directory may not exist yet, so create it first):

sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update && sudo apt-get install -y containerd.io

 

7. Create a directory for the containerd configuration:

sudo mkdir -p /etc/containerd

 

8. Generate the default containerd configuration and save it to /etc/containerd/config.toml:

sudo containerd config default | sudo tee /etc/containerd/config.toml

 

9. Configure the systemd cgroup driver:

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
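
To check that the cgroup driver was switched (the command should now print SystemdCgroup = true):

grep SystemdCgroup /etc/containerd/config.toml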

 

10. Restart containerd so that it picks up the new configuration:

sudo systemctl restart containerd

 

11. Verify that containerd is running:

sudo systemctl status containerd

 

12. Disable swap:

sudo swapoff -a
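
Note that swapoff -a only disables swap until the next reboot. If your VMs have a swap entry in /etc/fstab, you can also comment it out so swap stays off permanently (a minimal example, assuming a standard fstab swap line):

sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab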

 

13. Install dependency packages:

sudo apt-get update && sudo apt-get install -y apt-transport-https curl

 

14. Download and add the Kubernetes GPG key:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

 

15. Add Kubernetes repository:

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
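
Note: the packages.cloud.google.com / apt.kubernetes.io repository used in steps 14 and 15 is the legacy Google-hosted repository, which has been deprecated. If the commands above no longer work, the community-owned pkgs.k8s.io repository can be used instead (its package versions use a slightly different suffix, e.g. 1.26.4-1.1 instead of 1.26.4-00):

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.26/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.26/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list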

 

16. Update package listings:

sudo apt-get update

 

17. Install Kubernetes packages (Note: If you get a dpkg lock message, wait a minute or two before trying the command again):

sudo apt-get install -y kubelet=1.26.4-00 kubeadm=1.26.4-00 kubectl=1.26.4-00
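
To check the installed versions:

kubeadm version
kubectl version --client
kubelet --version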

 

18. Hold the Kubernetes packages so they are not upgraded automatically:

sudo apt-mark hold kubelet kubeadm kubectl
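
You can confirm that the packages are held with:

sudo apt-mark showhold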

 

 

The following instructions should be performed only on the Control plane node.

1. Initialize the Kubernetes cluster by using kubeadm:

sudo kubeadm init --pod-network-cidr 172.16.0.0/16 --kubernetes-version 1.26.4

 

2. Set up kubectl access:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
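
Alternatively, if you are working as the root user, you can simply point kubectl at the admin kubeconfig:

export KUBECONFIG=/etc/kubernetes/admin.conf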

 

3. Install the Calico Network Plugin

    3.1. Apply the Calico manifest:

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/calico.yaml
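
You can watch the Calico pods start in the kube-system namespace and wait until they are all Running:

kubectl get pods -n kube-system --watch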

 

    3.2. Restart kubelet and containerd:

sudo systemctl restart kubelet.service containerd.service

 

    3.3. Check the status of the Control plane node (it can take a minute or two for it to report Ready):

kubectl get nodes

 

4. Generate the cluster join command:

kubeadm token create --print-join-command
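
The printed command generally has the following shape (the values below are placeholders; use the exact output from your Control plane node):

kubeadm join <CONTROL_PLANE_IP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>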

 

The following instructions should be performed only on Worker nodes.

1. Execute the cluster join command on each Worker node:

sudo kubeadm join...

 

2. Check the cluster status (run this on the Control plane node, since kubectl is not configured on the Worker nodes):

kubectl get nodes

 

Install MetalLB on the Cluster

We are going to install MetalLB by using the Bitnami Helm chart.

https://artifacthub.io/packages/helm/bitnami/metallb

 

1. Install the MetalLB Helm chart. The release is installed into the metallb-system namespace so that it matches the namespace used by the IPAddressPool below. If the Bitnami repository has not been added yet, add it first:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install --namespace metallb-system --create-namespace metallb-system bitnami/metallb --version 4.4.1
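
You can verify that the MetalLB controller and speaker pods are running:

kubectl get pods -n metallb-system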

 

2. Create IPAddressPool configuration:

vim ipaddresspool.yaml

 

Add the following code to the file:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  # A name for the address pool. Services can request allocation
  # from a specific address pool using this name.
  name: first-pool
  namespace: metallb-system
spec:
  # A list of IP address ranges over which MetalLB has
  # authority. You can list multiple ranges in a single pool, they
  # will all share the same settings. Each range can be either a
  # CIDR prefix, or an explicit start-end range of IPs.
  addresses:
  - 192.168.20.0/24

 

3. Apply IPAddressPool configuration:

kubectl apply -f ipaddresspool.yaml
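
Note: if you are using MetalLB in layer 2 mode (as opposed to BGP), the assigned addresses also need to be announced with an L2Advertisement resource. A minimal sketch that references the pool above (the resource name first-pool-advertisement is just an example):

cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  # Example name; pick any name you like
  name: first-pool-advertisement
  namespace: metallb-system
spec:
  # Announce addresses from the pool defined above
  ipAddressPools:
  - first-pool
EOF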

 

 

4. Do not forget to create a static route on your router so that the MetalLB address pool (192.168.20.0/24) is reachable, e.g.:

Network/Host IP    Netmask          Gateway
192.168.20.0       255.255.255.0    <K8s NODE IP>
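
As an optional end-to-end test (the nginx deployment below is just a throwaway example), create a LoadBalancer Service and check that it receives an external IP from the pool:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=LoadBalancer --port=80
kubectl get svc nginx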

 

 
