This guide walks you through deploying a Kubernetes cluster on CentOS 8 using kubeadm with the CRI-O container runtime.

We will use one VM as the master node and two VMs as worker nodes.

VM Role     IP                Hostname            Resources
Master      192.168.151.128   master01.tayeh.me   2 GB RAM, 2 vCPUs
Worker 01   192.168.151.129   worker01.tayeh.me   2 GB RAM, 2 vCPUs
Worker 02   192.168.151.130   worker02.tayeh.me   2 GB RAM, 2 vCPUs

Steps

1. Set the hostname on each VM

$ hostnamectl set-hostname master01.tayeh.me # repeat on each node with its own hostname

2. Disable Swap

$ swapoff -a
$ vi /etc/fstab # comment out or remove the line containing swap
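
If you prefer a one-liner instead of editing the file by hand, the following should comment out the swap entry (assuming a standard fstab layout; adjust the pattern if yours differs):

$ sed -i '/\sswap\s/ s/^/#/' /etc/fstab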

3. Disable SELinux

$ setenforce 0
$ vi /etc/selinux/config # set SELINUX=disabled
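
Alternatively, assuming the file still contains the default SELINUX=enforcing line, a sed one-liner makes the same edit:

$ sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config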

Reboot the VMs after disabling SELinux.

4. Configure sysctl and load the required kernel modules

$ modprobe overlay
$ modprobe br_netfilter
$ tee /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
$ sysctl --system
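
Note that modprobe only loads the modules for the current boot. To make them persist across reboots, you can also drop a modules-load.d file (a common extra step, shown here as a suggestion):

$ tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF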

5. Install kubelet, kubeadm and kubectl (and enable epel-release)

$ tee /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
$ dnf -y install kubelet kubeadm kubectl --disableexcludes=kubernetes epel-release
$ kubectl version --client # check the version of kubectl
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:25:17Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
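
Because the CRI-O version installed in the next step has to match the Kubernetes minor version, you may prefer to pin the packages explicitly. A sketch, assuming 1.23.3 is available in the repo:

$ dnf -y install kubelet-1.23.3 kubeadm-1.23.3 kubectl-1.23.3 --disableexcludes=kubernetes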

6. Install Container runtime (CRI-O)

$ export OS=CentOS_8_Stream # or OS=CentOS_8
$ export VERSION=1.23 # must match your Kubernetes version
# Add repo
$ curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
$ curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo

# Install CRI-O
$ dnf install cri-o

# Start and enable Service
$ systemctl daemon-reload
$ systemctl enable --now crio
$ systemctl status crio
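
To confirm that CRI-O is actually answering on its socket, you can query it with crictl (usually installed alongside CRI-O as part of cri-tools):

$ crictl --runtime-endpoint unix:///var/run/crio/crio.sock info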

7. Firewalld rules

On the master node:

$ firewall-cmd --add-port={6443,2379-2380,10250,10251,10252,5473,179}/tcp --permanent
$ firewall-cmd --add-port={4789,8285,8472}/udp --permanent
$ firewall-cmd --reload

On the worker nodes:

$ firewall-cmd --add-port={10250,30000-32767,5473,179}/tcp --permanent
$ firewall-cmd --add-port={4789,8285,8472}/udp --permanent
$ firewall-cmd --reload
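
On either node type you can verify that the rules were applied:

$ firewall-cmd --list-ports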

8. Initialize the control-plane node

Make sure the br_netfilter module is loaded:

$ lsmod | grep br_netfilter
br_netfilter           24576  0
bridge                278528  1 br_netfilter
# if you don't see output like the above, rerun: modprobe overlay; modprobe br_netfilter

Enable the kubelet service

$ systemctl enable kubelet

Pull container images

$ kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.23.3
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.23.3
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.23.3
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.23.3
[config/images] Pulled k8s.gcr.io/pause:3.6
[config/images] Pulled k8s.gcr.io/etcd:3.5.1-0
[config/images] Pulled k8s.gcr.io/coredns/coredns:v1.8.6

Create cluster

$ kubeadm init
W0206 20:14:42.692845    2943 version.go:103] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0206 20:14:42.692944    2943 version.go:104] falling back to the local client version: v1.23.3
[init] Using Kubernetes version: v1.23.3
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING FileExisting-tc]: tc not found in system path
        [WARNING Hostname]: hostname "master01.tayeh.me" could not be reached
        [WARNING Hostname]: hostname "master01.tayeh.me": lookup master01.tayeh.me on 192.168.151.2:53: read udp 192.168.151.128:54368->192.168.151.2:53: i/o timeout
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01.tayeh.me] and IPs [10.96.0.1 192.168.151.128]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01.tayeh.me] and IPs [192.168.151.128 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01.tayeh.me] and IPs [192.168.151.128 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.002001 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01.tayeh.me as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01.tayeh.me as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ddyzgx.nuhs3eyyhm3disy7
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.151.128:6443 --token ddyzgx.nuhs3eyyhm3disy7 \
        --discovery-token-ca-cert-hash sha256:bd4cf138e1e0b018ebeb5c34074354d5eeee90082468728ee165c6be2bfc1d69

Configure kubectl using the commands from the output

$ mkdir -p $HOME/.kube
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config
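
Optionally, enable shell completion for kubectl (this assumes the bash-completion package is available in your repos):

$ dnf -y install bash-completion
$ kubectl completion bash > /etc/bash_completion.d/kubectl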

Check cluster status

$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.151.128:6443
CoreDNS is running at https://192.168.151.128:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

$ kubectl get node 
NAME                STATUS   ROLES                  AGE     VERSION
master01.tayeh.me   Ready    control-plane,master   8m32s   v1.23.3

9. Install the network plugin (Calico)

$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Check that the pods are running

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                        READY   STATUS              RESTARTS   AGE
kube-system   calico-kube-controllers-566dc76669-dm8ph    0/1     ContainerCreating   0          20s
kube-system   calico-node-2qphl                           0/1     Init:0/3            0          21s
kube-system   coredns-64897985d-b2xsl                     1/1     Running             0          14m
kube-system   coredns-64897985d-kjb7z                     1/1     Running             0          14m
kube-system   etcd-master01.tayeh.me                      1/1     Running             0          14m
kube-system   kube-apiserver-master01.tayeh.me            1/1     Running             0          14m
kube-system   kube-controller-manager-master01.tayeh.me   1/1     Running             0          14m
kube-system   kube-proxy-ksq56                            1/1     Running             0          14m
kube-system   kube-scheduler-master01.tayeh.me            1/1     Running             0          14m
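
The Calico pods need a minute or two to become Ready. If you want to block until they are, you can wait on the node agents (the k8s-app=calico-node label is what the stock manifest uses):

$ kubectl -n kube-system wait --for=condition=Ready pods -l k8s-app=calico-node --timeout=300s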

10. Add the worker nodes

$ kubeadm join 192.168.151.128:6443 --token ddyzgx.nuhs3eyyhm3disy7 \
        --discovery-token-ca-cert-hash sha256:bd4cf138e1e0b018ebeb5c34074354d5eeee90082468728ee165c6be2bfc1d69
# this command comes from the output of kubeadm init
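
If you no longer have that output, or the token has expired (bootstrap tokens are valid for 24 hours by default), you can generate a fresh join command on the master:

$ kubeadm token create --print-join-command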

Check the nodes

$ kubectl get node  -o wide
NAME                STATUS   ROLES                  AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE          KERNEL-VERSION          CONTAINER-RUNTIME
master01.tayeh.me   Ready    control-plane,master   17m   v1.23.3   192.168.151.128   <none>        CentOS Stream 8   4.18.0-358.el8.x86_64   cri-o://1.23.0
worker01.tayeh.me   Ready    <none>                 28s   v1.23.3   192.168.151.129   <none>        CentOS Stream 8   4.18.0-358.el8.x86_64   cri-o://1.23.0
worker02.tayeh.me   Ready    <none>                 23s   v1.23.3   192.168.151.130   <none>        CentOS Stream 8   4.18.0-358.el8.x86_64   cri-o://1.23.0
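
The worker ROLES column shows <none>. If you would like a worker role to appear there, you can add the label yourself (purely cosmetic; a suggested command):

$ kubectl label node worker01.tayeh.me worker02.tayeh.me node-role.kubernetes.io/worker=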

11. Deploy a test application on the cluster

$ kubectl apply -f https://k8s.io/examples/pods/commands.yaml
$ kubectl get pods -o wide
NAME           READY   STATUS              RESTARTS   AGE   IP       NODE                NOMINATED NODE   READINESS GATES
command-demo   0/1     ContainerCreating   0          53s   <none>   worker01.tayeh.me   <none>           <none>

$ kubectl get pods -o wide 
NAME           READY   STATUS      RESTARTS   AGE     IP          NODE                NOMINATED NODE   READINESS GATES
command-demo   0/1     Completed   0          7m29s   10.85.0.2   worker01.tayeh.me   <none>           <none>
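
The pod runs its command and exits, so once it shows Completed you can read its output:

$ kubectl logs command-demo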

Enjoy!