Setup Kubernetes Cluster Using Kubeadm
== Kubeadm Setup Prerequisites ==
Following are the prerequisites for '''Kubeadm Kubernetes cluster setup'''.


# A minimum of two '''Ubuntu nodes''' (one master and one worker). You can add more worker nodes as required.
# The master node should have a minimum of '''2 vCPU and 2 GB RAM'''.
# For the worker nodes, a minimum of 1 vCPU and 2 GB RAM is recommended.
# A private network range (for example '''10.0.0.0/24''') with static IPs for the master and worker nodes. We will use '''192.168.0.0/16''' as the pod network range for the Calico network plugin. Make sure the node IP range and the pod IP range don't overlap.
Set a static IP with netplan (file under /etc/netplan/; adjust the interface name and addresses to your environment, this example shows cp1's address):<pre>
network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      dhcp4: no
      addresses:
        - 10.0.0.10/24
      routes:
        - to: default
          via: 10.0.0.1
      nameservers:
        addresses:
          - 8.8.8.8
          - 1.1.1.1
</pre>
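Apply the configuration and confirm the address took effect (interface name as in the file above):<pre>
sudo netplan apply
ip addr show enp3s0
</pre>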
On cp1:<pre>
sudo hostnamectl set-hostname cp1
</pre>On the first worker:<pre>
sudo hostnamectl set-hostname w1
</pre>On the second worker:<pre>sudo hostnamectl set-hostname w2</pre>Add entries to /etc/hosts on all nodes:<pre>
sudo nano /etc/hosts
</pre>Add the following to the end of the file, replacing with your IP addresses:<pre>
10.0.0.10  cp1
10.0.0.11  w1
10.0.0.12  w2
</pre>This allows nodes to address each other by name rather than by IP address.
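A quick check that the names resolve from each node:<pre>
ping -c 2 w1
ping -c 2 w2
</pre>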
'''1. General system preparation (all nodes)'''
'''1.1. Updating packages'''<pre>
sudo apt update
sudo apt upgrade -y
</pre>We are simply updating the system to avoid issues caused by outdated packages.
'''1.2. Disable swap (required for kubeadm)'''
On all nodes:<pre>
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
</pre>Verification:<pre>
free -h
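# the "Swap:" row should now show 0B in every column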
</pre>'''1.3. Kernel modules and sysctl (for networking)'''
On each node:<pre>
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
</pre>Now sysctl:<pre>
cat <<EOF | sudo tee /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                = 1
EOF
sudo sysctl --system
</pre>This enables correct traffic handling through the Linux bridge and IP forwarding, which is mandatory for the CNI plugin (Calico).
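You can verify that the modules are loaded and the sysctl values applied:<pre>
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.ipv4.ip_forward
sysctl net.bridge.bridge-nf-call-iptables
</pre>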
'''1.4. (For lab) Disable UFW'''
If UFW is enabled and this is a clean lab:<pre>
sudo ufw disable
</pre>In production, of course, it is better to open only the necessary ports, but for a first build it is easier without a firewall.
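For reference, a minimal production rule set might look like the sketch below, based on the ports listed in the official kubeadm documentation:<pre>
# control plane
sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 10257/tcp       # kube-controller-manager
sudo ufw allow 10259/tcp       # kube-scheduler
# workers
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 30000:32767/tcp # NodePort services
# Calico may additionally need 179/tcp (BGP) or 4789/udp (VXLAN), depending on its backend
</pre>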
'''2. Install containerd (all nodes)'''
On each node:<pre>
sudo apt install -y containerd
</pre>Generate the default configuration and switch the cgroup driver to systemd:<pre>
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
# the generated default ships with SystemdCgroup = false; switch it to true
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
</pre>Restart and enable:<pre>
sudo systemctl restart containerd
sudo systemctl enable containerd
sudo systemctl status containerd
</pre>With SystemdCgroup = true, kubelet and containerd use the same cgroup driver (systemd), which is current best practice.
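A quick check that the setting took effect:<pre>
grep SystemdCgroup /etc/containerd/config.toml
</pre>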
'''3. Install kubeadm/kubelet/kubectl 1.34 (all nodes)'''
According to the official documentation for v1.34:
'''3.1. Repo packages'''
On each node:<pre>
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
</pre>'''3.2. GPG key and pkgs.k8s.io repository (v1.34)'''
Ubuntu 22.04 already has /etc/apt/keyrings; the mkdir below creates it if your system doesn't.
Repository key:
<pre>
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key \
  | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
</pre>Add the repo specifically for Kubernetes 1.34:<pre>
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /" \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list
</pre>'''3.3. Installing kubeadm/kubelet/kubectl'''<pre>
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
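# verify the installed versions (they should report v1.34.x)
kubeadm version -o short
kubectl version --client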
</pre><blockquote>The v1.34 repository ensures that the 1.34.x branch is installed. apt-mark hold prevents accidental upgrades to other minor versions during apt upgrade.</blockquote>(Not required, but you can enable kubelet immediately; it will restart in a crash loop until kubeadm init / join gives it a configuration, which is expected):<pre>
sudo systemctl enable --now kubelet
</pre>'''4. Control-plane initialisation (cp1 only)'''
Now we create a cluster on <code>cp1</code>.
'''4.1. kubeadm init with pod-CIDR for Calico'''
Calico uses 192.168.0.0/16 by default. For convenience, we will use the same:
On <code>cp1</code>:<pre>
sudo kubeadm init \
  --apiserver-advertise-address=10.0.0.10 \
  --pod-network-cidr=192.168.0.0/16
</pre>
* <code>--apiserver-advertise-address</code> is the IP of cp1 that everyone will use to reach the API server.
* <code>--pod-network-cidr</code> is the IP range for pods that Calico will use.
At the end, kubeadm will output:
* a message about successful initialisation;
* the <code>kubeadm join ...</code> command; be sure to copy it somewhere (we will use it on the workers).
'''4.2. Configuring kubectl for the sadmin user (cp1)'''
Currently, kubeconfig is located in /etc/kubernetes/admin.conf and belongs to root.
On cp1 under sadmin:<pre>
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</pre>Verification:<pre>
kubectl get nodes
</pre>Until the pod network is up, the node may be NotReady; this is normal.
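Until Calico is installed, the output will look something like this:<pre>
NAME   STATUS     ROLES           AGE   VERSION
cp1    NotReady   control-plane   1m    v1.34.x
</pre>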
'''5. Install Calico (cp1 only)'''
We take Calico v3.28.1 (matching the manifest URLs below) from the official on-prem instructions.<pre>
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/tigera-operator.yaml
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/custom-resources.yaml
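# optional: watch the Calico pods come up (the operator creates the calico-system namespace); Ctrl-C to stop
kubectl get pods -n calico-system -w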
</pre>'''6. Make the control plane "clean" (without workload pods)'''
kubeadm normally applies the control-plane taint automatically, but it is worth setting it explicitly (with <code>--overwrite</code>, so the command is idempotent) to make sure no workload pods ever land on cp1:
On <code>cp1</code>:<pre>
kubectl taint nodes cp1 node-role.kubernetes.io/control-plane=:NoSchedule --overwrite
</pre>Verification:<pre>
kubectl describe node cp1 | grep -i Taint
</pre>It should be something like:<pre>
Taints: node-role.kubernetes.io/control-plane:NoSchedule
</pre><blockquote>Now the scheduler will NOT place regular pods on cp1. Only system components will remain there (kube-apiserver, etcd, kube-controller-manager, scheduler, the Calico DaemonSet, etc.).</blockquote>'''7. Connect w1 and w2'''
'''7.1. Obtain the join command (if lost)'''
If you did not save the output of kubeadm init, on cp1:<pre>
sudo kubeadm token create --print-join-command
</pre>Example:<pre>
kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:xxxxxxxx...
</pre>'''7.2. Perform a join on w1 / w2'''
On <code>w1</code>:<pre>
sudo kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:xxxxxxxx...
</pre>On <code>w2</code>:<pre>
sudo kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:xxxxxxxx...
</pre><blockquote>Here, a token and CA hash are used to enable workers to securely join the cluster.</blockquote>After that, on <code>cp1</code>:<pre>
kubectl get nodes
</pre>Expected result:<pre>
NAME   STATUS   ROLES           AGE   VERSION
cp1    Ready    control-plane   20m   v1.34.x
w1     Ready    <none>          5m    v1.34.x
w2     Ready    <none>          3m    v1.34.x
</pre>If the workers stay NotReady for a while, wait until kube-proxy and the Calico pods start on them.
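You can watch the status change as it happens:<pre>
kubectl get nodes -w
</pre>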
'''8. Verification: workloads run only on worker nodes'''
Let's create a test deployment:
On <code>cp1</code>:<pre>
kubectl create deployment nginx --image=nginx --replicas=3
kubectl expose deployment nginx --port=80 --type=NodePort
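# optional: find the assigned NodePort and test the service from any node
kubectl get svc nginx
# curl http://<worker-ip>:<nodeport>   # substitute a worker IP and the port shown above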
</pre>Let's see:<pre>
kubectl get pods -o wide
</pre>All nginx pods should be on w1 and w2, and none on cp1.
If you see a pod on cp1, the taint did not apply; set it again and restart the rollout:<pre>
kubectl taint nodes cp1 node-role.kubernetes.io/control-plane=:NoSchedule --overwrite
kubectl rollout restart deploy nginx
</pre>'''9. Cleanup: delete the test Deployment and Service'''
You created:
* Deployment: '''nginx'''
* Service: '''nginx'''
To remove the Service:<pre>
kubectl delete service nginx
</pre>Remove the Deployment:<pre>
kubectl delete deployment nginx
</pre>Let's check:<pre>
kubectl get pods
kubectl get svc
</pre>To remove everything related to nginx with a single command:<pre>
kubectl delete deploy,svc nginx
</pre>'''10. kubectl bash completion (optional)'''
Install the bash-completion package (if it is not already installed):<pre>
sudo apt install bash-completion -y
</pre>Activate completion in the current session:<pre>
source /etc/bash_completion
</pre>Enable autocompletion for kubectl:<pre>
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
</pre>
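Optionally, alias <code>k</code> to <code>kubectl</code> and keep completion working for the alias (the <code>__start_kubectl</code> function is provided by the completion script loaded above):<pre>
echo "alias k=kubectl" >> ~/.bashrc
echo "complete -o default -F __start_kubectl k" >> ~/.bashrc
source ~/.bashrc
</pre>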
