Setup Kubernetes Cluster Using Kubeadm

'''Kubeadm Setup Prerequisites'''

Following are the prerequisites for Kubeadm Kubernetes cluster setup.

# Minimum two Ubuntu nodes (one master and one worker node). You can have more worker nodes as per your requirements.
# The master node should have a minimum of 2 vCPU and 2 GB RAM.
# For the worker nodes, a minimum of 1 vCPU and 2 GB RAM is recommended.
# '''10.X.X.X/X''' network range with static IPs for master and worker nodes. We will be using the '''192.x.x.x''' series as the pod network range that will be used by the Calico network plugin. Make sure the node IP range and pod IP range don't overlap.


'''On cp1:'''<pre>
sudo hostnamectl set-hostname cp1
</pre>'''On worker1:'''<pre>
sudo hostnamectl set-hostname w1
</pre>'''On worker2:'''<pre>sudo hostnamectl set-hostname w2</pre>'''On all nodes''', add entries to <code>/etc/hosts</code>:<pre>
sudo nano /etc/hosts
</pre>Append the following to the end of the file, substituting your own IPs:<pre>
10.0.0.10  cp1
10.0.0.11  w1
10.0.0.12  w2
</pre>This allows the nodes to reach each other by name rather than by IP address.
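Optionally, a quick sanity check from cp1 (this assumes ICMP is allowed between the nodes):<pre>
ping -c1 w1
ping -c1 w2
</pre>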


'''1. General system preparation (all nodes)'''


'''1.1. Update packages'''<pre>
sudo apt update
sudo apt upgrade -y
</pre>We simply bring the system up to date to avoid bugs caused by outdated packages.


'''1.2. Disable swap (required for kubeadm)'''


On all nodes:<pre>
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
</pre>Verification:<pre>
free -h
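# The Swap line in the 'free -h' output should now show 0B.
# Another quick check (prints nothing once swap is fully off):
swapon --show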
</pre>'''1.3. Kernel modules and sysctl (for networking)'''

On '''each''' node:<pre>
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter
</pre>Now the sysctl settings:<pre>
cat <<EOF | sudo tee /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system
</pre>This enables correct traffic handling through the Linux bridge and IP forwarding, both of which are required by the CNI plugin (Calico).
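To confirm the values took effect, you can query them directly:<pre>
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
</pre>Each of them should be reported as <code>= 1</code>.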


'''1.4. (Lab only) Disable UFW'''

If UFW is enabled and this is a disposable lab:<pre>
sudo ufw disable
</pre>In production, of course, it is better to open only the required ports, but for a first build it is easier to go without a firewall.
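If you do keep a firewall on later, these are the ports a kubeadm cluster normally needs; a sketch for this control-plane/worker split (Calico may additionally need BGP 179/TCP or VXLAN 4789/UDP, depending on the encapsulation you choose):<pre>
# Control plane (cp1)
sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 10257/tcp       # kube-controller-manager
sudo ufw allow 10259/tcp       # kube-scheduler

# Workers (w1, w2)
sudo ufw allow 10250/tcp         # kubelet API
sudo ufw allow 30000:32767/tcp   # NodePort services
</pre>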


'''2. Install containerd (all nodes)'''


On '''each''' node:<pre>
sudo apt install -y containerd
</pre>Generate the default configuration:<pre>
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
</pre>Restart and enable the service:<pre>
sudo systemctl restart containerd
sudo systemctl enable containerd
sudo systemctl status containerd
</pre><code>SystemdCgroup = true</code> → kubelet and containerd use the same cgroup driver (systemd). This is now considered best practice.
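Note that, depending on the containerd version Ubuntu ships, the generated default config may still contain <code>SystemdCgroup = false</code> in the runc options section. If it does, flip it and restart containerd (a minimal sketch; check the file rather than blindly trusting the sed):<pre>
grep SystemdCgroup /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
</pre>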


'''3. Install kubeadm/kubelet/kubectl 1.34 (all nodes)'''


Following the official documentation for '''v1.34''':


'''3.1. Packages for the repository'''


On '''each''' node:<pre>
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
</pre>'''3.2. GPG key and the pkgs.k8s.io repository (v1.34)'''

Ubuntu 22.04 already has <code>/etc/apt/keyrings</code>, but in case it does not:<pre>
sudo mkdir -p -m 755 /etc/apt/keyrings
</pre>The repository key:<pre>
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key \
   | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
</pre>Add the repository '''specifically for Kubernetes 1.34''':<pre>
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /" \
   | sudo tee /etc/apt/sources.list.d/kubernetes.list
</pre>'''3.3. Install kubeadm/kubelet/kubectl'''<pre>
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
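# Optional sanity check: confirm the installed tools come from the 1.34.x branch.
kubeadm version -o short
kubectl version --client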
</pre><blockquote>The <code>v1.34</code> repository guarantees that exactly the 1.34.x branch is installed, and <code>apt-mark hold</code> keeps it from being accidentally bumped to another minor version during <code>apt upgrade</code>.</blockquote>(Optional: you can enable kubelet right away; it will keep restarting while it waits for <code>kubeadm init / join</code>):<pre>
sudo systemctl enable --now kubelet
</pre>'''5. Control-plane initialisation (cp1 only)'''


Now we create the cluster on <code>cp1</code>.


'''5.1. kubeadm init with the pod CIDR for Calico'''


Calico uses <code>192.168.0.0/16</code> by default. For convenience, we use the same range:


On <code>cp1</code>:<pre>
sudo kubeadm init \
   --apiserver-advertise-address=10.0.0.10 \
   --pod-network-cidr=192.168.0.0/16
</pre>


* <code>--apiserver-advertise-address</code> — the IP of cp1 that everything will use to reach the API server.
* <code>--pod-network-cidr</code> — the IP range for pods that Calico will use.
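The same two settings can also be expressed as a kubeadm configuration file and passed with <code>--config</code>; a minimal sketch, assuming the v1beta4 kubeadm config API used by 1.34 (the file name is arbitrary):<pre>
# kubeadm-config.yaml (hypothetical file name)
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.10
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.0.0/16
</pre>It would then be used as <code>sudo kubeadm init --config kubeadm-config.yaml</code>.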


At the end, <code>kubeadm</code> will print:


* a message about successful initialisation;
* the '''<code>kubeadm join ...</code>''' command — be sure to copy it somewhere (we will use it on the workers).


'''5.2. Configure kubectl for the <code>sadmin</code> user (cp1)'''


Currently, the kubeconfig is located in <code>/etc/kubernetes/admin.conf</code> and is owned by root.


On <code>cp1</code>, as <code>sadmin</code>:<pre>
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</pre>Verification:<pre>
kubectl get nodes
</pre>Until the network is up, the node may be <code>NotReady</code> — this is normal.
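For reference, the output at this point typically looks roughly like this (illustrative values):<pre>
NAME   STATUS     ROLES           AGE   VERSION
cp1    NotReady   control-plane   1m    v1.34.x
</pre>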


'''6. Install Calico 3.31 (cp1 only)'''


We take the current Calico 3.31.1, following the official on-prem instructions.


'''6.1. Tigera Operator + CRD'''


On <code>cp1</code>:<pre>
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.31.1/manifests/operator-crds.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.31.1/manifests/tigera-operator.yaml
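# Optional: watch the operator pod come up before continuing.
kubectl get pods -n tigera-operator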
</pre><blockquote>This installs the Calico operator, which then rolls out the required components to the nodes on its own.</blockquote>'''6.2. Calico configuration (custom-resources)'''


By default we use the iptables dataplane (no eBPF, to keep things simple for now).


On <code>cp1</code>:<pre>
curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.31.1/manifests/custom-resources.yaml
kubectl create -f custom-resources.yaml
</pre>(If you want eBPF later, there is also a <code>custom-resources-bpf.yaml</code> in the same location.)
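Note: the downloaded manifest defines an IPPool whose <code>cidr</code> must match the <code>--pod-network-cidr</code> chosen above (here both are 192.168.0.0/16, so the default is fine). For reference, the relevant part of the Installation resource looks roughly like this sketch of the shipped defaults, not the full file:<pre>
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - name: default-ipv4-ippool
      cidr: 192.168.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
</pre>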


'''6.3. Verify that Calico has come up'''


On <code>cp1</code>:<pre>
watch kubectl get tigerastatus
</pre>Wait until everything shows <code>AVAILABLE=True</code>:<pre>
NAME        AVAILABLE   PROGRESSING   DEGRADED   SINCE
calico      True        False         False      ...
ippools     True        False         False      ...
...
</pre>Then check the pods and nodes:<pre>
kubectl get pods -n calico-system
kubectl get pods -n kube-system
kubectl get nodes
</pre><code>cp1</code> should transition to <code>Ready</code> status.


'''7. Make the control plane "clean" (without workload pods)'''


By default in 1.34, the control plane '''may''' accept regular pods (the taint is not set automatically).


You don't need that here, so set the taint right away:


On <code>cp1</code>:<pre>
kubectl taint nodes cp1 node-role.kubernetes.io/control-plane=:NoSchedule
</pre>Verification:<pre>
kubectl describe node cp1 | grep -i Taint
</pre>You should see something like:<pre>
Taints: node-role.kubernetes.io/control-plane:NoSchedule
</pre><blockquote>Now the scheduler will NOT place regular pods on cp1. Only system components will remain there (kube-apiserver, etcd, kube-controller-manager, scheduler, the Calico DaemonSet, etc.).</blockquote>'''8. Connect worker1 and worker2'''


'''8.1. Get the join command (if you lost it)'''


If you did not save the output of <code>kubeadm init</code>, run this on <code>cp1</code>:<pre>
sudo kubeadm token create --print-join-command
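# Tokens expire after 24 hours by default; existing ones can be listed with:
sudo kubeadm token list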
</pre>Example:<pre>
kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef \
   --discovery-token-ca-cert-hash sha256:xxxxxxxx...
</pre>'''8.2. Run the join on worker1 / worker2'''

On <code>worker1</code>:<pre>
sudo kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef \
   --discovery-token-ca-cert-hash sha256:xxxxxxxx...
</pre>On <code>worker2</code>:<pre>
sudo kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef \
   --discovery-token-ca-cert-hash sha256:xxxxxxxx...
</pre><blockquote>The token and the CA certificate hash are what let the workers join the cluster securely.</blockquote>After that, on <code>cp1</code>:<pre>
kubectl get nodes
</pre>Expected result:<pre>
NAME   STATUS   ROLES           AGE   VERSION
cp1    Ready    control-plane   20m   v1.34.x
w1     Ready    <none>          5m    v1.34.x
w2     Ready    <none>          3m    v1.34.x
</pre>If the workers stay <code>NotReady</code> for a while, wait until kube-proxy and Calico have been pulled and started on them.
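If a worker stays <code>NotReady</code> for more than a few minutes, these checks (run on cp1) usually show which piece is still starting:<pre>
kubectl get pods -n calico-system -o wide
kubectl describe node w1 | grep -A8 -i conditions
</pre>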


'''9. Verification: pods land only on the worker nodes'''


Create a test deployment:


On <code>cp1</code>:<pre>
kubectl create deployment nginx --image=nginx --replicas=3
kubectl expose deployment nginx --port=80 --type=NodePort
</pre>Check:<pre>
kubectl get pods -o wide
</pre>All <code>nginx</code> pods should be running on <code>worker1</code> and <code>worker2</code>, '''but not on cp1'''.

If you do see a pod on cp1, the taint was not applied; apply it again and restart the rollout:<pre>
kubectl taint nodes cp1 node-role.kubernetes.io/control-plane=:NoSchedule --overwrite
kubectl rollout restart deploy nginx
</pre>
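Before cleaning up, you can also hit the exposed NodePort service from any node. The port is assigned from the 30000-32767 range and will differ in your cluster, so look it up first (31234 below is only an illustration):<pre>
kubectl get svc nginx
# e.g. PORT(S) shows 80:31234/TCP; then, substituting your own port:
curl http://10.0.0.11:31234
</pre>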

'''10. How to delete these pods (Deployment + Service)'''

You created:

* Deployment: nginx
* Service: nginx

To delete the Service:<pre>
kubectl delete service nginx
</pre>To delete the Deployment:<pre>
kubectl delete deployment nginx
</pre>Verify:<pre>
kubectl get pods
kubectl get svc
</pre>📌 If you want to delete everything related to nginx with a single command:<pre>
kubectl delete deploy,svc nginx
</pre>