Set Up a Kubernetes Cluster with Kubeadm¶
This guide provides a step-by-step process to deploy a production-ready Kubernetes cluster using kubeadm on Ubuntu 24.04 LTS, tailored for high availability (HA) and integrated with the Calico CNI. Designed for clarity and precision, it ensures you can initialize a control plane, join worker nodes, and verify a fully functional cluster with confidence. Follow each step to build a robust Kubernetes environment for development, testing, or production.
Prerequisites¶
Before starting, ensure your environment meets these requirements:
- Operating System: Ubuntu 24.04 LTS on all nodes.
- Hardware:
    - Control Plane Nodes: Minimum 2 CPUs, 4 GB RAM (e.g., AWS EC2 `t2.medium`).
    - Worker Nodes: Minimum 1 CPU, 2 GB RAM.
    - Storage: 20 GB disk per node.
- Networking:
    - Full connectivity between nodes (private or public network).
    - Unique hostname, MAC address, and `product_uuid` for each node.
    - Open ports as per the Kubernetes ports and protocols.
- Software: `kubeadm`, `kubelet`, `kubectl`, and `containerd` (installed in later steps).
- Access: SSH access to all nodes with `sudo` privileges.
Example Cluster Setup (AWS EC2):

| Instance Name | Private IP   | Role          |
|---------------|--------------|---------------|
| k8s-master-1  | 10.0.138.123 | Control Plane |
| k8s-master-2  | 10.0.138.124 | Control Plane |
| k8s-worker-1  | 10.0.138.125 | Worker Node   |
| k8s-worker-2  | 10.0.138.126 | Worker Node   |
🚀 Quick & Effortless Kubernetes Cluster Setup — One Click Away!¶
Setting up a Kubernetes control plane manually?
No need!
Head over to infra-bootstrap — my dedicated automation repo offering one-click Kubernetes control plane installation.
🎯 What you'll find there:

- ⚡ Fully automated control plane setup scripts for EC2
- 💾 Clean, reliable Kubernetes installation workflows
- 🚀 No step-by-step guides, no copy-pasting — just one command to rule them all
👉 Explore infra-bootstrap now and experience effortless Kubernetes cluster initialization!
🌸 Let automation bloom, while you focus on building the future on Kubernetes. 🌸
Get Started¶
Step 1: Set Up AWS EC2 Instances¶
Configure EC2 instances to host your Kubernetes cluster, ensuring proper networking and security settings.
- Create EC2 Instances:
    - Instance Type: `t2.medium` (2 vCPUs, 4 GB RAM) for control plane; `t2.micro` or higher for workers.
    - OS: Ubuntu 24.04 LTS.
    - Storage: 20 GB SSD (gp3 recommended).
    - Networking: Place all instances in the same VPC and subnet for simplicity. Assign private IPs (e.g., `10.0.138.123` for `k8s-master-1`).
- Security Group: Create a security group allowing:
    - Control Plane: TCP 6443 (API server), 2379-2380 (etcd), 10250-10259 (kubelet, scheduler, controller).
    - Worker Nodes: TCP 10250 (kubelet), 30000-32767 (NodePort).
    - Inter-Node: All traffic within the VPC (e.g., `10.0.0.0/16`) for pod communication.
    - SSH: TCP 22 from your IP for access.
    - Reference: Kubernetes Ports.
- Verify Setup:
    - SSH into each instance: `ssh -i <key.pem> ubuntu@<public-ip>`.
    - Confirm private IPs: `ip addr show`.
    - Ensure unique MAC and UUID:

      ```bash
      ip link show | grep ether
      sudo cat /sys/class/dmi/id/product_uuid
      ```
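After collecting each node's MAC address and `product_uuid`, a quick way to confirm they are all unique is to list them one per line and look for repeats. A minimal sketch (the `find_duplicates` helper name is my own, not part of any tool):

```shell
# Hypothetical helper: read whitespace-separated identifiers (e.g. one
# "mac uuid" pair per node, gathered over SSH) and print any value that
# appears more than once. Empty output means every identifier is unique.
find_duplicates() {
  tr ' ' '\n' | sort | uniq -d
}

# Example with two sample nodes whose identifiers are all distinct:
printf '%s\n' \
  '02:aa:bb:cc:dd:01 11111111-aaaa' \
  '02:aa:bb:cc:dd:02 22222222-bbbb' | find_duplicates
# → prints nothing (no duplicates)
```

Any line this prints is an identifier shared by two nodes, which kubeadm will reject.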
Step 2: Configure the Base OS on All Nodes¶
Prepare each node’s operating system to meet Kubernetes requirements, including disabling swap, setting hostnames, and enabling networking.
- Update OS and Install Tools:

  ```bash
  sudo apt update && sudo apt upgrade -y
  sudo apt install -y net-tools
  ```

- Disable Swap: Kubernetes requires swap to be disabled to ensure predictable performance.

  ```bash
  # Disable swap immediately
  sudo swapoff -a
  # Remove swap entries from fstab
  sudo sed -i '/\s\+swap\s\+/d' /etc/fstab
  # Verify swap is disabled
  free -h | grep Swap
  ```

  Expected Output:

  ```
  Swap:     0B     0B     0B
  ```
Explanation:¶

- `\s\+` → Matches one or more whitespace characters.
- `swap` → Looks for the word "swap".
- `\s\+` → Ensures "swap" is surrounded by whitespace.
- `/d` → Deletes matching lines.

Effect:¶

- It removes only lines where "swap" appears with whitespace around it, targeting properly formatted swap entries.
- Other lines in fstab are left unaffected.
- This is a safe operation, as it only removes lines that match the specified pattern.

Purpose:¶

- This removes any swap entries from `/etc/fstab`, which prevents the system from mounting swap partitions or swap files on boot.
- Set Unique Hostnames: Assign descriptive hostnames to each node for clarity.

  ```bash
  # On k8s-master-1
  sudo hostnamectl set-hostname k8s-master-1
  # On k8s-master-2
  sudo hostnamectl set-hostname k8s-master-2
  # On k8s-worker-1, etc.
  sudo hostnamectl set-hostname k8s-worker-1
  ```

- Configure /etc/hosts (Optional): Add entries for node resolution without a DNS server.

  ```bash
  sudo nano /etc/hosts
  ```

  Add:

  ```
  127.0.0.1    localhost
  10.0.138.123 k8s-master-1
  10.0.138.124 k8s-master-2
  10.0.138.125 k8s-worker-1
  10.0.138.126 k8s-worker-2
  ```

  Verify: `ping k8s-master-1`.
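If you script this step rather than editing with nano, an idempotent append avoids duplicate entries on re-runs. A minimal sketch (the `add_host` helper and its arguments are my own invention; the hosts-file path is a parameter so it can be exercised safely):

```shell
# Hypothetical helper: append "ip name" to a hosts file only if the
# hostname is not already present, so re-running it is harmless.
add_host() {
  ip=$1; name=$2; hosts_file=${3:-/etc/hosts}
  grep -qw "$name" "$hosts_file" || printf '%s %s\n' "$ip" "$name" >> "$hosts_file"
}

# Example against a scratch file:
f=$(mktemp)
add_host 10.0.138.123 k8s-master-1 "$f"
add_host 10.0.138.123 k8s-master-1 "$f"   # second call is a no-op
cat "$f"   # → one line: "10.0.138.123 k8s-master-1"
rm -f "$f"
```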
Step 3: Install Kubernetes Dependencies on All Nodes¶
Install kubeadm, kubelet, and kubectl to bootstrap and manage the cluster. Ensure version consistency (v1.32.2) across components.
- `kubeadm`: the command to bootstrap the cluster.
- `kubelet`: the component that runs on every machine in your cluster and does things like starting pods and containers.
- `kubectl`: the command line utility to talk to your cluster.

`kubeadm` will not install or manage `kubelet` or `kubectl` for you, so you need to ensure they match the version of the Kubernetes control plane you want kubeadm to install. `kubeadm` deploys the control plane components itself, but it relies on `kubelet` already being present on each node, which is why you install `kubelet` separately.
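Since the three components must stay in lockstep, a tiny check after installation can catch version drift early. A sketch (the `same_version` helper is hypothetical; feed it the version strings each binary reports):

```shell
# Hypothetical helper: succeed only when every argument is the identical
# version string. Intended use on a node, e.g.:
#   same_version "$(kubeadm version -o short)" \
#                "$(kubelet --version | awk '{print $2}')"
same_version() {
  first=$1
  for v in "$@"; do
    [ "$v" = "$first" ] || return 1
  done
}

same_version v1.32.2 v1.32.2 && echo "versions match"   # → versions match
```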
- Add Kubernetes Repository:

  ```bash
  sudo apt update
  sudo apt install -y apt-transport-https ca-certificates curl gpg
  sudo mkdir -p -m 755 /etc/apt/keyrings
  curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
  echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
  ```

- Install Kubernetes Components:

  ```bash
  sudo apt update
  # To pin an exact version:
  # sudo apt install -y kubelet=1.32.2-1.1 kubeadm=1.32.2-1.1 kubectl=1.32.2-1.1
  sudo apt install -y kubelet kubeadm kubectl
  sudo apt-mark hold kubelet kubeadm kubectl
  ```
- Verify Installation:

  ```bash
  kubeadm version
  kubectl version --client
  kubelet --version
  ```

  Expected Output (partial):

  ```
  kubeadm version: &version.Info{Major:"1", Minor:"32", GitVersion:"v1.32.2", ...}
  ```

  Run `sudo ls /etc/kubernetes/manifests/` to confirm that no static pod manifests are present yet, and `sudo systemctl status kubelet` to confirm that the kubelet service is not yet running.
Step 4: Install and Configure Containerd on All Nodes¶
Kubernetes uses containerd as the container runtime via the Container Runtime Interface (CRI).
- Install Containerd:

  ```bash
  sudo apt update
  sudo apt install -y containerd
  ```

- Configure Containerd: Ensure containerd uses `systemd` as the cgroup driver and OverlayFS for storage.

  ```bash
  sudo mkdir -p /etc/containerd
  containerd config default | sudo tee /etc/containerd/config.toml
  sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
  sudo sed -i 's/snapshotter = ".*"/snapshotter = "overlayfs"/' /etc/containerd/config.toml
  ```

- Enable and Start Containerd:

  ```bash
  sudo systemctl restart containerd
  sudo systemctl enable containerd
  sudo systemctl status containerd
  ```

  Expected Output: `Active: active (running)`.
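A quick sanity check that the sed edits actually landed in the rendered config (a sketch; the helper name is mine, and the path defaults to the file written above):

```shell
# Hypothetical check: confirm the containerd config enables the systemd
# cgroup driver. Takes an optional path so it can be tested on any file.
uses_systemd_cgroup() {
  grep -q 'SystemdCgroup = true' "${1:-/etc/containerd/config.toml}"
}

# Example on a scratch file:
f=$(mktemp)
echo '            SystemdCgroup = true' > "$f"
uses_systemd_cgroup "$f" && echo "systemd cgroup driver enabled"
rm -f "$f"
```

A mismatched cgroup driver between kubelet and containerd is a classic cause of crash-looping nodes, so this is worth checking before `kubeadm init`.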
ONE Command Solution: Head over again to infra-bootstrap to install and configure containerd on all nodes with a single command.
Step 5: Configure Kubernetes Networking on All Nodes¶
Enable kernel modules and sysctl settings for Kubernetes networking.
- Load Kernel Modules:

  ```bash
  cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
  overlay
  br_netfilter
  EOF
  sudo modprobe overlay
  sudo modprobe br_netfilter
  ```

- Configure Sysctl Parameters:

  ```bash
  cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
  net.bridge.bridge-nf-call-iptables = 1
  net.bridge.bridge-nf-call-ip6tables = 1
  net.ipv4.ip_forward = 1
  EOF
  sudo sysctl --system
  ```

- Verify Settings:

  ```bash
  sysctl net.bridge.bridge-nf-call-iptables
  sysctl net.bridge.bridge-nf-call-ip6tables
  sysctl net.ipv4.ip_forward
  ```

  Expected Output:

  ```
  net.bridge.bridge-nf-call-iptables = 1
  net.bridge.bridge-nf-call-ip6tables = 1
  net.ipv4.ip_forward = 1
  ```
📌 Explanation

- ✅ `overlay` – Needed for container storage (OverlayFS).
- ✅ `br_netfilter` – Required for Kubernetes networking, so iptables sees bridged traffic.
- ✅ `net.bridge.bridge-nf-call-iptables = 1` – Makes bridged IPv4 traffic pass through iptables rules.
- ✅ `net.bridge.bridge-nf-call-ip6tables = 1` – Same, but for IPv6.
- ✅ `net.ipv4.ip_forward = 1` – Enables packet forwarding, mandatory for Kubernetes networking.
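As a final check, you can verify that the config file actually sets all three keys before moving on. A minimal sketch (the function name is my own; it inspects a file rather than live kernel state, so it is safe to run anywhere):

```shell
# Hypothetical check: ensure a sysctl conf file sets each required key to 1.
# Note: '.' in the key names is a regex wildcard here, which is close
# enough for this sanity check.
check_k8s_sysctl_conf() {
  conf=$1
  for key in net.bridge.bridge-nf-call-iptables \
             net.bridge.bridge-nf-call-ip6tables \
             net.ipv4.ip_forward; do
    grep -Eq "^${key}[[:space:]]*=[[:space:]]*1[[:space:]]*$" "$conf" || {
      echo "missing or not 1: $key" >&2
      return 1
    }
  done
}

# Example usage against the file written in the previous step:
# check_k8s_sysctl_conf /etc/sysctl.d/k8s.conf && echo "sysctl config OK"
```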
Step 6: Initialize the Control Plane¶
Initialize the first control plane node using kubeadm init, setting up the cluster with Calico networking.
- Pre-Checks:

  ```bash
  sudo swapoff -a
  sudo systemctl start containerd kubelet
  sudo netstat -tulnp | grep 6443   # Ensure port 6443 is free
  systemctl is-active kubelet       # Shows "activating": kubelet waits for the configuration that kubeadm init writes
  kubeadm config images pull
  ```
- Run kubeadm init: Use your control plane’s private IP and Calico’s pod CIDR. Replace `10.0.138.123` with your control plane’s private IP.

  ```bash
  sudo kubeadm init \
    --control-plane-endpoint "10.0.138.123:6443" \
    --upload-certs \
    --pod-network-cidr=10.244.0.0/16 \
    --apiserver-advertise-address=10.0.138.123 \
    --node-name=k8s-master-1 \
    --cri-socket=unix:///var/run/containerd/containerd.sock
  ```
- Understand the Flags:
    - `--control-plane-endpoint`: Stable API server endpoint for HA (supports DNS or load balancer).
    - `--upload-certs`: Shares certificates for additional control planes.
    - `--pod-network-cidr`: Sets Calico’s pod IP range (`10.244.0.0/16`).
    - `--apiserver-advertise-address`: Control plane’s private IP.
    - `--node-name`: Unique node name.
    - `--cri-socket`: Specifies containerd’s CRI socket.
    - Click here for more information on `kubeadm init` flags.
- Save Join Commands: The output includes `kubeadm join` commands for control planes and workers. Save them:

  ```bash
  # Control plane join
  kubeadm join 10.0.138.123:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane --certificate-key <key>
  # Worker join
  kubeadm join 10.0.138.123:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
  ```
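If you lose the join output, the `sha256:<hash>` value can be recomputed from the cluster CA at any time with the standard openssl pipeline from the kubeadm documentation (it assumes an RSA CA key, which is kubeadm's default). The `ca_cert_hash` wrapper function is my own naming:

```shell
# Recompute the discovery token CA cert hash from a CA certificate.
# On a control plane node the certificate is /etc/kubernetes/pki/ca.crt.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# Usage on a control plane node (prints the 64-char hex digest to pass as
# --discovery-token-ca-cert-hash sha256:<hash>):
# ca_cert_hash /etc/kubernetes/pki/ca.crt
```

Pair this with `kubeadm token create --print-join-command`, which regenerates a complete join command including this hash.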
⚙️ Ever wondered what black magic unfolds behind the curtain when you run kubeadm init?
It’s not just a command — it’s a symphony of components, certificates, manifests, and networking dances happening in perfect sync.
👉 Peek behind the scenes and witness how your Kubernetes control plane is born, one daemon and one API handshake at a time.
Step 7: Configure kubectl Access¶
Enable kubectl to interact with the cluster from the control plane node.
What does this mean?¶
After initialization, your Kubernetes cluster is running, but you need to configure your kubectl command to interact with the cluster.
Why is this needed?¶
- The Kubernetes control plane stores its admin credentials in `/etc/kubernetes/admin.conf`.
- By default, only root can access it.
- You need to copy it and set the permissions properly so that your non-root user can use `kubectl` without issues. Follow this step on your control plane node, or on any other node where you want to use `kubectl`.
- Set Up kubeconfig:

  ```bash
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  ```

- Verify Access:

  ```bash
  kubectl get nodes
  ```

  Expected Output:

  ```
  NAME           STATUS   ROLES           AGE   VERSION
  k8s-master-1   Ready    control-plane   5m    v1.32.2
  ```
Step 8: Install Calico CNI¶
Deploy Calico to enable pod networking, matching the `--pod-network-cidr` set during `kubeadm init`.
- Download and Configure Calico:

  ```bash
  curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml
  ```

  Edit `calico.yaml` to set the CIDR:

  ```yaml
  - name: CALICO_IPV4POOL_CIDR
    value: "10.244.0.0/16"
  ```

- Apply Calico:

  ```bash
  kubectl apply -f calico.yaml
  ```

- Verify Calico:

  ```bash
  kubectl get pods -n kube-system -l k8s-app=calico-node
  ```

  Expected Output: All pods in `Running` state.
Click here to learn more about CNI plugins.
Step 9: Join Additional Control Planes (Optional)¶
For high availability, add more control plane nodes.
- Run kubeadm join (on `k8s-master-2`):

  ```bash
  sudo kubeadm join 10.0.138.123:6443 \
    --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane \
    --certificate-key <key> \
    --node-name=k8s-master-2 \
    --cri-socket=unix:///var/run/containerd/containerd.sock
  ```

- Verify:

  ```bash
  kubectl get nodes
  ```
Step 10: Join Worker Nodes¶
Add worker nodes to run workloads. Worker nodes don't manage the cluster; they just run workloads.
- Run kubeadm join (on `k8s-worker-1`, `k8s-worker-2`):

  ```bash
  # No --control-plane flag is needed since these are worker nodes.
  sudo kubeadm join 10.0.138.123:6443 \
    --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --node-name=k8s-worker-<1 or 2> \
    --cri-socket=unix:///var/run/containerd/containerd.sock
  ```

- Verify:

  ```bash
  kubectl get nodes
  ```

  Expected Output:

  ```
  NAME           STATUS   ROLES           AGE   VERSION
  k8s-master-1   Ready    control-plane   10m   v1.32.2
  k8s-worker-1   Ready    <none>          2m    v1.32.2
  k8s-worker-2   Ready    <none>          1m    v1.32.2
  ```
Step 11: Secure Certificates¶
The certificate key uploaded by `--upload-certs` is sensitive and expires after 2 hours. Regenerate it if needed:

```bash
sudo kubeadm init phase upload-certs --upload-certs
```

Store the resulting `--certificate-key` securely.

Step 12: Verify Cluster Health¶
Confirm the cluster is operational.
- Check Nodes:

  ```bash
  kubectl get nodes -o wide
  ```

- Check Pods:

  ```bash
  kubectl get pods -n kube-system -o wide
  ```

  Expected Output: All pods (e.g., `calico-node`, `coredns`, `kube-apiserver`) in `Running` state.

- Test Networking: Deploy a sample pod:

  ```bash
  kubectl run nginx --image=nginx --port=80
  kubectl expose pod nginx --type=NodePort
  ```

  Find the NodePort:

  ```bash
  kubectl get svc nginx
  ```

  Access: `http://<worker-ip>:<nodeport>`.
Troubleshooting¶
- Cluster Initialization Fails:
    - Fix: Check logs:

      ```bash
      sudo journalctl -u kubelet
      ```

      Ensure swap is disabled, containerd is running, and ports are open.

- Nodes Not Joining:
    - Fix: Verify token and hash. Regenerate the token if expired:

      ```bash
      kubeadm token create --print-join-command
      ```

- Calico Pods Not Running:
    - Fix: Confirm `CALICO_IPV4POOL_CIDR` matches `--pod-network-cidr`:

      ```bash
      kubectl get ippool -o yaml
      ```

      Check logs:

      ```bash
      kubectl logs -n kube-system -l k8s-app=calico-node
      ```

- kubectl Access Issues:
    - Fix: Verify kubeconfig:

      ```bash
      cat $HOME/.kube/config
      ```
Best Practices¶
- Backup Certificates: Store `/etc/kubernetes/pki/` securely.
- Use Version Control: Pin `kubeadm`, `kubelet`, and `kubectl` to the same version (e.g., `1.32.2-1.1`).
- Monitor Security Groups: Restrict ports to trusted IPs where possible.
- Automate Setup: Use tools like Ansible for multi-node deployments.
- Regular Updates: Keep Ubuntu and Kubernetes components updated.
Conclusion¶
You’ve successfully deployed a Kubernetes cluster using kubeadm, complete with the Calico CNI and optional HA control planes. This guide, tailored to a setup with `--pod-network-cidr=10.244.0.0/16` on Ubuntu 24.04, gives you a robust and scalable cluster. Explore advanced features like network policies with Calico, or deploy workloads to test your cluster’s capabilities.