☸️ AlmaLinux Kubernetes Cluster: Complete K8s Deployment Guide
Welcome to the exciting world of Kubernetes on AlmaLinux! 🚀 Whether you’re orchestrating microservices, scaling applications globally, or building cloud-native infrastructure, this comprehensive guide will transform you into a Kubernetes master who can deploy and manage production-grade clusters! 🎯
Kubernetes is the operating system of the cloud, and with AlmaLinux as your foundation, you’ll be running containerized applications at massive scale with automatic healing, scaling, and zero-downtime deployments! Let’s conquer container orchestration! 💪
🤔 Why is Kubernetes Important?
Kubernetes is like having an army of robots managing your applications 24/7! 🤖 Here’s why mastering K8s on AlmaLinux is absolutely essential:
- 🚀 Automatic Scaling - Scale from 1 to 1000s of containers instantly
- 🔄 Self-Healing - Automatic container restart and rescheduling
- 🌍 Service Discovery - Built-in DNS and load balancing
- 📦 Rolling Updates - Zero-downtime deployments
- 💾 Storage Orchestration - Automatic storage provisioning
- 🔐 Secret Management - Secure configuration and credentials
- 🎯 Resource Optimization - Efficient hardware utilization
- ☁️ Cloud Agnostic - Run anywhere from laptop to cloud
🎯 What You Need
Let’s prepare your environment for Kubernetes excellence! ✅
Hardware Requirements (Minimum):
- ✅ Master Node: 2 CPUs, 2GB RAM, 20GB disk
- ✅ Worker Nodes: 2 CPUs, 2GB RAM, 20GB disk each
- ✅ At least 3 nodes total (1 master, 2 workers)
- ✅ AlmaLinux 8.x or 9.x on all nodes
- ✅ Static IP addresses for all nodes
Network Requirements:
- ✅ Full network connectivity between all nodes
- ✅ Unique hostname, MAC address, and product_uuid
- ✅ Swap disabled on all nodes
- ✅ Required ports open: 6443, 2379-2380, 10250, 10257, 10259 on the master; 10250 and 30000-32767 on workers (see the quick check below)
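Before going further, it's worth verifying these requirements on every node. A quick sanity check might look like this (replace the IP with one of your actual node addresses):
# Confirm unique machine identity on each node
sudo cat /sys/class/dmi/id/product_uuid
ip link show   # Compare MAC addresses across nodes
# Confirm swap is off (the Swap line should read 0)
free -h
# Test connectivity between nodes
ping -c 2 192.168.1.11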
📝 Preparing AlmaLinux for Kubernetes
Let’s prepare all nodes for Kubernetes installation! 🔧
System Preparation (All Nodes)
# Set hostname (unique for each node)
sudo hostnamectl set-hostname master.k8s.local # Or worker1.k8s.local, etc.
# Update hosts file on all nodes
sudo tee -a /etc/hosts << 'EOF'
192.168.1.10 master.k8s.local master
192.168.1.11 worker1.k8s.local worker1
192.168.1.12 worker2.k8s.local worker2
EOF
# Disable swap
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# Disable SELinux (or set to permissive)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Configure kernel modules
sudo modprobe overlay
sudo modprobe br_netfilter
cat << EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# Configure sysctl parameters
cat << EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
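You can confirm the modules and sysctl settings took effect before moving on:
# Verify the kernel modules are loaded
lsmod | grep -e overlay -e br_netfilter
# Verify the sysctl values (all three should print 1)
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward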
# Configure firewall (Master node)
sudo firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
sudo firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd
sudo firewall-cmd --permanent --add-port=10250/tcp       # kubelet
sudo firewall-cmd --permanent --add-port=10259/tcp       # kube-scheduler
sudo firewall-cmd --permanent --add-port=10257/tcp       # kube-controller-manager
sudo firewall-cmd --permanent --add-masquerade
sudo firewall-cmd --reload
# Configure firewall (Worker nodes)
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=30000-32767/tcp
sudo firewall-cmd --permanent --add-masquerade
sudo firewall-cmd --reload
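After reloading on each node, double-check that the expected ports are actually open:
# List the ports opened in the active zone
sudo firewall-cmd --list-ports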
Installing Container Runtime (containerd)
# Install containerd
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y containerd.io
# Configure containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
# Enable SystemdCgroup
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
# Start containerd
sudo systemctl enable --now containerd
# Verify containerd
sudo systemctl status containerd
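You can also confirm the runtime answers requests directly; client and server versions should both print:
# Query containerd over its socket
sudo ctr version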
🔧 Installing Kubernetes Components
Time to install Kubernetes on all nodes! 🌟
Installing kubeadm, kubelet, and kubectl
# Add Kubernetes repository (the legacy packages.cloud.google.com repos were
# shut down in 2024; use the community-owned pkgs.k8s.io repos instead, and
# adjust v1.29 to the Kubernetes minor version you want)
cat << EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
# Install Kubernetes components
sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Enable kubelet
sudo systemctl enable --now kubelet
# Check versions
kubeadm version
kubectl version --client
kubelet --version
Initializing the Master Node
# On Master node only
# Initialize cluster with pod network CIDR
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.1.10
# Save the join command output! You'll need it for workers
# Configure kubectl for regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Verify cluster status
kubectl get nodes
kubectl get pods --all-namespaces
# Install Calico network plugin
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/tigera-operator.yaml
# Calico's defaults use 192.168.0.0/16; set the CIDR to match --pod-network-cidr before applying
curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/custom-resources.yaml
sed -i 's|192.168.0.0/16|10.244.0.0/16|' custom-resources.yaml
kubectl create -f custom-resources.yaml
# Wait for all pods to be running
watch kubectl get pods -n calico-system
# Verify node is Ready
kubectl get nodes
🌟 Joining Worker Nodes
Let’s add worker nodes to the cluster! 🔗
Joining Workers to Cluster
# On each worker node, run the join command from master init output
# Example (use your actual command):
sudo kubeadm join 192.168.1.10:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
# If you lost the join command, generate a new one on master:
sudo kubeadm token create --print-join-command
# On master, verify all nodes joined
kubectl get nodes
# Label worker nodes
kubectl label node worker1.k8s.local node-role.kubernetes.io/worker=worker
kubectl label node worker2.k8s.local node-role.kubernetes.io/worker=worker
Installing Kubernetes Dashboard
# Deploy dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
# Create admin user for dashboard
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
# Get login token
kubectl -n kubernetes-dashboard create token admin-user
# Access dashboard via kubectl proxy (run where kubectl is configured;
# from a remote workstation, tunnel port 8001 over SSH first)
kubectl proxy
# Browse to: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
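If you would rather reach the dashboard without a proxy or tunnel, one option is exposing it as a NodePort service (a sketch; this publishes the dashboard on every node's IP, so restrict access accordingly):
# Switch the dashboard service to NodePort
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
# Find the assigned port, then browse to https://<node-ip>:<node-port>
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard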
✅ Deploying Applications
Let’s deploy your first applications on Kubernetes! 🚀
Deploying a Sample Application
# Create nginx deployment
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  type: LoadBalancer   # stays Pending on bare metal unless you run MetalLB; NodePort also works
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
EOF
# Check deployment status
kubectl get deployments
kubectl get pods
kubectl get services
# Scale deployment
kubectl scale deployment nginx-deployment --replicas=5
# Check pod distribution across nodes
kubectl get pods -o wide
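Rolling updates work the same way. For example, to move the deployment to a pinned image and watch the rollout (nginx:1.25 is just an illustrative tag):
# Update the image; pods are replaced a few at a time with no downtime
kubectl set image deployment/nginx-deployment nginx=nginx:1.25
kubectl rollout status deployment/nginx-deployment
# Roll back if something goes wrong
kubectl rollout undo deployment/nginx-deployment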
Setting Up Persistent Storage
# Create StorageClass for local volumes (no-provisioner means no dynamic
# provisioning; PersistentVolumes are created manually below)
cat << EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF
# Create PersistentVolume
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
EOF
# Create PersistentVolumeClaim
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
EOF
# Deploy MySQL with persistent storage
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.7
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password123   # demo only; see the Secret-based example below
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-storage
        persistentVolumeClaim:
          claimName: pv-claim
EOF
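Hardcoding MYSQL_ROOT_PASSWORD in a manifest is fine for a lab, but in practice you would keep it in a Secret. A minimal sketch (mysql-secret is a name chosen here for illustration):
# Create the secret
kubectl create secret generic mysql-secret --from-literal=root-password='password123'
# Then reference it in the container spec instead of a literal value:
#   env:
#   - name: MYSQL_ROOT_PASSWORD
#     valueFrom:
#       secretKeyRef:
#         name: mysql-secret
#         key: root-password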
🎮 Quick Examples
Example 1: Deploying Microservices Application
# Deploy a complete microservices stack
cat << 'EOF' | kubectl apply -f -
# Frontend Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: nginx:alpine
        ports:
        - containerPort: 80
        env:
        - name: BACKEND_URL
          value: "http://backend-service:8080"
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
---
# Backend Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: httpd:alpine
        ports:
        - containerPort: 80   # httpd listens on 80; the service maps 8080 -> 80
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend
  ports:
  - port: 8080
    targetPort: 80
EOF
# Check services
kubectl get all
kubectl get endpoints
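To confirm service discovery actually works, you can call the backend from a frontend pod by its DNS name (nginx:alpine ships BusyBox wget, so this should work as-is):
# Call the backend through its ClusterIP service from inside the cluster
kubectl exec deploy/frontend -- wget -qO- http://backend-service:8080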
Example 2: Setting Up Ingress Controller
# Install NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/baremetal/deploy.yaml
# Create Ingress rules
cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend-service
            port:
              number: 8080
EOF
# Get Ingress status
kubectl get ingress
kubectl describe ingress app-ingress
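On bare metal this controller is exposed as a NodePort service. To test the rules without DNS, look up the HTTP port and send a request with a faked Host header (replace 192.168.1.11 with any node IP and <node-port> with the port you find):
# Find the HTTP NodePort of the ingress controller
kubectl get svc -n ingress-nginx ingress-nginx-controller
# Exercise the routing rule via the Host header
curl -H "Host: app.example.com" http://192.168.1.11:<node-port>/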
Example 3: Monitoring with Prometheus and Grafana
# Add Prometheus Helm repository (requires Helm; install it first, e.g. via the official get-helm-3 script)
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
# Install Prometheus stack
helm install prometheus prometheus-community/kube-prometheus-stack \
--namespace monitoring --create-namespace
# Port-forward to access Grafana
kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80
# Default credentials: admin/prom-operator
# Deploy application with metrics
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: metrics-app
  template:
    metadata:
      labels:
        app: metrics-app
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9100"   # node-exporter serves metrics on 9100
        prometheus.io/path: "/metrics"
    spec:
      containers:
      - name: app
        image: prom/node-exporter
        ports:
        - containerPort: 9100
EOF
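Note that kube-prometheus-stack discovers scrape targets through ServiceMonitor resources rather than the classic prometheus.io/* annotations. To actually scrape this app you would also add a Service plus a ServiceMonitor along these lines (a sketch; the release label must match your Helm release name, prometheus above):
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: metrics-app
  labels:
    app: metrics-app
spec:
  selector:
    app: metrics-app
  ports:
  - name: metrics
    port: 9100
    targetPort: 9100
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: metrics-app
  namespace: monitoring
  labels:
    release: prometheus   # must match the Helm release for discovery
spec:
  namespaceSelector:
    matchNames:
    - default
  selector:
    matchLabels:
      app: metrics-app
  endpoints:
  - port: metrics
EOF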
🚨 Fix Common Kubernetes Problems
Let’s solve the most frequent K8s issues! 🛠️
Problem 1: Pods Stuck in Pending State
Symptoms: Pods won’t start and remain stuck in Pending.
Solution:
# Check pod events
kubectl describe pod pod-name
# Common causes and fixes:
# 1. No schedulable nodes (or the control-plane taint blocks scheduling)
kubectl get nodes
kubectl taint nodes --all node-role.kubernetes.io/control-plane-   # use master- on clusters older than v1.24
# 2. Insufficient resources
kubectl describe nodes | grep -A 5 "Allocated resources"
# 3. PVC not bound
kubectl get pvc
kubectl describe pvc pvc-name
Problem 2: Node Not Ready
Symptoms: Node shows NotReady status.
Solution:
# Check node status
kubectl describe node node-name
# Check kubelet logs
sudo journalctl -u kubelet -f
# Common fixes:
# Restart kubelet
sudo systemctl restart kubelet
# Check container runtime
sudo systemctl status containerd
sudo systemctl restart containerd
# Check network plugin
kubectl get pods -n kube-system | grep calico
Problem 3: Service Not Accessible
Symptoms: Can’t reach a service from outside the cluster.
Solution:
# Check service and endpoints
kubectl get svc
kubectl get endpoints
# Test service internally
kubectl run test --image=busybox -it --rm -- sh
# Inside pod:
wget -O- service-name:port
# Check iptables rules
sudo iptables -t nat -L -n | grep service-port
# Use NodePort for external access
kubectl patch svc service-name -p '{"spec":{"type":"NodePort"}}'
Problem 4: Image Pull Errors
Symptoms: Pods report ImagePullBackOff errors.
Solution:
# Check pod events
kubectl describe pod pod-name
# Create image pull secret
kubectl create secret docker-registry regcred \
--docker-server=docker.io \
--docker-username=username \
--docker-password=password \
--docker-email=email@example.com
# Use secret in deployment
kubectl patch deployment deployment-name -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"regcred"}]}}}}'
📋 Essential Kubernetes Commands
Master these K8s commands for daily operations! ⚡
| Command | Purpose |
|---|---|
| kubectl get nodes | List all nodes |
| kubectl get pods -A | List pods in all namespaces |
| kubectl describe pod <pod> | Show pod details and events |
| kubectl logs <pod> | View pod logs |
| kubectl exec -it <pod> -- sh | Open a shell in a pod |
| kubectl apply -f file.yaml | Deploy resources |
| kubectl scale deployment | Scale replicas |
| kubectl rollout status | Check rollout progress |
| kubectl port-forward | Access services locally |
| kubectl top nodes | Show resource usage |
💡 Kubernetes Best Practices
Become a K8s expert with these pro tips! 🎯
- 📝 Use Namespaces - Organize resources logically
- 🏷️ Label Everything - Use labels for organization and selection
- 📊 Set Resource Limits - Prevent resource exhaustion (see the sketch after this list)
- 🔐 RBAC Security - Implement role-based access control
- 💾 Persistent Storage - Use PVCs for stateful apps
- 🔄 Health Checks - Configure liveness and readiness probes (also shown below)
- 📈 Monitor Everything - Deploy metrics and logging
- 🔧 GitOps Workflow - Store configs in version control
- 🚀 Rolling Updates - Use deployment strategies
- 💡 Documentation - Document your cluster architecture
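As a concrete example of the resource-limit and health-check practices above, a minimal deployment might look like this (web-hardened is a hypothetical name and the thresholds are placeholders to tune per app):
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-hardened
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-hardened
  template:
    metadata:
      labels:
        app: web-hardened
    spec:
      containers:
      - name: web
        image: nginx:alpine
        ports:
        - containerPort: 80
        resources:
          requests:          # what the scheduler reserves
            cpu: 100m
            memory: 128Mi
          limits:            # hard caps enforced at runtime
            cpu: 500m
            memory: 256Mi
        livenessProbe:       # restart the container if this fails
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 10
        readinessProbe:      # withhold traffic until this passes
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
EOF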
🏆 What You’ve Accomplished
Congratulations on mastering Kubernetes on AlmaLinux! 🎉 You’ve achieved:
- ✅ Complete K8s cluster deployed from scratch
- ✅ Master and worker nodes configured properly
- ✅ Container networking with Calico installed
- ✅ Application deployments successfully running
- ✅ Persistent storage configured for stateful apps
- ✅ Ingress controller for external access
- ✅ Dashboard access for GUI management
- ✅ Monitoring stack with Prometheus/Grafana
- ✅ Troubleshooting skills for common issues
- ✅ Best practices implemented throughout
🎯 Why These Skills Matter
Your Kubernetes expertise enables cloud-native transformation! 🌟 With these skills, you can:
Immediate Benefits:
- 🚀 Deploy applications in seconds, not hours
- 🔄 Achieve zero-downtime deployments
- 📈 Scale from 1 to 1000s of containers instantly
- 💰 Substantially improve hardware utilization
Long-term Value:
- ☁️ Build cloud-native architectures
- 🌍 Deploy globally distributed applications
- 💼 Become a sought-after DevOps engineer
- 🏆 Lead digital transformation initiatives
You’re now equipped to orchestrate containerized applications at any scale, from startup MVPs to enterprise platforms serving millions! Kubernetes is your superpower! 🌟
Keep orchestrating, keep scaling! 🙌