⚖️ Configuring Load Balancer: Simple Guide

Alpine Linux · Load Balancer · Beginner
Published Jun 1, 2025 · 13 min read

Easy tutorial for beginners to configure load balancers in Alpine Linux. Perfect for new admins with step-by-step instructions and clear examples.

Want to distribute traffic across multiple servers? I’ll show you how to set up load balancing! 🚀 This tutorial makes load balancer configuration super easy. Even if networking seems complex, you can do this! 😊

🤔 What is a Load Balancer?

A load balancer spreads incoming requests across multiple servers. Think of it like a traffic director at a busy intersection!

Load balancers help you:

  • ⚡ Handle more traffic than one server can
  • 🛡️ Keep websites running if one server fails
  • 📊 Improve response times for users
  • 🔄 Scale applications easily

🎯 What You Need

Before we start, you need:

  • ✅ Alpine Linux system running
  • ✅ Root or sudo permissions
  • ✅ At least two web servers to balance
  • ✅ About 40 minutes to complete

📋 Step 1: Setting Up Test Environment

Create Backend Servers

Let’s create some simple web servers to balance traffic between. This is our practice setup! 🎭

What we’re doing: Setting up multiple backend web servers for load balancing.

# Install nginx for our backend servers
apk add nginx

# Heads-up: the nginx package ships a default site listening on port 80
# (under /etc/nginx/http.d/ on newer Alpine releases, /etc/nginx/conf.d/
# on older ones -- adjust paths below if needed). Disable it so HAProxy
# can bind port 80 in Step 2:
mv /etc/nginx/http.d/default.conf /etc/nginx/http.d/default.conf.disabled 2>/dev/null || true

# Create first backend server
mkdir -p /var/www/server1
echo "<h1>Backend Server 1</h1><p>You're connected to server 1!</p>" > /var/www/server1/index.html

# Create second backend server  
mkdir -p /var/www/server2
echo "<h1>Backend Server 2</h1><p>You're connected to server 2!</p>" > /var/www/server2/index.html

# Create nginx config for server 1 (port 8081)
cat > /etc/nginx/conf.d/server1.conf << 'EOF'
server {
    listen 8081;
    server_name localhost;
    root /var/www/server1;
    index index.html;
    
    location / {
        try_files $uri $uri/ =404;
    }
}
EOF

# Create nginx config for server 2 (port 8082)
cat > /etc/nginx/conf.d/server2.conf << 'EOF'
server {
    listen 8082;
    server_name localhost;
    root /var/www/server2;
    index index.html;
    
    location / {
        try_files $uri $uri/ =404;
    }
}
EOF

# Start nginx
rc-service nginx start
rc-update add nginx default

# Test backend servers
curl http://localhost:8081
curl http://localhost:8082

What this does: 📖 Creates two web servers running on different ports.

Example output:

✅ Backend Server 1 responding
✅ Backend Server 2 responding
✅ Test environment ready

What this means: You have servers to balance traffic between! ✅

💡 Setup Tips

Tip: In production, these would be separate physical servers! 💡

Note: We’re using different ports to simulate multiple servers! ⚠️
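
To make that tip concrete: in Step 2 you'll see that HAProxy lists the backends as server lines. In production those lines point at separate machines instead of local ports. A sketch with placeholder addresses (10.0.0.11 and 10.0.0.12 are made up, not part of this tutorial's setup):

# Hypothetical production pool: each backend is a separate host
backend web_servers
    balance roundrobin
    option httpchk GET /
    server server1 10.0.0.11:80 check
    server server2 10.0.0.12:80 check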

🔧 Step 2: Installing HAProxy Load Balancer

Install and Configure HAProxy

HAProxy is a popular load balancer. Let’s install and set it up! 📦

What we’re doing: Installing HAProxy and creating basic load balancer configuration.

# Install HAProxy
apk add haproxy

# Create HAProxy configuration
cat > /etc/haproxy/haproxy.cfg << 'EOF'
global
    daemon
    maxconn 4096
    # log via the local syslog socket (assumes a syslog daemon is
    # listening on /dev/log) so entries land in /var/log/messages
    log /dev/log local0

defaults
    mode http
    log global
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    option httplog
    balance roundrobin

# Frontend configuration (also serves the statistics page at /stats --
# stats directives must live inside a proxy section, not at top level)
frontend web_frontend
    bind *:80
    stats enable
    stats uri /stats
    stats refresh 30s
    stats show-desc "HAProxy Load Balancer Stats"
    default_backend web_servers

# Backend server pool
backend web_servers
    balance roundrobin
    option httpchk GET /
    server server1 127.0.0.1:8081 check
    server server2 127.0.0.1:8082 check
EOF

# Start HAProxy
rc-service haproxy start
rc-update add haproxy default

# Check HAProxy status
rc-service haproxy status

# Test load balancer
curl http://localhost/

Code explanation:

  • frontend: Receives incoming requests
  • backend: Pool of servers to send requests to
  • balance roundrobin: Distributes requests evenly (alternatives shown at the end of this step)
  • option httpchk: Health checks servers
  • check: Monitors server health

Expected Output:

✅ HAProxy installed and running
✅ Load balancer responding
✅ Backend servers being checked

What this means: Your load balancer is working! 🎉
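
Round-robin is only the default choice. If you want to experiment, here's a sketch of two other balance algorithms HAProxy supports; swap one in for the balance line in the backend:

# Alternative balancing algorithms (use one per backend)
backend web_servers
    balance leastconn    # pick the server with the fewest active connections
    # balance source     # hash the client IP so a client keeps hitting the same server
    option httpchk GET /
    server server1 127.0.0.1:8081 check
    server server2 127.0.0.1:8082 check

leastconn helps when request durations vary a lot; source gives rough client stickiness without cookies.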

🔄 Step 3: Testing Load Balancing

Verify Load Distribution

Now let’s test that traffic is actually being balanced! This is exciting! 🎯

What we’re doing: Testing load balancing behavior and verifying traffic distribution.

# Test multiple requests to see load balancing
# (busybox ash has no {1..10} brace expansion, so use seq)
echo "Testing load balancing..."
for i in $(seq 1 10); do
    echo "Request $i:"
    curl -s http://localhost/ | grep -o "Backend Server [12]"
    sleep 1
done

# Check which servers are active
curl -s http://localhost/stats | grep "server[12]"

# Test with concurrent requests
echo "Testing concurrent load..."
for i in $(seq 1 5); do
    curl -s http://localhost/ | grep -o "Backend Server [12]" &
done
wait

# Monitor HAProxy logs
tail -f /var/log/messages | grep haproxy &
curl http://localhost/
curl http://localhost/
curl http://localhost/
killall tail

Check Load Balancer Statistics

# View HAProxy statistics page
curl http://localhost/stats

# Check server health status
echo "Server health status:"
curl -s http://localhost/stats | grep -A 5 "web_servers"

# Show active connections
netstat -an | grep :80

# Check backend server processes
ps aux | grep nginx

You should see:

✅ Requests alternating between servers
✅ Both servers healthy and active
✅ Statistics page showing traffic
✅ Even distribution of requests

Amazing work! Your load balancer is distributing traffic! 🌟

📊 Load Balancer Configuration Summary

Component        Purpose               Configuration
⚖️ Frontend       Receive requests      bind *:80
🖥️ Backend        Server pool           server1, server2
🔄 Algorithm      Traffic distribution  roundrobin
🏥 Health Check   Monitor servers       option httpchk

🎮 Practice Time!

Let’s configure advanced load balancing features:

Example 1: Weighted Load Balancing 🟢

What we’re doing: Giving some servers more traffic than others based on capacity.

# Update HAProxy config with weights
cat > /etc/haproxy/haproxy.cfg << 'EOF'
global
    daemon
    maxconn 4096
    # log via the local syslog socket (assumes a syslog daemon is
    # listening on /dev/log) so entries land in /var/log/messages
    log /dev/log local0

defaults
    mode http
    log global
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    option httplog

# Frontend configuration (also serves the statistics page at /stats)
frontend web_frontend
    bind *:80
    stats enable
    stats uri /stats
    stats refresh 30s
    default_backend web_servers

# Backend with weighted servers
backend web_servers
    balance roundrobin
    option httpchk GET /
    # Server 1 gets 70% of traffic (weight 7)
    server server1 127.0.0.1:8081 check weight 7
    # Server 2 gets 30% of traffic (weight 3)  
    server server2 127.0.0.1:8082 check weight 3
EOF

# Reload HAProxy configuration
rc-service haproxy reload

# Test weighted distribution
echo "Testing weighted load balancing..."
for i in $(seq 1 20); do
    curl -s http://localhost/ | grep -o "Backend Server [12]"
done | sort | uniq -c

What this does: Sends more traffic to server 1 than server 2! ⚖️

Example 2: Health Check and Failover 🟡

What we’re doing: Setting up automatic failover when servers become unhealthy.

# Both test backends run inside the same nginx instance, so stopping
# nginx would take down both of them. Simulate a single-server failure
# by disabling server 2's config and reloading nginx instead
mv /etc/nginx/conf.d/server2.conf /etc/nginx/conf.d/server2.conf.disabled
rc-service nginx reload

# Wait a moment for health checks to notice
sleep 10

# Test that load balancer still works
echo "Testing failover..."
for i in $(seq 1 5); do
    curl -s http://localhost/ | grep -o "Backend Server [12]"
done

# Check which servers are marked as down
curl -s http://localhost/stats | grep -E "(server1|server2)" | grep -o "UP\|DOWN"

# Bring the failed server back
mv /etc/nginx/conf.d/server2.conf.disabled /etc/nginx/conf.d/server2.conf
rc-service nginx reload

# Wait for health checks to mark it UP again
sleep 15

# Verify both servers are back in rotation
echo "Testing recovery..."
for i in $(seq 1 10); do
    curl -s http://localhost/ | grep -o "Backend Server [12]"
done | sort | uniq -c

# Check server status
curl -s http://localhost/stats | grep -E "(server1|server2)"

What this does: Automatically removes failed servers and adds them back! 🏥
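
By default it takes a few failed checks before HAProxy marks a server DOWN. If you want to control how fast failover kicks in, each server line accepts check timing options; a minimal sketch (the values are illustrative, not tuned):

backend web_servers
    option httpchk GET /
    # inter = gap between checks, fall = failures before DOWN, rise = passes before UP
    server server1 127.0.0.1:8081 check inter 2s fall 3 rise 2
    server server2 127.0.0.1:8082 check inter 2s fall 3 rise 2

With these numbers a dead server is pulled after roughly 6 seconds (3 checks, 2 seconds apart) and needs 2 clean checks to rejoin the rotation.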

🚨 Fix Common Problems

Problem 1: Load balancer not distributing traffic ❌

What happened: All requests go to the same server. How to fix it: Check backend server configuration and health!

# Check HAProxy configuration syntax
haproxy -f /etc/haproxy/haproxy.cfg -c

# Verify backend servers are running
curl http://localhost:8081/
curl http://localhost:8082/

# Check HAProxy logs for errors
tail -n 20 /var/log/messages | grep haproxy

# Restart HAProxy with debug info
rc-service haproxy stop
haproxy -f /etc/haproxy/haproxy.cfg -d &

# Test again
curl http://localhost/

Problem 2: Backend servers marked as down ❌

What happened: Health checks fail and servers show as unavailable. How to fix it: Check health check configuration and server responses!

# Test health check URL manually
curl -I http://localhost:8081/
curl -I http://localhost:8082/

# Check if health check path exists
curl http://localhost:8081/health 2>/dev/null || echo "Health check path missing"

# Update the health check in the EXISTING backend section.
# Don't append a second "backend web_servers" -- duplicate proxy
# names are a configuration error. Edit /etc/haproxy/haproxy.cfg
# so the backend reads:
#
#   backend web_servers
#       option httpchk GET /index.html
#       http-check expect status 200
#       server server1 127.0.0.1:8081 check
#       server server2 127.0.0.1:8082 check

# Reload configuration
rc-service haproxy reload

# Monitor server status
watch -n 2 'curl -s http://localhost/stats | grep -E "(server1|server2)"'

Don’t worry! Load balancer configuration takes some tweaking to get right! 💪

💡 Advanced Load Balancing Tips

  1. Monitor server health 📅 - Set up proper health checks
  2. Use session persistence 🌱 - Keep users on the same server when needed (see the sketch after this list)
  3. Plan for failures 🤝 - Always have backup servers ready
  4. Monitor performance 💪 - Watch response times and error rates
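
For tip 2, cookie-based persistence is the usual HAProxy approach. A minimal sketch, reusing the two test backends from this tutorial:

backend web_servers
    balance roundrobin
    # insert a SRVID cookie so each client sticks to one backend
    cookie SRVID insert indirect nocache
    option httpchk GET /
    server server1 127.0.0.1:8081 check cookie s1
    server server2 127.0.0.1:8082 check cookie s2

indirect strips the cookie before the request reaches the backend, and nocache stops shared caches from storing the Set-Cookie response.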

✅ Verify Load Balancer Works

Let’s make sure everything is working properly:

# Complete load balancer check
echo "=== Load Balancer Status ==="

# Check HAProxy process
if pgrep haproxy >/dev/null; then
    echo "✅ HAProxy running"
    echo "Process ID: $(pgrep haproxy)"
else
    echo "❌ HAProxy not running"
fi

# Check frontend listening (match ":80 " with a trailing space so
# the backend ports 8081/8082 don't count)
if netstat -an | grep -q ":80 .*LISTEN"; then
    echo "✅ Frontend listening on port 80"
else
    echo "❌ Frontend not listening"
fi

# Check backend servers
echo "=== Backend Server Status ==="
curl -s http://localhost:8081/ >/dev/null && echo "✅ Server 1 responding" || echo "❌ Server 1 down"
curl -s http://localhost:8082/ >/dev/null && echo "✅ Server 2 responding" || echo "❌ Server 2 down"

# Test load balancing
echo "=== Load Balancing Test ==="
RESULTS=$(for i in $(seq 1 10); do curl -s http://localhost/ | grep -o "Backend Server [12]"; done | sort | uniq -c)
echo "$RESULTS"

# Check if both servers received requests
if echo "$RESULTS" | grep -q "Server 1" && echo "$RESULTS" | grep -q "Server 2"; then
    echo "✅ Load balancing working"
else
    echo "❌ Load balancing not working properly"
fi

# Show statistics summary
echo "=== Statistics Summary ==="
curl -s http://localhost/stats | grep -A 2 "web_servers" | tail -n 2

Good load balancer setup signs:

✅ HAProxy process running
✅ Frontend accepting connections
✅ Backend servers responding
✅ Traffic distributed evenly
✅ Health checks working

🏆 What You Learned

Great job! Now you can:

  • ✅ Set up backend web servers for testing
  • ✅ Install and configure HAProxy load balancer
  • ✅ Configure round-robin traffic distribution
  • ✅ Set up weighted load balancing
  • ✅ Implement health checks and failover
  • ✅ Monitor load balancer statistics and performance

🎯 What’s Next?

Now you can try:

  • 📚 Setting up SSL termination on load balancers (starter sketch below)
  • 🛠️ Implementing sticky sessions for stateful applications
  • 🤝 Configuring geographic load balancing
  • 🌟 Building high-availability load balancer clusters!
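
If you want a head start on SSL termination, it's mostly a change to the bind line. A minimal sketch, assuming you already have a combined certificate-plus-key PEM file (the path below is a placeholder):

# Hypothetical TLS frontend -- /etc/haproxy/certs/example.pem is a placeholder
frontend web_frontend
    bind *:443 ssl crt /etc/haproxy/certs/example.pem
    bind *:80
    # send plain-HTTP clients over to HTTPS
    http-request redirect scheme https unless { ssl_fc }
    default_backend web_servers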

Remember: Every infrastructure engineer started with simple load balancing. You’re building real scalability skills! 🎉

Keep practicing and you’ll become a load balancing expert! 💫