🌐 Implementing API Gateway on Alpine Linux: Simple Guide
Setting up an API Gateway on Alpine Linux helps manage and route API traffic efficiently! 💻 This guide shows you how to build a powerful API gateway for microservices. Let’s create an amazing API management system! 😊
🤔 What is an API Gateway?
An API Gateway is a single entry point that manages, routes, and secures API requests to multiple backend services.
An API Gateway is like:
- 📝 A smart traffic controller - directs requests to the right services
- 🔧 A security guard for APIs - protects and authenticates requests
- 💡 A load balancer for services - distributes traffic efficiently
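To make this concrete, here is a minimal sketch of the request flow: one public gateway host receives every call and forwards each path to a different internal service. The hostname and ports below are placeholders matching the examples used later in this guide.
# All clients talk to a single public entry point:
curl https://api.example.com/api/v1/users      # → routed to the user service (e.g. 127.0.0.1:3001)
curl https://api.example.com/api/v1/products   # → routed to the product service (e.g. 127.0.0.1:4001)
curl https://api.example.com/api/v1/auth/login # → routed to the auth service (e.g. 127.0.0.1:5001)
# The backend services are never exposed to the internet directly.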
🎯 What You Need
Before we start, you need:
- ✅ Alpine Linux running on your computer
- ✅ Root access or sudo permissions
- ✅ Basic understanding of APIs and web services
- ✅ Knowledge of HTTP and networking concepts
📋 Step 1: Install API Gateway Components
Install NGINX and Supporting Tools
Let’s install the components for our API Gateway! 😊
What we’re doing: Installing NGINX as a reverse proxy, plus the tools (Docker, PostgreSQL, CLI utilities) we will use for Kong later in this guide.
# Update package list
apk update
# Install NGINX
apk add nginx nginx-mod-http-lua
# Install additional tools used by the gateway and the helper scripts
apk add curl wget jq openssl bc bash nodejs
# Install PostgreSQL for Kong (Kong's preferred database)
apk add postgresql postgresql-client postgresql-contrib
# Install Docker for Kong installation
apk add docker docker-compose
# Start services
rc-service nginx start
rc-service docker start
rc-service postgresql start
# Enable services on boot
rc-update add nginx default
rc-update add docker default
rc-update add postgresql default
# Verify installations
echo "📋 API Gateway Components Check:"
echo " NGINX: $(nginx -v 2>&1)"
echo " Docker: $(docker --version)"
echo " PostgreSQL: $(psql --version)"
echo " curl: $(curl --version | head -1)"
What this does: 📖 Installs essential components for API gateway implementation.
Example output:
✅ (1/8) Installing nginx (1.24.0-r6)
✅ (2/8) Installing nginx-mod-http-lua (1.24.0-r6)
✅ (3/8) Installing curl (8.2.1-r0)
✅ (4/8) Installing postgresql (15.4-r0)
✅ (5/8) Installing docker (24.0.5-r1)
📋 API Gateway Components Check:
NGINX: nginx version: nginx/1.24.0
Docker: Docker version 24.0.5, build ced0996
PostgreSQL: psql (PostgreSQL) 15.4
What this means: Your API Gateway environment is ready! ✅
💡 Important Tips
Tip: API Gateways handle high traffic - plan for scalability! 💡
Warning: Secure your gateway with proper authentication! ⚠️
🛠️ Step 2: Configure Basic NGINX API Gateway
Set Up NGINX as Reverse Proxy
Now let’s configure NGINX as a basic API gateway! 😊
What we’re doing: Setting up NGINX to route API requests to different backend services.
# Create API gateway configuration directory
mkdir -p /etc/nginx/api-gateway
cd /etc/nginx/api-gateway
# Create main API gateway configuration
cat > /etc/nginx/nginx.conf << 'EOF'
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules installed by apk (e.g. nginx-mod-http-lua)
include /etc/nginx/modules/*.conf;
events {
worker_connections 1024;
use epoll;
multi_accept on;
}
http {
# Basic settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Pick up the extra gateway configs created in later steps
include /etc/nginx/conf.d/*.conf;
# Search path for the custom Lua auth/middleware modules used in later steps
lua_package_path "/etc/nginx/lua/?.lua;;";
# Logging format for API requests
log_format api_gateway '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'rt=$request_time uct="$upstream_connect_time" '
'uht="$upstream_header_time" urt="$upstream_response_time"';
# Rate limiting
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_status 429;
# Upstream backend services
upstream user_service {
server 127.0.0.1:3001;
server 127.0.0.1:3002 backup;
}
upstream product_service {
server 127.0.0.1:4001;
server 127.0.0.1:4002 backup;
}
upstream auth_service {
server 127.0.0.1:5001;
server 127.0.0.1:5002 backup;
}
# API Gateway server
server {
listen 80;
listen 443 ssl http2;
server_name api.example.com localhost;
# SSL configuration (optional)
ssl_certificate /etc/nginx/ssl/api.crt;
ssl_certificate_key /etc/nginx/ssl/api.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384;
# Logging
access_log /var/log/nginx/api_access.log api_gateway;
error_log /var/log/nginx/api_error.log;
# API Gateway health check
location /health {
access_log off;
return 200 "API Gateway is healthy\n";
add_header Content-Type text/plain;
}
# CORS headers for all API requests
add_header 'Access-Control-Allow-Origin' '*' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization' always;
# Handle CORS preflight requests for all routes at the server level
# (the 'always' CORS headers above are attached to this 204 response)
if ($request_method = 'OPTIONS') {
return 204;
}
# User service routes
location /api/v1/users {
limit_req zone=api_limit burst=20 nodelay;
# Strip the /api/v1/users prefix so the backend receives /users
proxy_pass http://user_service/users;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Timeout settings
proxy_connect_timeout 30s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
# Health check
proxy_next_upstream error timeout http_500 http_502 http_503;
}
# Product service routes
location /api/v1/products {
limit_req zone=api_limit burst=20 nodelay;
# Strip the /api/v1/products prefix so the backend receives /products
proxy_pass http://product_service/products;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 30s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
proxy_next_upstream error timeout http_500 http_502 http_503;
}
# Authentication service routes
location /api/v1/auth {
limit_req zone=api_limit burst=10 nodelay;
proxy_pass http://auth_service;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 10s;
proxy_send_timeout 10s;
proxy_read_timeout 10s;
}
# API documentation
location /api/docs {
alias /var/www/api-docs;
index index.html;
try_files $uri $uri/ =404;
}
# Default API response for unknown endpoints
location /api {
default_type application/json;
return 404 '{"error": "API endpoint not found", "code": 404}';
}
# Redirect root to API docs
location = / {
return 301 /api/docs;
}
}
}
EOF
# Create SSL directory and self-signed certificate
mkdir -p /etc/nginx/ssl
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /etc/nginx/ssl/api.key \
-out /etc/nginx/ssl/api.crt \
-subj "/C=US/ST=State/L=City/O=Organization/CN=api.example.com"
# Test NGINX configuration
nginx -t
# Reload NGINX
nginx -s reload
echo "✅ NGINX API Gateway configured successfully!"
Code explanation:
- upstream: Defines backend service groups with load balancing
- proxy_pass: Routes requests to backend services (with the /api/v1 prefix rewritten to each service's own path)
- limit_req_zone: Implements rate limiting
- add_header: Sets CORS headers for cross-origin requests
Expected Output:
✅ NGINX API Gateway configured successfully!
✅ nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
✅ nginx: configuration file /etc/nginx/nginx.conf test is successful
✅ SSL certificate created
What this means: Great job! Your basic API Gateway is running! 🎉
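Before moving on, you can sanity-check the rate limiting described above. With rate=10r/s and burst=20 nodelay, a short burst passes but a sustained flood is rejected with the 429 status configured earlier. A rough check from the shell (if the mock backends from the next section are not running yet, the accepted requests will show 502 instead of 200):
# Fire 40 quick requests at a rate-limited route and count the status codes
for i in $(seq 1 40); do
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/api/v1/products
done | sort | uniq -c
# Roughly the first ~30 requests (rate + burst) pass through; the rest return 429.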
🎮 Let’s Test API Gateway!
Time for hands-on practice! This is the fun part! 🎯
What we’re doing: Creating mock backend services and testing API routing.
# Create mock backend services for testing
mkdir -p ~/api-services
cd ~/api-services
# Create mock user service
cat > user_service.js << 'EOF'
const http = require('http');
const url = require('url');
const users = [
{ id: 1, name: 'John Doe', email: '[email protected]' },
{ id: 2, name: 'Jane Smith', email: '[email protected]' },
{ id: 3, name: 'Bob Johnson', email: '[email protected]' }
];
const server = http.createServer((req, res) => {
const parsedUrl = url.parse(req.url, true);
// Set CORS headers
res.setHeader('Access-Control-Allow-Origin', '*');
res.setHeader('Content-Type', 'application/json');
if (parsedUrl.pathname === '/users' && req.method === 'GET') {
res.writeHead(200);
res.end(JSON.stringify({
service: 'user_service',
data: users,
timestamp: new Date().toISOString()
}));
} else if (parsedUrl.pathname.startsWith('/users/') && req.method === 'GET') {
const userId = parseInt(parsedUrl.pathname.split('/')[2]);
const user = users.find(u => u.id === userId);
if (user) {
res.writeHead(200);
res.end(JSON.stringify({
service: 'user_service',
data: user,
timestamp: new Date().toISOString()
}));
} else {
res.writeHead(404);
res.end(JSON.stringify({ error: 'User not found' }));
}
} else {
res.writeHead(404);
res.end(JSON.stringify({ error: 'Endpoint not found' }));
}
});
const PORT = process.env.PORT || 3001;
server.listen(PORT, () => {
console.log(`User service running on port ${PORT}`);
});
EOF
# Create mock product service
cat > product_service.js << 'EOF'
const http = require('http');
const url = require('url');
const products = [
{ id: 1, name: 'Laptop', price: 999.99, category: 'Electronics' },
{ id: 2, name: 'Mouse', price: 29.99, category: 'Electronics' },
{ id: 3, name: 'Keyboard', price: 79.99, category: 'Electronics' }
];
const server = http.createServer((req, res) => {
const parsedUrl = url.parse(req.url, true);
res.setHeader('Access-Control-Allow-Origin', '*');
res.setHeader('Content-Type', 'application/json');
if (parsedUrl.pathname === '/products' && req.method === 'GET') {
res.writeHead(200);
res.end(JSON.stringify({
service: 'product_service',
data: products,
timestamp: new Date().toISOString()
}));
} else if (parsedUrl.pathname.startsWith('/products/') && req.method === 'GET') {
const productId = parseInt(parsedUrl.pathname.split('/')[2]);
const product = products.find(p => p.id === productId);
if (product) {
res.writeHead(200);
res.end(JSON.stringify({
service: 'product_service',
data: product,
timestamp: new Date().toISOString()
}));
} else {
res.writeHead(404);
res.end(JSON.stringify({ error: 'Product not found' }));
}
} else {
res.writeHead(404);
res.end(JSON.stringify({ error: 'Endpoint not found' }));
}
});
const PORT = process.env.PORT || 4001;
server.listen(PORT, () => {
console.log(`Product service running on port ${PORT}`);
});
EOF
# Create service startup script
cat > start_services.sh << 'EOF'
#!/bin/bash
echo "🚀 Starting mock API services..."
# Start user service
node user_service.js &
USER_PID=$!
echo "✅ User service started (PID: $USER_PID) on port 3001"
# Start product service
node product_service.js &
PRODUCT_PID=$!
echo "✅ Product service started (PID: $PRODUCT_PID) on port 4001"
# Save PIDs for later cleanup
echo $USER_PID > user_service.pid
echo $PRODUCT_PID > product_service.pid
echo "📋 Services are running. Use 'pkill -f node' to stop all services."
EOF
# Create service stop script
cat > stop_services.sh << 'EOF'
#!/bin/bash
echo "🛑 Stopping mock API services..."
if [ -f user_service.pid ]; then
kill $(cat user_service.pid) 2>/dev/null
rm user_service.pid
echo "✅ User service stopped"
fi
if [ -f product_service.pid ]; then
kill $(cat product_service.pid) 2>/dev/null
rm product_service.pid
echo "✅ Product service stopped"
fi
echo "🏁 All services stopped"
EOF
# Make scripts executable
chmod +x start_services.sh stop_services.sh
# Start mock services
./start_services.sh
# Wait a moment for services to start
sleep 2
# Test API Gateway
echo ""
echo "🧪 Testing API Gateway:"
echo "======================"
# Test health endpoint
echo "📊 Health Check:"
curl -s http://localhost/health
echo ""
echo "👥 User Service Test:"
curl -s http://localhost/api/v1/users | jq '.' || curl -s http://localhost/api/v1/users
echo ""
echo "📦 Product Service Test:"
curl -s http://localhost/api/v1/products | jq '.' || curl -s http://localhost/api/v1/products
echo ""
echo "✅ API Gateway testing completed!"
You should see:
🚀 Starting mock API services...
✅ User service started (PID: 1234) on port 3001
✅ Product service started (PID: 1235) on port 4001
🧪 Testing API Gateway:
======================
📊 Health Check:
API Gateway is healthy
👥 User Service Test:
{
"service": "user_service",
"data": [
{"id": 1, "name": "John Doe", "email": "[email protected]"}
]
}
Awesome work! 🌟
📊 API Gateway Features Comparison
| Feature | NGINX | Kong | Traefik | API Gateway |
|---|---|---|---|---|
| 🔧 Load Balancing | ✅ Excellent | ✅ Excellent | ✅ Good | ✅ Built-in |
| 🛠️ Rate Limiting | ✅ Basic | ✅ Advanced | ✅ Good | ✅ Configurable |
| 🎯 Authentication | ❌ Manual | ✅ Plugins | ✅ Middleware | ✅ JWT/OAuth |
| 💾 Monitoring | ✅ Logs | ✅ Dashboard | ✅ Metrics | ✅ Analytics |
🛠️ Step 3: Implement Advanced API Gateway Features
Add Authentication and Authorization
What we’re doing: Implementing JWT authentication and API key management.
# Create authentication module for NGINX
mkdir -p /etc/nginx/lua
cat > /etc/nginx/lua/auth.lua << 'EOF'
local jwt = require "jwt"
local cjson = require "cjson"
local M = {}
-- JWT secret key (should be stored securely)
local JWT_SECRET = "your-secret-key-change-this"
-- API keys storage (in production, use Redis or database)
local valid_api_keys = {
["api-key-123"] = { client = "mobile-app", rate_limit = 1000 },
["api-key-456"] = { client = "web-app", rate_limit = 500 },
["api-key-789"] = { client = "partner", rate_limit = 100 }
}
-- Validate JWT token
function M.validate_jwt()
local auth_header = ngx.var.http_authorization
if not auth_header then
ngx.status = 401
ngx.say(cjson.encode({error = "Missing Authorization header"}))
ngx.exit(401)
return
end
local token = auth_header:match("Bearer%s+(.+)")
if not token then
ngx.status = 401
ngx.say(cjson.encode({error = "Invalid Authorization format"}))
ngx.exit(401)
return
end
local ok, payload = pcall(jwt.decode, token, JWT_SECRET)
if not ok then
ngx.status = 401
ngx.say(cjson.encode({error = "Invalid JWT token"}))
ngx.exit(401)
return
end
-- Check token expiration
if payload.exp and payload.exp < ngx.time() then
ngx.status = 401
ngx.say(cjson.encode({error = "Token expired"}))
ngx.exit(401)
return
end
-- Set user information in request headers
ngx.req.set_header("X-User-ID", payload.user_id)
ngx.req.set_header("X-User-Role", payload.role)
end
-- Validate API key
function M.validate_api_key()
local api_key = ngx.var.http_x_api_key
if not api_key then
ngx.status = 401
ngx.say(cjson.encode({error = "Missing X-API-Key header"}))
ngx.exit(401)
return
end
local key_info = valid_api_keys[api_key]
if not key_info then
ngx.status = 401
ngx.say(cjson.encode({error = "Invalid API key"}))
ngx.exit(401)
return
end
-- Set client information in request headers
ngx.req.set_header("X-Client-ID", key_info.client)
ngx.req.set_header("X-Rate-Limit", key_info.rate_limit)
end
-- Rate limiting based on API key
function M.rate_limit_by_key()
local api_key = ngx.var.http_x_api_key
if api_key and valid_api_keys[api_key] then
local rate_limit = valid_api_keys[api_key].rate_limit
-- Simple rate limiting logic (in production, use Redis)
local key = "rate_limit:" .. api_key
local current = ngx.shared.rate_limit_cache:get(key) or 0
if current >= rate_limit then
ngx.status = 429
ngx.say(cjson.encode({error = "Rate limit exceeded"}))
ngx.exit(429)
return
end
ngx.shared.rate_limit_cache:set(key, current + 1, 3600) -- 1 hour window
end
end
return M
EOF
# Update NGINX configuration with authentication
mkdir -p /etc/nginx/conf.d
cat > /etc/nginx/conf.d/api-gateway.conf << 'EOF'
# Add Lua shared memory for rate limiting
lua_shared_dict rate_limit_cache 10m;
server {
listen 80;
server_name api.localhost;
# API Gateway health check (no auth required)
location /health {
access_log off;
return 200 "API Gateway is healthy\n";
add_header Content-Type text/plain;
}
# Public authentication endpoint
location /api/v1/auth {
proxy_pass http://auth_service;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Protected user endpoints (require JWT)
location /api/v1/users {
access_by_lua_block {
local auth = require "auth"
auth.validate_jwt()
}
# Strip the /api/v1/users prefix so the backend receives /users
proxy_pass http://user_service/users;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Protected product endpoints (require API key)
location /api/v1/products {
access_by_lua_block {
local auth = require "auth"
auth.validate_api_key()
auth.rate_limit_by_key()
}
# Strip the /api/v1/products prefix so the backend receives /products
proxy_pass http://product_service/products;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# API metrics endpoint
location /api/metrics {
access_by_lua_block {
local auth = require "auth"
auth.validate_api_key()
}
content_by_lua_block {
local cjson = require "cjson"
local metrics = {
requests_total = 1000,
requests_per_second = 50,
response_time_avg = 120,
error_rate = 0.05,
timestamp = ngx.time()
}
ngx.say(cjson.encode(metrics))
}
}
}
EOF
echo "✅ Advanced authentication configured!"
What this does: Adds JWT and API key authentication to the gateway! 🌟
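To try the JWT-protected route you need a token signed with the same secret as in auth.lua. Assuming the Lua JWT library verifies standard HS256 tokens, a throwaway test token can be minted from the shell (in a real system, tokens would come from your auth service instead):
# Mint a short-lived HS256 test token signed with the JWT_SECRET from auth.lua
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }
HEADER=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
PAYLOAD=$(printf '{"user_id":1,"role":"admin","exp":%s}' "$(( $(date +%s) + 3600 ))" | b64url)
SIG=$(printf '%s.%s' "$HEADER" "$PAYLOAD" | openssl dgst -sha256 -hmac "your-secret-key-change-this" -binary | b64url)
TOKEN="$HEADER.$PAYLOAD.$SIG"
# Call the JWT-protected route (the Host header selects the api.localhost server)
curl -s http://localhost/api/v1/users -H "Host: api.localhost" -H "Authorization: Bearer $TOKEN"
# Call the API-key-protected route
curl -s http://localhost/api/v1/products -H "Host: api.localhost" -H "X-API-Key: api-key-123"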
Implement API Monitoring and Analytics
What we’re doing: Creating monitoring dashboard and analytics for API usage.
# Create API monitoring script
mkdir -p ~/bin
cat > ~/bin/api_gateway_monitor.sh << 'EOF'
#!/bin/bash
echo "📊 API Gateway Monitoring Dashboard"
echo "=================================="
LOG_FILE="/var/log/nginx/api_access.log"
ERROR_LOG="/var/log/nginx/api_error.log"
# Function to analyze API access logs
analyze_api_usage() {
echo "📈 API Usage Analytics"
echo "--------------------"
if [ ! -f "$LOG_FILE" ]; then
echo "❌ No access log found: $LOG_FILE"
return 1
fi
# Total requests today
TODAY=$(date +%d/%b/%Y)
TOTAL_REQUESTS=$(grep "$TODAY" "$LOG_FILE" | wc -l)
echo "📊 Total requests today: $TOTAL_REQUESTS"
# Requests by endpoint
echo ""
echo "🔗 Top API endpoints:"
grep "$TODAY" "$LOG_FILE" | awk '{print $7}' | sort | uniq -c | sort -nr | head -5 | while read count endpoint; do
echo " $endpoint: $count requests"
done
# Response status codes
echo ""
echo "📋 Response status codes:"
grep "$TODAY" "$LOG_FILE" | awk '{print $9}' | sort | uniq -c | sort -nr | while read count status; do
echo " HTTP $status: $count responses"
done
# Response times
echo ""
echo "⏱️ Response time analysis:"
AVG_RESPONSE_TIME=$(grep "$TODAY" "$LOG_FILE" | awk -F'rt=' '{print $2}' | awk '{print $1}' | awk '{sum+=$1; count++} END {if(count>0) printf "%.3f", sum/count; else print "0"}')
echo " Average response time: ${AVG_RESPONSE_TIME}s"
# Top client IPs
echo ""
echo "🌐 Top client IPs:"
grep "$TODAY" "$LOG_FILE" | awk '{print $1}' | sort | uniq -c | sort -nr | head -5 | while read count ip; do
echo " $ip: $count requests"
done
}
# Function to check API gateway health
check_gateway_health() {
echo ""
echo "🔍 Gateway Health Check"
echo "----------------------"
# Check NGINX status
if rc-service nginx status >/dev/null 2>&1; then
echo "✅ NGINX: Running"
else
echo "❌ NGINX: Stopped"
fi
# Test health endpoint
HEALTH_RESPONSE=$(curl -s -w "%{http_code}" http://localhost/health -o /dev/null)
if [ "$HEALTH_RESPONSE" = "200" ]; then
echo "✅ Health endpoint: OK"
else
echo "❌ Health endpoint: Failed (HTTP $HEALTH_RESPONSE)"
fi
# Check backend services
echo ""
echo "🔧 Backend Services:"
# Test user service
USER_STATUS=$(curl -s -w "%{http_code}" http://localhost:3001/users -o /dev/null 2>/dev/null)
if [ "$USER_STATUS" = "200" ]; then
echo " ✅ User Service: Running"
else
echo " ❌ User Service: Down"
fi
# Test product service
PRODUCT_STATUS=$(curl -s -w "%{http_code}" http://localhost:4001/products -o /dev/null 2>/dev/null)
if [ "$PRODUCT_STATUS" = "200" ]; then
echo " ✅ Product Service: Running"
else
echo " ❌ Product Service: Down"
fi
}
# Function to show recent errors
show_recent_errors() {
echo ""
echo "🚨 Recent Errors"
echo "---------------"
if [ -f "$ERROR_LOG" ]; then
ERROR_COUNT=$(tail -100 "$ERROR_LOG" | grep "$(date +%Y/%m/%d)" | wc -l)
echo "Error count today: $ERROR_COUNT"
if [ "$ERROR_COUNT" -gt 0 ]; then
echo ""
echo "Latest errors:"
tail -100 "$ERROR_LOG" | grep "$(date +%Y/%m/%d)" | tail -5 | while read line; do
echo " $line"
done
fi
else
echo "No error log found"
fi
}
# Function to generate API report
generate_api_report() {
REPORT_FILE="/var/log/api_gateway_report_$(date +%Y%m%d_%H%M%S).json"
echo ""
echo "📋 Generating API report..."
# Create JSON report (the inner heredoc uses REPORT so it does not terminate the outer EOF heredoc)
cat > "$REPORT_FILE" << REPORT
{
"timestamp": "$(date -Iseconds)",
"gateway_status": "$(rc-service nginx status >/dev/null 2>&1 && echo 'running' || echo 'stopped')",
"total_requests_today": $(grep "$(date +%d/%b/%Y)" "$LOG_FILE" 2>/dev/null | wc -l),
"error_count_today": $(tail -100 "$ERROR_LOG" 2>/dev/null | grep "$(date +%Y/%m/%d)" | wc -l),
"average_response_time": $(grep "$(date +%d/%b/%Y)" "$LOG_FILE" 2>/dev/null | awk -F'rt=' '{print $2}' | awk '{print $1}' | awk '{sum+=$1; count++} END {if(count>0) printf "%.3f", sum/count; else print "0"}'),
"health_check": "$(curl -s -w "%{http_code}" http://localhost/health -o /dev/null 2>/dev/null)"
}
REPORT
echo "✅ Report saved to: $REPORT_FILE"
}
# Main execution
analyze_api_usage
check_gateway_health
show_recent_errors
generate_api_report
echo ""
echo "=================================="
echo "Dashboard updated: $(date)"
EOF
# Create API testing script
cat > ~/bin/api_gateway_test.sh << 'EOF'
#!/bin/bash
echo "🧪 API Gateway Testing Suite"
echo "============================"
BASE_URL="http://localhost"
API_KEY="api-key-123"
# Test health endpoint
test_health() {
echo "📊 Testing health endpoint..."
RESPONSE=$(curl -s -w "%{http_code}" "$BASE_URL/health" -o /tmp/health_response)
STATUS_CODE=$RESPONSE
if [ "$STATUS_CODE" = "200" ]; then
echo "✅ Health check: PASSED"
echo " Response: $(cat /tmp/health_response)"
else
echo "❌ Health check: FAILED (HTTP $STATUS_CODE)"
fi
echo ""
}
# Test API key authentication
test_api_key_auth() {
echo "🔑 Testing API key authentication..."
# Test without API key
RESPONSE=$(curl -s -w "%{http_code}" "$BASE_URL/api/v1/products" -o /tmp/no_key_response)
if [ "$RESPONSE" = "401" ]; then
echo "✅ No API key: Correctly rejected (HTTP 401)"
else
echo "❌ No API key: Should be rejected but got HTTP $RESPONSE"
fi
# Test with valid API key
RESPONSE=$(curl -s -w "%{http_code}" "$BASE_URL/api/v1/products" \
-H "X-API-Key: $API_KEY" -o /tmp/valid_key_response)
if [ "$RESPONSE" = "200" ]; then
echo "✅ Valid API key: Access granted (HTTP 200)"
else
echo "❌ Valid API key: Access denied (HTTP $RESPONSE)"
fi
# Test with invalid API key
RESPONSE=$(curl -s -w "%{http_code}" "$BASE_URL/api/v1/products" \
-H "X-API-Key: invalid-key" -o /tmp/invalid_key_response)
if [ "$RESPONSE" = "401" ]; then
echo "✅ Invalid API key: Correctly rejected (HTTP 401)"
else
echo "❌ Invalid API key: Should be rejected but got HTTP $RESPONSE"
fi
echo ""
}
# Test rate limiting
test_rate_limiting() {
echo "⏱️ Testing rate limiting..."
SUCCESS_COUNT=0
RATE_LIMITED_COUNT=0
for i in {1..15}; do
RESPONSE=$(curl -s -w "%{http_code}" "$BASE_URL/api/v1/products" \
-H "X-API-Key: $API_KEY" -o /dev/null)
if [ "$RESPONSE" = "200" ]; then
SUCCESS_COUNT=$((SUCCESS_COUNT + 1))
elif [ "$RESPONSE" = "429" ]; then
RATE_LIMITED_COUNT=$((RATE_LIMITED_COUNT + 1))
fi
sleep 0.1
done
echo "📊 Rate limiting results:"
echo " Successful requests: $SUCCESS_COUNT"
echo " Rate limited requests: $RATE_LIMITED_COUNT"
if [ "$RATE_LIMITED_COUNT" -gt 0 ]; then
echo "✅ Rate limiting: Working correctly"
else
echo "⚠️ Rate limiting: May not be working (no 429 responses)"
fi
echo ""
}
# Test backend routing
test_backend_routing() {
echo "🔀 Testing backend routing..."
# Test user service routing
RESPONSE=$(curl -s "$BASE_URL/api/v1/users" \
-H "Authorization: Bearer dummy-token" 2>/dev/null | grep -o '"service":"[^"]*"' | cut -d'"' -f4)
if [ "$RESPONSE" = "user_service" ]; then
echo "✅ User service routing: Working"
else
echo "❌ User service routing: Failed"
fi
# Test product service routing
RESPONSE=$(curl -s "$BASE_URL/api/v1/products" \
-H "X-API-Key: $API_KEY" 2>/dev/null | grep -o '"service":"[^"]*"' | cut -d'"' -f4)
if [ "$RESPONSE" = "product_service" ]; then
echo "✅ Product service routing: Working"
else
echo "❌ Product service routing: Failed"
fi
echo ""
}
# Load testing
load_test() {
echo "🚀 Running load test..."
START_TIME=$(date +%s)
TOTAL_REQUESTS=100
SUCCESS_COUNT=0
for i in $(seq 1 $TOTAL_REQUESTS); do
RESPONSE=$(curl -s -w "%{http_code}" "$BASE_URL/api/v1/products" \
-H "X-API-Key: $API_KEY" -o /dev/null)
if [ "$RESPONSE" = "200" ]; then
SUCCESS_COUNT=$((SUCCESS_COUNT + 1))
fi
done
END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))
# Guard against a zero-second run to avoid division by zero
[ "$DURATION" -eq 0 ] && DURATION=1
RPS=$((TOTAL_REQUESTS / DURATION))
SUCCESS_RATE=$((SUCCESS_COUNT * 100 / TOTAL_REQUESTS))
echo "📊 Load test results:"
echo " Total requests: $TOTAL_REQUESTS"
echo " Successful requests: $SUCCESS_COUNT"
echo " Success rate: ${SUCCESS_RATE}%"
echo " Duration: ${DURATION}s"
echo " Requests per second: ${RPS}"
echo ""
}
# Run all tests
test_health
test_api_key_auth
test_rate_limiting
test_backend_routing
load_test
echo "============================"
echo "Testing completed: $(date)"
# Cleanup
rm -f /tmp/health_response /tmp/no_key_response /tmp/valid_key_response /tmp/invalid_key_response
EOF
# Make scripts executable
chmod +x ~/bin/api_gateway_monitor.sh ~/bin/api_gateway_test.sh
echo "✅ API monitoring and testing tools created!"
echo "📱 Monitor: ~/bin/api_gateway_monitor.sh"
echo "🧪 Test: ~/bin/api_gateway_test.sh"
Expected Output:
✅ Advanced authentication configured!
✅ API monitoring and testing tools created!
📊 API Gateway Monitoring Dashboard
==================================
📈 API Usage Analytics
📊 Total requests today: 156
🔗 Top API endpoints:
/api/v1/products: 89 requests
/api/v1/users: 67 requests
What this does: Provides comprehensive API gateway monitoring and testing! 💫
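If you want the dashboard to refresh on a schedule, one option is a cron entry. This assumes the script lives at /root/bin/api_gateway_monitor.sh and that you are editing the root crontab:
# Run the monitoring dashboard every 15 minutes and append the output to a log
( crontab -l 2>/dev/null; echo "*/15 * * * * /root/bin/api_gateway_monitor.sh >> /var/log/api_gateway_dashboard.log 2>&1" ) > /tmp/gateway.cron
crontab /tmp/gateway.cron && rm /tmp/gateway.cron
# Make sure cron is running and enabled on boot
rc-service crond start
rc-update add crond default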
🛠️ Step 4: Deploy Kong API Gateway
Install and Configure Kong
What we’re doing: Setting up Kong as an enterprise-grade API gateway solution.
# Install Kong using Docker
echo "🐳 Installing Kong API Gateway..."
# Create Kong network
docker network create kong-net
# Start PostgreSQL for Kong
docker run -d --name kong-database \
--network=kong-net \
-p 5432:5432 \
-e "POSTGRES_USER=kong" \
-e "POSTGRES_DB=kong" \
-e "POSTGRES_PASSWORD=kong" \
postgres:13
# Wait for PostgreSQL to start
sleep 10
# Run Kong migrations
docker run --rm \
--network=kong-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_PG_PASSWORD=kong" \
kong:latest kong migrations bootstrap
# Start Kong (map host.docker.internal to the host so Kong can reach services running outside Docker)
docker run -d --name kong \
--network=kong-net \
--add-host=host.docker.internal:host-gateway \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_PG_PASSWORD=kong" \
-e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
-e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
-e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001" \
-p 8000:8000 \
-p 8443:8443 \
-p 8001:8001 \
-p 8444:8444 \
kong:latest
# Wait for Kong to start
sleep 15
# Verify Kong installation
echo "🔍 Verifying Kong installation..."
KONG_STATUS=$(curl -s http://localhost:8001 | grep -o '"version":"[^"]*"' | cut -d'"' -f4)
echo "✅ Kong version: $KONG_STATUS"
# Create Kong management script
cat > ~/bin/kong_manager.sh << 'EOF'
#!/bin/bash
echo "🦍 Kong API Gateway Manager"
echo "=========================="
KONG_ADMIN_URL="http://localhost:8001"
# Function to add a service
add_service() {
local service_name="$1"
local service_url="$2"
echo "➕ Adding service: $service_name"
curl -i -X POST "$KONG_ADMIN_URL/services/" \
--data "name=$service_name" \
--data "url=$service_url"
echo ""
echo "✅ Service '$service_name' added"
}
# Function to add a route
add_route() {
local service_name="$1"
local route_path="$2"
echo "🛤️ Adding route: $route_path for service $service_name"
curl -i -X POST "$KONG_ADMIN_URL/services/$service_name/routes" \
--data "paths[]=$route_path"
echo ""
echo "✅ Route '$route_path' added for service '$service_name'"
}
# Function to add authentication plugin
add_auth_plugin() {
local service_name="$1"
local plugin_type="$2"
echo "🔐 Adding $plugin_type authentication to service: $service_name"
case "$plugin_type" in
"key-auth")
curl -i -X POST "$KONG_ADMIN_URL/services/$service_name/plugins" \
--data "name=key-auth"
;;
"jwt")
curl -i -X POST "$KONG_ADMIN_URL/services/$service_name/plugins" \
--data "name=jwt"
;;
"oauth2")
curl -i -X POST "$KONG_ADMIN_URL/services/$service_name/plugins" \
--data "name=oauth2" \
--data "config.enable_authorization_code=true"
;;
esac
echo ""
echo "✅ $plugin_type authentication added to '$service_name'"
}
# Function to add rate limiting
add_rate_limiting() {
local service_name="$1"
local requests_per_minute="$2"
echo "⏱️ Adding rate limiting to service: $service_name"
curl -i -X POST "$KONG_ADMIN_URL/services/$service_name/plugins" \
--data "name=rate-limiting" \
--data "config.minute=$requests_per_minute"
echo ""
echo "✅ Rate limiting ($requests_per_minute req/min) added to '$service_name'"
}
# Function to list services
list_services() {
echo "📋 Current Kong services:"
curl -s "$KONG_ADMIN_URL/services" | \
jq -r '.data[] | "\(.name): \(.host):\(.port)\(.path // "")"' 2>/dev/null || \
echo "Install jq for better formatting"
}
# Function to list routes
list_routes() {
echo "🛤️ Current Kong routes:"
curl -s "$KONG_ADMIN_URL/routes" | \
jq -r '.data[] | "\(.paths[0]): \(.service.name)"' 2>/dev/null || \
echo "Install jq for better formatting"
}
# Function to get Kong status
kong_status() {
echo "📊 Kong Gateway Status:"
echo "====================="
# Check Kong health
HEALTH=$(curl -s "$KONG_ADMIN_URL" | grep -o '"version":"[^"]*"' | cut -d'"' -f4)
echo "Health: Kong $HEALTH"
# Count services (Kong list endpoints return a "data" array, not a "total" field)
SERVICE_COUNT=$(curl -s "$KONG_ADMIN_URL/services" | jq '.data | length' 2>/dev/null)
echo "Services: $SERVICE_COUNT"
# Count routes
ROUTE_COUNT=$(curl -s "$KONG_ADMIN_URL/routes" | jq '.data | length' 2>/dev/null)
echo "Routes: $ROUTE_COUNT"
# Count plugins
PLUGIN_COUNT=$(curl -s "$KONG_ADMIN_URL/plugins" | jq '.data | length' 2>/dev/null)
echo "Plugins: $PLUGIN_COUNT"
}
# Menu system
show_menu() {
echo ""
echo "Choose an action:"
echo "1. Add service"
echo "2. Add route"
echo "3. Add authentication"
echo "4. Add rate limiting"
echo "5. List services"
echo "6. List routes"
echo "7. Kong status"
echo "8. Setup demo configuration"
echo "0. Exit"
echo ""
}
# Demo configuration
setup_demo() {
echo "🚀 Setting up demo Kong configuration..."
# Add user service (backend path included so Kong's default strip_path forwards /api/v1/users to /users)
add_service "user-service" "http://host.docker.internal:3001/users"
add_route "user-service" "/api/v1/users"
add_auth_plugin "user-service" "key-auth"
add_rate_limiting "user-service" "100"
# Add product service
add_service "product-service" "http://host.docker.internal:4001/products"
add_route "product-service" "/api/v1/products"
add_auth_plugin "product-service" "key-auth"
add_rate_limiting "product-service" "200"
echo "✅ Demo configuration completed!"
}
# Main menu loop
if [ "$1" = "demo" ]; then
setup_demo
exit 0
fi
while true; do
show_menu
read -p "Enter your choice [0-8]: " choice
case $choice in
1)
read -p "Service name: " name
read -p "Service URL: " url
add_service "$name" "$url"
;;
2)
read -p "Service name: " service
read -p "Route path: " path
add_route "$service" "$path"
;;
3)
read -p "Service name: " service
read -p "Auth type (key-auth/jwt/oauth2): " auth_type
add_auth_plugin "$service" "$auth_type"
;;
4)
read -p "Service name: " service
read -p "Requests per minute: " rpm
add_rate_limiting "$service" "$rpm"
;;
5)
list_services
;;
6)
list_routes
;;
7)
kong_status
;;
8)
setup_demo
;;
0)
echo "👋 Goodbye!"
exit 0
;;
*)
echo "❌ Invalid option. Please try again."
;;
esac
read -p "Press Enter to continue..."
done
echo "=========================="
EOF
# Make executable and setup demo
chmod +x ~/bin/kong_manager.sh
# Setup demo Kong configuration
~/bin/kong_manager.sh demo
echo "✅ Kong API Gateway installed and configured!"
echo "📱 Kong Admin: http://localhost:8001"
echo "🌐 Kong Proxy: http://localhost:8000"
echo "🔧 Manage: ~/bin/kong_manager.sh"
Expected Output:
🐳 Installing Kong API Gateway...
✅ Kong version: 3.4.0
🚀 Setting up demo Kong configuration...
✅ Demo configuration completed!
✅ Kong API Gateway installed and configured!
📱 Kong Admin: http://localhost:8001
🌐 Kong Proxy: http://localhost:8000
What this does: Provides enterprise-grade API gateway with Kong! 📚
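Note that the demo configuration enables key-auth but does not create any consumers, so requests through the Kong proxy will be rejected until a consumer and key exist. A minimal example (the consumer name and key value are arbitrary, and the mock services from earlier must still be running):
# Create a consumer and give it an API key
curl -s -X POST http://localhost:8001/consumers/ --data "username=demo-client"
curl -s -X POST http://localhost:8001/consumers/demo-client/key-auth --data "key=demo-key-123"
# Call a route through the Kong proxy (port 8000) using that key
curl -s http://localhost:8000/api/v1/products -H "apikey: demo-key-123"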
🎮 Practice Time!
Let’s practice what you learned! Try these simple examples:
Example 1: Custom API Gateway Middleware 🟢
What we’re doing: Creating custom middleware for request/response transformation.
# Create custom middleware for API gateway
mkdir -p ~/api-middleware
cd ~/api-middleware
# Create request transformation middleware
cat > request_transformer.lua << 'EOF'
local cjson = require "cjson"
local M = {}
-- Add request ID to all requests
function M.add_request_id()
local request_id = ngx.var.request_id or "req_" .. ngx.time() .. "_" .. math.random(1000, 9999)
ngx.req.set_header("X-Request-ID", request_id)
ngx.ctx.request_id = request_id
end
-- Log request details
function M.log_request()
local method = ngx.var.request_method
local uri = ngx.var.request_uri
local user_agent = ngx.var.http_user_agent or "unknown"
local request_id = ngx.ctx.request_id or "unknown"
local log_entry = {
timestamp = ngx.time(),
request_id = request_id,
method = method,
uri = uri,
user_agent = user_agent,
client_ip = ngx.var.remote_addr
}
ngx.log(ngx.INFO, "REQUEST: " .. cjson.encode(log_entry))
end
-- Transform request body
function M.transform_request_body()
if ngx.var.request_method == "POST" or ngx.var.request_method == "PUT" then
ngx.req.read_body()
local body = ngx.req.get_body_data()
if body then
local ok, json_body = pcall(cjson.decode, body)
if ok then
-- Add metadata to request
json_body._metadata = {
gateway_timestamp = ngx.time(),
request_id = ngx.ctx.request_id,
client_ip = ngx.var.remote_addr
}
local new_body = cjson.encode(json_body)
ngx.req.set_body_data(new_body)
end
end
end
end
-- Validate request format
function M.validate_request()
local content_type = ngx.var.content_type
if ngx.var.request_method == "POST" or ngx.var.request_method == "PUT" then
if not content_type or not content_type:match("application/json") then
ngx.status = 400
ngx.say(cjson.encode({
error = "Content-Type must be application/json",
request_id = ngx.ctx.request_id
}))
ngx.exit(400)
end
end
end
return M
EOF
# Create response transformation middleware
cat > response_transformer.lua << 'EOF'
local cjson = require "cjson"
local M = {}
-- Add response headers
function M.add_response_headers()
ngx.header["X-Request-ID"] = ngx.ctx.request_id
ngx.header["X-Gateway"] = "Alpine-API-Gateway"
ngx.header["X-Timestamp"] = ngx.time()
ngx.header["Cache-Control"] = "no-cache, no-store, must-revalidate"
-- Clear Content-Length because the body filter may change the body size
ngx.header.content_length = nil
end
-- Transform response body
function M.transform_response_body()
local body = ngx.arg[1]
local eof = ngx.arg[2]
if eof and body then
local ok, json_body = pcall(cjson.decode, body)
if ok then
-- Add gateway metadata to response
json_body._gateway = {
request_id = ngx.ctx.request_id,
processed_at = ngx.time(),
version = "1.0"
}
local new_body = cjson.encode(json_body)
ngx.arg[1] = new_body
end
end
end
-- Log response details
function M.log_response()
local status = ngx.status
local request_id = ngx.ctx.request_id or "unknown"
local response_time = ngx.now() - ngx.req.start_time()
local log_entry = {
timestamp = ngx.time(),
request_id = request_id,
status = status,
response_time = response_time,
uri = ngx.var.request_uri
}
ngx.log(ngx.INFO, "RESPONSE: " .. cjson.encode(log_entry))
end
return M
EOF
# Create NGINX configuration with custom middleware
cat > /etc/nginx/conf.d/api-middleware.conf << 'EOF'
# Custom middleware modules live in /etc/nginx/lua
# (they are found via the lua_package_path set in the main nginx.conf)
server {
listen 8080;
server_name api-middleware.localhost;
# Enable custom middleware
access_by_lua_block {
local req_transformer = require "request_transformer"
req_transformer.add_request_id()
req_transformer.log_request()
req_transformer.validate_request()
req_transformer.transform_request_body()
}
header_filter_by_lua_block {
local resp_transformer = require "response_transformer"
resp_transformer.add_response_headers()
}
body_filter_by_lua_block {
local resp_transformer = require "response_transformer"
resp_transformer.transform_response_body()
}
log_by_lua_block {
local resp_transformer = require "response_transformer"
resp_transformer.log_response()
}
# Proxy to backend services
location /api/v1/users {
proxy_pass http://user_service/users;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /api/v1/products {
proxy_pass http://product_service/products;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
EOF
# Copy middleware to NGINX lua directory
cp *.lua /etc/nginx/lua/
# Test configuration and reload
nginx -t && nginx -s reload
echo "✅ Custom API gateway middleware created!"
echo "🌐 Test at: http://localhost:8080/api/v1/products"
What this does: Creates sophisticated API transformation middleware! 🌟
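A quick way to see the middleware working (assuming the mock services are running and the reload above succeeded) is to inspect what the gateway adds to a response:
# The response should carry the headers added by response_transformer.lua
curl -si http://localhost:8080/api/v1/products | grep -iE 'x-request-id|x-gateway'
# The injected _gateway metadata appears when the upstream body arrives in a single chunk
curl -s http://localhost:8080/api/v1/products | jq '._gateway'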
Example 2: API Gateway Performance Optimization 🟡
What we’re doing: Implementing caching, compression, and performance optimizations.
# Create performance optimization configuration
cat > /etc/nginx/conf.d/api-performance.conf << 'EOF'
# API Gateway Performance Optimizations
# Enable gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types
text/plain
text/css
text/xml
text/javascript
application/json
application/javascript
application/xml+rss
application/atom+xml
image/svg+xml;
# Response caching
proxy_cache_path /var/cache/nginx/api levels=1:2 keys_zone=api_cache:10m max_size=100m inactive=60m use_temp_path=off;
server {
listen 9090;
server_name api-performance.localhost;
# Connection optimization
keepalive_timeout 65;
keepalive_requests 100;
# Buffer optimization
proxy_buffering on;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
# Caching for GET requests
location /api/v1/products {
proxy_cache api_cache;
proxy_cache_key "$scheme$request_method$host$request_uri";
proxy_cache_valid 200 10m;
proxy_cache_valid 404 1m;
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
proxy_cache_background_update on;
add_header X-Cache-Status $upstream_cache_status;
proxy_pass http://product_service/products;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
# No caching for user data (dynamic content)
location /api/v1/users {
proxy_cache off;
add_header Cache-Control "no-cache, no-store, must-revalidate";
proxy_pass http://user_service/users;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
# Cache purge endpoint (requires the third-party ngx_cache_purge module;
# remove this block if that module is not available in your nginx build)
location ~ /api/cache/purge/(.+) {
proxy_cache_purge api_cache "$scheme$request_method$host/$1";
return 200 "Cache purged for $1\n";
}
# Performance metrics endpoint
location /api/performance {
content_by_lua_block {
local cjson = require "cjson"
-- Get cache statistics
local cache_size = 0
local cache_files = 0
-- Simple performance metrics
local metrics = {
timestamp = ngx.time(),
cache = {
size_mb = cache_size,
files = cache_files,
hit_ratio = "75%" -- Would be calculated from real stats
},
performance = {
avg_response_time = 120,
requests_per_second = 150,
active_connections = 25
},
optimization = {
gzip_enabled = true,
caching_enabled = true,
keepalive_enabled = true
}
}
ngx.header.content_type = "application/json"
ngx.say(cjson.encode(metrics))
}
}
}
EOF
# Create cache directory
mkdir -p /var/cache/nginx/api
chown nginx:nginx /var/cache/nginx/api
# Create performance monitoring script
cat > ~/bin/api_performance_monitor.sh << 'EOF'
#!/bin/bash
echo "⚡ API Gateway Performance Monitor"
echo "================================="
NGINX_STATUS_URL="http://localhost/nginx_status"
API_PERFORMANCE_URL="http://localhost:9090/api/performance"
# Function to test response times
test_response_times() {
echo "⏱️ Testing API response times..."
ENDPOINTS=(
"http://localhost:9090/api/v1/products"
"http://localhost:9090/api/v1/users"
"http://localhost/health"
)
for endpoint in "${ENDPOINTS[@]}"; do
echo -n " $(echo $endpoint | cut -d'/' -f4-): "
RESPONSE_TIME=$(curl -w "%{time_total}" -s -o /dev/null "$endpoint" 2>/dev/null)
RESPONSE_CODE=$(curl -w "%{http_code}" -s -o /dev/null "$endpoint" 2>/dev/null)
echo "${RESPONSE_TIME}s (HTTP $RESPONSE_CODE)"
done
echo ""
}
# Function to test caching
test_caching() {
echo "💾 Testing response caching..."
# First request (should be MISS)
CACHE_STATUS1=$(curl -s -I "http://localhost:9090/api/v1/products" | grep "X-Cache-Status" | cut -d' ' -f2)
echo " First request: $CACHE_STATUS1"
# Second request (should be HIT)
CACHE_STATUS2=$(curl -s -I "http://localhost:9090/api/v1/products" | grep "X-Cache-Status" | cut -d' ' -f2)
echo " Second request: $CACHE_STATUS2"
if [ "$CACHE_STATUS2" = "HIT" ]; then
echo " ✅ Caching is working correctly"
else
echo " ⚠️ Caching may not be working as expected"
fi
echo ""
}
# Function to test compression
test_compression() {
echo "🗜️ Testing response compression..."
# Test with gzip
COMPRESSED_SIZE=$(curl -s -H "Accept-Encoding: gzip" "http://localhost:9090/api/v1/products" | wc -c)
UNCOMPRESSED_SIZE=$(curl -s "http://localhost:9090/api/v1/products" | wc -c)
if [ "$COMPRESSED_SIZE" -lt "$UNCOMPRESSED_SIZE" ]; then
COMPRESSION_RATIO=$(echo "scale=1; ($UNCOMPRESSED_SIZE - $COMPRESSED_SIZE) * 100 / $UNCOMPRESSED_SIZE" | bc)
echo " ✅ Compression working: ${COMPRESSION_RATIO}% size reduction"
else
echo " ⚠️ Compression may not be working"
fi
echo " Uncompressed: ${UNCOMPRESSED_SIZE} bytes"
echo " Compressed: ${COMPRESSED_SIZE} bytes"
echo ""
}
# Function to show performance metrics
show_performance_metrics() {
echo "📊 Performance Metrics"
echo "--------------------"
if command -v jq >/dev/null 2>&1; then
curl -s "$API_PERFORMANCE_URL" | jq '.'
else
curl -s "$API_PERFORMANCE_URL"
fi
echo ""
}
# Function to load test
simple_load_test() {
echo "🚀 Simple Load Test (50 requests)"
echo "--------------------------------"
# BusyBox date has no sub-second precision, so time the test in whole seconds
START_TIME=$(date +%s)
SUCCESS_COUNT=0
for i in {1..50}; do
RESPONSE=$(curl -s -w "%{http_code}" "http://localhost:9090/api/v1/products" -o /dev/null)
if [ "$RESPONSE" = "200" ]; then
SUCCESS_COUNT=$((SUCCESS_COUNT + 1))
fi
done
END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))
# Guard against a zero-second run to avoid division by zero
[ "$DURATION" -eq 0 ] && DURATION=1
RPS=$((50 / DURATION))
SUCCESS_RATE=$((SUCCESS_COUNT * 100 / 50))
echo " Total requests: 50"
echo " Successful: $SUCCESS_COUNT"
echo " Success rate: ${SUCCESS_RATE}%"
echo " Duration: ${DURATION}s"
echo " Requests/second: $RPS"
echo ""
}
# Run all performance tests
test_response_times
test_caching
test_compression
show_performance_metrics
simple_load_test
echo "================================="
echo "Performance monitoring completed: $(date)"
EOF
# Make executable and test
chmod +x ~/bin/api_performance_monitor.sh
# Test configuration and reload NGINX
nginx -t && nginx -s reload
echo "✅ API Gateway performance optimizations implemented!"
echo "⚡ Performance endpoint: http://localhost:9090/api/performance"
echo "📊 Monitor: ~/bin/api_performance_monitor.sh"
What this does: Implements comprehensive performance optimizations for the API gateway! 📚
🚨 Fix Common Problems
Problem 1: Backend services not reachable ❌
What happened: API Gateway cannot connect to backend services. How to fix it: Check service connectivity and configurations!
# Test backend service connectivity
curl -v http://localhost:3001/users
curl -v http://localhost:4001/products
# Check NGINX upstream configuration
nginx -T | grep -A 5 "upstream"
# Check that the backend services are listening on their ports
netstat -tln | grep -E ':3001|:4001'
ping -c 1 localhost
# Check firewall rules
iptables -L
# Restart services
rc-service nginx restart
Problem 2: Rate limiting not working ❌
What happened: API requests are not being rate limited. How to fix it: Verify rate limiting configuration!
# Check NGINX rate limiting configuration
grep -r "limit_req" /etc/nginx/
# Test rate limiting manually
for i in {1..20}; do
curl -s -w "%{http_code}\n" http://localhost/api/v1/products
sleep 0.1
done
# Check rate limiting logs
tail -f /var/log/nginx/error.log | grep "limiting requests"
# Verify shared memory zone
nginx -T | grep "limit_req_zone"
Problem 3: SSL/TLS certificate issues ❌
What happened: HTTPS connections fail or show security warnings. How to fix it: Update SSL certificate configuration!
# Check SSL certificate validity
openssl x509 -in /etc/nginx/ssl/api.crt -text -noout
# Test SSL connection
openssl s_client -connect localhost:443 -servername api.example.com
# Regenerate self-signed certificate
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /etc/nginx/ssl/api.key \
-out /etc/nginx/ssl/api.crt \
-subj "/C=US/ST=State/L=City/O=Organization/CN=localhost"
# Reload NGINX
nginx -s reload
Don’t worry! API Gateway troubleshooting takes practice. You’re doing great! 💪
💡 Simple Tips
- Start simple 📅 - Begin with basic reverse proxy before adding features
- Monitor everything 🌱 - Track metrics, logs, and performance
- Security first 🤝 - Always implement authentication and rate limiting
- Test thoroughly 💪 - Verify each feature works before deployment
✅ Check Everything Works
Let’s make sure your API Gateway setup is working:
# Check NGINX status
rc-service nginx status
# Test basic API Gateway functionality
curl -s http://localhost/health
# Test Kong gateway (if installed)
curl -s http://localhost:8001
# Test API routing
curl -s http://localhost/api/v1/products
# Run monitoring tools
~/bin/api_gateway_monitor.sh
# Run performance tests
~/bin/api_performance_monitor.sh
# Check logs
tail -5 /var/log/nginx/api_access.log
tail -5 /var/log/nginx/api_error.log
echo "API Gateway fully operational! ✅"
Good output:
✅ nginx * service started
API Gateway is healthy
✅ Kong version: 3.4.0
{"service": "product_service", "data": [...]}
📊 API Gateway Monitoring Dashboard loaded
⚡ Performance optimizations active
API Gateway fully operational! ✅
🏆 What You Learned
Great job! Now you can:
- ✅ Install and configure NGINX as an API Gateway
- ✅ Set up Kong for enterprise API management
- ✅ Implement authentication and authorization
- ✅ Create custom middleware for request/response transformation
- ✅ Configure rate limiting and security policies
- ✅ Implement caching and performance optimizations
- ✅ Build monitoring and analytics dashboards
- ✅ Fix common API Gateway issues and troubleshoot problems
🎯 What’s Next?
Now you can try:
- 📚 Implementing service mesh integration with Istio or Linkerd
- 🛠️ Building custom plugins for Kong or creating NGINX modules
- 🤝 Setting up multi-region API Gateway deployments
- 🌟 Implementing advanced security features like WAF and DDoS protection
Remember: Every expert was once a beginner. You’re doing amazing! 🎉
Keep practicing and you’ll become an API Gateway expert too! 💫