The shift to cloud-native architecture has revolutionized how we build, deploy, and scale applications. This comprehensive guide demonstrates how to build production-ready cloud-native applications using microservices architecture on Rocky Linux 9, covering everything from basic containerization to advanced service mesh implementations, distributed tracing, and modern deployment strategies.
🌟 Understanding Cloud-Native and Microservices
Cloud-native applications are designed to fully leverage cloud computing frameworks, embracing microservices, containers, service meshes, immutable infrastructure, and declarative APIs. These technologies enable loosely coupled systems that are resilient, manageable, and observable.
Key Principles
- Microservices Architecture - Small, independent services that do one thing well 🔧
- Containerization - Consistent environments from development to production 📦
- Dynamic Orchestration - Automated deployment, scaling, and management 🎯
- DevOps Culture - Rapid, frequent, and reliable software delivery 🚀
- Continuous Everything - CI/CD, monitoring, and improvement ♾️
📋 Prerequisites and Environment Setup
System Requirements
# Rocky Linux 9 Development Environment
- CPU: 4+ cores (8 recommended)
- RAM: 8 GB minimum (16 GB recommended)
- Storage: 100 GB SSD
- Network: Reliable internet connection
# Software Stack
- Docker/Podman for containerization
- Kubernetes for orchestration
- Service mesh (Istio/Linkerd)
- Message broker (RabbitMQ/Kafka)
- Databases (PostgreSQL, MongoDB, Redis)
Initial System Setup
# Update system
sudo dnf update -y
# Install development tools
sudo dnf groupinstall -y "Development Tools"
sudo dnf install -y \
git \
vim \
curl \
wget \
jq \
htop \
net-tools \
bind-utils
# Install container runtime
sudo dnf install -y podman podman-compose
# Or Docker
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo systemctl enable --now docker
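Before continuing, a quick smoke test confirms the container runtime works (rootless Podman, or Docker after adding your user to the docker group):
# Verify the container runtime
podman run --rm quay.io/podman/hello
# Or, for Docker:
sudo usermod -aG docker $USER && newgrp docker
docker run --rm hello-world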
🏗️ Microservices Architecture Design
Sample E-Commerce Application
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│  Frontend   │────▶│ API Gateway │────▶│    Auth     │
│   (React)   │     │   (Kong)    │     │   Service   │
└─────────────┘     └──────┬──────┘     └─────────────┘
                           │
       ┌───────────────────┼───────────────────┐
       │                   │                   │
       ▼                   ▼                   ▼
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Product   │     │    Order    │     │   Payment   │
│   Service   │     │   Service   │     │   Service   │
└─────────────┘     └─────────────┘     └─────────────┘
       │                   │                   │
       ▼                   ▼                   ▼
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│ PostgreSQL  │     │   MongoDB   │     │    Redis    │
└─────────────┘     └─────────────┘     └─────────────┘
Project Structure
# Create project structure
mkdir -p cloud-native-app/{services,infrastructure,deployments}
cd cloud-native-app
# Service directories
mkdir -p services/{auth,product,order,payment,frontend}
mkdir -p infrastructure/{docker,kubernetes,terraform,kong,istio}
mkdir -p deployments/{dev,staging,production,kubernetes,argocd}
# Initialize git repository
git init
echo "# Cloud Native E-Commerce" > README.md
🐳 Building Microservices
Auth Service (Node.js)
# services/auth/package.json
cat > services/auth/package.json << 'EOF'
{
"name": "auth-service",
"version": "1.0.0",
"main": "index.js",
"scripts": {
"start": "node index.js",
"dev": "nodemon index.js",
"test": "jest"
},
"dependencies": {
"express": "^4.18.2",
"jsonwebtoken": "^9.0.0",
"bcrypt": "^5.1.0",
"mongoose": "^7.0.0",
"redis": "^4.6.0",
"dotenv": "^16.0.3",
"express-rate-limit": "^6.7.0",
"helmet": "^7.0.0",
"joi": "^17.9.0",
"winston": "^3.8.2",
"express-prometheus-middleware": "^1.2.0"
}
}
EOF
# services/auth/index.js
cat > services/auth/index.js << 'EOF'
const express = require('express');
const jwt = require('jsonwebtoken');
const bcrypt = require('bcrypt');
const mongoose = require('mongoose');
const redis = require('redis');
const helmet = require('helmet');
const rateLimit = require('express-rate-limit');
const promMid = require('express-prometheus-middleware');
const winston = require('winston');
const app = express();
const port = process.env.PORT || 3001;
// Security middleware
app.use(helmet());
app.use(express.json());
// Logging
const logger = winston.createLogger({
level: 'info',
format: winston.format.json(),
transports: [
new winston.transports.Console(),
new winston.transports.File({ filename: 'auth.log' })
]
});
// Prometheus metrics
app.use(promMid({
metricsPath: '/metrics',
collectDefaultMetrics: true,
requestDurationBuckets: [0.1, 0.5, 1, 1.5]
}));
// Rate limiting
const limiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100 // limit each IP to 100 requests per windowMs
});
app.use('/api/', limiter);
// MongoDB connection
// MongoDB connection (Mongoose 6+ no longer needs useNewUrlParser/useUnifiedTopology)
mongoose.connect(process.env.MONGODB_URI || 'mongodb://localhost:27017/auth');
// Redis connection
const redisClient = redis.createClient({
url: process.env.REDIS_URL || 'redis://localhost:6379'
});
redisClient.connect().catch((err) => logger.error(`Redis connection error: ${err.message}`));
// User schema
const UserSchema = new mongoose.Schema({
email: { type: String, required: true, unique: true },
password: { type: String, required: true },
role: { type: String, default: 'user' },
createdAt: { type: Date, default: Date.now }
});
const User = mongoose.model('User', UserSchema);
// Health check endpoint
app.get('/health', (req, res) => {
res.json({
status: 'healthy',
service: 'auth',
timestamp: new Date().toISOString()
});
});
// Register endpoint
app.post('/api/register', async (req, res) => {
try {
const { email, password } = req.body;
// Hash password
const hashedPassword = await bcrypt.hash(password, 10);
// Create user
const user = new User({
email,
password: hashedPassword
});
await user.save();
logger.info(`User registered: ${email}`);
res.status(201).json({ message: 'User created successfully' });
} catch (error) {
logger.error(`Registration error: ${error.message}`);
res.status(400).json({ error: error.message });
}
});
// Login endpoint
app.post('/api/login', async (req, res) => {
try {
const { email, password } = req.body;
// Find user
const user = await User.findOne({ email });
if (!user) {
return res.status(401).json({ error: 'Invalid credentials' });
}
// Check password
const validPassword = await bcrypt.compare(password, user.password);
if (!validPassword) {
return res.status(401).json({ error: 'Invalid credentials' });
}
// Generate JWT
const token = jwt.sign(
{ userId: user._id, email: user.email, role: user.role },
process.env.JWT_SECRET || 'secret',
{ expiresIn: '24h' }
);
// Store in Redis
await redisClient.setEx(`token:${user._id}`, 86400, token);
logger.info(`User logged in: ${email}`);
res.json({ token });
} catch (error) {
logger.error(`Login error: ${error.message}`);
res.status(500).json({ error: error.message });
}
});
// Verify token endpoint
app.post('/api/verify', async (req, res) => {
try {
const { token } = req.body;
// Verify JWT
const decoded = jwt.verify(token, process.env.JWT_SECRET || 'secret');
// Check Redis
const storedToken = await redisClient.get(`token:${decoded.userId}`);
if (storedToken !== token) {
return res.status(401).json({ valid: false });
}
res.json({ valid: true, user: decoded });
} catch (error) {
res.status(401).json({ valid: false });
}
});
app.listen(port, () => {
logger.info(`Auth service listening on port ${port}`);
});
EOF
# services/auth/Dockerfile
cat > services/auth/Dockerfile << 'EOF'
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
FROM node:18-alpine
RUN apk add --no-cache dumb-init
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
# Make /app writable by the runtime user so winston can create auth.log
RUN chown -R node:node /app
USER node
EXPOSE 3001
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "index.js"]
EOF
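A quick local build and smoke test of the auth service; host networking keeps the default localhost MongoDB/Redis URLs in the code valid, and the container names and image tags here are just illustrative (swap in podman if that's your runtime):
# Run backing stores, build, and test the service
docker run -d --name mongo -p 27017:27017 mongo:6
docker run -d --name redis -p 6379:6379 redis:7
docker build -t auth-service:latest services/auth
docker run -d --name auth --network host auth-service:latest
curl http://localhost:3001/health
curl -X POST http://localhost:3001/api/register \
-H "Content-Type: application/json" \
-d '{"email":"test@example.com","password":"changeme"}'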
Product Service (Go)
// services/product/main.go
cat > services/product/main.go << 'EOF'
package main
import (
"context"
"encoding/json"
"fmt"
"log"
"net/http"
"os"
"time"
"github.com/gorilla/mux"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/bson/primitive"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/trace"
)
type Product struct {
ID primitive.ObjectID `bson:"_id,omitempty" json:"id"`
Name string `bson:"name" json:"name"`
Description string `bson:"description" json:"description"`
Price float64 `bson:"price" json:"price"`
Stock int `bson:"stock" json:"stock"`
Category string `bson:"category" json:"category"`
CreatedAt time.Time `bson:"created_at" json:"created_at"`
}
var (
client *mongo.Client
collection *mongo.Collection
tracer trace.Tracer
// Prometheus metrics
httpDuration = prometheus.NewHistogramVec(prometheus.HistogramOpts{
Name: "http_duration_seconds",
Help: "Duration of HTTP requests.",
}, []string{"path"})
httpRequests = prometheus.NewCounterVec(prometheus.CounterOpts{
Name: "http_requests_total",
Help: "Total number of HTTP requests.",
}, []string{"path", "method", "status"})
)
func init() {
prometheus.MustRegister(httpDuration)
prometheus.MustRegister(httpRequests)
}
func main() {
// MongoDB connection
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
mongoURI := os.Getenv("MONGODB_URI")
if mongoURI == "" {
mongoURI = "mongodb://localhost:27017"
}
var err error
client, err = mongo.Connect(ctx, options.Client().ApplyURI(mongoURI))
if err != nil {
log.Fatal(err)
}
defer client.Disconnect(ctx)
collection = client.Database("products").Collection("products")
// Setup tracing
tracer = otel.Tracer("product-service")
// Setup routes
r := mux.NewRouter()
// Middleware
r.Use(prometheusMiddleware)
r.Use(loggingMiddleware)
// Routes
r.HandleFunc("/health", healthHandler).Methods("GET")
r.HandleFunc("/metrics", promhttp.Handler().ServeHTTP)
r.HandleFunc("/api/products", getProducts).Methods("GET")
r.HandleFunc("/api/products", createProduct).Methods("POST")
r.HandleFunc("/api/products/{id}", getProduct).Methods("GET")
r.HandleFunc("/api/products/{id}", updateProduct).Methods("PUT")
r.HandleFunc("/api/products/{id}", deleteProduct).Methods("DELETE")
port := os.Getenv("PORT")
if port == "" {
port = "3002"
}
log.Printf("Product service listening on port %s", port)
log.Fatal(http.ListenAndServe(":"+port, r))
}
func healthHandler(w http.ResponseWriter, r *http.Request) {
json.NewEncoder(w).Encode(map[string]interface{}{
"status": "healthy",
"service": "product",
"timestamp": time.Now().Format(time.RFC3339),
})
}
func getProducts(w http.ResponseWriter, r *http.Request) {
ctx, span := tracer.Start(r.Context(), "getProducts")
defer span.End()
var products []Product
cursor, err := collection.Find(ctx, bson.M{})
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer cursor.Close(ctx)
for cursor.Next(ctx) {
var product Product
cursor.Decode(&product)
products = append(products, product)
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(products)
}
func createProduct(w http.ResponseWriter, r *http.Request) {
ctx, span := tracer.Start(r.Context(), "createProduct")
defer span.End()
var product Product
if err := json.NewDecoder(r.Body).Decode(&product); err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
product.CreatedAt = time.Now()
result, err := collection.InsertOne(ctx, product)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
product.ID = result.InsertedID.(primitive.ObjectID)
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated)
json.NewEncoder(w).Encode(product)
}
func prometheusMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
start := time.Now()
wrapped := &responseWriter{ResponseWriter: w, statusCode: http.StatusOK}
next.ServeHTTP(wrapped, r)
duration := time.Since(start).Seconds()
httpDuration.WithLabelValues(r.URL.Path).Observe(duration)
httpRequests.WithLabelValues(r.URL.Path, r.Method, fmt.Sprintf("%d", wrapped.statusCode)).Inc()
})
}
func loggingMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
log.Printf("%s %s %s", r.RemoteAddr, r.Method, r.URL)
next.ServeHTTP(w, r)
})
}
type responseWriter struct {
http.ResponseWriter
statusCode int
}
func (rw *responseWriter) WriteHeader(code int) {
rw.statusCode = code
rw.ResponseWriter.WriteHeader(code)
}
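// main registers getProduct, updateProduct, and deleteProduct above, but the
// originals were omitted; these are minimal sketches to make the service compile.
func getProduct(w http.ResponseWriter, r *http.Request) {
ctx, span := tracer.Start(r.Context(), "getProduct")
defer span.End()
id, err := primitive.ObjectIDFromHex(mux.Vars(r)["id"])
if err != nil {
http.Error(w, "invalid id", http.StatusBadRequest)
return
}
var product Product
if err := collection.FindOne(ctx, bson.M{"_id": id}).Decode(&product); err != nil {
http.Error(w, "product not found", http.StatusNotFound)
return
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(product)
}
func updateProduct(w http.ResponseWriter, r *http.Request) {
ctx, span := tracer.Start(r.Context(), "updateProduct")
defer span.End()
id, err := primitive.ObjectIDFromHex(mux.Vars(r)["id"])
if err != nil {
http.Error(w, "invalid id", http.StatusBadRequest)
return
}
var product Product
if err := json.NewDecoder(r.Body).Decode(&product); err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
update := bson.M{"$set": bson.M{"name": product.Name, "description": product.Description, "price": product.Price, "stock": product.Stock, "category": product.Category}}
if _, err := collection.UpdateOne(ctx, bson.M{"_id": id}, update); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
w.WriteHeader(http.StatusNoContent)
}
func deleteProduct(w http.ResponseWriter, r *http.Request) {
ctx, span := tracer.Start(r.Context(), "deleteProduct")
defer span.End()
id, err := primitive.ObjectIDFromHex(mux.Vars(r)["id"])
if err != nil {
http.Error(w, "invalid id", http.StatusBadRequest)
return
}
if _, err := collection.DeleteOne(ctx, bson.M{"_id": id}); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
w.WriteHeader(http.StatusNoContent)
}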
EOF
# services/product/Dockerfile
cat > services/product/Dockerfile << 'EOF'
FROM golang:1.20-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o product-service .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /app/product-service .
EXPOSE 3002
CMD ["./product-service"]
EOF
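The Dockerfile copies go.mod and go.sum, which haven't been created yet; the module path below is a placeholder, so substitute your own repository path:
# Initialize the Go module so the Docker build can resolve dependencies
cd services/product
go mod init example.com/cloud-native-app/product
go mod tidy
cd ../..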
Order Service (Python)
# services/order/app.py
cat > services/order/app.py << 'EOF'
from flask import Flask, request, jsonify
from flask_pymongo import PyMongo
from flask_cors import CORS
from prometheus_flask_exporter import PrometheusMetrics
import redis
import json
import os
import logging
from datetime import datetime
from bson.objectid import ObjectId
import requests
from opentelemetry import trace
from opentelemetry.instrumentation.flask import FlaskInstrumentor
from opentelemetry.instrumentation.requests import RequestsInstrumentor
app = Flask(__name__)
CORS(app)
# Configuration
app.config['MONGO_URI'] = os.getenv('MONGODB_URI', 'mongodb://localhost:27017/orders')
mongo = PyMongo(app)
redis_client = redis.Redis(
host=os.getenv('REDIS_HOST', 'localhost'),
port=int(os.getenv('REDIS_PORT', 6379)),
decode_responses=True
)
# Metrics
metrics = PrometheusMetrics(app)
# Tracing
FlaskInstrumentor().instrument_app(app)
RequestsInstrumentor().instrument()
tracer = trace.get_tracer(__name__)
# Logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
# Service URLs
AUTH_SERVICE_URL = os.getenv('AUTH_SERVICE_URL', 'http://localhost:3001')
PRODUCT_SERVICE_URL = os.getenv('PRODUCT_SERVICE_URL', 'http://localhost:3002')
PAYMENT_SERVICE_URL = os.getenv('PAYMENT_SERVICE_URL', 'http://localhost:3004')
@app.route('/health')
def health():
return jsonify({
'status': 'healthy',
'service': 'order',
'timestamp': datetime.utcnow().isoformat()
})
@app.route('/api/orders', methods=['POST'])
def create_order():
with tracer.start_as_current_span("create_order"):
try:
# Validate auth token
token = request.headers.get('Authorization', '').replace('Bearer ', '')
auth_response = requests.post(
f"{AUTH_SERVICE_URL}/api/verify",
json={'token': token}
)
if not auth_response.json().get('valid'):
return jsonify({'error': 'Unauthorized'}), 401
user_data = auth_response.json()['user']
order_data = request.json
# Validate products
total_amount = 0
for item in order_data['items']:
product_response = requests.get(
f"{PRODUCT_SERVICE_URL}/api/products/{item['product_id']}"
)
if product_response.status_code != 200:
return jsonify({'error': f"Product {item['product_id']} not found"}), 400
product = product_response.json()
if product['stock'] < item['quantity']:
return jsonify({'error': f"Insufficient stock for {product['name']}"}), 400
total_amount += product['price'] * item['quantity']
# Create order
order = {
'user_id': user_data['userId'],
'items': order_data['items'],
'total_amount': total_amount,
'status': 'pending',
'created_at': datetime.utcnow(),
'shipping_address': order_data.get('shipping_address')
}
result = mongo.db.orders.insert_one(order)
order['_id'] = str(result.inserted_id)
# Cache order
redis_client.setex(
f"order:{order['_id']}",
3600,
json.dumps(order, default=str)
)
# Process payment
payment_response = requests.post(
f"{PAYMENT_SERVICE_URL}/api/payments",
json={
'order_id': order['_id'],
'amount': total_amount,
'currency': 'USD'
},
headers={'Authorization': request.headers.get('Authorization')}
)
if payment_response.status_code == 200:
mongo.db.orders.update_one(
{'_id': result.inserted_id},
{'$set': {'status': 'paid'}}
)
order['status'] = 'paid'
logger.info(f"Order created: {order['_id']}")
return jsonify(order), 201
except Exception as e:
logger.error(f"Error creating order: {str(e)}")
return jsonify({'error': str(e)}), 500
@app.route('/api/orders/<order_id>', methods=['GET'])
def get_order(order_id):
with tracer.start_as_current_span("get_order"):
# Check cache first
cached_order = redis_client.get(f"order:{order_id}")
if cached_order:
return jsonify(json.loads(cached_order))
# Get from database
order = mongo.db.orders.find_one({'_id': ObjectId(order_id)})
if not order:
return jsonify({'error': 'Order not found'}), 404
order['_id'] = str(order['_id'])
# Cache result
redis_client.setex(
f"order:{order_id}",
3600,
json.dumps(order, default=str)
)
return jsonify(order)
@app.route('/api/orders', methods=['GET'])
def get_orders():
with tracer.start_as_current_span("get_orders"):
# Validate auth token
token = request.headers.get('Authorization', '').replace('Bearer ', '')
auth_response = requests.post(
f"{AUTH_SERVICE_URL}/api/verify",
json={'token': token}
)
if not auth_response.json().get('valid'):
return jsonify({'error': 'Unauthorized'}), 401
user_data = auth_response.json()['user']
# Get user's orders
orders = list(mongo.db.orders.find({'user_id': user_data['userId']}))
for order in orders:
order['_id'] = str(order['_id'])
return jsonify(orders)
if __name__ == '__main__':
app.run(host='0.0.0.0', port=int(os.getenv('PORT', 3003)))
EOF
# services/order/requirements.txt
cat > services/order/requirements.txt << 'EOF'
Flask==2.3.2
flask-pymongo==2.3.0
flask-cors==4.0.0
redis==4.5.5
requests==2.31.0
prometheus-flask-exporter==0.22.4
opentelemetry-api==1.18.0
opentelemetry-sdk==1.18.0
opentelemetry-instrumentation-flask==0.39b0
opentelemetry-instrumentation-requests==0.39b0
gunicorn==20.1.0
EOF
# services/order/Dockerfile
cat > services/order/Dockerfile << 'EOF'
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
USER nobody
EXPOSE 3003
CMD ["gunicorn", "--bind", "0.0.0.0:3003", "--workers", "4", "app:app"]
EOF
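With all three Dockerfiles in place, the images can be built in one loop (swap in podman if that's your runtime):
# Build every service image with a consistent tag
for svc in auth product order; do
docker build -t ${svc}-service:latest services/${svc}
done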
🌐 API Gateway with Kong
Kong Configuration
# infrastructure/docker/docker-compose.yml
cat > infrastructure/docker/docker-compose.yml << 'EOF'
version: '3.8'
services:
kong-database:
image: postgres:15-alpine
environment:
POSTGRES_USER: kong
POSTGRES_DB: kong
POSTGRES_PASSWORD: kongpass
volumes:
- kong-db-data:/var/lib/postgresql/data
networks:
- microservices
kong-migration:
image: kong:3.3
command: kong migrations bootstrap
depends_on:
- kong-database
environment:
KONG_DATABASE: postgres
KONG_PG_HOST: kong-database
KONG_PG_USER: kong
KONG_PG_PASSWORD: kongpass
networks:
- microservices
kong:
image: kong:3.3
depends_on:
- kong-database
- kong-migration
environment:
KONG_DATABASE: postgres
KONG_PG_HOST: kong-database
KONG_PG_USER: kong
KONG_PG_PASSWORD: kongpass
KONG_PROXY_ACCESS_LOG: /dev/stdout
KONG_ADMIN_ACCESS_LOG: /dev/stdout
KONG_PROXY_ERROR_LOG: /dev/stderr
KONG_ADMIN_ERROR_LOG: /dev/stderr
KONG_ADMIN_LISTEN: 0.0.0.0:8001, 0.0.0.0:8444 ssl
ports:
- "8000:8000"
- "8443:8443"
- "8001:8001"
- "8444:8444"
networks:
- microservices
konga:
image: pantsel/konga:latest
depends_on:
- kong
environment:
NODE_ENV: production
TOKEN_SECRET: some-secret-token
ports:
- "1337:1337"
networks:
- microservices
networks:
microservices:
driver: bridge
volumes:
kong-db-data:
EOF
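A quick start of the gateway stack (Docker Compose v2 syntax; use podman-compose if you installed Podman):
# Start Kong, its database, and Konga
cd infrastructure/docker
docker compose up -d
docker compose ps
cd ../..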
# Configure Kong routes
cat > infrastructure/kong/configure.sh << 'EOF'
#!/bin/bash
KONG_ADMIN_URL="http://localhost:8001"
# Wait for Kong to be ready
until curl -s $KONG_ADMIN_URL > /dev/null; do
echo "Waiting for Kong..."
sleep 2
done
# Create services
curl -X POST $KONG_ADMIN_URL/services \
-H "Content-Type: application/json" \
-d '{
"name": "auth-service",
"url": "http://auth-service:3001"
}'
curl -X POST $KONG_ADMIN_URL/services \
-H "Content-Type: application/json" \
-d '{
"name": "product-service",
"url": "http://product-service:3002"
}'
curl -X POST $KONG_ADMIN_URL/services \
-H "Content-Type: application/json" \
-d '{
"name": "order-service",
"url": "http://order-service:3003"
}'
# Create routes (strip_path=false, since Kong strips the matched
# path by default and the upstream services expect the full /api/... paths)
curl -X POST $KONG_ADMIN_URL/services/auth-service/routes \
-H "Content-Type: application/json" \
-d '{
"paths": ["/api/register", "/api/login", "/api/verify"],
"strip_path": false
}'
curl -X POST $KONG_ADMIN_URL/services/product-service/routes \
-H "Content-Type: application/json" \
-d '{
"paths": ["/api/products"],
"strip_path": false
}'
curl -X POST $KONG_ADMIN_URL/services/order-service/routes \
-H "Content-Type: application/json" \
-d '{
"paths": ["/api/orders"],
"strip_path": false
}'
# Add plugins
# Rate limiting
curl -X POST $KONG_ADMIN_URL/plugins \
-H "Content-Type: application/json" \
-d '{
"name": "rate-limiting",
"config": {
"minute": 60,
"policy": "local"
}
}'
# CORS
curl -X POST $KONG_ADMIN_URL/plugins \
-H "Content-Type: application/json" \
-d '{
"name": "cors",
"config": {
"origins": ["*"],
"methods": ["GET", "POST", "PUT", "DELETE", "OPTIONS"],
"headers": ["Accept", "Authorization", "Content-Type"],
"exposed_headers": ["X-Auth-Token"],
"credentials": true,
"max_age": 3600
}
}'
# Correlation ID (generates a unique X-Request-ID per request;
# the request-transformer plugin cannot generate dynamic values)
curl -X POST $KONG_ADMIN_URL/plugins \
-H "Content-Type: application/json" \
-d '{
"name": "correlation-id",
"config": {
"header_name": "X-Request-ID",
"generator": "uuid",
"echo_downstream": true
}
}'
echo "Kong configuration completed!"
EOF
chmod +x infrastructure/kong/configure.sh
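With the stack running, apply the configuration and send a request through the proxy port (8000); this assumes the service containers are attached to the same microservices network so Kong can resolve their hostnames:
# Apply routes and smoke-test the gateway
./infrastructure/kong/configure.sh
curl -i http://localhost:8000/api/products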
🕸️ Service Mesh with Istio
Installing Istio
# Download and install Istio
curl -L https://istio.io/downloadIstio | sh -
cd istio-*
export PATH=$PWD/bin:$PATH
# Install Istio on Kubernetes
istioctl install --set profile=demo -y
# Enable sidecar injection
kubectl label namespace default istio-injection=enabled
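It's worth confirming the control plane is healthy before deploying workloads:
# Verify the Istio installation
istioctl verify-install
kubectl get pods -n istio-system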
Istio Configuration
The manifests below use Istio's canonical Bookinfo service names (productpage, reviews) to illustrate header-based routing, subsets, and an ingress gateway; substitute your own service names when adapting them to the e-commerce app.
# infrastructure/istio/virtual-service.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: productpage
spec:
hosts:
- productpage
http:
- match:
- headers:
end-user:
exact: jason
route:
- destination:
host: reviews
subset: v2
- route:
- destination:
host: reviews
subset: v3
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: reviews
spec:
host: reviews
trafficPolicy:
connectionPool:
tcp:
maxConnections: 100
http:
http1MaxPendingRequests: 100
http2MaxRequests: 100
maxRequestsPerConnection: 2
loadBalancer:
simple: ROUND_ROBIN
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
- name: v3
labels:
version: v3
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: bookinfo-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
Circuit Breaker Configuration
# infrastructure/istio/circuit-breaker.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: product-service
spec:
host: product-service
trafficPolicy:
connectionPool:
tcp:
maxConnections: 10
http:
http1MaxPendingRequests: 10
http2MaxRequests: 20
maxRequestsPerConnection: 2
outlierDetection:
consecutive5xxErrors: 3
interval: 30s
baseEjectionTime: 30s
maxEjectionPercent: 50
minHealthPercent: 30
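Applying the mesh configuration and running Istio's analyzer will catch most misconfigurations, such as subsets that match no pod labels:
# Apply and lint the mesh configuration
kubectl apply -f infrastructure/istio/
istioctl analyze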
📊 Observability Stack
Distributed Tracing with Jaeger
# infrastructure/kubernetes/jaeger.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: jaeger
spec:
replicas: 1
selector:
matchLabels:
app: jaeger
template:
metadata:
labels:
app: jaeger
spec:
containers:
- name: jaeger
image: jaegertracing/all-in-one:1.45
ports:
- containerPort: 16686
- containerPort: 14268
env:
- name: COLLECTOR_ZIPKIN_HOST_PORT
value: ":9411"
---
apiVersion: v1
kind: Service
metadata:
name: jaeger
spec:
selector:
app: jaeger
ports:
- name: ui
port: 16686
targetPort: 16686
- name: collector
port: 14268
targetPort: 14268
type: LoadBalancer
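To reach the Jaeger UI from a workstation, a port-forward is usually simpler than waiting on a LoadBalancer address:
# Deploy Jaeger and open the UI at http://localhost:16686
kubectl apply -f infrastructure/kubernetes/jaeger.yaml
kubectl port-forward svc/jaeger 16686:16686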
Metrics with Prometheus and Grafana
# infrastructure/kubernetes/prometheus.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-config
data:
prometheus.yml: |
global:
scrape_interval: 15s
scrape_configs:
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: prometheus
spec:
replicas: 1
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
spec:
containers:
- name: prometheus
image: prom/prometheus:v2.44.0
args:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus/'
ports:
- containerPort: 9090
volumeMounts:
- name: config
mountPath: /etc/prometheus
- name: storage
mountPath: /prometheus
volumes:
- name: config
configMap:
name: prometheus-config
- name: storage
emptyDir: {}
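A minimal check that Prometheus is up and scraping, assuming the manifests above are applied as-is:
# Deploy Prometheus and run a test query against its HTTP API
kubectl apply -f infrastructure/kubernetes/prometheus.yaml
kubectl port-forward deploy/prometheus 9090:9090 &
curl 'http://localhost:9090/api/v1/query?query=up'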
Centralized Logging with EFK
# infrastructure/kubernetes/elasticsearch.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch
spec:
serviceName: elasticsearch
replicas: 1
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:8.8.0
env:
- name: discovery.type
value: single-node
- name: xpack.security.enabled
value: "false"
ports:
- containerPort: 9200
- containerPort: 9300
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 10Gi
---
# Fluentd DaemonSet
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccountName: fluentd
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.16-debian-elasticsearch8
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "elasticsearch"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
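Note that the StatefulSet's serviceName and Fluentd's FLUENT_ELASTICSEARCH_HOST both assume a Service named elasticsearch exposing port 9200, which isn't shown above; once that Service exists, a quick health check looks like this:
# Confirm Elasticsearch is reachable before expecting logs
kubectl port-forward svc/elasticsearch 9200:9200 &
curl 'http://localhost:9200/_cluster/health?pretty'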
🚀 Kubernetes Deployment
Service Deployments
# deployments/kubernetes/auth-service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-service
labels:
app: auth-service
version: v1
spec:
replicas: 3
selector:
matchLabels:
app: auth-service
template:
metadata:
labels:
app: auth-service
version: v1
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "3001"
prometheus.io/path: "/metrics"
spec:
containers:
- name: auth-service
image: auth-service:latest
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3001
env:
- name: MONGODB_URI
valueFrom:
secretKeyRef:
name: mongodb-secret
key: uri
- name: REDIS_URL
valueFrom:
secretKeyRef:
name: redis-secret
key: url
- name: JWT_SECRET
valueFrom:
secretKeyRef:
name: jwt-secret
key: secret
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "256Mi"
cpu: "500m"
livenessProbe:
httpGet:
path: /health
port: 3001
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /health
port: 3001
initialDelaySeconds: 5
periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
name: auth-service
spec:
selector:
app: auth-service
ports:
- protocol: TCP
port: 3001
targetPort: 3001
type: ClusterIP
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: auth-service-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: auth-service
minReplicas: 3
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
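To see the autoscaler in action (metrics-server must be installed for the HPA to report utilization):
# Deploy, wait for the rollout, then watch the HPA
kubectl apply -f deployments/kubernetes/auth-service.yaml
kubectl rollout status deployment/auth-service
kubectl get hpa auth-service-hpa --watch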
ConfigMaps and Secrets
# deployments/kubernetes/config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
data:
AUTH_SERVICE_URL: "http://auth-service:3001"
PRODUCT_SERVICE_URL: "http://product-service:3002"
ORDER_SERVICE_URL: "http://order-service:3003"
PAYMENT_SERVICE_URL: "http://payment-service:3004"
---
apiVersion: v1
kind: Secret
metadata:
name: mongodb-secret
type: Opaque
stringData:
uri: "mongodb://mongo:27017/microservices"
---
apiVersion: v1
kind: Secret
metadata:
name: redis-secret
type: Opaque
stringData:
url: "redis://redis:6379"
---
apiVersion: v1
kind: Secret
metadata:
name: jwt-secret
type: Opaque
stringData:
secret: "your-super-secret-jwt-key"
🔄 CI/CD Pipeline
GitLab CI/CD
# .gitlab-ci.yml
stages:
- build
- test
- scan
- deploy
variables:
DOCKER_DRIVER: overlay2
KUBERNETES_NAMESPACE: microservices
# Build stage (docker login is done here rather than globally,
# since only these jobs run in an image that has the docker CLI)
.build_template: &build_template
stage: build
image: docker:latest
services:
- docker:dind
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
script:
- cd services/$SERVICE_NAME
- docker build -t $CI_REGISTRY_IMAGE/$SERVICE_NAME:$CI_COMMIT_SHA .
- docker push $CI_REGISTRY_IMAGE/$SERVICE_NAME:$CI_COMMIT_SHA
- docker tag $CI_REGISTRY_IMAGE/$SERVICE_NAME:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE/$SERVICE_NAME:latest
- docker push $CI_REGISTRY_IMAGE/$SERVICE_NAME:latest
build_auth:
<<: *build_template
variables:
SERVICE_NAME: auth
build_product:
<<: *build_template
variables:
SERVICE_NAME: product
build_order:
<<: *build_template
variables:
SERVICE_NAME: order
# Test stage
test_auth:
stage: test
image: node:18
script:
- cd services/auth
- npm install
- npm test
coverage: '/Coverage: \d+\.\d+%/'
test_product:
stage: test
image: golang:1.20
script:
- cd services/product
- go test -v -cover ./...
test_order:
stage: test
image: python:3.11
script:
- cd services/order
- pip install -r requirements.txt
- python -m pytest --cov=.
# Security scanning
security_scan:
stage: scan
image: aquasec/trivy:latest
script:
- trivy image --severity HIGH,CRITICAL $CI_REGISTRY_IMAGE/auth:$CI_COMMIT_SHA
- trivy image --severity HIGH,CRITICAL $CI_REGISTRY_IMAGE/product:$CI_COMMIT_SHA
- trivy image --severity HIGH,CRITICAL $CI_REGISTRY_IMAGE/order:$CI_COMMIT_SHA
# Deploy stage
deploy_staging:
stage: deploy
image: bitnami/kubectl:latest
script:
- kubectl set image deployment/auth-service auth-service=$CI_REGISTRY_IMAGE/auth:$CI_COMMIT_SHA -n $KUBERNETES_NAMESPACE
- kubectl set image deployment/product-service product-service=$CI_REGISTRY_IMAGE/product:$CI_COMMIT_SHA -n $KUBERNETES_NAMESPACE
- kubectl set image deployment/order-service order-service=$CI_REGISTRY_IMAGE/order:$CI_COMMIT_SHA -n $KUBERNETES_NAMESPACE
- kubectl rollout status deployment/auth-service -n $KUBERNETES_NAMESPACE
- kubectl rollout status deployment/product-service -n $KUBERNETES_NAMESPACE
- kubectl rollout status deployment/order-service -n $KUBERNETES_NAMESPACE
environment:
name: staging
only:
- develop
deploy_production:
stage: deploy
image: bitnami/kubectl:latest
script:
- kubectl set image deployment/auth-service auth-service=$CI_REGISTRY_IMAGE/auth:$CI_COMMIT_SHA -n production
- kubectl set image deployment/product-service product-service=$CI_REGISTRY_IMAGE/product:$CI_COMMIT_SHA -n production
- kubectl set image deployment/order-service order-service=$CI_REGISTRY_IMAGE/order:$CI_COMMIT_SHA -n production
environment:
name: production
when: manual
only:
- main
ArgoCD GitOps
# deployments/argocd/application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: microservices-app
namespace: argocd
spec:
project: default
source:
repoURL: https://gitlab.com/yourorg/microservices
targetRevision: HEAD
path: deployments/kubernetes
destination:
server: https://kubernetes.default.svc
namespace: microservices
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
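Registering the application and checking its sync state (this assumes the argocd CLI is installed and logged in):
# Register the app and trigger an initial sync
kubectl apply -f deployments/argocd/application.yaml
argocd app sync microservices-app
argocd app get microservices-app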
🔒 Security Best Practices
Network Policies
# deployments/kubernetes/network-policies.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: auth-service-netpol
spec:
podSelector:
matchLabels:
app: auth-service
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: api-gateway
- podSelector:
matchLabels:
app: order-service
ports:
- protocol: TCP
port: 3001
egress:
- to:
- podSelector:
matchLabels:
app: mongodb
ports:
- protocol: TCP
port: 27017
- to:
- podSelector:
matchLabels:
app: redis
ports:
- protocol: TCP
port: 6379
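A quick way to confirm the policy works is to curl the service from a throwaway pod that matches neither allowed selector and expect a timeout:
# This pod has no matching label, so the request should time out
kubectl run netpol-test --rm -it --image=curlimages/curl --restart=Never -- \
curl -m 5 http://auth-service:3001/health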
Pod Security Policies
Note: PodSecurityPolicy was deprecated in Kubernetes 1.21 and removed in 1.25, so the manifest below applies only to older clusters. On current clusters, enforce the same restrictions with Pod Security Admission labels, shown after the manifest.
# deployments/kubernetes/pod-security.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: restricted
spec:
privileged: false
allowPrivilegeEscalation: false
requiredDropCapabilities:
- ALL
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
- 'persistentVolumeClaim'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
rule: 'MustRunAsNonRoot'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'
fsGroup:
rule: 'RunAsAny'
readOnlyRootFilesystem: true
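On Kubernetes 1.25 and later, the equivalent of this restricted PSP is the "restricted" Pod Security Standard, enforced per namespace:
# Enforce the "restricted" Pod Security Standard on the app namespace
kubectl label namespace microservices \
pod-security.kubernetes.io/enforce=restricted \
pod-security.kubernetes.io/warn=restricted \
pod-security.kubernetes.io/audit=restricted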
🚨 Monitoring and Alerting
Prometheus Rules
# deployments/kubernetes/prometheus-rules.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: microservices-alerts
spec:
groups:
- name: microservices
interval: 30s
rules:
- alert: HighErrorRate
expr: |
sum(rate(http_requests_total{status=~"5.."}[5m])) by (service)
/
sum(rate(http_requests_total[5m])) by (service)
> 0.05
for: 5m
labels:
severity: critical
annotations:
summary: "High error rate on {{ $labels.service }}"
description: "{{ $labels.service }} has error rate of {{ $value }}"
- alert: HighLatency
expr: |
histogram_quantile(0.99, sum(rate(http_duration_seconds_bucket[5m])) by (service, le))
> 1
for: 5m
labels:
severity: warning
annotations:
summary: "High latency on {{ $labels.service }}"
description: "99th percentile latency is {{ $value }}s"
- alert: PodMemoryUsage
expr: |
container_memory_usage_bytes{pod!=""}
/
container_spec_memory_limit_bytes{pod!=""}
> 0.8
for: 5m
labels:
severity: warning
annotations:
summary: "High memory usage in pod {{ $labels.pod }}"
description: "Pod is using {{ $value | humanizePercentage }} of memory limit"
Grafana Dashboards
// deployments/kubernetes/grafana-dashboard.json
{
"dashboard": {
"title": "Microservices Overview",
"panels": [
{
"title": "Request Rate",
"targets": [
{
"expr": "sum(rate(http_requests_total[5m])) by (service)"
}
],
"type": "graph"
},
{
"title": "Error Rate",
"targets": [
{
"expr": "sum(rate(http_requests_total{status=~\"5..\"}[5m])) by (service)"
}
],
"type": "graph"
},
{
"title": "Response Time (p99)",
"targets": [
{
"expr": "histogram_quantile(0.99, sum(rate(http_duration_seconds_bucket[5m])) by (service, le))"
}
],
"type": "graph"
}
]
}
}
🎯 Best Practices
Microservices Design Principles
1. Single Responsibility
- ✅ Each service does one thing well
- ✅ Clear boundaries and interfaces
- ✅ Independent deployment
- ✅ Technology agnostic
- ✅ Decentralized data management
2. Communication Patterns
- ✅ RESTful APIs for synchronous communication
- ✅ Message queues for asynchronous communication
- ✅ Service mesh for observability
- ✅ Circuit breakers for resilience
- ✅ Rate limiting for stability
3. Data Management
- ✅ Database per service
- ✅ Event sourcing for audit trails
- ✅ CQRS for read/write separation
- ✅ Saga pattern for transactions
- ✅ Eventual consistency where appropriate
4. Security
- ✅ Zero trust networking
- ✅ mTLS between services
- ✅ OAuth2/OIDC for authentication
- ✅ RBAC for authorization
- ✅ Secrets management
5. Observability
- ✅ Distributed tracing
- ✅ Structured logging
- ✅ Metrics collection
- ✅ Health checks
- ✅ Performance monitoring
Deployment Strategies
# Blue-Green Deployment
apiVersion: v1
kind: Service
metadata:
name: my-app
spec:
selector:
app: my-app
version: green
ports:
- port: 80
targetPort: 8080
---
# Canary Deployment with Istio
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: my-app
spec:
hosts:
- my-app
http:
- match:
- headers:
canary:
exact: "true"
route:
- destination:
host: my-app
subset: v2
weight: 100
- route:
- destination:
host: my-app
subset: v1
weight: 90
- destination:
host: my-app
subset: v2
weight: 10
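With blue-green, the actual cutover is a selector change on the Service (the manifest above routes to green; patching it flips traffic to blue in one step), while the Istio canary is promoted by shifting the route weights:
# Cut traffic over from green to blue by repointing the selector
kubectl patch service my-app \
-p '{"spec":{"selector":{"app":"my-app","version":"blue"}}}'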
📚 Resources and Next Steps
Learning Path
- Container Orchestration - Deep dive into Kubernetes
- Service Mesh - Advanced Istio configurations
- Event-Driven Architecture - Kafka and event streaming
- Serverless - Functions as a Service (FaaS)
- Edge Computing - Distributed cloud-native apps
Useful Resources
- CNCF Cloud Native Landscape
- Microservices.io
- 12 Factor App
- Rocky Linux Documentation
- Kubernetes Documentation
Building cloud-native applications with microservices on Rocky Linux 9 provides a robust foundation for modern, scalable applications. From containerization to service mesh, from CI/CD to observability, this architecture enables rapid development and deployment while maintaining reliability and security. Remember that the journey to cloud-native is iterative – start small, measure everything, and continuously improve. Happy coding! ☁️