Setting Up Centralized Log Management with Elasticsearch on Rocky Linux 📊
rocky-linux elasticsearch elk-stack

Published Jul 15, 2025

Build a powerful centralized logging system using Elasticsearch, Logstash, and Kibana (ELK Stack) on Rocky Linux 9. Learn to collect, process, visualize, and analyze logs from multiple sources for enhanced monitoring and troubleshooting.

5 min read

In today’s distributed computing environments, managing logs from multiple servers, applications, and services is crucial for maintaining system health, troubleshooting issues, and ensuring security compliance. This comprehensive guide walks you through setting up a production-ready centralized log management system using the ELK Stack (Elasticsearch, Logstash, and Kibana) on Rocky Linux 9.

🎯 Understanding Centralized Log Management

Centralized logging aggregates log data from various sources into a single, searchable repository. This approach transforms how organizations monitor their infrastructure, moving from scattered log files to a unified view of system behavior.

Benefits of Centralized Logging

  • Unified View - Monitor all systems from a single dashboard 📊
  • Real-time Analysis - Detect issues as they happen ⚡
  • Historical Data - Analyze trends and patterns over time 📈
  • Faster Troubleshooting - Correlate events across systems 🔍
  • Compliance - Meet regulatory requirements for log retention 📋

📋 Prerequisites and Architecture Overview

System Requirements

For a production ELK Stack deployment on Rocky Linux 9:

# Minimum requirements per node
- Elasticsearch: 4 GB RAM, 2 CPU cores, 50 GB storage
- Logstash: 2 GB RAM, 2 CPU cores, 20 GB storage
- Kibana: 2 GB RAM, 1 CPU core, 10 GB storage

# Recommended for production
- Elasticsearch: 16 GB RAM, 8 CPU cores, 500 GB SSD
- Logstash: 8 GB RAM, 4 CPU cores, 100 GB storage
- Kibana: 4 GB RAM, 2 CPU cores, 50 GB storage

Architecture Components

# Typical ELK Stack flow
Log Sources → Beats/Agents → Logstash → Elasticsearch → Kibana

Applications    Collectors     Processing   Storage        Visualization

🔧 Preparing Rocky Linux 9

System Preparation

# Update system packages
sudo dnf update -y

# Install essential tools
sudo dnf install -y \
  java-11-openjdk \
  java-11-openjdk-devel \
  curl \
  wget \
  vim \
  net-tools \
  gnupg2

# Verify Java installation
java -version

# Set JAVA_HOME
echo 'export JAVA_HOME=/usr/lib/jvm/java-11-openjdk' >> ~/.bashrc
echo 'export PATH=$JAVA_HOME/bin:$PATH' >> ~/.bashrc
source ~/.bashrc

System Optimization

# Disable swap for Elasticsearch
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Increase virtual memory
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# Configure file descriptors
echo "* soft nofile 65536" | sudo tee -a /etc/security/limits.conf
echo "* hard nofile 65536" | sudo tee -a /etc/security/limits.conf
echo "* soft nproc 4096" | sudo tee -a /etc/security/limits.conf
echo "* hard nproc 4096" | sudo tee -a /etc/security/limits.conf

📦 Installing Elasticsearch

Add Elasticsearch Repository

# Import GPG key
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

# Create repository file
sudo tee /etc/yum.repos.d/elasticsearch.repo << EOF
[elasticsearch-8.x]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF

Install and Configure Elasticsearch

# Install Elasticsearch
sudo dnf install -y elasticsearch

# Backup original configuration
sudo cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.orig

# Configure Elasticsearch
sudo tee /etc/elasticsearch/elasticsearch.yml << EOF
# Cluster settings
cluster.name: rocky-linux-logs
node.name: elk-node-1

# Network settings
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node

# Path settings
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch

# Memory settings
bootstrap.memory_lock: true

# Security settings (disable for initial setup)
xpack.security.enabled: false
xpack.security.enrollment.enabled: false
EOF

JVM Heap Configuration

# Set heap size (50% of available RAM, max 32GB) using a jvm.options.d override
sudo tee /etc/elasticsearch/jvm.options.d/heap.options << EOF
-Xms4g
-Xmx4g
EOF

Start Elasticsearch Service

# Enable and start Elasticsearch
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch

# Verify Elasticsearch is running
curl -X GET "localhost:9200/"

# Check cluster health
curl -X GET "localhost:9200/_cluster/health?pretty"

🔄 Installing Logstash

Install Logstash

# Install Logstash from repository
sudo dnf install -y logstash

# Create Logstash pipeline directory
sudo mkdir -p /etc/logstash/conf.d

Configure Logstash Input

# /etc/logstash/conf.d/01-input.conf
input {
  # Beats input for Filebeat/Metricbeat
  beats {
    port => 5044
    ssl => false
  }
  
  # Syslog input
  syslog {
    port => 5514
    type => "syslog"
  }
  
  # HTTP input for applications
  http {
    port => 8080
    codec => json
  }
  
  # File input for local logs
  file {
    path => "/var/log/messages"
    type => "syslog"
    start_position => "beginning"
  }
}
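
Once Logstash is running (it is started a few steps below), the http input defined above can be smoke-tested with a simple curl; the payload fields here are just an example:

# Send a test JSON event to the http input on port 8080
curl -X POST "http://localhost:8080" \
  -H 'Content-Type: application/json' \
  -d '{"message": "test event from curl", "level": "INFO"}'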

Configure Logstash Filters

# /etc/logstash/conf.d/02-filter.conf
filter {
  # Parse syslog messages
  if [type] == "syslog" {
    grok {
      match => { 
        "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:hostname} %{DATA:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:message}" 
      }
      overwrite => [ "message" ]
    }
    
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      target => "@timestamp"
    }
  }
  
  # Parse Apache access logs
  if [type] == "apache-access" {
    grok {
      match => { 
        "message" => "%{COMBINEDAPACHELOG}" 
      }
    }
    
    geoip {
      source => "clientip"
      target => "geoip"
    }
  }
  
  # Parse JSON logs
  if [type] == "json" {
    json {
      source => "message"
    }
  }
  
  # Add metadata
  mutate {
    add_field => {
      "environment" => "production"
      "datacenter" => "rocky-linux-dc1"
    }
  }
}
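
The syslog grok pattern can also be tested in isolation with a throwaway stdin pipeline before reloading the full configuration; the sample log line below is illustrative, and a separate data path avoids clashing with the running service:

# One-off pipeline to test the syslog grok pattern
echo 'Jul 15 10:00:00 web01 sshd[1234]: Accepted publickey for admin from 10.0.0.5' | \
  sudo /usr/share/logstash/bin/logstash --path.data /tmp/logstash-test -e '
    input { stdin { } }
    filter {
      grok {
        match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:hostname} %{DATA:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:message}" }
        overwrite => [ "message" ]
      }
    }
    output { stdout { codec => rubydebug } }'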

Configure Logstash Output

# /etc/logstash/conf.d/03-output.conf
output {
  # Send to Elasticsearch
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logs-%{type}-%{+YYYY.MM.dd}"
    template_overwrite => true
  }
  
  # Debug output (disable in production)
  stdout { 
    codec => rubydebug 
  }
}

Start Logstash Service

# Test configuration
sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/

# Enable and start Logstash
sudo systemctl enable logstash
sudo systemctl start logstash

# Monitor Logstash logs
sudo journalctl -u logstash -f

📊 Installing Kibana

Install and Configure Kibana

# Install Kibana
sudo dnf install -y kibana

# Configure Kibana
sudo tee /etc/kibana/kibana.yml << EOF
# Server settings
server.port: 5601
server.host: "0.0.0.0"
server.name: "rocky-linux-kibana"

# Elasticsearch settings
elasticsearch.hosts: ["http://localhost:9200"]

# Logging settings (Kibana 8.x appender syntax)
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders: [default, file]

# No security settings are needed here while X-Pack security is disabled in Elasticsearch
EOF

Create Kibana Log Directory

# Create log directory
sudo mkdir -p /var/log/kibana
sudo chown kibana:kibana /var/log/kibana

Start Kibana Service

# Enable and start Kibana
sudo systemctl enable kibana
sudo systemctl start kibana

# Verify Kibana is running
curl -I http://localhost:5601

🔒 Configuring Firewall

# Configure firewall for ELK Stack
sudo firewall-cmd --permanent --add-port=9200/tcp  # Elasticsearch
sudo firewall-cmd --permanent --add-port=5601/tcp  # Kibana
sudo firewall-cmd --permanent --add-port=5044/tcp  # Beats
sudo firewall-cmd --permanent --add-port=5514/tcp  # Syslog
sudo firewall-cmd --permanent --add-port=8080/tcp  # HTTP input

# Reload firewall
sudo firewall-cmd --reload

# Verify open ports
sudo firewall-cmd --list-all

📡 Installing Beats Agents

Filebeat Installation

# Install Filebeat on client systems
sudo dnf install -y filebeat

# Configure Filebeat
sudo tee /etc/filebeat/filebeat.yml << EOF
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
    - /var/log/messages
    - /var/log/secure
  exclude_files: ['.gz$']
  
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    type: nginx-access
  fields_under_root: true   # expose "type" at the top level so Logstash can match [type]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  fields:
    type: nginx-error
  fields_under_root: true

output.logstash:
  hosts: ["your-logstash-server:5044"]

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
EOF

# Enable and start Filebeat
sudo systemctl enable filebeat
sudo systemctl start filebeat
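
Filebeat ships with built-in self-tests that are worth running before trusting the pipeline (the output test needs the Logstash server on port 5044 to be reachable):

# Validate the Filebeat configuration and connectivity to Logstash
sudo filebeat test config
sudo filebeat test output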

Metricbeat Installation

# Install Metricbeat for system metrics
sudo dnf install -y metricbeat

# Configure Metricbeat
sudo tee /etc/metricbeat/metricbeat.yml << EOF
metricbeat.config.modules:
  path: \${path.config}/modules.d/*.yml
  reload.enabled: true

metricbeat.modules:
- module: system
  metricsets:
    - cpu
    - memory
    - network
    - process
    - process_summary
    - socket_summary
    - filesystem
    - fsstat
    - uptime
  enabled: true
  period: 10s
  processes: ['.*']

output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "metrics-%{+yyyy.MM.dd}"

# Required whenever the default index name is overridden
setup.template.name: "metrics"
setup.template.pattern: "metrics-*"
setup.ilm.enabled: false

setup.kibana:
  host: "localhost:5601"
EOF

# Enable system module
sudo metricbeat modules enable system

# Start Metricbeat
sudo systemctl enable metricbeat
sudo systemctl start metricbeat
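
After a minute or so, the daily metrics index should appear in Elasticsearch; a quick check with the _cat and _count APIs:

# Confirm metric documents are being indexed
curl "localhost:9200/_cat/indices/metrics-*?v"
curl "localhost:9200/metrics-*/_count?pretty"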

🎨 Creating Kibana Dashboards

Access Kibana Interface

# Access Kibana web interface
http://your-server-ip:5601

# Default credentials (if security enabled)
Username: elastic
Password: [check during installation]

Creating Index Patterns

  1. Navigate to Stack Management → Data Views (called Index Patterns in earlier releases)
  2. Create patterns for the following (the API sketch after this list shows a scripted alternative):
    • logs-* for all log data
    • metrics-* for system metrics
    • logs-nginx-* for nginx specific logs

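The same patterns can be created programmatically through the Kibana data views API (Kibana 8.x), which is handy for automation; the example below assumes security is still disabled:

# Create the logs-* data view via the Kibana API
curl -X POST "http://localhost:5601/api/data_views/data_view" \
  -H 'kbn-xsrf: true' \
  -H 'Content-Type: application/json' \
  -d '{"data_view": {"title": "logs-*", "timeFieldName": "@timestamp"}}'
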
Building Visualizations

// Example visualization configuration
{
  "version": "8.11.0",
  "objects": [
    {
      "id": "log-count-timeline",
      "type": "visualization",
      "attributes": {
        "title": "Log Count Timeline",
        "visState": {
          "type": "line",
          "params": {
            "grid": { "categoryLines": false },
            "categoryAxes": [{
              "id": "CategoryAxis-1",
              "type": "category",
              "position": "bottom",
              "show": true,
              "style": {},
              "scale": { "type": "linear" },
              "labels": { "show": true, "truncate": 100 },
              "title": {}
            }],
            "valueAxes": [{
              "id": "ValueAxis-1",
              "name": "LeftAxis-1",
              "type": "value",
              "position": "left",
              "show": true,
              "style": {},
              "scale": { "type": "linear", "mode": "normal" },
              "labels": { "show": true, "rotate": 0, "filter": false, "truncate": 100 },
              "title": { "text": "Log Count" }
            }]
          }
        }
      }
    }
  ]
}

Creating Dashboards

  1. System Overview Dashboard

    • CPU and Memory usage
    • Disk I/O statistics
    • Network traffic
    • Process count
  2. Security Dashboard

    • Failed login attempts
    • SSH access patterns
    • Firewall blocks
    • Sudo usage
  3. Application Dashboard

    • Request rates
    • Error rates
    • Response times
    • Geographic distribution

🔍 Advanced Log Processing

Enriching Logs with GeoIP

# Add to Logstash filter configuration
filter {
  if [clientip] {
    geoip {
      source => "clientip"
      target => "geoip"
      database => "/etc/logstash/GeoLite2-City.mmdb"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}

Custom Parsing Patterns

# Create custom patterns
sudo mkdir -p /etc/logstash/patterns

# Custom pattern file
echo 'NGINX_ERROR_LOG %{DATA:timestamp} \[%{LOGLEVEL:severity}\] %{POSINT:pid}#%{POSINT:tid}: %{GREEDYDATA:message}' | sudo tee /etc/logstash/patterns/nginx

# Use in Logstash
filter {
  if [type] == "nginx-error" {
    grok {
      patterns_dir => ["/etc/logstash/patterns"]
      match => { "message" => "%{NGINX_ERROR_LOG}" }
    }
  }
}

🛡️ Security and Authentication

Enable Security Features

# Set passwords for the built-in users (Elasticsearch 8.x; replaces elasticsearch-setup-passwords)
sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system

# Save the generated passwords securely

Configure SSL/TLS

# Generate SSL certificates
sudo /usr/share/elasticsearch/bin/elasticsearch-certutil ca
sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

# Configure Elasticsearch for SSL (append to elasticsearch.yml)
sudo tee -a /etc/elasticsearch/elasticsearch.yml << EOF
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
EOF

Configure Nginx Reverse Proxy

# /etc/nginx/conf.d/kibana.conf
server {
    listen 80;
    server_name logs.example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name logs.example.com;
    
    ssl_certificate /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;
    
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
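
On Rocky Linux, SELinux blocks Nginx from proxying to local ports by default, so the reverse proxy needs one extra boolean in addition to installing and enabling the service:

# Install Nginx, allow it to connect to Kibana under SELinux, and start it
sudo dnf install -y nginx
sudo setsebool -P httpd_can_network_connect 1
sudo nginx -t
sudo systemctl enable --now nginx

# With the proxy in place, expose HTTPS instead of port 5601
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload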

📈 Performance Optimization

Elasticsearch Tuning

# Index-level settings for a logging workload
# (apply via the _settings API or an index template, not elasticsearch.yml)
index.refresh_interval: "30s"
index.number_of_replicas: 0  # For single node
index.translog.durability: "async"
index.translog.sync_interval: "30s"

# Bulk indexing optimization (elasticsearch.yml; the bulk pool is named "write" in current versions)
thread_pool.write.queue_size: 1000
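
The index-level settings above can be applied to existing indices with the _settings API (or baked into an index template so new daily indices pick them up automatically); a minimal example:

# Apply logging-friendly settings to existing log indices
curl -X PUT "localhost:9200/logs-*/_settings" -H 'Content-Type: application/json' -d'
{
  "index": {
    "refresh_interval": "30s",
    "number_of_replicas": 0
  }
}'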

Index Lifecycle Management

# Create ILM policy
curl -X PUT "localhost:9200/_ilm/policy/logs_policy" -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50GB",
            "max_age": "7d"
          },
          "set_priority": {
            "priority": 100
          }
        }
      },
      "warm": {
        "min_age": "7d",
        "actions": {
          "shrink": {
            "number_of_shards": 1
          },
          "forcemerge": {
            "max_num_segments": 1
          },
          "set_priority": {
            "priority": 50
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}'
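
For the policy to take effect, attach it to future indices through an index template; a minimal sketch matching the logs-* naming used earlier:

# Attach the ILM policy to new log indices via a composable index template
curl -X PUT "localhost:9200/_index_template/logs_template" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["logs-*"],
  "template": {
    "settings": {
      "index.lifecycle.name": "logs_policy",
      "index.lifecycle.rollover_alias": "logs"
    }
  }
}'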

🚨 Monitoring and Alerting

Setting Up Watcher

{
  "trigger": {
    "schedule": {
      "interval": "5m"
    }
  },
  "input": {
    "search": {
      "request": {
        "indices": ["logs-*"],
        "body": {
          "query": {
            "bool": {
              "must": [
                { "match": { "level": "ERROR" }},
                { "range": { "@timestamp": { "gte": "now-5m" }}}
              ]
            }
          }
        }
      }
    }
  },
  "condition": {
    "compare": {
      "ctx.payload.hits.total": {
        "gt": 10
      }
    }
  },
  "actions": {
    "send_email": {
      "email": {
        "to": "[email protected]",
        "subject": "High Error Rate Alert",
        "body": "More than 10 errors in the last 5 minutes"
      }
    }
  }
}
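
The watch definition above is registered with the Watcher API (note that Watcher requires a non-basic Elastic license); assuming it is saved locally as error-rate-watch.json:

# Register the watch (the file name is an example)
curl -X PUT "localhost:9200/_watcher/watch/high_error_rate" \
  -H 'Content-Type: application/json' \
  -d @error-rate-watch.json

# Inspect the watch and its status
curl "localhost:9200/_watcher/watch/high_error_rate?pretty"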

Integration with External Tools

# Configure Logstash to send alerts to Slack
output {
  if [level] == "CRITICAL" {
    http {
      url => "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
      format => "json"
      mapping => {
        "text" => "Critical Error: %{message}"
        "channel" => "#alerts"
      }
    }
  }
}

🔧 Troubleshooting Common Issues

Elasticsearch Not Starting

# Check logs
sudo journalctl -u elasticsearch -n 100

# Common fixes
# 1. Memory lock issues
sudo systemctl edit elasticsearch
# Add:
[Service]
LimitMEMLOCK=infinity

# 2. Heap size issues
# Ensure heap size is set correctly in jvm.options

# 3. Permission issues
sudo chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
sudo chown -R elasticsearch:elasticsearch /var/log/elasticsearch

Logstash Pipeline Errors

# Test configuration
sudo -u logstash /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/

# Debug mode
sudo -u logstash /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/ --debug

# Check for syntax errors
sudo -u logstash /usr/share/logstash/bin/logstash --config.test_and_exit

Kibana Connection Issues

# Verify Elasticsearch is accessible
curl -X GET "localhost:9200/_cluster/health?pretty"

# Check Kibana logs
sudo tail -f /var/log/kibana/kibana.log

# Test connectivity
telnet localhost 9200

🎯 Best Practices

Log Retention Strategy

  1. Hot Data (0-7 days): Fast SSD storage, full indexing
  2. Warm Data (7-30 days): Slower storage, reduced shards
  3. Cold Data (30-90 days): Archive storage, searchable snapshots
  4. Frozen Data (90+ days): Object storage, rarely accessed

Security Recommendations

  • Enable authentication and SSL/TLS
  • Use role-based access control (RBAC)
  • Implement network segmentation
  • Regular security updates
  • Audit log access

Backup Strategy

# Configure snapshot repository
curl -X PUT "localhost:9200/_snapshot/backup" -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": {
    "location": "/mnt/backups/elasticsearch"
  }
}'

# Create snapshot
curl -X PUT "localhost:9200/_snapshot/backup/snapshot_1?wait_for_completion=true"

📚 Next Steps and Resources

Advanced Topics

  1. Machine Learning - Anomaly detection in logs
  2. APM Integration - Application performance monitoring
  3. Multi-cluster Setup - Geographic distribution
  4. Custom Plugins - Extend functionality
  5. API Integration - Programmatic log analysis

Setting up centralized log management with Elasticsearch on Rocky Linux 9 provides a robust foundation for monitoring and analyzing your infrastructure. Start with basic log collection, gradually add more sources, and customize the system to meet your specific needs. Remember that effective log management is an iterative process – continuously refine your filters, dashboards, and alerts based on your operational requirements. Happy logging! 📊