Linux Database Administration: The Complete 2026 Guide

Linux database administration remains a critical skill for system administrators and DevOps engineers in 2026. With data volumes growing exponentially and uptime requirements becoming increasingly stringent, managing MySQL and PostgreSQL databases on Linux servers requires both traditional expertise and modern automation techniques.

This comprehensive guide covers everything from initial installation to advanced optimization, security hardening, and high availability configuration. Whether you are managing a single database server or a complex multi-node cluster, these practices will help you maintain reliable, performant, and secure database infrastructure.

Choosing Between MySQL and PostgreSQL on Linux

Both MySQL and PostgreSQL are mature, production-ready database systems with strong Linux support. Your choice depends on specific requirements:

MySQL: Best for Web Applications and Read-Heavy Workloads

MySQL excels in scenarios requiring:

  • High read performance: Excellent for content management systems, e-commerce platforms
  • Simple replication: Straightforward master-slave and master-master configurations
  • Wide ecosystem: Extensive tooling and hosting provider support
  • Memory efficiency: Lower resource requirements for small to medium deployments

PostgreSQL: Best for Complex Queries and Data Integrity

PostgreSQL is the better choice when you need:

  • Advanced SQL features: Window functions, CTEs, complex joins
  • Data integrity: Strict ACID compliance, advanced constraint support
  • Extensibility: Custom data types, stored procedures in multiple languages
  • JSON/NoSQL features: Native JSON support with indexing

Installation and Initial Configuration

Installing MySQL on Ubuntu 24.04

sudo apt update
sudo apt install mysql-server-8.0
sudo mysql_secure_installation

The mysql_secure_installation script configures basic security settings including root password, remote access restrictions, and test database removal.

Installing PostgreSQL on Ubuntu 24.04

sudo apt update
sudo apt install postgresql postgresql-contrib
sudo systemctl enable postgresql

PostgreSQL creates a default postgres user during installation. Use sudo -u postgres psql to access the database prompt.
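
To confirm the service is up and create a first database, a couple of quick checks as the postgres user (a minimal sketch; the database name mydb is just an example):

sudo -u postgres psql -c "SELECT version();"
sudo -u postgres createdb mydb
sudo -u postgres psql -d mydb -c "\conninfo"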

Installing on Red Hat Enterprise Linux 9

For RHEL-based systems, the installation differs slightly:

MySQL on RHEL 9:

sudo dnf install mysql-server
sudo systemctl start mysqld
sudo systemctl enable mysqld
sudo mysql_secure_installation

PostgreSQL on RHEL 9:

sudo dnf install postgresql-server postgresql-contrib
sudo postgresql-setup --initdb
sudo systemctl start postgresql
sudo systemctl enable postgresql

Essential Linux Database Administration Tasks

User Management and Security

MySQL User Creation:

CREATE USER "appuser"@"localhost" IDENTIFIED BY "strong_password";
GRANT SELECT, INSERT, UPDATE ON mydb.* TO "appuser"@"localhost";
FLUSH PRIVILEGES;

PostgreSQL User Creation:

CREATE USER appuser WITH PASSWORD 'strong_password';
GRANT CONNECT ON DATABASE mydb TO appuser;
GRANT USAGE ON SCHEMA public TO appuser;
GRANT SELECT, INSERT, UPDATE ON ALL TABLES IN SCHEMA public TO appuser;
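
Note that GRANT ... ON ALL TABLES only covers tables that already exist. For tables created later, default privileges need to be granted as well; a minimal sketch from the shell, assuming new tables will be created by the postgres role:

sudo -u postgres psql -d mydb -c "ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE ON TABLES TO appuser;"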

Backup Strategies

MySQL Backup with mysqldump:

mysqldump -u root -p --all-databases > full_backup_$(date +%Y%m%d).sql

For large databases, use Percona XtraBackup for hot backups without locking tables.
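
A typical XtraBackup run takes the backup and then prepares it so it is consistent; a minimal sketch (the target directory and password are placeholders):

# Take a hot backup, then "prepare" it so it is consistent and restorable
sudo xtrabackup --backup --user=root --password='...' --target-dir=/var/backups/xtrabackup/full
sudo xtrabackup --prepare --target-dir=/var/backups/xtrabackup/full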

PostgreSQL Backup with pg_dump:

pg_dump -U postgres -Fc mydb > mydb_$(date +%Y%m%d).dump

The custom format (-Fc) enables selective restore and compression.
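
The matching restore tool is pg_restore, which can also pull a single table out of the same dump; a minimal sketch (the table name orders is just an example):

# Full restore into an existing database
pg_restore -U postgres -d mydb --clean mydb_20260220.dump

# Selective restore of a single table
pg_restore -U postgres -d mydb -t orders mydb_20260220.dump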

Automated Backup Scripts

Create a cron job for daily automated backups:

#!/bin/bash
# /etc/cron.daily/database-backup
BACKUP_DIR=/var/backups/databases
DATE=$(date +%Y%m%d)
mkdir -p "$BACKUP_DIR"

# MySQL backup (expects MYSQL_ROOT_PASSWORD in the environment;
# a root-only ~/.my.cnf credentials file is a safer alternative)
mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases | gzip > "$BACKUP_DIR/mysql_$DATE.sql.gz"

# PostgreSQL backup
sudo -u postgres pg_dumpall | gzip > "$BACKUP_DIR/postgres_$DATE.sql.gz"

# Keep only 7 days of backups
find "$BACKUP_DIR" -type f -mtime +7 -delete
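
Scripts in /etc/cron.daily/ must be executable to be picked up by run-parts (which also skips file names containing a dot on most distributions):

sudo chmod 755 /etc/cron.daily/database-backup
sudo run-parts --test /etc/cron.daily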

Performance Optimization

MySQL Performance Tuning

Edit /etc/mysql/mysql.conf.d/mysqld.cnf:

# InnoDB buffer pool (roughly 70-80% of RAM on a dedicated database server)
innodb_buffer_pool_size = 4G

# Connection settings
max_connections = 200
wait_timeout = 600

# Query cache: removed in MySQL 8.0; use ProxySQL or application-level caching instead
# Logging
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 2
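
These settings take effect after a restart; a quick way to apply and verify them (the service is named mysqld on RHEL-based systems):

sudo systemctl restart mysql
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
mysql -u root -p -e "SHOW VARIABLES LIKE 'slow_query_log%';"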

PostgreSQL Performance Tuning

Edit /etc/postgresql/16/main/postgresql.conf:

# Memory settings
shared_buffers = 4GB
effective_cache_size = 12GB
work_mem = 256MB
maintenance_work_mem = 512MB

# Connection settings
max_connections = 200

# Write-ahead logging
wal_buffers = 16MB
checkpoint_completion_target = 0.9

# Query planner
random_page_cost = 1.1
effective_io_concurrency = 200
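
Most of these parameters can be changed with a reload, but shared_buffers requires a full restart; a quick way to apply and confirm the values:

sudo systemctl restart postgresql
sudo -u postgres psql -c "SHOW shared_buffers;"
sudo -u postgres psql -c "SHOW effective_cache_size;"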

Index Optimization

Proper indexing is crucial for database performance:

MySQL Index Management:

# Create index
CREATE INDEX idx_users_email ON users(email);

# Analyze query performance
EXPLAIN SELECT * FROM users WHERE email = 'user@example.com';

# Check for unused indexes
SELECT * FROM sys.schema_unused_indexes;

PostgreSQL Index Management:

-- Create a btree index without blocking writes
CREATE INDEX CONCURRENTLY idx_users_email ON users USING btree(email);

-- Check for unused indexes
SELECT * FROM pg_stat_user_indexes WHERE idx_scan = 0;

-- Create partial index
CREATE INDEX idx_active_users ON users(email) WHERE status = 'active';

Monitoring and Alerting

Essential Metrics to Monitor

  • Connection counts: Alert when approaching max_connections (a quick shell check follows this list)
  • Query performance: Track slow queries and execution times
  • Replication lag: Critical for high availability setups
  • Disk space: Database growth can rapidly consume storage
  • Lock waits: Indication of contention issues
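
Before a full monitoring stack is in place, connection counts can be checked directly from the shell (a minimal sketch):

# Current MySQL connections vs. the configured limit
mysql -u root -p -e "SHOW STATUS LIKE 'Threads_connected'; SHOW VARIABLES LIKE 'max_connections';"

# Current PostgreSQL connections
sudo -u postgres psql -c "SELECT count(*) AS connections FROM pg_stat_activity;"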

Setting Up Prometheus Monitoring

# Install mysqld_exporter for MySQL
wget https://github.com/prometheus/mysqld_exporter/releases/download/v0.15.0/mysqld_exporter-0.15.0.linux-amd64.tar.gz
tar xvf mysqld_exporter-*.tar.gz
sudo cp mysqld_exporter-*/mysqld_exporter /usr/local/bin/

Configure Prometheus to scrape database metrics and set up Grafana dashboards for visualization.
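
The exporter needs credentials to reach MySQL; a common pattern is a dedicated monitoring user plus a small credentials file (a minimal sketch; the user name, password, and file path are examples):

# Dedicated monitoring user with only the privileges the exporter needs
mysql -u root -p -e "CREATE USER 'exporter'@'localhost' IDENTIFIED BY 'exporter_password'; GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'localhost';"

# Credentials file read by the exporter
printf '[client]\nuser=exporter\npassword=exporter_password\n' | sudo tee /etc/mysqld_exporter.cnf

# Start the exporter (listens on port 9104 by default)
mysqld_exporter --config.my-cnf=/etc/mysqld_exporter.cnf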

Log Analysis

Regular log analysis helps identify performance issues:

MySQL Slow Query Log Analysis:

pt-query-digest /var/log/mysql/slow.log > slow_query_report.txt

PostgreSQL Log Analysis:

pgbadger /var/log/postgresql/postgresql-16-main.log -o report.html
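
pgBadger is only as useful as the logging it is fed; at a minimum, slow statements should be logged. A minimal sketch using ALTER SYSTEM, with a one-second threshold as an example:

sudo -u postgres psql -c "ALTER SYSTEM SET log_min_duration_statement = '1s';"
sudo -u postgres psql -c "SELECT pg_reload_conf();"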

High Availability Configuration

MySQL Master-Slave Replication

Master Configuration:

server-id = 1
log_bin = /var/log/mysql/mysql-bin
binlog_do_db = mydb

Slave Configuration:

server-id = 2
relay_log = /var/log/mysql/mysql-relay-bin
log_bin = /var/log/mysql/mysql-bin
read_only = 1

Set up replication with:

-- Use the file and position reported by SHOW MASTER STATUS on the master
CHANGE MASTER TO
MASTER_HOST='master_host',
MASTER_USER='replica',
MASTER_PASSWORD='password',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=4;
START SLAVE;
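
Replication health can then be checked on the replica; both replication threads should report Yes and the lag should stay near zero:

mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'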

PostgreSQL Streaming Replication

Use Patroni with etcd for automated failover and leader election:

pip install 'patroni[etcd]'
patroni /etc/patroni.yml

Patroni handles failover automatically and provides a REST API for monitoring.
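
Cluster state can be inspected with patronictl, which ships with Patroni (assuming the configuration file path from above):

patronictl -c /etc/patroni.yml list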

Security Hardening

Database Server Security

  • Firewall rules: Restrict database port access (3306 for MySQL, 5432 for PostgreSQL) to application servers only (see the example after this list)
  • SSL/TLS: Enforce encrypted connections
  • Strong passwords: Implement password policies and rotation
  • Regular updates: Keep database software patched
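
For the firewall rule above, a minimal sketch with ufw (Ubuntu) and firewalld (RHEL), assuming application servers sit in 10.0.1.0/24 and PostgreSQL on port 5432:

# Ubuntu / ufw
sudo ufw allow from 10.0.1.0/24 to any port 5432 proto tcp

# RHEL / firewalld
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.0.1.0/24" port protocol="tcp" port="5432" accept'
sudo firewall-cmd --reload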

Data Encryption

MySQL Transparent Data Encryption (TDE):

TDE in MySQL 8.0 requires a keyring component to be configured first. With one in place, the following settings encrypt new tablespaces plus the redo and undo logs:

default_table_encryption = ON
innodb_redo_log_encrypt = ON
innodb_undo_log_encrypt = ON

PostgreSQL pgcrypto Extension:

CREATE EXTENSION pgcrypto;
INSERT INTO users (username, password) VALUES ($1, crypt($2, gen_salt('bf')));
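
Verifying a password later reuses the stored hash as the salt; a minimal sketch from the shell (table and column names follow the example above, and candidate_password is a placeholder):

sudo -u postgres psql -d mydb -c "SELECT (password = crypt('candidate_password', password)) AS password_matches FROM users WHERE username = 'appuser';"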

Network Security

Configure database binding to specific interfaces:

MySQL:

bind-address = 10.0.1.10
skip-networking = 0

PostgreSQL:

listen_addresses = "localhost,10.0.1.10"
ssl = on
ssl_cert_file = "/etc/ssl/certs/server.crt"
ssl_key_file = "/etc/ssl/private/server.key"

Disaster Recovery Planning

A robust disaster recovery strategy includes:

  • Point-in-time recovery: Binary log archiving for MySQL, WAL archiving for PostgreSQL
  • Off-site backups: Automated replication to secondary location
  • Recovery testing: Regular restore drills to verify backup integrity
  • Documentation: Clear runbooks for failover procedures

Recovery Procedures

MySQL Point-in-Time Recovery:

# Restore full backup
mysql -u root -p < full_backup_20260220.sql

# Apply binary logs up to a specific point in time
mysqlbinlog --stop-datetime="2026-02-20 14:00:00" mysql-bin.000001 mysql-bin.000002 | mysql -u root -p

PostgreSQL Point-in-Time Recovery:

# Restore the base backup into the stopped server's data directory
tar -xzf base_backup.tar.gz -C /var/lib/postgresql/16/main/

# Configure recovery (PostgreSQL 12+ replaced recovery.conf with settings
# in postgresql.conf plus a recovery.signal file in the data directory)
echo "restore_command = 'cp /var/lib/postgresql/archive/%f %p'" | sudo tee -a /etc/postgresql/16/main/postgresql.conf
echo "recovery_target_time = '2026-02-20 14:00:00'" | sudo tee -a /etc/postgresql/16/main/postgresql.conf
sudo touch /var/lib/postgresql/16/main/recovery.signal

Conclusion

Effective Linux database administration in 2026 requires a combination of traditional database management skills and modern DevOps practices. Automation, monitoring, and proactive optimization are essential for maintaining reliable database infrastructure.

Whether you choose MySQL or PostgreSQL, the principles remain the same: secure your data, optimize performance, plan for failure, and automate repetitive tasks. Master these fundamentals, and you will be well-equipped to handle any database administration challenge.

Remember that database administration is not a set-and-forget task—it requires continuous learning, monitoring, and adaptation to changing requirements. Stay current with security patches, performance best practices, and new features to ensure your database infrastructure remains robust and efficient.