Leveling Up: From Docker Compose to Kubernetes and Helm with WordPress

Congratulations, container enthusiast! You've mastered the art of `docker-compose up -d` and can spin up multi-container applications with a single YAML file. But as your application grows, you've started noticing the limitations. Your deployment needs to scale, handle traffic spikes, and recover gracefully from failures, all things Docker Compose wasn't really designed to handle. This guide will walk you through the natural evolution from Docker Compose to Kubernetes manifests and finally to Helm charts, using WordPress as our example application. By the end, you'll understand not just how to deploy applications in Kubernetes, but why this evolution matters for production readiness.
Docker Compose and Its Limitations
Quick Overview
Docker Compose has become a beloved tool in the developer toolkit for good reason. It provides a simple, declarative way to define multi-container applications using a single YAML file:
```yaml
version: '3'
services:
  wordpress:
    image: wordpress:latest
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      # Must match MYSQL_ROOT_PASSWORD below: with no WORDPRESS_DB_USER set,
      # the wordpress image connects to the database as root.
      WORDPRESS_DB_PASSWORD: rootpassword
    volumes:
      - wordpress_data:/var/www/html
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_ROOT_PASSWORD: rootpassword
    volumes:
      - db_data:/var/lib/mysql
volumes:
  wordpress_data:
  db_data:
```
The beauty of Docker Compose lies in its simplicity, declarative structure, and ease of use. With just this file, a single `docker-compose up -d` command brings your WordPress site to life. It's elegant and works beautifully... until it doesn't.
The Catch
While Docker Compose shines for development, it has significant limitations in production environments:
- Single-host architecture: Docker Compose was designed to run containers on a single machine, making it impossible to distribute workloads across multiple servers for high availability.
- Limited fault tolerance: When your single machine goes down, all your services go down with it. There's no built-in recovery mechanism.
- Manual scaling: Although you can scale services with `docker-compose up --scale service=3`, this is a manual process that requires intervention.
- Lack of native orchestration: Docker Compose lacks critical orchestration features like automatic load balancing, self-healing, and efficient cross-server scaling.
- Environment mismatch: Running containers with Docker Compose locally differs significantly from how they're typically managed in production, which increases the likelihood of "works on my machine" problems.
Docker Compose is like a good tent: perfect for a weekend camping trip, not so great for building a permanent residence. It's an excellent development tool but wasn't designed for the complexities of production environments.
Step Into Kubernetes: The Real Orchestrator
What Is Kubernetes (K8s)?
Kubernetes is an open-source platform designed to automate deploying, scaling, and managing containerized applications. It groups containers into logical units for easy management and discovery.
Key features that set Kubernetes apart include:
- Multi-node cluster management: Unlike Docker Compose, Kubernetes is built to manage containers across multiple machines, forming a "cluster."
- Self-healing: Kubernetes automatically replaces failed containers, reschedules workloads when nodes become unavailable, and ensures that the desired state of the system is maintained.
- Auto-scaling: Automatically scales your applications based on CPU usage or custom metrics.
- Service discovery and load balancing: Assigns a single DNS name to a set of containers and can load-balance across them.
- Declarative configuration: You specify the desired state, and Kubernetes works to ensure the actual state matches it.
- RBAC and namespace isolation: Provides security and multi-tenancy capabilities.
- Storage orchestration: Mounts the storage system of your choice, whether local storage, public cloud providers, or network storage systems.
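The auto-scaling feature above is driven by a HorizontalPodAutoscaler resource. Here is a minimal sketch, assuming a Deployment named `wordpress` (like the one created later in this guide) and a metrics-server running in the cluster:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: wordpress-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: wordpress          # the Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add Pods when average CPU exceeds 70%
```

Kubernetes then adds or removes Pods to keep average CPU near the target, with no manual scaling step required.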
Comparing Docker Compose vs Kubernetes
Understanding the differences helps clarify why the transition to Kubernetes makes sense as applications grow:
| Feature | Docker Compose | Kubernetes |
| --- | --- | --- |
| Scope | Single host | Multi-node cluster |
| Health checks | Basic | Advanced + auto-restart |
| Load balancing | Manual | Built-in (Service, Ingress) |
| Config reuse | Limited | ConfigMap + Secret |
| Scaling | Manual | Auto + declarative |
| Deployment strategies | Limited | Rolling updates, canary, blue/green |
| State management | Simple volumes | StatefulSets with stable identities |
| Self-healing | Minimal | Comprehensive |
| Learning curve | Gentle slope | Steep mountain |
The table shows a clear progression: Kubernetes offers more robust features at the cost of increased complexity, a trade-off that becomes worthwhile as applications grow.
Deploying WordPress with Kubernetes Manifests
Required Concepts
Before diving into deployment, let's understand the core Kubernetes resources we'll be using:
- Pod: The smallest deployable unit in Kubernetes, containing one or more containers that share storage and network resources.
- Deployment: Manages a replicated set of Pods and handles updates with controlled rollout strategies.
- Service: An abstraction that defines a logical set of Pods and a policy to access them, enabling loose coupling.
- PersistentVolumeClaim (PVC): A request for storage that can be fulfilled by a PersistentVolume, providing persistent data storage.
- Secret: Stores sensitive information like passwords, OAuth tokens, and SSH keys.
- ConfigMap: Stores non-sensitive configuration data as key-value pairs.
- Ingress: Manages external access to services, typically HTTP, providing routing rules, SSL termination, and name-based virtual hosting.
Manifests Overview
Let's examine the Kubernetes manifests needed for our WordPress deployment.
MySQL Deployment
First, we need a MySQL database for WordPress:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.7
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
```
This manifest defines:
- A headless Service (`clusterIP: None`) exposing MySQL on port 3306, reachable only within the cluster
- A PersistentVolumeClaim requesting 20Gi of storage for MySQL data
- A Deployment running the MySQL container with a reference to the password secret
WordPress Deployment
Next, we need the WordPress application itself:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: ClusterIP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:6.2.1-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wp-pv-claim
```
This manifest defines:
- A Service exposing WordPress on port 80 within the cluster
- A PersistentVolumeClaim requesting 20Gi of storage for WordPress files
- A Deployment running the WordPress container, configured to connect to the MySQL service
Password Secret
Both deployments reference a secret for the MySQL password:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
type: Opaque
data:
  password: cGFzc3dvcmQ=  # "password" encoded in base64
```
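Note that the base64 value under `data:` is an encoding, not encryption; anyone with read access to the Secret can decode it. You can produce and verify the value locally:

```shell
# Secret values under `data:` must be base64-encoded.
# Use printf rather than echo so no trailing newline ends up in the password:
printf '%s' 'password' | base64          # cGFzc3dvcmQ=
# Decode to verify the round trip:
printf '%s' 'cGFzc3dvcmQ=' | base64 -d   # password
```

On a live cluster, `kubectl create secret generic mysql-pass --from-literal=password='password'` creates the same Secret and handles the encoding for you.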
Ingress for External Access
Instead of using a LoadBalancer service, we'll use an Ingress resource to expose WordPress externally:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: wordpress.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wordpress
                port:
                  number: 80
```
This Ingress resource routes traffic for `wordpress.example.com` to our WordPress service.
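Until DNS for wordpress.example.com exists, you can exercise the routing rule by supplying the Host header yourself. This is a sketch; it assumes the ingress-nginx controller is installed with its default Service name, which the next section covers:

```shell
# Look up the external IP assigned to the ingress controller
# (the Service name assumes the ingress-nginx Helm chart's defaults):
INGRESS_IP=$(kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# Send a request carrying the Host header the Ingress rule matches on:
curl -H "Host: wordpress.example.com" "http://$INGRESS_IP/"
```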
Understanding Ingress-Nginx Controller
For our Ingress resource to work, we need an Ingress controller. The most popular is Ingress-Nginx, which uses Nginx as the underlying proxy.
How Ingress-Nginx Works
The Ingress-Nginx controller works by:
- Watching for Ingress resources created with the `nginx` ingressClassName
- Validating these resources and generating Nginx configuration based on them
- Acting as a reverse proxy, routing traffic according to the Ingress rules
When you create an Ingress resource, the controller detects it, verifies that it's valid and has the necessary attributes, and adds the routing details to its internal configuration file. The controller then hot-reloads the configuration without downtime.
Here's how the controller interacts with your Kubernetes cluster:
- The controller pod sees events related to Ingress resources
- It reacts to these events by updating its configuration
- It runs Nginx, which routes incoming traffic to the appropriate services
Deploying Ingress-Nginx Controller
You can deploy the Ingress-Nginx controller using Helm:
```shell
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace
```
This creates a dedicated `ingress-nginx` namespace and deploys the controller into it.
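A quick way to confirm the controller came up, with resource names reflecting the chart's defaults:

```shell
# The controller Pod should reach Running status:
kubectl get pods --namespace ingress-nginx
# The controller Service should receive an external IP (on cloud providers):
kubectl get svc ingress-nginx-controller --namespace ingress-nginx
```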
Applying the Manifests
To deploy WordPress with the manifests we've created:
```shell
kubectl apply -f secret.yaml
kubectl apply -f mysql-deployment.yaml
kubectl apply -f wordpress-deployment.yaml
kubectl apply -f ingress.yaml
```

Or, if all files are in the same directory:

```shell
kubectl apply -f .
```
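After applying, it's worth watching the resources come up before browsing to the site; for example:

```shell
# List everything the manifests created:
kubectl get secret,pvc,deployment,svc,ingress
# Block until each Deployment finishes rolling out:
kubectl rollout status deployment/wordpress-mysql
kubectl rollout status deployment/wordpress
```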
So Why Use Helm? Enter the Package Manager for Kubernetes
What is Helm?
After manually creating and applying multiple YAML files, you might wonder if there's a more efficient approach. That's where Helm comes in.
Helm is the package manager for Kubernetes, similar to apt/yum for Linux or npm for Node.js. It uses a packaging format called charts, which are collections of files that describe a related set of Kubernetes resources.
A Helm chart consists of:
- Template YAML files for Kubernetes resources
- A `values.yaml` file with default configuration
- A `Chart.yaml` file with metadata
Benefits of Helm
Helm offers significant advantages over raw Kubernetes manifests:
- One-command deployment: Deploy an entire application stack with `helm install`
- Versioned releases and rollbacks: Track versions and easily roll back to previous states
- Cleaner configuration management: Separate configuration from templates
- Templating for multiple environments: Use the same chart with different values files
- Dependency management: Manage charts that depend on other charts
- Community ecosystem: Access pre-built charts for common applications
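The versioned-release benefit is concrete: every install or upgrade becomes a numbered revision you can inspect and return to. A sketch, assuming a release named `my-wordpress` like the one installed in the next section:

```shell
helm history my-wordpress      # list the release's numbered revisions
helm rollback my-wordpress 1   # return to revision 1
helm status my-wordpress       # confirm the current state of the release
```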
Deploying WordPress via Helm (Bitnami Edition)
Bitnami Helm Charts
Bitnami provides production-ready Helm charts with sensible defaults. Their WordPress chart "bootstraps a WordPress deployment on a Kubernetes cluster using the Helm package manager" and "also packages the Bitnami MariaDB chart."
The chart includes support for:
- Persistent storage
- Database backend (MariaDB/MySQL)
- Custom domains and Ingress configuration
- Security features and customization options
Step-by-Step Deployment
Here's how to deploy WordPress using the Bitnami Helm chart:

1. Add the Bitnami repository:

```shell
helm repo add bitnami https://charts.bitnami.com/bitnami
```

2. Update repositories:

```shell
helm repo update
```

3. Install WordPress:

```shell
helm install my-wordpress bitnami/wordpress \
  --set wordpressUsername=admin \
  --set wordpressPassword=password \
  --set mariadb.auth.rootPassword=rootpassword \
  --set ingress.enabled=true \
  --set ingress.hostname=wordpress.example.com
```

4. Check deployment status:

```shell
helm status my-wordpress
```
That's it! With just a few commands, you've deployed WordPress with a MariaDB backend, persistent storage, and an Ingress resource.
Anatomy of the Chart
The Bitnami WordPress chart includes numerous customizable values:
- WordPress image settings
- Database configuration
- Persistence settings
- Ingress configuration
- Resource limits
- Security settings
You can see all available options with:

```shell
helm show values bitnami/wordpress
```
Or create a custom values file:

```yaml
# my-values.yaml
wordpressUsername: admin
wordpressPassword: password
mariadb:
  auth:
    rootPassword: rootpassword
ingress:
  enabled: true
  hostname: wordpress.example.com
```

And install with:

```shell
helm install my-wordpress bitnami/wordpress -f my-values.yaml
```
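Subsequent configuration changes go through `helm upgrade`, which records a new revision of the release; a sketch:

```shell
# After editing my-values.yaml, apply the change as a new revision:
helm upgrade my-wordpress bitnami/wordpress -f my-values.yaml
# If the change misbehaves, return to the previous revision:
helm rollback my-wordpress
```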
Comparing Kubernetes Manifests vs Helm Charts
| Feature | Raw Manifests | Helm Charts |
| --- | --- | --- |
| Reusability | Low | High |
| Customization | Manual edits | Parameterized values |
| Rollbacks | Manual | Built-in |
| Onboarding | Steep | Smoother (with opinionated defaults) |
| Versioning | Manual | Built-in release tracking |
| Dependencies | Manual coordination | Managed automatically |
| Template logic | None | Built-in (Go templates) |
| Portability | Medium | High |
Kubernetes manifests are like cooking from scratch: complete control over every ingredient and step, but time-consuming and requiring expertise. Helm is like using a meal kit: faster, cleaner, still customizable, but with some of the hard work already done for you.
Why Kubernetes is Enterprise-Grade (and Docker Compose Isn't)
Robust Architecture
Kubernetes offers an architecture designed for enterprise deployments:
- Control plane vs worker nodes: Separates control functions from application execution for better fault isolation.
- Scalability across regions: Kubernetes clusters can span multiple data centers or cloud availability zones.
- Scheduling and resource quotas: The scheduler places workloads optimally while preventing resource hogging.
- GitOps integration: Works seamlessly with CI/CD pipelines and GitOps workflows.
Operational Features
Kubernetes includes operational features that make it enterprise-grade:
- Self-healing capabilities: It "automatically replaces failed containers, reschedules workloads when nodes become unavailable, and ensures that the desired state of the system is maintained."
- Monitoring integration: Works well with systems like Prometheus and Grafana.
- Multi-tenant support: Namespaces provide division of cluster resources between teams.
- Security features: RBAC, Pod Security Policies, and Secrets encryption protect your applications.
- Rolling updates: Perform zero-downtime deployments with controlled rollout strategies.
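Rolling updates are configured per Deployment. A hedged fragment showing the relevant fields (the WordPress manifests earlier use `type: Recreate` instead, since a `ReadWriteOnce` volume can't be mounted by old and new Pods at once):

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra Pod above the desired count during a rollout
      maxUnavailable: 0  # never dip below the desired count: zero-downtime updates
```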
The Natural Evolution
We've covered quite a journey: from the simplicity of Docker Compose to the robustness of Kubernetes manifests, and finally to the elegance of Helm charts. This progression represents a natural evolution that many teams experience as they scale.
The key takeaways from this evolution are:
- Docker Compose excels for local development but hits limitations in production environments.
- Kubernetes provides the robust orchestration features needed for production workloads, including self-healing, auto-scaling, and multi-node management.
- Helm simplifies Kubernetes deployments by providing package management, versioning, and templating.
- Ingress-Nginx offers a powerful way to manage external access to your services without relying on individual LoadBalancer services.
As your applications grow, your tooling should evolve with them. Start simple with Docker Compose for development, transition to Kubernetes as you need more orchestration features, and adopt Helm to manage complex deployments more efficiently.
For your next steps, consider setting up a local Kubernetes environment with minikube or kind, converting your Docker Compose applications to Kubernetes manifests or Helm charts, or exploring GitOps tools like ArgoCD or Flux for managing deployments through Git.
Remember, the goal isn't to use the most complex tool available but to find the right balance of simplicity and power for your specific needs. For some projects, Docker Compose might still be the right choice. For others, the full capabilities of Kubernetes with Helm are necessary. Understanding each tool's strengths and limitations empowers you to make informed decisions for your infrastructure journey.
Happy containerizing! May your pods always be healthy and your deployments forever smooth.