Part 4 - RKE2 Zero to Hero: Deploying Applications - From Hello World to Production

Welcome back to our RKE2: Zero to Hero journey! If you've been following along, you've successfully built a solid foundation with your first RKE2 cluster in Part 1, scaled it to a multi-node powerhouse in Part 2, and mastered the art of configuration in Part 3. Now comes the moment you've been waiting for: it's time to actually deploy some applications and make your cluster earn its keep.

This is where the rubber meets the road, where all that careful planning and configuration transforms into real workloads running in production. Think of it as the grand opening of your carefully constructed digital theater – the stage is set, the infrastructure is humming, and now it's time for the main performance. By the end of this post, you'll have transformed from someone who can build and configure RKE2 clusters to someone who can confidently deploy applications ranging from simple web servers to complex multi-tier applications using modern deployment techniques.

Understanding Application Deployment in RKE2

Before we dive into the practical aspects, let's establish a clear understanding of how application deployment works in RKE2. As a hardened Kubernetes distribution, RKE2 follows standard Kubernetes deployment patterns while layering on enterprise-grade features and security enhancements.

The Deployment Landscape

RKE2 supports multiple deployment approaches, each with its own strengths and use cases. The most common methods include:

Direct YAML Manifests: The foundational approach, where you define your application's desired state in Kubernetes YAML files. This method provides maximum control and transparency but requires more manual management.

Helm Charts: The package manager approach that bundles multiple YAML files into reusable, parameterized packages. Helm charts simplify complex deployments and make them more maintainable.

Auto-Deploying Manifests: RKE2's built-in capability to automatically deploy any YAML files placed in /var/lib/rancher/rke2/server/manifests. This feature is particularly useful for system-level components and automated deployments.

RKE2's Deployment Advantages

RKE2 brings several advantages to the deployment process. The platform's hardened security posture means your applications inherit enterprise-grade security controls by default. The simplified networking stack reduces complexity while maintaining flexibility, and the integrated container runtime provides optimized performance for containerized workloads.

Creating Your First Deployment Manifest

Let's start with the fundamental building block of Kubernetes deployments: the deployment manifest. A deployment manifest is a YAML file that describes your application's desired state, including the container image, resource requirements, and scaling parameters.

Basic Deployment Structure

Here's a comprehensive example of a deployment manifest that demonstrates best practices:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-webapp
  labels:
    app: nginx-webapp
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-webapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        app: nginx-webapp
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx:1.21-alpine
        ports:
        - containerPort: 80
          name: http
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
        env:
        - name: NGINX_PORT
          value: "80"

Key Configuration Elements

This deployment manifest incorporates several production-ready best practices. The replicas: 3 setting ensures high availability by running multiple instances of your application, and the RollingUpdate strategy enables zero-downtime deployments by gradually replacing old pods with new ones.

Resource requests and limits are crucial for cluster stability. Requests guarantee that your pods have the minimum resources they need, while limits prevent any single pod from consuming excessive cluster resources. The liveness and readiness probes ensure that Kubernetes can properly manage your application's health and traffic routing.

Applying Your Deployment

To deploy this manifest to your RKE2 cluster, save it as nginx-deployment.yaml and apply it using kubectl:

kubectl apply -f nginx-deployment.yaml

Verify that your deployment is running successfully:

kubectl get deployments
kubectl get pods -l app=nginx-webapp
kubectl describe deployment nginx-webapp
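
You can also watch the rollout directly; kubectl returns once all replicas are updated and available:

kubectl rollout status deployment/nginx-webapp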

Exposing Services: NodePort vs LoadBalancer

Once your application is running, you need to make it accessible to users. Kubernetes provides several service types for exposing applications, with NodePort and LoadBalancer being the most common for external access.

Understanding Service Types

ClusterIP: The default service type, which provides internal access only. This is perfect for services that only need to communicate with other components within the cluster.

NodePort: Exposes your service on a specific port on every node in the cluster. This approach is simple but has limitations in terms of port management and load balancing.

LoadBalancer: Creates an external load balancer that distributes traffic across your pods. This is the preferred method for production deployments, as it provides better scalability and reliability.

Creating a NodePort Service

Here's how to create a NodePort service for your nginx deployment:

apiVersion: v1
kind: Service
metadata:
  name: nginx-webapp-nodeport
  labels:
    app: nginx-webapp
spec:
  type: NodePort
  selector:
    app: nginx-webapp
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
    protocol: TCP
    name: http
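
Save the manifest (as nginx-nodeport.yaml, for example; the filename is up to you), apply it, and test from any machine that can reach a node, substituting a real node IP for <node-ip>:

kubectl apply -f nginx-nodeport.yaml
curl http://<node-ip>:30080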

Implementing LoadBalancer Services

For production environments, LoadBalancer services provide superior capabilities:

apiVersion: v1
kind: Service
metadata:
  name: nginx-webapp-loadbalancer
  labels:
    app: nginx-webapp
spec:
  type: LoadBalancer
  selector:
    app: nginx-webapp
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  loadBalancerSourceRanges:
  - 10.0.0.0/8
  - 172.16.0.0/12
  - 192.168.0.0/16
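
One caveat: RKE2 does not bundle a LoadBalancer implementation, so the external IP must be provisioned by your cloud provider's controller or, on bare metal, by an add-on such as MetalLB. Until one responds, the service's EXTERNAL-IP stays in the Pending state, which you can watch for:

kubectl get service nginx-webapp-loadbalancer --watch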

Service Selection Considerations

The choice between NodePort and LoadBalancer depends on your environment and requirements. NodePort is suitable for development and testing environments where simplicity matters more than scalability. LoadBalancer is preferred for production deployments where you need proper load distribution and integration with cloud provider load balancing services.

It's worth noting that LoadBalancer services automatically include NodePort functionality, providing multiple access methods. This redundancy can be useful for troubleshooting and gives you flexibility in how you expose your applications.

Implementing Ingress Controllers for Advanced Routing

While services handle basic traffic routing, Ingress controllers provide sophisticated HTTP and HTTPS routing capabilities. This is where RKE2 really shines: it ships with a pre-configured NGINX Ingress controller that you can customize or replace as needed.

Understanding Ingress Architecture

Ingress routing involves two main components: the Ingress controller itself (which runs as a deployment in your cluster) and Ingress resources (which define the routing rules). The controller watches for Ingress resources and configures the underlying load balancer accordingly.

RKE2 includes the NGINX Ingress controller by default, but you can disable it and use alternatives like Traefik, HAProxy, or cloud-specific controllers. The flexibility to choose your ingress solution is one of RKE2's strengths in enterprise environments.

Creating Ingress Resources

Here's a comprehensive Ingress resource that demonstrates advanced routing capabilities:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-webapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - webapp.example.com
    secretName: webapp-tls
  rules:
  - host: webapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-webapp-loadbalancer
            port:
              number: 80
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 8080
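
Before DNS for webapp.example.com exists, you can verify the routing by sending a request with an explicit Host header; use the address shown by kubectl get ingress in place of <ingress-ip>:

curl -H "Host: webapp.example.com" http://<ingress-ip>/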

Advanced Ingress Configuration

This Ingress resource showcases several advanced features. The annotations configure SSL redirection, ensuring all traffic uses HTTPS. The cert-manager.io/cluster-issuer annotation enables automatic SSL certificate management through cert-manager, and multiple path rules demonstrate how to route different URL paths to different services within the same application.

The ingressClassName field specifies which Ingress controller should handle this resource. In RKE2, this is typically nginx for the default controller, but you can run multiple controllers simultaneously and route traffic to different ones based on your needs.

Ingress Controller Customization

RKE2 allows you to customize the bundled NGINX Ingress controller through HelmChartConfig resources. This lets you modify controller behavior, add custom configuration, or adjust resource allocation without rebuilding the entire cluster.
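
For example, a HelmChartConfig like the following, dropped into the manifests directory, enables controller metrics and forwarded-header handling; the values shown are illustrative, not required:

# /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      config:
        use-forwarded-headers: "true"
      metrics:
        enabled: true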

Utilizing Helm Charts for Streamlined Deployment

Helm represents the next evolution in Kubernetes application deployment, providing package management capabilities that dramatically simplify complex deployments. RKE2's integration with Helm makes it easy to leverage this powerful tool for both simple and sophisticated applications.

Understanding Helm Fundamentals

Helm operates on three core concepts: Charts (packages of Kubernetes resources), Repositories (collections of charts), and Releases (instances of charts running in your cluster). This architecture provides a clean separation between application templates and their deployed instances.

RKE2 includes built-in Helm integration through the HelmChart CRD. This means you can deploy Helm charts using standard Kubernetes YAML files, making Helm deployments part of your GitOps workflow and keeping them consistent with other cluster resources.

Installing Helm

Before you can use Helm charts from the command line, you need to install the Helm client. The installation process is straightforward:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod +x get_helm.sh
./get_helm.sh

Verify your installation:

helm version
helm env

Working with Helm Repositories

Helm repositories are collections of charts that you can search and install. Start by adding some popular repositories:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo add jetstack https://charts.jetstack.io
helm repo update

Search for available charts:

helm search repo nginx
helm search repo postgresql
helm search repo prometheus

Deploying Applications with Helm

Here's how to deploy a complete application stack using Helm. Let's deploy a WordPress instance backed by a MySQL database:

# Deploy MySQL first
helm install mysql bitnami/mysql \
  --set auth.rootPassword=secretpassword \
  --set auth.database=wordpress \
  --set auth.username=wordpress \
  --set auth.password=wordpress \
  --set primary.persistence.enabled=true \
  --set primary.persistence.size=10Gi

# Deploy WordPress
helm install wordpress bitnami/wordpress \
  --set mariadb.enabled=false \
  --set externalDatabase.host=mysql \
  --set externalDatabase.user=wordpress \
  --set externalDatabase.password=wordpress \
  --set externalDatabase.database=wordpress \
  --set wordpressUsername=admin \
  --set wordpressPassword=secretpassword \
  --set service.type=LoadBalancer
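
Confirm both releases deployed cleanly before moving on:

helm list
helm status wordpress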

Using Helm Charts in RKE2's Manifest Directory

RKE2's automatic manifest deployment feature works seamlessly with Helm charts. Create a HelmChart resource and place it in the manifests directory on a server node:

# /var/lib/rancher/rke2/server/manifests/prometheus.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: prometheus
  namespace: kube-system
spec:
  chart: prometheus
  repo: https://prometheus-community.github.io/helm-charts
  targetNamespace: monitoring
  createNamespace: true
  valuesContent: |-
    server:
      service:
        type: LoadBalancer
      persistentVolume:
        enabled: true
        size: 20Gi
    alertmanager:
      enabled: true
      service:
        type: LoadBalancer

Helm Chart Customization and Best Practices

When working with Helm charts, always review the available configuration options. Use helm show values to see all configurable parameters:

helm show values bitnami/wordpress > wordpress-values.yaml

Create custom values files for different environments:

# production-values.yaml
replicaCount: 3
image:
  tag: "1.21-alpine"
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"
service:
  type: LoadBalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  hosts:
  - host: myapp.example.com
    paths:
    - path: /
      pathType: Prefix
  tls:
  - secretName: myapp-tls
    hosts:
    - myapp.example.com

Deploy using your custom values:

helm install myapp ./mychart -f production-values.yaml
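
For repeatable rollouts, helm upgrade --install is the idempotent form: it installs the release if it doesn't exist and upgrades it in place if it does, which makes it a natural fit for CI/CD pipelines:

helm upgrade --install myapp ./mychart -f production-values.yaml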

Production Deployment Best Practices

Moving from development to production requires careful attention to several key areas. These practices ensure that your applications are reliable, secure, and performant in real-world environments.

Resource Management and Limits

Proper resource management is fundamental to production deployments. Always specify resource requests and limits for your containers:

resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"

Resource requests ensure that your pods have the minimum resources they need to function properly. Limits prevent any single pod from consuming excessive resources and affecting other workloads. This balance is crucial for maintaining cluster stability and predictable performance.

High Availability and Scaling

Production deployments should be designed for high availability from the start. This means running multiple replicas of your applications and ensuring they're distributed across different nodes:

spec:
  replicas: 3
  template:
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - nginx-webapp
              topologyKey: kubernetes.io/hostname
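
Anti-affinity spreads replicas across nodes; pairing it with a PodDisruptionBudget also protects availability during voluntary disruptions such as node drains. A minimal sketch for the deployment above:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-webapp-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: nginx-webapp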

Health Checks and Monitoring

Implement comprehensive health checks so Kubernetes can properly manage your applications:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3

readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
  timeoutSeconds: 3
  failureThreshold: 3
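
For applications with slow or unpredictable startup, a startupProbe is worth adding: liveness and readiness checks are suspended until it succeeds, preventing Kubernetes from killing a pod that is still booting. A sketch using the same endpoint:

startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
  failureThreshold: 30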

Security Considerations

Security should be built into your deployment process from the beginning. Use security contexts to run containers with minimal privileges:

securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  allowPrivilegeEscalation: false
  capabilities:
    drop:
    - ALL
  readOnlyRootFilesystem: true
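
Those settings apply at the container level; at the pod level you can additionally enforce a default seccomp profile, which most baseline security policies expect:

spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault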

Configuration Management

Use ConfigMaps and Secrets to manage application configuration. Since the keys below are injected as environment variables via envFrom, they use underscore-style names that are valid environment variable identifiers:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: "mysql.default.svc.cluster.local"
  DATABASE_PORT: "3306"
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  DATABASE_PASSWORD: Y2hhbmdlbWU=  # base64 of the placeholder "changeme"; substitute your own
  API_KEY: Y2hhbmdlbWU=  # base64-encoded placeholder

Reference these in your deployment:

containers:
- name: app
  image: myapp:latest
  envFrom:
  - configMapRef:
      name: app-config
  - secretRef:
      name: app-secrets
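
Rather than hand-encoding base64 values, you can create the Secret imperatively and let kubectl handle the encoding; the literal values here are placeholders:

kubectl create secret generic app-secrets \
  --from-literal=DATABASE_PASSWORD='changeme' \
  --from-literal=API_KEY='changeme'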

Troubleshooting Common Deployment Issues

Even with careful planning, deployment issues can arise. Understanding common problems and their solutions will help you maintain reliable applications.

Pod Startup Issues

When pods fail to start, the most common causes are image pull failures, resource constraints, or configuration errors. Use these debugging commands:

kubectl describe pod <pod-name>
kubectl logs <pod-name>
kubectl get events --sort-by=.metadata.creationTimestamp

Service Discovery Problems

If your applications can't communicate with each other, check the service and endpoint configurations:

kubectl get services
kubectl get endpoints
kubectl describe service <service-name>

Ingress Routing Issues

When external access isn't working, verify your Ingress configuration and controller status. Keep in mind that RKE2 runs its bundled NGINX Ingress controller in the kube-system namespace (pods named rke2-ingress-nginx-controller-*), not the ingress-nginx namespace used by upstream installs:

kubectl get ingress
kubectl describe ingress <ingress-name>
kubectl get pods -n kube-system -l app.kubernetes.io/name=rke2-ingress-nginx
kubectl logs -n kube-system <rke2-ingress-nginx-controller-pod>

Resource Constraints

If pods are being evicted or showing poor performance, check resource usage. The kubectl top commands rely on the metrics-server, which RKE2 deploys by default:

kubectl top nodes
kubectl top pods
kubectl describe node <node-name>

Monitoring and Observability

Production deployments require comprehensive monitoring to ensure reliability and performance. While detailed monitoring setup will be covered in Part 5, it's important to understand the basics when deploying applications.

Basic Monitoring Setup

Ensure your applications expose metrics and health endpoints:

ports:
- containerPort: 8080
  name: http
- containerPort: 9090
  name: metrics
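
If your Prometheus setup uses the common annotation-based discovery convention, adding these annotations to the pod template lets it find the metrics endpoint automatically (this is a scrape-config convention, not a Kubernetes built-in):

metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"
    prometheus.io/path: "/metrics"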

Logging Configuration

Configure proper logging for your applications:

containers:
- name: app
  image: myapp:latest
  env:
  - name: LOG_LEVEL
    value: "info"
  - name: LOG_FORMAT
    value: "json"

Application Performance Monitoring

Consider integrating APM tools early in your deployment process. They provide valuable insights into application behavior and help identify performance bottlenecks before they impact users.

Preparing for Production Scale

As you prepare to move your applications to production, consider these scaling and operational aspects.

Horizontal Pod Autoscaling

Configure automatic scaling based on resource utilization:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-webapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-webapp
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
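
The HPA gets its utilization data from the metrics-server, which RKE2 deploys out of the box. After applying the resource, watch its scaling decisions:

kubectl get hpa nginx-webapp-hpa --watch
kubectl describe hpa nginx-webapp-hpa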

Deployment Strategies

Plan your deployment strategy for minimal downtime:

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 25%
    maxSurge: 25%

Backup and Recovery

Implement backup strategies for persistent data. Kubernetes has no built-in backup scheduler, so annotations like the one below only take effect if your backup tooling (a Velero schedule, a storage vendor's operator, or similar) is configured to honor them; treat it as illustrative:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-pvc
  annotations:
    # interpreted by your backup tooling, not by Kubernetes itself
    backup.kubernetes.io/schedule: "0 2 * * *"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Moving Forward: Advanced Operations

You've now mastered the fundamentals of deploying applications in RKE2, from simple deployments to complex multi-service applications using Helm charts. You understand how to expose services, configure ingress routing, and implement production-ready deployment patterns. Your cluster is no longer just a carefully configured platform; it's a living system running real workloads.

This knowledge forms the foundation for advanced operations and management. You've learned to create deployment manifests that follow best practices, expose services through multiple methods, implement sophisticated routing with Ingress controllers, and leverage Helm charts for complex application deployments. These skills will serve you well as you continue to expand your RKE2 expertise.

In Part 5, "Advanced RKE2 Management: Monitoring, Scaling, and Upgrades," we'll dive deep into the operational aspects of running RKE2 in production. You'll learn how to implement comprehensive monitoring with Prometheus and Grafana, set up centralized logging with modern tools like Loki, configure automatic scaling for both pods and nodes, and master the art of performing rolling upgrades and maintaining disaster recovery capabilities. The applications you've deployed in this post will serve as the foundation for these advanced management techniques.

Your journey from RKE2 newcomer to production-ready practitioner is nearly complete, but the most critical skills for maintaining a robust, scalable, and reliable Kubernetes environment await in our final installment.