Part 4 - K3s Zero to Hero: K3s Application Deployment - From Hello World to Production-Ready Workloads

Welcome back to our K3s journey! By now, you've built a cluster, connected multiple nodes, and fine-tuned configurations like a Kubernetes conductor orchestrating a symphony of containers. But what good is an empty orchestra without music? Today we're diving into the meat and potatoes of K3s: actually deploying applications that do useful things instead of just sitting there looking pretty.
Think of this as the moment your carefully crafted cluster transforms from an expensive hobby into something that might actually impress your boss (or at least your cat, who's been judging your late-night kubectl sessions). We'll start with simple deployments, work our way through service exposure strategies, tackle the mysterious world of Ingress controllers, and finish with Helm charts that make deployment so smooth you'll wonder why you ever did anything manually.
Creating Your First Deployment: YAML Manifests Demystified
Let's start with the foundation of Kubernetes application deployment: YAML manifests. If you've been following this series, you're probably familiar with YAML from your K3s configuration adventures, but now we're using it to describe entire applications rather than just cluster settings.
A Kubernetes deployment YAML file serves as a declarative blueprint that tells your cluster exactly what you want running and how you want it configured. Think of it as a very detailed recipe that your cluster follows religiously, even if the kitchen (your nodes) occasionally catches fire.
Here's a basic deployment manifest that creates a simple NGINX web server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: "256Mi"
            cpu: "200m"
          requests:
            memory: "128Mi"
            cpu: "100m"
The beauty of this approach lies in its declarative nature. You're not telling Kubernetes how to deploy your application step-by-step; you're describing the desired end state and letting Kubernetes figure out the details. It's like telling a very competent assistant "I want three NGINX containers running" and trusting them to handle the logistics while you focus on more important things, like deciding what to have for lunch.
To deploy this manifest, save it as nginx-deployment.yaml and run:
kubectl apply -f nginx-deployment.yaml
The kubectl apply command is your best friend in the deployment world. Unlike kubectl create, which throws a tantrum if the resource already exists, apply intelligently updates existing resources or creates new ones as needed. It's the difference between a rigid robot and an adaptable human assistant.
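If you want to preview what an apply would change before committing, kubectl also ships a diff subcommand that compares the live objects against what your manifest would produce. A quick sketch of that workflow:
kubectl diff -f nginx-deployment.yaml
kubectl apply -f nginx-deployment.yaml
Running diff first is a cheap habit that catches accidental edits, like a typo in the replica count, before they reach the cluster.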
Resource limits deserve special attention in your manifests. The resources section keeps any single container from becoming a resource glutton that devours all available CPU and memory: the requests tell the scheduler how much to reserve for each container, while the limits cap what it can actually consume. Think of these limits as portion control for containers, ensuring that one misbehaving application doesn't crash your entire cluster by eating all the RAM like a digital Pac-Man.
You can verify your deployment with several helpful commands:
kubectl get deployments
kubectl get pods
kubectl describe deployment nginx-deployment
These commands provide different levels of detail, from the high-level deployment status to individual pod information. The describe command is particularly useful when things go wrong, offering detailed events and status information that can help diagnose issues.
Exposing Services: Making Your Applications Accessible
Creating a deployment is only half the battle. Without properly exposing your services, your applications are like brilliant performers trapped in a soundproof room. Kubernetes provides several service types to make your applications accessible, each with distinct use cases and characteristics.
NodePort Services: The Simple Approach
NodePort services represent the most straightforward method for exposing applications outside your cluster. When you create a NodePort service, Kubernetes opens the specified port on every node in your cluster and automatically routes traffic to your service. It's like giving every node in your cluster a direct phone line to your application.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
The NodePort approach has both advantages and limitations. On the positive side, it's simple, predictable, and works immediately without additional configuration. However, it requires you to manage port assignments carefully since you can only run one service per port across your entire cluster. It's like having a parking garage where each space can only be used for one specific car model.
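Once the service exists, any node's IP answers on the chosen port. Assuming one of your nodes sits at 192.168.1.10 (substitute your own address), a quick check looks like this:
kubectl get service nginx-service
curl http://192.168.1.10:30080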
LoadBalancer Services: The K3s Magic
Here's where K3s shows its thoughtful design. Unlike standard Kubernetes distributions that require external load balancer infrastructure, K3s includes a built-in service load balancer called ServiceLB (formerly known as "Klipper") that works out of the box. This feature eliminates one of the traditional pain points of Kubernetes deployment.
When you create a LoadBalancer service in K3s, the system automatically deploys a DaemonSet that listens on the specified ports across all nodes. It's surprisingly elegant in its simplicity:
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
The Klipper load balancer works well for single-node clusters or simple multi-node setups, but it has limitations worth understanding. Each LoadBalancer service claims its ports exclusively across all nodes, so you can't run multiple services on the same port. Additionally, it doesn't provide advanced features like health checking or intelligent traffic distribution that enterprise load balancers offer.
For production environments or clusters with multiple nodes requiring sophisticated load balancing, you might want to consider alternatives like MetalLB, but for learning and many real-world applications, Klipper provides an excellent balance of simplicity and functionality.
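You can watch Klipper do its work by checking the external IP assigned to the service and the helper pods it creates in kube-system. The pod names typically follow an svclb-<service-name> pattern, so adjust the grep if your K3s version names them differently:
kubectl get service nginx-loadbalancer
kubectl get pods -n kube-system | grep svclb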
Mastering Ingress: Advanced Traffic Routing with Traefik
While services handle basic exposure, Ingress controllers provide sophisticated HTTP routing capabilities that enable you to run multiple services behind a single IP address. K3s includes Traefik as its default Ingress controller, which brings powerful routing features with minimal configuration overhead.
Traefik differs from traditional Ingress controllers like NGINX in several important ways. It automatically discovers services and routes, provides a built-in dashboard, and supports advanced features like automatic SSL certificate management. Think of Traefik as an intelligent traffic director that not only routes requests but also provides real-time insights into traffic patterns.
Basic Ingress Configuration
Setting up a basic Ingress with Traefik involves creating an Ingress resource that defines routing rules:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
  - host: nginx.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
This configuration tells Traefik to route all requests for nginx.local to your NGINX service. The beauty of Ingress lies in its ability to handle multiple services through a single entry point. You can add additional rules to the same Ingress or create separate Ingress resources for different applications.
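To test the rule from your workstation, point nginx.local at one of your node IPs (via DNS or a hosts file entry) and send a request. The IP below is a placeholder for your own node:
echo "192.168.1.10 nginx.local" | sudo tee -a /etc/hosts
curl http://nginx.local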
One common gotcha when working with Traefik in K3s involves API versions. Older tutorials might reference deprecated API versions like extensions/v1beta1, but modern Kubernetes requires networking.k8s.io/v1. Always use the current API version to avoid compatibility issues.
Enabling the Traefik Dashboard
The Traefik dashboard provides valuable insights into your Ingress configuration and traffic patterns. Since it's disabled by default in K3s, enabling it requires creating both a Service and Ingress resource:
apiVersion: v1
kind: Service
metadata:
  name: traefik-dashboard
  namespace: kube-system
spec:
  type: ClusterIP
  ports:
  - name: traefik
    port: 9000
    targetPort: traefik
  selector:
    app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: traefik-dashboard
  namespace: kube-system
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
  rules:
  - host: traefik.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: traefik-dashboard
            port:
              number: 9000
The dashboard becomes invaluable when debugging routing issues or understanding traffic patterns. It displays real-time information about routes, services, and middleware, making it much easier to troubleshoot complex Ingress configurations.
Advanced Traefik Features
Traefik supports numerous advanced features through annotations and custom resources. You can configure SSL termination, request routing based on headers, rate limiting, and authentication middleware. These features transform basic HTTP routing into sophisticated application delivery patterns.
For example, you can configure automatic HTTPS redirection:
annotations:
  traefik.ingress.kubernetes.io/router.entrypoints: web,websecure
  traefik.ingress.kubernetes.io/router.middlewares: default-redirect-https@kubernetescrd
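Note that the middleware referenced above, default-redirect-https@kubernetescrd, has to exist as a Traefik Middleware resource named redirect-https in the default namespace. A minimal sketch, assuming the traefik.containo.us/v1alpha1 API group exposed by the Traefik v2 CRDs bundled with K3s (newer Traefik releases use traefik.io/v1alpha1):
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: redirect-https
  namespace: default
spec:
  redirectScheme:
    scheme: https
    permanent: true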
While these advanced features extend beyond basic deployment needs, understanding their availability helps you plan for future requirements and appreciate Traefik's capabilities.
Streamlining Deployment with Helm Charts
After manually creating YAML manifests for a few applications, you'll quickly appreciate the value of Helm charts. Helm functions as a package manager for Kubernetes, similar to how apt manages packages on Ubuntu or brew manages packages on macOS. However, K3s adds an interesting twist with its built-in HelmChart Custom Resource Definition (CRD).
Understanding K3s HelmChart CRD
Traditional Helm deployments require installing the Helm client (and, back in Helm 2, the Tiller server-side component), but K3s simplifies this with its integrated Helm Controller. Instead of running helm install commands, you create HelmChart resources that K3s automatically processes. This approach integrates seamlessly with K3s's auto-deploying manifests feature.
The HelmChart CRD captures most options you would normally pass to the helm command-line tool:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: grafana
  namespace: kube-system
spec:
  chart: grafana
  repo: https://grafana.github.io/helm-charts
  targetNamespace: monitoring
  createNamespace: true
  valuesContent: |-
    adminPassword: changeme
    service:
      type: LoadBalancer
    persistence:
      enabled: true
      size: 10Gi
This approach offers several advantages over traditional Helm usage. First, it integrates with K3s's auto-deploying manifests system, meaning you can place HelmChart resources in /var/lib/rancher/k3s/server/manifests/ for automatic deployment. Second, it eliminates the need for separate Helm client installation and configuration. Third, it provides consistent resource management through Kubernetes APIs rather than separate Helm state tracking.
Deploying Charts with Auto-Deploying Manifests
K3s's auto-deploying manifests feature creates an elegant deployment workflow. Any file placed in /var/lib/rancher/k3s/server/manifests/ gets automatically deployed, both at startup and when files change. This capability transforms configuration management from an imperative process into a declarative one.
For example, to deploy a monitoring stack, you could create a file /var/lib/rancher/k3s/server/manifests/monitoring.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: prometheus
  namespace: kube-system
spec:
  chart: prometheus
  repo: https://prometheus-community.github.io/helm-charts
  targetNamespace: monitoring
  valuesContent: |-
    server:
      service:
        type: LoadBalancer
    alertmanager:
      enabled: false
K3s automatically detects the new file and deploys the chart without manual intervention. This automation proves particularly valuable for infrastructure-as-code workflows where deployment consistency matters more than interactive flexibility.
Working with Chart Values
Helm charts achieve their power through templating and configurable values. The valuesContent field in HelmChart resources allows you to override default chart values using standard YAML syntax. This capability enables you to customize applications without modifying the underlying charts.
Understanding chart documentation becomes crucial for effective value customization. Most Helm charts include comprehensive README files and values.yaml examples that document available configuration options. Spending time reviewing these resources before deployment saves significant troubleshooting time later.
For complex value configurations, you can reference external ConfigMaps or Secrets:
spec:
  valuesContent: |-
    database:
      host: postgres.database.svc.cluster.local
      passwordSecret:
        name: app-secrets
        key: db-password
This approach separates sensitive configuration data from chart definitions, improving security and enabling better secret management practices.
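For that reference to resolve, the Secret has to exist in the chart's target namespace before installation. A minimal sketch, reusing the hypothetical app-secrets name and db-password key from above:
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
  namespace: default   # match the chart's targetNamespace
type: Opaque
stringData:
  db-password: changeme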
Managing Chart Lifecycle
K3s Helm Controller automatically handles chart installation, upgrades, and basic lifecycle management. When you modify a HelmChart resource, the controller detects changes and performs appropriate updates. However, chart removal requires additional consideration since deleting the HelmChart resource doesn't automatically uninstall the deployed applications.
To properly clean up a chart deployment, you need to delete both the HelmChart resource and any associated namespaces or persistent volumes. This behavior reflects Kubernetes's general approach of preserving data unless explicitly removed.
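For the Grafana example above, a full cleanup might therefore look something like this (adjust names to whatever you deployed, and check for leftover persistent volumes afterwards):
kubectl delete helmchart grafana -n kube-system
kubectl delete namespace monitoring
kubectl get pv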
Monitoring HelmChart status helps track deployment progress and identify issues:
kubectl get helmcharts -n kube-system
kubectl describe helmchart grafana -n kube-system
The Helm Controller creates Jobs to execute chart operations, so examining Job logs can provide detailed troubleshooting information when deployments fail.
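The controller typically names these Jobs helm-install-<chart-name>, so for the Grafana example the relevant logs could be pulled roughly like this:
kubectl get jobs -n kube-system
kubectl logs -n kube-system job/helm-install-grafana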
Practical Deployment Examples
Let's put these concepts together with some practical examples that demonstrate real-world deployment patterns. These examples illustrate how the different service types, Ingress configurations, and Helm charts work together to create functional application environments.
Example 1: WordPress with MySQL
This example demonstrates a multi-tier application deployment using both traditional manifests and Helm charts:
# MySQL deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "rootpassword"
        - name: MYSQL_DATABASE
          value: "wordpress"
        - name: MYSQL_USER
          value: "wpuser"
        - name: MYSQL_PASSWORD
          value: "wppassword"
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
    targetPort: 3306
For WordPress, we'll use a HelmChart to demonstrate the different approaches:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: wordpress
  namespace: kube-system
spec:
  chart: wordpress
  repo: https://charts.bitnami.com/bitnami
  targetNamespace: default
  valuesContent: |-
    mariadb:
      enabled: false
    externalDatabase:
      host: mysql
      user: wpuser
      password: wppassword
      database: wordpress
    service:
      type: LoadBalancer
    ingress:
      enabled: true
      hostname: wordpress.local
This example illustrates several important concepts: database services typically use ClusterIP (the default) since they shouldn't be externally accessible, web applications can benefit from LoadBalancer services for direct access, and Ingress provides name-based virtual hosting for multiple applications.
Example 2: Development Environment with Multiple Services
Development environments often require multiple services with easy access patterns. Here's a configuration that sets up a development stack with different exposure methods:
# API service with NodePort
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: node:16-alpine
        command: ["node", "-e", "require('http').createServer((req,res) => res.end('API Response')).listen(3000)"]
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  type: NodePort
  selector:
    app: api
  ports:
  - port: 3000
    targetPort: 3000
    nodePort: 30001
Combined with an Ingress that routes different paths to different services:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dev-ingress
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
  - host: dev.local
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 3000
      - path: /
        pathType: Prefix
        backend:
          service:
            name: wordpress
            port:
              number: 80
This configuration demonstrates how a single hostname can route to multiple backend services based on URL paths, creating a unified development environment accessible through consistent URLs.
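With a hosts entry (or DNS record) pointing dev.local at one of your node IPs, you can confirm that each path reaches the right backend; the IP below is again a placeholder for your own node:
echo "192.168.1.10 dev.local" | sudo tee -a /etc/hosts
curl http://dev.local/api
curl http://dev.local/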
Troubleshooting Common Deployment Issues
Real-world deployments rarely work perfectly on the first try, so developing troubleshooting skills proves essential for effective K3s management. Most deployment issues fall into predictable categories with systematic debugging approaches.
Image Pull Problems
One of the most common issues involves container image problems. Symptoms include pods stuck in ImagePullBackOff or ErrImagePull states. Use kubectl describe pod to examine detailed error messages that often reveal the root cause:
kubectl describe pod <pod-name>
kubectl logs <pod-name>
Common image-related issues include typos in image names, missing tags (defaulting to latest, which might not exist), private registry authentication problems, and network connectivity issues preventing image downloads.
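When the culprit is a private registry, a pull secret usually fixes it. A sketch with a hypothetical registry and credentials:
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=changeme
The secret then has to be referenced from the pod template so the kubelet can authenticate when pulling:
spec:
  imagePullSecrets:
  - name: regcred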
Service Discovery Issues
When pods can't communicate with services, DNS resolution often provides clues. K3s includes CoreDNS for service discovery, and you can test DNS resolution from within pods:
kubectl exec -it <pod-name> -- nslookup <service-name>
kubectl exec -it <pod-name> -- wget -qO- http://<service-name>:<port>
Service selector mismatches represent another common issue. Ensure that service selectors exactly match pod labels, including case sensitivity and spacing.
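A quick way to spot a mismatch is to print the service's selector, the pod labels, and the endpoints the service actually picked up:
kubectl get service nginx-service -o jsonpath='{.spec.selector}'
kubectl get pods --show-labels
kubectl get endpoints nginx-service
An empty endpoints list is the classic sign that the selector doesn't match any pod labels.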
Resource Constraints
Resource limit problems manifest as pods failing to schedule or getting killed unexpectedly. The scheduler events and pod status provide diagnostic information:
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl top nodes
kubectl top pods
Understanding these troubleshooting patterns accelerates problem resolution and builds confidence in managing K3s deployments.
Preparing for Production Considerations
While this post focuses on deployment mechanics, production readiness requires additional considerations that we'll explore thoroughly in Part 5. However, understanding these requirements helps shape deployment decisions from the beginning.
Security considerations include using specific image tags rather than latest, implementing resource quotas, enforcing Pod Security Standards, and managing secrets properly. Performance considerations involve resource requests and limits, horizontal pod autoscaling, and persistent volume management.
Monitoring and observability become crucial as deployments grow in complexity. Part 5 will dive deep into Prometheus, Grafana, and logging solutions, but thinking about these requirements during initial deployment planning prevents architectural problems later.
Your K3s Deployment Journey Continues
You've now transformed your K3s cluster from an empty platform into a functional application environment. The journey from basic deployments to sophisticated multi-service architectures illustrates the power and flexibility that K3s brings to container orchestration.
The techniques covered here provide the foundation for most real-world Kubernetes workloads. YAML manifests give you precise control over deployment specifications, service types offer flexibility in application exposure strategies, Ingress controllers enable sophisticated routing patterns, and Helm charts streamline complex application management.
As your deployments grow in complexity and scale, you'll need robust monitoring, logging, and operational practices to maintain reliability and performance. Part 5 will tackle these advanced operational challenges, transforming your deployment skills into production-ready expertise that keeps applications running smoothly even when things go sideways.
The next phase of our journey explores the operational side of K3s management, where monitoring becomes your early warning system, scaling keeps performance consistent under varying loads, and upgrade strategies ensure long-term cluster health. These advanced topics build directly on the deployment foundation you've just mastered, creating a comprehensive K3s skill set that handles everything from development environments to production workloads.