Part 7 - Deploying Applications to Kubernetes: Step-by-Step Guide
Launching your app into Kubernetes is like sending it into orbit—this guide ensures it doesn’t burn up on re-entry.

Before diving into the technical details, let me set the scene: your application is ready, your code is polished, and now comes the moment of truth—deployment to Kubernetes. If you've made it this far in our series, you're about to experience that satisfying feeling of watching your containerized creation come alive in a Kubernetes cluster. Think of it as sending your application astronaut into the Kubernetes cosmos—exciting, a bit nerve-wracking, but ultimately rewarding when everything falls into place.
Understanding Kubernetes Deployments
A Kubernetes Deployment is the launch vehicle for your application. It's responsible for creating and updating instances of your application, essentially telling Kubernetes how many copies to run and how they should behave.
What makes Deployments particularly powerful is their self-healing mechanism. If a Node hosting one of your application instances crashes or gets deleted, the Deployment controller automatically replaces it with a new instance on another Node. This fundamentally changes how we manage applications compared to traditional installation scripts that couldn't recover from machine failures.
As the official Kubernetes documentation states: "By both creating your application instances and keeping them running across Nodes, Kubernetes Deployments provide a fundamentally different approach to application management".
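You can watch this self-healing yourself once a Deployment is running (the pod name below is a made-up example; we create the flask-app Deployment later in this guide):

```shell
# Delete one of the pods backing the Deployment.
kubectl delete pod flask-app-deployment-7d9f8b6c4-abcde

# The Deployment controller notices the replica count has dropped and
# immediately schedules a replacement pod with a new name.
kubectl get pods -l app=flask-app
```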
Prerequisites for Your Deployment Journey
Before strapping your application into its Kubernetes rocket, ensure you have the following tools in your mission control center:
- A working Kubernetes cluster (either local like Minikube or a managed service)
- kubectl command-line tool installed and configured to communicate with your cluster
- Docker installed for containerizing your application
- A container registry account (Docker Hub, Google Container Registry, etc.)
- Basic understanding of YAML syntax
If you're using a local setup, Kubernetes recommends "a cluster with at least two nodes that are not acting as control plane hosts". This provides redundancy for your applications and better reflects a production environment.
Step 1: Creating a Simple Application
Let's start with something simple: a basic web application. For this tutorial, I'll use a small Flask app, a Python-based web server that displays a "Hello World" message and the hostname of the machine serving the request.
from flask import Flask
import os
import socket

app = Flask(__name__)

@app.route("/")
def hello():
    html = """Hello {name}!
Hostname: {hostname}"""
    return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname())

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
Save this as app.py. Notice how we're using environment variables for customization; this is a best practice for containerized applications, allowing us to inject configuration without changing the code.
Next, create a requirements.txt file with our dependencies:
flask==2.0.1
Step 2: Containerizing Your Application
Now that we have our application, we need to package it for its journey. Just as astronauts need spacesuits to survive in space, applications need containers to survive in Kubernetes.
Create a Dockerfile
in the same directory:
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
EXPOSE 80
CMD ["python", "app.py"]
This Dockerfile does several things:
- Starts with a slim Python 3.9 base image
- Sets the working directory to /app
- Copies and installs dependencies
- Copies our application code
- Exposes port 80
- Specifies the command to run our application
Build the container image with:
docker build -t my-flask-app:v1 .
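Before pushing anywhere, it's worth a quick local smoke test of the image (host port 8080 is an arbitrary choice):

```shell
# Run the container, mapping host port 8080 to the container's port 80
docker run -d --name flask-test -p 8080:80 my-flask-app:v1

# The app should answer with the greeting and the container's hostname
curl http://localhost:8080/

# Clean up the test container
docker rm -f flask-test
```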
If you're planning to use this image in a remote Kubernetes cluster, you'll need to push it to a registry:
# Tag the image with your registry info
docker tag my-flask-app:v1 yourusername/my-flask-app:v1
# Push to the registry
docker push yourusername/my-flask-app:v1
Step 3: Creating Your Kubernetes Deployment Manifest
Now comes the mission briefing document—your deployment manifest. This YAML file instructs Kubernetes on how to deploy your application.
Create a file named deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app-deployment
spec:
  selector:
    matchLabels:
      app: flask-app
  replicas: 2  # Run two copies of our application for redundancy
  template:
    metadata:
      labels:
        app: flask-app
    spec:
      containers:
      - name: flask-app
        image: yourusername/my-flask-app:v1  # Use your image here
        ports:
        - containerPort: 80
        env:
        - name: NAME
          value: "Kubernetes Explorer"
This manifest defines a Deployment that creates two replicas of our application, each running in a separate pod. We've also set an environment variable, NAME, to customize our greeting message.
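Before creating anything, you can ask kubectl to validate the manifest without actually applying it, a cheap way to catch YAML and schema mistakes early:

```shell
# Client-side validation: checks the manifest locally against the schema
kubectl apply -f deployment.yaml --dry-run=client

# Server-side validation: the API server checks it too, without persisting
kubectl apply -f deployment.yaml --dry-run=server
```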
Step 4: Deploying Your Application
Time for lift-off! Deploy your application using kubectl:
kubectl apply -f deployment.yaml
This command sends your deployment manifest to the Kubernetes API server, which then creates the necessary resources to run your application.
Step 5: Verifying Your Deployment
After launch, mission control needs confirmation that everything's working as expected. Check the status of your deployment:
kubectl get deployments
You should see something like:
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
flask-app-deployment   2/2     2            2           30s
To see more details about your deployment:
kubectl describe deployment flask-app-deployment
This command provides a wealth of information about your deployment, including events that occurred during creation, the current state, and any issues encountered.
To see the actual pods created by your deployment:
kubectl get pods
You should see two pods (because we specified replicas: 2) with names starting with flask-app-deployment-.
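Since the Deployment finds its pods by label, you can use the same selector yourself to filter the output:

```shell
# List only the pods carrying our app label, with node placement shown
kubectl get pods -l app=flask-app -o wide

# Watch pod changes in real time (Ctrl+C to stop)
kubectl get pods -l app=flask-app --watch
```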
Step 6: Checking Application Logs
Want to see what's happening inside your application? Just like mission control monitors astronaut communications, you can check your application logs:
kubectl logs <pod-name>
Replace <pod-name> with one of the pod names from the previous command. If you're feeling particularly mission-control-like, you can stream the logs in real time:
kubectl logs -f <pod-name>
The -f flag follows the log output, similar to tail -f on Unix systems.
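If you'd rather not copy pod names around, kubectl can also aggregate logs by Deployment or by label (exact flag support varies slightly between kubectl versions):

```shell
# Logs from one pod of the Deployment, picked for you
kubectl logs deployment/flask-app-deployment

# Logs from all pods matching the label, prefixed with the pod name
kubectl logs -l app=flask-app --prefix
```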
Step 7: Exposing Your Application with a Service
Your application is now running in the cluster, but it's isolated—like an astronaut inside a spacecraft who can't communicate with the outside world. Let's create a communication channel using a Kubernetes Service.
Create a file named service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: flask-app-service
spec:
  selector:
    app: flask-app
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
This service definition:
- Uses the label selector app: flask-app to identify which pods to route traffic to
- Maps port 80 on the service to port 80 on the pods
- Uses type LoadBalancer to expose the service externally (on cloud providers, this provisions an external load balancer)
Apply the service:
kubectl apply -f service.yaml
Alternatively, you can use the kubectl expose command to create a service directly from your deployment:
kubectl expose deployment flask-app-deployment --port=80 --name=flask-app-service --type=LoadBalancer
This command exposes your deployment as a new Kubernetes service, using the selector from the deployment for the new service.
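A quick way to confirm the Service actually found your pods is to inspect its endpoints; each matching pod should show up as an IP:port pair:

```shell
# Each pod matching the selector appears here as IP:port
kubectl get endpoints flask-app-service

# Shows the selector, ports, and any events for the Service
kubectl describe service flask-app-service
```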
Step 8: Accessing Your Application
Check the status of your service:
kubectl get services
You should see something like:
NAME                CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
flask-app-service   10.123.45.67   203.0.113.100   80:30000/TCP   1m
If you're using Minikube or a local Kubernetes setup, you might not get an external IP. In that case, use:
minikube service flask-app-service
This command will open your application in a browser.
If you're on a cloud provider, you can access your application via the EXTERNAL-IP address shown.
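If neither an external IP nor minikube is available, port-forwarding works anywhere kubectl does:

```shell
# Forward local port 8080 to the Service's port 80
kubectl port-forward service/flask-app-service 8080:80

# In another terminal:
curl http://localhost:8080/
```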
Handling Updates: Changing Course Mid-Flight
One of the most powerful features of Kubernetes is its ability to handle application updates seamlessly. Imagine you've improved your application and created a new container image version, yourusername/my-flask-app:v2.
To update your deployment:
- Edit your deployment.yaml to point to the new image version
- Apply the updated manifest:
kubectl apply -f deployment.yaml
Kubernetes will perform a rolling update, gradually replacing old pods with new ones to ensure zero downtime. You can watch this process in real time:
kubectl rollout status deployment/flask-app-deployment
If something goes wrong, you can roll back:
kubectl rollout undo deployment/flask-app-deployment
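As an alternative to editing the manifest, you can update the image in place and inspect the revision history before deciding where to roll back to (the container name flask-app matches our Deployment manifest):

```shell
# Update the container image directly; this triggers a rolling update
kubectl set image deployment/flask-app-deployment flask-app=yourusername/my-flask-app:v2

# Review past revisions of the Deployment
kubectl rollout history deployment/flask-app-deployment

# Roll back to a specific revision rather than just the previous one
kubectl rollout undo deployment/flask-app-deployment --to-revision=1
```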
Scaling Your Application: More Astronauts for the Mission
Need to handle more traffic? Scaling in Kubernetes is straightforward:
kubectl scale deployment flask-app-deployment --replicas=5
This command adjusts the number of pod replicas to five. Your application is now running on five pods distributed across your cluster nodes.
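For traffic that fluctuates, you can let Kubernetes do the scaling for you with a Horizontal Pod Autoscaler (this assumes the metrics-server add-on is installed in your cluster):

```shell
# Keep between 2 and 10 replicas, targeting 80% average CPU utilization
kubectl autoscale deployment flask-app-deployment --min=2 --max=10 --cpu-percent=80

# Check the autoscaler's current state
kubectl get hpa
```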
Common Deployment Issues and Troubleshooting
Even the best-planned space missions encounter unexpected challenges. Here are some common issues and how to troubleshoot them:
Pods Stuck in Pending State
This typically means there aren't enough resources in your cluster. Check the pod events:
kubectl describe pod <pod-name>
Look for events related to resource constraints.
ImagePullBackOff Error
This indicates Kubernetes couldn't pull your container image. Possible causes include:
- Incorrect image name
- Missing credentials for private registries
- Network issues
Check the pod events and ensure your image is accessible.
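If the image lives in a private registry, Kubernetes needs credentials to pull it. A common fix is a docker-registry secret (the registry URL and credentials below are placeholders):

```shell
# Create a registry credential secret in the cluster
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=yourusername \
  --docker-password=<your-password> \
  --docker-email=you@example.com
```

Then reference the secret in your Deployment's pod template by adding imagePullSecrets (with name: regcred) under spec.template.spec.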
CrashLoopBackOff Error
Your application is starting but crashing repeatedly. Check the logs:
kubectl logs <pod-name>
This often reveals application-specific errors that need to be fixed in your code.
Advanced Deployment Strategies
Once you're comfortable with basic deployments, consider these advanced strategies:
Helm Charts
Helm is the package manager for Kubernetes—think of it as apt or yum but for Kubernetes applications. It simplifies complex deployments by packaging all required components together.
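To give a flavor of the workflow, installing a packaged application with Helm is a two-command affair (the Bitnami nginx chart here is just an illustrative example):

```shell
# Add a chart repository and install a release from it
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-nginx bitnami/nginx

# List installed releases
helm list
```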
Continuous Deployment with CI/CD Pipelines
Automate your deployments using CI/CD tools like Jenkins or GitOps approaches with ArgoCD or FluxCD. These tools can automatically deploy your application when you push changes to your code repository.
Blue-Green Deployments
This strategy involves running two identical environments (blue and green) and switching traffic between them during updates, providing a safe rollback mechanism if issues arise.
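A minimal blue-green switch can be done with nothing more than a Service selector change; the version labels below are an assumed convention, not something our earlier manifests define:

```shell
# Assume two Deployments exist, labeled version=blue and version=green.
# Once the green pods are verified, repoint the Service's selector:
kubectl patch service flask-app-service \
  -p '{"spec":{"selector":{"app":"flask-app","version":"green"}}}'
```

Rolling back is the same command with the selector pointed at version=blue.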
The Road Ahead: What You've Accomplished
You've successfully launched your application into the Kubernetes cosmos! Your application is now:
- Running in a containerized environment
- Replicated for redundancy
- Self-healing in case of node failures
- Exposed to the outside world
- Ready for easy updates and scaling
Think about it—what used to take days of server provisioning, software installation, and configuration now happens in minutes with a few YAML files and kubectl commands. You've joined the ranks of cloud-native developers who deploy applications not just to servers, but to entire orchestrated clusters.
As you continue your Kubernetes journey, remember that deployment is just the beginning. In upcoming posts, we'll explore monitoring, logging, and advanced management techniques to ensure your application not only launches successfully but thrives in its Kubernetes home.
After all, getting to space is impressive—but building a sustainable colony there is the real achievement. Your application has made it to Kubernetes; now let's help it flourish.