Introduction to Kubernetes: Orchestration for Everyone
The container orchestration war is over. Kubernetes has won.
Docker Swarm is being quietly abandoned. Mesos is retreating to niche use cases. Cloud providers are all-in on managed Kubernetes services. If you’re going to run containers in production, you need to understand Kubernetes.
Here’s your practical introduction.
What Problem Does Kubernetes Solve?
Docker made it easy to package and run individual containers. But running a single container isn’t the challenge—running hundreds of containers across dozens of machines is.
Kubernetes (K8s) solves the orchestration problem:
- Scheduling: Which machine should run each container?
- Scaling: How do we add more containers when load increases?
- Networking: How do containers find and talk to each other?
- Storage: How do containers access persistent data?
- Health: How do we detect and replace failed containers?
Without orchestration, you’re manually managing all of this. With Kubernetes, you declare what you want, and K8s makes it happen.
Core Concepts
Let’s break down the key abstractions you’ll work with.
Pods
A Pod is the smallest deployable unit in Kubernetes. It’s one or more containers that share network and storage.
Most of the time, a Pod contains a single container. But sometimes you’ll have sidecar containers for logging, monitoring, or proxying.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: myapp:1.0
    ports:
    - containerPort: 8080
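For the sidecar case, the Pod simply lists a second container. A sketch with a hypothetical log-shipping sidecar (both image names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-with-sidecar
spec:
  containers:
  - name: app
    image: myapp:1.0         # placeholder image
    ports:
    - containerPort: 8080
  - name: log-shipper        # hypothetical sidecar container
    image: log-shipper:1.0   # placeholder image
    # Both containers share the Pod's network namespace and any
    # mounted volumes, so the sidecar can read what the app writes.
```

The containers are scheduled together on the same node and live and die as a unit.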
Deployments
You rarely create Pods directly. Instead, you create a Deployment that manages Pods for you.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: myapp:1.0
        ports:
        - containerPort: 8080
This creates 3 replicas of your app. If one dies, Kubernetes automatically creates a new one from the template.
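Because a Deployment owns its Pods through the template, updating the template triggers a gradual rollout. A sketch of the standard `strategy` fields that control how aggressive that rollout is:

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # at most one Pod down at any moment
      maxSurge: 1        # at most one extra Pod above the replica count
```

With settings like these, changing the image (for example, `kubectl set image deployment/my-app app=myapp:1.1`) replaces Pods one at a time instead of all at once.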
Services
Pods are ephemeral—they come and go. Services provide stable networking.
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP
Now other Pods can reach your app at my-app-service:80, regardless of which Pods are currently running.
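You can check this from inside the cluster with a throwaway Pod. A quick sketch, assuming the Deployment and Service above are running:

```shell
# Run a temporary busybox Pod, fetch the Service by its DNS name,
# and delete the Pod when the command exits
kubectl run tmp --rm -it --image=busybox --restart=Never -- \
  wget -qO- http://my-app-service
```

The name resolves via cluster DNS; the Service then load-balances the request across the matching Pods.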
ConfigMaps and Secrets
Configuration should be separate from code. ConfigMaps hold non-sensitive configuration. Secrets hold sensitive data (note: base64-encoded, not encrypted, by default—encryption at rest must be enabled separately).
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_URL: "postgres://db:5432/myapp"
  LOG_LEVEL: "info"
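To consume these values, a Pod references the ConfigMap in its spec. A sketch using `envFrom` to load every key as an environment variable (the image name is a placeholder):

```yaml
spec:
  containers:
  - name: app
    image: myapp:1.0   # placeholder image
    envFrom:
    - configMapRef:
        name: app-config   # keys become env vars: DATABASE_URL, LOG_LEVEL
    # Secrets are consumed the same way, via secretRef instead
```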
Architecture Overview
A Kubernetes cluster has two types of machines:
Control Plane (historically called the master):
- API Server: The interface for all operations
- etcd: Key-value store for cluster state
- Scheduler: Assigns Pods to nodes
- Controller Manager: Runs control loops for resources
Worker Nodes:
- kubelet: Agent that runs Pods
- Container Runtime: Docker, containerd, or CRI-O
- kube-proxy: Network proxy for Services
In managed services (GKE, EKS, AKS), the control plane is run for you, so day to day you mostly deal with worker nodes.
Getting Started Locally
For learning, use one of these local options:
Minikube: The original local K8s. Runs in a VM or a container, depending on the driver.
minikube start
minikube dashboard
Docker Desktop: Includes a single-node K8s cluster. Enable it in Docker Desktop → Preferences → Kubernetes.
kind (Kubernetes in Docker): Runs K8s nodes as Docker containers.
kind create cluster
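kind can also simulate a multi-node cluster from a config file, which is handy for testing scheduling behavior. A minimal sketch:

```yaml
# kind-config.yaml — one control-plane node and two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```

Create it with `kind create cluster --config kind-config.yaml`.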
Your First Deployment
Let’s deploy nginx:
# Create a deployment
kubectl create deployment nginx --image=nginx
# Expose it as a service
kubectl expose deployment nginx --port=80 --type=NodePort
# Check the status
kubectl get pods
kubectl get services
# Access it
minikube service nginx # if using minikube
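From here you can experiment with scaling and self-healing. A sketch, assuming the nginx Deployment created above:

```shell
# Scale up to 3 replicas
kubectl scale deployment nginx --replicas=3

# Delete the Pods and watch the Deployment recreate them
kubectl delete pod -l app=nginx --wait=false
kubectl get pods -w
```

The replica count is declared state: Kubernetes notices the missing Pods and reconciles back to 3.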
Essential kubectl Commands
# View resources
kubectl get pods
kubectl get deployments
kubectl get services
kubectl get all
# Describe resources (detailed info)
kubectl describe pod my-pod
kubectl describe deployment my-app
# View logs
kubectl logs my-pod
kubectl logs -f my-pod # follow
# Execute commands in containers
kubectl exec -it my-pod -- /bin/bash
# Apply configuration
kubectl apply -f deployment.yaml
# Delete resources
kubectl delete deployment my-app
kubectl delete -f deployment.yaml
When to Use Kubernetes
Kubernetes is powerful, but it’s not always the right choice.
Use Kubernetes when:
- You have multiple services that need to scale independently
- You need zero-downtime deployments
- You’re running in multiple environments (dev, staging, prod)
- You need self-healing infrastructure
Consider alternatives when:
- You have a single monolithic application
- Your team is small and you need to move fast
- You’re not yet at scale that requires orchestration
Heroku, Railway, or even plain Docker Compose might be better starting points.
The Ecosystem
Kubernetes is just the foundation. The real power comes from the ecosystem:
- Helm: Package manager for K8s (like apt for Ubuntu)
- Prometheus/Grafana: Monitoring and visualization
- Istio/Linkerd: Service mesh for advanced networking
- ArgoCD/Flux: GitOps continuous deployment
- Cert-Manager: Automated TLS certificates
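As a quick taste of Helm, here is a sketch of installing a packaged chart (the repository and chart shown are the commonly used Bitnami ones; substitute whatever chart you actually need):

```shell
# Add a chart repository and install a release from it
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-nginx bitnami/nginx

# Inspect what the chart deployed
helm list
kubectl get all -l app.kubernetes.io/instance=my-nginx
```

One `helm install` can create the Deployment, Service, ConfigMap, and more in a single step—that is the "package manager" part.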
Don’t try to learn everything at once. Start with core K8s, then add tools as you need them.
Managed vs Self-Managed
Managed Kubernetes (GKE, EKS, AKS):
- Control plane managed for you
- Automatic upgrades
- Integrated with cloud services
- Costs more, but saves operational burden
Self-Managed (kubeadm, Rancher):
- Full control
- Works on any infrastructure
- Requires dedicated expertise
- Better for on-premise or multi-cloud
For most teams, start with a managed service. You can always move to self-managed later.
Final Thoughts
Kubernetes won because it solved real problems at scale. It’s complex, but that complexity exists for a reason—distributed systems are hard.
Start small. Deploy one application. Add complexity as you need it. The learning curve is real, but the skills are valuable across the industry.
The container era is here. Kubernetes is how we manage it.
Embrace the YAML. It gets easier.