Kubernetes
Container Orchestration at Scale
What is K8s?
Kubernetes (K8s) is a container orchestration platform that automates deploying, scaling, and managing containerized applications. It groups containers into logical units (Pods) for easy management and discovery. Originally designed by Google based on their Borg system, it's now maintained by the Cloud Native Computing Foundation (CNCF).
Think of Kubernetes as an automated shipping port
Imagine a massive shipping port with thousands of containers arriving daily. Kubernetes is the port's automated management system—deciding which dock (node) each container goes to, replacing damaged containers, scaling workers when traffic increases, and routing goods to the right destinations.
Key Features
Automated Scheduling
Intelligently places containers based on resource requirements, constraints, and policies.
Self-Healing
Restarts failed containers, replaces containers, and reschedules when nodes die.
Horizontal Scaling
Scale apps automatically based on CPU, memory, or custom metrics.
Service Discovery
Exposes containers via DNS or IP and load balances traffic across them.
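The features above all flow from one idea: you declare a desired state, and Kubernetes continuously reconciles reality to match it. A minimal Deployment manifest sketches this (the names `web` and `nginx:1.27` are illustrative, not from this cheatsheet):

```yaml
# Hypothetical minimal Deployment; names are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # desired state: self-healing restores 3 pods
  selector:
    matchLabels:
      app: web           # which pods this Deployment manages
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          ports:
            - containerPort: 80
```

If a node dies or a pod is deleted, the scheduler places replacements automatically until three replicas are running again.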
When to Use
- Running microservices architectures at scale
- Multi-cloud or hybrid-cloud deployments
- Applications requiring high availability
- Teams practicing GitOps and Infrastructure as Code
- Workloads with variable traffic needing auto-scaling
- Complex applications with multiple services
When Not to Use
- Simple single-container applications (use Docker Compose)
- Small teams without dedicated DevOps expertise
- Applications with minimal scaling requirements
- Tight budgets—K8s has significant operational overhead
- Monolithic applications that don't benefit from orchestration
Prerequisites
- Basic understanding of containers (Docker)
- Familiarity with YAML syntax
- Command-line experience
- Understanding of networking concepts
Installation
Minikube on Linux (local development):
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && sudo install minikube-linux-amd64 /usr/local/bin/minikube && minikube start

Minikube on macOS (local development):
brew install minikube && minikube start

Docker Desktop (built-in K8s):
Enable Kubernetes in Docker Desktop settings.

kubectl on Linux:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" && sudo install kubectl /usr/local/bin/kubectl

Verify installation:
kubectl version --client && kubectl get nodes
Quick Start Steps
Start cluster
Start local Kubernetes cluster
minikube start
Run first pod
Create your first pod
kubectl run nginx --image=nginx
Expose service
Expose the pod as a service
kubectl expose pod nginx --port=80 --type=NodePort
View resources
List all resources
kubectl get all
Access logs
View pod logs
kubectl logs nginx
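After exposing the pod as a NodePort service, one way to reach it (assuming the minikube cluster started in step 1):

```shell
# Print a reachable URL for the NodePort service (minikube only)
minikube service nginx --url

# Or assemble the URL manually from the node IP and assigned port
minikube ip
kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}'
```

NodePort assigns a port in the 30000-32767 range by default, so the exact port varies per cluster.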
Commands by Category

kubectl get pods: List all pods in the current namespace
kubectl apply -f manifest.yaml: Apply configuration from a file
kubectl describe pod nginx: Show detailed pod information
kubectl logs pod-name: Print container logs
kubectl exec -it pod-name -- /bin/bash: Execute a command in a container
kubectl delete -f manifest.yaml: Delete resources from a file
kubectl scale deployment nginx --replicas=5: Scale a deployment
kubectl rollout status deployment/nginx: Check rollout status
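`kubectl scale` sets a replica count by hand; the Horizontal Scaling feature mentioned earlier can instead drive it from metrics. A sketch of an autoscaler manifest, assuming a Deployment named `nginx` (the thresholds are illustrative):

```yaml
# Hypothetical HorizontalPodAutoscaler for the nginx Deployment.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:          # what to scale
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # scale out above 80% average CPU
```

Note the HPA needs resource requests set on the pods (see Pro Tips) and a metrics source such as metrics-server to compute utilization.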
Pro Tips

Use kubectl aliases (workflow)
Set alias k=kubectl, kgp='kubectl get pods'. Add shell completion for productivity.
alias k=kubectl && alias kgp='kubectl get pods'

Always set resource limits (best practice)
Without limits, a single pod can starve the node. Without requests, the scheduler can't make informed decisions. Anti-pattern: deploying without resource specifications.
resources: { limits: { cpu: '500m', memory: '128Mi' } }

Use namespaces to organize (best practice)
Create namespaces for dev/staging/prod. Apply ResourceQuotas per namespace.

Implement health probes (best practice)
Readiness: 'am I ready for traffic?' Liveness: 'am I still alive?' Prevents routing to broken pods.

Use kubectl diff before apply (best practice)
kubectl diff -f manifest.yaml shows what will change before applying. Prevents surprises.
kubectl diff -f deployment.yaml

Use k9s for interactive management (workflow)
k9s provides a terminal UI for navigating clusters, viewing logs, and managing resources.

Implement GitOps with ArgoCD (best practice)
Store all manifests in Git. ArgoCD/Flux sync cluster state with Git, providing an audit trail.

Configure pod anti-affinity for HA (best practice)
Spread replicas across nodes/zones using podAntiAffinity to survive failures.
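The resource-limit and health-probe tips combine in the container spec. A sketch of one container entry; the requests values, port, and /healthz path are illustrative assumptions, not values from this cheatsheet:

```yaml
# Hypothetical container spec fragment (goes under spec.containers).
containers:
  - name: app
    image: myorg/app:1.0       # illustrative image
    resources:
      requests:                # informs the scheduler
        cpu: 250m
        memory: 64Mi
      limits:                  # caps usage so one pod can't starve the node
        cpu: 500m
        memory: 128Mi
    readinessProbe:            # "am I ready for traffic?"
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:             # "am I still alive?" — restarts on failure
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
```

A failing readiness probe removes the pod from Service endpoints without restarting it; a failing liveness probe triggers a container restart.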
Key Facts
A Pod is the smallest deployable unit, not a container
Pods can contain multiple containers that share network and storage. Containers in a pod are always co-located.
Deployments manage ReplicaSets, which manage Pods
Deployment → ReplicaSet → Pods. ReplicaSets handle scaling; Deployments handle updates and rollbacks.
Services provide stable IPs for ephemeral pods
Pods come and go, but Service ClusterIP is stable. Services use selectors to route traffic.
etcd stores all cluster state
The control plane's etcd is the single source of truth. Losing etcd data means losing cluster configuration.
Labels and selectors are fundamental
Resources connect via labels (key-value metadata) and selectors (queries). Services find pods this way.
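The label/selector link can be shown in a pair of manifests; the names here are illustrative:

```yaml
# A Service routes to any pod whose labels match its selector.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # query: pods labeled app=web
  ports:
    - port: 80        # stable ClusterIP port
      targetPort: 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: web-1
  labels:
    app: web          # matched by the Service selector above
spec:
  containers:
    - name: app
      image: nginx
```

Pods carrying the label receive traffic as soon as they are ready; removing the label silently drops a pod from the Service's endpoints, which also makes labels a handy debugging lever.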
ConfigMaps and Secrets have 1MB limit
They're stored in etcd; large values degrade API server performance.
Rolling updates are the default strategy
Deployments update pods gradually: create new, wait for ready, terminate old. Zero-downtime by default.
StatefulSets provide stable pod identities
Unlike Deployments, StatefulSet pods have persistent names (app-0, app-1) and ordered startup/shutdown.
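A StatefulSet sketch showing the stable identities (db-0, db-1) and per-pod storage; the headless Service name, image, and sizes are assumptions:

```yaml
# Hypothetical StatefulSet; pairs with a headless Service named db-headless.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db-headless     # headless Service gives stable per-pod DNS
  replicas: 2                  # pods are created as db-0, then db-1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16   # illustrative image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # each pod gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

Unlike a Deployment's pods, db-0 keeps its name and its volume across restarts, which is what databases and other stateful workloads need.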
Interview & Exam Practice
Production debugging
A pod keeps restarting with CrashLoopBackOff. How do you troubleshoot?
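One reasonable troubleshooting sequence (standard kubectl commands; the pod name is illustrative):

```shell
# 1. Check events and last state: OOMKilled? failed probe? bad image?
kubectl describe pod my-pod

# 2. Read logs from the crashed (previous) container instance
kubectl logs my-pod --previous

# 3. Scan recent cluster events for scheduling or image-pull errors
kubectl get events --sort-by=.metadata.creationTimestamp

# 4. Inspect the last exit code (e.g. 137 suggests an OOM kill)
kubectl get pod my-pod -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'
```

CrashLoopBackOff means the container starts and then exits repeatedly, so the fix is usually in the application, its config/secret values, or an overly aggressive liveness probe rather than in Kubernetes itself.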
Architecture understanding