Chapter 2: Core Pod Concepts
- Deeply understand Pod design philosophy and lifecycle
- Master Pod creation, viewing, and deletion operations
- Learn to configure multi-container Pods and Init containers
- Understand Pod resource limits and health checks
Pod Design Philosophy
What is a Pod
A Pod is the smallest deployable unit in Kubernetes and represents a single instance of a running process in the cluster. A Pod encapsulates one or more containers, storage resources, a unique network IP, and the options that control how the containers run.
- Shared Network: Containers in the same Pod share one IP address and port space and can reach each other via localhost
- Shared Storage: All containers in the Pod can mount the same Volume to share data
- Shared Namespaces: Optionally share PID, IPC, and other namespaces
- Atomic Scheduling: All containers in a Pod are always scheduled to the same node
Why Pods are Needed
Problems Solved by Pods:
| Scenario | Single Container | Multi-container Pod |
|---|---|---|
| Tightly Coupled Apps | Requires network config | Direct localhost communication |
| Shared Storage | Requires extra config | Naturally shares Volumes |
| Co-scheduling | May be distributed | Guaranteed same node |
| Lifecycle | Managed independently | Managed together |
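This sharing is easy to verify with a minimal two-container Pod; the name, images, and loop below are illustrative choices for this sketch, not part of the examples later in the chapter.
# shared-network-demo.yaml (illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: shared-network-demo
spec:
  containers:
  - name: web
    image: nginx:1.24
    ports:
    - containerPort: 80
  # The client reaches nginx via localhost because both containers share the Pod's network namespace
  - name: client
    image: busybox
    command: ['sh', '-c', 'while true; do wget -q -O - http://localhost:80 > /dev/null && echo "reached web via localhost"; sleep 10; done']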
Pod Lifecycle
Lifecycle Phases
A Pod's overall progress through its lifecycle is reported in the status.phase field.
Pod Phase Details
| Phase | Meaning | Common Causes |
|---|---|---|
| Pending | Pod accepted by the cluster but containers not yet running | Image still pulling, insufficient resources, awaiting scheduling |
| Running | Pod bound to a node, all containers created | At least one container is running or starting/restarting |
| Succeeded | All containers terminated successfully | Job-type Pod completed normally |
| Failed | All containers terminated, at least one failed | Container exit code non-zero |
| Unknown | Cannot obtain Pod status | Node communication issues |
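The phase alone can be read with a jsonpath query; a quick sketch (replace <pod-name> with a real Pod):
# Print only the Pod phase
kubectl get pod <pod-name> -o jsonpath='{.status.phase}'
# The STATUS column of kubectl get pods adds more detailed reasons
# (e.g. CrashLoopBackOff, Terminating) on top of the phase
kubectl get pods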
Container States
Within a running Pod, each container additionally reports one of three states (Waiting, Running, or Terminated), visible under status.containerStatuses in the Pod's status.
Restart Policies
apiVersion: v1
kind: Pod
metadata:
name: restart-demo
spec:
restartPolicy: Always # Always | OnFailure | Never
containers:
- name: app
image: nginx
| Policy | Behavior | Use Case |
|---|---|---|
| Always | Always restart after container exits | Long-running services |
| OnFailure | Restart only on failure | Batch processing tasks |
| Never | Never restart | One-time tasks |
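One way to observe the policy in action is to watch the container's restart counter after applying the restart-demo manifest above; a brief sketch:
# Number of times the container has been restarted by the kubelet
kubectl get pod restart-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
# The same value appears in the RESTARTS column
kubectl get pod restart-demo
Restarts are performed with an exponential back-off delay (10s, 20s, 40s, capped at five minutes), which is what the CrashLoopBackOff status reflects.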
Pod Configuration Details
Complete Pod Specification
apiVersion: v1
kind: Pod
metadata:
name: complete-pod-example
namespace: default
labels:
app: myapp
version: v1
annotations:
description: "Complete Pod example"
spec:
# Container definitions
containers:
- name: main-container
image: nginx:1.24
imagePullPolicy: IfNotPresent # Always | Never | IfNotPresent
# Port configuration
ports:
- name: http
containerPort: 80
protocol: TCP
# Environment variables
env:
- name: ENV_VAR
value: "production"
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: SECRET_KEY
valueFrom:
secretKeyRef:
name: my-secret
key: password
# Resource limits
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "256Mi"
cpu: "500m"
# Volume mounts
volumeMounts:
- name: config-volume
mountPath: /etc/config
- name: data-volume
mountPath: /data
# Health checks
livenessProbe:
httpGet:
path: /healthz
port: 80
initialDelaySeconds: 15
periodSeconds: 10
readinessProbe:
httpGet:
path: /ready
port: 80
initialDelaySeconds: 5
periodSeconds: 5
startupProbe:
httpGet:
path: /startup
port: 80
failureThreshold: 30
periodSeconds: 10
# Init containers
initContainers:
- name: init-db
image: busybox
command: ['sh', '-c', 'until nc -z db-service 3306; do sleep 2; done']
# Volume definitions
volumes:
- name: config-volume
configMap:
name: app-config
- name: data-volume
emptyDir: {}
# Scheduling configuration
nodeSelector:
disktype: ssd
tolerations:
- key: "key"
operator: "Equal"
value: "value"
effect: "NoSchedule"
# Service account
serviceAccountName: default
# DNS policy
dnsPolicy: ClusterFirst
# Restart policy
restartPolicy: Always
# Termination grace period
terminationGracePeriodSeconds: 30
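A manifest this large is easiest to sanity-check before creating it. Assuming it is saved as complete-pod-example.yaml (a hypothetical file name), a server-side dry run validates it against the API server, and kubectl explain documents individual fields:
# Validate the manifest without actually creating the Pod
kubectl apply --dry-run=server -f complete-pod-example.yaml
# Built-in documentation for any field in the spec
kubectl explain pod.spec.containers.resources
kubectl explain pod.spec.tolerations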
Resource Management
Resource Unit Explanation:
resources:
requests:
memory: "256Mi" # 256 MiB = 256 * 1024 * 1024 bytes
cpu: "500m" # 500 millicores = 0.5 CPU cores
limits:
memory: "512Mi"
cpu: "1" # 1 CPU core = 1000m
| Resource | Unit | Example | Description |
|---|---|---|---|
| CPU | m (millicores) | 100m, 500m, 1, 2 | 1000m = 1 core |
| Memory | Mi, Gi | 128Mi, 1Gi | Binary units |
| Memory | M, G | 128M, 1G | Decimal units |
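Requests and limits also determine the Pod's QoS class: Guaranteed when every container sets limits equal to requests, Burstable when at least one container sets a request or limit without meeting the Guaranteed condition, and BestEffort when none are set. A minimal sketch of a Guaranteed Pod (name and image are arbitrary):
# qos-guaranteed-demo.yaml (illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-demo
spec:
  containers:
  - name: app
    image: nginx:1.24
    resources:
      # requests == limits for both resources => QoS class Guaranteed
      requests:
        memory: "256Mi"
        cpu: "250m"
      limits:
        memory: "256Mi"
        cpu: "250m"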
Health Checks
Three Probe Types
Liveness Probe
Purpose: Detect whether the container is still alive; if the probe fails, the kubelet restarts the container
livenessProbe:
# HTTP probe
httpGet:
path: /healthz
port: 8080
httpHeaders:
- name: Custom-Header
value: Awesome
initialDelaySeconds: 15 # Initial probe delay
periodSeconds: 10 # Probe interval
timeoutSeconds: 3 # Timeout
successThreshold: 1 # Success threshold
failureThreshold: 3 # Failure threshold
Use Cases: Detect application deadlock, unresponsiveness, etc.
Readiness Probe
Purpose: Detect whether the container is ready to receive traffic; while the probe fails, the Pod is removed from Service endpoints
readinessProbe:
# TCP probe
tcpSocket:
port: 3306
initialDelaySeconds: 5
periodSeconds: 5
failureThreshold: 3
Use Cases: Application needs to load data, warm up cache, etc.
Startup Probe
Purpose: Detect whether the application inside the container has finished starting; liveness and readiness probes are held off until it succeeds
startupProbe:
# Command execution probe
exec:
command:
- cat
- /tmp/healthy
failureThreshold: 30 # Maximum 30 attempts
periodSeconds: 10 # Wait up to 300 seconds
Use Cases: Legacy applications with long startup times
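A startup probe is typically combined with a liveness probe: because the other probes do not run until the startup probe succeeds, a slow-starting container is not killed during initialization. A hedged sketch (the image, path, and port are illustrative):
# Fragment of a container spec for a slow-starting application (illustrative)
containers:
- name: legacy-app
  image: legacy-app:1.0
  startupProbe:
    httpGet:
      path: /healthz
      port: 8080
    failureThreshold: 30    # up to 30 * 10s = 300s allowed for startup
    periodSeconds: 10
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    periodSeconds: 10
    failureThreshold: 3     # takes over only after the startup probe succeeds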
Probe Method Comparison
# Method 1: HTTP GET probe
httpGet:
path: /healthz
port: 8080
scheme: HTTP # or HTTPS
# Method 2: TCP Socket probe
tcpSocket:
port: 3306
# Method 3: Command execution probe
exec:
command:
- sh
- -c
- test -f /app/ready
# Method 4: gRPC probe (K8s 1.24+)
grpc:
port: 50051
service: my.health.v1.Health
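Whichever method is used, failed probes surface as Unhealthy warning events on the Pod, which is usually the fastest way to see why a container is being restarted or kept out of service; a quick sketch:
# Probe failures appear as "Unhealthy" events in the Pod description
kubectl describe pod <pod-name> | grep -i unhealthy
# Or list the Pod's events directly
kubectl get events --field-selector involvedObject.name=<pod-name>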
Multi-Container Pod Patterns
Common Patterns
Three patterns cover most multi-container Pods: Sidecar (a helper that extends the main container), Ambassador (a local proxy for the main container's outbound connections), and Adapter (a translator that normalizes the main container's output).
Sidecar Pattern Example
# Logging Sidecar
apiVersion: v1
kind: Pod
metadata:
name: sidecar-logging
spec:
containers:
# Main application container
- name: app
image: busybox
command: ['sh', '-c', 'while true; do echo "$(date) INFO: App running" >> /var/log/app.log; sleep 5; done']
volumeMounts:
- name: log-volume
mountPath: /var/log
# Sidecar log collection container
- name: log-collector
image: busybox
command: ['sh', '-c', 'tail -f /var/log/app.log']
volumeMounts:
- name: log-volume
mountPath: /var/log
volumes:
- name: log-volume
emptyDir: {}
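On clusters where the SidecarContainers feature is available (Kubernetes 1.28+, enabled by default since 1.29), the same collector can instead be declared as a restartable init container, which starts before the main container and keeps running for the life of the Pod; a hedged sketch of the equivalent spec:
# sidecar-native.yaml (illustrative; requires the SidecarContainers feature)
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-native
spec:
  initContainers:
  - name: log-collector
    image: busybox
    restartPolicy: Always            # marks this init container as a sidecar
    command: ['sh', '-c', 'touch /var/log/app.log && tail -f /var/log/app.log']
    volumeMounts:
    - name: log-volume
      mountPath: /var/log
  containers:
  - name: app
    image: busybox
    command: ['sh', '-c', 'while true; do echo "$(date) INFO: App running" >> /var/log/app.log; sleep 5; done']
    volumeMounts:
    - name: log-volume
      mountPath: /var/log
  volumes:
  - name: log-volume
    emptyDir: {}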
Ambassador Pattern Example
# Database Proxy Ambassador
apiVersion: v1
kind: Pod
metadata:
name: ambassador-proxy
spec:
containers:
# Main application
- name: app
image: myapp:1.0
env:
- name: DB_HOST
value: "localhost" # Access proxy via localhost
- name: DB_PORT
value: "3306"
# Ambassador proxy container
- name: db-proxy
image: mysql-proxy:1.0
env:
- name: MYSQL_HOST
value: "mysql-primary.database.svc.cluster.local"
- name: MYSQL_REPLICA
value: "mysql-replica.database.svc.cluster.local"
ports:
- containerPort: 3306
Adapter Pattern Example
# Monitoring Data Adapter
apiVersion: v1
kind: Pod
metadata:
name: adapter-monitoring
spec:
containers:
# Main application (outputs custom format)
- name: app
image: legacy-app:1.0
ports:
- containerPort: 8080
volumeMounts:
- name: metrics
mountPath: /metrics
# Adapter (converts to Prometheus format)
- name: prometheus-adapter
image: prom-adapter:1.0
ports:
- containerPort: 9090
volumeMounts:
- name: metrics
mountPath: /metrics
volumes:
- name: metrics
emptyDir: {}
Init Containers
Init Container Characteristics
- Always run to completion
- Execute sequentially in definition order
- If an Init container fails, the kubelet retries it according to the Pod's restartPolicy (with restartPolicy: Never, the Pod is marked Failed)
- App containers start only after all Init containers have completed successfully
- Probes not supported (must run to completion)
Init Container Use Cases
apiVersion: v1
kind: Pod
metadata:
name: init-demo
spec:
initContainers:
# Scenario 1: Wait for dependent services to be ready
- name: wait-for-db
image: busybox
command: ['sh', '-c', 'until nc -z mysql-service 3306; do echo waiting for mysql; sleep 2; done']
# Scenario 2: Download configuration files
- name: download-config
image: curlimages/curl
command: ['sh', '-c', 'curl -o /config/app.conf http://config-server/app.conf']
volumeMounts:
- name: config
mountPath: /config
# Scenario 3: Modify system parameters
- name: sysctl
image: busybox
command: ['sh', '-c', 'sysctl -w net.core.somaxconn=65535']
securityContext:
privileged: true
# Scenario 4: Database migration
- name: migrate
image: myapp:1.0
command: ['./migrate', '--database', '$(DATABASE_URL)']
env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: db-secret
key: url
containers:
- name: app
image: myapp:1.0
volumeMounts:
- name: config
mountPath: /config
volumes:
- name: config
emptyDir: {}
Hands-on Practice
Create Pod with Health Checks
# health-check-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: health-check-demo
labels:
app: health-demo
spec:
containers:
- name: web
image: nginx:1.24
ports:
- containerPort: 80
# Liveness probe
livenessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 10
periodSeconds: 5
failureThreshold: 3
# Readiness probe
readinessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 5
periodSeconds: 3
resources:
requests:
memory: "64Mi"
cpu: "100m"
limits:
memory: "128Mi"
cpu: "200m"
# Create Pod
kubectl apply -f health-check-pod.yaml
# Watch Pod status
kubectl get pod health-check-demo -w
# View probe events
kubectl describe pod health-check-demo | grep -A 10 "Events"
# Simulate probe failure (delete nginx default page)
kubectl exec health-check-demo -- rm /usr/share/nginx/html/index.html
# Observe Pod restart
kubectl get pod health-check-demo -w
# Clean up
kubectl delete pod health-check-demo
Create Multi-Container Pod
# multi-container-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: multi-container-demo
spec:
containers:
# Main application container
- name: nginx
image: nginx:1.24
ports:
- containerPort: 80
volumeMounts:
- name: shared-logs
mountPath: /var/log/nginx
# Log collection Sidecar
- name: log-reader
image: busybox
command: ['sh', '-c', 'tail -f /var/log/nginx/access.log']
volumeMounts:
- name: shared-logs
mountPath: /var/log/nginx
volumes:
- name: shared-logs
emptyDir: {}
# Create Pod
kubectl apply -f multi-container-pod.yaml
# List the names of all containers in the Pod
kubectl get pod multi-container-demo -o jsonpath='{.status.containerStatuses[*].name}'
# Access nginx to generate logs
kubectl port-forward pod/multi-container-demo 8080:80 &
curl http://localhost:8080
# View log collector output
kubectl logs multi-container-demo -c log-reader
# Enter different containers
kubectl exec -it multi-container-demo -c nginx -- /bin/bash
kubectl exec -it multi-container-demo -c log-reader -- /bin/sh
# Clean up
kubectl delete pod multi-container-demo
Create Pod with Init Containers
# init-container-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: init-demo
spec:
initContainers:
- name: init-service
image: busybox
command: ['sh', '-c', 'echo "Init container running..."; sleep 5; echo "Init complete!"']
- name: init-data
image: busybox
command: ['sh', '-c', 'echo "Hello from init" > /work-dir/index.html']
volumeMounts:
- name: workdir
mountPath: /work-dir
containers:
- name: nginx
image: nginx:1.24
ports:
- containerPort: 80
volumeMounts:
- name: workdir
mountPath: /usr/share/nginx/html
volumes:
- name: workdir
emptyDir: {}
# Create Pod
kubectl apply -f init-container-pod.yaml
# Watch Pod startup process
kubectl get pod init-demo -w
# View Init container logs
kubectl logs init-demo -c init-service
kubectl logs init-demo -c init-data
# Verify Init container work results
kubectl exec init-demo -- cat /usr/share/nginx/html/index.html
# Clean up
kubectl delete pod init-demo
Resource Limit Demonstration
# resource-demo.yaml
apiVersion: v1
kind: Pod
metadata:
name: resource-demo
spec:
containers:
- name: stress
image: progrium/stress
command: ['stress']
args: ['--vm', '1', '--vm-bytes', '150M', '--vm-hang', '1']
resources:
requests:
memory: "100Mi"
cpu: "100m"
limits:
memory: "200Mi" # Limit 200M, application requests 150M, runs normally
cpu: "500m"
# Create Pod
kubectl apply -f resource-demo.yaml
# View resource usage (requires the metrics-server add-on)
kubectl top pod resource-demo
# View QoS class in Pod details
kubectl get pod resource-demo -o jsonpath='{.status.qosClass}'
# Clean up
kubectl delete pod resource-demo
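To see the limit actually being enforced, the same manifest can be re-applied with --vm-bytes raised above the 200Mi limit (for example 250M); the stress process is then killed by the kernel and the container is reported as OOMKilled. A sketch of how to confirm this, assuming the modified Pod keeps the name resource-demo:
# With --vm-bytes above the memory limit, the container is OOMKilled and,
# since restartPolicy defaults to Always, ends up in CrashLoopBackOff
kubectl get pod resource-demo -w
# The termination reason is recorded in the container's last state
kubectl describe pod resource-demo | grep -A 3 "Last State"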
Common Issues and Troubleshooting
Pod Status Troubleshooting
When a Pod misbehaves, start with its events and logs, then narrow the problem down using the commands below.
Common Troubleshooting Commands
# View Pod detailed status and events
kubectl describe pod <pod-name>
# View Pod logs
kubectl logs <pod-name> # Current logs
kubectl logs <pod-name> --previous # Previous logs
kubectl logs <pod-name> -c <container> # Specific container
kubectl logs <pod-name> --all-containers # All containers
# Enter container for debugging
kubectl exec -it <pod-name> -- /bin/sh
# View container processes
kubectl exec <pod-name> -- ps aux
# View container network
kubectl exec <pod-name> -- netstat -tlnp
# Temporary debug container (K8s 1.23+)
kubectl debug <pod-name> -it --image=busybox
# Copy files for analysis
kubectl cp <pod-name>:/path/to/file ./local-file
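For a container that keeps restarting, the reason and exit code of its previous run are often the quickest clue; a sketch using jsonpath (the Pod name is a placeholder):
# Why did the container last terminate? (e.g. OOMKilled, Error, Completed)
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
# Exit code of that previous run
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'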
- CrashLoopBackOff: Check startup command, environment variables, health check configuration
- ImagePullBackOff: Check image name, tag, repository authentication
- OOMKilled: Increase memory limits or optimize application memory usage
- CreateContainerError: Check security context, storage volume configuration
Summary
Through this chapter, you should have mastered:
- Pod Design Philosophy: Understood the design rationale for Pods as the smallest scheduling unit
- Lifecycle Management: Learned about various states and transitions of Pods and containers
- Resource Configuration: Mastered CPU/Memory request and limit configuration
- Health Checks: Learned use cases and configuration methods for three probe types
- Multi-Container Patterns: Mastered patterns like Sidecar, Ambassador, and Adapter
- Init Containers: Understood use cases and configuration of Init containers
In the next chapter, we will learn about workload controllers, including Deployment, StatefulSet, DaemonSet, etc., and understand how to manage Pod replicas and lifecycle.