Deployment Strategies

Yes ❤️ exactly: both Blue-Green and Canary are deployment strategies.

👉 Their purpose is the same: release a new version of an application safely in Kubernetes (or any system) without downtime.
👉 The difference lies in how traffic is shifted from the old version to the new version:

  • Blue-Green Deployment → the switch happens all at once (100% of traffic moves to the new version).
  • Canary Deployment → the switch is gradual (e.g., 5% → 10% → 50% → 100%).

They’re both valid strategies, and teams usually choose based on:

  • Risk tolerance (instant cutover vs slow rollout).
  • Resources available (Blue-Green is more costly).
  • Business need (fast rollback vs safe testing).

⚡ So yes, both are deployment strategies under the broader umbrella of Continuous Delivery / Release Management.


📌 Blue-Green Deployment Strategy

  • Concept:
    • Maintain two identical environments:
      • Blue → the current live version.
      • Green → the new version (on standby).
    • Only one of them receives production traffic at a time.
  • Traffic switching:
    • Achieved using labels + selectors in Kubernetes.
    • The Service routes traffic to Pods based on labels.
    • During the switch, update the Service's selector label from blue → green.
  • Rollback:
    • Simply re-point the Service back to the old label.
    • No Pod termination or re-creation is needed; the traffic switch is effectively instant.

Example YAML (Blue-Green with Labels)

# --------------------------
# Service (entry point)
# --------------------------
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp       # Constant app name
    version: blue    # 👈 Service currently pointing to the "blue" version
  ports:
    - port: 80
      targetPort: 8080
---
# --------------------------
# Blue Deployment
# --------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: blue   # 👈 Label matches the Service selector
  template:
    metadata:
      labels:
        app: myapp
        version: blue
    spec:
      containers:
        - name: myapp
          image: myapp:v1   # Old version
          ports:
            - containerPort: 8080
---
# --------------------------
# Green Deployment
# --------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: green   # 👈 Label for the new version
  template:
    metadata:
      labels:
        app: myapp
        version: green
    spec:
      containers:
        - name: myapp
          image: myapp:v2   # New version
          ports:
            - containerPort: 8080

⚡ Switching Traffic (Blue → Green)

  • Initially, the Service selector targets the Blue pods:

selector:
  app: myapp
  version: blue   # Traffic goes to Blue pods

  • After the switch:

selector:
  app: myapp
  version: green  # 👈 Change here → traffic now goes to Green pods

  • Rollback: just change green back to blue.
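The selector change above can also be applied imperatively. A minimal sketch, assuming the Service from the YAML above is named myapp-service:

```shell
# Cut over from Blue to Green by patching the Service selector
kubectl patch service myapp-service \
  -p '{"spec":{"selector":{"app":"myapp","version":"green"}}}'

# Rollback: point the selector back at Blue
kubectl patch service myapp-service \
  -p '{"spec":{"selector":{"app":"myapp","version":"blue"}}}'
```

These commands act on a live cluster, so treat them as a template rather than something to run blindly.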

📌 Canary Deployment Strategy

  • Concept:
    • Deploy a small subset of pods running the new version (the canary).
    • Run them alongside the stable version.
    • The Service points to both via a common label → traffic is split between stable and canary.
    • Gradually increase canary replicas until the stable version is fully replaced.
  • Traffic control:
    • Labels + replica counts control the distribution.
    • Example: ~90% of traffic to stable (9 pods), ~10% to canary (1 pod).
    • Advanced routing (e.g., Istio or NGINX Ingress) can provide exact percentage-based splits.
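For exact percentage-based splits that do not depend on replica counts, a service mesh can weight routes directly. A hypothetical sketch using Istio (the resource names myapp-vs and myapp-dr are made up; it assumes Istio is installed and the pods carry the version labels shown below):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp-vs
spec:
  hosts:
    - myapp-service
  http:
    - route:
        - destination:
            host: myapp-service
            subset: stable
          weight: 90        # 90% of traffic, regardless of replica counts
        - destination:
            host: myapp-service
            subset: canary
          weight: 10        # 10% of traffic
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myapp-dr
spec:
  host: myapp-service
  subsets:
    - name: stable
      labels:
        version: stable
    - name: canary
      labels:
        version: canary
```

With weights, shifting traffic means editing two numbers instead of rescaling Deployments.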

Example YAML (Canary with Labels)

# --------------------------
# Service (entry point)
# --------------------------
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp   # 👈 Service only looks at "app: myapp"
  ports:
    - port: 80
      targetPort: 8080
---
# --------------------------
# Stable Deployment
# --------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
spec:
  replicas: 9       # 👈 ~90% of traffic (9 pods)
  selector:
    matchLabels:
      app: myapp
      version: stable
  template:
    metadata:
      labels:
        app: myapp
        version: stable
    spec:
      containers:
        - name: myapp
          image: myapp:v1   # Stable version
          ports:
            - containerPort: 8080
---
# --------------------------
# Canary Deployment
# --------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1       # 👈 ~10% of traffic (1 pod)
  selector:
    matchLabels:
      app: myapp
      version: canary
  template:
    metadata:
      labels:
        app: myapp
        version: canary
    spec:
      containers:
        - name: myapp
          image: myapp:v2   # New version (canary)
          ports:
            - containerPort: 8080

⚡ How Traffic Splitting Works

  • The Service selects pods by app: myapp (it ignores the version label).
  • Both stable (version: stable) and canary (version: canary) pods match → the Service sends traffic to both.
  • The traffic ratio roughly follows the replica counts:
    • 9 stable pods → ~90% of traffic.
    • 1 canary pod → ~10% of traffic.
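The replica-based split can be sketched numerically. This is a hypothetical simulation, not Kubernetes code: it assumes each request lands on one of the Service's backing pods roughly uniformly at random, which approximates how kube-proxy distributes new connections.

```python
import random

# Hypothetical simulation of replica-based traffic splitting.
# Assumption: the Service picks a backing pod roughly uniformly at
# random per request (an approximation of kube-proxy's behavior).
endpoints = ["stable"] * 9 + ["canary"] * 1  # 9 stable pods, 1 canary pod

random.seed(0)  # fixed seed so the run is repeatable
hits = {"stable": 0, "canary": 0}
for _ in range(10_000):
    hits[random.choice(endpoints)] += 1

print(hits)  # roughly 9000 stable / 1000 canary
```

The split converges to the replica ratio only in aggregate; any individual request still has a 10% chance of hitting the canary.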

Canary Progression

  1. Start with a small replica count for the canary (e.g., 1 pod).
  2. Monitor metrics, logs, and error rates.
  3. Gradually increase canary replicas → shift more traffic.
  4. Once the new version proves stable, scale the stable Deployment down to 0 and run only the canary.
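The progression above maps onto ordinary scaling commands. A sketch, assuming the Deployment names from the YAML above and roughly even load balancing across pods:

```shell
# Step 1: small canary (1 of 10 pods ≈ 10% of traffic)
kubectl scale deployment myapp-canary --replicas=1

# Step 3: widen the canary (5 of 10 pods ≈ 50%)
kubectl scale deployment myapp-canary --replicas=5
kubectl scale deployment myapp-stable --replicas=5

# Step 4: promote the canary fully (100%)
kubectl scale deployment myapp-canary --replicas=10
kubectl scale deployment myapp-stable --replicas=0
```

In practice each step would be gated on the monitoring from step 2 before proceeding.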

📊 Blue-Green vs Canary Deployment (Detailed Difference)

| Aspect | Blue-Green Deployment | Canary Deployment |
| --- | --- | --- |
| Core idea | Maintain two identical environments: Blue (current) and Green (new). Switch all traffic at once by changing the Service selector. | Deploy a small subset (canary) of the new version alongside the stable version. Gradually shift traffic. |
| Label usage | Service selector uses app: myapp + version: blue/green. Switching = change the Service selector from blue → green. | Service selector uses only the common label app: myapp. Stable and canary pods share app: myapp but differ in version: stable/canary. The traffic split is controlled by replica counts (or advanced routing tools). |
| Traffic shift | Instant and complete: 100% of traffic moves to the new version once the Service selector changes. | Gradual: a percentage of traffic flows to the canary based on replicas (e.g., 9 stable + 1 canary ≈ 90/10 split). |
| Rollback | Very fast → simply change the Service selector back to the old label. | Scale down or remove the canary while keeping the stable pods, or adjust replicas. |
| Risk level | Higher risk ⚠️ (all traffic moves to the new version at once). | Lower risk ✅ (a small % of traffic is exposed first, so issues are detected early). |
| Resource usage | Requires two full environments (Blue + Green), doubling resource cost during the rollout. | Only a few canary pods in addition to the stable pods → less resource-heavy. |
| Use case | Zero-downtime cutover; infra cost is not a concern; rollback speed is critical. | Test the new version with real traffic gradually; safer, monitoring-based rollouts. |
| YAML difference | Service points to version: blue or version: green, e.g. selector: {app: myapp, version: blue}, later switched to green. | Service points only to the common label app: myapp, e.g. selector: {app: myapp}; both stable and canary pods match, so traffic splits automatically. |

🔑 Summary

  • Blue-Green = fast switch, higher risk, needs a Service label change (blue ↔ green).
  • Canary = gradual rollout, safer, needs replica adjustments (stable vs canary pods).