# LimitRange and Pod Scheduling: A Case Study
This document explains how a LimitRange interacts with Pods in Kubernetes, particularly when resource requests and limits are defined or omitted.
We analyze two example Pods and determine which one will be scheduled, and why.
## LimitRange Manifest
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-resource-constraint
spec:
  limits:
  - default:          # Applies if no limits are defined in the container
      cpu: 500m
    defaultRequest:   # Applies if no requests are defined in the container
      cpu: 500m
    max:              # Upper bound for requests and limits
      cpu: "1"
    min:              # Lower bound for requests and limits
      cpu: 100m
    type: Container
```
### What this means
- If a container doesn't define `requests` or `limits`, the default values (500m) are applied.
- Any container's CPU must fall between `100m` and `1` (i.e., 100m ≤ value ≤ 1000m).
- If only one of request or limit is defined, Kubernetes defaults the other using this policy.
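The defaulting and validation rules above can be sketched in Python. This is a simplified model for illustration, not the actual admission-controller code; the function name `admit` and the millicore representation are assumptions:

```python
# Simplified model of LimitRange defaulting and validation for CPU,
# with all values expressed in millicores.

DEFAULT_LIMIT = 500     # default: cpu: 500m
DEFAULT_REQUEST = 500   # defaultRequest: cpu: 500m
MIN_CPU = 100           # min: cpu: 100m
MAX_CPU = 1000          # max: cpu: "1" (= 1000m)

def admit(request=None, limit=None):
    """Apply LimitRange defaults, then validate; return (request, limit)."""
    if limit is None:
        limit = DEFAULT_LIMIT        # fill in omitted limit
    if request is None:
        request = DEFAULT_REQUEST    # fill in omitted request
    if not (MIN_CPU <= request <= MAX_CPU):
        raise ValueError(f"request {request}m outside [100m, 1000m]")
    if not (MIN_CPU <= limit <= MAX_CPU):
        raise ValueError(f"limit {limit}m outside [100m, 1000m]")
    if request > limit:
        raise ValueError(f"request {request}m must be <= limit {limit}m")
    return request, limit
```

For example, `admit()` with no values returns `(500, 500)` after defaulting, while `admit(request=700)` raises, because the defaulted limit (500m) is below the request.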
## Pod-One: Fails to Schedule
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-one
spec:
  containers:
  - name: demo
    image: registry.k8s.io/pause:3.8
    resources:
      requests:
        cpu: 700m
```
### What happens here?
- Request defined: 700m
- Limit not defined: defaults to 500m (from the LimitRange)

Result:

- Invalid: `requests.cpu` (700m) > `limits.cpu` (500m), which violates the policy
- The Pod is rejected and will not be scheduled
- Error message:

```
spec.containers[].resources.requests.cpu: Invalid value: "700m": must be less than or equal to cpu limit
```
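The rejected arithmetic can be checked directly. A tiny sketch in millicores; the variable names are illustrative:

```python
# Pod-one in millicores: the request is set, the limit is omitted,
# so the LimitRange default (500m) fills it in at admission time.
request = 700
limit = 500            # defaulted from `default: cpu: 500m`

# The check that rejects the Pod:
valid = request <= limit
print(valid)           # False -> "must be less than or equal to cpu limit"
```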
## Pod-Two: Successfully Scheduled
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-two
spec:
  containers:
  - name: demo
    image: registry.k8s.io/pause:3.8
    resources:
      requests:
        cpu: 700m
      limits:
        cpu: 700m
```
### What happens here?
- Request = Limit = 700m
- Within the LimitRange bounds: 100m ≤ 700m ≤ 1 (1000m)
- No defaulting required

The Pod is scheduled successfully.
## Summary Table
| Pod Name | requests.cpu | limits.cpu | Result | Why |
|---|---|---|---|---|
| pod-one | 700m | 500m (defaulted) | Rejected | Request > Limit |
| pod-two | 700m | 700m | Accepted | All values valid |
## Let's Add a ResourceQuota
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
spec:
  hard:
    requests.cpu: "1"
    limits.cpu: "2"
```
### Now Let's Analyze
- Pod-Two: `requests.cpu = 700m`, `limits.cpu = 700m`
  - Fits within the quota (request ≤ 1, limit ≤ 2)
- Adding another Pod with the same values would exceed the quota: total `requests.cpu` would reach 1400m > 1

In short: the LimitRange enforces individual Pod constraints, while the ResourceQuota enforces total usage in the namespace.
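The namespace-level accounting can be sketched the same way. A simplified model in millicores; the real quota controller tracks many more resource kinds, and the function name `try_admit` is an assumption:

```python
# Simplified ResourceQuota accounting in millicores.
HARD_REQUESTS = 1000   # requests.cpu: "1"
HARD_LIMITS = 2000     # limits.cpu: "2"

used_requests = 0
used_limits = 0

def try_admit(request, limit):
    """Admit a Pod only if namespace totals stay within the quota."""
    global used_requests, used_limits
    if used_requests + request > HARD_REQUESTS:
        return False   # would exceed requests.cpu
    if used_limits + limit > HARD_LIMITS:
        return False   # would exceed limits.cpu
    used_requests += request
    used_limits += limit
    return True

print(try_admit(700, 700))  # True  -- pod-two fits (700m <= 1000m)
print(try_admit(700, 700))  # False -- a second identical Pod would push
                            #          total requests to 1400m > 1000m
```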
## Let's Create a Multi-container Pod with ResourceQuota and LimitRange in place
We will create a Pod with two containers, keeping in mind the LimitRange and ResourceQuota we created earlier.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 300m
      limits:
        cpu: 500m
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    resources:
      requests:
        cpu: 300m
      limits:
        cpu: 500m
```
### Evaluation
| Checkpoint | Value | Constraint | Result |
|---|---|---|---|
| `requests.cpu` total | 300m + 300m = 600m | ≤ 1 (1000m), ResourceQuota | Pass |
| `limits.cpu` total | 500m + 500m = 1000m | ≤ 2 (2000m), ResourceQuota | Pass |
| Per-container requests | 300m each | ≥ 100m, LimitRange minimum | Pass |
| Per-container limits | 500m each | ≤ 1 core, LimitRange maximum | Pass |
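The table's arithmetic can be verified with a short script. A simplified check in millicores, assuming an otherwise empty namespace:

```python
# Verify the multi-container Pod against both policies (millicores).
containers = [
    {"name": "app", "request": 300, "limit": 500},
    {"name": "sidecar", "request": 300, "limit": 500},
]

# LimitRange: each container's request and limit must sit in [100m, 1000m].
per_container_ok = all(
    100 <= c["request"] <= 1000 and 100 <= c["limit"] <= 1000
    for c in containers
)

# ResourceQuota: sum across all containers in the Pod.
total_requests = sum(c["request"] for c in containers)  # 600m
total_limits = sum(c["limit"] for c in containers)      # 1000m
quota_ok = total_requests <= 1000 and total_limits <= 2000

print(per_container_ok and quota_ok)  # True -> the Pod is admitted
```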
### Conclusion
The multi-container Pod will be scheduled because:

- Each container respects the `LimitRange` (min ≤ cpu ≤ max)
- The sums of `requests` and `limits` across containers are within the `ResourceQuota`
## Final Notes
- Always check `LimitRange` and `ResourceQuota` together for scheduling decisions.
- `requests` must be ≤ `limits`.
- `requests` and `limits` must fall between the `min` and `max` defined in the `LimitRange`.
- Namespace-wide usage must respect the `ResourceQuota`.
- When using a `LimitRange`, be aware that omitted values may be filled in by the policy.
- Use `kubectl describe limitrange <name>` to inspect your active policy.