Kubernetes taints serve a critical role in managing pod scheduling by controlling which nodes can accept specific pods. By applying taints to nodes, you can ensure that only pods with matching tolerations are scheduled on those nodes. This mechanism is particularly useful for maintaining the stability and efficiency of a Kubernetes cluster. It allows for isolating workloads based on various criteria—such as resource requirements, performance characteristics, or security levels—but it doesn’t guarantee exclusivity, meaning other tolerating pods can also be placed on these nodes.

In this article, you will learn about Kubernetes taints and why and how they are applied. We also provide a step-by-step guide demonstrating how taints and tolerations affect pod scheduling and highlight the common use cases for Kubernetes taints.

Summary of Kubernetes Taints Concepts

| Concept | Description |
| --- | --- |
| Taint the node, tolerate the pod | Using taints and tolerations together in Kubernetes provides a powerful mechanism for controlling pod scheduling on nodes. This combination ensures that only the right pods are scheduled on specific nodes based on the taints applied and the tolerations defined in pod specifications. |
| Kubectl taint command | The kubectl taint command is used to manage taints on Kubernetes nodes, which control the scheduling of pods on these nodes. |
| Use cases for Kubernetes taints | Common use cases for Kubernetes taints include dedicated nodes, resource management, and isolating testing and development environments from production. |

Taint the Node, Tolerate the Pod

A taint is a key-value pair with an associated effect that is applied to a node. Tolerations are specified in pod definitions to allow those pods to be scheduled on nodes with matching taints. The toleration essentially “tolerates” the taint, bypassing the restriction it imposes. Tolerations have a key, operator, value, and effect corresponding to the taint they tolerate.

You apply taints to nodes to mark them as special-purpose or to indicate that they should not accept certain types of pods. Then, you configure the pods to include tolerations in their specifications to indicate that they can be scheduled on nodes with matching taints.

When Kubernetes schedules a pod, it checks the taints on nodes and the tolerations on pods. If a node has a taint, only pods with a matching toleration can be scheduled on that node. Depending on the effect—NoSchedule, PreferNoSchedule, or NoExecute, which will be discussed in the following section—the taint-toleration mechanism either prevents, discourages, or evicts pods without matching tolerations from being scheduled or remaining on the node.
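
To make the matching rule concrete, here is a minimal toleration sketch (the dedicated key and app1 value are placeholders). With the Equal operator, both the key and value must match the taint; with Exists, any taint with the given key matches regardless of its value.

# yaml
tolerations:
- key: "dedicated"
  operator: "Equal"      # key and value must both match the taint
  value: "app1"
  effect: "NoSchedule"
- key: "dedicated"
  operator: "Exists"     # matches any taint with this key, whatever its value
  effect: "NoSchedule"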

It is worth mentioning the node affinity feature and how it overlaps with taints and tolerations. Node affinity allows for more granular control: you label nodes and then reference those labels in affinity rules on pods, steering pods onto nodes with matching labels (for example, green pods onto green-labeled nodes). However, node affinity does not prevent pods without those affinities from being scheduled on the same nodes.

Combining taints and tolerations with node affinity can provide a more powerful solution by both preventing unwanted pod placement and guiding pods to preferred nodes.
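
As a sketch of this combination, the pod below tolerates a hypothetical dedicated=gpu:NoSchedule taint and also requires a dedicated=gpu node label via node affinity, so it is both allowed onto the dedicated nodes and steered toward them. The taint key and node label here are illustrative and assumed to mirror each other.

# yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  tolerations:                       # allows scheduling onto the tainted nodes
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:                    # requires a node carrying the matching label
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: dedicated           # node label, assumed to mirror the taint key
            operator: In
            values: ["gpu"]
  containers:
  - name: nginx
    image: nginx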

Kubectl taint Command

The kubectl taint command adds, modifies, or removes taints on Kubernetes nodes. The basic structure of the command is as follows:

$ kubectl taint nodes <node-name> <key>=<value>:<effect>

Where:

  • <node-name> is the name of the node to which the taint is applied.
  • <key>=<value>:<effect> refers to the taint being added, modified, or removed:
    • <key> is the key of the taint.
    • <value> is the value of the taint.
    • <effect> is the effect of the taint.
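
For example, assuming a node named node1, the following commands add a taint, change its value in place with the --overwrite flag, and remove it by appending a hyphen to the effect:

# sh
$ kubectl taint nodes node1 dedicated=app1:NoSchedule               # add
$ kubectl taint nodes node1 dedicated=app2:NoSchedule --overwrite   # modify
$ kubectl taint nodes node1 dedicated=app2:NoSchedule-              # remove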

Possible Values for Effect in Kubernetes Taints

When using the kubectl taint command, the <effect> parameter specifies the action Kubernetes takes when a pod without a matching toleration needs to be scheduled. The possible values are NoSchedule, PreferNoSchedule, and NoExecute.

The table below explains each of these values.

| Value | Description | Use case |
| --- | --- | --- |
| NoSchedule | Pods that do not tolerate the taint will not be scheduled on the node. This is a strict restriction. | Used to reserve nodes for specific workloads, ensuring that no other pods are scheduled on these nodes. |
| PreferNoSchedule | Kubernetes will try to avoid scheduling pods that do not tolerate the taint on the node, but it is not guaranteed. This is a soft preference. | Useful for nodes that should generally not accept certain pods but can do so if necessary (e.g., during high load or resource constraints). |
| NoExecute | Pods that do not tolerate the taint will be evicted if they are already running on the node, and new pods that do not tolerate the taint will not be scheduled on the node. | Ideal for scenarios where nodes need to be drained of specific pods, such as during maintenance or when deprecating certain nodes. |
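
With NoExecute, a toleration may also specify tolerationSeconds, which lets an already-running pod stay bound to the node for a grace period after the taint is applied before being evicted. A minimal sketch, with an illustrative maintenance key:

# yaml
tolerations:
- key: "maintenance"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300   # pod is evicted 5 minutes after the taint appears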

Demo: How Taints Affect Pod Scheduling

A practical example of applying Kubernetes taints is segregating different types of workloads within the same cluster. For example, production workloads can be isolated from development or testing environments, reducing the risk of resource contention and performance degradation. This proactive approach to node management contributes to a more resilient Kubernetes environment with a significantly lower risk of application downtime and performance issues.

This section demonstrates how taints and tolerations influence pod scheduling in a Kubernetes cluster. To proceed, make sure you have the following prerequisites:

  • A running Kubernetes cluster. You may use minikube or kind to set up one locally.
  • The kubectl command-line tool installed and configured. 

Step 1: Identify a node to apply the taint

First, identify a node in your cluster where you will apply the taint. 

# sh
$ kubectl get nodes

Step 2: Apply a taint to the node

Choose a node, which here we will call node1. The command below taints node1 with the key-value pair dedicated=app1 and the effect NoSchedule, which means that pods that do not tolerate the taint will not be scheduled on the node.

# sh
$ kubectl taint nodes node1 dedicated=app1:NoSchedule

Step 3: Create a pod without a toleration

Create a pod definition file named non-tolerating-pod.yaml without a toleration.

# yaml
apiVersion: v1
kind: Pod
metadata:
  name: non-tolerating-pod
spec:
  containers:
  - name: nginx
    image: nginx

Then apply this pod configuration.

# sh
$ kubectl apply -f non-tolerating-pod.yaml

Step 4: Observe pod scheduling

Check the status of the pod to see if it has been scheduled.

# sh
$ kubectl get pods
$ kubectl describe pods non-tolerating-pod

You should see that the non-tolerating-pod is not scheduled on node1 because it does not tolerate the taint.

If you are running a single-node cluster for this demo, the pod will remain in the Pending state. If you describe the pod, you will see a FailedScheduling event as shown below.

 Warning  FailedScheduling  34s   default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {dedicated: app1}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

An untolerated taint means that the pod does not have a matching toleration for the taint on that node.

Step 5: Create a pod with a toleration

Create another pod definition file named tolerating-pod.yaml with a toleration.

# yaml
apiVersion: v1
kind: Pod
metadata:
  name: tolerating-pod
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "app1"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx

Then apply this pod configuration.

# sh
$ kubectl apply -f tolerating-pod.yaml

Step 6: Observe pod scheduling again

Check the status of the new pod to see if it has been scheduled.

# sh
$ kubectl get pods tolerating-pod -o wide

You should see that tolerating-pod is scheduled on node1, as it has a matching toleration for the taint applied to node1.

Step 7: Remove the taint

Finally, you can remove the taint from node1 by appending a hyphen to the effect, as in the example below. Then, verify that the taint has been removed by using kubectl describe:

# sh
$ kubectl taint nodes node1 dedicated=app1:NoSchedule-
$ kubectl describe nodes node1 | grep Taints
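
Assuming no other taints remain on node1, the grep should return output similar to:

Taints:             <none>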

Common Use Cases for Kubernetes Taints

Let’s review some common use cases for Kubernetes taints, including dedicated nodes, resource management, and isolating testing and development environments from production. 

Dedicated Nodes: Ensuring that Specific Workloads Run on Designated Nodes

One primary use case for taints is dedicating certain nodes to specific workloads. This is particularly useful for nodes with specialized hardware or specific performance requirements. For example, a node with a GPU can be tainted to ensure that only machine learning or data processing workloads that require GPU resources are scheduled on it. This optimizes resource utilization and ensures critical workloads have access to the necessary hardware without interference from other pods.

Example:

# sh
$ kubectl taint nodes gpu-node dedicated=gpu:NoSchedule

In this scenario, only pods with the corresponding toleration can be scheduled on gpu-node, ensuring that GPU resources are reserved for the appropriate workloads.

For a pod to be scheduled on that node, it has to be configured as shown below:

# yaml
apiVersion: v1
kind: Pod
metadata:
  name: tolerating-pod
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx

Resource Management: Preventing the Overloading of Critical Nodes

Taints are also valuable for managing resources and preventing the overloading of critical nodes. By applying taints, administrators can prevent non-essential pods from being scheduled on nodes that are reserved for critical services. This is crucial in maintaining the performance and reliability of essential applications, particularly in environments with mixed workloads where resource contention can be a concern.

Example:

# sh
$ kubectl taint nodes critical-node critical-service=true:NoSchedule

This ensures that only pods with a toleration for critical-service=true can be scheduled on critical-node, preventing less critical pods from consuming valuable resources.

For a pod to be scheduled on that node, it has to be configured as shown below:

# yaml
apiVersion: v1
kind: Pod
metadata:
  name: tolerating-pod
spec:
  tolerations:
  - key: "critical-service"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx

Testing and Development: Isolating Test Environments from Production

Another common use case for taints is to isolate testing and development environments from production. In a shared Kubernetes cluster, it is important to ensure that test and development workloads do not interfere with production applications. By tainting nodes designated for testing and development, administrators can ensure that these environments remain separate, reducing the risk of resource contention and accidental interference with production workloads.

Example:

# sh
$ kubectl taint nodes dev-node environment=dev:NoSchedule

In this example, only pods with the appropriate toleration for environment=dev will be scheduled on dev-node, ensuring that development and test workloads are isolated from production.

A great example of using Kubernetes taints through Karpenter can be found in this article.

Conclusion

Kubernetes taints and tolerations are powerful concepts for managing containerized applications’ complex and dynamic scheduling requirements. They provide engineers with the means to enforce scheduling policies, ensuring that nodes are utilized according to their intended purposes and that workloads are effectively isolated and prioritized.

This article discussed the theory behind Kubernetes taints and why and how to apply them through the kubectl taint command. In addition, we discussed the possible effects when applying a taint to a node. Finally, we reviewed the most common use cases for Kubernetes taints.
