Kubernetes Learning Path: Adding Worker Nodes and Distributing Pods
I noticed something interesting about my cluster: my application pods were unevenly distributed. Some were running on the master node (which shouldn’t happen), some on worker 1, but worker 2 was barely being used. My background worker pods were split between the master and worker 2, but not on worker 1.
I started looking into it—there had to be a way to tell Kubernetes to distribute pods evenly across all worker nodes and keep them off the master.
In this post, I’ll show you how to add a new worker node to your cluster and configure Kubernetes to automatically distribute your pods across all workers while keeping the master node free for control plane tasks.
Before We Start: What Your Cluster Looks Like Now
Let’s first check what your current setup looks like:
kubectl get nodes
You might see something like this:
NAME           STATUS   ROLES                  AGE   VERSION
k3s-master     Ready    control-plane,master   30d   v1.28.2+k3s1
k3s-worker-1   Ready    <none>                 30d   v1.28.2+k3s1
k3s-worker-2   Ready    <none>                 30d   v1.28.2+k3s1
Now let’s see where your pods are running:
kubectl get pods -o wide
Before (the problem):
NAME                         READY   STATUS    RESTARTS   AGE   NODE
rails-app-7d4f8c9b5-abc12    1/1     Running   0          2d    k3s-master
rails-app-7d4f8c9b5-def34    1/1     Running   0          2d    k3s-worker-1
solid-queue-5f8a3b2c-ghi56   1/1     Running   0          2d    k3s-master
solid-queue-5f8a3b2c-jkl78   1/1     Running   0          2d    k3s-worker-2
postgres-0                   1/1     Running   0          5d    k3s-worker-1
See the problems?
- Some app pods are on k3s-master (they shouldn't be there)
- App pods are on master and worker 1, but worker 2 is barely used
- Background worker pods are split between master and worker 2, but not on worker 1
- The distribution is uneven and inefficient
Part 1: Adding a New Worker Node
Let’s add a third worker node to your cluster. This process is straightforward, but you need to do it in the right order.
Step 1: Set the Hostname on the New Worker
First, SSH into your new worker node (the physical machine you want to add). Set a hostname so it’s easy to identify:
# On the new worker node
sudo hostnamectl set-hostname k3s-worker-3
# Verify it worked
hostname
You should see k3s-worker-3 printed. This name will show up when you run kubectl get nodes later.
Step 2: Get the Join Token from Master
Now, on your master node, get the token that allows new nodes to join the cluster:
# On the master node
sudo cat /var/lib/rancher/k3s/server/node-token
This will print a long string of random characters. Copy this token—you’ll need it in the next step. Think of it like a password that lets new workers join your cluster.
Step 3: Join the New Worker to the Cluster
Back on your new worker node, run this command to join it to the cluster. Replace <MASTER_IP> with your master node’s IP address and <TOKEN_FROM_MASTER> with the token you just copied:
# On the new worker node
curl -sfL https://get.k3s.io | \
K3S_URL=https://<MASTER_IP>:6443 \
K3S_TOKEN=<TOKEN_FROM_MASTER> \
sh -
For example, if your master IP is 192.168.1.100 and your token is K10abc123def456..., it would look like:
curl -sfL https://get.k3s.io | \
K3S_URL=https://192.168.1.100:6443 \
K3S_TOKEN=K10abc123def456... \
sh -
This downloads and installs k3s on the worker node and connects it to your cluster. It might take a minute or two.
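As a side note, if you'd rather not change the machine's hostname at all, the k3s install script can also take the node name directly. This is an alternative to Step 1, not part of my original setup; K3S_NODE_NAME is the environment variable that maps to the agent's --node-name option:

# Alternative to Step 1: keep the OS hostname, set only the Kubernetes node name
curl -sfL https://get.k3s.io | \
K3S_URL=https://<MASTER_IP>:6443 \
K3S_TOKEN=<TOKEN_FROM_MASTER> \
K3S_NODE_NAME=k3s-worker-3 \
sh -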
Step 4: Verify the Worker Joined Successfully
Check that the worker is running:
# On the new worker node
sudo systemctl status k3s-agent
You should see Active: active (running). If you see any errors, check the logs with sudo journalctl -u k3s-agent -f.
Now, back on your master node (or any node where you have kubectl configured), check that the new worker appears:
# On master node (or wherever you run kubectl)
kubectl get nodes
After adding worker:
NAME           STATUS   ROLES                  AGE   VERSION
k3s-master     Ready    control-plane,master   30d   v1.28.2+k3s1
k3s-worker-1   Ready    <none>                 30d   v1.28.2+k3s1
k3s-worker-2   Ready    <none>                 30d   v1.28.2+k3s1
k3s-worker-3   Ready    <none>                 5m    v1.28.2+k3s1   ← New!
Perfect! Your new worker is now part of the cluster. But we’re not done yet. We still need to make sure pods actually use these workers instead of crowding on the master.
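One optional extra: the ROLES column for workers shows <none> by default. If you'd like it to say worker instead, you can add a role label yourself; this is purely cosmetic and nothing in the rest of this post depends on it:

# Optional: label the new node so "kubectl get nodes" shows a worker role
kubectl label node k3s-worker-3 node-role.kubernetes.io/worker=worker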
Part 2: Keeping Pods Off the Master Node
Now comes the important part: telling Kubernetes “don’t put any application pods on the master node, and spread them across all the workers.”
Step 1: Taint the Master Node
A “taint” is like putting up a “No Entry” sign on a node. It tells Kubernetes “don’t schedule regular pods here.” We want to taint the master so only system pods (Kubernetes itself) run there.
# On master node (or wherever you run kubectl)
kubectl taint nodes k3s-master node-role.kubernetes.io/control-plane:NoSchedule
This command says: “On the node named k3s-master, add a taint that prevents new pods from being scheduled here, unless they specifically tolerate this taint.”
After running this, Kubernetes won’t schedule new pods on the master. But what about the pods that are already running there? Don’t worry—we don’t need to manually delete them. When we update our deployment files with affinity rules in the next step and apply them, Kubernetes will automatically perform a rolling update. This means it will terminate the old pods (including those on the master) and create new ones that respect the affinity rules, placing them on worker nodes instead.
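Before moving on, you can confirm the taint is actually in place:

# On master node (or wherever you run kubectl)
kubectl describe node k3s-master | grep -i taints

You should see node-role.kubernetes.io/control-plane:NoSchedule in the output.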
Part 3: Adding Affinity Rules to Your Deployments
Tainting the master prevents new pods from going there, but we can do better. We can add “affinity rules” to our deployments that explicitly say:
- “Don’t schedule on the master node” (node affinity)
- “Spread pods across different worker nodes” (pod anti-affinity)
This ensures pods are distributed evenly and never accidentally end up on the master, even if someone removes the taint later.
Understanding Affinity Rules
Think of affinity rules like preferences for where pods should live:
- Node Affinity: “I want to run on nodes that have certain labels” or “I don’t want to run on nodes with certain labels”
- Pod Anti-Affinity: “I don’t want to run on the same node as other pods like me” (this spreads pods across nodes)
Let’s add these rules to your deployments.
Since I have a Rails app and SolidQueue workers running in my cluster, these are the changes I needed to make to ensure even distribution across all worker nodes. I’ll show you the before and after for both deployments.
Example: Updating Your Rails Deployment
Here’s what my Rails deployment looked like before:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rails-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rails # This label must match the label in the pod template below (in this same file)
  template:
    metadata:
      labels:
        app: rails # This label must match the selector above (in this same file)
    spec:
      containers:
      - name: rails
        image: my-rails-app:latest
        ports:
        - containerPort: 3000
After (with affinity rules added):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rails-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rails # This label must match the label in the pod template below (in this same file)
  template:
    metadata:
      labels:
        app: rails # This label must match the selector above (in this same file) and is used by podAntiAffinity below
    spec:
      affinity:
        # Prevent scheduling on master node - only schedule on nodes that DON'T have the control-plane label
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution: # Hard requirement - if no suitable node exists, don't schedule the pod
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: DoesNotExist # Master nodes have this label, so pods won't go there
        # Spread pods across different worker nodes - try to avoid scheduling on same node as other rails pods
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution: # Preference, not requirement - Kubernetes will try its best
          - weight: 100 # Higher weight = stronger preference
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - rails # This must match the app label above - avoids scheduling with other rails pods
              topologyKey: kubernetes.io/hostname # Different nodes (each node has unique hostname)
      containers:
      - name: rails
        image: my-rails-app:latest
        ports:
        - containerPort: 3000
Let me break down what these affinity rules do and why they matter:
Node Affinity: The nodeAffinity section tells Kubernetes: “Only schedule this pod on nodes that DON’T have the node-role.kubernetes.io/control-plane label.” Master nodes have this label, so pods won’t go there. The requiredDuringSchedulingIgnoredDuringExecution part means “this is a hard requirement—if no suitable node exists, don’t schedule the pod at all.” I use required here because we absolutely don’t want pods on the master, even if it means a pod can’t be scheduled.
Pod Anti-Affinity: The podAntiAffinity section says: “Try to avoid scheduling this pod on the same node as other pods with the label app=rails.” The topologyKey: kubernetes.io/hostname means “different nodes” (each node has a unique hostname). The preferredDuringSchedulingIgnoredDuringExecution means “this is a preference, not a requirement—if you can’t spread them, that’s okay, but try your best.” I use preferred here because spreading is nice to have, but if we only have one worker node, we still want the pods to run. The weight: 100 gives this preference a high priority, so Kubernetes will really try to spread them out.
Together, these rules ensure:
- Rails pods never go on the master
- Rails pods try to spread across different worker nodes
- If you have 3 replicas and 3 workers, ideally one pod per worker
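As an aside to the required-vs-preferred discussion above: if you ever want spreading to be a hard rule, podAntiAffinity also accepts requiredDuringSchedulingIgnoredDuringExecution. Here's a minimal sketch of what that would look like for the rails label; I'm not using it in this series because it can leave replicas unschedulable when there are more pods than workers:

# Hard version (not used here): never put two rails pods on the same node
podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: app
        operator: In
        values:
        - rails
    topologyKey: kubernetes.io/hostname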
Example: Updating Your SolidQueue Worker Deployment
For my SolidQueue worker deployment, I added the same affinity section but changed the app value to match the worker’s label:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: solid-queue-worker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: solid-queue # This label must match the label in the pod template below (in this same file)
  template:
    metadata:
      labels:
        app: solid-queue # This label must match the selector above (in this same file) and is used by podAntiAffinity below
    spec:
      affinity:
        # Prevent scheduling on master node - same as Rails deployment above
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution: # Hard requirement - if no suitable node exists, don't schedule the pod
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: DoesNotExist # Master nodes have this label, so pods won't go there
        # Spread pods across different worker nodes - try to avoid scheduling on same node as other solid-queue pods
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution: # Preference, not requirement - Kubernetes will try its best
          - weight: 100 # Higher weight = stronger preference
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - solid-queue # This must match the app label above - avoids scheduling with other solid-queue pods
              topologyKey: kubernetes.io/hostname # Different nodes (each node has unique hostname)
      containers:
      - name: worker
        image: my-rails-app:latest
        # ... rest of container config
Step 2: Apply the Updated Deployments
After updating your deployment files, apply them:
kubectl apply -f rails-deployment.yaml
kubectl apply -f solid-queue-deployment.yaml
Kubernetes will automatically perform a rolling update. This means:
- Old pods (including any on the master node) will be terminated
- New pods will be created with the updated configuration
- The new pods will respect the affinity rules and be scheduled on worker nodes
You can watch the process:
kubectl get pods -o wide -w
Press Ctrl+C to stop watching. You should see the pods being recreated on worker nodes as the rolling update progresses. The old pods on the master will be terminated, and new ones will appear on your worker nodes.
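If you'd rather wait for the rollouts to finish and then check the result in one go, these commands work too (using the deployment names and labels from the files above):

# Block until each rolling update completes
kubectl rollout status deployment/rails-app
kubectl rollout status deployment/solid-queue-worker

# List the new pods by their app label, with the node they landed on
kubectl get pods -l app=rails -o wide
kubectl get pods -l app=solid-queue -o wide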
Verifying Everything Works
Let’s verify that everything is set up correctly:
Check Node Status
kubectl get nodes
All nodes should show Ready status.
Check Pod Distribution
kubectl get pods -o wide
After (the solution):
NAME                         READY   STATUS    RESTARTS   AGE   NODE
rails-app-7d4f8c9b5-xyz12    1/1     Running   0          5m    k3s-worker-1
rails-app-7d4f8c9b5-abc34    1/1     Running   0          5m    k3s-worker-2
rails-app-7d4f8c9b5-def56    1/1     Running   0          5m    k3s-worker-3
solid-queue-5f8a3b2c-ghi78   1/1     Running   0          5m    k3s-worker-1
solid-queue-5f8a3b2c-jkl90   1/1     Running   0          5m    k3s-worker-2
postgres-0                   1/1     Running   0          10m   k3s-worker-1
Perfect! Notice:
- No pods on k3s-master
- Pods spread across all three workers
- Each worker has some pods
Count Pods Per Node
You can also get a quick count of how many pods are on each node:
kubectl get pods -o wide --no-headers | awk '{print $7}' | sort | uniq -c
Output:
      3 k3s-worker-1
      2 k3s-worker-2
      1 k3s-worker-3
This shows the pods spread across the three workers, and k3s-master doesn't appear at all because it has no application pods. The distribution might not be perfectly even (3-2-1 in this example), but that's okay; the important thing is that pods are spread across multiple workers and none are on the master.
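If you want to drill into a single node, a field selector on the pod's node name does the trick (using the node names from this cluster):

# Show only the pods scheduled on k3s-worker-3
kubectl get pods -o wide --field-selector spec.nodeName=k3s-worker-3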
What Changed
So what did we actually accomplish? Before I made these changes, my cluster was a mess. Some pods were running on the master (which they shouldn’t), worker 2 was barely being used, and my background workers were split weirdly between the master and worker 2. The master was doing double duty—running Kubernetes itself and also handling application workloads.
Now, after adding the taint and affinity rules, things are much better. The master node only runs the control plane, which is what it should do. All my application pods are on worker nodes, and they’re spread across all three workers. This means better resource utilization—no more idle workers while others are overloaded. Plus, if one worker goes down, the others can keep running, which gives me better reliability.
The key difference is that Kubernetes now has clear rules about where pods should go, and it follows those rules automatically. No more manual intervention needed.
Common Issues and Solutions
Issue: Pods still on master after adding affinity
If you added affinity rules but pods are still on master:
- Make sure you applied the updated deployment: kubectl apply -f <deployment-file>
- Wait for the rolling update to complete; it should automatically move pods off the master (if apply didn't change anything and no rollout starts, see the note after this list)
- Verify the master is tainted: kubectl describe node k3s-master (look for the Taints section)
- Check whether the rolling update is in progress: kubectl get pods -o wide -w
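One note on the first two points: kubectl apply only triggers a rolling update if the deployment spec actually changed. If the affinity rules were already in the file and you only added the taint, you can force a fresh rollout so the existing pods get rescheduled:

# Force a new rolling update without changing the spec
kubectl rollout restart deployment/rails-app
kubectl rollout restart deployment/solid-queue-worker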
Issue: New worker not showing up
If the new worker doesn’t appear in kubectl get nodes:
- Check that the k3s agent is running on the worker: sudo systemctl status k3s-agent
- Check network connectivity between master and worker (see the example after this list)
- Verify the token and master IP are correct
- Check the logs on the worker: sudo journalctl -u k3s-agent -f
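For the connectivity check, a simple TCP test from the worker against the master's API port is usually enough (assuming nc is installed; any TCP check against port 6443 works):

# On the new worker node: confirm the master's k3s API port is reachable
nc -zv <MASTER_IP> 6443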
Conclusion
Adding worker nodes and properly distributing pods might seem like extra work, but it’s essential for a production-ready cluster. Your master node should focus on running Kubernetes itself, while your worker nodes handle all your applications. Spreading pods across multiple workers gives you better reliability, performance, and resource utilization.
The key takeaways:
- Taint the master to prevent new pods from scheduling there
- Add node affinity to deployments to explicitly exclude the master
- Add pod anti-affinity to spread pods across workers
- Update and apply your deployments - Kubernetes will automatically move existing pods via rolling updates
Once set up, Kubernetes will automatically follow these rules for all new pods, keeping your cluster organized and resilient.
Stay tuned for more posts in this series!
Series Navigation
Part 1: Deploy Your First App
Part 2: ConfigMaps and Secrets
Part 3: Understanding Namespaces
Part 4: Understanding Port Mapping in k3d
Part 5: Setting Up k3s on Raspberry Pi
Part 6: Deploying Rails 8 with SolidQueue on k3s
Part 7: Setting Up Ingress Controller
Part 8: Understanding Persistent Storage
Part 9: Adding Worker Nodes and Distributing Pods ← You just finished this!