Scale Up a Cluster
Follow these steps:
- Identify the existing PreprovisionedInventory that you wish to scale:
  kubectl get preprovisionedinventory -A
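  Before editing, you can list the addresses already present in the inventory with a jsonpath query. This is an optional convenience, not part of the documented steps; the field path follows the spec.hosts layout shown in the optional step below:
  # Print the host addresses currently in the inventory
  kubectl get preprovisionedinventory <name> -n <namespace> \
    -o jsonpath='{.spec.hosts[*].address}'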
- Edit the PreprovisionedInventory to add the additional IP addresses needed for the new worker nodes in the spec.hosts section:
  kubectl edit preprovisionedinventory <name> -n <namespace>
- (Optional) Add any additional host IPs that you require for your cluster:
  spec:
    hosts:
      - address: <worker.ip.add.1>
      - address: <worker.ip.add.2>
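  If you prefer a non-interactive edit, the same change can be made with kubectl patch. This is a sketch using the standard JSON-patch syntax for appending to a list, not a DKP-specific command:
  # Append one additional host address to spec.hosts (repeat for each new IP)
  kubectl patch preprovisionedinventory <name> -n <namespace> --type json \
    -p '[{"op": "add", "path": "/spec/hosts/-", "value": {"address": "<worker.ip.add.1>"}}]'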
After you edit the PreprovisionedInventory, fetch the MachineDeployment. The md in the name indicates that it is for worker machines backed by a MachineDeployment. For example:
  kubectl get machinedeployment -A
  NAME                     CLUSTER        AGE     PHASE     REPLICAS   READY   UPDATED   UNAVAILABLE
  machinedeployment-md-0   cluster-name   9m10s   Running   4          4       4
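If you only need the current replica count, a jsonpath query keeps the output minimal. This is an optional convenience; the field name follows the Cluster API MachineDeployment spec:
  # Print only the desired replica count for the worker MachineDeployment
  kubectl get machinedeployment machinedeployment-md-0 -n default \
    -o jsonpath='{.spec.replicas}{"\n"}'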
- Scale the worker nodes to the required number. In this example, we scale from 4 to 6 worker nodes:
  kubectl scale --replicas=6 machinedeployment machinedeployment-md-0 -n default
  machinedeployment.cluster.x-k8s.io/machinedeployment-md-0 scaled
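  The same change can be made declaratively with a merge patch, which is convenient in scripts. This is a sketch using standard kubectl syntax, not a DKP-specific step:
  # Set the desired worker replica count directly on the spec
  kubectl patch machinedeployment machinedeployment-md-0 -n default \
    --type merge -p '{"spec":{"replicas":6}}'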
- Monitor the scaling with this command, adding the -w option to watch:
  kubectl get machinedeployment -n default -w
  NAME                     CLUSTER        AGE   PHASE       REPLICAS   READY   UPDATED   UNAVAILABLE
  machinedeployment-md-0   cluster-name   20m   ScalingUp   6          4       6         2
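  Instead of watching interactively, you can block until the scaled state is reached. kubectl wait with a jsonpath condition requires kubectl v1.23 or later, so treat this as an optional convenience:
  # Block until the MachineDeployment reports the Running phase (or time out)
  kubectl wait machinedeployment/machinedeployment-md-0 -n default \
    --for=jsonpath='{.status.phase}'=Running --timeout=10m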
- Check the machine deployment to ensure it has scaled successfully. The output should resemble this example:
  kubectl get machinedeployment -n default
  NAME                     CLUSTER             AGE     PHASE     REPLICAS   READY   UPDATED   UNAVAILABLE
  machinedeployment-md-0   machinedeployment   3h33m   Running   6          6       6
- Alternatively, you can run this command and verify the NODENAME column; the additional worker nodes should appear there in the Running state:
  kubectl get machines -A -o wide
  NAMESPACE   NAME                                     CLUSTER             AGE     PROVIDERID                          PHASE     VERSION   NODENAME
  default     machinedeployment-control-plane-sljgr   machinedeployment   113m    preprovisioned:////34.123.456.162   Running   v1.22.8   ip-10-0-245-186.us-west-2.compute.internal
  default     machinedeployment-control-plane-wn6pp   machinedeployment   108m    preprovisioned:////54.123.456.63    Running   v1.22.8   ip-10-0-21-113.us-west-2.compute.internal
  default     machinedeployment-control-plane-zpsh6   machinedeployment   119m    preprovisioned:////35.12.345.183    Running   v1.22.8   ip-10-0-43-72.us-west-2.compute.internal
  default     machinedeployment-md-0-d9b7658b-59ndc   machinedeployment   119m    preprovisioned:////18.123.456.224   Running   v1.22.8   ip-10-0-6-233.us-west-2.compute.internal
  default     machinedeployment-md-0-d9b7658b-5tbq9   machinedeployment   119m    preprovisioned:////35.12.345.237    Running   v1.22.8   ip-10-0-19-175.us-west-2.compute.internal
  default     machinedeployment-md-0-d9b7658b-9cgc8   machinedeployment   119m    preprovisioned:////54.123.45.76     Running   v1.22.8   ip-10-0-2-119.us-west-2.compute.internal
  default     machinedeployment-md-0-d9b7658b-9cgc7   machinedeployment   119m    preprovisioned:////55.123.45.76     Running   v1.22.8   ip-10-0-2-118.us-west-2.compute.internal
  default     machinedeployment-md-0-d9b7658b-9cgc6   machinedeployment   5m23s   preprovisioned:////56.123.45.76     Running   v1.22.8   ip-10-0-2-117.us-west-2.compute.internal
  default     machinedeployment-md-0-d9b7658b-9cgc5   machinedeployment   5m23s   preprovisioned:////57.123.45.76     Running   v1.22.8   ip-10-0-2-116.us-west-2.compute.internal
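You can also confirm the new workers from the workload cluster itself. Cluster API conventionally stores the workload cluster's kubeconfig in a <cluster-name>-kubeconfig secret; the secret name and key shown here follow that convention and may vary in your environment:
  # Extract the workload cluster kubeconfig (CAPI convention: <cluster-name>-kubeconfig)
  kubectl get secret <cluster-name>-kubeconfig -n default \
    -o jsonpath='{.data.value}' | base64 -d > <cluster-name>.conf
  # The new worker nodes should appear and eventually report Ready
  kubectl --kubeconfig <cluster-name>.conf get nodes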
Scale Down a Cluster
Run the following command:
kubectl scale machinedeployment <name> -n <namespace> --replicas <new number>
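Continuing the example above, scaling the workers back from 6 to 4 would look like this:
  kubectl scale machinedeployment machinedeployment-md-0 -n default --replicas 4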
For control plane nodes, an upstream bug currently prevents scaling. Once the issue is resolved and the fixed CAPI version is used in DKP, you can run the following command:
kubectl scale kubeadmcontrolplane <name> -n <namespace> --replicas <new number>
While this item (for control planes) is still being worked on, a workaround is to run kubectl edit kubeadmcontrolplane <name> -n <namespace> and change the replicas value under the spec section.
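For reference, the field to change looks like this inside the KubeadmControlPlane manifest. This is a sketch; the apiVersion depends on the CAPI release in use:
  apiVersion: controlplane.cluster.x-k8s.io/v1beta1   # may differ by CAPI version
  kind: KubeadmControlPlane
  metadata:
    name: <name>
    namespace: <namespace>
  spec:
    replicas: 3   # edit this value to the desired control plane node count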
Additional Notes for Scaling Down
Machines can get stuck in the provisioning stage when scaling down. You can use a delete operation to clear the stale machine:
kubectl delete machine <name> -n <namespace>
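To identify which machines are stuck, filter the machine list on the PHASE column. This is a simple grep over the standard output; stuck machines typically show a phase such as Provisioning rather than Running:
  # List machines that are not in the Running phase
  kubectl get machines -n <namespace> | grep -v Running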