Prerequisites
- Create an on-demand backup of your current configuration with Velero (a sketch follows this list).
- Follow the steps listed in the DKP upgrade overview.
- Ensure that all platform applications in the management cluster have been upgraded to avoid compatibility issues with the Kubernetes version included in this release. This happens automatically when you upgrade Kommander, so upgrade Kommander before upgrading Konvoy.
- For air-gapped environments: Download the required bundles either from our support site or by using the CLI.
- For Azure, set the required environment variables (a sketch follows this list).
- For AWS, set the required environment variables (a sketch follows this list).
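The following is a minimal sketch of the backup and credential prerequisites. It assumes the standard Velero CLI and the environment variable names commonly used by the AWS and Azure Cluster API providers; the exact set required for your provider is listed in the DKP documentation, and every value shown is a placeholder.
# On-demand Velero backup of the current configuration (placeholder backup name).
velero backup create pre-dkp-upgrade
# AWS: standard credential environment variables.
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_REGION=<your-region>
# Azure: standard service principal environment variables.
export AZURE_SUBSCRIPTION_ID=<your-subscription-id>
export AZURE_TENANT_ID=<your-tenant-id>
export AZURE_CLIENT_ID=<your-client-id>
export AZURE_CLIENT_SECRET=<your-client-secret>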
The following infrastructure environments are supported:
- Amazon Web Services (AWS)
- Microsoft Azure
- Pre-provisioned environments
Overview
To upgrade Konvoy for DKP Enterprise:
- Upgrade the Cluster API (CAPI) components
- Upgrade the core addons
- Upgrade the Kubernetes version
Run all three steps on the management cluster (Kommander cluster) first. Then, run the second and third steps on additional managed clusters (Konvoy clusters), one cluster at a time using the KUBECONFIG configured for each cluster.
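The managed-cluster steps assume your kubectl context points at the cluster being upgraded. A minimal sketch, assuming the dkp get kubeconfig subcommand available in recent DKP CLI versions (the exact flag spelling may differ in your version), and using CLUSTER_NAME as a placeholder:
dkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
export KUBECONFIG=$(pwd)/${CLUSTER_NAME}.conf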
For a full list of DKP Enterprise features, see DKP Enterprise.
Upgrade the CAPI components
New versions of DKP come pre-bundled with newer versions of CAPI, newer versions of infrastructure providers, or new infrastructure providers. When using a new version of the DKP CLI, upgrade all of these components first.
If you are running on more than one management cluster (Kommander cluster), you must upgrade the CAPI components on each of these clusters.
- If your cluster was upgraded to 2.1 from 1.8, prepare the old cert-manager installation for upgrade:
helm -n cert-manager get manifest cert-manager-kubeaddons | kubectl label -f - clusterctl.cluster.x-k8s.io/core=cert-manager
kubectl delete validatingwebhookconfigurations/cert-manager-kubeaddons-webhook mutatingwebhookconfigurations/cert-manager-kubeaddons-webhook
- For all clusters, upgrade the capi-components:
dkp upgrade capi-components
- If your cluster was upgraded to 2.1 from 1.8, remove the remaining old cert-manager resources from 1.8:
helm -n cert-manager delete cert-manager-kubeaddons
The dkp upgrade capi-components command should output something similar to the following:
✓ Upgrading CAPI components
✓ Waiting for CAPI components to be upgraded
✓ Initializing new CAPI components
✓ Deleting Outdated Global ClusterResourceSets
If the upgrade fails, review the prerequisites section and ensure that you’ve followed the steps in the DKP upgrade overview.
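As an optional sanity check (not part of the documented procedure), you can confirm that the upgraded CAPI controllers are running. The namespaces below are the Cluster API defaults and are an assumption; they may differ in your installation:
kubectl get deployments -n capi-system
kubectl get deployments -n capi-kubeadm-bootstrap-system
kubectl get deployments -n capi-kubeadm-control-plane-system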
Upgrade the core addons
To install the core addons, DKP relies on the ClusterResourceSet Cluster API feature. During the CAPI components upgrade, the previous set of outdated global ClusterResourceSets was deleted, because prior to DKP 2.2 some addons were installed using a global configuration. To support individual cluster upgrades, DKP 2.2 now installs all addons with a unique set of ClusterResourceSets and corresponding referenced resources, all named using the cluster's name as a suffix, for example calico-cni-installation-my-aws-cluster.
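To see this per-cluster naming in practice, you can list the ClusterResourceSets on the management cluster with a standard kubectl query; my-aws-cluster below is a placeholder cluster name:
kubectl get clusterresourcesets | grep my-aws-cluster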
Your cluster comes preconfigured with a few different core addons that provide functionality to your cluster upon creation. These include: CSI, CNI, Cluster Autoscaler, and Node Feature Discovery. New versions of DKP may come pre-bundled with newer versions of these addons. Perform the following steps to update these addons. If you have any additional managed clusters, you will need to upgrade the core addons and Kubernetes version for each one.
Upgrade the core addons in a cluster using the dkp upgrade addons command, specifying the cluster infrastructure (choose one of aws, azure, or preprovisioned) and the name of the cluster.
Examples:
export CLUSTER_NAME=my-azure-cluster
dkp upgrade addons azure --cluster-name=${CLUSTER_NAME}
OR
export CLUSTER_NAME=my-aws-cluster
dkp upgrade addons aws --cluster-name=${CLUSTER_NAME}
The output for the AWS example should be similar to:
Generating addon resources
clusterresourceset.addons.cluster.x-k8s.io/calico-cni-installation-my-aws-cluster upgraded
configmap/calico-cni-installation-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/tigera-operator-my-aws-cluster upgraded
configmap/tigera-operator-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/aws-ebs-csi-my-aws-cluster upgraded
configmap/aws-ebs-csi-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/cluster-autoscaler-my-aws-cluster upgraded
configmap/cluster-autoscaler-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/node-feature-discovery-my-aws-cluster upgraded
configmap/node-feature-discovery-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/nvidia-feature-discovery-my-aws-cluster upgraded
configmap/nvidia-feature-discovery-my-aws-cluster upgraded
If your AWS cluster was upgraded from 1.8 and still runs the kubeaddons-managed AWS EBS CSI provisioner, remove it and label the cluster so that the new CSI addon is applied:
export CLUSTER_NAME=my-aws-cluster
helm uninstall -n kube-system awsebscsiprovisioner-kubeaddons
kubectl label cluster $CLUSTER_NAME konvoy.d2iq.io/csi=aws-ebs
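To confirm that the label applied above is now present on the Cluster object (the assumption being that the new AWS EBS CSI ClusterResourceSet selects clusters by this label), a quick check is:
kubectl get cluster ${CLUSTER_NAME} --show-labels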
Once complete, begin upgrading the Kubernetes version.
Upgrade the Kubernetes version
When upgrading the Kubernetes version of a cluster, first upgrade the control plane and then the node pools. If you have any additional managed clusters, you will need to upgrade the core addons and Kubernetes version for each one.
- Upgrade the Kubernetes version of the control plane:
dkp update controlplane aws --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.22.8
The output should be similar to:
Updating control plane resource controlplane.cluster.x-k8s.io/v1beta1, Kind=KubeadmControlPlane default/my-aws-cluster-control-plane
Waiting for control plane update to finish.
 ✓ Updating the control plane
- Upgrade the Kubernetes version of each of your node pools. Get a list of all node pools available in your cluster by running the following command:
dkp get nodepool --cluster-name ${CLUSTER_NAME}
- Replace my-nodepool with the name of the node pool:
export NODEPOOL_NAME=<my-nodepool>
dkp update nodepool aws ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.22.8
The output should be similar to:
Updating node pool resource cluster.x-k8s.io/v1beta1, Kind=MachineDeployment default/my-aws-cluster-my-nodepool
Waiting for node pool update to finish.
✓ Updating the my-aws-cluster-my-nodepool node pool
Repeat this step for each additional node pool.
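If you have several node pools, a minimal shell sketch for upgrading them in sequence looks like the following; the pool names are placeholders for the names returned by dkp get nodepool:
# Placeholder node pool names; substitute the names reported by dkp get nodepool.
for NODEPOOL_NAME in my-nodepool-1 my-nodepool-2; do
  dkp update nodepool aws ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.22.8
done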
For the overall process of upgrading to the latest version of DKP, refer back to the DKP upgrade overview.