In previous versions of Konvoy, you used Kubernetes Base Addons (KBAs), which are now managed through Kommander and are known as platform applications.
This section describes how to adapt your Konvoy addons to Kommander platform applications. Certain applications may need manual configuration changes before they can be adapted.
Prerequisites
To successfully adapt your applications, you must have:
- A Konvoy 1.8.3 or 1.8.4 cluster that you have already upgraded to DKP 2.1, with the kommander addon disabled in your cluster.yaml.
- The Kommander CLI binary downloaded and installed on your computer.
- Sufficient disk and resource capacity to support the following applications, which come installed by default with Kommander:

  Name                        Minimum Resources Suggested   Minimum Persistent Storage Required
  centralized-grafana         cpu: 200m, memory: 100Mi      -
  centralized-kubecost        cpu: 1200m, memory: 4151Mi    # of PVs: 1, PV sizes: 32Gi
  dex                         cpu: 100m, memory: 50Mi       -
  dex-k8s-authenticator       cpu: 100m, memory: 128Mi      -
  gitea                       cpu: 500m, memory: 512Mi      # of PVs: 2, PV sizes: 10Gi
  karma                       -                             -
  kommander-flux              cpu: 4000m, memory: 4Gi       -
  kubefed                     cpu: 300m, memory: 192Mi      -
  thanos                      -                             -
  traefik-forward-auth-mgmt   cpu: 100m, memory: 128Mi      -

  See workspace platform application requirements to plan for additional requirements your custom workloads may demand.
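As a quick way to gauge available capacity against these requirements, you can list each node's allocatable CPU and memory. This kubectl query is illustrative only and is not a substitute for proper capacity planning:
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory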
Prepare your cluster
Check for the following in your existing cluster.yaml:
- One or more of spec.kubernetes.networking.noProxy, spec.kubernetes.networking.httpProxy, or spec.kubernetes.networking.httpsProxy is set in the ClusterConfiguration.
- Gatekeeper is enabled. Gatekeeper is enabled by default, but if you have manually disabled it, re-enable the application by changing the enabled field from false to true in the addons object of the cluster's ClusterConfiguration (see the illustrative excerpt after this list) and run konvoy up to update the addons.
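For illustration only, a re-enabled Gatekeeper entry in cluster.yaml might look like the following sketch; the repository and version shown are placeholders, and your addons configuration will differ:

# Illustrative excerpt of a ClusterConfiguration addons section; values are placeholders.
spec:
  addons:
    - configRepository: https://github.com/mesosphere/kubernetes-base-addons
      configVersion: <your-addons-version>
      addonsList:
        - name: gatekeeper
          enabled: true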
If none of these conditions apply to your cluster, you can skip to the next section. If they do, follow the steps below.
- Because Kommander 2.0+ uses Flux to manage applications, you must configure the Gatekeeper mutatingwebhookconfigurations (a cluster-scoped resource) to allow dry-run calls, which the Flux kustomize controller requires to calculate the difference of a resource. To do this, check whether the webhook configuration exists:
  kubectl get mutatingwebhookconfigurations gatekeeper-mutating-webhook-configuration
  If there are no mutatingwebhookconfigurations, skip to the next step. This is expected if you set mutations.enable to false in the Gatekeeper addon values. If you see gatekeeper-mutating-webhook-configuration, execute the following:
  kubectl patch mutatingwebhookconfigurations gatekeeper-mutating-webhook-configuration --type "json" -p '[{"op": "add", "path": "/webhooks/0/sideEffects", "value": "None"}]'
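  To confirm the patch took effect, you can print the sideEffects field of the first webhook. This jsonpath query is a convenience check and not part of the original procedure:
  kubectl get mutatingwebhookconfigurations gatekeeper-mutating-webhook-configuration -o=jsonpath='{.webhooks[0].sideEffects}'
  This should print None if the patch succeeded.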
- Update the metadata.annotations of these Gatekeeper resources:
  kubectl annotate mutatingwebhookconfigurations gatekeeper-mutating-webhook-configuration --overwrite "meta.helm.sh/release-name"="kommander-gatekeeper" "meta.helm.sh/release-namespace"="kommander"
  kubectl annotate assign pod-mutation-no-proxy --overwrite "meta.helm.sh/release-name"="kommander-gatekeeper" "meta.helm.sh/release-namespace"="kommander"
  If the commands fail because the resources above do not exist, you can ignore those errors.
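  As an optional sanity check (again, illustrative and not part of the original procedure), you can confirm that the annotations were applied:
  kubectl get mutatingwebhookconfigurations gatekeeper-mutating-webhook-configuration -o=jsonpath='{.metadata.annotations}'
  The output should include meta.helm.sh/release-name:kommander-gatekeeper and meta.helm.sh/release-namespace:kommander.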
- In the ClusterConfiguration, if you have set one or more of noProxy, httpProxy, or httpsProxy in spec.kubernetes.networking, and these values differ from the values section of the gatekeeper addon, you need to update the Gatekeeper addon configuration to match. Look up the ConfigMap rendered from spec.kubernetes.networking:
  kubectl get cm kubeaddons-remap-values -nkubeaddons -o=jsonpath={.data.values}
  This should print output similar to the following:
  gatekeeper:
    mutation:
      enable: true
      enablePodProxy: true
      namespaceSelectorForProxy:
        "gatekeeper.d2iq.com/mutate": "pod-proxy"
      no-proxy: "<YOUR noProxy settings>"
      http-proxy: "<YOUR httpProxy settings>"
      https-proxy: "<YOUR httpsProxy settings>"
  You need to copy this configuration into the Addon resource of gatekeeper. Start by printing the current values section:
  kubectl get addon gatekeeper -nkubeaddons -o=jsonpath={.spec.chartReference.values}
  This prints the following output:
  ---
  replicas: 2
  webhook:
    certManager:
      enabled: true
  # enable mutations
  mutations:
    enable: false
    enablePodProxy: false
    podProxySettings:
      noProxy:
      httpProxy:
      httpsProxy:
    excludeNamespacesFromProxy: []
    namespaceSelectorForProxy: {}
  Copy the values from the ConfigMap into the Gatekeeper Addon resource according to this mapping:

  ConfigMap kubeaddons-remap-values .data.values    Addon gatekeeper .spec.chartReference.values
  gatekeeper.mutation.enable                        mutations.enable
  gatekeeper.mutation.enablePodProxy                mutations.enablePodProxy
  gatekeeper.mutation.namespaceSelectorForProxy     mutations.namespaceSelectorForProxy
  gatekeeper.mutation.no-proxy                      mutations.podProxySettings.noProxy
  gatekeeper.mutation.http-proxy                    mutations.podProxySettings.httpProxy
  gatekeeper.mutation.https-proxy                   mutations.podProxySettings.httpsProxy

  If the values in the Gatekeeper Addon resource already match the values from the kubeaddons-remap-values ConfigMap in the kubeaddons namespace, there is no need to update anything. If not, edit the Gatekeeper Addon to reflect the value remapping above:
  kubectl edit addon -nkubeaddons gatekeeper
  Then, save the changes before continuing with the migration procedure.
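For illustration, after applying the remapping the values section of the Gatekeeper Addon would look similar to the following sketch, with the placeholders replaced by the proxy settings from your ConfigMap:

  ---
  replicas: 2
  webhook:
    certManager:
      enabled: true
  # enable mutations
  mutations:
    enable: true
    enablePodProxy: true
    podProxySettings:
      noProxy: "<YOUR noProxy settings>"
      httpProxy: "<YOUR httpProxy settings>"
      httpsProxy: "<YOUR httpsProxy settings>"
    excludeNamespacesFromProxy: []
    namespaceSelectorForProxy:
      "gatekeeper.d2iq.com/mutate": "pod-proxy"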
Move your applications
If your environment has an HTTP proxy configured, create a TCP connection to the Traefik ingress controller before running the following command; see Environments with an HTTP proxy server below.
To adapt your existing Konvoy addons to Kommander platform applications, enter the following command:
kommander migrate -y
As the command progresses, your output will look like the following:
✓ Checking if migration from DKP 1.x is necessary
Found the following Konvoy 1.x addons:
cert-manager
dashboard
dex
dex-k8s-authenticator
konvoyconfig
kube-oidc-proxy
metallb
nvidia
opsportal
reloader
traefik
traefik-forward-auth
velero
...
✓ Checking if migration from DKP 1.x is necessary
✓ Ensuring applications repository fetcher is deployed
✓ Ensuring base resources are deployed
✓ Ensuring Flux is deployed
✓ Ensuring helm repository configuration is deployed
✓ Ensuring Kommander Root CA is deployed
✓ Ensuring Gitea is deployed
✓ Ensuring Application definitions are deployed
✓ Ensuring Bootstrap repository is deployed
✓ Ensuring Age encryption is deployed
✓ Ensuring Flux configuration is deployed
✓ Ensuring Kommander App Management is deployed
✓ Ensuring Konvoy Config is migrated
✓ Ensuring Traefik ingress controller is migrated
✓ Ensuring Gatekeeper is migrated
✓ Ensuring Reloader is migrated
✓ Ensuring External DNS is migrated
✓ Ensuring MetalLB is migrated
✓ Ensuring Dex is migrated
✓ Ensuring Traefik Forward Auth is migrated
✓ Ensuring Kubernetes OIDC proxy is migrated
✓ Ensuring Dex authenticator is migrated
✓ Ensuring Kubernetes Dashboard is migrated
✓ Ensuring Nvidia is migrated
✓ Ensuring Velero is migrated
✓ Ensuring Fluent-Bit is migrated and the DKP 2.x Logging Stack is installed
✓ Ensuring deletion of Addon elasticsearch orphaning its Helm release
✓ Ensuring deletion of Addon elasticsearch-curator orphaning its Helm release
✓ Ensuring deletion of Addon kibana orphaning its Helm release
✓ Ensuring deletion of Addon prometheus-elasticsearch-exporter orphaning its Helm release
✓ Ensuring KubePrometheusStack (Prometheus and Grafana) is migrated
✓ Ensuring Prometheus Adapter is migrated
✓ Ensuring Istio is migrated
✓ Ensuring Jaeger Operator is migrated
✓ Ensuring Kiali is migrated
✓ Ensuring deletion of ClusterAddon kommander orphaning its Helm release
✓ Ensuring deletion of ClusterAddon awsebscsiprovisioner orphaning its Helm release
✓ Ensuring deletion of ClusterAddon cert-manager orphaning its Helm release
✓ Ensuring deletion of ClusterAddon defaultstorageclass-protection orphaning its Helm release
✓ Ensuring deletion of Addon konvoyconfig orphaning its Helm release
✓ Ensuring deletion of Addon opsportal orphaning its Helm release
✓ Ensuring deletion of Addon gatekeeper orphaning its Helm release
✓ Ensuring check that there remain no addons and deletion of the Kubeaddons controller
If there is a timeout error at this step, run kommander migrate -y
again; it will eventually continue from where it timed out.
If you are upgrading to DKP v2.1.4 or below, check the release notes: you need to confirm that the Traefik Middleware ConfigMap was updated correctly. If you are on at least DKP v2.1.5, continue below.
Refer to the Verify installation topic to ensure successful completion.
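As a quick, informal check that the migrated applications are reconciling (assuming the migration installed Flux and its HelmRelease custom resource, as the output above indicates), you can list the HelmRelease objects across all namespaces:
kubectl get helmreleases --all-namespaces
Each release should eventually report as Ready.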
Environments with an HTTP proxy server
The kommander migrate
command requires a connection from your environment to the Traefik ingress controller in your cluster. If your environment requires the use of an HTTP proxy server, establish an alternative connection to the Traefik ingress controller by running a port-forward to Traefik on localhost.
- Ensure you create or modify the 127.0.0.1 record in your hosts file to include the ingress domain name:
  [root@workstation ~] cat /etc/hosts
  127.0.0.1 localhost localhost.localdomain <ingress_domain_name>
- Open a separate terminal window and create a TCP connection to Traefik on your cluster:
  kubectl --kubeconfig admin.conf -n kommander port-forward svc/kommander-traefik 443:443
  Note that binding to local port 443 may require elevated privileges.
- From your standard terminal window, continue with the Move your applications section.
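Optionally, before you run the migration, you can confirm that the port-forward is working by issuing a request to the ingress domain name. This curl invocation is illustrative only:
curl -ks -o /dev/null -w '%{http_code}\n' https://<ingress_domain_name>/
Any HTTP status code (rather than a connection error) indicates that traffic is reaching Traefik through the forwarded port.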
Post-upgrade cleanup
Depending on which Konvoy addons you had configured, the upgrade may leave behind Kubernetes objects that belonged to Konvoy but are not used by Kommander. While you can safely disregard these objects, do not arbitrarily remove or modify them, or run third-party tools (such as Helm) against them that expect these objects to be in a consistent state.
If you want to clean up these objects, you must perform specific steps after a successful upgrade; see Remove unneeded Kubernetes resources after the upgrade under Related Information.
Refer to the Verify installation topic to ensure successful completion.
Related Information
Remove unneeded Kubernetes resources after the upgrade
Steps to manually remove Konvoy 1.8 resources not used by Kommander 2.1.
Prepare applications for moving
Certain applications may need manual intervention prior to moving.
Supported addons
Adapt Konvoy addons to Kommander platform applications.