You can attach existing Kubernetes clusters to Kommander. After attaching the cluster, you can use Kommander to examine and manage this cluster. The following procedure shows how to attach an existing Amazon Elastic Kubernetes Service (EKS) cluster to Kommander.
Before you begin
This procedure requires the following items and configurations:
- A fully configured and running Amazon EKS cluster with administrative privileges.
- Konvoy v2.0.0 or later, installed and configured for your Amazon EKS cluster on your machine.
- Kommander v2.0.0 or later, installed and configured on your machine.
Attach Amazon EKS Clusters to Kommander
- Ensure you are connected to your EKS clusters. Enter the following commands for each of your clusters:

```bash
kubectl config get-contexts
kubectl config use-context <context for first eks cluster>
```
- Confirm `kubectl` can access the EKS cluster:

```bash
kubectl get nodes
```
- Create a service account for Kommander on your EKS cluster:

```bash
kubectl -n kube-system create serviceaccount kommander-cluster-admin
```
- Configure your `kommander-cluster-admin` service account to have `cluster-admin` permissions. Enter the following command:

```bash
cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kommander-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kommander-cluster-admin
  namespace: kube-system
EOF
```
- Create a kubeconfig file that is compatible with the Kommander UI. Enter these commands to set the required environment variables:

```bash
export USER_TOKEN_NAME=$(kubectl -n kube-system get serviceaccount kommander-cluster-admin -o=jsonpath='{.secrets[0].name}')
export USER_TOKEN_VALUE=$(kubectl -n kube-system get secret/${USER_TOKEN_NAME} -o=go-template='{{.data.token}}' | base64 --decode)
export CURRENT_CONTEXT=$(kubectl config current-context)
export CURRENT_CLUSTER=$(kubectl config view --raw -o=go-template='{{range .contexts}}{{if eq .name "'''${CURRENT_CONTEXT}'''"}}{{ index .context "cluster" }}{{end}}{{end}}')
export CLUSTER_CA=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}{{ index .cluster "certificate-authority-data" }}{{end}}{{ end }}')
export CLUSTER_SERVER=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}{{ .cluster.server }}{{end}}{{ end }}')
```
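On clusters running Kubernetes v1.24 or later, token Secrets are no longer created automatically for new service accounts, so the `USER_TOKEN_NAME` lookup above can come back empty. One workaround (a sketch; the Secret name `kommander-cluster-admin-token` is illustrative, not part of the official procedure) is to create a long-lived token Secret bound to the service account before setting the variables:

```yaml
# Hypothetical long-lived token Secret for Kubernetes v1.24+ clusters.
# The annotation binds the generated token to the kommander-cluster-admin
# service account created in the earlier step.
apiVersion: v1
kind: Secret
metadata:
  name: kommander-cluster-admin-token
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: kommander-cluster-admin
type: kubernetes.io/service-account-token
```

Apply the manifest with `kubectl apply -f`, then set `USER_TOKEN_NAME` to the Secret's name directly instead of reading it from the service account.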
- Confirm these variables have been set correctly:

```bash
env | grep CLUSTER
```
- Create your kubeconfig file to use in the Kommander UI. Enter the following commands:

```bash
cat << EOF > kommander-cluster-admin-config
apiVersion: v1
kind: Config
current-context: ${CURRENT_CONTEXT}
contexts:
- name: ${CURRENT_CONTEXT}
  context:
    cluster: ${CURRENT_CONTEXT}
    user: kommander-cluster-admin
    namespace: kube-system
clusters:
- name: ${CURRENT_CONTEXT}
  cluster:
    certificate-authority-data: ${CLUSTER_CA}
    server: ${CLUSTER_SERVER}
users:
- name: kommander-cluster-admin
  user:
    token: ${USER_TOKEN_VALUE}
EOF
```
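Before switching to the Kommander UI, it can help to sanity-check the file's structure. The following optional sketch is self-contained: it writes a stand-in config with placeholder values (your real values come from the exported variables above) and checks that every field the UI relies on is present and non-empty.

```shell
# Placeholder values so this snippet runs on its own; in the real flow these
# come from the exports in the earlier step.
CURRENT_CONTEXT=my-eks-context
CLUSTER_CA=LS0tLS1FWEFNUExFLS0tLS0=          # placeholder, not a real CA bundle
CLUSTER_SERVER=https://example.eks.amazonaws.com
USER_TOKEN_VALUE=example-token

cat << EOF > kommander-cluster-admin-config
apiVersion: v1
kind: Config
current-context: ${CURRENT_CONTEXT}
contexts:
- name: ${CURRENT_CONTEXT}
  context:
    cluster: ${CURRENT_CONTEXT}
    user: kommander-cluster-admin
    namespace: kube-system
clusters:
- name: ${CURRENT_CONTEXT}
  cluster:
    certificate-authority-data: ${CLUSTER_CA}
    server: ${CLUSTER_SERVER}
users:
- name: kommander-cluster-admin
  user:
    token: ${USER_TOKEN_VALUE}
EOF

# Each required field must be present and followed by a non-empty value.
for field in "certificate-authority-data: " "server: " "token: "; do
  grep -q "${field}[^ ]" kommander-cluster-admin-config || { echo "missing ${field}"; exit 1; }
done
echo "kubeconfig contains all required fields"
```

If any field prints as missing, re-run the export commands and confirm none of the variables are empty before regenerating the file.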
- Verify the kubeconfig file can access the EKS cluster:

```bash
kubectl --kubeconfig $(pwd)/kommander-cluster-admin-config get all --all-namespaces
```
- Copy the `kommander-cluster-admin-config` file contents to your clipboard:

```bash
cat kommander-cluster-admin-config | pbcopy
```

  The `pbcopy` utility is macOS-specific; on Linux, pipe to a clipboard utility such as `xclip` or `xsel` instead.
Now that you have a kubeconfig file, go to the Kommander UI and follow these steps:
- Select the Add Cluster button in your Kommander window.
- Select the Attach Cluster button.
- Select the No additional networking restrictions card to display the Cluster Configuration dialog box. This dialog box accepts a kubeconfig file that you can paste or upload into the field.
- Paste the contents of your clipboard (or upload the file you created) into the Kubeconfig File text box.
- Assign a name and add any desired labels for the cluster.
- Confirm you are assigning the cluster to your desired workspace.
- Select the Submit button.
Related information
For information on related topics or procedures, refer to the following: