You can attach existing Kubernetes clusters to Kommander. After attaching the cluster, you can use Kommander to examine and manage this cluster. The following procedure shows how to attach an existing Amazon Elastic Kubernetes Service (EKS) cluster to Kommander.
Before you begin
This procedure requires the following items and configurations:
- A fully configured and running Amazon EKS cluster with administrative privileges.
- Konvoy v1.5.0 or above, installed on your machine and configured for your Amazon EKS cluster.
- Kommander v1.2.0 or above, installed and configured on your machine.
Attach Amazon EKS Clusters to Kommander
Attaching an Amazon EKS cluster to Kommander requires that you:

- Verify you are connected to the clusters
- Create and connect service accounts
- Create and implement a kubeconfig file
- Attach the EKS clusters to Kommander
Verify connections to your EKS clusters
- Ensure you are connected to your EKS clusters. Enter the following commands for each of your clusters:

```bash
kubectl config get-contexts
kubectl config use-context <context for first eks cluster>
```

- Confirm kubectl can access the EKS cluster:

```bash
kubectl get nodes
```
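A working connection returns the cluster's worker nodes in a Ready state. Output along these lines (the node names and version here are illustrative, not from a real cluster) confirms the context works:

```
NAME                                      STATUS   ROLES    AGE   VERSION
ip-10-0-1-10.us-west-2.compute.internal   Ready    <none>   3d    v1.21.5-eks-9017834
ip-10-0-2-24.us-west-2.compute.internal   Ready    <none>   3d    v1.21.5-eks-9017834
```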
Create and connect the service accounts
- Create a service account for Kommander on your EKS cluster:

```bash
kubectl -n kube-system create serviceaccount kommander-cluster-admin
```
- Configure your kommander-cluster-admin service account to have cluster-admin permissions. Enter the following command:

```bash
cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kommander-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kommander-cluster-admin
  namespace: kube-system
EOF
```
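As a quick sanity check (a suggestion, not part of the original procedure), you can ask the API server whether the new service account now has unrestricted access by impersonating it:

```bash
# Should print "yes" once the ClusterRoleBinding is in place
kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:kommander-cluster-admin
```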
Create and implement a kubeconfig file
- You must create a kubeconfig file that is compatible with Kommander. Enter these commands to set the following environment variables:

```bash
export USER_TOKEN_NAME=$(kubectl -n kube-system get serviceaccount kommander-cluster-admin -o=jsonpath='{.secrets[0].name}')
export USER_TOKEN_VALUE=$(kubectl -n kube-system get secret/${USER_TOKEN_NAME} -o=go-template='{{.data.token}}' | base64 --decode)
export CURRENT_CONTEXT=$(kubectl config current-context)
export CURRENT_CLUSTER=$(kubectl config view --raw -o=go-template='{{range .contexts}}{{if eq .name "'''${CURRENT_CONTEXT}'''"}}{{ index .context "cluster" }}{{end}}{{end}}')
export CLUSTER_CA=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}{{ index .cluster "certificate-authority-data" }}{{end}}{{ end }}')
export CLUSTER_SERVER=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}{{ .cluster.server }}{{end}}{{ end }}')
```
- Confirm the variables are set correctly. Run the following command:

```bash
env | grep CLUSTER
```

You should see a response similar to this (the certificate-authority data is truncated here for readability):

```
CLUSTER_CA=LS0tLS1CRUdJTiBDRVJU...S0tLQo=
CLUSTER_SERVER=https://your-server-info.gr7.your-region-1.eks.amazonaws.com
CURRENT_CLUSTER=dkp-engineering-eks.us-west-2.eksctl.io
```
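Note: on Kubernetes 1.24 and later, the control plane no longer creates token secrets for service accounts automatically, so USER_TOKEN_NAME may come back empty. In that case, one workaround (a sketch; the secret name is an assumption, any unused name works) is to create a long-lived service account token secret yourself, then re-export the token variables:

```bash
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: kommander-cluster-admin-token
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: kommander-cluster-admin
type: kubernetes.io/service-account-token
EOF

# The token controller populates the secret shortly after creation
export USER_TOKEN_NAME=kommander-cluster-admin-token
export USER_TOKEN_VALUE=$(kubectl -n kube-system get secret/${USER_TOKEN_NAME} -o=go-template='{{.data.token}}' | base64 --decode)
```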
- Create a kubeconfig file to use in Kommander. Enter the following command:

```bash
cat << EOF > kommander-cluster-admin-config
apiVersion: v1
kind: Config
current-context: ${CURRENT_CONTEXT}
contexts:
- name: ${CURRENT_CONTEXT}
  context:
    cluster: ${CURRENT_CONTEXT}
    user: kommander-cluster-admin
    namespace: kube-system
clusters:
- name: ${CURRENT_CONTEXT}
  cluster:
    certificate-authority-data: ${CLUSTER_CA}
    server: ${CLUSTER_SERVER}
users:
- name: kommander-cluster-admin
  user:
    token: ${USER_TOKEN_VALUE}
EOF
```
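Because this file embeds a long-lived cluster-admin token, it is worth restricting its permissions; this hardening step is a suggestion rather than part of the procedure:

```bash
# Make the kubeconfig readable only by the current user
chmod 600 kommander-cluster-admin-config
```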
- Verify that the kubeconfig file can access the EKS cluster:

```bash
kubectl --kubeconfig $(pwd)/kommander-cluster-admin-config get all --all-namespaces
```
- Copy the kommander-cluster-admin-config file contents to your clipboard:

```bash
cat kommander-cluster-admin-config | pbcopy
```
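The pbcopy command is specific to macOS. On Linux, an equivalent, assuming the xclip utility is installed, is:

```bash
xclip -selection clipboard < kommander-cluster-admin-config
```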
Attach the Amazon EKS cluster to Kommander
- From the Clusters page, select the Add Cluster button in your Kommander window.
- Select the Attach Cluster button. If you do not have any additional networking restrictions, select the No additional networking restrictions card. If your cluster has networking restrictions, follow the instructions to attach a cluster with networking restrictions.
- Paste the contents of your clipboard into the Connection Information Kubeconfig File text box.
- Assign a name and add any desired labels for the cluster.
- Select the intended context from the config in the Context select list.
- Confirm you are assigning the cluster to your desired workspace.
- Select the Submit button.
Related information
For information on related topics or procedures, refer to the following: