This guide provides instructions for getting started with Konvoy and bringing up a Kubernetes cluster with a basic configuration on Amazon Web Services (AWS) public cloud instances. If you want to customize your AWS environment, see Install AWS Advanced.
Prerequisites
Before starting the Konvoy installation, verify that you have:
- An x86_64-based Linux or macOS machine with a supported version of the operating system.
- The dkp binary for Linux or macOS.
- Docker version 18.09.2 or later.
- kubectl for interacting with the running cluster.
- A valid AWS account with credentials configured.
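You can quickly sanity-check these prerequisites before continuing. The version subcommands shown here are illustrative and may vary by release; the point is simply to confirm each binary is on your PATH:
docker --version
kubectl version --client
dkp version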
Configure AWS prerequisites (required only if creating an AWS cluster)
- Follow the steps in IAM Policy Configuration.
- Export the AWS region where you want to deploy the cluster:
export AWS_REGION=us-west-2
- Export the AWS profile with the credentials that you want to use to create the Kubernetes cluster:
export AWS_PROFILE=<profile>
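If you also have the AWS CLI installed (it is not required by Konvoy), you can confirm that the exported profile and region resolve to valid credentials before creating any resources:
aws sts get-caller-identity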
Bootstrap a kind cluster and CAPI controllers
- Create a bootstrap cluster:
dkp create bootstrap --kubeconfig $HOME/.kube/config
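This starts a local kind cluster and installs the Cluster API (CAPI) controllers into it. To confirm the bootstrap cluster is up, you can list its pods and wait for the controller pods to reach the Running state (namespace names vary by provider and version):
kubectl --kubeconfig $HOME/.kube/config get pods -A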
Create a new AWS Kubernetes cluster
- Give your cluster a name suitable for your environment:
export CLUSTER_NAME=$(whoami)-aws-cluster
- Make sure your AWS credentials are up to date. Refresh the credentials using this command:
dkp update bootstrap credentials aws
- Create a Kubernetes cluster:
dkp create cluster aws --cluster-name=${CLUSTER_NAME} --additional-tags=owner=$(whoami)
- (Optional) Specify an authorized key file to enable SSH access to the machines. The file must contain exactly one entry, as described in this manual. You can use the .pub file that complements your private SSH key, for example the public key that complements your RSA private key:
--ssh-public-key-file=${HOME}/.ssh/id_rsa.pub
The default username for SSH access is konvoy. To use your own username instead, pass:
--ssh-username=$(whoami)
A combined example that adds both flags to the create command is shown after these steps.
- Wait for the cluster control plane to be ready:
kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=20m
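If you use the optional SSH flags from the step above, a combined create command might look like the following; the key path and username are only examples:
dkp create cluster aws --cluster-name=${CLUSTER_NAME} --additional-tags=owner=$(whoami) --ssh-public-key-file=${HOME}/.ssh/id_rsa.pub --ssh-username=$(whoami)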
Explore the new Kubernetes cluster
- Fetch the kubeconfig file:
dkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
- List the Nodes with the command:
kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes
- List the Pods with the command:
kubectl --kubeconfig=${CLUSTER_NAME}.conf get pods -A
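To avoid passing --kubeconfig on every command, you can export the standard KUBECONFIG environment variable so that subsequent kubectl commands target the new cluster by default:
export KUBECONFIG=${CLUSTER_NAME}.conf
kubectl get nodes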
(Optional) Move controllers to the newly-created cluster
- Deploy CAPI controllers on the worker cluster:
dkp create bootstrap controllers --with-aws-bootstrap-credentials=false --kubeconfig ${CLUSTER_NAME}.conf
- Issue the move command:
dkp move --to-kubeconfig ${CLUSTER_NAME}.conf
Note that the Konvoy move operation has the following limitations:
- Only one workload cluster is supported. This also implies that Konvoy does not support moving more than one bootstrap cluster onto the same worker cluster.
- The Konvoy version used for creating the worker cluster must match the Konvoy version used for deleting the worker cluster.
- The Konvoy version used for deploying a bootstrap cluster must match the Konvoy version used for deploying a worker cluster.
- Konvoy only supports moving all namespaces in the cluster; Konvoy does not support migration of individual namespaces.
- You must ensure that the permissions available to the CAPI controllers running on the worker cluster are sufficient.
- Remove the bootstrap cluster, as the worker cluster is now self-managing:
dkp delete bootstrap --kubeconfig $HOME/.kube/config
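To confirm that the move succeeded and the worker cluster is managing itself, you can check that the CAPI and AWS provider controllers are running on the worker cluster. The namespaces below are the Cluster API defaults and may differ in your release:
kubectl --kubeconfig ${CLUSTER_NAME}.conf get pods -n capi-system
kubectl --kubeconfig ${CLUSTER_NAME}.conf get pods -n capa-system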
(Optional) Move controllers back to the temporary bootstrap cluster
- Create a bootstrap cluster:
dkp create bootstrap --kubeconfig $HOME/.kube/config
- Issue the move command:
dkp move --from-kubeconfig ${CLUSTER_NAME}.conf --to-kubeconfig $HOME/.kube/config
Delete the Kubernetes cluster and clean up your environment
- Delete the provisioned Kubernetes cluster and wait a few minutes:
dkp delete cluster --cluster-name=${CLUSTER_NAME}
- Delete the kind Kubernetes cluster:
dkp delete bootstrap --kubeconfig $HOME/.kube/config