Konvoy is a tool for provisioning Kubernetes clusters with a suite of pre-selected Cloud Native Computing Foundation (CNCF) and community-contributed tools. By combining a native Kubernetes cluster as its foundation with a default set of cluster extensions, Konvoy provides a complete out-of-the-box solution for organizations that want to deploy production-ready Kubernetes.
This Quick Start guide provides simplified instructions to get your Konvoy cluster up and running with minimal configuration requirements on an Amazon Web Services (AWS) public cloud instance.
Before you begin
Before installing Konvoy, ensure you have the following items.
Installing Konvoy
- Install required packages. In most cases, you can install the required software using your preferred package manager. For example, on a macOS computer, you can use Homebrew to install `kubectl` and the `aws` command-line utility by running the following command:

  ```bash
  brew install kubernetes-cli awscli
  ```
- Check the Kubernetes client version. Many important Kubernetes functions do not work if your client is outdated. You can verify that the version of `kubectl` you have installed is supported by running the following command:

  ```bash
  kubectl version --short=true
  ```
- Download and extract the Konvoy package tarball. For download instructions, see the Download Konvoy topic.
- Install with default settings:
- Verify that you have valid AWS security credentials for deploying the cluster on AWS. This step is not required if you are installing Konvoy in an on-premises environment. For information about installing in an on-premises environment, see Install on-premises.
- Create a directory for storing state information for your cluster by running the following commands:

  ```bash
  mkdir konvoy-quickstart
  cd konvoy-quickstart
  ```
  This state directory is required for performing future operations on your cluster. For example, the state files stored in this directory are required to tear down the cluster. If you delete the state information or this directory, destroying the cluster requires you to perform clean-up tasks manually.
- Deploy with all of the default settings and addons by running the following command:

  ```bash
  konvoy up
  ```

  The `konvoy up` command performs the following tasks:

  - Provisions three `m5.xlarge` control plane machines on AWS (a highly-available control plane API).
  - Provisions four `m5.2xlarge` worker machines on AWS.
  - Deploys all of the following default addons:
- Calico to provide the pod network and policy-driven perimeter network security.
- CoreDNS for DNS and service discovery.
- Helm to help you manage Kubernetes applications and application lifecycles.
- AWS EBS CSI driver to support persistent volumes.
- Elasticsearch (including the Elasticsearch exporter) to enable a scalable, high-performance logging pipeline.
- Kibana to support data visualization for content indexed by Elasticsearch.
- Fluent Bit to collect and collate logs from different sources and send logged messages to multiple destinations.
- Prometheus operator (including Grafana, Alertmanager, and Prometheus Adapter) to collect and evaluate metrics for monitoring and alerting.
- Traefik to route layer 7 traffic as a reverse proxy and load balancer.
- Kubernetes dashboard to provide a general-purpose web-based user interface for the Kubernetes cluster.
- Operations portal to centralize access to addon dashboards.
- Velero to back up and restore Kubernetes cluster resources and persistent volumes.
- Dex identity service to provide authentication to the Kubernetes clusters.
- Dex Kubernetes client authenticator to enable the authentication flow for obtaining a `kubectl` token for accessing the cluster.
- Traefik forward authorization proxy to provide basic authorization for Traefik ingress.
- Kommander for multi-cluster management.
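The client-version check earlier in this procedure can be scripted if you want automation to gate on a minimum version. The sketch below compares two version strings with `sort -V`; the `required` and `installed` values are placeholders (in practice, `installed` would come from the `kubectl version --short=true` output):

```shell
#!/bin/sh
# Placeholder values: substitute the minimum version your Konvoy release
# supports and the version reported by `kubectl version --short=true`.
required="1.20.0"
installed="1.20.13"

# sort -V orders version strings numerically; if the required version sorts
# first (or equal), the installed client meets the minimum.
lowest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "kubectl client version OK"
else
  echo "kubectl client version too old"
fi
```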
Verifying your installation
The `konvoy up` command produces output from the Terraform and Ansible provisioning operations.
When deployment is complete, you should see a confirmation message similar to the following:
```
Kubernetes cluster and addons deployed successfully!
Run `konvoy apply kubeconfig` to update kubectl credentials.
Navigate to the URL below to access various services running in the cluster.
  https://lb_addr-12345.us-west-2.elb.amazonaws.com/ops/landing
And login using the credentials below.
  Username: AUTO_GENERATED_USERNAME
  Password: SOME_AUTO_GENERATED_PASSWORD_12345
```
The dashboard and services may take a few minutes to be accessible.
Copy the cluster URL and login credentials into a text file, then save the file in a secured, shared location on your network.
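If you prefer to capture these details automatically, you can save the `konvoy up` output to a file (for example, with `konvoy up | tee konvoy-up.log`) and extract the portal URL from it. A minimal sketch, using sample output in the shape of the confirmation message above (the log path is illustrative):

```shell
#!/bin/sh
# Sample output in the shape of the confirmation message above (illustrative).
cat > /tmp/konvoy-up.log <<'EOF'
Kubernetes cluster and addons deployed successfully!
Run `konvoy apply kubeconfig` to update kubectl credentials.
Navigate to the URL below to access various services running in the cluster.
https://lb_addr-12345.us-west-2.elb.amazonaws.com/ops/landing
EOF

# Pull out the first https:// URL, i.e. the operations portal landing page.
grep -o 'https://[^ ]*' /tmp/konvoy-up.log | head -n1
```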
By default, the cluster deployed by the `konvoy up` command uses automatically-generated login credentials and self-signed SSL/TLS certificates.
For a production cluster, you can modify the cluster configuration to use your own certificates.
You can then use this information to access the operations portal and associated dashboards.
Explore the cluster and addons
Use the URL you copied from the deployment output (for example, `https://lb_addr-12345.us-west-2.elb.amazonaws.com/ops/landing`) to access the cluster's dashboards through the operations portal.
The default operations portal provides links to several dashboards of the installed services, including:
- Grafana dashboards for metrics
- Kibana dashboards for logs
- Prometheus AlertManager dashboard for viewing alerts and alert configurations
- Traefik dashboards for inbound HTTP traffic
- Kubernetes dashboard for cluster activity
After you log in to the operations portal, you can view the dashboards to see information about cluster activity and performance.
Although these are the most common next steps, logging in to the operations portal and running basic diagnostics are not required to verify a successful installation: if there had been issues installing or bringing the Kubernetes cluster online, the addon installation would have failed.
Merge the kubeconfig
Once the cluster is provisioned and functional, you should store its access configuration in your main `kubeconfig` file before using `kubectl` to interact with the cluster.
The access configuration contains the certificate credentials and the API server endpoint for accessing the cluster.
The `konvoy` cluster stores this information internally as `admin.conf`, but you can merge it into your "home" `kubeconfig` file so that you can access the cluster from other working directories on your machine.
To merge the access configuration, use the following command:

```bash
konvoy apply kubeconfig
```
- Specify the kubeconfig location.

  By default, the `konvoy apply kubeconfig` command uses the value of the `KUBECONFIG` environment variable as the path to the configuration file to merge into. If the `KUBECONFIG` environment variable is not defined, the default path of `~/.kube/config` is used.

  You can override the default Kubernetes configuration path in one of two ways:
  - By specifying an alternate path before running the `konvoy apply kubeconfig` command. For example:

    ```bash
    export KUBECONFIG="${HOME}/.kube/konvoy.conf"
    konvoy apply kubeconfig
    ```
  - By setting `KUBECONFIG` to the path of the current configuration file created and used within `konvoy`. For example:

    ```bash
    export KUBECONFIG="${PWD}/admin.conf"
    ```
- Validate the merged configuration.

  To validate the merged configuration, you should be able to list the nodes in the Kubernetes cluster by running the following command:

  ```bash
  kubectl get nodes
  ```
  The command returns output similar to the following:

  ```
  NAME                                         STATUS   ROLES    AGE   VERSION
  ip-10-0-129-3.us-west-2.compute.internal     Ready    <none>   24m   v1.20.13
  ip-10-0-131-215.us-west-2.compute.internal   Ready    <none>   24m   v1.20.13
  ip-10-0-131-239.us-west-2.compute.internal   Ready    <none>   24m   v1.20.13
  ip-10-0-131-24.us-west-2.compute.internal    Ready    <none>   24m   v1.20.13
  ip-10-0-192-174.us-west-2.compute.internal   Ready    master   25m   v1.20.13
  ip-10-0-194-137.us-west-2.compute.internal   Ready    master   26m   v1.20.13
  ip-10-0-195-215.us-west-2.compute.internal   Ready    master   26m   v1.20.13
  ```
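The `KUBECONFIG` lookup order described above is ordinary shell parameter expansion. The sketch below illustrates the documented behavior (it is an illustration, not konvoy's actual implementation): use `$KUBECONFIG` when it is set, otherwise fall back to `~/.kube/config`.

```shell
#!/bin/sh
# Resolve the kubeconfig path the way described above.
unset KUBECONFIG
echo "${KUBECONFIG:-$HOME/.kube/config}"   # default path

KUBECONFIG="${PWD}/admin.conf"
echo "${KUBECONFIG:-$HOME/.kube/config}"   # overridden path
```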
Next steps
Now that you have a basic Konvoy cluster installed and ready to use, you might want to test operations by deploying a simple, sample application, customizing the cluster configuration, or checking the status of cluster components.
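As a quick smoke test of a new cluster, you could deploy a small sample application. The sketch below writes a minimal Deployment manifest to a file (the `nginx-sample` name and image tag are illustrative, not from the Konvoy documentation), which you would then apply with `kubectl apply -f nginx-sample.yaml`:

```shell
#!/bin/sh
# Write a minimal sample Deployment manifest (names and image are illustrative).
cat > nginx-sample.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-sample
  template:
    metadata:
      labels:
        app: nginx-sample
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
EOF
echo "wrote nginx-sample.yaml"
```

After applying the manifest, `kubectl get pods` should show the pod reach the `Running` state, and `kubectl delete -f nginx-sample.yaml` removes the sample application when you are done.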
For more details, see the following topics: