This topic describes how to prepare your environment and install Konvoy on VMware vSphere. This installation is similar to deploying the entire Kubernetes cluster onto vSphere Infrastructure as a Service (IaaS).
Before you Begin
Before installing, verify that your environment meets the following basic requirements:
- vCenter version v6.7.x
  vCenter provides the vSphere APIs that Konvoy uses to create the cluster VMs. The API endpoint must be reachable from where the Konvoy command line interface (CLI) runs.
- vSphere account with credentials configured
  Konvoy uses the account to access vCenter APIs. This account must have administrator privileges.
- govc command-line utility
  This guide shows how to use the govc CLI to create the vSphere roles that are used by the Kubernetes cluster components.
- Docker version 18.09.2 or later
  You must have Docker installed on the host where the Konvoy CLI runs. For example, if you are installing Konvoy on your laptop, ensure the laptop has a supported version of Docker.
- kubectl v1.20.6 or later
  To enable interaction with the running cluster, you must have kubectl installed on the host where the Konvoy CLI runs.
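To confirm these tools are available on the host where the Konvoy CLI runs, you can check their versions, for example:
# verify prerequisite tooling on the Konvoy CLI host
docker --version
kubectl version --client
govc version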
Install Konvoy on vSphere
To install Konvoy on vSphere, perform the following tasks:
- Set the vSphere environment variables
- Create roles using govc
- Create a cloud-provider user and assign roles
- Create tags for Datacenters and Zones
- Verify that your VM template exists
- Install Konvoy
- Modify the Cluster Name (optional)
- Show planned infrastructure changes
Set the vSphere environment variables
Set the following environment variables:
# export vSphere connection settings
export VSPHERE_SERVER=_YOUR_VCENTER_URL
export VSPHERE_USER=_YOUR_VCENTER_USERNAME
export VSPHERE_PASSWORD=_YOUR_VCENTER_PASSWORD
export VSPHERE_ALLOW_UNVERIFIED_SSL=true
export VSPHERE_PERSIST_SESSION=true
Create roles using govc
Set the govc environment variables and create the following roles:
# reuse the vSphere settings exported above
export GOVC_URL=${VSPHERE_SERVER}
export GOVC_USERNAME=${VSPHERE_USER}
export GOVC_PASSWORD=${VSPHERE_PASSWORD}
export GOVC_INSECURE=${VSPHERE_ALLOW_UNVERIFIED_SSL}
export GOVC_PERSIST_SESSION=${VSPHERE_PERSIST_SESSION}
# create roles
govc role.create CNS-DATASTORE Datastore.FileManagement
govc role.create CNS-HOST-CONFIG-STORAGE Host.Config.Storage
govc role.create CNS-VM VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddRemoveDevice
govc role.create CNS-SEARCH-AND-SPBM Cns.Searchable StorageProfile.View
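To confirm that the roles were created with the expected privileges, you can list them, for example:
# list the privileges assigned to each role
govc role.ls CNS-DATASTORE
govc role.ls CNS-HOST-CONFIG-STORAGE
govc role.ls CNS-VM
govc role.ls CNS-SEARCH-AND-SPBM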
Create a cloud-provider user and assign the roles at the correct hierarchy levels
The cloud-provider user is used by the Cloud Provider Interface (CPI) and Container Storage Interface (CSI), so that a Konvoy cluster can take full advantage of running on vSphere.
- Assign the role CNS-DATASTORE to your cloud-provider user on all Datastores to be used.
- Assign the role CNS-HOST-CONFIG-STORAGE to your cloud-provider user on all vSAN clusters to be used.
- Assign the role CNS-VM, propagated, to your cloud-provider user on the folder where your VMs will be started. We recommend creating a dedicated VM folder for this purpose rather than using the root (/).
- Assign the role CNS-SEARCH-AND-SPBM to your cloud-provider user at the root level of the vCenter Server.
- Assign the role ReadOnly to your cloud-provider user on all Datacenters.
- Assign the role ReadOnly, propagated, to your cloud-provider user on all Clusters.
More details about assigning the roles at the correct vSphere level can be found in the CSI Driver prerequisites.
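As an illustration only, role assignments like these can also be made with the govc permissions.set command; the cloud-provider user name and the inventory paths below are placeholders for your own environment:
# assign CNS-DATASTORE (not propagated) on a datastore to be used
govc permissions.set -principal cloud-provider@vsphere.local -role CNS-DATASTORE -propagate=false /dc1/datastore/vsanDatastore
# assign CNS-VM, propagated, on the pre-created VM folder
govc permissions.set -principal cloud-provider@vsphere.local -role CNS-VM -propagate=true /dc1/vm/D2iQ
# assign CNS-SEARCH-AND-SPBM at the vCenter root level
govc permissions.set -principal cloud-provider@vsphere.local -role CNS-SEARCH-AND-SPBM -propagate=false /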
Create tags for Datacenters and Zones
To use the cloud-provider CSI, refer to the Set Up Zones in the vSphere CNS Environment guide.
Keep the categories named k8s-region and k8s-zone; the tags can and should match your Datacenter and Cluster names.
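For example, using govc, the categories and tags could be created and attached as follows; the Datacenter name dc1 and Cluster name zone1 match the cluster.yaml example later in this topic and are placeholders for your own names:
# create the tag categories
govc tags.category.create k8s-region
govc tags.category.create k8s-zone
# create tags that match the Datacenter and Cluster names
govc tags.create -c k8s-region dc1
govc tags.create -c k8s-zone zone1
# attach the tags to the Datacenter and the Cluster
govc tags.attach -c k8s-region dc1 /dc1
govc tags.attach -c k8s-zone zone1 /dc1/host/zone1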
VM Templates
You must have a VM template registered in your Datacenter's storage. The following software must be present in the template:
- cloud-init
- cloud-init-vmware-guestinfo
- open-vm-tools (or the VMware-provided version)
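To verify that the template is registered, you can search for it with govc; the template name below is a placeholder:
# look up the VM template by name
govc find / -type m -name '_YOUR_TEMPLATE_NAME_'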
Installation
- After verifying your prerequisites, create a vSphere Kubernetes cluster.yaml by running konvoy init --provisioner vsphere. This command creates your cluster.yaml for vSphere, which defines the Kubernetes cluster and the default addons to install.
- Edit your cluster.yaml file and fill in the empty values in spec.vsphere. If you want to configure a multi-Datacenter setup, define the lists with all needed values. For example, the cluster.yaml content can look similar to the following:
vsphere:
  server: vcenter.hw.ca1.ksphere-platform.d2iq.cloud
  port: 443
  datacenters:
    - name: dc1
      cluster: zone1
      network: VMs
      datastore: vsanDatastore
      # This is a VM folder you pre-created in your cluster, as mentioned for the CNS-VM role.
      vmFolder: D2iQ
  username: _YOUR_CLOUD_PROVIDER_USER_NAME_
  password: _YOUR_CLOUD_PROVIDER_USER_PASSWORD_
If you do not want to insert the CSI username and password directly, you can write the following instead:
username: ${KONVOY_VSPHERE_CSI_USERNAME}
password: ${KONVOY_VSPHERE_CSI_PASSWORD}
In this case, make sure to set the KONVOY_VSPHERE_CSI_USERNAME and KONVOY_VSPHERE_CSI_PASSWORD environment variables.
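For example, export both variables on the host where the Konvoy CLI runs before running konvoy up:
export KONVOY_VSPHERE_CSI_USERNAME=_YOUR_CLOUD_PROVIDER_USER_NAME_
export KONVOY_VSPHERE_CSI_PASSWORD=_YOUR_CLOUD_PROVIDER_USER_PASSWORD_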
Change the metallb addon to enabled: true and set the addresses you want to provide for Services of type LoadBalancer in your network.
For more details, see the documentation on load balancing for external traffic.
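As a sketch, assuming the default addon layout generated by konvoy init, the metallb entry in cluster.yaml might look similar to the following; the address range is a placeholder for addresses in your own network:
- name: metallb
  enabled: true
  values: |
    configInline:
      address-pools:
      - name: default
        protocol: layer2
        addresses:
        - 10.0.50.25-10.0.50.50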
Specifically, for a preconfigured cluster.yaml, the konvoy up command does the following:
- Provisions three xlarge virtual machines (4 CPUs, 16 GB RAM each) as Kubernetes master nodes.
- Provisions four 2xlarge virtual machines (8 CPUs, 32 GB RAM each) as Kubernetes worker nodes.
- Deploys all of the following default addons:
- Calico
- Cert-Manager
- CoreDNS
- Helm
- vSphere CSI driver
- Elasticsearch (including Elasticsearch Exporter)
- Fluent Bit
- Kibana
- Prometheus operator (including Grafana, AlertManager and Prometheus Adapter)
- Traefik
- Kubernetes dashboard
- Operations portal
- Velero
- Dex identity service
- Dex Kubernetes client authenticator
- Traefik forward authorization proxy
- Kommander
- Reloader
- Default Storage Class Protection
- Gatekeeper
- Konvoy Config
The default configuration options are recommended for a small cluster (about 10 worker nodes).
Modify the cluster name
By default, the cluster name is the name of the folder where the konvoy command is run. The cluster name is used to tag the provisioned infrastructure and the context when applying the kubeconfig file. To change the cluster name, run the following command:
konvoy init --provisioner vsphere --cluster-name <YOUR_SPECIFIED_NAME>
Show planned infrastructure changes
Before running konvoy up or konvoy provision, it is also possible to show the calculated changes that Terraform would perform on the infrastructure.
You should see output similar to the following:
$ konvoy provision --plan-only
...
Plan: 11 to add, 0 to change, 0 to destroy.
Add custom cloud.conf file
Konvoy generates default cloud.conf.konvoy, cloud-csi.conf.konvoy, and cpi-global-secret.yaml.konvoy files based on the provisioned infrastructure.
If your cluster requires additional configuration, you can specify it by creating extras/cloud-provider/cloud.conf, extras/cloud-provider/cloud-csi.conf, and extras/cloud-provider/cpi-global-secret.yaml files in your working directory.
Konvoy then copies these files to the remote machines and configures the necessary Kubernetes components to use them.
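For example, assuming you have already prepared your own configuration files (the my-* source names are placeholders), the working directory layout can be created like this:
mkdir -p extras/cloud-provider
# copy your custom configuration into the file names Konvoy expects
cp my-cloud.conf extras/cloud-provider/cloud.conf
cp my-cloud-csi.conf extras/cloud-provider/cloud-csi.conf
cp my-cpi-global-secret.yaml extras/cloud-provider/cpi-global-secret.yaml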
You can also configure Konvoy to use files already present on the Kubernetes machines. On the remote machines, create /root/kubernetes/cloud.conf, /root/kubernetes/cloud-csi.conf, and /root/kubernetes/cpi-global-secret.yaml files, and Konvoy will configure the necessary Kubernetes components to use these configuration files.
If files are specified in both extras/cloud-provider and /root/kubernetes, the files already present in /root/kubernetes/ on the remote machines are used.
View installation operations
As the konvoy up command runs to start the cluster installation defined by cluster.yaml, you will see output as the operations are performed. The first output messages you see are from Terraform as it provisions your nodes.
After the nodes are provisioned, Ansible connects to the instances and installs Kubernetes in steps called tasks and playbooks. Near the end of the output, addons are installed.
View cluster operations
You can monitor your cluster through the Operations Portal user interface. After you run the konvoy up command, if the installation is successful, the command output displays the information you need to access the Operations Portal.
For example, you should see information similar to this:
Kubernetes cluster and addons deployed successfully!
Run `konvoy apply kubeconfig` to update kubectl credentials.
Run `konvoy check` to verify that the cluster has reached a steady state and all deployments have finished.
Navigate to the URL below to access various services running in the cluster.
https://172.10.42.42/ops/landing
And login using the credentials below.
Username: AUTO_GENERATED_USERNAME
Password: SOME_AUTO_GENERATED_PASSWORD_12345
If the cluster was recently created, the dashboard and services may take a few minutes to be accessible.
Check the files installed
When the konvoy up --provisioner vsphere command completes its setup operations, the following files are generated:
- cluster.yaml - defines the Konvoy configuration for the cluster, where you customize your cluster configuration.
- admin.conf - a kubeconfig file, which contains credentials to connect to the kube-apiserver of your cluster through kubectl.
- inventory.yaml - an Ansible Inventory file.
- state folder - contains Terraform files, including a state file.
- cluster-name-ssh.pem / cluster-name-ssh.pub - the SSH keys used to connect to the instances.
- runs folder - contains logging information.
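For example, to start using the generated admin.conf kubeconfig, merge it into your kubectl credentials and verify access to the cluster:
# merge the cluster credentials into your kubeconfig
konvoy apply kubeconfig
# confirm the nodes are registered
kubectl get nodes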
For a full list of attributed 3rd party software, see D2IQ Legal.