This topic describes how to use the CLI to deploy an application to attached clusters within a project. To use the Kommander UI to deploy applications, see Deploy applications in a project.
See Project Applications for a list of all applications and those that are enabled by default.
Prerequisites
Before you begin, you must have:
- A running cluster with Kommander installed.
- An existing Kubernetes cluster attached to Kommander.
Set the `PROJECT_NAMESPACE` environment variable to the name of the project's namespace where the cluster is attached:

```sh
export PROJECT_NAMESPACE=<project_namespace>
```
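The commands in the following sections rely on this variable being expanded inside unquoted heredocs before the manifest reaches `kubectl`. A quick sketch of that mechanic, using a hypothetical namespace value:

```shell
# Hypothetical value for illustration; use your project's real namespace.
export PROJECT_NAMESPACE=project-test-vjsfq

# Because the heredoc delimiter (EOF) is unquoted, the shell expands
# ${PROJECT_NAMESPACE} before the text is piped onward:
cat <<EOF
metadata:
  namespace: ${PROJECT_NAMESPACE}
EOF
# prints:
# metadata:
#   namespace: project-test-vjsfq
```

If the delimiter were quoted (`<<'EOF'`), the literal string `${PROJECT_NAMESPACE}` would be sent instead, and the apply would fail.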
Deploy the application
For the list of applications that can be deployed to the attached cluster, see Project Applications.
- Deploy one of the supported applications to your existing attached cluster with an `AppDeployment` resource.
- Within the `AppDeployment`, define the `appRef` to specify which `App` will be deployed:

  ```sh
  cat <<EOF | kubectl apply -f -
  apiVersion: apps.kommander.d2iq.io/v1alpha1
  kind: AppDeployment
  metadata:
    name: project-grafana-logging-6.13.9
    namespace: ${PROJECT_NAMESPACE}
  spec:
    appRef:
      name: project-grafana-logging-6.13.9
  EOF
  ```
- Create the resource in the project you just created. This instructs Kommander to deploy the `AppDeployment` to the `KommanderCluster`s in the same project.
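After applying the manifest, you can confirm that the resource exists. This verification step is not part of the original instructions and assumes your current `kubectl` context targets the cluster where Kommander runs:

```sh
kubectl get appdeployments -n ${PROJECT_NAMESPACE}
```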
Deploy an application with a custom configuration
- Provide the name of a `ConfigMap` in the `AppDeployment`, which provides custom configuration on top of the default configuration:

  ```sh
  cat <<EOF | kubectl apply -f -
  apiVersion: apps.kommander.d2iq.io/v1alpha1
  kind: AppDeployment
  metadata:
    name: project-grafana-logging
    namespace: ${PROJECT_NAMESPACE}
  spec:
    appRef:
      name: project-grafana-logging-6.13.9
    configOverrides:
      name: project-grafana-logging-overrides
  EOF
  ```
- Create the `ConfigMap` with the name provided in the step above, with the custom configuration:

  ```sh
  cat <<EOF | kubectl apply -f -
  apiVersion: v1
  kind: ConfigMap
  metadata:
    namespace: ${PROJECT_NAMESPACE}
    name: project-grafana-logging-overrides
  data:
    values.yaml: |
      datasources:
        datasources.yaml:
          apiVersion: 1
          datasources:
          - name: Loki
            type: loki
            url: "http://project-grafana-loki-loki-distributed-gateway"
            access: proxy
            isDefault: false
  EOF
  ```
Kommander waits for the `ConfigMap` to be present before deploying the `AppDeployment` to the attached clusters.
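Because the deployment blocks until the `ConfigMap` exists, a quick way to rule out a missing override is to check for it directly (a verification step added here, not in the original flow):

```sh
kubectl get configmap project-grafana-logging-overrides -n ${PROJECT_NAMESPACE}
```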
Verify applications
The applications are now deployed. Connect to the attached cluster and check the `HelmReleases` to verify the deployment:

```sh
kubectl get helmreleases -n ${PROJECT_NAMESPACE}

NAMESPACE            NAME                      READY   STATUS                             AGE
project-test-vjsfq   project-grafana-logging   True    Release reconciliation succeeded   7m3s
```
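If a `HelmRelease` does not reach `Ready`, describing it surfaces the reconciliation conditions and events. This is standard Flux tooling rather than a step from the original instructions:

```sh
kubectl describe helmrelease project-grafana-logging -n ${PROJECT_NAMESPACE}
```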