This topic describes how to use the Kommander UI or the CLI to deploy a workspace catalog application to attached clusters within a workspace.
Prerequisites
Before you begin, you must have:
- A running cluster with Kommander installed.
- An existing Kubernetes cluster attached to Kommander.
Set the `WORKSPACE_NAMESPACE` environment variable to the namespace of the workspace that the attached cluster exists in:

```bash
export WORKSPACE_NAMESPACE=<workspace_namespace>
```
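If you are unsure which namespace a workspace uses, one way to find it is to list the attached clusters across all namespaces. This is a quick sketch that assumes your attached clusters are represented by `KommanderCluster` resources, as described later in this topic:

```bash
# The NAMESPACE column shows the workspace namespace each attached
# cluster belongs to; export that value as WORKSPACE_NAMESPACE.
kubectl get kommanderclusters --all-namespaces
```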
After creating a GitRepository, use either the Kommander UI or the CLI to deploy your catalog applications.
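Before deploying, you can confirm that the `GitRepository` has been reconciled. This is a generic Flux-style check rather than a step from this guide, and it assumes the repository was created in the workspace namespace:

```bash
# READY should be True once Flux has fetched the repository
kubectl get gitrepository -n ${WORKSPACE_NAMESPACE}
```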
Deploy the application using the Kommander UI
Follow these steps to deploy your catalog applications from the Kommander UI:
- Select the desired Workspace.
- Select Applications in the left navigation bar to browse the available applications from your configured repositories.
- Select your desired application.
- Select the version you want to deploy from the version drop-down, and then select Deploy. The Deploy Workspace Catalog Application page is displayed.
- (Optional) If you want to override the default configuration values, copy your customized values into the text editor under Configure Service, or upload a YAML file that contains the values:

  ```yaml
  someField: someValue
  ```

- Confirm the details are correct, and then select the Deploy button.
For all applications, you must provide a display name and an ID. The ID is generated automatically from the display name unless or until you edit the ID directly; for example, the display name Spark Operator produces the ID `spark-operator`. The ID must comply with Kubernetes DNS subdomain name validation rules.
Alternatively, you can use the CLI to deploy your catalog applications.
Deploy the application using the CLI
See workspace catalog applications for the list of available applications that you can deploy on the attached cluster.
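Because the `AppDeployment` below references an `App` resource by name, you can also list the `App` resources available in the workspace namespace before choosing one. This is a sketch that assumes the `App` kind used in the next step is namespaced there:

```bash
# Each entry corresponds to a deployable catalog application version,
# for example spark-operator-1.1.6
kubectl get apps -n ${WORKSPACE_NAMESPACE}
```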
- Deploy a supported application to your existing attached cluster with an `AppDeployment` resource.
- Within the `AppDeployment`, define the `appRef` to specify which `App` to deploy:

  ```bash
  cat <<EOF | kubectl apply -f -
  apiVersion: apps.kommander.d2iq.io/v1alpha2
  kind: AppDeployment
  metadata:
    name: spark-operator
    namespace: ${WORKSPACE_NAMESPACE}
  spec:
    appRef:
      name: spark-operator-1.1.6
      kind: App
  EOF
  ```
- Create the resource in the workspace you just created. This instructs Kommander to deploy the `AppDeployment` to the `KommanderCluster`s in the same workspace.
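To confirm that the `AppDeployment` was created on the management cluster, you can query it by the name used in `metadata.name` above; this is a generic kubectl check rather than a required step:

```bash
# The AppDeployment should appear in the workspace namespace
kubectl get appdeployment spark-operator -n ${WORKSPACE_NAMESPACE}
```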
Deploy an application with a custom configuration using the CLI
- Provide the name of a `ConfigMap` in the `AppDeployment`, which provides custom configuration on top of the default configuration:

  ```bash
  cat <<EOF | kubectl apply -f -
  apiVersion: apps.kommander.d2iq.io/v1alpha2
  kind: AppDeployment
  metadata:
    name: spark-operator
    namespace: ${WORKSPACE_NAMESPACE}
  spec:
    appRef:
      name: spark-operator-1.1.6
      kind: App
    configOverrides:
      name: spark-operator-overrides
  EOF
  ```
- Create the `ConfigMap` with the name provided in the step above, with the custom configuration:

  ```bash
  cat <<EOF | kubectl apply -f -
  apiVersion: v1
  kind: ConfigMap
  metadata:
    namespace: ${WORKSPACE_NAMESPACE}
    name: spark-operator-overrides
  data:
    values.yaml: |
      configInline:
        uiService:
          enable: false
  EOF
  ```
Kommander waits for the `ConfigMap` to be present before deploying the `AppDeployment` to the attached clusters.
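If the application does not roll out, it is worth verifying that the `ConfigMap` exists and contains the expected `values.yaml` key. This is a minimal check using the names from the steps above:

```bash
# data should contain a values.yaml key with your override values
kubectl get configmap spark-operator-overrides -n ${WORKSPACE_NAMESPACE} -o yaml
```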
Verify applications
The applications are now deployed. Connect to the attached cluster and check the `HelmReleases` to verify the deployment:

```bash
kubectl get helmreleases -n ${WORKSPACE_NAMESPACE}
NAMESPACE              NAME             READY   STATUS                             AGE
workspace-test-vjsfq   spark-operator   True    Release reconciliation succeeded   7m3s
```
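If a `HelmRelease` does not reach the Ready state, describing it shows the status conditions and recent events; this is a general Flux troubleshooting pattern rather than a step from this guide:

```bash
# Inspect status conditions and events for the release
kubectl describe helmrelease spark-operator -n ${WORKSPACE_NAMESPACE}
```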