After you have a basic Konvoy cluster installed and ready to use, you might want to test operations by deploying a simple, sample application. This task is optional and is only intended to demonstrate the basic steps for deploying applications. If you are configuring the Konvoy cluster for a production deployment, you can use this section to become familiar with the deployment process. However, deploying applications on a production cluster typically involves more planning and custom configuration than this example covers.
This tutorial shows how to deploy a simple application that connects to the Redis service.
The sample application used in this tutorial is a condensed form of the Kubernetes sample guestbook application.
Before you begin
You must have a Konvoy cluster running.
To deploy the sample application
- Deploy the Redis pods and service by running the following commands:
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-deployment.yaml
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-service.yaml
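To confirm that the leader came up, you can list the Redis resources by label. This is an optional check; the app=redis label comes from the upstream guestbook manifests and is the same label used by the cleanup step at the end of this tutorial:
kubectl get pods -l app=redis
kubectl get service -l app=redis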
- Deploy the Redis followers. The leader deployment created above is a single pod. Adding followers (or replicas) makes it highly available and able to meet greater traffic demands. You must then set up the guestbook application to communicate with the Redis followers to read data. To do this, set up another service (the redis-follower-service.yaml below) by running the following commands:
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-deployment.yaml
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-service.yaml
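To verify that the followers were added alongside the leader, you can list all of the Redis deployments, pods, and services in one command. This is an optional check; the exact resource names come from the upstream manifests:
kubectl get deployments,pods,services -l app=redis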
- Deploy the web app frontend by running the following command:
kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml
- Confirm that there are three frontend replicas running:
kubectl get pods -l app=guestbook -l tier=frontend
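If the pods are still starting, you can wait for them to become ready instead of polling. This is an optional convenience, assuming a reasonably recent version of kubectl that includes the wait subcommand:
kubectl wait --for=condition=ready pod -l app=guestbook,tier=frontend --timeout=120s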
- Apply the frontend Service:
kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml
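You can confirm that the Service was created before changing its type in the next step:
kubectl get service frontend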
- Configure the frontend service to use a cloud load balancer:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
EOF
- View the frontend service via the LoadBalancer by running the following command to get the IP address for the frontend Service:
kubectl get service frontend
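While the cloud provider is still provisioning the load balancer, the EXTERNAL-IP column shows <pending>. If you want, you can watch the service until an address is assigned, then press Ctrl+C to stop watching:
kubectl get service frontend --watch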
- Copy the external IP address and load the page in your browser to view your guestbook.
The service properties provide the load balancer address, and you can connect to the application by opening that address in your web browser. Because this sample deployment creates a cloud load balancer, provisioning can take a few minutes, and you might also experience a slight delay before the application responds due to DNS propagation and synchronization.
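If you prefer to check from the command line first, a plain HTTP request against the load balancer address should return the guestbook page. Replace EXTERNAL-IP below with the address you copied in the previous step:
curl http://EXTERNAL-IP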
- Remove the sample application by running the following commands:
kubectl delete deployment -l app=redis
kubectl delete service -l app=redis
kubectl delete deployment frontend
kubectl delete service frontend
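To confirm that the sample application was removed, list the remaining pods and services; the guestbook and Redis resources should no longer appear:
kubectl get pods
kubectl get services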
- Optional: Tear down the cluster by running the following command:
konvoy down
This command destroys the Kubernetes cluster and the infrastructure it runs on.