In this tutorial, we'll set up a demo application and have it undergo some chaos in combination with load testing. We will then use Keptn quality gates to evaluate the resilience of the application based on SLO-driven quality gates.
You'll find a time estimate until the end of this tutorial in the top right corner of your screen - this should give you guidance on how much time is needed for each step.
In this tutorial, we are going to install Keptn on a Kubernetes cluster.
The full set up that we are going to deploy is sketched in the following image.
If you are interested, please have a look at this presentation from Litmus and Keptn maintainers presenting the initial integration.
Keptn can be installed on a variety of Kubernetes distributions. Please find a full compatibility matrix for supported Kubernetes versions here.
Positive : For the sizing of the Kubernetes cluster, we recommend a cluster with at least 8 vCPUs and 30 GB of memory. Detailed sizing recommendations for different platforms can be found in the respective setup documentation.
Please find tutorials on how to set up your cluster here. For the best tutorial experience, please follow the sizing recommendations given in the tutorials.
Positive : Please note that if you are following one of the installation tutorials, only steps ①-③ are needed (setup of cluster) since we are going to install Keptn as part of this tutorial.
Please make sure your environment matches these prerequisites:
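As an optional sanity check, you can verify that your kubectl context points to the intended cluster and take a look at the node sizing:

kubectl config current-context
kubectl get nodes -o wide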
Download the Istio command line tool by following the official instructions or by executing the following steps.
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.14.2 sh -
Check the version of Istio that has been downloaded and execute the installer from the corresponding folder, e.g.:
./istio-1.14.2/bin/istioctl install
The installation of Istio should be finished within a couple of minutes.
This will install the Istio default profile with ["Istio core" "Istiod" "Ingress gateways"] components into the cluster. Proceed? (y/N) y
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Installation complete
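If you want to double-check the Istio installation, list the pods in the istio-system namespace; the istiod and ingress gateway pods should be in the Running state:

kubectl get pods -n istio-system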
Every release of Keptn provides binaries for the Keptn CLI. These binaries are available for Linux, macOS, and Windows.
There are multiple options for getting the Keptn CLI onto your machine.
curl -sL https://get.keptn.sh | KEPTN_VERSION=0.18.0 bash
This will download and install the Keptn CLI in the specified version automatically.

Alternatively, you can install the Keptn CLI via Homebrew:

brew install keptn

If you prefer a manual installation, download the release archive for your platform and unpack it. Then locate the keptn binary (e.g., keptn-0.18.0-amd64.exe) in the unpacked directory and rename it to keptn.
- Linux / macOS: Add executable permissions (chmod +x keptn), and move it to the desired destination (e.g., mv keptn /usr/local/bin/keptn).
- Windows: Copy the executable to the desired folder and add the executable to your PATH environment variable.
Now, you should be able to run the Keptn CLI:
keptn --help
.\keptn.exe --help
Positive : For the rest of the documentation we will stick to the Linux / macOS version of the commands.
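Once installed, you can also confirm which CLI version you are running:

keptn version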
To install the latest release of Keptn with full quality gate + continuous delivery capabilities in your Kubernetes cluster, execute the following helm install command.
helm install keptn --version 0.18.0 -n keptn --repo=https://charts.keptn.sh --create-namespace --wait --set=continuousDelivery.enabled=true --generate-name
Positive : The installation process will take about 3-5 minutes.
Positive : Please note that Keptn comes with different installation options, all of them described in detail in the Keptn docs.
Next, we install the jmeter-service and the helm-service. These are additional Keptn microservices that handle the test execution and deployment tasks for our project.
helm install jmeter-service keptn/jmeter-service -n keptn
helm install helm-service keptn/helm-service -n keptn
By default, Keptn installs into the keptn namespace. Once the installation is complete, we can verify the deployments:
kubectl get deployments -n keptn
Here is the output of the command:
NAME READY UP-TO-DATE AVAILABLE AGE
api-gateway-nginx 1/1 1 1 2m44s
api-service 1/1 1 1 2m44s
approval-service 1/1 1 1 2m44s
bridge 1/1 1 1 2m44s
resource-service 1/1 1 1 2m44s
helm-service 1/1 1 1 2m44s
jmeter-service 1/1 1 1 2m44s
lighthouse-service 1/1 1 1 2m44s
litmus-service 1/1 1 1 2m44s
keptn-mongo 1/1 1 1 2m44s
mongodb-datastore 1/1 1 1 2m44s
remediation-service 1/1 1 1 2m44s
shipyard-controller 1/1 1 1 2m44s
statistics-service 1/1 1 1 2m44s
webhook-service 1/1 1 1 2m51s
We are using Istio for traffic routing and as an ingress to our cluster. To make the setup experience as smooth as possible we have provided some scripts for your convenience. If you want to run the Istio configuration yourself step by step, please take a look at the Keptn documentation.
The first step in our configuration automation for Istio is downloading the configuration bash script from GitHub:
curl -o configure-istio.sh https://raw.githubusercontent.com/keptn/examples/0.15.0/istio-configuration/configure-istio.sh
After that, you need to make the file executable using the chmod command.
chmod +x configure-istio.sh
Finally, let's run the configuration script to automatically create your Ingress resources.
./configure-istio.sh
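Once the script has finished, you can verify that the ingress has been created and has been assigned an address (the exact output depends on your platform):

kubectl get ingress api-keptn-ingress -n keptn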
Positive : There is no need to copy the following resources; they are for information purposes only.
With this script, you have created an Ingress based on the following manifest.
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: api-keptn-ingress
  namespace: keptn
spec:
  rules:
  - host: <IP-ADDRESS>.nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-gateway-nginx
            port:
              number: 80
Please be aware that when using OpenShift 3.11, you should use the following manifest instead of the one above, as OpenShift 3.11 requires an older (now deprecated) apiVersion.
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: api-keptn-ingress
  namespace: keptn
spec:
  rules:
  - host: <IP-ADDRESS>.nip.io
    http:
      paths:
      - backend:
          serviceName: api-gateway-nginx
          servicePort: 80
In addition, the script has created a gateway resource for you so that the onboarded services are also available publicly.
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      name: http
      number: 80
      protocol: HTTP
    hosts:
    - '*'
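You can verify that the gateway resource has been created with:

kubectl get gateway public-gateway -n istio-system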
Finally, the script restarts the helm-service pod of Keptn to fetch this new configuration.
In this section we are referring to the Linux/macOS variants of the commands. If you are using a Windows host, please follow the official instructions.
First let's extract the information used to access the Keptn installation and store this for later use.
KEPTN_ENDPOINT=http://$(kubectl -n keptn get ingress api-keptn-ingress -ojsonpath='{.spec.rules[0].host}')/api
KEPTN_API_TOKEN=$(kubectl get secret keptn-api-token -n keptn -ojsonpath='{.data.keptn-api-token}' | base64 --decode)
KEPTN_BRIDGE_URL=http://$(kubectl -n keptn get ingress api-keptn-ingress -ojsonpath='{.spec.rules[0].host}')/bridge
Use this stored information and authenticate the CLI.
keptn auth --endpoint=$KEPTN_ENDPOINT --api-token=$KEPTN_API_TOKEN
That will give you:
Starting to authenticate
Successfully authenticated
Positive : Congratulations - Keptn is successfully installed and your CLI is connected to your Keptn installation!
If you want, you can go ahead and take a look at the Keptn API by navigating to the endpoint that is given via:
echo $KEPTN_ENDPOINT
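You can also verify the CLI connection to Keptn at any time using the status command:

keptn status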
Demo resources are prepared for you on GitHub for a convenient experience. We are going to download them to our local machine so we have them handy.
git clone --branch=release-0.2.3 https://github.com/keptn-sandbox/litmus-service.git --single-branch
Now, let's switch to the directory including the demo resources.
cd litmus-service/test-data
Next, install the Litmus chaos operator using kubectl:

kubectl apply -f ./litmus/litmus-operator-v2.13.0.yaml
kubectl create namespace litmus-chaos
kubectl apply -f ./litmus/pod-delete-ChaosExperiment-CR.yaml
kubectl apply -f ./litmus/pod-delete-rbac.yaml
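To verify that the chaos operator and the experiment definition are in place, you can check the Litmus pods (the operator is typically deployed into the litmus namespace by the manifest above) and the registered ChaosExperiment resources:

kubectl get pods -n litmus
kubectl get chaosexperiments --all-namespaces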
Before we create the project with Keptn, we'll install the Prometheus integration to be ready to fetch the data that is later needed for the SLO-based quality gate evaluation.
Keptn doesn't install or manage Prometheus and its components. Users need to install Prometheus and the Prometheus Alertmanager as a prerequisite.
kubectl create ns monitoring
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/prometheus --namespace monitoring
Next, install the Keptn prometheus-service integration and point it to the Prometheus installation in the monitoring namespace:

helm upgrade --install -n keptn prometheus-service \
https://github.com/keptn-contrib/prometheus-service/releases/download/0.9.1/prometheus-service-0.9.1.tgz \
--reuse-values \
--set prometheus.namespace="monitoring" \
--set prometheus.endpoint="http://prometheus-server.monitoring.svc.cluster.local:80" \
--set prometheus.namespace_am="monitoring"
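Before moving on, you can check that the Prometheus components and the prometheus-service integration are up and running:

kubectl get pods -n monitoring
kubectl get pods -n keptn | grep prometheus-service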
Optionally, you can port-forward the Prometheus server to check it in your browser at http://localhost:8080 (stop the port-forward with Ctrl+C when you are done):

kubectl port-forward svc/prometheus-server 8080:80 -n monitoring
Similar to the Prometheus integration, we are now adding the Litmus integration. This integration triggers the experiments with Litmus and listens for sh.keptn.event.test.triggered events that are sent from Keptn.
This can be done via the following command.
kubectl apply -f ../deploy/service.yaml
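Check that the litmus-service pod comes up in the keptn namespace:

kubectl get pods -n keptn | grep litmus-service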
We now have all the integrations installed and connected to the Keptn control plane. Now we can set up a project!
A project in Keptn is the logical unit that can hold multiple (micro)services. Therefore, it is the starting point for each Keptn installation. We have already cloned the demo resources from GitHub, so we can go ahead and create the project.
Recommended: Create a new project with Git upstream:
To configure a Git upstream for this tutorial, the Git user (--git-user), an access token (--git-token), and the remote URL (--git-remote-url) are required. If a requirement is not met, go to the Keptn documentation, where instructions for GitHub, GitLab, and Bitbucket are provided.
Let's define the variables before running the command (replace the placeholder values with your own Git username, access token, and remote repository URL):
GIT_USER=gitusername
GIT_TOKEN=gittoken
GIT_REMOTE_URL=remoteurl
Now let's create the project using the keptn create project command.
keptn create project litmus --shipyard=./shipyard.yaml --git-user=$GIT_USER --git-token=$GIT_TOKEN --git-remote-url=$GIT_REMOTE_URL
For creating the project, the tutorial relies on a shipyard.yaml file. Refer to the shipyard documentation to learn more.
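Optionally, you can confirm that the project has been created by listing it via the CLI:

keptn get project litmus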
After creating the project, services can be created for our project. For this purpose, we need the Helm chart as a tar.gz archive. To create the archive, use the following command:
cd helloservice && tar cfvz ./helm.tgz ./helm && cd ..
keptn create service helloservice --project=litmus
keptn add-resource --project=litmus --service=helloservice --all-stages --resource=./helloservice/helm.tgz --resourceUri=helm/helloservice.tgz
keptn add-resource --project=litmus --stage=chaos --service=helloservice --resource=./jmeter/load.jmx --resourceUri=jmeter/load.jmx
keptn add-resource --project=litmus --stage=chaos --service=helloservice --resource=./jmeter/jmeter.conf.yaml --resourceUri=jmeter/jmeter.conf.yaml
Now, each time Keptn triggers the test execution, the JMeter service picks up both files and executes the tests.
We have not yet added our quality gate, i.e., the evaluation of several SLOs done by Keptn. Let's do this now!
First, we add the sli.yaml file, which defines how the metrics (service-level indicators) are retrieved from Prometheus.

keptn add-resource --project=litmus --stage=chaos --service=helloservice --resource=./prometheus/sli.yaml --resourceUri=prometheus/sli.yaml

Next, we add the slo.yaml file, which adds objectives for our metrics that have to be satisfied. Learn more about the concept of Service-Level Objectives in the Keptn docs.

keptn add-resource --project=litmus --stage=chaos --service=helloservice --resource=helloservice/slo.yaml --resourceUri=slo.yaml
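If you are curious what these files look like, you can inspect them locally; the paths match the ones used in the add-resource commands above:

cat ./prometheus/sli.yaml
cat ./helloservice/slo.yaml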
We've now added our quality gate. We can now move on to add the chaos instructions and then run our experiment!
We have installed LitmusChaos on our Kubernetes cluster, but we have not yet added or executed a chaos experiment. Let's do this now!
Let us add the experiment.yaml file that holds the chaos experiment instructions. It will be picked up by the LitmusChaos integration of Keptn each time a test is triggered. Therefore, Keptn makes sure that both JMeter tests and LitmusChaos tests are executed during the test task sequence.
keptn add-resource --project=litmus --stage=chaos --service=helloservice --resource=./litmus/experiment.yaml --resourceUri=litmus/experiment.yaml
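To see which chaos experiment will be executed, you can have a look at the file locally:

cat ./litmus/experiment.yaml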
Great job - the file is added and we can move on!
Before we run the experiment, we must make sure that we have some observability software in place that will actually monitor how the service is behaving under the testing conditions.
keptn configure monitoring prometheus --project=litmus --service=helloservice
In addition, we install a blackbox-exporter for Prometheus that is able to observe our service under test from the outside, i.e., as a blackbox.

kubectl apply -f ./prometheus/blackbox-exporter.yaml

Finally, we apply an updated Prometheus server configuration and restart the Prometheus server pod so it picks up the new scrape configuration:

kubectl apply -f ./prometheus/prometheus-server-conf-cm.yaml -n monitoring
kubectl delete pod -l component=server -n monitoring
Now everything is in place, so we can run our experiments and evaluate the resilience of our demo application!
We are now ready to kick off a new deployment of our test application with Keptn and have it deployed, tested, and evaluated.
keptn trigger delivery --project=litmus --service=helloservice --image=jetzlstorfer/hello-server:v0.1.1
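While the delivery sequence is running, you can watch the pods of the demo application being created; Keptn deploys the service into a namespace named after the project and stage, which in this tutorial is litmus-chaos (press Ctrl+C to stop watching):

kubectl get pods -n litmus-chaos -w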
You can watch the progress of the delivery sequence in the Keptn Bridge. Retrieve its URL via:

echo http://$(kubectl -n keptn get ingress api-keptn-ingress -ojsonpath='{.spec.rules[0].host}')/bridge
The credentials can be retrieved via the following commands:

echo Username: $(kubectl get secret -n keptn bridge-credentials -o jsonpath="{.data.BASIC_AUTH_USERNAME}" | base64 --decode)
echo Password: $(kubectl get secret -n keptn bridge-credentials -o jsonpath="{.data.BASIC_AUTH_PASSWORD}" | base64 --decode)
Looking at the evaluation result, we can see that neither the probe_duration_ms nor the probe_success_percentage SLOs met their criteria. Considering the fact that our chaos experiment deleted the pod of our application, we might want to increase the number of replicas that are running to make our application more resilient. Let's do this in the next step.

For this, we increase the replicaCount to 3, meaning that we run 3 instances of our application. If one of those is deleted by Litmus, the two others should still be able to serve the traffic. This time we are using the keptn send event command with an event payload that has been already prepared for the demo (i.e., the replicaCount is set to 3).

keptn send event -f helloservice/deploy-event.json
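Once this sequence has finished, you can verify that the application now runs with three replicas:

kubectl get deployments -n litmus-chaos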
Feel free to experiment with different replicaCount values to evaluate the resilience of your application in terms of being responsive when the pod of this application gets deleted. Keptn will make sure that JMeter tests and chaos tests are executed each time you run the experiment.

Congratulations! You have successfully completed this tutorial and evaluated the resilience of a demo microservice application with LitmusChaos and Keptn.
For your reference, here is the shipyard definition used in this tutorial.

apiVersion: "spec.keptn.sh/0.2.0"
kind: "Shipyard"
metadata:
  name: "shipyard-litmus-chaos"
spec:
  stages:
    - name: "chaos"
      sequences:
        - name: "delivery"
          tasks:
            - name: "deployment"
              properties:
                deploymentstrategy: "direct"
            - name: "test"
              properties:
                teststrategy: "performance"
            - name: "evaluation"
Positive : We are happy to hear your feedback!
Please visit us in our Keptn Slack and tell us how you like Keptn and this tutorial! We are happy to hear your thoughts & suggestions!
Also, make sure to follow us on Twitter to get the latest news on Keptn, our tutorials and newest releases!