TL;DR: This application has a few purposes:

- to teach me, a rookie Dungeon Master, how D&D combat rules work (specifically, D&D 5e)
- to explore the usage and capabilities of metrics libraries, starting with Micrometer
- to mess with metrics and Spring Boot applications with Kubernetes, Prometheus, and Grafana.

You can read more in the blog post, *Monsters in combat: exploring application metrics with D&D*.
Additional notes:

- The Spring application also uses WebFlux (no Tomcat).
- The Quarkus application uses the Micrometer core library.
- One injectable class, `CombatMetrics.java`, in the core library defines the metrics gathered using Micrometer. This class is used by both the Spring and Quarkus-micrometer applications to collect custom metrics. I wanted metrics definitions to be easy to find, and easy to change. This choice means I'm not making extensive use of annotation-based configuration, but I think the result is clear and concise, and much less invasive than annotations would have been.
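Once any of the servers is running (see the docker-compose instructions below), you can eyeball the custom metrics directly. A minimal sketch: the port is from the docker-compose setup later in this README, and the `grep` pattern is just a guess at how the custom metric names look.

```bash
# Dump the Prometheus-format metrics from the Spring app (port 8280 in
# the docker-compose setup below) and filter for the custom metrics;
# the grep pattern is an assumption about the metric names
curl -s http://127.0.0.1:8280/actuator/prometheus | grep -i combat
```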
Prerequisites:

- Docker
- Java 11
Obtain the source for this repository:

- HTTPS: `git clone /~https://github.com/ebullient/monster-combat.git`
- SSH: `git clone git@github.com:ebullient/monster-combat.git`
Start with:

```bash
cd monster-combat          # cd into the project directory
export MONSTER_DIR=${PWD}  # for future reference
```
It is possible to run (and measure) this app using jars or native binaries alone.
```bash
# Use the mc.sh script to build jars (--native is optional)
./mc.sh jars --native

# OR:
# 1. Build jars (clean is optional)
./mvnw clean package

# 2. Build platform-native images (with GraalVM). These are not
#    containerized (the binaries are OS-specific: Windows, Mac, Linux, etc.)
./mvnw install -Dnative
```
JVM-mode and native-mode containers need to be built:
```bash
# Use the mc.sh script to build images (--native is optional)
./mc.sh images --native

# OR:
# 1. Create jvm-mode containers (skipping tests)
./mvnw clean package -Dimages -DskipTests

# 2. Create native container images (skipping tests)
./mvnw clean package -Dimages -DskipTests -Dnative
```
NB: There may be issues on Windows, and there is some known weirdness on the Mac M1.
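To confirm the container images exist before moving on, list them. A sketch: the `ebullient/mc-` image name prefix is inferred from the image names used later in this README.

```bash
# List the monster-combat images built by the steps above;
# the name prefix is an assumption
docker images | grep "ebullient/mc-"
```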
Get your system up and running using either docker-compose or Kubernetes (both are described below). Once you have your system configured and running, you can use the `client.sh` script to keep a steady stream of requests hitting an endpoint of your choosing.
Hopefully, it will all work fine. If it doesn't, come find me in the gameontext slack and let me know. Or open an issue. That works, too.
This application is all about application metrics. The surrounding environment doesn't matter much.
If you're lazy, or on a constrained system, docker-compose will work fine to start all the bits. Note: I'm lazy, so this is the method I use most often.
See below for notes on adding native images to the mix:
```bash
# go to the docker-compose directory
cd deploy/dc

# start all services (prom, grafana, spring, quarkus, quarkus-mpmetrics)
docker-compose up -d
```
Alternately, use `mc.sh` to manage some of these operations for you:

```bash
# start services using docker compose
./mc.sh dc up -d
```
The `mc.sh` script looks for some flags (like `--native` or `--format`) to add options to maven commands, but otherwise hands all remaining command-line arguments to the invoked commands. In the case of `dc`, `mc.sh` will execute the docker-compose command with explicitly specified docker-compose files, which can save a lot of typing once you add native images to the mix.
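For example, this (hypothetical) invocation tails the logs of the quarkus service; everything after `dc` is handed to docker-compose as-is:

```bash
# With --native, mc.sh adds the native docker-compose file, so this
# expands to roughly:
#   docker-compose -f docker-compose.yml -f docker-compose-native.yml logs -f quarkus
./mc.sh --native dc logs -f quarkus
```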
You should then be able to do the following and get something interesting in return:
```bash
# Start traffic to the /any endpoint for all of the running servers:
./client.sh start

# Stop all of the clients:
./client.sh stop

# List active clients:
./client.sh list
```
```bash
# Spring:
curl http://127.0.0.1:8280/
curl http://127.0.0.1:8280/actuator/metrics
curl http://127.0.0.1:8280/actuator/prometheus
curl http://127.0.0.1:8280/combat/faceoff   # 2 monsters
curl http://127.0.0.1:8280/combat/melee     # 3-6 monsters
curl http://127.0.0.1:8280/combat/any       # 2-6 monsters

# start a stream of requests to the /any endpoint of the spring server
./client.sh spring
```
```bash
# Quarkus:
curl http://127.0.0.1:8281/
curl http://127.0.0.1:8281/metrics          # micrometer & prometheus
curl http://127.0.0.1:8281/combat/faceoff   # 2 monsters
curl http://127.0.0.1:8281/combat/melee     # 3-6 monsters
curl http://127.0.0.1:8281/combat/any       # 2-6 monsters

# start a stream of requests to the /any endpoint of the quarkus (micrometer) server
./client.sh quarkus
```
```bash
# Quarkus with MP Metrics:
curl http://127.0.0.1:8282/
curl http://127.0.0.1:8282/metrics          # MP metrics endpoint
curl http://127.0.0.1:8282/combat/faceoff   # 2 monsters
curl http://127.0.0.1:8282/combat/melee     # 3-6 monsters
curl http://127.0.0.1:8282/combat/any       # 2-6 monsters

# start a stream of requests to the /any endpoint of the quarkus (mpmetrics) server
./client.sh mpmetrics
```
Check out the Prometheus dashboard (http://127.0.0.1:9090) to see emitted metrics. You can import pre-created dashboards (see below) to visualize collected metrics in Grafana (http://127.0.0.1:3000; the default username/password is admin/admin). When configuring the Prometheus datasource in Grafana, use the docker-compose service name as the hostname: `http://prometheus:9090`.
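You can also check what Prometheus has scraped from the command line, using its standard HTTP API:

```bash
# "up" is 1 for every target Prometheus scraped successfully
curl -s 'http://127.0.0.1:9090/api/v1/query?query=up'
```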
If you are using Linux, building and testing native images is straightforward. If you are using Windows or Mac, the steps need to be separated a bit: the native image has to be built in a container, and that build will overwrite the OS-native image used for tests.
Use an additional docker-compose file to start native images. Append `docker-compose.override.yml` to the list of files if necessary.

```bash
docker-compose -f docker-compose.yml -f docker-compose-native.yml up -d
```
Alternately, use `mc.sh` to manage some of these operations for you:

```bash
# start all services (including non-native) using docker compose
./mc.sh --native dc up -d
```
```bash
# Quarkus Native:
curl http://127.0.0.1:8283/
curl http://127.0.0.1:8283/metrics          # micrometer & prometheus
curl http://127.0.0.1:8283/combat/faceoff   # 2 monsters
curl http://127.0.0.1:8283/combat/melee     # 3-6 monsters
curl http://127.0.0.1:8283/combat/any       # 2-6 monsters

# start a stream of requests to the /any endpoint of the quarkus (micrometer) native server
./client.sh quarkus-native
```
```bash
# Quarkus with MP Metrics Native:
curl http://127.0.0.1:8284/
curl http://127.0.0.1:8284/metrics          # MP metrics endpoint
curl http://127.0.0.1:8284/combat/faceoff   # 2 monsters
curl http://127.0.0.1:8284/combat/melee     # 3-6 monsters
curl http://127.0.0.1:8284/combat/any       # 2-6 monsters

# start a stream of requests to the /any endpoint of the quarkus (mpmetrics) native server
./client.sh mpmetrics-native
```
The `${MONSTER_DIR}/deploy/dc/config` directory contains configuration for Prometheus and Grafana when run with docker-compose. The config directory is bind-mounted into both containers. The docker-compose configuration also creates bind mounts to service-specific subdirectories under `${MONSTER_DIR}/deploy/dc/target/data` for output.
The config directory contains the following files:

- `grafana.ini` configures Grafana
- `grafana-*.json` are importable Grafana dashboards
- `prometheus.yml` defines scrape jobs for the Spring and Quarkus metrics, and declares `prometheus.rules.yaml`
- `prometheus.rules.yaml` defines recording rules for Prometheus that create additional time series to pre-aggregate chattier metrics (see the validation sketch after this list)
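If you tweak the recording rules, `promtool` can validate them before Prometheus tries to load them. A sketch: it assumes the config directory is mounted at `/etc/prometheus` inside the container (check `docker-compose.yml` for the actual mount path):

```bash
# Validate rule syntax with promtool inside the running prom container;
# the in-container path is an assumption about the bind mount
docker-compose exec prom promtool check rules /etc/prometheus/prometheus.rules.yaml
```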
To reset Prometheus and Grafana (tossing all data):

```bash
# From the ${MONSTER_DIR}/deploy/dc directory:
docker-compose stop prom grafana
docker-compose rm prom grafana

# Remove data for prometheus and grafana
rm -rf ./target/data/prometheus/* ./target/data/grafana/*

# restart
docker-compose up -d prom grafana
```
Note: the Prometheus and Grafana data directories must be owned by the host user. If you delete the directories by accident, recreate them manually before using docker-compose to start the services again: docker-compose will create any missing directories for you, but those will be owned by root, which causes permission issues for services running as the host user.
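If you do end up with root-owned directories, resetting them as the host user looks like this (paths match the reset commands above):

```bash
# From ${MONSTER_DIR}/deploy/dc: remove the root-owned directories and
# recreate them so they are owned by the host user
sudo rm -rf ./target/data/prometheus ./target/data/grafana
mkdir -p ./target/data/prometheus ./target/data/grafana
```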
To customize the configuration of a service running under docker-compose (using the Quarkus Micrometer application as an example):

1. Copy a configuration file, e.g. copy `quarkus-micrometer/src/main/resources/application.properties` to `${MONSTER_DIR}/deploy/dc/target/mc-quarkus-micrometer.properties`.

2. Create an override file, `${MONSTER_DIR}/deploy/dc/docker-compose.override.yml`, that mounts this file as a volume, replacing the configuration file in the image, then restart the service as shown after this list:

    ```yaml
    version: '3.7'
    services:
      quarkus:
        volumes:
          - './target/mc-quarkus-micrometer.properties:/app/resources/application.properties'
    ```
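docker-compose reads `docker-compose.override.yml` automatically, so recreating the service is enough to pick up the mounted properties file:

```bash
# Recreate just the quarkus service so the new volume takes effect
docker-compose up -d quarkus
```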
To run everything on Kubernetes instead (you'll need a cluster; see the notes on `kubectl` and Minikube below):

1. Set up custom namespaces (`gameontext` and `ebullientworks`):

    ```bash
    kubectl apply -f deploy/k8s/namespaces.yaml
    ```
2. Set up kube-prometheus. This script wraps all kinds of jsonnet goodness in a container, so there is less setup overall:

    ```bash
    ./deploy/k8s/kube-prometheus/build.sh prep       # Once. Set up kube-prometheus jsonnet

    # At least one time. Repeat if you change the monsters.jsonnet file:
    ./deploy/k8s/kube-prometheus/build.sh generate   # Create manifests
    ./deploy/k8s/kube-prometheus/build.sh apply      # Apply configuration to cluster
    ```

    Note that there are customizations happening (in `./deploy/k8s/kube-prometheus/monsters.jsonnet`):

    - We reduce Prometheus and Alertmanager to single replicas. This is definitely a "fit on a tinier system" move that trades resilience for footprint.
    - We instruct Prometheus to monitor three additional namespaces: `gameon-system`, `ebullientworks`, and `default`. The first is for services from https://gameontext.org, the second is used by this project, and the third is for your own experiments.
3. Create an ingress for Prometheus and Grafana:

    ```bash
    kubectl apply -f deploy/k8s/ingress/monitoring-ingress.yaml
    echo "Use the following URLs for
    Prometheus: http://prometheus.$(minikube ip).nip.io
    Grafana dashboard: http://grafana.$(minikube ip).nip.io"
    ```
4. Once the kube-prometheus manifests have applied cleanly, set up a Prometheus `ServiceMonitor` for our applications:

    ```bash
    kubectl apply -f deploy/k8s/service-monitor/
    ```

    If you delete/re-apply kube-prometheus metadata, you'll need to reapply this, too, as it is deployed into the `monitoring` namespace.

    For best results, ensure this is applied, and that both `mc-quarkus-prometheus` and `mc-spring-prometheus` are included in the list of Prometheus targets, before moving on to the next step (see also the sanity checks after this list):

    ```bash
    echo Visit http://prometheus.$(minikube ip).nip.io/targets
    ```
5. Finally (!!), build and install the application:

    ```bash
    # Choices choices. For minikube, you may want to share the VM registry
    eval $(minikube docker-env)

    # Run through all of the sub-projects and build them
    # in the local docker registry. Feel free to change that up.
    ./mvnw install

    # Depending on your choices, you may have to do a docker push
    # to put fresh images wherever they need to go.

    # Now deploy application metadata (service, deployment, ingress).
    # Verify that the ingress definition will work for your kubernetes cluster.
    kubectl apply -f deploy/k8s/monsters/
    kubectl apply -f deploy/k8s/ingress/monster-ingress.yaml

    echo "
    Spring with Micrometer:  http://spring.$(minikube ip).nip.io
    Quarkus with Micrometer: http://quarkus.$(minikube ip).nip.io
    Quarkus with MP Metrics: http://mpmetrics.$(minikube ip).nip.io"
    ```
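Two quick sanity checks for the monitoring pieces above (standard `kubectl` commands; both kube-prometheus and the ServiceMonitors live in the `monitoring` namespace):

```bash
# kube-prometheus stack: pods should be Running/Ready
kubectl get pods -n monitoring

# the ServiceMonitors applied in step 4
kubectl get servicemonitors -n monitoring
```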
So, after all of that, you should be able to do the following and get something interesting in return:
```bash
# This assumes minikube and/or minishift, with the configured ingress URL
curl http://monsters.192.168.99.100.nip.io/
curl http://monsters.192.168.99.100.nip.io/actuator/metrics
curl http://monsters.192.168.99.100.nip.io/actuator/prometheus

# Encounters:
# faceoff is 2 monsters
curl http://monsters.192.168.99.100.nip.io/combat/faceoff
# melee is 3-6 monsters
curl http://monsters.192.168.99.100.nip.io/combat/melee
# any is random, from 2-6 monsters
curl http://monsters.192.168.99.100.nip.io/combat/any
```
Check out the Prometheus endpoint to see what metrics are being emitted.
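The same Prometheus HTTP API shown earlier works through the ingress; for example, to list scrape targets:

```bash
# List active/dropped scrape targets via the standard Prometheus API
curl -s http://prometheus.$(minikube ip).nip.io/api/v1/targets
```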
You can customize the application configuration in Kubernetes using a ConfigMap. We'll use Quarkus Micrometer for this example.
1. Start by creating a new ConfigMap for `application.properties` that specifies runtime configuration attributes, e.g. `deploy/k8s/config/mc-quarkus-micrometer-config.yaml`:

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: mc-quarkus-micrometer-config
      namespace: ebullientworks
    data:
      application.properties: |+
        quarkus.http.port=8080
    ```
2. Update the appropriate deployment definition to reference the volume, e.g. `deploy/k8s/monsters/quarkus-micrometer.yaml`:

    ```yaml
    spec:
      volumes:
        - name: properties-volume
          configMap:
            name: mc-quarkus-micrometer-config
      containers:
        - image: ebullient/mc-quarkus-micrometer:latest-jvm
          imagePullPolicy: IfNotPresent
          name: mc-quarkus-micrometer
          volumeMounts:
            - name: properties-volume
              mountPath: /app/resources/mc-quarkus-micrometer.properties
    ...
    ```
3. Create the ConfigMap and update your deployment:

    ```bash
    kubectl apply -f deploy/k8s/config/mc-quarkus-micrometer-config.yaml
    kubectl apply -f deploy/k8s/monsters/quarkus-micrometer.yaml
    ```
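To verify the change (a sketch; the deployment name is an assumption based on the container name above):

```bash
# Confirm the ConfigMap exists in the right namespace
kubectl get configmap mc-quarkus-micrometer-config -n ebullientworks

# Watch the rollout; the deployment name is an assumption
kubectl rollout status deployment/mc-quarkus-micrometer -n ebullientworks
```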
`kubectl` needs to be able to talk to a Kubernetes cluster! You may have one already, in which case all you need to do is make sure `kubectl` can work with it. If not, try one of these local options:

- Minikube -- local development cluster
- CodeReady Containers -- local development cluster (OpenShift 4.x)
If you already have a configured minikube instance, skip to step 3.
1. Start Minikube:

    ```bash
    minikube delete
    minikube start --kubernetes-version=v1.19.4 \
      --cpus 4 --disk-size 40g \
      --memory 16384 --bootstrapper=kubeadm \
      --extra-config=kubelet.authentication-token-webhook=true \
      --extra-config=kubelet.authorization-mode=Webhook \
      --extra-config=scheduler.address=0.0.0.0 \
      --extra-config=controller-manager.address=0.0.0.0

    minikube addons disable metrics-server
    minikube addons enable ingress
    ```
2. Ensure the `minikube` context is the current context for `kubectl`:

    ```bash
    kubectl config use-context minikube

    # ensure ingress is working
    curl -v --raw http://$(minikube ip)/healthz
    ```
3. Update the ingress definitions for your cluster to match its IP:

    ```bash
    ./mvnw -Dcluster.ip=$(minikube ip) -Dminikube install -pl deploy/k8s
    ```
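One last check that `kubectl` is pointed at the minikube cluster:

```bash
# Should list a single minikube node in the Ready state
kubectl get nodes
```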
CodeReady Containers: coming soon.