I wrote a Prometheus Exporter that converts and exposes metrics from the Kibana API. It was mostly inspired by an existing Exporter that has to be installed in Kibana as a plugin. While that is a neat feature, managed ELK services like Elasticsearch Service by Elastic (commonly known as Elastic Cloud) call for a different, standalone approach.
```shell
kibana-exporter -kibana.uri http://localhost:5601 -kibana.username elastic -kibana.password password
```
The Exporter is a pretty basic one, written in less than 500 lines of Go code in total. It exposes the following metrics to be scraped by Prometheus.
| Metric description | Type |
| --- | --- |
| Kibana overall status | Gauge |
| Kibana Concurrent Connections | Gauge |
| Kibana uptime in milliseconds | Gauge |
| Kibana Heap maximum in bytes | Gauge |
| Kibana Heap usage in bytes | Gauge |
| Kibana load average 1m | Gauge |
| Kibana load average 5m | Gauge |
| Kibana load average 15m | Gauge |
| Kibana average response time in milliseconds | Gauge |
| Kibana maximum response time in milliseconds | Gauge |
| Kibana request disconnections count | Gauge |
| Kibana total request count | Gauge |
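As an example of how these gauges could be used, a simple alerting rule could fire when the overall status metric reports an unhealthy value. Note that the metric name `kibana_status` below is an assumption for illustration only; check the Exporter's `/metrics` output for the exact names and values it exposes.

```yaml
groups:
  - name: kibana
    rules:
      - alert: KibanaUnhealthy
        # `kibana_status` is a hypothetical name for the overall status gauge;
        # verify the real name against the Exporter's /metrics output.
        expr: kibana_status != 1
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Kibana is reporting an unhealthy status"
```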
The Exporter is distributed as a statically linked binary that can be started with no dependencies. In addition to the binaries available on the GitHub releases page, the Exporter is also distributed as a Docker image on Docker Hub. Instructions on usage, reporting bugs, and contributing can be found in the project README.
```shell
docker run -p 9684:9684 -it chamilad/kibana-prometheus-exporter:v7.5.x.1 -kibana.username elastic -kibana.password password -kibana.uri https://elasticcloud.kibana.aws.found.io
```
Additionally, definitions for a K8s Deployment and a Service are also provided, which may help to quickly deploy the Exporter in a K8s environment. Once deployed, the following Prometheus scrape config can be used to collect the metrics.
```yaml
- job_name: "kibana"
  scrape_interval: 1m
  metrics_path: "/metrics"
  kubernetes_sd_configs:
    - role: service
  relabel_configs:
    - source_labels: [__meta_kubernetes_service_label_app]
      regex: "kibana-exporter"
      action: keep
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
      target_label: __address__
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
```
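For that relabeling to pick up targets, the Service needs the `app: kibana-exporter` label and a `prometheus.io/port` annotation. Below is a minimal sketch of such a Service; the names and port here are illustrative, and the actual definitions ship with the project.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana-exporter
  labels:
    app: kibana-exporter          # matched by the `keep` relabel rule
  annotations:
    prometheus.io/port: "9684"    # rewritten into __address__ by the last rule
spec:
  selector:
    app: kibana-exporter
  ports:
    - name: metrics
      port: 9684
      targetPort: 9684
```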
If you don’t like my project, you can check out others on the Prometheus Exporter Collection page.
Why did I write one? Especially in this day and age, where Prometheus Client Libraries let almost any code base easily emit the OpenMetrics format from its metrics subsystem? Why doesn’t Elastic expose their metrics in OpenMetrics format, or at least offer it as a configuration option users can opt into?
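For context on how low the bar is, the Prometheus text exposition format that the client libraries produce is simple enough to sketch by hand. The snippet below is a toy illustration (not the Exporter's actual code, and the metric name is made up) of what a single gauge looks like on the wire; real code should use a client library instead.

```go
package main

import "fmt"

// expositionGauge renders one gauge in the Prometheus text exposition
// format: a HELP line, a TYPE line, and a sample line.
// Toy illustration only; use a Prometheus client library in real code.
func expositionGauge(name, help string, value float64) string {
	return fmt.Sprintf("# HELP %s %s\n# TYPE %s gauge\n%s %g\n",
		name, help, name, name, value)
}

func main() {
	// "kibana_uptime_millis" is a hypothetical metric name for illustration.
	fmt.Print(expositionGauge("kibana_uptime_millis", "Kibana uptime in milliseconds", 123456))
}
```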
Looking at discussions the user community and Elastic have had in the past, this topic seems to go off track really quickly. Metricbeat, which doesn’t offer any Prometheus output, is recommended as the way to collect ELK metrics. However, most deployments really don’t want to run multiple solutions in their monitoring stack. I understand the value of tier differentiators; however, if you’re publishing posts about “embracing” Prometheus, you might as well meet users halfway and provide integrations in that direction too.
As someone who would much rather integrate than code something new, writing a Prometheus Exporter is not fun (although coding itself is something I enjoy). That is why I felt this post should end with a rant.