Integrate with OpenTelemetry Collector (Experimental)

Details of integrating the OpenTelemetry Collector in KEDA

Push Metrics to OpenTelemetry Collector (Experimental)


The KEDA Operator supports pushing metrics to an OpenTelemetry Collector over HTTP. To enable this, set the parameter --enable-opentelemetry-metrics=true on the operator. KEDA will push metrics to the OpenTelemetry Collector endpoint specified by the OTEL_EXPORTER_OTLP_ENDPOINT environment variable; other standard OpenTelemetry environment variables are also supported. Here is an example configuration of the operator:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: keda-operator
spec:
  template:
    spec:
      containers:
        - name: keda-operator
          args:
            - /keda
            - --enable-opentelemetry-metrics=true
          env:
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://opentelemetry-collector.default.svc.cluster.local:4318"
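For reference, a minimal OpenTelemetry Collector configuration that accepts these pushed metrics over OTLP/HTTP on port 4318 might look like the following sketch. The debug exporter is only for verifying that KEDA's metrics arrive; in practice you would substitute the exporter for your metrics backend:

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318   # matches the port in OTEL_EXPORTER_OTLP_ENDPOINT above

exporters:
  debug:                         # prints received metrics to the Collector log
    verbosity: detailed

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [debug]
```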

The following metrics are being gathered:

keda.build.info: Info metric with static information about the KEDA build, such as the version, git commit, and Golang runtime info.
keda.scaler.active: Marks whether the particular scaler is active (value == 1) or inactive (value == 0).
keda.scaled.object.paused: Indicates whether a ScaledObject is paused (value == 1) or unpaused (value == 0).
keda.scaler.metrics.value: The current value of each scaler's metric, as used by the HPA when computing the target average.
keda.scaler.metrics.latency: The latency of retrieving the current metric from each scaler.
keda.scaler.errors: The number of errors that have occurred for each scaler.
keda.scaler.errors.total: The total number of errors encountered across all scalers.
keda.scaled.object.errors: The number of errors that have occurred for each ScaledObject.
keda.scaled.job.errors: The number of errors that have occurred for each ScaledJob.
keda.resource.totals: The total number of KEDA custom resources per namespace for each custom resource type (CRD).
keda.trigger.totals: The total number of triggers per trigger type.
keda.internal.scale.loop.latency: The total deviation (in milliseconds) between the expected and actual execution time of the scaling loop. This latency can result from accumulated scaler latencies or high load. This is an internal metric.

KEDA also gathers CloudEvent-related metrics: the emitted cloudevents, with the destination of each emitted event (eventsink) and its emitted state, and the number of events that are in the emitting queue.
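As an illustration of the keda.scaled.object.paused metric: a ScaledObject is paused via the autoscaling.keda.sh/paused-replicas annotation, and the metric reports 1 while the annotation is present. A minimal sketch, where the resource names and the cron trigger are placeholders:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-scaledobject                   # hypothetical name
  annotations:
    autoscaling.keda.sh/paused-replicas: "0"   # pauses autoscaling and pins the target at 0 replicas
spec:
  scaleTargetRef:
    name: example-deployment                   # hypothetical target Deployment
  triggers:
    - type: cron                               # any trigger works; cron is just an example
      metadata:
        timezone: UTC
        start: 0 8 * * *
        end: 0 18 * * *
        desiredReplicas: "2"
```

Removing the annotation resumes autoscaling, and keda.scaled.object.paused returns to 0 for that ScaledObject.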