Warning
You are currently viewing v2.15 of the documentation, which is not the latest. For the most recent documentation, see the latest version.
What is KEDA and why is it useful?
What are the prerequisites for using KEDA?
KEDA is designed, tested, and supported to run on any Kubernetes cluster running Kubernetes v1.17.0 or above.
It uses a CRD (custom resource definition) and the Kubernetes metrics server, so you will have to use a Kubernetes version that supports these.
💡 Kubernetes v1.16 is supported with KEDA v2.4.0 or below
Can KEDA be used in production?
What does it cost?
Using multiple triggers for the same scale target
KEDA allows you to use multiple triggers as part of the same ScaledObject or ScaledJob.
By doing this, your autoscaling becomes better: you will not have multiple ScaledObjects or ScaledJobs interfering with each other. KEDA will start scaling as soon as one of the triggers meets its criteria. The Horizontal Pod Autoscaler (HPA) will calculate metrics for every scaler and use the highest desired replica count to scale the workload.
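As an illustrative sketch, a ScaledObject could combine a prometheus trigger with a cron trigger. All names, the Prometheus address, the query, and the schedule below are hypothetical, not values from this documentation:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: multi-trigger-scaledobject   # hypothetical name
spec:
  scaleTargetRef:
    name: my-deployment              # hypothetical workload
  triggers:
    # Scale on request rate reported by Prometheus
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090  # hypothetical
        query: sum(rate(http_requests_total[2m]))         # hypothetical
        threshold: "100"
    # Guarantee a baseline of replicas during business hours
    - type: cron
      metadata:
        timezone: Europe/Brussels
        start: 0 8 * * *
        end: 0 18 * * *
        desiredReplicas: "5"
```

The HPA that KEDA creates evaluates both triggers and scales to the highest replica count either of them requests.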
Don’t combine ScaledObject with Horizontal Pod Autoscaler (HPA)
We recommend not combining a KEDA ScaledObject with a Horizontal Pod Autoscaler (HPA) to scale the same workload.
Because KEDA uses an HPA under the hood, the two will compete with each other, resulting in odd scaling behavior.
If you are using an HPA to scale on CPU and/or memory, we recommend using the CPU and Memory scalers instead.
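For example, an HPA targeting 60% average CPU utilization can be expressed as a KEDA cpu trigger instead; the names in this sketch are hypothetical:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cpu-scaledobject      # hypothetical name
spec:
  scaleTargetRef:
    name: my-deployment       # hypothetical workload
  triggers:
    # Equivalent to an HPA targeting 60% average CPU utilization
    - type: cpu
      metricType: Utilization
      metadata:
        value: "60"
```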
Can I scale HTTP workloads with KEDA and Kubernetes?
KEDA will scale a container using metrics from a scaler, but unfortunately there is no scaler today for HTTP workloads out-of-the-box.
We do, however, provide some alternative approaches:
Is short polling intervals a problem?
How do I run KEDA with readOnlyRootFilesystem=true?
KEDA v2.10 and above sets readOnlyRootFilesystem=true by default, without any manual intervention.
If you are running KEDA v2.9 or below, you can’t run KEDA with readOnlyRootFilesystem=true by default, because the Metrics Adapter generates self-signed certificates during deployment and stores them on the root file system.
To overcome this, you can create a Secret or ConfigMap with a valid CA, certificate, and key, and then mount it into the Metrics Deployment.
To overcome this, you can create a secret/configmap with a valid CA, cert and key and then mount it to the Metrics Deployment.
To use your certificate, you need to reference it in the container args section, e.g.:
args:
- '--client-ca-file=/cabundle/service-ca.crt'
- '--tls-cert-file=/certs/tls.crt'
- '--tls-private-key-file=/certs/tls.key'
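A minimal sketch of the matching volume configuration in the Metrics Deployment pod spec could look like the following; the Secret, ConfigMap, and volume names are hypothetical, only the mount paths correspond to the args above:

```yaml
volumes:
  - name: certs
    secret:
      secretName: keda-tls-certs   # hypothetical Secret holding tls.crt/tls.key
  - name: cabundle
    configMap:
      name: keda-ca-bundle         # hypothetical ConfigMap holding service-ca.crt
containers:
  - name: keda-metrics-apiserver
    volumeMounts:
      - name: certs
        mountPath: /certs          # matches --tls-cert-file/--tls-private-key-file
        readOnly: true
      - name: cabundle
        mountPath: /cabundle       # matches --client-ca-file
        readOnly: true
```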
It is also possible to run KEDA with readOnlyRootFilesystem=true by creating an emptyDir volume and mounting it at the path where, by default, the metrics server writes its generated certificates. The corresponding Helm command is:
helm install keda kedacore/keda --namespace keda \
  --set 'volumes.metricsApiServer.extraVolumes[0].name=keda-volume' \
  --set 'volumes.metricsApiServer.extraVolumeMounts[0].name=keda-volume' \
  --set 'volumes.metricsApiServer.extraVolumeMounts[0].mountPath=/apiserver.local.config/certificates/' \
  --set 'securityContext.metricServer.readOnlyRootFilesystem=true'
How do I run KEDA with TLS v1.3 only?
By default, KEDA listens on TLS v1.1 and TLS v1.2 with the default Go cipher suites. In some environments, some of these ciphers (for example, CBC ciphers) may be considered less secure.
As an alternative, you can configure the minimum TLS version to be v1.3 to increase security. Since all modern clients support this version, there should be no impact in most scenarios.
You can set this with args - e.g.:
args:
- '--tls-min-version=VersionTLS13'
What does the target metric value in the Horizontal Pod Autoscaler (HPA) represent?
The target metric value is used by the Horizontal Pod Autoscaler (HPA) to make scaling decisions.
The current target value on the Horizontal Pod Autoscaler (HPA) often does not match the metrics on the system you are scaling on. This is because of how the Horizontal Pod Autoscaler’s (HPA) scaling algorithm works.
By default, KEDA scalers use average metrics (the AverageValue metric type). This means that the HPA will use the average value of the metric across the total number of pods. As of KEDA v2.7, ScaledObjects also support the Value metric type. You can learn more about it here.
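For instance, a trigger can opt into the Value metric type as sketched below; the HPA then compares the raw metric value against the target instead of dividing it by the number of pods. The Prometheus details here are hypothetical:

```yaml
triggers:
  - type: prometheus
    metricType: Value   # compare the raw metric value, not the per-pod average
    metadata:
      serverAddress: http://prometheus.monitoring:9090  # hypothetical
      query: sum(rate(http_requests_total[2m]))         # hypothetical
      threshold: "100"
```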
Why does KEDA use external metrics and not custom metrics instead?
Kubernetes allows you to autoscale based on custom and external metrics, which are fundamentally different: custom metrics (custom.metrics.k8s.io) describe objects inside the cluster, while external metrics (external.metrics.k8s.io) come from systems outside the cluster.
Because KEDA primarily serves metrics for metric sources outside of the Kubernetes cluster, it uses external metrics and not custom metrics.
This is why KEDA registers the v1beta1.external.metrics.k8s.io API service. However, this is just an implementation detail, as both offer the same functionality.
Read about the different metric APIs or this article by Google Cloud to learn more.
Can I run multiple metric servers serving external metrics in the same cluster?
Unfortunately, you cannot do that.
Kubernetes currently only supports one metric server serving external.metrics.k8s.io metrics per cluster. This is because only one API Service can be registered to handle external metrics.
If you want to know what external metric server is currently registered, you can use the following command:
~ kubectl get APIService/v1beta1.external.metrics.k8s.io
NAME SERVICE AVAILABLE AGE
v1beta1.external.metrics.k8s.io keda-system/keda-operator-metrics-apiserver True 457d
Once a new metric server is installed, it will overwrite the existing API Service registration and take over v1beta1.external.metrics.k8s.io. This will cause the previously installed metric server to be ignored.
There is an open proposal to allow multiple metric servers in the same cluster, but it’s not implemented yet.
Can I run multiple installations of KEDA in the same cluster?
Unfortunately, you cannot do that.
This is because Kubernetes does not allow you to run multiple metric servers serving external metrics in the same cluster.
Also, KEDA does not allow you to share a single metric server across multiple operator installations.
Learn more in the “Can I run multiple metric servers serving external metrics in the same cluster?” FAQ entry.
How can I get involved?
There are several ways to get involved.
Where can I get to the code for the Scalers?
Does scaler search support wildcard search?
Does KEDA depend on any Azure service?
Does KEDA only work with Azure Functions?
Why should we use KEDA if we are already using Azure Functions in Azure?
There are a few reasons for this: