Cluster capacity requirements
The KEDA runtime requires the following resources in a production-ready setup:
| Deployment | CPU | Memory |
| ---------- | --- | ------ |
| Operator | Limit: 1, Request: 100m | Limit: 1000Mi, Request: 100Mi |
| Metrics Server | Limit: 1, Request: 100m | Limit: 1000Mi, Request: 100Mi |
These values are used by default when deploying KEDA through YAML.
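If the defaults do not fit your cluster, the same values can be set on the container spec of the corresponding Deployment. Below is a minimal sketch of a `resources` stanza matching the defaults above; field values mirror the table, but how you apply it (Helm values, kustomize patch, or editing the manifest) depends on your install method.

```yaml
# Illustrative only: resource requests/limits matching the defaults above.
# Apply to the KEDA operator or metrics server container in your manifests.
resources:
  requests:
    cpu: 100m
    memory: 100Mi
  limits:
    cpu: "1"
    memory: 1000Mi
```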
💡 For more info on CPU and Memory resource units and their meaning, see the Kubernetes documentation on managing resources for containers.
KEDA does not support high availability due to upstream limitations.
Here is an overview of all KEDA deployments and the supported replicas:
KEDA must be accessible inside the cluster in order to autoscale.
Here is an overview of the ports that must be accessible for KEDA to work:
| Purpose | Requirement |
| ------- | ----------- |
| Used by Kubernetes API server to get metrics | Required for all platforms, except for Google Cloud |
| Used by Kubernetes API server to get metrics | Only required for Google Cloud |
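The API server reaches the KEDA Metrics Server through the aggregation layer, i.e. an `APIService` registration for the `external.metrics.k8s.io` group. The sketch below shows roughly what that registration looks like; the service name, namespace, and port are assumptions based on a default install and may differ in yours.

```yaml
# Illustrative APIService registration (names/namespace/port are assumptions).
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.external.metrics.k8s.io
spec:
  group: external.metrics.k8s.io
  version: v1beta1
  service:
    name: keda-metrics-apiserver   # assumed service name
    namespace: keda                # assumed namespace
    port: 443
  groupPriorityMinimum: 100
  versionPriority: 100
```

If the ports above are blocked between the control plane and the nodes, the API server cannot reach this service and external-metrics-based autoscaling will fail.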