KEDA stands for Kubernetes-based Event Driven Autoscaler. It can activate a Kubernetes deployment (i.e. scale it from zero pods to a single pod) and subsequently scale it out to more pods based on events from various event sources.
What are the prerequisites for using KEDA?
KEDA is designed to run on any Kubernetes cluster. It uses a CRD (custom resource definition) and the Kubernetes metrics server, so you will need a Kubernetes version that supports these. Any Kubernetes cluster >= 1.16.0 has been tested and should work.
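As a sketch of what that CRD looks like in practice, a ScaledObject resource ties a deployment to an event source. The deployment name, queue name, and connection environment variable below are hypothetical placeholders:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler
spec:
  scaleTargetRef:
    name: order-processor        # hypothetical deployment to scale
  minReplicaCount: 0             # allow scale-to-zero
  maxReplicaCount: 10
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders                      # hypothetical queue name
        connectionFromEnv: STORAGE_CONNECTION  # env var on the target pods
        queueLength: "5"                       # target messages per replica
```

With this in place, KEDA watches the queue and drives the deployment from zero replicas up to ten as messages arrive, then back down when the queue drains.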
Does KEDA depend on any Azure service?
No, KEDA only takes a dependency on standard Kubernetes constructs and can run on any Kubernetes cluster, whether on OpenShift, AKS, GKE, EKS, or your own infrastructure.
Does KEDA only work with Azure Functions?
No, KEDA can scale up/down any container that you specify in your deployment. Work has been done in the Azure Functions tooling to make it easy to scale an Azure Functions container, but that is a convenience, not a requirement.
Why should we use KEDA if we are already using Azure Functions in Azure?
There are a few reasons for this:
Run functions on-premises (potentially in something like an ‘intelligent edge’ architecture)
Run functions alongside other Kubernetes apps (maybe in a restricted network, app mesh, custom environment, etc.)
Run functions outside of Azure (no vendor lock-in)
Specific need for more control (GPU enabled compute clusters, policies, etc.)
Can I scale my HTTP container or function with KEDA and Kubernetes?
KEDA will scale a container using metrics from a scaler, but there is currently no scaler for HTTP workloads out of the box.
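A common workaround (a sketch, not an official HTTP scaler) is to scale on a request-rate metric that you already export to Prometheus, for example from an ingress controller. The server address, query, and deployment name below are hypothetical assumptions:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: web-app-scaler
spec:
  scaleTargetRef:
    name: web-app                # hypothetical HTTP deployment
  minReplicaCount: 1             # keep one pod up to serve traffic
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090  # hypothetical
        metricName: http_requests_per_second
        query: sum(rate(http_requests_total{app="web-app"}[2m]))
        threshold: "100"         # target requests/sec per replica
```

Note that with this approach you generally want to keep at least one replica running, since scaling an HTTP service to zero leaves incoming requests with nothing to hit.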