Kubernetes Event-driven Autoscaling

Application autoscaling made simple


What is KEDA?

KEDA is a Kubernetes-based Event Driven Autoscaler. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed.

KEDA is a single-purpose and lightweight component that can be added to any Kubernetes cluster. KEDA works alongside standard Kubernetes components like the Horizontal Pod Autoscaler and extends their functionality without overwriting or duplicating it. With KEDA you can explicitly choose which apps to scale in an event-driven way, while other apps continue to function as before. This makes KEDA a flexible and safe option to run alongside any number of other Kubernetes applications or frameworks.
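For example, event-driven scaling is declared with a `ScaledObject` custom resource that maps a workload to one or more event-source triggers. The sketch below is a minimal, hypothetical example: the names `my-consumer-scaler`, `my-consumer`, the `orders` queue, and the threshold values are all placeholders, not part of any real deployment.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-consumer-scaler      # hypothetical name
spec:
  scaleTargetRef:
    name: my-consumer           # the Deployment to scale (placeholder)
  minReplicaCount: 0            # scale to zero when there is no work
  maxReplicaCount: 10
  triggers:
    - type: rabbitmq
      metadata:
        queueName: orders       # placeholder queue name
        mode: QueueLength
        value: "20"             # target messages per replica
        hostFromEnv: RABBITMQ_HOST  # connection string read from an env var
```

KEDA translates this into an HPA behind the scenes, while also handling the scale-to-zero and zero-to-one transitions that the HPA alone cannot.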

Features


Autoscaling Made Simple

Bring rich scaling to every workload in your Kubernetes cluster

Event-driven

Intelligently scale your event-driven application

Built-in Scalers

Catalog of 50+ built-in scalers for various cloud platforms, databases, messaging systems, telemetry systems, CI/CD, and more

Multiple Workload Types

Support for a variety of workload types such as deployments, jobs & custom resources with the /scale sub-resource

Reduce environmental impact

Build sustainable platforms by optimizing workload scheduling and scale-to-zero

Extensible

Bring-your-own or use community-maintained scalers

Vendor-Agnostic

Support for triggers across a variety of cloud providers & products

Azure Functions Support

Run and scale your Azure Functions on Kubernetes in production
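The "Multiple Workload Types" feature above covers more than deployments: a `ScaledJob` resource, for instance, spawns Kubernetes Jobs in response to pending events instead of resizing a long-running workload. A minimal sketch, with placeholder names and images:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: batch-worker            # hypothetical name
spec:
  jobTargetRef:
    template:                   # standard Job pod template
      spec:
        containers:
          - name: worker
            image: example/worker:latest  # placeholder image
        restartPolicy: Never
  pollingInterval: 30           # check the event source every 30s
  maxReplicaCount: 5            # cap concurrent Jobs
  triggers:
    - type: azure-queue
      metadata:
        queueName: jobs         # placeholder queue name
        queueLength: "5"
        connectionFromEnv: AZURE_STORAGE_CONNECTION
```

This pattern suits long-running, run-to-completion work, where each message should be handled by its own Job rather than by a scaled deployment.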

An overview of Kubernetes Event-Driven Autoscaling (KEDA)

From KubeCon Europe 2021

Scalers

Scalers represent the event sources on which KEDA can base scaling decisions.


ActiveMQ

Scale applications based on ActiveMQ queues.
ActiveMQ Artemis

Scale applications based on ActiveMQ Artemis queues.
Apache Kafka

Scale applications based on an Apache Kafka topic or other services that support Kafka protocol.
AWS CloudWatch

Scale applications based on AWS CloudWatch.
AWS DynamoDB

Scale applications based on the record count in AWS DynamoDB.
AWS Kinesis Stream

Scale applications based on AWS Kinesis Stream.
AWS SQS Queue

Scale applications based on AWS SQS Queue.
Azure Application Insights

Scale applications based on Azure Application Insights metrics.
Azure Blob Storage

Scale applications based on the count of blobs in a given Azure Blob Storage container.
Azure Data Explorer

Scale applications based on Azure Data Explorer query result.
Azure Event Hubs

Scale applications based on Azure Event Hubs.
Azure Log Analytics

Scale applications based on Azure Log Analytics query results.
Azure Monitor

Scale applications based on Azure Monitor metrics.
Azure Pipelines

Scale applications based on agent pool queues for Azure Pipelines.
Azure Service Bus

Scale applications based on Azure Service Bus Queues or Topics.
Azure Storage Queue

Scale applications based on Azure Storage Queues.
Cassandra

Scale applications based on Cassandra query results.
CPU

Scale applications based on CPU metrics.
Cron

Scale applications based on a cron schedule.
Datadog

Scale applications based on Datadog.
Elasticsearch

Scale applications based on an Elasticsearch search template query result.
External

Scale applications based on an external scaler.
External Push

Scale applications based on an external push scaler.
Google Cloud Platform Stackdriver

Scale applications based on a metric obtained from Stackdriver.
Google Cloud Platform Storage

Scale applications based on the count of objects in a given Google Cloud Storage (GCS) bucket.
Google Cloud Platform‎ Pub/Sub

Scale applications based on Google Cloud Platform‎ Pub/Sub.
Graphite

Scale applications based on metrics in Graphite.
Huawei Cloudeye

Scale applications based on Huawei Cloudeye metrics.
IBM MQ

Scale applications based on IBM MQ queues.
InfluxDB

Scale applications based on InfluxDB queries.
Kubernetes Workload

Scale applications based on the count of running pods that match the given selectors.
Liiklus Topic

Scale applications based on Liiklus Topic.
Memory

Scale applications based on memory metrics.
Metrics API

Scale applications based on a metric provided by an API.
MongoDB

Scale applications based on MongoDB queries.
MSSQL

Scale applications based on Microsoft SQL Server (MSSQL) query results.
MySQL

Scale applications based on MySQL query result.
NATS Streaming

Scale applications based on NATS Streaming.
New Relic

Scale applications based on New Relic NRQL query results.
OpenStack Metric

Scale applications based on a threshold reached by a specific measure from OpenStack Metric API.
OpenStack Swift

Scale applications based on the count of objects in a given OpenStack Swift container.
PostgreSQL

Scale applications based on a PostgreSQL query.
PredictKube

AI-based predictive scaling based on Prometheus metrics & PredictKube SaaS.
Prometheus

Scale applications based on Prometheus.
RabbitMQ Queue

Scale applications based on RabbitMQ queues.
Redis Lists

Scale applications based on Redis Lists.
Redis Lists (supports Redis Cluster)

Redis Lists scaler with support for Redis Cluster topology.
Redis Lists (supports Redis Sentinel)

Redis Lists scaler with support for Redis Sentinel topology.
Redis Streams

Scale applications based on Redis Streams.
Redis Streams (supports Redis Cluster)

Redis Streams scaler with support for Redis Cluster topology.
Redis Streams (supports Redis Sentinel)

Redis Streams scaler with support for Redis Sentinel topology.
Selenium Grid Scaler

Scales Selenium browser nodes based on the number of requests waiting in the session queue.
Solace PubSub+ Event Broker

Scale applications based on Solace PubSub+ Event Broker queues.
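Each scaler in the catalog above is used by declaring a trigger of the matching type in a `ScaledObject` or `ScaledJob`. As one concrete illustration, a Cron scaler trigger scales a workload on a fixed schedule; the timezone, schedule, and replica count below are placeholder values:

```yaml
triggers:
  - type: cron
    metadata:
      timezone: Europe/Paris    # IANA timezone name (placeholder)
      start: 0 8 * * *          # scale up at 08:00
      end: 0 18 * * *           # scale back down at 18:00
      desiredReplicas: "10"     # replica count to hold during the window
```

Other scalers follow the same shape, differing only in the `type` and the `metadata` keys they accept.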

Highlighted Samples

We provide a variety of samples, but here are some of our highlights:

RabbitMQ and Go

RabbitMQ Consumer written in Go that is scaled with KEDA.

Azure Functions and Queue

Azure Function that triggers on Azure Storage Queues.

.NET Core worker and Azure Service Bus

Scale a .NET Core worker based on an Azure Service Bus queue.

Users

A variety of users are autoscaling applications with KEDA:



Partners

KEDA is supported by and built by our community, including the following companies:



Supported by

KEDA is supported by the following companies that provide their services for free:


Get Involved

If you’re interested in contributing to or participating in the direction of KEDA, you can join our community meetings.

Just want to learn or chat about KEDA? Feel free to join the conversation in the #KEDA channel on the Kubernetes Slack!

KEDA is a Cloud Native Computing Foundation incubation project