Scaling Jobs


You are currently viewing v1.5 of the documentation, which is not the latest version.


As an alternative to scaling event-driven code as deployments, you can also run and scale your code as Kubernetes Jobs. The primary reason to consider this option is to handle long-running executions. Rather than processing multiple events within a deployment, KEDA schedules a single Kubernetes Job for each detected event. That job initializes, pulls a single event from the message source, processes it to completion, and then terminates.

For example, if you wanted to use KEDA to run a job for each message that lands on a RabbitMQ queue, the flow may be:

  1. When no messages are awaiting processing, no jobs are created.
  2. When a message arrives on the queue, KEDA creates a job.
  3. When the job starts running, it pulls a single message and processes it to completion.
  4. As additional messages arrive, additional jobs are created. Each job processes a single message to completion.
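As a concrete illustration of this flow, a ScaledObject along these lines could drive the RabbitMQ example above. This is a sketch, not a definitive manifest: the queue name, host reference, and container image are placeholders, and the exact trigger metadata keys should be checked against the RabbitMQ trigger documentation for your KEDA version.

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: rabbitmq-consumer           # illustrative name
spec:
  scaleType: job                    # run each event as a Job rather than scaling a deployment
  jobTargetRef:
    parallelism: 1
    completions: 1
    template:
      spec:
        containers:
        - name: consumer
          image: example/rabbitmq-consumer:latest   # placeholder image
        restartPolicy: Never
  triggers:
  - type: rabbitmq
    metadata:
      queueName: hello              # placeholder queue name
      host: RabbitMqHost            # placeholder reference to the AMQP connection string
```

Each job created from this template is expected to consume one message and exit, matching steps 2–4 above.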

ScaledObject spec

This specification describes the ScaledObject custom resource definition which is used to define how KEDA should scale your application and what the triggers are.


```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: {scaled-object-name}
spec:
  scaleType: job
  jobTargetRef:
    parallelism: 1              # Max number of desired pods
    completions: 1              # Desired number of successfully finished pods
    activeDeadlineSeconds: 600  # Specifies the duration in seconds relative to the startTime that the job may be active before the system tries to terminate it; value must be a positive integer
    backoffLimit: 6             # Specifies the number of retries before marking this job failed. Defaults to 6
    template:
      # describes the job template
  pollingInterval: 30  # Optional. Default: 30 seconds
  cooldownPeriod:  300 # Optional. Default: 300 seconds
  minReplicaCount: 0   # Optional. Default: 0
  maxReplicaCount: 100 # Optional. Default: 100
  triggers:
  # {list of triggers to create jobs}
```

You can find all supported triggers here.