Autoscaling Jobs with KEDA's Redis List Scaler
Autoscaling jobs with KEDA using a Redis List as an event source with Redis authentication
Published on: Jun 13, 2025
Last updated on: Jun 18, 2025
This blog is part of our KEDA series; we recommend reading the rest of the posts in the series:
- Introducing Kubernetes Event-driven Autoscaling (KEDA)
- Getting Started with Autoscaling in Kubernetes with KEDA
- Autoscaling workloads with KEDA and the Prometheus Scaler
- Autoscaling Jobs with KEDA's Redis List Scaler
KEDA’s Redis List Scaler
In this blog post I will demonstrate setting up KEDA to authenticate against Redis and using a Redis List to scale jobs. KEDA has separate built-in scalers for Redis Lists and Redis Streams; for this demo I decided to use the Redis Lists scaler.
The demo will feature:
- A Kubernetes cluster using Kubernetes version v1.29 or higher
- A Redis server
- KEDA
Prerequisite Services: Installation
For my proof of concept, I set up the prerequisite services as follows.
First, I use the kind (Kubernetes in Docker) tool to create a local Kubernetes cluster with the following command:
kind create cluster
To verify kind created a Kubernetes cluster compatible with KEDA, I run kubectl version to check the version of the Kubernetes cluster.
Then, I add the kedacore Helm repository:
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
Then, I install the KEDA Helm chart with the following command:
helm install keda kedacore/keda \
  --create-namespace --namespace keda \
  --version 2.17.1
Instead of installing Redis with a Helm chart, I created a manifest file in the labs repository that deploys a basic Redis server requiring a password. There you will see a secret with the Redis credentials that I will be referencing in a KEDA custom resource.
kubectl create ns redis
kubectl apply -f redis.yaml -n redis
kubectl get pods -n redis
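For reference, below is a minimal sketch of what such a manifest could look like. It assumes a Secret named redis with a REDIS_PASS key and a Service named redis exposing port 6379, which is what the rest of this demo references; the actual manifest in the labs repository may differ.
Redis server manifest (sketch)
# redis.yaml (sketch)
apiVersion: v1
kind: Secret
metadata:
  name: redis
stringData:
  REDIS_PASS: changeme # placeholder password
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7
          # Start redis-server with a required password taken from the Secret
          args: ["redis-server", "--requirepass", "$(REDIS_PASS)"]
          env:
            - name: REDIS_PASS
              valueFrom:
                secretKeyRef:
                  name: redis
                  key: REDIS_PASS
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379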
Authentication flow between KEDA and Redis
Since I need to configure KEDA to authenticate against Redis, before creating a ScaledJob (or ScaledObject) I need to review the documentation for the Redis List scaler and the authentication providers supported by KEDA.
Looking at the Redis List scaler’s documentation1, I see there is a passwordFromEnv parameter to configure KEDA to get the password from an environment variable in the target workload. However, the example at the bottom of the page does not use the passwordFromEnv parameter; instead it uses the authenticationRef parameter to reference a TriggerAuthentication.
Looking at the authentication options2 supported by KEDA, I decide to follow the example in the Redis List scaler’s documentation and use the supported pattern of setting up re-usable credentials with KEDA’s TriggerAuthentication CRD (or its cluster-scoped counterpart, the ClusterTriggerAuthentication CRD).
Setting up the authentication workflow for this demo will involve:
- Creating a Kubernetes secret with the Redis credentials
- Deploying Redis locked behind the credentials from the above Kubernetes secret
- Deploying a TriggerAuthentication object
- Deploying a ScaledJob with the authenticationRef parameter
For a more secure authentication workflow, instead of storing the Redis credentials in a Kubernetes secret, you can store them in a secret management service and create a TriggerAuthentication object with the credentials or pod identity needed to access that service. Secret management services supported by KEDA include HashiCorp Vault, Azure Key Vault, AWS Secrets Manager and GCP Secret Manager.
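As an illustration only (not used in this demo), a TriggerAuthentication backed by HashiCorp Vault could look roughly like the sketch below, based on KEDA’s Vault provider documentation; the Vault address, token and secret path are placeholders.
TriggerAuthentication (HashiCorp Vault provider) sketch
# vault-triggerauth.yaml (illustrative sketch)
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-trigger-auth-redis-vault
spec:
  hashiCorpVault:
    address: https://vault.example.com:8200 # placeholder Vault address
    authentication: token # token-based auth kept simple for illustration
    credential:
      token: <vault-token> # placeholder token
    secrets:
      - parameter: password # maps to the Redis scaler's password parameter
        key: password # key within the Vault secret
        path: secret/data/redis # placeholder KV v2 secret path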
KEDA’s Secret Authentication Provider
Now I need to implement the authentication workflow I have mapped out. My simple Redis deployment included a Kubernetes Secret with the Redis credentials, so I have already completed the first two steps. Next, I need to create a TriggerAuthentication object in the same namespace as the Kubernetes Secret it will reference.
Looking at the list of KEDA’s authentication providers3, I see an authentication provider for Kubernetes secrets4, and based on the documentation I just need to create a small manifest file.
TriggerAuthentication (secret auth provider) example
# redis-triggerauth.yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
name: keda-trigger-auth-redis-secret
spec:
secretTargetRef:
- parameter: password
name: redis
key: REDIS_PASS
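I then apply this manifest to the redis namespace, where the referenced Kubernetes Secret lives:
kubectl apply -f redis-triggerauth.yaml -n redis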
Now I have completed the third step for enabling my desired authentication workflow between KEDA and Redis. I will complete the fourth step while creating the ScaledJob.
Creating a ScaledJob
With the aid of the ScaledJob specification5, I need to create my ScaledJob object.
Defining the Job Template
Unlike the previous blog post, where I had an existing Deployment in place for the ScaledObject to target, a ScaledJob starts with zero jobs, so I have to define the job template that will be used to create Jobs during a scale-up operation in the ScaledJob manifest.
jobTargetRef
# scaledjob.yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
name: scale-job
spec:
jobTargetRef:
parallelism: 1
completions: 1
activeDeadlineSeconds: 30
backoffLimit: 6
template:
spec:
containers:
- image: alpine:3.13.5
name: alpine
command: ['echo', 'hello world']
restartPolicy: Never
Defining the Scaling Strategy
While KEDA will create an HPA object for a ScaledObject, it does not create an underlying HPA object for a ScaledJob, so I do not have the luxury of defining HPA configuration to control the scaling behaviour. Instead, for a KEDA ScaledJob the scaling strategy can be defined with the scalingStrategy parameter5.
After reviewing the valid values for the scalingStrategy parameter, I went with the strategy that I felt was best suited for my use case.
scalingStrategy
# scaledjob.yaml
spec:
...
pollingInterval: 3
successfulJobsHistoryLimit: 5
failedJobsHistoryLimit: 5
maxReplicaCount: 10
scalingStrategy:
strategy: "accurate"
Defining the Scale Trigger(s)
After setting up the job template and scaling strategy, I now need to set up the trigger that KEDA will use to determine whether or not to begin scaling the defined job. Looking at the Redis List scaler documentation, I will need:
- The address of the Redis server. Since I have Redis and KEDA in the same cluster, I can use Redis’s internal DNS record in the Kubernetes cluster, which follows the format <serviceName>.<namespaceName>.svc.cluster.local.
- The name of the Redis List I want to monitor
- The length the list needs to reach for KEDA to activate the autoscaling process
After gathering the information I needed, I added the parameters for the Redis List trigger to the manifest file for my ScaledJob.
Adding the Authentication Credentials
But to finish configuring the parameters for my Redis List trigger, I need to provide KEDA with the information it will need to authenticate against the Redis server to access the list. I do this by referencing the TriggerAuthentication object I created in step 3 of the authentication workflow.
(Redis List) triggers and authenticationRef
# scaledjob.yaml
spec:
...
triggers:
- type: redis
metadata:
address: redis.redis.svc.cluster.local:6379
listName: myotherlist
listLength: "1"
authenticationRef:
name: keda-trigger-auth-redis-secret
Deploying the ScaledJob
Now that I have a manifest file for my ScaledJob, I deploy it to the same namespace where the referenced Kubernetes secret exists with the following command:
kubectl apply -f scaledjob.yaml -n redis
After deploying my ScaledJob, I can check whether it is Ready by checking the status of the ScaledJob…
$ kubectl get scaledjob -n redis
NAME MIN MAX READY ACTIVE PAUSED TRIGGERS AUTHENTICATIONS AGE
scale-job 10 True False Unknown redis keda-trigger-auth-redis-secret 20m
…and looking at the logs of the KEDA operator pod, I can see it successfully detected the newly deployed ScaledJob.
INFO Initializing Scaling logic according to ScaledJob Specification
{
"controller": "scaledjob",
"controllerGroup": "keda.sh",
"controllerKind": "ScaledJob",
"ScaledJob": {
"name": "scale-job",
"namespace": "redis"
},
"namespace": "redis",
"name": "scale-job",
"reconcileID": "d15b043b-7478-44f6-9727-2cf858d384dc"
}
If the credentials defined in the ScaledJob for KEDA to use are incorrect, the KEDA operator will output an error message connection to redis failed: NOAUTH Authentication required. The full log would be similar to the example below:
Authentication failed error log
{
"controller": "scaledjob",
"controllerGroup": "keda.sh",
"controllerKind": "ScaledJob",
"ScaledJob": {
"name": "scale-job",
"namespace": "redis"
},
"namespace": "redis",
"name": "scale-job",
"reconcileID": "cd870145-a7c1-4ccb-a204-244114bc38f9",
"error": "connection to redis failed: NOAUTH Authentication required."
}
As noted previously, KEDA does not create HPA objects for ScaledJob objects.
$ kubectl tree scaledjob scale-job -n redis
No resources are owned by this object through ownerReferences.
Testing the Scale Triggers
Now that I have the autoscaling infrastructure in place, I want to test whether or not updating the Redis list will trigger KEDA to automatically scale up the job defined in the ScaledJob. To run this test with my setup:
I open a terminal and run a watch command:
kubectl get pods -n redis -w
I open another terminal and exec into the Redis server pod with the following commands:
export REDIS_POD=<redis_pod_name>
export REDIS_PASS=<redis_password>
kubectl -n redis exec -it $REDIS_POD -- redis-cli -a $REDIS_PASS
I check my list is initially empty:
LLEN myotherlist
Then I push an item to my Redis list with the following command:
LPUSH myotherlist "my-item"
I monitor my first terminal to see if jobs get created by KEDA according to the job template I defined in my ScaledJob.
Since my current job template does not consume/remove items from my Redis list, the length of the list remains at 1, and I can see this causes KEDA to repeatedly create a new job once a job completes.
Checking the logs of the KEDA operator, I can see logs reflecting the behaviour I am observing. In the previous post of this KEDA blog series, with the Prometheus scaler example, the KEDA operator did not output any logs related to the scaling events because it was letting the Kubernetes HPA controller handle the autoscaling logic using the HPA object created by KEDA. In this case there is no HPA object to lean on, so KEDA is in full control of the autoscaling process and therefore outputs logs related to the scaling events.
KEDA Operator Job scaling event example logs
Scaling Jobs    {"scaledJob.Name": "scale-job", "scaledJob.Namespace": "redis", "Number of running Jobs": 0}
Scaling Jobs    {"scaledJob.Name": "scale-job", "scaledJob.Namespace": "redis", "Number of pending Jobs": 0}
Creating jobs   {"scaledJob.Name": "scale-job", "scaledJob.Namespace": "redis", "Effective number of max jobs": 1}
Creating jobs   {"scaledJob.Name": "scale-job", "scaledJob.Namespace": "redis", "Number of jobs": 1}
Created jobs    {"scaledJob.Name": "scale-job", "scaledJob.Namespace": "redis", "Number of jobs": 1}
Remove a job by reaching the historyLimit   {"scaledJob.Name": "scale-job", "scaledJob.Namespace": "redis", "job.Name": "scale-job-lzvrs", "historyLimit": 5}
Scaling Jobs    {"scaledJob.Name": "scale-job", "scaledJob.Namespace": "redis", "Number of running Jobs": 0}
Scaling Jobs    {"scaledJob.Name": "scale-job", "scaledJob.Namespace": "redis", "Number of pending Jobs": 0}
Creating jobs   {"scaledJob.Name": "scale-job", "scaledJob.Namespace": "redis", "Effective number of max jobs": 1}
Creating jobs   {"scaledJob.Name": "scale-job", "scaledJob.Namespace": "redis", "Number of jobs": 1}
Created jobs    {"scaledJob.Name": "scale-job", "scaledJob.Namespace": "redis", "Number of jobs": 1}
Remove a job by reaching the historyLimit   {"scaledJob.Name": "scale-job", "scaledJob.Namespace": "redis", "job.Name": "scale-job-9dsgj", "historyLimit": 5}
To stop KEDA autoscaling, I empty my Redis list by running:
LPOP myotherlist
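With the single item I pushed now removed, I can confirm the list is empty again, which means KEDA will stop creating new jobs:
LLEN myotherlist
(integer) 0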
Now that I have a working ScaledJob, I would:
- Update the job template defined in my ScaledJob to replicate/reproduce my actual use case (e.g. a job that consumes the first item in the Redis list) and re-run my test (a sketch of such a template follows this list)
- Try a different setup that would allow me to use another KEDA authentication provider (e.g. HashiCorp Vault, Azure Key Vault, AWS Secrets Manager or GCP Secret Manager)
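For the first follow-up, a rough sketch of what a consuming job template could look like is below. It reuses the redis Secret and Service from earlier in this demo and simply pops one item from the list per job; an actual implementation would do real work with that item.
ScaledJob job template (consumer sketch)
# scaledjob.yaml (sketch of a consuming job template)
spec:
  jobTargetRef:
    template:
      spec:
        containers:
          - name: consumer
            image: redis:7
            # Pop one item from the list so each job consumes one unit of work
            command:
              - sh
              - -c
              - redis-cli -h redis.redis.svc.cluster.local -a "$REDIS_PASS" LPOP myotherlist
            env:
              - name: REDIS_PASS
                valueFrom:
                  secretKeyRef:
                    name: redis
                    key: REDIS_PASS
        restartPolicy: Never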
You can find the manifest files and more specific instructions to re-create my proof of concept demo in LiveWyer’s Lab repository.
Wrapping Up
That concludes my demo for autoscaling jobs using KEDA’s Redis List scaler to scale based on the length of a list in Redis. I found it interesting running a test similar to the one in my previous blog post and comparing how KEDA interacts with and manages autoscaling for ScaledObject and ScaledJob objects. Hopefully you found it interesting as well.
Footnote
This blog is part of our KEDA series; we recommend reading the rest of the posts in the series:
- Introducing Kubernetes Event-driven Autoscaling (KEDA)
- Getting Started with Autoscaling in Kubernetes with KEDA
- Autoscaling workloads with KEDA and the Prometheus Scaler
- Autoscaling Jobs with KEDA's Redis List Scaler