Laravel in Kubernetes Part 7 - Deploying Redis to run Queue workers and cache

In this post, we'll go over deploying a Redis instance for our Laravel Queue workers to run from.

The same Redis instance can also be used for caching inside Laravel, or a second Redis cluster can be installed separately for the cache.

We will cover two methods of running a Redis Instance.

On the one hand, we'll use a managed Redis Cluster from DigitalOcean, which alleviates the maintenance burden for us and gives us a Redis cluster that is immediately ready for use.

On the other hand, we'll deploy a Redis Instance into the Kubernetes cluster ourselves. This saves us some money, but adds a fair amount of management overhead.

Managed Redis

In the same fashion as we did for our database, we will deploy a managed Redis instance in DigitalOcean.

In the infrastructure repository we created earlier, we can add a new file called redis.tf, where we can store our Terraform configuration for the Redis Instance in DigitalOcean.

resource "digitalocean_database_cluster" "laravel-in-kubernetes-redis" {
  name = "laravel-in-kubernetes-redis"
  engine = "redis"
  version = "6"
  size = "db-s-1vcpu-1gb"
  region = var.do_region
  node_count = 1
}

# We want to allow access to the database from our Kubernetes cluster
# We can also add custom IP addresses
# If you would like to connect from your local machine,
# simply add your public IP
resource "digitalocean_database_firewall" "laravel-in-kubernetes-redis" {
  cluster_id = digitalocean_database_cluster.laravel-in-kubernetes-redis.id

  rule {
    type  = "k8s"
    value = digitalocean_kubernetes_cluster.laravel-in-kubernetes.id
  }

#   rule {
#     type  = "ip_addr"
#     value = "ADD_YOUR_PUBLIC_IP_HERE_IF_NECESSARY"
#   }
}

output "laravel-in-kubernetes-redis-host" {
  value = digitalocean_database_cluster.laravel-in-kubernetes-redis.host
}

output "laravel-in-kubernetes-redis-port" {
  value = digitalocean_database_cluster.laravel-in-kubernetes-redis.port
}

Let's apply that, and we should see a host and port pop up after a little while.

$ terraform apply
[...]
Plan: 3 to add, 0 to change, 0 to destroy.
Enter a value: yes

digitalocean_database_cluster.laravel-in-kubernetes-redis: Creating...
[...]
Outputs:

laravel-in-kubernetes-database-host = "XXX"
laravel-in-kubernetes-database-port = 25060
laravel-in-kubernetes-redis-host = "XXX"
laravel-in-kubernetes-redis-port = 25061

We now have connection details for our Redis Instance, but not yet a username or password.

Terraform does output these for us, but these would then be stored in the state file, which is not ideal.

For the moment, you cannot change the password of the deployed Redis Instance in DigitalOcean, so we'll use the username and password from Terraform.

We won't output these from Terraform, as they will then show up in logs when we build CI/CD.
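
As an aside, if you would rather not grep the state file, Terraform can also expose these values as outputs marked sensitive, which keeps them redacted in CLI and CI/CD logs (although they still live in plain text in the state file). Below is a minimal sketch, not used in this series, assuming the same resource name as above.

# Optional alternative: sensitive outputs are redacted in plan/apply logs
output "laravel-in-kubernetes-redis-user" {
  value     = digitalocean_database_cluster.laravel-in-kubernetes-redis.user
  sensitive = true
}

output "laravel-in-kubernetes-redis-password" {
  value     = digitalocean_database_cluster.laravel-in-kubernetes-redis.password
  sensitive = true
}

# Retrieve a value locally when you need it:
# terraform output -raw laravel-in-kubernetes-redis-password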

You can cat the state file, search for the Redis instance, and find the username and password in there.

$ cat terraform.tfstate | grep '"name": "laravel-in-kubernetes-redis"' -A 20 | grep -e password -e '"user"'
"password": "XXX",
"user": "default",

Store these values somewhere safe, as we'll use them in the next step of our deployment.
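
For reference, Laravel will eventually consume these values through its Redis environment variables. The snippet below is only an illustrative sketch with placeholder values, not part of this post's manifests; DigitalOcean managed Redis requires TLS, and one common approach with the default phpredis client is to prefix the host with tls://.

# Illustrative .env values for the managed Redis instance (placeholders)
REDIS_HOST=tls://XXX   # host output from Terraform, prefixed for TLS
REDIS_PORT=25061       # port output from Terraform
REDIS_PASSWORD=XXX     # password found in the state file
# The "default" user typically needs no extra configuration, as a plain AUTH authenticates it.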

Self-managed Redis

Self-managed Redis means we will be running Redis ourselves inside of Kubernetes, with AOF (append-only file) persistence enabled.

This involves more management than a managed cluster, but does save us some cost.

We'll run our Redis Instance in a StatefulSet to ensure a stable set of running pods.

In our deployment repo, create a new directory called redis. Here we will store all our details for the Redis Cluster.

Create a new file in the redis directory, called persistent-volume-claim.yml. This is where we will store the configuration for the storage we need provisioned in DigitalOcean.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: laravel-in-kubernetes-redis
spec:
  storageClassName: do-block-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      # We are starting with 1Gi. We can always increase it later.
      storage: 1Gi

Apply that, and we should see the volume created after a few seconds.

$ kubectl apply -f redis/persistent-volume-claim.yml 
persistentvolumeclaim/laravel-in-kubernetes-redis created
$ kubectl get persistentvolumes
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                 STORAGECLASS       REASON   AGE
pvc-f5aac936-98f5-48f1-a526-a68bc5c17471   1Gi        RWO            Delete           Bound    default/laravel-in-kubernetes-redis   do-block-storage            25s

Our volume has been successfully created, and we can move on to actually deploying Redis.

Create a new file in the redis folder called statefulset.yml where we will configure the Redis Node.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: laravel-in-kubernetes-redis
  labels:
    tier: backend
    layer: redis
spec:
  serviceName: laravel-in-kubernetes-redis
  selector:
    matchLabels:
      tier: backend
      layer: redis
  replicas: 1
  template:
    metadata:
      labels:
        tier: backend
        layer: redis
    spec:
      containers:
      - name: redis
        image: redis:5.0.4
        command: ["redis-server", "--appendonly", "yes"]
        ports:
        - containerPort: 6379
          name: redis
        volumeMounts:
        - name: redis-aof
          mountPath: /data
      volumes:
        - name: redis-aof
          persistentVolumeClaim:
            claimName: laravel-in-kubernetes-redis

As you can see, we are also mounting our PersistentVolumeClaim into the container, so our AOF file will persist across container restarts.
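
Optionally, you could also give the Redis container a readiness probe, so Kubernetes only marks the pod Ready once Redis actually responds to commands. The snippet below is a minimal sketch and is not included in the manifest above; it would sit under the redis container alongside volumeMounts, and assumes redis-cli is available in the image (which it is in the official redis image).

        # Hypothetical addition under the redis container spec
        readinessProbe:
          exec:
            command: ["redis-cli", "ping"]
          initialDelaySeconds: 5
          periodSeconds: 10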

We can go ahead and apply the StatefulSet, and we should see our Redis pod pop up.

$ kubectl apply -f redis/statefulset.yml 
statefulset.apps/laravel-in-kubernetes-redis created

# after a few seconds
$ kubectl get pods
laravel-in-kubernetes-redis-0   1/1     Running   0          18s

# Inspect the logs
$ kubectl logs laravel-in-kubernetes-redis-0
1:C 30 Aug 2021 17:31:16.678 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 30 Aug 2021 17:31:16.678 # Redis version=5.0.4, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 30 Aug 2021 17:31:16.678 # Configuration loaded
1:M 30 Aug 2021 17:31:16.681 * Running mode=standalone, port=6379.
1:M 30 Aug 2021 17:31:16.681 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 30 Aug 2021 17:31:16.681 # Server initialized
1:M 30 Aug 2021 17:31:16.681 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 30 Aug 2021 17:31:16.681 * Ready to accept connections
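
Optionally, we can confirm that the append-only file is being written to the mounted volume. The quick check below is illustrative; the file may only show up (or grow) once Redis has handled its first writes.

$ kubectl exec -it laravel-in-kubernetes-redis-0 -- ls /data
appendonly.aof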

We now have Redis successfully running, and we just need to add a service to make it discoverable in Kubernetes.

Create a new Service file in the redis directory called service.yml where we will store the service for Redis.

apiVersion: v1
kind: Service
metadata:
  name: laravel-in-kubernetes-redis
  labels:
    tier: backend
    layer: redis
spec:
  ports:
  - port: 6379
    protocol: TCP
  selector:
    tier: backend
    layer: redis
  type: ClusterIP

Apply that, and we'll have a Redis connection ready to go.
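
The commands below show what that looks like, along with an optional connectivity check from a throwaway pod; the pod name redis-test is an arbitrary choice, and the pod is removed again as soon as the command exits.

$ kubectl apply -f redis/service.yml
service/laravel-in-kubernetes-redis created

# Optional: check connectivity through the new service
$ kubectl run redis-test --rm -it --image=redis:5.0.4 --restart=Never -- redis-cli -h laravel-in-kubernetes-redis ping
PONG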

Onto the next

Next, we'll move on to deploying our Queue workers in Kubernetes.

Laravel in Kubernetes Part 8 - Deploying Laravel Queue workers in Kubernetes