Laravel in Kubernetes Part 6 - Deploying Laravel Web App in Kubernetes

In this post we will cover deploying our Laravel Web App inside Kubernetes.

This covers both our main app and our migrations.

This post also assumes you have Dockerised your application using Part 2 & Part 3 of this series. If you containerised your application another way, you should still be able to follow along, provided you have the same style of Docker images. If you have a monolithic Docker image instead, such as the one from Laravel Sail, you can simply replace the images in the manifests with your own.

Deployment Repo

The first thing we'll start with is a fresh repository. This is where we will store all of our deployment manifests, and where we will deploy from.

If you followed the self-managed database tutorial in the previous post, you'll already have created a deployment repo, and can skip the creation of this directory.

Start with a fresh directory in your projects folder, or wherever you keep your source code folders.

$ mkdir -p laravel-in-kubernetes-deployment
$ cd laravel-in-kubernetes-deployment
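
Since this will also be the repo we deploy from, you can optionally put it under version control right away:

$ git init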

Common Configuration

We want to create a ConfigMap and a Secret which all the different pieces of our application can share, giving us a single place to configure them.

Common folder

We'll start with a common folder for the common manifests.

$ mkdir -p common

ConfigMap

Create a ConfigMap matching all of the details in the .env file, except the secret values, which we will move into a Secret next.

Create a new file called common/app-config.yml with the following content:

apiVersion: v1
kind: ConfigMap
metadata:
  name: laravel-in-kubernetes
data:
  APP_NAME: "Laravel"
  APP_ENV: "local"
  APP_DEBUG: "true"
  # Once you have an external URL for your application, you can add it here. 
  APP_URL: "http://laravel-in-kubernetes.test"
  
  # Update the LOG_CHANNEL to stdout for Kubernetes
  LOG_CHANNEL: "stdout"
  LOG_LEVEL: "debug"
  DB_CONNECTION: "mysql"
  DB_HOST: "mysql"
  DB_PORT: "3306"
  DB_DATABASE: "laravel_in_kubernetes"
  BROADCAST_DRIVER: "log"
  CACHE_DRIVER: "file"
  FILESYSTEM_DRIVER: "local"
  QUEUE_CONNECTION: "sync"
  
  # Update the Session driver to Redis, based off part-2 of series
  SESSION_DRIVER: "redis"
  SESSION_LIFETIME: "120"
  MEMCACHED_HOST: "memcached"
  REDIS_HOST: "redis"
  REDIS_PORT: "6379"
  MAIL_MAILER: "smtp"
  MAIL_HOST: "mailhog"
  MAIL_PORT: "1025"
  MAIL_ENCRYPTION: "null"
  MAIL_FROM_ADDRESS: "null"
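  # Note: Kubernetes does not expand ${...} placeholders in ConfigMap values,
  # so the app receives this value as a literal string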
  MAIL_FROM_NAME: "${APP_NAME}"
  AWS_DEFAULT_REGION: "us-east-1"
  AWS_BUCKET: ""
  AWS_USE_PATH_STYLE_ENDPOINT: "false"
  PUSHER_APP_ID: ""
  PUSHER_APP_CLUSTER: "mt1"
  MIX_PUSHER_APP_KEY: "${PUSHER_APP_KEY}"

Secret

Create a Secret, matching all the secret details in .env. This is where we will pull in any secret values for our application.

Create a new file called common/app-secret.yml with the following content:

apiVersion: v1
kind: Secret
metadata:
  name: laravel-in-kubernetes
type: Opaque
stringData:
  APP_KEY: "base64:eQrCXchv9wpGiOqRFaeIGPnqklzvU+A6CZYSMosh1to="
  DB_USERNAME: "sail"
  DB_PASSWORD: "password"
  REDIS_PASSWORD: "null"
  MAIL_USERNAME: "null"
  MAIL_PASSWORD: "null"
  AWS_ACCESS_KEY_ID: ""
  AWS_SECRET_ACCESS_KEY: ""
  PUSHER_APP_KEY: ""
  PUSHER_APP_SECRET: ""

We can apply both of these files for usage in our Deployments.

$ kubectl apply -f common/
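
To confirm both objects were created, you can list them:

$ kubectl get configmap/laravel-in-kubernetes secret/laravel-in-kubernetes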

Update ConfigMap with database details

We can fill in our database details in the ConfigMap and the Secret so our application can connect to the database.

In common/app-config.yml, replace the values for the DB_* connection details:

apiVersion: v1
kind: ConfigMap
metadata:
  name: laravel-in-kubernetes
data:
  DB_CONNECTION: "mysql"
  DB_HOST: "mysql" # Use host from terraform if using managed Mysql
  DB_PORT: "3306" # Use port from terraform if using managed Mysql
  DB_DATABASE: "laravel_in_kubernetes" # Must match the database name you created in the previous post
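
If you'd like to verify connectivity from inside the cluster before wiring up the app, you can run a throwaway MySQL client pod. This is just a sketch; adjust the host, username, and image tag to match your own database setup:

$ kubectl run mysql-client --rm -it --restart=Never --image=mysql:8.0 -- \
    mysql -h mysql -u sail -p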

Updating configuration with production details

We also need to update our application configuration with production details, so our app runs in a production-like fashion in Kubernetes.

In common/app-config.yml, replace the details with production settings:

apiVersion: v1
kind: ConfigMap
metadata:
  name: laravel-in-kubernetes
data:
  APP_NAME: "Laravel"
  APP_ENV: "production"
  APP_DEBUG: "false"

Apply the configurations

We can now apply those into our cluster.

$ kubectl apply -f common/
configmap/laravel-in-kubernetes configured

Update Secret with database details

We also need to fill our Secret with the correct database details. Note that values under stringData are plain strings; Kubernetes encodes them into the base64 data field for us when the Secret is stored.

apiVersion: v1
kind: Secret
metadata:
  name: laravel-in-kubernetes
type: Opaque
stringData:
  DB_USERNAME: "XXX" # Replace with your DB username
  DB_PASSWORD: "XXX" # Replace with your DB password
  

We can apply that, and then move on to the Deployments.

$ kubectl apply -f common/
secret/laravel-in-kubernetes configured

FPM Deployment

We need a Deployment to run our application.

The Deployment instructs Kubernetes which image to deploy and how many replicas of it to run.

FPM Directory

First we need to create an fpm directory where we can store all of our FPM Deployment configurations.

$ mkdir -p fpm

FPM Deployment

We'll start with a very basic Kubernetes Deployment for our FPM app. Create a file called deployment.yml inside the fpm directory:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-in-kubernetes-fpm
  labels:
    tier: backend
    layer: fpm
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: backend
      layer: fpm
  template:
    metadata:
      labels:
        tier: backend
        layer: fpm
    spec:
      containers:
        - name: fpm
          image: [your_registry_url]/fpm_server:v0.0.1
          ports:
            - containerPort: 9000

We can now apply that, and we should see the application running correctly.

$ kubectl apply -f fpm/deployment.yml 
deployment.apps/laravel-in-kubernetes-fpm created

$ kubectl get deploy,pods
NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/laravel-in-kubernetes-fpm   1/1     1            1           58s

NAME                                             READY   STATUS    RESTARTS   AGE
pod/laravel-in-kubernetes-fpm-79fb79c548-2lp7m   1/1     Running   0          59s
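
You can also wait for the rollout to complete, which is handy in scripts:

$ kubectl rollout status deployment/laravel-in-kubernetes-fpm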

You should also be able to see the logs from the FPM pod.

$ kubectl logs laravel-in-kubernetes-fpm-79fb79c548-2lp7m
[30-Aug-2021 19:33:49] NOTICE: fpm is running, pid 1
[30-Aug-2021 19:33:49] NOTICE: ready to handle connections

Everything is now running well for our FPM Deployment.

Private Registry

If you are using a private registry for your images, you can have a look here for how to authenticate a private registry for your cluster.
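
As a rough sketch (the Secret name registry-credentials is an arbitrary choice for this example), you would create a Docker registry Secret and reference it from the Pod spec in fpm/deployment.yml:

$ kubectl create secret docker-registry registry-credentials \
    --docker-server=[your_registry_url] \
    --docker-username=[your_username] \
    --docker-password=[your_password]

spec:
  [...]
  template:
    [...]
    spec:
      imagePullSecrets:
        - name: registry-credentials
      containers:
        [...]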

FPM Service

We also need a Kubernetes Service. This will expose our FPM container port inside Kubernetes for our upcoming NGINX deployment to use.

Create a new file service.yml in the fpm directory.

apiVersion: v1
kind: Service
metadata:
  name: laravel-in-kubernetes-fpm
spec:
  selector:
    tier: backend
    layer: fpm
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 9000

This will allow us to connect to the FPM container from our Web Server deployment, which we will deploy next.

First, though, we need to apply the new Service:

$ kubectl apply -f fpm/service.yml    
service/laravel-in-kubernetes-fpm created
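
To confirm the Service is actually selecting the FPM pod, check its endpoints; you should see the pod's IP listed against port 9000:

$ kubectl get endpoints laravel-in-kubernetes-fpm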

Web Server Deployment

The next piece we need to deploy is our Web Server container, as well as its Service.

This will help expose our FPM application to the outside world.

Web Server Directory

Create a new folder called webserver

$ mkdir -p webserver

Web Server Deployment

Within the webserver folder, create the Web Server deployment.yml file.

We will also inject the FPM_HOST environment variable to point Nginx at our FPM deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-in-kubernetes-webserver
  labels:
    tier: backend
    layer: webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: backend
      layer: webserver
  template:
    metadata:
      labels:
        tier: backend
        layer: webserver
    spec:
      containers:
        - name: webserver
          image: [your_registry_url]/web_server:v0.0.1
          ports:
            - containerPort: 80
          env:
            # Inject the FPM Host as we did with Docker Compose
            - name: FPM_HOST
              value: laravel-in-kubernetes-fpm:9000

We can apply that, and see that our Deployment is running correctly.

$ kubectl apply -f webserver/deployment.yml 
deployment.apps/laravel-in-kubernetes-webserver created

$ kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
laravel-in-kubernetes-fpm-79fb79c548-2lp7m         1/1     Running   0          9m9s
laravel-in-kubernetes-webserver-5877867747-zm7zm   1/1     Running   0          6s

$ kubectl logs laravel-in-kubernetes-webserver-5877867747-zm7zm
[...]
2021/08/30 19:42:51 [notice] 1#1: start worker processes
2021/08/30 19:42:51 [notice] 1#1: start worker process 38
2021/08/30 19:42:51 [notice] 1#1: start worker process 39

Our Web Server deployment is now running successfully.

We are now able to move on to the Service.

Web Server Service

We also need a webserver Service to expose the Nginx Deployment to the rest of the cluster.

Create a new file in the webserver directory called service.yml

apiVersion: v1
kind: Service
metadata:
  name: laravel-in-kubernetes-webserver
spec:
  selector:
    tier: backend
    layer: webserver
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

We can apply that, and test our application, by port-forwarding it to our local machine.

$ kubectl apply -f webserver/service.yml 
service/laravel-in-kubernetes-webserver created

$ kubectl port-forward service/laravel-in-kubernetes-webserver 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
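
With the port-forward running, you can do a quick check from another terminal:

$ curl -I http://localhost:8080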

If you now open http://localhost:8080 on your local machine, you should see your application running in Kubernetes.

This means your application is running correctly, and it can serve requests.

Using the Database

Next, we need to inject our common config and secret into the FPM Deployment, to provide it with all the database details.

See the Kubernetes documentation for a better understanding of how to use Secrets and ConfigMaps as environment variables.

We are going to use envFrom to directly inject our ConfigMap and Secret into the container.

In the FPM deployment (fpm/deployment.yml), add the envFrom section:

apiVersion: apps/v1
kind: Deployment
metadata:
  [...]
spec:
  [...]
  template:
    [...]
    spec:
      containers:
        - name: fpm
          [...]
          envFrom:
            - configMapRef:
                name: laravel-in-kubernetes
            - secretRef:
                name: laravel-in-kubernetes

Kubernetes will now inject these values as environment variables when our application starts to run.

Apply the new configuration to make sure everything works correctly.

$ kubectl apply -f fpm/
deployment.apps/laravel-in-kubernetes-fpm configured
service/laravel-in-kubernetes-fpm unchanged

$ kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
laravel-in-kubernetes-fpm-84cf5b9bd7-z2jfd         1/1     Running   0          32s
laravel-in-kubernetes-webserver-5877867747-zm7zm   1/1     Running   0          15m

$ kubectl logs laravel-in-kubernetes-fpm-84cf5b9bd7-z2jfd
[30-Aug-2021 19:57:31] NOTICE: fpm is running, pid 1
[30-Aug-2021 19:57:31] NOTICE: ready to handle connections
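
You can also confirm the variables made it into the container (assuming your kubectl version accepts deploy/<name> as an exec target):

$ kubectl exec deploy/laravel-in-kubernetes-fpm -- env | grep DB_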

Everything seems to be working swimmingly.

Migrations

The next piece we want to take care of is running migrations for the application.

I've heard multiple opinions on when to run migrations, and there are multiple ways to do it.

Some options around migrations

In Continuous Deployment

You can run Migrations during your CD pipelines or processes.

This option can work quite well, but it has a drawback: if the migrations succeed but the deployment itself fails, your database schema ends up one step ahead of your running application.

In Init Containers

This is the option we are going to reach for in this article.

We can use the migrations as a deployment gate, and only allow the deployment to continue once the migrations have successfully run.

Running migrations as initContainers

We'll be using a Kubernetes initContainer to run our migrations. This makes it quite simple, and stops any deployment if the migrations don't pass first, giving us a clean window to fix any issues and deploy again.

In our application, we need to add a new initContainer.

We can go ahead and do this in the fpm/deployment.yml file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-in-kubernetes-fpm
  labels:
    tier: backend
    layer: fpm
spec:
  [...]
  template:
    metadata: [...]
    spec:
      initContainers:
        - name: migrations
          image: [your_registry_url]/cli:v0.0.1
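          # --force is needed because artisan asks for confirmation
          # before running migrations when APP_ENV is production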
          command:
            - php
          args:
            - artisan
            - migrate
            - --force
          envFrom:
            - configMapRef:
                name: laravel-in-kubernetes
            - secretRef:
                name: laravel-in-kubernetes
      containers:
        - name: fpm
          [...]

This will run the migrations container before our primary container starts. Only if the migrations succeed will the new Pod start our primary app and replace the running instances.

Let's apply that and see the results.

$ kubectl apply -f fpm/
deployment.apps/laravel-in-kubernetes-fpm configured

$ kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
laravel-in-kubernetes-fpm-856dcb9754-trf65         1/1     Running   0          16s
laravel-in-kubernetes-webserver-5877867747-zm7zm   1/1     Running   0          36m

Next, we want to check the logs from the migrations initContainer to see if it was successful.

$ kubectl logs laravel-in-kubernetes-fpm-856dcb9754-trf65 -c migrations
Migrating: 2014_10_12_100000_create_password_resets_table
Migrated:  2014_10_12_100000_create_password_resets_table (70.34ms)
Migrating: 2019_08_19_000000_create_failed_jobs_table
Migrated:  2019_08_19_000000_create_failed_jobs_table (24.21ms)

Our migrations have now run successfully.

Errors

If you receive errors at this point, you can check the logs to see what went wrong.

Most likely you cannot connect to your database or have provided incorrect credentials.
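
A few commands usually surface the problem quickly; replace the pod name with your own from kubectl get pods:

$ kubectl get pods
$ kubectl describe pod <fpm-pod-name>
$ kubectl logs <fpm-pod-name> -c migrations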

Feel free to comment on this blog, and I'd be happy to help you figure it out.

On to the next.

In the next episode of this series, we will go over deploying queue workers.

Laravel in Kubernetes Part 7 - Deploying Redis to run Queue workers and cache