Laravel in Kubernetes Part 5 - Deploying a database for our application
Deploying a database for our application can be quite a challenge.
On one hand, using a managed database makes sense from a management perspective, but might be a bit more expensive than running it ourselves.
On the other hand, running it ourselves comes with a whole array of possible maintenance issues like storage, backups, and restoration.
Introducing storage into our Kubernetes cluster also adds quite a bit of management overhead, especially for production-critical workloads.
In this post we will cover both options.

Managed Database
The easiest to manage, if you are willing to fork out a couple more bucks, is a managed database.
Most Cloud providers offer managed databases, including DigitalOcean on which this series is built.
We are going to use MySQL in this post, as in my opinion it is the most commonly used option for Laravel.
You are welcome to switch this out for Postgres if you are so inclined.
In the Infrastructure repository we created, we can add a new file called database.tf where we can define the configuration for our DigitalOcean Managed Database.
# Define some constant values for the different versions of DigitalOcean databases
locals {
  mysql = {
    engine  = "mysql"
    version = "8"
  }
  postgres = {
    engine  = "pg"
    version = "13" # Available options: 10 | 11 | 12 | 13
  }
}
# We need to create a database cluster in DigitalOcean,
# based on MySQL 8, which is the version DigitalOcean provides.
# You can switch this out for Postgres by changing the `local.` references to point at postgres.
resource "digitalocean_database_cluster" "laravel-in-kubernetes" {
  name       = "laravel-in-kubernetes"
  engine     = local.mysql.engine  # Replace with `local.postgres.engine` if using Postgres
  version    = local.mysql.version # Replace with `local.postgres.version` if using Postgres
  size       = "db-s-1vcpu-1gb"
  region     = var.do_region
  node_count = 1
}
# We want to create a separate database for our application inside the database cluster.
# This way we can share the cluster resources, but have multiple separate databases.
resource "digitalocean_database_db" "laravel-in-kubernetes" {
  cluster_id = digitalocean_database_cluster.laravel-in-kubernetes.id
  name       = "laravel-in-kubernetes"
}
# We want to create a separate user for our application,
# so we can limit access if necessary.
# We also use native password auth, as it works better with current Laravel versions.
resource "digitalocean_database_user" "laravel-in-kubernetes" {
  cluster_id        = digitalocean_database_cluster.laravel-in-kubernetes.id
  name              = "laravel-in-kubernetes"
  mysql_auth_plugin = "mysql_native_password"
}
# We want to allow access to the database from our Kubernetes cluster.
# We can also add custom IP addresses.
# If you would like to connect from your local machine,
# simply add your public IP.
resource "digitalocean_database_firewall" "laravel-in-kubernetes" {
  cluster_id = digitalocean_database_cluster.laravel-in-kubernetes.id

  rule {
    type  = "k8s"
    value = digitalocean_kubernetes_cluster.laravel-in-kubernetes.id
  }

  # rule {
  #   type  = "ip_addr"
  #   value = "ADD_YOUR_PUBLIC_IP_HERE_IF_NECESSARY"
  # }
}
# We also need to add outputs for the database, to easily be able to reach it.
# Expose the host of the database so we can easily use it when connecting.
output "laravel-in-kubernetes-database-host" {
  value = digitalocean_database_cluster.laravel-in-kubernetes.host
}

# Expose the port of the database, as it is usually different from the default MySQL / Postgres ports.
output "laravel-in-kubernetes-database-port" {
  value = digitalocean_database_cluster.laravel-in-kubernetes.port
}
Once we apply that, it might take some time to create the database, but Terraform will output a database host and port for us.
$ terraform apply
[...]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
Outputs:
laravel-in-kubernetes-database-host = "XXX"
laravel-in-kubernetes-database-port = 25060
You will now see your database host and port.
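If you need these values again later, you can read them back from the Terraform outputs at any time, for example when you configure the application deployment later in the series:
# Print a single output value from the current state
$ terraform output laravel-in-kubernetes-database-host
$ terraform output laravel-in-kubernetes-database-port
# Or render all outputs as JSON, which is handy for scripting
$ terraform output -json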
Security
But what about the username and password?
We could fetch these from Terraform directly, using the digitalocean_database_user.laravel-in-kubernetes.password attribute. The problem with this is that the password is stored in the Terraform state, and anyone with access to the state can read it, which compromises your database.
What we want to do instead is create the initial user with an initial password, and then change that password outside of Terraform.
There are other solutions to this, such as the key stores provided by cloud providers, which can be used with the External Secrets Operator to provide these values seamlessly in Kubernetes.
For the moment, though, we will use the DigitalOcean UI to regenerate the password, and use that new value outside of Terraform going forward.
In the DigitalOcean UI, regenerate the password for the laravel-in-kubernetes user, and store it for use in the next steps.
Laravel Changes
When using a default DigitalOcean Managed Database with our application, we need to make one change to our actual code base.
Laravel migrations will fail, because the managed database does not allow creating tables without primary keys by default, with an error such as:
Migrating: 2014_10_12_100000_create_password_resets_table
In Connection.php line 692:
SQLSTATE[HY000]: General error: 3750 Unable to create or change a table wit
hout a primary key, when the system variable 'sql_require_primary_key' is s
et. Add a primary key to the table or unset this variable to avoid this mes
sage. Note that tables without a primary key can cause performance problems
in row-based replication, so please consult your DBA before changing this
setting. (SQL: create table `password_resets` (`email` varchar(255) not nul
l, `token` varchar(255) not null, `created_at` timestamp null) default char
acter set utf8mb4 collate 'utf8mb4_unicode_ci')
In Connection.php line 485:
SQLSTATE[HY000]: General error: 3750 Unable to create or change a table wit
hout a primary key, when the system variable 'sql_require_primary_key' is s
et. Add a primary key to the table or unset this variable to avoid this mes
sage. Note that tables without a primary key can cause performance problems
in row-based replication, so please consult your DBA before changing this
setting.
To get around this error, we can temporarily switch off the primary key requirement while migrations run.
It's advisable to add primary keys to all your tables, but if you have an existing application it might be better to switch the requirement off first and add primary keys later, depending on your specific case.
The way I like to do this is by listening for the migration events and switching the primary key requirement off and back on around them.
In app/Providers/AppServiceProvider.php, add the following to the register method:
// Add these imports at the top of app/Providers/AppServiceProvider.php
use Illuminate\Database\Events\MigrationsEnded;
use Illuminate\Database\Events\MigrationsStarted;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Event;

/**
 * Register any application services.
 *
 * @return void
 */
public function register()
{
    // https://github.com/laravel/framework/issues/33238#issuecomment-897063577
    Event::listen(MigrationsStarted::class, function () {
        DB::statement('SET SESSION sql_require_primary_key=0');
    });
    Event::listen(MigrationsEnded::class, function () {
        DB::statement('SET SESSION sql_require_primary_key=1');
    });
}
Once we've done this, we can commit the fix, and rebuild both of our application containers so they contain the updated code.
# Commit the fix
$ git add app/Providers/AppServiceProvider.php
$ git commit -m "Disable Primary Key check for migrations"

# Rebuild our container images
$ make docker-build

# Lastly, push the new container images up to our registry
$ make docker-push
When we now run migrations against the managed database, everything should work.
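If you'd like to verify this before deploying, a quick sketch is to run the migrations from your local machine against the managed database, overriding the connection details on the command line. The host and password below are placeholders for your own Terraform output and regenerated password; your public IP must be allowed by the database firewall rule we left commented out earlier, and because DigitalOcean managed MySQL typically requires TLS connections, you may also need to point config/database.php at the cluster's CA certificate.
# Run the migrations locally against the managed database (placeholder values shown)
$ DB_CONNECTION=mysql \
  DB_HOST=<your-database-host> \
  DB_PORT=25060 \
  DB_DATABASE=laravel-in-kubernetes \
  DB_USERNAME=laravel-in-kubernetes \
  DB_PASSWORD=<your-regenerated-password> \
  php artisan migrate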
In the next step, we will start deploying our application and run migrations on startup.
Self-managed database
If you would like to use your own database running in Kubernetes, you can of course do this.
For running a database in Kubernetes, there are a few things to keep in mind:
- Database maintenance, such as backups, upgrades, and security.
- Persistence. You're probably going to need persistent storage so your data survives upgrades, updates, and pod rescheduling.
- Scalability. Running a distributed database with separate write and read replicas can become quite difficult to manage. As a starting point you will not need to scale your database this way, but in the future you might.
Taking all of this into account, we will deploy a MySQL 8 database inside Kubernetes, with persistence backed by a DigitalOcean volume and a manual backup and restore strategy. We won't cover monitoring for it just yet, as that will be covered in depth in a future post.
Creating a PersistentVolumeClaim in Kubernetes
We need to create a PersistentVolumeClaim.
This will trigger the CSI driver to provision a volume with the cloud provider, in this case DigitalOcean, register it in Kubernetes as a PersistentVolume, and bind it to our claim, which we can then use to persist our database data across deployments and upgrades.
In the next step of the series we will create a deployment repo to store all of our Kubernetes configuration, so we are jumping ahead a little and creating that structure now.
Create a new directory for your deployment manifests, with a subdirectory for your database.
# First make the deployment directory
mkdir -p deployment
cd deployment
# Next, create a database directory to store database-specific manifests
mkdir -p database
Next, create a file called database/persistent-volume-claim.yml where we will store the configuration:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: laravel-in-kubernetes-mysql
spec:
  storageClassName: do-block-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
We request only 1Gi of storage for the moment. You can always resize this at a later point if necessary.
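As a rough sketch of what such a resize could look like later on, assuming the do-block-storage storage class allows volume expansion, you can patch the claim to request more space:
# Request a larger volume; the CSI driver expands the underlying DigitalOcean volume
$ kubectl patch pvc laravel-in-kubernetes-mysql \
    -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'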
You can now apply the manifest to your Kubernetes cluster, and after a few minutes you should see the DigitalOcean volume provisioned.
$ kubectl apply -f database
persistentvolumeclaim/laravel-in-kubernetes-mysql created
$ kubectl get persistentvolume
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-47da21f2-113c-4415-b7c0-08e3782ac1c3 1Gi RWO Delete Bound app/laravel-in-kubernetes-mysql do-block-storage 16s
You can also see the volume created in the DigitalOcean UI under Volumes.

You'll notice that it is not mounted to a particular droplet just yet.
The Volume will only be mounted once an application actually tries to use the PVC.
This is intentional, as the volume will be mounted to the specific Droplet where the pod is running.
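If you want to confirm this for yourself, a quick check is to look at the claim and at the VolumeAttachment objects; no attachment will exist until a pod actually starts using the claim:
# The claim is already Bound to a PersistentVolume...
$ kubectl get pvc laravel-in-kubernetes-mysql
# ...but there is no VolumeAttachment yet, as nothing is consuming the claim
$ kubectl get volumeattachments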
Creating Secrets for our Mysql database
We need to create a username and password which we can use with MySQL.
The MySQL image allows us to inject these as environment variables, but first we need to save them in a Kubernetes Secret.
Create a new random password for use in our application.
$ LC_ALL=C tr -dc 'A-Za-z0-9' </dev/urandom | head -c 20 ; echo
eyeckfIIXw3KX0Rd0GHo
We also need a username, which in this case we'll call laravel-in-kubernetes.
Create a new file called secret.yml in the database folder, containing our username and password:
apiVersion: v1
kind: Secret
metadata:
  name: laravel-in-kubernetes-mysql
type: Opaque
stringData:
  DB_USERNAME: "laravel-in-kubernetes"
  DB_PASSWORD: "eyeckfIIXw3KX0Rd0GHo"
A note on security
A better approach would be to not store this secret in version control at all, as that exposes our passwords to whoever has access to the manifests.
Alternative solutions include Sealed Secrets or the External Secrets Operator from Container Solutions.
For the moment, we will keep the plain Secret manifest to keep the learning simple.
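If you would rather keep the password out of version control right away, one simple alternative (a sketch using the same values generated above) is to create the Secret imperatively instead of committing a manifest:
# Create the same Secret directly, without a committed manifest file
$ kubectl create secret generic laravel-in-kubernetes-mysql \
    --from-literal=DB_USERNAME=laravel-in-kubernetes \
    --from-literal=DB_PASSWORD=eyeckfIIXw3KX0Rd0GHo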
So from here we can apply that secret, and make it available to our database in coming steps.
$ kubectl apply -f database/
secret/laravel-in-kubernetes-mysql created
Creating a StatefulSet for the database
In our database folder we can create another file called statefulset.yml, where we will declare our database setup, with liveness and readiness probes, as well as resource requests for more stable running.
We use a StatefulSet so the database pod keeps a stable identity and is only rescheduled when it really needs to be.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: laravel-in-kubernetes-mysql
  labels:
    tier: backend
    layer: database
spec:
  selector:
    matchLabels:
      tier: backend
      layer: database
  serviceName: laravel-in-kubernetes-mysql
  replicas: 1
  template:
    metadata:
      labels:
        tier: backend
        layer: database
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        ports:
        - name: mysql
          containerPort: 3306
        env:
        - name: MYSQL_RANDOM_ROOT_PASSWORD
          value: '1'
        - name: MYSQL_DATABASE
          value: laravel-in-kubernetes
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: laravel-in-kubernetes-mysql
              key: DB_USERNAME
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: laravel-in-kubernetes-mysql
              key: DB_PASSWORD
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        resources:
          requests:
            cpu: 300m
            memory: 256Mi
        livenessProbe:
          exec:
            command:
            - bash
            - -c
            - mysqladmin -u ${MYSQL_USER} -p${MYSQL_PASSWORD} ping
          initialDelaySeconds: 10
          periodSeconds: 5
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - bash
            - -c
            - mysql -h 127.0.0.1 -u ${MYSQL_USER} -p${MYSQL_PASSWORD} -e "SELECT 1"
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: laravel-in-kubernetes-mysql
The StatefulSet will start up a single pod containing our database, mount our PersistentVolumeClaim into the container to store the data in a DigitalOcean volume, and automatically check MySQL availability before allowing other pods to connect to it.
When we redeploy the StatefulSet to upgrade MySQL or change settings, our data stays persisted, and the CSI driver remounts the volume on whichever node the pod is scheduled to.
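Once you apply the StatefulSet, it's worth checking that the database pod comes up healthy before moving on. A quick sketch of that check (the pod name laravel-in-kubernetes-mysql-0 follows from the StatefulSet name with a single replica):
# Apply the StatefulSet and watch the database pod start
$ kubectl apply -f database/
$ kubectl get pods -l tier=backend,layer=database

# Follow the MySQL logs until the server reports it is ready for connections
$ kubectl logs -f laravel-in-kubernetes-mysql-0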
Database Service
The next piece we need is a Kubernetes Service so we can easily connect to our database instance.
In the database folder, create a new file called service.yml where we can specify the Service details:
apiVersion: v1
kind: Service
metadata:
  name: laravel-in-kubernetes-mysql
spec:
  selector:
    tier: backend
    layer: database
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
We can apply that, and whenever we want to connect to the database from inside the cluster, we can use laravel-in-kubernetes-mysql (the Service name) as the host and 3306 as the port.
$ kubectl apply -f database/
service/laravel-in-kubernetes-mysql created
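To verify connectivity through the Service, a quick sketch is to start a throwaway MySQL client pod inside the cluster and connect using the Service name and the password from our Secret:
# Start a temporary client pod and open a MySQL shell against the Service
$ kubectl run mysql-client --rm -it --restart=Never --image=mysql:8.0 -- \
    mysql -h laravel-in-kubernetes-mysql -P 3306 -u laravel-in-kubernetes -p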
Database backups
As we are mounting to a DigitalOcean volume, our data should be fairly safe.
But, there are a few things we need to take care of.
For example, if we recreate our cluster for a major version upgrade, we need to manually re-register our volume in the new Kubernetes cluster.
We also need to make sure that, if we accidentally delete the PersistentVolumeClaim, we can restore the data from a backup or data source.
For this and more on backups, have a look at Kubernetes Volume Snapshots and Kubernetes Volume Data Sources. These allow you to restore data after a failure.
There is also tooling such as Velero that can take a lot of this manual work off your hands.
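As a starting point, a minimal sketch of an on-demand snapshot of our database volume could look like the following. The VolumeSnapshotClass name here is an assumption, so check which classes exist in your cluster with kubectl get volumesnapshotclass, and note that for a fully consistent backup you would also want to pause writes or take a logical dump (for example with mysqldump) alongside the snapshot.
# Create a point-in-time snapshot of the database volume (sketch; assumes the
# snapshot CRDs and a "do-block-storage" VolumeSnapshotClass are installed)
$ kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: laravel-in-kubernetes-mysql-snapshot
spec:
  volumeSnapshotClassName: do-block-storage
  source:
    persistentVolumeClaimName: laravel-in-kubernetes-mysql
EOF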
Onto the next
Next, we will start deploying our application in Kubernetes.