Laravel in Kubernetes Part 4 - Kubernetes Cluster Setup
In this post, we will spin up our Kubernetes cluster in DigitalOcean using Terraform.
Using Terraform lets us easily spin the cluster up and down, and keeps all of our infrastructure declarative.
If you'd like to spin up a cluster without Terraform, you can easily do this in the DigitalOcean UI and download the kubeconfig from there.

Creating our initial Terraform structure
For this blog series, we will create a separate repository for our Terraform setup, but feel free to create a subdirectory in the root of your project instead and run Terraform commands from there.
Create a new directory to act as the base of our new repository
mkdir -p laravel-in-kubernetes-infra
cd laravel-in-kubernetes-infra/
Terraform initialisation
In the new directory, we need a few files.
We will start with a file called versions.tf
to contain the required versions of our providers.
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.11"
    }
  }
}
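If you want everyone on the team to run a compatible Terraform CLI as well, you can optionally pin it in the same block. A minimal sketch, where the exact version constraint is an assumption you should adjust to your setup:
terraform {
  # Optional: pin the Terraform CLI version itself.
  required_version = ">= 1.0"

  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.11"
    }
  }
}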
Once that file is created, we can initialise Terraform and download the DigitalOcean provider
$ terraform init
[...]
Terraform has been successfully initialized!
From here, we can start creating the provider details, and spin up our clusters.
Terraform Provider Setup
Next, we need to get an access token from DigitalOcean which Terraform can use when creating infrastructure.
You can do this by logging in to your DigitalOcean account, going to API > Generate New Token, giving it an appropriate name, and making sure it has write access.
Create a new file called local.tfvars
and save the token in that file.
do_token="XXX"
Now we need to ignore the local.tfvars
file in our repository along with some other files.
We also need to register the variable with Terraform, so it knows to look for it, and validate it.
Create a variables.tf
file to declare the variable
variable "do_token" {
type = string
}
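Since the token is a secret, you can optionally harden this declaration a little. A sketch, assuming a recent Terraform version (validation blocks need 0.13+, sensitive needs 0.14+):
variable "do_token" {
  type = string

  # Redact the value from plan/apply output.
  sensitive = true

  # Fail fast if the token was never provided.
  validation {
    condition     = length(var.do_token) > 0
    error_message = "do_token must not be empty."
  }
}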
At this point we can run terraform validate to make sure all our files are in order.
$ terraform validate
Success! The configuration is valid.
Ignore Terraform state files
Create a .gitignore
file matching https://github.com/github/gitignore/blob/master/Terraform.gitignore
# Local .terraform directories
**/.terraform/*
# .tfstate files
*.tfstate
*.tfstate.*
# Crash log files
crash.log
# Exclude all .tfvars files, which are likely to contain sensitive data, such as
# passwords, private keys, and other secrets. These should not be part of version
# control as they are data points which are potentially sensitive and subject
# to change depending on the environment.
#
*.tfvars
# Ignore override files as they are usually used to override resources locally and so
# are not checked in
override.tf
override.tf.json
*_override.tf
*_override.tf.json
# Include override files you do wish to add to version control using negated pattern
#
# !example_override.tf
# Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan
# example: *tfplan*
# Ignore CLI configuration files
.terraformrc
terraform.rc
With the sensitive files ignored, we can initialise the directory as a git repo and commit our current changes
Initialise Git Repo
$ git init
Initialized empty Git repository in [your_directory]
$ git add .
$ git commit -m "Init"
Configure DigitalOcean Provider
Create a new file called providers.tf
where we can configure the DigitalOcean provider with our access token
provider "digitalocean" {
token = var.do_token
}
Remember to add and commit this new file.
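As an aside, the DigitalOcean provider can also read the token directly from the DIGITALOCEAN_TOKEN environment variable, in which case the token argument can be left out entirely:
$ export DIGITALOCEAN_TOKEN="XXX"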
Getting ready to run Kubernetes
Kubernetes Version
In order to run Kubernetes, we need to define which version of Kubernetes we'd like to run.
We'll do this using a Terraform data source from DigitalOcean, which gives us the latest patch release of a chosen minor version. For this guide, that will be the latest minor version DigitalOcean ships, 1.21.X.
Create a file in the root of your repository called kubernetes.tf
containing the data source for versions
data "digitalocean_kubernetes_versions" "kubernetes-version" {
version_prefix = "1.21."
}
This should be enough to define the required version.
Combined with the auto-upgrade settings we'll add to the cluster below, DigitalOcean and Terraform will keep your cluster up to date with the latest patches, which are important for security and stability fixes.
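If you're curious which exact version the data source resolves to, one quick way is a temporary output block, a small sketch you can remove again afterwards:
output "kubernetes-latest-version" {
  value = data.digitalocean_kubernetes_versions.kubernetes-version.latest_version
}
After the next terraform apply, terraform output kubernetes-latest-version will print the resolved version, e.g. 1.21.2-do.2.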
Machine Sizes
We also need to define which machine sizes we'd like to run as part of our cluster.
Kubernetes in DigitalOcean runs using Node Pools.
We can use these to run groups of machines with different capabilities, depending on our needs.
For now, we will create a single Node Pool with some basic machines to run our Laravel application.
In our kubernetes.tf
file, add the data source for the machine sizes we will start off with.
[...]
data "digitalocean_sizes" "small" {
filter {
key = "slug"
values = ["s-2vcpu-2gb"]
}
}
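If you're not sure which size slugs are available, you can list them with doctl (which we'll install later in this post anyway) before settling on one:
$ doctl compute size list
[...]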
Region
We also need to define a region for where our Kubernetes cluster is going to run.
We can define this as a variable, to make it easy to change for different folks in different places.
In variables.tf, add a new variable for the region you would like to use.
[...]
variable "do_region" {
type = string
default = "fra1"
}
I have defaulted it to Frankfurt 1 for ease of use, but you can now override it in local.tfvars
like so
do_region="fra1"
Create our Kubernetes cluster
The next step is actually spinning up our cluster.
This is a pretty simple step: create a Kubernetes cluster resource in our kubernetes.tf
file, with some extra properties for cluster management with DigitalOcean.
resource "digitalocean_kubernetes_cluster" "laravel-in-kubernetes" {
name = "laravel-in-kubernetes"
region = var.do_region
# Latest patched version of DigitalOcean Kubernetes.
# We do not want to update minor or major versions automatically.
version = data.digitalocean_kubernetes_versions.kubernetes-version.latest_version
# We want any Kubernetes Patches to be added to our cluster automatically.
# With the version also set to the latest version, this will be covered from two perspectives
auto_upgrade = true
maintenance_policy {
# Run patch upgrades at 4AM on a Sunday morning.
start_time = "04:00"
day = "sunday"
}
node_pool {
name = "default-pool"
size = "${element(data.digitalocean_sizes.small.sizes, 0).slug}"
# We can autoscale our cluster according to use, and if it gets high,
# We can auto scale to maximum 5 nodes.
auto_scale = true
min_nodes = 1
max_nodes = 5
# These labels will be available in the node objects inside of Kubernetes,
# which we can use as taints and tolerations for workloads.
labels = {
pool = "default"
size = "small"
}
}
}
Now that we have added the cluster details, we can validate our Terraform once more
$ terraform validate
Success! The configuration is valid.
We can now create our Kubernetes cluster
$ terraform apply
var.do_token
Enter a value:
Terraform is asking us to pass in a do_token, even though we have specified it in our local.tfvars file.
Terraform does not automatically load values from arbitrarily named .tfvars files; it only auto-loads terraform.tfvars and files ending in the .auto.tfvars
suffix.
Let's rename our local.tfvars
to local.auto.tfvars
mv local.tfvars local.auto.tfvars
We should now be able to run terraform apply correctly
$ terraform apply
[...]
Plan: 1 to add, 0 to change, 0 to destroy.
[...]
digitalocean_kubernetes_cluster.laravel-in-kubernetes: Creating...
digitalocean_kubernetes_cluster.laravel-in-kubernetes: Still creating... [10s elapsed]
[...]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
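You can confirm that Terraform is now tracking the cluster by listing its state:
$ terraform state list
digitalocean_kubernetes_cluster.laravel-in-kubernetes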
Our cluster is now created successfully, and we need to fetch the kubeconfig file.
Fetching Cluster access details
We need to get a kubeconfig file from DigitalOcean to access our cluster.
We can do this through Terraform with resource attributes, but this does not scale too well with a team, as not everyone should have access to run Terraform locally.
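For completeness, the Terraform route would look roughly like this: a sketch using the kube_config attribute the DigitalOcean provider exposes on the cluster resource:
output "kubeconfig" {
  value     = digitalocean_kubernetes_cluster.laravel-in-kubernetes.kube_config[0].raw_config
  sensitive = true
}
You could then write it out with terraform output -raw kubeconfig > kubeconfig.yaml (the -raw flag needs Terraform 0.14+).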
The other mechanism we can use for this is doctl:
https://github.com/digitalocean/doctl
You can follow the installation guide to get it up and running locally https://github.com/digitalocean/doctl#installing-doctl
Get the kubeconfig
Next we need to fetch the kubeconfig using doctl
Get the ID of our cluster first
$ doctl kubernetes clusters list
ID Name Region Version Auto Upgrade Status Node Pools
[your-id-here] laravel-in-kubernetes fra1 1.21.2-do.2 true running default-pool
Copy the ID from there, and then merge the cluster credentials into your local kubeconfig file.
$ doctl k8s cluster kubeconfig save [your-id-here]
Notice: Adding cluster credentials to kubeconfig file found in "/Users/chris/.kube/config"
Notice: Setting current-context to do-fra1-laravel-in-kubernetes
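You can double-check that kubectl is pointing at the new cluster:
$ kubectl config current-context
do-fra1-laravel-in-kubernetes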
You should now be able to get pods in your new cluster
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system cilium-8r6qz 1/1 Running 0 6m33s
kube-system cilium-operator-6cc67c77f9-4c5vd 1/1 Running 0 9m27s
kube-system cilium-operator-6cc67c77f9-qhwbb 1/1 Running 0 9m27s
kube-system coredns-85d9ccbb46-6nkqb 1/1 Running 0 9m27s
kube-system coredns-85d9ccbb46-hmjbw 1/1 Running 0 9m27s
kube-system csi-do-node-jppxt 2/2 Running 0 6m33s
kube-system do-node-agent-647dj 1/1 Running 0 6m33s
kube-system kube-proxy-xlldk 1/1 Running 0 6m33s
This shows that our Kubernetes cluster is running, and we are ready to move on to the next piece.
Onto the next
Next we are going to spin up a database for our application.
You can do this using either a Managed Database from DigitalOcean, or by running it in your new Kubernetes cluster. The next post has instructions for both approaches.