Deploying a SailsJS app to Kubernetes, with MySQL and Redis

Deploying a SailsJS application in Kubernetes with MySQL and Redis, and exposing it to the real world with TLS, all in one.

We'll start with a SailsJS app. Luckily, Sails already provides us with a demo app called ration.

You can see a live preview here.

Table of contents

  1. Switch to Node version 8
  2. Get the app up and running locally
  3. Dockerise the app
  4. Add Docker Compose to run the app locally
  5. Push the app image to a container registry
  6. Get a Kubernetes cluster and install kubectl
  7. Run Redis and MySQL in Kubernetes with KubeDB
  8. Deploy the app
  9. Reach the deployed app with a Service and an Ingress

Switch to Node version 8

In order to run ration, we need node 8.

A simple way to install node 8 is by using an npm package called n. Yes, just n.

Install n globally & use it to install node 8.

# Install using npm
$ npm install -g n

# Install using Yarn
$ yarn global add n

# Use n to switch to node 8
$ n 8

You can make sure you are running node 8 by calling node --version

$ node --version
v8.16.1

Get the app up and running locally

Clone it, and install the dependencies.

# Clone the ration project
$ git clone https://github.com/mikermcneil/ration.git
Cloning into 'ration'...
remote: Enumerating objects: 43, done.
remote: Counting objects: 100% (43/43), done.
remote: Compressing objects: 100% (33/33), done.
remote: Total 2217 (delta 16), reused 25 (delta 8), pack-reused 2174
Receiving objects: 100% (2217/2217), 35.31 MiB | 391.00 KiB/s, done.
Resolving deltas: 100% (1382/1382), done.

# Install its dependencies using npm
$ npm install

# Alternatively use yarn
$ yarn install 

Now you can start up the app using npm start or yarn start

IT WILL FAIL this time around, as we do not have Redis running at all.

To avoid duplicating effort, we will Dockerise the application first, and then use docker-compose to run the different services locally.

Dockerise the app

We need to Dockerise the ration application in order for it to run in Kubernetes.

We will be using the Node 8 image from Docker Hub.

Create a file called .dockerignore. This allows us to specify which directories should be ignored when Docker copies our source code into the container. We specifically want to ignore node_modules, so the container builds its own dependencies.

/node_modules

Create a file called Dockerfile where we will specify our Container Image, and how Docker should go about building it. The image built will be quite big, as we have not optimised for size just yet. We will do so in a later step.

# Start with a node 8.16 slim image to keep the container size down
FROM node:8.16-jessie-slim

# Specify a default directory for where our app will be placed within the container.
#
# This can be overridden with docker build --build-arg WORKDIR=some-dir/ .
# Always append the directory name with a /
ARG WORKDIR=/opt/apps/ration/

# Create a directory to contain our application
RUN mkdir -p $WORKDIR

# Switch default working directory to our new directory
WORKDIR $WORKDIR

# Copy our package and lock files over first,
# as the build can then cache this step separately from our code.
#
# This allows us to build faster when we only have code changes,
# as the install step will be loaded from cache,
# and rebuilt when package files change
COPY package.json package-lock.json $WORKDIR

# Install the actual dependencies
RUN npm install

# Now copy over your actual source code
#
# REMEMBER: We are ignoring node_modules in the .dockerignore file explicitly,
# so docker will not copy over that directory. The app will use the modules installed above.
COPY . $WORKDIR

# Set the default CMD to run when starting this image.
#
# You can easily override this when running the image
CMD npm start
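
As the comment above says, the default CMD is easy to override when running the image (once it's built in the next step). For example, to run the app directly with node instead of npm:

# Override the default CMD (npm start) for a single run
$ docker run ration:latest node app.js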

Build the image and tag it with ration:latest

$ docker build . -t ration:latest
Sending build context to Docker daemon  40.04MB
Step 1/8 : FROM node:8.16-jessie-slim
 ---> f62e96235877
Step 2/8 : ARG WORKDIR=/opt/apps/ration/
 ---> Running in 9b2c428970c4
 [...]
Successfully built f3949cea6c5c
Successfully tagged ration:latest

We now have a container image.

Run the container to make sure it starts up correctly

$ docker run ration:latest
> ration@0.0.57 start /opt/apps/ration
> NODE_ENV=production node app.js

debug: Please note: since `sails.config.session.cookie.secure` is set to `true`, the session cookie
debug: will _only_ be sent over TLS connections (i.e. secure https:// requests).
debug: Requests made via http:// will not include a session cookie!
debug:
debug: For more help:
debug:  • https://sailsjs.com/config/session#?the-secure-flag
debug:  • https://sailsjs.com/config/session#?do-i-need-an-ssl-certificate
debug:  • https://sailsjs.com/config/sails-config-http#?properties
debug:  • https://sailsjs.com/support
debug:
debug: Initializing custom hook (`uploads`)
error: A hook (`session`) failed to load!
error: Could not tear down the ORM hook.  Error details: Error: Invalid data store identity. No data store exist with that identity.
    at Object.teardown (/opt/apps/ration/node_modules/sails-mysql/helpers/teardown.js:60:26)
    at wrapper (/opt/apps/ration/node_modules/@sailshq/lodash/lib/index.js:3275:19)
    at Deferred._handleExec (/opt/apps/ration/node_modules/machine/lib/private/help-build-machine.js:1076:19)
    at Deferred.exec (/opt/apps/ration/node_modules/parley/lib/private/Deferred.js:286:10)
    at Deferred.switch (/opt/apps/ration/node_modules/machine/lib/private/help-build-machine.js:1469:16)
    at teardownDatastore (/opt/apps/ration/node_modules/sails-mysql/lib/adapter.js:94:18)
    at /opt/apps/ration/node_modules/async/dist/async.js:3047:20
    at replenish (/opt/apps/ration/node_modules/async/dist/async.js:884:21)
    at /opt/apps/ration/node_modules/async/dist/async.js:888:13
    at eachLimit$1 (/opt/apps/ration/node_modules/async/dist/async.js:3136:26)
    at Object.<anonymous> (/opt/apps/ration/node_modules/async/dist/async.js:920:20)
    at Object.teardown (/opt/apps/ration/node_modules/sails-mysql/lib/adapter.js:89:13)
    at /opt/apps/ration/node_modules/waterline/lib/waterline.js:758:27
    at /opt/apps/ration/node_modules/async/dist/async.js:3047:20
    at eachOfArrayLike (/opt/apps/ration/node_modules/async/dist/async.js:1002:13)
    at eachOf (/opt/apps/ration/node_modules/async/dist/async.js:1052:9)
    at Object.eachLimit (/opt/apps/ration/node_modules/async/dist/async.js:3111:7)
    at Object.teardown (/opt/apps/ration/node_modules/waterline/lib/waterline.js:742:11)
    at Hook.teardown (/opt/apps/ration/node_modules/sails-hook-orm/index.js:246:30)
    at Sails.wrapper (/opt/apps/ration/node_modules/@sailshq/lodash/lib/index.js:3275:19)
    at Object.onceWrapper (events.js:313:30)
    at emitNone (events.js:106:13)
error: Failed to lift app: { Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1191:14)
  errno: 'ECONNREFUSED',
  code: 'ECONNREFUSED',
  syscall: 'connect',
  address: '127.0.0.1',
  port: 6379 }
error: A hook (`orm`) failed to load!

Don't stress too much about the error for now. Our app is trying to start up, which is good!

Next we move on to docker-compose, so we can run Redis for our app as well.

Add Docker Compose to run the app locally

In order to run our service with all the dependant services, we will use Docker Compose.

This will allow us to easily run the app with all the databases and external services the app needs.

It's important to note that managing config values is often better done through environment variables, as there is such a wide variety of environments.

Sails supports environment variables, as mentioned here.
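
The convention: prefix a variable with sails_ and use double underscores for nested keys, so sails_session__host maps to sails.config.session.host. A quick illustration (the values here are just examples):

# Set sails.config.session.host and sails.config.port via the environment
$ sails_session__host=redis sails_port=1337 node app.js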

Create a new file called docker-compose.yml. Here we will specify all the dependencies for our application and our application itself.

version: '3.4'

services:

  # Our app config
  app:
    image: ration:latest
    # Where to find the Dockerfile
    build: .
    # We will override the CMD to run in development so we can have good logging.
    command: node app.js
    environment:
      # Set our environment to development
      - NODE_ENV=development
      # Explicitly set port for app so we always know what it is
      - PORT=1337

      # It seems strange, but you refer to redis by the name of the redis container
      - sails_session__host=redis
      - sails_session__port=6379
      # Use index 0 so we can isolate sockets from sessions
      - sails_session__db=0 
      # locally we serve insecure cookies as we use http://
      - sails_session__cookie__secure=false 

      # Same as above for sockets.
      - sails_sockets__host=redis
      - sails_sockets__port=6379
      # Use index 1 so we can isolate sockets from sessions
      - sails_sockets__db=1

      # Same as above, but for MySQL. You can see the values below in the mysql service.
      - sails_datastores__default__database=sails-app
      - sails_datastores__default__user=sails-user
      - sails_datastores__default__password=sails-password
      - sails_datastores__default__host=mysql
      
    ports:
      # A mapping of ports our running container should expose for us to use outside
      # Our Port : Container Port
      - 1337:1337
    volumes:
      - ./:/opt/apps/ration

  # Add a redis instance to which our app can connect. Quite simple.
  redis:
    image: redis:5.0.5-alpine

  # Add a mysql instance as our primary data store
  mysql:
    image: mysql:5.7.27
    environment:
      # All the values here are from https://hub.docker.com/_/mysql
      # You'll want a nice and secure password here.
      - MYSQL_ROOT_PASSWORD=my-secret-pw

      # This will create a database for our application when mysql starts up.
      - MYSQL_DATABASE=sails-app

      # Create a user specially for our application
      - MYSQL_USER=sails-user

      # Create a password specially for our application
      - MYSQL_PASSWORD=sails-password

    volumes:
      # Here we specify that docker should keep mysql data,
      # so the next time we start docker-compose,
      # our data is intact.
      - mysql-data-volume:/var/lib/mysql

# Here we can configure settings for the default network
networks:
  default:

# Here we can configure settings for the mysql data volume where our data is kept.
volumes:
  mysql-data-volume:

Now that we have defined that, we can run our application with docker-compose.

# Specify -d so it runs in the background
$ docker-compose up -d
[...]
$ docker-compose ps
     Name                   Command               State           Ports
--------------------------------------------------------------------------------
ration_app_1     docker-entrypoint.sh /bin/ ...   Up      0.0.0.0:1337->1337/tcp
ration_mysql_1   docker-entrypoint.sh mysqld      Up      3306/tcp, 33060/tcp
ration_redis_1   docker-entrypoint.sh redis ...   Up      6379/tcp

TIP: If you are new to docker-compose and want to bring the services down, you can use docker-compose down to stop the docker-compose services.

Our app is now up, but still starting up. You can follow the logs of the app container to see when the app is ready.

$ docker-compose logs -f app
Attaching to ration_app_1
app_1    |  info: Initializing hook... (`api/hooks/custom`)
[...]
app_1    | debug: Environment : development
app_1    | debug: Port        : 1337
app_1    | debug: -------------------------------------------------------

Once you see that last line, your app should be up and running. Open localhost:1337 in your browser to confirm that the app actually works. Keep the logs going so you can see requests coming in from your browser.

The home page of the ration application, displaying a big Ration heading with the slogan "Be cool. Share your stuff"
The running app

If you check back on the logs, you will see your browser's requests coming through:

Image showing log of browser request for home page
The logs showing our browser request

We now have a fully running, containerised SailsJS application.

Next we need to push this image to a container registry.

Pushing the app image to a container registry

To simplify this, we will be using a public docker hub repository.

If you are pushing your own app, it would probably be better to create your own private registry.

On docker hub, create a new Repository for your sails app.

Once created, you can use your new repository as the image tag for your app.

In my case it would be chriscmsoft/sailsjs-demo. Your username will differ.

Next, build the image with that tag and push it up to Docker Hub.

$ docker build . -t <replace-with-your-username>/sailsjs-demo
Sending build context to Docker daemon   41.8MB
Step 1/8 : FROM node:8.16-jessie-slim
[...]
Successfully tagged chriscmsoft/sailsjs-demo:latest
$ docker push <replace-with-your-username>/sailsjs-demo
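
If the push is rejected with an authentication error, log in to Docker Hub first:

$ docker login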

Now your image is in docker hub and usable by Kubernetes.

Now on to the fun stuff: how to get this running in Kubernetes.

Getting a Kubernetes Cluster

Now that we have containerised the app, we can deploy it to Kubernetes.

You'll need a Kubernetes cluster to start off with.

There are a multitude of ways to get a Kubernetes cluster set up, but I find the easiest is to use a DigitalOcean managed cluster. They already have all the networking and storage configured, and all you have to do is create the cluster and download your kubeconfig.

You can sign up for Kubernetes using this link
The above is a referral link with $50 free usage :)

You can also spin up clusters using tools like minikube, microk8s, or even using kubeadm to create your own cluster.

Installing kubectl

Check out the up-to-date Kubernetes docs for installing kubectl.
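
Once kubectl is installed, point it at the kubeconfig you downloaded for your cluster and confirm you can reach it. The path below is just an example; use wherever you saved yours.

# Tell kubectl which cluster to talk to (example path)
$ export KUBECONFIG=~/Downloads/my-cluster-kubeconfig.yaml
$ kubectl get nodes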

Running Redis and MySQL in Kubernetes

There are a few options for how to run these, but in this tutorial we will use KubeDB, because it's much simpler to manage than native StatefulSets.

It also manages scaling, backups, etc. for your StatefulSets.

Deploy KubeDB

KubeDB has comprehensive docs, but I have documented the gist of it below.

Installing KubeDB is fairly simple:

$ curl -fsSL https://github.com/kubedb/installer/raw/v0.13.0-rc.0/deploy/kubedb.sh | bash

If you check your cluster now, you should see the KubeDB operator running.

$ kubectl get pods -n kube-system | grep kubedb
kubedb-operator-6f84c58bd6-z8dzv        1/1     Running   0          3m53s

KubeDB is now running correctly.
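
KubeDB works by registering custom resource definitions (CRDs) such as Redis and MySQL with the cluster. If you are curious, you can list the kubedb.com resource definitions once the operator has registered them:

$ kubectl get crd | grep kubedb.com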

Deploy a Redis instance

Create a new directory where you want to keep your Kubernetes configs.

$ mkdir -p deploy/

Next create a file called deploy/redis.yml. Here is where we will define our Redis instance for KubeDB. When we apply this, KubeDB will create the Redis instance for us.

apiVersion: kubedb.com/v1alpha1
kind: Redis
metadata:
  name: redis
spec:
  version: "5.0.3-v1"
  storageType: Ephemeral

Apply the Redis config, and in a few seconds you should see the Redis pod come up:

$ kubectl apply -f deploy/redis.yml
$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
redis-0   1/1     Running   0          3m40s

We now have Redis installed on our cluster.
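
You can verify Redis is responding by running redis-cli inside the pod (assuming the image ships with redis-cli, as the official Redis images do):

$ kubectl exec -it redis-0 -- redis-cli ping
PONG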

Next, on to MySQL.

Deploy a MySQL Instance

In the deploy directory, create a mysql.yml file.

apiVersion: kubedb.com/v1alpha1
kind: MySQL
metadata:
  name: mysql
spec:
  version: "5.7-v2"
  # We want our data to persist
  storageType: Durable
  storage:
    # Notice we are using DigitalOcean Block Storage.
    # This will create a Volume in DigitalOcean where our data will be stored.
    storageClassName: "do-block-storage"
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        # DigitalOcean will create a 1GB Volume for Mysql
        storage: 1Gi
  # This will prevent all our data being lost when we accidentally delete stuff.
  terminationPolicy: DoNotTerminate

Apply the MySQL config, and after a few minutes there should be a MySQL server started.

$ kubectl apply -f deploy/mysql.yml
mysql.kubedb.com/mysql created
$ kubectl get pods 
NAME      READY   STATUS    RESTARTS   AGE
mysql-0   1/1     Running   0          3m25s
redis-0   1/1     Running   0          13m

MySQL is up and running.

There is one thing to notice here. When MySQL spins up, it creates a username and password by itself and stores them in a Kubernetes secret.

When we deploy our app, we will specify that it should use the secret to get the values.
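
If you are curious, you can decode the generated credentials yourself. KubeDB stores them in a secret named mysql-auth, the same one we reference in the deployment later on:

# Secret values are base64-encoded
$ kubectl get secret mysql-auth -o jsonpath='{.data.username}' | base64 --decode
$ kubectl get secret mysql-auth -o jsonpath='{.data.password}' | base64 --decode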

Next we deploy our app.

Side note about Kubernetes Services

Redis and MySQL have each created a Kubernetes Service. These ensure that no matter which node Redis or MySQL end up on, we can reach them simply as redis and mysql, like we did with docker-compose.

So when you look at the services in Kubernetes, you will see there is one for Redis and one for MySQL:

$ kubectl get services
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
mysql        ClusterIP   10.245.142.204   <none>        3306/TCP   17m
mysql-gvr    ClusterIP   None             <none>        3306/TCP   17m
redis        ClusterIP   10.245.123.231   <none>        6379/TCP   27m

Cool, right?
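
If you want to see the in-cluster DNS at work, you can resolve those names from a throwaway pod (a quick check using a busybox image):

$ kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup mysql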

Onto the app.

Deploy the actual App

In the deploy directory, create a deployment.yml file.

This is where we define how Kubernetes should deploy the image we built earlier.

apiVersion: apps/v1
kind: Deployment
metadata:
  # The name of our deployment
  name: ration-app
  # Kubernetes matches things up by labels,
  # so we give it a label to identify this deployment by
  labels:
    app: ration
spec:
  # We want 3 instances of the app running
  replicas: 3
  selector:
    matchLabels:
      app: ration-app
  template:
    metadata:
      labels:
        app: ration-app
    spec:
      containers:
      # Here is our actual definition for deployment
      - name: ration-app
        # Here we add the image we built earlier
        image: chriscmsoft/sailsjs-demo
        # We are still overriding for development,
        # until we have added TLS
        command: ["node", "app.js"]
        ports:
          # The port INSIDE the container
        - containerPort: 1337
          # A name for the port
          name: app

        # Environment variables, the same as for docker-compose.
        # The only real difference is
        # env: key
        # and name:, value: definition.
        # This allows us to use secrets as env variables as well.
        env:
          # We still define a development environment,
          # as it's easier to get started
        - name: NODE_ENV
          value: development
        - name: PORT
          value: "1337"
          # Remember the Kubernetes Service?
        - name: sails_session__host
          value: redis
        - name: sails_session__port
          value: "6379"
        # Use index 0 so we can isolate sockets from sessions
        - name: sails_session__db
          value: "0"
        # locally we serve insecure cookies as we use http://
        - name: sails_session__cookie__secure
          value: "false"

        # Same as above for sockets.
        - name: sails_sockets__host
          value: redis
        - name: sails_sockets__port
          value: "6379"
        # Use index 1 so we can isolate sockets from sessions
        - name: sails_sockets__db
          value: "1"

        # Mysql created by KubeDB
        - name: sails_datastores__default__database
          value: sails-app
        # Here we read our database credentials
        # From a Kubernetes secret
        # This allows us to hide the details from other users.
        # With Kubernetes, we can deny others access from
        # reading the secrets.
        - name: sails_datastores__default__user
          valueFrom:
            secretKeyRef:
              # We read the username from the Kubernetes secret.
              name: mysql-auth
              key: username
        - name: sails_datastores__default__password
          valueFrom:
            secretKeyRef:
              # We read the password from the Kubernetes secret.
              name: mysql-auth
              key: password
        - name: sails_datastores__default__host
          value: mysql

Apply the new deployment, and you should see your containers start popping up in Kubernetes!

$ kubectl apply -f deploy/deployment.yml
deployment.apps/ration-app created
$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
mysql-0                       1/1     Running   0          56m
ration-app-688d88c999-42k2p   1/1     Running   0          2m26s
ration-app-688d88c999-828bw   1/1     Running   0          2m26s
ration-app-688d88c999-gb2p9   1/1     Running   0          2m26s
redis-0                       1/1     Running   0          66m

Your app is officially in Kubernetes.
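
Rather than polling kubectl get pods, you can also let kubectl watch the rollout until all replicas are ready:

$ kubectl rollout status deployment/ration-app
deployment "ration-app" successfully rolled out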

It's celebration time

Check the logs for your deployment

You can check the logs for your Deployment in the same way you did for docker.

Copy the name of one of your pods. In my case, ration-app-688d88c999-42k2p.

Now run kubectl logs <pod-name> and you'll see the logs for your pod in Kubernetes.

$ kubectl logs ration-app-688d88c999-42k2p
[...]
debug: -------------------------------------------------------

debug: :: Wed Sep 11 2019 14:34:15 GMT+0000 (UTC)
debug: Environment : development
debug: Port        : 1337
debug: -------------------------------------------------------

You are officially running in Kubernetes!

You can also check the logs for all of your pods together!

Simply ask for the logs by the label we created earlier:

$ kubectl logs -l app=ration-app
[...]

But how do we reach our deployed app?

Reaching the deployed app

We need to somehow reach into the Kubernetes cluster.

We will first use Kubernetes port-forwarding, then we will apply a Kubernetes Ingress.

Port forwarding basically means that we want Kubernetes to take port 1337 on our app and mount it on port 1338 on our local machine.

The reason for 1338 and not 1337 is that 1337 is already taken by our local docker-compose instance.

We can just use kubectl port-forward to port-forward our app. Choose a pod name again. For me ration-app-688d88c999-42k2p again.

# kubectl port-forward <pod-name> <local-port>:<pod-port>
$ kubectl port-forward ration-app-688d88c999-42k2p 1338:1337
Forwarding from 127.0.0.1:1338 -> 1337
Forwarding from [::1]:1338 -> 1337

We are now port-forwarded. If you go to localhost:1338 in your browser again, you should see your app running in Kubernetes.

Boom

Add a Kubernetes Service to the app

In the same way that MySQL and Redis are exposed via Kubernetes Services and can be reached using redis and mysql, we can do the same for our app so it can be reached by other components, just using ration.

Create a file in the deploy directory called service.yml

apiVersion: v1
kind: Service
metadata:
  # The name we will call our app by
  name: ration
spec:
  selector:
    # Kubernetes Services use labels
    # to identify which pods belong to it.
    # This is why our deployment labels its pods
    # app: ration-app.
    # Now this service can pick up those pods.
    app: ration-app
  ports:
    - protocol: TCP
      # The port we want `ration` to be used with
      port: 80
      # The port inside the pod it should point at
      targetPort: 1337

Apply that, and you will be able to port-forward the service instead, meaning requests can be served by any of the pods.

$ kubectl apply -f deploy/service.yml
service/ration created

# Notice the 80 below. Because our service points 80 at 1337 for us
$ kubectl port-forward service/ration 1338:80
Forwarding from 127.0.0.1:1338 -> 1337
Forwarding from [::1]:1338 -> 1337

If you go to localhost:1338, you'll see the ration app again.

Boom

Deploy an Ingress

We will be deploying an nginx ingress.

In order to deploy this, we need a few mandatory things in our cluster.

Luckily NGINX has been kind enough to provide the YAML files for this already, and all we have to do is apply them:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created

$ kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx
NAMESPACE       NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-79f6884cf6-tb9r5   1/1     Running   0          48s

The ingress controller is now running. Now we need to tell it how to route to our app. Luckily this is fairly simple.

First we create a Load Balancer to distribute traffic across our nodes.

This will also allow us to point a domain name at our load balancer, and everything should work correctly.

Deploy a Load Balancer

Create a file in deploy called load-balancer.yml

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  # Specify that we want a load balancer.
  # This will balance load across our nodes
  # and allow us to point a domain name at our load balancer.
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https

Apply that and you should see a Load Balancer pop up in DigitalOcean

$ kubectl apply -f deploy/load-balancer.yml
service/ingress-nginx created

Define the ingress rules for our app

Now we need to tell the ingress where to route traffic for certain routes.

Create a file in deploy called ingress.yml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: application-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      # The path to trigger our service
      - path: /
        backend:
          # Specify the name for our app service
          serviceName: ration
          # And which port it is using
          servicePort: 80

Apply that, and then test that you can access your app from the internet.

$ kubectl apply -f deploy/ingress.yml
ingress.extensions/application-ingress created

Get the External IP for your load balancer, and open it in your browser. In our case 178.128.139.239

$ kubectl get service -n ingress-nginx
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.245.110.85   178.128.139.239   80:30386/TCP,443:30143/TCP   10m

If we go to 178.128.139.239 in our browser, there's our app.
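
You can also confirm from the terminal (substitute your own load balancer IP):

$ curl -I http://178.128.139.239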

Boom for real this time.
Finally.