Tweaking the Drupal Helm Chart for our needs

This page is part of the Deploying Drupal to Kubernetes series.

Let’s leave aside our cert manager for the moment and get back to Drupal. Earlier, when we installed Drupal using helm upgrade --install my-first-vanilla-drupal bitnami/drupal, it felt a bit like magic (if it worked for you!). But how does it all work?

A Kubernetes application is a series of resources working together, and each resource is defined by a YAML file. Helm takes this a step further by packaging all of those YAML files together, and by generating them from default or custom configuration values.
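
To make this concrete, a Helm chart is, at bottom, just a directory of files. A generic sketch (not the exact contents of the Drupal chart) looks something like this:

mychart/
  Chart.yaml     # the chart's name, version and dependencies
  values.yaml    # default configuration values
  templates/     # YAML templates rendered using those values
    deployment.yaml
    service.yaml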

An example will make this clearer: recall that when we first installed my-first-vanilla-drupal, it came with a handy LoadBalancer. Handy, but not exactly what we need (if you were following along, again, we will eschew the load-balancer-per-application approach in favor of a single reverse proxy which acts as a traffic cop).
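
For context, here is a minimal sketch of the kind of Ingress resource our traffic cop consumes in order to route outside traffic to the Drupal service; the hostname is hypothetical:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-first-vanilla-drupal
spec:
  rules:
  - host: drupal.example.com # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-first-vanilla-drupal # the Drupal service, internal to the cluster
            port:
              number: 80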

Let’s start by taking a look at all the Kubernetes YAML files generated by Helm:

helm template bitnami/drupal

helm template does not actually perform any action against your cluster; it only prints, to your screen, the YAML files used to configure Kubernetes. (These are the files you would need to maintain yourself if you were not availing yourself of the power of Helm.) Among all the other information, we will find:

spec:
  type: LoadBalancer

and two instances of:

resources:
  requests:
    storage: "8Gi"
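
Rather than scanning hundreds of lines of output by eye, you can pipe the template output through grep to locate these spots, for example:

helm template bitnami/drupal | grep -B 3 'type: LoadBalancer'
helm template bitnami/drupal | grep -B 2 'storage:'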

If we were managing all these YAML files directly, we would just modify them. But, because we like to keep things complicated, we’ll need to understand how these files are generated and modify, instead, the Helm configuration used to generate them. For that, we need to look at the Helm chart itself, which is on GitHub. Look, specifically, at the service template, and notice that the YAML files are built from a template plus configuration variables, for example:

spec:
  type: {{ .Values.service.type }}
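
The {{ .Values.service.type }} placeholder is filled in from the chart’s values.yaml, where the default looks roughly like this (paraphrased excerpt):

service:
  type: LoadBalancer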

This tells us that changing the service.type from LoadBalancer to ClusterIP will cause a corresponding change in the generated YAML file. So, if you run:

helm template bitnami/drupal --set service.type=ClusterIP

You’ll get the desired value for the service type, ClusterIP. This means we will not create a load balancer to expose our Drupal instance to the outside world; instead, the service will only be reachable from within the cluster, and the traffic cop we created earlier will be responsible for exposing it externally:

...
spec:
  type: ClusterIP
...

Which leaves the issue of the 8Gi volumes. If we want 1Gi rather than 8Gi, we can type:

helm template bitnami/drupal \
  --set service.type=ClusterIP \
  --set mariadb.primary.persistence.size=1Gi \
  --set persistence.size=1Gi

(The bitnami/mariadb chart is a dependency of bitnami/drupal, and to override values in a subchart, we need to prefix the value (primary.persistence.size) with the subchart name (mariadb).)
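
As an aside, once the --set flags start piling up, the same overrides can live in a YAML file which you pass to Helm with -f; the file name here is arbitrary. Note how the mariadb overrides are nested under the subchart’s name:

# overrides.yaml
service:
  type: ClusterIP
persistence:
  size: 1Gi
mariadb:
  primary:
    persistence:
      size: 1Gi

helm template bitnami/drupal -f overrides.yaml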

Now that our template is working correctly, we can actually create a release based on it:

helm upgrade --install my-first-vanilla-drupal bitnami/drupal \
  --set service.type=ClusterIP \
  --set mariadb.primary.persistence.size=1Gi \
  --set persistence.size=1Gi
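
To double-check which overrides are attached to the release, you can ask Helm directly; the output should look something like this:

helm get values my-first-vanilla-drupal
# USER-SUPPLIED VALUES:
# mariadb:
#   primary:
#     persistence:
#       size: 1Gi
# persistence:
#   size: 1Gi
# service:
#   type: ClusterIP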

Let’s now make sure we have a ClusterIP and no LoadBalancer (we won’t need one):

kubectl get services
# NAME                     TYPE      CLUSTER-IP    EXTERNAL-IP
# my-first-vanilla-drupal  ClusterIP 10.245.213.92 <none>
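
Because the service no longer has an external IP, a quick way to confirm Drupal still responds is to port-forward to it (the local port, 8080 here, is arbitrary):

kubectl port-forward svc/my-first-vanilla-drupal 8080:80
# then browse to http://localhost:8080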

And our volumes should now request 1Gi of storage, not 8Gi. (If your earlier install already created 8Gi volumes, note that Kubernetes cannot shrink an existing volume; you may need to delete the release and its persistent volume claims and reinstall to see the smaller size.)

kubectl get pvc
# ... CAPACITY
#     1Gi
#     1Gi

How’s that for saving money!

