13. Kustomize

Kustomize is a tool to manage YAML configurations for Kubernetes objects in a declarative and reusable manner. In this lab, we will use Kustomize to deploy the same app for two different environments.

Installation

Kustomize can be used in two different ways:

  • As a standalone kustomize binary, available from its GitHub releases page (see the quick check below)
  • With the parameter --kustomize or -k in certain oc subcommands such as apply or create
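
If you use the standalone binary, a quick way to check that it is installed correctly is to print its version (the exact output depends on the installed release):

kustomize version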

Usage

The main purpose of Kustomize is to build configurations from a predefined file structure (introduced further below):

kustomize build <dir>

The same can be achieved with oc:

oc kustomize <dir>

The next step is to apply this configuration to the OpenShift cluster:

kustomize build <dir> | oc apply -f -

Or in one oc command with the parameter -k instead of -f:

oc apply -k <dir>
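
If you want to preview what would change on the cluster before applying anything, you can combine the commands above with a diff or a server-side dry run. This is a sketch and assumes a reasonably recent oc client:

oc kustomize <dir> | oc diff -f -
oc apply -k <dir> --dry-run=server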

Task 13.1: Prepare a Kustomize config

We are going to deploy a simple application:

  • The Deployment starts an application based on nginx
  • A Service exposes the Deployment
  • The application will be deployed for two different example environments, staging and production

Kustomize allows Kubernetes configurations to inherit from each other. We are going to use this to create a base configuration and then override it for the different environments. Note that Kustomize does not use templating; instead, it applies smart patch and extension mechanisms to plain YAML manifests to keep things as simple as possible.

Get the example config

Find the needed resource files inside the folder content/en/docs/kustomize/kustomize of the techlab GitHub repository. Clone the repository or download its contents as a ZIP archive.

Change to the folder content/en/docs/kustomize/kustomize to execute the kustomize commands.
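
A minimal sketch of the clone variant, assuming you replace <repository-url> with the actual URL of the techlab repository:

git clone <repository-url> techlab
cd techlab/content/en/docs/kustomize/kustomize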

File structure

The structure of a Kustomize configuration typically looks like this:

.
├── base
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── overlays
    ├── production
    │   ├── deployment-patch.yaml
    │   ├── kustomization.yaml
    │   └── service-patch.yaml
    └── staging
        ├── deployment-patch.yaml
        ├── kustomization.yaml
        └── service-patch.yaml

Base

Let’s first have a look at the base directory, which contains the base configuration. There’s a deployment.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kustomize-app
spec:
  selector:
    matchLabels:
      app: kustomize-app
  template:
    metadata:
      labels:
        app: kustomize-app
    spec:
      containers:
        - name: kustomize-app
          image: quay.io/acend/example-web-go
          env:
            - name: APPLICATION_NAME
              value: app-base
          command:
            - sh
            - -c
            - |-
              set -e
              /bin/echo "My name is $APPLICATION_NAME"
              /usr/local/bin/go              
          ports:
            - name: http
              containerPort: 80
              protocol: TCP

There’s also a Service for our Deployment in the corresponding base/service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: kustomize-app
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: kustomize-app

And there’s an additional base/kustomization.yaml which is used to configure Kustomize:

resources:
  - service.yaml
  - deployment.yaml

It references the previous manifests service.yaml and deployment.yaml and makes them part of our base configuration.
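
To see what this base configuration renders to on its own, you can build just the base directory; the output is simply the Service and the Deployment from above, combined into one YAML stream:

oc kustomize base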

Overlays

Now let’s have a look at the other directory which is called overlays. It contains two subdirectories staging and production which both contain a kustomization.yaml with almost the same content.

overlays/staging/kustomization.yaml:

nameSuffix: -staging
bases:
  - ../../base
patchesStrategicMerge:
  - deployment-patch.yaml
  - service-patch.yaml

overlays/production/kustomization.yaml:

nameSuffix: -production
bases:
  - ../../base
patchesStrategicMerge:
  - deployment-patch.yaml
  - service-patch.yaml

Only the first key nameSuffix differs.

In both cases, the kustomization.yaml references our base configuration. However, the two directories contain two different deployment-patch.yaml files which patch the deployment.yaml from our base configuration.

overlays/staging/deployment-patch.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kustomize-app
spec:
  selector:
    matchLabels:
      app: kustomize-app-staging
  template:
    metadata:
      labels:
        app: kustomize-app-staging
    spec:
      containers:
        - name: kustomize-app
          env:
            - name: APPLICATION_NAME
              value: kustomize-app-staging

overlays/production/deployment-patch.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kustomize-app
spec:
  selector:
    matchLabels:
      app: kustomize-app-production
  template:
    metadata:
      labels:
        app: kustomize-app-production
    spec:
      containers:
        - name: kustomize-app
          env:
            - name: APPLICATION_NAME
              value: kustomize-app-production

The main difference between the two patches is the value of the environment variable APPLICATION_NAME. The app label also differs because we are going to deploy both Deployments into the same Namespace, so their label selectors must not overlap.

The same applies to our Service: it also comes in two variants so that each one’s selector matches the Pods of the corresponding Deployment in the same Namespace.

overlays/staging/service-patch.yaml:

apiVersion: v1
kind: Service
metadata:
  name: kustomize-app
spec:
  selector:
    app: kustomize-app-staging

overlays/production/service-patch.yaml:

apiVersion: v1
kind: Service
metadata:
  name: kustomize-app
spec:
  selector:
    app: kustomize-app-production

Prepare the files as described above in a local directory of your choice.
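
Before deploying, you can render both overlays locally and check that the name suffix and the patched values show up as expected:

oc kustomize overlays/staging
oc kustomize overlays/production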

Task 13.2: Deploy with Kustomize

We are now ready to deploy both apps for the two different environments. For simplicity, we will use the same Namespace.

oc apply -k overlays/staging --namespace <namespace>
service/kustomize-app-staging created
deployment.apps/kustomize-app-staging created
oc apply -k overlays/production --namespace <namespace>
service/kustomize-app-production created
deployment.apps/kustomize-app-production created
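
You can list the resulting objects side by side; their names reflect the nameSuffix of each overlay:

oc get deployments,services --namespace <namespace>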

As you can see, we now have two Deployments and two Services. Both are built from the same base configuration, but each adds its own environment-specific settings on top.

Let’s verify this. Our app writes a corresponding log entry that we can use for analysis:

oc get pods --namespace <namespace>
NAME                                       READY   STATUS    RESTARTS   AGE
kustomize-app-production-74c7bdb7d-8cccd   1/1     Running   0          2m1s
kustomize-app-staging-7967885d5b-qp6l8     1/1     Running   0          5m33s
oc logs kustomize-app-staging-7967885d5b-qp6l8
My name is kustomize-app-staging
oc logs kustomize-app-production-74c7bdb7d-8cccd
My name is kustomize-app-production
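
Once you are done with the lab, you can remove the created objects again using the same overlay directories:

oc delete -k overlays/staging --namespace <namespace>
oc delete -k overlays/production --namespace <namespace>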

Further information

Kustomize has more features than the few we covered here. Please refer to the official documentation for more information.