Automated Application Packaging And Distribution with OpenShift – Helm Charts and Operators – Part 2/4

May 24, 2021

Part two of the article series. This time about Helm Charts, Operators and various CLI tools to work with container images.

This is part two of my article series around application packaging and distribution. Part one can be found here.

In part one we learned the basics of application development with OpenShift, then extracted the files we need and made them reusable across stages or for redistribution.

This article will help you to understand the basics of Helm Charts and Operators. It also tries to explain when to use what and why.

Introduction and Motivation

As already mentioned in part one, one of the most difficult things in modern development is creating a distribution out of your application. It is not just about zipping all files together and putting them somewhere. We have to take care of several meta artifacts which do not necessarily belong to the developer, but to the owner of the application.

Distribution, by the way, could mean different things.

  • Internal distribution: I have to make my containerized application available to my company's IT and operations departments, even if that "only" means participating in the company's CI/CD chain. This is something I will discuss in part three of this series.
  • External distribution: I want to make my containerized application available to third parties, for example customers.

Both types of distribution are similar in several respects. In fact, before I can make my application available to others, I might have to put it into my own CI/CD chain first. Remember? Kubernetes is there to automate most of these tasks.

This article, however, focuses on external distribution. Part three will discuss Tekton, and part four is all about GitOps and ArgoCD.

Using an external registry with Docker / Podman

As already briefly described, in order to distribute our applications externally, we either need a public image registry like hub.docker.com or quay.io, or a private one that is accessible to your customers. For this article, I am focusing on the public registry quay.io.

You can easily get a free account, which limits you to public repositories (meaning everybody can read your repositories, but only you can write to them).

Working with docker / podman

Once you have created an account, make sure you've installed either Docker or Podman on your local machine (please note that you have to set up a remote Linux system to use Podman on Windows or macOS clients). Then go and check out the demo repository for this article.

In src/main/docker you can find the Dockerfile. Run the following commands:

$> docker login quay.io -u <username> -p <password>
$> mvn clean package -DskipTests
$> docker build -f src/main/docker/Dockerfile.jvm -t quay.io/wpernath/simple-quarkus .

First, you log into quay.io with your account. Then you build the application with Maven. And finally, with docker build ... you create a container image out of the app. Please note that you can basically alias docker with podman, as the arguments are exactly the same.

Setting up Podman on a non-Linux system is a little tricky, as mentioned. You need access to a Linux system (either a VM or a real machine in your work environment), which acts as the execution host for the Podman client. The link above shows you how that works.

As this article is not about Docker, I am skipping the rest here; there are many good articles out there about building images. The most important step now is to push the image to the registry. Otherwise, you can't use it in your OpenShift environment.

$> docker push quay.io/wpernath/simple-quarkus -a

This pushes all (-a) locally stored tags of the image to the repository.

Image 1: Our image on quay.io

And that's it. This workflow is valid for all Docker-compliant registries.

Testing the image

Now that we have our image stored in quay.io, we would like to test whether everything worked out. For this, we are going to use the Kustomize example from part one of this series.

First of all, make sure kustomize_ext/overlays/dev/kustomization.yaml looks like the screenshot:

Image 2: Kustomize.yaml for use in our example
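If you can't read the screenshot: a minimal sketch of such a kustomization.yaml, assuming the base manifests from part one and our new quay.io image (the original image name in the base deployment is just a placeholder here):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namePrefix: dev-
images:
  - name: quarkus-simple            # placeholder: whatever image name the base deployment uses
    newName: quay.io/wpernath/simple-quarkus
    newTag: latest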

And then simply execute the following commands to install our application:

$> oc login -u developer -p developer https://api.crc.testing:6443
$> oc new-project article-test
$> kustomize build kustomize_ext/overlays/dev | oc apply -f -
configmap/dev-app-config created
service/dev-simple created
deployment.apps/dev-simple created
route.route.openshift.io/dev-simple created

The resulting event log should look like Image 3:

Image 3: Event log for dev-simple application

A word on when to use Docker and when to use Podman

For a developer using any OS other than Linux, Podman is a little complicated to use right now. It's not about the CLI tool, which is in most cases identical to the Docker CLI. It's about installation, configuration, and integration into non-Linux operating systems.

This is unfortunate, because Podman is much more lightweight than Docker and does not require root access. So if you have some time, please feel free to use it. If not, continue to use the Docker CLI.

However, if you plan to create Tekton pipelines (see part three of this four-part article series), you should have a look at Podman and Buildah.

And what about Buildah?

According to its official GitHub page, Buildah is more or less the OCI image builder tool that Podman uses internally to – well – build the images. "Buildah's commands replicate all the commands that are found in a Dockerfile. This allows building images with and without a Dockerfile, without requiring any root privileges", states the official documentation.

As Docker still requires a running daemon and root privileges for that daemon, it is no longer the preferred choice inside Kubernetes or OpenShift. This means that if you want to build a container image inside OpenShift (for example via Source-to-Image or a Tekton pipeline), you should use Buildah directly instead.
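Just to give you an idea, a Buildah-based build of our example image could look like this (same Dockerfile and repository as above; the exact flags may vary with your Buildah version):

$> buildah bud -f src/main/docker/Dockerfile.jvm -t quay.io/wpernath/simple-quarkus .
$> buildah push quay.io/wpernath/simple-quarkus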

As everything you can do with Buildah is part of Podman anyway, there is no client available for macOS or Windows. The documentation recommends using Podman instead.

Working with skopeo

Skopeo is a command line tool that helps you work with images and registries without the heavy Docker daemon. It is all about copying images from one location to another. On macOS, you can easily install Skopeo via brew:

$> brew install skopeo

If you want to upload or download a complete image repository (for example, to mirror it to another registry), Skopeo is the right tool.
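As a small sketch, inspecting our example image and mirroring it to another registry (the target registry is just a placeholder) could look like this:

$> skopeo inspect docker://quay.io/wpernath/simple-quarkus:latest
$> skopeo copy docker://quay.io/wpernath/simple-quarkus:latest docker://registry.example.com/wpernath/simple-quarkus:latest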

Application Packaging

Now that we have our image stored in a remotely accessible registry, we can start thinking about how to let a third-party user easily install our application.

There are basically two different formats out there. One is a Helm Chart, the other is a Kubernetes Operator. Let's first have a look at how to create a Helm Chart to install our application on OpenShift.

Creating a basic Helm Chart

First of all, we need to download and install the helm CLI tool. On macOS you can easily do this via

$> brew install helm

Although helm can generate a basic chart structure with everything you need (and even more), I think it's better to start from scratch. For reference, running

$> helm create foo

would create a new chart skeleton called foo.

We first need to have a basic folder structure and some files:

$> mkdir helm
$> mkdir helm/templates
$> touch helm/Chart.yaml
$> touch helm/values.yaml

Done. This is the structure of our first chart. Now we fill some basic metadata into Chart.yaml, as shown in Image 4, and we are done with our first Helm Chart.

Image 4: The structure of the Chart.yaml file
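If you can't read the screenshot: a minimal Chart.yaml for this example could look roughly like this (name and version match the package we build below; description and appVersion are just placeholders):

apiVersion: v2
name: quarkus-simple
description: A simple Quarkus demo application
type: application
version: 0.0.1
appVersion: "1.0.0"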

Of course, right now it does nothing. So we have to fill it with some meat by copying the files from the last chapter into the helm/templates folder:

$> cp kustomize_ext/base/*.yaml helm/templates/

Our helm chart now looks like this:

$> tree helm
helm
├── Chart.yaml
├── templates
│   ├── config-map.yaml
│   ├── deployment.yaml
│   ├── route.yaml
│   └── service.yaml
└── values.yaml

1 directory, 6 files

Packaging and installing our Helm Chart

Now that we have a very simple helm chart structure, we can package it:

$> helm package helm 
Successfully packaged chart and saved it to: quarkus-simple-0.0.1.tgz

And with the following commands, we are able to install it into a newly created OpenShift project called article-helm1:

$> oc new-project article-helm1
$> helm install quarkus-simple quarkus-simple-0.0.1.tgz
NAME: quarkus-simple
LAST DEPLOYED: Sat May  8 10:05:38 2021
NAMESPACE: article-helm1
STATUS: deployed
REVISION: 1
TEST SUITE: None

If you’re now going to the OpenShift Console, you should see this Helm Release:

Image 5: OpenShift Console with our Helm Chart

Getting the same overview from the CLI is easy:

$> helm history quarkus-simple
$> helm list

Now let’s put some more meat into the chart

Another nice feature is the NOTES.txt file in the helm/templates folder. Its content is shown right after the installation of the chart, so you can put your release notes in there. The nice thing is that you're able to use all of the named parameters from the values.yaml file.
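A tiny sketch of such a NOTES.txt (the .Values key is hypothetical and just has to match whatever you define in your values.yaml):

Thank you for installing {{ .Chart.Name }}, version {{ .Chart.Version }}!
Your application uses the image {{ .Values.deployment.image }}.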

Parameters? Yes, of course. Sometimes you need to replace default settings, as we did in the OpenShift Templates and Kustomize chapters in the last part of this series.

Let’s have a closer look into the values.yaml file.

Image 6: The values.yaml file of the chart

We are just defining our variables here. Image 7 shows how you access the parameters in a template file.

Image 7: How to access the variables from values.yaml in a template
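If you can't read the screenshots: the values.yaml of this example could look roughly like this (the exact keys are assumptions):

deployment:
  image: quay.io/wpernath/simple-quarkus:latest
  replicas: 1
  includeHealthChecks: false

And in templates/deployment.yaml you would then reference those values in the relevant places, for example:

spec:
  replicas: {{ .Values.deployment.replicas }}
  template:
    spec:
      containers:
        - name: quarkus-simple
          image: {{ .Values.deployment.image }}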

As Helm uses Go's templating engine, you also have access to functions and flow control. For example, if you only want certain parts of the deployment.yaml file to be rendered, you can do something like:

{{- if .Values.deployment.includeHealthChecks }}
<do something here>
{{- end }}

Debugging Templates

Typically, when you do a helm install, all generated manifests are sent directly to Kubernetes. If you want to debug your templates, a few commands help:

With helm lint <...> you can verify whether your chart follows best practices.

helm install --dry-run --debug just renders your files without sending them to Kubernetes, which is very useful for debugging your charts.
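Applied to our example chart, that could look like this:

$> helm lint helm/
$> helm install quarkus-simple helm/ --dry-run --debug
$> helm template quarkus-simple helm/

helm template renders the chart locally as well, without needing a cluster connection at all.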

Defining a Hook

Imagine you want to install a database with example data as part of your Helm Chart. You need a way to initialize the database. This is where Helm hooks come into play.

Basically, a hook is just another Kubernetes resource (for example a Job or a Pod) which gets executed when a certain event is triggered. An event could be one of:

  • pre-install, pre-upgrade
  • post-install, post-upgrade
  • pre-delete, post-delete
  • pre-rollback, post-rollback
Image 8: post-install and post-upgrade hook

The type of hook is configured via the "helm.sh/hook" annotation. With the "helm.sh/hook-weight" annotation, you can define an order for your hooks: if you need more than one install or upgrade hook, give them different weights to control when they are fired.
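A minimal sketch of such a hook, here a hypothetical Job that imports sample data after install and upgrade:

apiVersion: batch/v1
kind: Job
metadata:
  name: import-sample-data
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "1"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: import
          image: registry.access.redhat.com/ubi8/ubi-minimal
          command: ["/bin/sh", "-c", "echo 'import sample data here'"]

The hook-delete-policy annotation tells Helm to remove the Job again once it has finished successfully.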

Subcharts and CRDs

In Helm, you can define subcharts. Whenever your chart gets installed, all dependent subcharts are installed as well. Just put the required subcharts into the helm/charts folder.

This can be quite handy if your application requires the installation of a database.

Note that all subcharts need to be installable without the main chart, which means that each subchart has its own values.yaml file. But you can override those values from within your main chart's values.yaml.
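For example, assuming a hypothetical subchart named postgresql in helm/charts, you could override its values from the main chart's values.yaml under a key matching the subchart's name:

postgresql:
  database: quarkus
  username: quarkus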

If your chart requires the installation of CRDs, simply put them into the helm/crds folder of your chart. But keep in mind that Helm does NOT take care of removing any CRDs when you uninstall your chart. So installing CRDs with Helm is a one-shot operation.

Summary

Creating a Helm Chart is quite easy and mostly self-explanatory. Think of charts as packages for Kubernetes applications, like RPM or DEB packages on Linux. Once created and hosted in a repository, everyone is able to install, update, and delete your chart on a running Kubernetes installation.

Features like hooks help you do some initialization work after installation. So why would you need yet another package format?

Let’s have a closer look at Operators.

What are Operators? And why, and when, are they useful?

Our simple-quarkus application above is a pretty good example to explain Operators. It is a stateless, web-based application that does not require any special treatment by an administrator. If you use the Helm Chart to install it (or even install it manually via oc apply -f), Kubernetes understands how to manage it out of the box pretty well.

Kubernetes' control loop mechanism knows the desired state of the application (based on the various YAML files) and constantly compares it with the current state. If there are any differences (for example, if a Pod has just died or there is a new version of the image), Kubernetes takes care of restarting the Pod or re-instantiating the whole application with the new desired state.

That’s pretty easy.

But what happens if this application requires the use of a database? What happens if we have to use some complex integrations into other (non-Kubernetes-native) applications? What about backup and restore of the stateful data? Or the simplest case: what happens if you need a clustered database? This typically requires the special know-how of an administrator.

A Kubernetes Operator is exactly that: a package which not only contains everything necessary to deploy our application, but also the complete know-how of an administrator to maintain the stateful parts of our application.

Of course, this makes an Operator way more complex than a Helm Chart, as all the logic needs to be implemented first. There are officially three ways to implement an Operator:

  1. Create from Ansible
  2. Create from Helm Chart
  3. Develop everything in Go

Unofficially (i.e. currently unsupported), you can also implement the Operator logic in any programming language, for example in Java via a Quarkus extension.

An Operator creates, watches and maintains so-called CRDs (Custom Resource Definitions). This means it basically provides new API resources to the cluster (just like Route or BuildConfig, etc.). Whenever someone creates a new resource based on that CRD via oc apply, the Operator knows what to do. All the logic behind that mechanism is handled by the Operator, which means it makes extensive use of the Kubernetes API (just think about the work necessary to set up a clustered database, or to back up and restore the persistent volume of a database, etc.).

If you need full control over everything, you have to create the Operator with Go or (unofficially) Java, Python, etc.

Otherwise you can make use of the Ansible-based or the Helm-based approach. The Operator SDK and the base images of each take care of the Kubernetes API calls, so you don't have to learn Go in order to build your first Operator.

Creating an Operator

To create an Operator, you need to install the Operator SDK. On macOS, you can simply execute brew install operator-sdk.

Generating the project structure

We now want to create a first Operator based on the Helm Chart we created in the last chapter.

$> mkdir operator-new
$> cd operator-new
$> operator-sdk init --plugins=helm --helm-chart=../helm --domain wanja.org --group charts --kind QuarkusSimple --project-name simple-quarkus-operator
Writing kustomize manifests for you to edit...
Creating the API:
$ operator-sdk create api --group charts --kind QuarkusSimple --helm-chart ../helm
Writing kustomize manifests for you to edit...
Created helm-charts/quarkus-simple
Generating RBAC rules

This has initialized the operator project based on the chart found in the ../helm folder. We now also have a QuarkusSimple CRD, which you can find in config/crd/bases. Image 9 shows the complete project structure generated by the call.

Image 9: Directory structure after calling operator-sdk

The watches.yaml file is used by the Helm-based operator logic to watch for changes on the API. So whenever you create a new resource based on the CRD, the underlying logic knows what to do.
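For our example, the generated watches.yaml should look roughly like this (group, version and kind derive from the operator-sdk call above; the exact content may differ slightly between SDK versions):

- group: charts.wanja.org
  version: v1alpha1
  kind: QuarkusSimple
  chart: helm-charts/quarkus-simple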

Now have a look at the Makefile. There are three parameters you should change (see the sketch below):

  • VERSION: Whenever you change something in the project (and have running instances of the Operator somewhere), increase this number.
  • IMAGE_TAG_BASE: This is the base name of the images produced by the Makefile. Change it to something like quay.io/wpernath/simple-quarkus-operator
  • IMG: This is the name of the image containing our Helm-based operator. Change it to something like $(IMAGE_TAG_BASE):$(VERSION)
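After those changes, the relevant lines in the Makefile could look roughly like this (the defaults generated by the SDK will differ):

VERSION ?= 0.0.1
IMAGE_TAG_BASE ?= quay.io/wpernath/simple-quarkus-operator
IMG ?= $(IMAGE_TAG_BASE):$(VERSION)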

Building the docker image

Now let’s build and push the docker image of our operator:

$> make docker-build 
docker build -t quay.io/wpernath/simple-quarkus-operator:0.0.1 .
[+] Building 7.4s (9/9) FINISHED
.... 
 => [2/4] COPY watches.yaml /opt/helm/watches.yaml                                                                                                                                                                 0.1s
 => [3/4] COPY helm-charts  /opt/helm/helm-charts                                                                                                                                                                  0.0s
 => [4/4] WORKDIR /opt/helm                                                                                                                                                                                        0.0s
 => exporting to image                                                                                                                                                                                             0.0s
 => => exporting layers                                                                                                                                                                                            0.0s
 => => writing image sha256:413e7e6855c7bf011cd919155c0b45f9789b8dc20b61b36c0ba41546c665ac35                                                                                                                       0.0s
 => => naming to quay.io/wpernath/simple-quarkus-operator:0.0.1


$> make docker-push
docker push quay.io/wpernath/simple-quarkus-operator:0.0.1
The push refers to repository [quay.io/wpernath/simple-quarkus-operator]
5f70bf18a086: Pushed
30c3f5bd9e85: Pushed
7788149ff570: Pushed
9e73f5c89672: Mounted from operator-framework/helm-operator
2b30851aefac: Mounted from operator-framework/helm-operator
f4f40754d476: Mounted from operator-framework/helm-operator
144a43b910e8: Mounted from wpernath/simple-quarkus
4a2bc86056a8: Mounted from wpernath/simple-quarkus
0.0.1: digest: sha256:33b56009dadf09e2114d24cbe5484aa3d5a14868f5b4f153e09a105d12875ec8 size: 1984

We now have a new image in our repository on quay.io. This image contains the logic to manage the Helm Chart, and it exposes the CRD and the new Kubernetes API.

Building the operator bundle image

In order to release your operator, you have to create an operator-bundle image. This bundle contains metadata and manifests for the Operator Lifecycle Manager (OLM), which takes care of every operator deployed on Kubernetes. To create the bundle, do the following:

Image 10: Building the bundle
$> make bundle 

You have to answer a few questions for the bundle generator; have a look at Image 10 for the output. This call generates all the necessary files and should be repeated every time you change the VERSION field in the Makefile.

$> make bundle-build bundle-push

The bundle image gets built and pushed to quay.io.

And that’s it for now.

Testing your Operator

There are now three different ways of testing the operator. Just have a look at the official SDK tutorial.

The easiest way to test your operator is to call

$> make deploy
cd config/manager && /usr/local/bin/kustomize edit set image controller=quay.io/wpernath/simple-quarkus-operator:0.0.4
/usr/local/bin/kustomize build config/default | kubectl apply -f -
namespace/simple-quarkus-operator-system created
customresourcedefinition.apiextensions.k8s.io/quarkussimples.charts.wanja.org created
serviceaccount/simple-quarkus-operator-controller-manager created
role.rbac.authorization.k8s.io/simple-quarkus-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/simple-quarkus-operator-manager-role created
clusterrole.rbac.authorization.k8s.io/simple-quarkus-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/simple-quarkus-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/simple-quarkus-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/simple-quarkus-operator-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/simple-quarkus-operator-proxy-rolebinding created
configmap/simple-quarkus-operator-manager-config created
service/simple-quarkus-operator-controller-manager-metrics-service created
deployment.apps/simple-quarkus-operator-controller-manager created

This creates a <project-name>-system namespace and installs all the necessary resources into Kubernetes.

You’re now able to create an instance of the CRD by executing:

$> oc project simple-quarkus-operator-system
$> oc apply -f config/samples/charts_v1alpha1_quarkussimple.yaml
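The generated sample is basically a custom resource whose spec carries the values of our chart. A minimal sketch could look like this (the spec keys depend on your values.yaml):

apiVersion: charts.wanja.org/v1alpha1
kind: QuarkusSimple
metadata:
  name: quarkussimple-sample
spec:
  deployment:
    replicas: 1

As soon as this resource is created, the operator renders and applies the embedded Helm Chart with these values.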

To delete everything, first delete all instances of the CRD you've created, then undeploy the operator:

$> oc delete quarkussimple.charts.wanja.org/quarkussimple-sample
$> make undeploy

If you want to see what it looks like when the operator is installed via the Operator Lifecycle Manager, simply call

$> operator-sdk run bundle quay.io/wpernath/simple-quarkus-operator-bundle:v0.0.1

After this, you'll be able to see, watch and manage your operator in the OpenShift UI.

Image 11: The installed Operator in OpenShift UI

You can even create instances of the API from the UI.

Image 12: Installed Operators

If you want to get rid of this operator, just run the following:

$> operator-sdk cleanup simple-quarkus-operator --delete-all

cleanup simply needs the project name, which you can find in the PROJECT file.

Summary of Operators

As you have seen, creating an Operator just as a replacement for a Helm Chart does not really make sense, as Operators are much more complex to create and maintain. And what we've done so far is just the tip of the iceberg; we haven't even really touched the Kubernetes API.

However, as soon as you need more control over how your application is created and maintained, you should think about building an Operator. Fortunately, the Operator SDK and its documentation help you with the first steps.

Summary

In this article, I have described how to work with container images and the various command line tools (Skopeo, Podman, Buildah, etc.). You have also seen how to create a Helm Chart and a Kubernetes Operator, and you should now be able to decide when to use which.

The next parts of this article series will cover Tekton pipelines and GitOps with ArgoCD.

Thank you for reading. I am always happy to receive comments and feedback.
