Java to Pod

June 5, 2023

From Java code in your repo to a running Pod on Kubernetes. This article explains all the steps needed, including basic shortcuts.

So there’s your Java code on the one side and your Kubernetes (K8s) cluster on the other.

There are quite a few approaches to get your code running in a pod on K8s. Let’s have a look at the general mechanics needed and then explore ways that’ll make life easier.

You can code along or check the examples on a GitHub repo created just for this article. For each approach, a separate project is available there.

As this article is quite comprehensive, feel free to skip ahead in case you’re looking for something specific.

General Approach

To get Java code running in a pod, basically these four steps are mandatory:

  1. Create a Java Artifact
    We need to create one or more executable artifacts. We’ll proceed with the most common case, an uber-jar, and discuss other cases (Java/Jakarta EE deployables, native executables) later.
  2. Create a Container Image with the Artifact
    Next, the artifact needs to be placed within a container image (we focus on OCI-compatible ones). It also needs to be started when the container starts, so we need a Java runtime placed into the image as well.
  3. Make the Image Available to K8s
    The created image needs to be available to the targeted K8s cluster. That implies availability through a container image registry, be it local or in the internet’s wilderness.
  4. Use the Image in a Pod
    Finally we need a K8s pod running the image.
Java to Pod: Some steps ahead

Ready to go? Let’s get started!


I assume you have access to the following command-line tools and technology:

  • docker and/or podman
  • kubectl and/or oc
  • helm (optional)
  • JDK 17
  • mvn (optional, you can follow along substituting the mvn examples with ./mvnw)
  • pack
  • A Kubernetes or an OpenShift cluster (local or in the wild)
  • Access to some sort of container image registry (Docker/podman local, public registries, private registries)

I. Create (Source Code for) the Java Artifact

We start with generating a simple Java application based on Quarkus. Just to feel the developer joy it provides πŸ˜€

So we run from the command line…

mvn io.quarkus:quarkus-maven-plugin:3.0.0.Final:create \
    -DprojectGroupId=org.opensourcerers \
    -DprojectArtifactId=java2pod \
    -DclassName="org.opensourcerers.Java2PodResource"

…and get a folder with the generated project structure.

Don’t worry – this article is not about the code at all!

Project Code Available At

To make things more interesting for later, I’ve changed the generated resource class from

package org.opensourcerers;

import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/hello")
public class Java2PodResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "Hello from RESTEasy Reactive";
    }
}

to

package org.opensourcerers;

import org.eclipse.microprofile.config.inject.ConfigProperty;

import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/hello")
public class Java2PodResource {

    @ConfigProperty(name = "environment.id", defaultValue = "local")
    String environmentId;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String getEnvironmentId() {
        return "Your environment ID is: " + environmentId;
    }
}

Project Code Available At

Nothing spectacular. By the way, we don’t need to change the dependencies, as Quarkus comes with an integrated MicroProfile implementation called SmallRye.
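Conceptually, @ConfigProperty with a defaultValue behaves like shell parameter expansion with a fallback – use the environment value if set, otherwise the default. A minimal sketch (assuming the property maps to an ENVIRONMENT_ID variable, as we’ll use later):

```shell
# Use ENVIRONMENT_ID if set, otherwise fall back to "local" -
# the same semantics as @ConfigProperty(defaultValue = "local").
environment_id="${ENVIRONMENT_ID:-local}"
echo "Your environment ID is: ${environment_id}"
```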

Let’s build our artifact:

mvn clean package -DskipTests # -U -e

You could use ./mvnw instead of mvn, but I’ve learned not everybody is happy with that. I therefore use the shorter command, but encourage you to try ./mvnw in case you encounter problems (all examples were tested with mvn 3.8.6, though).

Uncomment the latter arguments if you need, for whatever reason, to re-download Maven artifacts.
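If you want a script that works either way, here is a tiny, hypothetical sketch for picking between the project’s Maven wrapper and a globally installed Maven:

```shell
# Prefer the project's Maven wrapper when present and executable,
# otherwise fall back to a globally installed mvn.
if [ -x ./mvnw ]; then
  mvn_cmd="./mvnw"
else
  mvn_cmd="mvn"
fi
echo "Using: ${mvn_cmd}"
```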

You should find a self-contained executable jar at /02-java2pod-extended/target/.

Let’s just try whether it works locally:

java -jar target/java2pod-1.0.0-SNAPSHOT-runner.jar

We should see an output like this:

And we should be able to access the REST service, either via curl (curl -w '\n' <service URL>) or in the browser.

We’ll just ignore the accompanying UI for now.

Quit the application by entering ctrl+c.

To prove that the external configuration is working, we could optionally try this out:

export ENVIRONMENT_ID=dummy
java -jar target/java2pod-1.0.0-SNAPSHOT-runner.jar

The output should now have changed from “Your environment ID is: local” to “Your environment ID is: dummy”. Then, exit the application again by entering ctrl+c.
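This works because MicroProfile Config also looks up environment variables, mapping property names by replacing non-alphanumeric characters with underscores and uppercasing the result. A rough shell sketch of that rule, assuming the property is named environment.id (the real lookup tries several candidate names):

```shell
# Derive the environment-variable form of a config property name:
# replace each non-alphanumeric character with '_', then uppercase.
prop="environment.id"
env_name=$(printf '%s' "$prop" | tr -c '[:alnum:]' '_' | tr '[:lower:]' '[:upper:]')
echo "$env_name"   # ENVIRONMENT_ID
```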

Mission Nr. 1 accomplished!

II. Create a Container Image with the Artifact

Now the fun part begins. We actually have several options and we will go through all of them.

The Hard Way: Plain (Docker|Container)file

Project Code Available At

As a developer, you shouldn’t do this frequently as it’s time consuming and keeps you away from coding. But it greatly helps you understand what other tools are fiddling around with.

Container Image Quickstart (Optional)

A Containerfile (or Dockerfile) is a recipe for a container runtime on how to build a container image.

The general structure of a (Container|Docker)file is like this

FROM registry.access.redhat.com/ubi9/ubi-minimal
CMD ["echo", "such a simple thing"]

Take a guess what this could mean! In case that’s too much for the moment, here’s a short explanation:

FROM references a so-called base image. Here, I reference one of Red Hat’s Universal Base Images (UBI), which can be used freely and leads to enhanced security – good for production!

CMD runs this command when the image gets executed as a container.

I hope you understand the general principle and structure:

# Comment
FROM <base image>
CMD ["<executable>", "<param>"]

To create and then run this image, you need to put it into a file named Dockerfile (picked up by convention when it’s in the current directory) and run

docker build . -t super-simple

The -t flag applies a tag to the image to make it easier to find later.

Or, alternatively you could run

podman build . -t super-simple

if you prefer using podman, a daemon-less docker alternative. There’s a nice desktop implementation for it, Podman Desktop, that even helps you deal with pods on your local machine (!).

The docker command above will lead to an output like:

$>03-minimal-dockerfile>docker build . -t super-simple
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM
 ---> 853b8b14ac8a
Step 2/2 : CMD ["echo", "such a simple thing"]
 ---> Running in 6820a95d0679
Removing intermediate container 6820a95d0679
 ---> 268abc6693a7
Successfully built 268abc6693a7
Successfully tagged super-simple:latest

We can’t go into details here, but you should understand that this image is created from layers, which we could inspect, and that it can be found for further use via (docker|podman) images – either using the created tag super-simple or the short container ID, in this example 268abc6693a7 (it will differ on your machine).
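The short container ID is simply a 12-character prefix of the full sha256 ID. A small illustration with hypothetical content (not a real image config):

```shell
# An image ID is a sha256 digest; tools like docker display
# its first 12 hex characters as the "short ID".
full_id=$(printf 'hypothetical image config' | sha256sum | cut -d ' ' -f 1)
short_id=$(printf '%s' "$full_id" | cut -c 1-12)
echo "full:  ${full_id}"
echo "short: ${short_id}"
```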

To run the image, we just run

docker run super-simple

and should see an output like this:

$>03-minimal-dockerfile>docker run super-simple       
such a simple thing

Congrats on having built a container image nearly from the ground up. But we’re not done. This wasn’t hard, was it?

The Java Container Image

Project Code Available At

As simple as (Container|Docker)files seem, we need to consider that our uber-jar needs a Java runtime, otherwise it couldn’t be executed.

Luckily, we’re covered by Java-specific UBI images that come prepared with a Java runtime (OpenJDK).

A very simple approach thus is a Dockerfile like this:

FROM registry.access.redhat.com/ubi8/openjdk-17-runtime:latest
COPY target/*-runner.jar /deployments/quarkus-run.jar
EXPOSE 8080
USER 185
ENV JAVA_OPTS=" -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
ENV JAVA_APP_JAR="/deployments/quarkus-run.jar"

Some notes here:


FROM references the base image. Be aware that Java-specific UBI images can vary: there are builder images that contain everything needed to build your application image from source, and pure runtime images like the one used in this example.

EXPOSE opens the port to our service. To enable debugging in the container, also add port 5005.

USER makes sure that we run the process under a dedicated UID.

ENV defines environment variables. JAVA_APP_JAR tells this specific image “type” where to find the jar.

We first build the image (with the Dockerfile in the same directory):

docker build . -t java2pod-extended

And then run it:

docker run -p 8080:8080/tcp java2pod-extended

So we should see output like this…

…and be able to interact with the example service as before (e.g. in the web browser or another terminal with curl). Stop the container with ctrl+c.

To run the container in the background, we use -d as parameter:

docker run -d -p 8080:8080/tcp java2pod-extended

And stop it by first searching for the container via docker ps, copying the container ID from the output (actually the first few characters are sufficient) and then executing docker stop <container id>.

Hint: make sure to check containers are running with docker ps. docker ps -a will show you the locally available containers even if not running.

There’s definitely more to this approach, but I hope you got a basic impression.

The Easy Way: Use Jib!

Obviously the Dockerfile approach is not the most convenient. Useful for basic understanding, but definitely not what most developers continuously want to deal with.

Fortunately, there’s an approach that’s much more intuitive. Meet JIB.

Jib (Java Image Builder?) is a project started by Google. It’s quite comprehensive and actually does much more than just build the image – it can also push the image to a container registry, supports Maven and Gradle, comes with its own CLI, etc. The base process gets drastically reduced.


We’ll only look at the Maven plugin side, but the approach for Gradle is fully comparable.

Bare Jib

Project Code Available At

First, we need to add the Jib Maven plugin to our pom.xml and adjust the configuration, in this case for Quarkus, which needs a Jib extension to build the image:

     <!-- more plugins -->
     <plugin>
         <groupId>com.google.cloud.tools</groupId>
         <artifactId>jib-maven-plugin</artifactId>
         <version>3.3.1</version><!-- adjust to the current version -->
         <dependencies>
             <!-- special case for Quarkus to suppress warnings -->
             <dependency>
                 <groupId>com.google.cloud.tools</groupId>
                 <artifactId>jib-quarkus-extension-maven</artifactId>
                 <version>0.1.1</version>
             </dependency>
         </dependencies>
         <configuration>
             <pluginExtensions>
                 <pluginExtension>
                     <implementation>com.google.cloud.tools.jib.maven.extension.quarkus.JibQuarkusExtensionMavenPluginExtension</implementation>
                 </pluginExtension>
             </pluginExtensions>
         </configuration>
     </plugin>
     <!-- more plugins -->

The easiest way then – which will not work at the moment for our example – is to run Jib with Maven as follows:

# doesn't work here (yet)!
mvn compile jib:build

This is because Jib wants to push the created image directly to a registry, and we haven’t set that up so far. Actually, we’re just at step 2, building the image, right? The command that does the trick is:

# run 
# mvn clean quarkus:build
# before that!
mvn compile jib:dockerBuild

This tells Jib to only run a local image build (with a running Docker daemon in the background). If you prefer Podman like me (daemon-less, rootless, handy, and fully open source), you need to tweak the command even further:

 mvn compile jib:dockerBuild -Djib.dockerClient.executable=$(which podman)

We might have a look at what has been produced with (docker|podman) images. If you wonder why your image appears to be more than 50 years old: Jib sets the image creation time to the Unix epoch by default to make builds reproducible.

We should then be able to run our image with:

docker run -p8080:8080/tcp java2pod-extended-jib-base

(in this case with docker).

Jib the Quarkus Way

Project Code Available At

The above example (05.1) might give you the impression that setup and configuration are a bit clumsy.

In fact, Quarkus has built-in Jib support, whereas for Spring Boot it’s Jib that has the built-in support. Let’s start with Quarkus. All you need to do is add this dependency to pom.xml:

    <!-- (...) -->
    <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-container-image-jib</artifactId>
    </dependency>
    <!-- (...) -->

With the added dependency, we run:

mvn install -Dquarkus.container-image.build=true # -DskipTests

The property can also be set in application.properties, of course 😉
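For reference, a sketch of what the corresponding application.properties entry could look like (using the Quarkus container-image build property):

```properties
# build a container image as part of the Maven build
quarkus.container-image.build=true
```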

This creates an image with a new name structure that we need to consider before running it:

When executing (docker|podman) images, we see it’s <username>/<artifactId>:<version>. Let’s do this:

docker run -p8080:8080/tcp karsten/java2pod:1.0.0-SNAPSHOT

Again, we should see the running container and be able to query the “API”. Stop the container with ctrl+c. Mission accomplished.

A Jib-Spring Boot Example

Project Code Available At

To show how easy Jib can be in case it supports frameworks such as Spring Boot natively, just check out this example:

We create a simple spring-boot-starter-web project via Spring Initializr.

Then we add the Jib build dependency in pom.xml:

    <!-- (...) -->
    <plugin>
        <groupId>com.google.cloud.tools</groupId>
        <artifactId>jib-maven-plugin</artifactId>
        <version>3.3.1</version><!-- adjust to the current version -->
    </plugin>

We then run:

mvn jib:dockerBuild

The image gets created without any hassle:

The image then can be run with:

docker run -p8080:8080/tcp java2pod-spring-boot:0.0.1-SNAPSHOT

Notice that in this case the 0.0.1-SNAPSHOT tag has been created automatically. By opening http://localhost:8080 we should get Spring’s Whitelabel Error Page, indicating there’s no URL mapping at all.

Very easy, isn’t it? Stop the container with ctrl+c.

(Cloud Native) Buildpacks

Cloud Native Buildpacks (here referred to as CNB, often referred to as just Buildpacks) is another approach to getting from source to image. It’s a project initially spawned by Heroku in 2011. It joined the Cloud Native Computing Foundation in 2018 and finally became an incubating project thanks to the joint efforts of Heroku and Pivotal. So it has quite a history and has so far adapted flexibly to new standards and specifications such as OCI or Docker registry v2.

CNB‘s basic approach is to get developers away from writing (Container|Docker)files. Instead, the comprehensive “background” tooling inspects the source code and then tries to build the image. Of course, we can tweak everything to our liking and write our own buildpacks or extensions. It even goes far beyond building, integrates SBOM support, and offers much more to discover. Let’s take a first dive into CNB!

As Quarkus supports CNB via a dependency, we’ll explore first the basic, then the Quarkus-specific approach.

Buildpacks – Basic Approach

Project Code Available At

Prerequisite: We need to install pack, CNB‘s CLI. We make ourselves comfortable with the CLI and ensure the Docker daemon is running.

In our directory, we run:

pack build java2pod-extended-buildpacks-basic --builder paketobuildpacks/builder:tiny

So the basic syntax is

pack build <name of the image to be created> --builder <builder reference>

We can get a list of suggested builders via

pack builder suggest

NB: the behavior of the builders can vary dramatically. Here we take the “tiny” builder from Paketo.

After running the above command, we should see output like this:

With a final success message like so:

We see there’s a bunch of stuff going on; to dive in more deeply, we’d need to go through the comprehensive Buildpacks documentation.

We can check with docker images that our image has been created and can start it with:

docker run -p8080:8080/tcp java2pod-extended-buildpacks-basic:latest

Also check that the endpoint is working. After that, stop the container with ctrl+c. If you wonder how to fine-tune the image build, have a look at the pack documentation.

By the way, you might notice that after stopping the container we see output not seen before:

This is caused by the specific image used.

Buildpacks – Quarkus Approach

Project Code Available At

Quarkus seems to follow an “as much as possible just through dependencies” approach, and the Buildpack integration is no exception!

There’s just one thing to be specified at the beginning and that is the type of the builder image:

# specify quarkus.buildpack.jvm-builder-image=<jvm builder image>, e.g.:
quarkus.buildpack.jvm-builder-image=paketobuildpacks/builder:tiny

As in the other examples, we add this dependency to pom.xml:

    <!-- (...) -->
    <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-container-image-buildpack</artifactId>
    </dependency>
    <!-- (...) -->

Given that, we execute:

mvn install

This will create an image following the same naming convention as in the Jib/Quarkus example above:

So in my case it’s karsten/java2pod:1.0.0-SNAPSHOT.

To prove that our approach successfully runs, we could remove any previously built image with the same Docker repository/tag combination via docker image rm <repository:tag> (e.g. karsten/java2pod:1.0.0-SNAPSHOT) or change the artifact name in pom.xml.

Then, as always, we run the container to see it works:

NB – change the image reference accordingly!
docker run -p8080:8080/tcp karsten/java2pod:1.0.0-SNAPSHOT

And finally stop the container with ctrl+c.

Source-to-Image (S2I) – Locally

Project Code Available At

If you’re familiar with OpenShift, a CNCF-certified Kubernetes distribution, you might have heard about Source-to-Image (S2I), an approach to easily create a pod just by handing over the Git repo’s URL. Comparable to CNBs (but way more minimalistic/focused), S2I uses a builder image to identify the technology to be compiled and to create the final image.

This technology is built into OpenShift. But it also can run locally with a CLI.

Prerequisite: Grab the latest s2i release from the project’s GitHub releases page and add it to your path so you can run it from your terminal.

We then pull the S2I builder image:

docker pull <s2i builder image>

This is an S2I builder image: it differs from the “pure” runtime image we used as the base image when creating the image with the Dockerfile.

The image build is executed with:

s2i build . <s2i builder image> java2pod-s2i-local

We check e.g. with

docker images | grep java2pod-s2i-local

the repository and tag information. In this case, the local repository name is the image name and tag is “latest”. All that can be tweaked.

We run the container with:

docker run -p8080:8080/tcp java2pod-s2i-local:latest

And check it’s working with:

curl -w '\n' <service URL>

and finally stop the container with ctrl+c.

III. Make the Image Available (Registries)

Having explored various approaches to create an OCI compatible image, the next step is to make it available for Kubernetes (K8s).

For this, we can either use the (Docker/Podman) local images on a single-node K8s, push them to public image registries such as Docker Hub or Quay, use a dedicated private registry such as Harbor (CNCF graduated!), Nexus, or Artifactory, or use OpenShift’s built-in private registry.

The basic procedure is then as follows: the image is specified in your Pod object and gets pulled from the registry to the local node where the container is going to be instantiated.

See this example:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80

For our next steps, we take the Java image manually created from the “hard way” approach, to achieve a basic understanding of the actions which later on will be automated through various tools.

Docker Hub
Project Code Available At

Prerequisite: An existing (free) account at Docker Hub.

First we need to create a repository on Docker Hub:

You would normally link one image type with one repository, with the ability to add multiple images under different tags (e.g. versions).


Hope you get the idea.

In our project directory, we then need to create an image whose tag matches our username (which in my case is “gresch”, see above):

Be sure to replace “gresch” with your user-/org-name!
docker build . -t gresch/java2pod-extended

The image will automatically be tagged with the tag “latest”. If you want to change this, add a colon followed by the tag, e.g.:

Be sure to replace “gresch” with your user-/org-name!
docker build . -t gresch/java2pod-extended:1.0.0-SNAPSHOT
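Such an image reference splits into repository and tag at the last colon. A quick shell illustration, reusing the reference from above:

```shell
ref="gresch/java2pod-extended:1.0.0-SNAPSHOT"
repo="${ref%:*}"    # everything before the last colon
tag="${ref##*:}"    # everything after the last colon
echo "repository: ${repo}"
echo "tag:        ${tag}"
```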

We then log in locally to Docker Hub and finally push the image:

Be sure to replace “gresch” with your user-/org-name and/or adjust the tag!
docker login
docker push gresch/java2pod-extended:latest

We should see the pushed image then on Docker Hub:

If you click here on the Tags tab, you’ll see at least the latest tag.

Well done – our image is now publicly available and ready to be pulled!

Quay.io
Project Code Available At

Prerequisite: An existing (free) account at Quay.io.

Quay.io (by Red Hat, here referred to as just Quay) is a public image registry powered by the open source Project Quay and basically works quite like Docker Hub. You can run it locally with containers, or deploy it to your own K8s with the Quay Operator, but we will use the public offering for this example. Quay comes with a rather sophisticated organization/permissions setup (organization/repository/tags), but we’ll keep it simple for this example.

Now, let’s go to the basic java-docker project folder and run some commands:

Be sure to replace “gresch” with your user-/org-name!
docker build . -t quay.io/gresch/java2pod-extended

With the -t flag we specify the tag and this is sufficient for creating the repository.

For applying the tag to an existing image, we first have to get the image ID:

docker images | grep java2pod-extended
gresch/java2pod-extended                     1.0.0-SNAPSHOT   62e0943d5f8d   12 hours ago   415MB
gresch/java2pod-extended                     latest           62e0943d5f8d   12 hours ago   415MB
java2pod-extended                            latest           62e0943d5f8d   12 hours ago   415MB             latest           62e0943d5f8d   12 hours ago   415MB

And apply the tag manually (docker tag 62e0943d5f8d quay.io/gresch/java2pod-extended).

Finally, we push the image to Quay:

Be sure to replace “gresch” with your user-/org-name and/or adjust the tag!
docker push quay.io/gresch/java2pod-extended

Now our image is available to be pulled!

Private Registries

General Thoughts

The approach used for these two public registries applies to private image registries just as well. All you need to do is find out the structure of the “repository URI”. Harbor, for example, uses projects instead of users/organizations, so we need to specify this:

Be sure to replace “62e0943d5f8d” with your image ID! Also use a different project name…
docker images | grep java2pod-extended
docker login <harbor host>
# change the image ID here!
docker tag 62e0943d5f8d <harbor host>/java2pod/java2pod-extended
docker push <harbor host>/java2pod/java2pod-extended

So, instead of the username, a project name (here: java2pod) is used (which you need to change if you want to go this path).

So whether you use Nexus, Artifactory, Harbor, or an internally operated version of Docker Hub or Quay – the approach is basically the same.
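Whichever registry you pick, the full image reference follows the same shape: <registry host>/<namespace>/<name>:<tag>, where the namespace is a user, organization, or project depending on the product. A sketch with hypothetical values:

```shell
registry="registry.example.com"  # hypothetical registry host
namespace="java2pod"             # Docker Hub user, Quay org, or Harbor project
name="java2pod-extended"
tag="latest"
image_ref="${registry}/${namespace}/${name}:${tag}"
echo "$image_ref"   # registry.example.com/java2pod/java2pod-extended:latest
```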

OpenShift Private Registry

Project Code Available At

Prerequisite: Accessible OpenShift cluster with cluster-admin permissions (!).

I often get questions about how to leverage the built-in container image registry of OpenShift for local development. As the internal registry is created by an operator which conveniently sets up a default route, the setup is quite easy:

1. We need to grant permissions for accessing the internal registry:

Be sure to replace <username> with your username!
# pull permission
oc policy add-role-to-user registry-viewer <username>
# push permission
oc policy add-role-to-user registry-editor <username>

2. Get the (external) registry route or expose one:

Be sure to replace “default-route-openshift-image-registry.apps.ocp4.mydomain.mytld” with your values!
oc get routes -n openshift-image-registry
default-route   default-route-openshift-image-registry.apps.ocp4.mydomain.mytld image-registry   <all>   reencrypt/Allow   None

3. Login to the registry (docker/podman):

Be sure to replace “default-route-openshift-image-registry.apps.ocp4.mydomain.mytld” with your values!
docker login -u `oc whoami` -p `oc whoami --show-token` default-route-openshift-image-registry.apps.ocp4.mydomain.mytld

4. Create a project (== K8s namespace) in OpenShift:

oc new-project java2pod

And an image stream for it:

oc create imagestream java2pod

The rest should feel quite familiar now:

Be sure to replace the registry route with your values!
docker build . -t default-route-openshift-image-registry.apps.ocp4.mydomain.mytld/java2pod/java2pod

With the -t flag we specify the tag and this is sufficient for creating the repository.

We now need to get the image ID, which in this case is 62e0943d5f8d and will differ on your computer:

docker images | grep java2pod-extended
gresch/java2pod-extended                     1.0.0-SNAPSHOT   62e0943d5f8d   12 hours ago   415MB
gresch/java2pod-extended                     latest           62e0943d5f8d   12 hours ago   415MB
java2pod-extended                            latest           62e0943d5f8d   12 hours ago   415MB             latest           62e0943d5f8d   12 hours ago   415MB

Finally, we push the image to the internal registry:

Be sure to replace the registry route with your values!
docker push default-route-openshift-image-registry.apps.ocp4.mydomain.mytld/java2pod/java2pod

You can test the cluster-local availability e.g. via the OpenShift console:

  1. Select the Add button, then Container Images to the right hand side to see the form depicted above.
  2. Select Image stream from internal registry.
  3. Select our project, the image stream we have created before and the tag.
  4. We could even change the icon 😀
  5. Important! Change the Resource type to Deployment in case you have OpenShift Serverless running on the cluster – unless you really want your application to scale down automatically.
  6. Click Create.

After a while you should be able to access the application via the created route.

Automating the Push

We should now have an understanding of how to push our image to a registry. For day-to-day work this seems cumbersome, though. Fortunately, developer-oriented tooling has us covered!

Jib
Basic Setup

Project Code Available At

Remember our first basic Jib try above? When running mvn compile jib:build it didn’t work due to missing credentials:

[INFO] ------------------------------------------------------------------------
[INFO] Total time:  7.006 s
[INFO] Finished at: 2023-05-13T16:44:47+02:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal (default-cli) on project java2pod: Build image failed, perhaps you should make sure your credentials for '' are set up correctly. See for help: Unauthorized for 401 Unauthorized

If you read the message carefully, you see that Jib tried to push the image to Docker Hub’s default library/ repository.

And we’re not user “library” at all! So the message was only a half-truth… What we want to do in our project directory is the following.

First, we make sure we have logged in to the desired registry at the command line.

Next, in pom.xml we specify the desired image registry URL and image name as described above. E.g.

Make sure to replace ‘gresch’ with your registry username!
<!-- XPath: /project/build/plugins/plugin[5]/configuration -->
        <!-- (...) -->
        <to>
            <!-- for Docker Hub: <image>gresch/java2pod</image> -->
            <!-- or, for Quay: -->
            <image>quay.io/gresch/java2pod</image>
        </to>
        <!-- (...) -->

for pushing to quay. Then, it becomes super-easy!

# needed for the Jib-Quarkus extension
mvn compile quarkus:build
mvn compile jib:build

Quarkus-specific setup

Project Code Available At

You might remember that for Quarkus we just had to specify a dependency and one parameter – at the command line or in application.properties. We continue following this approach and need to customize our setup a bit for the Quarkus application. In this case, I prefer application.properties. Here, we add:

Be sure to replace group “gresch” with your username!
quarkus.container-image.push=true
quarkus.container-image.registry=quay.io # adjust!!!
quarkus.container-image.group=gresch # adjust!!!

All we need to do (after login to the desired registry, see above) is now

mvn install

and the image should get pushed to the desired registry.

Buildpacks (CNB)

Basic Approach

Project Code Available At

You might remember what we did to create our image with Cloud Native Buildpacks (CNB, aka just Buildpacks)? We ran

From old example (06.1)!
pack build java2pod-extended-buildpacks-basic --builder paketobuildpacks/builder:tiny

All we need to do now (successful container registry login assumed) is to specify the registry correctly in the image reference and add a --publish flag to the build command:

Please make sure to adjust the repo URL (here: replace “gresch” accordingly)!
pack build quay.io/gresch/java2pod-extended-buildpacks-basic --builder paketobuildpacks/builder:tiny --publish

The image should then have been pushed to the registry.

Quarkus Approach

Warning – this approach currently seems to be failing, so feel free to skip it for now!

Again, basically all we need to do is specify the image reference so the image can be pushed to the container registry (and make sure we can access it). As you might remember, the Quarkus approach for Buildpacks was to just add a dependency. As in the Jib-Quarkus example for pushing to a registry, we specify the image reference etc. in application.properties:

Be sure to replace group “gresch” with your username!
quarkus.buildpack.jvm-builder-image=paketobuildpacks/builder:tiny # change accordingly
quarkus.container-image.registry=quay.io # change accordingly
quarkus.container-image.group=gresch # change accordingly

Then, we run:

mvn install

If you wonder why we haven’t put everything into application.properties – this is to avoid nested build attempts.

The image should be built and then pushed to the desired registry.

IV. Create a Pod With the Image (K8s)

We’re finally coming to a close! It was quite a journey, but we’re not done yet: we want the artifact running in a container on Kubernetes (K8s).

The Hard Way: K8s YAML

We won’t go too much into this – this part should give you an impression of the complexity and of what you’d have to deal with when approaching K8s manually.

In this example we pull the image from an external registry (not a local one on the K8s node) and thus the reference to “gresch” needs to be replaced:

Be sure to replace group “gresch” with your user-/org-name!
apiVersion: v1
kind: Service
metadata:
  name: java2pod
  labels:
    app: java2pod
spec:
  type: NodePort
  selector:
    app: java2pod
  ports:
    - protocol: TCP
      port: 8080
      name: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java2pod
  labels:
    app: java2pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: java2pod
  template:
    metadata:
      labels:
        app: java2pod
    spec:
      containers:
        - name: java2pod
          image: gresch/java2pod-extended:latest
          ports:
            - containerPort: 8080

Here we define two objects: a Service and a Deployment (in java2pod-service-and-deployment.yaml). We can apply it against our K8s instance and play around a bit. But there are many specialties, such as the Ingress addon, which I won’t cover here. You can also work against an OpenShift instance with kubectl.

We apply this through:

kubectl apply -f java2pod-service-and-deployment.yaml

This applies it to the default namespace. Later on, we can check that our Pod has been created:

kubectl get pods

But we wouldn’t be done yet: The Pod probably needs to be made available from the outside through an Ingress, we might need to add health checks and so forth.

For this article, I’d like to leave you with the impression that there’s a lot to learn before you can do K8s the hard way. There must be a better way, focused on developers.

Helm Charts to the Rescue?

You might have heard about Helm, which calls itself “The package manager for Kubernetes” and claims that Helm Charts are capable of helping you define, install, and upgrade even the most complex K8s application.

Let’s have a look at what’s behind all this:

Following the Quarkus way, we start with adding two dependencies to pom.xml:

Change the versions, if needed.

<dependency>
    <groupId>io.quarkiverse.helm</groupId>
    <artifactId>quarkus-helm</artifactId>
    <version>1.0.2</version>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-kubernetes</artifactId>
</dependency>

One is for Helm Chart generation, the other for generating K8s-specific files – if we had used the OpenShift extension, OpenShift-specific files would be created.


mvn clean package

Will do the heavy generation for us and generate the files to target/helm/kubernetes/java2pod, namely to the /templates subdirectory:

When looking into these files, we see that they look a bit like the content from our “K8s hard way” approach:

apiVersion: v1
kind: Service
metadata:
  annotations:
    app.quarkus.io/build-timestamp: 2023-05-14 - 19:57:58 +0000
    app.quarkus.io/commit-id: 9a73b69f298ba04885c26ee883479c3962547a08
  labels:
    app.kubernetes.io/name: java2pod-helm
    app.kubernetes.io/version: 1.0.0-SNAPSHOT
    app.kubernetes.io/managed-by: quarkus
  name: java2pod-helm
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
    - name: https
      port: 443
      protocol: TCP
      targetPort: 8443
  selector:
    app.kubernetes.io/name: java2pod-helm
    app.kubernetes.io/version: 1.0.0-SNAPSHOT
  type: {{ .Values.app.serviceType }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.quarkus.io/build-timestamp: 2023-05-14 - 19:57:58 +0000
    app.quarkus.io/commit-id: 9a73b69f298ba04885c26ee883479c3962547a08
  labels:
    app.kubernetes.io/version: 1.0.0-SNAPSHOT
    app.kubernetes.io/name: java2pod-helm
    app.kubernetes.io/managed-by: quarkus
  name: java2pod-helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/version: 1.0.0-SNAPSHOT
      app.kubernetes.io/name: java2pod-helm
  template:
    metadata:
      annotations:
        app.quarkus.io/build-timestamp: 2023-05-14 - 19:57:58 +0000
        app.quarkus.io/commit-id: 9a73b69f298ba04885c26ee883479c3962547a08
      labels:
        app.kubernetes.io/version: 1.0.0-SNAPSHOT
        app.kubernetes.io/name: java2pod-helm
        app.kubernetes.io/managed-by: quarkus
    spec:
      containers:
        - env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          image: {{ .Values.app.image }}
          imagePullPolicy: Always
          name: java2pod-helm
          ports:
            - containerPort: 8443
              name: https
              protocol: TCP
            - containerPort: 8080
              name: http
              protocol: TCP

To proceed, we’d have to understand how to build Helm charts, maybe create a Helm repository to make the chart consumable by others, etc.

This basically means we had not only to understand K8s in depth, but also the extensive (and definitely powerful) Helm chart syntax and Helm’s concepts. We also had to take care of image building, specifying the image reference and so on and so forth.

That’s even more to master! Therefore, for our purposes, Helm is a misdirection as we won’t become K8s and Helm experts overnight.47


Another approach that just generates the files needed for K8s is dekorate. The main idea is that you annotate your code with various annotations (K8s config, Helm, Knative, Jaeger, Prometheus – even special annotations for Minishift and Kind are available!) like so:

import io.dekorate.kubernetes.annotation.KubernetesApplication;

@KubernetesApplication
public class Main {

    public static void main(String[] args) {
        // Your application code goes here.
    }
}
Dekorate would then generate the needed resources to deploy your application. You can also follow an annotationless approach when using Spring Boot (only).

But that's still not what we want: we'd still have to take care of applying the generated resources, still need in-depth knowledge about what to specify, and still have to take care of image handling. There must be a better way…

Full Automation with JKube

Meet JKube. JKube is a project of the Eclipse Foundation. Its purpose is to support building “Cloud-Native Java Applications without a hassle”.

The approach is basically:

  1. You add the Kubernetes/OpenShift Maven/Gradle plugin to your build.
  2. You can then go through the entire Java to Pod lifecycle as described in the build goal documentation.

So let’s see how this works! All we have to do is basically add the JKube Maven plugin to pom.xml:

      <!-- (...) -->
      <!-- (...) -->
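The plugin entry could look like this – the version shown is an assumption, so pick the latest JKube release:

```xml
<!-- Eclipse JKube: build images, generate and apply K8s resources -->
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>kubernetes-maven-plugin</artifactId>
  <version>1.16.2</version>
</plugin>
```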

This is how we can build an image – in this case we specify Jib as the builder (handy, isn’t it?):

Replace “gresch” with your registry username!
mvn k8s:build -Djkube.build.strategy=jib -Djkube.generator.name="gresch/%a:%l"

And we push the image with

Replace “gresch” with your registry username!
mvn k8s:push -Djkube.generator.name="gresch/%a:%l"

And finally we can generate all resources needed to run our application in a Pod on K8s!

Replace “gresch” with your registry username! Adjust ‘jkube.domain’!
mvn k8s:resource k8s:apply -Djkube.generator.name="gresch/%a:%l" -Djkube.namespace=j2p-jkube -Djkube.createExternalUrls=true -Djkube.domain=mydomain.mytld

Some explanations here:

jkube.generator.name=”<user-/orgname>/%a:%l”: Specifies the image URL (incl. tag) at the external registry.

jkube.namespace=<K8s namespace>: The namespace on the K8s cluster.

jkube.createExternalUrls=true: Automatically creates the Ingress routes.

jkube.domain=mydomain.mytld: The external URL under which your application shall be available.

If you wonder whether you could specify these CLI parameters in a file – yes, you're covered: you need to add them to the plugin specification in pom.xml. Find a full-blown example here.
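For illustration, the same parameters can also be set as Maven properties in pom.xml – a sketch using the property names from the CLI above (values are the example ones, adjust to your setup):

```xml
<properties>
  <!-- image URL template: group/artifact:label -->
  <jkube.generator.name>gresch/%a:%l</jkube.generator.name>
  <jkube.namespace>j2p-jkube</jkube.namespace>
  <jkube.createExternalUrls>true</jkube.createExternalUrls>
  <jkube.domain>mydomain.mytld</jkube.domain>
</properties>
```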

You could dive in extremely deeply (as with dekorate or Helm) and add specific YAML to src/main/jkube, which is then used to “enrich” the generated configuration. See the documentation here.
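As an illustrative sketch of such an enrichment fragment: a partial file like src/main/jkube/deployment.yaml gets merged into the generated Deployment, so you could, for example, add resource limits without writing the full manifest (the limit values here are made up):

```yaml
spec:
  template:
    spec:
      containers:
        - resources:
            limits:
              memory: 256Mi
```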

From my point of view, JKube gives you the best of both worlds:

  • Intuitive Java approach, starting with Maven/Gradle plugin.
  • Full-lifecycle support.
  • Dedicated to Kubernetes/OpenShift.
  • From zero-config via XML to YAML.

OpenShift Ease With S2I

As a final option for OpenShift users48, the built-in Source-to-Image (S2I) mechanism should not be forgotten.

If we log in to OpenShift and create a project (OpenShift's term for a K8s namespace), e.g.

oc new-project j2p-s2i
oc new-app \

Then all needed configuration (Deployment, Service, ConfigMaps, Secrets) is generated automatically and we only need to make the application accessible:

oc expose service/java2pod
oc get route
NAME      HOST/PORT   PATH   SERVICES   PORT       TERMINATION   WILDCARD
java2pod                     java2pod   8080-tcp                 None

This is another, really easy way to get from code to Pod.

Please note that the argument to oc new-app specifies an S2I builder image, followed by a tilde sign (~), followed by the Git repo URL and a subdirectory (full documentation here).
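A complete invocation might look like this – a sketch in which the builder image, repo URL, and subdirectory are placeholders to adapt to your own setup:

```shell
# S2I: builder image ~ Git repo, with a subdirectory selected via --context-dir
oc new-app \
  registry.access.redhat.com/ubi8/openjdk-17~https://github.com/<your-org>/<your-repo> \
  --context-dir=<subdirectory> \
  --name=java2pod
```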

Java to Pod: Interactive with odo

There’s even more! If you prefer to work on your code and at the same time interact with a Kubernetes (or OpenShift) cluster, have a look at odo.

It aims to support the developer experience by making it easy to generate resources and deployments directly from the command line.

As odo is language agnostic and something to be integrated perhaps later on in your workflow, we leave it with an honorary mention.


The journey to get a Java application from source code to running in a Kubernetes pod can become quite tedious. Fortunately, Java developers need to become neither Docker gurus nor K8s administration experts: Java-based tooling like Jib and JKube allows for easy image creation, registry push, and K8s deployment without much hassle.

Especially JKube, a project of the Eclipse Foundation, not only supports the entire Java to Pod lifecycle, but also lets you specify everything from the ground up – besides a zero-configuration approach. This enables developers to dive step-by-step into the intricate world of K8s object configuration.


In a follow-up article, we’ll have a look at automating, mixing, and matching all the above steps using K8s-native tools: Tekton and ArgoCD.

Feedback Welcome!

Love it? Hate it? Leave some comments below!49

  1. and not e.g. LXC or nspawn []
  2. you can get a free, 30-days-running instance from or run kind or minikube or – when using Quarkus – toggle devservices k8s support []
  3. b.t.w. – if you really want to experience Quarkus’ developer joy, make sure you install the Quarkus CLI, e.g. with sdk install quarkus – the above command would be made much easier with quarkus create (...) ! []
  4. Find the sourcecode at if you’re… in a hurry πŸ˜‰ []
  5. e.g. having errors like “Plugin org.apache.maven.plugins:maven-surefire-plugin:x.x.x or one of its dependencies could not be resolved: org.apache.maven.plugins:maven-surefire-plugin:jar:x.x.x was not found in during a previous attempt. This failure was cached in the local repository and resolution is not reattempted until the update interval of central has elapsed or updates are forced“. []
  6. see []
  7. If you want to know more about the new Quarkus Dev UI, check out this introduction from Phillip KrΓΌger []
  8. If you are curious how all this stuff works, have a look at: []
  9. You can copy the application to another directory or grab it from here:, but I recommend following the article step-by step or cherrypicking the repositories of the articles you’re interested in. []
  10. We won’t go over the details, but have a look at this quite comprehensive reference here: []
  11. e.g. with podman inspect when having used podman build for creating the image. podman b.t.w. uses Skopeo as a library to perform such tasks. []
  12. Get the full container ID with docker ps --no-trunc to learn even more πŸ™‚ []
  13. Side note: the container normally can only run on the architecture it was built on! []
  14. In case you want to know what’s in such an image, just have a look at this Dockerfile – you see that things become more complex under the hood. You could even inspect the ubi8-minimal base image’s Dockerfile. If you want to dig in even more deeply and wonder what this FROM koji/image-build “base image” is, check out this article on base images. But this is farther than people interested in coding normally go… []
  15. Learn more about it from this article: []
  16. 185 is historical for jboss/wildfly []
  17. To go deeper, here are some challenges for you:
    1. Inspect /src/main/docker/Dockerfile.jvm. Hint: this file is not made for an uber-jar, but for a library-dependent jar, see the explanation here:
    2. Learn about native compilation and reflect upon a) the changes needed for the Dockerfile b) the advantages for operations. []
  18. Assuming this is what the name stands for, but couldn’t find a reliable source. []
  19. We skip the CLI approach, but you can read more about it here: []
  20. If it fails with a weird exception message it’s probably because you use a Docker Desktop version prior to 20.0.14, see []
  21. This can also be achieved by running:

    mvn quarkus:add-extension -Dextensions='container-image-jib'

    This command adds the dependency to pom.xml. ./mvnw or even the quarkus CLI tool, which is available via SDKman, would also work b.t.w. []

  22. Of course, we could just use the image ID. []
  23. There is also support for Podman, but it’s a bit tricky: – check the known issues & limitations before you start! []
  24. As a homework: try to change the base image to ubi πŸ™‚ []
  25. Or we run

    mvn quarkus:add-extension -Dextensions='container-image-buildpack'


  26. macOS: brew install source-to-image should work, too. []
  27. if you do so, please be aware to set imagePullPolicy: Never, see, or in depth: []
  28. both Docker and Quay can be operated privately, too! []
  29. or others listed in the CNCF landscape []
  30. In real life, you rarely would see just the Pod – it’d live surrounded by a myriad of other workload resources, such as Deployments, StatefulSets, DaemonSets, Jobs, CronJobs etc. []
  31. see []
  32. Hint – make sure you just do this with example apps and not to publish stuff of your company, if applicable. []
  33. regarding the latest tag see the hints above []
  34. which in this case is 62e0943d5f8d and will differ on your computer []
  35. docker tag <image ID, here 62e0943d5f8d> <tag, here> []
  36. try it out via or with a local setup with kind and Helm:, in general, the setup allows an overwhelming amount of configuration parameters and demands a separate, very long article. []
  37. details see here: []
  38. see, you even could do this later from the command line:
    oc label deployment quarkus-native --overwrite
  39. normally local login is sufficient when developing, but check out further options here []
  40. for this example we can use the same project directory b.t.w. – Buildpacks’ basic approach is quite uninvasive. []
  41. Hint: If the Maven build fails with an error message like Could not transfer artifact – just retry again. []
  42. – at the end []
  43. E.g. we could spin up a K8s instance with minikube; then minikube start []
  44. Hint: try this out with the Developer Sandbox! []
  45. Create an individual namespace with e.g.

    kubectl create namespace j2p-yaml

    and apply by specifying it with

    kubectl apply -f java2pod-service-and-deployment.yaml -n j2p-yaml


  46. You should be familiar enough now to do this at the command line, if not, check the initial Quarkus examples []
  47. The same would apply to Kustomize, which has the advantage of not forcing you to learn a Helm-specific DSL and of being integrated into K8s’ CLI, kubectl. []
  48. Again: as developer, just try the Developer Sandbox to get quite a holistic experience and an environment running for 30 days [before needing to get reprovisioned] []
  49. This article was first published at []
