
Develop: The Inner Loop with OpenShift Dev Spaces (2/4)

In Part II of our 4-part blog series “You’ve written a Kubernetes-native Application? Here is how OpenShift helps you to run, develop, build and deliver it”, we will focus on the aspect of developing our application with and directly on OpenShift.

In Part I, we introduced our sample application, the “Local News Application”, and showed how to deploy it via Helm and the Red Hat Universal Base Image (UBI). If you haven’t read it yet, we recommend you do so first, because otherwise you’ll miss out on the context! 🙂:

  1. Run: Get your Angular, Python, or Java Dockerfile ready with Red Hat’s Universal Base Image and deploy with Helm on OpenShift
  2. Develop: The Inner Loop with OpenShift Dev Spaces
  3. Build: From upstream Tekton and ArgoCD to OpenShift Pipelines and GitOps
  4. Deliver: Publish your own Operator with Operator Lifecycle Manager and the OpenShift Operator Catalog

Develop: The Inner Loop with OpenShift Dev Spaces

There are various approaches to developing for, or even with and on, Kubernetes. In our book “Kubernetes Native Development”, from which we have taken the sample application used in this blog to illustrate the capabilities of OpenShift, we demonstrated four different approaches. The methods differ in their degree of integration: the higher the level of integration, the more value the platform can provide to developers. The image below shows these four categories.

Integration of development into containers

The Red Hat OpenShift Container Platform comes with various development services that increase developer productivity, match one or more of the above approaches, and help developers abstract away complexity with automated builds and deployments. The final section of this blog contains an overview of them.
However, in this article, we will focus on OpenShift Dev Spaces, which probably falls somewhere between the third and the fourth category, because it provides options to heavily leverage Kubernetes while still being developer-focused.

Inner and Outer Loop

When looking at the development cycle from a 10,000-feet perspective, we recognize two loops, often referred to as the inner and the outer loop. The inner loop starts with writing new or changing existing code. Then, the code is (re)built (e.g. it is compiled or a binary is created) to validate the results of the changes by running the new build artifact. If the code does not behave as expected, a debug phase may follow. The inner loop is usually run by a single developer who is working in her own, isolated development environment, e.g. an IDE. This is where Red Hat OpenShift Dev Spaces comes in.

Inner & Outer Loop

To transition from the inner into the outer loop, a developer pushes her changes to a central code repository such as Git. When leaving the inner loop, the code is shared with other developers. From there on, it triggers CI/CD processes that ensure the code can be built and run in an isolated CI environment and can be deployed into UAT/QA or production environments. Red Hat OpenShift supports the outer loop with Builds, Pipelines, and GitOps; this will be covered in Part 3.

What is Red Hat OpenShift Dev Spaces and why should I care? 

Red Hat OpenShift Dev Spaces is a Web IDE (sometimes also referred to as a Cloud IDE) that can be used to develop containerized applications. It follows a three-tiered application architecture made up of a client application (the JavaScript running in your browser), a server backend (several containers running in OpenShift), and a database (running in or outside OpenShift). OK, this is not very spectacular, is it? Well, most classical IDEs are rich client applications that are installed on your laptop, and this comes with some drawbacks that a web-based solution can overcome by:

  • Declaring/Describing the development environment as Code (in YAML 🙂 ) and spinning up as many developer workspaces as required in a reproducible manner
  • Providing standardized, up-to-date environments for a larger number of developers
  • Eliminating potential resource constraints on local machines because each developer workspace is running on top of an OpenShift cluster
  • Bringing the inner development loop closer to the target production environment – Kubernetes/OpenShift – because that helps to discover problems early (no longer: “works on my machine!”)
  • Reducing security risks because code (and also other artifacts such as images) stays on a central server, in this case a secure OpenShift cluster
  • Enforcing security standards towards the dev side (shift left)
  • Enabling Mobility of the development workplace (you can log in from wherever you like)
  • Providing access to specialized hardware orchestrated by OpenShift, such as GPUs

So, without further ado, let’s get started developing our sample – the Local News Application – on OpenShift Dev Spaces. But first, we need to install it.

Installing Dev Spaces

To install OpenShift Dev Spaces, switch to the Operator Hub in your OpenShift console and search for “Dev Spaces”. You will see the official “Red Hat OpenShift Dev Spaces” operator showing up. Click on it and install it.

Search Dev Spaces in Operator Hub
Install the Operator

When you switch back to Operators -> Installed Operators, you will see two new Operators: Red Hat OpenShift Dev Spaces and DevWorkspace Operator. 

Result after successful installation

In the Provided APIs column of the former, click on “Red Hat OpenShift Dev Spaces Instance Specification” -> Create CheCluster. Leave all defaults as they are and switch to the Topology view to see what is being created.
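If you prefer the CLI over the console, the same step can be done by applying a CheCluster resource with oc. The following is a minimal sketch, not a definitive manifest: the API version and the openshift-devspaces namespace are assumptions based on the Dev Spaces 3.x operator, and an empty spec leaves all defaults in place, just like clicking through the form:

```yaml
# Minimal CheCluster custom resource (sketch). Applying it has the same
# effect as clicking "Create CheCluster" and keeping all defaults.
apiVersion: org.eclipse.che/v2
kind: CheCluster
metadata:
  name: devspaces
  namespace: openshift-devspaces   # assumed operator namespace
spec: {}                           # empty spec: operator fills in defaults
```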

Components deployed through the Red Hat OpenShift Dev Spaces Instance Specification Resource

We see the following deployments / stateful sets:

  • devfile-registry – stores our devfiles.
  • plugin-registry – stores the workspace plugins.
  • postgres – database to store the configuration data, e.g. workspace metadata.
  • devspaces – the Dev Spaces core server component, formerly known as codeready.
  • che-gateway – a Traefik-based Gateway for routing requests to the server components but also to the Kubernetes API. Handles authentication and authorization based on OpenShift RBAC and integrates with OpenShift OpenID Connect (OIDC). 
  • devspaces-dashboard – This is the management dashboard for your devfiles, workspaces, and plugins.

If you have already worked with the predecessor of Dev Spaces, CodeReady Workspaces, you will notice that one component is missing: Keycloak. The new Dev Spaces operator integrates directly with OpenShift for Single Sign-On. Keycloak is no longer required, which lowers resource consumption and complexity.

What is a Workspace?

A workspace is a containerized instance of a development environment provided to a single user. This user can utilize a workspace (or multiple workspaces for different technology stacks) to access, write, build, run, or debug the code, i.e. to cover the inner loop. The workspace contains all the necessary tools to increase a developer’s productivity. Some of these tools are provided via containers, e.g.:

  • Language runtimes/development kits such as Node.js, JDK, Python
  • Build Tools such as Maven or Gradle
  • CLIs to interact with OpenShift or other tools 
  • Binaries to run certain processes such as application servers, message brokers, etc. 

Other tools can be integrated via workspace plug-ins that are retrieved from the plug-in registry. These are either native Dev Spaces plug-ins or VSCode plug-ins. Yep, you heard that right: it is possible to integrate existing, well-known VSCode plug-ins, too. You can add plug-ins via the UI or a devfile. Dev what???

What is a Devfile?

A devfile is a template to customize the workspaces you will be running on OpenShift. It captures the instructions for configuring and running your development environment in a text-based YAML file. This enables you to describe consistent development environments in a similar way to other OpenShift resources. A devfile can be shared across development teams working on a project. This ensures that every developer relies on the same set of resources (e.g. a specific Git repository) and tools (plug-ins, binaries, commands) for the given technology stack.

To grasp the concepts of devfiles, let us look at an example from our Local News project: the News Frontend which is written in AngularJS. Hence, the following devfile provides the technology stack for JavaScript as well as access to the project sources:

apiVersion: 1.0.0
metadata:
  name: localnews-frontend
projects:
  - name: localnews
    source:
      type: git
      location: 'https://github.com/Apress/Kubernetes-Native-Development'
components:
  - alias: nodejs
    type: dockerimage
    image: 'registry.redhat.io/devspaces/udi-rhel8@sha256:1983e5…'
    memoryLimit: 2048Mi
    endpoints:
      - name: news-frontend
        port: 4200
        attributes:
          discoverable: 'true'
          public: 'true'
    mountSources: true
commands:

You can see that the devfile defines the Git repository URL in the projects section as well as a component of type dockerimage to run the JavaScript code from. The image points to a container image called udi-rhel8. The abbreviation UDI stands for Universal Developer Image; it contains a consolidated set of developer tools and language runtimes such as cpp, dotnet, golang, php, java, kubernetes, and openshift.

Commands can be executed on one of the defined components (in this case there is only one such component but you could have multiple). An example of a command to “download the npm dependencies” and “start the News Frontend” can be found in the following excerpt:

  - name: Download dependencies & Start news-frontend in devmode
    actions:
      - type: exec
        component: nodejs
        command: |
          npm install
          npm install @angular/cli
          node_modules/@angular/cli/bin/ng serve --host 0.0.0.0 --port 4200 --disable-host-check
        workdir: '${CHE_PROJECTS_ROOT}/localnews/components/news-frontend'

This is also a great way to onboard new developers or even whole teams. They just need the devfile, run it, and can start coding right away. Wait, you said run it? How do I actually run a devfile?

Running devfiles

There are two official ways to run a devfile: (a) use the UI that Red Hat OpenShift Dev Spaces provides or (b) use a so-called factory URL, which is, at the moment, more flexible than using the UI. So let us have a look at how a factory URL is formed:

https://devspaces-<openshift_deployment_name>.<domain_name>#https://github.com/Apress/Kubernetes-Native-Development.git/tree/openshift&devfilePath=snippets/chapter3/devspaces-devfiles/localnews-devfile-frontend-only.yaml

This URL points to the devspaces route that has been created during the installation of Dev Spaces. The part behind the hash sign (#) defines the Git URL and the devfile path. You can try it out on your own cluster. The factory URL is a powerful method to share a devfile and will open a new workspace based on this devfile for you. There are many more parameters that can be used. You can find more information about further parameters in the product documentation.

Looking behind the scenes

When you open the URL in your browser, you can see the Theia IDE in action. But before we start working with it, let us briefly check what happened behind the scenes. Hence, we click on Workspace panel -> User Runtimes -> theia-ide -> theia-dev on the right side of the window. The Dev Spaces Dashboard opens, and you can see your workspace on the left side of the menu. If you click on it, you can see meta-information about the workspace, e.g. the Kubernetes namespace it runs in (remember, the workspace is a container!).

Besides the Overview, there is another tab called Devfile. If we click on it, we can even edit its definition. You will recognize that the devfile YAML is slightly different from the one we described earlier. This is because there are different specification versions of a devfile. The one we showed here is a Version 1.0 file (used in the predecessor of Dev Spaces, CodeReady Workspaces), whereas Dev Spaces requires Version 2.1.0. However, we don’t need to worry about this because Dev Spaces automatically migrates the input file into the target 2.1.0 format.
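To give a feel for the difference, here is a hedged sketch of how our frontend component could look in the devfile 2.x schema. The field names follow the devfile 2.1.0 specification (schemaVersion, container components, named Git remotes), but the exact output of the automatic migration may differ in detail:

```yaml
# Sketch of the migrated devfile in 2.1.0 format (not the literal
# output of the Dev Spaces migration).
schemaVersion: 2.1.0
metadata:
  name: localnews-frontend
projects:
  - name: localnews
    git:
      remotes:
        origin: 'https://github.com/Apress/Kubernetes-Native-Development'
components:
  - name: nodejs                 # v1 "alias" becomes "name"
    container:                   # v1 "dockerimage" becomes a container component
      image: 'registry.redhat.io/devspaces/udi-rhel8@sha256:1983e5…'
      memoryLimit: 2048Mi
      mountSources: true
      endpoints:
        - name: news-frontend
          targetPort: 4200
          exposure: public       # replaces the v1 public/discoverable attributes
```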

Viewing the Devfile

We can even dig deeper to understand how our workspace is actually run (yes, as a container, but how, and where?). To switch to the OpenShift console, click on the 3×3 grid symbol to the left of your username in the top navigation bar. You will land on the projects overview page, where you click on the project with the same name as the Kubernetes namespace we saw in the previous step. This is the OpenShift project where your workspace resides. Switch to the “Installed Operators” link in the left navigation and make sure that the desired project is selected. Choose DevWorkspace Operator and select the DevWorkspace tab.

The DevWorkspace Custom Resource

You will find a DevWorkspace resource, and if you drill into its YAML (by clicking on the name “localnews-frontend”), you will once again see our devfile stored in the spec section of the resource. What has happened? Whether you used the UI or the factory URL, in either case, Dev Spaces created a custom resource for you. This resource is processed by the DevWorkspace operator and turned into a Deployment.
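Stripped down to its essentials, the custom resource could look roughly like the sketch below. The apiVersion and the started/template fields follow the DevWorkspace API; the real resource generated by Dev Spaces carries many more fields:

```yaml
# Minimal sketch of a DevWorkspace custom resource; the actual
# generated resource is considerably larger.
apiVersion: workspace.devfile.io/v1alpha2
kind: DevWorkspace
metadata:
  name: localnews-frontend
spec:
  started: true            # setting this to false stops the workspace
  template:                # the (migrated) devfile content lives here
    projects:
      - name: localnews
        git:
          remotes:
            origin: 'https://github.com/Apress/Kubernetes-Native-Development'
    components:
      - name: nodejs
        container:
          image: 'registry.redhat.io/devspaces/udi-rhel8@sha256:1983e5…'
```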

Let’s switch to the Topology view (in the left navigation, switch from the Administrator to the Developer perspective and select Topology) and inspect the deployment. We can derive the following information from this view:

  • There is a single deployment per workspace, all in the same OpenShift project that has been created for us.
  • The deployment is managed by the DevWorkspace resource named localnews-frontend. That is, we should not change the deployment directly because its state is specified and synchronized by the DevWorkspace resource.
  • There is a single workspace pod managed by the deployment.
  • There is a Kubernetes service to access our News Frontend from within the cluster and a route to access it from our browser. If we click on it, however, we will see an error message because we haven’t started our AngularJS process from within our workspace, yet.
The Workspace Deployment

If you click on the pod in the right panel, you will see a Containers section (when you scroll down a bit). There you will find the basic containers theia-ide, che-machine-exec, and che-gateway, as well as the container defined by the component in your devfile, called nodejs.

Containers inside the Workspace Pod

Working with the (Theia) IDE

To work with the IDE you should make yourself familiar with its window structure.

  • The explorer on the left side allows you to navigate to the files to edit.
  • The editor in the center shows the contents of the selected file with syntax highlighting and code completion.
  • A terminal window with different tabs at the bottom to execute commands or see the output of a command.
  • The left toolbar allows you to switch between files, search, git, and debug perspectives. This will change the explorer into something else, e.g. when you select debug, it changes to a debug overview.
  • The right toolbar shows three symbols:
    • Outline to show file outlines such as the class and methods of a Java file
    • Endpoints to show the available TCP endpoints
    • Workspace, where you’ll find all 4 containers that are part of the pod, and the endpoints and commands associated with the respective containers.

To initialize our project, we just use the command from our devfile by clicking on the Workspace -> User Runtimes -> nodejs -> download dependencies… link. This opens a new terminal in the nodejs container and runs npm install and ng serve. The service is now listening on port 4200.

The Theia IDE

One way to access it is to use the route from the topology view that previously ran into an error. If we try it once again, we will see our map.

The Empty Map Rendered by the AngularJS Application

But wait, where are all the markers? Oops, we just ran one component of the Local News application, but we need to deploy the others, too. How can we accomplish this? The first option would be to add further components for the other services News-Backend, Database, Location-Extractor, and Feed-Scraper to our devfile. Although this would indeed be possible, it might not match the common way of working. Firstly, we would rather develop one component at a time and integrate the others in their most recent stable versions instead of developing everything at the same time. Secondly, we would assume that there is one team responsible for each component, so there would rather be four devfiles, each specific to the respective technology stack (Java, Quarkus, AngularJS), instead of a single one supporting all components.

Adding dependent components

For these reasons, we will leave the devfile as it is. But how can we solve the problem of the missing components? We can simply reuse our Helm chart from Part 1 of this series, exclude the News Frontend from the deployment, and deploy the remaining components to the project where our workspace pod runs. You don’t have Helm installed yet? No problem, we can just run the helm command from our workspace. There is a command called install-all-backend-services that you can run. The result can be seen in the topology view:
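As a rough idea of how such a command can be wired into a workspace, here is a hedged sketch of what a devfile command like install-all-backend-services could look like. The chart path and the value used to exclude the frontend are illustrative assumptions, not taken from the actual repository:

```yaml
# Sketch of a devfile command that installs the backend services via
# Helm from inside the workspace. Chart path and value names are
# hypothetical; check the repository for the real definition.
  - name: install-all-backend-services
    actions:
      - type: exec
        component: nodejs
        command: |
          helm install localnews k8s/helm-chart \
            --set newsfrontend.enabled=false
        workdir: '${CHE_PROJECTS_ROOT}/localnews'
```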

Backend Components Deployed by Helm

The last step is to tell the News Frontend the correct backend URL, because the default http://localhost:8080 cannot be accessed from our browser. In Explorer -> Workspace, we navigate to the JSON configuration in localnews/components/news-frontend/src/assets/settings.json and replace the apiUrl with the route URL of the backend (you can pick it from the Topology view or under Networking -> Routes; if you are working from the workspace terminal, you can retrieve it via “oc get routes”). Caution: don’t forget to add http:// in front of your backend URL. The AngularJS process will live-reload, and when you reload the map in your browser (or move the map), you will see the expected markers.
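After the change, settings.json looks roughly as follows; the hostname is a placeholder that you replace with the backend route of your own cluster (shown here as JSON, which is also valid YAML):

```yaml
# localnews/components/news-frontend/src/assets/settings.json (sketch)
# Replace the placeholder host with the route from "oc get routes".
{
  "apiUrl": "http://news-backend-<project>.<domain_name>"
}
```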

Map with Markers Enabled by Availability of all Backend Components

How does OpenShift Dev Spaces fit into the other OpenShift Developer Tools?

Here is a brief overview of the different developer services that OpenShift provides. Let us have a quick look at each of them to learn more about the ecosystem around OpenShift Dev Spaces:

  • Red Hat OpenShift Builds: OpenShift leverages buildpacks and language detection to fully automate image generation (Source-2-Image) and deployment of an application for you by running a simple command or using the UI. For the developer it feels “Docker-less” but, in fact, it already goes in the direction of “Dev-to-Docker” or even “Dev-to-K8s” because it fully decouples the developer from the complexity of writing Dockerfiles or Kubernetes YAML.
  • Red Hat OpenShift Developer CLI (odo): Particularly well suited for the inner loop of development is the command line interface odo. It lets you code on your machine, but executes every code change directly in your OpenShift (or Kubernetes) environment. As before, odo does not require knowledge about Dockerfiles or Kubernetes YAML. If you are interested in this topic, you will find more in this article by Daniel Brintzinger.
  • Red Hat OpenShift Local: A pre-configured OpenShift environment for development purposes to run on your local machine. This is the OpenShift pendant to Minikube. You can read more about it in the following article by Xander Soldaat. 
  • Red Hat OpenShift Dev Spaces: A Web-based Integrated Development Environment (IDE) running in a container on OpenShift. This article puts a focus on this developer service. Dev Spaces is the next generation of its predecessor CodeReady Workspaces. Andrew Pitt published an article about it.
  • Red Hat OpenShift Pipelines: This enables you to run your container-based build pipelines natively in OpenShift.
  • Red Hat OpenShift GitOps: Describe your deployments as code, store them in a Git repository, and synchronize them into an OpenShift environment. For more information on Pipelines and GitOps, have a look at Part 3 of our series.
  • Red Hat Service Mesh: Provides various platform services that can be leveraged by your applications. Examples are traffic management, telemetry & observability, security, and policy enforcement. Please find more information on Service Mesh in this article by Ortwin Schneider.

Authors

Benjamin Schmeling

​Benjamin Schmeling is a solution architect at Red Hat with more than 15 years of experience in developing, building, and deploying Java-based software. His passion is the design and implementation of cloud-native applications running on Kubernetes-based container platforms.

Maximilian Dargatz

I am Max, live in Germany, Saxony-Anhalt, love the family-things of life but also the techie things, particularly Kubernetes, OpenShift and some Machine Learning.

