As time passes, our IT World is continuously changing…

February 20, 2023

Ten years ago, companies were desperately trying to implement new technologies to manage and operate their virtual machines.

They did so with some kind of “multi-cloud management software” or with a management tool from the hypervisor vendor of their choice at the time, such as vCenter from VMware.

There was also a nice piece of software in place called ManageIQ, shipped downstream at Red Hat as CloudForms. With this software I could orchestrate, among other things, virtual machines on several cloud platforms like AWS, Azure, OpenStack, VMware and so on to get a “single pane of glass”.

And nowadays?

Nowadays, only a couple of years later, companies are still in the process of seeking out “multi-cloud management tools”. But this time not so much for managing and operating virtual machines, but for managing and operating containers, also known as microservices or “single business values”.

Moving on from VMs is sooner or later inevitable, and we should prepare ourselves for this shift, because hybrid multi-cloud management, when it comes to Kubernetes (k8s), can be hard to digest: many organizations deploy more and more applications across multiple clouds, and new challenges arise with them!

Across multi-platform deployments, especially in bigger companies, it is really hard to keep track of all my Kubernetes clusters and resources out there.

To make sure my clusters and my apps are up and running and in good condition, I need something that reduces effort and time. I do not want to type a different URL into my browser for each single cluster just to check every single Kubernetes resource and object, since this is rather troublesome.

We also have to preserve governance: obey the rules and comply with pre-determined sets of compliance policies – e.g. which operator is allowed on which cluster set, which RBAC roles apply, and so on.

Is there already a solution in place to leverage a common Cluster Orchestration and a common Management Layer?

With RHACM (Red Hat Advanced Cluster Management for Kubernetes), a very fine new piece of software has been introduced to orchestrate and operate all my Kubernetes clusters, no matter where I plan to install or maintain them.

Open Cluster Management (OCM) is the community-driven open source project from which the sources for RHACM are derived. The repositories are currently being aligned; unfortunately, this has not been the case in the past.

The whole project is focused on multi-cluster and multi-cloud scenarios. RHACM has a lot of open APIs, evolving within this project, for all kinds of things like registering multiple clusters, distributing work, or dynamically placing policies and workloads.
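As a small sketch of those open APIs: registering a cluster with the hub goes through a ManagedCluster custom resource. The cluster name and labels below are illustrative placeholders, not from this article:

```yaml
# Illustrative ManagedCluster resource from the OCM registration API.
# Name and labels are placeholders.
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: my-aws-cluster          # placeholder cluster name
  labels:
    cloud: aws                  # arbitrary labels, later usable for placement
    environment: production
spec:
  hubAcceptsClient: true        # the hub accepts this registration request
```

Once accepted, the cluster shows up on the hub and its labels can drive placement decisions.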

OCM/RHACM is also part of the CNCF (Cloud Native Computing Foundation).

When it comes to OCM/RHACM, people ask different questions, depending entirely on the department they work for. But what might they ask? One can very roughly distinguish three departments and the questions they might be asking.

  1. The Operations Team might ask:
  • How can I monitor all my clusters across different platforms?
  • How quickly can I detect and fix a failed component in one of my clusters?
  • How can I manage the lifecycle of multiple clusters regardless of where they reside, in whichever cloud or platform?
  • …
  2. The DevOps Team might ask:
  • How can I automate the provisioning and deletion of my Kubernetes clusters?
  • How can I centrally control my workload placement based on policies?
  • How can I centrally control my workload placement based on capacity?
  • How can I centrally control my workload placement based on labels?
  • …
  3. The Security Team might ask:
  • How do I ensure all my clusters are compliant with my defined policies?
  • How do I set consistent security policies across diverse environments and ensure enforcement?
  • How do I get alerted on any configuration drift and remediate it?
  • …
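To illustrate the DevOps questions above: in OCM/RHACM, label- and capacity-driven workload placement is expressed with a Placement resource that selects clusters from a cluster set. A hedged sketch, with made-up names and labels:

```yaml
# Sketch of a Placement selecting at most two production AWS clusters.
# Cluster set name and labels are illustrative.
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: place-webapp
  namespace: webapp
spec:
  clusterSets:
    - default                  # the ManagedClusterSet to pick from
  numberOfClusters: 2          # capacity-style constraint: cap the selection
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchLabels:         # label-based control
            cloud: aws
            environment: production
```

Workloads or policies bound to this Placement then land only on the matching clusters, controlled centrally from the hub.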

All these questions generally point to the following problems and issues:

  • Apps are error-prone and difficult to manage at scale.
  • Security controls are often inconsistent across environments.
  • Configurations, policies, and compliance are difficult to manage across environments.

With RHACM we get answers that solve those problems and issues in different ways, but ALWAYS orchestrated centrally from one single entry point (the RHACM CLI or the RHACM web UI).
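On the governance side, for example, RHACM describes the desired state as Policy resources. A minimal, illustrative policy that reports (rather than enforces) a missing namespace; all names here are invented for the sketch:

```yaml
# Illustrative RHACM Policy: report (not enforce) a missing namespace.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-require-namespace
  namespace: rhacm-policies        # placeholder namespace
spec:
  remediationAction: inform        # "enforce" would auto-remediate drift
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: require-namespace
        spec:
          severity: medium
          object-templates:
            - complianceType: musthave   # the object must exist
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: team-apps        # placeholder namespace to require
```

Switching `remediationAction` to `enforce` turns the same definition into automatic drift remediation across all clusters the policy is placed on.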

Additionally, it is very common to have workload-specific OpenShift Container Platform clusters, such as workers with GPUs installed for real-time calculations. Edge clusters can be a challenge as well. So here we are: multiple clusters are a fact of life.

Architectural Overview of RHACM

The Red Hat Advanced Cluster Management for Kubernetes consists of several multicluster components that are used to access and manage your clusters.

Mainly it comprises two components: the hub cluster and the managed clusters.

The hub cluster aggregates information from multiple clusters by using an asynchronous work request model. We have a pull model here: the agent on each managed cluster (the klusterlet) pulls work requests from the hub cluster and reports information back, so the hub gathers information without having to reach into each cluster.

If necessary, I could replace the standard pull model with a push model.

The RHACM hub cluster maintains the state of all clusters in its own database. These cluster states come from the klusterlet agents and the applications running on the managed clusters; they all report their state to the hub cluster.

The hub cluster also uses etcd, a distributed key-value store, just like Kubernetes or OpenShift. It stores the state of work requests and results from all clusters reporting to the hub cluster.

The managed cluster initiates a connection to the hub cluster, receives work requests, applies them, and then returns the results. The managed cluster connects to various services within the cluster for operations, including the Kubernetes API service and Weave for topology.
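Which agent components run on a managed cluster and report back to the hub is itself configurable. A hedged sketch of a KlusterletAddonConfig (the cluster name is a placeholder):

```yaml
# Illustrative KlusterletAddonConfig: which agent add-ons run on a
# managed cluster and report state back to the hub.
apiVersion: agent.open-cluster-management.io/v1
kind: KlusterletAddonConfig
metadata:
  name: my-aws-cluster         # placeholder; matches the ManagedCluster name
  namespace: my-aws-cluster
spec:
  applicationManager:
    enabled: true              # application lifecycle agent
  policyController:
    enabled: true              # governance/policy agent
  searchCollector:
    enabled: true              # feeds the hub's search index
  certPolicyController:
    enabled: true              # certificate policy checks
```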

Ansible Integration in RHACM as of Version 2.3

As of version 2.3 of Red Hat Advanced Cluster Management, the Ansible Automation Controller has been integrated into RHACM.

This way you can very easily create pre-hook and post-hook Ansible job instances that run BEFORE or AFTER creating or upgrading your clusters.

This means that Red Hat Advanced Cluster Management for Kubernetes, together with Red Hat Ansible Automation Platform, allows you to bridge the configuration gap that has existed for some time between your existing IT infrastructure and cloud-based systems.

To date, we can divide the Ansible capabilities into four different use cases:

  1. When creating OpenShift clusters: before or after the cluster resources are created, we can trigger different job templates.
  2. When upgrading OpenShift clusters: we can trigger jobs before or after updating any cluster we control with RHACM.
  3. In the Governance, Risk and Compliance section: we can trigger Ansible job templates to resolve policy violations.
  4. In the Application Lifecycle in RHACM: here, too, an Ansible job can be triggered before AND after an application has been created.
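The first two use cases are wired up through the ClusterCurator resource, which names the Ansible job templates to run around an install or upgrade. A hedged sketch; the template and secret names below are invented:

```yaml
# Sketch of a ClusterCurator hooking Ansible job templates into
# cluster install and upgrade. Template/secret names are placeholders.
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: ClusterCurator
metadata:
  name: my-aws-cluster
  namespace: my-aws-cluster
spec:
  desiredCuration: install
  install:
    towerAuthSecret: ansible-credential   # secret holding the controller token
    prehook:
      - name: configure-dns               # job template run before install
    posthook:
      - name: configure-loadbalancer      # job template run after install
  upgrade:
    towerAuthSecret: ansible-credential
    prehook:
      - name: backup-checks
    posthook:
      - name: smoke-test
```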

Two examples that I have tried out myself in our RHACM environments at Red Hat:

Cluster Install: 

We configured a load balancer and updated a database for connecting a WordPress client. Additionally, we opened a port (8081) in a firewall to make the application ready for use, since we had changed the WordPress port from 80 to 8081 in our DeploymentConfig.
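The firewall part of such a post-install job could look roughly like the playbook below. The host group and the use of firewalld are assumptions based on the description above, not the actual playbook we ran:

```yaml
# Hypothetical Ansible playbook opening port 8081 for the relocated
# WordPress service; the host group name is a placeholder.
- name: Open the new WordPress port
  hosts: loadbalancers
  become: true
  tasks:
    - name: Allow TCP 8081 through firewalld
      ansible.posix.firewalld:
        port: 8081/tcp
        permanent: true     # survive firewalld reloads
        immediate: true     # apply to the running firewall now
        state: enabled
```

Saved as a job template in the Automation Controller, this is exactly the kind of task a cluster-install posthook can trigger.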

Cluster Lifecycle: 

Configuring cloud-defined storage: since our root device consumed too much storage capacity, we defined a NEW persistent volume for our WordPress server during an update.

By integrating software like Ansible to automate everything, there are no limits to configuration, be it during cluster installation or later, when administering the cluster on Day 2, the operational phase.

But how can I now install RHACM as a CR (Custom Resource) with a CRD (Custom Resource Definition)?

Everything that gets installed consists of Kubernetes custom resources, defined by Custom Resource Definitions (CRDs) that are created for you when RHACM is installed. It is “just” an extension of the existing Kubernetes API: you get additional “kind:” resources.

And because ACM is deployed as Kubernetes-native objects, as pods, you can interact with them the same way you would with a normal pod. For instance, running ‘oc get application’ retrieves a list of deployed RHACM applications just as ‘oc get pods’ retrieves a list of deployed pods.

Since I’m a Solution Architect at Red Hat, I’m going to explain the installation process using my OpenShift Cluster.

Here we can use either the OpenShift 4 web console’s built-in OperatorHub or the OpenShift CLI to install RHACM. The installation breaks down into five steps:

  1. Prepare the environment for the RHACM installation.
  2. Create a new OpenShift project and namespace (running the UI-guided installation in OpenShift, this happens automatically).
  3. Install RHACM and subscribe to the RHACM Operator group (running the UI-guided installation in OpenShift, this happens automatically).
  4. Create the MultiClusterHub resource (running the UI-guided installation in OpenShift, this happens automatically).
  5. Verify the RHACM installation.

Installation of ACM

I’m doing this with my OpenShift web UI; it is the preferred way anyway, and it is more comfortable for an old guy like me. :)

Go to Operators >> OperatorHub and search for “Advanced Cluster Management”.

Push the Install button.

Choose the current update channel (as of February 2023, version 2.7) and keep the predetermined settings.

After installing the operator, you need to install at least one instance of kind: MultiClusterHub (the installer will prompt you automatically).
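That instance is itself just a small custom resource. Created via the web console, the defaults look roughly like this (the empty spec accepts all defaults):

```yaml
# Default-style MultiClusterHub instance in the standard namespace.
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}        # the defaults are fine for a first installation
```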

Now wait a couple of minutes. Soon you should be able to access the RHACM web UI. But how do I retrieve the URL to access it?

Either via the UI in the Routes section of the corresponding namespace (default: open-cluster-management), or alternatively head over to your Linux or Mac console, authenticate with the “oc” CLI tool, and do the following:

# oc project open-cluster-management

# oc get route -n open-cluster-management

or

# oc describe route multicloud-console

Pick up the corresponding URL and put it into the browser of your choice.

Alternatively, you might install RHACM via the OpenShift CLI tool “oc”.
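For that CLI route, the three “automatic” steps from the list above become explicit manifests you `oc apply`. The channel reflects the 2.7 release mentioned in this article and may differ for you:

```yaml
# Namespace, OperatorGroup, and Subscription for a CLI-based RHACM install.
apiVersion: v1
kind: Namespace
metadata:
  name: open-cluster-management
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: acm-operator-group
  namespace: open-cluster-management
spec:
  targetNamespaces:
    - open-cluster-management      # operator watches its own namespace
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: acm-operator-subscription
  namespace: open-cluster-management
spec:
  channel: release-2.7             # current channel as of February 2023
  name: advanced-cluster-management
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

After the subscription resolves, you create the MultiClusterHub resource just as in the UI flow and wait for the route to appear.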


Now you can start to install or import some Kubernetes clusters. Have fun deploying and operating all your clusters from one single place.

In the midst of writing this blog article, Red Hat Advanced Cluster Management for Kubernetes 2.7 was launched. The web console, which has existed since RHACM first launched alongside OpenShift 4.4 back in June 2020, is now integrated into the OpenShift cluster itself, leveraging an OpenShift plugin to provide an integrated RHACM web console. It is really worth peeking into it a bit.

Thanks for reading this blog article on Open Sourcerers.

Next time, in the next article related to RHACM, I will tell you about the main features and components of RHACM, and we will play through a live scenario with Red Hat Advanced Cluster Management for Kubernetes.

