
Why do you need Ansible to Manage your Middleware Runtimes?

Surely, if a challenge (faster releases, becoming more agile, scalability, …) is coming up in the next few years for your application development and deployment, you have heard that containerizing your applications on OpenShift is the answer. And down the road, when your infrastructure and internal processes have evolved enough, and when the legal issues around storing confidential or privileged information in the cloud have been addressed, this might well be the way. But what about now? What about the workloads currently deployed on non-container systems, like JEE application servers? You know, that sensitive app that is simply nowhere close to being migrated to containers and/or the cloud? Those apps that still make up 70% of your workloads today?

This is where Ansible, especially Ansible for Middleware and Runtimes, comes into play. But first, let’s explain the why, before we dive into the how.

Making the Case

Cloud or not, the days of manual releases, with several hours of downtime to carefully build and deploy the new environment, are gone. Automation is everywhere, and modern applications, especially those competing in crowded markets, need a fully automated release process, which includes managing the underlying infrastructure as code (Infrastructure as Code). Why? Because web and mobile apps, in particular, are rarely monolithic. Built using microservices, they also rely on infrastructure and other services to deliver their functionality. Which often means that an update is not just about deploying a new version of the application, but also involves configuration and even infrastructure changes.

Okay, what about an Example?

Let’s discuss an example to make this a bit more concrete and drive our point home. A large insurance company has an app for its customers so they can easily file claims. First, for obvious security and confidentiality reasons, the exchange with the user, whether through the web app or the mobile applications, needs to be encrypted. This is offloaded to an Nginx server that is also used as a load balancer in front of the system. Then, when a user connects to the web app, the authentication process is delegated to a single sign-on server, such as Keycloak, or its Red Hat supported version, RHSSO.

Once authenticated, users can file claims using a webapp hosted on a stateful cluster of JBoss EAP instances. This webapp does most of the work, but validates several key items of the claims using several newly created microservices (built using Quarkus, each deployed on its own dedicated server). Finally, when all is said and done, the claims are persisted in a SQL database, and the app connects to a remote MOM (let’s assume ActiveMQ) to notify the appropriate system of the newly filed claims, so that the insurance company’s employees can start studying them.
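To give a feel for how such a topology translates into automation terms, here is a minimal sketch of an Ansible inventory for this example system. All hostnames and group names are, of course, hypothetical:

```yaml
# inventory.yml - hypothetical inventory for the example claims system
all:
  children:
    loadbalancers:     # Nginx: TLS offloading + load balancing
      hosts:
        lb01.example.com:
    sso:               # Keycloak / RHSSO single sign-on
      hosts:
        sso01.example.com:
    eap_cluster:       # stateful JBoss EAP cluster hosting the claims webapp
      hosts:
        eap01.example.com:
        eap02.example.com:
    microservices:     # Quarkus validation services, one per dedicated server
      hosts:
        quarkus01.example.com:
        quarkus02.example.com:
    databases:         # SQL persistence layer
      hosts:
        db01.example.com:
    brokers:           # ActiveMQ message-oriented middleware
      hosts:
        mq01.example.com:
```

Grouping hosts by role like this is what later lets a single playbook target each tier of the system independently.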

Why not just put the app into the Cloud or Containers?

It’s also relevant to point out that our example system can NOT be migrated to a public cloud, due to legal constraints. Even on premises, our example may not be a good fit for OpenShift’s capabilities, as it does not really need to scale up (and down). The number of claims filed is steady, and even if a catastrophic event were to suddenly increase the number of requests, it may not be commercially sound to let the system scale up to compensate. The rest of the company would end up swamped by all those requests, and the insurance company most likely has a completely different process to handle this kind of situation.

And we didn’t even talk about the Release Process… yet.

So, let’s discuss the release process. Naively, one could think that, most of the time, we just need to deploy a new version of the application onto the JBoss EAP cluster. That might be true, but quite often, changes will also require updating the SQL schema and data. A release might also alter the way the app delegates authentication to RHSSO, which, in turn, will require modifying that service’s configuration. SSL certificate renewal (or simply patching a CVE in Nginx) may also require updating the load balancer. New features in the system may also leverage its HTTP redirection capabilities and thus lead to changes to its configuration during the release process. Tweaks to the EAP clustering configuration may impact the network stack, requiring changes to firewall rules or TCP/IP settings. And so on.
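A release like the one just described could be orchestrated as a multi-play playbook, each play targeting one tier of the system. The sketch below covers two of the steps (certificate renewal on the load balancer and a SQL migration); file paths, database names and the migration script are illustrative, and the PostgreSQL module comes from the community.postgresql collection:

```yaml
# release.yml - illustrative outline of a multi-component release
- name: Update load balancer (certificate renewal, new redirections)
  hosts: loadbalancers
  become: true
  tasks:
    - name: Deploy renewed TLS certificate
      ansible.builtin.copy:
        src: files/claims.example.com.crt
        dest: /etc/nginx/certs/claims.example.com.crt
      notify: Reload nginx

    - name: Update Nginx configuration (redirections, vhosts)
      ansible.builtin.template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      notify: Reload nginx

  handlers:
    - name: Reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded

- name: Apply SQL schema and data changes
  hosts: databases
  become: true
  tasks:
    - name: Run the migration script for this release
      community.postgresql.postgresql_script:
        db: claims
        path: /opt/releases/v2/migration.sql
```

Additional plays for RHSSO, the EAP cluster and the firewall rules would follow the same pattern, which is precisely why the whole release can live in one versioned playbook.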

In short, maintaining and updating the overall system requires automation for all the components involved.

Ansible to the Rescue? Not that easy.

Of course, a tool like Ansible already comes with primitives to manage infrastructure components such as firewall configuration and network settings, Linux services like Nginx, or RDBMS systems like PostgreSQL. But when it comes to middleware products, such as JBoss EAP, ActiveMQ, RHSSO and even JBoss Web Server, it becomes more complex. Indeed, those Java products run on top of the Java Virtual Machine, which means, compared to Nginx for instance, that they are not directly under the control of Ansible (and the underlying operating system). Also, compared to Nginx or PostgreSQL, they are not always fully integrated into the Linux distribution as a system service. In the case of JBoss EAP, the waters are even muddier, as the way the server manages its main configuration file (standalone.xml) does not play nicely with Ansible’s templates.
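The first half of that statement is easy to illustrate: for the infrastructure components, well-established modules already exist (firewalld below ships in the ansible.posix collection, the PostgreSQL module in community.postgresql; the database name is illustrative):

```yaml
# Tasks relying only on existing Ansible primitives
- name: Open the HTTPS port on the load balancer
  ansible.posix.firewalld:
    service: https
    permanent: true
    immediate: true
    state: enabled

- name: Make sure Nginx is installed
  ansible.builtin.package:
    name: nginx
    state: present

- name: Ensure the Nginx system service is running and enabled
  ansible.builtin.service:
    name: nginx
    state: started
    enabled: true

- name: Create the claims database on PostgreSQL
  community.postgresql.postgresql_db:
    name: claims
    state: present
```

No equivalent, product-aware modules existed for the Java middleware layer, which is exactly the gap discussed next.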

Put simply, there is a lack of integration between middleware products and Ansible.

Meet Ansible Middleware Automation

This is where Ansible for Middleware & Runtimes comes in. It aims to provide the integration needed to make all those products first-class citizens in the Ansible ecosystem, so that automation can cover the full stack, from the lowest part of the system (operating system, network stack, …) to the very last layer (JEE app server configuration and webapp deployments). With such integration, the roadblock to fully automating application releases, and to maintaining the overall system in a proper state, is lifted. It’s also a perfect fit for continuous integration and continuous delivery. This means that, without leveraging a container platform such as OpenShift, the application can still be managed in a fully automated manner, with the same level of comfort as a cloud infrastructure would provide.
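As a teaser for the follow-up post, here is a rough sketch of what provisioning a Wildfly instance with the middleware_automation.wildfly collection from Galaxy can look like. Role and variable names may differ slightly depending on the collection version, and the Wildfly version shown is only an example:

```yaml
# site.yml - sketch of provisioning Wildfly with the Ansible Middleware collection
- name: Install Wildfly and run it as a system service
  hosts: eap_cluster
  become: true
  vars:
    wildfly_version: 26.1.2.Final   # illustrative; pin the version you actually need
  roles:
    - middleware_automation.wildfly.wildfly_install   # download and unpack the server
    - middleware_automation.wildfly.wildfly_systemd   # integrate it as a systemd service
```

In a few lines, the Java application server becomes just another managed service, no different from Nginx or PostgreSQL from the playbook’s point of view.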

Further Reading

Did I get you interested? If you want to dive deeper into how this is actually done, look out for an upcoming post that goes into the technical details with some examples. In the meantime, I’ve added a number of resources below to get you started.

Using Ansible to manage Wildfly / JBoss EAP servers:

Using Ansible to manage Apache Tomcat / JBoss Web Server:

Ansible collections for Middleware on Galaxy and the Ansible Middleware GitHub organisation.

By Romain Pelisse

A 2005 graduate of ESME Sudria, where he also taught part time until 2012, Romain has made Open Source and Free Software the leitmotiv of his career. After a decade of consulting around Java technologies (JBoss and its ecosystem) and Linux, first at Atos and then at Red Hat, he joined the R&D department and the JBoss Sustaining Engineering Team. On top of his contributions to many Java projects (most notably Wildfly, PMD and Bugclerk), Romain is also a strong advocate of Linux, Git, Ansible and Bash. Since 2019, Ansible has become one of his main focuses as he joined the Ansible Middleware Initiative, a task force within the company aimed at providing the best possible integration between the Red Hat Runtimes software and Ansible.
