Latest News

…using jolokia to monitor/manage SwitchYard

Install jolokia

Get the latest jolokia war file from their website, rename it to jolokia.war and deploy it into the server.
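For example, on a JBoss/WildFly standalone server (the download URL is elided here; the paths are typical defaults):

```shell
# Download the jolokia WAR agent (use the URL from the jolokia download page)
curl -L -o jolokia.war "<jolokia-war-download-url>"
# Deploy it into a standalone server
cp jolokia.war "$JBOSS_HOME/standalone/deployments/"
```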

Get a list of all SwitchYard MBeans

All SwitchYard MBeans are registered under the org.switchyard.admin JMX domain name, as per the documentation. So we can get a list of what we have:
or a description of an MBean:
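For example, with jolokia's list operation (host/port and the MBean name are assumptions; jolokia takes the JMX domain as a path segment):

```shell
# List everything registered under the SwitchYard admin domain
curl http://localhost:8080/jolokia/list/org.switchyard.admin

# Describe a single MBean (the ObjectName here is hypothetical; quotes may
# need URL-encoding as %22)
curl 'http://localhost:8080/jolokia/list/org.switchyard.admin/type=Application,name="my-app"'
```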

As mentioned in the documentation, there are different types of MBeans:
  • Application: Management interface for a SwitchYard application.
  • Service: Management interface for a composite service in a SwitchYard application. One MBean is registered per composite service.
  • Reference: Management interface for a composite reference in a SwitchYard application. One MBean is registered per composite reference.
  • Binding: Management interface for a gateway binding attached to a composite service or reference. One MBean is registered per binding instance on an application’s composite services and references.
  • ComponentService: Management interface for a component service in a SwitchYard application. One MBean is registered per component service.
  • ComponentReference: Management interface for a component reference in a SwitchYard application. One MBean is registered per component reference.
  • Transformer: Management interface for a transformer in a SwitchYard application. One MBean is registered per transformer.
  • Validator: Management interface for a validator in a SwitchYard application. One MBean is registered per validator.
  • Throttling: Management interface for throttling a service in a SwitchYard application. One ThrottlingMBean is registered per composite service instance.
There are two additional MBean types, supertypes of the above, that define common behavior.
  • Lifecycle: Supertype of BindingMXBean which provides operations related to lifecycle control for service and reference bindings.
  • Metrics: Supertype of multiple MBeans providing message metrics information.

Starting/Stopping bindings

As service and reference bindings extend the Lifecycle MXBean, we can start or stop a binding and check what state it is in:
  • Check the state
  • Stop the binding
  • Check the state
  • Start the binding
  • Check the state
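Sketched with jolokia's read and exec operations (the binding's ObjectName is hypothetical, and the quotes in it may need URL-encoding):

```shell
BASE=http://localhost:8080/jolokia
MBEAN='org.switchyard.admin:type=Binding,name="_MyService_soap_1"'

curl "$BASE/read/$MBEAN/State"    # check the state
curl "$BASE/exec/$MBEAN/stop"     # stop the binding
curl "$BASE/read/$MBEAN/State"    # should now report STOPPED
curl "$BASE/exec/$MBEAN/start"    # start the binding
curl "$BASE/read/$MBEAN/State"    # should report STARTED again
```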

Getting metrics

Getting metrics is very simple; the only difficulty is knowing which metrics are worth collecting, as every component, composite and binding provides many of them. Once you know what information you need, you can use jolokia to fetch it, and perhaps feed it into an Elasticsearch or InfluxDB database and use Kibana/Grafana to visualize and explore it. RTGov is also available.
  • Get all the information available for a binding
  • Get the TotalCount for a binding
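For example (the service ObjectName is an assumption):

```shell
MBEAN='org.switchyard.admin:type=Service,name="MyService"'

# All attributes (metrics included) of the MBean
curl "http://localhost:8080/jolokia/read/$MBEAN"
# Just the TotalCount attribute
curl "http://localhost:8080/jolokia/read/$MBEAN/TotalCount"
```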

Getting metrics from multiple MBeans

You might want to get some metrics for more than one MBean. You can use wildcards for this; once you know which types of MBeans and which attributes you want, it is very easy.
  • More complex pattern
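A couple of sketches using ObjectName wildcards (the attribute names are the usual SwitchYard metrics, but verify them against your own MBeans):

```shell
# TotalCount for every composite service in the domain
curl 'http://localhost:8080/jolokia/read/org.switchyard.admin:type=Service,*/TotalCount'

# More complex pattern: several metrics from every binding
curl 'http://localhost:8080/jolokia/read/org.switchyard.admin:type=Binding,*/TotalCount,FaultCount,AverageProcessingTime'
```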

Search the MBeans you care for

When you have many apps deployed, you might not know which MBeans are there, and their ObjectNames. You can search for them:
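With jolokia's search operation, for example:

```shell
# All SwitchYard MBeans
curl 'http://localhost:8080/jolokia/search/org.switchyard.admin:*'

# Only the binding MBeans
curl 'http://localhost:8080/jolokia/search/org.switchyard.admin:type=Binding,*'
```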


If you want to test this, I have created a Dockerfile that you can use right away, based on the latest SwitchYard image. It is available here.
You just need to get this file, and build the image:
curl -o Dockerfile
docker build --rm -t "switchyard-with-jolokia" .
And then run it:
docker run -it --rm -p 8080:8080 -p 9990:9990 switchyard-with-jolokia

…where to bundle SwitchYard application’s dependencies

One problem, though, that I’ve constantly seen is where to package your dependencies: we repeatedly fail to put the classes shared by several applications in the proper place.

Package common classes and model classes

If a class is going to be used by two or more SwitchYard applications, it needs to be placed where both applications will load it through the same classloader.
In the JEE world, if the same class is loaded by different classloaders, it is not the same class.
Let’s take for example an application consisting of 3 SwitchYard applications, 2 of them (Sy app 1 and Sy app 2) packaged in an .ear file and another one (Sy app 3) packaged as a jar. These applications use some common classes (JAXB model, Entity beans, utilities, BaseMessageComposers, …​) that are packaged in two jar files, dependency A and dependency B.
If you have a request that is initiated through Sy app 1 and calls Sy app 2, you can safely use any class bundled in dependencyA or dependencyB, as long as for dependencyB you have a reference to the appropriate module (this dependency can be deployed either as a dynamic module, if left in the deployments folder, or as a static module, if registered in the modules folder). A reference to a module can be specified in the META-INF/MANIFEST.MF file or in the jboss-deployment-structure.xml file (see the JBoss documentation for how to do this).
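As a sketch, a jboss-deployment-structure.xml placed at the root of the .ear could declare that module reference (the module name is an assumption; for a static module you would use its module id instead of the deployment.* name):

```xml
<jboss-deployment-structure>
  <deployment>
    <dependencies>
      <!-- dependencyB deployed as a dynamic module from the deployments folder -->
      <module name="deployment.dependencyB.jar" export="true"/>
    </dependencies>
  </deployment>
</jboss-deployment-structure>
```

The equivalent META-INF/MANIFEST.MF entry would be a single `Dependencies: deployment.dependencyB.jar` line.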
If you have a request that is initiated through Sy app 1 and calls Sy app 3, you can only use classes bundled in dependencyB for the request objects; otherwise you will most probably get a ClassCastException due to the class being loaded by two different classloaders. Any other use of classes from dependency A or B should be safe, as long as it doesn’t involve an object travelling from Sy app 1 to Sy app 3, or vice versa on the response path.
There are also some bugs in SY 1.1.1 (FSW 6.0) that raise similar and related problems, due to classloaders not being properly propagated and the incorrect use of some static code in the serialization framework.

…How to set up JBoss EAP on RHEL 7 (systemd-based Linuxes)

JBoss EAP (or WildFly) ships with an init.d script that does not play well with systemd. Configuring it is as easy as following these simple steps.
1- Create a group and user for the JBoss EAP process (username, uid, gid, and home to your preferences)
groupadd -r jboss -g 1000
useradd -u 1000 -r -g jboss -m -d /opt/jboss -s /sbin/nologin -c "JBoss user" jboss
2- Set the ownership of the home folder
chown -R jboss:jboss /opt/jboss
3- Create configuration directory for the JBoss EAP instance, create the configuration file for the EAP instance (of course, your values here), and then set appropriate permissions for the used folders.
mkdir /etc/jboss-as

cat > /etc/jboss-as/jboss-as.conf <<EOF
# Example values; adjust to your installation
JBOSS_HOME=/usr/share/jboss-as
JBOSS_USER=jboss
EOF
mkdir /var/log/jboss-as
mkdir /var/run/jboss-as
chown -R jboss:jboss /var/log/jboss-as
chown -R jboss:jboss /var/run/jboss-as
4- Create the service file
cat > /etc/systemd/system/jboss-as-standalone.service <<EOF
[Unit]
Description=JBoss Application Server
After=network.target

[Service]
Type=forking
User=jboss
Group=jboss
# The init script name below is an assumption; adjust to your installation
ExecStart=/usr/share/jboss-as/bin/init.d/jboss-as-standalone.sh start
ExecStop=/usr/share/jboss-as/bin/init.d/jboss-as-standalone.sh stop

[Install]
WantedBy=multi-user.target
EOF

5- Reload the systemd daemon, start the service, verify its status and enable the service
systemctl daemon-reload
systemctl start jboss-as-standalone.service
systemctl status jboss-as-standalone.service
systemctl enable jboss-as-standalone.service
6- Additionally, if you need to create firewalld rules for EAP, do:
cat > /etc/firewalld/services/jboss-as-standalone.xml <<EOF
<?xml version="1.0" encoding="utf-8"?>
<service version="1.0">
  <short>jboss-as-standalone</short>
  <port port="8080" protocol="tcp"/>
  <port port="8443" protocol="tcp"/>
  <port port="8009" protocol="tcp"/>
  <port port="4447" protocol="tcp"/>
  <port port="9990" protocol="tcp"/>
  <port port="9999" protocol="tcp"/>
</service>
EOF

firewall-cmd --zone=public --add-service=jboss-as-standalone
firewall-cmd --permanent --zone=public --add-service=jboss-as-standalone
firewall-cmd --zone=public --list-services
firewall-cmd --permanent --zone=public --list-services

… Java / JBoss AS tuning

When you are about to move your application to a production environment, you always wonder how well it will perform, or whether you will run into any kind of problem. It is good practice to perform some characterization testing of your application before reaching this stage. To succeed in a characterization and tuning process you need to continually gather information about your system, not only during the stress tests you may execute but throughout the normal lifetime of your application, as this will be closer to reality: you will see whether the application/system is able to recover and continue its operation after a burst of load, or whether a long period of inactivity affects your deployment in some way.
People usually overlook the process of understanding their own application, as well as the operating system, the JVM and the architecture the application runs on, even though this can have as much impact as the development itself, or more.
In a Java application, the most important parts to monitor, at least from the beginning are:
  • Process (cpu, load, threads,…​)
  • IO (disk and network)
  • JVM (memory and garbage collection)
  • Application specifics (connection pools, messaging queue depths, …​)
  • Business related metrics (Request/response timings)
In this article I will focus on the first 3, as they are general purpose.

JVM Tuning

One of the things you’ll need to look at is the JVM. To do that, I often use Java Mission Control.

Configure Java Mission Control to connect to a remote JBoss AS instance

To enable Java Mission Control for remote connection to a JBoss AS instance you should:
  • Edit the “$JAVA_HOME/bin/jmc.ini” file, which contains the JVM and JAVA_OPTS related settings. Somewhere near the end, add the “-Xbootclasspath/a” option to include “$JBOSS_HOME/bin/client/jboss-client.jar”, as follows:
-Xbootclasspath/a:/jboss-eap-6.2.0/bin/client/jboss-client.jar
  • Once the “jmc.ini” file has been edited as above, Java Mission Control (jmc) can be started as follows:
  • Then you’ll be able to connect with:
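A sketch of both steps, assuming EAP 6 defaults (native management port 9999, remoting-jmx protocol):

```shell
# Start Java Mission Control
"$JAVA_HOME/bin/jmc" &

# Then, in the JVM Browser, create a remote connection using a JMX service
# URL like the following, supplying credentials from the EAP ManagementRealm:
#   service:jmx:remoting-jmx://localhost:9999
```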

Configure JVM to enable Flight Recording

To use Oracle JDK’s Java Flight Recorder you’ll need to run JBoss AS with the following options:
-XX:+UnlockCommercialFeatures -XX:+FlightRecorder
These can be used with JAVA_OPTS on the shell, or editing either the $JBOSS_HOME/bin/standalone.conf, $JBOSS_HOME/domain/configuration/domain.xml or $JBOSS_HOME/domain/configuration/host*.xml depending on where the jvm parameters are set for your standalone or domain instance.
You can start a flight recording right when you start your JVM, or you can start it on demand using jcmd; once JBoss is running with both of the above options, jcmd is the tool used to start, check, dump and stop JFR recordings.
  • $JAVA_HOME/bin/jcmd $JBOSS_PID JFR.start: Starts JFR recording. It returns the recording number, which is used to dump the results to disk
  • $JAVA_HOME/bin/jcmd $JBOSS_PID JFR.check: Lists the number of recordings running
  • $JAVA_HOME/bin/jcmd $JBOSS_PID JFR.dump recording=<recording_number> filename=<path/file>: Dumps the JFR recording to disk
  • $JAVA_HOME/bin/jcmd $JBOSS_PID JFR.stop recording=<recording_number>: Stops the specified JFR recording. This should be done after dumping and not prior.
  • $JAVA_HOME/bin/jcmd $JBOSS_PID help <command>: Lists the available options for any of the commands above.
To limit the duration of the recording, for example to 2 hours, run:
$JAVA_HOME/bin/jcmd $JBOSS_PID JFR.start duration=2h name=MyRecording filename=/tmp/myrecording.jfr
A JFR recording should occur when the system starts displaying the problematic behaviour on a production environment. On a development or testing environment, recording can start as soon as JBoss comes up and stop after the testing is done.
Scripting all these commands is the best way to make it work, so you can start a recording before doing some tests, and dump the recording when done, so you can analyze it.
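A minimal wrapper sketch (the pgrep pattern, recording name and output path are assumptions; older JDKs may require recording=<number> instead of name= for JFR.dump/JFR.stop):

```shell
#!/bin/sh
# Record a JFR session around a test run and dump it to disk
JBOSS_PID=$(pgrep -f jboss-modules.jar | head -n 1)
OUT=/tmp/test-run.jfr

"$JAVA_HOME/bin/jcmd" "$JBOSS_PID" JFR.start name=TestRun
# ... run your load test here ...
"$JAVA_HOME/bin/jcmd" "$JBOSS_PID" JFR.dump name=TestRun filename="$OUT"
"$JAVA_HOME/bin/jcmd" "$JBOSS_PID" JFR.stop name=TestRun
echo "Recording written to $OUT"
```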
Marcus Hirt has written a good tutorial on JMC.

Other JVM tuning tools

Another useful tool that I often use for scripting is Swiss Java Knife – jvm-tools. It has command line tools that enhance some of the out-of-the-box commands/tools that come with the JVM, like jps, jmap,…​

Monitoring the OS

When you monitor the OS you need to pay special attention to the cpu, load, threads as well as io, disk and network.
I’ve seen times where a poor networking configuration or architecture made applications behave very badly, so you need to constantly track every disk and network adapter you are using and, for the network adapters, the type of traffic they will be handling (tcp, udp,…​).
Also, any other process running on the machine used for testing needs to be identified, and its impact minimized as much as possible, so the information gathered relates as closely as possible to your real application.

sysstat – sar/ksar

The sysstat utilities are a collection of performance monitoring tools for Linux. These include sar, sadf, mpstat, iostat, nfsiostat-sysstat, cifsiostat, pidstat and sa tools.
They can monitor a huge number of different metrics:
  • Input / Output and transfer rate statistics (global, per device, per partition, per network filesystem and per Linux task / PID).
  • CPU statistics (global, per CPU and per Linux task / PID), including support for virtualization architectures.
  • Memory, hugepages and swap space utilization statistics.
  • Virtual memory, paging and fault statistics.
  • Per-task (per-PID) memory and page fault statistics.
  • Global CPU and page fault statistics for tasks and all their children.
  • Process creation activity.
  • Interrupt statistics (global, per CPU and per interrupt, including potential APIC interrupt sources, hardware and software interrupts).
  • Extensive network statistics: network interface activity (number of packets and kB received and transmitted per second, etc.) including failures from network devices; network traffic statistics for IP, TCP, ICMP and UDP protocols based on SNMPv2 standards; support for IPv6-related protocols.
  • NFS server and client activity.
  • Socket statistics.
  • Run queue and system load statistics.
  • Kernel internal tables utilization statistics.
  • System and per Linux task switching activity.
  • Swapping statistics.
  • TTY device activity.
  • Power management statistics (instantaneous and average CPU clock frequency, fans speed, devices temperature, voltage inputs, USB devices plugged into the system).
  • Filesystems utilization (inodes and blocks).
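A few typical sar invocations (the intervals and the sa file path are examples):

```shell
sar -u 5 12                  # CPU utilization: 12 samples, 5 seconds apart
sar -r 5 12                  # memory utilization
sar -d 5 12                  # per-device I/O statistics
sar -n DEV 5 12              # network interface statistics
sar -u -f /var/log/sa/sa10   # read back data collected by sadc on day 10
```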
Of course, it is quite easy to integrate sysstat with Kibana, Grafana, or any other metrics plotting tool.
There also seem to be some alternatives to sysstat, like DStat.

…intro to apiman

I’ll be introducing apiman in a series of blog posts, trying to provide insight into the parts that are usually not very well documented, like deploying, scaling and providing HA. But first, I need to cover some details about apiman: the concepts and the basic architecture.

Description of API Management

I will not rewrite what is already written, so here is a good description:
API management is the process of publishing, promoting and overseeing application programming interfaces (APIs) in a secure, scalable environment. It also includes the creation of end user support resources that define and document the API.
The goal of API management is to allow an organization that publishes an API to monitor the interface’s lifecycle and make sure the needs of developers and applications using the API are being met.
API management software tools typically provide the following functions:
1-Automate and control connections between an API and the applications that use it.
2-Ensure consistency between multiple API implementations and versions.
3-Monitor traffic from individual apps.
4-Provide memory management and caching mechanisms to improve application performance.
5-Protect the API from misuse by wrapping it in security procedures and policies.


Although there is a very nice and short video introducing the apiman concepts, I will provide a human-readable description of the core concepts here.
  • Organization: Anyone that wants to expose their APIs. The owner of the APIs.
  • Service: Usually, the APIs to be exposed. A service represents an external API that is being governed by the API management system.
  • Application: An application represents a consumer of an API. Typical API consumers are things like mobile applications and B2B applications.
  • Plan: The conditions/constraints that govern all the services of an organization, a concrete service or an application. These can be rate limiting, security, white/black listing, caching,…​ A plan is a set of policies that define a level of service.
  • Service contract: A service contract is simply a link between an application and a service through a plan offered by that service. This is the only way that an application can consume a service.
There is much more to it, which can be read in the docs. Although a work in progress, the concepts are already written up.

Use cases

API Management can be used in different scenarios.

Within an organization (On premise)

An organization wants control of its APIs, and can use an API management solution to expose these APIs and control their consumption, whether internally or externally. In this case, the internal department that owns the APIs will be the Organization, in apiman terms.

In the cloud

An organization wants to expose its APIs, and relies on a public cloud API management solution that provides everything needed to control/govern/monitor/monetize the APIs being exposed. In this case, the apiman Organization maps to this organization.

Basic architecture

apiman is composed of 4 architectural pieces:
  • APIManager UI: User interface layer for API Management. It is only a frontend, and makes REST calls to the APIManager backend which is the one having the information.
  • APIManager backend: It exposes a set of REST interfaces for managing the APIs, and it holds the API management information (data model). Every time an API has to be published, it communicates via REST with the APIGateway Config.
  • APIGateway Config: It is the layer for managing an APIGateway. It exposes a REST interface for management, and stores the management/configuration state.
  • APIGateway Runtime: The primary runtime component of apiman; it proxies requests from consumers to the services, applying a set of policies while delivering the service. This is the endpoint known to applications and the single entry point for requests to the services (APIs).
There will be different implementations for the APIGateway, in order to fit different architectures and technologies. Currently available are:
  • Undertow
  • Vertx
  • Servlet
  • Wildfly8 War

Consuming services

Whenever apiman exposes a private service, it hides the real endpoint and uses an API key for the application to identify the service contract. The endpoint used for a private service will be:
Anything after that will get passed through to the service impl.
So if you created a service named MyOrg / MyService, version 1.0, and set the service implementation endpoint to http://myhost:8080/myservice, then your client would make managed calls to this endpoint:
And the gateway would proxy that request to the back service at this endpoint:
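Putting it together (the gateway host/port and the apiman-gateway context path are assumptions based on a default install):

```shell
# Managed call through the gateway; the URL pattern is assumed to be
#   http://<gateway>:<port>/apiman-gateway/{orgId}/{serviceId}/{version}/...
curl -H "X-API-Key: <your-api-key>" \
  "http://gateway:8080/apiman-gateway/MyOrg/MyService/1.0/orders/123"

# The gateway would proxy this to the backend implementation endpoint:
#   http://myhost:8080/myservice/orders/123
```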
Requests to managed endpoints must include the API Key so that the Gateway knows which Contract is being used to invoke the Service. The API Key can be sent in one of the following ways:
  • As an HTTP Header named X-API-Key
  • As a URL query parameter named apikey
All HTTP headers and all query parameters (except for the API Key) will also be proxied to the back-end service.

jBPM at DevConf 2015

Maciej reports that he’ll be presenting at DevConf 2015 in Brno: “I am happy to announce that a talk and workshop about jBPM 6 has been accepted at DevConf 2015 in Brno. Talk: jBPM – BPM Swiss knife. During the presentation jBPM will be introduced from the …”