… Java / JBoss AS tuning

When you move your application to a production environment, you always wonder how well it will perform, or whether you will run into any kind of problem. It is good practice to run some characterization tests on your application before reaching this stage. To succeed in a characterization and tuning process you need to gather information on your system continually, not only during the stress tests you may execute but throughout the normal lifetime of your application, as that will be closer to reality: you’ll see whether the application/system is able to recover and continue its operation after a burst of load, or whether a long period of inactivity affects your deployment in some way.
People usually overlook the process of understanding their application, as well as the operating system, the JVM and the architecture where the application runs, even though it can have as much or more impact than the development itself.
In a Java application, the most important parts to monitor, at least at the beginning, are:
  • Process (cpu, load, threads,…​)
  • IO (disk and network)
  • JVM (memory and garbage collection)
  • Application specifics (connection pools, messaging resource lengths, …​)
  • Business related metrics (Request/response timings)
In this article I will focus on the first three, as they are general purpose.

JVM Tuning

One of the things you’ll need to look at is the JVM. To do that, I often use Java Mission Control.

Configure Java Mission Control to connect to a remote JBoss AS instance

To enable Java Mission Control for remote connection to a JBoss AS instance you should:
  • Edit the “$JAVA_HOME/bin/jmc.ini” file, which contains the JVM and JAVA_OPTS related settings. Near the end of this file, add the “-Xbootclasspath/a” option to include “$JBOSS_HOME/bin/client/jboss-client.jar”, as follows:
-startup
../lib/missioncontrol/plugins/org.eclipse.equinox.launcher_1.3.0.v20120522-1813.jar
--launcher.library
../lib/missioncontrol/plugins/org.eclipse.equinox.launcher.gtk.linux.x86_64_1.1.200.v20120913-144807
-vm
./java
-vmargs
-XX:+UseG1GC
-XX:+UnlockCommercialFeatures
-XX:+FlightRecorder
-Djava.net.preferIPv4Stack=true
-Xbootclasspath/a:/jboss-eap-6.2.0/bin/client/jboss-client.jar <--------NOTICE HERE------>
  • Once the “jmc.ini” file is edited as above, Java Mission Control (jmc) can be started as follows:
$JAVA_HOME/bin/jmc
  • Then you’ll be able to connect with:
service:jmx:remoting-jmx://$HOST:$PORT
Example:
service:jmx:remoting-jmx://10.10.10.10:9999

Configure JVM to enable Flight Recording

To use Oracle JDK’s Java Flight Recorder you’ll need to run JBoss AS with the following options:
-XX:+UnlockCommercialFeatures -XX:+FlightRecorder
These can be set via JAVA_OPTS on the shell, or by editing $JBOSS_HOME/bin/standalone.conf, $JBOSS_HOME/domain/configuration/domain.xml or $JBOSS_HOME/domain/configuration/host*.xml, depending on where the JVM parameters are set for your standalone or domain instance.
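For a standalone instance, that typically means appending the flags to JAVA_OPTS. A sketch of the relevant excerpt (your existing standalone.conf contents will differ):

```shell
# Sketch: excerpt to append near the end of $JBOSS_HOME/bin/standalone.conf
JAVA_OPTS="$JAVA_OPTS -XX:+UnlockCommercialFeatures -XX:+FlightRecorder"
```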
You can start a Flight Recording right when the JVM starts, or on demand using jcmd. Once JBoss is started with both of the above options, jcmd is the tool used to start, check, dump and stop JFR recordings:
  • $JAVA_HOME/bin/jcmd $JBOSS_PID JFR.start: Starts JFR recording. It returns the recording number, which is used to dump the results to disk
  • $JAVA_HOME/bin/jcmd $JBOSS_PID JFR.check: Lists the number of recordings running
  • $JAVA_HOME/bin/jcmd $JBOSS_PID JFR.dump recording=<recording_number> filename=<path/file>: Dumps the JFR recording to disk
  • $JAVA_HOME/bin/jcmd $JBOSS_PID JFR.stop recording=<recording_number>: Stops the specified JFR recording. This should be done after dumping and not prior.
  • $JAVA_HOME/bin/jcmd $JBOSS_PID help <command>: Lists the available options for any of the commands above.
To limit the duration of the recording, to 2 hours for example, run:
$JAVA_HOME/bin/jcmd $JBOSS_PID JFR.start duration=2h name=MyRecording filename=/tmp/myrecording.jfr
In a production environment, a JFR recording should be taken when the system starts displaying the problematic behaviour. In a development or testing environment, the recording can start as soon as JBoss comes up and stop after the testing is done.
Scripting all these commands is the best way to work with them: start a recording before running your tests and dump the recording when done, so you can analyze it.
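Such a script could be sketched roughly like this (assuming jcmd is on the PATH and the target JVM was started with the JFR flags; the function names and the output parsing are my own, not an official interface):

```shell
# Sketch: script a JFR capture around a test run. Assumes `jcmd` is on the
# PATH and the target JVM runs with -XX:+UnlockCommercialFeatures
# -XX:+FlightRecorder. Function names and output parsing are illustrative.

jfr_start() {
  # $1 = JVM pid; prints the recording number reported by JFR.start
  jcmd "$1" JFR.start | sed -n 's/.*recording \([0-9][0-9]*\).*/\1/p'
}

jfr_dump_and_stop() {
  # $1 = pid, $2 = recording number, $3 = output file
  jcmd "$1" JFR.dump "recording=$2" "filename=$3"
  jcmd "$1" JFR.stop "recording=$2"
}

# Typical use:
#   rec=$(jfr_start "$JBOSS_PID")
#   ... run the load test ...
#   jfr_dump_and_stop "$JBOSS_PID" "$rec" /tmp/testrun.jfr
```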
Marcus Hirt has written a good tutorial on JMC.

Other JVM tuning tools

Another useful tool that I often use for scripting is Swiss Java Knife (jvm-tools). It has command line tools that enhance some of the out-of-the-box commands that ship with the JVM, such as jps and jmap, and adds new ones such as ttop, …​
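For example, SJK’s ttop gives a top-like view of the threads inside a JVM. A sketch (assumes sjk.jar has been downloaded; the pid variable is a placeholder):

```shell
# Sketch: show the 20 busiest threads of a JVM, ordered by CPU, refreshing
# continuously. $JBOSS_PID is a placeholder for the target JVM's pid.
java -jar sjk.jar ttop -p $JBOSS_PID -n 20 -o CPU
```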

Monitoring the OS

When you monitor the OS you need to pay special attention to CPU, load and threads, as well as IO, both disk and network.
I’ve seen times where a poor networking configuration or architecture made applications behave very badly, so you need to constantly track every disk and network adapter you are using, and for the network adapters the type of traffic they will be managing (tcp, udp,…​).
Also, any other process that runs on the machine used for testing needs to be identified, and its impact needs to be minimized as much as possible, so that the information being gathered relates as closely to your real application as possible.

sysstat – sar/ksar

The sysstat utilities are a collection of performance monitoring tools for Linux. These include sar, sadf, mpstat, iostat, nfsiostat-sysstat, cifsiostat, pidstat and sa tools.
They can monitor a huge number of different metrics:
  • Input / Output and transfer rate statistics (global, per device, per partition, per network filesystem and per Linux task / PID).
  • CPU statistics (global, per CPU and per Linux task / PID), including support for virtualization architectures.
  • Memory, hugepages and swap space utilization statistics.
  • Virtual memory, paging and fault statistics.
  • Per-task (per-PID) memory and page fault statistics.
  • Global CPU and page fault statistics for tasks and all their children.
  • Process creation activity.
  • Interrupt statistics (global, per CPU and per interrupt, including potential APIC interrupt sources, hardware and software interrupts).
  • Extensive network statistics: network interface activity (number of packets and kB received and transmitted per second, etc.) including failures from network devices; network traffic statistics for IP, TCP, ICMP and UDP protocols based on SNMPv2 standards; support for IPv6-related protocols.
  • NFS server and client activity.
  • Socket statistics.
  • Run queue and system load statistics.
  • Kernel internal tables utilization statistics.
  • System and per Linux task switching activity.
  • Swapping statistics.
  • TTY device activity.
  • Power management statistics (instantaneous and average CPU clock frequency, fans speed, devices temperature, voltage inputs, USB devices plugged into the system).
  • Filesystems utilization (inodes and blocks).
Of course, it is quite easy to integrate sysstat with Kibana, Grafana, or any other metrics plotting tool.
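A typical workflow is to record activity to a binary file with sar and later replay or export it. A sketch (paths, interval and counts are arbitrary examples):

```shell
# Sketch: sample system activity every 5 seconds, 720 samples (1 hour),
# saving the readings in binary form for later replay with sar/ksar/sadf.
sar -o /tmp/sysstat.data 5 720

# Replay CPU utilization from the recorded file:
sar -u -f /tmp/sysstat.data

# Export CPU data in a delimited, database/plotting-friendly format:
sadf -d /tmp/sysstat.data -- -u > /tmp/cpu.csv
```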
It seems there are some alternatives to sysstat, like DStat.

…intro to apiman

I’ll be introducing apiman in a series of blog posts, trying to provide insight into the parts that are usually not very well documented, like deploying, scaling and providing HA. For that purpose, I first need to cover some details about apiman: the concepts and the basic architecture.

Description of API Management

I will not rewrite what is already written elsewhere, so here is a good existing description:
API management is the process of publishing, promoting and overseeing application programming interfaces (APIs) in a secure, scalable environment. It also includes the creation of end user support resources that define and document the API.
The goal of API management is to allow an organization that publishes an API to monitor the interface’s lifecycle and make sure the needs of developers and applications using the API are being met.
API management software tools typically provide the following functions:
1. Automate and control connections between an API and the applications that use it.
2. Ensure consistency between multiple API implementations and versions.
3. Monitor traffic from individual apps.
4. Provide memory management and caching mechanisms to improve application performance.
5. Protect the API from misuse by wrapping it in security procedures and policies.
— http://searchcloudapplications.techtarget.com/definition/API-management

Concepts

Although there is a very nice, short video introducing the apiman concepts, I will provide a human-readable description of the core concepts here.
  • Organization: Anyone that wants to expose their APIs. The owner of the APIs.
  • Service: Usually, the APIs to be exposed. A service represents an external API that is being governed by the API management system.
  • Application: An application represents a consumer of an API. Typical API consumers are things like mobile applications and B2B applications.
  • Plan: Conditions/constraints that govern all the services of an organization, a concrete service, or an application. These can be rate limiting, security, white/black listing, caching, …​ A plan is a set of policies that define a level of service.
  • Service contract: A service contract is simply a link between an application and a service through a plan offered by that service. This is the only way that an application can consume a service.
There is much more to it, which can be read in the docs. Although they are a work in progress, the concepts are already written up.

Use cases

API Management can be used in different scenarios.

Within an organization (On premise)

An organization wants to keep control of its APIs, and can use an API management solution to expose these APIs and control their consumption, whether internally or externally. In this case, the internal department that owns the APIs will be the Organization, in apiman terms.

In the cloud

An organization wants to expose its APIs, relying on a public cloud API management solution that provides all the requirements to control/govern/monitor/monetize the APIs being exposed. In this case, the apiman Organization maps to this organization.

Basic architecture

apiman is composed of 4 architectural pieces:
  • APIManager UI: The user interface layer for API management. It is only a frontend, and makes REST calls to the APIManager backend, which holds the information.
  • APIManager backend: Exposes a set of REST interfaces for managing the APIs, and holds the API management information (the data model). Every time an API has to be published, it communicates via REST with the APIGateway Config.
  • APIGateway Config: The layer for managing an APIGateway. It exposes a REST interface for management, and stores the management/configuration state.
  • APIGateway Runtime: The primary runtime component of apiman. It proxies requests from consumers to the services, applying a set of policies while delivering the service. This is the endpoint known to applications, and the single entry point for requests to the services (APIs).
There will be different implementations of the APIGateway, to fit different architectures and technologies. Currently available are:
  • Undertow
  • Vertx
  • Servlet
  • Wildfly8 War

Consuming services

Whenever apiman exposes a private service, it hides the real endpoint and uses an API key to identify the application’s service contract. The endpoint used for a private service will be:
http://gatewayhost:port/apiman-gateway/{organizationid}/{serviceid}/{version}/
Anything after that will get passed through to the service impl.
So if you created a service named MyOrg / MyService, version 1.0, and set the service implementation endpoint to http://myhost:8080/myservice, then your client would make managed calls to this endpoint:
http://gatewayhost:port/apiman-gateway/MyOrg/MyService/1.0/path/to/resource?query=12345
And the gateway would proxy that request to the back service at this endpoint:
http://myhost:8080/myservice/path/to/resource?query=12345
Requests to managed endpoints must include the API Key so that the Gateway knows which Contract is being used to invoke the Service. The API Key can be sent in one of the following ways:
  • As an HTTP Header named X-API-Key
  • As a URL query parameter named apikey
All HTTP headers and all query parameters (except for the API Key) will also be proxied to the back-end service.
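Putting the above together, a managed call just prefixes the gateway path and adds the API key. A sketch (the helper function, host names, ids and key are all illustrative, not part of apiman):

```shell
# Sketch: build the managed-endpoint URL for a service from its parts.
managed_url() {
  # $1=gateway base URL, $2=organization, $3=service, $4=version, $5=path
  echo "$1/apiman-gateway/$2/$3/$4/$5"
}

# Typical use, sending the API key in the X-API-Key header:
#   curl -H "X-API-Key: $APIKEY" \
#     "$(managed_url http://gatewayhost:8080 MyOrg MyService 1.0 'path/to/resource?query=12345')"
```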

jBPM at DevConf 2015

Maciej reports that he’ll be presenting at DevConf 2015 in Brno:
I am happy to announce that a talk and workshop about jBPM 6 has been accepted at DevConf 2015 in Brno.
Talk: jBPM – BPM Swiss knife
During the presentation jBPM will be introduced from the …