Latest News

…use a Proxy for speeding up docker images creation

Sometimes it is very convenient to use a proxy (squid or any other) to speed up development. When creating an image you might download some packages and then change some steps, which requires you to redownload those packages. To ease this a bit, you can use a proxy and instruct docker to use it.
I started by looking at configuring a proxy for the docker daemon, but once I finally had it working I realized that it was proxying only the images I downloaded from the internet, so not much benefit. Anyway, it is documented below.
I then tried to hook a proxy between the build process and the internet. After some hours, I got to this nice post from Jerome Petazzo. His work, linked on github, is more advanced than what is mentioned in that post, and the docs are not very clear, so I will summarize it here and comment on a small issue that I had on my Fedora 20 (docker 1.3.0).

Proxy for images

Here is a description of the steps required to use a proxy with the daemon.

Install the proxy and configure

Installation steps are quite easy; just use yum in Fedora to install squid:
$ yum -y install squid
In Fedora 20, the squid config file is /etc/squid/squid.conf. We will configure it for our usage.
Configuration depends on your preferences; this is just an example of mine.
  • Uncomment the cache_dir directive and set the maximum allowed size of the cache dir. Example:
cache_dir ufs /var/spool/squid 20000 16 256
sets the max cache dir size to 20 GB.
  • Add the maximum_object_size directive and set its value to the largest file size you want to cache. Example:
maximum_object_size 5 GB
allows caching of DVD-sized files.
  • Optional: disable caching for some domains. If you have some files/mirrors already on your local network and you don’t want to cache those files (the access is already fast enough), you can specify it using the acl and cache directives. This example disables caching of all traffic coming from a given domain:
acl redhat dstdomain
cache deny redhat
  • Start the Squid service:
$ service squid start
We will not start squid on boot, as we only want to use squid for Docker image development purposes.
  • Make sure iptables or SELinux do not block Squid operating on port 3128 (the default value).
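Putting the steps above together, the relevant squid.conf additions would look roughly like this (the sizes are the example values from above; the acl domain was not given in the original, so a placeholder is used):

```
# Example additions to /etc/squid/squid.conf
cache_dir ufs /var/spool/squid 20000 16 256   # 20 GB max cache dir
maximum_object_size 5 GB                      # allow caching DVD-sized files
acl redhat dstdomain example.lan              # placeholder domain, adjust to yours
cache deny redhat                             # don't cache that domain
```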

Configure Docker to use a proxy

By now, we will have squid running on port 3128 (the default). We just need to instruct docker to use it while the containers go to the internet for things.
You need to set an environment variable for the docker daemon, specifying the http_proxy.
In Fedora 20, you can modify your /etc/sysconfig/docker configuration file with the following:

export HTTP_PROXY HTTPS_PROXY http_proxy https_proxy

# This line already existed. Only the lines above this one have been added.
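The snippet above lost the variable assignments themselves; a minimal sketch of what /etc/sysconfig/docker could contain, assuming squid listens on localhost:3128 (adjust host and port to your setup):

```shell
# Assumed proxy location: squid on localhost:3128
HTTP_PROXY=http://localhost:3128
HTTPS_PROXY=http://localhost:3128
http_proxy=$HTTP_PROXY
https_proxy=$HTTPS_PROXY
export HTTP_PROXY HTTPS_PROXY http_proxy https_proxy
```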
Now you need to restart the daemon:
$ systemctl daemon-reload
$ systemctl restart docker.service

Create images

Now, if you pull an image, it will get proxied. If you delete it locally and want to fetch it again, it will now come from the proxy cache.
This might not seem like a big benefit, but if you have a local LAN, you can use this to have a proxy/cache for the HUB (or a registry).

Proxy for images build contents

As I said before, it is usually more interesting to proxy what will be in the images you are developing, so if you invalidate a layer (modify the Dockerfile), the next build will not go to the internet.
Following Jerome’s blog and his github, what I did was:
I cloned his github repo to my local:
$ git clone squid-in-a-can.git
And then I ran:
fig up -d squid && fig run tproxy
You need fig, but who does not have it?
Then you just need to do a normal docker build. The first time, every download will get into the “squid” container, and later builds will fetch from there. While doing this, I hit an issue. I do not really know if it was specific to my environment or to any Fedora 20/Docker 1.3.0 setup. The issue was that I was getting an unreachable host. It turned out that my iptables had a rule that was rejecting everything with icmp-host-prohibited. I solved it by removing those lines from iptables.
I used:
$ iptables-save > iptables-original.conf
$ cp iptables-original.conf iptables-new.conf
Commented out these lines in iptables-new.conf:
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A INPUT -j REJECT --reject-with icmp-host-prohibited
And loaded the new iptables conf:
$ iptables-restore < iptables-new.conf
I also opened a bug in the squid-in-a-can github to see if Jerome has an answer to this.


Now there are 2 options, as the container created this way stores the cached data inside it, so if you remove the container, you remove the cache.
  • First option is to use a volume to a local dir. For this, edit the fig.yml in the project’s source dir.
  • Second option is to use your local squid (if you already have one), so you only need to run that second container, or only add/remove the iptables rule:
    • Start proxying (assuming squid is running):
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to 3128
  • Stop proxying:
iptables -t nat -D PREROUTING -p tcp --dport 80 -j REDIRECT --to 3128

…proxying a SOAP service with 3 faults in SwitchYard

This document describes a problem that I’ve faced with SwitchYard due to one of its known issues/features/limitations.


I needed to create a proxy service for a SOAP Web Service where I had a contract with 3 faults.
public interface CustomerListServicePortType {
    public CustomerListResponse getCustomerList(CustomerListRequest parameters) throws
        GeneralError, CustomerNotFound, InvalidUserCredentials;
}
SwitchYard has a limitation: it only accepts contracts with one exception type (as well as only one single input type). When I created the initial service and deployed it, SwitchYard told me about this:
org.switchyard.SwitchYardException: SWITCHYARD010005: Service operations on a Java interface can only throw one type of exception.
One option could be to modify the contract, but as this is a proxy for a legacy service, I need to keep my contract, so I looked into various options, of which I’ll describe the one that was easiest for me.


I created an internal contract for my service, with only one single exception:
public interface CustomerListServicePortType {
    public CustomerListResponse getCustomerList(CustomerListRequest parameters) throws
        CustomerListException;
}
Then I used transformers to map the original exceptions to and from my new “unique” exception. As SOAPFault handling really marshals/unmarshals the FaultInfo, I decided to keep the original FaultInfo in my new exception:
import org.w3c.dom.Element;

public class CustomerListException extends Exception {

    private Element faultInfo;

    public CustomerListException(Element cause) {
        faultInfo = cause;
    }

    public Element getFaultInfo() {
        return faultInfo;
    }
}
And my transformers were so simple that I was happy not having to deal with DOM parsing, Element, and all that stuff.
public final class ExceptionTransformers {

    @Transformer(from = "{http://common/errorcodes}invalidUserCredentials")
    public CustomerListException InvalidUserCredentialsToCustomerListEx(Element from) {
        CustomerListException fe = new CustomerListException(from);
        return fe;
    }

    @Transformer(from = "{http://common/errorcodes}generalError")
    public CustomerListException transformGeneralErrorToCustomerListEx(Element from) {
        CustomerListException fe = new CustomerListException(from);
        return fe;
    }

    @Transformer(from = "{http://common/errorcodes}customerNotFound")
    public CustomerListException transformCustomerNotFoundToCustomerListEx(Element from) {
        CustomerListException fe = new CustomerListException(from);
        return fe;
    }

    @Transformer(to = "{http://common/errorcodes}customerNotFound")
    public Element transformCustomerListExToCustomerNotFound(CustomerListException e) {
        return e.getFaultInfo();
    }

    @Transformer(to = "{http://common/errorcodes}generalError")
    public Element transformCustomerListExToGeneralError(CustomerListException e) {
        return e.getFaultInfo();
    }

    @Transformer(to = "{http://common/errorcodes}invalidUserCredentials")
    public Element transformCustomerListExToInvalidUserCredentials(CustomerListException e) {
        return e.getFaultInfo();
    }
}
These transformers get registered as Java transformers (due to the @Transformer annotation).
And everything works like a charm.

…Docker layer size explained

When you create a docker image, the final size of the image is very relevant, as people will have to download it from somewhere (maybe the internet), at least the first time, and also every time the image changes (at least they will have to download all the changed/new layers).
I was curious about how to optimize the size of a layer, because I read at some point that docker internally uses a “copy-on-write” filesystem, so every write made while creating a layer stays there, even if you remove the software afterwards.
I decided to validate this, explain how it works, and show how to optimize the size of a layer.
I have 3 main tests to validate the concept, using the JBoss Wildfly image, available on github, as a base. But as this image is composed of 2 base images on top of fedora, plus the wildfly image, I decided to merge everything into one single Dockerfile.

Test 1 – Every command in a separate line

This first test demonstrates how every command creates a layer, so if you split commands into separate lines, you end up with many more layers and much more space being used.
The code for these Dockerfiles is available on github:
Image sizes
The conclusion is to avoid creating unnecessary layers by combining shell commands into single docker commands, like yum install && yum clean all.
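As a sketch of that convention (the package names are illustrative), install and clean up in the same RUN so the yum cache never lands in a committed layer:

```dockerfile
# One layer: the files removed by "yum clean all" are never committed
RUN yum -y install wget tar unzip && yum clean all
```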

Test 2 – Uncompressing while downloading vs removing the downloaded file after decompressing

In this test, I wanted to check whether the “copy-on-write” means that even if I remove a file, it still occupies some disk space. For this purpose, I uncompressed a file while downloading it directly from the internet, versus saving the file, decompressing it and then removing it.
The code for these Dockerfiles is available on github:
Image sizes
The conclusion is that in terms of size it is the same, as long as it is done in a single docker command.
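A sketch of the streaming variant, with an illustrative URL: the archive is unpacked as it is downloaded, so the tarball itself never touches a layer:

```dockerfile
# Unpack while downloading; no intermediate archive is written to the layer
RUN curl -L http://example.org/dist/wildfly.tar.gz | tar xz -C /opt
```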

Test 3 – One single RUN command for most of the stuff

In this test, I modified the image description to contain one single RUN command with everything in it.
The code for these Dockerfiles is available on github:
Image size
The conclusion for this test is that the benefit we obtain from having a single layer is not so big, and every change will create a whole new layer, so it is worse in the long run.

Overall conclusions

This is a summary of the conclusions I reached:
  • Layer your images for better reusability of the layers
  • Combine all the yum install && yum clean all that you’ll have in an image in a single RUN command
  • When installing software, it has a smaller footprint to download it (via curl) than to ADD/COPY it from the local filesystem, as you can combine the download with the install and the removal of stale data.
  • Don’t combine commands in a single RUN more than needed, as the benefit in terms of size may not be huge, but the loss in terms of reusability is.

Red Hat Job Opening – Software Sustaining Engineer

We are looking to hire someone to help improve the quality of BRMS and BPM Suite platforms. These are the productised versions of the Drools and jBPM open source projects.

The role will involve improving our test coverage, diagnosing problems, and creating reproducers for problems, as well as helping fix them. You’ll also be responsible for helping to set up and maintain our continuous integration environment to help streamline the various aspects involved in getting timely, high quality releases out.

So if you love Drools and jBPM, and want to help make them even better and even more robust – then this is the job for you 🙂

The role is remote, so you can be based almost anywhere.

URL to apply now

…make JBDS faster for SwitchYard

While working with SwitchYard, JBoss Developer Studio can be a pain in the ass. Red Hat is working to provide some better user experience, but in the meantime, you can try some of these tips.

Increase heap memory

Eclipse needs a lot of memory, and maven and SwitchYard projects even more, so give your eclipse a good amount of it. Modify jbdevstudio.ini in the appropriate section:
Provide a good amount of memory; 3 or 4 GB instead of 2 is better if you can afford it.
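For instance, the memory-related section of jbdevstudio.ini could be raised to something like this (the values are examples, not from the original post; -XX:MaxPermSize only applies to Java 7 and earlier):

```
-vmargs
-Xms1024m
-Xmx4096m
-XX:MaxPermSize=512m
```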

Disable automatic updates

Faster startup time. You’ll update or check for updates whenever you want.
Preferences --> Automatic updates --> (Disable) Automatically find new updates and notify me

Disable auto build

If you build whenever you want, and only the projects you want, then disable it. If you have a big amount of projects, you can skip having eclipse doing background builds all the time. If you have few projects, you can keep it.
Project --> Build automatically (Uncheck)

Refresh workspace on startup

If you don’t change things from the command line, your workspace does not need to be refreshed on startup. If you use git (command line) or maven (command line), you may want to keep it:
General -> Startup and shutdown -> Refresh workspace on startup (Enable)

Disable validations

If there is a lot of background processing going on validating your project (due to any facet your project has, like JPA, …​), suspend the validations:
Validation -> Suspend all validations (check)

Disable startup features not needed (FUSE, …​)

Use the fewest plugins needed for your work.
General -> Startup & shutdown ->  Plugins activated on Startup (Remove FUSE, Fabric8, JBoss Central, Forge UI, JBoss Tools reporting, Equinox autoupdate)

Disable XML Honour schemas

There is a known bug between JBDS and SwitchYard, so avoid it with:
XML -> XML Files -> Validation -> Honour all XML schema locations (uncheck)

Close JPA and EAR projects if not working on them

Every project that you have open is both eating resources and being checked by the background tasks, so close them if they are not needed (as dependencies or at all).

… using vaulted properties in SwitchYard


We usually externalize configuration into properties files to make the application configuration independent. One problem we face is that this information usually consists of endpoints and login credentials, like username and password. For the first type of information, there is usually no need to encrypt it, but there typically is such a requirement when storing credentials.
I’m going to explain how to do this in SwitchYard.
Thanks to Nacim Boukhedimi for this post
The steps to be able to use vaulted configuration are:
  • Create a vault file and store the property there
  • Configure the application to read it

Create a vault file

The vault mechanism is provided by JBoss EAP 6 to enable you to encrypt sensitive strings and store them in an encrypted keystore. It relies upon tools that are included in all supported Java Development Kit (JDK) implementations. Below is a step by step description of how to encrypt your properties stored in an external properties file.
  1. Create a directory to hold your keystore and other important information. (for instance ${jboss.server.config.dir})
  2. Enter the following command to create a keystore file named vault.keystore:
    keytool -genkey -alias vault -keyalg RSA -keysize 1024 -keystore vault.keystore
  3. Mask the keystore password and initialize the password vault. From the EAP_HOME/bin folder, run vault.sh and start a new interactive session.
    The salt value, together with the iteration count (below), are used to create the hash value.
    Make a note of the generated vault configuration in a secure location. It will be used in the next step to configure EAP to use the vault.
  4. Configure JBoss EAP 6 to use the password vault. Copy and paste the generated vault configuration into the EAP config file:
    <vault-option name="KEYSTORE_URL" value="${jboss.server.config.dir}/vault.keystore"/>
    <vault-option name="KEYSTORE_PASSWORD" value="MASK-18JTA2ZfD4eISrndbFgJRk"/>
    <vault-option name="KEYSTORE_ALIAS" value="vault"/>
    <vault-option name="SALT" value="8675309K"/>
    <vault-option name="ITERATION_COUNT" value="50"/>
    <vault-option name="ENC_FILE_DIR" value="${jboss.server.config.dir}/vault/"/>

Store your data in the vault

  1. Store encrypted Sensitive strings in the Java Keystore:
    1. Run vault.sh and start a new interactive session.
    2. Enter the path to the keystore, the keystore password, vault name, salt, and iteration count to perform a handshake.
    3. Select the option 0 to store a value.
    4. Enter the value, vault block, and attribute name.
    5. As a result, the value is stored and a message is displayed showing the vault block, attribute name, shared key, and advice about using the string in your configuration.
      Vault Block:container
      Configuration should be done as follows:
      Please make note of the vaultID to use in your properties file

Use the encrypted property in your properties file

Now it is time to use the property. As we want the property to be externalized, we will define the property in a properties file, and then instruct the server to use that properties file.
  1. In your properties, refer to this property as:${}
You need to start the server with the -P option to provide the path to the properties file.
./  -P=file:<path_to_properties>
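The vaulted reference in the properties file follows EAP’s VAULT::vault_block::attribute_name::shared_key format; a sketch with assumed names (the block “container” from the vault output above, and a hypothetical attribute “db.password”):

```properties
# Assumed vault block and attribute names; use the vaultID printed by the vault tool
db.password=${VAULT::container::db.password::1}
```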

Use the property in your switchyard.xml

You can then refer to this property from your switchyard.xml.
The file endpoint will be scanning for files under /tmp/input/<your secret filename>

…how to read a file programmatically in SwitchYard

I’ve been asked recently how to read a file programmatically, based on a cron or when a request is made via a call to a Service. I didn’t have an answer; well, really I had an answer, and I was wrong. Today somebody brought the topic back, I revisited the problem, and now I have an answer that works, so a good answer.

Problem description

I have the following use case:
A file should be downloaded by FTP from a remote server. The download of the file can be triggered in two ways:
  • by a scheduler whose polling interval may be changed at runtime
  • by an HTTP GET request on a specified url. The response of the HTTP call must include transformed data based on the downloaded file.
What is a good way to achieve these requirements with SY/Camel? I would like to use the bindings provided by Camel, but have no idea how they can be triggered (e.g. the FTP binding) from outside the binding.


I have created a sample project just to demonstrate how it works; it does not solve the full problem. You have to use the content enricher EIP.
We have a service with a scheduled binding (every 30 secs) that calls a Camel component. In this component, we use pollEnrich to read a file, and then use the contents of the file (the Message) to write into a file (a Reference with a file binding).
This is the route we will be using. Note the pollEnrich to read the file.
public void configure() {
    from(...)                 // the scheduled service binding triggers this route
        .log("Going to read the file")
        .pollEnrich(...)      // reads the file contents into the message body
        .log("Read from file : ${body}")
        ...
}
The project is available on github.

…SwitchYard in docker


Now that there is an official docker image for SwitchYard, I’m going to explain some usages of this image to make you productive with docker and SwitchYard. This way, you’ll be able to have reproducible environments for:
  • development
  • demos
  • support/bug hunting
  • integration / testing
Let’s get started:

SwitchYard docker official image

The official images are maintained by Red Hat and published to the docker hub. You can browse the source code for the images. Currently there is only support for SwitchYard on Wildfly, built, of course, on the official Wildfly image, also supported by Red Hat.
All of the JBoss community projects that have a docker image are published on the official docker community site. From there, you can link to the individual projects. As I’m going to talk about SwitchYard, let’s see what’s there:

Get started

In order to get started, you just need to download the image
docker pull jboss/switchyard-wildfly
and create a container
docker run -it jboss/switchyard-wildfly
or in domain mode
docker run -it jboss/switchyard-wildfly /opt/jboss/wildfly/bin/ -b -bmanagement
You’ll see that the base SwitchYard docker container is not so useful (it doesn’t have an admin user), so the best thing to do is to extend it.

Extending the SwitchYard container

First thing you need to know is what you want the container for, so let’s see some sample uses:


Development

In this use case, what you want is a container to deploy the applications you are developing, so it can be useful to have:
  • An admin user created
  • Debug port exposed
  • Binding to all interfaces, so you can access without forwarding (if required)
  • Some volumes to access your information in the container
  • Some volumes (to test file services)
Let’s create a container for this:
FROM jboss/switchyard-wildfly

# Add admin user
RUN $JBOSS_HOME/bin/ admin admin123! --silent

# Enable port for debugging with --debug
EXPOSE 8787

# Volumes
VOLUME /input
VOLUME /output

# Enable binding to all network interfaces and debugging inside the server
RUN echo "JAVA_OPTS=\"\$JAVA_OPTS -Djboss.bind.address=\"" >> $JBOSS_HOME/bin/standalone.conf

ENTRYPOINT ["/opt/jboss/wildfly/bin/"]
CMD []
I’m going to explain each and every command:
  1. FROM jboss/switchyard-wildfly, this is the base image, so we are reusing everything in there (jboss user, EAP_HOME set, and more. See doc for reference)
  2. RUN $JBOSS_HOME/bin/ admin admin123! --silent creates an admin user with admin123! as password.
  3. EXPOSE 8787 adds debug port exposure (8787 is the default one in wildfly).
  4. VOLUME /input and VOLUME /output add two volumes to the image, so we can easily access or export these volumes if needed. They are very handy when developing file services, as we will see when we run the container.
  5. The RUN echo line appends -Djboss.bind.address= to standalone.conf, adding a default binding to any interface (by default wildfly binds only to, so we can access the server via its ip or name.
  6. ENTRYPOINT [“/opt/jboss/wildfly/bin/”] defines the default command to be executed. I like to set the command here as this image is single purpose, so there is no need to switch (although it is possible with --entrypoint on the command line).
  7. CMD [] holds the arguments to the entrypoint, none by default, but it allows you to set things on the command line like --debug to enable debugging mode in the container.
Now it is time to build the image:
docker build -t "jmorales/switchyard-dev" .
You can name the image as you like, as this image is yours. Just bear in mind that it has to have a meaningful name so it is easy to remember when used.
Let’s now run the image:
docker run -it jmorales/switchyard-dev
This is the simplest command to run the container, but we are not using much of the power we have given it, so let’s make a better command:
docker run -it --name "switchyard" -p 8080:8080 -p 9990:9990 jmorales/switchyard-dev
Now we can access the console with the admin/admin123! user that we have created.
Still not there yet. We need to be able to use file services (remember that in the container the directories will be /input and /output) and we want to debug into the container from JBDS.
docker run -it --name "switchyard" -v /tmp/input:/input -v /tmp/output:/output -p 8080:8080 -p 9990:9990 -p 8787:8787 jmorales/switchyard-dev --debug
And maybe even use a different profile:
docker run -it --name "switchyard" -v /tmp/input:/input -v /tmp/output:/output -p 8080:8080 -p 9990:9990 -p 8787:8787 jmorales/switchyard-dev --debug -c standalone-full.xml
Important things to note here:
  • The image is run in the foreground, so we will be viewing the console log. Control-C will stop the container. Since we have the container created, if we want to execute it again, it is just a:
docker start switchyard
  • If you want a disposable container, you can add the --rm option to the command line, and every time you stop the container it will be removed. (This option is more useful in other use cases.)
  • If you always work with some options, you can just add them to your Dockerfile and create your image with those options, so there is no need to type them when creating the container (like the profile, --debug, ..).
Of course, this line is very long and difficult to remember, so we can create simple aliases:
alias docker_switchyard_create='docker run -it --name switchyard -v /tmp/input:/input -v /tmp/output:/output -p 8080:8080 -p 9990:9990 -p 8787:8787 jmorales/switchyard-dev --debug -c standalone-full.xml'
alias docker_switchyard_start='docker start switchyard'
Once you are done, or if you want a fresh instance, or you are just not working on it anymore, you can delete the container:
docker rm -vf switchyard
It is important to delete the volumes if the container has volumes (with -v), as otherwise these volumes will remain in your docker internal filesystem.


Demos

The demo use case is somewhat different, as you’ll probably have some application developed that you want to have in a container ready to start, so the container for demos will be an extension of the previous one, adding the application/configuration that you want.
FROM jboss/switchyard-wildfly

# Add admin user
RUN $JBOSS_HOME/bin/ admin admin123! --silent

# Volumes
VOLUME /input
VOLUME /output

# Enable binding to all network interfaces and debugging inside the server
RUN echo "JAVA_OPTS=\"\$JAVA_OPTS -Djboss.bind.address=\"" >> ${JBOSS_HOME}/bin/standalone.conf

ADD myapp.war $JBOSS_HOME/standalone/deployments/
ADD standalone.xml $JBOSS_HOME/standalone/configuration/

ENTRYPOINT ["/opt/jboss/wildfly/bin/"]
CMD []
This example above has 3 main differences with the development one:
  • It removes the exposure of the debug port (not really necessary here)
  • It adds an application to the deployments dir: ADD myapp.war $JBOSS_HOME/standalone/deployments/. The app has to be in the same directory as the Dockerfile
  • It adds some configuration to the server: ADD standalone.xml $JBOSS_HOME/standalone/configuration/. There are many ways to customize the configuration of your wildfly/EAP image, but this is one of the simplest.
And of course, if you had already created your image for the development use case, you can simply extend it:
FROM jmorales/switchyard-dev

# Add customizations
ADD myapp.war $JBOSS_HOME/standalone/deployments/
ADD standalone.xml $JBOSS_HOME/standalone/configuration/
Now, time to build your image:
docker build -t "jmorales/myapp-demo" .
And to run it
docker run -it --rm -v /tmp/input:/input -v /tmp/output:/output -p 8080:8080 -p 9990:9990 jmorales/myapp-demo
This time I’ve added --rm to the command line, so when I stop the container, it gets deleted. As this is a demo with everything in it, if I need to run it again, I just run the container with the same command line.


Support / bug hunting

The support use case is about making an environment reproducible: if somebody is experiencing a problem, they can create a docker container that isolates the app and the problem, and send it to the support engineers. Also, sometimes we want to test one app with different patches, so it can be useful to have all the different versions of the server container as images, so we can try an app on them.
Let’s assume we have to try a feature in SwitchYard 1.1.1 and 2.0.0, and that we have an image for both containers:
  • jboss/switchyard:1.1.1
  • jboss/switchyard:2.0.0
We can just spin up both containers and test the app to see if it works
docker run -it --rm -P jboss/switchyard:1.1.1
docker run -it --rm -P jboss/switchyard:2.0.0
Once we have both containers running (note that if they run in the foreground, they need to run in different terminals and have different ports exported; I have added -P to auto export ports), we can go to the corresponding consoles, deploy the app, and test.
If we want our tools set to certain ports (to be predictable), it is easier to define the ports on the command line and use one container at a time.

Integration / Testing

In the integration test case, what we want is to have our integration tools spin up the container for us, deploy the app into it, run all the integration tests, and then stop/destroy the container.
This can easily be achieved with Jenkins (or many of the CI servers out there).
This way we can:
  • Have a clean environment for every test / app
  • Test against many different versions of the Application Server container in an automated way
When you adopt testing against containers, the time to detect problems drops so much that the effort to put it in place pays off very soon.

Tips for developers

As we can develop easily with docker as the container for our application server, we can configure our tools to use it, so we can have a maven profile per container:
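The profiles themselves are not shown in the original; a sketch of what they might look like in the pom.xml with the jboss-as-maven-plugin, assuming the profile ids sy-1/sy-2 and the mapped management ports 9999/19999 (all names and values are illustrative):

```xml
<!-- Hypothetical profiles: deploy to whichever container's management port is mapped -->
<profiles>
  <profile>
    <id>sy-1</id>
    <properties>
      <jboss-as.port>9999</jboss-as.port>   <!-- SwitchYard 1 container -->
    </properties>
  </profile>
  <profile>
    <id>sy-2</id>
    <properties>
      <jboss-as.port>19999</jboss-as.port>  <!-- SwitchYard 2 container -->
    </properties>
  </profile>
</profiles>
```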
And we can do a
mvn clean install jboss-as:deploy -P sy-1
to deploy to the SwitchYard 1 container (with port 9999 exported at localhost 9999), or
mvn clean install jboss-as:deploy -P sy-2
to deploy to the SwitchYard 2 container (with port 9999 exported at localhost 19999).

…SwitchYard contracts


When working with SwitchYard, we have to define the contracts for our services as well as for our component implementations. There are different ways and different points to define our contracts, depending on:
  • Component implementation
  • Service binding
As we can see in the picture above, in a simple service we can find, at least, the following contracts:
  • Component contracts
    • Component service
    • Component reference
  • Composite contracts
    • Composite service
    • Composite reference
  • Binding contracts
    • Service Binding
    • Reference Binding

Component contracts

This is the contract a component exposes or uses, depending on whether it is a service or reference contract. A component contract can be defined in 3 different ways in SwitchYard:
  • Java. Using a Java interface.
  • WSDL. Using a port type in a wsdl file.
  • ESB. Using a virtual interface definition. (No real file will be used).
A component contract in SwitchYard has the following characteristics:
  • 0..1 argument. If used, this will be the message content. It is optional as there can be operations that don’t expect a message (REST GET, Scheduled operations,…​). Used in Exchanges of type IN_ONLY and IN_OUT.
  • 0..1 return type. If used, this will be the message content for the response. Used only in Exchanges of type IN_OUT.
  • 0..1 exceptions. If used, this will be the message content for the response if there is an Exception. Used in Exchanges of type IN_ONLY and IN_OUT.
Contracts in Camel can be defined with empty parameters, and you will still be able to work with the message content. DO NOT RELY ON THIS.

Java contract

A Java contract is defined by a Java Interface.

Java components require Java contracts.
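As an illustration of the shape described above (one optional argument, one optional return type, one exception type), here is a self-contained sketch; the names are made up for the example, not from the post:

```java
// Self-contained sketch of a SwitchYard-style Java contract (illustrative names)
public class ContractDemo {

    public static class Order { public String id; }
    public static class OrderAck { public String orderId; }
    public static class OrderException extends Exception {}

    // The contract: one argument (the message content), one return type, one exception
    public interface OrderService {
        OrderAck submitOrder(Order order) throws OrderException;
    }

    public static void main(String[] args) throws Exception {
        // A trivial in-memory implementation, just to show the contract in use
        OrderService svc = order -> {
            OrderAck ack = new OrderAck();
            ack.orderId = order.id;
            return ack;
        };
        Order o = new Order();
        o.id = "42";
        System.out.println(svc.submitOrder(o).orderId); // prints 42
    }
}
```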

WSDL contract

A WSDL contract is defined by a port type in a wsdl file.

ESB contract

An ESB contract is a virtual contract (no file required) that declares the types of the input, output and exception types.

This contract can only be used in components with one single operation.

Transformations between contracts

When a message flows in SwitchYard and there are different types in the contracts on both ends of the “channel”, a transformation must be done. There can be implicit or explicit transformations, depending on where they have to happen, and there are extension points to define how the transformation between the types in the contracts must happen.

Composite Service Binding and Composite Service / Composite Reference and Composite Reference Binding

Contract differences are handled in the ServiceHandlers when composing and decomposing the SwitchYard message.
Any difference in contracts must be handled in the Message composer.
Camel and Http MessageComposers will do Camel implicit transformations if required. If the target type is JAXB, the class needs to have the @XmlRootElement annotation.

Composite Service and Component Service / Component Reference and Composite Reference

Contract differences are handled by transformers defined in the composite application, which are applied by the ExchangeHandler chain during the execution of the Exchange.
The transformers usually map from an origin type to a destination type.
When the Exchange is between a composite service and a component service:
  • In the IN phase, from is the argument’s type of the composite service and to is the type in the component service.
  • In the OUT phase, from is the return/exception type of the component service and to is the return/exception type in the composite service.
When the Exchange is between a component reference and a composite reference:
  • In the IN phase, from is the argument’s type of the component reference and to is the type in the composite reference.
  • In the OUT phase, from is the return/exception type of the composite reference and to is the return/exception type in the component reference.
There will be some implicit transformation happening (no need to declare a transformer) if SwitchYard supports automatic transformation between both types, as declared by CamelTransformer.
As of SY 1.1.1, there is no implicit transformation for primitive types.

Component Service and Component Reference

Contract differences are handled in the component implementation, and have to be explicitly transformed.

Special attention

  • If a Composite Service does not declare an interface, it will use the interface defined by the promoted Component Service
  • Every Component can have one Service
  • The binding name can be null. In this case, a binding name will be automatically generated as “ServiceName+BindingType+i”
  • When the input parameter of a service contract is empty, the message will not be changed; it will stay in its original form (e.g. for streaming bindings like HTTP, File,…​)