Monday, January 26, 2015

ESQL code to create mail with attachments using broker events

In this post I will provide an example of how to process events generated by a flow using the default IBM Integration Bus monitoring event capability.

The example will show:

  • how to serialize a tree into a BLOB using ESQL
  • how to send a mail with attachments
  • how to create/prepare the LocalEnvironment and Root trees for the EmailOutput node
  • how to use the business event capabilities provided by IIB


The principle is simple:

  1. Configure a flow to generate an IIB event
  2. Create a subscription to this event with a WMQ queue as the endpoint
  3. Create a flow that consumes these events and sends an email with attachments

The example in this post shows how to create the mail with attachments using ESQL, but this could easily be done in Java as well.

Configure a flow to generate an IIB event


The generated event has a well-defined structure, and its schema can be imported into a library using new model -> IBM predefined model.

Any node in a flow can be configured to generate events (Generating events in WebSphere Message Broker) that may contain context information and payload.

It is, for example, possible to configure a node to include the LocalEnvironment, ExceptionList and message tree structure (under Root). This information will be placed into the IIB event under the folder "complexContent".
Note that the LocalEnvironment is reset when an exception occurs, so the data that would have been stored in this tree is wiped when the message is propagated to the catch terminal of the input node (this will be covered in a future post).

Finally, it is also possible to include the full payload (as it was received) by selecting "include payload as bitstream" in the node's monitoring properties. The payload will then be included in the IIB event under "bitstreamData".
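For reference, the events roughly follow the structure below (a trimmed sketch: the element names are taken from the predefined monitoring event model, so check the imported schema for the exact details):

<wmb:event xmlns:wmb="http://www.ibm.com/xmlns/prod/websphere/messagebroker/6.1.0/monitoring/event">
  <wmb:eventPointData>
    <!-- event identity, sequence and correlation information -->
    <wmb:eventData>...</wmb:eventData>
    <!-- broker, execution group, flow and node that emitted the event -->
    <wmb:messageFlowData>...</wmb:messageFlowData>
  </wmb:eventPointData>
  <wmb:applicationData>
    <!-- one folder per tree selected on the node (LocalEnvironment, Root, ExceptionList) -->
    <wmb:complexContent wmb:elementName="LocalEnvironment">...</wmb:complexContent>
  </wmb:applicationData>
  <!-- only present when "include payload as bitstream" is selected -->
  <wmb:bitstreamData>
    <wmb:bitstream>...</wmb:bitstream>
  </wmb:bitstreamData>
</wmb:event>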

Create a subscription

The IIB runtime publishes these events on the WMQ topic "$SYS/Broker/IBMIBus/Monitoring/#" (where IBMIBus is the Integration Node name).
Using the WMQ Explorer, you can create a subscription to this topic and select a destination queue:
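The same subscription could also be defined with MQSC; a sketch, assuming an existing queue manager and example object names:

DEFINE QLOCAL('MONITORING.EVENTS')
DEFINE SUB('MONITORING.EVENTS.SUB') +
       TOPICSTR('$SYS/Broker/IBMIBus/Monitoring/#') +
       DEST('MONITORING.EVENTS')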

The flow that sends the email

The flow is very simple: MQInput -> Compute -> EmailOutput.
The Compute node is used to create and configure the message that will be sent by the EmailOutput node.
The EmailOutput node itself is configured with only the minimum properties: SMTP server:port, the to and from addresses, and security.
The rest (subject, body content and attachments) is provided by the ESQL code.

In this example, the complexContent included in the incoming business event is serialized into a bitstream and sent by mail as an attachment.
The payload, if present, is also sent as an attachment.
The body of the mail is built from the event origin data, using a DFDL model to produce a text document with CRLF-separated lines.

The code is provided hereafter:
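A minimal sketch of the Compute node logic is shown below. It assumes the incoming event is parsed in the XMLNSC domain; the wmb element paths and the LocalEnvironment override names for the EmailOutput node follow the product documentation, and the module, subject and attachment names are examples only:

CREATE COMPUTE MODULE SendEventMail_Compute
  CREATE FUNCTION Main() RETURNS BOOLEAN
  BEGIN
    -- Namespace used by the IIB monitoring events
    DECLARE wmb NAMESPACE 'http://www.ibm.com/xmlns/prod/websphere/messagebroker/6.1.0/monitoring/event';

    -- Subject and plain-text body (the node itself only carries server, to, from and security)
    SET OutputRoot.EmailOutputHeader.Subject = 'IIB monitoring event';
    SET OutputRoot.BLOB.BLOB = CAST('An IIB monitoring event was received, see attachments.' AS BLOB CCSID 1208);

    -- Serialize the complexContent folder of the event into a bitstream ...
    DECLARE attachData BLOB
      ASBITSTREAM(InputRoot.XMLNSC.wmb:event.wmb:applicationData.wmb:complexContent
                  OPTIONS FolderBitStream CCSID 1208);

    -- ... and hand it over to the EmailOutput node as an attachment
    SET OutputLocalEnvironment.Destination.Email.Attachment.Content = attachData;
    SET OutputLocalEnvironment.Destination.Email.Attachment.ContentType = 'text/xml';
    SET OutputLocalEnvironment.Destination.Email.Attachment.ContentName = 'complexContent.xml';

    RETURN TRUE;
  END;
END MODULE;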




Friday, January 23, 2015

ESQL code Sample

In this post I will provide some examples of ESQL code that could be useful.

The code is made available using Gist.

TREE --> XML 

In the following gist, I provide an example of how to create an XML physical representation of an IIB in-memory tree.
The principle is the following (illustrated by the sketch after the list):
  1. Create an element owned by the XMLNSC parser
  2. Copy or create an IIB tree under it
  3. Use the ASBITSTREAM function to serialize the tree into a bitstream using that parser (here XML)
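A minimal sketch of these three steps (the Environment location, element names and values are made up for the example):

-- 1. Create a folder owned by the XMLNSC parser
CREATE LASTCHILD OF Environment.Variables DOMAIN('XMLNSC') NAME 'Doc';
-- 2. Copy or create an IIB tree under it
SET Environment.Variables.Doc.Invoice.CustomerId = 'ID001';
SET Environment.Variables.Doc.Invoice.Quantity = 1;
-- 3. Serialize the folder through its parser (here XMLNSC) into a bitstream
DECLARE xmlBlob BLOB
  ASBITSTREAM(Environment.Variables.Doc OPTIONS FolderBitStream CCSID 1208);
-- For example, return the XML as the output message body
SET OutputRoot.BLOB.BLOB = xmlBlob;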


BLOB --> XML

In the following gist, I provide an example of how to create an XML tree from a BLOB.
In the example the BLOB is provided as text in a hexadecimal representation. The code parses it into an XML tree and appends it to the current XML output (see the sketch below).
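A minimal sketch of the idea (the hexadecimal value, which decodes to <Greet>Hello</Greet>, and the output element names are made up for the example):

-- The BLOB is provided as text in a hexadecimal representation
DECLARE hexText CHARACTER '3C47726565743E48656C6C6F3C2F47726565743E';
-- Casting a hexadecimal character string to BLOB gives the raw bytes
DECLARE rawBlob BLOB CAST(hexText AS BLOB);
-- Parse the BLOB into its own XMLNSC folder under Environment ...
CREATE LASTCHILD OF Environment.Variables DOMAIN('XMLNSC') PARSE(rawBlob CCSID 1208);
-- ... then append the parsed document to the current XML output tree
SET OutputRoot.XMLNSC.Envelope.Imported = Environment.Variables.XMLNSC.Greet;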


Friday, January 16, 2015

XPATH in IIB

In this post, I will provide some examples of XPath expressions that allow you to perform complex transformations within a GDM (Graphical Data Mapper).

I will extend this post with new examples as I find ones that could be useful.

For this first post, I will use a sample message with the following structure:

<?xml version="1.0" encoding="UTF-8"?>
<Q1:INVOICE xmlns:Q1="http://www.acme.be/acme"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.acme.be/acmeInvoice.xsd ">
<CUSTOMERID>ID001</CUSTOMERID>
<NAME>X</NAME>
<INVOICE_ITEM>
<STOCK>OUT</STOCK>
<EXI>IMPORT</EXI>
<CONTAINER>abc</CONTAINER>
<ISOCODE>2210</ISOCODE>
<QUANTITY>1</QUANTITY>
</INVOICE_ITEM>
<INVOICE_ITEM>
<STOCK>OUT</STOCK>
<EXI>EMPTY</EXI>
<CONTAINER>abcd</CONTAINER>
<ISOCODE>2210</ISOCODE>
<QUANTITY>2</QUANTITY>
</INVOICE_ITEM>
<INVOICE_ITEM>
<STOCK>OUT</STOCK>
<EXI>EMPTY</EXI>
<CONTAINER>abcde</CONTAINER>
<ISOCODE>4532</ISOCODE>
<QUANTITY>4</QUANTITY>
</INVOICE_ITEM>
<INVOICE_ITEM>
<STOCK>OUT</STOCK>
<EXI>EMPTY</EXI>
<CONTAINER>abcdef</CONTAINER>
<ISOCODE>4532</ISOCODE>
<QUANTITY>2</QUANTITY>
</INVOICE_ITEM>
<INVOICE_ITEM>
<STOCK>IN</STOCK>
<EXI>IMPORT</EXI>
<CONTAINER>CONTAINER</CONTAINER>
<ISOCODE>ISOCODE</ISOCODE>
<QUANTITY>2</QUANTITY>
</INVOICE_ITEM>
</Q1:INVOICE>


Sum, Count

It is possible to compute the sum of elements under a repeating node in one expression.
For example, if you would like to sum the QUANTITY field of all the INVOICE_ITEM nodes, you could do this with the following map:

The XPath expression is "fn:sum($INVOICE_ITEM/QUANTITY)". For the sample message above it returns 11 (1 + 2 + 4 + 2 + 2).

fn:count can be used in the same way; it returns the number of elements (here 5 INVOICE_ITEM nodes).

Predicates

A predicate allows you to select only a subset of nodes based on a criterion.
In the example above, you may want to compute the sum of QUANTITY only for the INVOICE_ITEM nodes whose STOCK child element equals "OUT".
This can be done using a predicate.
The previous XPath expression then becomes:

fn:sum($INVOICE_ITEM[STOCK='OUT']/QUANTITY)

The predicate here is "[STOCK='OUT']"; for the sample message this yields 9.

It is possible to use an input element within the predicate. For example, imagine that you have a list of stock values and you would like to sum the quantities only for the STOCK values that appear in the list.
This can be achieved by providing the stock list as an additional input to the XPath transform, with the XPath expression:
fn:sum($INVOICE_ITEM[STOCK=$STOCK1]/QUANTITY)

It is possible to have more than one condition in the predicate. For example, if we would like the sum of QUANTITY for STOCK='OUT' and EXI='IMPORT', then the following XPath could be used:

fn:sum($INVOICE_ITEM[STOCK='OUT' and EXI='IMPORT']/QUANTITY)

Be careful: the "and" operator is case sensitive (it must be lower case). For the sample message this expression returns 1, since only the first INVOICE_ITEM matches both conditions.

Distinct Values

The last useful XPath expression for this post: distinct values.
fn:distinct-values can be used to retrieve only the distinct values of an element.
For example, in the sample message above STOCK can have the values "IN" and "OUT": there are 4 INVOICE_ITEM nodes with STOCK equal to "OUT" and only 1 with "IN".
The XPath expression used here is "fn:distinct-values($INVOICE_ITEM/STOCK)".
The input is the INVOICE_ITEM repeating node and the output is a repeating STOCK element under STOCKLIST.
The STOCKLIST is then populated with one STOCK element per value, "IN" and "OUT":

<STOCKLIST>
        <STOCK>IN</STOCK>
        <STOCK>OUT</STOCK>
</STOCKLIST>

If you need distinct values over multiple elements, one option is to first build a concatenation of these fields in a previous map and then apply the distinct-values expression.
So, for example, if you need the list of invoice items having distinct values for STOCK and EXI, you may create a first map that concatenates STOCK and EXI using an fn:concat expression (for instance "fn:concat(STOCK, '-', EXI)"), place this list in the LocalEnvironment, and then, in the next map, use the fn:distinct-values expression on it.

HA Cache deployment with IBM Integration Bus

I have been involved in a project where the customer was looking for a way to cache data in a highly available manner within the integration layer.

In this post I will provide some points that have to be taken into account when designing the IIB deployment architecture in order to obtain a highly available cache.

Introduction

IBM Integration Bus provides an out of the box caching mechanism based on WebSphere eXtreme Scale.
WebSphere eXtreme Scale provides a scalable, in-memory data grid. The data grid dynamically caches, partitions, replicates, and manages data across multiple servers.

This cache can be used to store reference data that is regularly accessed, or to hold a routing table.

The cache is not enabled by default, but it is really easy to enable: the default configuration is activated by setting a configuration parameter through the IBM Integration Explorer administration tool.
To activate the cache across different Integration Node instances, an XML configuration file (templates are provided) has to be defined.
More information on the cache can be found here: What's new in the Global Cache in IBM Integration Bus v9

Specialized skills in eXtreme Scale are not necessary in order to use the cache. There are, however, two important cache components that are good to know about:

  • Catalog servers: component that is embedded in an integration server and that controls placement of data and monitors the health of containers. You must have at least one catalog server in your global cache.
  • Container servers: component that is embedded in the integration server that holds a subset of the cache data. Between them, all container servers in the global cache host all of the cache data at least once. If more than one container exists, the default cache policy ensures that all data is replicated at least once. In this way, the global cache can cope with the loss of container servers without losing data.
More information about the terminology can be found here: Global cache terminologies

Principle

The catalog and container servers are embedded in Integration Servers.

To have a highly available cache, the following is required:

  • At least two catalog servers have to be online: without a catalog server it is not possible to reach the data held in memory
  • At least two container servers have to be online: this is necessary to replicate the data in two different locations

One more important point to know: it is not possible to configure an Integration Node to host a catalog server when it is configured as a multi-instance Integration Node.

Possible deployment architecture

If the target is an active/active deployment, the following architecture is possible:
In this architecture, a catalog server is deployed in one Integration Server on each side. The other Integration Servers are used to host containers.
To improve performance, the catalog server is placed on a dedicated Integration Server (separated from the containers). This is not required though: a catalog server may reside in the same Integration Server as a container.
If the license doesn't permit multiple Integration Servers per Integration Node (Standard Edition), you could create a separate Integration Node on the same server to host the catalog server.

If the target deployment consists of multi-instance queue managers, because, for example, the messages residing on MQ have to be recovered quickly, the following architecture is possible:

Because a multi-instance Integration Node can't host a catalog server (a configuration restriction), it is necessary to define an extra Integration Node to hold the catalog server (Integration Node - Catalog). This Integration Node doesn't need to be highly available.
The multi-instance Integration Nodes are configured to host the container servers. Two active Integration Nodes are required to provide a highly available cache (replication is made on two different servers).

Additional information

The first time the cache is configured with multiple catalog servers, the cache becomes operational when at least two catalog servers are started. If the catalog servers are defined in two Integration Nodes, these two nodes need to be started before the cache can be used. Once the cache has been activated (this can be checked by looking at the logs or the administration events), it is possible to lose (or stop) catalog servers without impacting access to the cache, as long as at least one catalog server stays online. This can be useful if maintenance has to be performed on one instance.

The default configuration allows a maximum of four container servers per Integration Node, but more containers can be set up by configuring the Integration Servers manually using IBM Integration Explorer.

There is no limit on the number of container servers that can participate in the global cache. If you require more memory, you can just add a new container to the system; there is no need to restart the whole system! Alternatively, you could access an external eXtreme Scale grid such as an XC10 appliance (Deciding between the embedded global cache and an external WebSphere eXtreme Scale grid).

Integration Server roles can be changed using IBM Integration Explorer. It is possible to define a policy configuration file and assign it to the Integration Node using IBM Integration Explorer; once this has been done, you can start the Integration Node to take the configuration into account. If the global cache policy of the Integration Node is changed to "NONE" using IBM Integration Explorer while the Integration Node is running, the current configuration is retained even if the Integration Node is restarted. It is therefore possible to change the Integration Server roles afterwards. More information on how to set the roles is provided here: How to fix integration server roles in a Global Cache configuration in IBM Integration Bus and WebSphere Message Broker V8.

References

Information about global cache