Friday, January 16, 2015

HA Cache deployment with IBM Integration Bus

I have been involved in a project where the customer wanted the ability to cache data in a highly available way within the integration layer.

In this post I will cover some points that have to be taken into account when designing the IIB deployment architecture in order to have a highly available cache.

Introduction

IBM Integration Bus provides an out-of-the-box caching mechanism based on WebSphere eXtreme Scale.
WebSphere eXtreme Scale provides a scalable, in-memory data grid. The data grid dynamically caches, partitions, replicates, and manages data across multiple servers.

This cache can be used to store reference data that is accessed regularly, or to hold a routing table.
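As a quick illustration, a message flow can read from and write to the cache from a JavaCompute node through the MbGlobalMap API. The sketch below is a minimal example under assumed names (the map "CustomerCache" and the key used are illustrative, not product defaults):

    import com.ibm.broker.javacompute.MbJavaComputeNode;
    import com.ibm.broker.plugin.MbException;
    import com.ibm.broker.plugin.MbGlobalMap;
    import com.ibm.broker.plugin.MbMessage;
    import com.ibm.broker.plugin.MbMessageAssembly;
    import com.ibm.broker.plugin.MbOutputTerminal;

    public class CacheLookupNode extends MbJavaComputeNode {
        @Override
        public void evaluate(MbMessageAssembly inAssembly) throws MbException {
            MbOutputTerminal out = getOutputTerminal("out");

            // Obtain a handle to a named map in the embedded global cache.
            // "CustomerCache" is an illustrative map name, not a product default.
            MbGlobalMap cache = MbGlobalMap.getGlobalMap("CustomerCache");

            // Illustrative key; a real flow would extract it from the incoming message.
            String customerId = "C0001";

            // Look up reference data and populate the cache on a miss.
            String customerName = (String) cache.get(customerId);
            if (customerName == null) {
                customerName = "UNKNOWN"; // a real flow would fetch this from a backend
                cache.put(customerId, customerName);
            }

            // Propagate the (unchanged) message to the output terminal.
            MbMessage outMessage = new MbMessage(inAssembly.getMessage());
            MbMessageAssembly outAssembly = new MbMessageAssembly(inAssembly, outMessage);
            out.propagate(outAssembly);
        }
    }

Because the map lives in the grid rather than in a single JVM, the same data can be read and written from any integration server that participates in the cache, which is what makes it usable as a shared routing table.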

The cache is not enabled by default, but it is really easy to enable: the default configuration is activated by setting a configuration parameter through the IBM Integration Explorer administration tool.
To activate the cache across different Integration Node instances, an XML configuration file (templates are provided with the product) has to be defined.
More information on the cache can be found here: What's new in the Global Cache in IBM Integration Bus v9

Specialized skills in eXtreme Scale are not necessary in order to use the cache. There are, however, two important cache components that are worth knowing about:

  • Catalog servers: a component embedded in an integration server that controls the placement of data and monitors the health of container servers. You must have at least one catalog server in your global cache.
  • Container servers: a component embedded in an integration server that holds a subset of the cache data. Between them, all container servers in the global cache host all of the cache data at least once. If more than one container server exists, the default cache policy ensures that all data is replicated at least once. In this way, the global cache can cope with the loss of container servers without losing data.
More information about the terminology can be found here: Global cache terminologies

Principle

The catalog and container servers are embedded in Integration Servers.

To have a highly available cache, the following is required:

  • At least two catalog servers have to be online: without a catalog server it is not possible to reach the data held in memory.
  • At least two container servers have to be online: this is necessary to replicate the data in two different locations.

One more important point to know: an Integration Node cannot host a catalog server when it is configured as a multi-instance Integration Node.

Possible deployment architecture

If the target is an active/active deployment, the following architecture is possible:
In this architecture, a catalog server is deployed in one Integration Server on each side. The other Integration Servers are used to host container servers.
To improve performance, the catalog server should be placed in a dedicated Integration Server (separate from the containers). This is not required, though; a catalog server may reside in the same Integration Server as a container server.
If the license doesn't permit multiple Integration Servers per Integration Node (Standard Edition), you could create a separate Integration Node on the same server to host the catalog server.
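As an illustration, the cache policy file for such an active/active topology could be along the lines of the sketch below. It is adapted from the sample policy templates shipped with the product; the node names, host names and port ranges are placeholders, and the exact element names should be checked against the templates of your version:

    <cachePolicy xmlns="http://www.ibm.com/xmlns/prod/websphere/messagebroker/globalcache/policy-1.0">
      <!-- One catalog server hosted on each side; the remaining integration servers host containers -->
      <broker name="NODE1" listenerHost="host1.example.com">
        <catalogs>1</catalogs>
        <portRange>
          <startPort>2800</startPort>
          <endPort>2819</endPort>
        </portRange>
      </broker>
      <broker name="NODE2" listenerHost="host2.example.com">
        <catalogs>1</catalogs>
        <portRange>
          <startPort>2820</startPort>
          <endPort>2839</endPort>
        </portRange>
      </broker>
    </cachePolicy>

With one catalog in each Integration Node, the two-catalog and two-container requirements listed above are met as long as both sides are running.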

If the target deployment consists of multi-instance queue managers, for example because the messages residing on MQ have to be recovered quickly, the following architecture is possible:

Because a multi-instance Integration Node can't host a catalog server (a configuration restriction), it is necessary to define an extra Integration Node to hold the catalog server (Integration Node - Catalog). This Integration Node doesn't need to be highly available.
The multi-instance Integration Nodes are configured to host the container servers. Two active Integration Nodes are required to provide a highly available cache (replication across two different servers).
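Under the same assumptions as the previous sketch, a policy file for this topology could list the dedicated catalog node with the catalog server and the multi-instance nodes with catalogs set to 0 so that they host containers only. Again, all names and ports are placeholders and the element names (and the handling of multi-instance listener hosts) should be verified against the shipped templates and documentation:

    <cachePolicy xmlns="http://www.ibm.com/xmlns/prod/websphere/messagebroker/globalcache/policy-1.0">
      <!-- Dedicated catalog node (Integration Node - Catalog) -->
      <broker name="NODE_CATALOG" listenerHost="cataloghost.example.com">
        <catalogs>1</catalogs>
        <portRange><startPort>2800</startPort><endPort>2819</endPort></portRange>
      </broker>
      <!-- Multi-instance nodes hosting containers only -->
      <broker name="NODE_MI1" listenerHost="mihost1.example.com">
        <catalogs>0</catalogs>
        <portRange><startPort>2820</startPort><endPort>2839</endPort></portRange>
      </broker>
      <broker name="NODE_MI2" listenerHost="mihost2.example.com">
        <catalogs>0</catalogs>
        <portRange><startPort>2840</startPort><endPort>2859</endPort></portRange>
      </broker>
    </cachePolicy>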

Additional information

The first time the cache is configured with multiple catalog servers, it becomes operational once at least two catalog servers are started. If the catalog servers are defined in two Integration Nodes, both nodes need to be started before the cache can be used. Once the cache has been activated (this can be checked by looking at the logs or administration events), it is possible to lose (or stop) catalog servers without impacting access to the cache, as long as at least one catalog server stays online. This can be useful if maintenance has to be performed on one instance.
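One way to inspect the state of the cache from the command line (assuming the node is at IIB v9 level, where the command was introduced) is the mqsicacheadmin command; the node name IB9NODE below is a placeholder:

    # List the catalog and container servers and how the data partitions are placed
    mqsicacheadmin IB9NODE -c showPlacement

    # List the hosts participating in the cache
    mqsicacheadmin IB9NODE -c listHosts

If the placement output shows a replica shard on a different container for each partition, the cache can survive the loss of a single container server.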

The default configuration allows a maximum of 4 container servers per Integration Node, but more containers can be added by configuring the Integration Servers manually using the IBM Integration Explorer.

There is no limit to the number of container servers that can participate in the global cache. If you require more memory, you can simply add a new container to the system; there is no need to restart the whole system! Alternatively, you could use an external eXtreme Scale component such as the XC10 appliance (Deciding between the embedded global cache and an external WebSphere eXtreme Scale grid).

Integration Server roles can be changed using the IBM Integration Explorer. It is possible to define a policy configuration file and assign it to the Integration Node using the IBM Integration Explorer; once this has been done, you can start the Integration Node to take the configuration into account. If the global cache policy of the Integration Node is then changed to "NONE" using the IBM Integration Explorer while the Integration Node is running, the current configuration is kept even if the Integration Node is restarted, so it is possible to change the individual Integration Server roles afterwards. More information on how to set the roles is provided here: How to fix integration server roles in a Global Cache configuration in IBM Integration Bus and WebSphere Message Broker V8.
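For reference, a sketch of how the cache manager settings of an individual integration server can be inspected and changed from the command line. The node name (IB9NODE), integration server name (default) and the property shown are assumptions based on the ComIbmCacheManager object used in the referenced article, and should be verified against your product version:

    # Report the cache manager properties of integration server "default"
    mqsireportproperties IB9NODE -e default -o ComIbmCacheManager -r

    # Example: stop this integration server from hosting a catalog server
    # (restart the integration server afterwards for the change to take effect)
    mqsichangeproperties IB9NODE -e default -o ComIbmCacheManager -n enableCatalogService -v false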

References

Information about global cache

