Tomcat Cluster Directives
worker.list (default: ajp13)
A comma-separated list of worker names that JK will use. When starting up, the web server plugin will instantiate the workers whose names appear in the worker.list property. These are also the workers to which you can map requests. This directive can be used multiple times.

worker.maintain (default: 60)
Worker connection pool maintenance interval in seconds. If set to a positive value, JK will scan all connections for all workers specified in the worker.list directive and check whether connections need to be recycled. Furthermore, any load balancer does a global maintenance every worker.maintain seconds. During global maintenance, load counters are decayed and workers in error are checked against recover_time.

worker.<workerName>.<directive>
Each worker configuration directive consists of three words separated by a dot. The first word is always worker, the second word is the worker name, and the third word is the name of the directive itself.
  • worker.list: This is one of the two global directives; we only use one here. This directive allows you to specifically name any workers that should be loaded when the server starts up. These are the only workers to which you can map requests in httpd.conf. This has more uses when using mod_jk as a proxy server. For our purposes, the two workers we've defined are enough. We can just list the names of the workers in worker.list at the top, and then define their properties later on. Perhaps it would make more sense to put the worker.list line after we've defined all the workers, but it probably does not matter.

General directives pertain specifically to workers, but not to virtual workers. They always take the form worker.[name].[directive]=[value]. Worker names are defined as part of a directive (unless set in worker.list). Subsequent directives using the same name value apply to the same worker. Names may only contain underscores, dashes, and alphanumeric characters, and are case sensitive.

There is a very long list of worker directives, allowing configuration of everything from session replication partner nodes to connection timeout values to weights for use with load balancing algorithms. It's even possible to include workers within multiple nodes, allowing you to do things such as using a very fast server as a pinch hitter to handle spikes in multiple clusters. The extensive control this provides over load balancing scenarios is the reason why using mod_jk over mod_proxy is currently worth the extra configuration trouble. You can find the whole list at http://tomcat.apache.org/connectors-doc/reference/workers.html

worker.[name].type allows you to declare a type for a given worker. This type can either refer to a virtual worker type (e.g., "lb" for a load balancer worker, or "status" for the status worker), or to the protocol that the web server should use to communicate with the real worker (e.g., "ajp13").

worker.[name].host allows you to define the appropriate host for a worker. You can also include the port in this entry by separating the host name from the port value with a ":".

worker.[name].port allows you to set the port number on which to reach the relevant server.
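
As a minimal sketch of the worker.[name].[directive]=[value] format, two AJP13 workers might be defined like this (the worker names, host addresses, and ports are illustrative):

    worker.list=tomcat1,tomcat2

    # First Tomcat instance, reached over AJP
    worker.tomcat1.type=ajp13
    worker.tomcat1.host=192.168.1.101
    worker.tomcat1.port=8009

    # Second Tomcat instance
    worker.tomcat2.type=ajp13
    worker.tomcat2.host=192.168.1.102
    worker.tomcat2.port=8009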

The mod_jk virtual workers each have their own specialized subsets of directives, which provide extra levels of control over their functions. For example, although the "lb" worker by default uses a load balancing algorithm based on requests and each server's lbfactor to distribute the load, mod_jk actually includes three additional load balancing algorithms, some of which are more appropriate for certain situations, and these can be configured with the "method" directive. See http://tomcat.apache.org/download-connectors.cgi

worker.[name].balance_workers=[name1],[name2],..[nameN]: This is the only required load balancer directive, and is used to associate a group of workers with a given load balancer. You can define multiple load balancer names in the global worker list if you will be balancing multiple clusters with a single Apache instance. See http://tomcat.apache.org/connectors-doc/generic_howto/loadbalancers.html
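
Building on the sketch above, a hypothetical load balancer worker that distributes requests across the two AJP workers could look like the following; only the lb worker needs to appear in worker.list, since requests are mapped to it rather than to the individual nodes:

    worker.list=loadbalancer

    worker.tomcat1.type=ajp13
    worker.tomcat1.host=192.168.1.101
    worker.tomcat1.port=8009
    worker.tomcat1.lbfactor=1

    worker.tomcat2.type=ajp13
    worker.tomcat2.host=192.168.1.102
    worker.tomcat2.port=8009
    worker.tomcat2.lbfactor=1

    # Virtual worker: requests in httpd.conf are mapped to "loadbalancer"
    worker.loadbalancer.type=lb
    worker.loadbalancer.balance_workers=tomcat1,tomcat2
    # Optional: switch from the default Request method to Session-based balancing
    worker.loadbalancer.method=Session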

  • Engine is the standard element that defines Catalina as the component responsible for processing requests. To enable session replication, you must set the jvmRoute attribute to match the corresponding worker you have configured in mod_jk's workers.properties file. This value must be unique for every node included in the cluster. It seems that for each worker, we have to define a separate Engine element (one per Tomcat instance); a sketch of the resulting configuration appears after this list. The jvmRoute attribute causes Tomcat to generate session IDs that look like <random value as before>.<jvmRoute value>. This causes the load balancer to do sticky load balancing (if the session was generated on tomcat1, subsequent requests will also be handled by tomcat1). Now, what happens if tomcat1 goes offline due to a network error or an out-of-memory error? The load balancer knows that it needs to forward the requests to a different node, such as tomcat2. However, the session only exists on tomcat1, and tomcat2 will not be able to retrieve information that was stored in the session on tomcat1. We need to set up session replication for that.
  • The channelSendOptions attribute sets the flag within Tomcat's clustering class that chooses between different methods of cluster communication. The safe default is 8, which enables asynchronous communication.
  • Manager is the standard element that Tomcat uses for session management. When nested inside the Cluster element, it is used to tell Tomcat which cluster-aware session manager should be used for session replication. In this example, we have used the DeltaManager, which provides basic cluster-aware session management as well as additional capabilities you can use to divide your cluster into multiple groups in the future. The attributes we have set (expireSessionsOnShutdown and notifyListenersOnReplication) prevent a failing node from destroying sessions on other clustered nodes and explicitly notify the ClusterListeners when a session has been updated.
  • The Channel element communicates with a component of Tomcat's clustering solution called Tribes. This component handles all communication between the clustered nodes. In this example, we have configured Tribes to use multicast communication, although more complicated situations can be configured using single point broadcasting. The Channel element is used to contain a series of other elements that divide cluster communication into simple blocks.
  • Membership: This Tribes-related element defines the address all nodes will use to keep track of one another. The settings we have used here are the Tribes defaults.
  • Sender: This Tribes-related element, in conjunction with the Transport element nested inside of it, is used to choose from and configure a number of different implementations of cluster communication. Here, we use the NIO transport, which generally provides the best performance.
  • Receiver: This tribes-related element configures a single Receiver component, which receives messages from other nodes' Sender components. The attributes of the element allow you to specify addresses, buffer sizes, thread limits, and more. The settings we have used here allow the nodes to automatically discover one another via an address that Tribes will generate automatically.
  • Interceptor: Interceptor elements are used to make modifications to messages sent between nodes. For example, one of the Interceptor elements we have configured here detects delays that may be preventing a member from updating its table due to timeout, and provides an alternative TCP connection. Tribes includes a number of standard interceptors. To enable any of them, simply add an additional Interceptor element with the appropriate className. Here, we have included only interceptors useful in almost all clustering situations.
  • Valve: Tomcat's standard Valve element can be nested within Cluster elements to provide filtering. The element includes a number of cluster-specific implementations. For example, one of the Valves we have included here can be used to restrict the kinds of files replicated across the cluster. For this example configuration, we have included the most commonly used Valves, with blank attribute values that you can configure as required.
  • ClusterListener: This element listens to all messages sent by cluster workers, and intercepts those that match their respective implementation's specifications. These elements operate in a very similar manner to Interceptor elements, except that rather than modifying messages and passing them on to a Receiver, they are the intended recipients of the messages for which they are listening.
  • In the Membership element, we specify the frequency (how often each cluster node should broadcast its availability, or heartbeat signal), and if the heartbeat signal is not received within a particular interval (dropTime), then that Tomcat instance is considered dead.
  • PooledParallelSender uses pooled connections to send the session information in parallel, which speeds up the session replication process.
  • For NioReceiver, address="auto" means that it will use our system IP address.
  • TcpFailureDetector is used to verify that an instance is actually dead. In some cases, multicast messages are delayed, and therefore all Tomcat instances think that a particular Tomcat instance is dead. This interceptor makes a TCP unicast connection to the instance in question to confirm whether it has actually failed.
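
As mentioned above, here is a sketch of how these elements fit together in server.xml for the node paired with the tomcat1 worker. The class names follow the standard Tomcat clustering documentation; the jvmRoute value, multicast address, ports, and thread counts are illustrative and should be adapted to your environment:

    <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">

      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
               channelSendOptions="8">

        <!-- Cluster-aware session manager -->
        <Manager className="org.apache.catalina.ha.session.DeltaManager"
                 expireSessionsOnShutdown="false"
                 notifyListenersOnReplication="true"/>

        <!-- Tribes channel: membership, receiver, sender, interceptors -->
        <Channel className="org.apache.catalina.tribes.group.GroupChannel">
          <Membership className="org.apache.catalina.tribes.membership.McastService"
                      address="228.0.0.4" port="45564"
                      frequency="500" dropTime="3000"/>
          <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                    address="auto" port="4000" autoBind="100"
                    selectorTimeout="5000" maxThreads="6"/>
          <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
            <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
          </Sender>
          <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
          <!-- Named MessageDispatch15Interceptor on older Tomcat releases -->
          <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>
        </Channel>

        <!-- Filtering and sticky-session fail-over support -->
        <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
        <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>

        <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
      </Cluster>

      <!-- Host, Realm, etc. as usual -->
    </Engine>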

What is the purpose of jvmRoute?

The jvmRoute attribute of the Engine element allows the load balancer to match requests to the JVM currently responsible for updating the relevant session. It does this by appending the name of the JVM to the JSESSIONID of the request, and matching this against the worker name provided in the workers.properties file.

In order to configure jvmRoute, make sure that the value of jvmRoute for each of your Engines is paired with an identically named worker in the workers.properties file.
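
For instance, with the illustrative names used earlier, the pairing looks like this; a session created on that node then gets an ID of the form <random value>.tomcat1, which mod_jk uses to route subsequent requests back to the same node:

    # workers.properties on the Apache side
    worker.tomcat1.type=ajp13
    worker.tomcat1.host=192.168.1.101
    worker.tomcat1.port=8009

    <!-- server.xml on the matching Tomcat node -->
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">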

If we are not using mod_jk, do we have to set the jvmRoute attribute?

I don't know yet.

What is the purpose of the Engine element?

This is the standard Engine element that defines Catalina as the component responsible for processing requests. To enable session replication, you must set the jvmRoute attribute to match the corresponding worker you have configured in mod_jk's workers.properties file. This value must be unique for every node included in the cluster.

What is the purpose of the Cluster element?

This is the main Cluster element, within which all other clustering elements are nested. It supports a variety of attributes. The channelSendOptions attribute sets a flag within Tomcat's clustering class that chooses between different methods of cluster communication. The default value for channelSendOptions is 8, which enables asynchronous communication.

When the Cluster element is nested inside an enclosing <Host> element, it essentially enables session replication for all applications in that host.

What is the purpose of the Manager element?

This is the standard element that Tomcat uses for session management. When nested inside the Cluster element, it is used to tell Tomcat which cluster-aware session manager should be used for session replication.

This is a mandatory component. It is where you configure either the DeltaManager or the BackupManager; both send replication information to the other members via Channels from the Apache Tribes group communication library.
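
As a sketch, the DeltaManager shown earlier could be swapped for the BackupManager, which replicates each session to a single backup node rather than to every member (the attribute values below are illustrative):

    <Manager className="org.apache.catalina.ha.session.BackupManager"
             expireSessionsOnShutdown="false"
             notifyListenersOnReplication="true"
             mapSendOptions="6"/>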

What is the purpose of the Channel element?

This element communicates with a component of Tomcat's clustering solution called Tribes. This component handles all communication between the clustered nodes. In this example, we have configured Tribes to use multicast communication, although more complicated situations can be configured using single point broadcasting. The Channel element is used to contain a series of other elements that divide cluster communication into simple blocks.

A channel is an abstract endpoint (like a socket) through which a member of the group can send and receive replicated information. Channels are managed and implemented by the Apache Tribes communications framework. The Channel element itself has only one attribute.

What is the purpose of the Membership element?

This Tribes-related element defines the address all nodes will use to keep track of one another. The settings we have used here are the Tribes defaults.

The bind attribute, if set, selects the physical network interface to use on the server (if you have only one network adapter, you generally don't need it). The membership service works by regularly sending a multicast heartbeat, which determines and maintains information on the servers that are considered part of the group (cluster) at any point in time.
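
Tying this to the frequency and dropTime attributes mentioned earlier, a Membership element that sends a heartbeat every 500 ms and drops members after 30 seconds of silence might look like the following (the multicast address and port are the commonly used defaults; the dropTime here is deliberately longer than the default, purely for illustration):

    <!-- frequency: send a heartbeat every 500 ms
         dropTime:  consider a member dead after 30 s without a heartbeat -->
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="228.0.0.4"
                port="45564"
                frequency="500"
                dropTime="30000"/>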

What is the purpose of the Sender element?

This Tribes-related element, in conjunction with the Transport element nested inside of it, is used to choose from and configure a number of different implementations of cluster communication. Here, we have used the NIO transport, which generally provides the best performance.

This element configures the TCP sender component of the Apache Tribes framework; it sends the replicated session data to the other members.

What is the purpose of the Receiver element?

This Tribes-related element configures a single Receiver component, which receives messages from other nodes' Sender components. The attributes of the element allow you to specify addresses, buffer sizes, thread limits, and more. The settings we have used here allow the nodes to automatically discover one another via an address that Tribes will generate automatically.

This element configures the TCP receiver component of the Apache Tribes framework; it receives the replicated session data from the other members.

What is the purpose of the Interceptor element?

Interceptor elements are used to make modifications to messages sent between nodes. For example, one of the Interceptor elements we have configured here detects delays that may be preventing a member from updating its table due to timeout, and provides an alternative TCP connection. Tribes includes a number of standard interceptors; to enable any of them, simply add an additional Interceptor element with the appropriate className. Here, we have included only interceptors useful in almost all clustering situations.

Interceptor components are nested inside <Channel> and are message-processing components that can be chained together to alter the behavior of a channel or add value to it. Typically, an option flag set on a message triggers an interceptor's operation.

What is the purpose of the Transport element?

This element does the real work. Tribes supports having a pool of senders, so that messages can be sent in parallel, and if the NIO sender is used, messages can also be sent concurrently.

What is the purpose of the Valve element?

Tomcat's standard Valve element can be nested within Cluster elements to provide filtering. The element includes a number of cluster-specific implementations. For example, one of the Valves we have included here can be used to restrict the kinds of files replicated across the cluster. For this example configuration, we have included the most commonly used Valves, with blank attribute values that you can configure as required.

This element acts as a filter for in-memory replication; it reduces the session replication network traffic by determining whether the current session needs to be replicated at the end of the request cycle. Even though this element is declared inside the <Cluster> element, it is considered to be inside the <Host> element.
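
For example, the ReplicationValve's filter attribute takes a regular expression of request URIs that can never modify a session and therefore need not trigger replication; the pattern below is the one commonly shown in the Tomcat clustering documentation, with the JvmRouteBinderValve added alongside it to support sticky-session fail-over:

    <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
           filter=".*\.gif|.*\.js|.*\.jpeg|.*\.jpg|.*\.png|.*\.htm|.*\.html|.*\.css|.*\.txt"/>
    <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>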

What is the purpose of the ClusterListener element?

This element listens to all messages sent through by cluster workers, and intercepts those that match their respective implementation's specifications. These elements operate in a very similar manner to Interceptor elements, except that rather than modifying messages and passing them on to a Receiver, they are the intended recipients of the messages for which they are listening.

Some of the work of the cluster is performed by hooking up listeners to replication messages that are passing through it. You must configure org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener if you are using JvmRouteBinderValve to ensure session stickiness transfers with a fail-over. You must also configure org.apache.catalina.ha.session.ClusterSessionListener if using DeltaManager because this listener forwards the messages to the manager for delta and merging operations.

What is the purpose of the channelSendOptions attribute?

The channelSendOptions attribute is part of the Cluster element. Option flags are included with messages sent and can be used to trigger Apache Tribes channel interceptors. The numerical value is a logical OR of flag values, including:

Channel.SEND_OPTIONS_BYTE_MESSAGE       1
Channel.SEND_OPTIONS_USE_ACK            2
Channel.SEND_OPTIONS_SYNCHRONIZED_ACK   4
Channel.SEND_OPTIONS_ASYNCHRONOUS       8
Channel.SEND_OPTIONS_SECURE            16

The default value is 11 (async with ack)
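
To compose or decode a value, simply add (bitwise OR) the individual flags. For example:

    channelSendOptions="8"   = SEND_OPTIONS_ASYNCHRONOUS (8)
    channelSendOptions="10"  = SEND_OPTIONS_ASYNCHRONOUS (8) + SEND_OPTIONS_USE_ACK (2)
    channelSendOptions="11"  = SEND_OPTIONS_ASYNCHRONOUS (8) + SEND_OPTIONS_USE_ACK (2) + SEND_OPTIONS_BYTE_MESSAGE (1)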

What is the purpose of the name attribute of the Manager element?

A name for the cluster manager; this name should be the same on all instances.

What is the purpose of the notifyListenersOnReplication attribute of the Manager element?

Indicates if any session listeners should be notified when sessions are replicated between instances.

What is the purpose of the expireSessionsOnShutdown attribute of the Manager element?

Specifies whether it is necessary to expire all sessions upon application shutdown.

What is the purpose of the domainReplication attribute of the Manager element?

Specifies whether replication should be limited to domain members only; this option is only available for the DeltaManager.

What is the purpose of the mapSendOptions attribute of the Manager element?

When using the BackupManager, this sets the send options for its replicated map; as with channelSendOptions, these flags can trigger interceptors.
