ehcache not replicating in liferay cluster

I have the following setup:

1. Liferay cluster with 2 machines on AWS

2. Unicast cluster replication with JGroups over TCP

I have the following parameters in portal-ext.properties:

#Setup hibernate
net.sf.ehcache.configurationResourceName=/myehcache/hibernate-clustered.xml

#Setup distributed ehcache
ehcache.multi.vm.config.location=/myehcache/liferay-multi-vm-clustered.xml

#
# Clustering settings
#
cluster.link.enabled=true
ehcache.cluster.link.replication.enabled=true
cluster.link.channel.properties.control=tcp.xml
cluster.link.channel.properties.transport.0=tcp.xml
lucene.replicate.write=true

# In order to make use of JGroups
ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
ehcache.cache.manager.peer.provider.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory
net.sf.ehcache.configurationResourceName.peerProviderProperties=file=/myehcache/tcp.xml
ehcache.multi.vm.config.location.peerProviderProperties=file=/myehcache/tcp.xml

cluster.executor.debug.enabled=true
ehcache.statistics.enabled=true

I am not able to get cluster cache replication working. Can anybody point me in the right direction? I can post more details later if needed. I also tried modifying hibernate-clustered.xml and liferay-multi-vm-clustered.xml, but nothing works.

Abelabelard asked 21/1, 2013 at 17:33

After spending days reading countless blog posts, forum topics, and of course SO questions, I wanted to summarize here how we finally managed to configure cache replication in a Liferay 6.2 cluster, using unicast TCP to suit Amazon EC2.

JGroups configuration

Before configuring Liferay for cache replication, you must understand that Liferay relies on JGroups channels. Basically, JGroups lets an instance discover and communicate with remote instances. By default (at least in Liferay), it uses multicast UDP for this. See the JGroups website for more.

To enable unicast TCP, you must first get JGroups' TCP configuration file out of jgroups.jar in the Liferay webapp (something like $LIFERAY_HOME/tomcat-7.0.42/webapps/ROOT/WEB-INF/lib/jgroups.jar). Extract this file to a location on the Liferay webapp's classpath, say $LIFERAY_HOME/tomcat-7.0.42/webapps/ROOT/WEB-INF/classes/custom_jgroups/tcp.xml. Take note of this path.
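
For example, the extraction could look like this (a sketch; the paths assume the Tomcat bundle layout shown above):

# create the target directory on the webapp classpath
mkdir -p $LIFERAY_HOME/tomcat-7.0.42/webapps/ROOT/WEB-INF/classes/custom_jgroups
# copy tcp.xml out of the JGroups jar (the file sits at the root of the jar)
unzip -j $LIFERAY_HOME/tomcat-7.0.42/webapps/ROOT/WEB-INF/lib/jgroups.jar tcp.xml \
      -d $LIFERAY_HOME/tomcat-7.0.42/webapps/ROOT/WEB-INF/classes/custom_jgroups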

For this configuration to work in a Liferay cluster, you just need to add a singleton_name="liferay" attribute to the TCP tag:

<config xmlns="urn:org:jgroups"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.1.xsd">
    <TCP singleton_name="liferay"
         bind_port="7800"
         loopback="false"
         ...

You may have noticed that:

A. this configuration file does not specify a bind address on which to listen, and

B. the initial hosts of the cluster must be set through a system property.
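
That is because the default tcp.xml reads both values from JVM system properties: the transport picks up -Djgroups.bind_addr, and the discovery protocol typically contains an element like the following (an assumed, version-dependent excerpt shown for context only; you do not need to edit it):

<TCPPING timeout="3000"
         initial_hosts="${jgroups.tcpping.initial_hosts:localhost[7800],localhost[7801]}"
         port_range="1"/>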

In fact, you need to modify $LIFERAY_HOME/tomcat-7.0.42/bin/setenv.sh to add the following JVM system properties:

-Djava.net.preferIPv4Stack=true
-Djgroups.bind_addr=192.168.0.1
-Djgroups.tcpping.initial_hosts=192.168.0.1[7800],80.200.230.2[7800]

The bind address defines which network interface to listen on (the JGroups port is set to 7800 in the TCP configuration file). The initial hosts property must list every single instance of the cluster (for more on this, see TCPPING and MERGE2 in the JGroups docs), along with their listening ports. Remote instances may be referred to by their host names, local addresses, or public addresses.

(Tip: if you are setting up a Liferay cluster on Amazon EC2, chances are the local IP address and host name of your instances change after each reboot. To work around this, you may replace the local address in setenv.sh with the result of the hostname command: `hostname` -- notice the backticks here.)
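
For instance, a sketch of the corresponding setenv.sh flags (assuming they are appended to JAVA_OPTS or CATALINA_OPTS so the shell expands the backticks; the second host's address is just a placeholder):

-Djava.net.preferIPv4Stack=true
-Djgroups.bind_addr=`hostname`
-Djgroups.tcpping.initial_hosts=`hostname`[7800],80.200.230.2[7800]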

(Tip: if using security groups on EC2, you should also make sure port 7800 is open between all the instances in the same security group.)
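
One way to do that is a self-referencing ingress rule with the AWS CLI (the security group ID below is a placeholder):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 7800 \
    --source-group sg-0123456789abcdef0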

Liferay configuration

JGroups replication is enabled on Liferay by adding the following properties to your portal-ext.properties:

# Tells Liferay to enable Cluster Link. This sets up JGroups control and transport channels (necessary for indexes and cache replication)
cluster.link.enabled=true
# This external address is used to determine which network interface must be used. This typically points to the database shared between the instances.
cluster.link.autodetect.address=shareddatabase.eu-west-1.rds.amazonaws.com:5432

Configuring JGroups for unicast TCP is just a matter of pointing to the right file:

# Configures JGroups control channel for unicast TCP
cluster.link.channel.properties.control=/custom_jgroups/tcp.xml
# Configures JGroups transport channel for unicast TCP
cluster.link.channel.properties.transport.0=/custom_jgroups/tcp.xml

In the same file, Lucene index replication requires this single property:

# Enable Lucene indexes replication through Cluster Link
lucene.replicate.write=true

EhCache cache replication is more subtle. You must configure JGroups for both the Hibernate cache and Liferay's internal caches. To understand this configuration, you must know that since Liferay 6.2 the default EhCache configuration files are already the "clustered" variants (so you do not need to set these properties; they are shown here for reference):

# Default hibernate cache configuration file
net.sf.ehcache.configurationResourceName=/ehcache/hibernate-clustered.xml
# Default internal cache configuration file
ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml

These configuration files both rely on EhCache factories that must be set to enable JGroups:

# Enable EhCache caches replication through JGroups
ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
ehcache.cache.manager.peer.provider.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory

JGroups' cache manager peer provider factory expects a file parameter containing the JGroups configuration. Specify the unicast TCP configuration file:

# Configure hibernate cache replication for unicast TCP
net.sf.ehcache.configurationResourceName.peerProviderProperties=file=/custom_jgroups/tcp.xml

# Configure internal caches replication for unicast TCP
ehcache.multi.vm.config.location.peerProviderProperties=file=/custom_jgroups/tcp.xml

(Tip: when in doubt, refer to the property definitions and default values: https://docs.liferay.com/portal/6.2/propertiesdoc/portal.properties.html)

Debugging

In addition, you can enable debugging traces with:

cluster.executor.debug.enabled=true

You can even tell Liferay to display on every page the name of the node that processed the request:

web.server.display.node=true

Finally, JGroups channels expose a diagnostic service that can be queried with the JGroups probe tool.
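
For example, the probe tool can be run straight from the bundled jar (a sketch; it assumes the channels' diagnostics service is enabled, which is the JGroups default):

# query the JGroups channels reachable from this machine
java -cp $LIFERAY_HOME/tomcat-7.0.42/webapps/ROOT/WEB-INF/lib/jgroups.jar \
     org.jgroups.tests.Probe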

Final note

Please bear in mind this only covers indexes and cache replication. When setting up a Liferay cluster, you should also consider setting up:

  • Shared database (RDS on AWS),
  • Shared DocumentLibrary (S3 or RDS on AWS),
  • Session replication on Tomcat (a minimal sketch follows this list),
  • And maybe more depending on how you use Liferay.
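
Regarding the Tomcat session replication item, the minimal well-known setup is the SimpleTcpCluster element in server.xml plus a <distributable/> element in the webapp's web.xml. This is only a sketch: the default membership service relies on multicast, which does not work on EC2, so it needs further tuning there.

<!-- $LIFERAY_HOME/tomcat-7.0.42/conf/server.xml, inside the <Engine> (or <Host>) element -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
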
Arias answered 26/2, 2015 at 10:2 Comment(3)
I think this was the most complete answer, and it helped me resolve my clustering issues. Thanks! – Hybridize
You mention Tomcat session replication in your list. Isn't LR Portal supposed to handle it across the cluster? – Jagatai
@Jagatai As mentioned, Liferay's clustering feature is limited to indexes and cache replication. In addition, you may or may not want to enable session replication across Tomcat instances, depending on your load balancing policy. For a more comprehensive view of the Liferay clustering architecture, feel free to refer to the Liferay knowledge base, e.g. dev.liferay.com/discover/portal/-/knowledge_base/6-1/… – Arias

I spent many hours making a Liferay 6.1.1 CE cluster work on AWS.

Here is my portal-ext.properties, with a few differences from yours:

##
## JDBC
##

# Tomcat datasource
jdbc.default.jndi.name=jdbc/LiferayPool

##
## Mail
##

# Tomcat mail session
mail.session.jndi.name=mail/MailSession

##
## Document Library Portlet
##

# NFS shared folder
dl.store.file.system.root.dir=/opt/document_library/

##
## Cluster Link
##

# Cluster Link over JGroups TCP unicast
cluster.link.enabled=true
cluster.link.channel.properties.control=custom_cache/tcp.xml
cluster.link.channel.properties.transport.0=custom_cache/tcp.xml

# Any VPC internal IP useful to detect local eth interface
cluster.link.autodetect.address=10.0.0.19:22

##
## Lucene Search
##

# Lucene index replication over Cluster Link
lucene.replicate.write=true

##
## Hibernate
##

# Second Level cache distributed with Ehcache over JGroups TCP unicast
net.sf.ehcache.configurationResourceName=/custom_cache/hibernate-clustered.xml
net.sf.ehcache.configurationResourceName.peerProviderProperties=file=custom_cache/tcp.xml

##
## Ehcache
##

# Liferay cache distributed with Ehcache over JGroups TCP unicast
ehcache.multi.vm.config.location=/custom_cache/liferay-multi-vm-clustered.xml
ehcache.multi.vm.config.location.peerProviderProperties=file=custom_cache/tcp.xml

ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
ehcache.cache.manager.peer.provider.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory

I added the following attribute

singleton_name="custom_cache"

to the TCP element of the custom_cache/tcp.xml JGroups config.
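
For reference, the top of that file then starts roughly like this (a sketch mirroring the snippet in the answer above; the remaining attributes keep their defaults):

<TCP singleton_name="custom_cache"
     bind_port="7800"
     ...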

Finally, I added the following options to the Liferay startup script for node NODE_1:

JAVA_OPTS="$JAVA_OPTS -Djgroups.bind_addr=NODE_1 -Djgroups.tcpping.initial_hosts=NODE_1[7800],NODE_2[7800] -Djava.net.preferIPv4Stack=true"

and for node NODE_2:

JAVA_OPTS="$JAVA_OPTS -Djgroups.bind_addr=NODE_2 -Djgroups.tcpping.initial_hosts=NODE_1[7800],NODE_2[7800] -Djava.net.preferIPv4Stack=true"

I hope this helps you save time.

Bisulcate answered 28/4, 2013 at 21:55 Comment(1)
It seems that singleton_name="custom_cache" is a very, very important attribute :) Cache replication would not work without this setting. Thanks for sharing! – Sulphurate
