Cluster Support

This guide covers all topics related to running the CMS as a cluster.

1 Overview

1.1 Architecture

The architecture of a CMS instance is built from the following components:

The requirements for cluster support are:

  • SQL Database (MySQL or MariaDB): full read/write access to the shared/clustered database from all CMS cluster nodes. See Backend database.
  • Local Filesystem: read/write access to some shared filesystem paths. See Filesystem mounts.
  • Apache HTTP Server with PHP: the PHP code is stateless, so no cluster awareness is necessary.
  • Apache Tomcat: cluster support is provided by Hazelcast IMDG.

1.2 Functionality

All user interactions (through the UI or the REST API) can be directed at any of the CMS cluster nodes. It is therefore possible to distribute incoming client requests across all CMS cluster nodes by using a load balancer (session stickiness is not required).
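
As an illustration, a minimal load-balancer configuration might look like the following nginx sketch (the hostnames are assumptions; any load balancer works, and no sticky-session mechanism is needed):

```nginx
# Every CMS node can serve any request, so plain round-robin is sufficient.
upstream cms_cluster {
    server cms-node-1.example.com:80;
    server cms-node-2.example.com:80;
    server cms-node-3.example.com:80;
}

server {
    listen 80;
    location / {
        proxy_pass http://cms_cluster;
    }
}
```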

Automatic (background) jobs are executed on only one of the CMS cluster nodes (the master node). These include:

  • Scheduler Tasks
  • Publish Process
  • Dirting of objects
  • Triggering Activiti Jobs
  • Devtools package synchronization

The master node is automatically elected by the cluster whenever necessary. This means that if the current master node leaves the cluster (e.g. because Apache Tomcat is stopped), another node will be elected as the new master node. There will always be exactly one master node, except in special cases while performing an update (see Updating for details).
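
Hazelcast-style clusters commonly resolve such elections by treating the oldest surviving member as the coordinator. The following is only an illustrative sketch of that rule, not the actual CMS implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: members are kept in join order; the longest-lived member is master.
class ClusterSketch {
    private final List<String> members = new ArrayList<>(); // preserves join order

    void join(String node)  { members.add(node); }
    void leave(String node) { members.remove(node); }

    // The master is simply the oldest member still present; when it leaves,
    // the next-oldest member automatically "wins" the election.
    String master() {
        return members.isEmpty() ? null : members.get(0);
    }
}
```

With this rule there is always exactly one master while the cluster is non-empty, and no coordination beyond an agreed member ordering is required.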

2 Setup

2.1 Backend database

All cluster nodes must have read/write access to the same backend database system (MySQL or MariaDB). The backend database itself can also be set up as a cluster of database nodes (so that, for example, each CMS cluster node accesses a separate database cluster node of the same database cluster), but this is optional.

2.2 Filesystem mounts

All cluster nodes must have read/write access to the following shared filesystem paths (e.g. mounted via NFS):

  • /Node/node/content: contains binary contents of images/files, resized images for the GenticsImageStore, publish log files, devtool packages and statically published files.
  • /Node/node/system: contains import/export bundles and scheduler log files.

Optionally, the CMS configuration located at /Node/etc/conf.d/ can also be shared between the cluster nodes to ensure that all nodes use identical configuration.
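
As a sketch, the shared paths could be mounted via NFS with /etc/fstab entries like the following (the NFS server name and export paths are assumptions; tune the mount options to your environment):

```
# /etc/fstab on each CMS cluster node (hypothetical NFS server and exports)
nfs-server.example.com:/exports/cms/content  /Node/node/content  nfs  rw,hard  0 0
nfs-server.example.com:/exports/cms/system   /Node/node/system   nfs  rw,hard  0 0
```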

3 Configuration

3.1 Activate Feature

/Node/etc/conf.d/*.conf

$FEATURE["cluster"] = true; 

3.2 Hazelcast Configuration

By default, the Hazelcast configuration file must be placed at /Node/etc/tomcat/gentics/hazelcast.xml.

The only setting that is mandatory for the CMS cluster is an individual instance name for each cluster node. This is necessary for the automatic changelog system (used for updating) to work.

/Node/etc/tomcat/gentics/hazelcast.xml
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast
	xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.0.xsd"
	xmlns="http://www.hazelcast.com/schema/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
	<instance-name>gcms_node_1</instance-name>
	...
</hazelcast>

For all other Hazelcast-specific settings, please consult the Hazelcast IMDG documentation.
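
For example, a cluster will usually also need a network join configuration so the nodes can discover each other. A sketch using Hazelcast's TCP/IP discovery (the member hostnames are assumptions) could look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast
	xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.0.xsd"
	xmlns="http://www.hazelcast.com/schema/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
	<instance-name>gcms_node_1</instance-name>
	<network>
		<join>
			<multicast enabled="false"/>
			<tcp-ip enabled="true">
				<member>cms-node-1.example.com</member>
				<member>cms-node-2.example.com</member>
			</tcp-ip>
		</join>
	</network>
</hazelcast>
```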

4 REST API

The ClusterResource of the REST API can be used to retrieve information about the cluster and to make a specific cluster node the current master.

Administration permission is required to access the ClusterResource.
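
As an illustration only, such calls might look like the following curl sketch. The endpoint paths and the sid authentication parameter are assumptions; consult the ClusterResource section of the REST API reference for the exact routes:

```
# Hypothetical paths; verify against the REST API reference.
curl -s "https://cms.example.com/rest/cluster/info?sid=$SID"
curl -s -X PUT "https://cms.example.com/rest/cluster/master?sid=$SID"
```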

5 Updating

The nodes of a CMS cluster can be updated individually with the normal “AutoUpdate” functionality. However, there are some important things to note:

  • If an update contains changes in the database, those changes are applied with the update of the first CMS cluster node. Since the database is shared between all CMS cluster nodes, the changes will be visible to all other (not yet updated) cluster nodes as well.
  • Updates that contain no database changes (or only database changes that are compatible with older versions) can be done while other cluster nodes are still productive.
  • Updates that contain incompatible database changes are marked in the changelog. Additionally, when such an update is applied to the first cluster node, the cluster is specifically prepared for the update:
    • The maintenance mode is automatically enabled (if this was not done before by the administrator).
    • The current master node will drop its master flag, so no background jobs (scheduler tasks, publish process, dirting, import/export) will run anymore.
  • Generally, it is strongly recommended that all nodes of a CMS cluster use the exact same version. This means that the intervals between updates of individual nodes should be as short as possible.