Scale a Fusion 4.x Cluster
To scale a Fusion cluster, you can add new Fusion nodes, add new dedicated indexing nodes, or move Fusion to new nodes.
Adding a new Fusion node to an existing cluster
Follow these steps to add a new node to an existing Fusion cluster:
Note: If you are running Fusion's embedded ZooKeeper instances as an ensemble for your cluster, ensure that you are running an odd number of ZooKeeper instances after adding the new node.
1. Stop Fusion on all nodes in the cluster.
   This ensures that there is no data inconsistency between the instances when the new node comes up.
2. Decompress the new copy of Fusion and place it in the desired directory.
3. Configure the fusion.cors (fusion.properties in Fusion 4.x) file to match your requirements.
   If you will also run the embedded ZooKeeper, add the new node’s IP/hostname and port to the default.zk.connect string and copy this change to all other instances in your cluster. Configure the memory and other JVM options for the Fusion modules, then save the file. (A sketch of the default.zk.connect change appears after this procedure.)
4. If embedded ZooKeeper is used in your cluster and you intend to start ZooKeeper on this node, follow the additional steps below. If not, you are ready to start all nodes in the cluster. (A shell sketch of steps a through e appears after this procedure.)
   a. Copy the $FUSION_HOME/conf/zookeeper/zoo.cfg file from one of the existing nodes to the new node, overwriting the default file.
   b. Add the entry for the new ZooKeeper to the server list in the zoo.cfg file. The entry format is server.x=IP:port:port. For example, if this is the 5th node, then the new entry in zoo.cfg is server.5=IP:port:port.
   c. Create a zookeeper folder under $FUSION_HOME/data.
   d. Create a new myid file in $FUSION_HOME/data/zookeeper. The contents of this file must be an integer equal to the number of the new ZooKeeper node in the ensemble. For example, if the new node will be the 5th node in your ZooKeeper ensemble, then the myid file should contain the value "5".
   e. Copy the $FUSION_HOME/data/zookeeper/version-2 directory from one of the existing nodes to the new node, overwriting the default directory.
   f. Modify the connect string for the default search cluster:
      - Start ZooKeeper on all nodes. Next, you will need the zkcli script, located in $FUSION_HOME/apps/solr-dist/server/scripts/cloud-scripts. Use zkcli.sh for Unix or zkcli.bat for Windows. The examples below use the Unix script.
      - Download the default search cluster file:
        ./zkcli.sh -z <zk1>:<port1>,<zk2>:<port2>,... -cmd getfile <path_to_default_cluster> <path_to_dump_file>.json
        The path will differ depending on your Fusion version:
        - 2.4.x: /lucid/search-clusters/default
        - 3.x: /lwfusion/<fusion_version>/core/search-clusters/default
        For example:
        ./zkcli.sh -z localhost:9983 -cmd getfile /lwfusion/3.1.2/core/search-clusters/default default_search_cluster.json
      - In the downloaded JSON file, find the connectString key and replace the old IP value with the IP of the new Fusion node. Be sure to specify the chroot if your cluster is configured to use it. For example:
        {
          "id" : "default",
          "connectString" : "localhost:9983/lwfusion/3.1.2/solr",
          "zkClientTimeout" : 30000,
          "zkConnectTimeout" : 60000,
          "cloud" : true,
          "bufferFlushInterval" : 1000,
          "bufferSize" : 100,
          "concurrency" : 10,
          "authConfig" : { "authType" : "none" },
          "validateCluster" : true
        }
      - Upload the modified search cluster file:
        ./zkcli.sh -z <zk1>:<port1>,<zk2>:<port2>,... -cmd putfile <path_to_default_cluster> <path_to_dump_file>.json
        For example:
        ./zkcli.sh -z localhost:9983 -cmd putfile /lwfusion/3.1.2/core/search-clusters/default default_search_cluster.json
5. Start Fusion on all nodes in the cluster.
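For reference, here is a minimal sketch of the default.zk.connect change described in step 3, assuming an existing four-node ensemble being extended with a 5th node; the hostnames and port are placeholders, not values from your environment:

# conf/fusion.properties (fusion.cors in other versions), on every Fusion node
# Existing ensemble (assumed): zk1-zk4; new node: zk5, keeping the total count odd
default.zk.connect = zk1.example.com:9983,zk2.example.com:9983,zk3.example.com:9983,zk4.example.com:9983,zk5.example.com:9983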
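The following shell sketch shows one way to perform steps a through e on the new (5th) node; the source host existing-node, the use of scp, and the 2888:3888 ports are assumptions for illustration, and $FUSION_HOME is assumed to point at the same path on both nodes:

# a. Copy zoo.cfg from an existing node, overwriting the default file
scp existing-node:$FUSION_HOME/conf/zookeeper/zoo.cfg $FUSION_HOME/conf/zookeeper/zoo.cfg
# b. Append the new server entry (server.5); replace <new-node-ip> and the ports with your own values
echo "server.5=<new-node-ip>:2888:3888" >> $FUSION_HOME/conf/zookeeper/zoo.cfg
# c. and d. Create the data directory and the myid file containing "5"
mkdir -p $FUSION_HOME/data/zookeeper
echo "5" > $FUSION_HOME/data/zookeeper/myid
# e. Copy the version-2 directory from an existing node, overwriting the default directory
scp -r existing-node:$FUSION_HOME/data/zookeeper/version-2 $FUSION_HOME/data/zookeeper/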
Adding an indexing node to a Fusion cluster
If you need more capacity for indexing, you can add nodes dedicated to indexing. To do this, you add a new Fusion node, configure it to only run the Solr service, then allocate replicas of your collections to the new node.
1. Install the Fusion package on the new node.
2. Edit fusion.cors (fusion.properties in Fusion 4.x) as follows:
   a. Edit group.default to include only the Solr service. For example, change
      group.default = zookeeper, solr, api, connectors-rpc, connectors-classic, admin-ui, proxy, webapps
      to
      group.default = solr
   b. Uncomment default.zk.connect and point it to the cluster’s ZooKeeper instances. For example, change
      # default.zk.connect = localhost:9983
      to
      default.zk.connect = 172.23.1.1:9983,172.23.1.2:9983,172.23.1.3:9983/solr
   c. Save the file.
3. Start Fusion on the new node:
   bin/fusion start
   At this point, the new node is added to the cluster. No indexing takes place on the new node yet.
4. Allocate one or more collection replicas to this node (if you prefer to script this with the Solr Collections API instead of the UI, see the sketch after this procedure):
   a. Open the Solr UI at http://<new-node-hostname>:8983/solr/.
   b. Click Collections.
   c. Select a collection to replicate on the new indexing node.
   d. Click Shard: shard1 (or another shard if you have more than one for this collection).
   e. Click add replica.
   f. From the Node drop-down list, select the new node.
   g. Click Create Replica.
   To verify that the collection is being replicated, you can click Cloud and view the replicas.
5. Consider whether secondary collections should also be replicated. For example, consider adding replicas for the signals and aggregations collections associated with the main collections that you are replicating.
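If you prefer to script replica creation instead of clicking through the Solr UI, the Solr Collections API provides an ADDREPLICA action. A minimal sketch, assuming a collection named mycollection with a single shard1 and the standard host:port_solr node-name format; substitute your own collection, shard, and node names:

curl "http://<new-node-hostname>:8983/solr/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard1&node=<new-node-hostname>:8983_solr"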
Moving Fusion from one node to another
1. Stop Fusion on all nodes in the cluster.
   This ensures that there is no data inconsistency between the instances when the new node comes up.
2. Compress the Fusion node you wish to move (see the sketch after this procedure).
3. Copy the compressed file to the destination.
4. Starting with step 2, follow the instructions above for adding a new node.
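As a sketch of steps 2 and 3, assuming the Fusion install lives in /opt/fusion on the old node and the destination host is new-node (both assumptions):

# On the old node, after stopping Fusion everywhere: archive the install (step 2)
tar -czf fusion-move.tar.gz -C /opt fusion
# Copy the archive to the destination node (step 3)
scp fusion-move.tar.gz new-node:/opt/
# On the new node, decompress it and continue from step 2 of "Adding a new Fusion node":
# tar -xzf /opt/fusion-move.tar.gz -C /opt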