Cassandra Cluster Migration to a New Cluster (V2.2.4)

Monday, January 18, 2016

Overview

This page describes how to migrate from an existing cluster to a new cluster. Often we face situations where we want to move an entire cluster to a new one (the reason could be anything: better hardware, moving away from an existing data center, etc.) without taking any downtime, while ensuring all data gets migrated to the new cluster. We had the same requirement and decided to go with the strategy described below, which worked like a charm.

The whole process is online and does not impact clients. The old DC can be decommissioned at any time, but as a best practice we should take a snapshot on each node of the old DC before decommissioning.

This involves two major steps:

  1. Build a new Datacenter - build a parallel new data center with the number of nodes you want.
  2. Decommission the old Datacenter (old cluster nodes).

1. Build a new Datacenter 

This involves the following steps:

1. Ensure that we are using NetworkTopologyStrategy for all of our keyspaces (excluding a few system keyspaces), as in the check below.
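For example, you can check a keyspace's current strategy in cqlsh and, if it is still on SimpleStrategy, switch it to NetworkTopologyStrategy first (the keyspace name and replication count below are taken from the example later in this post; adjust them to your own setup):

DESC KEYSPACE tutorialspoint;

ALTER KEYSPACE tutorialspoint WITH replication = {'class': 'NetworkTopologyStrategy','datacenter1': 2};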

2. Make the necessary modifications in each new node's config files, matching the nodes of the old data center. (The fastest way is to copy the config files from an existing node to the new node and then make the node-specific changes.)

Important configs are listed below (sample snippets follow the list):


  • Cluster name should be the same as the existing cluster name.
  • Specify the seed nodes; don't forget to add a node from the new DC as a seed node.
  • Specify/verify the necessary values for the node IP address, same as on the existing nodes: listen_address, rpc_address, broadcast_rpc_address.
  • Set auto_bootstrap: false in the cassandra.yaml file. This can be specified anywhere in the cassandra.yaml file. It prevents the new nodes from attempting to get all the data from the other nodes in the data center.
  • Set endpoint_snitch the same as on the existing nodes and make the entries in the respective topology file. In our case we use PropertyFileSnitch, so the new datacenter and node entries need to be added to conf/cassandra-topology.properties across all nodes in the old and new DCs.
  • Make sure to configure the same num_tokens value.
  • Make sure to enable authentication if it is enabled on the existing nodes.
  • Specify the data and commitlog directories.
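
A minimal sketch of the relevant cassandra.yaml settings on a new DC2 node (the cluster name is a placeholder; the IPs, paths and token count are taken from the examples in this post, so use your own values):

cluster_name: 'MyCluster'              # must match the existing cluster name (placeholder here)
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "172.27.74.116,172.27.74.124"   # at least one old-DC and one new-DC node
listen_address: 172.27.74.124
rpc_address: 172.27.74.124
# broadcast_rpc_address: 172.27.74.124 # only if needed for your network setup
auto_bootstrap: false                  # remove/revert after the rebuild (see step 10)
endpoint_snitch: PropertyFileSnitch
num_tokens: 12                         # same as the existing nodes
data_file_directories:
    - /cassandra/data
commitlog_directory: /cassandra/log

And the corresponding conf/cassandra-topology.properties entries (the same file should be present on every node, old and new; the IP-to-DC/rack mappings below match the nodetool status output shown later):

172.27.74.116=datacenter1:rack1
172.27.74.55=datacenter1:rack1
172.27.74.124=DC2:RAC1
172.27.74.135=DC2:RAC1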
3. Make sure to clean the data directory and commitlog directory on the new nodes. This will ensure a clean bootstrap.

rm -rf /cassandra/data/*
rm -rf /cassandra/log/*

4. Start Cassandra on each new node, one by one. During bootstrap it will fetch all the keyspace and table schemas, but not the data.
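How you start the service depends on your install; for a typical package install it is something like:

sudo service cassandra start      # package install
# or, for a tarball install:
# bin/cassandra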

5. Verify the keyspaces were fetched by looking at the data directory or by executing "desc keyspaces" on the new nodes, and check "system.log" for any errors.
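For example (the IP is one of the new DC2 nodes from the output below, and the log path assumes a default package install):

cqlsh 172.27.74.124 -e "DESC KEYSPACES"
grep -i error /var/log/cassandra/system.log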

6. Execute "nodetool status" to check that the new datacenter and nodes show up in a healthy state. The LOAD should be very small since data has not been streamed yet (make a note of it; we will compare it after streaming).

In the output below, the new datacenter "DC2" has all nodes up, but the LOAD on the new nodes is very small compared to the nodes in the existing datacenter "datacenter1".

Datacenter: DC2
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens       Owns    Host ID                               Rack
UN  1**.27.7*.***  78.56 KB   12           ?       7b9ca516-19aa-432b-ac82-8d8fef44beef  RAC1
UN  172.27.74.135  168.65 KB  12           ?       9b47f3eb-b871-4f6b-a04f-894c50dffb5f  RAC1
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens       Owns    Host ID                               Rack
UN  172.27.74.116  228.66 KB  12           ?       d4797291-68e5-428a-9e87-94d5ab1125d7  rack1
UN  172.27.74.55   234.22 KB  12           ?       55b08035-7517-4cea-8908-5825ef097389  rack1


7. Alter all keyspaces and modify the replication settings to replicate to the new data center too. This applies to the system_auth keyspace as well.

ALTER KEYSPACE tutorialspoint WITH replication = {'class': 'NetworkTopologyStrategy','datacenter1': 2,'DC2':1};
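
The same needs to be done for system_auth (the replication counts here simply mirror the example above; choose values appropriate for your cluster):

ALTER KEYSPACE system_auth WITH replication = {'class': 'NetworkTopologyStrategy','datacenter1': 2,'DC2':1};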


8. Execute "nodetool rebuild <source-dc-name>" on each new node to stream the data that the node owns (the copy of the data the node is supposed to own as per the RF). This can be verified in the data directory. Nodetool rebuild only streams data from a single source replica per range. See the example after the bullets below.

  • This command needs to be executed on each NEW node.
  • It can be executed in parallel on multiple NEW nodes, depending on IO.
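
In our case the source datacenter is "datacenter1", so on each DC2 node:

nodetool rebuild datacenter1

# streaming progress can be watched with:
nodetool netstats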

9. After the rebuild finishes on each new node, verify the LOAD owned by each new node by executing "nodetool status".

In the output below, the new datacenter "DC2" has all nodes up, and the LOAD on the new nodes is now similar to the nodes in datacenter "datacenter1".


Datacenter: DC2
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens       Owns    Host ID                               Rack
UN  172.27.74.124  201.81 KB  12           ?       7b9ca516-19aa-432b-ac82-8d8fef44beef  RAC1
UN  172.27.74.135  232.75 KB  12           ?       9b47f3eb-b871-4f6b-a04f-894c50dffb5f  RAC1
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens       Owns    Host ID                               Rack
UN  172.27.74.116  253.91 KB  12           ?       d4797291-68e5-428a-9e87-94d5ab1125d7  rack1
UN  172.27.74.55   227.61 KB  12           ?       55b08035-7517-4cea-8908-5825ef097389  rack1


10. Remove "auto_bootstrap: false" from the cassandra.yaml file on all new nodes. This returns the parameter to its normal setting, so the nodes can get all the data from the other nodes in the data center if restarted.

Important:
  • Setting the replication to include the new DC, followed by nodetool rebuild, ensures all the data has been copied over to the new DC nodes and the new DC is ready to work independently; hence the old DC can be decommissioned safely without worrying about data loss.
  • As noted above, the whole process is online and does not impact clients. The old DC can be decommissioned at any time, but as a best practice take a snapshot on each node of the old DC before decommissioning.

2. Decommission old Data center (old cluster nodes). 

This involves the following steps:


  1. Make sure to take a snapshot on each node of the old DC, to be on the safer side (optional).
  2. Make sure no clients are still writing to any nodes in the data center (handle this with application-tier load balancing through the driver).
  3. Run a full repair on the new nodes (DC2 nodes). This ensures that all data has propagated from the data center being decommissioned. It is not required if the decommission is done immediately after adding the new DC (a sketch of steps 1, 3, 4 and 5 follows this list).
  4. Alter the keyspaces to remove the old datacenter from the replication strategy, so they no longer reference the data center being removed.
  5. Run "nodetool decommission" (causes a live node to decommission itself, streaming its data to the next node on the ring) on every node in the data center being removed, one by one.
  6. Shut down the Cassandra service.
  7. Remove the old node entries from the snitch file (conf/cassandra-topology.properties).
  8. Remove the old node entries from the seed node list in the "cassandra.yaml" file of each new node.
  9. Do a rolling restart to ensure everything comes up nicely.
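
A minimal sketch of steps 1, 3, 4 and 5, assuming the keyspace and datacenter names from the earlier examples (the snapshot tag is just illustrative):

# optional: snapshot each old-DC node first
nodetool snapshot -t before_decommission

# on each new DC2 node, run a full repair
nodetool repair -full

# in cqlsh, drop the old DC from each keyspace's replication, e.g.:
#   ALTER KEYSPACE tutorialspoint WITH replication = {'class': 'NetworkTopologyStrategy','DC2': 1};

# then, on each old datacenter1 node, one at a time:
nodetool decommission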

