Configuring a Cassandra Cluster on Ubuntu Bionic Beaver
Apache Cassandra is a replicated NoSQL database and an ideal solution for situations that require maximum data redundancy, uptime, and horizontal scaling across multiple servers. It is an open-source application that can easily be managed from a simple command-line interface using the Cassandra Query Language (CQL). CQL is very similar to Structured Query Language (SQL), making it easy to learn for users who are already familiar with SQL.
You may need certain IAM permissions to carry out some actions described on this page. To install Cassandra, proceed as follows:
- Connect to your instance via SSH or by using PuTTY.
- Add the Apache Cassandra repository:
echo "deb http://www.apache.org/dist/cassandra/debian 41x main" | sudo tee /etc/apt/sources.list.d/cassandra.list
- Add the required PGP keys to use the repositories:
curl https://downloads.apache.org/cassandra/KEYS | sudo apt-key add -
- Reload the APT configuration and update the software already installed on your instance:
sudo apt update && sudo apt upgrade
- Install Cassandra and NTP. NTP (Network Time Protocol) is used to keep the time of the instance synchronized:
sudo apt install cassandra ntp
Repeat the steps above on three Instances in total.
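The steps above can be collected into a small helper script. This is a minimal sketch, not a definitive installer: the node list and root SSH access are assumptions, and the remote execution line is left commented out so you can adapt it to your own setup first.

```shell
#!/bin/sh
# Sketch: the install commands from this section, collected in one helper.
# NODES and root SSH access are assumptions; adapt them to your own setup.
NODES="10.0.0.1 10.0.0.2 10.0.0.3"

install_cmds() {
  cat <<'EOF'
echo "deb http://www.apache.org/dist/cassandra/debian 41x main" | sudo tee /etc/apt/sources.list.d/cassandra.list
curl https://downloads.apache.org/cassandra/KEYS | sudo apt-key add -
sudo apt update && sudo apt upgrade -y
sudo apt install -y cassandra ntp
EOF
}

for node in $NODES; do
  echo "Installing on $node"
  # Uncomment once SSH key access to each Instance is in place:
  # install_cmds | ssh root@"$node" sh
done
```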
The configuration files of Cassandra are located in the /etc/cassandra directory.
cassandra.yaml is the file that contains most of the Cassandra configuration, such as the ports used, file locations, and seed node IP addresses.
The key points to edit are:
- cluster_name: The name of your cluster; you can choose any name you like to describe it. All members of a cluster must have the same name.
- num_tokens: This value represents the number of virtual nodes within a Cassandra instance. It is used to partition the data and to spread it throughout the cluster. A good starting value is 256.
- seeds: The IP addresses of the cluster's seed servers. Seed nodes are used as known places to obtain cluster information (such as the list of nodes in the cluster). All active nodes hold this information, to avoid a single point of failure; seed nodes are simply reliable, known locations that keep it available while other machines come and go. It is recommended to have three seed nodes per datacenter.
- listen_address: The IP address that Cassandra listens on for internal (Cassandra-to-Cassandra) communication. The software will try to guess the IP address of your Instance if you leave this blank, but it is best to specify it yourself. This value is different on each node.
- rpc_address: The IP address that Cassandra listens on for client communication, such as through the CQL protocol. This value is also different on each node.
- endpoint_snitch: The 'snitch' used by Cassandra. A snitch tells Cassandra which datacenter and rack a node belongs to within a cluster. Various types can be used here; refer to the official documentation for more information on this topic.
- On Node 1:
cluster_name: 'Test Cluster'
num_tokens: 256
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.0.1,10.0.0.2"
listen_address: 10.0.0.1
rpc_address: 10.0.0.1
endpoint_snitch: GossipingPropertyFileSnitch
- On Node 2:
cluster_name: 'Test Cluster'
num_tokens: 256
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.0.1,10.0.0.2"
listen_address: 10.0.0.2
rpc_address: 10.0.0.2
endpoint_snitch: GossipingPropertyFileSnitch
- On Node 3:
cluster_name: 'Test Cluster'
num_tokens: 256
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.0.1,10.0.0.2"
listen_address: 10.0.0.3
rpc_address: 10.0.0.3
endpoint_snitch: GossipingPropertyFileSnitch
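The per-node edits above can also be scripted. The following is a minimal sketch using sed, run here against a stand-in file: on a real node the target would be /etc/cassandra/cassandra.yaml, NODE_IP changes per node, and a real cassandra.yaml contains many more keys than this sample.

```shell
#!/bin/sh
# Sketch: apply this node's values to cassandra.yaml with sed.
NODE_IP="10.0.0.1"          # differs per node
CONF="cassandra.yaml"       # /etc/cassandra/cassandra.yaml on a real node

# Minimal stand-in file for illustration only.
cat > "$CONF" <<'EOF'
cluster_name: 'Test Cluster'
listen_address: localhost
rpc_address: localhost
endpoint_snitch: SimpleSnitch
EOF

sed -i "s/^listen_address:.*/listen_address: $NODE_IP/" "$CONF"
sed -i "s/^rpc_address:.*/rpc_address: $NODE_IP/" "$CONF"
sed -i "s/^endpoint_snitch:.*/endpoint_snitch: GossipingPropertyFileSnitch/" "$CONF"
```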
To be fault-tolerant and to minimize the risk of data loss or downtime, Cassandra distributes data across the cluster. Whenever possible, it ensures that data and its replicas are stored on different racks or datacenters, so that even a failing datacenter has minimal impact on the production environment.
- Edit the /etc/cassandra/cassandra-rackdc.properties file on each node and set the DC and rack information. You can use your own naming standard to determine the location of each node.
- On Node 1:
dc=datacenter1
rack=rack1
- On Node 2:
dc=datacenter1
rack=rack1
- On Node 3:
dc=datacenter1
rack=rack2
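As a sketch, the rack assignment can be derived from the node's IP address. The mapping below mirrors the example cluster in this tutorial (nodes 10.0.0.1 and 10.0.0.2 in rack1, node 10.0.0.3 in rack2); adapt it to your own naming standard.

```shell
#!/bin/sh
# Sketch: write cassandra-rackdc.properties for this node.
# The IP-to-rack mapping mirrors the example cluster in this tutorial.
NODE_IP="10.0.0.3"                    # set to this node's address
case "$NODE_IP" in
  10.0.0.1|10.0.0.2) RACK=rack1 ;;
  10.0.0.3)          RACK=rack2 ;;
  *)                 RACK=rack1 ;;    # fallback for other nodes
esac
# On a real node the file lives at /etc/cassandra/cassandra-rackdc.properties.
printf 'dc=%s\nrack=%s\n' datacenter1 "$RACK" > cassandra-rackdc.properties
```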
- Delete the /etc/cassandra/cassandra-topology.properties file, as we do not use it:
rm /etc/cassandra/cassandra-topology.properties
- Start Cassandra and enable automatic launching on system boot.
systemctl enable cassandra.service
systemctl start cassandra.service
- Verify that the service is running.
systemctl -l status cassandra.service
- Check the status of the cluster with the command nodetool status:
root@scw-cassandra:~# nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address   Load        Tokens  Owns (effective)  Host ID                               Rack
UN  10.0.0.3  119.29 KiB  256     60,7%             c9b13a33-147f-4293-8aaf-21ace6d1b756  rack2
UN  10.0.0.2  170.88 KiB  256     65,3%             2a100701-3da4-444a-892d-164d2222009c  rack1
UN  10.0.0.1  15.47 KiB   256     74,1%             93feee5d-3de8-4c0a-908d-2432f26a1a1e  rack1
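When scripting health checks, the `UN` flag (Up/Normal) in the first column of each node line is the useful signal. A small sketch that counts Up/Normal nodes from captured output; the sample below is abbreviated, and on a live node you would pipe the real `nodetool status` output in instead.

```shell
#!/bin/sh
# Sketch: count nodes reported Up/Normal ("UN") in `nodetool status` output.
# An abbreviated captured sample is used here for illustration.
sample='UN  10.0.0.3  119.29 KiB  256  60,7%  c9b13a33  rack2
UN  10.0.0.2  170.88 KiB  256  65,3%  2a100701  rack1
UN  10.0.0.1  15.47 KiB   256  74,1%  93feee5d  rack1'

# On a live node:  up=$(nodetool status | grep -c '^UN')
up=$(printf '%s\n' "$sample" | grep -c '^UN')
echo "$up nodes are Up/Normal"
```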
Once all nodes have started, the cluster is ready. You can use the cqlsh tool to interact with your cluster. It is installed by default on any of the nodes.
- Connect to your cluster:
cqlsh 10.0.0.1
Connected to scw-cassandra01 at 10.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.2.1 | CQL spec 3.4.0 | Native protocol v4]
Use HELP for help.
cqlsh>
- To quit the CQL shell, type EXIT and press enter.
Note: More information about the CQL syntax is available in the official documentation.
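As a first experiment with CQL, the sketch below writes a keyspace-creation statement to a file so it can be fed to cqlsh with the -f option. The keyspace name `demo` is a hypothetical choice; `NetworkTopologyStrategy` with a replication factor of 3 in datacenter1 matches the three-node, GossipingPropertyFileSnitch setup configured above.

```shell
#!/bin/sh
# Sketch: a first CQL statement, saved to a file for use with `cqlsh -f`.
# The keyspace name "demo" is hypothetical; replication factor 3 matches
# the three nodes in datacenter1 configured in this tutorial.
cat > first_keyspace.cql <<'EOF'
CREATE KEYSPACE IF NOT EXISTS demo
  WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter1': 3};
EOF
# Run it against any node, for example:
# cqlsh 10.0.0.1 -f first_keyspace.cql
```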
By default, your cluster is named 'Test Cluster'. To change it to a more friendly name, follow these steps:
- Log in to the CQL shell with cqlsh and run the following command, replacing [new_cluster_name] with your new cluster name:
UPDATE system.local SET cluster_name = '[new_cluster_name]' WHERE KEY = 'local';
- Leave the CQL shell with the command EXIT.
- Edit the file /etc/cassandra/cassandra.yaml on each of the nodes and replace the value of the cluster_name variable with the new cluster name you just set.
- Save and close the file.
- Run the following command from your Linux terminal to flush the system keyspace to disk, so that the change is preserved in the node:
nodetool flush system
- Restart Cassandra.
systemctl restart cassandra.service
- Log in to the cluster with cqlsh and verify that the new cluster name is visible.
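The cassandra.yaml edit in the rename procedure can be scripted per node. This is a minimal sketch using sed against a stand-in file: the new name is a placeholder, and on a real node the target is /etc/cassandra/cassandra.yaml.

```shell
#!/bin/sh
# Sketch: update cluster_name in cassandra.yaml (repeated on each node).
NEW_NAME="Production Cluster"   # hypothetical new name
CONF="cassandra.yaml"           # /etc/cassandra/cassandra.yaml on a real node

echo "cluster_name: 'Test Cluster'" > "$CONF"   # stand-in file for illustration
sed -i "s/^cluster_name:.*/cluster_name: '$NEW_NAME'/" "$CONF"

# To verify after restarting Cassandra, run for example:
# cqlsh 10.0.0.1 -e "SELECT cluster_name FROM system.local;"
```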