Building your own Ceph distributed storage cluster on dedicated servers
- dedicated-servers
- dedibox
- Ceph
- object-storage
Ceph is an open-source, software-defined storage solution designed to address object, block, and file storage needs. It can handle several exabytes of data, replicating data and ensuring fault tolerance on standard commodity hardware. Because Ceph is self-healing and self-managing, it keeps administration time and costs low.
This tutorial guides you through deploying a three-node Ceph cluster using Dedibox dedicated servers running Ubuntu Focal Fossa (20.04 LTS).
Before you start
To complete the actions presented below, you must have:
- A Dedibox account logged into the console
- Owner status or IAM permissions allowing you to perform actions in the intended Organization
- 3 Dedibox servers running Ubuntu Focal Fossa 20.04 LTS or later
- An additional admin machine on which to install ceph-deploy
Installing ceph-deploy on the admin machine
ceph-deploy simplifies Ceph cluster deployment with a user-friendly command-line interface. Install it on an independent admin machine using the following steps:
- Connect to the admin machine using SSH:
  ssh myuser@my.admin.server.ip
- Add the Ceph release key to apt:
  wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
- Add the Ceph repository to the APT package manager:
  echo deb https://eu.ceph.com/debian-octopus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
- Update the APT package manager to include Ceph’s repository:
  sudo apt update
- Install ceph-deploy:
  sudo apt install ceph-deploy
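Before moving on, you can quickly check that the tool is available and see which version the repository provided (a minimal sanity check; the reported version depends on the repository configured above):
  ceph-deploy --version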
Creating a ceph-deploy user
ceph-deploy requires a user with passwordless sudo privileges for installing software on the storage nodes. Follow these steps to create a dedicated user:
- Connect to a Ceph node using SSH:
  ssh user@ceph-node
- Create a user called ceph-deploy:
  sudo useradd -d /home/ceph-deploy -m ceph-deploy
  Note: You can choose a different username if you prefer.
- Configure the password of the ceph-deploy user:
  sudo passwd ceph-deploy
- Add the user to the sudoers configuration:
  echo "ceph-deploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-deploy
  sudo chmod 0440 /etc/sudoers.d/ceph-deploy
- Install an NTP client on the node to avoid time-drift issues:
  sudo apt install ntpsec
- Install Python, which is required for deploying the cluster:
  sudo apt install python-minimal
- Repeat the above steps on each of the three nodes.
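Before continuing, it is worth confirming that passwordless sudo actually works for the new user. A minimal check, run on any node while logged in as ceph-deploy:
  sudo -n whoami
The command should print root immediately; a password prompt or an error means the sudoers file was not picked up.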
Enabling passwordless SSH
Generate an SSH key and distribute the public key to each Ceph node for passwordless authentication:
- Generate an SSH key pair on the admin node:
  ssh-keygen
  Press Enter to save the key in the default location.
- Ensure the Ceph node hostnames are configured in /etc/hosts.
- Transfer the public key to each Ceph node:
  ssh-copy-id ceph-deploy@ceph-node-a
  ssh-copy-id ceph-deploy@ceph-node-b
  ssh-copy-id ceph-deploy@ceph-node-c
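Optionally, you can also add the nodes to ~/.ssh/config on the admin machine so that SSH defaults to the ceph-deploy user. This is a convenience sketch, assuming the hostnames used in this tutorial; adapt the entries to your own names:
  Host ceph-node-a ceph-node-b ceph-node-c
    User ceph-deploy
With this in place, ssh ceph-node-a logs you in as ceph-deploy without a password.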
Deploying a Ceph cluster
Deploy the Ceph cluster on your machines by following these steps:
- Create a directory on the admin node for configuration files and keys:
  mkdir my-ceph-cluster
  cd my-ceph-cluster
- Create the cluster:
  ceph-deploy --username ceph-deploy new ceph-node-a
  Replace ceph-node-a with the FQDN of your node.
- Install the Ceph packages on the nodes:
  ceph-deploy --username ceph-deploy install ceph-node-a ceph-node-b ceph-node-c
- Deploy the initial monitors and gather the keys:
  ceph-deploy --username ceph-deploy mon create-initial
  Verify the generated files using ls.
- Copy the configuration file and admin key to the Ceph nodes:
  ceph-deploy --username ceph-deploy admin ceph-node-a ceph-node-b ceph-node-c
- Deploy a manager daemon on all Ceph nodes:
  ceph-deploy --username ceph-deploy mgr create ceph-node-a ceph-node-b ceph-node-c
- Configure an Object Storage Device (OSD) on each Ceph node (see the disk inspection sketch after this list for a way to double-check the device name). Ensure the device is not in use and does not contain any important data:
  ceph-deploy osd create --data /dev/sdb ceph-node-a
  ceph-deploy osd create --data /dev/sdb ceph-node-b
  ceph-deploy osd create --data /dev/sdb ceph-node-c
- Check the cluster status from one of the Ceph nodes:
  sudo ceph health
  The cluster should report HEALTH_OK.
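Disk inspection sketch: before running the osd create commands, you can ask ceph-deploy to list the disks each node reports, which helps confirm that /dev/sdb really is the spare device on your servers (device names may differ):
  ceph-deploy --username ceph-deploy disk list ceph-node-a
  ceph-deploy --username ceph-deploy disk list ceph-node-b
  ceph-deploy --username ceph-deploy disk list ceph-node-c
If a disk was used before, ceph-deploy disk zap ceph-node-a /dev/sdb should wipe it, but double-check the device name first, as this destroys its contents.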
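For a more detailed view than ceph health, a few standard commands (also run on a node that received the admin keyring) summarize the overall state, the OSD layout across the three nodes, and the available capacity:
  sudo ceph -s
  sudo ceph osd tree
  sudo ceph df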
Deploying a Ceph Object Gateway (RGW)
Deploy the Ceph Object Gateway (RGW) to access files using S3-compatible clients:
- Run the following command on the admin machine:
  ceph-deploy --username ceph-deploy rgw create ceph-node-a
  Note the displayed information about the RGW instance.
- Optionally, modify the port the gateway listens on in /etc/ceph/ceph.conf on the gateway node:
  sudo nano /etc/ceph/ceph.conf
  Add or modify the following lines:
  [client]
  rgw frontends = civetweb port=80
  For HTTPS, use:
  [client]
  rgw frontends = civetweb port=443s ssl_certificate=/etc/ceph/private/bundle_keyandcert.pem
  A restart of the gateway is required for the change to take effect (see the sketch after this list).
- Verify the installation by accessing http://ceph-node-a:7480 (or the port you configured) in a web browser.
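The restart mentioned above can be done through systemd on the gateway node. A minimal sketch, assuming the gateway was deployed by ceph-deploy with the default systemd units:
  sudo systemctl restart ceph-radosgw.target
You can then check the endpoint from any machine that can reach the node, replacing 7480 with the port you configured:
  curl http://ceph-node-a:7480
An anonymous request should return a short XML ListAllMyBucketsResult response.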
Creating S3 credentials
On the gateway instance (ceph-node-a), run the following command to create a new user:
  sudo radosgw-admin user create --uid=johndoe --display-name="John Doe" --email=john@example.com
Note the access_key and secret_key from the output, then proceed to configure your S3 client, e.g., aws-cli.
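If you did not save the keys, they can be displayed again at any time on the gateway node; access_key and secret_key appear in the keys section of the JSON output:
  sudo radosgw-admin user info --uid=johndoe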
Configuring AWS-CLI
Use AWS-CLI to manage objects in your Ceph storage cluster:
- Install aws-cli and awscli-plugin-endpoint:
  pip3 install awscli
  pip3 install awscli-plugin-endpoint
- Create ~/.aws/config with the following content:
  [plugins]
  endpoint = awscli_plugin_endpoint
  [default]
  region = default
  s3 =
    endpoint_url = http://ceph-node-a:7480
    signature_version = s3v4
    max_concurrent_requests = 100
    max_queue_size = 1000
    multipart_threshold = 50 MB
    multipart_chunksize = 10 MB
  s3api =
    endpoint_url = http://ceph-node-a:7480
- Create ~/.aws/credentials with the following content:
  [default]
  aws_access_key_id=<ACCESS_KEY>
  aws_secret_access_key=<SECRET_KEY>
  Replace <ACCESS_KEY> and <SECRET_KEY> with the credentials of the user you created earlier.
- Create a bucket, upload a test file, and check the content:
  aws s3 mb s3://MyBucket
  echo "Hello World!" > testfile.txt
  aws s3 cp testfile.txt s3://MyBucket
  aws s3 ls s3://MyBucket
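Once the test upload works, the usual aws s3 subcommands behave as they would against any S3 endpoint. A few examples for the objects created above (MyBucket and testfile.txt):
  aws s3 ls
  aws s3 presign s3://MyBucket/testfile.txt
  aws s3 rm s3://MyBucket/testfile.txt
  aws s3 rb s3://MyBucket
The first command lists all buckets owned by the user, presign generates a temporary download URL, and the last two remove the test object and the now-empty bucket.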
Conclusion
You have successfully configured an S3-compatible storage cluster using Ceph and three Dedibox dedicated servers. You can now manage your data using any S3-compatible tool. For advanced configuration, refer to the official Ceph documentation.