Building your own Ceph distributed storage cluster on dedicated servers
Ceph is an open-source, software-defined storage solution that provides object, block, and file storage at exabyte scale. It is self-healing, self-managing, and fault-tolerant, using commodity hardware to minimize costs. This tutorial guides you through deploying a three-node Ceph cluster with a RADOS Gateway (RGW) for S3-compatible object storage on Dedibox dedicated servers running Ubuntu 24.04 LTS.
Before you start
To complete the actions presented below, you must have:
- A Dedibox account logged into the console
- Owner status or IAM permissions allowing you to perform actions in the intended Organization
- 3 Dedibox servers (Ceph nodes) running Ubuntu 24.04 LTS, each with:
  - At least 8 GB of RAM, 4 CPU cores, and one unused data disk (e.g., /dev/sdb) for OSDs
  - Network connectivity between nodes and the admin machine
- An admin machine (Ubuntu 24.04 LTS recommended) with SSH access to the Ceph nodes
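A quick way to confirm each node meets these requirements (a sketch; adjust the device name if your spare disk is not /dev/sdb):
free -h          # total memory
nproc            # CPU core count
lsblk /dev/sdb   # the spare data disk should show no partitions or mount points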
Configure networking and SSH
- Log into each of the Ceph nodes and the admin machine using SSH.
- Install software dependencies on all nodes and the admin machine:
sudo apt update
sudo apt install -y python3 chrony lvm2 podman
sudo systemctl enable chrony
- Set unique hostnames on each Ceph node:
sudo hostnamectl set-hostname ceph-node-a   # repeat with ceph-node-b and ceph-node-c on the other nodes
- Configure /etc/hosts on all nodes and the admin machine to resolve hostnames:
echo "<node-a-ip> ceph-node-a" | sudo tee -a /etc/hosts
echo "<node-b-ip> ceph-node-b" | sudo tee -a /etc/hosts
echo "<node-c-ip> ceph-node-c" | sudo tee -a /etc/hosts
- Create a deployment user (cephadm) on each Ceph node:
sudo useradd -m -s /bin/bash cephadm
sudo passwd cephadm
echo "cephadm ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadm
sudo chmod 0440 /etc/sudoers.d/cephadm
- Enable passwordless SSH from the admin machine to the Ceph nodes:
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
ssh-copy-id cephadm@ceph-node-a
ssh-copy-id cephadm@ceph-node-b
ssh-copy-id cephadm@ceph-node-c
- Verify time synchronization on all nodes:
chronyc sources
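To confirm passwordless SSH and clock synchronization in one pass, a minimal sketch run from the admin machine (assuming the hostnames set above):
for host in ceph-node-a ceph-node-b ceph-node-c; do
  ssh cephadm@"$host" 'hostname && chronyc tracking | grep "Leap status"'
done
Each node should answer with its hostname and a Leap status of Normal.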
Install cephadm on the admin machine
- Add the Ceph repository for the latest stable release (Squid at the time of writing):
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo tee /etc/apt/trusted.gpg.d/ceph.asc
sudo apt-add-repository 'deb https://download.ceph.com/debian-squid/ jammy main'
sudo apt update
- Install cephadm:
sudo apt install -y cephadm
- Verify the installation:
cephadm --version
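cephadm runs every daemon in a container, so it is also worth confirming that the container engine installed earlier is available:
podman --version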
Bootstrap the Ceph cluster
- Bootstrap the cluster on the admin machine, using the admin machine's IP address (this host becomes the first monitor and manager, and the dashboard it deploys is served over HTTPS by default):
sudo cephadm bootstrap \
  --mon-ip <admin-node-ip> \
  --initial-dashboard-user admin \
  --initial-dashboard-password <strong-password>
- Access the Ceph dashboard at https://<admin-node-ip>:8443 to verify the setup.
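The following sections run ceph commands directly on the admin machine. Right after bootstrap only cephadm itself is installed there, so either wrap each command in cephadm shell or install the CLI package (a minimal sketch following the upstream cephadm workflow):
sudo cephadm shell -- ceph -s      # run a one-off command inside the Ceph container
sudo cephadm install ceph-common   # or install the ceph CLI on the host
sudo ceph -s                       # typically reports HEALTH_WARN until OSDs are added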
Add Ceph nodes to the cluster
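The orchestrator reaches the other nodes over SSH with the key pair generated during bootstrap; by default it connects as root. Since this tutorial created a cephadm deployment user with passwordless sudo, one way to wire this up (a sketch, assuming that user) is to point the orchestrator at it and authorize the cluster key on every node:
sudo ceph cephadm set-user cephadm
for host in ceph-node-a ceph-node-b ceph-node-c; do
  sudo ssh-copy-id -f -i /etc/ceph/ceph.pub cephadm@"$host"
done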
- Add each Ceph node to the cluster:
sudo ceph orch host add ceph-node-a
sudo ceph orch host add ceph-node-b
sudo ceph orch host add ceph-node-c
- Verify hosts:
sudo ceph orch host ls
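You can also list the daemons the orchestrator runs on each host:
sudo ceph orch ps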
Deploy object storage daemons (OSDs)
- List available disks on each node:
sudo ceph orch device ls
- Deploy OSDs on all available unused disks:
sudo ceph orch apply osd --all-available-devices
- Verify the OSD deployment:
sudo ceph osd tree
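Once the OSDs report up and in, you can also check raw capacity and per-OSD utilization:
sudo ceph df
sudo ceph osd df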
Deploy RADOS gateway (RGW)
- Deploy a single RGW instance on ceph-node-a:
sudo ceph orch apply rgw default --placement="count:1 ceph-node-a" --port=80
- To use HTTPS (recommended), generate a self-signed certificate:
sudo mkdir -p /etc/ceph/private
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/ceph/private/rgw.key \
  -out /etc/ceph/private/rgw.crt \
  -subj "/CN=ceph-node-a"
- Redeploy RGW with HTTPS. The orchestrator takes the certificate and key from a service specification rather than from command-line flags, so write a spec that embeds them and apply it:
sudo tee /etc/ceph/rgw-spec.yaml > /dev/null <<EOF
service_type: rgw
service_id: default
placement:
  count: 1
  hosts:
    - ceph-node-a
spec:
  rgw_frontend_port: 443
  ssl: true
  rgw_frontend_ssl_certificate: |
$(sudo cat /etc/ceph/private/rgw.crt /etc/ceph/private/rgw.key | sed 's/^/    /')
EOF
sudo ceph orch apply -i /etc/ceph/rgw-spec.yaml
- Verify RGW by accessing http://ceph-node-a:80 (or https://ceph-node-a:443 for HTTPS).
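From the command line, an anonymous request to the gateway should return an XML ListAllMyBucketsResult document (pass -k to accept the self-signed certificate):
curl http://ceph-node-a
curl -k https://ceph-node-a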
Create an RGW user
- Create a user for S3-compatible access:
sudo radosgw-admin user create \
  --uid=johndoe \
  --display-name="John Doe" \
  --email=john@example.com
Note the generated access_key and secret_key from the output.
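If you lose track of the keys, they can be printed again at any time:
sudo radosgw-admin user info --uid=johndoe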
Configure the AWS CLI for Object Storage
- Install the AWS CLI on the admin machine or another client machine:
pip3 install awscli awscli-plugin-endpoint
- Create the configuration file ~/.aws/config:
[plugins]
endpoint = awscli_plugin_endpoint

[default]
region = default
s3 =
    endpoint_url = http://ceph-node-a:80
    signature_version = s3v4
s3api =
    endpoint_url = http://ceph-node-a:80

For HTTPS, use https://ceph-node-a:443. Then create ~/.aws/credentials:
[default]
aws_access_key_id=<access_key>
aws_secret_access_key=<secret_key>
- Test the setup:
aws s3 mb s3://mybucket --endpoint-url http://ceph-node-a:80
echo "Hello Ceph!" > testfile.txt
aws s3 cp testfile.txt s3://mybucket --endpoint-url http://ceph-node-a:80
aws s3 ls s3://mybucket --endpoint-url http://ceph-node-a:80
- Verify the cluster health status:
sudo ceph -s
Ensure the output shows HEALTH_OK.
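When you are done testing, you can remove the test object and bucket created above:
aws s3 rm s3://mybucket/testfile.txt --endpoint-url http://ceph-node-a:80
aws s3 rb s3://mybucket --endpoint-url http://ceph-node-a:80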
Conclusion
You have deployed a Ceph storage cluster with S3-compatible object storage using three Dedibox servers on Ubuntu 24.04 LTS. The cluster is managed with cephadm, which provides modern, container-based orchestration and easy scaling. For advanced configurations (e.g., multi-zone RGW, monitoring with Prometheus), refer to the official Ceph documentation.