
Building your own Ceph distributed storage cluster on dedicated servers
- dedicated-servers
- dedibox
- Ceph
- object-storage
Ceph is an open-source, software-defined storage solution that provides object, block, and file storage at exabyte scale. It is self-healing, self-managing, and fault-tolerant, using commodity hardware to minimize costs. This tutorial guides you through deploying a three-node Ceph cluster with a RADOS Gateway (RGW) for S3-compatible object storage on Dedibox dedicated servers running Ubuntu 24.04 LTS.
Before you start
To complete the actions presented below, you must have:
- A Dedibox account logged into the console
- Owner status or IAM permissions allowing you to perform actions in the intended Organization
- 3 Dedibox servers (Ceph nodes) running Ubuntu 24.04 LTS, each with:
- At least 8 GB of RAM, 4 CPU cores, and one unused data disk (e.g., /dev/sdb) for OSDs (see the check after this list).
- Network connectivity between nodes and the admin machine.
- An admin machine (Ubuntu 24.04 LTS recommended) with SSH access to Ceph nodes.
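To confirm that the spare data disk on each node is genuinely unused before deployment, a quick check (assuming the disk is /dev/sdb, as in the prerequisites above) could look like this:

```bash
# Confirm the data disk exists and carries no partitions, filesystem, or mountpoint
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT /dev/sdb
```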
Configure networking and SSH
1. Log into each of the Ceph nodes and the admin machine using SSH.
2. Install software dependencies on all nodes and the admin machine:
   ```bash
   sudo apt update
   sudo apt install -y python3 chrony lvm2 podman
   sudo systemctl enable chrony
   ```
3. Set unique hostnames on each Ceph node:
   ```bash
   sudo hostnamectl set-hostname ceph-node-a   # Repeat for ceph-node-b, ceph-node-c
   ```
4. Configure /etc/hosts on all nodes and the admin machine to resolve hostnames:
   ```bash
   echo "<node-a-ip> ceph-node-a" | sudo tee -a /etc/hosts
   echo "<node-b-ip> ceph-node-b" | sudo tee -a /etc/hosts
   echo "<node-c-ip> ceph-node-c" | sudo tee -a /etc/hosts
   ```
5. Create a deployment user (cephadm) on each Ceph node:
   ```bash
   sudo useradd -m -s /bin/bash cephadm
   sudo passwd cephadm
   echo "cephadm ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadm
   sudo chmod 0440 /etc/sudoers.d/cephadm
   ```
6. Enable passwordless SSH from the admin machine to the Ceph nodes:
   ```bash
   ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
   ssh-copy-id cephadm@ceph-node-a
   ssh-copy-id cephadm@ceph-node-b
   ssh-copy-id cephadm@ceph-node-c
   ```
7. Verify time synchronization on all nodes (a combined check across all nodes is shown after this list):
   ```bash
   chronyc sources
   ```
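Before installing cephadm, it can be worth confirming passwordless SSH and time synchronization on every node in one pass. A minimal sketch, assuming the hostnames and cephadm user set up above:

```bash
# Check SSH access and chrony tracking on each Ceph node from the admin machine
for node in ceph-node-a ceph-node-b ceph-node-c; do
  echo "== ${node} =="
  ssh cephadm@"${node}" 'hostname && chronyc tracking | grep -E "Leap status|System time"'
done
```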
Install cephadm on the admin machine
1. Add the Ceph repository for a recent stable release (the example below uses Squid):
   ```bash
   wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo tee /etc/apt/trusted.gpg.d/ceph.asc
   sudo apt-add-repository 'deb https://download.ceph.com/debian-squid/ jammy main'
   sudo apt update
   ```
2. Install cephadm:
   ```bash
   sudo apt install -y cephadm
   ```
3. Verify the installation:
   ```bash
   cephadm --version
   ```
Bootstrap the Ceph cluster
1. Bootstrap the cluster on the admin machine, using the admin node's IP:
   ```bash
   sudo cephadm bootstrap \
     --mon-ip <admin-node-ip> \
     --initial-dashboard-user admin \
     --initial-dashboard-password <strong-password> \
     --dashboard-ssl
   ```
2. Access the Ceph dashboard at https://<admin-node-ip>:8443 to verify the setup (a command-line check is shown after this list).
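As a quick check from the command line, you can also query the cluster status through the containerized shell that cephadm sets up during bootstrap; a minimal sketch:

```bash
# Open a containerized Ceph shell and print the cluster status.
# At this stage the cluster typically reports a warning until OSDs are added.
sudo cephadm shell -- ceph -s
```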
Add Ceph nodes to the cluster
1. Add each Ceph node to the cluster (if this fails with an SSH error, see the note after this list):
   ```bash
   sudo ceph orch host add ceph-node-a
   sudo ceph orch host add ceph-node-b
   sudo ceph orch host add ceph-node-c
   ```
2. Verify hosts:
   ```bash
   sudo ceph orch host ls
   ```
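cephadm manages each host over SSH using a cluster-specific key pair, so `host add` only works once the cluster's public key is present on the target node. A hedged sketch of distributing it, assuming the default key path created by `cephadm bootstrap` and that root SSH logins are allowed on the nodes:

```bash
# Copy the cluster's public SSH key (generated at bootstrap) to every node
for node in ceph-node-a ceph-node-b ceph-node-c; do
  ssh-copy-id -f -i /etc/ceph/ceph.pub root@"${node}"
done
```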
Deploy Object Storage devices (OSDs)
1. List available disks on each node:
   ```bash
   sudo ceph orch device ls
   ```
2. Deploy OSDs on all available unused disks (see the sketch after this list for targeting specific disks instead):
   ```bash
   sudo ceph orch apply osd --all-available-devices
   ```
3. Verify the OSD deployment:
   ```bash
   sudo ceph osd tree
   ```
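If you prefer not to let the orchestrator claim every free device, OSDs can also be created on explicitly chosen disks. A short sketch, assuming each node's spare data disk is /dev/sdb as in the prerequisites:

```bash
# Create one OSD per node on the dedicated data disk
for node in ceph-node-a ceph-node-b ceph-node-c; do
  sudo ceph orch daemon add osd "${node}:/dev/sdb"
done
```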
Deploy RADOS Gateway (RGW)
1. Deploy a single RGW instance on ceph-node-a:
   ```bash
   sudo ceph orch apply rgw default --placement="count:1 host:ceph-node-a" --port=80
   ```
2. To use HTTPS (recommended), generate a self-signed certificate:
   ```bash
   sudo mkdir -p /etc/ceph/private
   sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
     -keyout /etc/ceph/private/rgw.key \
     -out /etc/ceph/private/rgw.crt \
     -subj "/CN=ceph-node-a"
   ```
3. Redeploy RGW with HTTPS (a service-spec alternative is sketched after this list):
   ```bash
   sudo ceph orch apply rgw default \
     --placement="count:1 host:ceph-node-a" \
     --port=443 \
     --ssl-cert=/etc/ceph/private/rgw.crt \
     --ssl-key=/etc/ceph/private/rgw.key
   ```
4. Verify RGW by accessing http://ceph-node-a:80 (or https://ceph-node-a:443 for HTTPS).
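If your Ceph release does not accept the --ssl-cert/--ssl-key flags, the certificate can instead be supplied through an RGW service specification applied with `ceph orch apply -i`. A hedged sketch, assuming the key and certificate generated above; the spec file name and path are illustrative:

```bash
# Write an RGW service spec embedding the certificate and key, then apply it
sudo tee /root/rgw-default.yaml > /dev/null <<EOF
service_type: rgw
service_id: default
placement:
  count: 1
  hosts:
    - ceph-node-a
spec:
  ssl: true
  rgw_frontend_port: 443
  rgw_frontend_ssl_certificate: |
$(sudo cat /etc/ceph/private/rgw.crt /etc/ceph/private/rgw.key | sed 's/^/    /')
EOF
sudo ceph orch apply -i /root/rgw-default.yaml
```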
Create an RGW user
1. Create a user for S3-compatible access:
   ```bash
   sudo radosgw-admin user create \
     --uid=johndoe \
     --display-name="John Doe" \
     --email=john@example.com
   ```
   Note the generated access_key and secret_key from the output (they can also be retrieved later, as shown after this list).
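If the keys were not copied at creation time, they can be read back from the user's metadata. A small sketch, assuming the johndoe user created above and that jq is available on the admin machine:

```bash
# Print the first access/secret key pair for the RGW user
sudo radosgw-admin user info --uid=johndoe | jq -r '.keys[0] | "\(.access_key) \(.secret_key)"'
```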
Configure AWS-CLI for Object Storage
1. Install the AWS CLI and the endpoint plugin on the admin machine (or any other client machine):
   ```bash
   pip3 install awscli awscli-plugin-endpoint
   ```
2. Create a configuration file ~/.aws/config:
   ```ini
   [plugins]
   endpoint = awscli_plugin_endpoint

   [default]
   region = default
   s3 =
       endpoint_url = http://ceph-node-a:80
       signature_version = s3v4
   s3api =
       endpoint_url = http://ceph-node-a:80
   ```
   For HTTPS, use https://ceph-node-a:443 as the endpoint URL.
3. Create ~/.aws/credentials:
   ```ini
   [default]
   aws_access_key_id = <access_key>
   aws_secret_access_key = <secret_key>
   ```
4. Test the setup:
   ```bash
   aws s3 mb s3://mybucket --endpoint-url http://ceph-node-a:80
   echo "Hello Ceph!" > testfile.txt
   aws s3 cp testfile.txt s3://mybucket --endpoint-url http://ceph-node-a:80
   aws s3 ls s3://mybucket --endpoint-url http://ceph-node-a:80
   ```
5. Verify the cluster health status:
   ```bash
   sudo ceph -s
   ```
   Ensure the output shows HEALTH_OK (further checks are sketched after this list).
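As a final end-to-end check, you can read the test object back through the gateway, and ask Ceph which health checks are failing if the status is not yet HEALTH_OK. A minimal sketch, reusing the bucket and test file from the steps above:

```bash
# Stream the uploaded object back to stdout to confirm reads work through RGW
aws s3 cp s3://mybucket/testfile.txt - --endpoint-url http://ceph-node-a:80

# If "ceph -s" does not report HEALTH_OK, list the individual health checks
sudo ceph health detail
```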
Conclusion
You have deployed a Ceph storage cluster with S3-compatible object storage using three Dedibox servers running Ubuntu 24.04 LTS. The cluster is managed with cephadm, which provides container-based orchestration and makes it straightforward to add hosts, OSDs, and services as the cluster grows. For advanced configurations (e.g., multi-zone RGW or monitoring with Prometheus), refer to the official Ceph documentation.