
Building your own Ceph distributed storage cluster on dedicated servers

Reviewed on 25 June 2025 • Published on 29 June 2020
  • dedicated-servers
  • dedibox
  • Ceph
  • object-storage

Ceph is an open-source, software-defined storage solution that provides object, block, and file storage at exabyte scale. It is self-healing, self-managing, and fault-tolerant, using commodity hardware to minimize costs. This tutorial guides you through deploying a three-node Ceph cluster with a RADOS Gateway (RGW) for S3-compatible object storage on Dedibox dedicated servers running Ubuntu 24.04 LTS.

Before you start

To complete the actions presented below, you must have:

  • A Dedibox account logged into the console
  • Owner status or IAM permissions allowing you to perform actions in the intended Organization
  • 3 Dedibox servers (Ceph nodes) running Ubuntu 24.04 LTS, each with:
    • At least 8GB RAM, 4 CPU cores, and one unused data disk (e.g., /dev/sdb) for OSDs; a quick check is shown after this list.
    • Network connectivity between nodes and the admin machine.
  • An admin machine (Ubuntu 24.04 LTS recommended) with SSH access to Ceph nodes.
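
Before going further, you can quickly confirm that each node meets these requirements. This is only a convenience check; the disk name is the /dev/sdb example from the list above:

    free -h                   # total RAM (at least 8GB)
    nproc                     # CPU core count (4 or more)
    lsblk                     # the data disk (e.g., /dev/sdb) should carry no partitions or filesystem
    ping -c 1 <other-node-ip> # basic connectivity between nodes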

Configure networking and SSH

  1. Log into each of the Ceph nodes and the admin machine using SSH.

  2. Install software dependencies on all nodes and the admin machine:

    sudo apt update
    sudo apt install -y python3 chrony lvm2 podman
    sudo systemctl enable chrony
  3. Set unique hostnames on each Ceph node:

    sudo hostnamectl set-hostname ceph-node-a # Repeat for ceph-node-b, ceph-node-c
  4. Configure /etc/hosts on all nodes and the admin machine to resolve hostnames:

    echo "<node-a-ip> ceph-node-a" | sudo tee -a /etc/hosts
    echo "<node-b-ip> ceph-node-b" | sudo tee -a /etc/hosts
    echo "<node-c-ip> ceph-node-c" | sudo tee -a /etc/hosts
  5. Create a deployment user (cephadm) on each Ceph node:

    sudo useradd -m -s /bin/bash cephadm
    sudo passwd cephadm
    echo "cephadm ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadm
    sudo chmod 0440 /etc/sudoers.d/cephadm
  6. Enable passwordless SSH from the admin machine to Ceph nodes:

    ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
    ssh-copy-id cephadm@ceph-node-a
    ssh-copy-id cephadm@ceph-node-b
    ssh-copy-id cephadm@ceph-node-c
  7. Verify time synchronization on all nodes:

    chronyc sources
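
Before moving on, a short loop from the admin machine can confirm that every node resolves by hostname, accepts key-based SSH as cephadm, and reports a synchronized clock. This is an optional check using the hostnames set above; NTPSynchronized=yes means chrony is in sync:

    for node in ceph-node-a ceph-node-b ceph-node-c; do
        ssh cephadm@"$node" "hostname && timedatectl show -p NTPSynchronized"
    done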

Install cephadm on the admin machine

  1. Add the Ceph repository for the latest stable release (e.g., Squid):

    wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo tee /etc/apt/trusted.gpg.d/ceph.asc
    # Use a distribution codename that is published under https://download.ceph.com/debian-squid/dists/
    # (jammy shown here; adjust it if Ceph publishes packages for your Ubuntu release).
    sudo apt-add-repository 'deb https://download.ceph.com/debian-squid/ jammy main'
    sudo apt update
  2. Install cephadm:

    sudo apt install -y cephadm
  3. Verify the installation:

    cephadm --version

Bootstrap the Ceph cluster

  1. Bootstrap the cluster on the admin machine, using the admin node’s IP (the dashboard is served over HTTPS by default):

    sudo cephadm bootstrap \
    --mon-ip <admin-node-ip> \
    --initial-dashboard-user admin \
    --initial-dashboard-password <strong-password>
  2. Access the Ceph dashboard at https://<admin-node-ip>:8443 to verify the setup.
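
Bootstrap also writes the cluster configuration, the admin keyring, and the cluster's SSH public key (/etc/ceph/ceph.pub) to /etc/ceph on the admin machine. Before the orchestrator can manage the other nodes, that key must be authorized on them, and you need access to the ceph CLI for the commands in the following sections. The sketch below assumes the nodes accept SSH logins for root, which is cephadm's default management user:

    # Run the ceph CLI through cephadm's containerized shell
    # (or install it natively with: sudo cephadm install ceph-common)
    sudo cephadm shell -- ceph -s

    # Authorize the cluster's SSH key on each node. To reuse the cephadm user created
    # earlier instead, copy the key to that account and run:
    #   sudo cephadm shell -- ceph cephadm set-user cephadm
    for node in ceph-node-a ceph-node-b ceph-node-c; do
        sudo ssh-copy-id -f -i /etc/ceph/ceph.pub root@"$node"
    done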

Add Ceph nodes to the cluster

  1. Add each Ceph node to the cluster, passing its IP address so the orchestrator does not depend on name resolution:

    sudo ceph orch host add ceph-node-a <node-a-ip>
    sudo ceph orch host add ceph-node-b <node-b-ip>
    sudo ceph orch host add ceph-node-c <node-c-ip>
  2. Verify hosts:

    sudo ceph orch host ls
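
Once the hosts are added, the orchestrator starts scheduling core daemons (monitors, managers, crash agents) onto them automatically. You can follow the progress with:

    sudo ceph orch ps   # daemons and their placement per host
    sudo ceph -s        # overall status, including mon and mgr counts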

Deploy object storage daemons (OSDs)

  1. List available disks on each node:

    sudo ceph orch device ls
  2. Deploy OSDs on all available unused disks (an alternative that targets specific disks is shown after this list):

    sudo ceph orch apply osd --all-available-devices
  3. Verify the OSD deployment:

    sudo ceph osd tree
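
If you prefer to choose the disks yourself instead of consuming every available device, devices can be added as OSDs individually. The commands below use the /dev/sdb example disk from the prerequisites; adjust the device names to your hardware:

    sudo ceph orch daemon add osd ceph-node-a:/dev/sdb
    sudo ceph orch daemon add osd ceph-node-b:/dev/sdb
    sudo ceph orch daemon add osd ceph-node-c:/dev/sdb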

Deploy RADOS Gateway (RGW)

  1. Deploy a single RGW instance on ceph-node-a:

    sudo ceph orch apply rgw default --placement="count:1 ceph-node-a" --port=80
  2. To use HTTPS (recommended), generate a self-signed certificate:

    sudo mkdir -p /etc/ceph/private
    sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/ceph/private/rgw.key \
    -out /etc/ceph/private/rgw.crt \
    -subj "/CN=ceph-node-a"
  3. Redeploy RGW with HTTPS. The orchestrator takes the certificate through a service specification, so create a file named rgw-ssl.yaml containing the certificate and key you just generated:

    service_type: rgw
    service_id: default
    placement:
      hosts: [ceph-node-a]
    spec:
      ssl: true
      rgw_frontend_port: 443
      rgw_frontend_ssl_certificate: |
        <paste the contents of rgw.crt followed by rgw.key here, indented like this line>

    Then apply it:

    sudo ceph orch apply -i rgw-ssl.yaml
  4. Verify RGW by accessing http://ceph-node-a:80 (or https://ceph-node-a:443 for HTTPS).
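
You can also check the gateway from the cluster side and probe the endpoint directly; an anonymous request should return an XML response from RGW rather than a connection error:

    sudo ceph orch ls rgw        # the rgw.default service should report one running daemon
    curl -k https://ceph-node-a  # -k accepts the self-signed certificate; use http://ceph-node-a on port 80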

Create an RGW user

  1. Create a user for S3-compatible access:

    sudo radosgw-admin user create \
    --uid=johndoe \
    --display-name="John Doe" \
    --email=john@example.com

    Note the generated access_key and secret_key from the output.
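
If you need these credentials again later, they can be read back from the user record at any time:

    sudo radosgw-admin user info --uid=johndoe   # the "keys" section contains access_key and secret_key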

Configure the AWS CLI for Object Storage

  1. Install the AWS CLI (version 1) and the endpoint plugin on the admin machine or any client machine. On Ubuntu 24.04, run this inside a Python virtual environment, since installing packages into the system Python with pip is restricted:

    pip3 install awscli awscli-plugin-endpoint
  2. Create the configuration file ~/.aws/config:

    [plugins]
    endpoint = awscli_plugin_endpoint

    [default]
    region = default
    s3 =
        endpoint_url = http://ceph-node-a:80
        signature_version = s3v4
    s3api =
        endpoint_url = http://ceph-node-a:80

    For HTTPS, use https://ceph-node-a:443 as the endpoint_url. Then create ~/.aws/credentials with the keys generated for the RGW user:

    [default]
    aws_access_key_id=<access_key>
    aws_secret_access_key=<secret_key>
  3. Test the setup:

    aws s3 mb s3://mybucket --endpoint-url http://ceph-node-a:80
    echo "Hello Ceph!" > testfile.txt
    aws s3 cp testfile.txt s3://mybucket --endpoint-url http://ceph-node-a:80
    aws s3 ls s3://mybucket --endpoint-url http://ceph-node-a:80
  4. Verify the cluster health status:

    sudo ceph -s

    Ensure the output shows HEALTH_OK.
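
If the status reports HEALTH_WARN instead, the following commands usually point to the cause (for example, OSDs that have not come up, or clock skew between nodes):

    sudo ceph health detail   # describes each active warning
    sudo ceph osd tree        # every OSD should be listed as "up"
    sudo ceph orch ps         # highlights daemons in an error state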

Conclusion

You have deployed a Ceph storage cluster with S3-compatible object storage using three Dedibox servers on Ubuntu 24.04 LTS. The cluster is managed with cephadm, ensuring modern orchestration and scalability. For advanced configurations (e.g., multi-zone RGW, monitoring with Prometheus), refer to the official Ceph documentation.
