
Building your own Ceph distributed storage cluster on dedicated servers

Reviewed on 27 November 2023 | Published on 29 June 2020
  • dedicated-servers
  • dedibox
  • storage-cluster
  • Ceph
  • object-storage

Ceph is an open-source, software-defined storage solution designed to address object, block, and file storage needs. It can handle several exabytes of data, replicating it across standard hardware to ensure fault tolerance. Ceph is self-healing and self-managing, which minimizes administration time and costs.

This tutorial guides you through deploying a three-node Ceph cluster using Dedibox dedicated servers running Ubuntu Focal Fossa (20.04 LTS).

Before you start

To complete the actions presented below, you must have:

  • A Dedibox account logged into the console
  • Owner status or IAM permissions allowing you to perform actions in the intended Organization
  • 3 Dedibox servers running Ubuntu Focal Fossa 20.04 LTS or later
  • An additional admin machine available to install ceph-deploy

Installing ceph-deploy on the admin machine

ceph-deploy simplifies Ceph cluster deployment with a user-friendly command-line interface. Install it on an independent admin machine using the following steps:

  1. Connect to the admin machine using SSH:

    ssh myuser@my.admin.server.ip
  2. Add the Ceph release key to apt:

    wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
  3. Add the Ceph repository to the APT package manager:

    echo deb https://eu.ceph.com/debian-octopus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
  4. Update the APT package lists so that the new Ceph repository is taken into account:

    sudo apt update
  5. Install ceph-deploy:

    sudo apt install ceph-deploy
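
Before moving on, you can optionally check that the tool is installed correctly; the exact version string printed depends on the release packaged in the repository:

    ceph-deploy --version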

Creating a ceph-deploy user

ceph-deploy requires a user with passwordless sudo privileges for installing software on storage nodes. Follow these steps to create a dedicated user:

  1. Connect to a Ceph node using SSH:

    ssh user@ceph-node
  2. Create a user called ceph-deploy:

    sudo useradd -d /home/ceph-deploy -m ceph-deploy
    • Note: You can choose a different username if you prefer; adjust the following commands accordingly.
  3. Configure the password of the ceph-deploy user:

    sudo passwd ceph-deploy
  4. Add the user to the sudoers configuration:

    echo "ceph-deploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-deploy
    sudo chmod 0440 /etc/sudoers.d/ceph-deploy
  5. Install an NTP client on Ceph nodes to avoid time-drift issues:

    sudo apt install ntpsec
  6. Install Python 3, which ceph-deploy needs to deploy the cluster (on Ubuntu 20.04 the minimal package is python3-minimal):

    sudo apt install python3-minimal
  7. Repeat the above steps for each of the three nodes.
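
Before moving to the next section, it can be useful to confirm that passwordless sudo works. Log in as the ceph-deploy user on a node and run the check below; it should print root without asking for a password:

    sudo whoami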

Enabling passwordless SSH

Generate an SSH key and distribute the public key to each Ceph node for passwordless authentication:

  1. Generate an SSH key pair on the admin node:

    ssh-keygen
    • Press Enter to save the key in the default location.
  2. Ensure the Ceph node hostnames are configured in /etc/hosts (see the example after this list).

  3. Transfer the public key to each Ceph node:

    ssh-copy-id ceph-deploy@ceph-node-a
    ssh-copy-id ceph-deploy@ceph-node-b
    ssh-copy-id ceph-deploy@ceph-node-c
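
For reference, the name resolution and SSH client configuration on the admin node could look like the sketch below. The IP addresses are placeholders; replace them with the addresses of your own servers. The ~/.ssh/config entry lets SSH, and therefore ceph-deploy, log in as the ceph-deploy user without specifying it each time:

    # /etc/hosts (example addresses)
    10.0.0.11 ceph-node-a
    10.0.0.12 ceph-node-b
    10.0.0.13 ceph-node-c

    # ~/.ssh/config
    Host ceph-node-*
        User ceph-deploy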

Deploying a Ceph cluster

Deploy the Ceph cluster on your machines by following these steps:

  1. Create a directory on the admin node for configuration files and keys:

    mkdir my-ceph-cluster
    cd my-ceph-cluster
  2. Create the cluster:

    ceph-deploy --username ceph-deploy new ceph-node-a
    • Replace ceph-node-a with the FQDN of your node.
  3. Install Ceph packages on the nodes:

    ceph-deploy --username ceph-deploy install ceph-node-a ceph-node-b ceph-node-c
  4. Deploy initial monitors and gather keys:

    ceph-deploy --username ceph-deploy mon create-initial
    • Use ls to verify that the configuration and keyring files were generated.
  5. Copy the configuration file and admin keyring to the Ceph nodes:

    ceph-deploy --username ceph-deploy admin ceph-node-a ceph-node-b ceph-node-c
  6. Deploy a manager daemon (ceph-mgr) on all Ceph nodes:

    ceph-deploy --username ceph-deploy mgr create ceph-node-a ceph-node-b ceph-node-c
  7. Configure Object Storage Devices (OSD) on each Ceph node:

    ceph-deploy --username ceph-deploy osd create --data /dev/sdb ceph-node-a
    ceph-deploy --username ceph-deploy osd create --data /dev/sdb ceph-node-b
    ceph-deploy --username ceph-deploy osd create --data /dev/sdb ceph-node-c
    • Ensure the device is not in use and does not contain important data.
  8. Check the cluster status:

    sudo ceph health
    • The cluster should report HEALTH_OK.
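
For a more detailed view than ceph health, you can also query the full cluster status. Run the commands on a node that holds the admin keyring (any of the three nodes after step 5):

    sudo ceph -s
    sudo ceph osd tree
    • ceph -s summarizes the monitor, manager, and OSD state; ceph osd tree lists each OSD together with the host it runs on.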

Deploying a Ceph Object Gateway (RGW)

Deploy the Ceph Object Gateway (RGW) to access files using S3-compatible clients:

  1. Run the following command on the admin machine:

    ceph-deploy --username ceph-deploy rgw create ceph-node-a
    • Note the displayed information about the RGW instance.
  2. Modify the port in /etc/ceph/ceph.conf on the gateway node (ceph-node-a), then restart the gateway as shown after this list:

    sudo nano /etc/ceph/ceph.conf
    • Add or modify lines:
    [client]
    rgw frontends = civetweb port=80
    • For HTTPS:
    [client]
    rgw frontends = civetweb port=443s ssl_certificate=/etc/ceph/private/bundle_keyandcert.pem
  3. Verify the installation by accessing http://ceph-node-a:7480 (or the port you configured in the previous step) in a web browser.
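
After editing ceph.conf, restart the gateway so the new frontend settings take effect, then check it from the command line. The systemd unit name usually follows the pattern ceph-radosgw@rgw.<short hostname>, so the exact name below is an assumption to adapt to your node. An unauthenticated request should return an XML ListAllMyBucketsResult document with an anonymous owner:

    sudo systemctl restart ceph-radosgw@rgw.ceph-node-a
    curl http://ceph-node-a:7480
    • Adjust the port in the curl command if you changed it in step 2.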

Creating S3 credentials

On the gateway instance (ceph-node-a), run the following command to create a new user:

sudo radosgw-admin user create --uid=johndoe --display-name="John Doe" --email=john@example.com
  • Note the access_key and secret_key. Proceed to configure your S3 client, e.g., aws-cli.
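
If you need to display the credentials again later, the user details, including the keys, can be retrieved at any time on the gateway node:

    sudo radosgw-admin user info --uid=johndoe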

Configuring AWS-CLI

Use AWS-CLI to manage objects in your Ceph storage cluster:

  1. Install aws-cli and the awscli-plugin-endpoint plugin:

    pip3 install awscli
    pip3 install awscli-plugin-endpoint
  2. Create ~/.aws/config:

    [plugins]
    endpoint = awscli_plugin_endpoint

    [default]
    region = default
    s3 =
        endpoint_url = http://ceph-node-a:7480
        signature_version = s3v4
        max_concurrent_requests = 100
        max_queue_size = 1000
        multipart_threshold = 50 MB
        multipart_chunksize = 10 MB
    s3api =
        endpoint_url = http://ceph-node-a:7480
  3. Create ~/.aws/credentials:

    [default]
    aws_access_key_id=<ACCESS_KEY>
    aws_secret_access_key=<SECRET_KEY>
    • Replace <ACCESS_KEY> and <SECRET_KEY> with user credentials.
  4. Create a bucket, upload a test file, and check the content:

    aws s3 mb s3://my-bucket
    echo "Hello World!" > testfile.txt
    aws s3 cp testfile.txt s3://my-bucket
    aws s3 ls s3://my-bucket
    • Bucket names must be DNS-compliant: lowercase letters, digits, and hyphens.
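
To confirm that downloads work as well, or to clean up the test resources, the same commands can be used in the other direction. This assumes the bucket and file names from step 4:

    aws s3 cp s3://my-bucket/testfile.txt downloaded.txt
    aws s3 rm s3://my-bucket/testfile.txt
    aws s3 rb s3://my-bucket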

Conclusion

You have successfully configured an S3-compatible storage cluster using Ceph and three Dedibox dedicated servers. You can now manage your data using any S3-compatible tool. For advanced configuration, refer to the official Ceph documentation.
