
Configuring High-Availability Storage with GlusterFS on Ubuntu

Reviewed on 02 October 2023 · Published on 28 September 2018
  • glusterfs
  • network
  • filesystem
  • high-availability-storage
  • Ubuntu

GlusterFS is an open-source, scalable network filesystem suitable for data-intensive workloads such as media streaming, cloud storage, and CDNs (Content Delivery Networks). In this setup, each storage server is a mirror of the other, and files are replicated automatically across both storage servers.

This tutorial shows you how to set up a replicated GlusterFS volume on two Ubuntu storage servers (gluster01 and gluster02) and mount it from a third Ubuntu client (client01).

Security & Identity (IAM)

To perform certain actions described below, you must either be the Owner of the Organization in which the actions will be performed or an IAM user with the necessary permissions.

Requirements
  • You have an account and are logged into the Scaleway console
  • You have configured your SSH key
  • You have three servers running Ubuntu
  • You have sudo privileges or access to the root user

Configuring the host file

Before installing GlusterFS, we need to configure the hosts file and add the GlusterFS repository on each server.

  1. Connect to your server via SSH.
  2. Update the apt package index and upgrade the software already installed on the server.
    apt update && apt upgrade -y
  3. Log in to each server and edit the /etc/hosts file.
    nano /etc/hosts
  4. Paste the hosts configuration below, replacing each ip_address with the corresponding server's IP address:
    ip_address gluster01
    ip_address gluster02
    ip_address client01
  5. Save and exit.
  6. Ping each server by hostname, as shown below:
    ping -c 3 gluster01
    ping -c 3 gluster02
    ping -c 3 client01
Note

If the ping command is not installed by default, you can install it with the apt install iputils-ping command.

Each hostname now resolves to the corresponding server's IP address:

--- gluster01 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2033ms
rtt min/avg/max/mdev = 0.560/0.604/0.627/0.031 ms
--- gluster02 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2050ms
rtt min/avg/max/mdev = 0.497/0.593/0.688/0.080 ms
--- client01 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2036ms
rtt min/avg/max/mdev = 0.672/0.728/0.802/0.054 ms
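
For illustration, a completed /etc/hosts file could look like the following (the 10.0.0.x addresses are hypothetical; substitute your own servers' private IPs):

10.0.0.1 gluster01
10.0.0.2 gluster02
10.0.0.3 client01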

Adding the GlusterFS repository

Install the software-properties-common package on all systems.

apt install software-properties-common -y

Add the GlusterFS repository and its signing key on all systems.

sudo add-apt-repository ppa:gluster/glusterfs-9
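
Depending on your Ubuntu release, add-apt-repository may or may not refresh the package index automatically after adding the PPA. If in doubt, update it manually before installing:

sudo apt update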

Installing a GlusterFS server

  1. Install the GlusterFS server package on both the gluster01 and gluster02 servers.

    apt install glusterfs-server -y
  2. Start the glusterd service and enable it to launch automatically at system boot.

    systemctl start glusterd.service
    systemctl enable glusterd.service

    The GlusterFS server is now up and running on the gluster01 and gluster02 servers.

  3. Check the service status and the installed software version:

    systemctl status glusterd.service
    glusterfsd --version

The commands should return an active (running) status and the installed GlusterFS version (9.5 at the time of writing).
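
Note that the GlusterFS nodes must be able to reach each other on the management port (24007/tcp) and on the brick ports (49152/tcp and upwards, one port per brick). If a firewall such as ufw is active on your servers, open these ports between the nodes; for example, assuming ufw and the hypothetical 10.0.0.0/24 subnet used earlier:

ufw allow from 10.0.0.0/24 to any port 24007:24008 proto tcp
ufw allow from 10.0.0.0/24 to any port 49152:49251 proto tcp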

Configuring GlusterFS servers

The next step is configuring the servers by creating a trusted storage pool and a replicated GlusterFS volume.

  • From the gluster01 server, we need to add the gluster02 server to the glusterfs storage pool.

    gluster peer probe gluster02

    The result peer probe: success is displayed, confirming that the gluster02 server has been added to the trusted storage pool.

  • Check the storage pool status and list.

    gluster peer status
    gluster pool list
    root@gluster01:~# gluster peer status
    Number of Peers: 1
    Hostname: gluster02
    Uuid: 17e7a76f-f616-42e5-b741-63a07fd091d6
    State: Peer in Cluster (Connected)
    root@gluster01:~# gluster pool list
    UUID Hostname State
    17e7a76f-f616-42e5-b741-63a07fd091d6 gluster02 Connected
    ecc9cafa-b25d-477e-b6bc-403c051e752d localhost Connected

The gluster02 server is connected to the peer cluster, and it is on the pool list.

After creating the trusted storage pool, we will create a new replicated GlusterFS volume, using a directory on the system disk as the brick.

Setting up the replicated GlusterFS volume

Note

For production servers, it is recommended to create the GlusterFS volume on a dedicated partition rather than in a directory on the system disk.
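
As a minimal sketch of that recommendation, assuming a hypothetical spare disk /dev/sdb on each storage server, you could format it with XFS and mount it as the brick location before continuing:

mkfs.xfs /dev/sdb                                          # format the spare disk (requires xfsprogs)
mkdir -p /glusterfs                                        # mount point for the brick
echo '/dev/sdb /glusterfs xfs defaults 0 0' >> /etc/fstab  # mount it at every boot
mount /glusterfs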

  1. Create a new directory /glusterfs/distributed on both the gluster01 and the gluster02 servers.
    mkdir -p /glusterfs/distributed
  2. Create a replicated GlusterFS volume named vol01 from the gluster01 server, with two replicas: gluster01 and gluster02. The replica 2 keyword is required to get the Replicate volume type shown below, and force is needed because the bricks are located on the root partition.
    gluster volume create vol01 replica 2 transport tcp gluster01:/glusterfs/distributed gluster02:/glusterfs/distributed force
    volume create: vol01: success: please start the volume to access data
  3. Start vol01 and check the volume info:
    gluster volume start vol01
    gluster volume info vol01
    root@gluster01:/# gluster volume start vol01
    volume start: vol01: success
    root@gluster01:/# gluster volume info vol01
    Volume Name: vol01
    Type: Replicate
    Volume ID: 814b103e-522c-48d2-8d1c-3301e10f3416
    Status: Started
    Snapshot Count: 0
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: gluster01:/glusterfs/distributed
    Brick2: gluster02:/glusterfs/distributed
    Options Reconfigured:
    transport.address-family: inet
    nfs.disable: on
    performance.client-io-threads: off

At this stage, we created the Replicate type volume vol01 with two bricks on the gluster01 and gluster02 servers. All data will be replicated automatically to each replica server.
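
You can also confirm that both bricks are online and that the self-heal daemon is running on each node:

gluster volume status vol01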

Configuring the GlusterFS client

In this step, we will mount the GlusterFS volume vol01 on the Ubuntu client, which requires installing the glusterfs-client package on the client server.

  1. Install the glusterfs-client on client01.

    apt install glusterfs-client -y
  2. Create a new directory: /mnt/glusterfs.

    mkdir -p /mnt/glusterfs
  3. Mount the GlusterFS volume (vol01) on the /mnt/glusterfs directory.

    mount -t glusterfs gluster01:/vol01 /mnt/glusterfs
  4. Check the available space on the mounted volume.

    df -h /mnt/glusterfs
    Note

    To mount the GlusterFS volume permanently on the Ubuntu client system, we can add it to the /etc/fstab file.

  5. Edit the /etc/fstab configuration file: vim /etc/fstab.

  6. Paste the following configuration:
    gluster01:/vol01 /mnt/glusterfs glusterfs defaults,_netdev 0 0

  7. Save and exit.

  8. Reboot the server. Once it is back online, the GlusterFS volume vol01 is mounted automatically through fstab.
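
Note that the fstab entry above fetches the volume configuration from gluster01 only, so the mount fails at boot if that server happens to be down, even though gluster02 still holds a full copy of the data. The GlusterFS FUSE mount supports a backup-volfile-servers option for this case; for example:

gluster01:/vol01 /mnt/glusterfs glusterfs defaults,_netdev,backup-volfile-servers=gluster02 0 0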

Testing replication & mirroring

  1. Mount the GlusterFS volume vol01 on each GlusterFS server.

    • On gluster01: mount -t glusterfs gluster01:/vol01 /mnt
    • On gluster02: mount -t glusterfs gluster02:/vol01 /mnt
  2. Back on client01, go to the /mnt/glusterfs directory.

    cd /mnt/glusterfs
  3. Create three files using the touch command.

    touch file01 file02 file03
  4. Check on both gluster01 and gluster02 that the files created from the client machine are displayed.

    cd /mnt/
    ls -lah

    The gluster01 machine returns:

    root@gluster01:/mnt# ls -lah
    total 8.0K
    drwxr-xr-x 3 root root 4.0K Oct 1 15:40 .
    drwxr-xr-x 24 root root 4.0K Sep 28 14:11 ..
    -rw-r--r-- 1 root root 0 Oct 1 15:40 file01
    -rw-r--r-- 1 root root 0 Oct 1 15:40 file02
    -rw-r--r-- 1 root root 0 Oct 1 15:40 file03

    The gluster02 machine returns:

    root@gluster02:/mnt# ls -lah
    total 8.0K
    drwxr-xr-x 3 root root 4.0K Oct 1 15:40 .
    drwxr-xr-x 24 root root 4.0K Sep 28 14:11 ..
    -rw-r--r-- 1 root root 0 Oct 1 15:40 file01
    -rw-r--r-- 1 root root 0 Oct 1 15:40 file02
    -rw-r--r-- 1 root root 0 Oct 1 15:40 file03

As you can see, all the files created from the client machine are replicated to every node of the GlusterFS volume.
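
To confirm that replication also works in the other direction, create a file from one of the storage servers and check that it shows up on the client:

  • On gluster02: touch /mnt/file04
  • On client01: ls -lah /mnt/glusterfs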
