Configuring a High-Availability Storage with GlusterFS on Ubuntu
- glusterfs
- network
- filesystem
- Ubuntu
GlusterFS is an open-source, scalable network filesystem suitable for data-intensive workloads such as media streaming, cloud storage, and CDNs (Content Delivery Networks). Each storage server will be a mirror of the other, and files are replicated automatically across both storage servers.
Before you start
To complete the actions presented below, you must have:
- A Scaleway account logged into the console
- Owner status or IAM permissions allowing you to perform actions in the intended Organization
- An SSH key
- 3 servers running Ubuntu
- `sudo` privileges or access to the root user
Configuring the host file
Before installing GlusterFS, we need to configure the `/etc/hosts` file on all servers and add the GlusterFS repository to each server.
- Connect to your server via SSH.
- Update the apt sources and the software already installed on the server:
  ```
  apt update && apt upgrade -y
  ```
- Log in to each server and edit the `/etc/hosts` file:
  ```
  nano /etc/hosts
  ```
- Paste the hosts configuration below, replacing `ip_address` with the IP address of each server:
  ```
  ip_address gluster01
  ip_address gluster02
  ip_address client01
  ```
- Save and exit.
- Ping each server using its hostname, as below:
  ```
  ping -c 3 gluster01
  ping -c 3 gluster02
  ping -c 3 client01
  ```
  If the `ping` command is not installed by default, you can install it with `apt install iputils-ping`.

  Each hostname resolves to the corresponding server's IP address:
  ```
  --- gluster01 ping statistics ---
  3 packets transmitted, 3 received, 0% packet loss, time 2033ms
  rtt min/avg/max/mdev = 0.560/0.604/0.627/0.031 ms

  --- gluster02 ping statistics ---
  3 packets transmitted, 3 received, 0% packet loss, time 2050ms
  rtt min/avg/max/mdev = 0.497/0.593/0.688/0.080 ms

  --- client01 ping statistics ---
  3 packets transmitted, 3 received, 0% packet loss, time 2036ms
  rtt min/avg/max/mdev = 0.672/0.728/0.802/0.054 ms
  ```
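The statistics line can also be checked programmatically, so a setup script could abort early when a peer is unreachable. Below is a minimal sketch; the `loss_pct` helper is a hypothetical name, not part of any GlusterFS tooling, and it is run here against the sample statistics line above so no live network is needed:

```shell
# Hypothetical helper: extract the packet-loss percentage from the
# statistics line that ping prints after it finishes.
loss_pct() {
  grep -oE '[0-9]+% packet loss' | cut -d'%' -f1
}

# Checked against the sample statistics line shown above:
stats='3 packets transmitted, 3 received, 0% packet loss, time 2033ms'
loss=$(echo "$stats" | loss_pct)
echo "packet loss: ${loss}%"
```

In a real setup script you would pipe the output of `ping -c 3 gluster02` into the helper and abort when the value is not `0`.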
Adding the GlusterFS repository
Install the `software-properties-common` package on all systems:
```
apt install software-properties-common -y
```
Add the GlusterFS key and repository to all systems:
```
sudo add-apt-repository ppa:gluster/glusterfs-9
```
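If you want to confirm the repository was added before installing packages, one hedged approach is to check the apt source entry for the PPA. The sketch below runs against a sample sources line rather than the live `/etc/apt` tree, because the exact file path and Ubuntu codename vary by release:

```shell
# Sample apt source entry similar to what add-apt-repository writes; the
# URL and codename ("jammy" here) are assumptions that vary per release.
sample_source='deb https://ppa.launchpadcontent.net/gluster/glusterfs-9/ubuntu jammy main'

repo_present=no
case "$sample_source" in
  *gluster*) repo_present=yes ;;
esac
echo "gluster repository present: $repo_present"
```

On a live system you would grep the files under `/etc/apt/sources.list.d/` instead of a sample string.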
Installing a GlusterFS server
- Install the `glusterfs-server` package on both the gluster01 and gluster02 servers:
  ```
  apt install glusterfs-server -y
  ```
- Start the `glusterd` service, and enable it to launch at every system boot:
  ```
  systemctl start glusterd.service
  systemctl enable glusterd.service
  ```
  The GlusterFS server is now up and running on the gluster01 and gluster02 servers.
- Check the service status and the software version:
  ```
  systemctl status glusterd.service
  glusterfsd --version
  ```
  The commands should return an `active (running)` status and version `glusterfs 9.5`.
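If a script needs to verify that the installed GlusterFS version meets a minimum, `sort -V` (version sort) gives a portable comparison. The sketch below runs against the sample version string from above; in practice you would capture `glusterfsd --version | head -n1` instead:

```shell
# Sample first line of `glusterfsd --version`, captured statically here.
reported='glusterfs 9.5'
version="${reported#glusterfs }"   # strip the "glusterfs " prefix -> "9.5"

min_required='9.0'
status=fail
# sort -V orders version strings numerically; if the required minimum
# sorts first, the installed version is new enough.
if [ "$(printf '%s\n' "$min_required" "$version" | sort -V | head -n1)" = "$min_required" ]; then
  status=ok
fi
echo "version $version: $status"
```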
Configuring GlusterFS servers
The next step is configuring the servers by creating a trusted storage pool and creating a distributed GlusterFS volume.
- From the gluster01 server, add the gluster02 server to the GlusterFS storage pool:
  ```
  gluster peer probe gluster02
  ```
  The result `peer probe: success` is displayed: the gluster02 server has been added to the trusted storage pool.
- Check the storage pool status and list:
  ```
  gluster peer status
  gluster pool list
  ```
  ```
  root@gluster01:~# gluster peer status
  Number of Peers: 1

  Hostname: gluster02
  Uuid: 17e7a76f-f616-42e5-b741-63a07fd091d6
  State: Peer in Cluster (Connected)

  root@gluster01:~# gluster pool list
  UUID                                  Hostname   State
  17e7a76f-f616-42e5-b741-63a07fd091d6  gluster02  Connected
  ecc9cafa-b25d-477e-b6bc-403c051e752d  localhost  Connected
  ```
  The gluster02 server is connected to the peer cluster, and it appears on the pool list.
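The output of `gluster pool list` is easy to parse when scripting a health check. The minimal sketch below runs against the sample output above (so it works without a live cluster) and counts peers in the `Connected` state:

```shell
# Sample `gluster pool list` output, as shown above.
pool_list='UUID                                  Hostname   State
17e7a76f-f616-42e5-b741-63a07fd091d6  gluster02  Connected
ecc9cafa-b25d-477e-b6bc-403c051e752d  localhost  Connected'

# Count entries whose line ends with "Connected" (the header is skipped
# because it ends with "State").
connected=$(echo "$pool_list" | grep -c 'Connected$')
echo "connected peers: $connected"
```

A two-node setup like this one should report 2 (the probed peer plus localhost).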
After creating the trusted storage pool, we will create a new distributed GlusterFS volume, backed by a directory on the system disk.
Setting up the distributed GlusterFS volume
For production servers, it is recommended to create the GlusterFS volume on a dedicated partition rather than in a system directory.
- Create a new directory `/glusterfs/distributed` on both the gluster01 and the gluster02 servers:
  ```
  mkdir -p /glusterfs/distributed
  ```
- From the gluster01 server, create a replicated GlusterFS volume named `vol01` with two bricks, one on gluster01 and one on gluster02:
  ```
  gluster volume create vol01 replica 2 transport tcp gluster01:/glusterfs/distributed gluster02:/glusterfs/distributed force
  ```
  ```
  volume create: vol01: success: please start the volume to access data
  ```
- Start the `vol01` volume and check its information:
  ```
  gluster volume start vol01
  gluster volume info vol01
  ```
  ```
  root@gluster01:/# gluster volume start vol01
  volume start: vol01: success
  root@gluster01:/# gluster volume info vol01

  Volume Name: vol01
  Type: Replicate
  Volume ID: 814b103e-522c-48d2-8d1c-3301e10f3416
  Status: Started
  Snapshot Count: 0
  Number of Bricks: 1 x 2 = 2
  Transport-type: tcp
  Bricks:
  Brick1: gluster01:/glusterfs/distributed
  Brick2: gluster02:/glusterfs/distributed
  Options Reconfigured:
  transport.address-family: inet
  nfs.disable: on
  performance.client-io-threads: off
  ```
At this stage, we have created the `Replicate` type volume `vol01` with two bricks, on the gluster01 and gluster02 servers. All data written to the volume is replicated automatically to each server.
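Scripts often need just one field from `gluster volume info`, for example to assert that the volume type really is `Replicate`. A short sketch against a trimmed copy of the sample output above:

```shell
# Trimmed sample of `gluster volume info vol01`, as shown above.
vol_info='Volume Name: vol01
Type: Replicate
Number of Bricks: 1 x 2 = 2'

# Pull the value after "Type: " using ": " as the field separator.
vol_type=$(echo "$vol_info" | awk -F': ' '/^Type/ {print $2}')
echo "volume type: $vol_type"
```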
Configuring GlusterFS client
In this step, we will mount the GlusterFS volume `vol01` on the Ubuntu client, which requires installing the `glusterfs-client` package on the client server.
- Install the `glusterfs-client` package on client01:
  ```
  apt install glusterfs-client -y
  ```
- Create a new directory `/mnt/glusterfs`:
  ```
  mkdir -p /mnt/glusterfs
  ```
- Mount the distributed GlusterFS volume (`vol01`) on the `/mnt/glusterfs` directory:
  ```
  mount -t glusterfs gluster01:/vol01 /mnt/glusterfs
  ```
- Check the amount of volume space available on the system:
  ```
  df -h /mnt/glusterfs
  ```
  Note: To mount the GlusterFS volume permanently on the Ubuntu client system, add the volume to `/etc/fstab`.
- Edit the `/etc/fstab` configuration file:
  ```
  vim /etc/fstab
  ```
- Paste the following configuration:
  ```
  gluster01:/vol01 /mnt/glusterfs glusterfs defaults,_netdev 0 0
  ```
- Save and exit.
- Reboot the server. When it is back online, the GlusterFS volume `vol01` is mounted automatically through fstab.
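The fstab entry above follows the standard six-field format: device, mount point, filesystem type, options, dump flag, and fsck order. The `_netdev` option tells the system to delay the mount until the network is up, which matters for a network filesystem. Splitting the entry makes the fields explicit:

```shell
# The fstab line from above, split into its whitespace-separated fields.
entry='gluster01:/vol01 /mnt/glusterfs glusterfs defaults,_netdev 0 0'
set -- $entry   # word-split the entry into positional parameters
device="$1"; mountpoint="$2"; fstype="$3"; options="$4"
echo "device=$device mountpoint=$mountpoint type=$fstype options=$options"
```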
Testing replication and mirroring
- Mount the GlusterFS volume `vol01` on each GlusterFS server.
  - On gluster01:
    ```
    mount -t glusterfs gluster01:/vol01 /mnt
    ```
  - On gluster02:
    ```
    mount -t glusterfs gluster02:/vol01 /mnt
    ```
- Back on client01, go to the `/mnt/glusterfs` directory:
  ```
  cd /mnt/glusterfs
  ```
- Create three files using the `touch` command:
  ```
  touch file01 file02 file03
  ```
- Check on both gluster01 and gluster02 that the files created from the client machine are displayed:
  ```
  cd /mnt/
  ls -lah
  ```
  The gluster01 machine returns:
  ```
  root@gluster01:/mnt# ls -lah
  total 8.0K
  drwxr-xr-x  3 root root 4.0K Oct  1 15:40 .
  drwxr-xr-x 24 root root 4.0K Sep 28 14:11 ..
  -rw-r--r--  1 root root    0 Oct  1 15:40 file01
  -rw-r--r--  1 root root    0 Oct  1 15:40 file02
  -rw-r--r--  1 root root    0 Oct  1 15:40 file03
  ```
  The gluster02 machine returns:
  ```
  root@gluster02:/mnt# ls -lah
  total 8.0K
  drwxr-xr-x  3 root root 4.0K Oct  1 15:40 .
  drwxr-xr-x 24 root root 4.0K Sep 28 14:11 ..
  -rw-r--r--  1 root root    0 Oct  1 15:40 file01
  -rw-r--r--  1 root root    0 Oct  1 15:40 file02
  -rw-r--r--  1 root root    0 Oct  1 15:40 file03
  ```
As you can see, all the files created from the client machine were replicated automatically to every GlusterFS volume node server.
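The visual check above can be automated: listing the brick directories on both servers and comparing the results confirms replication. The sketch below uses the file names from the sample listings, captured statically so it runs without the cluster:

```shell
# File names from the sample `ls -lah` output on each server.
files_gluster01='file01
file02
file03'
files_gluster02='file01
file02
file03'

replicas=mismatch
# Identical file sets on both bricks confirm replication worked.
[ "$files_gluster01" = "$files_gluster02" ] && replicas=match
echo "replicas: $replicas"
```

On a live cluster you could capture each list with something like `ssh gluster01 'ls /glusterfs/distributed'` and compare the two results.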