GlusterFS can automatically replicate data between bricks hosted on different nodes.
I am using two CentOS 7 nodes: server1 and server2.
[root@server1 ~]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
[root@server2 ~]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
Installing on CentOS:
# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
# yum -y install glusterfs glusterfs-fuse glusterfs-server
# systemctl start glusterd
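To have glusterd come back after a reboot, I also enable it on both nodes:
# systemctl enable glusterd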
For GlusterFS I will use /dev/sdb1, 10G in size, on server1. This will be a simple storage volume accessible from both nodes.
# fdisk /dev/sdb
Option 'n' creates a new partition; choose 'p' for primary, follow the wizard to complete, and 'w' to write the partition table to disk.
Create a filesystem on sdb1 and mount it (in this example I mount it to "/export/sdb1")
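A minimal sketch, assuming XFS (the filesystem usually recommended for Gluster bricks; the larger inode size leaves room for Gluster's extended attributes):
# mkfs.xfs -i size=512 /dev/sdb1
# mkdir -p /export/sdb1
# mount /dev/sdb1 /export/sdb1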
Prepare the cluster brick: 'mkdir -p /export/sdb1/brick'
The same partition, filesystem, mount, and brick directory have to exist on server2 as well, so I repeat these steps there.
"Brick" is the term used in glusterfs to define a storage pool, which will be part of a volume. I can have multiple Bricks distributed on multiple servers, all part of the same volume.
The network needs to be configured properly before testing; otherwise the peer probe fails with this error (errno 107 is ENOTCONN, "Transport endpoint is not connected"):
'peer probe: failed: Probe returned with unknown errno 107'
I added iptables rules for GlusterFS:
# nano /etc/sysconfig/iptables-glusterfs
-A INPUT -m state --state NEW -m tcp -p tcp -s 192.168.202.0/24 --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp -s 192.168.202.0/24 --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp -s 192.168.202.0/24 --dport 2049 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp -s 192.168.202.0/24 --dport 24007 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp -s 192.168.202.0/24 --dport 38465:38469 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp -s 192.168.202.0/24 --dport 49152 -j ACCEPT
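This file only stores the rules; to activate one immediately without a restart, the same rule can also be issued directly, for example:
# iptables -A INPUT -m state --state NEW -m tcp -p tcp -s 192.168.202.0/24 --dport 24007 -j ACCEPT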
I also stopped firewalld to keep this configuration simple.
# systemctl stop firewalld
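To keep firewalld from starting again at boot:
# systemctl disable firewalld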
I added server2 to server1's /etc/hosts file, and server1 to server2's.
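The entries look like this (the IPs are examples picked from the 192.168.202.0/24 subnet used in the firewall rules above):
192.168.202.101 server1
192.168.202.102 server2
Then I tested the configuration: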
[root@server1 ~]# gluster peer probe server2
peer probe: success.
[root@server2 ~]# gluster peer probe server1
peer probe: success. Host server1 port 24007 already in peer list
At this time I can test the storage pool:
[root@server1 ~]# gluster pool list
UUID Hostname State
a3ad2dc6-da51-407d-a088-374220381ffe server2 Connected
9458f969-c1ec-42eb-81da-6a970cf06cde localhost Connected
[root@server1 ~]# gluster volume status
No volumes present
I need to create a gluster volume and test replication:
[root@server1 ~]# gluster
gluster> volume create vol0 replica 2 transport tcp server1:/export/sdb1/brick server2:/export/sdb1/brick
###### 'replica 2' mirrors the data across the two bricks; without it the same command creates a distributed volume, where files are spread across the bricks instead of copied.
###### Make sure server2 is recognized as a cluster peer; if not, use the IP address.
###### If volume creation fails for some reason, run # setfattr -x trusted.glusterfs.volume-id /export/sdb1/brick on each brick and restart glusterd.
gluster> volume start vol0
volume start: vol0: success
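Before mounting, the volume type and brick layout can be verified (output omitted here):
gluster> volume info vol0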
Create a mount point and mount the volume on both nodes:
[root@server1 ~]# mkdir /mnt/gluster
[root@server1 ~]# mount -t glusterfs server1:/vol0 /mnt/gluster/
[root@server2 ~]# mkdir /mnt/gluster
[root@server2 ~]# mount -t glusterfs server2:/vol0 /mnt/gluster/
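To make the mounts survive a reboot, an fstab entry of this form works on each node ('_netdev' delays the mount until the network is up; use server2:/vol0 on the second node):
server1:/vol0 /mnt/gluster glusterfs defaults,_netdev 0 0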
[root@server1 ~]# cp -r /var/log/ /mnt/gluster/
The content is automatically replicated between the nodes:
[root@server1 ~]# ls /mnt/gluster/log/
anaconda btmp cups firewalld lastlog messages ppp samba spooler tuned Xorg.0.log.old yum.log
audit chrony dmesg gdm maillog ntpstats qemu-ga secure sssd wtmp Xorg.9.log
boot.log cron dmesg.old glusterfs mariadb pm-powersave.log sa speech-dispatcher tallylog Xorg.0.log Xorg.9.log.old
[root@server2 ~]# ls /mnt/gluster/log/
anaconda btmp cups firewalld lastlog messages ppp samba spooler tuned Xorg.0.log.old yum.log
audit chrony dmesg gdm maillog ntpstats qemu-ga secure sssd wtmp Xorg.9.log
boot.log cron dmesg.old glusterfs mariadb pm-powersave.log sa speech-dispatcher tallylog Xorg.0.log Xorg.9.log.old
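As a final sanity check, a file created on one node should show up on the other right away:
[root@server1 ~]# touch /mnt/gluster/test-file
[root@server2 ~]# ls /mnt/gluster/test-file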