Installing and Configuring GlusterFS on CentOS 6.4

References:
http://www.sohailriaz.com/glusterfs-howto-on-centos-6-x/
http://navyaijm.blog.51cto.com/4647068/1258250

Background:
Our project currently uses rsync for file synchronization. While looking for a distributed file system to replace it, we tried MooseFS, but the results were not quite satisfactory. After learning about GlusterFS we decided to give it a try: compared with MooseFS it seems simpler to deploy, and having no metadata server means there is no single point of failure, which is very appealing.

Environment:
OS: CentOS 6.4 x86_64 Minimal
Servers: idc1-server1,idc1-server2,idc1-server3,idc1-server4
Client: idc1-server5
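
The nodes refer to each other by hostname (see the note under step 2), so every server needs to resolve the others' names. If there is no internal DNS, /etc/hosts entries on each node will do; a minimal sketch, where only 10.100.1.11 for idc1-server1 comes from the note below and the remaining addresses are illustrative placeholders:

# cat >> /etc/hosts << 'EOF'
10.100.1.11 idc1-server1
10.100.1.12 idc1-server2
10.100.1.13 idc1-server3
10.100.1.14 idc1-server4
10.100.1.15 idc1-server5
EOF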

Steps:
1. Install the GlusterFS packages on idc1-server{1-4}:
# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
# yum install -y glusterfs glusterfs-server glusterfs-fuse

# /etc/init.d/glusterd start
# chkconfig glusterd on
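
If passwordless root SSH from one admin host to idc1-server{1-4} is available, the same install-and-enable steps can be pushed to all four nodes in one go; a convenience sketch, not required by the procedure above:

# for node in idc1-server{1..4}; do ssh root@${node} "wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo; yum install -y glusterfs glusterfs-server glusterfs-fuse; /etc/init.d/glusterd start; chkconfig glusterd on"; done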

2. Configure the GlusterFS cluster from idc1-server1:
[root@idc1-server1 ~]# gluster peer probe idc1-server1

peer probe: success: on localhost not needed

[root@idc1-server1 ~]# gluster peer probe idc1-server2

peer probe: success

[root@idc1-server1 ~]# gluster peer probe idc1-server3

peer probe: success

[root@idc1-server1 ~]# gluster peer probe idc1-server4

peer probe: success

Note:
In some cases, idc1-server1 will appear in the peer list as an IP address rather than a hostname, which can cause communication problems.
Assuming idc1-server1's IP address is 10.100.1.11, this can be fixed manually with the following steps:
[root@idc1-server2 ~]# gluster peer detach 10.100.1.11

peer detach: success

[root@idc1-server2 ~]# gluster peer probe idc1-server1

peer probe: success

[root@idc1-server2 ~]# gluster peer status

Number of Peers: 3
  
Hostname: idc1-server3
Uuid: 01f25251-9ee6-40c7-a322-af53a034aa5a
State: Peer in Cluster (Connected)
  
Hostname: idc1-server4
Uuid: 212295a6-1f38-4a1e-968c-577241318ff1
State: Peer in Cluster (Connected)
  
Hostname: idc1-server1
Port: 24007
Uuid: ed016c4e-7159-433f-88a5-5c3ebd8e36c9
State: Peer in Cluster (Connected)
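
For completeness, the peer list can also be checked from idc1-server1 itself; the other three nodes should all report State: Peer in Cluster (Connected):

[root@idc1-server1 ~]# gluster peer status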

3. Create the GlusterFS volumes from idc1-server1:
[root@idc1-server1 ~]# gluster volume create datavolume1 replica 2 transport tcp idc1-server1:/usr/local/share/datavolume1 idc1-server2:/usr/local/share/datavolume1 idc1-server3:/usr/local/share/datavolume1 idc1-server4:/usr/local/share/datavolume1 force

volume create: datavolume1: success: please start the volume to access data

[root@idc1-server1 ~]# gluster volume create datavolume2 replica 2 transport tcp idc1-server1:/usr/local/share/datavolume2 idc1-server2:/usr/local/share/datavolume2 idc1-server3:/usr/local/share/datavolume2 idc1-server4:/usr/local/share/datavolume2 force

volume create: datavolume2: success: please start the volume to access data

[root@idc1-server1 ~]# gluster volume create datavolume3 replica 2 transport tcp idc1-server1:/usr/local/share/datavolume3 idc1-server2:/usr/local/share/datavolume3 idc1-server3:/usr/local/share/datavolume3 idc1-server4:/usr/local/share/datavolume3 force

volume create: datavolume3: success: please start the volume to access data
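
As the create output notes ("please start the volume to access data"), the volumes still have to be started before they can be mounted. A minimal sketch using the standard gluster CLI with the volume names created above:

[root@idc1-server1 ~]# gluster volume start datavolume1
[root@idc1-server1 ~]# gluster volume start datavolume2
[root@idc1-server1 ~]# gluster volume start datavolume3
[root@idc1-server1 ~]# gluster volume info datavolume1

With four bricks and replica 2, each volume ends up distributed-replicated (two replica pairs). Once started, a volume can be mounted from the client, e.g. on idc1-server5 with mount -t glusterfs idc1-server1:/datavolume1 /mnt/datavolume1 (after creating the mount point; the path here is hypothetical).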
