Highly Available NFS File Server Cluster Deployment (NFS + Keepalived + Sersync)
The following is a highly available file server cluster design based on NFS + Keepalived + Sersync, providing master/backup hot standby, real-time synchronization, and automatic failover:
📌 Architecture Topology
                 [VIP: 192.168.1.100]
                          |
      +-------------------+-------------------+
      |                   |                   |
 [Master NFS]        [Backup NFS]         [Clients]
(192.168.1.10)      (192.168.1.11)     (mount via VIP)
      |                   |
 (NFS Export)    (Sersync real-time sync)
🔧 Deployment Steps
1. Base environment preparation (all nodes)
# Disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# Install the required packages
yum install -y nfs-utils keepalived rsync
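Optionally, a quick sanity check that these prerequisites actually took effect on every node (all commands below are standard CentOS/RHEL tooling):
# Expect "Permissive" (or "Disabled" after a reboot) and firewalld reported as inactive
getenforce
systemctl is-active firewalld
rpm -q nfs-utils keepalived rsync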
2. NFS service configuration
Master node
# Create the shared directory
mkdir -p /data/nfs_share
chmod 777 /data/nfs_share
# Edit the NFS exports file
cat > /etc/exports <<EOF
/data/nfs_share 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
# Start the NFS service
systemctl enable --now nfs-server
exportfs -arv
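Before moving on, it is worth confirming that the export is actually published; showmount ships with nfs-utils:
# List the exports served by this host and their options
showmount -e localhost
exportfs -v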
Backup node
mkdir -p /data/nfs_share
chmod 777 /data/nfs_share
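For clients to keep working once the VIP fails over, the backup also needs to export the same path. A minimal sketch, mirroring the master's export configuration:
# Mirror the master's export so the backup can serve clients after taking over the VIP
cat > /etc/exports <<EOF
/data/nfs_share 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
systemctl enable --now nfs-server
exportfs -arv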
3. Keepalived configuration (master/backup pair)
Master node
cat > /etc/keepalived/keepalived.conf <<EOF
vrrp_script chk_nfs {
    script "/usr/bin/killall -0 nfsd"   # Check that the nfsd process is alive
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100                        # Higher priority on the master
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.100/24                # Virtual IP (VIP)
    }
    track_script {
        chk_nfs
    }
}
EOF
Backup node
cat > /etc/keepalived/keepalived.conf <<EOF
vrrp_script chk_nfs {
    script "/usr/bin/killall -0 nfsd"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90                         # Lower priority on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
        chk_nfs
    }
}
EOF
Start Keepalived (on both nodes)
systemctl enable --now keepalived
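With both nodes running, the VIP should sit on the master. If chk_nfs fails there, the master's effective priority drops to 100 - 20 = 80, below the backup's 90, and the VIP moves. A quick check of which node currently holds it:
# The VIP appears on the master's eth0 while NFS is healthy
ip addr show dev eth0 | grep 192.168.1.100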
4. Sersync real-time synchronization (master → backup)
Master node
# Download and install Sersync
wget https://github.com/wsgzao/sersync/raw/master/sersync2.5.4_64bit_binary_stable_final.tar.gz
tar -xzf sersync2.5.4_64bit_binary_stable_final.tar.gz -C /opt/
mv /opt/GNU-Linux-x86/ /opt/sersync
# Configure Sersync (the remote name must match the rsync module defined on the backup, [data] below)
cat > /opt/sersync/confxml.xml <<EOF
<?xml version="1.0" encoding="ISO-8859-1"?>
<head version="2.5">
    <host hostip="localhost" port="8008"></host>
    <debug start="false"/>
    <fileSystem xfs="true"/>
    <filter start="false">
        <exclude expression="(.*)\.tmp"></exclude>
    </filter>
    <inotify>
        <delete start="true"/>
        <createFolder start="true"/>
        <createFile start="true"/>
        <closeWrite start="true"/>
        <moveFrom start="true"/>
        <moveTo start="true"/>
    </inotify>
    <sersync>
        <localpath watch="/data/nfs_share">
            <remote ip="192.168.1.11" name="data"/>
        </localpath>
        <rsync>
            <commonParams params="-az"/>
            <auth start="true" users="root" passwordfile="/etc/rsync.pass"/>
        </rsync>
    </sersync>
</head>
EOF
# Create the rsync password file (client side: the password only)
echo "password123" > /etc/rsync.pass
chmod 600 /etc/rsync.pass
# Start Sersync (-d: run as a daemon, -r: full sync once at startup, -o: config file)
/opt/sersync/sersync2 -d -r -o /opt/sersync/confxml.xml
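Sersync ships as a bare binary with no service unit, so it will not come back after a reboot on its own. A minimal systemd unit sketch, assuming the paths used above (adjust as needed):
cat > /etc/systemd/system/sersync.service <<EOF
[Unit]
Description=Sersync real-time sync for /data/nfs_share
After=network-online.target

[Service]
Type=forking
ExecStart=/opt/sersync/sersync2 -d -r -o /opt/sersync/confxml.xml
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
# Enable for the next boot; the process started above is already running
systemctl enable sersync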
Backup node
# Configure the rsync daemon
cat > /etc/rsyncd.conf <<EOF
[data]
path = /data/nfs_share
comment = NFS Sync Directory
uid = root
gid = root
read only = no
auth users = root
secrets file = /etc/rsync.pass
EOF
echo "root:password123" > /etc/rsync.pass
chmod 600 /etc/rsync.pass
# Start the rsync daemon
systemctl enable --now rsyncd
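Before relying on Sersync, a one-off push from the master confirms that the module name, user, and password files all line up (run on the master):
# Manual test sync from the master to the backup's "data" module
rsync -avz --password-file=/etc/rsync.pass /data/nfs_share/ root@192.168.1.11::data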
5. Client mount (access through the VIP)
# Install NFS client utilities
yum install -y nfs-utils
# Create the mount point and mount the NFS share through the VIP
mkdir -p /mnt/nfs
mount -t nfs 192.168.1.100:/data/nfs_share /mnt/nfs
# Verify that writes work
echo "Test from Client" > /mnt/nfs/test.txt
⚡ Failover Testing
| Scenario | Expected result |
|---|---|
| NFS service stops on the master | chk_nfs lowers the master's priority, the VIP floats to the backup, and clients continue through the VIP after a brief pause |
| Master server goes down | The backup takes over the VIP; Sersync replication stops until the master returns |
| Master comes back online | The VIP preempts back to the master by default; use state BACKUP plus nopreempt on both nodes if you prefer a manual switchback |
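To run the first scenario by hand, stop NFS on the master and watch the VIP move (the addresses match the topology above):
# On the master: simulate an NFS failure
systemctl stop nfs-server
# On the backup: the VIP should show up within a few seconds
ip addr show dev eth0 | grep 192.168.1.100
# On a client: I/O through the VIP should resume
ls /mnt/nfs
# Restore the master afterwards
systemctl start nfs-server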
🔧 Tuning and Monitoring
- NFS performance tuning:
# Add the async option in /etc/exports (trades consistency for write throughput), then re-apply with exportfs -arv
/data/nfs_share 192.168.1.0/24(rw,async,no_root_squash)
- Sersync monitoring (a watchdog sketch follows this list):
# Check that the Sersync process is alive
ps -ef | grep sersync
# Watch the failed-sync log (Sersync records failed rsync commands here for retry)
tail -f /opt/sersync/rsync_fail_log.sh
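To automate that check, a small cron-driven watchdog that restarts Sersync when the process disappears is one option (a sketch; the script path and schedule are arbitrary choices):
cat > /opt/sersync/check_sersync.sh <<'EOF'
#!/bin/bash
# Restart Sersync if no sersync2 process is running
if ! pgrep -f sersync2 >/dev/null; then
    /opt/sersync/sersync2 -d -r -o /opt/sersync/confxml.xml
fi
EOF
chmod +x /opt/sersync/check_sersync.sh
# Run the check once a minute
echo "* * * * * root /opt/sersync/check_sersync.sh" >> /etc/crontab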
💡 Advantages
- High availability: the VIP fails over automatically, so the service stays reachable.
- Data consistency: Sersync replicates changes to the backup within seconds.
- Scalability: the layout extends naturally to multiple backup/sync nodes.
Deployed this way, your NFS cluster gains production-grade high-availability capability.