I recently inherited a GlusterFS setup that I know next to nothing about. One of the HDDs backing a brick for one of the volumes failed. I was able to swap in a replacement HDD, and the host OS can see it. I've formatted it successfully, and it is now mounted at the same location as the drive it replaced.
This is where I need help.
I believe I need to run some kind of heal command, but I'm unsure how to do that with GlusterFS. Here is some background information.
$ mount |grep glus
/dev/sdc1 on /data/glusterfs/sdc1 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sdg1 on /data/glusterfs/sdg1 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sdf1 on /data/glusterfs/sdf1 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sdb1 on /data/glusterfs/sdb1 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sdd1 on /data/glusterfs/sdd1 type xfs (rw,relatime,attr2,inode64,noquota)
127.0.0.1:/nova on /var/lib/nova/instances type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
127.0.0.1:/cinder on /var/lib/nova/mnt/92ef2ec54fd18595ed18d8e6027a1b3d type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
/dev/sde1 on /data/glusterfs/sde1 type xfs (rw,relatime,attr2,inode64,noquota)
The drive I replaced is /dev/sde1. It is mounted (as shown above), and when I run gluster volume info I can see it listed:
$ gluster volume info nova
Volume Name: nova
Type: Distributed-Replicate
Volume ID: f0d72d64-288c-4e72-9c53-2d16ce5687ac
Status: Started
Number of Bricks: 10 x 2 = 20
Transport-type: tcp
Bricks:
Brick1: icicle07:/data/glusterfs/sdb1/brick
Brick2: icicle08:/data/glusterfs/sdb1/brick
Brick3: icicle09:/data/glusterfs/sdb1/brick
Brick4: icicle10:/data/glusterfs/sdb1/brick
Brick5: icicle11:/data/glusterfs/sdb1/brick
Brick6: icicle07:/data/glusterfs/sdc1/brick
Brick7: icicle08:/data/glusterfs/sdc1/brick
Brick8: icicle09:/data/glusterfs/sdc1/brick
Brick9: icicle10:/data/glusterfs/sdc1/brick
Brick10: icicle11:/data/glusterfs/sdc1/brick
Brick11: icicle07:/data/glusterfs/sdd1/brick
Brick12: icicle08:/data/glusterfs/sdd1/brick
Brick13: icicle09:/data/glusterfs/sdd1/brick
Brick14: icicle10:/data/glusterfs/sdd1/brick
Brick15: icicle11:/data/glusterfs/sdd1/brick
Brick16: icicle07:/data/glusterfs/sde1/brick
Brick17: icicle08:/data/glusterfs/sde1/brick
Brick18: icicle09:/data/glusterfs/sde1/brick
Brick19: icicle10:/data/glusterfs/sde1/brick
Brick20: icicle11:/data/glusterfs/sde1/brick
Attempting to run a heal command results in:
$ gluster volume heal nova full
Locking failed on c551316f-7218-44cf-bb36-befe3d3df34b. Please check log file for details.
Locking failed on 79a6a414-3569-482c-929f-b7c5da16d05e. Please check log file for details.
Locking failed on ae62c691-ae55-4c99-8364-697cb3562668. Please check log file for details.
Locking failed on 5f43c6a4-0ccd-424a-ae56-0492ec64feeb. Please check log file for details.
Locking failed on cb78ba3c-256f-4413-ae7e-aa5c0e9872b5. Please check log file for details.
Locking failed on 6c0111fc-b5e7-4350-8be5-3179a1a5187e. Please check log file for details.
Locking failed on 88fcb687-47aa-4921-b3ab-d6c3b330b32a. Please check log file for details.
Locking failed on d73de03a-0f66-4619-89ef-b73c9bbd800e. Please check log file for details.
Locking failed on c7416c1f-494b-4a95-b48d-6c766c7bce14. Please check log file for details.
Locking failed on 4a780f57-37e4-4f1b-9c34-187a0c7e44bf. Please check log file for details.
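For what it's worth, the UUIDs in those locking errors look like peer IDs rather than volume IDs, so mapping them back to hostnames may show which nodes are refusing the lock. A sketch using standard gluster CLI commands:

```shell
# List all peers in the trusted pool with their UUIDs and connection state;
# match these against the UUIDs in the "Locking failed" messages.
gluster pool list

# The more verbose form, which also shows the peer state machine status.
gluster peer status
```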
Attempting to run the command a second time results in:
$ gluster volume heal nova full
Another transaction is in progress. Please try again after sometime.
Restarting glusterd flushes that lock, but I don't understand what the heal command above is actually trying to tell me. I haven't found the logs useful, since there are several of them and it isn't entirely clear to me which relates to what:
$ ls -ltr /var/log/glusterfs
...
-rw------- 1 root root 41711 Aug 1 00:51 glfsheal-nova.log-20150801
-rw------- 1 root root 0 Aug 1 03:39 glfsheal-nova.log
-rw------- 1 root root 4297 Aug 1 14:29 cmd_history.log-20150531
-rw------- 1 root root 830449 Aug 1 17:03 var-lib-nova-instances.log
-rw------- 1 root root 307535 Aug 1 17:03 glustershd.log
-rw------- 1 root root 255801 Aug 1 17:03 nfs.log
-rw------- 1 root root 4544 Aug 1 17:12 cmd_history.log
-rw------- 1 root root 28063 Aug 1 17:12 cli.log
-rw------- 1 root root 17370562 Aug 1 17:14 etc-glusterfs-glusterd.vol.log
-rw------- 1 root root 1759170187 Aug 1 17:14 var-lib-nova-mnt-92ef2ec54fd18595ed18d8e6027a1b3d.log
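As an aside, the glusterd restart mentioned above is just a service restart of the management daemon; a sketch, assuming a SysV-style init (on systemd hosts it would be systemctl restart glusterd instead):

```shell
# Restart the management daemon; this clears the stale cluster-wide lock
# left behind by the failed heal transaction.
service glusterd restart

# glusterd respawns the brick processes (glusterfsd) on startup;
# check whether the replaced brick actually came online.
gluster volume status nova
```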
Any guidance would be greatly appreciated.
Edit #1

It appears the system is having trouble bringing up the corresponding glusterfsd process for the brick/HDD I added back. Here is the output from its log file, /var/log/glusterfs/bricks/data-glusterfs-sde1-brick.log:
[2015-08-01 21:40:25.143963] I [MSGID: 100030] [glusterfsd.c:2294:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.7.0 (args: /usr/sbin/glusterfsd -s icicle11 --volfile-id nova.icicle11.data-glusterfs-sde1-brick -p /var/lib/glusterd/vols/nova/run/icicle11-data-glusterfs-sde1-brick.pid -S /var/run/gluster/d0a51f364706915faa35c6cca46e9ce6.socket --brick-name /data/glusterfs/sde1/brick -l /var/log/glusterfs/bricks/data-glusterfs-sde1-brick.log --xlator-option *-posix.glusterd-uuid=5e09f3ec-bfbc-490b-bd93-8e083e8ebd05 --brick-port 49155 --xlator-option nova-server.listen-port=49155)
[2015-08-01 21:40:25.190863] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-08-01 21:40:48.359478] I [graph.c:269:gf_add_cmdline_options] 0-nova-server: adding option 'listen-port' for volume 'nova-server' with value '49155'
[2015-08-01 21:40:48.359513] I [graph.c:269:gf_add_cmdline_options] 0-nova-posix: adding option 'glusterd-uuid' for volume 'nova-posix' with value '5e09f3ec-bfbc-490b-bd93-8e083e8ebd05'
[2015-08-01 21:40:48.359696] I [server.c:392:_check_for_auth_option] 0-/data/glusterfs/sde1/brick: skip format check for non-addr auth option auth.login./data/glusterfs/sde1/brick.allow
[2015-08-01 21:40:48.359709] I [server.c:392:_check_for_auth_option] 0-/data/glusterfs/sde1/brick: skip format check for non-addr auth option auth.login.a9c47852-7dcf-4f89-80e5-110101943f36.password
[2015-08-01 21:40:48.359719] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-08-01 21:40:48.360606] I [rpcsvc.c:2213:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2015-08-01 21:40:48.360679] W [options.c:936:xl_opt_validate] 0-nova-server: option 'listen-port' is deprecated, preferred is 'transport.socket.listen-port', continuing with correction
[2015-08-01 21:40:48.361713] E [ctr-helper.c:250:extract_ctr_options] 0-gfdbdatastore: CTR Xlator is disabled.
[2015-08-01 21:40:48.361745] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-pagesize from params.Assigning default value: 4096
[2015-08-01 21:40:48.361762] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-cachesize from params.Assigning default value: 1000
[2015-08-01 21:40:48.361774] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-journalmode from params.Assigning default value: wal
[2015-08-01 21:40:48.361795] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-wal-autocheckpoint from params.Assigning default value: 1000
[2015-08-01 21:40:48.361812] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-sync from params.Assigning default value: normal
[2015-08-01 21:40:48.361825] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-autovacuum from params.Assigning default value: none
[2015-08-01 21:40:48.362666] I [trash.c:2363:init] 0-nova-trash: no option specified for 'eliminate', using NULL
[2015-08-01 21:40:48.362906] E [posix.c:5894:init] 0-nova-posix: Extended attribute trusted.glusterfs.volume-id is absent
[2015-08-01 21:40:48.362922] E [xlator.c:426:xlator_init] 0-nova-posix: Initialization of volume 'nova-posix' failed, review your volfile again
[2015-08-01 21:40:48.362930] E [graph.c:322:glusterfs_graph_init] 0-nova-posix: initializing translator failed
[2015-08-01 21:40:48.362956] E [graph.c:661:glusterfs_graph_activate] 0-graph: init failed
[2015-08-01 21:40:48.363612] W [glusterfsd.c:1219:cleanup_and_exit] (--> 0-: received signum (0), shutting down
Edit #2

OK, so one problem appears to be that the extended attribute is missing from the replacement brick's filesystem. I assumed these commands would resolve that:
$ grep volume-id /var/lib/glusterd/vols/nova/info | cut -d= -f2 | sed 's/-//g'
f0d72d64288c4e729c532d16ce5687ac
$ setfattr -n trusted.glusterfs.volume-id -v 0xf0d72d64288c4e729c532d16ce5687ac /data/glusterfs/sde1
However, I am still getting the warning above about the attribute being absent:
[2015-08-01 18:44:50.481350] E [posix.c:5894:init] 0-nova-posix: Extended attribute trusted.glusterfs.volume-id is absent
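Reading the attribute back with getfattr is a quick way to check where the setfattr actually landed; a sketch against the paths from this setup (run as root, since trusted.* attributes are hidden from unprivileged users):

```shell
# Check the mount point, where setfattr was run above...
getfattr -n trusted.glusterfs.volume-id -e hex /data/glusterfs/sde1

# ...and the brick directory that the failing posix translator inspects:
getfattr -n trusted.glusterfs.volume-id -e hex /data/glusterfs/sde1/brick
```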
Full output following a glusterd restart:
[2015-08-01 22:03:41.467668] I [MSGID: 100030] [glusterfsd.c:2294:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.7.0 (args: /usr/sbin/glusterfsd -s icicle11 --volfile-id nova.icicle11.data-glusterfs-sde1-brick -p /var/lib/glusterd/vols/nova/run/icicle11-data-glusterfs-sde1-brick.pid -S /var/run/gluster/d0a51f364706915faa35c6cca46e9ce6.socket --brick-name /data/glusterfs/sde1/brick -l /var/log/glusterfs/bricks/data-glusterfs-sde1-brick.log --xlator-option *-posix.glusterd-uuid=5e09f3ec-bfbc-490b-bd93-8e083e8ebd05 --brick-port 49155 --xlator-option nova-server.listen-port=49155)
[2015-08-01 22:03:41.514878] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-08-01 22:04:00.334285] I [graph.c:269:gf_add_cmdline_options] 0-nova-server: adding option 'listen-port' for volume 'nova-server' with value '49155'
[2015-08-01 22:04:00.334330] I [graph.c:269:gf_add_cmdline_options] 0-nova-posix: adding option 'glusterd-uuid' for volume 'nova-posix' with value '5e09f3ec-bfbc-490b-bd93-8e083e8ebd05'
[2015-08-01 22:04:00.334518] I [server.c:392:_check_for_auth_option] 0-/data/glusterfs/sde1/brick: skip format check for non-addr auth option auth.login./data/glusterfs/sde1/brick.allow
[2015-08-01 22:04:00.334529] I [server.c:392:_check_for_auth_option] 0-/data/glusterfs/sde1/brick: skip format check for non-addr auth option auth.login.a9c47852-7dcf-4f89-80e5-110101943f36.password
[2015-08-01 22:04:00.334540] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-08-01 22:04:00.335316] I [rpcsvc.c:2213:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2015-08-01 22:04:00.335371] W [options.c:936:xl_opt_validate] 0-nova-server: option 'listen-port' is deprecated, preferred is 'transport.socket.listen-port', continuing with correction
[2015-08-01 22:04:00.336170] E [ctr-helper.c:250:extract_ctr_options] 0-gfdbdatastore: CTR Xlator is disabled.
[2015-08-01 22:04:00.336190] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-pagesize from params.Assigning default value: 4096
[2015-08-01 22:04:00.336197] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-cachesize from params.Assigning default value: 1000
[2015-08-01 22:04:00.336211] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-journalmode from params.Assigning default value: wal
[2015-08-01 22:04:00.336217] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-wal-autocheckpoint from params.Assigning default value: 1000
[2015-08-01 22:04:00.336235] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-sync from params.Assigning default value: normal
[2015-08-01 22:04:00.336241] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-autovacuum from params.Assigning default value: none
[2015-08-01 22:04:00.336951] I [trash.c:2363:init] 0-nova-trash: no option specified for 'eliminate', using NULL
[2015-08-01 22:04:00.337131] E [posix.c:5894:init] 0-nova-posix: Extended attribute trusted.glusterfs.volume-id is absent
[2015-08-01 22:04:00.337142] E [xlator.c:426:xlator_init] 0-nova-posix: Initialization of volume 'nova-posix' failed, review your volfile again
[2015-08-01 22:04:00.337148] E [graph.c:322:glusterfs_graph_init] 0-nova-posix: initializing translator failed
[2015-08-01 22:04:00.337154] E [graph.c:661:glusterfs_graph_activate] 0-graph: init failed
[2015-08-01 22:04:00.337629] W [glusterfsd.c:1219:cleanup_and_exit] (--> 0-: received signum (0), shutting down
OK, it looks like I had to do the following:

1. Add the extended attribute trusted.glusterfs.volume-id. Note that it needs to be set on the /brick directory itself; I tried it one level up and it did not work. The value for volume-id comes from the grep command shown in Edit #2 above.
2. Restart glusterd.
3. Watch the brick's log, /var/log/glusterfs/bricks/data-glusterfs-sde1-brick.log, for messages showing the brick coming up. When I then look at the brick, I can see it syncing with the rest of the cluster.
4. Once it finishes, run a heal command to double-check.
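Putting those steps together as one sequence (a sketch built from the commands already shown above; the volume name nova, the brick path, and the SysV-style service command are specific to this setup):

```shell
# 1. Recover the volume ID from glusterd's config and strip the dashes.
VOLID=$(grep volume-id /var/lib/glusterd/vols/nova/info | cut -d= -f2 | sed 's/-//g')

# 2. Set the attribute on the brick directory itself, not the mount point.
setfattr -n trusted.glusterfs.volume-id -v "0x${VOLID}" /data/glusterfs/sde1/brick

# 3. Restart glusterd so it respawns the brick's glusterfsd process.
service glusterd restart

# 4. Kick off a full heal and watch its progress.
gluster volume heal nova full
gluster volume heal nova info
```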
Additional details

I also saw messages confirming this after restarting glusterd.

Confirming the extended attributes

You can see which attributes are present with the following command:
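A sketch of dumping every attribute on the brick, assuming the getfattr tool from the attr package (run as root, since trusted.* attributes are otherwise invisible):

```shell
# -d dumps values, -m . matches every attribute name (including trusted.*),
# -e hex prints the values hex-encoded, matching the setfattr input above.
getfattr -d -m . -e hex /data/glusterfs/sde1/brick
```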