I recently inherited a glusterfs setup that I know literally nothing about. One of the HDDs providing a brick for the volume failed. I was able to replace the HDD, the host OS can see it, and I formatted it successfully; it is now mounted in the same place as the drive it replaced.
Here's where I need help.
I believe I need to run some kind of heal command, but I'm fairly confused about how to go about this with GlusterFS. Here's some of the background information.
$ mount |grep glus
/dev/sdc1 on /data/glusterfs/sdc1 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sdg1 on /data/glusterfs/sdg1 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sdf1 on /data/glusterfs/sdf1 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sdb1 on /data/glusterfs/sdb1 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sdd1 on /data/glusterfs/sdd1 type xfs (rw,relatime,attr2,inode64,noquota)
127.0.0.1:/nova on /var/lib/nova/instances type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
127.0.0.1:/cinder on /var/lib/nova/mnt/92ef2ec54fd18595ed18d8e6027a1b3d type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
/dev/sde1 on /data/glusterfs/sde1 type xfs (rw,relatime,attr2,inode64,noquota)
The HDD I swapped out is /dev/sde1. I've mounted it (as seen above), and when I run gluster volume info I can see that it's listed there:
$ gluster volume info nova
Volume Name: nova
Type: Distributed-Replicate
Volume ID: f0d72d64-288c-4e72-9c53-2d16ce5687ac
Status: Started
Number of Bricks: 10 x 2 = 20
Transport-type: tcp
Bricks:
Brick1: icicle07:/data/glusterfs/sdb1/brick
Brick2: icicle08:/data/glusterfs/sdb1/brick
Brick3: icicle09:/data/glusterfs/sdb1/brick
Brick4: icicle10:/data/glusterfs/sdb1/brick
Brick5: icicle11:/data/glusterfs/sdb1/brick
Brick6: icicle07:/data/glusterfs/sdc1/brick
Brick7: icicle08:/data/glusterfs/sdc1/brick
Brick8: icicle09:/data/glusterfs/sdc1/brick
Brick9: icicle10:/data/glusterfs/sdc1/brick
Brick10: icicle11:/data/glusterfs/sdc1/brick
Brick11: icicle07:/data/glusterfs/sdd1/brick
Brick12: icicle08:/data/glusterfs/sdd1/brick
Brick13: icicle09:/data/glusterfs/sdd1/brick
Brick14: icicle10:/data/glusterfs/sdd1/brick
Brick15: icicle11:/data/glusterfs/sdd1/brick
Brick16: icicle07:/data/glusterfs/sde1/brick
Brick17: icicle08:/data/glusterfs/sde1/brick
Brick18: icicle09:/data/glusterfs/sde1/brick
Brick19: icicle10:/data/glusterfs/sde1/brick
Brick20: icicle11:/data/glusterfs/sde1/brick
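For completeness, gluster volume status should show whether the brick process for the replaced drive is actually online; I didn't capture that output at the time, so the following is just a sketch:
$ gluster volume status nova
# prints one line per brick with its TCP port, Online (Y/N) and PID;
# the replaced brick is the one to check for Online = N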
Attempting to run a heal command results in this:
$ gluster volume heal nova full
Locking failed on c551316f-7218-44cf-bb36-befe3d3df34b. Please check log file for details.
Locking failed on 79a6a414-3569-482c-929f-b7c5da16d05e. Please check log file for details.
Locking failed on ae62c691-ae55-4c99-8364-697cb3562668. Please check log file for details.
Locking failed on 5f43c6a4-0ccd-424a-ae56-0492ec64feeb. Please check log file for details.
Locking failed on cb78ba3c-256f-4413-ae7e-aa5c0e9872b5. Please check log file for details.
Locking failed on 6c0111fc-b5e7-4350-8be5-3179a1a5187e. Please check log file for details.
Locking failed on 88fcb687-47aa-4921-b3ab-d6c3b330b32a. Please check log file for details.
Locking failed on d73de03a-0f66-4619-89ef-b73c9bbd800e. Please check log file for details.
Locking failed on c7416c1f-494b-4a95-b48d-6c766c7bce14. Please check log file for details.
Locking failed on 4a780f57-37e4-4f1b-9c34-187a0c7e44bf. Please check log file for details.
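As far as I can tell those UUIDs are the IDs of the other peers in the pool; something like this should map them back to hostnames (my assumption, not output I captured):
$ gluster pool list
# lists each peer's UUID, hostname and connection state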
Attempting to run the command again results in this:
$ gluster volume heal nova full
Another transaction is in progress. Please try again after sometime.
Restarting glusterd will release that lock, but I don't know what the heal command above is actually trying to tell me. I find the logs less than helpful, since there are several of them and it isn't entirely clear to me which one goes with what:
$ ls -ltr /var/log/glusterfs
...
-rw------- 1 root root 41711 Aug 1 00:51 glfsheal-nova.log-20150801
-rw------- 1 root root 0 Aug 1 03:39 glfsheal-nova.log
-rw------- 1 root root 4297 Aug 1 14:29 cmd_history.log-20150531
-rw------- 1 root root 830449 Aug 1 17:03 var-lib-nova-instances.log
-rw------- 1 root root 307535 Aug 1 17:03 glustershd.log
-rw------- 1 root root 255801 Aug 1 17:03 nfs.log
-rw------- 1 root root 4544 Aug 1 17:12 cmd_history.log
-rw------- 1 root root 28063 Aug 1 17:12 cli.log
-rw------- 1 root root 17370562 Aug 1 17:14 etc-glusterfs-glusterd.vol.log
-rw------- 1 root root 1759170187 Aug 1 17:14 var-lib-nova-mnt-92ef2ec54fd18595ed18d8e6027a1b3d.log
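For the record, the restart that clears the stale lock is just a bounce of the management daemon; depending on the init system on these hosts it's one of the following (my sketch):
$ service glusterd restart     # SysV-style init
$ systemctl restart glusterd   # systemd hosts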
Any guidance would be appreciated.
EDIT #1
It looks like the system is having trouble when it tries to bring up the glusterfsd process corresponding to the brick/HDD that I added back in. Here's the output from the log file /var/log/glusterfs/bricks/data-glusterfs-sde1-brick.log:
[2015-08-01 21:40:25.143963] I [MSGID: 100030] [glusterfsd.c:2294:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.7.0 (args: /usr/sbin/glusterfsd -s icicle11 --volfile-id nova.icicle11.data-glusterfs-sde1-brick -p /var/lib/glusterd/vols/nova/run/icicle11-data-glusterfs-sde1-brick.pid -S /var/run/gluster/d0a51f364706915faa35c6cca46e9ce6.socket --brick-name /data/glusterfs/sde1/brick -l /var/log/glusterfs/bricks/data-glusterfs-sde1-brick.log --xlator-option *-posix.glusterd-uuid=5e09f3ec-bfbc-490b-bd93-8e083e8ebd05 --brick-port 49155 --xlator-option nova-server.listen-port=49155)
[2015-08-01 21:40:25.190863] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-08-01 21:40:48.359478] I [graph.c:269:gf_add_cmdline_options] 0-nova-server: adding option 'listen-port' for volume 'nova-server' with value '49155'
[2015-08-01 21:40:48.359513] I [graph.c:269:gf_add_cmdline_options] 0-nova-posix: adding option 'glusterd-uuid' for volume 'nova-posix' with value '5e09f3ec-bfbc-490b-bd93-8e083e8ebd05'
[2015-08-01 21:40:48.359696] I [server.c:392:_check_for_auth_option] 0-/data/glusterfs/sde1/brick: skip format check for non-addr auth option auth.login./data/glusterfs/sde1/brick.allow
[2015-08-01 21:40:48.359709] I [server.c:392:_check_for_auth_option] 0-/data/glusterfs/sde1/brick: skip format check for non-addr auth option auth.login.a9c47852-7dcf-4f89-80e5-110101943f36.password
[2015-08-01 21:40:48.359719] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-08-01 21:40:48.360606] I [rpcsvc.c:2213:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2015-08-01 21:40:48.360679] W [options.c:936:xl_opt_validate] 0-nova-server: option 'listen-port' is deprecated, preferred is 'transport.socket.listen-port', continuing with correction
[2015-08-01 21:40:48.361713] E [ctr-helper.c:250:extract_ctr_options] 0-gfdbdatastore: CTR Xlator is disabled.
[2015-08-01 21:40:48.361745] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-pagesize from params.Assigning default value: 4096
[2015-08-01 21:40:48.361762] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-cachesize from params.Assigning default value: 1000
[2015-08-01 21:40:48.361774] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-journalmode from params.Assigning default value: wal
[2015-08-01 21:40:48.361795] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-wal-autocheckpoint from params.Assigning default value: 1000
[2015-08-01 21:40:48.361812] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-sync from params.Assigning default value: normal
[2015-08-01 21:40:48.361825] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-autovacuum from params.Assigning default value: none
[2015-08-01 21:40:48.362666] I [trash.c:2363:init] 0-nova-trash: no option specified for 'eliminate', using NULL
[2015-08-01 21:40:48.362906] E [posix.c:5894:init] 0-nova-posix: Extended attribute trusted.glusterfs.volume-id is absent
[2015-08-01 21:40:48.362922] E [xlator.c:426:xlator_init] 0-nova-posix: Initialization of volume 'nova-posix' failed, review your volfile again
[2015-08-01 21:40:48.362930] E [graph.c:322:glusterfs_graph_init] 0-nova-posix: initializing translator failed
[2015-08-01 21:40:48.362956] E [graph.c:661:glusterfs_graph_activate] 0-graph: init failed
[2015-08-01 21:40:48.363612] W [glusterfsd.c:1219:cleanup_and_exit] (--> 0-: received signum (0), shutting down
EDIT #2
OK, so one problem appears to be that the extended attribute is missing from the mounted brick's filesystem. This command should fix that:
$ grep volume-id /var/lib/glusterd/vols/nova/info | cut -d= -f2 | sed 's/-//g'
f0d72d64288c4e729c532d16ce5687ac
$ setfattr -n trusted.glusterfs.volume-id -v 0xf0d72d64288c4e729c532d16ce5687ac /data/glusterfs/sde1
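In hindsight, checking where the attribute actually landed would have flagged the problem; a quick check like this (my reconstruction) would have shown it missing from the brick directory itself:
$ getfattr -n trusted.glusterfs.volume-id -e hex /data/glusterfs/sde1/brick
# fails with 'No such attribute', because the xattr above was set on
# /data/glusterfs/sde1, one level up from the brick directory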
However, I'm still getting the above warning about the attribute being absent:
[2015-08-01 18:44:50.481350] E [posix.c:5894:init] 0-nova-posix: Extended attribute trusted.glusterfs.volume-id is absent
Full output from the glusterd restart:
[2015-08-01 22:03:41.467668] I [MSGID: 100030] [glusterfsd.c:2294:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.7.0 (args: /usr/sbin/glusterfsd -s icicle11 --volfile-id nova.icicle11.data-glusterfs-sde1-brick -p /var/lib/glusterd/vols/nova/run/icicle11-data-glusterfs-sde1-brick.pid -S /var/run/gluster/d0a51f364706915faa35c6cca46e9ce6.socket --brick-name /data/glusterfs/sde1/brick -l /var/log/glusterfs/bricks/data-glusterfs-sde1-brick.log --xlator-option *-posix.glusterd-uuid=5e09f3ec-bfbc-490b-bd93-8e083e8ebd05 --brick-port 49155 --xlator-option nova-server.listen-port=49155)
[2015-08-01 22:03:41.514878] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-08-01 22:04:00.334285] I [graph.c:269:gf_add_cmdline_options] 0-nova-server: adding option 'listen-port' for volume 'nova-server' with value '49155'
[2015-08-01 22:04:00.334330] I [graph.c:269:gf_add_cmdline_options] 0-nova-posix: adding option 'glusterd-uuid' for volume 'nova-posix' with value '5e09f3ec-bfbc-490b-bd93-8e083e8ebd05'
[2015-08-01 22:04:00.334518] I [server.c:392:_check_for_auth_option] 0-/data/glusterfs/sde1/brick: skip format check for non-addr auth option auth.login./data/glusterfs/sde1/brick.allow
[2015-08-01 22:04:00.334529] I [server.c:392:_check_for_auth_option] 0-/data/glusterfs/sde1/brick: skip format check for non-addr auth option auth.login.a9c47852-7dcf-4f89-80e5-110101943f36.password
[2015-08-01 22:04:00.334540] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-08-01 22:04:00.335316] I [rpcsvc.c:2213:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2015-08-01 22:04:00.335371] W [options.c:936:xl_opt_validate] 0-nova-server: option 'listen-port' is deprecated, preferred is 'transport.socket.listen-port', continuing with correction
[2015-08-01 22:04:00.336170] E [ctr-helper.c:250:extract_ctr_options] 0-gfdbdatastore: CTR Xlator is disabled.
[2015-08-01 22:04:00.336190] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-pagesize from params.Assigning default value: 4096
[2015-08-01 22:04:00.336197] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-cachesize from params.Assigning default value: 1000
[2015-08-01 22:04:00.336211] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-journalmode from params.Assigning default value: wal
[2015-08-01 22:04:00.336217] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-wal-autocheckpoint from params.Assigning default value: 1000
[2015-08-01 22:04:00.336235] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-sync from params.Assigning default value: normal
[2015-08-01 22:04:00.336241] W [gfdb_sqlite3.h:238:gfdb_set_sql_params] 0-nova-changetimerecorder: Failed to retrieve sql-db-autovacuum from params.Assigning default value: none
[2015-08-01 22:04:00.336951] I [trash.c:2363:init] 0-nova-trash: no option specified for 'eliminate', using NULL
[2015-08-01 22:04:00.337131] E [posix.c:5894:init] 0-nova-posix: Extended attribute trusted.glusterfs.volume-id is absent
[2015-08-01 22:04:00.337142] E [xlator.c:426:xlator_init] 0-nova-posix: Initialization of volume 'nova-posix' failed, review your volfile again
[2015-08-01 22:04:00.337148] E [graph.c:322:glusterfs_graph_init] 0-nova-posix: initializing translator failed
[2015-08-01 22:04:00.337154] E [graph.c:661:glusterfs_graph_activate] 0-graph: init failed
[2015-08-01 22:04:00.337629] W [glusterfsd.c:1219:cleanup_and_exit] (--> 0-: received signum (0), shutting down
OK, so it turns out I had to do the following.
1. Add the extended attribute trusted.glusterfs.volume-id to the replacement brick. Note that it needs to be set on the .../brick directory; I tried it one level up and it did not work. NOTE: the value for the volume id comes from the grep command shown above; see the sketch below.
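A sketch of step 1, reusing the volume id extracted earlier, this time pointed at the brick directory itself:
$ grep volume-id /var/lib/glusterd/vols/nova/info | cut -d= -f2 | sed 's/-//g'
f0d72d64288c4e729c532d16ce5687ac
$ setfattr -n trusted.glusterfs.volume-id -v 0xf0d72d64288c4e729c532d16ce5687ac /data/glusterfs/sde1/brick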
2. Restart glusterd; a sketch follows.
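The exact restart mechanism depends on the init system (my assumption); alternatively, gluster itself can respawn a missing brick process:
$ service glusterd restart
# or, without bouncing the whole management daemon:
$ gluster volume start nova force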
3. Watch the brick's log, /var/log/glusterfs/bricks/data-glusterfs-sde1-brick.log. After the restart the messages indicate the brick now starts cleanly, and as I watch the brick I can see that it's being synced up with the rest of the cluster.
4. Once complete, run a heal command to double-check things; see below.
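That is, the same heal that failed earlier, which should now go through; heal info (a standard subcommand of the heal CLI) lists what is still pending:
$ gluster volume heal nova full
$ gluster volume heal nova info
# shows, per brick, the entries still awaiting self-heal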
Additional details
I also saw messages along those lines right after restarting glusterd.
Confirming extended attributes
You can use getfattr to see which attributes are present on a brick; the exact flags below are my reconstruction:
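$ getfattr -d -m . -e hex /data/glusterfs/sde1/brick
# dumps all extended attributes in hex; trusted.glusterfs.volume-id
# should match the 0x value set above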