Updates: 5 (20171210)
mount -t nfs4 [SERVER IP]:/archlinux /mnt works. ss -ntp | grep 2049 shows that the client establishes a connection to the server before systemd starts. Can the NFSv4 id mapper only be used with Kerberos?
The problem
I am trying to set up a diskless node/workstation/system. The OS (4.13.12-1-ARCH) is installed on the SERVER under /srv/archlinux. After a successful network boot from GRUB over NFSv4, systemd starts but fails at several stages, for example:
- Failed to mount Kernel Configuration File System.
- Failed to mount Kernel Debug File System.
- Failed to mount Huge Pages File System.
- Failed to start Load/Save Random Seed.
- Failed to mount /tmp.
- Failed to start Rebuild Journal Catalog.
- It then ends with
Not tainted 4.13.12-1-ARCH #1...
Or,
- Failed to mount POSIX Message Queue File System.
- Failed to start Remount Root and Kernel File Systems.
- Failed to mount Huge Pages File System.
- Failed to mount Kernel Debug File System.
- Failed to mount Kernel Configuration File System.
- It then ends with
Not tainted 4.13.12-1-ARCH #1...
I suspect the failures are caused by a misconfiguration of NFSv4 or of the local network.
rpc.idmapd
/etc/idmapd.conf
[General]
Verbosity = 7
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = localdomain
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
[Translation]
Method = nsswitch
/etc/exports
(printed using # exportfs -v)
/srv <world>(rw,sync,wdelay,hide,no_subtree_check,fsid=0,sec=sys,no_root_squash,no_all_squash)
/srv/archlinux <world>(rw,sync,wdelay,hide,no_subtree_check,sec=sys,no_root_squash,no_all_squash)
(Exposed to "world" for debugging purposes)
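Since /srv is exported with fsid=0, it becomes the NFSv4 pseudo-root, which is why the client mounts /archlinux rather than /srv/archlinux. A minimal sketch of that path resolution, using the values from the export list above:

```shell
# fsid=0 marks /srv as the NFSv4 pseudo-root; client-visible paths
# are therefore relative to it (values taken from the exports above).
pseudo_root="/srv"
export_path="/srv/archlinux"
client_path="${export_path#"$pseudo_root"}"
echo "$client_path"   # prints /archlinux
```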
Running rpc.idmapd -fvvv in a separate tty during boot logs the following:
rpc.idmapd: libnfsidmap: using domain: localdomain
rpc.idmapd: libnfsidmap: Realms list: 'LOCALDOMAIN'
rpc.idmapd: libnfsidmap: processing 'Method' list
rpc.idmapd: libnfsidmap: loaded plugin /usr/lib/libnfsidmap/nsswitch.so for method nsswitch
rpc.idmapd: Expiration time is 600 seconds.
rpc.idmapd: Opened /proc/net/rpc/nfs4.nametoid/channel
rpc.idmapd: Opened /proc/net/rpc/nfs4.idtoname/channel
rpc.idmapd: nfsdcb: authbuf=* authtype=user
rpc.idmapd: nfs4_uid_to_name: calling nsswitch->uid_to_name
rpc.idmapd: nfs4_uid_to_name: nsswitch->uid_to_name returned 0
rpc.idmapd: nfs4_uid_to_name: final return value is 0
rpc.idmapd: Server : (user) id "0" -> name "root@localdomain"
If exportfs has sec=sys, it continues like this:
rpc.idmapd: nfsdcb: authbuf=* authtype=user
rpc.idmapd: nfs4_name_to_uid: calling nsswitch->name_to_uid
rpc.idmapd: nss_getpwnam: name '0' domain 'localdomain': resulting localname '(null)'
rpc.idmapd: nss_getpwnam: name '0' does not map into domain 'localdomain'
rpc.idmapd: nfs4_name_to_uid: nsswitch->name_to_uid returned -22
rpc.idmapd: nfs4_name_to_uid: final return value is -22
rpc.idmapd: Server : (user) name "0" -> id "99"
(stops here)
+(20171209) After making sure that /etc/hostname on the CLIENT was set to client2 (duh), if exportfs has sec=none or sec=sys, it continues like this:
rpc.idmapd: nfsdcb: authbuf=* authtype=group
rpc.idmapd: nfs4_gid_to_name: calling nsswitch->gid_to_name
rpc.idmapd: nfs4_gid_to_name: nsswitch->gid_to_name returned 0
rpc.idmapd: nfs4_gid_to_name: final return value is 0
rpc.idmapd: Server : (group) id "190" -> name "systemd-journal@localdomain"
rpc.idmapd: nfsdcb: authbuf=* authtype=user
rpc.idmapd: nfs4_name_to_uid: calling nsswitch->name_to_uid
rpc.idmapd: nss_getpwnam: name '0' domain 'localdomain': resulting localname '(null)'
rpc.idmapd: nss_getpwnam: name '0' does not map into domain 'localdomain'
rpc.idmapd: nfs4_name_to_uid: nsswitch->name_to_uid returned -22
rpc.idmapd: nfs4_name_to_uid: final return value is -22
rpc.idmapd: Server : (user) name "0" -> id "99"
(stops here)
If I change the method from nsswitch to static (UID mapping in NFS):
/etc/idmapd.conf
...
[Translation]
Method = static
[Static]
root@localdomain = root
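As a rough sketch of what the static method does (not the real libnfsidmap code; the lookup helper is made up), the [Static] section can be thought of as a literal lookup table from the remote name@domain to a local account:

```shell
# Hypothetical stand-in for static.so: look up a remote name@domain
# key in a [Static]-style mapping and print the local account name.
conf='[Static]
root@localdomain = root'
lookup() {
  printf '%s\n' "$conf" | awk -F' *= *' -v k="$1" '$1 == k { print $2 }'
}
lookup 'root@localdomain'   # prints root
```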
Running rpc.idmapd -fvvv in a separate tty during boot logs the following:
rpc.idmapd: libnfsidmap: using domain: localdomain
rpc.idmapd: libnfsidmap: Realms list: 'LOCALDOMAIN'
rpc.idmapd: libnfsidmap: processing 'Method' list
rpc.idmapd: static_getpwnam: name 'root@localdomain' mapped to 'root'
rpc.idmapd: static_getpwnam: group 'root@localdomain' mapped to 'root'
rpc.idmapd: libnfsidmap: loaded plugin /usr/lib/libnfsidmap/static.so for method static
rpc.idmapd: Expiration time is 600 seconds.
rpc.idmapd: Opened /proc/net/rpc/nfs4.nametoid/channel
rpc.idmapd: Opened /proc/net/rpc/nfs4.idtoname/channel
rpc.idmapd: nfsdcb: authbuf=* authtype=user
rpc.idmapd: nfs4_uid_to_name: calling static->uid_to_name
rpc.idmapd: nfs4_uid_to_name: static->uid_to_name returned 0
rpc.idmapd: nfs4_uid_to_name: final return value is 0
rpc.idmapd: Server : (user) id "0" -> name "root@localdomain"
If exportfs has sec=sys, it continues like this:
rpc.idmapd: nfsdcb: authbuf=* authtype=user
rpc.idmapd: nfs4_name_to_uid: calling static->name_to_uid
rpc.idmapd: nfs4_name_to_uid: static->name_to_uid returned -2
rpc.idmapd: nfs4_name_to_uid: final return value is -2
rpc.idmapd: Server : (user) name "0" -> id "99"
(stops here)
If exportfs has sec=none, it continues like this:
rpc.idmapd: nfsdcb: authbuf=* authtype=group
rpc.idmapd: nfs4_gid_to_name: calling static->gid_to_name
rpc.idmapd: nfs4_gid_to_name: static->gid_to_name returned -2
rpc.idmapd: nfs4_gid_to_name: final return value is -2
rpc.idmapd: Server : (group) id "190" -> name "nobody"
rpc.idmapd: nfsdcb: authbuf=* authtype=user
rpc.idmapd: nfs4_name_to_uid: calling static->name_to_uid
rpc.idmapd: nfs4_name_to_uid: static->name_to_uid returned -2
rpc.idmapd: nfs4_name_to_uid: final return value is -2
rpc.idmapd: Server : (user) name "0" -> id "99"
(stops here)
Similar problems with user ID mapping:
- NFSv4 user mapping
- NFS user mapping
- Local user UID and GID mapping for the mounted NFS share
- And many more... often related to a change from NFSv3 to NFSv4, and rarely about netboot.
Troubleshooting
- No firewall
- No Kerberos, LDAP, etc.
- No SELinux
- The user root exists on both the SERVER and the CLIENT, with the same password.
SERVER
All the other NFSv4-relevant configuration files I could identify on the SERVER.
/etc/nsswitch.conf
passwd: compat mymachines systemd
group: compat mymachines systemd
shadow: compat
publickey: files
hosts: files mymachines resolve [!UNAVAIL=return] dns myhostname
networks: files
protocols: files
services: files
ethers: files
rpc: files
netgroup: files
/etc/nfs.conf
(all settings commented out)
/etc/conf.d/nfs-common.conf
(all settings commented out)
Network configuration
- How to set the domain name on GNU/Linux?
- Arch Linux Wiki, Network configuration: Set the hostname
- Arch Linux Wiki, Network configuration: Local network hostname resolution
The SERVER hostname is server and it has 3 network devices (nd[1-3]). The default gateway is via 192.168.0.1 on nd1.
/etc/hosts
127.0.0.1 localhost.localdomain localhost
::1 ip6.localhost localhost
192.168.0.101 nd1.localdomain server servernd1
192.168.1.101 nd2.localdomain server servernd2
192.168.2.101 nd3.localdomain server servernd3
192.168.1.102 client1.localdomain client1
192.168.2.102 client2.localdomain client2
/etc/resolvconf.conf
name_servers=192.168.0.1
# hostname -f -> nd1.localdomain
# hostname -i -> 192.168.0.101 192.168.1.101 192.168.2.101
# getent hosts IP -> the corresponding line in /etc/hosts
# getent ahosts HOSTNAME -> the corresponding line in /etc/hosts
# ping -c 3 server.localdomain -> 0% packet loss
# id -u root -> 0
# id -un 0 -> root
Display the system's effective NFSv4 domain name on stdout.
# nfsidmap -d -> localdomain
Display on stdout all keys currently in the keyring used to cache ID mapping results. These keys are visible only to the superuser.
# nfsidmap -l -> nfsidmap: '.id_resolver' keyring was not found.
CLIENT
/etc/hostname +(20171209)
client2
/etc/hosts
(exactly the same as the hosts file on the server)
/etc/resolvconf.conf
name_servers=192.168.0.1
/etc/idmapd.conf
(exactly the same as the idmapd.conf file on the server)
/etc/fstab
# sec=sys or sec=none to correspond to the server export settings.
/dev/nfs / nfs rw,hard,rsize=9151,sec=sys,clientaddr=192.168.2.102 0 0
devtmpfs /dev devtmpfs defaults
proc /proc proc defaults
none /run tmpfs defaults
sys /sys sysfs defaults
run /run tmpfs defaults
tmp /tmp tmpfs defaults
The fstab was put together by comparing against the directories mounted on the server using findmnt -A.
net_nfs4
- +(20171210) NFS versions on SERVER and CLIENT:
cat /proc/fs/nfsd/versions -> -2 +3 +4 +4.1 +4.2
- On both SERVER and CLIENT:
cat /sys/module/nfsd/parameters/nfs4_disable_idmapping -> N
- On the SERVER:
echo "options nfsd nfs4_disable_idmapping=0" > /etc/modprobe.d/nfsd.conf
- On the CLIENT, /sys/module/nfs/parameters/nfs4_disable_idmapping does not exist, and I am not sure how to create it manually since /sys is read-only.
- +(20171210) On the CLIENT:
echo "options nfs nfs4_disable_idmapping=0" > /etc/modprobe.d/nfs.conf
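One caveat (my assumption, not something from the thread): options in /etc/modprobe.d/ only reach a module that is loaded from the initramfs if the file is included when the image is regenerated. An alternative is to pass the same options on the kernel command line, e.g. in a GRUB entry:

```
# Hypothetical GRUB linux line; module options for nfs/nfsd can be
# given as kernel parameters so they apply regardless of modprobe.d.
linux /vmlinuz-linux root=/dev/nfs nfs.nfs4_disable_idmapping=0 nfsd.nfs4_disable_idmapping=0
```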
The CLIENT IP is 192.168.2.102/24. The CLIENT network device is connected to SERVER nd2, 192.168.2.101/24 (hostname: servernd2).
Network information during boot:
:: running early hook [udev]
starting version 235
:: running hook [udev]
:: Triggering uevents...
:: running hook [net_nfs4]
IP-Config: eth0 hardware address [CLIENT NETWORK DEVICE MAC] mtu 1500 DHCP
hostname client2
IP-Config: eth0 guessed broadcast address 192.168.2.255
IP-Config: eth0 complete (from 192.168.0.101):
address: 192.168.2.102 broadcast: 192.168.2.255 netmask: 255.255.255.0
gateway: 192.168.2.101 dns0 : 192.168.0.1 dns1 : 0.0.0.0
host : client2
domain : localdomain
rootserver: 192.168.0.101 rootpath: /srv/archlinux
filename : /netboot/grub/i386-pc/core.0
NFS-Mount: 192.168.2.101:/archlinux
Waiting 10 seconds for device /dev/nfs ...
(systemd takes over from here)
Why do the NFSv4 errors occur?
Server : (group) id "190" -> name "nobody"
With NFSv4, things change: users are mapped by username, and the mapping between usernames and user IDs is done by a process called the "ID map daemon" (idmapd). In particular, NFSv4 clients and servers must use the same domain for the mapping to work correctly, otherwise requests will be mapped to the anonymous user/group. -- Experimenting with NFSv4 (on Linux and Solaris) -- March 15, 2012 - 13:03 / bronto
In an ideal world, the requesting client's user and group would determine the permissions of the returned data. We do not live in an ideal world. Two real-world problems intervene:
- You might not trust a client's root user with root access to the server's files.
- The same username on the client and the server might have different numeric IDs.
Problem 1 is conceptually simple. John Q. Programmer gets a test machine for which he has root access. In no way does that mean John Q. Programmer should be able to change root-owned files on the server. So NFS offers root squashing, a feature that maps uid 0 (root) to the anonymous uid (nfsnobody), which defaults to -2 (65534 in 16-bit numbers). -- NFS: Overview and Gotchas -- Copyright (C) 2003 by Steve Litt
+(20171209) rpc.idmapd: nss_getpwnam: name '0' domain 'localdomain': resulting localname '(null)'
According to Steve Dickson in a comment (2011-08-12 16:01:55 EDT) on a Red Hat Bugzilla report – Bug 715430:
The [error] statement explains the problem. DNS on the local machine was not configured (or returned NULL), and the Domain= variable in /etc/idmapd.conf was not set.
nss_getpwnam: name '0' does not map into domain
On the Debian mailing lists, in an e-mail exchange between Jonas Meurer and Christian Seiler (20150722) about "Kerberos-secured NFSv4", the error is explained in detail. My summary of the discussion:
When the NFS client sends
nss_getpwnam: name '8' domain 'freesources.org': resulting localname '(null)'
the client is, in some cases, sending only the uid converted to a string instead of the properly translated NFS username, which the server rejects.
The client should instead send
nss_getpwnam: name '[email protected]' domain 'freesources.org': resulting localname 'mail'
Here you can see that the owner name transmitted by the NFS client was '[email protected]' (and not simply '8'), so it contains an @; nss_getpwnam can see that the domain name matches and simply strips it, resulting in a username 'mail', which it looks up in /etc/passwd, returning the user id (in this case 8, because it is the same on client and server), and the server is perfectly satisfied.
So why does the client send the wrong username? ... every once in a while, the idmapping will fail, and the kernel will then send just a number. But that number will make the chown command fail, since the server will not translate it back.
Short answer: no idea.
Longer answer: ...
If I understand the longer answer correctly, the problem could occur because the NFS client relies on the "kernel's key cache". For the NFS server this should never be a problem because the "kernel's key cache" is never used.
Nonetheless,
Since you are using just regular nsswitch via /etc/passwd, nss_getpwnam should never fail in your case, unless you do some weird stuff with /etc/passwd at the same time.
The answer also refers to nfsidmap as an alternative to idmapd, although from reading the man page I cannot quite understand how it would replace idmapd.
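The stripping behaviour described in the discussion above can be sketched in a few lines of shell (a simplification of nss_getpwnam with a made-up helper name, not the real libnfsidmap code):

```shell
# Simplified model of libnfsidmap's name handling: strip '@domain'
# only when the sender's domain matches the configured one; a bare
# uid string such as '0' contains no '@' and maps to nothing.
to_local() {  # $1 = name from the wire, $2 = configured idmapd domain
  case "$1" in
    *"@$2") printf '%s\n' "${1%@"$2"}" ;;
    *)      printf '(null)\n' ;;
  esac
}
to_local 'mail@freesources.org' 'freesources.org'   # prints mail
to_local '0' 'localdomain'                          # prints (null)
```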
+(20171209) nss_getpwnam: name '[email protected]' does not map into domain 'localdomain'
This error message does not seem to occur for me; I am nevertheless including the answer from SUSE's support knowledgebase -- 10-DEC-13, Modified: 12-OCT-17 -- because of its description of the cause, and because the proposed remedy stands in contrast to the other discussions found.
NFSv4 handles user identities differently than NFSv3. In v3, an nfs client would simply pass a UID number in chown (and other requests) and the nfs server would accept that (even if the nfs server did not know of an account with that UID number). However, v4 was designed to pass identities in the form of name@domain. To function correctly, that normally requires idmapd (id mapping daemon) to be active at client and server, and for each to consider themselves part of the same id mapping domain.
Chown failures or idmapd errors like the ones documented above are typically a result of either:
- The username is known to the client but not known to the server, or
- The idmapd domain name is set differently on the client than it is on the server.
Therefore, this issue can be fixed by insuring that the nfs server and client are configured with the same idmapd domain name (/etc/idmapd.conf) and both have knowledge of the usernames / accounts in question.
However, it is often not convenient to insure that both sides have the same user account knowledge, especially if the nfs server is a filer. The NFS community has recognized that this idmapd feature of NFSv4 is often more troublesome than it is worth, so there are steps and modifications coming into effect to allow the NFSv3 behavior to work even under NFSv4.
The proposed remedy is to disable idmapd.
nfs.nfs4_disable_idmapping=1
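Following SUSE's remedy, the switch would presumably go into a modprobe configuration file on the client (the path and file name are my assumption, not given in the knowledgebase article):

```
# /etc/modprobe.d/nfs.conf (hypothetical) -- turn the NFSv4 id mapper
# off so uids/gids are sent numerically, as in NFSv3
options nfs nfs4_disable_idmapping=1
```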
+(20171209) Wireshark
Analyzing the Wireshark log, it is quite extensive but begins with something like:
[IP CLIENT] -> [IP SERVER] NFS 226 V4 Call ACCESS FH: [HEX VALUE], [Check: RD LU MD XT DL]
[IP SERVER] -> [IP CLIENT] NFS 238 V4 Reply (Call In 34) ACCESS, [Allowed: RD LU MD XT DL]
[IP CLIENT] -> [IP SERVER] NFS 246 V4 Call LOOKUP DH: [HEX VALUE]/archlinux
where a similar pattern [A HEX VALUE]/[PATH]
can be discerned for
/sbin
, /usr
, /bin
, /init
, /lib
, /systemd
, /dev
, /proc
, /sys
, /run
, /
, /lib64
.
When the CLIENT requests /ld-linux-x86-64.so.2
the first errors start to appear:
[IP CLIENT] -> [IP SERVER] NFS 342 V4 Call OPEN DH: [HEX VALUE]/ld-linux-x86-64.so.2
[SERVER IP] -> [CLIENT IP] NFS 166 V4 Reply (Call In 124) OPEN Status: NFS4ERR_SYMLINK
The pattern more or less repeats itself with more frequent errors, for example, LOOKUP Status: and OPEN Status: reporting NFS4ERR_NOENT.
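The NFS4ERR_SYMLINK on ld-linux-x86-64.so.2 is probably harmless in itself: the runtime linker is conventionally a symlink, so an OPEN on it is refused and the client is expected to READLINK and retry with the target. A quick local illustration (the file names here are made up):

```shell
# ld-linux-x86-64.so.2 is conventionally a symlink to the real
# loader, so OPEN returns NFS4ERR_SYMLINK and the client must
# resolve it first; mimicked here with a throwaway directory.
tmp=$(mktemp -d)
touch "$tmp/ld-2.26.so"
ln -s ld-2.26.so "$tmp/ld-linux-x86-64.so.2"
readlink "$tmp/ld-linux-x86-64.so.2"   # prints ld-2.26.so
```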
Interestingly, it is at the very end of the log where the first and only reference to user permissions is made:
[SERVER IP] -> [CLIENT IP] NFS 182 V4 Reply (Call In 9562) SETATTR Status: NFS4ERR_BADOWNER
RFC
According to
- RFC7530 (Network File System (NFS) Version 4 Protocol, 201503, PROPOSED STANDARD) -- Updated by RFC7931
- RFC5661 (Network File System (NFS) Version 4 Minor Version 1 Protocol, 201001, PROPOSED STANDARD) -- Updated by RFC8178
- RFC7862 (Network File System (NFS) Version 4 Minor Version 2 Protocol, 201611, PROPOSED STANDARD) -- Updated by RFC8178 -- which refers back to [RFC5661].
NFS4ERR_BADOWNER (Error Code 10039)
This error is returned when an owner or owner_group attribute value or the who field of an ACE within an ACL attribute value cannot be translated to a local representation.
The specifications discuss this in Section 5.9, "Interpreting owner and owner_group"; I am not sure what to cite as relevant, however.
NFS4ERR_SYMLINK (Error Code 10029)
The current filehandle designates a symbolic link when the current operation does not allow a symbolic link as the target.
NFS4ERR_NOENT (Error Code 2)
This indicates no such file or directory. The file system object referenced by the name specified does not exist.
The error could, however, be expected ...
The current filehandle is assumed to refer to a regular directory or a named attribute directory. LOOKUPP assigns the filehandle for its parent directory to be the current filehandle. If there is no parent directory, an NFS4ERR_NOENT error must be returned. Therefore, NFS4ERR_NOENT will be returned by the server when the current filehandle is at the root or top of the server's file tree.
+(20171210) mount -t nfs4 [SERVER IP]:/archlinux /mnt
On the client computer, using the Arch Linux "LiveUSB", I was able to mount the network share, download the latest kernel (4.14.4-1-ARCH) via the SERVER's internet connection, and install Arch Linux into [SERVER IP]:/archlinux.
During the install, rpc.idmapd -fvvv indicated successful mapping of usernames, for example:
rpc.idmapd: Server : (user) id "0" -> name "root@localdomain"
rpc.idmapd: Server : (group) id "99" -> name "nobody@localdomain"
... -> name "tty@localdomain"
... -> name "systemd-journal-upload@localdomain"
... -> name rpc@localdomain
... -> name systemd-journal@localdomain
... -> name utmp@localdomain
The result of genfstab
was also different:
[SERVER IP]:/archlinux / nfs4 rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=[CLIENT IP],local_lock=none,addr=[SERVER IP] 0 0
Nevertheless, after a reboot, systemd failed again with the same failures described at the beginning of the post.
+(20171210) Is the remote directory on the server mounted at /new_root?
The mkinitcpio script uses the variable mount_handler to carry an assigned "mounting function", in this case nfs_mount_handler(), to which the "root path" (/new_root) is passed as $1 at a later stage.
I am trying to verify that the client has mounted [SERVER IP]:/archlinux at /new_root. On the server, I can only observe that the client has established a connection, but not whether the directory is mounted, nor where.
showmount -a server -> All mount points on server: (empty)
ss -ntp | grep 2049 ->
ESTAB 0 0 192.168.2.101:2049 192.168.2.102:809 (random port)
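The empty showmount list may be expected: NFSv4 has no separate MOUNT protocol, so v4 clients do not register with showmount at all. The mount is easier to confirm from the client side, e.g. by reading /proc/mounts in the initramfs emergency shell; a sketch parsing a sample line (the line itself is assumed, not captured):

```shell
# NFSv4 mounts do not appear in showmount output (no MOUNT protocol),
# so check /proc/mounts on the client instead; sample line assumed.
mounts='192.168.2.101:/archlinux /new_root nfs4 rw,vers=4.2 0 0'
printf '%s\n' "$mounts" | awk '$3 == "nfs4" { print $2 }'   # prints /new_root
```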
+(20171210) Are NFSv4, sec=sys and the id mapper incompatible?
Reading the doco, it looks like sec=sys and the id mapper can be used to correctly map uid/gid to name where the client and server have different mappings in /etc/passwd and /etc/group. This simply isn't true.
That's because with sec=sys the id mapper doesn't come into play in the authentication part of the nfs protocol, only the file attributes part. With sec=sys authentication, nfs just passes the client uid/gid which is used directly by the server. So permissions checks will be screwed if client and server uid and gid don't align. To confuse things further, when the client creates a new file it is the authentication credentials that are used, so the file gets created at the server with the client's uid/gid. After that nfs uses idmap to get the file attributes, so the uid/gid (which originally came from the client) gets mapped at the server, and you end up seeing the server's name for a client uid/gid. Borkage! On the other hand, if the file was originally created at the server, you will see the correct name at the client, even if the uid/gid differs. But permissions checking will still be broken. -- kimmie -- Posted: Wed Feb 20, 2013 3:14 am Post subject: -- Emphasis in original
From the kernel documentation for kernel parameters
nfs.nfs4_disable_idmapping=
nfsd.nfs4_disable_idmapping=
nfs.nfs4_disable_idmapping=1 and nfsd.nfs4_disable_idmapping=1
Disabling the id mapper with nfsd.nfs4_disable_idmapping=1 and nfs.nfs4_disable_idmapping=1 on the SERVER and CLIENT resulted in systemd starting up to the user login prompt, with only 1 error. I added modconf to the mkinitcpio hooks, together with block and keyboard, in an attempt to deal with the other apparent problem. The rpc.idmapd -fvvv did not output any messages. I am able to log in as root using an external USB keyboard, and to read and create files. I have not done any extensive testing, so there could still be problems with this solution.
nfs.nfs4_disable_idmapping=0 and nfsd.nfs4_disable_idmapping=0
It seems that echo "options nfs nfs4_disable_idmapping=0" >> /etc/modprobe.d/nfs.conf on the CLIENT did not have any effect (cat /sys/module/nfsd/parameters/nfs4_disable_idmapping -> N). The CLIENT id mapper was disabled until I explicitly passed the parameter nfs.nfs4_disable_idmapping=0 to the kernel during boot (GRUB). The rpc.idmapd -fvvv did not output any complaints. On the other hand, it did not print anything else after establishing the first rpc.idmapd: Server : (user) id "0" -> name "root@localdomain" ... The Wireshark log, however, no longer records an NFS4ERR_BADOWNER. Nonetheless, all the systemd startup failures persist...
Conclusion
nfs.nfs4_disable_idmapping=0 and nfsd.nfs4_disable_idmapping=0
Save for setting up Kerberos and troubleshooting that instead, I am not sure what to try next. The rpc.idmapd still seems unable to map the correct permissions, yet rpc.idmapd -fvvv no longer outputs any errors...? What to do? The boot errors could perhaps be caused by something else... I dunno...
nfs.nfs4_disable_idmapping=1 and nfsd.nfs4_disable_idmapping=1
Although it works, this approach seems wrong; I am not migrating, and I should be able to set up the system using rpc.idmapd. For now it will have to do; it will probably come back and bite me in the future...