I am running Slurm 21.08.5, installed via apt.
My cluster has 4 GPU machines (nd-gpu[001-004]) with 8 GPUs each. I can run jobs such as
srun --gres=gpu:8 nvidia-smi -L
and I see my GPUs. I can also schedule real jobs with anywhere from 0 to 8 GPUs. However, resource scheduling is not working correctly. If I run:
srun --gres=gpu:1 sleep 1000
the entire node gets allocated and I cannot use the remaining 7 GPUs on that node.
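For reference, this is roughly how I am checking the allocation while the sleep job is running (the node name is just one of mine; <jobid> is whatever srun/squeue reports):

scontrol show node nd-gpu001 | grep -E 'CfgTRES|AllocTRES'
scontrol show job <jobid> | grep -i tres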
The following is the gres.conf:
NodeName=nd-gpu001 Name=gpu File=/dev/nvidia0
NodeName=nd-gpu001 Name=gpu File=/dev/nvidia1
NodeName=nd-gpu001 Name=gpu File=/dev/nvidia2
NodeName=nd-gpu001 Name=gpu File=/dev/nvidia3
NodeName=nd-gpu001 Name=gpu File=/dev/nvidia4
NodeName=nd-gpu001 Name=gpu File=/dev/nvidia5
NodeName=nd-gpu001 Name=gpu File=/dev/nvidia6
NodeName=nd-gpu001 Name=gpu File=/dev/nvidia7
NodeName=nd-gpu002 Name=gpu File=/dev/nvidia0
NodeName=nd-gpu002 Name=gpu File=/dev/nvidia1
NodeName=nd-gpu002 Name=gpu File=/dev/nvidia2
NodeName=nd-gpu002 Name=gpu File=/dev/nvidia3
NodeName=nd-gpu002 Name=gpu File=/dev/nvidia4
NodeName=nd-gpu002 Name=gpu File=/dev/nvidia5
NodeName=nd-gpu002 Name=gpu File=/dev/nvidia6
NodeName=nd-gpu002 Name=gpu File=/dev/nvidia7
NodeName=nd-gpu003 Name=gpu File=/dev/nvidia0
NodeName=nd-gpu003 Name=gpu File=/dev/nvidia1
NodeName=nd-gpu003 Name=gpu File=/dev/nvidia2
NodeName=nd-gpu003 Name=gpu File=/dev/nvidia3
NodeName=nd-gpu003 Name=gpu File=/dev/nvidia4
NodeName=nd-gpu003 Name=gpu File=/dev/nvidia5
NodeName=nd-gpu003 Name=gpu File=/dev/nvidia6
NodeName=nd-gpu003 Name=gpu File=/dev/nvidia7
NodeName=nd-gpu004 Name=gpu File=/dev/nvidia0
NodeName=nd-gpu004 Name=gpu File=/dev/nvidia1
NodeName=nd-gpu004 Name=gpu File=/dev/nvidia2
NodeName=nd-gpu004 Name=gpu File=/dev/nvidia3
NodeName=nd-gpu004 Name=gpu File=/dev/nvidia4
NodeName=nd-gpu004 Name=gpu File=/dev/nvidia5
NodeName=nd-gpu004 Name=gpu File=/dev/nvidia6
NodeName=nd-gpu004 Name=gpu File=/dev/nvidia7
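The device files listed above do exist on every node (the 8-GPU jobs above can see all of them); as a quick sanity check, directly on one of the GPU nodes:

ls -l /dev/nvidia[0-7]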
The slurm.conf:
# See the slurm.conf man page for more information.
#
ClusterName=cluster
SlurmctldHost=nd-cpu01
SlurmctldHost=nd-cpu02
#
#GresTypes=
#GroupUpdateForce=0
ProctrackType=proctrack/cgroup
ReturnToService=1
SlurmctldPidFile=/var/run/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/spool/slurmd
SlurmUser=slurm
StateSaveLocation=/home/slurm/slurmctd
TaskPlugin=task/affinity,task/cgroup
# TIMERS
InactiveLimit=0
KillWait=30
MinJobAge=300
SlurmctldTimeout=120
SlurmdTimeout=300
Waittime=0
# SCHEDULING
SchedulerType=sched/backfill
SelectType=select/cons_tres
# LOGGING AND ACCOUNTING
JobCompType=jobcomp/none
JobAcctGatherFrequency=30
SlurmctldDebug=info
SlurmctldLogFile=/var/log/slurmctld.log
SlurmdDebug=info
SlurmdLogFile=/var/log/slurmd.log
# COMPUTE NODES
GresTypes=gpu
NodeName=nd-gpu[001-004] Sockets=2 CoresPerSocket=56 ThreadsPerCore=1 State=UNKNOWN Gres=gpu:8
PartitionName=debug Nodes=ALL Default=YES MaxTime=INFINITE State=UP
And the cgroup.conf:
ConstrainDevices=yes
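In case it matters: after changing any of these files I copy them to all nodes and restart the daemons (standard systemd units from the apt packages), roughly:

sudo systemctl restart slurmctld   # on the controller nodes
sudo systemctl restart slurmd      # on each GPU node
# or, for most settings:
sudo scontrol reconfigure

so the running daemons should be using exactly the configuration shown above.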