Python mpi4py script segmentation fault on some nodes

Problem description

A simple Python MPI script crashes with a segmentation fault on certain nodes of the cluster.

The body of the script is:

import mpi4py
mpi4py.rc.threads = False   # must be set before importing MPI below
from mpi4py import MPI      # importing mpi4py.MPI initializes the MPI runtime
comm = MPI.COMM_WORLD
name = MPI.Get_processor_name()

print("hello world")
print("name:", name, "my rank is", comm.rank)
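
When the script runs cleanly, each rank prints both lines; the ordering across ranks is arbitrary. Illustrative output for two ranks on one node (the node name here is made up):

hello world
hello world
name: node001 my rank is 0
name: node001 my rank is 1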

I tried loading all of the modules from the batch file before running the script on a single node, but that did not work. The sbatch file looks like this:

#!/bin/bash
#SBATCH --ntasks=256
#SBATCH --mem-per-cpu=150mb
#SBATCH -J jobname
#SBATCH --time=11:00:00
#SBATCH --mail-type=ALL
#SBATCH --mail-user=abc@xyz.com

module load python/3.6.4        
module load gcc             
module load openmpi         
module load mpi4py

echo $SLURM_NODELIST
echo $SLURM_NTASKS
echo $SLURM_JOBID
echo $SLURM_SUBMIT_DIR
export OPENBLAS_NUM_THREADS=1

time mpirun --verbose -np $SLURM_NTASKS python3 testmpi.py

The first few lines of the output look like this; the actual name of the node is replaced with NODENAME, and INSTITUTE is a placeholder for where I work:

[NODENAME:24753] *** Process received signal ***
[NODENAME:24753] Signal: Segmentation fault (11)
[NODENAME:24753] Signal code: Address not mapped (1)
[NODENAME:24753] Failing at address: 0x7f68a835a008
[NODENAME:24753] [ 0] /lib64/libpthread.so.0(+0xf7e0) [0x7f68a7f197e0]
[NODENAME:24753] [ 1] /usr/INSTITUTE/gcc/9.1-pkgs/openmpi-4.0.1/lib/pmix/mca_gds_ds21.so(pmix_gds_ds21_lock_init+0x124) [0x7f689d41c184]
[NODENAME:24753] [ 2] /usr/INSTITUTE/gcc/9.1-pkgs/openmpi-4.0.1/lib/libmca_common_dstore.so.1(pmix_common_dstor_init+0x983) [0x7f689d20ae43]

My guess is that the modules are not being loaded on those nodes.
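
One way to test that guess (a sketch; these commands are my own addition, placed in the sbatch file before the mpirun line) is to run a single task on every allocated node and print what that node actually resolves. Importing the top-level mpi4py package does not initialize MPI, so this check cannot trip the same crash:

srun --ntasks-per-node=1 bash -c \
    'echo "$(hostname): $(which python3)"; python3 -c "import mpi4py; print(mpi4py.__file__)"'

If python3 or mpi4py resolves differently, or not at all, on exactly the nodes that crash, the module environment is the problem; if the output is identical everywhere, the modules are loaded fine and the fault lies elsewhere.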

Tags: python, distributed-computing, openmpi, mpi4py

Solution
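
The backtrace narrows this down considerably. Frames 1 and 2 sit in pmix_gds_ds21_lock_init and pmix_common_dstor_init, inside the PMIx shared-memory datastore (the gds/ds21 component) bundled with OpenMPI 4.0.1. That code runs while MPI is being initialized, before the first line of Python executes, so the script itself is not at fault, and a crash confined to particular nodes points at the PMIx shared-memory setup on those nodes rather than at unloaded modules. Segfaults at exactly this spot have been reported against the PMIx versions shipped with OpenMPI 4.0.x, and a commonly suggested workaround is to steer PMIx away from the ds21 component onto its plain hash datastore. A sketch of the change to the sbatch file (PMIX_MCA_gds is PMIx's environment-variable form of an MCA parameter; the mpirun line is unchanged from the original):

# Work around the crash in PMIx's ds21 shared-memory datastore by
# selecting the hash datastore component instead.
export PMIX_MCA_gds=hash

time mpirun --verbose -np $SLURM_NTASKS python3 testmpi.py

If the cluster provides a newer openmpi module (4.1 or later ships a newer bundled PMIx), loading that instead of openmpi 4.0.1 is the cleaner long-term fix.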

