Creating a local_directory for dask_jobqueue

Problem description

I'm trying to run dask on an HPC system that uses NFS for storage, so I want to configure dask to use node-local storage as scratch space. Every cluster node has a /scratch/ folder that all users can write to, with the convention that temporary files go in /scratch/<username>/<jobid>/.

I have some code that sets this up:

import dask_jobqueue
from distributed import Client

cluster = dask_jobqueue.SLURMCluster(
    queue='high',
    cores=24,
    memory='60GB',
    walltime='10:00:00',
    local_directory='/scratch/<username>/<jobid>/',
)

cluster.scale(1)
client = Client(cluster)
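
For reference, the SLURM submission script that dask_jobqueue generates can be inspected, and as far as I can tell the local_directory value is just passed along to the worker command line (as a --local-directory option):

print(cluster.job_script())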

However, there's a problem. The directory doesn't exist ahead of time (both because I don't know which node a worker will land on, and because the path is built from the SLURM job ID, which is always unique), so my code fails:

Process Dask Worker process (from Nanny):
Traceback (most recent call last):
  File "/home/lsterzin/anaconda3/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/home/lsterzin/anaconda3/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/home/lsterzin/anaconda3/lib/python3.7/site-packages/distributed/process.py", line 191, in _run
    target(*args, **kwargs)
  File "/home/lsterzin/anaconda3/lib/python3.7/site-packages/distributed/nanny.py", line 699, in _run
    worker = Worker(**worker_kwargs)
  File "/home/lsterzin/anaconda3/lib/python3.7/site-packages/distributed/worker.py", line 497, in __init__
    self._workspace = WorkSpace(os.path.abspath(local_directory))
  File "/home/lsterzin/anaconda3/lib/python3.7/site-packages/distributed/diskutils.py", line 118, in __init__
    self._init_workspace()
  File "/home/lsterzin/anaconda3/lib/python3.7/site-packages/distributed/diskutils.py", line 124, in _init_workspace
    os.mkdir(self.base_dir)
FileNotFoundError: [Errno 2] No such file or directory: '/scratch/<user>/<jobid>'

I can't create the directory in advance without knowing which node the dask workers will run on, and dask_jobqueue can't create the cluster if the directory doesn't exist. What's the best way to work around this?

Tags: python, dask, hpc, dask-jobqueue

Solution


Thanks @lsterzinger for the well-worded question

I've proposed a fix here that might help: https://github.com/dask/distributed/pull/3928

We'll see what the community says
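
In the meantime, one possible workaround is to have the SLURM job script itself create the directory before the worker starts. A minimal sketch, assuming the cluster layout from the question and using env_extra, which injects extra shell lines into the generated job script (newer dask_jobqueue releases call this job_script_prologue):

import dask_jobqueue
from distributed import Client

cluster = dask_jobqueue.SLURMCluster(
    queue='high',
    cores=24,
    memory='60GB',
    walltime='10:00:00',
    # $USER and $SLURM_JOB_ID are expanded by the shell on the compute node,
    # so every job gets its own unique scratch directory
    local_directory='/scratch/$USER/$SLURM_JOB_ID',
    # create that directory in the job script before dask-worker starts
    env_extra=['mkdir -p /scratch/$USER/$SLURM_JOB_ID'],
)

cluster.scale(1)
client = Client(cluster)

This should work because both values are only interpreted on the compute node: the mkdir -p line runs first, and the worker is then launched with --local-directory pointing at the now-existing path.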

