Does MPI_Reduce need an existing pointer for the receive buffer?

Problem description

The MPI documentation asserts that the address of the receive buffer (recvbuf) is significant only at the root. This means that the memory need not be allocated in the other processes, and this question confirms it.

int MPI_Reduce(const void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype,
               MPI_Op op, int root, MPI_Comm comm)

At first I thought that recvbuf did not even have to exist: that no memory had to be allocated for recvbuf itself (e.g. by dynamic allocation). Unfortunately (it took me a lot of time to understand my mistake!), it seems that even when the memory it points to is invalid, the pointer itself must exist.

See the code below for what I have in mind, with one version that gives a segfault and one that does not.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv) {
   // MPI initialization
    int world_rank, world_size;
    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int n1 = 3, n2 = 10; // Sizes of the 2d arrays

    long **observables = (long **) malloc(n1 * sizeof(long *));
    for (int k = 0 ; k < n1 ; ++k) {
        observables[k] = (long *) calloc(n2, sizeof(long));
        for (long i = 0 ; i < n2 ; ++i) {
            observables[k][i] = k * i * world_rank; // Whatever
        }
    }

    long **obs_sum; // This will hold the sum on process 0
#ifdef OLD  // Version that gives a segfault
    if (world_rank == 0) {
        obs_sum = (long **) malloc(n1 * sizeof(long *));
        for (int k = 0 ; k < n1 ; ++k) {
            obs_sum[k] = (long *) calloc(n2, sizeof(long));
        }
    }
#else // Correct version
   // We define all the pointers in all the processes.
    obs_sum = (long **) malloc(n1 * sizeof(long *));
    if (world_rank == 0) {
        for (int k = 0 ; k < n1 ; ++k) {
            obs_sum[k] = (long *) calloc(n2, sizeof(long));
        }
    }
#endif

    for (int k = 0 ; k < n1 ; ++k) {
        // This is the line that results in a segfault if OLD is defined
        MPI_Reduce(observables[k], obs_sum[k], n2, MPI_LONG, MPI_SUM, 0,
                   MPI_COMM_WORLD);
    }

    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    // You may free memory here

    return 0;
}

Am I interpreting this correctly? What is the rationale behind this behavior?

Tags: c, pointers, malloc, mpi

Solution


The problem is not MPI, but the fact that you are passing obs_sum[k], which you never defined/allocated at all.

for (int k = 0 ; k < n1 ; ++k) {
    // This is the line that results in a segfault if OLD is defined
    MPI_Reduce(observables[k], obs_sum[k], n2, MPI_LONG, MPI_SUM, 0,
               MPI_COMM_WORLD);
}

Even though MPI_Reduce() does not use the value on non-root ranks, the generated code still takes obs_sum (undefined and unallocated), adds k to it, and tries to read the pointer stored there (segfault) in order to pass it to MPI_Reduce().

For example, allocating just the array of row pointers should be enough to make it work:

#else // Correct version
      // We define all the pointers in all the processes.
      obs_sum = (long **) malloc(n1 * sizeof(long *));
      // try commenting out the following lines
      // if (world_rank == 0) {
      //   for (int k = 0 ; k < n1 ; ++k) {
      //     obs_sum[k] = (long *) calloc(n2, sizeof(long));
      //   }
      // }
#endif

I would allocate the 2D array as a flat array instead; I really dislike this array-of-pointers representation. Isn't this better?

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv) {
   // MPI initialization
    int world_rank, world_size;
    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int n1 = 3, n2 = 10; // Sizes of the 2d arrays

    long *observables = (long *) malloc(n1*n2*sizeof(long));
    for (int k = 0 ; k < n1 ; ++k) {
        for (long i = 0 ; i < n2 ; ++i) {
            observables[k*n2+i] = k * i * world_rank; // Whatever
        }
    }

    long *obs_sum = NULL; // This will hold the sum on process 0
    if (world_rank == 0) {
        obs_sum = (long *) malloc(n1*n2*sizeof(long));
    }

    MPI_Reduce(observables, obs_sum, n1*n2, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    // You may free memory here

    return 0;
}
