Using nvprof to monitor GPU performance does not work

Problem description

I am trying to use nvprof to monitor the performance of the GPU. I would like to know the time spent on HtoD (host-to-device) transfers, DtoH (device-to-host) transfers, and device execution. It works fine with the standard code from the numba CUDA website:

from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    tx = cuda.threadIdx.x # this is the unique thread ID within a 1D block
    ty = cuda.blockIdx.x  # Similarly, this is the unique block ID within the 1D grid

    block_size = cuda.blockDim.x  # number of threads per block
    grid_size = cuda.gridDim.x    # number of blocks in the grid

    start = tx + ty * block_size
    stride = block_size * grid_size

    # assuming x and y inputs are same length
    for i in range(start, x.shape[0], stride):
        out[i] = x[i] + y[i]

if __name__ == "__main__":
    import numpy as np

    n = 100000
    x = np.arange(n).astype(np.float32)
    y = 2 * x
    out = np.empty_like(x)

    threads_per_block = 128
    blocks_per_grid = 30

    add_kernel[blocks_per_grid, threads_per_block](x, y, out)
    print(out[:10])

Here is the result from nvprof:

(screenshot: nvprof shows the expected profiling results)
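
For reference, a baseline run like the one above is profiled with a plain nvprof invocation (assuming the snippet is saved as `add.py`; the filename is my own placeholder, not from the original post):

```shell
# Profile the single-process script; on a clean exit nvprof prints the
# HtoD/DtoH memcpy timings and the kernel execution time to stderr.
nvprof python add.py
```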

However, when I add multiprocessing with the following code:

import multiprocessing as mp
from numba import cuda

def fun():

    @cuda.jit
    def add_kernel(x, y, out):
        tx = cuda.threadIdx.x # this is the unique thread ID within a 1D block
        ty = cuda.blockIdx.x  # Similarly, this is the unique block ID within the 1D grid

        block_size = cuda.blockDim.x  # number of threads per block
        grid_size = cuda.gridDim.x    # number of blocks in the grid

        start = tx + ty * block_size
        stride = block_size * grid_size

        # assuming x and y inputs are same length
        for i in range(start, x.shape[0], stride):
            out[i] = x[i] + y[i]

    import numpy as np

    n = 100000
    x = np.arange(n).astype(np.float32)
    y = 2 * x
    out = np.empty_like(x)

    threads_per_block = 128
    blocks_per_grid = 30

    add_kernel[blocks_per_grid, threads_per_block](x, y, out)
    print(out[:10])
    return out


if __name__ == "__main__":
    # check gpu condition
    p = mp.Process(target=fun)
    p.daemon = True
    p.start()
    p.join()

nvprof seems to be monitoring the process, but it does not record any results, even though it reports that it is profiling:

(screenshot: nvprof says it is profiling but records no results)

Furthermore, when I use Ray (a package for distributed computing):

if __name__ == "__main__":

    import multiprocessing

    def fun():

        from numba import cuda
        import ray

        @ray.remote(num_gpus=1)
        def call_ray():
            @cuda.jit
            def add_kernel(x, y, out):
                tx = cuda.threadIdx.x # this is the unique thread ID within a 1D block
                ty = cuda.blockIdx.x  # Similarly, this is the unique block ID within the 1D grid

                block_size = cuda.blockDim.x  # number of threads per block
                grid_size = cuda.gridDim.x    # number of blocks in the grid

                start = tx + ty * block_size
                stride = block_size * grid_size

                # assuming x and y inputs are same length
                for i in range(start, x.shape[0], stride):
                    out[i] = x[i] + y[i]

            import numpy as np

            n = 100000
            x = np.arange(n).astype(np.float32)
            y = 2 * x
            out = np.empty_like(x)

            threads_per_block = 128
            blocks_per_grid = 30

            add_kernel[blocks_per_grid, threads_per_block](x, y, out)
            print(out[:10])
            return out


        ray.shutdown()
        ray.init(redis_address = "***")
        out = ray.get(call_ray.remote())

    # check gpu condition
    p = multiprocessing.Process(target = fun)
    p.daemon = True
    p.start()
    p.join()

nvprof shows nothing at all! It does not even print the line saying that it is profiling the process (although the code did execute):

(screenshot: nvprof produces no output at all)

Does anyone know how to fix this? Or do I have any other options for collecting this data in a distributed-computing setting?

Tags: nvprof

Solution
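
A likely explanation (my suggestion, not part of the original post): by default nvprof only profiles the process it launches, so CUDA work done in a child created by `multiprocessing` (or by Ray's workers) is invisible to it. nvprof has a `--profile-child-processes` flag that also profiles forked/spawned children (`add_mp.py` below is a placeholder name for the multiprocessing snippet above):

```shell
# Profile the launching process and every child process it spawns;
# nvprof reports each child's results separately.
nvprof --profile-child-processes python add_mp.py
```

For Ray this may still not be enough, because the worker processes are started by the Ray runtime rather than by the profiled script. For distributed setups, a newer tool such as Nsight Systems (`nsys profile`), which has its own options for following child processes, may be a better fit.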
