java - OpenCL and Java - Strange Performance Results
Problem Description
I'm trying to use OpenCL, via JOCL, to improve the performance of some Java code. I've been working through the samples provided on their website and used them to throw together a quick program to compare its performance against doing things normally. The results I'm getting, though, are a bit unexpected, and I'm worried I may be doing something wrong.
First off, I'm using JOCL 0.1.9, because my NVIDIA card doesn't support OpenCL/JOCL 2.0. My machine has an Intel Core i7 CPU, an Intel HD Graphics 530, and an NVIDIA Quadro M2000M.
The program I wrote is based on the JOCL samples; it takes two arrays of numbers and multiplies them, putting the results into a third array. I use Java's nanoTime() method to roughly track the execution time as observed from Java.
import static org.jocl.CL.*;

import org.jocl.*;

public class PerformanceComparison {

    public static final int ARRAY_SIZE = 1000000;

    // OpenCL kernel code
    private static String programSource = "__kernel void " + "sampleKernel(__global const float *a,"
            + " __global const float *b," + " __global float *c)" + "{"
            + " int gid = get_global_id(0);" + " c[gid] = a[gid] * b[gid];" + "}";
    public static final void main(String[] args) {
        // build arrays
        float[] sourceA = new float[ARRAY_SIZE];
        float[] sourceB = new float[ARRAY_SIZE];
        float[] nvidiaResult = new float[ARRAY_SIZE];
        float[] intelCPUResult = new float[ARRAY_SIZE];
        float[] intelGPUResult = new float[ARRAY_SIZE];
        float[] javaResult = new float[ARRAY_SIZE];
        for (int i = 0; i < ARRAY_SIZE; i++) {
            sourceA[i] = i;
            sourceB[i] = i;
        }
        // get platforms
        cl_platform_id[] platforms = new cl_platform_id[2];
        clGetPlatformIDs(2, platforms, null);
        // I know what devices I have, so declare variables for each of them
        cl_context intelCPUContext = null;
        cl_context intelGPUContext = null;
        cl_context nvidiaContext = null;
        cl_device_id intelCPUDevice = null;
        cl_device_id intelGPUDevice = null;
        cl_device_id nvidiaDevice = null;
        // get all devices on all platforms
        for (int i = 0; i < 2; i++) {
            cl_platform_id platform = platforms[i];
            cl_context_properties properties = new cl_context_properties();
            properties.addProperty(CL_CONTEXT_PLATFORM, platform);
            int[] numDevices = new int[1];
            cl_device_id[] devices = new cl_device_id[2];
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 2, devices, numDevices);
            // get devices and build contexts
            for (int j = 0; j < numDevices[0]; j++) {
                cl_device_id device = devices[j];
                cl_context context = clCreateContext(properties, 1, new cl_device_id[] { device }, null, null, null);
                long[] length = new long[1];
                byte[] buffer = new byte[2000];
                clGetDeviceInfo(device, CL_DEVICE_NAME, 2000, Pointer.to(buffer), length);
                String deviceName = new String(buffer, 0, (int) length[0] - 1);
                // save based on the device name
                if (deviceName.contains("Quadro")) {
                    nvidiaContext = context;
                    nvidiaDevice = device;
                }
                if (deviceName.contains("Core(TM)")) {
                    intelCPUContext = context;
                    intelCPUDevice = device;
                }
                if (deviceName.contains("HD Graphics")) {
                    intelGPUContext = context;
                    intelGPUDevice = device;
                }
            }
        }
        // multiply the arrays using Java and on each of the devices
        long jvmElapsed = runInJVM(sourceA, sourceB, javaResult);
        long intelCPUElapsed = runInJOCL(intelCPUContext, intelCPUDevice, sourceA, sourceB, intelCPUResult);
        long intelGPUElapsed = runInJOCL(intelGPUContext, intelGPUDevice, sourceA, sourceB, intelGPUResult);
        long nvidiaElapsed = runInJOCL(nvidiaContext, nvidiaDevice, sourceA, sourceB, nvidiaResult);
        // results
        System.out.println("Standard Java Runtime: " + jvmElapsed + " ns");
        System.out.println("Intel CPU Runtime: " + intelCPUElapsed + " ns");
        System.out.println("Intel GPU Runtime: " + intelGPUElapsed + " ns");
        System.out.println("NVIDIA GPU Runtime: " + nvidiaElapsed + " ns");
    }

    /**
     * The basic Java approach - loop through the arrays, and save their results into the third array
     *
     * @param sourceA multiplicand
     * @param sourceB multiplier
     * @param result  product
     * @return the (rough) execution time in nanoseconds
     */
    private static long runInJVM(float[] sourceA, float[] sourceB, float[] result) {
        long startTime = System.nanoTime();
        for (int i = 0; i < ARRAY_SIZE; i++) {
            result[i] = sourceA[i] * sourceB[i];
        }
        long endTime = System.nanoTime();
        return endTime - startTime;
    }

    /**
     * Run a more-or-less equivalent program in OpenCL on the specified device
     *
     * @param context JOCL context
     * @param device  JOCL device
     * @param sourceA multiplicand
     * @param sourceB multiplier
     * @param result  product
     * @return the (rough) execution time in nanoseconds
     */
    private static long runInJOCL(cl_context context, cl_device_id device, float[] sourceA, float[] sourceB,
            float[] result) {
        // create command queue
        cl_command_queue commandQueue = clCreateCommandQueue(context, device, CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE, null);
        // allocate memory
        cl_mem memObjects[] = new cl_mem[3];
        memObjects[0] = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, Sizeof.cl_float * ARRAY_SIZE,
                Pointer.to(sourceA), null);
        memObjects[1] = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, Sizeof.cl_float * ARRAY_SIZE,
                Pointer.to(sourceB), null);
        memObjects[2] = clCreateBuffer(context, CL_MEM_READ_WRITE, Sizeof.cl_float * ARRAY_SIZE, null, null);
        // build program and set arguments
        cl_program program = clCreateProgramWithSource(context, 1, new String[] { programSource }, null, null);
        clBuildProgram(program, 0, null, null, null, null);
        cl_kernel kernel = clCreateKernel(program, "sampleKernel", null);
        clSetKernelArg(kernel, 0, Sizeof.cl_mem, Pointer.to(memObjects[0]));
        clSetKernelArg(kernel, 1, Sizeof.cl_mem, Pointer.to(memObjects[1]));
        clSetKernelArg(kernel, 2, Sizeof.cl_mem, Pointer.to(memObjects[2]));
        long global_work_size[] = new long[] { ARRAY_SIZE };
        long local_work_size[] = new long[] { 1 };
        // Execute the kernel
        long startTime = System.nanoTime();
        clEnqueueNDRangeKernel(commandQueue, kernel, 1, null,
                global_work_size, local_work_size, 0, null, null);
        // Read the output data
        clEnqueueReadBuffer(commandQueue, memObjects[2], CL_TRUE, 0,
                ARRAY_SIZE * Sizeof.cl_float, Pointer.to(result), 0, null, null);
        long endTime = System.nanoTime();
        // Release kernel, program, and memory objects
        clReleaseMemObject(memObjects[0]);
        clReleaseMemObject(memObjects[1]);
        clReleaseMemObject(memObjects[2]);
        clReleaseKernel(kernel);
        clReleaseProgram(program);
        clReleaseCommandQueue(commandQueue);
        clReleaseContext(context);
        return endTime - startTime;
    }
}
The program's output is:
Standard Java Runtime: 3662913 ns
Intel CPU Runtime: 27186 ns
Intel GPU Runtime: 9817 ns
NVIDIA GPU Runtime: 12400512 ns
Two things about this confuse me:
- Why does the program run so much faster on the CPU when using OpenCL? It's the same device the JVM would be using; I know Java is slow compared to a lower-level language like OpenCL, but I didn't think it was that slow.
- What's wrong with the NVIDIA card? I know their OpenCL support is less than stellar given their CUDA framework, but I'd still expect it to at least be faster than doing things normally. As it is, the backup, "this is here just in case you fry your real graphics card" Intel GPU is running circles around it.
I'm worried that I'm doing something wrong, or at least missing something that would let this reach its full potential. Any pointers would be most welcome.
PS - I know that since I have an NVIDIA card, CUDA might be the better/faster option for me; but in this case I prefer the flexibility of OpenCL.
Update: I was able to find one thing I was doing wrong; relying on Java to report the runtime was dumb. I wrote a new test using OpenCL's profiling facilities, and it's getting more sensible results:
Code:
import static org.jocl.CL.*;

import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;

import org.jocl.*;

public class PerformanceComparisonTakeTwo {
    //@formatter:off
    private static final String PROFILE_TEST =
            "__kernel void "
            + "sampleKernel(__global const float *a,"
            + " __global const float *b,"
            + " __global float *c,"
            + " __global float *d,"
            + " __global float *e,"
            + " __global float *f)"
            + "{"
            + " int gid = get_global_id(0);"
            + " c[gid] = a[gid] + b[gid];"
            + " d[gid] = a[gid] - b[gid];"
            + " e[gid] = a[gid] * b[gid];"
            + " f[gid] = a[gid] / b[gid];"
            + "}";
    //@formatter:on

    private static final int ARRAY_SIZE = 100000000;

    public static final void main(String[] args) {
        initialize();
    }

    public static void initialize() {
        // identify all platforms
        cl_platform_id[] platforms = getPlatforms();
        Map<cl_device_id, cl_platform_id> deviceMap = getDevices(platforms);
        performProfilingTest(deviceMap);
    }

    private static cl_platform_id[] getPlatforms() {
        int[] platformCount = new int[1];
        clGetPlatformIDs(0, null, platformCount);
        cl_platform_id[] platforms = new cl_platform_id[platformCount[0]];
        clGetPlatformIDs(platforms.length, platforms, platformCount);
        return platforms;
    }

    private static Map<cl_device_id, cl_platform_id> getDevices(cl_platform_id[] platforms) {
        Map<cl_device_id, cl_platform_id> deviceMap = new HashMap<>();
        for (int i = 0; i < platforms.length; i++) {
            int[] deviceCount = new int[1];
            clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL, 0, null, deviceCount);
            cl_device_id[] devices = new cl_device_id[deviceCount[0]];
            clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL, devices.length, devices, null);
            for (int j = 0; j < devices.length; j++) {
                deviceMap.put(devices[j], platforms[i]);
            }
        }
        return deviceMap;
    }

    private static void performProfilingTest(Map<cl_device_id, cl_platform_id> deviceMap) {
        float[] sourceA = new float[ARRAY_SIZE];
        float[] sourceB = new float[ARRAY_SIZE];
        for (int i = 0; i < ARRAY_SIZE; i++) {
            sourceA[i] = i;
            sourceB[i] = i;
        }
        for (Entry<cl_device_id, cl_platform_id> devicePair : deviceMap.entrySet()) {
            cl_device_id device = devicePair.getKey();
            cl_platform_id platform = devicePair.getValue();
            cl_context_properties properties = new cl_context_properties();
            properties.addProperty(CL_CONTEXT_PLATFORM, platform);
            cl_context context = clCreateContext(properties, 1, new cl_device_id[] { device }, null, null, null);
            cl_command_queue commandQueue = clCreateCommandQueue(context, device,
                    CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE | CL_QUEUE_PROFILING_ENABLE, null);
            cl_mem memObjects[] = new cl_mem[6];
            memObjects[0] = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, Sizeof.cl_float * ARRAY_SIZE,
                    Pointer.to(sourceA), null);
            memObjects[1] = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, Sizeof.cl_float * ARRAY_SIZE,
                    Pointer.to(sourceB), null);
            memObjects[2] = clCreateBuffer(context, CL_MEM_READ_WRITE, Sizeof.cl_float * ARRAY_SIZE, null, null);
            memObjects[3] = clCreateBuffer(context, CL_MEM_READ_WRITE, Sizeof.cl_float * ARRAY_SIZE, null, null);
            memObjects[4] = clCreateBuffer(context, CL_MEM_READ_WRITE, Sizeof.cl_float * ARRAY_SIZE, null, null);
            memObjects[5] = clCreateBuffer(context, CL_MEM_READ_WRITE, Sizeof.cl_float * ARRAY_SIZE, null, null);
            cl_program program = clCreateProgramWithSource(context, 1, new String[] { PROFILE_TEST }, null, null);
            clBuildProgram(program, 0, null, null, null, null);
            cl_kernel kernel = clCreateKernel(program, "sampleKernel", null);
            for (int i = 0; i < memObjects.length; i++) {
                clSetKernelArg(kernel, i, Sizeof.cl_mem, Pointer.to(memObjects[i]));
            }
            cl_event event = new cl_event();
            long global_work_size[] = new long[] { ARRAY_SIZE };
            long local_work_size[] = new long[] { 1 };
            long start = System.nanoTime();
            clEnqueueNDRangeKernel(commandQueue, kernel, 1, null,
                    global_work_size, local_work_size, 0, null, event);
            clWaitForEvents(1, new cl_event[] { event });
            long end = System.nanoTime();
            System.out.println("Information for " + getDeviceInfoString(device, CL_DEVICE_NAME));
            System.out.println("\tGPU Runtime: " + getRuntime(event));
            System.out.println("\tJava Runtime: " + ((end - start) / 1e6) + " ms");
            clReleaseEvent(event);
            for (int i = 0; i < memObjects.length; i++) {
                clReleaseMemObject(memObjects[i]);
            }
            clReleaseKernel(kernel);
            clReleaseProgram(program);
            clReleaseCommandQueue(commandQueue);
            clReleaseContext(context);
        }
        float[] result1 = new float[ARRAY_SIZE];
        float[] result2 = new float[ARRAY_SIZE];
        float[] result3 = new float[ARRAY_SIZE];
        float[] result4 = new float[ARRAY_SIZE];
        long start = System.nanoTime();
        for (int i = 0; i < ARRAY_SIZE; i++) {
            result1[i] = sourceA[i] + sourceB[i];
            result2[i] = sourceA[i] - sourceB[i];
            result3[i] = sourceA[i] * sourceB[i];
            result4[i] = sourceA[i] / sourceB[i];
        }
        long end = System.nanoTime();
        System.out.println("JVM Benchmark: " + ((end - start) / 1e6) + " ms");
    }

    private static String getDeviceInfoString(cl_device_id device, int parameter) {
        long[] bufferLength = new long[1];
        clGetDeviceInfo(device, parameter, 0, null, bufferLength);
        byte[] buffer = new byte[(int) bufferLength[0]];
        clGetDeviceInfo(device, parameter, bufferLength[0], Pointer.to(buffer), null);
        return new String(buffer, 0, buffer.length - 1);
    }

    private static String getRuntime(cl_event event) {
        long[] start = new long[1];
        long[] end = new long[1];
        clGetEventProfilingInfo(event, CL_PROFILING_COMMAND_START, Sizeof.cl_ulong, Pointer.to(start), null);
        clGetEventProfilingInfo(event, CL_PROFILING_COMMAND_END, Sizeof.cl_ulong, Pointer.to(end), null);
        long nanos = end[0] - start[0];
        double millis = nanos / 1e6;
        return millis + " ms";
    }
}
Output:
Information for Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz
GPU Runtime: 639.986906 ms
Java Runtime: 641.590764 ms
Information for Quadro M2000M
GPU Runtime: 794.972 ms
Java Runtime: 1191.357248 ms
Information for Intel(R) HD Graphics 530
GPU Runtime: 1897.876624 ms
Java Runtime: 2065.011125 ms
JVM Benchmark: 192.680669 ms
This seems to show that the far more powerful NVIDIA card does in fact outperform the Intel card, as I would have expected. But...
- Why is the CPU still faster?
- Why is plain Java suddenly so much faster?
Solution
I'm still poking around and trying to make sense of this, but I'll start posting an actual answer here for the benefit of any other clueless newbies like me. Hopefully someone less clueless will come along soon and correct anything I get wrong, but at the very least those other clueless newbies can see my work and learn from it.
As I noted in the edit to the question, part of the odd results came from relying on Java to tell me how fast things were running. That wasn't entirely wrong, I think, but I was misinterpreting the data. The Java-side runtime includes the time it takes to shuffle everything into and out of the device's memory, while the runtime OpenCL reports covers only the kernel execution itself; after all, OpenCL doesn't really know or care about what's calling it. Enabling OpenCL profiling and using events to track its runtimes helped me clear this up. It also explains the very small gap between the CPU runtimes: the CPU isn't really a different device, so no memory transfer was taking place.
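The difference between the two timing scopes can be sketched in plain Java (no OpenCL here; the "device" arrays and the copies around them are hypothetical stand-ins for GPU buffers and host-to-device transfers):

```java
// Plain-Java sketch: why an outer wall-clock timer around an OpenCL call reads
// higher than the device's own profiling counters. The "dev" arrays below are
// just stand-ins for GPU buffers; the arraycopy calls play the role of
// host<->device transfers.
public class TimingScopes {
    public static void main(String[] args) {
        int n = 1_000_000;
        float[] hostA = new float[n];
        float[] hostB = new float[n];
        for (int i = 0; i < n; i++) { hostA[i] = i; hostB[i] = i; }

        long outerStart = System.nanoTime();   // what Java sees: transfers + compute

        float[] devA = new float[n];           // "copy in" (stand-in for CL_MEM_COPY_HOST_PTR)
        float[] devB = new float[n];
        float[] devC = new float[n];
        System.arraycopy(hostA, 0, devA, 0, n);
        System.arraycopy(hostB, 0, devB, 0, n);

        long innerStart = System.nanoTime();   // what device profiling sees: kernel only
        for (int i = 0; i < n; i++) {
            devC[i] = devA[i] * devB[i];
        }
        long innerEnd = System.nanoTime();

        float[] hostC = new float[n];          // "copy out" (stand-in for clEnqueueReadBuffer)
        System.arraycopy(devC, 0, hostC, 0, n);

        long outerEnd = System.nanoTime();

        long kernelOnly = innerEnd - innerStart;
        long endToEnd = outerEnd - outerStart;
        System.out.println("kernel-only ns: " + kernelOnly);
        System.out.println("end-to-end  ns: " + endToEnd);
        System.out.println("inner <= outer: " + (kernelOnly <= endToEnd));
    }
}
```

Since the inner interval is contained in the outer one, the Java-side number can only overstate the device-side one, which is exactly the pattern in the profiled output above.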
I also noticed that my code above has one serious flaw. When enqueueing the kernel command, CL.clEnqueueNDRangeKernel takes nine arguments. The sixth argument is called "local_work_size"; this specifies the size of each "work-group" that OpenCL uses to run your code, i.e. how many work-items execute together. The closest analogy to Java I can come up with is threads: more threads (usually) means more work can be done at once (up to a point). In the code above, I was doing what the sample showed and telling OpenCL to use work-groups of size one; basically, run each work-item on its own. My understanding is that this is exactly the wrong thing to do in GPGPU: the whole point of using a GPU is that it can handle far more computations at once than a CPU can, and forcing it toward one computation at a time defeats the purpose. It appears the best approach here is to simply leave the sixth argument null; this instructs OpenCL to create work-groups as it sees fit. You can specify a number, but the maximum allowed varies by device (you can use CL.clGetDeviceInfo to query your device's CL_DEVICE_MAX_WORK_GROUP_SIZE attribute to determine the absolute maximum, though it gets more complicated if you use multiple dimensions).
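To make the thread analogy concrete, here is a hypothetical plain-Java sketch (no OpenCL involved): the index range is split into chunks handed to pool threads, loosely playing the role of work-groups covering the global work size.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical analogy only: each "work-group" here is a chunk of the index
// range run by one pool thread. Chunks of size 1 on a single worker mirror
// the serialized behavior; spanning all cores mirrors letting OpenCL pick
// the work-group size itself (local_work_size = null).
public class WorkGroupAnalogy {
    public static void main(String[] args) throws Exception {
        int n = 1_000_000;
        float[] a = new float[n];
        float[] b = new float[n];
        float[] c = new float[n];
        for (int i = 0; i < n; i++) { a[i] = i; b[i] = i; }

        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        int chunk = (n + workers - 1) / workers;   // one "work-group" per worker

        Future<?>[] futures = new Future<?>[workers];
        for (int w = 0; w < workers; w++) {
            final int from = w * chunk;
            final int to = Math.min(n, from + chunk);
            futures[w] = pool.submit(() -> {
                for (int i = from; i < to; i++) {  // each thread multiplies its own slice
                    c[i] = a[i] * b[i];
                }
            });
        }
        for (Future<?> f : futures) {
            f.get();                               // wait for all chunks, like clWaitForEvents
        }
        pool.shutdown();

        System.out.println("c[2] = " + c[2]);      // 2 * 2
        System.out.println("c[3] = " + c[3]);      // 3 * 3
    }
}
```

The GPU pushes this idea much further than a CPU thread pool can, which is why starving it down to tiny work-groups hurts it so badly.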
The short version:
- OpenCL's profiling will give you better timing statistics than Java can (though using both together helps show the latency involved in moving data between the CPU and GPU)
- Don't specify local_work_size when calling CL.clEnqueueNDRangeKernel - that lets OpenCL handle the "multithreading" automatically
New results:
Information for Quadro M2000M
GPU Runtime: 35.88192 ms
Java Runtime: 438.165651 ms
Information for Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz
GPU Runtime: 166.278112 ms
Java Runtime: 167.128259 ms
Information for Intel(R) HD Graphics 530
GPU Runtime: 90.985728 ms
Java Runtime: 239.230354 ms
JVM Benchmark: 177.824372 ms