OpenCL vs OpenMP performance [closed]

Have there been any studies comparing OpenCL and OpenMP performance? Specifically, I am interested in the overhead cost of launching threads with OpenCL, e.g., if one were to decompose the domain into a very large number of individual work items (each run by a thread doing a small job) versus heavier-weight threads in OpenMP, where the domain is decomposed into subdomains whose number equals the number of cores.

It seems that the OpenCL programming model is targeted more at massively parallel chips (GPUs, for instance) than at CPUs, which have fewer but more powerful cores.

Can OpenCL be an effective replacement for OpenMP?
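
To make the contrast concrete, here is roughly what I have in mind, as a toy vector-scaling example (the names are made up, and the kernel source would normally live in its own string or file):

    /* OpenCL style: one tiny work item per array element, so the
       runtime may schedule millions of lightweight "threads". */
    __kernel void scale(__global float *x, const float a)
    {
        size_t i = get_global_id(0);
        x[i] = a * x[i];
    }

    /* OpenMP style: a handful of heavyweight threads; the runtime
       hands each one a large contiguous chunk of the iteration
       space (roughly n / number-of-cores elements). */
    #include <stddef.h>

    void scale_omp(float *x, float a, size_t n)
    {
        #pragma omp parallel for
        for (size_t i = 0; i < n; ++i)
            x[i] = a * x[i];
    }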

Pallium answered 31/8, 2011 at 20:46 Comment(4)
I would also be interested to know if/when using OpenMP and OpenCL together is effective. OpenCL is thread-safe (with the exception of the clSetKernelArg() method), so it seems like there may be room to take advantage of both technologies.Hyalo
If your definition of "effective" includes readability and evolvability, then the answer has to be "no". OpenCL cannot be bolted onto existing code in the way that OpenMP can, and it has substantial syntactic bloat compared to OpenMP. On the other hand, writing OpenMP code that makes efficient use of a memory hierarchy is usually much less readable than the same in OpenCL.Agitato
A good scientific comparison between OpenMP and OpenCL can be found here: Comparison of OpenMP & OpenCL Parallel Processing Technologies by Krishnahari Thouti and S.R. SatheHagride
Your question needs to be narrowed down a bit. Are you looking for a comparison of GPU vs multi-threaded CPU, or OpenMP vs OpenCL? To compare both languages, they really need to be running on the same architecture. Otherwise, it's apples and oranges.Gowen

The benchmarks I've seen indicate that OpenCL and OpenMP running on the same hardware are usually comparable in performance, or that OpenMP is slightly faster. However, I haven't seen any benchmarks I would consider conclusive, because they mostly lack detailed explanations of their methodology. That said, there are a few useful things to consider:

  • OpenCL will always have some extra overhead when compiling the kernel at runtime. Any benchmark either needs to list this time separately, use pre-compiled native kernels, or run long enough that the kernel compilation is insignificant.

  • OpenCL implementations will vary. GPU vendors like NVidia have no incentive to make sure their CPU-based OpenCL implementation is as fast as possible. None of the OpenCL implementations are likely to be as mature as a good OpenMP implementation.

  • The OpenCL spec says basically nothing about how CPU-based implementations use threading under the hood, so any discussion of whether the threading is relatively lightweight or heavyweight will necessarily be implementation-specific.

  • When you're running OpenCL code on a CPU, your work items don't have to be tiny and numerous. You can break down the problem in the same way you would for OpenMP; see the sketch after this list.
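
For example (a sketch only; the kernel name and strided chunking are my invention), a CPU-friendly kernel can launch one work item per core and let each one walk a large slice of the data, just as an OpenMP thread would:

    __kernel void partial_sums(__global const float *x,
                               __global float *partial,
                               const uint n)
    {
        size_t id     = get_global_id(0);    /* 0 .. cores-1 */
        size_t stride = get_global_size(0);
        float acc = 0.0f;
        for (size_t i = id; i < n; i += stride)
            acc += x[i];
        partial[id] = acc;   /* the host adds up the few partial sums */
    }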

Even if OpenCL has a bit more overhead, there may be other reasons to prefer it.

  • Obviously, if your code can make good use of a GPU, you will want to have an OpenCL implementation. OpenCL performance on a CPU may be good enough that it isn't worth it to also maintain an OpenMP fallback code path for users who don't have powerful GPUs.

  • A good CPU-based OpenCL implementation means that you will automatically get the benefit of whatever instruction set extensions the CPU and OpenCL implementation support. With OpenMP, you have to do extra work to make sure that your executable includes both SSEx and AVX code paths.

  • OpenCL vector primitives can help you express some explicit parallelism without the portability and readability sacrifices you get from using SSE intrinsics (see the sketch after this list).
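
To illustrate that last point, here is the same 4-wide multiply-add written both ways (a sketch, not production code; the intrinsics loop assumes n is a multiple of 4):

    /* OpenCL kernel using the built-in float4 vector type; the
       operators work element-wise and compile wherever an OpenCL
       implementation exists. */
    __kernel void madd4(__global float4 *a,
                        __global const float4 *b,
                        __global const float4 *c)
    {
        size_t i = get_global_id(0);
        a[i] = a[i] * b[i] + c[i];
    }

    /* The equivalent inner loop with SSE intrinsics is x86-only. */
    #include <stddef.h>
    #include <xmmintrin.h>

    void madd4_sse(float *a, const float *b, const float *c, size_t n)
    {
        for (size_t i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);
            __m128 vb = _mm_loadu_ps(b + i);
            __m128 vc = _mm_loadu_ps(c + i);
            _mm_storeu_ps(a + i, _mm_add_ps(_mm_mul_ps(va, vb), vc));
        }
    }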

Wales answered 31/8, 2011 at 21:44 Comment(5)
I wonder whether the user-without-GPU case really is that practical. Instead of maintaining OpenMP fallback code, you'd have to maintain OpenCL fallback code, as CPUs won't support 2D local work sizes, have problems with __local memory, and whatnot. Not much gained there if you have optimized GPU kernels.Prendergast
Why do you think that CPU-based implementations can't support 2D local work group sizes or local memory? On a CPU, cache memory is managed by hardware instead of software, so the only difference between global and local memory would be whether locking is needed to access it. The work group sizes would amount to scheduler hints for NUMA systems. Yes, a lot of the optimization effort put into OpenCL code to make it run well on a GPU won't affect performance on the CPU, but it won't break the code, either. Any kernel that will run on a GPU can run on a compliant CPU implementation.Wales
@user57368: Just an addition: optimizations such as explicit use of local memory make sense on a GPU. On CPUs, these optimizations may negatively affect performance, at least when using the Intel OpenCL implementation for x86 CPUs.Ding
@user57368: Maybe the Intel SDK works that way. Apple's doesn't. CL_DEVICE_MAX_WORK_ITEM_SIZES for my Core2Duo under Mac OS 10.6 was {1,1,1}, under 10.7 it is at least {1024,1,1}, but still not 2D. Also, any kernel with more than one local variable would make the compiler give up under 10.6 - I would call that breaking the code.Prendergast
@w.m You can have OpenCL code optimized for CPUs (not using local memory, and so on) with performance comparable to OpenMP. As the kernels are usually small, you can still share the host code and use it to achieve greater performance if the system has a usable GPU, switching just the kernel (to one optimized for the GPU) and a few arguments such as the workgroup size. This argues against keeping fallback code - a great part of the effort is in the host code, and that is shared.Surprisal

I have a program which has the option to use either OpenCL or OpenMP for some key bottlenecks, basically adding vectors and performing reductions.

In my case, OpenMP takes 13 seconds where OpenCL takes 10 seconds, on the CPU (an Intel i5).

The fastest configuration for me so far is to add the vectors with OpenCL on the GPU and do the reductions with OpenMP, getting me down to 7 seconds. Doing the reduction in the OpenCL kernel, on the GPU, takes a total of 8 seconds.
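
A minimal sketch of that split (function and buffer names are invented): the element-wise add runs as an OpenCL kernel, while the reduction over the result read back from the device stays on the host with OpenMP:

    /* Device side: plain element-wise vector add. */
    __kernel void vec_add(__global const float *a,
                          __global const float *b,
                          __global float *out)
    {
        size_t i = get_global_id(0);
        out[i] = a[i] + b[i];
    }

    /* Host side, after reading `out` back from the device. */
    #include <stddef.h>

    double reduce_sum(const float *out, size_t n)
    {
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (size_t i = 0; i < n; ++i)
            sum += out[i];
        return sum;
    }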

So from my experience, I would say it depends on the use case and on how much you can optimize your OpenCL kernel.

Dermot answered 5/2, 2012 at 20:35 Comment(2)
What do you mean here exactly by "reduction"?Stylite
@Stylite A "reduction" is when you take lots of elements (say a 10,000-length array, a[0] through a[9999]) and then process them down to a smaller result. For example: figuring out the "maximum" number in the array, or the value of a[0] + a[1] + a[2] + ... a[9999]. The most common reductions are "Max", "Min", and "Add", but the concept of processing lots and lots of data in parallel to output a single number (or at the very least: fewer numbers that represent the whole) is a common "pattern" in parallel programming.Agrarian
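
For instance, a "Max" reduction written with OpenMP might look like this (illustrative only; the min/max reduction operators require OpenMP 3.1 or later):

    #include <stddef.h>

    /* Finds the largest element of a[0..n-1]; each thread reduces
       its share, and OpenMP combines the per-thread maxima. */
    float reduce_max(const float *a, size_t n)
    {
        float m = a[0];
        #pragma omp parallel for reduction(max:m)
        for (size_t i = 1; i < n; ++i)
            if (a[i] > m)
                m = a[i];
        return m;
    }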
