I have had some questions hanging in the air without an answer for a few days now. They arose because I have an OpenMP and an OpenCL implementation of the same problem. The OpenCL version runs perfectly on a GPU, but has 50% less performance when run on a CPU (compared to the OpenMP implementation). A post already deals with the difference between OpenMP and OpenCL performance, but it doesn't answer my questions. At the moment I face these questions:
1) Is it really that important to have "vectorized kernel" (in terms of the Intel Offline Compiler)?
There is a similar post, but I think my question is more general.
As I understand it: a kernel that is not "vectorized" in this sense does not necessarily lack vector/SIMD instructions in the compiled binary. I checked the assembly code of my kernels, and there are plenty of SIMD instructions. A vectorized kernel means that, by using SIMD instructions, 4 (SSE) or 8 (AVX) OpenCL "logical" threads are executed within one CPU thread. This can only be achieved if ALL your data is stored consecutively in memory. But who has such perfectly sorted data?
So my question would be: Is it really that important to have your kernel "vectorized" in this sense?
Of course it gives a performance improvement, but if most of the computation-intensive parts of the kernel are already done by vector instructions, you might get near-"optimal" performance anyway. I think the answer to my question lies in memory bandwidth: vector registers are probably a better fit for efficient memory access. In that case the kernel arguments (pointers) have to be vectorized.
2) If I allocate data in local memory on a CPU, where is it allocated? OpenCL reports the L1 cache as local memory, but it is clearly not the same kind of memory as a GPU's local memory. If it is stored in RAM/global memory, then there is no point copying data into it. If it were in the cache, some other process might flush it out... so that doesn't make sense either.
3) How are "logical" OpenCL threads mapped to real CPU software/hardware (Intel HTT) threads? If I have short-running kernels and the kernels are forked like in TBB (Threading Building Blocks) or OpenMP, then the fork overhead will dominate.
4) What is the thread fork overhead? Are new CPU threads forked for every "logical" OpenCL thread, or are the CPU threads forked once and reused for more "logical" OpenCL threads?
I hope I'm not the only one interested in these tiny things, and some of you might know bits of these problems. Thank you in advance!
UPDATE
3) At the moment OpenCL's overhead is more significant than OpenMP's, so heavy kernels are required for efficient runtime execution. In Intel's OpenCL a work-group is mapped to a TBB thread, so one virtual CPU core executes a whole work-group (or thread block). A work-group is implemented with 3 nested for loops, where the innermost loop is vectorized, if possible. So you could imagine it something like:
#pragma omp parallel for
for(wg = 0; wg < get_num_groups(2)*get_num_groups(1)*get_num_groups(0); wg++) {
    for(k = 0; k < get_local_size(2); k++) {
        for(j = 0; j < get_local_size(1); j++) {
            #pragma simd
            for(i = 0; i < get_local_size(0); i++) {
                ... work-load ...
            }
        }
    }
}
If the innermost loop can be vectorized, it steps in SIMD-width strides:
for(i=0; i<get_local_size(0); i+=SIMD) {
4) Every TBB thread is forked once during the OpenCL execution and reused afterwards. Every TBB thread is tied to a virtual core, i.e. there is no thread migration during the computation.
I have also accepted @natchouf-s's answer.