CUDA 5.5 RC with g++ 4.7 and 4.8: __int128 build errors
I'm trying to compile some code with the CUDA SDK 5.5 RC and g++ 4.7 on Mac OS X 10.8. If I understand correctly, CUDA 5.5 should work with g++ 4.7. Looking at /usr/local/cuda/include/host_config.h, it should even work with g++ 4.8.
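The gate in host_config.h looks roughly like this (a paraphrased sketch, not the verbatim file; the cutoff is inferred from what I saw in the header):

// Paraphrased sketch of the host-compiler gate in
// /usr/local/cuda/include/host_config.h -- not the verbatim contents.
// gcc is only rejected from 4.9 onwards, so 4.7 and 4.8 pass the check.
#if defined(__GNUC__)
#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ > 8)
#error -- unsupported GNU version!
#endif
#endif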

Concerning g++ 4.8: I tried to compile the following program:

// example.cu
#include <stdio.h>
int main(int argc, char** argv) {
  printf("Hello World!\n");
  return 0;
}

But it fails:

$ nvcc example.cu -ccbin=g++-4.8
/usr/local/Cellar/gcc48/4.8.1/gcc/include/c++/4.8.1/cstdlib(178): error: identifier "__int128" is undefined
/usr/local/Cellar/gcc48/4.8.1/gcc/include/c++/4.8.1/cstdlib(179): error: identifier "__int128" is undefined
2 errors detected in the compilation of "/tmp/tmpxft_00007af2_00000000-6_example.cpp1.ii".

The same program compiles and runs with g++ 4.7:

$ nvcc example.cu -ccbin=g++-4.7
$ ./a.out 
Hello World!

But if I include <limits>...

// example_limits.cu
#include <stdio.h>
#include <limits>
int main(int argc, char** argv) {
  printf("Hello World!\n");
  return 0;
}

... even g++ 4.7 fails. The build log is located here: https://gist.github.com/lysannschlegel/6121347
There you can also find a few other errors; I'm not entirely sure whether they are all related to the missing __int128.
It could well be that other standard library includes break the build on g++ 4.7 as well; <limits> is just the one I tripped over.

I also tried g++ 4.5 because I happen to have it on my machine as well (you can never have too many compiler versions, can you?), and it works.

Can I expect that this will be fixed in the release of CUDA 5.5? (I hope NVIDIA doesn't simply go back to supporting gcc only up to version 4.6.)
Is there a way to work around this in the meantime?

UPDATE:

As @talonmies points out below, this is not strictly a bug in CUDA 5.5 on MacOS, since gcc is not officially supported on MacOS. But because many third-party libraries don't build properly with the supported toolchains, clang or llvm-gcc (llvm-gcc dating from 2007...), there is still a need to make gcc work. gcc up to 4.6 should work fine (I only tested 4.5).
You can make gcc 4.7 work using the trick pointed out by @BenC in the comments:

$ cat compatibility.h 
#undef _GLIBCXX_ATOMIC_BUILTINS
#undef _GLIBCXX_USE_INT128

$ nvcc example_limits.cu -ccbin=g++-4.7 --pre-include compatibility.h

nvcc with gcc 4.8 still chokes on __int128 in cstdlib. I guess cstdlib is included before --pre-include files are included.
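One way to check which guard those declarations in cstdlib sit behind is to inspect the header directly (the path is taken from the error message above):

$ grep -n '__int128' /usr/local/Cellar/gcc48/4.8.1/gcc/include/c++/4.8.1/cstdlib

If the hits around lines 178-179 turn out not to be guarded by _GLIBCXX_USE_INT128, that would explain why the pre-include undef has no effect there.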

Improvement asked 31/7, 2013 at 12:23. Comments (5):
Have you tried adding #undef _GLIBCXX_ATOMIC_BUILTINS and #undef _GLIBCXX_USE_INT128? This is a known CUDA bug for GCC 4.8, and packagers/developers need to patch the CUDA files or their projects (see here for instance). – Testudo
@Testudo Where should I put these undefs? I tried at the end of my cuda/include/host_config.h as in the patch you mentioned, but it doesn't help. When you say it's a known bug, do you mean it occurs on other platforms as well? – Improvement
It does on Linux. There have been problems with GCC 4.7 and 4.8. As @talonmies pointed out, there is no guarantee, but so far the patches to fix this issue remain quite simple (on Linux, at least). A less invasive solution (tested with GCC 4.7) is to add those lines to a separate header that you include with --pre-include your_header.h during compilation (like this). I have not tried GCC 4.8 yet, but I've never had any problem with GCC 4.7 and CUDA 5.0/5.5 with that kind of fix so far. – Testudo
Also, I think that CUDA 5.5 has GCC 4.7 support now (for Linux at least). – Testudo
The --pre-include flag does the trick for gcc 4.7. It doesn't solve the problem in cstdlib of gcc 4.8, though. – Improvement

You need to read the MacOS getting started guide more closely:

To use CUDA on your system, you will need the following installed:

‣ CUDA-capable GPU

‣ Mac OSX v. 10.7.5 or later

‣ The gcc or Clang compiler and toolchain installed using Xcode

‣ NVIDIA CUDA Toolkit (available at http://developer.nvidia.com/cuda-downloads)

That means precisely what it says: use the compiler(s) that ships with Xcode. Don't use a self-built gcc version, because it isn't guaranteed to work, even if that compiler version is listed as supported on other platforms and even if trivial code appears to compile correctly.
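For reference, a minimal sketch of what that looks like in practice; the compiler paths assume a default Xcode command-line tools install, and clang support is as stated in the quoted requirements:

$ nvcc example.cu -ccbin=/usr/bin/clang      # Xcode's clang
$ nvcc example.cu -ccbin=/usr/bin/llvm-g++   # Xcode's llvm-gcc toolchain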

Mistiemistime answered 31/7, 2013 at 12:42. Comments (2):
Although this is technically correct, it doesn't solve my problem. I've had problems compiling third-party libs (such as OpenCV) with nvcc and Xcode's llvm-gcc for a long time, which made me move to a "real" gcc that isn't 6 years old and that understands newer gcc command-line options. But it seems that moving even further ahead is not possible at this time. – Improvement
@LysannSchlegel: I feel your pain. I basically gave up trying to do serious development on OS X for this reason. The lack of a proper modern C++ compiler and Fortran compiler is a deal breaker for me. NVIDIA won't provide support or accept bug reports against code built with anything other than the supported system compilers. Rock, meet hard place... – Mistiemistime
