Is it possible to run CUDA on AMD GPUs?

I'd like to extend my skill set into GPU computing. I am familiar with raytracing and realtime graphics (OpenGL), but the next generation of graphics and high-performance computing seems to be in GPU computing, or something like it.

I currently use an AMD HD 7870 graphics card on my home computer. Could I write CUDA code for this card? (My intuition is no, but since NVIDIA released the compiler binaries I might be wrong.)

A second, more general question: where do I start with GPU computing? I'm certain this is an often-asked question, but the best answer I saw was from '08, and I figure the field has changed quite a bit since then.

Mischiefmaker answered 10/10, 2012 at 21:2 Comment(3)
Check here: developer.nvidia.com/cuda-gpus (Meatman)
fudzilla.com/news/graphics/40199-otoy-allows-cuda-to-run-on-amd (Predisposition)
I come back to this question, 12 years later, having heard the news: phoronix.com/review/radeon-cuda-zluda. ZLUDA is a mostly drop-in replacement for CUDA on AMD GPUs, and it's now open source. (Mischiefmaker)

Nope, you can't use CUDA for that. CUDA is limited to NVIDIA hardware. OpenCL would be the best alternative.

Khronos itself has a list of resources, as does the StreamHPC.com website.

Note that at this time there are several initiatives to translate/cross-compile CUDA to different languages and APIs. One such example is HIP. Note, however, that this still does not mean that CUDA runs on AMD GPUs.

Selector answered 10/10, 2012 at 21:3 Comment(5)
Like I figured. Any advice on where to get started on GPGPU programming with OpenCL? (Mischiefmaker)
Check out the OpenCL Programming Guide. One of the awesome things about OpenCL vs. CUDA is the much better tooling support. (Weymouth)
Although it was not possible before, it is now possible to run CUDA code on AMD hardware. The approach is to convert it to the HIP language; see my answer below for the links. (Oyez)
That still doesn't mean you're running CUDA on an AMD device. It merely means you convert CUDA code into C++ code which uses the HIP API. It also doesn't support all features. I wouldn't classify this as a CUDA workflow for AMD devices. (Selector)
@Selector I think it was mentioned in the comment that you need to convert it to an intermediate language. As for features, please mention which broad feature is not supported; I think most of them are. However, some platform-specific tweaking is needed if you need extra performance. The docs say the performance is equal to any non-optimized/native CUDA code. (Oyez)

You can run NVIDIA® CUDA™ code on Mac, and indeed on OpenCL 1.2 GPUs in general, using Coriander. Disclosure: I'm the author. Example usage:

cocl cuda_sample.cu
./cuda_sample

Result: (screenshot of the sample's output omitted)
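The contents of cuda_sample.cu aren't shown above; a minimal source that would exercise the toolchain might look like this (a sketch of my own, not the actual Coriander sample; the add_one kernel is hypothetical, and Coriander supports only a subset of CUDA C++):

```shell
# Write a minimal CUDA source that cocl could compile.
cat > cuda_sample.cu <<'EOF'
#include <cstdio>

// Kernel: add 1.0f to each element of the buffer.
__global__ void add_one(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main() {
    // ... cudaMalloc a buffer, launch add_one<<<blocks, threads>>>,
    // cudaMemcpy the result back, and print it ...
    return 0;
}
EOF
grep -c '__global__' cuda_sample.cu   # sanity check: one kernel defined
# cocl cuda_sample.cu && ./cuda_sample   # requires Coriander installed
```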

Lignify answered 9/6, 2017 at 2:9 Comment(0)

Yup. :) You can use Hipify to convert CUDA code very easily to HIP code, which can be compiled and run on both AMD and NVIDIA hardware pretty well. Here are some links:

GPUOpen: a very cool site by AMD that has tons of tools and software libraries to help with different aspects of GPU computing, many of which work on both platforms

HIP GitHub repository, which shows the process to hipify

HIP GPUOpen Blog

Update 2021: AMD changed the website link; go to the ROCm website:

https://rocmdocs.amd.com/en/latest/

Oyez answered 4/8, 2016 at 16:53 Comment(0)

You can't use CUDA for GPU programming, as CUDA is supported by NVIDIA devices only. If you want to learn GPU computing, I would suggest you start with CUDA and OpenCL simultaneously; that would be very beneficial for you. As for CUDA, you can use mCUDA, which doesn't require an NVIDIA GPU.

Antecedency answered 18/10, 2012 at 9:19 Comment(0)

I think it is going to be possible soon on AMD FirePro GPUs; see the press release here. Support for the development tools is coming in Q1 2016:

An early access program for the "Boltzmann Initiative" tools is planned for Q1 2016.

Storz answered 18/12, 2015 at 18:11 Comment(1)
Looks like that press release was talking about hcc (roughly speaking, AMD's analogue of nvcc) and HIP (which defines and implements a common API for use on both AMD and NVIDIA hardware, basically as a header-only library on top of CUDA and a whole runtime library for hcc). There's a relevant link farm in this other answer. (Osvaldooswal)

As others have already stated, CUDA can only be directly run on NVIDIA GPUs. As also stated, existing CUDA code could be hipify-ed, which essentially runs a sed script that changes known CUDA API calls to HIP API calls. Then the HIP code can be compiled and run on either NVIDIA (CUDA backend) or AMD (ROCm backend) GPUs.
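As a toy illustration of that textual transformation (this is sed mimicking the kind of rename the real hipify-perl/hipify-clang tools perform, not the tools themselves; the variable names are invented):

```shell
# A one-line stand-in for hipify's renaming: prefix cuda* -> hip*.
cuda_line='cudaMalloc(&d_a, size); cudaMemcpy(d_a, h_a, size, cudaMemcpyHostToDevice);'
hip_line=$(printf '%s\n' "$cuda_line" | sed 's/cuda/hip/g')
echo "$hip_line"
# -> hipMalloc(&d_a, size); hipMemcpy(d_a, h_a, size, hipMemcpyHostToDevice);
```

The real tools do considerably more (they parse the source and handle kernel-launch syntax), but the spirit is this kind of API-name substitution.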

The new piece of information I'd like to contribute is that if someone doesn't want to hipify their existing CUDA code (i.e., change all CUDA API calls to HIP API calls), there is another option: simply add (and include) a header file that redefines the CUDA calls as HIP calls. For example, a simple vector addition code might use the following header file:

#include "hip/hip_runtime.h"

#define cudaMalloc hipMalloc
#define cudaMemcpy hipMemcpy
#define cudaMemcpyHostToDevice hipMemcpyHostToDevice
#define cudaMemcpyDeviceToHost hipMemcpyDeviceToHost
#define cudaFree hipFree

...where the main program would include the header file:

#include "/path/to/header/file"

int main(){

    ...

}

Compilation would, of course, require using nvcc (as normal) on an NVIDIA GPU and hipcc on an AMD GPU.
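That vendor-specific compile step can be wrapped in a tiny build helper, e.g. (a sketch: the build function, the GPU_VENDOR variable, and the file names are my own invention, and the commands are echoed here rather than executed, since neither compiler is assumed to be installed):

```shell
# Echo the compile command appropriate for the target vendor.
build() {
    case "${GPU_VENDOR:-nvidia}" in
        nvidia) echo "nvcc -o vec_add $1" ;;   # NVIDIA GPU: plain nvcc
        amd)    echo "hipcc -o vec_add $1" ;;  # AMD GPU: hipcc (ROCm)
    esac
}
GPU_VENDOR=amd build vec_add.cu   # -> hipcc -o vec_add vec_add.cu
```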

Regarding where to get started with GPU computing (in general), I would recommend starting with CUDA, since it has the most documentation, example codes, and user experiences available via a Google search. The good news is, once you know how to program in CUDA, you essentially already know how to program in HIP. :)

Skippie answered 4/11, 2022 at 2:40 Comment(0)

These are some basic details I could find.

Linux

ROCm supports the major ML frameworks like TensorFlow and PyTorch with ongoing development to enhance and optimize workload acceleration.

It seems the support is only for Linux systems (https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).

ROCm is based on HIP:

Heterogeneous-Computing Interface for Portability (HIP) is a C++ dialect designed to ease conversion of CUDA applications to portable C++ code. It provides a C-style API and a C++ kernel language. The C++ interface can use templates and classes across the host/kernel boundary. The HIPify tool automates much of the conversion work by performing a source-to-source transformation from CUDA to HIP. HIP code can run on AMD hardware (through the HCC compiler) or NVIDIA hardware (through the NVCC compiler) with no performance loss compared with the original CUDA code.

The TensorFlow ROCm port is https://github.com/ROCmSoftwarePlatform/tensorflow-upstream, and its Docker container is https://hub.docker.com/r/rocm/tensorflow

Mac

This is supported for macOS 12.0+ (as per their claim):

Testing conducted by Apple in October and November 2020 using a production 3.2GHz 16-core Intel Xeon W-based Mac Pro system with 32GB of RAM, AMD Radeon Pro Vega II Duo graphics with 64GB of HBM2, and 256GB SSD.

You can now leverage Apple’s tensorflow-metal PluggableDevice in TensorFlow v2.5 for accelerated training on Mac GPUs directly with Metal.

Update (2024):

AMD Quietly Funded A Drop-In CUDA Implementation Built On ROCm: It's Now Open-Source

Enrika answered 12/9, 2021 at 9:23 Comment(0)

Last year, as part of the ROCm initiative, AMD launched an interesting open-source project named GPUFort.

While it's (obviously) not a way to simply "run CUDA code on AMD GPUs", it helps developers move away from CUDA.

Quick description here: https://www.phoronix.com/news/AMD-Radeon-GPUFORT

As it's open source, you can find it on GitHub: https://github.com/ROCmSoftwarePlatform/gpufort

Cerebration answered 19/11, 2022 at 9:50 Comment(0)

As of 2019-10-10 I have NOT tested it, but there is the "GPU Ocelot" project:

http://gpuocelot.gatech.edu/

which, according to its advertisement, tries to compile CUDA code for a variety of targets, including AMD GPUs.

Picrate answered 10/10, 2019 at 13:48 Comment(1)
If you read a bit more at the link you posted, you will see that development of Ocelot stopped in 2012, and the AMD backend was never actually finished. This is in no way a viable option in 2019 (and it barely was in 2011). (Nashner)
