Edit (6/1/12): 2012 edition now available here
A bit-tech interview with AMD Fusion marketing managers was recently posted on Slashdot. The interviewees predicted the death of CUDA, discussed the importance of GPU acceleration for consumer applications, and had no comment on developing ARM-based Fusion products. I wasn't very impressed with many of the answers. My opinions on what was said about OpenCL and the demise of CUDA are after the break; I'd like to comment on the role of Fusion in another post.
I've posted a little on the differences between developing with CUDA and OpenCL, but that post was specifically about my experience porting a small benchmark. If I were developing a GPU compute application from scratch, there would be many higher-level points to weigh before deciding whether to build the app on CUDA or OpenCL.
For instance, a developer needs to consider the overall friendliness of both the available tools and the API itself. In terms of the core APIs, CUDA is far ahead of OpenCL in developer friendliness. The OpenCL API is more akin to the lower-level CUDA driver API than to the runtime API most developers actually use. OpenCL is also missing support for C++ features like templates and function pointers, which are available through CUDA. CUDA 4.0 adds even more, such as unified virtual addressing and peer-to-peer GPU memory access. OpenCL has a lot of catching up to do as an API, and I am skeptical that the Khronos Group can move at the same speed as NVIDIA.
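To make the contrast concrete, here is a minimal sketch of a trivial SAXPY launch with the CUDA runtime API (my own illustration, not code from the interview). The equivalent OpenCL host code must first create platform, device, context, command-queue, program, and kernel objects, and set each kernel argument by index, before it can enqueue any work.

```cuda
#include <cuda_runtime.h>
#include <vector>

// y = a * x + y, one element per thread.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx, *dy;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // One line to configure and launch; the runtime API handles context
    // creation, module loading, and argument marshalling behind the scenes.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```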
The presence of third-party libraries is also extremely important. Libraries like Thrust give a very tangible productivity boost to CUDA development. As far as I know, there are no OpenCL equivalents to libraries like CUFFT and CUDPP.
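As a quick illustration of that productivity boost, here is a minimal Thrust sketch (my own example): sorting a million integers on the GPU through an STL-like interface, with no hand-written kernels and no explicit memory management.

```cuda
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <cstdlib>

int main() {
    // Generate a million random integers on the host.
    thrust::host_vector<int> h(1 << 20);
    for (size_t i = 0; i < h.size(); ++i) h[i] = std::rand();

    thrust::device_vector<int> d = h;  // copies the data to the GPU
    thrust::sort(d.begin(), d.end());  // parallel sort on the device
    h = d;                             // copies the sorted result back
    return 0;
}
```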
If AMD really is as serious about OpenCL as they claim in the interview, they need to make a serious investment in software. While vendor lock-in is a legitimate concern with CUDA, it by no means guarantees CUDA's demise. As the GPU compute development environment currently stands, choosing the AMD and OpenCL path requires many sacrifices. The claim that AMD is "betting everything on it" simply does not show in practice.
Very well said. If we compare compute languages to shading languages, we can argue that GLSL is more widely used than NVIDIA's Cg, and the trend is continuing. Applying this to compute languages suggests OpenCL will win out over CUDA. In the long term, that is a possibility, but I don't see it happening in the next few years. Besides all the excellent points you've made, NVIDIA is more serious about CUDA than they are about Cg (not that I'm putting down Cg). More importantly, most graphics developers do not control the graphics hardware their code runs on, but many compute developers do, so a single-vendor solution is more tolerable in the compute world.
The good news for us developers is that the competition leads to better languages and tools.
Those are some great points, Patrick. I wish I had numbers to back this up, but I'm fairly confident that the revenue stream from GPU compute comes from HPC users buying Teslas, not from GPU compute-accelerated consumer apps driving GeForce purchases. AMD ought to realize, then, that the FireStream line, not their Fusion APUs, is what they should be pushing in this market.