Why do we use CPUs for ray tracing instead of GPUs?

After doing some research on rasterisation and ray tracing, I have discovered that there is not much information available on the internet about how CPUs are used for ray tracing. I came across an article about Pixar and how they pre-rendered Cars 2 on the CPU, which took them 11.5 hours per frame. Would a GPU not have rendered this faster with the same image quality?

http://gizmodo.com/5813587/12500-cpu-cores-were-required-to-render-cars-2
https://www.engadget.com/2014/10/18/disney-big-hero-6/
http://www.firstshowing.net/2009/michael-bay-presents-transformers-2-facts-and-figures/

Kathie asked 25/6, 2016 at 14:49
Please provide links to the articles you refer to when asking that question, via an edit of the OP. – Hypotension
en.wikipedia.org/wiki/Ray-tracing_hardware and any article on raytracing vs rasterization will explain why in detail. – Pittance
12500 CPU cores? I think they are actually talking about GPU. – Fractionize
@appzYourLife The title is "12,500 CPU Cores Were Required to Render Cars 2", so that suggests otherwise... But that is why it confused me. – Kathie
Nope, they mean CPU cores (not all in one box, of course!). – Yuletide

I'm one of the rendering software architects at a large VFX and animated feature studio with a proprietary renderer (not Pixar, though I was once the rendering software architect there as well, long, long ago).

Almost all high-quality rendering for film (at all the big studios, with all the major renderers) is CPU only. There are a bunch of reasons why this is the case. In no particular order, here are some of the really compelling ones, to give you a flavor of the issues:

  • GPUs only go fast when everything is in memory. The biggest GPU cards have, what, 12GB or so, and that memory has to hold everything. Well, we routinely render scenes with 30GB of geometry that reference 1TB or more of texture. Can't load that into GPU memory; it's literally two orders of magnitude too big. So GPUs are simply unable to deal with our biggest (or even average) scenes. (With CPU renderers, we can page stuff from disk whenever we need it. GPUs aren't good at that.)

  • Don't believe the hype: ray tracing with GPUs is not an obvious win over the CPU. GPUs are great at highly coherent work (doing the same things to lots of data at once). Ray tracing is very incoherent (each ray can go a different direction, intersect different objects, shade different materials, access different textures), so this access pattern degrades GPU performance very severely. It's only very recently that GPU ray tracing could match the best CPU-based ray tracing code, and even where it has surpassed it, it's not by much; certainly not enough to throw out all the old code and start fresh with buggy, fragile code for GPUs. And the biggest, most expensive scenes are the ones where GPUs are only marginally faster. Being lots faster on the easy scenes is not really important to us. (A toy sketch of this divergence effect follows the list.)

  • If you have 50 or 100 man-years of production-hardened code in your CPU-based renderer, you just don't throw it out and start over in order to get a 2x speedup. Software engineering effort, stability, and so on are more important and a bigger cost factor.

  • Similarly, if your studio has an investment in a data center holding 20,000 CPU cores, all in the smallest, most power- and heat-efficient form factor you can get, that's also a sunk-cost investment you don't just throw away. Replacing them with new machines containing top-of-the-line GPUs vastly increases the cost of your render farm, and those machines are bigger and produce more heat, so the farm literally might not fit in your building.

  • Amdahl's Law: The actual "rendering" per se is only one stage in generating the scenes, and GPUs don't help with the other stages. Let's say it takes 1 hour to fully generate and export the scene to the renderer, and 9 hours to "render", and out of those 9 hours, an hour is spent reading textures, volumes, and other data from disk. So out of the total 10 hours the user experiences as rendering (push the button until the final image is ready), only 8 hours can potentially be sped up with GPUs. Even if the GPU were 10x as fast as the CPU for that part, you go from 10 hours to 1 + 1 + 0.8 = 2.8 hours, so a 10x GPU speedup only translates to roughly a 3.5x actual gain. If the GPU were 1,000,000x faster than the CPU for ray tracing, you'd still have 1 + 1 + (almost nothing), which is only a 5x speedup. (The arithmetic is worked through in the second sketch below.)
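
To make the coherence point from the second bullet more concrete, here is a toy model of SIMD divergence in Python. It is purely an illustration I'm adding, not real GPU code, and the 32-ray warp size and 8 materials are made-up numbers: a warp executes its rays in lock-step, so it effectively pays for every distinct shading branch that any of its rays takes.

    # Toy model of SIMD divergence: a warp of rays runs in lock-step, so it
    # pays for every distinct shading branch taken by any ray in the warp.
    def warp_shading_passes(materials_hit):
        return len(set(materials_hit))

    coherent = ["wall"] * 32                               # primary rays all hitting the same surface
    incoherent = [f"material_{i % 8}" for i in range(32)]  # bounced rays scattered over 8 materials

    print(warp_shading_passes(coherent))    # 1 -> full SIMD utilization
    print(warp_shading_passes(incoherent))  # 8 -> 8x the shading passes for the same 32 rays

The same 32 rays cost roughly eight times as much shading work once they diverge, which is why incoherent secondary rays hurt GPU throughput far more than coherent primary rays do.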

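And here is the arithmetic from the Amdahl's Law bullet as a small Python sketch, using the same hypothetical hour figures from above:

    # Hypothetical pipeline timings from the bullet above, in hours.
    export_hours = 1.0   # generating/exporting the scene: not helped by the GPU
    io_hours     = 1.0   # reading textures, volumes, etc. from disk: not helped either
    trace_hours  = 8.0   # the ray tracing itself: the only part a GPU can speed up

    def total_hours(gpu_speedup):
        return export_hours + io_hours + trace_hours / gpu_speedup

    baseline = total_hours(1)                # 10.0 hours, all on the CPU
    for s in (10, 1_000_000):
        t = total_hours(s)
        print(f"{s}x faster tracing -> {t:.1f} h total ({baseline / t:.2f}x overall)")
    # 10x faster tracing -> 2.8 h total (3.57x overall)
    # 1000000x faster tracing -> 2.0 h total (5.00x overall)

No matter how fast the traced portion gets, the serial hour of scene export and the hour of disk I/O cap the end-to-end win at 5x.
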
But what's different about games? Why are GPUs good for games but not film?

First of all, when you make a game, remember that it's got to render in real time -- that means your most important constraint is the 60Hz (or whatever) frame rate, and you sacrifice quality or features where necessary to achieve that. In contrast, with film, the unbreakable constraint is making the director and VFX supervisor happy with the quality and look he or she wants, and how long it takes you to get that is (to a degree) secondary.

Also, with a game, you render frame after frame after frame, live in front of every user. But with film, you effectively are rendering ONCE, and what's delivered to theaters is a movie file -- so moviegoers will never know or care if it took you 10 hours per frame, but they will notice if it doesn't look good. So again, there is less of a penalty placed on those renders taking a long time, as long as they look fabulous.

With a game, you don't really know what frames you are going to render, since the player may wander all around the world and view it from just about anywhere. You can't and shouldn't try to make it all perfect; you just want it to be good enough all the time. But for a film, the shots are all hand-crafted! A tremendous amount of human time goes into composing, animating, lighting, and compositing every shot, and then you only need to render it once. Think about the economics -- once 10 days of calendar time (and salary) has gone into lighting and compositing the shot just right, the advantage of rendering it in an hour (or even a minute) versus overnight is pretty small, and not worth any sacrifice of quality or achievable complexity of the image.

ADDENDUM (2022):

The world has changed a lot since I wrote this answer in 2016! Once ray tracing acceleration was added to hardware (with NVIDIA RTX cards), ray tracing on GPUs was finally, definitively faster than ray tracing the same scene on a CPU -- for scenes of a size that fits on the GPU. And GPUs have a lot more memory than they did in 2016, so that includes a much wider range of scenes. Lots of games in 2022 use a combination of rasterization and ray tracing (when available), and probably within a couple of years there may be games that are ray traced only. And in the film world, we are all racing to get our renderers ray tracing on GPUs with full feature parity with the CPU ray tracers. But we're not quite there yet. We use GPUs more and more for various interactive uses during production, but full-complexity final frames are still rendered on the CPU. I think we're within a year or two of some portion of final frames being rendered strictly with GPU ray tracing, and probably within 5 years of nearly all final film frames being GPU ray traced (though nowhere near real-time rates).

Yuletide answered 26/6, 2016 at 0:25
My real-time dynamic Whitted-style ray tracer is much faster on the GPU than on the CPU: youtube.com/watch?v=GErl-poxmNE. That was on a GTX 580, which came out six years ago. – Charcoal
@z-boson: It's easy for a scene with a few objects and textures to be much faster with a GPU. It's very difficult when your scene has 100 million polygons and references 600GB of texture. – Yuletide
It's too bad the OP did not ask about real-time rendering, because I think real-time ray tracing on the GPU would be an interesting question. – Charcoal
For real-time ray tracing, I would definitely choose the GPU. But it would be limited to a much lower scene complexity than we typically use for film. – Yuletide
