With every passing year, game engines and hardware become more powerful. Each successive triple-A title pushes realism and detail a little further; recent examples include Far Cry 4, Horizon Zero Dawn, and Uncharted 4. Even games outside the triple-A tier, such as For Honor and PlayerUnknown's Battlegrounds, show how far realistic scenes can go. All of this detail and beauty comes down to rendering: what are the most detailed models, textures, and lighting setups a studio can implement in a game without requiring a three-thousand-dollar computer to run it? What engine or renderer should a game use, and how do these compare to the ones used for feature films? And what about the claim that games are getting closer to movies every year?
First, let's list some of the software used in games and movies. Games run on "game engines," which also handle the physics and other computations that go into running a game. Popular engines include Unity, Unreal Engine, Frostbite, and CryEngine, along with a great many proprietary engines built by developers for specific games and their needs. Film rendering software is a bit different. Its job is solely to render the frame being viewed in a 3D scene; it doesn't draw the geometry, calculate animations, or manage the relationships between objects. There are quite a few film renderers as well, including Mental Ray, Arnold, RenderMan, Mantra, KeyShot, V-Ray, Cycles, and Hyperion. Again, many studios use proprietary renderers, often based on one of the programs just mentioned.
The first major difference between rendering for games and rendering for movies is time. Games have to render their images in "real time," fast enough to give the illusion of movement. The common minimum is 30 frames per second, although many players and studios prefer the smoother appearance of 60 frames per second. This means each image in a scene has only 1/30th or 1/60th of a second to be rendered, or the game will appear laggy or jumpy. Movies are almost the opposite. While they don't have unlimited time to render, a single frame of an animation can take many hours to process. Frames rendered by Pixar for their films could take multiple days of computing to finish on a single computer.
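To make the gap concrete, here is a quick sketch of the per-frame time budgets involved. The 10-hour offline frame time is an illustrative assumption, not a measured figure:

```python
# Frame-time budgets for real-time rendering vs. an offline film render.
# The offline per-frame time below is an illustrative assumption.

def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to render one frame at a target frame rate."""
    return 1000.0 / fps

print(f"30 fps budget: {frame_budget_ms(30):.1f} ms per frame")  # ~33.3 ms
print(f"60 fps budget: {frame_budget_ms(60):.1f} ms per frame")  # ~16.7 ms

# Suppose a film frame takes 10 hours to render offline:
film_frame_ms = 10 * 60 * 60 * 1000
ratio = film_frame_ms / frame_budget_ms(30)
print(f"That offline frame takes {ratio:,.0f}x longer than a 30 fps budget")
```

Even this rough arithmetic shows why the two worlds use completely different techniques: a game has tens of milliseconds where a film renderer has hours.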
The second major difference between rendering in movies and games is the hardware being used. Games must be built to run on individual systems: one motherboard, one CPU, and one or sometimes two graphics cards. Games are designed to be playable on multiple platforms, such as laptops, home PCs, the Xbox One, and the PlayStation 4. Multiple computers cannot combine to render a game's images at the same time, and sometimes simply having more than one graphics card causes issues, depending on how the game was programmed. Movies, or at least professional-tier productions, do not use individual computers to render their frames. Production companies rent time at "render farms," which are essentially large warehouses filled with racks upon racks of CPUs and the fans needed to keep them cool. As noted above, when a single frame can take more than a day to render, no single computer is powerful enough to finish a movie in a financially viable amount of time, making render farms with thousands of processors the only real option for big productions.
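Some back-of-the-envelope math shows why a single machine is not an option. All numbers here are illustrative assumptions (a 90-minute film at 24 fps, 10 hours per frame, a hypothetical farm of 5,000 machines):

```python
# Back-of-the-envelope render-farm math. Every figure here is an
# illustrative assumption, not a real studio's numbers.

def render_days(frames: int, hours_per_frame: float, machines: int) -> float:
    """Wall-clock days to render `frames` frames spread evenly across `machines`."""
    total_hours = frames * hours_per_frame
    return total_hours / machines / 24

frames = 90 * 60 * 24      # a 90-minute film at 24 fps = 129,600 frames
hours_per_frame = 10       # assumed average offline render time

print(f"1 machine:      {render_days(frames, hours_per_frame, 1):,.0f} days")
print(f"5,000 machines: {render_days(frames, hours_per_frame, 5000):.1f} days")
```

Under these assumptions a lone workstation would need roughly 54,000 days, while a 5,000-machine farm finishes in about eleven; frames are independent of one another, so the work parallelizes almost perfectly.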
The final difference between movie renders and game renders is the output itself. In games, the image that is rendered is the image sent to your monitor; there is no separate post-processing step once a frame leaves the renderer. Movies, however, are typically rendered in layers, or passes, for the purpose of compositing them afterwards. Passes might capture diffuse color, specular color, reflections, or shadows, or organize the layering of objects in a scene. All of this saves time and money in the movie industry, because adjusting a single pass in compositing is far cheaper than re-rendering an entire sequence.
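The idea behind pass-based compositing can be sketched at the level of a single pixel. This is a deliberately minimal model (real compositors work on full image layers with many more pass types); the pass values and the combine rule below are illustrative assumptions:

```python
# A minimal sketch of pass compositing at single-pixel scale.
# Assumed combine rule: final = diffuse * shadow + specular, clamped to [0, 1].
# Real compositing packages operate on whole image layers, not lone pixels.

def composite(diffuse, specular, shadow):
    """Combine three RGB passes into one final RGB pixel."""
    return tuple(
        round(min(1.0, d * s + sp), 6)
        for d, sp, s in zip(diffuse, specular, shadow)
    )

diffuse  = (0.8, 0.4, 0.2)   # base surface color pass
specular = (0.1, 0.1, 0.1)   # highlight pass
shadow   = (0.5, 0.5, 0.5)   # shadow pass: 1.0 = fully lit, 0.0 = fully dark

print(composite(diffuse, specular, shadow))  # (0.5, 0.3, 0.2)
```

The payoff is that if a director wants the shadows lifted, only the `shadow` pass changes and the frame is re-combined in seconds, rather than re-rendered for hours.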