Rendering? Do You Really Get It?

3D rendering is the process by which a computer takes the raw information in a 3D scene (polygons, materials, and lighting) and calculates the final result. The output is usually a single image, or a series of images compiled together.

Rendering is usually the final stage of the 3D creation process, unless you bring the render into Photoshop or similar software for post-processing.

If you want to render an animation, it is exported as a video file or as a series of images that can be stitched together later. One second of animation usually contains at least 24 frames, so a one-minute animation has 1,440 frames to render. (In China, the standard is 25 frames per second, except for special requirements.)
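The frame-count arithmetic above is simple enough to sketch in code. A minimal helper (the function name `total_frames` is invented for this example):

```python
def total_frames(duration_seconds: float, fps: int = 24) -> int:
    """Number of frames needed for an animation of the given length."""
    return round(duration_seconds * fps)

print(total_frames(60))      # one minute at 24 fps -> 1440 frames
print(total_frames(60, 25))  # one minute at 25 fps -> 1500 frames
```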

Two types of rendering are generally considered: CPU rendering and GPU (real-time) rendering. The difference between the two lies in the two computer components themselves. The CPU is optimized to handle a few complex tasks very quickly, while the GPU has many cores designed to run a large number of smaller calculations in parallel. Both are essential to how a computer works, but in this article we discuss them only from the perspective of 3D rendering.

Generally, GPU rendering is much faster than CPU rendering; this is what enables modern games to run at around 60 FPS. CPU rendering is better at producing accurate results from complex lighting and texture algorithms. In modern rendering engines, however, the visual difference between the two methods is almost negligible except in the most complex scenes.

CPU rendering

CPU rendering (sometimes called “pre-rendering”) means the computer uses the CPU as the main component for its calculations. It is the technique favored by film studios and architectural visualization artists, because those fields demand accuracy when producing realistic images: complex design parameters and flexible control matter, while rendering time is a secondary concern. CPU render times vary widely. A scene with flat lighting, simple materials, and simple shapes can render in a few seconds, while a scene with complex HDRI lighting and detailed models may take hours per frame. Longer render times generally mean more realism, but beginners shouldn't chase them: if your lighting is simple and your textures are basic, rendering for a long time won't make the result much better.

An extreme example is Pixar's 2001 film Monsters, Inc. The protagonist Sulley has about 5.4 million hairs, which meant that some of his on-screen frames took up to 13 hours of rendering time each!

To overcome these long render times, many larger studios use render farms. A render farm is a large collection of high-performance computers or servers that can render multiple frames at once, or sometimes split a single image into parts rendered by different machines in the farm. This greatly reduces overall rendering time.
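As a rough sketch of how a farm might split an animation across machines, here is a round-robin assignment of frame indices to nodes. The function name and the scheme are illustrative; real farm managers use far more sophisticated scheduling:

```python
def assign_frames(frame_count: int, node_count: int) -> dict:
    """Assign frame indices to render nodes round-robin, so every
    node gets roughly the same number of frames to render."""
    jobs = {node: [] for node in range(node_count)}
    for frame in range(frame_count):
        jobs[frame % node_count].append(frame)
    return jobs

# 10 frames spread across 3 nodes:
print(assign_frames(10, 3))  # {0: [0, 3, 6, 9], 1: [1, 4, 7], 2: [2, 5, 8]}
```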

The CPU can also be used to render more advanced effects. These techniques include:

Ray tracing

Here, rays are cast from the camera through each pixel of the final image and traced as they interact with objects in the scene, modeling the paths that light would take. Ray tracing is very good at producing realistic scenes with accurate reflections and shadows, but it requires a lot of computing power. However, thanks to recent GPU advances such as NVIDIA's RTX 20-series graphics cards, ray tracing is expected to enter mainstream games through GPU rendering over the next few years.
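At the heart of every ray tracer is an intersection test between a ray and a scene object. A minimal sketch for a sphere, solving the quadratic that results from substituting the ray equation into the sphere equation (assuming a unit-length direction vector; not taken from any particular renderer):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit,
    or None on a miss. `direction` must be a unit vector; all points
    are (x, y, z) tuples."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # quadratic discriminant (a == 1 for a unit direction)
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# A ray along the z-axis hits a unit sphere centered 5 units away at t = 4.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

A full renderer runs this test (for every object type) once per pixel, then again for shadow and reflection rays, which is why ray tracing is so expensive.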

Path tracing

Path tracing calculates the final image by determining how light illuminates a point on a surface in the scene, and then how much of that light is reflected back toward the camera. It repeats this operation for every pixel in the final render. It is considered the best way to achieve realism in the final image.
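The "repeat for every pixel" idea can be illustrated with a deliberately toy Monte Carlo model: each bounce picks up some emitted light, and a random test (Russian roulette) with survival probability equal to the surface albedo decides whether the path continues. Averaging many random paths converges to the analytic answer `emitted / (1 - albedo)`. This is a one-dimensional caricature of path tracing, not a real renderer; all names are invented for the example:

```python
import random

def one_path(emitted: float, albedo: float, rng: random.Random,
             max_bounces: int = 100) -> float:
    """Follow one random light path, summing the light emitted at each
    bounce until the path is randomly absorbed (Russian roulette)."""
    total = 0.0
    for _ in range(max_bounces):
        total += emitted           # light picked up at this bounce
        if rng.random() > albedo:  # absorbed: terminate the path
            break
    return total

def pixel_value(samples: int = 100_000, emitted: float = 1.0,
                albedo: float = 0.5, seed: int = 1) -> float:
    """Average many random paths, as a path tracer does per pixel."""
    rng = random.Random(seed)
    return sum(one_path(emitted, albedo, rng) for _ in range(samples)) / samples

print(pixel_value())  # converges toward 1.0 / (1 - 0.5) = 2.0
```

The noise in a path-traced image comes from exactly this averaging: too few samples per pixel and the random estimates have not yet converged.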

Photon mapping

The computer emits “photons” (packets of light energy) from the light sources in the scene and stores where they land; rays traced from the camera then gather these stored photons to calculate the final image. This approximation saves computing power, and you can increase the number of photons to get more accurate results. The method is especially good at simulating caustics, the light patterns formed when light is refracted through a transparent surface.
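The "adjust the number of photons" trade-off comes from density estimation: the light at a shading point is approximated from the photons stored near it. A minimal 2D sketch (the names and the flat-disc estimate are illustrative simplifications):

```python
import math

def irradiance_estimate(photon_map, point, radius):
    """Approximate the light arriving at `point` by summing the power of
    stored photons within `radius` of it, divided by the disc area
    (nearest-neighbour density estimation)."""
    px, py = point
    gathered = [power for (x, y, power) in photon_map
                if (x - px) ** 2 + (y - py) ** 2 <= radius ** 2]
    area = math.pi * radius ** 2
    return sum(gathered) / area

# Tiny 2D "photon map": (x, y, power) triples deposited by tracing
# photons outward from a light source.
photons = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0), (5.0, 5.0, 1.0)]
print(irradiance_estimate(photons, (0.0, 0.0), 0.5))  # two photons gathered
```

More photons, or a smaller gather radius, sharpens the estimate at the cost of more computation, which is exactly the knob the text describes.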

Radiosity (light energy transfer)

Radiosity is similar to path tracing, except that it only simulates light paths reflected from diffuse surfaces into the camera. It also accounts for light that has already bounced off other surfaces in the scene. This makes it well suited to filling a scene with indirect lighting and simulating realistic soft shadows.
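Classic radiosity solves a linear system: each patch's brightness equals its own emission plus its reflectance times the light arriving from every other patch, weighted by geometric "form factors". A tiny iterative sketch for two facing patches (the form-factor values are made up for illustration):

```python
def solve_radiosity(emission, reflectance, form_factors, iterations=100):
    """Jacobi-style iteration of B_i = E_i + rho_i * sum_j F_ij * B_j
    until the patch brightnesses settle."""
    b = list(emission)
    n = len(b)
    for _ in range(iterations):
        b = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b

# Two facing patches: patch 0 emits light, patch 1 does not,
# yet patch 1 ends up lit by the bounce — that is indirect lighting.
emission = [1.0, 0.0]
reflectance = [0.5, 0.5]
form_factors = [[0.0, 0.5],
                [0.5, 0.0]]
print(solve_radiosity(emission, reflectance, form_factors))
```

Note that the non-emitting patch still receives brightness purely from reflected light, which is the "light fills the entire scene" behavior described above.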

GPU rendering

GPU rendering (often used for real-time rendering) means the computer uses the GPU as the main resource for its calculations. This type of rendering is common in video games and other interactive applications, where you need to render anywhere from 30 to 120 frames per second for a smooth experience. To achieve this, real-time rendering cannot use some of the advanced techniques mentioned earlier, so it relies heavily on approximations and post-processing. Other effects, such as motion blur, are used to make motion appear smoother to the eye.

Because the technology is developing rapidly, and developers keep finding computationally cheaper ways to achieve excellent results, the limitations of GPU rendering are quickly becoming history. This is why games and similar media look better with each new generation of consoles: as chipsets and developers' knowledge improve, so do the graphics.

GPU rendering is not only used in real time; it is also effective for longer renders. It is good for producing an approximation of the final render relatively quickly, so you can see how the final scene will look without waiting hours. This makes it a very useful tool in a 3D workflow while setting up lighting and textures. So if you choose GPU rendering, make sure you are proficient with lighting and textures.

Rendering engine

There are dozens of rendering engines on the market, and it can be difficult to decide which one to use. Most 3D packages ship with their own rendering engine built into their workflow. These are usually very useful for learning the basics of rendering, but compared with many of the incredible third-party rendering engines, their features and practicality may be limited.

Here are some recommended renderers:

V-Ray is a very common engine. It supports both CPU and GPU rendering, which makes it very flexible, and it can be used with Maya, Blender, and almost every other 3D suite.

Corona is another engine often used by architectural visualization artists. It is very powerful, but it only works with 3ds Max and Cinema 4D.

RenderMan was developed by Pixar and is used in all of its films, as well as by many other major movie studios. It can be used as a plug-in for Maya, or as a standalone product on Windows, Mac, and Linux.

OC and RF need no introduction here; everyone is familiar with them. But before you pick up these third-party renderers, it is best to understand the renderer that comes with your software: third-party renderers look very powerful, but they also have real limitations, and you need to weigh them carefully. You only need to learn one rendering engine well. Once you understand its workflow, you can use it to achieve any effect you want. Goodbye, everyone!
