Raytracing is a rendering algorithm that simulates how light travels through a scene: it is emitted from a light source, reflected or refracted by geometric objects with certain material properties, and finally arrives at the observer's eye, typically on the film of a camera. To minimize computational cost and render only the visible parts of a scene, the algorithm works backwards: it starts at the camera and sends a ray in the current viewing direction. If this ray hits an object, the object's material is evaluated, and the resulting color information is returned and stored. If the material is reflective or refractive, further rays are sent into the scene to compute their contributions to the final color. This process is repeated recursively until a certain traversal depth is reached.
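As a minimal sketch of this backward process, the following toy raytracer casts a primary ray from the camera, evaluates the material at the nearest hit, and recursively spawns a reflection ray until a depth limit is reached. All names here (Sphere, trace, MAX_DEPTH) are illustrative and not taken from any particular renderer:

```python
# A minimal sketch of backward (eye-based) raytracing over a toy sphere scene.
import math

MAX_DEPTH = 3  # recursion stops once this traversal depth is reached

class Sphere:
    def __init__(self, center, radius, color, reflectivity=0.0):
        self.center, self.radius = center, radius
        self.color, self.reflectivity = color, reflectivity

    def intersect(self, origin, direction):
        # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t > 0;
        # direction is assumed normalized, so the quadratic's leading coefficient is 1.
        oc = tuple(o - c for o, c in zip(origin, self.center))
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - self.radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0:
            return None
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 1e-6 else None

def trace(origin, direction, scene, depth=0):
    # Find the closest object hit by this ray.
    t, obj = min(((s.intersect(origin, direction), s) for s in scene),
                 key=lambda p: p[0] if p[0] is not None else math.inf)
    if t is None or depth >= MAX_DEPTH:
        return (0.0, 0.0, 0.0)  # background color
    point = tuple(o + t * d for o, d in zip(origin, direction))
    normal = tuple((p - c) / obj.radius for p, c in zip(point, obj.center))
    color = obj.color  # a real material shader would evaluate lights here
    if obj.reflectivity > 0:
        # Send a secondary ray to gather the reflected contribution.
        dn = 2.0 * sum(d * n for d, n in zip(direction, normal))
        refl_dir = tuple(d - dn * n for d, n in zip(direction, normal))
        refl = trace(point, refl_dir, scene, depth + 1)
        color = tuple((1 - obj.reflectivity) * c + obj.reflectivity * r
                      for c, r in zip(color, refl))
    return color

scene = [Sphere((0, 0, -5), 1.0, (1.0, 0.2, 0.2), reflectivity=0.3),
         Sphere((2, 0, -6), 1.0, (0.2, 0.2, 1.0))]
print(trace((0, 0, 0), (0, 0, -1), scene))  # one primary ray from the camera
```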
Raytracing can be combined with other techniques, such as the scanline algorithm or the rasterizer, to accelerate the detection and shading of the objects that are directly visible to the observer. In this case, the more expensive raytracing algorithm is deferred until a secondary effect actually needs to be computed.
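The following highly simplified sketch illustrates that deferred strategy. Everything in it is hypothetical: primary_visibility() merely stands in for a real scanline or rasterizer pass, and the reflection contribution is a placeholder rather than an actual secondary-ray evaluation:

```python
# A hedged sketch of deferring raytracing to secondary effects only.
def primary_visibility(width, height):
    # Stand-in for the fast scanline/rasterizer stage: yields, per pixel, the
    # object seen there (a fixed toy result instead of projected geometry).
    for y in range(height):
        for x in range(width):
            yield (x, y), {"color": (0.5, 0.5, 0.5),
                           "reflective": (x + y) % 2 == 0}

def render(width, height):
    image = {}
    for pixel, hit in primary_visibility(width, height):
        color = hit["color"]
        if hit["reflective"]:
            # Only now would the expensive raytracer be invoked, and only for
            # pixels whose material actually needs a secondary effect; a real
            # renderer would call something like trace() from the sketch above.
            color = tuple(0.5 * c for c in color)  # placeholder contribution
        image[pixel] = color
    return image

print(render(4, 4)[(0, 0)])  # (0.25, 0.25, 0.25): a "reflective" pixel
```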
While raytracing offers all of these features, it is computationally intensive, so rendering with raytracing typically takes longer than rendering with other methods.
Similarly, the lighting in your scene affects your final render. A point light, for example, emits light from a single point and simulates the effect of a light bulb; it produces hard-edged shadows and hot spots. Directional lights, by contrast, provide uniform lighting without hot spots and simulate outdoor sunlight.
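A small sketch of the shading difference between the two light types, assuming simple Lambertian (N·L) shading; the inverse-square falloff and the function names are illustrative choices, not a specific renderer's lighting model:

```python
# Point light: per-point direction and distance falloff (hot spot).
# Directional light: constant direction and intensity (uniform, sun-like).
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def point_light_intensity(light_pos, surface_pos, normal, power=10.0):
    # Direction varies per surface point and intensity falls off with
    # distance squared, producing the characteristic hot spot near the light.
    to_light = tuple(l - s for l, s in zip(light_pos, surface_pos))
    dist2 = sum(c * c for c in to_light)
    L = normalize(to_light)
    return power * max(0.0, sum(n * l for n, l in zip(normal, L))) / dist2

def directional_light_intensity(light_dir, normal, power=1.0):
    # Direction and intensity are the same everywhere: no hot spot.
    L = normalize(tuple(-c for c in light_dir))
    return power * max(0.0, sum(n * l for n, l in zip(normal, L)))

normal = (0.0, 1.0, 0.0)  # a point on an upward-facing floor
print(point_light_intensity((0, 2, 0), (0, 0, 0), normal))  # bright: under the bulb
print(point_light_intensity((0, 2, 0), (4, 0, 0), normal))  # dimmer: farther away
print(directional_light_intensity((0, -1, 0), normal))      # constant everywhere
```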
When the scanline algorithm is enabled, all objects in your scene are projected onto a 2D plane. Objects are then sorted according to their vertical and horizontal order. This technique requires less rendering time than raytracing because it does not repeatedly search the 3D scene data to find the next contributing object.
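The following compressed sketch illustrates the idea. To keep it short, the "projected" geometry is a list of screen-space, axis-aligned rectangles with one constant depth each, rather than real projected triangles:

```python
# A toy scanline pass: geometry is pre-sorted, then each horizontal scanline
# is filled by walking a small sorted list instead of querying 3D scene data.
def scanline_render(rects, width, height):
    # Sort once by vertical position, mirroring the "vertical and horizontal
    # order" step; each scanline then only walks an ordered list.
    rects = sorted(rects, key=lambda r: r["y0"])
    image = [[None] * width for _ in range(height)]
    for y in range(height):
        # Rectangles overlapping this scanline, nearest (smallest depth) first.
        active = sorted((r for r in rects if r["y0"] <= y < r["y1"]),
                        key=lambda r: r["depth"])
        for x in range(width):
            for r in active:  # the nearest rectangle covering x wins
                if r["x0"] <= x < r["x1"]:
                    image[y][x] = r["color"]
                    break
    return image

rects = [dict(x0=0, x1=4, y0=0, y1=4, depth=2.0, color="A"),
         dict(x0=2, x1=6, y0=1, y1=5, depth=1.0, color="B")]
for row in scanline_render(rects, 6, 5):
    print("".join(c or "." for c in row))  # "B" occludes "A" where they overlap
```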
When an object is transparent, scanline rendering is used throughout, because the ray does not change direction. When reflection or refraction is involved, scanline rendering is used until the ray begins to bend, at which point mental ray switches to raytracing, if it is enabled.
Although the scanline algorithm is more efficient than raytracing, it has several limitations. It can only be used with a pinhole camera, and it cannot be used with distorting lens shaders, such as a fisheye lens or depth-of-field distortion.
A faster scanline algorithm, the rasterizer (formerly named Rapid Motion), was introduced in mental ray 3.2. The rasterizer accelerates the rendering of a) motion blur and b) scenes with high depth complexity. It speeds up motion blur by baking colors into triangles: when a triangle moves, the baked color is re-used for every pixel the triangle moves across, without re-evaluating its color at each new position. The rasterizer does have its limitations, however. Because reflection calculations are performed at shading time, and shading is computed only once and re-used, the reflections and refractions of a moving object do not change along with it but instead remain constant throughout the motion.
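A rough sketch of this baking idea, reduced to one dimension for brevity; shade(), the linear motion model, and every name here are illustrative stand-ins rather than mental ray's actual implementation:

```python
# Shading is evaluated ONCE per triangle and the baked result is re-used at
# every pixel position along the motion path; this reuse is also why
# reflections stay frozen while the object moves.
def shade(triangle):
    # Expensive shading (lights, textures, reflections) computed a single time.
    return triangle["color"]

def rasterize_motion_blurred(triangle, time_samples, width):
    baked = shade(triangle)  # bake the color up front
    accum = [0.0] * width    # coverage-weighted color per pixel (1D for brevity)
    weight = 1.0 / len(time_samples)
    for t in time_samples:
        # Move the triangle (here a 1D span) along its motion vector...
        x0 = int(triangle["x"] + t * triangle["velocity"])
        for x in range(max(0, x0), min(width, x0 + triangle["size"])):
            # ...and re-use the baked color at each covered pixel: the shader
            # is never re-evaluated for the new position.
            accum[x] += weight * baked
    return accum

tri = {"x": 0, "size": 3, "velocity": 6, "color": 1.0}
samples = [i / 4 for i in range(5)]  # shutter opens at t=0, closes at t=1
print(rasterize_motion_blurred(tri, samples, 12))  # the span smears across pixels
```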