Rasterizer
mental ray 3.2 and later support a rendering algorithm called
Rapid Motion, which was
rewritten in mental ray 3.4 and renamed rasterizer. Its primary difference from regular
scanline rendering is that it separates sampling from sample
compositing (also called sample collection). Without the
rasterizer, mental ray selects spatial and temporal sample points
(eye rays) on the image plane in the shutter interval. If an object
moves, it will be shaded in multiple samples at different points in
time. Since rendering time is roughly proportional to the number of
shaded samples, it rises quickly if an object moves quickly.
The rasterizer works by sampling all objects at a fixed time,
and caching the shaded samples for re-use. If the object moves,
these sample results are re-used at every point the object passes
over. The cache is tied to the geometry:
- A number of sample points are selected on each triangle based on
the visibility, size, and orientation of the triangle. These points
are then individually shaded by calling the material shaders as
usual. The results are stored in front-to-back order for later
combination. Care is taken to minimize the shading of points hidden
behind other geometry, but although rendering proceeds in a roughly
front-to-back order, there is no guarantee of the exact order,
unlike for the regular scanline algorithm or raytracing. For this
reason, we can only store the surface shading and transparency
initially, and must calculate volume and environment shading later
on. If an object moves, its shading results are simply used
multiple times instead of re-shading.
- The tile is scanned, and all shading results stored in
front-to-back order are composited into individual screen samples, using their
opacities to combine their colors, and the volume and environment
shaders are called and combined with the surface shading.
The late compositing of shading samples to form screen samples,
and re-using of shading results has several important
consequences:
- If the material shader traces rays with shader API functions
like mi_trace_reflection, the result is
re-used at all points the object moves across. This has the effect
that the object appears to "drag" reflections and refractions with it. For example, if a
mirror that is coplanar to the image plane moves sideways, its
edges are always blurred, but the objects being reflected would be
blurred only with the rasterizer.
- Transparency (mi_trace_transparent) can be
calculated by the regular scanline algorithm without tracing rays,
by following the chain of depth-sorted triangles behind the current
point on the image plane. Since the rasterizer shades points on
triangles one by one, and combines the results according to depth
at the later compositing stage, mi_trace_transparent will always
return false and transparent black. As long as the shader performs
standard linear compositing, this gives the same results, but
if the shader makes decisions such as casting environment rays
based on the value returned by mi_trace_transparent, unexpected
results may happen.
- In particular, shaders that implement matte objects will not work without modification.
Matte objects are placeholders for later compositing outside of
mental ray, such as transparent cut-outs where live action will be
added later. Since the rasterizer ties its shading sample combining
to the alpha component of the RGBA color returned by the material
shader, it will fill in such cut-outs. To avoid this, a shader may
use the new mi_opacity_set function to
explicitly set the opacity for the compositing stage independently
of the returned alpha value. In other words, if an explicit opacity
value is set, the alpha channel of the shading result color is
ignored for calculating transparency, and is just retained for
writing to the final output buffer. Instead, the opacity color is
used to combine shading values front-to-back, whereas in the
absence of the opacity color, alpha is used to combine shading
samples front-to-back. A matte object could have alpha of 0 but set
an opacity of 1. In this manner one can render solid objects with
transparency built in, for correct results during later, external
compositing. There is also an mi_opacity_get function to support
cooperating shaders in Phenomena.
The rasterizer is enabled with the command-line option
-scanline rapid or the statement scanline rapid
in the options block of the scene file. mental ray 3.4 controls the
pixel sampling rate with -samples_collect, which gives the
number of samples per pixel-dimension. For example, a value of 4
gives 16 samples per pixel. The rate of shading is controlled by
-shading_samples, and defaults to 1.0, or 1 shading call
per pixel. This drives the internal tessellation depth, and takes
effect after the geometry's own approximation has been
calculated.
Due to the different sampling patterns, it is not a good idea in
multipass rendering to combine
passes that use the rasterizer with passes that do not use
the rasterizer.
Copyright © 1986-2006 by mental images GmbH