Here is a summary of some of the new features and feature improvements in version 3.8 of mental ray. Please refer to the release notes for more details and for other changes which are not mentioned here.
This version offers a new rendering mode that generates photo-realistic imagery using ray tracing, capturing global illumination without introducing algorithm-specific artifacts and without requiring renderer-specific parametrization. When coupled with highly parallel processing platforms, like CUDA-capable hardware, mental ray can deliver these results progressively, at interactive frame rates. This mode is called the iray rendering mode. It is enabled with a string option. See also Known Limitations.
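In a .mi scene file, the mode is switched on through its string option inside the options block; a minimal sketch, where the block name "opt" is just a placeholder:

    options "opt"
        "iray" on
    end options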
This version of mental ray adds runtime support for the features listed in the MetaSL language specification version 1.1, as published on the mental images website. This includes, for example, handling of the newly introduced material descriptions and BRDF properties, as well as support for scene data access from within MetaSL shaders. See also Known Limitations.
The MetaSL back-end technology for automatic compilation and execution of shaders delivered as source code has been improved and extended. In addition to the existing C++ back-end, a new LLVM back-end is available. It provides a platform-independent way of deploying and executing shaders without the need for external compiler or framework installations. Furthermore, a newly added shader caching mechanism supports incremental MetaSL shader editing workflows within mental ray. See also Known Limitations.
mental ray supports stereoscopic rendering in a single run with optimal performance. It generates the two images for the left and the right eye automatically. Only the primary rendering algorithms, like the rasterizer and the ray tracing of eye rays, are affected by the slight offset of the eyes. Secondary effects with view dependency, like tessellation or final gathering, are not affected but use the "center" eye as usual. Stereo rendering should not influence existing shaders. The mental ray display protocol has been extended to send stereoscopic information across the connection, and the image tools have been updated accordingly to cope with stereo image files and to display images live when rendering in stereo. Stereo rendering is enabled with a camera setting.
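In .mi syntax, stereo is a per-camera setting; a minimal sketch, where the offaxis method and the eye distance of 6.5 (in scene units) are arbitrary example choices:

    camera "cam"
        # focal, aperture, and other camera statements as usual
        stereo offaxis 6.5
    end camera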
Shaders can implement better texture filtering with the help of so-called ray differentials, which are natively supported by mental ray. Ray differentials make it possible to use dynamic filter sizes to reduce artifacts, especially in secondary effects like reflections and refractions. Additional texture lookup shader API functions make it possible to improve existing texture filtering implementations with little effort.
The final gathering (FG) algorithm can now be used together with Importons, or even with Irradiance Particles (IP), to benefit from the importance-based computations of those techniques. These combinations are enabled simply by activating both FG and Importons, or FG and IP, which was rejected in previous versions. During rendering, after the Importon or IP passes have finished, the FG points are placed in the scene. FG rays are then shot not uniformly but in an importance-driven way dictated by the Importons, or by the IP map, which determines the directions in which more or fewer rays should be shot. The general outcome of enabling FG with importance-based techniques is better final quality coupled with lower rendering times.
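A sketch of how the combination might be requested in a .mi options block; finalgather on is the standard FG switch, while the "importon" string option name is assumed from earlier mental ray versions and may differ in a given build:

    options "opt"
        finalgather on
        "importon" on    # assumed string option name for enabling importons
    end options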
The ray tracing acceleration algorithm BSP2 in mental ray has been revised both in terms of memory usage and speed. It has been optimized especially for handling dynamic scene content with on-demand loaded scene parts provided as assemblies. In the case of motion blur with ray tracing, the memory consumption has been reduced noticeably, with a positive impact on overall performance.
A new acceleration technique for ray tracing hair has been implemented. It leads to noticeable performance improvements both in execution time and in memory consumption, especially in the presence of assemblies. When hair is used with the rasterizer, a new mechanism can be enabled with a string option which decreases memory usage to a fraction of that of previous versions, with the effect of rendering faster by minimizing or even completely avoiding flushing during rendering. In addition, the default automatic splitting of long hairs and large hair counts has been re-designed and greatly improved: artifacts of missing hair segments are gone, and tessellation behavior is more adaptive and memory efficient.
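Using the "rast hair disposable" string option from the syntax changes, the rasterizer hair mechanism could be enabled like this (the options block name is a placeholder):

    options "opt"
        "rast hair disposable" on
    end options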
The progressive rendering and IBL algorithms have been improved and extended with capabilities to trade speed for quality. This includes the introduction of a specialized occlusion cache to pre-compute shadowing, as well as support for approximations of controllable quality for the lighting contribution from the IBL environment. A new integration interface has been designed for the implementation of interactive rendering solutions; it allows an application to apply changes to the model while receiving and displaying the rendered full-resolution images at interactive frame rates.
The Map primitive adds support for attaching global data. This global data is made up of fields similar to the regular per-element data, but their values are considered identical for all elements. The runtime for handling Maps in mental ray has been extended with a caching system so that it can operate on large Maps which exceed the size of the physical memory installed on a machine. The performance of Map accesses has been improved noticeably.
The image display tool imf_disp has been re-implemented on top of a unified user interface that looks identical on all supported platforms, providing the same workflow and interactions regardless of the system. It adds new features such as exposure control (in addition to gamma control), zooming the display in and out, playback of animation sequences from a selection of files, and anaglyph color viewing of stereo images. Furthermore, the tools have been extended to report, display, and save separate layers of multi-layer image files in OpenEXR format.
The following changes were made in the .mi scene description syntax:

"iray" on|off
    The default is off.

stereo statement:

    camera "camera_name"
        ...
        stereo method eyedistance
    end camera

    The method is one of off, toein, offset, or offaxis. The eyedistance
    is the distance between the two eyes. See the camera parameter
    stereo for details.

"rast hair disposable" on
    The default is off.

"approximate"

"environment lighting approximate split" numint
    Reduce color noise by drawing more samples internally from the
    environment, at the cost of bias if the value specified is greater
    than 1. The default is 4.

"environment lighting approximate split vis" numint
    Specify the number of visibility rays to be shot per internal
    sample. This reduces visibility noise at the cost of increased ray
    tracing overhead. The default is 2.

"progressive occlusion cache points" numint
    Specify the number of points in the cache; 0 disables the cache.
    The default is no cache.

"progressive occlusion cache rays" numint
    Specify the number of occlusion rays to shoot per point. Higher
    numbers increase precomputation time. The default is 128.

"progressive occlusion cache max frame" numint
    The last frame that shall use the cache; later frames blend in the
    regular IBL. The default is 32.

"progressive occlusion cache exclude" numint
    Exclude objects with the given label from the cache. The default is
    to not exclude anything.
The miCamera structure has been extended with the new fields miCamera_stereo and eye_separation for stereoscopic rendering.

The shader API functions mi_lookup_color_texture_x, mi_lookup_filter_color_texture_x, and mi_lookup_scalar_texture_x have been added.

mi::shader_v3::Mip_remap has been added.

Because of changes to the mi_eval macros in mental ray, existing shader packages for earlier versions of mental ray are not binary compatible and need to be re-compiled. On the other hand, no source code changes are required if only public interface functions have been used.

Copyright © 1986-2010 by mental images GmbH