This method can be used very easily with existing .mi files; it is only necessary to add a "filter scale" modifier to the texture load statements in the scene file. Here is an example:
local filter 0.8 color texture "tex_0" "mytexture.map"
The basic idea behind pyramid filtering is that when a pixel rectangle (the current sampling location) is projected into texture space, mental ray has to calculate the (weighted) average of all texture pixels (texels) inside this area and return it as the result of the texture lookup. Averaging the texels eliminates the high frequencies that cause aliasing. To speed up this averaging, a compression value is calculated for the current pixel location; it is the inverse of the pixel's size in texture space. For example, if the pixel has a projected size of four texels in texture space, then one texel is compressed to 1/4 in the focal plane (severe compression is what produces the aliasing artifacts).
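To illustrate the averaging step, here is a minimal sketch in C (not mental ray code): it computes the unweighted box average of all texels covered by an axis-aligned footprint given in texel coordinates. The Color struct, the row-major texel array, and the unweighted averaging are simplifying assumptions made for this example.

    /* Illustrative sketch only: average all texels covered by a footprint
     * [u0,u1] x [v0,v1] given in texel units. */
    typedef struct { float r, g, b; } Color;

    Color box_average(const Color *texels, int width, int height,
                      float u0, float v0, float u1, float v1)
    {
        int   xmin = u0 < 0.0f ? 0 : (int)u0;
        int   ymin = v0 < 0.0f ? 0 : (int)v0;
        int   xmax = u1 >= (float)width  ? width  - 1 : (int)u1;
        int   ymax = v1 >= (float)height ? height - 1 : (int)v1;
        Color sum  = { 0.0f, 0.0f, 0.0f };
        int   x, y, n = 0;

        for (y = ymin; y <= ymax; y++)
            for (x = xmin; x <= xmax; x++) {
                const Color *t = &texels[y * width + x];
                sum.r += t->r; sum.g += t->g; sum.b += t->b;
                n++;
            }
        if (n > 0) { sum.r /= n; sum.g /= n; sum.b /= n; }
        return sum;   /* high frequencies inside the footprint are averaged out */
    }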
Note: for memory-mapped textures the filter value in the .mi file is ignored, and filtering is not applied in this case for technical reasons. As a workaround, it is possible to specify the filter value when the texture is created with the imf_copy tool, using its -f option.
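Such an invocation might look like the line below. This is a sketch only, assuming that -f takes the filter value as its argument; the file names are placeholders, and the exact option syntax may differ between mental ray versions.

    imf_copy -f 0.8 mytexture.tif mytexture.map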
It is very costly to project a rectangle to texture space, so the quadrilateral in texture space is usually approximated by a square, and the length of one side is used as the compression value. The compression value is used as an index into the image pyramid; since this value has a fractional part, the two levels that it falls between are looked up using bilinear interpolation at each level, followed by a linear interpolation of the two colors returned by the level lookups. (mental ray also uses bilinear texture interpolation when no filtering is applied.)
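The level selection and blending can be sketched as follows. This is a generic trilinear mip-map lookup written in C for illustration, not the actual mental ray implementation; the bilinear_lookup helper, the Color struct, and the convention that the compression argument equals the footprint side length in texels of the finest level are assumptions of this sketch.

    #include <math.h>

    typedef struct { float r, g, b; } Color;

    /* Hypothetical helper: bilinear interpolation of one pyramid level at (u,v). */
    Color bilinear_lookup(int level, float u, float v);

    /* Generic trilinear pyramid lookup: level 0 is the unfiltered image,
     * each higher level halves the resolution. */
    Color pyramid_lookup(float compression, float u, float v, int num_levels)
    {
        float index = compression > 1.0f ? log2f(compression) : 0.0f;
        int   lo    = (int)index;
        int   hi    = lo + 1;
        float frac  = index - (float)lo;        /* fractional part of the index */
        Color a, b, result;

        if (hi > num_levels - 1) hi = num_levels - 1;
        if (lo > num_levels - 1) lo = num_levels - 1;

        a = bilinear_lookup(lo, u, v);          /* bilinear in the finer level   */
        b = bilinear_lookup(hi, u, v);          /* bilinear in the coarser level */

        result.r = a.r + frac * (b.r - a.r);    /* linear blend of the two levels */
        result.g = a.g + frac * (b.g - a.g);
        result.b = a.b + frac * (b.b - a.b);
        return result;
    }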
Just specifying "filter scale color texture" is not sufficient for an exact projection of the pixel to texture space. The texture shader modifies the UV texture coordinates (taken from specified texture surfaces or generated by projections such as cylindrical mapping) according to remapping shader parameters and so on. The standard shader interface function mi_lookup_color_texture provides only the final UV texture coordinates to mental ray, so it is almost impossible to project the pixel corners to texture space: mental ray does not know how to obtain additional UV coordinates or how to remap them, because the remapping is done before mi_lookup_color_texture is called.
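The following C fragment sketches a typical texture shader to make this concrete: the coordinates are remapped inside the shader before a single UV pair is handed to mi_lookup_color_texture, so mental ray never sees the mapping itself. The shader name, its parameter block, and the repeat_u/repeat_v factors are hypothetical; only the mi_lookup_color_texture call, the mi_eval_* accessors, and state->tex_list reflect the standard shader interface (DLLEXPORT and the version function are omitted from this sketch).

    #include "shader.h"          /* mental ray shader interface declarations */

    /* Hypothetical parameter block for this sketch. */
    typedef struct {
        miTag    tex;            /* texture to look up               */
        miScalar repeat_u;       /* hypothetical replication factors */
        miScalar repeat_v;
    } my_texture_t;

    miBoolean my_texture(miColor *result, miState *state, my_texture_t *param)
    {
        miScalar ru  = *mi_eval_scalar(&param->repeat_u);
        miScalar rv  = *mi_eval_scalar(&param->repeat_v);
        miTag    tex = *mi_eval_tag(&param->tex);
        miVector coord;

        /* Remap the first texture coordinate set; this happens inside the
         * shader, so mental ray only receives the final UV pair below. */
        coord.x = state->tex_list[0].x * ru;
        coord.y = state->tex_list[0].y * rv;
        coord.z = 0.0f;

        /* Standard interface call: only one UV pair is passed, so the pixel
         * corners cannot be projected into texture space here. */
        return mi_lookup_color_texture(result, state, tex, &coord);
    }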
mental ray's implementation of pyramid mapping therefore adds an offset vector to the current intersection point in object space and transforms this point into raster space. The length of the offset vector is calculated by dividing the object extent by the texture resolution (the larger of width and height is used). This approach assumes that texture space corresponds to object space (that is, if the object size is one object unit, the texture fully covers it). If a texture shader applies texture replications, the filter value should be set to the replication count or larger to compensate. The compression value is calculated as the distance between the raster position mentioned above and the current raster position (available to the shader in the two state variables state->raster_x and state->raster_y).
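As an illustration of this estimate, here is a C sketch; the project_to_raster helper stands in for the transformation into raster space, the offset direction is chosen arbitrarily along one axis, and object_extent denotes the object's bounding-box size. None of these names are part of the mental ray API.

    #include <math.h>

    /* Hypothetical helper standing in for the object-space to raster-space
     * transformation; it writes the 2D raster position to raster_out. */
    void project_to_raster(const float point[3], float raster_out[2]);

    /* Estimate the compression value at the current intersection point.
     * point          : intersection point in object space
     * object_extent  : size of the object's bounding box in object space
     * tex_res        : the larger of the texture's width and height
     * raster_x/y     : current raster position (state->raster_x, state->raster_y) */
    float estimate_compression(const float point[3], float object_extent,
                               int tex_res, float raster_x, float raster_y)
    {
        /* Offset length: object extent divided by the texture resolution,
         * i.e. roughly one texel of object-space distance, assuming the
         * texture covers the object exactly once. */
        float step = object_extent / (float)tex_res;

        /* The offset direction is an arbitrary choice for this sketch. */
        float offset_point[3] = { point[0] + step, point[1], point[2] };
        float raster[2];
        float dx, dy;

        project_to_raster(offset_point, raster);

        dx = raster[0] - raster_x;
        dy = raster[1] - raster_y;

        /* Compression: distance in raster space between the offset point
         * and the current raster position. */
        return sqrtf(dx * dx + dy * dy);
    }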
Since this cannot always produce satisfactory results, mental ray allows a "user scaling" value to be multiplied in: the scale value in the filter statement. Using this value, it is possible to reduce blurring (scale < 1) or increase blurring (scale > 1). For example, if the texture is replicated 10 times, which makes it appear smaller in raster space and hence requires more blurring, the filter scale should be multiplied by 10. Since texture projections are handled by shaders and not by the mental ray core, this cannot be done automatically.
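For instance, for a texture that is replicated 10 times by its shader, the corresponding scene file entry could look like the line below (the texture name and file are placeholders):

    local filter 10.0 color texture "tex_1" "mytexture.map"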
Pyramid filtering also works when reflection or refraction is used, but mathematical correctness cannot be guaranteed: for the same reason, mental ray cannot take the reflection or refraction paths into account.