RenderMap Property Editor

 
 
 


RenderMap allows you to "bake" a variety of object attributes into different maps. These maps can all be stored as external image files (RenderMap), while many of them can also be stored in Color at Vertices (CAV) properties (RenderVertex).

For more information, see Baking Surface Attribute Maps [Texturing].

To apply: Select an object and choose Get > Property > RenderMap from any toolbar.

To redisplay: In an explorer, expand the object and click the RenderMap icon.

Basic

Regenerate Maps

Generates all RenderMap images or Color at Vertices properties defined in this property editor. If multiple rendermap properties are being inspected at the same time, they are all regenerated together.

Set Resolution From Clip

Matches the RenderMap images' resolution to the resolution of any one image clip attached to the rendermapped object.

Clicking this button opens a pop-up explorer where the image clips are displayed.

Sampling

Specifies whether to sample the entire surface and bake its attributes into RenderMap textures (RenderMap), or to sample only the vertices and store the result in a Color at Vertices property (RenderVertex).

Format (RenderMap)

X/Y Res

Specifies the resolution of the RenderMap images in X and Y. To generate non-square images, make sure that the Square option is deactivated.

Square

When enabled, each RenderMap image's Y resolution is automatically set to the same value as its X resolution, producing a square image.

Super Sampling

Defines the number of samples taken for each pixel. For sampling purposes, the pixel is divided into a grid whose size is determined by the super sampling value. For example, if the value is set to 3, the pixel is divided into a 3x3 grid, and 9 samples are taken and averaged.
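
To make the arithmetic concrete, here is a minimal Python sketch of that sampling grid; sample_color is a hypothetical stand-in for the shading evaluation the renderer performs and is not part of Softimage.

    def supersample_pixel(px, py, n, sample_color):
        """Average n*n samples taken on a regular grid inside pixel (px, py).

        sample_color(u, v) is a placeholder for the shading evaluation the
        renderer performs at coordinates (u, v). With n = 3, nine samples
        are taken and averaged, as described above.
        """
        total = [0.0, 0.0, 0.0]
        for i in range(n):
            for j in range(n):
                # Sample at the center of each cell of the n x n grid.
                u = px + (i + 0.5) / n
                v = py + (j + 0.5) / n
                r, g, b = sample_color(u, v)
                total[0] += r
                total[1] += g
                total[2] += b
        count = n * n
        return (total[0] / count, total[1] / count, total[2] / count)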

UV

Specifies the UV coordinate set for which the RenderMap is created. See Specifying a Texture Projection [Texturing].

Note

If you rendermap an object using a spatial texture projection, the output images will look as though they were generated using a planar projection.

To correctly bake a spatial projection's surface attributes into a rendermap, apply an appropriate projection to the object (UV for example). The texture will still be projected by the spatial projection, but the RenderMap will sample the object using the UV projection.

Surface Color

Enable

Deactivate this option when you do not wish to generate a surface color map, but do wish to generate other types of maps, such as those found on the Maps tab.

Path (RenderMap only)

The destination file name and path for RenderMap. If Usr is on, the path is displayed as you entered it. If Res is on, the resolved path is displayed. You can also click the browse (...) button to open a browser and navigate to the destination location.

Format (RenderMap only)

Sets the file format of the rendermap image. This also determines the file-name extension.

Available file formats are:

  • SOFTIMAGE (.pic) picture

  • OpenEXR (.exr)

  • mental ray Map (.map)

  • Radiance HDR (.hdr)

  • TIFF (.tif) picture

  • PNG (.png)

  • Targa (.tga) picture

  • SGI (.sgi) picture

  • mental ray Color (.ct)

  • mental ray Grayscale (.st)

  • BMP (.bmp)

  • JPEG (.jpg) with maximum quality

  • Alias (.alias) picture

  • Wavefront (.rla) picture

  • Quantel/Abekas (PAL) (.pal)

  • Quantel/Abekas (NTSC) (.ntsc)

  • DDS (Uncompressed) (.dds)

    • 8 bits per channel = A8R8G8B8 Unsigned, 32 bits per pixel

    • Half float (16 bits per channel) = A16B16G16R16F, 64 bits per pixel

    • Float (32 bits per channel) = A32B32G32R32F, 128 bits per pixel

Width (RenderMap only)

Specifies the output image's bit depth.

The options in this list vary depending on the specified Format. Some formats support only 8-bit, others support 8-bit and 16-bit, and still others support 8-bit, 16-bit, and Float. One notable exception is the OpenEXR format, which supports "half" (16-bit float) and Float.

CAV (RenderVertex Only)

Defines the color at vertices map to which the RenderVertex is written. See Specifying Parameter Maps or Vertex Colors in Property Editors [Scene Elements].

Map

Specifies the attributes that RenderMap or RenderVertex should render when the surface color maps are generated.

  • Surface Color and Illumination bakes all object surface attributes, including color, illumination, bump, and so on, into the rendermap output image.

  • Surface Color Only (albedo) bakes object surface color without considering the current illumination environment.

  • Illumination bakes illumination information into the surface color map. This includes light color.

    Illumination maps can optionally include bump map information provided that the Consider Bump option is activated.

  • Ambient Occlusion uses Softimage's ambient occlusion shader to create a color representation of the extent to which the object is occluded by other objects, or the environment, at any given point.

    When you're setting the RenderMap properties, you can adjust the ambient occlusion shader's parameters to control the final output map. These options are on the Surface Map Settings tab.

    Ambient occlusion maps can optionally include bump map information provided that the Consider Bump option is activated.

Consider Bump

When activated, bump mapping is included in the surface color map. Deactivate this option to exclude bump mapping from the surface color map; note that this is not possible for all map types.

Coverage in Alpha Channel (RenderMap Only)

When activated, the RenderMap's texel coverage is written to the alpha channel of the output Surface Color map.

Overwrite CAV alpha channel (RenderVertex only)

When activated, regenerating the RenderVertex maps overwrites the alpha channel of the CAV property in which the Surface Color map is stored. When deactivated, only the RGB channels are overwritten.

Disable Surface Properties (If Present)

The Disable Surface Properties options control whether shadows, refractions and/or reflections, as well as the ambient, diffuse and/or specular lighting components on the rendermapped object appear in the output image. When any of these boxes is checked, the corresponding attribute does not appear.

View-dependent properties, such as specular highlights, reflections, and refractions, are generally poor candidates for rendermapping because they get "baked in" and do not change with the viewpoint. The exception is when the object is intended to be seen from a single viewpoint only.

Of course these surface attributes can only be toggled provided they are active to begin with. For example, if the object is Blinn shaded, but the Blinn shader's specular component is deactivated, toggling the specular component in the RenderMap property editor has no effect.

Maps

The options on this tab allow you to generate a number of different maps based on attributes other than surface color or illumination. Controls for each map are hidden until you activate the map.

All of the maps on this tab use the resolution and UV coordinate settings defined on the Basic tab.

Regenerate Maps

Generates all RenderMap images or Color at Vertices properties defined in this property editor. If multiple rendermap properties are being inspected at the same time, they are all regenerated together.

When you enable a map type, additional controls appear. All map types have the options in the following table for specifying image format. Some map types have additional options as described in the subsections below.

Enable

Activates the corresponding map. The map is now generated, along with any other active maps, when you click the Regenerate Maps button on any tab.

Destination

Specifies the destination for the map.

  • If you are generating a RenderMap image (Sampling is set to Entire Surface on the Basic tab) you must enter the name and destination of each image file to write to.

    If Usr is on, the path is displayed as you entered it; if Res is on, the resolved path is displayed. You can type a different path or use the Browse (...) button to change locations. Valid paths are displayed in white, invalid paths are red, and read-only paths are gray.

Format

Sets the file format of the image file containing the map. This also determines the file-name extension.

Available file formats are:

  • SOFTIMAGE (.pic) picture

  • OpenEXR (.exr)

  • mental ray Map (.map)

  • Radiance HDR (.hdr)

  • TIFF (.tif) picture

  • PNG (.png)

  • Targa (.tga) picture

  • SGI (.sgi) picture

  • mental ray Color (.ct)

  • mental ray Grayscale (.st)

  • BMP (.bmp)

  • JPEG (.jpg) with maximum quality

  • Alias (.alias) picture

  • Wavefront (.rla) picture

  • Quantel/Abekas (PAL) (.pal)

  • Quantel/Abekas (NTSC) (.ntsc)

  • DDS (Uncompressed) (.dds)

    • 8 bits per channel = A8R8G8B8 Unsigned, 32 bits per pixel

    • Half float (16 bits per channel) = A16B16G16R16F, 64 bits per pixel

    • Float (32 bits per channel) = A32B32G32R32F, 128 bits per pixel

Width

Specifies the map's bit depth.

The options in this list vary depending on the specified Format. Some formats support only 8-bit, while others support various combinations of 8-bit, 16-bit, Float, and Half float.

Texel Coverage (RenderMap Only)

Texel coverage maps indicate what fraction of each texel in the output image lies on the surface: black means no coverage and white means 100% coverage. Practically speaking, these options generate an alpha channel and/or external matte for the output image.

Normals

Activating the normal map allows you to burn the rendermapped object's normals into a file or Color At Vertices property.

The data is stored in the file in a biased form: the x, y and z of the normal are stored as (x+1)/2, (y+1)/2, (z+1)/2, so that they are always in the range 0 to 1. To get the unbiased, original normal value, use r*2-1, g*2-1, b*2-1.
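
A minimal Python sketch of that biasing and unbiasing (the function names are illustrative only):

    def bias_normal(n):
        """Map a unit normal's components from [-1, 1] into [0, 1] for storage."""
        x, y, z = n
        return ((x + 1.0) / 2.0, (y + 1.0) / 2.0, (z + 1.0) / 2.0)

    def unbias_normal(rgb):
        """Recover the original normal from a stored (r, g, b) texel."""
        r, g, b = rgb
        return (r * 2.0 - 1.0, g * 2.0 - 1.0, b * 2.0 - 1.0)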

Space

Normal vectors can be encoded in one of three spaces:

  • Object space: The vector is represented relative to the coordinate frame of the object. Rotation, translation and scaling of the object do not have an impact on the result.

  • World space: The vector is represented relative to the scene root. Rotations and scaling of the object will impact the result. Translation never affects normals.

  • Relative to UV Basis: The vector is represented in the local space defined by the UV basis, i.e., the coordinate frame defined by the tangent (X-axis), binormal (Y-axis), and interpolated normal (Z-axis).

    If you choose this option, you must either generate a UV basis automatically or specify a pre-existing UV basis using the UV Basis options on the Advanced tab.

    Often, the texture projection specified in the Format options on the Basic tab is ideal for computing the basis, since this projection tends to have low shearing in the UVs.

    This representation is very useful for bump-mapping in games. The U and V bases can also be burned into a separate map, as described in Basis Vector Maps [Texturing].

    If Type is set to Interpolated Normal, this option always returns (0,0,1).

Type

You can create the normal map using any of the following types of normal:

  • Interpolated Normal: This normal is the normal that is computed by interpolation across the triangle of the rendermapped surface. It is not affected by bump mapping, and is always the normal of the surface being rendermapped.

    Computing this normal does not involve evaluating the surface shader, so is much faster to compute than the Sampled Normal.

  • Sampled Normal: This is the normal used for shading after ray-casting and evaluating the surface color. As such, bump mapping will affect the result.

    In cases where the ray-casting "catches" another surface (for example, if the Ignore RenderMapped Objects parameter on the Advanced tab is activated), you will get the normals of the other surface.

    If there is no bump mapping, and the original surface is "caught", the sampled normal will match the interpolated normal.

  • Geometric Normal: This is the normal of the geometric triangle being sampled. It is not affected by bump mapping, and is always the normal of the surface being rendermapped.

    Computing this normal does not involve evaluating the surface shader, so is much faster to compute than the Sampled Normal.

  • Bent Normal: This is the average direction of the unoccluded sample rays cast when calculating ambient occlusion.

Basis Vectors

This map allows you to burn the U and V basis vectors. These vectors, along with the interpolated normal, define a coordinate frame on the surface of the object. Conceptually, the U and V bases are supposed to be tangent to the surface, while the interpolated normal is perpendicular to the surface. This coordinate frame is useful for relative normal computations for bump mapping in games.

The tangent and binormal (U and V basis, respectively), are computed using a texture projection, specified on the Advanced tab (UV Basis > Texture projection). You can burn a map for either vector individually, or for both of the vectors at the same time.

As with the normals, the U and V bases are stored in a biased form: the x, y and z of the vectors are stored as (x+1)/2, (y+1)/2, (z+1)/2, so that they are always in the range 0 to 1. To get the unbiased, original vector value, use r*2-1, g*2-1, b*2-1. When generating CAVs with RenderVertex, you can change the bit depth to convert the stored tangents and binormals to unbiased form as described in Setting the Data Type for Tangents and Binormals [Texturing].
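
As an illustration only, the following Python sketch shows how the three unbiased vectors can be used as a coordinate frame, expressing an object-space vector in tangent space; the helper names are hypothetical and not part of Softimage.

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    def to_tangent_space(v, tangent, binormal, normal):
        """Express vector v in the frame (tangent, binormal, normal).

        tangent (U basis), binormal (V basis) and normal are assumed to be
        unbiased unit vectors, for example stored map values run through
        r*2-1, g*2-1, b*2-1. The result's X, Y and Z components lie along
        the U basis, V basis and interpolated normal respectively.
        """
        return (dot(v, tangent), dot(v, binormal), dot(v, normal))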

Space

Basis vectors can be encoded in either Object space or World space:

  • Object space: The vectors are represented relative to the coordinate frame of the object. Transforming the object will not have an impact on the result.

  • World space: The vectors are represented relative to the scene root. Rotating and scaling the object will impact the result.

Surface Position (RenderMap Only)

The surface position map burns the sampled position of the surface into the map. It stores the raw (x, y, z) position as a color, without biasing. For this reason, you will want to use a file format that supports floating-point bit-depth or ensure that the coordinates of the object are between 0 and 1.

Space

Surface position can be encoded in either Object space or World space:

  • Object space: The position is represented relative to the coordinate frame of the object. Transforming the object will not have an impact on the result.

  • World space: The position is represented relative to the scene root. Transforming the object will impact the result.

Depth

Depth Maps, also called height maps, are grayscale representations of the height of every point on an object's surface. Depth maps are often used by game developers to create a more realistic bump-mapping effect called parallax mapping, which simulates the correct displacement you perceive on an object's surface, based on the camera's point of view.
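
As a rough, hypothetical illustration of how such a map might be used for parallax mapping, the following Python sketch offsets a UV lookup along the tangent-space view direction; the scale and bias values are arbitrary example parameters, not RenderMap settings.

    def parallax_offset(uv, height, view_ts, scale=0.04, bias=-0.02):
        """Offset texture coordinates by a height value (simple parallax).

        uv      -- (u, v) texture coordinates being shaded
        height  -- depth/height map value in [0, 1]
        view_ts -- tangent-space view direction (x, y, z); z is assumed to
                   point away from the surface and to be non-zero
        """
        h = height * scale + bias
        u, v = uv
        vx, vy, vz = view_ts
        # Shift the lookup along the view direction projected onto the surface.
        return (u + h * vx / vz, v + h * vy / vz)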

Advanced

Regenerate Maps

Generates the RenderMap images or Color at Vertices properties defined in this property editor. If multiple rendermap properties are being inspected at the same time, they are all regenerated together.

Prepend owner name to file names (RenderMap only)

Adds the name of the rendermapped object to the resulting maps' filenames. For example, if you rendermap a cube named cube1, the resulting maps' filenames will be cube1_filename.ext.

This option affects all maps generated by a given RenderMap property, including the Surface Color maps that you activate from the Basic tab, and any maps that you activate on the Maps tab.

Precision Datatype (RenderVertex only)

Sets the data type for storing colors:

  • Byte stores values as an integer to be interpreted as a normalized color value in the range [0.0, 1.0]. Note that while the data type is actually a short integer, only 256 levels of precision are used so it is equivalent to a single byte.

  • Float (4 bytes) stores colors as floating point values. Use this option for HDR color values.

Sampling (RenderMap)

Method

Sets the sampling method.

  • Area Weighted Sampling samples every polygon that covers a texel. The resulting color is computed by averaging all the samples. Each sample is weighted by the percentage of the texel that is covered.

  • Simple Sampling samples once per texel (or subtexel if Super Sampling is > 1), in the very center. Therefore, only the one polygon that overlaps the center is sampled, even if there are other polygons in the texel.

Partial Texel

Defines how partial texels are handled; see the sketch after this list.

  • The Normalize option fills in the entire texel with the color of the covered part.

  • The Blend with Background Color option fills in the uncovered part with the specified background color.
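
A minimal Python sketch of the two behaviors, assuming a per-texel coverage fraction and the Background Color from the Surface Map Settings tab (illustrative only):

    def resolve_partial_texel(covered_color, coverage, background, normalize=True):
        """Resolve a texel that is only partially covered by the surface.

        covered_color -- average color sampled over the covered fraction
        coverage      -- fraction of the texel covered by the surface, 0..1
        background    -- the Background Color from the Surface Map Settings tab
        """
        if normalize:
            # Normalize: the covered part's color fills the whole texel.
            return covered_color
        # Blend with Background Color: weight the two colors by coverage.
        return tuple(c * coverage + b * (1.0 - coverage)
                     for c, b in zip(covered_color, background))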

Jitter

When on, the location of each sample is shifted slightly from its calculated position. Varying the sample locations can help reduce artifacts, particularly in areas where small details might otherwise be lost between samples, and is useful in situations where regular sampling makes small artifacts more visible.

When off, there is no jitter and samples are taken at pixel (or subpixel) corners.

Jitter does not affect render time significantly.

Spill into empty texels

Specifies how far, in texels, filled texels bleed into adjacent empty texels in the output image. When displaying textures in most applications, this can reduce artifacts that are caused by interpolating values with adjacent empty texels. A value of 0 results in no bleeding.
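
Conceptually, the result is similar to a simple dilation pass like the following Python sketch (an illustration under assumed data structures, not RenderMap's actual implementation):

    def spill_into_empty_texels(filled, width, height, spill):
        """Bleed colors from filled texels into nearby empty ones.

        filled -- dict mapping (x, y) to a color, for texels covered by geometry
        spill  -- maximum bleed distance in texels; 0 leaves empty texels alone
        """
        result = dict(filled)
        for y in range(height):
            for x in range(width):
                if (x, y) in filled:
                    continue
                # Search outward for the nearest filled texel within the radius.
                for r in range(1, spill + 1):
                    near = [(x + dx, y + dy)
                            for dx in range(-r, r + 1)
                            for dy in range(-r, r + 1)
                            if (x + dx, y + dy) in filled]
                    if near:
                        result[(x, y)] = filled[near[0]]
                        break
        return result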

Sampling (RenderVertex)

Average colors around vertices

When enabled, the colors around vertices are averaged when the RenderVertex color at vertices map (CAV) is generated.

This is useful for games development because it ensures that all samples around a vertex have the same color, and therefore leads to more efficient triangle stripping.

Enabling this option may cause color errors where the color changes drastically between polygons.

Sample inset factor (RenderVertex only)

Controls the distance between each sampled vertex and the location where the sample is actually taken.

Raising the value moves the sample location closer to the center of the triangle (a value of 1 means the sample location is at the triangle's center). Higher values are more likely to produce artifacts in the resulting CAV.

Lowering the value moves the sample location closer to the vertex. You cannot set this value lower than 0.001. This is because RenderVertex does not sample exactly at a given vertex, but slightly inside the polygon near the vertex. This helps to ensure that the object is colored correctly at each vertex in cases where there is a drastic change in surface color.
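
Conceptually, the inset behaves like this Python sketch, which moves the sample point from the vertex toward the centroid of its triangle (an illustration only):

    def inset_sample_position(vertex, triangle, inset):
        """Move a sample point from a vertex toward its triangle's centroid.

        vertex   -- (x, y, z) position of the vertex being sampled
        triangle -- the three (x, y, z) corners of the triangle
        inset    -- Sample inset factor; 0.001 samples just inside the
                    polygon, 1.0 samples at the triangle's center
        """
        centroid = tuple(sum(p[i] for p in triangle) / 3.0 for i in range(3))
        return tuple(v + (c - v) * inset for v, c in zip(vertex, centroid))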

UV Basis

The UV Basis options allow you to specify how to compute the UV basis that is used when you generate Normal maps and/or UV Basis maps that you configured on the Maps tab.

Force Perpendicular Basis

When activated, the V-basis is changed to ensure it is perpendicular to the (current) U basis and the interpolated normal. The U-basis is then changed to ensure it is perpendicular to the new V basis and the interpolated normal. The interpolated normal itself never changes.
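
In effect this is an orthogonalization step, roughly along the lines of the following Python sketch; the exact computation RenderMap uses may differ, and the resulting V basis may be flipped relative to the original depending on handedness.

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def normalize(v):
        length = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
        return (v[0] / length, v[1] / length, v[2] / length)

    def force_perpendicular_basis(u, v, n):
        """Return U and V bases perpendicular to each other and to normal n.

        The interpolated normal n is never modified: V is recomputed to be
        perpendicular to the current U and n, then U is recomputed from the
        new V and n.
        """
        v_new = normalize(cross(n, u))
        u_new = normalize(cross(v_new, n))
        return u_new, v_new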

Automatic Basis

Deactivating this option allows you to specify a Color at Vertices (CAV) property that encodes the tangents to be used to compute the UV Basis instead of a texture projection.

Texture Projection (For Automatic Generation)

Specifies the texture projection used to compute the UV Basis. In most cases, the texture projection specified in the Format options on the Basic tab is ideal for computing the UV basis because this projection tends to produce very little shearing in the UVs. See Specifying a Texture Projection [Texturing].

User-Defined Basis

When the Automatic Basis option is deactivated, this parameter specifies an existing CAV property that encodes the tangents to be used to compute the UV Basis. See Specifying Parameter Maps or Vertex Colors in Property Editors [Scene Elements].

Virtual Camera

Distance From Surface

Sets the virtual camera's distance from the RenderMapped objects.

Non-zero distances are useful when you want to include other scene elements in the output images. For example, you might have a wall with vines creeping up it. By increasing the Distance from Surface, you can bake the vines into the output images, even though they are separate objects.

Unless you want to incorporate elements other than the rendermapped object into the final rendermap image, it's best to keep the Distance from Surface set to 0. Non-zero settings increase the time it takes to generate the rendermap, often significantly.

Final gathering smoothness

If your object is rendered using final gathering, and you find that the rendermap output image contains undesirable artifacts, increase this value. The higher the value, the "smoother" the result.

Note that you should adjust your final gathering accuracy setting to get rid of artifacts in the resulting images. Once you've done that, adjusting the Final Gathering Smoothness value can help reduce the appearance of any remaining artifacts.

View

Defines the viewpoint of the virtual camera:

  • Perpendicular to Surface: the virtual camera samples the rendermapped object(s) from a position perpendicular to the object(s) surface, and from the distance specified by the Distance from Surface setting.

  • Scene Camera: the virtual camera samples the rendermapped object(s) from the direction of the scene camera specified in the render options, but from the distance specified by the Distance from Surface setting. As a result, the RenderMap sampling ray may not originate from the scene camera's position.

    This is useful when an object is to be viewed from a single viewpoint and you want to include view-dependent effects like specular highlights or reflections.

You can only use a scene camera to generate a surface color RenderMap or a RenderVertex. Illumination maps cannot be calculated using a scene camera.

If you are generating an Illumination surface map, specular highlights and reflections are automatically forced off, making camera direction irrelevant. As a result, the View options are unavailable.

Ignore Rendermapped Objects

When activated, RenderMap casts a ray to find the surface to be sampled, but ignores the objects affected by the RenderMap property. As a result, the color information that is computed is taken from the object the ray hits, and not the object being rendermapped.

This is useful for making sprites, and is generally preferable to the technique of putting a Constant shader with 100% transparency on the rendermapped objects.

However, the rendermapped objects will still appear in reflections and through transparent portions of the surface when the shader is evaluated. If your scene contains reflections and transparency, using a Constant shader may still be appropriate.

This option will lead to additional rays being cast and may affect computation times.

Bidirectional Tracing

When Ignore Rendermapped Objects is activated and the virtual camera View is set to "Perpendicular to Surface", activating this option helps in the transfer of maps from one surface to another.

When bidirectional tracing is activated, rendermap shoots each ray in both directions, if necessary, and chooses the best of the two samples. If neither sample is appropriate, both are rejected.

This is best explained by the following example:

Let's say you want to compute a high-resolution character's normal maps relative to a low-resolution approximated character, for use in games.

This can be achieved by making the low-resolution character transparent. The rendermap is now relative to the low-resolution character, but the ray-casting captures the color/normals of the high-resolution character.

However, this method can fail if the low-resolution character does not fully encompass the high-resolution character. When this is the case, and bidirectional tracing is activated, RenderMap first casts a ray towards the low-resolution character, but ignores it, attempting to capture the high-resolution character. If the ray misses the high-resolution character, or hits on the inside of the high-resolution character, as defined by the surface normal, a new ray is cast from the rendermapped surface, in the opposite direction.

If this second ray hits the inside of the high-resolution character, this other point is used as the cast position. If it misses the high-resolution character, it is treated as missing the object. For a normal map, this means a normal of 0,0,0 is returned (which, when biased, is .5,.5,.5).

Essentially, the optimal camera distance is chosen for every ray, resulting in much cleaner maps.

Using bidirectional tracing is preferable to increasing the Distance from Surface value to capture portions of the surface that are outside of the low-resolution character; increasing the distance from surface can fail in tight areas (armpits, for example), where it "catches" the wrong part of the surface.

This option will lead to additional rays being cast and may affect computation times.

Front Facing Triangles

Includes front-facing triangles in the RenderMap or RenderVertex calculation.

Back Facing Triangles

Includes back-facing triangles in the RenderMap or RenderVertex calculation.

Surface Map Settings

The options on this tab allow you to set a background color, apply basic color correction to surface color maps that you create, and control parameters for ambient occlusion maps.

Regenerate Maps

Generates the RenderMap images or Color at Vertices properties defined in this property editor. If multiple rendermap properties are being inspected at the same time, they are all regenerated together.

Background Color (RenderMap Only)

Color

Defines the color used in empty areas of the output texture.

Color Correction

Mode

Controls the color correction mode. Choose one of the following:

  • Disabled: disables the color correction options.

  • Grayscale (Average): the RenderMap image or RenderVertex CAV map is created in grayscale, based on the average of each pixel/vertex's RGB values.

  • Grayscale (Intensity): the RenderMap image or RenderVertex CAV map is created in grayscale, based on each pixel/vertex's intensity value.

  • Negative: the RenderMap image or RenderVertex CAV map is created as an inverted version, or negative, of the texture on the rendermapped object.

  • Custom: allows you to manually set the color correction options described below (gamma, contrast, and so on).

Gamma

Used to compensate for non-linearity in displays. Often used as a general brightness control.

Contrast

Increases or decreases the contrast between light and dark colors. A value of 0.5 means no change in contrast.

Hue

Controls a 360-degree hue shift through the HLS color space without modifying the intensity or saturation of the color.

Saturation

Adjusts the saturation, or amount of "pigment," in a color. A value of 1 results in no white and all color; a value of 0 results in no color, just white light.

Level

Adjusts the level or luminance of a color. Similar to intensity or brightness.

Ambient Occlusion

These options allow you to control the basic ambient occlusion shader parameters when you create an ambient occlusion map.

Samples

Specifies the number of sample rays used to determine occlusion. Higher settings produce a smoother result but take longer to generate the ambient occlusion map.

Dark Color

A color used to scale the ambient lighting where the object is completely occluded. If the object is partially occluded, this color is mixed with the Bright color.

Bright Color

A color used to scale the ambient lighting where the object is completely unoccluded. If the object is partially occluded, this color is mixed with the Dark color.

Spread

Defines the size of the cone from which sample rays are fired. A value of 0 samples only in the direction of the surface normal, while a value of 1.0 samples the entire hemisphere above the sampled point.

Maximum Distance

Specifies the maximum range for sample rays fired from a given point.

  • When Maximum Distance is set to 0, the entire scene is sampled, meaning that rays are traced until they reach the scene boundary.

  • When Maximum Distance is set to a non-zero value, sample rays are traced only for the specified distance. Objects outside of this range do not occlude the sampled object at all. Objects within this distance occlude more the closer they are to the object.

It's usually preferable to limit the maximum distance by using non-zero values. Distant objects generally affect the final result less because they occupy a proportionally much smaller area of the sampling hemisphere than closer objects of the same size. The slight reduction in overall occlusion that this may cause is offset by the accompanying reduction in render time.
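
For intuition about how Samples, Spread, Maximum Distance, and the Dark and Bright colors interact, here is a heavily simplified Python sketch; trace_distance is a hypothetical stand-in for the renderer's ray cast, and the actual shader's sampling distribution and weighting may differ.

    import math
    import random

    def sample_direction(normal, spread):
        """Pick a random unit direction in a cone around the surface normal.

        spread = 0 returns the normal itself; spread = 1 accepts the whole
        hemisphere above the surface. Rejection sampling keeps the sketch
        simple; it is not the shader's actual distribution.
        """
        if spread <= 0.0:
            return tuple(normal)
        limit = math.cos(spread * math.pi / 2.0)
        while True:
            d = [random.gauss(0.0, 1.0) for _ in range(3)]
            length = math.sqrt(sum(c * c for c in d)) or 1.0
            d = [c / length for c in d]
            if sum(a * b for a, b in zip(d, normal)) >= limit:
                return tuple(d)

    def ambient_occlusion_color(point, normal, trace_distance, samples=16,
                                spread=1.0, max_distance=0.0,
                                dark=(0.0, 0.0, 0.0), bright=(1.0, 1.0, 1.0)):
        """Estimate occlusion at a point and blend the Dark and Bright colors.

        trace_distance(origin, direction) stands in for the renderer's ray
        cast and should return the distance to the nearest hit, or None when
        nothing is hit. max_distance = 0 means rays are traced to the scene
        boundary; otherwise hits beyond max_distance do not occlude.
        """
        occluded = 0
        for _ in range(samples):
            hit = trace_distance(point, sample_direction(normal, spread))
            if hit is not None and (max_distance == 0.0 or hit <= max_distance):
                occluded += 1
        unoccluded = 1.0 - occluded / float(samples)
        # Fully occluded points return the Dark color, unoccluded points the
        # Bright color; partially occluded points return a mix of the two.
        return tuple(d + (b - d) * unoccluded for d, b in zip(dark, bright))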

Reflective

When activated, the shader performs reflection occlusion rather than ambient occlusion. This changes the sampling pattern from a cone around the surface normal to a more distributed pattern around the direction of reflection.

Using reflection occlusion can help enhance the realism of reflection maps by incorporating color and detail from the surrounding environment map.

Output Mode

The shader has five different modes that control the output color:

  • Occlusion Using Shading Normal: produces a standard ambient occlusion effect, where the Bright and Dark colors are used to scale the ambient lighting/reflection in accordance with the amount of occlusion.

    In this mode, sampling is performed in the direction of the shading normal.

  • Occlusion Using Bent Normals: produces a standard ambient occlusion effect, where the Bright and Dark colors are used to scale the ambient lighting/reflection in accordance with the amount of occlusion.

    In this mode, the sampling direction is biased to return more of the Bright color in the blended result.

  • Sampled Environment: is similar to regular occlusion, but also performs environment sampling. As such, when the scene uses an environment map, the map color is multiplied with the Bright color to produce the final unoccluded color value.

  • Return Bent Normals (World Space): returns a color value based on the average of the unoccluded sample rays in world space. The Red, Green, and Blue components correspond to the X, Y, and Z axes respectively.

  • Return Bent Normals (Object Space): returns a color value based on the average of the unoccluded sample rays in object space. The Red, Green, and Blue components correspond to the X, Y, and Z axes respectively.

Occlusion in Alpha

When activated, the scalar occlusion value is stored in the alpha channel, irrespective of the specified Output Mode.

Normally, the color returned for a given point is a blend between the Bright color and the Dark color, including the alpha channel, depending on how that point on the surface is occluded.

When this parameter is on, the blending between the bright and dark color is not done for the alpha channel. Instead, the alpha channel stores the actual amount of occlusion.

If you need the alpha channel to be 1, independently of the occlusion, simply set the Bright color and Dark color alpha values to 1, and the blend will always return 1.
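
A minimal sketch of the difference, assuming RGBA Dark and Bright colors and an occlusion fraction that has already been computed (illustrative only):

    def occlusion_color(occlusion, dark, bright, occlusion_in_alpha):
        """Blend RGBA Dark/Bright colors for a given amount of occlusion.

        occlusion is the fraction of sample rays that were blocked
        (1 = fully occluded). With occlusion_in_alpha on, the alpha channel
        stores that fraction directly instead of the blended alpha.
        """
        unoccluded = 1.0 - occlusion
        rgba = [d + (b - d) * unoccluded for d, b in zip(dark, bright)]
        if occlusion_in_alpha:
            rgba[3] = occlusion
        return tuple(rgba)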