How Distributed Rendering Works

Distributed rendering starts automatically when a render is initiated on a computer. The initiating computer is referred to as the master, and the other computers on the network are referred to as slaves. The master and slaves communicate through a mental ray service that listens on a designated TCP port and passes information to mental ray.
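
The port the service listens on is typically recorded in the system services file. As a purely illustrative sketch (the service name must match your installed version, and the port number shown is an assumption, not a documented default), such an entry might look like this in /etc/services on Linux or %SystemRoot%\System32\drivers\etc\services on Windows:

    # Illustrative only: the service name and port depend on the installed version
    raysatsi2013_3_10_1_4    7005/tcp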

There are two types of distributed rendering: Satellite and Standalone. However, only mental ray Satellite distributed rendering (raysat.exe) can be installed and configured during the Softimage setup process. The mental ray Standalone software (ray.exe) is a separate product that is installed and configured through its own setup and licensing process. Examples of Satellite distributed rendering are used throughout this section, but most of what is discussed here also applies to setting up mental ray Standalone distributed rendering.

Distributed Rendering Components

Satellite distributed rendering relies on a number of components that must be configured correctly.

Satellite Tokens

mental ray Satellite distributed rendering requires a Softimage license for the master machine only; no additional licenses are required for the slave machines. Instead, the master has a fixed number of Satellite tokens (4 by default), each of which enables one slave render processor. On multi-processor machines, each processor requires a separate token, so the default 4 tokens can drive, for example, four single-processor slaves or two dual-processor slaves. Satellite distributed rendering works only when rendering through Softimage (interactively or from the command line).

The mental ray Service

  • On Windows systems, the distributed rendering service listens on its designated TCP port and runs an associated batch file. Satellite distributed rendering uses the raysatsi2013_3_10_1_4server service, which runs the raysatsi2013_3_10_1_4.bat batch file on each computer.

    These batch files set the environment variables required for distributed rendering through setenv.bat and then run the mental ray renderer (raysat.exe); see the sketch after this list. For more information on how to manage the mental ray service on Windows, see Managing the mental ray Services.

  • On Linux systems, the xinetd daemon reads the raysatsi2013_3_10_1_4 service configuration file from the /etc/xinetd.d directory (included by xinetd.conf). The service is configured to call the ray3.sh script file, which sets the environment variables required for distributed rendering and then runs the mental ray renderer (raysat). A hypothetical configuration file is sketched after this list.
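
To make the mechanics concrete, here is a minimal sketch of the kind of Windows batch file described above. This is not the file shipped with Softimage; the install path is an assumption:

    @echo off
    rem Illustrative sketch only -- the installed raysatsi2013_3_10_1_4.bat
    rem differs in detail. Set up the render environment via setenv.bat,
    rem then start the Satellite renderer. The install path is an assumption.
    call "C:\Program Files\Autodesk\Softimage 2013\Application\bin\setenv.bat"
    "C:\Program Files\Autodesk\Softimage 2013\Application\bin\raysat.exe"

Similarly, a hypothetical /etc/xinetd.d/raysatsi2013_3_10_1_4 service file might resemble the following (the attribute values and server path are assumptions; the actual file is created by the installer):

    # Illustrative xinetd service definition; the real file may differ.
    # The TCP port is looked up by service name in /etc/services.
    service raysatsi2013_3_10_1_4
    {
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = root
        server      = /usr/Softimage/Softimage_2013/Application/bin/ray3.sh
        disable     = no
    }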

.ray3hosts File

In a distributed rendering setup, the master machine reads a local .ray3hosts file, which lists the slaves to be used for the render. The image to be rendered is broken up into segments (tiles), which are placed in a queue. Each computer, master or slave, requests tiles from the queue to render.

Once a tile is finished, it is sent back to the master and another tile is requested from the queue. The master assembles all the tiles to create a complete rendered image.

For more information on how to configure the .ray3hosts file, see Defining a .ray3hosts File.
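
In its simplest form, the file just lists the slave machines, one per line. The host names and address below are hypothetical; see Defining a .ray3hosts File for the exact syntax, including any per-host options:

    slave01
    slave02
    192.168.1.50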

linktab.ini File

During distributed rendering, the master also sends the slaves any extra information they need to complete a render, such as texture names and paths. When your render slaves run a mix of different operating systems, you can use a linktab file to translate file paths between platforms. For more information on defining a linktab file, see Configuring the linktab.ini File.
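
As a rough illustration of the idea (the exact column layout and rules are documented in Configuring the linktab.ini File, and the server and mount names here are hypothetical), a linktab file pairs equivalent paths as seen from each platform, for example a Windows UNC path with its Linux mount point:

    \\server\textures    /mnt/server/textures
    \\server\projects    /mnt/server/projects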
