This section provides references to Help topics that explain the workflows for using various important components in Softimage.
CrowdFX is a dedicated environment for building sophisticated crowd simulations in Softimage. Because it is built on ICE, you can create complex effects with large numbers of characters that react intelligently to their environment and to each other. And because it is ICE, you can use all the standard ICE nodes and compounds to customize the simulation as you like, in addition to using the special CrowdFX ICE nodes and compounds.
For more information about creating a basic crowd effect, see Creating a Basic Crowd Simulation.
Syflex ICE is the ICE version of the Syflex Cloth plug-in that is installed with Softimage. Syflex ICE includes a complete set of deformation simulator, force, collision, and constraint compounds and nodes that let you set up complete cloth and curve deformation simulations.
For more information about creating a basic cloth effect of a shirt being nailed to a torso at the waist, see Overview of the Syflex ICE Workflow.
Particles are very small pieces of solid or liquid matter. In the real world, particles are things like dust, sea salt, water droplets, sand, smoke, or sparks from a fire.
With ICE particles, you can create natural particles like these, but you can also go beyond the usual. You can make objects and even characters act like particles: rocks tumbling, pieces of paper scattered in the air, glass pieces breaking, leaves falling, grass growing, butterflies fluttering, bees buzzing, or humans walking about. Anything that you want to move like a particle can be created with ICE particles.
For more information about limiting the emission of particles using Emission Filter Parameters in the various Emit compounds, see Filtering ICE Particle Emissions.
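The basic idea behind any particle system can be illustrated outside Softimage with a minimal sketch. The following plain-Python example (not the ICE API; the gravity value and time step are illustrative assumptions) advances a single particle with semi-implicit Euler integration, which is the kind of per-frame update a particle simulator performs:

```python
# Minimal conceptual particle update: semi-implicit Euler integration under
# gravity. A plain-Python illustration of the idea, not Softimage's ICE API.

GRAVITY = (0.0, -9.81, 0.0)  # assumed acceleration (y up)
DT = 1.0 / 30.0              # assumed time step (30 fps)

def step(position, velocity):
    """Advance one particle by a single time step."""
    velocity = tuple(v + g * DT for v, g in zip(velocity, GRAVITY))
    position = tuple(p + v * DT for p, v in zip(position, velocity))
    return position, velocity

# A particle emitted upward slows down, stops, and falls back.
pos, vel = (0.0, 0.0, 0.0), (0.0, 5.0, 0.0)
for _ in range(60):  # simulate two seconds
    pos, vel = step(pos, vel)
```

In a real simulator the same update runs over millions of particles at once, with forces, collisions, and emission rules contributed by the nodes and compounds in the ICE tree.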
Face Robot in Softimage allows you to easily rig and animate lifelike faces. It lets you quickly set up a face rig by taking you through several defined stages. Once you have a face rig, you can animate the facial controls with either mocap or keyframes, then sculpt and tune the facial tissue using tools that are specific to Face Robot, as well as standard Softimage ones.
For more information about animating a Face Robot head, see The Basic Face Robot Workflow.
The lip-synchronization tools in Face Robot help you coordinate the facial animation of your character with a sound track.
For more information about doing lip-synchronization in Face Robot, see Lip-Sync Workflow Overview.
Animation layers allow you to have two or more levels of animation on an object's parameters at the same time. You usually layer animation when you need to add an offset to the main animation (the base layer) on an object but don't want to change that animation.
Layering lets you add keys on top of the existing base animation, which can be fcurves, expressions, linked parameters, or an action clip in the mixer, such as a mocap clip. You must have animation on the object's parameters before you can create an animation layer on top of them.
For more information about working with animation layers in Softimage, see Overview of the Animation Layer Workflow.
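The additive nature of layering can be sketched in a few lines. This is a conceptual illustration only (not how Softimage evaluates layers internally): the final value of a parameter at a frame is the base animation's value plus the offset contributed by each layer.

```python
# Conceptual sketch of additive animation layering: the final parameter
# value is the base animation plus the offset from each layer.
# Illustration only; not Softimage's actual evaluation code.

def evaluate(base_fcurve, layers, frame):
    """base_fcurve and each layer map a frame number to a value."""
    value = base_fcurve(frame)
    for layer in layers:
        value += layer(frame)  # each layer contributes an offset
    return value

# Base animation: a simple ramp; layer: a constant offset keyed on top.
base = lambda f: f * 2.0
offset_layer = lambda f: 1.5

result = evaluate(base, [offset_layer], 10)  # 21.5
```

Because the layer stores only offsets, muting or deleting it restores the untouched base animation, which is exactly why layering is used for non-destructive tweaks on top of mocap clips.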
With the Camera Sequencer, you can rearrange the original animation in time and define a camera for a specific time range.
For more information about using the Camera Sequencer, see Camera Sequencer.
In Softimage, Lagoa is a multiphysics simulator that uses ICE to create fluid, soft body, rigid body, and cloth effects. It is a framework for building different physical effects in a single unified environment.
For more information about creating Lagoa simulation effects from scratch, see Creating a Lagoa Effect from Scratch.
Final gathering is a way of calculating indirect illumination without using photon energy. Instead of using rays cast from a light to calculate illumination, final gathering uses rays cast from each illuminated point on an object's surface. The rays are used to sample a hemispherical area above each point and calculate direct and indirect illumination based on what the rays hit.
For more information about the process of setting up a final gathering render, see Final Gathering Workflow Overview.
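The hemisphere sampling described above can be sketched as a toy Monte Carlo estimator. This is a conceptual illustration of the idea (not the renderer's implementation; the environment function and ray count are assumptions for the example): directions are sampled over the hemisphere above a surface point, and the radiance arriving along them is averaged.

```python
# Conceptual sketch of final gathering: estimate indirect illumination at a
# surface point by casting rays over the hemisphere above it and averaging
# what they hit. A toy Monte Carlo estimator, not the renderer's code.

import math
import random

def sample_hemisphere(rng):
    """Uniformly sample a direction on the upper hemisphere (y up)."""
    u, v = rng.random(), rng.random()
    phi = 2.0 * math.pi * u
    y = v                                  # cos(theta), uniform in [0, 1)
    r = math.sqrt(max(0.0, 1.0 - y * y))
    return (r * math.cos(phi), y, r * math.sin(phi))

def gather(radiance_of, n_rays=256, seed=1):
    """Average the radiance seen along n_rays hemisphere directions."""
    rng = random.Random(seed)
    total = sum(radiance_of(sample_hemisphere(rng)) for _ in range(n_rays))
    return total / n_rays

# Toy environment: light arrives only from directions near straight up.
env = lambda d: 1.0 if d[1] > 0.9 else 0.0
indirect = gather(env)
```

More rays give a smoother estimate at higher cost, which is the same accuracy/speed trade-off the final gathering settings expose.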
Textures are images that control the visible properties of an object across its surface. You can use textures to define everything from basic surface color to other characteristics like bumps or dirt. Textures can also be used to drive a wide variety of shader parameters, allowing you to create maps that define an object's transparency, reflectivity, bumpiness, and so on.
For more information about applying a bitmap file or other image as a texture, see Texturing Workflow Overview.
Synoptic views are image maps that serve as visual toolbars and are associated with particular elements in your scene.
For more information about creating synoptic views, see Synoptic View Workflow.
Tracking lets you follow the motion of up to four points in an image sequence. You can use the resulting motion paths to paste one object onto another moving object, or to stabilize a sequence that has camera shake or other undesired motion. You can also destabilize a previously stabilized sequence to restore the camera motion.
For more information about using the tracker, see Tracking Workflow Overview.
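Stabilization from a tracked point reduces to a simple idea, sketched below in plain Python (an illustration of the concept, not Softimage's tracker; the pixel coordinates are made up for the example): offset each frame so the tracked feature stays where it was on the first frame.

```python
# Conceptual sketch of stabilization: given the tracked (x, y) position of a
# feature on each frame, compute the per-frame offset that moves the feature
# back to its first-frame position. Illustration only, not Softimage's tracker.

def stabilize_offsets(track):
    """Return the (dx, dy) to apply to each frame's image."""
    x0, y0 = track[0]
    return [(x0 - x, y0 - y) for (x, y) in track]

# A feature drifting right and down over three frames (camera shake):
track = [(100, 50), (102, 50), (104, 51)]
offsets = stabilize_offsets(track)  # [(0, 0), (-2, 0), (-4, -1)]
```

Destabilizing is the inverse: applying the negated offsets reintroduces the original camera motion, which is how a stabilized sequence can be restored.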
Rendering is the last step in the 3D content creation process. Once you have created your objects, textured them, animated them, and so on, you can render out your scene as a sequence of 2D images.
For more information on the sequence of tasks you might follow when rendering, see Rendering Workflow Overview.
Except where otherwise noted, this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.