Audio-driven facial animation workflow

Using a Character Face and the Voice device, you can set up a character to “talk,” using an audio file or live audio input as its voice. Through the Voice device, the phonemes detected in the audio input drive the expressions on the character’s face.
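Conceptually, the Voice device acts as a phoneme-to-shape mixer: each phoneme sound detected in the audio contributes a weight to the matching face shape. The standalone Python sketch below illustrates only that idea; the phoneme and shape names are hypothetical, and none of this is the MotionBuilder API.

```python
# Toy illustration of phoneme-driven shape weights; not the MotionBuilder API.
# Hypothetical mapping of phoneme sounds to head-model shapes (compare step 3
# of the procedure below).
PHONEME_TO_SHAPE = {
    "AA": "Mouth_Open",
    "EE": "Mouth_Wide",
    "OO": "Lips_Round",
    "MM": "Lips_Closed",
}

def blend_weights(phoneme_intensities):
    """Map {phoneme: intensity 0..1} to {shape: weight 0..1}."""
    weights = {shape: 0.0 for shape in PHONEME_TO_SHAPE.values()}
    for phoneme, intensity in phoneme_intensities.items():
        shape = PHONEME_TO_SHAPE.get(phoneme)
        if shape:
            weights[shape] = max(weights[shape], min(max(intensity, 0.0), 1.0))
    return weights

# One frame of analyzed audio: a strong "AA" with a hint of "OO".
print(blend_weights({"AA": 0.9, "OO": 0.2}))
# -> {'Mouth_Open': 0.9, 'Mouth_Wide': 0.0, 'Lips_Round': 0.2, 'Lips_Closed': 0.0}
```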


To drive facial expressions using audio data (a scripted sketch of the same setup follows these steps):

  1. Load a head model with shapes or cluster shapes.

    The head model needs enough shapes to cover the phonemes the character must form to convey your audio input, and the shapes should correspond to the sound parameters of the Voice device. See also Phoneme shapes.

  2. Add a Character Face to the scene. See Adding a Character Face.
  3. In the Character Face Definition pane, add a custom expression for each phoneme, then map the phoneme shapes you created for your head model to these custom expressions.
  4. Link the Character Face to a Voice device. See Linking a Character Face to a Voice device.
  5. Add sound parameters, or phoneme sounds, in the Voice device settings. See Adding sound parameters.

    As you add sound parameters (phoneme sounds) to the Voice device, corresponding custom expressions automatically appear in the Expressions pane.

    Note

    To fine-tune facial animation driven by the Voice device, adjust the values of the automatically added operators, add more operators, or add other devices to trigger facial expressions.
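If you prefer to script the repetitive parts of this setup (steps 1, 2, and 4), MotionBuilder’s Python API (pyfbsdk) can create the objects for you. The following is a minimal sketch, not a definitive implementation: the head model name "Head" and the Voice device’s template entry name "Voice" are assumptions, so check the Asset Browser if the names differ in your build. The per-phoneme mapping (step 3) and the sound parameters (step 5) are still set up interactively.

```python
# Minimal pyfbsdk sketch of the scriptable parts of this workflow.
# Assumptions: a head model named "Head" is already loaded (step 1), and
# the Voice device template is registered as "Voice" under the standard
# device templates group.
from pyfbsdk import *

# Step 1: find the head model that carries the phoneme shapes.
head = FBFindModelByLabelName("Head")  # assumed model name
if head is None:
    raise RuntimeError("Load a head model with phoneme shapes first.")

# Step 2: add a Character Face to the scene.
face = FBCharacterFace("MyCharacterFace")

# Step 4: create a Voice device from the asset templates and bring it online.
voice = FBCreateObject("Browsing/Templates/Devices", "Voice", "Voice")
if voice:
    FBSystem().Scene.Devices.append(voice)  # register the device in the scene
    voice.Online = True

# Steps 3 and 5 (mapping phoneme shapes to custom expressions and adding
# sound parameters) are interactive: use the Character Face Definition pane
# and the Voice device settings as described in the steps above.
```

Creating the device through FBCreateObject mirrors dragging it from the Asset Browser; the linking and mapping themselves still happen in the panes named in the steps.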
