How to use Audio-Based Lip Sync


Animaze offers four Audio Based Lip Sync options: Complex Audio Based Lip Sync, Simple Audio Based Lip Sync, Criware, and Oculus Lip Sync. To find these trackers, go to Settings > Advanced Tracking Configuration > Audio Based Lip Sync.

1. Complex Audio Based Lip Sync supports 16 mouth shapes and is generally best for 3D avatars or 2D avatars with very detailed mouths. Tune Complex Audio Based Lip Sync to your setup with the following controls:

Noise threshold sets the amplitude the sound must reach to open the avatar’s mouth. If you are in a very quiet setting, set the noise threshold low for maximum responsiveness. If there is background noise, increase the noise threshold so the background noise doesn’t activate your avatar’s mouth.

Hold duration (ms) sets how long the noise gate stays open before it can close. It is used to prevent the noise gate from closing between words: a longer hold duration requires a longer pause before the mouth stops moving.

Attack duration (ms) sets how long input must be detected before the noise gate opens. A longer attack duration requires more input before the mouth starts moving.
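Together, noise threshold, hold duration, and attack duration describe a noise gate. Below is a minimal sketch of that behavior in Python, assuming the input arrives as per-frame amplitude values; the names, units, and frame size are illustrative assumptions, not Animaze’s actual implementation:

class NoiseGate:
    def __init__(self, threshold=0.05, attack_ms=20, hold_ms=200, frame_ms=10):
        self.threshold = threshold   # amplitude the input must reach to count as speech
        self.attack_ms = attack_ms   # how long input must stay above the threshold before the gate opens
        self.hold_ms = hold_ms       # how long the gate stays open after the input drops again
        self.frame_ms = frame_ms     # duration of one audio frame
        self._loud_ms = 0
        self._quiet_ms = 0
        self.open = False

    def process(self, amplitude):
        """Feed one frame's amplitude; returns True while the gate (mouth) is open."""
        if amplitude >= self.threshold:
            self._loud_ms += self.frame_ms
            self._quiet_ms = 0
            if self._loud_ms >= self.attack_ms:
                self.open = True    # enough sustained input: the mouth starts moving
        else:
            self._quiet_ms += self.frame_ms
            self._loud_ms = 0
            if self._quiet_ms >= self.hold_ms:
                self.open = False   # a long enough pause: the mouth stops moving
        return self.open

With a higher hold_ms, short gaps between words do not close the gate; with a higher attack_ms, brief noises do not open it.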

Vowels smoothing sets the smoothing applied to inputs detected as vowel sounds. High smoothing makes the mouth movements more stable but less responsive.

Consonants smoothing sets the smoothing applied to inputs detected as consonant sounds. High smoothing makes the mouth movements more stable but less responsive.
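One common way to implement this kind of smoothing is an exponential moving average over the detected weights. The sketch below only illustrates the stability/responsiveness trade-off; it is not Animaze’s internal filter:

def smooth(previous, target, smoothing):
    # Blend from the previous value toward the newly detected weight.
    # smoothing in [0, 1): higher values change more slowly (stable but laggy).
    return smoothing * previous + (1.0 - smoothing) * target

vowel_weight = 0.0
for detected in [0.0, 0.9, 0.8, 0.1, 0.0]:   # example per-frame detections
    vowel_weight = smooth(vowel_weight, detected, smoothing=0.7)
    print(round(vowel_weight, 3))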

2. Simple Audio Based Lip Sync supports 3 mouth shapes. It is generally best for 2D avatars.

Voice Pitch adjusts the lip sync algorithm to your voice frequency. Match the slider to the pitch of your voice: move it to the right for higher-pitched voices and to the left for lower-pitched voices.

Voice Speed adjusts the lip sync algorithm to match your speech rate. If you are a fast talker, move the slider to the right. Conversely, slower talkers should set the slider to the left.

Microphone Sensitivity adjusts how much to amplify the audio input. If you are close to the microphone, set the value lower. If you are further from the microphone, set the value higher.
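Conceptually, Simple Audio Based Lip Sync maps the (amplified) input level to one of its three mouth shapes. The sketch below shows how microphone sensitivity can act as a gain on the input; the shape names and thresholds are assumptions for illustration, not Animaze’s actual algorithm:

def mouth_shape(amplitude, sensitivity=1.0):
    # Sensitivity acts as a gain: raise it when you are farther from the microphone.
    level = amplitude * sensitivity
    if level < 0.1:
        return "closed"
    elif level < 0.5:
        return "half_open"
    return "open"

print(mouth_shape(0.08, sensitivity=1.0))   # closed
print(mouth_shape(0.08, sensitivity=4.0))   # half_open (quiet input, boosted by sensitivity)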

3. Criware detects vowels and remaps them to the Animaze viseme system (see the sketch after its settings below).

Noise threshold - changes the amplitude of the sound needed to open the avatar’s mouth. If you are in a quiet setting, set the noise threshold low for maximum responsiveness. If there is some din, increase the noise threshold so the background noise doesn’t activate your avatar’s mouth.

Hold duration (ms) - sets how long the noise gate stays open before it can close. Use this setting to prevent the noise gate from closing between words: a longer hold duration requires a longer pause before the mouth stops moving.

Attack duration (ms) - sets how long input must be detected before the noise gate opens. A longer attack duration requires more input before the mouth starts moving.
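As a rough illustration of the remapping step, detected vowel amounts can be converted into per-viseme weights with a lookup table. Both the five vowel classes and the Animaze viseme names used below are hypothetical placeholders for illustration, not the product’s real identifiers:

VOWEL_TO_VISEME = {
    "a": "viseme_AA",   # placeholder target names
    "i": "viseme_IH",
    "u": "viseme_OU",
    "e": "viseme_EH",
    "o": "viseme_OH",
}

def remap(vowel_amounts):
    # Convert per-vowel detection amounts into per-viseme weights.
    return {VOWEL_TO_VISEME[v]: amount for v, amount in vowel_amounts.items()}

print(remap({"a": 0.7, "i": 0.1, "u": 0.0, "e": 0.2, "o": 0.0}))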

4. OVR Lip Sync (Oculus) detects 15 mouth shapes and remaps them to the Animaze viseme system (see the sketch after its settings below).

Noise threshold - changes the amplitude of the sound needed to open the avatar’s mouth. If you are in a quiet setting, set the noise threshold low for maximum responsiveness. If there is some din, increase the noise threshold so the background noise doesn’t activate your avatar’s mouth.

Hold duration (ms) - sets how long the noise gate stays open before it can close. Use this setting to prevent the noise gate from closing between words: a longer hold duration requires a longer pause before the mouth stops moving.

Attack duration (ms) - sets how long input must be detected before the noise gate opens. A longer attack duration requires more input before the mouth starts moving.
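Oculus-style lip sync typically reports a weight per frame for each of its 15 visemes (sil, PP, FF, TH, DD, kk, CH, SS, nn, RR, aa, E, ih, oh, ou). The sketch below shows one possible way to consume those weights by picking the strongest viseme per frame; how Animaze actually maps them onto its own viseme system is not specified here:

OVR_VISEMES = ["sil", "PP", "FF", "TH", "DD", "kk", "CH", "SS",
               "nn", "RR", "aa", "E", "ih", "oh", "ou"]

def strongest_viseme(weights):
    # weights: 15 values in [0, 1], one per viseme, for the current audio frame.
    index = max(range(len(weights)), key=lambda i: weights[i])
    return OVR_VISEMES[index]

frame = [0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
         0.0, 0.0, 0.8, 0.1, 0.0, 0.0, 0.0]
print(strongest_viseme(frame))   # "aa"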

After you enable one of the available lip sync trackers and adjust its settings, click Save to store your tracking preferences and exit the Audio Based Lip Sync drawer menu.

How To Feed Audio Files Directly Into Audio Based Lip Sync



You can use SplitCam to make your avatar lip sync to an audio file. Here are the steps:

1. Download SplitCam for free from its official website: https://splitcam.com/

2. In SplitCam, create a scene by clicking Media Layers (the plus sign)

3. Select System Audio

4. In Animaze, go to Settings > Audio and select SplitCam Microphone as the input source

5. In Animaze, go to Settings > Advanced Tracking Configuration > Audio Based Lip Sync and turn it on.

6. Play an audio file on your computer and the avatar will lip sync to the audio file!

Source: https://steamcommunity.com/sharedfiles/filedetails/?id=2656326991					
