Leveraging NVIDIA AI animation technology, Reallusion integrates Character Creator, iClone, and Audio2Face into one seamless solution. This article presents the AI-powered workflow for multilingual facial lip-sync animation production and shows how Audio2Face compatibility extends to cross-platform 3D character specifications. With a wide array of facial editing features, the solution simplifies exporting iClone character animations to leading 3D engines such as Blender, Unreal Engine, and Unity. Omniverse users also gain a comprehensive solution for both facial and full-body animation with CC characters.
As an AI-powered application, NVIDIA Audio2Face produces expressive facial animations solely from audio input. In addition to generating natural lip-sync animations for multilingual dialogue, the latest standalone release of Audio2Face also supports facial expressions, featuring slider controls and a keyframe editor.
Unlike the majority of English-centric lip-sync solutions, Audio2Face stands out with its exceptional ability to generate animation from any language, including songs and gibberish. In addition to the standard AI model Mark, users have access to Claire, a new deep learning model tailored for female characters and proficient in Asian languages. Claire's friendly intonation is well suited to customer-interaction scenarios.
Two free plugins enable an automated workflow. With just a single click, configure a CC character in NVIDIA Audio2Face, animate it in real time alongside an imported audio track, and seamlessly transfer the talking animation back to iClone for additional refinement before exporting it to 3D tools and game engines.
The CC Character Auto Setup plugin for Audio2Face is the result of a collaboration between NVIDIA and Reallusion, condensing the manual 18-step process into a single step. By importing a CC character and choosing a training model — Mark or Claire — artists can instantly see lifelike talking animations synchronized with audio files. They can also experiment with motion sliders and automatic expressions, and even set keyframes. The finalized animations can then be sent to iClone for further production.
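Conceptually, the Auto Setup plugin collapses character import, facial-profile retargeting, model selection, and audio binding into one action. The sketch below mirrors that flow as plain Python; the session class and its methods are illustrative stand-ins, not the actual plugin or Audio2Face API.

```python
# Minimal conceptual sketch of the one-click setup, assuming a hypothetical
# session object; the class and its methods are illustrative stand-ins,
# not the actual Auto Setup plugin or Audio2Face API.

class Audio2FaceSession:
    """Stub standing in for a connection to Audio2Face."""

    def import_character(self, usd_path: str) -> str:
        print(f"Importing CC character from {usd_path}")
        return usd_path

    def select_model(self, model: str) -> None:
        assert model in ("Mark", "Claire")   # the two training models
        print(f"Using AI model: {model}")

    def attach_audio(self, wav_path: str) -> None:
        print(f"Driving the animation with {wav_path}")


def one_click_setup(usd_path: str, wav_path: str, model: str = "Mark") -> None:
    # The plugin bundles character import, retargeting, model selection,
    # and audio binding into a single action; this mirrors that flow.
    session = Audio2FaceSession()
    session.import_character(usd_path)
    session.select_model(model)
    session.attach_audio(wav_path)


if __name__ == "__main__":
    one_click_setup("cc_character.usd", "dialogue.wav", model="Claire")
```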
The free NVIDIA Audio2Face plugin for iClone is tailored to receive animation data from Audio2Face. In addition to importing animations, it enhances the liveliness of facial features, resulting in a superior cut suitable for final production.
Animations can be tweaked via a dynamic interface. Adjust parameters such as expression strength and head movement, or add darting eyes to enliven the performance. Enlarge the jaw-open range to heighten emotional tension, and fine-tune the position of the tongue for precise enunciation.
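The sketch below illustrates, under stated assumptions, how such per-feature refinements can be modeled as simple scales and offsets on incoming blendshape curves. The blendshape names and parameters are illustrative, not the iClone plugin's actual controls.

```python
# Illustrative sketch (not the iClone API): per-feature refinement modeled
# as scales and offsets applied to per-frame blendshape weights.

from typing import Dict, List

def refine_curves(
    curves: Dict[str, List[float]],    # blendshape name -> per-frame weights (0..1)
    expression_strength: float = 1.0,  # global scale on expressive shapes
    jaw_open_scale: float = 1.2,       # enlarge the jaw-open range for tension
    tongue_offset: float = 0.05,       # nudge tongue shapes for crisper enunciation
) -> Dict[str, List[float]]:
    refined = {}
    for name, weights in curves.items():
        if name.startswith("Tongue"):
            refined[name] = [min(1.0, w + tongue_offset) for w in weights]
            continue
        scale = jaw_open_scale if name.startswith("Jaw_Open") else expression_strength
        refined[name] = [min(1.0, w * scale) for w in weights]
    return refined

# Example: boost jaw opening on a short clip of two hypothetical tracks.
example = {"Jaw_Open": [0.2, 0.5, 0.4], "Tongue_Up": [0.1, 0.3, 0.2]}
print(refine_curves(example))
```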
Generative AI animation is susceptible to noise, particularly when audio is captured on low-fidelity devices or in unfavorable environments. The Reallusion Audio2Face integration mitigates these limitations with a highly refined noise filter that eliminates jitter and delivers solid results despite poor audio quality.
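Reallusion's actual filter is more sophisticated, but the underlying idea of smoothing noisy per-frame weights can be shown with a minimal moving-average sketch; the curve values here are made up for illustration.

```python
# Minimal illustration of jitter removal on an animation curve; this is a
# plain centered moving average, not Reallusion's production filter.

def smooth(weights, window=5):
    """Centered moving average over a list of per-frame blendshape weights."""
    half = window // 2
    out = []
    for i in range(len(weights)):
        lo, hi = max(0, i - half), min(len(weights), i + half + 1)
        out.append(sum(weights[lo:hi]) / (hi - lo))
    return out

noisy = [0.10, 0.42, 0.15, 0.48, 0.12, 0.45, 0.11]  # jittery mouth-open curve
print(smooth(noisy))  # much flatter, while preserving the overall trend
```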
After obtaining a satisfactory animation from Audio2Face, artists often need a finishing touch, particularly when handling emotional shifts or emphasizing specific mouth shapes at different points in the dialogue. iClone empowers facial editing, allowing for refined lip-sync, the addition of natural expressions, and the incorporation of head movement sourced from mocap equipment (AccuFACE, iPhone Live Face).
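One way to think about this finishing pass is as layering: the Audio2Face lip-sync stays the base track, and hand-keyed expression emphasis or mocap-driven motion is added on top. The additive-blend sketch below is a conceptual illustration with made-up values, not how iClone combines layers internally.

```python
# Hedged sketch of layered refinement: a base lip-sync curve plus an
# additive emphasis layer. Values and the blend scheme are illustrative.

def blend_layers(base, *layers):
    """Additively combine per-frame values from several animation layers."""
    out = list(base)
    for layer in layers:
        for i in range(len(out)):
            out[i] += layer[i] if i < len(layer) else 0.0
    return out

lip_sync_jaw   = [0.30, 0.55, 0.40]   # base curve from Audio2Face
smile_emphasis = [0.00, 0.10, 0.20]   # hand-keyed emotional shift
print(blend_layers(lip_sync_jaw, smile_emphasis))
```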
iClone streamlines 3D animation within a user-friendly platform that integrates facial performance, character animation, motion capture production, scene design, and cinematic storytelling.
CC is a complete solution designed to produce fully-rigged, animatable characters with realistic or stylized features. It is compatible with iClone and other 3D applications.
Have CC characters auto-prepared in A2F.
Load and edit facial animation in iClone.