This project delivers a plugin that integrates a speech-driven AI model into animation pipelines, generating facial animation from a portrait image and an audio input. It is still a prototype, but its goal is to help animators create expressive facial motion from video-generation models while retaining full control over the animation data directly within Maya.
Thesis
https://nccastaff.bournemouth.ac.uk/jmacey/MastersProject/MSc25/06/index.html
GitHub Repository
https://github.com/DanielaHz/Speech-Driven-to-3D-Facial-Animation?tab=readme-ov-file