news

Apr 1, 2024 We are excited to share our recent work on monocular 3D face reconstruction that will be presented at CVPR 2024. We introduce MoSAR, a new method that turns a portrait image into a realistic 3D avatar.

From a single image, MoSAR estimates a detailed mesh and texture maps at 4K resolution, capturing pore-level details. This avatar can be rendered from any viewpoint and under different lighting conditions.

We are also releasing a new dataset called FFHQ-UV-Intrinsics, the first dataset to offer rich intrinsic face attributes (diffuse, specular, ambient occlusion, and translucency maps) at high resolution for 10K subjects.
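As a rough illustration of how these four intrinsic maps fit together, the sketch below combines them in a toy shading model. This is not the dataset's official API; the file layout and names are hypothetical, and a real renderer would handle translucency with subsurface scattering rather than a simple additive term.

```python
import numpy as np
import imageio.v3 as iio

def load_map(path):
    """Load an 8-bit texture map to float in [0, 1]; grayscale maps get a
    trailing channel axis so they broadcast against RGB maps."""
    img = iio.imread(path).astype(np.float32) / 255.0
    return img[..., None] if img.ndim == 2 else img

# Hypothetical per-subject file layout; the released dataset may differ.
diffuse      = load_map("subject_00001/diffuse.png")
specular     = load_map("subject_00001/specular.png")
ao           = load_map("subject_00001/ambient_occlusion.png")
translucency = load_map("subject_00001/translucency.png")

# Toy shading: Lambertian diffuse plus a Blinn-Phong-like specular lobe,
# both attenuated by ambient occlusion. Translucency is folded in as a
# crude additive term purely for visualization.
n_dot_l, n_dot_h = 0.8, 0.9  # placeholder geometry terms (light/half vectors)
shaded = ao * (diffuse * n_dot_l + specular * n_dot_h ** 32)
shaded = np.clip(shaded + 0.1 * translucency, 0.0, 1.0)
```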
Check out the project page!
Oct 27, 2023 Our paper EDMSound: Spectrogram Based Diffusion Models for Efficient and High-Quality Audio Synthesis has been accepted for presentation at the NeurIPS Workshop on ML for Audio. This work was done in collaboration with colleagues from the University of Rochester.

In this paper, we propose a diffusion-based generative model in the spectrogram domain under the framework of elucidated diffusion models (EDM). We also reveal a potential concern with diffusion-based audio generation models: they tend to duplicate their training data.
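For readers unfamiliar with the EDM framework, the sketch below shows a minimal deterministic Euler sampler under the EDM probability-flow ODE, dx/dσ = (x − D(x, σ))/σ, with the usual ρ-spaced noise schedule. It is a generic illustration, not EDMSound's actual sampler; the `denoise` network and how its output spectrogram is vocoded back to audio are left unspecified.

```python
import torch

def edm_sigmas(n=32, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    """Rho-spaced noise levels from sigma_max down to sigma_min, then 0."""
    ramp = torch.linspace(0, 1, n)
    sigmas = (sigma_max ** (1 / rho)
              + ramp * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))) ** rho
    return torch.cat([sigmas, torch.zeros(1)])  # final step lands on sigma = 0

@torch.no_grad()
def edm_euler_sample(denoise, shape, n_steps=32):
    """Deterministic Euler integration of dx/dsigma = (x - D(x, sigma)) / sigma.

    `denoise(x, sigma)` is assumed to return the model's estimate of the
    clean spectrogram; its architecture is not specified here.
    """
    sigmas = edm_sigmas(n_steps)
    x = torch.randn(shape) * sigmas[0]                 # start from pure noise
    for i in range(len(sigmas) - 1):
        d = (x - denoise(x, sigmas[i])) / sigmas[i]    # ODE derivative at sigma_i
        x = x + d * (sigmas[i + 1] - sigmas[i])        # Euler step toward 0
    return x  # generated spectrogram, to be converted to audio by a vocoder
```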

Check out the project page!
Sep 21, 2023 Our paper “Rhythm Modeling for Voice Conversion” has been published in IEEE Signal Processing Letters. We also released it on arXiv.
In this paper, we model speaker rhythm to perform voice conversion that respects the target speaker’s natural rhythm. Rather than merely approximating the global speech rate, we model durations separately for sonorants, obstruents, and silences.
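As a toy illustration of why class-wise rhythm matters, the sketch below rescales each phone’s duration by the ratio of target to source mean duration for its broad class, instead of applying one global speech-rate factor. All names, phone sets, and statistics here are made up for illustration; this is not the paper’s model.

```python
# Illustrative subsets only; real systems use a full phone inventory.
SONORANTS = {"aa", "iy", "m", "n", "l", "r"}
OBSTRUENTS = {"p", "t", "k", "s", "sh", "f"}
SILENCES = {"sil", "sp"}

def phone_class(phone: str) -> str:
    if phone in SILENCES:
        return "silence"
    return "sonorant" if phone in SONORANTS else "obstruent"

def convert_durations(phones, durations, src_mean, tgt_mean):
    """Rescale each phone's duration by the target/source ratio of the
    mean duration of its class (sonorant, obstruent, or silence)."""
    return [d * tgt_mean[phone_class(p)] / src_mean[phone_class(p)]
            for p, d in zip(phones, durations)]

# Example: this target speaker lengthens sonorants and pauses, not obstruents.
src = {"sonorant": 0.10, "obstruent": 0.06, "silence": 0.20}
tgt = {"sonorant": 0.13, "obstruent": 0.06, "silence": 0.30}
print(convert_durations(["sil", "s", "aa", "n"], [0.20, 0.07, 0.12, 0.09], src, tgt))
```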

Check out the demo page!
Jul 15, 2023 Ubisoft published a blog post describing our system for gesture generation conditioned on speech.
This system was presented in “ZeroEGGS: Zero-shot Example-based Gesture Generation from Speech” and showcased on Two Minute Papers.
Jul 21, 2021 This is the recording of the presentation I gave at the 2021 Game Developers Conference on “speech synthesis applied to video games”.
Generating spoken dialog lines artificially could prove pivotal for the future of the gaming industry. Aside from reducing production costs, it opens opportunities for new types of in-game interactions closer to real-world experiences. The goal of the talk is to give an honest snapshot of the state of the technology and to discuss remaining challenges as well as present and future use cases. We demonstrate how current commercial speech synthesis solutions do not directly apply to the gaming context, where voices require a high level of expressivity. We discuss current solutions for controlling expressivity and how we use speech synthesis at Ubisoft.