November 2022
Faculty and students presented the following papers at the 30th ACM International Conference on Multimedia (ACM MM 2022), held in Lisbon, Portugal, from 10–14 October.
- Extreme-scale Talking-Face Video Upsampling with Audio-Visual Priors – Sindhu B Hegde, Rudrabha Mukhopadhyay, Vinay Namboodiri and C V Jawahar.
Research work as explained by the authors:
In this paper, we explore an interesting question: what can be obtained from an 8 × 8 pixel video sequence? Surprisingly, it turns out to be quite a lot. We show that when we process this 8 × 8 video with the right set of audio and image priors, we can obtain a full-length 256 × 256 video. We achieve this 32× scaling of an extremely low-resolution input using our novel audio-visual upsampling network. The audio prior helps recover elemental facial details and precise lip shapes, while a single high-resolution target identity image prior provides rich appearance details. Our approach is an end-to-end multi-stage framework. The first stage produces a coarse intermediate output video that can then be used to animate the single target identity image and generate realistic, accurate and high-quality outputs. Our approach is simple and performs exceedingly well (an 8× improvement in FID score) compared to previous super-resolution methods. We also extend our model to talking-face video compression, and show that we obtain a 3.5× improvement in terms of bits/pixel over the previous state-of-the-art. The results from our network are thoroughly analyzed through extensive ablation experiments (in the paper and supplementary material). We also provide a demo video along with code and models on our website.
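For readers curious how the pieces of such a pipeline fit together, here is a minimal PyTorch sketch of a conditional upsampler that fuses a low-resolution video, per-frame audio features and a single high-resolution identity image. All module names, layer sizes and feature dimensions below are illustrative placeholders chosen for this sketch; this is not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class AudioVisualUpsampler(nn.Module):
    """Illustrative upsampler: 8x8 lip video + audio + one 256x256
    identity image -> 256x256 video. Placeholder layers, not the
    paper's design."""
    def __init__(self, d=128):
        super().__init__()
        # Encode each 8x8 RGB frame into a d-dim vector.
        self.video_enc = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, d, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Encode per-frame audio features (e.g. 80-bin mel slices).
        self.audio_enc = nn.Sequential(nn.Linear(80, d), nn.ReLU())
        # Encode the single high-res identity image for appearance.
        self.id_enc = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=4), nn.ReLU(),
            nn.Conv2d(64, d, 4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Decoder: fused features -> 256x256 frame (4 -> 256 via six
        # 2x upsampling blocks). The paper's two stages are collapsed
        # into this single decoder for brevity.
        self.decode = nn.Sequential(
            nn.Linear(3 * d, 4 * 4 * d), nn.ReLU(),
            nn.Unflatten(1, (d, 4, 4)),
            *[layer for c_in, c_out in zip(
                  [d, d, 64, 64, 32, 32], [d, 64, 64, 32, 32, 16])
              for layer in (nn.Upsample(scale_factor=2),
                            nn.Conv2d(c_in, c_out, 3, padding=1),
                            nn.ReLU())],
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())

    def forward(self, lr_video, mels, id_image):
        # lr_video: (B,T,3,8,8); mels: (B,T,80); id_image: (B,3,256,256)
        B, T = lr_video.shape[:2]
        v = self.video_enc(lr_video.flatten(0, 1))          # (B*T, d)
        a = self.audio_enc(mels.flatten(0, 1))              # (B*T, d)
        i = self.id_enc(id_image).repeat_interleave(T, 0)   # (B*T, d)
        frames = self.decode(torch.cat([v, a, i], dim=1))
        return frames.view(B, T, 3, 256, 256)

model = AudioVisualUpsampler()
out = model(torch.randn(2, 5, 3, 8, 8), torch.randn(2, 5, 80),
            torch.randn(2, 3, 256, 256))
print(out.shape)  # torch.Size([2, 5, 3, 256, 256])
```

The key idea the sketch preserves is the division of labour described in the abstract: the audio branch supplies lip-shape cues that an 8 × 8 input cannot, while the identity branch supplies appearance detail.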
- Lip-to-Speech Synthesis for Arbitrary Speakers in the Wild – Sindhu B Hegde, K R Prajwal, Rudrabha Mukhopadhyay, Vinay Namboodiri (University of Bath) and C V Jawahar.
Research work as explained by the authors:
In this work, we address the problem of generating speech from silent lip videos for any speaker in the wild. In stark contrast to previous works, our method (i) is not restricted to a fixed number of speakers, (ii) does not explicitly impose constraints on the domain or the vocabulary and (iii) deals with videos that are recorded in the wild as opposed to within laboratory settings. The task presents a host of challenges, with the key one being that many features of the desired target speech, like voice, pitch and linguistic content, cannot be entirely inferred from the silent face video. In order to handle these stochastic variations, we propose a new VAE-GAN architecture that learns to associate the lip and speech sequences amidst the variations. With the help of multiple powerful discriminators that guide the training process, our generator learns to synthesize speech sequences in any voice for the lip movements of any person. Extensive experiments on multiple datasets show that we outperform all baselines by a large margin. Further, our network can be fine-tuned on videos of specific identities to achieve a performance comparable to single-speaker models that are trained on 4× more data. We conduct numerous ablation studies to analyze the effect of different modules of our architecture.
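To make the VAE-GAN idea concrete, below is a toy PyTorch skeleton in which a latent Gaussian absorbs the speech factors (voice, pitch) that cannot be read off the lips, and a discriminator provides the adversarial signal. Every layer, dimension and loss weight here is an assumption made for illustration, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class LipToSpeechVAEGAN(nn.Module):
    """Toy VAE-GAN skeleton: lip features -> mel-spectrogram frame.
    Dimensions and layers are illustrative placeholders."""
    def __init__(self, lip_dim=512, z_dim=64, mel_bins=80):
        super().__init__()
        # VAE encoder: map lip features to a latent Gaussian that
        # models the stochastic factors not visible on the lips.
        self.enc = nn.Sequential(nn.Linear(lip_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, z_dim)
        self.to_logvar = nn.Linear(256, z_dim)
        # Generator/decoder: latent sample + lip features -> mel frame.
        self.dec = nn.Sequential(
            nn.Linear(z_dim + lip_dim, 256), nn.ReLU(),
            nn.Linear(256, mel_bins))

    def forward(self, lip_feats):
        h = self.enc(lip_feats)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        mel = self.dec(torch.cat([z, lip_feats], dim=-1))
        kl = 0.5 * torch.mean(mu.pow(2) + logvar.exp() - 1 - logvar)
        return mel, kl

# One of possibly several discriminators guiding the generator.
disc = nn.Sequential(nn.Linear(80, 128), nn.ReLU(), nn.Linear(128, 1))

model = LipToSpeechVAEGAN()
lip_feats = torch.randn(4, 512)   # per-frame lip features (assumed)
mel, kl = model(lip_feats)
adv = nn.functional.binary_cross_entropy_with_logits(
    disc(mel), torch.ones(4, 1))  # generator's adversarial loss
loss = adv + kl                   # a reconstruction term is also needed in practice
print(mel.shape, loss.item())
```

The design choice the sketch highlights is why a plain regressor would struggle here: many target speech attributes are underdetermined by the silent video, so the latent variable lets the model represent that ambiguity while the discriminators push the outputs toward realistic speech.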
ACM Multimedia, since its inception in 1993, has been the premier international conference and a key event for showcasing scientific achievements and innovative industrial products in the multimedia field. This year, the conference was held in Lisbon, Portugal. ACM Multimedia 2022 featured an extensive program of technical sessions covering all aspects of multimedia through oral, video and poster presentations, along with tutorials, panels, exhibits, demonstrations, workshops, a doctoral symposium, a multimedia grand challenge, brave new ideas on shaping the research landscape, an open source software competition, and an interactive arts program.
Conference page: https://2022.acmmm.org/