
WACV 2023

Faculty and students presented the following papers at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2023), held from 3 – 7 January 2023 in Waikoloa, Hawaii.

  • DSAG: A Scalable Deep Framework for Action-Conditioned Multi-Actor Full Body Motion Synthesis – Dr. Ravi Kiran Sarvadevabhatla, Debtanu Gupta, Shubh Maheshwari, Sai Shashank Kalakonda, Manasvi Vaidyula 

Research work as explained by the authors:

We introduce DSAG, a controllable deep neural framework for action-conditioned generation of full-body, multi-actor, variable-duration actions. To compensate for incompletely detailed finger joints in existing large-scale datasets, we introduce full-body dataset variants with detailed finger joints. To overcome shortcomings in existing generative approaches, we introduce dedicated representations for encoding finger joints. We also introduce novel spatiotemporal transformation blocks with multi-head self-attention and specialized temporal processing. These design choices enable generation across a large range of body joint counts (24 – 52), frame rates (13 – 50), global body movement (in place, locomotion) and action categories (12 – 120), across multiple datasets (NTU-120, HumanAct12, UESTC, Human3.6M). Our experimental results demonstrate DSAG's significant improvements over the state of the art and its suitability for action-conditioned generation at scale.

Full paper: https://openaccess.thecvf.com/content/WACV2023/papers/Gupta_DSAG_A_Scalable_Deep_Framework_for_Action-Conditioned_Multi-Actor_Full_Body_WACV_2023_paper.pdf

Project page: https://skeleton.iiit.ac.in/dsag
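
To make the abstract's "spatiotemporal transformation blocks with multi-head self-attention" concrete, here is a minimal PyTorch sketch of that general pattern: self-attention over joints within a frame, followed by self-attention over frames per joint. The class name, layer sizes, and exact composition are illustrative assumptions, not the authors' implementation; see the paper for the actual architecture.

```python
# Hypothetical sketch of a spatiotemporal self-attention block, in the spirit
# of the blocks the DSAG abstract describes. Names and sizes are assumptions.
import torch
import torch.nn as nn


class SpatioTemporalBlock(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 8):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_s = nn.LayerNorm(dim)
        self.norm_t = nn.LayerNorm(dim)
        self.norm_f = nn.LayerNorm(dim)
        self.ff = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, joints, dim) per-joint pose features.
        b, t, j, d = x.shape

        # Spatial self-attention: joints attend to each other within a frame.
        s = x.reshape(b * t, j, d)
        q = self.norm_s(s)
        s = s + self.spatial_attn(q, q, q, need_weights=False)[0]

        # Temporal self-attention: each joint attends across frames.
        s = s.reshape(b, t, j, d).permute(0, 2, 1, 3).reshape(b * j, t, d)
        q = self.norm_t(s)
        s = s + self.temporal_attn(q, q, q, need_weights=False)[0]

        # Position-wise feed-forward refinement.
        s = s + self.ff(self.norm_f(s))
        return s.reshape(b, j, t, d).permute(0, 2, 1, 3)  # (b, t, j, d)
```

Because self-attention is agnostic to sequence length, a block of this shape can in principle accommodate the variable joint counts and frame rates the abstract mentions, which is one reason such designs scale across datasets.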

 

  • Audio-Visual Face Reenactment – Prof. C V Jawahar, Madhav Agarwal, Rudrabha Mukhopadhyay and Dr. Vinay Namboodiri (University of Bath) 

Research work as explained by the authors:

This work proposes a novel method to generate realistic talking-head videos using audio and visual streams. We animate a source image by transferring head motion from a driving video using a dense motion field generated from learnable keypoints. We improve the quality of lip sync using audio as an additional input, helping the network attend to the mouth region. We use additional priors from face segmentation and a face mesh to improve the structure of the reconstructed faces. Finally, we improve the visual quality of the generations by incorporating a carefully designed identity-aware generator module. The identity-aware generator takes the source image and the warped motion features as input to generate a high-quality output with fine-grained details. Our method produces state-of-the-art results and generalizes well to unseen faces, languages, and voices. We comprehensively evaluate our approach using multiple metrics, outperforming current techniques both qualitatively and quantitatively. Our work opens up several applications, including enabling low-bandwidth video calls.

Demo video and additional information: http://cvit.iiit.ac.in/research/projects/cvit-projects/avfr

Full paper: https://openaccess.thecvf.com/content/WACV2023/html/Agarwal_Audio-Visual_Face_Reenactment_WACV_2023_paper.html
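
As a rough illustration of the warping step the abstract alludes to (a dense motion field warping the source before the identity-aware generator refines it), here is a hedged PyTorch sketch. The function name and tensor layout are assumptions for illustration, not the paper's API; the motion field is assumed to be a per-pixel sampling grid in grid_sample's [-1, 1] convention.

```python
# Illustrative only: dense-motion warping of source-image features. Assumes
# the motion field is a (B, H, W, 2) sampling grid in grid_sample's [-1, 1]
# convention. Names and shapes are hypothetical, not the paper's API.
import torch
import torch.nn.functional as F


def warp_source_features(src_feats: torch.Tensor,
                         motion_field: torch.Tensor) -> torch.Tensor:
    """Warp (B, C, H, W) source features with a (B, H, W, 2) sampling grid."""
    return F.grid_sample(src_feats, motion_field, mode="bilinear",
                         padding_mode="border", align_corners=False)


# Sanity check: an identity motion field leaves the features unchanged.
feats = torch.randn(1, 64, 128, 128)
identity = F.affine_grid(torch.eye(2, 3).unsqueeze(0),
                         size=(1, 64, 128, 128), align_corners=False)
warped = warp_source_features(feats, identity)
```

In a pipeline of this kind, the generator would then consume the warped features together with the source image; the sketch shows only the warping primitive, with audio-driven attention and the face-segmentation and mesh priors omitted.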

 

The IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) is the premier international computer vision event, comprising the main conference and several co-located workshops and tutorials. With its high quality and low cost, it provides exceptional value for students, academics and industry researchers.

Conference page: https://wacv2023.thecvf.com/home

January 2023

 

 
