February 2022

VISAPP-2022

Faculty and students presented the following papers at the 17th International Conference on Computer Vision Theory and Applications (VISAPP-2022). The conference was held online from 6 to 8 February 2022.

  • Hear Me Out: Fusional Approaches for Audio Augmented Temporal Action Localization – Anurag Bagchi, Jazib Mahmood, Dolton Fernandes and Dr. Ravi Kiran Sarvadevabhatla 

Research work as explained by the authors: 

State-of-the-art architectures for temporal action localization (TAL) in untrimmed videos have considered only the RGB and Flow modalities, leaving the information-rich audio modality entirely unexploited. Audio fusion has been explored for the related but arguably easier problem of trimmed (clip-level) action recognition; TAL, however, poses a unique set of challenges. In this paper, we propose simple but effective fusion-based approaches for TAL. To the best of our knowledge, our work is the first to jointly consider audio and video modalities for supervised TAL. We show experimentally that our schemes consistently improve performance for state-of-the-art video-only TAL approaches. Specifically, they help achieve new state-of-the-art performance on large-scale benchmark datasets – ActivityNet1.3 (54.34 mAP@0.5) and THUMOS14 (57.18 mAP@0.5). Our experiments include ablations involving multiple fusion schemes, modality combinations and TAL architectures. Our code, models and associated data are available at https://github.com/skelemoa/tal-hmo.

Full paper: https://arxiv.org/pdf/2106.14118
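
To make the fusion idea concrete, here is a minimal PyTorch sketch of one simple fusion scheme: per-snippet concatenation of audio and video features followed by a shared classifier. The dimensions, class name and classification head below are illustrative assumptions, not the paper's architecture; the authors' actual fusion schemes and models are in the repository linked above.

import torch
import torch.nn as nn

class ConcatFusionTAL(nn.Module):
    # Hypothetical late-fusion module: concatenates temporally aligned
    # audio and video snippet features and scores each snippet.
    def __init__(self, video_dim=2048, audio_dim=128, hidden_dim=512, num_classes=200):
        super().__init__()
        # Project the concatenated per-snippet features into a shared space.
        self.fuse = nn.Sequential(
            nn.Linear(video_dim + audio_dim, hidden_dim),
            nn.ReLU(),
        )
        # Per-snippet class scores; a full TAL model would also regress
        # action start/end boundaries on top of these.
        self.cls_head = nn.Linear(hidden_dim, num_classes)

    def forward(self, video_feats, audio_feats):
        # video_feats: (batch, T, video_dim) snippet-level RGB/Flow features
        # audio_feats: (batch, T, audio_dim) temporally aligned audio features
        fused = self.fuse(torch.cat([video_feats, audio_feats], dim=-1))
        return self.cls_head(fused)  # (batch, T, num_classes)

# Example: two untrimmed videos, each with 100 feature snippets.
model = ConcatFusionTAL()
scores = model(torch.randn(2, 100, 2048), torch.randn(2, 100, 128))
print(scores.shape)  # torch.Size([2, 100, 200])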

 

  • ETL: Efficient Transfer Learning for Face Tasks – Thrupthi Ann John, Isha Dua, Dr. Vineeth N Balasubramanian (IIT Hyderabad) and Prof. C V Jawahar 

Research work as explained by the authors: 

Transfer learning is a popular method for obtaining deep trained models for data-scarce face tasks such as head pose estimation and emotion recognition. However, current transfer learning methods are inefficient and time-consuming, as they do not fully account for the relationships between related tasks. Moreover, the transferred model is large and computationally expensive. As an alternative, we propose ETL: a technique that efficiently transfers a pre-trained model to a new task by retaining only cross-task aware filters, resulting in a sparse transferred model. We demonstrate the effectiveness of ETL by transferring VGGFace, a popular face recognition model, to four diverse face tasks. Our experiments show that we attain a size reduction of up to 97% and an inference-time reduction of up to 94% while retaining 99.5% of the baseline transfer learning accuracy.

Full paper: https://cdn.iiit.ac.in/cdn/cvit.iiit.ac.in/images/ConferencePapers/2022/ETL_VISSAP.pdf
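
As a rough illustration of filter-level transfer in the spirit of ETL, the PyTorch sketch below scores each convolutional filter by the gradient magnitude of a new-task loss and zeroes out the least relevant filters, yielding a sparser model. The importance measure, the keep threshold and all names here are hypothetical stand-ins; the paper's actual cross-task aware filter criterion is described in the full text.

import torch
import torch.nn as nn

def filter_importance(conv: nn.Conv2d, loss: torch.Tensor) -> torch.Tensor:
    # One importance score per output filter: mean absolute gradient of
    # the new-task loss w.r.t. that filter's weights (an illustrative
    # stand-in for the paper's cross-task awareness criterion).
    grads, = torch.autograd.grad(loss, conv.weight)
    return grads.abs().mean(dim=(1, 2, 3))  # shape: (out_channels,)

# Example: score the filters of one conv layer and sparsify the least
# task-relevant half.
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
x = torch.randn(4, 3, 32, 32)
loss = conv(x).pow(2).mean()          # placeholder for a real new-task loss
scores = filter_importance(conv, loss)
keep = scores >= scores.median()      # keep the top half of filters
with torch.no_grad():
    conv.weight[~keep] = 0            # zero out (prune) discarded filters
    conv.bias[~keep] = 0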

Conference page: https://visapp.scitevents.org/
