[month] [year]

WACV-2022

Faculty and students published the following papers at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2022), held in Waikoloa, Hawaii from 4 – 8 January 2022.

  • Made a poster presentation on their paper, InfographicVQA – Minesh Mathew; Viraj Bagal, IISER Pune; Ruben Tito, Computer Vision Center, UAB, Spain; Dimosthenis Karatzas, Computer Vision Center, UAB, Spain; Ernest Valveny, Computer Vision Center, UAB, Spain; and Prof. C V Jawahar.

Research work as explained by the authors: 

Infographics communicate information using a combination of textual, graphical and visual elements. This work explores the automatic understanding of infographic images through Visual Question Answering. To this end, we present InfographicVQA, a new dataset comprising a diverse collection of infographics with question-answer annotations. The questions require methods that jointly reason over the document layout, textual content, graphical elements and data visualizations. We curate the dataset with an emphasis on questions that require elementary reasoning and basic arithmetic skills. We evaluate two strong Transformer-based baselines on the dataset; both yield unsatisfactory results compared to near-perfect human performance. The results suggest that VQA on infographics, images designed to communicate information quickly and clearly to the human brain, is ideal for benchmarking machine understanding of complex document images. The dataset is available for download at docvqa.org.

Full paper: https://cdn.iiit.ac.in/cdn/cvit.iiit.ac.in/images/ConferencePapers/2021/InfographicVQA.pdf
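
The paper's trained baselines are beyond the scope of a snippet, but the flavour of the task can be sketched as extractive question answering over an infographic's OCR text. The Python sketch below is illustrative only, not the authors' method: the OCR string is a made-up placeholder, and the off-the-shelf SQuAD-tuned checkpoint merely stands in for the paper's baselines.

    # Minimal sketch: answer a question by span extraction over OCR text.
    # Assumptions: Hugging Face transformers is installed; the model name
    # below is a generic public checkpoint, not the paper's baseline.
    from transformers import pipeline

    qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

    ocr_text = "Smartphone users worldwide: 2.5 billion in 2016, 3.6 billion in 2020."  # hypothetical OCR output
    question = "How many smartphone users were there in 2020?"

    result = qa(question=question, context=ocr_text)
    print(result["answer"], result["score"])

A real system would also need the layout and graphical cues the questions demand; that gap between text-only reasoning and human performance is exactly what the dataset is designed to expose.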

 

  • Made a poster presentation on their paper, Visual Understanding of Complex Table Structures from Document Images – Sachin Raja, Ajoy Mondal and Prof. C V Jawahar.

Research work as explained by the authors: 

Table structure recognition is necessary for a comprehensive understanding of documents. Tables in unstructured business documents are tough to parse due to the high diversity of layouts, varying alignments of contents and the presence of empty cells. The problem is particularly difficult because of challenges in identifying individual cells using visual or linguistic contexts, or both. Accurate detection of table cells (including empty cells) simplifies structure extraction and hence becomes the prime focus of our work. We propose a novel object-detection-based deep model that captures the inherent alignments of cells within tables and is fine-tuned for fast optimization. Despite accurate detection of cells, recognizing structures for dense tables may still be challenging because of difficulties in capturing long-range row/column dependencies in the presence of multi-row/column spanning cells. Therefore, we also aim to improve structure recognition by deducing a novel rectilinear graph-based formulation. From a semantics perspective, we highlight the significance of empty cells in a table. To take these cells into account, we suggest an enhancement to a popular evaluation criterion. Finally, we introduce a modestly sized evaluation dataset with an annotation style inspired by human cognition to encourage new approaches to the problem. Our framework improves the previous state-of-the-art performance by a 2.7% average F1-score on benchmark datasets.

Full paper: https://cdn.iiit.ac.in/cdn/cvit.iiit.ac.in/images/ConferencePapers/2021/Table_Reconstruction.pdf
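
As a rough illustration of the rectilinear reasoning involved (a toy sketch, not the paper's graph formulation), one can link detected cell boxes that share a row or a column by checking the overlap of their vertical and horizontal extents; the boxes and threshold below are hypothetical.

    # Toy sketch: build row/column adjacency edges between detected cells.
    # Each box is (x1, y1, x2, y2); empty cells are simply boxes with no text.

    def overlap(a1, a2, b1, b2):
        """Length of the overlap between intervals [a1, a2] and [b1, b2]."""
        return max(0.0, min(a2, b2) - max(a1, b1))

    def rectilinear_edges(boxes, thresh=0.5):
        """Return (i, j, kind) edges for cells aligned in a row or column."""
        edges = []
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                ax1, ay1, ax2, ay2 = boxes[i]
                bx1, by1, bx2, by2 = boxes[j]
                if overlap(ay1, ay2, by1, by2) >= thresh * min(ay2 - ay1, by2 - by1):
                    edges.append((i, j, "row"))  # vertically aligned cells
                if overlap(ax1, ax2, bx1, bx2) >= thresh * min(ax2 - ax1, bx2 - bx1):
                    edges.append((i, j, "col"))  # horizontally aligned cells
        return edges

    cells = [(0, 0, 50, 20), (60, 0, 110, 20), (0, 30, 50, 50)]
    print(rectilinear_edges(cells))  # [(0, 1, 'row'), (0, 2, 'col')]

Long-range dependencies and multi-row/column spanning cells are what break such naive alignment checks, which is why the paper moves to a learned graph-based formulation.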

 

  • Made a poster presentation on their paper, Multi-Domain Incremental Learning for Semantic Segmentation – Prachi Garg; Rohit Saluja; Vineeth N Balasubramanian, IIT Hyderabad; Chetan Arora, IIT Delhi; Anbumani Subramanian; and Prof. C V Jawahar.

Research work as explained by the authors:  

Recent efforts in multi-domain learning for semantic segmentation attempt to learn multiple geographical datasets in a universal, joint model. A simple fine-tuning experiment performed sequentially on three popular road-scene segmentation datasets demonstrates that existing segmentation frameworks fail at incrementally learning a series of visually disparate geographical domains: when learning a new domain, the model catastrophically forgets previously learned knowledge. In this work, we pose the problem of multi-domain incremental learning for semantic segmentation. Given a model trained on a particular geographical domain, the goal is to (i) incrementally learn a new geographical domain, (ii) while retaining performance on the old domain, (iii) given that the previous domain's dataset is not accessible. We propose a dynamic architecture that assigns universally shared, domain-invariant parameters to capture homogeneous semantic features present in all domains, while dedicated domain-specific parameters learn the statistics of each domain. Our novel optimization strategy helps achieve a good balance between retention of old knowledge (stability) and acquisition of new knowledge (plasticity). We demonstrate the effectiveness of our proposed solution in domain-incremental settings pertaining to real-world driving scenes from the roads of Germany (Cityscapes), the United States (BDD100k) and India (IDD).

Full paper: https://cdn.iiit.ac.in/cdn/cvit.iiit.ac.in/images/ConferencePapers/2021/Multi-Domain.pdf
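
The shared/private split described above can be sketched in a few lines of PyTorch, under assumptions of ours rather than the authors' actual architecture: a shared convolutional trunk plays the role of the domain-invariant parameters, per-domain BatchNorm layers and heads hold each domain's statistics, and the incremental step allocates only the new domain's private parameters.

    import torch
    import torch.nn as nn

    class MultiDomainSegNet(nn.Module):
        """Toy segmentation net: shared trunk, domain-specific BN and head."""

        def __init__(self, num_classes, domains=("cityscapes",)):
            super().__init__()
            self.shared = nn.Conv2d(3, 64, 3, padding=1)  # domain-invariant
            self.bn = nn.ModuleDict({d: nn.BatchNorm2d(64) for d in domains})
            self.head = nn.ModuleDict(
                {d: nn.Conv2d(64, num_classes, 1) for d in domains})

        def add_domain(self, name, num_classes):
            # Incremental step: only domain-specific parameters are created;
            # the shared trunk is reused, so old domains stay untouched.
            self.bn[name] = nn.BatchNorm2d(64)
            self.head[name] = nn.Conv2d(64, num_classes, 1)

        def forward(self, x, domain):
            feat = torch.relu(self.bn[domain](self.shared(x)))
            return self.head[domain](feat)

    net = MultiDomainSegNet(num_classes=19)
    net.add_domain("idd", num_classes=19)  # e.g. Indian driving scenes
    out = net(torch.randn(1, 3, 64, 64), domain="idd")
    print(out.shape)  # torch.Size([1, 19, 64, 64])

The paper's optimization strategy for balancing stability against plasticity is the part such a sketch leaves out.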

 

  • Presented their paper, MUGL: Large Scale Multi Person Conditional Action Generation with Locomotion – Debtanu Gupta, Shubh Maheshwari and Dr. Ravi Kiran Sarvadevabhatla.

Research work as explained by the authors: 

We introduce MUGL, a novel deep neural model for large-scale, diverse generation of single- and multi-person pose-based action sequences with locomotion. Our controllable approach enables variable-length generations customizable by action category, across more than 100 categories. To enable intra- and inter-category diversity, we model the latent generative space using a Conditional Gaussian Mixture Variational Autoencoder. To enable realistic generation of actions involving locomotion, we decouple the local pose and global trajectory components of the action sequence. We incorporate duration-aware feature representations to enable variable-length sequence generation. We use a hybrid pose-sequence representation with 3D pose sequences sourced from videos and 3D Kinect-based sequences from NTU-RGBD-120. To enable principled comparison of generation quality, we employ suitably modified strong baselines during evaluation. Although smaller and simpler than the baselines, MUGL provides better-quality generations, paving the way for practical and controllable large-scale human action generation.

Full paper: https://arxiv.org/pdf/2110.11460
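
For intuition on the conditional Gaussian-mixture latent space (an illustrative sketch, not MUGL's implementation): each action category weights K shared Gaussian components, so sampling a latent conditioned on the category yields intra-category diversity while components remain shared across categories. All sizes and parameters below are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    K, latent_dim, num_categories = 8, 32, 120  # hypothetical sizes

    mu = rng.normal(size=(K, latent_dim))                      # component means
    sigma = np.exp(rng.normal(scale=0.1, size=(K, latent_dim)))  # component scales
    weights = rng.dirichlet(np.ones(K), size=num_categories)   # rows sum to 1

    def sample_latent(category):
        """Draw z from the category-conditioned Gaussian mixture."""
        k = rng.choice(K, p=weights[category])  # pick a mixture component
        return mu[k] + sigma[k] * rng.normal(size=latent_dim)

    z = sample_latent(category=42)  # a decoder would map z to a pose sequence
    print(z.shape)  # (32,)

In the trained model these parameters are learned, and the decoder maps z to decoupled local-pose and global-trajectory streams.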

 

WACV is the premier international computer vision event comprising the main conference and several co-located workshops and tutorials. With its high quality and low cost, it provides exceptional value for students, academics and industry researchers.

Conference page: https://wacv2022.thecvf.com/home

 
