ECCV-2022

October 2022

Faculty and students of IIITH presented the following papers at workshops of the European Conference on Computer Vision (ECCV-2022), held in Tel Aviv, Israel, from 23–27 October. The first paper was presented at the Computer Vision for Civil and Infrastructure Engineering Workshop:

  •  UAV-based Visual Remote Sensing for Automated Building Inspection, Kushagra Srivastava, Dhruv Patel, Ravi Kiran Sarvadevabhatla, Pradeep Kumar Ramancharla, Harikumar Kandath, K Madhava Krishna. 

The other authors of this paper are Aditya Kumar Jha and Mohhit Kumar Jha of IIT Kharagpur, and Jaskirat Singh of the University of Petroleum and Energy Studies, Dehradun.

Research work as explained by the authors: Unmanned Aerial Vehicle (UAV)-based remote sensing systems incorporating computer vision have demonstrated potential for assisting building construction and disaster management tasks such as damage assessment during earthquakes. The vulnerability of a building to earthquakes can be assessed through inspections that take into account the expected damage progression of the associated components and each component's contribution to structural system performance. Most of these inspections are done manually, leading to high utilization of manpower, time, and cost. This paper proposes a methodology to automate these inspections through UAV-based image data collection and a software library for post-processing that helps estimate the seismic structural parameters. The key parameters considered here are the distances between adjacent buildings, building plan shape, building plan area, objects on the rooftop, and rooftop layout. The accuracy of the proposed methodology in estimating these parameters is verified against field measurements taken with a distance-measuring sensor and against data obtained from Google Earth. (An illustrative sketch of one such parameter estimate appears after the links below.)

Keywords: Building Inspection, UAV-based Remote Sensing, Segmentation, Image Stitching, 3D Reconstruction.

Link to the PDF of the paper: https://arxiv.org/pdf/2209.13418.pdf

Link to the project page: https://uvrsabi.github.io/
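One of the parameters above, building plan area, shows how such estimates can be automated once a rooftop segmentation mask and the UAV camera's ground sampling distance (GSD) are available. The following is a minimal hypothetical sketch of that final computation, not the authors' released library; the mask shape, function name, and GSD value are illustrative assumptions.

    # Minimal sketch (assumption): plan area from a binary rooftop mask.
    import numpy as np

    def plan_area_from_mask(mask: np.ndarray, gsd_m: float) -> float:
        """Return the building plan area in square metres.

        mask  : (H, W) binary array, 1 where the rooftop is segmented
        gsd_m : ground sampling distance, metres covered by one pixel side
        """
        rooftop_pixels = int(mask.sum())
        return rooftop_pixels * gsd_m ** 2  # each pixel covers gsd_m x gsd_m

    # Toy example: a 200 x 200 px rooftop at 2 cm/pixel -> 16.0 square metres
    mask = np.zeros((400, 500), dtype=np.uint8)
    mask[100:300, 150:350] = 1
    print(plan_area_from_mask(mask, gsd_m=0.02))

In practice the mask would come from the segmentation stage of the pipeline and the GSD from the UAV's altitude and camera intrinsics; the sketch only illustrates the final arithmetic step.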

  • PSUMNet: Unified Modality Part Streams are All You Need for Efficient Pose-based Action Recognition, Neel Trivedi, Ravi Kiran Sarvadevabhatla – presented at the 1st International Workshop and Challenge on People Analysis: From Face, Body and Fashion to 3D Virtual Avatars (WCPA).

Research work as explained by the authors: Pose-based action recognition is predominantly tackled by approaches which treat the input skeleton in a monolithic fashion, i.e. joints in the pose tree are processed as a whole. However, such approaches ignore the fact that action categories are often characterized by localized action dynamics involving only small subsets of joint groups, such as the hands (e.g. ‘Thumbs up’) or legs (e.g. ‘Kicking’). Although part-grouping based approaches exist, each part group is not considered within the global pose frame, causing such methods to fall short. Further, conventional approaches employ independent modality streams (e.g. joint, bone, joint velocity, bone velocity) and train their network multiple times on these streams, which massively increases the number of training parameters. To address these issues, we introduce PSUMNet, a novel approach for scalable and efficient pose-based action recognition. At the representation level, we propose a global frame based part stream approach, as opposed to conventional modality based streams. Within each part stream, the associated data from multiple modalities is unified and consumed by the processing pipeline. Experimentally, PSUMNet achieves state-of-the-art performance on the widely used NTURGB+D 60/120 dataset and the dense joint skeleton dataset NTU 60-X/120-X. PSUMNet is highly efficient and outperforms competing methods which use 100%-400% more parameters. PSUMNet also generalizes to the SHREC hand gesture dataset with competitive performance. Overall, PSUMNet’s scalability, performance and efficiency make it an attractive choice for action recognition and for deployment on compute-restricted embedded and edge devices. Code and pretrained models can be accessed at https://github.com/skelemoa/psumnet. (A sketch of the unified part-stream idea appears after the links below.)

Keywords: human action recognition, skeleton, dataset, human activity recognition, part 

PDF of the paper: https://arxiv.org/pdf/2208.05775.pdf

Project page: https://skeleton.iiit.ac.in/psumnet

Conference page: https://eccv2022.ecva.net/
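To make the unified modality part stream idea above concrete, here is a minimal hypothetical sketch of how the four modalities (joint, bone, joint velocity, bone velocity) can be derived from raw joint coordinates and stacked along the channel axis for a single part group. The skeleton parent table, hand-joint indices, and tensor shapes are illustrative assumptions, not the released PSUMNet code (see the repository linked above).

    # Minimal sketch (assumptions throughout): one unified part-stream input.
    import numpy as np

    T, V, C = 64, 25, 3                  # frames, joints, xyz channels
    joints = np.random.randn(T, V, C).astype(np.float32)
    parents = np.arange(V) - 1           # toy skeleton: joint v hangs off v-1
    parents[0] = 0                       # root is its own parent

    def unified_part_stream(joints, part_idx):
        """Stack joint/bone/velocity modalities for one part group."""
        bones = joints - joints[:, parents, :]            # bone vectors
        joint_vel = np.diff(joints, axis=0, prepend=joints[:1])
        bone_vel = np.diff(bones, axis=0, prepend=bones[:1])
        stacked = np.concatenate([joints, bones, joint_vel, bone_vel], axis=-1)
        return stacked[:, part_idx, :]   # keep only this part's joints

    hands = [7, 8, 11, 12, 21, 22, 23, 24]   # assumed hand-joint indices
    stream = unified_part_stream(joints, hands)
    print(stream.shape)                  # (64, 8, 12): 4 modalities x 3 axes

Because all four modalities enter a single stream, the network is trained once per part group rather than once per modality, which is the parameter saving the abstract describes.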
