IIITH’s Lab Captures It Right with 3D and 4D Digitization of Humans

With 3D digitization of humans, the Human Motion Capture lab at IIITH is not just facilitating the future of fashion but also actively contributing to the preservation of dance heritage. Here’s how.  

Bharatnatyam dancers in full traditional gear may seem a little out of place in a technology lab. They’re here at the Centre for Visual Information Technology (CVIT)’s Motion Capture Lab not to enthral an audience but to have their performances digitized in 4D, that is, as 3-dimensional shapes evolving over time. “It is part of the Government of India’s Cultural Heritage conservation project,” says Prof. Avinash Sharma, who is in charge of this effort. The idea is to preserve popular Indian classical dance forms such as Bharatnatyam, Mohiniattam and Kathak in a 4D digitized format. Unlike a stage performance, the dance movements here are recorded in a lab setup and can then be viewed in 4D on a virtual stage. Why 4D? “A conventional 2D recording via photos and videos is unable to focus on the intricate nuances of the mudras and facial expressions as it is constrained by the fixed positioning of the camera,” explains Prof. Sharma. According to him, this nuanced data is a prerequisite for any scientific analysis of and research into the dance forms, whether full-body posture analysis or an analysis of the dancers’ action sequences, both invaluable from the point of view of heritage preservation. Such footage can also provide an immersive experience to remote audiences via an online medium.
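
For the technically curious, here is a minimal sketch of what “4D” data means: a capture can be represented as a time-indexed sequence of 3D meshes, one per frame. The class and field names below are illustrative assumptions, not the lab’s actual data structures.

```python
# A minimal sketch (illustrative, not the lab's pipeline): "4D" data is a
# time-indexed sequence of 3D meshes, i.e. 3D shape evolving over time.
from dataclasses import dataclass
import numpy as np

@dataclass
class MeshFrame:
    timestamp: float      # seconds from the start of the performance
    vertices: np.ndarray  # (V, 3) array of 3D vertex positions
    faces: np.ndarray     # (F, 3) array of triangle vertex indices

@dataclass
class Capture4D:
    frames: list          # list of MeshFrame; the 4th dimension is time

    def at(self, t: float) -> MeshFrame:
        """Return the captured frame closest to time t, so a virtual stage
        can be 'scrubbed' to any moment and viewed from any angle."""
        return min(self.frames, key=lambda f: abs(f.timestamp - t))
```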

The Future in the Metaverse

Motion capture technology is commonplace in gaming, VR avatars, the animation industry and other entertainment applications, and is now gaining ground in the creation of digital avatars in the Metaverse. “We see it in movies already, where facial gestures are captured with markers on the face and then mapped to 3D models of real people, so that the animated version talks and gesticulates like the said people,” says Prof. Sharma. However, such marker-based systems are very restrictive due to their dependence on constrained sensor placement, which makes markerless solutions the need of the hour. “Realism can be obtained only if we can capture the nitty-gritty of bodily appearance, including the surface details of skin and garments, the hair, and so on. And the more realistic the avatar, the more immersive your experience in the Metaverse can be,” observes Prof. Sharma.
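
To make the marker-based idea concrete, here is a toy sketch, with all marker names, rest positions and scaling invented purely for illustration: tracked markers drive a 3D face model by mapping how far each marker has moved from its neutral position onto blendshape activation weights.

```python
# Toy sketch of marker-based facial capture (hypothetical values throughout;
# not taken from any production system): marker displacements from a neutral
# pose are converted into blendshape activation weights for a 3D face model.
import numpy as np

# Rest (neutral-face) positions of two hypothetical markers, in metres.
REST = {"jaw": np.array([0.0, -0.05, 0.10]),
        "brow_l": np.array([-0.03, 0.06, 0.09])}

def blendshape_weights(tracked):
    """The further a marker has moved from its rest position, the more
    strongly the corresponding expression blendshape is activated.
    (The 0.02 m normalisation is an arbitrary illustrative choice.)"""
    return {name: float(np.linalg.norm(tracked[name] - REST[name]) / 0.02)
            for name in REST}

# Example frame: the jaw marker has dropped 1 cm, so 'jaw' activates at 0.5.
frame = {"jaw": REST["jaw"] + np.array([0.0, -0.01, 0.0]),
         "brow_l": REST["brow_l"]}
print(blendshape_weights(frame))   # {'jaw': 0.5, 'brow_l': 0.0}
```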

Virtual Try-ons
While capture and analysis form one area of Prof. Sharma’s group’s work, the other is an exploration of virtual try-on technology: digitally trying on clothes or accessories in a virtual 3D environment. The tech and its applications for tight-fitting clothing such as jeans and t-shirts may not be so novel, but researchers in his lab are engaged in the harder pursuit of digitizing complex Indian ethnic wear. Think flowy garments that need to be draped, like the lehnga, lungi and the long kurta, among many others. In a virtual try-on, an online shopper can select a garment that has already been 3D digitized and opt to drape it on a synthetic model or, subject to privacy considerations, on their own 3D avatar. “We intend to pursue this for a commercial use case in the near future,” reports the professor, adding that the group is also looking at turning a synthetic avatar into a lifelike one by personalizing it to the specific appearance of an individual.

The Edge Explained
Most try-ons on e-commerce sites work on either 2D draping or fixed 3D clothing templates. If you would like to try on a t-shirt that a model is wearing online, an image-to-image translation takes place in a 2D setting. Reliance on fixed 3D garment templates, on the other hand, restricts the scalability of the solution in the face of fast fashion: with trends changing every so often, vendors find it difficult to digitize every kind of clothing that could possibly exist and hence fall back on fixed templates for t-shirts, skirts, pants and the like. “As part of our plan to commercialise this technology, we are aiming to offer a solution that is all at once less time consuming, cost-effective and most importantly scalable,” states Prof. Sharma. It also helps that with no templates to adhere to, fashion designers can give free rein to their creativity in designing various kinds of clothing.
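
A toy sketch of the scalability argument follows; the template set and function are hypothetical, purely to illustrate the bottleneck, not the lab’s method. With a fixed template library, any garment outside the library simply cannot be tried on until a vendor hand-authors a new template, whereas a template-free pipeline would reconstruct garment geometry directly from captures.

```python
# Toy illustration of the template bottleneck (hypothetical, not the lab's code).
TEMPLATES = {"t-shirt", "skirt", "pants"}   # a vendor's fixed 3D template library

def template_based_tryon(garment_type: str) -> str:
    """Draping works only for garments the vendor has already templated."""
    if garment_type not in TEMPLATES:
        raise ValueError(f"no 3D template for {garment_type!r}; one must be "
                         "hand-authored before this garment can be tried on")
    return f"draping fixed {garment_type!r} template on the avatar"

print(template_based_tryon("t-shirt"))      # works: template exists
try:
    print(template_based_tryon("lehnga"))   # fails: outside the template set
except ValueError as err:
    print("scalability problem:", err)
```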

How They Did It
With the help of a commercial 3D scanner, the research group took high-quality captures of about 250 individuals in a variety of clothing, ranging from tight-fitting trousers and t-shirts to the more flowy South Asian ethnic attire, and created a dataset named 3DHumans. “These are very high quality scans with an accuracy of up to 0.2 mm. The idea is to make the dataset freely available to the academic community, with appropriate licensing, in order to democratise research in this domain,” remarks Prof. Sharma. Eventually the group proposes to generate a 3D avatar of a person from a single monocular image and, similarly, to create an avatar in motion from a video recording.
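
As an example of how such a dataset might be used once released, here is a minimal sketch of inspecting a single scan with the open-source trimesh library. The file path and OBJ format are assumptions for illustration; the dataset’s own documentation would specify the actual layout.

```python
# Minimal sketch: inspect one 3D human scan with the trimesh library.
# The path and file format below are hypothetical, not the dataset's layout.
import trimesh

mesh = trimesh.load("3DHumans/subject_001/scan.obj")   # hypothetical path
print(f"vertices: {len(mesh.vertices)}, faces: {len(mesh.faces)}")
print(f"bounding-box extents: {mesh.bounding_box.extents}")  # rough body size
```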

Beyond Academia
The task of 3D digitization of human motion is highly challenging and requires a sophisticated setup of high-end capture devices. The challenges include capturing high-frequency details present in the garments, such as complex folds, gathers and crinkles, as well as the deformation of the garment itself caused by complex body poses. The goal of the Human Motion Capture lab at CVIT is to address these challenges, in addition to capturing classical Indian dance performances in 3D in a cost-effective manner. “Plans are underway to extend the dataset of 250 scans and take it to the next level to include accessories as well. Similarly, a dynamic dance dataset is already in the pipeline and will be released soon. Alongside the potential to accelerate academic research, the group’s larger vision is also to make the methods industry-standard, enabling large-scale commercial usage. Prof. PJ Narayanan (senior computer vision researcher and director of IIIT Hyderabad) is consistently mentoring our research group to attain these goals,” shares Prof. Sharma.

Sarita Chebbi is a compulsive early riser. Devourer of all news. Kettlebell enthusiast. Nit-picker of the written word, especially when it’s not her own.
