[month] [year]

Digitizing Human Physical Interactions and Skills from Visual Data by Dr. Srinath Sridhar, Stanford University

Dr. Srinath Sridhar of Stanford University gave a talk on Digitizing Human Physical Interactions and Skills from Visual Data on 2 August.

Humans exhibit a remarkable ability to skillfully interact with and manipulate their environment. Capturing and analyzing these interactions will allow us to build richer immersive computing systems (VR/AR/XR), more intelligent assistive robots, and better prosthetics. Digitizing human physical skills from video collections requires solving computer vision problems in human-centric 3D understanding, 3D scene understanding, and physical interaction understanding. In his talk, Dr. Sridhar showed how he and his collaborators use deep learning to estimate body and hand pose, estimate the 6D pose of novel object instances, learn intuitive dynamics of objects, and build generative models of human and object motions. He concluded his talk with an outlook on exciting future research directions.

Srinath Sridhar is a postdoctoral researcher at Stanford University working with Prof. Leonidas Guibas. He obtained his Ph.D. in Computer Science from the Max Planck Institute for Informatics in Germany under the supervision of Prof. Christian Theobalt and Prof. Antti Oulasvirta. His research focuses on digitizing human physical skills from videos, a topic involving fundamental computer vision problems such as 3D body pose and shape estimation, 3D scene understanding, and physical interaction understanding. His research has broad applications in novel input methods for emerging virtual/augmented reality devices, robotics, and action recognition. Dr. Sridhar has received numerous fellowships, and his work has been published at top-tier conferences such as CVPR, ICCV (Best Poster Award, 2017), ECCV, CHI, SIGGRAPH, and Eurographics (Best Paper Honorable Mention, 2019). He previously spent time at Microsoft Research Redmond and the Honda Research Institute.