
Gunjan Gupta

Gunjan Gupta, supervised by Prof. K Madhava Krishna, received his B.Tech – Dual Degree in Computer Science and Engineering (CSE). Here’s a summary of his research work on Advancing Visual Servoing Controller for Robotic Manipulation: Dynamic Object Grasping and Real-World Implementation:

Visual servoing has been gaining popularity in real-world, vision-centric robotic applications, offering enhanced end-effector control through visual feedback. In autonomous robotic grasping, where environments are often unseen and unstructured, visual servoing has proven to be a valuable source of guidance. However, traditional servoing-aided grasping methods encounter difficulties in dynamic environments, particularly those involving moving objects.

Motivation: When grasping moving objects, the dynamic and unpredictable nature of the environment poses significant challenges for traditional robotic manipulation techniques. Conventional methods based on predefined models struggle to adapt to the variability and uncertainty inherent in dynamic scenarios, often resulting in suboptimal performance or outright failure to grasp moving objects reliably. In contrast, Image-Based Visual Servoing (IBVS) offers a promising alternative that directly uses visual information to guide grasping actions. By adjusting robot motions in response to real-time visual feedback, IBVS enables robots to react promptly and accurately to changes in an object’s position and orientation, improving the chances of a successful grasp in dynamic environments.

In the first part of the thesis, we introduce DynGraspVS, a novel visual servoing-aided grasping approach that models the motion of moving objects in its interaction matrix. Leveraging a single-step rollout strategy, the approach achieves a remarkable increase in success rate while converging faster, producing smoother trajectories, and maintaining precise alignment in six degrees of freedom (6 DoF). By integrating velocity information into the interaction matrix, the method successfully completes the challenging task of grasping dynamic objects, outperforming existing deep Model Predictive Control (MPC) based methods in the PyBullet simulation environment. We evaluate it on objects from the YCB dataset spanning a range of shapes, sizes, and material properties, and report results against evaluation metrics such as photometric error, success rate, time taken, and trajectory length.

In its second half, the thesis explores the integration and implementation of IBVS mechanisms on the XARM7 robotic platform. Through this integration, our work demonstrates the feasibility and practical applicability of IBVS in real-world robotic systems. A comprehensive analysis of the Recurrent Task-Visual Servoing (RTVS) framework’s performance in diverse real-world scenarios sheds light on its robustness and versatility. Additionally, the introduction of Imagine2Servo, a conditional diffusion model for generating target images, extends the capabilities of IBVS to more complex tasks. Through a combination of experimental validation and rigorous testing, the thesis provides valuable insights into the effectiveness and potential applications of IBVS in real-world robotic systems, setting the stage for future advances in visual servoing technology.
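To make the underlying mechanism concrete, the classical IBVS control law with a feedforward term for target motion is sketched below. This is the standard textbook formulation rather than the exact DynGraspVS formulation, whose details are given in the thesis:

\[
e(t) = s(t) - s^{*}, \qquad
\dot{e} = L_e\, v_c + \frac{\partial e}{\partial t}
\]

Imposing an exponential decrease of the error, \dot{e} = -\lambda e, yields the camera velocity command

\[
v_c = -\lambda\, \hat{L}_e^{+}\, e \;-\; \hat{L}_e^{+}\, \widehat{\frac{\partial e}{\partial t}},
\]

where s(t) are the current image features, s^{*} the desired features, \hat{L}_e^{+} the pseudo-inverse of the estimated interaction matrix, and the second (feedforward) term compensates for the error drift caused by the target’s own motion.

On the implementation side, each control cycle of such a real-world IBVS loop reduces to a small per-frame computation. The following is a minimal Python sketch of that computation using NumPy; the feature vectors, interaction-matrix estimate, and gain are illustrative placeholders, not the thesis’s actual code or the xArm SDK interface.

import numpy as np

LAMBDA = 0.5  # proportional gain on the feature error (illustrative value)

def ibvs_velocity(features, target_features, L_hat, error_rate=None):
    """One IBVS iteration: map image-feature error to a 6-DoF velocity command.

    features, target_features : (N,) stacked image-feature coordinates
    L_hat                     : (N, 6) estimated interaction (image Jacobian) matrix
    error_rate                : optional (N,) estimate of the error drift caused by
                                target motion (feedforward compensation)
    """
    error = features - target_features
    L_pinv = np.linalg.pinv(L_hat)           # Moore-Penrose pseudo-inverse
    v_c = -LAMBDA * (L_pinv @ error)         # classical IBVS term
    if error_rate is not None:
        v_c = v_c - L_pinv @ error_rate      # compensate the moving target
    return v_c                               # [vx, vy, vz, wx, wy, wz] in the camera frame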
In conclusion, this thesis has made significant strides in advancing visual servoing technology, particularly in the context of real-world robotic applications. Our exploration of IBVS mechanisms on the XARM7 robotic platform has demonstrated the practical applicability and feasibility of IBVS in real-world scenarios. Through successful integration and implementation, we have shown how IBVS can enhance the capabilities of robotic systems, enabling them to perform tasks with greater flexibility and adaptability.

 

 June 2024