
Mohd Omama – Perception and Navigation

Mohd Omama received his Master of Science degree in Computer Science and Engineering (CSE). His research was supervised by Prof. Madhava Krishna. Here is a summary of his thesis, Bridging the Gap Between Perception and Navigation – A Self-Driving and Lidar Perspective.

We address the hitherto unreported problem of an autonomous robot (a self-driving car) navigating dynamic scenes in a manner that reduces its localization error and, consequently, its cumulative drift or Absolute Trajectory Error. Modern autonomous vehicles (AVs) often rely on vision-, LIDAR-, and even radar-based simultaneous localization and mapping (SLAM) frameworks for precise localization and navigation. However, these frameworks often incur unacceptably high drift (i.e., localization error) when AVs observe few visually distinct features or encounter occlusions caused by dynamic obstacles. This work argues that minimizing drift must be a key desideratum in AV motion planning, which requires an AV to make active control decisions to move toward feature-rich regions while also minimizing conventional control cost. We pursue two distinct formulations of this problem.

In the first approach, we learn actions that lead to drift-minimized navigation through a suitable set of reward and penalty functions. We use Proximal Policy Optimization (PPO), a deep reinforcement learning method, to learn the actions that result in drift-minimized trajectories. Extensive comparisons on a variety of synthetic yet photo-realistic scenes, made available through the CARLA simulator, show the superior performance of the proposed framework vis-à-vis methods that do not adopt such policies.
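To make the reward-shaping idea concrete, here is a minimal sketch of how a drift-aware RL reward might combine the terms described above. The specific terms, weights, and function name are illustrative assumptions, not the thesis's actual reward design:

```python
def drift_aware_reward(drift_error, control_effort, num_features,
                       w_drift=1.0, w_control=0.1, w_feat=0.05):
    """Hypothetical scalar reward for drift-minimized navigation.

    Penalizes localization drift and control effort, and rewards
    moving toward feature-rich regions. All weights are illustrative
    assumptions, not values from the thesis.
    """
    return (-w_drift * drift_error       # penalty: accumulated drift
            - w_control * control_effort  # penalty: conventional control cost
            + w_feat * num_features)      # bonus: visible distinct features


# Example: moderate drift and control cost, 40 visible features.
r = drift_aware_reward(drift_error=2.0, control_effort=1.0, num_features=40)
```

A policy trained with PPO against such a reward is pushed toward trajectories that keep localization features in view while avoiding unnecessarily aggressive control.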

In the second approach, we first introduce a novel data-driven perception module that observes LIDAR point clouds and estimates which features/regions an AV must navigate towards for drift minimization.

Then, we introduce an interpretable model predictive controller (MPC) that moves an AV toward such feature-rich regions while avoiding visual occlusions and gracefully trading off drift and control cost.
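The trade-off the MPC makes can be sketched as a per-horizon cost that sums quadratic control effort and a weighted drift term, with the controller picking the candidate trajectory of minimal cost. The cost structure, the drift model, and the weight below are assumptions for illustration, not the thesis's actual formulation:

```python
def mpc_cost(controls, expected_drift, lam=0.5):
    """Illustrative MPC-style cost over a planning horizon.

    controls: list of (linear, angular) control inputs per step.
    expected_drift: per-step drift predicted by the perception module
    (hypothetical interface). lam weights drift against control cost.
    """
    control_cost = sum(v**2 + w**2 for v, w in controls)  # quadratic effort
    drift_cost = sum(expected_drift)                       # predicted drift
    return control_cost + lam * drift_cost


def pick_best(candidates):
    """Select the (controls, expected_drift) candidate of minimal cost,
    trading off drift against control effort."""
    return min(candidates, key=lambda c: mpc_cost(*c))
```

In this sketch, a gentler trajectory through a feature-poor (high-drift) region can lose to a slightly costlier trajectory that keeps feature-rich regions in view, which is the trade-off the interpretable MPC is designed to make gracefully.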

Our experiments on challenging, dynamic scenarios in the state-of-the-art CARLA simulator show that our method reduces drift by up to 76.76% compared to benchmark approaches.

May 2023