Learning Efficient Policies for Vision-based Navigation  

Speaker: Maren Bennewitz

Time: 13:55-14:20

In this talk, I will present a new approach to learning efficient navigation policies that use visual features for localization. Fast movements of a mobile robot typically degrade the performance of vision-based localization systems due to motion blur. In the presented approach, the robot learns a policy for reaching its destination reliably and, at the same time, as fast as possible. In this way, the impact of motion blur on the observations is implicitly taken into account, and delays caused by localization errors are avoided. To reduce the size of the resulting policy, which is desirable on memory-constrained systems, the learned policy is compressed via a clustering approach. I will present experiments demonstrating that the learned policy significantly outperforms any policy that uses a constant velocity. Additional experiments show that the compressed policy incurs no loss of performance compared to the originally learned policy.
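The abstract does not give implementation details, but one way to picture the idea is a small tabular reinforcement-learning sketch: the action is the travel velocity, a toy environment makes motion blur (and hence localization delays) more likely at higher speeds, and the learned Q-table is afterwards compressed by clustering similar states. The state discretization, the blur model, the reward terms, and the cluster count below are all illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed discretization: distance-to-goal bins x localization-uncertainty bins.
N_DIST, N_UNC = 20, 5
VELOCITIES = np.array([0.4, 0.8, 1.2])      # candidate speeds in m/s (illustrative)
Q = np.zeros((N_DIST, N_UNC, len(VELOCITIES)))

def simulate_step(d, u, a):
    """Toy dynamics: faster motion covers more bins per step, but raises the
    chance of motion blur, which degrades localization and stalls progress."""
    v = VELOCITIES[a]
    if rng.random() < 0.1 + 0.3 * v:        # assumed blur probability model
        u = min(u + 1, N_UNC - 1)           # localization uncertainty grows
    else:
        u = max(u - 1, 0)
    progress = int(v / 0.4) if u < N_UNC - 1 else 0   # a poorly localized robot stalls
    d = max(d - progress, 0)
    reward = -1.0 - 2.0 * (u == N_UNC - 1)  # time cost plus delay penalty
    return d, u, reward, d == 0

# Tabular Q-learning over velocity choices.
alpha, gamma, eps = 0.2, 0.95, 0.1
for _ in range(3000):
    d, u = N_DIST - 1, 0
    for _ in range(200):
        a = rng.integers(len(VELOCITIES)) if rng.random() < eps else int(np.argmax(Q[d, u]))
        d2, u2, r, done = simulate_step(d, u, a)
        Q[d, u, a] += alpha * (r + (0 if done else gamma * Q[d2, u2].max()) - Q[d, u, a])
        d, u = d2, u2
        if done:
            break

# Compress the policy: cluster the per-state action-value rows (plain k-means)
# and keep only k prototype rows plus a state-to-cluster lookup table.
flat, k = Q.reshape(-1, len(VELOCITIES)), 8
centers = flat[rng.choice(len(flat), k, replace=False)]
for _ in range(50):
    labels = ((flat[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
    centers = np.array([flat[labels == c].mean(0) if np.any(labels == c) else centers[c]
                        for c in range(k)])

compressed_velocity = VELOCITIES[centers[labels].argmax(1)].reshape(N_DIST, N_UNC)
print(compressed_velocity)   # chosen speed per (distance, uncertainty) state
```

Storing only the k prototype rows plus a small state-to-cluster lookup table is what makes such a compressed policy attractive on memory-constrained platforms, which mirrors the motivation stated in the abstract.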

Furthermore, I will briefly introduce a novel approach for learning a landmark selection policy for navigation in unknown environments, which allows a robot to discard landmarks that are not valuable for its current navigation task. By maintaining only the important landmarks, the robot reduces its computational burden and carries out its task more efficiently.
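Again purely illustrative: the talk describes a learned selection policy, whereas the sketch below stands in for it with a hand-coded scoring rule that keeps only landmarks likely to be useful along the current path. The function name, the (position, covariance) landmark representation, and the scoring terms are assumptions.

```python
import numpy as np

def select_landmarks(landmarks, robot_pose, goal, budget=10):
    """Keep only landmarks that are likely to be observed along the direct
    path to the goal and are already well localized. The scoring rule is a
    hand-coded stand-in for the learned selection policy from the talk."""
    path_dir = (goal - robot_pose) / (np.linalg.norm(goal - robot_pose) + 1e-9)
    scores = []
    for pos, cov in landmarks:                    # (position, covariance) pairs
        offset = pos - robot_pose
        along = offset @ path_dir                 # progress along the path direction
        lateral = np.linalg.norm(offset - along * path_dir)
        visibility = np.exp(-lateral / 2.0) * (along > 0)   # near the path, ahead of the robot
        certainty = 1.0 / (1.0 + np.trace(cov))              # prefer well-localized landmarks
        scores.append(visibility * certainty)
    keep = np.argsort(scores)[::-1][:budget]      # retain only the top-scoring landmarks
    return [landmarks[i] for i in keep]

# Example: ten random landmarks, keep the three most useful ones.
rng = np.random.default_rng(1)
landmarks = [(rng.uniform(0, 10, 2), np.eye(2) * rng.uniform(0.1, 2.0)) for _ in range(10)]
print(len(select_landmarks(landmarks, np.zeros(2), np.array([10.0, 0.0]), budget=3)))
```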
