Monocular and Stereo SLAM in Large Environments

Author: Juan D. Tardós.

Time: 10:45-11:10

In this talk we describe a system that can carry out simultaneous localization and mapping (SLAM) in large indoor and outdoor environments using a monocular or a stereo camera as the only sensor. Textured point features are extracted from the images and stored as 3-D points if seen in the stereo image with sufficient disparity, or stored as inverse depth points otherwise. This allows the system to map both near and far features: the former provide both distance and orientation information, while the latter provide orientation only. To map large environments, we will present CI-Graph, a submapping method for SLAM that uses a graph structure to efficiently handle complex trajectories, reducing the computational cost. Unlike other submapping SLAM approaches, we are able to transmit and share information among maps in the graph in a consistent manner by using conditionally independent submaps. Our method operates most of the time in local maps with constant-time updates, and is able to compute the full map by propagating information along a spanning tree of the graph in close to linear time. To demonstrate the robustness and scalability of our system, we show experimental results in indoor and outdoor urban environments over trajectories of a few hundred meters, with a monocular or stereo camera carried in hand by a person walking at a normal speed of 4–5 km/h.
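As a minimal sketch of the inverse-depth idea mentioned above: a feature without reliable disparity can be anchored at the camera position where it was first seen and parametrized by a viewing direction plus an inverse depth. The function below follows the common inverse-depth parametrization (Civera, Davison, Montiel); the axis convention and the parameter names are illustrative assumptions, not the talk's exact formulation.

```python
import numpy as np

def inverse_depth_to_point(x0, theta, phi, rho):
    """Convert an inverse-depth feature to a Euclidean 3-D point.

    x0:    3-D position of the camera when the feature was first observed
    theta: azimuth of the viewing ray
    phi:   elevation of the viewing ray
    rho:   inverse depth (1 / distance along the ray)

    Axis convention is illustrative; implementations vary.
    """
    # Unit direction of the viewing ray from azimuth/elevation.
    m = np.array([np.cos(phi) * np.sin(theta),
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    # The 3-D point lies at distance 1/rho along the ray from the anchor.
    return np.asarray(x0, dtype=float) + m / rho

# A distant feature (small rho) lands far along the ray; a near one close by.
p_far = inverse_depth_to_point([0.0, 0.0, 0.0], 0.0, 0.0, 0.01)
p_near = inverse_depth_to_point([0.0, 0.0, 0.0], 0.0, 0.0, 1.0)
```

Note that as rho approaches zero the point recedes to infinity while remaining well-behaved numerically, which is why this representation handles features at very large (even effectively infinite) depth, whereas a direct (x, y, z) parametrization does not.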
