Simultaneous Localization And Mapping (SLAM) by an autonomous mobile robot is a computationally demanding process for medium and large-scale scenarios, despite progress on both the algorithmic and hardware sides. As a consequence, a robot with SLAM capabilities has to be equipped with powerful onboard computers, whose weight and power consumption may limit its autonomy. This paper describes a visual SLAM system based on a distributed framework where the expensive map optimization and storage are allocated as a service in the Cloud, while a light camera tracking client runs on a local computer. The robot's onboard computers are freed from most of the computation, the only extra requirement being an internet connection. The data flow to and from the Cloud is low enough to be supported by a standard wireless connection. The experimental section focuses on showing real-time performance for single-robot and cooperative SLAM using an RGBD camera. The system provides the interface to a map database where (1) a map can be built and stored, (2) stored maps can be reused by other robots, (3) a robot can fuse its map online with a map already in the database, and (4) several robots can estimate individual maps and fuse them together if an overlap is detected.
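The client/service split described above can be illustrated with a minimal sketch. The following Python code is an assumption-laden illustration, not the paper's actual implementation: the names `CloudMapService`, `TrackingClient`, and `upload_keyframe` are hypothetical, and the optimization and fusion methods are stubs standing in for the bundle adjustment and overlap detection that the paper offloads to the Cloud.

```python
# Hypothetical sketch of the distributed SLAM architecture: a light
# tracking client on the robot ships keyframes to a Cloud service that
# owns map storage, optimization, and fusion. All names are illustrative.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Keyframe:
    frame_id: int
    pose: List[float]            # e.g. a 6-DoF pose [x, y, z, roll, pitch, yaw]
    features: List[List[float]]  # sparse feature observations


class CloudMapService:
    """Stands in for the Cloud side: map database and optimization."""

    def __init__(self) -> None:
        self.maps: Dict[str, List[Keyframe]] = {}

    def upload_keyframe(self, map_id: str, kf: Keyframe) -> List[Keyframe]:
        # Store the keyframe, re-optimize, and return the refined map.
        self.maps.setdefault(map_id, []).append(kf)
        return self._optimize(map_id)

    def _optimize(self, map_id: str) -> List[Keyframe]:
        # Stub for the expensive map optimization run as a Cloud service.
        return self.maps[map_id]

    def fuse(self, map_a: str, map_b: str) -> str:
        # Stub for overlap detection and online fusion of two stored maps.
        fused_id = f"{map_a}+{map_b}"
        self.maps[fused_id] = self.maps.get(map_a, []) + self.maps.get(map_b, [])
        return fused_id


class TrackingClient:
    """Light client on the robot: tracks the camera, ships keyframes."""

    def __init__(self, service: CloudMapService, map_id: str) -> None:
        self.service = service
        self.map_id = map_id
        self.local_map: List[Keyframe] = []
        self.next_id = 0

    def process_frame(self, pose: List[float], features: List[List[float]]) -> None:
        kf = Keyframe(self.next_id, pose, features)
        self.next_id += 1
        # Only keyframes cross the network, which is what keeps the data
        # flow low enough for a standard wireless connection.
        self.local_map = self.service.upload_keyframe(self.map_id, kf)


if __name__ == "__main__":
    cloud = CloudMapService()
    robot_a = TrackingClient(cloud, "map_a")
    robot_b = TrackingClient(cloud, "map_b")
    robot_a.process_frame([0, 0, 0, 0, 0, 0], [[1.0, 2.0]])
    robot_b.process_frame([5, 0, 0, 0, 0, 0], [[1.1, 2.1]])
    print("fused map id:", cloud.fuse("map_a", "map_b"))
```

The design point the sketch captures is the division of labor: per-frame pose tracking stays on the robot, while anything that grows with map size (storage, optimization, multi-robot fusion) lives behind the Cloud interface.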