Robotics and Autonomous Systems (Elsevier)
Special Issue on Visual Control of Mobile Robots
Aims and scope
Visual control refers to the capability of a robot to visually perceive the environment and use this information for autonomous navigation. This task involves solving multidisciplinary problems related to vision and robotics, for example: motion constraints, vision systems, visual perception, safety, real-time constraints, robustness, stability issues, and obstacle avoidance. The problem of vision-based autonomous navigation is further compounded by the different constraints imposed by the particular features of the platform involved (ground platforms, aerial vehicles, underwater robots, humanoids, etc.).
Over recent years, increasing efforts have been made to integrate robotic control and vision. Although there is a substantial body of work in the area of visual control for manipulation, which is a mature field of research, the use of mobile robots adds new challenges in a still open research area. The interest in this subject lies in the many potential robotic applications, in industrial as well as domestic settings, that involve visual control of mobile robots (industrial automation, material transportation, assistance to disabled people, surveillance, rescue, etc.).
Special Issue dates:
Deadline for paper submission: March 31, 2013.
After the review process, the accepted papers were published online in April 2014.
Guest Editors:
Youcef Mezouar (Institut Pascal - IFMA, France)
Gonzalo Lopez-Nicolas (I3A - Universidad de Zaragoza, Spain)
Special Issue Contents:
[1] Gonzalo Lopez-Nicolas, Youcef Mezouar, Visual control of mobile robots, Robotics and Autonomous Systems, Volume 62, Issue 11, November 2014, Pages 1611-1612, doi
[2] Hadi Aliakbarpour, Omar Tahri, Helder Araujo, Visual servoing of mobile robots using non-central catadioptric cameras, Robotics and Autonomous Systems, Volume 62, Issue 11, November 2014, Pages 1613-1622, doi
Abstract: This paper presents novel contributions on image-based control
of a mobile robot using a general catadioptric camera model. A catadioptric
camera is usually made up of a combination of a conventional camera and a
curved mirror resulting in an omnidirectional sensor capable of providing 360°
panoramic views of a scene. Modeling such cameras has been the subject of
significant research interest in the computer vision community leading to a
deeper understanding of the image properties and also to different models for
different types of configurations. Visual servoing applications using catadioptric
cameras have essentially been using central cameras and the corresponding
unified projection model. So far, more general models have been used in only
a few cases. In this paper, we address the problem of visual servoing using the
so-called radial model. The radial model can be applied to many camera
configurations and in particular to non-central catadioptric systems with
mirrors that are symmetric around an axis coinciding with the optical axis. In
this case, we show that the radial model can be used with a non-central
catadioptric camera to allow effective image-based visual servoing (IBVS) of a
mobile robot. Using this model, which is valid for a large set of catadioptric
cameras (central or non-central), new visual features are proposed to control
the degrees of freedom of a mobile robot moving on a plane. In addition to
several simulation results, a set of experiments was carried out on a Robot
Operating System (ROS)-based platform, which validates the applicability,
effectiveness and robustness of the proposed method for image-based control of
a non-holonomic robot.
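For readers less familiar with image-based visual servoing (IBVS), the sketch below (Python with NumPy) shows the classical control law v = -k * L^+ (s - s*) on which schemes of this kind are built. The feature vector, interaction matrix, and gain are generic placeholders, not the radial-model features proposed in the paper.

import numpy as np

def ibvs_velocity(s, s_star, L, gain=0.5):
    # Classical IBVS law: v = -gain * pinv(L) @ (s - s_star), where L is the
    # interaction (image Jacobian) matrix relating feature velocities to the
    # robot velocities. Generic placeholder, not the paper's formulation.
    error = s - s_star
    return -gain * np.linalg.pinv(L) @ error

# Illustrative call: four feature coordinates controlling the two degrees of
# freedom (v, omega) of a planar mobile robot; all values are made up.
s = np.array([0.12, -0.03, 0.40, 0.05])
s_star = np.zeros(4)
L = np.array([[1.0, 0.2],
              [0.0, 1.1],
              [0.9, -0.3],
              [0.1, 0.8]])
print(ibvs_velocity(s, s_star, L))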
[3] Hector M. Becerra, Jean-Bernard Hayet, Carlos Sagues, A single visual-servo controller of mobile robots with super-twisting control, Robotics and Autonomous Systems, Volume 62, Issue 11, November 2014, Pages 1623-1635, doi
Abstract: This paper presents a novel approach for image-based visual
servoing, extending existing works that use the trifocal tensor (TT) as the
source of image measurements. In the proposed approach, the singularities
typically encountered in this kind of method are avoided. A formulation of the
TT-based control problem with a virtual target resulting from the vertical
translation of the real target allows us to design a single controller, able to
regulate the robot pose towards the desired configuration, without local
minima. In this context, we introduce a super-twisting control scheme
guaranteeing continuous control inputs, while exhibiting strong robustness
properties. Our approach is valid for perspective cameras as well as
catadioptric systems obeying the central camera model. All these contributions
are supported by convincing numerical simulations and experiments in a
popular dynamic robot simulator.
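As background, the standard super-twisting algorithm (a second-order sliding-mode controller) produces a continuous control input from a sliding variable s. The minimal sketch below implements that textbook form; the gains, the toy error dynamics, and the Euler discretization are illustrative assumptions, not the controller derived in the paper.

import numpy as np

def super_twisting_step(s, w, k1, k2, dt):
    # One Euler step of the standard super-twisting algorithm:
    #   u = -k1 * sqrt(|s|) * sign(s) + w,   dw/dt = -k2 * sign(s)
    # s is the sliding variable, w is the controller's internal state.
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + w
    w_next = w + dt * (-k2 * np.sign(s))
    return u, w_next

# Toy usage: drive a scalar error s toward zero through first-order
# dynamics ds/dt = u (not the robot model from the paper).
s, w, dt = 1.0, 0.0, 0.01
for _ in range(2000):
    u, w = super_twisting_step(s, w, k1=1.5, k2=1.1, dt=dt)
    s += dt * u
print(s)  # close to zero after the loop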
[4] Geraldo Silveira, On intensity-based 3-D visual servoing, Robotics and Autonomous Systems, Volume 62, Issue 11, November 2014, Pages 1636-1645, doi
Abstract: This article investigates the problem of pose-based visual
servoing whose equilibrium state is defined via a reference image. Unlike most
solutions, this work directly exploits the pixel intensities without
any feature extraction or matching. Intensity-based methods provide for higher
accuracy and versatility. Another central idea of this work concerns the
exploitation of the observability issue associated with monocular systems, which
always occurs around the equilibrium. This overall framework allows for
developing a family of new 3D visual servoing techniques with varying degrees
of computational complexity and of prior knowledge, all in a unified scheme.
Three new methods are then presented, and their closed-loop performances are
experimentally assessed. As an additional contribution, these results refute
the common belief that correct camera calibration and pose recovery are crucial
to the accuracy of 3D visual servoing techniques.
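At the core of such intensity-based (direct) methods is a photometric error computed from raw pixel values rather than from extracted features. The fragment below is a minimal sketch of that cost function only, not the pose estimation or control scheme developed in the article.

import numpy as np

def photometric_error(I_cur, I_ref):
    # Sum-of-squared-differences between the current image and the reference
    # image that defines the servoing equilibrium. No feature extraction or
    # matching is involved.
    diff = I_cur.astype(np.float64) - I_ref.astype(np.float64)
    return 0.5 * float(np.sum(diff ** 2))

# A direct visual servoing loop moves the camera so as to decrease this
# error until the current view matches the reference image.
I_ref = np.random.rand(48, 64)
I_cur = I_ref + 0.01 * np.random.randn(48, 64)
print(photometric_error(I_cur, I_ref))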
[5] Jakob Engel, Jurgen Sturm, Daniel Cremers, Scale-aware navigation of a low-cost quadrocopter with a monocular camera, Robotics and Autonomous Systems, Volume 62, Issue 11, November 2014, Pages 1646-1656, doi
Abstract: We present a complete solution for the visual navigation of a
small-scale, low-cost quadrocopter in unknown environments. Our approach relies
solely on a monocular camera as the main sensor, and therefore does not need
external tracking aids such as GPS or visual markers. Costly computations are
carried out on an external laptop that communicates over wireless LAN with the
quadrocopter. Our approach consists of three components: a monocular SLAM
system, an extended Kalman filter for data fusion, and a PID controller. In
this paper, we (1) propose a simple, yet effective method to compensate for
large delays in the control loop using an accurate model of the quadrocopter’s
flight dynamics, and (2) present a novel, closed-form method to estimate the scale
of a monocular SLAM system from additional metric sensors. We extensively
evaluated our system in terms of pose estimation accuracy, flight accuracy, and
flight agility using an external motion capture system. Furthermore, we
compared the convergence and accuracy of our scale estimation method for an
ultrasound altimeter and an air pressure sensor with filtering-based
approaches. The complete system is available as open-source in ROS. This
software can be used directly with a low-cost, off-the-shelf Parrot AR.Drone
quadrocopter, and hence serves as an ideal basis for follow-up research
projects.
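The scale of a monocular SLAM map can only be recovered by comparing it with metric sensors such as the ultrasound altimeter mentioned above. The sketch below is a naive least-squares scale fit in that spirit; it assumes noise only on the metric readings and is a simplified stand-in, not the closed-form maximum-likelihood estimator derived in the paper.

import numpy as np

def naive_scale_estimate(slam_dists, metric_dists):
    # Least-squares scale factor minimizing sum_i (lam * x_i - y_i)^2, where
    # x_i are distances travelled according to the (scale-ambiguous) monocular
    # SLAM system and y_i are the corresponding metric distances from, e.g.,
    # an ultrasound altimeter. Assumes noise only on y.
    x = np.asarray(slam_dists, dtype=np.float64)
    y = np.asarray(metric_dists, dtype=np.float64)
    return float(x @ y / (x @ x))

# Illustrative numbers with a true scale of 2.0 and small measurement noise.
x = np.array([0.5, 1.0, 1.5, 2.0])
y = 2.0 * x + np.random.normal(0.0, 0.02, size=x.shape)
print(naive_scale_estimate(x, y))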
[6] Duy-Nguyen Ta, Kyel Ok, Frank Dellaert, Vistas and parallel tracking and mapping with Wall–Floor Features: Enabling autonomous flight in man-made environments, Robotics and Autonomous Systems, Volume 62, Issue 11, November 2014, Pages 1657-1667, doi
Abstract: We propose a solution to the problem of autonomous flight
in man-made indoor environments with a micro aerial vehicle (MAV), using a
frontal camera, a downward-facing sonar, and odometry inputs. While steering an
MAV towards distant features that we call vistas, we build a map of the
environment in a parallel tracking and mapping fashion to infer the wall
structure and avoid lateral collisions in real time. Our framework overcomes
the limitations of traditional monocular SLAM approaches that are prone to
failure when operating in feature-poor environments and when the camera purely
rotates. First, we overcome the common dependency on feature-rich environments
by detecting Wall–Floor Features (WFFs), a novel type of low-dimensional
landmarks that are specifically designed for man-made environments to capture
the geometric structure of the scene. We show that WFFs not only reveal the
structure of the scene, but can also be tracked reliably. Second, we cope with
difficult robot motions and environments by fusing the visual data with
odometry measurements in a principled manner. This allows the robot to continue
tracking when it purely rotates and when it temporarily navigates across a
completely featureless environment. We demonstrate our results on a small
commercially available quad-rotor platform flying in a typical feature-poor
indoor environment.
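To illustrate the kind of fallback described above (continuing to track during pure rotation or through featureless stretches), the sketch below blends an odometry-predicted pose with a vision-based pose estimate and falls back on odometry when vision fails. It is a deliberately simple stand-in for the principled fusion used in the paper; the pose representation and the blending weight are assumptions.

import numpy as np

def fuse_pose(odom_pose, visual_pose, visual_ok, alpha=0.7):
    # odom_pose, visual_pose: np.array([x, y, yaw]); alpha weights the visual
    # estimate when it is available. Angle wrap-around is ignored for brevity,
    # and this linear blend is only illustrative.
    if not visual_ok:
        return odom_pose
    return alpha * visual_pose + (1.0 - alpha) * odom_pose

# Example: vision lost during a pure rotation, so odometry carries the pose.
odom = np.array([1.00, 0.50, 0.30])
vision = np.array([1.02, 0.48, 0.28])
print(fuse_pose(odom, vision, visual_ok=False))  # odometry pose
print(fuse_pose(odom, vision, visual_ok=True))   # blended pose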