Invited talks

Active Learning in Network Monitoring - Mark Coates, McGill University

http://vimeo.com/13209125

In many network monitoring tasks, measurement probes are costly and there is latency in obtaining information about the state of the network. The adoption of active learning techniques can result in a dramatic reduction in the number of measurement probes and the time required to obtain accurate estimates of network performance metrics. We describe how these techniques can be employed in estimating loss and delay metrics and in conducting fault diagnosis. In particular, we focus on active learning methods for simultaneously estimating the available bandwidth on multiple paths through the network.
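
To make the flavour of such probe selection concrete, here is a minimal Python sketch (an illustration, not the speaker's algorithm): it maintains an independent Gaussian belief over the available bandwidth of a few hypothetical paths and always sends the next probe on the path whose measurement is expected to reduce posterior variance the most. The number of paths, bandwidth values and noise level are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scenario: true available bandwidth (Mbps) on four paths.
true_bw = np.array([40.0, 75.0, 55.0, 90.0])
noise_var = 25.0                      # variance of a single probe measurement

# Independent Gaussian belief per path: prior mean and variance.
mean = np.full(4, 60.0)
var = np.full(4, 400.0)

def expected_var_reduction(v):
    # Posterior variance after one probe is 1 / (1/v + 1/noise_var);
    # the reduction is the difference from the prior variance.
    return v - 1.0 / (1.0 / v + 1.0 / noise_var)

for t in range(20):
    # Active choice: probe the path whose measurement shrinks uncertainty most.
    i = int(np.argmax(expected_var_reduction(var)))
    y = true_bw[i] + rng.normal(0.0, np.sqrt(noise_var))   # send one probe

    # Conjugate Gaussian update of the probed path's belief.
    precision = 1.0 / var[i] + 1.0 / noise_var
    mean[i] = (mean[i] / var[i] + y / noise_var) / precision
    var[i] = 1.0 / precision

print("estimated bandwidth:", np.round(mean, 1))
print("posterior std dev:  ", np.round(np.sqrt(var), 1))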

Active Matching: Efficient Guided Search for Image Correspondence - Andrew Davison, Imperial College London

http://vimeo.com/15742048

Over the past few years we have worked on an approach to matching features between images which takes full advantage of the priors that are normally available, in order to avoid blanket, bottom-up image processing and proceed in a sequential, guided manner. In Active Matching, each measurement of one feature is used to dynamically and probabilistically update the predicted positions of the other candidate features. In this way, image processing can be put "into the loop" of the search for global consensus, producing matching algorithms which are much more satisfying than RANSAC or similar methods that depend on random sampling and fixed thresholds. The decisions which must be taken at each step are made based on explicit evaluation of expected information gain. I will explain the basic Active Matching algorithm, and recent developments which now allow us to match hundreds of features per frame in real time.
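
The following Python sketch illustrates the guided-search idea in a simplified form, not the Active Matching algorithm itself: candidate feature positions share a joint Gaussian prior (standing in for priors from shared camera motion), each simulated measurement conditions that joint belief and so tightens the predictions for the remaining features, and the next feature to measure is the one with the largest predicted search region, a crude proxy for the expected-information-gain criterion. All dimensions and noise values are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)

n = 5                       # number of candidate features (hypothetical)
d = 2 * n                   # stacked 2-D image positions

# Hypothetical joint Gaussian prior: features are correlated because they
# share the same uncertain camera motion, so one measurement informs the rest.
prior_mean = rng.uniform(0, 640, size=d)
A = rng.normal(size=(d, 3))                 # shared low-rank structure
Sigma = A @ A.T * 50.0 + np.eye(d) * 20.0   # joint covariance
meas_var = 4.0                              # per-feature measurement noise

true_pos = rng.multivariate_normal(prior_mean, Sigma)

mean, cov = prior_mean.copy(), Sigma.copy()
unmeasured = set(range(n))

def search_area(cov, i):
    """Log-det of feature i's predicted 2x2 covariance: its search-region size."""
    block = cov[2 * i:2 * i + 2, 2 * i:2 * i + 2]
    return np.linalg.slogdet(block + meas_var * np.eye(2))[1]

while unmeasured:
    # Measure the feature whose predicted search region is largest, i.e.
    # whose measurement is expected to be most informative.
    i = max(unmeasured, key=lambda j: search_area(cov, j))
    unmeasured.remove(i)

    # Simulated image measurement of feature i.
    H = np.zeros((2, d)); H[0, 2 * i] = 1.0; H[1, 2 * i + 1] = 1.0
    z = H @ true_pos + rng.normal(0, np.sqrt(meas_var), size=2)

    # Kalman-style conditioning: this single measurement tightens the
    # predictions (and shrinks the search regions) of all other features.
    S = H @ cov @ H.T + meas_var * np.eye(2)
    K = cov @ H.T @ np.linalg.inv(S)
    mean = mean + K @ (z - H @ mean)
    cov = cov - K @ H @ cov

print("largest residual position error:", np.round(np.abs(mean - true_pos).max(), 2))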

Active Learning for Imitation - Manuel Lopes, University of Plymouth

http://vimeo.com/13644034

Imitation addresses the problem of learning a task representation and/or solution from observations of a demonstration. From such demonstrations it is possible to extract various kinds of information, and different approaches exist to extract each type, ranging from regression and classification methods to clustering and inverse reinforcement learning. In this presentation we will review some of these approaches, particularly the ones with an active learning generalization. We will also try to offer a unified perspective on some of them, particularly regression and inverse reinforcement learning. We will present new results and discuss the main advantages and disadvantages of using active learning in an imitation setting.

Developmental constraints on active learning for the acquisition of motor skills in high-dimensional robots - Pierre-Yves Oudeyer, INRIA

http://vimeo.com/13489178

Learning motor control in robots, such as learning visual reaching or object manipulation in humanoid robots, is becoming a central topic both in "traditional" robotics and in developmental robotics. A major obstacle is that learning can become extremely slow, or even impossible, without adequate exploration strategies. Active learning techniques, also called intrinsically motivated learning in the developmental robotics literature, can be used to accelerate learning. Yet many robotic spaces have properties which are not compatible with the standard assumptions of most active learning or intrinsic motivation algorithms. For example, they are typically much too large to be learnt entirely, they can even be open-ended, and they can also contain subspaces which are too complex to be learnt by a given machine learning algorithm. Some approaches to active learning/intrinsic motivation have been proposed to address some of these difficulties, such as the explicit maximization of information gain or the explicit maximization of the decrease of prediction errors (as opposed to the maximization of uncertainty or of prediction errors themselves, as in many active learning heuristics). Yet even these approaches quickly become inefficient in realistic sensorimotor spaces. In this talk, I will argue that various kinds of developmental constraints should be considered in order to deal properly with such spaces: maturational constraints on sensorimotor channels, the use of motor primitives, constraints on the spaces in which active learning is performed, morphological constraints, and of course social learning constraints.
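
As a toy illustration of exploration driven by the decrease of prediction errors (rather than by error or uncertainty itself), the Python sketch below lets a learner choose among three hypothetical regions of a sensorimotor space, one of which is pure noise; regions are selected by recent learning progress, so the unlearnable region tends to be abandoned once its error stops decreasing. The regions, the simple predictor and the window size are all invented for illustration and are not the speaker's algorithm.

import numpy as np

rng = np.random.default_rng(2)

# Toy sensorimotor space split into regions of very different learnability;
# region 2 is pure noise, so its prediction error never durably decreases.
def world(region, x):
    if region == 0:
        return np.sin(3 * x)                      # easy, quickly learned
    if region == 1:
        return np.sin(20 * x)                     # harder, learned slowly
    return rng.normal(0.0, 1.0)                   # unlearnable noise

K, window = 3, 15
samples = [([], []) for _ in range(K)]            # (inputs, outputs) per region
errors = [[] for _ in range(K)]                   # prediction-error history per region

def predict(region, x):
    xs, ys = samples[region]
    if not xs:
        return 0.0
    return ys[int(np.argmin(np.abs(np.array(xs) - x)))]   # 1-nearest-neighbour predictor

def learning_progress(region):
    e = errors[region]
    if len(e) < 2 * window:
        return np.inf                             # visit under-sampled regions first
    return np.mean(e[-2 * window:-window]) - np.mean(e[-window:])   # recent error decrease

for t in range(600):
    region = int(np.argmax([learning_progress(k) for k in range(K)]))
    if rng.random() < 0.1:                        # a little residual random exploration
        region = int(rng.integers(K))
    x = rng.random()
    y = world(region, x)
    errors[region].append(abs(predict(region, x) - y))
    samples[region][0].append(x)
    samples[region][1].append(y)

print("samples per region:", [len(s[0]) for s in samples])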

Exploratory Learning of Grasp Affordances - Justus H. Piater, Université de Liège

http://vimeo.com/13373476

Grasping known objects is a capability fundamental to many important applications of autonomous robotics. Here, active learning holds a lot of promise given the complexities of the real world and the uncertainties associated with physical manipulation. To this end, we have developed learnable object representations for interaction. Objects and associated action parameters are jointly represented by Markov networks whose edge potentials encode pairwise spatial relationships between local features in 3D. Local features typically correspond to visual signatures, but may also represent action-relevant parameters such as object-relative gripper poses useful for grasping the object. Thus, detecting, recognizing and synthesizing grasps for known objects is unified within a single probabilistic inference procedure. Learning these representations is a two-step procedure. First, visual object models are learned through play-like, autonomous, exploratory interaction of a robot with its environment. Second, object-specific grasping skills are incrementally acquired, again by play-like interaction. The result is a system that autonomously acquires knowledge about objects and how to detect, recognize and grasp them.
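
A highly simplified Python sketch of how pairwise potentials can tie object features to an object-relative gripper pose: it assumes known feature correspondences and uses only feature-to-gripper edges with Gaussian potentials, so it is a crude stand-in for inference in the full Markov network, but it shows how a candidate grasp consistent with the learned object-relative pose scores higher than inconsistent ones. All numbers and the form of the potentials are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(5)

# Learned model (hypothetical): object feature points and a gripper position,
# all expressed in the object frame.
model_features = rng.normal(size=(6, 3)) * 0.05            # object-frame feature positions
model_grasp = np.array([0.0, 0.0, 0.12])                   # object-relative gripper position
SIGMA = 0.01                                                # tolerance of the pairwise potentials

def pairwise_potential(offset, model_offset):
    # Gaussian compatibility between an observed relative offset and the
    # offset stored on the corresponding edge of the learned model.
    return np.exp(-np.sum((offset - model_offset) ** 2) / (2 * SIGMA ** 2))

def grasp_score(observed_features, candidate_grasp):
    """Score a candidate gripper position by the product of its pairwise
    potentials with every observed feature (known correspondences assumed)."""
    score = 1.0
    for obs, mod in zip(observed_features, model_features):
        score *= pairwise_potential(candidate_grasp - obs, model_grasp - mod)
    return score

# Simulated scene: the object is translated; its features move with it.
true_offset = np.array([0.30, -0.10, 0.05])
observed = model_features + true_offset + rng.normal(0, 0.002, size=(6, 3))

# Evaluate a few candidate grasps; the one consistent with the learned
# object-relative pose should score highest.
candidates = [true_offset + model_grasp,                    # correct grasp
              true_offset + np.array([0.05, 0.0, 0.12]),    # slightly off
              np.array([0.0, 0.0, 0.12])]                   # ignores the object pose
print("candidate scores:", np.round([grasp_score(observed, c) for c in candidates], 4))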

Planning in Information Space with Macro-actions - Nick Roy, MIT

http://vimeo.com/13489236

Active learning can be framed as a problem of planning in information space: the goal is to learn about the world by taking actions that improve expected performance. In some domains, planning far into the future is prohibitively expensive and the agent is not able to discover effective information-gathering plans. However, by using macro-actions consisting of fixed-length open-loop policies, the policy class considered during planning is explicitly restricted in return for computational gains that allow much deeper-horizon forward search. In a certain subset of domains, it is possible to analytically compute the distribution over posterior beliefs that results from a single macro-action; this distribution captures any observation sequence that could occur during the macro-action, and allows significant additional computational savings. I will show performance on two simulation experiments: a standard exploration domain and a UAV search domain.
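
A minimal Python sketch of the macro-action idea, under strong simplifying assumptions that are not from the talk (a 1-D grid, a known robot position, binary detections with no false alarms): each macro-action is a fixed-length open-loop sequence of moves, and it is scored by the expected entropy of the posterior belief over the target's location, obtained by enumerating every observation sequence the macro could produce, i.e. the distribution over posterior beliefs induced by that macro.

import itertools
import numpy as np

# Toy target-search problem on a 1-D grid: the robot's cell is known, the
# target's cell is not, and there are no false detections.
N = 8                       # grid cells
P_DETECT = 0.8              # probability of seeing the target when co-located
L = 3                       # macro-action length
MOVES = (-1, +1)            # step left / step right

def entropy(b):
    nz = b[b > 0]
    return float(-np.sum(nz * np.log(nz)))

def likelihood(robot, detected):
    """P(observation | target in cell c), for every cell c."""
    like = np.zeros(N) if detected else np.ones(N)
    like[robot] = P_DETECT if detected else 1.0 - P_DETECT
    return like

def expected_entropy(belief, robot, macro):
    """Enumerate every observation sequence the macro can produce and average
    the entropy of the resulting posterior beliefs, weighted by their
    probabilities -- the distribution over posterior beliefs for this macro."""
    if not macro:
        return entropy(belief)
    robot = int(np.clip(robot + macro[0], 0, N - 1))
    total = 0.0
    for detected in (False, True):
        like = likelihood(robot, detected)
        p_obs = float(like @ belief)
        if p_obs == 0.0:
            continue
        total += p_obs * expected_entropy(like * belief / p_obs, robot, macro[1:])
    return total

belief = np.full(N, 1.0 / N)        # uniform prior over the target's cell
robot = 0
best = min(itertools.product(MOVES, repeat=L),
           key=lambda macro: expected_entropy(belief, robot, macro))
print("best macro-action:", best)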

Active Sequential Estimation of Object Dynamics with Tactile Sensory Feedback - Jo-Anne Ting, UBC

http://vimeo.com/13489404

Estimating parameters of object dynamics, such as viscosity or internal degrees of freedom, is key to the autonomous and dexterous robotic manipulation of objects. Oftentimes, it may be challenging to accurately and efficiently estimate these object parameters due to the complex, highly nonlinear underlying physical processes. In an effort to improve the quality of hand-crafted solutions, we examine how control strategies can be generated automatically. We present an active learning framework that sequentially gathers data samples, using information-theoretic criteria to find the optimal actions to perform at each time step. Our framework is evaluated on a robotic hand-arm system, where the task involves optimizing actions (shaking frequency and rotation of shaking) in order to determine the viscosity of liquids, given only tactile sensory feedback. The active framework performs better than simpler strategies and speeds up the convergence of estimates.
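
The Python sketch below conveys the sequential, information-theoretic action selection in a heavily simplified form: a handful of hypothetical viscosity hypotheses, a toy forward model mapping shaking frequency and viscosity to a mean tactile reading, and at each step the action with the largest mutual information between the next reading and the viscosity is executed, followed by a Bayesian update. The forward model, noise level and hypothesis grid are all assumptions, not the authors' model.

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical discrete hypotheses over viscosity (Pa*s) and shaking actions (Hz).
VISCOSITIES = np.array([0.5, 1.0, 2.0, 4.0])
ACTIONS = np.array([0.5, 1.0, 2.0, 3.0])
SIGMA = 0.6                        # assumed tactile sensing noise (std dev)
TRUE_VISC = 2.0

def sensor_mean(action, visc):
    # Toy forward model: mean tactile force amplitude grows with both the
    # shaking frequency and the viscosity (a stand-in for the real dynamics).
    return action * np.sqrt(visc)

def info_gain(belief, action, grid=np.linspace(-2.0, 10.0, 241)):
    """Mutual information between the viscosity hypothesis and the next
    tactile reading, computed on a discretised observation grid."""
    dz = grid[1] - grid[0]
    means = sensor_mean(action, VISCOSITIES)[:, None]           # one mean per hypothesis
    lik = np.exp(-0.5 * ((grid[None, :] - means) / SIGMA) ** 2) / (SIGMA * np.sqrt(2 * np.pi))
    p_y = belief @ lik                                          # predictive density of the reading
    h_y = -np.sum(p_y * np.log(p_y + 1e-12)) * dz               # entropy of the prediction
    h_y_given_v = 0.5 * np.log(2 * np.pi * np.e * SIGMA ** 2)   # Gaussian conditional entropy
    return h_y - h_y_given_v

belief = np.full(len(VISCOSITIES), 1.0 / len(VISCOSITIES))
for t in range(6):
    # Pick the shaking action expected to be most informative about viscosity.
    act = ACTIONS[int(np.argmax([info_gain(belief, a) for a in ACTIONS]))]
    y = sensor_mean(act, TRUE_VISC) + rng.normal(0.0, SIGMA)    # shake and sense
    lik = np.exp(-0.5 * ((y - sensor_mean(act, VISCOSITIES)) / SIGMA) ** 2)
    belief *= lik
    belief /= belief.sum()

print("posterior over viscosities:", np.round(belief, 3))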

Active Visual Search - John K. Tsotsos, York University

http://vimeo.com/15770873

Active perception applies intelligent control strategies to the data acquisition process, strategies that depend on the current state of data interpretation; the idea has a history that pre-dates computer vision. I will very briefly lay out this history and detail theoretical arguments on the computational nature of the general problem. The theory informs us that optimal solutions are not likely to exist. In this context, I consider the problem of visually finding an object in a mostly unknown space with a mobile robot. It is clear that all possible views and images cannot be examined in a practical system, and as a result the problem is cast as an optimization: the goal is to maximize the probability of finding the target given a fixed cost limit on the total number of robotic actions required to find the visual target. Due to the inherent intractability of this problem, we present an approximate solution and investigate its performance and properties.
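
To give a feel for the optimization, here is a small greedy Python sketch (an illustration, not the speaker's approximation algorithm): given a prior over a handful of hypothetical target locations and a few sensing actions with assumed coverage, detection probability and cost, it repeatedly picks the action with the highest expected probability of finding the target per unit cost, updates the belief as if the action missed, and stops when the cost budget is exhausted.

import numpy as np

rng = np.random.default_rng(4)

# Hypothetical search problem: a prior over which of N locations hides the
# target, and sensing actions that each cover some locations with a given
# detection probability and cost.
N = 12
prior = rng.random(N)
prior /= prior.sum()
ACTIONS = [                                       # (covered cells, P(detect), cost)
    (np.arange(0, 4),  0.9, 3.0),
    (np.arange(3, 8),  0.7, 2.0),
    (np.arange(7, 12), 0.8, 2.5),
    (np.arange(0, 12), 0.3, 1.0),                 # cheap wide-angle glance
]
BUDGET = 10.0

belief = prior.copy()
spent, plan = 0.0, []
while True:
    # Greedy rule: expected probability of finding the target per unit cost,
    # among the actions that still fit in the remaining budget.
    feasible = [(pd * belief[cells].sum() / cost, i)
                for i, (cells, pd, cost) in enumerate(ACTIONS)
                if spent + cost <= BUDGET]
    if not feasible:
        break
    _, i = max(feasible)
    cells, pd, cost = ACTIONS[i]
    plan.append(i)
    spent += cost
    # Plan pessimistically: assume the action misses and update the belief.
    belief[cells] *= 1.0 - pd
    belief /= belief.sum()

print("greedy plan (action indices):", plan, "total cost:", spent)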