Autonomous operation in uncertain and partially unknown environments: perception, information fusion and planning
Abstract: Autonomous robotic systems are required to operate in increasingly challenging and complex scenarios, often within partially unknown, uncertain or dynamically changing environments. Representative scenarios include single- and multi-robot localization and structure from motion (SfM) problems, such as mobile vision and 3D reconstruction. Reliable and accurate online autonomous operation in these scenarios requires computationally efficient perception, information fusion and planning. Collaboration between robots can greatly improve performance, but it also introduces additional challenges: communication and computation budgets are limited, and consistent information fusion must be enforced. In this talk I will describe approaches that address these aspects of single- and multi-robot autonomy.
The majority of the talk will be devoted to inference and information fusion. First, I will present a computationally efficient method for incremental bundle adjustment, incremental light bundle adjustment (iLBA), which incorporates two key components to substantially reduce computational complexity. The first is algebraic elimination of the observed 3D points, which leads to a cost function formulated in terms of multiple view geometry constraints. The second is incremental smoothing, which uses graphical models to adaptively identify the variables that should be re-eliminated at each step. I will demonstrate the method in both SfM and robot navigation scenarios. Since high-rate performance is not guaranteed for general loop closure observations, I will then overview an approach that addresses this issue with parallelized computation, partitioning the underlying graphical structure of the problem at hand.

Next, I will focus on multi-agent configurations and discuss an approach that exploits commonly observed 3D points to perform cooperative localization based on distributed multiple view geometry and graph-based information fusion.

While these approaches assume robot actions are pre-determined, another key component of robot autonomy is planning. Producing reliable, robust and high-quality control policies requires explicitly accounting for the different sources of uncertainty. In the final part of the talk I will present an approach for planning under uncertainty while operating in uncertain and partially unknown environments.
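To make the idea of algebraically eliminating 3D points concrete, here is a minimal NumPy sketch (a toy example of my own, not the speaker's iLBA formulation) of the classic two-view epipolar constraint: the residual couples the two camera poses and the image observations of a common point, while the 3D point itself drops out of the expression entirely.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Toy setup: camera 1 at the world origin; camera 2 rotated by theta
# about the y axis and translated along x (all values illustrative).
theta = 0.1
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])  # cam2-to-world rotation
t = np.array([1.0, 0.0, 0.0])                         # cam2 center in world

# A 3D point observed by both cameras.
X = np.array([0.3, -0.2, 4.0])

# Normalized observations (pinhole camera, identity intrinsics).
q1 = X / X[2]                  # observation in camera 1 (= world) frame
X2 = R.T @ (X - t)             # the same point in the camera-2 frame
q2 = X2 / X2[2]                # observation in camera 2

# Essential matrix built from the relative pose only: E = [t_21]_x R_21,
# with R_21 = R^T and t_21 = -R^T t.
E = skew(-R.T @ t) @ R.T

# Epipolar (two-view) constraint: it involves the two observations and the
# relative pose, but NOT the 3D point X -- the point has been eliminated.
residual = q2 @ E @ q1
print(residual)   # ~0 for a consistent pose/observation pair
```

A cost built from such constraints over many views depends only on camera poses and measurements, which is what allows the 3D points to be left out of the optimization.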
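As a flavor of what planning under uncertainty involves, the following toy sketch (my own illustration, not the approach presented in the talk) scores candidate actions by propagating a Gaussian belief through a linear-Gaussian model with a hypothetical position-fix sensor, and prefers the action expected to reduce uncertainty the most:

```python
import numpy as np

A = np.eye(2)                   # motion model: x' = A x + u + w
Q = 0.04 * np.eye(2)            # process noise incurred by any motion
beacon = np.array([2.0, 0.0])   # hypothetical beacon giving a position fix
R_meas = 0.01 * np.eye(2)       # measurement noise of the fix

def predicted_cov(mean, P, u):
    """Covariance after taking action u: EKF-style prediction, plus an
    update if the predicted position is close enough to the beacon."""
    mean_next = A @ mean + u
    P_pred = A @ P @ A.T + Q
    if np.linalg.norm(mean_next - beacon) < 1.0:   # fix available here
        H = np.eye(2)                              # direct position measurement
        S = H @ P_pred @ H.T + R_meas
        K = P_pred @ H.T @ np.linalg.inv(S)
        P_pred = (np.eye(2) - K @ H) @ P_pred
    return P_pred

mean0 = np.array([0.0, 0.0])
P0 = 0.5 * np.eye(2)
actions = {"stay": np.array([0.0, 0.0]),
           "toward_beacon": np.array([1.5, 0.0]),
           "away_from_beacon": np.array([-1.5, 0.0])}

# Expected uncertainty (trace of predicted covariance) for each action.
scores = {name: np.trace(predicted_cov(mean0, P0, u))
          for name, u in actions.items()}
best = min(scores, key=scores.get)
print(best)   # moving toward the beacon shrinks the belief
```

The key point this toy captures is that actions are evaluated in belief space: the planner reasons about how uncertainty will evolve, not just where the robot will be.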