Acoustic scene mapping for robot audition

Date
-
Speaker
Christine Evers
Place
BIU Engineering Building 1103, Room 329
Affiliation
Imperial College London
Abstract

The ability to explore the surrounding environment is a major precondition for robot autonomy and is therefore crucial for Human-Robot Interaction (HRI). Audio signals obtained from microphone arrays integrated into the robot platform contain information about the environment that can be exploited constructively for scene analysis, especially in situations where visual sensors suffer from a limited field of view. Acoustic scene mapping is a challenging task, as speech in enclosed environments is subject to reverberation, causing localization errors, spurious detections of early reflections, and missed detections of sound sources. Robust multi-source tracking is therefore required to estimate smoothed source trajectories. Moreover, microphone arrays can often localize sound sources only in terms of their instantaneous directions relative to the robot, whereas the source-sensor distance, or range, is generally unmeasured. As the absolute location of a moving robot is often unknown in practice, Acoustic Simultaneous Localization and Mapping (a-SLAM) is required in order to localize the microphone array and jointly map the sound sources.
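To illustrate the direction-only measurement issue mentioned above, the following minimal sketch shows a generic bearing-only observation model in 2D. The function and variable names (`doa_observation`, `robot_pose`, `source_xy`) are illustrative assumptions for this announcement, not part of the speaker's framework.

```python
import numpy as np

def doa_observation(robot_pose, source_xy):
    """Predicted direction of arrival (bearing) of a source relative to the robot.

    robot_pose: (x, y, heading) of the microphone array in world coordinates.
    source_xy:  (x, y) of the sound source in world coordinates.
    Returns the bearing in radians, wrapped to [-pi, pi). Note that the
    source-sensor distance (range) does not appear in the measurement,
    which is why the source position cannot be recovered from a single
    observation alone.
    """
    dx = source_xy[0] - robot_pose[0]
    dy = source_xy[1] - robot_pose[1]
    bearing = np.arctan2(dy, dx) - robot_pose[2]
    return (bearing + np.pi) % (2 * np.pi) - np.pi

# Example: a source 3 m ahead and 1 m to the left of a robot facing along the x-axis.
print(doa_observation((0.0, 0.0, 0.0), (3.0, 1.0)))  # approx. 0.32 rad
```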

This talk discusses recent developments in acoustic source tracking and a-SLAM for HRI. A new theoretical framework is introduced that provides robustness against the adverse effects of dominant reflections due to reverberation and is capable of mapping both static and moving sound sources. The theoretical treatment is followed by the presentation and discussion of simulation results.

Last updated: 04/12/2022