Addressing real-life experiments (such as dynamic auditory scene analysis in a search-and-rescue scenario) requires deploying and running the Two!Ears Auditory Model on a robotic platform. To assess the active and exploratory features of the model and its ability to handle multi-modality, the robot must be endowed with binaural perception, adequate mobility, and other sensing modalities. It must also come with a software platform enabling the concurrent execution of all the processes involved in the model. To ensure reproducible research, this architecture must enable maximum software sharing when switching from one platform to another.
This chapter first describes general aspects of the robotic software architecture, which builds on the widely used ROS middleware. Then, the dedicated components for streaming audio are documented. Guidelines for motorizing a KEMAR head are provided to tackle active perception scenarios. Finally, a ROS auditory front-end is presented.