Two!Ears documentation

Everything you need to know about the Two!Ears Auditory Model.

Getting help

Having trouble? We’d like to help!

First steps

New to Two!Ears or auditory modelling? This is the place to start!

Binaural simulator

The Two!Ears Binaural Simulator enables you to create dynamic binaural ear signals that later stages of the model can use to actively explore a scene.
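
As a first taste, here is a minimal Matlab sketch of how such ear signals can be fetched, assuming a scene description file named test_binaural.xml (a placeholder for your own configuration) and the simulator.SimulatorConvexRoom interface described in the Binaural Simulator documentation:

    % Minimal sketch: obtain binaural ear signals from the simulator.
    % 'test_binaural.xml' is a placeholder for your own scene description.
    sim = simulator.SimulatorConvexRoom('test_binaural.xml');
    sim.set('Init', true);       % initialise the simulation
    sig = sim.getSignal();       % fetch a block of binaural signals [N x 2]
    sim.set('ShutDown', true);   % clean up afterwards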

Robotic platform

If you have a robotic platform ready to record the binaural signals, there is no need to simulate them. Here you will find the hardware and software needed to connect Matlab to the robotic world.

Auditory front-end

The Two!Ears Auditory Front-End extracts a wide range of auditory cues from the ear signals, such as loudness or interaural differences. A detailed description is given in the following pages.
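
As a rough sketch of the workflow, the following lines request interaural level differences (ILDs) from a pair of ear signals; earSignals (an N x 2 matrix) and fsHz (its sampling rate) are placeholders you have to provide, and the dataObject/manager pattern follows the Auditory Front-End documentation:

    % Sketch: extract interaural level differences from ear signals.
    dataObj = dataObject(earSignals, fsHz);   % wrap signals in a data object
    managerObj = manager(dataObj);            % create a processing manager
    sOut = managerObj.addProcessor('ild');    % request the ILD cue
    managerObj.processSignal();               % run the processing chain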

Blackboard system

The Two!Ears Blackboard System is the brain of the Two!Ears Auditory Model: it interprets the auditory cues and extracts meaning from them. Learn how this happens below.
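
To give a flavour of how this looks in practice, here is a hedged sketch that connects a blackboard to an existing simulator object sim and runs it; Blackboard.xml stands in for your own configuration file, and the calls follow the BlackboardSystem interface from its documentation:

    % Sketch: set up and run a blackboard on top of a simulator 'sim'.
    bbs = BlackboardSystem(0);                 % verbosity flag
    bbs.setRobotConnect(sim);                  % connect to the (simulated) robot
    bbs.setDataConnect('AuditoryFrontEndKS');  % feed it with auditory cues
    bbs.buildFromXml('Blackboard.xml');        % placeholder configuration file
    bbs.run();                                 % start processing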

Database

The Two!Ears Binaural Simulator and the Two!Ears Blackboard System use many different kinds of data to perform their tasks. This data is provided by a large collection, described below.
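
Files from this collection can be fetched on demand, for example via the db.getFile helper from the Two!Ears Tools; the path below is only an illustrative entry, and the file is downloaded automatically if it is not available locally:

    % Sketch: get a file from the Two!Ears database (downloads on demand).
    filename = db.getFile('stimuli/anechoic/instruments/anechoic_cello.wav');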

Examples

A key concept of the Two!Ears Auditory Model is reproducible research. Here you will find scripts showcasing basic usage of the model or reproducing figures from our publications.

Development

If you are part of the Two!Ears development team or would like to become part of it, read on.

License

Unless stated otherwise in individual files, the Two!Ears Auditory Model is licensed under the GNU General Public License, version 3; the parts located in the RoboticPlatform and Tools folders are licensed under the BSD 2-Clause License.

Acknowledgement

The members of the Two!Ears team, in alphabetical order, are:

Sylvain Argentieri (UPMC), Jens Blauert (RUB), Jonas Braasch (Rensselaer), Guy Brown (USFD), Benjamin Cohen-L’hyver (UPMC), Patrick Danès (LAAS), Torsten Dau (DTU), Rémi Decorsière (DTU), Thomas Forgue (LAAS), Bruno Gas (UPMC), Youssef Kashef (TUB), Chungeun Kim (TU/e), Armin Kohlrausch (TU/e), Dorothea Kolossa (RUB), Ning Ma (USFD), Tobias May (DTU), Johannes Mohr (TUB), Antonyo Musabini (UPMC), Klaus Obermayer (TUB), Ariel Podlubne (LAAS), Alexander Raake (TUIl), Christopher Schymura (RUB), Sascha Spors (URO), Jalil Taghia (TUB), Ivo Trowitzsch (TUB), Thomas Walther (RUB), Hagen Wierstorf (TUIl), Fiete Winter (URO).

This project has received funding from the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement no. 618075.
