  • Two!Ears documentation
  • First steps
    • Installation guide
    • Modules of the Two!Ears Auditory Model
    • Set up an acoustic scene
      • Binaural renderer
      • Binaural room scanning renderer
    • Set up an auditory model
    • Work with the database
    • Use a robotic platform
  • Binaural simulator
    • Usage
      • Configuration
        • Configuration using a Matlab script
        • Configuration using XML Scene Description
      • Simulate Ear Signals
    • Examples
      • Two dry sources
      • Moving source
      • Rooms using the Image Source Model
      • Rooms using Binaural Room Impulse Responses
    • Advanced installation
      • Linux/Mac
        • Prerequisites
        • Compile MEX Binaries
      • Windows 7 64bit
        • Prerequisites
        • Compile MEX Binaries
    • Credits
  • Robotic platform
    • Robotic specific software
      • Component-based software architectures in robotics
      • ROS, a software platform for robotics
      • GenoM3, a tool to develop robotic components
      • Installation of the robotic tools
        • Install ROS
        • Install the GenoM3 tools through robotpkg
        • Install a GenoM3 component from the sources
    • Audio streaming
      • BASS, an audio streaming server component
        • BASS terminology
        • Services
        • Output port
        • Example of use
      • Writing a client of BASS
        • An algorithm for clients of BASS
        • Sample implementation in a GenoM3 component
    • Motorization of a KEMAR head
      • Overview
      • Assembly of the mechanism
      • Assembly of the limit sensor circuitry
      • Connection to the controller
      • Associated software
    • A ROS auditory front-end
      • Overview of the architecture
      • openAFE, a C++ library for rosAFE
        • Installation
        • Implementation details
        • Code example
      • rosAFE, a ROS auditory front-end
        • Installation
        • Design and description of the module
        • How to use /rosAFE to compute auditory representations
      • Matlab client to rosAFE
        • Installation
        • Design
        • How to use the Matlab client
        • Demo
        • Known Bugs
  • Auditory front-end
    • Overview
      • Getting started
      • Computation of an auditory representation
        • Using default parameters
        • Input/output signals dimensions
        • Change parameters used for computation
        • Compute multiple auditory representations
        • How to plot the result
      • Chunk-based processing
      • Feedback inclusion
        • Placing a new request
        • Modifying a processor parameter
        • Deleting a processor
      • List of commands
        • Signal objects sObj
    • Technical description
      • Data handling
        • Circular buffer
        • Signal objects
        • Data objects
      • Processors
        • General considerations
        • processChunk method and chunk-based compatibility
      • Manager
        • Processors and signals instantiation
        • Carrying out the processing
    • Available processors
      • Pre-processing (preProc.m)
        • DC removal filter
        • Pre-emphasis
        • RMS normalisation
        • Level reference and scaling
        • Middle ear filtering
      • Auditory filter bank
        • Gammatone (gammatoneProc.m)
        • Dual-resonance non-linear filter bank (drnlProc.m)
      • Inner hair-cell (ihcProc.m)
      • Adaptation (adaptationProc.m)
      • Auto-correlation (autocorrelationProc.m)
      • Rate-map (ratemapProc.m)
      • Spectral features (spectralFeaturesProc.m)
      • Onset strength (onsetProc.m)
      • Offset strength (offsetProc.m)
      • Binary onset and offset maps (transientMapProc.m)
      • Pitch (pitchProc.m)
      • Medial Olivo-Cochlear (MOC) feedback (mocProc.m)
      • Amplitude modulation spectrogram (modulationProc.m)
      • Spectro-temporal modulation spectrogram
      • Cross-correlation (crosscorrelationProc.m)
      • Interaural time differences (itdProc.m)
      • Interaural level differences (ildProc.m)
      • Interaural coherence (icProc.m)
      • Precedence effect (precedenceProc.m)
    • Add your own processors
      • Getting started and setting up processor properties
        • External parameters controllable by the user
        • Internal parameters
      • Implement static methods
        • getDependency
        • getParameterInfo
        • getProcessorInfo
      • Implement parameters “getter” methods
      • Implement the processor constructor
      • Preliminary testing
        • Default instantiation
        • Is it a valid processor?
        • Are parameters correctly described?
      • Implement the core processing method
        • Input and output arguments
        • Chunk-based and signal-based processing
        • Reset method
      • Override parent methods
        • Initialisation methods
        • Input/output routing methods
        • Processing method
      • Allow alternative processing options
      • Implement a new signal type
      • Recommendations for final testing
    • Credits
  • Blackboard system
    • Introduction
    • Usage
      • Configuration
      • Execution
      • Further examples
    • Blackboard architecture
      • Architectural considerations
        • Building a flexible system
        • Building a dynamic system
      • Dynamic system construction
      • Dynamic blackboard memory
      • Dynamic blackboard interactions
      • Scheduler
    • Knowledge sources
      • Abstract knowledge source
      • Auditory front-end knowledge source: AuditoryFrontEndKS
      • Auditory signal dependent knowledge source superclass: AuditoryFrontEndDepKS
      • Localisation knowledge sources
        • Location knowledge source: DnnLocationKS
        • Location knowledge source: GmmLocationKS
        • Localisation decision knowledge source: LocalisationDecisionKS
        • Confusion detection knowledge source: ConfusionKS
        • Confusion solving knowledge source: ConfusionSolvingKS
        • Head rotation knowledge source: RotationKS
      • Identification knowledge sources
        • Identity knowledge source: IdentityKS
        • Identity decision knowledge source: IdDecisionKS
        • Identity Live Debugging knowledge source: IdTruthPlotKS
        • Segment Identity knowledge source: SegmentIdentityKS
      • Sound quality related knowledge sources
        • Coloration knowledge source: ColorationKS
        • Location knowledge source: ItdLocationKS
      • Stream segregation knowledge sources
        • Stream segregation knowledge source: StreamSegregationKS
      • Number of source estimation knowledge sources
        • Number of Sources knowledge source: NumberOfsourcesKS
    • Add your own knowledge sources
      • Example of adding a new knowledge source
    • Model training
      • Sound localisation training
      • Sound identification training pipeline
        • Concepts
        • Training pipeline core classes
        • Model creators
        • Feature creators
  • Auditory Machine Learning Training and Testing Pipeline
    • Overview
      • Getting started
      • Multi-conditional auditory scene simulation
      • Sample feature generation
      • Label creation
      • Model training algorithms
      • Tight coupling with the blackboard-system
      • Utility
    • Usage
      • Setting up Scenes
        • Point Source
        • BRIR Source
        • Diffuse Source
      • Select Your Labeler
      • Available label creators
        • AzmDistributionLabeler
        • AzmLabeler
        • MultiEventTypeLabeler
        • IdAzmDistributionLabeler
        • NumberOfSourcesLabeler
        • MultiLabeler
      • Select Your Features
      • Some of the available feature creators
        • FeatureSet3Blockmean
        • FeatureSet4Blockmean
        • FeatureSetNSrcDetection
        • FeatureSetNSrcDetectionPlusModelOutputs
      • Select Your Model
        • Models and model trainers
        • Performance measure
      • Running the Pipeline
        • Generating scenes and training new models
        • Using trained models inside the blackboard
        • Caching System
    • Examples
      • Full-Stream Sound Event Detection
      • Estimating the Number of Sound Sources
        • Step-by-step training of a number-of-sources model
    • Credits
  • Database
    • Usage
    • Listening tests
      • Human label file format
      • Localisation
        • 2012-03-01: Localisation of a real vs. binaural simulated point source
        • 2013-11-01: Localisation of different source types in sound field synthesis
        • 2016-03-11: Localisation of simultaneous talkers by humans and machines
      • Coloration
        • 2013-05-01: Coloration of a point source in Wave Field Synthesis
        • 2015-10-01: Coloration of a point source in Wave Field Synthesis revisited
        • 2015-10-05: Coloration of a point source in Local Wave Field Synthesis
      • Quality ratings
        • 2014-04-01: Scene related sound quality
        • 2015-11-01: Listening preference of popular music presented by WFS, surround, and stereo
        • 2016-03-01: Listening position preference for different 5.0 reproductions
        • 2016-06-01: Listening preference of different mixes of one popular music song presented by WFS (binaural simulation)
        • 2016-11-18: Listening preference of different mixes of one popular music song presented by WFS
    • Impulse responses
      • Usage of impulse responses
        • Usage of HRTFs
        • Usage of BRIRs
      • Anechoic measurements (HRTFs)
        • Anechoic HRTFs from the KEMAR manikin with different distances
        • Spherical far-field HRTF compilation of the Neumann KU100
        • MIT HRTF measurements of a KEMAR dummy head
        • Near-field HRTFs from SCUT database of the KEMAR
      • Reverberant measurements (BRIRs)
        • Two!Ears, CNRS Toulouse, Adream-building
        • TU Berlin, room Auditorium 3
        • TU Berlin, room Spirit
        • TU Berlin, room Calypso, 5.0 surround setup for different listening positions
        • TU Berlin, room Calypso, 19-channel linear loudspeaker array
        • University of Rostock, RIRs and BRIRs of a 64-channel loudspeaker array for different room configurations
        • Salford-BBC, 12-channel loudspeaker studio
        • University of Surrey, four different rooms
        • TU Ilmenau, conference room
    • Trained Models for Knowledge Sources
    • Sound databases
      • Speech databases
        • GRID corpus
      • Acoustic scenes and events
        • IEEE AASP Challenge on Detection and Classification
    • Stimuli
      • Anechoic Stimuli
        • TU Berlin - Noise Stimuli
        • Cologne University of Applied Sciences - Anechoic Recordings
        • Instruments
    • Visual Stimuli
      • Panorama Image of Audio Laboratory at the Institute of Communications Engineering, University of Rostock
        • License
        • Description
      • Stereo-Vision Capture from Adream Building, CNRS Toulouse
        • License
        • Description
        • Files
  • Examples
    • Localisation with and without head rotations
    • Localisation - looking at the results in detail
    • DNN-based localisation under reverberant conditions
    • GMM-based localisation under reverberant conditions
    • Train sound type identification models
      • Example step-through
        • Caching dir
        • Feature and model creators
        • Training and testing sets
        • Scene configuration
        • Running the pipeline
        • Model testing
    • Identification of sound types
      • Example step-through
        • Specifying the identification models
        • Creating a test scene
        • Initialising the Binaural Simulator
        • Building the example Blackboard System
        • Running the simulation
        • Evaluating the simulation
    • Stream binaural signals from BASS to Matlab
      • Preliminary steps
      • Control BASS to start an acquisition
        • Connect to genomix and load BASS
        • Get the name of your sound interface
        • Start an acquisition
      • Get audio data in Matlab
      • End the session
    • Control the rotation of a KEMAR motorized head from Matlab
    • Prediction of coloration in spatial audio systems
      • Getting listening test data
      • Setting up the Binaural Simulator
      • Estimating the coloration with the Blackboard
      • Verify the results
    • Prediction of localisation in spatial audio systems
      • Getting the listening test data
      • Setting up the Binaural Simulator
      • Estimating the localisation with the Blackboard
      • Verify the results
  • Development of Two!Ears
    • Installation of the development version
      • Get the code
        • Work with the whole Two!Ears model
        • Work with a single module
      • Set up dependencies on particular branches
      • Add your changes
    • Development using git
      • Git for beginners
        • Getting a remote repository to your computer
        • Adding/changing files
        • Staying up to date with the remote repository
        • Getting further help
        • Developing and branching
        • Remote branches
      • Git advanced commands
        • Storing credentials
        • Working together with a svn repository
        • Removing commits with large files
        • Split repository
      • Git under Windows
      • Git with large binary files
    • Matlab coding style guide
      • Introduction
      • Documentation and comments
        • Class headers
        • Function headers
        • Comments
        • License
        • Author
        • Versioning
      • Naming Conventions
        • General
        • Variables
        • Constants
        • Functions
        • Classes/Objects
      • Layout
        • Code Indentation
        • White Spaces
        • Line Width
        • Line Breaks
      • Credits
    • Write documentation
      • Get the raw documentation
      • Get started with Sphinx
      • Convert existing documentation
      • Commonly used terms
      • reStructuredText guidelines
        • Add a figure
        • Add a table
        • Dealing with referencing throughout the document
        • Using acronyms
      • Document new features
 

Audio streaming

  • BASS, an audio streaming server component
  • Writing a client of BASS
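
The two pages above describe the BASS component itself and how to write clients for it. As a rough illustration of what such a client session looks like from Matlab, the minimal sketch below drives BASS through genomix. It is an untested outline: the host address, sound device name, and all numeric parameters are placeholder assumptions, and the authoritative walk-through is the example "Stream binaural signals from BASS to Matlab" in the Examples chapter.

    % Minimal sketch of a Matlab client session with BASS via genomix.
    % 'localhost:8080', 'hw:1,0' and the numeric values below are
    % placeholder assumptions; adapt them to your own setup.
    client = genomix.client('localhost:8080'); % connect to a running genomix server
    bass = client.load('bass');                % load the BASS component

    % Start an acquisition: ALSA device, sample rate in Hz,
    % frames per chunk, and number of frames kept on the output port.
    bass.Acquire('-a', 'hw:1,0', 44100, 2205, 20000);

    % Read the current content of the Audio output port.
    p = bass.Audio();
    left  = p.Audio.left;   % left-channel samples
    right = p.Audio.right;  % right-channel samples

The acquisition is started asynchronously (the '-a' flag) so that control returns to Matlab while BASS keeps streaming; clients then poll the output port to retrieve audio chunks, as detailed in "Writing a client of BASS".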