Configuration

To get the model running, you first have to decide which task the model should solve. You can get an idea of what is possible by looking at the currently available knowledge sources of the blackboard system. A knowledge source is an independent module that runs inside the blackboard system and has knowledge about a specific topic; this knowledge can stem from bottom-up or top-down processing.

Once you have decided what you want to do, you configure your blackboard in an XML file. Let’s assume that you want to classify a target speech source in your binaural input signals. A corresponding configuration file could then look like this:

 1 <blackboardsystem>
 2     <dataConnection Type="AuditoryFrontEndKS"/>
 3
 4     <KS Name="baby" Type="IdentityKS">
 5         <Param Type="char">baby</Param>
 6         <Param Type="char">6687829ce1a73694a1ce41c7c01dec1b</Param>
 7     </KS>
 8     <KS Name="femaleSpeech" Type="IdentityKS">
 9         <Param Type="char">femaleSpeech</Param>
10         <Param Type="char">6687829ce1a73694a1ce41c7c01dec1b</Param>
11     </KS>
12     <KS Name="idDec" Type="IdDecisionKS">
13         <Param Type="int">0</Param>
14         <Param Type="int">1</Param>
15     </KS>
16
17     <Connection Mode="replaceOld" Event="AgendaEmpty">
18         <source>scheduler</source>
19         <sink>dataConnect</sink>
20     </Connection>
21     <Connection Mode="replaceOld">
22         <source>dataConnect</source>
23         <sink>baby</sink>
24         <sink>femaleSpeech</sink>
25     </Connection>
26     <Connection Mode="replaceParallel">
27         <source>baby</source>
28         <source>femaleSpeech</source>
29         <sink>idDec</sink>
30     </Connection>
31 </blackboardsystem>

Looking at the configuration file step by step, we find the following settings:

dataConnection
This specifies where the data that your classifier uses comes from. For most of the available knowledge sources this will be the Auditory front-end, which processes the input ear signals in a bottom-up way and provides the knowledge source with auditory features it can use to perform its action. If the data should come from the auditory front-end, you have to specify AuditoryFrontEndKS, which is itself a knowledge source.
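
Concretely, this corresponds to the single element on line 2 of the listing above:

    <dataConnection Type="AuditoryFrontEndKS"/>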
KS

This specifies the knowledge sources that should be part of the blackboard system. In this case we use two identity knowledge sources, each of which has knowledge about the features corresponding to a particular sound source. This knowledge is handed to the identity knowledge source as a parameter (Param). Each identity knowledge source will provide a hypothesis to the blackboard stating the probability that the corresponding identity is matched by the input signal.

The second parameter in the IdentityKS description (a string of hexadecimal digits) is the version number of a Matlab MAT file that contains a trained acoustic model. For the first knowledge source in the example above, the file name of the corresponding acoustic model file will be baby.6687829ce1a73694a1ce41c7c01dec1b.model.mat. For more details, see the description of the identity knowledge source, which explains how to train your own source models.
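
For illustration, adding a further identity knowledge source follows the same pattern; note that the source name alarm and the version hash used below are hypothetical placeholders, not a model shipped with the system:

    <!-- hypothetical example: name and version hash are placeholders -->
    <KS Name="alarm" Type="IdentityKS">
        <Param Type="char">alarm</Param>
        <Param Type="char">0123456789abcdef0123456789abcdef</Param>
    </KS>

Following the naming convention above, such an entry would expect a trained model file called alarm.0123456789abcdef0123456789abcdef.model.mat.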

The second kind of knowledge source we use is the identity decision knowledge source. It judges the different identity hypotheses it gets from the identity knowledge sources and makes the final decision about which identity is matched by the input signals.
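
This corresponds to the following entry in the listing above. As with the identity knowledge sources, the Param values are presumably handed to the knowledge source when it is instantiated; the meaning of the two integers is specific to IdDecisionKS and not covered here:

    <KS Name="idDec" Type="IdDecisionKS">
        <Param Type="int">0</Param>
        <Param Type="int">1</Param>
    </KS>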

Connection

The blackboard is a modular system. The Connection settings tell the blackboard system which connections should be made between its various modules, so that they can be notified about relevant events. The configuration file shown above describes three connections, each of which describes an event binding between one or more sources and one or more sinks; let’s look at each of them in turn.

Lines 17-20: This describes an event binding between the scheduler and the auditory front end, for an event called AgendaEmpty. Hence, the auditory front end will be notified when this event is dispatched by the scheduler, causing it to retrieve a new block of data.

Lines 21-25: This makes an event binding between the auditory front end and the two IdentityKS objects defined earlier in the file. Thus, the IdentityKS objects will be notified when new audio input is available to classify.

Lines 26-30: This makes event bindings between the two IdentityKS objects and the IdDecisionKS; hence, the IdDecisionKS is notified when a source has been classified, so that it can make a final decision on which source types are present in the scene.

Each of these connections can have a specific Mode, which determines how the blackboard should add triggered knowledge sources into the agenda. For example, the replaceOld mode indicates that old instances of the corresponding sink in the agenda should be replaced when a new one occurs. For full details of the different modes, see the section on dynamic blackboard interactions.
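
Putting the pieces together, a connection entry has the following general shape, as far as it can be inferred from the listing above (the Event attribute appears only where a named event is bound, and both the source and sink elements may occur more than once):

    <!-- hypothetical template; element and attribute names as in the listing above -->
    <Connection Mode="someMode" Event="someEvent">
        <source>nameOfSourceModule</source>
        <sink>nameOfSinkModule</sink>
    </Connection>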