Recognizing Affect in Speech
This project builds computational models for the automatic recognition of affective expression in speech. We are completing an investigation of how acoustic parameters extracted from the speech waveform (related to voice quality, intonation, loudness, and rhythm) can help disambiguate a speaker's affect without knowledge of the textual content of the linguistic message. We carried out a multi-corpus investigation, including both acted and spontaneous speech in English, and evaluated the model's performance. In particular, the model shows speaker-dependent performance that mirrors human evaluations of these data sets, and, measured against human recognition benchmarks, it begins to perform competitively.
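
To illustrate the kind of pipeline described above, here is a minimal sketch of extracting prosodic features (pitch, energy, and a rhythm proxy) from a waveform and passing them to a generic classifier. The specific features, the librosa calls, and the SVM choice are assumptions for illustration, not the project's actual feature set or model.

# Hypothetical sketch: text-free acoustic features for affect classification.
import numpy as np
import librosa
from sklearn.svm import SVC

def extract_features(path):
    y, sr = librosa.load(path, sr=16000)
    # Intonation: frame-level fundamental frequency (F0) estimates.
    f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr)
    # Loudness: frame-level RMS energy.
    rms = librosa.feature.rms(y=y)[0]
    # Rhythm proxy: onset density (onsets per second).
    onsets = librosa.onset.onset_detect(y=y, sr=sr)
    duration = len(y) / sr
    return np.array([
        f0.mean(), f0.std(),        # pitch level and variability
        rms.mean(), rms.std(),      # energy level and variability
        len(onsets) / duration,     # speaking-rate proxy
    ])

# Training on labelled utterances (paths and affect labels assumed to exist):
# X = np.vstack([extract_features(p) for p in train_paths])
# clf = SVC().fit(X, train_labels)
# clf.predict(extract_features("utterance.wav").reshape(1, -1))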

Group Members:


MIT Media Lab, Room E15-419, 20 Ames Street, Cambridge, MA 02139