Exploring relationships between audio features and emotion in music
Abstract
In this paper, we present an analysis of the associations between emotion categories and audio features automatically extracted from raw audio data. This work is based on 110 excerpts from film soundtracks evaluated by 116 listeners. The data are annotated with five basic emotions (fear, anger, happiness, sadness, tenderness) on a 7-point scale. Exploiting state-of-the-art Music Information Retrieval (MIR) techniques, we extract audio features of different kinds: timbral, rhythmic and tonal. Among others, we compute estimations of dissonance, mode, onset rate and loudness. We study statistical relations between audio descriptors and emotion categories, confirming results from psychological studies. We also use machine-learning techniques to model the emotion ratings. We create regression models based on the Support Vector Regression algorithm that can estimate the ratings with a correlation of 0.65 on average.
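As a rough illustration of the modelling step described above, the following is a minimal sketch (not the authors' exact pipeline) of fitting a Support Vector Regression model to predict emotion ratings from extracted audio features and evaluating it by the correlation between predicted and observed ratings. The feature matrix, rating vector, kernel choice and cross-validation setup here are placeholder assumptions, not values reported in the paper.

```python
# Hedged sketch: SVR-based prediction of emotion ratings from audio features.
# X and y below are random placeholders standing in for the real extracted
# features (timbre, rhythm, tonality, dissonance, mode, onset rate, loudness)
# and the mean listener ratings for one emotion category.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(110, 12))   # placeholder: 110 excerpts x 12 audio features
y = rng.normal(size=110)         # placeholder: mean ratings for one emotion

# Standardise features, then fit an RBF-kernel SVR (kernel/parameters assumed).
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.1))

# Cross-validated predictions, scored as the correlation between predicted
# and observed ratings (the kind of figure the abstract reports).
pred = cross_val_predict(model, X, y, cv=10)
r = np.corrcoef(y, pred)[0, 1]
print(f"cross-validated correlation: {r:.2f}")
```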
Main Authors
Format
Conferences
Conference paper
Published
2009
Subjects
The permanent address of the publication
https://urn.fi/URN:NBN:fi:jyu-2009411271
Conference
ESCOM 2009 : 7th Triennial Conference of European Society for the Cognitive Sciences of Music
Language
English