Exploring relationships between audio features and emotion in music

Show simple item record

dc.contributor.author Laurier, Cyril
dc.contributor.author Lartillot, Olivier
dc.contributor.author Eerola, Tuomas
dc.contributor.author Toiviainen, Petri
dc.date.accessioned 2009-08-03T06:24:21Z
dc.date.available 2009-08-03T06:24:21Z
dc.date.issued 2009
dc.identifier.uri http://hdl.handle.net/123456789/20889
dc.description.abstract In this paper, we present an analysis of the associations between emotion categories and audio features automatically extracted from raw audio data. This work is based on 110 excerpts from film soundtracks evaluated by 116 listeners. The data are annotated with 5 basic emotions (fear, anger, happiness, sadness, tenderness) on a 7-point scale. Exploiting state-of-the-art Music Information Retrieval (MIR) techniques, we extract audio features of different kinds: timbral, rhythmic, and tonal. Among others, we compute estimates of dissonance, mode, onset rate, and loudness. We study statistical relations between audio descriptors and emotion categories, confirming results from psychological studies. We also use machine-learning techniques to model the emotion ratings. We create regression models based on the Support Vector Regression algorithm that estimate the ratings with an average correlation of 0.65. en
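The abstract describes modeling listener emotion ratings from extracted audio descriptors with Support Vector Regression and reporting the correlation between predicted and actual ratings. A minimal sketch of that evaluation setup is below; it is not the authors' code, and the feature matrix is synthetic stand-in data, where a real pipeline would supply MIR descriptors (e.g. dissonance, mode, onset rate, loudness) extracted from the audio excerpts.

```python
# Illustrative sketch (not the paper's implementation): estimate
# emotion ratings from audio descriptors with Support Vector
# Regression and measure predictive correlation via cross-validation.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_excerpts, n_features = 110, 4  # 110 excerpts, a few descriptors

# Synthetic descriptor matrix; a real X would hold timbral,
# rhythmic, and tonal features extracted from each excerpt.
X = rng.normal(size=(n_excerpts, n_features))

# Synthetic ratings on a 7-point scale, loosely tied to the features.
signal = X @ np.array([1.0, -0.5, 0.8, 0.2])
y = np.clip(4 + signal + rng.normal(scale=0.5, size=n_excerpts), 1, 7)

# Fit SVR and predict each excerpt's rating with 5-fold cross-validation.
model = SVR(kernel="rbf", C=1.0)
pred = cross_val_predict(model, X, y, cv=5)

# Correlation between listener ratings and model estimates
# (the paper reports an average of 0.65 across emotions on real data).
r = np.corrcoef(y, pred)[0, 1]
print(round(r, 2))
```

One such model would be trained per emotion category (fear, anger, happiness, sadness, tenderness), and the reported 0.65 is the correlation averaged over those models.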
dc.format.extent 260-264
dc.language.iso eng
dc.subject.other emotion en
dc.subject.other music en
dc.subject.other mir en
dc.subject.other audio en
dc.subject.other classification en
dc.subject.other machine learning en
dc.title Exploring relationships between audio features and emotion in music en
dc.type Article en
dc.identifier.urn URN:NBN:fi:jyu-2009411271
dc.identifier.conference ESCOM 2009 : 7th Triennial Conference of European Society for the Cognitive Sciences of Music
