dc.contributor.author | Hartmann, Martín Ariel | |
dc.date.accessioned | 2011-08-03T07:33:32Z | |
dc.date.available | 2011-08-03T07:33:32Z | |
dc.date.issued | 2011 | |
dc.identifier.other | oai:jykdok.linneanet.fi:1181512 | |
dc.identifier.uri | https://jyx.jyu.fi/handle/123456789/36531 | |
dc.description.abstract | Automatic musical genre classification is an important information retrieval task, since it can be applied to practical purposes such as organizing data collections in the digital music industry. However, the task remains an open problem: the current state of the art falls far short of satisfactory classification performance. Moreover, the algorithms most commonly used for this task are not designed to model music perception. This study proposes a framework for testing different musical features for use in music genre classification and evaluates classification performance using two musical descriptors.
The focus of this study is on the automatic classification of music into genres based on audio content. The performance of two sets of timbral descriptors, namely the sub-band fluxes and the mel-frequency cepstral coefficients, is compared. These particular descriptors were chosen for their contrasting degrees of perceptual interpretability. Classification performance is assessed using a variety of music datasets, learning algorithms, feature selection approaches, and combinatorial feature subsets derived from these descriptors. The results were evaluated in terms of overall classification accuracy, generalization capability, and the relevance of the descriptors as indicated by feature ranking.
According to the results, the sub-band fluxes, perceptually motivated descriptors of polyphonic timbre, outperformed the widely used mel-frequency cepstral coefficients: the former showed higher classification accuracies and a lower tendency to overfit than the latter.
In a nutshell, this study supports the use of perceptually interpretable timbre descriptors for musical genre classification tasks and suggests using the sub-band flux set for further content-based tasks in the field of music information retrieval. | |
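To make the comparison described in the abstract concrete, the following is a minimal sketch of such a feature comparison pipeline. It is not taken from the thesis: the libraries (librosa, scikit-learn), the octave-spaced band edges, the k-nearest-neighbour classifier, and the 5-fold cross-validation are all illustrative assumptions.

import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def mfcc_features(y, sr, n_mfcc=13):
    # Mean MFCCs over all frames -> one feature vector per audio clip.
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

def subband_flux_features(y, sr, n_bands=10, fmin=50.0):
    # Mean half-wave-rectified spectral flux inside octave-spaced sub-bands.
    # Band edges are an assumption; bands above the Nyquist frequency stay zero.
    S = np.abs(librosa.stft(y))                    # magnitude spectrogram
    flux = np.maximum(np.diff(S, axis=1), 0.0)     # frame-to-frame increase
    freqs = librosa.fft_frequencies(sr=sr)
    edges = fmin * 2.0 ** np.arange(n_bands + 1)   # 50, 100, 200, ... Hz
    feats = np.zeros(n_bands)
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        band = (freqs >= lo) & (freqs < hi)
        if band.any():
            feats[i] = flux[band].mean()
    return feats

def compare_feature_sets(paths, labels):
    # 5-fold cross-validated accuracy of a k-NN classifier (an illustrative
    # choice) on each feature set, one 30-second clip per labelled path.
    X_mfcc, X_flux = [], []
    for path in paths:
        y, sr = librosa.load(path, sr=22050, duration=30.0)
        X_mfcc.append(mfcc_features(y, sr))
        X_flux.append(subband_flux_features(y, sr))
    clf = KNeighborsClassifier(n_neighbors=5)
    for name, X in [("MFCC", X_mfcc), ("sub-band flux", X_flux)]:
        acc = cross_val_score(clf, np.array(X), labels, cv=5).mean()
        print(f"{name}: mean accuracy {acc:.3f}")

Averaging frame-level features over each clip is the simplest pooling strategy; what the abstract's comparison concerns is the relative ranking of the two feature sets under a common classifier and evaluation protocol.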
dc.format.extent | 79 pages | |
dc.format.mimetype | application/pdf | |
dc.language.iso | eng | |
dc.rights | In Copyright | en |
dc.subject.other | music information retrieval | |
dc.subject.other | music genre classification | |
dc.subject.other | polyphonic timbre | |
dc.subject.other | feature ranking | |
dc.title | Testing a spectral-based feature set for audio genre classification | |
dc.type | master thesis | |
dc.identifier.urn | URN:NBN:fi:jyu-2011080311207 | |
dc.type.dcmitype | Text | en |
dc.type.ontasot | Pro gradu -tutkielma | fi |
dc.type.ontasot | Master’s thesis | en |
dc.contributor.tiedekunta | Humanistinen tiedekunta | fi |
dc.contributor.tiedekunta | Faculty of Humanities | en |
dc.contributor.laitos | Musiikin laitos | fi |
dc.contributor.laitos | Department of Music | en |
dc.contributor.yliopisto | University of Jyväskylä | en |
dc.contributor.yliopisto | Jyväskylän yliopisto | fi |
dc.contributor.oppiaine | Music, Mind and Technology (maisteriohjelma) | fi |
dc.contributor.oppiaine | Master's Degree Programme in Music, Mind and Technology | en |
dc.subject.method | modelling | |
dc.date.updated | 2011-08-03T07:33:32Z | |
dc.type.coar | http://purl.org/coar/resource_type/c_bdcc | |
dc.rights.accesslevel | openAccess | fi |
dc.type.publication | masterThesis | |
dc.contributor.oppiainekoodi | 3054 | |
dc.subject.yso | music | |
dc.subject.yso | genres | |
dc.subject.yso | electronic services | |
dc.subject.yso | classification | |
dc.format.content | fulltext | |
dc.rights.url | https://rightsstatements.org/page/InC/1.0/ | |
dc.type.okm | G2 | |