In this paper, the automated detection of emotion in music is
modeled as a multilabel classification task, where a piece of
music may belong to more than one class. Four algorithms
are evaluated and compared in this task. Furthermore, the
predictive power of several audio features is evaluated using
a new multilabel feature selection method. Experiments are
conducted on a set of 593 songs annotated with six clusters of
music emotions based on the Tellegen-Watson-Clark model.