Search Results
Now showing 1 - 4 of 4
- A Comparison Study of Deep Learning Methodologies for Music Emotion Recognition
  Publication. Louro, Pedro; Redinho, Hugo; Malheiro, Ricardo; Paiva, Rui Pedro; Panda, Renato
  Classical machine learning techniques have dominated Music Emotion Recognition (MER). However, improvements have slowed down due to the complex and time-consuming task of handcrafting new emotionally relevant audio features. Deep learning methods have recently gained popularity in the field because of their ability to automatically learn relevant features from spectral representations of songs, eliminating such necessity. Nonetheless, there are limitations, such as the need for large amounts of quality labeled data, a common problem in MER research. To understand the effectiveness of these techniques, a comparison study using various classical machine learning and deep learning methods was conducted. The results showed that an ensemble of a Dense Neural Network and a Convolutional Neural Network architecture achieved a state-of-the-art 80.20% F1-score, an improvement of around 5% over the best baseline results, suggesting that future research should take advantage of both paradigms, that is, combining handcrafted features with feature learning.
- "Back in my day...": A Preliminary Study on the Differences in Generational Groups Perception of Musically-evoked Emotion
  Publication. Louro, Pedro; Panda, Renato
  The increasingly globalized world we live in today and the wide availability of music at our fingertips have led to more diverse musical tastes within younger generations than in older generations. However, these disparities, and the extent to which they affect listeners' preferences and perception of music, are still not well understood. Focusing on the latter, this study explores the differences in emotional perception of music between the Millennial and Gen Z generations. Interviews were conducted with six participants, equally distributed between both generations, by recording their listening experience and emotion perception on two previously compiled sets of songs representing each group. Significant differences between generations, and possible contributing factors, were found in the analysis of the conducted interviews. Findings point to differences in the perceived energy of songs with specific messages of suffering for love, as well as a tendency of the younger group to perceive a well-defined emotion in songs representing their generation, in contrast to neutral responses from the other group. These findings are preliminary, and further studies are needed to understand their extent. Nevertheless, valuable insights can be extracted to improve music recommendation systems.
- MERGE App: A Prototype Software for Multi-User Emotion-Aware Music Management
  Publication. Louro, Pedro; Branco, Guilherme; Redinho, Hugo; Santos, Ricardo Correia Nascimento Dos; Malheiro, Ricardo; Panda, Renato; Paiva, Rui Pedro
  We present a prototype software for multi-user music library management using the perceived emotional content of songs. The tool offers music playback features, song filtering by metadata, and automatic emotion prediction based on arousal and valence, with the possibility of personalizing the predictions by allowing each user to edit these values based on their own emotion assessment. This is an important feature for handling both classification errors and subjectivity issues, which are inherent aspects of emotion perception. A path-based playlist generation function is also implemented. A multi-modal audio-lyrics regression methodology is proposed for emotion prediction, with accompanying validation experiments on the MERGE dataset. The results obtained are promising, showing higher overall performance on train-validate-test splits (73.20% F1-score with the best dataset/split combination).
- Exploring Deep Learning Methodologies for Music Emotion Recognition
  Publication. Louro, Pedro; Redinho, Hugo; Malheiro, Ricardo; Paiva, Rui Pedro; Panda, Renato
  Classical machine learning techniques have dominated Music Emotion Recognition (MER). However, improvements have slowed down due to the complex and time-consuming task of handcrafting new emotionally relevant audio features. Deep learning methods have recently gained popularity in the field because of their ability to automatically learn relevant features from spectral representations of songs, eliminating such necessity. Nonetheless, there are limitations, such as the need for large amounts of quality labeled data, a common problem in MER research. To understand the effectiveness of these techniques, a comparison study using various classical machine learning and deep learning methods was conducted. The results showed that an ensemble of a Dense Neural Network and a Convolutional Neural Network architecture achieved a state-of-the-art 80.20% F1-score, an improvement of around 5% over the best baseline results, suggesting that future research should take advantage of both paradigms, that is, combining handcrafted features with feature learning.
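The first and last abstracts combine a Dense Neural Network (over handcrafted features) with a Convolutional Neural Network (over spectrograms) in an ensemble; the abstracts do not specify the fusion strategy, so the sketch below illustrates one common option, late fusion by weighted averaging of per-class probabilities. The weight `w` and the toy probability matrices are assumptions for illustration only.

```python
import numpy as np

def late_fusion(dnn_probs: np.ndarray, cnn_probs: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Fuse per-class probabilities from two models by weighted averaging,
    then pick the highest-probability emotion class for each song."""
    fused = w * dnn_probs + (1.0 - w) * cnn_probs
    return fused.argmax(axis=1)

# Toy example: 3 songs, 4 emotion classes (e.g., Russell quadrants).
dnn = np.array([[0.7, 0.1, 0.1, 0.1],
                [0.2, 0.5, 0.2, 0.1],
                [0.1, 0.2, 0.3, 0.4]])
cnn = np.array([[0.6, 0.2, 0.1, 0.1],
                [0.1, 0.2, 0.6, 0.1],
                [0.1, 0.1, 0.2, 0.6]])
print(late_fusion(dnn, cnn))  # [0 2 3]
```

With equal weights, the second song shows the point of fusion: the DNN and CNN disagree, and the averaged probabilities decide the class.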