witheFlow

Experience witheFlow, where AI enhances live music by dynamically adjusting sound effects. Transform every piece into an emotional experience!

Artificial Intelligence, Music & Brainwaves

Emotion AI, which emerged from the groundbreaking concept of Affective Computing, connects AI with human emotions through biosignals. Utilizing sensors to measure brainwaves, heart rate, skin conductance, and other physiological signals, researchers can detect and analyze emotional states. witheFlow will leverage these methods to create a reliable “ground truth” for emotions, enhancing AI’s ability to interpret and respond to human feelings during musical performances.
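As a rough illustration of how such biosignals might be turned into an emotion estimate, the sketch below maps heart-rate and skin-conductance readings onto the valence-arousal plane widely used in Affective Computing. The thresholds, feature choices, and the function itself are illustrative assumptions, not witheFlow's actual model.

```python
import numpy as np

def emotion_from_biosignals(heart_rate_bpm, skin_conductance_us):
    """Toy valence/arousal estimate from two biosignals.

    Hypothetical sketch: real Emotion AI systems learn such mappings
    from labeled data instead of using fixed thresholds.
    """
    # Elevated heart rate and rising skin conductance are commonly
    # associated with higher arousal.
    hr_mean = np.mean(heart_rate_bpm)
    sc_slope = np.polyfit(np.arange(len(skin_conductance_us)),
                          skin_conductance_us, 1)[0]
    arousal = np.tanh((hr_mean - 70.0) / 20.0 + 5.0 * sc_slope)

    # Heart-rate variability serves here as a crude valence proxy.
    hrv = np.std(np.diff(heart_rate_bpm))
    valence = np.tanh((hrv - 5.0) / 5.0)
    return {"valence": float(valence), "arousal": float(arousal)}

# Example: 30 seconds of simulated 1 Hz readings.
rng = np.random.default_rng(0)
hr = 75 + 8 * rng.standard_normal(30)
sc = np.linspace(2.0, 2.5, 30) + 0.05 * rng.standard_normal(30)
print(emotion_from_biosignals(hr, sc))
```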

State-of-the-Art

AI in Music Research

The integration of AI into music is transforming how we create, access, and experience music. From genre classification and emotion recognition to instrument identification and beat detection, AI is solving complex challenges in music information retrieval (MIR). Our team has contributed to this field with advancements in music theory ontologies, genre recognition systems, and evaluation methods for AI music composition. Explore our contributions and the state-of-the-art research driving our project.
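As a concrete example, beat detection, one of the MIR tasks listed above, can be prototyped in a few lines with the open-source librosa library. The audio path below is a placeholder, and this is a generic librosa usage sketch rather than our team's code.

```python
import numpy as np
import librosa

# Load an audio file (placeholder path); librosa resamples to 22.05 kHz.
y, sr = librosa.load("example_track.wav")

# Estimate the global tempo and the frame indices of detected beats.
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)

# Convert beat frames to timestamps in seconds.
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

print("Estimated tempo (BPM):", float(np.atleast_1d(tempo)[0]))
print("First beats (s):", beat_times[:4])
```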

ABOUT THE RESEARCH PROJECT

Music’s ability to evoke emotions is complex and subjective, influenced by cultural differences and personal experiences. Music Emotion Recognition (MER) research seeks to unravel these correlations using a multidisciplinary approach, combining music theory, emotions, signal processing, and AI. By analyzing musical elements and their emotional impact, we aim to develop systems that understand and enhance the emotional depth of music, leading to applications in therapy, education, and beyond.
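To make this pipeline concrete: a typical MER system first extracts acoustic descriptors and then regresses them onto emotion dimensions such as valence and arousal. The sketch below shows a plausible feature-extraction step using librosa; the specific features and the (commented-out) regressor are assumptions for illustration, not the project's implementation.

```python
import numpy as np
import librosa

def mer_features(path):
    """Extract a small acoustic feature vector often used in MER."""
    y, sr = librosa.load(path)
    # Timbre: mean MFCCs summarize the spectral envelope.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    # Energy and brightness cues tend to correlate with arousal.
    rms = librosa.feature.rms(y=y).mean()
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    # Tempo is a classic arousal predictor.
    tempo = float(np.atleast_1d(librosa.beat.beat_track(y=y, sr=sr)[0])[0])
    return np.concatenate([mfcc, [rms, centroid, tempo]])

# A regressor trained on emotion-annotated music would consume these:
# valence, arousal = regressor.predict(mer_features("song.wav")[None, :])[0]
```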

🧾 Our Academic Publications
Perceptual Musical Features for Interpretable Audio Tagging
Authors: Vassilis Lyberatos, Spyridon Kantarelis, Edmund Dervakos, Giorgos Stamou
This paper explores interpretability in music tagging, combining symbolic knowledge with deep neural networks to improve tag prediction. The method outperforms baseline models and competes with state-of-the-art solutions.
MusicLIME: Explainable Multimodal Music Understanding
Authors: Theodoros Sotirou, Vassilis Lyberatos, Orfeas Menis Mastromichalakis, Giorgos Stamou
MusicLIME explains how audio and lyrical features interact in multimodal music models, enhancing transparency and fairness. The method offers global explanations, improving the explainability of music understanding systems; a minimal LIME-style sketch follows this publication list.
Challenges and Perspectives in Interpretable Music Auto-Tagging Using Perceptual Features
Authors: Vassilis Lyberatos, Spyridon Kantarelis, Edmund Dervakos, Giorgos Stamou
This work emphasizes the evaluation of interpretability in music auto-tagging. By combining symbolic knowledge, deep learning, and signal processing, the method achieves higher user trust despite slight accuracy trade-offs, showing that interpretability can sometimes outweigh performance.
Exploring the Directivity of the Lute, Lavta, and Oud Plucked String Instruments
Authors: Ioannis Malafis, Penelope-Maria Pierroutsakou, Konstantinos Bakogiannis, Areti Andreopoulou
This study investigates the spherical directivity and radiation patterns of traditional plucked-string instruments from the Middle East, Turkey, Greece, and surrounding areas. The analysis reveals similar radiation patterns across all frequency bands despite variations in geometry and material, with the musician’s body also affecting directivity.
Directivity Characteristics of Sung Greek Vowels on Formant Frequencies
Authors: Giorgos Dedousis, Konstantinos Bakogiannis, Areti Andreopoulou, Anastasia Georgaki
The study explores the relationship between directivity and formant frequencies of Greek vowels sung by professional classical and Byzantine chant singers. Directivity patterns vary with pitch and formant frequency, though strong patterns are not consistently observed.
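To give a flavor of the perturbation-based explanations behind LIME-style methods such as MusicLIME, the sketch below explains a toy lyrics "mood" classifier with the open-source lime package. The classifier, word list, and class names are stand-ins for illustration; the actual MusicLIME method operates on multimodal (audio plus lyrics) models.

```python
import numpy as np
from lime.lime_text import LimeTextExplainer

POSITIVE = {"love", "light", "dance", "sun"}  # toy lexicon (assumption)

def predict_proba(texts):
    """Stand-in classifier: P(happy) grows with positive-word count."""
    probs = []
    for t in texts:
        hits = sum(w.lower().strip(".,!") in POSITIVE for w in t.split())
        p_happy = hits / (hits + 1.0)
        probs.append([1.0 - p_happy, p_happy])
    return np.array(probs)

explainer = LimeTextExplainer(class_names=["sad", "happy"])
lyric = "We dance in the light and love the sun"
exp = explainer.explain_instance(lyric, predict_proba, num_features=5)
# Each (word, weight) pair shows how much that word pushed the
# prediction toward the "happy" class.
for word, weight in exp.as_list():
    print(f"{word:>6}: {weight:+.3f}")
```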


IMPLEMENTED BY

FUNDED BY

This project is carried out within the framework of the National Recovery and Resilience Plan Greece 2.0, funded by the European Union – NextGenerationEU (Implementation body: HFRI).

PARTICIPATE NOW

Interested in our research or collaborating with us? Reach out to learn more about witheFlow, our team, and how you can be part of this exciting journey. Connect with us to stay updated on our progress and upcoming events.
