Music has a strong emotional impact on people. In this work package of Virtual Worlds for Wellbeing we address the musical aspects of semantic and emotional information in personal communication.
The objective of this work package is to bridge the musical ‘semantic gap’ by focusing on the process of meaning generation. We will build a computational model based on results from music cognition research. These results show that musical meaning emerges from the confrontation between the complex patterns we perceive in acoustical input and a repository of such patterns that we have previously acquired through listening and training. Rhythmic, harmonic, and formal patterns, for example, are important sources of meaning in music. By researching these aspects from both a music cognition and a computer science viewpoint, this work package will provide essential knowledge for the creation of advanced musical semantics analysis.
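The idea of matching perceived patterns against a learned repository can be sketched in code. The sketch below is purely illustrative: the rhythmic patterns are represented as sequences of inter-onset intervals in beats, the labels are hypothetical, and a real model would use far richer representations and a learned similarity measure.

```python
def pattern_distance(a, b):
    """Euclidean distance between two equal-length rhythmic patterns
    (sequences of inter-onset intervals, in beats)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Repository of previously acquired patterns (hypothetical labels).
repository = {
    "march":    [1.0, 1.0, 1.0, 1.0],
    "waltz":    [1.0, 0.5, 0.5, 1.0],
    "habanera": [0.75, 0.25, 0.5, 0.5],
}

def interpret(perceived):
    """Return the label of the stored pattern closest to the input,
    modelling meaning as recognition against prior experience."""
    return min(repository,
               key=lambda name: pattern_distance(repository[name], perceived))

print(interpret([1.0, 0.6, 0.4, 1.0]))  # closest to "waltz"
```

Even in this toy form, the example makes the core claim concrete: the same acoustic input acquires meaning only relative to the patterns the listener already holds.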
Music has been analyzed statistically in many ways on the basis of low-level features, such as counting pitch classes or beats per minute. This yields a broad categorization, but provides little semantic or emotional information, which is far more personal and less amenable to purely statistical description. We will identify the relevant parameters and create computational models, implementations, and prototype systems.
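As an illustration of the kind of low-level feature mentioned above, a pitch-class histogram can be computed from symbolic (MIDI) note numbers; the note values below are illustrative:

```python
from collections import Counter

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]

def pitch_class_histogram(midi_notes):
    """Count pitch classes (MIDI note number mod 12) in a note sequence."""
    counts = Counter(n % 12 for n in midi_notes)
    return {PITCH_CLASSES[pc]: counts.get(pc, 0) for pc in range(12)}

# C major arpeggio: C4, E4, G4, C5 (MIDI numbers 60, 64, 67, 72)
hist = pitch_class_histogram([60, 64, 67, 72])
print(hist["C"], hist["E"], hist["G"])  # 2 1 1
```

Such a histogram supports coarse categorization (e.g. key estimation), but, as the text notes, it says little about what the music means to a particular listener.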