Among directors the saying goes that “television is emotion”. In this WP we start from that wisdom, with the aim of making emotion-differentiated concepts available when gathering information from large collections of video. For this purpose, emotion may be divided into states such as “energizing, cheering, saddening, depressing, distressing, comforting”. For the moment we will select 10 initial states.
This WP aims to establish algorithms with good predictive power for each of these states as aroused in the viewer. The aim is to predict the state from the responses of audio-visual detectors that are yet to be learned from example videos. In contrast to the standard method of training a concept detector in visual search, we will not use explicit training from example images, but rather time-based physiological signals acquired from persons while they watch television: heart rate, reduced EEG signals, sweat (skin conductance), and eye or body movements. The footage to be watched will be both directed and undirected (commercials, motion pictures, news, YouTube), in order to quantify the influence of directing.
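The intended mapping from physiological recordings to an aroused state can be sketched as follows. This is a minimal illustration under stated assumptions, not the project's algorithm: the channel names, the per-window mean/spread features, and the nearest-centroid rule are all hypothetical stand-ins for the detectors yet to be learned.

```python
# Minimal sketch: predict an aroused state from a window of physiological
# signals via per-channel summary features and a nearest-centroid rule.
# Channel names and the feature set are illustrative assumptions only.
from statistics import mean, stdev

CHANNELS = ["heart_rate", "eeg", "skin_conductance"]

def features(window):
    """Summarize each channel of a signal window by its mean and spread."""
    feats = []
    for ch in CHANNELS:
        samples = window[ch]
        feats.append(mean(samples))
        feats.append(stdev(samples))
    return feats

def fit_centroids(labeled_windows):
    """Average the feature vectors per emotional state."""
    by_state = {}
    for state, window in labeled_windows:
        by_state.setdefault(state, []).append(features(window))
    return {state: [mean(col) for col in zip(*vecs)]
            for state, vecs in by_state.items()}

def predict(centroids, window):
    """Assign the state whose centroid is nearest in feature space."""
    f = features(window)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(f, c))
    return min(centroids, key=lambda s: dist(centroids[s]))
```

In this toy form a window labeled “distressing” during training (elevated heart rate and skin conductance) pulls its centroid away from a “comforting” one, and unseen windows are classified by proximity; the real WP would replace the hand-picked features with signals learned from the example videos.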
Description of work
Sophisticated multivariate variance decomposition by functional data analysis will be required to learn which physiological signals relate to which emotional states and visual detectors. Inspiration is derived from the work of Alan Smeaton. Sennay Ghebreab, Victor Lamme and Steven Scholte are possibly interested in predicting the emotional value of movies. See also the Kuleshov effect (Cees Snoek). The advertising agency EQ brands is prepared to make their data (commercials, storyboards) available. Some fMRI data are already available.
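One ingredient of such a variance decomposition can be sketched as a one-way between-state versus within-state split of variance per physiological channel; the actual functional data analysis over full time courses will be considerably richer. The channel names, the toy per-window values, and the use of an eta-squared statistic below are illustrative assumptions only.

```python
# Sketch: how much of one channel's variance across windows is explained
# by the emotional state? Eta-squared from a between/within variance split.
# A toy stand-in for the full multivariate functional analysis.
from statistics import mean

def eta_squared(groups):
    """Fraction of total variance explained by state membership.

    groups: mapping state -> list of per-window summary values of a channel.
    """
    all_values = [v for vals in groups.values() for v in vals]
    grand = mean(all_values)
    ss_total = sum((v - grand) ** 2 for v in all_values)
    ss_between = sum(len(vals) * (mean(vals) - grand) ** 2
                     for vals in groups.values())
    return ss_between / ss_total if ss_total else 0.0

# Hypothetical per-window mean heart rates, grouped by aroused state.
heart_rate = {"comforting": [60, 62, 61], "distressing": [96, 99, 97]}
# Hypothetical eye-movement rates that barely differ between the states.
eye_moves = {"comforting": [3.0, 5.0, 4.0], "distressing": [4.0, 3.5, 4.5]}
```

On this toy data `eta_squared(heart_rate)` is close to 1 while `eta_squared(eye_moves)` is near 0, which is the kind of per-signal ranking the decomposition should deliver before the multivariate analysis combines channels.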