DECAF: MEG-Based Multimodal Database for Decoding Affective Physiological Responses

Cited by: 239
Authors
Abadi, Mojtaba Khomami [1 ,2 ]
Subramanian, Ramanathan [3 ]
Kia, Seyed Mostafa [4 ]
Avesani, Paolo [4 ]
Patras, Ioannis [5 ]
Sebe, Nicu [1 ]
Affiliations
[1] Univ Trento, Dept Informat Engn & Comp Sci, Trento, Italy
[2] Telecom Italia, SKIL, Rome, Italy
[3] Univ Illinois, Adv Digital Sci Ctr, Singapore, Singapore
[4] Fdn Bruno Kessler, NeuroInformat Lab, Trento, Italy
[5] Univ London, Sch Comp Sci & Elect Engn, London WC1E 7HU, England
Keywords
Emotion recognition; user physiological responses; MEG; single-trial classification; affective computing
DOI
10.1109/TAFFC.2015.2392932
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this work, we present DECAF, a multimodal data set for decoding user physiological responses to affective multimedia content. Different from data sets such as DEAP [15] and MAHNOB-HCI [31], DECAF contains (1) brain signals acquired using the Magnetoencephalogram (MEG) sensor, which requires little physical contact with the user's scalp and consequently facilitates naturalistic affective responses, and (2) explicit and implicit emotional responses of 30 participants to 40 one-minute music-video segments used in [15] and 36 movie clips, thereby enabling comparisons between the EEG and MEG modalities as well as between movie and music stimuli for affect recognition. In addition to MEG data, DECAF comprises synchronously recorded near-infrared (NIR) facial videos, horizontal electrooculogram (hEOG), electrocardiogram (ECG), and trapezius electromyogram (tEMG) peripheral physiological responses. To demonstrate DECAF's utility, we present (i) a detailed analysis of the correlations between participants' self-assessments and their physiological responses and (ii) single-trial classification results for valence, arousal, and dominance, with performance evaluation against existing data sets. DECAF also contains time-continuous emotion annotations for the movie clips from seven users, which we use to demonstrate dynamic emotion prediction.
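For readers who want a feel for the single-trial classification protocol mentioned in the abstract, the sketch below trains and evaluates a per-participant valence classifier with leave-one-trial-out cross-validation. It is a minimal illustration only: the synthetic feature matrix, the median split of self-assessment ratings into high/low classes, and the Gaussian Naive Bayes classifier (scikit-learn) are assumptions made here, not the pipeline reported in the paper.

```python
# Minimal sketch of single-trial binary valence classification.
# NOT the authors' pipeline: data, features, and classifier are illustrative.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Hypothetical per-trial feature matrix: 36 movie-clip trials x 10
# peripheral features (e.g., ECG/hEOG/tEMG statistics), with binary
# valence labels obtained from a median split of self-assessments.
X = rng.normal(size=(36, 10))
ratings = rng.uniform(1, 9, size=36)            # self-assessed valence, 1-9 scale
y = (ratings > np.median(ratings)).astype(int)  # high vs. low valence

# Leave-one-trial-out cross-validation, a common protocol for
# single-trial affect recognition on small per-subject data sets.
preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = GaussianNB().fit(X[train_idx], y[train_idx])
    preds[test_idx] = clf.predict(X[test_idx])

print("F1 (high/low valence):", f1_score(y, preds))
```

A real evaluation on DECAF would extract MEG and peripheral features per trial and compare classification scores across modalities (EEG vs. MEG) and stimulus types (movie vs. music), as described in the abstract.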
Pages: 209-222
Number of pages: 14
Cited References (35 total)
[1] Abadi, M. K.; Kia, S. M.; Subramanian, R.; Avesani, P.; Sebe, N. User-centric affective video tagging from MEG and peripheral physiological responses. 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII), 2013, pp. 582-587.
[2] Anonymous. Technical Report A-8, University of Florida, 2008.
[3] Bartolini, E. E. Thesis, Wesleyan University, 2001.
[4] Benjamini, Y.; Hochberg, Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 1995, 57(1), pp. 289-300.
[5] Bradley, M. In Emotions: Essays on Emotion Theory, 1994.
[6] Chen, L.; Gunduz, S.; Ozsu, M. T. Mixed type audio classification with Support Vector Machine. 2006 IEEE International Conference on Multimedia and Expo (ICME 2006), 2006, pp. 781+.
[7] Cowie, R. International Journal of Synthetic Emotions, 2012, 3, p. 1. DOI: 10.4018/JSE.2012010101.
[8] Furht, B. Encyclopedia of Multimedia, 2006.
[9] Greenwald, M. K. Journal of Psychophysiology, 1989, 3, p. 51.
[10] Gross, J. J.; Levenson, R. W. Emotion elicitation using films. Cognition & Emotion, 1995, 9(1), pp. 87-108.