A comprehensive study on mid-level representation and ensemble learning for emotional analysis of video material

Cited by: 28
Authors
Acar, Esra [1 ]
Hopfgartner, Frank [2 ]
Albayrak, Sahin [1 ]
Affiliations
[1] Tech Univ Berlin, DAI Lab, Ernst Reuter Pl 7, TEL 14, D-10587 Berlin, Germany
[2] Univ Glasgow, Humanities Adv Technol & Informat Inst, Glasgow, Lanark, Scotland
Keywords
Video affective content analysis; Ensemble learning; Deep learning; MFCC; Color; Dense trajectories; SentiBank
DOI
10.1007/s11042-016-3618-5
Chinese Library Classification
TP [Automation technology, computer technology]
Discipline classification code
0812
Abstract
In today's society, where audio-visual content such as professionally edited and user-generated videos is ubiquitous, automatic analysis of this content is an essential capability. Within this context, there is extensive ongoing research on understanding the semantics (i.e., facts) of videos, such as the objects or events they contain. However, little research has been devoted to understanding their emotional content. In this paper, we address this issue and introduce a system that performs emotional content analysis of professionally edited and user-generated videos. We concentrate on both the representation and modeling aspects. Videos are represented using mid-level audio-visual features. More specifically, audio and static visual representations are automatically learned from raw data using convolutional neural networks (CNNs). In addition, dense-trajectory-based motion and SentiBank domain-specific features are incorporated. By means of ensemble learning and fusion mechanisms, videos are classified into one of a set of predefined emotion categories. Results obtained on the VideoEmotion dataset and a subset of the DEAP dataset show that (1) higher-level representations perform better than low-level features, (2) among audio features, mid-level learned representations perform better than mid-level handcrafted ones, (3) incorporating motion and domain-specific information leads to a notable performance gain, and (4) ensemble learning is superior to multi-class support vector machines (SVMs) for video affective content analysis.
Pages: 11809-11837
Number of pages: 29
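The abstract describes a pipeline in which mid-level features from several modalities (learned audio and static visual CNN representations, dense-trajectory motion, and SentiBank scores) are combined through ensemble learning and fusion before the video is assigned to an emotion category. The sketch below illustrates one common way such decision-level (late) fusion can be set up with per-modality classifiers; the function names, inputs, classifier settings, and the averaging rule are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of decision-level (late) fusion across modalities.
# Feature matrices and emotion labels are placeholders; the concrete
# features, classifiers, and fusion rule in the paper may differ.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def train_late_fusion(train_features_per_modality, y_train):
    """Fit one probabilistic SVM per modality (e.g., audio CNN codes,
    static visual CNN codes, dense-trajectory motion, SentiBank scores)."""
    models = []
    for X in train_features_per_modality:
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
        clf.fit(X, y_train)
        models.append(clf)
    return models


def predict_late_fusion(models, test_features_per_modality):
    """Average the per-modality class probabilities and take the argmax."""
    probs = np.mean(
        [m.predict_proba(X) for m, X in zip(models, test_features_per_modality)],
        axis=0,
    )
    return models[0].classes_[np.argmax(probs, axis=1)]
```

In this kind of setup, each modality contributes an independent opinion and the fusion step averages them; weighted averaging or a stacked meta-classifier are common alternatives when some modalities are known to be more reliable than others.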