Video Summarization Based on Camera Motion and a Subjective Evaluation Method

Cited by: 21
Authors
Guironnet, M. [1 ]
Pellerin, D. [1 ]
Guyader, N. [1 ]
Ladret, P. [1 ]
Affiliation
[1] Grenoble Images Parole Signal Automatique Laboratory (GIPSA-Lab), F-38031 Grenoble, France
DOI
10.1155/2007/60245
CLC Classification Numbers
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
We propose an original method of video summarization based on camera motion. It selects frames according to the succession and magnitude of camera motions, applying rules that avoid temporal redundancy between the selected frames. We also develop a new subjective method to evaluate the proposed summary and, more generally, to compare different summaries. Subjects were asked to watch a video and to create a summary manually. From the summaries of the different subjects, an "optimal" summary is built automatically and compared to the summaries produced by different methods. Experimental results demonstrate the effectiveness of our camera motion-based summary. Copyright (C) 2007 M. Guironnet et al.
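The record describes the method only at this level of detail. As a rough illustration, the Python sketch below shows one way such a pipeline could be organized: rule-based keyframe selection from camera-motion segments, and the merging of subjects' manual selections into an "optimal" summary. Everything here (the MotionSegment data model, the min_magnitude and min_gap thresholds, the tolerance and majority rule) is an assumption for illustration, not the authors' actual algorithm.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MotionSegment:
    """Hypothetical camera-motion segment; the paper's actual data
    model is not given in this record."""
    start: int        # first frame index of the segment
    end: int          # last frame index of the segment
    kind: str         # e.g. "pan", "zoom", "static"
    magnitude: float  # average motion magnitude over the segment

def select_keyframes(segments: List[MotionSegment],
                     min_magnitude: float = 1.0,
                     min_gap: int = 25) -> List[int]:
    """Pick one representative frame per sufficiently strong motion
    segment, enforcing a minimum temporal gap between selected frames
    to avoid redundancy (both thresholds are assumed values)."""
    keyframes: List[int] = []
    for seg in segments:
        # Rule 1 (assumed): ignore weak camera motions.
        if seg.kind != "static" and seg.magnitude < min_magnitude:
            continue
        # Representative frame: middle of the segment (assumed choice).
        candidate = (seg.start + seg.end) // 2
        # Rule 2 (assumed): skip frames too close to the previous one.
        if keyframes and candidate - keyframes[-1] < min_gap:
            continue
        keyframes.append(candidate)
    return keyframes

def optimal_summary(subject_picks: List[List[int]],
                    tolerance: int = 12) -> List[int]:
    """Merge the subjects' manual frame selections into one 'optimal'
    summary: greedily cluster picks lying within `tolerance` frames of
    each other, then keep the median frame of every cluster containing
    at least a majority of picks (a plausible reading of the paper's
    procedure, not its exact rule)."""
    picks = sorted(f for one in subject_picks for f in one)
    clusters: List[List[int]] = []
    for f in picks:
        if clusters and f - clusters[-1][-1] <= tolerance:
            clusters[-1].append(f)
        else:
            clusters.append([f])
    majority = len(subject_picks) // 2 + 1
    return [c[len(c) // 2] for c in clusters if len(c) >= majority]

if __name__ == "__main__":
    segments = [
        MotionSegment(0, 100, "static", 0.0),
        MotionSegment(101, 200, "pan", 3.2),
        MotionSegment(201, 230, "pan", 0.4),   # weak pan: filtered out
        MotionSegment(231, 400, "zoom", 2.1),
    ]
    print(select_keyframes(segments))  # [50, 150, 315]
    print(optimal_summary([[50, 150, 310], [55, 148], [52, 300]]))  # [52, 150, 310]
```

The greedy single-pass clustering is simply the easiest way to group near-coincident picks; the paper may well use a different agreement measure between subjects.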
Pages: 12