Visualization and Intelligent Systems Laboratory
VISLab


Contact Information

VISLab
Winston Chung Hall Room 216
University of California, Riverside
900 University Avenue
Riverside, CA 92521-0425


Tel: (951) 827-3954

CRIS
Bourns College of Engineering
UCR
NSF IGERT on Video Bioinformatics

UCR Collaborators:
CSE
ECE
ME
STAT
PSYC
ENTM
BIOL
BPSC
ECON
MATH
BIOENG
MGNT

Other Collaborators:
Keio University

Other Activities:
IEEE Biometrics Workshop 2014
IEEE Biometrics Workshop 2013
Workshop on DVSN 2009
Multibiometrics Book

Webmaster Contact Information:
Alex Shin
wshin@ece.ucr.edu

Last updated: July 1, 2017


Summarization

A psychological adaptive model for video analysis

Extracting key-frames is the first step toward efficient content-based indexing, browsing, and retrieval of video data such as commercial movies. Most existing research addresses the question "how do we extract representative frames?" The unaddressed question, however, is "how many key-frames are required to represent a video shot properly?" Typically, the user defines this number a priori, or heuristic methods are used. In this paper, we propose a psychological model that computes this number adaptively and online from the variation of visual features within a video shot. We combine it with an iterative key-frame selection method to select the key-frames automatically. We compare the results of this method with two other well-known approaches, using a novel effectiveness measure that scores each approach by its representational power. Movie clips of varying complexity are used to demonstrate that the proposed model succeeds in real time.
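The pipeline described above — measure per-frame visual features, derive the number of key-frames from how much those features vary across the shot, then iteratively pick the key-frames — can be sketched as follows. This is an illustrative sketch, not the paper's actual psychological model: the histogram features, the linear variation-to-count mapping (`scale`, `k_min`, `k_max`), and the greedy farthest-point selection are all assumed stand-ins for the methods the paper proposes.

```python
import numpy as np

def frame_histogram(frame, bins=16):
    """Grayscale intensity histogram, normalized to sum to 1 (assumed feature)."""
    hist, _ = np.histogram(frame, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def adaptive_keyframe_count(features, k_min=1, k_max=10, scale=4.0):
    """Map the total feature variation in a shot to a key-frame count.
    Illustrative stand-in for the paper's adaptive psychological model:
    more visual change in the shot -> more key-frames, clamped to a range."""
    diffs = [np.abs(features[i] - features[i - 1]).sum()
             for i in range(1, len(features))]
    variation = float(np.sum(diffs))
    k = int(round(k_min + scale * variation))
    return max(k_min, min(k, k_max, len(features)))

def select_keyframes(features, k):
    """Iterative (greedy farthest-point) selection: repeatedly add the frame
    whose feature vector is most distant (L1) from the selected set."""
    selected = [0]
    while len(selected) < k:
        dists = [min(np.abs(f - features[s]).sum() for s in selected)
                 for f in features]
        for s in selected:
            dists[s] = -1.0  # never re-select a chosen frame
        selected.append(int(np.argmax(dists)))
    return sorted(selected)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 30-frame shot drifting from dark to bright.
    frames = [np.clip(rng.normal(t / 30.0, 0.05, (32, 32)), 0.0, 1.0)
              for t in range(30)]
    feats = [frame_histogram(f) for f in frames]
    k = adaptive_keyframe_count(feats)       # count chosen online, not a priori
    keys = select_keyframes(feats, k)
    print("key-frame count:", k, "indices:", keys)
```

A shot with little visual change yields a small `k`, while a dynamic shot yields more key-frames, which is the behavior the adaptive model is meant to provide without user input.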