CRI: Outdoor Video Sensor Network Laboratory
Supported by National Science Foundation grant 0551741.
Principal Investigator
Bir Bhanu, Center for Research in Intelligent Systems,
EBU2, Room 216, University of California at Riverside, Riverside, CA 92521
Tel. 951-827-3954, Fax. 951-827-2425
bhanu@cris.ucr.edu
http://www.vislab.ucr.edu/PEOPLE/BIR_BHANU/index.htm
Co-PIs
Amit K. Roy-Chowdhury, Center for Research in Intelligent Systems,
Dept. of Electrical Engineering, University of California at Riverside
Tel. 951-827-7886, Fax. 951-827-2425
amitrc@ee.ucr.edu
http://www.ee.ucr.edu/~amitrc/
Chinya Ravishankar, Center for Research in Intelligent Systems,
Dept. of Computer Science and Engineering, University of California at Riverside
Tel. 951-827-2451, Fax. 951-827-4643
ravi@cs.ucr.edu
http://www.cs.ucr.edu/~ravi
Students
Ramiro Diaz, Ankit Patel, Hoang Nguyen, Mostafa Elhams, Huy Tran, Mauro Ibarra
Publications and Products
- J. Yu, B. Bhanu, Y. Xu and A. Roy-Chowdhury, "Incremental Construction of Super-resolved 3D Facial Texture in Video," International Conference on Image Processing, 2007.
- X. Zou, B. Bhanu, B. Song and A. Roy-Chowdhury, "Determining Topology and Identifying Anomalous Patterns in a Distributed Camera Network," International Conference on Image Processing, 2007.
- B. Song and A. Roy-Chowdhury, "Stochastic Adaptive Tracking in a Camera Network," IEEE International Conference on Computer Vision, 2007.
- B. Song, N. Vaswani and A. Roy-Chowdhury, "Closed-loop Tracking and Change Detection in Multi-Activity Sequences," IEEE Conference on Computer Vision and Pattern Recognition, 2007.
- X. Zou and B. Bhanu, "Anomalous Activity Classification in the Distributed Camera Network," IEEE International Conference on Image Processing, San Diego, CA, Oct. 12-15, 2008.
- J. Yu and B. Bhanu, "Super-resolution of Facial Images in Video with Expression Changes," 5th IEEE International Conference on Advanced Video and Signal Based Surveillance, Santa Fe, New Mexico, Sept. 1-3, 2008.
- J. Yu and B. Bhanu, "Super-resolution of Deformed Facial Images in Video," IEEE International Conference on Image Processing, San Diego, CA, Oct. 12-15, 2008.
- Y. Li and B. Bhanu, "Utility-based Dynamic Camera Assignment and Hand-off in a Video Network," Second ACM/IEEE International Conference on Distributed Smart Cameras, pp. 1-9, Stanford, CA, Sept. 7-11, 2008.
- B. Song and A. Roy-Chowdhury, "Robust Tracking in a Camera Network: A Multi-Objective Optimization Framework," IEEE Journal on Selected Topics in Signal Processing: Special Issue on Distributed Processing in Vision Networks, August 2008.
- B. Song, C. Soto, A. Roy-Chowdhury and J. Farrell, "Decentralized Camera Network Control Using Game Theory," Workshop on Smart Camera and Visual Sensor Networks at IEEE/ACM International Conference on Distributed Smart Cameras, 2008.
Research and Education Activities
This NSF project develops a new laboratory and conducts research in video understanding and
related technologies in a wireless network environment. While research on large-scale sensor
networks is underway for various applications, massive video sensor networks consisting of
stationary and moving cameras connected over a wireless network remain largely unexplored.
Wireless video sensor networks are needed for a number of life-critical applications, such as
surveillance for homeland security, scene analysis of disaster zones to coordinate rescue
efforts, and wildlife monitoring, as well as for the entertainment industry. Wireless sensor
networks offer the crucial advantages of mobility and ease of sensor installation, but suffer
from power and bandwidth constraints. Because video processing and transmission require large
amounts of computing power and transmission bandwidth, managing these trade-offs is a central
concern of the project.
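To give a rough sense of the scale of this trade-off, the sketch below compares the bit rate of a raw video stream against an object-based stream that transmits only compressed regions of interest. Every numeric value (frame size, frame rate, ROI fraction, compression ratio, radio energy per bit) is an illustrative assumption, not a measured figure from the project.

```python
# Back-of-the-envelope comparison of raw vs. object-based transmission cost.
# All constants below are illustrative assumptions, not project measurements.

RAW_FRAME_BITS = 640 * 480 * 24   # one uncompressed VGA color frame
FPS = 15                          # assumed frame rate
ROI_FRACTION = 0.05               # assumed fraction of pixels in moving objects
JPEG_RATIO = 0.1                  # assumed compression ratio for the ROI
ENERGY_PER_BIT_NJ = 200           # assumed radio cost in nanojoules per bit

raw_bps = RAW_FRAME_BITS * FPS
roi_bps = raw_bps * ROI_FRACTION * JPEG_RATIO

print(f"raw stream:        {raw_bps / 1e6:6.1f} Mbit/s")
print(f"object-based:      {roi_bps / 1e6:6.1f} Mbit/s")
print(f"radio power saved: {(raw_bps - roi_bps) * ENERGY_PER_BIT_NJ / 1e9:6.2f} W")
```

Under these assumed numbers, the raw stream needs roughly 110 Mbit/s while the object-based stream needs well under 1 Mbit/s, which is why per-node processing before transmission matters so much for battery-powered sensors.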
The proposed laboratory, currently under development, will provide a state-of-the-art facility
for research and teaching. It consists of 80 pan-tilt-zoom video cameras, each accessible over
the network through its own IP address. Each camera is connected to a computational unit that
performs local processing at the sensor node: it identifies the data in the video sequence
relevant to a particular application, which is then compressed and transmitted. This
object-based distributed compression scheme significantly reduces the bandwidth demanded from
the network. To save battery power at the sensors, a triggering mechanism based on acoustic,
seismic, and vibration sensors is used. A few infrared sensors will supplement the data
provided by the color video cameras to enable diurnal scene analysis. Some of the sensors are
fixed and powered from an electrical outlet, while the mobile ones are powered by solar energy.
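To make the per-node pipeline concrete, here is a minimal sketch of object-based compression at a sensor node: background subtraction isolates moving objects, and only the compressed object regions are sent. It assumes OpenCV is available and uses a local camera as a stand-in for a networked PTZ unit; the transmit() helper is a hypothetical placeholder for the wireless uplink, not part of any real API or of the project's actual software.

```python
import cv2

def transmit(payload: bytes, bbox: tuple) -> None:
    """Hypothetical placeholder for the node's wireless uplink."""
    print(f"sending {len(payload)} bytes for region {bbox}")

cap = cv2.VideoCapture(0)   # local camera stands in for a networked PTZ camera
bg = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                           # foreground = moving objects
    _, mask = cv2.threshold(mask, 200, 255,
                            cv2.THRESH_BINARY)       # drop MOG2 shadow pixels
    mask = cv2.medianBlur(mask, 5)                   # suppress sensor noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:                 # ignore tiny blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        ok_enc, jpeg = cv2.imencode(".jpg", frame[y:y + h, x:x + w])
        if ok_enc:                                   # send only the object region
            transmit(jpeg.tobytes(), (x, y, w, h))
cap.release()
```

In this sketch the triggering and power-management aspects described above are left out; a real node would additionally gate the camera and radio on the acoustic, seismic, and vibration sensors.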