Visualization and Intelligent Systems Laboratory
VISLab

 

 

Contact Information

VISLab
Winston Chung Hall Room 216
University of California, Riverside
900 University Avenue
Riverside, CA 92521-0425


Tel: (951) 827-3954

CRIS
Bourns College of Engineering
UCR
NSF IGERT on Video Bioinformatics

UCR Collaborators:
CSE
ECE
ME
STAT
PSYC
ENTM
BIOL
BPSC
ECON
MATH
BIOENG
MGNT

Other Collaborators:
Keio University

Other Activities:
IEEE Biometrics Workshop 2014
IEEE Biometrics Workshop 2013
Workshop on DVSN 2009
Multibiometrics Book

Webmaster Contact Information:
Alex Shin
wshin@ece.ucr.edu

Last updated: July 1, 2017

 

 

Retrieval

Semantic concept co-occurrence patterns for image annotation and retrieval

Presented is a novel approach that automatically generates intermediate image descriptors by exploiting concept co-occurrence patterns in a pre-labeled training set, making it possible to describe complex scene images semantically. The work is motivated by the observation that concepts which frequently co-occur across images form patterns that provide contextual cues for inferring individual concepts. The co-occurrence patterns are discovered as hierarchical communities by maximizing graph modularity in a network whose nodes and edges represent concepts and co-occurrence relationships, respectively. A random walk over the inferred concept probabilities, guided by the discovered co-occurrence patterns, then yields a refined concept signature representation. Experiments in automatic image annotation and semantic image retrieval on several challenging datasets demonstrate the effectiveness of both the concept co-occurrence patterns and the concept signature representation in comparison with state-of-the-art approaches.
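
The refinement step can be sketched in a few lines of numpy: detector-level concept probabilities are repeatedly blended with evidence propagated through a column-stochastic co-occurrence matrix, in the style of a random walk with restart. The concepts, matrix entries, and restart weight below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy concept vocabulary and a column-stochastic co-occurrence matrix W,
# where W[i, j] plays the role of P(concept i | concept j). All values invented.
concepts = ["sky", "cloud", "sea", "sand"]
W = np.array([
    [0.0, 0.8, 0.5, 0.2],
    [0.6, 0.0, 0.3, 0.1],
    [0.3, 0.1, 0.0, 0.7],
    [0.1, 0.1, 0.2, 0.0],
])

# Initial concept probabilities for one image, from an independent detector.
p0 = np.array([0.7, 0.1, 0.6, 0.05])
p0 = p0 / p0.sum()

# Random walk with restart: blend detector evidence with co-occurrence context.
alpha = 0.85  # weight kept on the original detector scores
p = p0.copy()
for _ in range(50):
    p = alpha * p0 + (1 - alpha) * W @ p

# "cloud" gains probability because its frequent co-occurrents are present.
print(dict(zip(concepts, np.round(p, 3))))
```

Because W is column-stochastic and p0 sums to one, the refined vector stays a probability distribution at every iteration.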

A software system for automated identification and retrieval of moth images based on wing attributes

Described is the development of an automated moth species identification and retrieval system (SPIR) that uses computer vision and pattern recognition techniques. The core of the system is a probabilistic model that infers Semantically Related Visual (SRV) attributes from low-level visual features of moth images in the training set. Moth wings are segmented into information-rich patches from which local features are extracted, and the SRV attributes are provided by human experts as ground truth. For the large number of unlabeled test images already in the database, or added to it later, an automated identification process is invoked to translate the detected salient regions of low-level visual features on the moth wings into meaningful semantic SRV attributes. We further propose a novel network-analysis-based approach that explores and exploits the co-occurrence patterns of SRV attributes as contextual cues to improve the detection accuracy of individual attributes. Working with a small set of labeled training images, the approach constructs a network with nodes representing the SRV attributes and weighted edges denoting their co-occurrence correlation.

Discrete Cosine Transform Locality-Sensitive Hashes for Face Retrieval

[Figure: sample images from LFW [44], FERET [40], RaFD [42], BioID [43], FEI [41], and Multi-PIE [39].]

Searching large databases using local binary patterns for face recognition has been problematic due to the cost of linear search and the inadequate performance of existing indexing methods. We present Discrete Cosine Transform (DCT) hashing for creating index structures for face descriptors. Hashes play the role of keywords: an index is created and then queried to find the images most similar to a query image. This research shows that DCT hashing achieves significantly better retrieval accuracy and is more efficient than other popular state-of-the-art hashing algorithms.
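
A minimal sketch of the idea, assuming a sign-binarized DCT of each descriptor as the hash key (the paper's actual coefficient selection and quantization may differ); the toy descriptors and the `dct_hash` helper are hypothetical:

```python
import numpy as np

def dct_hash(descriptor, n_bits=16):
    """Binarize the signs of the first n_bits AC coefficients of a DCT-II."""
    x = np.asarray(descriptor, dtype=float)
    N = x.size
    n = np.arange(N)
    k = np.arange(1, n_bits + 1)          # skip the DC coefficient (k = 0)
    basis = np.cos(np.pi * (n[None, :] + 0.5) * k[:, None] / N)
    return "".join("1" if c > 0 else "0" for c in basis @ x)

rng = np.random.default_rng(0)
db = {i: rng.normal(size=64) for i in range(100)}  # stand-in LBP-style descriptors

# Index construction: hash string -> list of image ids sharing that bucket.
index = {}
for img_id, desc in db.items():
    index.setdefault(dct_hash(desc), []).append(img_id)

# Querying: hash the query descriptor and fetch its bucket. Nearby descriptors
# tend to share buckets; a real system would also probe neighboring hashes.
query = db[7] + 0.001 * rng.normal(size=64)
candidates = index.get(dct_hash(query), [])
```

Lookup is a constant-time bucket fetch rather than a linear scan, which is what makes the scheme attractive at scale.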

Automated Identification and Retrieval of Moth Images with Semantically Related Visual Attributes on the Wings

A new automated identification and retrieval system is proposed that aims to provide entomologists, who manage insect specimen images, with fast computer-based processing and analysis techniques. Several relevant image attributes are designed, such as the semantically related visual (SRV) attributes detected on the insect wings and the co-occurrence patterns of the SRV attributes, which are uncovered from manually labeled training samples. A joint probabilistic model serves as the SRV attribute detector operating on image visual content. Identification and retrieval of moth species are conducted by comparing the similarity of SRV attributes and their co-occurrence patterns. The prototype system uses moth images, but it can be generalized to any insect species with wing structures. The system performed with good stability, and accuracy reached 85% for species identification and 71% for content-based image retrieval on an entomology database.

Improving Large-scale Face Image Retrieval using Multi-level Features

In recent years, extensive efforts have been made on face recognition and retrieval systems. However, several tasks remain challenging for face image retrieval in unconstrained databases, where the face images are captured with varying poses, lighting conditions, etc. In addition, the databases are often large-scale, which demands efficient retrieval algorithms that scale well. To improve retrieval accuracy for face images with different poses and imaging characteristics, we introduce a novel feature extraction method into a bag-of-words (BoW) based face image retrieval system. It employs features at several scales simultaneously to encode different texture information and emphasizes image patches that are more discriminative as parts of the face. Moreover, the overlapping image patches at different scales compensate for pose variation and face misalignment. Experiments conducted on a large-scale public face database demonstrate the superior performance of the proposed approach compared to the state-of-the-art method.
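
The multi-scale BoW signature can be sketched as follows; for brevity the codebook is fit on a single random image and patches are described by their raw pixels, whereas a real system would train per-scale codebooks offline on local descriptors from many faces:

```python
import numpy as np
from sklearn.cluster import KMeans

def patches(img, size, stride):
    """All overlapping square patches of a grayscale image, flattened."""
    H, W = img.shape
    return np.array([img[y:y + size, x:x + size].ravel()
                     for y in range(0, H - size + 1, stride)
                     for x in range(0, W - size + 1, stride)])

rng = np.random.default_rng(0)
img = rng.random((64, 64))      # stand-in for an aligned grayscale face crop
k = 8                           # visual-word vocabulary size per scale

signature = []
for size in (8, 16):            # two patch scales with 50% overlap
    P = patches(img, size, size // 2)
    words = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(P)
    hist = np.bincount(words, minlength=k).astype(float)
    signature.append(hist / hist.sum())   # normalized per-scale BoW histogram
signature = np.concatenate(signature)     # multi-level feature, length 2 * k
```

Concatenating per-scale histograms keeps coarse and fine texture statistics separate while producing one fixed-length vector per face for indexing.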

Semantic-visual Concept Relatedness and Co-Occurrences for Image Retrieval

We introduce a novel approach that allows the retrieval of complex images by integrating visual and semantic concepts. The basic idea consists of three aspects. First, we measure the relatedness of semantic and visual concepts and select the visually separable semantic concepts as elements in the proposed image signature representation. Second, we demonstrate the existence of concept co-occurrence patterns. We propose to uncover those underlying patterns by detecting the communities in a network structure. Third, we leverage the visual and semantic correspondence and the co-occurrence patterns to improve the accuracy and efficiency for image retrieval. We perform experiments on two popular datasets that confirm the effectiveness of our approach.

Concept Learning with Co-occurrence Network for Image Retrieval

We address the problem of concept learning for semantic image retrieval. Two types of semantic concepts are introduced in our system: individual concepts and scene concepts. Individual concepts are explicitly provided in a vocabulary of semantic words, which are the labels or annotations in an image database. Scene concepts are higher-level concepts defined as potential co-occurrence patterns of individual concepts; they exist because some individual concepts co-occur frequently across different images. This is similar to human learning, where understanding simpler ideas is generally useful before developing more sophisticated ones. Scene concepts can have more discriminative power than individual concepts, but methods are needed to find them. A novel method for deriving scene concepts is presented, based on a weighted concept co-occurrence network (graph) with a detected community structure. An image similarity comparison and retrieval framework is described with the proposed individual and scene concept signatures as the image semantic descriptors. Extensive experiments are conducted on a publicly available dataset to demonstrate the effectiveness of our concept learning and semantic image retrieval framework.
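
The scene-concept discovery step can be sketched with networkx, using greedy modularity maximization as a stand-in for the community detection; the toy annotations below are invented:

```python
import itertools
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy annotation database: the individual concepts labeled on each image.
annotations = [
    {"sky", "cloud", "sea"}, {"sky", "sea", "sand"}, {"cloud", "sky"},
    {"car", "road", "building"}, {"road", "building"}, {"car", "road"},
]

# Weighted co-occurrence network: edge weight = number of images in which
# the two concepts appear together.
G = nx.Graph()
for labels in annotations:
    for u, v in itertools.combinations(sorted(labels), 2):
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1
        else:
            G.add_edge(u, v, weight=1)

# Scene concepts emerge as the communities under modularity maximization.
scene_concepts = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in scene_concepts])
```

On this toy data the outdoor-nature and street-scene concepts separate cleanly into two communities, each acting as one scene concept.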

Image Retrieval for Highly Similar Objects

In content-based image retrieval, precision is usually regarded as the top metric for performance measurement. With image databases reaching hundreds of millions of records, it is apparent that many retrieval strategies will not scale, so data representation and organization have to be better understood. This research focuses on: (a) feature selection and optimal representation of features and (b) a multidimensional tree indexing structure. The paper proposes a feature selection algorithm based on forward search with a conditional backward step. The data are then put through a minimum description length (MDL) based optimal non-uniform bit allocation algorithm to reduce the size of the stored data while preserving its structure. The results of our experiments show that the proposed feature selection process with MDL-based non-uniform bit allocation yields a system with improved retrieval time and precision.
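
The forward-search portion of the feature selection can be sketched as a greedy loop over features; the nearest-centroid objective and the toy data below are illustrative stand-ins, and the conditional backward step is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 6 features, of which only features 0 and 3 carry class information.
n = 200
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 6))
X[:, 0] += 2.0 * y
X[:, 3] -= 2.0 * y

def score(feats):
    """Proxy retrieval objective: nearest-centroid accuracy on chosen features."""
    Z = X[:, feats]
    c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
    pred = np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)
    return float((pred == (y == 1)).mean())

# Greedy forward search: repeatedly add the feature that helps the most,
# stopping when no candidate improves the objective.
selected, remaining = [], list(range(X.shape[1]))
while remaining:
    best = max(remaining, key=lambda f: score(selected + [f]))
    if selected and score(selected + [best]) <= score(selected):
        break
    selected.append(best)
    remaining.remove(best)
print(selected)   # the informative features are picked up first
```

A conditional backward step would revisit `selected` after each addition and drop any feature whose removal no longer hurts the objective.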

Image Retrieval with Feature Selection and Relevance Feedback

We propose a new content-based image retrieval (CBIR) system that combines relevance feedback with an online feature selection procedure. A measure of inconsistency derived from relevance feedback is explicitly used as a new semantic criterion to guide the feature selection. By integrating the user feedback information, the feature selection is able to bridge the gap between low-level visual features and high-level semantic information, leading to improved image retrieval accuracy. Experimental results show that the proposed method achieves higher retrieval accuracy than a commonly used approach.
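
One simple way to turn feedback inconsistency into feature weights, sketched under the assumption that a feature is down-weighted when the user's relevant examples disagree on it (the paper's exact criterion may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((500, 4))                      # database of 4-d feature vectors
query = np.array([0.5, 0.5, 0.5, 0.5])

# Images the user marked relevant: they agree with the query on features 0-1
# but are inconsistent on features 2-3 (values invented for illustration).
relevant = np.array([[0.50, 0.52, 0.1, 0.9],
                     [0.48, 0.49, 0.8, 0.2],
                     [0.51, 0.50, 0.4, 0.6]])

# Inconsistency-driven weighting: a feature on which the relevant examples
# disagree gets a small weight; consistent features dominate the metric.
w = 1.0 / (relevant.std(axis=0) + 1e-6)
w = w / w.sum()

# Weighted L2 re-ranking of the database under the learned weights.
d = np.sqrt((((X - query) ** 2) * w).sum(axis=1))
ranking = np.argsort(d)
```

In effect the feedback reshapes the similarity metric: semantically meaningless dimensions are suppressed before the next retrieval round.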

Feature Synthesized EM Algorithm for Image Retrieval

Expectation-Maximization (EM) algorithms have several limitations, including the curse of dimensionality and convergence to a local maximum. In this article, we propose a novel learning approach, Coevolutionary Feature Synthesized Expectation-Maximization (CFS-EM), to address these problems. Experiments on real image databases show that CFS-EM outperforms Radial Basis Function Support Vector Machine (RBF-SVM), CGP, Discriminant-EM (D-EM), and Transductive-SVM (TSVM) in terms of classification performance, and it is computationally more efficient than RBF-SVM in the query phase.

Integrating Relevance Feedback Techniques for Image Retrieval

We propose an image relevance reinforcement learning (IRRL) model for integrating existing relevance feedback (RF) techniques in a content-based image retrieval system. Various integration schemes are presented, and a long-term shared memory is used to exploit the retrieval experience of multiple users. The experimental results show that integrating multiple RF approaches gives better retrieval performance than using any one RF technique alone. Further, the storage demand is significantly reduced by the concept-digesting technique.

A New Semi-Supervised EM Algorithm for Image Retrieval

One of the main tasks in content-based image retrieval (CBIR) is to reduce the gap between low-level visual features and high-level human concepts. This research presents a new semi-supervised EM algorithm (NSS-EM), in which the image distribution in feature space is modeled as a mixture of Gaussian densities. Owing to its statistical mechanism for accumulating and processing meta-knowledge, the NSS-EM algorithm with long-term learning of mixture model parameters can handle cases where users mislabel images during relevance feedback. Our approach, which integrates a mixture model of the data, relevance feedback, and long-term learning, helps to improve retrieval performance. The concept learning is incrementally refined as retrieval experience accumulates. Experimental results on the Corel database show the efficacy of the proposed concept learning approach.
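
A plain EM fit of the Gaussian mixture underlying such a system can be sketched with scikit-learn; note this is ordinary EM on toy data, without the semi-supervision and long-term learning that distinguish NSS-EM:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy feature space: two underlying image concepts as Gaussian blobs.
X = np.vstack([rng.normal(0.0, 0.5, size=(150, 2)),
               rng.normal(3.0, 0.5, size=(150, 2))])

# EM fit of the mixture-of-Gaussians density model of the feature space.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Posterior responsibilities act as soft concept memberships per image.
resp = gmm.predict_proba(X)
```

The responsibilities give each image a soft assignment over concepts; semi-supervised variants additionally pin labeled images to their known components during the M-step.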

Independent Feature Analysis for Image Retrieval

Content-based image retrieval methods based on the Euclidean metric expect the feature space to be isotropic. They suffer from unequal differential relevance of features in computing the similarity between images in the input feature space. We propose a learning method that attempts to overcome this limitation by capturing local differential relevance of features based on user feedback. This feedback is used to locally estimate the strength of features along each dimension while taking into consideration the correlation between features. In addition to exploring and exploiting local principal information, the system seeks a global space for efficient independent feature analysis by combining such local information.

Probabilistic Feature Relevance Learning for Content-Based Image Retrieval

Most current image retrieval systems use “one-shot” queries to a database to retrieve similar images. Typically a K-nearest-neighbor type of algorithm is used, where the weights measuring feature importance along each input dimension remain fixed (or are manually tweaked by the user) in the computation of a given similarity metric. In this paper, we present a novel probabilistic method that enables image retrieval procedures to automatically capture feature relevance based on user feedback and that is highly adaptive to query locations. Experimental results demonstrate the efficacy of our technique using both simulated and real-world data.
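
A toy two-round sketch of the idea: an unweighted first query, simulated feedback, per-dimension relevance estimated from the deviation of the relevant images along each feature, and an exponentially weighted re-query. The temperature `T` and the feedback rule are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((400, 3))                  # database features
query = np.array([0.2, 0.8, 0.5])

def retrieve(weights, k=10):
    """Weighted-L2 K-nearest-neighbor retrieval around the query."""
    d = (((X - query) ** 2) * weights).sum(axis=1)
    return np.argsort(d)[:k]

# Round 1: the usual "one-shot" query with equal feature weights.
hits = retrieve(np.ones(3))

# Simulated feedback: the user cares only about feature 0 and marks a
# retrieved image relevant when it matches the query closely there.
relevant = hits[np.abs(X[hits, 0] - query[0]) < 0.1]
if relevant.size == 0:                    # guard for the toy simulation
    relevant = hits

# Per-dimension relevance: a small average deviation of the relevant images
# from the query along a feature means that feature matters to the user.
dev = np.abs(X[relevant] - query).mean(axis=0)
T = 5.0                                   # temperature of the weighting
w = np.exp(-T * dev)
w = w / w.sum()

# Round 2: re-query with the learned, query-local weights.
hits2 = retrieve(w)
```

Because the weights are recomputed from feedback around each query point, the metric adapts locally instead of using one global feature weighting.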