Feature Selection in Image Databases

Yektaii, Mahdi (2013) Feature Selection in Image Databases. PhD thesis, Concordia University.

Text (application/pdf): Thesis_PDA_A.pdf - Accepted Version, 2MB

Abstract

Even though the problem of determining the number of features required to provide an acceptable classification performance has been a topic of interest to researchers in the pattern recognition community for a few decades, a formal method for solving this problem still does not exist. For instance, the well-known dimensionality reduction method of principal component analysis (PCA) sorts the features it generates in the order of their importance, but it does not provide a mechanism for determining the number of sorted features that need to be retained for a meaningful classification. The discrete wavelet transform (DWT) is another linear transformation used for data compaction, in which the coefficients in the transform domain can be sorted in different orders depending on their importance. However, the question of how many features should be retained for a good classification of the data remains unanswered.

The objective of this study is to develop schemes for determining the number of features in the PCA and DWT domains that are sufficient for a classifier to provide the maximum possible classifiability of the samples in these transform domains. The energy content of the DWT and PCA coefficients of practical signals follows a specific pattern. By exploiting this property of the signals, the proposed schemes develop criteria based on maintaining the energy of the ensemble of the feature vectors as their dimensionality is reduced. Within this unifying theme, the thesis investigates the problem of dimension reduction when the features are generated by the linear transformation techniques of the discrete wavelet transform and the principal component analysis, and by the nonlinear technique of kernel principal component analysis.
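
As a minimal illustration of such an energy-retention rule (a sketch only, not the exact criteria developed in the thesis), the snippet below picks the smallest number of sorted coefficients whose ensemble energy reaches an assumed fraction tau of the total; the function name and the 95% threshold are illustrative choices.

import numpy as np

def smallest_k_retaining_energy(coeff_matrix, tau=0.95):
    # coeff_matrix: (n_samples, n_features) of transform coefficients,
    # with columns already sorted from most to least important
    energy_per_feature = np.sum(coeff_matrix ** 2, axis=0)  # ensemble energy of each feature
    cumulative = np.cumsum(energy_per_feature)
    # smallest k whose retained energy reaches the fraction tau of the total energy
    return int(np.searchsorted(cumulative, tau * cumulative[-1]) + 1)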

The first part of this study is concerned with developing a criterion for determining the number of coefficients to retain when the features are represented as wavelet coefficients. The dimensionality of the feature vectors is reduced by letting the matrices of the wavelet coefficients of the data samples undergo Morton scanning and choosing a fixed number of coefficients from these matrices whose energy content approaches that of the original set of all the samples.
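
The following sketch illustrates this kind of procedure under stated assumptions: the wavelet decomposition uses PyWavelets with an arbitrary wavelet and level, and the Morton (Z-order) scan is a generic bit-interleaving implementation; the thesis's actual scanning and selection details may differ. An energy-retention rule such as the one sketched above can then decide how many scanned coefficients to keep.

import numpy as np
import pywt

def morton_order(n_rows, n_cols):
    # indices of an n_rows x n_cols matrix listed in Morton (Z-order) sequence
    def key(rc):
        r, c = rc
        k = 0
        for b in range(max(n_rows, n_cols).bit_length()):
            k |= ((r >> b) & 1) << (2 * b + 1) | ((c >> b) & 1) << (2 * b)
        return k
    return sorted(((r, c) for r in range(n_rows) for c in range(n_cols)), key=key)

def morton_scan_dwt(image, wavelet='haar', level=2):
    # 2-D DWT of the image, packed into a single coefficient matrix
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    matrix, _ = pywt.coeffs_to_array(coeffs)
    # flatten the matrix along the Morton scan path
    return np.array([matrix[r, c] for r, c in morton_order(*matrix.shape)])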

In the second part of the thesis, the problem of determining a reduced dimensionality of the feature vectors is investigated when the features are generated by PCA. The proposed method is based on evaluating a cumulative distance between all pairs of distinct clusters using a reduced set of features and examining its proximity to the corresponding distance when all the features are included.
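
A rough sketch of this idea, with assumed choices (Euclidean distances between class means in the PCA score space and a 99% closeness threshold), is given below; the distance measure and criterion actually used in the thesis may differ.

import numpy as np
from itertools import combinations
from sklearn.decomposition import PCA

def cumulative_cluster_distance(scores, labels, k):
    # sum of Euclidean distances between every pair of class means,
    # using only the first k principal-component scores
    means = [scores[labels == c, :k].mean(axis=0) for c in np.unique(labels)]
    return sum(np.linalg.norm(a - b) for a, b in combinations(means, 2))

def reduced_pca_dimension(X, labels, closeness=0.99):
    labels = np.asarray(labels)
    scores = PCA().fit_transform(X)  # samples projected onto all principal components
    full = cumulative_cluster_distance(scores, labels, scores.shape[1])
    for k in range(1, scores.shape[1] + 1):
        # smallest k whose cumulative inter-cluster distance is close to the full-feature one
        if cumulative_cluster_distance(scores, labels, k) >= closeness * full:
            return k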

The PCA methods for data classification work well when the distinct clusters are linearly separable. For clusters that are nonlinearly separable, the kernel versions of PCA (KPCA) prove to be more efficient for generating features. The method developed in the second part of this thesis for obtaining the reduced dimensionality of the PCA-based feature vectors cannot be readily extended to the kernel space, because the feature vectors are not available in an explicit form in this space. Therefore, the third part of this study develops a suitable criterion for obtaining a reduced dimensionality of the feature vectors when they are generated by a kernel PCA.
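
One way to sketch such a criterion, which may well differ from the one developed in the thesis, is to apply an energy-retention rule to the kernel eigenvalues: they measure the variance carried by each kernel principal component even though the mapped feature vectors themselves are never formed explicitly. The RBF kernel and the 95% threshold below are assumptions.

import numpy as np
from sklearn.decomposition import KernelPCA

def kpca_reduced_dimension(X, tau=0.95, kernel='rbf', gamma=None):
    kpca = KernelPCA(kernel=kernel, gamma=gamma).fit(X)
    eigvals = kpca.eigenvalues_  # kernel-PC variances, in decreasing order
                                 # (named lambdas_ in scikit-learn < 1.0)
    cumulative = np.cumsum(eigvals)
    # smallest number of kernel principal components retaining the fraction tau of the energy
    return int(np.searchsorted(cumulative, tau * cumulative[-1]) + 1)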


Extensive experiments are performed on a series of image databases to demonstrate the effectiveness of the criteria developed in this study for predicting the number of features to be retained. It is shown that there is a direct correlation between the expressions developed for the criteria and the classification accuracy as functions of the number of features retained. The results of the experiments show that with the use of the three feature selection techniques, a classifier can provide its maximum classifiability, that is, a classifiability attained by the uncompressed feature vectors, with only a small fraction of the original features. The robustness of the proposed methods is also investigated by applying them to noise-corrupted images.

Divisions: Concordia University > Gina Cody School of Engineering and Computer Science > Electrical and Computer Engineering
Item Type: Thesis (PhD)
Authors: Yektaii, Mahdi
Institution: Concordia University
Degree Name: Ph.D.
Program: Electrical and Computer Engineering
Date: 22 August 2013
Thesis Supervisor(s): Ahmad, Omair and Bhattacharya, Prabir
ID Code: 977892
Deposited By: Mahdi Yektaii
Deposited On: 13 Jan 2014 14:58
Last Modified: 18 Jan 2018 17:45