

2DPCA-Based False Positive Reduction

The 2DPCA approach [194] is a recent improvement of the classical eigenfaces approach. As its authors argue, 2DPCA has two main advantages over PCA: firstly, it is simpler and more straightforward to use for image feature extraction, since 2DPCA works directly on the image matrix; and secondly, the covariance matrix can be evaluated more accurately.

In the original eigenfaces approach, each image of size $ m \times n$ is transformed into a vector of size $ m\cdot n$ , in contrast to the natural way of dealing with two-dimensional data, which would be to treat it as a matrix. This is the motivation behind 2DPCA [194].

The algorithm starts with a database of $ M$ training images. The image covariance matrix $ G_t$ is calculated as:

$\displaystyle G_t = \frac{1}{M} \sum_{j=1}^M(A_j-A_\mu)^t(A_j-A_\mu)$ (5.1)

where $ A_\mu$ is the mean image of all training samples. Then, using the Karhunen-Loève transform, it is possible to obtain the corresponding face space, which is the subspace defined as:

\begin{displaymath}\begin{cases}
 \{X_1,\ldots,X_d\} = \arg\max \vert X^tG_tX\vert \\
 X_i^tX_j = 0, \quad i \neq j, \; i,j = 1,\ldots,d \\
 X_i^tX_i = 1, \quad i = 1,\ldots,d
 \end{cases}\end{displaymath} (5.2)

where $ X$ is a unitary column vector. The first equation looks for the set of $ d$ unitary vectors that maximize the total scatter of the projected samples, i.e. the orthonormal eigenvectors of $ G_t$ corresponding to the $ d$ largest eigenvalues. The other two equations impose the orthonormality constraints.
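As a hedged illustration of Equations (5.1) and (5.2), the following NumPy sketch computes the image covariance matrix $ G_t$ and extracts its $ d$ leading orthonormal eigenvectors. The function names, the stacked-array layout of the training images and the parameter d are illustrative assumptions, not part of the original formulation.

import numpy as np

def image_covariance(images):
    # images: array of shape (M, m, n) holding the M training images (assumed layout).
    A_mu = images.mean(axis=0)                      # mean training image A_mu
    centered = images - A_mu                        # A_j - A_mu for every j
    # Eq. (5.1): average of (A_j - A_mu)^t (A_j - A_mu), an n x n matrix
    G_t = sum(C.T @ C for C in centered) / len(images)
    return G_t, A_mu

def projection_axes(G_t, d):
    # Eq. (5.2): orthonormal eigenvectors of G_t with the d largest eigenvalues,
    # returned as the columns of an n x d matrix X = [X_1, ..., X_d].
    eigvals, eigvecs = np.linalg.eigh(G_t)          # G_t is symmetric
    order = np.argsort(eigvals)[::-1]               # eigenvalues in descending order
    return eigvecs[:, order[:d]]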

With the selected set of eigenvectors it is possible to construct a family of feature vectors for each image. Thus, for an image sample $ A$ , the projected feature vectors (the principal components) $ Y_1,...,Y_d$ are found by:

$\displaystyle Y_k = AX_k, \quad k=1,\ldots,d$ (5.3)

It is important to note that while for PCA each principal component is a scalar, for 2DPCA each principal component is a vector. It is this set of vectors for each image that is used to construct the feature image (a matrix of size $ m \times d$ ), referred to as $ B=[Y_1,\ldots,Y_d]$ .
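Under the same assumptions as the sketch above (X being the $ n \times d$ matrix returned by projection_axes), the projection step of Equation (5.3) reduces to a single matrix product:

def feature_matrix(A, X):
    # Eq. (5.3): B = A X = [Y_1, ..., Y_d]; column k is the principal
    # component vector Y_k = A X_k, so B has size m x d.
    return A @ X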

In a similar way to the eigenfaces approach, comparing two images amounts to comparing their constructed features. Since the feature representation has gained one dimension, images are now compared by comparing matrices:

$\displaystyle d(B_i,B_j) = \sum_{k=1}^d \vert\vert Y_k^i-Y_k^j\vert\vert$ (5.4)

where $ \vert\vert Y_k^i-Y_k^j\vert\vert$ denotes the Euclidean distance between the two principal components (vectors) $ Y_k^i$ and $ Y_k^j$ . To obtain an $ A_z$ value, we adopt the analogous probabilistic scheme described in Section [*].
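The distance of Equation (5.4) sums the Euclidean distances between corresponding columns of the two feature matrices. A possible NumPy sketch, illustrative rather than the thesis implementation, is:

def feature_distance(B_i, B_j):
    # Eq. (5.4): sum over k of ||Y_k^i - Y_k^j||, where the Y_k are the columns of B.
    return np.linalg.norm(B_i - B_j, axis=0).sum()

A candidate region could then, for instance, be ranked by its distance to the projected training samples of each class; the probabilistic estimation of the $ A_z$ value itself follows the scheme referenced above and is not sketched here.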

