Human Face Recognition




Linear Discriminant Analysis

Linear Discriminant Analysis, or the Fisherfaces method, overcomes the limitations of the eigenfaces method by applying Fisher's linear discriminant criterion. This criterion maximizes the ratio of the determinant of the between-class scatter matrix of the projected samples to the determinant of the within-class scatter matrix of the projected samples. Fisher discriminants group images of the same class and separate images of different classes. Images are projected from an N²-dimensional space (for N × N pixel images) to a C-dimensional space, where C is the number of image classes. For example, consider two sets of points in two-dimensional space that are projected onto a single line. Depending on the direction of the line, the points can either be mixed together (Figure 4a) or separated (Figure 4b). Fisher discriminants find the line that best separates the points. To identify an input test image, the projected test image is compared with each projected training image, and the test image is identified as belonging to the closest training image.

As with eigenspace projection, training images are projected into a subspace; the test images are projected into the same subspace and identified using a similarity measure. What differs is how the subspace is calculated. Unlike the PCA method, which extracts features that best represent the face images, the LDA method finds the subspace that best discriminates between face classes, as shown in Figure 4. The within-class scatter matrix, also called intra-personal, represents variations in the appearance of the same individual due to different lighting and facial expression, while the between-class scatter matrix, also called extra-personal, represents variations in appearance due to differences in identity. By applying this method, we find projection directions that, on one hand, maximize the distance between face images of different classes and, on the other hand, minimize the distance between face images of the same class. In other words, the method maximizes the between-class scatter matrix Sb while minimizing the within-class scatter matrix Sw in the projective subspace. Figure 5 shows good and bad class separation.

 

 

Figure 4. (a) Points mixed when projected onto a line. (b) Points separated when projected onto another line

 


Figure 5. (a) Good class separation. (b) Bad class separation
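To make the Figure 4 example concrete, the short NumPy sketch below (not part of the original text) generates two made-up 2-D point classes and compares Fisher's ratio of between-class to within-class variance of the 1-D projections for two candidate directions; the class locations and the two directions are illustrative assumptions only. A direction with a large ratio corresponds to the well-separated case of Figure 4b.

    # Toy illustration of Figure 4: two 2-D classes projected onto a line.
    # The class locations and spreads below are made-up values for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    class_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
    class_b = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(100, 2))

    def fisher_ratio(direction, x1, x2):
        """Between-class over within-class variance of the 1-D projections."""
        w = direction / np.linalg.norm(direction)
        p1, p2 = x1 @ w, x2 @ w
        between = (p1.mean() - p2.mean()) ** 2
        within = p1.var() + p2.var()
        return between / within

    # A direction along which the classes stay mixed (as in Figure 4a) ...
    print(fisher_ratio(np.array([1.0, -1.0]), class_a, class_b))
    # ... and one along which they separate well (as in Figure 4b).
    print(fisher_ratio(np.array([1.0, 1.0]), class_a, class_b))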

The within-class scatter matrix Sw and the between-class scatter matrix Sb are defined as

S_w = \sum_{j=1}^{C} \sum_{i=1}^{N_j} \left( \Gamma_i^j - \mu_j \right) \left( \Gamma_i^j - \mu_j \right)^T                         (12)

where Γ_i^j is the ith sample of class j, μ_j is the mean of class j, C is the number of classes, and N_j is the number of samples in class j.

S_b = \sum_{j=1}^{C} \left( \mu_j - \mu \right) \left( \mu_j - \mu \right)^T                         (13)

where μ represents the mean of all classes. The subspace for LDA is spanned by a set of vectors W = [W_1, W_2, …, W_d], satisfying

W = \arg\max_{W} \frac{\left| W^T S_b W \right|}{\left| W^T S_w W \right|}                         (14)
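As a rough illustration of Equations (12) and (13), the following NumPy sketch computes Sw and Sb from a labelled training set. The `samples` array (one flattened face image per row) and the integer `labels` array are placeholders, not data from this text, and μ is taken as the mean of the class means, as described above.

    # Sketch of Equations (12)-(13): within- and between-class scatter matrices.
    # `samples` and `labels` are assumed placeholders for real training data.
    import numpy as np

    def scatter_matrices(samples, labels):
        dim = samples.shape[1]
        class_ids = np.unique(labels)
        class_means = np.array([samples[labels == c].mean(axis=0) for c in class_ids])
        overall_mean = class_means.mean(axis=0)      # mu: mean of all classes
        s_w = np.zeros((dim, dim))
        s_b = np.zeros((dim, dim))
        for c, mu_j in zip(class_ids, class_means):
            centered = samples[labels == c] - mu_j   # Gamma_i^j - mu_j
            s_w += centered.T @ centered             # Eq. (12)
            diff = (mu_j - overall_mean).reshape(-1, 1)
            s_b += diff @ diff.T                     # Eq. (13)
        return s_w, s_b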

Figure 6. LDA approach for face recognition

The within-class scatter matrix represents how closely face images are distributed within their classes, and the between-class scatter matrix describes how well the classes are separated from each other. When face images are projected onto the discriminant vectors W, they should be distributed as closely as possible within classes and separated as much as possible between classes. In other words, the discriminant vectors minimize the denominator and maximize the numerator in Equation (14). W can therefore be constructed from the eigenvectors of Sw⁻¹Sb. Figure 7 shows the first 16 eigenvectors of Sw⁻¹Sb with the highest associated eigenvalues; these eigenvectors are also referred to as the Fisherfaces. There are various methods for solving the LDA problem, such as the pseudo-inverse method, the subspace method, and the null-space method. The LDA approach is similar to the eigenface method in that it projects the training images into a subspace; the test images are projected into the same subspace and identified using a similarity measure. The only difference is how the subspace characterizing the face space is calculated. The test image is labelled with the identity of the training face that has the minimum distance to it, where the distance can be calculated using the Euclidean distance method given earlier in Equation (11). Figure 6 shows the testing phase of the LDA approach.

Figure 7. The first 16 Fisherfaces (eigenvectors of Sw⁻¹Sb with the highest eigenvalues)
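The testing phase of Figure 6 can be sketched roughly as follows, reusing `scatter_matrices` from the sketch above. This is an assumed illustration rather than the exact implementation described here: it forms the discriminant vectors from the eigenvectors of pinv(Sw) Sb (one variant of the pseudo-inverse method mentioned above) and labels a test image by the nearest projected training image under the Euclidean distance of Equation (11). The `train`, `train_labels`, `test_image`, and `num_components` names are placeholders.

    # Fisherfaces sketch following Figure 6: build W, project the training set,
    # then identify a test image by the closest projection (Euclidean distance).
    import numpy as np

    def fit_fisherfaces(train, train_labels, num_components):
        s_w, s_b = scatter_matrices(train, train_labels)        # from the sketch above
        eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(s_w) @ s_b)
        order = np.argsort(-eigvals.real)[:num_components]      # largest eigenvalues first
        w = eigvecs[:, order].real                               # discriminant vectors W
        return w, train @ w                                      # projected training images

    def identify(test_image, w, projected_train, train_labels):
        projection = test_image @ w
        distances = np.linalg.norm(projected_train - projection, axis=1)
        return train_labels[np.argmin(distances)]                # closest training image

As a practical note, the number of useful components is limited (Sb has at most C − 1 non-zero eigenvalues), and implementations typically reduce the image dimension with PCA first so that Sw is better conditioned before the LDA step.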
