Achieving Illumination Invariance using Image Filters

In this chapter we are interested in accurately recognizing human faces in the presence of large and unpredictable illumination changes. Our aim is to do this in a setup realistic for most practical applications, that is, without overly constraining the conditions in which image data is acquired. Specifically, this means that people's motion and head poses are largely uncontrolled, the amount of available training data is limited to a single short sequence per person, and image quality is low. In conditions such as these, invariance to changing lighting is perhaps the most significant practical challenge for face recognition algorithms. The illumination setup in which recognition is performed is in most cases impractical to control, its physics is difficult to model accurately, and appearance differences due to changing illumination are often larger than the differences between individuals [1]. Additionally, the nature of most real-world applications is such that a prompt, often real-time system response is needed, demanding matching algorithms that are both efficient and robust. In this chapter we describe a novel framework for rapid recognition under varying illumination, based on simple image filtering techniques. The framework is very general, and we demonstrate that it offers a dramatic performance improvement when used with a wide range of filters and different baseline matching algorithms, without sacrificing their computational efficiency.

 

Previous work and its limitations

The choice of representation, that is, the model used to describe a person's face, is central to the problem of automatic face recognition. Consider the components of a generic face recognition system shown schematically in Figure 1. A number of approaches in the literature use relatively complex facial and scene models that explicitly separate the extrinsic and intrinsic variables which affect appearance. In most cases, the complexity of these models makes it impossible to compute model parameters as a closed-form expression ("Model parameter recovery" in Figure 1). Rather, model fitting is performed through an iterative optimization scheme. In the 3D Morphable Model, for example, the shape and texture of a novel face are recovered through gradient descent by minimizing the discrepancy between the observed and predicted appearance. Similarly, in Elastic Bunch Graph Matching, gradient descent is used to recover the placements of fiducial features, corresponding to bunch graph nodes, and the locations of local texture descriptors. In contrast, the Generic Shape-Illumination Manifold method uses a genetic algorithm to perform a manifold-to-manifold mapping that preserves pose.
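To make the iterative fitting paradigm concrete, the following sketch recovers model parameters by gradient descent on the discrepancy between observed and predicted appearance. It is only an illustration of the general scheme, not of any of the cited methods; the render function, learning rate, and other parameter values are placeholders.

    import numpy as np

    def fit_model(observed, render, p0, lr=1e-2, iters=200, eps=1e-4):
        # Recover model parameters by minimising ||observed - render(p)||^2
        # with finite-difference gradient descent (illustrative only).
        p = np.asarray(p0, dtype=float).copy()

        def cost(q):
            return np.sum((observed - render(q)) ** 2)

        for _ in range(iters):
            grad = np.zeros_like(p)
            for i in range(p.size):        # numerical gradient, one parameter at a time
                dp = np.zeros_like(p)
                dp[i] = eps
                grad[i] = (cost(p + dp) - cost(p - dp)) / (2.0 * eps)
            p -= lr * grad                 # step towards a (possibly local) minimum
        return p

Note that the loop converges to whichever minimum lies nearest the starting point p0, which is precisely the source of the local-minimum problem discussed below.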

 

 

Figure 1. A diagram of the main components of a generic face recognition system.

The "Model parameter recovery" and "Classification" stages can be seen as mutually

complementary: (i) a complex model that explicitly separates extrinsic and intrinsic

appearance variables places most of the workload on the former stage, while the

classification of the representation becomes straightforward; in contrast, (ii) simplistic

models have to resort to more statistically sophisticated approaches to matching

 

Figure 2. (a) The simplest generative model used for face recognition: images are assumed to consist of a low-frequency band that mainly corresponds to illumination changes, a mid-frequency band which contains most of the discriminative, personal information, and white noise. (b) The results of several of the most popular image filters operating under the assumption of this frequency model.
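As an illustration of a filter operating under this frequency model, the sketch below implements a simple difference-of-Gaussians band-pass: a large-sigma blur estimates the low-frequency illumination component, and a small-sigma blur attenuates high-frequency noise, so their difference retains the mid-frequency band. The SciPy dependency and the sigma values are our own assumptions, not part of any particular published filter.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def band_pass(image, sigma_low=8.0, sigma_high=1.0):
        # Retain the mid-frequency band of a grey-scale face image:
        # the large-sigma blur estimates the illumination (low-frequency) component,
        # the small-sigma blur suppresses pixel noise (high-frequency component).
        img = np.asarray(image, dtype=float)
        illumination = gaussian_filter(img, sigma_low)
        denoised = gaussian_filter(img, sigma_high)
        return denoised - illumination

Other filters with a comparable pass band, such as a Laplacian-of-Gaussian or a local contrast normalization, could be substituted in the same place.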

One of the main limitations of this group of methods arises from the existence of local minima, of which there are usually many. The key problem is that if the fitted model parameters correspond to a local minimum, classification is performed not merely on noise-contaminated data but on entirely incorrect data. An additional unappealing feature of these methods is that it is not possible to determine whether model fitting has failed in this manner.

The alternative approach is to employ a simple face appearance model and put greater emphasis on the classification stage. This general direction has several advantages which make it attractive from a practical standpoint. Firstly, model parameter estimation can now be performed as a closed-form computation, which is not only more efficient but also free of the fitting failures that can occur in an iterative optimization scheme. This also allows for more powerful statistical classification, clearly separating those stages of the image formation process that are well understood and explicitly modelled from those that are more easily learnt implicitly from training exemplars. This is the methodology followed in this chapter. The sections that follow describe the method in detail, followed by a report of experimental results.
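As a concrete, if simplified, illustration of this methodology, the sketch below combines a closed-form filtering step with a simple statistical matcher. The band_pass filter above and the nearest-neighbour rule are stand-ins for whichever filter and baseline matching algorithm are actually chosen; neither is prescribed by the chapter.

    import numpy as np

    def nearest_neighbour_identify(probe, gallery, filter_fn):
        # Apply the closed-form filter to the probe and to every gallery image,
        # then assign the identity of the closest match under Euclidean distance.
        f_probe = filter_fn(probe).ravel()
        best_id, best_dist = None, np.inf
        for identity, image in gallery.items():   # gallery: {identity: image array}
            d = np.linalg.norm(filter_fn(image).ravel() - f_probe)
            if d < best_dist:
                best_id, best_dist = identity, d
        return best_id

Because the filtering step is closed-form, the whole pipeline runs in a single pass over the gallery, with no risk of a fitting failure going undetected.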

 

 
