# Introduction

A biometric recognition system is an automated system that verifies or identifies a person's identity using that person's physiological and/or behavioral characteristics [Jain et al., 2004]. Face recognition has grown rapidly in the past few years because of its many uses in law enforcement, biometrics, security, and other commercial applications. As one of the most successful applications of image analysis and understanding, it has recently received significant attention for at least two reasons: the wide range of commercial and law enforcement applications, and the availability of feasible technologies after several years of research.

The face is one of the cues people most commonly use to recognize each other, and over the course of its evolution the human brain has developed highly specialized areas dedicated to the analysis of facial images. Although face recognition has become significantly more reliable, it is still not accurate all the time: the ability to correctly classify an image depends on a variety of variables, including lighting, pose (Gross and Brajovic, 2003), facial expression (Georghiades et al., 2001) and image quality (Shan et al., 2003). Over the past decades face recognition has been an active research area, and many algorithms and techniques have been proposed to match this ability of the human brain. It is, however, questionable whether the face alone is a sufficient basis for recognizing a person from a large population with high accuracy; indeed, the human brain also relies on much contextual information and operates on a limited population. The interest in the problem is evidenced by the emergence of dedicated face recognition conferences such as AFGR and AVBPA, and by systematic empirical evaluations of face recognition techniques (FRT), including the FERET [Phillips et al., 1997; Rizvi et al., 1998] and XM2VTS [Messer et al., 1999] protocols.

The perturbations that most affect the performance of face recognition systems are strong variations in pose and illumination. The variation between images of different faces is, in general, smaller than the variation between images of the same face taken in different environments. More specifically, the changes induced by illumination can be larger than the differences between individuals, causing systems based on comparing images to misclassify the identity of the input image [Adini et al., 1997]: the differences between images of one face under different illumination conditions are greater than the differences between images of different faces under the same illumination conditions.

A face verification system authenticates a person's claimed identity and decides whether that claim is correct. In this setting the user group is limited and, in most cases, a frontal pose can be enforced or demanded, but illumination remains a serious problem. Face recognition tests have revealed that lighting variation is one of the bottlenecks in face recognition and verification: if the lighting conditions differ from those of the gallery, the identity decision is wrong in many cases. There are two approaches to this problem, model-based and preprocessing-based (Adini et al., 1997; Jafri and Arabnia, 2009). Model-based approaches attempt to model the illumination variation explicitly.
Unfortunately, this requires a large amount of training data and sometimes fails when the lighting configuration is complicated, so these methods are not practical enough for recognition systems in most cases. The second approach uses preprocessing to remove the influence of lighting without any additional knowledge: the images are transformed directly, without assumptions or prior models, so these methods are commonly used in practical systems for their simplicity and efficiency. Besides traditional methods such as histogram equalization (HE) (Dalal and Triggs, 2005), histogram specification (HS) and the logarithm transform (LOG), newer methods in this category such as Gamma Intensity Correction (GIC) and the self-quotient image (SQI) (Wang et al., 2004) have been proposed recently, with impressive performance improvements on the illumination problem.

Some analysis is also possible. For example, the popular Eigen subspace projections used as features in many systems have been analyzed under illumination variation [Adini et al., 1997]. The conclusions suggest that significant illumination changes cause dramatic changes in the projection coefficient vectors and can therefore seriously degrade the performance of subspace-based methods [Zhao, 1999]. In direct appearance-based approaches, training examples are collected under different lighting conditions and used directly (i.e. without undergoing any lighting preprocessing) to learn a global model of the possible illumination variations, for example a linear subspace or manifold model, which then generalizes to the variations seen in new images [Belhumeur and Kriegman, 1998], [Basri and Jacobs, 2003], [Lee et al., 2005], [Chen et al., 2000] and [Zhang and Samaras, 2003]. The robustness of several popular linear subspace methods and of Local Binary Patterns (LBP) can be substantially improved by including a very simple image preprocessing stage based on gamma correction, Difference of Gaussian filtering and robust variance normalization [Tan and Triggs, 2010]. The INface (Illumination Normalization techniques for robust Face recognition) toolbox, in its current form, is a collection of functions that perform illumination normalization and hence tackle one of the greatest challenges in face recognition [Štruc and Pavešić, 2009]. The proposed method was presented at the conference [Anila and Devarajan, 2011].

The rest of the paper is organized as follows. Section II reviews typical preprocessing methods, Section III presents the proposed technique based on gamma correction, DOG filtering and contrast equalization together with its results, and Section IV reports the conclusion.

# II. TYPICAL PREPROCESSING METHODS

The image-processing-based methods for the illumination problem commonly attempt to normalize all face images to a canonical illumination so that they can be compared under "identical" lighting conditions. These methods can be formulated in the uniform form

I' = T(I)    (1)

where I is the original image, T is the transformation operator and I' is the image after the transform. T is expected to weaken the negative effect of the varying illumination so that I' can be used as a canonical form for a face recognition system, which is then expected to be insensitive to the varying lighting conditions. Histogram equalization (HE), histogram specification (HS) and the logarithm transform (LOG) are the most commonly used gray-scale transforms; Gamma Intensity Correction (GIC) and Multi Scale Retinex (MSR) were also designed to weaken the effect of illumination variations in face recognition. All of these methods are briefly introduced in the following sections and compared with the proposed method.

# a) Histogram Equalization (HE) and Histogram Specification (HS)

Histogram normalization is one of the most commonly used preprocessing methods. In image processing, equalizing a histogram stretches and redistributes the original histogram over the entire range of discrete levels of the image so that an enhancement of image contrast is achieved. The most commonly used histogram normalization technique is histogram equalization, which attempts to change the image histogram into one that is constant for all brightness values, corresponding to a brightness distribution where all values are equally probable. For an image I(x, y) with k discrete gray values, the probability of occurrence of gray level i is given by

p(i) = n_i / N    (2)

where i ∈ {0, 1, ..., k-1}, n_i is the number of pixels with gray level i and N is the total number of pixels in the image. The transformation of an input gray level j to a new intensity value is then the cumulative sum

I_out(j) = Σ_{i=0}^{j} n_i / N = Σ_{i=0}^{j} p(i)    (3)

Fig. 1: An original image, its histogram and the result of histogram equalization (from left to right)

HE and HS are the most commonly used techniques of histogram adjustment: HE creates an image with a uniform distribution over the whole brightness scale, while HS makes the histogram of the input image match a predefined shape.
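To make Eqs. (2)-(3) concrete, the following minimal Python sketch (an illustration, not part of the original method) builds the histogram, forms the cumulative sum and uses it as a lookup table; the file name face.jpg and the helper name histogram_equalize are hypothetical, and OpenCV's built-in cv2.equalizeHist is included only for comparison.

```python
import numpy as np
import cv2  # OpenCV, used here only to load the image and for the built-in comparison


def histogram_equalize(gray: np.ndarray, k: int = 256) -> np.ndarray:
    """Equalize an 8-bit grayscale image following Eqs. (2)-(3):
    p(i) = n_i / N, then map each level through the cumulative sum of p."""
    n_i = np.bincount(gray.ravel(), minlength=k)    # histogram counts n_i
    p = n_i / gray.size                             # Eq. (2): p(i) = n_i / N
    cdf = np.cumsum(p)                              # Eq. (3): running sum of p(i)
    lut = np.round((k - 1) * cdf).astype(np.uint8)  # scale back to [0, k-1]
    return lut[gray]                                # apply the lookup table


if __name__ == "__main__":
    img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
    eq_manual = histogram_equalize(img)
    eq_opencv = cv2.equalizeHist(img)                   # built-in HE for comparison
```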
# b) LOG

LOG is another frequently used gray-scale transform. It simulates the logarithmic sensitivity of the human eye to light intensity. Although LOG is one of the best of these methods at dealing with lighting variations on the three databases, it greatly decreases the recognition rates on the other subsets of the CAS-PEAL database. One possible reason is that the difference between the mean brightness values of the transformed images belonging to the same person becomes too large.

# c) GIC

Gamma Intensity Correction (GIC) corrects the overall brightness of a face image to match a pre-defined canonical face image, thereby weakening the effect of varying lighting.

# d) SQI

SQI is based on the reflectance-illumination model I = R L, where I is the image, R is the reflectance of the scene and L is the lighting. The lighting L can be considered the low-frequency component of the image I and can be estimated by a low-pass filter F, i.e. L ≈ F * I. The self-quotient image is therefore

R = I / (F * I)    (4)

SQI uses a weighted Gaussian filter that convolves only with the larger part of each edge region, so that halo effects are reduced. When the lighting variations are large (such as in the "illum" subset of the CMU-PIE database), the edges induced by lighting are prominent and this method works well. However, when the lighting variations are less pronounced, the main edges are induced by the facial features; if this kind of filter is still used, information that is useful for recognition is weakened. This is a possible reason why SQI decreases the recognition rates on the FERET and CAS-PEAL datasets while increasing them on the CMU-PIE database.
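The quotient of Eq. (4) can be sketched as follows, assuming a plain (unweighted) Gaussian low-pass filter in place of the weighted filter described above; the function name self_quotient_image, the smoothing scale sigma and the small constant eps are illustrative choices, not values taken from the text.

```python
import numpy as np
import cv2


def self_quotient_image(gray: np.ndarray, sigma: float = 2.0, eps: float = 1e-6) -> np.ndarray:
    """Approximate the self-quotient image R = I / (F * I) of Eq. (4).
    A plain Gaussian blur stands in for the weighted filter F; eps avoids
    division by zero in dark regions."""
    img = gray.astype(np.float64)
    smoothed = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma)  # low-pass estimate of F * I
    sqi = img / (smoothed + eps)                            # Eq. (4)
    # Rescale to [0, 255] for display; a recognizer would typically use the raw quotient.
    sqi = cv2.normalize(sqi, None, 0, 255, cv2.NORM_MINMAX)
    return sqi.astype(np.uint8)
```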
Fig. 2 gives some examples (under varying lighting conditions) of images after these transformations.

Fig. 2: Example effects of the typical preprocessing methods

From Fig. 2, the results show that HE, HS and GIC are better than the other two methods. (Some images in the FERET database had already been processed, so HE brings little improvement there.) Furthermore, they require no complex operations, and their time and space complexity is low. However, the example also shows that these preprocessing approaches do not always work well across different datasets, and some of them may hurt the recognition of face images with normal lighting even though they do facilitate the recognition of face images with illumination variations. It is therefore necessary to improve the preprocessing of face images taken under varying lighting conditions before applying it in practical systems. In the proposed technique, the strengths of gamma correction, the DOG filter and contrast equalization are combined and their net effect is exploited.

# III. PROPOSED TECHNIQUE

The proposed method combines the features of gamma correction, DOG filtering and contrast equalization. The overall stages of the proposed preprocessing method are shown in Fig. 3.

Fig. 3: The stages of the proposed image preprocessing method

# a) Gamma Correction

Gamma correction is a nonlinear gray-level transformation that replaces gray level I with the gray level I^γ:

I' = I^γ    (5)

for γ > 0, or log(I) for γ = 0, where γ ∈ [0, 1] is a user-defined parameter. This enhances the local dynamic range of the image in dark or shadowed regions while compressing it in bright regions.

Fig. 4: Gamma curve

This curve is valuable in keeping the pure black parts of the image black and the white parts white, while adjusting the values in between in a smooth manner. The overall tone of an image can thus be lightened or darkened depending on the gamma value used, while the dynamic range of the image is maintained. In Fig. 4 the pixel values range from 0.0, representing pure black, to 1.0, representing pure white; exponents below 1.0 lighten the image by expanding the dark regions, exponents above 1.0 darken it, and an exponent of 1.0 leaves the image unchanged. A power law with an exponent in the range [0, 0.5] is a good compromise; here γ = 0.2 [Tan and Triggs, 2010] is used as the default setting.
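A minimal sketch of the gamma-correction stage of Eq. (5), assuming an 8-bit input rescaled to [0, 1]; the default gamma = 0.2 follows the text, while the small offset used in the log branch is an implementation assumption to keep the expression defined at zero.

```python
import numpy as np


def gamma_correct(gray: np.ndarray, gamma: float = 0.2) -> np.ndarray:
    """Apply the power-law transform of Eq. (5), I -> I**gamma, with the
    default gamma = 0.2 used in the text."""
    img = gray.astype(np.float64) / 255.0          # scale 8-bit input to [0, 1]
    if gamma > 0:
        out = np.power(img, gamma)                 # Eq. (5) for gamma > 0
    else:
        out = np.log(img + 1e-6)                   # log(I) variant for gamma = 0
        out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return (out * 255).astype(np.uint8)
```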
# b) Difference of Gaussian (DOG) Filtering

Gamma correction does not remove the influence of overall intensity gradients such as shading effects. In computer vision, Difference of Gaussians is a grayscale image enhancement algorithm that subtracts one blurred version of an original grayscale image from another, less blurred version. The blurred images are obtained by convolving the original image with Gaussian kernels of differing standard deviations. Blurring an image with a Gaussian kernel suppresses only high-frequency spatial information; subtracting one blurred image from the other preserves the spatial information that lies between the ranges of frequencies preserved in the two blurred images. The Difference of Gaussians is thus a band-pass filter that discards all but a narrow band of the spatial frequencies present in the original grayscale image. As an image enhancement algorithm, the Difference of Gaussian (DOG) can be used to increase the visibility of edges and other detail in a digital image; because it removes the high-frequency detail that often contains random noise, it is well suited to processing images with a high degree of noise. The DOG impulse response is defined as

DoG(x, y) = (1 / (2π σ1²)) exp(-(x² + y²) / (2 σ1²)) - (1 / (2π σ2²)) exp(-(x² + y²) / (2 σ2²))    (6)

where the default values of σ1 and σ2 are chosen as 1.0 and 2.0, respectively. Since this operation reduces the overall contrast of the image, the contrast has to be enhanced in the subsequent stages.

Fig. 5: Comparison of various techniques under difficult lighting conditions

Fig. 5 shows the different ways of performing the preprocessing. The images are taken under different lighting conditions, varying from very bright to very dark. By comparison, the preprocessing performed with the proposed method is better than that obtained with LOG and HE. The proposed technique is tested on different datasets: Yale B, FRGC-204 and a real-time database created under difficult and varied illumination conditions. For each person, five images are created: normal, bright, very bright, dark and very dark. The images are processed with the proposed algorithm, i.e. the preprocessing that forms the first stage of any face recognition system.

Table I: Default parameter settings [Tan and Triggs, 2010]

| Procedure | Parameter | Value |
|---|---|---|
| Gamma correction | γ | 0.2 |
| DOG filtering | σ1, σ2 | 1.0, 2.0 |
| Contrast equalization | - | - |
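As an illustration of how the three stages could be chained with the defaults of Table I, the sketch below applies gamma correction, DOG filtering and a simple contrast normalization. Since the text does not specify the contrast-equalization formula, the robust rescaling and tanh squashing used here (and the parameter tau) are assumptions in the spirit of [Tan and Triggs, 2010], not the authors' exact procedure.

```python
import numpy as np
import cv2


def preprocess_chain(gray: np.ndarray, gamma: float = 0.2,
                     sigma1: float = 1.0, sigma2: float = 2.0,
                     tau: float = 10.0) -> np.ndarray:
    """Sketch of the proposed chain: gamma correction, DOG filtering, then a
    simple contrast normalization. gamma, sigma1 and sigma2 follow the defaults
    stated in the text; tau and the normalization form are assumptions."""
    img = gray.astype(np.float64) / 255.0
    img = np.power(img, gamma)                          # stage 1: Eq. (5)
    # stage 2: DOG of Eq. (6), realized as the difference of two Gaussian blurs
    g1 = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma1)
    g2 = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma2)
    img = g1 - g2
    # stage 3 (assumed form): rescale by a robust global contrast measure,
    # then squash extreme values with tanh to limit outliers
    img = img / (np.mean(np.abs(img)) + 1e-8)
    img = tau * np.tanh(img / tau)
    return cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)


# Example: preprocess one probe image before feeding it to a recognizer
# probe = cv2.imread("yaleB_probe.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
# out = preprocess_chain(probe)
```

Swapping in the exact contrast-equalization step would only change the two lines before the final rescaling; the gamma and DOG stages follow the equations given above.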
# IV. Conclusion

A new preprocessing technique has been proposed for face recognition applications under uncontrolled and difficult lighting conditions. It is achieved with a simple, efficient image preprocessing chain whose practical recognition performance is high compared with face recognition performed without preprocessing. The technique combines the strengths of gamma correction, Difference of Gaussian filtering and contrast equalization.

# References

* Y. Adini, Y. Moses, and S. Ullman, "Face recognition: The problem of compensating for changes in illumination direction," IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, Jul. 1997.
* S. Anila and N. Devarajan, "An efficient preprocessing technique under difficult lighting conditions," Proc. National Conference on Emerging Trends in Computer Communication and Informatics (ETCCI-2011), March 10-11, 2011.
* R. Basri and D. Jacobs, "Lambertian reflectance and linear subspaces," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 2, Feb. 2003.
* P. Belhumeur and D. Kriegman, "What is the set of images of an object under all possible illumination conditions?," Int. J. Computer Vision, vol. 28, no. 3, 1998.
* H. Chen, P. Belhumeur, and D. Jacobs, "In search of illumination invariants," Proc. CVPR, 2000.
* N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," Proc. CVPR, Washington, DC, 2005.
* A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, "From few to many: Illumination cone models for face recognition under differing pose and lighting," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 6, 2001.
* R. Gross and V. Brajovic, "An image preprocessing algorithm for illumination invariant face recognition," Proc. AVBPA, 2003.
* A. K. Jain, A. Ross, and S. Prabhakar, "An introduction to biometric recognition," IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 1, Jan. 2004.
* K. Lee, J. Ho, and D. Kriegman, "Acquiring linear subspaces for face recognition under variable lighting," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 5, 2005.
* K. Messer, J. Matas, J. Kittler, J. Luettin, and G. Maitre, "XM2VTSDB: The extended M2VTS database," Proc. International Conference on Audio- and Video-Based Person Authentication, 1999.
* P. J. Phillips, H. Moon, P. Rauss, and S. A. Rizvi, "The FERET evaluation methodology for face-recognition algorithms," Proc. Conference on Computer Vision and Pattern Recognition, 1997.
* Proceedings of the International Conferences on Automatic Face and Gesture Recognition, 1995-1998.
* Proceedings of the International Conferences on Audio- and Video-Based Person Authentication, 1997, 1999.
* R. Jafri and H. R. Arabnia, "A survey of face recognition techniques," Journal of Information Processing Systems, vol. 5, June 2009.
* S. A. Rizvi, P. J. Phillips, and H. Moon, "The FERET verification testing protocol for face recognition algorithms," Proc. International Conference on Automatic Face and Gesture Recognition, 1998.
* S. Shan, W. Gao, B. Cao, and D. Zhao, "Illumination normalization for robust face recognition against varying lighting conditions," Proc. AMFG, Washington, DC, 2003.
* H. Wang, S. Li, and Y. Wang, "Face recognition under varying lighting conditions using self-quotient image," Proc. IEEE Int. Conf. Automatic Face and Gesture Recognition, 2004.
* X. Tan and B. Triggs, "Enhanced local texture feature sets for face recognition under difficult lighting conditions," IEEE Trans. Image Processing, vol. 19, no. 6, June 2010.
* W. Zhao, R. Chellappa, A. Rosenfeld, and P. J. Phillips, "Face recognition: A literature survey," ACM Computing Surveys, 2003.
* W. Zhao, "Robust image based 3D face recognition," Ph.D. thesis, University of Maryland, 1999.
* L. Zhang and D. Samaras, "Face recognition under variable lighting using harmonic image exemplars," Proc. CVPR, Los Alamitos, CA, 2003.
* V. Štruc and N. Pavešić, "Performance evaluation of photometric normalization techniques for illumination invariant face recognition," in Y. J. Zhang (ed.), Advances in Face Image Analysis: Techniques and Technologies, IGI Global, 2009.