Effect of Changing Orientation on a Facial Recognition System
Facial features change markedly with illumination and viewing direction, leading to increased false rejection as well as false acceptance, and we cannot store images of the same person at every angle and illumination, since doing so would overload the database [1]. According to [1], 75% of authentication failures in a facial recognition system are caused by a change of orientation of the test face image with respect to the stored image. So, we need features that are invariant to changes in orientation.
In [2], 9 feature points with the property of angle invariance were selected: the 2 eyeballs, the 4 near and far corners of the eyes, the midpoint of the nostrils, and the 2 mouth corners. The SUSAN operator was chosen to extract the edge and corner points of the local feature area. In [3], multiple images are taken at different angles, maintaining the distance between camera and subject and keeping the same illumination. The range of rotation considered is from −60° to +60°, and PCA is used. Euclidean distance was used to measure similarity between the registered and test images; the angle feature vector contains 6 elements. In [4] the nose is taken as the localizing component of the face, with the nose tip corresponding to the brightest gray level. The features considered are eye–lip distance, eye–nose distance, nose–lip distance, eye–lip angle, eye–nose angle, and nose–lip angle. In [5], Locally Linear Regression predicts the frontal image from a non-frontal face image, Fisher Linear Discriminant Analysis performs the recognition, and Principal Component Analysis (PCA) is used for dimensionality reduction of the face image before recognition.
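The similarity matching used in [3], comparing a registered and a test feature vector by Euclidean distance, can be sketched as follows. This is a minimal illustrative example, not code from the paper: the gallery layout, the toy vector values, and the acceptance threshold are assumptions.

```python
import numpy as np

def euclidean_distance(a, b):
    """Euclidean distance between two feature vectors."""
    return float(np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float)))

def match(test_vector, gallery, threshold=0.5):
    """Return the gallery identity whose registered feature vector is
    closest to the test vector, or None if even the best distance
    exceeds the acceptance threshold (i.e. the face is rejected)."""
    best_id, best_dist = None, float("inf")
    for identity, registered in gallery.items():
        d = euclidean_distance(test_vector, registered)
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id if best_dist <= threshold else None

# Toy gallery of 6-element angle feature vectors (6 elements as in [3]).
gallery = {"alice": [0.2, 0.5, 0.1, 0.4, 0.3, 0.6],
           "bob":   [0.8, 0.1, 0.7, 0.2, 0.9, 0.3]}
print(match([0.21, 0.52, 0.12, 0.38, 0.31, 0.61], gallery))  # alice
```

A real system would normalize the distances by face size before matching, so that subject-to-camera distance does not dominate the comparison.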
In [1] the approach was to store the features of multiple pose images in a single trained MLP, so that both storage and the search for intermediate angles are efficient and the database is not overloaded. Since geometrical features are fragile to angle variation, a subset of geometrical features was used to express the pose; the goal was to extract angle features at low computational complexity. To identify the important features of the face, two filters are used: a minimum-value filter and binarization. Three points are used: the left and right eye locations and the middle of the mouth. The distances between them and the slopes of the lines connecting them are used as elements of the feature vector, so there were 12 elements, five distances and four gradients of lines, in the angle feature vector, which was the input of the MLP. For image feature extraction they used IDA, and a neural network was used as the mapping function. As there are 12 angle features, the model has 12 input nodes, 16 hidden units, and 8 output units, equal to the number of image features extracted using IDA. A separate MLP is used for every individual, and for training they took images of each individual at orientation angles from −50° to +50° at intervals of 5°. Of the 21 training images, 11 were taken at orientations −50°, −40°, −30°, −20°, −10°, 0°, +10°, +20°, +30°, +40°, and +50°, and 10 at orientations −45°, −35°, −25°, and so forth, so good generalization was obtained. This results in a good false-acceptance rate (FAR) as well as false-rejection rate (FRR). In [6], a classifier fusion of a frontal-face classifier and a profile-face classifier is designed in the DP-Adaboost algorithm to detect multi-angle faces, and an improved horizontal differential projection method is presented in the DP-Adaboost algorithm to remove false faces from the detected results.
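The per-person MLP of [1] (12 input nodes, 16 hidden units, 8 output units) can be sketched as a forward pass. This is only a structural sketch: the weights here are random, untrained stand-ins, and the sigmoid activation is an assumption, whereas the paper trains one such network per individual.

```python
import numpy as np

rng = np.random.default_rng(0)

# Architecture from [1]: 12 angle features in, 16 hidden units,
# 8 image features out. Random weights stand in for trained ones.
W1 = rng.standard_normal((12, 16)) * 0.1
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 8)) * 0.1
b2 = np.zeros(8)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(angle_features):
    """Map a 12-element angle feature vector to 8 image features."""
    h = sigmoid(angle_features @ W1 + b1)   # hidden layer (16 units)
    return sigmoid(h @ W2 + b2)             # output layer (8 units)

x = rng.standard_normal(12)  # stand-in for distances and slopes
print(forward(x).shape)      # (8,)
```

At recognition time, the network output for the measured pose would be compared against the image features extracted from the probe face.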
Effect of Changing Illumination on a Facial Recognition System
Many techniques have been introduced to solve illumination problems, such as the illumination cones method [7] and the 9D linear subspace [8], but they require a large amount of data and knowledge of the light source. To resolve this issue, a region-based image processing method [9] was introduced. Some illumination normalization methods were also introduced, such as multiscale retinex [10], the wavelet-based normalization technique (WA) [11], and the DCT-based normalization technique [12]. In [13] an illumination normalization method called the Mean Estimation method was introduced, in which the illumination component is removed by subtracting the mean estimate from the original image. In order to standardize the overall grey level of different facial images, a ratio matrix of the quotient image and its modulus mean value is obtained. As the gray value of facial organs is lower than that of facial skin, post-processing is applied to the images so that face recognition focuses on facial texture. This method gave good results. The illumination normalization technique introduced in [14] uses histogram normalization, which increases the contrast of the image by changing the values of the original pixels, since PCA and LDA are sensitive to lighting and the property of LDA is violated by changing the pixel values of face images. In [15] the method is based on bidimensional empirical mode decomposition and uses the gradient-faces algorithm to process face images under varying lighting. Two IMFs of the face image are decomposed in the logarithm domain; the gradient faces are then adopted to enhance the high-frequency component of the face images after reconstruction. PCA is used to extract facial features, and a KNN classifier based on cosine distance is used for classification.
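Histogram-based contrast normalization of the kind used in [14] can be illustrated with a standard histogram-equalization routine. This is a generic textbook sketch, not the exact implementation of any cited paper:

```python
import numpy as np

def histogram_equalize(img):
    """Histogram-equalize an 8-bit grayscale image: remap pixel values
    through the normalized cumulative histogram so that intensities
    spread over the full 0-255 range, increasing contrast."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]            # first non-zero CDF value
    denom = max(img.size - cdf_min, 1)   # guard against a flat image
    # Classic equalization mapping, rounded to integer gray levels.
    lut = np.clip(np.round((cdf - cdf_min) / denom * 255), 0, 255).astype(np.uint8)
    return lut[img]

# A dim, low-contrast patch is stretched to the full dynamic range.
dim = np.array([[50, 60], [70, 80]], dtype=np.uint8)
print(histogram_equalize(dim))  # [[  0  85] [170 255]]
```

Because equalization is applied per image, two photographs of the same face taken under different lighting end up with comparable gray-level distributions before feature extraction.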
In [16], histogram equalization, gradient faces, and Weber-face methods are used to normalize the illumination of the face images; feature extraction is done using centre-symmetric local binary pattern, local binary pattern, local directional pattern, local phase quantization, rotated local binary pattern, and local ternary pattern descriptors; and an SVM is applied for classification. In [17] a Gabor phase-based illumination-invariant extraction method is introduced. The face images are first normalized using a homomorphic-filter-based preprocessing method to eliminate the effects of illumination changes. Then, a set of 2D real Gabor wavelets with different directions is used for image transformation, and multiple Gabor coefficients are combined into one whole, considering both spectrum and phase. Lastly, the illumination invariant is obtained by extracting the phase feature from the combined coefficient. In [18], ICA is used to eliminate the effect of lighting in the image.
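The local binary pattern descriptor, one of the feature extractors used in [16], can be sketched in its basic 3x3 form. This is a simplified illustration; the cited work uses several refined LBP variants (centre-symmetric, rotated, ternary) rather than exactly this code.

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 LBP: for each interior pixel, compare the 8 neighbours
    to the centre (clockwise from the top-left) and pack the resulting
    bits into an 8-bit code."""
    img = np.asarray(img, dtype=int)
    c = img[1:-1, 1:-1]  # centre pixels
    # 8 neighbour offsets, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(int) << (7 - bit)
    return codes

def lbp_histogram(img):
    """256-bin histogram of LBP codes, used as a texture descriptor."""
    return np.bincount(lbp_codes(img).ravel(), minlength=256)

img = np.array([[9, 1, 9],
                [1, 5, 1],
                [9, 1, 9]])
print(lbp_codes(img))  # [[170]] -- alternating bits 10101010
```

Because each code depends only on intensity orderings within a neighbourhood, LBP histograms are largely insensitive to monotonic illumination changes, which is why [16] pairs them with illumination normalization and an SVM.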
References:
[1] Kato, Hisateru & Chakraborty, Goutam & Chakraborty, Basabi. (2012). A Real-Time Angle- and Illumination-Aware Face Recognition System Based on Artificial Neural Network. Applied Computational Intelligence and Soft Computing. 2012. 10.1155/2012/274617.
[2] Hua Gu, Guangda Su, Cheng Du. (2003). Feature Points Extraction from Faces.
[3] B. Chakraborty, “A Novel ANN based Approach for Angle Invariant Face Verification,” 2007 IEEE Symposium on Computational Intelligence in Image and Signal Processing, Honolulu, HI, 2007, pp. 72–76, doi: 10.1109/CIISP.2007.369296.
[4] Kavita S.R., Mukesh Z.A., Mukesh R.M. (2010) Extraction of Pose Invariant Facial Features. In: Das V.V., Vijaykumar R. (eds) Information and Communication Technologies. ICT 2010. Communications in Computer and Information Science, vol 101. Springer, Berlin, Heidelberg
[5] Chai, Xiujuan & Shan, Shiguang & Chen, Xilin & Gao, Wen. (2007). Locally Linear Regression for Pose-Invariant Face Recognition. IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. 16. 1716–25. 10.1109/TIP.2007.899195.
[6] Zheng, Ying-Ying & Yao, Jun. (2015). Multi-angle face detection based on DP-Adaboost. International Journal of Automation and Computing. 12. 10.1007/s11633-014-0872-8.
[7] J. Ho, M.-H. Yang, J. Lim, K.-C. Lee, and D. Kriegman, “Clustering appearances of objects under varying illumination conditions,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 11–18, Madison, Wis, USA, June 2003
[8] R. Basri and D. W. Jacobs, “Lambertian reflectance and linear subspaces,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 2, pp. 218–233, 2003.
[9] G. An, J. Wu, and Q. Ruan, “An illumination normalization model for face recognition under varied lighting conditions,” Pattern Recognition Letters, vol. 31, no. 9, pp. 1056–1067, 2010.
[10] D. J. Jobson, Z.-U. Rahman, and G. A. Woodell, “A multiscale retinex for bridging the gap between color images and the human observation of scenes,” IEEE Transactions on Image Processing, vol. 6, no. 7, pp. 965–976, 1997.
[11] S. Du and R. Ward, “Wavelet-based illumination normalization for face recognition,” in Proceedings of the IEEE International Conference on Image Processing (ICIP ’05), vol. 2, pp. 954–957, Genova, Italy, September 2005.
[12] W. Chen, M. J. Er, and S. Wu, “Illumination compensation and normalization for robust face recognition using discrete cosine transform in logarithm domain,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 36, no. 2, pp. 458–466, 2006.
[13] Luo, Yong & Guan, Ye-Peng & Zhang, Chang-Qi. (2013). A Robust Illumination Normalization Method Based on Mean Estimation for Face Recognition. ISRN Machine Vision. 2013. 1–10. 10.1155/2013/516052.
[14] Shah, Jamal & Sharif, Muhammad & Raza, Mudassar & Murtaza, Marryam & Ur Rehman, Saeed. (2015). Robust Face Recognition Technique under Varying Illumination. Journal of Applied Research and Technology. 13. 97–105. 10.1016/S1665-6423(15)30008-0.
[15] Yang, Zhi-Jun & He, Xue & Xiong, Wen-Yi & Nie, Xiang-Fei. (2016). Face Recognition under Varying Illumination Using Green's Function-based Bidimensional Empirical Mode Decomposition and Gradientfaces. ITM Web of Conferences. 7. 01015. 10.1051/itmconf/20160701015.
[16] Tran, Chi-Kien & Tseng, Chin-Dar & Chao, Pei-Ju & Shieh, Chin-Shiuh & Chang, Liyun & Lee, Tsair-Fwu. (2017). Face Recognition under Varying Lighting Conditions: A Combination of Weber-face and Local Directional Pattern for Feature Extraction and Support Vector Machines for Classification. Journal of Information Hiding and Multimedia Signal Processing. 8. 1009–1019.
[17] Fan, Chunnian & Wang, Shuiping & Zhang, Hao. (2017). Efficient Gabor Phase Based Illumination Invariant for Face Recognition. Advances in Multimedia. 2017. 1–11. 10.1155/2017/1356385.
[18] Ahmad, Fawad & Khan, Asif & Islam, Ihtesham & Ullah, Habib. (2017). Illumination normalization using independent component analysis and filtering. The Imaging Science Journal. 1–6. 10.1080/13682199.2017.1338815.