
Graduate Student: 楊權輝 (Chyuan-Huei Thomas Yang)
Thesis Title: A Study on Robust Image Matching Methods by Using Normalized Gradients (運用正規化梯度的強韌影像比對法之研究)
Advisors: 張隆紋 (Long-Wen Chang), 賴尚宏 (Shang-Hong Lai)
Degree: Doctor
Department: Department of Computer Science, College of Electrical Engineering and Computer Science
Year of Publication: 2005
Graduation Academic Year: 94
Language: English
Pages: 92
Keywords (Chinese): 影像比對、臉部辨識、亮度環境、正規化梯度、Hausdorff距離、混和式比對方法
Keywords (English): Image matching, Face recognition, Illumination condition, Normalized gradient, Hausdorff distance, Hybrid image matching method


    In this dissertation, robust image matching methods using gradient variations are studied. Two different approaches, with and without the Hausdorff distance, are developed. We evaluate the proposed methods on face recognition, especially under varying illumination conditions, since gradient variations are well suited to this problem. Face image matching is an essential step in face recognition and face verification, yet it is difficult to achieve robust face matching under varying image acquisition conditions. We first present a novel face image matching algorithm, without the Hausdorff distance, that is robust against illumination variations. The proposed algorithm is motivated by the characteristically high image gradient along the face contour. We define a new consistency measure as the inner product between two normalized gradient vectors at corresponding locations in the two images, where the normalized gradient is obtained by dividing the computed gradient vector by the corresponding locally maximal gradient magnitude. We then average the consistency measures over all pairs of corresponding face contour pixels to obtain a robust matching measure between two face images. To alleviate problems due to shadow and intensity saturation, we introduce an intensity weighting function for each individual consistency measure, forming a weighted average of the consistency measures. This robust consistency measure is further extended to integrate multiple face images of the same person captured under different illumination conditions, completing our robust face matching algorithm. Reliable image matching is important to many problems in computer vision, image processing, and pattern recognition, and the Hausdorff distance and many of its variations have been employed for image matching with success.
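The normalized-gradient consistency measure described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction from the abstract, not the dissertation's actual code: the 3×3 normalization window follows Figure 2.1, while the function names, the boolean contour mask, and the zero-division guard are assumptions, and the intensity weighting for shadow and saturation is omitted for brevity.

```python
import numpy as np

def normalized_gradients(image, win=3):
    """Gradient vectors divided by the maximal gradient magnitude
    inside a local win x win window (3x3 per Figure 2.1)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    # local maximum of the gradient magnitude over the window
    pad = win // 2
    padded = np.pad(mag, pad, mode='edge')
    h, w = mag.shape
    local_max = np.zeros_like(mag)
    for dy in range(win):
        for dx in range(win):
            local_max = np.maximum(local_max, padded[dy:dy + h, dx:dx + w])
    local_max = np.maximum(local_max, 1e-8)  # guard against division by zero
    return gx / local_max, gy / local_max

def consistency_measure(img_a, img_b, contour_mask):
    """Average inner product of normalized gradients over the
    corresponding contour pixels of two aligned images."""
    ax, ay = normalized_gradients(img_a)
    bx, by = normalized_gradients(img_b)
    inner = ax * bx + ay * by
    return inner[contour_mask].mean()
```

Because both gradients are normalized by their local maxima, scaling the image intensity (a crude model of an illumination change) leaves the measure unchanged, which is the source of the robustness claimed above.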
The second approach proposed in this dissertation is an improved image matching method based on a modified Hausdorff distance combined with the normalized gradient consistency measure. This hybrid algorithm integrates the geometric Hausdorff distance with photometric intensity gradient information to obtain a better image similarity measure. To demonstrate the improvement, we compare the proposed algorithm against several previous image matching methods on the problem of face recognition under lighting changes. Experimental results of applying the proposed face image matching algorithms, with and without the Hausdorff distance, on the Yale face database and the CMU PIE database are compared with those of previous matching methods. The results show superior recognition rates under different lighting conditions for the proposed robust face image matching algorithms, both with and without the Hausdorff distance.
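For reference, the modified Hausdorff distance (MHD) of Dubuisson and Jain [38], on which the hybrid measure builds, can be sketched as follows. The dissertation augments this geometric distance with the normalized-gradient consistency term; since the abstract does not specify how the two terms are weighted, only the distance itself is shown, and the brute-force nearest-neighbour search is an illustrative simplification.

```python
import numpy as np

def modified_hausdorff(points_a, points_b):
    """Modified Hausdorff distance between two point sets (e.g. edge
    pixels), given as (N, 2) arrays of coordinates: the mean distance
    from each point of one set to its nearest point in the other,
    symmetrized by taking the larger of the two directions."""
    def directed(a, b):
        # pairwise Euclidean distances, then mean nearest-neighbour distance
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return d.min(axis=1).mean()
    return max(directed(points_a, points_b), directed(points_b, points_a))
```

Unlike the classical Hausdorff distance, which takes the worst-case (maximum) point-to-set distance and is therefore sensitive to a single outlier, the MHD averages the nearest-neighbour distances, which is why it is preferred for noisy edge maps.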

    Contents
    1 Introduction 1
    1.1 Motivation 1
    1.2 Related Works on Image Matching 3
    1.3 Review of Previous Works on Image Matching 6
    1.3.1 Naïve Method 7
    1.3.2 Gray-level Derivatives Method and 2D Gabor Filter Method 7
    1.3.3 PCA-based Method 8
    1.4 Related Works on Image Matching by Using Hausdorff Distance 10
    1.5 Review of Previous Works on Image Matching with Hausdorff Distance 14
    1.5.1 Distance Function 14
    1.5.2 Normalized Hausdorff Distance 17
    1.5.3 Modified Hausdorff Distance 19
    1.5.4 M-estimation and Least Trimmed Square of HD Measures 20
    1.6 Organization of This Dissertation 21
    2 Robust Image Matching Method by Normalized Gradient 23
    2.1 The Idea of the Proposed Robust Image Matching Method 23
    2.2 Proposed Robust Image Matching Method 28
    3 Hybrid Image Similarity Measure Combining Hausdorff Distance with Normalized Gradient 34
    3.1 Background of the Related Hausdorff Distance 34
    3.2 Proposed Hybrid Image Similarity Measure 36
    4 Experimental Results 42
    4.1 Testing Results on the Yale Face Database with Normalized Gradient 42
    4.2 Testing Results on Yale Face Database B with Normalized Gradient 51
    4.3 Testing Results on the CMU PIE Database with Normalized Gradient 56
    4.4 Experimental Results of the Robust Image Matching Method with Hausdorff Distance 59
    4.5 Discussions 67
    5 Conclusions 84
    Bibliography 86

    List of Figures
    Figure 1.1 An example of face matching. 2
    Figure 1.2 An image under different illumination conditions. 3
    Figure 1.3 A binary image D. 16
    Figure 1.4 After applying the Chamfer DT to D. 16
    Figure 1.5 A similar figure E. 16
    Figure 1.6 After applying the directed multiplication for one position. 17
    Figure 2.1 A 3×3 normalization window for the pixel. 25
    Figure 2.2 Two consistent gradient vectors at the corresponding locations of two similar contours. 26
    Figure 2.3 The intensity weighting function. 28
    Figure 2.4 The procedure of the proposed method. 31
    Figure 2.5 Preprocessing on an image database. 32
    Figure 2.6 Application on an image database. 33
    Figure 2.7 Application on an image database after preprocessing. 33
    Figure 3.1 (a) An example face image in the Yale B face database, (b) the edge map, and (c) the normalized gradient field computed from the face image. 41
    Figure 4.1 An example of one subject with three images in the Yale Face Database with (a) center-light, (b) right-light, and (c) left-light. 43
    Figure 4.2 Four sets of face images with different lighting directions. 44
    Figure 4.3 The images during processing. 45
    Figure 4.4 The matched locations where the matching sample face is (a). 45
    Figure 4.5 (a) Search image, (b) smoothed image of (a), (c) matching template, (d) smoothed matching template, (e) absolute value of gradients of (a), (f) absolute value of gradients of (c), (g) edge detection of (c), (h) matched image. 46
    Figure 4.6 The results of using the template of Figure 4.5(c) to match all 12 images of the four sets in Figure 4.2. 47
    Figure 4.7 (a) A face matching template, (b) its edges. 47
    Figure 4.8 The results of using the matching template of Figure 4.7(a) to match all 12 images of the four sets in Figure 4.2. 48
    Figure 4.9 (a) A template face image and (b) the extracted face contour map. 50
    Figure 4.10 Face image matching results with one of the face template contours overlaid on the input face images under (a) center-light, (b) right-light, and (c) left-light conditions. 51
    Figure 4.11 10 subjects of the Yale Face Database B. 52
    Figure 4.12 The original three reference images of the first subject. 53
    Figure 4.13 Three reference images of the first subject from Figure 4.5. 53
    Figure 4.14 The 36 test images with different illumination conditions of the first subject, excluding the three reference images. 54
    Figure 4.15 (a) The edge contour of the first subject, (b) the edge contour of the first subject matched to itself, and (c) the edge contour of the first subject matched to the second subject. 55
    Figure 4.16 18 test images with different illumination conditions of a subject in the CMU PIE dataset. 57
    Figure 4.17 The original three reference images of a subject in the CMU PIE dataset. 57
    Figure 4.18 Three reference images of the same subject from Figure 4.10. 58
    Figure 4.19 (a) The edge contour of a subject in the CMU PIE dataset, (b) the edge contour of this subject matched and overlaid onto his own face image, and (c) the edge contour of this subject matched and overlaid onto another subject's face image. 58
    Figure 4.20 The center reference images of all subjects (10 persons) in Yale face database B. 61
    Figure 4.21 Another 36 test images of one subject under different illumination conditions in the Yale face database B. 62
    Figure 4.22 One of the results using (a) the image yaleB10_P00A-035E-20.bmp, (b) the edge of (a), and (c) the image yaleB10_P00A-035E-20.bmp, which is in the Yale face database. 62
    Figure 4.23 (a) A face template image indexed yaleB10_P00A+000E+00, (b) its edge map, (c) its normalized gradient field, and (d) the face with the normalized gradient field overlaid. 65
    Figure 4.24 (a) A test image indexed yaleB10_P00A+060E+20 in the Yale B face database, (b) the recognition result using the MHD, with the edge map of the recognized subject in Figure 4.23(b) overlaid on the face image, (c) the normalized gradient field computed from the face region in (a), and (d) the recognition result using the proposed hybrid image matching algorithm, with the normalized gradient field of the recognized subject in Figure 4.23(c) overlaid on the face image. 65
    Figure 4.25 (a) A face template image indexed yaleB08_P00A+000E+00, (b) its edge map, (c) its normalized gradient field, and (d) the face image with the normalized gradient field overlaid. 66
    Figure 4.26 (a) A test image indexed yaleB08_P00A+020E-40 in the Yale B face database, (b) the recognition result using the MHD, with the edge map of the incorrectly recognized subject in Figure 4.23(b) overlaid on the face image, (c) the normalized gradient field computed from the face region in (a), and (d) the recognition result using the proposed hybrid image matching algorithm, with the normalized gradient field of the correctly recognized subject in Figure 4.25(c) overlaid on the face image. 66
    Figure 4.27 23 misses using the robust image matching method with the normalized gradient in Yale Face Database B. 68
    Figure 4.28 The 7 missed images and their names using the RIMNG method. 76

    List of Tables
    Table 4.1 The recognition rates of the proposed method and the other methods with one reference face image in the Yale database. 50
    Table 4.2 Comparison of recognition rates using isotropic gray-level derivatives, 2D Gabor filters, Eigenface, Fisherface, and the proposed robust image matching algorithm on the Yale Face Database B. 56
    Table 4.3 Comparison of recognition rates using isotropic gray-level derivatives, 2D Gabor filters, Eigenface, Fisherface, and the proposed robust image matching algorithm with three reference images on the CMU PIE Database. 59
    Table 4.4 Recognition rates for the HD and MHD methods or normalized gradient matching in the Yale B face dataset. 63
    Table 4.5 Recognition rates for our proposed algorithm and some previous face recognition methods in the Yale B face dataset. 63
    Table 4.6 Recognition rates for the HD and MHD methods or normalized gradient matching in the CMU face dataset. 64
    Table 4.7 Recognition rates for our proposed algorithm and some previous face recognition methods in the CMU face dataset. 64
    Table 4.8 ‘yaleB02_P00A+020E-10’ and ‘yaleB02_P00A+035E+65’ hit using HDNG with chessboard distance in Yale face database B. 69
    Table 4.9 ‘yaleB02_P00A+020E-10’ and ‘yaleB02_P00A+035E+65’ hit using HDNG with Euclidean distance in Yale face database B. 70
    Table 4.10 ‘yaleB02_P00A+020E-10’ hit using MHDNG with city-block distance in Yale face database B. 71
    Table 4.11 ‘yaleB02_P00A+020E-10’ hit using MHDNG with chessboard distance in Yale face database B. 72
    Table 4.12 ‘yaleB02_P00A+020E-10’ and ‘yaleB02_P00A+035E+65’ hit using MHDNG with Euclidean distance in Yale face database B. 73
    Table 4.13 Comparison of hit rates for Yale Face Database B using RIMNG and using HD, MHD, HDNG, and MHDNG with city-block, chessboard, and Euclidean distances. 74
    Table 4.14 Comparison of hit rates for the CMU Face Database using RIMNG and using HD, MHD, HDNG, and MHDNG with city-block, chessboard, and Euclidean distances. 75
    Table 4.15 ‘g21_14’, ‘g21_15’, and ‘g21_22’ hit using MHDNG with city-block distance in the CMU face database. 77
    Table 4.16 ‘g28_17’ hit using MHDNG with city-block distance in the CMU face database. 78
    Table 4.17 ‘g40_05’ and ‘g40_10’ hit using MHDNG with city-block distance in the CMU face database. 79
    Table 4.18 ‘g40_05’ and ‘g40_10’ hit using MHDNG with chessboard distance in the CMU face database. 80
    Table 4.19 ‘g21_14’, ‘g21_15’, and ‘g21_22’ hit using MHDNG with Euclidean distance in the CMU face database. 81
    Table 4.20 ‘g28_17’ hit using MHDNG with Euclidean distance in the CMU face database. 82
    Table 4.21 ‘g40_05’ and ‘g40_10’ hit using MHDNG with Euclidean distance in the CMU face database. 83

    [1] Y. Adini, Y. Moses, S. Ullman, “Face recognition: the problem of compensating for changes in illumination direction”. IEEE Trans. Pattern Analysis Mach. Intel., Vol. 19, No. 7 (1997) 721-732
    [2] P. N. Belhumeur, J. P. Hespanha, D. J. Kriegman, “Eigenfaces vs. Fisherfaces: recognition using class specific linear projection”. IEEE Trans. Pattern Analysis Mach. Intel., Vol. 19, No. 7 (1997) 711-720
    [3] S. Belongie, J. Malik, J. Puzicha, “Matching shapes”. Proc. Int. Conf. Computer Vision, (2001) 454-461
    [4] D. Beymer, T. Poggio, “Face recognition from one example view”. MIT AI Memo No. 1536 (1995)
    [5] R. Chellappa, C. L. Wilson, S. Sirohey, “Human and machine recognition of faces: a survey”, Proceedings of the IEEE , Volume: 83 Issue: 5 , May 1995, Page(s): 705 –741
    [6] K.-C. Chung, S. C. Kee, S. R. Kim, “Face recognition using principal component analysis of Gabor filter responses”. Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, 1999. Proceedings. International Workshop on, 26-27 Sept. 1999, Page(s): 53-57
    [7] G. J. Edwards, C. J. Taylor, T. F. Cootes, “Interpreting face images using active appearance models”. Proc. Third IEEE Conf. on Automatic Face and Gesture Recognition (1998) 300-305
    [8] D. A. Forsyth, J. Ponce, “Computer Vision A Modern Approach”, Prentice Hall (2003)177-180
    [9] A. S. Georghiades, D. J. Kriegman, P. N. Belhumeur, “Illumination Cones for Recognition under Variable Lighting: Faces”. Proc. IEEE Conf. CVPR (1998) 52-59
    [10] A. S. Georghiades, D. J. Kriegman, P. N. Belhumeur, “From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose”. IEEE Trans. Pattern Analysis Mach. Intel., Vol. 23, No. 6 (2001) 643-660
    [11] P. Gros, “Color illumination models for image matching and indexing”. Proc. Int. Conf. Pattern Recognition, Vol. 3 (2000)576 -579
    [12] K. Hotta, T. Mishima, T. Kurita, S. Umeyama, “Face matching through information theoretical attention points and its applications to face detection and classification”. Proc. Fourth IEEE Conf. on Automatic Face and Gesture Recognition (2000) 34-39
    [13] R.-L. Hsu, A. K. Jain, “Face modeling for recognition”, Image Processing, 2001. Proceedings. 2001 International Conference on, Volume: 2, 7-10 Oct. 2001, Page(s): 693 -696 vol.2
    [14] C. Liu, H. Wechsler, “A Gabor feature classifier for face recognition”, Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on , Volume: 2 , 7-14 July 2001, Page(s): 270 -275 vol.2
    [15] A. Mojsilovic, J. Hu, “Extraction of perceptually important colors and similarity measurement for image matching”. Proc. Int. Conf. Image Processing (2000) 61-64
    [16] B. Moghaddam, T. Jebara, and A. Pentland, “Bayesian Face Recognition”, Pattern Recognition, Vol. 33, No. 11, November, 2000, pp. 1771-1782.
    [17] X. Mu, M. Artiklar, M. H. Hassoun, P. Watta, “Training algorithms for robust face recognition using a template-matching approach”. Proc. Int. Joint Conf. Neural Networks (2001) 2877-2882
    [18] A. Pentland, “Looking at people: sensing for ubiquitous and wearable computing”, Pattern Analysis and Machine Intelligence, IEEE Transactions on, Volume: 22 Issue: 1 , Jan. 2000, Page(s): 107 –119
    [19] J. E. Dennis, and R. B. Schnabel, “Numerical Methods for Unconstrained Optimization and Nonlinear Equations”, Prentice-Hall Pub., New Jersey, USA: 1983.
    [20] K. Sengupta, J. Ohya, “An affine coordinate based algorithm for reprojecting the human face for identification tasks”. Proc. International Conference on Image Processing, Vol. 3 (1997) 340 -343
    [21] A. Samal, P. Iyengar, “Automatic Recognition and Analysis of Human Faces and Facial Expressions: A Survey”, Pattern Recognition, Vol. 25, pp. 65-77, 1992
    [22] B. Takacs, H. Wechsler, “Face recognition using binary image metrics”. Proc. Third IEEE Conf. Automatic Face and Gesture Recognition (1998) 294-299
    [23] T. Sim, S. Baker, and M. Bsat, “The CMU Pose, Illumination, and Expression Database”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 25, No. 12, December, 2003, pp. 1615 - 1618.
    [24] T. Sim, S. Baker, and M. Bsat: “The CMU Pose, Illumination, and Expression (PIE) Database”, Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, May, 2002.
    [25] T. Sim, S. Baker, and M. Bsat , “The CMU Pose, Illumination, and Expression (PIE) Database of Human Faces”, Tech. report CMU-RI-TR-01-02, Robotics Institute, Carnegie Mellon University, January, 2001.
    [26] L. Wiskott, J.-M. Fellous, N. Krüger, C. von der Malsburg, “Face recognition by elastic bunch graph matching”. IEEE Trans. PAMI, Vol. 19, No. 7, (1997) 775-779
    [27] C.-H. Yang, S.-H. Lai, L.-W. Chang, “Robust Face Matching Under Different Lighting Conditions”. Proc. IEEE International Conference on Multimedia and Expo, Session ThuAmPO1 No. 317 (2002)
    [28] C.-H. Yang, S.-H. Lai, L.-W. Chang, “An Illumination-Insensitive Face Matching Algorithm”. Proc. Third IEEE Pacific Rim Conference on Multimedia (2002) 1185-1193
    [29] M.-H. Yang, N. Ahuja, D. Kriegman, “Face recognition using kernel eigenfaces”, Image Processing, 2000. Proceedings. 2000 International Conference on, Volume: 1, 10-13 Sept. 2000, Page(s): 37 -40 vol.1
    [30] W.-Y. Zhao, R. Chellappa, “Illumination-Insensitive Face Recognition using Symmetric Shape-from-Shading”, Proc. IEEE Conf. Computer Vision Pattern Recognition, pp. 286-293, 2000.
    [31] J. Zhu, B. Liu, S. C. Schwartz, “General illumination correction and its application to face normalization”, Acoustics, Speech, and Signal Processing, 2003. Proceedings. (ICASSP '03). 2003 IEEE International Conference on, Volume: 3, April 6-10, 2003, Page(s): III_133 -III_136
    [32] J. A. Gualtieri, J. Le Moigne, C.V. Packer, “Distance between images,” Frontiers of Massively Parallel Computation, 1992., Fourth Symposium on the, pp.:216 – 223, 19-21 Oct. 1992
    [33] J. You, E. Pissaloux, J.-L. Hellec, P. Bonnin, “A guided image matching approach using Hausdorff distance with interesting points detection,” Image Processing, 1994. Proceedings. ICIP-94., IEEE International Conference, Volume: 1, pp. 968 - 972 vol.1, 13-16 Nov. 1994
    [34] V. Di Gesù, V. Starovoitov, “Distance-based functions for image comparison,” Pattern Recognition Letters, Volume: 20, Issue: 2, pp. 207-214, February, 1999
    [35] A. Ghafoor, R. N. Iqbal, and S. Khan, “Robust Image Matching Algorithm,” EC-VIP-MC 2003, 4th EURASIP Conference focused on Video/Image Processing and Multimedia Communications, pp. 155-160, 2-5 July 2003
    [36] V. Perlibakas, “Distance measures for PCA-based face recognition,” Pattern Recognition Letters, Volume: 25, Issue: 6, pp. 711-724, April, 2004
    [37] D. P. Huttenlocher, G. A. Klanderman, W. J. Rucklidge, “Comparing images using the Hausdorff distance,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, Volume: 15 , Issue: 9 , pp. 850 – 863, Sept. 1993
    [38] M.-P. Dubuisson, A. K. Jain, “A modified Hausdorff distance for object matching,” Pattern Recognition, 1994. Vol. 1 - Conference A: Computer Vision & Image Processing., Proceedings of the 12th IAPR International Conference, pp. 566 - 568, 9-13 Oct. 1994
    [39] J. Paumard, “Robust comparison of binary images,” Pattern Recognition Letters, Volume: 18, Issue: 10, pp. 1057-1063, October, 1997
    [40] B. Takács, “Comparing Face Images Using the Modified Hausdorff Distance,” Pattern Recognition, Volume: 31, Issue: 12, pp. 1873-1881, December, 1998
    [41] B. Günsel, A. M. Tekalp, “Shape similarity matching for query-by-example”, Pattern Recognition, Volume: 31, Issue: 7, pp. 931-944, July 31, 1998
    [42] D.-G. Sim, O.-K. Kwon, R.-H. Park, “Object matching algorithms using robust Hausdorff distance measures,” Image Processing, IEEE Transactions on, Volume: 8, Issue: 3, pp. 425–429, March 1999
    [43] X. Yi; O. I. Camps, “Line-based recognition using a multidimensional Hausdorff distance,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, Volume: 21, Issue: 9, pp. 901–916, Sept. 1999
    [44] Y. Gao, Maylor K.-H. Leung, “Line segment Hausdorff distance on face matching,” Pattern Recognition, Volume: 35, Issue: 2, pp. 361-371, February, 2002
    [45] B. Guo, K.-M. Lam, K.-H. Lin,W. C. Siu, “Human face recognition based on spatially weighted Hausdorff distance,” Pattern Recognition Letters, Volume: 24, Issue: 1-3, pp. 499-507, January, 2003
    [46] K.-H. Lin, K.-M. Lam, W. C. Siu, “Spatially eigen-weighted Hausdorff distances for human face recognition,” Pattern Recognition, Volume: 36, Issue: 8, pp. 1827-1834, August, 2003
    [47] Z. Zhu, M. Tang, H. Lu, “A new robust circular Gabor based object matching by using weighted Hausdorff distance,” Pattern Recognition Letters, Volume: 25, Issue: 4, pp. 515-523, March, 2004
    [48] O.-K. Kwon, D.-G. Sim, R.-H. Park, “Robust Hausdorff distance matching algorithms using pyramidal structures,” Pattern Recognition, Volume: 34, Issue: 10, pp. 2005-2013, October, 2001
    [49] S. Srisuk, W. Kuratach, “New robust Hausdorff distance-based face detection,” Image Processing, 2001. Proceedings. 2001 International Conference on , Volume: 1, pp. 1022-1025 vol.1, 7-10 Oct. 2001
    [50] P. Gastaldo, R. Zunino, “Hausdorff distance for target detection,” IEEE International Symposium on Circuits and Systems, ISCAS 2002, Volume: 5, pp. V-661-664, 26-29 May 2002
    [51] X. Yi; O. I. Camps, “Robust occluding contour detection using the Hausdorff distance,” Computer Vision and Pattern Recognition, 1997. Proceedings., 1997 IEEE Computer Society Conference on , pp. 962–968, 17-19 June 1997
    [52] S. Shan, W. Gao, D. Zhao, “Illumination normalization for robust face recognition against varying lighting conditions,” Analysis and Modeling of Faces and Gestures, 2003. AMFG 2003. IEEE International Workshop on, pp. 157 – 164, 17 Oct. 2003
    [53] M. Kirby, L. Sirovich, “Application of the Karhunen-Loeve expansion for the characterization of human faces,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, Volume: 12, Issue: 1, pp. 103

    Full text not available: the full text has not been authorized for public release (on-campus or off-campus network).
