
Graduate Student: 徐瑞憫 (Hsu, Jui-Min)
Thesis Title: Automated Humidity Quality Assessment with AiRDUNet: All-In-One Residual Dense U-Net for Partial Wet Fingerprint Restoration and Recognition (Chinese title: 濕指紋品質評估與AiRDUNet:基於全方位殘差密集卷積U-Net神經網路的小面積濕指紋還原)
Advisor: 邱瀞德 (Chiu, Ching-Te)
Committee Members: 郭柏志 (Kuo, Po-Chih); 蘇豐文 (Soo, Von-Wun)
Degree: Master
Department: Department of Computer Science, College of Electrical Engineering and Computer Science
Year of Publication: 2024
Graduation Academic Year: 113
Language: English
Number of Pages: 57
Keywords: Deep learning, Partial fingerprints, Wet fingerprint quality assessment, Wet fingerprint restoration, Fingerprint recognition, All-in-one wet fingerprint restoration
    Fingerprint image quality assessment is often highly correlated with recognition performance, yet current research focuses mainly on assessing the quality of large-area fingerprint images. With the rise of side-mounted fingerprint applications, side-mounted fingerprint sensors are now commonly used for identity authentication. The fingerprints these sensors capture cover only a small area and are easily affected by sweat and grease, which degrade image quality. However, there is currently no quality assessment method aimed at “partial wet fingerprints,” which makes it difficult to analyze the degree of humidity degradation in such small-area fingerprints.
    Traditionally, fingerprint humidity levels could only be labeled manually, an approach that is time-consuming and lacks objective criteria. To address these problems, we propose an automated wet fingerprint quality assessment framework (WFQA). It introduces six indicators, including the Otsu black pixel ratio and the Laplacian power spectrum, to evaluate the ridge clarity of partial wet fingerprints. We also adopt a novel outlier voting mechanism to remove anomalies from the dataset, and we use a purpose-built automated labeling system with a KNN classifier [1] to categorize wet fingerprints into light, medium, and heavy humidity levels. In addition, we use CycleGAN [2] to generate fingerprints with different humidity levels as training data, addressing the shortage of data.
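To make two of the named indicators concrete, below is a minimal sketch of how the Otsu black pixel ratio and the Laplacian power spectrum could be computed for an 8-bit grayscale fingerprint patch. The function names, the spectrum normalization, and the patch size are illustrative assumptions, not the thesis implementation:

```python
import cv2
import numpy as np

def otsu_black_pixel_ratio(gray: np.ndarray) -> float:
    """Fraction of pixels below the Otsu threshold; wet regions smear
    ridges into dark blobs, so a higher ratio suggests heavier humidity."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return float(np.count_nonzero(binary == 0)) / binary.size

def laplacian_power_spectrum(gray: np.ndarray) -> float:
    """Mean spectral power of the Laplacian response; clear ridges carry
    strong high-frequency energy, which moisture suppresses."""
    lap = cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F)
    return float((np.abs(np.fft.fft2(lap)) ** 2).mean())

# Example on a synthetic patch; real input would come from the sensor.
patch = np.random.randint(0, 256, (176, 36), dtype=np.uint8)
print(otsu_black_pixel_ratio(patch), laplacian_power_spectrum(patch))
```

A feature vector built from these and the remaining four indicators would then feed the outlier voting and KNN labeling stages.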
    We further develop AiRDUNet, an all-in-one fingerprint restoration model that adapts to different fingerprint humidity levels, aiming to improve the quality of partial wet fingerprint images and lower the system's false rejection rate (FRR). Working with WFQA, the CHDE learns humidity degradation features, which are fed as conditioning information to the Residual DenseUNet, improving the model's generalization across humidity levels. By adjusting the residual weights, the model can effectively restore fingerprints at each humidity level. We also design a recognition loss function so that the model preserves recognition-relevant information during restoration, solving the over-restoration problem.
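As an illustration of the residual-weighting idea, the sketch below shows a residual block whose skip contribution is scaled by a factor derived from a humidity embedding. The block structure, the sigmoid-gated scale, and all dimensions are our assumptions, not the actual AiRDUNet architecture:

```python
import torch
import torch.nn as nn

class ScaledResidualBlock(nn.Module):
    """Residual block whose residual branch is scaled by a factor derived
    from a humidity embedding, so heavier degradations can receive
    stronger corrections (illustrative, not the thesis block)."""

    def __init__(self, channels: int, embed_dim: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Map the humidity degradation embedding to a scale in (0, 1).
        self.to_scale = nn.Sequential(nn.Linear(embed_dim, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor, humidity_embed: torch.Tensor) -> torch.Tensor:
        scale = self.to_scale(humidity_embed).view(-1, 1, 1, 1)
        return x + scale * self.body(x)

block = ScaledResidualBlock(channels=32, embed_dim=128)
x = torch.randn(4, 32, 44, 44)   # batch of feature maps
e = torch.randn(4, 128)          # per-sample humidity embeddings
y = block(x, e)                  # same shape as x
```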
    In summary, our automated wet fingerprint quality assessment framework WFQA surpasses other quality assessment methods in humidity prediction, reaching 95.37% accuracy. In addition, our all-in-one restoration model AiRDUNet effectively repairs fingerprints at different humidity levels while preserving recognition information. On a real-world test set, our method outperforms existing approaches in both restoration and recognition, lowering the FRR to 11.89%. Compared with other fingerprint restoration methods, AiRDUNet improves on DenseUNet [3] and Residual-M-Net [4] by 18.84% and 34.23%, respectively; compared with other studies focused on wet fingerprint restoration, it also achieves gains of 12.78% and 10.26% over PGT-Net [5] and FPN-ResUNet [6], respectively.


    Fingerprint image quality assessment is usually correlated with recognition performance, but current methods primarily focus on assessing plain and rolled fingerprints. With the rise of edge devices, side-mounted fingerprint sensors are commonly used for identity authentication. The fingerprints they capture cover only a small area and are often affected by sweat and grease, which degrades the quality of the captured image. However, no existing quality assessment method can evaluate these “partial wet fingerprints,” which makes it difficult to analyze the extent of humidity degradation present in such small-area fingerprint images.
    Manually labeling fingerprint humidity levels is time-consuming and lacks objectivity. To solve this, we propose an automated wet fingerprint quality assessment (WFQA) framework. The proposed approach introduces metrics such as the Otsu black pixel ratio and the Laplacian power spectrum to evaluate ridge clarity in partial wet fingerprints. We also implement a novel outlier voting mechanism to eliminate data anomalies, and we design an automated labeling system with a KNN classifier [1] to categorize fingerprints into light, medium, and heavy humidity levels. Additionally, CycleGAN [2] is used to generate synthetic fingerprints with various humidity levels, addressing data scarcity.
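A minimal sketch of how the outlier voting and KNN labeling stages might fit together, assuming one six-dimensional quality feature vector per fingerprint; the z-score majority-vote rule shown is our illustrative reading of “outlier voting,” not the thesis code:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def vote_outliers(features: np.ndarray, z_thresh: float = 3.0) -> np.ndarray:
    """Flag a sample when a majority of the six quality metrics place it
    more than z_thresh standard deviations from the dataset mean."""
    z = np.abs((features - features.mean(axis=0)) / features.std(axis=0))
    return (z > z_thresh).sum(axis=1) > features.shape[1] // 2

# Hypothetical data: N samples x 6 humidity-quality metrics, with
# provisional labels 0 = light, 1 = medium, 2 = heavy.
features = np.random.rand(1000, 6)
labels = np.random.randint(0, 3, size=1000)

keep = ~vote_outliers(features)               # drop voted outliers
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(features[keep], labels[keep])         # automated labeling model
humidity_levels = knn.predict(features[:10])  # classify new fingerprints
```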
    We further develop AiRDUNet, an all-in-one fingerprint restoration model designed to improve partial wet fingerprint quality and reduce the false rejection rate (FRR) across different humidity levels. In conjunction with WFQA, the Contrastive Humidity Degradation Encoder (CHDE) learns humidity degradation features, which are provided to the Residual DenseUNet as conditioning information, enhancing the model's generalization across humidity levels. Adjusting the residual scaling lets the model effectively restore fingerprints of different humidity levels, and the designed recognition loss function prevents over-restoration while preserving features important for recognition.
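To illustrate how a recognition loss can guard against over-restoration, the sketch below combines a pixel-level L1 term with a feature-similarity term computed by a frozen fingerprint-embedding network. The cosine form, the weighting, and the `recognizer` interface are placeholders, not the thesis loss:

```python
import torch
import torch.nn.functional as F

def restoration_recognition_loss(restored, target, recognizer, lambda_rec=0.1):
    """Pixel-level restoration loss plus a recognition-preserving term.

    `recognizer` is assumed to be a frozen fingerprint-embedding network;
    pulling the restored and target embeddings together keeps identity
    information while the L1 term sharpens ridge structure.
    """
    pixel_loss = F.l1_loss(restored, target)
    with torch.no_grad():
        target_feat = recognizer(target)      # reference embedding
    restored_feat = recognizer(restored)      # gradients flow to the restorer
    rec_loss = 1.0 - F.cosine_similarity(restored_feat, target_feat, dim=1).mean()
    return pixel_loss + lambda_rec * rec_loss
```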
    In summary, our WFQA framework achieves 95.37% accuracy in predicting fingerprint humidity levels, outperforming other fingerprint quality assessment methods. AiRDUNet effectively restores wet fingerprints while preserving recognition information. Our method outperforms existing approaches on real-world test sets, achieving an FRR of 11.89%. Compared to other fingerprint restoration methods, AiRDUNet shows improvements of 18.84% and 34.23% over DenseUNet [3] and Residual-M-Net [4], respectively. Compared to other wet fingerprint restoration studies, it surpasses PGT-Net [5] and FPN-ResUNet [6] by 12.78% and 10.26%, respectively.

    Contents:
    Abstract (Chinese)
    Abstract (English)
    1 Introduction
        1.1 Background
        1.2 Goal
        1.3 Contribution
    2 Related Works
        2.1 Fingerprint Quality Assessment
        2.2 Fingerprint Restoration
        2.3 All-In-One Image Restoration
    3 Wet Fingerprint Quality Automated Assessment
        3.1 Overall Pipeline
            3.1.1 Humidity Quality Metrics Extractor
            3.1.2 Outlier Voting
            3.1.3 KNN Classifier with Automated Labeling System
        3.2 Dataset Generation
    4 All-in-one Residual Dense-UNet
        4.1 Overall Pipeline
        4.2 Multi-task Learning Techniques
        4.3 Network Architecture
            4.3.1 Humidity Degradation Embedding Task
            4.3.2 Restoration Task
        4.4 Loss Function
            4.4.1 Contrastive Loss
            4.4.2 Restoration Loss
    5 Datasets
        5.1 Humidity Quality Assessment Test Set
            5.1.1 Fingerprint Identification Test Set
            5.1.2 Humidity Quality Assessment Test Set
        5.2 Synthetic Aligned Train Set
        5.3 Data Preprocessing
        5.4 Real Recognition Test Set
    6 Experimental Results
        6.1 Fingerprint Humidity Quality Assessment
            6.1.1 Identification Results for Different Humidity Levels
            6.1.2 Humidity Quality Level Classification
            6.1.3 Effect of Six Humidity Quality Indicators of WFQA
            6.1.4 Comparison Results
        6.2 Partial Wet Fingerprint Restoration and Recognition
            6.2.1 Implementation Details
            6.2.2 Ablation Studies
            6.2.3 Performance Comparison
    7 Conclusion
    References

    [1] G. Guo, H. Wang, D. Bell, Y. Bi, and K. Greer, “Knn model-based approach in classification,” in On The Move to Meaningful Internet Systems 2003: CoopIS, DOA, and ODBASE: OTM Confederated International Conferences, CoopIS, DOA, and ODBASE 2003, Catania, Sicily, Italy, November 3-7, 2003. Proceedings, pp. 986–996, Springer, 2003.
    [2] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proceedings of the IEEE international conference on computer vision, pp. 2223–2232, 2017.
    [3] P. Qian, A. Li, and M. Liu, “Latent fingerprint enhancement based on denseunet,” in 2019 international conference on biometrics (ICB), pp. 1–6, IEEE, 2019.
    [4] N. D. S. Cunha, H. M. Gomes, and L. V. Batista, “Residual m-net with frequency-domain loss function for latent fingerprint enhancement,” in 2022 35th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), vol. 1, pp. 198–203, IEEE, 2022.
    [5] Y.-T. Li, C.-T. Chiu, A.-T. Hsieh, M.-H. Hsu, L. Wenyong, and J.-M. Hsu, “Pgt-net: Progressive guided multi-task neural network for small-area wet fingerprint denoising and recognition,” arXiv preprint arXiv:2308.07024, 2023.
    [6] A.-T. Hsieh, C.-T. Chiu, T.-C. Chen, M.-H. Hsu, and L. Wenyong, “Feature points based residual unet with nonlinear decay rate for partial wet fingerprint restoration and recognition,” in 2024 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1–5, IEEE, 2024.
    [7] B. Li, X. Liu, P. Hu, Z. Wu, J. Lv, and X. Peng, “All-in-one image restoration for unknown corruption,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 17452–17462, 2022.
    [8] E. Tabassi and C. L. Wilson, “A novel approach to fingerprint image quality,” in IEEE International Conference on Image Processing 2005, vol. 2, pp. II–37, IEEE, 2005.
    [9] J. Yang, N. Xiong, and A. V. Vasilakos, “Two-stage enhancement scheme for low-quality fingerprint images by learning from the images,” IEEE transactions on human-machine systems, vol. 43, no. 2, pp. 235–248, 2012.
    [10] S. Yoon, K. Cao, E. Liu, and A. K. Jain, “Lfiq: Latent fingerprint image quality,” in 2013 IEEE Sixth International Conference on Biometrics: Theory, Applications and Systems (BTAS), pp. 1–8, IEEE, 2013.
    [11] A. Sankaran, M. Vatsa, and R. Singh, “Automated clarity and quality assessment for latent fingerprints,” in 2013 IEEE Sixth International Conference on Biometrics: Theory, Applications and Systems (BTAS), pp. 1–6, IEEE, 2013.
    [12] S. Yoon, E. Liu, and A. K. Jain, “On latent fingerprint image quality,” in Computational Forensics: 5th International Workshop, IWCF 2012, Tsukuba, Japan, November 11, 2012 and 6th International Workshop, IWCF 2014, Stockholm, Sweden, August 24, 2014, Revised Selected Papers, pp. 67–82, Springer, 2015.
    [13] J. Ezeobiejesi and B. Bhanu, “Latent fingerprint image quality assessment using deep learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 508–516, 2018.
    [14] P. Terhörst, A. Boller, N. Damer, F. Kirchbuchner, and A. Kuijper, “Midecon: Unsupervised and accurate fingerprint and minutia quality assessment based on minutia detection confidence,” in 2021 IEEE International Joint Conference on Biometrics (IJCB), pp. 1–8, IEEE, 2021.
    [15] T. Oblak, R. Haraksim, P. Peer, and L. Beslay, “Fingermark quality assessment framework with classic and deep learning ensemble models,” Knowledge-Based Systems, vol. 250, p. 109148, 2022.
    [16] A. Sherstinsky and R. W. Picard, “Restoration and enhancement of fingerprint images using m-lattice-a novel nonlinear dynamical system,” in Proceedings of the 12th IAPR International Conference on Pattern Recognition, Vol. 3-Conference C: Signal Processing (Cat. No. 94CH3440-5), vol. 2, pp. 195–200, IEEE, 1994.
    [17] Y. Tang, F. Gao, J. Feng, and Y. Liu, “Fingernet: An unified deep network for fingerprint minutiae extraction,” in 2017 IEEE International Joint Conference on Biometrics (IJCB), pp. 108–116, IEEE, 2017.
    [18] J. Li, J. Feng, and C.-C. J. Kuo, “Deep convolutional neural network for latent fingerprint enhancement,” Signal Processing: Image Communication, vol. 60, pp. 52–63, 2018.
    [19] S. Adiga V and J. Sivaswamy, “Fpd-m-net: Fingerprint image denoising and inpainting using m-net based convolutional neural networks,” in Inpainting and denoising challenges, pp. 51–61, Springer, 2019.
    [20] Z. Shen, Y. Xu, and G. Lu, “Cnn-based high-resolution fingerprint image enhancement for pore detection and matching,” in 2019 IEEE Symposium Series on Computational In- telligence (SSCI), pp. 426–432, IEEE, 2019.
    [21] W. J. Wong and S.-H. Lai, “Multi-task cnn for restoring corrupted fingerprint images,” Pattern Recognition, vol. 101, p. 107203, 2020.
    [22] Y. Zhu, X. Yin, and J. Hu, “Fingergan: a constrained fingerprint generation scheme for latent fingerprint enhancement,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 7, pp. 8358–8371, 2023.
    [23] M.-H. Hsu, Y.-C. Hsu, and C.-T. Chiu, “Tiny partial fingerprint sensor quality assessment,” IEEE Sensors Letters, 2024.
    [24] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700–4708, 2017.
    [25] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” Advances in neural information processing systems, vol. 27, 2014.
    [26] J. J. Engelsma, K. Cao, and A. K. Jain, “Learning a fixed-length fingerprint representation,” IEEE transactions on pattern analysis and machine intelligence, vol. 43, no. 6, pp. 1981–1997, 2019.
    [27] S. Tandon and A. Namboodiri, “Transformer based fingerprint feature extraction,” in 2022 26th International Conference on Pattern Recognition (ICPR), pp. 870–876, IEEE, 2022.
    [28] S. A. Grosz, J. J. Engelsma, R. Ranjan, N. Ramakrishnan, M. Aggarwal, G. G. Medioni, and A. K. Jain, “Minutiae-guided fingerprint embeddings via vision transformers,” arXiv preprint arXiv:2210.13994, 2022.
    [29] N. Sasuga, K. Ito, and T. Aoki, “Fingerprint feature extraction using cnn with multiple attention mechanisms,” in 2022 IEEE International Joint Conference on Biometrics (IJCB), pp. 1–8, IEEE, 2022.
    [30] A. Ranjan, N. Prakash, S. Peddi, and D. Samanta, “A novel framework for robust fingerprint representations using deep convolution network with attention mechanism,” in Proceedings of the Fourteenth Indian Conference on Computer Vision, Graphics and Image Processing, pp. 1–9, 2023.
    [31] Y. Su, T. Zhao, and Z. Zhang, “Mra-gnn: Minutiae relation-aware model over graph neural network for fingerprint embedding,” in 2023 IEEE International Joint Conference on Biometrics (IJCB), pp. 1–10, IEEE, 2023.
    [32] S. A. Grosz and A. K. Jain, “Afr-net: Attention-driven fingerprint recognition network,” IEEE Transactions on Biometrics, Behavior, and Identity Science, 2023.
    [33] S. A. Grosz and A. K. Jain, “Latent fingerprint recognition: Fusion of local and global embeddings,” IEEE Transactions on Information Forensics and Security, 2023.
    [34] R. Caruana, “Multitask learning,” Machine learning, vol. 28, pp. 41–75, 1997.
    [35] D. Park, B. H. Lee, and S. Y. Chun, “All-in-one image restoration for unknown degradations using adaptive discriminative filters for specific degradations,” in 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5815–5824, IEEE, 2023.
    [36] C. Zhang, Y. Zhu, Q. Yan, J. Sun, and Y. Zhang, “All-in-one multi-degradation image restoration network via hierarchical degradation representation,” in Proceedings of the 31st ACM International Conference on Multimedia, pp. 2285–2293, 2023.
    [37] V. Potlapalli, S. Zamir, S. Khan, and F. Khan, “Promptir: Prompting for all-in-one blind image restoration,” arXiv preprint arXiv:2306.13090, 2023.
    [38] T. Gao, Y. Wen, K. Zhang, J. Zhang, T. Chen, L. Liu, and W. Luo, “Frequency-oriented efficient transformer for all-in-one weather-degraded image restoration,” IEEE Transactions on Circuits and Systems for Video Technology, 2023.
    [39] W. Li, G. Zhou, S. Lin, and Y. Tang, “Pernet: Progressive and efficient all-in-one image-restoration lightweight network,” Electronics, vol. 13, no. 14, p. 2817, 2024.
    [40] L. Xie, X. Wang, C. Dong, Z. Qi, and Y. Shan, “Finding discriminative filters for specific degradations in blind super-resolution,” Advances in Neural Information Processing Systems, vol. 34, pp. 51–61, 2021.
    [41] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical image computing and computer-assisted intervention–MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18, pp. 234–241, Springer, 2015.
    [42] X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (G. Gordon, D. Dunson, and M. Dudík, eds.), vol. 15 of Proceedings of Machine Learning Research, (Fort Lauderdale, FL, USA), pp. 315–323, PMLR, 11–13 Apr 2011.
    [43] G. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Improving neural networks by preventing co-adaptation of feature detectors,” arXiv preprint arXiv:1207.0580, 2012.
    [44] J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7132–7141, 2018.
    [45] F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” arXiv preprint arXiv:1511.07122, 2015.
    [46] X. Zhu, H. Hu, S. Lin, and J. Dai, “Deformable convnets v2: More deformable, better results,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 9308–9316, 2019.
    [47] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu, “Residual dense network for image super-resolution,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2472–2481, 2018.
    [48] J. Gurrola-Ramos, O. Dalmau, and T. E. Alarcón, “A residual dense u-net neural network for image denoising,” IEEE Access, vol. 9, pp. 31742–31754, 2021.
    [49] D.-W. Jang and R.-H. Park, “Densenet with deep residual channel-attention blocks for single image super resolution,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pp. 0–0, 2019.
    [50] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.
    [51] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, 2004.
    [52] W.-F. Ou, L.-M. Po, C. Zhou, Y. A. U. Rehman, P.-F. Xian, and Y.-J. Zhang, “Fusion loss and inter-class data augmentation for deep finger vein feature learning,” Expert Systems with Applications, vol. 171, p. 114584, 2021.
    [53] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
    [54] S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan, and M.-H. Yang, “Restormer: Efficient transformer for high-resolution image restoration,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5728–5739, 2022.
