
Student: Shao, Bo-Siang (邵柏翔)
Thesis Title: Process-agnostic IC Defect Detection via Foreground-focused Anomaly Detection (聚焦於前景異常偵測之晶圓影像瑕疵偵測方法)
Advisors: Lin, Chia-Wen (林嘉文); Shao, Hao-Chiang (邵皓強)
Oral Defense Committee: Fang, Shao-Yun (方劭云); Chen, Yu-Guang (陳聿廣)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electrical Engineering
Year of Publication: 2024
Graduation Academic Year: 112 (ROC calendar)
Language: English
Number of Pages: 28
Chinese Keywords: 異常偵測, 瑕疵偵測, 聚焦前景, 背景抑制
English Keywords: anomaly detection, defect detection, foreground focusing, background suppression
Abstract (translated from the Chinese 摘要):

In modern manufacturing, anomaly detection is usually performed at the final step of production to identify product defects. The semiconductor industry in particular, owing to the high unit value of its products, has a strong need to improve yield. Production steps must be improved according to the type of defect found; in the original workflow, this required a large number of specialists to process images captured by AOI machines, imposing a heavy labor burden. Previous methods could not cope with the combination of small defects in IC images and large background variation across captured images. This thesis therefore applies deep learning techniques to reduce this labor burden. Our method consists of two stages: first, anomaly detection is used to learn the normal appearance of IC images; second, foreground defects are highlighted based on the normal appearance learned in the first stage, so that our model focuses on foreground defect features without being affected by the background. Experiments show that our method performs well on a real-world IC defect dataset, demonstrating its effectiveness.


In modern manufacturing industries, anomaly detection is typically carried out in the final steps of production to identify product defects. The semiconductor industry, in particular, exhibits a strong demand for improved yield due to the high value of individual units. Enhancing yield requires adjusting the corresponding production steps according to the types of defects identified. The traditional approach requires a considerable number of skilled professionals to process images captured by AOI machines, imposing a substantial human resource burden. Previous methods struggled with challenges such as the small size of defects in IC images and the significant background differences across captured images. This thesis therefore employs deep learning techniques to alleviate this human resource burden. Our method consists of two stages: first, a conventional unsupervised anomaly detection model learns the appearance of normal, defect-free IC images; second, foreground defects are highlighted based on the normal appearance learned in the first stage. This allows our model to focus on foreground defect features without being influenced by background variations. Experimental results on real-world IC defect datasets demonstrate the effectiveness of our method and showcase its promising performance.

Table of Contents:
摘要 (Chinese Abstract)
Abstract
1 Introduction
2 Related Work
  2.1 Anomaly Detection
  2.2 Background Matting
3 Proposed Method
  3.1 Overview
  3.2 Stage 1
    3.2.1 Encoder and Decoder
    3.2.2 Memory
    3.2.3 Loss
  3.3 Stage 2
4 Experiments
  4.1 Datasets
  4.2 Implementation Details
  4.3 Experimental Results
    4.3.1 Results on the UMC dataset
    4.3.2 Ablation Study
5 Conclusion
References

