
Graduate Student: Chen, Guan-Yu (陳冠宇)
Thesis Title: Model Assist Hotspot Detection via Object Detection Approach with Lithography Simulator (基於物件偵測方法與光學模擬輔助之熱點偵測)
Advisors: Lin, Chia-Wen (林嘉文); Shao, Hao-Chiang (邵皓強)
Committee Members: Fang, Shao-Yun (方劭云); Chen, Yu-Guang (陳聿廣)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electrical Engineering
Year of Publication: 2023
Academic Year of Graduation: 111
Language: English
Number of Pages: 29
Keywords (Chinese): hotspot detection, object detection, multi-model fusion, lithography simulation, cross-attention
Keywords (English): Hotspot detection, Object detection, Multi-model fusion, Lithography simulator, Cross-attention
  • Under today's integrated-circuit technology, process nodes keep shrinking and layout density keeps increasing, so hotspot detection must run ever faster to cope with the rapidly growing number of layout structures. In practice, hotspots are usually found by actually running the fabrication process and then imaging the result with an electron microscope; as technology advances, accomplishing this task with manual effort becomes increasingly infeasible. This thesis therefore addresses the problem with deep learning. Our model consists of two main parts: we first approach the problem as object detection, using a one-stage object detector so that the detector maintains good accuracy while running faster, and we complement it with a layout-deformation prediction model that tells us in which direction the layout is expected to deform, further strengthening the object features of our model. To better fuse the features of the two parts, we use a cross-domain attention method. Experiments show that our method performs well on both simulated and real data, and follow-up studies confirm the effectiveness of the fusion module we designed.
    Keywords: hotspot detection, object detection, multi-model fusion, lithography simulation, cross-attention


    Recent advances in VLSI fabrication technology have resulted in die shrink and layout density enlargement, leading to an urgent need for advanced hotspot detection methods. These advances require hotspot detection to run at higher speed to cope with the growing number of layout structures. Recently, machine learning methods have shown strong performance on this problem. However, existing hotspot detectors share a weakness: they consider only layout information, while the real fabrication process introduces additional uncertainty that affects hotspots. Real-world information therefore also needs to be taken into account.
    This thesis solves the problem through deep learning. Our model is divided into two main parts. First, we approach the problem with object detection: the RetinaNet one-stage detection model is shown to let the detector maintain good accuracy while accelerating inference. It is then supported by a layout-deformation prediction model, LithoNet, which predicts how the layout will deform so that the object-detection features of our model are further strengthened. At the same time, to better integrate the features of the two branches, we fuse them with a cross-domain attention method. Experiments show that our method performs well on both simulated data and real-world data, and an ablation study confirms the effectiveness of the fusion module we designed.
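The cross-domain attention fusion described above can be illustrated as scaled dot-product attention in which the detector's layout features act as queries over the litho-simulator's features. The sketch below is a minimal NumPy illustration under assumed names and shapes (`cross_attention`, flattened feature vectors, residual fusion); the thesis's actual module may differ in its projections and architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(layout_feat, litho_feat):
    """Fuse detector features (queries) with litho-simulator
    features (keys/values) via scaled dot-product attention.

    layout_feat: (N, d) flattened detector features
    litho_feat:  (M, d) flattened simulator features
    returns:     (N, d) fused features
    """
    d_k = layout_feat.shape[-1]
    scores = layout_feat @ litho_feat.T / np.sqrt(d_k)  # (N, M) similarities
    weights = softmax(scores, axis=-1)                  # rows sum to 1
    attended = weights @ litho_feat                     # (N, d) simulator context
    return layout_feat + attended                       # residual fusion (assumed)

rng = np.random.default_rng(0)
fused = cross_attention(rng.normal(size=(16, 64)), rng.normal(size=(25, 64)))
```

In a real detector this fusion would be applied per pyramid level, with learned linear projections producing the queries, keys, and values before the dot product.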
    Index Terms — Hotspot detection, Object detection, Multi-model fusion, Lithography simulator, Cross-attention

    Contents
    摘要 (Chinese Abstract)  i
    Abstract  ii
    1 Introduction  1
    2 Related Work  4
      2.1 Machine learning-based hotspot detection  4
      2.2 Object detection network  5
      2.3 Lithography Layout Simulator Network  6
    3 Proposed Method  8
      3.1 Overview  8
      3.2 Main Architecture  9
        3.2.1 Lithonet  9
        3.2.2 Customized Retinanet  10
      3.3 Cross-domain Attention Module  11
      3.4 Detector Subnets  14
        3.4.1 Classification subnet  14
        3.4.2 Box regression subnet  14
      3.5 Loss function  15
    4 Experiments Result  16
      4.1 Datasets  16
      4.2 Initialization  18
      4.3 Experiments  19
        4.3.1 ICCAD16 dataset  20
        4.3.2 Ablation study on ICCAD16  24
        4.3.3 UMC dataset  25
    5 Conclusion  27
    References  28

