Graduate Student: 許尊霖 Xu, Zun-Lin
Thesis Title: 多標籤乾淨圖片後門防禦 Defending Multi-label Clean image Backdoor
Advisor: 吳尚鴻 Wu, Shan-Hung
Committee Members: 邱維辰 Chiu, Wei-Chen; 沈之涯 Shen, Chih-Ya; 劉奕汶 Liu, Yi-Wen
Degree: 碩士 Master
Department: 電機資訊學院 College of Electrical Engineering and Computer Science - 資訊工程學系 Department of Computer Science
Publication Year: 2023
Graduating Academic Year: 112
Language: English
Pages: 20
Chinese Keywords: 後門 (backdoor), 防禦 (defense)
Keywords: Backdoor, Defense
Multi-label learning models have recently become highly powerful, excelling in domains such as image annotation, object detection, and text categorization by leveraging correlations among labels for more efficient learning.
However, the clean-image backdoor attack manipulates these label correlations while leaving the images themselves unchanged, which poses a challenge to existing defense methods. This thesis proposes novel defense methods that treat the clean-image backdoor as a form of label noise. Building on findings from noisy-label learning, it introduces defenses that reject the highest-loss samples during training and ignore labels whose losses differ significantly from those of other data points in the training set.
Overall, this thesis contributes by recognizing the clean-image backdoor as a form of label noise and designing effective defense methods that mitigate the impact of the clean-image backdoor attack.
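The loss-rejection idea summarized in the abstract can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch example assuming a standard multi-label setup with per-label binary cross-entropy; the function name `rejection_loss` and the `reject_ratio` hyperparameter are illustrative assumptions, not the thesis's actual implementation.

```python
import torch
import torch.nn.functional as F

def rejection_loss(logits: torch.Tensor, targets: torch.Tensor,
                   reject_ratio: float = 0.02) -> torch.Tensor:
    """Multi-label BCE that ignores the largest per-label losses.

    Hypothetical sketch of the loss-rejection defense described in the
    abstract: label entries whose losses are atypically large (and thus
    possibly poisoned) are masked out before averaging. `reject_ratio`
    is an assumed hyperparameter, not a value from the thesis.
    """
    # Per-label binary cross-entropy, shape (batch_size, num_labels).
    per_label = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")
    # Number of label entries to reject in this batch.
    k = max(1, int(per_label.numel() * reject_ratio))
    # Smallest of the k largest losses serves as the rejection threshold.
    threshold = per_label.flatten().topk(k).values.min()
    # Keep only label entries whose loss falls below the threshold.
    mask = (per_label < threshold).float()
    return (per_label * mask).sum() / mask.sum().clamp(min=1.0)

# Usage in a training step (model, images, labels assumed to exist):
#   logits = model(images)
#   loss = rejection_loss(logits, labels.float())
#   loss.backward()
```

The intuition, consistent with the noisy-label learning findings the abstract builds on, is that poisoned label combinations disagree with the natural label correlations the model learns from clean data, so they tend to incur unusually large losses and can be screened out during training.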