| Field | Value |
|---|---|
| Graduate student | Lin, Xuan-Yi (林軒毅) |
| Thesis title | 3D-Adv: Black-Box Physical Adversarial Attacks against Deep Learning Models through 3D Sensors |
| Advisor | Ho, Tsung-Yi (何宗易) |
| Committee members | Li, Shu-Min (李淑敏); Chen, Hung-Ming (陳宏明) |
| Degree | Master |
| Department | College of Electrical Engineering and Computer Science, Department of Computer Science |
| Year of publication | 2021 |
| Academic year of graduation | 109 |
| Language | English |
| Pages | 39 |
| Keywords | Adversarial Attack, Deep Learning, Neural Network |
The combination of deep learning techniques and commercial 3D sensors reveals a bright future, as they provide a low-cost and convenient way to collect and analyze depth information from the environment for applications ranging from industrial modeling to mobile face recognition. Despite the abundant research devoted to developing more accurate, flexible, and efficient machine learning schemes as well as 3D sensors, security concerns related to these techniques remain largely unexplored.
In this thesis, we propose a novel adversarial attack against this combination, showing that deep learning models paired with popular 3D sensors may misclassify real objects in the physical environment. Compared to existing attack algorithms against deep learning models for 3D data analysis, which consider only digital point cloud data and a single deep learning model, our attack targets popular commercial 3D sensors combined with various deep learning schemes in the black-box setting. The experimental results demonstrate that our 3D-printed adversarial objects remain effective after being scanned by the 3D sensor.
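The black-box setting described above means the attacker can only query the model's output scores, without access to gradients. As a minimal illustration of that setting — not the thesis's actual method — the sketch below perturbs a point cloud by gradient-free random search under an L-infinity bound. All names here (`blackbox_attack`, the `model` callable, the toy classifier) are hypothetical and chosen for the example.

```python
import numpy as np

def blackbox_attack(model, points, target, eps=0.05, iters=200, seed=0):
    """Gradient-free random-search attack on a point-cloud classifier.

    model:  black-box function mapping an (N, 3) array to class scores
    points: clean point cloud, shape (N, 3)
    target: index of the class the attacker wants the model to favor
    eps:    per-coordinate perturbation bound (L-infinity)
    """
    rng = np.random.default_rng(seed)
    best = np.zeros_like(points)           # current perturbation, starts at zero
    best_score = model(points)[target]     # target-class score on the clean input
    for _ in range(iters):
        # Propose a small random change, then project back into the eps-ball.
        cand = np.clip(best + rng.normal(0.0, eps / 10, points.shape), -eps, eps)
        score = model(points + cand)[target]
        if score > best_score:             # keep only proposals that raise the target score
            best, best_score = cand, score
    return points + best
```

Because the loop only ever queries `model` on candidate inputs, this sketch works against any classifier exposed as a score oracle; more sample-efficient gradient-free optimizers (e.g., genetic algorithms or zeroth-order estimation) follow the same query-only pattern.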