Graduate Student: 林郁庭 Lin, Yu-Ting
Thesis Title: 基於YOLO之雷達目標偵測演算法 / YOLO-CFAR: a Novel CFAR Target Detection Method Based on YOLO
Advisor: 鍾偉和 Chung, Wei-Ho
Committee Members: 張佑榕 Chang, Ronald Y.; 吳仁銘 Wu, Jen-Ming; 翁詠祿 Ueng, Yeong-Luh
Degree: Master
Department: College of Electrical Engineering and Computer Science - Institute of Communications Engineering
Publication Year: 2021
Academic Year: 109 (ROC calendar)
Language: Chinese
Pages: 41
Keywords: Constant False Alarm Rate, Target Detection, Deep Learning, Object Detection, YOLO, Dynamic Range Compression
Constant False Alarm Rate (CFAR) detection is a common target detection algorithm in radar systems. However, the detection performance of conventional CFAR detectors degrades markedly in nonhomogeneous scenarios, such as multiple-target scenarios and clutter scenarios. Although the deep-learning-based CFAR detector (DL-CFAR) improves detection performance in multiple-target scenarios, it still cannot overcome the poor performance in clutter scenarios. The performance loss of both conventional CFAR and DL-CFAR usually stems from inaccurate estimation of the noise level. To improve CFAR detection performance, this study proposes a new idea: treat the range-Doppler map (RD map) as an image and detect targets with a deep-learning object detection model. Removing the noise-estimation step reduces the possibility of error propagation and improves detector performance. Because the object detection model applied in this study is YOLO (You Only Look Once), the algorithm is named YOLO-CFAR.
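For context, the conventional baseline being replaced is a cell-averaging CFAR sweep over the RD map. The sketch below is a minimal illustrative implementation (window sizes and the design false-alarm rate are hypothetical parameters, not the thesis's settings); it shows exactly the noise-level estimation step that breaks down when training cells contain interfering targets or clutter.

```python
import numpy as np

def ca_cfar(rd_map, guard=2, train=8, pfa=1e-3):
    """Cell-averaging CFAR over a 2-D range-Doppler map (illustrative sketch).

    For each cell under test (CUT), the noise level is estimated by averaging
    the training cells surrounding a guard band; the detection threshold
    scales that estimate by a factor set from the desired false-alarm rate.
    """
    n_train = (2 * (guard + train) + 1) ** 2 - (2 * guard + 1) ** 2
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)  # threshold scaling factor
    detections = np.zeros_like(rd_map, dtype=bool)
    r = guard + train
    rows, cols = rd_map.shape
    for i in range(r, rows - r):
        for j in range(r, cols - r):
            window = rd_map[i - r:i + r + 1, j - r:j + r + 1].copy()
            # Exclude the CUT and guard cells from the noise estimate.
            window[train:train + 2 * guard + 1, train:train + 2 * guard + 1] = 0
            noise = window.sum() / n_train  # noise-level estimate
            detections[i, j] = rd_map[i, j] > alpha * noise
    return detections
```

A strong target inside another cell's training window inflates that cell's noise estimate and masks weaker targets; this is the error-propagation path that YOLO-CFAR avoids by skipping noise estimation entirely.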
This study proposes a target detection algorithm based on a deep-learning object detection model. Besides introducing the YOLO model, we apply dynamic range compression (DRC) to pre-process the data, and we further add a deep neural network (DNN) to improve the detection performance of YOLO-CFAR in multiple-target scenarios. Simulation results show that the proposed method not only performs well in homogeneous scenarios, but also clearly outperforms existing DL-based and conventional algorithms in nonhomogeneous scenarios, while achieving real-time detection speed.
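The DRC pre-processing step can be pictured with a simple static compression curve borrowed from audio processing. The sketch below is an assumption-laden illustration (the threshold and ratio values are invented for the example and are not the thesis's actual DRC settings): it squeezes the huge dynamic range between strong target peaks and the noise floor into a narrower, normalized span that an image-based detector can digest.

```python
import numpy as np

def dynamic_range_compress(rd_map, threshold_db=-40.0, ratio=4.0):
    """Static dynamic-range compression of an RD map (hypothetical parameters).

    Power values above the threshold (in dB) are attenuated by the given
    ratio, then the result is normalized to [0, 1] for the network input.
    """
    power_db = 10.0 * np.log10(np.maximum(rd_map, 1e-12))
    over = power_db - threshold_db
    # Compress only the portion above the threshold; leave the rest linear.
    compressed_db = np.where(over > 0, threshold_db + over / ratio, power_db)
    lo, hi = compressed_db.min(), compressed_db.max()
    return (compressed_db - lo) / (hi - lo + 1e-12)
```

The compression is monotonic, so the relative ordering of cell powers (and hence target locations) is preserved while the input range seen by the detector becomes far more uniform.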
Constant False Alarm Rate (CFAR) detection is a common target detection algorithm in radar systems. However, in nonhomogeneous scenarios, such as multiple-target and clutter scenarios, CFAR detection performance degrades dramatically because of erroneous noise-level estimation. To improve CFAR target detection performance in nonhomogeneous scenarios, we propose a novel CFAR target detection method based on a deep learning model, You Only Look Once (YOLO), called YOLO-CFAR. The proposed CFAR scheme does not need to estimate the noise level; instead, it uses a deep-learning object detection model to detect targets directly in the RD map. This reduces the possibility of error propagation caused by inaccurate noise-level estimation and thus yields better detection performance.
In this paper, we not only introduce YOLO into CFAR target detection, but also use dynamic range compression (DRC) to pre-process the input data and add a deep neural network (DNN) to further improve the performance of YOLO-CFAR. Simulation results demonstrate that YOLO-CFAR outperforms other CFAR schemes, especially in nonhomogeneous scenarios; furthermore, YOLO-CFAR achieves real-time detection.