
Graduate Student: Liu, Yi-Peng (劉宜朋)
Thesis Title: Trajectory Prediction and Radar Simulation on the Highway With Deep Learning Methods
(在公路上基於深度學習的軌跡預測與雷達模擬)
Advisor: Liu, Jinn-Liang (劉晉良)
Committee Members: Chen, Jen-Hao (陳人豪); Chen, Ren-chun (陳仁純)
Degree: Master
Department: Institute of Computational and Modeling Science, College of Science
Year of Publication: 2021
Graduation Academic Year: 109 (2020-2021)
Language: Chinese
Number of Pages: 53
Keywords: Trajectory prediction, Radar simulation, Autonomous driving, Deep learning
  • In this thesis, we study self-driving cars, using comma.ai's open-source self-driving
    system openpilot as our research platform. We base our work on openpilot's model and
    data set: from its open-source GitHub repository we can build a model similar to
    openpilot's, and with this model we can construct our own self-driving model in the
    future and run it on this mature self-driving system. Notably, we consider only simple
    and monotonous highway driving without lane changes. The model is mainly used to
    predict future trajectories and radar information.
    In addition, we compare the original model (supercombo) with ours; the results show
    that our predictions are more accurate than supercombo's. We also propose using an
    Attention-based Convolutional Neural Network (ACNN) in the model, and our study shows
    that trajectory prediction with the ACNN architecture outperforms openpilot's recurrent
    neural network (GRU). Finally, we successfully run our neural network on the openpilot
    software and hardware platform.


    In this thesis, we study self-driving cars, using comma.ai's open-source self-driving
    system openpilot as our research platform. We base our work on openpilot's model and
    data set: from its open-source GitHub repository we build a model similar to openpilot's.
    With this model we can construct our own self-driving model for predicting the car's
    future trajectory and run it on openpilot installed in a comma.ai device called the
    comma two. Notably, we consider only simple and monotonous road conditions on the
    highway without lane changes. The model is mainly used to predict future trajectories
    and radar information.
    In addition, we compared the original model (supercombo) with our model, and the
    results show that our predictions are more accurate than supercombo's. We also propose
    using an attention-based convolutional neural network (ACNN) in the model. Our results
    show that the ACNN outperforms openpilot's recurrent neural network (GRU) in terms of
    trajectory prediction. Finally, we successfully execute our neural network on the
    openpilot software and hardware platform.
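
    To make the ACNN-versus-GRU comparison above concrete, the sketch below contrasts a
    spatial-attention pooling head with a recurrent head for trajectory regression in
    PyTorch. It is a minimal illustration, not the thesis architecture: the 1280-channel
    feature map, the 256-unit hidden layer, and the 192-point path output are assumptions
    made for the example.

    # Minimal sketch (assumed shapes, not the thesis code): attention pooling vs. a GRU head.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttentionConvHead(nn.Module):
        """Pool a CNN feature map with learned spatial attention, then regress a path."""
        def __init__(self, in_ch=1280, hidden=256, n_points=192):
            super().__init__()
            self.attn = nn.Conv2d(in_ch, 1, kernel_size=1)   # one attention logit per location
            self.fc = nn.Sequential(nn.Linear(in_ch, hidden), nn.ReLU(),
                                    nn.Linear(hidden, n_points))

        def forward(self, feat):                              # feat: (B, C, H, W)
            b, c, _, _ = feat.shape
            w = F.softmax(self.attn(feat).view(b, -1), dim=1)        # (B, H*W), sums to 1
            ctx = torch.bmm(w.unsqueeze(1),                          # (B, 1, H*W)
                            feat.view(b, c, -1).transpose(1, 2)).squeeze(1)  # (B, C)
            return self.fc(ctx)                               # (B, n_points)

    class GRUHead(nn.Module):
        """Recurrent alternative: aggregate a short sequence of per-frame feature vectors."""
        def __init__(self, in_dim=1280, hidden=256, n_points=192):
            super().__init__()
            self.gru = nn.GRU(in_dim, hidden, batch_first=True)
            self.fc = nn.Linear(hidden, n_points)

        def forward(self, seq):                               # seq: (B, T, in_dim)
            _, h = self.gru(seq)                              # h: (1, B, hidden)
            return self.fc(h[-1])                             # (B, n_points)

    # Example with random tensors standing in for real image features:
    path_a = AttentionConvHead()(torch.randn(2, 1280, 4, 8))  # (2, 192)
    path_g = GRUHead()(torch.randn(2, 5, 1280))               # (2, 192)

    Both heads map visual features to the same fixed-length path vector; the difference is
    whether context is aggregated by learned attention weights or by a recurrent state.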

    Table of Contents
    Abstract (Chinese)
    Abstract
    Acknowledgements
    Chapter 1  Introduction
      1.1 Research Motivation
      1.2 Problem Statement
      1.3 Contributions
      1.4 Thesis Organization
    Chapter 2  Literature Review
      2.1 End-to-End Motion Planning
      2.2 Trajectory Prediction
      2.3 Radar Applications in Self-Driving Cars
      2.4 ACNN
      2.5 The Openpilot System
    Chapter 3  Methods
      3.1 Model Overview
      3.2 MBConv Block
      3.3 EfficientNet Architecture
      3.4 GRU Block Architecture
      3.5 ACNN Block Architecture
      3.6 PoseNet Block Architecture
      3.7 Coordinate Transformation
      3.8 GPS Trajectories and Projection
      3.9 Loss Functions
        3.9.1 Trajectory Loss
        3.9.2 Valid-Distance Loss
        3.9.3 Trajectory Deviation Loss
        3.9.4 Radar Prediction Loss
        3.9.5 Control Model Loss
        3.9.6 Comma Two Setup
    Chapter 4  Experimental Results
      4.1 The Comma2k19 Dataset
      4.2 Turning Trajectories
      4.3 Training Setup
      4.4 Environment Setup
      4.5 Evaluation Metrics
      4.6 Training and Validation Results
        4.6.1 Control Model
        4.6.2 Turning Model
      4.7 Model Test Results
        4.7.1 Quantitative Results - OPDAVE
        4.7.2 Qualitative Results - OPDAVE
        4.7.3 Ablation Results - Control Model
        4.7.4 Quantitative Results - Control Model
        4.7.5 Qualitative Results - Control Model
        4.7.6 Valid-Distance Evaluation Results
        4.7.7 Radar Simulation Evaluation Results
        4.7.8 Control Model Visualization Results
    Chapter 5  Model Deployment
      5.1 Model Conversion
      5.2 Comma Two Deployment Results
    Chapter 6  Conclusion
    References
    Appendix
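
    The abstract and Section 5.1 (Model Conversion) describe converting the trained network
    so that it can run on the comma two inside openpilot. As a hedged illustration only, the
    snippet below exports a small stand-in PyTorch model to ONNX, a common interchange
    format for on-device deployment; the TinyPathNet class, the input shape, and the file
    name are invented for this example, and the actual conversion chain used in the thesis
    may differ.

    # Illustrative export of a stand-in model to ONNX (assumed shapes and names).
    import torch
    import torch.nn as nn

    class TinyPathNet(nn.Module):
        """Stand-in model: a tiny CNN mapping stacked camera frames to path points."""
        def __init__(self, n_points=192):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(12, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(16, n_points))

        def forward(self, x):
            return self.net(x)

    model = TinyPathNet().eval()
    dummy = torch.randn(1, 12, 128, 256)                 # assumed input: stacked camera frames
    torch.onnx.export(model, dummy, "driving_model.onnx",
                      input_names=["input_imgs"], output_names=["path"],
                      opset_version=11)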

