
Graduate Student: Cheng, Chih-Ya (鄭至雅)
Thesis Title: LiDAR Point Cloud Disseminator via Opportunistic Broadcast Among Intelligent Vehicles (利用機會式廣播於智慧汽車間交換光達點雲)
Advisor: Hsu, Cheng-Hsin (徐正炘)
Committee Members: Lee, Che-Rung (李哲榮); Huang, Chun-Ying (黃俊穎)
Degree: Master
Department: College of Electrical Engineering and Computer Science, Department of Computer Science
Year of Publication: 2024
Graduation Academic Year: 112
Language: Chinese
Number of Pages: 45
Chinese Keywords: LiDAR point cloud (光達點雲), Internet of Vehicles (車聯網), opportunistic broadcast (機會式廣播)
Foreign Keywords: LiDAR point cloud, V2V, Opportunistic Broadcast


The perception of a dynamically changing environment is important for vehicles. Vehicles are usually equipped with sensors, such as LiDARs. The limited sensing range of a single vehicle, together with obstacles that block its line of sight, causes blind spots. Cooperative perception, in which multiple participating vehicles merge their sensed data, is a suitable solution. Much of the literature has studied cooperative perception, but the selection of data from huge point clouds has rarely been considered. We consider a V2V LiDAR point-cloud opportunistic broadcasting system in which each vehicle broadcasts its point cloud data to help other vehicles extend their perception and reduce their visual blind spots. In this thesis, we propose the Furthest Novel Point First (FNPF) algorithm to jointly decide the selection of point cloud sectors and the data rate of the transmission. The experiments were conducted in a comprehensive co-simulator combining NS-3 and CARLA. The results are as follows: the latency of FNPF is 13% lower than the baselines, and its packet loss rate is about 27% lower; its throughput is about 30% higher on average; its actual objective function value is 50% higher on average; its number of received points is 15% higher on average; and its coverage is 4% higher on average.
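The joint decision described above (which point cloud sectors to broadcast, under a limited transmission budget) can be illustrated with a small sketch. The thesis does not publish the FNPF code here; the `Sector` structure, its field names, and the byte budget below are hypothetical, and the greedy policy shown (prefer the sector whose furthest point unseen by neighbors is most distant, subject to a per-broadcast budget) is only one plausible reading of "furthest novel point first", not the actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Sector:
    sector_id: int
    furthest_novel_m: float  # distance (m) of the furthest point not yet seen by neighbors
    size_bytes: int          # encoded size of this sector's points

def select_sectors(sectors, budget_bytes):
    """Greedily pick sectors whose furthest novel point is most distant,
    skipping any sector that no longer fits in the remaining budget."""
    chosen = []
    for s in sorted(sectors, key=lambda s: s.furthest_novel_m, reverse=True):
        if s.size_bytes <= budget_bytes:
            chosen.append(s.sector_id)
            budget_bytes -= s.size_bytes
    return chosen

sectors = [Sector(0, 12.0, 4000), Sector(1, 55.0, 9000), Sector(2, 30.0, 6000)]
print(select_sectors(sectors, budget_bytes=16000))  # -> [1, 2]
```

Prioritizing distant novel points favors data that the receiving vehicle is least likely to have sensed itself, which matches the stated goal of extending perception and shrinking blind spots; the actual algorithm additionally adapts the transmission data rate, which this sketch omits.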

Abstract (Chinese)
Abstract
1 Introduction
2 Background
  2.1 Opportunistic Broadcast
  2.2 V2X Communication Networks
  2.3 Awareness Message
  2.4 Cooperative Perception
3 Related Work
  3.1 Prioritized Transmission by Spatial Importance
  3.2 Raw Point Cloud Transmission
  3.3 Feature Transmission
4 Opportunistic Broadcast of LiDAR Point Clouds
  4.1 System Overview
  4.2 Network Protocol
  4.3 Problem
  4.4 Algorithm
5 Evaluations
  5.1 Implementations
  5.2 Setup
  5.3 Results
    5.3.1 A Sample Vehicle
    5.3.2 A Sample Run
    5.3.3 All Six Runs
    5.3.4 Per-Vehicle Performance
    5.3.5 Running Time
    5.3.6 Extended Experiment with 5 Vehicles
6 Conclusion
Bibliography

