
Graduate Student: Chien, Yu-Cheng (簡瑜成)
Thesis Title: Deep Learning Based Route Information Extraction from Satellite Imagery for Agricultural Machinery Management (基於深度學習之衛星圖路徑資訊萃取技術與農機管理應用)
Advisor: Huang, Nen-Fu (黃能富)
Committee Members: Chang, Yao-Chung (張耀中); Chen, Jen-Yeu (陳震宇); Chen, Jiann-Liang (陳俊良)
Degree: Master
Department: Institute of Communications Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2021
Graduation Academic Year: 109 (ROC calendar, 2020-2021)
Language: English
Number of Pages: 70
Keywords: Agricultural Machinery, Satellite Imagery Analysis, Image Segmentation, Computer Vision (CV) Processing, Management System
In recent years, agricultural development has increasingly pursued efficiency, automation, and precision, and many farms have therefore adopted large numbers of agricultural machines for large-scale cultivation. Since machinery involves greater cost in use and management than manual labor, an effective and convenient management system is essential for users: it not only lowers the cost of management but also further raises production capacity.

In this thesis, we propose a comprehensive agricultural machinery management application covering sensors, an API server, and a web interface, which respectively collect data, access the database, and let users view sensor data and define custom farm layouts. We also introduce a new service, the Extraction Server, which combines a purpose-designed image segmentation model with a series of computer vision (CV) post-processing steps. It automatically converts a satellite image into the farm's layout data and returns the result to the user's web page, enabling efficient farm planning.
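The abstract only names the components; as a rough illustration of how such an extraction endpoint could be wired together, the sketch below assumes a Flask server and OpenCV-based post-processing (both tools are cited in the thesis). The route name, the run_segmentation_model stub, the class encoding, and the JSON schema are hypothetical, not the author's actual implementation.

```python
import cv2
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_segmentation_model(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the trained segmentation model.

    Should return a per-pixel class mask with the same height and width
    as the input, e.g. 0 = background, 1 = district, 2 = route.
    """
    raise NotImplementedError("load and run the trained model here")

def extract_district_polygons(mask: np.ndarray) -> list:
    """Repair the raw 'district' pixels and simplify them into polygons."""
    district = (mask == 1).astype(np.uint8) * 255
    # Morphological closing repairs small holes left in the model output.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    district = cv2.morphologyEx(district, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(district, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    polygons = []
    for contour in contours:
        # Douglas-Peucker simplification keeps only a few vertices per field.
        epsilon = 0.01 * cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, epsilon, True)
        polygons.append(approx.reshape(-1, 2).tolist())
    return polygons

@app.route("/extract", methods=["POST"])
def extract():
    # Decode the uploaded satellite image from the multipart request body.
    raw = np.frombuffer(request.files["image"].read(), dtype=np.uint8)
    image = cv2.imdecode(raw, cv2.IMREAD_COLOR)
    mask = run_segmentation_model(image)
    return jsonify({"districts": extract_district_polygons(mask)})
```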

In our experiments, the proposed segmentation model reaches a pixel accuracy of 0.84 on the dataset we collected ourselves, showing that it correctly learns the features in the images and converts them into the required information. The series of CV post-processing steps repairs and simplifies the model output in about three seconds, which we consider an acceptable and worthwhile processing time. Finally, we designed experiments with three different configurations to validate the management functions of the web interface: it provides information corresponding to the machines' different operating conditions and, combined with the predefined farm layout, derives further operational statistics, giving users a clear picture of their machines' actual status.
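For reference, pixel accuracy is the standard metric implied here: the fraction of pixels whose predicted class matches the ground-truth label. A minimal NumPy sketch (the array contents are illustrative only, not the thesis's data):

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, target: np.ndarray) -> float:
    """Fraction of pixels whose predicted class matches the ground truth."""
    assert pred.shape == target.shape
    return float((pred == target).mean())

# Toy example: 84 of 100 pixels agree, so the score is 0.84.
pred = np.zeros((10, 10), dtype=np.int64)
target = np.zeros((10, 10), dtype=np.int64)
target.flat[:16] = 1                 # 16 pixels disagree with the prediction
print(pixel_accuracy(pred, target))  # 0.84
```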



Table of Contents:
Abstract
摘要 (Chinese Abstract)
1 Introduction
2 Related Work
  2.1 Deep Learning
    2.1.1 Overview
    2.1.2 Deep Learning in Image Analysis
  2.2 Semantic Segmentation
    2.2.1 Convolutional Neural Network
    2.2.2 Fully Convolutional Network
    2.2.3 U-Net
    2.2.4 Semantic Segmentation for Satellite Imagery
  2.3 Agricultural Machinery
    2.3.1 Issues
    2.3.2 Applications and Possible Solutions
3 System Design
  3.1 System Architecture Overview
  3.2 Extraction Server
    3.2.1 Segmentation Model
    3.2.2 CV Post-processing
  3.3 Management System
    3.3.1 Sensors
    3.3.2 API Server
    3.3.3 Web Server
4 System Implementation
  4.1 Segmentation Model
    4.1.1 Data Preparation
      4.1.1.1 Data Collecting
      4.1.1.2 Data Labeling
      4.1.1.3 Data Expansion by District Separation
      4.1.1.4 Data Augmentation
      4.1.1.5 Data Preprocessing
    4.1.2 Model Architecture
      4.1.2.1 Encoder and Decoder
      4.1.2.2 Dilated Convolution Block
    4.1.3 Loss Function
  4.2 Extraction Server
    4.2.1 Request Handling and Model Inputting
    4.2.2 Post-processing
      4.2.2.1 Districts
      4.2.2.2 Routes
    4.2.3 Transformation
  4.3 Management System
    4.3.1 Customization of Agricultural Machinery and Farm
    4.3.2 Data Visualization
    4.3.3 Statistics Extended from Customization and GPS Data
5 Experiment
  5.1 Segmentation Model
    5.1.1 Training Configuration
    5.1.2 Segmentation Model Result
  5.2 Extraction Server
  5.3 Management System
    5.3.1 Hardware Specification and Preparation
    5.3.2 Management System Results
      5.3.2.1 Exp. A
      5.3.2.2 Exp. B
      5.3.2.3 Exp. C
6 Conclusion and Future Work
  6.1 Conclusion
  6.2 Future Work
References

