| Field | Value |
|---|---|
| Student | Lin, Hong-Wei (林弘偉) |
| Thesis Title | 應用深度學習及影像拼接於溫室蝴蝶蘭苗株之盤點系統 (Inventory System for Orchid Seedlings in Greenhouse Using Deep Learning and Image Stitching) |
| Advisor | Chen, Rong-Shun (陳榮順) |
| Committee Members | Bai, Ming-Sian (白明憲); Chen, Tsung-Lin (陳宗麟) |
| Degree | Master |
| Department | College of Engineering, Department of Power Mechanical Engineering |
| Year of Publication | 2023 |
| Academic Year | 111 |
| Language | Chinese |
| Pages | 81 |
| Keywords | Automatic Inventory of Orchid Seedlings, Image Stitching, Object Detection, UAV |
Phalaenopsis orchids are a high-value export crop in Taiwan. To meet management and scheduling requirements, growers must conduct a comprehensive monthly inventory of Phalaenopsis seedlings throughout the greenhouse. The seedlings are placed on flat plant beds in the greenhouse, in different sizes and cultivars with corresponding bed numbers; the inventory demands substantial labor and time, and the counts are recorded manually on paper inventory sheets so that the grower can perform a second count and reconcile it against the initial tally. This study therefore develops an inventory system for Phalaenopsis seedlings, applied to seedlings in an actual greenhouse site provided by a grower. The system uses a drone flying over the plant beds inside the greenhouse to capture images in sequence; an optimized image-stitching algorithm then effectively merges them into a single panoramic image of each bed. Seedling information and bed numbers are encoded in custom QR codes, through which the Phalaenopsis seedlings in the greenhouse bed images are classified and cropped. With an object-detection algorithm and image processing as the core techniques, the system realizes the seedling inventory, and finally integrates the seedling counts and cultivation information into a seedling management system. The proposed method was applied to repeated inventory counts on 10 different seedling beds and the results were compared with manual counting; the average accuracy for small, medium, and large seedlings was 94.68%, 99.44%, and 97.07%, respectively. In addition, the average occlusion rate of the seedlings was analyzed: 2.38% for small, 0.47% for medium, and 2.81% for large seedlings, verifying the feasibility and effectiveness of the proposed drone-based automatic inventory system for greenhouse orchid seedlings.
Orchids are high-value export crops in Taiwan. To meet management requirements, the orchid industry typically conducts a comprehensive monthly inventory of seedlings in greenhouse settings. The seedlings are placed on flat plant beds holding various sizes and cultivars, each bed with a corresponding bed number. The inventory process demands significant labor and time: counts are taken manually and recorded on paper inventory sheets for a follow-up inventory. This research therefore develops an automatic counting system for orchid seedlings, deployed in an actual commercial greenhouse. The proposed system employs the camera embedded in a drone to sequentially capture images of each plant bed in the greenhouse. The individually captured images are then effectively stitched into a single panoramic image of a plant bed by an optimized image-stitching algorithm. Seedling information and bed numbers are encoded into custom QR codes, deliberately placed on each bed, to facilitate the classification and cropping of the orchid seedlings in the images. An object-detection algorithm and image processing form the core technology of the inventory system, which counts the orchid seedlings and finally integrates the seedling counts and cultivation information into a seedling management system. The proposed method is applied to repeated inventory checks on 10 distinct seedling beds, and the experimental results are compared with manual counting. The average accuracy rates for counting small, medium, and large seedlings are 94.68%, 99.44%, and 97.07%, respectively. Furthermore, the average occlusion rates for small, medium, and large seedlings are analyzed as 2.38%, 0.47%, and 2.81%, respectively. These results show that the developed drone-based automatic inventory system for orchid seedlings is feasible and effective.
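The abstract reports per-size average accuracy rates against manual counts. A minimal sketch of how such a metric might be computed is given below; the function name and the sample per-bed counts are hypothetical illustrations, not data from the thesis, and the thesis's exact accuracy definition is not stated in the abstract. The sketch assumes accuracy per bed is one minus the relative counting error with respect to the manual (ground-truth) count, averaged over beds.

```python
def count_accuracy(auto_count: int, manual_count: int) -> float:
    """Per-bed counting accuracy: 1 minus the relative error of the
    automatic count with respect to the manual (ground-truth) count."""
    return 1.0 - abs(auto_count - manual_count) / manual_count

# Hypothetical per-bed counts for one seedling size class
auto_counts = [188, 190, 195]
manual_counts = [190, 190, 200]

per_bed = [count_accuracy(a, m) for a, m in zip(auto_counts, manual_counts)]
average = sum(per_bed) / len(per_bed)
print(f"average accuracy: {average:.2%}")  # → average accuracy: 98.82%
```

In the thesis, this kind of average would be taken over the 10 beds of each size class to produce the reported 94.68%, 99.44%, and 97.07% figures.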