
Graduate Student: Lin, Cai-Long (林采蓉)
Thesis Title: Automated Visual Recognition System for Orchid Seedling Lesions Based on Deep Learning (基於深度學習之自動化蘭花苗株病變視覺辨識系統)
Advisor: Chen, Rong-Shun (陳榮順)
Oral Defense Committee: Li, Sheng-Shian (李昇憲); Bai, Ming-Sian (白明憲)
Degree: Master
Department: College of Engineering, Department of Power Mechanical Engineering
Year of Publication: 2019
Academic Year of Graduation: 107
Language: Chinese
Number of Pages: 76
Chinese Keywords: orchid seedling (蘭花苗株), lesion recognition system (病變辨識系統), convolutional neural network (卷積神經網路), object detection (物件偵測), transfer learning (遷移學習)
English Keywords: Deep Learning, Orchid Seedling, Lesion Recognition System, Convolutional Neural Network, Transfer Learning
  • This research develops an automated visual recognition system for orchid seedling lesions. From images of an orchid seedling, the system determines whether the seedling shows one of three lesion symptoms common in Phalaenopsis, namely white mold (白菇), yellow leaf (黃葉), or black tip (黑頭), and, according to the recognition result, automatically diverts the seedling to the area assigned to that symptom or to the healthy area. An image database of healthy seedlings and of the three symptoms was first built. A deep-learning object detection algorithm, combined with transfer learning from classic convolutional neural network architectures, extracts the lesion features from seedling images while remaining robust to deformation of the seedling in the image; the lesion type is then determined from these features, and an automated sorting mechanism delivers the seedling to its designated area.
    To meet the requirements of actual production, all images in the deep-learning training set were photographed on site at the cooperating orchid plantation. Moreover, the proposed algorithm runs on an NVIDIA Jetson TX2 with real-time performance and high recognition accuracy. Once the orchid company builds an automated conveyor line and integrates the recognition system developed in this work, it can save substantial labor and cost and increase production capacity.
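The transfer-learning scheme described above, where a pretrained convolutional backbone is kept frozen and only a small classifier head is trained on the orchid images, can be sketched as follows. Everything here is a toy illustration: the random "backbone", the array sizes, and the synthetic labels are assumptions standing in for the thesis's actual pretrained network and dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen "backbone": stands in for a pretrained CNN whose
# weights are reused as-is and NOT updated during transfer learning.
W_backbone = rng.normal(size=(64, 16))

def backbone(x):
    """Map a raw 64-d input to a 16-d feature vector (frozen weights)."""
    return np.maximum(x @ W_backbone, 0.0)  # ReLU features

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy stand-in data: 200 "images", with labels made linearly recoverable
# from the backbone features so the small head has something to learn.
X = rng.normal(size=(200, 64))
F = backbone(X)                       # features are computed only once
y = (F @ rng.normal(size=(16, 4))).argmax(axis=1)   # 4 classes
Y = np.eye(4)[y]

# Trainable head: 4 classes (healthy + three lesion types).
W_head = np.zeros((16, 4))
for _ in range(300):                  # gradient descent on the head only
    P = softmax(F @ W_head)
    W_head -= 0.1 * F.T @ (P - Y) / len(X)

train_acc = (softmax(F @ W_head).argmax(axis=1) == y).mean()
```

Because the backbone is frozen, its features can be computed once and reused across all training steps, which mirrors why transfer learning is attractive when labeled orchid images are scarce and compute (e.g. an embedded Jetson board) is limited.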


    This research aims to develop an automatic visual recognition system for three kinds of orchid seedling lesions. The system recognizes which, if any, of the three lesions is present from images of the orchid seedling, and then automatically routes the seedling to a specific location according to the recognition result. A deep-learning object detection algorithm, combined with transfer learning from classic convolutional neural network architectures, is used to extract the lesion features from the orchid seedling images, which effectively avoids failures of feature extraction due to image deformation. Based on the identified lesion features, the type of lesion is determined, and the orchid seedlings are delivered to the corresponding position by the switch conveyor.
    To be suitable for practical applications in an orchid farm, the pictures used for training the deep learning models were all photographed in the cooperating orchid garden. In addition, the algorithm proposed in this thesis is deployed on an NVIDIA Jetson TX2 and runs in real time, and the lesion recognition accuracy reaches a level that can be practically applied to the production line.
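The recognize-then-sort flow described in the abstract can be illustrated with a minimal routing sketch. The class names, confidence threshold, and lane numbers below are hypothetical placeholders, not the thesis's actual production-line configuration.

```python
# Hypothetical lane map: each recognized class is diverted to its own lane.
LANES = {"normal": 0, "white_mold": 1, "yellow_leaf": 2, "black_tip": 3}

def route(detections, threshold=0.5):
    """Return the conveyor lane for one seedling.

    detections: list of (class_name, confidence) pairs from the detector.
    The highest-confidence detection above the threshold decides the lane;
    a seedling with no confident lesion detection is treated as normal.
    """
    best = max(
        (d for d in detections if d[1] >= threshold),
        key=lambda d: d[1],
        default=("normal", 1.0),
    )
    return LANES[best[0]]

# Example: a seedling with one confident yellow-leaf detection.
lane = route([("yellow_leaf", 0.91), ("black_tip", 0.34)])  # -> 2
```

Thresholding before taking the maximum keeps low-confidence detections from diverting healthy seedlings, which matters on a real line where misrouting a healthy plant wastes capacity.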

    Abstract (Chinese)
    Abstract (English)
    Acknowledgements
    List of Figures
    List of Tables
    Chapter 1  Introduction
      1.1 Preface
      1.2 Motivation
      1.3 Literature Review
      1.4 Thesis Organization
    Chapter 2  System Overview
      2.1 Visual Recognition System
        2.1.1 Hardware and Operating System
        2.1.2 Software Development Kits
      2.2 Deep Learning Computing Platform
        2.2.1 Hardware and Operating System
        2.2.2 Software Development Kits
      2.3 Sorting Control Platform
        2.3.1 Hardware and Operating System
      2.4 Orchid Seedling Lesion Symptoms
      2.5 Recognition System Architecture and Workflow
        2.5.1 Visual Recognition Station
        2.5.2 Sorting Mechanism
    Chapter 3  System Implementation
      3.1 Orchid Seedling Lesion Image Dataset
        3.1.1 Image Recognition Station
        3.1.2 Labeling Orchid Seedling Images
      3.2 Visual Recognition System
        3.2.1 Convolutional Neural Networks
        3.2.2 Deep Learning Object Detection Algorithms
        3.2.3 Deep Learning Model Architecture of This System
    Chapter 4  Experimental Results
      4.1 Orchid Seedling Lesion Datasets
        4.1.1 General Dataset
        4.1.2 Bounding-Box Dataset
      4.2 Visual Recognition Results
        4.2.1 Performance Metrics of Recognition Results
        4.2.2 Image Recognition
        4.2.3 Object Detection
      4.3 Automated Process System
        4.3.1 Automated Sorting System
        4.3.2 Practical Application of the Algorithm
    Chapter 5  Conclusions and Future Work
      5.1 Conclusions
      5.2 Future Work
    References


    Full-Text Release Date: 2024/09/02 (campus network)
    Full-Text Release Date: full text not authorized for public access (off-campus network)
    Full-Text Release Date: full text not authorized for public access (National Central Library: Taiwan NDLTD system)