
Author: Wu, Feng-Ping (吳灃玶)
Title: Design and Implementation of Green Bamboo Shoots Classification System Based on Depth Image Processing Technology (基於深度圖像影像處理技術之綠竹筍分級系統研製)
Advisor: Huang, Nen-Fu (黃能富)
Committee Members: Chen, Jiann-Liang (陳俊良); Chen, Jen-Yeu (陳震宇); Chang, Yao-Chung (張耀中)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science
Year of Publication: 2021
Academic Year of Graduation: 109
Language: English
Number of Pages: 66
Chinese Keywords: 深度影像處理, 綠竹筍, 弧線長度估算, 分級系統, 邊緣運算
Keywords: Depth Image Processing, Green Bamboo Shoot, Curve Length Estimation, Grading System, Edge Computing
    Green bamboo shoots are one of Taiwan's most important summer vegetables and are nicknamed "green gold." Their cultivation, however, is highly labor-intensive, and with the severe outflow and aging of Taiwan's rural agricultural workforce and young people's reluctance to take up farm work, labor has become a major bottleneck. Besides harvesting, the other step that demands heavy manual effort is grading the shoots. Whereas most fruits and vegetables are traditionally graded by weight, green bamboo shoots are graded by appearance, namely curve length, curvature, and base radius. This step relies on a large amount of manual work and suffers from inconsistent standards.

    This study aims to build a sorting system that grades green bamboo shoots by their appearance, relieving the heavy labor demand of grading and eliminating the inconsistency that arises when different workers apply different standards.

    On the hardware side, the computing unit is a low-power edge computing device equipped with a graphics processor, and a depth camera photographs each bamboo shoot to obtain color and depth data. An edge detection algorithm extracts the shoot's edge features from the color data, and optimization steps such as image rotation locate the shoot's three corner points.
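    As an illustrative sketch of the capture step, the following Python snippet (assuming an Intel RealSense camera with the pyrealsense2 and numpy packages; the stream resolution and frame rate are placeholders, not the settings used in this work) grabs one pair of color and depth frames aligned to the color viewpoint:

```python
import numpy as np
import pyrealsense2 as rs

# Minimal capture sketch: one aligned color + depth frame pair.
# Stream resolution and frame rate are illustrative placeholders.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

align = rs.align(rs.stream.color)  # map depth pixels onto the color image
try:
    frames = align.process(pipeline.wait_for_frames())
    depth = np.asanyarray(frames.get_depth_frame().get_data())  # raw 16-bit depth
    color = np.asanyarray(frames.get_color_frame().get_data())  # BGR color image
finally:
    pipeline.stop()
```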

    We also preprocess the depth data, optimizing and smoothing the depth image in both the spatial and temporal dimensions and filling in missing depth values. Using the corner points and edge features, the shoot's left and right curves are sliced into many short segments; the depth data is used to measure each segment's length, and the segment lengths are summed to approximate the curve length. Finally, the estimated left curve length, right curve length, and base diameter are used to grade the shoot, and the result is shown on screen. In measuring the shoot's edge lengths, the error rate is 2.32% for the left curve, 5.27% for the right curve, and 8.5% for the base diameter, for an overall error rate of 5.36%.
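    A minimal sketch of the spatial/temporal smoothing and hole filling described above, using the post-processing filters shipped with the Intel RealSense SDK (librealsense); the filters are run with their default options here, and the exact parameters used in the thesis are not reproduced:

```python
import pyrealsense2 as rs

# Depth preprocessing chain: sub-sampling, edge-preserving spatial smoothing,
# temporal smoothing across frames, and hole filling. Default options only.
decimation = rs.decimation_filter()    # sub-sampling
spatial    = rs.spatial_filter()       # spatial edge-preserving filter
temporal   = rs.temporal_filter()      # smoothing across consecutive frames
hole_fill  = rs.hole_filling_filter()  # fill missing depth values

def preprocess(depth_frame):
    """Run a raw depth frame through the post-processing chain."""
    f = decimation.process(depth_frame)
    f = spatial.process(f)
    f = temporal.process(f)
    f = hole_fill.process(f)
    return f
```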


    The green bamboo shoot is an important summer vegetable in Taiwan. However, labor shortages and an aging agricultural workforce have become serious problems, and grading bamboo shoots is a labor-intensive task in the production process. Unlike most produce, which is graded by weight, bamboo shoots are graded by the lengths of their curves and the diameter of their cross section.

    In this thesis, we propose a green bamboo shoot classification system based on depth image processing technology to make the grading process more efficient and accurate. The system comprises an edge computing device and a depth camera that captures color and depth data. The Canny edge detection algorithm is applied to the color images to extract edge features, and three critical corner points (top, left, and right) are obtained after a series of data optimization steps.
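    A hedged sketch of this step with OpenCV: it runs Canny on the color image and then picks the topmost, leftmost, and rightmost points of the longest edge contour. The thresholds and the extreme-point heuristic are illustrative only; the thesis refines the corners with additional optimization (e.g., image rotation) not shown here.

```python
import cv2
import numpy as np

def find_corners(color_bgr, low=50, high=150):
    """Extract an edge map and three rough corner points (top, left, right).
    Canny thresholds and the extreme-point heuristic are placeholders."""
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)

    # Treat the longest edge contour as the bamboo shoot outline.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    outline = max(contours, key=len).reshape(-1, 2)   # (N, 2) pixel coordinates

    top   = tuple(outline[outline[:, 1].argmin()])    # smallest row (y)
    left  = tuple(outline[outline[:, 0].argmin()])    # smallest column (x)
    right = tuple(outline[outline[:, 0].argmax()])    # largest column (x)
    return edges, top, left, right
```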

    Next, the left and right curves are sliced into small segments, and the segment lengths are summed to approximate each curve's length. Finally, the bamboo shoots are graded according to the estimated lengths, and the result is shown on a display device.
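    The core length estimation can be sketched as follows: sample pixels along a curve, back-project each to a 3D point from its depth value using a pinhole camera model, and sum the lengths of consecutive segments. The intrinsics-based deprojection and the grading thresholds below are illustrative assumptions, not the exact implementation in the thesis (which builds on the Intel RealSense SDK):

```python
import numpy as np

def curve_length_3d(pixels, depth_m, fx, fy, cx, cy):
    """Approximate a curve's physical length by slicing it into segments.

    pixels  : (N, 2) array of (u, v) image coordinates sampled along the curve
    depth_m : (N,)   depth at each sampled pixel, in meters
    fx, fy, cx, cy : pinhole camera intrinsics (illustrative deprojection)
    """
    u = pixels[:, 0].astype(float)
    v = pixels[:, 1].astype(float)
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    pts = np.stack([x, y, depth_m], axis=1)            # (N, 3) points in meters
    return np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()

def grade(left_len, right_len, base_diameter):
    """Placeholder grading rule: the actual grade thresholds come from the
    grading standard used in the thesis and are not reproduced here."""
    longest = max(left_len, right_len)
    return "grade A" if longest < 0.25 and base_diameter > 0.05 else "grade B"
```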

    The length estimation error rate is 2.32% for the left curve, 5.27% for the right curve, and 8.5% for the bottom diameter, for an overall error rate of 5.36%. The proposed system both reduces farmers' labor costs and improves the consistency and quality of bamboo shoot grading.

    Abstract
    中文摘要 (Chinese Abstract)
    Contents
    List of Figures
    List of Tables
    Chapter 1  Introduction
    Chapter 2  Background and Related Works
      2.1  Vegetables and Fruit Grading
        2.1.1  Overview
        2.1.2  Hyper-spectral Imaging System
        2.1.3  Artificial Intelligent
      2.2  Green Bamboo Shoots Grading
      2.3  Depth Image
        2.3.1  Overview
        2.3.2  Applications
      2.4  Depth Camera
        2.4.1  Overview
        2.4.2  Stereo Vision Camera
        2.4.3  Structured Light Camera
        2.4.4  Time of Flight Camera
      2.5  Intel RealSense SDK
      2.6  Edge Detection
        2.6.1  Overview
        2.6.2  Canny Edge Detection
      2.7  Convolutional Neural Network
        2.7.1  Overview
        2.7.2  Learning Method
        2.7.3  Difficulties of Training a Convolutional Neural Network
    Chapter 3  System Architecture
      3.1  Edge Computing Device
        3.1.1  Hardware
      3.2  Image Shooting Environment
        3.2.1  Hardware
      3.3  Image Processing Block
        3.3.1  Depth Data Preprocessing
          3.3.1.1  Sub-Sampling
          3.3.1.2  Spatial Edge Preserving Filtering
          3.3.1.3  Hole Filling
        3.3.2  Feature Extraction and Data Optimization
      3.4  Grading Block
        3.4.1  Length Estimation
        3.4.2  Grading by Length
      3.5  Display Block
    Chapter 4  System Implementation
      4.1  Reference Length Measurement
      4.2  Camera Parameters and Shooting Environment
      4.3  Image Processing Block
        4.3.1  Depth Data Preprocessing
          4.3.1.1  Sub-Sampling
          4.3.1.2  Spatial Edge Preserving Filtering
          4.3.1.3  Temporal Filtering
          4.3.1.4  Hole Filling
        4.3.2  Feature Extraction and Data Optimization
          4.3.2.1  Canny Edge Detection
          4.3.2.2  Corner Detection
          4.3.2.3  Misplacement Correction
        4.3.3  Edge Slicing
      4.4  Grading Block
        4.4.1  Length Estimation
        4.4.2  Grading by Length
        4.4.3  Edge Cases Handling
      4.5  Result Display
    Chapter 5  Experiment and Result
      5.1  System Demonstration and Shooting Environment
      5.2  Result of Depth Data Preprocessing
      5.3  Different Parameters in Canny Edge Detection
      5.4  Different Number of Slices
      5.5  Different Shape of Bamboo Shoots
      5.6  Processing Time Analysis
    Chapter 6  Conclusion and Future Work
      6.1  Conclusions
      6.2  Future Work
    Bibliography

