
Author: Chou, Dong-Lin (周東林)
Thesis Title: Real-Time Classification System of Green Coffee Beans by Using a Convolutional Neural Network (基於卷積神經網路之即時分類生咖啡豆系統)
Advisor: Huang, Nen-Fu (黃能富)
Committee Members: Sheu, Jang-Ping (許健平); Chen, Jiann-Liang (陳俊良)
Degree: Master
Department: College of Electrical Engineering and Computer Science, Department of Computer Science
Year of Publication: 2019
Academic Year of Graduation: 107
Language: Chinese
Pages: 56
Chinese Keywords: 卷積神經網路 (convolutional neural network), 咖啡豆 (coffee beans), 深度學習 (deep learning), 圖像處理 (image processing), 物件偵測 (object detection), 圖像分割 (image segmentation)
English Keywords: Convolutional Neural Network, coffee beans, deep learning, image processing, object detection, segmentation
  • Coffee is an important economic crop and one of the most popular beverages in the world. The rise of specialty coffee has changed people's standards for coffee quality. However, green coffee beans are often mixed with many inferior beans and impurities. This study therefore first aims to address the heavy time and labor costs of carefully picking out defective green coffee beans for specialty coffee. Its second objective is to develop an automatic bean-sorting system that genuinely reduces those time and labor costs.
    We first collected the required data by photographing green coffee beans, then processed the captured images with image preprocessing and data augmentation techniques. The preprocessed and augmented data served as the input to a convolutional neural network; by letting the network detect image features, we can analyze the image information more easily.
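    The preprocessing and augmentation step described above can be sketched as follows. The abstract does not specify which operations were used, so flips and a 90-degree rotation are assumed here purely as representative examples:

    ```python
    import numpy as np

    def augment(image):
        """Generate simple augmented variants of one bean image.

        `image` is a NumPy array (2-D grayscale or H x W x 3 color).
        The specific operations are assumptions for illustration, not
        the thesis's exact augmentation pipeline.
        """
        return [
            image,               # original
            image[:, ::-1],      # horizontal flip
            image[::-1, :],      # vertical flip
            np.rot90(image),     # 90-degree rotation (first two axes)
        ]
    ```

    Each captured photograph thus yields several training samples, which is the usual motivation for augmentation when, as here, the dataset must be collected by hand.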
    Finally, we connected the model trained by the convolutional neural network to the camera originally used for data collection and analyzed its video stream. By linking the computer and the camera through the model, we successfully separated the green coffee beans into two classes by image analysis, reaching an accuracy of 94% with a false positive rate below 10%.
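    The streaming-recognition step could look roughly like the sketch below. The trained CNN is replaced by a placeholder brightness score (`model=None` branch), and the OpenCV capture loop is an assumption about the camera interface, since the abstract does not name it:

    ```python
    import numpy as np

    def classify_frame(frame, threshold=0.5, model=None):
        """Label one frame 'good' or 'defect'.

        `model` stands for the trained CNN from the thesis; when absent,
        a mean-brightness score is used as a stand-in (illustration only).
        """
        if model is not None:
            score = float(model(frame))      # hypothetical CNN call
        else:
            score = frame.mean() / 255.0     # placeholder score in [0, 1]
        return "good" if score >= threshold else "defect"

    def stream_loop(threshold=0.5):
        """Analyze the camera's video stream frame by frame (sketch)."""
        import cv2                           # assumed available at runtime
        cap = cv2.VideoCapture(0)            # camera used for data collection
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            label = classify_frame(frame, threshold)
            cv2.putText(frame, label, (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
            cv2.imshow("beans", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cap.release()
        cv2.destroyAllWindows()
    ```

    Separating the per-frame classifier from the capture loop keeps the model logic testable without a physical camera attached.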


    Coffee is an important economic crop and one of the most popular beverages worldwide. The rise of specialty coffee has changed people's standards regarding coffee quality. However, green coffee beans are often mixed with impurities and defective beans. Therefore, this study aimed to solve the problem of the time-consuming and labor-intensive manual selection of green coffee beans for specialty coffee products. The second objective of our study was to develop an automatic coffee bean sorting system. We first collected images by photographing green coffee beans, then applied image processing and data augmentation technologies to the data. We then trained a deep convolutional neural network to analyze the image information. Finally, we connected the trained model to a webcam for video-stream recognition and successfully separated the good beans from the bad. The false positive rate was 0.0441, and the overall coffee bean recognition rate was 94.63%.
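    The two figures quoted above follow the standard confusion-matrix definitions; the sketch below makes them explicit (the counts in the test are hypothetical, not the thesis's data):

    ```python
    def confusion_metrics(tp, fp, tn, fn):
        """Accuracy and false positive rate from confusion-matrix counts.

        tp/fp/tn/fn are true/false positives and true/false negatives.
        """
        accuracy = (tp + tn) / (tp + fp + tn + fn)   # overall recognition rate
        fpr = fp / (fp + tn)                         # false positive rate
        return accuracy, fpr
    ```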

    Table of Contents: Abstract, Chinese Abstract, Contents, List of Figures, List of Tables, Chapters 1–6, Bibliography

