| Field | Value |
|---|---|
| Graduate Student | 楊學成 Yang, Hsueh-Cheng |
| Thesis Title | 遷移式卷積類神經網路結合特徵選取以強化預測結果—以瘧疾細胞感染預測為例 (Combined Feature Characteristic Selection of Transfer Convolutional Neural Networks to Enhance Prediction Results: A Case Study of Malaria Cell Infection Prediction) |
| Advisor | 張國浩 Chang, Kuo-Hao |
| Committee Members | 林春成 Lin, Chun-Cheng; 劉建良 Liu, Chien-Liang |
| Degree | 碩士 Master |
| Department | 工學院 - 工業工程與工程管理學系 Department of Industrial Engineering and Engineering Management |
| Year of Publication | 2020 |
| Graduation Academic Year | 108 |
| Language | 中文 (Chinese) |
| Number of Pages | 66 |
| Keywords (Chinese) | 影像辨識、卷積類神經網路、遷移學習、瘧疾 |
| Keywords (English) | Image recognition, Convolutional neural networks, Transfer learning, Malaria |
Malaria has long posed a serious threat to human health, and in regions where the disease has been eliminated, cases are easily missed because medical staff are no longer familiar with it. With advances in hardware, machine learning is increasingly able to assist with such clinical problems. Current approaches to malaria image recognition rely mainly on the large number of image features in pre-trained models to build the recognition model, but these pre-trained models have a huge number of parameters, are time-consuming to train, and correlate poorly with malaria images, which limits interpretability. While predicting malaria-infected cells, this study observed that correctly classified cells show a clearly visible presence or absence of Plasmodium, whereas misclassified cells tend to contain blurred Plasmodium regions or noise inside healthy cells that leads to misjudgment. This motivated the idea of, within a conventional lightweight model architecture, first building a neural network to recognize cells with specific characteristics, such as cells that are easily misclassified, and then using the extracted features to help classify the overall dataset. Applying the concept of transfer learning, this study first constructs a model that extracts the features of these specific cells and then transfers the learned features to the model that classifies the whole dataset, thereby improving recognition accuracy. In the empirical analysis, a model built with the current approach is compared against a model that incorporates the specific cell features; the latter indeed outperforms the existing model, which verifies the feasibility of applying sub-dataset features to the parent dataset. In addition, a separate malaria image recognition model is constructed to further improve the current recognition accuracy.
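To make the transfer step described above concrete, the following is a minimal sketch of the idea, assuming TensorFlow/Keras. The layer sizes, data shapes, and the split into a "hard-cell" sub-dataset versus the full dataset are illustrative placeholders, not the architecture or hyperparameters actually used in the thesis.

```python
import numpy as np
from tensorflow.keras import layers, models

# Placeholder data standing in for the malaria cell images (hypothetical shapes):
# x_hard/y_hard = sub-dataset of hard-to-classify cells, x_all/y_all = full dataset.
x_hard = np.random.rand(200, 64, 64, 3).astype("float32")
y_hard = np.random.randint(0, 2, size=(200, 1)).astype("float32")
x_all = np.random.rand(1000, 64, 64, 3).astype("float32")
y_all = np.random.randint(0, 2, size=(1000, 1)).astype("float32")

def build_feature_extractor(input_shape=(64, 64, 3)):
    """Lightweight CNN trained first on the hard-to-classify cell sub-dataset."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu", name="cell_features"),
        layers.Dense(1, activation="sigmoid"),
    ])

# Step 1: train the extractor on the sub-dataset of specific (easily misclassified) cells.
extractor = build_feature_extractor()
extractor.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
extractor.fit(x_hard, y_hard, epochs=5, batch_size=32, validation_split=0.2)

# Step 2: transfer the learned cell features into the model for the overall dataset.
base = models.Model(inputs=extractor.input,
                    outputs=extractor.get_layer("cell_features").output)
base.trainable = False  # freeze the transferred feature layers

full_model = models.Sequential([
    base,
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
full_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
full_model.fit(x_all, y_all, epochs=5, batch_size=32, validation_split=0.2)
```

In such a setup the transferred layers can also be unfrozen after an initial training phase and fine-tuned at a lower learning rate; whether that helps depends on how closely the sub-dataset features match the full dataset.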