
Graduate Student: Liu, Che-Hao (劉哲豪)
Thesis Title: Lung disease assessment using DXSNet based on deep learning
Advisor: Jong, Tai-Lang (鐘太郎)
Committee Members: Hsieh, Chi-Wen (謝奇文); Huang, Yu-Wei (黃裕煒)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2022
Graduation Academic Year: 110 (ROC calendar)
Language: Chinese
Pages: 71
Keywords: X-ray image, image classification, deep learning, artificial intelligence, convolutional neural network (CNN), DXSNet, COVID-19, pneumonia
Abstract:
    The COVID-19 pandemic has spread around the world, and since the end of 2019 the situation in many regions has remained severe. Testing for the disease requires a great deal of manpower and resources, so areas with scarce resources may run into difficulties, and the testing technology and reagents used also affect the results. Developing detection approaches that consume fewer resources and less manpower is therefore particularly important.
    Deep learning has recently advanced rapidly and performs very well in image classification, and many researchers have applied it to medical images in the hope of relieving the pressure on medical staff and speeding up diagnosis. For lung diseases, X-ray and CT images are the most common modalities; X-ray images are faster, cheaper and more convenient to obtain than CT images, so feeding chest X-rays to deep learning models can help physicians assess the condition and lighten their workload.
    This thesis applies four deep learning models (ResNet152, InceptionV3, Xception and DenseNet201) to chest X-ray images to distinguish four classes: normal lungs, COVID-19, bacterial pneumonia and viral pneumonia. X-ray image preprocessing is introduced to raise the accuracy of the four models. In addition, a new deep learning model, DXSNet, is proposed to further improve the accuracy of lung disease assessment. It combines the strengths of Xception and DenseNet, using DenseNet-style connectivity to strengthen feature extraction and Xception-style convolutions to process the information in the feature maps separately before integrating it, and then uses SENet attention to emphasize informative features and suppress uninformative ones. The experimental results show that DXSNet outperforms the four baseline models. Both multi-class and binary classification are carried out on the four types of X-ray images: binary classification lets medical staff quickly screen whether the lungs are normal and whether the case should be passed on to a physician, while multi-class classification helps the physician judge the condition, reducing the burden on medical staff and physicians alike.
    Besides Accuracy, the evaluation metrics used in this thesis include Precision, Recall and F1-score, together with the AUC of each model's ROC and PR curves. Even without image preprocessing, DXSNet reaches a Precision of 0.9381, a Recall of 0.9372, an F1-score of 0.9365 and an AUC of 0.9897 in the multi-class setting, showing that multi-class classification assesses the diseases well. In the binary setting, DXSNet reaches 0.9792 in Precision, Recall and F1-score and an AUC of 0.9958, outperforming the four baseline models and showing that binary classification separates normal from abnormal lungs reliably.
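    The abstract describes DXSNet only at a high level: DenseNet-style dense connectivity for feature extraction, Xception-style depthwise separable convolutions that process feature-map information separately before fusing it, and SENet channel attention. The Keras sketch below is only an illustration of how such a block could be assembled under those assumptions; the layer counts, channel widths and the way the blocks are stacked are hypothetical and do not reproduce the thesis implementation.

        # Hypothetical sketch of a DXSNet-style building block: DenseNet-style
        # dense connectivity + Xception-style depthwise separable convolutions
        # + SENet channel attention. Sizes are assumed values for illustration.
        import tensorflow as tf
        from tensorflow.keras import layers

        def se_block(x, ratio=16):
            """Squeeze-and-Excitation: reweight channels using global context."""
            channels = x.shape[-1]
            s = layers.GlobalAveragePooling2D()(x)                 # squeeze
            s = layers.Dense(channels // ratio, activation="relu")(s)
            s = layers.Dense(channels, activation="sigmoid")(s)    # excitation
            s = layers.Reshape((1, 1, channels))(s)
            return layers.Multiply()([x, s])                       # rescale features

        def dxs_block(x, growth_rate=32, layers_per_block=4):
            """Dense block whose internal convolutions are depthwise separable."""
            for _ in range(layers_per_block):
                y = layers.BatchNormalization()(x)
                y = layers.Activation("relu")(y)
                # Xception-style: per-channel spatial conv, then pointwise fusion.
                y = layers.SeparableConv2D(growth_rate, 3, padding="same")(y)
                # DenseNet-style: concatenate new features with all earlier ones.
                x = layers.Concatenate()([x, y])
            # SENet-style attention to emphasize informative channels.
            return se_block(x)

        def transition(x, compression=0.5):
            """Transition layer: 1x1 conv to compress channels, then downsample."""
            channels = int(x.shape[-1] * compression)
            x = layers.BatchNormalization()(x)
            x = layers.Conv2D(channels, 1, activation="relu")(x)
            return layers.AveragePooling2D(2)(x)

        def build_dxsnet(input_shape=(224, 224, 3), num_classes=4, num_blocks=3):
            inputs = layers.Input(shape=input_shape)
            x = layers.Conv2D(64, 7, strides=2, padding="same", activation="relu")(inputs)
            x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
            for i in range(num_blocks):
                x = dxs_block(x)
                if i < num_blocks - 1:
                    x = transition(x)
            x = layers.GlobalAveragePooling2D()(x)
            outputs = layers.Dense(num_classes, activation="softmax")(x)
            return tf.keras.Model(inputs, outputs, name="dxsnet_sketch")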


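    To make the reported metrics concrete, the sketch below shows one way Precision, Recall, F1-score and the AUCs of the ROC and PR curves could be computed with scikit-learn from a model's predicted probabilities, for both the four-class and the normal-versus-abnormal settings. The variable names, averaging choices and decision threshold are assumptions for illustration, not the thesis code.

        # Illustrative computation of the reported metrics with scikit-learn.
        # `y_true` holds integer class labels, `y_prob` the per-class probabilities
        # predicted by a model (names and shapes are assumptions for this sketch).
        import numpy as np
        from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                                     f1_score, roc_auc_score, average_precision_score,
                                     confusion_matrix)

        def report_multiclass(y_true, y_prob):
            y_pred = np.argmax(y_prob, axis=1)
            print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
            print("Accuracy:", accuracy_score(y_true, y_pred))
            # Macro average treats the four classes equally; weighted average
            # weights each class by its support (sections 3.5.5 and 3.5.6).
            for avg in ("macro", "weighted"):
                print(f"Precision ({avg}):", precision_score(y_true, y_pred, average=avg))
                print(f"Recall    ({avg}):", recall_score(y_true, y_pred, average=avg))
                print(f"F1-score  ({avg}):", f1_score(y_true, y_pred, average=avg))
            # One-vs-rest AUC of the ROC curve averaged over all classes.
            print("ROC AUC (macro, OvR):",
                  roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro"))

        def report_binary(y_true, y_prob_abnormal):
            """Normal vs. abnormal screening: y_prob_abnormal is P(abnormal)."""
            y_pred = (y_prob_abnormal >= 0.5).astype(int)
            print("Precision:", precision_score(y_true, y_pred))
            print("Recall   :", recall_score(y_true, y_pred))
            print("F1-score :", f1_score(y_true, y_pred))
            print("ROC AUC  :", roc_auc_score(y_true, y_prob_abnormal))
            print("PR AUC   :", average_precision_score(y_true, y_prob_abnormal))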

Table of Contents:
    Chinese Abstract / Abstract / Acknowledgements / Table of Contents / List of Figures / List of Tables
    Chapter 1  Introduction
        1.1 Research Background
        1.2 Literature Review
            1.2.1 COVID-19 Detection Methods
            1.2.2 COVID-19 Detection and Deep Learning
        1.3 Research Motivation
        1.4 Thesis Organization
    Chapter 2  Artificial Neural Networks and Deep Learning
        2.1 Introduction
        2.2 Artificial Neural Networks
            2.2.1 Introduction to Artificial Neural Networks
            2.2.2 Activation Functions
            2.2.3 Loss Functions
            2.2.4 Training Neural Networks
            2.2.5 Vanishing Gradients
        2.3 Convolutional Neural Networks
            2.3.1 Introduction
            2.3.2 Convolutional Neural Network Architecture
            2.3.4 Pooling Layer
            2.3.5 Global Average Pooling
        2.4 Convolutional Neural Network Models
            2.4.1 ResNet [20]
            2.4.2 Inception
            2.4.3 Xception [23]
            2.4.4 DenseNet
        2.5 DXSNet
            2.5.1 Dense Block
            2.5.2 Transition Layer
            2.5.3 Xception Block [28]
            2.5.4 Squeeze-and-Excitation Networks (SENet) [28]
    Chapter 3  Analysis Methods and Experimental Results
        3.1 Introduction
        3.2 Introduction to Anaconda
        3.3 Dataset Description
            3.3.1 Data Splitting
        3.4 Data Preprocessing
            3.4.1 Introduction
            3.4.2 White Balance
            3.4.3 Contrast Limited Adaptive Histogram Equalization (CLAHE)
            3.4.4 Bilateral Filter
        3.5 Evaluation Metrics
            3.5.1 Confusion Matrix
            3.5.2 Accuracy
            3.5.3 Precision
            3.5.4 Recall
            3.5.5 Macro Average
            3.5.6 Weighted Average
            3.5.8 ROC Curve and PR Curve
        3.6 Experimental Method and Discussion of Results
            3.6.1 Experimental Procedure
            3.6.2 Multi-class Classification Results
            3.6.3 Preprocessing Results
            3.6.4 Binary Classification Results
    Chapter 4  Conclusions and Future Work
    References
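    Sections 3.4.2 to 3.4.4 of the outline above list white balance, CLAHE and bilateral filtering as the X-ray preprocessing steps. The OpenCV sketch below shows a plausible pipeline in that order, assuming a gray-world white balance and commonly used parameter values; the actual settings in the thesis may differ.

        # Hypothetical X-ray preprocessing pipeline: white balance -> CLAHE ->
        # bilateral filter. All parameter values are assumptions for illustration.
        import cv2
        import numpy as np

        def gray_world_white_balance(img_bgr):
            """Gray-world white balance: scale each channel toward the mean gray."""
            img = img_bgr.astype(np.float32)
            channel_means = img.reshape(-1, 3).mean(axis=0)
            img *= channel_means.mean() / channel_means
            return np.clip(img, 0, 255).astype(np.uint8)

        def preprocess_xray(path):
            img = cv2.imread(path, cv2.IMREAD_COLOR)
            img = gray_world_white_balance(img)
            # CLAHE works on single-channel images; equalize the L channel only.
            lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
            l, a, b = cv2.split(lab)
            clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
            lab = cv2.merge((clahe.apply(l), a, b))
            img = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
            # Bilateral filter: smooth noise while preserving edges.
            img = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
            return cv2.resize(img, (224, 224))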

    [1] "疾病介紹." https://www.cdc.gov.tw/Category/Page/vleOMKqwuEbIMgqaTeXG8A (accessed.
    [2] P. Asrani et al., "Diagnostic approaches in COVID-19: clinical updates," Expert review of respiratory medicine, vol. 15, no. 2, pp. 197-212, 2021.
    [3] N. Subbaraman, "Coronavirus tests: researchers chase new diagnostics to fight the pandemic," Nature, 2020.
    [4] W. Zhang et al., "Molecular and serological investigation of 2019-nCoV infected patients: implication of multiple shedding routes," Emerging microbes & infections, vol. 9, no. 1, pp. 386-389, 2020.
    [5] R. Han, L. Huang, H. Jiang, J. Dong, H. Peng, and D. Zhang, "Early clinical and CT manifestations of coronavirus disease 2019 (COVID-19) pneumonia," American Journal of Roentgenology, vol. 215, no. 2, pp. 338-343, 2020.
    [6] T. Ai et al., "Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases," Radiology, vol. 296, no. 2, pp. E32-E40, 2020.
    [7] B. A. Oliveira, L. C. d. Oliveira, E. C. Sabino, and T. S. Okay, "SARS-CoV-2 and the COVID-19 disease: a mini review on diagnostic methods," Revista do Instituto de Medicina Tropical de São Paulo, vol. 62, 2020.
    [8] M.-Y. Ng et al., "Imaging profile of the COVID-19 infection: radiologic findings and literature review," Radiology: Cardiothoracic Imaging, vol. 2, no. 1, p. e200034, 2020.
    [9] S. Wang et al., "A deep learning algorithm using CT images to screen for Corona Virus Disease (COVID-19)," European radiology, pp. 1-9, 2021.
    [10] E. S. Amis Jr et al., "American College of Radiology white paper on radiation dose in medicine," Journal of the american college of radiology, vol. 4, no. 5, pp. 272-284, 2007.
    [11] E. Baratella et al., "Severity of lung involvement on chest X-rays in SARS-coronavirus-2 infected patients as a possible tool to predict clinical progression: an observational retrospective analysis of the relationship between radiological, clinical, and laboratory data," Jornal Brasileiro de Pneumologia, vol. 46, no. 5, 2020.
    [12] G. D. Rubin et al., "The role of chest imaging in patient management during the COVID-19 pandemic: a multinational consensus statement from the Fleischner Society," Radiology, vol. 296, no. 1, pp. 172-180, 2020.
    [13] K. Kallianos et al., "How far have we come? Artificial intelligence for chest radiograph interpretation," Clinical radiology, vol. 74, no. 5, pp. 338-345, 2019.
    [14] A. M. Tahir et al., "A systematic approach to the design and characterization of a smart insole for detecting vertical ground reaction force (vGRF) in gait analysis," Sensors, vol. 20, no. 4, p. 957, 2020.
    [15] D. Inc. "Artificial Intelligence 101: Everything You Need to Know To Understand AI." https://medium.com/@diamond_io/artificial-intelligence-101-everything-you-need-to-know-to-understand-ai-8e20fe4bd750 (accessed).
    [16] M. Mishra and M. Srivastava, "A view of artificial neural network," in 2014 International Conference on Advances in Engineering & Technology Research (ICAETR-2014), 2014: IEEE, pp. 1-3.
    [17] C. Kiourt, G. Pavlidis, and S. Markantonatou, "Deep learning approaches in food recognition," in Machine Learning Paradigms: Springer, 2020, pp. 83-108.
    [18] GGWithRabitLIFE. "[Machine Learning ML NOTE] Convolutional Neural Networks (卷積神經網路)." https://medium.com/機機與兔兔的工程世界/機器學習-ml-note-convolution-neural-network-卷積神經網路-bfa8566744ep (accessed).
    [19] I. B. Oliveira, I. Braga, J. Puga, A. Franco, L. Pereira, and G. Ouverney, "Development of a Multi-attribute Convolutional Neural Network to Seismic Facies Classification," 2019.
    [20] Q. Zhang, J. Sang, W. Wu, B. Cai, Z. Wu, and H. Hu, "An Image Splicing and Copy-Move Detection Method Based on Convolutional Neural Networks with Global Average Pooling," in Image and Graphics, Cham: Springer International Publishing, 2019, pp. 255-265.
    [21] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770-778.
    [22] C. Szegedy et al., "Going deeper with convolutions," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1-9.
    [23] A. Noeman and D. Handayani, "Detection of Mad Lazim Harfi Musyba Images Uses Convolutional Neural Network," in IOP Conference Series: Materials Science and Engineering, 2020, vol. 771, no. 1: IOP Publishing, p. 012030.
    [24] F. Chollet, "Xception: Deep learning with depthwise separable convolutions," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 1251-1258.
    [25] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 2818-2826.
    [26] Y. Guobing. "Separable Convolution." https://blog.csdn.net/tintinetmilou/article/details/81607721 (accessed).
    [27] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4700-4708.
    [28] C. Zhang et al., "Resnet or densenet? introducing dense shortcuts to resnet," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2021, pp. 3550-3559.
    [29] J. Hu, L. Shen, and G. Sun, "Squeeze-and-excitation networks," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 7132-7141.
    [30] K. He, X. Zhang, S. Ren, and J. Sun, "Identity mappings in deep residual networks," in European conference on computer vision, 2016: Springer, pp. 630-645.
    [31] "Python." https://www.python.org/ (accessed.
    [32] "Tensorflow." https://www.tensorflow.org/ (accessed.
    [33] "Keras." https://keras.io/ (accessed.
    [34] "Chest X-Ray Images (Pneumonia)." https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia (accessed.
    [35] "COVID-19 Radiography Database." https://www.kaggle.com/tawsifurrahman/covid19-radiography-database (accessed.
    [36] M. Siddhartha and A. Santra, "COVIDLite: A depth-wise separable deep neural network with white balance and CLAHE for detection of COVID-19," arXiv preprint arXiv:2006.13873, 2020.
    [37] "白平衡 是什麼? 正確設定相機白平衡,還原最精彩的色彩." https://hojenjen.com/camera-white-balance-setting/ (accessed.
    [38] S. M. Pizer et al., "Adaptive histogram equalization and its variations," Computer vision, graphics, and image processing, vol. 39, no. 3, pp. 355-368, 1987.
    [39] "[OpenCV] 淺談直方圖均衡化Histogram Equalization、AHE均衡、CLAHE均衡." https://medium.com/@cindylin_1410/%E6%B7%BA%E8%AB%87-opencv-%E7%9B%B4%E6%96%B9%E5%9C%96%E5%9D%87%E8%A1%A1%E5%8C%96-ahe%E5%9D%87%E8%A1%A1-clahe%E5%9D%87%E8%A1%A1-ebc9c14a8f96 (accessed.
