
Graduate Student: Ho, Yueh-Ting (何岳庭)
Thesis Title: Tumor Area Semantic Segmentation of Breast Ultrasound Images Using DRA-UNet Based on Deep Learning (利用基於深度學習之DRA-UNet模型做乳房超音波影像腫瘤區塊切割)
Advisor: Jong, Tai-Lang (鐘太郎)
Committee Members: Huang, Yu-Wei (黃裕煒); Hsieh, Chi-Wen (謝奇文)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2021
Graduation Academic Year: 109
Language: Chinese
Pages: 72
Keywords: Breast ultrasound images, Tumor segmentation, Semantic segmentation, Deep learning, Dense-Res-Attention UNet (DRA-UNet), UNet, Fully Convolutional Network (FCN)
Breast cancer is a very common cancer; it ranks among the top three causes of death for women in Taiwan and poses a major threat to women's health. According to statistics covering 2009 to 2019, incidence has been rising, and ultrasound imaging has been widely applied to segmenting breast lumps because it is safe, painless, non-invasive, and radiation-free; compared with modalities such as CT and MRI, ultrasound is also cheaper, more portable, and more widely available. However, ultrasound images require the subjective judgement of radiologists with relevant experience, and because manual annotation is laborious and time-consuming, the total amount of data is scarce, which brings additional challenges to analyzing ultrasound images with deep learning.
    In recent years, deep learning has shown enormous potential in a wide variety of medical image segmentation tasks. As ultrasound-related medical instruments evolve toward miniaturized, efficient, portable equipment suited to everyday use, deep learning models should likewise be made as lightweight as possible, balancing time consumption, accuracy, and stability. Models built on the U-Net backbone stand out in particular, so this thesis takes six neural networks designed for other medical images, trains each end-to-end on raw breast ultrasound images and their ground truths to obtain the corresponding weights and classify every pixel, then combines their strengths into a novel architecture, DRA-UNet (Dense-Res-Attention UNet), which can help professionals delineate tumor regions in ultrasound images.
    To analyze the tumor segmentation results objectively, six error metrics (JSI, DSC, ACC, TPR, TNR, and Precision) were used to evaluate segmentation quality. The proposed DRA-UNet achieves on average 78.10% JSI, 85.79% DSC, 97.81% ACC, and 89.72% Precision, ranking first among all methods on these four metrics; its 88.31% TPR trails Attention UNet's 89.47%, and its 98.73% TNR trails MultiResUNet's 98.79%, ranking second on both. This confirms that the method is suitable for biomedical imaging applications, improves on existing methods, and can provide appropriate detection of tumors and lesions in the early stages of breast cancer.
    In summary, the proposed method has the following advantages: first, the trained model needs no manual parameter tuning, and the fully automatic segmentation system helps physicians and radiologists make judgements, saving valuable medical manpower and time; second, it achieves excellent pixel-level segmentation with fewer parameters and scarce training data; third, it maintains stable performance even on more difficult ultrasound images, such as those with extremely small lesions or severe acoustic shadowing.


    According to statistics in Taiwan, breast cancer ranks among the top three causes of death for women, posing a significant threat to women's health. From 2009 to 2019, as incidence rose, medical ultrasound imaging was widely adopted for segmenting breast lumps because of its safety, painlessness, non-invasiveness, and freedom from ionizing radiation. Furthermore, compared with other clinical imaging modalities such as CT and MRI, ultrasound is relatively cheap, portable, and widely available. Nevertheless, interpretation requires the subjective judgement of radiologists with relevant experience, and manual annotation is laborious and time-consuming, resulting in a scarcity of data and making it more challenging to apply deep learning to ultrasound image analysis.
    In recent years, deep learning has demonstrated its potential in a vast repertoire of biomedical image segmentation tasks. With the development of medical equipment, professionals prefer miniaturized ultrasound devices for efficiency and portability; deep learning models should therefore be as lightweight as possible, striking a balance among time consumption, accuracy, and stability. In this thesis, six existing neural networks for biomedical images were trained end-to-end on raw breast ultrasound images and their ground truths to derive the corresponding weights and classify the lesions pixel by pixel. Combining the strengths of these previously proposed structures, a novel architecture, DRA-UNet (Dense-Res-Attention UNet), was developed to assist professionals in delineating the tumor area.
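Of the components folded into DRA-UNet, the attention gate borrowed from Attention UNet [34] is the easiest to illustrate in isolation. Below is a minimal numpy sketch of additive attention over skip-connection features; it simplifies the published gate (the 1x1 convolutions become per-pixel matrix multiplications, and the resampling of the gating signal is omitted), and all names (`attention_gate`, `w_x`, `w_g`, `psi`) are illustrative rather than taken from the thesis:

```python
import numpy as np

def attention_gate(x, g, w_x, w_g, psi):
    """Additive attention gate: re-weight skip features x by a mask
    computed from x and the coarser gating signal g, both flattened
    to shape (num_pixels, channels)."""
    q = np.maximum(x @ w_x + g @ w_g, 0.0)      # ReLU of the summed projections
    alpha = 1.0 / (1.0 + np.exp(-(q @ psi)))    # sigmoid -> per-pixel weight in (0, 1)
    return x * alpha                            # suppress irrelevant regions

# Toy example: 4 pixels, 3 feature channels, 2 intermediate channels.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))                 # skip-connection features
g = rng.standard_normal((4, 3))                 # gating signal from a deeper layer
w_x, w_g = rng.standard_normal((3, 2)), rng.standard_normal((3, 2))
psi = rng.standard_normal((2, 1))
out = attention_gate(x, g, w_x, w_g, psi)       # same shape as x, attenuated
```

Because the sigmoid weight lies strictly between 0 and 1, the gate can only attenuate skip features, never amplify them; in the full network this lets the decoder focus on likely tumor pixels before concatenation.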
    For the sake of objectively analyzing the results of tumor segmentation, six error metrics (JSI, DSC, ACC, TPR, TNR, and Precision) were used to evaluate the models. The DRA-UNet achieves the highest JSI of 78.10%, DSC of 85.79%, ACC of 97.81%, and Precision of 89.72%, while its TPR of 88.31% and TNR of 98.73% are second only to Attention UNet (89.47%) and MultiResUNet (98.79%), respectively. Thus, the proposed method improves on existing methods and can provide appropriate detection of tumors and lesions in the early stages of breast cancer.
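All six metrics can be computed from the per-pixel confusion counts between a predicted binary mask and its ground truth. A minimal numpy sketch (the function and variable names are illustrative, not from the thesis):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Compute JSI, DSC, ACC, TPR, TNR and Precision for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)     # tumor pixels correctly segmented
    tn = np.sum(~pred & ~gt)   # background pixels correctly rejected
    fp = np.sum(pred & ~gt)    # background wrongly marked as tumor
    fn = np.sum(~pred & gt)    # tumor pixels missed
    return {
        "JSI": tp / (tp + fp + fn),           # Jaccard similarity index (IoU)
        "DSC": 2 * tp / (2 * tp + fp + fn),   # Dice similarity coefficient
        "ACC": (tp + tn) / (tp + tn + fp + fn),
        "TPR": tp / (tp + fn),                # sensitivity / recall
        "TNR": tn / (tn + fp),                # specificity
        "Precision": tp / (tp + fp),
    }
```

Note that DSC = 2·JSI / (1 + JSI), so a method that ranks first in JSI necessarily ranks first in DSC as well, which is consistent with the results reported above.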
    In summary, the proposed method has the following advantages. First, the trained model requires no manual parameter tuning; the fully automatic segmentation system assists physicians and radiologists and saves valuable healthcare manpower and time. Second, it delivers excellent pixel-level segmentation despite a small number of parameters and sparse training data. Third, it maintains stable performance on difficult ultrasound images, such as those with very tiny lesions or severe acoustic shadowing.

    Table of Contents
    Abstract (Chinese)
    Abstract (English)
    Acknowledgements
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1  Introduction
      1.1  Preface
      1.2  Literature Review
        1.2.1  Image Processing
        1.2.2  Artificial Intelligence and Deep Learning
      1.3  Research Objectives
      1.4  Thesis Organization
    Chapter 2  Neural Networks and Deep Learning
      2.1  Introduction to Artificial Neural Networks
        2.1.1  Artificial Neural Networks
        2.1.2  Activation Functions and Loss Functions
        2.1.3  Training Process of Artificial Neural Networks
      2.2  Introduction to Convolutional Neural Networks
        2.2.1  Convolutional Neural Networks
        2.2.2  Pooling Layers
        2.2.3  Padding
      2.3  FCN Models
        2.3.1  FCN (Fully Convolutional Network) [25]
        2.3.2  FC-DenseNet [28]
      2.4  UNet Models
        2.4.1  UNet [32]
        2.4.2  ResUNet [33]
        2.4.3  Attention UNet [34]
        2.4.4  MultiResUNet [36]
      2.5  Differences between FCN and UNet
      2.6  DRA-UNet Architecture
        2.6.1  Dense Block
        2.6.2  Res Path
        2.6.3  Attention Gate
    Chapter 3  Experimental Methods and Results
      3.1  Experiment Flowchart
      3.2  Dataset
      3.3  Error Metrics
        3.3.1  Accuracy
        3.3.2  Jaccard Similarity Index (IoU)
        3.3.3  Dice Similarity Coefficient (DSC)
        3.3.4  Precision
        3.3.5  Sensitivity (Recall)
        3.3.6  Specificity
        3.3.7  ROC & AUC
      3.4  Data Processing and Augmentation
        3.4.1  Holdout Cross-Validation
        3.4.2  ImageDataGenerator
      3.5  Results and Discussion
        3.5.1  Training Results
        3.5.2  Augmentation Results
        3.5.3  Segmentation Results
    Chapter 4  Conclusion and Future Work
      4.1  Conclusion
      4.2  Future Work
    References

    [1] Ministry of Health and Welfare, Executive Yuan (行政院衛生福利部), Annual Report of Cause-of-Death Statistics in Taiwan, ROC year 108 (2019). 2020.
    [2] Berg, W.A., et al., Combined screening with ultrasound and mammography vs mammography alone in women at elevated risk of breast cancer. 2008. 299(18): p. 2151-2163.
    [3] Wells, P. and M. Halliwell, Speckle in ultrasonic imaging. Ultrasonics, 1981. 19(5): p. 225-229.
    [4] Liu, B., et al., Probability density difference-based active contour for ultrasound image segmentation. 2010. 43(6): p. 2028-2042.
    [5] Huang, Q., et al., Breast ultrasound image segmentation: a survey. 2017. 12(3): p. 493-507.
    [6] Xian, M., Y. Zhang, and H.-D. Cheng, Fully automatic segmentation of breast ultrasound images based on breast characteristics in space and frequency domains. Pattern Recognition, 2015. 48(2): p. 485-497.
    [7] Rodrigues, P.S. and G.A. Giraldi, Improving the non-extensive medical image segmentation based on Tsallis entropy. Pattern Analysis and Applications, 2011. 14(4): p. 369-379.
    [8] Yap, M.H., E.A. Edirisinghe, and H.E. Bez. Fully automatic lesion boundary detection in ultrasound breast images. in Medical Imaging 2007: Image Processing. 2007. International Society for Optics and Photonics.
    [9] Lo, C., et al., Computer-aided multiview tumor detection for automated whole breast ultrasound. 2014. 36(1): p. 3-17.
    [10] Moon, W.K., et al., Tumor detection in automated breast ultrasound images using quantitative tissue clustering. 2014. 41(4): p. 042901.
    [11] Gomez, W., et al., Computerized lesion segmentation of breast ultrasound based on marker‐controlled watershed transformation. 2010. 37(1): p. 82-95.
    [12] Gu, P., et al., Automated 3D ultrasound image segmentation to aid breast cancer image interpretation. 2016. 65: p. 51-58.
    [13] 鈕采紋, Automatic tumor segmentation of breast ultrasound images using distance-regularized level set evolution with a morphology-based initial contour. Master's thesis, Department of Electrical Engineering, National Tsing Hua University, Hsinchu, 2016. p. 66.
    [14] 謝洵, Automatic tumor segmentation of breast ultrasound images using a distance-regularized level set method with guided image filtering and L0 gradient minimization smoothing as preprocessing and morphological features for the initial contour. Master's thesis, Department of Electrical Engineering, National Tsing Hua University, Hsinchu, 2017. p. 76.
    [15] Wang, Z., Deep learning in medical ultrasound image segmentation: A review. arXiv preprint, 2020.
    [16] Joo, S., et al., Computer-aided diagnosis of solid breast nodules: use of an artificial neural network based on multiple sonographic features. 2004. 23(10): p. 1292-1300.
    [17] Li, S., et al., Computer-aided assessment of breast density: comparison of supervised deep learning and feature-based statistical learning. 2018. 63(2): p. 025005.
    [18] 林鉉博, Tumor detection and contour segmentation in breast ultrasound images using deep-learning-based Mask-RCNN. Master's thesis, Department of Electrical Engineering, National Tsing Hua University, Hsinchu, 2018. p. 60.
    [19] Mitchell, T.M., Machine learning. 1997.
    [20] saloni1297, Introduction to Artificial Neural Networks. 2019.
    [21] ujjwalkarn, Introduction to Neural Networks. 2016.
    [22] Chollet, F., Deep learning with Python. Vol. 361. 2018: Manning New York.
    [23] Shyamal Patel, J.P., Introduction to Deep Learning: What Are Convolutional Neural Networks? 2017.
    [24] Ivan, [Object Detection] FCN for Semantic Segmentation. 2019.
    [25] Long, J., E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. in Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.
    [26] Zeiler, M.D. and R. Fergus. Visualizing and understanding convolutional networks. in European conference on computer vision. 2014. Springer.
    [27] Dumoulin, V. and F. Visin, A guide to convolution arithmetic for deep learning. arXiv preprint, 2016.
    [28] Jégou, S., et al. The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation. in Proceedings of the IEEE conference on computer vision and pattern recognition workshops. 2017.
    [29] Huang, G., et al. Densely connected convolutional networks. in Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
    [30] He, K., et al. Deep residual learning for image recognition. in Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
    [31] Tsang, S.-H., DenseNet — Dense Convolutional Network (Image Classification). 2018.
    [32] Ronneberger, O., P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. in International Conference on Medical image computing and computer-assisted intervention. 2015. Springer.
    [33] Zhang, Z., et al., Road extraction by deep residual u-net. 2018. 15(5): p. 749-753.
    [34] Oktay, O., et al., Attention u-net: Learning where to look for the pancreas. 2018.
    [35] Vaswani, A., et al., Attention is all you need. 2017.
    [36] Ibtehaz, N. and M.S. Rahman, MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Networks, 2020. 121: p. 74-87.
    [37] Szegedy, C., et al. Going deeper with convolutions. in Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.
    [38] Wang, P., et al. Understanding convolution for semantic segmentation. in 2018 IEEE winter conference on applications of computer vision (WACV). 2018. IEEE.
    [39] Srivastava, R.K., K. Greff, and J. Schmidhuber, Highway networks. arXiv preprint, 2015.
    [40] Larsson, G., M. Maire, and G. Shakhnarovich, FractalNet: Ultra-deep neural networks without residuals. arXiv preprint, 2016.
    [41] Garcia-Garcia, A., et al., A survey on deep learning techniques for image and video semantic segmentation. 2018. 70: p. 41-65.
