Graduate Student: Lu, Kuan-Yi (呂冠頤)
Thesis Title: Automatic segmentation in MRI brain tissue images using a Multi-modality U-net
Advisor: Hsu, Ching-Han (許靖涵)
Committee Members: Peng, Hsu-Hsia (彭旭霞); Peng, Shin-Lei (彭馨蕾)
Degree: Master
Department: Department of Biomedical Engineering and Environmental Sciences, College of Nuclear Science
Year of Publication: 2021
Graduation Academic Year: 109
Language: Chinese
Pages: 104
Keywords: Brain tissue extraction, Brain tissue segmentation, Brain MRI, Multi-modality U-net, Pseudo-color image
Automatic segmentation of MRI brain tissue images is important for quantitative analysis across all ages in large-scale studies. High-resolution brain MRI images contain non-brain tissues such as the eyeballs, muscle, fat, and skin, which pose a major obstacle to brain tissue segmentation and subsequent analysis. Skull stripping is therefore performed before brain tissue segmentation. We propose to perform brain extraction with a U-net architecture; on adult brains, it achieves an average Dice coefficient of 0.9825.
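The skull-stripping step described above reduces to predicting a binary brain mask and applying it to the image. A minimal sketch of the masking step only (the mask itself would come from the trained U-net; the function name here is hypothetical):

```python
import numpy as np

def apply_brain_mask(image, mask):
    """Zero out non-brain voxels (skull, skin, eyeballs, ...) using a
    binary brain mask such as one predicted by a segmentation network."""
    return image * (mask > 0)

# Toy slice: the mask keeps only the central 2x2 "brain" region.
img = np.arange(16).reshape(4, 4)
mask = np.zeros((4, 4), dtype=int)
mask[1:3, 1:3] = 1
brain = apply_brain_mask(img, mask)
print(int(brain.sum()))  # 5 + 6 + 9 + 10 = 30
```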
For brain tissue segmentation, we propose a multi-modality U-net architecture that automatically segments MRI brain images into multiple tissue classes. The method requires anatomical images from two MRI scan sequences and is applied to two different datasets: T1- and T2-weighted images of isointense-phase infant brains, and T1-weighted and T2-FLAIR images of adult brains; its results are compared with segmentation using a single MRI scan sequence. On the two datasets, the method achieves average Dice coefficients of 0.90 and 0.88, respectively, over all segmented tissue classes, showing that it produces accurate segmentations on both. We also examine the results of using pseudo-color images as input and compare the segmentation results across the three anatomical planes of an isotropic dataset.
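The abstract does not spell out how the pseudo-color input images are built; one common construction, sketched here purely as an assumption, normalizes the co-registered modalities to [0, 1] and packs them into the channels of a single RGB-like image (`to_pseudo_color` and the averaged third channel are illustrative choices, not the author's stated method):

```python
import numpy as np

def to_pseudo_color(t1, t2):
    """Hypothetical sketch: min-max normalize each modality and stack
    the results as the channels of one pseudo-color image."""
    def norm(x):
        x = x.astype(np.float64)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    r, g = norm(t1), norm(t2)
    b = (r + g) / 2.0                    # filler channel; choice is arbitrary
    return np.stack([r, g, b], axis=-1)  # shape (H, W, 3)

# Toy 2x2 "slices" standing in for co-registered T1 / T2 data.
t1 = np.array([[0, 100], [200, 300]])
t2 = np.array([[50, 50], [0, 100]])
img = to_pseudo_color(t1, t2)
print(img.shape)  # (2, 2, 3)
```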
Automatic segmentation of MRI brain tissue images is important for quantitative analysis in large-scale studies at all ages. High-resolution brain MRI images contain non-brain tissues such as the eyeballs, muscle, fat, and skin, which are major obstacles to automatic brain tissue segmentation and subsequent analysis; the skull is therefore removed before brain tissue segmentation. We propose to extract brain tissue using a U-net architecture, which achieves an average Dice coefficient of 0.9825.
For brain tissue segmentation, we propose a method that automatically segments MRI brain images into a number of tissue classes using a multi-modality U-net. The method requires anatomical images from two MRI scan sequences and is applied to two different datasets: isointense-phase infant brains in T1- and T2-weighted images, and adult brains in T1-weighted and T2-FLAIR images. The results are compared with those of segmentation using a single MRI scan sequence. The method obtains average Dice coefficients over all segmented tissue classes of 0.90 and 0.88 for the two datasets, respectively, demonstrating accurate segmentation on both. We also explore the use of pseudo-color images as input and compare the segmentation results across the three anatomical planes of the isotropic dataset.
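The Dice coefficients reported above measure the overlap between a predicted mask and the ground truth, 2|A∩B| / (|A| + |B|). A small self-contained example of the metric:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    # Convention: two empty masks count as a perfect match.
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy example: two 4x4 masks of 8 pixels each, overlapping in 4 pixels.
a = np.zeros((4, 4), dtype=int); a[:2, :] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, :] = 1
print(dice_coefficient(a, b))  # 2*4 / (8+8) = 0.5
```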