| Student | 翁俊傑 Weng, Chun-Chieh |
|---|---|
| Thesis title | 提升醫學影像分類的穩健性:從分割遮罩增強到抗噪訓練優化 (Enhancing Robustness in Medical Image Classification: From Segmentation Mask Augmentation to Noise-Resilient Training) |
| Advisor | 李祈均 Lee, Chi-Chun |
| Committee members | 郭柏志 Kuo, Po-Chih; 陳奕廷 Chen, Yi-Ting; 李衍緯 Lee, Yan-Wei |
| Degree | Master |
| Department | College of Electrical Engineering and Computer Science, Department of Electrical Engineering |
| Year of publication | 2024 |
| Graduation academic year | 113 (ROC calendar) |
| Language | English |
| Pages | 62 |
| Keywords | Medical image, segmentation mask augmentation, noise-robust training |
Abstract:

Medical image analysis plays a vital role in healthcare by enhancing diagnostic precision and optimizing treatment decisions. However, achieving robust and reliable classification performance remains challenging due to noise arising at different stages of the image generation and analysis pipeline. This thesis presents two complementary approaches to improving the noise robustness of deep learning models for medical imaging, targeting two primary sources of noise: variability in image quality and variability in segmentation accuracy. The first approach, Segmentation Mask Augmentation (SMA), introduces variability in segmentation masks to simulate real-world imperfections, thereby strengthening model resilience to segmentation noise. The second approach, AugMix-C, targets image-level corruptions encountered during acquisition and builds on traditional augmentation strategies by incorporating a contrastive loss in the feature space, improving robustness under various corruption scenarios. Together, these methods offer a comprehensive strategy for tackling noise challenges in medical imaging and advancing the reliability of automated analysis. Experimental results show significant improvements across diverse datasets, confirming the effectiveness of these approaches for enhancing model robustness in clinical applications.
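The abstract describes both methods only at a high level, so the two Python sketches below are illustrative reconstructions rather than the thesis's actual implementations. The first assumes SMA perturbs a binary segmentation mask with random morphological dilation/erosion and a small random shift before the mask is applied to the image, so the classifier is trained on imperfect masks; all function names and default values are hypothetical.

```python
# Illustrative sketch of segmentation-mask augmentation: perturb a binary
# mask with random dilation/erosion and a small random shift so that the
# downstream classifier sees imperfect masks during training. Names and
# default values are hypothetical, not taken from the thesis.
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion, shift

def augment_mask(mask, max_iters=3, max_shift=4, rng=None):
    """Return a randomly perturbed copy of a binary segmentation mask."""
    rng = rng or np.random.default_rng()
    out = mask.astype(bool)

    # Randomly grow or shrink the boundary to mimic over-/under-segmentation.
    iters = int(rng.integers(0, max_iters + 1))
    if iters:
        op = binary_dilation if rng.random() < 0.5 else binary_erosion
        out = op(out, iterations=iters)

    # Randomly translate the mask to mimic localization error.
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    out = shift(out.astype(float), (float(dy), float(dx)), order=0) > 0.5
    return out.astype(mask.dtype)

# Example: zero out everything outside a perturbed region of interest.
rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64)).astype(np.float32)
mask = np.zeros((64, 64), dtype=np.float32)
mask[20:40, 20:40] = 1.0
roi = image * augment_mask(mask, rng=rng)
```

The second sketch assumes an AugMix-style training step in which two independently augmented views of each image are encouraged to produce similar features via an NT-Xent-like contrastive loss, added on top of the usual supervised loss. Here `model.features`, `augment`, the temperature `tau`, and the weight `lam` are placeholders; the loss actually used by AugMix-C in the thesis may differ.

```python
# Illustrative training step: supervised loss on the clean batch plus a
# feature-space contrastive term between two augmented views, in the spirit
# of the AugMix-C idea described in the abstract. All names are placeholders.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.1):
    """Pull the features of two views of the same image together (NT-Xent)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                    # (2B, D)
    sim = z @ z.t() / tau                             # pairwise similarities
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))         # exclude self-pairs
    b = z1.size(0)
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)]).to(z.device)
    return F.cross_entropy(sim, targets)

def train_step(model, x, y, augment, optimizer, lam=0.5):
    """Supervised loss on the clean batch plus a contrastive consistency loss
    between the features of two independently augmented views."""
    view1, view2 = augment(x), augment(x)
    loss = F.cross_entropy(model(x), y) + lam * nt_xent(model.features(view1),
                                                        model.features(view2))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a setup like either sketch, the perturbations would be applied only during training; inference would still use the unmodified image and mask.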
[38] Ziteng Zhao and Guanyu Yang. Unsupervised contrastive learning of radiomics and deep features for label-efficient tumor classification. In International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), pages 252–261. SPRINGER INTERNATIONAL PUBLISHING AG, 2021.