
Graduate Student: 張珮榕 (Chang, Pei-Jung)
Thesis Title: 氣胸影像在U-net家族上的應用 (Pneumothorax Data Analysis Using U-net Family)
Advisor: 陳素雲 (Huang, Su-Yun)
Committee Members: 洪弘 (Hung, Hung); 盧鴻興 (Lu, Horng-Shing); 謝文萍 (Hsieh, Wen-Ping)
Degree: Master's
Department: College of Science - Institute of Statistics
Publication Year: 2022
Graduation Academic Year: 110
Language: English
Pages: 36
Chinese Keywords: 機器學習 (machine learning); 神經網絡 (neural network)
    Pneumothorax refers to a collapsed lung that impairs breathing; in the most severe cases it compresses the heart, which can lead to hypoxia, shock, or even death. Because pneumothorax is difficult to recognize from outward appearance, an X-ray is usually taken and analyzed by a specialist, which often costs valuable time. If a machine-learning model could roughly localize the collapsed region of the lung directly from a patient's X-ray image, it should shorten the physician's diagnosis time.
    Identifying the collapsed region from an X-ray image is a segmentation task in machine learning. The best-known network architecture for segmentation is U-net: its encoder-decoder structure lets the network extract image information effectively while producing an output of the same size as the input. Beyond the original U-net, recent years have brought many variants built on the U-net backbone, which we collectively call the U-net family.
    In this thesis, we implement and compare five variants of the U-net backbone. The data are pneumothorax X-ray images from the Society for Imaging Informatics in Medicine (SIIM), and the experiments were run on an Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz and a Quadro RTX 8000 GPU.


    Pneumothorax is an abnormal collection of air in the pleural space between the lung and the chest wall, which may induce oxygen shortage and low blood pressure; in acute cases it can be fatal. Diagnosing pneumothorax by physical examination alone is difficult, especially for a small pneumothorax. A chest X-ray is usually used to confirm whether a person has pneumothorax, but the X-ray diagnostic test is time-consuming and tedious. Thus, the demand for automatic semantic image segmentation systems is rising rapidly.
    One of the most famous models for the segmentation task is U-net, whose encoder-decoder structure efficiently enables the learning process to extract and assemble information from the X-ray images. In recent years, more and more models have used U-net as their backbone; we call them the U-net family.
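The encoder-decoder idea above can be sketched without any deep-learning framework. The toy pass below is a minimal NumPy illustration of the shape bookkeeping only: a real U-net interleaves convolutions at every level, which are omitted here, and the function names (`max_pool2x2`, `upsample2x2`, `toy_unet_pass`) are invented for this sketch.

```python
import numpy as np

def max_pool2x2(x):
    """Downsample a (H, W, C) feature map by taking the max over 2x2 blocks."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def upsample2x2(x):
    """Nearest-neighbour upsampling: repeat each pixel into a 2x2 block."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def toy_unet_pass(image):
    """One encoder/decoder level with a skip connection (convolutions omitted)."""
    skip = image                      # encoder features saved for the skip connection
    encoded = max_pool2x2(image)      # encoder: halve the spatial resolution
    decoded = upsample2x2(encoded)    # decoder: restore the spatial resolution
    # Concatenate decoder output with the saved encoder features along the
    # channel axis, as U-net does before its final convolutions.
    return np.concatenate([decoded, skip], axis=-1)

x = np.random.rand(8, 8, 1)          # a tiny 8x8 single-channel "X-ray"
y = toy_unet_pass(x)
print(y.shape)                        # (8, 8, 2): same H x W as the input
```

The skip connection is what lets the decoder recover fine spatial detail lost during pooling, and the matching output size is what makes pixel-wise mask prediction possible.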
    In this thesis, we implemented medical image segmentation machine learning algorithms on pneumothorax images from the Society for Imaging Informatics in Medicine (SIIM), applying members of the U-net family as backbones. The experiments were run on an Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz and a Quadro RTX 8000 GPU.
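Comparing the models' predicted masks against ground truth requires some overlap score. The abstract does not name the metric used, so the Dice coefficient below is only an illustrative choice, shown because it is the measure most commonly reported for binary segmentation masks.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice score between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps keeps the score defined when both masks are empty.
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.zeros((4, 4), dtype=int)
target = np.zeros((4, 4), dtype=int)
pred[:2, :2] = 1       # predicted collapsed region: 4 pixels
target[:2, :3] = 1     # ground-truth region: 6 pixels, overlap = 4
print(round(dice_coefficient(pred, target), 2))  # 0.8
```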

    Chapter 1 - Introduction
    Chapter 2 - Method
    Chapter 3 - Experiments
    Chapter 4 - Conclusion and Perspective
    Appendix - Summary of the Eight Models

    Curiale, Ariel, Flavio Colavecchia, and German Mato (Feb. 2019). “Automatic quantification of the LV function and mass: A deep learning approach for cardiovascular MRI”. In: Computer Methods and Programs in Biomedicine 169, pp. 37–50. doi: 10.1016/j.cmpb.2018.12.002.
    Jakhar, Karan, Avneet Kaur, and Dr. Meenu Gupta (2019). Pneumothorax Segmentation: Deep Learning Image Segmentation to predict Pneumothorax. doi: 10.48550/ARXIV.1912.07329. url: https://arxiv.org/abs/1912.07329.
    Lin, Tsung-Yi et al. (2017). Focal Loss for Dense Object Detection. doi: 10.48550/ARXIV.1708.02002. url: https://arxiv.org/abs/1708.02002.
    Long, Jonathan, Evan Shelhamer, and Trevor Darrell (2015). “Fully convolutional networks for semantic segmentation”. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3431–3440.
    Morimoto, Toshinari and Su-Yun Huang (2020). TensorProjection Layer: A Tensor-Based Dimensionality Reduction Method in CNN. arXiv: 2004.04454 [stat.ML].
    Oktay, Ozan et al. (2018). “Attention u-net: Learning where to look for the pancreas”. In: arXiv preprint arXiv:1804.03999.
    Ronneberger, Olaf, Philipp Fischer, and Thomas Brox (2015). “U-net: Convolutional networks for biomedical image segmentation”. In: International Conference on Medical image computing and computer-assisted intervention. Springer, pp. 234–241.
    Wu, Haibing and Xiaodong Gu (2015). Max-Pooling Dropout for Regularization of Convolutional Neural Networks. doi: 10.48550/ARXIV.1512.01400. url: https://arxiv.org/abs/1512.01400.
    Zhou, Zongwei et al. (2018). “Unet++: A nested u-net architecture for medical image segmentation”. In: Deep learning in medical image analysis and multimodal learning for clinical decision support. Springer, pp. 3–11.
