Author: 張延榮 (Chang, Yen-Jung)
Thesis Title: "One-Shot" Medical Image Artifact Reduction Through Attentive Generative Network with Internal Data Synthesis and Adversarial Training
Advisor: 何宗易 (Ho, Tsung-Yi)
Committee Members: 陳煥宗 (Chen, Hwann-Tzone), 史弋宇 (Shi, Yi-Yu)
Degree: Master
Department: College of Electrical Engineering and Computer Science, Department of Computer Science
Year of Publication: 2019
Graduation Academic Year: 107
Language: English
Number of Pages: 37
Chinese Keywords: generative adversarial network, medical image, artifact reduction
Foreign Keywords: One-Shot, Artifact
Medical images often carry many different kinds of artifacts, depending on many factors including the scan settings, machine condition, patient size and age, and the surrounding environment. On the other hand, existing deep-learning-based medical image artifact reduction methods are restricted by their specific training data, which can hardly cover every kind of artifact, so their clinical applicability is limited. In this thesis, we propose a "one-shot" medical image artifact reduction method that exploits the power of deep learning without requiring any prior training. Specifically, at test time we synthesize training data from the input image and use it to train a lightweight artifact reduction network. Without any training data prepared in advance, our method can handle almost any medical image containing varied or unknown artifacts. In our experiments on computed tomography and magnetic resonance images, our method reduces artifacts better than the most representative existing methods under both subjective and objective evaluation. To the best of the authors' knowledge, this is the first deep learning framework that reduces medical image artifacts without prior training.
Medical images exhibit various types of artifacts with different patterns and their mixtures, which depend on many factors including scan setting, machine condition, patient size and age, surrounding environment, etc.
On the other hand, existing deep-learning-based medical image artifact reduction methods are restricted by specific training data that contains predetermined artifact types and patterns, which can hardly capture all possibilities exhaustively. Accordingly, they only work well under the scenarios defined by the training data, resulting in limited clinical adoption.
In this thesis, we introduce "One-Shot" medical image Artifact Reduction (OSAR), which exploits the power of deep learning without using pre-trained networks. Specifically, at test time, we train a light-weight, image-specific artifact reduction network using data synthesized from the input image. Without requiring any prior training data, OSAR can work with almost any medical image that contains varying or unknown artifacts. We use Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) as vehicles and show that OSAR reduces artifacts better than the state of the art, both qualitatively and quantitatively, in comparable time. To the best of the authors' knowledge, this is the first deep learning framework that reduces medical image artifacts without a priori training.
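The abstract's core mechanism is test-time training: synthesize supervision from the single input image and fit a small network to it before applying that network back to the input. The sketch below illustrates that loop only; it is not the thesis implementation. The PyTorch framework, the small residual CNN, the additive Gaussian noise used as a stand-in for the thesis's internal data synthesis, the L1 objective, and the optimizer settings are all assumptions made for illustration, and the attentive generator and adversarial discriminator named in the title are omitted for brevity.

```python
# Minimal sketch of "one-shot" test-time artifact reduction (assumptions noted above).
import torch
import torch.nn as nn

class LightweightReducer(nn.Module):
    """Small image-to-image CNN that predicts an artifact-reduced image (illustrative)."""
    def __init__(self, channels=1, width=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # Residual formulation: estimate the artifact component and subtract it.
        return x - self.net(x)

def synthesize_pair(image, noise_sigma=0.05):
    """Build a (more-degraded, less-degraded) training pair from the input image alone.
    Additive Gaussian noise is a placeholder for the thesis's internal data synthesis."""
    corrupted = image + noise_sigma * torch.randn_like(image)
    return corrupted, image

def one_shot_reduce(image, steps=500, lr=1e-3):
    """Train an image-specific network at test time, then apply it to the input."""
    model = LightweightReducer(channels=image.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    model.train()
    for _ in range(steps):
        corrupted, target = synthesize_pair(image)
        opt.zero_grad()
        loss = loss_fn(model(corrupted), target)
        loss.backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        return model(image)

# Usage: `image` is a single scan slice as a (1, C, H, W) float tensor in [0, 1].
# restored = one_shot_reduce(image)
```

Because the network is trained on the one image it will be applied to, no external dataset or pre-trained weights are needed, which is what lets the approach handle artifact types not represented in any fixed training set.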