
Graduate Student: Hsu, Jeh-Hsing (許潔馨)
Thesis Title: Quality Enhancement of Hybrid Optoacoustic and Ultrasound Images through Deconvolution and Deep Learning-Based Methods
Advisor: Lin, Hsiao-Chun Amy (林曉均)
Committee Members: Wang, Ting-Wei (王廷瑋); Sung, Yen-Ling (宋雁翎)
Degree: Master
Department: Department of Biomedical Engineering and Environmental Sciences, College of Nuclear Science
Year of Publication: 2025
Academic Year of Graduation: 113 (ROC calendar)
Language: Chinese
Pages: 79
Chinese Keywords: Optoacoustic Tomography, Hybrid Optoacoustic System, Deep Learning, Image Quality Improvement
English Keywords: Optoacoustic Tomography, Hybrid Optoacoustic-Ultrasound Imaging, deep learning, image quality improvement


    Optoacoustic Imaging (OA) has demonstrated significant potential in the fields of medicine and biomedical research due to its ability to provide high-contrast functional images and molecular-level information while overcoming the limitations of traditional optical imaging in deep tissue imaging. However, as OA imaging primarily focuses on functional information, it lacks detailed anatomical structure, making it challenging for clinical users to accurately identify tissue locations during application. To address this issue, Hybrid Optoacoustic Ultrasound Imaging (OPUS) technology integrates ultrasound imaging into optoacoustic imaging to provide structural information about tissues. This combination allows the imaging system to retain the functional features of OA imaging while clearly displaying spatial positioning and anatomical details of tissues, significantly enhancing the applicability and clinical utility of OA imaging. This enables clinicians to interpret and utilize the imaging information with greater precision.

    However, there is still room for improvement in the image quality of both modalities. OA image reconstruction faces a trade-off between speed and quality: Back Projection (BP) methods offer fast computation but poor image quality, while Model-Based (MB) methods produce higher-quality images but are computationally intensive, limiting their application in real-time imaging. Additionally, the resolution of ultrasound images remains suboptimal, particularly for applications requiring fine structural details and high resolution. Therefore, improving the quality of OA and ultrasound images while balancing speed and resolution remains a key focus of ongoing research in this field.
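The BP side of this trade-off is easy to see in code: each pixel value is simply a sum of sensor traces sampled at the corresponding time of flight, with no system matrix to invert. The sketch below is a minimal 2D delay-and-sum illustration with hypothetical geometry and sampling parameters, not the reconstruction pipeline used in this thesis:

```python
import numpy as np

def back_project(signals, sensor_pos, grid_x, grid_y, c, fs):
    """Naive 2D delay-and-sum back projection (illustrative only).

    signals:    (n_sensors, n_samples) recorded pressure traces
    sensor_pos: (n_sensors, 2) sensor coordinates in meters
    c:          speed of sound [m/s]
    fs:         sampling rate [Hz]
    """
    X, Y = np.meshgrid(grid_x, grid_y)
    img = np.zeros_like(X)
    n_samples = signals.shape[1]
    for trace, (sx, sy) in zip(signals, sensor_pos):
        # time of flight from every pixel to this sensor, in samples
        dist = np.hypot(X - sx, Y - sy)
        idx = np.clip(np.rint(dist / c * fs).astype(int), 0, n_samples - 1)
        img += trace[idx]
    return img / len(sensor_pos)
```

MB methods instead solve a regularized linear inverse problem against a discretized forward model, which is why they recover finer detail at a much higher computational cost.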

    This study applied deconvolution techniques, leveraging the Point Spread Function (PSF), to reduce blurring and enhance the resolution of ultrasound images in the OPUS system. Additionally, a deep learning-based denoising method was proposed for OA imaging. This approach involved optimizing the U-Net model through various data selection strategies, loss function weight combinations, and training epochs to improve its denoising performance.
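As an illustration of PSF-based deblurring, the sketch below implements classic Richardson-Lucy deconvolution with FFT-based convolution and a symmetric Gaussian PSF. The actual PSF estimate, deconvolution variant, and iteration count used in the thesis are not specified here, so every parameter below is a placeholder:

```python
import numpy as np

def fft_conv(img, psf_centered):
    """Circular convolution; the PSF is centered in its array."""
    kernel = np.fft.ifftshift(psf_centered)  # move PSF center to (0, 0)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))

def richardson_lucy(blurred, psf, n_iter=30):
    """Richardson-Lucy deconvolution for a symmetric PSF.

    For a symmetric PSF the mirrored PSF equals the PSF itself,
    so the same kernel is used in both convolution steps.
    """
    est = np.clip(blurred, 1e-12, None)  # positive initial estimate
    for _ in range(n_iter):
        ratio = blurred / (fft_conv(est, psf) + 1e-12)
        est = np.clip(est * fft_conv(ratio, psf), 0, None)
    return est

def gaussian_psf(size, sigma):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()
```

On a noiseless Gaussian-blurred point source, a few dozen iterations visibly sharpen the peak; with real, noisy ultrasound data the iteration count trades resolution gain against noise amplification.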

    Experimental results showed that, for ultrasound images, deconvolution significantly improved resolution, with strong performance on both phantom and in vivo images. For OA imaging, the deep learning model achieved the best PSNR and MSE scores with a loss function combination of SSIM×0.95 + MSE×0.05, effectively reducing image noise while preserving structural integrity. Furthermore, data selection strategies based on image information content significantly enhanced the model's stability and generalization ability, particularly its adaptation to new datasets.
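The reported weighting can be read as a composite objective of the form loss = 0.95 × (1 − SSIM) + 0.05 × MSE. The sketch below illustrates that combination with NumPy, using a simplified single-window SSIM rather than the standard sliding-window formulation; exactly how the thesis folds SSIM into the loss (e.g. 1 − SSIM vs. −SSIM) is an assumption here:

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def psnr(a, b, data_range=1.0):
    m = mse(a, b)
    return float('inf') if m == 0 else 10 * np.log10(data_range ** 2 / m)

def ssim_global(a, b, data_range=1.0):
    # Single-window SSIM over the whole image; the standard metric
    # averages this statistic over local sliding windows instead.
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = np.mean((a - mu_a) * (b - mu_b))
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def combined_loss(pred, target, w_ssim=0.95, w_mse=0.05):
    """SSIM x 0.95 + MSE x 0.05 objective (SSIM enters as 1 - SSIM)."""
    return w_ssim * (1.0 - ssim_global(pred, target)) + w_mse * mse(pred, target)
```

A heavy SSIM weight pushes the network toward preserving local structure, while the small MSE term anchors absolute pixel intensities, which matches the reported behavior of denoising without losing structural integrity.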

    With appropriate neural network training and parameter tuning, the quality of BP-reconstructed images can be enhanced while maintaining fast computation, and these methods were successfully applied to three-dimensional OA imaging data. The choice of loss function and the selection of training data play critical roles in improving image quality. Future research can explore alternative model architectures and preprocessing methods to achieve further gains, advancing the clinical application of OA and ultrasound imaging technologies.

    Contents
    Acknowledgements
    摘要 (Abstract in Chinese) i
    Abstract ii
    1 Introduction 1
      1.1 Preface 1
      1.2 Overview of Optoacoustic Imaging 2
        1.2.1 Optoacoustic Tomography 2
        1.2.2 Hybrid Optoacoustic System 2
        1.2.3 Research Objectives 3
    2 Literature Review 5
      2.1 Optoacoustic Theory 5
      2.2 Optoacoustic Tomography (OAT) Review 6
      2.3 Hybrid Optoacoustic System (OPUS) Review 8
      2.4 Deep Learning 10
        2.4.1 Convolutional Neural Networks (CNN) 11
        2.4.2 U-Net Architecture 17
        2.4.3 Transformer 18
    3 OPUS Image Improvement 21
      3.1 System and Dataset Overview 21
      3.2 Experimental Methods 24
      3.3 Experimental Results and Analysis 28
        3.3.1 Deconvolved In-Vivo Images 29
      3.4 Discussion 30
    4 OAT Image Improvement 35
      4.1 System and Dataset Overview 35
      4.2 Experimental Methods 40
      4.3 U-Net Architecture Overview 40
        4.3.1 Evaluation Metrics 42
        4.3.2 Applying the Metrics to Optoacoustic Image Quality Assessment 43
        4.3.3 Experimental Design 45
      4.4 Experimental Results and Analysis 48
        4.4.1 Evaluation of Prediction Results 49
      4.5 Discussion 64
    5 Future Work 71
    References 73
    Supplementary Material 77

