
Graduate Student: Hsu, Kai-Yi (許楷翊)
Thesis Title: Advancing Drosophila Brain Analysis: Automated 3D Neuropil Segmentation with Deep Learning
Advisors: Lo, Chung-Chuan (羅中泉); Shih, Chi-Tin (施奇廷)
Committee Members: Chiang, Ann-Shyn (江安世); Chu, Li-An (朱麗安); Chen, Nan-Yow (陳南佑)
Degree: Doctor
Department: College of Life Sciences and Medicine, Institute of Systems Neuroscience
Year of Publication: 2024
Academic Year of Graduation: 113
Language: English
Number of Pages: 45
Keywords: Connectomics, Fluorescence Image, U-Net, YOLO, Image Segmentation, Anatomical Analysis
    Brain atlases, which provide information on the distribution of genes, proteins, neurons, or anatomical regions, play a crucial role in modern neuroscience research. To analyze the spatial distribution of these elements across images of different brain samples, individual brain images are typically warped and registered to a standard brain template. However, the warping and registration process can introduce spatial errors that severely degrade analytical accuracy. To address this problem, we developed an automated method for segmenting neuropils in fluorescence images of Drosophila brains from the FlyCircuit database. This technique allows future brain-atlas studies to perform precise analyses at the individual level, without warping and registration to a standard brain template.

    Our method, named LYNSU (Locating by YOLO and Segmenting by U-Net), consists of two stages. In the first stage, a YOLOv7 model rapidly locates neuropils and extracts small-scale 3D images as input for the second-stage model; this stage achieved 99.4% accuracy in neuropil localization. In the second stage, a 3D U-Net model segments the neuropils. LYNSU achieves highly accurate segmentation with a training set of only 16 brain images. We demonstrated LYNSU on six distinct neuropils or structures, reaching segmentation accuracy comparable to professional manual annotation, with a 3D Intersection-over-Union (IoU) of 0.869. Our method takes only about 7 seconds to segment one neuropil, with performance on par with human annotators.

    To demonstrate an application of LYNSU, we applied it to all female Drosophila brains in the FlyCircuit database to study the asymmetry of the mushroom bodies (MBs), the learning centers of the fly. We used LYNSU to segment the bilateral MBs and compared the left and right MB volumes of each individual. Notably, 10.14% of the 8,703 valid brain samples showed a bilateral volume difference exceeding 10%. This study demonstrates the potential of the proposed method for high-throughput anatomical analysis and for constructing the Drosophila brain connectome.


    In contemporary neuroscience studies, brain atlases serve as vital resources, offering insight into the spatial arrangement of elements such as genes, proteins, neurons, and anatomical structures. Traditionally, researchers have examined the spatial distribution of these components across brain samples by warping and registering individual brain images to a standardized template. However, deformation and registration often introduce spatial inaccuracies that compromise the precision of subsequent analyses. To overcome this limitation, we have devised an automated approach for segmenting neuropils in fluorescent images of Drosophila brains sourced from the FlyCircuit database. This technique enables future brain atlas investigations to conduct precise individual-level examinations without warping or registering images onto a standardized brain template, thereby enhancing overall accuracy and reliability.

    We developed a two-stage approach called LYNSU, which stands for "Locating by YOLO and Segmenting by U-Net". The first stage employs the YOLOv7 framework to swiftly locate neuropils and extract compact 3D images, which then serve as input for the second-stage model. This initial step achieved 99.4% accuracy in pinpointing neuropils. The second stage uses a 3D U-Net architecture to segment the neuropils. Despite relying on a limited training dataset of just 16 brain scans, LYNSU accomplishes highly accurate segmentation. We tested LYNSU on six distinct neuropils or structures, achieving segmentation accuracy that rivals expert manual delineation, with a 3D Intersection-over-Union (IoU) score of 0.869. Our technique segments a neuropil in approximately 7 seconds while matching the performance of human annotators.
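The two-stage pipeline can be sketched in a few lines. This is a minimal illustration, not the thesis code: `crop_roi` assumes a hypothetical (z0, z1, y0, y1, x0, x1) bounding-box convention for the detector output with an assumed padding margin, while `iou_3d` implements the standard 3D Intersection-over-Union used above to score segmentation quality.

```python
import numpy as np

def crop_roi(volume, bbox, margin=4):
    """Crop a 3D subvolume around a detected bounding box.

    bbox = (z0, z1, y0, y1, x0, x1) is a hypothetical convention for the
    detector output; margin pads the crop so the neuropil is not clipped.
    """
    z0, z1, y0, y1, x0, x1 = bbox
    zs = slice(max(z0 - margin, 0), min(z1 + margin, volume.shape[0]))
    ys = slice(max(y0 - margin, 0), min(y1 + margin, volume.shape[1]))
    xs = slice(max(x0 - margin, 0), min(x1 + margin, volume.shape[2]))
    return volume[zs, ys, xs]

def iou_3d(pred, truth):
    """3D Intersection-over-Union between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return np.logical_and(pred, truth).sum() / union

# Toy example: two overlapping 10^3 cubes inside a 20^3 volume.
a = np.zeros((20, 20, 20), dtype=bool); a[2:12, 2:12, 2:12] = True
b = np.zeros_like(a); b[4:14, 4:14, 4:14] = True
print(round(float(iou_3d(a, b)), 3))  # overlap 8^3 voxels over a union of 1488
```

In the full pipeline, the crop produced by the first stage would be fed to the 3D U-Net, and `iou_3d` would compare the predicted mask against a manual annotation.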

    To demonstrate LYNSU's practical utility, we applied it to all female fruit fly brains in the FlyCircuit repository to examine the symmetry of the mushroom bodies (MBs), the learning centers of the Drosophila brain. We used LYNSU to segment the MBs on both sides of the brain and then compared their respective volumes for each specimen. Notably, our analysis revealed that 10.14% of the 8,703 viable brain samples exhibited a volume discrepancy greater than 10% between the left and right MBs. This investigation highlights the potential of our technique for large-scale anatomical studies and for the construction of comprehensive neural connectivity maps in Drosophila brains.
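The bilateral comparison reduces to a simple calculation on the two segmented masks. The sketch below assumes one plausible normalization (volume difference relative to the larger side); the abstract does not specify the exact formula, so `asymmetry_index` is illustrative only.

```python
import numpy as np

def asymmetry_index(left_mask, right_mask, voxel_volume=1.0):
    """Percent volume difference between left and right MB masks.

    Normalized by the larger side -- one plausible convention; the exact
    normalization used in the study is not specified here.
    """
    v_left = left_mask.sum() * voxel_volume
    v_right = right_mask.sum() * voxel_volume
    return abs(v_left - v_right) / max(v_left, v_right) * 100.0

# Toy masks: the right MB is 15% smaller than the left.
left = np.ones((10, 10, 10), dtype=bool)   # 1000 voxels
right = np.zeros_like(left)
right.flat[:850] = True                    # 850 voxels

flagged = asymmetry_index(left, right) > 10.0  # the 10% threshold used in the study
print(flagged)
```

Applied per specimen, a check like this is what flags the 10.14% of samples whose bilateral volume difference exceeds 10%.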

    Table of Contents
    Abstract i
    中文摘要 iii
    Abbreviations v
    Acknowledgements vi
    Chapter 1. Introduction 1
    Chapter 2. Materials and Methods 5
      2.1 Dataset 5
      2.2 Data Diversity and Manual Annotation 6
      2.3 Preliminary Segmentation Algorithm Validation 7
      2.4 Initial Two-Stage Approach: Combining YOLOv7 and FCN 9
      2.5 Neuropils Detection and Localization 11
      2.6 Neuropils Segmentation 13
      2.7 Comparative Evaluation of Segmentation Algorithms 16
      2.8 Volume Analysis and Consistency Assessment 17
    Chapter 3. Results 19
      3.1 Whole Brain Region Segmentation Using 2D U-Net 19
      3.2 Two-Stage Brain Region Segmentation: Combining YOLOv7 and FCN 21
      3.3 LYNSU: YOLO+3D U-Net 22
    Chapter 4. Discussion 37
    Data, videos, apparatus accessibility 40
    References 41

