| Graduate Student: | 羅宇彣 Lo, Yu-Wen |
|---|---|
| Thesis Title: | 使用HarDNet加速DeepLabv3+進行組織病理學細胞檢測 (Speeding up DeepLabv3+ with HarDNet for Histopathological Cell Detection) |
| Advisor: | 林永隆 Lin, Youn-Long |
| Committee Members: | 王廷基 Wang, Ting-Chi; 黃俊達 Huang, Juinn-Dar |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science, Department of Computer Science |
| Publication Year: | 2024 |
| Graduation Academic Year: | 112 (ROC calendar) |
| Language: | Chinese |
| Number of Pages: | 30 |
| Keywords (Chinese): | 細胞檢測、組織分割、深度學習 |
| Keywords (English): | cell detection, tissue segmentation, deep learning |

Accurate cell detection is paramount in biomedical research, spanning crucial areas such as cancer diagnosis, drug development, and studies of cellular mechanisms. Computer-assisted systems offer an effective and promising way to explore cell behavior and tissue structure in depth. Recent advances have brought about the debut of the OCELOT dataset, specifically tailored for cell detection in histopathology; it includes images from diverse organs showcasing overlapping cell and tissue structures. The importance of the OCELOT dataset lies in the valuable insight it provides into the complex relationships between surrounding tissue structures and individual cells. In this thesis, we propose a novel cell detection method based on our previous competition work. We adopt a two-branch architecture that takes full advantage of the relationship between tissues and cells to enhance cell detection accuracy, and we further optimize the model's performance by using different loss weights to focus on specific classes. By replacing the backbone of DeepLabv3+ with HarDNet68, we improve cell detection accuracy beyond previous methods while achieving better speed than the original DeepLabv3+ with an Xception backbone. Our method yields highly encouraging results for cell detection on the OCELOT dataset, improving the accuracy to 0.7530 and surpassing the first-place result of 0.7243 in the OCELOT 2023 competition.
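
The following is a minimal, self-contained PyTorch sketch illustrating the two ideas summarized above: a two-branch design in which the tissue branch's soft prediction is concatenated into the cell branch's input, and a class-weighted cross-entropy loss that emphasizes selected cell classes. The `TinySegNet` placeholder, channel counts, class counts, and weight values are illustrative assumptions only; the thesis's actual branches are DeepLabv3+ networks with a HarDNet68 encoder, and its exact fusion and weighting scheme may differ.

```python
# Illustrative sketch only: TinySegNet stands in for a real segmentation
# network (e.g., DeepLabv3+ with a HarDNet68 encoder); all sizes and loss
# weights below are assumptions, not the thesis's actual configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinySegNet(nn.Module):
    """Placeholder per-pixel classifier standing in for a full segmentation model."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, out_ch, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)


class TwoBranchCellDetector(nn.Module):
    """Tissue branch predicts tissue classes; its soft output conditions the cell branch."""

    def __init__(self, n_tissue_classes: int = 2, n_cell_classes: int = 3):
        super().__init__()
        self.tissue_branch = TinySegNet(3, n_tissue_classes)
        # Cell branch sees RGB plus the tissue probability maps as extra channels.
        self.cell_branch = TinySegNet(3 + n_tissue_classes, n_cell_classes)

    def forward(self, cell_image: torch.Tensor, tissue_image: torch.Tensor):
        tissue_logits = self.tissue_branch(tissue_image)
        # Resize the tissue prediction to the cell image resolution before fusing.
        tissue_prob = F.interpolate(
            tissue_logits.softmax(dim=1),
            size=cell_image.shape[-2:],
            mode="bilinear",
            align_corners=False,
        )
        cell_logits = self.cell_branch(torch.cat([cell_image, tissue_prob], dim=1))
        return tissue_logits, cell_logits


# Class-weighted cross-entropy: a larger weight on a chosen class (here a
# hypothetical third cell class) makes training focus on that class.
cell_criterion = nn.CrossEntropyLoss(weight=torch.tensor([0.5, 1.0, 2.0]))

model = TwoBranchCellDetector()
cell_img = torch.randn(1, 3, 256, 256)
tissue_img = torch.randn(1, 3, 256, 256)
tissue_logits, cell_logits = model(cell_img, tissue_img)
print(tissue_logits.shape, cell_logits.shape)  # [1, 2, 256, 256] and [1, 3, 256, 256]
```

Feeding the tissue probability maps to the cell branch as extra input channels is one simple way to let cell detection use tissue context; other fusion strategies (for example at the feature level) are also possible under the same two-branch idea.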