| Graduate Student | Liao, Che-Hui (廖哲暉) |
|---|---|
| Thesis Title | Preserving Facial Features during Image De-identification for Autism Screening |
| Advisor | Chen, Arbee L.P. (陳良弼) |
| Oral Defense Committee | Shen, Chih-Ya (沈之涯); Chien, Jen-Tzung (簡仁宗) |
| Degree | Master |
| Department | Department of Computer Science, College of Electrical Engineering and Computer Science |
| Year of Publication | 2024 |
| Academic Year of Graduation | 112 |
| Language | English |
| Number of Pages | 27 |
| Keywords | Autism, Image recognition, Privacy, De-identification |
Autism spectrum disorder (ASD) is a complex neurodevelopmental disorder, and early diagnosis is critical for intervention and support. Traditional diagnostic methods rely on behavioral observation and psychological assessment, and suffer from high subjectivity, demanding expertise requirements, and diagnostic delays. With advances in technology, researchers have begun exploring biomarkers to assist diagnosis, and facial features have attracted attention as potential biomarkers. Machine learning algorithms have demonstrated high accuracy in predicting autism from facial images.
However, facial image data suffers from data bias, which must be addressed by collecting more data, yet privacy concerns make data collection and sharing difficult. De-identification offers a good solution to both problems: it not only enables data sharing but also makes participants more willing to provide their data.
Most de-identification techniques, however, may also destroy facial features in the process, which creates a new problem, and autism-related facial features are highly correlated with identity information, which makes de-identification difficult. This study therefore proposes and applies an autoencoder that combines privacy-protection and feature-extraction networks to remove sensitive identity information while retaining the facial features needed for autism screening.
The experimental scenario involves multiple independent datasets, which are de-identified with the proposed model and then aggregated to train an autism classifier. The results show that the proposed model performs well in both feature retention and privacy protection, and outperforms existing methods in some respects. It achieves high autism classification accuracy and a low re-identification rate, indicating that the method effectively preserves the facial features needed for autism screening while protecting privacy, and providing a useful reference for future research and applications.
Autism is a complex neurodevelopmental disorder, and early diagnosis is important. Traditional diagnostic methods rely on behavioral observation and psychological assessments, which are highly subjective, require specialized expertise, and often result in diagnostic delays. With technological advancements, researchers have begun exploring the use of biomarkers to assist in diagnosis, with facial features attracting attention as potential biomarkers. Machine learning algorithms have demonstrated high accuracy in predicting autism through facial images.
However, facial image data is subject to bias, which can only be addressed by collecting more data, while privacy concerns make data collection and sharing challenging. To address both problems, de-identification techniques have been proposed: they allow data to be shared and make participants more willing to provide it. Most de-identification techniques, however, may destroy facial features in the process, and autism-related facial features are highly correlated with identity information, which makes direct de-identification more difficult. Therefore, this study applies an autoencoder that combines privacy-protection and feature-extraction networks to remove sensitive identity information while retaining the facial features necessary for autism screening.
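To make this design concrete, the following is a minimal, illustrative PyTorch sketch of such an autoencoder trained with a combined objective: a reconstruction term that keeps the output image natural, an identity term that pushes the de-identified output away from the input's identity embedding (privacy protection), and a feature term that keeps the embedding used for autism screening close to the original (feature retention). The network sizes, loss form, and weights here are assumptions for illustration only, not the thesis's exact architecture or training recipe.

```python
# Illustrative sketch (assumed details, not the thesis's exact design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder operating on 3x128x128 face crops."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def deid_loss(x, x_hat, id_net, feat_net, w_rec=1.0, w_id=1.0, w_feat=1.0):
    """Combined objective (illustrative weights):
    - reconstruction keeps the de-identified image close to a natural face,
    - the identity term penalizes similarity to the source identity embedding,
    - the feature term rewards similarity of the screening-relevant embedding.
    `id_net` and `feat_net` are assumed to be frozen, pretrained networks that
    map a batch of images to (B, D) embeddings, e.g. an ArcFace-style face
    recognizer and a facial-feature extractor used for autism screening."""
    rec = F.mse_loss(x_hat, x)
    id_sim = F.cosine_similarity(id_net(x_hat), id_net(x)).mean()
    feat_sim = F.cosine_similarity(feat_net(x_hat), feat_net(x)).mean()
    return w_rec * rec + w_id * id_sim - w_feat * feat_sim
```

Minimizing the identity similarity while maximizing the feature similarity is one straightforward way to express the trade-off the abstract describes between removing who a face belongs to and keeping what the face looks like for screening purposes.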
The experimental scenario involves multiple independent datasets: our method de-identifies each dataset, and the de-identified data are then aggregated to train an autism classifier. The results showed that the proposed model performed well in both feature retention and privacy protection and outperformed existing methods in some respects.
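The two reported quantities can be made concrete as follows: utility is the accuracy of the autism classifier trained on the pooled, de-identified images, and privacy is the re-identification rate, i.e., how often a face recognizer still matches a de-identified probe image to the correct identity in a gallery of original images. The sketch below gives a generic, self-contained definition of the re-identification rate on precomputed identity embeddings; it is an assumed form of the metric, not necessarily the thesis's exact evaluation protocol.

```python
# Generic re-identification-rate sketch on precomputed identity embeddings
# (e.g., from an ArcFace-style face recognizer). Lower is better for privacy.
import torch
import torch.nn.functional as F

def reidentification_rate(probe_emb, gallery_emb, probe_ids, gallery_ids):
    """Fraction of de-identified probes whose nearest gallery embedding
    (by cosine similarity) belongs to the same person."""
    probe = F.normalize(probe_emb, dim=1)      # (P, D) de-identified faces
    gallery = F.normalize(gallery_emb, dim=1)  # (G, D) original faces
    sims = probe @ gallery.T                   # (P, G) cosine similarities
    nearest = sims.argmax(dim=1)               # closest gallery face per probe
    matches = gallery_ids[nearest] == probe_ids
    return matches.float().mean().item()

# Toy usage with random 128-d embeddings for 10 identities:
if __name__ == "__main__":
    gallery = torch.randn(100, 128)
    probes = torch.randn(100, 128)
    ids = torch.arange(100) % 10
    print(reidentification_rate(probes, gallery, ids, ids))
```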