
Student: Huang, Shao-Yuan
Thesis Title: Managing Levels of Detail for Remote Engineering Collaborations in Augmented Reality
Advisor: Chu, Chih-Hsing
Committee Members: Wang, I-Jan; Lin, Yu-Hsun
Degree: Master
Department: College of Engineering, Department of Industrial Engineering and Engineering Management
Year of Publication: 2024
Academic Year of Graduation: 112
Language: Chinese
Pages: 78
Keywords: Remote Engineering Collaboration; Augmented Reality; Level of Detail; Image Processing; Image Segmentation; Graphic Rendering
    Under today's globalized manufacturing model, production sites are distributed across different regions, and restrictions such as those imposed by the pandemic sometimes prevent experts from physically entering the field. In response, augmented reality (AR) technology combined with network transmission enables remote engineering collaboration: the site shares its live view, and remote personnel provide guidance through virtual annotations. In semiconductor manufacturing environments, protecting confidential information such as machine types and facility layouts is especially important, so applying this collaboration model requires a mechanism for protecting image content, lest streaming images from the production site inadvertently leak trade secrets, harming the company's interests or violating non-disclosure agreements. To this end, this study applies the concept of Level of Detail (LOD) in an AR application as a security mechanism for remote engineering collaboration, presenting sensitive regions of the image at a reduced level of detail. Two image content control methods are proposed: the first uses computer vision techniques to segment specific regions and blur them; the second uses a pre-built 3D model of the scene, combining computer graphics rendering with real-virtual registration to overlay the model onto the image and thereby occlude sensitive regions. Finally, quantitative indicators including frame rate, precision, recall, and F1 score are used to evaluate and compare the two methods, and a remote maintenance test scenario demonstrates their practical value and limitations, offering a reference mechanism for protecting content when sharing real-time engineering imagery.
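    The image-based method described above lends itself to a short sketch. The Python fragment below, a minimal illustration assuming OpenCV and scikit-image rather than the thesis's actual implementation (which runs the AR client on a head-mounted display with a remote processing module), seeds an active contour (the Snakes model of Kass et al. [20]) from operator-provided control points, rasterizes the converged contour into a binary mask, and mosaics the masked region. The function name occlude_region, the parameter values, and the block size are illustrative assumptions.

        import cv2
        import numpy as np
        from skimage.filters import gaussian
        from skimage.segmentation import active_contour

        def occlude_region(frame_bgr, init_points, block=16):
            """Blur one sensitive region of a single video frame.

            init_points: (N, 2) float array of (row, col) control points
            traced around the sensitive object, as set by the operator.
            """
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            # Fit the snake to nearby edges; alpha, beta, and w_line play the
            # roles of the elasticity, bending, and line-energy weights the
            # thesis analyzes (the values here are illustrative only).
            snake = active_contour(
                gaussian(gray, sigma=3, preserve_range=True),
                init_points, alpha=0.015, beta=10.0, w_line=0.0, gamma=0.001)
            # Rasterize the converged contour into a binary mask.
            mask = np.zeros(gray.shape, dtype=np.uint8)
            polygon = np.flip(snake.astype(np.int32), axis=1)  # (row, col) -> (x, y)
            cv2.fillPoly(mask, [polygon], 255)
            # Mosaic: downsample, then upsample with nearest-neighbour pixels.
            h, w = gray.shape
            small = cv2.resize(frame_bgr, (max(1, w // block), max(1, h // block)),
                               interpolation=cv2.INTER_LINEAR)
            mosaic = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
            # Keep original pixels outside the mask, mosaic pixels inside it.
            return np.where(mask[..., None] == 255, mosaic, frame_bgr)

    The block parameter sets the mosaic granularity: a larger block yields a coarser image, i.e., a lower level of detail for the sensitive region.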


    Production facilities are often distributed across different regions in today's globalized economy, and restrictions caused by events such as pandemics sometimes prevent personnel from performing their tasks on-site or make remote assistance necessary. This situation has prompted the manufacturing industry to adopt augmented reality (AR) technology combined with network transmission for remote engineering collaboration: real-time on-site image streaming allows remote personnel to provide guidance through virtual annotations and voice instructions. In current industrial settings, protecting confidential information, such as machine specifications and facility layout configurations, is particularly crucial during remote engineering collaboration. A mechanism for protecting image content in AR-based remote collaboration is therefore essential to prevent the unintentional disclosure of manufacturing know-how while streaming images from the production site. To address this issue, this study realizes the concept of Level of Detail (LOD) as a cybersecurity mechanism in head-mounted, AR-based engineering collaboration by implementing two image content control methods. The first segments sensitive regions in each image and applies a blurring effect, while the second overlays pre-built 3D models onto these regions during the graphic rendering process. Both methods are validated in two manufacturing scenarios: one involving an object manipulated by a robot and the other an industrial cooler attached to a machine tool, both of which must be occluded. Quantitative indicators, including frame rate, precision, recall, and F1 score, are used to evaluate and compare the performance of the two methods, and the scenarios demonstrate their practical value and limitations. This work provides feasible solutions for protecting image content in AR-based real-time engineering collaboration.
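    In the rendering-based method, registration and rasterization are handled by the AR engine (e.g., Vuforia image or model targets [21][22]), so only the final compositing step lends itself to a library-independent sketch. The fragment below is a hedged illustration rather than the thesis code: it assumes the renderer returns an RGBA image whose alpha channel marks the pixels covered by the registered virtual model, and blends that image over the camera frame so the model occludes the sensitive object.

        import numpy as np

        def composite_overlay(frame_rgb, render_rgba):
            """Blend a rendered model over the camera frame: wherever the
            model was drawn (alpha > 0), its pixels replace, and thereby
            occlude, the real scene behind it."""
            alpha = render_rgba[..., 3:4].astype(np.float32) / 255.0
            model = render_rgba[..., :3].astype(np.float32)
            out = alpha * model + (1.0 - alpha) * frame_rgb.astype(np.float32)
            return out.astype(np.uint8)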

    Abstract (Chinese)
    Abstract (English)
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1: Introduction
      1.1 Research Background
      1.2 Research Objectives
    Chapter 2: Literature Review
      2.1 Remote Collaboration in Augmented Reality
      2.2 Level of Detail in Augmented Reality
      2.3 Real-Time Image Segmentation
      2.4 Summary
    Chapter 3: Methodology
      3.1 Image-Based Method
        3.1.1 Principles of the Snakes Model
      3.2 Rendering-Based Method
        3.2.1 Positioning Virtual Models with Image Targets
        3.2.2 Positioning Virtual Models with Model Targets
      3.3 Parameter Effects in the Image-Based Method
        3.3.1 Line Energy Weight w_line
        3.3.2 Elasticity Coefficient α and Bending Coefficient β
      3.4 Method Limitations
        3.4.1 Limitations of the Image-Based Method
        3.4.2 Limitations of the Rendering-Based Method
    Chapter 4: System Architecture and Implementation
      4.1 System Architecture
      4.2 Development Environment
      4.3 Implementation of the AR Module
        4.3.1 Setting Initial Control Points
        4.3.2 Positioning Virtual Models
        4.3.3 Adjusting Level of Detail and Rendering
        4.3.4 Rendering Virtual Models
      4.4 Implementation of the Remote Module
        4.4.1 Forming the Initial Contour
        4.4.2 Controlling Level of Detail with Mosaic Processing
        4.4.3 Final Image Presentation
      4.5 Implementation of the Streaming Module
      4.6 System Limitations
    Chapter 5: System Validation and Comparison
      5.1 Evaluation Metrics
      5.3 Validation Scenarios
        5.3.1 Scenario Descriptions
        5.3.2 System Operation Workflow in the Scenarios
      5.4 Comparison of the Two Methods Across the Scenarios
      5.5 Failure Cases
        5.5.1 Failures Caused by Method and System Limitations
        5.5.2 Image Inpainting Techniques
      5.6 Recommendations for Method Selection
    Chapter 6: Conclusions and Future Work
      6.1 Conclusions
      6.2 Future Work
    References
    Appendix 1
    Appendix 2
    Appendix 3
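    Chapter 5 of the outline evaluates both methods with frame rate, precision, recall, and F1 score. One plausible way to compute the latter three, sketched here under the assumption that ground truth comes from per-frame polygon annotations such as those produced with labelme [29], is a per-pixel comparison of the predicted occluded mask against the annotated sensitive region.

        import numpy as np

        def pixel_prf1(pred_mask, gt_mask):
            """Per-pixel precision, recall, and F1 for binary occlusion masks."""
            pred = pred_mask.astype(bool)
            gt = gt_mask.astype(bool)
            tp = np.logical_and(pred, gt).sum()    # correctly occluded pixels
            fp = np.logical_and(pred, ~gt).sum()   # occluded but not sensitive
            fn = np.logical_and(~pred, gt).sum()   # sensitive but left visible
            precision = tp / (tp + fp) if tp + fp else 0.0
            recall = tp / (tp + fn) if tp + fn else 0.0
            f1 = (2 * precision * recall / (precision + recall)
                  if precision + recall else 0.0)
            return precision, recall, f1

    Frame rate, the fourth indicator, is measured directly from the streaming pipeline rather than from the masks.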

    [1] Mourtzis, D., Siatras, V., & Angelopoulos, J. (2020). Real-time remote maintenance support based on augmented reality (AR). Applied Sciences, 10(5), 1855.
    [2] Mourtzis, D., Vlachou, E., & Zogopoulos, V. (2018). Mobile apps for providing product-service systems and retrieving feedback throughout their lifecycle: A robotics use case. International Journal of Product Lifecycle Management, 11(2), 116-130.
    [3] Pidel, C., & Ackermann, P. (2020). Collaboration in virtual and augmented reality: A systematic overview. Augmented Reality, Virtual Reality, and Computer Graphics: 7th International Conference, AVR 2020, Lecce, Italy, September 7–10, 2020, Proceedings, Part I.
    [4] Breitkreuz, D., Müller, M., Stegelmeyer, D., & Mishra, R. (2022). Augmented reality remote maintenance in industry: A systematic literature review. International Conference on Extended Reality.
    [5] Fang, D., Xu, H., Yang, X., & Bian, M. (2020). An augmented reality-based method for remote collaborative real-time assistance: from a system perspective. Mobile Networks and Applications, 25, 412-425.
    [6] Fleck, P., Reyes-Aviles, F., Pirchheim, C., Arth, C., & Schmalstieg, D. (2020). MAUI: Tele-assistance for maintenance of cyber-physical systems. VISIGRAPP (Volume 5: VISAPP).
    [7] del Amo, I. F., Erkoyuncu, J., Frayssinet, R., Reynel, C. V., & Roy, R. (2020). Structured authoring for AR-based communication to enhance efficiency in remote diagnosis for complex equipment. Advanced Engineering Informatics, 45, 101096.
    [8] Schneider, M., Rambach, J., & Stricker, D. (2017). Augmented reality based on edge computing using the example of remote live support. 2017 IEEE International Conference on Industrial Technology (ICIT).
    [9] Vorraber, W., Gasser, J., Webb, H., Neubacher, D., & Url, P. (2020). Assessing augmented reality in production: Remote-assisted maintenance with HoloLens. Procedia CIRP, 88, 139-144.
    [10] Küssel, R., Liestmann, V., Spiess, M., & Stich, V. (2000). “TeleService” a customer-oriented and efficient service? Journal of Materials Processing Technology, 107(1-3), 363-371.
    [11] Abualdenien, J., & Borrmann, A. (2022). Levels of detail, development, definition, and information need: A critical literature review. Journal of Information Technology in Construction, 27.
    [12] Luebke, D. (2003). Level of detail for 3D graphics. Morgan Kaufmann.
    [13] DiVerdi, S., Hollerer, T., & Schreyer, R. (2004). Level of detail interfaces. Third IEEE and ACM International Symposium on Mixed and Augmented Reality.
    [14] Wysopal, A., Ross, V., Passananti, J., Yu, K., Huynh, B., & Höllerer, T. (2023). Level-of-detail AR: Dynamically adjusting augmented reality level of detail based on visual angle. 2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR).
    [15] Julier, S., Lanzagorta, M., Baillot, Y., Rosenblum, L., Feiner, S., Hollerer, T., & Sestito, S. (2000). Information filtering for mobile augmented reality. Proceedings IEEE and ACM International Symposium on Augmented Reality (ISAR 2000).
    [16] Tatzgern, M., Orso, V., Kalkofen, D., Jacucci, G., Gamberini, L., & Schmalstieg, D. (2016). Adaptive information density for augmented reality displays. 2016 IEEE Virtual Reality (VR).
    [17] Chen, C., Wang, C., Liu, B., He, C., Cong, L., & Wan, S. (2023). Edge intelligence empowered vehicle detection and image segmentation for autonomous vehicles. IEEE Transactions on Intelligent Transportation Systems, 24(11), 13023-13034.
    [18] Ha, Q., Watanabe, K., Karasawa, T., Ushiku, Y., & Harada, T. (2017). MFNet: Towards real-time semantic segmentation for autonomous vehicles with multi-spectral scenes. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
    [19] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A. C., & Lo, W.-Y. (2023). Segment anything. Proceedings of the IEEE/CVF International Conference on Computer Vision.
    [20] Kass, M., Witkin, A., & Terzopoulos, D. (1988). Snakes: Active contour models. International Journal of Computer Vision, 1(4), 321-331.
    [21] Image targets. https://developer.vuforia.com/library/objects/image-targets
    [22] Model targets. https://developer.vuforia.com/library/objects/model-targets
    [23] CAD model best practice. https://developer.vuforia.com/library/model-targets/model-targets-supported-objects-cad-model-best-practices#cad-model-best-practices
    [24] Hologram stability. https://learn.microsoft.com/en-us/windows/mixed-reality/develop/advanced-concepts/hologram-stability
    [25] Mixed reality capture overview. https://learn.microsoft.com/en-us/windows/mixed-reality/develop/advanced-concepts/mixed-reality-capture-overview
    [26] Stankiewicz, O., Lafruit, G., & Domański, M. (2018). Multiview video: Acquisition, processing, compression, and virtual view rendering. In R. Chellappa & S. Theodoridis (Eds.), Academic Press Library in Signal Processing (Vol. 6, Chapter 1). Academic Press. https://doi.org/10.1016/B978-0-12-811889-4.00001-4
    [27] Dibene, J. C., & Dunn, E. (2022). HoloLens 2 sensor streaming. arXiv preprint arXiv:2211.02648.
    [28] Walber. (2014). Precision and recall [Precisionrecall.svg]. Wikimedia Commons.
    [29] Wada, K. (2018). Labelme: Image polygonal annotation with Python. https://github.com/wkentaro/labelme
    [30] Suvorov, R., Logacheva, E., Mashikhin, A., Remizova, A., Ashukha, A., Silvestrov, A., Kong, N., Goka, H., Park, K., & Lempitsky, V. (2022). Resolution-robust large mask inpainting with Fourier convolutions. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision.
    [31] Park, H., Schoepflin, T., & Kim, Y. (2001). Active contour model with gradient directional information: Directional snake. IEEE Transactions on Circuits and Systems for Video Technology, 11(2), 252-256.
    [32] Xu, S., Yuan, H., Shi, Q., Qi, L., Wang, J., Yang, Y., Li, Y., Chen, K., Tong, Y., & Ghanem, B. (2024). RAP-SAM: Towards real-time all-purpose segment anything. arXiv preprint arXiv:2401.10228.
