| Field | Value |
|---|---|
| Student | 陳耀軒 Chen, Yao-Hsuan |
| Thesis Title | 改善整合深度感測技術之擴增實境應用 (Improving Augmented Reality Applications Integrated with Depth Sensing Technologies) |
| Advisor | 瞿志行 Chu, Chih-Hsing |
| Committee Members | 羅承浤 Luo, Cheng-Hong; 林永裔 Lin, Yong-Yi |
| Degree | Master |
| Department | Department of Industrial Engineering and Engineering Management, College of Engineering (工學院 工業工程與工程管理學系) |
| Year of Publication | 2015 |
| Academic Year of Graduation | 103 (ROC calendar) |
| Language | Chinese |
| Number of Pages | 97 |
| Keywords (Chinese) | RGB-D 攝影機, 虛實互動, 擴增實境, 虛擬試穿 |
| Keywords (English) | RGB-D cameras, Augmented Reality, Real-Virtual Interactions, Virtual Try-On |
Abstract (translated from the Chinese):
Augmented reality (AR) technology has advanced rapidly in recent years and has been successfully applied in fields such as engineering, marketing, entertainment, and medicine; however, many problems in presenting real-time visual content remain unsolved. This study improves AR applications that incorporate a depth camera (RGB-D camera), particularly virtual product display, from several perspectives. An RGB-D camera captures three-dimensional information about the real scene; based on a positioning reference that the user selects from the scene, markerless tracking can be initialized automatically, and transformations between coordinate systems place the virtual model accurately in the image of the real scene. In addition, the real-time scene depth data allow the occlusion relationships between virtual models and real objects to be handled correctly, producing more realistic real-virtual interaction. Because current RGB-D cameras suffer from random noise and missing depth values, a computational procedure for improving depth-data quality is proposed; it repairs missing depth points and also reduces the jitter of occlusion boundaries caused by differences between consecutive frames. Furthermore, to increase the practical value of virtual product display, a prototype distributed AR system is developed that combines cloud services with parallel processing to realize a usage scenario of mass remote virtual try-on. This study proposes feasible remedies for the shortcomings of current AR technology and can effectively improve the interactivity of real-virtual content.
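The occlusion handling summarized above amounts to a per-pixel depth test between the live depth map and the depth of the rendered virtual model. Below is a minimal sketch of that idea in Python/NumPy, assuming both images are already expressed in the same camera frame; the function and variable names are illustrative and not taken from the thesis.

```python
import numpy as np

def composite_with_occlusion(color_real, depth_real, color_virtual, depth_virtual):
    """Per-pixel depth test: show a virtual pixel only where the virtual
    surface is closer to the camera than the real scene surface.

    depth_real    -- HxW depth map from the RGB-D camera; 0 marks missing pixels
    depth_virtual -- HxW depth of the rendered virtual model; np.inf = background
    color_*       -- HxWx3 images in the same camera frame
    """
    # Treat missing real depth (0) as "infinitely far" so that holes in the
    # sensor data do not hide the virtual model.
    real = np.where(depth_real > 0, depth_real, np.inf)
    virtual_in_front = depth_virtual < real
    out = color_real.copy()
    out[virtual_in_front] = color_virtual[virtual_in_front]
    return out
```

Treating missing real depth as far away is only one simple policy; the depth-repair step described in the abstract addresses such holes directly before the comparison is made.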
Abstract (English):
Having progressed rapidly in recent years, augmented reality (AR) technologies have found successful applications in engineering, marketing, entertainment, and medicine. However, rendering high-quality visual content in AR remains a challenging task with several technical barriers. The recent development of RGB-D cameras offers an effective way to address these difficulties by capturing real-time depth data of a real scene. This research improves the effectiveness of product display in AR from multiple perspectives. First, a virtual model can be precisely positioned in a real scene, without markers, according to reference geometry that the user selects from the scene. Mutual occlusions between real and virtual objects can then be estimated precisely from the depth data, enabling realistic interactions in the AR environment with correct rendering of both. In addition, a computational framework is developed to overcome the data deterioration caused by random noise and missing pixels in RGB-D camera output; the framework effectively reduces the jitter that occurs along occlusion boundaries. Finally, a distributed AR system is implemented to realize the concept of mass virtual try-on. The system integrates cloud computing and parallel processing technologies to increase the computational efficiency of the try-on application, and a shoe try-on scenario demonstrates its feasibility. Overall, this work enhances the realism of interactions between real and virtual content by integrating RGB-D cameras into AR applications.
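The depth-repair step mentioned in the abstract can be approximated with standard image inpainting plus temporal blending. The sketch below uses OpenCV's fast-marching (Telea) inpainting and an exponential moving average as stand-ins; it illustrates the general approach under those assumptions rather than the thesis's exact pipeline, and all names and parameter values are hypothetical.

```python
import cv2
import numpy as np

def repair_depth(depth_mm, prev_repaired=None, alpha=0.7):
    """Fill missing depth pixels and damp frame-to-frame jitter.

    depth_mm      -- HxW uint16 depth frame in millimetres; 0 marks missing pixels
    prev_repaired -- previously repaired frame (float32), or None for the first frame
    alpha         -- weight of the current frame in the temporal blend
    """
    # 1) Hole filling with fast-marching inpainting. cv2.inpaint works on 8-bit
    #    images, so inpaint a normalized copy and rescale (a lossy simplification).
    mask = (depth_mm == 0).astype(np.uint8)
    depth_f = depth_mm.astype(np.float32)
    max_d = max(float(depth_f.max()), 1.0)
    depth_8u = cv2.convertScaleAbs(depth_f, alpha=255.0 / max_d)
    filled_8u = cv2.inpaint(depth_8u, mask, 3, cv2.INPAINT_TELEA)
    filled = filled_8u.astype(np.float32) * (max_d / 255.0)
    # Keep measured values where they exist; use inpainted values only in holes.
    filled = np.where(mask == 1, filled, depth_f)

    # 2) Temporal smoothing: blend with the previous repaired frame so the
    #    occlusion boundary does not flicker between consecutive frames.
    if prev_repaired is not None:
        filled = alpha * filled + (1.0 - alpha) * prev_repaired
    return filled
```

In a streaming loop, each new frame would be passed together with the previous result, e.g. `repaired = repair_depth(frame, repaired)`, so that the blend accumulates over time.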