
Graduate Student: 鄭智鴻 (Cheng, Chih-Hung)
Thesis Title: 擴增實境中鞋品虛擬試穿之進階研究 (An Advanced Study on Virtual Footwear Try-On in Augmented Reality)
Advisor: 瞿志行 (Chu, Chih-Hsing)
Committee Members: 郭嘉真 (Guo, Jia-Zhen); 黃瀅瑛 (Huang, Ying-Ying)
Degree: Master
Department: Department of Industrial Engineering and Engineering Management, College of Engineering
Year of Publication: 2017
Academic Year of Graduation: 105
Language: Chinese
Number of Pages: 62
Keywords: augmented reality, customization (personalized design), 3D object recognition and tracking, virtual try-on

Abstract:

    With rising living standards, footwear has gradually become a fashion product, and the demand for personalized design is evident. Previous research proposed the concept of real-time virtual footwear try-on in augmented reality, formulated it as a problem of 3D object recognition and tracking in continuous images, and verified the feasibility of its implementation. However, the existing computational methods impose considerable restrictions on use, and their efficiency still needs improvement. This study therefore carries out advanced research on virtual footwear try-on in augmented reality, proposing effective solutions to the needs of computational efficiency, practicality, stability, customized evaluation of real footwear, and commercial development. First, a spatial data structure is incorporated to reduce the computation time of the Iterative Closest Point algorithm and thereby speed up foot recognition. By combining the color and depth information provided by a depth camera, fully markerless tracking is attempted to simplify the try-on workflow. In addition, microchip sensing technology is introduced: a gyroscope captures the rotation of the foot, preventing failures caused by accumulated error. To support evaluation of both function and appearance, users can try on real shoes while personalizing the appearance of the shoe model rendered in the image. Finally, for practical e-commerce applications, a prototype distributed augmented reality system is developed; combined with cloud services, it realizes the usage scenario of large-scale remote virtual try-on. The functions described above have been implemented and tested, successfully verifying the research concepts. Through these efforts in different directions, the study improves the performance of existing real-time virtual try-on and opens up innovative applications of augmented reality in product presentation and personalized design.
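
    The first improvement named in the abstract, pairing the Iterative Closest Point algorithm with a spatial data structure, can be illustrated with a short sketch. The following Python fragment is hypothetical and is not taken from the thesis (which was implemented with tools such as the Point Cloud Library); it shows one ICP iteration in which the nearest-neighbor correspondence search is accelerated by a k-d tree, the step whose cost the abstract reports reducing.

        # Minimal sketch: one point-to-point ICP iteration with k-d tree matching.
        # Assumes source and target are (N, 3) NumPy arrays of 3D points.
        import numpy as np
        from scipy.spatial import cKDTree

        def icp_step(source, target, tree=None):
            if tree is None:
                tree = cKDTree(target)        # build once, reuse every iteration
            _, idx = tree.query(source)       # O(log N) nearest neighbor per point
            matched = target[idx]

            # Closed-form rigid alignment of the matched pairs via SVD (Kabsch).
            mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
            H = (source - mu_s).T @ (matched - mu_t)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # guard against a reflection solution
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_t - R @ mu_s
            return source @ R.T + t, R, t, tree

    Iterating this step until the alignment error stops decreasing replaces the brute-force matching of naive ICP with logarithmic-time queries, which is the kind of speed-up the abstract claims for foot recognition.

    The gyroscope supplement to vision-based tracking can likewise be sketched. The function below is a generic complementary filter, given only as an assumption about how the two rotation signals might be blended; the thesis itself may use a different fusion scheme.

        # Hypothetical fusion of a vision-based rotation estimate with gyroscope data:
        # integrating the angular rate follows fast motion but drifts over time, while
        # the vision estimate is absolute but updates more slowly, so the two are
        # blended with weight alpha.
        def fuse_rotation(prev_angle, gyro_rate, vision_angle, dt, alpha=0.98):
            gyro_angle = prev_angle + gyro_rate * dt   # dead-reckon from angular rate
            return alpha * gyro_angle + (1.0 - alpha) * vision_angle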

Table of Contents:

    Abstract (Chinese)
    Abstract (English)
    Acknowledgements
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1  Introduction
      1.1  Research Background
      1.2  Research Motivation and Objectives
    Chapter 2  Literature Review
      2.1  Try-On Applications Based on Augmented Reality
      2.2  Dynamic Object Tracking in Augmented Reality
        2.2.1  Marker-Based Tracking
        2.2.2  Markerless Tracking
        2.2.3  Depth-Image Tracking
        2.2.4  Tracking by Integrating Vision and Inertial Sensors
        2.2.5  Tracking by Integrating Vision and Magnetic Sensors
      2.3  Summary
    Chapter 3  Research Framework
      3.1  Problem Definition and Description
      3.2  Performance Improvement of Footwear Try-On Technology
      3.3  Overcoming Limitations of Footwear Try-On Technology
    Chapter 4  Research Methods
      4.1  Improvement of the 3D Positioning Algorithm
        4.1.1  k-d Tree-Based Iterative Closest Point Method
        4.1.2  GPU Parallel Computing Techniques
      4.2  Combining Color and Depth Information
        4.2.1  Kinect Coordinate Transformation
        4.2.2  Color Space Conversion
        4.2.3  Position Initialization
      4.3  Integrating the Intel Curie™ Chip
        4.3.1  Raw Data Extraction and Angle Calculation
        4.3.2  Bluetooth Transmission Setup
      4.4  Customized Evaluation of Real Footwear
        4.4.1  3D Model Construction
        4.4.2  3D Model Modularization
        4.4.3  Implementation Refinements
      4.5  Distributed Virtual Footwear Try-On Prototype System
        4.5.1  Modularization of Virtual Footwear Try-On
        4.5.2  3D Information Capture Module
        4.5.3  Virtual Try-On Computation Module
        4.5.4  Server Platform Deployment
      4.6  Evaluation of Tracking Computation Results
    Chapter 5  Test Results
      5.1  Test Environment
      5.2  Improvement of the 3D Positioning Algorithm
      5.3  Combining Color and Depth Information
      5.4  Integrating the Intel Curie™ Chip
        5.4.1  Raw Data Extraction
        5.4.2  Test Results
      5.5  Discussion
    Chapter 6  System Implementation
      6.1  Test Environment
      6.2  Virtual Footwear Try-On Technology
      6.3  Customized Evaluation of Real Footwear
        6.3.1  3D Model Acquisition
        6.3.2  System Initialization
        6.3.3  Personalized Design of the Sole Module
      6.4  Distributed Virtual Footwear Try-On Prototype System
        6.4.1  Recording 3D Scene Information
        6.4.2  Scene Information Upload and Computation
        6.4.3  Viewing Try-On Results
    Chapter 7  Conclusions and Future Work
    References

