
Author: 潘映辰
Thesis Title: 利用RGB-D感測器與統計學習模型即時估算人體計測值
English Title: A Statistical Learning Model to Estimate Anthropometric Measurements Using an RGB-D Sensor
Advisors: 王茂駿, 黃思皓
Committee Members: 林志隆, 盧俊銘
Degree: Master
Department: Department of Industrial Engineering and Engineering Management, College of Engineering
Year of Publication: 2014
Graduation Academic Year: 102 (ROC calendar)
Language: Chinese
Number of Pages: 94
Keywords: Anthropometric Measurements, Cluster Analysis, Parametric Model, Point Registration, RGB-D Sensor
Abstract:
    Automatic measurement of anthropometric values is an important step in many ergonomics studies and in product customization. With the development of 3D whole-body scanners, time-consuming and imprecise manual measurement is gradually being replaced. Based on scanned 3D human models, many semi-automatic algorithms have been applied to rapidly collect human geometric information and then compute anthropometric measurements. However, laser scanners are expensive and subject to many environmental constraints, which greatly limits the development and deployment of end-user applications.
    This study proposes an automatic anthropometric measurement estimation system based on an RGB-D sensor and statistical learning models, consisting of an off-line parametric model learning stage and an on-line measurement estimation stage. The off-line stage applies a series of image processing, computer vision, and statistical machine learning algorithms, such as principal component analysis, linear regression, and artificial neural networks, to analyze a database of 3D-scanned human bodies. In the on-line stage, a depth map captured by the RGB-D sensor is processed by an iterative 3D point cloud registration algorithm to synthesize a whole-body model and estimate its deformation parameters, which are then fed into the parametric models built in the off-line stage to obtain the estimated anthropometric measurements.
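    To make the registration step concrete, the following is a minimal point-to-point ICP sketch in NumPy/SciPy, run on a synthetic toy cloud. It only illustrates the rigid-alignment core of iterative point cloud registration; the system described in the thesis additionally deforms a template body to the depth data, and every name and parameter here (`icp`, `n_iters`, the random cloud) is illustrative rather than taken from the thesis.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, n_iters=30):
    """Rigidly align `source` (N,3) onto `target` (M,3) with point-to-point ICP."""
    src = source.copy()
    tree = cKDTree(target)                      # nearest-neighbour search structure
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iters):
        # 1. Match every source point to its closest target point.
        _, idx = tree.query(src)
        matched = target[idx]
        # 2. Best rigid transform for these correspondences (Kabsch / SVD).
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = tgt_c - R @ src_c
        # 3. Apply the update and accumulate the overall transform.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src

# Toy usage: recover a known rotation and translation.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3))
angle = np.deg2rad(10.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
R, t, aligned = icp(cloud @ R_true.T + np.array([0.1, -0.2, 0.05]), cloud)
print(np.abs(aligned - cloud).max())            # should be small after alignment
```

    In the toy example the two clouds contain the same points, so nearest-neighbour matching converges quickly; with real depth maps, outlier rejection and a non-rigid deformation model would be needed on top of this core loop.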
    Experimental results show that the proposed system, built on a commercial RGB-D sensor, can effectively predict nine anthropometric measurements, comprising six lengths and three girths. The system also accepts different types of depth data as input, and applying cluster analysis further reduces the estimation error. Compared with previously reported systems that also use RGB-D sensors, the proposed system obtains anthropometric measurements faster and more accurately.


Abstract (English):
    Automatically estimating the anthropometric measurements of individuals is an important step in ergonomics research and customized product design. With the development of 3D scanning technology, several automatic or semi-automatic estimation systems based on scanned 3D models have been proposed to replace time-consuming and inaccurate manual measurement. Such 3D scanning systems can also be used to rapidly collect large amounts of human geometric information for further analysis. However, the instrument cost and the installation space required by 3D laser scanners limit their practical application.
    In this thesis, an automatic anthropometric measurement estimation system based on a low-cost RGB-D camera and pre-learned statistical models is proposed. The system consists of an off-line parametric model learning stage and an on-line measurement estimation stage. In the off-line stage, a series of image processing, computer vision, and statistical machine learning algorithms, such as principal component analysis (PCA), linear regression, and artificial neural networks (ANN), are applied to analyze a database of 3D-scanned bodies. In the on-line stage, the depth map captured by the RGB-D camera is used to synthesize a whole-body 3D model through an iterative 3D point cloud registration algorithm. The estimated surface deformation parameters are then applied to the pre-learned parametric models to estimate the anthropometric measurements.
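    As a rough sketch of how such an off-line/on-line pipeline can be wired together, the snippet below builds a PCA shape space and two measurement regressors, then maps an already-registered body into that space and predicts its measurements. It is a minimal illustration under assumed placeholders: the arrays `body_vertices` and `measurements` are random stand-ins for the scan database and ground-truth values, and the scikit-learn models stand in for whatever implementation the thesis actually used.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# ---- Off-line parametric model learning (sketch) ----
# body_vertices: flattened, pre-aligned 3D scans, shape (n_subjects, 3 * n_vertices)
# measurements:  ground-truth anthropometric values, shape (n_subjects, 9)
rng = np.random.default_rng(0)
body_vertices = rng.normal(size=(200, 3 * 1000))        # placeholder data
measurements = rng.normal(size=(200, 9))                # placeholder data

pca = PCA(n_components=20)                              # surface variation model
shape_params = pca.fit_transform(body_vertices)

linear_model = LinearRegression().fit(shape_params, measurements)
ann_model = MLPRegressor(hidden_layer_sizes=(30,), max_iter=500).fit(
    shape_params, measurements)

# ---- On-line measurement estimation (sketch) ----
# Once the depth map has been registered to a template body, project the fitted
# whole-body model into the learned shape space and regress the measurements
# from its deformation parameters.
fitted_body = body_vertices[0]                          # stand-in for a registered scan
deform_params = pca.transform(fitted_body[None, :])
print(linear_model.predict(deform_params).shape)        # (1, 9)
print(ann_model.predict(deform_params).shape)           # (1, 9)
```

    The key design idea this sketch captures is that the expensive learning happens once, off-line, while the on-line step reduces to a projection and a regression, which is what makes real-time estimation feasible.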
    The experimental results show that the proposed system, built on a commercial RGB-D camera, can effectively estimate nine anthropometric measurements, including six length measurements and three girth dimensions. The results on simulated data and on real depth data also demonstrate the robustness of the proposed system across three different types of depth input. Finally, cluster analysis further improves the estimation accuracy through gender classification and dynamically selected parametric models. Compared with state-of-the-art RGB-D-based systems, the proposed system estimates anthropometric measurements more accurately and efficiently.
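    The cluster-analysis idea, as described, amounts to first classifying a subject (for example, by gender) from the shape parameters and then applying a cluster-specific parametric model. The sketch below shows that routing logic with LDA and k-nearest-neighbor classifiers and per-cluster linear regressors; all data arrays are random placeholders, and the structure is only an assumption of how such dynamic model selection could look, not the thesis's implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
shape_params = rng.normal(size=(200, 20))   # placeholder PCA shape coefficients
measurements = rng.normal(size=(200, 9))    # placeholder ground-truth measurements
gender = rng.integers(0, 2, size=200)       # placeholder labels: 0 = female, 1 = male

# Two candidate gender classifiers trained on the shape parameters.
lda = LinearDiscriminantAnalysis().fit(shape_params, gender)
knn = KNeighborsClassifier(n_neighbors=5).fit(shape_params, gender)

# One regression model per cluster instead of a single global model.
cluster_models = {
    g: LinearRegression().fit(shape_params[gender == g], measurements[gender == g])
    for g in (0, 1)
}

def estimate(new_params, classifier=lda):
    """Route a new subject to the parametric model of its predicted cluster."""
    g = int(classifier.predict(new_params[None, :])[0])
    return cluster_models[g].predict(new_params[None, :])[0]

print(estimate(shape_params[0]).shape)          # (9,)
print(estimate(shape_params[0], knn).shape)     # (9,)
```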

Table of Contents:
    CHAPTER 1 Introduction
        1.1 Background
        1.2 Motivation
        1.3 Purpose
        1.4 Organization
    CHAPTER 2 Literature Review
        2.1 Landmark-based Approaches
        2.2 Image-based Approaches
        2.3 3D Geometric Approaches
            2.3.1 Estimation Based on 3D Feature Points
            2.3.2 Skeletal Models
            2.3.3 Geometric Models
        2.4 Summary
    CHAPTER 3 Off-line Parametric Model Learning Methodology
        3.1 System Framework
        3.2 3D Laser Dataset Description
        3.3 Data Preprocessing
        3.4 Surface Variation Modeling
        3.5 Machine Learning Techniques
            3.5.1 Linear Regression
            3.5.2 Artificial Neural Networks
            3.5.3 Ensemble Learning
        3.6 Cluster Analysis
            3.6.1 Linear Discriminant Analysis
            3.6.2 K-Nearest Neighbor Algorithm
    CHAPTER 4 On-line Anthropometric Measurement Estimation
        4.1 RGB-D Dataset Description
        4.2 3D Depth Map Processing
        4.3 Iterative 3D Point Cloud Registration
        4.4 Anthropometric Measurement Estimation
        4.5 System Demonstration
    CHAPTER 5 System Evaluation and Experimental Results
        5.1 Performance Evaluation of Learning Kernels
        5.2 Anthropometric Measurement Estimation on Simulated Data
        5.3 Anthropometric Measurement Estimation on RGB-D Data
        5.4 Discussion
    CHAPTER 6 Experimental Results with Cluster Analysis
        6.1 Performance Comparisons of Gender Clustering Methods
        6.2 Performance Evaluation of Dynamic Cluster Analysis
        6.3 Discussion
    CHAPTER 7 Conclusions and Future Works
    REFERENCES


Full Text Availability: Not authorized for public access (campus and off-campus networks)
