
Graduate Student: 莊佳穎 (Chia-Ying Chuang)
Thesis Title: 在多重視野下以模型為基礎的即時人體運動參數分析系統
A Real Time Model-based Human Motion Analysis System in Multiple-Views
Advisor: 黃仲陵 (Chung-Lin Huang)
Committee Members:
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Publication Year: 2001
Graduation Academic Year: 89 (academic year 2000-2001)
Language: English
Number of Pages: 75
Chinese Keywords: 人體運動分析 (human motion analysis)
Foreign Keywords: Human motion analysis
Chinese Abstract (translated):

    In this thesis, we develop a real-time, model-based human motion parameter analysis system for multiple views. The system analyzes the motion of a person in a real scene and uses 3-D computer graphics (OpenGL) to synthesize a corresponding virtual world and a virtual actor whose movements match those of the real person. We use a 3-D human model composed of ten cylinders representing the torso, head, arms, and legs; the connections between the cylinders represent the joints. First, we develop a simple camera calibration method. The calibrated cameras allow us to locate the person in space, and a perspective-scaling factor is used to convert the orthographic projection into a perspective projection. We then use a 3-D model-based human motion analysis method to find the best joint angles, which are obtained by comparing the similarity between the foreground object in the image and the 2-D projection of the 3-D human model. Here we develop a new "overlapped tri-tree search" algorithm for adjusting the joint angles, which searches a wider range of joint angles in less time; this allows us to analyze some fast human motions (for example, hand waving) at a lower frame rate. Because the two cameras in the system view the scene from mutually perpendicular directions, the person's position in space can be determined easily. To combine the information from multiple views, we develop a new integration and arbitration method that merges the high-level information from the individual views.
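
    To make the projection step above concrete, the following is a minimal Python sketch of how an orthographic projection can be rescaled by a depth-dependent factor to approximate perspective projection. The function names, the particular form of the scaling factor, and the numbers in the example are illustrative assumptions, not the thesis's actual formulation.

        import numpy as np

        def project_point(p_world, focal_depth):
            # Orthographic projection of a 3-D point (x, y, z), rescaled by a
            # depth-dependent factor so that farther points shrink, which
            # approximates perspective projection. The form
            # s = focal_depth / (focal_depth + z) is an assumption made for
            # illustration only.
            x, y, z = p_world
            s = focal_depth / (focal_depth + z)   # perspective-scaling factor
            return np.array([s * x, s * y])

        def project_limb_axis(joint_a, joint_b, focal_depth, n_samples=10):
            # Project sample points along a cylinder's axis (e.g. an upper arm
            # between two joints) onto the 2-D image plane.
            pts = np.linspace(joint_a, joint_b, n_samples)
            return np.array([project_point(p, focal_depth) for p in pts])

        # Example: a limb from a shoulder at (0, 1.4, 2.0) to an elbow at
        # (0.3, 1.1, 2.2), with an assumed focal depth of 3.0 (arbitrary units).
        axis_2d = project_limb_axis(np.array([0.0, 1.4, 2.0]),
                                    np.array([0.3, 1.1, 2.2]), focal_depth=3.0)
        print(axis_2d)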


English Abstract:

    In this thesis, we introduce a real-time, multiple-view human motion analysis system. This system analyzes the motion of the human object in a real scene and synthesizes a virtual actor in a corresponding virtual world using computer graphics (OpenGL). To analyze the motion of the human object, we use a 3-D human model that consists of 10 cylindrical primitives, representing the torso, head, arms, and legs, and 10 joints connecting these cylinders. First, we introduce a simple camera calibration and 2-D human position finding method that enables us to find the 2-D human position and to simulate perspective projection by applying a perspective-scaling factor to the orthographic projection. Then, we use the 3-D human model-based method to find the body animation parameters (BAPs) by searching for the best match between the 2-D projection of the model and the foreground object. Here, we also introduce a new overlapped tri-tree search algorithm with less running time and a wider search range, which enables us to track some fast human motions, such as hand waving, at a lower frame rate. In our system, there are two cameras controlled by two viewers, and the viewing direction of one camera is orthogonal to that of the other. To integrate and arbitrate the information from multiple views, we introduce a new integration and arbitration method, which integrates high-level information from the multiple views.
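
    The overlapped tri-tree search is only named in the abstracts, so the following Python sketch shows one plausible reading of it for a single joint angle: the angle interval is repeatedly split into three overlapping sub-intervals, the centre of each is scored against the foreground silhouette, and the best-scoring sub-interval is refined further. The interface (match_score), the overlap ratio, and the stopping criterion are assumptions made for illustration, not the algorithm as specified in the thesis.

        def overlapped_tritree_search(match_score, lo, hi, overlap=0.25, tol=1.0):
            # Search a joint-angle interval [lo, hi] (degrees) for the angle that
            # maximizes match_score(angle), a stand-in for the similarity between
            # the projected 2-D model and the extracted foreground silhouette.
            # At each level the interval is split into three sub-intervals that
            # overlap by a fraction of their width, and the search descends into
            # the best-scoring one until the interval is narrower than tol.
            while hi - lo > tol:
                width = (hi - lo) / 3.0
                grow = overlap * width                      # widening of each sub-interval
                candidates = []
                for k in range(3):
                    a = max(lo, lo + k * width - grow)      # overlapping sub-interval [a, b]
                    b = min(hi, lo + (k + 1) * width + grow)
                    centre = 0.5 * (a + b)
                    candidates.append((match_score(centre), a, b))
                _, lo, hi = max(candidates)                 # keep the best sub-interval
            return 0.5 * (lo + hi)

        # Toy usage: the true elbow angle is 40 degrees and the score is a smooth
        # peak around it (a placeholder for the real silhouette-matching score).
        best_angle = overlapped_tritree_search(lambda a: -(a - 40.0) ** 2, 0.0, 180.0)
        print(best_angle)   # close to 40.0

    Because adjacent sub-intervals overlap, a peak that falls near a partition boundary is still contained within at least one candidate, which is one way a scheme like this could offer a wider effective search range with few score evaluations per frame.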

    CHAPTER 1 INTRODUCTION
      1.1 REVIEW OF PREVIOUS WORKS
      1.2 SYSTEM REQUIREMENTS
      1.3 SYSTEM SPECIFICATION
      1.4 SYSTEM IMPLEMENTATION
    CHAPTER 2 BACKGROUND SUBTRACTION
      2.1 COLOR MODEL
      2.2 BACKGROUND MODELING
      2.3 PIXEL CLASSIFICATION
      2.4 AUTOMATIC THRESHOLD SELECTION
    CHAPTER 3 MODEL-BASED HUMAN MOTION ANALYSIS
      3.1 3-D HUMAN MODEL
        3.1.1 Human Model Parameters
        3.1.2 Homogeneous Coordinate System
        3.1.3 Similarity Between Shapes
      3.2 BODY DEFINITION PARAMETERS ESTIMATION
        3.2.1 Preprocessing of BDPs Estimation
        3.2.2 BDPs Estimation of Front Viewer
        3.2.3 BDPs Estimation of Side Viewer
      3.3 BODY ANIMATION PARAMETER ESTIMATION
        3.3.1 Facade/Flank Determination
        3.3.2 Human Position Estimation
        3.3.3 Arm Joint Angle Estimation
        3.3.4 Leg Joint Angle Estimation
      3.4 OVERLAPPED TRI-TREE SEARCH ALGORITHM
        3.4.1 Overlapped Tri-Tree Search Algorithm
        3.4.2 Running Time Analysis
    CHAPTER 4 THE INTEGRATION AND ARBITRATION FROM MULTIPLE-VIEWS
      4.1 CAMERA CALIBRATION
      4.2 2-D POSITION FINDING
      4.3 BODY DEFINITION PARAMETERS INTEGRATION
      4.4 FACADE/FLANK ARBITRATION
      4.5 BODY ANIMATION PARAMETER INTEGRATION
    CHAPTER 5 EXPERIMENT RESULTS
      5.1 THE TRACKING OF HUMAN MOVEMENT INSIDE THE ACTION REGION
      5.2 THE ARM JOINT ANGLES FROM THE TWO VIEWERS ARE INTEGRATED
      5.3 THE HUMAN CAN WALK PARALLEL TO THE X-AXIS OR Z-AXIS
      5.4 THE HUMAN CAN MOVE HIS ARM ALONG THE X-Z PLANE
      5.5 THE ARM JOINT ANGLES AND LEG JOINT ANGLES FROM DIFFERENT VIEWERS ARE INTEGRATED
      5.6 THE DIFFERENCE BETWEEN SQUATTING DOWN AND LEG LIFTING
    CHAPTER 6 CONCLUSION AND FUTURE WORKS
      6.1 CONCLUSION
      6.2 FUTURE WORKS
    REFERENCES

