| Field | Value |
|---|---|
| Author (研究生) | 蔡易澂 Tsai, I-Cheng |
| Title (論文名稱) | 基於特徵提取的低功耗應用人體即時姿態識別系統 / A Low Cost Human Posture Recognition Based on Feature Extraction for Real-Time Applications |
| Advisor (指導教授) | 邱瀞德 |
| Committee members (口試委員) | 邱瀞德、范倫達、黃志煒 |
| Degree (學位類別) | Master (碩士) |
| Department (系所名稱) | 電機資訊學院 通訊工程研究所 (Communications Engineering) |
| Publication year (論文出版年) | 2012 |
| Graduation academic year (畢業學年度) | 100 (ROC academic year) |
| Language (語文別) | English |
| Pages (論文頁數) | 36 |
| Keywords, Chinese (中文關鍵詞) | 姿勢辨識、行為分析、重心、影像監視 |
| Keywords, English (外文關鍵詞) | posture recognition, behavior analysis, center of gravity, video surveillance |
Human action recognition plays an important role in many applications, such as video surveillance systems and medical analysis. Many researchers have therefore studied human posture and behavior recognition, for example by using the skeleton of a posture as a feature and recognizing the posture with a hidden Markov model, or by triangulating a posture, computing the centroid of each triangle, and then building a skeleton for posture analysis. The former requires a large number of training samples, while the latter requires high computational complexity.

In this thesis, we propose a posture and behavior analysis based on centers of gravity. Only five center-of-gravity points and four feature sets are used: two features measure the vertical and horizontal displacement of the center-of-gravity points, and the other two measure the rate of angular change of the upper and lower body. Based on these feature sets and our proposed classification model, the method can recognize five static postures (standing, sitting, squatting, lying, and bending) and distinguish two dynamic actions (hand waving and walking). The experimental results show that the proposed method achieves better recognition accuracy and lower computational complexity than other methods.
The recognition of human actions plays an essential role in many applications such as human-machine interaction, surveillance systems, and medical data analysis. With the popularity of smart televisions and handheld devices, low-cost, real-time human posture recognition has become important. Many approaches have been proposed for human posture and action recognition; however, most of them have high computational complexity. In this thesis, we propose a low-cost, real-time action recognition approach using only five center-of-gravity (COG) points and four feature sets. Two feature sets measure the displacement of the upper- and lower-body COGs in the vertical and horizontal directions; the other two quantize the angular change rate of the upper and lower body. With these feature sets and a classification model, the proposed approach recognizes five different static postures (standing, lying, bending, sitting, and squatting) and two actions (walking and hand waving). The simulation results show that the proposed approach achieves recognition rates between 80.20% and 98.02% for various postures and actions on the KTH and ISIR databases. Our approach achieves real-time recognition for video sequences and has lower computational complexity than other state-of-the-art algorithms.
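The abstract outlines the feature extraction step: five COG points per frame and four feature sets (vertical and horizontal COG displacement, plus the angular change rate of the upper and lower body). The minimal Python sketch below illustrates how such features could be computed from a binary silhouette; the five-point layout, the displacement definitions, and the choice of angle reference are assumptions for illustration, not the thesis's exact formulation.

```python
import numpy as np

def body_cogs(silhouette):
    """Compute centers of gravity from a binary silhouette (2-D array of 0/1).
    The five-point layout used here (whole body, upper, lower, left, right
    halves) is an assumed example of 'five COG points'."""
    ys, xs = np.nonzero(silhouette)
    cy, cx = ys.mean(), xs.mean()          # whole-body COG
    upper = ys < cy                        # smaller row index = higher in image
    left = xs < cx
    return {
        "body":  (cy, cx),
        "upper": (ys[upper].mean(),  xs[upper].mean()),
        "lower": (ys[~upper].mean(), xs[~upper].mean()),
        "left":  (ys[left].mean(),   xs[left].mean()),
        "right": (ys[~left].mean(),  xs[~left].mean()),
    }

def feature_sets(prev, curr, dt=1.0):
    """Four illustrative feature sets from consecutive frames:
    F1/F2 - vertical and horizontal displacement of the upper/lower-body COGs,
    F3/F4 - angular change rate of the upper and lower body measured
            relative to the whole-body COG."""
    def angle(part, center):
        dy, dx = part[0] - center[0], part[1] - center[1]
        return np.arctan2(dy, dx)

    f1 = [curr[p][0] - prev[p][0] for p in ("upper", "lower")]  # vertical shift
    f2 = [curr[p][1] - prev[p][1] for p in ("upper", "lower")]  # horizontal shift
    f3 = (angle(curr["upper"], curr["body"]) -
          angle(prev["upper"], prev["body"])) / dt              # upper angular rate
    f4 = (angle(curr["lower"], curr["body"]) -
          angle(prev["lower"], prev["body"])) / dt              # lower angular rate
    return f1, f2, f3, f4
```

A per-frame classifier (for example, simple thresholds or a decision tree over these four feature sets) could then map feature values to posture and action labels; the thesis's own classification model is not reproduced here.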