Graduate Student: 王治凱 (Chih-Kai Wang)
Thesis Title: 動畫內容自動摘要技術之研究 (A Study on Automatic Human Motion Summarization)
Advisor: 楊熙年 (Shi-Nine Yang)
Committee Members:
Degree: Master
Department: College of Electrical Engineering and Computer Science, Department of Computer Science
Publication Year: 2007
Academic Year of Graduation: 95 (ROC calendar, 2006-2007)
Language: Chinese
Pages: 49
Chinese Keywords: 動作分析, 關鍵影格萃取, 動作摘要 (motion analysis, keyframe extraction, motion summarization)
Foreign Keywords: Motion analysis, Keyframe extraction, Motion summarization
Abstract (translated from Chinese):
Much of the motion capture data used in animation, commercials, and video games is difficult to acquire, and considerable manpower is needed to cut it into segments of distinct behaviors. For a very large motion database, the labor consumed by this process is correspondingly enormous, so segmenting motion data appropriately according to its content becomes an important issue.

In this thesis, we propose an automatic segmentation method for motion data. It not only places identical actions in the same segment, but also generates a summarizing textual description for each action, providing users with a more semantic, higher-level interactive environment.

First, we propose a new motion representation consisting of two kinds of features: global features and relative-limb local features. Global features describe the movement of the body, while local features describe the movements of the limbs. Because each feature is defined so that its positive and negative values each correspond to a word, we can tell which action the motion data is currently performing by observing the sign changes of the feature values.

Finally, we verify the effectiveness of the method on several example sets and discuss directions for its future development.
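The sign-to-word idea above can be made concrete with a short sketch: each scalar motion feature is defined so that its positive and negative values select one of a pair of antonymous words. This is a minimal Python illustration assuming poses are given as per-frame root-joint heights; the feature definition, word pairs, and threshold are hypothetical placeholders, not the thesis's actual definitions.

```python
# Hypothetical global feature: vertical velocity of the root joint.
# Positive means the body is rising, negative that it is sinking.
def torso_vertical_velocity(root_heights, frame, fps=120.0):
    return (root_heights[frame + 1] - root_heights[frame]) * fps

# One antonymous word pair per feature, chosen so the sign picks the word.
WORD_PAIRS = {
    "torso_vertical": ("rising", "sinking"),
    "left_knee": ("bending", "stretching"),
}

def sign_to_word(feature_name, value, eps=1e-3):
    """Map the thresholded sign of a feature value to its word."""
    positive, negative = WORD_PAIRS[feature_name]
    if value > eps:
        return positive
    if value < -eps:
        return negative
    return "still"

# Usage: label one frame of a made-up root-height trajectory (in meters).
root_heights = [0.90, 0.92, 0.95, 0.95, 0.93]
v = torso_vertical_velocity(root_heights, frame=1)
print(sign_to_word("torso_vertical", v))  # prints "rising"
```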
Abstract (English):
Motion capture data used in animation, commercial advertisements, and video games require considerable manpower to segment into clips of distinct behaviors. When the database is large, the cost of this segmentation process becomes inevitably high. Automatic segmentation is therefore an important issue in processing human motion data.

In this thesis, we propose a method for segmenting motion capture data automatically. Our method not only groups similar motions into the same clip, but also gives each segment a textual description, which provides a high-level interactive environment for animators.

First, we propose a new motion representation: for each motion we define two kinds of features, namely global features and local features. The global features refer to movements of the torso, and the local features refer to movements of the limbs. Based on these features and their signs, we can produce a textual abstraction of a motion clip; in other words, we can understand the motion data by observing the variations of the features. Finally, we give several empirical examples to show the effectiveness of the proposed method.
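Both abstracts describe segmentation as grouping frames of the same motion content into one clip. The sketch below shows one plausible reading of that idea under stated assumptions: a per-frame feature track is cut wherever its thresholded sign flips. The `min_length` noise guard is an assumption, and in practice the feature curves would typically be smoothed before such sign tests.

```python
def sign(value, eps=1e-3):
    """Thresholded sign of a feature value: +1, -1, or 0 (near-still)."""
    return 1 if value > eps else (-1 if value < -eps else 0)

def segment_by_sign_changes(values, min_length=10):
    """Cut a per-frame feature sequence wherever its thresholded sign flips.

    Frames sharing the same sign belong to the same segment; flips closer
    than min_length frames to the last cut are absorbed as noise. Returns
    half-open (start, end) frame ranges covering the whole sequence.
    """
    segments = []
    start = 0
    for i in range(1, len(values)):
        if sign(values[i]) != sign(values[i - 1]) and i - start >= min_length:
            segments.append((start, i))
            start = i
    segments.append((start, len(values)))
    return segments

# Usage: a feature that is positive, then negative, then positive again.
feature = [0.5] * 30 + [-0.4] * 25 + [0.3] * 20
print(segment_by_sign_changes(feature))  # [(0, 30), (30, 55), (55, 75)]
```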