| Field | Value |
|---|---|
| Graduate Student | 楊昇諺 Sheng-Yan Yang |
| Thesis Title | 藉由與背景比較,從監視影片中萃取移動物體之研究 (A Study of Moving Objects Extraction for Surveillance Videos by Background-Subtraction Method) |
| Advisor | 許秋婷 Chiou-Ting Hsu |
| Oral Defense Committee | |
| Degree | Master |
| Department | College of Electrical Engineering and Computer Science, Department of Computer Science |
| Year of Publication | 2006 |
| Academic Year of Graduation | 94 (ROC calendar) |
| Language | Chinese |
| Number of Pages | 38 |
| Keywords | 物體偵測 (object detection) |
本篇論文提出一種利用跟背景模型的比較,從監視器影片中有效萃取移動前景物體的方法。我們首先將一個像素用一個混合的特徵向量來表示,這個特徵向量包含了它的GMM機率可能性、顏色特徵以及空間位置,並且利用無母數方法,估計出每張影像畫面的機率分佈。接著,我們藉由分群,將一張影像畫面根據它們混合的特徵向量分成很多群,因此同一群的混合特徵向量會非常類似。最後我們將每一群的背景可能性用該群的代表特徵取代,成為一個新的背景可能性;如此一來,這個背景模型可以看成是一種根據空間及色彩連貫性而產生的平滑GMM背景機率。
為了結合時間上的資訊,我們以DCRF模型為架構,但將原本的背景模型取代成我們上述的修正式背景模型;我們也為原本DCRF模型中的陰影模型加入了彩度的資訊,以增進判斷出陰影的能力。此外,為了降低運算時間,我們提出「先過濾像素,再進行分群步驟」的方法,成功地加快了運算速度。從我們的實驗結果與其他方法的比較中可以看出,在背景充滿許多動態物體的狀況下,我們的方法的確可以萃取出輪廓較清晰的前景物體。
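The chrominance cue mentioned above can be illustrated with a small, self-contained check. This is not the DCRF shadow model itself, only a sketch of the underlying observation: in YCbCr space, a cast shadow dims a background pixel's luma multiplicatively while leaving its chroma nearly unchanged. The function name, luma ratio range, and chroma tolerance below are all illustrative assumptions, not values from the thesis.

```python
def looks_like_shadow(pixel_ycbcr, bg_ycbcr,
                      luma_range=(0.4, 0.95), chroma_tol=10.0):
    """Hypothetical shadow test: a shadowed pixel is a darkened version
    of the background (luma ratio inside `luma_range`) whose chroma
    stays within `chroma_tol` of the background's chroma."""
    y, cb, cr = pixel_ycbcr
    by, bcb, bcr = bg_ycbcr
    ratio = y / by                                   # shadow dims luma multiplicatively
    chroma_shift = max(abs(cb - bcb), abs(cr - bcr))  # shadow barely moves chroma
    return luma_range[0] <= ratio <= luma_range[1] and chroma_shift <= chroma_tol
```

A genuinely foreground pixel usually fails the chroma test even when it happens to be darker than the background, which is why adding the chrominance component sharpens the shadow/foreground decision.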
This thesis proposes an approach to extracting moving foreground objects from surveillance videos by a background-subtraction method. We first represent each pixel with a hybrid feature vector, which includes its GMM likelihood, color, and spatial features, and estimate the density of each video frame by a non-parametric method. Next, we apply a clustering process to segment the video frame into clusters with similar hybrid features. Finally, we replace the background likelihood of each cluster with the GMM likelihood at the cluster mode. Hence, the resulting background model becomes a smoothed GMM in terms of spatial and color coherency.
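The smoothing idea above can be sketched in a few lines. This is an illustrative toy, not the thesis implementation: the flat-kernel mean shift, the crude feature scaling, and the bandwidth are all simplifying assumptions, and the cluster-mode likelihood is taken as the likelihood coordinate of each pixel's settled mode.

```python
import numpy as np

def mean_shift(X, bandwidth, n_iter=30):
    """Flat-kernel mean shift: each point repeatedly moves to the mean of
    the original points within `bandwidth`, settling at a density mode."""
    Y = X.copy()
    for _ in range(n_iter):
        for i in range(len(Y)):
            near = np.linalg.norm(X - Y[i], axis=1) < bandwidth
            Y[i] = X[near].mean(axis=0)
    return Y

def smooth_likelihood(likelihood, color, bandwidth):
    """Cluster pixels on a hybrid (likelihood, color, position) feature and
    replace each pixel's GMM likelihood with that of its cluster mode."""
    h, w = likelihood.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.column_stack([
        likelihood.ravel(),              # GMM background likelihood
        color.reshape(-1, 3) / 255.0,    # color, scaled to [0, 1]
        ys.ravel() / h, xs.ravel() / w,  # spatial position
    ])
    modes = mean_shift(feats, bandwidth)
    # pixels whose shifted points coincide belong to the same cluster
    labels = np.unique(np.round(modes, 2), axis=0, return_inverse=True)[1]
    out = np.empty(h * w)
    for lab in np.unique(labels):
        members = np.flatnonzero(labels == lab)
        out[members] = modes[members[0], 0]  # likelihood at the cluster mode
    return out.reshape(h, w)
```

On a tiny synthetic frame, a single pixel with a noisy likelihood inside an otherwise-coherent region is pulled toward its region's value, which is exactly the spatial-and-color smoothing effect described above.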
In order to incorporate the temporal information of previous extraction results, we follow the DCRF model, which uses a conditional random field to model the temporal and spatial neighborhood, but replace its original background model with our proposed modified background model. We also add the chrominance component to the shadow model of the DCRF to obtain a better approximation of shadow. Moreover, to reduce the computational load, we propose a filtering step that excludes pixels from the time-consuming clustering process. Our experimental results and comparisons demonstrate that the proposed method indeed achieves better detection results with accurate object contours, even in dynamic scenes.
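The filtering step can be sketched as a simple pre-classification: pixels whose raw GMM background likelihood is already decisive are labeled directly, and only the ambiguous remainder is passed to the expensive clustering stage. The thresholds below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def filter_pixels(likelihood, lo=0.05, hi=0.95):
    """Split pixels into confident background, confident foreground, and
    an ambiguous set that still needs the clustering step."""
    background = likelihood >= hi          # confidently background
    foreground = likelihood <= lo          # confidently foreground (moving)
    ambiguous = ~(background | foreground)  # only these get clustered
    return background, foreground, ambiguous
```

Since most surveillance pixels are confidently background in a typical frame, the ambiguous set is small and the per-frame clustering cost drops accordingly.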
Index Terms—Static background, dynamic background, DCRF model, GMM, mean-shift clustering process.