
Author: Chun-Ku Lee (李俊谷)
Thesis Title: Abnormal Event Detection in Video Using N-cut Clustering (在視訊中利用N群分割法偵測特殊事件的發生)
Advisor: Chung-Lin Huang (黃仲陵)
Committee Members:
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2006
Graduation Academic Year: 94
Language: English
Number of Pages: 53
Keywords: clustering, spectral graph partitioning, optical flow, K-means, chamfer matching, chi-square test
    Imagine trying to find, in a long video, the segment that corresponds to an unusual event. The first problem is that we cannot know in advance what the unusual scene will look like: in a large crowd of pedestrians, someone may suddenly dash out; at a one-way intersection, a vehicle may drive against traffic; in a bank, a suspicious robber may appear. These unusual events differ greatly from one another, so they cannot be represented by a single model.

    The second problem is that normal events occur far more often than unusual ones. Unusual events usually leave no prior indication and happen very rarely, so unless we can describe what the abundant normal events mean and what elements they contain, we cannot distinguish them from the rare unusual ones. The same difficulty then reappears: given a large volume of diverse yet normal video, how can a computer judge, as intelligently as a person, whether an observed event is normal or not?

    We therefore focus on finding the most unusual segments in a surveillance video and notifying the operator for further judgment. We choose silhouettes and motion vectors as features, because such low-level features are widely used in video analysis and tracking and are easy to obtain. Each motion-vector field is treated as a probability distribution and normalized; by comparing different motion patterns, the segments that differ most can be found through N-cut clustering. The internal self-similarity of each cluster is then analyzed against a threshold to decide which clusters are abnormal, and ROC-curve analysis is used to select the most appropriate threshold and classification result. We hope this work can be applied to a variety of surveillance systems, such as home security, traffic monitoring, and elevator monitoring.
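    A minimal Python sketch of this feature step is given below. The thesis does not publish code, so everything here is illustrative: it assumes OpenCV and NumPy, uses Farnebäck dense optical flow only as a stand-in for whichever optical-flow method is actually used, and the function names and bin counts are illustrative choices rather than values taken from the thesis.

    import numpy as np
    import cv2

    def motion_histograms(prev_gray, curr_gray, mag_bins=16, dir_bins=16):
        """Dense optical flow between two consecutive grayscale frames, summarized
        as normalized magnitude and direction histograms (probability distributions)."""
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        mag_hist, _ = np.histogram(mag, bins=mag_bins, range=(0, float(mag.max()) + 1e-6))
        dir_hist, _ = np.histogram(ang, bins=dir_bins, range=(0, 2 * np.pi))
        # Normalize each histogram so it sums to 1, i.e. a probability distribution.
        mag_hist = mag_hist / max(mag_hist.sum(), 1)
        dir_hist = dir_hist / max(dir_hist.sum(), 1)
        return mag_hist, dir_hist

    def chi_square_difference(h1, h2, eps=1e-10):
        """Chi-square distance between two normalized histograms."""
        return 0.5 * float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

    Aggregating such per-frame histograms over each overlapping clip yields the motion pattern that the clip-to-clip comparison operates on.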


    Imagine you are asked to find an unusual event in a daily recorded surveillance video. A question immediately arises: how can such events be detected across a variety of scenes?

    We focus our attention on finding the events that differ most from the others and reporting them for further examination. First, we divide a video into several overlapping clips. Then we use optical flow to compute the motion vectors of each frame in each clip. Magnitude histograms, direction histograms, and color histograms are selected as features. We form a similarity matrix by using the chi-square difference and the chamfer difference as similarity measures between the features of different clips. We then apply N-cut clustering. A threshold is selected to balance the FAR (false alarm rate) and the THR (true hit rate) according to the ROC (receiver operating characteristic) curve; once a threshold is selected, clusters with low self-similarity values are reported as unusual events for further examination. Finally, this mechanism is tested on six different views.
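    The clustering and reporting steps can be sketched in the same spirit. The following Python sketch is an assumption-laden illustration, not the thesis's exact algorithm: it builds an affinity matrix from pairwise clip distances (e.g., combined chi-square and chamfer differences), clusters the clips with a normalized-cut style spectral method (leading eigenvectors of the symmetrically normalized affinity, followed by k-means in the embedded space), and flags clusters whose average internal self-similarity falls below a chosen threshold. The sigma parameter, the k-means step, and all function names are illustrative.

    import numpy as np
    from scipy.linalg import eigh
    from scipy.cluster.vq import kmeans2

    def affinity_matrix(distances, sigma=1.0):
        """Turn a pairwise clip-distance matrix into a similarity (affinity) matrix."""
        return np.exp(-distances ** 2 / (2.0 * sigma ** 2))

    def ncut_clusters(W, k):
        """Normalized-cut style spectral clustering: embed the clips with the leading
        eigenvectors of D^{-1/2} W D^{-1/2}, then group them with k-means."""
        d = W.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-10))
        L_sym = D_inv_sqrt @ W @ D_inv_sqrt
        _, eigvecs = eigh(L_sym)                  # eigenvalues in ascending order
        U = eigvecs[:, -k:]                       # k leading eigenvectors
        U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-10)
        _, labels = kmeans2(U, k, minit='++')
        return labels

    def flag_unusual(W, labels, threshold):
        """Report clusters whose average internal self-similarity is below the threshold."""
        unusual = []
        for c in np.unique(labels):
            idx = np.where(labels == c)[0]
            self_sim = float(W[np.ix_(idx, idx)].mean())
            if self_sim < threshold:
                unusual.append((int(c), self_sim))
        return unusual

    In practice the threshold passed to flag_unusual would be swept over a range of values, the FAR and THR of each candidate plotted as an ROC curve, and the operating point read off that curve, as the abstract describes.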

    Table of Contents
    Chapter 1. Introduction 1
        1.1 Motivation 1
        1.2 Related Work 2
        1.3 System Overview 4
    Chapter 2. Motion Information and Similarity Measure 5
        2.1 Motion Information 5
        2.2 Color Histogram 14
        2.3 Chi-square Difference 16
        2.4 Chamfer Difference 16
        2.5 Similarity Matrix 21
    Chapter 3. N-Cut Algorithm 22
        3.1 Bi-Cut Algorithm 22
        3.2 N-Cut Algorithm 29
        3.3 N-Cut for Video Clip Clustering 33
        3.4 ROC Curve Test 35
    Chapter 4. Experimental Results 38
    Chapter 5. Conclusion and Future Works 51
    References 52


    Full Text Availability: Not authorized for public release (campus and off-campus networks)
