
Student: Su, Pin-Ching (蘇品靜)
Title: Flow Field Similes (基於流場相似度進行影片運鏡模擬)
Advisor: Chen, Hwann-Tzong (陳煥宗)
Committee Members: Lai, Shang-Hong (賴尚宏); Liu, Tyng-Luh (劉庭祿)
Degree: Master
Department: Department of Computer Science, College of Electrical Engineering and Computer Science
Publication Year: 2013
Graduation Academic Year: 101
Language: English
Pages: 31
Chinese Keywords: 影片編輯 (video editing), 流場 (flow field), 影像變形 (image warping)
English Keywords: video editing, optical flow, image warping
  • This thesis proposes the idea of using flow field similarity to transfer the camera movements and shooting styles of professional videos to amateur footage. Given an amateur video and a reference flow field, our goal is to compute a series of planar (homography) transformations and apply them to the video, so that the flow field of the corrected video resembles the reference flow field. The reference flow field may come from a real video or be synthesized. We formulate a nonlinear optimization problem over sparse corresponding points, finding the most suitable transformation for each frame of the video by minimizing the difference between the flow field of the output video and the reference flow field. Our experiments show that, by making the flow field of the output video resemble the reference flow field, we can create various types of camera movements and shooting styles, ranging from simple video stabilization to more complex effects such as smooth zooming, anti-blur fast panning, zooming while rotating, tracking shots, and dolly zoom.
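
    A minimal sketch of the per-frame objective suggested by this description, written in my own notation (the exact cost terms and the robust penalty used in the thesis are not given here): for consecutive frames t and t+1 with sparse correspondences p_{t,i} ↔ q_{t,i} and per-frame warps H_t,

    \[
    \min_{\{H_t\}} \; \sum_{t}\sum_{i} \rho\Big( \big\| \big(H_{t+1}\,q_{t,i} - H_{t}\,p_{t,i}\big) - f^{\mathrm{ref}}_{t}\big(H_{t}\,p_{t,i}\big) \big\| \Big),
    \]

    where H x denotes the homography H applied to the point x, f^{ref}_t is the reference flow sampled at the warped location of p_{t,i}, and ρ is a robust penalty.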


    This thesis introduces the idea of flow field similes for transferring better camera movements and shooting styles from a reference flow field to carelessly shot videos. Given an input video and a reference flow field, we aim to compute a series of homography transformations to warp the input video so that the flow of the output video closely resembles the reference flow. The reference flow field may be derived from a real video or synthetically generated. We formulate a nonlinear optimization problem over sparse feature correspondences to find the required transformation for each video frame by minimizing the difference between the intended reference flow field and the flow field of the output video. We show that, by enforcing the flow field of the output video to resemble the reference flow field, we are able to create different types of camera movements and shooting styles, from the simplest effect of video stabilization to more complex ones such as smooth zooming, anti-blur fast panning, zooming while rotating, tracking shots, and dolly zoom.
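
    A minimal, self-contained Python sketch of the per-frame step described above, assuming sparse feature correspondences between consecutive frames are already available and that the reference flow has been sampled at the warped feature locations; the function names, the choice of scipy.optimize.least_squares, and the Huber loss are illustrative assumptions, not the implementation used in the thesis.

    import numpy as np
    from scipy.optimize import least_squares

    def apply_homography(H, pts):
        # Apply a 3x3 homography to an (N, 2) array of points.
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
        warped = pts_h @ H.T
        return warped[:, :2] / warped[:, 2:3]

    def flow_residuals(params, prev_pts_out, next_pts, ref_flow):
        # params       : 8 free entries of the next frame's homography (bottom-right fixed to 1)
        # prev_pts_out : feature positions in the already-warped previous output frame
        # next_pts     : corresponding feature positions in the raw next input frame
        # ref_flow     : reference flow vectors sampled at prev_pts_out
        H_next = np.append(params, 1.0).reshape(3, 3)
        next_pts_out = apply_homography(H_next, next_pts)  # features after warping the next frame
        output_flow = next_pts_out - prev_pts_out          # flow the output video would exhibit
        return (output_flow - ref_flow).ravel()

    def solve_frame_homography(prev_pts_out, next_pts, ref_flow):
        # Find the warp for the next frame that makes the output flow at the
        # sparse correspondences match the reference flow as closely as possible.
        x0 = np.eye(3).ravel()[:8]                         # start from the identity warp
        result = least_squares(flow_residuals, x0,
                               args=(prev_pts_out, next_pts, ref_flow),
                               loss="huber")               # robust to bad feature matches
        return np.append(result.x, 1.0).reshape(3, 3)

    # Toy usage: 100 tracked features, reference flow = a smooth 2-pixel pan to the right.
    prev_pts_out = np.random.rand(100, 2) * 640
    next_pts = prev_pts_out + np.random.randn(100, 2)      # stand-in for tracked matches
    ref_flow = np.tile([2.0, 0.0], (100, 1))
    H_next = solve_frame_homography(prev_pts_out, next_pts, ref_flow)

    In practice the solved homography would then be applied to the next frame (for example with cv2.warpPerspective), and the warped feature positions carried forward to the following frame pair.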

    1 Introduction
    1.1 Our Approach
    1.2 Related Work
    2 Flow Field Similes
    3 Algorithm
    4 Experiments
    5 Applications
    5.1 Reducing Fast-Panning Motion Blur
    5.2 Synthetic Flow
    5.3 Tracking Shots
    5.4 Zooming While Rotating
    5.5 Dolly Zoom
    6 Discussions
    6.1 Limitations and Future Work


    Full-text availability: Not authorized for public access (campus network)
    Full-text availability: Not authorized for public access (off-campus network)
