
Graduate Student: Chin-Hung Teng (鄧進宏)
Thesis Title: On the Study of Creating a Realistic Three-Dimensional Tree Model from Uncalibrated Image Sequence
Advisors: Wen-Hsing Hsu (許文星), Yung-Sheng Chen (陳永盛)
Committee Members:
Degree: Doctor
Department: College of Electrical Engineering and Computer Science, Department of Electrical Engineering
Year of Publication: 2005
Graduation Academic Year: 93 (ROC calendar, i.e., 2004-2005)
Language: English
Pages: 207
Keywords: virtual reality, tree, optical flow, camera self-calibration, tree segmentation, 3D model, camera calibration, tree 3D model, motion estimation, tree rendering, tree modeling


Abstract:
    Because of the rapid development of computer technology, virtual reality has received much attention in recent years. For virtual reality, constructing a realistic three-dimensional (3D) virtual environment is of particular importance. Trees are very common objects in natural environments, so the construction of 3D tree models plays an important role in the 3D reconstruction of natural scenes. In fact, modeling realistic trees has been a research topic in computer graphics for many years, and many approaches have been published in the literature. In computer graphics, trees are typically synthesized by mathematical algorithms combined with botanical knowledge to generate the geometric structure of the tree, and random variations are often introduced during modeling to avoid self-similarity. Although these graphical methods can model quite realistic trees, the generated trees differ from the real ones in our surrounding environment; that is, they are grown by the algorithm itself without reference to any real tree in nature. To create a 3D tree model that resembles a particular real tree, we must model the tree from images of that tree. In this dissertation, a complete framework is developed for constructing a realistic 3D tree model from an uncalibrated image sequence (i.e., images captured with unknown camera internal and external parameters). We first construct the 3D trunk model via structure from motion and then generate leaves on the created trunk model. Because the 3D information of the trunk is computed directly from the images, the resulting trunk model closely resembles the real trunk, and the generated tree crown will likewise resemble the real tree as long as the leaves are generated appropriately. To achieve this goal, however, several issues must be addressed, including correspondence searching, camera self-calibration, and tree segmentation; all of these are investigated in depth in this dissertation.
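    The geometric core of the structure-from-motion step mentioned above is two-view triangulation: once the projection matrices of two views are known (here, obtained by self-calibration), each pair of corresponding trunk points can be lifted to a 3D point. The following self-contained numpy sketch shows standard linear (DLT) triangulation as found in textbooks such as Hartley and Zisserman's Multiple View Geometry; it illustrates only the underlying geometry and is not the dissertation's implementation.

        import numpy as np

        def triangulate_point(P1, P2, x1, x2):
            """Recover a 3D point from two views by linear (DLT) triangulation.

            P1, P2 : 3x4 camera projection matrices
            x1, x2 : (u, v) pixel coordinates of the same physical point
            """
            # Each view contributes two linear constraints on the homogeneous
            # 3D point X, derived from the cross product x x (P X) = 0.
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            # Homogeneous least squares: the right singular vector with the
            # smallest singular value minimizes ||A X|| subject to ||X|| = 1.
            X = np.linalg.svd(A)[2][-1]
            return X[:3] / X[3]

        if __name__ == "__main__":
            # Two synthetic cameras observing the point (1, 2, 10).
            P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # reference view
            P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # translated view
            X_true = np.array([1.0, 2.0, 10.0, 1.0])
            x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
            x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
            print(triangulate_point(P1, P2, x1, x2))  # ~ [ 1.  2. 10.]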
    To acquire the 3D information of the trunk from an uncalibrated image sequence, camera self-calibration (i.e., calibrating the camera directly from the captured images) is a necessary step. To calibrate the camera, however, image correspondences must be identified first. Searching for image correspondences is an elementary but ill-posed problem in computer vision. Nevertheless, optical flow computation can provide quite accurate image motion estimates, and hence image correspondences, for an image sequence. In this dissertation, an accurate algorithm for computing optical flow under non-uniform brightness variations is first developed. The issue of camera self-calibration is then discussed in depth, and a self-calibration algorithm that accommodates different camera constraints is proposed. An algorithm for segmenting trunk and leaf regions from a single image is also presented; the extracted trunk region is quite useful for the subsequent construction of the 3D trunk model. The skeleton of the segmented trunk region is first extracted to represent the 2D trunk structure of the tree. This 2D trunk skeleton is then extended to a 3D skeleton by exploiting the calibrated cameras and the trunk correspondences. After the 3D trunk skeleton is recovered, a set of generalized cylinders is generated around it to model the 3D trunk. Since the camera has been calibrated, the trunk texture can be obtained easily by reprojecting the 3D trunk model onto the image plane; mapping this texture greatly improves the realism of the trunk model. Finally, leaves are generated on the created 3D trunk model to produce a more realistic 3D tree model. Experiments were conducted, and the results demonstrate the feasibility of the proposed system for creating a realistic 3D tree model from an uncalibrated image sequence.
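    To make the first of these steps concrete: classical optical flow rests on brightness constancy, \(I(x+u,\, y+v,\, t+1) = I(x,y,t)\), which breaks down under the non-uniform brightness variations addressed here. A representative way to relax it, in the spirit of generalized brightness-change models from the optical flow literature, is to allow a spatially varying multiplier and offset and to minimize a robust, regularized energy. The formulation below is a generic sketch of this family of methods; the dissertation's exact energy, including its dynamic smoothness adjustment, is developed in Chapter 2 and may differ in detail.

    \[
    I(x+u,\, y+v,\, t+1) = m(x,y)\, I(x,y,t) + c(x,y),
    \]
    \[
    E(u,v,m,c) = \iint \rho\big(I(x+u,\, y+v,\, t+1) - m\,I - c\big)\, dx\, dy
      + \lambda \iint \big(\lVert\nabla u\rVert^2 + \lVert\nabla v\rVert^2\big)\, dx\, dy
      + \mu \iint \big(\lVert\nabla m\rVert^2 + \lVert\nabla c\rVert^2\big)\, dx\, dy,
    \]

    where \(\rho\) is a robust penalty that limits the influence of outliers (e.g., at motion boundaries), and \(\lambda, \mu\) weight the smoothness of the flow field and of the brightness-change fields, respectively.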

Table of Contents:
    Abstract in Chinese
    Acknowledgments in Chinese
    Abstract in English
    Acknowledgments in English
    Contents
    List of Figures
    List of Tables
    1 Introduction
        1.1 Motivation and Objective
        1.2 Scope of this Research
        1.3 Main Contributions
        1.4 Thesis Organization
    2 Optical Flow Computation
        2.1 Introduction
        2.2 Gradient-Based Regularization Method
            2.2.1 Energy Function Formulation
            2.2.2 Robust Estimation
            2.2.3 Dynamic Smoothness Adjustment
            2.2.4 Constraint Refinement
        2.3 Numerical Minimization Algorithm
        2.4 Experiments
            2.4.1 Results for Synthetic Image Sequences
            2.4.2 Results for Real Image Sequences
        2.5 Summary
    3 Camera Self-Calibration
        3.1 Introduction
        3.2 Background Knowledge
            3.2.1 Camera Model
            3.2.2 Epipolar Geometry
        3.3 Camera Self-Calibration
            3.3.1 Energy Function Formulation
            3.3.2 Evaluating the Energy Function
            3.3.3 Minimizing the Energy Function
            3.3.4 Self-Calibration under Different Camera Constraints
        3.4 Experiments
            3.4.1 Simulated Data
            3.4.2 Real Data
            3.4.3 Time Complexity
        3.5 Summary
    4 Tree Segmentation
        4.1 Introduction
        4.2 Preliminary Segmentation
        4.3 Trunk Region Extraction
            4.3.1 Remove Non-trunk Regions
            4.3.2 Extract Trunk Flows and Identify Trunk Regions
            4.3.3 Find Potential Branch Regions
            4.3.4 Merge Trunk Regions
        4.4 Leaf Region Identification
        4.5 Experiments
        4.6 Discussion
        4.7 Summary
    5 3D Trunk Model Construction
        5.1 System Overview
        5.2 Extracting Trunk Region
        5.3 Establishing 2D Trunk Structure
        5.4 Recovering 3D Trunk Skeleton
        5.5 Building 3D Trunk Model
        5.6 Experiments and Discussion
            5.6.1 Camera Self-Calibration
            5.6.2 3D Trunk Skeleton Recovery and 3D Trunk Model Building
            5.6.3 Discussion
        5.7 Summary
    6 Leaf Generation
        6.1 Introduction
            6.1.1 L-Systems
            6.1.2 Interactive Methods
        6.2 Parametric Tree Modeling
        6.3 Generating Twigs and Leaves on 3D Trunk Model
        6.4 Experimental Results and Discussion
        6.5 Summary
    7 Conclusions
        7.1 Summary
        7.2 Further Research
    Bibliography
    List of Publications
    Vitae


Full-Text Availability: Full text not authorized for public release (campus and off-campus networks).