
Graduate Student: Hsin-Yo Chou (周鑫佑)
Thesis Title: Texture Compression and Synthesis of 3D Color Object Using Singular Value Decomposition (使用奇異值分解方法進行三維彩色物件的材質壓縮與合成之研究)
Advisors: Chung-Lin Huang (黃仲陵), I-Cheng Chang (張意政)
Oral Defense Committee: (not listed)
Degree: Master
Department: College of Electrical Engineering and Computer Science, Department of Electrical Engineering
Year of Publication: 2006
Graduation Academic Year: 94
Language: English
Pages: 84
Keywords: image-based rendering, model-based rendering, compression, 3D wavelet transform, principal component analysis
Digital synthesis of virtual 3D images is an interesting subject. Compared with traditional image synthesis, it reproduces the true appearance of objects more faithfully, and it is widely applied to movie special effects, game animation, and virtual reality. In the future, delivering virtual 3D images over the Internet or print media can provide richly interactive entertainment. Among the methods for generating virtual images of objects, image-based and model-based approaches are the most representative, but each still has shortcomings; we combine the two to render realistic synthetic 3D images. As a result, we do not need to know the object's surface properties: from the sampled textures and the object's geometric information, the appearance of the real object can be reproduced accurately.

In this thesis, we sample the appearance of a real object using a view-dependent mapping method; the object is placed under different lighting environments and observed from different viewing angles. We propose a system framework that uses principal component analysis to decompose the original captured data; the decomposed coefficients are then passed through a three-dimensional wavelet transform. The sampled textures are thus compressed in a three-dimensional coordinate system, in which a two-dimensional plane is defined on the surface of the 3D object and the third dimension is defined along the time axis. Redundant information across the 2D plane and the time axis is removed. Finally, the coefficients are quantized and entropy coded to minimize the amount of stored data.
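The three-dimensional wavelet step described above can be illustrated with a separable Haar transform applied along each axis of a coefficient volume. This is a minimal numpy sketch of the idea; the volume contents are made-up values, and the thesis's actual filter bank (and boundary handling) may differ:

```python
import numpy as np

def haar_step(x, axis):
    """One Haar analysis step along one axis: scaled sums, then differences."""
    even = np.take(x, range(0, x.shape[axis], 2), axis=axis)
    odd = np.take(x, range(1, x.shape[axis], 2), axis=axis)
    return np.concatenate(((even + odd) / np.sqrt(2),
                           (even - odd) / np.sqrt(2)), axis=axis)

# Hypothetical 4x4x4 block of texture data: two axes for the surface
# parameterization, one for the time (lighting/pose sequence) axis.
volume = np.arange(64, dtype=float).reshape(4, 4, 4)

out = volume
for axis in range(3):          # separable transform: filter each axis in turn
    out = haar_step(out, axis)

# The transform is orthonormal, so signal energy is preserved, while smooth
# regions concentrate energy into few coefficients for later quantization.
```

Because the transform is energy-preserving, discarding or coarsely quantizing the small difference coefficients is what yields the compression.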

The compressed coefficients are fed into our rendering process box, which can produce four different results: 1. reconstructing the original 3D pose under the original lighting directions; 2. synthesizing a novel 3D pose under the original lighting directions; 3. relighting the original 3D pose; and 4. synthesizing a novel 3D pose and relighting it. In addition, based on the sampled textures and the object's geometric information, we can use a deformation box to partition the surface light field of the object and deform the 3D object; the deformed parts can still correctly exhibit variations in lighting.


Digital synthesis of virtual 3D object images is an interesting subject. Compared with traditional image compositing, it reproduces the original appearance of 3D objects and is extensively applied to movie special effects, animation, computer games, and virtual reality. In the future, delivering virtual 3D images over the Internet and print media can provide richer interactive entertainment.

In this thesis, we combine the image-based and model-based methods and sample the appearance of an object with a view-dependent mapping method. The object is illuminated from different lighting directions, and its appearance varies with the viewing direction. We propose a system framework that uses principal component analysis (PCA) to decompose the original captured data. The decomposed coefficients are then compressed by a three-dimensional discrete wavelet transform (3D DWT): two dimensions are defined on the 3D model surface and the third along the time axis, so that redundancy across both the surface plane and the time axis is removed. The remaining data are further compressed via quantization and entropy coding to achieve minimum storage.
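The PCA decomposition stage can be sketched via the singular value decomposition named in the title. This is a minimal illustration, not the thesis's implementation: random data stands in for the captured appearance samples, and the component count `r` is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the captured data: k appearance samples
# (one per lighting/viewing condition), each flattened to n texel values.
k, n = 20, 256
textures = rng.standard_normal((k, n))

# PCA via SVD: center on the mean texture, then decompose.
mean_tex = textures.mean(axis=0)
U, s, Vt = np.linalg.svd(textures - mean_tex, full_matrices=False)

# Keep the r leading eigen-textures; the k x r coefficient matrix is the
# part a later wavelet/quantization stage would compress further.
r = 5
coeffs = U[:, :r] * s[:r]          # per-sample coefficients, shape (k, r)
eigen_textures = Vt[:r]            # basis textures, shape (r, n)
approx = coeffs @ eigen_textures + mean_tex

# Truncation error relative to the full data, in Frobenius norm.
rel_err = np.linalg.norm(textures - approx) / np.linalg.norm(textures)
```

Because the singular values are sorted in decreasing order, the leading `r` components capture the largest share of appearance variation, which is what makes aggressive truncation viable.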

After compression, these coefficients are delivered to our rendering box, which can generate four different results: 1. the original pose under the original illuminations; 2. a novel pose under the original illuminations; 3. the original pose under novel illuminations; and 4. a novel pose under novel illuminations.
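The novel-illumination cases can be illustrated under the common assumption that eigen-texture coefficients vary smoothly with the lighting direction, so a new direction can be synthesized by blending the coefficients of nearby sampled directions. All values below are toy numbers, and linear blending is an illustrative assumption rather than the thesis's exact rendering scheme:

```python
import numpy as np

# Toy eigen-texture basis: 2 basis textures of 4 texels each (hypothetical
# values; a real basis would come from the PCA stage).
basis = np.array([[1.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 1.0]])

# Coefficients recovered for two sampled lighting directions A and B.
coeff_a = np.array([1.0, 0.0])
coeff_b = np.array([0.0, 1.0])

# Synthesize an in-between lighting direction by blending coefficients,
# then projecting back through the eigen-texture basis.
t = 0.5
coeff_novel = (1.0 - t) * coeff_a + t * coeff_b
novel_texture = coeff_novel @ basis   # -> [0.5, 0.5, 0.5, 0.5]
```

The key property exploited here is linearity: blending in the low-dimensional coefficient space is far cheaper than blending full-resolution textures.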

Based on the model-based method, we can use a deformation box to split the surface light field of an object and deform the resulting sub-light fields; the deformed parts still correctly present variations due to illumination.

Chapter 1 Introduction
  1.1 Related Work
  1.2 Thesis Overview
Chapter 2 Background and Previous Work
Chapter 3 System Framework
  3.1 System Overview
  3.2 Data Acquisition
  3.3 Data Normalization and Texture Mapping
  3.4 Eigen-Texture Decomposition Method
  3.5 Synthesis for Novel Pose / Novel Illuminations and Deformation
  3.6 Compression of Posture Eigen-images Adopting Three-Dimensional Wavelet Transform
Chapter 4 Experimental Results and Discussion
  4.1 Hardware Setup
  4.2 Analysis of Wood Materials (4.2.1 Compression, 4.2.2 Synthesis)
  4.3 Analysis of Metal Materials (4.3.1 Compression, 4.3.2 Synthesis)
  4.4 Analysis of Porcelain Materials (4.4.1 Compression, 4.4.2 Synthesis)
  4.5 Analysis of Plastic Materials (4.5.1 Compression, 4.5.2 Synthesis)
Chapter 5 Conclusion and Future Work
Bibliography


Full Text Availability: The full text is not authorized for public release (campus or off-campus network).