
Author: Kao, Chia-Hui (高佳慧)
Title: Power-aware Depth Map Generation for 3D Portrait on Android Systems (在Android系統上的電量感知3D人像深度圖產生法)
Advisor: King, Chung-Ta (金仲達)
Committee members: 曾紹崟, 梁伯嵩
Degree: Master
Department: Department of Computer Science, College of Electrical Engineering and Computer Science
Year of publication: 2011
Graduation academic year: 99 (ROC calendar)
Language: Chinese
Number of pages: 38
Keywords (Chinese): 2D-to-3D conversion, face recognition, depth map, power awareness
Abstract (Chinese):
    In converting a 2D photograph into a 3D stereoscopic photograph, the most important information is the depth of every point in the picture. An ordinary 2D photograph, however, contains no depth information, which makes generating a 3D photograph difficult. On the other hand, for certain kinds of photographs, such as facial self-portraits, some of the properties and cues in the picture are already known, so the depth information can be inferred from them. If such a method is applied to an embedded system, however, performance and power consumption must additionally be taken into account. This thesis proposes a complete system that converts a 2D portrait photograph into a 3D stereoscopic image while balancing power consumption against the quality of the resulting 3D effect. First, the system selects among different depth-map generation algorithms according to the device's remaining battery level. The generated depth map is then used with the DIBR technique to produce the images corresponding to the left and right eyes. Finally, the two views are combined into a single 3D photograph. Experimental results confirm that the 3D photographs produced by the system deliver a satisfactory effect while respecting the system's remaining battery.
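
    The power-aware step described above selects between depth-map generation algorithms according to the device's remaining battery. The following Java sketch shows how such a selection could be wired up on Android; the threshold value and the two generator methods (simpleDepthMap, faceAwareDepthMap) are hypothetical placeholders standing in for the thesis's algorithms, while the battery query itself uses the standard sticky ACTION_BATTERY_CHANGED intent.

    import android.content.Context;
    import android.content.Intent;
    import android.content.IntentFilter;
    import android.graphics.Bitmap;
    import android.os.BatteryManager;

    // Sketch: choose a depth-map generation strategy from the remaining battery level.
    public class PowerAwareDepthMapper {

        // Hypothetical threshold: below this fraction, fall back to the cheaper algorithm.
        private static final float LOW_BATTERY_THRESHOLD = 0.3f;

        // Read the remaining battery fraction from the sticky ACTION_BATTERY_CHANGED intent.
        public static float remainingBatteryFraction(Context context) {
            Intent status = context.registerReceiver(
                    null, new IntentFilter(Intent.ACTION_BATTERY_CHANGED));
            if (status == null) return 1.0f;
            int level = status.getIntExtra(BatteryManager.EXTRA_LEVEL, -1);
            int scale = status.getIntExtra(BatteryManager.EXTRA_SCALE, -1);
            return (level >= 0 && scale > 0) ? level / (float) scale : 1.0f;
        }

        // Pick the more accurate, more power-hungry algorithm only when the battery allows it.
        public static Bitmap generateDepthMap(Context context, Bitmap portrait) {
            if (remainingBatteryFraction(context) < LOW_BATTERY_THRESHOLD) {
                return simpleDepthMap(portrait);     // cheaper, coarser depth map
            }
            return faceAwareDepthMap(portrait);      // detailed, face-based depth map
        }

        // Placeholders for the two generators described in the thesis.
        private static Bitmap simpleDepthMap(Bitmap portrait) { return portrait; }
        private static Bitmap faceAwareDepthMap(Bitmap portrait) { return portrait; }
    }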


Abstract (English):
    The most important information in transforming a 2D image into a 3D image is the depth of each pixel in the image. However, a normal 2D image usually does not contain depth information, which makes the transformation difficult. On the other hand, for some specific pictures, such as personal portraits, it is possible to infer crude depth information from their known contexts and properties. To port this technique to embedded systems, we must further consider performance and power consumption. This thesis presents a power-aware, 2D-to-3D image transformation tool for personal portraits on embedded systems such as cell phones or personal information devices (PIDs). The tool first chooses a suitable depth-map generation algorithm based on the remaining power of the device. The depth map is then used to generate a stereoscopic (left- and right-eye) image pair with the depth-image-based rendering (DIBR) method. Finally, the two-eye views are merged to produce a 3D image. Our experimental results show that the proposed method can produce a satisfactory stereoscopic effect.
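
    The last step of the pipeline, merging the two-eye views into a single red-blue image (see Section 3.3 in the table of contents), can be illustrated with a standard red-cyan anaglyph merge: take the red channel from the left-eye view and the green and blue channels from the right-eye view. The sketch below assumes the DIBR step has already produced the two Android bitmaps; it shows the general technique rather than the thesis's exact implementation.

    import android.graphics.Bitmap;
    import android.graphics.Color;

    // Sketch: merge left/right eye views into one red-blue (anaglyph) image.
    public final class AnaglyphMerger {

        public static Bitmap merge(Bitmap leftEye, Bitmap rightEye) {
            int width = Math.min(leftEye.getWidth(), rightEye.getWidth());
            int height = Math.min(leftEye.getHeight(), rightEye.getHeight());
            Bitmap out = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    int left = leftEye.getPixel(x, y);    // red channel comes from the left view
                    int right = rightEye.getPixel(x, y);  // green/blue come from the right view
                    out.setPixel(x, y,
                            Color.rgb(Color.red(left), Color.green(right), Color.blue(right)));
                }
            }
            return out;
        }
    }

    In practice the per-pixel getPixel/setPixel loop would normally be replaced by bulk getPixels/setPixels calls, which avoid one native call per pixel.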

Table of Contents:
    Abstract (Chinese) I
    Acknowledgements II
    Abstract III
    Index of Contents IV
    Index of Figures VI
    Chapter 1. Introduction 1
    Chapter 2. Background and Motivation 4
    Chapter 3. Power-aware Image Transformation 9
    3.1 Power-aware Depth Map Generation 10
    3.2 Merge Database 21
    3.3 Red-blue Image Generation 22
    Chapter 4. Experiment Results 27
    4.1 Experimental Environment 27
    4.2 Power Comparison of Different Depth-map Generation Methods 29
    4.3 Results of the Image Transformation System 31
    Chapter 5. Conclusion and Future Work 35
    Chapter 6. References 38

References:
    [1] C. Fehn, “A 3D-TV Approach Using Depth-Image-Based Rendering (DIBR),” Proceedings of Visualization, Imaging, and Image Processing ’03, Benalmadena, Spain, pp. 482-487, Sep. 2003.
    [2] C. Fehn, “Depth-image-based rendering (DIBR), compression and transmission for a new approach on 3D-TV,” Proc. SPIE Conf. Stereoscopic Displays and Virtual Reality Systems XI, vol. 5291, pp. 93-104, 2004.
    [3] H. Murata, Y. Mori, S. Yamashita, A. Maenaka, S. Okada, K. Oyamada, and S. Kishimoto, “A real-time image conversion technique using computed image depth,” SID Int. Symp. Digest Tech. Papers, vol. 29, pp. 919-922, 1998.
    [4] S. Battiato, A. Capra, S. Curti, and M. La Cascia, “3D stereoscopic image pairs by depth-map generation,” Proc. Int. Symp. 3D Data Processing, Visualization and Transmission (3DPVT), pp. 124-131, IEEE, Piscataway, NJ, 2004.
    [5] S. Battiato, S. Curti, M. La Cascia, M. Tortora, and E. Scordato, “Depth-Map Generation by Image Classification,” Proceedings of SPIE, 2004.
    [6] S. Battiato, A. Capra, S. Curti, and M. La Cascia, “3D Stereoscopic Image Pairs by Depth-Map Generation,” Proceedings of the 2nd International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT), 2004.
    [7] F.-H. Cheng and Y.-H. Liang, “Depth map generation based on scene categories,” Journal of Electronic Imaging, 2009.

    Full text: Not authorized for public release (campus network, off-campus network, and National Central Library Taiwan NDLTD system).