| Graduate Student | 劉心如 (Liu, Hsin-Ju) |
|---|---|
| Thesis Title | 運用標籤賽局於數位影像之主體重新對焦 (Refocusing on the Object by a Labeling Game for Digital Images) |
| Advisor | 張隆紋 (Chang, Long-Wen) |
| Oral Examination Committee | 陳祝嵩; 廖弘源 |
| Degree | Master (碩士) |
| Department | 電機資訊學院 - 資訊系統與應用研究所 (Institute of Information Systems and Applications) |
| Year of Publication | 2012 |
| Academic Year | 100 (ROC calendar) |
| Language | English |
| Number of Pages | 33 |
| Chinese Keywords | 對焦 (focusing), 影像分割 (segmentation), 標籤 (labeling) |
| English Keywords | focusing, segmentation, labeling |
When taking a photograph, a scene usually contains multiple objects that lie at different depths from the camera. The photographer chooses one depth as the basis of focusing and uses depth of field to present the intended mood. The visual quality of a photo, however, is limited by the type of camera lens and the photographer's skill. If the focus is inaccurate at capture time, the image cannot convey the photographer's intent and feeling, and it becomes a failed photograph. To address this problem, this thesis proposes a concept that frees the photographer from considering focus while shooting: simply capture an all-in-focus image, then refocus on the desired object or region afterwards through image processing.

With limited image information, we propose an object-based post-capture refocusing framework. A labeling game identifies the object to be refocused; a spatial filter then makes it more prominent in the image, while the background is blurred to simulate defocus. This achieves a designed depth-of-field effect and improves the overall visual quality of the image.
Generally, a scene in a photographic image contains more than one object, and these objects lie at different depths. Photographers select a depth as the basis of focusing to present the desired scenario. However, the visual quality of a photo is limited by the type of camera lens and the skill of the photographer. If the wanted object is not focused properly, the photo cannot convey the photographer's feeling and becomes a failure. To solve this problem, this thesis proposes a concept in which photographers do not need to consider focusing while shooting: they simply capture an all-in-focus image and refocus it afterwards by computer processing.

With limited image information, we propose an object-based refocusing framework to refocus on the wanted object. Through an image labeling game algorithm, we identify the object to be refocused and make it more prominent in the image by using a spatial filter. Similarly, we simulate the out-of-focus effect by blurring the background. After these processes, we obtain a correctly focused image that presents the desired depth of field and improves the quality of visual communication.
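The pipeline above can be sketched in pure Python. This is a minimal illustration under strong assumptions, not the thesis's exact algorithm: the image is a grayscale nested list, the "labeling game" is reduced to an iterated best-response update over two labels (each pixel picks the label maximizing a payoff that combines 4-neighbour agreement with similarity to user-seeded intensities), and the spatial filter is a plain 3x3 box blur applied only to background pixels.

```python
def label_game(img, seeds, iters=10):
    """Assign label 0 (background) or 1 (object) to each pixel.
    seeds: dict {(row, col): label} of user-marked pixels (at least
    one seed per label). Each pixel is treated as a player that
    repeatedly best-responds to its neighbours' current labels."""
    h, w = len(img), len(img[0])
    # Mean intensity of each seed class serves as a crude data model.
    means = {}
    for lab in (0, 1):
        vals = [img[r][c] for (r, c), s in seeds.items() if s == lab]
        means[lab] = sum(vals) / len(vals)
    # Initial strategy: nearest seed-class mean.
    labels = [[0 if abs(img[r][c] - means[0]) < abs(img[r][c] - means[1])
               else 1 for c in range(w)] for r in range(h)]
    for (r, c), s in seeds.items():
        labels[r][c] = s
    for _ in range(iters):
        new = [row[:] for row in labels]
        for r in range(h):
            for c in range(w):
                if (r, c) in seeds:
                    continue  # seeds are fixed strategies
                best, best_pay = labels[r][c], None
                for lab in (0, 1):
                    # Payoff: neighbour agreement minus a data penalty.
                    agree = sum(1 for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                                if 0 <= r + dr < h and 0 <= c + dc < w
                                and labels[r + dr][c + dc] == lab)
                    pay = agree - abs(img[r][c] - means[lab]) / 255.0
                    if best_pay is None or pay > best_pay:
                        best, best_pay = lab, pay
                new[r][c] = best
        labels = new
    return labels

def refocus(img, labels):
    """Blur background pixels (label 0) with a 3x3 box filter while
    leaving object pixels (label 1) sharp, simulating shallow depth
    of field around the labeled object."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(h):
        for c in range(w):
            if labels[r][c] == 0:
                vals = [img[rr][cc]
                        for rr in range(max(0, r - 1), min(h, r + 2))
                        for cc in range(max(0, c - 1), min(w, c + 2))]
                out[r][c] = sum(vals) // len(vals)
    return out
```

With one scribble per class as seeds, the best-response game settles into a stable labeling within a few iterations on simple images, and only the pixels labeled background are smoothed, so the chosen object stays sharp.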