| Graduate Student | 林湘瑩 Lin, Hsiang-Ying |
|---|---|
| Thesis Title | 用於遠距合作之即時物件標註系統 (An Interactive Object Annotation System for Remote Collaboration) |
| Advisors | 朱宏國 Chu, Hung-Kuo; 王浩全 Wang, Hao-Chuan |
| Committee Members | 姚智原 Yao, Chih-Yuan; 李潤容 Lee, Ruen-Rone |
| Degree | Master |
| Department | Department of Computer Science, College of Electrical Engineering and Computer Science |
| Year of Publication | 2015 |
| Graduation Academic Year | 103 |
| Language | Chinese |
| Pages | 40 |
| Chinese Keywords | 遠距合作 (remote collaboration), 視訊溝通 (video communication) |
| Foreign Keywords | Computer Supported Cooperative Work, Video Content, Communications |
With the ubiquity of smart devices and the Internet, communicating with distant family and friends and sharing everyday moments through video has become one of the most popular ways to stay in touch. Real-time synchronization of shared data further expands the possibilities for remote collaboration: architects around the world can jointly build a 3D model, and physicians can direct a surgical procedure remotely through video. However, sharing only synchronized video and audio is often insufficient for collaborative tasks that depend on explicit reference, such as making clear which button "press that button" actually refers to, leaving the two remote parties unable to understand each other's state of understanding and needs. We therefore propose an interactive video annotation system that lets users annotate objects on screen within a mobile video-call interface. Annotating objects substantially improves communication efficiency: users can clearly mark objects or locations that are hard to describe verbally or that take considerable effort to explain. Instead of saying "plug this blue wire into the top of the red socket on the left," a user assisted by the annotation system only needs to say "connect wire A to position B," which greatly streamlines communication during remote collaboration. The system is divided into two parts, "live-action annotation" and "virtual annotation": the user can either collaborate interactively over video with a remote user, or be guided through a task by instructions predefined in the system.
There is a general need for remote collaboration on physical tasks in everyday life and work, such as obtaining assistance and instruction from a remote expert when assembling furniture or repairing a machine. While multimodal interfaces, such as those integrating video and audio, have been found useful for communication, users may still suffer from the ambiguity of referential expressions in remote collaboration (e.g., indicating which object in the workspace to handle). We propose a novel annotation-based collaborative interface that allows users to add narratable annotations to objects, such as alphabetic labels, in a dynamic video-mediated workspace. By explicitly supporting object narratability, users can refer to objects directly by reading out their labels, which relieves the inconvenience of referring to objects through elaborate descriptions of object attributes or deictic pronouns (e.g., "here", "there"), and thereby greatly improves the efficiency of communication between users. We divide our system into two parts: "live-action annotation" and "virtual annotation". In live-action annotation, a remote user guides the client user by adding annotations in real time; in virtual annotation, a server stores predefined instruction annotations that guide the client user through a physical task step by step.
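To make the two annotation modes more concrete, the following is a minimal sketch, in Python, of the kind of data such a label-based interface might exchange. It is not the thesis implementation: the class names, fields, and normalized-coordinate anchoring are assumptions for illustration only.

```python
# Minimal sketch (assumptions, not the thesis implementation) of the data a
# label-based annotation interface for remote collaboration might exchange.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class Mode(Enum):
    LIVE_ACTION = "live-action"   # a remote helper adds annotations in real time
    VIRTUAL = "virtual"           # the system replays predefined instruction steps


@dataclass
class Annotation:
    """A narratable label attached to an object visible in the video frame."""
    label: str        # short label users can read out, e.g. "A" or "B"
    x: float          # normalized (0-1) position of the object in the frame
    y: float
    frame_id: int     # frame on which the annotation was created
    note: str = ""    # optional free-text instruction


@dataclass
class InstructionStep:
    """One step of a task; in virtual mode such steps are predefined on a server."""
    mode: Mode
    description: str
    annotations: List[Annotation] = field(default_factory=list)

    def narrate(self) -> str:
        """Produce the label-based utterance that replaces a long verbal description."""
        labels = ", ".join(a.label for a in self.annotations)
        return f"{self.description} (labels: {labels})"


if __name__ == "__main__":
    # Instead of "plug the blue wire into the top of the red socket on the left",
    # the helper only has to say something like the narration printed below.
    step = InstructionStep(
        mode=Mode.LIVE_ACTION,
        description="Connect wire A to position B",
        annotations=[
            Annotation(label="A", x=0.32, y=0.61, frame_id=120),
            Annotation(label="B", x=0.70, y=0.28, frame_id=120),
        ],
    )
    print(step.narrate())
```

In a real deployment the (x, y) anchor would also need to be updated by an object tracker so that each label follows its object as the handheld camera moves; that tracking layer is omitted from this sketch.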