Graduate Student: 蔡馥任 (Tsai, Fu-Jen)
Thesis Title: BANet: Blur-aware Attention Network for Dynamic Scene Deblurring
Advisors: 林嘉文 (Lin, Chia-Wen), 林彥宇 (Lin, Yen-Yu)
Committee Members: 莊永裕 (Chuang, Yung-Yu), 彭彥璁 (Peng, Yan-Tsung)
Degree: Master
Department: 電機資訊學院 通訊工程研究所 (Communications Engineering)
Year of Publication: 2021
Academic Year: 109
Language: English
Pages: 36
Keywords (Chinese): 影像去模糊, 模糊感知關注網路
Keywords (English): Image Deblurring, Blur-aware Attention
Abstract (translated from Chinese): Image blur usually arises from fast-moving objects or shaking of the capture device; when both occur at once, the image often exhibits non-uniform blur. Previous methods typically handle non-uniform blur with recurrent multi-scale or multi-patch networks combined with self-attention. However, recurrent networks usually incur long computation times, and inter-pixel or inter-channel self-attention consumes large amounts of memory. This thesis proposes a blur-aware attention network for dynamic scene deblurring, a non-recurrent model that achieves both high deblurring quality and high speed. Our proposed region-based multi-kernel strip-pooling attention module partitions the image into regions according to blur patterns of different orientations and magnitudes, and works together with our proposed cascaded parallel dilated convolution network to extract content features. Extensive experiments show that our method achieves the best results on multiple datasets while running fast enough for real-time processing.
Abstract (English): Image motion blur usually results from moving objects or camera shake. Such blur is generally directional and non-uniform. Previous research attempts to solve non-uniform blur with self-recurrent multi-scale or multi-patch architectures combined with self-attention. However, self-recurrent frameworks typically lead to longer inference times, while inter-pixel or inter-channel self-attention can cause excessive memory usage. This thesis proposes a blur-aware attention network (BANet) that accomplishes accurate and efficient deblurring in a single forward pass. BANet uses region-based self-attention with multi-kernel strip pooling to disentangle blur patterns of different magnitudes and orientations, and cascaded parallel dilated convolution to aggregate multi-scale content features. Extensive experiments on the GoPro, HIDE, and RealBlur benchmarks demonstrate that BANet performs favorably against the state of the art in blurred-image restoration and delivers deblurred results in real time.
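The strip-pooling attention that the abstract describes can be illustrated with a minimal NumPy sketch. This is not the thesis's exact module: it shows only the core idea, following Hou et al.'s strip pooling, that averaging a feature map along long, thin strips (full rows and full columns) captures directional context cheaply, and that the pooled responses can gate the features as an attention map. The function names and the sigmoid gating here are illustrative assumptions.

```python
import numpy as np

def strip_pool(feat, axis):
    """Average the feature map along one spatial axis.

    Keeping dims lets the (H, 1) or (1, W) result broadcast
    back over the full map — each row/column strip shares one value.
    """
    return feat.mean(axis=axis, keepdims=True)

def strip_attention(feat):
    """Toy strip-pooling attention on a single-channel map (H, W)."""
    h = strip_pool(feat, axis=1)            # (H, 1): horizontal strips
    v = strip_pool(feat, axis=0)            # (1, W): vertical strips
    attn = 1.0 / (1.0 + np.exp(-(h + v)))   # sigmoid gate in (0, 1)
    return feat * attn                      # reweight features by strip context

feat = np.arange(12, dtype=float).reshape(3, 4)
out = strip_attention(feat)
```

In BANet the same principle is extended to multiple strip kernel sizes and learned per-channel weights, so that strips of different lengths respond to blur of different magnitudes and orientations.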