| 研究生 Author | 吳芯瑀 Wu, Hsin-Yu |
|---|---|
| Title | 為仿生視覺應用開發低功耗之光流演算法 (Low-Power Optical Flow Algorithm for Bio-Inspired Vision Applications) |
| Advisor | 羅中泉 Lo, Chung-Chuan |
| Committee | 鄭桂忠 Tang, Kea-Tiong; 謝志成 Hsieh, Chih-Cheng; 施奇廷 Shih, Chi-Tin |
| Degree | Master |
| Department | College of Life Sciences and Medicine - Institute of Systems Neuroscience |
| Year of publication | 2023 |
| Academic year | 111 |
| Language | English |
| Pages | 51 |
| Keywords (Chinese) | 光流, 生物啟發應用, 運動偵測, 自我運動 |
| Keywords (English) | optical flow, bio-inspired applications, motion detection, self-motion |
Optical flow is an important research area in computer vision, with commonly used methods including variational approaches, block matching, and neural networks. Its applications are wide-ranging, particularly in autonomous driving, where flow can be used to estimate depth and detect obstacles. Deploying optical flow on unmanned aerial vehicles (UAVs), however, requires low-power, real-time computation.
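For reference, the classical two-frame, gradient-based formulation (Lucas-Kanade) that such work typically builds on can be sketched in a few lines of NumPy. This is an illustrative single-window solve, not the algorithm developed in this thesis: it assumes brightness constancy and one uniform flow vector over the whole patch.

```python
import numpy as np

def lucas_kanade_flow(I1, I2):
    """Single-window Lucas-Kanade: least-squares fit of one (u, v) vector
    to the brightness-constancy constraint Ix*u + Iy*v + It = 0."""
    I1 = I1.astype(np.float64)
    I2 = I2.astype(np.float64)
    Iy, Ix = np.gradient(I1)   # spatial gradients (central differences)
    It = I2 - I1               # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic check: a horizontal sinusoid shifted right by one pixel.
cols = np.arange(32)
I1 = np.tile(np.sin(cols / 5.0), (32, 1))
I2 = np.tile(np.sin((cols - 1) / 5.0), (32, 1))  # same pattern, moved +1 px in x
u, v = lucas_kanade_flow(I1, I2)                 # u should come out close to 1, v close to 0
```

Even this minimal version needs per-pixel gradients and a least-squares solve per window, which hints at why dense flow on every frame is costly for an embedded platform.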
Traditional optical flow methods compute flow directly between two consecutive frames, which is computationally expensive. Biological systems, in contrast, rely not only on vision but also on other sensory inputs to support spatial perception. Inspired by this, we combine visual information with measurements from inertial measurement units (IMUs) or gyroscopes to reduce the computational load of optical flow estimation. In this thesis, we integrate camera pose and image gradient information, using the pose to estimate the motion angles produced by self-motion and thereby simplify the flow computation.
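One way this gyro-aided idea can be sketched: the rotational component of the optic-flow field is independent of scene depth, so it can be predicted entirely from the gyroscope and removed from the brightness-constancy constraint, leaving only one scalar residual per pixel along the gradient direction. The sketch below is an illustration under a pinhole-camera, small-angle model with one common sign convention; it is not the thesis's exact algorithm, and the function names are mine.

```python
import numpy as np

def rotational_flow(shape, f, cx, cy, omega, dt):
    """Image motion (pixels/frame) predicted from angular velocity alone,
    for a pinhole camera under a small-angle approximation.
    omega = (wx, wy, wz) in rad/s in the camera frame (z forward)."""
    wx, wy, wz = omega
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]].astype(np.float64)
    x = (xs - cx) / f  # normalized image coordinates
    y = (ys - cy) / f
    # Rotational component of the optic-flow field: depth-independent,
    # hence fully determined by the gyroscope reading.
    u = (x * y * wx - (1.0 + x ** 2) * wy + y * wz) * f * dt
    v = ((1.0 + y ** 2) * wx - x * y * wy - x * wz) * f * dt
    return u, v

def residual_normal_flow(I1, I2, u_rot, v_rot, eps=1e-6):
    """With the rotational part known, brightness constancy
    Ix*(u_rot + u_res) + Iy*(v_rot + v_res) + It = 0
    leaves one scalar unknown per pixel along the gradient direction."""
    I1 = I1.astype(np.float64)
    I2 = I2.astype(np.float64)
    Iy, Ix = np.gradient(I1)  # spatial gradients
    It = I2 - I1              # temporal gradient
    grad_mag = np.maximum(np.sqrt(Ix ** 2 + Iy ** 2), eps)
    return -(It + Ix * u_rot + Iy * v_rot) / grad_mag

# Example: predict the flow caused by a pure yaw of 0.01 rad between frames.
u_rot, v_rot = rotational_flow((240, 320), f=100.0, cx=160.0, cy=120.0,
                               omega=(0.0, 0.01, 0.0), dt=1.0)
```

The residual step is one multiply-add and one division per pixel, which is the kind of reduction that makes low-power, real-time operation on an edge device plausible.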
This study focuses on designing an optical flow algorithm and validating it experimentally. Our goal is a simple method that can be deployed on resource-constrained edge devices, enabling low-power, real-time optical flow computation.