| Graduate Student: | 陳瑋凡 Chen, Wei-Fan |
|---|---|
| Thesis Title: | 自適應夾爪設計、控制及整合深度學習之智慧抓取應用 (Design and Control of an Adaptive Gripper with Application to Intelligent Grasping Using Deep Learning) |
| Advisor: | 葉廷仁 Yeh, Ting-Jen |
| Committee Members: | 顏炳郎 Yan, Bing-Lang; 劉承賢 Liu, Cheng-Hsien |
| Degree: | Master |
| Department: | Department of Power Mechanical Engineering, College of Engineering |
| Publication Year: | 2020 |
| Graduation Academic Year: | 108 (2019–2020) |
| Language: | Chinese |
| Pages: | 80 |
| Keywords: | Adaptive gripper, under-actuated, tendon-driven, deep learning, centroid estimation, orientation estimation |
This thesis develops an under-actuated adaptive gripper for intelligent grasping. Depending on the geometry of the object being grasped, the gripper can perform either a parallel or an enveloping grasp. To maximize the gripper's adaptability, kinematic models of grasping are established and optimized to find the best design parameters. Proximity and force sensors are mounted on the finger surfaces, allowing the gripper to grasp delicate objects quickly and safely. The gripper is attached to a six-degree-of-freedom robot arm as an end effector for intelligent grasping tasks. Intelligent grasping is based on three deep learning networks that take RGB-D images as input: one detects the object, one estimates its centroid location, and one estimates its orientation. Experiments verify that the developed gripper and neural networks can successfully grasp various randomly placed objects.
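The grasping cycle summarized above (three RGB-D networks for detection, centroid estimation, and orientation estimation, followed by proximity-triggered, force-limited finger closure) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the function names, thresholds, array shapes, and the toy stand-ins for the trained networks and sensors are hypothetical, not the thesis's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# ---- Vision stage: three networks operating on one RGB-D frame ------------
# The stand-ins below return fixed values; in the thesis each step is a
# trained deep network (these implementations are hypothetical).

def detect_object(rgbd):
    """Network 1 (stand-in): return a bounding box (x, y, w, h) in pixels."""
    return (100, 80, 64, 64)

def estimate_centroid(rgbd, box):
    """Network 2 (stand-in): estimate the object's centroid. Here we simply
    back-project the box center using the depth channel."""
    x, y, w, h = box
    u, v = x + w // 2, y + h // 2
    return np.array([u, v, rgbd[v, u, 3]])

def estimate_orientation(rgbd, box):
    """Network 3 (stand-in): estimate orientation as a unit quaternion."""
    q = np.array([1.0, 0.0, 0.0, 0.0])
    return q / np.linalg.norm(q)

# ---- Grasping stage: proximity-then-force closing logic -------------------
PROXIMITY_TRIGGER = 0.02   # m; assumed range at which to slow the fingers
FORCE_LIMIT = 1.5          # N; assumed safe grip force for fragile objects

def grasp(read_proximity, read_force, close_gripper):
    """Close quickly until the proximity sensor fires, then slowly until the
    measured grip force reaches the safety limit."""
    while read_proximity() > PROXIMITY_TRIGGER:
        close_gripper(step=0.005)          # fast phase, object still far
    while read_force() < FORCE_LIMIT:
        close_gripper(step=0.001)          # gentle phase, near/at contact

# ---- Toy end-to-end run ----------------------------------------------------
rgbd = rng.random((480, 640, 4))           # H x W x (R, G, B, depth)
box = detect_object(rgbd)
centroid = estimate_centroid(rgbd, box)
quat = estimate_orientation(rgbd, box)
print("target centroid:", centroid, "orientation:", quat)

# Simulated sensors: the gap shrinks and force builds as the fingers close.
state = {"gap": 0.05, "force": 0.0}
def read_proximity(): return state["gap"]
def read_force(): return state["force"]
def close_gripper(step):
    state["gap"] = max(0.0, state["gap"] - step)
    if state["gap"] == 0.0:                # contact: force grows with squeeze
        state["force"] += 0.5

grasp(read_proximity, read_force, close_gripper)
print("grasped with force:", read_force(), "N")
```

The two-phase closing loop mirrors the stated goal of grasping fragile objects quickly but safely: the proximity sensor lets the fingers move fast while far from contact, and the force limit caps the squeeze once contact is made.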