Graduate Student: | 蘇立珩 Su, Li-Heng |
---|---|
Thesis Title: | 基於機器學習與影像之機械手臂抓取 (Robot Arm Grasping Based on Machine Learning and Images) |
Advisor: | 葉廷仁 Yeh, Ting-Jen |
Committee Members: | 顏炳郎 Yen, Ping-Lang; 陳榮順 Chen, Rong-Shun |
Degree: | Master (碩士) |
Department: | College of Engineering - Department of Power Mechanical Engineering |
Publication Year: | 2018 |
Academic Year: | 106 |
Language: | Chinese |
Pages: | 58 |
Keywords (Chinese): | 機器學習, 深度Q網絡, 機械手臂, 欠制動自適應性夾爪 |
Keywords (English): | machine learning, Deep Q Network, robotic arm, underactuated adaptive gripper |
Abstract: This study applies a 7-DOF robotic arm combined with vision and machine learning to grasping. Traditionally, multiple sets of equations are needed to describe the dynamics and model of a grasp and then to find appropriate gripping points; this study instead uses machine learning to find the best gripping points, eliminating the gap between mathematical simulation and real-world practice. First, machine learning identifies objects and locates the target. Second, the system builds a model of the target's position and pose in three-dimensional space. Finally, using the target's three-dimensional information as input, reinforcement learning learns the robot arm's best gripping position and pose. The machine learning in this study is divided into three parts: object recognition, centroid and pose estimation, and a best-grasp-point system. The first two parts use a Convolutional Neural Network (CNN) as the learning architecture, while the best-grasp-point system uses a Deep Q Network (DQN) to determine the grasping strategy. For the hardware, the system uses a 6-DOF robotic arm plus a 1-DOF underactuated adaptive gripper, which automatically switches between parallel and angled jaw configurations to grasp objects of various shapes flexibly.
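To make the grasp-selection stage concrete, the sketch below shows the core DQN mechanics the abstract describes: a small Q-network maps an object's estimated 3-D state to Q-values over a discrete set of candidate grasp actions, an epsilon-greedy rule picks the grasp, and a one-step TD target drives learning. This is a minimal illustration only, not the thesis's implementation; all dimensions, weights, and the example state are made-up assumptions, and the network here is untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the state is the object's estimated 3-D centroid plus
# one pose angle (4 numbers); actions are a discrete set of gripper approach
# poses. All sizes are illustrative, not the thesis's actual values.
STATE_DIM, N_ACTIONS, HIDDEN = 4, 8, 32

# A tiny two-layer Q-network with random (untrained) weights.
W1 = rng.normal(0.0, 0.1, (STATE_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def q_values(state):
    """Forward pass: state -> one Q-value per candidate grasp action."""
    h = np.maximum(0.0, state @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2

def select_grasp(state, epsilon=0.1):
    """Epsilon-greedy grasp selection, as in standard DQN training."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))   # explore a random grasp
    return int(np.argmax(q_values(state)))    # exploit the best-valued grasp

def td_target(reward, next_state, done, gamma=0.99):
    """One-step TD target: r + gamma * max_a' Q(s', a') (r alone if terminal)."""
    if done:
        return reward
    return reward + gamma * float(np.max(q_values(next_state)))

# Example: pick a grasp for a made-up object state (x, y, z, pose angle).
state = np.array([0.30, -0.05, 0.12, 0.7])
action = select_grasp(state, epsilon=0.0)
target = td_target(reward=1.0, next_state=state, done=True)
print(action, target)
```

In the thesis's setting, the reward would come from whether a physical (or simulated) grasp attempt succeeds, and the network weights would be updated by regressing `q_values(state)[action]` toward `td_target(...)`, which is the standard DQN loss.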