
Author: 吳宜儒 (Wu, Yi-Ju)
Title: 基於深度學習與立體影像之機械手臂智慧夾取
(Intelligent Robotic Grasping Using Deep Learning and Stereo Images)
Advisor: 葉廷仁 (Yeh, Ting-Jen)
Committee members: 顏炳郎 (Yen, Ping-Lang); 劉承賢 (Liu, Cheng-Hsien)
Degree: Master
Department: College of Engineering - Department of Power Mechanical Engineering
Year of publication: 2019
Academic year of graduation: 107 (2018-2019)
Language: Chinese
Pages: 57
Keywords (Chinese): 深度學習、機械手臂、欠致動自適應性夾爪、立體影像
Keywords (English): Deep learning, Robot arm, Under-actuated adaptive gripper, Stereo image
  • This research uses a six-degree-of-freedom robot arm as the main actuator and feeds stereo images into a neural network to grasp a variety of objects in the workspace. Because data collection and labeling are tedious tasks in data science, we propose an automated method that generates data by moving the camera frame to simulate the randomly distributed object positions that would be captured by a fixed camera; the pre-processed images form the training set used to train the neural network. The deep-learning part adopts a Convolutional Neural Network (CNN) architecture: object detection first locates the target, and the centroid and shape are then estimated separately. On the manipulator side, the system combines the six-degree-of-freedom robot arm with a one-degree-of-freedom under-actuated adaptive gripper, which automatically switches between parallel and angular grip configurations to flexibly grasp objects of various shapes.
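The automated labeling idea described above, i.e. computing where the object's centroid lies in the camera frame from the known arm pose and platform angle, can be illustrated with a minimal homogeneous-transform sketch. The frame names, the turntable model, and all function names here are illustrative assumptions, not code from the thesis.

```python
import numpy as np

def pose_matrix(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    """Rotation about the z-axis by theta radians (the turntable axis)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def centroid_label(T_base_cam, turntable_angle, p_obj_turntable, T_base_turntable):
    """Return the object's centroid expressed in the camera frame.

    T_base_cam: camera pose in the robot base frame (from forward kinematics).
    turntable_angle: current platform rotation in radians.
    p_obj_turntable: object centroid in the turntable frame, shape (3,).
    T_base_turntable: fixed pose of the turntable in the base frame.
    """
    # Rotate the object with the platform, then map it into the base frame.
    T_turn = pose_matrix(rot_z(turntable_angle), np.zeros(3))
    p_base = T_base_turntable @ T_turn @ np.append(p_obj_turntable, 1.0)
    # Express the centroid in the camera frame to label the current image.
    p_cam = np.linalg.inv(T_base_cam) @ p_base
    return p_cam[:3]
```

Each captured RGBD image would be paired with the `centroid_label` output for its camera pose, yielding a labeled sample without manual annotation.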


    Abstract
    This thesis studies the intelligent grasping of various objects using a six-degree-of-freedom robot arm. The grasping is based on deep-learning networks that take RGBD stereo images as inputs. To reduce the difficulty and complexity of data collection and labeling, an automatic method for generating the deep-learning dataset is proposed. In this method, a RealSense RGBD camera, attached to the end of the robot arm, scans the object placed on a rotating platform at various distances and view angles. By establishing correspondences between the RGBD stereo images acquired by the camera and the object's centroid locations and postures, computed from the poses of the robot arm and the rotation angles of the platform, a labeled dataset can be generated. The dataset is used to train and validate three deep convolutional neural networks: an object-detection network, a centroid-estimation network, and a posture-estimation network. During the grasping stage, the RGBD camera is mounted on a two-degree-of-freedom mechanism that tracks the object, providing appropriate input images for the trained networks. With the help of the under-actuated adaptive gripper and the neural networks, we experimentally demonstrate successful grasping of various randomly placed objects.
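The grasping-stage flow through the three networks can be sketched as follows. The trained CNNs are stand-ins injected as callables (`detect`, `estimate_centroid`, and `estimate_posture` are hypothetical names), so only the control flow described in the abstract is shown, not the actual models.

```python
import numpy as np

def grasp_pipeline(rgbd, detect, estimate_centroid, estimate_posture):
    """Chain the three networks described in the abstract.

    rgbd: an H x W x 4 RGBD image from the tracking camera.
    The three callables stand in for the trained object-detection,
    centroid-estimation, and posture-estimation networks.
    """
    x, y, w, h = detect(rgbd)              # bounding box around the target
    crop = rgbd[y:y + h, x:x + w, :]       # RGBD patch fed to the two estimators
    centroid = estimate_centroid(crop)     # 3-D centroid in the camera frame
    posture = estimate_posture(crop)       # orientation used to plan the grasp
    return centroid, posture
```

The returned centroid and posture would then be mapped through the arm's inverse kinematics to command the gripper.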

    目錄 (Table of Contents)
    摘要 (Abstract in Chinese)
    Abstract
    誌謝 (Acknowledgements)
    目錄 (Contents)
    圖目錄 (List of Figures)
    表目錄 (List of Tables)
    Chapter 1 Introduction
      1.1 Motivation
      1.2 Literature Review
      1.3 Thesis Overview
    Chapter 2 Hardware Overview
      2.1 Robot Arm
      2.2 Camera Actuation Mechanism
      2.3 Adaptive Gripper
      2.4 RealSense Camera
        2.4.1 Depth Images
        2.4.2 Registration of Color and Depth Images
    Chapter 3 Control of the Arm and Camera Joints
      3.1 The D-H Parameter Method
        3.1.1 Robot Arm Joints
        3.1.2 Camera Joints
      3.2 Forward Kinematics
      3.3 Inverse Kinematics
        3.3.1 The Two Solutions for Joint 1
        3.3.2 The Two Solutions for Joint 3
        3.3.3 The Two Solutions for the Wrist Joint
      3.4 Simulation and Experimental Results
    Chapter 4 Computer Vision
      4.1 Introduction to Deep Learning
      4.2 Experimental Architecture
      4.3 Dataset Construction and Network Training
        4.3.1 Object-Detection Model
        4.3.2 Centroid-Estimation Model
        4.3.3 Posture-Estimation Model
      4.4 Spatial Estimation Results
    Chapter 5 Conclusions and Future Work
      5.1 Conclusions
      5.2 Future Work
    References
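Chapter 3 of the outline applies the Denavit-Hartenberg convention to the arm and camera joints. A minimal sketch of the standard D-H link transform and its chaining into forward kinematics is given below; the parameter values in the usage example are illustrative, not the thesis's actual link parameters.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    """Multiply the per-joint D-H transforms to get the end-effector pose.

    dh_params: iterable of (theta, d, a, alpha) tuples, one per joint.
    """
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:
        T = T @ dh_transform(theta, d, a, alpha)
    return T
```

For example, a single revolute joint with link length 1 rotated by 90 degrees places the end effector at (0, 1, 0) in the base frame.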

    References
    [1] [Online]. Available: https://zenbo.asus.com/tw/ [Accessed 15 July 2019].
    [2] [Online]. Available: https://www.mi.com/tw/mi-robot-vacuum/ [Accessed 15 July 2019].
    [3] Zhao, J., Liu, H., Feng, Y., Yuan, S., & Cai, W. (2015, October). BE-SIFT: a more brief and efficient SIFT image matching algorithm for computer vision. In 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (pp. 568-574). IEEE.
    [4] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 779-788).
    [5] Yu, J., Weng, K., Liang, G., & Xie, G. (2013, December). A vision-based robotic grasping system using deep learning for 3D object recognition and pose estimation. In 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO) (pp. 1175-1180). IEEE.
    [6] Cheng, H., & Meng, M. Q. H. (2018, December). A Grasp Pose Detection Scheme with an End-to-End CNN Regression Approach. In 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO) (pp. 544-549). IEEE.
    [7] Nandi, G. C., Agarwal, P., Gupta, P., & Singh, A. (2018, June). Deep Learning Based Intelligent Robot Grasping Strategy. In 2018 IEEE 14th International Conference on Control and Automation (ICCA) (pp. 1064-1069). IEEE.
    [8] 江修, 黃偉峰. "六軸機械臂之控制理論分析與應用" [Analysis and Applications of Control Theory for Six-Axis Robot Arms]. 工研院機械所 (ITRI), 2006.
    [9] Zhu, T., Yang, H., & Zhang, W. (2016, August). A spherical self-adaptive gripper with shrinking of an elastic membrane. In 2016 International Conference on Advanced Robotics and Mechatronics (ICARM) (pp. 512-517). IEEE.
    [10] Gao, B., Yang, S., Jin, H., Hu, Y., Yang, X., & Zhang, J. (2016, December). Design and analysis of underactuated robotic gripper with adaptive fingers for objects grasping tasks. In 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO) (pp. 987-992). IEEE.
    [11] 蘇立珩 (Su, Li-Heng). "基於機器學習與影像之機械手臂夾取 (Robot Arm Grasping Based on Machine Learning and Images)." Master's thesis, Department of Power Mechanical Engineering, National Tsing Hua University, pp. 1-66, 2018.
    [12] [Online]. Available: http://emanual.robotis.com/docs/en/dxl/mx/mx-106-2/ [Accessed 15 July 2019].
    [13] [Online]. Available: http://www.ni.com/example/12557/en/ [Accessed 15 July 2019].
    [14] [Online]. Available: https://www.intel.com.tw/content/www/tw/zh/architecture-and-technology/realsense-overview.html [Accessed 15 July 2019].
    [15] [Online]. Available: https://dahetalk.com/2018/03/11/%E3%80%90%E5%9C%96%E8%A7%A3%E3%80%913d%E6%84%9F%E6%B8%AC%E6%8A%80%E8%A1%93%E7%99%BC%E5%B1%95%E8%88%87%E6%87%89%E7%94%A8%E8%B6%A8%E5%8B%A2%EF%BD%9C%E5%A4%A7%E5%92%8C%E6%9C%89%E8%A9%B1%E8%AA%AA/ [Accessed 15 July 2019].
    [16] [Online]. Available: https://en.wikipedia.org/wiki/Denavit%E2%80%93Hartenberg_parameters [Accessed 15 July 2019].
    [17] [Online]. Available: http://uc-r.github.io/feedforward_DNN [Accessed 15 July 2019].
    [18] [Online]. Available: https://heartbeat.fritz.ai/gentle-guide-on-how-yolo-object-localization-works-with-keras-part-2-65fe59ac12d [Accessed 15 July 2019].
    [19] [Online]. Available: https://en.wikipedia.org/wiki/Rotational_Symmetry [Accessed 15 July 2019].
