
Graduate Student: Yeh, Yu-Hung (葉育宏)
Thesis Title: Deep Learning-Based Real-Time Activity Recognition with Multiple Inertial Sensors
Advisor: Chou, Pai H. (周百祥)
Committee Members: Hon, Wing-Kai (韓永楷); Wang, Chun-Yao (王俊堯)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science
Year of Publication: 2019
Graduation Academic Year: 107 (2018-2019)
Language: Chinese
Number of Pages: 41
Chinese Keywords: Daily Activity Recognition, Inertial Sensor, Multiple Sensors, Deep Learning
English Keywords: Activity Recognition, Inertial Measurement Unit, Multiple Inertial Sensors, Deep Learning
  • This thesis presents a system that performs real-time activity recognition using motion sensors placed on several parts of the body. The system collects acceleration and angular velocity data from motion sensors worn on the user's right wrist, waist, and right ankle, and transmits the data to a computer over Bluetooth. We train a convolutional neural network (CNN) model to recognize thirteen classes of activities: sitting, standing, walking, going upstairs, going downstairs, drinking water, brushing teeth, cleaning, jogging, opening a door, stretching, lying down, and walking while using a mobile phone. The trained model is ported to a Raspberry Pi using TensorFlow Lite so that the user can carry the device around. Recognizing the first activity takes about 2.6 seconds; afterwards, a recognition result is produced every 1.325 seconds. Using activity data from six subjects, the model achieves high accuracy under both leave-one-out validation and 10-fold cross-validation, reaching 98.62% and 99.84%, respectively.
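As a rough illustration of the deployment side described above, the following is a minimal sketch of a sliding-window TensorFlow Lite inference loop such as might run on the Raspberry Pi. The sampling rate, window length, stride, model file name (activity_cnn.tflite), and the read_imu_frame() helper are all illustrative assumptions, not details taken from the thesis.

```python
# Minimal sketch of a sliding-window TFLite inference loop on a Raspberry Pi.
# Window/stride lengths, channel layout, file name, and read_imu_frame() are
# hypothetical placeholders, not the thesis's actual parameters.
import collections
import numpy as np
import tensorflow as tf  # on the Pi, tflite_runtime.interpreter can be used instead

SAMPLE_RATE_HZ = 40            # assumed IMU sampling rate
WINDOW = 2 * SAMPLE_RATE_HZ    # assumed window length in samples
STRIDE = WINDOW // 2           # assumed 50% overlap between consecutive windows
NUM_CHANNELS = 3 * 6           # 3 sensors x (3-axis accelerometer + 3-axis gyroscope)

interpreter = tf.lite.Interpreter(model_path="activity_cnn.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

window = collections.deque(maxlen=WINDOW)   # rolling buffer of IMU frames


def read_imu_frame():
    """Hypothetical helper: return one frame of shape (NUM_CHANNELS,) from the BLE link."""
    raise NotImplementedError


def classify(frames):
    # Batch the window as (1, WINDOW, NUM_CHANNELS) and run one inference.
    x = np.asarray(frames, dtype=np.float32)[np.newaxis, ...]
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    probs = interpreter.get_tensor(out["index"])[0]
    return int(np.argmax(probs))


new_samples = 0
while True:
    window.append(read_imu_frame())
    new_samples += 1
    if len(window) == WINDOW and new_samples >= STRIDE:
        print("predicted activity class:", classify(list(window)))
        new_samples = 0
```

With a half-window stride like this, the first prediction becomes available only after a full window has been filled, while each later prediction needs only one stride's worth of new samples, which is one plausible reading of the 2.6-second versus 1.325-second latencies reported above.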


    This thesis proposes a real-time activity recognition system based on data from multiple wearable inertial sensors. The sensors are worn on the user's right wrist, waist, and right ankle to collect acceleration and angular velocity data, which are transmitted via Bluetooth to a computer. The data are used to train a convolutional neural network (CNN) model to recognize 13 types of activities, including sitting, standing, walking, going upstairs, going downstairs, drinking water, brushing teeth, cleaning, jogging, opening a door, stretching, lying down, and walking while using a mobile phone. The trained model has been ported to TensorFlow Lite running on a Raspberry Pi to enable edge processing. Recognizing the first activity takes 2.6 seconds, and each subsequent activity can be recognized in 1.325 seconds. Experimental results on data from six human subjects show that the model achieves high accuracy of 98.62% and 99.84% for leave-one-out cross-validation and 10-fold cross-validation, respectively.
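For readers who want a concrete picture of what such a pipeline might look like, below is a minimal Keras sketch of a 1D CNN over fixed-length windows of multi-sensor IMU data, followed by conversion to TensorFlow Lite. The window length, channel count, layer sizes, and file name are illustrative assumptions; the thesis's actual architecture and hyperparameters are described in Chapter 4.

```python
# Minimal sketch (not the thesis's actual architecture): a 1D CNN over
# fixed-length windows of multi-sensor IMU data, converted to TensorFlow Lite.
# Window length, channel count, and layer sizes are illustrative assumptions.
import tensorflow as tf

WINDOW = 80          # assumed samples per window
NUM_CHANNELS = 18    # 3 sensors x (3-axis accelerometer + 3-axis gyroscope)
NUM_CLASSES = 13     # sitting, standing, walking, ..., walking while using a phone

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, NUM_CHANNELS)),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Conv1D(128, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train: (num_windows, WINDOW, NUM_CHANNELS), y_train: integer labels 0..12
# model.fit(x_train, y_train, epochs=50, batch_size=64, validation_split=0.1)

# Convert the trained model for deployment on the Raspberry Pi.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("activity_cnn.tflite", "wb") as f:
    f.write(tflite_model)
```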

    Contents i
    Acknowledgments v
    1 Introduction 1
      1.1 Motivation 1
      1.2 Contributions 2
      1.3 Thesis Organization 2
    2 Background Theory 3
      2.1 Recurrent Neural Networks (RNN) 3
      2.2 Long Short-term Memory (LSTM) 5
      2.3 Convolutional Neural Network (CNN) 6
    3 Related Work 8
      3.1 Vision-based Approach 8
      3.2 Sensor-based Approach 9
        3.2.1 Machine-Learning Methods 9
        3.2.2 Deep-Learning Methods 10
    4 Technical Approach 12
      4.1 Activity Description 12
      4.2 Data Preprocessing 12
        4.2.1 Gravity Elimination 12
        4.2.2 Data Alignment 13
        4.2.3 High-Frequency Noises Removal 14
      4.3 Data Segmentation 14
      4.4 Recognition 15
        4.4.1 Normalization 17
        4.4.2 Classification 17
      4.5 Real-time Activity Recognition 18
    5 System Architecture and Implementation 20
      5.1 Node Subsystem 20
      5.2 Host Subsystem 20
    6 Evaluation 24
      6.1 Experimental Setup 24
      6.2 Metrics 26
      6.3 Experimental Results 27
        6.3.1 Performance of Different Models 27
        6.3.2 Sensor Selection 29
        6.3.3 Recognition Accuracy 29
        6.3.4 Time Consumption 34
        6.3.5 Performance Comparison with Related Works 34
    7 Conclusions and Future Work 37
      7.1 Conclusions 37
      7.2 Future Work 38
        7.2.1 More sensors 38
        7.2.2 Virtual Reality (VR) / Augmented Reality (AR) 38
    Bibliography 39


    Full-text release date: 2021/07/30 (campus network)
    Full text not authorized for public release (off-campus network)
    Full text not authorized for public release (National Central Library: Taiwan NDLTD system)