
Graduate Student: Hsiao, Chi-Hung (蕭棋鴻)
Thesis Title: Deep Learning Based Course Evaluation System on MOOCs (基於深度學習之磨課師課程評鑑系統)
Advisor: Huang, Nen-Fu (黃能富)
Oral Defense Committee: Sheu, Jang-Ping (許健平); Chen, Jiann-Liang (陳俊良)
Degree: Master
Department: Department of Computer Science, College of Electrical Engineering and Computer Science
Year of Publication: 2020
Graduation Academic Year: 108 (2019-2020)
Language: English
Number of Pages: 74
Chinese Keywords: 磨課師 (MOOCs), 深度學習 (Deep Learning), 課程評鑑 (Course Evaluation), 課程問卷調查 (Course Questionnaire Survey), 學習分析 (Learning Analytics)
English Keywords: MOOCs, Deep Learning, Course Evaluation, Learning Analysis, Questionnaire Survey
Chinese Abstract: In recent years, MOOCs have offered a wide range of excellent courses. As shared learning platforms, they remove the constraints of geography and time, bringing people together to learn, discuss, and work toward common goals. Their convenience gives learners many new options and more efficient ways to study, but because enrollment costs so little, MOOCs have long suffered from very high dropout rates: many people subscribe to a course without ever finishing it. In addition, the course information that current MOOC platforms provide is mostly limited to descriptions of the course content, at best supplemented by overall satisfaction ratings from students who previously took the course; there is no evaluation of MOOC courses along well-defined indicators. Even if one wanted to evaluate a course this way, a course with too few students and too few returned questionnaires cannot yield a sufficiently credible evaluation. We therefore implemented the course evaluation system described in this thesis. We first define the orientations along which a course is evaluated: Workload, Cater to need, Intelligible degree, Style liking, and Course health. We survey students with a questionnaire, analyze the learning behavior of the respondents on the MOOC platform, and build a deep learning model that predicts the evaluation scores a student would give the course on each orientation. In the long run, the goal of this thesis is to establish a sufficiently credible course evaluation system that can be built without collecting a huge number of questionnaires and that applies to any course. Such a system not only overcomes the low questionnaire return rates that otherwise prevent credible course evaluation, but also gives students more information about a course before they enroll, which has a good chance of lowering the MOOC dropout rate. Moreover, the availability of course evaluations should encourage MOOC instructors to put more care into course design and thereby improve the learning environment for students.
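As an illustration of how the questionnaire step could yield the five orientation scores, the Python sketch below averages 5-point Likert answers per orientation and flips reverse-coded items (a technique the thesis discusses in Section 2.3.2). The item-to-orientation grouping and the set of reverse-coded items are hypothetical; the actual questionnaire is described in Chapter 4.

```python
# Hypothetical sketch of per-orientation questionnaire scoring on a 5-point Likert scale.
# The item grouping and reverse-coded items below are illustrative, not the thesis's questionnaire.
from statistics import mean

ORIENTATIONS = {
    "Workload": ["q1", "q2", "q3"],
    "Cater to need": ["q4", "q5"],
    "Intelligible degree": ["q6", "q7"],
    "Style liking": ["q8", "q9"],
    "Course health": ["q10", "q11"],
}
REVERSE_CODED = {"q2", "q7"}  # assumed negatively worded items

def orientation_scores(responses):
    """Average Likert answers (1-5) per orientation, flipping reverse-coded items (1<->5, 2<->4)."""
    def value(item):
        raw = responses[item]
        return 6 - raw if item in REVERSE_CODED else raw
    return {name: mean(value(q) for q in items) for name, items in ORIENTATIONS.items()}

# Example: one student's answers for a single course (all items answered with 4)
print(orientation_scores({f"q{i}": 4 for i in range(1, 12)}))
```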


English Abstract: MOOCs have a very high dropout rate, and the information students get about a course is mostly limited to an introduction of its content; there is no systematic evaluation of MOOC courses. Even when one wants to evaluate a course, too few returned questionnaires make a credible evaluation impossible. We therefore implemented this course evaluation system. First, we define the orientations along which courses are evaluated: 'Workload', 'Cater to need', 'Intelligible degree', 'Style liking', and 'Course health'. After sending a questionnaire survey to students, we analyze the learning behavior of the students who fill in the questionnaire and build a deep learning model that predicts a student's evaluation grades for the course. The goal of this thesis is to establish a sufficiently credible course evaluation system that can be constructed without a large number of questionnaires and is suitable for any course. Such a system not only compensates for the low return rate of questionnaire surveys, but also lets students get more information about a course before subscribing, which should considerably reduce the dropout rate and improve the learning environment.
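As a sketch of the prediction step, the snippet below builds a small feed-forward regression network in a Keras-style API that maps a vector of learning-behavior features to the five orientation scores and is trained directly on mean absolute error, the metric reported in Chapter 5. The feature count, layer sizes, and dummy data are assumptions for illustration, not the architecture the thesis actually uses.

```python
# A minimal sketch (not the thesis's exact architecture) of a DNN regression model that
# maps per-student learning-behavior features to the five evaluation orientation scores.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 10  # e.g. watching time, finished ratio, pause count, ... (assumed count)
n_targets = 5    # Workload, Cater to need, Intelligible degree, Style liking, Course health

model = keras.Sequential([
    keras.Input(shape=(n_features,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(n_targets),          # linear output for regression on the 1-5 score range
])
model.compile(optimizer="adam", loss="mae")  # train on mean absolute error directly

# Dummy data standing in for normalized learning features and questionnaire scores
X = np.random.rand(200, n_features).astype("float32")
y = np.random.uniform(1, 5, size=(200, n_targets)).astype("float32")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:3]))  # predicted scores for three students
```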

Contents

    Abstract
    Chinese Abstract (中文摘要)
    Acknowledgement
    Contents
    List of Figures
    List of Tables
    Chapter 1  Introduction
    Chapter 2  Background and Related Works
        2.1 Deep neural network
            2.1.1 Deep learning model
        2.2 Long Short Term Memory network
            2.2.1 Short-term memory of recurrent neural networks
            2.2.2 Long Short Term Memory
        2.3 Questionnaire survey
            2.3.1 Questionnaire design
            2.3.2 Reverse coding
        2.4 Course evaluation
    Chapter 3  System Architecture
        3.1 MOOCs platform and AI tutor system
            3.1.1 Database
        3.2 Learning log and questionnaire
            3.2.1 Video learning log schema
            3.2.2 Questionnaire
        3.3 Data preprocessing
            3.3.1 Score of questionnaire
            3.3.2 Learning feature analysis
            3.3.3 Time series data analysis
        3.4 Model and prediction result
    Chapter 4  System Implementation
        4.1 Data collection
        4.2 Questionnaire design
            4.2.1 Likert scale
            4.2.2 Questionnaire reliability
        4.3 Data preprocessing
            4.3.1 Questionnaire grade
            4.3.2 Learning feature implementation
                4.3.2.1 Student-related features
                4.3.2.2 Course-related features
            4.3.3 Time series data implementation
            4.3.4 Imbalanced data
            4.3.5 Data normalization
        4.4 Model implementation
            4.4.1 Correlation analysis
            4.4.2 Deep neural network
                4.4.2.1 Classification
                4.4.2.2 Regression
            4.4.3 Long short-term memory
    Chapter 5  Experiment and Result
        5.1 Experiment data information
        5.2 Prediction results of the completed-course training base
            5.2.1 MAE of the deep neural network
            5.2.2 MAE of the LSTM
            5.2.3 The error distribution
            5.2.4 Testing MAE of a non-completed course
        5.3 Prediction results with a training set including completed and non-completed courses
    Chapter 6  Conclusion and Future Work
        6.1 Conclusions
        6.2 Future Work
    Bibliography

List of Figures

    2.1 Operation of the deep neural network
    2.2 The functions of Sigmoid and ReLU
    2.3 The repeating module of LSTM
    2.4 The first step of the LSTM module
    2.5 The second step of the LSTM module
    2.6 The third step of the LSTM module
    2.7 The fourth step of the LSTM module
    3.1 Architecture of the course-evaluation system
    3.2 Example of the questions in an orientation
    4.1 Schematic diagram of getting data
    4.2 Real watching time and finished-ratio growth graph
    4.3 Structure of the DNN model
    4.4 Structure of the LSTM model
    5.1 Error distribution of the 'Workload' result
    5.2 Error distribution of the 'Cater to need' DNN result
    5.3 Error distribution of the 'Intelligible degree' DNN result
    5.4 Error distribution of the 'Style liking' DNN result
    5.5 Error distribution of the 'Course health' DNN result
    5.6 Error distribution of the 'Workload' and 'Cater to need' DNN results
    5.7 Error distribution of the 'Intelligible degree' and 'Style liking' DNN results
    5.8 Error box chart of the 'Course health' DNN result

