| Graduate Student: | Song, Edward (宋易豪) |
| --- | --- |
| Thesis Title: | Improving Inter-Rater Reliability in InfoQ Assessment Using Contextual Questions |
| Advisor: | Shmueli, Galit (徐茉莉) |
| Committee Members: | Ray, Soumya (雷松亞); Hsu, Pei-Fang (許裴舫) |
| Degree: | Master |
| Department: | College of Technology Management – Institute of Service Science |
| Publication Year: | 2019 |
| Graduation Academic Year: | 108 |
| Language: | English |
| Pages: | 85 |
| Chinese Keywords: | contextual questions, information quality, questionnaire, inter-rater reliability |
| Keywords: | InfoQ, Inter-Rater Reliability, contextual questions |
Information Quality (InfoQ) is defined as "the potential of a dataset to achieve a specific (scientific or practical) goal using a given data analysis method" (Kenett & Shmueli, 2014). InfoQ is tightly coupled with the analysis context and comprises four components: data, goal, data analysis method, and utility. To operationalize InfoQ, eight dimensions are defined on top of these four components. The dimensional score matrix rates a study on each of the eight InfoQ dimensions using a five-point scale. Although the dimensional score matrix is an intuitive way to evaluate a study, previous research found that raters' scores on this matrix were inconsistent with one another (low inter-rater reliability), a problem that needs to be properly addressed. The goal of this study is to address the low inter-rater reliability that arises when InfoQ is used to evaluate research studies, by means of a "contextualized questionnaire" of our own design, composed of contextualized questions for measuring each dimension. The contextualized questionnaire designed in this study is tailored to evaluating time series forecasting studies.

We conducted a within-subjects experiment, inviting respondents with time-series-related knowledge to evaluate a time series forecasting study using the questionnaire we provided. The first part of the questionnaire is the dimensional score matrix; the next part consists of the new contextualized questions. In the analysis, to verify whether the contextualized questionnaire effectively resolves the low inter-rater reliability, we performed a variance analysis using bootstrapping. Although the contextualized InfoQ score was not sufficient to improve inter-rater reliability, our analysis of individual dimensions and their contextualized questions revealed important insights. Finally, we discuss our findings and challenges, and suggest ways to improve the contextualized questions for future research.
The concept of Information Quality (InfoQ) is defined as the potential of a dataset to achieve a specific (scientific or practical) goal using a given data analysis method. InfoQ rests on four components, data, goal, analysis method, and utility, and is tightly coupled with the analysis context (Kenett & Shmueli, 2014). On top of the four components of InfoQ, there are eight dimensions that help operationalize the InfoQ concept. The dimensional score matrix is a way to evaluate a study using a 1–5 scale to score each of the eight InfoQ dimensions. Although the dimensional score matrix is a streamlined approach for study evaluation, previous studies have found high variability in respondents' ratings of the InfoQ dimensions (low inter-rater reliability).

In this study, we aim to address the issue of low inter-rater reliability in InfoQ evaluation by developing a contextualized questionnaire, which is a set of contextualized questions for each of the eight InfoQ dimensions. The contextualized questionnaire we designed is for the context of time series forecasting study evaluation. We then conduct a within-subjects experiment, in which we ask respondents with time-series-related knowledge to evaluate a forecasting study using a rating-based questionnaire. The questionnaire first uses the InfoQ dimensional score matrix, followed by the new contextualized questions. We examine whether the contextualized questionnaire improves inter-rater reliability by conducting a variance analysis of the InfoQ scores using bootstrapping. Although the contextualized InfoQ score does not sufficiently improve inter-rater reliability, our analysis of the individual dimensions and contextualized questions reveals important insights. We discuss the findings and challenges, and suggest how to improve the contextualized questionnaire in future research.
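The bootstrap-based variance analysis mentioned above can be illustrated with a minimal sketch. This is not the thesis's actual analysis code; the `ratings` values and the agreement interpretation are hypothetical, and the sketch simply resamples raters with replacement to estimate the variability of their dimension scores:

```python
import random
import statistics

def bootstrap_variance(scores, n_boot=1000, seed=42):
    """Resample rater scores with replacement and return the
    population variance of each bootstrap sample."""
    rng = random.Random(seed)
    n = len(scores)
    variances = []
    for _ in range(n_boot):
        resample = [scores[rng.randrange(n)] for _ in range(n)]
        variances.append(statistics.pvariance(resample))
    return variances

# Hypothetical 1-5 ratings of one InfoQ dimension by eight raters.
ratings = [3, 4, 2, 5, 3, 4, 3, 2]
boot = bootstrap_variance(ratings)
# A lower mean bootstrap variance would indicate stronger rater agreement.
print(round(statistics.mean(boot), 2))
```

Comparing such bootstrap variance distributions for the dimensional score matrix versus the contextualized questionnaire is one way to test whether the latter reduces rater disagreement.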
Charness, G., Gneezy, U., & Kuhn, M. A. (2012). Experimental methods: Between-subject and within-subject design. J. Econ. Behav. Organ., 81(1), 1-8.
Deming, W. E. (1982). Quality, productivity, and competitive position. MIT Center for Advanced Engineering Study, Cambridge, MA.
Fassnacht, M., & Koese, I. (2006). Quality of electronic services: Conceptualizing and testing a hierarchical model. Journal of service research, 9(1), 19-37.
Gerbing, D. W., & Anderson, J. C. (1988). An updated paradigm for scale development incorporating unidimensionality and its assessment. Journal of Marketing Research, 25(2), 186-192.
Hallgren, K. A. (2012). Computing inter-rater reliability for observational data: an overview and tutorial. Tutorials in quantitative methods for psychology, 8(1), 23.
Henseler, J., Hubona, G., & Ray, P. A. (2016). Using PLS path modeling in new technology research: updated guidelines. Industrial management & data systems, 116(1), 2-20.
Kenett, R. S., & Shmueli, G. (2014). On information quality. Journal of the Royal Statistical Society: Series A (Statistics in Society), 177(1), 3-38.
Kenett, R. S., & Shmueli, G. (2016). Information quality: The potential of data and analytics to generate knowledge. John Wiley & Sons.
Kock, N., & Verville, J. (2012). Exploring free questionnaire data with anchor variables: an illustration based on a study of IT in healthcare. International Journal of Healthcare Information Systems and Informatics (IJHISI), 7(1), 46-63.
Lee, Y. W., Strong, D. M., Kahn, B. K., & Wang, R. Y. (2002). AIMQ: a methodology for information quality assessment. Information & Management, 40(2), 133-146.
Manyika, J., et al. (2017). Jobs lost, jobs gained: Workforce transitions in a time of automation. McKinsey Global Institute.
Northcott, D. (2010). Shadowing and Other Techniques for Doing Fieldwork in Modern Societies. European Accounting Review, 19(2), 375-378.
Olson, M. H. (1985). Management information systems: conceptual foundations, structure, and development. New York: McGraw-Hill.
Reeves, C. A., & Bednar, D. A. (1994). Defining quality: Alternatives and implications. Academy of Management Review, 19(3), 419-445.
Santos, J. (2012). E‐service quality: a model of virtual service quality dimensions. Managing Service Quality: An International Journal.
Shmueli, G. (2010). To explain or to predict? Statistical Science, 25(3), 289-310.
Shmueli, G., & Kenett, R. An Information Quality (InfoQ) Framework for Ex-Ante and Ex-Post Evaluation of Empirical Studies.
Shmueli, G., & Koppius, O. R. (2011). Predictive analytics in information systems research. MIS Quarterly, 35(3), 553-572.
Strack, F. (1992). “Order effects” in survey research: Activation and information functions of preceding questions. In Context effects in social and psychological research (pp. 23-34). Springer, New York, NY.
Strong, D. M., Lee, Y. W., & Wang, R. Y. (1997). Data quality in context. Communications of the ACM, 40(5), 103-110.
Wang, R. Y., & Strong, D. M. (1996). Beyond accuracy: What data quality means to data consumers. Journal of Management Information Systems, 12(4), 5-33.