Graduate Student: 呂賴臻柔 Lu Lai, Chen-Jou
Thesis Title: Investigating AI Trust Differences among Employees in the Workplace
Advisor: 許裴舫 Hsu, Pei-Fang
Committee Members: 雷松亞 Ray, Soumya; 徐士傑 Hsu, Shih-Chieh
Degree: Master
Department: College of Technology Management, Institute of Service Science
Publication Year: 2021
Graduation Academic Year: 109
Language: English
Pages: 54
Keywords: Artificial Intelligence, AI Trust, Perception Gap
In recent years, AI systems have been widely adopted in enterprises to increase work efficiency and revenue. However, various factors of AI technologies can cause employees to distrust these systems, which in turn affects how much they use them. In addition, employees with different knowledge backgrounds and job roles often perceive the systems differently, so different roles cannot communicate smoothly when using them. This study aims to clarify the perception differences toward AI systems among employees in different roles or using different types of systems, to identify the factors that influence employees' trust in AI systems, and, based on the results, to offer business recommendations that narrow the perception gap between roles and strengthen employees' trust in AI systems. We surveyed employees of a semiconductor company who had developed or used AI systems. Regarding perception differences, we found that developers and users can narrow their gaps through collaboration; users who had not participated in system development differed greatly from developers in how they interpreted system results and processes, so we suggest that developers help such users clarify the system's results and processes to close the knowledge gap. Regarding the factors that influence employees' trust in AI systems, we found that (1) for developers, system quality is the most important factor for trusting a system; data quality in particular may be inconsistent because different users label data differently, so we suggest confirming labeling correctness with different users or establishing a standardized labeling procedure; and (2) for most users, the system's purpose matters most during their initial exposure to AI; only when the system's purpose aligns with employees' work goals are users willing to trust the system.
When Artificial Intelligence (AI) systems are introduced into industry, various issues with AI may influence employees' trust, which in turn affects their intention to use the systems. Moreover, there is a perception gap between developers and users, arising from disparate expert knowledge and differing perceptions of the AI systems, that hinders them from communicating in the same language. This study aims to understand the perceived differences among different roles and different types of AI systems in the workplace, and the factors of AI that relate to human trust when using AI systems. An online survey of AI system developers and users in a semiconductor company was conducted to capture their perceptions of adopting AI systems in their workflows. Based on the results, we first discuss the perceived differences among roles and system types. We found that developers and users could narrow their perceived differences through collaboration: developers (D) could help users who did not participate in AI system development (U) understand how the system results are produced, shortening the knowledge gap. We then discuss the factors related to AI trust. We found that (1) for developers, system quality is the most important factor related to AI trust; in particular, data quality suffers when data must be labeled, so we suggest that developers check the data with different users to build more standardized labeled data; and (2) for most users, the system purpose is important to AI trust when they first use an AI system: when the system purpose matches employees' workflows, they are willing to trust the system. Based on these results, our research builds trust in AI from multiple factors and bridges the perception gap between developers and users to enhance the capability and adoption of AI systems in the future.