Graduate Student: Wu, Feng-Jin (吳灃晉)
Thesis Title: Meet Your New (GenAI) Assistant: Examining the Personal and Task Characteristics that Influence Users' LLM Adoption, Perceptions, Roles, and Prompt Strategies (迎接您的生成式人工智慧助手:分析個人與任務特徵對大型語言模型採用、角色定位及提示策略的影響)
Advisor: Ray, Soumya (雷松亞)
Committee Members: Chiu, Yi-Te (邱議德); Teng, Ching-I (鄧景宜)
Degree: Master
Department: Institute of Service Science, College of Technology Management
Year of Publication: 2024
Academic Year of Graduation: 112
Language: English
Pages: 136
Keywords: Human-GenAI Collaboration; User Behavior on GenAI-enabled Systems; Task Type; Personal Characteristics; AI Adoption Pattern; AI Role Preference; Prompt Strategies
This thesis explores the influence of personal and task characteristics on users' adoption, role assignment, and prompt strategies when they interact with a generative artificial intelligence (GenAI) assistant. Through an online field experiment, the study examines how different personal characteristics—such as the Big Five personality traits—and task types, categorized as creative or practical, affect user behavior in human-AI collaboration. The empirical study considers three major sets of personal characteristics that might influence human-AI interactions: task motivations, agency characteristics, and personality traits. The study establishes a behavioral framework and clearly defines different task types under the same theme. The results identify several key factors that influence AI adoption patterns, users' implicit preferences for AI roles, and the effectiveness of different prompting strategies. The results also suggest that both personal characteristics and the nature of the task significantly affect users' reliance on AI, shaping their post-adoption perceptions and their satisfaction with the outcomes of human-AI cooperation. These findings contribute to the design of more intuitive, user-centered AI systems by providing insights into the behavioral dynamics between users and AI during task interactions.