Graduate Student (研究生): 陳玠霖 Chen, Chieh-Lin
Thesis Title (論文名稱): 基於大型語言模型的客服服務自動化:雙階段方法的研究 / Customer Service Automation through LLMs: A Two-Phase Approach
Advisor (指導教授): 吳尚鴻 Wu, Shan-Hung
Committee Members (口試委員): 彭文志 Peng, Wen-Chih; 沈之涯 Shen, Chih-Ya; 李哲榮 Lee, Che-Rung
Degree (學位類別): Master (碩士)
Department (系所名稱): College of Electrical Engineering and Computer Science, Institute of Information Systems and Applications (電機資訊學院 - 資訊系統與應用研究所)
Year of Publication (論文出版年): 2024
Graduation Academic Year (畢業學年度): 113
Language (語文別): English
Pages (論文頁數): 40
Chinese Keywords (中文關鍵詞): 大型語言模型 (large language models), 客服系統 (customer service systems)
Foreign Keywords (外文關鍵詞): LLMs, Customer Service
Usage Statistics (相關次數): Views: 80, Downloads: 0
Traditional customer service systems often face challenges when addressing vague or ambiguous user inquiries, typically requiring significant human intervention to ensure response accuracy. Through observing and analyzing real-world customer service datasets, we identified additional complexities in customer interactions that traditional automated systems often overlook, such as handling multi-faceted queries and filtering irrelevant information from user inputs.
To address these issues, this thesis proposes a two-phase approach to customer service automation based on Large Language Models (LLMs). In the first phase, the system uses interactive dialogues to clarify user inquiries, ensuring a thorough understanding of the core issue. The second phase leverages this clarification to provide precise answers or execute predefined actions based on the refined input.
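The two-phase flow described above can be pictured with a minimal Python sketch. This is an illustrative assumption rather than the thesis's actual implementation: it assumes an OpenAI-style chat-completions client, and the names handle_inquiry, kb_lookup, get_user_reply, the model choice, and the "CLEAR:" convention are all hypothetical.

```python
# Minimal two-phase sketch: clarify the inquiry, then resolve it.
# Client, model name, prompts, and helper names are illustrative assumptions.
from openai import OpenAI

client = OpenAI()       # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"   # hypothetical model choice

CLARIFY_PROMPT = (
    "You are a customer-service agent. If the user's inquiry is vague, "
    "multi-faceted, or mixed with irrelevant details, ask one concise "
    "clarifying question. If the core issue is already clear, reply only "
    "with: CLEAR: <one-sentence restatement of the issue>."
)

RESOLVE_PROMPT = (
    "You are a customer-service agent. Given a clarified issue, either "
    "answer it precisely from the provided knowledge-base snippets or name "
    "a predefined action to execute (e.g. refund, reset_password)."
)


def chat(system: str, messages: list[dict]) -> str:
    """Single chat-completion call with a fixed system prompt."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system}, *messages],
    )
    return response.choices[0].message.content


def handle_inquiry(first_message: str, get_user_reply, kb_lookup,
                   max_turns: int = 3) -> str:
    """Phase 1: clarify the inquiry interactively; Phase 2: resolve it."""
    history = [{"role": "user", "content": first_message}]

    # Phase 1: ask clarifying questions until the core issue is clear.
    clarified = first_message  # fall back to the raw inquiry if never clarified
    for _ in range(max_turns):
        reply = chat(CLARIFY_PROMPT, history)
        if reply.startswith("CLEAR:"):
            clarified = reply.removeprefix("CLEAR:").strip()
            break
        history.append({"role": "assistant", "content": reply})
        history.append({"role": "user", "content": get_user_reply(reply)})

    # Phase 2: answer or pick a predefined action from the clarified issue.
    snippets = kb_lookup(clarified)  # hypothetical retrieval over the KB
    prompt = f"Issue: {clarified}\n\nKnowledge base:\n{snippets}"
    return chat(RESOLVE_PROMPT, [{"role": "user", "content": prompt}])
```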
In real-world testing, this system demonstrated improved accuracy and a significant reduction in operational costs compared to existing methods, minimizing the need for human intervention. This work not only advances response efficiency but also highlights the transformative potential of LLMs in real-world customer service automation.