
Author: Ali, Imad (艾伊曼)
Thesis Title: Social Network-Based Question Answering Systems (以社群網路為基礎之問答系統)
Advisors: Chang, Ronald Y. (張佑榕); Hsu, Cheng-Hsin (徐正炘)
Committee Members: Chen, Chien (陳健); Lin, Kate Ching-Ju (林靖茹); Kao, Jung-Chun (高榮駿); Shen, Chih-Ya (沈之涯); Hsu, Cheng-Hsin (徐正炘); Chang, Ronald Y. (張佑榕)
Degree: Doctor
Department: College of Electrical Engineering and Computer Science, International Ph.D. Program in Social Networks and Human-Centered Computing
Year of Publication: 2020
Academic Year of Graduation: 108 (ROC calendar)
Language: English
Number of Pages: 120
Keywords (Chinese): 社群網站、問答系統、高級別的專家、路由、通訊協定、非客觀事實問題
Keywords (English): Social Networks, Question Answering Systems, High-quality Answerers, Routing, Protocols, Non-factual Questions


    Web search engines such as Google, Bing, and Yahoo! retrieve relevant Web pages for users' factual questions using modern technologies such as natural language processing and information retrieval; however, they are less suitable for answering non-factual questions (e.g., opinions, recommendations, and suggestions), which are better answered by humans via online question answering systems. In the last decade, social networks have been vastly adopted for online communication. Besides building interest-based relationships, people also use these networks to share and exchange useful information with each other; hence, social networks could be leveraged for answering non-factual questions. However, social network users have different interests and diverse expertise levels; therefore, identifying answerers of particular expertise levels and routing questions to them via social referral chains in dynamic social networks, where users have widely varying availability times and are only sporadically connected to each other, is not easy.

    Moreover, users may not be willing to route questions on sensitive topics, such as politics, religion, sectarian issues, or medical conditions, over the Internet; at the same time, such questions require answers from answerers of the highest expertise levels. Since users in social networks have information about their 1-hop friends only, while answerers of the highest expertise levels may exist among their k-hop friends, distributively identifying the answerers of the highest expertise levels in dynamic social networks is a challenging problem. Further, in a mobile environment, a light-weight scheme for finding answerers is crucial because mobile users have limited resources such as bandwidth, memory, processing power, and energy; designing such a light-weight scheme is, however, challenging. Additionally, since askers do not communicate with k-hop answerers directly in social networks, they do not know the k-hop answerers' credibility levels, which makes it difficult for askers to assess the correctness of the provided answers. Thus, facilitating users in assessing the correctness of the provided answers is crucial. Also, a significant fraction of social network users do not declare their locations due to privacy concerns; thus, finding relevant answerers for local intent questions in social networks is not easy. To address these challenges, this dissertation proposes a social network-based question answering framework.

    First, this dissertation addresses the problem of finding answerers of particular expertise levels and routing questions to them via social referral chains in a dynamic social network so as to minimize the response time of each question. Social network users have different interests, expertise levels, and online activity times; thus, identifying answerers of particular expertise levels and routing questions to them in a dynamic social network, so that every question is answered in a short time, is a challenging problem. To address it, this dissertation proposes an optimal question answering system that identifies answerers with the required expertise levels and routes each question with the minimum possible response time in the dynamic social network. A multicast tree is employed to (i) avoid bottleneck users who happen to be offline for considerable time durations, and (ii) increase the quality of answers by reaching multiple answerers. The proposed system uses a hybrid model for estimating each user's expertise levels in order to identify suitable answerers. The proposed method demonstrates improved response time compared to state-of-the-art systems. In particular, the evaluation results reveal that the proposed system achieves: (i) a higher average response rate by up to 27%, (ii) a lower average maximal response time by up to 60%, and (iii) consistently better performance when the number of answerers, the arrival rate of questions, the level of expertise, and the predictability are varied.
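
    The bottleneck-avoidance idea can be illustrated as a widest-path computation: among all referral paths to an answerer, prefer the one whose least-available intermediate user is as available as possible. Below is a minimal Python sketch (not the dissertation's actual algorithm) using a modified Dijkstra search over a graph whose edges carry hypothetical online-availability probabilities:

```python
import heapq

def widest_path(graph, src, dst):
    """Find the path from src to dst that maximizes the minimum
    edge availability (the path 'bottleneck'), via modified Dijkstra."""
    best = {src: 1.0}       # best[v] = largest bottleneck achievable to v
    parent = {src: None}
    heap = [(-1.0, src)]    # max-heap on bottleneck value (negated)
    while heap:
        neg_b, u = heapq.heappop(heap)
        b = -neg_b
        if u == dst:
            break
        if b < best.get(u, 0.0):
            continue        # stale heap entry
        for v, avail in graph.get(u, []):
            nb = min(b, avail)          # bottleneck of path through u
            if nb > best.get(v, 0.0):
                best[v] = nb
                parent[v] = u
                heapq.heappush(heap, (-nb, v))
    if dst not in parent:
        return None, 0.0
    path, node = [], dst                # reconstruct the referral chain
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1], best[dst]
```

    For instance, with graph = {'asker': [('A', 0.9), ('B', 0.4)], 'A': [('expert', 0.7)], 'B': [('expert', 0.95)]}, the chain via A wins with bottleneck availability 0.7, even though B's link to the expert is stronger, because B itself is rarely online.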

    Next, this dissertation addresses the problem of distributively finding answerers of the highest expertise levels in a dynamic social network when users are unwilling to route sensitive questions (e.g., on politics, religion, sectarian issues, or medical conditions) over the Internet yet still require answers from the most expert answerers. In social networks, askers search for answerers among their 1-hop friends; however, answerers of the highest expertise levels may exist among their k-hop friends, who are not directly known to the askers. To address this problem, this dissertation proposes a distributed social network-based question answering scheme that finds answerers of the highest expertise levels for each asker's question with a higher response rate and lower response time. The scheme searches the k-hop dynamic social network for the answerers of the highest expertise levels and selects optimal relays at each hop to forward the question via social referral chains. In particular, profile information is exchanged among the k-hop friends and leveraged to find both the highest-expertise answerers and the optimal relays at each hop. The simulation results show that, compared to state-of-the-art schemes, the proposed scheme achieves: (i) higher average expertise levels by more than 42%, (ii) a higher average response rate by more than 26%, and (iii) a lower response time with up to 27% reduction. Furthermore, under various system parameters, such as question arrival rate, keywords per question, answerers per question, number of hops, and predictability, the proposed scheme consistently outperforms the state-of-the-art schemes.
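
    The k-hop search can be approximated centrally as a bounded breadth-first search that tracks, for every reachable friend-of-friend, the referral chain used to reach them; in the actual scheme this information would be assembled distributively from the exchanged profiles. A minimal sketch with hypothetical data structures:

```python
from collections import deque

def find_expert(users, asker, keyword, k):
    """Search the asker's k-hop neighborhood for the answerer with the
    highest expertise level on `keyword`; return that answerer and the
    referral chain reaching them. `users` maps each user to a dict with
    'friends' (list) and 'expertise' (keyword -> level)."""
    best, best_chain = None, []
    visited = {asker}
    queue = deque([(asker, [asker], 0)])   # (user, chain so far, hops)
    while queue:
        user, chain, hops = queue.popleft()
        level = users[user]["expertise"].get(keyword, 0.0)
        if user != asker and (
            best is None or level > users[best]["expertise"].get(keyword, 0.0)
        ):
            best, best_chain = user, chain
        if hops < k:                       # expand only within k hops
            for f in users[user]["friends"]:
                if f not in visited:
                    visited.add(f)
                    queue.append((f, chain + [f], hops + 1))
    return best, best_chain
```

    Because the traversal is breadth-first, the returned chain is also a shortest referral chain to the chosen answerer; with k = 1 the search degenerates to asking only direct friends.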

    Next, this dissertation addresses the problem of improving the performance of a mobile social network-based question answering system with a light-weight scheme. Answers to a particular question may be found when the question is forwarded via suitable friends (or helpers) in a multi-hop manner. However, with a limited amount of available information, distributively identifying the best helpers is a challenging problem. To this end, this dissertation proposes a distributed, light-weight helper selection scheme incorporated into a social network-based question answering system. All users share their information with their 1-hop friends, and this information is recorded in each user's information register. The proposed helper selection scheme utilizes these records to select capable and cooperative helpers among the friends and forward questions to them, thereby improving the performance of the question answering system. The trace-driven simulations reveal that, on average, the proposed helper selection scheme achieves a higher response rate, a higher best-answer rate, and a lower response time by more than 14%, 13%, and 14%, respectively, in comparison to state-of-the-art helper selection schemes. Further, the proposed scheme performs consistently better than the state-of-the-art systems under diverse system parameter settings.
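
    A helper score of the kind described above might combine keyword match (capability), past response behavior (cooperativeness), and availability, all read from the 1-hop information register. The field names and the multiplicative weighting below are purely illustrative, not the dissertation's formula:

```python
def select_helpers(register, keywords, top_n=2):
    """Rank the 1-hop friends recorded in the asker's information
    register and return the names of the top candidates to forward
    the question to. Each register entry is assumed to hold the
    friend's interests, answered/received counts, and online probability."""
    def score(entry):
        # capability: fraction of question keywords matching interests
        matched = len(set(keywords) & set(entry["interests"]))
        capability = matched / len(keywords)
        # cooperativeness: questions answered over questions received
        coop = entry["answered"] / max(entry["received"], 1)
        return capability * coop * entry["online_prob"]
    ranked = sorted(register, key=score, reverse=True)
    return [e["name"] for e in ranked[:top_n]]
```

    Forwarding to the top few scorers rather than the single best hedges against any one helper being unexpectedly offline, at the cost of a little extra traffic.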

    Next, this dissertation addresses the problem of assessing the correctness of answers from k-hop answerers. In a distributed social network, an asker does not know a k-hop answerer's credibility, which makes it difficult to assess answer correctness. Therefore, a credibility-enabled distributed social network-based question answering system is crucial for determining the correctness of answers. To this end, this dissertation proposes a scheme that facilitates each user in assessing the correctness of received answers. The proposed scheme utilizes subjective logic to build interest-wise friend-to-friend credibility opinions under uncertainty. These opinions are then accumulated to obtain each user's aggregated credibility opinion, which reflects the user's real credibility. The proposed scheme forwards a question to the users with the highest credibility beliefs in the question's interest category. Our evaluation results show that, on average, the proposed scheme accomplishes a higher success ratio, higher answer correctness, and lower answer uncertainty by 12.1%, 16.4%, and 22.2%, respectively, compared to the best-performing baseline systems.
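
    The friend-to-friend opinions can be sketched with standard binomial opinions from subjective logic, where r positive and s negative past interactions yield belief, disbelief, and uncertainty masses, and cumulative fusion of evidence-based opinions reduces to pooling the evidence. The dissertation's exact opinion formation may differ; this is the textbook construction:

```python
W = 2.0  # non-informative prior weight in subjective logic

def opinion(r, s, a=0.5):
    """Binomial opinion (belief, disbelief, uncertainty) formed from
    r positive and s negative interactions, with base rate a."""
    total = r + s + W
    return {"b": r / total, "d": s / total, "u": W / total, "a": a}

def fuse(evidence_list, a=0.5):
    """Cumulative fusion of independent friend-to-friend opinions:
    for evidence-based opinions this amounts to pooling the evidence."""
    r = sum(r_i for r_i, s_i in evidence_list)
    s = sum(s_i for r_i, s_i in evidence_list)
    return opinion(r, s, a)

def expected_credibility(op):
    """Projected probability E = b + a*u, usable for ranking users."""
    return op["b"] + op["a"] * op["u"]
```

    For example, a friend with 8 positive and 0 negative interactions gets belief 0.8 and uncertainty 0.2; fusing that with a second friend's (2, 2) evidence pools to (10, 2), lowering the uncertainty while tempering the belief.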

    Lastly, this dissertation addresses the problem of local intent questions, where users are interested in finding specific information about various items at particular locations. It is found that 51% of users seek information about their localities, yet existing systems fail to answer this class of questions because they were not designed to handle it. To address this problem, this dissertation proposes a scheme through which users provide timely and accurate answers in a dynamic social network. The scheme mines users' shared and contextual information to identify the most relevant users for each question. We propose a multimodal motivation scheme that exploits users' social ties and monetary rewards to raise their basic motivation levels. The proposed scheme uses the multimodal motivation scheme to determine relevant users' motivation levels and, via our three proposed algorithms, actively assigns questions to the most relevant users with the highest reputations and motivation levels in the dynamic social network. We conduct: (i) a survey to validate our assumptions regarding the proposed multimodal motivation scheme, and (ii) trace-driven experiments to evaluate the performance of the three proposed algorithms.
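
    A greedy real-time allocator in the spirit of the first of the three algorithms might score each currently online candidate by relevance, reputation, and motivation and assign the question to the maximizer. The multiplicative scoring and the field names below are a hypothetical illustration, not the dissertation's allocator:

```python
def assign_question(question, candidates):
    """Greedily assign a local intent question to the online candidate
    maximizing relevance x reputation x motivation; return the chosen
    candidate's name, or None when nobody relevant is online."""
    def score(c):
        # relevance: fraction of question keywords in the candidate's interests
        matched = len(set(question["keywords"]) & set(c["interests"]))
        relevance = matched / len(question["keywords"])
        return relevance * c["reputation"] * c["motivation"]
    online = [c for c in candidates if c["online"]]
    if not online:
        return None
    return max(online, key=score)["name"]
```

    Note that an offline candidate is skipped even if it would score highest, which is the point of assigning in real time over a dynamic social network.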

    Table of Contents

    1 Introduction
      1.1 Background
      1.2 Motivation
      1.3 Problem Statement
      1.4 Contributions
      1.5 Dissertation Organization
    2 Literature Review
      2.1 Introduction
      2.2 Centralized Question Answering Systems
        2.2.1 Community-Based Question Answering Systems
        2.2.2 Social-Based Question Answering Systems
      2.3 Distributed Question Answering Systems
        2.3.1 Content Search
        2.3.2 Answerer Search
      2.4 Motivation Schemes
        2.4.1 Voluntarily
        2.4.2 Reward-based
    3 Optimal Question Answering Routing in Dynamic Online Social Networks
      3.1 Overview
      3.2 Introduction
      3.3 System Model
        3.3.1 Time-Dependent Social Network Model
        3.3.2 User Expertise Model
        3.3.3 Motivating Example
        3.3.4 Problem Formulation
      3.4 Proposed Solution
        3.4.1 Algorithm 1: Bottleneck Path Pruner
        3.4.2 Algorithm 2: Hybrid Expertise Matcher
        3.4.3 Algorithm 3: Bottleneck Tree Generator
      3.5 Evaluations
        3.5.1 Dataset
        3.5.2 Setup
        3.5.3 Results
      3.6 Summary
    4 Finding High-Quality Answerers in Dynamic Social Networks
      4.1 Overview
      4.2 Introduction
      4.3 System Model
        4.3.1 Distributed Dynamic Social Network Model
        4.3.2 Problem Statement
      4.4 Proposed Solution
        4.4.1 BuildNIT
        4.4.2 SearchNIT
      4.5 Evaluations
        4.5.1 Dataset Collection
        4.5.2 Setup
        4.5.3 Results
      4.6 Summary
    5 Social Helper Selection Scheme for Mobile Question Answering Systems
      5.1 Overview
      5.2 Introduction
      5.3 System Design
        5.3.1 Overview
        5.3.2 Social Helper Selection
      5.4 Performance Evaluations
        5.4.1 Datasets Collection
        5.4.2 Simulator Setup
        5.4.3 Results
      5.5 Summary
    6 Credibility-Enabled Social Network-Based Question Answering System for Assessing Answers Correctness
      6.1 Overview
      6.2 Introduction
      6.3 System Design
        6.3.1 CRED Overview
        6.3.2 CRED Credibility Mechanism
      6.4 Performance Evaluations
        6.4.1 Dataset Collection
        6.4.2 Simulator Setup
        6.4.3 Results
      6.5 Summary
    7 A Mobile Question Answering System with Multimodal Motivation Scheme for Local Intent Questions in Dynamic Social Networks
      7.1 Overview
      7.2 Introduction
      7.3 System Model and Problem Formulation
        7.3.1 Overview
        7.3.2 Questions and Users Models
        7.3.3 Multimodal Motivation Scheme
        7.3.4 Question-Answerer Allocation Problem Formulation
      7.4 The Proposed Solution
        7.4.1 LOCI Real-Time Allocator
        7.4.2 LOCI Locally Optimal Batcher
        7.4.3 LOCI Adaptive Assigner
      7.5 Questionnaire-Based User Study
      7.6 Performance Evaluations
        7.6.1 Dataset Collection
        7.6.2 Simulator Setup
        7.6.3 Results
      7.7 Summary
    8 Conclusions and Future Work
      8.1 Conclusions
      8.2 Future Work
    Bibliography
    Publications

    [1] R. Morris, J. Teevan, and K. Panovich, “A comparison of information seeking using search engines and social networks,” Proc. of ICWSM, vol. 10, pp. 23–26, May 2010.
    [2] B. Li and I. King, “Routing questions to appropriate answerers in community question answering services,” in Proc. of ACM CIKM, Oct. 2010, pp. 1585–1588.
    [3] F. M. Harper, D. Raban, S. Rafaeli, and J. A. Konstan, “Predictors of answer quality in online Q&A sites,” in Proc. of ACM SIGCHI, Apr. 2008, pp. 865–874.
    [4] S. A. Paul, L. Hong, and E. H. Chi, “Is twitter a good place for asking questions,” in Proc. of ICWSM, Jun. 2011.
    [5] Z. Liu and B. J. Jansen, “Predicting potential responders in social Q&A based onnon-QA features,” in Proc. of ACM CHI, Apr. 2014, pp. 2131–2136.
    [6] G. Kukla, P. Kazienko, P. Bródka, and T. Filipowski, “SocLaKE: Social latent knowledge explorator,” Comput. J., vol. 55, no. 3, pp. 258–276, Sep. 2011.
    [7] D. Horowitz and S. Kamvar, “The anatomy of a large-scale social search engine,” in Proc. of WWW, Apr. 2010, pp. 431–440.
    [8] R. W. White, M. Richardson, and Y. Liu, “Effects of community size and contact rate in synchronous social Q&A,” in Proc. of ACM SIGCHI, May 2011, pp. 2837–2846.
    [9] J. Teevan, M. R. Morris, and K. Panovich, “Factors affecting response quantity, quality, and speed for questions asked via social network status messages,” in Proc. of ICWSM, Jul. 2011, pp. 630–633.
    [10] B. J. Hecht, J. Teevan, M. R. Morris, and D. J. Liebling, “SearchBuddies: Bringing search engines into the conversation,” in Proc. of ICWSM, Jun. 2012, pp. 138–145.
    [11] A. Oeldorf-Hirsch, B. Hecht, M. R. Morris, J. Teevan, and D. Gergle, “To search or to ask: The routing of information needs between traditional search engines and social networks,” in Proc. of ACM CSCW, Feb. 2014, pp. 16–27.
    [12] Z. Li, H. Shen, G. Liu, and J. Li, “SOS: A distributed mobile Q&A system based on social networks,” IEEE Trans. Parallel Distrib. Syst., vol. 25, no. 4, pp. 1066–1077, Apr. 2014.
    [13] L. Zhang, X.-Y. Li, J. Lei, J. Sun, and Y. Liu, “Mechanism design for finding experts using locally constructed social referral web,” IEEE Trans. Parallel Distrib. Syst., vol. 26, no. 8, pp. 2316–2326, Aug. 2015.
    [14] G. Liu and H. Shen, “iASK: A distributed Q&A system incorporating social community and global collective intelligence,” IEEE Trans. Parallel Distrib. Syst., vol. 28, no. 5, pp. 1–14, May 2016.
    [15] H. Shen, G. Liu, H. Wang, and N. Vithlani, “SocialQ&A: An online social network based question and answer system,” IEEE Trans. Big Data, vol. 3, no. 1, pp. 91–106, Mar. 2017.
    [16] Y. Answers, https://answers.yahoo.com/, [Accessed in Feb. 2017].
    [17] Qoura, https://www.quora.com/, [Accessed in Feb. 2017].
    [18] Answers.com, http://www.answers.com/, [Accessed in Feb. 2017].
    [19] Stack Overflow, https://stackoverflow.com/, [Accessed in Feb. 2017].
    [20] Z. Zhao, L. Zhang, X. He, and W. Ng, “Expert finding for question answering
    via graph regularized matrix completion,” IEEE Trans. Knowl. Data Eng., vol. 27, no. 4, pp. 993–1004, Apr. 2015.
    [21] L. Nie, X. Wei, D. Zhang, X. Wang, Z. Gao, and Y. Yang, “Data-driven answer selection in community QA systems,” IEEE Trans. Knowl. Data Eng., vol. 29, no. 6, pp. 1186–1198, Jun. 2017.
    [22] Z. Zhao, H. Lu, V. W. Zheng, D. Cai, X. He, and Y. Zhuang, “Community-based question answering via asymmetric multi-faceted ranking network learning,” in Proc. of AAAI, Feb. 2017, pp. 3532–3539.
    [23] B. Li, I. King, and M. R. Lyu, “Question routing in community question answering: Putting category in its place,” in Proc. of CIKM, Oct. 2011, pp. 2041–2044.
    [24] S. Chang and A. Pal, “Routing questions for collaborative answering in community question answering,” in Proc. of IEEE/ACM ASONAM, Aug. 2013, pp. 494–501.
    [25] W.-C. Kao, D.-R. Liu, and S.-W. Wang, “Expert finding in question-answering websites: A novel hybrid approach,” in Proc. of ACM SAC, Mar. 2010, pp. 867–871.
    [26] H. Toba, Z.-Y. Ming, M. Adriani, and T.-S. Chua, “Discovering high quality answers in community question answering archives using a hierarchy of classifiers,” Inf. Sci., vol. 261, pp. 101–115, Mar. 2014.
    [27] J. Sun, A. Vishnu, A. Chakrabarti, C. Siegel, and S. Parthasarathy, “Coldroute: Effective routing of cold questions in stack exchange sites,” Data Min. Knowl. Discov., vol. 32, no. 5, pp. 1339–1367, Sep. 2018.
    [28] I. Srba and M. Bielikova, “A comprehensive survey and classification of approaches for community question answering,” ACM Trans. Web, vol. 10, no. 3, pp. 1–63, Aug. 2016.
    [29] B. M. Evans, S. Kairam, and P. Pirolli, “Do your friends make you smarter?: An analysis of social strategies in online information seeking,” Information Processing & Management, vol. 46, no. 6, pp. 679–692, 2010.
    [30] M. R. Morris, J. Teevan, and K. Panovich, “A comparison of information seeking using search engines and social networks.” ICWSM, vol. 10, pp. 23–26, May 2010.
    [31] J. Thom, S. Y. Helsley, T. L. Matthews, E. M. Daly, and D. R. Millen, “What are you working on? Status message Q&A in an enterprise SNS,” in Proc. of ECSCW 2011, Sep. 2011, pp. 313–332.
    [32] C. Souza, J. Magalhaes, E. Costa, J. Fechine, and R. Reis, “Enhancing the status message question asking process on facebook,” in Proc. of ICCSA, Jun. 2014, pp. 682–695.
    [33] S. A. Paul, L. Hong, and E. H. Chi, “Is Twitter a good place for asking questions? A characterization study,” in Proc. ICWSM, Jul. 2011, pp. 578–581.
    [34] L. Soulier, L. Tamine, and G.-H. Nguyen, “Answering Twitter questions: A model for recommending answerers through social collaboration,” in Proc. of ACM CIKM, Oct. 2016, pp. 267–276.
    [35] T. Lappas, K. Liu, and E. Terzi, “Finding a team of experts in social networks,” in Proc. of ACM SIGKDD, Jun. 2009, pp. 467–476.
    [36] M. R. Bouadjenek, H. Hacid, and M. Bouzeghoub, “Social networks and information retrieval, how are they converging? A survey, a taxonomy and an analysis of social information retrieval approaches and platforms,” Inf. Syst., vol. 56, pp. 1–18, Aug. 2016.
    [37] I. Ali, R. Y. Chang, J.-C. Chuang, C.-H. Hsu, and C. M. Yetis, “Optimal question answering routing in dynamic online social networks,” in Proc. of IEEE VTC, Sep. 2017, pp. 1–7.
    [38] I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and H. Balakrishnan, “Chord: A scalable peer-to-peer lookup service for internet applications,” ACM SIGCOMM Comput. Commun. Rev., vol. 31, no. 4, pp. 149–160, Oct. 2001.
    [39] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker, A scalable content addressable network, Aug. 2001, vol. 31, no. 4.
    [40] A. Rowstron and P. Druschel, “Pastry: Scalable, decentralized object location, and routing for large-scale peer-to-peer systems,” in Proc. of IFIP/ACM ICDS, Nov. 2001, pp. 329–350.
    [41] B. Y. Zhao, J. Kubiatowicz, A. D. Joseph, and others, “Tapestry: An infrastructure for fault-tolerant wide-area location and routing,” Computer Science Division, University of California Berkeley, University of California Berkeley, Tech. Rep., Apr. 2001.
    [42] L. Xiao, Y. Liu, and L. M. Ni, “Improving unstructured peer-to-peer systems by adaptive connection establishment,” IEEE Trans. Comput., vol. 54, no. 9, pp. 1091–1103, Jun. 2005.
    [43] A. Kumar, J. Xu, and E. W. Zegura, “Efficient and scalable query routing for unstructured peer-to-peer networks,” in Proc. of IEEE Infocom, vol. 2, Mar. 2005, pp. 1162–1173.
    [44] S. Jiang and X. Zhang, “FloodTrail: An efficient file search technique in unstructured peer-to-peer systems,” in Proc. of IEEE Globcom, vol. 5, Dec. 2003, pp. 2891–2895.
    [45] KaZaA file sharing network, http://kazaa.descargar.es/en/, [Accessed in Mar. 2018].
    [46] Napster Homepage, http://us.napster.com/, [Accessed in Mar. 2018].
    [47] Freenet, http://freenetproject.org/, [Accessed in Mar. 2018].
    [48] M. Ripeanu, “Peer-to-peer architecture case study: Gnutella network,” in Proc. of IEEE P2P Computing, Aug. 2001, pp. 99–100.
    [49] K. Aberer, P. Cudré-Mauroux, A. Datta, Z. Despotovic, M. Hauswirth, M. Punceva, and R. Schmidt, “P-Grid: A self-organizing structured p2p system,” Proc. of ACM SIGMOD, vol. 32, no. 3, pp. 29–33, Sep. 2003.
    [50] R. A. Ferreira, M. K. Ramanathan, A. Awan, A. Grama, and S. Jagannathan,“Search with probabilistic guarantees in unstructured peer-to-peer networks,” in Proc. of P2P Computing, Aug. 2005, pp. 165–172.
    [51] R. Gaeta and M. Sereno, “Generalized probabilistic flooding in unstructured peerto-peer networks,” IEEE Trans. Parallel Distrib. Syst., vol. 22, no. 12, pp. 2055–2062, 2011.
    [52] I. Jawhar and J. Wu, “A two-level random walk search protocol for peer-to-peer networks,” in Proc. of the WMSCI, May 2004, pp. 1–5.
    [53] Q. Lv, P. Cao, E. Cohen, K. Li, and S. Shenker, “Search and replication in unstructured peer-to-peer networks,” in Proc. of ACM ICS, Apr. 2002, pp. 84–95.
    [54] V. Kalogeraki, D. Gunopulos, and D. Zeinalipour-Yazti, “A local search mechanism for peer-to-peer networks,” in Proc. of CIKM, Nov. 2002, pp. 300–307.
    [55] N. Sarshar, P. O. Boykin, and V. P. Roychowdhury, “Percolation search in power law networks: Making unstructured peer-to-peer networks scalable,” in Proc. of IEEE P2P Computing, Aug. 2004, pp. 2–9.
    [56] M. Shojafar, J. H. Abawajy, Z. Delkhah, A. Ahmadi, Z. Pooranian, and A. Abraham, “An efficient and distributed file search in unstructured peer-to-peer networks,” Peer Peer Netw. Appl., vol. 8, no. 1, pp. 120–136, Jan. 2015.
    [57] C. Gkantsidis, M. Mihail, and A. Saberi, “Hybrid search schemes for unstructured peer-to-peer networks,” in Proc. of IEEE Infocom, vol. 3, Mar. 2005, p. 1526.
    [58] G. Chen, C. P. Low, and Z. Yang, “Enhancing search performance in unstructured P2P networks based on users’ common interest,” IEEE Trans. Parallel Distrib. Syst., vol. 19, no. 6, pp. 821–836, Jun. 2008.
    [59] K. C.-J. Lin, C.-P. Wang, C.-F. Chou, and L. Golubchik, “SocioNet: A social-based multimedia access system for unstructured P2P networks,” IEEE Trans. Parallel Distrib. Syst., vol. 21, no. 7, pp. 1027–1041, Jul. 2010.
    [60] H. Shen, Z. Li, and K. Chen, “Social-P2P: An online social network based P2P file sharing system,” IEEE Trans. Parallel Distrib. Syst., vol. 26, no. 10, pp. 2874– 2889, Oct. 2015.
    [61] G. Liu, H. Shen, and L. Ward, “An efficient and trustworthy P2P and social network integrated file sharing system,” IEEE Trans. Comput., vol. 64, no. 1, pp. 54–70, Jan. 2015.
    [62] S. Jiang, L. Guo, X. Zhang, and H. Wang, “Lightflood: Minimizing redundant messages and maximizing scope of peer-to-peer search,” IEEE Trans. Parallel Distrib. Syst., vol. 19, no. 5, pp. 601–614, May 2008.
    [63] Y. Lin and H. Shen, “SmartQ: A question and answer system for supplying highquality and trustworthy answers,” IEEE Trans. Big Data, Aug. 2017, Early Access.
    [64] H. Amintoosi and S. S. Kanhere, “A trust-based recruitment framework for multihop social participatory sensing,” in Proc. of IEEE DCOSS, May 2013, pp. 266–273.
    [65] L. Guo, C. Zhang, and Y. Fang, “A trust-based privacy-preserving friend recommendation scheme for online social networks,” IEEE Trans. Dependable Secure Comput., vol. 12, no. 4, pp. 413–427, Jul. 2015.
    [66] C.-Y. Lin, K. Ehrlich, V. Griffiths-Fisher, and C. Desforges, “SmallBlue: People mining for expertise search,” IEEE MultiMedia, vol. 15, no. 1, pp. 78–84, Jan.2008.
    [67] M. Hossain, “Crowdsourcing: Activities, incentives and users’ motivations to participate,” in Proc. of IEEE ICIMTR, May 2012, pp. 501–506.
    [68] “Traffic and Navigation App.” https://www.waze.com/, [Accessed in Apr. 2019].
    [69] A. Artikis, M. Weidlich, F. Schnitzler, I. Boutsis, T. Liebig, N. Piatkowski, C. Bockermann, K. Morik, V. Kalogeraki, J. Marecek et al., “Heterogeneous stream processing and crowdsourcing for urban traffic management.” in Proc. of ACM EDBT, vol. 14, Mar. 2014, pp. 712–723.
    [70] L. De Alfaro and M. Shavlovsky, “CrowdGrader: A tool for crowdsourcing the evaluation of homework assignments,” in Proc. of ACM SIGCSE, Mar. 2014, pp. 415–420.
    [71] S. Dow, E. Gerber, and A. Wong, “A pilot study of using crowds in the classroom,” in Proc. of ACM SIGCHI, Apr. 2013, pp. 227–236.
    [72] A. Morishima, S. Amer-Yahia, and S. B. Roy, “Crowd4U: An initiative for constructing an open academic crowdsourcing network,” in Proc. of AAAI HCOMP, Sep. 2014, pp. 50–51.
    [73] “Amazon Mechanical Turk,” https://www.mturk.com/, [Accessed in Apr. 2019].
    [74] “CrowdFlower,” http://www.crowdflower.com/, [Accessed in Apr. 2019].
    [75] “Microworkers,” https://www.microworkers.com/, [Accessed in Apr. 2019].
    [76] C. Harris, “You’re hired! an examination of crowdsourcing incentive models in human resource tasks,” in Proc. of ACM WSDM. Hong Kong, China, Feb. 2011, pp. 15–18.
    [77] M. Musthag, A. Raij, D. Ganesan, S. Kumar, and S. Shiffman, “Exploring micro incentive strategies for participant compensation in high-burden studies,” in Proc. of ACM UbiComp, Feb. 2011, pp. 435–444.
    [78] B. Shao, L. Shi, B. Xu, and L. Liu, “Factors affecting participation of solvers in crowdsourcing: An empirical study from China,” Electronic Markets, vol. 22, no. 2, pp. 73–82, Jun. 2012.
    [79] B. Guo, Z. Wang, Z. Yu, Y. Wang, N. Y. Yen, R. Huang, and X. Zhou, “Mobile crowd sensing and computing: The review of an emerging human-powered sensing paradigm,” ACM Computing Surveys (CSUR), vol. 48, no. 1, p. 7, Sep. 2015.
    [80] “ITU releases 2015 ICT figures,” http://www.itu.int/net/pressoffice/press_releases/2015/17.aspx#.V3G0E1dn-nl.
    [81] R. W. White and M. Richardson, “Effects of expertise differences in synchronous social Q&A,” in Proc. of ACM SIGIR, Aug. 2012, pp. 1055–1056.
    [82] G. A. Wang, J. Jiao, A. S. Abrahams, W. Fan, and Z. Zhang, “Expertrank: A topic aware expert finding algorithm for online knowledge communities,” Decis. Support Syst., vol. 54, no. 3, pp. 1442–1451, Feb. 2013.
    [83] C. Ding, X. He, P. Husbands, H. Zha, and H. D. Simon, “PageRank, HITS and a unified framework for link analysis,” in Proc. of SIAM ICDM, May 2003, pp. 353–354.
    [84] S. P. Borgatti, “Centrality and network flow,” Social Networks, vol. 27, no. 1, pp. 55–71, Jan. 2005.
    [85] D.-Z. Du, J. Smith, and J. H. Rubinstein, Advances in Steiner trees. Springer Science & Business Media, 2013.
    [86] L. Georgiadis, “Bottleneck multicast trees in linear time,” IEEE Commun. Lett., vol. 7, no. 11, pp. 564–566, Nov. 2003.
    [87] G. Sidorov, A. Gelbukh, H. Gómez-Adorno, and D. Pinto, “Soft similarity and soft cosine measure: Similarity of features in vector space model,” Computación y Sistemas, vol. 18, no. 3, pp. 491–504, Sep. 2014.
    [88] S. D. Kamvar, T. H. Haveliwala, C. D. Manning, and G. H. Golub, “Extrapolation methods for accelerating PageRank computations,” in Proc. of ACM WWW, May 2003, pp. 261–270.
    [89] Octoparse Home Page, http://www.octoparse.com, [Accessed in Feb. 2017].
    [90] I. Ali, R. Y. Chang, and C.-H. Hsu, “SOQAS: Distributively finding high-quality answerers in dynamic social networks,” IEEE Access, vol. 6, pp. 55074–55089, 2018.
    [91] G. Dror, Y. Koren, Y. Maarek, and I. Szpektor, “I want to answer; who has a question?: Yahoo! Answers recommender system,” in Proc. of ACM SIGKDD, Aug. 2011, pp. 1109–1117.
    [92] E. Pennisi, “How did cooperative behavior evolve?” Science, vol. 309, no. 5731, p. 93, Jul. 2005.
    [93] E. Tan, L. Guo, S. Chen, X. Zhang, and Y. Zhao, “Spammer behavior analysis and detection in user generated content on social networks,” in Proc. of IEEE ICDCS, Jun. 2012, pp. 305–314.
    [94] X. Liu, W. B. Croft, and M. Koll, “Finding experts in community-based question answering services,” in Proc. of ACM CIKM, Oct. 2005, pp. 315–316.
    [95] L. Chen and R. Nayak, “Expertise analysis in a question answer portal for author ranking,” in Proc. of IEEE/WIC/ACM WI, Dec. 2008, pp. 134–140.
    [96] O. Rottenstreich, Y. Kanizo, and I. Keslassy, “The variable-increment counting bloom filter,” IEEE/ACM Trans. Netw., vol. 22, no. 4, pp. 1092–1105, Aug. 2014.
    [97] L. Zhang, X.-Y. Li, K. Liu, T. Jung, and Y. Liu, “Message in a sealed bottle: Privacy preserving friending in mobile social networks,” IEEE Trans. Mobile Comput., vol. 14, no. 9, pp. 1888–1902, Sep. 2015.
    [98] M. Li, S. Yu, N. Cao, and W. Lou, “Privacy-preserving distributed profile matching in proximity-based mobile social networks,” IEEE Trans. Wireless Commun., vol. 12, no. 5, pp. 2024–2033, May 2013.
    [99] R. Zhou, K. Hwang, and M. Cai, “GossipTrust for fast reputation aggregation in peer-to-peer networks,” IEEE Trans. Knowl. Data Eng., vol. 20, no. 9, pp. 1282–1295, Sep. 2008.
    [100] A. Huang, “Similarity measures for text document clustering,” in Proc. of NZCSRSC, Apr. 2008, pp. 49–56.
    [101] D. M. Christopher, R. Prabhakar, and S. Hinrich, Introduction to information retrieval. New York: Cambridge University Press, Aug. 2008, pp. 1–504.
    [102] S. Zhu, J. Wu, H. Xiong, and G. Xia, “Scaling up top-K cosine similarity search,” Data Knowl. Eng., vol. 70, no. 1, pp. 60–83, Jan. 2011.
    [103] A. Vahdat and D. Becker, “Epidemic routing for partially connected ad hoc networks,” Department of Computer Science, Duke University, Tech. Rep. CS-200006, Apr. 2000.
    [104] Microsoft Azure Service Home Page, https://azure.microsoft.com, [Accessed in Feb. 2017].
    [105] NS-3: Network Simulator 3, https://www.nsnam.org/, [Accessed in Feb. 2017].
    [106] “Akamai: State of the Internet,” https://www.akamai.com/fr/fr/multimedia/documents/state-of-the-internet/q1-2017-state-of-the-internet-connectivity-report.pdf, [Accessed in Feb. 2017].
    [107] I. Ali, R. Y. Chang, and C.-H. Hsu, “SORT: SOcial HelpeR SelecTion Scheme for Mobile Question Answering Systems,” in Proc. of IEEE ICNC, Feb. 2019, pp. 647–652.
    [108] M. Harper, D. Raban, S. Rafaeli, and J. Konstan, “Predictors of answer quality in online Q&A sites,” in Proc. of ACM SIGCHI, Apr. 2008, pp. 865–874.
    [109] A. Rahmati and L. Zhong, “Context-for-Wireless: context-sensitive energy efficient wireless data transfer,” in Proc. of ACM MobiSys, Jun. 2007, pp. 165–178.
    [110] D. L. Olson, “Comparison of weights in TOPSIS models,” Math. Comput. Model., vol. 40, no. 7-8, pp. 721–727, Oct. 2004.
    [111] I. Ali, R. Y. Chang, and C.-H. Hsu, “CRED: Credibility-Enabled Social Network Based Q&A System for Assessing Answers Correctness,” in IEEE WCNC, May 2020 (accepted).
    [112] M. Bouguessa, B. Dumoulin, and S. Wang, “Identifying authoritative actors in question-answering forums: The case of Yahoo! Answers,” in Proc. of ACM SIGKDD, Aug. 2008, pp. 866–874.
    [113] M. R. Morris, J. Teevan, and K. Panovich, “What do people ask their social networks, and why?: A survey study of status message Q&A behavior,” in Proc. of SIGCHI Conf. on Human Factors in Comput. Syst., Apr. 2010, pp. 1739–1748.
    [114] A. Jøsang, Subjective Logic: A Formalism for Reasoning under Uncertainty. Springer, Jun. 2018.
    [115] A. Jøsang, “The consensus operator for combining beliefs,” Artificial Intelligence, vol. 141, no. 1-2, pp. 157–170, Oct. 2002.
    [116] J.-H. Cho, S. Rager, J. O’Donovan, S. Adali, and B. D. Horne, “Uncertainty-based false information propagation in social networks,” ACM Trans. on Social Comput., vol. 2, no. 2, p. 5, Jun. 2019.
    [117] I. Ali, R. Y. Chang, C.-H. Hsu, and C.-H. Lee, “LOCI: A Mobile Q&A System with Multimodal Motivation Scheme for Local Intent Questions in Dynamic Social Networks,” in IEEE VTC, May 2020 (accepted).
    [118] “Microsoft: 53 Percent Of Mobile Searches Have Local Intent,” https://searchengineland.com/microsoft-53-percent-of-mobile-searches-have-local-intent-55556.
    [119] T. Yan, M. Marzilli, R. Holmes, D. Ganesan, and M. Corner, “mCrowd: A platform for mobile crowdsourcing,” in Proc. of ACM SenSys, Nov. 2009, pp. 347–348.
    [120] Y. Zhang, C. Jiang, L. Song, M. Pan, Z. Dawy, and Z. Han, “Incentive mechanism for mobile crowdsourcing using an optimized tournament model,” IEEE J. Sel. Areas Commun., vol. 35, no. 4, pp. 880–892, Apr. 2017.
    [121] N. Kaufmann, T. Schulze, and D. Veit, “More than fun and money. Worker motivation in crowdsourcing – A study on Mechanical Turk,” in Proc. of AMCIS, vol. 11, Aug. 2011, pp. 1–11.
    [122] K. K. Nam, M. S. Ackerman, and L. A. Adamic, “Questions in, knowledge in?: A study of Naver’s question answering community,” in Proc. of ACM SIGCHI, Apr. 2009, pp. 779–788.
    [123] C. H. Papadimitriou and K. Steiglitz, Combinatorial Optimization: Algorithms and Complexity. Upper Saddle River, NJ, USA: Prentice-Hall, Inc., 1998.
    [124] H.-F. Ting and X. Xiang, “Near optimal algorithms for online maximum edge weighted b-matching and two-sided vertex-weighted b-matching,” Theoretical Computer Science, vol. 607, pp. 247–256, Nov. 2015.
    [125] Y. Zhao and Q. Han, “Spatial crowdsourcing: Current state and future directions,” IEEE Commun. Mag., vol. 54, no. 7, pp. 102–107, Jul. 2016.
    [126] D. Yang, D. Zhang, V. W. Zheng, and Z. Yu, “Modeling user activity preference by leveraging user spatial temporal characteristics in LBSNs,” IEEE Trans. Syst., Man, Cybern., vol. 45, no. 1, pp. 129–142, Jun. 2014.
    [127] Y. Tong, J. She, B. Ding, L. Wang, and L. Chen, “Online mobile micro-task allocation in spatial crowdsourcing,” in Proc. of ICDE, May 2016, pp. 49–60.
