Towards human-centric fact-checking
Wiem Ben Rim (Ph.D. Student)
With the rise of social networks, distinguishing fact from fiction has become a new priority on these platforms. In recent years, spreading misinformation and disinformation has become easier and more consequential than ever. To address these issues, journalists and reporters perform manual fact-checking, relying on multiple sources to confirm or debunk information. However, manual fact-checking remains challenging because of the abundance of online data and the time-sensitive nature of some information; for example, exposing deception in a politician's speech must happen promptly to avoid affecting public opinion and election results.

Past research has demonstrated the need for adequate automated fact-checking tools to address this problem efficiently. Automated fact-checking can enable us to identify false or misleading information quickly and accurately. It can also be a valuable tool for fact-checkers, helping them verify the accuracy of the information they encounter. To achieve this goal, however, we need to consider the problem from two additional angles. First, we argue that we should move towards models that can explain not only their results but also the steps that lead to their predictions. Second, we should develop models that are human-centric: models that take user needs into account and can power reliable, user-friendly fact-checking tools. We therefore aim to answer the following question: how can we create human-centric, explainable fact-checking methods?
Primary Advisor: Emine Yilmaz (University College London)
Industry Advisor: Patrick Lewis (Cohere)
PhD Duration: 15 January 2024 - 14 January 2028