Dealing with misinformation is a major challenge in today's information society, exemplified by events like the COVID-19 infodemic and the increasing use of hallucinating large language models in real-world applications. The need to aid humans in countering misinformation has spurred increased interest in automated fact-checking (AFC) research, resulting in numerous approaches and resources. In this talk I discuss our research, which investigates the gap between NLP-based AFC research and real-world requirements, and how we try to bridge it. I will start by comparing NLP-based AFC approaches to how humans fact-check, highlighting why current AFC approaches fall short of effectively combating misinformation and identifying the crucial research directions we pursue to close this gap. First, we developed a new dataset to explore the ambiguities that naturally arise when realistic claims are compared against realistic evidence. Second, we focus on the specific subproblem of how scientific information gets twisted by misinformation. To tackle this, we are in the process of creating a novel dataset and models for reconstructing the fallacious arguments behind such misinformation.
Iryna Gurevych (PhD 2003, U. Duisburg-Essen, Germany) is professor of Computer Science and director of the Ubiquitous Knowledge Processing (UKP) Lab at the Technical University (TU) of Darmstadt in Germany. Her main research interests are in machine learning for large-scale language understanding and text semantics. Iryna's work has received numerous awards, including being named an ACL Fellow in 2020 and receiving the first-ever Hessian LOEWE Distinguished Chair award (2.5 million euros) in 2021. Iryna is co-director of the NLP program within ELLIS, a network of excellence in machine learning. She is currently the president of the Association for Computational Linguistics. In 2022, she was awarded an ERC Advanced Grant.