Dealing with misinformation is a grand challenge of the information society, aimed at equipping computer users with effective tools for identifying and debunking misinformation. Many machine learning-based methods exist for detecting harmful content, but they can be expensive or impractical to train, to retrain for domain drift, and to deploy. Moreover, current Natural Language Processing (NLP) research, including fact-checking, fails to meet the expectations of real-life scenarios. In this talk, we show why past work on fact-checking has not yet produced truly useful tools for managing misinformation, by comparing the current NLP paradigm with what human fact-checkers actually do. NLP systems are expensive in terms of financial cost, computation, and the human effort needed to create data for the learning process. With that in mind, we are pursuing research on detecting emerging misinformation topics in order to focus human attention on the most harmful, novel examples. We further compare the capabilities of automatic, NLP-based approaches with those of human fact-checkers, uncovering critical research directions for the future.
Iryna Gurevych (PhD 2003, U. Duisburg-Essen, Germany) is a professor of Computer Science and director of the Ubiquitous Knowledge Processing (UKP) Lab at the Technical University (TU) of Darmstadt, Germany. Her main research interests are in machine learning for large-scale language understanding and text semantics. Iryna's work has received numerous awards, including the ACL Fellow award in 2020 and the first-ever Hessian LOEWE Distinguished Chair award (2.5 million euros) in 2021. Iryna is co-director of the NLP program within ELLIS, a network of excellence in machine learning. She is currently the president of the Association for Computational Linguistics. In 2022, she was awarded an ERC Advanced Grant.