Utrecht University
Threat classification is a relatively new research field within Natural Language Processing (NLP). It concerns models that classify which texts constitute a threat and which do not. The field is essential because, unlike insulting someone, uttering a threat is illegal.
This research operationalizes the Dutch legal definition of what constitutes a threat and investigates to what extent a language model can learn it. Language models are the state-of-the-art technique for numerous NLP tasks, including text classification. In the text classification domain, they allow a Machine Learning (ML) model to be pre-trained on millions of tokens before being fine-tuned on a downstream task; in this way, the model learns the syntax of a language. This pre-training mitigates data scarcity, a recurring problem in threat classification. In this study, the application of a language model is compared to models previously used in the threat classification domain (i.e. BiLSTM, CNN, Naive Bayes, and SVM). The models are evaluated on two performance metrics: F1-score and Precision-Recall Area-Under-Curve (PR-AUC). All models are trained on publicly available, manually re-annotated datasets containing threats and non-threats. The goal of these models is to predict whether an uttered threat is legally actionable. The models were evaluated by means of a stratified ten-fold split.
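The evaluation protocol described above can be sketched as follows. This is a minimal illustration only: the classifier and the synthetic data are placeholders, not the models or datasets from the study, and scikit-learn's `average_precision_score` is used here as one common way to compute PR-AUC.

```python
# Sketch of a stratified ten-fold evaluation scored with F1 and PR-AUC.
# Data and classifier are stand-ins, not the study's models or datasets.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, average_precision_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                         # stand-in feature vectors
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)   # stand-in binary labels

f1s, pr_aucs = [], []
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    clf = LogisticRegression().fit(X[train_idx], y[train_idx])
    probs = clf.predict_proba(X[test_idx])[:, 1]
    f1s.append(f1_score(y[test_idx], probs > 0.5))
    pr_aucs.append(average_precision_score(y[test_idx], probs))

print(f"F1: {np.mean(f1s):.3f}  PR-AUC: {np.mean(pr_aucs):.3f}")
```

Stratification keeps the threat/non-threat ratio roughly constant across folds, which matters when one class is rare, as is typical for threat data.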
The results show that it is possible to operationalize the Dutch legal definition by means of annotation guidelines. Two annotators re-annotated a Dutch threat dataset; their agreement exceeded chance level and was deemed sufficient for the target institution (i.e. the National Police). The language model subsequently outperformed four of the five benchmark models (all except the CNN) with statistical significance on all performance metrics.
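Chance-corrected agreement between two annotators, as reported above, is commonly measured with Cohen's kappa. The abstract does not name the statistic used, so the snippet below is an illustrative sketch with made-up labels, not the study's annotations.

```python
# Illustrative chance-corrected agreement between two annotators
# (1 = legally actionable threat, 0 = not). Labels are invented.
from sklearn.metrics import cohen_kappa_score

annotator_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
annotator_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"kappa = {kappa:.2f}")  # 0 = chance-level agreement, 1 = perfect
```

A kappa well above zero indicates agreement beyond what random labeling would produce, which is the property the annotation study needs to establish.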