adapters, knowledge-enhanced transformers, policy domain prediction, political text analysis
Abstract:
Recent work has shown the potential of knowledge injection into transformer-based pretrained language models for improving model performance on a number of NLI benchmark tasks. Motivated by this success, we test the potential of knowledge injection for an application in the political domain and study whether we can improve results for policy domain prediction, that is, for predicting fine-grained policy topics and stance in party manifestos. We experiment with three types of knowledge, namely (1) domain-specific knowledge via continued pre-training on in-domain data, (2) lexical semantic knowledge, and (3) factual knowledge about named entities. In our experiments, we use adapter modules as a parameter-efficient way to inject knowledge into transformers. Our results show a consistent positive effect for domain adaptation via continued pre-training and small improvements when replacing full model training with a task-specific adapter. The injected knowledge, however, yields only minor improvements over full training and fails to outperform the task-specific adapter without external knowledge, raising the question of which type of knowledge is needed to solve this task.
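The adapter modules mentioned above are, in the standard formulation, small bottleneck layers inserted into a frozen transformer: the hidden state is down-projected, passed through a nonlinearity, up-projected, and added back via a residual connection. The sketch below illustrates this shape in plain NumPy; the dimensions and the near-identity initialization (zeroed up-projection) are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np


def relu(x):
    """Elementwise rectified linear unit."""
    return np.maximum(0.0, x)


class BottleneckAdapter:
    """Minimal bottleneck adapter: down-project, nonlinearity,
    up-project, then add a residual connection.

    hidden_size and bottleneck are illustrative choices; the
    zero-initialized up-projection makes the module start out as
    the identity, so inserting it does not perturb the frozen model.
    """

    def __init__(self, hidden_size=768, bottleneck=64, seed=0):
        rng = np.random.default_rng(seed)
        self.w_down = rng.normal(0.0, 0.02, size=(hidden_size, bottleneck))
        self.w_up = np.zeros((bottleneck, hidden_size))

    def __call__(self, h):
        # h: (..., hidden_size) hidden states from a transformer layer
        return h + relu(h @ self.w_down) @ self.w_up
```

Because only `w_down` and `w_up` are trained while the backbone stays frozen, each knowledge source (domain, lexical, factual) can be stored in its own small adapter and swapped in without retraining the full model.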
This entry is part of the university bibliography.
The document is provided by the publication server of the University Library of Mannheim.