Policy domain prediction from party manifestos with adapters and knowledge enhanced transformers
Yu, Hsiao-Chu ; Rehbein, Ines ; Ponzetto, Simone Paolo
URL: https://aclanthology.org/2023.konvens-main.23/
URN: urn:nbn:de:bsz:180-madoc-676940
Document Type: Conference or workshop publication
Year of publication: 2023
Book title: The 19th Conference on Natural Language Processing (KONVENS 2023) : proceedings of the conference, September 18-22, 2023
Page range: 229-244
Conference title: KONVENS 2023, Konferenz zur Verarbeitung natürlicher Sprache
Location of the conference venue: Ingolstadt, Germany
Date of the conference: 18.-22.9.2023
Editors: Georges, Munir ; Herygers, Aaricia ; Friedrich, Annemarie ; Roth, Benjamin
Place of publication: Stroudsburg, PA
Publishing house: Association for Computational Linguistics, ACL
ISBN: 979-8-89176-029-5
Publication language: English
Institution: School of Business Informatics and Mathematics > Other (Sonstige) - Fakultät für Wirtschaftsinformatik und Wirtschaftsmathematik ; School of Business Informatics and Mathematics > Information Systems III: Enterprise Data Analysis (Ponzetto 2016-)
Subject: 004 Computer science, internet
Keywords (English): adapters, knowledge-enhanced transformers, policy domain prediction, political text analysis
Abstract: Recent work has shown the potential of knowledge injection into transformer-based pretrained language models for improving model performance on a number of NLI benchmark tasks. Motivated by this success, we test the potential of knowledge injection for an application in the political domain and study whether we can improve results for policy domain prediction, that is, for predicting fine-grained policy topics and stance in party manifestos. We experiment with three types of knowledge, namely (1) domain-specific knowledge via continued pre-training on in-domain data, (2) lexical semantic knowledge, and (3) factual knowledge about named entities. In our experiments, we use adapter modules as a parameter-efficient way to inject knowledge into transformers. Our results show a consistent positive effect for domain adaptation via continued pre-training and small improvements when replacing full model training with a task-specific adapter. The injected knowledge, however, yields only minor improvements over full training and fails to outperform the task-specific adapter without external knowledge, raising the question of which type of knowledge is needed to solve this task.
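The adapter setup described in the abstract can be illustrated with a short sketch. The following is a minimal example of training a task-specific adapter for sequence classification with the AdapterHub adapters library; the checkpoint name, adapter name, and label count are hypothetical placeholders and not the authors' actual configuration.

    # Minimal sketch of a task-specific adapter for policy domain
    # classification. Checkpoint, adapter name, and num_labels are
    # hypothetical placeholders, not the paper's actual configuration.
    from adapters import AutoAdapterModel
    from transformers import AutoTokenizer

    model_name = "bert-base-uncased"  # placeholder checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoAdapterModel.from_pretrained(model_name)

    # Add a bottleneck adapter ("seq_bn" is the sequential bottleneck,
    # i.e. Pfeiffer, configuration) plus a classification head.
    model.add_adapter("policy_domain", config="seq_bn")
    model.add_classification_head("policy_domain", num_labels=8)  # placeholder label count

    # Freeze all pretrained weights; only the adapter and head are trained.
    model.train_adapter("policy_domain")
    model.set_active_adapters("policy_domain")

    # Forward pass over a single (invented) manifesto sentence.
    inputs = tokenizer("We will invest heavily in renewable energy.",
                       return_tensors="pt")
    logits = model(**inputs).logits  # one score per policy domain label

In a setup like this, knowledge adapters pre-trained on external resources could additionally be combined with the task adapter (e.g. by stacking or fusion), which is the kind of parameter-efficient knowledge injection the abstract refers to.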
This entry is part of the university bibliography.
The document is provided by the publication server of the University Library Mannheim.
ORCID: Ponzetto, Simone Paolo: https://orcid.org/0000-0001-7484-2049