AQuA - Combining experts' and non-experts' views to assess deliberation quality in online discussions using LLMs
Behrendt, Maike ; Wagner, Stefan Sylvius ; Ziegele, Marc ; Wilms, Lena K. ; Stoll, Anke ; Heinbach, Dominique ; Harmeling, Stefan
URN: urn:nbn:de:bsz:180-madoc-699396
Document Type: Conference or workshop publication
Year of publication: 2024
Book title: LREC-COLING 2024 : the first workshop on language-driven deliberation technology (DELITE 2024) : workshop proceedings
Page range: 1-12
Conference title: DELITE 2024
Location of the conference venue: Torino, Italia
Date of the conference: 20.05.2024
Editors: Hautli-Janisz, Annette ; Lapesa, Gabriella ; Anastasiou, Lucas ; Gold, Valentin ; De Liddo, Anna ; Reed, Chris
Publishing house: ACL
ISBN: 978-2-493814-14-2
Publication language: English
Institution: School of Humanities > Medien- und Kommunikationswissenschaft (Naab 2022-)
Pre-existing license: Creative Commons Attribution, Non-Commercial 4.0 International (CC BY-NC 4.0)
Subject: 004 Computer science, internet ; 300 Social sciences, sociology, anthropology ; 320 Political science
Abstract: Measuring the quality of contributions in political online discussions is crucial in deliberation research and computer science. Research has identified various indicators to assess online discussion quality, and with deep learning advancements, automating these measures has become feasible. While some studies focus on analyzing specific quality indicators, a comprehensive quality score incorporating various deliberative aspects is often preferred. In this work, we introduce AQuA, an additive score that calculates a unified deliberative quality score from multiple indices for each discussion post. Unlike other singular scores, AQuA preserves information on the deliberative aspects present in comments, enhancing model transparency. We develop adapter models for 20 deliberative indices, and calculate correlation coefficients between experts' annotations and the perceived deliberativeness by non-experts to weigh the individual indices into a single deliberative score. We demonstrate that the AQuA score can be computed easily from pre-trained adapters and aligns well with annotations on other datasets that have not been seen during training. The analysis of experts' vs. non-experts' annotations confirms theoretical findings in the social science literature.
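As a rough illustration of the additive scoring idea described in the abstract, the following Python sketch computes a single quality score for one post as a weighted sum of per-index predictions. The index names, weights, and values are hypothetical placeholders; they are not the published AQuA coefficients or the authors' released implementation, and the adapter models that would produce the predictions are not shown.

```python
# Minimal sketch of an additive quality score, assuming per-post predictions
# from separate classifiers (one per deliberative index). All index names,
# weights, and values are hypothetical placeholders, not the AQuA coefficients.

# Hypothetical predictions in [0, 1] for a single discussion post,
# e.g. produced by one adapter model per index (only four of 20 shown).
index_predictions = {
    "justification": 0.8,
    "reference_to_common_good": 0.3,
    "respect": 0.9,
    "storytelling": 0.1,
}

# Hypothetical weights, e.g. derived from correlations between experts'
# annotations and non-experts' perceived deliberativeness.
index_weights = {
    "justification": 0.25,
    "reference_to_common_good": 0.15,
    "respect": 0.20,
    "storytelling": 0.05,
}

def additive_quality_score(predictions: dict[str, float],
                           weights: dict[str, float]) -> float:
    """Weighted sum over the deliberative indices of a single post."""
    return sum(weights[name] * predictions.get(name, 0.0) for name in weights)

print(additive_quality_score(index_predictions, index_weights))  # 0.43
```

Because the score is a plain weighted sum, the contribution of each deliberative index to the final value remains inspectable, which is the transparency property the abstract emphasizes.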
The document is provided by the publication server of the University Library Mannheim.
This record was not published during employment at the University of Mannheim; it is an external publication.
ORCID: Heinbach, Dominique: 0000-0002-7121-7464