ZusammenQA: Data augmentation with specialized models for cross-lingual open-retrieval question answering system


Hung, Chia-Chien ; Green, Tommaso ; Litschko, Robert ; Tsereteli, Tornike ; Takeshita, Sotaro ; Bombieri, Marco ; Glavaš, Goran ; Ponzetto, Simone Paolo



DOI: https://doi.org/10.18653/v1/2022.mia-1.8
URL: https://aclanthology.org/2022.mia-1.8
Document Type: Conference or workshop publication
Year of publication: 2022
Book title: Proceedings of the Workshop on Multilingual Information Access (MIA)
Page range: 77-90
Conference title: 1st Workshop on Multilingual Information Access (MIA)
Location of the conference venue: Seattle, WA
Date of the conference: 15 July 2022
Editors: Asai, Akari ; Choi, Eunsol ; Clark, Jonathan H. ; Hu, Junjie ; Lee, Chia-Hsuan ; Kasai, Jungo ; Longpre, Shayne ; Yamada, Ikuya ; Zhang, Rui
Place of publication: Seattle, USA
Publishing house: Association for Computational Linguistics
ISBN: 978-1-955917-89-6
Publication language: English
Institution: School of Business Informatics and Mathematics > Information Systems III: Enterprise Data Analysis (Ponzetto 2016-)
Subject: 004 Computer science, internet
Abstract: This paper introduces our proposed system for the MIA Shared Task on Cross-lingual Open-retrieval Question Answering (COQA). In this challenging scenario, given an input question, the system has to gather evidence documents from a multilingual pool and generate from them an answer in the language of the question. We devised several approaches combining different model variants for three main components: Data Augmentation, Passage Retrieval, and Answer Generation. For passage retrieval, we evaluated a monolingual BM25 ranker against an ensemble of re-rankers based on multilingual pretrained language models (PLMs), as well as variants of the shared task baseline, re-training it from scratch using a recently introduced contrastive loss that maintains a strong gradient signal throughout training by means of mixed negative samples. For answer generation, we focused on language- and domain-specialization by means of continued language model (LM) pretraining of existing multilingual encoders. Additionally, for both passage retrieval and answer generation, we augmented the training data provided by the task organizers with automatically generated question-answer pairs created from Wikipedia passages to mitigate the issue of data scarcity, particularly for the low-resource languages for which no training data were provided. Our results show that language- and domain-specialization as well as data augmentation help, especially for low-resource languages.
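
The abstract mentions retriever training with a contrastive loss over mixed negative samples. As a rough illustrative sketch only (not the paper's actual implementation), the following PyTorch snippet shows an InfoNCE-style loss that mixes in-batch negatives with mined hard negatives; the function name contrastive_loss, the argument names, and all shapes are hypothetical.

    # Hypothetical sketch: InfoNCE-style contrastive loss with mixed negatives
    # (in-batch negatives plus per-question hard negatives). Illustrative only;
    # names and shapes are assumptions, not the shared-task code.
    import torch
    import torch.nn.functional as F

    def contrastive_loss(q_emb, pos_emb, hard_neg_emb, temperature=0.05):
        # q_emb:        (B, D) question embeddings
        # pos_emb:      (B, D) embeddings of gold passages
        # hard_neg_emb: (B, K, D) embeddings of mined hard-negative passages
        B, _ = q_emb.shape
        # Scores against all in-batch passages: (B, B); the diagonal holds
        # each question's gold passage, the off-diagonal entries act as
        # in-batch negatives.
        in_batch = q_emb @ pos_emb.T
        # Scores against each question's own hard negatives: (B, K).
        hard = torch.einsum("bd,bkd->bk", q_emb, hard_neg_emb)
        # Mixed negatives: concatenate both score sets before the softmax.
        logits = torch.cat([in_batch, hard], dim=1) / temperature
        labels = torch.arange(B, device=q_emb.device)  # gold index = diagonal
        return F.cross_entropy(logits, labels)

    # Toy usage with random, L2-normalized embeddings:
    q = F.normalize(torch.randn(8, 768), dim=-1)
    p = F.normalize(torch.randn(8, 768), dim=-1)
    n = F.normalize(torch.randn(8, 4, 768), dim=-1)
    print(contrastive_loss(q, p, n).item())

Mixing the two negative pools keeps hard negatives in every softmax, which is one way such a loss can maintain a strong gradient signal even after the easy in-batch negatives have been learned.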




This entry is part of the university bibliography.



