Are we consistently biased? Multidimensional analysis of biases in distributional word vectors


Lauscher, Anne ; Glavaš, Goran



DOI: https://doi.org/10.18653/v1/S19-1010
URL: https://www.aclweb.org/anthology/S19-1010/
Additional URL: https://www.aclweb.org/anthology/volumes/S19-1/
Document Type: Conference or workshop publication
Year of publication: 2019
Book title: Lexical and Computational Semantics (*SEM): Proceedings of the Eighth Conference, June 6-7, 2019, Minneapolis: NAACL HLT 2019
Page range: 85-91
Conference title: Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)
Location of the conference venue: Minneapolis, MN
Date of the conference: June 6-7, 2019
Editor: Mihalcea, Rada F.
Place of publication: Stroudsburg, PA
Publisher: Association for Computational Linguistics
ISBN: 978-1-948087-93-3
Publication language: English
Institution: School of Business Informatics and Mathematics > Text Analytics for Interdisciplinary Research (Juniorprofessur) (Glavaš 2017-2021)
School of Business Informatics and Mathematics > Wirtschaftsinformatik III (Ponzetto 2016-)
Subject: 004 Computer science, internet
Keywords (English): Natural Language Processing ; Word Embeddings ; Word Embeddings Bias
Abstract: Word embeddings have recently been shown to reflect many of the pronounced societal biases (e.g., gender bias or racial bias). Existing studies are, however, limited in scope and do not investigate the consistency of biases across relevant dimensions like embedding models, types of texts, and different languages. In this work, we present a systematic study of biases encoded in distributional word vector spaces: we analyze how consistent the bias effects are across languages, corpora, and embedding models. Furthermore, we analyze the cross-lingual biases encoded in bilingual embedding spaces, indicative of the effects of bias transfer encompassed in cross-lingual transfer of NLP models. Our study yields some unexpected findings, e.g., that biases can be emphasized or downplayed by different embedding models or that user-generated content may be less biased than encyclopedic text. We hope our work catalyzes bias research in NLP and informs the development of bias reduction techniques.
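Note on the bias measurements the abstract refers to: biases in word vector spaces are commonly quantified with WEAT-style association tests. Below is a minimal sketch of such an effect-size computation, assuming embeddings are available as a plain word-to-vector dictionary; the word lists and random vectors are hypothetical placeholders for illustration, not the paper's actual test data or code.

import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, emb):
    # s(w, A, B): mean similarity of w to attribute set A minus to attribute set B
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    # Cohen's-d-style effect size contrasting the two target sets X and Y
    s_X = [association(x, A, B, emb) for x in X]
    s_Y = [association(y, A, B, emb) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)

# Hypothetical usage: tiny random vectors stand in for real embeddings.
rng = np.random.default_rng(0)
words = ["career", "office", "family", "home", "man", "male", "woman", "female"]
emb = {w: rng.normal(size=50) for w in words}
print(weat_effect_size(X=["career", "office"], Y=["family", "home"],
                       A=["man", "male"], B=["woman", "female"], emb=emb))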

This entry is part of the university bibliography.



