Fares on fairness: Using a total error framework to examine the role of measurement and representation in training data on model fairness and bias
Schenk, Patrick Oliver; Kern, Christoph; Buskirk, Trent D.

URL: https://proceedings.mlr.press/v294/schenk25a.html
Document Type: Conference or workshop publication
Year of publication: 2025
Book title: European Workshop on Algorithmic Fairness, 30 June-2 July 2025, Eindhoven University of Technology, Eindhoven, The Netherlands
Journal or publication series: Proceedings of Machine Learning Research : PMLR
Volume: 294
Page range: 187-211
Conference title: EWAF’25, Fourth European Workshop on Algorithmic Fairness
Location of the conference venue: Eindhoven, The Netherlands
Date of the conference: 30.06.-02.07.2025
|
Editors: Weerts, Hilde; Pechenizkiy, Mykola; Allhutter, Doris; Corrêa, Ana Maria; Grote, Thomas; Liem, Cynthia
Place of publication: Red Hook, NY
Publishing house: Curran Associates, Inc.
Publication language: English
Institution: Non-faculty institutions > Mannheim Centre for European Social Research - Research Department A
Subject: 004 Computer science, internet
Keywords (English): algorithmic fairness, representation, measurement, training data quality, data quality frameworks, data-centric machine learning, fair machine learning, survey methodology, total survey error, total data quality
Abstract: Data-driven decisions, often based on predictions from machine learning (ML) models, are becoming ubiquitous. For these decisions to be just, the underlying ML models must be fair, i.e., work equally well for all parts of the population, such as groups defined by gender or age. What are the logical next steps if, however, a trained model is accurate but not fair? How can we guide the whole data pipeline so that we avoid training unfair models on inadequate data, recognizing possible sources of unfairness early on? How can the concepts of data-based sources of unfairness that exist in the fair ML literature be organized, perhaps in a way that yields new insight? In this paper, we explore two total error frameworks from the social sciences, Total Survey Error and its generalization, Total Data Quality, to help elucidate issues related to fairness and trace its antecedents. The goal of this thought piece is to acquaint the fair ML community with these two frameworks, discussing errors of measurement and errors of representation through their organized structure. We illustrate how they may be useful, both practically and conceptually.
This entry is part of the university bibliography.