Fares on fairness: Using a total error framework to examine the role of measurement and representation in training data on model fairness and bias
Schenk, Patrick Oliver; Kern, Christoph; Buskirk, Trent D.
URL: https://proceedings.mlr.press/v294/schenk25a.html
Document type: Conference publication
Year of publication: 2025
Book title: European Workshop on Algorithmic Fairness, 30 June-2 July 2025, Eindhoven University of Technology, Eindhoven, The Netherlands
Journal or series title: Proceedings of Machine Learning Research: PMLR
Volume: 294
Page range: 187-211
Event title: EWAF’25, Fourth European Workshop on Algorithmic Fairness
Event location: Eindhoven, The Netherlands
Event date: 30 June-2 July 2025
Editors: Weerts, Hilde; Pechenizkiy, Mykola; Allhutter, Doris; Corrêa, Ana Maria; Grote, Thomas; Liem, Cynthia
Place of publication: Red Hook, NY
Publisher: Curran Associates, Inc.
Language of publication: English
Institution: Non-faculty institutions > MZES - Research Department A
Subject area: 004 Computer science
Free keywords (English): algorithmic fairness, representation, measurement, training data quality, data quality frameworks, data-centric machine learning, fair machine learning, survey methodology, total survey error, total data quality
Abstract:
Data-driven decisions, often based on predictions from machine learning (ML) models, are becoming ubiquitous. For these decisions to be just, the underlying ML models must be fair, i.e., work equally well for all parts of the population, such as groups defined by gender or age. What, however, are the logical next steps if a trained model is accurate but not fair? How can we guide the whole data pipeline so that we avoid training unfair models based on inadequate data, recognizing possible sources of unfairness early on? How can the concepts of data-based sources of unfairness that exist in the fair ML literature be organized, perhaps in a way that yields new insight? In this paper, we explore two total error frameworks from the social sciences, Total Survey Error and its generalization Total Data Quality, to help elucidate issues related to fairness and trace its antecedents. The goal of this thought piece is to acquaint the fair ML community with these two frameworks, discussing errors of measurement and errors of representation through their organized structure. We illustrate how they may be useful, both practically and conceptually.
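The abstract's working notion of fairness (a model that works equally well across population groups such as those defined by gender or age) can be made concrete by comparing per-group performance. The Python sketch below is illustrative only and not taken from the paper; the data, the group labels, and the choice of accuracy as the performance metric are assumptions.

# Minimal sketch (not from the paper): compare how well a model performs for
# groups defined by a sensitive attribute, using per-group accuracy.
from collections import defaultdict

def group_accuracy(y_true, y_pred, groups):
    """Return accuracy per group and the largest pairwise accuracy gap."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for yt, yp, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(yt == yp)
    acc = {g: correct[g] / total[g] for g in total}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

# Hypothetical labels, predictions, and group memberships (e.g., gender or age band)
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

acc, gap = group_accuracy(y_true, y_pred, groups)
print(acc, gap)  # {'a': 0.75, 'b': 0.75} and a gap of 0.0 for this toy data

A large gap would signal that the model does not work equally well for all groups; the paper's point is to trace such disparities back to measurement and representation errors in the training data rather than only detecting them after training.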
This entry is part of the university bibliography.