Causal graphs and fairness in machine learning: addressing practical challenges in causal fairness evaluation


Cohausz, Lea ; Kappenberger, Jakob ; Stuckenschmidt, Heiner


PDF: 19082_pub.pdf - Published Version (982 kB)

DOI: https://doi.org/10.1613/jair.1.19082
URL: https://www.jair.org/index.php/jair/article/view/1...
Additional URL: https://www.researchgate.net/publication/397043895...
URN: urn:nbn:de:bsz:180-madoc-713440
Document type: Journal article
Year of publication: 2025
Journal or series title: Journal of Artificial Intelligence Research : JAIR
Volume: 84
Issue: Article 15
Page range: 1-34
Place of publication: San Francisco, Calif.
Publisher: Morgan Kaufman Publ.
ISSN: 1076-9757 , 1943-5037
Publication language: English
Institution: School of Business Informatics and Mathematics > Practical Computer Science II: Artificial Intelligence (Stuckenschmidt 2009-)
Pre-existing license: Creative Commons Attribution 4.0 International (CC BY 4.0)
Subject area: 004 Computer science
Keywords (English): Bayesian networks , causality , machine learning
Abstract: Background: With the discussion of fairness in Machine Learning (ML) gaining traction in recent years, the idea of viewing fairness through a causal lens has become prominent. The main idea is that by examining the causal structure underlying the data used for an ML model, we can evaluate more precisely which influences of the sensitive variables on the target variable are problematic and in what way. This allows not only a nuanced view of fairness and an informed choice of fairness measures but also more targeted approaches to handling fairness issues (such as path-specific bias mitigation). Objectives: Two important points have hindered the practical use of the causal lens and causality-based bias mitigation. First, a classification of the different graphical structures involving a sensitive variable and a target variable, with their different fairness implications, is still missing, as is a discussion of how different contexts shape our evaluation of fairness. Second, the construction of such graphical models is non-trivial and error-prone. However, recent work showed that combining background knowledge with data-driven network structure learning can lead to more accurate graphs. In this work, we address these two practical shortcomings. Methods: Our first contribution is a classification and discussion of causal structures with different fairness implications, and of how contexts shape our assessment. Our second contribution is an advancement in learning more accurate graphs by adapting structure learning algorithms, together with a detailed evaluation of graph correctness and the resulting fairness implications. Results: We show that including background knowledge naturally available in fairness settings makes graph learning more accurate, which in turn supports more accurate fairness assessments.
Conclusions: Our work may pave the way for a broader adoption of causal ML fairness by providing concrete suggestions about the implications of causal structures and contexts, and by enabling the learning of more accurate graphs. We also address current limitations and highlight the need for stakeholder inclusion.


SDG 5: Gender Equality; SDG 10: Reduced Inequalities


This entry is part of the university bibliography.

The document is provided by the publication server of the University Library Mannheim.



