Causal graphs and fairness in machine learning: addressing practical challenges in causal fairness evaluation

Cohausz, Lea; Kappenberger, Jakob; Stuckenschmidt, Heiner

DOI: https://doi.org/10.1613/jair.1.19082
URL: https://www.jair.org/index.php/jair/article/view/1...
Additional URL: https://www.researchgate.net/publication/397043895...
URN: urn:nbn:de:bsz:180-madoc-713440
Document Type: Article
Year of Publication: 2025
Journal / Publication Series: Journal of Artificial Intelligence Research : JAIR
Volume: 84
Issue Number: Article 15
Page Range: 1-34
Place of Publication: San Francisco, Calif.
Publisher: Morgan Kaufmann Publ.
ISSN: 1076-9757, 1943-5037
Publication Language: English
Institution: School of Business Informatics and Mathematics > Practical Computer Science II: Artificial Intelligence (Stuckenschmidt 2009-)
Pre-existing License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Subject: 004 Computer science, internet
Keywords (English): Bayesian networks, causality, machine learning

Abstract:
Background: With the discussion of fairness in Machine Learning (ML) gaining traction in recent years, the idea of viewing
fairness through a causal lens has become prominent. The main idea is that by examining the causal structure underlying
the data used for an ML model, we can identify and evaluate more precisely which influences of the sensitive variables
on the target variable are problematic, and in what way. This allows not only a nuanced view of fairness and
an informed choice of fairness measures but also more targeted approaches (such as path-specific bias mitigation) to handle
fairness issues.
Objectives: Two main obstacles have hindered the practical use of the causal lens and causality-based bias mitigation.
First, a classification of the graphical structures, involving a sensitive variable and a target variable, that carry different
fairness implications is still missing, as is a discussion of how different contexts can shape our evaluation of fairness. Second,
the construction of such graphical models is nontrivial and error-prone. However, recent work has shown that combining
background knowledge with data-driven network structure learning may lead to more accurate graphs. In this work, we
address these two practical shortcomings.
Methods: Our first contribution is a classification and discussion of causal structures with different fairness implications
and how contexts shape our assessment. Our second contribution is an advancement in learning more accurate graphs by
adapting structure learning algorithms, and a detailed evaluation of graph correctness and subsequent fairness implications.
Results: We show that including background knowledge naturally available in fairness settings makes graph learning
more accurate, which in turn supports more accurate fairness assessments.
Conclusions: Our work may pave the way for broader adoption of causal ML fairness by providing concrete suggestions
about the implications of causal structures and contexts, and by learning more accurate graphs. We also address current
limitations and highlight the need for stakeholder inclusion.
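The combination of data-driven structure learning with background knowledge described in the Methods can be illustrated with a minimal sketch. This is not the authors' implementation: it is a toy greedy hill-climbing learner over three hypothetical binary variables (a sensitive attribute A, a mediator M, and a target Y), where the fairness-specific background knowledge is encoded as a blacklist forbidding edges into the sensitive attribute, since nothing in the data can cause it.

```python
import itertools
import math
import random
from collections import Counter

# Toy synthetic data with ground-truth structure A -> M -> Y.
# A is the (hypothetical) sensitive attribute; M a mediator; Y the target.
random.seed(0)

def sample():
    a = random.random() < 0.5
    m = random.random() < (0.8 if a else 0.2)
    y = random.random() < (0.9 if m else 0.1)
    return {"A": int(a), "M": int(m), "Y": int(y)}

data = [sample() for _ in range(2000)]
VARS = ["A", "M", "Y"]

def log_lik(child, parents):
    """Maximized log-likelihood of `child` given `parents` (CPT fit by counts)."""
    joint = Counter(tuple(r[p] for p in parents) + (r[child],) for r in data)
    marg = Counter(tuple(r[p] for p in parents) for r in data)
    return sum(n * math.log(n / marg[key[:-1]]) for key, n in joint.items())

def bic(edges):
    """BIC score of a DAG given as a set of (parent, child) edges."""
    score = 0.0
    for v in VARS:
        parents = sorted(p for p, c in edges if c == v)
        score += log_lik(v, parents)
        score -= 0.5 * (2 ** len(parents)) * math.log(len(data))  # binary vars
    return score

def has_path(edges, src, dst):
    """Directed reachability check, used to avoid creating cycles."""
    stack = [src]
    while stack:
        v = stack.pop()
        if v == dst:
            return True
        stack.extend(c for p, c in edges if p == v)
    return False

def learn(black_list):
    """Greedy hill climbing: add the best-scoring admissible edge until no gain."""
    edges = set()
    while True:
        best, best_gain = None, 1e-9
        for p, c in itertools.permutations(VARS, 2):
            if (p, c) in edges or (p, c) in black_list or has_path(edges, c, p):
                continue
            gain = bic(edges | {(p, c)}) - bic(edges)
            if gain > best_gain:
                best, best_gain = (p, c), gain
        if best is None:
            return edges
        edges.add(best)

# Background knowledge: forbid all edges pointing into the sensitive attribute.
g = learn({("M", "A"), ("Y", "A")})
print(sorted(g))
```

Without the blacklist, a score-based learner may orient edges into the sensitive attribute, since several DAGs can fit the data equally well; the constraint restricts the search to causally plausible graphs, mirroring the paper's point that background knowledge naturally available in fairness settings improves graph accuracy.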
This entry is part of the university bibliography.
The document is provided by the publication server of the University Library Mannheim.