Student success and drop-out prediction has gained increased attention in recent years, driven by the hope that identifying struggling students makes it possible to intervene early, provide targeted help, and design programs based on the patterns the models discover. Although many models now achieve remarkable accuracy, models that output simple probabilities are not enough to reach these ambitious goals. In this paper, we argue that such models can serve as the first, exploratory step of a pipeline aimed at reaching them. Using Explainable Artificial Intelligence (XAI) methods such as SHAP and LIME, we can understand which features matter to the model and work under the assumption that features important to successful models are also important in real life. By additionally connecting this with an analysis of counterfactuals and a theory-driven causal analysis, we can begin to understand not just whether a student will struggle but why, and thus provide fitting help. We evaluate the pipeline on an artificial dataset to show that it can indeed recover complex causal mechanisms, and on a real-life dataset to demonstrate the method's applicability. We further argue that collaborations with social scientists are mutually beneficial in this area, but also discuss the potential negative effects of personal intervention systems and call for careful design.
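The following is a minimal sketch of the exploratory first step the abstract describes: fitting a success classifier and inspecting its feature importances with SHAP, followed by a crude counterfactual probe. The feature names, the synthetic data-generating process, and the model choice are illustrative assumptions, not the authors' actual setup.

```python
# Sketch only: synthetic data and a hypothetical feature set, assumed for illustration.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical student features with a simple assumed causal structure:
# prior grades and attendance drive success; working hours reduce attendance.
working_hours = rng.uniform(0, 30, n)
attendance = np.clip(rng.normal(0.9, 0.1, n) - 0.01 * working_hours, 0, 1)
prior_grade = rng.normal(2.5, 0.7, n)  # German grading scale: lower is better
logit = 2.0 - 1.5 * prior_grade + 3.0 * attendance
success = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = pd.DataFrame({
    "working_hours": working_hours,
    "attendance": attendance,
    "prior_grade": prior_grade,
})
X_train, X_test, y_train, y_test = train_test_split(X, success, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# SHAP values quantify each feature's contribution to individual predictions;
# averaging their magnitudes gives the global importances that serve as the
# exploratory starting point of the pipeline.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(importance.sort_values(ascending=False))

# A crude counterfactual probe (a stand-in for a dedicated counterfactual
# method): how does one student's predicted success probability change if
# attendance were raised while everything else stays fixed?
student = X_test.iloc[[0]].copy()
print("original p(success):      ", model.predict_proba(student)[0, 1])
student["attendance"] = min(1.0, student["attendance"].iloc[0] + 0.2)
print("with +0.2 attendance:     ", model.predict_proba(student)[0, 1])
```

Such importances and counterfactual probes only describe the model; as the abstract stresses, connecting them to a theory-driven causal analysis is what allows cautious statements about why a student struggles.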
Translated title:
Wenn Wahrscheinlichkeiten nicht Reichen – Eine Methode zur Kausalen Erklärung von Studierenden-Erfolgsmodellen
(German)