Towards accountability in machine learning applications: A system-testing approach


Wan, Wayne Xinwei; Lindenthal, Thies


Full text: dp22001.pdf (PDF, 3 MB, published version)

URN: urn:nbn:de:bsz:180-madoc-620292
Document Type: Working paper
Year of publication: 2022
Journal or publication series: ZEW Discussion Papers
Volume: 22-001
Place of publication: Mannheim
Publication language: English
Institution: Other institutions > ZEW - Leibniz-Zentrum für Europäische Wirtschaftsforschung
MADOC publication series: Publications of the ZEW (Leibniz-Zentrum für Europäische Wirtschaftsforschung) > ZEW Discussion Papers
Subject: 330 Economics
Classification: JEL C52, R30
Keywords (English): machine learning, accountability gap, computer vision, real estate, urban studies
Abstract: A rapidly expanding universe of technology-focused startups is trying to change and improve the way real estate markets operate. The undisputed predictive power of machine learning (ML) models often plays a crucial role in the ‘disruption’ of traditional processes. However, an accountability gap prevails: How do the models arrive at their predictions? Do they do what we hope they do, or are corners cut? Training ML models is a software development process at heart. We suggest following a dedicated software-testing framework to verify that the ML model performs as intended. Illustratively, we augment two ML image classifiers with a system-testing procedure based on local interpretable model-agnostic explanation (LIME) techniques. Analyzing the classifications sheds light on some of the factors that determine the behavior of the systems.
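The paper itself details the testing procedure; purely to illustrate the LIME mechanics the abstract describes, a minimal sketch using the open-source `lime` Python package could look as follows. The function, the overlap criterion, and the `expected_mask` input are hypothetical illustrations, not taken from the paper.

```python
import numpy as np
from lime import lime_image


def lime_system_test(model, img, expected_mask, min_overlap=0.5):
    """Check that a classifier's evidence lies inside an expected region.

    model:         any object with .predict(batch) -> class probabilities
    img:           HxWx3 numpy array (the test image)
    expected_mask: HxW boolean array marking where evidence should plausibly be
    """
    def classifier_fn(images):
        # LIME passes a batch of perturbed copies of the image.
        return model.predict(np.asarray(images))

    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        img.astype("double"),   # image to explain
        classifier_fn,          # black-box prediction function
        top_labels=1,           # explain the top predicted class only
        hide_color=0,           # mask superpixels with black
        num_samples=1000,       # perturbed samples drawn around the image
    )

    # Superpixels that push the prediction towards the top label.
    label = explanation.top_labels[0]
    _, mask = explanation.get_image_and_mask(
        label, positive_only=True, num_features=5, hide_rest=False
    )

    # System-test criterion: enough of the influential area must overlap
    # the region where evidence is plausibly located (e.g. the building
    # facade rather than the sky or a watermark).
    overlap = (mask.astype(bool) & expected_mask).sum() / max(mask.sum(), 1)
    return overlap >= min_overlap, overlap
```

A check like this could be run over a labeled hold-out set, flagging any image that fails the overlap criterion for manual review; the 50% threshold here is a placeholder, not the paper's actual acceptance criterion.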




This document is made available by the publication server of Mannheim University Library.








