Human oversight done right: the AI Act should use humans to monitor AI only when effective


Walter, Johannes


PDF: pb02-23.pdf (607 kB) - Published

URN: urn:nbn:de:bsz:180-madoc-648885
Document Type: Working paper
Year of publication: 2023
Journal or publication series: ZEW policy brief
Volume: 2023-02
Place of publication: Mannheim
Publication language: English
Institution: Other institutions > ZEW - Leibniz-Zentrum für Europäische Wirtschaftsforschung
MADOC publication series: Publications of ZEW (Leibniz-Zentrum für Europäische Wirtschaftsforschung) > ZEW policy brief
Subject: 330 Economics
Abstract: The EU’s proposed Artificial Intelligence Act (AI Act) is meant to ensure safe AI systems in high-risk applications. The Act relies on human supervision of machine-learning algorithms, yet mounting evidence indicates that such oversight is not always reliable. In many cases, humans cannot accurately assess the quality of algorithmic recommendations, and thus fail to prevent harmful behaviour. This policy brief proposes three ways to solve the problem: First, Article 14 of the AI Act should be revised to acknowledge that humans often have difficulty assessing recommendations made by algorithms. Second, the suitability of human oversight for preventing harmful outcomes should be empirically tested for every high-risk application under consideration. Third, following Biermann et al. (2022), human decision-makers should receive feedback on past decisions to enable learning and improve future decisions.




This document is provided by the publication server of the University Library of Mannheim.



