PDF pb02-23.pdf
The EU’s proposed Artificial Intelligence Act (AI Act) is meant to ensure safe AI systems in high-risk applications. The Act relies on human supervision of machine-learning algorithms, yet mounting evidence indicates that such oversight is not always reliable. In many cases, humans cannot accurately assess the quality of algorithmic recommendations, and thus fail to prevent harmful behaviour. This policy brief proposes three ways to solve the problem: First, Article 14 of the AI Act should be revised to acknowledge that humans often have difficulty assessing recommendations made by algorithms. Second, the suitability of human oversight for preventing harmful outcomes should be empirically tested for every high-risk application under consideration. Third, following Biermann et al. (2022), human decision-makers should receive feedback on past decisions to enable learning and improve future decisions.
This document is provided by the publication server of the Mannheim University Library.