Publication

Titel:
Artificial Intelligence Explainability Requirements of the AI Act and Metrics for Measuring Compliance
Authors:
Fabian Walke
Lars Bennek
Till Winkler
Category:
Conference papers
Published in:
18. Internationale Tagung Wirtschaftsinformatik (WI) 2023
Abstract:

Explainability in artificial intelligence (AI) is crucial for ensuring transparency, accountability, and risk mitigation, thereby addressing digital responsibility as well as the social, ethical, and ecological aspects of information system usage. AI will be regulated in the European Union (EU) through the AI Act, a regulation that introduces requirements for explainable AI (XAI). This paper examines which requirements for XAI are regulated and which metrics could be used to measure compliance. For this purpose, legal texts from the European Parliament and Council were analyzed to ascertain XAI requirements. Additionally, XAI taxonomies and metrics were collected. The results reveal that the AI Act provides abstract regulations for explainability, making it challenging to define specific metrics for achieving explainability. As a solution, we propose a socio-technical metric classification for measuring compliance. Further studies should analyze forthcoming explainability requirements to make AI verifiable and to minimize the risks arising from AI.
