Explainable AI for EU AI Act compliance audits

Introduction

Protiviti Netherlands, in collaboration with IIA Nederland, recently published a scientific article titled “Explainable AI for EU AI Act Compliance Audits,” which was featured in a special series of the MAB journal (Maandblad voor Accountancy en Bedrijfseconomie). The research was developed by Protiviti Netherlands team members Vincent Damen, Menno Wiersma, Gokce Aydin, and Rens van Haasteren.

This publication explores how Explainable AI (XAI) can serve as a practical tool for conducting compliance audits on “black-box” AI systems, and highlights its growing relevance for the internal audit function. High-risk applications—such as credit risk scoring—underscore the need for transparent, accountable, and human-supervised AI systems. Under the EU Artificial Intelligence Act, such systems are subject to strict requirements, and explainability will play a central role in ensuring compliance.

The article addresses a key question: Can an explainability layer help AI deployers meet the EU AI Act’s transparency and oversight requirements, and how can internal auditors use it to verify compliance?
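As a rough illustration of what such an explainability layer can look like in practice (a minimal sketch, not drawn from the article itself), the snippet below fits a gradient-boosted credit risk model on hypothetical synthetic data and attaches per-applicant feature attributions via the open-source SHAP library; the data and all variable names are assumptions for illustration only.

import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical synthetic credit data: four applicant features,
# e.g. income, debt ratio, age, missed payments (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 1] - X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# A typical "black-box" credit risk scorer.
model = GradientBoostingClassifier().fit(X, y)

# Explainability layer: per-applicant SHAP attributions (in log-odds space).
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X[:5])
print(attributions)  # one contribution per feature per applicant

An auditor could treat such attributions as evidence that individual decisions are traceable to input features, one building block of the transparency and human-oversight requirements the article discusses.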

While XAI offers promising support, its effectiveness depends on the clarity, reliability, and actionability of the explanations it provides. The article outlines how internal auditors can critically assess these elements and apply XAI within audit frameworks.
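One concrete way an auditor might probe the reliability of such explanations (again a minimal sketch under the same hypothetical setup as above, not the article's own method) is a stability check: attributions for a single applicant should not swing wildly under a negligible perturbation of the input.

import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Same hypothetical model and data as in the previous sketch.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 1] - X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)
model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)

# Perturb one applicant's features slightly and compare attributions.
applicant = X[:1]
perturbed = applicant + rng.normal(scale=0.01, size=applicant.shape)
drift = np.abs(
    explainer.shap_values(applicant) - explainer.shap_values(perturbed)
).max()
print(f"max attribution drift under small noise: {drift:.4f}")

Large drift under tiny input changes would flag the explanation as too unstable to support meaningful human oversight.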

Building on earlier work published in MAB (Sandu et al., 2022), this article examines the EU AI Act’s structure, the limitations of XAI, and its practical application through a credit risk example. It concludes with a summary of key insights for audit professionals navigating the evolving AI regulatory landscape.


You can learn more about the article at our webinar.
