AI Lund Lunch Seminar: High-risk AI transparency? Exploring the role of information disclosure for oversight bodies under the EU AI Act
Topic: High-risk AI transparency? Exploring the role of information disclosure for oversight bodies under the EU AI Act
When: 19 June at 12.00-13.15
Where: Online - link by registration
Speaker: Kasia Söderlund, PhD student, Department of Technology and Society, Lund University
Moderator: Stefan Larsson, Department of Technology and Society, Lund University
Spoken language: English
Abstract
With the adoption of the AI Act, a new legal framework setting out governance rules for AI technologies has been introduced. While the main responsibility for compliance with the AI Act rests with the providers of AI systems, oversight bodies are tasked with monitoring that the new rules are duly adhered to. This presentation will focus on the question of how the transparency requirements for high-risk AI systems may serve as an effective tool in the enforcement of the AI Act. Drawing on the AI transparency literature, the study points to the challenges of utilising transparency for oversight purposes. I argue that information disclosure alone is not sufficient: in order to safeguard the development of human-centred and trustworthy AI in the Union, oversight bodies will need to step up their efforts and resources to implement the new legal framework.
Related: European AI Act – Compliance and Enforcement, a fika-to-fika workshop on 19 September in Lund and online.
Registration
Participation is free of charge. Sign up at ai.lu.se/2024-06-19/registration and you will receive an access link to the Zoom platform.
About the event
Location:
Online - link by registration
Contact:
ellinor [dot] blom_lussi [at] lth [dot] lu [dot] se