Fundamental Rights Impact Assessments under the AI Act

Recording from AI Lund lunch seminar, 9 October 2024

Topic: Fundamental Rights Impact Assessments under the AI Act

When: 9 October 2024 at 12.00-13.15

Where: Online

Speaker: Eduardo Gill-Pedro, Associate senior lecturer, Department of Law, Lund University

Spoken language: English

Abstract

One of the most contested and debated provisions in the EU’s AI Act concerns the obligation to conduct a Fundamental Rights Impact Assessment (FRIA). Its absence from the Commission’s original proposal was seen as highly disappointing by civil society organisations and human rights advocates. The European Parliament insisted on including it in its proposed amendments, but the EP’s proposal was watered down in the final version of the Act, following pushback from member states and industry.
In this talk I will examine the FRIA process, as set out in the AI Act, and map out the obligations the Act imposes on deployers of AI systems, the powers it grants regulatory bodies, and the rights (if any) it grants persons affected by AI systems. I will show that the ‘express’ FRIA obligations in Article 27 of the Act are only one aspect of a broader range of obligations which the Act imposes on all operators – both deployers and providers – to assess the potential impact of high-risk AI systems on fundamental rights.

Beyond examining the legal requirements in the Act, I will consider whether FRIAs can be something more than box-ticking exercises that allow companies and regulators to pay lip service to fundamental rights – whether they can instead impose real obligations to take fundamental rights seriously.