
European AI Act: Transparency Before, Inside, and After the AI Black Box

Third annual symposium on regulation of AI from a European perspective

Illustration: EU AI Act Transparency

Following the successful workshops on Regulating High Risk AI in the EU (2023) and Compliance and Enforcement (2024), AI Lund's fika-to-fika workshop* returns to the AI Act, this time focusing on legal aspects of "transparency".

The EU AI Act (Regulation (EU) 2024/1689) marks a groundbreaking shift in AI regulation, laying the foundation for a potential global standard in AI governance, risk management, and transparency. A key pillar of the Act is data transparency, i.e., the ability to understand and audit training data, algorithmic processes and AI output. 

For example, the Act establishes the following obligations:

  • During AI training (machine learning): Ensuring high-quality, unbiased, and representative training datasets.
  • Inside the black box: Documenting and monitoring requirements related to algorithmic transparency, ensuring interpretability and traceability.
  • AI output: Enabling human oversight, contestability, and post-market monitoring to mitigate risks and ensure accountability.

The AI Act also introduces transparency requirements to ensure that users are informed when interacting with an AI system. For example, AI systems generating synthetic content (such as deepfakes) must clearly label their outputs as artificially generated. Furthermore, providers of general-purpose AI models must meet specific disclosure obligations, including transparency regarding copyrighted content used for training models.

At the same time, questions remain about how the transparency obligations in the AI Act interact with other legal frameworks, including:

  • Fundamental rights under the EU Charter
  • Copyright protection for content used as training data in AI development
  • Trade secret protection for training data and AI system operations
  • Obligations under the General Data Protection Regulation (GDPR) regarding personal data protection
  • Competition law restrictions on data access and potential abuse

Against this backdrop, this workshop—featuring leading experts, policymakers, and industry professionals—will explore how these requirements translate into real-world AI compliance strategies and what challenges remain in opening the black box while preserving innovation and competitiveness.

When & where: 1 October 2025. 

  • in Stadshallen (Lund City Hall), Lund, Sweden: 9.00 to 15.30 CET
  • online: 9.25 to 12.00 and 13.00 to 15.15 CET

Spoken language: English

Programme

09.25 Online meeting opens

Online host: Ellinor Blom Lussi, AI and Society, Lund University

09.30 Session 1: Overview and Transparency in Data Input

Moderator: Behrang Kianzad, Institute for Global Political Studies, Malmö University

Initial overview of Transparency in AI Systems
Stefan Larsson, Technology and Society, Lund University

Keynote: Mapping competition concerns along the generative AI value chain
Kalpana Tyagi, EU Intellectual Property and Competition Law, Maastricht University

AI, platforms and transparency
Sebastian Schwemer, BI Norwegian Business School

Panel


10.50 Session 2: Transparency in AI Systems

Session moderator: Johan Axhamn, Business Law, Lund University

Transparency in the GDPR and the AI Act – right to explanation of individual decision-making
Jonas Ledendal, Business Law, Lund University, and Hajo Michael Holtz, Department of Law, Uppsala University

Transparency as Fairness and Fairness as Transparency
Behrang Kianzad, Institute for Global Political Studies, Malmö University

When is an AI system transparent? Legal and non-legal meanings of AI transparency and their relation to Trustworthy AI
Kasia Söderlund, Technology and Society, Lund University

Panel


12.00 Lunch break

13.05 Online meeting opens

Online host: Ellinor Blom Lussi, AI and Society, Lund University


13.10 Session 3: Transparency in AI Output

Moderator: Jonas Ledendal, Business Law, Lund University

Transparent but incomprehensible
Jacob Dexe, IAB

Why transparency, and is it enough? IP rights as the elephant in the (black) box
Ana Nordberg, Law, Lund University

The Logics of Transparency in the AI Act
Ida Koivisto, Law, University of Helsinki

Panel


14.50 Outro

Closing keynote: Challenging the algorithmic Leviathan with communicating vessels of transparency
Katja de Vries, Law, Uppsala University

Final words
Stefan Larsson, Technology and Society, Lund University

Organisation

  • Johan Axhamn, Senior lecturer, Department of Business Law, Lund University
  • Behrang Kianzad, Institute for Global Political Studies, Malmö University
  • Eduardo Gill-Pedro, Department of Law, Lund University
  • Jonas Ledendal, Senior lecturer, Department of Business Law, Lund University
  • Stefan Larsson, Senior lecturer, Department of Technology and Society, Lund University
  • Jonas Wisbrant, AI Lund (jonas.wisbrant@control.lth.se, subject: AI_in_EU_LAW_26_sept)