
Distribution of Responsibility for Artificial Intelligence Development

AI Lund 2019 revisited 17 June 2020

The Corona crisis prevents the physical meetings and discussions that have so far been a central part of AI Lund's outreach activities. To continue the dialogue and joint learning, we are now trying a series of online lunch seminars. Together we watch one or more presentations from previous AI Lund events for 20-30 minutes. Then an expert, often the original speaker, is available to answer questions for 10-15 minutes. You can also ask questions in the text chat during the video. The presentations may be in Swedish or English, and you can ask questions in either language.

On 17 June we will watch the presentation that Maria Hedlund and Erik Persson gave at the AI Lund Fika-to-fika workshop "AI, Social Science and Research in EU" on 16 October 2019.

Title: Distribution of Responsibility for Artificial Intelligence Development

Speakers: Maria Hedlund, Department of Political Science, Lund University, and Erik Persson, Practical Philosophy, Lund University

Abstract: Autonomous systems can make their own decisions but cannot take responsibility, and thus cannot be held accountable, for those decisions. It is, however, also crucial to discuss responsibility in a more forward-looking sense, and to reflect on how our decisions today steer development in a direction that may be difficult to change later. In this project we study who should do what, and when, to make sure that things do not go wrong (forward-looking responsibility), and we consider how different distributions of forward-looking responsibility today (between law-makers, industry, developers, consumers, etc.) will affect the long-term development of AI with respect to democracy, AI development, and AI safety.