Fika-to-fika workshop: Human-friendly AI
AI Lund fika-to-fika workshop about the development and dissemination of human-friendly AI, with leading experts and exciting insights into the interplay between emerging technologies, ethics and society. Invited speakers are Anja Bechmann, Aarhus University and Giovanni Leoni, Inter IKEA.
Where: Eden Hörsal, Paradisgatan 5H, Lund, Sweden
Spoken language: English
Host and moderator: Anamaria Dutceac Segesten, Docent, Strategic Communication, Lund University
Registration: Participation is free of charge, but you need to sign up at ai.lu.se.
9.30 Registration, fika & mingle
10.15 Session 1: (Why) do we need human-friendly AI?
Invited talk: Collective behavior or AI: which is human friendly or unfriendly?
Anja Bechmann, Professor of Media Studies and director of the interdisciplinary research center DATALAB - Center for Digital Social Research at Aarhus University in Denmark.
Whose responsibility is it to make AI human-friendly?
Erik Persson, Researcher, Practical Philosophy, Lund University
"If the AI is friendly, can I do something else?"
Ericka Jonsson, Professor, Gender Studies, Linköping University
Ingar Brinck, Professor, Theoretical Philosophy, Lund University
12.00 Lunch and mingle
13.15 Session 2: Issues and challenges in the development and implementation of human-friendly AI
Invited talk: A human-centric approach to AI in a global enterprise leads to sustainable business models
Giovanni Leoni, Global Head of Algorithm and AI Ethics at Inter IKEA
Trustworthy companies for friendly AI?
Eduardo Gill-Pedro, Associate senior lecturer, Department of Law, Lund University
Assessing mental health with AI-based language analysis
Sverker Sikström, Professor, Department of Psychology, Lund University
15.00 Fika & mingle
15.30 End of workshop
While the development and proliferation of Artificial Intelligence (AI) technologies across all spheres of society has been marked by great optimism, there is growing concern among scholars, political actors, and the public that AI can disrupt or even harm the rights of individuals and our democratic societal structures. For example, several research projects have shown that AI-based applications may reinforce gender and racial bias, political actors have (mis)used AI technologies to micro-target and influence voters in elections, and machine-learning algorithms have been accused of facilitating the creation and dissemination of disinformation. Similar concerns have been raised about the societal disruptions and job displacement caused by AI-enabled forms of automation and robotics.
These concerns have informed a growing scholarly interest in so-called human-friendly AI technologies. This interest is evidenced, for example, by a forthcoming EU HORIZON call on ‘human-friendly deployment of artificial intelligence and related technologies’, which aims to promote research both on the societal risks associated with AI technologies and on the development of AI systems that are trustworthy and protect the rights of individuals.
But the growing interest in human-friendly AI technologies also raises new questions. Why do we need human-friendly AI in the first place? Can a robot ever be “friendly”? And what challenges and societal risks does the development and dissemination of such technologies give rise to? On 1 March, AI Lund invites you to a workshop that brings together researchers working on these important questions. The workshop will address the dual questions of why we need human-friendly AI and of the issues and challenges related to the development and dissemination of these technologies.
Anja Bechmann (she/her) is Professor of Media Studies and director of the interdisciplinary research center DATALAB - Center for Digital Social Research at Aarhus University in Denmark. Her research examines digital and social media communication and collective behavior using large-scale data collection and applied machine learning. She is Chief Investigator (CI) of the EU project NORDIS, an executive board member of the European Digital Media Observatory (EDMO), and an appointed member of the Academy of Technical Sciences. She has been a member of the EU Commission High-Level Expert Group on Disinformation and is continuously invited as an independent academic to provide research-based talks, white papers, or comments on drafts for bodies such as the European Parliament, the Ministry of Defense, and the Ministry of Culture. In 2019 she was appointed Thinker in Residence by the Royal Flemish Academy of Belgium for Science and the Arts. Her research has been funded by national and international research councils, including the Danish Council for Independent Research, the Swedish Research Council, the Danish Agency for Science and Innovation, Horizon 2020, EU CEF, and the Aarhus University Research Foundation.
Giovanni Leoni is the Global Head of Algorithm and AI Ethics at Inter IKEA Group, responsible for developing, implementing, and leading the operationalization of responsible analytics across the Inter IKEA Group of companies. This covers all algorithmic processes, including administrative processes, operational processes, analytics, and AI. He is an active contributor to the global AI ethics community, engaging in legislative industry consultations and various forums and bodies, including serving on the Board of Directors of the Ethical Artificial Intelligence Governance Group. For more than 20 years he has been active across multiple sectors, private and public, driving the development of user experience, sustainable change management, and clarity about the factors that shape better business, grounding everything in numbers and decision models. He has worked in areas as diverse as public-sector processes, sales, procurement of services, digital and physical products, supply chain, automation, and business analytics. In his current assignment it all comes together: partnering to create better business through the use of data and values-led decision-making.
Malin Larsson works with strategic and sustainable AI management and execution at AI Sweden, Sweden's national centre for applied AI. AI Sweden's mission is to strengthen and accelerate the use and application of AI in Sweden, and it works nationally for a strategic and human-friendly implementation and use of AI, where the interplay between emerging technologies, ethics, and society builds competitiveness, growth, and a better society for every organisation in every sector and industry, both private and public. She heads AI Sweden's establishment and operations in the south region, where she set up the south node and serves as the interface to the AI ecosystem in southern Sweden. Among her many responsibilities, she drives, together with Lund University and the cities of Lund, Malmö, and Helsingborg, one of fifteen pilot projects globally that evaluate, on ongoing projects, the policy guidance on AI for children developed by UNICEF.
Participation is free of charge, but you need to sign up at: https://www.ai.lu.se/2023-03-01/registration