AI, Social Science and Research in EU
Fika-to-fika-workshop in Lund
A full day of presentations highlighting the link between Artificial Intelligence and Social Science research. In the morning, we will showcase several research projects driven by Lund University's own scholars in the field of AI and Social Science. In the afternoon, we will focus on the funding opportunities (and the thinking behind them) available to researchers interested in applying for grants at the European level, alongside examples of projects that are more local in nature. While the morning session is open to the interested public, the afternoon session is targeted at LU researchers.
When: 16 October, 09:30–16:00
Where: Morning in MA4, Sölvegatan 20; afternoon in Room L201, SOL Centre, Helgonabacken 12, Lund
AM Session 09:30–12:00: Artificial Intelligence projects at Lund University: the view from Social Science and Humanities (MA4)
PM Session 13:15–16:00: Sources of financing multidisciplinary AI projects: European Union, national and local opportunities (SOL: L201)
09:30 Fika & mingle in Annexet
10:00 Artificial Intelligence projects at Lund University: the view from Social Science and Humanities (MA4)
Using AI/ML to analyse social media data
Speaker: Anamaria Dutceac Segesten, European Studies, Lund University
Abstract: In this presentation, I will introduce several instances where, in my own research on social media and politics, I have used machine learning techniques to make sense of a large data collection. I will cover, among other things, social network analysis, topic modelling and formality scores, their uses, and the problems a self-trained social scientist faces in dealing with such approaches. The intention is to highlight the possibilities data science offers to those concerned with understanding the social world.
Distribution of Responsibility for Artificial Intelligence Development
Speakers: Maria Hedlund, Department of Political Science, Lund University and Erik Persson, Practical Philosophy, Lund University
Abstract: Autonomous systems can make their own decisions but cannot take responsibility, and thus cannot be held accountable, for those decisions. It is therefore crucial to discuss responsibility in a more forward-looking sense, and to reflect on how our decisions today steer development in a direction that may be difficult to change later. In this project, we study the question of who should do what, and when, to make sure that things do not go wrong (forward-looking responsibility), and consider how different distributions of forward-looking responsibility today (between law-makers, industry, developers, consumers, etc.) will affect the long-term development of AI as it concerns democracy, AI development, and AI safety.
Does Google Read Our Minds?
Speaker: Erik J. Olsson, Lund University
Abstract: The process whereby search engines tailor their search results to individual users, so-called personalization, is believed to lead to filter bubbles, in which we eventually receive only the search results that fit our own ideology and prior beliefs. Since filter bubbles are assumed to be detrimental to society, there have been calls for legal regulation of search engines to block them. However, the scientific evidence for the filter bubble hypothesis is surprisingly limited. Previous studies of personalization have focused on the extent to which different users get different search results lists in Google, without taking the content of the web pages in those lists into account. By contrast, our methodology takes content differences between web pages into account. In particular, the method involves studying the extent to which users with strong opposing views on an issue receive search results that are correlated content-wise with their personal view. I illustrate our methodology at work, but also the challenges it faces, by applying it to the issue of man-made climate change. Finally, I raise the question of what, philosophically speaking, is generally wrong with filter bubbles, if they exist now or in the future.
12:00 Lunch & mingle (location to be decided)
13:15 Afternoon session: Sources of financing multidisciplinary AI projects: European Union, national and local opportunities (Room L201, SOL)
Trustworthy Human-Centric AI
Speaker: Fredrik Heintz, Linköping University, Member of the EU High Level Expert Group on AI
AI and methods: initiatives at the Lund University Faculty of Social Sciences
Speaker: Christopher Swader, Graduate School of the Faculty of Social Science, Lund University
Multidisciplinary AI research in the European Framework Programme
Speakers: John Phillips and Anna-Karin Wihlborg, Research Services, Lund University
Abstract: Here, we present an overview of the current European Framework Programme, Horizon 2020, as well as its successor, Horizon Europe. We will place special emphasis on schemes and instruments relevant to multidisciplinary AI research, including ‘top-down’ funding from the Societal Challenges and ‘bottom-up’ schemes such as the European Research Council and the Marie Skłodowska-Curie Actions. Additionally, we will detail what support is available at LU for researchers interested in applying for EU funding.
15:00 Discussions in plenum and in groups
16:00 End of workshop
anamaria [dot] dutceac_segesten [at] eu [dot] lu [dot] se
jonas [dot] wisbrant [at] cs [dot] lth [dot] se