Nov
AI Lund lunch seminar: The Agreeable AI – How LLMs Personalise Political Bias Through Moral Alignment
Topic: The Agreeable AI: How LLMs Personalize Political Bias Through Moral Alignment
When: 26 November 12.00 to 13.00
Where: Online - link by registration
Speaker: Minahil Malik, PhD student at the Department of Political Science of Lund University, Sweden
Moderator: Bibi Imre-Millei, Political Science, Lund university
Spoken language: English
Abstract
The rapid growth of AI use has transformed how citizens acquire political information in democratic societies. This shift raises significant concerns about political bias and its contribution to societal polarisation. This study examines sycophancy in large language models (LLMs), defined as excessive agreement and flattery toward users, and its manifestation in political contexts. Unlike traditional algorithmic bias, sycophancy uses LLMs' personalisation capabilities to tailor responses to individual characteristics, potentially making biased information more persuasive.
Drawing on moral foundations theory, which identifies moral underpinnings of political beliefs, this research examines whether chatbots exhibit political bias aligned with users’ expressed moral convictions. We hypothesize that LLMs demonstrate rightward bias when presented with conservative-associated moral foundations and leftward bias when encountering liberal-associated foundations.
Employing a novel methodology combining prompt engineering with probabilistic analysis, we generated a dataset of political questions and analysed GPT-4’s responses. By comparing response probabilities with and without identity-specific information on moral foundations, we quantified sycophantic tendencies against a baseline condition. Our findings provide compelling evidence that LLMs exhibit sycophantic behaviour, adapting their political outputs based on users’ moral belief systems rather than maintaining consistent, unbiased responses across interactions.
Bios
Minahil Malik is a doctoral student in Political Science at Lund University. Her work sits at the intersection of computer science and political psychology, focusing on how AI can facilitate the spread of disinformation, polarisation, and the resulting challenges for democracy. Through computational experiments combining natural language processing, machine learning, and insights from political psychology, she investigates AI deception, particularly "sycophancy" (the tendency for AI models to mirror their users), and examines how this behavior affects moral framing, persuasion, and democratic deliberation.
Hanna Bäck is a Professor of Political Science at Lund University a collaborator in this project . Her research focuses on political parties and political behaviour, focusing on topics such as elite communication and affective polarisation. She is a member of Kungliga Vitterhetsakademien, and she is assistant coordinator of Lund University’s Natural and Artificial Cognition profile area. One of her current research projects focuses on applying generative LLMs to legislative debates to measure moralisation in parliamentary speech.
Registration
To participate is free of charge. Sign up at ai.lu.se/2025-11-26b/registration and we send you an access link to the zoom platform.
About the event
Location:
Online - link by registration
Contact:
Jonas [dot] Wisbrant [at] control [dot] lth [dot] se