What language models do and don’t do in studies of political behaviour

Lunch seminar 10 September 2025

 

Topic: What language models do and don’t do in studies of political behaviour

When: 10 September at 12.00-13.00

Where: Online

Speakers:

  • Annika Fredén, Associate Professor of Political Science, Lund University
  • Denitsa Saynova, PhD candidate in Computer Science, Chalmers University of Technology

Moderator: Bibi Imre-Millei, Political Science, Lund University

Spoken language: English

Abstract

A recent trend in social science is to use language models (LMs) such as ChatGPT to mimic human behavior. As the use of such tools accelerates, it is important to study their foundations in order to assess their usefulness and limitations when interpreting responses and output on politically oriented questions. We study how well word embeddings and LMs detect differences between political parties, including subtleties and jargon, and investigate whether LMs can replicate results from social science experiments with human subjects. Drawing on our recent research, we show that natural language processing of political party materials benefits from pre-training on large, general data rather than specialized data, and that LMs may indicate which social science experiments are robust to replication. We suggest a potential distinction between linguistic differences on the one hand and oppositional differences on the other when interpreting the performance of language models and their relevance for the social sciences.