Who decides what's trustworthy? Standards development for the EU AI Act

Lunch seminar recorded 5 March 2025

Resources and references

  • Corporate Europe Observatory (2025). Bias baked in: How Big Tech sets its own AI standards. https://corporateeurope.org/en/2025/01/bias-baked.
  • Edwards, Lilian (2022). Expert explainer: The EU AI Act proposal. Ada Lovelace Institute. https://www.adalovelaceinstitute.org/resource/eu-ai-act-explainer/.
  • Russell, Andrew L. (2014). Open Standards and the Digital Age: History, Ideology, and Networks. New York: Cambridge University Press.
  • Timmermans, Stefan, & Berg, Marc (2003). The Gold Standard: The Challenge of Evidence-Based Medicine and Standardization in Health Care. Philadelphia: Temple University Press.
  • Veale, Michael, & Borgesius, Frederik Zuiderveen (2021). Demystifying the Draft EU Artificial Intelligence Act: Analysing the good, the bad, and the unclear elements of the proposed approach. Computer Law Review International, 22(4), 97–112.
  • Yates, JoAnne, & Murphy, Craig (2019). Engineering Rules: Global Standard Setting Since 1880. Baltimore: Johns Hopkins University Press.

When: 5 March at 12.00-13.00

Where: Online

Speakers: James White and Stefan Larsson

Spoken language: English

Abstract

In a report published early this year, Corporate Europe Observatory claimed that global tech companies (such as Microsoft, Amazon and Google) are actively working to undermine the AI Act by creating weak and permissive standards. How is this possible? What is the relationship between these standards and the AI Act? And how did global tech companies come to play such a central role in their development? In this seminar, we address these questions by introducing CEN/CLC/JTC 21, its mandate from the European Commission, and the opportunities for politicisation in its standards development. We hope that this seminar will prompt university-based AI researchers to engage more meaningfully with the standards community.

James White is a sociologist and a member of the AI technical committee of the Swedish Institute for Standards (SIS/TK 421), with more than ten years' experience researching standards.

Stefan Larsson is a lawyer and socio-legal researcher who focuses on issues of trust and transparency and the socio-legal impact of autonomous and AI-driven technologies.