Sociotechnical AI Governance

Opportunities and Challenges for HCI

CHI 2025 Workshop @ Yokohama, Japan

April 27, 2025

PACIFICO Yokohama & Hybrid

See Accepted Papers

Welcome to STAIG@CHI'25!

We stand at a pivotal time in technology development and its impacts on society. Rapid advancements in and adoption of frontier AI systems have amplified the need for AI governance measures across the public sector, academia, and industry.

To this end, we welcome your participation in the first CHI workshop on Sociotechnical AI Governance, where we rally the interdisciplinary expertise of the HCI community to tackle AI governance through a sociotechnical lens.

Join us at PACIFICO Yokohama in Japan or online. See our proposal!


Call for Participation

As AI systems become increasingly powerful and pervasive, there is an urgent need to design effective AI governance measures around both technical and social factors. Sociotechnical AI governance recognizes that AI's real-world impacts are always a product of both technical capabilities and broader social factors, including stakeholders, organizational structures, power dynamics, and cultural norms. To explore this important emerging topic, we invite authors from across academia, industry, the legal domain, and the public sector to submit papers to the Sociotechnical AI Governance workshop at CHI 2025 (STAIG @ CHI '25).

This workshop aims to build community and collaboratively draft a research agenda for sociotechnical AI governance. In particular, we outline four governance challenges for authors to consider: anticipating high-priority risks to address with governance; identifying where to focus governance efforts and who should participate in and lead those efforts; designing appropriate interventions and tools to implement governance actions in practice; and evaluating the effectiveness of these interventions and tools in context.

Topics authors may choose to tackle include, but are not limited to:

  • Theoretical and empirical understanding of stakeholders' needs and goals in AI governance.
  • Novel interactive tools and interventions for collaborative governance.
  • Case studies of governance in various sociotechnical scenarios.
  • Evaluation methods for governance measures in practice.

Submission Guide

Format. Submitted papers should be up to four (4) pages in the ACM single-column format, excluding references and appendix materials. You can use ACM's LaTeX or Word templates. If you are using LaTeX, please use \documentclass[manuscript,review,anonymous]{acmart} in your preamble.
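For LaTeX users, a minimal skeleton following these instructions might look like the sketch below (the title, author, and bibliography file names are placeholders; the class options are those given above):

```latex
% Anonymous single-column submission using the ACM acmart class
\documentclass[manuscript,review,anonymous]{acmart}

\title{Your Paper Title}
\author{Anonymized for Review}
\affiliation{\institution{Anonymized}\country{}}

\begin{document}
\maketitle

\section{Introduction}
% Up to four pages, excluding references and appendix materials.

\bibliographystyle{ACM-Reference-Format}
\bibliography{references} % placeholder .bib file
\end{document}
```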

Reviewing process. Submissions will go through a double-blind peer review process and will be assessed on quality. Each paper will receive two high-quality reviews. We will advertise the accepted papers on our workshop website. We encourage submitting authors to also serve as reviewers for the workshop.

Attendance and presentation. Accepted papers are non-archival and will be presented as posters in the workshop. At least one author must register for and attend the workshop to present their paper.

Submissions are now closed.


Schedule

Event | Who | Duration | Time (JST)
Opening remarks | Workshop organizers | 15 min | 09:00 – 09:15
Opening panel + Q&A | Lama Ahmad (OpenAI), Jessica Bergs (UK AISI), Zana Buçina (Harvard), Yuri Nakao (Fujitsu), Lior Zalmanson (Tel Aviv U. / Cornell); moderated by Kevin Feng (UW) | 45 min | 09:15 – 10:00
Morning poster session | Workshop attendees | 60 min | 10:15 – 11:15
Morning ideation session | Workshop attendees + organizers | 45 min | 11:15 – 12:00
Lunch break | Everyone | 90 min | 12:00 – 13:30
Afternoon poster session | Workshop attendees | 60 min | 13:30 – 14:30
Afternoon ideation session | Workshop attendees + organizers | 45 min | 14:30 – 15:15
Cross-cutting discussion | Workshop attendees + organizers | 30 min | 15:30 – 16:00
Keynote speaker + Q&A | Roel Dobbe (TU Delft) | 45 min | 16:00 – 16:45
Closing remarks | Workshop organizers | 15 min | 16:45 – 17:00

Speakers and Panelists

Keynote: Algorithmic Harm And Safety Are Sociotechnical, But Are Our Interventions?

Roel Dobbe

TU Delft

Abstract:
In this talk, I will share insights from the Sociotechnical AI Systems Lab at Delft University of Technology. In recent years, the risks emerging from deploying AI in high-stakes or sensitive settings have motivated a flurry of measures and interventions to ensure safe and responsible use. However, most efforts have fallen broadly into techno-centric, ethics-centric, or policy-centric buckets, and as such are often siloed and lack comprehensiveness. Algorithmic harms, however, can only be understood, prevented, or addressed as dynamic or emergent phenomena, for which we must bring into one view technological tools and infrastructures, human actors at varying organizational levels, and institutional factors, whether formal rules or informal norms hidden in culture or political behavior. Given the growing spectrum of algorithmic harms experienced across sectors and spheres of life and society, we urgently need to work toward broadly shared conceptual lenses and associated methodologies for what we understand an 'AI system' and its harmful outcomes to be, and for which aspects and factors must be in view for responsible actors to understand these harms and own their responsibilities. To make this steep challenge concrete and offer viable strategies, I lean on lessons from the field of system safety, which offers comprehensive concepts and methods for the anticipation, prevention, and mitigation of harms in software-based systems. I reflect on the current state of the 'Technical AI Safety' field to inform a discussion on how sociotechnically oriented fields may work together to ensure that research, policy, and regulatory efforts for AI safety become more comprehensive and effective for those in need of protection.
Bio:
Roel Dobbe is an Assistant Professor in Technology, Policy & Management at Delft University of Technology focusing on Sociotechnical AI Systems. He received a MSc in Systems & Control from Delft (2010) and a PhD in Electrical Engineering and Computer Sciences from UC Berkeley (2018), where he received the Demetri Angelakos Memorial Achievement Award. He was an inaugural postdoc at the AI Now Institute and New York University. His research addresses the integration and implications of algorithmic technologies in societal infrastructure and democratic institutions, focusing on issues related to safety, sustainability and justice. His projects are situated in various domains, including energy systems, public administration, and healthcare. Roel’s system-theoretic lens enables addressing the sociotechnical and political nature of algorithmic and artificial intelligence systems across analysis, engineering design and governance, with an aim to empower domain experts and affected communities. His results have informed various policy initiatives, including environmental assessments in the European AI Act as well as the development of the algorithm watchdog in The Netherlands.

Panel: What is Sociotechnical About AI Governance?

Lama Ahmad

OpenAI

Jessica Bergs

UK AISI

Zana Buçina

Harvard University

Yuri Nakao

Fujitsu Research

Lior Zalmanson

Tel Aviv University / Cornell


Organizers

Kevin Feng

University of Washington

Rock Yuren Pang

University of Washington

Tzu-Sheng Kuo

Carnegie Mellon University

Amy Winecoff

Center for Democracy & Technology

Emily Tseng

Microsoft Research

David Gray Widder

Cornell Tech

Harini Suresh

Brown University

Katharina Reinecke

University of Washington

Amy X. Zhang

University of Washington