April 26 – May 1, 2025

Microsoft at CHI 2025


Location: Yokohama, Japan

All times are in JST (UTC+9).


Saturday, April 26, 2025

  • 09:00 – 10:30 Workshop G221

    WS17: Human-Centered Evaluation and Auditing of Language Models

    11:10 – 12:40 | 14:10 – 15:40 | 16:20 – 17:50

    Yu Lu Liu, Wesley Hanwen Deng, Michelle S. Lam, Motahhare Eslami, Juho Kim, Q. Vera Liao, Wei Xu, Jekaterina Novikova, Ziang Xiao

    The recent advancements in Large Language Models (LLMs) have significantly impacted numerous, and will impact more, real-world applications. However, these models also pose significant risks to individuals and society. To mitigate these issues and guide future model development, responsible evaluation and auditing of LLMs are essential. This workshop aims to address the current "evaluation crisis" in LLM research and practice by bringing together HCI and AI researchers and practitioners to rethink LLM evaluation and auditing from a human-centered perspective. The workshop will explore topics around understanding stakeholders’ needs and goals with evaluation and auditing LLMs, establishing human-centered evaluation and auditing methods, developing tools and resources to support these methods, building community and fostering collaboration. By soliciting papers, organizing invited keynote and panel, and facilitating group discussions, this workshop aims to develop a future research agenda for addressing the challenges in LLM evaluation and auditing. Following a successful first iteration of this workshop at CHI 2024, we introduce the theme of "mind the context" for this second iteration, where participants will be encouraged to tackle the challenges and nuances of LLM evaluation and auditing in specific contexts.

  • 09:00 – 10:30 Workshop G302

    WS33: Tools for Thought: Research and Design for Understanding, Protecting, and Augmenting Human Cognition with Generative AI

    11:10 – 12:40 | 14:10 – 15:40 | 16:20 – 17:50

    Lev Tankelevitch, Elena L. Glassman, Jessica He, Majeed Kazemitabaar, Aniket Kittur, Mina Lee, Srishti Palani, Advait Sarkar, Gonzalo Ramos, Yvonne Rogers, Hariharan Subramonyam

    We invite researchers, designers, practitioners, and provocateurs to explore what it means to understand and shape the impact of Generative AI (GenAI) on human cognition. GenAI radically widens the scope and capability of automation for work, learning, and creativity. While impactful, it also changes workflows and the quality of thinking involved, raising questions about its effects on cognition, including critical thinking and learning. Yet, GenAI also offers opportunities for designing tools for thought that protect and augment cognition. Such systems provoke critical thinking, provide personalized tutoring, or enable novel ways of sensemaking, among other approaches. How does GenAI change workflows and human cognition? What are opportunities and challenges for designing GenAI systems that protect and augment human cognition? Which theories, perspectives, and methods are relevant? This workshop aims to develop a multidisciplinary community interested in exploring these questions to protect against the erosion, and fuel the augmentation, of human cognition using GenAI.

Sunday, April 27, 2025

  • 09:00 – 10:30 Workshop G416

    WS35: Sociotechnical AI Governance: Challenges and Opportunities for HCI

    11:10 – 12:40 | 14:10 – 15:40 | 16:20 – 17:50

    K. J. Kevin Feng, Rock Yuren Pang, Tzu-Sheng Kuo, Amy Winecoff, Emily Tseng, David Gray Widder, Harini Suresh, Katharina Reinecke, Amy X. Zhang

    Rapid advancements in and adoption of frontier AI systems have amplified the need for AI governance measures across the public sector, academia, and industry. Prior work in technical AI governance has proposed agendas for governing technical components in AI development, such as data, models, and compute. However, recent calls for more sociotechnical approaches recognize the critical role of social infrastructures surrounding technical ones in shaping governance decisions and efforts. While scholars and practitioners have advocated for sociotechnical AI governance, concrete research directions in this area are only beginning to emerge. This workshop aims to gather the expertise of researchers in HCI and adjacent disciplines to chart promising paths forward for sociotechnical AI governance. To make problems in this area more tangible, we outline four core governance challenges for contributions: anticipating high-priority risks to address with governance, identifying where to focus governance efforts and who should lead those efforts, designing appropriate interventions and tools to implement governance actions in practice, and evaluating the effectiveness of these interventions and tools in context. Through papers, panel discussions, keynotes, and collaborative drafting of a research agenda, this workshop will build community and empower actionable efforts to tackle AI governance through a sociotechnical lens.

Monday, April 28, 2025

Tuesday, April 29, 2025

Wednesday, April 30, 2025

Thursday, May 1, 2025