Abstracts at IC2S2

When Large Language Models are Reliable for Judging Empathic Communication

Aakriti Kumar, Nalin (Fai) Poungpeth, Diyi Yang, Erina Farrell, Bruce Lambert, Matt Groh

Large language models (LLMs) excel at generating empathic responses in text-based conversations. But how reliably do they judge the nuances of empathic communication? We investigate this question by comparing how experts, crowdworkers, and LLMs annotate empathic communication across four evaluative frameworks drawn from psychology, natural language processing, and communication, applied to 200 real-world conversations where one speaker shares a personal problem and the other offers support. Drawing on 3,150 expert annotations, 2,844 crowd annotations, and 3,150 LLM annotations, we assess inter-rater reliability between these three annotator groups. We find that expert agreement is high but varies across the frameworks' sub-components depending on their clarity, complexity, and subjectivity. We show that expert agreement offers a more informative benchmark for contextualizing LLM performance than standard classification metrics. Across all four frameworks, LLMs consistently approach this expert-level benchmark and exceed the reliability of crowdworkers. These results demonstrate how LLMs, when validated on specific tasks with appropriate benchmarks, can support transparency and oversight in emotionally sensitive applications, including their use as conversational companions.
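
To make the benchmarking idea concrete, here is a minimal sketch, not the paper's actual pipeline, of contextualizing LLM-expert agreement against expert-expert agreement using quadratic-weighted Cohen's kappa. The three-expert setup, the 1-5 ordinal scale, and all ratings are simulated assumptions; only the 200-conversation count comes from the abstract.

```python
# Minimal sketch (not the paper's pipeline): use expert-expert agreement as the
# benchmark against which LLM-expert agreement is judged. All ratings below are
# simulated; only the 200-conversation count comes from the abstract.
import itertools
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n_conversations = 200
truth = rng.integers(1, 6, n_conversations)            # latent 1-5 rating per conversation

def noisy(ratings, p_flip=0.2):
    """Perturb ratings by +/-1 with probability p_flip, staying on the 1-5 scale."""
    shift = rng.choice([-1, 0, 1], size=ratings.size, p=[p_flip / 2, 1 - p_flip, p_flip / 2])
    return np.clip(ratings + shift, 1, 5)

experts = np.array([noisy(truth) for _ in range(3)])   # 3 hypothetical expert annotators
llm = noisy(truth, p_flip=0.3)                         # a slightly noisier simulated LLM

def mean_pairwise_kappa(raters):
    """Average quadratic-weighted Cohen's kappa over all pairs of raters (rows)."""
    pairs = itertools.combinations(range(raters.shape[0]), 2)
    return np.mean([cohen_kappa_score(raters[i], raters[j], weights="quadratic")
                    for i, j in pairs])

expert_benchmark = mean_pairwise_kappa(experts)        # expert-expert agreement ceiling
llm_vs_experts = np.mean([cohen_kappa_score(llm, e, weights="quadratic") for e in experts])
print(f"expert-expert kappa: {expert_benchmark:.2f}")
print(f"LLM-expert kappa:    {llm_vs_experts:.2f}")
```

In this framing, the question is not whether the LLM's agreement is high in absolute terms but whether it falls near the expert-expert ceiling for each framework sub-component.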

Mapping Computational Personality Rights via Embeddings and Human Perception

Xudong Tang, Matt Groh

Can a computational framework for evaluating voice similarity match or possibly exceed the "reasonable person" standard in tort law for identifying a speaker from an audio recording? We address this question by examining how speaker embeddings in generative audio models influence human perception of speaker identity. By generating voice samples from interpolated speaker embeddings, we seek to reveal an empirical personality-right boundary in the embedding space. Our pilot results demonstrate that distance in speaker embedding space broadly aligns with human judgments of vocal similarity. As a next step, we will conduct a large-scale web-based experiment with both real and AI-generated voice samples to measure the degree to which a computational metric can match human perception of the personality-right boundary, offering a new approach to legal concerns about personality rights.
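
A minimal sketch of the underlying measurement idea, assuming placeholder 256-dimensional embeddings, linear interpolation, and simulated listener ratings (none of which are the study's actual models or data): interpolate between two speaker embeddings and check whether embedding-space distance tracks perceived similarity.

```python
# Minimal sketch (assumptions throughout): relate distance in speaker-embedding
# space to perceived vocal similarity. The 256-dimensional embeddings, linear
# interpolation, and "listener" ratings are all simulated placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
speaker_a = rng.normal(size=256)                       # placeholder speaker embeddings
speaker_b = rng.normal(size=256)

def cosine_distance(u, v):
    return 1 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Interpolate between the two voices at several mixing weights.
alphas = np.linspace(0.0, 1.0, 11)
interpolated = [(1 - a) * speaker_a + a * speaker_b for a in alphas]
distances = [cosine_distance(speaker_a, emb) for emb in interpolated]

# Hypothetical listener ratings of "how much does this sound like speaker A?"
# simulated as a noisy decreasing function of embedding distance.
perceived_similarity = 1 - np.array(distances) + rng.normal(0, 0.05, len(alphas))

rho, p = spearmanr(distances, perceived_similarity)
print(f"Spearman rho (distance vs. perceived similarity): {rho:.2f}")
```

A strongly negative Spearman correlation here indicates alignment: the farther an interpolated voice sits from speaker A in embedding space, the less it is judged to sound like speaker A.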

Multimodal Language Models for Annotating Team Behavior

Evey Huang, Matt Groh, Danny Abrams, Brian Uzzi

Multimodal language models capable of annotating interactions in videos present transformative opportunities for computational social science (CSS) and team research, enabling longitudinal and large-scale studies of how team behaviors drive innovation, a task previously hindered by labor-intensive manual coding [2]. While prior work has evaluated AI’s annotation of text or synthetic data [1], its reliability on real-world, multimodal data like video remains underexplored. We address this gap by testing whether Gemini, a state-of-the-art multimodal model, can reliably annotate 23 team behaviors in 138 video-recorded scientific team meetings. Each meeting, recorded at a scientific conference, is approximately one hour long and includes four to seven scientists from diverse backgrounds. We assess the inter-rater reliability between a human expert’s and Gemini’s annotations and qualitatively examine their differences. Our work bridges computational methods and team science, offering a framework to help CSS researchers study how micro-level behaviors (e.g., conflict management) predict macro-level outcomes like innovation.
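
As an illustration of the reliability comparison, assuming simulated binary present/absent labels and placeholder behavior names rather than the study's annotations, the sketch below computes a per-behavior Cohen's kappa between a human expert and the model and flags low-agreement behaviors for qualitative review.

```python
# Minimal sketch (illustrative only): per-behavior agreement between one human
# expert and a multimodal model on binary present/absent annotations. Behavior
# names and labels are simulated; only the 138-meeting and 23-behavior counts
# come from the abstract.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
n_meetings, n_behaviors = 138, 23
behaviors = [f"behavior_{i:02d}" for i in range(n_behaviors)]     # placeholder names

expert = rng.integers(0, 2, (n_behaviors, n_meetings))            # 0/1 per behavior per meeting
flips = rng.random((n_behaviors, n_meetings)) < 0.15              # model disagrees ~15% of the time
model = np.where(flips, 1 - expert, expert)

kappas = {name: cohen_kappa_score(e, m) for name, e, m in zip(behaviors, expert, model)}
print(f"mean kappa across behaviors: {np.mean(list(kappas.values())):.2f}")
low_agreement = [name for name, k in kappas.items() if k < 0.6]   # arbitrary illustrative cutoff
print("flagged for qualitative review:", low_agreement)
```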