The talk presents a connected view of recent research on large language models (LLMs) through a human-centered lens, moving from what these systems can infer about people, to how they interact with them, to where they present security risks, and why such vulnerabilities matter in socially consequential settings. It begins with work on how LLMs infer personality from short texts and on how communication style shapes user experience and task outcomes, showing both the potential of LLMs for personalization and the importance of careful interaction design across contexts. It then broadens to questions of trustworthiness and toxicity, covering security risks in prompt-based interaction, attack surfaces, and the growing challenge posed by multilingual, multimodal, and autonomous jailbreaks. It further examines representational harms through methods for measuring gender bias in gendered and under-resourced languages. The talk concludes with a high-stakes application to mental health crisis response, where clinically informed evaluation reveals that increasing model capability does not automatically translate into safe, appropriate, or context-aware behavior. Across these topics, the unifying theme is that progress in LLMs must be matched by rigorous work on evaluation, safety, fairness, and responsible deployment.
Zoom link: Click me. (The audio issues from last time are being actively addressed.)
The seminar is also organized under the ELLIOT project.