Presentation by Shalaleh Rismani, postdoctoral researcher at McGill and Mila, working at the intersection of system safety, human-computer interaction, and the societal impact of AI, and Executive Director of the Open Roboethics Institute.
AI assistants and agents are often described as being "under human oversight," but what oversight actually means in everyday use remains unclear. This talk examines oversight as a human behavior rather than a system feature, drawing on findings from a study of AI writing assistants that explores how users' mental models shape their interactions and outputs. We show that greater system understanding does not necessarily lead to better oversight, and can even result in worse outcomes when users place excessive trust in AI suggestions. Using this case as a starting point, the talk raises broader questions about oversight in AI assistants and agents, including coding tools, and highlights why transparency and human-in-the-loop designs alone may be insufficient for ensuring system safety.
Where: UQAM - Pavillon Président-Kennedy, Room PK-1140 (first floor), Montréal
Part of the Montréal AI safety, ethics, and governance community event series. Register at: https://lu.ma/7kugvplz