Perception of an AI Teammate in an Embodied Control Task Affects Team Performance, Reflected in Human Teammates' Behaviors and Physiological Responses
Yinuo Qin, Richard T. Lee, Paul Sajda
Stop assuming AI teammates improve performance by default. Test human-AI team dynamics under stress conditions before deployment. Design AI agents that signal their coordination limitations explicitly, not through human-like avatars that promise reciprocity they can't deliver.
Adding an AI teammate to a human team degraded performance as task difficulty increased. Human-AI teams underperformed human-only teams in a VR sensorimotor task, with human teammates showing elevated physiological arousal and reduced engagement.
Method: The study used a virtual reality ball-balancing task in which teams of three jointly controlled a shared platform. When one human was replaced with a human-like AI agent, coordination broke down under high-difficulty conditions. Physiological sensors captured elevated arousal (reflected in heart rate and heart rate variability measures), and behavioral tracking showed fewer communication attempts. The AI's human-like appearance created false expectations of reciprocal coordination that the agent could not fulfill.
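For context on the physiological measure, a standard time-domain HRV metric is RMSSD (root mean square of successive differences between inter-beat intervals). The sketch below is illustrative only and does not reproduce the paper's analysis pipeline; the function name and the sample interval values are hypothetical.

```python
import math

def rmssd(ibi_ms):
    """Compute RMSSD, a common heart rate variability metric, from
    inter-beat intervals in milliseconds (needs at least two intervals)."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical inter-beat intervals (ms) from a chest-strap or PPG sensor.
print(round(rmssd([800, 810, 790, 805]), 2))  # → 15.55
```

Tracking a metric like this over a task session is one way a study can quantify changes in physiological state between experimental conditions.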
Caveats: Study used a specific sensorimotor task in VR. Generalizability to cognitive collaboration tasks or asynchronous work unclear.
Reflections: Does explicit signaling of AI limitations (e.g., 'I can only respond to X inputs') prevent the coordination collapse? · At what task difficulty threshold does human-AI performance diverge from human-only teams? · Can physiological monitoring predict team performance degradation before task failure?