Evaluating the impacts of a priori robot competency assessments on human decision-making
This was a human-subject study (N=155) in which we evaluated how communicating an agent’s a priori competency assessment impacted users’ decision-making under uncertainty about the state of the world and the robot’s capabilities. We formulated this as a grid-world game where participants supervised the robot and either (1) allowed the robot to autonomously navigate to a goal, or (2) manually drove the agent to the goal using a simple up/down/left/right interface. We leveraged the Outcome Assessment from Factorized Machine Self-Confidence as our robot assessment framework. We found that communicating the robot’s competency led to improved task performance, more informed choices of autonomy level, and participant trust calibrated with the agent’s capabilities.
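A minimal sketch of how such an a priori outcome assessment could be produced: forward-simulate the robot's (noisy) navigation policy many times and report the estimated probability of reaching the goal. All names, parameters, and the slip-noise model here are illustrative assumptions, not the study's actual implementation.

```python
import random

def rollout(start, goal, grid_size, slip_prob, max_steps=50, rng=random):
    """One simulated episode: the robot steps greedily toward the goal,
    but each step 'slips' to a random direction with probability slip_prob.
    (Hypothetical dynamics model, chosen only for illustration.)"""
    x, y = start
    for _ in range(max_steps):
        if (x, y) == goal:
            return True
        if rng.random() < slip_prob:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        else:
            # Greedy move: close the x-gap first, then the y-gap.
            dx = (goal[0] > x) - (goal[0] < x)
            dy = 0 if dx else (goal[1] > y) - (goal[1] < y)
        # Clamp to the grid boundaries.
        x = min(max(x + dx, 0), grid_size - 1)
        y = min(max(y + dy, 0), grid_size - 1)
    return (x, y) == goal

def outcome_assessment(start, goal, grid_size=10, slip_prob=0.2,
                       n_rollouts=500, seed=0):
    """Monte Carlo estimate of P(success) for the autonomous policy --
    the kind of competency statistic a participant could see before
    choosing an autonomy level."""
    rng = random.Random(seed)
    successes = sum(
        rollout(start, goal, grid_size, slip_prob, rng=rng)
        for _ in range(n_rollouts)
    )
    return successes / n_rollouts
```

A participant-facing display could then threshold or bin this probability (e.g. "high confidence" vs. "low confidence") before the delegate-or-drive choice is made.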
We published these papers:
“I’m Confident This Will End Poorly”: Robot Proficiency Self-Assessment in Human-Robot Teaming
Investigating the Effects of Robot Proficiency Self-Assessment on Trust and Performance
You can find the code on GitHub here: