Event-Triggered Robot Self-Assessment to Aid in Autonomy Adjustment

Published in Frontiers in Robotics and AI, 2024

Recommended citation: N. Conlon, N. Ahmed, D. Szafir. Event-Triggered Robot Self-Assessment to Aid in Autonomy Adjustment. Frontiers in Robotics and AI 10:1294533. January 2024. https://www.frontiersin.org/articles/10.3389/frobt.2023.1294533/full

Abstract: Human-robot teams are being called upon to accomplish increasingly complex tasks. During execution, the robot may operate at different levels of autonomy (LOA), ranging from full robotic autonomy to full human control. For any number of reasons, such as changes in the robot’s surroundings due to the complexities of operating in dynamic and uncertain environments, degradation and damage to the robot platform, or changes in tasking, adjusting the LOA during operations may be necessary to achieve desired mission outcomes. Thus, a critical challenge is understanding when and how the autonomy should be adjusted. One way to frame this problem is with respect to the robot’s capabilities and limitations, known as robot competency. With this framing, a robot could be granted a level of autonomy in line with its ability to operate with a high degree of competence. In this work, we propose a Model Quality Assessment metric, which indicates how (un)expected an autonomous robot’s observations are compared to its model predictions. We then present an Event-Triggered Generalized Outcome Assessment (ET-GOA) algorithm that uses changes in the Model Quality Assessment above a threshold to selectively execute and report a high-level assessment of task objectives. We validate the Model Quality Assessment metric and the ET-GOA algorithm in both simulated and live robot navigation scenarios and present a human-in-the-loop demonstration showing how ET-GOA can facilitate informed autonomy adjustment decisions.
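
The abstract describes a two-level pattern: a cheap, continuously computed Model Quality Assessment flags when the robot's observations diverge from its model predictions, and only then is the more expensive outcome assessment recomputed and reported. The sketch below illustrates that event-triggered pattern in Python under stated assumptions; the surprise metric (a Gaussian fit to sampled predictions), the threshold `delta`, and the names `model_quality`, `EventTriggeredAssessor`, and `goa_fn` are illustrative placeholders, not the paper's actual implementation.

```python
"""Illustrative sketch of event-triggered self-assessment.

All names here (model_quality, EventTriggeredAssessor, goa_fn, delta) are
hypothetical and chosen for clarity; they are not taken from the paper.
"""
import numpy as np
from scipy.stats import multivariate_normal


def model_quality(observation, predicted_samples):
    """Score how expected an observation is under the model's predictions.

    One plausible choice of surprise metric: fit a Gaussian to Monte Carlo
    predictions of the next state and return the observation's density under
    it, normalized by the density at the mode so the score lies in (0, 1].
    """
    mu = predicted_samples.mean(axis=0)
    cov = np.cov(predicted_samples, rowvar=False)
    cov += 1e-6 * np.eye(predicted_samples.shape[1])  # regularize for stability
    dist = multivariate_normal(mean=mu, cov=cov)
    return dist.pdf(observation) / dist.pdf(mu)


class EventTriggeredAssessor:
    """Run an expensive task-outcome assessment only when model quality shifts."""

    def __init__(self, goa_fn, delta=0.3):
        self.goa_fn = goa_fn        # expensive outcome assessment (e.g., a GOA-style computation)
        self.delta = delta          # trigger threshold on the change in model quality
        self.prev_quality = None
        self.last_assessment = None

    def step(self, observation, predicted_samples, task_context):
        quality = model_quality(observation, predicted_samples)
        triggered = (
            self.prev_quality is None
            or abs(quality - self.prev_quality) > self.delta
        )
        if triggered:
            # Re-assess confidence in achieving the task objectives and report
            # it to the operator to inform an autonomy-adjustment decision.
            self.last_assessment = self.goa_fn(task_context)
        self.prev_quality = quality
        return quality, self.last_assessment, triggered
```

The design intent is that `model_quality` is cheap enough to evaluate at every timestep, while `goa_fn` (which might roll out the robot's model to score confidence in meeting task objectives) runs only when the triggering condition fires, keeping operator updates informative without constant recomputation.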