Quantifying and communicating in situ changes to robot competency

This project investigated how an autonomous robot can capture and quantify both when and how its competency may change during a mission. As an extension to Factorized Machine Self-Confidence, we developed a Model Quality Assessment, which quantifies how closely an agent's model predictions match real observations. We then developed the Event-Triggered Generalized Outcome Assessment (ET-GOA) algorithm, which uses the Model Quality Assessment to selectively trigger a Generalized Outcome Assessment in situ. We validated the Model Quality Assessment and the ET-GOA algorithm both in simulation and on a live unmanned ground vehicle. We believe that real-time updating and communication of competency can calibrate human users to changing robot capabilities, enabling them to make more informed and safer decisions with respect to their robotic counterparts.
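To make the event-triggered idea concrete, here is a minimal sketch of the control flow, not the paper's implementation: it assumes model quality is scored by comparing an observed state against the model's predicted state samples, and that a hypothetical `run_goa` callable performs the (relatively expensive) Generalized Outcome Assessment. The names `model_quality`, `et_goa_step`, the distance metric, and the threshold value are all illustrative.

```python
import numpy as np

def model_quality(predicted_states, observed_state):
    """Illustrative Model Quality Assessment: a score in (0, 1] reflecting how
    well the observed state agrees with the model's predicted state samples
    (here via a Mahalanobis-style distance to the sample mean)."""
    mu = predicted_states.mean(axis=0)
    cov = np.cov(predicted_states, rowvar=False) + 1e-6 * np.eye(mu.size)
    d2 = (observed_state - mu) @ np.linalg.inv(cov) @ (observed_state - mu)
    return float(np.exp(-0.5 * d2))  # 1.0 = perfect agreement; -> 0 as surprise grows

def et_goa_step(predicted_states, observed_state, run_goa, quality_threshold=0.5):
    """One event-triggered step: re-run the Generalized Outcome Assessment
    only when model quality drops below the threshold."""
    quality = model_quality(predicted_states, observed_state)
    if quality < quality_threshold:
        return quality, run_goa()  # model no longer trusted: reassess outcomes
    return quality, None           # model still adequate: skip reassessment

# Example: a surprising observation, far from the predicted states, triggers GOA.
preds = np.random.normal(0.0, 1.0, size=(100, 2))  # model's predicted states
obs = np.array([3.5, -3.0])                         # observation that deviates
q, goa = et_goa_step(preds, obs, run_goa=lambda: "recomputed outcome confidence")
```

The point of the trigger is that the full outcome assessment runs only when the model's predictions stop matching reality, rather than at every timestep.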

Video coming soon!

This project extends the following:

Generalizing Competency Self-Assessment for Autonomous Vehicles Using Deep Reinforcement Learning

Dynamic Competency Self-Assessment for Autonomous Agents

We published this article:

Event-triggered robot self-assessment to aid in autonomy adjustment

You can find the code on GitHub here:

Event-Triggered Generalized Outcome Assessment (ET-GOA) Evaluation