"Our goal is to enable robots to express their incapability, and to do so in a way that communicates both what they are trying to accomplish and why they are unable to accomplish it... Our user study supports that our approach automatically generates motions expressing incapability that communicate both what and why to end-users, and improve their overall perception of the robot and willingness to collaborate with it in the future."
I'm not as plugged into human-computer interaction work, but as a user, this kind of legible failure seems sorely missing, and the situation is getting worse. I wish I could get a happy medium between a full stack trace and silent failure, e.g. when my iCloud documents won't sync.
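As a thought experiment, here's a minimal sketch of what that happy medium might look like in code. Everything in it is hypothetical (the `UserFacingError` class, `sync_documents`, the messages); it isn't anything from iCloud or from the paper. The only point is the paper's "what and why" framing applied to software errors: the user sees what the system was trying to do and why it couldn't, while the full stack trace still goes to the log for developers.

```python
# Hypothetical error type carrying a plain-language "what" and "why" for
# end-users, while the technical detail stays in the developer log.
import logging
from typing import Optional

logger = logging.getLogger(__name__)


class UserFacingError(Exception):
    """An error that states its goal (what) and its obstacle (why)."""

    def __init__(self, attempt: str, reason: str, suggestion: Optional[str] = None):
        self.attempt = attempt        # what the system was trying to accomplish
        self.reason = reason          # why it was unable to accomplish it
        self.suggestion = suggestion  # optional next step for the user
        super().__init__(f"{attempt}: {reason}")

    def user_message(self) -> str:
        msg = f"Couldn't {self.attempt} because {self.reason}."
        if self.suggestion:
            msg += f" {self.suggestion}"
        return msg


def upload_pending_changes() -> None:
    # Stand-in for a real network call that fails.
    raise OSError("connection refused")


def sync_documents() -> None:
    try:
        upload_pending_changes()
    except OSError as exc:
        # Full stack trace goes to the log for developers...
        logger.exception("sync failed")
        # ...while the user gets the what and the why, nothing more.
        raise UserFacingError(
            attempt="sync your documents",
            reason="the server could not be reached",
            suggestion="Check your network connection and try again.",
        ) from exc


if __name__ == "__main__":
    try:
        sync_documents()
    except UserFacingError as e:
        print(e.user_message())
        # -> Couldn't sync your documents because the server could not be
        #    reached. Check your network connection and try again.
```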