Robot Transparency, Trust and Utility

Published in Connection Science, 2017

Recommended citation: Wortham, R. H., & Theodorou, A. (2017). Robot Transparency, Trust and Utility. Connection Science, 29(3), 242-247. DOI: 10.1080/09540091.2017.1313816. https://www.tandfonline.com/doi/abs/10.1080/09540091.2017.1313816

Abstract:

As robot reasoning becomes more complex, debugging becomes increasingly hard based solely on observable behaviour, even for robot designers and technical specialists. Similarly, non-specialist users have difficulty creating useful mental models of robot reasoning from observations of robot behaviour. The EPSRC Principles of Robotics mandate that our artefacts should be transparent, but what does this mean in practice, and how does transparency affect both trust and utility? We investigate this relationship in the literature and find it to be complex, particularly in non-industrial environments where, depending on the application and purpose of the robot, transparency may have a wider range of effects on trust and utility. We outline our programme of research to support our assertion that it is nevertheless possible to create transparent agents that remain emotionally engaging despite revealing their machine nature.

Past version:

Wortham, R. H., Theodorou, A., & Bryson, J. J. (2016). Robot Transparency, Trust and Utility. AISB 2016 Workshop on Principles of Robotics, Sheffield, UK.