Invited tutorial: Designing and Implementing Transparency in Modular Agents


Artificial Intelligence is here, emerging as the defining technology of our time. From the personal assistants in our pockets to self-driving cars, it has the potential to transform our lives; it is a new industrial revolution. Yet we are often encouraged to treat AI as a mystical black box, which makes interactions limited and often uninformative for the end user. This raises both safety and privacy concerns, degrading a system's performance and jeopardizing its functionality. Such opacity also makes intelligent agents harder to debug and tune, lengthening development time. In this talk I will discuss the development of autonomous systems that must cope with changing goals in an unpredictable environment, yet remain transparent to inspection. I will present a programming-inspired methodology, Behavior Oriented Design, for implementing modular, behavior-based intelligent systems; such systems range from embodied robots to game AI. Finally, the talk will conclude with a discussion of the benefits of removing the potentially frightening mystery around “why my robot behaves like that”, helping us to understand, debug, tune, and calibrate our trust in our machines.
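To make the idea of a transparent, modular, behavior-based agent concrete, the sketch below shows a minimal priority-ordered action-selection loop that records a human-readable trace of why each behavior was chosen. This is an illustrative toy under my own assumptions, not the Behavior Oriented Design implementation itself; all names (`Behavior`, `Agent`, the example triggers) are hypothetical.

```python
# Illustrative sketch only -- NOT the official Behavior Oriented Design code.
# A tiny priority-ordered action-selection loop whose agent can explain,
# in plain language, why it acted as it did.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Behavior:
    """One self-contained competence module: a trigger plus an action."""
    name: str
    trigger: Callable[[Dict], bool]   # reads world state, decides relevance
    action: Callable[[Dict], None]    # mutates world state when selected


@dataclass
class Agent:
    """Runs the first triggered behavior (highest priority first) and
    keeps a trace so a human can inspect the agent's decisions."""
    behaviors: List[Behavior]                      # ordered by priority
    trace: List[str] = field(default_factory=list)

    def step(self, state: Dict) -> str:
        for b in self.behaviors:
            if b.trigger(state):
                b.action(state)
                self.trace.append(f"chose '{b.name}' because its trigger fired")
                return b.name
        self.trace.append("no behavior triggered; idling")
        return "idle"


# Hypothetical robot with two behaviors: recharging outranks exploring.
agent = Agent([
    Behavior("recharge", lambda s: s["battery"] < 20,
             lambda s: s.update(battery=100)),
    Behavior("explore",  lambda s: True,
             lambda s: s.update(visited=s["visited"] + 1)),
])

state = {"battery": 10, "visited": 0}
print(agent.step(state))   # recharge (battery was low)
print(agent.step(state))   # explore (battery now full)
print(agent.trace)         # the "why my robot behaves like that" record
```

Because each module is small and the selection rule is explicit, the trace answers "why did the robot do that?" directly, which is the kind of inspectability the talk argues for.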