Talks, tutorials, and other presentations



Invited tutorial: Designing and Implementing Transparency in Modular Agents

November 16, 2017

Tutorial, CodiaX 2017, Cluj-Napoca, Romania

Artificial Intelligence is here, emerging as the defining technology of our time. From personal assistants in our pockets to self-driving cars, it has the potential to transform our lives; it is a new industrial revolution. Yet we are often encouraged to use AI as a mystical black box, making interactions limited and often uninformative for the end user. This can raise both safety and privacy concerns, negatively affecting the system’s performance and jeopardizing its functionality. Moreover, such approaches increase the complexity of debugging and tuning intelligent agents, lengthening development time. In this talk I will discuss the development of autonomous systems that must deal with changing goals in an unpredictable environment while remaining transparent to inspection. I will present a programming-inspired methodology, Behavior Oriented Design, for implementing modular, behavior-based intelligent systems; such systems range from embodied robots to game AI. Finally, the talk will conclude with a discussion of the benefits of removing the potentially frightening mystery around “why my robot behaves like that”, helping us to understand, debug, tune, and calibrate our trust in our machines.


Invited tutorial: Transparency as a Consideration in Building AI

August 24, 2016

Tutorial, Responsible AI Summer School - ECAI 2016, The Hague, Netherlands

Transparency is a key consideration for the ethical design and use of Artificial Intelligence, and has recently become a topic of public interest and debate. We frequently use philosophical, mathematical, and biologically-inspired techniques for building interactive, intelligent agents. Yet we often treat them as black boxes, with no understanding of how the underlying real-time decision making functions. The black-box nature of intelligent systems, such as context-aware applications, makes interaction limited and often uninformative for the end user. Limited interactions may negatively affect the system’s performance or even jeopardize its functionality. Transparency allows a better understanding of an agent’s emergent behaviour, helping not only with debugging AI, but also with its public understanding, hopefully removing the potentially frightening mystery around “why my robot behaves like that”. In this tutorial, we will discuss the benefits of transparent-to-inspection agents and present to the audience both problems and good practices for the development of such agents. We will demonstrate the use of ABOD3, an interactive development environment (IDE) for AI with real-time debugging capabilities.