Transparency is a key consideration in the ethical design and use of Artificial Intelligence, and has recently become a topic of public interest and debate. We frequently use philosophical, mathematical, and biologically-inspired techniques to build interactive, intelligent agents, yet we often treat them as black boxes, with little understanding of how their underlying real-time decision making works. This black-box nature of intelligent systems, such as context-aware applications, makes interaction limited and often uninformative for the end user. Limited interaction may in turn degrade the system’s performance or even jeopardize its functionality. Transparency allows a better understanding of an agent’s emergent behaviour, helping not only with debugging AI but also with its public understanding, hopefully dispelling the potentially frightening mystery around “why my robot behaves like that”. In this tutorial, we discuss the benefits of transparent-to-inspection agents and present to the audience both the problems and good practices in the development of such agents. We demonstrate the use of ABOD3, an interactive development environment (IDE) for AI with real-time debugging capabilities.