Biography
Daniel Polani is Professor of Artificial Intelligence at the School of Computer Science, University of Hertfordshire.
Daniel earned his PhD in 1996, followed by a research fellowship at the University of Mainz (Germany) with a stint in 1997 as a visiting researcher at the University of Texas at Austin. From 2000 to 2002 he was a Fellow at the Institute for Neuro- and Bioinformatics at the University of Luebeck (Germany). Since 2002 he has been a member of the Algorithms and Adaptive Systems Research Groups at the University of Hertfordshire and is the leader of the SEPIA (Sensor Evolution, Processing, Information and Actuation) unit.
A central theme of his research is the use of information-theoretic language and methods to model intelligent and cognitive behaviour on the one hand, and their operational application to questions of Robotics, AI and Artificial Life on the other. This information-theoretic line of work has given rise to a number of concepts emerging from the formalism, among them the quantification of the minimum information required to achieve goals (“relevant information”), the emergence of goals from processing constraints, and empowerment, a principled information-theoretic model for intrinsic motivation which is now used in a variety of projects and for which robotic implementations are currently being developed.
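To make the central quantity concrete: empowerment is the Shannon channel capacity between an agent's actions and its subsequent sensor states. The following is a minimal Python sketch, not code from any of the projects mentioned above, that computes this capacity for a hypothetical discrete action-to-sensor channel using the standard Blahut-Arimoto iteration; the function name and the toy channel are illustrative assumptions.

```python
import numpy as np

def empowerment(p_s_given_a, tol=1e-9, max_iter=1000):
    """Empowerment = channel capacity max_{p(a)} I(A; S), in bits,
    computed with the Blahut-Arimoto iteration.

    p_s_given_a: array of shape (n_actions, n_states); row a is p(s | a).
    """
    n_actions = p_s_given_a.shape[0]
    p_a = np.full(n_actions, 1.0 / n_actions)   # start uniform over actions

    def divergence(p_a):
        # D( p(s|a) || p(s) ) for every action a, with 0 * log 0 := 0
        p_s = p_a @ p_s_given_a                 # marginal over sensor states
        with np.errstate(divide="ignore", invalid="ignore"):
            log_ratio = np.where(p_s_given_a > 0.0,
                                 np.log2(p_s_given_a / p_s), 0.0)
        return np.sum(p_s_given_a * log_ratio, axis=1)

    for _ in range(max_iter):
        d = divergence(p_a)
        p_a_new = p_a * np.exp2(d)              # multiplicative BA update
        p_a_new /= p_a_new.sum()
        if np.max(np.abs(p_a_new - p_a)) < tol:
            p_a = p_a_new
            break
        p_a = p_a_new

    return float(p_a @ divergence(p_a))         # I(A; S) at the optimum

# Toy channel: 3 actions, each leading deterministically to a distinct state.
# Full control over the next sensor state gives log2(3) ~ 1.585 bits.
print(empowerment(np.eye(3)))
```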
He is Associate Editor of a number of journals, including the Journal of Autonomous Agents and Multi-Agent Systems, has reviewed for national and international research councils (e.g. in the UK, US, Germany, the Netherlands and Hong Kong) as well as for renowned international journals, and has served on a large number of program committees. He is currently PI of the Horizon 2020 projects socSMCs and WiMUST, as well as president-elect of the International RoboCup Federation.
Abstract
“Altruistic” Empowerment as a Replacement for the 3 Laws of Robotics
Even if full autonomy may still be some time off, the increasing capability of AI and robotics systems is already pushing such systems into the operational and everyday domain of humans. While industrial robots operate under controlled conditions, with safety mainly guaranteed by spatial and temporal separation, it is clear that future robotics will not enjoy such a clean separation. As the conditions under which intelligent robots operate and interact with humans become increasingly ill-defined, ways must be found to ensure safe interaction between robots and humans. Asimov’s 3 Laws of Robotics were an attempt to formulate such rules, but they remain vague and problematic in their application.
As an alternative, we propose empowerment, an information-theoretic intrinsic motivation model, to address this problem, in a number of “altruistic” variants where the robot “puts itself in the shoes of the human”. It turns out that this model suggests a route towards a natural and operational replacement for Asimov’s original ideas: it implicitly takes embodiment and context into account, and it does not require the linguistic competence that is ingrained in Asimov’s formulation. In particular, it suffers less from the need for context-specific adaptations than an explicitly rule-based model does.
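As a toy illustration of the “altruistic” idea (a sketch under assumed dynamics, not the implementation behind the talk): in a deterministic world, an agent’s one-step empowerment reduces to the logarithm of the number of distinct states it can reach, so an altruistic robot can simply choose the action that leaves the human with the largest set of options. The corridor world below, its names, and the blocking rule are all illustrative assumptions.

```python
import math

# Hypothetical world: a corridor of N cells shared by a robot and a human.
# Each agent may step left, stay, or step right; an agent cannot enter the
# cell occupied by the other. With deterministic dynamics, the human's
# one-step empowerment is log2 of the number of distinct reachable states.

N = 5
ACTIONS = (-1, 0, +1)

def clamp(x):
    return min(max(x, 0), N - 1)

def human_empowerment(human, robot):
    reachable = set()
    for a in ACTIONS:
        nxt = clamp(human + a)
        if nxt == robot:            # blocked by the robot's body
            nxt = human
        reachable.add(nxt)
    return math.log2(len(reachable))

def altruistic_robot_action(human, robot):
    """Robot move that leaves the human with maximal empowerment."""
    def score(a):
        nxt = clamp(robot + a)
        if nxt == human:            # the robot never steps onto the human
            nxt = robot
        return human_empowerment(human, nxt)
    return max(ACTIONS, key=score)

# With the robot adjacent to the human, stepping aside restores the human's
# full set of three successor states (~1.585 bits instead of 1 bit).
print(altruistic_robot_action(human=1, robot=2))   # -> 1
```

Note the contrast with a rule-based encoding: nothing in this sketch names the situation “do not corner the human”; keeping the human’s empowerment high yields that behaviour as a side effect of the embodiment and the world dynamics.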