AI and robot ethics have recently gained considerable attention because adaptive machines are increasingly involved in ethically sensitive scenarios and have caused incidents of public outcry. Much of the debate has focused on achieving the highest moral standards in handling ethical dilemmas on which not even humans can agree, which suggests that the wrong questions are being asked. We propose addressing this ethics debate strictly through the lens of what behavior seems socially acceptable, rather than idealistically ethical. Learning such behavior places the debate at the very heart of developmental robotics. This paper sets out a roadmap of computational and experimental questions for developing socially acceptable machines. We emphasize the need for social reward mechanisms and for learning architectures that integrate them while reaching beyond the limitations of plain reinforcement learning agents. We suggest using the metaphor of “needs” to bridge rewards and higher-level abstractions such as goals, for both communication and action generation in a social context. We then propose a series of experimental questions, as well as possible platforms and paradigms, to guide future research in the area.
Rolf, Matthias; Crook, Nigel; Steil, Jochen
Faculty of Technology, Design and Environment; School of Engineering, Computing and Mathematics
Year of publication: 2019
Date of RADAR deposit: 2018-07-24