Future personal robots might possess the capability to autonomously generate novel goals that exceed both their initial programming and their past experience. We discuss the ethical challenges such a scenario involves, ranging from how ethics could be built into such machines to the standard of ethics we could actually demand from them. We argue that we might have to accept such machines committing human-like ethical failures if they are ever to reach human-level autonomy and intentionality. We base our discussion on recent ideas that novel goals could originate from an agent's value system, which expresses a subjective goodness of world states or internal states. Novel goals could then be generated by extrapolating which future states would be good to achieve. Ethics could be built into such systems not only through simple utilitarian measures but also by constructing a value for the expected social acceptance of the agent's conduct.
Rolf, M and Crook, N
Faculty of Technology, Design and Environment, Department of Computing and Communication Technologies
Year of publication: 2016
Date of RADAR deposit: 2017-05-22