Early integration of tactile sensing into motor coordination is the norm in animals, but remains a challenge for robots. Tactile exploration through touches on the body gives rise to the first body models and bootstraps further development, such as reaching competence. Reaching to one’s own body requires connecting only the tactile and motor spaces; still, the problems of high dimensionality and motor redundancy persist. Using an embodied computational model for learning self-touch on a simulated humanoid robot with artificial sensitive skin, we demonstrate that this task can be achieved (i) effectively and (ii) efficiently at scale by employing the computational frameworks for learning internal models for reaching: intrinsic motivation and goal babbling. We relate our results to infant studies on spontaneous body exploration, as well as on reaching to vibrotactile targets on the body. We analyze the reaching configurations of one infant followed weekly between 4 and 18 months of age and derive further requirements for the computational model: accounting for (iii) continuous rather than sporadic touch and (iv) consistent redundancy resolution. Our results show the general success of the learning models in the touch domain, but also point out limitations in achieving fully continuous touch.
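The abstract names goal babbling as one of the frameworks employed. As a minimal illustrative sketch only, under toy assumptions and not the authors' implementation, the core loop of goal babbling samples goals in task space (here, a hypothetical one-dimensional "skin" coordinate reached by a two-joint arm via an assumed toy forward model `forward`) and refines a crude inverse model from observed outcomes:

```python
import numpy as np

# Goal-babbling sketch (illustrative only; not the paper's model).
# Assumption: `forward(q)` maps two joint angles to a scalar skin coordinate.

def forward(q):
    """Toy forward model: arm endpoint projected to a 1-D skin coordinate."""
    return np.sin(q[0]) + np.sin(q[0] + q[1])

rng = np.random.default_rng(0)
memory = []  # observed (skin_location, joint_configuration) pairs

q = rng.uniform(-1, 1, size=2)          # start from a random posture
for step in range(2000):
    goal = rng.uniform(-2, 2)           # sample a goal in tactile (task) space
    if memory:
        # crude inverse model: reuse the posture whose outcome was closest,
        # perturbed slightly for continued exploration
        nearest = min(memory, key=lambda m: abs(m[0] - goal))
        q = nearest[1] + rng.normal(scale=0.1, size=2)
    outcome = forward(q)                # observe where the arm actually touched
    memory.append((outcome, q))         # update the (here: memory-based) model
```

In this toy version the inverse model is a nearest-neighbor lookup; the actual study's intrinsic-motivation and redundancy-resolution mechanisms are not represented here.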
Gama, Filipe; Shcherban, Maksym; Rolf, Matthias; Hoffmann, Matej
School of Engineering, Computing and Mathematics
Year of publication: 2021
Date of RADAR deposit: 2022-03-07