Abstract
In this paper, we propose a Bayesian generative model that can form multiple categories from each sensory-channel and can associate words with any of four sensory-channels: action, position, object, and color. The paper focuses on cross-situational learning that exploits the co-occurrence between words and sensory-channel information in situations more complex than those used in conventional cross-situational learning. We conducted a learning scenario with both a simulator and a real humanoid iCub robot. In the scenario, a human tutor provided the robot with a sentence describing an object of visual attention and an accompanying action. The scenario was set up as follows: the number of words per sensory-channel was three or four, and the number of learning trials was 20 and 40 for the simulator and 25 and 40 for the real robot. The experimental results showed that the proposed method accurately estimated the multiple categorizations and learned the relationships between the sensory-channels and words. In addition, we conducted an action generation task and an action description task based on the word meanings learned in the cross-situational learning scenario. The results showed that the robot could successfully use the word meanings learned with the proposed method.
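The abstract gives no implementation details, but the core idea of cross-situational learning, that a word's meaning emerges from its repeated co-occurrence with observations in one sensory-channel across many situations, can be illustrated with a toy sketch. The Python snippet below is a minimal, hypothetical illustration of count-based word-channel association with add-alpha smoothing; it is not the paper's Bayesian generative model, and all names (observe, word_meaning, ALPHA, the example situations) are invented for this example.

```python
"""Toy cross-situational learning sketch (illustrative, not the paper's model).

Assumes each trial pairs a set of uttered words with one observed category
per sensory-channel (action, position, object, color). Word/(channel,
category) co-occurrence counts with add-alpha smoothing, a crude stand-in
for a Dirichlet-categorical posterior, let us read off each word's most
strongly associated channel and category.
"""
from collections import defaultdict

CHANNELS = ["action", "position", "object", "color"]
ALPHA = 0.1  # smoothing hyperparameter (illustrative value)

# counts[word][(channel, category)] = co-occurrence count
counts = defaultdict(lambda: defaultdict(float))

def observe(words, situation):
    """One trial: every uttered word co-occurs with every channel's category."""
    for w in words:
        for ch in CHANNELS:
            counts[w][(ch, situation[ch])] += 1.0

def word_meaning(word):
    """Return the (channel, category) pair most associated with `word`
    under the smoothed co-occurrence counts."""
    c = counts[word]
    total = sum(c.values()) + ALPHA * len(c)
    probs = {k: (v + ALPHA) / total for k, v in c.items()}
    return max(probs, key=probs.get)

# Example trials in the spirit of the tutoring scenario.
observe(["grasp", "red", "box"],
        {"action": "grasp", "position": "left", "object": "box", "color": "red"})
observe(["push", "red", "ball"],
        {"action": "push", "position": "right", "object": "ball", "color": "red"})
observe(["grasp", "blue", "ball"],
        {"action": "grasp", "position": "right", "object": "ball", "color": "blue"})

print(word_meaning("red"))    # -> ('color', 'red')
print(word_meaning("grasp"))  # -> ('action', 'grasp')
```

After three trials, "red" is disambiguated because it co-occurred with the color category red in every situation but with differing actions, positions, and objects; the paper's model performs an analogous disambiguation jointly with multimodal categorization in a single generative model.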
DOI
10.3389/fnbot.2017.00066
Publication Date
2017-12-19
Publication Title
FRONTIERS IN NEUROROBOTICS
Volume
11
Publisher
Frontiers Media SA
ISSN
1662-5218
Embargo Period
2024-11-22
Keywords
Bayesian model, cross-situational learning, lexical acquisition, multimodal categorization, symbol grounding, word meaning
Recommended Citation
Taniguchi, A., Taniguchi, T., & Cangelosi, A. (2017) 'Cross-Situational Learning with Bayesian Generative Models for Multimodal Category and Word Learning in Robots', Frontiers in Neurorobotics, 11. Frontiers Media SA. Available at: https://doi.org/10.3389/fnbot.2017.00066