Labeling the Phrases of a Conversational Agent with a Unique Personalized Vocabulary

2022 IEEE/SICE International Symposium on System Integration

Published by IEEE


Mapping spoken text to gestures is an important research topic for robots with conversation capabilities. According to studies on human co-speech gestures, a reasonable solution for this mapping is a concept-based approach, in which a text is first mapped to a semantic cluster (i.e., a concept) containing texts with similar meanings, and each concept is then mapped to a predefined gesture. Adopting a concept-based approach, this paper discusses the practical issue of obtaining concepts for the unique, personalized vocabulary of a conversational agent. Using Microsoft Rinna as the agent, we qualitatively compare concepts obtained automatically through a natural language processing (NLP) approach with those obtained manually through a sociological approach. We then identify three limitations of the NLP approach: at the semantic level with emojis and symbols; at the semantic level with slang, new words, and buzzwords; and at the pragmatic level. We attribute these limitations to Rinna's personalized vocabulary. A follow-up experiment demonstrates that, for Rinna's vocabulary, robot gestures selected using the concept-based approach leave a better impression than randomly selected gestures, suggesting the usefulness of concept-based gesture generation for personalized vocabularies. This study provides insights into the development of gesture generation systems for conversational agents with personalized vocabularies.
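To make the concept-based pipeline concrete, the sketch below illustrates the general idea (text → concept cluster → predefined gesture) rather than the paper's actual method: it assumes scikit-learn with TF-IDF features and k-means as stand-ins for the embedding and clustering steps, and the gesture labels and example utterances are hypothetical.

```python
# Illustrative sketch of a concept-based text-to-gesture mapping.
# The clustering step stands in for the "concept" extraction described in the
# abstract; the corpus and the gesture table below are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

utterances = [
    "Good morning!", "Hello there!",               # greeting-like phrases
    "I'm so happy today!", "That makes me glad!",  # positive-emotion-like phrases
    "I don't know...", "Hmm, not sure.",           # uncertainty-like phrases
]

# Step 1: embed each utterance and cluster the vectors into concepts.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(utterances)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Step 2: map each concept (cluster id) to a predefined gesture.
# Cluster ids are arbitrary, so this table would be assigned after inspecting
# the clusters; the gesture names here are placeholders.
concept_to_gesture = {0: "wave", 1: "open_arms", 2: "head_tilt"}

def gesture_for(text: str) -> str:
    """Return the gesture associated with the concept nearest to the input text."""
    concept = int(kmeans.predict(vectorizer.transform([text]))[0])
    return concept_to_gesture[concept]

print(gesture_for("Hi, good to see you!"))
```

In practice, the quality of such a mapping depends on how well the clustering captures meaning, which is exactly where the limitations identified in the paper (emojis and symbols, slang and new words, and pragmatic usage) would surface for a personalized vocabulary like Rinna's.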