Word embeddings have triggered great advances in natural language processing for non-embodied systems such as scene describers. Embeddings may similarly advance natural language understanding in robots, as long as those robots preserve the semantic structure of an embedding corpus in their actions. That is, a robot must act similarly when it hears ‘jump’ or ‘hop’ and differently when it hears ‘crouch’ or ‘launch’. This could help a robot learn language because it would immediately obey an unknown word such as ‘hop’ if it had been trained to obey ‘jump’. However, ensuring such alignment between semantic and behavioral structure is currently an open problem. In previous work we showed that the choice of a robot's mechanical structure can facilitate or obstruct a machine learning algorithm's ability to induce semantic and behavioral alignment. That work, however, required the investigator to create a loss function for each natural language command, including those for which formal definitions are elusive, such as ‘be interesting’. A more scalable approach is to bypass loss functions altogether by inviting non-experts to supply their own commands and reward robots that obey them. Here we found that more semantic and behavioral alignment existed among robots reinforced under popular commands than among robots reinforced under less popular commands. This suggests that the crowd chose alignment-inducing commands, preferred robots that acted similarly under similar commands, or both. This may pave the way to scalable human-robot interaction by avoiding loss function construction and increasing the probability of zero-shot obedience to previously unheard commands.
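The zero-shot obedience idea above can be sketched concretely: if a policy maps command embeddings to actions, then a command whose embedding lies near a trained command's embedding should elicit a similar action. The following is a minimal sketch only, assuming toy hand-made embeddings (not a real corpus) and a hypothetical fixed linear policy; none of these values come from the paper itself.

```python
import math

# Toy word embeddings (hypothetical values, not from a real corpus):
# semantically similar commands get nearby vectors.
embeddings = {
    "jump":   [0.9, 0.1, 0.2],
    "hop":    [0.8, 0.2, 0.1],
    "crouch": [-0.7, 0.6, 0.0],
}

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# A hypothetical policy: a fixed linear map from a 3-D embedding to a
# 2-D action (say, vertical and horizontal effort).
weights = [[1.0, 0.0, 0.5],
           [0.0, 1.0, -0.5]]

def act(word):
    e = embeddings[word]
    return [sum(w * x for w, x in zip(row, e)) for row in weights]

def action_dist(a, b):
    """Euclidean distance between the actions taken under two commands."""
    return math.dist(act(a), act(b))

# Semantic-behavioral alignment: embeddings that are close in meaning
# should yield actions that are close in behavior.
sem_sim_hop = cosine(embeddings["jump"], embeddings["hop"])
sem_sim_crouch = cosine(embeddings["jump"], embeddings["crouch"])

assert sem_sim_hop > sem_sim_crouch                          # 'hop' is closer in meaning to 'jump'
assert action_dist("jump", "hop") < action_dist("jump", "crouch")  # and closer in behavior
```

Under this toy setup, a robot trained only on ‘jump’ would act nearly the same way when given the unheard command ‘hop’, because any smooth policy over the embedding space carries semantic neighbors to behavioral neighbors.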
