In this study, we introduce a novel system in which a humanoid robot named Alter3 selectively combines three strategies (Mimicking, Imitation, and Dream) to replicate human behavior observed through its camera-based eyes. This work builds on previous research [Masumori et al. (2021); Ikegami et al. (2021)]. In Mimicking mode, Alter3 reproduces “how” a human moves by computing joint angles from the observed posture. In Imitation mode, it identifies and reproduces symbolic poses through a pre-trained Variational AutoEncoder (VAE), essentially replicating “what” the human did. When imitation fails, Alter3 switches to Dream mode, in which it recalls stored memories altered through selection and mutation, generating movements grounded in its own experience. Moreover, in the absence of a human subject, Alter3 closes its eyes and retrieves and performs movements from memory. Our findings reveal that the concurrent use of the three strategies (Mimicking, Imitation, and Dream) stabilizes the state of the latent space and broadens the range of identifiable poses. Furthermore, the behaviors Alter3 generates in Dream mode evolve from the symbolic movements acquired through the Imitation pathway. These findings suggest that new movements can be created from concept-based motions by selectively employing both methodical (Mimicking) and symbolic (Imitation) motions.
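To make the selection-and-mutation idea behind Dream mode concrete, the following is a minimal sketch, not the paper's actual implementation: a remembered movement (a sequence of joint-angle vectors) is selected at random from memory and perturbed with small Gaussian noise. The function name `dream_step` and the `mutation_rate` parameter are illustrative assumptions.

```python
import random


def dream_step(memory, mutation_rate=0.1, rng=None):
    """Hypothetical sketch of Dream mode.

    `memory` is a list of pose sequences; each sequence is a list of
    joint-angle vectors (lists of floats). One sequence is selected at
    random and each angle is mutated with Gaussian noise.
    """
    rng = rng or random.Random()
    # Selection: recall one remembered movement at random.
    recalled = rng.choice(memory)
    # Mutation: jitter every joint angle slightly, yielding a new
    # movement derived from, but not identical to, the memory.
    return [
        [angle + rng.gauss(0.0, mutation_rate) for angle in pose]
        for pose in recalled
    ]
```

Repeatedly feeding the mutated sequences back into memory would let the generated movements drift away from the originally imitated poses over time, in the spirit of the evolution from symbolic movements described above.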