In this paper, we describe the use of mixed reality as a new form of assistance for performing teleoperation tasks in remote scenes. We begin with a brief classification of augmented reality, then describe the principle of our mixed reality system for teleoperation. The system tackles the problem of scene registration using a man–machine cooperative, multisensory vision system. It provides the operator with rich sensory feedback as well as appropriate tools to build, and automatically update, the geometric model of the perceived scene. We also describe a new interactive approach that combines image analysis and mixed reality techniques for assisted 3D geometric and semantic modeling. The paper concludes with applications in nuclear plants, including results on 3D positioning.