To feel a sense of presence in a virtual environment, participants must become part of that environment and interact with it through natural behaviors. This interaction is even more important in networked collaborative virtual environments, where the participants need to see and interact with each other. We present the Virtual Life Network (VLNET), a joint research effort in the field of networked collaborative virtual environments between MIRALab at the University of Geneva and the Computer Graphics Laboratory of the Swiss Federal Institute of Technology, Lausanne. In VLNET, each participant is represented by a virtual human actor whose appearance and movements resemble the participant's real body. By interacting with the environment through this virtual body, participants are perceived by themselves and by others in a natural way. Since it is generally not possible to track all degrees of freedom of the human body in order to reproduce realistic body motion, we introduce motor functions that generate natural motion for standard tasks such as walking and arm movement from the limited information that is tracked (head and hand positions). Using the same virtual human representation, augmented with high-level control, autonomous virtual actors can be introduced into the environment to perform useful tasks or simply to make the environment more appealing. To further enhance the realism of the virtual environment and to simplify object manipulation, we provide a facility for defining object behaviors by attaching motor functions to objects.
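The abstract does not specify how a motor function reconstructs limb motion from a tracked hand position; as a minimal illustrative sketch (not VLNET's actual implementation), an arm motor function can be approximated by analytic inverse kinematics on a planar two-link arm, where `two_link_ik`, the link lengths, and the coordinate frame are all assumptions introduced here:

```python
import math

def two_link_ik(x, y, l1=0.3, l2=0.3):
    """Analytic inverse kinematics for a planar two-link arm.

    Given a tracked hand position (x, y) relative to the shoulder,
    return (shoulder_angle, elbow_angle) in radians. Link lengths
    l1 (upper arm) and l2 (forearm) are illustrative values in metres.
    """
    d = math.sqrt(x * x + y * y)
    # Clamp to the reachable workspace so tracking noise can never
    # push the target out of range and cause a math domain error.
    d = min(max(d, abs(l1 - l2) + 1e-9), l1 + l2 - 1e-9)
    # Law of cosines gives the elbow angle.
    cos_elbow = (d * d - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle: direction to the hand, corrected for the
    # offset introduced by the bent elbow.
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```

Driving the full skeleton from only head and hand sensors would combine several such solvers with a walking generator, but the closed-form solution above conveys the core idea of synthesizing joint angles from sparse tracked positions.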
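The mechanism for attaching motor functions to objects is likewise left abstract; one plausible reading, sketched here under assumed names (`VirtualObject`, `attach_motor`, `drift_upward` are all hypothetical), is a behavior as a callable bound to a scene object and advanced once per simulation frame:

```python
class VirtualObject:
    """Minimal sketch of a scene object that can carry a behavior.

    A "motor function" is modelled here as any callable that receives
    the object and the elapsed frame time and updates the object's state.
    """

    def __init__(self, name, position=(0.0, 0.0, 0.0)):
        self.name = name
        self.position = list(position)
        self.motor = None  # attached motor function, if any

    def attach_motor(self, motor):
        self.motor = motor

    def update(self, dt):
        if self.motor is not None:
            self.motor(self, dt)

def drift_upward(obj, dt, speed=0.5):
    # Hypothetical behavior: raise the object at a constant speed (m/s).
    obj.position[1] += speed * dt

balloon = VirtualObject("balloon")
balloon.attach_motor(drift_upward)
for _ in range(10):      # simulate ten frames of 0.1 s each
    balloon.update(0.1)  # the balloon rises to y = 0.5 m
```

This arrangement keeps object manipulation simple for the participant: grabbing, releasing, or animating an object only requires swapping the attached callable rather than special-casing each object type.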