A planetary rover acquires a large collection of images while exploring its environment. For example, during the 1997 Mars Pathfinder mission, 2D stereo images of the Martian surface captured by the lander and the Sojourner rover were transmitted to Earth for scientific analysis and navigation planning. Because of Sojourner's limited memory and computational power, most images were captured by the lander and transmitted to Earth directly for processing. If, instead, these images were merged at the rover site into a 3D representation of the rover's environment using its on-board resources, more information could potentially be transmitted to Earth in a compact form. However, constructing a 3D model from multiple views is a highly challenging task even for the new-generation rovers (Spirit and Opportunity) operating on the Martian surface at the time this article was written. Moreover, low transmission rates and intermittent communication windows between Earth and Mars make transmitting any data more difficult. We propose a robust and computationally efficient method for progressive transmission of multi-resolution 3D models of Martian rocks and soil reconstructed from a series of stereo images. For visualization of these models on Earth, we have developed a new multimodal visualization setup that integrates vision and touch.
Our scheme for reconstructing 3D models of Martian rocks from 2D images for visualization on Earth involves four main steps: a) acquisition of scans: depth maps are generated from stereo image pairs; b) integration of scans: the scans are registered (positioned and oriented with respect to each other) and fused into a 3D volumetric representation of the rocks using an octree; c) transmission: the volumetric data is encoded and progressively transmitted to Earth; d) visualization: a surface model is reconstructed from the transmitted data on Earth and displayed to the user through a new autostereoscopic visualization table, with a haptic device providing touch feedback. To test the practical utility of our approach, we first captured a sequence of stereo images of a rock surface from various viewpoints in the JPL MarsYard using a mobile cart and then performed a series of 3D reconstruction experiments. In this paper, we discuss the steps of our reconstruction process, our multimodal visualization system, and the tradeoffs required to transmit multi-resolution 3D models to Earth efficiently under the constraints of limited on-board computational resources, low transmission rates, and intermittent communication between Earth and Mars.
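The integration step (b) can be illustrated with a minimal sketch of occupancy-octree fusion, assuming the depth maps have already been converted to registered 3D point sets; the class and function names below are illustrative, not taken from the original system.

```python
# Hypothetical sketch of step (b): fusing registered scans into an octree.
# A cell is marked occupied whenever a point from any scan falls inside it,
# and is subdivided recursively down to a fixed maximum depth.

import numpy as np

class OctreeNode:
    """A cubic cell that subdivides into up to 8 children."""
    def __init__(self, center, half_size, depth):
        self.center = np.asarray(center, dtype=float)
        self.half_size = half_size
        self.depth = depth
        self.children = None   # None => leaf so far
        self.occupied = False

    def insert(self, point, max_depth):
        self.occupied = True
        if self.depth == max_depth:
            return
        if self.children is None:
            self.children = {}
        # Octant index: one bit per axis, set if the point lies above center.
        octant = tuple(int(point[i] > self.center[i]) for i in range(3))
        if octant not in self.children:
            offset = (np.array(octant) * 2 - 1) * (self.half_size / 2)
            self.children[octant] = OctreeNode(self.center + offset,
                                               self.half_size / 2,
                                               self.depth + 1)
        self.children[octant].insert(point, max_depth)

def fuse_scans(scans, center, half_size, max_depth):
    """Insert every point of every registered scan into one shared octree."""
    root = OctreeNode(center, half_size, 0)
    for scan in scans:
        for p in scan:
            root.insert(p, max_depth)
    return root
```

In a real system the registration itself (recovering each scan's pose) is the hard part; the sketch above assumes poses are already known, as they would be after alignment.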
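The progressive transmission in step (c) can be sketched with a standard breadth-first octree encoding: one byte per node records which of its eight children are occupied, and because nodes are emitted level by level, a stream truncated at any level boundary still decodes to a coarser but valid model. The nested-dict tree format and function names here are assumptions for illustration only.

```python
# Illustrative breadth-first occupancy coding for progressive transmission.
# Tree format (assumed): a node is a dict mapping child index (0-7) to the
# child's subtree dict; a leaf is an empty dict.

from collections import deque

def encode_breadth_first(root):
    """Emit one child-occupancy byte per node, level by level."""
    stream = bytearray()
    queue = deque([root])
    while queue:
        node = queue.popleft()
        byte = 0
        for i in range(8):
            if i in node:
                byte |= 1 << i
                queue.append(node[i])
        stream.append(byte)
    return bytes(stream)

def decode_breadth_first(stream):
    """Rebuild the tree; a truncated stream simply stops at a coarser level."""
    root = {}
    queue = deque([root])
    for byte in stream:
        if not queue:
            break
        node = queue.popleft()
        for i in range(8):
            if byte & (1 << i):
                child = {}
                node[i] = child
                queue.append(child)
    return root
```

This level-by-level ordering is what makes the transmission progressive: Earth can reconstruct and display a low-resolution surface from the first few bytes received, with later bytes only refining it.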