In the Turing test, a computer is deemed to think intelligently if it can generate answers indistinguishable from those of a human. We previously proposed a Turing-like handshake test for assessing motor aspects of machine intelligence. The test is administered through a telerobotic system in which an interrogator holds a robotic stylus and interacts with another party (human, artificial, or a linear combination of the two). Here, we analyze and experimentally test the properties of three versions of the Turing-like handshake test: Pure, Weighted, and Noise. Following the framework of signal detection theory, we propose a simplified model of the interrogator's evaluation of human-likeness; we simulate this model and assess the statistical power of each version of the handshake test. Our simulation study suggests that the best choice of test depends on how well the interrogator can identify a human handshake when compared with a model: the Pure test is preferable at intermediate and high levels of interrogator confusion, whereas the Weighted and Noise tests perform well at low levels of confusion. We then present the results of an experimental study comparing three simple models of a human handshake. We demonstrate that these handshake models can be distinguished, and we discuss the relative advantages of each measure, possible future handshake models and Turing-like tests, and their role in measuring and promoting the design of human-like robots for rehabilitation robotics, teleoperation, and telepresence.
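
The signal-detection framing mentioned above can be illustrated with a toy simulation. The sketch below is purely illustrative and is not the paper's actual model: it assumes a hypothetical Gaussian observer in which each handshake elicits a noisy internal human-likeness grade, with the grade noise standing in for interrogator confusion; all function names, distributions, and parameter values are assumptions introduced here.

```python
import random
import statistics

def simulate_grades(mu_human, mu_model, sigma, n, seed=0):
    """Draw perceived human-likeness grades for human and model handshakes.

    Hypothetical Gaussian observer (an illustrative assumption): each
    handshake elicits an internal grade corrupted by noise of spread
    `sigma`, which plays the role of interrogator confusion.
    """
    rng = random.Random(seed)
    human = [rng.gauss(mu_human, sigma) for _ in range(n)]
    model = [rng.gauss(mu_model, sigma) for _ in range(n)]
    return human, model

def dprime(human, model):
    """Signal-detection discriminability index d' (mean separation
    divided by the pooled standard deviation of the grades)."""
    pooled = statistics.pstdev(human + model)
    return (statistics.mean(human) - statistics.mean(model)) / pooled

def detection_rate(human, model):
    """Fraction of human/model grade pairs in which the human handshake
    receives the higher grade -- a crude proxy for test power."""
    wins = sum(h > m for h, m in zip(human, model))
    return wins / len(human)
```

With low grade noise (low confusion) the detection rate approaches 1, while with high noise it falls toward chance (0.5), mirroring the abstract's point that which version of the test is most powerful depends on the interrogator's level of confusion.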