We compare an entropy estimator $H_z$ recently discussed by Zhang (2012) with two estimators, $H_1$ and $H_2$, introduced by Grassberger (2003) and Schürmann (2004). We prove the identity $H_z \equiv H_1$, which has not been taken into account by Zhang (2012). Then we prove that the systematic error (bias) of $H_1$ is less than or equal to the bias of the ordinary likelihood (or plug-in) estimator of entropy. Finally, by numerical simulation, we verify that for the most interesting regime of small sample estimation and large event spaces, the estimator $H_2$ has a significantly smaller statistical error than $H_z$.
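As a point of reference, the following is a minimal sketch of the plug-in estimator mentioned above, together with a bias-corrected estimator in the spirit of Grassberger (2003). The correction function $G(n)$ is written from the commonly quoted form of that estimator and is an assumption here, not taken from this article; the function names are illustrative.

```python
import numpy as np
from scipy.special import digamma

def plugin_entropy(counts):
    """Ordinary likelihood ("plug-in") entropy estimate in nats."""
    n = np.asarray(counts, dtype=float)
    n = n[n > 0]                      # empty bins contribute nothing
    p = n / n.sum()                   # maximum-likelihood probabilities
    return -np.sum(p * np.log(p))

def grassberger_entropy(counts):
    """Bias-corrected estimate using the commonly cited Grassberger (2003)
    correction G(n) = psi(n) + (1/2)(-1)^n [psi((n+1)/2) - psi(n/2)];
    this form is an assumption, not quoted from the present article."""
    n = np.asarray(counts, dtype=float)
    n = n[n > 0]
    N = n.sum()
    G = digamma(n) + 0.5 * (-1.0) ** n * (digamma((n + 1) / 2) - digamma(n / 2))
    return np.log(N) - np.sum(n * G) / N

# Small-sample, large-alphabet regime: counts from 200 draws over 1000 events.
rng = np.random.default_rng(0)
counts = np.bincount(rng.integers(0, 1000, size=200), minlength=1000)
print(plugin_entropy(counts), grassberger_entropy(counts))
```

In this regime the plug-in estimate is known to be biased downward, which is why bias-corrected estimators of this kind are compared in the article.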
