Abstract
This letter proves that a ReLU network can approximate any continuous function with arbitrary precision by means of piecewise linear or constant approximations. For a univariate function $f(x)$, we use a composite of ReLUs to produce a line segment; the subnetworks for all of the line segments together comprise a ReLU network that is a piecewise linear approximation to $f(x)$. For a multivariate function $f(\boldsymbol{x})$, ReLU networks are constructed to approximate a piecewise linear function derived from triangulation methods approximating $f(\boldsymbol{x})$. A neural unit called a TRLU is constructed from a ReLU network; piecewise constant approximations, such as Haar wavelets, are implemented by rectifying the linear output of a ReLU network via TRLUs. New interpretations of deep layers, as well as some other results, are also presented.
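To make the univariate claim concrete, here is a minimal sketch, not the letter's own segment-by-segment composite construction: it uses the standard identity that a piecewise linear interpolant with knots $x_0 < \dots < x_n$ equals a one-hidden-layer sum of shifted ReLUs, $g(x) = f(x_0) + \sum_i c_i\,\mathrm{relu}(x - x_i)$ with $c_i$ the slope changes. The helper name `relu_pwl_approx` is hypothetical.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_pwl_approx(f, a, b, n):
    """One-hidden-layer ReLU representation of the piecewise linear
    interpolant of f at n + 1 equally spaced knots on [a, b]."""
    knots = np.linspace(a, b, n + 1)
    vals = f(knots)
    slopes = np.diff(vals) / np.diff(knots)   # slope on each segment
    coeffs = np.diff(slopes, prepend=0.0)     # slope change at each knot
    def g(x):
        x = np.asarray(x, dtype=float)
        # sum of shifted ReLUs reproduces the interpolant exactly on [a, b]
        return vals[0] + relu(x[..., None] - knots[:-1]) @ coeffs
    return g

# usage: approximate sin on [0, 2*pi] with 32 segments
g = relu_pwl_approx(np.sin, 0.0, 2 * np.pi, 32)
xs = np.linspace(0.0, 2 * np.pi, 1000)
print(np.max(np.abs(g(xs) - np.sin(xs))))  # max error shrinks as n grows
```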
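The TRLU itself is defined in the full text, not in this abstract. As an assumption-laden stand-in for the piecewise constant idea, the sketch below builds a trapezoid-shaped unit from four ReLUs, approximately an indicator of an interval, and sums scaled copies to obtain a Haar-style piecewise constant approximation; the name `trapezoid` and the ramp width `eps` are illustrative choices, not the letter's notation.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def trapezoid(x, a, b, eps=1e-3):
    """Trapezoid-shaped unit from four ReLUs: 1 on [a + eps, b - eps],
    0 outside [a, b], with linear ramps of width eps at both ends."""
    return (relu(x - a) - relu(x - (a + eps))
            - relu(x - (b - eps)) + relu(x - b)) / eps

# usage: Haar-style piecewise constant approximation of sin on [0, 2*pi];
# each cell contributes its midpoint value, and the narrow ramps occupy a
# vanishing fraction of each cell as eps -> 0
edges = np.linspace(0.0, 2 * np.pi, 33)
mids = 0.5 * (edges[:-1] + edges[1:])
xs = np.linspace(0.0, 2 * np.pi, 1000)
approx = trapezoid(xs[:, None], edges[:-1], edges[1:], 1e-4) @ np.sin(mids)
```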