Abstract

The possibility of approximating a continuous function on a compact subset of the real line by a feedforward single hidden layer neural network with a sigmoidal activation function has been studied in many papers. Such networks can approximate an arbitrary continuous function provided that an unlimited number of neurons in a hidden layer is permitted. In this note, we consider constructive approximation on any finite interval of the real axis by neural networks with only one neuron in the hidden layer. We construct algorithmically a smooth, sigmoidal, almost monotone activation function σ providing approximation to an arbitrary continuous function within any degree of accuracy. This algorithm is implemented in a computer program, which computes the value of σ at any reasonable point of the real axis.

1  Introduction

Neural networks are being successfully applied across an extraordinary range of problem domains, in fields as diverse as computer science, finance, medicine, engineering, and physics. The main reason for such popularity is their ability to approximate arbitrary functions. Over the past 30 years, a number of results have been published showing that the artificial neural network called a feedforward network with one hidden layer can approximate arbitrarily well any continuous function of several real variables. These results play an important role in determining the boundaries of efficacy of the considered networks. But the proofs usually do not state how many neurons should be used in the hidden layer. The purpose of this note is to prove constructively that a neural network having only one neuron in its single hidden layer can approximate arbitrarily well any continuous function defined on any compact subset of the real axis.

Neurons are the building blocks for neural networks. An artificial neuron is a device with n real inputs and one output. This output is generally a superposition of a univariate function with an affine function in the n-dimensional Euclidean space, that is, a function of the form σ(w · x − θ). The neurons are organized in layers, and each neuron of one layer is connected to each neuron of the subsequent layer. Information flows from one layer to the subsequent layer (thus the term feedforward). A feedforward neural network with one hidden layer has three layers: the input layer, the hidden layer, and the output layer. A feedforward network with one hidden layer consisting of r neurons computes functions of the form
∑_{i=1}^{r} c_i σ(w^i · x − θ_i).    (1.1)
Here the vectors w^i, called weights, are vectors in ℝⁿ; the thresholds θ_i and the coefficients c_i are real numbers; and σ is a univariate activation function. In many applications, it is convenient to take the activation function σ as a sigmoidal function, defined by the limit relations
σ(t) → 0 as t → −∞  and  σ(t) → 1 as t → +∞.
Common examples of sigmoidal activation functions include the logistic function σ(t) = 1/(1 + e^{−t}), the Heaviside step function, and the piecewise-linear ramp function σ(t) = min{1, max{0, t}}. The literature on neural networks abounds with the use of such functions and their superpositions.
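As a quick sketch (our own illustration; the function names are ours, and the particular activation choices are standard examples rather than taken from this note), the sigmoidal property and networks of the form 1.1 can be written directly:

```python
import math

# Two standard sigmoidal activation functions: each tends to 0 at -infinity
# and to 1 at +infinity.

def logistic(t):
    """Logistic (squashing) function 1 / (1 + exp(-t)), numerically stable."""
    if t >= 0:
        return 1.0 / (1.0 + math.exp(-t))
    e = math.exp(t)
    return e / (1.0 + e)

def ramp(t):
    """Piecewise-linear ramp: 0 for t <= 0, t on [0, 1], 1 for t >= 1."""
    return min(1.0, max(0.0, t))

def network(x, weights, thresholds, coeffs, sigma=logistic):
    """Single hidden layer network of the form 1.1 with univariate input x."""
    return sum(c * sigma(w * x - th)
               for w, th, c in zip(weights, thresholds, coeffs))
```

With one neuron (r = 1), `network` reduces to exactly the kind of function c1·σ(x − θ) + c0 studied in this note.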

In approximation by neural networks, there are two main problems. The first is the density problem of determining the conditions under which an arbitrary target function can be approximated arbitrarily well by neural networks. The second problem, called the complexity problem, is to determine how many neurons in hidden layers are necessary to give a prescribed degree of approximation. This problem is almost the same as the problem of degree of approximation (see Barron, 1993; Cao, Xie, & Xu, 2008; Hahm & Hong, 1999). The possibility of approximating a continuous function on a compact subset of the real line (or n-dimensional space) by a single hidden layer neural network with a sigmoidal activation function has been well studied in a number of papers, using a variety of methods. Carroll and Dickinson (1989) used the inverse Radon transformation to prove the universal approximation property of single hidden layer neural networks. Gallant and White (1988) constructed a specific continuous, nondecreasing sigmoidal function, called a cosine squasher, from which it was possible to obtain any Fourier series. Thus, their activation function had the density property. Cybenko (1989) and Funahashi (1989), independently of each other, established that feedforward neural networks with a continuous sigmoidal activation function can approximate any continuous function within any degree of accuracy on compact subsets of ℝⁿ. Cybenko’s proof uses a functional analysis method, combining the Hahn–Banach theorem and the Riesz representation theorem, while Funahashi’s proof applies the result of Irie and Miyake (1988) on the integral representation of functions, using a kernel that can be expressed as a difference of two sigmoidal functions. Hornik, Stinchcombe, and White (1989) applied the Stone–Weierstrass theorem, using trigonometric functions.

Kůrková (1992) proved that staircase-like functions of any sigmoidal type can approximate continuous functions on any compact subset of the real line within an arbitrary accuracy. This is effectively used in Kůrková’s subsequent results, which show that a continuous multivariate function can be approximated arbitrarily well by two hidden layer neural networks with a sigmoidal activation function (see Kůrková, 1991, 1992).

Chen, Chen, and Liu (1992) extended the result of Cybenko by proving that any continuous function on a compact subset of ℝⁿ can be approximated by a single hidden layer feedforward network with a bounded (not necessarily continuous) sigmoidal activation function. Almost the same result was independently obtained by Jones (1990).

Costarelli and Spigler (2013) reconsidered Cybenko’s approximation theorem and, for a given function f, constructed certain sums of the form 1.1 that approximate f within any degree of accuracy. In their result, as in Chen et al. (1992), the activation function σ is bounded and sigmoidal; therefore, the result can be viewed as a density result for the set of all functions of the form 1.1.

Chui and Li (1992) proved that a single hidden layer network with a continuous sigmoidal activation function having integer weights and thresholds can approximate an arbitrary continuous function on a compact subset of ℝⁿ. Ito (1991) established a density result for continuous functions on a compact subset of ℝⁿ by neural networks with a sigmoidal function having only unit weights. Density properties of a single hidden layer network with a restricted set of weights were also studied in other papers (for a detailed discussion, see Ismailov, 2012).

In many subsequent papers dealing with the density problem, nonsigmoidal activation functions were allowed. Among them are the papers by Stinchcombe and White (1990), Cotter (1990), Hornik (1991), Mhaskar and Micchelli (1992), and other researchers. The most general result in this direction belongs to Leshno, Lin, Pinkus, and Schocken (1993). They proved that the necessary and sufficient condition for a continuous activation function to have the density property is that it not be a polynomial. (For a detailed discussion of most of the results in this section, see the review paper by Pinkus, 1999.)

It should be remarked that in all the works mentioned above, the number of neurons r in the hidden layer is not fixed. As such, to achieve a desired precision, one may take an excessive number of neurons. This, in turn, gives rise to the problem of complexity (see above).

Our approach to the problem of approximation by single hidden layer feedforward networks is different and quite simple. We consider networks of the form 1.1 defined on ℝ with a limited number of neurons (r is fixed!) in a hidden layer and ask the following natural question: Is it possible to construct a well-behaved (i.e., sigmoidal, smooth, almost monotone) universal activation function providing approximation to arbitrary continuous functions on any compact set in ℝ within any degree of precision? We show that this is possible even in the case of a feedforward network with only one neuron in its hidden layer (i.e., in the case r = 1). The basic form of our theorem claims that there exists a smooth, sigmoidal, almost monotone activation function σ with the following property: for each univariate continuous function f on the unit interval and any positive ε, one can choose three numbers c0, c1, and s such that the function c1σ(t + s) + c0 gives an ε-approximation to f. It should be remarked that we not only prove the existence result but also give an algorithm for constructing the universal sigmoidal function. For a wide class of Lipschitz continuous functions, we also give an algorithm for evaluating these numbers.

The main theoretical idea behind the construction of σ is that polynomials with rational coefficients form a countable dense subset of the space of continuous functions on [0, 1]. Let u1, u2, … be the sequence of these polynomials. By translating members of this sequence to the right by 1, 3, 5, … units, respectively, scaling them vertically, and adding offsets, we can construct a set of polynomials bounded (between 0 and 1) and almost monotone on the intervals [1, 2], [3, 4], [5, 6], …, respectively. Now let s be a function defined on the union of these intervals and coinciding on each interval with the corresponding polynomial. Then σ can be obtained by a smooth extension of s to the whole real line in such a way that σ(t) → 0 as t → −∞ and σ(t) → 1 as t → +∞.
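To make the idea concrete, here is a toy version (our own sketch, with a finite list of polynomials; the actual construction uses all polynomials with rational coefficients, keeps each rescaled piece within prescribed bounds to ensure almost monotonicity, and fills the gaps with smooth transitions):

```python
# Toy sketch of the construction: translate the m-th polynomial to the
# interval [2m - 1, 2m] and rescale its values into [0, 1].

polys = [
    lambda x: 0.5,          # a constant polynomial
    lambda x: x,
    lambda x: x * x - x,
]

GRID = [i / 100 for i in range(101)]    # sample grid on [0, 1]

def s(t):
    """Piecewise function coinciding on [2m - 1, 2m] with the m-th
    vertically rescaled polynomial."""
    m = int((t + 1) // 2)
    if not (2 * m - 1 <= t <= 2 * m) or not (1 <= m <= len(polys)):
        raise ValueError("t lies outside the polynomial pieces")
    u = polys[m - 1]
    lo = min(u(g) for g in GRID)
    hi = max(u(g) for g in GRID)
    x = t - (2 * m - 1)                 # translate back to [0, 1]
    return u(x) if hi == lo else (u(x) - lo) / (hi - lo)
```

Because the rescaling is affine and its coefficients are known, each original polynomial can be recovered from this piecewise function by one multiplication and one addition, which is what a single neuron computes.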

For numerical experiments we used SageMath (Stein et al., 2015). We wrote code for creating the graph of σ and computing σ(t) at any reasonable point t of the real axis. (The code is available at http://sites.google.com/site/njguliyev/papers/sigmoidal.)

2  The Theoretical Result

We begin this section with the definition of a λ-increasing (λ-decreasing) function. Let λ be any nonnegative number. A real function f defined on a set X ⊆ ℝ is called λ-increasing (λ-decreasing) if there exists an increasing (decreasing) function u: X → ℝ such that |f(t) − u(t)| ≤ λ for all t ∈ X. If u is strictly increasing (or strictly decreasing), then the above function f is called λ-strictly increasing (or λ-strictly decreasing). Clearly, 0-monotonicity coincides with the usual concept of monotonicity, and a λ1-increasing function is λ2-increasing if λ1 ≤ λ2.
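For instance (our own example, not from the text), f(t) = t + 2 sin t is 2-increasing with u(t) = t as the increasing companion function, although f itself is not monotone; a quick numerical check:

```python
import math

lam = 2.0
f = lambda t: t + lam * math.sin(t)   # not monotone: f'(pi) = 1 - lam < 0
u = lambda t: t                       # increasing, and |f - u| <= lam

ts = [i * 0.01 for i in range(-1000, 1001)]
is_lam_increasing = all(abs(f(t) - u(t)) <= lam for t in ts)
is_monotone = all(f(ts[i + 1]) >= f(ts[i]) for i in range(len(ts) - 1))
```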

The following theorem is valid.

Theorem 1.
For any positive numbers α and λ, there exists a C^∞(ℝ), sigmoidal activation function σ: ℝ → ℝ that is strictly increasing on (−∞, α), λ-strictly increasing on [α, +∞), and satisfies the following property: for any finite closed interval [a, b] of ℝ and any f ∈ C[a, b] and ε > 0, there exist three real numbers c0, c1, and θ for which
|f(x) − c1σ(x/(b − a) + θ) − c0| < ε
for all x ∈ [a, b].
Proof.

Let λ be any positive number. Divide the interval [1, +∞) into the segments [1, 2], [2, 3], …. Let h be any strictly increasing, infinitely differentiable function on [1, +∞) with the properties

1. 0 < h(t) < 1 for all t ∈ [1, +∞).

2. 1 − h(1) ≤ λ.

3. h(t) → 1, as t → +∞.

The existence of a strictly increasing smooth function satisfying these properties is easy to verify. Note that from conditions 1 to 3, it follows that any function σ satisfying the inequality h(t) ≤ σ(t) ≤ 1 for all t ∈ [1, +∞) is λ-strictly increasing and σ(t) → 1, as t → +∞.

We are going to construct σ obeying the required properties in stages. Let u1, u2, … be the sequence of all polynomials with rational coefficients defined on [0, 1]. First, we define σ on the closed intervals [2m − 1, 2m], m = 1, 2, …, as the function
or, equivalently,
σ(t) = a_m + b_m u_m(t − 2m + 1),  t ∈ [2m − 1, 2m],    (2.1)
where a_m and b_m are chosen in such a way that the condition
h(t) ≤ σ(t) ≤ 1    (2.2)
holds for all t ∈ [2m − 1, 2m].

At the second stage, we define σ on the intervals [2m, 2m + 1], m = 1, 2, …, so that it is infinitely differentiable there and satisfies the inequality, equation 2.2. Finally, on (−∞, 1), we define σ while maintaining the strict monotonicity property and in such a way that σ(t) → 0 as t → −∞. We obtain from the properties of h and the condition 2.2 that σ is a λ-strictly increasing function on the interval [1, +∞) and σ(t) → 1, as t → +∞. Note that the construction of a σ obeying all the above conditions is feasible. We show this in the next section.

From equation 2.1, it follows that for each m = 1, 2, …,
u_m(t) = (σ(t + 2m − 1) − a_m)/b_m,  t ∈ [0, 1].    (2.3)
Now let g be any continuous function on the unit interval [0, 1]. By the density of polynomials with rational coefficients in the space of continuous functions over any compact subset of ℝ, for any ε > 0 there exists a polynomial u_m(t) of the above form such that |g(t) − u_m(t)| < ε
for all t ∈ [0, 1]. This, together with equation 2.3, means that
|g(t) − c1σ(t + s) − c0| < ε    (2.4)
for some c0, c1, s, and all t ∈ [0, 1].
Note that equation 2.4 proves our theorem for the unit interval [0, 1]. Using a linear transformation, it is not difficult to go from [0, 1] to any finite closed interval [a, b]. Indeed, let σ be constructed as above and let ε be an arbitrarily small positive number. The transformed function g(t) = f(a + (b − a)t) is well defined on [0, 1], and we can apply the inequality, equation 2.4. Now, using the inverse transformation t = (x − a)/(b − a), we can write that
|f(x) − c1σ(x/(b − a) + θ) − c0| < ε,    (2.5)
where θ = s − a/(b − a) and c0, c1 are the numbers from equation 2.4. The last inequality, equation 2.5, completes the proof.
Remark 1.

The idea of using a limited number of neurons in hidden layers of a feedforward network was first implemented by Maiorov and Pinkus (1999). They proved the existence of a sigmoidal, strictly increasing, analytic activation function such that two hidden layer neural networks with this activation function and a fixed number of neurons in each hidden layer can approximate any continuous multivariate function over the unit cube in ℝⁿ. Note that the result is of theoretical value; the authors do not suggest constructing and using their sigmoidal function. Using the techniques developed in Maiorov and Pinkus (1999), Ismailov showed theoretically that if we replace the demand of analyticity by smoothness and monotonicity by λ-monotonicity, then the number of neurons in hidden layers can be reduced substantially (see Ismailov, 2014). We stress again that in both papers, the algorithmic implementation of the obtained results is not discussed or illustrated by numerical examples.

In the next section, we propose an algorithm for computing the sigmoidal function σ at any point of the real axis. (The code of this algorithm is available at http://sites.google.com/site/njguliyev/papers/sigmoidal.) As examples, we include in this note the graph of σ (see Figure 1) and a numerical table (see Table 1) containing several computed values of this function.

Figure 1:

The graph of σ.


Table 1:
Some Computed Values of σ.

3  Algorithmic Construction of the Universal Sigmoidal Function

3.1  Step 1

Definition of h. Set
Note that this function satisfies conditions 1 to 3 in the proof of theorem 1.
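The explicit formula for h is not reproduced above. Purely as an illustration (our own choice, not the paper's formula), a function that is smooth and strictly increasing on [1, +∞), takes values in (0, 1), tends to 1, and stays within λ of 1 at the left endpoint could look like:

```python
import math

lam = 0.1   # the positive number lambda fixed in the proof (an assumption here)

def h(t):
    """Smooth, strictly increasing on [1, +inf), values in (0, 1),
    h(t) -> 1 as t -> +inf, and 1 - h(1) <= lam."""
    return 1.0 - min(lam, 0.5) * math.exp(1.0 - t)
```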

3.2  Step 2

Enumerating the rationals. Let (a_n) be Stern’s diatomic sequence:

a_1 = 1,  a_{2n} = a_n,  a_{2n+1} = a_n + a_{n+1}  (n = 1, 2, …).
It should be remarked that this sequence first appeared in print in 1858 (Stern, 1858) and has been the subject of many papers (see Northshield, 2010).
The Calkin–Wilf (Calkin & Wilf, 2000) sequence q_n = a_n/a_{n+1} contains every positive rational number exactly once; hence the sequence

r_1 = 0,  r_{2n} = q_n,  r_{2n+1} = −q_n  (n = 1, 2, …)

is an enumeration of all the rational numbers. It is possible to calculate q_n and r_n directly. Let

n = (1^{f_k} 0^{f_{k−1}} ⋯ 1^{f_0})_2

be the binary code of n. Here, f_0, f_1, …, f_k show the numbers of 1-digits, 0-digits, 1-digits, …, respectively, starting from the end of the binary code. Note that f_0 can be zero. Then q_n equals the continued fraction
q_n = [f_0; f_1, …, f_k] = f_0 + 1/(f_1 + 1/(f_2 + ⋯ + 1/f_k)).    (3.1)
The calculation of r_n reduces to the calculation of q_{n/2}, if n is even, and of −q_{(n−1)/2}, if n is odd.
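The recurrences above can be transcribed directly (our own implementation; function names are ours):

```python
from fractions import Fraction
from functools import lru_cache

# Stern's diatomic sequence and the Calkin-Wilf enumeration of the rationals.

@lru_cache(maxsize=None)
def stern(n):
    """Stern's diatomic sequence: 0, 1, 1, 2, 1, 3, 2, 3, 1, 4, ..."""
    if n < 2:
        return n
    return stern(n // 2) if n % 2 == 0 else stern(n // 2) + stern(n // 2 + 1)

def q(n):
    """n-th Calkin-Wilf rational (n >= 1); every positive rational occurs once."""
    return Fraction(stern(n), stern(n + 1))

def r(n):
    """Enumeration of all rationals: r_1 = 0, r_2n = q_n, r_2n+1 = -q_n."""
    if n == 1:
        return Fraction(0)
    return q(n // 2) if n % 2 == 0 else -q((n - 1) // 2)
```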

3.3  Step 3

Enumerating the polynomials with rational coefficients. It is clear that every positive rational number determines a unique finite continued fraction [n_1; n_2, …, n_l] with integer n_1 ≥ 0, positive integers n_2, …, n_l, and n_l ≥ 2 whenever l > 1.

Since each nonzero polynomial with rational coefficients can uniquely be written as p(x) = d_0 + d_1 x + ⋯ + d_k x^k, where d_k ≠ 0 (i.e., k = deg p), we have the following bijection between the set of all nonzero polynomials with rational coefficients and the set of all positive rational numbers:
We define u_1 ≡ 0 and u_{n+1} as the polynomial corresponding to q_n under this bijection, where n = 1, 2, ….

3.4  Step 4

Construction of σ on the intervals [2m − 1, 2m]. For each polynomial u_m, set
3.2
and
3.3
It is not difficult to verify that
If u_m is constant, then we define
Otherwise, we define σ as the function
3.4
where
3.5
Note that a_m, b_m are the coefficients of the linear function mapping the closed interval [min u_m, max u_m] onto the closed interval prescribed by condition 2.2. Thus,
3.6
for all t ∈ [2m − 1, 2m].

3.5  Step 5

Construction of σ on the intervals [2m, 2m + 1]. To define σ on these intervals, we will use the smooth transition function,
where
It is easy to see that β_{a,b}(t) = 1 for t ≤ a, 0 < β_{a,b}(t) < 1 for a < t < b, and β_{a,b}(t) = 0 for t ≥ b. Set
Since both and belong to the interval , we obtain .
First, we extend σ smoothly to the interval [2m, 2m + 1/2]. Choose a number such that
3.7
Let us show how one can choose this number. If u_m is constant, it is sufficient to take . If u_m is not constant, we take
where C is a number satisfying |u_m′(t)| ≤ C for t ∈ [0, 1]. Now define σ on the first half of the interval as the function
3.8
Let us verify that σ satisfies condition 2.2. Indeed, if , then there is nothing to prove, since . If , then , and hence from equation 3.8, it follows that for each t, σ(t) is between the numbers K and . On the other hand, from equation 3.7, we obtain that
which, together with equations 3.4 and 3.6, yields that , for . Since , the inclusion is valid. Now since both K and belong to , we finally conclude that
We define σ on the second half of the interval, [2m + 1/2, 2m + 1], in a similar way:
where
One can easily verify, as above, that the constructed σ satisfies condition 2.2 on [2m, 2m + 1] and
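The paper's transition function formula is elided in this extraction; a standard C-infinity transition of this kind (our own sketch, with the orientation assumed as described above) is:

```python
import math

# A smooth transition function: identically 1 on (-inf, a], identically 0 on
# [b, +inf), and strictly between 0 and 1 on (a, b), with all derivatives
# matching at the endpoints.

def rho(t):
    """Smooth function vanishing, with all derivatives, at t <= 0."""
    return math.exp(-1.0 / t) if t > 0 else 0.0

def beta(t, a, b):
    """Smooth transition from 1 (left of a) to 0 (right of b)."""
    return rho(b - t) / (rho(b - t) + rho(t - a))
```

Gluing two pieces of σ with `beta` keeps the result infinitely differentiable while interpolating between the two boundary values.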

3.6  Step 6

Construction of σ on (−∞, 1]. Finally, we define
It is not difficult to verify that σ is a strictly increasing, smooth function on (−∞, 1]. Note also that σ(t) tends to the value constructed in step 4 (see step 4) as t tends to 1 from the left, so the pieces join smoothly.

Step 6 completes the construction of the universal activation function σ, which satisfies theorem 1.

4  The Algorithm for Evaluating the Numbers c0, c1, and θ

Although theorem 1 is valid for all continuous functions, in practice it is quite difficult to calculate algorithmically the numbers c0, c1, and θ in theorem 1 for badly behaved continuous functions. The main difficulty arises while attempting to design an efficient algorithm for the construction of a best approximating polynomial within any given degree of accuracy. But for certain large classes of well-behaved functions, the computation of the above numbers can be done. In this section, we show this for the class of Lipschitz continuous functions.

Assume that f is a Lipschitz continuous function on [a, b] with a Lipschitz constant L. In order to find the parameters c0, c1, and θ algorithmically, it is sufficient to perform the following steps.

4.1  Step 1

Going to the unit interval. Consider the function g(t) = f(a + (b − a)t), which is Lipschitz continuous on [0, 1] with the Lipschitz constant L′ = (b − a)L. Denote by
the nth Bernstein polynomial of the function g. Let ε > 0 be given.

4.2  Step 2

Finding the position of a given rational number. Define the functions p_q and p_r,
which return the positions of a positive rational number in the sequence (q_n) and of a rational number in the sequence (r_n), respectively (see section 3). We start with the computation of p_q. Let q be a positive rational number. If equation 3.1 is the continued fraction representation of q with k even (we may always pass to an equivalent representation with one more term equal to 1 if needed), then the binary representation of the position of q in the Calkin–Wilf sequence consists of the runs 1^{f_k} 0^{f_{k−1}} ⋯ 1^{f_0}.
Now we can easily find p_r by the formula
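The run-length trick just described can be sketched as follows (our own implementation; function names are ours):

```python
from fractions import Fraction

# Position of a positive rational in the Calkin-Wilf sequence, computed by
# writing its continued fraction as runs of binary digits.

def continued_fraction(q):
    """Continued fraction [n0; n1, ...] of a positive rational q."""
    cf = []
    num, den = q.numerator, q.denominator
    while den:
        cf.append(num // den)
        num, den = den, num % den
    return cf

def position(q):
    """Position of q in the Calkin-Wilf sequence q_1 = 1, q_2 = 1/2, ..."""
    cf = continued_fraction(Fraction(q))
    if len(cf) % 2 == 0:                  # make the number of terms odd
        cf[-1] -= 1
        cf.append(1)
    bits = ""
    for i, run in enumerate(reversed(cf)):  # runs of 1s and 0s, MSB first
        bits += ("1" if i % 2 == 0 else "0") * run
    return int(bits, 2)
```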

4.3  Step 3

Finding n such that |g(t) − B_n(g; t)| ≤ ε/2 for all t ∈ [0, 1]. We use the inequality (see Sikkema, 1961)

‖g − B_n(g)‖ ≤ c · ω(g; 1/√n),

where

c = (4306 + 837√6)/5832 ≈ 1.08989

is Sikkema’s constant and ω is the modulus of continuity of g. Since

ω(g; 1/√n) ≤ L′/√n,

it is sufficient to take

n = ⌈(2cL′/ε)²⌉,

where ⌈x⌉ is the ceiling function, defined as the least integer not smaller than x.
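A sketch of this step (our own code; the Sikkema constant is as above, and `Lp` denotes the Lipschitz constant L′ of g):

```python
import math
from math import comb

# The n-th Bernstein polynomial of g on [0, 1], and a degree n chosen so
# that the Sikkema bound C * Lp / sqrt(n) is at most eps / 2.

C_SIKKEMA = (4306 + 837 * math.sqrt(6)) / 5832    # ~ 1.08989

def bernstein(g, n, t):
    """Value at t of the n-th Bernstein polynomial of g."""
    return sum(g(k / n) * comb(n, k) * t ** k * (1 - t) ** (n - k)
               for k in range(n + 1))

def degree_for(Lp, eps):
    """A degree n with C_SIKKEMA * Lp / sqrt(n) <= eps / 2."""
    return math.ceil((2 * C_SIKKEMA * Lp / eps) ** 2)
```

For example, g(t) = |t − 1/2| is Lipschitz with constant 1, so `degree_for(1.0, 0.5)` yields a degree guaranteeing a uniform error of at most 0.25.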

4.4  Step 4

Finding a polynomial p with rational coefficients such that |B_n(g; t) − p(t)| ≤ ε/2 for all t ∈ [0, 1]. If B_n(g; t) = d_0′ + d_1′ t + ⋯ + d_n′ tⁿ, then it is sufficient to choose rational numbers d_0, d_1, …, d_n such that

|d_i − d_i′| ≤ ε/(2(n + 1)),  i = 0, 1, …, n,

and set p(t) = d_0 + d_1 t + ⋯ + d_n tⁿ.
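One convenient way to carry this out (our own sketch, using Python's `Fraction.limit_denominator`): a best rational approximation with denominator at most ⌈1/tol⌉ is guaranteed to lie within tol of the target.

```python
from fractions import Fraction
import math

def rationalize(coeffs, eps):
    """Rational d_i with |d_i - c_i| <= eps / (2 * len(coeffs)) for each i,
    so the two polynomials differ by at most eps/2 everywhere on [0, 1]."""
    tol = eps / (2 * len(coeffs))
    return [Fraction(c).limit_denominator(math.ceil(1 / tol)) for c in coeffs]
```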

4.5  Step 5

Finding m such that u_m = p. For the definition of u_m, see section 3. If p ≡ 0, then clearly m = 1. Otherwise, let the positions of the numbers d_i in the sequence (r_n) be found by means of p_r (see step 2). Then

4.6  Step 6

Evaluating the numbers c0, c1, and s in equation 2.4. Set s = 2m − 1. If u_m is constant, then we put
If u_m is not constant, then we put
where am and bm are computed using formulas 3.2, 3.3, and 3.5.

4.7  Step 7

Evaluating the numbers c0, c1, and θ. In this step, we return to our original function f and calculate the numbers c0, c1, and θ (see theorem 1). The numbers c0 and c1 have been calculated above (they are the same for both g and f). In order to find θ, we use the formula θ = s − a/(b − a).

Remark 2.

Note that some computational difficulties may arise while implementing the above algorithm on standard computers. For some functions, the index m of the polynomial u_m in step 5 may be extraordinarily large. In such cases, a computer is not capable of producing this number, and hence the numbers c0, c1, and θ.

References

Barron, A. R. (1993). Universal approximation bounds for superposition of a sigmoidal function. IEEE Trans. Information Theory, 39, 930–945.

Calkin, N., & Wilf, H. S. (2000). Recounting the rationals. Amer. Math. Monthly, 107, 360–367.

Cao, F., Xie, T., & Xu, Z. (2008). The estimate for approximation error of neural networks: A constructive approach. Neurocomputing, 71(4–6), 626–630.

Carroll, S. M., & Dickinson, B. W. (1989). Construction of neural nets using the Radon transform. In Proceedings of the IEEE 1989 International Joint Conference on Neural Networks (vol. 1, pp. 607–611). Piscataway, NJ: IEEE.

Chen, T., Chen, H., & Liu, R. (1992). A constructive proof of Cybenko’s approximation theorem and its extensions. In Computing science and statistics (pp. 163–168). New York: Springer-Verlag.

Chui, C. K., & Li, X. (1992). Approximation by ridge functions and neural networks with one hidden layer. J. Approx. Theory, 70, 131–141.

Costarelli, D., & Spigler, R. (2013). Constructive approximation by superposition of sigmoidal functions. Anal. Theory Appl., 29, 169–196.

Cotter, N. E. (1990). The Stone–Weierstrass theorem and its application to neural networks. IEEE Trans. Neural Networks, 1, 290–295.

Cybenko, G. (1989). Approximation by superpositions of a sigmoidal function. Math. Control, Signals, and Systems, 2, 303–314.

Funahashi, K. (1989). On the approximate realization of continuous mapping by neural networks. Neural Networks, 2, 183–192.

Gallant, A. R., & White, H. (1988). There exists a neural network that does not make avoidable mistakes. In Proceedings of the IEEE 1988 International Conference on Neural Networks (vol. 1, pp. 657–664). Piscataway, NJ: IEEE.

Hahm, N., & Hong, B. I. (1999). Extension of localized approximation by neural networks. Bull. Austral. Math. Soc., 59, 121–131.

Hornik, K. (1991). Approximation capabilities of multilayer feedforward networks. Neural Networks, 4, 251–257.

Hornik, K., Stinchcombe, M., & White, H. (1989). Multilayer feedforward networks are universal approximators. Neural Networks, 2, 359–366.

Irie, B., & Miyake, S. (1988). Capability of three-layered perceptrons. In Proceedings of the IEEE 1988 International Conference on Neural Networks (vol. 1, pp. 641–648). Piscataway, NJ: IEEE.

Ismailov, V. E. (2012). Approximation by neural networks with weights varying on a finite set of directions. J. Math. Anal. Appl., 389(1), 72–83.

Ismailov, V. E. (2014). On the approximation by neural networks with bounded number of neurons in hidden layers. J. Math. Anal. Appl., 417(2), 963–969.

Ito, Y. (1991). Representation of functions by superpositions of a step or sigmoid function and their applications to neural network theory. Neural Networks, 4, 385–394.

Jones, L. K. (1990). Constructive approximations for neural networks by sigmoidal functions. Proc. IEEE, 78, 1586–1589. (Correction and addition, Proc. IEEE, 79 (1991), 243.)

Kůrková, V. (1991). Kolmogorov’s theorem is relevant. Neural Comput., 3, 617–622.

Kůrková, V. (1992). Kolmogorov’s theorem and multilayer neural networks. Neural Networks, 5, 501–506.

Leshno, M., Lin, V. Ya., Pinkus, A., & Schocken, S. (1993). Multilayer feedforward networks with a non-polynomial activation function can approximate any function. Neural Networks, 6, 861–867.

Maiorov, V., & Pinkus, A. (1999). Lower bounds for approximation by MLP neural networks. Neurocomputing, 25, 81–91.

Mhaskar, H. N., & Micchelli, C. A. (1992). Approximation by superposition of a sigmoidal function and radial basis functions. Adv. Appl. Math., 13, 350–373.

Northshield, S. (2010). Stern’s diatomic sequence 0, 1, 1, 2, 1, 3, 2, 3, 1, 4, …. Amer. Math. Monthly, 117, 581–598.

Pinkus, A. (1999). Approximation theory of the MLP model in neural networks. Acta Numerica, 8, 143–195.

Sikkema, P. C. (1961). Der Wert einiger Konstanten in der Theorie der Approximation mit Bernstein-Polynomen. Numer. Math., 3, 107–116.

Stein, W. A., et al. (2015). Sage Mathematics Software (Version 6.10). Sage Developers. http://www.sagemath.org

Stern, M. A. (1858). Über eine zahlentheoretische Funktion. J. Reine Angew. Math., 55, 193–220.

Stinchcombe, M., & White, H. (1990). Approximating and learning unknown mappings using multilayer feedforward networks with bounded weights. In Proceedings of the IEEE 1990 International Joint Conference on Neural Networks (vol. 3, pp. 7–16). Piscataway, NJ: IEEE.