Abstract

A novel method based on m energy functions is adopted to analyze the retrieval property of continuous-time asymmetric Hopfield neural networks. Sufficient conditions for the local and global asymptotic stability of the network are proposed. Moreover, an efficient systematic procedure for designing asymmetric networks is proposed, under which a given set of states can be assigned as locally asymptotically stable equilibrium points. Simulation examples show that the asymmetric network can act as an efficient associative memory and that it is almost free from the spurious memory problem.

1.  Introduction

Memory retrieval in artificial neural networks has been investigated extensively over the past decades, and many interesting results have been obtained. For example, Amari (1972a) studied the multistable and oscillatory behaviors of random nets with analog neuron-like elements. Both symmetric and asymmetric networks are studied in Amari (1972b). Primitive neural models of concept formation and characteristics of the learning rules are studied in Amari (1977). Strange dynamical behaviors in recalling processes have been investigated in Amari and Maginu (1988). Hopfield (1982, 1984) introduced the concept of an energy function for studying the stability of the network and found that the recurrent artificial neural network appears to be a powerful tool for associative memory. Conventional networks with symmetric connections have been studied intensively. However, synaptic couplings are in general asymmetric in biological nervous systems, and Hertz, Grinstein, and Solla (1987) found that it is almost impossible to implement a hardware network with precisely symmetric connections, so studies of asymmetric networks are of particular biological and practical importance.

The dynamical behavior analysis, especially the retrieval property analysis, of asymmetric Hopfield networks is a primary step toward the practical application of the network. Amit, Gutfreund, and Sompolinsky (1985) proposed that small asymmetry in the synaptic connections does not qualitatively change the behavior of the system. Fukai and Shiino (1990) studied noise-driven chaotic motions in the retrieval process. Sompolinsky and Kanter (1986) found that asymmetric neural networks can provide temporal association memory. Cheng, Dasgupta, and Singh (2000) studied the retrieval properties of the Hopfield model with random asymmetric interactions by computer simulations. Pattern recognition using asymmetric discrete-time attractor neural networks was introduced in Jin and Zhao (2005). Xu, Hu, and Kwong (1996) obtained some sufficient conditions for discrete-time asymmetric Hopfield networks to converge to a stable state. Some sufficient conditions for the global and local stability of the network have been proposed in Michel and Gray (1990), Yang and Dillon (1994), Matsuoka (1992), Chen and Amari (2001), and Liu and Chen (2002). However, most of these results are too complicated for application. In this letter, we propose two concise results by using an explicit energy function method. The network dynamics are simply expressed as downhill motion in energy landscapes, which brings a better understanding of the network dynamics.

In addition, the symmetric Hopfield network can be easily defined by using Hebb's rule (Hopfield, 1982; Hebb, 1949) or the pseudo-inverse rule (Personnaz, Guyon, & Drefus, 1985). Dealing with asymmetric networks, Zhao (2004) introduced a strategy for designing the weights of asymmetric networks with a controllable degree of symmetry. Strategies based on the concept of higher-order Hamming stability have been proposed in Lee and Chuang (2005) and Zhuang, Huang, and Yu (1994). Learning rules with optimal stability were studied in Krauth and Mezard (1987). Bornholdt and Graudenz (1992) designed general asymmetric networks by using a genetic algorithm. However, all of these works focus on Hopfield networks with discrete-time dynamics. Lacking an efficient synthesis procedure, it is still hard to construct a continuous-time asymmetric network.

Therefore, in this letter, the retrieval property of the continuous-time asymmetric Hopfield network is analyzed by using a novel method based on m energy functions. Sufficient conditions for the local and global asymptotic stability of the network are proposed. Moreover, to make asymmetric Hopfield networks more applicable, an efficient, systematic system-designing procedure is proposed by using Schur decomposition and singular value decomposition, which allows one to assign a set of predetermined states as locally asymptotically stable equilibrium points of the asymmetric network. The retrieval reliability, spurious memory, and retrieval time of the asymmetric network are also discussed by numerical simulations. We show that the designed network has perfect retrieval reliability and is almost free from the spurious memory problem, which outperforms most existing networks.

2.  Retrieval Property Analysis

Consider an asymmetric Hopfield network of the form
\dot{X}(t) = -AX(t) + W\varphi(X(t)) + I,
2.1
where X = [x1, x2, …, xn]T is the neuron state vector, I = [I1, I2, …, In]T is a real constant vector, A = diag[a1, a2, …, an] is a positive diagonal matrix, W = (wij)n×n is a synaptic weight matrix that is asymmetrically defined, a continuous function φ: Rn → Rn, with φ(0) = 0 and φ(X) = [φ1(x1), φ2(x2), …, φn(xn)]T, denotes the neuron transfer function, and φj(x) is differentiable and satisfies the following condition,
0 < \frac{d\varphi_j(x)}{dx} \leq k, \quad j = 1, 2, \ldots, n.
2.2

It follows that φj(·) (j = 1, 2, …, n) are monotonically increasing and that φj(x) = 0 if and only if x = 0. In addition, network 2.1 reduces to a linear system if φ(·) is a linear function, and it is impossible to endow a linear system with retrieval properties. Hence, in this letter, we consider φ(·) to be a nonlinear function.
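The dynamics of network 2.1 can be checked numerically. The sketch below integrates the network by forward Euler under the assumption φj = tanh, a differentiable, monotonically increasing function that vanishes only at zero, as condition 2.2 requires; the matrices A, W, and I are illustrative choices, not values from the letter.

```python
import numpy as np

def simulate(A, W, I, x0, dt=0.01, steps=5000):
    """Forward-Euler integration of dX/dt = -A X + W phi(X) + I, phi = tanh."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-A @ x + W @ np.tanh(x) + I)
    return x

A = np.diag([1.0, 1.0, 1.0])            # positive diagonal matrix
W = np.array([[-0.5,  0.4,  0.0],       # asymmetric: W != W.T
              [-0.4, -0.5,  0.3],
              [ 0.1, -0.3, -0.5]])
I = np.zeros(3)                         # real constant vector
x_final = simulate(A, W, I, x0=[2.0, -1.5, 0.7])
```

With this W, the symmetric part H = (W + WT)/2 satisfies H − A < 0, so the trajectory settles at the origin.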

Let S = {X(1), X(2), …, X(m)} be the set of equilibrium points of the asymmetric network 2.1. We restrict m ⩾ 1; that is, the network has at least one equilibrium point.

To endow the asymmetric network with retrieval properties, a given set of states needs to be designed as equilibrium points of the network. Moreover, the network is expected to converge to an equilibrium point if the initial state is sufficiently close to it (i.e., the equilibrium points are locally asymptotically stable). In the following, a sufficient condition for the local asymptotical stability of the equilibrium points is proposed.

Theorem 1.
X(i) is locally asymptotically stable if
H - A\Phi\big(X^{(i)}\big) < 0,
2.3
where Φ(X(i)) = diag[1/φ′1(x(i)1), …, 1/φ′n(x(i)n)] and H = (W + WT)/2.

Proof.
Consider m continuously differentiable energy functions as
V^{(i)}(U) = \sum_{j=1}^{n} \int_{0}^{u_j} g_j(s)\, ds,
2.4
where U = X − X(i) = [u1, u2, …, un]T, and
g_j(u_j) = \varphi_j\big(u_j + x_j^{(i)}\big) - \varphi_j\big(x_j^{(i)}\big), \quad j = 1, 2, \ldots, n.
2.5

Taking one energy function, V(i)(U), for example, the following analysis holds for i = 1, 2, …, m.

Asymmetric network 2.1 is transformed into
\dot{U} = -AU + Wg(U).
2.6
Let U(i) be the stationary point of V(i)(U). Thus,
\left.\frac{\partial V^{(i)}(U)}{\partial u_j}\right|_{U = U^{(i)}} = g_j\big(u_j^{(i)}\big) = 0, \quad j = 1, 2, \ldots, n.
It follows from equations 2.2 and 2.5 that gj(·) (j = 1, 2, …, n) are nonlinear differentiable functions. Moreover, gj(·) (j = 1, 2, …, n) increase monotonically, and gj(u) = 0 if and only if u = 0. Hence, U(i) = 0 is the unique stationary point of V(i)(U) and
V^{(i)}(U) > 0
for any U ≠ 0. In addition, it follows from equation 2.4 that V(i)(U(i)) = V(i)(0) = 0. Therefore, V(i)(U) ⩾ 0 and V(i)(U) = 0 if and only if U = U(i) = 0. This implies that U(i) = 0 is the global minimum point of V(i)(U).

Moreover, if U = U(i) = 0, then X = X(i). This implies that the global minimum point of V(i)(U) corresponds to the equilibrium point X(i) of network 2.1.

It follows from equations 2.2 and 2.5 that
\frac{dg_j(u)}{du} = \varphi_j'\big(u + x_j^{(i)}\big) > 0, \quad j = 1, 2, \ldots, n.
The linearization of a nonlinear system at the equilibrium point is a powerful tool for local stability analysis. Let εi > 0 denote the radius of a small neighborhood of U(i). If εi is sufficiently small, then g(U) can be represented by its linearization at U(i), such that
g_j(u_j) \approx \varphi_j'\big(x_j^{(i)}\big)\, u_j, \quad \text{that is,} \quad U \approx \Phi\big(X^{(i)}\big)\, g(U).
2.7
Let H = (W + WT)/2 and N = (W − WT)/2. Since N is skew-symmetric, for any Y ∈ Rn, it holds that
Y^T N Y = 0;
thus,
Y^T W Y = Y^T H Y.
2.8
When εi is sufficiently small and ‖U − U(i)‖ ⩽ εi, it follows from equations 2.4 and 2.6 to 2.8 that the time derivative of V(i)(U) along the trajectories of the network can be described as
\dot{V}^{(i)}(U) = g(U)^T \dot{U} = -g(U)^T A U + g(U)^T H g(U) \approx g(U)^T \big(H - A\Phi(X^{(i)})\big)\, g(U).
2.9
Hence, in a certain neighborhood of U(i), if H − AΦ(X(i)) < 0, then dV(i)(U)/dt ⩽ 0, and dV(i)(U)/dt = 0 if and only if g(U) = 0, that is, U = U(i) = 0. Then, given an initial state U(0) that is sufficiently close to U(i), as the network dynamics evolve with time, V(i)(U) keeps decreasing until it reaches its global minimum point U(i), which corresponds to the equilibrium point X(i) of network 2.1. According to the Lyapunov stability criterion, X(i) is locally asymptotically stable.
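The proof above can be mirrored numerically. Under the illustrative assumptions φj = tanh, X(i) = 0, and I = 0, the energy function of equation 2.4 has the closed form V(U) = Σj log cosh(uj) (since ∫ tanh(s) ds = log cosh), and it should decrease monotonically along any trajectory started near the equilibrium; A and W below are assumptions for the sketch, not matrices from the letter.

```python
import numpy as np

A = np.eye(3)
W = np.array([[-0.5,  0.4,  0.0],
              [-0.4, -0.5,  0.3],
              [ 0.1, -0.3, -0.5]])      # asymmetric weights

def V(u):
    """Closed form of the energy in equation 2.4 for phi = tanh and X_i = 0."""
    return np.sum(np.log(np.cosh(u)))

# Condition 2.3 at X_i = 0: Phi(0) = diag(1/tanh'(0)) = identity
H = (W + W.T) / 2
lam_max = np.max(np.linalg.eigvalsh(H - A))

# Track V along an Euler trajectory started near the equilibrium
u, dt, values = np.array([0.5, -0.4, 0.3]), 0.01, []
for _ in range(2000):
    values.append(V(u))
    u = u + dt * (-A @ u + W @ np.tanh(u))
drops = np.diff(values)
```

Here lam_max < 0, and the recorded energy values are non-increasing, as the downhill-motion picture predicts.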

Network 2.1 can be endowed with a retrieval property if X(i) (i = 1, 2, …, m) are designed as locally asymptotically stable equilibrium points. It should be noted that local stability concerns only the behavior in a certain neighborhood of the equilibrium point. In the following, we propose a sufficient condition for the global asymptotic stability of network 2.1:

Theorem 2.
If network 2.1 has an equilibrium point and
H - \frac{A}{k} < 0,
2.10
where H = (W + WT)/2, then network 2.1 is globally asymptotically stable, and it has a unique equilibrium point.

Proof.

Supposing network 2.1 has m equilibrium points, we can define m energy functions as shown in equation 2.4. Taking one energy function V(i)(U), for example, the following analysis holds for i = 1, 2, …, m.

Define Qk = diag[g1(u1)/u1, …, gn(un)/un], with 0 < gj(uj)/uj ⩽ k and gj(uj)/uj understood as g′j(0) when uj = 0.

It follows from equations 2.2 and 2.5 that
0 < \frac{g_j(u_j)}{u_j} \leq k, \quad j = 1, 2, \ldots, n,
2.11
and
g(U) = Q_k U.
2.12
It follows from equation 2.11 that
Q_k^{-1} \geq \frac{1}{k} I_n.
2.13
From the fact that A is a positive diagonal matrix, it follows from equations 2.8, 2.9, 2.12, and 2.13 that
\dot{V}^{(i)}(U) = -g(U)^T A Q_k^{-1} g(U) + g(U)^T H g(U) \leq g(U)^T \big(H - A/k\big)\, g(U).
Thus, if H − A/k < 0, then dV(i)(U)/dt ⩽ 0, and dV(i)(U)/dt = 0 if and only if g(U) = 0, that is, U = U(i) = 0. This implies that, given an arbitrary initial state, all the energy functions V(i)(U) (i = 1, 2, …, m) decrease monotonically along the solution of the network; as t → +∞, each V(i)(U) eventually reaches its global minimum value at U(i) = 0. Then it holds that
X^{(1)} = X^{(2)} = \cdots = X^{(m)}.
Hence, network 2.1 has a unique equilibrium point. According to the Lyapunov stability criterion, network 2.1 is globally asymptotically stable.
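Theorem 2 can likewise be probed numerically. With φj = tanh, the slope bound is k = 1; for the illustrative A, W, and I below (assumptions, not the letter's matrices), H − A/k < 0 holds, and trajectories launched from scattered initial states all settle at the same unique equilibrium point.

```python
import numpy as np

A = np.diag([1.0, 1.2, 0.8])            # positive diagonal matrix
W = np.array([[ 0.3,  0.5, -0.2],
              [-0.6,  0.1,  0.4],
              [ 0.3, -0.5,  0.2]])      # strongly asymmetric weights
I = np.array([0.1, -0.05, 0.1])
k = 1.0                                  # slope bound of tanh

H = (W + W.T) / 2
lam = np.max(np.linalg.eigvalsh(H - A / k))   # condition 2.10: lam < 0

# Trajectories from scattered initial states converge to the same point
rng = np.random.default_rng(0)
finals = []
for _ in range(10):
    x = rng.uniform(-6.0, 6.0, size=3)
    for _ in range(8000):
        x = x + 0.01 * (-A @ x + W @ np.tanh(x) + I)
    finals.append(x)
spread = max(np.linalg.norm(f - finals[0]) for f in finals)
```

A vanishing spread across the final states is the numerical signature of global asymptotic stability with a unique equilibrium point.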

Remark 1.

It is worth mentioning that some significant results have been proposed in Matsuoka (1992), Chen and Amari (2001), and Liu and Chen (2002). However, theorems 1 and 2 appear to be more concise and applicable. Moreover, if a1 = ⋅ ⋅ ⋅ = an = 1 and k = 1, theorem 2 reduces to the result in Matsuoka (1992).

Remark 2.

Theorems 1 and 2 also hold when W is symmetrically defined, in which case H = W. It is not hard to verify that equations 2.3 and 2.10 are sufficient conditions for proving that the energy function (see equation 7 in Hopfield, 1984) has local minima and a global minimum at the equilibrium points, respectively (see appendix A).

3.  System Designing

One of the most important applications of Hopfield networks is the recognition of binary patterns. Let S = {X(1), X(2), …, X(m)} be the vector set that needs to be stored. To endow the asymmetric network with retrieval properties, one should properly assign the parameters A, W, and I such that X(i) (i = 1, 2, …, m) can be assigned as locally asymptotically stable equilibrium points. For simplicity, we make the following three assumptions:
formula
3.1
formula
3.2
formula
3.3

Let ζ(i) = φ(X(i)) and Θ = [ζ(1) − ζ(m), ζ(2) − ζ(m), …, ζ(m−1) − ζ(m)]. Performing a singular value decomposition of Θ gives Θ = VΣZT, with V and Z orthogonal matrices and Σ a diagonal matrix. Then we have the following result:

Proposition 1.
For i = 1, 2, …, m, we have
W\zeta^{(i)} + I = c\zeta^{(i)},
where c is a real constant, if
W = cVLV^T,
3.4
I = c\zeta^{(m)} - W\zeta^{(m)},
3.5
where L is a lower triangular matrix, with
formula
where τi are real constants, and lij (i > j, i ⩾ m, j ⩾ m) denote random values within the interval (βij, γij).

Proof.
It is not hard to verify that
W\big(\zeta^{(i)} - \zeta^{(m)}\big) = c\big(\zeta^{(i)} - \zeta^{(m)}\big).
It follows that
W\zeta^{(i)} + I = W\big(\zeta^{(i)} - \zeta^{(m)}\big) + c\zeta^{(m)} = c\zeta^{(i)}
for i = 1, 2, …, m.

It should be noted that W = cVLVT = V(cL)VT can be considered a (real, lower triangular) Schur decomposition of W; therefore, c · ljj (j = 1, 2, …, n) are the eigenvalues of W. Hence, we can easily control the eigenvalues of W by adjusting the parameters c and ljj.
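This eigenvalue control is easy to confirm numerically: for any orthogonal V and lower triangular L, the matrix cVLVT is orthogonally similar to the triangular matrix cL, so its eigenvalues are c · ljj. The V, L, and c below are illustrative; the diagonal of L is chosen with distinct entries for a clean numerical comparison.

```python
import numpy as np

rng = np.random.default_rng(1)
n, c = 6, 1.3
V, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal matrix
L = np.tril(rng.uniform(-1.0, 1.0, size=(n, n)))   # lower triangular matrix
np.fill_diagonal(L, [1.0, 0.8, 0.6, 0.4, -0.2, -0.6])

W = c * V @ L @ V.T
eig_W = np.sort(np.linalg.eigvals(W).real)         # eigenvalues of W ...
eig_L = np.sort(c * np.diag(L))                    # ... equal c * diag(L)
```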

The system-designing procedure can be summarized as follows:

  1. Compute Θ = [ζ(1) − ζ(m), …, ζ(m−1) − ζ(m)] where ζ(i) = φ(X(i)).

  2. Perform a singular value decomposition of Θ: Θ = VΣZT.

  3. Set lower triangular matrix L with τi ⩽ 0.

  4. Compute c = a1x(1)1/φ1(x(1)1).

  5. Compute W = cVLVT and I = cζ(m) − Wζ(m).

It follows from proposition 1 that
W\varphi\big(X^{(i)}\big) + I = c\varphi\big(X^{(i)}\big), \quad i = 1, 2, \ldots, m.
Moreover, it follows from equations 3.1 to 3.3 that
c\varphi\big(X^{(i)}\big) = AX^{(i)}.
Hence,
-AX^{(i)} + W\varphi\big(X^{(i)}\big) + I = 0, \quad i = 1, 2, \ldots, m.

Thus, X(i) (i = 1, 2, …, m) can be designed as equilibrium points of the network. To minimize the maximum eigenvalue of W, we set τi ⩽ 0. Then λ1(W) = c, where λ1(·) denotes the maximum eigenvalue of a matrix.
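The five-step procedure can be sketched compactly, under the assumptions that A is the identity, φj = tanh, and the stored patterns are bipolar with entries ±0.8 (the letter's f and its patterns are not reproduced here). The diagonal parameterization of L through 1 + τi is our reading of the elided definition; the verification below depends only on the first m − 1 columns of L being identity columns.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 4
# Bipolar patterns with entries +-0.8 (columns X^(1), ..., X^(m)); assumed
X = np.where(rng.standard_normal((n, m)) >= 0, 0.8, -0.8)
Zeta = np.tanh(X)                                  # zeta^(i) = phi(X^(i))

# Step 1: differences against the last stored pattern
Theta = Zeta[:, :m-1] - Zeta[:, [m-1]]
# Step 2: singular value decomposition Theta = V Sigma Z^T
V, _, _ = np.linalg.svd(Theta, full_matrices=True)
# Step 3: lower triangular L; the first m-1 columns are identity columns
L = np.zeros((n, n))
tau = -0.5                                         # tau_i <= 0
for j in range(n):
    L[j, j] = 1.0 if j < m - 1 else 1.0 + tau
    if j >= m - 1:
        L[j+1:, j] = rng.uniform(-0.5, 0.5, size=n - j - 1)
# Step 4: c = a_1 x^(1)_1 / phi_1(x^(1)_1); identical for every bipolar entry
c = X[0, 0] / np.tanh(X[0, 0])
# Step 5: weights and bias
W = c * V @ L @ V.T
I = c * Zeta[:, m-1] - W @ Zeta[:, m-1]

# Proposition 1: W zeta^(i) + I = c zeta^(i) ...
residuals = [np.linalg.norm(W @ Zeta[:, i] + I - c * Zeta[:, i])
             for i in range(m)]
# ... hence each X^(i) is an equilibrium of dX/dt = -X + W phi(X) + I
eq_residuals = [np.linalg.norm(-X[:, i] + W @ np.tanh(X[:, i]) + I)
                for i in range(m)]
```

The vanishing residuals confirm proposition 1 and that each stored pattern is an equilibrium point of the designed dynamics.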

Remark 3.

Asymmetric networks with I = [0, 0, …, 0]T can be constructed by performing the designing procedure from step 2 with Θ = [ζ(1), ζ(2), …, ζ(m)]. Moreover, a symmetric network can be designed by setting lij = 0 (i > j, i ⩾ m, j ⩾ m).

4.  Simulations

4.1.  Example 1.

We illustrate the effectiveness of theorem 2 in this example.

Consider an asymmetric network 2.1 with n = 3, a1 = a2 = a3 = 1, I1 = I2 = I3 = 0, φ1(·) = φ2(·) = φ3(·) = f(·) with
formula
4.1
Hence, we have ai/k = 1 (i = 1, 2, 3), and
formula

It is not hard to verify that H − A/k < 0, so condition 2.10 is satisfied. X(1) = 0 is an equilibrium point of the network. According to theorem 2, X(1) = 0 is the unique equilibrium point of the defined network, and the network is globally asymptotically stable. Furthermore, 100 initial states, uniformly distributed within Bo = {X: −6 ⩽ xi ⩽ 6, i = 1, 2, 3}, are presented to the network. As the network dynamics evolve with time, they all converge to X(1) = 0. This illustrates the effectiveness of theorem 2.

4.2.  Example 2.

We illustrate the use of the system-designing procedure in this example.

The four patterns needing to be stored are specified by
formula
Then S = {X(1), X(2), X(3), X(4)}. We set a1 = ⋅ ⋅ ⋅ = a8 = 1, and φ1(·) = ⋅ ⋅ ⋅ = φ8(·) = f(·). Then a proper network needs to be designed with Wφ(X(i)) + I = X(i), i = 1, 2, 3, 4.
It follows from the system-designing procedure that
formula
We set τi = 0 (3 < i ⩽ 8), βij = −0.5, and γij = 0.5; lij (i > j, i ⩾ 4, j ⩾ 4) are randomly produced within the interval (−0.5, 0.5). Then W and I can be calculated by equations 3.4 and 3.5.
formula

It is not hard to verify that λ1(HAΦ(X(i))) = −0.58 < 0 for i = 1, 2, 3, 4, which satisfies the condition of theorem 1. Hence X(i) (i = 1, 2, 3, 4) can be designed as locally asymptotically stable equilibrium points.

Given an initial state X1(0) = [7.0, −6.6, 8.5, −5.6, 8.0, −6.5, 7.8, −9.3]T, the network dynamic evolves with time, and the network finally converges to X(2). Figure 1 illustrates the trajectories of the network.

Figure 1:

Phase portrait of the network in example 2 with initial state X1(0).


Spurious memories may exist in the designed network, and they do harm to the network performance. In the following, we study the spurious memories by numerical simulations.

Assume X1s = [0, −0.8, 0.8, −0.8, 0, 0.8, 0.8, −0.8]T. It is not hard to verify that X1s is an equilibrium point of the designed network. With the initial state X2(0) = [0.3, −0.6, 1, −0.7, −0.3, 0.7, 0.7, −0.9]T, Figure 2 shows that the network finally converges to X1s, which appears to be a spurious memory.

Figure 2:

Phase portrait of the network in example 2 with initial state X2(0).


It is difficult to compute every spurious memory analytically since network 2.1 is nonlinearly coupled. However, the network converges to a spurious memory if an initial state is sufficiently close to it. Moreover, it can be concluded that the equilibrium points exist only in a certain neighborhood of the origin (see appendix B). Hence, in this example, we try to find all the spurious memories by presenting the network with different initial states: 7140 vectors, uniformly distributed within Bs = {X: −4 ⩽ xi ⩽ 4, i = 1, 2, …, 8}, are presented to the designed network as initial states. It follows from the results of appendix B that all the equilibrium points (including memorized patterns and spurious memories) lie inside Bs. However, only three spurious memories are found:
formula
It is not hard to verify that
formula
and
formula

In this example, all the spurious memories can be written as linear combinations of the vectors X(1), X(2), X(3), X(4). Moreover, we believe that all the spurious memories have been found because the network has been presented with a sufficient number of initial states. Compared with the networks introduced in Bruck and Roychowdhury (1990) and Li, Michel, and Porod (1989), the asymmetric network proposed in this letter has the fewest spurious memories, which illustrates the effectiveness of the designed network.

In addition, not all equilibrium points outside S are spurious memories. Consider Xu = (X(1) + X(2))/2 = [0, 0, 0.8, 0, 0.8, 0, 0, 0]T, which is an equilibrium point. Given an initial state X3(0) = [0.05, 0.05, 0.85, 0.01, 0.85, 0.02, 0.05, 0.01]T, which is very close to Xu, Figure 3 shows that as the network dynamics evolve with time, the network finally converges to X(2). In addition, it can be calculated that some eigenvalues of the Jacobian matrix evaluated at Xu have positive real parts. Thus, Xu is not a spurious memory because it is unstable.
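The instability test applied to Xu can be sketched as follows: linearizing network 2.1 at an equilibrium gives the Jacobian J(X) = −A + W diag[φ′1(x1), …, φ′n(xn)], and an equilibrium whose Jacobian has an eigenvalue with positive real part is unstable and so cannot act as a spurious memory. The two-neuron network below (with φ = tanh) is an illustrative assumption, not the network of example 2.

```python
import numpy as np

A = np.eye(2)
W = np.array([[2.0, 0.0],
              [0.0, 0.5]])              # illustrative weights
x_eq = np.zeros(2)                      # equilibrium: -A x + W tanh(x) = 0
phi_prime = 1.0 - np.tanh(x_eq) ** 2    # tanh'(x) = sech^2(x)
J = -A + W @ np.diag(phi_prime)         # Jacobian of network 2.1 at x_eq
max_re = np.max(np.linalg.eigvals(J).real)
```

Here max_re > 0, so the origin of this toy network is an unstable equilibrium.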

Figure 3:

Phase portrait of the network in example 2 with initial state X3(0).


4.3.  Example 3.

We demonstrate the application of asymmetric network 2.1 in this example.

Let us consider patterns made up of 7 × 8 small boxes, each pattern corresponding to a 56-dimensional vector whose component values vary between −1 (black) and 1 (white). Figure 4 shows the six patterns that need to be stored as equilibrium points. We perform the designing procedure with n = 56, φ1(·) = ⋯ = φ56(·) = f(·), c = 1.3, ai = 1, βij = −1, γij = 1, and τi = −3 (5 < i ⩽ 56); lij (i > j, i ⩾ 6, j ⩾ 6) are randomly produced within the interval (−1, 1). Then W and I can be calculated by equations 3.4 and 3.5.

Figure 4:

Patterns corresponding to six equilibrium points of the network.


Six corrupted patterns, as shown in Figure 5a, are presented to the network; during the retrieval phase, the network dynamics evolve toward the corresponding stored patterns. The patterns in Figure 5b illustrate the retrieved patterns of the corrupted ones in the same column. As shown in Figure 5, the network retrieves the previously stored pattern that most closely resembles the corrupted one. Combined with the simulation results in example 2, it is easy to see that the designed network has perfect retrieval reliability.

Figure 5:

Initial input patterns and final corresponding outputs.


In addition, we assume
formula
where X(i)(1 ⩽ i ⩽ 6) denote the patterns shown in Figure 4.

It can be verified that Xu are equilibrium points of the designed network. However, all are unstable. This implies that Xu are not spurious memories.

Furthermore, 1000 initial states, given by random strings of ±1, are presented to the designed network. All of them converge to the stored patterns, and no spurious memory is found.

We select 100 initial states whose Hamming distance to the corresponding stored patterns is 9. In simulations, they all converge to the stored patterns. We also design a discrete-time asymmetric network by using the method presented in Lee and Chuang (2005). However, only 31 of the 100 initial states are successfully recalled, and 11 spurious memories are found from the 100 initial states. The network constructed in this letter apparently has better retrieval reliability.

5.  Further Discussion

As indicated in expression 2.9, the retrieval time of a designed network is correlated with the value of λ1(H − AΦ(X(i))), which is negative. The retrieval time can be reduced by minimizing this value. We can achieve this by representing patterns with strings of ±ω, where ω > 1, with −ω (black) and ω (white). To illustrate this, we replace the pregiven patterns in example 2 with four new patterns, 2.5 · X(1), 2.5 · X(2), 2.5 · X(3), and 2.5 · X(4), and adopt the same system-designing parameters as in example 2. Then a new asymmetric network can be constructed. It can be calculated that Φ(X(i)) = diag[14.2, …, 14.2] and λ1(H − AΦ(X(i))) = −12.1 < 0 for i = 1, 2, 3, 4, which is much smaller than that of the network designed in example 2. Figure 6 illustrates the trajectories of the network with the initial state X1(0) = [7.0, −6.6, 8.5, −5.6, 8.0, −6.5, 7.8, −9.3]T. Comparing Figure 6 with Figure 1, it is easy to see that the new network has a reduced retrieval time.
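The effect of the scaling can be checked directly. With φj = tanh, one has Φ(X) = diag[1/φ′j(xj)] = diag[cosh²(xj)], which grows with |xj|; scaling a bipolar pattern from ±0.8 to ±2 therefore drives λ1(H − AΦ(X(i))) far more negative (cosh²(2) ≈ 14.15, close to the value 14.2 reported above if f behaves like tanh). The H below is an illustrative assumption, not the matrix of example 2.

```python
import numpy as np

A = np.eye(4)
H = np.array([[0.2, 0.1, 0.0, 0.1],
              [0.1, 0.3, 0.1, 0.0],
              [0.0, 0.1, 0.2, 0.1],
              [0.1, 0.0, 0.1, 0.3]])   # illustrative symmetric part of some W

def lam1(pattern):
    """lambda_1(H - A Phi(X)) with Phi(X) = diag(cosh^2(x_j)) for phi = tanh."""
    Phi = np.diag(np.cosh(pattern) ** 2)
    return np.max(np.linalg.eigvalsh(H - A @ Phi))

x = np.full(4, 0.8)                     # bipolar pattern of magnitude 0.8
lam_small, lam_large = lam1(x), lam1(2.5 * x)
```

The scaled pattern yields a much more negative λ1, which corresponds to the faster downhill motion, and hence the reduced retrieval time, observed in Figure 6.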

Figure 6:

Phase portrait of the network designed in section 5.


In simulations, we find that the designed networks may have isolated nodes if the X(i) (i = 1, 2, …, m) are not well selected, which usually happens when the patterns are linearly dependent. Moreover, condition 3.3 limits the application of the proposed method to patterns of bipolar form. A more general system-designing procedure still needs further research.

As shown in the numerical simulations, the asymmetric network can reach an equilibrium point when an initial state that is sufficiently close to this equilibrium point is given. However, we cannot describe the attraction domain of each equilibrium point precisely. The attraction domain of the asymmetric networks still needs further study.

6.  Conclusion

Sufficient conditions for the local and global asymptotic stability of the asymmetric network have been proposed by using an explicit energy function method, and a systematic system-designing procedure has been developed by using Schur decomposition. We showed that asymmetric Hopfield-type networks can work as efficient associative memories.

Appendix A:  Proof of Remark 2

Define the energy function of the symmetric network 2.1 as
V(O) = -\frac{1}{2} O^T W O - I^T O + \sum_{j=1}^{n} a_j \int_{0}^{o_j} \varphi_j^{-1}(s)\, ds,
A.1
where O = [o1, o2, …, on]T with oi = φi(xi) (i = 1, 2, …, n). Actually, energy function A.1 is identical to equation 7 in Hopfield (1984).
Let O(i) = φ(X(i)). For i = 1, 2, …, m, we have
\left.\frac{\partial V(O)}{\partial O}\right|_{O = O^{(i)}} = -WO^{(i)} - I + AX^{(i)} = 0.

Thus, the stationary points of the energy function A.1 are also equilibrium points of the symmetric network 2.1.

It is not hard to verify that
\frac{\partial V(O)}{\partial O} = -WO - I + A\varphi^{-1}(O),
where φ−1(O) = [φ1−1(o1), …, φn−1(on)]T. Hence, the second partial derivative of V(O) can be described as
\frac{\partial^2 V(O)}{\partial O^2} = -W + A\Phi(X),
where Φ(X) = diag[1/φ′1(x1), …, 1/φ′n(xn)].

It is obvious that in a certain neighborhood of O(i), if W − AΦ(X(i)) < 0, then −W + AΦ(X) > 0. Hence, O(i) is a local minimum point of energy function A.1. The analysis holds for i = 1, 2, …, m.

Moreover, if W − A/k < 0, then −W + AΦ(X) > 0 for any O. Hence, O(i) is the global minimum point of equation A.1.

Appendix B:  Domain Estimation of the Equilibrium Points

The following analysis proves that the equilibrium points exist only in a certain neighborhood of the origin.

Assume X* = [x*1, x*2, …, x*n]T is an equilibrium point of the network, Bρ(X) = {X: ‖X‖ < ρ}, and suppose |φj(x)| ⩽ 1 for all x (j = 1, 2, …, n). Then
a_i x_i^* = \sum_{j=1}^{n} w_{ij}\, \varphi_j\big(x_j^*\big) + I_i, \quad i = 1, 2, \ldots, n.
B.1
In example 2, ai = 1 and |φj(·)| ⩽ 1. Then
\big|x_i^*\big| \leq \sum_{j=1}^{n} |w_{ij}| + |I_i|, \quad i = 1, 2, \ldots, n.
B.2

As shown in equation B.2, if ρ = max_{1⩽i⩽n}(∑_{j=1}^{n} |wij| + |Ii|), then there is no X ∈ Bcρ(X) satisfying condition B.1. Hence, the equilibria of network 2.1, including memorized patterns and spurious memories, exist only in Bρ(X).
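The bound B.2 can be exercised numerically, assuming ai = 1 and a transfer function bounded by 1 (tanh here); W and I are illustrative. States driven by the dynamics from widely scattered initial conditions all end up inside the box of radius ρ = max_i(∑_j |wij| + |Ii|).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
W = rng.uniform(-1.0, 1.0, size=(n, n))   # illustrative asymmetric weights
I = rng.uniform(-0.5, 0.5, size=n)
# Radius of the box containing every equilibrium (equation B.2 with a_i = 1)
rho = np.max(np.sum(np.abs(W), axis=1) + np.abs(I))

finals = []
for _ in range(20):
    x = rng.uniform(-10.0, 10.0, size=n)  # start far outside the box
    for _ in range(4000):                  # Euler steps of dt = 0.01
        x = x + 0.01 * (-x + W @ np.tanh(x) + I)
    finals.append(x)
```

After the transient, every component satisfies |xi| ⩽ ∑_j |wij| + |Ii| ⩽ ρ, in line with the domain estimate above.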

References

Amari, S. I. (1972a). Characteristics of random nets of analog neuron-like elements. IEEE Trans. Syst. Man Cyb., 2(5), 643–657.

Amari, S. I. (1972b). Learning patterns and pattern sequences by self-organizing nets of threshold elements. IEEE Trans. Computers, C-21(11), 1197–1206.

Amari, S. I. (1977). Neural theory of association and concept-formation. Biol. Cybern., 26(3), 175–185.

Amari, S. I., & Maginu, K. (1988). Statistical neurodynamics of associative memory. Neural Networks, 1(1), 63–73.

Amit, D. J., Gutfreund, H., & Sompolinsky, H. (1985). Spin glass models of neural networks. Phys. Rev. A, 32, 1007–1018.

Bornholdt, S., & Graudenz, D. (1992). General asymmetric neural networks and structure design by genetic algorithms. Neural Networks, 5, 327–334.

Bruck, J., & Roychowdhury, V. P. (1990). On the number of spurious memories in the Hopfield model. IEEE Trans. Information Theory, 36, 393–397.

Chen, T. P., & Amari, S. I. (2001). Stability of asymmetric Hopfield networks. IEEE Trans. Neural Networks, 12, 159–163.

Cheng, X. Z., Dasgupta, C., & Singh, M. P. (2000). Retrieval properties of a Hopfield model with random asymmetric interactions. Neural Comput., 12, 865–880.

Fukai, T., & Shiino, M. (1990). Asymmetric neural networks incorporating the Dale hypothesis and noise-driven chaos. Phys. Rev. Lett., 64, 1465–1468.

Hebb, D. O. (1949). The organization of behavior. New York: Wiley.

Hertz, J. A., Grinstein, G., & Solla, S. A. (1987). Memory networks with asymmetric bonds. AIP Conf. Proc. 151 on Neural Networks for Computing (pp. 212–218). Woodbury, NY: American Institute of Physics.

Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA, 79, 2554–2558.

Hopfield, J. J. (1984). Neurons with graded response have collective computational properties like those of two-state neurons. Proc. Natl. Acad. Sci. USA, 81, 3088–3092.

Jin, T., & Zhao, H. (2005). Pattern recognition using asymmetric attractor neural networks. Physical Rev. E, 72, 066111.

Krauth, W., & Mezard, M. (1987). Learning algorithms with optimal stability in neural networks. Journal of Physics A: Mathematical and General, 20(11), L745–L752.

Lee, D. L., & Chuang, T. C. (2005). Designing asymmetric Hopfield-type associative memory with higher order Hamming stability. IEEE Trans. Neural Networks, 16, 1464–1476.

Li, J. H., Michel, A. N., & Porod, W. (1989). Analysis and synthesis of a class of neural networks: Linear systems operating on a closed hypercube. IEEE Trans. Circuits Syst., 36, 1405–1422.

Liu, X. W., & Chen, T. P. (2002). A new result on the global convergence of Hopfield neural networks. IEEE Trans. Circuits Syst., 49, 1514–1516.

Matsuoka, K. (1992). Stability conditions for nonlinear continuous neural networks with asymmetric connection weights. Neural Networks, 5, 495–500.

Michel, A. N., & Gray, D. L. (1990). Analysis and synthesis of neural networks with lower block triangular interconnecting structure. IEEE Trans. Circuits Syst., 37, 1267–1283.

Personnaz, L., Guyon, I., & Drefus, G. (1985). Information storage and retrieval in spin-glass like neural networks. Journal Physique Letters, 46, 359–366.

Sompolinsky, H., & Kanter, I. (1986). Temporal association in asymmetric neural networks. Phys. Rev. Lett., 57, 2861–2864.

Xu, Z. B., Hu, G. Q., & Kwong, C. P. (1996). Asymmetric Hopfield-type networks: Theory and applications. Neural Networks, 9, 483–501.

Yang, H., & Dillon, T. S. (1994). Exponential stability and oscillation of Hopfield graded response neural network. IEEE Trans. Neural Networks, 5, 719–729.

Zhao, H. (2004). Designing asymmetric neural networks with associative memory. Physical Rev. E, 70, 066137.

Zhuang, X., Huang, Y., & Yu, F. A. (1994). Design of Hopfield content-addressable memories. IEEE Trans. Signal Processing, 42(2), 492–495.