
Peak-to-peak exponential direct learning of continuous-time recurrent neural network models: a matrix inequality approach

Abstract

The purpose of this paper is to propose a new peak-to-peak exponential direct learning law (P2PEDLL) for continuous-time dynamic neural network models with disturbance. Dynamic neural network models trained by the proposed P2PEDLL based on matrix inequality formulation are exponentially stable, with a guaranteed exponential peak-to-peak norm performance. The proposed P2PEDLL can be determined by solving two matrix inequalities with a fixed parameter, which can be efficiently checked using existing standard numerical algorithms. We use a numerical example to demonstrate the validity of the proposed direct learning law.

1 Introduction

Remarkable progress has been made recently in the field of neural networks owing to their capabilities in learning, parallel computation, fault tolerance, and function approximation. New and exciting applications appear frequently in combinatorial optimization, signal processing, control, and pattern recognition, and most of these applications aim to improve the stability and performance of neural networks [1].

The use of neural networks in applications such as optimization solvers or associative memories requires analysis of their stability. Stability analysis of neural networks falls into two main categories: stability of the networks themselves [2–7] and stability of learning laws [8–16]. This paper focuses on obtaining a new robust learning method for disturbed neural networks. Essentially, stable learning methods for neural networks can be obtained by analyzing identification or tracking errors. Previous authors [9], who used neural network models to simultaneously identify and control nonlinear systems, studied the stability conditions of learning laws. In [17], a modification of dynamic backpropagation was proposed under $NL_q$ stability constraints. In [11, 13, 15, 16], the passivity concept was applied to obtain learning laws for neural networks. Identification or modeling errors always exist between neural network models and unknown nonlinear systems; thus, extensive modification of either the backpropagation algorithm or the normal gradient algorithm is required [8, 9, 12, 14]. Despite these advances in stable learning of neural networks, previous results were derived solely for neural networks without disturbances. In real physical systems, however, model uncertainties and external disturbances always occur, so obtaining learning algorithms for dynamic neural networks with external disturbances is of practical importance. Ahn [18–22] recently proposed several robust training algorithms for dynamic neural networks, switched neural networks, and fuzzy neural networks with external disturbances.

The peak-to-peak norm approach (also known as the induced $L_\infty$ norm approach in control engineering) introduced in [23–26] is recognized as an efficient method for handling systems with bounded noises or disturbances, because general stability results can be obtained from bounded input and output measurements. This raises the question of whether a peak-to-peak norm based learning algorithm can be obtained for dynamic neural networks. In this paper, we answer this question. As far as we are aware, no published result on training laws has considered the peak-to-peak norm performance for dynamic neural networks; this research topic thus remains unresolved and challenging.

In this paper, we prove, for the first time, the existence of a new robust learning law that considers the peak-to-peak norm performance for dynamic neural networks with external disturbance. This new learning law is called a peak-to-peak exponential direct learning law (P2PEDLL). A new matrix inequality condition for the existence of the P2PEDLL is proposed to ensure that neural networks are exponentially stable with a guaranteed peak-to-peak norm performance. Based on a matrix inequality formulation with a fixed parameter, the proposed learning law can be designed by solving two matrix inequalities, which can be checked easily using standard numerical software [27, 28]. In contrast to the existing work [18, 19, 21, 22] on learning algorithms for continuous-time neural networks, the advantage of our work is that the learning algorithm effectively handles the worst-case peak value of the state vector for all bounded peak values of the external disturbance.

This paper is organized as follows. In Section 2, we propose a new peak-to-peak exponential direct learning law of dynamic neural networks with disturbance. In Section 3, a numerical example is given, and finally, conclusions are presented in Section 4.

2 Peak-to-peak exponential direct learning law

Consider the following neural network:

\[
\dot{x}(t) = Ax(t) + W(t)\,\theta\bigl(x(t)\bigr) + V(t)\,\theta\bigl(x(t-\tau)\bigr) + Gd(t),
\tag{1}
\]

where $x(t)=[x_1(t)\ \cdots\ x_n(t)]^T\in\mathbb{R}^n$ is the state vector, $d(t)=[d_1(t)\ \cdots\ d_k(t)]^T\in\mathbb{R}^k$ is the disturbance vector, $\tau>0$ is the time-delay, $A\in\mathbb{R}^{n\times n}$ is the self-feedback matrix, $W(t)\in\mathbb{R}^{n\times p}$ and $V(t)\in\mathbb{R}^{n\times p}$ are the weight matrices, $\theta(x)=[\theta_1(x),\ldots,\theta_p(x)]^T:\mathbb{R}^n\to\mathbb{R}^p$ is the nonlinear vector field, and $G\in\mathbb{R}^{n\times k}$ is a known constant matrix. The element functions $\theta_i(x)$ $(i=1,\ldots,p)$ are usually selected as sigmoid functions.
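As a concrete illustration, the right-hand side of (1) can be sketched in Python with NumPy. The sigmoid activation below matches the one used in the numerical example of Section 3; the function names are ours, not the paper's:

```python
import numpy as np

def theta(x):
    """Sigmoid activation applied elementwise, as in the example of Section 3."""
    return 1.0 / (1.0 + np.exp(-x))

def rhs(x, x_delayed, W, V, d, A, G):
    """Right-hand side of model (1): A x + W theta(x) + V theta(x(t - tau)) + G d."""
    return A @ x + W @ theta(x) + V @ theta(x_delayed) + G @ d
```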

In this paper, given a prescribed level of disturbance attenuation $\gamma>0$, we find a new P2PEDLL such that the neural network (1) with $d(t)=0$ is exponentially stable and

\[
\sup_{t\ge 0}\bigl\{\exp(\kappa t)\,x^T(t)x(t)\bigr\} < \gamma^2 \sup_{t\ge 0}\bigl\{\exp(\kappa t)\,d^T(t)d(t)\bigr\}
\tag{2}
\]

under zero-initial conditions for all nonzero $d(t)\in L_\infty[0,\infty)$, where $\kappa$ is a positive constant.

A new peak-to-peak exponential direct learning law is given in the following theorem.

Theorem 1 Let $\kappa$ be a given positive constant. For a given level $\gamma>0$, assume that there exist matrices $P=P^T>0$, $Y$ and positive scalars $\lambda$, $\mu$ such that

\[
\begin{bmatrix}
(PA+Y)^T + PA + Y + (\kappa+\lambda)P & PG\\
G^TP & -\mu I
\end{bmatrix} < 0,
\tag{3}
\]
\[
\begin{bmatrix}
\lambda P & 0 & I\\
0 & (\gamma-\mu)I & 0\\
I & 0 & \gamma I
\end{bmatrix} > 0.
\tag{4}
\]

If the weight matrices W(t) and V(t) are updated as

\[
W(t)=
\begin{cases}
\dfrac{P^{-1}Yx(t)}{\|\theta(x(t))\|^2 + \|\theta(x(t-\tau))\|^2}\,\theta^T\bigl(x(t)\bigr), & \|\theta(x(t))\|^2 + \|\theta(x(t-\tau))\|^2 \neq 0,\\
0, & \|\theta(x(t))\|^2 + \|\theta(x(t-\tau))\|^2 = 0,
\end{cases}
\tag{5}
\]
\[
V(t)=
\begin{cases}
\dfrac{P^{-1}Yx(t)}{\|\theta(x(t))\|^2 + \|\theta(x(t-\tau))\|^2}\,\theta^T\bigl(x(t-\tau)\bigr), & \|\theta(x(t))\|^2 + \|\theta(x(t-\tau))\|^2 \neq 0,\\
0, & \|\theta(x(t))\|^2 + \|\theta(x(t-\tau))\|^2 = 0,
\end{cases}
\tag{6}
\]

then the neural network (1) is exponentially stable with a guaranteed exponential peak-to-peak norm bound γ.

Proof The neural network (1) can be represented by

\[
\dot{x}(t) = Ax(t) + \bigl[\,W(t)\ \ V(t)\,\bigr]
\begin{bmatrix}\theta(x(t))\\ \theta(x(t-\tau))\end{bmatrix} + Gd(t).
\tag{7}
\]

Let

\[
Kx(t) = \bigl[\,W(t)\ \ V(t)\,\bigr]
\begin{bmatrix}\theta(x(t))\\ \theta(x(t-\tau))\end{bmatrix},
\tag{8}
\]

where $K\in\mathbb{R}^{n\times n}$ is the gain matrix of the P2PEDLL. Then we obtain

\[
\dot{x}(t) = (A+K)x(t) + Gd(t).
\tag{9}
\]

One possible weight selection $[\,W(t)\ \ V(t)\,]$ fulfilling (8) (except possibly on a subspace of smaller dimension) is given by

\[
\bigl[\,W(t)\ \ V(t)\,\bigr] = Kx(t)
\begin{bmatrix}\theta(x(t))\\ \theta(x(t-\tau))\end{bmatrix}^{+},
\tag{10}
\]

where [ â‹… ] + stands for the pseudoinverse matrix in the Moore-Penrose sense [20, 21, 29]. This learning law is just an algebraic relation depending on x(t), which can be evaluated directly. Taking into account that [20, 21, 29]

\[
x^{+}=
\begin{cases}
\dfrac{x^T}{\|x\|^2}, & x\neq 0,\\
0, & x=0,
\end{cases}
\tag{11}
\]

the direct learning law (10) can be rewritten as

\[
W(t)=
\begin{cases}
\dfrac{Kx(t)}{\|\theta(x(t))\|^2 + \|\theta(x(t-\tau))\|^2}\,\theta^T\bigl(x(t)\bigr), & \|\theta(x(t))\|^2 + \|\theta(x(t-\tau))\|^2 \neq 0,\\
0, & \|\theta(x(t))\|^2 + \|\theta(x(t-\tau))\|^2 = 0,
\end{cases}
\tag{12}
\]
\[
V(t)=
\begin{cases}
\dfrac{Kx(t)}{\|\theta(x(t))\|^2 + \|\theta(x(t-\tau))\|^2}\,\theta^T\bigl(x(t-\tau)\bigr), & \|\theta(x(t))\|^2 + \|\theta(x(t-\tau))\|^2 \neq 0,\\
0, & \|\theta(x(t))\|^2 + \|\theta(x(t-\tau))\|^2 = 0.
\end{cases}
\tag{13}
\]

Consider the following Lyapunov function: $L(t)=\exp(\kappa t)\,x^T(t)Px(t)$. The time derivative of $L(t)$ along the trajectory of (9) is

\[
\begin{aligned}
\dot{L}(t) &= \exp(\kappa t)\,\dot{x}^T(t)Px(t) + \exp(\kappa t)\,x^T(t)P\dot{x}(t) + \kappa\exp(\kappa t)\,x^T(t)Px(t)\\
&= \exp(\kappa t)\,x^T(t)\bigl[(A+K)^TP + P(A+K) + \kappa P\bigr]x(t)\\
&\quad + \exp(\kappa t)\,x^T(t)PGd(t) + \exp(\kappa t)\,d^T(t)G^TPx(t)\\
&= \exp(\kappa t)\bigl\{\eta^T(t)\Omega\eta(t) - \lambda x^T(t)Px(t) + \mu d^T(t)d(t)\bigr\},
\end{aligned}
\tag{14}
\]

where

\[
\eta(t) = \bigl[\,x^T(t)\ \ d^T(t)\,\bigr]^T,
\tag{15}
\]
\[
\Omega =
\begin{bmatrix}
(A+K)^TP + P(A+K) + (\kappa+\lambda)P & PG\\
G^TP & -\mu I
\end{bmatrix}.
\tag{16}
\]

If Ω<0, then

\[
\dot{L}(t) < -\lambda\exp(\kappa t)\,x^T(t)Px(t) + \mu\exp(\kappa t)\,d^T(t)d(t)
\tag{17}
\]
\[
= -\lambda L(t) + \mu\exp(\kappa t)\,d^T(t)d(t).
\tag{18}
\]

Thus, $\dot{L}(t)<0$ holds whenever $L(t)\ge \frac{\mu}{\lambda}\exp(\kappa t)\,d^T(t)d(t)$. Since $L(0)=0$ under the zero-initial condition, this shows that $L(t)$ cannot exceed the value $\frac{\mu}{\lambda}\exp(\kappa t)\,d^T(t)d(t)$:

\[
\exp(\kappa t)\,x^T(t)Px(t) = L(t) < \frac{\mu}{\lambda}\exp(\kappa t)\,d^T(t)d(t)
\tag{19}
\]

for t≥0. From (19), we have

\[
\begin{aligned}
&\frac{1}{\gamma}\exp(\kappa t)\,x^T(t)x(t) - \gamma\exp(\kappa t)\,d^T(t)d(t)\\
&\quad= \frac{1}{\gamma}\exp(\kappa t)\,x^T(t)x(t) - (\gamma-\mu)\exp(\kappa t)\,d^T(t)d(t) - \mu\exp(\kappa t)\,d^T(t)d(t)\\
&\quad< \frac{1}{\gamma}\exp(\kappa t)\,x^T(t)x(t) - (\gamma-\mu)\exp(\kappa t)\,d^T(t)d(t) - \lambda\exp(\kappa t)\,x^T(t)Px(t).
\end{aligned}
\tag{20}
\]

The matrix inequality (4) gives [25, 26]

\[
\frac{1}{\gamma}
\begin{bmatrix}I\\ 0\end{bmatrix}
\begin{bmatrix}I & 0\end{bmatrix}
<
\begin{bmatrix}\lambda P & 0\\ 0 & (\gamma-\mu)I\end{bmatrix}.
\tag{21}
\]

If we pre- and post-multiply (21) by $\exp(\kappa t/2)\,[\,x^T(t)\ \ d^T(t)\,]$ and $\exp(\kappa t/2)\,[\,x^T(t)\ \ d^T(t)\,]^T$, respectively, we have

\[
\frac{1}{\gamma}\exp(\kappa t)\,x^T(t)x(t) - (\gamma-\mu)\exp(\kappa t)\,d^T(t)d(t) - \lambda\exp(\kappa t)\,x^T(t)Px(t) < 0,
\tag{22}
\]

which ensures

\[
\frac{1}{\gamma}\exp(\kappa t)\,x^T(t)x(t) - \gamma\exp(\kappa t)\,d^T(t)d(t) < 0
\tag{23}
\]

from (20). Thus, we have

\[
\exp(\kappa t)\,x^T(t)x(t) < \gamma^2\exp(\kappa t)\,d^T(t)d(t).
\tag{24}
\]

Taking the supremum over t≥0 leads to (2). When d(t)=0, we have

\[
\dot{L}(t) < -\lambda L(t) < 0
\tag{25}
\]

from (18). This implies that $L(t)<L(0)=x^T(0)Px(0)$ for any $t\ge 0$. We also have

\[
L(t) \ge \lambda_{\min}(P)\exp(\kappa t)\,\|x(t)\|^2,
\tag{26}
\]

where $\lambda_{\min}(P)$ is the minimum eigenvalue of the matrix $P$. It follows from (26) that

\[
\|x(t)\| < \sqrt{\frac{x^T(0)Px(0)}{\lambda_{\min}(P)\exp(\kappa t)}}
= \sqrt{\frac{x^T(0)Px(0)}{\lambda_{\min}(P)}}\,\exp\Bigl(-\frac{\kappa}{2}t\Bigr).
\tag{27}
\]

Thus, the exponential stability of the neural network (1) is guaranteed.

Introducing the change of variable $Y=PK$, the condition $\Omega<0$ becomes the matrix inequality (3). The gain matrix of the P2PEDLL is then given by $K=P^{-1}Y$, and the direct learning laws (12)-(13) become (5)-(6). This completes the proof. □

Remark 1 Conditions (3) and (4) are bilinear matrix inequalities (BMIs), so checking their feasibility directly is difficult. Because a BMI problem is not convex, a global optimization method, such as branch and bound, is required for guaranteed convergence to the global optimum [30]. However, the BMI problem can be reduced to an LMI problem by fixing one variable: for a fixed positive scalar $\lambda$, (3) and (4) are linear matrix inequalities (LMIs), which can be solved efficiently using recently developed convex optimization algorithms [27]. In this paper, we used the MATLAB LMI Control Toolbox [28] to solve the LMI problem.

3 Numerical example

Consider the neural network (1) with the following parameters:

\[
x(t)=\begin{bmatrix}x_1(t)\\ x_2(t)\end{bmatrix},\qquad
d(t)=\begin{bmatrix}d_1(t)\\ d_2(t)\end{bmatrix},\qquad
A=\begin{bmatrix}-4 & 0\\ 0 & -3\end{bmatrix},\qquad
G=\begin{bmatrix}1 & 0.24\\ 0 & 1\end{bmatrix},
\]
\[
\theta\bigl(x(t)\bigr)=\begin{bmatrix}\dfrac{1}{1+e^{-x_1(t)}}\\[2mm] \dfrac{1}{1+e^{-x_2(t)}}\end{bmatrix},\qquad
\tau=1.
\]

We fix $\lambda=1$. Solving (3)-(4) with $\gamma=0.4$ and $\kappa=1$ yields

\[
P=\begin{bmatrix}4.2954 & -0.3268\\ -0.3268 & 4.3739\end{bmatrix},\qquad
Y=\begin{bmatrix}-21.7639 & -0.8293\\ -3.3378 & -25.5985\end{bmatrix},\qquad
\mu=0.3188.
\]
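As a quick sanity check (ours, with NumPy), one can verify that the reported $P$, $Y$, and $\mu$ satisfy (3) and (4) for $\lambda=1$, $\gamma=0.4$, $\kappa=1$, and recover the learning-law gain $K=P^{-1}Y$:

```python
import numpy as np

A = np.diag([-4.0, -3.0])
G = np.array([[1.0, 0.24], [0.0, 1.0]])
kappa = lam = 1.0
gamma, mu = 0.4, 0.3188
P = np.array([[4.2954, -0.3268], [-0.3268, 4.3739]])
Y = np.array([[-21.7639, -0.8293], [-3.3378, -25.5985]])

# Left-hand side of (3): should be negative definite.
TL = (P @ A + Y).T + (P @ A + Y) + (kappa + lam) * P
M3 = np.block([[TL, P @ G], [(P @ G).T, -mu * np.eye(2)]])
# Left-hand side of (4): should be positive definite.
Z = np.zeros((2, 2))
M4 = np.block([[lam * P, Z, np.eye(2)],
               [Z, (gamma - mu) * np.eye(2), Z],
               [np.eye(2), Z, gamma * np.eye(2)]])

K = np.linalg.solve(P, Y)  # learning-law gain K = P^{-1} Y
```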

Figure 1 shows state trajectories when the initial conditions are given by

\[
x(\sigma)=\begin{bmatrix}-2.3\\ 1.1\end{bmatrix},\ \sigma\in[-\tau,0],\qquad
W(0)=\begin{bmatrix}0.08 & -0.15\\ 0.15 & -0.08\end{bmatrix},\qquad
V(0)=\begin{bmatrix}-0.21 & 0.15\\ -0.15 & -0.13\end{bmatrix},
\tag{28}
\]

and the external disturbance $d_i(t)$ $(i=1,2)$ is given by Gaussian noise with mean 0 and variance 1. In Figure 1, we can see that the proposed P2PEDLL attenuates the effect of the external disturbance $d(t)$ on the state variable $x(t)$. Figures 2 and 3 show the evolutions of the weights $W(t)$ and $V(t)$, respectively. Since an external disturbance $d(t)$ exists in the neural network (1), the weights remain bounded around the origin. This result is particularly useful when we deal with robust identification and control problems using neural networks [31].

Figure 1. State trajectories of the neural network.

Figure 2. Trajectories for elements of the weight matrix $W(t)$.

Figure 3. Trajectories for elements of the weight matrix $V(t)$.

4 Conclusion

In this paper, we have proved the existence of the P2PEDLL, a new class of robust learning laws for dynamic neural networks with external disturbance. The P2PEDLL, based on two matrix inequalities, ensures that neural networks are exponentially stable with a guaranteed peak-to-peak norm performance. A numerical simulation demonstrated the effectiveness of the proposed P2PEDLL. We expect that the P2PEDLL can be applied to several identification and control problems using neural networks. A hardware implementation of the P2PEDLL would require continuously storing state information of the neural network, because the law uses delayed state information, and this demands substantial memory in hardware. A new and efficient implementation algorithm for the P2PEDLL that reduces this memory requirement therefore remains a topic for future work.

References

1. Gupta MM, Jin L, Homma N: Static and Dynamic Neural Networks. Wiley, New York; 2003.
2. Kelly DG: Stability in contractive nonlinear neural networks. IEEE Trans. Biomed. Eng. 1990, 3: 241–242.
3. Matsuoka K: Stability conditions for nonlinear continuous neural networks with asymmetric connection weights. Neural Netw. 1992, 5: 495–500. 10.1016/0893-6080(92)90011-7
4. Liang XB, Wu LD: A simple proof of a necessary and sufficient condition for absolute stability of symmetric neural networks. IEEE Trans. Circuits Syst. I 1998, 45: 1010–1011. 10.1109/81.721271
5. Sanchez EN, Perez JP: Input-to-state stability (ISS) analysis for dynamic neural networks. IEEE Trans. Circuits Syst. I 1999, 46: 1395–1398. 10.1109/81.802844
6. Chu T, Zhang C, Zhang Z: Necessary and sufficient condition for absolute stability of normal neural networks. Neural Netw. 2003, 16: 1223–1227. 10.1016/S0893-6080(03)00075-3
7. Chu T, Zhang C: New necessary and sufficient conditions for absolute stability of neural networks. Neural Netw. 2007, 20: 94–101. 10.1016/j.neunet.2006.06.003
8. Rovithakis GA, Christodoulou MA: Adaptive control of unknown plants using dynamical neural networks. IEEE Trans. Syst. Man Cybern. 1994, 24: 400–412. 10.1109/21.278990
9. Jagannathan S, Lewis FL: Identification of nonlinear dynamical systems using multilayered neural networks. Automatica 1996, 32: 1707–1712. 10.1016/S0005-1098(96)80007-0
10. Suykens JAK, Vandewalle J, De Moor B: Lur'e systems with multilayer perceptron and recurrent neural networks; absolute stability and dissipativity. IEEE Trans. Autom. Control 1999, 44: 770–774. 10.1109/9.754815
11. Yu W, Li X: Some stability properties of dynamic neural networks. IEEE Trans. Circuits Syst. I 2001, 48: 256–259. 10.1109/81.904893
12. Chairez I, Poznyak A, Poznyak T: New sliding-mode learning law for dynamic neural network observer. IEEE Trans. Circuits Syst. II, Express Briefs 2006, 53: 1338–1342.
13. Yu W, Li X: Passivity analysis of dynamic neural networks with different time-scales. Neural Process. Lett. 2007, 25: 143–155. 10.1007/s11063-007-9034-0
14. Rubio JJ, Yu W: Nonlinear system identification with recurrent neural networks and dead-zone Kalman filter algorithm. Neurocomputing 2007, 70: 2460–2466. 10.1016/j.neucom.2006.09.004
15. Ahn CK: Passive learning and input-to-state stability of switched Hopfield neural networks with time-delay. Inf. Sci. 2010, 180(23): 4582–4594. 10.1016/j.ins.2010.08.014
16. Ahn CK: Some new results on stability of Takagi-Sugeno fuzzy Hopfield neural networks. Fuzzy Sets Syst. 2011, 179(1): 100–111. 10.1016/j.fss.2011.05.010
17. Suykens JAK, Vandewalle J, De Moor B: $NL_q$ theory: checking and imposing stability of recurrent neural networks for nonlinear modelling. IEEE Trans. Signal Process. 1997, 45: 2682–2691. 10.1109/78.650094
18. Ahn CK: An $H_\infty$ approach to stability analysis of switched Hopfield neural networks with time-delay. Nonlinear Dyn. 2010, 60(4): 703–711. 10.1007/s11071-009-9625-6
19. Ahn CK: $L_2$-$L_\infty$ nonlinear system identification via recurrent neural networks. Nonlinear Dyn. 2010, 62(3): 543–552. 10.1007/s11071-010-9741-3
20. Ahn CK: A new robust training law for dynamic neural networks with external disturbance. Discrete Dyn. Nat. Soc. 2010, 2010: Article ID 415895.
21. Ahn CK: Robust stability of recurrent neural networks with ISS learning algorithm. Nonlinear Dyn. 2011, 65(4): 413–419. 10.1007/s11071-010-9901-5
22. Ahn CK: Exponential $H_\infty$ stable learning method for Takagi-Sugeno fuzzy delayed neural networks: a convex optimization approach. Comput. Math. Appl. 2012, 63(5): 887–895. 10.1016/j.camwa.2011.11.054
23. Abedor J, Nagpal K, Poolla K: A linear matrix inequality approach to peak-to-peak gain minimization. Int. J. Robust Nonlinear Control 1996, 6: 899–927. 10.1002/(SICI)1099-1239(199611)6:9/10<899::AID-RNC259>3.0.CO;2-G
24. Scherer C, Gahinet P, Chilali M: Multiobjective output-feedback control via LMI optimization. IEEE Trans. Autom. Control 1997, 42: 896–911. 10.1109/9.599969
25. Ahn CK, Lee YS: Induced $l_\infty$ stability of fixed-point digital filters without overflow oscillations and instability due to finite word length effects. Adv. Differ. Equ. 2012, 2012: Article ID 51.
26. Ahn CK, Kim PS: New peak-to-peak state-space realization of direct form digital filters free of overflow limit cycles. Int. J. Innov. Comput. Inf. Control (2013, in press)
27. Boyd S, El Ghaoui L, Feron E, Balakrishnan V: Linear Matrix Inequalities in Systems and Control Theory. SIAM, Philadelphia; 1994.
28. Gahinet P, Nemirovski A, Laub AJ, Chilali M: LMI Control Toolbox. The MathWorks, Natick; 1995.
29. Albert AE: Regression and the Moore-Penrose Pseudoinverse. Academic Press, San Diego; 1972.
30. VanAntwerp JG, Braatz RD: A tutorial on linear and bilinear matrix inequalities. J. Process Control 2000, 10: 363–385. 10.1016/S0959-1524(99)00056-6
31. Hunt KJ, Sbarbaro D, Zbikowski R, Gawthrop PJ: Neural networks for control systems - a survey. Automatica 1992, 28: 1083–1112. 10.1016/0005-1098(92)90053-I


Author information

Correspondence to Choon Ki Ahn.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Ahn, C.K., Song, M.K. Peak-to-peak exponential direct learning of continuous-time recurrent neural network models: a matrix inequality approach. J Inequal Appl 2013, 68 (2013). https://doi.org/10.1186/1029-242X-2013-68