  • Research Article
  • Open Access

Asymptotical Mean Square Stability of Cohen-Grossberg Neural Networks with Random Delay

Journal of Inequalities and Applications 2010, 2010:247587

https://doi.org/10.1155/2010/247587

  • Received: 26 January 2010
  • Accepted: 5 March 2010
  • Published:

Abstract

The asymptotic mean-square stability analysis problem is considered for a class of Cohen-Grossberg neural networks (CGNNs) with random delay. The evolution of the delay is modeled by a continuous-time homogeneous Markov process with a finite number of states. The main purpose of this paper is to establish easily verifiable conditions under which the randomly delayed Cohen-Grossberg neural network is asymptotically stable in the mean square. By employing Lyapunov-Krasovskii functionals and conducting stochastic analysis, a linear matrix inequality (LMI) approach is developed to derive criteria for asymptotic mean-square stability, which can be readily checked by standard numerical packages such as the Matlab LMI Toolbox. A numerical example is exploited to show the usefulness of the derived LMI-based stability conditions.

Keywords

  • Linear Matrix Inequality
  • Random Delay
  • Amplification Function
  • Neuron Activation Function
  • Stability Analysis Problem

1. Introduction

It is widely known that many biological and artificial neural networks contain inherent time delays, which may cause oscillation and instability (see, e.g., [1]). Recently, many important results have been published on various analysis aspects of Cohen-Grossberg neural networks (CGNNs) with delay. In particular, the existence of equilibrium points, global asymptotic stability, global exponential stability, and the existence of periodic solutions have been intensively investigated in recent publications on the broad topic of time-delay systems (see, e.g., [2–26]). Generally speaking, the time delays considered can be categorized as constant delays, time-varying delays, and distributed delays, and the methods used include the linear matrix inequality (LMI) approach, the Lyapunov functional method, M-matrix theory, topological degree theory, and techniques of inequality analysis.

On the other hand, it can be seen from the existing references that only the deterministic time-delay case has been considered, with stability criteria derived solely from the variation range of the time delay. In practice, the delay in some neural networks arises from multiple factors (e.g., synaptic transmission delay, neuroaxon transmission delay), so one natural paradigm is to treat it probabilistically (see, e.g., [22, 26–30]). For example, to control and propagate stochastic signals through universal learning networks (ULNs), a probabilistic universal learning network (PULN) was proposed in [30]. In a PULN, the output signal of a node is transferred to another node through multiple branches with arbitrary random time delays whose probabilistic characteristics can often be measured by statistical methods. In this case, if some values of the time delay are very large but the probabilities of the delay taking such large values are very small, considering only the variation range of the time delay may lead to overly conservative results.

In many situations, the delay process can be modeled as a Markov process with a finite number of states (see, e.g., [27, 28]). References [27, 28] argue in favor of such a representation of the delay in communication networks, where the discrete values of the delay may correspond to "low", "medium", and "high" network loads. However, to the best of the authors' knowledge, the stability analysis of CGNNs with random delay modeled by a continuous-time homogeneous Markov process with a finite number of states has so far received little attention in the literature. This situation motivates our present investigation.

Motivated by the above discussions, the aim of this paper is to investigate the mean-square stability of CGNNs with random delay. By using a Markov chain with a finite number of states, we propose a new model of CGNNs with random delay. By employing Lyapunov-Krasovskii functionals and conducting stochastic analysis, we develop a linear matrix inequality (LMI) approach to derive stability criteria that can be readily checked by standard numerical packages. A simple example is provided to demonstrate the effectiveness and applicability of the proposed testing criteria.

1.1. Notations

The notations are quite standard. Throughout this paper, $\mathbb{R}^n$ and $\mathbb{R}^{n\times m}$ denote, respectively, the $n$-dimensional Euclidean space and the set of all $n\times m$ real matrices. The superscript "$T$" denotes the transpose, and the notation $X \ge Y$ (resp., $X > Y$), where $X$ and $Y$ are symmetric matrices, means that $X - Y$ is positive semidefinite (resp., positive definite). $I$ is the identity matrix with compatible dimension. For $\tau > 0$, $C([-\tau, 0]; \mathbb{R}^n)$ denotes the family of continuous functions $\varphi$ from $[-\tau, 0]$ to $\mathbb{R}^n$ with the norm $\|\varphi\| = \sup_{-\tau \le \theta \le 0} |\varphi(\theta)|$, where $|\cdot|$ is the Euclidean norm in $\mathbb{R}^n$. If $A$ is a matrix, denote by $\|A\|$ its operator norm, that is, $\|A\| = \sup\{|Ax| : |x| = 1\} = \sqrt{\lambda_{\max}(A^T A)}$, where $\lambda_{\max}(\cdot)$ (resp., $\lambda_{\min}(\cdot)$) means the largest (resp., smallest) eigenvalue of a matrix. Moreover, let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, P)$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t \ge 0}$ satisfying the usual conditions (i.e., it is right continuous and $\mathcal{F}_0$ contains all $P$-null sets). Denote by $C^b_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$ the family of all bounded, $\mathcal{F}_0$-measurable, $C([-\tau, 0]; \mathbb{R}^n)$-valued random variables. For $p > 0$ and $t \ge 0$, denote by $L^p_{\mathcal{F}_t}([-\tau, 0]; \mathbb{R}^n)$ the family of all $\mathcal{F}_t$-measurable $C([-\tau, 0]; \mathbb{R}^n)$-valued random variables $\varphi$ such that $\sup_{-\tau \le \theta \le 0} \mathbb{E}|\varphi(\theta)|^p < \infty$, where $\mathbb{E}\{\cdot\}$ stands for the mathematical expectation operator with respect to the given probability measure $P$. In symmetric block matrices, we use an asterisk "$*$" to represent a term that is induced by symmetry, and $\operatorname{diag}\{\cdots\}$ stands for a block-diagonal matrix. Sometimes, the arguments of a function will be omitted in the analysis when no confusion can arise.
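As a quick numerical sanity check of the operator-norm characterization $\|A\| = \sqrt{\lambda_{\max}(A^T A)}$ used above, the following NumPy snippet (with an arbitrary example matrix, not one from the paper) compares the two computations:

```python
import numpy as np

# Arbitrary example matrix (any real matrix works).
A = np.array([[1.0, 2.0],
              [0.5, -1.0]])

# Operator (spectral) norm: sup{|Ax| : |x| = 1}.
op_norm = np.linalg.norm(A, 2)

# Equivalent characterization: square root of the largest eigenvalue of A^T A.
lam_max = np.max(np.linalg.eigvalsh(A.T @ A))

# The two characterizations agree up to floating-point error.
print(op_norm, np.sqrt(lam_max))
```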

2. Problem Formulation

In this section, we introduce the model of Cohen-Grossberg neural networks with random delay, give the relevant definition of stability, and formulate the problem to be dealt with in this paper.

Let $\{r(t), t \ge 0\}$ be a right-continuous homogeneous Markov process on the probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, P)$ taking values in the finite space $S = \{\tau_1, \tau_2, \ldots, \tau_N\}$ satisfying $0 < \tau_1 < \tau_2 < \cdots < \tau_N$, and its generator $\Gamma = (\gamma_{ij})_{N \times N}$ is given by

(2.1) $P\{r(t+\Delta) = \tau_j \mid r(t) = \tau_i\} = \begin{cases} \gamma_{ij}\Delta + o(\Delta), & i \ne j, \\ 1 + \gamma_{ii}\Delta + o(\Delta), & i = j, \end{cases}$

where $\Delta > 0$ and $\lim_{\Delta \to 0} o(\Delta)/\Delta = 0$; $\gamma_{ij} \ge 0$ is the transition rate from $\tau_i$ to $\tau_j$ if $i \ne j$, and

(2.2) $\gamma_{ii} = -\sum_{j \ne i} \gamma_{ij}.$
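A delay process of this kind can be simulated by the standard exponential-holding-time construction: stay in the current state for an exponential time with rate $-\gamma_{ii}$, then jump to $\tau_j$ with probability $\gamma_{ij}/(-\gamma_{ii})$. The sketch below is an illustration only; the two-state generator and delay values are hypothetical and not taken from the paper.

```python
import numpy as np

def simulate_delay(gamma, delays, t_end, i0=0, rng=None):
    """Simulate a continuous-time homogeneous Markov process with
    generator `gamma` on the finite delay set `delays`.
    Returns the jump times and the delay value held on each interval."""
    rng = np.random.default_rng(rng)
    times, states = [0.0], [i0]
    t, i = 0.0, i0
    while t < t_end:
        rate = -gamma[i, i]               # total exit rate from state i
        if rate <= 0:                     # absorbing state: no more jumps
            break
        t += rng.exponential(1.0 / rate)  # exponential holding time
        probs = gamma[i].copy()           # jump to j != i w.p. gamma[i,j]/rate
        probs[i] = 0.0
        i = rng.choice(len(delays), p=probs / rate)
        times.append(t)
        states.append(i)
    return times, [delays[j] for j in states]

# Hypothetical two-state example: delays tau_1 = 0.1, tau_2 = 0.5.
Gamma = np.array([[-1.0, 1.0],
                  [2.0, -2.0]])
times, tau_path = simulate_delay(Gamma, [0.1, 0.5], t_end=10.0, rng=0)
```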
Consider the following Cohen-Grossberg neural network with constant delay described by

(2.3) $\dot{x}_i(t) = -a_i(x_i(t))\Big[b_i(x_i(t)) - \sum_{j=1}^{n} c_{ij} f_j(x_j(t)) - \sum_{j=1}^{n} d_{ij} f_j(x_j(t-\tau)) - J_i\Big], \quad i = 1, 2, \ldots, n,$

where $x_i(t)$ is the state of the $i$th unit at time $t$, $a_i(\cdot)$ is the amplification function, $b_i(\cdot)$ denotes the behaved function, and $f_j(\cdot)$ is the activation function. The matrices $C = (c_{ij})_{n \times n}$ and $D = (d_{ij})_{n \times n}$ are, respectively, the connection weight matrix and the discretely delayed connection weight matrix. $J = (J_1, \ldots, J_n)^T$ is a constant external input vector. The scalar $\tau > 0$, which may be unknown, denotes the discrete time delay.

Let

(2.4) $A(x(t)) = \operatorname{diag}\{a_1(x_1(t)), \ldots, a_n(x_n(t))\}, \quad B(x(t)) = (b_1(x_1(t)), \ldots, b_n(x_n(t)))^T, \quad f(x(t)) = (f_1(x_1(t)), \ldots, f_n(x_n(t)))^T.$

The model (2.3) can be rewritten in the following matrix form:

(2.5) $\dot{x}(t) = -A(x(t))\big[B(x(t)) - Cf(x(t)) - Df(x(t-\tau)) - J\big].$
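For concreteness, the matrix form (2.5) can be integrated by a simple forward-Euler scheme with a constant-delay history buffer. Every numeric choice below (the weights, the amplification, behaved, and activation functions, the delay, and the initial history) is a hypothetical illustration, not a parameter set from the paper:

```python
import numpy as np

# Hypothetical 2-neuron parameters; none of these come from the paper.
C = np.array([[0.2, -0.1], [0.1, 0.3]])    # connection weights
D = np.array([[0.1, 0.0], [-0.2, 0.1]])    # delayed connection weights
J = np.array([0.0, 0.0])                   # constant external input
tau, h, T = 0.5, 0.01, 20.0                # delay, step size, horizon

amp = lambda x: 1.0 + 0.5 / (1.0 + x**2)   # positive, bounded amplification
beh = lambda x: 2.0 * x                    # behaved function with b'(u) > 0
act = np.tanh                              # activation function

d = int(round(tau / h))                    # delay expressed in Euler steps
steps = int(T / h)
x = np.zeros((steps + d + 1, 2))
x[: d + 1] = np.array([1.0, -0.8])         # constant initial history

for k in range(d, steps + d):
    xt, xd = x[k], x[k - d]                # current and delayed states
    dx = -amp(xt) * (beh(xt) - C @ act(xt) - D @ act(xd) - J)
    x[k + 1] = xt + h * dx
```

With these (stable) illustrative parameters the trajectory settles toward the equilibrium at the origin.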

In this paper, we make the following assumptions on the amplification function, the behaved function, and the neuron activation functions.

Assumption 1.

For each $i \in \{1, 2, \ldots, n\}$, the amplification function $a_i(\cdot)$ is positive, bounded, and satisfies

(2.6) $0 < \underline{\alpha}_i \le a_i(u) \le \overline{\alpha}_i, \quad \forall u \in \mathbb{R},$

where $\underline{\alpha}_i$ and $\overline{\alpha}_i$ are known positive constants.

Assumption 2.

The behaved function $b_i(\cdot)$ is continuous and differentiable, and

(2.7) $b_i'(u) \ge \beta_i > 0, \quad \forall u \in \mathbb{R}, \quad i = 1, 2, \ldots, n.$

Assumption 3.

For $i = 1, 2, \ldots, n$, the neuron activation function $f_i(\cdot)$ in (2.3) satisfies
(2.8)

for any $x, y \in \mathbb{R}$ with $x \ne y$, where the constant appearing in (2.8) is positive.

Remark 2.1.

It is obvious that the condition in Assumption 3 is more general than the usual sigmoid functions and the recently commonly used Lipschitz conditions; see, for example, [4–10].

Assumption 4.

The neuron activation function $f_i(\cdot)$ in (2.3) satisfies the following condition:
(2.9)

where the constant appearing in (2.9) is positive.

For notational convenience, we shift the equilibrium point $x^*$ to the origin by the translation $y(t) = x(t) - x^*$, which yields the following system:
(2.10)
where $y(t)$ is the state vector of the transformed system and
(2.11)
It follows from Assumptions 1 and 2, respectively, that
(2.12)
(2.13)
Note that Assumption 3 implies the following condition:
(2.14)
and from (2.14), we have
(2.15)
Assumption 4 implies that
(2.16)
Now we consider the following Cohen-Grossberg neural network with random delay, which is actually a modification of (2.10):
(2.17)

where $\{r(t), t \ge 0\}$ is the Markov process defined above, with the finite set of states $S = \{\tau_1, \ldots, \tau_N\}$.

Now we shall work on the network mode $r(t)$. Let $y(t; \xi)$ denote the state trajectory from the initial data $y(\theta) = \xi(\theta)$ on $-\tau_N \le \theta \le 0$. According to [31], for any initial value $\xi$, (2.17) has a unique globally continuous solution. Clearly, the network (2.17) admits an equilibrium point (trivial solution) $y(t; 0) \equiv 0$ corresponding to the initial data $\xi = 0$.
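A network of this kind can be simulated by switching the delay buffer whenever the Markov process jumps; on a fixed Euler grid, a jump out of the current mode occurs over one step of size $h$ with probability approximately $-\gamma_{ii} h$. Again, every numeric value below is a hypothetical illustration, not a parameter set from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters for the shifted random-delay system.
C = np.array([[0.2, -0.1], [0.1, 0.3]])
D = np.array([[0.1, 0.0], [-0.2, 0.1]])
Gamma = np.array([[-1.0, 1.0], [2.0, -2.0]])  # generator of the delay process
taus = np.array([0.1, 0.5])                   # delay states tau_1, tau_2
h, T = 0.01, 20.0

amp = lambda x: 1.0 + 0.5 / (1.0 + x**2)
beh = lambda x: 2.0 * x
act = np.tanh

d_max = int(round(taus.max() / h))
steps = int(T / h)
x = np.zeros((steps + d_max + 1, 2))
x[: d_max + 1] = np.array([0.5, -0.5])        # constant initial history
mode = 0                                      # initial Markov state

for k in range(d_max, steps + d_max):
    # Grid approximation of the Markov jump: P(jump) ~ -Gamma[mode, mode] * h.
    if rng.random() < -Gamma[mode, mode] * h:
        mode = 1 - mode                       # two states: flip the mode
    d = int(round(taus[mode] / h))            # delay of the current mode
    xt, xd = x[k], x[k - d]
    dx = -amp(xt) * (beh(xt) - C @ act(xt) - D @ act(xd))
    x[k + 1] = xt + h * dx
```

With these illustrative parameters the trajectory decays toward the trivial solution for every realization of the delay process.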

Remark 2.2.

It is noted that random delay modeled by a continuous-time homogeneous Markov process with a finite number of states was first introduced in [27, 28]. Unlike the common assumptions on the delay in the published literature, the probability distribution of the delay taking certain values is assumed to be known in advance in this paper; on this basis, a new model of the neural system, (2.17), has been derived, which can be seen as an extension of the common neural system (2.10).

The following stability concept is needed in this paper.

Definition 2.3.

For system (2.17) and every initial state $\xi$ and initial mode $r(0) = \tau_{i_0} \in S$, the equilibrium point is asymptotically stable in the mean-square sense if

(2.18) $\lim_{t \to \infty} \mathbb{E}\,|y(t; \xi)|^2 = 0,$

where $y(t; \xi)$ is the solution of system (2.17) at time $t$ under the initial state $\xi$ and initial mode $\tau_{i_0}$.

3. Main Results and Proofs

The following lemma is needed in deriving our main results.

Lemma 3.1 (see [8]).

Let $x, y \in \mathbb{R}^n$ and $\varepsilon > 0$. Then we have

(3.1) $x^T y + y^T x \le \varepsilon x^T x + \varepsilon^{-1} y^T y.$
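Lemma 3.1 is the standard completion-of-squares bound $x^T y + y^T x \le \varepsilon x^T x + \varepsilon^{-1} y^T y$, which follows from $|\sqrt{\varepsilon}\,x - y/\sqrt{\varepsilon}|^2 \ge 0$. It is easy to check numerically for random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.normal(size=5)
    y = rng.normal(size=5)
    eps = rng.uniform(0.1, 10.0)
    lhs = 2.0 * x @ y                  # x^T y + y^T x
    rhs = eps * (x @ x) + (y @ y) / eps
    # Follows from |sqrt(eps)*x - y/sqrt(eps)|^2 >= 0.
    assert lhs <= rhs + 1e-12
```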

Lemma 3.2 (Schur Complement; see [32]).

Given constant matrices $\Omega_1, \Omega_2, \Omega_3$, where $\Omega_1 = \Omega_1^T$ and $0 < \Omega_2 = \Omega_2^T$, then

(3.2) $\Omega_1 + \Omega_3^T \Omega_2^{-1} \Omega_3 < 0$

if and only if

(3.3) $\begin{pmatrix} \Omega_1 & \Omega_3^T \\ \Omega_3 & -\Omega_2 \end{pmatrix} < 0.$
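The Schur complement equivalence of Lemma 3.2 — the reduced condition $\Omega_1 + \Omega_3^T \Omega_2^{-1} \Omega_3 < 0$ holds exactly when the corresponding block matrix is negative definite — can be sanity-checked numerically. The matrices below are arbitrary examples satisfying the hypotheses, not quantities from the paper:

```python
import numpy as np

def is_neg_def(M):
    """Negative definiteness via eigenvalues of a symmetric matrix."""
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

O1 = np.array([[-3.0, 0.5], [0.5, -2.0]])   # Omega_1 = Omega_1^T
O2 = np.array([[2.0, 0.0], [0.0, 1.0]])     # Omega_2 = Omega_2^T > 0
O3 = np.array([[0.3, -0.2], [0.1, 0.4]])    # Omega_3

reduced = is_neg_def(O1 + O3.T @ np.linalg.inv(O2) @ O3)
block = is_neg_def(np.block([[O1, O3.T], [O3, -O2]]))
assert reduced == block                     # Schur complement equivalence
```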
Before stating our main results, let us denote
(3.4)

where the constants involved are defined in (2.12) and (2.13), and the diagonal positive-definite matrix is a parameter to be designed.

We are now ready to derive conditions under which the network dynamics of (2.17) is asymptotically stable in the mean square. The main theorem given below shows that the stability criteria can be expressed in terms of the feasibility of two LMIs.

Theorem 3.3.

If there exist two sequences of positive scalars, two symmetric positive-definite matrices, and two positive diagonal matrices such that the following LMIs hold:
(3.5)
where
(3.6)

then the dynamics of the neural network (2.17) is asymptotically stable in the mean square.
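The paper's LMIs (3.5) are stated in terms of the block matrices in (3.6), which cannot be reproduced here; but the general workflow — feed the LMIs to a semidefinite solver and accept the network as stable if they are feasible — can be illustrated on the simplest Lyapunov-type condition $A^T P + P A < 0$, $P > 0$, solved with SciPy. This is an illustrative stand-in under a hypothetical Hurwitz matrix, not the paper's conditions:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical Hurwitz matrix standing in for a stable linearization.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
Q = np.eye(2)

# Solve A^T P + P A = -Q; a positive-definite P certifies stability.
P = solve_continuous_lyapunov(A.T, -Q)

assert np.allclose(A.T @ P + P @ A, -Q)     # residual check
assert np.all(np.linalg.eigvalsh(P) > 0)    # P is positive definite
```

For the actual LMIs of Theorem 3.3 one would use a dedicated LMI/SDP solver, such as the Matlab LMI Toolbox mentioned in the abstract.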

Proof.

Let $C^{2,1}$ denote the family of all nonnegative functions $V(y, t, \tau_i)$ on $\mathbb{R}^n \times \mathbb{R}_+ \times S$ which are continuously twice differentiable in $y$ and differentiable in $t$. In order to establish the stability conditions, we define a Lyapunov functional candidate by
(3.7)
It is known (see [27]) that $(y(t), r(t))$ is a Markov process. From (2.17) and (3.7), the weak infinitesimal operator (see [27]) of the stochastic process is given by
(3.8)
Noticing that the two designed matrices are diagonal positive definite, we obtain from (2.12) and (2.13) that
(3.9)
It follows from Lemma 3.1 that
(3.10)
Substituting (3.9)-(3.10) into (3.8) leads to
(3.11)
where
(3.12)
It follows from the Schur Complement Lemma that the first term of (3.11) being negative is equivalent to the following:
(3.13)
In view of the LMIs (3.5) and the results of [31], we have
(3.14)

Therefore, the dynamics of the neural network (2.17) is asymptotically stable in the mean square.

4. A Numerical Example

In this section, a numerical example is presented to demonstrate the effectiveness and applicability of the developed method for the asymptotic mean-square stability of the Cohen-Grossberg neural network (2.17) with random delay.

Example 4.1.

Consider a two-neuron neural network (2.17) with two modes. The network parameters are given as follows:
(4.1)
By using the Matlab LMI Toolbox [32], we solve the LMIs in Theorem 3.3 and obtain
(4.2)

Therefore, it follows from Theorem 3.3 that the Cohen-Grossberg neural network (2.17) with random delay is asymptotically stable in the mean square.

5. Conclusions

In this paper, we have investigated the asymptotic mean-square stability analysis problem for CGNNs with random delay. By utilizing a Markov chain to describe the discrete delay, a new neural network model has been presented. By employing a Lyapunov-Krasovskii functional and conducting stochastic analysis, we have developed a linear matrix inequality (LMI) approach to derive stability criteria that can be readily checked by standard numerical packages. A simple example has been provided to demonstrate the effectiveness and applicability of the proposed testing criteria.

Declarations

Acknowledgments

The authors would like to thank the Editor and the anonymous referees for their detailed comments and valuable suggestions, which considerably improved the presentation of this paper. This work was supported in part by the Excellent Youth Foundation of the Educational Committee of Hunan Province under Grant no. 08B005, the Hunan Postdoctoral Scientific Program under Grant no. 2009RS3020, the Natural Science Foundation of Hunan Province under Grant no. 09JJ6006, the Scientific Research Fund of the Hunan Provincial Education Department of China under Grant no. 09C059, and the Scientific Research Fund of the Hunan Provincial Science and Technology Department of China under Grants no. 2009FJ3103 and no. 2009ZK4021. The authors would also like to thank Prof. Fuzhou Gong for his guidance.

Authors’ Affiliations

(1)
School of Mathematics and Computational Science, Xiangtan University, 411105 Hunan, China
(2)
School of Mathematics and Computational Science, Changsha University of Science and Technology, 410076 Hunan, China
(3)
Institute of Applied Mathematics, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, 100080 Beijing, China
(4)
School of Mathematics, Central South University, 410075 Hunan, China

References

  1. Arik S, Tavsanoglu V: Stability analysis of delayed neural networks. IEEE Transactions on Circuits and Systems I 2000, 47(4):568–571. 10.1109/81.841858
  2. Liao XX, Mao X: Stability of stochastic neural networks. Neural, Parallel & Scientific Computations 1996, 4(2):205–224.
  3. Blythe S, Mao X, Liao X: Stability of stochastic delay neural networks. Journal of the Franklin Institute 2001, 338(4):481–495. 10.1016/S0016-0032(01)00016-3
  4. Chen T, Rong L: Delay-independent stability analysis of Cohen-Grossberg neural networks. Physics Letters A 2003, 317(5–6):436–449. 10.1016/j.physleta.2003.08.066
  5. Cao J, Liang J: Boundedness and stability for Cohen-Grossberg neural network with time-varying delays. Journal of Mathematical Analysis and Applications 2004, 296(2):665–685. 10.1016/j.jmaa.2004.04.039
  6. Cao J, Li X: Stability in delayed Cohen-Grossberg neural networks: LMI optimization approach. Physica D 2005, 212(1–2):54–65. 10.1016/j.physd.2005.09.005
  7. Sun J, Wan L: Global exponential stability and periodic solutions of Cohen-Grossberg neural networks with continuously distributed delays. Physica D 2005, 208(1–2):1–20. 10.1016/j.physd.2005.05.009
  8. Wang Z, Liu Y, Yu L, Liu X: Exponential stability of delayed recurrent neural networks with Markovian jumping parameters. Physics Letters A 2006, 2(1):346–352.
  9. Yang Z, Xu D: Impulsive effects on stability of Cohen-Grossberg neural networks with variable delays. Applied Mathematics and Computation 2006, 177(1):63–78. 10.1016/j.amc.2005.10.032
  10. Wang Z, Liu Y, Li M, Liu X: Stability analysis for stochastic Cohen-Grossberg neural networks with mixed time delays. IEEE Transactions on Neural Networks 2006, 17(3):814–820. 10.1109/TNN.2006.872355
  11. Zhao H, Wang K: Dynamical behaviors of Cohen-Grossberg neural networks with delays and reaction-diffusion terms. Neurocomputing 2006, 70(1–3):536–543.
  12. Zhu E, Zhang H, Wang Y, Zou J, Yu Z, Hou Z: Pth moment exponential stability of stochastic Cohen-Grossberg neural networks with time-varying delays. Neural Processing Letters 2007, 26(3):191–200. 10.1007/s11063-007-9051-z
  13. Yu W, Cao J, Wang J: An LMI approach to global asymptotic stability of the delayed Cohen-Grossberg neural network via nonsmooth analysis. Neural Networks 2007, 20(7):810–818. 10.1016/j.neunet.2007.07.004
  14. Ren F, Cao J: Periodic solutions for a class of higher-order Cohen-Grossberg type neural networks with delays. Computers & Mathematics with Applications 2007, 54(6):826–839. 10.1016/j.camwa.2007.03.005
  15. Zhao W: Dynamics of Cohen-Grossberg neural network with variable coefficients and time-varying delays. Nonlinear Analysis: Real World Applications 2008, 9(3):1024–1037. 10.1016/j.nonrwa.2007.02.002
  16. Song Q-K, Cao J-D: Robust stability in Cohen-Grossberg neural network with both time-varying and distributed delays. Neural Processing Letters 2008, 27(2):179–196. 10.1007/s11063-007-9068-3
  17. Ji C, Zhang HG, Wei Y: LMI approach for global robust stability of Cohen-Grossberg neural networks with multiple delays. Neurocomputing 2008, 71(4–6):475–485.
  18. Zhang H, Wang Y: Stability analysis of Markovian jumping stochastic Cohen-Grossberg neural networks with mixed time delays. IEEE Transactions on Neural Networks 2008, 19(2):366–370.
  19. Liu Y, Wang Z, Liu X: On delay-dependent robust exponential stability of stochastic neural networks with mixed time delays and Markovian switching. Nonlinear Dynamics 2008, 54(3):199–212. 10.1007/s11071-007-9321-3
  20. Cao J, Feng G, Wang Y: Multistability and multiperiodicity of delayed Cohen-Grossberg neural networks with a general class of activation functions. Physica D 2008, 237(13):1734–1749. 10.1016/j.physd.2008.01.012
  21. Nie X, Cao J: Stability analysis for the generalized Cohen-Grossberg neural networks with inverse Lipschitz neuron activations. Computers & Mathematics with Applications 2009, 57(9):1522–1536. 10.1016/j.camwa.2009.01.003
  22. Yue D, Zhang Y, Tian E, Peng C: Delay-distribution-dependent exponential stability criteria for discrete-time recurrent neural networks with stochastic delay. IEEE Transactions on Neural Networks 2008, 19(7):1299–1306.
  23. Shen Y, Wang J: Almost sure exponential stability of recurrent neural networks with Markovian switching. IEEE Transactions on Neural Networks 2009, 20(5):840–855.
  24. Zhang Y, Yue D, Tian E: Robust delay-distribution-dependent stability of discrete-time stochastic neural networks with time-varying delay. Neurocomputing 2009, 72(4–6):1265–1273.
  25. Balasubramaniam P, Rakkiyappan R: Delay-dependent robust stability analysis for Markovian jumping stochastic Cohen-Grossberg neural networks with discrete interval and distributed time-varying delays. Nonlinear Analysis: Hybrid Systems 2009, 3(3):207–214. 10.1016/j.nahs.2009.01.002
  26. Yang R, Gao H, Lam J, Shi P: New stability criteria for neural networks with distributed and probabilistic delays. Circuits, Systems, and Signal Processing 2009, 28(4):505–522.
  27. Kolmanovsky I, Maizenberg TL: Mean-square stability of nonlinear systems with time-varying, random delay. Stochastic Analysis and Applications 2001, 19(2):279–293. 10.1081/SAP-100001189
  28. Kolmanovskii VB, Maizenberg TL, Richard J-P: Mean square stability of difference equations with a stochastic delay. Nonlinear Analysis: Theory, Methods & Applications 2003, 52(3):795–804. 10.1016/S0362-546X(02)00133-5
  29. Hopfield JJ: Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences of the United States of America 1982, 79(8):2554–2558. 10.1073/pnas.79.8.2554
  30. Hirasawa K, Mabu S, Hu J: Propagation and control of stochastic signals through universal learning networks. Neural Networks 2006, 19(4):487–499. 10.1016/j.neunet.2005.10.005
  31. Mao X, Yuan C: Stochastic Differential Equations with Markovian Switching. Imperial College Press, London, UK; 2006.
  32. Boyd S, El Ghaoui L, Feron E, Balakrishnan V: Linear Matrix Inequalities in System and Control Theory, SIAM Studies in Applied Mathematics, Volume 15. SIAM, Philadelphia, PA, USA; 1994.
