In recent years, neural networks (NNs) have been extensively studied because of their immense application potential in areas such as signal processing, pattern recognition, static image processing, associative memory, and combinatorial optimization. In practice, time delays are frequently encountered in dynamical systems and are often a source of oscillation and instability. Thus, the stability problem of delayed neural networks has become a topic of great theoretical and practical importance. Numerous important results have been reported for neural networks with time delays (see, e.g., [1–23]).
On the other hand, the existing references consider only the deterministic time-delay case, and the stability criteria derived there rely solely on information about the variation range of the time delay. In practice, the delay in some NNs arises from multiple factors (e.g., synaptic transmission delay, neuroaxon transmission delay), and one natural paradigm for treating it is a probabilistic description (see, e.g., [17, 18, 24, 25]). For example, to control and propagate stochastic signals through universal learning networks (ULNs), a probabilistic universal learning network (PULN) was proposed in [25]. In a PULN, the output signal of a node is transferred to another node through multiple branches with arbitrary time delays; these delays are random, and their probabilistic characteristics can often be measured by statistical methods. In this case, if some values of the time delay are very large but occur only with very small probability, then criteria based solely on the variation range of the delay may be overly conservative. In many situations, the delay process can be modeled as a Markov process with a finite number of states (see, e.g., [26, 27]). References [26, 27] argue in favor of such a representation of the delay in communication networks, where the discrete values of the delay may correspond to "low", "medium", and "high" network loads.
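The finite-state Markov delay model described above can be sketched numerically as follows. This is an illustrative assumption, not a construction from the paper: the transition matrix, the three delay values, and the state labels are all hypothetical.

```python
import numpy as np

# Hypothetical delay value for each load state: "low", "medium", "high".
delay_values = np.array([1.0, 5.0, 20.0])

# Hypothetical transition probability matrix: row i gives the probabilities
# of jumping from state i to each state at the next time step.
T = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.85, 0.05],
              [0.05, 0.15, 0.80]])

def simulate_delay(n_steps, s0=0, seed=None):
    """Simulate the random delay process tau(k) driven by the Markov chain."""
    rng = np.random.default_rng(seed)
    states = np.empty(n_steps, dtype=int)
    states[0] = s0
    for k in range(1, n_steps):
        states[k] = rng.choice(3, p=T[states[k - 1]])
    return delay_values[states]

delays = simulate_delay(10_000, seed=42)
# The "high" state (delay 20.0) is visited rarely, so a criterion that only
# uses the delay's range [1, 20] ignores this structure and is conservative.
print(delays.mean())
```

The point of the sketch is the last comment: large delays occur, so the deterministic variation range is wide, yet their probability is small, which is exactly the information a probabilistic stability criterion can exploit.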
In practice, a neural network sometimes has finite state representations (also called modes, patterns, or clusters), and the modes may switch (or jump) from one to another at different times [19–23]. Recently, it has been revealed in [19] that switching (or jumping) between different neural network modes can be governed by a Markov chain. Specifically, the class of neural networks with Markovian switching has two components in the state vector. The first component, which varies continuously, is referred to as the continuous state of the neural network, and the second one, which varies discretely, is referred to as the mode of the neural network. For a specific mode, the dynamics of the neural network is continuous, but the switchings among different modes may be seen as discrete events. It should be pointed out that neural networks with Markovian switching are of great significance in modeling a class of neural networks with finite network modes and, owing to their practical importance, have been studied by several researchers, for example, in [19–23, 28]. However, to the best of the authors' knowledge, the stability analysis of recurrent neural networks (RNNs) with random delay and Markovian switching has so far received little attention in the literature. This situation motivates our present investigation.
Motivated by the above discussions, the aim of this paper is to investigate the mean-square exponential stability of RNNs with random delay and Markovian switching. By using a Markov chain with a finite number of states, we propose a new model of RNNs with random delay and Markovian switching. The analysis is based on the Lyapunov-Krasovskii functional and a stochastic analysis approach, and the stability criteria are expressed in terms of linear matrix inequalities (LMIs), which can be readily checked by standard numerical packages. A simple example is provided to demonstrate the effectiveness and applicability of the proposed criteria.
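To illustrate what "readily checked by standard numerical packages" means in the simplest setting, the following NumPy-only sketch (an assumption for illustration; the paper's actual LMI conditions are more involved) tests the classical Lyapunov-type condition $A^{T}P + PA < 0$, $P > 0$: it solves the Lyapunov equation $A^{T}P + PA = -Q$ for a chosen $Q > 0$ and verifies that the resulting $P$ is positive definite.

```python
import numpy as np

# Hypothetical stable system matrix for the sketch.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
n = A.shape[0]
Q = np.eye(n)  # any positive definite right-hand side works

# Vectorize the Lyapunov equation A^T P + P A = -Q:
# vec(A^T P) + vec(P A) = (kron(I, A^T) + kron(A^T, I)) vec(P).
M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(M, -Q.reshape(-1)).reshape(n, n)
P = (P + P.T) / 2  # symmetrize against round-off

eigs = np.linalg.eigvalsh(P)
print(eigs.min() > 0)  # True: P > 0 exists, so the LMI test succeeds
```

For the matrix-inequality criteria of the kind derived in this paper, with free matrix variables rather than an equality, one would instead feed the LMIs to a semidefinite programming solver; the sketch above only shows the feasibility-check idea on the simplest Lyapunov case.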
Notations
The notations are quite standard. Throughout this paper, $\mathbb{R}^n$ and $\mathbb{R}^{n\times m}$ denote, respectively, the $n$-dimensional Euclidean space and the set of all $n\times m$ real matrices. The superscript "$T$" denotes the transpose, and the notation $X\ge Y$ (resp., $X>Y$), where $X$ and $Y$ are symmetric matrices, means that $X-Y$ is positive semidefinite (resp., positive definite). $I$ is the identity matrix with compatible dimension. For $\tau>0$, $C([-\tau,0];\mathbb{R}^n)$ denotes the family of continuous functions $\varphi$ from $[-\tau,0]$ to $\mathbb{R}^n$ with the norm $\|\varphi\|=\sup_{-\tau\le\theta\le 0}|\varphi(\theta)|$, where $|\cdot|$ is the Euclidean norm in $\mathbb{R}^n$. If $A$ is a matrix, denote by $\|A\|$ its operator norm, that is, $\|A\|=\sup\{|Ax|:|x|=1\}=\sqrt{\lambda_{\max}(A^{T}A)}$, where $\lambda_{\max}(\cdot)$ (resp., $\lambda_{\min}(\cdot)$) means the largest (resp., smallest) eigenvalue of a matrix. Moreover, let $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},P)$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\ge 0}$ satisfying the usual conditions (i.e., it is right continuous and $\mathcal{F}_0$ contains all P-null sets). Denote by $L^{p}_{\mathcal{F}_0}([-\tau,0];\mathbb{R}^n)$ the family of all bounded, $\mathcal{F}_0$-measurable, $C([-\tau,0];\mathbb{R}^n)$-valued random variables. For $p>0$ and $t\ge 0$, denote by $L^{p}_{\mathcal{F}_t}([-\tau,0];\mathbb{R}^n)$ the family of all $\mathcal{F}_t$-measurable, $C([-\tau,0];\mathbb{R}^n)$-valued random variables $\phi=\{\phi(\theta):-\tau\le\theta\le 0\}$ such that $\sup_{-\tau\le\theta\le 0}E|\phi(\theta)|^{p}<\infty$, where $E\{\cdot\}$ stands for the mathematical expectation operator with respect to the given probability measure P. In symmetric block matrices, we use an asterisk "$*$" to represent a term that is induced by symmetry, and $\operatorname{diag}\{\cdots\}$ stands for a block-diagonal matrix. Sometimes, the arguments of a function will be omitted in the analysis when no confusion can arise.