Consider the following neural network:

\dot{x}(t)=Ax(t)+W(t)\theta (x(t))+V(t)\theta (x(t-\tau ))+Gd(t),

(1)

where x(t)={[{x}_{1}(t)\cdots {x}_{n}(t)]}^{T}\in {R}^{n} is the state vector, d(t)={[{d}_{1}(t)\cdots {d}_{k}(t)]}^{T}\in {R}^{k} is the disturbance vector, \tau >0 is the time-delay, A\in {R}^{n\times n} is the self-feedback matrix, W(t)\in {R}^{n\times p}, V(t)\in {R}^{n\times p} are the weight matrices, \theta (x)={[{\theta}_{1}(x),\dots ,{\theta}_{p}(x)]}^{T}:{R}^{n}\to {R}^{p} is the nonlinear vector field, and G\in {R}^{n\times k} is a known constant matrix. The element functions {\theta}_{i}(x) (i=1,\dots ,p) are usually selected as sigmoid functions.
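As a minimal numerical sketch (not part of the paper's method), the delayed network (1) can be integrated with forward Euler and a delay buffer; here θ is taken elementwise as tanh, so p = n, and the constant initial condition x(t) = x0 for t ≤ 0 is an assumption for illustration:

```python
import numpy as np

def simulate(A, W, V, G, d, tau, x0, T=5.0, dt=1e-3):
    """Forward-Euler simulation of (1):
    x'(t) = A x(t) + W theta(x(t)) + V theta(x(t - tau)) + G d(t),
    with theta = tanh applied elementwise and x(t) = x0 for t <= 0."""
    steps = int(T / dt)
    delay = int(tau / dt)
    # history buffer: rows are states at t = 0, dt, 2*dt, ...
    xs = np.tile(np.asarray(x0, dtype=float), (steps + 1, 1))
    for i in range(steps):
        xk = xs[i]
        xd = xs[i - delay] if i >= delay else xs[0]  # delayed state
        dx = A @ xk + W @ np.tanh(xk) + V @ np.tanh(xd) + G @ d(i * dt)
        xs[i + 1] = xk + dt * dx
    return xs

# with A Hurwitz, W = V = 0 and d = 0, the state decays to the origin
xs = simulate(A=-np.eye(2), W=np.zeros((2, 2)), V=np.zeros((2, 2)),
              G=np.zeros((2, 1)), d=lambda t: np.zeros(1),
              tau=0.5, x0=[1.0, -1.0])
```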

In this paper, given a prescribed level of disturbance attenuation \gamma >0, we find a new peak-to-peak exponential direct learning law (P2PEDLL) such that the neural network (1) with d(t)=0 is exponentially stable and

\underset{t\ge 0}{sup}\{exp(\kappa t){x}^{T}(t)x(t)\}<{\gamma}^{2}\underset{t\ge 0}{sup}\{exp(\kappa t){d}^{T}(t)d(t)\}

(2)

under zero-initial conditions for all nonzero d(t)\in {L}_{\mathrm{\infty}}[0,\mathrm{\infty}), where *κ* is a positive constant.

The new P2PEDLL is given in the following theorem.

**Theorem 1** *Let* *κ* *be a given positive constant*. *For a given level* \gamma >0, *assume that there exist matrices* P={P}^{T}>0, *Y* *and positive scalars* *λ*, *μ* *such that*

\left[\begin{array}{cc}{(PA+Y)}^{T}+PA+Y+(\kappa +\lambda )P& PG\\ {G}^{T}P& -\mu I\end{array}\right]<0,

(3)

\left[\begin{array}{ccc}\lambda P& 0& I\\ 0& (\gamma -\mu )I& 0\\ I& 0& \gamma I\end{array}\right]>0.

(4)

*If the weight matrices* W(t) *and* V(t) *are updated as*

W(t)=\{\begin{array}{cc}\frac{{P}^{-1}Yx(t)}{{\parallel \theta (x(t))\parallel}^{2}+{\parallel \theta (x(t-\tau ))\parallel}^{2}}{\theta}^{T}(x(t)),\hfill & {\parallel \theta (x(t))\parallel}^{2}+{\parallel \theta (x(t-\tau ))\parallel}^{2}\ne 0,\hfill \\ 0,\hfill & {\parallel \theta (x(t))\parallel}^{2}+{\parallel \theta (x(t-\tau ))\parallel}^{2}=0,\hfill \end{array}

(5)

V(t)=\{\begin{array}{cc}\frac{{P}^{-1}Yx(t)}{{\parallel \theta (x(t))\parallel}^{2}+{\parallel \theta (x(t-\tau ))\parallel}^{2}}{\theta}^{T}(x(t-\tau )),\hfill & {\parallel \theta (x(t))\parallel}^{2}+{\parallel \theta (x(t-\tau ))\parallel}^{2}\ne 0,\hfill \\ 0,\hfill & {\parallel \theta (x(t))\parallel}^{2}+{\parallel \theta (x(t-\tau ))\parallel}^{2}=0,\hfill \end{array}

(6)

*then the neural network* (1) *is exponentially stable with a guaranteed exponential peak*-*to*-*peak norm bound* *γ*.

*Proof* The neural network (1) can be represented by

\dot{x}(t)=Ax(t)+\left[\begin{array}{cc}W(t)& V(t)\end{array}\right]\left[\begin{array}{c}\theta (x(t))\\ \theta (x(t-\tau ))\end{array}\right]+Gd(t).

(7)

Let

Kx(t)=\left[\begin{array}{cc}W(t)& V(t)\end{array}\right]\left[\begin{array}{c}\theta (x(t))\\ \theta (x(t-\tau ))\end{array}\right],

(8)

where K\in {R}^{n\times n} is the gain matrix of the P2PEDLL. Then we obtain

\dot{x}(t)=(A+K)x(t)+Gd(t).

(9)

One possible choice of the weights [W(t)\phantom{\rule{0.25em}{0ex}}V(t)] fulfilling (8) (except possibly on a lower-dimensional subspace) is given by

\left[\begin{array}{cc}W(t)& V(t)\end{array}\right]=Kx(t){\left[\begin{array}{c}\theta (x(t))\\ \theta (x(t-\tau ))\end{array}\right]}^{+},

(10)

where {[\cdot ]}^{+} stands for the pseudoinverse matrix in the Moore-Penrose sense [20, 21, 29]. This learning law is just an algebraic relation depending on x(t), which can be evaluated directly. Taking into account that [20, 21, 29]

{x}^{+}=\{\begin{array}{cc}\frac{{x}^{T}}{{\parallel x\parallel}^{2}},\hfill & x\ne 0,\hfill \\ 0,\hfill & x=0,\hfill \end{array}

(11)

the direct learning law (10) can be rewritten as

W(t)=\{\begin{array}{cc}\frac{Kx(t)}{{\parallel \theta (x(t))\parallel}^{2}+{\parallel \theta (x(t-\tau ))\parallel}^{2}}{\theta}^{T}(x(t)),\hfill & {\parallel \theta (x(t))\parallel}^{2}+{\parallel \theta (x(t-\tau ))\parallel}^{2}\ne 0,\hfill \\ 0,\hfill & {\parallel \theta (x(t))\parallel}^{2}+{\parallel \theta (x(t-\tau ))\parallel}^{2}=0,\hfill \end{array}

(12)

V(t)=\{\begin{array}{cc}\frac{Kx(t)}{{\parallel \theta (x(t))\parallel}^{2}+{\parallel \theta (x(t-\tau ))\parallel}^{2}}{\theta}^{T}(x(t-\tau )),\hfill & {\parallel \theta (x(t))\parallel}^{2}+{\parallel \theta (x(t-\tau ))\parallel}^{2}\ne 0,\hfill \\ 0,\hfill & {\parallel \theta (x(t))\parallel}^{2}+{\parallel \theta (x(t-\tau ))\parallel}^{2}=0.\hfill \end{array}

(13)
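To see that (12)-(13) indeed realize (8), one can check the identity numerically; the gain K, the activation θ, and the dimensions below are hypothetical values chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 3, 4
K = rng.standard_normal((n, n))          # arbitrary gain matrix
x = rng.standard_normal(n)               # current state x(t)
xd = rng.standard_normal(n)              # delayed state x(t - tau)

def theta(z):
    # hypothetical sigmoid-type map R^3 -> R^4 for illustration
    return np.tanh(np.concatenate([z, z[:1]]))

th, thd = theta(x), theta(xd)
denom = th @ th + thd @ thd              # ||theta(x)||^2 + ||theta(x_d)||^2
W = np.outer(K @ x, th) / denom          # learning law (12)
V = np.outer(K @ x, thd) / denom         # learning law (13)

# identity (8): W theta(x) + V theta(x_d) reproduces K x
assert np.allclose(W @ th + V @ thd, K @ x)
```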

Consider the following Lyapunov function: L(t)=exp(\kappa t){x}^{T}(t)Px(t). The time derivative of L(t) along the trajectory of (9) is

\begin{array}{rl}\dot{L}(t)=& exp(\kappa t)\dot{x}{(t)}^{T}Px(t)+exp(\kappa t){x}^{T}(t)P\dot{x}(t)+\kappa exp(\kappa t){x}^{T}(t)Px(t)\\ =& exp(\kappa t){x}^{T}(t)[{(A+K)}^{T}P+P(A+K)+\kappa P]x(t)+exp(\kappa t)x{(t)}^{T}PGd(t)\\ +exp(\kappa t){d}^{T}(t){G}^{T}Px(t)\\ =& exp(\kappa t)\{{\eta}^{T}(t)\mathrm{\Omega}\eta (t)-\lambda {x}^{T}(t)Px(t)+\mu {d}^{T}(t)d(t)\},\end{array}

(14)

where

\eta (t)={\left[\begin{array}{cc}{x}^{T}(t)& {d}^{T}(t)\end{array}\right]}^{T},

(15)

\mathrm{\Omega}=\left[\begin{array}{cc}{(A+K)}^{T}P+P(A+K)+(\kappa +\lambda )P& PG\\ {G}^{T}P& -\mu I\end{array}\right].

(16)

If \mathrm{\Omega}<0, then

\dot{L}(t)<-\lambda exp(\kappa t){x}^{T}(t)Px(t)+\mu exp(\kappa t){d}^{T}(t)d(t)

(17)

=-\lambda L(t)+\mu exp(\kappa t){d}^{T}(t)d(t).

(18)

Thus, \dot{L}(t)<0 holds whenever L(t)\ge \frac{\mu}{\lambda}exp(\kappa t){d}^{T}(t)d(t). Since L(0)=0 under the zero-initial condition, L(t) can never exceed this value, that is,

exp(\kappa t){x}^{T}(t)Px(t)=L(t)<\frac{\mu}{\lambda}exp(\kappa t){d}^{T}(t)d(t)

(19)

for t\ge 0. From (19), we have

\begin{array}{r}\frac{1}{\gamma}exp(\kappa t){x}^{T}(t)x(t)-\gamma exp(\kappa t){d}^{T}(t)d(t)\\ \phantom{\rule{1em}{0ex}}=\frac{1}{\gamma}exp(\kappa t){x}^{T}(t)x(t)-(\gamma -\mu )exp(\kappa t){d}^{T}(t)d(t)-\mu exp(\kappa t){d}^{T}(t)d(t)\\ \phantom{\rule{1em}{0ex}}<\frac{1}{\gamma}exp(\kappa t){x}^{T}(t)x(t)-(\gamma -\mu )exp(\kappa t){d}^{T}(t)d(t)-\lambda exp(\kappa t){x}^{T}(t)Px(t).\end{array}

(20)

By the Schur complement, the matrix inequality (4) gives [25, 26]

\frac{1}{\gamma}\left[\begin{array}{c}I\\ 0\end{array}\right]\left[\begin{array}{cc}I& 0\end{array}\right]<\left[\begin{array}{cc}\lambda P& 0\\ 0& (\gamma -\mu )I\end{array}\right].

(21)
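Inequality (21) is the Schur complement of (4) with respect to the γI block. A small numerical check, where λ, γ, μ, and P are assumed values chosen to satisfy (4):

```python
import numpy as np

# assumed values satisfying (4), for illustration only
lam, gamma, mu = 1.0, 2.0, 0.5
n, k = 2, 1
P = np.eye(n)

# the block matrix in inequality (4)
M4 = np.block([
    [lam * P,          np.zeros((n, k)),         np.eye(n)],
    [np.zeros((k, n)), (gamma - mu) * np.eye(k), np.zeros((k, n))],
    [np.eye(n),        np.zeros((n, k)),         gamma * np.eye(n)],
])
assert np.linalg.eigvalsh(M4).min() > 0      # (4) holds

# Schur complement w.r.t. the gamma*I block yields inequality (21)
lhs = np.block([[np.eye(n) / gamma, np.zeros((n, k))],
                [np.zeros((k, n)),  np.zeros((k, k))]])
rhs = np.block([[lam * P,           np.zeros((n, k))],
                [np.zeros((k, n)), (gamma - mu) * np.eye(k)]])
assert np.linalg.eigvalsh(rhs - lhs).min() > 0   # (21) holds
```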

If we pre- and post-multiply (21) by exp(\kappa t/2)[{x}^{T}(t)\phantom{\rule{0.25em}{0ex}}{d}^{T}(t)] and exp(\kappa t/2){[{x}^{T}(t)\phantom{\rule{0.25em}{0ex}}{d}^{T}(t)]}^{T}, respectively, we have

\frac{1}{\gamma}exp(\kappa t){x}^{T}(t)x(t)-(\gamma -\mu )exp(\kappa t){d}^{T}(t)d(t)-\lambda exp(\kappa t){x}^{T}(t)Px(t)<0,

(22)

which ensures

\frac{1}{\gamma}exp(\kappa t){x}^{T}(t)x(t)-\gamma exp(\kappa t){d}^{T}(t)d(t)<0

(23)

from (20). Thus, we have

exp(\kappa t){x}^{T}(t)x(t)<{\gamma}^{2}exp(\kappa t){d}^{T}(t)d(t).

(24)

Taking the supremum over t\ge 0 leads to (2). When d(t)=0, we have

\dot{L}(t)<-\lambda L(t)<0

(25)

from (18). Thus, L(t)<L(0)={x}^{T}(0)Px(0) for any t\ge 0. We also have

L(t)\ge {\lambda}_{min}(P)exp(\kappa t){\parallel x(t)\parallel}^{2},

(26)

where {\lambda}_{min}(P) is the minimum eigenvalue of the matrix *P*. It follows from (26) that

\parallel x(t)\parallel <\sqrt{\frac{{x}^{T}(0)Px(0)}{{\lambda}_{min}(P)exp(\kappa t)}}=\sqrt{\frac{{x}^{T}(0)Px(0)}{{\lambda}_{min}(P)}}exp(-\frac{\kappa}{2}t).

(27)

Thus, the exponential stability of the neural network (1) is guaranteed.

With the change of variable Y=PK, the condition \mathrm{\Omega}<0 is equivalent to the matrix inequality (3). The gain matrix of the P2PEDLL is then given by K={P}^{-1}Y, and the direct learning laws (12)-(13) become (5)-(6). This completes the proof. □
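The decay bound (27) can be sanity-checked numerically on the closed loop (9) with d(t)=0; the matrix A+K, the matrix P, and the constant κ below are assumed values chosen so that \mathrm{\Omega}<0 holds:

```python
import numpy as np

# assumed closed-loop data for illustration only
Ac = np.array([[-2.0, 1.0], [0.0, -2.0]])   # A + K, Hurwitz
P = np.eye(2)
kappa = 0.1
x0 = np.array([1.0, -1.0])

# bound (27): ||x(t)|| < sqrt(x0' P x0 / lambda_min(P)) * exp(-kappa*t/2)
bound0 = np.sqrt(x0 @ P @ x0 / np.linalg.eigvalsh(P).min())

dt, T = 1e-3, 5.0
x = x0.copy()
for i in range(int(T / dt)):
    x = x + dt * (Ac @ x)                   # Euler step of x' = (A+K) x
    t = (i + 1) * dt
    assert np.linalg.norm(x) < bound0 * np.exp(-kappa * t / 2)
```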

**Remark 1** Conditions (3) and (4) are bilinear matrix inequalities (BMIs) because of the product *λP*, so their feasibility problem is difficult to solve directly. Since a BMI problem is not convex, guaranteed convergence to its global optimum requires a global optimization method such as branch and bound [30]. However, for a fixed positive scalar *λ*, (3) and (4) become linear matrix inequalities (LMIs) in the remaining variables, and the resulting LMI problem can be solved efficiently by recently developed convex optimization algorithms [27]. In this paper, we used the MATLAB LMI Control Toolbox [28] to solve the LMI problem.
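Once a solver returns candidate values P, Y, μ for a fixed λ, feasibility of (3) and (4) can be verified by eigenvalue tests. The sketch below uses a hand-picked candidate for a small illustrative system; all numeric values are assumptions, not taken from the paper:

```python
import numpy as np

# hand-picked candidate solution for an illustrative system
A = np.array([[-2.0, 0.0], [0.0, -2.0]])
G = np.array([[1.0], [0.5]])
kappa, gamma, lam, mu = 0.1, 2.0, 1.0, 0.5   # lambda fixed, per Remark 1
P = np.eye(2)
Y = -0.5 * np.eye(2)
n, k = A.shape[0], G.shape[1]

# the block matrix in inequality (3)
M3 = np.block([
    [(P @ A + Y).T + P @ A + Y + (kappa + lam) * P, P @ G],
    [G.T @ P,                                       -mu * np.eye(k)],
])
# the block matrix in inequality (4)
M4 = np.block([
    [lam * P,          np.zeros((n, k)),         np.eye(n)],
    [np.zeros((k, n)), (gamma - mu) * np.eye(k), np.zeros((k, n))],
    [np.eye(n),        np.zeros((n, k)),         gamma * np.eye(n)],
])
assert np.linalg.eigvalsh(M3).max() < 0   # (3): negative definite
assert np.linalg.eigvalsh(M4).min() > 0   # (4): positive definite

K = np.linalg.solve(P, Y)                 # P2PEDLL gain K = P^{-1} Y
```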