Consider the following neural network:
ẋ(t) = Ax(t) + W1φ(x(t)) + W2φ(x(t − τ)) + Bw(t) (1)
where x(t) is the state vector, w(t) is the disturbance vector, τ is the time-delay, A is the self-feedback matrix, W1, W2 are the weight matrices, φ(·) is the nonlinear vector field, and B is a known constant matrix. The element functions φi(·) (i = 1, …, n) are usually selected as sigmoid functions.
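For readers who want to experiment with model (1), the following is a minimal simulation sketch; the forward-Euler scheme, the tanh element functions, and the constant initial history are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def simulate(A, W1, W2, B, tau, w, x0, dt=1e-3, T=5.0):
    """Forward-Euler simulation of the delayed network (1), read as
    dx/dt = A x(t) + W1 phi(x(t)) + W2 phi(x(t - tau)) + B w(t),
    with phi = tanh as a typical sigmoid choice."""
    steps = int(T / dt)
    delay = int(tau / dt)
    x = np.zeros((steps + 1, x0.size))
    x[0] = x0
    for k in range(steps):
        # delayed state; the initial history is held constant at x0
        xd = x[k - delay] if k >= delay else x0
        dx = A @ x[k] + W1 @ np.tanh(x[k]) + W2 @ np.tanh(xd) + B @ w(k * dt)
        x[k + 1] = x[k] + dt * dx
    return x
```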
In this paper, given a prescribed level of disturbance attenuation γ > 0, we find a new peak-to-peak exponential direct learning law (P2PEDLL) such that the neural network (1) with w(t) = 0 is exponentially stable and
(2)
under zero-initial conditions for all nonzero w(t), where κ is a positive constant.
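The exact exponentially weighted form of (2) is not reproduced above, but its unweighted core is a peak-to-peak (induced L∞) bound: the peak of the state is bounded by γ times the peak of the disturbance. A minimal empirical check under that reading:

```python
import numpy as np

def empirical_peak_to_peak(x, w):
    """Ratio of the peak Euclidean norm of a state trajectory to the peak
    Euclidean norm of a (nonzero) disturbance; under condition (2), read
    as an unweighted peak-to-peak bound, this ratio should not exceed gamma."""
    peak_x = np.max(np.linalg.norm(x, axis=1))  # sup_t ||x(t)||
    peak_w = np.max(np.linalg.norm(w, axis=1))  # sup_t ||w(t)||
    return peak_x / peak_w
```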
The new P2PEDLL is given in the following theorem.
Theorem 1 Let κ be a given positive constant. For a given level γ > 0, assume that there exist matrices P > 0, Y and positive scalars λ, μ such that
(3)
(4)
If the weight matrices W1 and W2 are updated as
(5)
(6)
then the neural network (1) is exponentially stable with a guaranteed exponential peak-to-peak norm bound γ.
Proof The neural network (1) can be represented by
(7)
Let
(8)
where K is the gain matrix of the P2PEDLL. Then we obtain
(9)
One possible weight selection fulfilling (8) (except possibly on a subspace of smaller dimension) is given by
(10)
where (·)+ stands for the pseudoinverse matrix in the Moore-Penrose sense [20, 21, 29]. This learning law is just an algebraic relation depending on the state x(t), which can be evaluated directly. Taking into account that [20, 21, 29]
(11)
the direct learning law (10) can be rewritten as
(12)
(13)
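The expressions in (10)-(13) are not reproduced above, but their computational pattern is clear from the text: a purely algebraic update built from a Moore-Penrose pseudoinverse, where (11) is presumably the vector identity v+ = v^T/(v^T v). A sketch under a hypothetical gain relation W φ(x(t)) = Kx(t):

```python
import numpy as np

def pinv_vector(v):
    """Moore-Penrose pseudoinverse of a nonzero column vector v:
    v^+ = v^T / (v^T v), which coincides with np.linalg.pinv(v)."""
    return v.T / (v.T @ v)

def direct_weight_update(K, x, phi_x):
    """Hypothetical direct (algebraic) weight update: choose W so that
    W @ phi_x = K @ x, via W = (K @ x) @ phi_x^+.  This mirrors the
    pseudoinverse structure of (10) but not its exact expression."""
    return (K @ x) @ pinv_vector(phi_x)
```

With this choice, W φ(x(t)) reproduces Kx(t) exactly whenever φ(x(t)) ≠ 0, which is what makes the law a direct, training-free algebraic relation.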
Consider the following Lyapunov function: V(t) = x^T(t)Px(t). The time derivative of V(t) along the trajectory of (9) is
(14)
where
(15)
(16)
If the matrix in (15)-(16) is negative definite, then
(17)
(18)
Thus, V̇(t) ≤ 0 holds whenever λV(t) ≥ μw^T(t)w(t). Since V(0) = 0 under the zero-initial condition, this shows that V(t) cannot exceed the value
(19)
for t ≥ 0. From (19), we have
(20)
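For completeness, here is the standard comparison argument behind (17)-(20), assuming (18) has the usual form V̇(t) ≤ −λV(t) + μw^T(t)w(t); the exponential weighting by κ used in the paper may modify the constants.

```latex
% Standard peak bound from \dot{V}(t) \le -\lambda V(t) + \mu w^{T}(t)w(t):
% whenever \lambda V(t) \ge \mu \sup_{s} w^{T}(s)w(s), we get \dot{V}(t) \le 0,
% so with V(0) = 0 the corresponding sublevel set is invariant and
V(t) = x^{T}(t)Px(t) \;\le\; \frac{\mu}{\lambda}\,
\sup_{0 \le s \le t} w^{T}(s)w(s), \qquad t \ge 0.
```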
The matrix inequality (4) gives [25, 26]
(21)
If we pre- and post-multiply (21) by [x^T(t) w^T(t)] and [x^T(t) w^T(t)]^T, respectively, we have
(22)
which ensures
(23)
from (20). Thus, we have
(24)
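In the classical version of this step [25, 26], assuming (4) has the standard peak-to-peak structure with the state itself as the performance output, the chain (21)-(24) reads:

```latex
% Pre/post-multiplying (21) by [x^{T}(t)\; w^{T}(t)] and its transpose:
x^{T}(t)x(t)
  \;\le\; \gamma\bigl(\lambda\, x^{T}(t)Px(t) + (\gamma-\mu)\, w^{T}(t)w(t)\bigr)
  \;\le\; \gamma\Bigl(\lambda\cdot\tfrac{\mu}{\lambda} + (\gamma-\mu)\Bigr)
          \sup_{s} w^{T}(s)w(s)
  \;=\; \gamma^{2}\sup_{s} w^{T}(s)w(s).
```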
Taking the supremum over t ≥ 0 leads to (2). When w(t) = 0, we have
(25)
from (18). Thus, it implies that V(t) ≤ e^{−λt}V(0) for any t ≥ 0. We also have
(26)
where λ_min(P) is the minimum eigenvalue of the matrix P. It follows from (26) that
(27)
Thus, the exponential stability of the neural network (1) is guaranteed.
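Again assuming the unweighted form V̇(t) ≤ −λV(t) when w(t) = 0, the chain (25)-(27) is the standard one:

```latex
\dot{V}(t) \le -\lambda V(t)
\;\Longrightarrow\;
V(t) \le e^{-\lambda t} V(0),
\qquad
\lambda_{\min}(P)\,\|x(t)\|^{2} \le x^{T}(t)Px(t) = V(t)
\;\Longrightarrow\;
\|x(t)\| \le \sqrt{\tfrac{V(0)}{\lambda_{\min}(P)}}\, e^{-\lambda t/2}.
```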
Introducing the change of variable Y = PK, the above matrix inequality is equivalently changed into the matrix inequality (3). Then the gain matrix of the P2PEDLL is given by K = P^{−1}Y. The direct learning laws (12)-(13) are changed into (5)-(6). This completes the proof. □
Remark 1 Conditions (3) and (4) are bilinear matrix inequalities (BMIs), so solving their feasibility problem directly is difficult. Because a BMI problem is not convex, a global optimization method, such as branch and bound, is required to guarantee convergence to the global optimum [30]. However, the BMI problem can be reduced to an LMI problem by fixing one variable: for a fixed positive scalar λ, (3) and (4) become linear matrix inequalities (LMIs), which can be solved efficiently by recently developed convex optimization algorithms [27]. In this paper, we utilized the MATLAB LMI Control Toolbox [28] to solve the LMI problem.
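As an illustration of the fixed-λ strategy in Remark 1, the sketch below sweeps λ over a grid and checks LMI feasibility with CVXPY instead of the MATLAB LMI Control Toolbox. Since (3)-(4) are not reproduced above, the classical peak-to-peak LMIs for a linear system ẋ = Ax + Bw, z = Cx (in the style of Abedor, Nagpal, and Poolla [25]) stand in for them; this is a labeled proxy, not the paper's conditions.

```python
import numpy as np
import cvxpy as cp

def peak_to_peak_feasible(A, B, C, gamma, lam, eps=1e-6):
    """For a fixed positive scalar lam, check feasibility of the classical
    peak-to-peak LMIs (a stand-in for (3)-(4)): find P > 0 and mu > 0 with
      [A'P + PA + lam*P, PB; B'P, -mu*I] <= 0  and
      [lam*P, 0, C'; 0, (gamma - mu)*I, 0; C, 0, gamma*I] >= 0."""
    n, m, p = A.shape[0], B.shape[1], C.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    mu = cp.Variable(nonneg=True)
    lmi1 = cp.bmat([[A.T @ P + P @ A + lam * P, P @ B],
                    [B.T @ P, -mu * np.eye(m)]])
    lmi2 = cp.bmat([[lam * P, np.zeros((n, m)), C.T],
                    [np.zeros((m, n)), (gamma - mu) * np.eye(m), np.zeros((m, p))],
                    [C, np.zeros((p, m)), gamma * np.eye(p)]])
    constraints = [P >> eps * np.eye(n),
                   lmi1 << -eps * np.eye(n + m),
                   lmi2 >> 0]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve(solver=cp.SCS)
    return prob.status in ("optimal", "optimal_inaccurate")

# Line search over the bilinear scalar lambda (the BMI becomes an LMI
# once lambda is fixed, as noted in Remark 1).
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
for lam in np.logspace(-2, 1, 10):
    if peak_to_peak_feasible(A, B, C, gamma=5.0, lam=lam):
        print(f"feasible for lambda = {lam:.3f}")
        break
```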