Inequalities and pth moment exponential stability of impulsive delayed Hopfield neural networks

Abstract

In this paper, the pth moment exponential stability for a class of impulsive delayed Hopfield neural networks is investigated. Some concise algebraic criteria are provided by a new method concerned with impulsive integral inequalities. Our discussion neither requires a complicated Lyapunov function nor the differentiability of the delay function. In addition, we also summarize a new result on the exponential stability of a class of impulsive integral inequalities. Finally, one example is given to illustrate the effectiveness of the obtained results.

Introduction

In the past few years, the artificial neural networks introduced by Hopfield [1, 2] have become a significant research topic due to their wide applications in various areas such as signal and image processing, associative memory, combinatorial optimization, pattern classification, etc. [3–5]. All the applications of Hopfield neural networks (HNNs) depend on qualitative behavior such as stability, existence and uniqueness, convergence, oscillation, and so on [6–10]. Particularly, the stability property is a major concern in the design and applications of neural networks. Therefore, many researchers have been paying much attention to the stability study of HNNs.

In addition, since time delays are frequently encountered owing to the finite switching speed of neurons and amplifiers in the implementation of neural networks, it is meaningful to discuss the effect of time delays on the stability of HNNs. Consequently, researchers put forward the model of delayed Hopfield neural networks (DHNNs) and made great efforts in the stability research (see e.g. [11, 12]).

Furthermore, it is worth noting that impulsive effects are also a common phenomenon in many engineering systems, that is, instantaneous jumps or resets of system states in the automobile industry, network control, video coding, etc. Hence, the model of impulsive delayed Hopfield neural networks (IDHNNs) is more representative, and it is necessary to probe the stability of IDHNNs both theoretically and practically. So far, there have been a number of research achievements (see e.g. [13–17]).

Among the existing stability results of impulsive delayed systems, one powerful technique is the Lyapunov method (see e.g. [18–24]). Wei et al. [18] studied the global exponential stability in the mean-square sense of a class of stochastic impulsive reaction-diffusion systems with S-type distributed delays based on a Lyapunov–Krasovskii functional and an impulsive inequality. Ren et al. [19] considered the mean-square exponential input-to-state stability for a class of delayed stochastic neural networks with impulsive effects driven by G-Brownian motion by constructing an appropriate G-Lyapunov–Krasovskii functional, mathematical induction, and some inequality techniques.

It should be pointed out that the key to the Lyapunov method is to construct a suitable Lyapunov function or functional. However, finding a suitable Lyapunov function or functional often involves some mathematical difficulties.

On the other hand, an alternative technique for stability analysis of impulsive delayed systems has been developed based on the fixed point theorem (see e.g. [25–29]). Zhang et al. [25] studied the application of the fixed point theory to the stability analysis of a class of impulsive delayed neural networks. By employing the contraction mapping principle, some novel and concise sufficient conditions have been presented to ensure the existence and uniqueness of solution and the global exponential stability of the considered system.

However, the fixed point method has its own disadvantage: it may force one to apply Hölder's inequality at an inappropriate time.

Motivated by the above discussion, we attempt to study the stability of IDHNNs by a new method different from the Lyapunov method and the fixed point method. As is well known, many works have focused on the mean-square stability of complex dynamical systems. However, mean-square stability is actually a special case of pth moment stability obtained by choosing \(p=2\), so the study of pth moment stability is more representative. In this paper, we investigate the pth moment exponential stability of IDHNNs with the help of impulsive integral inequalities. Compared with the Lyapunov method and the fixed point theory, our method has two advantages. One is that it requires neither a Lyapunov function nor the differentiability of the delay function. The other is that there is no need to seek an appropriate time to apply Hölder's inequality. Furthermore, a new criterion for the exponential stability of impulsive integral inequalities is provided based on our discussion.

The contents of this paper are organized as follows. In Sect. 2, some notations, the model description, and a useful lemma are introduced. In Sect. 3, we consider the pth moment exponential stability of IDHNNs and obtain some new sufficient conditions. Inspired by Sect. 3, we discuss the exponential stability of a class of impulsive integral inequalities in Sect. 4 and give an algebraic criterion. In Sect. 5, one example is given to illustrate the effectiveness of our results.

Preliminaries

Notations: Let \(\mathrm{R}^{n}\) denote the n-dimensional Euclidean space. \(\vert \cdot \vert \) represents the Euclidean norm for vectors or the absolute value for real numbers. \({\mathcal{N}} \stackrel{\Delta }{=} \{ 1,2, \ldots,n \} \). \(\mathrm{R}_{ +} = [ 0,\infty )\). \(C [ X,Y ]\) stands for the space of continuous mappings from the topological space X to the topological space Y. For some \(\tau > 0\), let \(C [ [ - \tau,0 ],\mathrm{R} ]\) be the family of all continuous real-valued functions ϕ defined on \([ - \tau,0 ]\) equipped with the norm \(\Vert \phi \Vert = \sup_{s \in [ - \tau,0 ]} \vert \phi ( s ) \vert \).

Consider a class of impulsive delayed Hopfield neural network described by

$$\begin{aligned} &\frac{{d}x_{i} ( t )}{{d}t} = - a_{i}x_{i} ( t ) + \sum _{j = 1}^{n} b_{ij}f_{j} \bigl( x_{j} ( t ) \bigr) + \sum_{j = 1}^{n} c_{ij}g_{j} \bigl( x_{j} \bigl( t - \tau _{j} ( t ) \bigr) \bigr),\quad t \ge 0, t \ne t_{k}, \\ &\Delta x_{i} ( t_{k} ) = x_{i} ( t_{k} + 0 ) - x_{i} ( t_{k} ) = I_{ik} \bigl( x_{i} ( t_{k} ) \bigr),\quad k = 1,2, \ldots, \\ &x_{i} ( s ) = \varphi _{i} ( s ),\quad - \tau \le s \le 0, i \in {\mathcal{N}}, \end{aligned}$$
(2.1)

where \(i \in {\mathcal{N}}\) and n is the number of neurons in the neural network. \(x_{i} ( t )\) stands for the state of the ith neuron at time t. \(f_{j} ( \bullet ), g_{j} ( \bullet ) \in C [ \mathrm{R},\mathrm{R} ]\), \(f_{j} ( x_{j} ( t ) )\) is the activation function of the jth neuron at time t and \(g_{j} ( x_{j} ( t - \tau _{j} ( t ) ) )\) is the activation function of the jth neuron at time \(t - \tau _{j} ( t )\), where \(\tau _{j} ( t ) \in C [ \mathrm{R}^{ +},\mathrm{R}^{ +} ]\) denotes the transmission delay along the axon of the jth neuron and satisfies \(0 \le \tau _{j} ( t ) \le \tau _{j}\) (\(\tau _{j}\) is a constant). The constant \(a_{i} > 0\) stands for the rate with which the ith neuron will reset its potential to the resting state when disconnected from the network and external inputs. The constant \(b_{ij}\) represents the connection weight of the jth neuron on the ith neuron at time t. The constant \(c_{ij}\) denotes the connection strength of the jth neuron on the ith neuron at time \(t - \tau _{j} ( t )\). The fixed impulsive moments \(t_{k}\) (\(k = 1,2, \ldots \)) satisfy \(0 = t_{0} < t_{1} < t_{2} < \cdots \) and \(\lim_{k \to \infty } t_{k} = \infty \). \(x_{i} ( t_{k} + 0 )\) and \(x_{i} ( t_{k} - 0 )\) stand for the right-hand and left-hand limit of \(x_{i} ( t )\) at time \(t_{k}\), respectively. \(I_{ik} ( x_{i} ( t_{k} ) )\) shows the abrupt change of \(x_{i} ( t )\) at the impulsive moment \(t_{k}\) and \(I_{ik} ( \bullet ) \in C [ \mathrm{R},\mathrm{R} ]\). \(\varphi _{i} ( s ) \in C [ [ - \tau,0 ],\mathrm{R} ]\) and \(\tau = \max_{j \in {\mathcal{N}}} \{ \tau _{j} \} \).

Denote by \(\mathbf{x} ( t; \varphi ) = ( x_{1} ( t; \varphi _{1} ), \ldots,x_{n} ( t; \varphi _{n} ) )^{\mathrm{T}} \in \mathrm{R}^{n}\) the solution of system (2.1), where \(\varphi ( s ) = ( \varphi _{1} ( s ), \ldots,\varphi _{n} ( s ) )^{\mathrm{T}} \in \mathrm{R}^{n}\). As a function of the time variable t, the solution \(\mathbf{x} ( t; \varphi )\) of system (2.1) is a piecewise continuous vector-valued function with discontinuities of the first kind at the points \(t_{k}\) (\(k = 1,2, \ldots \)), where it is left-continuous, i.e., the following relations are valid:

$$\begin{aligned} x_{i} ( t_{k} - 0 ) = x_{i} ( t_{k} ),\qquad x_{i} ( t_{k} + 0 ) = x_{i} ( t_{k} ) + I_{ik} \bigl( x_{i} ( t_{k} ) \bigr),\quad i \in {\mathcal{N}}, k = 1,2, \ldots. \end{aligned}$$

Throughout this paper, we always assume that \(f_{j} ( 0 ) = g_{j} ( 0 ) = I_{jk} ( 0 ) = 0\) for \(j \in {\mathcal{N}}\) and \(k = 1,2, \ldots \) . Then system (2.1) admits a trivial solution with initial value \(\varphi = 0\).
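To get a feel for the dynamics of system (2.1), the following forward-Euler sketch simulates a hypothetical two-neuron instance with tanh activations (which satisfy \(f_{j}(0)=g_{j}(0)=0\)), a constant delay, and contractive impulses \(I_{ik}(x)=-0.1x\) at the integer times \(t_{k}=k\). All parameter values are illustrative assumptions, not taken from this paper.

```python
import math

# Forward-Euler sketch of a hypothetical two-neuron instance of system (2.1).
# Parameter values below are assumptions chosen for illustration only.
def simulate(T=10.0, h=1e-3):
    a = [2.0, 2.5]                      # self-decay rates a_i > 0
    b = [[0.1, -0.2], [0.15, 0.1]]      # connection weights b_ij
    c = [[0.05, 0.1], [-0.1, 0.05]]     # delayed connection weights c_ij
    tau = 0.5                           # constant delay tau_j(t) = 0.5
    f = g = math.tanh                   # activation functions, f(0) = g(0) = 0
    d = int(tau / h)                    # delay expressed in time steps
    hist = [[0.5, -0.3]] * (d + 1)      # constant initial function phi on [-tau, 0]
    x = list(hist[-1])
    t = 0.0
    while t < T:
        xd = hist[0]                    # x(t - tau)
        x = [x[i] + h * (-a[i] * x[i]
                         + sum(b[i][j] * f(x[j]) for j in range(2))
                         + sum(c[i][j] * g(xd[j]) for j in range(2)))
             for i in range(2)]
        t += h
        # impulsive jump I_ik(x) = -0.1 * x at the moments t_k = 1, 2, ...
        if round(t) >= 1 and abs(t - round(t)) < h / 2:
            x = [0.9 * xi for xi in x]
        hist = hist[1:] + [list(x)]
    return x

print(simulate())
```

With these strongly damped parameters the state decays rapidly toward the trivial solution, which is what the stability criteria of Sect. 3 predict for small enough weights and impulses.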

Definition 2.1

The trivial solution of system (2.1) is said to be pth (\(p \ge 1\)) moment exponentially stable if there exists a pair of positive constants λ and C such that

$$\begin{aligned} \bigl\vert x_{i} ( t;\varphi ) \bigr\vert ^{p} \le C \max_{i \in {\mathcal{N}}} \bigl\{ \Vert \varphi _{i} \Vert ^{p} \bigr\} e^{ - \lambda t},\quad t \ge 0, \end{aligned}$$

holds for any \(\varphi _{i} ( s ) \in C [ [ - \tau,0 ],\mathrm{R} ]\) and \(i \in {\mathcal{N}}\).

Lemma 2.1

Suppose \(0 < \theta < 1\) and \(\lambda \theta ( t - s ) < 1 - \theta \). Then \(\int _{s}^{t} e^{\lambda x}\,dx > \theta ( t - s )e^{\lambda t}\) holds for \(t > s\) and \(\lambda > 0\).

Proof

Construct function \(F ( t ) = \int _{s}^{t} e^{\lambda x}\,dx - \theta ( t - s )e^{\lambda t}\). For fixed s, it is easy to find that \(F ( s ) = 0\) and

$$\begin{aligned} F' ( t ) = e^{\lambda t} - \theta e^{\lambda t} - \lambda \theta ( t - s )e^{\lambda t} = e^{\lambda t} \bigl[ 1 - \theta - \lambda \theta ( t - s ) \bigr] > 0. \end{aligned}$$

So, \(F ( t ) > F ( s ) = 0\) as \(t > s\), which means \(\int _{s}^{t} e^{\lambda x}\,dx > \theta ( t - s )e^{\lambda t} ( t > s )\). □
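Lemma 2.1 can also be spot-checked numerically, since the integral has the closed form \(( e^{\lambda t} - e^{\lambda s} )/\lambda \). The sketch below evaluates both sides for illustrative values of λ, θ, s, t satisfying the hypotheses; it is only a sanity check, not a substitute for the proof.

```python
import math

# Numerical spot-check of Lemma 2.1: if 0 < theta < 1, lam > 0, t > s and
# lam * theta * (t - s) < 1 - theta, then
#   integral_s^t e^(lam*x) dx  >  theta * (t - s) * e^(lam*t).
def lemma_holds(lam, theta, s, t):
    assert 0 < theta < 1 and lam > 0 and t > s
    assert lam * theta * (t - s) < 1 - theta   # hypothesis of the lemma
    integral = (math.exp(lam * t) - math.exp(lam * s)) / lam
    return integral > theta * (t - s) * math.exp(lam * t)

print(lemma_holds(1.0, 0.4, 0.0, 1.0))  # hypothesis: 0.4 < 0.6
print(lemma_holds(2.0, 0.2, 0.5, 1.0))  # hypothesis: 0.2 < 0.8
```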

pth moment exponential stability of IDHNNs

In this section, we develop a new method to discuss the pth moment exponential stability of system (2.1). Before proceeding, we introduce some hypotheses listed as follows:

(H1) There exist nonnegative constants \(\alpha _{j}\) such that, for any \(x_{j}^{ ( 1 )}, x_{j}^{ ( 2 )} \in \mathrm{R}\),

$$\begin{aligned} \bigl\vert f_{j} \bigl( x_{j}^{ ( 1 )} \bigr) - f_{j} \bigl( x_{j}^{ ( 2 )} \bigr) \bigr\vert \le \alpha _{j} \bigl\vert x_{j}^{ ( 1 )} - x_{j}^{ ( 2 )} \bigr\vert ,\quad j \in {\mathcal{N}}. \end{aligned}$$

(H2) There exist nonnegative constants \(\beta _{j}\) such that, for any \(x_{j}^{ ( 1 )}, x_{j}^{ ( 2 )} \in \mathrm{R}\),

$$\begin{aligned} \bigl\vert g_{j} \bigl( x_{j}^{ ( 1 )} \bigr) - g_{j} \bigl( x_{j}^{ ( 2 )} \bigr) \bigr\vert \le \beta _{j} \bigl\vert x_{j}^{ ( 1 )} - x_{j}^{ ( 2 )} \bigr\vert ,\quad j \in {\mathcal{N}}. \end{aligned}$$

(H3) There exist nonnegative constants \(P_{jk}\) such that, for any \(x_{j}^{ ( 1 )}, x_{j}^{ ( 2 )} \in \mathrm{R}\),

$$\begin{aligned} \bigl\vert I_{jk} \bigl( x_{j}^{ ( 1 )} \bigr) - I_{jk} \bigl( x_{j}^{ ( 2 )} \bigr) \bigr\vert \le P_{jk} \bigl\vert x_{j}^{ ( 1 )} - x_{j}^{ ( 2 )} \bigr\vert ,\quad j \in {\mathcal{N}}, k = 1,2, \ldots. \end{aligned}$$

Theorem 3.1

Suppose that

(i) there exist constants \(\mu > 0\) and \(\theta \in ( 0,1 )\) such that \(\inf_{k = 1,2, \ldots } \{ \theta ( t_{k} - t_{k - 1} ) \} \ge \mu \) and \(\max_{k = 1,2, \ldots } \{ t_{k} - t_{k - 1} \} < \frac{1 - \theta }{\theta a_{i}}\) for each \(i \in {\mathcal{N}}\),

(ii) there exist constants \(P_{i}\) such that \(P_{ik} \le P_{i} \mu \) for \(i \in {\mathcal{N}}\) and \(k = 1,2, \ldots \) ,

(iii)

$$\begin{aligned} 3^{p - 1} \Biggl\{ a_{i}^{1 - p} \Biggl( \sum_{j = 1}^{n} \vert b_{ij} \alpha _{j} \vert \Biggr)^{p} + a_{i}^{1 - p} \Biggl( \sum_{j = 1}^{n} \vert c_{ij}\beta _{j} \vert \Biggr)^{p} + a_{i}^{1 - p} P_{i}^{p} \Biggr\} < a_{i}. \end{aligned}$$

Then system (2.1) is globally exponentially stable in the pth (\(p \ge 1\)) moment.

Proof

Multiplying both sides of the first equation of system (2.1) by \(\mathrm{e}^{a_{i}t}\) and integrating from \(t_{k - 1} + \varepsilon \) (\(\varepsilon > 0\)) to \(t \in ( t_{k - 1},t_{k} )\) yields

$$\begin{aligned} x_{i} ( t )\mathrm{e}^{a_{i}t} ={}& x_{i} ( t_{k - 1} + \varepsilon )\mathrm{e}^{a_{i} ( t_{k - 1} + \varepsilon )} \\ &{}+ \int _{ t_{k - 1} + \varepsilon }^{ t} \mathrm{e}^{a_{i}s} \Biggl\{ \sum_{j = 1}^{n} b_{ij}f_{j} \bigl( x_{j} ( s ) \bigr) + \sum_{j = 1}^{n} c_{ij}g_{j} \bigl( x_{j} \bigl( s - \tau _{j} ( s ) \bigr) \bigr) \Biggr\} \,{d}s. \end{aligned}$$
(3.1)

Letting \(\varepsilon \to 0^{ +} \) in (3.1), we have, for \(t \in ( t_{k - 1},t_{k} )\) (\(k = 1,2, \ldots \)),

$$\begin{aligned} x_{i} ( t )\mathrm{e}^{a_{i}t} ={}& x_{i} ( t_{k - 1} + 0 )\mathrm{e}^{a_{i}t_{k - 1}} \\ &{} + \int _{ t_{k - 1}}^{ t} \mathrm{e}^{a_{i}s} \Biggl\{ \sum_{j = 1}^{n} b_{ij}f_{j} \bigl( x_{j} ( s ) \bigr) + \sum_{j = 1}^{n} c_{ij}g_{j} \bigl( x_{j} \bigl( s - \tau _{j} ( s ) \bigr) \bigr) \Biggr\} \,{d}s. \end{aligned}$$
(3.2)

Setting \(t = t_{k} - \varepsilon '\) (\(\varepsilon ' > 0\)) in (3.2), we get

$$\begin{aligned} x_{i} \bigl( t_{k} - \varepsilon ' \bigr) \mathrm{e}^{a_{i} ( t_{k} - \varepsilon ' )} ={}& x_{i} ( t_{k - 1} + 0 ) \mathrm{e}^{a_{i}t_{k - 1}} \\ &{}+ \int _{ t_{k - 1}}^{ t_{k} - \varepsilon '} \mathrm{e}^{a_{i}s} \Biggl\{ \sum_{j = 1}^{n} b_{ij}f_{j} \bigl( x_{j} ( s ) \bigr) + \sum_{j = 1}^{n} c_{ij}g_{j} \bigl( x_{j} \bigl( s - \tau _{j} ( s ) \bigr) \bigr) \Biggr\} \,{d}s, \end{aligned}$$

which generates, by letting \(\varepsilon ' \to 0^{ +} \),

$$\begin{aligned} x_{i} ( t_{k} - 0 )\mathrm{e}^{a_{i}t_{k}} ={}& x_{i} ( t_{k - 1} + 0 )\mathrm{e}^{a_{i}t_{k - 1}} \\ &{}+ \int _{ t_{k - 1}}^{ t_{k}} \mathrm{e}^{a_{i}s} \Biggl\{ \sum_{j = 1}^{n} b_{ij}f_{j} \bigl( x_{j} ( s ) \bigr) + \sum_{j = 1}^{n} c_{ij}g_{j} \bigl( x_{j} \bigl( s - \tau _{j} ( s ) \bigr) \bigr) \Biggr\} \,{d}s. \end{aligned}$$
(3.3)

As \(x_{i} ( t_{k} - 0 ) = x_{i} ( t_{k} )\), (3.3) can be rearranged as

$$\begin{aligned} x_{i} ( t_{k} )\mathrm{e}^{a_{i}t_{k}} ={}& x_{i} ( t_{k - 1} + 0 )\mathrm{e}^{a_{i}t_{k - 1}} \\ &{} + \int _{ t_{k - 1}}^{ t_{k}} \mathrm{e}^{a_{i}s} \Biggl\{ \sum_{j = 1}^{n} b_{ij}f_{j} \bigl( x_{j} ( s ) \bigr) + \sum_{j = 1}^{n} c_{ij}g_{j} \bigl( x_{j} \bigl( s - \tau _{j} ( s ) \bigr) \bigr) \Biggr\} \,{d}s. \end{aligned}$$
(3.4)

Combining (3.2) and (3.4), we derive, for \(t \in ( t_{k - 1},t_{k} ]\) (\(k = 1,2, \ldots \)),

$$\begin{aligned} x_{i} ( t )\mathrm{e}^{a_{i}t} = x_{i} ( t_{k - 1} + 0 )\mathrm{e}^{a_{i}t_{k - 1}} + \int _{ t_{k - 1}}^{ t} \mathrm{e}^{a_{i}s} \Biggl\{ \sum_{j = 1}^{n} b_{ij}f_{j} \bigl( x_{j} ( s ) \bigr) + \sum_{j = 1}^{n} c_{ij}g_{j} \bigl( x_{j} \bigl( s - \tau _{j} ( s ) \bigr) \bigr) \Biggr\} \,{d}s. \end{aligned}$$

This leads to, for \(t \in ( t_{k - 1},t_{k} ]\) (\(k = 1,2, \ldots \)),

$$\begin{aligned} x_{i} ( t )\mathrm{e}^{a_{i}t} ={}& x_{i} ( t_{k - 1} )\mathrm{e}^{a_{i}t_{k - 1}} + \int _{ t_{k - 1}}^{ t} \mathrm{e}^{a_{i}s} \Biggl\{ \sum_{j = 1}^{n} b_{ij}f_{j} \bigl( x_{j} ( s ) \bigr) + \sum_{j = 1}^{n} c_{ij}g_{j} \bigl( x_{j} \bigl( s - \tau _{j} ( s ) \bigr) \bigr) \Biggr\} \,{d}s \\ &{}+ I_{i ( k - 1 )} \bigl( x_{i} ( t_{k - 1} ) \bigr)\mathrm{e}^{a_{i}t_{k - 1}}. \end{aligned}$$

Hence,

$$\begin{aligned} &x_{i} ( t_{k - 1} )\mathrm{e}^{a_{i}t_{k - 1}} = x_{i} ( t_{k - 2} )\mathrm{e}^{a_{i}t_{k - 2}} + \int _{ t_{k - 2}}^{ t_{k - 1}} \mathrm{e}^{a_{i}s} \Biggl\{ \sum_{j = 1}^{n} b_{ij}f_{j} \bigl( x_{j} ( s ) \bigr) + \sum_{j = 1}^{n} c_{ij}g_{j} \bigl( x_{j} \bigl( s - \tau _{j} ( s ) \bigr) \bigr) \Biggr\} \,{d}s\\ &\phantom{x_{i} ( t_{k - 1} )\mathrm{e}^{a_{i}t_{k - 1}} =}{} + I_{i ( k - 2 )} \bigl( x_{i} ( t_{k - 2} ) \bigr)\mathrm{e}^{a_{i}t_{k - 2}},\\ &\vdots \\ &x_{i} ( t_{2} )\mathrm{e}^{a_{i}t_{2}} = x_{i} ( t_{1} )\mathrm{e}^{a_{i}t_{1}} + \int _{ t_{1}}^{ t_{2}} \mathrm{e}^{a_{i}s} \Biggl\{ \sum_{j = 1}^{n} b_{ij}f_{j} \bigl( x_{j} ( s ) \bigr) + \sum_{j = 1}^{n} c_{ij}g_{j} \bigl( x_{j} \bigl( s - \tau _{j} ( s ) \bigr) \bigr) \Biggr\} \,{d}s \\ &\phantom{x_{i} ( t_{2} )\mathrm{e}^{a_{i}t_{2}} =}{}+ I_{i1} \bigl( x_{i} ( t_{1} ) \bigr)\mathrm{e}^{a_{i}t_{1}}, \\ &x_{i} ( t_{1} )\mathrm{e}^{a_{i}t_{1}} = \varphi _{i} ( 0 ) + \int _{ 0}^{ t_{1}} \mathrm{e}^{a_{i}s} \Biggl\{ \sum_{j = 1}^{n} b_{ij}f_{j} \bigl( x_{j} ( s ) \bigr) + \sum_{j = 1}^{n} c_{ij}g_{j} \bigl( x_{j} \bigl( s - \tau _{j} ( s ) \bigr) \bigr) \Biggr\} \,{d}s. \end{aligned}$$

By induction, we obtain that, for \(t > 0\),

$$\begin{aligned} x_{i} ( t ) ={}& \varphi _{i} ( 0 )\mathrm{e}^{ - a_{i}t} + \mathrm{e}^{ - a_{i}t} \int _{ 0}^{ t} \mathrm{e}^{a_{i}s} \Biggl\{ \sum_{j = 1}^{n} b_{ij}f_{j} \bigl( x_{j} ( s ) \bigr) + \sum_{j = 1}^{n} c_{ij}g_{j} \bigl( x_{j} \bigl( s - \tau _{j} ( s ) \bigr) \bigr) \Biggr\} \,{d}s \\ &{}+ \mathrm{e}^{ - a_{i}t}\sum_{0 < t_{k} < t} \bigl\{ I_{ik} \bigl( x_{i} ( t_{k} ) \bigr) \mathrm{e}^{a_{i}t_{k}} \bigr\} . \end{aligned}$$

From (H1)–(H3), we know, for \(t > 0\),

$$\begin{aligned} \bigl\vert x_{i} ( t ) \bigr\vert \le{}& \bigl\vert \varphi _{i} ( 0 ) \bigr\vert \mathrm{e}^{ - a_{i}t} + \mathrm{e}^{ - a_{i}t} \sum_{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert \int _{ 0}^{ t} \mathrm{e}^{a_{i}s} \bigl\vert x_{j} ( s ) \bigr\vert \,{d}s \\ &{}+ \mathrm{e}^{ - a_{i}t} \sum_{j = 1}^{n} \vert c_{ij}\beta _{j} \vert \int _{ 0}^{ t} \mathrm{e}^{a_{i}s} \sup _{s - \tau _{j} ( s ) \le \upsilon \le s} \bigl\vert x_{j} ( \upsilon ) \bigr\vert \,{d}s + \mathrm{e}^{ - a_{i}t}\sum_{0 < t_{k} < t} \bigl\{ P_{ik} \bigl\vert x_{i} ( t_{k} ) \bigr\vert \mathrm{e}^{a_{i}t_{k}} \bigr\} . \end{aligned}$$

Denote

$$\begin{aligned} &I_{i1} = \bigl\vert \varphi _{i} ( 0 ) \bigr\vert \mathrm{e}^{ - a_{i}t},\qquad I_{i2} = \mathrm{e}^{ - a_{i}t} \sum _{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert \int _{ 0}^{ t} \mathrm{e}^{a_{i}s} \bigl\vert x_{j} ( s ) \bigr\vert \,{d}s, \\ &I_{i3} = \mathrm{e}^{ - a_{i}t} \sum_{j = 1}^{n} \vert c_{ij}\beta _{j} \vert \int _{ 0}^{ t} \mathrm{e}^{a_{i}s} \sup _{s - \tau _{j} ( s ) \le \upsilon \le s} \bigl\vert x_{j} ( \upsilon ) \bigr\vert \,{d}s, \qquad I_{i4} = \mathrm{e}^{ - a_{i}t}\sum _{0 < t_{k} < t} \bigl\{ P_{ik} \bigl\vert x_{i} ( t_{k} ) \bigr\vert \mathrm{e}^{a_{i}t_{k}} \bigr\} . \end{aligned}$$

Condition (iii) implies that there exists \(\chi \in ( 0,1 )\) such that

$$\begin{aligned} \frac{3^{p - 1}}{ ( 1 - \chi )^{p - 1}} \Biggl\{ a_{i}^{1 - p} \Biggl( \sum _{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert \Biggr)^{p} + a_{i}^{1 - p} \Biggl( \sum_{j = 1}^{n} \vert c_{ij}\beta _{j} \vert \Biggr)^{p} + a_{i}^{1 - p} P_{i}^{p} \Biggr\} < a_{i}. \end{aligned}$$
(3.5)

By employing Hölder’s inequality, we get

$$\begin{aligned} \bigl\vert x_{i} ( t ) \bigr\vert ^{p} &\le \chi ^{1 - p}I_{i1}^{p} + ( 1 - \chi )^{1 - p} ( I_{i2} + I_{i3} + I_{i4} )^{p} \\ &\le \chi ^{1 - p}I_{i1}^{p} + 3^{p - 1} ( 1 - \chi )^{1 - p}I_{i2}^{p} + 3^{p - 1} ( 1 - \chi )^{1 - p}I_{i3}^{p} + 3^{p - 1} ( 1 - \chi )^{1 - p}I_{i4}^{p}. \end{aligned}$$
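Both estimates in the last display are elementary consequences of Hölder's inequality (equivalently, of the convexity of \(x^{p}\)): \(( u + v )^{p} \le \chi ^{1 - p}u^{p} + ( 1 - \chi )^{1 - p}v^{p}\) for \(0 < \chi < 1\), and \(( x + y + z )^{p} \le 3^{p - 1} ( x^{p} + y^{p} + z^{p} )\), for nonnegative arguments and \(p \ge 1\). A quick randomized check of both inequalities:

```python
import random

# Randomized spot-check of the two splitting inequalities used above,
#   (u + v)^p     <= chi^(1-p) * u^p + (1 - chi)^(1-p) * v^p,   0 < chi < 1,
#   (x + y + z)^p <= 3^(p-1) * (x^p + y^p + z^p),
# for nonnegative u, v, x, y, z and p >= 1.
def check(trials=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        p = 1 + 3 * rng.random()
        chi = 0.01 + 0.98 * rng.random()
        u, v, x, y, z = (rng.random() * 5 for _ in range(5))
        if (u + v) ** p > chi ** (1 - p) * u ** p + (1 - chi) ** (1 - p) * v ** p + 1e-9:
            return False
        if (x + y + z) ** p > 3 ** (p - 1) * (x ** p + y ** p + z ** p) + 1e-9:
            return False
    return True

print(check())
```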

Moreover, it follows from Hölder’s inequality that

$$\begin{aligned} I_{i2}^{p} &= \Biggl\{ \sum_{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert ^{\frac{p - 1}{p}} \vert b_{ij}\alpha _{j} \vert ^{\frac{1}{p}} \int _{ 0}^{ t} \mathrm{e}^{ - a_{i} ( t - s )} \bigl\vert x_{j} ( s ) \bigr\vert \,{d}s \Biggr\} ^{p} \\ &\le \Biggl( \sum_{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert \Biggr)^{p - 1}\sum _{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert \biggl\{ \int _{ 0}^{ t} \mathrm{e}^{ - a_{i} ( t - s )} \bigl\vert x_{j} ( s ) \bigr\vert \,ds \biggr\} ^{p} \\ &= \Biggl( \sum_{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert \Biggr)^{p - 1}\sum _{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert \biggl\{ \int _{ 0}^{ t} \mathrm{e}^{\frac{ - ( p - 1 )a_{i} ( t - s )}{p}} e^{\frac{ - a_{i} ( t - s )}{p}} \bigl\vert x_{j} ( s ) \bigr\vert \,ds \biggr\} ^{p} \\ &\le \Biggl( \sum_{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert \Biggr)^{p - 1} \biggl\{ \int _{ 0}^{ t} \mathrm{e}^{ - a_{i} ( t - s )}\,ds \biggr\} ^{p - 1} \Biggl( \sum_{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert \int _{0}^{t} e^{ - a_{i} ( t - s )} \bigl\vert x_{j} ( s ) \bigr\vert ^{p}\,ds \Biggr) \\ &\le a_{i}^{1 - p} \Biggl( \sum_{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert \Biggr)^{p - 1} \Biggl( \sum_{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert \int _{0}^{t} e^{ - a_{i} ( t - s )} \bigl\vert x_{j} ( s ) \bigr\vert ^{p}\,ds \Biggr). \end{aligned}$$

Similarly, we get

$$\begin{aligned} I_{i3}^{p} \le a_{i}^{1 - p} \Biggl( \sum _{j = 1}^{n} \vert c_{ij}\beta _{j} \vert \Biggr)^{p - 1} \Biggl( \sum _{j = 1}^{n} \vert c_{ij}\beta _{j} \vert \int _{0}^{t} e^{ - a_{i} ( t - s )}\sup _{s - \tau _{j} ( s ) \le \upsilon \le s} \bigl\vert x_{j} ( \upsilon ) \bigr\vert ^{p}\,ds \Biggr). \end{aligned}$$

In addition, Lemma 2.1, conditions (i)–(ii), and Hölder’s inequality yield

$$\begin{aligned} I_{i4}^{p}&\le \biggl\{ P_{i}\sum _{0 < t_{k} < t} \bigl\{ \theta ( t_{k} - t_{k - 1} ) \bigl\vert x_{i} ( t_{k} ) \bigr\vert \mathrm{e}^{ - a_{i}(t - t_{k})} \bigr\} \biggr\} ^{p}\le \biggl( P_{i} \int _{0}^{t} e^{ - a_{i} ( t - s )} \bigl\vert x_{i} ( s ) \bigr\vert \,ds \biggr)^{p} \\ &= P_{i}^{p} \biggl\{ \int _{ 0}^{ t} \mathrm{e}^{\frac{ - ( p - 1 )a_{i} ( t - s )}{p}} e^{\frac{ - a_{i} ( t - s )}{p}} \bigl\vert x_{i} ( s ) \bigr\vert \,ds \biggr\} ^{p} \\ &\le P_{i}^{p} \biggl( \int _{ 0}^{ t} \mathrm{e}^{ - a_{i} ( t - s )}\,ds \biggr)^{p - 1} \biggl( \int _{0}^{t} e^{ - a_{i} ( t - s )} \bigl\vert x_{i} ( s ) \bigr\vert ^{p}\,ds \biggr) \\ &\le a_{i}^{1 - p} P_{i}^{p} \biggl( \int _{0}^{t} e^{ - a_{i} ( t - s )} \bigl\vert x_{i} ( s ) \bigr\vert ^{p}\,ds \biggr). \end{aligned}$$

Therefore,

$$\begin{aligned} \bigl\vert x_{i} ( t ) \bigr\vert ^{p}\le{}& \chi ^{1 - p} \bigl\vert \varphi _{i} ( 0 ) \bigr\vert ^{p}\mathrm{e}^{ - a_{i}t} + 3^{p - 1} ( 1 - \chi )^{1 - p}a_{i}^{1 - p} \Biggl( \sum _{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert \Biggr)^{p - 1} \\ & \times{}\Biggl( \sum _{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert \int _{0}^{t} e^{ - a_{i} ( t - s )} \bigl\vert x_{j} ( s ) \bigr\vert ^{p}\,ds \Biggr) \\ &{}+ 3^{p - 1} ( 1 - \chi )^{1 - p}a_{i}^{1 - p} \Biggl( \sum_{j = 1}^{n} \vert c_{ij}\beta _{j} \vert \Biggr)^{p - 1} \\ & \times{}\Biggl( \sum _{j = 1}^{n} \vert c_{ij}\beta _{j} \vert \int _{0}^{t} e^{ - a_{i} ( t - s )}\sup _{s - \tau _{j} ( s ) \le \upsilon \le s} \bigl\vert x_{j} ( \upsilon ) \bigr\vert ^{p}\,ds \Biggr) \\ &{}+ 3^{p - 1} ( 1 - \chi )^{1 - p}a_{i}^{1 - p} P_{i}^{p} \biggl( \int _{0}^{t} e^{ - a_{i} ( t - s )} \bigl\vert x_{i} ( s ) \bigr\vert ^{p}\,ds \biggr). \end{aligned}$$
(3.6)

For each \(i \in {\mathcal{N}}\), define the following function:

$$\begin{aligned} G_{i} ( \lambda ) ={}& ( \lambda - a_{i} ) + 3^{p - 1} \bigl( a_{i} ( 1 - \chi ) \bigr)^{1 - p} \Biggl( \sum _{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert \Biggr)^{p} \\ &{}+ 3^{p - 1} \bigl( a_{i} ( 1 - \chi ) \bigr)^{1 - p} \Biggl( \sum_{j = 1}^{n} \vert c_{ij}\beta _{j} \vert \Biggr)^{p - 1} \Biggl( \sum _{j = 1}^{n} \vert c_{ij}\beta _{j} \vert e^{\lambda \tau _{j}} \Biggr) + 3^{p - 1} \bigl( a_{i} ( 1 - \chi ) \bigr)^{1 - p} P_{i}^{p}. \end{aligned}$$

From (3.5), we know \(G_{i} ( 0 ) < 0\). Further, \(G_{i} ( \lambda )\) is continuous on \(\mathrm{R}_{ +} \), \(G_{i} ( + \infty ) = + \infty \), and \(G_{i}^{\prime } ( \lambda ) > 0\) for \(\lambda \in \mathrm{R}_{ +} \), so for each \(i \in {\mathcal{N}}\), the equation \(G_{i} ( \lambda ) = 0\) has a unique solution \(\lambda _{i} \in \mathrm{R}_{ +} \). Choosing \(\vartheta = \min_{i \in {\mathcal{N}}} \{ \lambda _{i} \} \), we get, for \(i \in {\mathcal{N}}\),

$$\begin{aligned} &\frac{3^{p - 1}}{ ( 1 - \chi )^{p - 1}} \Biggl\{ a_{i}^{1 - p} \Biggl( \sum _{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert \Biggr)^{p} + a_{i}^{1 - p} \Biggl( \sum_{j = 1}^{n} \vert c_{ij}\beta _{j} \vert \Biggr)^{p - 1} \Biggl( \sum _{j = 1}^{n} \vert c_{ij}\beta _{j} \vert e^{\vartheta \tau _{j}} \Biggr) + a_{i}^{1 - p} P_{i}^{p} \Biggr\} \\ &\quad \le a_{i} - \vartheta. \end{aligned}$$
(3.7)

Let \(u ( t ) = \chi ^{1 - p}\max_{i \in {\mathcal{N}}} \{ \Vert \varphi _{i} \Vert ^{p} \} e^{ - \vartheta t}\), \(t \in [ - \tau, + \infty )\). Obviously, \(u ( s ) = u ( t )e^{\vartheta ( t - s )}\) holds for \(- \tau \le s \le t\), and \(\sup_{t - \tau _{j} ( t ) \le s \le t}u ( s ) \le u ( t )e^{\vartheta \tau _{j}}\) is true for each \(j \in {\mathcal{N}}\) and \(t \in \mathrm{R}_{ +} \). Denote

$$\begin{aligned} &L_{1} = u ( 0 )\mathrm{e}^{ - a_{i}t},\\ & L_{2} = 3^{p - 1} ( 1 - \chi )^{1 - p}a_{i}^{1 - p} \Biggl( \sum_{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert \Biggr)^{p - 1} \Biggl( \sum_{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert \int _{0}^{t} e^{ - a_{i} ( t - s )}u ( s )\,ds \Biggr), \\ &L_{3} = 3^{p - 1} ( 1 - \chi )^{1 - p}a_{i}^{1 - p} \Biggl( \sum_{j = 1}^{n} \vert c_{ij}\beta _{j} \vert \Biggr)^{p - 1} \Biggl( \sum _{j = 1}^{n} \vert c_{ij}\beta _{j} \vert \int _{0}^{t} e^{ - a_{i} ( t - s )}\sup _{s - \tau _{j} ( s ) \le \upsilon \le s}u ( \upsilon )\,ds \Biggr), \\ &L_{4} = 3^{p - 1} ( 1 - \chi )^{1 - p}a_{i}^{1 - p} P_{i}^{p} \biggl( \int _{0}^{t} e^{ - a_{i} ( t - s )}u ( s )\,ds \biggr). \end{aligned}$$

As

$$\begin{aligned} &L_{1} = u ( 0 )\mathrm{e}^{ - a_{i}t} = u ( t )e^{ ( \vartheta - a_{i} )t},\\ & L_{2} = u ( t ) 3^{p - 1} ( 1 - \chi )^{1 - p}a_{i}^{1 - p} \Biggl( \sum_{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert \Biggr)^{p}e^{ ( \vartheta - a_{i} )t} \int _{ 0}^{ t} e^{ ( a_{i} - \vartheta )s}\,{d}s, \\ &L_{3} \le u ( t )3^{p - 1} ( 1 - \chi )^{1 - p}a_{i}^{1 - p} \Biggl( \sum_{j = 1}^{n} \vert c_{ij}\beta _{j} \vert \Biggr)^{p - 1} \Biggl( \sum _{j = 1}^{n} \vert c_{ij}\beta _{j} \vert e^{\vartheta \tau _{j}} \Biggr)e^{ ( \vartheta - a_{i} )t} \int _{ 0}^{ t} e^{ ( a_{i} - \vartheta )s}\,{d}s, \\ &L_{4} = u ( t )3^{p - 1} ( 1 - \chi )^{1 - p}a_{i}^{1 - p} P_{i}^{p} e^{ ( \vartheta - a_{i} )t} \int _{ 0}^{ t} e^{ ( a_{i} - \vartheta )s}\,{d}s, \end{aligned}$$

we obtain from (3.7) that

$$\begin{aligned} L_{1} + L_{2} + L_{3} + L_{4} \le{}& u ( t )e^{ ( \vartheta - a_{i} )t} + \frac{u ( t ) ( 1 - e^{ ( \vartheta - a_{i} )t} )}{a_{i} - \vartheta } \Biggl\{ 3^{p - 1} ( 1 - \chi )^{1 - p}a_{i}^{1 - p} \Biggl( \sum_{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert \Biggr)^{p} \\ &{}+ 3^{p - 1} ( 1 - \chi )^{1 - p}a_{i}^{1 - p} \Biggl( \sum_{j = 1}^{n} \vert c_{ij}\beta _{j} \vert \Biggr)^{p - 1} \Biggl( \sum_{j = 1}^{n} \vert c_{ij}\beta _{j} \vert e^{\vartheta \tau _{j}} \Biggr) + 3^{p - 1} ( 1 - \chi )^{1 - p}a_{i}^{1 - p} P_{i}^{p} \Biggr\} \\ \le{}& u ( t )e^{ ( \vartheta - a_{i} )t} + u ( t ) \bigl( 1 - e^{ ( \vartheta - a_{i} )t} \bigr) = u ( t ). \end{aligned}$$
(3.8)

Finally, we prove by contradiction that \(\vert x_{i} ( t ) \vert ^{p} \le u ( t )\) for all \(t \ge - \tau \) and \(i \in {\mathcal{N}}\). Obviously, \(\vert x_{i} ( t ) \vert ^{p} \le u ( t )\) holds for \(t \in [ - \tau,0 ]\) and \(i \in {\mathcal{N}}\). For each i, assume that there exist \(t_{i} > 0\) and \(\varepsilon > 0\) such that \(\vert x_{i} ( t ) \vert ^{p} < u ( t ) + \varepsilon \) as \(t \in [ 0,t_{i} )\) and \(\vert x_{i} ( t_{i} ) \vert ^{p} = u ( t_{i} ) + \varepsilon \). Choose \(t^{ *} \stackrel{\Delta }{=} t_{i^{ *}} = \min_{i \in {\mathcal{N}}} \{ t_{i} \} \). Obviously, \(\chi ^{1 - p} \vert \varphi _{i^{ *}} ( 0 ) \vert ^{p} < u ( 0 ) + \varepsilon \). From (3.6) and (3.8), we get

$$\begin{aligned} &\bigl\vert x_{i^{ *}} \bigl( t^{ *} \bigr) \bigr\vert ^{p} - u \bigl( t^{ *} \bigr) \\ &\quad \le 3^{p - 1} ( 1 - \chi )^{1 - p}a_{i^{ *}}^{1 - p} \Biggl( \sum_{j = 1}^{n} \vert b_{i^{ *} j}\alpha _{j} \vert \Biggr)^{p - 1} \\ &\qquad{} \times\Biggl( \sum_{j = 1}^{n} \vert b_{i^{ *} j} \alpha _{j} \vert \int _{0}^{t^{ *}} e^{ - a_{i^{ *}} ( t^{ *} - s )} \bigl[ \bigl\vert x_{j} ( s ) \bigr\vert ^{p} - u ( s ) \bigr]\,ds \Biggr) \\ &\qquad {}+ 3^{p - 1} ( 1 - \chi )^{1 - p}a_{i^{ *}}^{1 - p} \Biggl( \sum_{j = 1}^{n} \vert c_{i^{ *} j}\beta _{j} \vert \Biggr)^{p - 1}\\ &\qquad\times{} \Biggl( \sum_{j = 1}^{n} \vert c_{i^{ *} j}\beta _{j} \vert \int _{0}^{t^{ *}} e^{ - a_{i^{ *}} ( t^{ *} - s )}\sup _{s - \tau _{j} ( s ) \le \upsilon \le s} \bigl[ \bigl\vert x_{j} ( \upsilon ) \bigr\vert ^{p} - u ( \upsilon ) \bigr]\,ds \Biggr) \\ &\qquad {}+ 3^{p - 1} ( 1 - \chi )^{1 - p}a_{i^{ *}}^{1 - p} P_{i^{ *}}^{p} \biggl( \int _{0}^{t^{ *}} e^{ - a_{i^{ *}} ( t^{ *} - s )} \bigl[ \bigl\vert x_{i^{ *}} ( s ) \bigr\vert ^{p} - u ( s ) \bigr]\,ds \biggr) \\ &\qquad{}+ \bigl[ \chi ^{1 - p} \bigl\vert \varphi _{i^{ *}} ( 0 ) \bigr\vert ^{p} - u ( 0 ) \bigr]\mathrm{e}^{ - a_{i^{ *}} t^{ *}} \\ &\quad \le \varepsilon \mathrm{e}^{ - a_{i^{ *}} t^{ *}} + \varepsilon 3^{p - 1} ( 1 - \chi )^{1 - p}a_{i^{ *}}^{1 - p} \Biggl( \sum _{j = 1}^{n} \vert b_{i^{ *} j}\alpha _{j} \vert \Biggr)^{p} \int _{0}^{t^{ *}} e^{ - a_{i^{ *}} ( t^{ *} - s )}\,ds \\ &\qquad {}+ \varepsilon 3^{p - 1} ( 1 - \chi )^{1 - p}a_{i^{ *}}^{1 - p} \Biggl( \sum_{j = 1}^{n} \vert c_{i^{ *} j}\beta _{j} \vert \Biggr)^{p} \int _{0}^{t^{ *}} e^{ - a_{i^{ *}} ( t^{ *} - s )}\,ds\\ &\qquad{} + \varepsilon 3^{p - 1} ( 1 - \chi )^{1 - p}a_{i^{ *}}^{1 - p} P_{i^{ *}}^{p} \biggl( \int _{0}^{t^{ *}} e^{ - a_{i^{ *}} ( t^{ *} - s )}\,ds \biggr) \\ &\quad = \varepsilon \mathrm{e}^{ - a_{i^{ *}} t^{ *}} + \varepsilon 3^{p - 1} ( 1 - \chi )^{1 - p}a_{i^{ *}}^{1 - p} \Biggl( \sum _{j = 1}^{n} \vert b_{i^{ *} j}\alpha _{j} 
\vert \Biggr)^{p}\frac{ ( 1 - \mathrm{e}^{ - a_{i^{ *}} t^{ *}} )}{a_{i^{ *}}} \\ &\qquad{}+ \varepsilon 3^{p - 1} ( 1 - \chi )^{1 - p}a_{i^{ *}}^{1 - p} \Biggl( \sum_{j = 1}^{n} \vert c_{i^{ *} j}\beta _{j} \vert \Biggr)^{p} \frac{ ( 1 - \mathrm{e}^{ - a_{i^{ *}} t^{ *}} )}{a_{i^{ *}}}\\ &\qquad{} + \varepsilon 3^{p - 1} ( 1 - \chi )^{1 - p}a_{i^{ *}}^{1 - p} P_{i^{ *}}^{p} \frac{ ( 1 - \mathrm{e}^{ - a_{i^{ *}} t^{ *}} )}{a_{i^{ *}}} \\ &\quad= \varepsilon \mathrm{e}^{ - a_{i^{ *}} t^{ *}} + \frac{3^{p - 1} ( 1 - \chi )^{1 - p} \{ a_{i^{ *}}^{1 - p} ( \sum_{j = 1}^{n} \vert b_{i^{ *} j}\alpha _{j} \vert )^{p} + a_{i^{ *}}^{1 - p} ( \sum_{j = 1}^{n} \vert c_{i^{ *} j}\beta _{j} \vert )^{p} + a_{i^{ *}}^{1 - p} P_{i^{ *}}^{p} \} }{a_{i^{ *}}} \\ &\qquad{}\times\bigl( 1 - \mathrm{e}^{ - a_{i^{ *}} t^{ *}} \bigr)\varepsilon \\ &\quad < \varepsilon, \end{aligned}$$

which is a contradiction. This shows that \(\vert x_{i} ( t ) \vert ^{p} \le u ( t )\) for all \(t \ge - \tau \) and \(i \in {\mathcal{N}}\), which means that \(\vert x_{i} ( t;\varphi ) \vert ^{p} \le \chi ^{1 - p}\max_{i \in {\mathcal{N}}} \{ \Vert \varphi _{i} \Vert ^{p} \} e^{ - \vartheta t}\) for \(t \in [ - \tau, + \infty )\). □

As a special case, we give the following theorem.

Theorem 3.2

Suppose that

(i) there exists constant \(\mu > 0\) such that \(\inf_{k = 1,2, \ldots } \{ \frac{t_{k} - t_{k - 1}}{2} \} \ge \mu \) and \(\max_{k = 1,2, \ldots } \{ t_{k} - t_{k - 1} \} < \frac{1}{ a_{i}}\),

(ii) there exist constants \(P_{i}\) such that \(P_{ik} \le P_{i} \mu \) for \(i \in {\mathcal{N}}\) and \(k = 1,2, \ldots \) ,

(iii) \(- a_{i} + \sum_{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert + \sum _{j = 1}^{n} \vert c_{ij}\beta _{j} \vert + P_{i} < 0\).

Then system (2.1) is globally exponentially stable.

Proof

Let \(p = 1\) and \(\theta = \frac{1}{2}\) in Theorem 3.1. □

Remark 3.1

In [25], the fixed point theory was employed to study system (2.1), and the research shows that system (2.1) is globally exponentially stable on the condition that \(\sum_{i = 1}^{n} \{ \frac{1}{a_{i}}\max_{j \in {\mathcal{N}}} \vert b_{ij}l_{j} \vert + \frac{1}{a_{i}}\max_{j \in {\mathcal{N}}} \vert c_{ij}k_{j} \vert \} + \max_{i \in {\mathcal{N}}} \{ p_{i} ( \mu + \frac{1}{a_{i}} ) \} < 1\). Obviously, condition (iii) in Theorem 3.2 is weaker.
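The criteria of Theorems 3.1 and 3.2 are purely algebraic and easy to verify mechanically. As an illustration, the sketch below checks condition (iii) of Theorem 3.1 for a hypothetical two-neuron network; all parameter values are assumptions chosen for the example.

```python
# Sketch: checking condition (iii) of Theorem 3.1,
#   3^(p-1) * a_i^(1-p) * ( (sum_j |b_ij alpha_j|)^p
#                           + (sum_j |c_ij beta_j|)^p + P_i^p ) < a_i,
# for every neuron i. Parameters are hypothetical, not from the paper.
def condition_iii(p, a, b, c, alpha, beta, P):
    n = len(a)
    for i in range(n):
        sb = sum(abs(b[i][j] * alpha[j]) for j in range(n))
        sc = sum(abs(c[i][j] * beta[j]) for j in range(n))
        lhs = 3 ** (p - 1) * a[i] ** (1 - p) * (sb ** p + sc ** p + P[i] ** p)
        if lhs >= a[i]:
            return False
    return True

# p = 2 (mean-square case), two neurons, tanh-type 1-Lipschitz activations:
print(condition_iii(
    p=2,
    a=[2.0, 2.5],
    b=[[0.1, -0.2], [0.15, 0.1]],
    c=[[0.05, 0.1], [-0.1, 0.05]],
    alpha=[1.0, 1.0], beta=[1.0, 1.0],
    P=[0.2, 0.2],
))
```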

Exponential stability of impulsive integral inequalities

Consider the following impulsive integral inequalities:

$$\begin{aligned} y_{i} ( t ) \le{}& C \phi _{i} ( 0 )\mathrm{e}^{ - a_{i}t} + \sum_{j = 1}^{n} \alpha _{ij} \int _{ 0}^{ t} \mathrm{e}^{ - a_{i} ( t - s )} y_{j} ( s ) \,{d}s + \sum_{j = 1}^{n} \beta _{ij} \int _{ 0}^{ t} \mathrm{e}^{ - a_{i} ( t - s )} \sup _{s - \tau _{j} ( s ) \le \upsilon \le s}y_{j} ( \upsilon ) \,{d}s \\ &{}+ \sum_{0 < t_{k} < t} \bigl\{ P_{ik} y_{i} ( t_{k} ) \mathrm{e}^{ - a_{i} ( t - t_{k} )} \bigr\} ,\quad t \ge 0, \\ y_{i} ( t ) ={}& \phi _{i} ( t ),\qquad \phi _{i} \in C \bigl[ [ - \tau,0 ],\mathrm{R}_{ +} \bigr],\quad t \in [ - \tau,0 ], \end{aligned}$$
(4.1)

where \(C \ge 1\), and for each \(i,j \in {\mathcal{N}}\), \(y_{i} ( t ) \ge 0\) for \(t \ge - \tau \), \(0 \le \tau _{j} ( s ) \le \tau _{j} \le \tau \) for \(s \ge 0\), and \(a_{i} > 0\), \(\alpha _{ij} \ge 0\), \(\beta _{ij} \ge 0\), \(P_{ik} \ge 0\), \(k = 1,2, \ldots \) .

Theorem 4.1

Suppose that

(i) there exist constants \(\mu > 0\) and \(\theta \in ( 0,1 )\) such that \(\inf_{k = 1,2, \ldots } \{ \theta ( t_{k} - t_{k - 1} ) \} \ge \mu \) and \(\max_{k = 1,2, \ldots } \{ t_{k} - t_{k - 1} \} < \frac{1 - \theta }{\theta a_{i}}\),

(ii) there exist nonnegative constants \(P_{i}\) such that \(P_{ik} \le P_{i} \mu \) for \(i \in {\mathcal{N}}\) and \(k = 1,2, \ldots \) ,

(iii) \(- a_{i} + \sum_{j = 1}^{n} \alpha _{ij} + \sum_{j = 1}^{n} \beta _{ij} + P_{i} < 0\).

Then there exists a positive constant \(\lambda ^{ *} \) such that

$$\begin{aligned} \max_{i \in {\mathcal{N}}}y_{i} ( t ) \le C\max _{i \in {\mathcal{N}}} \bigl\{ \Vert \phi _{i} \Vert \bigr\} e^{ - \lambda ^{ *} t},\quad t \in [ - \tau, + \infty ), \end{aligned}$$

where \(\lambda ^{ *} \) is the smallest of the roots of the following equations (\(i \in {\mathcal{N}}\)):

$$\begin{aligned} \lambda - a_{i} + \sum_{j = 1}^{n} \alpha _{ij} + \sum_{j = 1}^{n} \beta _{ij}e^{\lambda \tau _{j}} + P_{i} = 0. \end{aligned}$$

Proof

For each \(i \in {\mathcal{N}}\), define the following function:

$$\begin{aligned} F_{i} ( \lambda ) = \lambda - a_{i} + \sum _{j = 1}^{n} \alpha _{ij} + \sum _{j = 1}^{n} \beta _{ij}e^{\lambda \tau _{j}} + P_{i}. \end{aligned}$$

Note that \(F_{i} ( \lambda )\) is continuous on \(\mathrm{R}_{ +} \), \(F_{i} ( 0 ) = - a_{i} + \sum_{j = 1}^{n} \alpha _{ij} + \sum_{j = 1}^{n} \beta _{ij} + P_{i} < 0\), \(F_{i} ( + \infty ) = + \infty \), and \(F_{i}^{\prime } ( \lambda ) > 0\) for \(\lambda \in \mathrm{R}_{ +} \), so for each \(i \in {\mathcal{N}}\), the equation \(F_{i} ( \lambda ) = 0\) has a unique solution \(\lambda _{i} \in \mathrm{R}_{ +} \). Choosing \(\lambda ^{ *} = \min_{i \in {\mathcal{N}}} \{ \lambda _{i} \} \), we get
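Since each \(F_{i}\) is continuous and strictly increasing on \(\mathrm{R}_{+}\) with \(F_{i} ( 0 ) < 0\) and \(F_{i} ( + \infty ) = + \infty \), each root \(\lambda _{i}\), and hence \(\lambda ^{ *} \), can be computed numerically by bisection. The following sketch illustrates this; the function name and the list-based parameter layout are our own, not part of the paper.

```python
import math

def lambda_star(a, alpha, beta, P, tau, tol=1e-12):
    """Compute lambda* of Theorem 4.1 as the smallest root of F_i(lambda) = 0.

    a[i] > 0, alpha[i][j] >= 0, beta[i][j] >= 0, P[i] >= 0, tau[j] >= 0.
    Requires F_i(0) < 0, i.e. condition (iii) of the theorem.
    """
    n = len(a)
    roots = []
    for i in range(n):
        def F(lam):
            return (lam - a[i] + sum(alpha[i])
                    + sum(beta[i][j] * math.exp(lam * tau[j]) for j in range(n))
                    + P[i])
        assert F(0.0) < 0, "condition (iii) fails for i = %d" % i
        lo, hi = 0.0, 1.0
        while F(hi) < 0:          # F is increasing with F(+inf) = +inf,
            hi *= 2               # so doubling eventually brackets the root
        while hi - lo > tol:      # bisection on the unique root of F_i
            mid = 0.5 * (lo + hi)
            if F(mid) < 0:
                lo = mid
            else:
                hi = mid
        roots.append(0.5 * (lo + hi))
    return min(roots)
```

For instance, with \(n = 1\), \(a_{1} = 2\), \(\alpha _{11} = \beta _{11} = P_{1} = 0.5\), and \(\tau _{1} = 0\), the equation reduces to \(\lambda - 0.5 = 0\), and the routine returns \(\lambda ^{ *} = 0.5\).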

$$\begin{aligned} \lambda ^{ *} - a_{i} + \sum_{j = 1}^{n} \alpha _{ij} + \sum _{j = 1}^{n} \beta _{ij}e^{\lambda ^{ *} \tau _{j}} + P_{i} \le 0,\quad i \in { \mathcal{N}}. \end{aligned}$$

Let \(u ( t ) = C\max_{i \in {\mathcal{N}}} \{ \Vert \phi _{i} \Vert \} e^{ - \lambda ^{ *} t}\), \(t \in [ - \tau, + \infty )\). Similar to Theorem 3.1, we get

$$\begin{aligned} &\mathrm{e}^{ - a_{i}t} u ( 0 ) + \mathrm{e}^{ - a_{i}t} \sum _{j = 1}^{n} \alpha _{ij} \int _{ 0}^{ t} \mathrm{e}^{a_{i}s} u ( s ) \,{d}s + \mathrm{e}^{ - a_{i}t} \sum_{j = 1}^{n} \beta _{ij} \int _{ 0}^{ t} \mathrm{e}^{a_{i}s} \sup _{s - \tau _{j} ( s ) \le \upsilon \le s} u ( \upsilon ) \,{d}s \\ &\quad{}+ \mathrm{e}^{ - a_{i}t}\sum_{0 < t_{k} < t} \bigl\{ P_{ik} u ( t_{k} ) \mathrm{e}^{a_{i}t_{k}} \bigr\} \le u ( t ),\quad t \ge 0. \end{aligned}$$

Finally, we prove by contradiction that \(y_{i} ( t ) \le u ( t )\) for all \(t \ge - \tau \) and \(i \in {\mathcal{N}}\); the argument is similar to the proof of Theorem 3.1, so we omit it here. This shows that \(\max_{i \in {\mathcal{N}}}y_{i} ( t ) \le C\max_{i \in {\mathcal{N}}} \{ \Vert \phi _{i} \Vert \} e^{ - \lambda ^{ *} t}\) for \(t \in [ - \tau, + \infty )\). □

Remark 4.1

Inequalities (4.1) can be regarded as multidimensional Halanay inequalities with impulses. In [30–32], the authors used the one-dimensional Halanay inequality to study the stability of delayed neural networks. However, they did not consider impulse effects, and they needed either to construct a complicated Lyapunov function [30, 31] or to define a special matrix norm [32]; moreover, their results are not easy to verify in practice. The advantages of our multidimensional Halanay inequalities with impulses are that impulse effects are taken into account, that neither a complicated Lyapunov function nor an adapted matrix norm is required, and that the resulting criteria are easy to verify.

Example

Consider the following two-dimensional impulsive delayed Hopfield neural network:

$$\begin{aligned} &\frac{{d}x_{i} ( t )}{{d}t} = - a_{i}x_{i} ( t ) + \sum _{j = 1}^{2} b_{ij}f_{j} \bigl( x_{j} ( t ) \bigr) + \sum_{j = 1}^{2} c_{ij}g_{j} \bigl( x_{j} \bigl( t - \tau _{j} ( t ) \bigr) \bigr),\quad t \ge 0, t \ne t_{k}, \\ &\Delta x_{i} ( t_{k} ) = x_{i} ( t_{k} + 0 ) - x_{i} ( t_{k} ) = I_{ik} \bigl( x_{i} ( t_{k} ) \bigr),\quad t_{k} = 0.25k, k = 1,2, \ldots, \end{aligned}$$

with the initial conditions \(x_{1} ( s ) = \cos ( s )\), \(x_{2} ( s ) = \sin ( s )\) on \(- \tau \le s \le 0\), where \(a_{1} = a_{2} = 4\), \(b_{11} = 0\), \(b_{12} = 0.1\), \(b_{21} = - 0.2\), \(b_{22} = 0\), \(c_{11} = 0.2\), \(c_{12} = 0\), \(c_{21} = 0\), \(c_{22} = - 0.1\), \(f_{j} ( s ) = g_{j} ( s ) = ( \vert s + 1 \vert - \vert s - 1 \vert )/2\) (\(j = 1,2\)), and \(I_{ik} ( x_{i} ( t_{k} ) ) = \arctan ( 0.45x_{i} ( t_{k} ) )\) for \(i = 1,2\) and \(k = 1,2, \ldots \) . It is easy to find that \(\mu = 0.125\), \(\alpha _{j} = \beta _{j} = 1\), and \(P_{ik} = 0.45\). Selecting \(P_{i} = 3.6\), we compute \(- a_{i} + \sum_{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert + \sum_{j = 1}^{n} \vert c_{ij}\beta _{j} \vert + P_{i} = - 0.1 < 0\). By Theorem 3.2, this system is globally exponentially stable (Fig. 1).
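The algebraic check above is easy to automate. The following sketch recomputes conditions (ii) and (iii) of Theorem 3.2 for the given parameters; the variable names are our own.

```python
# Numerical check of conditions (ii) and (iii) of Theorem 3.2 for the example.
a = [4.0, 4.0]
b = [[0.0, 0.1], [-0.2, 0.0]]
c = [[0.2, 0.0], [0.0, -0.1]]
alpha = [1.0, 1.0]            # Lipschitz constants of f_j
beta = [1.0, 1.0]             # Lipschitz constants of g_j
mu, P_ik, P_i = 0.125, 0.45, 3.6

# condition (ii): P_ik <= P_i * mu
assert P_ik <= P_i * mu

# condition (iii): -a_i + sum_j |b_ij alpha_j| + sum_j |c_ij beta_j| + P_i < 0
for i in range(2):
    lhs = (-a[i]
           + sum(abs(b[i][j]) * alpha[j] for j in range(2))
           + sum(abs(c[i][j]) * beta[j] for j in range(2))
           + P_i)
    print(f"i={i}: condition (iii) value = {lhs:.2f}")
    assert lhs < 0
```

Both rows give the value \(-0.1\), confirming the computation in the text.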

Figure 1. The simulation in the example
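A simulation in the spirit of Figure 1 can be reproduced with a simple forward-Euler scheme. The excerpt does not specify the delay functions \(\tau _{j} ( t )\), so the sketch below assumes a constant delay \(\tau = 0.1\) purely for illustration; the coefficients, activation functions, impulse map, and impulse instants \(t_{k} = 0.25k\) are taken from the example.

```python
import math

# Forward-Euler simulation of the two-dimensional example.
# Assumption: constant delay tau = 0.1 (the text leaves tau_j(t) unspecified).
h = 0.001                          # Euler step size
tau = 0.1                          # assumed constant delay
d = int(round(tau / h))            # delay measured in steps
a = [4.0, 4.0]
b = [[0.0, 0.1], [-0.2, 0.0]]
c = [[0.2, 0.0], [0.0, -0.1]]

def act(s):
    # f_j(s) = g_j(s) = (|s + 1| - |s - 1|) / 2
    return (abs(s + 1.0) - abs(s - 1.0)) / 2.0

# history on [-tau, 0]: x1(s) = cos(s), x2(s) = sin(s)
hist = [[math.cos(-tau + k * h), math.sin(-tau + k * h)] for k in range(d + 1)]
x = hist[-1][:]                    # state at t = 0

imp_every = int(round(0.25 / h))   # impulse instants t_k = 0.25 k
for step in range(1, int(10.0 / h) + 1):
    xd = hist[0]                   # (approximately) the delayed state x(t - tau)
    x = [x[i] + h * (-a[i] * x[i]
                     + sum(b[i][j] * act(x[j]) for j in range(2))
                     + sum(c[i][j] * act(xd[j]) for j in range(2)))
         for i in range(2)]
    if step % imp_every == 0:      # jump: x -> x + arctan(0.45 x)
        x = [xi + math.atan(0.45 * xi) for xi in x]
    hist.pop(0)
    hist.append(x)

final = max(abs(v) for v in x)
print(final)                       # close to zero by t = 10
```

Although each impulse enlarges the state (by a factor of at most \(1.45\)), the fast decay between impulses dominates, and the trajectories tend to zero, as Theorem 3.2 predicts.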

Availability of data and materials

Not applicable.

References

  1. Hopfield, J.J.: Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 79, 2554–2558 (1982)

  2. Hopfield, J.J.: Neurons with graded response have collective computational properties like those of two-state neurons. Proc. Natl. Acad. Sci. USA 81, 3088–3092 (1984)

  3. Paik, J.K., Katsaggelos, A.K.: Image restoration using a modified Hopfield network. IEEE Trans. Image Process. 1(1), 49–63 (1992)

  4. Tatem, A.J., Lewis, H.G., Atkinson, P.M., Nixon, M.S.: Super-resolution land cover pattern prediction using a Hopfield neural network. Remote Sens. Environ. 79(1), 1–14 (2002)

  5. Zhu, Y., Yan, Z.: Computerized tumor boundary detection using a Hopfield neural network. IEEE Trans. Med. Imaging 16(1), 55–67 (1997)

  6. Pratap, A., Raja, R., Alzabut, J., Cao, J., Rajchakit, G., Huang, C.: Mittag-Leffler stability and adaptive impulsive synchronization of fractional order neural networks in quaternion field. Math. Methods Appl. Sci. https://doi.org/10.1002/mma.6367

  7. Pratap, A., Raja, R., Alzabut, J., Dianavinnarasi, J., Cao, J., Rajchakit, G.: Finite-time Mittag-Leffler stability of fractional-order quaternion-valued memristive neural networks with impulses. Neural Process. Lett. 51, 1485–1526 (2020)

  8. Iswarya, M., Raja, R., Rajchakit, G., Cao, J., Alzabut, J., Huang, C.: A perspective on graph theory-based stability analysis of impulsive stochastic recurrent neural networks with time-varying delays. Adv. Differ. Equ. 2019, Article ID 502 (2019)

  9. Alzabut, J., Tyagi, S., Martha, S.C.: On the stability and Lyapunov direct method for fractional difference model of BAM neural networks. J. Intell. Fuzzy Syst. 38(3), 2491–2501 (2020)

  10. Iswarya, M., Raja, R., Rajchakit, G., Cao, J., Alzabut, J., Huang, C.: Existence, uniqueness and exponential stability of periodic solution for discrete-time delayed BAM neural networks based on coincidence degree theory and graph theoretic method. Mathematics 7(11), 1055 (2019)

  11. Zhang, Q., Wei, X., Xu, J.: On global exponential stability of delayed cellular neural networks with time-varying delays. Appl. Math. Comput. 162, 679–686 (2005)

  12. Arik, S., Tavsanoglu, V.: On the global asymptotic stability of delayed cellular neural networks. IEEE Trans. Circuits Syst. I 47, 571–574 (2000)

  13. Ahmad, S., Stamova, I.M.: Global exponential stability for impulsive cellular neural networks with time-varying delays. Nonlinear Anal. 69, 786–795 (2008)

  14. Liu, X., Teo, K.L.: Exponential stability of impulsive high-order Hopfield-type neural networks with time-varying delays. IEEE Trans. Neural Netw. 16, 1329–1339 (2005)

  15. Qiu, J.L.: Exponential stability of impulsive neural networks with time-varying delays and reaction-diffusion terms. Neurocomputing 70, 1102–1108 (2007)

  16. Stamov, G.T., Stamova, I.M.: Almost periodic solutions for impulsive neural networks with delay. Appl. Math. Model. 31, 1263–1270 (2007)

  17. Li, K., Zhang, X., Li, Z.: Global exponential stability of impulsive cellular neural networks with time-varying and distributed delays. Chaos Solitons Fractals 41, 1427–1434 (2009)

  18. Wei, T., Lin, P., Wang, Y., Wang, L.: Stability of stochastic impulsive reaction-diffusion neural networks with S-type distributed delays and its application to image encryption. Neural Netw. 116, 35–45 (2019)

  19. Ren, Y., He, Q., Gu, Y., Sakthivel, R.: Mean-square stability of delayed stochastic neural networks with impulsive effects driven by G-Brownian motion. Stat. Probab. Lett. 143, 56–66 (2018)

  20. Wu, Y., Yan, S., Fan, M., Li, W.: Stabilization of stochastic coupled systems with Markovian switching via feedback control based on discrete-time state observations. Int. J. Robust Nonlinear Control 28, 247–265 (2018)

  21. Wang, P., Feng, J., Su, H.: Stabilization of stochastic delayed networks with Markovian switching and hybrid nonlinear coupling via aperiodically intermittent control. Nonlinear Anal. Hybrid Syst. 32, 115–130 (2019)

  22. Han, X.-X., Wu, K.-N., Ding, X.: Finite-time stabilization for stochastic reaction-diffusion systems with Markovian switching via boundary control. Appl. Math. Comput. 385, 125422 (2020)

  23. He, D., Qing, Y.: Boundedness theorems for non-autonomous stochastic delay differential systems driven by G-Brownian motion. Appl. Math. Lett. 91, 83–89 (2019)

  24. Wang, H., Wei, G., Wen, S., Huang, T.: Generalized norm for existence, uniqueness and stability of Hopfield neural networks with discrete and distributed delays. Neural Netw. 128, 288–293 (2020)

  25. Zhang, Y., Luo, Q.: Global exponential stability of impulsive cellular neural networks with time-varying delays via fixed point theory. Adv. Differ. Equ. 2013, 23 (2013). https://doi.org/10.1186/1687-1847-2013-23

  26. Chen, G., Li, D., Shi, L., Ganns, O.V., Lunel, S.V.: Stability results for stochastic delayed recurrent neural networks with discrete and distributed delays. J. Differ. Equ. 264, 3864–3898 (2018)

  27. Luo, J.: Fixed points and exponential stability for stochastic Volterra–Levin equations. J. Comput. Appl. Math. 234, 934–940 (2010)

  28. Luo, J., Taniguchi, T.: Fixed points and stability of stochastic neutral partial differential equations with infinite delays. Stoch. Anal. Appl. 27, 1163–1173 (2009)

  29. Luo, J.: Fixed points and exponential stability of mild solutions of stochastic partial differential equations with delays. J. Math. Anal. Appl. 342, 753–760 (2008)

  30. Huang, C., He, Y., Wang, H.: Mean square exponential stability of stochastic recurrent neural networks with time-varying delays. Comput. Math. Appl. 56, 1773–1778 (2008)

  31. Huang, C., He, Y., Huang, L., Zhu, W.: pth moment stability analysis of stochastic recurrent neural networks with time-varying delays. Inf. Sci. 178, 2194–2203 (2008)

  32. Jiang, M., Mu, J., Huang, D.: Globally exponential stability and dissipativity for nonautonomous neural networks with mixed time-varying delays. Neurocomputing 205, 421–429 (2016)


Acknowledgements

The authors would like to thank the editors and reviewers for their valuable contributions, which greatly improved the readability of this paper.

Funding

This work is supported by the National Natural Science Foundation of China with Grant Nos. 61573193, 61473213, 61671338, by Hubei Province Key Laboratory of Systems Science in Metallurgical Process (Wuhan University of Science and Technology) with Grant No. Z201901, and the Joint Key Grant of National Natural Science Foundation of China and Zhejiang Province (U1509217).

Author information


Contributions

The authors have equally made contributions. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Yutian Zhang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.



Cite this article

Zhang, Y., Chen, G. & Luo, Q. Inequalities and pth moment exponential stability of impulsive delayed Hopfield neural networks. J Inequal Appl 2021, 113 (2021). https://doi.org/10.1186/s13660-021-02640-9


Keywords

  • pth moment exponential stability
  • Integral inequality
  • Impulse
  • Hopfield neural networks
  • Delay