Inequalities and pth moment exponential stability of impulsive delayed Hopfield neural networks
Journal of Inequalities and Applications volume 2021, Article number: 113 (2021)
Abstract
In this paper, the pth moment exponential stability of a class of impulsive delayed Hopfield neural networks is investigated. Some concise algebraic criteria are provided by a new method based on impulsive integral inequalities. Our discussion requires neither a complicated Lyapunov function nor differentiability of the delay function. In addition, we summarize a new result on the exponential stability of a class of impulsive integral inequalities. Finally, an example is given to illustrate the effectiveness of the obtained results.
1 Introduction
In the past few years, the artificial neural networks introduced by Hopfield [1, 2] have become a significant research topic due to their wide applications in various areas such as signal and image processing, associative memory, combinatorial optimization, pattern classification, etc. [3–5]. All the applications of Hopfield neural networks (HNNs) depend on qualitative behavior such as stability, existence and uniqueness, convergence, oscillation, and so on [6–10]. Particularly, the stability property is a major concern in the design and applications of neural networks. Therefore, many researchers have been paying much attention to the stability study of HNNs.
In addition, since time delays are frequently encountered due to the finite switching speed of neurons and amplifiers in the implementation of neural networks, it is meaningful to discuss the effect of time delays on the stability of HNNs. Consequently, researchers put forward the model of delayed Hopfield neural networks (DHNNs) and have made great efforts in the study of their stability (see e.g. [11, 12]).
Furthermore, it is worth noting that impulsive effects are also a common phenomenon in many engineering systems, that is, instantaneous jumps or resets of system states, as found in the automobile industry, network control, video coding, etc. Hence, the model of impulsive delayed Hopfield neural networks (IDHNNs) is more representative, and it is necessary to probe the stability of IDHNNs both theoretically and practically. So far, there have been a number of research achievements (see e.g. [13–17]).
Among the existing stability results for impulsive delayed systems, one powerful technique is the Lyapunov method (see e.g. [18–24]). Wei et al. [18] studied the global exponential stability in the mean-square sense of a class of stochastic impulsive reaction-diffusion systems with S-type distributed delays based on a Lyapunov–Krasovskii functional and an impulsive inequality. Ren et al. [19] considered the mean-square exponential input-to-state stability of a class of delayed stochastic neural networks with impulsive effects driven by G-Brownian motion by constructing an appropriate G-Lyapunov–Krasovskii functional and using mathematical induction and some inequality techniques.
It should be pointed out that the key to the Lyapunov method is to construct a suitable Lyapunov function or functional. However, finding a suitable Lyapunov function or functional often involves some mathematical difficulties.
On the other hand, an alternative technique for the stability analysis of impulsive delayed systems has been developed based on the fixed point theorem (see e.g. [25–29]). Zhang et al. [25] studied the application of fixed point theory to the stability analysis of a class of impulsive delayed neural networks. By employing the contraction mapping principle, some novel and concise sufficient conditions were presented to ensure the existence and uniqueness of the solution and the global exponential stability of the considered system.
However, the fixed point method has its own drawback: Hölder's inequality must be applied at exactly the right point in the argument, which is not always easy to identify.
Motivated by the above discussion, we attempt to study the stability of IDHNNs by a new method different from the Lyapunov method and the fixed point method. As is well known, many works focus on the mean-square stability of complex dynamical systems. However, mean-square stability is actually the special case of pth moment stability obtained by choosing \(p=2\), so the study of pth moment stability is more representative. In this paper, we investigate the pth moment exponential stability of IDHNNs with the help of impulsive integral inequalities. Compared with the Lyapunov method and fixed point theory, our method has two advantages. First, it requires neither a Lyapunov function nor differentiability of the delay function. Second, it does not require finding the appropriate point at which to apply Hölder's inequality. Furthermore, a new criterion for the exponential stability of impulsive integral inequalities is provided based on our discussion.
The contents of this paper are organized as follows. In Sect. 2, some notations, the model description, and a useful lemma are introduced. In Sect. 3, we consider the pth moment exponential stability of IDHNNs and obtain some new sufficient conditions. Inspired by Sect. 3, we discuss the exponential stability of a class of impulsive integral inequalities in Sect. 4 and give an algebraic criterion. In Sect. 5, one example is given to illustrate the effectiveness of our results.
2 Preliminaries
Notations: Let \(\mathrm{R}^{n}\) denote the n-dimensional Euclidean space. \(\vert \cdot \vert \) represents the Euclidean norm for vectors or the absolute value for real numbers. \({\mathcal{N}} \stackrel{\Delta }{=} \{ 1,2, \ldots,n \} \), \(\mathrm{R}_{ +} = [ 0,\infty )\). \(C [ X,Y ]\) stands for the space of continuous mappings from the topological space X to the topological space Y. For some \(\tau > 0\), let \(C [ [ - \tau,0 ],\mathrm{R} ]\) be the family of all continuous real-valued functions ϕ defined on \([ - \tau,0 ]\) equipped with the norm \(\Vert \phi \Vert = \sup_{s \in [ - \tau,0 ]} \vert \phi ( s ) \vert \).
Consider a class of impulsive delayed Hopfield neural networks described by
\[
\left\{
\begin{aligned}
&x_{i}'(t) = - a_{i} x_{i}(t) + \sum_{j=1}^{n} b_{ij} f_{j} \bigl( x_{j}(t) \bigr) + \sum_{j=1}^{n} c_{ij} g_{j} \bigl( x_{j} \bigl( t - \tau _{j}(t) \bigr) \bigr), \quad t \ge 0,\ t \ne t_{k}, \\
&\Delta x_{i}(t_{k}) = x_{i}(t_{k} + 0) - x_{i}(t_{k} - 0) = I_{ik} \bigl( x_{i}(t_{k}) \bigr), \quad k = 1,2,\ldots, \\
&x_{i}(s) = \varphi _{i}(s), \quad s \in [ - \tau, 0 ],
\end{aligned}
\right.
\tag{2.1}
\]
where \(i \in {\mathcal{N}}\) and n is the number of neurons in the neural network. \(x_{i} ( t )\) stands for the state of the ith neuron at time t. \(f_{j} ( \bullet ), g_{j} ( \bullet ) \in C [ \mathrm{R},\mathrm{R} ]\), \(f_{j} ( x_{j} ( t ) )\) is the activation function of the jth neuron at time t and \(g_{j} ( x_{j} ( t - \tau _{j} ( t ) ) )\) is the activation function of the jth neuron at time \(t - \tau _{j} ( t )\), where \(\tau _{j} ( t ) \in C [ \mathrm{R}^{ +},\mathrm{R}^{ +} ]\) denotes the transmission delay along the axon of the jth neuron and satisfies \(0 \le \tau _{j} ( t ) \le \tau _{j}\) (\(\tau _{j}\) is a constant). The constant \(a_{i} > 0\) stands for the rate with which the ith neuron will reset its potential to the resting state when disconnected from the network and external inputs. The constant \(b_{ij}\) represents the connection weight of the jth neuron on the ith neuron at time t. The constant \(c_{ij}\) denotes the connection strength of the jth neuron on the ith neuron at time \(t - \tau _{j} ( t )\). The fixed impulsive moments \(t_{k}\) (\(k = 1,2, \ldots \)) satisfy \(0 = t_{0} < t_{1} < t_{2} < \cdots \) and \(\lim_{k \to \infty } t_{k} = \infty \). \(x_{i} ( t_{k} + 0 )\) and \(x_{i} ( t_{k} - 0 )\) stand for the right-hand and left-hand limit of \(x_{i} ( t )\) at time \(t_{k}\), respectively. \(I_{ik} ( x_{i} ( t_{k} ) )\) shows the abrupt change of \(x_{i} ( t )\) at the impulsive moment \(t_{k}\) and \(I_{ik} ( \bullet ) \in C [ \mathrm{R},\mathrm{R} ]\). \(\varphi _{i} ( s ) \in C [ [ - \tau,0 ],\mathrm{R} ]\) and \(\tau = \max_{j \in {\mathcal{N}}} \{ \tau _{j} \} \).
Denote by \(\mathbf{x} ( t; \varphi ) = ( x_{1} ( t; \varphi _{1} ), \ldots,x_{n} ( t; \varphi _{n} ) )^{\mathrm{T}} \in \mathrm{R}^{n}\) the solution of system (2.1), where \(\varphi ( s ) = ( \varphi _{1} ( s ), \ldots,\varphi _{n} ( s ) )^{\mathrm{T}} \in \mathrm{R}^{n}\). As a function of the time variable t, the solution \(\mathbf{x} ( t; \varphi )\) is a piecewise continuous vector-valued function with discontinuities of the first kind at the points \(t_{k}\) (\(k = 1,2, \ldots \)), where it is left-continuous, i.e., \(x_{i} ( t_{k} - 0 ) = x_{i} ( t_{k} )\) for \(i \in {\mathcal{N}}\).
Throughout this paper, we always assume that \(f_{j} ( 0 ) = g_{j} ( 0 ) = I_{jk} ( 0 ) = 0\) for \(j \in {\mathcal{N}}\) and \(k = 1,2, \ldots \) . Then system (2.1) admits a trivial solution with initial value \(\varphi = 0\).
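Before turning to the analysis, the dynamics of system (2.1) can be explored numerically. The following is a minimal explicit-Euler sketch of an IDHNN trajectory (all function and variable names are ours, and the time-varying delays are simplified to a single constant delay); it is an illustration under these assumptions, not part of the analysis:

```python
import math

def simulate_idhnn(a, b, c, f, g, tau_max, impulse_times, jumps, phi, T=5.0, h=1e-3):
    """Explicit-Euler sketch of an impulsive delayed Hopfield network like (2.1).

    a             : decay rates a_i > 0
    b, c          : weights for the current and the delayed states
    f, g          : scalar activation functions, applied componentwise
    tau_max       : constant delay (stands in for the delay functions tau_j(t))
    impulse_times : increasing impulse moments t_k
    jumps         : jumps[k][i] maps x_i(t_k) to the increment I_ik(x_i(t_k))
    phi           : initial functions phi_i on [-tau_max, 0]
    """
    n = len(a)
    lag = int(round(tau_max / h))
    # history buffer holding the states at t - tau_max, ..., t (oldest first)
    hist = [[phi[i](-tau_max + m * h) for i in range(n)] for m in range(lag + 1)]
    t, k, traj = 0.0, 0, [list(hist[-1])]
    while t < T:
        x, xd = hist[-1], hist[0]          # current and delayed states
        nxt = [x[i] + h * (-a[i] * x[i]
                           + sum(b[i][j] * f(x[j]) for j in range(n))
                           + sum(c[i][j] * g(xd[j]) for j in range(n)))
               for i in range(n)]
        t += h
        if k < len(impulse_times) and t >= impulse_times[k]:
            # impulsive jump: x_i(t_k + 0) = x_i(t_k) + I_ik(x_i(t_k))
            nxt = [nxt[i] + jumps[k][i](nxt[i]) for i in range(n)]
            k += 1
        hist.append(nxt)
        hist.pop(0)
        traj.append(nxt)
    return traj
```

Running this sketch with the coefficients of the example in Sect. 5 produces trajectories that decay toward the trivial solution despite the impulsive jumps, which is consistent with the theory developed below.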
Definition 2.1
The trivial solution of system (2.1) is said to be pth (\(p \ge 1\)) moment exponentially stable if there exists a pair of positive constants λ and C such that
\[ \bigl\vert x_{i} ( t;\varphi ) \bigr\vert ^{p} \le C \max_{i \in {\mathcal{N}}} \bigl\{ \Vert \varphi _{i} \Vert ^{p} \bigr\} e^{ - \lambda t}, \quad t \ge 0, \]
holds for any \(\varphi _{i} ( s ) \in C [ [ - \tau,0 ],\mathrm{R} ]\) and \(i \in {\mathcal{N}}\).
Lemma 2.1
Suppose \(0 < \theta < 1\) and \(\lambda \theta ( t - s ) < 1 - \theta \). Then \(\int _{s}^{t} e^{\lambda x}\,dx > \theta ( t - s )e^{\lambda t}\) holds for \(t > s\) and \(\lambda > 0\).
Proof
Construct the function \(F ( t ) = \int _{s}^{t} e^{\lambda x}\,dx - \theta ( t - s )e^{\lambda t}\). For fixed s, it is easy to find that \(F ( s ) = 0\) and
\[ F' ( t ) = e^{\lambda t} - \theta e^{\lambda t} - \lambda \theta ( t - s )e^{\lambda t} = e^{\lambda t} \bigl( 1 - \theta - \lambda \theta ( t - s ) \bigr) > 0 \]
for \(t > s\), by the assumption \(\lambda \theta ( t - s ) < 1 - \theta \).
So, \(F ( t ) > F ( s ) = 0\) as \(t > s\), which means \(\int _{s}^{t} e^{\lambda x}\,dx > \theta ( t - s )e^{\lambda t} ( t > s )\). □
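As a quick numerical sanity check of Lemma 2.1 (not part of the proof), the closed form of the integral lets us evaluate \(F(t)\) directly for sample parameters satisfying \(\lambda \theta (t - s) < 1 - \theta\):

```python
import math

def lemma_gap(lam, theta, s, t):
    """F(t) = ∫_s^t e^{λx} dx − θ(t−s)e^{λt}; positive when Lemma 2.1 applies."""
    integral = (math.exp(lam * t) - math.exp(lam * s)) / lam  # closed form of the integral
    return integral - theta * (t - s) * math.exp(lam * t)

# sample parameters with 0 < θ < 1 and λθ(t − s) < 1 − θ
lam, theta, s, t = 1.0, 0.3, 0.0, 1.5
assert lam * theta * (t - s) < 1 - theta
print(lemma_gap(lam, theta, s, t) > 0)  # True
```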
3 pth moment exponential stability of IDHNNs
In this section, we develop a new method to discuss the pth moment exponential stability of system (2.1). Before proceeding, we introduce some hypotheses listed as follows:
(H1) There exist nonnegative constants \(\alpha _{j}\) such that, for any \(x_{j}^{ ( 1 )}, x_{j}^{ ( 2 )} \in \mathrm{R}\),
\[ \bigl\vert f_{j} \bigl( x_{j}^{ ( 1 )} \bigr) - f_{j} \bigl( x_{j}^{ ( 2 )} \bigr) \bigr\vert \le \alpha _{j} \bigl\vert x_{j}^{ ( 1 )} - x_{j}^{ ( 2 )} \bigr\vert . \]
(H2) There exist nonnegative constants \(\beta _{j}\) such that, for any \(x_{j}^{ ( 1 )}, x_{j}^{ ( 2 )} \in \mathrm{R}\),
\[ \bigl\vert g_{j} \bigl( x_{j}^{ ( 1 )} \bigr) - g_{j} \bigl( x_{j}^{ ( 2 )} \bigr) \bigr\vert \le \beta _{j} \bigl\vert x_{j}^{ ( 1 )} - x_{j}^{ ( 2 )} \bigr\vert . \]
(H3) There exist nonnegative constants \(P_{jk}\) such that, for any \(x_{j}^{ ( 1 )}, x_{j}^{ ( 2 )} \in \mathrm{R}\),
\[ \bigl\vert I_{jk} \bigl( x_{j}^{ ( 1 )} \bigr) - I_{jk} \bigl( x_{j}^{ ( 2 )} \bigr) \bigr\vert \le P_{jk} \bigl\vert x_{j}^{ ( 1 )} - x_{j}^{ ( 2 )} \bigr\vert . \]
Theorem 3.1
Suppose that
(i) there exist constants \(\mu > 0\) and \(\theta \in ( 0,1 )\) such that \(\inf_{k = 1,2, \ldots } \{ \theta ( t_{k} - t_{k - 1} ) \} \ge \mu \) and \(\max_{k = 1,2, \ldots } \{ t_{k} - t_{k - 1} \} < \frac{1 - \theta }{\theta a_{i}}\),
(ii) there exist constants \(P_{i}\) such that \(P_{ik} \le P_{i} \mu \) for \(i \in {\mathcal{N}}\) and \(k = 1,2, \ldots \) ,
(iii)
Then system (2.1) is globally exponentially stable in the pth (\(p \ge 1\)) moment.
Proof
Multiplying both sides of system (2.1) by \(\mathrm{e}^{a_{i}t}\) and integrating from \(t_{k - 1} + \varepsilon \) (\(\varepsilon > 0\)) to \(t \in ( t_{k - 1},t_{k} )\) yields
Letting \(\varepsilon \to 0^{ +} \) in (3.1), we have, for \(t \in ( t_{k - 1},t_{k} )\) (\(k = 1,2, \ldots \)),
Setting \(t = t_{k} - \varepsilon '\) (\(\varepsilon ' > 0\)) in (3.2), we get
which generates, by letting \(\varepsilon ' \to 0^{ +} \),
As \(x_{i} ( t_{k} - 0 ) = x_{i} ( t_{k} )\), (3.3) can be rearranged as
Combining (3.2) and (3.4), we derive, for \(t \in ( t_{k - 1},t_{k} ]\) (\(k = 1,2, \ldots \)),
This leads to, for \(t \in ( t_{k - 1},t_{k} ]\) (\(k = 1,2, \ldots \)),
Hence,
By induction, we obtain that, for \(t > 0\),
From (H1)–(H3), we know, for \(t > 0\),
Denote
Condition (iii) implies that there exists \(\chi \in ( 0,1 )\) such that
By employing Hölder’s inequality, we get
Moreover, it follows from Hölder’s inequality that
Similarly, we get
In addition, Lemma 2.1, conditions (i)–(ii), and Hölder’s inequality yield
Therefore,
For each \(i \in {\mathcal{N}}\), define the following function:
From (3.5), we know \(G_{i} ( 0 ) < 0\). Further, \(G_{i} ( \lambda )\) is continuous on \(\mathrm{R}_{ +} \), \(G_{i} ( + \infty ) = + \infty \), and \(G_{i}^{\prime } ( \lambda ) > 0\) for \(\lambda \in \mathrm{R}_{ +} \), so for each \(i \in {\mathcal{N}}\), the equation \(G_{i} ( \lambda ) = 0\) has a unique solution \(\lambda _{i} \in \mathrm{R}_{ +} \). Choosing \(\vartheta = \min_{i \in {\mathcal{N}}} \{ \lambda _{i} \} \), we get, for \(i \in {\mathcal{N}}\),
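In practice, the root \(\lambda_i\) of \(G_i(\lambda) = 0\) has no closed form, but since \(G_i\) is continuous and increasing with \(G_i(0) < 0\) and \(G_i(+\infty) = +\infty\), bisection locates it reliably. A sketch, with a hypothetical Halanay-type characteristic function standing in for \(G_i\) (the actual \(G_i\) of the proof should be substituted):

```python
import math

def unique_root(G, hi=1.0, tol=1e-10):
    """Bisection for the unique zero of a continuous increasing G with G(0) < 0."""
    while G(hi) < 0:          # grow the bracket until G changes sign
        hi *= 2.0
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if G(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# hypothetical Halanay-type characteristic function: G(λ) = λ − a + β e^{λτ}
G = lambda lam: lam - 4.0 + 0.3 * math.exp(0.25 * lam)
lam_i = unique_root(G)        # the unique λ_i with G(λ_i) = 0
```

Taking \(\vartheta = \min_i \lambda_i\) over the roots computed this way gives the decay rate used in the theorem.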
Let \(u ( t ) = \chi ^{1 - p}\max_{i \in {\mathcal{N}}} \{ \Vert \varphi _{i} \Vert ^{p} \} e^{ - \vartheta t}\), \(t \in [ - \tau, + \infty )\). Obviously, \(u ( s ) = u ( t )e^{\vartheta ( t - s )}\) holds for \(- \tau \le s \le t\), and \(\sup_{t - \tau _{j} ( t ) \le s \le t}u ( s ) \le u ( t )e^{\vartheta \tau _{j}}\) is true for each \(j \in {\mathcal{N}}\) and \(t \in \mathrm{R}_{ +} \). Denote
As
we obtain from (3.7) that
Finally, we prove by contradiction that \(\vert x_{i} ( t ) \vert ^{p} \le u ( t )\) for all \(t \ge - \tau \) and \(i \in {\mathcal{N}}\). Obviously, \(\vert x_{i} ( t ) \vert ^{p} \le u ( t )\) holds for \(t \in [ - \tau,0 ]\) and \(i \in {\mathcal{N}}\). For each i, assume that there exist \(t_{i} > 0\) and \(\varepsilon > 0\) such that \(\vert x_{i} ( t ) \vert ^{p} < u ( t ) + \varepsilon \) for \(t \in [ 0,t_{i} )\) and \(\vert x_{i} ( t_{i} ) \vert ^{p} = u ( t_{i} ) + \varepsilon \). Choose \(t^{ *} \stackrel{\Delta }{=} t_{i^{ *}} = \min_{i \in {\mathcal{N}}} \{ t_{i} \} \). Since \(\chi ^{1 - p} \vert \varphi _{i^{ *}} ( 0 ) \vert ^{p} < u ( 0 ) + \varepsilon \), from (3.6) and (3.8) we get
which is a contradiction. This shows that \(\vert x_{i} ( t ) \vert ^{p} \le u ( t )\) for all \(t \ge - \tau \) and \(i \in {\mathcal{N}}\), which means that \(\vert x_{i} ( t;\varphi ) \vert ^{p} \le \chi ^{1 - p}\max_{i \in {\mathcal{N}}} \{ \Vert \varphi _{i} \Vert ^{p} \} e^{ - \vartheta t}\) for \(t \in [ - \tau, + \infty )\). □
As a special case, we give the following theorem.
Theorem 3.2
Suppose that
(i) there exists a constant \(\mu > 0\) such that \(\inf_{k = 1,2, \ldots } \{ \frac{t_{k} - t_{k - 1}}{2} \} \ge \mu \) and \(\max_{k = 1,2, \ldots } \{ t_{k} - t_{k - 1} \} < \frac{1}{ a_{i}}\),
(ii) there exist constants \(P_{i}\) such that \(P_{ik} \le P_{i} \mu \) for \(i \in {\mathcal{N}}\) and \(k = 1,2, \ldots \) ,
(iii) \(- a_{i} + \sum_{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert + \sum _{j = 1}^{n} \vert c_{ij}\beta _{j} \vert + P_{i} < 0\).
Then system (2.1) is globally exponentially stable.
Proof
Let \(p = 1\) and \(\theta = \frac{1}{2}\) in Theorem 3.1. □
Remark 3.1
In [25], fixed point theory was employed to study system (2.1), and that work shows that system (2.1) is globally exponentially stable under the condition that \(\sum_{i = 1}^{n} \{ \frac{1}{a_{i}}\max_{j \in {\mathcal{N}}} \vert b_{ij}l_{j} \vert + \frac{1}{a_{i}}\max_{j \in {\mathcal{N}}} \vert c_{ij}k_{j} \vert \} + \max_{i \in {\mathcal{N}}} \{ p_{i} ( \mu + \frac{1}{a_{i}} ) \} < 1\). Obviously, condition (iii) in Theorem 3.2 is weaker.
4 Exponential stability of impulsive integral inequalities
Consider the following impulsive integral inequalities:
\[ y_{i} ( t ) \le C \Vert \phi _{i} \Vert e^{ - a_{i}t} + \int _{0}^{t} e^{ - a_{i} ( t - s )} \Biggl[ \sum_{j = 1}^{n} \alpha _{ij} y_{j} ( s ) + \sum_{j = 1}^{n} \beta _{ij} y_{j} \bigl( s - \tau _{j} ( s ) \bigr) \Biggr] \,ds + \sum_{0 < t_{k} \le t} e^{ - a_{i} ( t - t_{k} )}P_{ik}y_{i} ( t_{k} ), \quad t \ge 0, \tag{4.1} \]
with \(y_{i} ( s ) = \phi _{i} ( s )\) for \(s \in [ - \tau,0 ]\),
where \(C \ge 1\), and for each \(i,j \in {\mathcal{N}}\), \(y_{i} ( t ) \ge 0\) for \(t \ge - \tau \), \(0 \le \tau _{j} ( s ) \le \tau _{j} \le \tau \) for \(s \ge 0\), and \(a_{i} > 0\), \(\alpha _{ij} \ge 0\), \(\beta _{ij} \ge 0\), \(P_{ik} \ge 0\), \(k = 1,2, \ldots \) .
Theorem 4.1
Suppose that
(i) there exist constants \(\mu > 0\) and \(\theta \in ( 0,1 )\) such that \(\inf_{k = 1,2, \ldots } \{ \theta ( t_{k} - t_{k - 1} ) \} \ge \mu \) and \(\max_{k = 1,2, \ldots } \{ t_{k} - t_{k - 1} \} < \frac{1 - \theta }{\theta a_{i}}\),
(ii) there exist nonnegative constants \(P_{i}\) such that \(P_{ik} \le P_{i} \mu \) for \(i \in {\mathcal{N}}\) and \(k = 1,2, \ldots \) ,
(iii) \(- a_{i} + \sum_{j = 1}^{n} \alpha _{ij} + \sum_{j = 1}^{n} \beta _{ij} + P_{i} < 0\).
Then there exist positive constants C and \(\lambda ^{ *} \) such that
\[ \max_{i \in {\mathcal{N}}} y_{i} ( t ) \le C \max_{i \in {\mathcal{N}}} \bigl\{ \Vert \phi _{i} \Vert \bigr\} e^{ - \lambda ^{ *} t}, \quad t \ge 0, \]
where \(\lambda ^{ *} \) is the smallest of the roots of the equations \(F_{i} ( \lambda ) = 0\), \(i \in {\mathcal{N}}\), with the functions \(F_{i}\) constructed in the proof.
Proof
For each \(i \in {\mathcal{N}}\), define the following function:
Note that \(F_{i} ( \lambda )\) is continuous on \(\mathrm{R}_{ +} \), \(F_{i} ( 0 ) = - a_{i} + \sum_{j = 1}^{n} \alpha _{ij} + \sum_{j = 1}^{n} \beta _{ij} + P_{i} < 0\), \(F_{i} ( + \infty ) = + \infty \), and \(F_{i}^{\prime } ( \lambda ) > 0\) for \(\lambda \in \mathrm{R}_{ +} \), so for each \(i \in {\mathcal{N}}\), the equation \(F_{i} ( \lambda ) = 0\) has a unique solution \(\lambda _{i} \in \mathrm{R}_{ +} \). Choosing \(\lambda ^{ *} = \min_{i \in {\mathcal{N}}} \{ \lambda _{i} \} \), we get
Let \(u ( t ) = C\max_{i \in {\mathcal{N}}} \{ \Vert \phi _{i} \Vert \} e^{ - \lambda ^{ *} t}\), \(t \in [ - \tau, + \infty )\). Similar to Theorem 3.1, we get
Finally, we prove by contradiction that \(y_{i} ( t ) \le u ( t )\) for all \(t \ge - \tau \) and \(i \in {\mathcal{N}}\). The argument is similar to the proof of Theorem 3.1, so we omit it here. This shows that \(y_{i} ( t ) \le u ( t )\) for all \(t \ge - \tau \) and \(i \in {\mathcal{N}}\), which means that \(\max_{i \in {\mathcal{N}}}y_{i} ( t ) \le C\max_{i \in {\mathcal{N}}} \{ \Vert \phi _{i} \Vert \} e^{ - \lambda ^{ *} t}\) for \(t \in [ - \tau, + \infty )\). □
Remark 4.1
Inequalities (4.1) can be considered as multidimensional Halanay inequalities with impulses. In [30–32], the authors used the one-dimensional Halanay inequality to study the stability of delayed neural networks. However, they did not consider impulse effects, and they needed to construct a complicated Lyapunov function [30, 31] or define a complicated matrix norm [32]; in addition, their results are not easy to verify in practice. The advantages of our multidimensional Halanay inequalities with impulses are that we take the impulse effects into account, we require neither a complicated Lyapunov function nor a special matrix norm, and our results are easy to verify.
5 Example
Consider the following two-dimensional impulsive delayed Hopfield neural network:
with the initial conditions \(x_{1} ( s ) = \cos ( s )\), \(x_{2} ( s ) = \sin ( s )\) on \(- \tau \le s \le 0\), where \(a_{1} = a_{2} = 4\), \(b_{11} = 0\), \(b_{12} = 0.1\), \(b_{21} = - 0.2\), \(b_{22} = 0\), \(c_{11} = 0.2\), \(c_{12} = 0\), \(c_{21} = 0\), \(c_{22} = - 0.1\), \(f_{j} ( s ) = g_{j} ( s ) = ( \vert s + 1 \vert - \vert s - 1 \vert )/2\) (\(j = 1,2\)), and \(I_{ik} ( x_{i} ( t_{k} ) ) = \arctan ( 0.45x_{i} ( t_{k} ) )\) for \(i = 1,2\) and \(k = 1,2, \ldots \) . It is easy to find that \(\mu = 0.125\), \(\alpha _{j} = \beta _{j} = 1\), and \(P_{ik} = 0.45\). Select \(P_{i} = 3.6\) and then compute \(- a_{i} + \sum_{j = 1}^{n} \vert b_{ij}\alpha _{j} \vert + \sum_{j = 1}^{n} \vert c_{ij}\beta _{j} \vert + P_{i} = - 0.1 < 0\). From Theorem 3.2, we know this system is globally exponentially stable (Fig. 1).
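Conditions (ii) and (iii) of Theorem 3.2 for this example can be checked with a few lines of code (the variable names below are ours):

```python
a = [4.0, 4.0]
b = [[0.0, 0.1], [-0.2, 0.0]]
c = [[0.2, 0.0], [0.0, -0.1]]
alpha = beta = [1.0, 1.0]   # Lipschitz constants of f_j = g_j
mu, P_ik = 0.125, 0.45
P = [3.6, 3.6]

# condition (ii): P_ik <= P_i * mu for every i and k
assert all(P_ik <= Pi * mu for Pi in P)

# condition (iii): -a_i + sum_j |b_ij alpha_j| + sum_j |c_ij beta_j| + P_i < 0
lhs = [-a[i]
       + sum(abs(b[i][j]) * alpha[j] for j in range(2))
       + sum(abs(c[i][j]) * beta[j] for j in range(2))
       + P[i]
       for i in range(2)]
print(lhs)  # both entries equal -0.1 up to rounding, so condition (iii) holds
```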
Availability of data and materials
Not applicable.
References
Hopfield, J.J.: Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 79, 2554–2558 (1982)
Hopfield, J.J.: Neurons with graded response have collective computational properties like those of two-state neurons. Proc. Natl. Acad. Sci. USA 81, 3088–3092 (1984)
Paik, J.K., Katsaggelos, A.K.: Image restoration using a modified Hopfield network. IEEE Trans. Image Process. 1(1), 49–63 (1992)
Tatem, A.J., Lewis, H.G., Atkinson, P.M., Nixon, M.S.: Super-resolution land cover pattern prediction using a Hopfield neural network. Remote Sens. Environ. 79(1), 1–14 (2002)
Zhu, Y., Yan, Z.: Computerized tumor boundary detection using a Hopfield neural network. IEEE Trans. Med. Imaging 16(1), 55–67 (1997)
Pratap, A., Raja, R., Alzabut, J., Cao, J., Rajchakit, G., Huang, C.: Mittag-Leffler stability and adaptive impulsive synchronization of fractional order neural networks in quaternion field. Math. Methods Appl. Sci. https://doi.org/10.1002/mma.6367
Pratap, A., Raja, R., Alzabut, J., Dianavinnarasi, J., Cao, J., Rajchakit, G.: Finite-time Mittag-Leffler stability of fractional-order quaternion-valued memristive neural networks with impulses. Neural Process. Lett. 51, 1485–1526 (2020)
Iswarya, M., Raja, R., Rajchakit, G., Cao, J., Alzabut, J., Huang, C.: A perspective on graph theory-based stability analysis of impulsive stochastic recurrent neural networks with time-varying delays. Adv. Differ. Equ. 2019, Article ID 502 (2019)
Alzabut, J., Tyagi, S., Martha, S.C.: On the stability and Lyapunov direct method for fractional difference model of BAM neural networks. J. Intell. Fuzzy Syst. 38(3), 2491–2501 (2020)
Iswarya, M., Raja, R., Rajchakit, G., Cao, J., Alzabut, J., Huang, C.: Existence, uniqueness and exponential stability of periodic solution for discrete-time delayed BAM neural networks based on coincidence degree theory and graph theoretic method. Mathematics 7(11), 1055 (2019)
Zhang, Q., Wei, X., Xu, J.: On global exponential stability of delayed cellular neural networks with time-varying delays. Appl. Math. Comput. 162, 679–686 (2005)
Arik, S., Tavsanoglu, V.: On the global asymptotic stability of delayed cellular neural networks. IEEE Trans. Circuits Syst. I 47, 571–574 (2000)
Ahmad, S., Stamova, I.M.: Global exponential stability for impulsive cellular neural networks with time-varying delays. Nonlinear Anal. 69, 786–795 (2008)
Liu, X., Teo, K.L.: Exponential stability of impulsive high-order Hopfield-type neural networks with time-varying delays. IEEE Trans. Neural Netw. 16, 1329–1339 (2005)
Qiu, J.L.: Exponential stability of impulsive neural networks with time-varying delays and reaction-diffusion terms. Neurocomputing 70, 1102–1108 (2007)
Stamov, G.T., Stamova, I.M.: Almost periodic solutions for impulsive neural networks with delay. Appl. Math. Model. 31, 1263–1270 (2007)
Li, K., Zhang, X., Li, Z.: Global exponential stability of impulsive cellular neural networks with time-varying and distributed delays. Chaos Solitons Fractals 41, 1427–1434 (2009)
Wei, T., Lin, P., Wang, Y., Wang, L.: Stability of stochastic impulsive reaction-diffusion neural networks with S-type distributed delays and its application to image encryption. Neural Netw. 116, 35–45 (2019)
Ren, Y., He, Q., Gu, Y., Sakthivel, R.: Mean-square stability of delayed stochastic neural networks with impulsive effects driven by G-Brownian motion. Stat. Probab. Lett. 143, 56–66 (2018)
Wu, Y., Yan, S., Fan, M., Li, W.: Stabilization of stochastic coupled systems with Markovian switching via feedback control based on discrete-time state observations. Int. J. Robust Nonlinear Control 28, 247–265 (2018)
Wang, P., Feng, J., Su, H.: Stabilization of stochastic delayed networks with Markovian switching and hybrid nonlinear coupling via aperiodically intermittent control. Nonlinear Anal. Hybrid Syst. 32, 115–130 (2019)
Han, X.-X., Wu, K.-N., Ding, X.: Finite-time stabilization for stochastic reaction-diffusion systems with Markovian switching via boundary control. Appl. Math. Comput. 385, 125422 (2020)
He, D., Qing, Y.: Boundedness theorems for non-autonomous stochastic delay differential systems driven by G-Brownian motion. Appl. Math. Lett. 91, 83–89 (2019)
Wang, H., Wei, G., Wen, S., Huang, T.: Generalized norm for existence, uniqueness and stability of Hopfield neural networks with discrete and distributed delays. Neural Netw. 128, 288–293 (2020)
Zhang, Y., Luo, Q.: Global exponential stability of impulsive cellular neural networks with time-varying delays via fixed point theory. Adv. Differ. Equ. 2013, 23 (2013). https://doi.org/10.1186/1687-1847-2013-23
Chen, G., Li, D., Shi, L., van Gaans, O., Verduyn Lunel, S.: Stability results for stochastic delayed recurrent neural networks with discrete and distributed delays. J. Differ. Equ. 264, 3864–3898 (2018)
Luo, J.: Fixed points and exponential stability for stochastic Volterra–Levin equations. J. Comput. Appl. Math. 234, 934–940 (2010)
Luo, J., Taniguchi, T.: Fixed points and stability of stochastic neutral partial differential equations with infinite delays. Stoch. Anal. Appl. 27, 1163–1173 (2009)
Luo, J.: Fixed points and exponential stability of mild solutions of stochastic partial differential equations with delays. J. Math. Anal. Appl. 342, 753–760 (2008)
Huang, C., He, Y., Wang, H.: Mean square exponential stability of stochastic recurrent neural networks with time-varying delays. Comput. Math. Appl. 56, 1773–1778 (2008)
Huang, C., He, Y., Huang, L., Zhu, W.: pth moment stability analysis of stochastic recurrent neural networks with time-varying delays. Inf. Sci. 178, 2194–2203 (2008)
Jiang, M., Mu, J., Huang, D.: Globally exponential stability and dissipativity for nonautonomous neural networks with mixed time-varying delays. Neurocomputing 205, 421–429 (2016)
Acknowledgements
The authors would like to thank the editors and reviewers for their valuable contributions, which greatly improved the readability of this paper.
Funding
This work is supported by the National Natural Science Foundation of China with Grant Nos. 61573193, 61473213, 61671338, by Hubei Province Key Laboratory of Systems Science in Metallurgical Process (Wuhan University of Science and Technology) with Grant No. Z201901, and the Joint Key Grant of National Natural Science Foundation of China and Zhejiang Province (U1509217).
Author information
Contributions
The authors have equally made contributions. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Zhang, Y., Chen, G. & Luo, Q. Inequalities and pth moment exponential stability of impulsive delayed Hopfield neural networks. J Inequal Appl 2021, 113 (2021). https://doi.org/10.1186/s13660-021-02640-9