
Stability of Cayley dynamic systems with impulsive effects


Linear dynamic systems with impulsive effects are considered. For such systems, we define a new impulsive exponential matrix. Necessary and sufficient conditions for exponential stability and boundedness are established, with the impulsive exponential matrix serving as the fundamental tool.

1 Introduction

Researchers are well aware of the importance of the theory of impulsive dynamic systems from a theoretical point of view, as well as of its applications in control theory, dynamical games, optimal control, and mathematical programming [1, 2]. Wang et al. [3] reviewed some recent developments in impulsive control theory. Lupulescu et al. [4] have addressed some very recent exciting results.

Wang et al. [3] described an impulsive differential equation by three components: a continuous-time differential equation, which governs the state of the system between impulses; an impulse equation, which models the impulsive jump defined by a jump function at the instant an impulse occurs; and a jump criterion, which defines the set of jump events in which the impulse equation is active.

During the last decade of the twentieth century, time scale calculus was established as a tool for explaining many new phenomena in electricity, mechanics, biology, economics, etc. [5]. Time scale calculus also provides a way to treat discrete versions of continuous scientific problems [6]. There are many papers and books discussing the theory and applications of time scales (see, for example, [6, 7]).

On the other hand, several stability results for impulsive systems on time scales have been obtained in recent years; we refer to [8, 9]. The models of impulsive control systems have been constantly expanded, including impulsive stochastic systems [10, 11], impulsive neural networks [9, 12], impulsive chaotic systems [13], fractional-order impulsive functional systems [14], and impulsive switched systems [15]. Besides the traditional asymptotic (exponential) stability analysis [16], the study of the dynamical properties of impulsive dynamical systems has also been extended to finite-time stability [17, 18], stability in terms of two measures [19], and input-to-state stability [20, 21].

1.1 Research gap and motivation

Impulsive dynamic systems can model a real process more accurately than non-impulsive ones when disturbances occurring in nature are involved. On the other hand, dynamic systems on time scales combine discrete and continuous phenomena. The hybridization of these two features, disturbances in the mathematical model and mixed (continuous and discrete) domains of the related parameters, calls for merging the notions of impulsive dynamic systems and time scale theory.

The matrix exponential and the fundamental matrix of homogeneous dynamic systems play a key role in stability theory. However, as far as the authors know, all these studies are based on Hilger’s exponential function. In the existing literature, we have noticed articles such as [4, 22] that address various solution properties of impulsive dynamic systems with Hilger’s approach.

Recently, Cieslinski [23] proposed a new definition of the exponential function on time scales based on the Cayley transformation. The complex-valued Cayley transformation maps the imaginary axis onto the unit circle. Therefore, it is possible to define trigonometric and hyperbolic complex-valued functions on time scales in a standard way. The resulting functions maintain most of the qualitative properties of the analogous continuous functions. In particular, the Pythagorean trigonometric identities hold exactly on any time scale. Dynamic equations satisfied by Cayley-type functions bear a natural resemblance to the corresponding differential equations, Cayley h-difference equations [24], and Cayley quantum equations [25]. The current work is an attempt to study the solution of linear impulsive dynamic systems by means of a Cayley-type fundamental matrix.

1.2 Novelties of the work

Motivated by the recent studies [22, 23], the current article claims its novelty from the following perspectives:

  1. We define fundamental matrices of linear impulsive dynamic systems on time scales that are based on Cayley’s transformation. The impulsive transition matrix and its properties are key to the stability theory.

  2. We establish the existence and uniqueness of the solution of impulsive dynamic systems.

  3. Necessary and sufficient conditions for boundedness and exponential stability of the proposed solution are established in this article.

1.3 Structure of the paper

The outline of the paper is as follows. In Sect. 2, we review the definitions and properties of time-scale calculus and the qualitative properties of dynamic systems on time scales. In Sect. 3, we develop the fundamental concepts of homogeneous impulsive dynamic systems on time scales. These properties are then used to investigate linear nonhomogeneous dynamic systems. In Sect. 4, we derive the variation of constants formula for linear dynamic systems. We investigate the stability and boundedness of solutions in Sect. 5. Finally, we conclude the paper with some brief comments.

2 Preliminaries

Let \(\mathbb{X}\) be a finite-dimensional Banach space with a norm \(\Vert \cdot \Vert \). For simplicity, we denote all norms by the same symbol \(\Vert \cdot \Vert \), without danger of confusion. For \(M\in \mathbb{M}_{n}(\mathbb{R})\), we denote its transpose by \(M^{T}\). For an unbounded time scale \(\mathbb{T}_{+}\) (a nonempty closed subset of the real line), consider the following jump operators:

$$\begin{aligned} &x^{\sigma} =\sigma (x):=\inf \{s\in \mathbb{T}:s>x\}; \\ &x^{\rho} =\rho ( x ):=\sup \{s\in \mathbb{T}:s< x\}, \end{aligned}$$

and a real-valued function:

$$\begin{aligned} \mu:\mathbb{T}\rightarrow \mathbb{R}^{+}:\mu (x):=x^{\sigma}-x. \end{aligned}$$

From the definitions of the jump operators, for all \(x\in \mathbb{T}\), \(x^{\sigma}\geq x\) and \(x^{\rho}\leq x\) (see, e.g., [6]).

Throughout this paper, we always assume that \(\sup \mathbb{T}=\infty \) and, for any \(\tau \in \mathbb{T}\), let \(\mathbb{T}_{ ( \tau ) }:=[ \tau,\infty ) \cap \mathbb{T}\) and \(\mathbb{T}_{+}:=\mathbb{T}_{ ( 0 ) }\).

A function f is said to be right-dense continuous (rd-continuous) if it is continuous at right-dense points and left-hand limits exist at left-dense points. The Δ-derivative of a function g is defined by

$$\begin{aligned} g^{\Delta} ( x ) =\lim_{ \substack{s\rightarrow x\\s\neq x^{\sigma}}} \frac{g ( x^{\sigma} ) -g ( s ) }{x^{\sigma}-s}. \end{aligned}$$
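At an isolated point this limit reduces to a forward difference quotient. A minimal numerical illustration (ours, not from the paper), assuming a purely discrete time scale:

```python
# Delta-derivative at an isolated point: the limit in the definition
# reduces to the forward difference quotient (g(sigma(x)) - g(x)) / mu(x).
ts = [0.0, 0.5, 1.25, 2.0]            # a discrete time scale
g = lambda t: t ** 2
for i in range(len(ts) - 1):
    mu = ts[i + 1] - ts[i]            # graininess mu(t_i)
    g_delta = (g(ts[i + 1]) - g(ts[i])) / mu
    # (t^2)^Delta = t + sigma(t) = 2t + mu at isolated points
    assert abs(g_delta - (2 * ts[i] + mu)) < 1e-12
```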

Let us state the following basic spaces:

  • \(C_{rd}(\mathbb{T}_{ ( \tau ) },\mathbb{X}):= \{ g: \mathbb{T}_{ ( \tau ) }\rightarrow \mathbb{X}:g \text{ is }rd\text{-continuous} \} \).

  • \(C_{rd}^{1}(\mathbb{T}_{ ( \tau ) },\mathbb{X}):= \{ g: \mathbb{T}_{ ( \tau ) }\rightarrow \mathbb{X}:\text{the }\Delta \text{-derivative of }g\text{ exists and belongs to }C_{rd}(\mathbb{T}_{ ( \tau ) },\mathbb{X}) \} \).

  • \(\mathcal{R}(\mathbb{T}_{ ( \tau ) },\mathbb{X}):= \{ g\in C_{rd}(\mathbb{T}_{ ( \tau ) },\mathbb{X}): ( I_{\mathbb{X}}+\mu (x)g(x) ) ^{-1}\text{ exists for all }x\in \mathbb{T}_{ ( \tau ) } \} \).

  • \(\mathcal{R}^{+}(\mathbb{T}_{ ( \tau ) },\mathbb{R}):= \{ g\in \mathcal{R}(\mathbb{T}_{ ( \tau ) }, \mathbb{R}):1+\mu (x)g(x)>0\text{ for all }x\in \mathbb{T}_{ ( \tau ) } \} \).

Here \(\mathbb{X}\) can be \(\mathbb{R},\mathbb{C},\mathbb{R}^{n},\mathbb{C}^{n}\) or \(\mathbb{M}_{n} ( \mathbb{R} ) \).

For \(g\in \mathcal{R}(\mathbb{T}_{+},\mathbb{C})\), Hilger defined the exponential function as follows:

$$\begin{aligned} e_{g} ( x,s ) =\exp \biggl( \int _{s}^{x} \frac{\operatorname{Log} ( 1+\mu (r)g ( r ) ) }{\mu (r)}\Delta r \biggr), \end{aligned}$$

where Log is the principal logarithm function.

If \(g,f\in \mathcal{R}(\mathbb{T}_{+},\mathbb{C})\), then the following properties hold:

  • \(e_{f} ( x^{\sigma},s ) = ( 1+\mu (x)f ( x ) ) e_{f} ( x,s ) \),

  • \(( e_{f} ( x,s ) ) ^{-1}=e_{\ominus ^{\mu}f} ( x,s ) \), where \(\ominus ^{\mu}f:=\frac{-f}{1+\mu f}\),

  • \(e_{f} ( x,s ) e_{f} ( s,r ) =e_{f} ( x,r ) \),

  • \(e_{g} ( x,s ) e_{f} ( x,s ) =e_{f\oplus ^{\mu}g} ( x,s ) \), where \(f\oplus ^{\mu}g:=f+g+\mu fg\).
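On a purely discrete time scale the Δ-integral in Hilger’s definition reduces to a sum, so the exponential becomes a product of factors \(1+\mu (t_{i})g(t_{i})\). A short numerical sketch (our illustration, assuming an isolated-point time scale) that also checks the first property above:

```python
import numpy as np

def hilger_exp(g, ts):
    """Hilger exponential e_g(t_n, t_0) on a purely discrete time scale:
    the Delta-integral reduces to a sum, so the exponential is the
    product of the factors (1 + mu_i * g(t_i))."""
    ts = np.asarray(ts, dtype=float)
    mu = np.diff(ts)                              # graininess at each point
    factors = 1.0 + mu * np.array([g(t) for t in ts[:-1]])
    return np.concatenate(([1.0], np.cumprod(factors)))

ts = [0.0, 0.5, 1.25, 2.0, 3.0]                   # an irregular discrete time scale
g = lambda t: 0.3 + 0.1 * t
e = hilger_exp(g, ts)

# First property above: e_g(sigma(x), s) = (1 + mu(x) g(x)) e_g(x, s)
for i in range(len(ts) - 1):
    mu_i = ts[i + 1] - ts[i]
    assert np.isclose(e[i + 1], (1.0 + mu_i * g(ts[i])) * e[i])
```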

For each regressive matrix M, \(e_{M} ( t,s ) \) is the unique solution of the IVP:

$$\begin{aligned} \begin{gathered} X^{\Delta}=MX, \quad X ( s ) =I_{\mathbb{M}_{n} ( \mathbb{R} ) }\end{gathered} \end{aligned}$$

and \(x ( t ) =e_{M} ( t,s ) x_{0}\), \(t\geq s\), is the only solution of IVP:

$$\begin{aligned} x^{\Delta}=Mx,\quad x ( s ) =a_{0} \end{aligned}$$

and \(x ( t ) =e_{\ominus M^{T}} ( t,s ) x_{0}\), \(t\geq s\) is the only solution of the adjoint IVP:

$$\begin{aligned} x^{\Delta}=-M^{T}x^{\sigma}, \quad x ( s ) =x_{0}. \end{aligned}$$

3 Exponential function

3.1 Scalar case

Cieslinski [23] introduced an improved exponential function (the Cayley-exponential function). To formulate the new definition, we need a new regressivity notion:

The function \(f:\mathbb{T\rightarrow C}\) is called regressive (Cieslinski definition) if \(\mu (x)f(x)\neq \pm 2\) for any \(x\in \mathbb{T}^{k}\). The set of all regressive functions \(f:\mathbb{T\rightarrow C}\) is denoted by \(\mathcal{R}^{C}(\mathbb{T}_{+},\mathbb{C})\).

For \(f\in \mathcal{R}^{C}(\mathbb{T}_{+},\mathbb{C})\), the Cayley-exponential function is defined by

$$\begin{aligned} \begin{gathered} E_{f}(x,s):=\exp \biggl( { \int _{s}^{x}} \frac{1}{\mu (r)}\operatorname{Log} \biggl( \frac{1+\frac{1}{2}\mu (r)f ( r ) }{1-\frac{1}{2}\mu (r)f ( r ) } \biggr) \Delta r \biggr), \quad \mu (r)>0. \end{gathered} \end{aligned}$$

If \(f,g\in \mathcal{R}^{C}(\mathbb{T}_{+},\mathbb{C})\), then the following properties hold:

  • \(E_{f} ( x^{\sigma},s ) = ( \frac{1+\frac{1}{2}\mu ( x ) f ( x ) }{1-\frac{1}{2}\mu ( x ) f ( x ) } ) E_{f} ( x,s ) \),

  • \(( E_{f} ( x,s ) ) ^{-1}=E_{-f} ( x,s ) \),

  • \(\overline{ ( E_{f} ( x,s ) ) }=E_{\bar{f}} ( x,s ) \),

  • \(E_{f} ( x,s ) E_{f} ( s,r ) =E_{f} ( x,r ) \),

  • \(E_{f} ( x,s ) E_{g} ( x,s ) =E_{f\oplus g} ( x,s ) \), where \(f\oplus g:=\frac{f+g}{1+\frac{1}{4}\mu ^{2}fg}\).
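As an illustration (ours, assuming the uniform time scale \(\mathbb{T}=h\mathbb{Z}\), where each step contributes the Cayley factor \((1+\mu f/2)/(1-\mu f/2)\)), the inverse property and the continuous limit can be checked numerically:

```python
import numpy as np

def cayley_exp(f, ts):
    """Cayley exponential E_f(t_n, t_0) on a discrete time scale: each
    step contributes the Cayley factor (1 + mu f/2) / (1 - mu f/2)."""
    ts = np.asarray(ts, dtype=float)
    mu = np.diff(ts)
    fv = np.array([f(t) for t in ts[:-1]])
    factors = (1 + mu * fv / 2) / (1 - mu * fv / 2)
    return np.concatenate(([1.0], np.cumprod(factors)))

f = lambda t: -0.8
ts = np.linspace(0.0, 2.0, 401)       # T = hZ with h = 0.005

E = cayley_exp(f, ts)
E_neg = cayley_exp(lambda t: -f(t), ts)

# Second property: E_{-f} is the pointwise inverse of E_f ...
assert np.allclose(E * E_neg, 1.0)
# ... and for small graininess E_f approaches the continuous exponential.
assert np.allclose(E, np.exp(-0.8 * ts), atol=1e-4)
```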

The function \(f:\mathbb{T\rightarrow R}\) is called positively regressive if \(\vert f(x)\mu (x) \vert <2\) for all \(x\in \mathbb{T}^{k}\). The set of all positively regressive functions \(f:\mathbb{T\rightarrow R}\) is denoted by \(\mathcal{R}_{+}^{C}(\mathbb{T}_{+},\mathbb{R})\). \(\mathcal{R}_{+}^{C}(\mathbb{T}_{+},\mathbb{R})\) is a commutative group under \(\oplus \). However, the set \(\mathcal{R}^{C}(\mathbb{T}_{+},\mathbb{C})\) is not closed under \(\oplus \).

It is known [23, Theorem 3.2] that, for a real-valued function a on a time scale \(\mathbb{T}_{+}\), we have

$$\begin{aligned} \begin{gathered} a^{\Delta}(t)=\beta (t)a(t)\quad\Longleftrightarrow\quad a^{\Delta}(t)=\alpha (t) \bigl\langle a(t) \bigr\rangle , \end{gathered} \end{aligned}$$

where \(\beta (t)=\frac{\alpha (t)}{1-\frac{1}{2}\mu (t)\alpha (t)}\) and \(\langle a(t) \rangle:=\frac{a(t)+a(\sigma (t))}{2}\).
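On \(\mathbb{T}=h\mathbb{Z}\) this equivalence can be seen directly: a Hilger step multiplies by \(1+h\beta \), a Cayley step by \((1+h\alpha /2)/(1-h\alpha /2)\), and with \(\beta =\alpha /(1-h\alpha /2)\) the two factors coincide exactly. A one-line numerical check (ours):

```python
# One step on T = hZ: Hilger factor (1 + h*beta) vs Cayley factor
# (1 + h*alpha/2)/(1 - h*alpha/2).  With beta = alpha/(1 - h*alpha/2)
# the two factors coincide exactly, so the two dynamic equations have
# the same solution.
h, alpha = 0.25, 1.7
beta = alpha / (1 - h * alpha / 2)
hilger_factor = 1 + h * beta
cayley_factor = (1 + h * alpha / 2) / (1 - h * alpha / 2)
assert abs(hilger_factor - cayley_factor) < 1e-14
```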

3.2 Matrix case

Definition 3.1


An \(n\times n\) matrix-valued function M on a time scale \(\mathbb{T}_{+}\) is called regressive, if

$$\begin{aligned} I\pm \frac{\mu (x)M(x)}{2}\quad\text{are invertible for all }x\in \mathbb{T}^{k}. \end{aligned}$$

The class of all such regressive and rd-continuous matrix-valued functions is denoted by \(\mathcal{R}^{C} ( \mathbb{T}_{+},M_{n}(\mathbb{R}) ) \).

In this part, we generalize the Cayley exponential function to the following dynamic system:

$$\begin{aligned} \begin{gathered} a^{\Delta}(t)=M(t) \bigl\langle a ( t ) \bigr\rangle , \end{gathered} \end{aligned}$$

where \(M\in \mathcal{R}^{C}(\mathbb{T}_{+},M_{n}(\mathbb{R}))\). The nonhomogeneous system is stated as:

$$\begin{aligned} \begin{gathered} a^{\Delta}(t)=M(t) \bigl\langle a ( t ) \bigr\rangle +h(t) \end{gathered} \end{aligned}$$

with \(h\in C_{rd}(\mathbb{T}_{+},\mathbb{R}^{n})\).

An element ϕ of \(C_{rd}^{1}(\mathbb{T}_{+},\mathbb{R}^{n})\) is called a solution of IVP (3) on \(\mathbb{T}_{+}\) if \(\phi ^{\Delta }(t)=M(t) \langle \phi ( t ) \rangle +h(t)\) for all \(t\in \mathbb{T}_{+}\).

Let us denote by \(E_{M}(x,\tau )\) the transition matrix of (2) at initial time \(\tau \in \mathbb{T}_{+}\); it is the unique solution of the following matrix IVP:

$$\begin{aligned} \begin{gathered} A^{\Delta}(t)=M(t) \bigl\langle A ( t ) \bigr\rangle , \quad A(\tau )=I \end{gathered} \end{aligned}$$

and \(a(t)=E_{M}(t,\tau )\eta \), \(t\geq \tau \), satisfies:

$$\begin{aligned} \begin{gathered} a^{\Delta}(t)=M(t) \bigl\langle a ( t ) \bigr\rangle ,\quad a(\tau )=\eta. \end{gathered} \end{aligned}$$

Definition 3.2

Assume \(M,N\in \mathcal{R}^{C} ( \mathbb{T}_{+},M_{n}(\mathbb{R}) ) \). Then we define the circle addition of M and N by

$$\begin{aligned} (M\oplus N) (x):= ( M+N ) \biggl( I+\frac{1}{4}\mu ^{2}MN \biggr) ^{-1}\quad\text{for all }x\in \mathbb{T}^{k}. \end{aligned}$$

Example 3.3

Assume \(M\in \mathcal{R}^{C} ( \mathbb{T}_{+},M_{n}(\mathbb{R}) ) \) is a constant \(n\times n\) matrix. If \(\mathbb{T}=\mathbb{R}\), then \(E_{M}(x,x_{0})=e^{M(x-x_{0})}\).
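For a discrete time scale the example can be checked numerically: on \(\mathbb{T}=h\mathbb{Z}\) the transition matrix is a product of Cayley steps \((I-hM/2)^{-1}(I+hM/2)\) (cf. property 2 of Theorem 3.4 below), and as \(h\to 0\) it approaches \(e^{M(x-x_{0})}\). A sketch (our illustration, not from the paper):

```python
import numpy as np

def cayley_matrix_exp(M, x0, x, h):
    """E_M(x, x0) on T = hZ: iterate the Cayley step
    E_M(sigma(t), x0) = (I - hM/2)^{-1} (I + hM/2) E_M(t, x0)."""
    I = np.eye(M.shape[0])
    step = np.linalg.solve(I - h * M / 2, I + h * M / 2)
    return np.linalg.matrix_power(step, int(round((x - x0) / h)))

M = np.array([[0.0, 1.0], [-1.0, -0.5]])
E = cayley_matrix_exp(M, 0.0, 2.0, h=1e-3)

# As h -> 0 the time scale tends to R, where E_M(x, x0) = e^{M(x - x0)};
# compute the matrix exponential by diagonalization for comparison.
evals, V = np.linalg.eig(M)
expM = (V @ np.diag(np.exp(2.0 * evals)) @ np.linalg.inv(V)).real
assert np.allclose(E, expM, atol=1e-4)
```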

Theorem 3.4

If \(M\in \mathcal{R}^{C} ( \mathbb{T}_{+},M_{n}(\mathbb{R}) ) \) is a matrix-valued function on \(\mathbb{T}_{+}\), then

  1. \(E_{0}(x,s)\equiv I\) and \(E_{M}(x,x)\equiv I\);

  2. \(E_{M}(\sigma (x),s)= ( I-\frac{\mu M}{2} ) ^{-1} ( I+ \frac{\mu M}{2} ) E_{M}(x,s)\);

  3.

  4.

  5.
Theorem 3.5

Let \(M\in \mathcal{R}^{C} ( \mathbb{T}_{+},M_{n}(\mathbb{R}) ) \) and suppose that \(f:\mathbb{T}_{+}\rightarrow \mathbb{R}^{n}\) is rd-continuous. Let \(x_{0}\in \mathbb{T}_{+}\) and \(a_{0}\in \mathbb{R}^{n}\). Then the IVP

$$\begin{aligned} \begin{aligned} a^{\Delta}(x)&=M \bigl\langle a(x) \bigr\rangle +\biggl(I-\frac{M}{2}\mu (x)\biggr)f(x), \\ a(x_{0})&=a_{0}\end{aligned} \end{aligned}$$

has a unique solution, given by

$$\begin{aligned} \begin{aligned} a(x)=E_{M}(x,x_{0})a_{0}+{ \int _{x_{0}}^{x}} E_{M}\bigl(x,\sigma (\tau )\bigr)f( \tau )\Delta \tau. \end{aligned} \end{aligned}$$


First, the function a given by (5) is well defined and can be rewritten as

$$\begin{aligned} \begin{aligned} a(\cdot )=E_{M}(\cdot,x_{0}) \biggl\{ a_{0}+{ \int _{x_{0}}^{\cdot}} E_{M} \bigl(x_{0},\sigma ( \tau )\bigr)f(\tau )\Delta \tau \biggr\} . \end{aligned} \end{aligned}$$

We use the product rule to differentiate a:

$$\begin{aligned} \begin{aligned} a^{\Delta}(\cdot )={}&M(\cdot ) \bigl\langle E_{M}(\cdot,x_{0}) \bigr\rangle \biggl\{ a_{0}+{ \int _{x_{0}}^{\cdot}} E_{M} \bigl(x_{0},\tau ^{ \sigma}\bigr)f(\tau )\Delta \tau \biggr\} \\ &{}+E_{M}\bigl(\cdot ^{\sigma},x_{0} \bigr)E_{M}\bigl(x_{0},\cdot ^{\sigma}\bigr)f(\cdot ) \\ ={}&M(\cdot ) \bigl\langle E_{M}(\cdot,x_{0}) \bigr\rangle \biggl\{ a_{0}+{ \int _{x_{0}}^{\cdot}} E_{M} \bigl(x_{0},\tau ^{ \sigma}\bigr)f(\tau )\Delta \tau \biggr\} +f( \cdot ) \\ ={}&M(\cdot ) \biggl( \frac{E_{M} ( \cdot,x_{0} ) +E_{M} ( \cdot ^{\sigma},x_{0} ) }{2} \biggr) \\ &{}\times\biggl\{ a_{0}+{ \int _{x_{0}}^{\cdot}} E_{M} \bigl(x_{0},\tau ^{ \sigma}\bigr)f(\tau )\Delta \tau \biggr\} +f( \cdot ) \\ ={}&\frac{M(\cdot )}{2} \biggl[ E_{M}(\cdot,x_{0})a_{0}+{ \int _{x_{0}}^{\cdot}} E_{M}\bigl(\cdot,\tau ^{ \sigma}\bigr)f(\tau )\Delta \tau \biggr] \\ &{}+\frac{M(\cdot )}{2} \biggl[ E_{M}\bigl(\cdot ^{\sigma},x_{0} \bigr)a_{0}+{ \int _{x_{0}}^{\cdot}} E_{M}\bigl(\cdot ^{\sigma},\tau ^{\sigma}\bigr)f(\tau )\Delta \tau \biggr] +f( \cdot ). \end{aligned} \end{aligned}$$

We know that \({ \int _{x}^{x^{\sigma}}} f(t)\Delta t=\mu (x)f ( x ) \). Therefore, we have

$$\begin{aligned} \begin{aligned} { \int _{x}^{x^{\sigma}}} E_{M} \bigl(x^{\sigma},\tau ^{ \sigma}\bigr)f(\tau )\Delta \tau =\mu (x)E_{M}\bigl(x^{\sigma },x^{\sigma}\bigr)f(x)= \mu (x)f(x). \end{aligned} \end{aligned}$$

It implies that

$$\begin{aligned} \begin{aligned} a^{\Delta}(x)={}&\frac{M(x)}{2} \biggl[ E_{M}(x,x_{0})a_{0}+{ \int _{x_{0}}^{x}} E_{M} \bigl(x,\tau ^{\sigma}\bigr)f( \tau )\Delta \tau \biggr] \\ &{}+\frac{M(x)}{2} \biggl[ E_{M}\bigl(x^{\sigma},x_{0} \bigr)a_{0}+{ \int _{x_{0}}^{x}} E_{M} \bigl(x^{\sigma},\tau ^{ \sigma}\bigr)f(\tau )\Delta \tau \biggr] \\ &{}+f(x)+\frac{M}{2}\mu (x)f(x)-\frac{M}{2}\mu (x)f(x), \end{aligned} \end{aligned}$$

we have

$$\begin{aligned} \begin{aligned} a^{\Delta}(x)={}&\frac{M(x)}{2}a(x) \\ &{}+\frac{M(x)}{2} \biggl[ E_{M}\bigl(x^{\sigma},x_{0} \bigr)a_{0}+{ \int _{x_{0}}^{x}} E_{M} \bigl(x^{\sigma},\tau ^{ \sigma}\bigr)f(\tau )\Delta \tau \biggr] \\ &{}+f(x)+\frac{M ( x ) }{2}\mu (x)f(x)- \frac{M ( x ) }{2}\mu (x)f(x) \\ ={}&\frac{M(x)}{2}\bigl(a(x)\bigr)+\frac{M(x)}{2}a\bigl(x^{\sigma} \bigr)+ \biggl( I- \frac{M ( x ) }{2}\mu (x) \biggr) f(x) \\ ={}&M ( x ) \bigl\langle a(x) \bigr\rangle + \biggl( I- \frac{M ( x ) }{2}\mu (x) \biggr) f(x). \end{aligned} \end{aligned}$$


Thus, a satisfies the IVP

$$\begin{aligned} \begin{aligned} &a^{\Delta}(x)=M ( x ) \bigl\langle a(x) \bigr\rangle + \biggl( I-\frac{M ( x ) }{2}\mu (x) \biggr) f(x), \\ &a(x_{0})=a_{0}. \end{aligned} \end{aligned}$$

 □


Theorem 3.6

(Putzer Algorithm)

Let \(M\in \mathcal{R}^{C} ( \mathbb{T}_{+},M_{n}(\mathbb{R}) ) \) be a constant \(n\times n\) matrix. Suppose \(x_{0}\in \mathbb{T}_{+}\). If \(\lambda _{1},\lambda _{2},\ldots,\lambda _{n}\) are the eigenvalues of M, then

$$\begin{aligned} E_{M}(x,x_{0})={ \sum _{i=0}^{n-1}} r_{i+1}(x)P_{i}, \end{aligned}$$

where \(r(x):=(r_{1}(x),r_{2}(x),\ldots,r_{n}(x))^{T}\) is the solution of IVP

$$\begin{aligned} \begin{aligned} r^{\Delta}= \begin{pmatrix} \lambda _{1} & 0 & 0 & \cdots & 0 \\ 1 & \lambda _{2} & 0 & \ddots & \vdots \\ 0 & 1 & \lambda _{3} & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & 1 & \lambda _{n}\end{pmatrix} \langle r \rangle,\quad r(x_{0})= \begin{pmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix} \end{aligned} \end{aligned}$$

and the matrices \(P_{0},P_{1},\ldots,P_{n}\) are recursively defined by \(P_{0}=I\) and

$$\begin{aligned} \begin{aligned} P_{k+1}=(M-\lambda _{k+1}I)P_{k}\quad \textit{for }0\leq k\leq n-1. \end{aligned} \end{aligned}$$
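A numerical sketch of the algorithm (ours, on \(\mathbb{T}=h\mathbb{Z}\), where the exact solution of the bidiagonal r-system is one Cayley step per grid point); the helper names are our own:

```python
import numpy as np

def cayley_step(A, h):
    I = np.eye(A.shape[0])
    return np.linalg.solve(I - h * A / 2, I + h * A / 2)

def putzer_cayley(M, x, h):
    """E_M(x, 0) on T = hZ via the Putzer representation: on hZ the exact
    solution of the bidiagonal r-system is one Cayley step per grid point."""
    n = M.shape[0]
    lam = np.linalg.eigvals(M)
    P = [np.eye(n)]                      # P_0 = I, P_{k+1} = (M - lam_{k+1} I) P_k
    for k in range(n - 1):
        P.append((M - lam[k] * np.eye(n)) @ P[-1])
    T = np.diag(lam) + np.diag(np.ones(n - 1), -1)   # bidiagonal coefficient matrix
    r = np.zeros(n, dtype=complex)
    r[0] = 1.0
    S = cayley_step(T, h)
    for _ in range(int(round(x / h))):
        r = S @ r
    return sum(r[i] * P[i] for i in range(n))

M = np.array([[2.0, 1.0], [0.0, 3.0]])
h, x = 0.1, 1.0
E_putzer = putzer_cayley(M, x, h)

# Direct Cayley iteration of the full matrix system gives the same E_M on hZ.
E_direct = np.eye(2)
for _ in range(int(round(x / h))):
    E_direct = cayley_step(M, h) @ E_direct
assert np.allclose(E_putzer.real, E_direct)
```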

Example 3.7

Let us consider the following dynamic system:

$$\begin{aligned} a^{\Delta} ( x ) =M \bigl\langle a ( x ) \bigr\rangle ,\quad \text{where } M= \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} \text{ such that } b>a>0, \end{aligned}$$

on a time scale \(\mathbb{T}_{+}\) with graininess function μ. We find the matrix exponential \(E_{M}(x,x_{0})\). The matrix M is regressive for \(\mu \neq \frac{2}{a},\frac{2}{b}\); indeed,

$$\begin{aligned} \det \biggl( I+\frac{\mu}{2}M \biggr) =1+\frac{1}{2}a\mu +\frac{1}{2}b\mu +\frac{1}{4}ab\mu ^{2}=\frac{1}{4}(a\mu +2) (b\mu +2)\neq 0, \end{aligned}$$

since by definition \(\mu \geq 0\), and

$$\begin{aligned} \det \biggl( I-\frac{\mu}{2}M \biggr) =1-\frac{1}{2}a\mu -\frac{1}{2}b\mu +\frac{1}{4}ab\mu ^{2}=\frac{1}{4}(a\mu -2) (b\mu -2)\neq 0 \end{aligned}$$

for \(\mu \neq \frac{2}{a},\frac{2}{b}\).

Equation (9) is equivalently written as

$$\begin{aligned} a^{\Delta} ( x ) = \biggl( I-\frac{\mu}{2}M \biggr) ^{-1}Ma ( x ). \end{aligned}$$

It is easy to see that

$$\begin{aligned} E_{M}(x,x_{0})= \begin{pmatrix} e_{\frac{2a}{2-a\mu}} ( x,x_{0} ) & 0 \\ 0 & e_{\frac{2b}{2-b\mu}} ( x,x_{0} ) \end{pmatrix} \end{aligned}$$

and

$$\begin{aligned} a(x)= \begin{pmatrix} e_{\frac{2a}{2-a\mu}} ( x,x_{0} ) & 0 \\ 0 & e_{\frac{2b}{2-b\mu}} ( x,x_{0} ) \end{pmatrix} a_{0}. \end{aligned}$$

4 Impulsive exponential matrix

In this section, we consider the following impulsive dynamic system (IDS):

$$\begin{aligned} \textstyle\begin{cases} a^{\Delta} ( x ) =M(x) \langle a(x) \rangle,& x\in \mathbb{T}_{+},\text{ }x\neq x_{k}, \\ a(x_{k}^{+}) = ( I+B_{k} ) a(x_{k}),&k=1,2,\ldots. \end{cases}\displaystyle \end{aligned}$$


and the corresponding IVP:

$$\begin{aligned} \textstyle\begin{cases} a^{\Delta} ( x ) =M(x) \langle a(x) \rangle,\quad x\in \mathbb{T}_{(\tau )},&x\neq x_{k}, \\ a(x_{k}^{+}) = ( I+B_{k} ) a(x_{k}),&k=1,2,\ldots, \\ a(\tau ^{+}) =\eta,&\tau \geq 0, \end{cases}\displaystyle \end{aligned}$$

with the following assumption:

\((H)\): Assume \(B_{k}\in M_{n}(\mathbb{R})\), \(k=1,2,\ldots \) , \(M\in \mathcal{R}^{C}(\mathbb{T}_{+},M_{n}(\mathbb{R}))\), \(0=x_{0}< x_{1}< x_{2}<\cdots <x_{k}<\cdots \) , with \(\lim_{k\rightarrow \infty}x_{k}=\infty \), and \(x_{k}^{\sigma}=x_{k}\).

For the solution of (11), let us define:

$$\begin{aligned} \begin{gathered} \Omega:= \bigl\{ a:\mathbb{T}_{+}\rightarrow \mathbb{R}^{n}: a\in C\bigl((x_{k},x_{k+1}),\mathbb{R}^{n}\bigr), k=0,1,2,\ldots, \text{ such that } a\bigl(x_{k}^{+}\bigr) \text{ and } a\bigl(x_{k}^{-}\bigr) \text{ exist with } a\bigl(x_{k}^{-}\bigr)=a(x_{k}), k=1,2,\ldots \bigr\} , \end{gathered} \end{aligned}$$


$$\begin{aligned} \begin{gathered} \Omega ^{1}:= \bigl\{ a\in \Omega;\text{ }a\in C^{1}\bigl((x_{k},x_{k+1}), \text{ } \mathbb{R}^{n}\bigr),\text{ }k=0,1,2,\ldots \bigr\} . \end{gathered} \end{aligned}$$

A function \(a\in \Omega ^{1}\) is called a solution of (10) if it satisfies \(a^{\Delta}(x)=M(x) \langle a(x) \rangle \) everywhere on \(\mathbb{T}_{(\tau )}\setminus \{\tau,x_{k(\tau )},x_{k(\tau )+1}, \ldots \}\), where \(k(\tau ):=\min \{ k=0,1,2,\ldots:\tau < x_{k} \} \), and satisfies the initial condition \(a(\tau ^{+})=\eta \) together with the impulsive conditions \(a(x_{i}^{+})=a(x_{i})+B_{i}a(x_{i})\) for each \(i=k(\tau ),k(\tau )+1,\ldots \).
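A solution of this kind can be simulated numerically. The sketch below (ours, on the discrete grid \(h\mathbb{Z}\) with impulse points placed on the grid, a discrete analogue rather than the exact setting of \((H)\)) alternates Cayley steps with the jumps \(a(x_{k}^{+})=(I+B_{k})a(x_{k})\):

```python
import numpy as np

def impulsive_solution(M, Bs, imp_idx, a0, steps, h):
    """Discrete-grid analogue of (10): between grid points the state is
    advanced by the Cayley step for a^Delta = M <a>; at the impulse
    indices the jump a(x_k^+) = (I + B_k) a(x_k) is applied."""
    I = np.eye(M.shape[0])
    step = np.linalg.solve(I - h * M / 2, I + h * M / 2)
    jumps = dict(zip(imp_idx, Bs))
    a = np.array(a0, dtype=float)
    for k in range(1, steps + 1):
        a = step @ a                      # Cayley step to grid point k*h
        if k in jumps:
            a = (I + jumps[k]) @ a        # impulsive jump at x_k = k*h
    return a

M = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = 0.5 * np.eye(2)
h = 0.01
a = impulsive_solution(M, [B, B], [40, 80], [1.0, 1.0], steps=100, h=h)

# The diagonal system decouples: each mode is a product of 100 Cayley
# factors, multiplied by 1.5 at each of the two impulses.
c = lambda lam: ((1 + h * lam / 2) / (1 - h * lam / 2)) ** 100
assert np.allclose(a, [1.5 ** 2 * c(-1.0), 1.5 ** 2 * c(-2.0)])
```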

Theorem 4.1

If \((H)\) holds, then any solution of the IVP (10) satisfies

$$\begin{aligned} \begin{gathered} a(x)=a(\tau )+{ \int _{\tau}^{x}} M(s) \bigl\langle a(s) \bigr\rangle \Delta s+{ \sum_{\tau < x_{j}< x}} B_{j}a(x_{j}),\quad x \in \mathbb{T}_{+}, \end{gathered} \end{aligned}$$

and vice versa.


By a similar discussion as in the proof of [26, Theorem 3.1], we can obtain this result. □

By using [26, Lemma 3.1] and [27, Lemma 3.1], we can obtain the following impulsive dynamic inequality:

Lemma 4.2

Let \(\tau \in \mathbb{T}_{+}\), \(a,b\in \mathcal{R}(\mathbb{T}_{+},\mathbb{R})\), \(p\in \mathcal{R}_{+}^{C}(\mathbb{T}_{+},\mathbb{R})\), and \(c_{i},d_{i}\in \mathbb{R}_{+}\), \(i=1,2,\ldots \) . If

$$\begin{aligned} \begin{aligned} \textstyle\begin{cases} a^{\Delta}(x)\leq p(x) \langle a(x) \rangle +b(x), & x\in \mathbb{T}_{(\tau )},\ x\neq x_{i}, \\ a(x_{i}^{+})\leq c_{i}a(x_{i})+d_{i},& i=1,2,\ldots, \end{cases}\displaystyle \end{aligned} \end{aligned}$$


then

$$\begin{aligned} \begin{aligned} a(x)\leq{}& a(\tau ){ \prod _{\tau < x_{i}< x}} c_{i}E_{p}(x,\tau )+{ \sum_{\tau < x_{i}< x}} \biggl( { \prod _{x_{i}< x_{j}< x}} c_{j}E_{p}(x,x_{i}) \biggr) \,d_{i} \\ &{}+{ \int _{\tau}^{x}} { \prod _{s< x_{i}< x}} c_{i} \bigl\langle E_{-p}(s,x) \bigr\rangle b(s) \Delta s. \end{aligned} \end{aligned}$$

Lemma 4.3

Let \(\tau \in \mathbb{T}_{+}\), \(a,b\in \mathcal{R}^{C}(\mathbb{T}_{+},\mathbb{R})\), \(p\in \mathcal{R}_{+}^{C}(\mathbb{T}_{+},\mathbb{R})\), and \(c,b_{i}\in \mathbb{R}_{+}\), \(i=1,2,3,\ldots \) . If

$$\begin{aligned} \begin{aligned} a(x)\leq c+{ \int _{\tau}^{x}} p(s) \bigl\langle a(s) \bigr\rangle \Delta s+{ \sum_{\tau < x_{i}< x}} b_{i}a(x_{i}),\quad x \in \mathbb{T}_{(\tau )}\end{aligned} \end{aligned}$$


then

$$\begin{aligned} a(x)\leq c{ \prod_{\tau < x_{i}< x}} (1+b_{i})E_{p}(x,\tau ),\quad x\geq \tau. \end{aligned}$$



Set

$$\begin{aligned} \begin{gathered} v(x):=c+{ \int _{\tau}^{x}} p(s) \bigl\langle a(s) \bigr\rangle \Delta s+{ \sum_{\tau < x_{i}< x}} b_{i}a(x_{i}), \quad x \geq \tau. \end{gathered} \end{aligned}$$


Then

$$\begin{aligned} \textstyle\begin{cases} v^{\Delta}(x)=p(x) \langle a(x) \rangle,\quad x\neq x_{i}, \text{ }v(\tau )=c, \\ v(x_{i}^{+})=v(x_{i})+b_{i}a(x_{i}),\quad i=1,2,3,\ldots. \end{cases}\displaystyle \end{aligned}$$

Since \(\langle a(x) \rangle \leq \langle v(x) \rangle \), \(x\geq \tau \), we then have

$$\begin{aligned} \textstyle\begin{cases} v^{\Delta}(x)\leq p(x) \langle v(x) \rangle,\quad x\neq x_{i}, \text{ }v(\tau )=c, \\ v(x_{i}^{+})=v(x_{i})+b_{i}v(x_{i}),\quad i=1,2,3,\ldots. \end{cases}\displaystyle \end{aligned}$$

Lemma 4.2 yields

$$\begin{aligned} \begin{gathered} v(x)\leq c{ \prod_{\tau < x_{k}< x}} ( 1+b_{k} ) E_{p}(x,\tau ), \quad x\geq \tau, \end{gathered} \end{aligned}$$

which implies (14). □

Theorem 4.4

If \((H)\) holds, then any solution of (11) satisfies the following estimate

$$\begin{aligned} \begin{gathered} \bigl\Vert a(x) \bigr\Vert \leq \bigl\Vert a( \tau ) \bigr\Vert { \prod_{\tau < x_{k}\leq x}} \bigl( 1+ \Vert B_{k} \Vert \bigr) \exp \biggl( { \int _{\tau}^{x}} \frac{ \Vert M(s) \Vert }{1-\frac{\mu (s) \Vert M(s) \Vert }{2}} \Delta s \biggr),\quad \mu (s)>0 \end{gathered} \end{aligned}$$

for \(\tau,x\in \mathbb{T}_{+}\) with \(x\geq \tau \).


From (12), we obtain that

$$\begin{aligned} \bigl\Vert a(x) \bigr\Vert \leq{}& \bigl\Vert a(\tau ) \bigr\Vert +{ \int _{\tau}^{x}} \bigl\Vert M(s) \bigr\Vert \bigl\langle \bigl\Vert a(s) \bigr\Vert \bigr\rangle \Delta s \\ &{}+{ \sum_{\tau < x_{j}\leq x}} \Vert B_{j} \Vert \bigl\Vert a ( x_{j} ) \bigr\Vert , \quad x\geq \tau. \end{aligned}$$

Lemma 4.3 yields

$$\begin{aligned} \begin{gathered} \bigl\Vert a(x) \bigr\Vert \leq \bigl\Vert a(\tau ) \bigr\Vert { \prod_{\tau < x_{k}\leq x}} \bigl( 1+ \Vert B_{k} \Vert \bigr) E_{ \Vert M(\cdot ) \Vert }(x,\tau ), \quad x\geq \tau. \end{gathered} \end{aligned}$$

Since for any \(a\geq 0\),

$$\begin{aligned} \lim_{u\rightarrow \mu (s)} \frac{\ln ( \frac{1+\frac{au}{2}}{1-\frac{au}{2}} ) }{u}=\textstyle\begin{cases} a & \text{if }\mu (s)=0, \\ \frac{\ln ( \frac{1+\frac{a\mu (s)}{2}}{1-\frac{a\mu (s)}{2}} ) }{\mu (s)}\leq \frac{a}{1-\frac{a\mu (s)}{2}} & \text{if }\mu (s)>0, \end{cases}\displaystyle \end{aligned}$$


we have

$$\begin{aligned} E_{ \Vert M ( \cdot ) \Vert }(x,\tau )&=\exp \biggl( { \int _{\tau}^{x}} \lim_{u\rightarrow \mu (s)} \frac{\ln ( \frac{1+\frac{ \Vert M ( s ) \Vert \mu (s)}{2}}{1-\frac{ \Vert M ( s ) \Vert \mu (s)}{2}} ) }{\mu (s)}\Delta s \biggr) \\ &\leq \exp \biggl( { \int _{\tau}^{x}} \frac{ \Vert M(s) \Vert }{1-\frac{\mu (s) \Vert M(s) \Vert }{2}} \Delta s \biggr), \quad x\geq \tau. \end{aligned}$$

Thus, we obtain (15). □
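The estimate can be probed numerically. A scalar sketch (ours, on \(\mathbb{T}=h\mathbb{Z}\) with one impulse), using the inequality \(\frac{1}{\mu}\ln \frac{1+a\mu /2}{1-a\mu /2}\leq \frac{a}{1-a\mu /2}\) from the proof:

```python
import numpy as np

# Scalar check of estimate (15) on T = hZ with a single impulse: the
# exact Cayley evolution never exceeds the exponential bound, because
# (1/mu) * ln((1 + m*mu/2)/(1 - m*mu/2)) <= m / (1 - m*mu/2).
h, m, B = 0.1, 0.7, 0.3
a, bound = 1.0, 1.0
for k in range(30):
    a *= (1 + h * m / 2) / (1 - h * m / 2)       # one Cayley step
    bound *= np.exp(h * m / (1 - h * m / 2))     # bound for the same step
    if k == 12:                                  # one impulse
        a *= 1 + B
        bound *= 1 + abs(B)
    assert abs(a) <= bound + 1e-12
```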

Remark 4.5

Estimate (15) is μ-dependent. However, in the case of Hilger’s exponential, the corresponding estimate is independent of μ; see, for example, [22, Theorem 3.2].

Let us define the generalized exponential matrix \(G_{M}(x,y)\), \(0\leq y\leq x\), for the impulsive effects \(\{ B_{i},x_{i} \} _{i=1}^{\infty}\):

$$\begin{aligned} G_{M}(x,y)=\textstyle\begin{cases} E_{M}(x,x_{i}^{+}) [ { \prod_{x_{j}< x_{k}\leq x_{i}}} (I+B_{k})E_{M}(x_{k},x_{k-1}^{+}) ] (I+B_{j})E_{M}(x_{j},y), \\ \quad\text{for }x_{j-1}\leq y< x_{j}< \cdots < x_{i}< x< x_{i+1}; \\ E_{M}(x,x_{i}^{+})(I+B_{i})E_{M}(x_{i},y),\quad\text{for }x_{i-1}\leq y \leq x_{i}< x< x_{i+1}; \\ E_{M}(x,y),\quad \text{for }x_{i-1}\leq y\leq x\leq x_{i}. \end{cases}\displaystyle \end{aligned}$$
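A sketch of this piecewise construction (ours, on the grid \(h\mathbb{Z}\) with impulses at grid indices; the convention below attaches the jump at \(x_{k}\) to intervals with \(y<x_{k}\leq x\), matching the definition up to boundary cases). It also checks the cocycle property \(G_{M}(t,r)=G_{M}(t,y)G_{M}(y,r)\) stated in Theorem 4.7 below:

```python
import numpy as np

def grid_E(M, i0, i1, h):
    """E_M between grid indices i0 <= i1 on T = hZ (product of Cayley steps)."""
    I = np.eye(M.shape[0])
    step = np.linalg.solve(I - h * M / 2, I + h * M / 2)
    return np.linalg.matrix_power(step, i1 - i0)

def grid_G(M, Bs, imp_idx, j0, j1, h):
    """Impulsive exponential G_M(j1*h, j0*h): alternate smooth evolution
    E_M with the jump factors (I + B_k) for impulse indices in (j0, j1]."""
    I = np.eye(M.shape[0])
    G, last = I, j0
    for k, B in sorted(zip(imp_idx, Bs), key=lambda kb: kb[0]):
        if j0 < k <= j1:
            G = (I + B) @ grid_E(M, last, k, h) @ G
            last = k
    return grid_E(M, last, j1, h) @ G

M = np.array([[0.0, 1.0], [-2.0, -1.0]])
Bs = [0.1 * np.eye(2), np.array([[0.0, 0.2], [0.0, 0.0]])]
h, imps = 0.05, [10, 25]

G_t_r = grid_G(M, Bs, imps, 0, 40, h)
G_t_y = grid_G(M, Bs, imps, 20, 40, h)
G_y_r = grid_G(M, Bs, imps, 0, 20, h)

# Cocycle property: G_M(t, r) = G_M(t, y) G_M(y, r)
assert np.allclose(G_t_r, G_t_y @ G_y_r)
```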

Remark 4.6

By definition, we can obtain the following equality:

$$\begin{aligned} G_{M}(x,y)={}&E_{M}\bigl(x,x_{k}^{+} \bigr) (I+B_{k})E_{M}\bigl(x_{k},x_{k-1}^{+} \bigr) \\ &{}\times\biggl[ { \prod_{x_{i}< x_{j}< x_{k}}} (I+B_{j})E_{M} \bigl(x_{j},x_{j-1}^{+}\bigr) \biggr] ( I+B_{i} ) E_{M}(x_{i},y). \end{aligned}$$

It follows that

$$\begin{aligned} G_{M}(x,y)=E_{M}\bigl(x,x_{k}^{+} \bigr) (I+B_{k})G_{M}(x_{k},y) \end{aligned}$$

for \(x_{i-1}\leq y< x_{i}<\cdots <x_{k}\leq x<x_{k+1}\).

For the following result, let us set \(X_{M}(t):=G_{M}(t,0)\), \(t\in \mathbb{T}_{+}\).

Theorem 4.7

If \((H)\) holds, then \(G_{M}(t,y)\) has the following properties:

  1. \(G_{M}(t,y)=X_{M}(t)X_{M}^{-1}(y)\), \(0\leq y\leq t\);

  2. \(G_{M}(t,t)=I\), \(t\geq 0\);

  3. If \((I+B_{i})^{-1}\) exists for each i, then \(G_{M}(t,y)=G_{M}^{-1}(y,t)\), \(0\leq y\leq t\);

  4. \(G_{M}(\sigma (x),s)= ( I-\frac{\mu}{2}M ) ^{-1} ( I+ \frac{\mu}{2}M ) G_{M}(x,s)\), \(0\leq s\leq x\);

  5. \(G_{M}(t,y)G_{M}(y,r)=G_{M}(t,r)\), \(0\leq r\leq y\leq t\).


  1. Set

    $$\begin{aligned} \begin{gathered} Y(t):=X_{M}(t)X_{M}^{-1}(y),\quad 0\leq y\leq t. \end{gathered} \end{aligned}$$

    Then we have

    $$\begin{aligned} Y^{\Delta}(t)&=X_{M}^{\Delta}(t)X_{M}^{-1}(y) \\ &=M(t) \bigl\langle X_{M}(t) \bigr\rangle X_{M}^{-1}(y) \\ &=M(t) \bigl\langle X_{M}(t)X_{M}^{-1}(y) \bigr\rangle \\ &=M(t) \bigl\langle Y(t) \bigr\rangle , \quad t\neq x_{k}. \end{aligned}$$


    $$\begin{aligned} Y(y)=X_{M}(y)X_{M}^{-1}(y)=I, \end{aligned}$$


    $$\begin{aligned} Y \bigl( x_{k}^{+} \bigr) -Y(x_{k}) & =X_{M}\bigl(x_{k}^{+} \bigr)X_{M}^{-1}(y)-X_{M}(x_{k})X_{M}^{-1}(y) \\ & =\bigl[X_{M}\bigl(x_{k}^{+} \bigr)-X_{M}(x_{k})\bigr]X_{M}^{-1}(y) \\ & =B_{k}X_{M}(x_{k})X_{M}^{-1}(y) \\ & =B_{k}Y(x_{k})\quad\text{for each }x_{k}\geq y. \end{aligned}$$

    Therefore, \(Y(t)=X_{M}(t)X_{M}^{-1}(y)\) solves the IVP (19), which has exactly one solution. Therefore,

    $$\begin{aligned} \begin{gathered} G_{M}(t,y)=X_{M}(t)X_{M}^{-1}(y), \quad 0\leq y\leq t. \end{gathered} \end{aligned}$$
  2. From part 1: \(G_{M}(t,t)=X_{M}(t)X_{M}^{-1}(t)=I\).

  3. By using part 1, we have \(G_{M}(t,y)=X_{M}(t)X_{M}^{-1}(y)= ( X_{M}(y)X_{M}^{-1}(t) ) ^{-1}=G_{M}^{-1}(y,t)\).

  4. The well-known relation \(f^{\sigma}=f+\mu f^{\Delta}\) implies that

    $$\begin{aligned} \begin{aligned} G_{M}\bigl(\sigma (t),y\bigr) & =G_{M}(t,y)+ \mu (t)G_{M}^{\Delta}(t,y) \\ & =G_{M}(t,y)+\mu (t)M(t) \bigl\langle G_{M}(t,y) \bigr\rangle . \end{aligned} \end{aligned}$$

    It follows that

    $$\begin{aligned} \begin{aligned} G_{M}\bigl(\sigma (t),y\bigr) & =G_{M}(t,y)+ \mu (t)M(t) \bigl\langle G_{M}(t,y) \bigr\rangle \\ & =G_{M}(t,y)+\mu (t)M(t) \biggl( \frac{G_{M}(t,y)+G_{M}(\sigma (t),y)}{2} \biggr) \\ & =G_{M}(t,y)+\frac{\mu (t)M(t)G_{M}(t,y)}{2}+ \frac{\mu (t)M(t)G_{M}(\sigma (t),y)}{2} \\ & =\frac{\mu (t)M(t)}{2}G_{M}\bigl(\sigma (t),y\bigr)+ \biggl( I+ \frac{\mu}{2}M \biggr) G_{M}(t,y). \end{aligned} \end{aligned}$$

    Therefore, we have

    $$\begin{aligned} \biggl( I-\frac{\mu}{2}M \biggr) G_{M}\bigl(\sigma (t),y\bigr)= \biggl( I+ \frac{\mu}{2}M \biggr) G_{M}(t,y). \end{aligned}$$

    Since \(M\in \mathcal{R}^{C}\mathbbm{(}\mathbb{T}_{+},M_{n}(\mathbb{R}))\), it implies that

    $$\begin{aligned} G_{M}\bigl(\sigma (t),y\bigr)= \biggl( I-\frac{\mu}{2}M \biggr) ^{-1} \biggl( I+ \frac{\mu }{2}M \biggr) G_{M}(t,y). \end{aligned}$$
  5. Now let \(Y(t)=G_{M}(t,y)G_{M}(y,r^{+})\), \(0\leq r\leq y\leq t\). It follows that:

    $$\begin{aligned} \begin{aligned} Y^{\Delta}(t) & =G_{M}^{\Delta}(t,y)G_{M} \bigl(y,r^{+}\bigr) \\ & =M(t) \bigl\langle G_{M}(t,y) \bigr\rangle G_{M} \bigl(y,r^{+}\bigr) \\ & =M(t) \bigl\langle G_{M}(t,y)G_{M}\bigl(y,r^{+} \bigr) \bigr\rangle \\ & =M(t) \bigl\langle Y(t) \bigr\rangle \quad\text{for }t\neq x_{k}. \end{aligned} \end{aligned}$$


    $$\begin{aligned} \begin{gathered} Y\bigl(r^{+}\bigr)=G_{M} \bigl(r^{+},y\bigr)G_{M}\bigl(y,r^{+} \bigr)=G_{M}\bigl(r^{+},y\bigr)G_{M}^{-1} \bigl(r^{+},y\bigr)=I. \end{gathered} \end{aligned}$$


    $$\begin{aligned} \begin{gathered} Y\bigl(x_{k}^{+}\bigr)=G_{M} \bigl(x_{k}^{+},y\bigr)G_{M}\bigl(y,r^{+} \bigr)=(I+B_{k})Y(x_{k}) \end{gathered} \end{aligned}$$

    for all \(x_{k}\geq y\). The uniqueness result implies that \(G_{M}(t,r)=G_{M}(t,y)G_{M}(y,r)\), \(0\leq r\leq y\leq t\).  □

Theorem 4.8

If \((H)\) holds, then

$$\begin{aligned} \frac{\partial}{\Delta s}G_{M}(x,s)=-G_{M}\bigl(x,\sigma (s) \bigr)M(s) \bigl\langle G_{M}(s,x) \bigr\rangle G_{M}^{-1}(s,x),\quad s\neq x_{k}. \end{aligned}$$


By [4, Theorem A.2], we have

$$\begin{aligned} \frac{\partial}{\Delta s}G_{M}(x,s)&=\frac{\partial}{\Delta s}G_{M}^{-1}(s,x) \\ &=-G_{M}^{-1}\bigl(\sigma (s),x\bigr)\frac{\partial}{\Delta s}G_{M}(s,x)G_{M}^{-1}(s,x) \\ &=-G_{M}^{-1}\bigl(\sigma (s),x\bigr)M(s) \bigl\langle G_{M}(s,x) \bigr\rangle G_{M}^{-1}(s,x). \end{aligned}$$

Therefore, \(\frac{\partial}{\Delta s}G_{M}(x,s)=-G_{M}(x,\sigma (s))M(s) \langle G_{M}(s,x) \rangle G_{M}^{-1}(s,x)\) for all \(s\in \mathbb{T}_{+}\), \(s\neq x_{k}\), \(k=1,2,\ldots \) . □

Let \(l^{\infty}(\mathbb{R}^{n}):=\{c:= \{ c_{k} \} _{k=1}^{ \infty },c_{k}\in \mathbb{R}^{n}, k=1,2,\ldots,\text{ such that }\sup_{k\geq 1} \Vert c_{k} \Vert <\infty \}\). Then \(l^{\infty}(\mathbb{R}^{n})\) is a complete normed space with the norm \(\Vert c \Vert :=\sup_{k\geq 1} \Vert c_{k} \Vert \).

Consider the following nonhomogeneous IVP

$$\begin{aligned} \textstyle\begin{cases} a^{\Delta} ( x ) =M(x) \langle a(x) \rangle + ( I-\frac{\mu ( x ) }{2}M ( x ) ) f(x), \quad x\in \mathbb{T}_{(\tau )},\ x\neq x_{k}; \\ a(x_{k}^{+}) =a(x_{k})+B_{k}a(x_{k})+c_{k}, \quad k=1,2,3,\ldots; \\ a(\tau ^{+}) =\eta, \quad \tau \geq 0, \end{cases}\displaystyle \end{aligned}$$

with a given vector-valued function f.

Theorem 4.9

Assume that \((H)\) holds and \(c:= \{ c_{k} \} _{k=1}^{\infty}\in l^{\infty}(\mathbb{R}^{n})\). Then, for a regressive vector-valued function f, the IVP (17) has a unique solution:

$$\begin{aligned} \begin{aligned} a(x)={}&G_{M}(x,\tau )\eta +{ \int _{\tau}^{x}} G_{M}\bigl(x,\sigma (s) \bigr)f(s) \Delta s \\ &{}+{ \sum_{\tau < x_{j}< x}} G_{M} \bigl(x,x_{j}^{+}\bigr)c_{j}, \quad x\geq \tau. \end{aligned} \end{aligned}$$


Let \((\tau,\eta )\in \mathbb{T}_{+}\times \mathbb{R}^{n}\). Then there exists \(i\in \{ 1,2,\ldots \} \) such that \(\tau \in [ x_{i-1},x_{i} ) \). By Theorem 3.5, the unique solution of (17) on \([ \tau,x_{i} ) \) is

$$\begin{aligned} \begin{aligned} a(x)&=E_{M}(x,\tau )\eta +{ \int _{\tau}^{x}} E_{M}\bigl(x,\sigma (s) \bigr)f(s) \Delta s \\ &=G_{M}(x,\tau )\eta +{ \int _{\tau}^{x}} G_{M}\bigl(x,\sigma (s) \bigr)f(s) \Delta s, \quad x\in [ \tau,x_{i} ). \end{aligned} \end{aligned}$$

For \(x\in [ x_{i},x_{i+1} ) \) the Cauchy problem

$$\begin{aligned} \textstyle\begin{cases} a^{\Delta}=M(x) \langle a(x) \rangle + ( I- \frac{\mu}{2}M ) f(x), \quad x\in (x_{i},x_{i+1}), \\ a(x_{i}^{+})=a(x_{i})+B_{i}a(x_{i})+c_{i}, \end{cases}\displaystyle \end{aligned}$$

has unique solution

$$\begin{aligned} \begin{aligned} a(x)=E_{M}\bigl(x,x_{i}^{+} \bigr)a\bigl(x_{i}^{+}\bigr)+{ \int _{x_{i}}^{x}} E_{M}\bigl(x,\sigma (s) \bigr)f(s) \Delta s\text{,}\quad x\in [ x_{i},x_{i+1} ). \end{aligned} \end{aligned}$$

It follows that

$$\begin{aligned} a(x)={}&E_{M}\bigl(x,x_{i}^{+} \bigr) \bigl[ (I+B_{i})a(x_{i})+c_{i} \bigr] +{ \int _{x_{i}}^{x}} E_{M}\bigl(x,\sigma (s) \bigr)f(s) \Delta s \\ ={}&E_{M}\bigl(x,x_{i}^{+} \bigr) (I+B_{i}) \biggl[ E_{M}(x_{i},\tau )\eta +{ \int _{\tau}^{x_{i}}} E_{M}\bigl(x_{i},\sigma (s)\bigr)f(s) \Delta s \biggr] +E_{M}\bigl(x,x_{i}^{+}\bigr)c_{i} \\ &{}+{ \int _{x_{i}}^{x}} E_{M}\bigl(x,\sigma (s) \bigr)f(s) \Delta s \\ ={}&E_{M}\bigl(x,x_{i}^{+} \bigr) (I+B_{i})E_{M}(x_{i},\tau )\eta +{ \int _{\tau}^{x_{i}}} E_{M} \bigl(x,x_{i}^{+}\bigr) (I+B_{i})E_{M} \bigl(x_{i}, \sigma (s)\bigr)f(s)\Delta s \\ &{}+E_{M}\bigl(x,x_{i}^{+}\bigr)c_{i} +{ \int _{x_{i}}^{x}} E_{M}\bigl(x,\sigma (s) \bigr)f(s) \Delta s. \end{aligned}$$

Using (16), we get that

$$\begin{aligned} a(x)={}&G_{M}(x,\tau )\eta +{ \int _{\tau}^{x_{i}}} G_{M}\bigl(x,\sigma (s) \bigr)f(s) \Delta s \\ &{}+{ \int _{x_{i}}^{x}} G_{M}\bigl(x,\sigma (s) \bigr)f(s) \Delta s+G_{M}\bigl(x,x_{i}^{+} \bigr)c_{i}, \end{aligned}$$

and so

$$\begin{aligned} a(x)={}&G_{M}(x,\tau )\eta +{ \int _{\tau}^{x}} G_{M}\bigl(x,\sigma (s) \bigr)f(s) \Delta s \\ &{}+G_{M}\bigl(x,x_{i}^{+}\bigr)c_{i},\quad x\in [ x_{i},x_{i+1} ). \end{aligned}$$

Assume that, for \(k>i+1\), (17) has a solution on the interval \([ x_{k-1},x_{k} ) \):

$$\begin{aligned} a(x)={}&G_{M}(x,\tau )\eta +{ \int _{\tau}^{x}} G_{M}\bigl(x,\sigma (s) \bigr)f(s) \Delta s \\ &{}+{ \sum_{\tau < x_{j}< x_{k}}} G_{M} \bigl(x,x_{j}^{+}\bigr)c_{j}, \quad x\in [ x_{k-1},x_{k} ). \end{aligned}$$


$$\begin{aligned} \begin{aligned} a(x)=E_{M}\bigl(x,x_{k}^{+} \bigr)a\bigl(x_{k}^{+}\bigr)+{ \int _{x_{k}}^{x}} E_{M}\bigl(x,\sigma (s)\bigr)f(s) \Delta s,\quad x\in [ x_{k},x_{k+1} ), \end{aligned} \end{aligned}$$

is the solution of the following IVP:

$$\begin{aligned} \textstyle\begin{cases} a^{\Delta}=M(x) \langle a(x) \rangle + ( I- \frac{\mu}{2}M ) f(x), \quad x\in ( x_{k},x_{k+1} ) \\ a(x_{k}^{+})=a(x_{k})+B_{k}a(x_{k})+c_{k}. \end{cases}\displaystyle \end{aligned}$$

More explicitly, we have

$$\begin{aligned} a(x) ={}&E_{M}\bigl(x,x_{k}^{+}\bigr) \bigl[ (I+B_{k})a(x_{k})+c_{k} \bigr] +{ \int _{x_{k}}^{x}} E_{M}\bigl(x,\sigma (s) \bigr)f(s) \Delta s \\ ={}&E_{M}\bigl(x,x_{k}^{+} \bigr) (I+B_{k}) \biggl[ G_{M}(x_{k},\tau )\eta +{ \int _{\tau}^{x_{k}}} G_{M} \bigl(x_{k},\sigma (s)\bigr)f(s) \Delta s +{ \sum_{\tau < x_{j}< x_{k}}} G_{M} \bigl(x_{k},x_{j}^{+}\bigr)c_{j} \biggr] \\ &{}+E_{M} \bigl(x,x_{k}^{+}\bigr)c_{k} +{ \int _{x_{k}}^{x}} E_{M}\bigl(x,\sigma (s) \bigr)f(s) \Delta s. \end{aligned}$$

Hence, using Remark 4.6, we get

$$\begin{aligned} a(x)={}&G_{M}(x,\tau )\eta +{ \int _{\tau}^{x_{k}}} G_{M}\bigl(x,\sigma (s) \bigr)f(s) \Delta s \\ &{}+{ \int _{x_{k}}^{x}} G_{M}\bigl(x,\sigma (s) \bigr)f(s) \Delta s+{ \sum_{\tau < x_{j}< x_{k}}} G_{M}\bigl(x,x_{j}^{+}\bigr)c_{j}+G_{M}\bigl(x,x_{k}^{+}\bigr)c_{k}, \end{aligned}$$

and so

$$\begin{aligned} a(x)=G_{M}(x,\tau )\eta +{ \int _{\tau}^{x}} G_{M}\bigl(x,\sigma (s) \bigr)f(s) \Delta s+{ \sum_{\tau < x_{j}< x}} G_{M}\bigl(x,x_{j}^{+}\bigr)c_{j},\quad x\in [ x_{k},x_{k+1} ). \end{aligned}$$


For \(c_{k}=0\) for each k,

$$\begin{aligned} \begin{gathered} a(x)=G_{M}(x,\tau )\eta +{ \int _{\tau}^{x}} G_{M}\bigl(x,\sigma (s) \bigr)f(s) \Delta s, \quad x\in \mathbb{T}_{(\tau )}\end{gathered} \end{aligned}$$

is the unique solution of the corresponding IVP.

Corollary 4.10

If \((H)\) holds, then the IVP (11) has exactly one solution, given by

$$\begin{aligned} \begin{aligned} a(x)=G_{M}(x,\tau )\eta,\quad x\geq \tau, \textit{ for each }(\tau, \eta )\in \mathbb{T}_{+}\times \mathbb{R}^{n}. \end{aligned} \end{aligned}$$
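The representation in Theorem 4.9 can be checked numerically on a uniform discrete time scale: solving \(a^{\Delta}=M \langle a \rangle +(I-\frac{\mu}{2}M)f\) for \(a^{\sigma}\) gives the one-step recursion \(a(t+h)=Pa(t)+hf(t)\) with \(P=(I-\frac{h}{2}M)^{-1}(I+\frac{h}{2}M)\). The Python sketch below uses arbitrarily chosen matrices, impulse points, and forcing (illustrative data, not from the text):

```python
import numpy as np

M = np.array([[-0.5, 1.0], [0.0, -0.8]])
h, n = 0.1, 25
I = np.eye(2)
P = np.linalg.solve(I - (h / 2) * M, I + (h / 2) * M)

B = {8: 0.1 * np.eye(2), 17: -0.2 * np.eye(2)}            # jump matrices B_k
c = {8: np.array([0.3, -0.1]), 17: np.array([0.0, 0.2])}  # jump vectors c_k
f = lambda k: np.array([np.sin(0.3 * k), 1.0])            # forcing on the grid
eta = np.array([1.0, -1.0])

# Direct stepping: a(t+h) = P a(t) + h f(t), plus jumps at impulse points.
a = eta.copy()
for k in range(n):
    a = P @ a + h * f(k)
    if k + 1 in B:
        a = (I + B[k + 1]) @ a + c[k + 1]

def G_plus(t, r):
    # G_M(t, r^+): the jump at r itself is not applied
    Y = np.eye(2)
    for k in range(r, t):
        Y = P @ Y
        if k + 1 in B:
            Y = (np.eye(2) + B[k + 1]) @ Y
    return Y

def G_at(t, r):
    # G_M(t, r) = G_M(t, r^+)(I + B_r) at impulse points
    Y = G_plus(t, r)
    if r in B:
        Y = Y @ (np.eye(2) + B[r])
    return Y

# Representation from Theorem 4.9.
a_formula = G_plus(n, 0) @ eta
a_formula = a_formula + h * sum(G_at(n, k + 1) @ f(k) for k in range(n))
a_formula = a_formula + sum(G_plus(n, j) @ c[j] for j in B)

assert np.allclose(a, a_formula)
```

Here the Δ-integral becomes the Riemann sum \(h\sum_{k}G_{M}(x,\sigma (s_{k}))f(s_{k})\), which is exact on a uniform grid.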

Corollary 4.11

If \((H)\) holds, then the generalized transition matrix \(G_{M}(x,y)\), \(0\leq y\leq x\), is the unique solution of the IVP:

$$\begin{aligned} \textstyle\begin{cases} Y^{\Delta} ( x ) =M(x) \langle Y(x) \rangle,\quad x\in \mathbb{T}_{+},\ x\neq x_{k}, \\ Y(x_{k}^{+}) =(I+B_{k})Y(x_{k}),\quad k=1,2,\ldots, \\ Y(y^{+}) =I, \quad y\geq 0. \end{cases}\displaystyle \end{aligned}$$

Moreover, the following properties hold:

  1.

    \(G_{M}(x_{i}^{+},s)=(I+B_{i})G_{M}(x_{i},s)\), \(x_{i}\geq s\), \(i=1,2,3,\ldots \) ;

  2.

    If \((I+B_{i})^{-1}\) exists for each i, then \(G_{M}(x,x_{i}^{+})=G_{M}(x,x_{i})(I+B_{i})^{-1}\), \(x_{i}\leq x\), \(i=1,2,3,\ldots \) ;

  3.

    \(G_{M}(x,x_{i}^{+})G_{M}(x_{i}^{+},y)=G_{M}(x,y)\), \(0\leq y\leq x_{i}\leq x\), \(i=1,2,3,\ldots \) .

Corollary 4.12

If \((H)\) holds, then we have the following estimate

$$\begin{aligned} \bigl\Vert G_{M}(x,\tau ) \bigr\Vert \leq { \prod_{\tau < x_{k}\leq x}} \bigl(1+ \Vert B_{k} \Vert \bigr)\exp \biggl( { \int _{\tau}^{x}} \frac{ \Vert M(s) \Vert }{1-\frac{\mu (s) \Vert M(s) \Vert }{2}} \Delta s \biggr),\quad \mu (s)>0, \end{aligned}$$

for \(\tau,x\in \mathbb{T}_{+}\) with \(x\geq \tau \).

Remark 4.13

From inequality (20), it is easy to see that \(G_{M}\) is a bounded operator.
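On a uniform grid the estimate of Corollary 4.12 becomes \(\Vert G_{M}\Vert \leq (1+\Vert B\Vert )^{m}\exp (nh\Vert M\Vert /(1-\frac{h\Vert M\Vert}{2}))\) with m impulses over n steps, and it can be confirmed numerically. A sketch with arbitrary illustrative data:

```python
import numpy as np

M = np.array([[0.0, 1.0], [-1.5, -0.4]])   # arbitrary matrix
B = np.array([[0.2, 0.0], [0.1, -0.1]])    # arbitrary impulse matrix
h, n, impulses = 0.1, 40, {10, 25}
I = np.eye(2)
P = np.linalg.solve(I - (h / 2) * M, I + (h / 2) * M)

# Build G_M over n steps with the given impulses.
Y = np.eye(2)
for k in range(n):
    Y = P @ Y
    if k + 1 in impulses:
        Y = (I + B) @ Y

normM = np.linalg.norm(M, 2)
bound = (1 + np.linalg.norm(B, 2)) ** len(impulses) \
    * np.exp(n * h * normM / (1 - h * normM / 2))
assert np.linalg.norm(Y, 2) <= bound
```

The per-step bound follows from \(\Vert P\Vert \leq \frac{1+h\Vert M\Vert /2}{1-h\Vert M\Vert /2}\leq \exp (\frac{h\Vert M\Vert }{1-h\Vert M\Vert /2})\).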

5 Boundedness and exponential stability

Lemma 5.1

Let \(p>0\) be a constant such that \(p\in \mathcal{R}_{+}^{C}(\mathbb{T}_{+},\mathbb{R})\). Then

$$\begin{aligned} 1+p ( x-s ) \leq E_{p}(x,s)\quad\textit{for all }x\geq s. \end{aligned}$$


Let \(a ( x ) =p ( x-s ) \). Then \(\langle a ( x ) \rangle =\frac{p}{2} ( x^{ \sigma }+x-2s ) \). Since \(x\geq s\), it follows that \(x^{\sigma}+x-2s\geq 0\). Furthermore,

$$\begin{aligned} p \bigl\langle a ( x ) \bigr\rangle +p=\frac{p^{2}}{2} \bigl( x^{\sigma}+x-2s \bigr) +p\geq p=a^{\Delta} ( x ) \end{aligned}$$


and hence

$$\begin{aligned} a^{\Delta} ( x ) \leq p \bigl\langle a ( x ) \bigr\rangle +p. \end{aligned}$$

By applying [26, Lemma 3.1], we have

$$\begin{aligned} a ( x ) \leq a ( s ) E_{p} ( x,s ) + \int _{s}^{x}p \bigl\langle E_{-p} ( r,x ) \bigr\rangle \Delta r. \end{aligned}$$

Since \(a ( s ) =0\) and \(\int _{s}^{x}p \langle E_{-p} ( r,x ) \rangle \Delta r=E_{p} ( x,s ) -1\), while \(a ( x ) =p ( x-s ) \), we obtain after simplification

$$\begin{aligned} 1+p ( x-s ) \leq E_{p} ( x,s ). \end{aligned}$$

 □
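On a uniform grid the scalar Cayley exponential is \(E_{p}(x,s)= ( \frac{1+hp/2}{1-hp/2} ) ^{(x-s)/h}\), and the lower bound of Lemma 5.1 can be checked pointwise. A sketch (the values of p and h are arbitrary, chosen with \(2-hp>0\)):

```python
p, h = 0.7, 0.2                            # arbitrary, with 2 - h*p > 0
step = (1 + h * p / 2) / (1 - h * p / 2)   # one-step Cayley factor

for k in range(200):                       # grid points x = s + k*h
    E = step ** k                          # E_p(x, s)
    assert 1 + p * (k * h) <= E + 1e-12    # Lemma 5.1 lower bound
```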


Lemma 5.2

Let \(p>0\) be a constant such that \(-p\in \mathcal{R}_{+}^{C}(\mathbb{T}_{+},\mathbb{R})\). Then we have

$$\begin{aligned} E_{-p}(x,s)\leq e^{\frac{-p}{2} ( x-s ) }\quad\textit{for all }x \geq s. \end{aligned}$$
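Similarly, the upper bound of Lemma 5.2 can be checked for the scalar Cayley exponential \(E_{-p}\) on a uniform grid (p and h are arbitrary illustrative choices with \(2-hp>0\)):

```python
import math

p, h = 0.9, 0.3                            # arbitrary, with 2 - h*p > 0
step = (1 - h * p / 2) / (1 + h * p / 2)   # one-step factor of E_{-p}

for k in range(200):                       # grid points x = s + k*h
    E = step ** k                          # E_{-p}(x, s)
    assert E <= math.exp(-p / 2 * (k * h)) + 1e-12   # Lemma 5.2 upper bound
```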

Theorem 5.3

Let \((H)\) hold and let \(\theta >0\) be such that \(x_{i+1}-x_{i}<\theta \), \(i=1,2,3,\ldots \) . If the solution of the IVP

$$\begin{aligned} \textstyle\begin{cases} a^{\Delta} ( x ) =M(x) \langle a(x) \rangle, \quad x\in \mathbb{T}_{(\tau )},\ x\neq x_{k}, \\ a(x_{k}^{+}) = ( I+B_{k} ) a(x_{k})+c_{k},\quad k=1,2, \ldots, \\ a(\tau ^{+}) =0, \quad \tau \geq 0, \end{cases}\displaystyle \end{aligned}$$

is bounded for every \(c\in l^{\infty}(\mathbb{R}^{n})\), then there exist \(N=N(\tau )\geq 1\) and \(\lambda >0\) with \(-\lambda \in \mathcal{R}_{+}^{C}(\mathbb{T}_{+},\mathbb{R})\) such that

$$\begin{aligned} \begin{aligned} \bigl\Vert G_{M}(x,\tau ) \bigr\Vert \leq NE_{-\lambda}(x,\tau ) \quad\textit{for all }x\in \mathbb{T}_{(\tau )}. \end{aligned} \end{aligned}$$


Theorem 4.9, applied with \(f=0\) and \(\eta =0\), yields

$$\begin{aligned} \begin{gathered} a(x)={ \sum_{\tau < x_{j}< x}} G_{M}\bigl(x,x_{j}^{+}\bigr)c_{j},\quad x\in \mathbb{T}_{(\tau )}. \end{gathered} \end{aligned}$$

By an estimate similar to that in [22, Theorem 5.1], we obtain

$$\begin{aligned} \begin{gathered} \bigl\Vert G_{M}(x,\tau ) \bigr\Vert \leq \frac{K^{4}}{2(K-1)} \beta _{i}\biggl(1-\frac{1}{K} \biggr)^{\frac{1}{\theta}(x-\tau )}\quad\text{for }x\in {}[ x_{i+j},x_{i+j+1}). \end{gathered} \end{aligned}$$

Let us consider the positive function \(\lambda (x)\), with \(-\lambda (x)\in \mathcal{R}^{+}\), as the solution of inequality

$$\begin{aligned} \begin{gathered} E_{-\lambda}(x,\tau )\geq \biggl(1-\frac{1}{K} \biggr)^{\frac{1}{\theta}(x-\tau )},\quad \text{for }x\in {}[ x_{i+j},x_{i+j+1}), \end{gathered} \end{aligned}$$

where \(E_{-\lambda}(x,\tau )\) is a Cayley exponential function. Set

$$\begin{aligned} \begin{gathered} N=\max \biggl\{ \frac{K^{4}}{2(K-1)}\beta _{i}, \sup_{\tau < x< x_{i}}\frac{ \Vert G_{M}(x,\tau ) \Vert }{E_{-\lambda}(x,\tau )} \biggr\} . \end{gathered} \end{aligned}$$

Then for all \(x,\tau \in \mathbb{T}_{+}\), we have

$$\begin{aligned} \begin{gathered} \bigl\Vert G_{M}(x,\tau ) \bigr\Vert \leq NE_{-\lambda}(x,\tau ), \end{gathered} \end{aligned}$$

and so the theorem is proved. □

Corollary 5.4

Let \((H)\) hold and let \(\theta >0\) be such that \(x_{i+1}-x_{i}<\theta \), \(i=1,2,3,\ldots \) . Then the boundedness of (21) for every \(c:= \{ c_{k} \} _{k=1}^{\infty}\in l^{\infty}(\mathbb{R}^{n})\) implies the exponential stability of (11).


Theorem 5.3 implies that there exist \(N=N(\tau )\geq 1\) and \(\lambda >0\) with \(-\lambda \in \mathcal{R}_{+}^{C}\) such that

$$\begin{aligned} \bigl\Vert G_{M}(x,\tau ) \bigr\Vert \leq NE_{-\lambda}(x, \tau )\quad\text{for all }x\in \mathbb{T}_{(\tau )}. \end{aligned}$$

For any \(\tau \in \mathbb{T}_{+}\), the solution of (11) satisfies

$$\begin{aligned} \begin{gathered} \bigl\Vert a(x) \bigr\Vert = \bigl\Vert G_{M}(x,\tau )a(\tau ) \bigr\Vert \leq \bigl\Vert G_{M}(x,\tau ) \bigr\Vert \cdot \bigl\Vert a(\tau ) \bigr\Vert \leq N \bigl\Vert a(\tau ) \bigr\Vert E_{- \lambda}(x,\tau ) \end{gathered} \end{aligned}$$

for all \(x\in \mathbb{T}_{(\tau )}\). □

Lemma 5.5

If \(\theta >0\) such that \(x_{i+1}-x_{i}<\theta \), \(i=1,2,3,\ldots \) , then for every positive number λ with \(\theta <\frac{1}{\lambda}\), we have \(-\lambda \in \mathcal{R}_{+}^{C}\).


Since \(x_{i}^{\sigma}=x_{i}\) for all \(i=1,2,3,\ldots \) , for \(x\in {}[ x_{i},x_{i+1}]\) we have \(x_{i}\leq x\leq \sigma (x)\leq x_{i+1}\).

It follows that \(\mu (x)=\sigma (x)-x\leq \) \(x_{i+1}-x_{i}<\theta \) for \(x\in {}[ x_{i},x_{i+1}]\). Therefore, \(\mu (x)<\theta \) for \(x\in \mathbb{T}_{+}\). If \(\theta <\frac{1}{\lambda}\), then we have that

\(2-\lambda \mu (x)>2-\frac{1}{\theta}\mu (x)>0\) and thus \(-\lambda \in \mathcal{R}_{+}^{C}\). □

Theorem 5.6

Let \((H)\) hold and let \(\theta >0\) be such that \(x_{i+1}-x_{i}<\theta \), \(i=1,2,3,\ldots \) , and

$$\begin{aligned} \begin{aligned} \sup_{i\geq 1} \Vert B_{i} \Vert \leq b,\qquad { \int _{x_{i}}^{x_{i+1}}} \frac{ \Vert M(s) \Vert }{1-\frac{\mu (s) \Vert M(s) \Vert }{2}} \Delta s\leq M,\quad i=1,2,\ldots. \end{aligned} \end{aligned}$$

If the IVP (21) is bounded for any \(c\in l^{\infty}(\mathbb{R}^{n})\), then there exist \(N>0\) and \(\lambda >0\) with \(-\lambda \in \mathcal{R}_{+}^{C}\) such that:

$$\begin{aligned} \begin{aligned} \bigl\Vert G_{M}(x,\tau ) \bigr\Vert \leq NE_{-\lambda}(x,\tau ) \quad\textit{for all }x\in \mathbb{T}_{(\tau )}. \end{aligned} \end{aligned}$$


By using Lemma 5.1, Lemma 5.5, and [22, Theorem 5.2], we can obtain

$$\begin{aligned} \begin{aligned} \bigl\Vert G_{M}(x,\tau ) \bigr\Vert \leq NE_{-\lambda}(x,\tau ) \quad\text{for all }x\in \mathbb{T}_{(\tau )}. \end{aligned} \end{aligned}$$


Example 5.7

Let us consider the following linear Cayley-type dynamic system on the time scale \(P_{1,1}:=\bigcup_{k=0}^{\infty} [ 2k,2k+1 ] \):

$$\begin{aligned} \textstyle\begin{cases} a^{\Delta} ( x ) = \begin{pmatrix} -\alpha & 0 \\ 0 & -\beta \end{pmatrix} \langle a ( x ) \rangle + \begin{pmatrix} ( 1+\frac{\mu \alpha }{2} ) E_{-\alpha} ( x^{\sigma },\tau ) \\ ( 1+\frac{\mu \beta }{2} ) E_{-\beta} ( x^{\sigma },\tau ) \end{pmatrix},\quad x\neq x_{k}; \\ a ( ( 2k ) ^{+} ) = \begin{pmatrix} a_{2k} & 0 \\ 0 & b_{2k} \end{pmatrix} a ( 2k ),\quad k=1,2,\ldots; \\ a ( \tau ) =\eta,\quad \tau \geq 0, \end{cases}\displaystyle \end{aligned}$$

with \(\beta >\alpha >0\), and sequences \(( a_{2k} ) _{k\geq 1}\) and \(( b_{2k} ) _{k\geq 1}\) such that \(a_{ij}:=\prod_{l=1}^{j}a_{2 ( i+l ) }< a\), \(b_{ij}:=\prod_{l=1}^{j}b_{2 ( i+l ) }< b\) for each fixed \(i\geq 1\) and \(j=1,2,\ldots \) .

Since \(( H ) \) holds, the generalized exponential matrix \(G_{M}(x,\tau )\), \(0\leq \tau \leq x\), for the impulsive effects \(\{ B_{i},x_{i} \} _{i=1}^{\infty}\) is given by

$$\begin{aligned} G_{M}(x,\tau )= \begin{pmatrix} a_{ij}E_{-\alpha} ( x,\tau ) & 0 \\ 0 & b_{ij}E_{-\beta} ( x,\tau ) \end{pmatrix}, \end{aligned}$$

where \(E_{p} ( x,\tau ) =e_{\frac{2p}{2-\mu p}} ( x,\tau ) = ( \frac{2+p}{2-p} ) ^{j}e^{p ( x-\tau -j ) }\) for \(\tau \in {}[ 2i,2i+2)\) and \(x\in [ 2 ( i+j ),2 ( i+j+1 ) ] \), since \(\mu =1\) at each of the \(j\) gap points between \(\tau \) and \(x\) and \(\mu =0\) elsewhere. It follows that
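The Cayley exponential on \(P_{1,1}\) can also be evaluated by direct propagation: a factor \(e^{p}\) over each continuous unit interval and a Cayley factor \((2+p)/(2-p)\) across each unit gap (graininess 1). The sketch below checks the jump relation and the decay of \(E_{-\alpha}\) (the value of α is an arbitrary illustrative choice):

```python
import math

def E(p, x, tau=0.0):
    # Cayley exponential on P_{1,1} = U_k [2k, 2k+1]:
    # e^{p * length} on continuous pieces, a factor (2 + p)/(2 - p)
    # across each gap (2k+1, 2k+2), where the graininess is mu = 1.
    assert x >= tau
    val, t = 1.0, tau
    while t < x:
        k = int(t // 2)
        right = 2 * k + 1            # right end of the current interval
        if t < right:                # flow along the continuous part
            seg = min(x, right) - t
            val *= math.exp(p * seg)
            t += seg
        else:                        # jump across the gap
            val *= (2 + p) / (2 - p)
            t = 2 * k + 2
    return val

alpha = 0.8
# Jump relation of the Cayley exponential with mu = 1:
assert abs(E(-alpha, 2.0) - (2 - alpha) / (2 + alpha) * E(-alpha, 1.0)) < 1e-12
# E_{-alpha} decays along the time scale:
assert E(-alpha, 9.0) < E(-alpha, 5.0) < 1.0
```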

$$\begin{aligned} \bigl\Vert G_{M}(x,\tau ) \bigr\Vert \leq KE_{-\alpha} ( x, \tau ), \end{aligned}$$

where \(K:=\max \{a,b\}\).

The solution of (23) is given by

$$\begin{aligned} a ( x ) ={}& \begin{pmatrix} a_{ij}E_{-\alpha} ( x,\tau ) & 0 \\ 0 & b_{ij}E_{-\beta} ( x,\tau ) \end{pmatrix} \eta \\ &{} + \int _{\tau}^{x} \begin{pmatrix} a_{ij}E_{-\alpha} ( x,s^{\sigma} ) & 0 \\ 0 & b_{ij}E_{-\beta} ( x,s^{\sigma} ) \end{pmatrix} \begin{pmatrix} E_{-\alpha} ( s^{\sigma},\tau ) \\ E_{-\beta} ( s^{\sigma},\tau ) \end{pmatrix} \Delta s \\ ={}& \begin{pmatrix} a_{ij}E_{-\alpha} ( x,\tau ) & 0 \\ 0 & b_{ij}E_{-\beta} ( x,\tau ) \end{pmatrix} \eta + \begin{pmatrix} E_{-\alpha} ( x,\tau ) \int _{\tau}^{x}a_{ij}\Delta s \\ E_{-\beta} ( x,\tau ) \int _{\tau}^{x}b_{ij}\Delta s \end{pmatrix} \\ ={}& \begin{pmatrix} a_{ij}E_{-\alpha} ( x,\tau ) ( \eta _{1}+ ( x- \tau ) ) \\ b_{ij}E_{-\beta} ( x,\tau ) ( \eta _{2}+ ( x- \tau ) ) \end{pmatrix}. \end{aligned}$$

In particular, if we consider the following system:

$$\begin{aligned} \textstyle\begin{cases} a^{\Delta} ( x ) = \begin{pmatrix} -\alpha & 0 \\ 0 & -\beta \end{pmatrix} \langle a ( x ) \rangle,\quad x\neq x_{k}; \\ a ( ( 2k ) ^{+} ) = \begin{pmatrix} a_{2k} & 0 \\ 0 & b_{2k} \end{pmatrix} a ( 2k ) +c_{2k},\quad k=1,2,\ldots; \\ a ( \tau ) =0,\quad \tau \geq 0, \end{cases}\displaystyle \end{aligned}$$

then the solution is

$$\begin{aligned} a(x)={ \sum_{l=1}^{j}} G_{M}\bigl(x, \bigl( 2 ( i+l ) \bigr) ^{+} \bigr)c_{2 ( i+l ) }. \end{aligned}$$

Moreover, it is easy to see that the solution of (24) is bounded for any \(c= \{ c_{k} \} _{k=1}^{\infty}\in l^{\infty}(\mathbb{R}^{n})\). Consequently, the following impulsive dynamic system

$$\begin{aligned} \textstyle\begin{cases} a^{\Delta} ( x ) = \begin{pmatrix} -\alpha & 0 \\ 0 & -\beta \end{pmatrix} \langle a ( x ) \rangle,\quad x\neq x_{k}; \\ a ( ( 2k ) ^{+} ) = \begin{pmatrix} a_{2k} & 0 \\ 0 & b_{2k} \end{pmatrix} a ( 2k ),\quad k=1,2,\ldots; \\ a ( \tau ) =\eta,\quad \tau \geq 0, \end{cases}\displaystyle \end{aligned}$$

is uniformly exponentially stable.
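The stability conclusion can be illustrated by propagating the homogeneous impulsive system along \(P_{1,1}\) componentwise: an \(e^{-\alpha}\) (resp. \(e^{-\beta}\)) factor on each interval, a Cayley gap factor, and an impulse coefficient. The coefficients below are arbitrary illustrative choices in \((0,1)\):

```python
import math

alpha, beta = 0.5, 1.0         # beta > alpha > 0, both < 2 (regressivity)
a2k, b2k = 0.9, 0.8            # impulse coefficients, arbitrary in (0, 1)

def gap(p):
    # Cayley factor of E_{-p} across a gap of P_{1,1} (graininess mu = 1)
    return (2 - p) / (2 + p)

a1, a2, K = 1.0, 1.0, 20
norms = []
for k in range(K):
    a1 *= math.exp(-alpha)     # flow on [2k, 2k+1]
    a2 *= math.exp(-beta)
    a1 *= gap(alpha) * a2k     # gap plus impulse at x = 2(k+1)
    a2 *= gap(beta) * b2k
    norms.append(max(abs(a1), abs(a2)))

# Uniform exponential decay: the norm decreases monotonically to 0.
assert all(n2 < n1 for n1, n2 in zip(norms, norms[1:]))
assert norms[-1] < 1e-4
```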

6 Conclusions and future directions

We derived the solution of a linear time-varying Cayley impulsive dynamic system on time scales and introduced Cayley regressive matrices. We proved the basic properties of the transition matrix and of the impulsive transition matrix, gave estimates for the solutions of Cayley impulsive dynamic systems, and established necessary and sufficient conditions for exponential stability and boundedness.

Moreover, these results should prove useful for the analysis and synthesis of impulsive control systems on time scales. Future work may address stability and Hyers–Ulam stability for Cayley impulsive dynamic systems, as well as the transfer function of linear Cayley dynamic systems.

Availability of data and materials

Not applicable.


  1. Lakshmikantham, V., Simeonov, P.S., et al.: Theory of Impulsive Differential Equations, vol. 6. World Scientific, New Jersey (1989)


  2. Milman, V.D., Myshkis, A.D.: On the stability of motion in the presence of impulses. Sib. Mat. Zh. 1(2), 233–237 (1960)


  3. Wang, Y., Lu, J.: Some recent results of analysis and control for impulsive systems. Commun. Nonlinear Sci. Numer. Simul. 80, 104862 (2020)


  4. Lupulescu, V., Younus, A.: On controllability and observability for a class of linear impulsive dynamic systems on time scales. Math. Comput. Model. 54(5–6), 1300–1310 (2011)


  5. Hilger, S.: Analysis on measure chains—a unified approach to continuous and discrete calculus. Results Math. 18(1–2), 18–56 (1990)


  6. Bohner, M., Peterson, A.: Dynamic Equations on Time Scales: An Introduction with Applications. Birkhäuser, Boston (2001)


  7. Agarwal, R., Bohner, M., O’Regan, D., Peterson, A.: Dynamic equations on time scales: a survey. J. Comput. Appl. Math. 141(1–2), 1–26 (2002)


  8. Naghshtabrizi, P., Hespanha, J.P., Teel, A.R.: Exponential stability of impulsive systems with application to uncertain sampled-data systems. Syst. Control Lett. 57(5), 378–385 (2008)


  9. Stamova, I., Stamov, G.: Stability analysis of impulsive functional systems of fractional order. Commun. Nonlinear Sci. Numer. Simul. 19(3), 702–709 (2014)


  10. Li, X., Zhang, X., Song, S.: Effect of delayed impulses on input-to-state stability of nonlinear systems. Automatica 76, 378–382 (2017)


  11. Zhang, L., Yang, X., Xu, C., Feng, J.: Exponential synchronization of complex-valued complex networks with time-varying delays and stochastic perturbations via time-delayed impulsive control. Appl. Math. Comput. 306, 22–30 (2017)


  12. Li, X., Song, S.: Impulsive control for existence, uniqueness, and global stability of periodic solutions of recurrent neural networks with discrete and continuously distributed delays. IEEE Trans. Neural Netw. Learn. Syst. 24(6), 868–877 (2013)


  13. Yang, X., Yang, Z., Nie, X.: Exponential synchronization of discontinuous chaotic systems via delayed impulsive control and its application to secure communication. Commun. Nonlinear Sci. Numer. Simul. 19(5), 1529–1543 (2014)


  14. Stamova, I., Stamov, T., Li, X.: Global exponential stability of a class of impulsive cellular neural networks with supremums. Int. J. Adapt. Control Signal Process. 28(11), 1227–1239 (2014)


  15. Li, X., Li, P., Wang, Q.-G.: Input/output-to-state stability of impulsive switched systems. Syst. Control Lett. 116, 1–7 (2018)


  16. Lu, J., Ho, D.W., Cao, J., Kurths, J.: Single impulsive controller for globally exponential synchronization of dynamical networks. Nonlinear Anal., Real World Appl. 14(1), 581–593 (2013)


  17. Lv, X., Li, X.: Finite time stability and controller design for nonlinear impulsive sampled-data systems with applications. ISA Trans. 70, 30–36 (2017)


  18. Yang, X., Lu, J.: Finite-time synchronization of coupled networks with Markovian topology and impulsive effects. IEEE Trans. Autom. Control 61(8), 2256–2261 (2015)


  19. Zhang, S., Sun, J., Zhang, Y.: Stability of impulsive stochastic differential equations in terms of two measures via perturbing Lyapunov functions. Appl. Math. Comput. 218(9), 5181–5186 (2012)


  20. Dashkovskiy, S., Feketa, P.: Input-to-state stability of impulsive systems and their networks. Nonlinear Anal. Hybrid Syst. 26, 190–200 (2017)


  21. Liu, B., Hill, D.J., Sun, Z.: Input-to-state exponents and related iss for delayed discrete-time systems with application to impulsive effects. Int. J. Robust Nonlinear Control 28(2), 640–660 (2018)


  22. Lupulescu, V., Zada, A.: Linear impulsive dynamic systems on time scales. Electron. J. Qual. Theory Differ. Equ. 2010, Article ID 11 (2010)


  23. Cieśliński, J.L.: Some implications of a new definition of the exponential function on time scales. Discrete Contin. Dyn. Syst. 2011, 302–311 (2011)


  24. Anderson, D.R., Onitsuka, M.: Hyers–Ulam stability and best constant for Cayley h-difference equations. Bull. Malays. Math. Sci. Soc. 43(6), 4207–4222 (2020)


  25. Anderson, D.R., Onitsuka, M.: Hyers–Ulam stability for Cayley quantum equations and its application to h-difference equations. Mediterr. J. Math. 18(4), 168 (2021)


  26. Younus, A., Asif, M., Farhad, K., Nisar, O.: Some new variants of interval-valued Gronwall type inequalities on time scales. Iran. J. Fuzzy Syst. 16(5), 187–198 (2019)


  27. Huang, M., Feng, W.: Oscillation criteria for impulsive dynamic equations on time scales. Electron. J. Differ. Equ. 2007, Article ID 169 (2007)




The authors A. Aloqaily, N. Mlaiki and T. Abdeljawad would like to thank Prince Sultan University for paying the APC and for the support through the TAS research lab.


This research did not receive any external funding.

Author information

Authors and Affiliations



A.Y., G.A., A.A., T.A. and N.M wrote the main manuscript text. All authors reviewed the manuscript.

Corresponding authors

Correspondence to Nabil Mlaiki or Thabet Abdeljawad.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit


About this article


Cite this article

Younus, A., Atta, G., Aloqaily, A. et al. Stability of Cayley dynamic systems with impulsive effects. J Inequal Appl 2023, 80 (2023).



  • Dynamic systems
  • Impulsive exponential matrix
  • Cayley transform
  • Regressive matrix