Open Access

Sufficient conditions for non-stability of stochastic differential systems

Journal of Inequalities and Applications 2015, 2015:377

https://doi.org/10.1186/s13660-015-0894-y

Received: 12 May 2015

Accepted: 12 November 2015

Published: 2 December 2015

Abstract

In this paper, we use the local \(C^{2}\)-conjugate transformation to transform nonlinear stochastic differential systems to ordinary differential systems and give some sufficient conditions for the exponential stability and instability of stochastic differential systems. Our conditions just depend on the derivatives of drift terms and diffusion terms at equilibrium points.

Keywords

nonlinear stochastic differential system; Itô formula; \(C^{2}\)-equivalence

1 Introduction

The exponential stability of nonlinear stochastic differential systems has attracted considerable attention over the last two decades and has produced many results and methods (see [1–4]). Because of the complexity of such systems, most of the literature employs Lyapunov functions to discuss these problems; this approach has the advantage that the stability of a system can be studied knowing only that its solutions exist.

However, the construction of a suitable Lyapunov function is often difficult, which limits the applicability of this method. We therefore try to transform nonlinear stochastic differential systems into ordinary differential systems in which the Brownian motion appears only as a parameter, and then discuss the stability in different situations.

We consider a simple example when the dimension of stochastic differential system is one as an illustration of our ideas:
$$ \left \{ \textstyle\begin{array}{l} dX(t)=b(X(t))\, dt+\sigma X(t)\, dW(t), \\ X(0)=x_{0}, \end{array}\displaystyle \right . $$
(1.1)
where \(W(t)\) is a standard one dimensional Brownian motion, σ is a constant, and \(b(x)\) is a function satisfying the Lipschitz condition and the linear growth condition. Let \(Y(t)=X(t)\exp\{-\sigma W(t)\}\). By the Itô formula for continuous semi-martingales (see [5], p.32), we have
$$dY(t)= \biggl[\exp\bigl\{ -\sigma W(t)\bigr\} b\bigl(X(t)\bigr)- \frac{\sigma^{2}}{2}Y(t) \biggr]\, dt, $$
which is the same as
$$ \left \{ \textstyle\begin{array}{l} dY(t)= [\exp\{-\sigma W(t)\}b (\exp\{\sigma W(t)\}Y(t) )-\frac{\sigma^{2}}{2}Y(t) ]\, dt, \\ Y(0)=x_{0}. \end{array}\displaystyle \right . $$
(1.2)

We remark that equation (1.2) differs from equation (1.1): it is an ordinary differential equation in which the Brownian motion \(W(t)\) appears only as a parameter. We can solve (1.2) explicitly for many choices of \(b(x)\), such as \(b(x)=kx\) or \(b(x)=kx+hx^{n}\).

By the law of the iterated logarithm for Brownian motion (see [6], p.112), the exponential stability of \(Y(t)\) is equivalent to that of \(X(t)\). Because (1.2) is an ordinary differential equation with the parameter \(W(t)\), we can discuss the stability of (1.2) with a lot of methods.
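As a quick numerical illustration (ours, not part of the paper's argument), the sketch below simulates (1.1) by the Euler–Maruyama scheme for the linear choice \(b(x)=kx\), for which solving the transformed ODE (1.2) gives \(Y(t)=x_{0}e^{(k-\sigma^{2}/2)t}\) and hence \(X(t)=e^{\sigma W(t)}Y(t)\). All parameter values are illustrative.

```python
import numpy as np

# A minimal sketch (illustrative parameters): for b(x) = k*x, solving the
# transformed ODE (1.2) gives Y(t) = x0*exp((k - sigma**2/2)*t), hence
# X(t) = exp(sigma*W(t))*Y(t). Compare with Euler-Maruyama applied to (1.1).
rng = np.random.default_rng(0)
k, sigma, x0 = -1.0, 0.5, 1.0
T, n = 1.0, 200_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))

# Euler-Maruyama for dX = k*X dt + sigma*X dW
X = x0 * np.concatenate(([1.0], np.cumprod(1.0 + k * dt + sigma * dW)))

# Closed form obtained through the transformation X(t) = exp(sigma*W(t)) Y(t)
t = np.linspace(0.0, T, n + 1)
X_exact = x0 * np.exp((k - sigma**2 / 2) * t + sigma * W)

err = np.max(np.abs(X - X_exact))
print(err)  # small discretization error
```

The two trajectories agree up to the Euler–Maruyama discretization error, which shrinks as the step size decreases.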

The main purpose of this paper is to extend this idea to a wider class of systems. The paper is organized as follows. We first give preliminaries: definitions and fundamental lemmas used in the sequel, together with the transformations in the different situations. We then present the main results of this paper: we discuss the exponential stability of nonlinear stochastic differential systems not only in the one dimensional situation but also in the high dimensional situation, in two cases.

2 Preliminaries: transformation

Throughout this paper, unless otherwise specified, we let \((\Omega ,\mathscr{F}, \{\mathscr{F}_{t}\}_{t \geq0}, P)\) be a complete probability space with a filtration \(\{\mathscr{F}_{t}\}_{t\geq0}\) satisfying the following usual conditions:
  1. (i)

    \(\mathscr{F}_{0}\) contains all the P-null sets;

     
  2. (ii)

    \(\mathscr{F}_{t}\) is right continuous.

     

Let \(\{W(t)\}_{t\geq 0}\) be the standard d dimensional Brownian motion which is defined on \((\Omega, \mathscr{F}, \{\mathscr{F}_{t}\}_{t\geq0}, P)\) and is adapted to \(\{\mathscr{F}_{t}\}_{t \geq0}\).

In this paper, we mainly consider the stochastic differential equation
$$ \left \{ \textstyle\begin{array}{l} dX(t)=b(X(t))\, dt+\sigma(X(t))\, dW(t), \\ X(0)=x_{0}. \end{array}\displaystyle \right . $$
(2.1)
Here \(b(x)\) and \(\sigma(x)\) are two continuous functions on \(\mathbb{R}^{m}\) and satisfy the following conditions:
  1. (A1)

    \(b(x)\) and \(\sigma(x)\) are continuously differentiable;

     
  2. (A2)

    there exists a positive constant M such that \(|b(x)|+|\sigma (x)|< M(1+|x|)\).

     
We know that equation (2.1) has a unique solution (see [7]). We suppose that
  1. (A3)

    0 is the only zero of \(b(x)\) and \(\sigma(x)\), and \(\partial \sigma(0)\neq0\).

     

If \(\sigma(x)\) and \(b(x)\) satisfy (A1), (A2), and (A3), then (2.1) with initial value \(x_{0}=0\) has the unique solution \(X(t)\equiv0\). We call this solution the trivial solution or equilibrium position.

From Lemma 2.1 of [8], we have the following result.

Lemma 1

If we denote the solution of (2.1) as \(X(t;x_{0})\), then
$$ P\bigl\{ \exists t>0, X(t;x_{0})=0\bigr\} =0, \quad \forall x_{0}\neq0, $$
which means that almost all the sample paths of the solution of (2.1) starting from a nonzero state will never reach the origin.

Definition 1

If
$$ \lim_{x\rightarrow0}\sup_{t>0}\bigl\vert X(t;x)\bigr\vert =0,\quad \mbox{a.s.}, $$
then we call the trivial solution of (2.1) stable. If not, the trivial solution of (2.1) is unstable. Furthermore, if there exists a positive constant γ such that
$$ \lim_{x\rightarrow0}\sup_{t>0}e^{\gamma t}\bigl\vert X(t;x)\bigr\vert =0,\quad \mbox{a.s.}, $$
then the trivial solution of (2.1) is exponentially stable.

Definition 2

We suppose that U and V are vector fields on \(\mathbb{R}^{m}\) with \(U(0)=V(0)=0\). If there exist a positive constant ρ and a mapping \(H:O_{\rho}(0)\rightarrow H(O_{\rho}(0))\) satisfying the following two conditions:
  1. (i)

    H is a \(C^{2}\)-differentiable homeomorphism;

     
  2. (ii)

    \(\partial H(x)U(x)=V(H(x))\), \(\forall x\in O_{\rho}(0)\).

     
Then U and V are said to be \(C^{2}\)-equivalent, and H is called the \(C^{2}\)-conjugate mapping between U and V.

2.1 One dimensional situation

Let \(m=d=1\). By (A3), we may suppose that \(\sigma(x)>0\) for \(x\in(0, \infty)\); then \(\sigma(x)<0\) for \(x\in(-\infty, 0)\) and \(\sigma'(0)>0\), where \(\sigma'(x)\) denotes the derivative of \(\sigma(x)\).

Let
$$ F(x)=\left \{ \textstyle\begin{array}{l@{\quad}l} \exp\{ \int_{1}^{x} \frac{\sigma'(0)}{\sigma(u)}\, du \}, & x \geq0, \\ -\exp\{ \int_{-1}^{x} \frac{\sigma'(0)}{\sigma(u)}\, du \}, & x< 0. \end{array}\displaystyle \right . $$
(2.2)
It is obvious that \(F(0)=0\), and \(F(x)\) is a strictly monotonically increasing function. We denote its inverse function by \(G(x)\) and then we have from (2.2)
$$\begin{aligned} \lim_{x\rightarrow0+} \frac{F(x)}{x} =& \lim_{x\rightarrow0+} \exp\biggl\{ - \int_{x}^{1} \frac{\sigma '(0)}{\sigma(u)}\,du-\log x\biggr\} \\ =& \lim_{x\rightarrow0+}\exp\biggl\{ - \int_{x}^{1} \frac{\sigma '(0)}{\sigma(u)}-\frac{1}{u}\,du \biggr\} \\ =&\exp\biggl\{ -\lim_{x\rightarrow0+} \int_{x}^{1} \frac{\sigma'(0)u-\sigma (u)}{\sigma(u)u}\,du\biggr\} \\ =&\exp\biggl\{ \int_{0}^{1} \frac{\sigma(u)-\sigma'(0)u}{\sigma(u)u}\,du\biggr\} . \end{aligned}$$
In the same way we can get
$$\lim_{x\rightarrow0-} \frac{F(x)}{x}=\exp\biggl\{ \int_{-1}^{0} \frac {\sigma(u)-\sigma'(0)u}{\sigma(u)u}\, du\biggr\} . $$
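The limit above can be checked numerically. The sketch below uses the illustrative choice \(\sigma(u)=u+u^{2}\) (ours, not from the paper), for which \(\sigma'(0)=1\), the integrand \((\sigma(u)-\sigma'(0)u)/(\sigma(u)u)\) simplifies to \(1/(1+u)\), and in fact \(F(x)=2x/(1+x)\) in closed form, so both the limit and the ratio \(F(x)/x\) should be close to 2.

```python
import math

# Numerical check of lim_{x->0+} F(x)/x for the illustrative choice
# sigma(u) = u + u**2 (so sigma'(0) = 1); this sigma is an example of
# ours, not one taken from the paper.
def sigma(u):
    return u + u**2

def trapezoid(f, a, b, n=10_000):
    # composite trapezoid rule for int_a^b f
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def F(x):
    # F(x) = exp( int_1^x sigma'(0)/sigma(u) du ) for x > 0, as in (2.2);
    # the substitution u = exp(v) tames the 1/u-type singularity near 0.
    g = lambda v: math.exp(v) / sigma(math.exp(v))
    return math.exp(-trapezoid(g, math.log(x), 0.0))

# Predicted limit exp( int_0^1 (sigma(u) - sigma'(0)u)/(sigma(u)u) du );
# here the integrand simplifies to 1/(1+u), so the limit is exp(ln 2) = 2.
limit = math.exp(trapezoid(lambda u: 1.0 / (1.0 + u), 0.0, 1.0))
ratio = F(1e-6) / 1e-6
print(limit, ratio)  # both approximately 2
```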
For any \(x_{0}\neq0\), we write \(X(t;x_{0})\) as \(X(t)\) for short whenever this causes no ambiguity. Let
$$ Y(t)=\exp\bigl\{ -\sigma'(0)W(t)\bigr\} F\bigl(X(t)\bigr), $$
where \(\sigma(x)\), \(W(t)\) and \(X(t)\) are used in (2.1).
By the Itô formula we get
$$\begin{aligned}& d\exp\bigl\{ -\sigma'(0)W(t)\bigr\} = - \sigma'(0)\exp\bigl\{ -\sigma'(0)W(t)\bigr\} \, dW(t) \\& \hphantom{d\exp\bigl\{ -\sigma'(0)W(t)\bigr\} ={}}{}+\frac{\sigma'(0)^{2}}{2}\exp\bigl\{ -\sigma'(0)W(t)\bigr\} \, dt, \end{aligned}$$
(2.3)
$$\begin{aligned}& dF\bigl(X(t)\bigr) = F'\bigl(X(t)\bigr)b\bigl(X(t) \bigr)\, dt+F'\bigl(X(t)\bigr)\sigma\bigl(X(t)\bigr)\, dW(t) \\& \hphantom{dF\bigl(X(t)\bigr) ={}}{}+\frac{1}{2}F''\bigl(X(t) \bigr)\sigma\bigl(X(t)\bigr)^{2}\, dt \\& \hphantom{dF\bigl(X(t)\bigr)} = F\bigl(X(t)\bigr)\frac{\sigma'(0)b(X(t))}{\sigma(X(t))}\, dt+F\bigl(X(t)\bigr) \sigma '(0)\, dW(t) \\& \hphantom{dF\bigl(X(t)\bigr) ={}}{}+\frac{1}{2}F\bigl(X(t)\bigr)\bigl[\sigma'(0)^{2}- \sigma'(0)\sigma'\bigl(X(t)\bigr)\bigr]\, dt, \end{aligned}$$
(2.4)
and
$$\begin{aligned}& \bigl\langle d\exp\bigl\{ -\sigma'(0)W(t)\bigr\} , dF \bigl(X(t)\bigr)\bigr\rangle \\& \quad = -\sigma'(0)\exp\bigl\{ -\sigma'(0)W(t)\bigr\} F\bigl(X(t)\bigr)\sigma'(0)\, dt. \end{aligned}$$
(2.5)
According to the product formula of continuous semi-martingales, (2.3), (2.4), and (2.5), we have
$$\begin{aligned} dY(t) =& d\exp\bigl\{ -\sigma'(0)W(t)\bigr\} F \bigl(X(t)\bigr) \\ =& F\bigl(X(t)\bigr)\, d\exp\bigl\{ -\sigma'(0)W(t)\bigr\} +\exp\bigl\{ -\sigma'(0)W(t)\bigr\} \, d F\bigl(X(t)\bigr) \\ & {}+\bigl\langle d\exp\bigl\{ -\sigma'(0)W(t)\bigr\} , dF \bigl(X(t)\bigr)\bigr\rangle \\ =&\biggl[\frac{\sigma'(0)b(X(t))}{\sigma(X(t))}-\frac{\sigma'(0)\sigma '(X(t))}{2}\biggr]\exp\bigl\{ - \sigma'(0)W(t)\bigr\} F\bigl(X(t)\bigr)\, dt. \end{aligned}$$
(2.6)
Let
$$ \begin{aligned} &\frac{\sigma'(0)b(x)}{\sigma(x)}=b'(0)+h(x),\qquad \frac{\sigma '(0)\sigma'(x)}{2}= \frac{\sigma'(0)^{2}}{2}+q(x), \\ &R(x)=h(x)-q(x). \end{aligned} $$
(2.7)
It is easy to see that \(\lim_{x\rightarrow0} R(x)=0\).
By (2.6) and (2.7) we know that \(Y(t)\) satisfies
$$ \left \{ \textstyle\begin{array}{l} \frac{dY(t)}{dt}=[b'(0)-\frac{\sigma'(0)^{2}}{2}+R(\exp\{\sigma '(0)W(t)\}G(Y(t)) ) ] Y(t), \\ Y(0)=F(x_{0}). \end{array}\displaystyle \right . $$
(2.8)

We notice that equation (2.8) is an ordinary differential equation in which \(W(t)\) is a parameter. Thus (2.8) has a unique solution for every trajectory \(W(t)\) of the Brownian motion.
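A minimal sketch of solving (2.8) trajectory-wise, with the illustrative choices \(\sigma(x)=x\) and \(b(x)=kx+x^{3}\) (ours, not from the paper). Then \(\sigma'(0)=1\), F and G are the identity on \(x>0\), \(R(x)=x^{2}\), and the drift coefficient \(b'(0)-\sigma'(0)^{2}/2\) equals \(k-1/2\); with \(k=-1/2\) this coefficient is −1, and \(Y(t)\) should decay roughly like \(e^{-t}\).

```python
import numpy as np

# Solve the trajectory-wise ODE (2.8) for the illustrative example
# sigma(x) = x, b(x) = k*x + x**3, so F = G = identity and R(x) = x**2:
#   dY/dt = [k - 1/2 + R(exp(W(t)) * Y(t))] * Y(t).
rng = np.random.default_rng(1)
k, x0 = -0.5, 0.01          # drift coefficient k - 1/2 = -1 < 0
T, n = 10.0, 100_000
dt = T / n
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))

Y = np.empty(n + 1)
Y[0] = x0                   # Y(0) = F(x0) = x0
for i in range(n):
    R = (np.exp(W[i]) * Y[i]) ** 2   # R evaluated as in (2.8)
    Y[i + 1] = Y[i] + (k - 0.5 + R) * Y[i] * dt

print(Y[0], Y[-1])  # Y decays roughly like exp(-t)
```

The nonlinear term \(R\) is negligible for small initial values, so the observed decay rate matches the linear coefficient, in line with Theorem 1 below.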

2.2 High dimensional situation I

In this case \(m>1\) and \(d=1\). Let \(A=(a_{ij})_{i,j\leq m}\) be an \(m\times m\) matrix, and regard every m dimensional vector as an \(m\times1\) matrix. For any constant x, we make the convention
$$x*A=A*x=(xa_{ij})_{i,j\leq m}. $$
Let \(A(s)=(a_{ij}(s))_{i,j\leq m}\) and \(B(s)=(b_{ij}(s))_{i,j\leq m}\) be \(m\times m\) matrix functions and \(\langle dA(s),dB(s)\rangle\) be the abbreviated form of \((\sum_{k=1}^{m} \langle da_{ik}(s),db_{kj}(s)\rangle)_{i,j\leq m}\). If \(a_{ij}(s)\) and \(b_{ij}(s)\) are all continuous semi-martingales, then we have
$$ d\bigl[A(s)B(s)\bigr]=\bigl[dA(s)\bigr]B(s)+A(s)\bigl[dB(s)\bigr]+\bigl\langle dA(s),dB(s) \bigr\rangle . $$
Let A be a \(m\times m\) matrix, \(W(t)\) be a one dimensional Brownian motion, and
$$ \exp\bigl\{ AW(t)\bigr\} =I+A*W(t)+\frac{1}{2!}A^{2}*W(t)^{2}+ \cdots+\frac {1}{k!}A^{k}*W(t)^{k}+\cdots, $$
then we have
$$ \exp\bigl\{ AW(t)\bigr\} A = A\exp\bigl\{ AW(t)\bigr\} $$
and
$$ d\exp\bigl\{ AW(t)\bigr\} = A\exp\bigl\{ AW(t)\bigr\} \, dW(t)+ \frac{1}{2}A^{2}\exp\bigl\{ AW(t)\bigr\} \, dt. $$
Furthermore, if A and B are commutative (\(AB=BA\), A and B are \(m \times m\) matrices), then we have
$$\exp\bigl\{ (A+B)W(t)\bigr\} =\exp\bigl\{ BW(t)\bigr\} \exp\bigl\{ AW(t)\bigr\} . $$
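The matrix-exponential identities above can be sanity-checked numerically; the matrices and the value of \(W(t)\) below are arbitrary illustrative choices (ours), and the exponential is evaluated by its truncated power series.

```python
import numpy as np

# Check exp{(A+B)w} = exp{Bw} exp{Aw} for commuting A, B, and
# exp{Aw} A = A exp{Aw}. Matrices and w are illustrative choices.
def expm(M, terms=40):
    # truncated power series exp(M) = sum_k M**k / k!
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation generator
B = 0.3 * np.eye(2)                       # commutes with every A
w = 0.7                                   # plays the role of W(t)

lhs = expm((A + B) * w)
rhs = expm(B * w) @ expm(A * w)
assert np.allclose(lhs, rhs)
assert np.allclose(expm(A * w) @ A, A @ expm(A * w))
print(np.max(np.abs(lhs - rhs)))  # numerically zero
```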

Let \(A=\partial\sigma(0)\), suppose that \(\sigma(x)\) and Ax are \(C^{2}\)-equivalent, and let H be the \(C^{2}\)-conjugate mapping between \(\sigma(x)\) and Ax. Suppose that \(G(y)\) is the inverse mapping of \(H(x)\); then \(\partial G(0)=(\partial H(0))^{-1}\). By Definition 2, we have \(\partial H(x)\sigma(x)=A H(x)\), \(\forall x\in O_{\rho}(0)\). Differentiating both sides at \(x=0\), we get \(\partial H(0)A=A\partial H(0)\), which means that \(\partial H(0)\) and A are commutative.

Now suppose that \(X(t;x_{0})\) is the solution of (2.1) for any \(x_{0}\in O_{\rho}(0)\) and is noted as \(X(t)\). Let
$$S=\inf\bigl\{ t|X(t)\notin O_{\rho}(0)\bigr\} $$
and
$$Y(t)=\exp\bigl\{ -AW(t)\bigr\} H\bigl(X(t)\bigr),\quad t\in(0, S), $$
where \(A=\partial\sigma(0)\) and H is the \(C^{2}\)-conjugate mapping. Then
$$\begin{aligned} dY(t) =& \bigl[d\exp\bigl\{ -AW(t)\bigr\} \bigr]H\bigl(X(t)\bigr)+\exp\bigl\{ -AW(t)\bigr\} \bigl[dH\bigl(X(t)\bigr)\bigr] \\ &{}+\bigl\langle d\exp\bigl\{ -AW(t)\bigr\} ,dH\bigl(X(t)\bigr)\bigr\rangle \\ =& -A\exp\bigl\{ -AW(t)\bigr\} H\bigl(X(t)\bigr)\, dW(t) \\ &{}+\frac{1}{2} A^{2}\exp\bigl\{ -AW(t)\bigr\} H\bigl(X(t)\bigr) \, dt \\ &{}+\exp\bigl\{ -AW(t)\bigr\} \partial H\bigl(X(t)\bigr)b\bigl(X(t)\bigr)\, dt \\ &{}+\exp\bigl\{ -AW(t)\bigr\} \partial H\bigl(X(t)\bigr)\sigma\bigl(X(t)\bigr)\, dW(t) \\ &{}+\frac{1}{2}\exp\bigl\{ -AW(t)\bigr\} \sigma\bigl(X(t) \bigr)^{T} \partial^{2} H\bigl(X(t)\bigr)\sigma \bigl(X(t) \bigr)\, dt \\ &{}-A\exp\bigl\{ -AW(t)\bigr\} \partial H\bigl(X(t)\bigr)\sigma\bigl(X(t)\bigr)\, dt \\ =&\biggl[\exp\bigl\{ -AW(t)\bigr\} \partial H\bigl(X(t)\bigr)b\bigl(X(t)\bigr) \\ &{}-\frac{1}{2} A^{2}\exp\bigl\{ -AW(t)\bigr\} H\bigl(X(t)\bigr) \biggr]\, dt \\ &{}+\frac{1}{2}\exp\bigl\{ -AW(t)\bigr\} \sigma\bigl(X(t) \bigr)^{T} \partial^{2} H\bigl(X(t)\bigr)\sigma\bigl(X(t) \bigr)\, dt \\ =&\biggl[\exp\bigl\{ -AW(t)\bigr\} \partial H\bigl(X(t)\bigr)b\bigl(X(t)\bigr)- \frac{1}{2} A^{2}Y(t)\biggr]\, dt \\ &{}+\frac{1}{2}\exp\bigl\{ -AW(t)\bigr\} \sigma\bigl(X(t) \bigr)^{T} \partial^{2} H\bigl(X(t)\bigr)\sigma \bigl(X(t) \bigr)\, dt. \end{aligned}$$
(2.9)
Let
$$\begin{aligned}& \partial H\bigl(G(y)\bigr)b\bigl(G(y)\bigr) = \partial H(0) \partial b(0)\partial G(0)y+\eta_{1}(y), \end{aligned}$$
(2.10)
$$\begin{aligned}& \frac{1}{2}\sigma\bigl(G(y)\bigr)^{T} \partial^{2} H \bigl(G(y)\bigr)\sigma\bigl(G(y)\bigr) = \eta_{2}(y), \end{aligned}$$
(2.11)
and
$$ \eta(y)=\eta_{1}(y)+\eta_{2}(y). $$
(2.12)
Then we have
$$\lim_{y\rightarrow0}\frac{|\eta(y)|}{|y|}=0. $$
So we can get from (2.9), (2.10), (2.11), and (2.12)
$$ \left \{ \textstyle\begin{array}{l} \frac{dY(t)}{dt} = \exp\{-AW(t)\}\partial H(0)\partial b(0)\partial G(0)\exp\{AW(t)\}Y(t) \\ \hphantom{\frac{dY(t)}{dt} ={}}{}-\frac{1}{2} A^{2}Y(t)+\exp\{-AW(t)\}\eta(\exp\{AW(t)\}Y(t)), \\ Y(0) = H(x_{0}) \end{array}\displaystyle \right . $$
(2.13)
for \(t\in(0, S)\), which means that equation (2.13) is an ordinary differential equation.

2.3 High dimensional situation II

Next we suppose \(d>1\). Let \(W(t)=(W_{1}(t),\ldots,W_{d}(t))\) and \(\sigma(x)=(\sigma_{1}(x),\ldots,\sigma_{d}(x))\). Here \(W_{1}(t), \ldots, W_{d}(t)\) are d independent Brownian motions, \(\sigma_{i}(x)\) is a function defined on \(\mathbb{R}^{m}\), where \(i=1, 2, \ldots, d\).

Let \(A_{1}=\partial\sigma_{1}(0),\ldots, A_{d}=\partial\sigma_{d}(0)\). Now suppose that \(\sigma_{k}(x)\) and \(A_{k} x\) are \(C^{2}\)-equivalent, where \(k=1, \ldots, d\). For \(m=1\), the \(C^{2}\)-equivalence holds automatically.

Suppose that \(H^{(k)}\) is the \(C^{2}\)-conjugate mapping between \(\sigma _{k}(x)\) and \(A_{k} x\), \(k=1, \ldots, d\). Let
$$Y(t)=\exp\bigl\{ A_{d}W_{d}(t)\bigr\} H^{(d)}\cdots \exp\bigl\{ A_{1}W_{1}(t)\bigr\} H^{(1)}\bigl(X(t) \bigr). $$
If \(\partial b(0)\), \(A_{1},\ldots,A_{d}\) are commutative, then we have
$$ \left \{ \textstyle\begin{array}{l} \frac{dY(t)}{dt} = \partial H^{(d)}(0)\cdots\partial H^{(1)}(0)\Gamma_{d}(\partial H^{(1)}(0))^{-1}\cdots (\partial H^{(d)}(0))^{-1}Y(t) \\ \hphantom{\frac{dY(t)}{dt} ={}}{}+\eta(\exp\{\sum_{k=1}^{d}A_{k}W_{k}(t)\}Y(t)), \\ Y(0) = y_{0}. \end{array}\displaystyle \right . $$
(2.14)

3 Main results: exponential stability and instability

3.1 One dimensional situation

Let \(\gamma=b'(0)-\frac{\sigma'(0)^{2}}{2}\) in equation (2.8). Then we have the following results.

Theorem 1

If \(\gamma<0\), then the trivial solution of (2.1) is exponentially stable.

Proof

We only prove Theorem 1 in the case \(x_{0}>0\); the proof is similar in the case \(x_{0}<0\).

By the law of iterated logarithm for Brownian motion, almost surely, we have
$$\limsup_{t\rightarrow\infty}\frac{|W(t)|}{\sqrt{2t\log\log t}}=1, $$
so, almost surely, there is a constant \(M_{W}\) (depending on the trajectory \(W(t)\) of the Brownian motion) such that
$$\sigma'(0)\bigl\vert W(t)\bigr\vert \leq M_{W}(1+ \sqrt{2t\log\log t}) $$
for all \(t\geq0\).
It is obvious that
$$\exp\biggl\{ M_{W}(1+\sqrt{2t\log\log t})+\frac{\gamma t}{2}\biggr\} $$
is a bounded function. If we denote its upper bound by \(N_{W}\), then it is easy to get \(N_{W}>1\).
Since \(\lim_{x\rightarrow0}R(x)=0\), we can find an \(\epsilon>0\) such that \(|R(x)|<\frac{|\gamma|}{3} \) for \(x\in(0, \epsilon)\). Noticing that \(C=\lim_{x\rightarrow0+} \frac{F(x)}{x}\) is a nonzero constant and \(G(x)\) is the inverse function of \(F(x)\), we have \(\lim_{x\rightarrow0+} \frac{G(x)}{x}=\frac{1}{C}\), which means that there exists a positive constant δ such that
$$G(x)\leq\frac{2}{C}x\quad \mbox{and}\quad F(x)\leq2Cx $$
for any \(x\in(0, \delta)\).
Let \(T=\inf\{ t| |R(X(t;x_{0}))|\geq\frac{|\gamma|}{2}\}\), where \(x_{0}<\frac{\min\{\delta,\epsilon\}}{4(1+C)N_{W}}\). It is obvious that \(T>0\). Suppose that T is finite; then \(|R(X(T))|=\frac{|\gamma|}{2}\) and \(|R(X(t))|\leq\frac{|\gamma|}{2}\) for \(t\in[0, T]\), so we have from (2.8)
$$\frac{dY(t)}{dt}=\bigl[\gamma+R\bigl(X(t)\bigr)\bigr]Y(t)\leq \frac{\gamma}{2}Y(t). $$
So
$$\begin{aligned}& Y(t) \leq e^{\frac{\gamma t}{2}}Y(0)=e^{\frac{\gamma t}{2}}F(x_{0}) \leq2Ce^{\frac{\gamma t}{2}}x_{0}\leq\delta, \\& X(t) = \exp\bigl\{ \sigma'(0)W(t)\bigr\} G\bigl(Y(t)\bigr) \\& \hphantom{X(t)} \leq \exp\bigl\{ M_{W}(1+\sqrt{2t\log\log t})\bigr\} G \bigl(2Ce^{\frac{\gamma t}{2}}x_{0}\bigr) \\& \hphantom{X(t)} \leq \exp\bigl\{ M_{W}(1+\sqrt{2t\log\log t})\bigr\} \frac{2}{C}\cdot 2Ce^{\frac{\gamma t}{2}}x_{0} \\& \hphantom{X(t)} = 4\exp\biggl\{ M_{W}(1+\sqrt{2t\log\log t})+ \frac{\gamma t}{2}\biggr\} x_{0} \\& \hphantom{X(t)} \leq 4N_{W} x_{0}< \epsilon. \end{aligned}$$
By the definition of ϵ we get \(|R(X(t))|<\frac{|\gamma |}{3}\) for any \(t\in[0, T]\), which contradicts \(|R(X(T))|=\frac{|\gamma|}{2}\). So T is infinite and \(|R(X(t))|\leq\frac{|\gamma |}{2}\) for all \(t\geq0\), which together with (2.8) gives \(Y(t)\leq e^{\frac{\gamma t}{2}}F(x_{0})\). This means that
$$X(t;x_{0})= \exp\bigl\{ \sigma'(0)W(t)\bigr\} G\bigl(Y(t) \bigr)\leq4\exp\biggl\{ M_{W}(1+\sqrt {2t\log\log t})+\frac{\gamma t}{2} \biggr\} x_{0}, $$
where \(x_{0}<\frac{\min\{\delta,\epsilon\}}{4(1+C)N_{W}}\). According to Definition 1, we immediately know that the trivial solution of (2.1) is exponentially stable.

This completes the proof. □

Remark 1

In the proof of Theorem 1, the constant \(N_{W}\) depends on \(W(t)\), which means that \(N_{W}\) may differ for different trajectories \(W(t)\) of the Brownian motion. Theorem 1 can be strengthened as follows: almost surely, for every \(\omega \in\Omega\) there exists a neighborhood \(O_{\omega}(0)\) of the origin such that, for any \(x_{0}\in O_{\omega}(0)\),
$$\lim_{t\rightarrow\infty}e^{-\lambda t}\bigl\vert X(t;x_{0}) \bigr\vert =0, $$
where \(\lambda< b'(0)-\frac{\sigma'(0)^{2}}{2}\).

Theorem 2

If \(\gamma>0\), then the trivial solution of (2.1) is unstable.

Proof

Suppose that \(x_{0}>0\). There exists a positive constant ϵ such that
$$F(x)>\frac{Cx}{2} \quad \mbox{and}\quad R(x)>-\frac{\gamma}{2}, $$
where \(x\in(0, \epsilon]\).
Let \(T=\inf\{t|X(t;x_{0})\geq \epsilon\}\), where \(x_{0}\in(0, \epsilon]\). It is easy to see that \(T>0\). For any \(t\in[0, T)\), we know that
$$\frac{dY(t)}{dt}=\bigl[\gamma+R\bigl(X(t)\bigr)\bigr]Y(t)\geq \frac{\gamma}{2}Y(t), $$
which, together with the comparison principle of ordinary differential equation, gives
$$Y(t)\geq Y(0)e^{\frac{\gamma}{2}t}=F(x_{0})e^{\frac{\gamma}{2}t}\geq \frac{Ce^{\frac{\gamma}{2} t}x_{0}}{2}. $$
If \(P\{T=\infty\}>0\), then on the event \(\{T=\infty\}\) we have, for sufficiently large t,
$$ X(t;x_{0}) = \exp\bigl\{ \sigma'(0)W(t)\bigr\} G\bigl(Y(t) \bigr)\geq\exp\bigl\{ \sigma '(0)W(t)\bigr\} G\biggl( \frac{Ce^{\frac{\gamma}{2} t}x_{0}}{2}\biggr). $$

By the law of the iterated logarithm for Brownian motion, almost surely we have \(\limsup_{t\rightarrow\infty}X(t;x_{0})=\infty\) when \(T=\infty\), which is a contradiction.

So \(P\{T<\infty\}=1\), which means that \(\sup_{t>0}|X(t;x_{0})|\geq \epsilon\) a.s., and thus the trivial solution of (2.1) is unstable.

The conclusion in the case \(x_{0}<0\) can be proved similarly, so we omit it here. This completes the proof of Theorem 2. □
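A Monte Carlo sketch (ours, with illustrative parameters) of the criterion \(\gamma=b'(0)-\sigma'(0)^{2}/2\) in the linear case \(b(x)=kx\), \(\sigma(x)=\sigma x\), where (2.1) has the explicit solution \(X(t)=x_{0}\exp\{(k-\sigma^{2}/2)t+\sigma W(t)\}\): for \(\gamma<0\) almost all paths are tiny at a large time T, while for \(\gamma>0\) almost all paths are huge.

```python
import numpy as np

# gamma = k - sigma**2/2 decides stability in the linear case; sample
# X(T) = x0*exp((k - sigma**2/2)*T + sigma*W(T)) for two values of k.
# All parameter values are illustrative.
rng = np.random.default_rng(2)
sigma, x0, T = 1.0, 1.0, 50.0
W_T = rng.normal(0.0, np.sqrt(T), size=10_000)  # W(T) ~ N(0, T)

# k = 0: gamma = -0.5 < 0, Theorem 1 regime (exponential decay)
X_stable = x0 * np.exp((0.0 - sigma**2 / 2) * T + sigma * W_T)
frac_stable = np.mean(X_stable < 1e-3)

# k = 1: gamma = +0.5 > 0, Theorem 2 regime (instability)
X_unstable = x0 * np.exp((1.0 - sigma**2 / 2) * T + sigma * W_T)
frac_unstable = np.mean(X_unstable > 1e3)

print(frac_stable, frac_unstable)  # both close to 1
```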

3.2 High dimensional situation I

Let \(\Gamma_{1}=\partial b(0)-\frac{1}{2}A^{2}\) in equation (2.13). Then we have the following results.

Theorem 3

If \(\partial b(0)\) and A are commutative, and the real parts of all the eigenvalues of \(\Gamma_{1}\) are negative, then the trivial solution of (2.1) is exponentially stable.

Proof

Since \(\partial b(0)\) and A are commutative, \(\partial H(0)\) and A are also commutative and (2.13) can be simplified to
$$ \left \{ \textstyle\begin{array}{l} \frac{dY(t)}{dt} = \partial H(0)[\partial b(0) -\frac{1}{2} A^{2}]\partial G(0)Y(t)+\eta(\exp\{AW(t)\}Y(t)), \\ Y(0) = H(x_{0}). \end{array}\displaystyle \right . $$
We suppose that the maximum of the real part of all the eigenvalues of \(\Gamma_{1}\) is −λ (\(\lambda>0\)). If \(Z(t)=e^{\frac{\lambda t}{3}}Y(t)\), then \(Z(t)\) satisfies
$$ \left \{ \textstyle\begin{array}{l} \frac{dZ(t)}{dt} = \partial H(0)[\Gamma_{1}+\frac{\lambda}{3} I]\partial G(0)Z(t)+e^{\frac{\lambda t}{3}}\eta(\exp\{AW(t)-\frac {\lambda t}{3}\}Z(t)), \\ Z(0) = H(x_{0}). \end{array}\displaystyle \right . $$

Finally we complete the proof of Theorem 3 by arguing as in the proof of Theorem 1. □
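The hypothesis of Theorem 3 is easy to check numerically. The sketch below (illustrative matrices of ours) uses linear coefficients \(b(x)=Bx\), \(\sigma(x)=Ax\) with commuting A and B; in that case \(X(t)=\exp\{(B-\frac{1}{2}A^{2})t+AW(t)\}x_{0}\) solves (2.1), and \(\Gamma_{1}=B-\frac{1}{2}A^{2}\) governs the decay.

```python
import numpy as np

# Linear commuting example: b(x) = B x, sigma(x) = A x with AB = BA, so
# X(t) = exp((B - A^2/2) t + A W(t)) x0 and Gamma_1 = B - A^2/2.
# Matrices, t, and x0 are illustrative choices.
def expm(M, terms=60):
    # truncated power series exp(M) = sum_k M**k / k!
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, -1.0], [1.0, 0.0]])   # d sigma(0), rotation generator
B = -1.0 * np.eye(2)                      # d b(0); commutes with A
Gamma1 = B - 0.5 * (A @ A)                # here Gamma1 = -0.5 I
assert np.all(np.linalg.eigvals(Gamma1).real < 0)  # Theorem 3 hypothesis

rng = np.random.default_rng(3)
t, x0 = 5.0, np.array([1.0, 0.0])
Wt = rng.normal(0.0, np.sqrt(t))          # a sample of W(t)
Xt = expm((B - 0.5 * (A @ A)) * t + A * Wt) @ x0
print(np.linalg.norm(Xt))  # exp(-0.5*t), since exp(A*Wt) is a rotation
```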

Theorem 4

If \(\partial b(0)\) and A are commutative, and some eigenvalue of \(\Gamma_{1}\) has positive real part, then the trivial solution of (2.1) is unstable.

Proof

We omit the proof of Theorem 4 because we can obtain it in a completely parallel way to the proof of Theorem 2. □

When all the eigenvalues of \(\Gamma_{1}\) have zero real part, we cannot determine the stability of the trivial solution of (2.1).

Example 1

Let \(\sigma(x)=Ax\) and \(b(x)=\frac{1}{2}A^{2} x\). Then the solution of (2.1) is
$$X(t;x_{0})=\exp\bigl\{ AW(t)\bigr\} x_{0}. $$

It is easy to see that if the product of all the eigenvalues of A is 1, then the trivial solution of (2.1) is stable; otherwise, the trivial solution of (2.1) is unstable.
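One concrete instance of Example 1 (an illustration of ours): for the skew-symmetric A below, with eigenvalues \(\pm i\), \(\exp\{Aw\}\) is a rotation for every w, so \(|X(t)|=|x_{0}|\) along every trajectory and the trivial solution is stable (though not exponentially stable). By contrast, for \(A=I\) the solution is \(e^{W(t)}x_{0}\) and \(\limsup_{t} e^{W(t)}=\infty\) almost surely, so the trivial solution is unstable.

```python
import numpy as np

# Example 1 check: with sigma(x) = A x and b(x) = (1/2) A^2 x, the solution
# is X(t) = exp(A W(t)) x0. For skew-symmetric A, exp(A w) is a rotation,
# so |X(t)| = |x0| for every sampled value w of W(t).
def expm(M, terms=40):
    # truncated power series exp(M) = sum_k M**k / k!
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, -1.0], [1.0, 0.0]])   # eigenvalues +/- i
x0 = np.array([0.3, 0.4])                 # |x0| = 0.5
for w in [-2.0, 0.5, 3.0]:                # sample values of W(t)
    X = expm(A * w) @ x0
    assert abs(np.linalg.norm(X) - np.linalg.norm(x0)) < 1e-8
print("norm preserved")
```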

3.3 High dimensional situation II

Let
$$ \Gamma_{d} =\partial b(0)-\frac{1}{2}\sum _{k=1}^{d}A_{k}^{2} $$
in equation (2.14). Similarly we have the following results in these situations.

Theorem 5

If \(\partial b(0)\), \(A_{1}, \ldots, A_{d}\) are commutative and the real parts of all the eigenvalues of \(\Gamma_{d}\) are negative, then the trivial solution of (2.1) is exponentially stable.

Theorem 6

If \(\partial b(0)\), \(A_{1}, \ldots, A_{d}\) are commutative and some eigenvalue of \(\Gamma_{d}\) has positive real part, then the trivial solution of (2.1) is unstable.
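Checking the hypothesis of Theorem 5 reduces to an eigenvalue computation. A sketch for \(d=2\), using illustrative commuting matrices of our own choosing:

```python
import numpy as np

# Form Gamma_2 = d b(0) - (A1^2 + A2^2)/2 for illustrative commuting
# matrices A1, A2 and an illustrative Jacobian dB of b at 0, then
# inspect the real parts of its eigenvalues (Theorem 5 hypothesis).
A1 = np.array([[0.0, -1.0], [1.0, 0.0]])   # d sigma_1(0), rotation generator
A2 = 0.5 * np.eye(2)                       # d sigma_2(0); commutes with A1
dB = -1.0 * np.eye(2)                      # d b(0); commutes with A1 and A2
Gamma2 = dB - 0.5 * (A1 @ A1 + A2 @ A2)    # = -0.625 I here
eigs = np.linalg.eigvals(Gamma2)
print(eigs.real)  # all negative: Theorem 5 gives exponential stability
```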

Declarations

Acknowledgements

The author is thankful to the referees for their helpful suggestions and necessary corrections in the completion of this paper.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
School of Mathematics and Statistics, Central South University, Changsha, China
(2)
School of Statistics, Henan University of Economics and Law, Zhengzhou, China

References

  1. Kushner, H: Stochastic Stability and Control. Academic Press, New York (1967)
  2. Khasminskii, R: Stochastic Stability of Differential Equations. Springer, Berlin (1980)
  3. Krystul, J, Blom, H: Generalized stochastic hybrid processes as strong solutions of stochastic differential equations. Hybridge report 2 (2005)
  4. Boukas, E: Stochastic Switching Systems: Analysis and Design. Birkhäuser, Boston (2005)
  5. Mao, X: Stochastic Differential Equations and Applications. Horwood, Chichester (1997)
  6. Karatzas, I, Shreve, S: Brownian Motion and Stochastic Calculus (2000)
  7. Skorohod, A: Asymptotic Method in the Theory of Stochastic Differential Equations. Am. Math. Soc., Providence (1989)
  8. Mao, X: Stability of stochastic differential equations with Markovian switching. Stoch. Process. Appl. 79, 45-67 (1999)

Copyright

© Yan 2015
