Open Access

A note on the solutions of neutral SFDEs with infinite delay

Journal of Inequalities and Applications 2013, 2013:181

https://doi.org/10.1186/1029-242X-2013-181

Received: 24 September 2012

Accepted: 1 April 2013

Published: 16 April 2013

Abstract

The main aim of this paper is to discuss the existence and uniqueness of solutions to neutral stochastic functional differential equations with infinite delay under a non-Lipschitz condition and a weakened linear growth condition. Furthermore, an estimate of the error between the approximate solution and the accurate solution is given.

MSC:60H05, 60H10.

Keywords

approximate solution; existence; infinite delay; uniqueness

1 Introduction

Stochastic differential equations (SDEs) are well known to model problems in many areas of science and engineering. For instance, in 2006 Henderson and Plaschko [1] published Stochastic Differential Equations in Science and Engineering; in 2007 Mao [2] published Stochastic Differential Equations and Applications; and in 2010 Li and Fu [3] studied the stability of stochastic functional differential equations with infinite delay and its application to recurrent neural networks.

In recent years there has been increasing interest in stochastic functional differential equations (SFDEs); see [1, 4–7] and the references therein for details.

On the one hand, Kolmanovskii and Myshkis [8] introduced the following neutral stochastic differential equations with finite delay:
$$d\big[x(t)-G(x_t)\big]=f(t,x_t)\,dt+g(t,x_t)\,dB(t),$$

which could be used in chemical engineering and aeroelasticity. Since then, the theory of neutral SDEs has been developed by researchers (see [2, 9, 10]).

Later, Ren and Xia [11] established the existence and uniqueness of the solution to the following neutral SFDEs under Carathéodory-type conditions, with Lipschitz conditions and non-Lipschitz conditions as special cases:
$$x(t)=\xi(0)+G(t,x_t)-G(t_0,\xi)+\int_{t_0}^{t}f(s,x_s)\,ds+\int_{t_0}^{t}g(s,x_s)\,dB(s)\quad\text{a.s.}$$

This kind of neutral SFDE has a practical background, for instance in the collision problem in electrodynamics. The extra noise B can be regarded as extra information which cannot be detected within the electrodynamic system itself but is available to particular observers.

Motivated by [11], one objective of this paper is to give a proof of the existence and uniqueness theorem for the given neutral SFDEs under a condition that improves the one given in [11]. The other objective is to estimate how fast the Picard iterations $x^n(t)$ converge to the unique solution $x(t)$ of the neutral SFDEs.

2 Preliminaries and notation

Let $|\cdot|$ denote the Euclidean norm in $\mathbb{R}^n$. If $A$ is a vector or a matrix, its transpose is denoted by $A^{T}$; if $A$ is a matrix, its trace norm is represented by $|A|=\sqrt{\operatorname{trace}(A^{T}A)}$. Let $t_0$ be a positive constant and let $(\Omega,\mathcal{F},P)$, throughout this paper unless otherwise specified, be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\ge t_0}$ satisfying the usual conditions (i.e., it is right continuous and $\mathcal{F}_{t_0}$ contains all $P$-null sets). Assume that $B(t)$ is an $m$-dimensional Brownian motion defined on this probability space, that is, $B(t)=(B_1(t),B_2(t),\dots,B_m(t))^{T}$. We consider the following spaces:

  • $BC((-\infty,0];\mathbb{R}^d)$ denotes the family of bounded continuous $\mathbb{R}^d$-valued functions $\varphi$ defined on $(-\infty,0]$ with norm $\|\varphi\|=\sup_{-\infty<\theta\le 0}|\varphi(\theta)|$.

  • $L^1([t_0,T];\mathbb{R}^d)$ denotes the family of all $\mathbb{R}^d$-valued measurable $\mathcal{F}_t$-adapted processes $\psi(t)=\psi(t,\omega)$, $t\in[t_0,T]$, such that $\int_{t_0}^{T}|\psi(t)|\,dt<\infty$.

  • $L^2([t_0,T];\mathbb{R}^{d\times m})$ denotes the family of all $\mathbb{R}^{d\times m}$-valued measurable $\mathcal{F}_t$-adapted processes $\psi(t)=\psi(t,\omega)$, $t\in[t_0,T]$, such that $\int_{t_0}^{T}|\psi(t)|^2\,dt<\infty$.

  • $M^2((-\infty,T];\mathbb{R}^d)$ denotes the family of all $\mathbb{R}^d$-valued measurable $\mathcal{F}_t$-adapted processes $\psi(t)=\psi(t,\omega)$, $t\in(-\infty,T]$, such that $E\int_{-\infty}^{T}|\psi(t)|^2\,dt<\infty$.

With all the above preparation, consider the following $d$-dimensional neutral SFDE:
$$d\big[x(t)-G(t,x_t)\big]=f(t,x_t)\,dt+g(t,x_t)\,dB(t),\quad t_0\le t\le T,\tag{2.1}$$
where $x_t=\{x(t+\theta):-\infty<\theta\le 0\}$ can be regarded as a $BC((-\infty,0];\mathbb{R}^d)$-valued stochastic process, let
$$f:[t_0,T]\times BC((-\infty,0];\mathbb{R}^d)\to\mathbb{R}^d,\qquad g:[t_0,T]\times BC((-\infty,0];\mathbb{R}^d)\to\mathbb{R}^{d\times m}$$
be Borel measurable, and let
$$G:[t_0,T]\times BC((-\infty,0];\mathbb{R}^d)\to\mathbb{R}^d$$
be continuous. Next, we give the initial value of (2.1) as follows:
$$x_{t_0}=\xi=\{\xi(\theta):-\infty<\theta\le 0\}\ \text{is an }\mathcal{F}_{t_0}\text{-measurable, }BC((-\infty,0];\mathbb{R}^d)\text{-valued random variable with }E\|\xi\|^2<\infty.\tag{2.2}$$

To be more precise, we give the definition of a solution of equation (2.1) with initial data (2.2).

Definition 2.1 An $\mathbb{R}^d$-valued stochastic process $x(t)$ defined on $-\infty<t\le T$ is called a solution of (2.1) with initial data (2.2) if $x(t)$ has the following properties:

  (i) $x(t)$ is continuous and $\{x(t)\}_{t_0\le t\le T}$ is $\mathcal{F}_t$-adapted;

  (ii) $\{f(t,x_t)\}\in L^1([t_0,T];\mathbb{R}^d)$ and $\{g(t,x_t)\}\in L^2([t_0,T];\mathbb{R}^{d\times m})$;

  (iii) $x_{t_0}=\xi$ and, for each $t_0\le t\le T$,
  $$x(t)=\xi(0)+G(t,x_t)-G(t_0,\xi)+\int_{t_0}^{t}f(s,x_s)\,ds+\int_{t_0}^{t}g(s,x_s)\,dB(s)\quad\text{a.s.}\tag{2.3}$$

The solution $x(t)$ is said to be unique if any other solution $\bar{x}(t)$ is indistinguishable from $x(t)$, that is,
$$P\{x(t)=\bar{x}(t)\ \text{for all }-\infty<t\le T\}=1.$$

The following lemma, known as the moment inequality for stochastic integrals, was established by Mao [2] and will play an important role in the next section.

Lemma 2.1 If $p\ge 2$ and $g\in M^2([0,T];\mathbb{R}^{d\times m})$ is such that
$$E\int_0^{T}|g(s)|^{p}\,ds<\infty,$$
then
$$E\Big(\sup_{0\le t\le T}\Big|\int_0^{t}g(s)\,dB(s)\Big|^{p}\Big)\le\Big(\frac{p^3}{2(p-1)}\Big)^{p/2}T^{\frac{p-2}{2}}E\int_0^{T}|g(s)|^{p}\,ds.$$
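In particular, for $p=2$ (the only case needed below) the constant is $\big(\tfrac{2^3}{2(2-1)}\big)^{1}=4$ and $T^{(p-2)/2}=1$, so the lemma reduces to the Doob-type bound
$$E\Big(\sup_{0\le t\le T}\Big|\int_0^{t}g(s)\,dB(s)\Big|^{2}\Big)\le 4\,E\int_0^{T}|g(s)|^{2}\,ds.$$
This factor 4 is the source of the constants of the form $T-t_0+4$ appearing in the estimates later in the paper.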

In order to obtain the solution of (2.1) with initial value (2.2), we impose the following assumptions:

(H1)

(1a) There exists a function $\Gamma(t,u):[t_0,T]\times\mathbb{R}_+\to\mathbb{R}_+$ such that
$$|f(t,\varphi)-f(t,\psi)|^2\vee|g(t,\varphi)-g(t,\psi)|^2\le\Gamma(t,\|\varphi-\psi\|^2)$$

for all $\varphi,\psi\in BC((-\infty,0];\mathbb{R}^d)$ and $t\in[t_0,T]$.

(1b) $\Gamma(t,u)$ is locally integrable in $t$ for each fixed $u\in\mathbb{R}_+$ and is continuous, nondecreasing, and concave in $u$ for each fixed $t\ge t_0$. Moreover, $\Gamma(t,0)=0$ and, if a nonnegative continuous function $Z(t)$, $t_0\le t\le T$, satisfies
$$Z(t)\le D\int_{t_0}^{t}\Gamma(s,Z(s))\,ds,\quad t_0\le t\le T,$$

where $D>0$ is a constant, then $Z(t)=0$ for all $t_0\le t\le T$.

(1c) For any constant $K>0$, the deterministic ordinary differential equation
$$\frac{du}{dt}=K\Gamma(t,u),\quad t_0\le t\le T,$$

has a global solution for any initial value $u_0$.

(H2) For any $t\in[t_0,T]$, it holds that $f(t,0),g(t,0)\in L^2$ and
$$|f(t,0)|^2\vee|g(t,0)|^2\le K,$$

where $K$ is a positive constant.

(H3) There exists a positive number $K_0$ with $K_0<\alpha^2$ ($0<\alpha<1$) such that, for any $\varphi,\psi\in BC((-\infty,0];\mathbb{R}^d)$ and $t\in[t_0,T]$,
$$|G(t,\varphi)-G(t,\psi)|^2\le K_0\|\varphi-\psi\|^2.$$
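As a simple illustration (not part of the assumptions themselves), the linear choice $\Gamma(t,u)=au$ with a constant $a>0$, which corresponds to the globally Lipschitz case in (1a), satisfies both (1b) and (1c):
$$Z(t)\le D\int_{t_0}^{t}aZ(s)\,ds\ \Longrightarrow\ Z(t)=0\ \text{(Gronwall)},\qquad\frac{du}{dt}=Ka\,u\ \Longrightarrow\ u(t)=u_0e^{Ka(t-t_0)}.$$
Concave nonlinear choices of $\Gamma$ are also admissible; a closely related choice of $\Gamma$ appears in Theorem 3.3 below.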

3 Existence and uniqueness of the solution

Now we give the existence and uniqueness theorem for (2.1) with initial value (2.2) under the Carathéodory-type conditions.

Theorem 3.1 Assume that (H1)-(H3) hold. Then there exists a unique solution to the neutral SFDEs (2.1) with initial value (2.2). Moreover, the solution belongs to M 2 ( ( , T ] ; R d ) .

In order to obtain the existence of solutions to the neutral SFDEs, let $x^0_{t_0}=\xi$ and $x^0(t)=\xi(0)$ for $t_0\le t\le T$. For each $n=1,2,\dots$, set $x^n_{t_0}=\xi$ and define the following Picard sequence:
$$x^n(t)=\xi(0)+G(t,x^{n-1}_t)-G(t_0,x^{n-1}_{t_0})+\int_{t_0}^{t}f(s,x^{n-1}_s)\,ds+\int_{t_0}^{t}g(s,x^{n-1}_s)\,dB(s).$$
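To make the scheme concrete, the following minimal Python sketch runs a few Picard steps for a scalar toy example, approximating the two integrals by an Euler–Maruyama rule along one fixed Brownian path. The particular coefficients G, f, g, the initial segment xi, and the truncation of the infinite delay are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

t0, T, dt = 0.0, 1.0, 1e-3
tgrid = np.arange(t0, T + dt, dt)        # discretisation of [t0, T]
hist_len = 1000
hgrid = np.arange(-hist_len, 1) * dt     # truncated history grid, theta in [-1, 0]
xi = np.exp(hgrid)                       # initial segment xi(theta) (illustrative)

def seg(hist, fwd, k):
    """Sampled segment x_{t_k}: history on theta < 0 plus the path on [t0, t_k]."""
    return np.concatenate([hist[:-1], fwd[:k + 1]])

# Example coefficients acting on a sampled segment (most recent value last).
def G(t, s):  return 0.1 * s[-1]         # neutral term (illustrative)
def f(t, s):  return -s[-1]              # drift (illustrative)
def g(t, s):  return 0.5 * s[-1]         # diffusion (illustrative)

dB = rng.normal(0.0, np.sqrt(dt), size=len(tgrid) - 1)   # one fixed Brownian path

# Picard iteration: x^0(t) = xi(0) on [t0, T]; each step evaluates the integral
# equation above with the previous iterate inside f, g and the neutral term G.
x_prev = np.full(len(tgrid), xi[-1])
for n in range(5):
    x_next = np.empty_like(x_prev)
    x_next[0] = xi[-1]
    drift, stoch = 0.0, 0.0
    for k in range(1, len(tgrid)):
        s_prev = seg(xi, x_prev, k - 1)
        drift += f(tgrid[k - 1], s_prev) * dt
        stoch += g(tgrid[k - 1], s_prev) * dB[k - 1]
        x_next[k] = xi[-1] + G(tgrid[k], seg(xi, x_prev, k)) - G(t0, xi) + drift + stoch
    print(f"Picard step {n + 1}: max change {np.max(np.abs(x_next - x_prev)):.3e}")
    x_prev = x_next
```

Under assumptions of the type (H1)–(H3), the printed maximal change between successive iterates is expected to shrink as the iteration proceeds, which is what the error estimate in Theorem 3.3 below quantifies.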

We prepare a lemma in order to prove this theorem.

Lemma 3.2 Let assumptions (H1) and (H3) hold. If $x(t)$ is a solution of equation (2.1) with initial data (2.2), then
$$E\Big(\sup_{t_0<s\le t}|x(s)|^2\Big)\le u_t\le u_T<\infty,\quad t_0\le t\le T,$$

where $u_t=\beta_1+\beta_2\int_{t_0}^{t}\Gamma(s,u_s)\,ds$, $\beta_1=K(T-t_0)\beta_2+\frac{K_0+3\alpha^2}{(1-\alpha)(\alpha^2-K_0)}E\|\xi\|^2$, and $\beta_2=\frac{6\alpha^2(T-t_0+4)}{(1-\alpha)(\alpha^2-K_0)}$. In particular, $x(t)$ belongs to $M^2((-\infty,T];\mathbb{R}^d)$.

Proof For each integer $n\ge 1$, define the stopping time
$$\tau_n=T\wedge\inf\{t\in[t_0,T]:|x(t)|\ge n\}.$$
Obviously, as $n\to\infty$, $\tau_n\uparrow T$ a.s. Let $x^n(t)=x(t\wedge\tau_n)$, $t\in(-\infty,T]$. Then, for $t_0\le t\le T$, $x^n(t)$ satisfies the following equation:
$$x^n(t)=G(t,x^n_t)-G(t_0,x^n_{t_0})+J^n(t),$$
where
$$J^n(t)=\xi(0)+\int_{t_0}^{t}f(s,x^n_s)I_{[t_0,\tau_n]}(s)\,ds+\int_{t_0}^{t}g(s,x^n_s)I_{[t_0,\tau_n]}(s)\,dB(s).$$
Applying the elementary inequality $(a+b)^2\le\frac{a^2}{\alpha}+\frac{b^2}{1-\alpha}$ for $a,b\ge 0$ and $0<\alpha<1$, we have
$$|x^n(t)|^2\le\frac{1}{\alpha}|G(t,x^n_t)-G(t_0,x^n_{t_0})|^2+\frac{1}{1-\alpha}|J^n(t)|^2\le\frac{K_0}{\alpha^2}\|x^n_t\|^2+\frac{K_0}{\alpha(1-\alpha)}\|\xi\|^2+\frac{1}{1-\alpha}|J^n(t)|^2,$$
where condition (H3) has also been used. Taking the expectation on both sides, one sees that
$$E\Big(\sup_{t_0\le s\le t}|x^n(s)|^2\Big)\le\frac{K_0}{\alpha^2}E\Big(\sup_{-\infty<s\le t}|x^n(s)|^2\Big)+\frac{K_0}{\alpha(1-\alpha)}E\|\xi\|^2+\frac{1}{1-\alpha}E\Big(\sup_{t_0\le s\le t}|J^n(s)|^2\Big).$$
Noting that $\sup_{-\infty<s\le t}|x^n(s)|^2\le\|\xi\|^2+\sup_{t_0\le s\le t}|x^n(s)|^2$, we get
$$E\Big(\sup_{t_0\le s\le t}|x^n(s)|^2\Big)\le\frac{K_0}{\alpha^2}E\Big(\sup_{t_0\le s\le t}|x^n(s)|^2\Big)+\Big(\frac{K_0}{\alpha^2}+\frac{K_0}{\alpha(1-\alpha)}\Big)E\|\xi\|^2+\frac{1}{1-\alpha}E\Big(\sup_{t_0\le s\le t}|J^n(s)|^2\Big).$$
Consequently,
$$E\Big(\sup_{t_0\le s\le t}|x^n(s)|^2\Big)\le\frac{K_0}{(1-\alpha)(\alpha^2-K_0)}E\|\xi\|^2+\frac{\alpha^2}{(1-\alpha)(\alpha^2-K_0)}E\Big(\sup_{t_0\le s\le t}|J^n(s)|^2\Big).\tag{3.1}$$
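For completeness, the weighted elementary inequality $(a+b)^2\le\frac{a^2}{\alpha}+\frac{b^2}{1-\alpha}$ used above (and repeatedly below) is a direct consequence of Young's inequality:
$$2ab\le\frac{1-\alpha}{\alpha}a^2+\frac{\alpha}{1-\alpha}b^2\quad\Longrightarrow\quad(a+b)^2=a^2+2ab+b^2\le\frac{1}{\alpha}a^2+\frac{1}{1-\alpha}b^2,\qquad a,b\ge 0,\ 0<\alpha<1.$$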
On the other hand, by Hölder's inequality, Lemma 2.1, and condition (H2), one can show that
$$E\Big(\sup_{t_0\le s\le t}|J^n(s)|^2\Big)\le\beta+6(T-t_0+4)\int_{t_0}^{t}\Gamma\Big(s,E\Big(\sup_{t_0<u\le s}|x^n(u)|^2\Big)\Big)\,ds,$$
where $\beta=3E\|\xi\|^2+6K(T-t_0)(T-t_0+4)$. Substituting this into (3.1) yields that
$$E\Big(\sup_{t_0<s\le t}|x^n(s)|^2\Big)\le\beta_1+\beta_2\int_{t_0}^{t}\Gamma\Big(s,E\Big(\sup_{t_0<u\le s}|x^n(u)|^2\Big)\Big)\,ds,$$
where $\beta_1=K(T-t_0)\beta_2+\frac{K_0+3\alpha^2}{(1-\alpha)(\alpha^2-K_0)}E\|\xi\|^2$ and $\beta_2=\frac{6\alpha^2(T-t_0+4)}{(1-\alpha)(\alpha^2-K_0)}$. Assumption (1c) indicates that there is a solution $u_t$ satisfying
$$u_t=\beta_1+\beta_2\int_{t_0}^{t}\Gamma(s,u_s)\,ds.$$
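For instance, with the linear choice $\Gamma(t,u)=au$ from the illustration after (H3), this comparison equation can be solved explicitly:
$$u_t=\beta_1+a\beta_2\int_{t_0}^{t}u_s\,ds\quad\Longrightarrow\quad u_t=\beta_1e^{a\beta_2(t-t_0)},\qquad u_T=\beta_1e^{a\beta_2(T-t_0)}<\infty,$$
so the bound below is finite and fully explicit in that case.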
Since $E\|\xi\|^2<\infty$, for all $n=1,2,\dots$, we deduce that
$$E\Big(\sup_{t_0<s\le t}|x^n(s)|^2\Big)\le u_t\le u_T<\infty.$$
Letting $t=T$, it then follows that
$$E\Big(\sup_{t_0<s\le T}|x(s\wedge\tau_n)|^2\Big)\le u_T.$$
Thus,
$$E\Big(\sup_{t_0<s\le\tau_n}|x(s)|^2\Big)\le u_T.$$

Consequently, the required result follows by letting $n\to\infty$. □

Proof of Theorem 3.1 To check the uniqueness, let $x(t)$ and $\bar{x}(t)$ be any two solutions of (2.1); by Lemma 3.2, $x(t),\bar{x}(t)\in M^2((-\infty,T];\mathbb{R}^d)$. Note that
$$x(t)-\bar{x}(t)=G(t,x_t)-G(t,\bar{x}_t)+J_1(t),$$
where $J_1(t)=\int_{t_0}^{t}[f(s,x_s)-f(s,\bar{x}_s)]\,ds+\int_{t_0}^{t}[g(s,x_s)-g(s,\bar{x}_s)]\,dB(s)$. One then gets
$$|x(t)-\bar{x}(t)|^2\le\frac{1}{\alpha}|G(t,x_t)-G(t,\bar{x}_t)|^2+\frac{1}{1-\alpha}|J_1(t)|^2,$$
where $0<\alpha<1$. We derive that
$$|x(t)-\bar{x}(t)|^2\le\frac{K_0}{\alpha}\|x_t-\bar{x}_t\|^2+\frac{1}{1-\alpha}|J_1(t)|^2.$$
Therefore,
$$E\Big(\sup_{t_0\le s\le t}|x(s)-\bar{x}(s)|^2\Big)\le\frac{K_0}{\alpha}E\Big(\sup_{t_0\le s\le t}|x(s)-\bar{x}(s)|^2\Big)+\frac{1}{1-\alpha}E\Big(\sup_{t_0\le s\le t}|J_1(s)|^2\Big).$$
Consequently,
$$E\Big(\sup_{t_0\le s\le t}|x(s)-\bar{x}(s)|^2\Big)\le\frac{\alpha}{(1-\alpha)(\alpha-K_0)}E\Big(\sup_{t_0\le s\le t}|J_1(s)|^2\Big).$$
On the other hand, one can show that
$$E\Big(\sup_{t_0\le s\le t}|J_1(s)|^2\Big)\le 2(T-t_0+4)\int_{t_0}^{t}\Gamma\Big(s,E\Big(\sup_{t_0\le u\le s}|x(u)-\bar{x}(u)|^2\Big)\Big)\,ds.$$
This yields that
$$E\Big(\sup_{t_0\le s\le t}|x(s)-\bar{x}(s)|^2\Big)\le\frac{2\alpha(T-t_0+4)}{(1-\alpha)(\alpha-K_0)}\int_{t_0}^{t}\Gamma\Big(s,E\Big(\sup_{t_0\le u\le s}|x(u)-\bar{x}(u)|^2\Big)\Big)\,ds.$$
Hence, setting $Z(t)=E\big(\sup_{t_0\le s\le t}|x(s)-\bar{x}(s)|^2\big)$, we have
$$Z(t)\le\frac{2\alpha(T-t_0+4)}{(1-\alpha)(\alpha-K_0)}\int_{t_0}^{t}\Gamma(s,Z(s))\,ds.$$

By assumption (1b), we get $Z(t)=0$. This implies that $x(t)=\bar{x}(t)$ for $t_0\le t\le T$. Therefore, for all $-\infty<t\le T$, $x(t)=\bar{x}(t)$ a.s. The uniqueness is proved.

Next, we check the existence. Obviously, from the Picard iterations, $x^0(t)\in M^2([t_0,T];\mathbb{R}^d)$. Moreover, one can show the boundedness of the sequence $\{x^n(t),n\ge 0\}$, namely that $x^n(t)\in M^2((-\infty,T];\mathbb{R}^d)$. In fact,
$$x^n(t)=G(t,x^{n-1}_t)-G(t_0,x^{n-1}_{t_0})+J^{n-1}(t),$$
where
$$J^{n-1}(t)=\xi(0)+\int_{t_0}^{t}f(s,x^{n-1}_s)\,ds+\int_{t_0}^{t}g(s,x^{n-1}_s)\,dB(s).$$
Applying the elementary inequality $(a+b)^2\le\frac{a^2}{\alpha}+\frac{b^2}{1-\alpha}$ for $a,b\ge 0$ and $0<\alpha<1$, we have
$$|x^n(t)|^2\le\frac{1}{\alpha}|G(t,x^{n-1}_t)-G(t_0,\xi)|^2+\frac{1}{1-\alpha}|J^{n-1}(t)|^2\le\frac{K_0}{\alpha^2}\|x^{n-1}_t\|^2+\frac{K_0}{\alpha(1-\alpha)}\|\xi\|^2+\frac{1}{1-\alpha}|J^{n-1}(t)|^2,$$
where condition (H3) has also been used. Taking the expectation on both sides, one sees that
(3.2)
On the other hand, by Hölder's inequality and the conditions, one can show that
where $\gamma_1=3E\|\xi\|^2+\gamma_2K(T-t_0)$ and $\gamma_2=6(T-t_0+4)$. Substituting this into (3.2) yields that
where $\gamma_3=\frac{K(T-t_0)}{1-\alpha}\gamma_2+\frac{(1-\alpha)K_0+\alpha(4-\alpha)}{\alpha(1-\alpha)}E\|\xi\|^2$. Note that, for any $k\ge 1$,
Therefore, one can derive that
Assumption (1c) indicates that there is a solution $u_t$ satisfying
$$u_t=\frac{\gamma_3\alpha^2}{\alpha^2-K_0}+\frac{\gamma_2\alpha^2}{(1-\alpha)(\alpha^2-K_0)}\int_{t_0}^{t}\Gamma(s,u_s)\,ds.$$
Since $E\|\xi\|^2<\infty$, for all $n=0,1,2,\dots$, we deduce that
$$E\Big(\sup_{t_0\le s\le t}|x^n(s)|^2\Big)\le u_t\le u_T<\infty,\tag{3.3}$$

which shows the boundedness of the sequence $\{x^n(t),n\ge 0\}$.

Next, we show that the sequence $\{x^n(t)\}$ is a Cauchy sequence. For all $n,m\ge 0$ and $t_0\le t\le T$, we have
$$x^{n+1}(t)-x^{m+1}(t)=G(t,x^n_t)-G(t,x^m_t)+\int_{t_0}^{t}[f(s,x^n_s)-f(s,x^m_s)]\,ds+\int_{t_0}^{t}[g(s,x^n_s)-g(s,x^m_s)]\,dB(s).$$
Next, using the elementary inequality $(u+v)^2\le\frac{1}{\alpha}u^2+\frac{1}{1-\alpha}v^2$ and condition (H3), we derive that
$$|x^{n+1}(t)-x^{m+1}(t)|^2\le\frac{K_0}{\alpha}\|x^n_t-x^m_t\|^2+\frac{1}{1-\alpha}|J_{n,m}(t)|^2,$$
where $J_{n,m}(t)=\int_{t_0}^{t}[f(s,x^n_s)-f(s,x^m_s)]\,ds+\int_{t_0}^{t}[g(s,x^n_s)-g(s,x^m_s)]\,dB(s)$. Consequently,
On the other hand, by Hölder's inequality, Lemma 2.1, and condition (1a), one can show that
where we have used the elementary inequality $(u+v)^2\le 2(u^2+v^2)$. This yields that
Let
$$Z(t)=\limsup_{n,m\to\infty}E\Big(\sup_{t_0\le s\le t}|x^{n+1}(s)-x^{m+1}(s)|^2\Big).$$
We get
$$Z(t)\le\frac{2\alpha(T-t_0+4)}{(1-\alpha)(\alpha-K_0)}\int_{t_0}^{t}\Gamma(s,Z(s))\,ds.$$
By assumption (1b), we get $Z(t)=0$. This shows that the sequence $\{x^n(t),n\ge 0\}$ is a Cauchy sequence in $L^2$. Hence, as $n\to\infty$, $x^n(t)\to x(t)$, that is, $E|x^n(t)-x(t)|^2\to 0$. Letting $n\to\infty$ in (3.3) then yields that
$$E\Big(\sup_{t_0\le s\le t}|x(s)|^2\Big)\le u_t\le u_T<\infty.\tag{3.4}$$
Therefore, we obtain that $x(t)\in M^2((-\infty,T];\mathbb{R}^d)$. It remains to show that $x(t)$ satisfies (2.3).
Noting that the sequence $x^n(t)$ converges uniformly on $(-\infty,T]$, we have
$$E\Big(\sup_{t_0\le u\le s}|x^n(u)-x(u)|^2\Big)\to 0$$
as $n\to\infty$, and further
$$\Gamma\Big(s,E\Big(\sup_{t_0\le u\le s}|x^n(u)-x(u)|^2\Big)\Big)\to 0$$
as $n\to\infty$. Hence, taking limits on both sides of the Picard sequence, we obtain that
$$x(t)=\xi(0)+G(t,x_t)-G(t_0,x_{t_0})+\int_{t_0}^{t}f(s,x_s)\,ds+\int_{t_0}^{t}g(s,x_s)\,dB(s).$$

The above expression demonstrates that $x(t)$ is a solution of equation (2.1) satisfying the initial condition (2.2). This completes the proof of existence. □

In the proof of Theorem 3.1, we have shown that the Picard iterations $x^n(t)$ converge to the unique solution $x(t)$ of equation (2.1). The following theorem gives an estimate of the difference between $x^n(t)$ and $x(t)$ under a special condition, and it clearly shows that one can use the Picard iteration procedure to obtain approximate solutions to equation (2.1).

Theorem 3.3 Let $x(t)$ be the unique solution of equation (2.1) with initial data (2.2), let $x^n(t)$ be the Picard iterations defined in the previous section, and let the function $\Gamma$ satisfy $\Gamma(t,u)=\frac{au}{t}$ with $a>0$. Then, for all $n\ge 1$,
$$E\Big(\sup_{t_0\le t\le T}|x^n(t)-x(t)|^2\Big)\le M\exp\Big(\frac{2\alpha^2(T-t_0+4)}{t_0(1-\alpha)^2(\alpha^2-K_0)}(T-t_0)\Big).$$
Proof From the Picard sequence and the solution of equation (2.3), we have
$$x^n(t)-x(t)=G(t,x^{n-1}_t)-G(t,x_t)+\int_{t_0}^{t}[f(s,x^{n-1}_s)-f(s,x_s)]\,ds+\int_{t_0}^{t}[g(s,x^{n-1}_s)-g(s,x_s)]\,dB(s).$$
We can derive that
(3.5)
(3.6)
where
$$J(t)=\int_{t_0}^{t}[f(s,x^{n-1}_s)-f(s,x_s)]\,ds+\int_{t_0}^{t}[g(s,x^{n-1}_s)-g(s,x_s)]\,dB(s).$$
On the other hand, by Hölder's inequality and the assumption on $\Gamma$, one can show that
Without loss of generality, we may assume $t_0>0$. This yields that
$$E\Big(\sup_{t_0\le s\le t}|J(s)|^2\Big)\le\frac{2}{t_0}(T-t_0+4)\int_{t_0}^{t}E\Big(\sup_{t_0\le u\le s}|x^{n-1}(u)-x(u)|^2\Big)\,ds.$$
Substituting this into (3.5) yields that
$$E\Big(\sup_{t_0<s\le t}|x^n(s)-x(s)|^2\Big)\le M+\frac{2\alpha(T-t_0+4)}{t_0(1-\alpha)(\alpha^2-K_0)}\int_{t_0}^{t}E\Big(\sup_{t_0\le u\le s}|x^n(u)-x(u)|^2\Big)\,ds,$$
where
$$M=\frac{\alpha K_0}{(1-\alpha)(\alpha^2-K_0)}E\Big(\sup_{t_0<s\le t}|x^{n-1}(s)-x^n(s)|^2\Big)+\frac{2\alpha^2(T-t_0+4)}{t_0(1-\alpha)^2(\alpha^2-K_0)}\int_{t_0}^{t}E\Big(\sup_{t_0\le r\le s}|x^n(r)-x^{n-1}(r)|^2\Big)\,ds.$$
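For convenience, we recall the form of Gronwall's inequality applied in the final step (a standard fact): if $v$ is a nonnegative continuous function on $[t_0,T]$ satisfying $v(t)\le C+L\int_{t_0}^{t}v(s)\,ds$ with constants $C,L\ge 0$, then
$$v(t)\le C\,e^{L(t-t_0)},\qquad t_0\le t\le T.$$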
Now the required result follows by applying the Gronwall inequality:
$$E\Big(\sup_{t_0\le t\le T}|x^n(t)-x(t)|^2\Big)\le M\exp\Big(\frac{2\alpha^2(T-t_0+4)}{t_0(1-\alpha)^2(\alpha^2-K_0)}(T-t_0)\Big).$$

The proof is complete. □

Declarations

Acknowledgements

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2012-0008474).

Authors’ Affiliations

(1)
Department of Mathematics, Changwon National University

References

  1. Henderson D, Plaschko P: Stochastic Differential Equations in Science and Engineering. World Scientific, Singapore; 2006.
  2. Mao X: Stochastic Differential Equations and Applications. Horwood, Chichester; 2007.
  3. Li X, Fu X: Stability analysis of stochastic functional differential equations with infinite delay and its application to recurrent neural networks. J. Comput. Appl. Math. 2010, 234: 407–417. 10.1016/j.cam.2009.12.033
  4. Halidias N: Remarks and corrections on 'An existence theorem for stochastic functional differential equations with delays under weak assumptions, Statistics and Probability Letters 78, 2008' by N. Halidias and Y. Ren. Stat. Probab. Lett. 2009, 79: 2220–2222. 10.1016/j.spl.2009.07.021
  5. Kim Y-H: An estimate on the solutions for stochastic functional differential equations. J. Appl. Math. Inform. 2011, 29(5–6): 1549–1556.
  6. Ren Y, Lu S, Xia N: Remarks on the existence and uniqueness of the solution to stochastic functional differential equations with infinite delay. J. Comput. Appl. Math. 2008, 220: 364–372. 10.1016/j.cam.2007.08.022
  7. Wei F, Wang K: The existence and uniqueness of the solution for stochastic functional differential equations with infinite delay. J. Math. Anal. Appl. 2007, 331: 516–531. 10.1016/j.jmaa.2006.09.020
  8. Kolmanovskii VB, Myshkis A: Applied Theory of Functional Differential Equations. Kluwer Academic, Norwell; 1992.
  9. Mao X, Shen Y, Yuan C: Almost surely asymptotic stability of neutral stochastic differential delay equations with Markovian switching. Stoch. Process. Appl. 2008, 118: 1385–1406. 10.1016/j.spa.2007.09.005
  10. Ren Y, Xia N: Existence, uniqueness and stability of the solutions to neutral stochastic functional differential equations with infinite delay. Appl. Math. Comput. 2009, 210: 72–79. 10.1016/j.amc.2008.11.009
  11. Ren Y, Xia N: A note on the existence and uniqueness of the solution to neutral stochastic functional differential equations with infinite delay. Appl. Math. Comput. 2009, 214: 457–461. 10.1016/j.amc.2009.04.013

Copyright

© Kim; licensee Springer 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.