Open Access

Existence of stationary distributions for a class of nonlinear time series models in random environment domain

Journal of Inequalities and Applications 2011, 2011:63

https://doi.org/10.1186/1029-242X-2011-63

Received: 13 March 2011

Accepted: 19 September 2011

Published: 19 September 2011

Abstract

In this paper, we study a class of nonlinear time series models $X_{n+1} = F(X_n, e_{n+1}(Z_{n+1}))$, in which $\{Z_n\}$ is a Markov chain with finite state space and, for every state $i$ of the Markov chain, $\{e_n(i)\}$ is a sequence of independent and identically distributed random variables. The existence of a stationary distribution of the sequence $\{X_n\}$ defined by the above model is investigated, and some novel results on the underlying models are presented.

2010 Mathematics Subject Classification: 60J10

Keywords

Stationary distribution; Nonlinear time series; Random environment

1 Introduction

It is known that stochastic difference equations provide models that represent a broad class of discrete-time stochastic systems, and a unified representation leads to the following general model (see, e.g., [1–6]):
$$X_{n+1} = F(X_n, e_{n+1}), \quad n \geq 0,$$
(1.1)

where $F : R^q \times R^q \to R^q$ is a Borel measurable mapping and $\{e_n\}$ is a sequence of independent and identically distributed $q$-dimensional random vectors on a probability space $(\Omega, \mathscr{F}, P)$. It can be seen that the sequence $\{X_n\}$ defined in (1.1) forms a temporally homogeneous Markov chain with state space $(R^q, \mathcal{B}^q)$ whenever $X_0$ is a random variable on $(\Omega, \mathscr{F}, P)$ independent of $\{e_n\}$ (see, e.g., [1–4]).

It has been recognized that model (1.1) is of great practical significance. However, its limitations are obvious: it neglects the fact that interference with a system is affected by the environment; see, for example, [7–9] and the references therein. Generally speaking, the interference with a system changes when the environment changes. In view of this fact, in the present paper we introduce a model that improves model (1.1) to a certain extent.

Let $(\Omega, \mathscr{F}, P)$ be a probability space and $(R^q, \mathcal{B}^q)$ a measurable space, where $R^q$ is the $q$-dimensional real space and $\mathcal{B}^q$ is the $\sigma$-algebra consisting of all Borel subsets of $R^q$. Let $\mu_q$ denote Lebesgue measure on $(R^q, \mathcal{B}^q)$. Let $E = \{1, 2, \ldots, m\}$ be a finite set, and let $\mathcal{F}$ stand for the $\sigma$-algebra consisting of all subsets of $E$. Let $\{Z_n, n \geq 1\}$ be an irreducible, aperiodic, time-homogeneous Markov chain defined on $(\Omega, \mathscr{F}, P)$ and taking values in the state space $(E, \mathcal{F})$, with transition probabilities $p_{ij} = P(Z_{n+1} = j \mid Z_n = i)$, $i, j \in E$. Let $\{e_n(1)\}, \ldots, \{e_n(m)\}$ be i.i.d. random vector sequences defined on $(\Omega, \mathscr{F}, P)$ and taking values in $(R^q, \mathcal{B}^q)$. They are mutually independent and, for every $i \in E$, $\{Z_n\}$ is independent of $\{e_n(i)\}$. Let
$$e_n(Z_n) = \sum_{i=1}^{m} e_n(i)\, I_{\{i\}}(Z_n),$$
where $I_{\{i\}}(Z_n)$ is the indicator function of the single-point set $\{i\}$. We introduce the following definition of the model under study.

Definition 1.1 If
$$X_{n+1} = F(X_n, e_{n+1}(Z_{n+1})), \quad X_0 \in R^q,$$
(1.2)

where $F : (R^q \times R^q \times E, \mathcal{B}^q \times \mathcal{B}^q \times \mathcal{F}) \to (R^q, \mathcal{B}^q)$ is a Borel measurable mapping; $\{Z_n\}, \{e_n(1)\}, \ldots, \{e_n(m)\}$ are mutually independent and satisfy: both $Z_n$ and $e_n(i)$ are independent of $X_0$, $E e_n(i) = 0$, and $E|e_n(i)| < \infty$ for every $i \in E$ and $n \geq 0$; then the model defined by (1.2) is called a general nonlinear time series model in a random environment, written as REGNLTS (random environment general nonlinear time series).
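To make the definition concrete, the following sketch simulates a scalar REGNLTS path. The map $F$, the environment transition matrix, and the noise laws below are illustrative choices made for this sketch, not taken from the paper.

```python
import random

def simulate_regnlts(F, P, noise_samplers, x0, z0, n_steps, rng):
    """Iterate X_{n+1} = F(X_n, e_{n+1}(Z_{n+1})), where {Z_n} is a Markov
    chain on E = {0, ..., m-1} with transition matrix P, and e_n(i) is drawn
    i.i.d. from noise_samplers[i]."""
    x, z = x0, z0
    path = [x]
    m = len(P)
    for _ in range(n_steps):
        # move the environment chain: Z_{n+1} ~ P(z, .)
        z = rng.choices(range(m), weights=P[z])[0]
        # draw the environment-dependent innovation e_{n+1}(Z_{n+1})
        e = noise_samplers[z](rng)
        x = F(x, e)
        path.append(x)
    return path

# Example: a contracting map F(x, e) = 0.5*x + e with two noise regimes.
P = [[0.9, 0.1], [0.2, 0.8]]
samplers = [lambda r: r.gauss(0.0, 0.1),   # calm environment
            lambda r: r.gauss(0.0, 1.0)]   # volatile environment
rng = random.Random(0)
path = simulate_regnlts(lambda x, e: 0.5 * x + e, P, samplers,
                        x0=0.0, z0=0, n_steps=1000, rng=rng)
```

For this contracting choice of $F$ the simulated path stays stochastically bounded, which is the behaviour that the existence results of Section 4 address.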

In the present paper, we are interested in the stationary solution of the sequence {X n } which is generated iteratively by (1.2).

We also recall the following definitions, which can be found in [7].

Definition 1.2 (see [7]). Assume that $\{X_n\}$ is a sequence of $q$-dimensional random vectors which obeys the REGNLTS model (1.2).

(i) Let $\pi$ be a probability distribution. If, for every $n \geq 1$, $X_n \sim \pi$ whenever $X_0 \sim \pi$, then $\pi$ is called an invariant distribution of the model (1.2).

(ii) If $X_0 \sim \pi$ and $\pi$ is the invariant distribution of model (1.2), then the sequence $\{X_n\}$ generated iteratively by (1.2) and started from the initial value $X_0$ is called a stationary solution of the model (1.2).

2 Basic notions

In this section, we provide notions and preliminary properties of stationary distributions that will be used in subsequent sections.

Definition 2.1 (see [10, 11]). Suppose $(\mathcal{X}, \mathcal{B})$ is a measurable space and $\{X_n\}$ is a homogeneous Markov chain with state space $(\mathcal{X}, \mathcal{B})$ and transition probabilities $P^{(n)}$, $n = 1, 2, \ldots$. A probability measure $\pi$ defined on $\mathcal{B}$ is called a stationary distribution for $\{X_n\}$ if, for any $A \in \mathcal{B}$,
$$\pi(A) = \int_{\mathcal{X}} \pi(dx) P(x, A).$$
(2.1)

It is easy to see that $\{X_n\}$ is a strictly stationary process when $X_0 \sim \pi$, where $\pi$ is a stationary distribution for $\{X_n\}$. Furthermore, if $\{X_n\}$ is $\varphi$-irreducible, then $\{X_n\}$ has a stationary distribution if and only if $\{X_n\}$ is ergodic.
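When the state space is finite, (2.1) reduces to the matrix equation $\pi = \pi P$, which can be checked numerically. The 3-state chain below is an arbitrary illustration, not taken from the paper.

```python
import numpy as np

# An arbitrary 3-state transition matrix (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Solve pi P = pi, sum(pi) = 1: pi is the left eigenvector of P for
# eigenvalue 1, i.e. a right eigenvector of P.T.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()

# Check equation (2.1) for every singleton set A = {j}:
assert np.allclose(pi @ P, pi)
```

Since the chain above has all entries positive, it is irreducible and aperiodic, so the eigenvalue 1 is simple and $\pi$ is unique.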

Let $\mathcal{G} = \{g \mid g$ is a finite non-negative measurable function defined on $(\mathcal{X}, \mathcal{B})\}$. Define a mapping $\mathbf{A}$ on $\mathcal{G}$ by
$$\mathbf{A}g(x) = E[g(X_{n+1}) - g(X_n) \mid X_n = x] = \int_{\mathcal{X}} P(x, dy)\, g(y) - g(x).$$

$\mathbf{A}g(x)$ is called the $g$-drift at $x$ for $\{X_n\}$.

Proposition 2.2 Suppose that $g \in \mathcal{G}$ and $\pi$ is a stationary distribution for $\{X_n\}$. If $g(x)$ is integrable with respect to $\pi$ on $\mathcal{X}$, then
$$\int_{\mathcal{X}} \pi(dx)\, \mathbf{A}g(x) = 0.$$
(2.2)

In particular, when $g$ is a non-negative bounded measurable function on $(\mathcal{X}, \mathcal{B})$, the above equality holds.

Proof From (2.1) and Fubini's theorem, we have
$$\int_{\mathcal{X}} \pi(dx)\, g(x) = \int_{\mathcal{X}} \pi(dy) \int_{\mathcal{X}} P(y, dx)\, g(x).$$

Since $g(x)$ is integrable with respect to $\pi$, it is easy to see that equality (2.2) holds.   □
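For a two-state chain, Proposition 2.2 can be verified directly: the stationary average of the $g$-drift vanishes. The transition matrix and the function $g$ below are arbitrary choices for this sketch.

```python
import numpy as np

# Arbitrary 2-state chain and a finite non-negative function g.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
g = np.array([1.0, 5.0])

# pi = (4/7, 3/7) solves pi P = pi for this P.
pi = np.array([4/7, 3/7])

# g-drift Ag(x) = sum_y P(x, y) g(y) - g(x) at each state x.
drift = P @ g - g

# Proposition 2.2: the drift integrates to zero against pi.
assert abs(pi @ drift) < 1e-12
```

Here the drift is positive at the low-$g$ state and negative at the high-$g$ state, and the stationary weights balance the two exactly.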

Before moving further, we introduce some notation.

Let
$$G_{\frac{1}{2}}(x, A) = \sum_{n=1}^{\infty} \frac{1}{2^n} P^{(n)}(x, A), \quad x \in \mathcal{X},\ A \in \mathcal{B},$$
and
$$D = \Big\{ A \in \mathcal{B} \;\Big|\; G_{\frac{1}{2}}(x, A) > 0,\ \forall x \in \mathcal{X} \Big\}.$$

3 Preliminary results

Lemma 3.1 (see [7–9]). Suppose that $\{X_n\}$ is the iterative sequence in (1.2). Then $\{(X_n, Z_n)\}$ is a time-homogeneous Markov chain with state space $(R^q \times E, \mathcal{B}^q \times \mathcal{F})$.

Theorem 3.2 Suppose $\{(X_n, Z_n)\}$ is a time-homogeneous Markov chain with state space $(R^q \times E, \mathcal{B}^q \times \mathcal{F})$ that has a stationary distribution $\pi_1 \times \pi_2$, and $(X_0, Z_0) \sim \pi_1 \times \pi_2$. Then for $\pi(A) := (\pi_1 \times \pi_2)(A \times E)$ ($A \in \mathcal{B}^q$) we have
$$\pi(A) = \int_{R^q} \pi(dx) P(x, A).$$
(3.1)

Proof Setting $\pi(A) = (\pi_1 \times \pi_2)(A \times E)$, we have
$$\begin{aligned} \pi(A) &= (\pi_1 \times \pi_2)(A \times E) = P(X_n \in A, Z_n \in E) = P(X_1 \in A) \\ &= \int_{R^q} P(X_1 \in A \mid X_0 = x) P(X_0 \in dx) = \int_{R^q} \pi(dx) P(X_1 \in A \mid X_0 = x). \end{aligned}$$

Hence $\pi(A) = \int_{R^q} \pi(dx) P(x, A)$.   □

4 Main results

In this section, we present the main results of this paper. To begin with, we recall the following lemma, which provides a criterion for the non-existence of a stationary distribution for a general state space Markov chain.

Lemma 4.1 (see [3]). Let $V(x) \in \mathcal{G}$. Suppose there exist $A \in \mathcal{B}$ and a family of functions $V_z(x)$, $z \in (a, b)$, on $(\mathcal{X}, \mathcal{B})$ such that

(i) $\sup_{x \in A} V(x) < +\infty$, and $V(y) \geq \sup_{x \in A} V(x)$ for $y \in A^c$;

(ii) for any $z \in (a, b)$, $V_z(x)$ is a non-negative bounded measurable function on $(\mathcal{X}, \mathcal{B})$;

(iii) $\mathbf{A}V(x) \leq \liminf_{z \to b^-} \mathbf{A}V_z(x)$, $x \in \mathcal{X}$, and $\mathbf{A}V_z(x)$, $z \in (a, b)$, is uniformly bounded below on $\mathcal{X}$, i.e., there exists $N > 0$ such that $\mathbf{A}V_z(x) \geq -N$, $x \in \mathcal{X}$, $z \in (a, b)$;

(iv) $A^c \subset D$, and $\mathbf{A}V(x) > 0$, $x \in A^c$.

Then the Markov chain $\{X_n\}$ has no stationary distribution.

On this basis, we obtain some criteria for the existence of stationary solutions of the model (1.2).

Theorem 4.2 Suppose there exist a strictly positive measurable function $V(x, i)$ on $(R^q \times E, \mathcal{B}^q \times \mathcal{F})$ and a set $A = \{(x, i) \in R^q \times E \mid V(x, i) \leq m\}$ (for some $m > 0$) such that

(i) for all $x \in R^q$, $j \in E$, $V(F(x, y(j)), j)$ is integrable with respect to $D_j(\cdot)$ on $R^q$, where $D_j(\cdot)$ denotes the probability distribution of $e_n(j)$;

(ii) $V(F(x, y(j)), j) \geq V(T(x), j) - \theta(x, j)\alpha(y(j))$, $x \in R^q$, $y \in R^q$, $j \in E$, where $T(\cdot)$ is a measurable mapping on $(R^q, \mathcal{B}^q)$, $\theta(\cdot)$ is a bounded measurable function on $(R^q \times E, \mathcal{B}^q \times \mathcal{F})$, $\alpha(\cdot)$ is a measurable function defined on $(R^q \times E, \mathcal{B}^q \times \mathcal{F})$, and for every $j \in E$, $\alpha(\cdot(j))$ is integrable with respect to $D_j(\cdot)$;

(iii) $A^c \subset D$, and $V(T(x), j) > V(x, i) + c_j \theta(x, j)$ for $(x, i) \in A^c$, $j \in E$, where $c_j = \int_{R^q} \alpha(y(j)) D_j(dy)$.

Then the Markov chain $(X_n, Z_n)$ determined by Equation (1.2) has no stationary distribution and, consequently, model (1.2) has no stationary distribution.

Proof Using condition (i) and the integral transformation formula, we have
$$\int_{R^q \times E} P(X, dY) V(Y) = \int_{R^q} \sum_{j=1}^{m} p_{ij} V(F(x, y(j)), j) D_j(dy) = \sum_{j=1}^{m} p_{ij} \int_{R^q} V(F(x, y(j)), j) D_j(dy) < +\infty,$$
where $X = (x, i)$, $Y = (y, j)$. Taking
$$V_z(x, i) = \frac{1 - z^{V(x, i)}}{1 - z}, \quad z \in (0, 1),$$
and applying L'Hospital's rule, we have
$$V(x, i) = \lim_{z \to 1^-} V_z(x, i), \quad x \in R^q,\ i \in E,$$
and
$$0 \leq V_z(x, i) \leq 1 + V(x, i), \quad x \in R^q,\ i \in E,\ z \in (0, 1).$$
By the dominated convergence theorem, we have
$$\lim_{z \to 1^-} \int_{R^q \times E} P(X, dY) V_z(Y) = \int_{R^q \times E} P(X, dY) V(Y), \quad X \in R^q \times E,$$
and therefore
$$\lim_{z \to 1^-} \mathbf{A}V_z(X) = \mathbf{A}V(X), \quad X \in R^q \times E.$$
Using conditions (ii) and (iii), we have, for $(x, i) \in A^c$,
$$\begin{aligned} \mathbf{A}V(X) &= \int_{R^q \times E} P(X, dY) V(Y) - V(X) \\ &= \int_{R^q} \sum_{j=1}^{m} p_{ij} V(F(x, y(j)), j) D_j(dy) - V(X) \\ &= \sum_{j=1}^{m} p_{ij} \int_{R^q} V(F(x, y(j)), j) D_j(dy) - V(X) \\ &\geq \sum_{j=1}^{m} p_{ij} \int_{R^q} \big[ V(T(x), j) - \theta(x, j)\alpha(y(j)) \big] D_j(dy) - V(X) \\ &= \sum_{j=1}^{m} p_{ij} V(T(x), j) \int_{R^q} D_j(dy) - \sum_{j=1}^{m} p_{ij} \theta(x, j) \int_{R^q} \alpha(y(j)) D_j(dy) - V(X) \\ &= \sum_{j=1}^{m} p_{ij} \big( V(T(x), j) - \theta(x, j) c_j - V(x, i) \big) \\ &> 0. \end{aligned}$$
Next, we prove that $\mathbf{A}V_z(X)$ is uniformly bounded below. Denoting
$$B(x, j) = \{ y \in R^q : V(T(x), j) > V(F(x, y(j)), j) \},$$
we have
$$\begin{aligned} -\mathbf{A}V_z(x, i) &= -\Big[ \int_{R^q \times E} P(X, dY) V_z(Y) - V_z(X) \Big] \\ &= \int_{R^q \times E} \frac{1 - z^{V(X)}}{1 - z} - \frac{1 - z^{V(Y)}}{1 - z}\, P(X, dY) \\ &= \int_{R^q \times E} \frac{z^{V(Y)} - z^{V(X)}}{1 - z}\, P(X, dY) \\ &= \int_{R^q} \sum_{j=1}^{m} p_{ij} \frac{z^{V(F(x, y(j)), j)} - z^{V(T(x), j)}}{1 - z} D_j(dy) + \sum_{j=1}^{m} p_{ij} \frac{z^{V(T(x), j)} - z^{V(x, i)}}{1 - z} \\ &\leq \sum_{j=1}^{m} p_{ij} \int_{B(x, j)} \big[ 1 + V(T(x), j) - V(F(x, y(j)), j) \big] D_j(dy) + \sum_{j=1}^{m} p_{ij} \sup_{V(T(x), j) < V(x, i)} \big[ 1 + V(x, i) - V(T(x), j) \big] \\ &\leq 2 + \sum_{j=1}^{m} p_{ij} \int_{B(x, j)} \theta(x, j)\alpha(y(j)) D_j(dy) + \sum_{j=1}^{m} p_{ij} \max\Big\{ m, \sup_{(x, i) \in A^c} \big[ V(x, i) - V(T(x), j) \big] \Big\} \\ &\leq 2 + \sum_{j=1}^{m} p_{ij} |\theta(x, j)| \int_{R^q} |\alpha(y(j))| D_j(dy) + \sum_{j=1}^{m} p_{ij} \max\Big\{ m, |c_j| \sup_{(x, i) \in A^c} |\theta(x, j)| \Big\}. \end{aligned}$$
Note that, for every $j \in E$, $\alpha(\cdot(j))$ is integrable with respect to $D_j(\cdot)$, so
$$\int_{R^q} |\alpha(y(j))| D_j(dy) < +\infty;$$

therefore $\mathbf{A}V_z(X)$ is uniformly bounded below. By Lemma 4.1, $\{(X_n, Z_n)\}$ has no stationary distribution and, consequently, $\{X_n\}$ has no stationary distribution.   □

The following theorem is another form of Theorem 4.2.

Theorem 4.3 Suppose there exist a strictly positive measurable function $V(x, i)$ on $(R^q \times E, \mathcal{B}^q \times \mathcal{F})$ and a set $A = \{(x, i) \in R^q \times E \mid V(x, i) \leq K\}$ (for some $K > 0$) such that

(i) for all $x \in R^q$, $j \in E$, $V(F(x, y(j)), j)$ is integrable with respect to $D_j(\cdot)$ on $R^q$, where $D_j(\cdot)$ denotes the probability distribution of $e_n(j)$;

(ii) $V(F(x, y(j)), j) \geq V(T(x), j) - \sum_{k=1}^{l} \theta_k(x, j)\alpha_k(y(j))$, $x \in R^q$, $y \in R^q$, $j \in E$, where $T(\cdot)$ is a measurable mapping on $(R^q, \mathcal{B}^q)$, each $\theta_k(\cdot)$ is a bounded measurable function on $(R^q \times E, \mathcal{B}^q \times \mathcal{F})$, each $\alpha_k(\cdot)$ is a measurable function defined on $(R^q \times E, \mathcal{B}^q \times \mathcal{F})$, and $c_{kj} = \int_{R^q} \alpha_k(y(j)) D_j(dy)$, $k = 1, 2, \ldots, l$, exists and is finite;

(iii) $A^c \subset D$, and for $(x, i) \in A^c$ we have $V(T(x), j) > V(x, i) + \sum_{k=1}^{l} \theta_k(x, j) c_{kj}$, $j \in E$.

Then the Markov chain $(X_n, Z_n)$ determined by Equation (1.2) has no stationary distribution and, consequently, there is no stationary distribution for $\{X_n\}$.

Proof The proof is similar to that of Theorem 4.2 and is omitted.   □

Remark Note that Theorems 4.2 and 4.3 can be generalized to a general measurable space.

Remark The system $x_{n+1} = T(x_n)$ defined by the mapping $T(x)$ in Theorems 4.2 and 4.3 is called the corresponding deterministic part of (1.2). Theorems 4.2 and 4.3 show that the existence of a stationary solution of Equation (1.2) depends, to some extent, on the rate of increase or decrease of the Lyapunov function of the deterministic part along its trajectories (i.e., on whether it can overcome the influence of the noise).
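The remark can be seen numerically in a scalar special case (all choices below are illustrative, not from the paper): a deterministic part that pushes outward faster than the noise scale, $T(x) = x + 3$, produces divergent paths, the non-existence situation of Theorems 4.2 and 4.3, while a contracting part $T(x) = 0.5x$ keeps the path stochastically bounded, the existence situation of Theorem 4.4.

```python
import random

def run(T, n_steps, rng):
    # iterate X_{n+1} = T(X_n) + e_{n+1} with standard Gaussian noise
    x = 1.0
    for _ in range(n_steps):
        x = T(x) + rng.gauss(0.0, 1.0)
    return x

rng = random.Random(2)
x_diverge = run(lambda x: x + 3.0, 200, rng)   # outward drift beats the noise
x_stable = run(lambda x: 0.5 * x, 200, rng)    # contraction overcomes the noise
```

After 200 steps the drifting path sits near 600 while the contracting path remains of order one; only the latter can support a stationary distribution.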

Theorem 4.4 Suppose the Markov chain $\{(X_n, Z_n)\}$ determined by Equation (1.2) is a weak Feller chain, i.e., for every bounded continuous function $g$ on $R^q \times E$, $Pg := \int_{R^q \times E} P(X, dY) g(Y)$ is still a bounded continuous function on $R^q \times E$. Suppose further that there exist constants $r_k \geq 1$, $k = 1, 2, \ldots, l$, a nonempty compact subset $A$, and a non-negative measurable function $V(X)$ on $(R^q \times E, \mathcal{B}^q \times \mathcal{F})$ such that

(i) $V(F(x, y(j)), j) \leq \sum_{k=1}^{l} \big[ H_k(x, j) + \theta_k(x, j)\alpha_k(y(j)) \big]^{r_k}$, $x \in R^q$, $y \in R^q$, where $H_k(x, j)$, $\theta_k(x, j)$, $k = 1, 2, \ldots, l$, $j \in E$, are non-negative measurable functions on $(R^q \times E, \mathcal{B}^q \times \mathcal{F})$ that are bounded on $A$, and $\alpha_k(y(j))$, $k = 1, 2, \ldots, l$, are non-negative measurable functions on $(R^q \times E, \mathcal{B}^q \times \mathcal{F})$ such that $\alpha_k^{r_k}(y(j))$ is integrable with respect to $D_j(\cdot)$;

(ii) there exist $\varepsilon > 0$ and a family of sets $\{B(x, j)\}$, $B(x, j) \subset R^q$, such that, when $(x, i) \in A^c$ and $y \in B(x, j)$, $j \in E$, we have
$$V(F(x, y(j)), j) \leq V(x, i) - \varepsilon - \sum_{k=1}^{l} \big[ H_k(x, j) D_j^{\frac{1}{r_k}}(B(x, j)^c) + \theta_k(x, j) c_{kj}(x) \big]^{r_k},$$
where $c_{kj}(x) = \big[ \int_{B(x, j)^c} \alpha_k^{r_k}(y(j)) D_j(dy) \big]^{\frac{1}{r_k}}$.

Then the Markov chain $\{(X_n, Z_n)\}$ has a stationary distribution and, consequently, $\{X_n\}$ has a stationary distribution.

Proof We have
$$E[V(X_{n+1}) \mid X_n = (x, i)] = \int_{R^q \times E} V(Y) P(X, dY) = \int_{R^q} \sum_{j=1}^{m} p_{ij} V(F(x, y(j)), j) D_j(dy).$$
Using Minkowski's inequality, for every $(x, i) \in A^c$ we have
$$E[V(X_{n+1}) \mid X_n = (x, i)] \leq \sum_{j=1}^{m} p_{ij} \int_{B(x, j)} V(F(x, y(j)), j) D_j(dy) + \sum_{k=1}^{l} \sum_{j=1}^{m} p_{ij} \int_{B(x, j)^c} \big( H_k(x, j) + \theta_k(x, j)\alpha_k(y(j)) \big)^{r_k} D_j(dy) \leq V(x, i) - \varepsilon.$$
Besides, we have
$$E[V(X_{n+1}) \mid X_n = (x, i)] \leq \sum_{k=1}^{l} \int_{R^q} \sum_{j=1}^{m} p_{ij} \big( H_k(x, j) + \theta_k(x, j)\alpha_k(y(j)) \big)^{r_k} D_j(dy) \leq \sum_{j=1}^{m} p_{ij} \sum_{k=1}^{l} \Big[ H_k(x, j) + \theta_k(x, j) \Big( \int_{R^q} \alpha_k^{r_k}(y(j)) D_j(dy) \Big)^{\frac{1}{r_k}} \Big]^{r_k},$$
and hence
$$\sup_{(x, i) \in A} E[V(X_{n+1}) \mid X_n = (x, i)] < +\infty.$$

The conclusion follows from Theorems 2 and 3 of [12].   □

Corollary 4.5 Suppose the Markov chain $\{(X_n, Z_n)\}$ determined by Equation (1.2) is a weak Feller chain and $T$ is a measurable mapping on $(R^q, \mathcal{B}^q)$. If there exist a non-negative measurable function $V(X)$ on $(R^q \times E, \mathcal{B}^q \times \mathcal{F})$ and a nonempty compact subset $A$ of $R^q$ such that

(i) $V(F(x, y(j)), j) \leq V(T(x), j) + \sum_{k=1}^{l} \theta_k(x, j)\alpha_k(y(j))$, $x \in R^q$, $y \in R^q$, $j \in E$, where $\alpha_k(y(j))$, $\theta_k(x, j)$, $k = 1, 2, \ldots, l$, $j \in E$, are measurable functions on $(R^q \times E, \mathcal{B}^q \times \mathcal{F})$; for every $j \in E$, $V(T(x), j)$ and $\theta_k(x, j)$ are bounded on $A$; and $\alpha_k(y(j))$, $k = 1, 2, \ldots, l$, are integrable with respect to $D_j(\cdot)$;

(ii) $V(T(x), j) \leq V(x, j) - \sum_{k=1}^{l} c_{kj} \theta_k(x, j) - \varepsilon$, $x \in A^c$, where $c_{kj} = \int_{R^q} \alpha_k(y(j)) D_j(dy)$, $k = 1, 2, \ldots, l$.

Then the Markov chain $\{(X_n, Z_n)\}$ has a stationary distribution and, consequently, $\{X_n\}$ has a stationary distribution.

Proof The proof is similar to that of Theorem 4.4 and is omitted.   □

Remark The corollary does not require $\theta_k(x, j)$ and $\alpha_k(y(j))$, $k = 1, 2, \ldots, l$, to be non-negative, which makes it easier to apply.

5 Example

Consider the following class of models:
$$X_{n+1} = T(X_n) + \theta(X_n) e_{n+1}(Z_{n+1}).$$
(5.1)

Here, $X_n$ takes values in $R^q$, $T : R^q \to R^q$ is a Borel measurable mapping, $\theta(x)$ is a $q \times q$ matrix function on $R^q$ each of whose elements is Borel measurable on $R^q$, and $\{e_n\}$ is an i.i.d. random sequence taking values in $R^q$ such that $e_n(j)$ has a strictly positive density function $f_j(t) > 0$, $t \in R^q$, with respect to Lebesgue measure $\mu_q$. We suppose that both $Z_n$ and $e_n(i)$ ($i \in E$) are independent of $X_0$, $E e_n(i) = 0$, and $E|e_n(i)| < +\infty$ ($i \in E$).
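A scalar instance of model (5.1) can be simulated as follows; the choices $T(x) = 0.5x$, $\theta \equiv 1$, and the two Gaussian noise regimes are assumed for illustration only, picked so that a contraction-type condition of the kind appearing in Theorem 5.1(ii) below holds.

```python
import random

def simulate(n_steps, rng):
    # X_{n+1} = T(X_n) + theta(X_n) * e_{n+1}(Z_{n+1}) with q = 1,
    # T(x) = 0.5*x and theta(x) = 1 (illustrative choices).
    P = [[0.8, 0.2], [0.3, 0.7]]          # environment transition matrix
    sigma = [0.2, 1.0]                    # noise scale per environment state
    x, z = 0.0, 0
    xs = []
    for _ in range(n_steps):
        z = rng.choices([0, 1], weights=P[z])[0]
        e = rng.gauss(0.0, sigma[z])      # e_{n+1}(Z_{n+1}), mean zero
        x = 0.5 * x + 1.0 * e             # T(x) + theta(x) * e
        xs.append(x)
    return xs

rng = random.Random(1)
xs = simulate(20000, rng)
# After a burn-in, the empirical mean should be near 0 since E e_n(i) = 0.
mean_tail = sum(xs[5000:]) / len(xs[5000:])
```

The long-run empirical distribution of such a path stabilizes, consistent with the existence of a stationary distribution under the contraction condition.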

For any matrix $A \in R^{q \times q}$, denote $\|A\|_1 = \sup_{\|x\| = 1} \|Ax\|$.

Theorem 5.1 Let the Markov chain $\{(X_n, Z_n)\}$ be determined by Equation (5.1), and suppose $\|\theta(x)\|_1$ is a bounded function on $R^q$.

(i) If $C := \max\big\{ \int_{R^q} \|t\| f_j(t) \mu_q(dt) : j \in E \big\} < +\infty$ and there exists a constant $K$ such that $\|T(x)\| > \|x\| + C\|\theta(x)\|_1$ for $\|x\| > K$, then $\{(X_n, Z_n)\}$ has no stationary distribution and, consequently, $\{X_n\}$ has no stationary distribution.

(ii) If every component of $T(x)$ and every element of $\theta(x)$ are continuous functions on $R^q$, $C := \max\big\{ \big( \int_{R^q} \|t\|^r f_j(t) \mu_q(dt) \big)^{\frac{1}{r}} : j \in E \big\} < +\infty$, and $(\|T(x)\| + C\|\theta(x)\|_1)^r \leq \|x\|^r - \varepsilon$ for $\|x\| > K$, where $r \geq 1$ and $\varepsilon, K > 0$, then $\{(X_n, Z_n)\}$ has a stationary distribution and, consequently, $\{X_n\}$ has a stationary distribution.
Proof (i) Taking $V(x, i) = \|x\|$, $x \in R^q$, $i \in E$, the conclusion follows from Theorem 4.3.

(ii) It is easy to see that $\{(X_n, Z_n)\}$ is a weak Feller chain. Taking $V(x, i) = \|x\|^r$, $x \in R^q$, and $B(x, j) \equiv R^q$, $x \in A^c = \{x \in R^q : \|x\| > K\}$, in Theorem 4.4 completes the proof.   □

Declarations

Acknowledgements

The authors would like to thank the Editor and the anonymous referees for their detailed comments and valuable suggestions which considerably improved the presentation of this paper. This work was supported in part by the National Natural Science Foundation of China under Grants no. 11101054 and 11101434, the Scientific Research Funds of Hunan Provincial Education Department of China under Grants no. 09C059, the Scientific Research Funds of Hunan Provincial Science and Technology Department of China under Grants no. 2010FJ6036 and the Open Fund Project of Key Research Institute of Philosophies and Social Sciences in Hunan Universities under Grants no. 11FEFM11.

Authors’ Affiliations

(1)
School of Mathematics and Computational Science, Changsha University of Science and Technology
(2)
School of Mathematics, Central South University

References

  1. Chan K, Tong H: On the use of the deterministic Lyapunov function for the ergodicity of stochastic difference equations. Adv Appl Prob 1985, 17(3):666–678. doi:10.2307/1427125
  2. Sheng Z, Wang T, Liu D: On the Stability Properties of Nonlinear Time Series Models: Theorem of Ergodicity and Its Application. Science Press, Beijing; 1993.
  3. Wang T, Sheng Z: Existence of stationary distributions for a class of time-invariant nonlinear stochastic difference equations. J Southeast Univ 1994, 24(1):83–89.
  4. Wang T, Sheng Z: Stability of discrete-time stochastic system. J Sys Eng 1994, 9(2):43–52.
  5. Wen F, Yang X: Skewness of return distribution and coefficient of risk premium. J Syst Sci Complex 2009, 22(3):360–371. doi:10.1007/s11424-009-9170-x
  6. Wen F, Liu Z: A copula-based correlation measure and its application in Chinese stock market. Int J Inf Tech Decis 2009, 8(4):1–15.
  7. Hou Z, Yu Z, Shi P: Study on a class of nonlinear time series models and ergodicity in random environment domain. Math Methods Oper Res 2005, 61(2):299–310. doi:10.1007/s001860400399
  8. Zhu E, Zou J, Hou Z: Analysis on adjoint non-recurrent property of nonlinear time series in random environment domain. Math Methods Oper Res 2007, 65(2):353–360. doi:10.1007/s00186-006-0128-7
  9. Zhu E, Zhang H, Zou J, Hou Z: Ergodicity of a class of nonlinear time series models in random environment domain. Acta Math Appl Sinica 2010, 26(1):159–168. doi:10.1007/s10255-009-8245-8
  10. An H, Chen M: Nonlinear Time Series Analysis. Science and Technology Press, Shanghai; 1998.
  11. William J: Continuous-Time Markov Chains. Springer, New York; 1991.
  12. Tweedie R: Invariant measures for Markov chains with no irreducibility assumptions. J Appl Prob 1988, 25:275–285.

Copyright

© Wang et al; licensee Springer. 2011

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.