A class of small deviation theorems for random fields on a uniformly bounded tree

Journal of Inequalities and Applications 2013, 2013:81

https://doi.org/10.1186/1029-242X-2013-81

  • Received: 17 June 2011
  • Accepted: 11 February 2013
  • Published:

Abstract

By introducing the asymptotic logarithmic likelihood ratio as a measure of the Markov approximation of arbitrary random fields on a uniformly bounded tree, and by constructing a non-negative martingale on a uniformly bounded tree, a class of small deviation theorems for functionals and a class of small deviation theorems for the frequencies of occurrence of states of random fields on a uniformly bounded tree are established. Some known results are generalized in this paper.

MSC:60J10, 60F15.

Keywords

  • deviation theorems
  • uniformly bounded tree
  • random field

1 Introduction

A tree is a graph $G = \{T, E\}$ which is connected and contains no circuits. Given any two vertices $\alpha, \beta \in T$, let $\overline{\alpha\beta}$ be the unique path connecting α and β. Define the graph distance $d(\alpha, \beta)$ to be the number of edges contained in the path $\overline{\alpha\beta}$.

Let T be an infinite tree which is locally finite and has no leaves. We choose a vertex as the root. The number of neighbors of a vertex is called the degree of this vertex. When the degrees of all vertices of T are uniformly bounded, we say that T is a uniformly bounded tree. A tree is said to be a Bethe tree, denoted by $T_{B,N}$, if every vertex has $N+1$ neighboring vertices. A tree is said to be a Cayley tree, denoted by $T_{C,N}$, if the root has only N neighbors and every other vertex has $N+1$ neighbors. Both are common homogeneous trees and, obviously, special cases of the uniformly bounded tree. When the context permits, uniformly bounded trees are all denoted simply by T.

Let T be an infinite tree with root o. The set of all vertices with distance n from the root is called the nth generation of T and is denoted by $L_n$. We denote by $T^{(n)}$ the subtree consisting of levels 0 (the root o) through n. For each vertex t, there is a unique path from o to t, and we write $|t|$ for the number of edges on this path. We denote the first predecessor of t by $1_t$, the second predecessor of t by $2_t$, and the nth predecessor of t by $n_t$. For any two vertices s and t of the tree T, write $s \le t$ if s is on the unique path from the root o to t. We denote by $s \wedge t$ the vertex farthest from o satisfying $s \wedge t \le s$ and $s \wedge t \le t$.
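To make the notation concrete, here is a minimal Python sketch (ours, not part of the paper) that builds the first n levels of a Cayley tree $T_{C,N}$ as a parent array, from which $1_t$, $|t|$, $L_n$ and $T^{(n)}$ can be read off; all names are illustrative.

```python
# Illustrative sketch (not from the paper): the first n levels of a Cayley
# tree T_{C,N} stored as a parent array.  parent[t] plays the role of 1_t,
# and level[t] = |t| is the distance from the root o (vertex 0).

def cayley_tree(N, n):
    parent, level = [None], [0]          # the root o
    current = [0]                        # vertices of the current generation L_m
    for m in range(1, n + 1):
        nxt = []
        for v in current:
            for _ in range(N):           # every vertex of T_{C,N} has N children
                parent.append(v)
                level.append(m)
                nxt.append(len(parent) - 1)
        current = nxt
    return parent, level

parent, level = cayley_tree(N=2, n=4)
print(len(parent))                       # |T^(4)| = 1 + 2 + 4 + 8 + 16 = 31
print(sum(1 for x in level if x == 4))   # |L_4| = 16
```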

Let $(\Omega, \mathcal{F})$ be a measurable space and $\{X_t, t \in T\}$ be a collection of random variables defined on $(\Omega, \mathcal{F})$ and taking values in $S = \{0, 1, \ldots, b-1\}$, where b is a positive integer. Let A be a subgraph of T, write $X^A = \{X_t, t \in A\}$, and denote by $|A|$ the number of vertices of A. Let $x^{T^{(n)}}$ denote a realization of $X^{T^{(n)}}$. Let μ be a probability measure on the measurable space $(\Omega, \mathcal{F})$; we call μ a random field on the tree T. The distribution of $\{X_t, t \in T\}$ under the probability measure μ is $\mu(x^{T^{(n)}}) = \mu(X^{T^{(n)}} = x^{T^{(n)}})$; $\mu(x^{T^{(n)}})$ is actually a marginal distribution of μ.

Definition 1 (see [1])

Let T be an infinite tree which is locally finite and has no leaves, S be a finite state space, and $\{X_t, t \in T\}$ be a collection of S-valued random variables defined on the measurable space $(\Omega, \mathcal{F})$. Let
$$p = \{p(x), x \in S\}$$
(1)
be a distribution on S, and
$$P = (P(y|x)), \quad x, y \in S,$$
(2)
be a stochastic matrix on $S^2$. Let $\mu_P$ be a probability measure on the measurable space $(\Omega, \mathcal{F})$. If for any vertex $t \in T$,
$$\mu_P(X_t = y \mid X_{1_t} = x \text{ and } X_s \text{ for } s \wedge t \le 1_t) = \mu_P(X_t = y \mid X_{1_t} = x) = P(y|x), \quad x, y \in S,$$
(3)
and
$$\mu_P(X_o = x) = p(x), \quad x \in S,$$

then $\{X_t, t \in T\}$ will be called S-valued Markov chains indexed by the infinite tree T with initial distribution (1) and transition matrix (2) under the probability measure $\mu_P$.

Let $\{X_t, t \in T\}$ be Markov chains indexed by the tree T under the probability measure $\mu_P$ defined above. It is easy to see that
$$\mu_P(x^{T^{(n)}}) = \mu_P(X^{T^{(n)}} = x^{T^{(n)}}) = p(x_o) \prod_{m=1}^{n} \prod_{t \in L_m} P(x_t \mid x_{1_t}).$$
(4)
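As a quick illustration of Definition 1 and the product formula (4) (a sketch of our own; the heap-style tree indexing and all concrete parameters are assumptions made only for the example), one can sample $\{X_t\}$ vertex by vertex, each vertex drawing its state from the row of P indexed by its parent's state, and then evaluate $\mu_P(x^{T^{(n)}})$ by (4):

```python
import math
import random

# Sketch: a binary Cayley tree T_{C,2} up to level 3 via the heap indexing
# parent(t) = (t - 1) // 2, a Markov chain sampled on it as in Definition 1,
# and mu_P(x^{T^(n)}) evaluated by the product formula (4).

parent = [None] + [(t - 1) // 2 for t in range(1, 15)]   # |T^(3)| = 15 vertices

def sample_chain(parent, p, P, rng):
    states = range(len(p))
    x = [rng.choices(states, weights=p)[0]]               # X_o ~ p
    for t in range(1, len(parent)):
        x.append(rng.choices(states, weights=P[x[parent[t]]])[0])  # X_t given X_{1_t}
    return x

def log_mu_P(x, parent, p, P):
    # ln mu_P(x^{T^(n)}) = ln p(x_o) + sum over t != o of ln P(x_t | x_{1_t})
    s = math.log(p[x[0]])
    for t in range(1, len(parent)):
        s += math.log(P[x[parent[t]]][x[t]])
    return s

rng = random.Random(0)
p = [0.5, 0.5]
P = [[0.7, 0.3], [0.4, 0.6]]
x = sample_chain(parent, p, P, rng)
print(x, log_mu_P(x, parent, p, P))
```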

In order to avoid technical problems, we always assume that μ ( x T ( n ) ) , P ( y | x ) and p ( x ) are positive.

Definition 2 Let T be a uniformly bounded tree which has no leaves, $\{X_t, t \in T\}$ be a collection of S-valued random variables defined on $(\Omega, \mathcal{F})$, $P = (P(y|x))$, $x, y \in S$, be a positive stochastic matrix, μ, $\mu_P$ be two probability measures on $(\Omega, \mathcal{F})$, and $\{X_t, t \in T\}$ be Markov chains indexed by the tree T under the probability measure $\mu_P$ determined by P. Assume that $\mu(x^{T^{(n)}})$ is always strictly positive. Let
$$\varphi_n(\omega) = \frac{\mu(X^{T^{(n)}})}{\mu_P(X^{T^{(n)}})},$$
(5)
$$\varphi(\omega) = \limsup_{n \to \infty} \frac{1}{|T^{(n)}|} \ln \varphi_n(\omega).$$
(6)

$\varphi(\omega)$ will be called the asymptotic logarithmic likelihood ratio.

Remark 1 If $\mu = \mu_P$, then $\varphi(\omega) \equiv 0$. Lemma 1 below (see Remark 2) shows that in the general case $\varphi(\omega) \ge 0$ μ-a.e.; hence $\varphi(\omega)$ can be regarded as a measure of the Markov approximation of an arbitrary random field on T.
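Under the reading of (5) and (6) given above (our reconstruction), $\frac{1}{|T^{(n)}|}\ln\varphi_n(\omega)$ is simply a normalized log-likelihood ratio and can be computed directly from a configuration. A small sketch of ours (the concrete measures below are only an example) compares an i.i.d. field μ with a Markov field $\mu_P$ on a tiny tree:

```python
import math

# Sketch: the finite-n quantity (1/|T^(n)|) * ln( mu(x^{T^(n)}) / mu_P(x^{T^(n)}) ),
# whose limsup over n is phi(omega) in (6), for an i.i.d. field mu versus a
# Markov field mu_P on the heap-indexed binary tree of level 3 (15 vertices).

parent = [None] + [(t - 1) // 2 for t in range(1, 15)]

def log_iid(x, q):                       # mu: i.i.d. with mu(X_t = 1) = q
    return sum(math.log(q if s == 1 else 1 - q) for s in x)

def log_markov(x, p, P):                 # mu_P: Markov field, product formula (4)
    s = math.log(p[x[0]])
    for t in range(1, len(x)):
        s += math.log(P[x[parent[t]]][x[t]])
    return s

x = [0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0]   # an arbitrary configuration
p, P, q = [0.5, 0.5], [[0.7, 0.3], [0.4, 0.6]], 0.4
print((log_iid(x, q) - log_markov(x, p, P)) / len(x))
```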

The tree model has recently drawn increasing interest from specialists in physics, probability and information theory. Benjamini and Peres [1] have given the notion of tree-indexed homogeneous Markov chains and studied the recurrence and ray-recurrence for them. Berger and Ye [2] have studied the existence of the entropy rate for some stationary random fields on a homogeneous tree. Ye and Berger [3] have studied the asymptotic equipartition property (AEP) in the sense of convergence in probability for a PPG-invariant and ergodic random field on a homogeneous tree. Recently, Yang [4] has studied some strong limit theorems for countable homogeneous Markov chains indexed by a homogeneous tree, as well as the strong law of large numbers and the asymptotic equipartition property (AEP) for finite homogeneous Markov chains indexed by a homogeneous tree. Huang and Yang [5] have studied the strong law of large numbers for Markov chains indexed by an infinite tree with uniformly bounded degree. Liu and Wang [6] have studied the small deviation theorems between arbitrary random fields and Markov chain fields on the Cayley tree. Peng, Yang and Wang [7] have further studied a class of small deviation theorems for functionals of random fields on a homogeneous tree, which partially extends the results of [6].

In this paper, by introducing the asymptotic logarithmic likelihood ratio as a measure of the Markov approximation of an arbitrary random field on a uniformly bounded tree, and by constructing a non-negative martingale, we obtain two results: a class of small deviation theorems for functionals and a class of small deviation theorems for the frequencies of occurrence of states of random fields on a uniformly bounded tree. In particular, our results contain those of [5] and [7] as special cases.

Lemma 1 Let T be a uniformly bounded tree which has no leaves. Let $\mu_1$, $\mu_2$ be two probability measures on $(\Omega, \mathcal{F})$, $D \in \mathcal{F}$, and $\{\tau_n, n \ge 1\}$ be a sequence of positive random variables such that
$$\liminf_{n \to \infty} \frac{\tau_n}{|T^{(n)}|} > 0 \quad \mu_1\text{-a.e. on } D.$$
(7)
Then
$$\limsup_{n \to \infty} \frac{1}{\tau_n} \ln \frac{\mu_2(X^{T^{(n)}})}{\mu_1(X^{T^{(n)}})} \le 0 \quad \mu_1\text{-a.e. on } D.$$
(8)

Proof The proof of this lemma is similar to that of Lemma 1 of [6], so we omit it. □

Remark 2 Let $\mu_1 = \mu$, $\mu_2 = \mu_P$ and $\tau_n = |T^{(n)}|$ in Lemma 1. By (8) there exists $A \in \mathcal{F}$ with $\mu(A) = 1$ such that
$$\limsup_{n \to \infty} \frac{1}{|T^{(n)}|} \ln \frac{\mu_P(X^{T^{(n)}})}{\mu(X^{T^{(n)}})} \le 0, \quad \omega \in A,$$
(9)

hence we have $\varphi(\omega) \ge 0$, $\omega \in A$.

From this remark, we know that a set $D \in \mathcal{F}$ and a sequence $\{\tau_n, n \ge 1\}$ satisfying (7) do exist.

Let T be a uniformly bounded tree and $k, l \in S$. Let
$$S_n(k) = |\{t \in T^{(n)} : X_t = k\}|,$$
(10)
$$S_n(k, l) = |\{t \in T^{(n)} \setminus \{o\} : X_{1_t} = k, X_t = l\}|,$$
(11)
that is,
$$S_n(k) = \sum_{t \in T^{(n)}} \delta_k(X_t), \qquad S_n(k, l) = \sum_{m=1}^{n} \sum_{t \in L_m} \delta_k(X_{1_t}) \delta_l(X_t),$$
where $\delta_k(\cdot)$ ($k \in S$) is the Kronecker δ-function
$$\delta_k(x) = \begin{cases} 1, & \text{if } x = k, \\ 0, & \text{if } x \ne k. \end{cases}$$
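The counts $S_n(k)$ and $S_n(k, l)$ are straightforward to compute from a configuration stored as an array indexed by the vertices; a minimal sketch of ours:

```python
# Sketch: the state count S_n(k) and the transition count S_n(k, l)
# of a configuration x on a tree described by a parent array.

def S(x, k):
    # number of vertices t in T^(n) with X_t = k
    return sum(1 for s in x if s == k)

def S_pair(x, parent, k, l):
    # number of non-root vertices t with X_{1_t} = k and X_t = l
    return sum(1 for t in range(1, len(x)) if x[parent[t]] == k and x[t] == l)

parent = [None] + [(t - 1) // 2 for t in range(1, 15)]
x = [0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0]
print(S(x, 1), S_pair(x, parent, 1, 0))
```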

Let the degree of each vertex σ ($\sigma \ne o$) of the tree T be $d(\sigma)$. Since T is a uniformly bounded tree which has no leaves, we know that there are two positive numbers m and M such that $2 \le m \le d(\sigma) \le M$.

Lemma 2 Let T be a uniformly bounded tree which has no leaves, $P = (P(y|x))$, $x, y \in S$, be a positive stochastic matrix, μ, $\mu_P$ be two probability measures on $(\Omega, \mathcal{F})$, $\{X_t, t \in T\}$ be Markov chains indexed by T under the probability measure $\mu_P$ determined by P, $\varphi(\omega)$ be given by (6), M be defined as above, and let $0 \le c < \ln\frac{1}{1-a_k}$ be a constant. Let
$$D(c) = \{\omega : \varphi(\omega) \le c\},$$
(12)
(13)
where $a_k = \min\{P(k|i), i \in S\}$, $b_k = \max\{P(k|i), i \in S\}$, and $M_k$ is the constant given by (13). Then
$$\liminf_{n \to \infty} \frac{S_{n-1}(k)}{|T^{(n)}|} \ge \frac{M_k}{M-1} \quad \mu\text{-a.e. on } D(c).$$
(14)
(14)
Proof By using an argument similar to the proof of the Lemma of [6], we can obtain
$$\liminf_{n \to \infty} \frac{S_n(k)}{|T^{(n)}|} \ge M_k \quad \mu\text{-a.e. on } D(c).$$
(15)

Since $\frac{1}{|T^{(n+1)}|} \ge \frac{1}{M-1} \cdot \frac{1}{|T^{(n)}|}$, (14) follows from (15). □

In the following, we always let $N \ge 0$, $k \in S$, $d_0(t) = 1$, and denote by $d_N(t) = |\{\tau \in T : N_\tau = t\}|$ the number of vertices whose Nth predecessor is t, where $N_\tau$ is defined as above. Let
$$S_k^N(T^{(n)}) = \sum_{t \in T^{(n)}} \delta_k(X_t)\, d_N(t),$$
(16)
$$S_{k,l}^N(T^{(n)} \setminus \{o\}) = \sum_{t \in T^{(n)} \setminus \{o\}} \delta_k(X_{1_t}) \delta_l(X_t)\, d_N(t).$$
(17)
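The weights $d_N(t)$ and the weighted counts (16)–(17), as reconstructed above, can be computed by climbing N predecessors from every vertex; the following sketch (ours, purely illustrative) does this on the small binary tree used earlier:

```python
# Sketch: d_N(t) = number of vertices tau whose N-th predecessor is t, and the
# weighted count S_k^N of (16) as reconstructed above, on a small binary tree.

parent = [None] + [(t - 1) // 2 for t in range(1, 15)]   # levels 0..3

def d_N(parent, t, N):
    if N == 0:
        return 1
    count = 0
    for tau in range(len(parent)):
        v, steps = tau, 0
        while v is not None and steps < N:   # climb N predecessors of tau
            v = parent[v]
            steps += 1
        if steps == N and v == t:
            count += 1
    return count

def S_k_N(x, parent, k, N, vertices):
    return sum(d_N(parent, t, N) for t in vertices if x[t] == k)

x = [0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0]
print(d_N(parent, 0, 2))                                # the root has 4 descendants two levels down
print(S_k_N(x, parent, k=1, N=1, vertices=range(7)))    # S_1^1(T^(2)) for this sample
```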
Corollary 1 Let m, M be defined as above. Under the assumptions of Lemma 2, we have
$$\liminf_{n \to \infty} \frac{S_k^{N+1}(T^{(n-1)})}{|T^{(n)}|} \ge \frac{M_k (m-1)^{N+1}}{M-1} \quad \mu\text{-a.e. on } D(c).$$
(18)
Proof Because T is a uniformly bounded tree which has no leaves, we have $(m-1)^N \le d_N(t) \le (M-1)^N$. By Lemma 2 we have
$$\liminf_{n \to \infty} \frac{S_k^{N+1}(T^{(n-1)})}{|T^{(n)}|} = \liminf_{n \to \infty} \frac{\sum_{t \in T^{(n-1)}} \delta_k(X_t)\, d_{N+1}(t)}{|T^{(n)}|} \ge (m-1)^{N+1} \liminf_{n \to \infty} \frac{\sum_{t \in T^{(n-1)}} \delta_k(X_t)}{|T^{(n)}|} = (m-1)^{N+1} \liminf_{n \to \infty} \frac{S_{n-1}(k)}{|T^{(n)}|} \ge \frac{M_k (m-1)^{N+1}}{M-1} \quad \mu\text{-a.e. on } D(c).$$
(19)

The proof is finished. □

Lemma 3 Let T be a uniformly bounded tree which has no leaves, $P = (P(y|x))$, $x, y \in S$, be a positive stochastic matrix, μ, $\mu_P$ be two probability measures on $(\Omega, \mathcal{F})$, $\{X_t, t \in T\}$ be Markov chains indexed by T under the probability measure $\mu_P$ determined by P, $\{g_t(x, y), t \in T\}$ be functions defined on $S^2$, $L_0 = \{o\}$ (where o is the root of the tree T), $\mathcal{F}_n = \sigma(X^{T^{(n)}})$, and λ be a real number. Let
$$t_n(\lambda, \omega) = \frac{e^{\lambda \sum_{t \in T^{(n)} \setminus \{o\}} g_t(X_{1_t}, X_t)}}{\prod_{t \in T^{(n)} \setminus \{o\}} E_{\mu_P}[e^{\lambda g_t(X_{1_t}, X_t)} \mid X_{1_t}]} \cdot \frac{\mu_P(X^{T^{(n)}})}{\mu(X^{T^{(n)}})},$$
(20)

where $E_{\mu_P}$ denotes expectation under the probability measure $\mu_P$. Then $(t_n(\lambda, \omega), \mathcal{F}_n, n \ge 1)$ is a non-negative martingale under the probability measure μ.

Proof The proof of this lemma is similar to that of Lemma 3 of [7], so we omit it. □
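Lemma 3 rests on the fact that $t_n(\lambda, \cdot)$ in (20) has μ-expectation 1 at every level, the conditional moment-generating factors cancelling the transition probabilities. The brute-force check below is our own sketch (μ is taken to be a simple i.i.d. field and g a fixed bounded function, both chosen only for illustration): it enumerates all configurations of a 7-vertex tree and confirms the normalization numerically.

```python
import itertools
import math

# Sketch: numerical check of the normalization behind Lemma 3.  On a tiny tree,
# enumerate all configurations and verify E_mu[ t_n(lambda, .) ] = 1, where t_n
# is the quantity in (20), mu_P is the Markov field (4), and mu is an arbitrary
# positive field (here i.i.d.).

parent = [None, 0, 0, 1, 1, 2, 2]                 # T^(2) of the binary tree, 7 vertices
p, P, lam = [0.5, 0.5], [[0.7, 0.3], [0.4, 0.6]], 0.8
g = lambda a, b: 1.0 if a == b else 0.0           # a bounded function g_t(x, y)

def mu_P(x):                                      # Markov field, product formula (4)
    pr = p[x[0]]
    for t in range(1, len(x)):
        pr *= P[x[parent[t]]][x[t]]
    return pr

def mu(x, q=0.3):                                 # an arbitrary i.i.d. field
    return math.prod(q if s == 1 else 1 - q for s in x)

def t_n(x):
    num = math.exp(lam * sum(g(x[parent[t]], x[t]) for t in range(1, len(x))))
    den = math.prod(sum(P[x[parent[t]]][y] * math.exp(lam * g(x[parent[t]], y))
                        for y in (0, 1))
                    for t in range(1, len(x)))
    return num / den * mu_P(x) / mu(x)

total = sum(mu(x) * t_n(x) for x in itertools.product((0, 1), repeat=7))
print(total)                                      # 1.0 up to rounding error
```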

2 Small deviation theorem

Small deviation theorems are a class of strong limit theorems expressed by inequalities; they are extensions of strong limit theorems expressed by equalities. This research topic was proposed by Liu (see [8]).

In this section, we will establish a class of small deviation theorems of functionals and a class of small deviation theorems of the frequencies of occurrence of states for random fields on a uniformly bounded tree.

Theorem 1 Let T be a uniformly bounded tree which has no leaves, $P = (P(y|x))$, $x, y \in S$, be a positive stochastic matrix, μ, $\mu_P$ be two probability measures on $(\Omega, \mathcal{F})$, and $\{X_t, t \in T\}$ be Markov chains indexed by T under the probability measure $\mu_P$ determined by P. Let $\{g_t(x, y), t \in T\}$ be a collection of uniformly bounded functions defined on $S^2$, that is, $|g_t(x, y)| \le K$ for all $x, y \in S$ and $t \in T$, and let $\varphi(\omega)$ be given by (6). Let $c \ge 0$, $D(c)$ be defined by (12), and
$$F_n(\omega) = \sum_{t \in T^{(n)} \setminus \{o\}} g_t(X_{1_t}, X_t),$$
(21)
$$G_n(\omega) = \sum_{t \in T^{(n)} \setminus \{o\}} E_{\mu_P}[g_t(X_{1_t}, X_t) \mid X_{1_t}].$$
(22)
Then
$$\limsup_{n \to \infty} \frac{1}{|T^{(n)}|} [F_n(\omega) - G_n(\omega)] \le K(\sqrt{2c} + c) \quad \mu\text{-a.e. on } D(c),$$
(23)
$$\liminf_{n \to \infty} \frac{1}{|T^{(n)}|} [F_n(\omega) - G_n(\omega)] \ge -K(\sqrt{2c} + c) \quad \mu\text{-a.e. on } D(c).$$
(24)
Proof Let $t_n(\lambda, \omega)$ be defined by (20). By Lemma 3, $(t_n(\lambda, \omega), \mathcal{F}_n, n \ge 1)$ is a non-negative martingale under the probability measure μ with $E_\mu(t_n(\lambda, \omega)) = 1$. By Doob's martingale convergence theorem, we have
$$\lim_{n \to \infty} t_n(\lambda, \omega) = t(\lambda, \omega) < \infty \quad \mu\text{-a.e.}$$
(25)
Hence
$$\limsup_{n \to \infty} \frac{1}{|T^{(n)}|} \ln t_n(\lambda, \omega) \le 0 \quad \mu\text{-a.e.}$$
(26)
We have by (20) and (26)
(27)
By (5), (6), (12) and (27), we have
(28)
Taking λ > 0 , we arrive at
(29)
where (a) follows by (28), (b) follows by the inequality $\ln x \le x - 1$ ($x > 0$), (c) follows by the inequality $0 \le e^x - x - 1 \le \frac{x^2}{2} e^{|x|}$, (d) follows by $|g_t(x, y)| \le K$, $t \in T$, and (e) follows by the inequality $e^x \ge 1 + x$. In the case $c > 0$, noticing that $\frac{\lambda}{2} K^2 e^{\lambda K} + \frac{c}{\lambda e^{\lambda K}}$ attains its smallest value $\sqrt{2c}\, K$ when $\lambda e^{\lambda K} = \sqrt{2c/K^2}$, by (29) we have
$$\limsup_{n \to \infty} \frac{1}{|T^{(n)}|} [F_n(\omega) - G_n(\omega)] \le K(\sqrt{2c} + c) \quad \mu\text{-a.e. on } D(c).$$

Hence (23) holds. In the case $c = 0$, (23) also holds by choosing $\lambda_i \to 0^+$ ($i \to \infty$) in (29). Taking $\lambda < 0$ and using a similar approach, we can prove (24). This completes the proof of Theorem 1. □
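As a sanity check on the constant in (23), under our reading of the minimization step above (namely minimizing $\frac{\lambda}{2}K^2 e^{\lambda K} + \frac{c}{\lambda e^{\lambda K}}$ over $\lambda > 0$), a crude grid search confirms numerically that the minimum value agrees with $K\sqrt{2c}$:

```python
import math

# Sketch: check numerically that  min over lambda > 0  of
#   (lambda/2) * K^2 * e^{lambda K} + c / (lambda * e^{lambda K})
# equals K * sqrt(2c), as used above for (23) (our reading of that step).

K, c = 2.0, 1.0

def h(lam):
    return 0.5 * lam * K ** 2 * math.exp(lam * K) + c / (lam * math.exp(lam * K))

best = min(h(i * 1e-4) for i in range(1, 100001))   # lambda on (0, 10]
print(best, K * math.sqrt(2 * c))                   # the two values agree closely
```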

In the following, we provide an example showing that $D(c)$ may have positive probability, and even probability 1.

Example Let T be the Cayley tree $T_{C,2}$, $\mu_P$ and μ be two probability measures on the measurable space $(\Omega, \mathcal{F})$, and $\{X_t, t \in T\}$ be a collection of random variables defined on $(\Omega, \mathcal{F})$ and taking values in the state space $\{0, 1\}$. Let $\{X_t, t \in T\}$ be an i.i.d. process indexed by the tree T under the probability measure $\mu_P$ with common distribution $\mu_P(X_t = 1) = p$, $\mu_P(X_t = 0) = 1 - p$, $0 < p < 1$, and also an i.i.d. process indexed by the tree T under the probability measure μ with common distribution $\mu(X_t = 1) = q$, $\mu(X_t = 0) = 1 - q$, $0 < q < 1$. It is easy to see that $\{X_t, t \in T\}$ are Markov chains indexed by the tree T with transition matrices
$$\begin{pmatrix} 1-p & p \\ 1-p & p \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 1-q & q \\ 1-q & q \end{pmatrix}$$
and stationary distributions $(1-p, p)$ and $(1-q, q)$ under the probability measures $\mu_P$ and μ, respectively. It is also easy to see that
$$\mu_P(x^{T^{(n)}}) = p^{s_1(T^{(n)})} (1-p)^{|T^{(n)}| - s_1(T^{(n)})}, \qquad \mu(x^{T^{(n)}}) = q^{s_1(T^{(n)})} (1-q)^{|T^{(n)}| - s_1(T^{(n)})},$$
where $S_1(T^{(n)}) = S_1^0(T^{(n)})$ and $s_1(T^{(n)})$ is the realization of $S_1(T^{(n)})$. In this case
$$\varphi_n(\omega) = \frac{\mu(X^{T^{(n)}})}{\mu_P(X^{T^{(n)}})} = \left(\frac{q}{p}\right)^{S_1(T^{(n)})} \left(\frac{1-q}{1-p}\right)^{|T^{(n)}| - S_1(T^{(n)})}.$$
By the strong law of large numbers for Markov chains indexed by a tree (see [4]), we have
$$\lim_{n \to \infty} \frac{S_1(T^{(n)})}{|T^{(n)}|} = q \quad \mu\text{-a.e.}$$
Hence we have
$$\varphi(\omega) = \limsup_{n \to \infty} \frac{1}{|T^{(n)}|} \ln \varphi_n(\omega) = \lim_{n \to \infty} \frac{S_1(T^{(n)})}{|T^{(n)}|} \ln \frac{q(1-p)}{p(1-q)} + \ln \frac{1-q}{1-p} = q \ln \frac{q(1-p)}{p(1-q)} + \ln \frac{1-q}{1-p} \quad \mu\text{-a.e.}$$
Let
$$f(q) = q \ln \frac{q(1-p)}{p(1-q)} + \ln \frac{1-q}{1-p}, \quad 0 < q < 1.$$

It is easy to see that $f(p) = 0$ and $\lim_{q \to 1} f(q) = \ln \frac{1}{p}$. Since $f(q)$ is continuous and increasing on $[p, 1)$, for any $0 \le c < \ln \frac{1}{p}$ there exists q such that $f(q) = c$. Thus $\mu(D(c)) = 1$.
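To make the example computable, the sketch below (ours) evaluates $f(q)$ and solves $f(q) = c$ by bisection on $[p, 1)$, where f is increasing; any $0 \le c < \ln(1/p)$ is attainable:

```python
import math

# Sketch for the example: f(q) = q * ln( q(1-p) / (p(1-q)) ) + ln( (1-q)/(1-p) ).
# f(p) = 0 and f increases to ln(1/p) as q -> 1, so f(q) = c is solvable by
# bisection for any 0 <= c < ln(1/p).

def f(q, p=0.5):
    return q * math.log(q * (1 - p) / (p * (1 - q))) + math.log((1 - q) / (1 - p))

def solve(c, p=0.5):
    lo, hi = p, 1.0 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid, p) < c else (lo, mid)
    return lo

q = solve(c=0.1)
print(q, f(q))      # f(q) is 0.1 up to rounding, so mu(D(0.1)) = 1 in the example
```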

Theorem 2 Let T be a uniformly bounded tree which has no leaves, and let $M_k$, m, M, $D(c)$ be defined as above. Let $0 \le c < \ln\frac{1}{1-a_k}$. Under the assumptions of Theorem 1, we have
(30)
(31)
Proof Letting g t ( x , y ) = δ k ( x ) δ l ( y ) d N ( t ) in Theorem 1, by (21) and (22), we have
(32)
(33)
By (28), when $c \ge 0$, we have
(34)
By Corollary 1 and (34), when $0 \le c < \ln\frac{1}{1-a_k}$, we arrive at
(35)
Taking λ > 0 , we have
(36)
where (f) follows by (35); (g) follows, similarly to (b) and (c) of (29), by the inequalities $\ln x \le x - 1$ ($x > 0$) and $0 \le e^x - x - 1 \le \frac{x^2}{2} e^{|x|}$; (h) follows by the inequality $(m-1)^N \le d_N(t) \le (M-1)^N$; (i) follows by the inequality
$$\frac{\sum_{t \in T^{(n)} \setminus \{o\}} \delta_k(X_{1_t})}{S_k^{N+1}(T^{(n-1)})} = \frac{\sum_{t \in T^{(n-1)}} \delta_k(X_t)\, d_1(t)}{\sum_{t \in T^{(n-1)}} \delta_k(X_t)\, d_{N+1}(t)} \le \frac{1}{(m-1)^N},$$
and (j) follows by the inequality $e^x \ge 1 + x$. In the case $c > 0$, notice that
$$\frac{\lambda (M-1)^{2N} e^{\lambda (M-1)^N} P(l|k)}{2} + \frac{c(M-1)}{\lambda e^{\lambda (M-1)^N} M_k (m-1)}$$
attains its smallest value
$$2 \sqrt{\frac{c (M-1)^{2N+1} P(l|k)}{2 M_k (m-1)}}$$
when
$$\lambda (M-1)^N e^{\lambda (M-1)^N} = \sqrt{\frac{2c(M-1)}{M_k P(l|k) (m-1)}}.$$
By (36), we have

Hence (30) holds. In the case $c = 0$, (30) also holds by choosing $\lambda_i \to 0^+$ ($i \to \infty$) in (36). Taking $\lambda < 0$ and using a similar approach, we can prove (31). This completes the proof of Theorem 2. □

Corollary 2 Under the assumptions of Theorem 2, we have
$$\lim_{n \to \infty} \left[ \frac{S_{k,l}^N(T^{(n)} \setminus \{o\})}{S_k^{N+1}(T^{(n-1)})} - P(l|k) \right] = 0 \quad \mu\text{-a.e. on } D(0).$$
(37)

Proof Letting c = 0 , (37) follows from (30) and (31). □

Corollary 3 (see [5])

Under the assumptions of Theorem 2, we have
$$\lim_{n \to \infty} \left[ \frac{S_{k,l}^N(T^{(n)} \setminus \{o\})}{S_k^{N+1}(T^{(n-1)})} - P(l|k) \right] = 0 \quad \mu_P\text{-a.e.}$$
(38)

Proof Let $\mu = \mu_P$. Then $\varphi(\omega) \equiv 0$ and $D(0) = \Omega$. Hence (38) follows directly from (37). □
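A quick simulation (ours; all parameters are arbitrary) illustrates Corollary 3 in its simplest case $N = 0$ on the binary Cayley tree $T_{C,2}$: under $\mu_P$ the empirical transition frequency $S_n(k,l)/S_k^1(T^{(n-1)}) = S_n(k,l)/(2S_{n-1}(k))$ approaches $P(l|k)$ as n grows.

```python
import random

# Sketch: empirical illustration of Corollary 3 with N = 0 on T_{C,2}.
# Under mu_P,  S_n(k, l) / (2 * S_{n-1}(k))  ->  P(l | k)  as n grows,
# since S_k^1(T^(n-1)) = 2 * S_{n-1}(k) on the binary Cayley tree.

def simulate(P, p, n, rng):
    x = [rng.choices((0, 1), weights=p)[0]]           # X_o
    parent = [None]
    level_start = 0                                   # first index of the last level
    for m in range(1, n + 1):
        new_parent, new_x = [], []
        for v in range(level_start, len(x)):          # vertices of L_{m-1}
            for _ in range(2):                        # two children per vertex
                new_parent.append(v)
                new_x.append(rng.choices((0, 1), weights=P[x[v]])[0])
        level_start = len(x)
        parent += new_parent
        x += new_x
    return x, parent, level_start

P, p = [[0.7, 0.3], [0.4, 0.6]], [0.5, 0.5]
x, parent, last = simulate(P, p, n=12, rng=random.Random(1))
k, l = 0, 1
S_kl = sum(1 for t in range(1, len(x)) if x[parent[t]] == k and x[t] == l)
S_k_prev = sum(1 for t in range(last) if x[t] == k)   # S_{n-1}(k): vertices of T^(n-1)
print(S_kl / (2 * S_k_prev), P[k][l])                 # the two numbers are close
```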

Corollary 4 (see [7])

Under the assumptions of Theorem 2, if T is a Cayley tree $T_{C,N_1}$, we have
$$\limsup_{n \to \infty} \left[ \frac{S_n(k,l)}{N_1 S_{n-1}(k)} - P(l|k) \right] \le \sqrt{\frac{2c P(l|k)}{M_k}} + \frac{c}{M_k} \quad \mu\text{-a.e. on } D(c),$$
(39)
$$\liminf_{n \to \infty} \left[ \frac{S_n(k,l)}{N_1 S_{n-1}(k)} - P(l|k) \right] \ge -\sqrt{\frac{2c P(l|k)}{M_k}} - \frac{c}{M_k} \quad \mu\text{-a.e. on } D(c).$$
(40)
Proof Since T is a $T_{C,N_1}$, we can take $M - 1 = N_1$ and $m - 1 = N_1$. Moreover, taking $N = 0$ in (16) and (17),
$$S_{k,l}^0(T^{(n)} \setminus \{o\}) = S_n(k, l),$$
(41)
$$S_k^1(T^{(n-1)}) = N_1 S_{n-1}(k).$$
(42)
By (30), (31), (41) and (42), we have
$$\limsup_{n \to \infty} \left[ \frac{S_n(k,l)}{N_1 S_{n-1}(k)} - P(l|k) \right] \le \sqrt{\frac{2c N_1 P(l|k)}{M_k N_1}} + \frac{c N_1}{M_k N_1} = \sqrt{\frac{2c P(l|k)}{M_k}} + \frac{c}{M_k} \quad \mu\text{-a.e. on } D(c),$$
$$\liminf_{n \to \infty} \left[ \frac{S_n(k,l)}{N_1 S_{n-1}(k)} - P(l|k) \right] \ge -\sqrt{\frac{2c N_1 P(l|k)}{M_k N_1}} - \frac{c N_1}{M_k N_1} = -\sqrt{\frac{2c P(l|k)}{M_k}} - \frac{c}{M_k} \quad \mu\text{-a.e. on } D(c).$$

This completes the proof of Corollary 4. □

Remark 3 Unfortunately, the upper and lower bounds obtained in Theorem 1 and Theorem 2 are not tight. For example, if we let $|g_t(x, y)| \le 1$ and $c = 1$ in Theorem 1, then trivially $\limsup_{n \to \infty} \frac{1}{|T^{(n)}|} |F_n(\omega) - G_n(\omega)| \le 2$, while the bound $K(\sqrt{2c} + c) = 1 + \sqrt{2} > 2$.

Declarations

Acknowledgements

The authors would like to thank the referees for their many valuable comments and suggestions. This work was supported by the National Natural Science Foundation of China (11071104).

Authors’ Affiliations

(1)
Faculty of Science, Jiangsu University, Zhenjiang, Jiangsu, 212013, China

References

  1. Benjamini I, Peres Y: Markov chains indexed by trees. Ann. Probab. 1994, 22: 219–243. 10.1214/aop/1176988857
  2. Berger T, Ye Z: Entropic aspects of random fields on trees. IEEE Trans. Inf. Theory 1990, 36(5): 1006–1018. 10.1109/18.57200
  3. Ye Z, Berger T: Entropic, regularity and asymptotic equipartition property of random fields on trees. J. Comb. Inf. Syst. Sci. 1996, 21(2): 157–184.
  4. Yang WG: Some limit properties for Markov chains indexed by a homogeneous tree. Stat. Probab. Lett. 2003, 65: 241–250. 10.1016/j.spl.2003.04.001
  5. Huang HL, Yang WG: Strong law of large numbers for Markov chains indexed by an infinite tree with uniformly bounded degree. Sci. China Ser. A 2008, 51(2): 195–202. 10.1007/s11425-008-0015-1
  6. Liu W, Wang LY: The Markov approximation of the random field on Cayley tree and a class of small deviation theorems. Stat. Probab. Lett. 2003, 63: 113–121. 10.1016/S0167-7152(03)00058-0
  7. Peng WC, Yang WG, Wang B: A class of small deviation theorems for functionals of random fields on a homogeneous tree. J. Math. Anal. Appl. 2010, 361: 293–301. 10.1016/j.jmaa.2009.06.079
  8. Liu W: Relative entropy densities and a class of limit theorems of the sequence of m-valued random variables. Ann. Probab. 1990, 18: 829–839. 10.1214/aop/1176990860
