# A class of small deviation theorems for random fields on a uniformly bounded tree

## Abstract

By introducing the asymptotic logarithmic likelihood ratio as a measure of the Markov approximation of an arbitrary random field on a uniformly bounded tree, and by constructing a non-negative martingale on such a tree, we establish a class of small deviation theorems for functionals and a class of small deviation theorems for the frequencies of occurrence of states for random fields on a uniformly bounded tree. These results generalize some known theorems.

MSC:60J10, 60F15.

## 1 Introduction

A tree is a graph $G=\left\{T,E\right\}$ which is connected and contains no circuits. Given any two vertices $\alpha \ne \beta \in T$, let $\overline{\alpha \beta }$ be the unique path connecting α and β. Define the graph distance $d\left(\alpha ,\beta \right)$ to be the number of edges contained in the path $\overline{\alpha \beta }$.

Let T be an infinite tree which is locally finite and has no leaves. We choose a vertex as the root. We call the number of neighbors of a vertex the degree of this vertex. When the degrees of all vertices of T are uniformly bounded, we say that T is a uniformly bounded tree. A tree is said to be a Bethe tree, denoted ${T}_{B,N}$, if each vertex has $N+1$ neighboring vertices; it is said to be a Cayley tree, denoted ${T}_{C,N}$, if the root has only N neighbors and every other vertex has $N+1$ neighbors. Both are common homogeneous trees, and both are obviously special cases of uniformly bounded trees. When the context permits, uniformly bounded trees are all denoted simply by T.

Let T be an infinite tree with root o. The set of all vertices with distance n from the root is called the nth generation of T, which is denoted by ${L}_{n}$. We denote by ${T}^{\left(n\right)}$ the subtree comprised of level 0 (the root o) through level n. For each vertex t, there is a unique path from o to t, and we write $|t|$ for the number of edges on this path. We denote the first predecessor of t by ${1}_{t}$, the second predecessor of t by ${2}_{t}$, and denote by ${n}_{t}$ the nth predecessor of t. For any two vertices s and t of the tree T, write $s\le t$ if s is on the unique path from the root o to t. We denote by $s\wedge t$ the vertex farthest from o satisfying $s\wedge t\le s$ and $s\wedge t\le t$.

Let $\left(\mathrm{\Omega },\mathcal{F}\right)$ be a measurable space, $\left\{{X}_{t},t\in T\right\}$ be a collection of random variables defined on $\left(\mathrm{\Omega },\mathcal{F}\right)$ and taking values in $S=\left\{0,1,\dots ,b-1\right\}$, where b is a positive integer. Let A be a subgraph of T, ${X}^{A}=\left\{{X}_{t},t\in A\right\}$, and denote by $|A|$ the number of vertices of A. Let the realization of ${X}^{{T}^{\left(n\right)}}$ be ${x}^{{T}^{\left(n\right)}}$. Let μ be a probability measure on the measurable space $\left(\mathrm{\Omega },\mathcal{F}\right)$; we call μ a random field on the tree T. Let the distribution of $\left\{{X}_{t},t\in T\right\}$ under the probability measure μ be $\mu \left({x}^{{T}^{\left(n\right)}}\right)=\mu \left({X}^{{T}^{\left(n\right)}}={x}^{{T}^{\left(n\right)}}\right)$; $\mu \left({x}^{{T}^{\left(n\right)}}\right)$ is actually the marginal distribution of μ on ${T}^{\left(n\right)}$.

Definition 1 (see [5])

Let T be an infinite tree which is locally finite and has no leaves, S be a finite state space, $\left\{{X}_{t},t\in T\right\}$ be a collection of S-valued random variables defined on the measurable space $\left(\mathrm{\Omega },\mathcal{F}\right)$, and let

$p=\left\{p\left(x\right),x\in S\right\}$
(1)

be a distribution on S,

$P=\left(P\left(y|x\right)\right),\phantom{\rule{1em}{0ex}}x,y\in S,$
(2)

be a stochastic matrix on ${S}^{2}$. Let ${\mu }_{P}$ be a probability measure on the measurable space $\left(\mathrm{\Omega },\mathcal{F}\right)$. If for any vertex $t\in T$,

${\mu }_{P}\left({X}_{t}=y|{X}_{{1}_{t}}=x\phantom{\rule{0.25em}{0ex}}\text{and}\phantom{\rule{0.25em}{0ex}}{X}_{\sigma }\phantom{\rule{0.25em}{0ex}}\text{for}\phantom{\rule{0.25em}{0ex}}\sigma \wedge t\le {1}_{t}\right)={\mu }_{P}\left({X}_{t}=y|{X}_{{1}_{t}}=x\right)=P\left(y|x\right),\phantom{\rule{1em}{0ex}}x,y\in S,$
(3)

and

${\mu }_{P}\left({X}_{0}=x\right)=p\left(x\right),\phantom{\rule{1em}{0ex}}x\in S,$

$\left\{{X}_{t},t\in T\right\}$ will be called S-valued Markov chains indexed by an infinite tree T with initial distribution (1) and transition matrix (2) under probability measure ${\mu }_{P}$.

Let $\left\{{X}_{t},t\in T\right\}$ be Markov chains indexed by tree T under probability measure ${\mu }_{P}$ defined above. It is easy to see that

${\mu }_{P}\left({x}^{{T}^{\left(n\right)}}\right)={\mu }_{P}\left({X}^{{T}^{\left(n\right)}}={x}^{{T}^{\left(n\right)}}\right)=p\left({x}_{o}\right)\prod _{m=1}^{n}\prod _{t\in {L}_{m}}P\left({x}_{t}|{x}_{{1}_{t}}\right).$
(4)

In order to avoid technical problems, we always assume that $\mu \left({x}^{{T}^{\left(n\right)}}\right)$, $P\left(y|x\right)$ and $p\left(x\right)$ are positive.
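As an illustration of the factorization (4), the following Python sketch (with a hypothetical initial distribution and transition matrix on $S=\left\{0,1\right\}$, both assumptions for the example) builds ${T}^{\left(2\right)}$ of the Cayley tree ${T}_{C,2}$ as a parent array and checks that the marginals (4) sum to 1 over all configurations:

```python
import itertools

def cayley_parents(n_levels, N=2):
    """Build T^(n) of the Cayley tree T_{C,N}: the root has N children and
    every other vertex also has N children. Returns the parent array."""
    parent = [None]          # vertex 0 is the root o
    frontier = [0]
    for _ in range(n_levels):
        nxt = []
        for v in frontier:
            for _ in range(N):
                parent.append(v)
                nxt.append(len(parent) - 1)
        frontier = nxt
    return parent

def mu_P(x, parent, p, P):
    """Marginal distribution (4): p(x_o) * prod_t P(x_t | x_{1_t})."""
    prob = p[x[0]]
    for t in range(1, len(parent)):
        prob *= P[x[parent[t]]][x[t]]
    return prob

parent = cayley_parents(2, N=2)          # T^(2) of T_{C,2}: 1 + 2 + 4 = 7 vertices
p = [0.4, 0.6]                            # hypothetical initial distribution
P = [[0.7, 0.3], [0.2, 0.8]]              # hypothetical positive stochastic matrix
total = sum(mu_P(x, parent, p, P)
            for x in itertools.product([0, 1], repeat=len(parent)))
print(total)                              # the marginals (4) sum to 1
```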

Definition 2 Let T be a uniformly bounded tree which has no leaves and $\left\{{X}_{t},t\in T\right\}$ be a collection of S-valued random variables defined on $\left(\mathrm{\Omega },\mathcal{F}\right)$, $P=\left(P\left(y|x\right)\right)$, $x,y\in S$ be a positive stochastic matrix, μ, ${\mu }_{P}$ be two probability measures on $\left(\mathrm{\Omega },\mathcal{F}\right)$, and $\left\{{X}_{t},t\in T\right\}$ be Markov chains indexed by the tree T under probability measure ${\mu }_{P}$ determined by P. Assume that $\mu \left({x}^{{T}^{\left(n\right)}}\right)$ is always strictly positive. Let
${\phi }_{n}\left(\omega \right)=\frac{\mu \left({X}^{{T}^{\left(n\right)}}\right)}{{\mu }_{P}\left({X}^{{T}^{\left(n\right)}}\right)},$
(5)
$\phi \left(\omega \right)=\underset{n\to \mathrm{\infty }}{lim sup}\frac{1}{|{T}^{\left(n\right)}|}ln{\phi }_{n}\left(\omega \right).$
(6)

$\phi \left(\omega \right)$ will be called the asymptotic logarithmic likelihood ratio.

Remark 1 If $\mu ={\mu }_{P}$, then $\phi \left(\omega \right)\equiv 0$. In Lemma 1 we will show that in a general case $\phi \left(\omega \right)\ge 0$ μ-a.e., hence $\phi \left(\omega \right)$ can be regarded as a measure of the Markov approximation of an arbitrary random field on T.

The tree model has recently drawn increasing interest from specialists in physics, probability and information theory. Benjamini and Peres [1] have given the notion of tree-indexed homogeneous Markov chains and studied the recurrence and ray-recurrence for them. Berger and Ye [2] have studied the existence of the entropy rate for some stationary random fields on a homogeneous tree. Ye and Berger [3] have studied the asymptotic equipartition property (AEP) in the sense of convergence in probability for a PPG-invariant and ergodic random field on a homogeneous tree. Recently, Yang [4] has studied some strong limit theorems for countable homogeneous Markov chains indexed by a homogeneous tree, as well as the strong law of large numbers and the AEP for finite homogeneous Markov chains indexed by a homogeneous tree. Huang and Yang [5] have studied the strong law of large numbers for Markov chains indexed by an infinite tree with uniformly bounded degree. Liu and Wang [6] have studied the small deviation theorems between arbitrary random fields and the Markov chain fields on the Cayley tree. Peng, Yang, and Wang [7] have further studied a class of small deviation theorems for functionals of random fields on a homogeneous tree which partially extend the results of [6].

In this paper, by introducing the asymptotic logarithmic likelihood ratio as a measure of the Markov approximation of an arbitrary random field on a uniformly bounded tree, and by constructing a non-negative martingale, we obtain the following two results: a class of small deviation theorems for functionals and a class of small deviation theorems for the frequencies of occurrence of states for random fields on a uniformly bounded tree. In fact, our present results contain those of [6] and [7] as special cases.

Lemma 1 Let T be a uniformly bounded tree which has no leaves. Let ${\mu }_{1}$, ${\mu }_{2}$ be two probability measures on $\left(\mathrm{\Omega },\mathcal{F}\right)$, $D\in \mathcal{F}$, $\left\{{\tau }_{n},n\ge 1\right\}$ be a sequence of positive random variables such that

$\underset{n\to \mathrm{\infty }}{lim inf}\frac{{\tau }_{n}}{|{T}^{\left(n\right)}|}>0\phantom{\rule{1em}{0ex}}{\mu }_{1}\mathit{\text{-a.e. on}}D.$
(7)

Then

$\underset{n\to \mathrm{\infty }}{lim sup}\frac{1}{{\tau }_{n}}ln\frac{{\mu }_{2}\left({X}^{{T}^{\left(n\right)}}\right)}{{\mu }_{1}\left({X}^{{T}^{\left(n\right)}}\right)}\le 0\phantom{\rule{1em}{0ex}}{\mu }_{1}\mathit{\text{-a.e. on}}D.$
(8)

Proof The proof of this lemma is similar to that of Lemma 1 of [7], so we omit it. □

Remark 2 Let ${\mu }_{1}=\mu$, ${\mu }_{2}={\mu }_{P}$ and ${\tau }_{n}=|{T}^{\left(n\right)}|$ in Lemma 1, by (8) there exists $A\in \mathcal{F}$, $\mu \left(A\right)=1$ such that

$\underset{n\to \mathrm{\infty }}{lim sup}\frac{1}{|{T}^{\left(n\right)}|}ln\frac{{\mu }_{P}\left({X}^{{T}^{\left(n\right)}}\right)}{\mu \left({X}^{{T}^{\left(n\right)}}\right)}\le 0,\phantom{\rule{1em}{0ex}}\omega \in A,$
(9)

hence we have $\phi \left(\omega \right)\ge 0$, $\omega \in A$.

From this remark, we know that such a set $D\in \mathcal{F}$ and such a sequence $\left\{{\tau }_{n},n\ge 1\right\}$ do exist.

Let T be a uniformly bounded tree, $k,l\in S$. Let
${S}_{n}\left(k\right)=|\left\{t\in {T}^{\left(n\right)}:{X}_{t}=k\right\}|,$
(10)
${S}_{n}\left(k,l\right)=|\left\{t\in {T}^{\left(n\right)}\setminus \left\{o\right\}:{X}_{{1}_{t}}=k,{X}_{t}=l\right\}|,$
(11)

that is,

${S}_{n}\left(k\right)=\sum _{t\in {T}^{\left(n\right)}}{\delta }_{k}\left({X}_{t}\right),\phantom{\rule{2em}{0ex}}{S}_{n}\left(k,l\right)=\sum _{m=1}^{n}\sum _{t\in {L}_{m}}{\delta }_{k}\left({X}_{{1}_{t}}\right){\delta }_{l}\left({X}_{t}\right),$

where ${\delta }_{k}\left(\cdot \right)$ ($k\in S$) is the Kronecker δ-function, i.e., ${\delta }_{k}\left(x\right)=1$ if $x=k$ and ${\delta }_{k}\left(x\right)=0$ otherwise.

Let the degree of each vertex σ ($\sigma \ne o$) on the tree T be $d\left(\sigma \right)$. Since T is a uniformly bounded tree which has no leaves, we know that there are two positive numbers m and M such that $2\le m\le d\left(\sigma \right)\le M$.

Lemma 2 Let T be a uniformly bounded tree which has no leaves, $P=\left(P\left(y|x\right)\right)$, $x,y\in S$ be a positive stochastic matrix, μ, ${\mu }_{P}$ be two probability measures on $\left(\mathrm{\Omega },\mathcal{F}\right)$, $\left\{{X}_{t},t\in T\right\}$ be Markov chains indexed by T under probability measure ${\mu }_{P}$ determined by P, $\phi \left(\omega \right)$ be denoted by (6), M be defined as above, and $0\le c$ be a constant such that ${M}_{k}>0$. Let
$D\left(c\right)=\left\{\omega :\phi \left(\omega \right)\le c\right\},$
(12)
(13)

where ${a}_{k}=min\left\{P\left(k|i\right),i\in S\right\}$, ${b}_{k}=max\left\{P\left(k|i\right),i\in S\right\}$, and ${M}_{k}$ is the constant defined in terms of ${a}_{k}$, ${b}_{k}$ and c by (13). Then

$\underset{n\to \mathrm{\infty }}{lim inf}\frac{{S}_{n-1}\left(k\right)}{|{T}^{\left(n\right)}|}\ge \frac{{M}_{k}}{M-1}\phantom{\rule{1em}{0ex}}\mu \mathit{\text{-a.e. on}}D\left(c\right).$
(14)

Proof By a proof similar to that of the corresponding lemma of [6], we can obtain

$\underset{n\to \mathrm{\infty }}{lim inf}\frac{{S}_{n}\left(k\right)}{|{T}^{\left(n\right)}|}\ge {M}_{k}\phantom{\rule{1em}{0ex}}\mu \text{-a.e. on }D\left(c\right).$
(15)

Since $\underset{n\to \mathrm{\infty }}{lim sup}|{T}^{\left(n+1\right)}|/|{T}^{\left(n\right)}|\le M-1$, the lemma follows from (15). □
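The growth comparison between $|{T}^{\left(n+1\right)}|$ and $|{T}^{\left(n\right)}|$ used here can be checked numerically. For the Cayley tree ${T}_{C,N}$ one has $|{T}^{\left(n\right)}|=\left({N}^{n+1}-1\right)/\left(N-1\right)$ and $M-1=N$; the consecutive ratios decrease towards $M-1$. A minimal sketch, with $N=2$ taken as an assumed example:

```python
def subtree_size(N, n):
    # |T^(n)| = 1 + N + N^2 + ... + N^n for the Cayley tree T_{C,N}
    return (N ** (n + 1) - 1) // (N - 1)

N = 2                                    # assumed example; here M - 1 = N
sizes = [subtree_size(N, n) for n in range(1, 20)]
ratios = [b / a for a, b in zip(sizes, sizes[1:])]
print(ratios[0], ratios[-1])             # ratios decrease towards M - 1 = 2
```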

In the following, we always let $N\ge 0$, $k\in S$, ${d}^{0}\left(t\right)=1$, and denote by ${d}^{N}\left(t\right)=|\left\{\tau \in T:{N}_{\tau }=t\right\}|$ the number of vertices having t as their Nth predecessor, where ${N}_{\tau }$ denotes the Nth predecessor of τ as defined above. Let
${S}_{k}^{N}\left({T}^{\left(n\right)}\right)=\sum _{t\in {T}^{\left(n\right)}}{\delta }_{k}\left({X}_{t}\right){d}^{N}\left(t\right),$
(16)
${S}_{k,l}^{N}\left({T}^{\left(n\right)}\setminus \left\{o\right\}\right)=\sum _{t\in {T}^{\left(n\right)}\setminus \left\{o\right\}}{\delta }_{k}\left({X}_{{1}_{t}}\right){\delta }_{l}\left({X}_{t}\right){d}^{N}\left(t\right).$
(17)

Corollary 1 Let m, M be defined as above. Under the assumption of Lemma  2, we have

$\underset{n\to \mathrm{\infty }}{lim inf}\frac{{S}_{k}^{N+1}\left({T}^{\left(n-1\right)}\right)}{|{T}^{\left(n\right)}|}\ge \frac{{M}_{k}{\left(m-1\right)}^{N+1}}{M-1}\phantom{\rule{1em}{0ex}}\mu \mathit{\text{-a.e. on}}D\left(c\right).$
(18)

Proof Since T is a uniformly bounded tree which has no leaves, we have ${\left(m-1\right)}^{N}\le {d}^{N}\left(t\right)\le {\left(M-1\right)}^{N}$. By Lemma 2 we have

$\underset{n\to \mathrm{\infty }}{lim inf}\frac{{S}_{k}^{N+1}\left({T}^{\left(n-1\right)}\right)}{|{T}^{\left(n\right)}|}\ge {\left(m-1\right)}^{N+1}\underset{n\to \mathrm{\infty }}{lim inf}\frac{{S}_{n-1}\left(k\right)}{|{T}^{\left(n\right)}|}\ge \frac{{M}_{k}{\left(m-1\right)}^{N+1}}{M-1}\phantom{\rule{1em}{0ex}}\mu \text{-a.e. on }D\left(c\right).$
(19)

The proof is finished. □

Lemma 3 Let T be a uniformly bounded tree which has no leaves, $P=\left(P\left(y|x\right)\right)$, $x,y\in S$ be a positive stochastic matrix, μ, ${\mu }_{P}$ be two probability measures on $\left(\mathrm{\Omega },\mathcal{F}\right)$, $\left\{{X}_{t},t\in T\right\}$ be Markov chains indexed by T under probability measure ${\mu }_{P}$ determined by P, $\left\{{g}_{t}\left(x,y\right),t\in T\right\}$ be functions defined on ${S}^{2}$, ${L}_{0}=\left\{o\right\}$ (where o is the root of the tree T), ${\mathcal{F}}_{n}=\sigma \left({X}^{{T}^{\left(n\right)}}\right)$, λ be a real number. Let

${t}_{n}\left(\lambda ,\omega \right)=\frac{{e}^{\lambda {\sum }_{t\in {T}^{\left(n\right)}\setminus \left\{o\right\}}{g}_{t}\left({X}_{{1}_{t}},{X}_{t}\right)}}{{\prod }_{t\in {T}^{\left(n\right)}\setminus \left\{o\right\}}{E}_{{\mu }_{P}}\left[{e}^{\lambda {g}_{t}\left({X}_{{1}_{t}},{X}_{t}\right)}|{X}_{{1}_{t}}\right]}\frac{{\mu }_{P}\left({X}^{{T}^{\left(n\right)}}\right)}{\mu \left({X}^{{T}^{\left(n\right)}}\right)},$
(20)

where ${E}_{{\mu }_{P}}$ is the expectation under probability measure ${\mu }_{P}$. Then $\left({t}_{n}\left(\lambda ,\omega \right),{\mathcal{F}}_{n},n\ge 1\right)$ is a non-negative martingale under probability measure μ.

Proof The proof of this lemma is similar to that of Lemma 3 of [7], so we omit it. □

## 2 Small deviation theorem

Small deviation theorems are a class of strong limit theorems expressed by inequalities; they extend the strong limit theorems expressed by equalities. This is a research topic proposed by Liu (see [8]).

In this section, we will establish a class of small deviation theorems of functionals and a class of small deviation theorems of the frequencies of occurrence of states for random fields on a uniformly bounded tree.

Theorem 1 Let T be a uniformly bounded tree which has no leaves, $P=\left(P\left(y|x\right)\right)$, $x,y\in S$ be a positive stochastic matrix, μ, ${\mu }_{P}$ be two probability measures on $\left(\mathrm{\Omega },\mathcal{F}\right)$, $\left\{{X}_{t},t\in T\right\}$ be Markov chains indexed by T under probability measure ${\mu }_{P}$ determined by P. Let $\left\{{g}_{t}\left(x,y\right),t\in T\right\}$ be a collection of uniformly bounded functions defined on ${S}^{2}$, that is, there exists $K>0$ such that $|{g}_{t}\left(x,y\right)|\le K$ for all $x,y\in S$ and all $t\in T$, and let $\phi \left(\omega \right)$ be denoted by (6). Let $c\ge 0$, $D\left(c\right)$ be defined by (12), and let
${F}_{n}\left(\omega \right)=\sum _{t\in {T}^{\left(n\right)}\setminus \left\{o\right\}}{g}_{t}\left({X}_{{1}_{t}},{X}_{t}\right),$
(21)
${G}_{n}\left(\omega \right)=\sum _{t\in {T}^{\left(n\right)}\setminus \left\{o\right\}}{E}_{{\mu }_{P}}\left[{g}_{t}\left({X}_{{1}_{t}},{X}_{t}\right)|{X}_{{1}_{t}}\right].$
(22)

Then
$\underset{n\to \mathrm{\infty }}{lim sup}\frac{1}{|{T}^{\left(n\right)}|}\left({F}_{n}\left(\omega \right)-{G}_{n}\left(\omega \right)\right)\le cK+\sqrt{2c}K\phantom{\rule{1em}{0ex}}\mu \text{-a.e. on }D\left(c\right),$
(23)
$\underset{n\to \mathrm{\infty }}{lim inf}\frac{1}{|{T}^{\left(n\right)}|}\left({F}_{n}\left(\omega \right)-{G}_{n}\left(\omega \right)\right)\ge -cK-\sqrt{2c}K\phantom{\rule{1em}{0ex}}\mu \text{-a.e. on }D\left(c\right).$
(24)

Proof Let ${t}_{n}\left(\lambda ,\omega \right)$ be defined by (20). By Lemma 3, $\left({t}_{n}\left(\lambda ,\omega \right),{\mathcal{F}}_{n},n\ge 1\right)$ is a non-negative martingale under probability measure μ with ${E}_{\mu }\left({t}_{n}\left(\lambda ,\omega \right)\right)=1$. By Doob’s martingale convergence theorem, we have

$\underset{n\to \mathrm{\infty }}{lim}{t}_{n}\left(\lambda ,\omega \right)=t\left(\lambda ,\omega \right)<\mathrm{\infty }\phantom{\rule{1em}{0ex}}\mu \text{-a.e.}$
(25)

Hence

$\underset{n\to \mathrm{\infty }}{lim sup}\frac{1}{|{T}^{\left(n\right)}|}ln{t}_{n}\left(\lambda ,\omega \right)\le 0\phantom{\rule{1em}{0ex}}\mu \text{-a.e.}$
(26)

We have by (20) and (26)

$\underset{n\to \mathrm{\infty }}{lim sup}\frac{1}{|{T}^{\left(n\right)}|}\left[\lambda \sum _{t\in {T}^{\left(n\right)}\setminus \left\{o\right\}}{g}_{t}\left({X}_{{1}_{t}},{X}_{t}\right)-\sum _{t\in {T}^{\left(n\right)}\setminus \left\{o\right\}}ln{E}_{{\mu }_{P}}\left[{e}^{\lambda {g}_{t}\left({X}_{{1}_{t}},{X}_{t}\right)}|{X}_{{1}_{t}}\right]-ln\frac{\mu \left({X}^{{T}^{\left(n\right)}}\right)}{{\mu }_{P}\left({X}^{{T}^{\left(n\right)}}\right)}\right]\le 0\phantom{\rule{1em}{0ex}}\mu \text{-a.e.}$
(27)

By (5), (6), (12) and (27), we have

$\underset{n\to \mathrm{\infty }}{lim sup}\frac{1}{|{T}^{\left(n\right)}|}\left[\lambda {F}_{n}\left(\omega \right)-\sum _{t\in {T}^{\left(n\right)}\setminus \left\{o\right\}}ln{E}_{{\mu }_{P}}\left[{e}^{\lambda {g}_{t}\left({X}_{{1}_{t}},{X}_{t}\right)}|{X}_{{1}_{t}}\right]\right]\le c\phantom{\rule{1em}{0ex}}\mu \text{-a.e. on }D\left(c\right).$
(28)

Taking $\lambda >0$, we arrive at

$\begin{array}{rl}\underset{n\to \mathrm{\infty }}{lim sup}\frac{1}{|{T}^{\left(n\right)}|}\left({F}_{n}\left(\omega \right)-{G}_{n}\left(\omega \right)\right)& \stackrel{\text{(a)}}{\le }\frac{c}{\lambda }+\underset{n\to \mathrm{\infty }}{lim sup}\frac{1}{|{T}^{\left(n\right)}|}\sum _{t\in {T}^{\left(n\right)}\setminus \left\{o\right\}}\left\{\frac{1}{\lambda }ln{E}_{{\mu }_{P}}\left[{e}^{\lambda {g}_{t}\left({X}_{{1}_{t}},{X}_{t}\right)}|{X}_{{1}_{t}}\right]-{E}_{{\mu }_{P}}\left[{g}_{t}\left({X}_{{1}_{t}},{X}_{t}\right)|{X}_{{1}_{t}}\right]\right\}\\ & \stackrel{\text{(b)}}{\le }\frac{c}{\lambda }+\underset{n\to \mathrm{\infty }}{lim sup}\frac{1}{\lambda |{T}^{\left(n\right)}|}\sum _{t\in {T}^{\left(n\right)}\setminus \left\{o\right\}}{E}_{{\mu }_{P}}\left[{e}^{\lambda {g}_{t}\left({X}_{{1}_{t}},{X}_{t}\right)}-1-\lambda {g}_{t}\left({X}_{{1}_{t}},{X}_{t}\right)|{X}_{{1}_{t}}\right]\\ & \stackrel{\text{(c)}}{\le }\frac{c}{\lambda }+\underset{n\to \mathrm{\infty }}{lim sup}\frac{\lambda }{2|{T}^{\left(n\right)}|}\sum _{t\in {T}^{\left(n\right)}\setminus \left\{o\right\}}{E}_{{\mu }_{P}}\left[{g}_{t}^{2}\left({X}_{{1}_{t}},{X}_{t}\right){e}^{\lambda |{g}_{t}\left({X}_{{1}_{t}},{X}_{t}\right)|}|{X}_{{1}_{t}}\right]\\ & \stackrel{\text{(d)}}{\le }\frac{c}{\lambda }+\frac{\lambda }{2}{K}^{2}{e}^{\lambda K}\stackrel{\text{(e)}}{\le }cK+\frac{c}{\lambda {e}^{\lambda K}}+\frac{\lambda }{2}{K}^{2}{e}^{\lambda K}\phantom{\rule{1em}{0ex}}\mu \text{-a.e. on }D\left(c\right),\end{array}$
(29)

where (a) follows by (28), (b) follows by the inequality $lnx\le x-1$ ($x>0$), (c) follows by the inequality $0\le {e}^{x}-x-1\le \frac{1}{2}{x}^{2}{e}^{|x|}$, (d) follows by $|{g}_{t}\left(x,y\right)|\le K$, $\mathrm{\forall }t\in T$, and (e) follows by the inequality ${e}^{-x}\ge 1-x$. In the case $c>0$, noticing that $\frac{\lambda }{2}{K}^{2}{e}^{\lambda K}+\frac{c}{\lambda {e}^{\lambda K}}$ attains its smallest value $\sqrt{2c}K$ when $\lambda {e}^{\lambda K}=\sqrt{\frac{2c}{{K}^{2}}}$, by (29) we have

$\underset{n\to \mathrm{\infty }}{lim sup}\frac{1}{|{T}^{\left(n\right)}|}\left({F}_{n}\left(\omega \right)-{G}_{n}\left(\omega \right)\right)\le cK+\sqrt{2c}K\phantom{\rule{1em}{0ex}}\mu \text{-a.e. on }D\left(c\right).$

Hence (23) holds. In the case $c=0$, (23) also holds by choosing ${\lambda }_{i}\to {0}^{+}$ ($i\to \mathrm{\infty }$) in (29). Taking $\lambda <0$ and using a similar approach, we can prove (24). This completes the proof of Theorem 1. □
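The optimisation step at the end of this proof can be checked numerically: for the assumed sample values $K=c=1$ (chosen only for illustration), the minimum of $\frac{\lambda }{2}{K}^{2}{e}^{\lambda K}+\frac{c}{\lambda {e}^{\lambda K}}$ over a fine λ-grid matches the claimed smallest value $\sqrt{2c}K$:

```python
import math

def h(lam, K, c):
    # (λ/2) K^2 e^{λK} + c/(λ e^{λK}): the quantity minimised in the proof
    return 0.5 * lam * K**2 * math.exp(lam * K) + c / (lam * math.exp(lam * K))

K, c = 1.0, 1.0                           # assumed sample values
numeric_min = min(h(i / 10000.0, K, c) for i in range(1, 20001))
claimed_min = math.sqrt(2 * c) * K        # the value sqrt(2c)*K from the text
print(numeric_min, claimed_min)
```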

In the following, we provide an example showing that $D\left(c\right)$ may have positive probability, and can even have probability 1.

Example Let T be the Cayley tree ${T}_{C,2}$, ${\mu }_{P}$ and μ be two probability measures on the measurable space $\left(\mathrm{\Omega },\mathcal{F}\right)$, and $\left\{{X}_{t},t\in T\right\}$ be a collection of random variables defined on $\left(\mathrm{\Omega },\mathcal{F}\right)$ taking values in the state space $\left\{0,1\right\}$. Let $\left\{{X}_{t},t\in T\right\}$ be an i.i.d. process indexed by the tree T under the probability measure ${\mu }_{P}$ with the common distribution ${\mu }_{P}\left({X}_{t}=1\right)=p$, ${\mu }_{P}\left({X}_{t}=0\right)=1-p$, $0<p<1$, and also an i.i.d. process indexed by T under the probability measure μ with the common distribution $\mu \left({X}_{t}=1\right)=q$, $\mu \left({X}_{t}=0\right)=1-q$, $0<q<1$. It is easy to see that $\left\{{X}_{t},t\in T\right\}$ are Markov chains indexed by the tree T with the transition matrices

$\left(\begin{array}{cc}1-p& p\\ 1-p& p\end{array}\right)\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}\left(\begin{array}{cc}1-q& q\\ 1-q& q\end{array}\right)$

and stationary distributions $\left(1-p,p\right)$ and $\left(1-q,q\right)$ under the probability measures ${\mu }_{P}$ and μ, respectively. It is also easy to see that

${\mu }_{P}\left({x}^{{T}^{\left(n\right)}}\right)={p}^{{s}_{1}\left({T}^{\left(n\right)}\right)}{\left(1-p\right)}^{|{T}^{\left(n\right)}|-{s}_{1}\left({T}^{\left(n\right)}\right)},\phantom{\rule{2em}{0ex}}\mu \left({x}^{{T}^{\left(n\right)}}\right)={q}^{{s}_{1}\left({T}^{\left(n\right)}\right)}{\left(1-q\right)}^{|{T}^{\left(n\right)}|-{s}_{1}\left({T}^{\left(n\right)}\right)},$

where ${S}_{1}\left({T}^{\left(n\right)}\right)={S}_{1}^{0}\left({T}^{\left(n\right)}\right)$ and ${s}_{1}\left({T}^{\left(n\right)}\right)$ is the realization of ${S}_{1}\left({T}^{\left(n\right)}\right)$. In this case

${\phi }_{n}\left(\omega \right)=\frac{\mu \left({X}^{{T}^{\left(n\right)}}\right)}{{\mu }_{P}\left({X}^{{T}^{\left(n\right)}}\right)}={\left(\frac{q}{p}\right)}^{{S}_{1}\left({T}^{\left(n\right)}\right)}{\left(\frac{1-q}{1-p}\right)}^{|{T}^{\left(n\right)}|-{S}_{1}\left({T}^{\left(n\right)}\right)}.$

By the strong law of large numbers for Markov chains indexed by a tree (see [4]), we have

$\underset{n\to \mathrm{\infty }}{lim}\frac{{S}_{1}\left({T}^{\left(n\right)}\right)}{|{T}^{\left(n\right)}|}=q\phantom{\rule{1em}{0ex}}\mu \text{-a.e.}$

Hence we have

$\begin{array}{rcl}\phi \left(\omega \right)& =& \underset{n\to \mathrm{\infty }}{lim sup}\frac{1}{|{T}^{\left(n\right)}|}ln{\phi }_{n}\left(\omega \right)\\ =& \underset{n\to \mathrm{\infty }}{lim}\frac{{S}_{1}\left({T}^{\left(n\right)}\right)}{|{T}^{\left(n\right)}|}ln\frac{q\left(1-p\right)}{p\left(1-q\right)}+ln\frac{1-q}{1-p}\\ =& qln\frac{q\left(1-p\right)}{p\left(1-q\right)}+ln\frac{1-q}{1-p}\phantom{\rule{1em}{0ex}}\mu \text{-a.e.}\end{array}$

Let

$f\left(q\right)=qln\frac{q\left(1-p\right)}{p\left(1-q\right)}+ln\frac{1-q}{1-p},\phantom{\rule{1em}{0ex}}0

It is easy to see that $f\left(q\right)$ is exactly the relative entropy of the Bernoulli(q) distribution with respect to the Bernoulli(p) distribution, i.e., $f\left(q\right)=qln\frac{q}{p}+\left(1-q\right)ln\frac{1-q}{1-p}$. Hence $f\left(p\right)=0$, $f\left(q\right)$ is continuous and increasing on $\left[p,1\right)$, and ${lim}_{q\to {1}^{-}}f\left(q\right)=ln\frac{1}{p}$. Therefore, for any $0\le c<ln\frac{1}{p}$, there exists q such that $f\left(q\right)=c$; since $p\in \left(0,1\right)$ is arbitrary and $ln\frac{1}{p}\to \mathrm{\infty }$ as $p\to {0}^{+}$, every $0\le c<\mathrm{\infty }$ can be obtained in this way. Thus $\mu \left(D\left(c\right)\right)=1$.
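A short numerical sketch of this example, with the hypothetical choices $p=0.5$ and $c=0.5$ (both assumptions for illustration): it checks $f\left(p\right)=0$ and finds, by bisection on $\left[p,1\right)$, a q with $f\left(q\right)=c$, using the monotonicity of f:

```python
import math

def f(q, p):
    # f(q) = q ln[q(1-p)/(p(1-q))] + ln[(1-q)/(1-p)]
    # (equivalently, the relative entropy of Bernoulli(q) w.r.t. Bernoulli(p))
    return (q * math.log(q * (1 - p) / (p * (1 - q)))
            + math.log((1 - q) / (1 - p)))

p, c = 0.5, 0.5              # hypothetical example values, c < ln(1/p)
assert abs(f(p, p)) < 1e-15  # f(p) = 0
# f increases on [p, 1), so bisection locates the q with f(q) = c
lo, hi = p, 1 - 1e-9
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid, p) < c:
        lo = mid
    else:
        hi = mid
q = (lo + hi) / 2
print(q)                     # the q in (p, 1) solving f(q) = c
```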

Theorem 2 Let T be a uniformly bounded tree which has no leaves, and let ${M}_{k}$, m, M, $D\left(c\right)$ be defined as above. Let $0\le c$ be such that ${M}_{k}>0$. Under the assumptions of Theorem  1, we have
(30)
(31)

Proof Letting ${g}_{t}\left(x,y\right)={\delta }_{k}\left(x\right){\delta }_{l}\left(y\right){d}^{N}\left(t\right)$ in Theorem 1, by (21) and (22), we have

${F}_{n}\left(\omega \right)={S}_{k,l}^{N}\left({T}^{\left(n\right)}\setminus \left\{o\right\}\right),$
(32)
${G}_{n}\left(\omega \right)=P\left(l|k\right)\sum _{t\in {T}^{\left(n\right)}\setminus \left\{o\right\}}{\delta }_{k}\left({X}_{{1}_{t}}\right){d}^{N}\left(t\right).$
(33)

By (28), when $c\ge 0$, we have

$\underset{n\to \mathrm{\infty }}{lim sup}\frac{1}{|{T}^{\left(n\right)}|}\left[\lambda {S}_{k,l}^{N}\left({T}^{\left(n\right)}\setminus \left\{o\right\}\right)-\sum _{t\in {T}^{\left(n\right)}\setminus \left\{o\right\}}ln{E}_{{\mu }_{P}}\left[{e}^{\lambda {\delta }_{k}\left({X}_{{1}_{t}}\right){\delta }_{l}\left({X}_{t}\right){d}^{N}\left(t\right)}|{X}_{{1}_{t}}\right]\right]\le c\phantom{\rule{1em}{0ex}}\mu \text{-a.e. on }D\left(c\right).$
(34)

By Corollary 1 and (34), when $0\le c$ is such that ${M}_{k}>0$, we arrive at
(35)

Taking $\lambda >0$, we have (36)

where (f) follows by (35), (g), similarly to (b) and (c) of (29), follows by the inequalities $lnx\le x-1$ ($x>0$) and $0\le {e}^{x}-x-1\le \frac{{x}^{2}}{2}{e}^{|x|}$, (h) follows by the inequality ${\left(m-1\right)}^{N}\le {d}^{N}\left(t\right)\le {\left(M-1\right)}^{N}$, (i) follows by the inequality

$\frac{{\sum }_{t\in {T}^{\left(n\right)}\setminus \left\{o\right\}}{\delta }_{k}\left({X}_{{1}_{t}}\right)}{{S}_{k}^{N+1}\left({T}^{\left(n-1\right)}\right)}=\frac{{\sum }_{t\in {T}^{\left(n-1\right)}}{\delta }_{k}\left({X}_{t}\right){d}^{1}\left(t\right)}{{\sum }_{t\in {T}^{\left(n-1\right)}}{\delta }_{k}\left({X}_{t}\right){d}^{N+1}\left(t\right)}\le \frac{1}{{\left(m-1\right)}^{N}},$

and (j) follows by the inequality ${e}^{-x}\ge 1-x$. In the case $c>0$, notice that

$\frac{\lambda {\left(M-1\right)}^{2N}{e}^{\lambda {\left(M-1\right)}^{N}}P\left(l|k\right)}{2}+\frac{c\left(M-1\right)}{\lambda {e}^{\lambda {\left(M-1\right)}^{N}}{M}_{k}\left(m-1\right)}$

attains its smallest value $2\sqrt{\frac{c{\left(M-1\right)}^{2N+1}P\left(l|k\right)}{2{M}_{k}\left(m-1\right)}}$ when

$\lambda {\left(M-1\right)}^{N}{e}^{\lambda {\left(M-1\right)}^{N}}=\sqrt{\frac{2c\left(M-1\right)}{{M}_{k}P\left(l|k\right)\left(m-1\right)}}.$

By (36), in the case $c>0$ we obtain (30). In the case $c=0$, (30) also holds by choosing ${\lambda }_{i}\to {0}^{+}$ ($i\to \mathrm{\infty }$) in (36). Taking $\lambda <0$ and using a similar approach, we can prove (31). This completes the proof of Theorem 2. □
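The minimisation claim used in this proof can also be checked numerically, for assumed sample parameters ($M=3$, $m=2$, $N=1$, $c=0.1$, ${M}_{k}=0.2$, $P\left(l|k\right)=0.4$; all hypothetical illustration values):

```python
import math

# all parameters are hypothetical sample values
M, m, N, c, Mk, Plk = 3, 2, 1, 0.1, 0.2, 0.4

def g(lam):
    # λ(M-1)^{2N} e^{λ(M-1)^N} P(l|k)/2 + c(M-1)/(λ e^{λ(M-1)^N} M_k (m-1))
    e = math.exp(lam * (M - 1) ** N)
    return (lam * (M - 1) ** (2 * N) * e * Plk / 2
            + c * (M - 1) / (lam * e * Mk * (m - 1)))

numeric = min(g(i / 10000.0) for i in range(1, 30001))
claimed = 2 * math.sqrt(c * (M - 1) ** (2 * N + 1) * Plk / (2 * Mk * (m - 1)))
print(numeric, claimed)
```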

Corollary 2 Under the assumption of Theorem  2, we have

$\underset{n\to \mathrm{\infty }}{lim sup}\left[\frac{{S}_{k,l}^{N}\left({T}^{\left(n\right)}\setminus \left\{o\right\}\right)}{{S}_{k}^{N+1}\left({T}^{\left(n-1\right)}\right)}-P\left(l|k\right)\right]=0\phantom{\rule{1em}{0ex}}\mu \mathit{\text{-a.e. on}}D\left(0\right).$
(37)

Proof Letting $c=0$, (37) follows from (30) and (31). □

Corollary 3 (see [5])

Under the assumption of Theorem  2, we have

$\underset{n\to \mathrm{\infty }}{lim sup}\left[\frac{{S}_{k,l}^{N}\left({T}^{\left(n\right)}\setminus \left\{o\right\}\right)}{{S}_{k}^{N+1}\left({T}^{\left(n-1\right)}\right)}-P\left(l|k\right)\right]=0\phantom{\rule{1em}{0ex}}{\mu }_{P}\mathit{\text{-a.e.}}$
(38)

Proof Let $\mu ={\mu }_{P}$. Then $\phi \left(\omega \right)\equiv 0$ and $D\left(0\right)=\mathrm{\Omega }$. Hence (38) follows from (37) directly. □
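Corollary 3 with $N=0$ says that, under ${\mu }_{P}$, the empirical transition frequency from state k to state l along the edges of the tree converges to $P\left(l|k\right)$. A minimal Monte Carlo sketch illustrating this (with a hypothetical transition matrix, the tree ${T}_{C,2}$, and the root state fixed at 0 for simplicity):

```python
import random

random.seed(7)
P = [[0.7, 0.3], [0.2, 0.8]]   # hypothetical transition matrix on S = {0,1}

def child_state(x):
    # sample a child's state from row P[x]
    return 0 if random.random() < P[x][0] else 1

level = [0]                    # root state fixed at 0 for simplicity
pairs_01 = 0                   # edges with parent state 0 and child state 1
children_of_0 = 0              # children of vertices in state 0
for _ in range(17):            # grow T_{C,2}: two children per vertex
    nxt = []
    for x in level:
        for _ in range(2):
            y = child_state(x)
            nxt.append(y)
            if x == 0:
                children_of_0 += 1
                pairs_01 += y
    level = nxt
freq = pairs_01 / children_of_0
print(freq)                    # should be close to P(1|0) = 0.3
```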

Corollary 4 (see [6])

Under the assumptions of Theorem  2, if T is the Cayley tree ${T}_{C,{N}_{1}}$, we have
(39)
(40)

Proof Since T is ${T}_{C,{N}_{1}}$, we can take $M-1={N}_{1}$, $m-1={N}_{1}$. Moreover, every vertex of ${T}_{C,{N}_{1}}$ has exactly ${N}_{1}$ children, so ${d}^{N}\left(t\right)={N}_{1}^{N}$ for every $t\in T$, and hence

${S}_{k}^{N+1}\left({T}^{\left(n-1\right)}\right)={N}_{1}^{N+1}{S}_{n-1}\left(k\right),$
(41)
${S}_{k,l}^{N}\left({T}^{\left(n\right)}\setminus \left\{o\right\}\right)={N}_{1}^{N}{S}_{n}\left(k,l\right).$
(42)

By (30), (31), (41) and (42), we obtain (39) and (40).

This completes the proof of Corollary 4. □

Remark 3 Unfortunately, the upper and lower bounds obtained in Theorem 1 and Theorem 2 are not tight. For example, if we let $|{g}_{t}\left(x,y\right)|\le 1$ and $c=1$ in Theorem 1, then ${lim sup}_{n\to \mathrm{\infty }}\frac{1}{|{T}^{\left(n\right)}|}|{F}_{n}\left(\omega \right)-{G}_{n}\left(\omega \right)|\le 2$, but $c+\sqrt{2c}>2$.

## References

1. Benjamini I, Peres Y: Markov chains indexed by trees. Ann. Probab. 1994, 22: 219–243. 10.1214/aop/1176988857

2. Berger T, Ye Z: Entropic aspects of random fields on trees. IEEE Trans. Inf. Theory 1990, 36(5):1006–1018. 10.1109/18.57200

3. Ye Z, Berger T: Entropic, regularity and asymptotic equipartition property of random fields on trees. J. Comb. Inf. Syst. Sci. 1996, 21(2):157–184.

4. Yang WG: Some limit properties for Markov chains indexed by a homogeneous tree. Stat. Probab. Lett. 2003, 65: 241–250. 10.1016/j.spl.2003.04.001

5. Huang HL, Yang WG: Strong law of large numbers for Markov chains indexed by an infinite tree with uniformly bounded degree. Sci. China Ser. A 2008, 51(2):195–202. 10.1007/s11425-008-0015-1

6. Liu W, Wang LY: The Markov approximation of the random field on Cayley tree and a class of small deviation theorems. Stat. Probab. Lett. 2003, 63: 113–121. 10.1016/S0167-7152(03)00058-0

7. Peng WC, Yang WG, Wang B: A class of small deviation theorems for functionals of random fields on a homogeneous tree. J. Math. Anal. Appl. 2010, 361: 293–301. 10.1016/j.jmaa.2009.06.079

8. Liu W: Relative entropy densities and a class of limit theorems of the sequence of m -valued random variables. Ann. Probab. 1990, 18: 829–839. 10.1214/aop/1176990860

## Acknowledgements

The authors would like to thank the referees for their many valuable comments and suggestions. This work was supported by National Natural Science Foundation of China 11071104.

## Author information


### Corresponding author

Correspondence to Weiguo Yang.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors carried out the proof. All authors conceived of the study, and participated in its design and coordination. All authors read and approved the final manuscript.


Wang, S., Yang, W. A class of small deviation theorems for random fields on a uniformly bounded tree. J Inequal Appl 2013, 81 (2013). https://doi.org/10.1186/1029-242X-2013-81 