
On complete convergence for weighted sums of martingale-difference random fields

Abstract

Let $\{a_{n,i},\, n \in \mathbb{Z}_+^d,\, i \le n\}$ be an array of real numbers, and let $\{X_i,\, i \in \mathbb{Z}_+^d\}$ be martingale differences with respect to $\{\mathcal{G}_n,\, n \in \mathbb{Z}_+^d\}$ satisfying $E(E(X|\mathcal{G}_k)|\mathcal{G}_m) = E(X|\mathcal{G}_{k \wedge m})$ a.s., where $k \wedge m$ denotes the componentwise minimum, $\{\mathcal{G}_k,\, k \in \mathbb{Z}_+^d\}$ is a family of σ-algebras such that $\mathcal{G}_k \subset \mathcal{G}_n \subset \mathcal{G}$ for $k \le n$, and $X$ is any integrable random variable defined on the initial probability space. The aim of this paper is to obtain some results concerning complete convergence of the weighted sums $\sum_{i \le n} a_{n,i} X_i$.

MSC: 60F05, 60F15.

1 Introduction

The concept of complete convergence for sums of independent and identically distributed random variables was introduced by Hsu and Robbins [1] as follows: a sequence of random variables $\{X_n\}$ is said to converge completely to a constant $c$ if

$$\sum_{n=1}^{\infty} P\big(|X_n - c| > \epsilon\big) < \infty \quad \text{for all } \epsilon > 0.$$
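Recall that, by the first Borel–Cantelli lemma, complete convergence implies almost sure convergence (a standard observation, recorded here for orientation):

$$\sum_{n=1}^{\infty} P\big(|X_n - c| > \epsilon\big) < \infty \;\Longrightarrow\; P\big(|X_n - c| > \epsilon \text{ i.o.}\big) = 0 \;\Longrightarrow\; X_n \to c \quad \text{a.s.}$$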

This concept has been generalized and extended to random fields $\{X_n,\, n \in \mathbb{Z}_+^d\}$ of random variables: see, for example, Fazekas and Tómács [2] and Czerebak-Mrozowicz et al. [3] for fields of pairwise independent random variables, and Gut and Stadtmüller [4] for random fields of i.i.d. random variables.

Let $\mathbb{Z}_+$ be the set of positive integers. For fixed $d \in \mathbb{Z}_+$, set $\mathbb{Z}_+^d = \{n = (n_1, n_2, \ldots, n_d) : n_i \in \mathbb{Z}_+,\, i = 1, 2, \ldots, d\}$, equipped with the coordinatewise partial order $\le$, i.e., for $m = (m_1, m_2, \ldots, m_d),\, n = (n_1, n_2, \ldots, n_d) \in \mathbb{Z}_+^d$, $m \le n$ if and only if $m_i \le n_i$ for $i = 1, 2, \ldots, d$. For $n \in \mathbb{Z}_+^d$, let $|n| = \prod_{i=1}^{d} n_i$. For a field $\{a_n,\, n \in \mathbb{Z}_+^d\}$ of real numbers, the limit superior is defined by $\inf_{r \ge 1} \sup_{|n| \ge r} a_n$ and is denoted by $\limsup_{|n| \to \infty} a_n$.

Note that $|n| \to \infty$ is equivalent to $\max\{n_1, n_2, \ldots, n_d\} \to \infty$, which is a weaker requirement than $\min\{n_1, n_2, \ldots, n_d\} \to \infty$ when $d \ge 2$.
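For example, for $d = 2$ the index path $n = (k, 1)$, $k = 1, 2, \ldots$, satisfies

$$|n| = k \to \infty \quad \text{while} \quad \min\{n_1, n_2\} \equiv 1,$$

so statements formulated with $|n| \to \infty$ also cover such degenerate directions, which min-type conditions exclude.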

Let $\{X_n,\, n \in \mathbb{Z}_+^d\}$ be a field of random variables, and let $\{a_{n,k},\, n \in \mathbb{Z}_+^d,\, k \le n\}$ be an array of real numbers. The weighted sums $\sum_{k \le n} a_{n,k} X_k$ play an important role in various applied and theoretical problems, such as least squares estimation (see Kafles and Bhaskara Rao [5]) and M-estimation (see Rao and Zhao [6]) in linear models, nonparametric regression estimation (see Priestley and Chao [7]), etc. The study of the limiting behavior of such weighted sums is therefore important and significant (see Chen and Hao [8]).

Now, we consider the notion of martingale differences. Let $\{\mathcal{G}_k,\, k \in \mathbb{Z}_+^d\}$ be a family of σ-algebras such that

$$\mathcal{G}_k \subset \mathcal{G}_n \subset \mathcal{G}, \quad k \le n,$$

and, for any integrable random variable $X$ defined on the initial probability space,

$$E\big(E(X|\mathcal{G}_k)\,\big|\,\mathcal{G}_m\big) = E(X|\mathcal{G}_{k \wedge m}) \quad \text{a.s.}, \tag{1.1}$$

where $k \wedge m$ denotes the componentwise minimum.
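A classical family of σ-algebras satisfying (1.1) (a standard example, not taken from this paper) is the filtration generated by an independent field: if $\{\xi_j,\, j \in \mathbb{Z}_+^d\}$ are independent random variables and

$$\mathcal{G}_k = \sigma\big(\xi_j : j \le k\big),$$

then $E(E(X|\mathcal{G}_k)|\mathcal{G}_m) = E(X|\mathcal{G}_{k \wedge m})$ a.s. for every integrable $X$; this is the usual conditional-independence (commuting filtration) condition for multi-indexed processes.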

A $\{\mathcal{G}_k,\, k \in \mathbb{Z}_+^d\}$-adapted, integrable process $\{Y_k,\, k \in \mathbb{Z}_+^d\}$ is called a martingale if and only if

$$E(Y_n | \mathcal{G}_m) = Y_{m \wedge n} \quad \text{a.s.}$$

Let us observe that, for a martingale $\{(Y_n, \mathcal{G}_n),\, n \in \mathbb{Z}_+^d\}$, the random variables

$$X_n = \sum_{a \in \{0,1\}^d} (-1)^{\sum_{i=1}^{d} a_i}\, Y_{n - a},$$

where $a = (a_1, a_2, \ldots, a_d)$ and $n \in \mathbb{Z}_+^d$, are martingale differences with respect to $\{\mathcal{G}_n,\, n \in \mathbb{Z}_+^d\}$ (see Kuczmaszewska and Lagodowski [9]).
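For instance, for $d = 2$ this inclusion–exclusion formula reads

$$X_{(n_1, n_2)} = Y_{(n_1, n_2)} - Y_{(n_1 - 1, n_2)} - Y_{(n_1, n_2 - 1)} + Y_{(n_1 - 1, n_2 - 1)},$$

with the usual convention that $Y_k = 0$ whenever some coordinate of $k$ equals zero.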

For results concerning complete convergence for martingale arrays in the one-dimensional case, we refer to Lagodowski and Rychlik [10], Elton [11], Lesigne and Volny [12], Stoica [13] and Ghosal and Chandra [14]. Recently, complete convergence for martingale-difference random fields was proved by Kuczmaszewska and Lagodowski [9].

The aim of this paper is to obtain some results concerning complete convergence of the weighted sums $\sum_{i \le n} a_{n,i} X_i$, where $\{a_{n,i},\, n \in \mathbb{Z}_+^d,\, i \le n\}$ is an array of real numbers and $\{X_i,\, i \in \mathbb{Z}_+^d\}$ are martingale differences with respect to $\{\mathcal{G}_n,\, n \in \mathbb{Z}_+^d\}$ satisfying (1.1).

2 Results

The following moment maximal inequality provides a useful tool for proving the main results of this section (see Kuczmaszewska and Lagodowski [9]).

Lemma 2.1 Let $\{(Y_n, \mathcal{G}_n),\, n \in \mathbb{Z}_+^d\}$ be a martingale, and let $\{(X_n, \mathcal{G}_n),\, n \in \mathbb{Z}_+^d\}$ be the martingale differences corresponding to it. Let $q > 1$. Then there exists a finite and positive constant $C$, depending only on $q$ and $d$, such that

$$E\Big(\max_{k \le n} |Y_k|^q\Big) \le C\, E\Big(\sum_{k \le n} X_k^2\Big)^{q/2}. \tag{2.1}$$
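For orientation, when $d = 1$, (2.1) is the familiar combination of Doob's maximal inequality and Burkholder's inequality: for $q > 1$,

$$E\Big(\max_{k \le n} |Y_k|^q\Big) \le \Big(\frac{q}{q-1}\Big)^q E|Y_n|^q \le C_q\, E\Big(\sum_{k \le n} X_k^2\Big)^{q/2}.$$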

Let us denote $\mathcal{G}_i^* = \sigma\{\mathcal{G}_j : j < i\}$, where $j < i$ means $j \le i$ and $j \ne i$; in particular, $\mathcal{G}_i^* \subset \mathcal{G}_i$. Now, we are ready to formulate the next result.

Theorem 2.2 Let $\{a_{n,i},\, n \in \mathbb{Z}_+^d,\, i \le n\}$ be an array of real numbers, and let $\{X_n,\, n \in \mathbb{Z}_+^d\}$ be martingale differences with respect to $\{\mathcal{G}_n,\, n \in \mathbb{Z}_+^d\}$ satisfying (1.1). For $\alpha p > 1$, $p > 1$ and $\alpha > \frac{1}{2}$, we assume that:

(i) $\sum_n |n|^{\alpha p - 2} \sum_{i \le n} P\{|a_{n,i} X_i| > |n|^{\alpha}\} < \infty$;

(ii) $\sum_n |n|^{\alpha(p - q) - 3 + q/2} \sum_{i \le n} |a_{n,i}|^q E\big(|X_i|^q I[|a_{n,i} X_i| \le |n|^{\alpha}]\big) < \infty$ for $q \ge 2$;

(ii)′ $\sum_n |n|^{\alpha(p - q) - 2} \sum_{i \le n} |a_{n,i}|^q E\big(|X_i|^q I[|a_{n,i} X_i| \le |n|^{\alpha}]\big) < \infty$ for $1 < q < 2$; and

(iii) $\sum_n |n|^{\alpha p - 2} P\big\{\max_{j \le n} \big|\sum_{i \le j} E\big(a_{n,i} X_i I[|a_{n,i} X_i| \le |n|^{\alpha}]\,\big|\,\mathcal{G}_i^*\big)\big| > \epsilon |n|^{\alpha}\big\} < \infty$ for all $\epsilon > 0$.

Then we have

$$\sum_n |n|^{\alpha p - 2}\, P\Big\{\max_{j \le n} |S_j| > \epsilon |n|^{\alpha}\Big\} < \infty \quad \text{for all } \epsilon > 0, \tag{2.2}$$

where $S_n = \sum_{1 \le i \le n} a_{n,i} X_i$.

Proof Notice first that if the series $\sum_n |n|^{\alpha p - 2}$ is finite, then (2.2) always holds. Therefore, we consider only the case in which $\sum_n |n|^{\alpha p - 2}$ is divergent. Let $X'_{n,i} = X_i I[|a_{n,i} X_i| \le |n|^{\alpha}]$, $X''_{n,i} = X'_{n,i} - E(X'_{n,i}|\mathcal{G}_i^*)$ and $S''_{n,j} = \sum_{i \le j} a_{n,i} X''_{n,i}$.

Then

$$\begin{aligned}
\sum_n |n|^{\alpha p - 2}\, P\Big\{\max_{j \le n} |S_j| > \epsilon |n|^{\alpha}\Big\}
&\le \sum_n |n|^{\alpha p - 2}\, P\Big\{\bigcup_{i \le n} \big[|a_{n,i} X_i| > |n|^{\alpha}\big]\Big\} \\
&\quad + \sum_n |n|^{\alpha p - 2}\, P\Big\{\max_{j \le n} \Big|\sum_{i \le j} a_{n,i} X_i I\big[|a_{n,i} X_i| \le |n|^{\alpha}\big]\Big| > \epsilon |n|^{\alpha}\Big\} \\
&\le \sum_n |n|^{\alpha p - 2} \sum_{i \le n} P\big\{|a_{n,i} X_i| > |n|^{\alpha}\big\} \\
&\quad + \sum_n |n|^{\alpha p - 2}\, P\Big\{\max_{j \le n} \Big|\sum_{i \le j} \Big(a_{n,i} X_i I\big[|a_{n,i} X_i| \le |n|^{\alpha}\big] - E\big(a_{n,i} X_i I\big[|a_{n,i} X_i| \le |n|^{\alpha}\big]\,\big|\,\mathcal{G}_i^*\big)\Big)\Big| > \frac{\epsilon}{2} |n|^{\alpha}\Big\} \\
&\quad + \sum_n |n|^{\alpha p - 2}\, P\Big\{\max_{j \le n} \Big|\sum_{i \le j} E\big(a_{n,i} X_i I\big[|a_{n,i} X_i| \le |n|^{\alpha}\big]\,\big|\,\mathcal{G}_i^*\big)\Big| > \frac{\epsilon}{2} |n|^{\alpha}\Big\} \\
&= I_1 + I_2 + I_3.
\end{aligned}$$

Clearly, $I_1 < \infty$ by (i), and $I_3 < \infty$ by (iii). It remains to prove that $I_2 < \infty$; thus, the proof will be completed by proving that

$$\sum_n |n|^{\alpha p - 2}\, P\Big\{\max_{j \le n} |S''_{n,j}| > \epsilon |n|^{\alpha}\Big\} < \infty.$$

To prove this, we first observe that $\{(S''_{n,j}, \mathcal{G}_j),\, j \le n\}$ is a martingale. In fact, if $i \nleq j$, then $i \wedge j < i$, so $\mathcal{G}_{i \wedge j} \subset \mathcal{G}_i^*$, and by (1.1) we have

$$\begin{aligned}
E\big(a_{n,i} X''_{n,i}\,\big|\,\mathcal{G}_j\big)
&= E\big(a_{n,i} X'_{n,i} - E(a_{n,i} X'_{n,i}|\mathcal{G}_i^*)\,\big|\,\mathcal{G}_j\big) \\
&= E\Big(E\big(a_{n,i} X'_{n,i} - E(a_{n,i} X'_{n,i}|\mathcal{G}_i^*)\,\big|\,\mathcal{G}_i\big)\,\Big|\,\mathcal{G}_j\Big) \\
&= E\big(a_{n,i} X'_{n,i} - E(a_{n,i} X'_{n,i}|\mathcal{G}_i^*)\,\big|\,\mathcal{G}_{i \wedge j}\big) = 0,
\end{aligned}$$

where the second equality holds because $X''_{n,i}$ is $\mathcal{G}_i$-measurable, and the last one because $E(X''_{n,i}|\mathcal{G}_i^*) = 0$ and $\mathcal{G}_{i \wedge j} \subset \mathcal{G}_i^*$.

Then, by the Markov inequality and Lemma 2.1, there exists some constant $C$ such that

$$P\Big\{\max_{j \le n} |S''_{n,j}| > \epsilon |n|^{\alpha}\Big\} \le \frac{C\, E\big(\max_{j \le n} |S''_{n,j}|^q\big)}{|n|^{\alpha q}} \le \frac{C}{|n|^{\alpha q}}\, E\Big(\sum_{i \le n} a_{n,i}^2 (X''_{n,i})^2\Big)^{q/2} =: I_4.$$

Case $q \ge 2$: we get

$$I_4 \le \frac{C}{|n|^{\alpha q}}\, |n|^{q/2 - 1} \sum_{i \le n} E\big|a_{n,i} X''_{n,i}\big|^q \le C\, |n|^{q/2 - 1 - \alpha q} \sum_{i \le n} E\big(|a_{n,i} X_i|^q I\big[|a_{n,i} X_i| \le |n|^{\alpha}\big]\big).$$

Note that the first estimate follows from the Hölder inequality (the number of summands is $|n|$), and the last one from the $c_r$-inequality together with the Jensen inequality for conditional expectations.
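Spelled out, these two standard steps are

$$E\Big(\sum_{i \le n} a_{n,i}^2 (X''_{n,i})^2\Big)^{q/2} \le |n|^{q/2 - 1} \sum_{i \le n} E\big|a_{n,i} X''_{n,i}\big|^q \quad (q \ge 2),$$

$$E\big|X''_{n,i}\big|^q \le 2^{q-1}\Big(E\big|X'_{n,i}\big|^q + E\big|E(X'_{n,i}|\mathcal{G}_i^*)\big|^q\Big) \le 2^q\, E\big|X'_{n,i}\big|^q.$$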

Thus, we have

$$\sum_n |n|^{\alpha p - 2}\, P\Big\{\max_{j \le n} |S''_{n,j}| > \epsilon |n|^{\alpha}\Big\} \le C \sum_n |n|^{\alpha p - 3 - q(\alpha - 1/2)} \sum_{i \le n} E\big(|a_{n,i} X_i|^q I\big[|a_{n,i} X_i| \le |n|^{\alpha}\big]\big) < \infty$$

by assumption (ii).

Case $1 < q < 2$: since $q/2 < 1$, we get

$$I_4 \le \frac{C}{|n|^{\alpha q}} \sum_{i \le n} E\big|a_{n,i} X''_{n,i}\big|^q \le \frac{C}{|n|^{\alpha q}} \sum_{i \le n} E\big(|a_{n,i} X_i|^q I\big[|a_{n,i} X_i| \le |n|^{\alpha}\big]\big).$$

Therefore, for $1 < q < 2$, we obtain

$$\sum_n |n|^{\alpha p - 2}\, P\Big\{\max_{j \le n} |S''_{n,j}| > \epsilon |n|^{\alpha}\Big\} \le C \sum_n |n|^{\alpha(p - q) - 2} \sum_{i \le n} E\big(|a_{n,i} X_i|^q I\big[|a_{n,i} X_i| \le |n|^{\alpha}\big]\big) < \infty$$

by assumption (ii)′. Thus, $I_2 < \infty$ for all $q > 1$, and the proof of Theorem 2.2 is complete. □

Corollary 2.3 Let $\{a_{n,i},\, n \in \mathbb{Z}_+^d,\, i \le n\}$ be an array of real numbers. Let $\{X_n,\, n \in \mathbb{Z}_+^d\}$ be martingale differences with respect to $\{\mathcal{G}_n,\, n \in \mathbb{Z}_+^d\}$ satisfying (1.1), with $E X_n = 0$ for $n \in \mathbb{Z}_+^d$. Let $p \ge 1$, $\alpha > \frac{1}{2}$ and $\alpha p > 1$. Assume that (i) holds and that, for some $q > 1$, (ii) or (ii)′ holds, respectively. If

$$\max_{j \le n} \Big|\sum_{i \le j} E\big(a_{n,i} X_i I\big[|a_{n,i} X_i| \le |n|^{\alpha}\big]\,\big|\,\mathcal{G}_i^*\big)\Big| = o\big(|n|^{\alpha}\big), \tag{2.3}$$

then (2.2) holds.

Proof It is easy to see that (2.3) implies (iii) of Theorem 2.2; we omit the details. □

The following corollary shows that assumption (iii) of Theorem 2.2 is natural and that, in the case of independent random fields, it reduces to a known condition.

Corollary 2.4 Let $\{a_{n,i},\, n \in \mathbb{Z}_+^d,\, i \le n\}$ be an array of real numbers. Let $\{X_n,\, n \in \mathbb{Z}_+^d\}$ be a field of independent random variables such that $E X_n = 0$ for $n \in \mathbb{Z}_+^d$. Let $p \ge 1$, $\alpha > \frac{1}{2}$ and $\alpha p > 1$. Assume that (i) holds and that, for some $q > 1$, (ii) or (ii)′ holds, respectively. If

$$\frac{1}{|n|^{\alpha}} \max_{j \le n} \Big|\sum_{i \le j} E\big(a_{n,i} X_i I\big[|a_{n,i} X_i| \le |n|^{\alpha}\big]\big)\Big| \to 0 \quad \text{as } |n| \to \infty, \tag{2.4}$$

then (2.2) holds.

Proof Since $\{X_n,\, n \in \mathbb{Z}_+^d\}$ is a field of independent random variables, we have

$$\frac{1}{|n|^{\alpha}} \max_{j \le n} \Big|\sum_{i \le j} E\big(a_{n,i} X_i I\big[|a_{n,i} X_i| \le |n|^{\alpha}\big]\,\big|\,\mathcal{G}_i^*\big)\Big| = \frac{1}{|n|^{\alpha}} \max_{j \le n} \Big|\sum_{i \le j} E\big(a_{n,i} X_i I\big[|a_{n,i} X_i| \le |n|^{\alpha}\big]\big)\Big|.$$

Now, it is easy to see that (2.4) implies (iii) of Theorem 2.2. Thus, by Theorem 2.2, result (2.2) follows. □

Remark Theorem 2.2 and Corollary 2.4 are extensions of Theorem 4.1 and Corollary 4.1 in Kuczmaszewska and Lagodowski [9] to the weighted sums case, respectively.

Corollary 2.5 Let $\{a_{n,i},\, n \in \mathbb{Z}_+^d,\, 1 \le i \le n\}$ be an array of real numbers. Let $\{X_n,\, n \in \mathbb{Z}_+^d\}$ be martingale differences with respect to $\{\mathcal{G}_n,\, n \in \mathbb{Z}_+^d\}$ satisfying (1.1) and $E X_n = 0$. Let $p \ge 1$, $\alpha > \frac{1}{2}$ and $\alpha p > 1$, and let $E|X_n|^{1 + \lambda_n} < \infty$ with $0 < \lambda_n < 1$ for $n \in \mathbb{Z}_+^d$. If

$$\sum_n |n|^{\alpha p - 2}\, |n|^{-\alpha(1 + \lambda_n)} \sum_{i \le n} |a_{n,i}|^{1 + \lambda_n} E|X_i|^{1 + \lambda_n} < \infty, \tag{2.5}$$

$$\max_{1 \le j \le n} \sum_{i \le j} E\big(|a_{n,i} X_i|\, I\big[|a_{n,i} X_i| \le |n|^{\alpha}\big]\,\big|\,\mathcal{G}_i^*\big) = o\big(|n|^{\alpha}\big), \tag{2.6}$$

then (2.2) holds.

Proof If the series $\sum_n |n|^{\alpha p - 2} < \infty$, then (2.2) always holds. Hence, we only consider the case $\sum_n |n|^{\alpha p - 2} = \infty$. It follows from (2.5) that, for all sufficiently large $|n|$,

$$|n|^{-\alpha(1 + \lambda_n)} \sum_{i \le n} |a_{n,i}|^{1 + \lambda_n} E|X_i|^{1 + \lambda_n} < 1.$$

By (2.5) and the Markov inequality,

$$\sum_n |n|^{\alpha p - 2} \sum_{i \le n} P\big(|a_{n,i} X_i| > |n|^{\alpha}\big) \le \sum_n |n|^{\alpha p - 2}\, |n|^{-\alpha(1 + \lambda_n)} \sum_{i \le n} |a_{n,i}|^{1 + \lambda_n} E|X_i|^{1 + \lambda_n} < \infty, \tag{2.7}$$

so condition (i) of Theorem 2.2 is satisfied.

As in the proof of Corollary 2.3, (2.6) implies (iii) of Theorem 2.2.

It remains to show that Theorem 2.2(ii) or (ii)′ is satisfied.

For $1 < q < 2$, take $q$ such that $1 + \lambda_n < q$. Since $|a_{n,i} X_i|^{q - 1 - \lambda_n} \le |n|^{\alpha(q - 1 - \lambda_n)}$ on the event $[|a_{n,i} X_i| \le |n|^{\alpha}]$, we have

$$\begin{aligned}
\sum_n |n|^{\alpha(p - q) - 2} \sum_{i \le n} |a_{n,i}|^q E\big(|X_i|^q I\big[|a_{n,i} X_i| \le |n|^{\alpha}\big]\big)
&\le \sum_n |n|^{\alpha(p - q) - 2}\, |n|^{\alpha(q - 1 - \lambda_n)} \sum_{i \le n} |a_{n,i}|^{1 + \lambda_n} E|X_i|^{1 + \lambda_n} \\
&= \sum_n |n|^{\alpha p - 2}\, |n|^{-\alpha(1 + \lambda_n)} \sum_{i \le n} |a_{n,i}|^{1 + \lambda_n} E|X_i|^{1 + \lambda_n} < \infty \quad \text{by (2.5)},
\end{aligned}$$

which verifies Theorem 2.2(ii)′. Hence, the proof is complete. □

Corollary 2.6 Let $\{a_{n,i},\, n \in \mathbb{Z}_+^d,\, 1 \le i \le n\}$ be an array of real numbers, and let $\{X_n,\, n \in \mathbb{Z}_+^d\}$ be martingale differences with respect to $\{\mathcal{G}_n,\, n \in \mathbb{Z}_+^d\}$ satisfying (1.1), with $E X_n = 0$ and $E|X_n|^p < \infty$ for $1 < p < 2$. Let $\alpha > \frac{1}{2}$ and $\alpha p > 1$. If

$$\sum_{1 \le i \le n} |a_{n,i}|^p E|X_i|^p = O\big(|n|^{\delta}\big) \quad \text{for some } 0 < \delta < 1, \tag{2.8}$$

and Theorem 2.2(iii) holds, then (2.2) holds.

Proof By (2.8) and the Markov inequality,

$$\sum_n |n|^{\alpha p - 2} \sum_{i \le n} P\big(|a_{n,i} X_i| > |n|^{\alpha}\big) \le \sum_n |n|^{\alpha p - 2}\, \frac{\sum_{i \le n} |a_{n,i}|^p E|X_i|^p}{|n|^{\alpha p}} \le C \sum_n |n|^{-2 + \delta} < \infty, \tag{2.9}$$

where the last series converges because it factorizes: $\sum_{n \in \mathbb{Z}_+^d} |n|^{\delta - 2} = \big(\sum_{k=1}^{\infty} k^{\delta - 2}\big)^d < \infty$ for $\delta < 1$.

By taking $q$ with $p < q < 2$, so that $|a_{n,i} X_i|^q \le |n|^{\alpha(q - p)} |a_{n,i} X_i|^p$ on the event $[|a_{n,i} X_i| \le |n|^{\alpha}]$, we have

$$\sum_n |n|^{\alpha(p - q) - 2} \sum_{i \le n} |a_{n,i}|^q E\big(|X_i|^q I\big[|a_{n,i} X_i| \le |n|^{\alpha}\big]\big) \le \sum_n |n|^{-2} \sum_{i \le n} |a_{n,i}|^p E|X_i|^p \le C \sum_n |n|^{-2 + \delta} < \infty. \tag{2.10}$$

Hence, by (2.9) and (2.10), conditions (i) and (ii)′ in Theorem 2.2 are satisfied, respectively.

To complete the proof, it is enough to note that, by $E X_n = 0$ for $n \in \mathbb{Z}_+^d$ and by (2.8), we get, for $j \le n$,

$$|n|^{-\alpha} \sum_{i \le j} |a_{n,i}|\, E\big(|X_i|\, I\big[|a_{n,i} X_i| \ge \epsilon |n|^{\alpha}\big]\big) \to 0 \quad \text{as } |n| \to \infty. \tag{2.11}$$

Hence, the proof is complete. □

Corollary 2.7 Let $\{X_n,\, n \in \mathbb{Z}_+^d\}$ be martingale differences with respect to $\{\mathcal{G}_n,\, n \in \mathbb{Z}_+^d\}$ satisfying (1.1), with $E X_n = 0$ and $E|X_n|^p < \infty$ for $1 < p < 2$, and let $\{X_n\}$ be stochastically dominated by a random variable $X$ with $E|X|^p < \infty$, i.e., there is a constant $D$ such that $P(|X_n| > x) \le D\, P(|X| > x)$ for all $x \ge 0$ and $n \in \mathbb{Z}_+^d$. Let $\{a_{n,i},\, n \in \mathbb{Z}_+^d,\, i \le n\}$ be an array of real numbers satisfying

$$\sum_{i \le n} |a_{n,i}|^p = O\big(|n|^{\delta}\big) \quad \text{for some } 0 < \delta < 1. \tag{2.12}$$

If Theorem 2.2(iii) holds, then (2.2) holds.

Proof By stochastic domination, $E|X_i|^p \le D\, E|X|^p$ for all $i$, so (2.8) follows from (2.12). Hence, by Corollary 2.6, we obtain (2.2). □

Remark Linear random fields are of great importance in time series analysis. They arise in a wide variety of contexts, with applications in economics, engineering, and the physical sciences (see Kim et al. [15]).

Let $Y_k = \sum_{i \ge 1} a_{i + k} X_i$, where $\{a_i,\, i \in \mathbb{Z}_+^d\}$ is a field of real numbers with $\sum_{i \ge 1} |a_i| < \infty$, and $\{X_i,\, i \in \mathbb{Z}_+^d\}$ is a field of martingale-difference random variables.

Define $a_{n,i} = \sum_{1 \le k \le n} a_{i + k}$. Then we have

$$\sum_{1 \le k \le n} Y_k = \sum_{1 \le k \le n} \sum_{i \ge 1} a_{i + k} X_i = \sum_{i \ge 1} \Big(\sum_{1 \le k \le n} a_{i + k}\Big) X_i = \sum_{i \ge 1} a_{n,i} X_i.$$

Hence, we can use the above results to investigate complete convergence for linear random fields, as illustrated by the sketch below.
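The following minimal Monte Carlo sketch (not part of the paper; all numerical choices are ours) simulates a $d = 2$ linear random field driven by i.i.d. centered innovations, a special case of martingale differences, and estimates the tail probability $P(|\sum_{k \le n} Y_k| > \epsilon |n|^{\alpha})$ appearing in (2.2). For simplicity it uses the full rectangular sum rather than the maximum over $j \le n$; the coefficient field $a_i = (i_1 i_2)^{-2}$ and the truncation of the infinite sum over $i$ to the block $[1, N]^2$ are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def coeffs(size):
    # Summable coefficient field a_i = (i1 * i2)^(-2), so sum_{i>=1} |a_i| < infinity.
    i1, i2 = np.indices((size, size)) + 1
    return (i1 * i2) ** -2.0

def tail_probability(n1, n2, alpha=0.75, eps=0.5, N=30, reps=2000):
    # Estimate P(|sum_{k<=n} Y_k| > eps * |n|^alpha) with Y_k = sum_i a_{i+k} X_i,
    # the sum over i truncated to the block [1, N]^2 (an illustrative choice).
    a = coeffs(N + max(n1, n2))            # enough coefficients for all shifts i + k
    a_n = np.zeros((N, N))                 # a_{n,i} = sum_{1<=k<=n} a_{i+k}
    for k1 in range(1, n1 + 1):
        for k2 in range(1, n2 + 1):
            a_n += a[k1:k1 + N, k2:k2 + N]
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal((N, N))    # innovations X_i with E X_i = 0
        s = float(np.sum(a_n * x))         # sum_{k<=n} Y_k = sum_i a_{n,i} X_i
        hits += abs(s) > eps * (n1 * n2) ** alpha
    return hits / reps

# The estimated tail probabilities should decay as the index n grows.
for n in [(2, 2), (4, 4), (8, 8), (16, 16)]:
    print(n, tail_probability(*n))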

References

  1. Hsu PL, Robbins H: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 1947, 33: 25–31. 10.1073/pnas.33.2.25

  2. Fazekas I, Tómács T: Strong laws of large numbers for pairwise independent random variables with multidimensional indices. Publ. Math. (Debr.) 1998, 53: 149–161.

  3. Czerebak-Mrozowicz EB, Klesov OI, Rychlik Z: Marcinkiewicz-type strong law of large numbers for pairwise independent random fields. Probab. Math. Stat. 2002, 22: 127–139.

  4. Gut A, Stadtmüller U: An asymmetric Marcinkiewicz-Zygmund SLLN for random fields. Stat. Probab. Lett. 2009, 35: 756–763.

  5. Kafles D, Bhaskara Rao M: Weak consistency of least squares estimators in linear models. J. Multivar. Anal. 1982, 12: 186–198. 10.1016/0047-259X(82)90014-8

  6. Rao CR, Zhao LC: Linear representation of M-estimates in linear models. Can. J. Stat. 1992, 20: 359–368. 10.2307/3315607

  7. Priestley MB, Chao MT: Nonparametric function fitting. J. R. Stat. Soc. B 1972, 34: 385–392.

  8. Chen P, Hao C: A remark on the law of the logarithm for weighted sums of random variables with multidimensional indices. Stat. Probab. Lett. 2011, 81: 1808–1812. 10.1016/j.spl.2011.07.007

  9. Kuczmaszewska A, Lagodowski Z: Convergence rates in the SLLN for some classes of dependent random fields. J. Math. Anal. Appl. 2011, 380: 571–584. 10.1016/j.jmaa.2011.03.042

  10. Lagodowski ZA, Rychlik Z: Rate of convergence in the strong law of large numbers for martingales. Probab. Theory Relat. Fields 1986, 71: 467–476. 10.1007/BF01000217

  11. Elton J: Law of large numbers for identically distributed martingale differences. Ann. Probab. 1981, 9: 405–412. 10.1214/aop/1176994414

  12. Lesigne E, Volny D: Large deviations for martingales. Stoch. Process. Appl. 2001, 96: 143–159. 10.1016/S0304-4149(01)00112-0

  13. Stoica G: Baum-Katz-Nagayev type results for martingales. J. Math. Anal. Appl. 2007, 336: 1489–1492. 10.1016/j.jmaa.2007.03.012

  14. Ghosal S, Chandra TK: Complete convergence of martingale arrays. J. Theor. Probab. 1998, 11: 621–631. 10.1023/A:1022646429754

  15. Kim TS, Ko MH, Choi YK: The invariance principle for linear multiparameter stochastic processes generated by associated random fields. Stat. Probab. Lett. 2008, 78: 3298–3303. 10.1016/j.spl.2008.06.022


Author information

Correspondence to Mi Hwa Ko.

Additional information

Competing interests

The author declares that she has no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Ko, M.H. On complete convergence for weighted sums of martingale-difference random fields. J Inequal Appl 2013, 473 (2013). https://doi.org/10.1186/1029-242X-2013-473