
A supplement to the convergence rate in a theorem of Heyde

Abstract

Let $\{X, X_n, n\ge 1\}$ be a sequence of i.i.d. random variables with zero mean, set $S_n=\sum_{k=1}^{n}X_k$, $EX^2=\sigma^2>0$, and $\lambda(\epsilon)=\sum_{n=1}^{\infty}P(|S_n|\ge n\epsilon)$. In this paper, the authors discuss the rate of approximation of $\sigma^2$ by $\epsilon^2\lambda(\epsilon)$ under suitable conditions, improve the results of Klesov (Theory Probab. Math. Stat. 49:83-87, 1994), and extend the work of He and Xie (Acta Math. Appl. Sin. 2012, doi:10.1007/s10255-012-0138-6).

MSC: 60F15, 60G50.

1 Introduction and main results

Let $\{X, X_n, n\ge 1\}$ be a sequence of i.i.d. random variables, set $S_n=\sum_{k=1}^{n}X_k$, and $\lambda(\epsilon)=\sum_{n=1}^{\infty}P(|S_n|\ge n\epsilon)$. Heyde [1] proved that

$$\lim_{\epsilon\to 0}\epsilon^2\lambda(\epsilon)=\sigma^2,$$

whenever $EX^2=\sigma^2<\infty$ and $EX=0$.
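Heyde's limit can be illustrated numerically in the one case where the tail probabilities are available in closed form: for standard normal summands, $S_n/\sqrt{n}$ is exactly standard normal, so $P(|S_n|\ge n\epsilon)=\operatorname{erfc}(\epsilon\sqrt{n}/\sqrt{2})$. The sketch below is only an illustration of the statement, not part of any proof; the truncation threshold is an arbitrary choice.

```python
import math

def lam(eps, n_max=1_000_000):
    """Truncated lambda(eps) = sum_{n>=1} P(|S_n| >= n*eps) for N(0,1) summands.

    Since S_n / sqrt(n) is exactly N(0,1),
    P(|S_n| >= n*eps) = P(|Z| >= eps*sqrt(n)) = erfc(eps*sqrt(n)/sqrt(2)).
    """
    total = 0.0
    for n in range(1, n_max + 1):
        term = math.erfc(eps * math.sqrt(n) / math.sqrt(2))
        total += term
        if term < 1e-16:  # remaining terms are negligible
            break
    return total

# eps^2 * lambda(eps) approaches sigma^2 = 1 as eps decreases
for eps in (0.5, 0.2, 0.1):
    print(eps, round(eps**2 * lam(eps), 4))
```

For small $\epsilon$ the printed values settle just below 1, in line with the finer expansion $1-\epsilon^2/2+O(\epsilon^3)$ given in Lemma 2.1 below.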

This result has been extended in various directions by Chen [2], Gut and Spǎtaru [3], and Lanzinger and Stadtmüller [4]. Liu and Lin [5] introduced a new kind of complete moment convergence, and Klesov [6] studied the rate of approximation of $\sigma^2$ by $\epsilon^2\lambda(\epsilon)$, proving the following Theorem A.

Theorem A Let $\{X, X_n, n\ge 1\}$ be a sequence of i.i.d. random variables with zero mean. If $EX^2=\sigma^2>0$ and $E|X|^3<\infty$, then

$$\epsilon^2\lambda(\epsilon)-\sigma^2=o\bigl(\epsilon^{1/2}\bigr),\quad\text{as }\epsilon\to 0.$$

Recently, He and Xie [7] obtained Theorem B, which improves Theorem A; Gut and Steinebach [8] extended the results of Klesov [6].

Theorem B Let $\{X, X_n, n\ge 1\}$ be a sequence of i.i.d. random variables and $0<\delta\le 1$. If

$$EX=0,\qquad EX^2=\sigma^2>0\quad\text{and}\quad E|X|^{2+\delta}<\infty,$$

then

$$\epsilon^2\lambda(\epsilon)-\sigma^2=\begin{cases}O(\epsilon), & \delta=1,\\ o(\epsilon^{\delta}), & 0<\delta<1.\end{cases}$$

Let $G$ be the set of functions $g(x)$ that are defined for all real $x$ and satisfy the following conditions: (a) $g(x)$ is nonnegative, even, nondecreasing in the interval $x>0$, and $g(x)\ne 0$ for $x\ne 0$; (b) $x/g(x)$ is nondecreasing in the interval $x>0$.

Let $G_0$ be the set of functions $g(x)\in G$ satisfying the supplementary condition (c) $\lim_{x\to\infty}\frac{g(x^2)}{xg(x)}=0$. Obviously, the function $g(x)=|x|^{\delta}$ belongs to $G_0$ if $0<\delta<1$, but not if $\delta=1$. The purpose of this paper is to generalize Theorem B to the case where the condition $E|X|^{2+\delta}<\infty$ is replaced by the more general condition $EX^2g(X)<\infty$, in which the function $g$ belongs to some subset of $G$. Denote $T_g(v)=EX^2g(X)I(|X|>v)$; then $T_g(v)$ is a nonnegative, nonincreasing function on $v>0$, and $\lim_{v\to\infty}T_g(v)=0$ whenever $EX^2g(X)<\infty$. Now we state our results as follows.
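As a quick aside, condition (c) for these power functions can be checked numerically. The sketch below (purely illustrative) evaluates the ratio $\frac{g(x^2)}{xg(x)}$ for $g(x)=|x|^{\delta}$; algebraically the ratio equals $x^{\delta-1}$, so it tends to 0 as $x\to\infty$ exactly when $\delta<1$ and is identically 1 when $\delta=1$.

```python
def g_ratio(delta, x):
    # Condition (c) ratio g(x^2) / (x * g(x)) for g(t) = |t|**delta.
    # Algebraically this simplifies to x**(delta - 1).
    g = lambda t: abs(t) ** delta
    return g(x * x) / (x * g(x))

for delta in (0.5, 1.0):
    print(delta, [g_ratio(delta, x) for x in (1e2, 1e4, 1e6)])
```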

Theorem 1.1 Let $\{X, X_n; n\ge 1\}$ be a sequence of i.i.d. random variables with zero mean and $EX^2=\sigma^2>0$. If $EX^2g(X)<\infty$ for some function $g(x)\in G$, and

$$\sum_{n=1}^{\infty}\frac{1}{ng(\sqrt{n})}<\infty,$$
(1.1)

then

$$\epsilon^2\lambda(\epsilon)-\sigma^2=O\bigl(\epsilon^{1/2}\bigr)+o(1)\bigl(h_1(\epsilon)+f_1(\epsilon)\bigr),\quad\text{as }\epsilon\to 0,$$
(1.2)

where $f_1(\epsilon)=\sum_{n=[1/\epsilon^{2}]+1}^{\infty}\frac{1}{ng(\sqrt{n})}$ and $h_1(\epsilon)=\epsilon^2\sum_{n=1}^{[1/\epsilon^{2}]}\frac{1}{g(\sqrt{n})}$.

Theorem 1.2 Under the conditions of Theorem 1.1, if in addition $g(x)\in G_0$, then

$$\epsilon^2\lambda(\epsilon)-\sigma^2=o(1)\bigl(h_1(\epsilon)+f_1(\epsilon)\bigr),\quad\text{as }\epsilon\to 0.$$
(1.3)

Throughout this paper, $C$ denotes a positive constant that depends only on some given numbers and may be different at each appearance, and $[x]$ denotes the integer part of $x$.

2 Proofs of the main results

Before we prove the main results, we state some lemmas. Lemma 2.1 is from [7]. Here $\Phi(x)$ denotes the standard normal distribution function, $\Phi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-t^{2}/2}\,dt$.

Lemma 2.1 Let $\{X, X_n, n\ge 1\}$ be a sequence of i.i.d. standard normal random variables. Then

$$\epsilon^2\lambda(\epsilon)=\epsilon^2\sum_{n=1}^{\infty}\frac{2}{\sqrt{2\pi}}\int_{\epsilon\sqrt{n}}^{\infty}e^{-t^{2}/2}\,dt=1-\frac{\epsilon^2}{2}+O\bigl(\epsilon^{3}\bigr),\quad\text{as }\epsilon\to 0.$$
(2.1)

If $\{X_n, n\ge 1\}$ is a sequence of independent random variables with zero mean and finite variance, put $EX_j^2=\sigma_j^2$ and $B_n=\sum_{j=1}^{n}\sigma_j^2$; Bikelis [9] obtained the following inequality:

$$\bigl|P(S_n<x\sqrt{B_n})-\Phi(x)\bigr|\le \frac{C}{B_n(1+|x|)^{2}}\sum_{j=1}^{n}\int_{|u|>\sqrt{B_n}(1+|x|)}u^{2}\,dV_j(u)+\frac{C}{B_n^{3/2}(1+|x|)^{3}}\sum_{j=1}^{n}\int_{|u|\le\sqrt{B_n}(1+|x|)}|u|^{3}\,dV_j(u)$$

for every $x$, where $V_j(x)=P(X_j<x)$ is the distribution function of the random variable $X_j$. Applying this inequality to a sequence of i.i.d. random variables with zero mean and variance 1, and letting $|x|=\epsilon\sqrt{n}$, we obtain the following lemma.

Lemma 2.2 Let $\{X, X_n, n\ge 1\}$ be a sequence of i.i.d. random variables with zero mean and $EX^2=1$. Then for any given $\epsilon>0$, we have

$$\Bigl|P\bigl(|S_n|\ge n\epsilon\bigr)-\frac{2}{\sqrt{2\pi}}\int_{\epsilon\sqrt{n}}^{\infty}e^{-t^{2}/2}\,dt\Bigr|\le \frac{C}{(1+\epsilon\sqrt{n})^{2}}\int_{|u|>\sqrt{n}(1+\epsilon\sqrt{n})}u^{2}\,dV(u)+\frac{C}{\sqrt{n}(1+\epsilon\sqrt{n})^{3}}\int_{|u|\le\sqrt{n}(1+\epsilon\sqrt{n})}|u|^{3}\,dV(u),$$

where $V(x)=P(X<x)$ is the distribution function of the random variable $X$.

Proof of Theorem 1.1 Without loss of generality, we suppose that $\sigma^2=1$ and $0<\epsilon<1$, and write

$$\epsilon^2\lambda(\epsilon)=I+\epsilon^2\sum_{n=1}^{\infty}\frac{2}{\sqrt{2\pi}}\int_{\epsilon\sqrt{n}}^{\infty}e^{-t^{2}/2}\,dt,$$

where

$$I=\epsilon^2\sum_{n=1}^{\infty}\Bigl(P\bigl(|S_n|>n\epsilon\bigr)-\frac{2}{\sqrt{2\pi}}\int_{\epsilon\sqrt{n}}^{\infty}e^{-t^{2}/2}\,dt\Bigr).$$

Applying Lemma 2.1, we obtain

$$\epsilon^2\lambda(\epsilon)=I+1-\frac{\epsilon^2}{2}+O\bigl(\epsilon^{3}\bigr),$$

and hence

$$\epsilon^2\lambda(\epsilon)-1=-\frac{\epsilon^2}{2}+\epsilon^2\sum_{n=1}^{\infty}R_n+O\bigl(\epsilon^{3}\bigr),$$

where $R_n=P(|S_n|>n\epsilon)-\frac{2}{\sqrt{2\pi}}\int_{\epsilon\sqrt{n}}^{\infty}e^{-t^{2}/2}\,dt$. By Lemma 2.2,

$$|R_n|\le R_{1n}+R_{2n},$$

where

$$R_{1n}=\frac{C}{(1+\epsilon\sqrt{n})^{2}}\int_{|u|>\sqrt{n}(1+\epsilon\sqrt{n})}u^{2}\,dV(u),\qquad R_{2n}=\frac{C}{\sqrt{n}(1+\epsilon\sqrt{n})^{3}}\int_{|u|\le\sqrt{n}(1+\epsilon\sqrt{n})}|u|^{3}\,dV(u).$$

We obtain

$$\bigl|\epsilon^2\lambda(\epsilon)-1\bigr|\le\epsilon^2\sum_{n=1}^{\infty}R_{1n}+\epsilon^2\sum_{n=1}^{\infty}R_{2n}+O\bigl(\epsilon^{2}\bigr).$$
(2.2)

Firstly, we estimate $\epsilon^2\sum_{n=1}^{\infty}R_{1n}$. Note that

$$\epsilon^2\sum_{n=1}^{\infty}R_{1n}=\epsilon^2\sum_{n=1}^{[1/\epsilon^{2}]}R_{1n}+\epsilon^2\sum_{n=[1/\epsilon^{2}]+1}^{\infty}R_{1n}=:T_1+T_2.$$

Applying the condition $EX^2g(X)<\infty$, we have

$$\lim_{n\to\infty}\int_{|u|>n^{1/4}}u^{2}g(u)\,dV(u)=0.$$

Therefore, for any $\eta>0$, there is an integer $N_0$ such that $\int_{|u|>n^{1/4}}u^{2}g(u)\,dV(u)\le\eta$ whenever $n>N_0$. Hence

$$\begin{aligned}
T_1&\le C\epsilon^{2}\sum_{n=1}^{N_0}\int_{|u|>\sqrt{n}}u^{2}\,dV(u)+C\epsilon^{2}\sum_{n=N_0+1}^{[1/\epsilon^{2}]}\frac{1}{(1+\epsilon\sqrt{n})^{2}}\int_{|u|>\sqrt{n}(1+\epsilon\sqrt{n})}u^{2}\,dV(u)\\
&\le C\epsilon^{2}N_0+C\epsilon^{2}\eta\sum_{n=N_0+1}^{[1/\epsilon^{2}]}\frac{1}{(1+\epsilon\sqrt{n})^{2}g(\sqrt{n}(1+\epsilon\sqrt{n}))}\\
&\le C\epsilon^{2}\Bigl(N_0+\eta\sum_{n=1}^{[1/\epsilon^{2}]}\frac{1}{g(\sqrt{n})}\Bigr)=Ch_1(\epsilon)\Bigl(N_0\Bigl(\sum_{n=1}^{[1/\epsilon^{2}]}\frac{1}{g(\sqrt{n})}\Bigr)^{-1}+\eta\Bigr)\\
&\le Ch_1(\epsilon)(N_0\epsilon+\eta)=o\bigl(h_1(\epsilon)\bigr),
\end{aligned}$$
(2.3)

where $h_1(\epsilon)=\epsilon^2\sum_{n=1}^{[1/\epsilon^{2}]}\frac{1}{g(\sqrt{n})}$. For $T_2$, noting that $g(x)\in G$, we have the following inequality:

$$\begin{aligned}
T_2&\le C\epsilon^{2}\sum_{n=[1/\epsilon^{2}]+1}^{\infty}\frac{1}{n\epsilon^{2}}\int_{|u|>\sqrt{n}(1+\epsilon\sqrt{n})}u^{2}\,dV(u)\\
&\le C\sum_{n=[1/\epsilon^{2}]+1}^{\infty}\frac{1}{ng(\sqrt{n}(1+\epsilon\sqrt{n}))}\int_{|u|>\sqrt{n}(1+\epsilon\sqrt{n})}u^{2}g(u)\,dV(u)\\
&\le C\sum_{n=[1/\epsilon^{2}]+1}^{\infty}\frac{1}{ng(\sqrt{n})}\int_{|u|>\frac{1}{\epsilon}}u^{2}g(u)\,dV(u)\le CT_g\Bigl(\frac{1}{\epsilon}\Bigr)f_1(\epsilon).
\end{aligned}$$
(2.4)

Next, we estimate the second term of (2.2). Note that

$$\begin{aligned}
\epsilon^2\sum_{n=1}^{\infty}R_{2n}&=C\epsilon^{2}\sum_{n=1}^{\infty}\frac{1}{\sqrt{n}(1+\epsilon\sqrt{n})^{3}}\int_{|u|\le(\sqrt{n}(1+\epsilon\sqrt{n}))^{1/2}}|u|^{3}\,dV(u)\\
&\quad+C\epsilon^{2}\sum_{n=1}^{\infty}\frac{1}{\sqrt{n}(1+\epsilon\sqrt{n})^{3}}\int_{(\sqrt{n}(1+\epsilon\sqrt{n}))^{1/2}<|u|<\sqrt{n}(1+\epsilon\sqrt{n})}|u|^{3}\,dV(u)=:J_1+J_2.
\end{aligned}$$

For $J_1$, we can write

$$J_1=C\epsilon^{2}\Bigl(\sum_{n=1}^{[1/\epsilon^{2}]}+\sum_{n=[1/\epsilon^{2}]+1}^{\infty}\Bigr)\frac{1}{\sqrt{n}(1+\epsilon\sqrt{n})^{3}}\int_{|u|\le(\sqrt{n}(1+\epsilon\sqrt{n}))^{1/2}}|u|^{3}\,dV(u)=:J_{11}+J_{12}.$$

Noting that $x/g(x)$ is nondecreasing in the interval $x>0$, we have

$$\begin{aligned}
J_{11}&=C\epsilon^{2}\sum_{n=1}^{[1/\epsilon^{2}]}\frac{1}{\sqrt{n}(1+\epsilon\sqrt{n})^{3}}\int_{|u|\le(\sqrt{n}(1+\epsilon\sqrt{n}))^{1/2}}|u|^{3}\,dV(u)\\
&\le C\epsilon^{2}\sum_{n=1}^{[1/\epsilon^{2}]}\frac{1}{n^{1/4}(1+\epsilon\sqrt{n})^{5/2}g((\sqrt{n}(1+\epsilon\sqrt{n}))^{1/2})}\int_{|u|\le(\sqrt{n}(1+\epsilon\sqrt{n}))^{1/2}}u^{2}g(u)\,dV(u)\\
&\le C\epsilon^{2}\sum_{n=1}^{[1/\epsilon^{2}]}\frac{1}{n^{1/4}g(n^{1/4})}=Ch_2(\epsilon),
\end{aligned}$$
(2.5)

where $h_2(\epsilon)=\epsilon^2\sum_{n=1}^{[1/\epsilon^{2}]}\frac{1}{n^{1/4}g(n^{1/4})}$.

Similarly, we can obtain

$$\begin{aligned}
J_{12}&=C\epsilon^{2}\sum_{n=[1/\epsilon^{2}]+1}^{\infty}\frac{1}{\sqrt{n}(1+\epsilon\sqrt{n})^{3}}\int_{|u|\le(\sqrt{n}(1+\epsilon\sqrt{n}))^{1/2}}|u|^{3}\,dV(u)\\
&\le C\epsilon^{2}\sum_{n=[1/\epsilon^{2}]+1}^{\infty}\frac{1}{n^{1/4}(1+\epsilon\sqrt{n})^{5/2}g((\sqrt{n}(1+\epsilon\sqrt{n}))^{1/2})}\int_{|u|\le(\sqrt{n}(1+\epsilon\sqrt{n}))^{1/2}}u^{2}g(u)\,dV(u)\\
&\le C\epsilon^{2}\sum_{n=[1/\epsilon^{2}]+1}^{\infty}\frac{1}{\epsilon^{5/2}n^{3/2}g(n^{1/4})}=C\frac{1}{\sqrt{\epsilon}}f_2(\epsilon),
\end{aligned}$$
(2.6)

where $f_2(\epsilon)=\sum_{n=[1/\epsilon^{2}]+1}^{\infty}\frac{1}{n^{3/2}g(n^{1/4})}$.

For $J_2$, we write

$$J_2=C\epsilon^{2}\Bigl(\sum_{n=1}^{[1/\epsilon^{2}]}+\sum_{n=[1/\epsilon^{2}]+1}^{\infty}\Bigr)\frac{1}{\sqrt{n}(1+\epsilon\sqrt{n})^{3}}\int_{(\sqrt{n}(1+\epsilon\sqrt{n}))^{1/2}<|u|<\sqrt{n}(1+\epsilon\sqrt{n})}|u|^{3}\,dV(u)=:J_{21}+J_{22}.$$

Using the properties of $g(x)$, a simple calculation shows that

$$\begin{aligned}
J_{21}&=C\epsilon^{2}\sum_{n=1}^{[1/\epsilon^{2}]}\frac{1}{\sqrt{n}(1+\epsilon\sqrt{n})^{3}}\int_{(\sqrt{n}(1+\epsilon\sqrt{n}))^{1/2}<|u|<\sqrt{n}(1+\epsilon\sqrt{n})}|u|^{3}\,dV(u)\\
&\le C\epsilon^{2}\Bigl(\sum_{n=1}^{N_0}+\sum_{n=N_0+1}^{[1/\epsilon^{2}]}\Bigr)\frac{1}{(1+\epsilon\sqrt{n})^{2}g(\sqrt{n}(1+\epsilon\sqrt{n}))}\int_{(\sqrt{n}(1+\epsilon\sqrt{n}))^{1/2}<|u|<\sqrt{n}(1+\epsilon\sqrt{n})}u^{2}g(u)\,dV(u)\\
&\le C\epsilon^{2}\Bigl(\sum_{n=1}^{N_0}+\sum_{n=N_0+1}^{[1/\epsilon^{2}]}\Bigr)\frac{1}{g(\sqrt{n})}\int_{|u|>n^{1/4}}u^{2}g(u)\,dV(u)\\
&\le C\epsilon^{2}\Bigl(N_0+\eta\sum_{n=1}^{[1/\epsilon^{2}]}\frac{1}{g(\sqrt{n})}\Bigr)=o\bigl(h_1(\epsilon)\bigr),
\end{aligned}$$
(2.7)

and

$$\begin{aligned}
J_{22}&\le C\epsilon^{2}\sum_{n=[1/\epsilon^{2}]+1}^{\infty}\frac{1}{\sqrt{n}(1+\epsilon\sqrt{n})^{3}}\int_{(\sqrt{n}(1+\epsilon\sqrt{n}))^{1/2}<|u|<\sqrt{n}(1+\epsilon\sqrt{n})}|u|^{3}\,dV(u)\\
&\le C\sum_{n=[1/\epsilon^{2}]+1}^{\infty}\frac{1}{ng(\sqrt{n})}\int_{(\sqrt{n}(1+\epsilon\sqrt{n}))^{1/2}<|u|<\sqrt{n}(1+\epsilon\sqrt{n})}u^{2}g(u)\,dV(u)\\
&\le CT_g\Bigl(\frac{1}{\sqrt{\epsilon}}\Bigr)\sum_{n=[1/\epsilon^{2}]+1}^{\infty}\frac{1}{ng(\sqrt{n})}\le CT_g\Bigl(\frac{1}{\sqrt{\epsilon}}\Bigr)f_1(\epsilon).
\end{aligned}$$
(2.8)

Combining (2.2)-(2.8), and noting that $T_g(\frac{1}{\epsilon})\le T_g(\frac{1}{\sqrt{\epsilon}})$ for $0<\epsilon<1$, we conclude that

$$\bigl|\epsilon^2\lambda(\epsilon)-1\bigr|\le C\frac{1}{\sqrt{\epsilon}}f_2(\epsilon)+CT_g\Bigl(\frac{1}{\sqrt{\epsilon}}\Bigr)f_1(\epsilon)+o(1)h_1(\epsilon)+Ch_2(\epsilon).$$
(2.9)

Since

$$\frac{1}{\sqrt{\epsilon}}f_2(\epsilon)\le\frac{C}{\sqrt{\epsilon}}\sum_{n=[1/\epsilon^{2}]+1}^{\infty}\frac{1}{n^{3/2}}\le C\sqrt{\epsilon},$$

and, since $g(n^{1/4})\ge g(1)>0$,

$$h_2(\epsilon)=\epsilon^2\sum_{n=1}^{[1/\epsilon^{2}]}\frac{1}{n^{1/4}g(n^{1/4})}\le C\epsilon^{2}\sum_{n=1}^{[1/\epsilon^{2}]}\frac{1}{n^{1/4}}\le C\sqrt{\epsilon},$$

by (2.9), we have

$$\epsilon^2\lambda(\epsilon)-1=O\bigl(\epsilon^{1/2}\bigr)+o(1)\bigl(f_1(\epsilon)+h_1(\epsilon)\bigr).$$

This completes the proof of Theorem 1.1. □

Proof of Theorem 1.2 Since $g(x)\in G_0$, we have $\lim_{x\to\infty}\frac{g(x^2)}{xg(x)}=0$; hence, for any $\eta>0$, there is an integer $N_1$ such that $\frac{g(\sqrt{n})}{n^{1/4}g(n^{1/4})}\le\eta$ whenever $n>N_1$. We have

$$\begin{aligned}
h_2(\epsilon)&\le\epsilon^{2}\sum_{n=1}^{N_1}\frac{1}{n^{1/4}g(n^{1/4})}+\epsilon^{2}\sum_{n=N_1+1}^{[1/\epsilon^{2}]}\frac{\eta}{g(\sqrt{n})}\\
&\le C\epsilon^{2}N_1+\eta\epsilon^{2}\sum_{n=1}^{[1/\epsilon^{2}]}\frac{1}{g(\sqrt{n})}=o(1)h_1(\epsilon),
\end{aligned}$$
(2.10)

and

$$\frac{1}{\sqrt{\epsilon}}f_2(\epsilon)\le\frac{1}{\sqrt{\epsilon}}\sum_{n=[1/\epsilon^{2}]+1}^{\infty}\frac{\eta}{n^{5/4}g(\sqrt{n})}\le\sum_{n=[1/\epsilon^{2}]+1}^{\infty}\frac{\eta}{ng(\sqrt{n})}=o(1)f_1(\epsilon).$$
(2.11)

By (2.9)-(2.11), and noting that $T_g(\frac{1}{\sqrt{\epsilon}})=o(1)$ as $\epsilon\to 0$, we have

$$\epsilon^2\lambda(\epsilon)-\sigma^2=o(1)\bigl(h_1(\epsilon)+f_1(\epsilon)\bigr),\quad\text{as }\epsilon\to 0.$$

This completes the proof of Theorem 1.2. □

Remark 2.1 If $g(x)=|x|^{\delta}$, $0<\delta<1$, then $f_1(\epsilon)=O(\epsilon^{\delta})$ and $h_1(\epsilon)=O(\epsilon^{\delta})$. By Theorem 1.2, we get

$$\epsilon^2\lambda(\epsilon)-\sigma^2=o\bigl(\epsilon^{\delta}\bigr),\quad\text{as }\epsilon\to 0.$$
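The rates claimed in Remark 2.1 can be sanity-checked numerically for $g(x)=|x|^{\delta}$, where $f_1(\epsilon)=\sum_{n>[1/\epsilon^2]}n^{-1-\delta/2}$ and $h_1(\epsilon)=\epsilon^2\sum_{n\le[1/\epsilon^2]}n^{-\delta/2}$. The sketch below is illustrative only (the truncation point and tolerances are ad hoc choices); it shows the ratios $f_1(\epsilon)/\epsilon^{\delta}$ and $h_1(\epsilon)/\epsilon^{\delta}$ staying bounded as $\epsilon$ decreases, consistent with $f_1(\epsilon)=O(\epsilon^{\delta})$ and $h_1(\epsilon)=O(\epsilon^{\delta})$.

```python
def f1(eps, delta):
    # f1(eps) = sum_{n > [1/eps^2]} 1 / (n * g(sqrt(n))) with g(x) = |x|**delta,
    # i.e. a tail sum of n**(-1 - delta/2); truncate and bound the rest by
    # the integral int_cut^inf x**(-1 - delta/2) dx = (2/delta) * cut**(-delta/2).
    n0 = int(1 / eps**2)
    cut = 200 * n0
    s = sum(n ** (-1 - delta / 2) for n in range(n0 + 1, cut + 1))
    return s + (2 / delta) * cut ** (-delta / 2)

def h1(eps, delta):
    # h1(eps) = eps^2 * sum_{n <= [1/eps^2]} 1 / g(sqrt(n))
    n0 = int(1 / eps**2)
    return eps**2 * sum(n ** (-delta / 2) for n in range(1, n0 + 1))

delta = 0.5
for eps in (0.2, 0.1, 0.05):
    print(eps, f1(eps, delta) / eps**delta, h1(eps, delta) / eps**delta)
```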

Remark 2.2 If $g(x)=|x|$ (i.e., $\delta=1$), then $\frac{1}{\sqrt{\epsilon}}f_2(\epsilon)=O(\epsilon)$, $f_1(\epsilon)=O(\epsilon)$, $h_1(\epsilon)=O(\epsilon)$, and $h_2(\epsilon)=O(\epsilon)$. By (2.9), we get

$$\epsilon^2\lambda(\epsilon)-\sigma^2=O(\epsilon),\quad\text{as }\epsilon\to 0.$$

References

  1. Heyde CC: A supplement to the strong law of large numbers. J. Appl. Probab. 1975, 12: 903–907.


  2. Chen R: A remark on the strong law of large numbers. Proc. Am. Math. Soc. 1976, 61: 112–116. 10.1090/S0002-9939-1976-0420802-1


  3. Gut A, Spǎtaru A: Precise asymptotics in the Baum-Katz and Davis laws of large numbers. J. Math. Anal. Appl. 2000, 248: 233–246. 10.1006/jmaa.2000.6892


  4. Lanzinger H, Stadtmüller U: Refined Baum-Katz laws for weighted sums of iid random variables. Stat. Probab. Lett. 2004, 69: 357–368. 10.1016/j.spl.2004.06.033


  5. Liu WD, Lin ZY: Precise asymptotic for a new kind of complete moment convergence. Stat. Probab. Lett. 2006, 76: 1787–1799. 10.1016/j.spl.2006.04.027


  6. Klesov OI: On the convergence rate in a theorem of Heyde. Theory Probab. Math. Stat. 1994, 49: 83–87.


  7. He JJ, Xie TF: Asymptotic property for some series of probability. Acta Math. Appl. Sin. 2012. doi:10.1007/s10255-012-0138-6


  8. Gut A, Steinebach J: Convergence rates in precise asymptotics. J. Math. Anal. Appl. 2012, 390: 1–14. 10.1016/j.jmaa.2011.11.046


  9. Bikelis A: Estimates of the remainder in the central limit theorem. Litovsk. Mat. Sb. 1966, 6: 323–346.



Acknowledgements

The authors are very grateful to the referees and editors for their valuable comments and helpful suggestions, which improved the clarity and readability of the paper.

Author information


Correspondence to Jianjun He.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

He, J., Xie, T. A supplement to the convergence rate in a theorem of Heyde. J Inequal Appl 2012, 195 (2012). https://doi.org/10.1186/1029-242X-2012-195


Keywords

  • convergence rate
  • i.i.d. random variable
  • theorem of Heyde