
Improved results in almost sure central limit theorems for the maxima and partial sums for Gaussian sequences

Abstract

Let \(X_{1}, X_{2},\ldots\) be a standardized Gaussian sequence. Universal results in almost sure central limit theorems are established for the maxima \(M_{n}\) and for the partial sums and maxima \((S_{n}/\sigma_{n}, M_{n})\), respectively, where \(S_{n}=\sum_{i=1}^{n}X_{i}\), \(\sigma^{2}_{n}=\operatorname{Var}S_{n}\), and \(M_{n}=\max_{1\leq i\leq n}X_{i}\).

Introduction

Starting with Brosamler [1] and Schatte [2], over the last two decades several authors have investigated the almost sure central limit theorem (ASCLT), dealing mostly with partial sums of random variables. ASCLT results for partial sums were obtained by Ibragimov and Lifshits [3], Miao [4], Berkes and Csáki [5], Hörmann [6], Wu [7–9], and Wu and Chen [10]. The concept has already found applications in many areas. Fahrner and Stadtmüller [11] and Nadarajah and Mitov [12] investigated the ASCLT for the maxima of i.i.d. random variables. The ASCLT for Gaussian sequences has seen new developments in recent years. Significant contributions can be found in Csáki and Gonchigdanzan [13], Chen and Lin [14], Tan et al. [15], and Tan and Peng [16], who extended this principle by proving the ASCLT for the maxima of a Gaussian sequence. Further, Peng et al. [17–19], Zhao et al. [20], and Tan and Wang [21] studied the maxima and partial sums of standardized nonstationary Gaussian sequences.

A standardized Gaussian sequence \(\{X_{n}; n\geq1\}\) is a sequence of standard normal random variables such that, for any choice of n and indices \(i_{1},\ldots,i_{n}\), the joint distribution of \(X_{i_{1}},\ldots,X_{i_{n}}\) is an n-dimensional normal distribution. Throughout this paper we assume that \(\{X_{n}; n\geq1\}\) is a standardized Gaussian sequence with covariances \(r_{ij}:=\operatorname{Cov}(X_{i}, X_{j})\). For each \(n\geq1\), let \(S_{n}=\sum_{i=1}^{n}X_{i}\), \(\sigma^{2}_{n}=\operatorname{Var}S_{n}\), and \(M_{n}=\max_{1\leq i\leq n}X_{i}\); thus \(S_{n}/\sigma_{n}\) and \(M_{n}\) denote the normalized partial sums and the maxima, respectively. Let \(\Phi(\cdot)\) and \(\phi(\cdot)\) denote the standard normal distribution function and its density, respectively, and let I denote the indicator function. \(A_{n}\sim B_{n}\) means \(\lim_{n\rightarrow\infty}A_{n}/B_{n}=1\), and \(A_{n}\ll B_{n}\) means that there exists a constant \(c>0\) such that \(A_{n}\leq cB_{n}\) for sufficiently large n. The symbol c stands for a generic positive constant which may differ from one place to another. The normalizing constants \(a_{n}\) and \(b_{n}\) are defined by

$$ a_{n}=(2\ln n)^{1/2}, \qquad b_{n}=a_{n}- \frac{\ln\ln n+\ln (4\pi)}{2a_{n}}. $$
(1)
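The defining property of \(a_{n}\) and \(b_{n}\) is that \(n(1-\Phi(x/a_{n}+b_{n}))\rightarrow\mathrm{e}^{-x}\) as \(n\rightarrow\infty\) (Theorem 1.5.3 in Leadbetter et al. [26], used repeatedly below). This can be checked numerically; the following Python sketch (the sample values of n and x are arbitrary illustrative choices) evaluates the normal tail via the complementary error function.

```python
import math

def std_normal_tail(u):
    # 1 - Phi(u), computed via the complementary error function
    return 0.5 * math.erfc(u / math.sqrt(2.0))

def a(n):
    return math.sqrt(2.0 * math.log(n))

def b(n):
    an = a(n)
    return an - (math.log(math.log(n)) + math.log(4.0 * math.pi)) / (2.0 * an)

# n * (1 - Phi(x/a_n + b_n)) should approach exp(-x) as n grows
x = 1.0
vals = [n * std_normal_tail(x / a(n) + b(n)) for n in (10**4, 10**6, 10**8)]
print(vals)  # increases slowly towards exp(-1) ~ 0.368
```

The convergence is only logarithmic in n, which is why the values creep toward the limit rather than jump to it.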

Chen and Lin [14] obtained the following almost sure limit theorem for the maximum of a standardized nonstationary Gaussian sequence.

Theorem A

Let \(\{X_{n}; n\geq1\}\) be a standardized nonstationary Gaussian sequence such that \(|r_{ij}|\leq\rho_{|i-j|}\) for \(i\neq j\), where \(\rho_{n}<1\) for all \(n\geq1\) and \(\rho_{n}\ll\frac{1}{\ln n(\ln\ln n)^{1+\varepsilon}}\) for some \(\varepsilon>0\). Let the numerical sequence \(\{u_{ni}; 1\leq i\leq n, n\geq1\}\) be such that \(n(1-\Phi(\lambda_{n}))\) is bounded and \(\lambda_{n}=\min_{1\leq i\leq n}u_{ni}\geq c\ln^{1/2}n\) for some \(c>0\). If \(\sum_{i=1}^{n}(1-\Phi(u_{ni}))\rightarrow\tau\) as \(n\rightarrow\infty\) for some \(\tau\geq0\), then

$$\lim_{n\rightarrow\infty}\frac{1}{\ln n}\sum_{k=1} ^{n} \frac{1}{k}I \Biggl(\bigcap_{i=1} ^{k}(X_{i}\leq u_{ki}) \Biggr)=\exp(-\tau)\quad \textit{a.s.} $$

Zhao et al. [20] obtained the following almost sure limit theorem for the maximum and partial sums of a standardized nonstationary Gaussian sequence.

Theorem B

Let \(\{X_{n}; n\geq1\}\) be a standardized nonstationary Gaussian sequence. Suppose that there exists a numerical sequence \(\{u_{ni}; 1\leq i\leq n, n\geq1\}\) such that \(\sum_{i=1}^{n}(1-\Phi(u_{ni}))\rightarrow\tau\) for some \(0<\tau<\infty\) and \(n(1-\Phi(\lambda_{n}))\) is bounded, where \(\lambda_{n}=\min_{1\leq i\leq n}u_{ni}\). If \(\sup_{i\neq j}|r_{ij}|=\delta<1\),

$$\begin{aligned}& \sum_{j=2} ^{n}\sum _{i=1} ^{j-1}|r_{ij}|=o(n), \end{aligned}$$
(2)
$$\begin{aligned}& \sup_{i\geq1}\sum_{j=1} ^{n}|r_{ij}|\ll\frac{\ln^{1/2}n}{(\ln\ln n)^{1+\varepsilon}} \quad \textit{for some } \varepsilon>0 , \end{aligned}$$
(3)

then

$$ \lim_{n\rightarrow\infty}\frac{1}{\ln n}\sum_{k=1} ^{n} \frac{1}{k}I \Biggl(\bigcap_{i=1} ^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr)=\exp(-\tau)\Phi(y)\quad \textit{a.s. for all }y\in\mathbb{R} $$
(4)

and

$$ \lim_{n\rightarrow\infty}\frac{1}{\ln n}\sum_{k=1} ^{n}\frac{1}{k}I \biggl(a_{k}(M_{k}-b_{k}) \leq x, \frac{S_{k}}{\sigma_{k}}\leq y \biggr)=\exp\bigl(-\mathrm{e}^{-x}\bigr) \Phi(y) \quad \textit{a.s. for all } x, y\in\mathbb{R} . $$
(5)

In the terminology of summation procedures (see e.g. Chandrasekharan and Minakshisundaram [22], p.35), the larger the weight sequence in an ASCLT, the stronger the resulting statement. Based on this view, one should expect stronger results from larger weights; moreover, it is of considerable interest to determine the optimal weights.

The purpose of this paper is to give substantial improvements for the weight sequences and to weaken greatly conditions (2) and (3) in Theorem B obtained by Zhao et al. [20]. We establish the ASCLT for the maxima \(M_{n}\) and for the joint maxima and partial sums of standardized Gaussian sequences, and we show that the ASCLT holds under the fairly general weights \(d_{k}=k^{-1}\exp(\ln^{\alpha}k)\), \(0\leq\alpha<1/2\).

Main results

Set

$$ d_{k}=\frac{\exp(\ln^{\alpha}k)}{k},\qquad D_{n}=\sum _{k=1} ^{n}d_{k} \quad \mbox{for }0\leq \alpha< 1/2. $$
(6)
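As a quick numerical illustration of (6) (the sample size \(n=10^{6}\) is an arbitrary choice), for \(\alpha=0\) the weights reduce to \(d_{k}=\mathrm{e}/k\), so \(D_{n}\) is \(\mathrm{e}\) times a harmonic sum and grows like \(\mathrm{e}\ln n\):

```python
import math

def D(n, alpha):
    # D_n = sum_{k=1}^{n} d_k with d_k = exp(ln^alpha k) / k, as in (6)
    return sum(math.exp(math.log(k) ** alpha) / k for k in range(1, n + 1))

n = 10**6
ratio = D(n, 0.0) / (math.e * math.log(n))
print(ratio)  # close to 1: for alpha = 0, D_n ~ e * ln n
```

The ratio tends to 1 only at rate \(1/\ln n\), reflecting the Euler constant correction in the harmonic sum.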

Our theorems are formulated in a more general setting.

Theorem 2.1

Let \(\{X_{n}; n\geq1\}\) be a standardized Gaussian sequence. Let the numerical sequence \(\{u_{ni}; 1\leq i\leq n, n\geq1\}\) be such that \(\sum_{i=1}^{n}(1-\Phi(u_{ni}))\rightarrow\tau\) for some \(0\leq\tau<\infty\) and \(n(1-\Phi(\lambda_{n}))\) is bounded, where \(\lambda_{n}=\min_{1\leq i\leq n}u_{ni}\). Suppose that there is a sequence \(\{\rho_{n}\}\) with \(\rho_{n}<1\) for all \(n\geq1\) such that

$$ |r_{ij}|\leq\rho_{|i-j|}\quad \textit{for } i\neq j,\qquad \rho_{n}\ll\frac{1}{\ln n(\ln D_{n})^{1+\varepsilon}}\quad \textit{for some } \varepsilon>0. $$
(7)

Then

$$ \lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1} ^{n}d_{k}I \Biggl(\bigcap_{i=1}^{k}(X_{i} \leq u_{ki}) \Biggr)=\exp(-\tau)\quad \textit{a.s.} $$
(8)

and

$$ \lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1} ^{n}d_{k}I \bigl(a_{k}(M_{k}-b_{k}) \leq x \bigr)=\exp\bigl(-\mathrm{e}^{-x}\bigr) \quad \textit{a.s. for any } x\in \mathbb{R}, $$
(9)

where \(a_{n}\) and \(b_{n}\) are defined by (1).
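Conclusion (9) can be illustrated by simulation in the simplest admissible case, an i.i.d. standard normal sequence (\(r_{ij}=0\), so (7) holds trivially), with \(\alpha=0\), i.e. \(d_{k}=\mathrm{e}/k\). The sketch below is only a rough Monte Carlo check: logarithmic-type averages converge very slowly, so the agreement with \(\exp(-\mathrm{e}^{-x})\) is loose. The path length, the number of paths, and the seed are arbitrary choices.

```python
import math
import random

def weighted_max_average(N, x, rng):
    # (1/D_N) * sum_k d_k * I(a_k (M_k - b_k) <= x) along one i.i.d. N(0,1)
    # path, with d_k = e/k (alpha = 0); k = 1 is skipped since b_1 is undefined
    total = DN = 0.0
    M = rng.gauss(0.0, 1.0)  # running maximum M_k
    for k in range(2, N + 1):
        M = max(M, rng.gauss(0.0, 1.0))
        ak = math.sqrt(2.0 * math.log(k))
        bk = ak - (math.log(math.log(k)) + math.log(4.0 * math.pi)) / (2.0 * ak)
        dk = math.e / k
        DN += dk
        if ak * (M - bk) <= x:
            total += dk
    return total / DN

rng = random.Random(2024)
avg = sum(weighted_max_average(20000, 0.0, rng) for _ in range(30)) / 30
print(avg)  # roughly exp(-e^0) = exp(-1) ~ 0.37, up to slow-convergence error
```

Averaging over several independent paths only reduces the Monte Carlo noise; the systematic error from the slow extreme-value convergence remains.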

Theorem 2.2

Let \(\{X_{n}; n\geq1\}\) be a standardized Gaussian sequence. Let the numerical sequence \(\{u_{ni}; 1\leq i\leq n, n\geq1\}\) be such that \(\sum_{i=1}^{n}(1-\Phi(u_{ni}))\rightarrow\tau\) for some \(0\leq\tau<\infty\) and \(n(1-\Phi(\lambda_{n}))\) is bounded, where \(\lambda_{n}=\min_{1\leq i\leq n}u_{ni}\). Suppose that \(\sup_{i\neq j}|r_{ij}|=\delta<1\) and that there exists a constant \(0< c<1/2\) such that

$$\begin{aligned}& \biggl\vert \sum_{1\leq i< j\leq n}r_{ij}\biggr\vert \leq cn, \end{aligned}$$
(10)
$$\begin{aligned}& \max_{1\leq i\leq n}\sum_{j=1} ^{n}|r_{ij}|\ll\frac{\ln^{1/2}n}{\ln D_{n}}. \end{aligned}$$
(11)

Then

$$ \lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1} ^{n}d_{k}I \Biggl(\bigcap_{i=1}^{k}(X_{i} \leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr)=\exp(-\tau)\Phi(y) \quad \textit{a.s. for any } y\in \mathbb{R} $$
(12)

and

$$\begin{aligned}& \lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1} ^{n}d_{k}I \biggl(a_{k}(M_{k}-b_{k}) \leq x, \frac{S_{k}}{\sigma_{k}}\leq y \biggr) \\& \quad =\exp\bigl(-\mathrm{e}^{-x}\bigr) \Phi(y) \quad \textit{a.s. for any } x, y\in\mathbb{R}, \end{aligned}$$
(13)

where \(a_{n}\) and \(b_{n}\) are defined by (1).
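The joint conclusion (13) can likewise be illustrated for an i.i.d. standard normal sequence, where \(\sigma_{k}=\sqrt{k}\) and conditions (10) and (11) hold trivially; the limit at \(x=y=0\) is \(\exp(-1)\Phi(0)=\exp(-1)/2\). Again this is only a hedged Monte Carlo sketch with \(\alpha=0\) and arbitrarily chosen path length, number of paths, and seed.

```python
import math
import random

def weighted_joint_average(N, x, y, rng):
    # (1/D_N) * sum_k d_k * I(a_k (M_k - b_k) <= x, S_k / sigma_k <= y) along
    # one i.i.d. N(0,1) path (so sigma_k = sqrt(k)), with d_k = e/k (alpha = 0)
    total = DN = 0.0
    X = rng.gauss(0.0, 1.0)
    M, S = X, X
    for k in range(2, N + 1):
        X = rng.gauss(0.0, 1.0)
        M = max(M, X)
        S += X
        ak = math.sqrt(2.0 * math.log(k))
        bk = ak - (math.log(math.log(k)) + math.log(4.0 * math.pi)) / (2.0 * ak)
        dk = math.e / k
        DN += dk
        if ak * (M - bk) <= x and S / math.sqrt(k) <= y:
            total += dk
    return total / DN

rng = random.Random(7)
avg = sum(weighted_joint_average(20000, 0.0, 0.0, rng) for _ in range(30)) / 30
print(avg)  # roughly exp(-e^0) * Phi(0) = 0.5 * exp(-1) ~ 0.18
```

The product form of the limit reflects the asymptotic independence of the maximum and the normalized partial sum.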

Taking \(u_{ki}=u_{k}\) for \(1\leq i\leq k\) in Theorems 2.1 and 2.2, we can immediately obtain the following corollaries.

Corollary 2.3

Let \(\{X_{n}; n\geq1\}\) be a standardized Gaussian sequence. Let the numerical sequence \(\{u_{n}; n\geq1\}\) be such that \(n(1-\Phi(u_{n}))\rightarrow\tau\) for some \(0\leq\tau<\infty\). Suppose that condition (7) is satisfied. Then (9) and

$$\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1} ^{n}d_{k}I(M_{k}\leq u_{k})=\exp(- \tau)\quad \textit{a.s.} $$

hold.

Corollary 2.4

Let \(\{X_{n}; n\geq1\}\) be a standardized Gaussian sequence. Let the numerical sequence \(\{u_{n}; n\geq1\}\) be such that \(n(1-\Phi(u_{n}))\rightarrow\tau\) for some \(0\leq\tau<\infty\). Suppose that \(\sup_{i\neq j}|r_{ij}|=\delta<1\) and that there exists a constant \(0< c<1/2\) such that conditions (10) and (11) are satisfied. Then (13) and

$$\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1} ^{n}d_{k}I \biggl(M_{k}\leq u_{k}, \frac{S_{k}}{\sigma_{k}}\leq y \biggr)=\exp(-\tau)\Phi(y)\quad \textit{a.s. for any }y\in \mathbb{R} $$

hold.

By the terminology of summation procedures (see e.g. Chandrasekharan and Minakshisundaram [22], p.35), we have the following corollary.

Corollary 2.5

Theorems 2.1 and 2.2, Corollaries 2.3 and 2.4 remain valid if we replace the weight sequence \(\{d_{k}; k\geq1\}\) by any \(\{d_{k}^{\ast}; k\geq1\}\) such that \(0\leq d_{k}^{\ast}\leq d_{k}\), \(\sum_{k=1}^{\infty}d_{k}^{\ast}=\infty\).

Remark 2.6

Obviously, condition (10) is significantly weaker than condition (2). In particular, taking \(\alpha=0\), i.e. the weight \(d_{k}=\mathrm{e}/k\), we have \(D_{n}\sim\mathrm{e}\ln n\) and \(\ln D_{n}\sim\ln\ln n\); in this case condition (11) is significantly weaker than condition (3), and the conclusions (12) and (13) become (4) and (5), respectively. Therefore, our Theorem 2.2 not only substantially improves the weights but also greatly weakens the restrictions on the covariances \(r_{ij}\) in Theorem B obtained by Zhao et al. [20].

Remark 2.7

Theorem A obtained by Chen and Lin [14] is the special case \(\alpha=0\) of Theorem 2.1. When \(\{X_{n}; n\geq1\}\) is stationary, \(u_{ni}=u_{n}\) for \(1\leq i\leq n\), and \(\alpha=0\), Theorem 2.1 reduces to Corollary 2.2 of Csáki and Gonchigdanzan [13].

Remark 2.8

Whether (8), (9), (12), and (13) also hold for some \(1/2\leq\alpha<1\) remains an open question.

Proofs

The proofs of our results follow a well-known scheme for almost sure limit theorems; see e.g. Berkes and Csáki [5], Chuprunov and Fazekas [23, 24], and Fazekas and Rychlik [25]. Extending the weights from \(d_{k}=1/k\) to \(d_{k}=\exp(\ln^{\alpha}k)/k\), \(0\leq\alpha<1/2\), while relaxing the restrictions on the covariances \(r_{ij}\), presents considerable difficulties; the following five lemmas play an important role in overcoming them. The proofs of Lemmas 3.2 to 3.4 are given in the Appendix.

Lemma 3.1

(Normal comparison lemma, Theorem 4.2.1 in Leadbetter et al. [26])

Suppose \(\xi_{1},\ldots, \xi_{n}\) are standard normal variables with covariance matrix \(\Lambda^{1}=(\Lambda^{1}_{ij})\), and \(\eta_{1},\ldots, \eta_{n}\) similarly with covariance matrix \(\Lambda^{0}=(\Lambda^{0}_{ij})\), and let \(\rho_{ij}=\max(|\Lambda^{1}_{ij}|, |\Lambda^{0}_{ij}|)\), \(\max_{i\neq j}\rho_{ij}=\delta<1\). Further, let \(u_{1},\ldots, u_{n}\) be real numbers. Then

$$\begin{aligned}& \bigl\vert \mathbb{P}(\xi_{j}\leq u_{j}\textit{ for }j=1,\ldots,n)-\mathbb{P}(\eta_{j}\leq u_{j}\textit{ for }j=1,\ldots,n)\bigr\vert \\& \quad \leq K\sum_{1\leq i< j\leq n}\bigl\vert \Lambda^{1}_{ij}-\Lambda^{0}_{ij} \bigr\vert \exp \biggl(-\frac{u^{2}_{i}+u^{2}_{j}}{2(1+\rho _{ij})} \biggr) \end{aligned}$$

for some constant K, depending only on δ.
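In the bivariate case the lemma can be checked numerically. The sketch below computes the bivariate normal distribution function by one-dimensional quadrature and takes \(\Lambda^{0}\) to be the identity matrix. For \(n=2\) and \(0<r<1\) the constant may in fact be taken as \(K=1/(2\pi\sqrt{1-r^{2}})\): the difference equals \(\int_{0}^{r}\phi_{2}(u_{1},u_{2};\rho)\,\mathrm{d}\rho\), and \(\rho(u_{1}-u_{2})^{2}\geq0\) bounds the exponent of the bivariate density by \(-(u_{1}^{2}+u_{2}^{2})/(2(1+\rho))\). The test point \(u_{1}=u_{2}=1\), \(r=0.3\) is an arbitrary choice.

```python
import math

def Phi(u):
    return 0.5 * math.erfc(-u / math.sqrt(2.0))

def phi(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi2(u1, u2, r, m=4000):
    # P(Z1 <= u1, Z2 <= u2) for standard bivariate normal with correlation r,
    # via Phi2 = int_{-inf}^{u1} phi(z) * Phi((u2 - r z)/sqrt(1 - r^2)) dz
    # (composite Simpson rule on [-8, u1]; m must be even)
    lo, h = -8.0, (u1 + 8.0) / m
    s = math.sqrt(1.0 - r * r)
    f = lambda z: phi(z) * Phi((u2 - r * z) / s)
    total = f(lo) + f(u1)
    for i in range(1, m):
        total += (4.0 if i % 2 else 2.0) * f(lo + i * h)
    return total * h / 3.0

u1 = u2 = 1.0
r = 0.3
diff = Phi2(u1, u2, r) - Phi(u1) * Phi(u2)  # Lambda^0 = identity
bound = r / (2.0 * math.pi * math.sqrt(1.0 - r * r)) * math.exp(
    -(u1 ** 2 + u2 ** 2) / (2.0 * (1.0 + r)))
print(diff, bound)  # the difference is dominated by the comparison bound
```

At this test point the bound is tight to within roughly 15 percent, which matches the integral representation of the difference.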

Lemma 3.2

Suppose that the conditions of Theorem 2.1 hold. Then there exists a constant \(\gamma>0\) such that

$$\begin{aligned}& \sup_{1\leq k\leq l}\sum_{i=1}^{k} \sum_{j=i+1}^{l}|r_{ij}|\exp \biggl(- \frac {u_{ki}^{2}+u_{lj}^{2}}{2(1+|r_{ij}|)} \biggr)\ll\frac{1}{l^{\gamma}}+\frac {1}{(\ln D_{l})^{1+\varepsilon}}, \end{aligned}$$
(14)
$$\begin{aligned}& \mathbb{E}\Biggl\vert I \Biggl(\bigcap_{i=1}^{l}(X_{i} \leq u_{li}) \Biggr)-I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}) \Biggr) \Biggr\vert \ll\frac{k}{l} \quad \textit{for } 1\leq k< l, \end{aligned}$$
(15)
$$\begin{aligned}& \Biggl\vert \operatorname{Cov} \Biggl(I \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}) \Biggr), I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}) \Biggr) \Biggr) \Biggr\vert \ll\frac{1}{l^{\gamma}}+ \frac{1}{(\ln D_{l})^{1+\varepsilon}} \quad \textit{for } 1\leq k< l, \end{aligned}$$
(16)

where ε is defined by (7).

Lemma 3.3

Suppose that the conditions of Theorem 2.2 hold. Then there exists a constant \(\gamma>0\) such that

$$\begin{aligned}& \mathbb{E}\Biggl\vert I \Biggl(\bigcap_{i=1}^{l}(X_{i} \leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr)-I \Biggl(\bigcap _{i=k+1}^{l}(X_{i}\leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr) \Biggr\vert \ll \biggl(\frac{k}{l} \biggr)^{\gamma}\quad \textit{for } 1\leq k< l, \end{aligned}$$
(17)
$$\begin{aligned}& \Biggl\vert \operatorname{Cov} \Biggl(I \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr), I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr) \Biggr) \Biggr\vert \\& \quad \ll \biggl(\frac{k}{l} \biggr)^{\gamma}+\frac{k^{1/2}(\ln l)^{1/2}}{l^{1/2}\ln D_{l}} \quad \textit{for } 1\leq k< \frac{l}{\ln l}. \end{aligned}$$
(18)

The following weak convergence results extend Theorem 4.5.2 of Leadbetter et al. [26] to nonstationary normal random variables.

Lemma 3.4

Suppose that the conditions of Theorem 2.1 hold. Then

$$ \lim_{n\rightarrow\infty}\mathbb{P} \Biggl(\bigcap _{i=1}^{n}(X_{i}\leq u_{ni}) \Biggr)=\mathrm{e}^{-\tau}. $$
(19)

Suppose that the conditions of Theorem 2.2 hold. Then

$$ \lim_{n\rightarrow\infty}\mathbb{P} \Biggl(\bigcap _{i=1}^{n}(X_{i}\leq u_{ni}), \frac{S_{n}}{\sigma_{n}}\leq y \Biggr)=\mathrm{e}^{-\tau}\Phi(y). $$
(20)

Lemma 3.5

Let \(\{\xi_{n}; n\geq1\}\) be a sequence of uniformly bounded random variables. If

$$\operatorname{Var} \Biggl(\sum_{k=1}^{n}d_{k} \xi_{k} \Biggr)\ll \frac{D^{2}_{n}}{(\ln D_{n})^{1+\varepsilon}} $$

for some \(\varepsilon>0\), then

$$\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}( \xi_{k}-\mathbb {E}\xi_{k})=0\quad \textit{a.s.}, $$

where \(d_{n}\) and \(D_{n}\) are defined by (6).

Proof

Lemma 3.5 can be proved in the same way as Lemma 2.2 in Wu [9]. □

Proof of Theorem 2.1

By Lemma 3.4, \(\mathbb{P}(\bigcap_{i=1}^{n}(X_{i}\leq u_{ni}))\rightarrow\exp(-\tau)\); hence, by the Toeplitz lemma,

$$ \lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k} \mathbb{P} \Biggl(\bigcap_{i=1}^{k}(X_{i} \leq u_{ki}) \Biggr)=\exp(-\tau). $$

Therefore, in order to prove (8), it suffices to prove that

$$\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k} \Biggl(I \Biggl(\bigcap_{i=1}^{k}(X_{i} \leq u_{ki}) \Biggr)-\mathbb{P} \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}) \Biggr) \Biggr)=0\quad \mbox{a.s.}, $$

which will be done by showing that

$$ \operatorname{Var} \Biggl(\sum_{k=1}^{n}d_{k}I \Biggl(\bigcap_{i=1}^{k}(X_{i} \leq u_{ki}) \Biggr) \Biggr)\ll\frac{D^{2}_{n}}{(\ln D_{n})^{1+\varepsilon}} $$
(21)

for some \(\varepsilon>0\), so that Lemma 3.5 applies. Let \(\xi_{k}:=I (\bigcap_{i=1}^{k}(X_{i}\leq u_{ki}) )-\mathbb{P} (\bigcap_{i=1}^{k}(X_{i}\leq u_{ki}) )\). Then \(\mathbb{E}\xi_{k}=0\) and \(|\xi_{k}|\leq1\) for all \(k\geq1\). Hence

$$\begin{aligned} \operatorname{Var} \Biggl(\sum_{k=1}^{n}d_{k}I \Biggl(\bigcap_{i=1}^{k}(X_{i} \leq u_{ki}) \Biggr) \Biggr) =&\sum_{k=1}^{n}d^{2}_{k} \mathbb{E}\xi^{2}_{k}+2\sum_{1\leq k< l\leq n}d_{k}d_{l} \mathbb{E}(\xi_{k}\xi_{l}) \\ :=&T_{1}+T_{2}. \end{aligned}$$
(22)

Since \(|\xi_{k}|\leq1\) and \(\exp(2\ln^{\beta}x)=\exp(2\beta\int_{1}^{x}\frac{(\ln u)^{\beta-1}}{u} \,\mathrm{d}u)\), \(\beta<1\), is a slowly varying function at infinity, from Seneta [27], it follows that

$$ T_{1}\leq\sum_{k=1}^{\infty} \frac{\exp(2\ln^{\alpha}k)}{k^{2}}=c\ll\frac{D^{2}_{n}}{(\ln D_{n})^{1+\varepsilon}}. $$
(23)

By Lemma 3.2, for \(1\leq k< l\),

$$\begin{aligned} \bigl\vert \mathbb{E}(\xi_{k}\xi_{l})\bigr\vert \leq& \Biggl\vert \operatorname{Cov} \Biggl(I \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}) \Biggr), I \Biggl(\bigcap_{i=1}^{l}(X_{i} \leq u_{li}) \Biggr)-I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}) \Biggr) \Biggr)\Biggr\vert \\ &{}+\Biggl\vert \operatorname{Cov} \Biggl(I \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}) \Biggr), I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}) \Biggr) \Biggr)\Biggr\vert \\ \ll&\mathbb{E}\Biggl\vert I \Biggl(\bigcap_{i=1}^{l}(X_{i} \leq u_{li}) \Biggr)-I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}) \Biggr)\Biggr\vert \\ &{}+\Biggl\vert \operatorname{Cov} \Biggl(I \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}) \Biggr), I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}) \Biggr) \Biggr)\Biggr\vert \\ \ll& \biggl(\frac{k}{l} \biggr)^{\gamma_{1}}+\frac{1}{(\ln D_{l})^{1+\varepsilon}} \end{aligned}$$

for \(\gamma_{1}=\min(1, \gamma)>0\). Hence,

$$\begin{aligned} T_{2} \ll&\sum_{l=1}^{n}\sum _{k=1}^{l}d_{k}d_{l} \biggl(\frac{k}{l} \biggr)^{\gamma_{1}} +\sum _{l=1}^{n}\sum_{k=1}^{l}d_{k}d_{l} \frac{1}{(\ln D_{l})^{1+\varepsilon}} \\ :=&T_{21}+T_{22}. \end{aligned}$$
(24)

By (11) in Wu [9],

$$ \begin{aligned} &D_{n}\sim\frac{1}{\alpha}\ln^{1-\alpha}n\exp\bigl( \ln^{\alpha}n\bigr),\qquad \ln D_{n}\sim\ln^{\alpha}n, \\ &\exp\bigl(\ln^{\alpha}n\bigr)\sim\frac{\alpha D_{n}}{(\ln D_{n})^{\frac{1-\alpha}{\alpha}}}\quad \mbox{for } \alpha>0. \end{aligned} $$
(25)

From this, combined with the fact that \(\int_{1}^{x}\frac{l(t)}{t^{\beta}}\,\mathrm{d}t\sim\frac{l(x)x^{1-\beta}}{1-\beta}\) as \(x\rightarrow\infty\) for \(\beta<1\), where \(l(x)\) is a slowly varying function at infinity (see Proposition 1.5.8 in Bingham et al. [28]), we get

$$\begin{aligned} T_{21} \leq&\sum_{l=1}^{n} \frac{d_{l}}{l^{\gamma_{1}}}\sum_{k=1}^{l} \frac{\exp(\ln^{\alpha}k)}{k^{1-{\gamma_{1}}}}\ll\sum_{l=1}^{n} \frac{d_{l}}{l^{\gamma_{1}}}l^{\gamma _{1}}\exp\bigl(\ln^{\alpha}l\bigr) \\ \leq& D_{n}\exp\bigl(\ln^{\alpha}n\bigr) \ll \left \{ \begin{array}{l@{\quad}l} \frac{D_{n}^{2}}{(\ln D_{n})^{(1-\alpha)/\alpha}}, &\alpha>0, \\ D_{n},& \alpha=0 \end{array} \right . \\ \leq&\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon_{1}}} \end{aligned}$$
(26)

for \(0<\varepsilon_{1}<(1-2\alpha)/\alpha\).
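As an aside, the regular-variation asymptotic just used can be checked on a concrete example, say \(l(t)=\ln t\) and \(\beta=1/2\) (chosen here purely for illustration), where the integral has the closed form \(2\sqrt{x}\ln x-4\sqrt{x}+4\):

```python
import math

def exact(x):
    # closed form of int_1^x (ln t) / sqrt(t) dt
    return 2.0 * math.sqrt(x) * math.log(x) - 4.0 * math.sqrt(x) + 4.0

def asymptotic(x):
    # l(x) * x^(1 - beta) / (1 - beta) with l(t) = ln t and beta = 1/2
    return 2.0 * math.sqrt(x) * math.log(x)

ratios = [exact(math.e ** p) / asymptotic(math.e ** p) for p in (10, 20, 40)]
print(ratios)  # -> 1; the relative error here is 2/ln x + o(1/ln x)
```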

Now, we estimate \(T_{22}\). For \(\alpha>0\), by (25)

$$\begin{aligned} T_{22} =&\sum_{l=1}^{n} \frac{d_{l}}{(\ln D_{l})^{1+\varepsilon}}D_{l}\ll\sum_{l=1}^{n} \frac{\exp(2\ln^{\alpha}l)(\ln l)^{1-2\alpha-\alpha\varepsilon}}{l} \\ \sim&\int_{\mathrm{e}}^{n}\frac{\exp(2\ln^{\alpha}x)(\ln x)^{1-2\alpha-\alpha\varepsilon}}{x}\,\mathrm{d}x= \int_{1}^{\ln n}\exp\bigl(2y^{\alpha}\bigr)y^{1-2\alpha-\alpha\varepsilon}\,\mathrm{d}y \\ \sim&\int_{1}^{\ln n} \biggl(\exp \bigl(2y^{\alpha}\bigr)y^{1-2\alpha-\alpha\varepsilon}+\frac{2-3\alpha-\alpha\varepsilon }{2\alpha}\exp \bigl(2y^{\alpha}\bigr)y^{1-3\alpha-\alpha\varepsilon} \biggr)\,\mathrm{d}y \\ =&\int_{1}^{\ln n} \bigl((2\alpha)^{-1} \exp\bigl(2y^{\alpha}\bigr)y^{2-3\alpha-\alpha\varepsilon} \bigr)^{\prime}\, \mathrm{d}y \\ \ll&\exp\bigl(2\ln^{\alpha}n\bigr) (\ln n)^{2-3\alpha-\alpha\varepsilon} \\ \ll&\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}. \end{aligned}$$
(27)

For \(\alpha=0\), noting that \(D_{n}\sim\mathrm{e}\ln n\), we similarly get

$$\begin{aligned} T_{22}&\sim\sum_{l=3 }^{n} \frac{\ln l}{l(\ln\ln l)^{1+\varepsilon}}\sim\int_{3}^{n} \frac{\ln x}{x(\ln\ln x)^{1+\varepsilon}}\,\mathrm{d}x \\ &=\int_{\ln3}^{\ln n} \frac{y}{(\ln y)^{1+\varepsilon}}\,\mathrm{d}y\ll\frac{\ln^{2} n}{(\ln\ln n)^{1+\varepsilon}}\sim\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}. \end{aligned}$$
(28)

Equations (22)-(24), (26)-(28) together establish (21), which concludes the proof of (8). Next, take \(u_{ni}=u_{n}=x/a_{n} +b_{n}\). Then we see that \(\sum_{i=1}^{n}(1-\Phi(u_{ni}))=n(1-\Phi(u_{n})) \rightarrow\exp(-x)\) as \(n\rightarrow\infty\) (see Theorem 1.5.3 in Leadbetter et al. [26]) and hence (9) immediately follows from (8) with \(u_{ni}=x/a_{n} +b_{n}\).

This completes the proof of Theorem 2.1. □

Proof of Theorem 2.2

By Lemma 3.4, \(\mathbb{P}(\bigcap_{i=1}^{n}(X_{i}\leq u_{ni}), S_{n}/\sigma_{n}\leq y )\rightarrow\mathrm{e}^{-\tau}\Phi(y)\); hence, by the Toeplitz lemma,

$$ \lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k} \mathbb{P} \Biggl(\bigcap_{i=1}^{k}(X_{i} \leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr)=\mathrm{e}^{-\tau} \Phi(y). $$

Therefore, in order to prove (12), it suffices to prove that

$$\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k} \Biggl(I \Biggl(\bigcap_{i=1}^{k}(X_{i} \leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr)-\mathbb{P} \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr) \Biggr)=0 \quad \mbox{a.s.}, $$

which will be done by showing that

$$ \operatorname{Var} \Biggl(\sum_{k=1}^{n}d_{k}I \Biggl(\bigcap_{i=1}^{k}(X_{i} \leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr) \Biggr)\ll \frac{D^{2}_{n}}{(\ln D_{n})^{1+\varepsilon}} $$
(29)

for some \(\varepsilon>0\), so that Lemma 3.5 applies. Let \(\eta_{k}:=I (\bigcap_{i=1}^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y )-\mathbb{P} (\bigcap_{i=1}^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y )\). By Lemma 3.3, for \(1\leq k< l/\ln l\),

$$\begin{aligned} \bigl\vert \mathbb{E}(\eta_{k}\eta_{l})\bigr\vert \leq& \Biggl\vert \operatorname{Cov} \Biggl(I \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr), I \Biggl(\bigcap_{i=1}^{l}(X_{i} \leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr) \\ &{}-I \Biggl(\bigcap _{i=k+1}^{l}(X_{i}\leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr) \Biggr)\Biggr\vert \\ &{}+\Biggl\vert \operatorname{Cov} \Biggl(I \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr), I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr) \Biggr)\Biggr\vert \\ \leq&\mathbb{E}\Biggl\vert I \Biggl(\bigcap_{i=1}^{l}(X_{i} \leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr)-I \Biggl(\bigcap _{i=k+1}^{l}(X_{i}\leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr)\Biggr\vert \\ &{}+\Biggl\vert \operatorname{Cov} \Biggl(I \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr), I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr) \Biggr)\Biggr\vert \\ \ll& \biggl(\frac{k}{l} \biggr)^{\gamma}+\frac{k^{1/2}\ln ^{1/2}l}{l^{1/2}\ln D_{l}}. \end{aligned}$$

Hence,

$$\begin{aligned}& \operatorname{Var} \Biggl(\sum_{k=1}^{n}d_{k}I \Biggl(\bigcap_{i=1}^{k}(X_{i} \leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr) \Biggr) \\& \quad = \sum _{k=1}^{n}d^{2}_{k}\mathbb{E} \eta^{2}_{k}+2\sum_{1\leq k< l\leq n}d_{k}d_{l} \mathbb{E}(\eta_{k}\eta_{l})\ll\sum _{1\leq k<l\leq n}d_{k}d_{l}\mathbb {E}( \eta_{k}\eta_{l}) \\& \quad \ll \sum_{l=1}^{n}\sum _{1\leq k<l/\ln l}d_{k}d_{l} \biggl(\frac{k}{l} \biggr)^{\gamma}+\sum_{l=1}^{n}\sum _{1\leq k<l/\ln l}d_{k}d_{l} \frac{k^{1/2}\ln^{1/2}l}{l^{1/2}\ln D_{l}} \\& \qquad {}+\sum_{l=1}^{n}\sum _{l/\ln l\leq k\leq l}d_{k}d_{l} \\& \quad := T_{3}+T_{4}+T_{5}. \end{aligned}$$
(30)

By the proof of (26),

$$ T_{3}\ll\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon_{1}}} \quad \mbox{for } 0< \varepsilon_{1}<(1-2 \alpha)/\alpha. $$
(31)

Now, we estimate \(T_{4}\). For \(\alpha>0\), by (25)

$$\begin{aligned} T_{4} \leq&\sum_{l=1}^{n} \frac{d_{l}\ln^{1/2}l}{l^{1/2}\ln D_{l}}\sum_{k=1}^{l} \frac{\exp(\ln^{\alpha}k)}{k^{1/2}}\ll\sum_{l=1}^{n} \frac{d_{l}\ln^{1/2}l}{l^{1/2}\ln D_{l}} l^{1/2}\exp\bigl(\ln^{\alpha}l\bigr) \\ \sim&\int_{\mathrm{e}}^{n}\frac{\exp(2\ln^{\alpha}x)(\ln x)^{1/2-\alpha}}{x}\,\mathrm{d}x= \int_{1}^{\ln n}\exp\bigl(2y^{\alpha}\bigr)y^{1/2-\alpha}\,\mathrm{d}y \\ \sim&\int_{1}^{\ln n} \biggl(\exp \bigl(2y^{\alpha}\bigr)y^{1/2-\alpha}+\frac{3-4\alpha}{4\alpha}\exp \bigl(2y^{\alpha}\bigr)y^{1/2-2\alpha} \biggr)\,\mathrm{d}y \\ =&\int_{1}^{\ln n} \bigl((2\alpha)^{-1} \exp\bigl(2y^{\alpha}\bigr)y^{3/2-2\alpha} \bigr)^{\prime}\, \mathrm{d}y \\ \ll&\exp\bigl(2\ln^{\alpha}n\bigr) (\ln n)^{3/2-2\alpha}\ll \frac{D_{n}^{2}}{(\ln D_{n})^{1/2\alpha}} \\ \ll&\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon_{2}}} \end{aligned}$$
(32)

for \(0<\varepsilon_{2}<1/(2\alpha)-1\).

For \(\alpha=0\),

$$\begin{aligned}& T_{4} \ll \sum_{l=3}^{n} \frac{\ln^{1/2}l}{l^{3/2}\ln\ln l}\sum_{k=1}^{l} \frac{1}{k^{1/2}}\ll\sum_{l=3 }^{n} \frac{\ln^{1/2}l}{l\ln\ln l} \\& \hphantom{T_{4}} \sim \int_{3}^{n} \frac{(\ln x)^{1/2}}{x\ln\ln x}\,\mathrm{d}x=\int_{\ln3}^{\ln n} \frac{y^{1/2}}{\ln y}\,\mathrm{d}y \\& \hphantom{T_{4}} \ll \frac{(\ln n)^{3/2}}{\ln\ln n}\ll\frac{D_{n}^{3/2}}{\ln D_{n}}\ll\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}, \end{aligned}$$
(33)
$$\begin{aligned}& T_{5} \leq \sum_{l=1}^{n}d_{l} \exp\bigl(\ln^{\alpha}l\bigr)\sum_{l/\ln l\leq k\leq l} \frac{1}{k}\ll\sum_{l=1}^{n}d_{l} \exp\bigl(\ln^{\alpha}l\bigr)\ln\ln l \\& \hphantom{T_{5}} \ll D_{n}\exp\bigl(\ln^{\alpha}n\bigr)\ln\ln n \\& \hphantom{T_{5}}\ll \left \{ \begin{array}{l@{\quad}l}\frac{D_{n}^{2}\ln\ln D_{n}}{(\ln D_{n})^{(1-\alpha)/\alpha}},& \alpha>0, \\ D_{n}\ln D_{n}, &\alpha=0 \end{array} \right . \\& \hphantom{T_{5}} \leq \frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon_{1}}}. \end{aligned}$$
(34)

Equations (30)-(34) together establish (29) for \(\varepsilon=\min(\varepsilon_{1}, \varepsilon_{2})>0\), which concludes the proof of (12). Next, take \(u_{ni}=u_{n}=x/a_{n} +b_{n}\). Then we see that \(\sum_{i=1}^{n}(1-\Phi(u_{ni}))=n(1-\Phi(u_{n})) \rightarrow\exp(-x)\) as \(n\rightarrow\infty\) (see Theorem 1.5.3 in Leadbetter et al. [26]) and hence (13) immediately follows from (12) with \(u_{ni}=x/a_{n} +b_{n}\).

This completes the proof of Theorem 2.2. □

References

1. Brosamler, GA: An almost everywhere central limit theorem. Math. Proc. Camb. Philos. Soc. 104, 561-574 (1988)

2. Schatte, P: On strong versions of the central limit theorem. Math. Nachr. 137, 249-256 (1988)

3. Ibragimov, IA, Lifshits, M: On the convergence of generalized moments in almost sure central limit theorem. Stat. Probab. Lett. 40, 343-351 (1998)

4. Miao, Y: Central limit theorem and almost sure central limit theorem for the product of some partial sums. Proc. Indian Acad. Sci. Math. Sci. 118(2), 289-294 (2008)

5. Berkes, I, Csáki, E: A universal result in almost sure central limit theory. Stoch. Process. Appl. 94, 105-134 (2001)

6. Hörmann, S: Critical behavior in almost sure central limit theory. J. Theor. Probab. 20, 613-636 (2007)

7. Wu, QY: Almost sure limit theorems for stable distribution. Stat. Probab. Lett. 81(6), 662-672 (2011)

8. Wu, QY: An almost sure central limit theorem for the weight function sequences of NA random variables. Proc. Indian Acad. Sci. Math. Sci. 121(3), 369-377 (2011)

9. Wu, QY: A note on the almost sure limit theorem for self-normalized partial sums of random variables in the domain of attraction of the normal law. J. Inequal. Appl. 2012, 17 (2012). doi:10.1186/1029-242X-2012-17

10. Wu, QY, Chen, PY: An improved result in almost sure central limit theorem for self-normalized products of partial sums. J. Inequal. Appl. 2013, 129 (2013). doi:10.1186/1029-242X-2013-129

11. Fahrner, I, Stadtmüller, U: On almost sure max-limit theorems. Stat. Probab. Lett. 37, 229-236 (1998)

12. Nadarajah, S, Mitov, K: Asymptotics of maxima of discrete random variables. Extremes 5, 287-294 (2002)

13. Csáki, E, Gonchigdanzan, K: Almost sure limit theorems for the maximum of stationary Gaussian sequences. Stat. Probab. Lett. 58, 195-203 (2002)

14. Chen, SQ, Lin, ZY: Almost sure max-limits for nonstationary Gaussian sequence. Stat. Probab. Lett. 76, 1175-1184 (2006)

15. Tan, ZQ, Peng, ZX, Nadarajah, S: Almost sure convergence of sample range. Extremes 10, 225-233 (2007)

16. Tan, ZQ, Peng, ZX: Almost sure convergence for non-stationary random sequences. Stat. Probab. Lett. 79, 857-863 (2009)

17. Peng, ZX, Liu, MM, Nadarajah, S: Conditions based on conditional moments for max-stable limit laws. Extremes 11, 329-337 (2008)

18. Peng, ZX, Wang, LL, Nadarajah, S: Almost sure central limit theorem for partial sums and maxima. Math. Nachr. 282(4), 632-636 (2009)

19. Peng, ZX, Weng, ZC, Nadarajah, S: Almost sure limit theorems of extremes of complete and incomplete samples of stationary sequences. Extremes 13, 463-480 (2010)

20. Zhao, SL, Peng, ZX, Wu, SL: Almost sure convergence for the maximum and the sum of nonstationary Gaussian sequences. J. Inequal. Appl. 2010, Article ID 856495 (2010). doi:10.1155/2010/856495

21. Tan, ZQ, Wang, YB: Almost sure central limit theorem for the maxima and sums of stationary Gaussian sequences. J. Korean Stat. Soc. 40, 347-355 (2011)

22. Chandrasekharan, K, Minakshisundaram, S: Typical Means. Oxford University Press, Oxford (1952)

23. Chuprunov, A, Fazekas, I: Almost sure versions of some analogues of the invariance principle. Publ. Math. (Debr.) 54(3-4), 457-471 (1999)

24. Chuprunov, A, Fazekas, I: Almost sure versions of some functional limit theorems. J. Math. Sci. (N.Y.) 111(3), 3528-3536 (2002)

25. Fazekas, I, Rychlik, Z: Almost sure functional limit theorems. Ann. Univ. Mariae Curie-Skłodowska, Sect. A 56(1), 1-18 (2002)

26. Leadbetter, MR, Lindgren, G, Rootzén, H: Extremes and Related Properties of Random Sequences and Processes. Springer, New York (1983)

27. Seneta, E: Regularly Varying Functions. Lecture Notes in Mathematics, vol. 508. Springer, Berlin (1976)

28. Bingham, NH, Goldie, CM, Teugels, JL: Regular Variation. Cambridge University Press, Cambridge (1987)


Acknowledgements

The author is very grateful to the referees and the Editors for their valuable comments and some helpful suggestions that improved the clarity and readability of the paper. The author was supported by the National Natural Science Foundation of China (11361019), project supported by Program to Sponsor Teams for Innovation in the Construction of Talent Highlands in Guangxi Institutions of Higher Learning ([2011] 47), and the Support Program of the Guangxi China Science Foundation (2013GXNSFDA019001).

Author information


Corresponding author

Correspondence to Qunying Wu.

Additional information

Competing interests

The author declares that there are no competing interests.

Author’s information

Qunying Wu is a professor and doctor working in the field of probability and statistics.

Appendix

Proof of Lemma 3.2

By assumption (7), we have \(\delta:=\sup_{i\neq j}|r_{ij}|<1\). Choose λ such that \(0<\lambda<2/(1+\delta)-1\). For \(1\leq k\leq l\),

$$\begin{aligned}& \sum_{i=1} ^{k}\sum _{j=i+1} ^{l}|r_{ij}|\exp \biggl(- \frac {u_{ki}^{2}+u_{lj}^{2}}{2(1+|r_{ij}|)} \biggr) \\& \quad = \sum_{i=1} ^{k}\sum _{i+1\leq j\leq l, j-i\leq l^{\lambda}}|r_{ij}|\exp \biggl(-\frac {u_{ki}^{2}+u_{lj}^{2}}{2(1+|r_{ij}|)} \biggr) \\& \qquad {}+\sum_{i=1} ^{k}\sum _{i+1\leq j\leq l, j-i> l^{\lambda}}|r_{ij}|\exp \biggl(-\frac {u_{ki}^{2}+u_{lj}^{2}}{2(1+|r_{ij}|)} \biggr) \\& \quad := H_{1}+H_{2}. \end{aligned}$$
(35)

Since \(n(1-\Phi(\lambda_{n}))\) is bounded, where \(\lambda_{n}\) is as defined in Theorem 2.1, there exists a constant \(c>0\) such that \(n(1-\Phi(\lambda_{n}))\leq c\). Define \(v_{n}\) by \(n(1-\Phi(v_{n}))=c\); then clearly \(v_{n}\leq\lambda_{n}\).

Since \(1-\Phi(x)\sim\phi(x)/x\) as \(x\rightarrow\infty\), we have

$$ \exp \biggl(-\frac{v^{2}_{n}}{2} \biggr)\sim c\frac{v_{n}}{n},\quad v_{n}\sim\sqrt{2\ln n} . $$
(36)
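As a quick numerical aside (not part of the proof), the Mills-ratio asymptotic \(1-\Phi(x)\sim\phi(x)/x\) and the growth \(v_{n}\sim\sqrt{2\ln n}\) in (36) can be illustrated with the Python standard library alone; the constant \(c=1\) below is an arbitrary illustrative choice, and `v_n` is a hypothetical helper solving \(n(1-\Phi(v_{n}))=c\) by bisection.

```python
import math

def normal_tail(x):
    """1 - Phi(x) for the standard normal, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mills(x):
    """Asymptotic approximation phi(x)/x of the normal tail."""
    return math.exp(-x * x / 2.0) / (x * math.sqrt(2.0 * math.pi))

# 1 - Phi(x) ~ phi(x)/x: the ratio tends to 1 as x grows.
for x in (2.0, 4.0, 8.0):
    print(x, normal_tail(x) / mills(x))

def v_n(n, c=1.0):
    """Solve n * (1 - Phi(v)) = c for v by bisection (c = 1 is illustrative)."""
    lo, hi = 0.0, 40.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if n * normal_tail(mid) > c:
            lo = mid
        else:
            hi = mid
    return lo

# v_n / sqrt(2 ln n) creeps toward 1, reflecting v_n ~ sqrt(2 ln n).
for n in (10**3, 10**6, 10**9):
    print(n, v_n(n) / math.sqrt(2.0 * math.log(n)))
```

The slow convergence of the second ratio is expected: the relative error in \(v_{n}/\sqrt{2\ln n}\) decays only logarithmically in n.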

By (7) and (36),

$$\begin{aligned} H_{1} \leq&\sum_{i=1} ^{k}\sum _{i+1\leq j\leq l, j-i\leq l^{\lambda}}\rho_{j-i}\exp \biggl(- \frac{v_{k}^{2}+v_{l}^{2}}{2(1+\delta)} \biggr) \leq\sum_{i=1} ^{k}\sum_{1\leq s\leq l^{\lambda}}\rho_{s}\exp \biggl(-\frac{v_{k}^{2}+v_{l}^{2}}{2(1+\delta)} \biggr) \\ \leq&kl^{\lambda}\frac{(\ln k)^{1/2(1+\delta)}}{k^{1/(1+\delta)}} \frac{(\ln l)^{1/2(1+\delta)}}{l^{1/(1+\delta)}}\leq \frac{(\ln l)^{1/(1+\delta)}}{l^{2/(1+\delta)-1-\lambda}} \\ \leq&\frac{1}{l^{\gamma}} \end{aligned}$$
(37)

for \(0<\gamma<2/(1+\delta)-1-\lambda\).

Setting \(\sigma_{j}=\sup_{i\geq j}\rho_{i}\), by (7) and (25),

$$\sigma_{l^{\lambda}}=\sup_{i\geq l^{\lambda}}\rho_{i}\ll\sup _{i\geq l^{\lambda}}\frac{1}{\ln i(\ln D_{i})^{1+\varepsilon}}=\frac{1}{\ln l^{\lambda}(\ln D_{l^{\lambda}})^{1+\varepsilon}}\ll \frac{1}{\ln l(\ln D_{l})^{1+\varepsilon}} $$

and

$$\sigma_{l^{\lambda}}v_{k}v_{l}\ll\frac{1}{(\ln D_{l})^{1+\varepsilon}}\quad \mbox{for all } 1\leq k\leq l. $$

This, combined with (36), shows

$$\begin{aligned} H_{2} \leq&\sum_{i=1} ^{k}\sum _{i+1\leq j\leq l, j-i> l^{\lambda}}\rho_{j-i}\exp \biggl(- \frac{v_{k}^{2}+v_{l}^{2}}{2(1+\rho _{j-i})} \biggr) \\ \leq&\sum_{i=1} ^{k}\sum _{l^{\lambda}\leq s\leq l}\rho_{s}\exp \biggl(-\frac{v_{k}^{2}+v_{l}^{2}}{2} \biggr) \exp \biggl(\frac{\sigma_{l^{\lambda}}(v_{k}^{2}+v_{l}^{2})}{2} \biggr) \\ \ll&kl\sigma_{l^{\lambda}}\frac{v_{k}}{k}\frac{v_{l}}{l}\ll \frac{1}{(\ln D_{l})^{1+\varepsilon}}. \end{aligned}$$

This, together with (35) and (37) implies that (14) holds.

It is well known that \(\mathbb{P}(B)-\mathbb{P}(AB)\leq \mathbb{P}(\bar{A})\) for any events A and B. Then, using the condition that \(n(1-\Phi(\lambda_{n}))\) is bounded, for \(1\leq k< l\), we get

$$\begin{aligned}& \mathbb{E}\Biggl\vert I \Biggl(\bigcap_{i=1}^{l}(X_{i} \leq u_{li}) \Biggr)-I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}) \Biggr) \Biggr\vert \\& \quad =\mathbb{P} \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}) \Biggr)-\mathbb{P} \Biggl(\bigcap _{i=1}^{l}(X_{i}\leq u_{li}) \Biggr) \\& \quad \leq \mathbb{P}(X_{i}> u_{li} \mbox{ for some } 1\leq i\leq k)\leq\sum_{i=1}^{k} \mathbb{P}(X_{i}> u_{li}) \\& \quad \leq k\bigl(1-\Phi( \lambda_{l})\bigr)=\frac{k}{l}l\bigl(1-\Phi( \lambda_{l})\bigr) \\& \quad \ll \frac{k}{l}. \end{aligned}$$

Hence, (15) holds.

Now we prove (16). By (14), applying the normal comparison lemma, Lemma 3.1, for \(1\leq k< l\),

$$\begin{aligned}& \Biggl\vert \operatorname{Cov} \Biggl(I \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}) \Biggr), I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}) \Biggr) \Biggr) \Biggr\vert \\& \quad = \bigl\vert \mathbb{P} (X_{1}\leq u_{k1},\ldots, X_{k}\leq u_{kk}, X_{k+1}\leq u_{l,k+1}, \ldots, X_{l}\leq u_{ll} ) \\& \qquad {} -\mathbb{P} (X_{1}\leq u_{k1},\ldots, X_{k}\leq u_{kk} )\mathbb{P} (X_{k+1}\leq u_{l,k+1},\ldots, X_{l}\leq u_{ll} )\bigr\vert \\& \quad \ll \sum_{i=1} ^{k}\sum _{j=k+1} ^{l}|r_{ij}|\exp \biggl(- \frac {u_{ki}^{2}+u_{lj}^{2}}{2(1+|r_{ij}|)} \biggr) \\& \quad \ll \frac{1}{l^{\gamma}}+\frac{1}{\ln l(\ln D_{l})^{1+\varepsilon}}. \end{aligned}$$

Hence, (16) holds. □

Proof of Lemma 3.3

Notice, for \(1\leq k< l\),

$$\begin{aligned}& \mathbb{E}\Biggl\vert I \Biggl(\bigcap_{i=1}^{l}(X_{i} \leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr)-I \Biggl(\bigcap _{i=k+1}^{l}(X_{i}\leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr) \Biggr\vert \\& \quad = \mathbb{P} \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr)-\mathbb{P} \Biggl(\bigcap _{i=1}^{l}(X_{i}\leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr) \\& \quad \leq \Biggl\vert \mathbb{P} \Biggl(\bigcap_{i=1}^{l}(X_{i} \leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr)-\mathbb{P} \Biggl(\bigcap _{i=1}^{l}(X_{i}\leq u_{li}) \Biggr)\mathbb{P} \biggl( \frac{S_{l}}{\sigma_{l}}\leq y \biggr) \Biggr\vert \\& \qquad {} +\Biggl\vert \mathbb{P} \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr)-\mathbb{P} \Biggl(\bigcap _{i=k+1}^{l}(X_{i}\leq u_{li}) \Biggr)\mathbb{P} \biggl( \frac{S_{l}}{\sigma_{l}}\leq y \biggr) \Biggr\vert \\& \qquad {} +\mathbb{P} \biggl( \frac{S_{l}}{\sigma_{l}}\leq y \biggr) \Biggl(\mathbb{P} \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}) \Biggr)-\mathbb{P} \Biggl(\bigcap _{i=1}^{l}(X_{i}\leq u_{li}) \Biggr) \Biggr) \\& \quad := H_{3}+H_{4}+H_{5}. \end{aligned}$$
(38)

Using \(l-2|\sum_{1\leq i< j\leq l}r_{ij}|\leq\sigma^{2}_{l}\leq l+2|\sum_{1\leq i<j\leq l}r_{ij}|\) and (10), there exist constants \(c_{i}>0\), \(i=1,2\), such that

$$ c_{1} l\leq\sigma^{2}_{l}\leq c_{2}l. $$
(39)
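As a numerical aside on (39): under summable correlations, \(\sigma^{2}_{l}\) indeed grows linearly in l. The sketch below uses the illustrative stationary covariance \(r_{ij}=\rho^{|i-j|}\) (an assumption for illustration only, not the covariance structure of the paper), for which \(\sigma^{2}_{l}/l\rightarrow(1+\rho)/(1-\rho)\).

```python
def sigma2(l, rho=0.3):
    """Var(S_l) for a stationary Gaussian sequence with r_{ij} = rho^{|i-j|}."""
    s = float(l)  # diagonal terms: Var(X_i) = 1 for each i
    for d in range(1, l):
        # 2 * (l - d) off-diagonal pairs at lag d, each contributing rho^d
        s += 2.0 * (l - d) * rho ** d
    return s

# sigma_l^2 / l approaches (1 + rho) / (1 - rho), so c1 * l <= sigma_l^2 <= c2 * l.
for l in (10, 100, 1000):
    print(l, sigma2(l) / l)
```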

Hence, using (11), for \(1\leq i\leq l\leq n\),

$$ \biggl\vert \operatorname{Cov} \biggl(X_{i}, \frac{S_{l}}{\sigma_{l}} \biggr)\biggr\vert \leq\frac{1}{\sigma_{l}}\sum_{j=1}^{l}|r_{ij}| \ll\frac{\ln^{1/2}l}{l^{1/2}\ln D_{l}} $$
(40)

and

$$ \biggl\vert \operatorname{Cov} \biggl(\frac{S_{k}}{\sigma_{k}}, \frac{S_{l}}{\sigma_{l}} \biggr)\biggr\vert \leq \frac{1}{\sigma_{k}\sigma_{l}}\sum_{i=1} ^{k}\sum_{j=1} ^{l}|r_{ij}| \ll \frac{k^{1/2}}{l^{1/2}}\frac{(\ln l)^{1/2}}{\ln D_{l}}. $$
(41)

Noting that \(\ln l\) and \(\ln D_{l}\) are slowly varying functions at infinity, (40) and (41) imply that there exists \(0<\mu<1\) such that, for sufficiently large l,

$$ \max_{1\leq i\leq l}\biggl\vert \operatorname{Cov} \biggl(X_{i}, \frac{S_{l}}{\sigma_{l}} \biggr)\biggr\vert \leq\mu $$

and

$$ \max_{1\leq k< l/\ln l}\biggl\vert \operatorname{Cov} \biggl( \frac{S_{k}}{\sigma_{k}}, \frac{S_{l}}{\sigma_{l}} \biggr)\biggr\vert \leq \mu. $$

Combining (36), (40), and the normal comparison lemma, Lemma 3.1, for \(i=3, 4\),

$$\begin{aligned} H_{i} \ll&\sum_{i=1}^{l}\biggl\vert \operatorname{Cov} \biggl(X_{i}, \frac{S_{l}}{\sigma_{l}} \biggr) \biggr\vert \exp \biggl(-\frac{u^{2}_{li}+y^{2}}{2(1+\mu )} \biggr) \\ \leq&\sum _{i=1}^{l}\biggl\vert \operatorname{Cov} \biggl(X_{i}, \frac{S_{l}}{\sigma_{l}} \biggr)\biggr\vert \exp \biggl(- \frac{v^{2}_{l}}{2(1+\mu )} \biggr) \\ \leq&\frac{(\ln l)^{1/2+1/2(1+\mu)}}{l^{1/(1+\mu)-1/2}\ln D_{l}}\leq\frac{1}{l^{\gamma_{1}}} \end{aligned}$$
(42)

for \(0<\gamma_{1}<1/(1+\mu)-1/2\).

By the proof of (15), we have \(H_{5}\ll k/l\). This, combined with (38) and (42), implies that (17) holds for \(\gamma=\min(1, \gamma_{1})>0\).

Now we prove (18). Again applying the normal comparison lemma, Lemma 3.1, for \(1\leq k< l/\ln l\),

$$\begin{aligned}& \Biggl\vert \operatorname{Cov} \Biggl(I \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr), I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr) \Biggr) \Biggr\vert \\& \quad = \biggl\vert \mathbb{P} \biggl(X_{1}\leq u_{k1}, \ldots, X_{k}\leq u_{kk}, \frac{S_{k}}{\sigma_{k}}\leq y, X_{k+1}\leq u_{l,k+1},\ldots, X_{l}\leq u_{ll}, \frac{S_{l}}{\sigma_{l}}\leq y \biggr) \\& \qquad {} -\mathbb{P} \biggl(X_{1}\leq u_{k1},\ldots, X_{k}\leq u_{kk}, \frac{S_{k}}{\sigma_{k}}\leq y \biggr) \\& \qquad {}\times\mathbb{P} \biggl(X_{k+1}\leq u_{l,k+1},\ldots, X_{l}\leq u_{ll}, \frac{S_{l}}{\sigma_{l}}\leq y \biggr)\biggr\vert \\& \quad \ll \sum_{i=1} ^{k}\sum _{j=k+1} ^{l}|r_{ij}|\exp \biggl(- \frac {u_{ki}^{2}+u_{lj}^{2}}{2(1+|r_{ij}|)} \biggr) \\& \qquad {}+ \sum_{i=1} ^{k} \biggl\vert \operatorname{Cov} \biggl(X_{i}, \frac{S_{l}}{\sigma_{l}} \biggr)\biggr\vert \exp \biggl(-\frac {u_{ki}^{2}+y^{2}}{2(1+|\operatorname{Cov}(X_{i}, S_{l}/\sigma_{l})|)} \biggr) \\& \qquad {} +\sum_{j=k+1} ^{l}\biggl\vert \operatorname{Cov} \biggl(X_{j}, \frac{S_{k}}{\sigma_{k}} \biggr)\biggr\vert \exp \biggl(-\frac {u_{lj}^{2}+y^{2}}{2(1+|\operatorname{Cov}(X_{j}, S_{k}/\sigma_{k})|)} \biggr) \\& \qquad {}+\biggl\vert \operatorname{Cov} \biggl( \frac{S_{k}}{\sigma_{k}}, \frac{S_{l}}{\sigma_{l}} \biggr)\biggr\vert \exp \biggl(- \frac{y^{2}}{1+|\operatorname{Cov}(S_{k}/\sigma_{k}, S_{l}/\sigma_{l})|} \biggr) \\& \quad := H_{6}+H_{7}+H_{8}+H_{9}. \end{aligned}$$

By (11), (39) to (41),

$$\begin{aligned} H_{6}&\leq\sum_{i=1} ^{k}\sum _{j=1} ^{l}|r_{ij}|\exp \biggl(- \frac {v_{k}^{2}+v_{l}^{2}}{2(1+\delta)} \biggr) \\ &\ll k \frac{(\ln l)^{1/2}}{\ln D_{l}}\frac{(\ln k)^{1/2(1+\delta)}}{k^{1/(1+\delta)}} \frac{(\ln l)^{1/2(1+\delta)}}{l^{1/(1+\delta)}} \\ &\leq\frac{(\ln l)^{1/2+1/(1+\delta)}}{l^{2/(1+\delta)-1}\ln D_{l}} \leq\frac{1}{l^{\gamma_{2}}} \end{aligned}$$

for \(0<\gamma_{2}<2/(1+\delta)-1\),

$$\begin{aligned}& H_{7}\ll\sum_{i=1} ^{k}\biggl\vert \operatorname{Cov} \biggl(X_{i}, \frac{S_{l}}{\sigma_{l}} \biggr) \biggr\vert \exp \biggl(-\frac{v_{k}^{2}}{2(1+\mu )} \biggr) \\& \hphantom{H_{7}}\ll k \frac{(\ln l)^{1/2}}{l^{1/2}\ln D_{l}} \frac{(\ln k)^{1/2(1+\mu)}}{k^{1/(1+\mu)}}\leq\frac{(\ln l)^{1/2+1/2(1+\mu)}}{l^{1/(1+\mu)-1/2}\ln D_{l}}\leq\frac{1}{l^{\gamma_{1}}}, \\& H_{8}\leq\sum_{j=k+1} ^{l}\biggl\vert \operatorname{Cov} \biggl(X_{j}, \frac{S_{k}}{\sigma_{k}} \biggr) \biggr\vert \exp \biggl(-\frac{v_{l}^{2}}{2(1+\mu )} \biggr) \\& \hphantom{H_{8}}\leq\frac{1}{\sigma_{k}}\sum _{i=1} ^{k}\sum _{j=1} ^{l}|r_{ij}|\exp \biggl(- \frac{v_{l}^{2}}{2(1+\mu)} \biggr) \\& \hphantom{H_{8}}\ll\frac{k}{k^{1/2}}\frac{(\ln l)^{1/2}}{\ln D_{l}} \frac{(\ln l)^{1/2(1+\mu)}}{l^{1/(1+\mu)}} \leq\frac{1}{l^{\gamma_{1}}} \end{aligned}$$

and

$$H_{9}\leq\biggl\vert \operatorname{Cov} \biggl(\frac{S_{k}}{\sigma_{k}}, \frac{S_{l}}{\sigma_{l}} \biggr)\biggr\vert \ll \frac{k^{1/2}}{l^{1/2}}\frac{(\ln l)^{1/2}}{\ln D_{l}}. $$

Hence (18) follows for \(\gamma=\min(\gamma_{1}, \gamma_{2})>0\) and thus (17) and (18) hold for \(\gamma=\min(\gamma_{1}, \gamma_{2}, 1)>0\). □

Proof of Lemma 3.4

On applying (14), it follows from the normal comparison lemma, Lemma 3.1, that

$$\begin{aligned} \Biggl\vert \mathbb{P} \Biggl(\bigcap_{i=1}^{n}(X_{i} \leq u_{ni}) \Biggr)-\prod_{i=1}^{n} \Phi(u_{ni})\Biggr\vert \ll&\sum_{1\leq i< j\leq n}|r_{ij}| \exp \biggl(-\frac{u_{ni}^{2}+u_{nj}^{2}}{2(1+|r_{ij}|)} \biggr) \\ \ll&\frac{1}{n^{\gamma}}+\frac{1}{(\ln D_{n})^{1+\varepsilon}} \rightarrow0. \end{aligned}$$

Hence, by \(\sum_{i=1}^{n}(1-\Phi(u_{ni}))\rightarrow\tau\), we get

$$\begin{aligned} \lim_{n\rightarrow\infty}\mathbb{P} \Biggl(\bigcap _{i=1}^{n}(X_{i}\leq u_{ni}) \Biggr) =&\lim_{n\rightarrow\infty}\prod_{i=1}^{n} \Phi (u_{ni})=\lim_{n\rightarrow\infty}\exp \Biggl(\sum _{i=1}^{n}\ln \bigl(\Phi(u_{ni})\bigr) \Biggr) \\ =&\lim_{n\rightarrow\infty}\exp \Biggl(-\sum _{i=1}^{n}\bigl(1- \Phi(u_{ni})\bigr) \Biggr)=\mathrm{e}^{-\tau}. \end{aligned}$$

That is, (19) holds. By the argument used for \(H_{3}\), together with (19), (20) follows from

$$\begin{aligned} \lim_{n\rightarrow\infty}\mathbb{P} \Biggl(\bigcap _{i=1}^{n}(X_{i}\leq u_{ni}), \frac{S_{n}}{\sigma_{n}}\leq y \Biggr) =& \lim_{n\rightarrow\infty}\mathbb{P} \Biggl( \bigcap_{i=1}^{n}(X_{i}\leq u_{ni}) \Biggr) \lim_{n\rightarrow\infty}\mathbb{P} \biggl( \frac{S_{n}}{\sigma_{n}}\leq y \biggr) \\ =&\mathrm{e}^{-\tau}\Phi(y). \end{aligned}$$

 □
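A small numerical illustration of the limit in (19) (an aside, not part of the proof): in the independent case the limit reduces to \(\Phi(u_{n})^{n}\rightarrow \mathrm{e}^{-\tau}\) whenever the levels satisfy \(n(1-\Phi(u_{n}))=\tau\). The helper `level` below is a hypothetical bisection solver for \(u_{n}\); the value \(\tau=1.5\) is an arbitrary choice.

```python
import math

def normal_tail(x):
    """1 - Phi(x) for the standard normal."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def level(n, tau):
    """Solve n * (1 - Phi(u_n)) = tau for u_n by bisection."""
    lo, hi = -10.0, 40.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if n * normal_tail(mid) > tau:
            lo = mid
        else:
            hi = mid
    return lo

tau = 1.5
for n in (10**2, 10**4, 10**6):
    u = level(n, tau)
    # For i.i.d. N(0,1): P(M_n <= u_n) = Phi(u_n)^n = (1 - tau/n)^n -> exp(-tau)
    p = (1.0 - normal_tail(u)) ** n
    print(n, p, math.exp(-tau))
```

The convergence here is just \((1-\tau/n)^{n}\rightarrow \mathrm{e}^{-\tau}\); Lemma 3.4 shows that the same limit survives under the paper's dependence conditions.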

Rights and permissions

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.


About this article


Cite this article

Wu, Q. Improved results in almost sure central limit theorems for the maxima and partial sums for Gaussian sequences. J Inequal Appl 2015, 109 (2015). https://doi.org/10.1186/s13660-015-0634-3


MSC

  • 60F15

Keywords

  • standardized Gaussian sequence
  • maxima
  • partial sums and maxima
  • almost sure central limit theorem