# Strong law for linear processes

## Abstract

In this paper, the authors obtain strong approximations, laws of the single logarithm, functional laws of the single logarithm, and the limit law of the single logarithm for linear processes generated by arrays of independent and identically distributed random variables, and show that these results are all equivalent under the same moment conditions.

## 1 Introduction and main result

Throughout the paper, we assume that $$\{\varepsilon_{i},-\infty< i<\infty\}$$ (or $$\{\varepsilon_{n,i},-\infty< i<\infty,n\geq1\}$$) is a sequence (or an array) of independent and identically distributed (i.i.d.) random variables with the same distribution as the random variable $$\varepsilon$$, and that $$\{a_{i},-\infty< i<\infty\}$$ is a sequence of real constants with

$$\sum^{\infty}_{i=-\infty}|a_{i}|< \infty,\quad a=\sum^{\infty}_{i=-\infty}a_{i}\neq0.$$

Define

$$X_{n}=\sum_{i=-\infty}^{\infty}a_{i}\varepsilon_{n-i},\quad n\geq1 \qquad\Biggl(\mbox{or } X_{nk}=\sum_{i=-\infty}^{\infty}a_{i}\varepsilon _{n,k-i}, k\geq1, n\geq1 \Biggr).$$
(1.1)

We call $$\{X_{n},n\geq1\}$$ (or $$\{X_{nk},k\geq1, n\geq1\}$$) the linear process of $$\{\varepsilon_{i},-\infty< i<\infty\}$$ (or $$\{\varepsilon_{n,i},-\infty< i<\infty,n\geq1\}$$). In particular, if $$a_{i}=0$$ for $$i=-1,-2,\ldots$$, we call $$\{X_{n},n\geq1\}$$ (or $$\{X_{nk},k\geq1, n\geq1\}$$) the one-sided linear process. Denote by $$\mathcal{K}$$ the set of all absolutely continuous functions f on $$[0,1]$$ with $$f(0)=0$$ and $$\int_{0}^{1}(f^{\prime}(x))^{2}\,dx\leq1$$; it is well known that $$\mathcal{K}$$ is a compact, convex, and symmetric subset of the Banach space $$C[0,1]$$, the space of real continuous functions on $$[0,1]$$ equipped with the uniform norm.

For one-sided linear processes, Phillips and Solo [1] established the strong law of large numbers and the law of the iterated logarithm, and Wang et al. [2] established a strong approximation by a Gaussian process. Lu and Qiu [3] obtained the following functional law of the iterated logarithm.

### Theorem A

Suppose that $$E\varepsilon=0$$, $$E\varepsilon^{2}=1$$, and $$\sum^{\infty}_{i=1} i|a_{i}|<\infty$$. Then, with probability 1, the stochastic process sequence $$\{\frac{S(nt)}{\sqrt{2a^{2}n\log\log n}},0\leq t\leq1,n\geq1\}$$ is relatively compact in the uniform norm and its set of limit points is $$\mathcal{K}$$, that is,

$$\begin{cases} \textstyle\lim_{n\rightarrow\infty}\inf_{f\in\mathcal{K}}\sup_{0\leq t\leq1}|\frac {S(nt)}{\sqrt{2a^{2}n\log\log n}}-f(t)|=0 \quad \textit{a.s.} \quad\textit{and}\\ \textstyle\liminf_{n\rightarrow\infty}\sup_{0\leq t\leq1}|\frac{S(nt)}{\sqrt {2a^{2}n\log\log n}}-f(t)|=0 \quad\textit{a.s.}\quad \textit{for every } f\in\mathcal{K}, \end{cases}$$

where

$$S(nt)=\sum_{j=1}^{[nt]}X_{j}+ \bigl(nt-[nt]\bigr)X_{[nt]+1},\quad 0\leq t\leq1, n\geq1,$$

$$[\cdot]$$ is the greatest integer function, and $$\log x=\ln(\max\{e, x\} )$$ for $$x>0$$.

By the same argument as in Strassen [4], Theorem A implies the classical Hartman-Wintner law of the iterated logarithm (cf. Hartman and Wintner [5])

$$\limsup_{n\rightarrow\infty}\frac{\sum^{n}_{j=1}X_{j}}{\sqrt{2a^{2}n\log\log n}}=1 \quad\mbox{a.s.} \quad \mbox{and}\quad \liminf_{n\rightarrow\infty} \frac{\sum^{n}_{j=1}X_{j}}{\sqrt{2a^{2}n\log\log n}}=-1 \quad \mbox{a.s.}$$

Recently, Tan et al. [6] proved the above-mentioned conclusions without the condition $$\sum_{i=1}^{\infty}i|a_{i}|<\infty$$.

There is a substantial difference between the a.s. limiting behavior of sequences of i.i.d. random variables and that of arrays of i.i.d. random variables. For example, assuming $$E\varepsilon=0$$, $$E\varepsilon^{2}=1$$, and $$E\varepsilon^{4}<\infty$$, Hu and Weber [7] proved the following law of the single logarithm:

$$\limsup_{n\rightarrow\infty}\frac{\sum^{n}_{j=1}\varepsilon_{n,j}}{\sqrt {2n\log n}}=1 \quad\mbox{a.s.} \quad \mbox{and} \quad \liminf_{n\rightarrow\infty}\frac{\sum^{n}_{j=1}\varepsilon_{n,j}}{\sqrt {2n\log n}}=-1 \quad \mbox{a.s.}$$
(1.2)

Hence the classical Hartman-Wintner law of the iterated logarithm (cf. Hartman and Wintner [5]) does not hold for arrays. The above result was improved by Li et al. [8] and, simultaneously and independently, by Qi [9], who proved that

$$E\varepsilon=0,\qquad E\varepsilon^{2}=1, \quad\mbox{and}\quad E\frac{|\varepsilon |^{4}}{\log^{2}|\varepsilon|}< \infty$$

are necessary and sufficient for (1.2) to hold.
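As a numerical aside (ours, not part of the paper), the array phenomenon behind (1.2) can be illustrated by drawing, for each n, a fresh row of n i.i.d. standard normal variables and tracking the running maximum of the normalized row sums. Over a finite range of n this running maximum is only a crude proxy for the almost sure limsup, which equals 1.

```python
import numpy as np

# Illustrative sketch of the law of the single logarithm (1.2): each n gets a
# *fresh* row of n i.i.d. N(0,1) variables (an array, not one fixed sequence),
# and we track the running maximum of |sum of row n| / sqrt(2 n log n).
# For finitely many n this is only a rough finite-sample proxy for the
# theoretical limsup, which is 1 almost surely.

rng = np.random.default_rng(12345)

def running_max_ratio(n_max: int) -> float:
    best = 0.0
    for n in range(10, n_max + 1):
        row_sum = rng.standard_normal(n).sum()   # sum of row n of the array
        best = max(best, abs(row_sum) / np.sqrt(2 * n * np.log(n)))
    return best

print(running_max_ratio(3000))
```

The printed value fluctuates with the seed and the range of n; the point is only that fresh rows push the normalized sums to the √(2n log n) scale, not the iterated-logarithm scale.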

Afterwards, using the strong approximations for partial sums of i.i.d. random variables, Li et al. [10] obtained more general results as follows.

### Theorem B

Let $$\alpha>0$$. On a suitable probability space, one can redefine $$\{\varepsilon, \varepsilon_{n,i}, 1\leq i\leq[n^{\alpha}],n\geq1\}$$ without changing its distribution, and there exist a sequence of independent real standard Wiener processes $$\{ W_{n}(t),0\leq t\leq1, n\geq1\}$$ and a sequence of independent standard normal random variables $$\{Z_{n},n\geq1\}$$ such that the following statements are equivalent:

\begin{aligned}& E\varepsilon=0,\qquad E\varepsilon^{2}=1, \quad\textit{and}\quad E \frac{|\varepsilon |^{2p}}{\log^{p}|\varepsilon|}< \infty, \end{aligned}
(1.3)
\begin{aligned}& \limsup_{n\rightarrow\infty}\frac{\sum_{j=1}^{[n^{\alpha}]}\varepsilon _{n,j}}{\sqrt{2n^{\alpha}\log n}}=1 \quad\textit{a.s.} \quad \textit{and}\quad \liminf_{n\rightarrow\infty}\frac{\sum_{j=1}^{[n^{\alpha}]}\varepsilon _{n,j}}{\sqrt{2n^{\alpha}\log n}}=-1 \quad\textit{a.s.}, \end{aligned}
(1.4)
\begin{aligned}& \begin{cases} \textstyle\lim_{n\rightarrow\infty}\inf_{f\in{\mathcal{K}}} \sup_{0\leq t\leq1}\vert \frac{U_{n}(t)}{(2n^{\alpha}\log n)^{1/2}}-f(t)\vert =0 \quad\textit{a.s. } \quad\textit{and} \\ \textstyle\liminf_{n\rightarrow\infty}\sup_{0\leq t\leq1}\vert \frac{U_{n}(t)}{(2n^{\alpha}\log n)^{1/2}}-f(t)\vert =0 \quad\textit{a.s.}\quad \textit{for every } f\in{\mathcal{K}}, \end{cases} \end{aligned}
(1.5)
\begin{aligned}& \biggl\vert \frac{\sum_{j=1}^{[n^{\alpha}]}\varepsilon_{n,j}}{n^{\alpha /2}}-Z_{n}\biggr\vert =o\bigl((\log n)^{1/2}\bigr)\quad \textit{a.s.}, \end{aligned}
(1.6)
\begin{aligned}& \sup_{0\leq t\leq1}\biggl\vert \frac{U_{n}(t)}{n^{\alpha/2}}-W_{n}(t) \biggr\vert =o\bigl((\log n)^{1/2}\bigr) \quad\textit{a.s.}, \end{aligned}
(1.7)

where $$p=1+1/\alpha$$ and

$$U_{n}(t)=\sum_{j=1}^{[[n^{\alpha}]t]} \varepsilon_{n,j}+\bigl(\bigl[n^{\alpha}\bigr]t-\bigl[ \bigl[n^{\alpha}\bigr]t\bigr]\bigr)\varepsilon_{n,[[n^{\alpha}]t]+1},\quad 0\leq t \leq1, n\geq1.$$
(1.8)
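The polygonal process (1.8) is simply linear interpolation of the row partial sums, rescaled to $$[0,1]$$. As a concrete illustration (the function name and the list-based indexing are ours, a didactic sketch only):

```python
import math

def U_n(t: float, eps_row, alpha: float, n: int) -> float:
    """Polygonal partial-sum process of (1.8), didactic sketch.

    eps_row[j] plays the role of eps_{n, j+1}.  At t = 1 the interpolation
    weight [n^alpha]t - [[n^alpha]t] vanishes, so the out-of-range term
    eps_{n, [n^alpha]+1} is never actually needed.
    """
    N = math.floor(n ** alpha)          # [n^alpha]
    x = N * t
    k = math.floor(x)                   # [[n^alpha] t]
    partial = sum(eps_row[:k])          # sum_{j=1}^{k} eps_{n,j}
    frac = x - k                        # interpolation weight in [0, 1)
    nxt = eps_row[k] if k < len(eps_row) else 0.0
    return partial + frac * nxt

row = [1.0, -2.0, 3.0, 4.0]             # a toy row with [n^alpha] = 4
print(U_n(0.0, row, 1.0, 4), U_n(0.5, row, 1.0, 4), U_n(1.0, row, 1.0, 4))
```

Note that $$U_{n}(0)=0$$ and $$U_{n}(1)$$ recovers the full row sum, while between the grid points $$t=k/[n^{\alpha}]$$ the process interpolates linearly.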

### Remark 1.1

Theorem B also holds if (1.4) is replaced by

$$\limsup_{n\rightarrow\infty} \frac{\max_{1\leq m\leq[n^{\alpha}]}|\sum_{j=1}^{m}\varepsilon_{n,j}|}{\sqrt{2n^{\alpha}\log n}}=1 \quad\mbox{a.s.}$$
(1.9)

In fact, by the equivalence of (1.3) and (1.4), we have $$E\varepsilon=0$$ and $$E\varepsilon^{2}=1$$. By Markov's inequality, for fixed $$\delta>0$$, when n is large enough,

$$\max_{1\leq m\leq[n^{\alpha}]-1}P\Biggl\{ \Biggl|\sum^{[n^{\alpha}]}_{j=m+1} \varepsilon _{n,j}\Biggr|>(\delta/2)\sqrt{2n^{\alpha}\log n}\Biggr\} \leq2\delta^{-2}\bigl(E|\varepsilon|^{2}\bigr) (\log n)^{-1}< 1/2.$$

Hence, by Ottaviani's inequality (see Lemma 6.2 in Ledoux and Talagrand [11]), when n is large enough,

\begin{aligned} &P\Biggl\{ \max_{1\leq m\leq[n^{\alpha}]}\Biggl|\sum^{m}_{j=1} \varepsilon _{n,j}\Biggr|>(1+\delta)\sqrt{2n^{\alpha}\log n}\Biggr\} \\ &\quad=P\Biggl\{ \max_{1\leq m\leq[n^{\alpha}]}\Biggl|\sum^{m}_{j=1} \varepsilon _{n,j}\Biggr|>(1+\delta/2)\sqrt{2n^{\alpha}\log n}+(\delta/2) \sqrt{2n^{\alpha}\log n}\Biggr\} \\ &\quad\leq\frac{P\{|\sum^{[n^{\alpha}]}_{j=1}\varepsilon_{n,j}|>(1+\delta /2)\sqrt{2n^{\alpha}\log n}\}}{ 1-\max_{1\leq m\leq[n^{\alpha}]-1}P\{|\sum^{[n^{\alpha}]}_{j=m+1}\varepsilon_{n,j}|>(\delta/2)\sqrt{2n^{\alpha}\log n}\}}\\ &\quad\leq2P\Biggl\{ \Biggl|\sum^{[n^{\alpha}]}_{j=1} \varepsilon_{n,j}\Biggr|>(1+\delta/2)\sqrt {2n^{\alpha}\log n}\Biggr\} . \end{aligned}
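For reference, the form of Ottaviani's inequality used above (see Lemma 6.2 in Ledoux and Talagrand [11]; we state it from memory, so the reader should consult the cited source for the exact formulation): for independent random variables with partial sums $$S_{k}$$, $$1\leq k\leq N$$, and $$s,t>0$$,

```latex
P\Bigl\{\max_{1\leq k\leq N}|S_{k}|>s+t\Bigr\}
\leq\frac{P\{|S_{N}|>s\}}
{1-\max_{1\leq k\leq N}P\{|S_{N}-S_{k}|>t\}},
```

applied here with $$s=(1+\delta/2)\sqrt{2n^{\alpha}\log n}$$ and $$t=(\delta/2)\sqrt{2n^{\alpha}\log n}$$.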

By the second Borel-Cantelli lemma (the row sums being independent), (1.4) implies that

$$\sum^{\infty}_{n=1}P\Biggl\{ \Biggl|\sum ^{[n^{\alpha}]}_{j=1}\varepsilon_{n,j}\Biggr|>(1+\delta /2) \sqrt{2n^{\alpha}\log n}\Biggr\} < \infty.$$

Hence

$$\sum^{\infty}_{n=1}P\Biggl\{ \max _{1\leq m\leq[n^{\alpha}]}\Biggl|\sum^{m}_{j=1} \varepsilon_{n,j}\Biggr|>(1+\delta)\sqrt{2n^{\alpha}\log n}\Biggr\} < \infty.$$

By the Borel-Cantelli lemma,

$$\limsup_{n\rightarrow\infty}\frac{\max_{1\leq m\leq[n^{\alpha}]} |\sum_{j=1}^{m}\varepsilon_{n,j}|}{\sqrt{2n^{\alpha}\log n}}\leq1+\delta \quad\mbox{a.s.}$$

Since $$\delta>0$$ is arbitrary and

$$\limsup_{n\rightarrow\infty}\frac{\max_{1\leq m\leq[n^{\alpha}]} |\sum_{j=1}^{m}\varepsilon_{n,j}|}{\sqrt{2n^{\alpha}\log n}} \geq\limsup _{n\rightarrow\infty}\frac{|\sum_{j=1}^{[n^{\alpha}]}\varepsilon_{n,j}|}{\sqrt{2n^{\alpha}\log n}}=1 \quad\mbox{a.s.},$$

(1.9) holds.

Very few results are known for moving average processes generated by arrays of i.i.d. random variables. In this paper, we extend Theorem B to linear processes as follows.

### Theorem 1.1

Let $$\alpha>0$$. On a suitable probability space, one can redefine $$\{\varepsilon, \varepsilon_{n,i},-\infty< i<\infty, n\geq1\}$$ without changing its distribution, and there exist a sequence of independent real standard Wiener processes $$\{ W_{n}(t),0\leq t\leq1, n\geq1\}$$ and a sequence of independent standard normal random variables $$\{Z_{n},n\geq1\}$$ such that the following statements are equivalent:

\begin{aligned}& E\varepsilon=0,\qquad E\varepsilon^{2}=1, \quad\textit{and}\quad E\frac{|\varepsilon |^{2p}}{\log^{p}|\varepsilon|}< \infty, \end{aligned}
(1.10)
\begin{aligned}& \limsup_{n\rightarrow\infty}\frac{\sum_{j=1}^{[n^{\alpha}]}X_{n,j}}{\sqrt {2a^{2}n^{\alpha}\log n}}=1 \quad\textit{a.s.} \quad\textit{and}\quad \liminf_{n\rightarrow\infty}\frac{\sum_{j=1}^{[n^{\alpha}]}X_{n,j}}{\sqrt {2a^{2}n^{\alpha}\log n}}=-1\quad\textit{a.s.}, \end{aligned}
(1.11)
\begin{aligned}& \begin{cases} \textstyle\lim_{n\rightarrow\infty}\inf_{f\in{\mathcal{K}}}\sup_{0\leq t\leq1}\vert \frac{S_{n}(t)}{(2a^{2}n^{\alpha}\log n)^{1/2}}-f(t)\vert =0 \quad\textit{a.s.}\quad \textit{and}\\ \textstyle\liminf_{n\rightarrow\infty}\sup_{0\leq t\leq1}\vert \frac{S_{n}(t)}{(2a^{2}n^{\alpha}\log n)^{1/2}}-f(t)\vert =0 \quad\textit{a.s.}\quad \textit{for every } f\in{\mathcal{K}}, \end{cases} \end{aligned}
(1.12)
\begin{aligned}& \biggl\vert \frac{\sum_{j=1}^{[n^{\alpha}]}X_{n,j}}{n^{\alpha/2}}-aZ_{n}\biggr\vert =o\bigl((\log n)^{1/2}\bigr) \quad\textit{a.s.}, \end{aligned}
(1.13)
\begin{aligned}& \sup_{0\leq t\leq1}\biggl\vert \frac{S_{n}(t)}{n^{\alpha/2}}-aW_{n}(t) \biggr\vert =o\bigl((\log n)^{1/2}\bigr) \quad\textit{a.s.}, \end{aligned}
(1.14)
\begin{aligned}& \lim_{n\rightarrow\infty}\frac{1}{\sqrt{2a^{2}\log n}} \max_{1\leq m\leq n} \frac{|\sum^{[m^{\alpha}]}_{j=1}X_{m,j}|}{m^{\alpha /2}}=1\quad \textit{a.s.}, \end{aligned}
(1.15)

where $$p=1+1/\alpha$$ and

$$S_{n}(t)=\sum_{j=1}^{[[n^{\alpha}]t]}X_{n,j}+ \bigl(\bigl[n^{\alpha}\bigr]t-\bigl[\bigl[n^{\alpha}\bigr]t\bigr] \bigr)X_{n,[[n^{\alpha}]t]+1},\quad 0\leq t\leq1, n\geq1.$$

### Remark 1.2

We call (1.15) the limit law of the single logarithm; it is an analog of the limit law of the iterated logarithm, which is due to Chen [12] and was developed further by Li and Liang [13].

## 2 Proof of main result

To prove our main result, the following lemmas are needed.

### Lemma 2.1

Let $$\alpha>0$$ and $$\{Y,Y_{n,j},1\leq j\leq[n^{\alpha}],n\geq1\}$$ be an array of identically distributed random variables (without independence assumption) with $$EY=0$$ and $$E\frac{|Y|^{2p}}{\log^{p}|Y|}<\infty$$, $$p=1+1/\alpha$$. Then

$$\frac{1}{\sqrt{2n^{\alpha}\log n}}\max_{1\leq j\leq[n^{\alpha}]}|Y_{n,j}|\rightarrow0 \quad\textit{a.s.}$$
(2.1)

### Proof

Note that $$E\frac{|Y|^{2p}}{\log^{p}|Y|}<\infty$$ is equivalent to

$$\sum^{\infty}_{n=1}n^{\alpha}P\bigl\{ |Y|>\delta\sqrt{2n^{\alpha}\log n}\bigr\} < \infty, \quad\forall \delta>0.$$
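This equivalence is a standard moment-series computation; a sketch of the calibration (constants suppressed): the series compares with the integral

```latex
\int_{2}^{\infty}x^{\alpha}
P\bigl\{|Y|>\delta\sqrt{2}\,x^{\alpha/2}(\log x)^{1/2}\bigr\}\,dx ,
```

and under the change of variable $$y=x^{\alpha/2}(\log x)^{1/2}$$ one has $$y^{2}=x^{\alpha}\log x$$, hence $$x^{\alpha+1}\asymp y^{2p}/(\log y)^{p}$$ with $$p=1+1/\alpha$$; integrating the tail probability against $$d(x^{\alpha+1})$$ shows the integral is finite for every $$\delta>0$$ exactly when $$E\frac{|Y|^{2p}}{\log^{p}|Y|}<\infty$$.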

Hence, for all $$\delta>0$$,

$$\sum^{\infty}_{n=1}P\Bigl\{ \max _{1\leq j\leq[n^{\alpha}]}|Y_{nj}|>\delta\sqrt {2n^{\alpha}\log n} \Bigr\} \leq\sum^{\infty}_{n=1}n^{\alpha}P \bigl\{ |Y|>\delta\sqrt{2n^{\alpha}\log n}\bigr\} < \infty,$$

which implies that (2.1) holds by the Borel-Cantelli lemma. □

### Lemma 2.2

Let $$\alpha>0$$ and $$\{Y,Y_{n,j},1\leq j\leq[n^{\alpha}],n\geq1\}$$ be an array of i.i.d. random variables with $$EY=0$$ and $$E\frac{|Y|^{2p}}{\log^{p}|Y|}<\infty$$, $$p=1+1/\alpha$$. Then

$$E\sup_{n\geq1}\frac{\max_{1\leq m\leq[n^{\alpha}]}| \sum_{j=1}^{m}Y_{n,j}|}{\sqrt{2n^{\alpha}\log n}}< \infty.$$
(2.2)

### Proof

Let $$\{Y_{n},n\geq1\}$$ be a sequence of independent random variables with the same distribution as Y. By the same argument as in Theorem 4 of Li and Spătaru [14] or Theorem 1.2 of Chen and Wang [15],

$$\int^{\infty}_{M}\sum^{\infty}_{n=1}P \Biggl\{ \Biggl|\sum^{[n^{\alpha}]}_{j=1}Y_{j}\Biggr|>y \sqrt {2n^{\alpha}\log n}\Biggr\} \,dy< \infty$$

holds for some large enough $$M>0$$. For all $$x\geq2M$$, by Markov's inequality, when n is large enough,

$$\max_{1\leq m\leq[n^{\alpha}]-1}P\Biggl\{ \Biggl|\sum^{[n^{\alpha}]}_{j=m+1}Y_{n,j}\Biggr|>x \sqrt{2n^{\alpha}\log n}\Biggr\} \leq(2M)^{-2} \bigl(E|Y|^{2}\bigr) (2\log n)^{-1}\leq1/2.$$

Hence, by Ottaviani's inequality (see Lemma 6.2 in Ledoux and Talagrand [11]), for all $$x\geq2M$$, when n is large enough,

$$P\Biggl\{ \max_{1\leq m\leq[n^{\alpha}]}\Biggl|\sum_{j=1}^{m}Y_{n,j}\Biggr|>x \sqrt{2n^{\alpha}\log n} \Biggr\} \leq 2P\Biggl\{ \Biggl|\sum ^{[n^{\alpha}]}_{j=1}Y_{nj}\Biggr|>x\sqrt{2n^{\alpha}\log n}/2 \Biggr\} .$$

Then

\begin{aligned} &E\sup_{n\geq1}\frac{\max_{1\leq m\leq[n^{\alpha}]}| \sum_{j=1}^{m}Y_{n,j}|}{\sqrt{2n^{\alpha}\log n}} \\ &\quad=\int^{\infty}_{0} P\biggl\{ \sup_{n\geq1} \frac{\max_{1\leq m\leq[n^{\alpha}]}|\sum_{j=1}^{m}Y_{n,j}|}{\sqrt{2n^{\alpha}\log n}} >x\biggr\} \,dx \\ &\quad\leq2M+\int^{\infty}_{2M} \sum ^{\infty}_{n=1}P\Biggl\{ \max_{1\leq m\leq [n^{\alpha}]}\Biggl|\sum _{j=1}^{m}Y_{n,j}\Biggr|>x \sqrt{2n^{\alpha}\log n} \Biggr\} \,dx \\ &\quad\leq2M+2\int^{\infty}_{2M} \sum ^{\infty}_{n=1}P\Biggl\{ \Biggl|\sum ^{[n^{\alpha}]}_{j=1}Y_{nj}\Biggr|>x\sqrt{2n^{\alpha}\log n}/2 \Biggr\} \,dx \\ &\quad=2M+4\int^{\infty}_{M} \sum ^{\infty}_{n=1}P\Biggl\{ \Biggl|\sum ^{[n^{\alpha}]}_{j=1}Y_{j}\Biggr|>y\sqrt{2n^{\alpha}\log n} \Biggr\} \,dy\quad (x=2y) \\ &\quad< \infty. \end{aligned}

□

### Proof of Theorem 1.1

We only prove (1.10) ⇒ (1.14), (1.11) ⇒ (1.10), and (1.13) ⇒ (1.15) ⇒ (1.10); the proofs of (1.14) ⇒ (1.13) ⇒ (1.11) and (1.14) ⇒ (1.12) ⇒ (1.11) are similar to those in Li et al. [10].

(1.10) ⇒ (1.14). Note that

$$\sup_{0\leq t\leq1}\Biggl|U_{n}(t)-\sum ^{[[n^{\alpha}]t]}_{j=1}\varepsilon_{n,j}\Biggr| \leq\max _{1\leq j \leq[n^{\alpha}]}|\varepsilon_{n,j}|$$

and

$$\sup_{0\leq t\leq1}\Biggl|S_{n}(t)-\sum ^{[[n^{\alpha}]t]}_{j=1}X_{n,j}\Biggr| \leq\max _{1\leq j \leq[n^{\alpha}]}|X_{n,j}|,$$

where $$U_{n}(t)$$ is as in Theorem B, and

\begin{aligned} &\sup_{0\leq t\leq1}\biggl\vert \frac{S_{n}(t)}{n^{\alpha/2}}-aW_{n}(t) \biggr\vert \\ &\quad=\sup_{0\leq t\leq1}\Biggl\vert \frac{1}{n^{\alpha/2}} \Biggl[ \Biggl(S_{n}(t)-\sum^{[[n^{\alpha}]t]}_{j=1}X_{n,j} \Biggr) + \Biggl(\sum^{[[n^{\alpha}]t]}_{j=1}X_{n,j}-a \sum^{[[n^{\alpha}]t]}_{j=1}\varepsilon_{n,j} \Biggr) \\ &\qquad{} + a \Biggl(\sum^{[[n^{\alpha}]t]}_{j=1} \varepsilon _{n,j}-U_{n}(t) \Biggr) \Biggr] +a \biggl( \frac{U_{n}(t)}{n^{\alpha/2}}-W_{n}(t) \biggr)\Biggr\vert \\ &\quad\leq\frac{1}{n^{\alpha/2}}\max_{1\leq j \leq[n^{\alpha}]}|X_{n,j}| + \frac{1}{n^{\alpha/2}}\sup_{0\leq t\leq1}\Biggl\vert \sum ^{[[n^{\alpha}]t]}_{j=1}X_{n,j}-a\sum ^{[[n^{\alpha}]t]}_{j=1}\varepsilon_{n,j}\Biggr\vert \\ &\qquad{} +\frac{|a|}{n^{\alpha/2}}\max_{1\leq j \leq[n^{\alpha}]}|\varepsilon_{n,j}| +|a|\sup_{0\leq t\leq1}\biggl\vert \frac{U_{n}(t)}{n^{\alpha/2}}-W_{n}(t) \biggr\vert . \end{aligned}

Therefore, to prove (1.14), by Lemma 2.1 and Theorem B it is enough to prove that

$$\frac{1}{\sqrt{2n^{\alpha}\log n}}\sup_{0\leq t\leq1}\Biggl|\sum ^{[[n^{\alpha}]t]}_{j=1}X_{n,j}-a\sum ^{[[n^{\alpha}]t]}_{j=1}\varepsilon_{n,j}\Biggr| \rightarrow0\quad \mbox{a.s.}$$
(2.3)

Given $$m>0$$, set

\begin{aligned} &Y_{nm}(t)=\sum^{[[n^{\alpha}]t]}_{j=1}\sum ^{m}_{i=-m}a_{i}\varepsilon _{n,j-i}, \\ &\widetilde{a}_{m}=0, \qquad\widetilde{a}_{i}=\sum ^{m}_{k=i+1}a_{k},\quad i=0,\ldots ,m-1, \\ &\widetilde{\widetilde{a}}_{-m}=0,\qquad \widetilde{\widetilde{a}}_{i}= \sum^{i-1}_{k=-m}a_{k},\quad i=-m+1,-m+2, \ldots,0,\\ &\widetilde{\varepsilon}_{nj}=\sum^{m}_{i=1} \widetilde{a}_{i}\varepsilon_{n,j-i}, \qquad\widetilde{\widetilde{ \varepsilon}}_{nj}=\sum^{0}_{i=-m} \widetilde {\widetilde{a}}_{i}\varepsilon_{n,j-i}. \end{aligned}

Then

$$Y_{nm}(t)=\Biggl(\sum^{m}_{i=-m}a_{i} \Biggr)\sum^{[[n^{\alpha}]t]}_{j=1}\varepsilon_{n,j} +(\widetilde{\varepsilon}_{n0}-\widetilde{\varepsilon}_{n,[[n^{\alpha}]t]}+ \widetilde{\widetilde{\varepsilon}}_{n,[[n^{\alpha}]t]+1}-\widetilde {\widetilde{ \varepsilon}}_{n1})$$
(2.4)

and

$$\sum^{[[n^{\alpha}]t]}_{j=1}X_{n,j} =Y_{nm}(t)+\sum^{[[n^{\alpha}]t]}_{j=1}\sum _{|i|>m}a_{i}\varepsilon_{n,j-i}.$$
(2.5)
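The identity (2.4) follows by interchanging the order of summation (a Beveridge-Nelson-type decomposition). For the causal part alone, with $$N=[[n^{\alpha}]t]$$, the computation reads:

```latex
\sum_{j=1}^{N}\sum_{i=1}^{m}a_{i}\varepsilon_{n,j-i}
=\Biggl(\sum_{i=1}^{m}a_{i}\Biggr)\sum_{j=1}^{N}\varepsilon_{n,j}
+\sum_{i=1}^{m}a_{i}\Biggl(\sum_{k=1-i}^{0}\varepsilon_{n,k}
-\sum_{k=N-i+1}^{N}\varepsilon_{n,k}\Biggr).
```

Regrouping the two boundary sums by the index of $$\varepsilon$$ produces the weights $$\widetilde{a}_{i}$$ (the anticausal part is handled symmetrically and yields $$\widetilde{\widetilde{a}}_{i}$$); the point is that only finitely many $$\varepsilon$$'s near the endpoints 0 and N survive the cancellation.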

For every i, by Lemma 2.1,

$$\frac{1}{\sqrt{2n^{\alpha}\log n}}\sup_{0\leq t\leq1}| \varepsilon _{n,[[n^{\alpha}]t]-i}|\leq \frac{1}{\sqrt{2n^{\alpha}\log n}}\max_{0\leq k\leq[n^{\alpha}]}|\varepsilon_{n,k-i}| \rightarrow0 \quad\mbox{a.s.}$$

So

$$\frac{1}{\sqrt{2n^{\alpha}\log n}} \sup_{0\leq t\leq1}|\widetilde {\varepsilon}_{n,[[n^{\alpha}]t]}| \rightarrow0 \quad\mbox{a.s.}$$

and

$$\frac{1}{\sqrt{2n^{\alpha}\log n}}\sup_{0\leq t\leq1}|\widetilde {\widetilde{ \varepsilon}}_{n,[[n^{\alpha}]t]+1}| \rightarrow0 \quad\mbox{a.s.}$$

Furthermore

$$\lim_{n\rightarrow\infty}\frac{|\widetilde{\varepsilon}_{n0}|}{\sqrt {2n^{\alpha}\log n}}= \lim_{n\rightarrow\infty} \frac{|\widetilde{\widetilde{\varepsilon }}_{n1}|}{\sqrt{2n^{\alpha}\log n}}=0 \quad\mbox{a.s.}$$

Hence

$$\frac{1}{\sqrt{2n^{\alpha}\log n}}\sup_{0\leq t\leq1}|\widetilde {\varepsilon}_{n0}- \widetilde{\varepsilon}_{n,[[n^{\alpha}]t]}+ \widetilde{\widetilde{ \varepsilon}}_{n,[[n^{\alpha}]t]+1}-\widetilde {\widetilde{\varepsilon}}_{n1}| \rightarrow0\quad \mbox{a.s.}$$
(2.6)

By (2.4)-(2.6) and Remark 1.1,

\begin{aligned}[b] &\limsup_{n\rightarrow\infty}\frac{1}{\sqrt{2n^{\alpha}\log n}} \sup _{0\leq t\leq1}\Biggl|\sum^{[[n^{\alpha}]t]}_{j=1}X_{n,j}-a \sum^{[[n^{\alpha}]t]}_{j=1}\varepsilon_{n,j}\Biggr| \\ &\quad=\limsup_{n\rightarrow\infty}\frac{1}{\sqrt{2n^{\alpha}\log n}} \sup_{0\leq t\leq1}\Biggl| \sum_{|i|>m}a_{i}\sum ^{[[n^{\alpha}]t]}_{j=1}\varepsilon_{n,j-i} -\sum _{|i|>m}a_{i}\sum^{[[n^{\alpha}]t]}_{j=1} \varepsilon_{n,j}\Biggr| \\ &\quad\leq\limsup_{n\rightarrow\infty}\frac{1}{\sqrt{2n^{\alpha}\log n}} \sup_{0\leq t\leq1}\Biggl| \sum_{|i|>m}a_{i}\sum ^{[[n^{\alpha}]t]}_{j=1}\varepsilon_{n,j-i}\Biggr| \\ &\qquad{} +\limsup_{n\rightarrow\infty}\frac{1}{\sqrt{2n^{\alpha}\log n}} \sup_{0\leq t\leq1}\Biggl| \sum_{|i|>m}a_{i}\sum ^{[[n^{\alpha}]t]}_{j=1}\varepsilon_{n,j}\Biggr| \\ &\quad\leq\sum_{|i|>m}|a_{i}|\sup _{n\geq1}\frac{1}{\sqrt{2n^{\alpha}\log n}}\max_{1\leq k \leq[n^{\alpha}]} \Biggl|\sum ^{k}_{j=1}\varepsilon_{n,j-i}\Biggr|+\biggl|\sum _{|i|>m}a_{i}\biggr|. \end{aligned}
(2.7)

By the stationarity of $$\{\varepsilon,\varepsilon_{n,i},-\infty< i<\infty,n\geq1\}$$ and Lemma 2.2,

\begin{aligned} &E\sum^{\infty}_{i=-\infty}|a_{i}|\sup _{n\geq1}\frac{1}{\sqrt{2n^{\alpha}\log n}}\max_{1\leq k \leq[n^{\alpha}]} \Biggl|\sum ^{k}_{j=1}\varepsilon_{n,j-i}\Biggr| \\ &\quad \leq\sum^{\infty}_{i=-\infty}|a_{i}|E \sup_{n\geq1}\frac{1}{\sqrt {2n^{\alpha}\log n}}\max_{1\leq k \leq[n^{\alpha}]} \Biggl|\sum ^{k}_{j=1}\varepsilon_{n,j-i}\Biggr| \\ &\quad =\Biggl(\sum^{\infty}_{i=-\infty}|a_{i}| \Biggr)E\sup_{n\geq1}\frac{1}{\sqrt {2n^{\alpha}\log n}}\max_{1\leq k \leq[n^{\alpha}]} \Biggl|\sum^{k}_{j=1}\varepsilon_{n,j}\Biggr| \\ &\quad < \infty. \end{aligned}

Hence

$$\sum^{\infty}_{i=-\infty}|a_{i}|\sup _{n\geq1}\frac{1}{\sqrt{2n^{\alpha}\log n}}\max_{1\leq k \leq[n^{\alpha}]} \Biggl|\sum ^{k}_{j=1}\varepsilon_{n,j-i}\Biggr|< \infty\quad \mbox{a.s.}$$

Therefore, letting $$m\rightarrow\infty$$ in (2.7),

$$\limsup_{n\rightarrow\infty}\frac{1}{\sqrt{2n^{\alpha}\log n}} \sup_{0\leq t\leq1}\Biggl| \sum^{[[n^{\alpha}]t]}_{j=1}X_{n,j}-a\sum ^{[[n^{\alpha}]t]}_{j=1}\varepsilon_{n,j}\Biggr|=0\quad \mbox{a.s.},$$

i.e. (2.3) holds.

(1.11) ⇒ (1.10). Set $$a_{n,i}=\sum_{j=1}^{[n^{\alpha}]}a_{j-i}$$. Note that $$\sum_{j=1}^{[n^{\alpha}]}X_{n,j} =\sum_{i=-\infty}^{\infty}a_{n,i}\varepsilon_{n,i}$$ and that $$\{\sum_{j=1}^{[n^{\alpha}]}X_{n,j}, n\geq1\}$$ is a sequence of independent random variables; thus, by the second Borel-Cantelli lemma, (1.11) implies that

$$\sum^{\infty}_{n=1}P \Biggl\{ \Biggl|\sum ^{\infty}_{i=-\infty}a_{n,i}\varepsilon_{n,i}\Biggr| >M\sqrt{2a^{2}n^{\alpha}\log n} \Biggr\} < \infty$$
(2.8)

holds for any $$M>1$$. Let $$\{\varepsilon', \varepsilon'_{n,i},-\infty< i<\infty,n\geq1\}$$ be an independent copy of $$\{\varepsilon, \varepsilon_{n,i},-\infty< i<\infty,n\geq1\}$$. Then, by (2.8), for $$\{\varepsilon', \varepsilon'_{n,i},-\infty< i<\infty,n\geq1\}$$,

$$\sum^{\infty}_{n=1}P \Biggl\{ \Biggl|\sum ^{\infty}_{i=-\infty}a_{n,i}\varepsilon'_{n,i}\Biggr| >M\sqrt{2a^{2}n^{\alpha}\log n} \Biggr\} < \infty$$
(2.9)

also holds. Set $$\varepsilon^{s}=\varepsilon-\varepsilon'$$ and $$\varepsilon^{s}_{n,i}=\varepsilon_{n,i}-\varepsilon'_{n,i}$$; clearly, $$\{\varepsilon^{s}, \varepsilon^{s}_{n,i},-\infty< i<\infty,n\geq1\}$$ is an array of independent, symmetric, identically distributed random variables. Then, by (2.8) and (2.9),

$$\sum^{\infty}_{n=1}P \Biggl\{ \Biggl|\sum ^{\infty}_{i=-\infty}a_{n,i}\varepsilon^{s}_{n,i}\Biggr| >2M\sqrt{2a^{2}n^{\alpha}\log n} \Biggr\} < \infty.$$
(2.10)
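The step from (2.8) and (2.9) to (2.10) is the triangle inequality together with a union bound: with $$c_{n}=\sqrt{2a^{2}n^{\alpha}\log n}$$,

```latex
P\Biggl\{\Biggl|\sum^{\infty}_{i=-\infty}a_{n,i}\varepsilon^{s}_{n,i}\Biggr|>2Mc_{n}\Biggr\}
\leq P\Biggl\{\Biggl|\sum^{\infty}_{i=-\infty}a_{n,i}\varepsilon_{n,i}\Biggr|>Mc_{n}\Biggr\}
+P\Biggl\{\Biggl|\sum^{\infty}_{i=-\infty}a_{n,i}\varepsilon'_{n,i}\Biggr|>Mc_{n}\Biggr\},
```

and both series on the right-hand side converge.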

By the comparison principle (see Lemma 6.5 of Ledoux and Talagrand [11]), (2.10) implies that

$$\sum^{\infty}_{n=1}P \Biggl\{ \Biggl|\sum ^{[n^{\alpha}/2]}_{i=[n^{\alpha}/3]}a_{n,i}\varepsilon^{s}_{n,i}\Biggr| >2M\sqrt{2a^{2}n^{\alpha}\log n} \Biggr\} < \infty.$$
(2.11)

Note that $$\sum^{\infty}_{i=-\infty}a_{i}=a\neq0$$; hence, for n large enough, $$|a_{n,i}|\geq|a|/2$$ holds uniformly for $$[n^{\alpha}/3]\leq i\leq[n^{\alpha}/2]$$. By (2.11) and the comparison principle (see Lemma 6.5 of Ledoux and Talagrand [11]) again,

$$\sum^{\infty}_{n=1}P \Biggl\{ \Biggl|\sum ^{[n^{\alpha}/2]}_{i=[n^{\alpha}/3]}\varepsilon^{s}_{n,i}\Biggr| >4M\sqrt{2n^{\alpha}\log n} \Biggr\} < \infty.$$
(2.12)
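The scalar form of the comparison (contraction) principle being used here is (cf. Lemma 6.5 of Ledoux and Talagrand [11]; we state it from memory, so the exact formulation should be checked against the cited source): if $$\eta_{1},\ldots,\eta_{N}$$ are independent symmetric random variables and $$|\lambda_{i}|\leq1$$, then for every $$t>0$$

```latex
P\Biggl\{\Biggl|\sum_{i=1}^{N}\lambda_{i}\eta_{i}\Biggr|>t\Biggr\}
\leq2P\Biggl\{\Biggl|\sum_{i=1}^{N}\eta_{i}\Biggr|>t\Biggr\}.
```

It is applied first with $$\lambda_{i}\in\{0,1\}$$ to discard the terms outside $$[n^{\alpha}/3]\leq i\leq[n^{\alpha}/2]$$, and then with $$\lambda_{i}=(a/2)/a_{n,i}$$ (admissible since $$|a_{n,i}|\geq|a|/2$$ on that range) to replace each $$a_{n,i}$$ by $$a/2$$, which yields (2.12).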

Then, by the same argument as in Li et al. ([10], p.175), (2.12) implies that $$E\frac{|\varepsilon^{s}|^{2p}}{\log^{p}|\varepsilon^{s}|}<\infty$$, and consequently $$E\frac{|\varepsilon|^{2p}}{\log^{p}|\varepsilon|}<\infty$$ holds. By the implication (1.10) ⇒ (1.11), applied to the centered and standardized variables, we have

$$\limsup_{n\rightarrow\infty}\frac{|\sum^{\infty}_{i=-\infty }a_{n,i}(\varepsilon_{n,i} -E\varepsilon_{n,i})|}{\sqrt{2a^{2}E(\varepsilon-E\varepsilon)^{2}n^{\alpha}\log n}}=1 \quad\mbox{a.s.}$$
(2.13)

Consequently

$$\lim_{n\rightarrow\infty}\frac{\sum^{\infty}_{i=-\infty }a_{n,i}(\varepsilon_{n,i} -E\varepsilon_{n,i})}{n^{\alpha}}=0 \quad\mbox{a.s.}$$

By (1.11), we also have

$$\lim_{n\rightarrow\infty}\frac{\sum^{\infty}_{i=-\infty}a_{n,i}\varepsilon _{n,i}}{n^{\alpha}}=0 \quad\mbox{a.s.},$$

Since $$\sum^{\infty}_{i=-\infty}a_{n,i}=a[n^{\alpha}]$$ and $$a\neq0$$, it follows that $$E\varepsilon=0$$. Comparing (2.13) with (1.11), we get $$E\varepsilon^{2}=1$$ immediately.

(1.13) ⇒ (1.15). By Embrechts et al. ([16], p.176, line 10),

$$\lim_{n\rightarrow\infty}\frac{\max_{1\leq m\leq n} Z_{m}}{\sqrt{2\log n}}=1 \quad\mbox{a.s.},$$

which, together with (1.13), implies (1.15) immediately.
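The cited fact about $$\max_{1\leq m\leq n}Z_{m}$$ is the classical extreme-value behavior of i.i.d. standard normal variables; a sketch of the upper half: by the tail bound $$P\{Z>x\}\leq e^{-x^{2}/2}$$ for $$x\geq1$$, for any $$\delta>0$$,

```latex
\sum_{n\geq3}P\bigl\{Z_{n}>(1+\delta)\sqrt{2\log n}\bigr\}
\leq\sum_{n\geq3}n^{-(1+\delta)^{2}}<\infty ,
```

so the Borel-Cantelli lemma gives $$\limsup_{n}Z_{n}/\sqrt{2\log n}\leq1$$ a.s.; the matching lower bound uses the independence of the $$Z_{n}$$, a lower estimate for the normal tail, and the second Borel-Cantelli lemma.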

(1.15) ⇒ (1.10). By (1.15),

$$\limsup_{n\rightarrow\infty}\frac{|\sum_{j=1}^{[n^{\alpha}]}X_{n,j}|}{\sqrt{2a^{2}n^{\alpha}\log n}}\leq1 \quad\mbox{a.s.}$$

The rest of the proof is similar to that of (1.11) ⇒ (1.10). The proof of the theorem is complete. □

## References

1. Phillips, PCB, Solo, V: Asymptotics for linear processes. Ann. Stat. 20, 971-1001 (1992)

2. Wang, QY, Lin, YX, Gulati, CM: Strong approximation for long memory processes with application. J. Theor. Probab. 16, 377-389 (2003)

3. Lu, CR, Qiu, J: Strong approximation of linear processes. Acta Math. Sci. Ser. A Chin. Ed. 27, 309-313 (2007) (in Chinese)

4. Strassen, V: An invariance principle for the law of the iterated logarithm. Z. Wahrscheinlichkeitstheor. Verw. Geb. 3, 211-226 (1964)

5. Hartman, P, Wintner, A: On the law of the iterated logarithm. Am. J. Math. 63, 169-176 (1941)

6. Tan, XL, Zhao, SS, Yang, XY: Strong approximation and the law of the iterated logarithm for linear processes. Chinese J. Appl. Probab. Statist. 24, 225-239 (2008) (in Chinese)

7. Hu, TC, Weber, NC: On the rate of convergence in the strong law of large numbers for arrays. Bull. Aust. Math. Soc. 45, 479-482 (1992)

8. Li, D, Rao, MB, Tomkins, RJ: A strong law for B-valued arrays. Proc. Am. Math. Soc. 123, 3205-3212 (1995)

9. Qi, YC: On the strong convergence of arrays. Bull. Aust. Math. Soc. 50, 219-223 (1994)

10. Li, D, Huang, ML, Rosalsky, A: Strong invariance principles for arrays. Bull. Inst. Math. Acad. Sin. 28, 167-181 (2000)

11. Ledoux, M, Talagrand, M: Probability in Banach Spaces. Springer, Berlin (1991)

12. Chen, X: The limit law of the iterated logarithm. J. Theor. Probab. (2013). doi:10.1007/s10959-013-0481-4

13. Li, D, Liang, H: The limit law of the iterated logarithm in Banach space. Stat. Probab. Lett. 83, 1800-1804 (2013)

14. Li, D, Spătaru, A: Refinement of convergence rates for tail probabilities. J. Theor. Probab. 18, 933-947 (2005)

15. Chen, P, Wang, D: Convergence rates for probabilities of moderate deviations for moving average processes. Acta Math. Sin. Engl. Ser. 24, 611-622 (2008)

16. Embrechts, P, Klüppelberg, C, Mikosch, T: Modelling Extremal Events. Springer, Berlin (1997)

## Acknowledgements

The authors would like to thank the referees and the editors for their helpful comments and suggestions. The work of Liu was supported by the National Natural Science Foundation of China (Grant No. 71471075), the work of Guan was supported by the National Natural Science Foundation of China (Grant No. 11271161), and the work of Hu was partially supported by the National Science Council of Taiwan (Grant No. NSC 101-2118-M-007-001-MY2).

## Author information


### Corresponding author

Correspondence to Xiangdong Liu.

### Competing interests

The authors declare that they have no competing interests.

### Authors' contributions

All authors contributed equally. All authors read and approved the final manuscript.


Liu, X., Guan, Z. & Hu, T.C. Strong law for linear processes. J Inequal Appl 2015, 144 (2015). https://doi.org/10.1186/s13660-015-0664-x