Complete convergence for weighted sums of widely orthant-dependent random variables

Abstract

Complete convergence results for weighted sums of widely orthant-dependent random variables are obtained. A strong law of large numbers for weighted sums of widely orthant-dependent random variables is also obtained. Our results extend and generalize results of Chen and Sung (J. Inequal. Appl. 2018:121, 2018), Zhang et al. (J. Math. Inequal. 12:1063–1074, 2018), Chen and Sung (Stat. Probab. Lett. 154:108544, 2019), Lang et al. (Rev. Mat. Complut., 2020, https://doi.org/10.1007/s13163-020-00369-5), and Liang (Stat. Probab. Lett. 48:317–325, 2000).

1 Introduction

Let \(\{X_{n}, n \ge 1\}\) be a sequence of random variables and let \(\{a_{nk}, 1 \le k\le n, n \ge 1\}\) be an array of constants. Since many linear statistics, such as least-squares estimators, nonparametric regression function estimators, and jackknife estimators, are of the form of weighted sums \(\sum_{k=1}^{n} a_{nk}X_{k}\), it is important to study the limiting behavior of such weighted sums.

Complete convergence was introduced by Hsu and Robbins [10] as follows. A sequence \(\{X_{n}, n\ge 1\}\) of random variables converges completely to the constant θ if \(\sum_{n=1}^{\infty }P(|X_{n}-\theta |>\varepsilon )<\infty \) for all \(\varepsilon >0\). Note that complete convergence implies almost sure convergence in view of the Borel–Cantelli lemma. Complete convergence is also used to characterize the rate of convergence.
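To spell out this implication: if \(X_{n}\) converges completely to θ, then for every \(\varepsilon >0\) the first Borel–Cantelli lemma gives

$$ P\bigl( \vert X_{n}-\theta \vert >\varepsilon \text{ for infinitely many } n\bigr)=0, $$

and letting ε decrease to 0 along a countable sequence yields \(X_{n}\to \theta \) almost surely.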

In this paper, we will focus on the array weights \(\{a_{nk}, 1\le k\le n, n\ge 1\}\) of real numbers satisfying

$$ \sum^{n}_{k=1} \vert a_{nk} \vert ^{\alpha }=O(n) $$
(1.1)

for some \(\alpha >0\).
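As a quick illustration (our addition, not part of the original argument), condition (1.1) can be checked numerically for a concrete weight family. The sketch below uses the weights \(a_{nk}=(k/n)^{s}\) of Corollary 1.2 with the hypothetical choices \(s=-0.3\) and \(\alpha =2\); since \(s\alpha >-1\), one expects \(\sum_{k=1}^{n}|a_{nk}|^{\alpha }\approx n/(s\alpha +1)\).

```python
import numpy as np

# Numerical check of condition (1.1) for the weights a_{nk} = (k/n)^s.
# Hypothetical parameters: s = -0.3, alpha = 2, so s*alpha = -0.6 > -1
# and sum_k |a_{nk}|^alpha should grow like n / (s*alpha + 1) = 2.5 * n.
s, alpha = -0.3, 2.0

for n in (10**2, 10**3, 10**4, 10**5):
    k = np.arange(1, n + 1)
    a_nk = (k / n) ** s
    ratio = np.sum(np.abs(a_nk) ** alpha) / n
    print(f"n = {n:>6}: sum_k |a_nk|^alpha / n = {ratio:.4f}")
```

The printed ratios stabilize near \(1/(s\alpha +1)=2.5\), consistent with (1.1).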

In fact, under condition (1.1), many authors have studied the strong laws of large numbers for weighted sums of independent and identically distributed random variables. For example, Chow [8] proved the Kolmogorov strong law of large numbers for weighted sums, and Cuzick [9] generalized Chow’s [8] result. Bai and Cheng [2] proved the Marcinkiewicz–Zygmund strong law of large numbers for weighted sums, and Chen and Gan [5] generalized the result of Bai and Cheng [2].

A convergence rate in the law of large numbers for weighted sums has also been studied by many authors. Chen [4] established the following complete convergence result:

$$ \sum^{\infty }_{n=1}n^{r-2}P \Biggl(\max_{1\leq m\leq n} \Biggl\vert \sum ^{m}_{k=1}a_{nk}X_{k} \Biggr\vert > \varepsilon n^{1/p} \Biggr)< \infty ,\quad \forall \varepsilon >0, $$
(1.2)

for weighted sums of identically distributed negatively associated random variables satisfying (1.1), where \(r>1\), \(1\le p<2\), \(1/\alpha +1/\beta =1/p\), and \(\alpha < rp\). Note that if \(a_{nk}=1\) for \(1\le k\le n\) and \(n\ge 1\), then (1.2) reduces to the well-known Baum and Katz [3] strong law. Liang [12] established (1.2) for identically distributed negatively associated random variables with weights of a special type satisfying (1.1) (see also Remark 1.4 below). Chen and Sung [6], Sung [13], and Wu et al. [17] obtained (1.2) for \(\rho ^{*}\)-mixing random variables, Wang and Wang [16] and Wu et al. [19] established (1.2) for extended negatively dependent random variables, Wu et al. [18] established (1.2) for m-asymptotic negatively associated random variables, and Lang et al. [11] obtained (1.2) for widely orthant-dependent (WOD) random variables.

Recently, Chen and Sung [6] obtained a complete convergence result for weighted sums of \(\rho ^{*}\)-mixing random variables.

Theorem A

(Chen and Sung [6])

Let \(r\geq 1\), \(1\leq p<2\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\). Let \(\{a_{nk}, 1\leq k\leq n, n\geq 1\}\) be an array of constants satisfying (1.1) and let \(\{X,X_{n},n\geq 1\}\) be a sequence of identically distributed \(\rho ^{*}\)-mixing random variables. If \(EX=0\) and

$$ \textstyle\begin{cases} E \vert X \vert ^{(r-1)\beta }< \infty &\textit{if }\alpha < rp, \\ E \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert )< \infty &\textit{if }\alpha =rp, \\ E \vert X \vert ^{rp}< \infty &\textit{if }\alpha >rp, \end{cases} $$
(1.3)

then (1.2) holds. Conversely, if (1.2) holds for any array \(\{a_{nk}, 1\le k\le n,n\ge 1\}\) satisfying (1.1) for some \(\alpha >p\), then \(EX=0\), \(E|X|^{rp}<\infty \) and \(E|X|^{(r-1)\beta }<\infty \).

The case \(\alpha >rp\) with \(r>1\) in Theorem A is due to Sung [13]. When \(\alpha =rp\), the moment condition \(E|X|^{(r-1)\beta }\log (1+|X|)<\infty \) is a sufficient condition for (1.2). However, it is not known whether it is also a necessary condition for (1.2).

In this paper, we extend Theorem A to WOD random variables. The concept of WOD was introduced by Wang et al. [14] as follows.

Definition 1.1

Random variables \(X_{1}, X_{2}, \ldots \) are said to be widely upper orthant dependent (WUOD) if for each \(n\ge 1\), there exists a positive number \(g_{U}(n)\) such that for all real numbers \(x_{i}\), \(1\le i\le n\),

$$ P(X_{1}>x_{1}, \ldots , X_{n}>x_{n}) \le g_{U}(n)\prod_{i=1}^{n} P(X_{i}>x_{i}), $$

they are said to be widely lower orthant dependent (WLOD) if for each \(n\ge 1\), there exists a positive number \(g_{L}(n)\) such that, for all real numbers \(x_{i}\), \(1\le i\le n\),

$$ P(X_{1}\le x_{1}, \ldots , X_{n}\le x_{n})\le g_{L}(n)\prod_{i=1}^{n} P(X_{i}\le x_{i}), $$

and they are said to be WOD if they are both WUOD and WLOD.

In Definition 1.1, \(g_{U}(n)\), \(g_{L}(n)\), \(n\ge 1\), are called dominating coefficients. If for all \(n\ge 1\), \(g_{U}(n)=g_{L}(n)=M\) for some positive constant M, then \(\{X_{n}, n\ge 1\}\) are said to be extended negatively dependent (END). In particular, if \(M=1\), then \(\{X_{n}, n\ge 1\}\) are said to be negatively orthant dependent (NOD) or negatively dependent. Since the class of WOD random variables contains END random variables and NOD random variables as special cases, it is interesting to study the limiting behavior of WOD random variables.
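As a simple example, independent random variables are WOD with \(g_{U}(n)=g_{L}(n)=1\) for all \(n\ge 1\), since independence gives equality in both defining inequalities:

$$ P(X_{1}>x_{1}, \ldots , X_{n}>x_{n})=\prod_{i=1}^{n} P(X_{i}>x_{i}), \qquad P(X_{1}\le x_{1}, \ldots , X_{n}\le x_{n})=\prod_{i=1}^{n} P(X_{i}\le x_{i}). $$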

We now state the main results. Some preliminary lemmas will be presented in Sect. 2. The proofs of the main results will be detailed in Sect. 3.

The first theorem extends the sufficiency of Theorem A with \(r>1\) to WOD random variables.

Theorem 1.1

Let \(r>1\), \(1\leq p<2\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\), and let \(\{a_{nk}, 1\leq k\leq n, n\geq 1\}\) be an array of constants satisfying (1.1). Let \(\{X,X_{n},n\geq 1\}\) be a sequence of identically distributed WOD random variables with dominating coefficients \(g_{L}(n)\), \(g_{U}(n)\) for \(n\ge 1\). Suppose that there exist a nondecreasing positive function \(g(x)\) on \([0, \infty )\) and a constant \(\tau \ge 0\) such that \(\max \{g_{L}(n), g_{U}(n)\} \le g(n)=O(n^{\tau })\). If (1.3) holds, then

$$ \sum^{\infty }_{n=1}n^{r-2}P \Biggl(\max_{1\leq m\leq n} \Biggl\vert \sum ^{m}_{k=1}a_{nk}(X_{k}-EX_{k}) \Biggr\vert > \varepsilon n^{1/p} \Biggr)< \infty ,\quad \forall \varepsilon >0. $$
(1.4)

Remark 1.1

When \(\alpha >rp\), Zhang et al. [20] proved a weaker complete convergence result than (1.4) under a stronger condition than (1.1). Hence Theorem 1.1 improves the result of Zhang et al. [20].

Remark 1.2

When \(p=1\) and \(\alpha >rp\), Lang et al. [11] proved Theorem 1.1 for the weights with \(\max_{1\le k\le n} |a_{nk}|=O(1)\) under stronger conditions on \(g(x)\). Note that, if \(\max_{1\le k\le n} |a_{nk}|=O(1)\), then (1.1) holds for any \(\alpha >0\). Hence Theorem 1.1 generalizes and improves the result of Lang et al. [11].

When \(r=1\), we have the following theorem.

Theorem 1.2

Let \(\{a_{nk}, 1\leq k\leq n, n\geq 1\}\) be an array of constants satisfying (1.1) for some \(\alpha >1\). Let \(\{X, X_{n}, n\ge 1\}\) be a sequence of identically distributed WOD random variables with dominating coefficients \(g_{L}(n)\), \(g_{U}(n)\) for \(n\ge 1\). Suppose that there exist a nondecreasing positive function \(g(x)\) on \([0, \infty )\) and a positive constant \(\tau <\min \{1, \alpha -1\}\) such that \(\max \{g_{L}(n), g_{U}(n)\}\le g(n)\) for \(n\ge 1\) and \(g(x)/x^{\tau }\downarrow \). If \(E|X|g(|X|)<\infty \), then

$$ \sum^{\infty }_{n=1}\frac{1}{n} P \Biggl( \max_{1\leq m\leq n} \Biggl\vert \sum ^{m}_{k=1}a_{nk}(X_{k}-EX_{k}) \Biggr\vert > \varepsilon n \Biggr)< \infty , \quad \forall \varepsilon >0. $$
(1.5)

Remark 1.3

If \(a_{nk}=1\) for \(1\le k\le n\) and \(n\ge 1\), or \(\max_{1\le k\le n} |a_{nk}|=O(1)\) for \(n\ge 1\), then (1.1) holds for any \(\alpha >0\). These two cases are treated by Chen and Sung [7] and Lang et al. [11], respectively. Therefore, Theorem 1.2 generalizes the results of Chen and Sung [7] and Lang et al. [11].

The following corollary is a strong law of large numbers for weighted sums of WOD random variables.

Corollary 1.1

Let \(s>-1\) and let \(l(x)>0\) be a slowly varying function. Let \(\{X, X_{n}, n\ge 1\}\) and \(g(x)\) be as in Theorem 1.2. If \(E|X|g(|X|)<\infty \), then

$$ \frac{ \sum_{k=1}^{n} k^{s} l(k) (X_{k}-EX_{k})}{n^{1+s} l(n)} \to 0 \quad \textit{a.s.} $$
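For intuition, the sketch below simulates a special case of Corollary 1.1 under assumptions of our choosing: i.i.d. Exp(1) variables (which are WOD with \(g_{U}(n)=g_{L}(n)=1\), so we may take \(g\equiv 1\), for which the moment condition reduces to \(E|X|<\infty \)), together with the hypothetical choices \(s=0.5\) and the slowly varying function \(l(x)=\log (1+x)\).

```python
import numpy as np

# Simulation sketch of Corollary 1.1 (illustrative only).
# Assumed setup: i.i.d. Exp(1) variables, s = 0.5, l(x) = log(1 + x).
# i.i.d. variables are WOD with dominating coefficients equal to 1.
rng = np.random.default_rng(0)
s = 0.5

def l(x):
    return np.log(1.0 + x)

for n in (10**3, 10**4, 10**5, 10**6):
    k = np.arange(1, n + 1)
    x = rng.exponential(size=n)                # EX_k = 1
    weighted_sum = np.sum(k**s * l(k) * (x - 1.0))
    print(f"n = {n:>7}: {weighted_sum / (n**(1 + s) * l(n)):+.5f}")
```

The normalized weighted sums drift toward 0 as n grows, as the corollary asserts.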

Corollary 1.2

Let \(r \ge 1 \) and \(s>-1/r\), and let \(\{a_{nk}=c_{nk}k^{s}/n^{s}, 1\le k\le n, n\ge 1 \}\) be an array of constants, where \(|c_{nk}|\le B<\infty \) for all \(1\le k\le n\) and \(n\ge 1\). Let \(\{X,X_{n},n\geq 1\}\) be a sequence of identically distributed WOD random variables with dominating coefficients \(g_{L}(n)\), \(g_{U}(n)\) for \(n\ge 1\). Suppose that there exists a nondecreasing positive function \(g(x)\) on \([0,\infty )\) such that \(\max \{g_{L}(n), g_{U}(n)\} \le g(n)\). When \(r>1\), assume that \(g(n)=O(n^{\tau })\) for some \(\tau \ge 0\) and \(E|X|^{r}<\infty \). When \(r=1\), assume that \(g(x)/x^{\tau }\downarrow \) for some \(0<\tau <\min \{1, |1+1/s|\}\) (set \(\min \{1, |1+1/s|\}=1\) when \(s=0\)) and \(E|X|g(|X|)<\infty \). Then

$$ \sum^{\infty }_{n=1}n^{r-2}P \Biggl( \max_{1\leq m\leq n} \Biggl\vert \sum ^{m}_{k=1}a_{nk}(X_{k}-EX_{k}) \Biggr\vert > \varepsilon n \Biggr)< \infty ,\quad \forall \varepsilon >0. $$
(1.6)

Remark 1.4

Liang [12] proved Corollary 1.2 when \(r>1\) and \(\{X, X_{n}, n\ge 1\}\) is a sequence of identically distributed negatively associated random variables. Note that the proof of Liang [12] cannot be applied to the case \(r=1\) (the series on line 3 of page 322 in Liang [12] does not converge). Since negatively associated random variables are WOD, Corollary 1.2 complements and extends Liang’s [12] result.

Throughout this paper, C always stands for a positive constant which may differ from one place to another. For events A and B, we denote \(I(A, B)=I(A\cap B)\), where \(I(A)\) is the indicator function of the event A.

2 Preliminary lemmas

In this section, we present some lemmas which will be used in the proofs of the main results. The following two lemmas are well known (see, for example, Wang et al. [15], Chen and Sung [7], or Lang et al. [11]). The first is a Marcinkiewicz–Zygmund and Rosenthal type moment inequality for sums of WOD random variables.

Lemma 2.1

Let \(\{X_{n}, n\ge 1\}\) be a sequence of mean zero WOD random variables with dominating coefficients \(g_{L}(n)\), \(g_{U}(n)\) for \(n\ge 1\), and \(E|X_{n}|^{q}<\infty \) for some \(q>1\).

  (i)

    If \(1< q\le 2\), there exists a positive constant \(C_{q}\) depending only on q such that, for all \(n\ge 1\),

    $$ E \Biggl\vert \sum_{k=1}^{n} X_{k} \Biggr\vert ^{q}\le C_{q} \Biggl\{ \sum_{k=1}^{n} E \vert X_{k} \vert ^{q} +\bigl(g_{L}(n)+g_{U}(n) \bigr)\sum_{k=1}^{n} E \vert X_{k} \vert ^{q} \Biggr\} . $$
  (ii)

    If \(q>2\), there exists a positive constant \(C_{q}\) depending only on q such that, for all \(n\ge 1\),

    $$ E \Biggl\vert \sum_{k=1}^{n} X_{k} \Biggr\vert ^{q}\le C_{q} \Biggl\{ \sum_{k=1}^{n} E \vert X_{k} \vert ^{q} +\bigl(g_{L}(n)+g_{U}(n) \bigr) \Biggl( \sum_{k=1}^{n} E \vert X_{k} \vert ^{2} \Biggr)^{q/2} \Biggr\} . $$

The following lemma is a Rosenthal type moment inequality for the maximum of partial sums of WOD random variables.

Lemma 2.2

Let \(\{X_{n}, n\ge 1\}\) be a sequence of mean zero WOD random variables with dominating coefficients \(g_{L}(n)\), \(g_{U}(n)\) for \(n\ge 1\), and \(E|X_{n}|^{q}<\infty \) for some \(q>2\). Further assume that there exists a nondecreasing positive function \(g(x)\) on \([0,\infty )\) such that \(\max \{g_{L}(n), g_{U}(n)\}\le g(n)\) for \(n\ge 1\). Then there exists a positive constant \(C_{q}\) depending only on q such that, for all \(n\ge 1\),

$$ E\max_{1 \le m \le n} \Biggl\vert \sum_{k=1}^{m} X_{k} \Biggr\vert ^{q} \le C_{q} (\log n)^{q} \Biggl\{ \sum^{n}_{k=1}E \vert X_{k} \vert ^{q}+ g(n) \Biggl( \sum _{k=1}^{n} EX_{k}^{2} \Biggr)^{q/2} \Biggr\} . $$

The following lemma can be found in Chen and Sung [6].

Lemma 2.3

Let \(r\geq 1\), \(0< p<2\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\), and let X be a random variable. Let \(\{a_{nk},1\leq k\leq n, n\geq 1\}\) be an array of constants satisfying (1.1). Then

$$ \sum^{\infty }_{n=1}n^{r-2}\sum ^{n}_{k=1}P \bigl( \vert a_{nk}X \vert >n^{1/p} \bigr) \leq \textstyle\begin{cases} C E \vert X \vert ^{(r-1)\beta } &\textit{if }\alpha < rp, \\ C E \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert ) &\textit{if }\alpha =rp, \\ C E \vert X \vert ^{rp} &\textit{if }\alpha >rp. \end{cases} $$

The following lemma is similar to Lemma 2.3.

Lemma 2.4

Let \(r\geq 1\), \(0< p<2\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\), and let X be a random variable. Let \(\{a_{nk},1\leq k\leq n, n\geq 1\}\) be an array of constants satisfying (1.1). If \(u>p\) and \(q>\max \{\alpha , (r-1)\beta \}\), then

$$\begin{aligned}& \sum^{\infty }_{n=1}n^{r-2-q/p+q/u} (\log n)^{q} \sum^{n}_{k=1}P \bigl( \vert a_{nk}X \vert >n^{1/u} \bigr) \\& \quad \leq \textstyle\begin{cases} C E \vert X \vert ^{(r-1)\beta } &\textit{if }\alpha < rp, \\ C E \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert ) &\textit{if }\alpha =rp, \\ C E \vert X \vert ^{rp} &\textit{if }\alpha >rp. \end{cases}\displaystyle \end{aligned}$$
(2.1)

Proof

The proof is similar to that of Lemma 2.3 (Lemma 2.2 in Chen and Sung [6]).

Case 1: \(\alpha \leq rp\). We observe by the Markov inequality that, for any \(s>0\),

$$\begin{aligned} &P \bigl( \vert a_{nk}X \vert >n^{1/u} \bigr) \\ &\quad =P \bigl( \vert a_{nk}X \vert >n^{1/u}, \vert X \vert >n^{1/\beta } \bigr)+P \bigl( \vert a_{nk}X \vert >n^{1/u}, \vert X \vert \leq n^{1/\beta } \bigr) \\ &\quad \leq n^{-\alpha /u} \vert a_{nk} \vert ^{\alpha }E \vert X \vert ^{\alpha }I\bigl( \vert X \vert >n^{1/\beta } \bigr)+n^{-s/u} \vert a_{nk} \vert ^{s}E \vert X \vert ^{s}I\bigl( \vert X \vert \leq n^{1/\beta }\bigr). \end{aligned}$$
(2.2)

It is easy to show that

$$\begin{aligned} &\sum^{\infty }_{n=1}n^{r-2-q/p+q/u} (\log n)^{q} \cdot n^{-\alpha /u} \Biggl(\sum ^{n}_{k=1} \vert a_{nk} \vert ^{\alpha } \Biggr)E \vert X \vert ^{\alpha }I\bigl( \vert X \vert >n^{1/ \beta }\bigr) \\ &\quad \leq C\sum^{\infty }_{n=1}n^{r-1-q/p+q/u-\alpha /u} (\log n)^{q} E \vert X \vert ^{ \alpha }I\bigl( \vert X \vert >n^{1/\beta }\bigr) \\ &\quad \leq C\sum^{\infty }_{n=1}n^{r-1-\alpha /p} E \vert X \vert ^{\alpha }I\bigl( \vert X \vert >n^{1/ \beta } \bigr) \\ &\quad \leq \textstyle\begin{cases} CE \vert X \vert ^{(r-1)\beta } &\text{if }\alpha < rp, \\ CE \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert ) &\text{if }\alpha =rp. \end{cases}\displaystyle \end{aligned}$$
(2.3)

Taking an s such that \(\max \{\alpha , (r-1)\beta \}< s< q\), we have

$$\begin{aligned} &\sum^{\infty }_{n=1}n^{r-2-q/p+q/u} (\log n)^{q} \cdot n^{-s/u} \Biggl( \sum ^{n}_{k=1} \vert a_{nk} \vert ^{s} \Biggr)E \vert X \vert ^{s}I\bigl( \vert X \vert \leq n^{1/\beta }\bigr) \\ &\quad \leq C\sum^{\infty }_{n=1}n^{r-2-q/p+q/u-s/u + s/\alpha } (\log n)^{q} E \vert X \vert ^{s}I\bigl( \vert X \vert \leq n^{1/\beta }\bigr) \\ &\quad \leq C\sum^{\infty }_{n=1}n^{r-2-s/p + s/\alpha } E \vert X \vert ^{s}I\bigl( \vert X \vert \leq n^{1/ \beta }\bigr) \\ &\quad =C\sum^{\infty }_{n=1}n^{r-2-s/\beta }E \vert X \vert ^{s}I\bigl( \vert X \vert \leq n^{1/\beta }\bigr) \\ &\quad \leq CE \vert X \vert ^{(r-1)\beta }, \end{aligned}$$
(2.4)

since \(s>(r-1)\beta \). Then (2.1) holds by (2.2)–(2.4).

Case 2: \(\alpha >rp\). The proof is similar to that of Case 1. However, we use a different truncation for X. We observe by the Markov inequality that, for any \(t>0\),

$$\begin{aligned} &P \bigl( \vert a_{nk}X \vert >n^{1/u} \bigr) \\ &\quad =P \bigl( \vert a_{nk}X \vert >n^{1/u}, \vert X \vert >n^{1/p} \bigr)+P \bigl( \vert a_{nk}X \vert >n^{1/u}, \vert X \vert \leq n^{1/p} \bigr) \\ &\quad \leq n^{-t/u} \vert a_{nk} \vert ^{t} E \vert X \vert ^{t} I\bigl( \vert X \vert >n^{1/p} \bigr)+n^{-\alpha /u} \vert a_{nk} \vert ^{\alpha }E \vert X \vert ^{\alpha }I\bigl( \vert X \vert \leq n^{1/p} \bigr). \end{aligned}$$
(2.5)

Taking \(0< t< rp\), we have

$$\begin{aligned} &\sum^{\infty }_{n=1}n^{r-2-q/p+q/u} (\log n)^{q} \cdot n^{-t/u} \Biggl( \sum ^{n}_{k=1} \vert a_{nk} \vert ^{t} \Biggr)E \vert X \vert ^{t}I\bigl( \vert X \vert >n^{1/p}\bigr) \\ &\quad \le C\sum^{\infty }_{n=1}n^{r-1-q/p+q/u-t/u} (\log n)^{q} E \vert X \vert ^{t}I\bigl( \vert X \vert >n^{1/p}\bigr) \\ &\quad \leq C\sum^{\infty }_{n=1}n^{r-1-t/p}E \vert X \vert ^{t}I\bigl( \vert X \vert >n^{1/p} \bigr) \\ &\quad \leq CE \vert X \vert ^{rp}. \end{aligned}$$
(2.6)

It is easy to show that

$$\begin{aligned} &\sum^{\infty }_{n=1}n^{r-2-q/p+q/u} (\log n)^{q} \cdot n^{-\alpha /u} \Biggl(\sum ^{n}_{k=1} \vert a_{nk} \vert ^{\alpha } \Biggr)E \vert X \vert ^{\alpha }I\bigl( \vert X \vert \leq n^{1/p}\bigr) \\ &\quad \leq C\sum^{\infty }_{n=1}n^{r-1-q/p+q/u-\alpha /u} (\log n)^{q} E \vert X \vert ^{\alpha }I\bigl( \vert X \vert \leq n^{1/p}\bigr) \\ &\quad \leq C\sum^{\infty }_{n=1}n^{r-1-\alpha /p} E \vert X \vert ^{\alpha }I\bigl( \vert X \vert \leq n^{1/p}\bigr) \\ &\quad \leq CE \vert X \vert ^{rp}, \end{aligned}$$
(2.7)

since \(\alpha >rp\). Then (2.1) holds by (2.5)–(2.7). □

The following lemma can be found in Chen and Sung [6].

Lemma 2.5

Let \(r\geq 1\), \(0< p<2\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\), and let X be a random variable. Let \(\{a_{nk},1\leq k\leq n, n\geq 1\}\) be an array of constants satisfying (1.1). Then, for any \(s>\max \{\alpha , (r-1)\beta \}\),

$$\begin{aligned} &\sum^{\infty }_{n=1}n^{r-2-s/p}\sum ^{n}_{k=1}E \vert a_{nk}X \vert ^{s}I\bigl( \vert a_{nk}X \vert \leq n^{1/p}\bigr) \\ &\quad \leq \textstyle\begin{cases} C E \vert X \vert ^{(r-1)\beta } &\textit{if }\alpha < rp, \\ C E \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert ) &\textit{if }\alpha =rp, \\ C E \vert X \vert ^{rp} &\textit{if }\alpha >rp. \end{cases}\displaystyle \end{aligned}$$

The following lemma is similar to Lemma 2.5. However, the truncation of X is different, and the factor \((\log n)^{s}\) is added in Lemma 2.6.

Lemma 2.6

Let \(r\geq 1\), \(0< p<2\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\), and let X be a random variable. Let \(\{a_{nk},1\leq k\leq n, n\geq 1\}\) be an array of constants satisfying (1.1). Then, for any \(u>p\) and \(s>\max \{\alpha , (r-1)\beta \}\),

$$\begin{aligned} &\sum^{\infty }_{n=1}n^{r-2-s/p} (\log n)^{s} \sum^{n}_{k=1}E \vert a_{nk}X \vert ^{s}I\bigl( \vert a_{nk}X \vert \leq n^{1/u}\bigr) \\ &\quad \leq \textstyle\begin{cases} C E \vert X \vert ^{(r-1)\beta } &\textit{if }\alpha < rp, \\ C E \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert ) &\textit{if }\alpha =rp, \\ C E \vert X \vert ^{rp} &\textit{if }\alpha >rp. \end{cases}\displaystyle \end{aligned}$$

Proof

Since \(u>p\), we have, for any \(0< s'< s\),

$$\begin{aligned} &\sum^{\infty }_{n=1} n^{r-2-s/p} ( \log n)^{s} \sum^{n}_{k=1}E \vert a_{nk}X \vert ^{s}I\bigl( \vert a_{nk}X \vert \leq n^{1/u}\bigr) \\ &\quad \le \sum^{\infty }_{n=1} n^{r-2-s/p +(s-s')/u} (\log n)^{s} \sum^{n}_{k=1}E \vert a_{nk}X \vert ^{s'} I\bigl( \vert a_{nk}X \vert \leq n^{1/u}\bigr) \\ &\quad \le C \sum^{\infty }_{n=1} n^{r-2-s'/p} \sum^{n}_{k=1}E \vert a_{nk}X \vert ^{s'} I\bigl( \vert a_{nk}X \vert \leq n^{1/u}\bigr) \\ &\quad \le C \sum^{\infty }_{n=1} n^{r-2-s'/p} \sum^{n}_{k=1}E \vert a_{nk}X \vert ^{s'} I\bigl( \vert a_{nk}X \vert \leq n^{1/p}\bigr). \end{aligned}$$

Now we choose an \(s'\) such that \(s>s'>\max \{\alpha , (r-1)\beta \}\). Then the result follows directly from Lemma 2.5. □

The following lemma can be found in Chen and Sung [6].

Lemma 2.7

Let \(1\leq p<2\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\), and let X be a random variable. Let \(\{a_{nk},1\leq k\leq n, n\geq 1\}\) be an array of constants satisfying (1.1). If \(E|X|^{p}<\infty \), then

$$ n^{-1/p}\sum^{n}_{k=1}E \vert a_{nk}X \vert I\bigl( \vert a_{nk}X \vert > n^{1/p}\bigr)\rightarrow 0 $$

as \(n\rightarrow \infty \).

The following lemma is similar to Lemma 2.7.

Lemma 2.8

Let \(p\ge 1\) and let X be a random variable with \(E|X|^{q}<\infty \) for some \(q>p\). Let \(\{a_{nk}, 1\leq k\leq n, n\geq 1\}\) be an array of constants satisfying (1.1) for some \(\alpha >p\). Then, for any \(u>p\) such that \(1/u > 1/(q-1) \cdot \max \{ 1-1/p, q/\alpha -1/p\}\),

$$ n^{-1/p} \sum_{k=1}^{n} E \vert a_{nk}X \vert I \bigl( \vert a_{nk}X \vert >n^{1/u} \bigr) \rightarrow 0 $$

as \(n\rightarrow \infty \).

Proof

From (1.1), we have

$$ \sum_{k=1}^{n} \vert a_{nk} \vert ^{q} \leq \textstyle\begin{cases} Cn &\text{if }q\le \alpha , \\ Cn^{q/\alpha } & \text{if }q>\alpha . \end{cases} $$
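Indeed, when \(q\le \alpha \), the Hölder inequality and (1.1) give

$$ \sum_{k=1}^{n} \vert a_{nk} \vert ^{q}\le n^{1-q/\alpha } \Biggl(\sum_{k=1}^{n} \vert a_{nk} \vert ^{\alpha } \Biggr)^{q/\alpha }\le Cn, $$

while when \(q>\alpha \), the bound \(\max_{1\le k\le n}|a_{nk}|\le (\sum_{k=1}^{n}|a_{nk}|^{\alpha })^{1/\alpha }\le Cn^{1/\alpha }\) gives

$$ \sum_{k=1}^{n} \vert a_{nk} \vert ^{q}\le \Bigl(\max_{1\le k\le n} \vert a_{nk} \vert \Bigr)^{q-\alpha }\sum_{k=1}^{n} \vert a_{nk} \vert ^{\alpha }\le Cn^{q/\alpha }. $$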

It follows that

$$\begin{aligned} &n^{-1/p} \sum_{k=1}^{n} E \vert a_{nk}X \vert I \bigl( \vert a_{nk}X \vert >n^{1/u} \bigr) \\ &\quad \le n^{-1/p-(q-1)/u} \sum_{k=1}^{n} E \vert a_{nk}X \vert ^{q}I \bigl( \vert a_{nk}X \vert > n^{1/u} \bigr) \\ &\quad \le n^{-1/p-(q-1)/u} E \vert X \vert ^{q} \sum _{k=1}^{n} \vert a_{nk} \vert ^{q} \\ &\quad \le \textstyle\begin{cases} C n^{1-1/p-(q-1)/u} & \text{if }q\le \alpha , \\ C n^{q/\alpha -1/p-(q-1)/u} & \text{if }q> \alpha \end{cases}\displaystyle \\ &\quad \rightarrow 0 \end{aligned}$$

as \(n\to \infty \), since \(1/u > 1/(q-1) \cdot \max \{ 1-1/p, q/\alpha -1/p\}\). □

3 Proofs of the main results

In this section, we present proofs of the main results.

Proof of Theorem 1.1

Without loss of generality, we may assume that \(X_{n}\ge 0\) for \(n\ge 1\) and \(a_{nk}\ge 0\) for \(1\le k\le n\) and \(n\ge 1\).

Since \(r>1\) and \(\alpha >p\), we can choose a constant u such that \(1/p >1/u>1/(rp-1)\cdot \max \{1-1/p, rp/\alpha -1/p\}\). For \(1\leq k\leq n\) and \(n\geq 1\), we define

$$\begin{aligned}& X_{nk}= a_{nk}X_{k} I\bigl(a_{nk}X_{k} \leq n^{1/u}\bigr) +n^{1/u} I\bigl(a_{nk}X_{k}> n^{1/u}\bigr) , \\& Y_{nk}= \bigl(a_{nk}X_{k}-n^{1/u} \bigr) I\bigl(n^{1/u}< a_{nk}X_{k}\leq n^{1/p}\bigr) + \bigl(n^{1/p}-n^{1/u}\bigr)I \bigl(a_{nk}X_{k}> n^{1/p}\bigr), \\& Z_{nk}= \bigl(a_{nk}X_{k}-n^{1/p} \bigr)I\bigl(a_{nk}X_{k}>n^{1/p}\bigr). \end{aligned}$$

Then \(X_{nk}+Y_{nk}+Z_{nk}=a_{nk}X_{k}\) for \(1\le k\le n\) and \(n\ge 1\), and \(\{X_{nk}, 1\le k\le n\}\), \(\{Y_{nk}, 1\le k\le n\}\), \(\{Z_{nk}, 1\le k\le n\}\) are sequences of WOD random variables. Note that

$$\begin{aligned} & \Biggl\{ \max_{1\leq m\leq n} \Biggl\vert \sum ^{m}_{k=1}a_{nk}(X_{k}-EX_{k}) \Biggr\vert > \varepsilon n^{1/p} \Biggr\} \\ &\quad \subset \bigcup^{n}_{k=1}\bigl\{ \vert a_{nk}X_{k} \vert >n^{1/p}\bigr\} \cup \Biggl\{ \max_{1 \leq m\leq n} \Biggl\vert \sum ^{m}_{k=1}(X_{nk}+Y_{nk}-a_{nk}EX_{k}) \Biggr\vert > \varepsilon n^{1/p} \Biggr\} . \end{aligned}$$

Then by Lemmas 2.3 and 2.7, to prove (1.4), it is enough to prove that

$$ \sum^{\infty }_{n=1}n^{r-2}P \Biggl(\max_{1\leq m\leq n} \Biggl\vert \sum ^{m}_{k=1}(X_{nk}-EX_{nk}) \Biggr\vert > \varepsilon n^{1/p} \Biggr)< \infty ,\quad \forall \varepsilon >0, $$
(3.1)

and

$$ \sum^{\infty }_{n=1}n^{r-2}P \Biggl(\max_{1\leq m\leq n} \Biggl\vert \sum ^{m}_{k=1}(Y_{nk}-EY_{nk}) \Biggr\vert > \varepsilon n^{1/p} \Biggr)< \infty ,\quad \forall \varepsilon >0. $$
(3.2)

Set \(s\in (p, \min \{2, \alpha \})\) if \(\alpha \leq rp\) and \(s\in (p, \min \{2, rp\})\) if \(\alpha >rp\) (note that such an s cannot be chosen when \(r=1\)). Then \(p< s<\min \{2,\alpha \}\), and \(E|X|^{s}<\infty \). Taking \(q>\max \{2,\alpha , (r-1)\beta , 2p(r-1+\tau )/(s-p)\}\), we have by the Markov inequality and Lemma 2.2

$$\begin{aligned}& P \Biggl(\max_{1\leq m\leq n} \Biggl\vert \sum ^{m}_{k=1}(X_{nk}-EX_{nk}) \Biggr\vert > \varepsilon n^{1/p} \Biggr) \\& \quad \leq Cn^{-q/p} (\log n)^{q} g(n) \Biggl(\sum ^{n}_{k=1}E(X_{nk}-EX_{nk})^{2} \Biggr)^{q/2} \\& \qquad {}+Cn^{-q/p} (\log n)^{q} \sum ^{n}_{k=1}E \vert X_{nk}-EX_{nk} \vert ^{q} \\& \quad \leq Cn^{-q/p+\tau } (\log n)^{q} \Biggl(\sum ^{n}_{k=1}E(X_{nk}-EX_{nk})^{2} \Biggr)^{q/2} \\& \qquad {}+Cn^{-q/p} (\log n)^{q} \sum ^{n}_{k=1}E \vert X_{nk}-EX_{nk} \vert ^{q}. \end{aligned}$$
(3.3)

Since \(q>2p(r-1+\tau )/(s-p)\), we have \(r-2+\tau +q(1-s/p)/2<-1\). Moreover, since \(s<\alpha \), the Hölder inequality and (1.1) give \(\sum_{k=1}^{n}|a_{nk}|^{s}=O(n)\). It follows that

$$\begin{aligned} &\sum^{\infty }_{n=1}n^{r-2}\cdot n^{-q/p+\tau } (\log n)^{q} \Biggl( \sum ^{n}_{k=1}E(X_{nk}-EX_{nk})^{2} \Biggr)^{q/2} \\ &\quad \leq \sum^{\infty }_{n=1}n^{r-2} \cdot n^{-q/p+\tau } (\log n)^{q} \Biggl(\sum ^{n}_{k=1}EX^{2}_{nk} \Biggr)^{q/2} \\ &\quad = \sum^{\infty }_{n=1}n^{r-2}\cdot n^{-q/p+\tau } (\log n)^{q} \\ &\qquad {}\times \Biggl( \sum ^{n}_{k=1}E \bigl(a_{nk}X_{k} I\bigl(a_{nk}X_{k}\le n^{1/u}\bigr) +n^{1/u} I\bigl(a_{nk}X_{k}>n^{1/u} \bigr) \bigr)^{2} \Biggr)^{q/2} \\ &\quad \leq \sum^{\infty }_{n=1}n^{r-2} \cdot n^{-q/p+\tau } (\log n)^{q} \\ &\qquad {}\times \Biggl(\sum ^{n}_{k=1}E \bigl( a_{nk}X_{k} I\bigl(a_{nk}X_{k}\le n^{1/p}\bigr) +n^{1/p} I\bigl(a_{nk}X_{k}>n^{1/p} \bigr) \bigr)^{2} \Biggr)^{q/2} \\ &\quad \leq \sum^{\infty }_{n=1}n^{r-2} \cdot n^{-q/p+\tau } (\log n)^{q} \\ &\qquad {}\times \Biggl(\sum ^{n}_{k=1} \bigl\{ n^{(2-s)/p}E(a_{nk}X_{k})^{s} I\bigl(a_{nk}X_{k} \le n^{1/p}\bigr) +n^{2/p} P\bigl(a_{nk}X_{k}>n^{1/p} \bigr) \bigr\} \Biggr)^{q/2} \\ &\quad \leq \sum^{\infty }_{n=1}n^{r-2} \cdot n^{-q/p+\tau +q(2-s)/(2p)} ( \log n)^{q} \Biggl(\sum ^{n}_{k=1}E(a_{nk}X_{k})^{s} \Biggr)^{q/2} \\ &\quad \leq C\sum^{\infty }_{n=1}n^{r-2+\tau +q(1-s/p)/2}( \log n)^{q}< \infty . \end{aligned}$$
(3.4)

By Lemmas 2.4 and 2.6, we have

$$\begin{aligned} &\sum^{\infty }_{n=1}n^{r-2}\cdot n^{-q/p} (\log n)^{q} \sum^{n}_{k=1}E \vert X_{nk}-EX_{nk} \vert ^{q} \\ &\quad \le C\sum^{\infty }_{n=1}n^{r-2} \cdot n^{-q/p} (\log n)^{q} \sum ^{n}_{k=1}EX^{q}_{nk} \\ &\quad = C\sum^{\infty }_{n=1}n^{r-2} \cdot n^{-q/p} (\log n)^{q} \sum ^{n}_{k=1} \bigl\{ E(a_{nk}X_{k})^{q}I \bigl(a_{nk}X_{k}\leq n^{1/u}\bigr)+ n^{q/u} P\bigl(a_{nk}X_{k}>n^{1/u} \bigr) \bigr\} \\ &\quad < \infty . \end{aligned}$$
(3.5)

Hence (3.1) holds by (3.3)–(3.5).

Now we prove (3.2). By Lemmas 2.7 and 2.8,

$$\begin{aligned} n^{-1/p}\sum_{k=1}^{n} EY_{nk} &\leq n^{-1/p}\sum_{k=1}^{n} \bigl\{ Ea_{nk}X_{k}I\bigl(n^{1/u}< a_{nk}X_{k} \le n^{1/p}\bigr) +n^{1/p} P\bigl(a_{nk}X_{k}>n^{1/p} \bigr) \bigr\} \\ &\leq n^{-1/p}\sum_{k=1}^{n} \bigl\{ Ea_{nk}X_{k}I\bigl(n^{1/u}< a_{nk}X_{k} \le n^{1/p}\bigr) + Ea_{nk}X_{k}I \bigl(a_{nk}X_{k}>n^{1/p}\bigr) \bigr\} \\ & \to 0 \end{aligned}$$

as \(n\to \infty \). Hence there exists an integer N such that \(n^{-1/p}\sum_{k=1}^{n} EY_{nk}<\varepsilon /4\) if \(n>N\). It follows that, for \(n>N\),

$$\begin{aligned} &P \Biggl( \max_{1\le m\le n} \Biggl\vert \sum _{k=1}^{m} (Y_{nk}-EY_{nk}) \Biggr\vert > \varepsilon n^{1/p} \Biggr) \\ &\quad \leq P \Biggl( \sum_{k=1}^{n} (Y_{nk} +EY_{nk})>\varepsilon n^{1/p} \Biggr) \\ &\quad \leq P \Biggl( \sum_{k=1}^{n} Y_{nk} > \frac{3 \varepsilon }{4} n^{1/p} \Biggr) \\ &\quad = P \Biggl( \sum_{k=1}^{n} (Y_{nk}-EY_{nk}+EY_{nk}) > \frac{3 \varepsilon }{4} n^{1/p} \Biggr) \\ &\quad \leq P \Biggl( \sum_{k=1}^{n} (Y_{nk}-EY_{nk}) > \frac{\varepsilon }{2} n^{1/p} \Biggr). \end{aligned}$$

Then we have by the Markov inequality and Lemma 2.1, for \(n>N\),

$$\begin{aligned} &P \Biggl( \max_{1\le m\le n} \Biggl\vert \sum _{k=1}^{m} (Y_{nk}-EY_{nk}) \Biggr\vert > \varepsilon n^{1/p} \Biggr) \\ &\quad \le P \Biggl( \Biggl\vert \sum_{k=1}^{n} (Y_{nk}-EY_{nk}) \Biggr\vert > \frac{\varepsilon }{2} n^{1/p} \Biggr) \\ &\quad \leq C n^{-q/p} g(n) \Biggl(\sum^{n}_{k=1}E(Y_{nk}-EY_{nk})^{2} \Biggr)^{q/2}+C n^{-q/p} \sum^{n}_{k=1}E \vert Y_{nk}-EY_{nk} \vert ^{q} \\ &\quad \leq C n^{-q/p +\tau } \Biggl(\sum^{n}_{k=1}E(Y_{nk}-EY_{nk})^{2} \Biggr)^{q/2}+C n^{-q/p} \sum^{n}_{k=1}E \vert Y_{nk}-EY_{nk} \vert ^{q}. \end{aligned}$$
(3.6)

As in the proof of (3.4), we obtain

$$\begin{aligned} &\sum_{n=1}^{\infty }n^{r-2} \cdot n^{-q/p+\tau } \Biggl(\sum^{n}_{k=1}E(Y_{nk}-EY_{nk})^{2} \Biggr)^{q/2} \\ &\quad \le \sum_{n=1}^{\infty }n^{r-2} \cdot n^{-q/p+\tau } \Biggl(\sum^{n}_{k=1}EY^{2}_{nk} \Biggr)^{q/2} \\ &\quad \le \sum_{n=1}^{\infty }n^{r-2} \cdot n^{-q/p+\tau } \Biggl(\sum^{n}_{k=1} \bigl\{ E(a_{nk}X_{k})^{2}I \bigl(a_{nk}X_{k}\le n^{1/p} \bigr)+n^{2/p}P\bigl(a_{nk}X_{k}>n^{1/p} \bigr) \bigr\} \Biggr)^{q/2} \\ &\quad \le \sum_{n=1}^{\infty }n^{r-2} \cdot n^{-q/p+\tau } \Biggl( n^{(2-s)/p} \sum ^{n}_{k=1} E(a_{nk}X_{k})^{s} \Biggr)^{q/2} \\ &\quad < \infty . \end{aligned}$$
(3.7)

By Lemmas 2.3 and 2.5, we have

$$\begin{aligned} &\sum_{n=1}^{\infty }n^{r-2} \cdot n^{-q/p} \sum^{n}_{k=1}E \vert Y_{nk}-EY_{nk} \vert ^{q} \\ &\quad \le C\sum_{n=1}^{\infty }n^{r-2} \cdot n^{-q/p} \sum^{n}_{k=1}E \vert Y_{nk} \vert ^{q} \\ &\quad \le C\sum_{n=1}^{\infty }n^{r-2} \cdot n^{-q/p} \sum^{n}_{k=1} \bigl\{ E(a_{nk}X_{k})^{q}I \bigl(a_{nk}X_{k}\le n^{1/p} \bigr)+n^{q/p} P\bigl(a_{nk}X_{k}>n^{1/p} \bigr) \bigr\} \\ &\quad < \infty . \end{aligned}$$
(3.8)

Hence (3.2) holds by (3.6)–(3.8). □

Proof of Theorem 1.2

Without loss of generality, we may assume that \(X_{n}\ge 0\) for \(n\ge 1\) and \(a_{nk}\ge 0\) for \(1\le k\le n\) and \(n\ge 1\). For simplicity, we may assume that \(\sum_{k=1}^{n} a_{nk}^{\alpha }\le n\) for \(n\ge 1\). Since \(EX<\infty \), there exists a positive integer N such that \(EXI(X>N)<\varepsilon /4\). For \(n\ge 1\), we define

$$\begin{aligned}& X_{n}' =X_{n}I(X_{n}\le N)+NI(X_{n}>N), \\& X_{n}'' =X_{n}-X_{n}'. \end{aligned}$$

Then \(\{X_{n}', n\ge 1\}\) is still a sequence of WOD random variables, and \(\{a_{nk}X_{k}', 1\le k\le n\}\) is also a sequence of WOD random variables. To prove (1.5), it is enough to show that

$$ I_{1}:=\sum^{\infty }_{n=1} \frac{1}{n} P \Biggl(\max_{1\leq m\leq n} \Biggl\vert \sum ^{m}_{k=1}a_{nk} \bigl(X_{k}'-EX_{k}'\bigr) \Biggr\vert >\varepsilon n \Biggr)< \infty ,\quad \forall \varepsilon >0, $$

and

$$ I_{2}:=\sum^{\infty }_{n=1} \frac{1}{n} P \Biggl(\max_{1\leq m\leq n} \Biggl\vert \sum ^{m}_{k=1}a_{nk} \bigl(X_{k}''-EX_{k}'' \bigr) \Biggr\vert >\varepsilon n \Biggr)< \infty , \quad \forall \varepsilon >0. $$

Taking \(q>\max \{2, \alpha , \tau /(1-1/\min \{2,\alpha \})\}\), we have by the Markov inequality and Lemma 2.2

$$\begin{aligned} I_{1} &\le C \sum_{n=1}^{\infty }n^{-1-q} (\log n)^{q} \Biggl\{ g(n) \Biggl( \sum _{k=1}^{n} a^{2}_{nk}E \bigl(X_{k}'-EX_{k}' \bigr)^{2} \Biggr)^{q/2} +\sum _{k=1}^{n} E \bigl\vert a_{nk} \bigl(X_{k}'-EX_{k}'\bigr) \bigr\vert ^{q} \Biggr\} \\ &\le C \sum_{n=1}^{\infty }n^{-1-q} (\log n)^{q} \Biggl\{ g(n) \Biggl( \sum _{k=1}^{n} a_{nk}^{2} \Biggr)^{q/2} + \sum_{k=1}^{n} a_{nk}^{q} \Biggr\} \\ &\le C \sum_{n=1}^{\infty }n^{-1-q} (\log n)^{q} \bigl\{ g(n)n^{q/ \min \{2,\alpha \}} +n^{q/\alpha } \bigr\} \\ &\le C \sum_{n=1}^{\infty }n^{-1-q+q/\min \{2, \alpha \}} (\log n)^{q} g(n) \\ &\le C \sum_{n=1}^{\infty }n^{-1-q+q/\min \{2, \alpha \}+\tau } (\log n)^{q} \\ &< \infty , \end{aligned}$$

since \(q>\tau /(1-1/\min \{2, \alpha \})\).

To prove \(I_{2}<\infty \), we define, for \(1\le k\le n\) and \(n>N\),

$$ Y_{nk}=(X_{k}-N)I(N< X_{k}\le n)+(n-N)I(X_{k}>n). $$

Then we can rewrite \(X_{k}''\) as

$$ X_{k}''=Y_{nk}+(X_{k}-n)I(X_{k}>n)\quad \text{for $1\le k\le n$ and $n> N$}, $$

and so \(X_{k}''=Y_{nk}\) if \(X_{k}\le n\).

Hence we have, for \(n> N\),

$$\begin{aligned} &P \Biggl(\max_{1\leq m\leq n} \Biggl\vert \sum ^{m}_{k=1}a_{nk}\bigl(X_{k}''-EX_{k}'' \bigr) \Biggr\vert > \varepsilon n \Biggr) \\ &\quad \le P \Biggl(\sum^{n}_{k=1}a_{nk} \bigl(X_{k}''+EX_{k}'' \bigr)>\varepsilon n \Biggr) \\ &\quad \le \sum_{k=1}^{n} P ( X_{k}>n )+P \Biggl( \sum_{k=1}^{n} a_{nk} Y_{nk}+\sum_{k=1}^{n} a_{nk}EX_{k}'' > \varepsilon n \Biggr) \\ &\quad =\sum_{k=1}^{n} P ( X_{k}>n )+P \Biggl( \sum_{k=1}^{n} a_{nk} (Y_{nk}-EY_{nk}) +\sum _{k=1}^{n} a_{nk}\bigl(EY_{nk}+EX_{k}'' \bigr) > \varepsilon n \Biggr). \end{aligned}$$

Noting that

$$ \sum_{k=1}^{n} a_{nk} \bigl(EY_{nk}+EX_{k}'' \bigr) \le 2\sum_{k=1}^{n} a_{nk}EX_{k}'' \le 2EXI(X>N) \sum_{k=1}^{n} a_{nk}< \varepsilon n/2, $$

we have by the Markov inequality, for any \(q>0\),

$$\begin{aligned} I_{2}&\le \sum_{n=1}^{N} n^{-1}P \Biggl( \max_{1\leq m\leq n} \Biggl\vert \sum ^{m}_{k=1}a_{nk} \bigl(X_{k}''-EX_{k}'' \bigr) \Biggr\vert > \varepsilon n \Biggr) \\ &\quad {} + \sum_{n=N+1}^{\infty }n^{-1} \Biggl\{ \sum_{k=1}^{n} P ( X_{k}>n )+P \Biggl( \sum_{k=1}^{n} a_{nk} (Y_{nk}-EY_{nk}) > \varepsilon n/2 \Biggr) \Biggr\} \\ &\le C+\sum_{n=1}^{\infty }P( X>n) + \sum _{n=N+1}^{\infty }n^{-1} P \Biggl( \sum_{k=1}^{n} a_{nk} (Y_{nk}-EY_{nk}) >\varepsilon n/2 \Biggr) \\ &\le C+ EX + C \sum_{n=N+1}^{\infty }n^{-1-q} E \Biggl\vert \sum_{k=1}^{n} a_{nk} (Y_{nk}-EY_{nk}) \Biggr\vert ^{q}. \end{aligned}$$
(3.9)

We now consider two cases: \(1<\alpha \le 2\) and \(\alpha >2\).

Case 1: \(1<\alpha \le 2\). In this case, we take \(q=\alpha \). Since \(\{a_{nk}Y_{nk}, 1 \le k\le n \}\) is a sequence of WOD random variables, we have by Lemma 2.1

$$\begin{aligned} &\sum_{n=N+1}^{\infty }n^{-1-q} E \Biggl\vert \sum_{k=1}^{n} a_{nk} (Y_{nk}-EY_{nk}) \Biggr\vert ^{q} \\ &\quad \le \sum_{n=N+1}^{\infty }n^{-1-\alpha } g(n) \sum_{k=1}^{n} a^{\alpha }_{nk} E \vert Y_{nk}-EY_{nk} \vert ^{\alpha } \\ &\quad \le C \sum_{n=1}^{\infty }n^{-\alpha } g(n) \bigl\{ EX^{\alpha }I(X\le n)+n^{\alpha }P(X>n) \bigr\} . \end{aligned}$$
(3.10)

Since \(g(x)/x^{\tau }\downarrow \) and \(0<\tau <\alpha -1\), we have

$$\begin{aligned} \sum_{n=1}^{\infty }n^{-\alpha } g(n) EX^{\alpha }I(X\le n) &=\sum_{i=1}^{\infty }EX^{\alpha }I(i-1< X \le i) \sum_{n=i}^{\infty }n^{-\alpha } g(n) \\ & \le \sum_{i=1}^{\infty }EX^{\alpha }I(i-1< X \le i) g(i) i^{-\tau } \sum_{n=i}^{\infty }n^{-\alpha +\tau } \\ & \le C\sum_{i=1}^{\infty }EX^{\alpha }I(i-1< X \le i) g(i) i^{1-\alpha } \\ & \le C\sum_{i=1}^{\infty }EX^{1+\tau } I(i-1< X\le i) g(i) i^{-\tau } \\ & \le C\sum_{i=1}^{\infty }EXg(X) I(i-1< X\le i) \\ & =CEXg(X)< \infty . \end{aligned}$$
(3.11)

Since \(0< g(x)\uparrow \) and \(g(x)/x^{\tau }\downarrow \), we also have the following relation (see page 7 in Chen and Sung [7]):

$$ \sum_{n=1}^{\infty }g(n)P(X>n)\le 2^{\tau }EXg(X). $$
(3.12)

Hence \(I_{2}<\infty \) by (3.9)–(3.12).

Case 2: \(\alpha > 2\). In this case, we take \(q=2\). The proof is similar to that of Case 1 and is omitted. □

Proof of Corollary 1.1

Let \(a_{nk}=k^{s}l(k)/(n^{s} l(n))\) for \(1\le k\le n\) and \(n\ge 1\). Since \(s>-1\), we can take \(\alpha >1\) such that \(\alpha s>-1\). Then

$$ \sum_{k=1}^{n} \vert a_{nk} \vert ^{\alpha }=n^{-s\alpha } l^{-\alpha }(n) \sum_{k=1}^{n} k^{s\alpha }l^{\alpha }(k) =O(n). $$

By Theorem 1.2, we obtain

$$ \sum^{\infty }_{n=1} \frac{1}{n} P \Biggl( \max_{1\leq m\leq n} \Biggl\vert \sum ^{m}_{k=1} k^{s} l(k) (X_{k}-EX_{k}) \Biggr\vert >\varepsilon n^{1+s} l(n) \Biggr)< \infty ,\quad \forall \varepsilon >0, $$

which implies

$$ \sum^{\infty }_{i=1}P \Biggl( \max _{1\leq m\leq 2^{i} } \Biggl\vert \sum^{m}_{k=1} k^{s} l(k) (X_{k}-EX_{k}) \Biggr\vert > \varepsilon \max_{2^{i} \le n< 2^{i+1}}n^{1+s} l(n) \Biggr)< \infty ,\quad \forall \varepsilon >0. $$

By the Borel–Cantelli lemma,

$$ \frac{ \max_{1\leq m\leq 2^{i} } \vert \sum^{m}_{k=1} k^{s} l(k)(X_{k}-EX_{k} ) \vert }{\max_{2^{i} \le n< 2^{i+1}}n^{1+s} l(n)} \to 0\quad \text{a.s.} $$
(3.13)

From the fact that \(\max_{2^{i} \le n<2^{i+1}} l(n)/l(2^{i})\to 1\) as \(i\to \infty \) (see Bai and Su [1]), we also have \(\min_{2^{i} \le n<2^{i+1}} l(n)/l(2^{i})\to 1\) as \(i\to \infty \), since \(1/l(x)\) is also a slowly varying function. We can prove the result from (3.13) by a standard method. □

Proof of Corollary 1.2

We prove the result by considering the two cases \(r>1\) and \(r=1\).

Case 1: \(r>1\). Since \(s>-1/r\), we can choose \(\alpha >r\) such that \(s\alpha >-1\). Then

$$ \sum_{k=1}^{n} \vert a_{nk} \vert ^{\alpha }=n^{-s\alpha } \sum _{k=1}^{n} \vert c_{nk} \vert ^{\alpha }k^{s\alpha } \le B^{\alpha }n^{-s\alpha } \sum _{k=1}^{n} k^{s \alpha } =O(n). $$

Hence (1.6) holds by Theorem 1.1 with \(p=1\).

Case 2: \(r=1\). In this case, \(g(x)/x^{\tau }\downarrow \) for some \(0<\tau <\min \{1, |1+1/s|\}\).

If \(s>-1/2\), then \(0<\tau <1\). In this case, we take \(\alpha =2\). Then

$$ \sum_{k=1}^{n} \vert a_{nk} \vert ^{\alpha }=n^{-2s} \sum _{k=1}^{n} c^{2}_{nk} k^{2s} =O(n). $$

Hence (1.6) holds by Theorem 1.2.

If \(-1< s\le -1/2\), then \(0<\tau <-1-1/s\). In this case, we take α such that \(1+\tau <\alpha <-1/s\). Then

$$ \sum_{k=1}^{n} \vert a_{nk} \vert ^{\alpha }=n^{-\alpha s} \sum _{k=1}^{n} \vert c_{nk} \vert ^{\alpha }k^{\alpha s} =O(n). $$

Hence (1.6) also holds by Theorem 1.2. □

Availability of data and materials

Not applicable.

References

  1. Bai, Z., Su, C.: The complete convergence for partial sums of iid random variables. Sci. Sin., Ser. A 28, 1261–1277 (1985)


  2. Bai, Z.D., Cheng, P.E.: Marcinkiewicz strong laws for linear statistics. Stat. Probab. Lett. 46, 105–112 (2000)


  3. Baum, L.E., Katz, M.: Convergence rates in the law of large numbers. Trans. Am. Math. Soc. 120, 108–123 (1965)


  4. Chen, P.: Limiting behavior of weighted sums of negatively associated random variables. Math. Acta Sin. A 25, 489–495 (2005)


  5. Chen, P., Gan, S.: Limiting behavior of weighted sums of i.i.d. random variables. Stat. Probab. Lett. 77, 1589–1599 (2007)


  6. Chen, P., Sung, S.H.: On complete convergence and complete moment convergence for weighted sums of \(\rho ^{*}\)-mixing random variables. J. Inequal. Appl. 2018, 121 (2018)


  7. Chen, P., Sung, S.H.: A Spitzer-type law of large numbers for widely orthant dependent random variables. Stat. Probab. Lett. 154, 108544 (2019)


  8. Chow, Y.S.: Some convergence theorems for independent random variables. Ann. Math. Stat. 37, 1482–1493 (1966)


  9. Cuzick, J.: A strong law for weighted sums of i.i.d. random variables. J. Theor. Probab. 8, 625–641 (1995)


  10. Hsu, P.L., Robbins, H.: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 33, 25–31 (1947)


  11. Lang, J., He, T., Cheng, L., Lu, C., Wang, X.: Complete convergence for weighted sums of widely orthant-dependent random variables and its statistical application. Rev. Mat. Complut. (2020). https://doi.org/10.1007/s13163-020-00369-5


  12. Liang, H.Y.: Complete convergence for weighted sums of negatively associated random variables. Stat. Probab. Lett. 48, 317–325 (2000)


  13. Sung, S.H.: Complete convergence for weighted sums of \(\rho ^{*}\)-mixing random variables. Discrete Dyn. Nat. Soc. 2010, Article ID 630608 (2010)


  14. Wang, K., Wang, Y., Gao, Q.: Uniform asymptotics for the finite-time ruin probability of a dependent risk model with a constant interest rate. Methodol. Comput. Appl. Probab. 15, 109–124 (2013)


  15. Wang, X., Xu, C., Hu, T.-C., Volodin, A., Hu, S.: On complete convergence for widely orthant-dependent random variables and its applications in nonparametric regression models. Test 23, 607–629 (2014)


  16. Wang, Y., Wang, X.: Complete f-moment convergence for Sung’s type weighted sums and its application to the EV regression models. Stat. Pap. (2019). https://doi.org/10.1007/s00362-019-01112-z


  17. Wu, Y., Wang, X., Hu, S.: Complete moment convergence for weighted sums of weakly dependent random variables and its application in nonparametric regression model. Stat. Probab. Lett. 127, 56–66 (2017)


  18. Wu, Y., Wang, X., Shen, A.: Strong convergence properties for weighted sums of m-asymptotic negatively associated random variables and statistical applications. Stat. Pap. (2020). https://doi.org/10.1007/s00362-020-01179-z


  19. Wu, Y., Zhai, M., Peng, J.Y.: On the complete convergence for weighted sums of extended negatively dependent random variables. J. Math. Inequal. 13, 251–260 (2019)


  20. Zhang, A., Yu, Y., Yang, R., Shen, Y.: On the complete convergence of weighted sums for widely orthant dependent random variables. J. Math. Inequal. 12, 1063–1074 (2018)



Acknowledgements

The authors would like to thank the referees for the helpful comments.

Funding

The research of Pingyan Chen is supported by the National Natural Science Foundation of China (No. 71471075). The research of Soo Hak Sung is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1F1A1A01050160).

Author information

Contributions

All authors read and approved the manuscript.

Corresponding author

Correspondence to Soo Hak Sung.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Chen, P., Sung, S.H. Complete convergence for weighted sums of widely orthant-dependent random variables. J Inequal Appl 2021, 45 (2021). https://doi.org/10.1186/s13660-021-02574-2

