
Complete moment convergence of extended negatively dependent random variables

Abstract

In this paper, some results on the complete moment convergence of extended negatively dependent (END) random variables are established. The results in the paper improve and extend the corresponding ones of Qiu et al. (Acta Math. Appl. Sin. 40(3):436–448, 2017) under some weaker conditions. Our results also improve and extend the related known works in the literature.

Introduction

In many statistical models, it is not reasonable to assume that random variables are independent, so it is very meaningful to extend the concept of independence to dependence settings. One important class of dependent sequences is that of extended negatively dependent (END) random variables, whose definition we now recall.

Definition 1.1

The random variables \(\{X_{n},n\geq 1\}\) are said to be extended negatively dependent (END) if there exists a constant \(M>0\) such that both

$$ P(X_{1}>x_{1},X_{2}>x_{2}, \ldots,X_{n}>x_{n})\leq M\prod_{i=1}^{n}P(X_{i}>x_{i}) $$

and

$$ P(X_{1}\leq x_{1},X_{2}\leq x_{2}, \ldots,X_{n}\leq x_{n})\leq M\prod _{i=1}^{n}P(X_{i} \leq x_{i}) $$

hold for each \(n\geq 1\) and all real \(x_{1},x_{2},\ldots,x_{n}\).
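Before moving on, a minimal computational sketch may help fix ideas (illustrative only; the function name and parameter values below are hypothetical, not from the paper). Independent random variables satisfy both inequalities in Definition 1.1 with \(M=1\), since their joint tail probabilities factor exactly; the script checks the first inequality for three independent Bernoulli variables by exact enumeration:

```python
from itertools import product

def upper_tail(ps, xs):
    """Exact P(X_1 > x_1, ..., X_n > x_n) for independent Bernoulli(p_i)."""
    total = 0.0
    for outcome in product((0, 1), repeat=len(ps)):
        if all(o > x for o, x in zip(outcome, xs)):
            prob = 1.0
            for o, p in zip(outcome, ps):
                prob *= p if o == 1 else 1.0 - p
            total += prob
    return total

ps = (0.2, 0.5, 0.7)  # hypothetical success probabilities
M = 1.0               # independence gives END with M = 1
# thresholds below, inside, and above the support of each variable
for xs in product((-1.0, 0.0, 1.0), repeat=3):
    joint = upper_tail(ps, xs)
    marginals = 1.0
    for p, x in zip(ps, xs):
        marginals *= upper_tail((p,), (x,))
    # the first END inequality: joint tail <= M * product of marginal tails
    assert joint <= M * marginals + 1e-12
```

Genuinely END sequences that are not independent may need a constant \(M>1\); the definition only requires the products to dominate the joint probabilities up to such a uniform constant.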

The concept of END random variables was introduced by Liu [2]. Obviously, NOD (negatively orthant dependent) random variables (Joag-Dev and Proschan [3]) are END random variables with \(M=1\). Liu [2] pointed out that END random variables form a more comprehensive class: to some extent, they can model not only negative dependence but also positive dependence. Joag-Dev and Proschan [3] also pointed out that NA (negatively associated) random variables are NOD, whereas NOD random variables are not necessarily NA; consequently, NA random variables are END. Thus, it is interesting to investigate convergence properties of END random variables.

Since the work of Liu [2], many scholars have focused on the properties of END random variables, and many results have been obtained. For example, Liu [4] studied necessary and sufficient conditions for moderate deviations of dependent random variables with heavy tails; Chen et al. [5] established the strong law of large numbers for END random variables; Wu and Guan [6] presented convergence properties of the partial sums of END random variables; Shen [7] presented probability inequalities for END sequences and their applications; Wang and Wang [8] investigated large deviations for random sums of END random variables; Wang et al. and Qiu et al. [9–13] studied complete convergence of END random variables; etc.

Complete convergence plays a very important role in probability theory and mathematical statistics. The concept was introduced by Hsu and Robbins [14] as follows: a sequence \(\{U_{n},n\geq 1\}\) of random variables is said to converge completely to a constant θ if \(\sum_{n=1}^{\infty }P(\vert U_{n}-\theta \vert > \varepsilon ) <\infty \) for all \(\varepsilon >0\). In view of the Borel–Cantelli lemma, complete convergence implies that \(U_{n}\rightarrow \theta \) almost surely. Therefore, complete convergence is a very useful tool in establishing almost sure convergence for partial sums of random variables as well as for weighted sums of random variables.
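The Borel–Cantelli step just mentioned can be written out in one line: for each \(\varepsilon >0\),

$$ \sum_{n=1}^{\infty }P\bigl( \vert U_{n}-\theta \vert >\varepsilon \bigr)< \infty \quad \Longrightarrow \quad P\bigl( \vert U_{n}-\theta \vert > \varepsilon \mbox{ i.o.}\bigr)=0, $$

and letting \(\varepsilon \downarrow 0\) along a countable sequence gives \(U_{n}\rightarrow \theta \) almost surely.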

Let \(\{X_{n},n\geq 1\}\) be a sequence of random variables, and let \(a_{n} > 0\), \(b_{n} > 0\), \(\gamma > 0\). If \(\sum_{n=1}^{\infty }a_{n}E\{b^{-1}_{n}\vert X_{n}\vert - \varepsilon \}_{+}^{\gamma } <\infty \) for all \(\varepsilon >0\), then \(\{X_{n},n\geq 1\}\) is said to satisfy complete moment convergence (Chow [15]). It is well known that complete moment convergence implies complete convergence, i.e., complete moment convergence is the more general notion. The following result is from Chow [15].
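Why complete moment convergence is the stronger notion is a standard one-line check (recorded here for completeness, not taken from the paper): on the event \(\{b_{n}^{-1}\vert X_{n}\vert >2\varepsilon \}\) we have \(\{b_{n}^{-1}\vert X_{n}\vert -\varepsilon \}_{+}\geq \varepsilon \), so

$$ E\bigl\{ b_{n}^{-1} \vert X_{n} \vert -\varepsilon \bigr\} _{+}^{\gamma } \geq \varepsilon ^{\gamma }P\bigl( \vert X_{n} \vert >2\varepsilon b_{n}\bigr), $$

and hence \(\sum_{n=1}^{\infty }a_{n}E\{b^{-1}_{n}\vert X_{n}\vert -\varepsilon \}_{+}^{\gamma }<\infty \) for all \(\varepsilon >0\) forces \(\sum_{n=1}^{\infty }a_{n}P(\vert X_{n}\vert >\varepsilon b_{n})<\infty \) for all \(\varepsilon >0\).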

Theorem A

Let \(r>1\), \(1\leq p<2\), and let \(\{X,X_{n}, n \geq 1\}\) be a sequence of independent identically distributed random variables with \(EX_{1}=0\). If \(E\{\vert X_{1}\vert ^{rp}+\vert X_{1}\vert \log (1+\vert X_{1}\vert )\}<\infty \), then

$$ \sum_{n=1}^{\infty } n^{r-2-1/p}E\Biggl\{ \Biggl\vert \sum_{i=1}^{n} X_{i} \Biggr\vert - \varepsilon n^{1/p}\Biggr\} _{+}< \infty ,\quad \forall \varepsilon >0. $$

It should be noted that Theorem A has been extended and improved by many scholars (see [16–19]).

Recently, Chen and Sung [20] obtained complete and complete moment convergence of ρ-mixing random variables, and Qiu et al. [1] obtained the following complete moment convergence for weighted sums of END random variables.

Theorem B

Let \(r>1\), \(1\leq p<2\), \(\lambda >0\), \(\alpha >1\), \(\beta >1\) with \(1/\alpha +1/\beta =1/p\). Let \(\{a_{ni}, 1\leq i\leq n, n\geq 1\}\) be an array of constants satisfying

$$ \sum_{i=1}^{n} \vert a_{ni} \vert ^{\alpha }\leq Dn,\quad \forall n\geq 1, $$
(1.1)

where D is a positive constant, and \(\{X,X_{n}, n\geq 1\}\) is a sequence of identically distributed END random variables with \(EX=0 \). If

$$ \textstyle\begin{cases} E \vert X \vert ^{(r-1)\beta }< \infty &\textit{if } \alpha < rp, \lambda < (r-1)\beta , \\ E \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert )< \infty &\textit{if } \alpha =rp, \lambda < (r-1) \beta , \\ E \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert )< \infty &\textit{if } \alpha < rp, \lambda = (r-1) \beta , \\ E \vert X \vert ^{(r-1)\beta }\log ^{2}(1+ \vert X \vert )< \infty &\textit{if } \alpha =rp, \lambda = (r-1) \beta , \\ E \vert X \vert ^{rp}< \infty &\textit{if } \alpha >rp , \lambda < rp, \\ E \vert X \vert ^{rp}\log (1+ \vert X \vert )< \infty &\textit{if } \alpha >rp, \lambda = rp, \\ E \vert X \vert ^{\lambda }< \infty &\textit{if } \lambda >\max \{rp,(r-1)\beta \} , \\ &\textit{when } \alpha >rp, \textit{assume } \lambda < \alpha , \end{cases} $$

then

$$ \sum_{n=1}^{\infty } n^{r-2-\lambda /p}E\Biggl\{ \max_{1 \leq k\leq n} \Biggl\vert \sum_{i=1}^{k} a_{ni}X_{i} \Biggr\vert -\varepsilon n^{1/p} \Biggr\} ^{\lambda }_{+}< \infty ,\quad \forall \varepsilon >0. $$

In this article, our goal is to further study complete moment convergence for weighted sums of END random variables under suitable conditions. By using the truncation method, we obtain a novel result, which extends that in Qiu et al. [1] under some weaker conditions. Our result also improves and extends those in Chen and Sung [20], Sung [21], and Qiu and Xiao [22].

The layout of this paper is as follows. Main results and some lemmas are provided in Sect. 2. Proofs of the main results are given in Sect. 3. Throughout the paper, the symbol C denotes a positive constant, which may take different values in different places. \(I(A)\) is the indicator function of an event A.

Main results and some lemmas

Theorem 2.1

Let \(r>1\), \(1\leq p<2\), \(\lambda >0\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\). Let \(\{a_{ni}, 1\leq i\leq n, n\geq 1\}\) be an array of constants satisfying (1.1), and let \(\{X,X_{n}, n\geq 1\}\) be a sequence of identically distributed END random variables with \(EX=0 \). Assume that one of the following conditions holds:

  (1) If \(\alpha < rp\), then

    $$ \textstyle\begin{cases} E \vert X \vert ^{(r-1)\beta }< \infty &\textit{if } \lambda < (r-1)\beta , \\ E \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert )< \infty &\textit{if } \lambda = (r-1)\beta , \\ E \vert X \vert ^{\lambda }< \infty &\textit{if } \lambda >(r-1)\beta . \end{cases} $$
    (2.1)
  (2) If \(\alpha =rp\), then

    $$ \textstyle\begin{cases} E \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert )< \infty &\textit{if } \lambda \leq (r-1)\beta =rp, \\ E \vert X \vert ^{\lambda }< \infty &\textit{if } \lambda >(r-1)\beta =rp. \end{cases} $$
    (2.2)
  (3) If \(\alpha >rp\), then

    $$ \textstyle\begin{cases} E \vert X \vert ^{rp}< \infty &\textit{if } \lambda < rp, \\ E \vert X \vert ^{rp}\log (1+ \vert X \vert )< \infty &\textit{if } \lambda =rp, \\ E \vert X \vert ^{\lambda } < \infty &\textit{if } \lambda >rp. \end{cases} $$
    (2.3)

Then

$$ \sum_{n=1}^{\infty } n^{r-2-\lambda /p}E\Biggl\{ \max_{1 \leq k\leq n} \Biggl\vert \sum_{i=1}^{k} a_{ni}X_{i} \Biggr\vert -\varepsilon n^{1/p} \Biggr\} ^{\lambda }_{+}< \infty ,\quad \forall \varepsilon >0. $$
(2.4)

Conversely, if (2.4) holds for any array \(\{a_{ni}, 1\leq i\leq n, n\geq 1\}\) satisfying (1.1), then \(EX=0\), \(E\vert X\vert ^{(r-1)\beta }<\infty \), and \(E\vert X\vert ^{rp}<\infty \).

Remark 2.1

Only the Rademacher–Menshov inequality is used in the proof of Theorem 2.1, and the results in this paper still hold for random variables satisfying Rosenthal's inequality. Therefore, our results improve and extend the result of Chen and Sung [20].

Remark 2.2

In this paper, the conditions of Theorem 2.1 are weaker than those in Theorem 1.1 of Qiu et al. [1], and the condition "if \(\alpha >rp\), assume \(\lambda <\alpha \)" of Qiu et al. [1] is not necessary for (2.4). Therefore, our results improve and extend the result of Qiu et al. [1]. It is worth pointing out that the method applied in this article is different from that in Qiu et al. [1].

To prove Theorem 2.1 of the paper, we need the following important lemmas.

Lemma 2.1

(Qiu and Xiao [22]; Rademacher–Menshov inequality)

Let \(p>1\), and let \(\{X_{n},n\geq 1\}\) be a sequence of END random variables with \(EX_{n}=0\) and \(E\vert X_{n}\vert ^{p}<\infty \). Then there exists a positive constant \(C_{p}\), depending only on p, such that

$$\begin{aligned}& E\Biggl(\max_{1\leq j\leq n} \Biggl\vert \sum _{i=1}^{j} {X_{i}} \Biggr\vert ^{p}\Biggr) \leq C_{p}\log ^{p}n \sum _{i=1}^{n}E \vert {X_{i}} \vert ^{p},\quad 1< p \leq 2, \end{aligned}$$
(2.5)
$$\begin{aligned}& E\Biggl(\max_{1\leq j\leq n} \Biggl\vert \sum _{i=1}^{j}{X_{i}} \Biggr\vert ^{p}\Biggr) \leq C_{p} \log ^{p}n \Biggl\{ \sum _{i=1}^{n}E \vert {X_{i}} \vert ^{p}+\Biggl(\sum_{i=1}^{n}E \vert {X_{i}} \vert ^{2}\Biggr)^{p/2}\Biggr\} , \quad p>2. \end{aligned}$$
(2.6)

Lemma 2.2

(Qiu and Xiao [22])

Let \(p\geq 1\), and let \(\{X_{n},n\geq 1\}\) be a sequence of END random variables with \(EX_{n}=0\) and \(E\vert X_{n}\vert ^{p}<\infty \). Then there exists a positive constant \(C_{p}\), depending only on p, such that

$$\begin{aligned}& E\Biggl( \Biggl\vert \sum_{i=1}^{n}X_{i} \Biggr\vert ^{p}\Biggr) \leq C_{p} \sum _{i=1}^{n}E \vert X_{i} \vert ^{p}, \quad 1 \leq p< 2, \\ \end{aligned}$$
(2.7)
$$\begin{aligned}& E\Biggl( \Biggl\vert \sum_{i=1}^{n}X_{i} \Biggr\vert ^{p}\Biggr) \leq C_{p} \Biggl\{ \sum _{i=1}^{n}E \vert X_{i} \vert ^{p}+\Biggl( \sum_{i=1}^{n}E(X_{i})^{2} \Biggr)^{p/2}\Biggr\} , \quad p\geq 2. \end{aligned}$$
(2.8)

Lemma 2.3

(Liu [2])

Let \(\{X_{n},n\geq 1\}\) be a sequence of END random variables. If \(f_{1},f_{2},\ldots,f_{n}\) are all nondecreasing (or all nonincreasing) functions, then the random variables \(f_{1}(X_{1}),f_{2}(X_{2}),\ldots, f_{n}(X_{n})\) are still END random variables.

Lemma 2.4

(Wu [23])

Let \(\{X_{n},n\geq 1\}\) and \(\{Y_{n},n\geq 1\}\) be sequences of random variables. Then for any \(q>r>0\), \(\varepsilon >0\), and \(a>0\),

$$\begin{aligned} E\Biggl(\max_{1\leq k\leq n} \Biggl\vert \sum _{i=1}^{k}(X_{i}+Y_{i}) \Biggr\vert - \varepsilon a\Biggr)^{r}_{+} &\leq C_{r} \biggl(\frac{1}{\varepsilon ^{q}}+ \frac{r}{q-r}\biggr)\frac{1}{a^{q-r}} E\Biggl( \max_{1\leq k\leq n} \Biggl\vert \sum_{i=1}^{k}X_{i} \Biggr\vert ^{q}\Biggr) \\ &\quad{} +C_{r}E\Biggl(\max _{1\leq k\leq n} \Biggl\vert \sum_{i=1}^{k}Y_{i} \Biggr\vert ^{r}\Biggr), \end{aligned}$$

where\(C_{r}=1\)if\(0< r\leq 1\)or\(C_{r}=2^{r-1}\)if\(r>1\).

Chen and Sung [20] obtained the following results (see Lemmas 2.5–2.7).

Lemma 2.5

(Chen and Sung [20])

Let \(r>1\), \(1\leq p<2\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\). Let \(\{a_{ni}, 1\leq i\leq n, n\geq 1\}\) be an array of constants satisfying (1.1). If X is a random variable, then

$$ \sum_{n=1}^{\infty }n^{r-2}\sum _{i=1}^{n}P\bigl( \vert a_{ni}X \vert >n^{1/p}\bigr) \leq \textstyle\begin{cases} CE \vert X \vert ^{(r-1)\beta } &\textit{if } \alpha < rp, \\ CE \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert ) &\textit{if } \alpha =rp, \\ CE \vert X \vert ^{rp} &\textit{if } \alpha >rp. \end{cases} $$

Lemma 2.6

(Chen and Sung [20])

Let \(r>1\), \(1\leq p<2\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\). Let \(\{a_{ni}, 1\leq i\leq n, n\geq 1\}\) be an array of constants satisfying (1.1). If X is a random variable, then for any \(v>\max \{\alpha ,(r-1)\beta \}\)

$$ \sum_{n=1}^{\infty }n^{r-2-v/p}\sum _{i=1}^{n}E \vert a_{ni}X \vert ^{v}I \bigl( \vert a_{ni}X \vert \leq n^{1/p} \bigr)\leq \textstyle\begin{cases} CE \vert X \vert ^{(r-1)\beta } &\textit{if } \alpha < rp, \\ CE \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert ) &\textit{if } \alpha =rp, \\ CE \vert X \vert ^{rp} &\textit{if } \alpha >rp. \end{cases} $$

Lemma 2.7

(Chen and Sung [20])

Let \(\lambda >0\), \(r>1\), \(1\leq p<2\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\). Let \(\{a_{ni}, 1\leq i\leq n, n\geq 1\}\) be an array of constants satisfying (1.1), and let X be a random variable. Then the following statements hold:

  (1) If \(\alpha < rp\), then

    $$\begin{aligned}& \sum_{n=1}^{\infty }n^{r-2-\lambda /p}\sum _{i=1}^{n}E \vert a_{ni}X \vert ^{\lambda }I\bigl( \vert a_{ni}X \vert > n^{1/p}\bigr) \\& \quad \leq \textstyle\begin{cases} CE \vert X \vert ^{(r-1)\beta } &\textit{if } \lambda < (r-1)\beta , \\ CE \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert ) &\textit{if } \lambda = (r-1)\beta , \\ CE \vert X \vert ^{\lambda } &\textit{if } \lambda >(r-1)\beta . \end{cases}\displaystyle \end{aligned}$$
  (2) If \(\alpha =rp\), then

    $$\begin{aligned}& \sum_{n=1}^{\infty }n^{r-2-\lambda /p}\sum _{i=1}^{n}E \vert a_{ni}X \vert ^{\lambda }I\bigl( \vert a_{ni}X \vert > n^{1/p}\bigr) \\& \quad \leq \textstyle\begin{cases} CE \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert ) &\textit{if } \lambda \leq (r-1)\beta =rp, \\ CE \vert X \vert ^{\lambda } &\textit{if } \lambda >(r-1)\beta =rp. \end{cases}\displaystyle \end{aligned}$$
  (3) If \(\alpha >rp\), then

    $$\begin{aligned}& \sum_{n=1}^{\infty }n^{r-2-\lambda /p}\sum _{i=1}^{n}E \vert a_{ni}X \vert ^{\lambda }I\bigl( \vert a_{ni}X \vert > n^{1/p}\bigr) \\& \quad\leq \textstyle\begin{cases} CE \vert X \vert ^{rp} &\textit{if } \lambda < rp, \\ CE \vert X \vert ^{rp}\log (1+ \vert X \vert ) &\textit{if } \lambda =rp, \\ CE \vert X \vert ^{\lambda } &\textit{if } \lambda >rp. \end{cases}\displaystyle \end{aligned}$$

Proofs of theorems

Proof of Theorem 2.1

Noting \(\alpha >0\), \(\beta >0\), \(1/\alpha +1/\beta =1/p\), we have

$$ \textstyle\begin{cases} \alpha < rp \quad \Leftrightarrow \quad rp< (r-1)\beta , \\ \alpha =rp \quad \Leftrightarrow \quad rp=(r-1)\beta , \\ \alpha > rp \quad \Leftrightarrow \quad rp>(r-1)\beta . \end{cases} $$
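These equivalences follow from a direct computation, recorded here for completeness: since \(1/\alpha <1/p\) forces \(\alpha >p\), the constraint \(1/\alpha +1/\beta =1/p\) gives \(\beta =p\alpha /(\alpha -p)\), and therefore

$$ (r-1)\beta =\frac{(r-1)p\alpha }{\alpha -p}, \qquad (r-1)\beta >rp \quad \Longleftrightarrow \quad (r-1)\alpha >r(\alpha -p) \quad \Longleftrightarrow \quad \alpha < rp, $$

with the same computation yielding the equality and reverse-inequality cases.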

For any \(t\) with \(0< t\leq \alpha \), by Hölder's inequality and (1.1), we have

$$ \sum_{i=1}^{n} \vert a_{ni} \vert ^{t}\leq \Biggl(\sum_{i=1}^{n} \vert a_{ni} \vert ^{\alpha }\Biggr)^{t/\alpha }\Biggl( \sum _{i=1}^{n} 1\Biggr)^{1-t/\alpha }\leq Cn. $$
(3.1)

For any \(t\) with \(t>\alpha \), it follows from the \(C_{r}\) inequality and (1.1) that

$$ \sum_{i=1}^{n} \vert a_{ni} \vert ^{t}\leq \Biggl(\sum_{i=1}^{n} \vert a_{ni} \vert ^{\alpha }\Biggr)^{t/\alpha } \leq Cn^{t/\alpha }. $$
(3.2)

Noting that \(a_{ni}=a_{ni}^{+}-a_{ni}^{-}\), without loss of generality, we can assume \(a_{ni}>0\).

Sufficiency. Set \(\theta \in (\frac{p}{\alpha \wedge rp},1)\). For \(1\leq i\leq n\), \(n\geq 1\), let

$$\begin{aligned}& X^{(1)}_{ni}=-n^{\theta /p}I\bigl(a_{ni}X_{i}< -n^{\theta /p} \bigr)+a_{ni}X_{i}I\bigl( \vert a_{ni}X_{i} \vert \leq n^{\theta /p}\bigr)+n^{\theta /p}I\bigl(a_{ni}X_{i}>n^{\theta /p} \bigr), \\& X^{(2)}_{ni}=\bigl(a_{ni}X_{i}-n^{\theta /p} \bigr)I\bigl(n^{\theta /p}< a_{ni}X_{i} \leq n^{\theta /p}+n^{1/p}\bigr)+n^{1/p}I\bigl(a_{ni}X_{i}>n^{\theta /p}+n^{1/p} \bigr), \\& X^{(3)}_{ni}=\bigl(a_{ni}X_{i}+n^{\theta /p} \bigr)I\bigl(-n^{\theta /p}-n^{1/p} \leq a_{ni}X_{i}< -n^{\theta /p} \bigr)-n^{1/p}I\bigl(a_{ni}X_{i}< -n^{\theta /p}-n^{1/p} \bigr), \\& X^{(4)}_{ni}=\bigl(a_{ni}X_{i}-n^{\theta /p}-n^{1/p} \bigr)I\bigl(a_{ni}X_{i}>n^{ \theta /p}+n^{1/p} \bigr), \\& X^{(5)}_{ni}=\bigl(a_{ni}X_{i}+n^{\theta /p}+n^{1/p} \bigr)I\bigl(a_{ni}X_{i}< -n^{ \theta /p}-n^{1/p} \bigr). \end{aligned}$$

Then \(a_{ni}X_{i}=\sum_{l=1}^{5}X^{(l)}_{ni}\). It follows from the definition of \(X^{(2)}_{ni}\), \(\theta \in (\frac{p}{\alpha \wedge rp},1)\), (3.1), and (2.1)–(2.3) that

$$\begin{aligned} n^{{-1/p}}\max_{1\leq k\leq n} \Biggl\vert \sum _{i=1}^{k}EX^{(2)}_{ni} \Biggr\vert =&n^{{-1/p}} \sum_{i=1}^{n}EX^{(2)}_{ni} \\ \leq &n^{{-1/p}}\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert I\bigl( \vert a_{ni}X_{i} \vert >n^{ \theta /p}\bigr) \\ \leq &n^{{-1/p}}\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert \biggl( \frac{ \vert a_{ni}X_{i} \vert }{n^{\theta /p}} \biggr)^{(\alpha \wedge rp-1)}I\bigl( \vert a_{ni}X_{i} \vert >n^{ \theta /p}\bigr) \\ \leq &n^{{1-1/p-(\alpha \wedge rp-1)\theta /p}}E \vert X \vert ^{\alpha \wedge rp} \rightarrow 0, \quad n\rightarrow \infty . \end{aligned}$$

By the definition of \(X^{(4)}_{ni}\) and (3.1), arguing as above, we have

$$\begin{aligned} n^{{-1/p}}\max_{1\leq k\leq n} \Biggl\vert \sum _{i=1}^{k}EX^{(4)}_{ni} \Biggr\vert =&n^{{-1/p}} \sum_{i=1}^{n}EX^{(4)}_{ni} \\ \leq &n^{{-1/p}}\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert I\bigl( \vert a_{ni}X_{i} \vert >n^{ \theta /p}+n^{1/p}\bigr) \\ \leq &n^{{-1/p}}\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert I\bigl( \vert a_{ni}X_{i} \vert >n^{ \theta /p}\bigr)\rightarrow 0, \quad n\rightarrow \infty . \end{aligned}$$

Similarly, we can obtain

$$ \lim_{n\rightarrow \infty }n^{{-1/p}}\max_{1\leq k \leq n} \Biggl\vert \sum_{i=1}^{k}EX^{(3)}_{ni} \Biggr\vert =\lim_{n \rightarrow \infty }-n^{{-1/p}}\sum _{i=1}^{n}EX^{(3)}_{ni}=0 $$

and

$$ \lim_{n\rightarrow \infty }n^{{-1/p}}\max_{1\leq k \leq n} \Biggl\vert \sum_{i=1}^{k}EX^{(5)}_{ni} \Biggr\vert =\lim_{n \rightarrow \infty }-n^{{-1/p}}\sum _{i=1}^{n}EX^{(5)}_{ni}=0. $$

Noting that \(EX_{i}=0\), it follows from Lemma 2.4 and the \(C_{r}\) inequality that, for \(v>\lambda \geq 1\),

$$\begin{aligned}& \sum_{n=1}^{\infty } n^{r-2-\lambda /p}E\Biggl\{ \max_{1 \leq k\leq n} \Biggl\vert \sum_{i=1}^{k} a_{ni}X_{i} \Biggr\vert -\varepsilon n^{1/p} \Biggr\} ^{\lambda }_{+} \\ & \quad = \sum_{n=1}^{\infty } n^{r-2-\lambda /p}E \Biggl\{ \max_{1 \leq k\leq n} \Biggl\vert \sum _{i=1}^{k}\sum_{l=1}^{5} \bigl(X^{(l)}_{ni}-EX^{(l)}_{ni}\bigr) \Biggr\vert - \varepsilon n^{1/p}\Biggr\} ^{\lambda }_{+} \\ & \quad \leq \sum_{n=1}^{\infty } n^{r-2-\lambda /p}E \Biggl\{ \sum_{l=1}^{5} \max _{1\leq k\leq n} \Biggl\vert \sum_{i=1}^{k} \bigl(X^{(l)}_{ni}-EX^{(l)}_{ni}\bigr) \Biggr\vert - \varepsilon n^{1/p}\Biggr\} ^{\lambda }_{+} \\ & \quad \leq \sum_{n=1}^{\infty } n^{r-2-\lambda /p}E \Biggl\{ \max_{1\leq k\leq n} \Biggl\vert \sum _{i=1}^{k}\bigl(X^{(1)}_{ni}-EX^{(1)}_{ni} \bigr) \Biggr\vert + \sum_{l=2}^{5} \Biggl\vert \sum_{i=1}^{n}X^{(l)}_{ni} \Biggr\vert -3 \varepsilon n^{1/p}/4\Biggr\} ^{\lambda }_{+} \\ & \quad \leq \sum_{n=1}^{\infty } n^{r-2-\lambda /p}E \Biggl\{ \max_{1\leq k\leq n} \Biggl\vert \sum _{i=1}^{k}\bigl(X^{(1)}_{ni}-EX^{(1)}_{ni} \bigr) \Biggr\vert + \sum_{l=2}^{5} \Biggl\vert \sum_{i=1}^{n}\bigl(X^{(l)}_{ni}-EX^{(l)}_{ni} \bigr) \Biggr\vert -\varepsilon n^{1/p}/2\Biggr\} ^{\lambda }_{+} \\ & \quad \leq C\sum_{n=1}^{\infty } n^{r-2-v/p}E\Biggl\{ \max_{1 \leq k\leq n} \Biggl\vert \sum _{i=1}^{k}\bigl(X^{(1)}_{ni}-EX^{(1)}_{ni} \bigr) \Biggr\vert ^{v} \Biggr\} \\ & \quad \quad{} +C\sum_{l=2}^{3}\sum _{n=1}^{\infty } n^{r-2-v/p}E \Biggl\vert \sum_{i=1}^{n}\bigl(X^{(l)}_{ni}-EX^{(l)}_{ni} \bigr) \Biggr\vert ^{v} \\ & \quad \quad {}+C\sum_{l=4}^{5}\sum _{n=1}^{\infty } n^{r-2- \lambda /p}E \Biggl\vert \sum_{i=1}^{n}\bigl(X^{(l)}_{ni}-EX^{(l)}_{ni} \bigr) \Biggr\vert ^{ \lambda } \\ & \quad = :I_{1}+I_{2}+I_{3}+I_{4}+I_{5}. \end{aligned}$$
(3.3)

Similarly, for \(v>\lambda \), \(0<\lambda <1\), we have

$$\begin{aligned}& \sum_{n=1}^{\infty } n^{r-2-\lambda /p}E\Biggl\{ \max_{1 \leq k\leq n} \Biggl\vert \sum_{i=1}^{k} a_{ni}X_{i} \Biggr\vert -\varepsilon n^{1/p} \Biggr\} ^{\lambda }_{+} \\ & \quad \leq \sum_{n=1}^{\infty } n^{r-2-\lambda /p}E \Biggl\{ \max_{1\leq k\leq n} \Biggl\vert \sum _{i=1}^{k}\bigl(X^{(1)}_{ni}-EX^{(1)}_{ni} \bigr) \Biggr\vert + \sum_{l=2}^{3} \Biggl\vert \sum_{i=1}^{n}\bigl(X^{(l)}_{ni}-EX^{(l)}_{ni} \bigr) \Biggr\vert \\ & \quad\quad{} +\sum_{l=4}^{5} \Biggl\vert \sum_{i=1}^{n}X^{(l)}_{ni} \Biggr\vert - \varepsilon n^{1/p}/2\Biggr\} ^{\lambda }_{+} \\ & \quad \leq C\sum_{n=1}^{\infty } n^{r-2-v/p}E\Biggl\{ \max_{1 \leq k\leq n} \Biggl\vert \sum _{i=1}^{k}\bigl(X^{(1)}_{ni}-EX^{(1)}_{ni} \bigr) \Biggr\vert ^{v} \Biggr\} \\ & \quad \quad{} +C\sum_{l=2}^{3}\sum _{n=1}^{\infty } n^{r-2-v/p}E \Biggl\vert \sum_{i=1}^{n}\bigl(X^{(l)}_{ni}-EX^{(l)}_{ni} \bigr) \Biggr\vert ^{v} \\ & \quad\quad{} +C\sum_{l=4}^{5}\sum _{n=1}^{\infty } n^{r-2- \lambda /p}E \Biggl\vert \sum _{i=1}^{n}X^{(l)}_{ni} \Biggr\vert ^{\lambda } \\ & \quad = :I_{1}+I_{2}+I_{3}+I_{4}+I_{5}. \end{aligned}$$
(3.4)

In order to prove Theorem 2.1, we need to prove \(I_{i}<\infty \), \(i=1,2,\ldots,5\).

Taking \({v>\max \{2,2rp/[(2-p)(1-\theta )],2pr/(\alpha -p),2pr/(2-p),\alpha ,(r-1)\beta , \lambda \}}\), it follows from Lemmas 2.1 and 2.3 that

$$\begin{aligned} I_{1} \leq &C \sum_{n=1}^{\infty } n^{r-2-v/p}\log ^{v}n \Biggl\{ \sum_{i=1}^{n}E \bigl\vert X^{(1)}_{ni} \bigr\vert ^{v}+\Biggl(\sum_{i=1}^{n}E \bigl\vert X^{(1)}_{ni} \bigr\vert ^{2} \Biggr)^{v/2} \Biggr\} \\ :=&I_{11}+I_{12}. \end{aligned}$$

By the definition of \(X^{(1)}_{ni}\) and \({v>2rp/[(2-p)(1-\theta )]>rp/(1-\theta )}\), we have

$$\begin{aligned} I_{11} \leq &C \sum_{n=1}^{\infty } n^{r-2-v/p}\log ^{v}n\Biggl[ \sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{v}I\bigl( \vert a_{ni}X_{i} \vert \leq n^{ \theta /p}\bigr)+\sum _{i=1}^{n}n^{v\theta /p}P\bigl( \vert a_{ni}X_{i} \vert >n^{ \theta /p}\bigr)\Biggr] \\ \leq &C\sum_{n=1}^{\infty } n^{r-2-v/p}\log ^{v}n\Biggl(\sum_{i=1}^{n}n^{v\theta /p} \Biggr) \\ \leq& C\sum_{n=1}^{\infty } n^{r-1-(1-\theta )v/p}\log ^{v}n< \infty . \end{aligned}$$
(3.5)

Since \(r>1\), \(1\leq p<2\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\), we have \(p<\alpha \wedge rp\). By (3.1) and (2.1)–(2.3), we obtain

$$\begin{aligned} I_{12} \leq &C \sum_{n=1}^{\infty } n^{r-2-v/p}\log ^{v}n\Biggl[ \sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{2}I\bigl( \vert a_{ni}X_{i} \vert \leq n^{ \theta /p}\bigr) \\ &{} +\sum _{i=1}^{n}n^{2\theta /p}P\bigl( \vert a_{ni}X_{i} \vert >n^{ \theta /p}\bigr) \Biggr]^{v/2} \\ \leq &C \sum_{n=1}^{\infty } n^{r-2-v/p} \log ^{v}n\Biggl(\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{p}n^{(2-p)\theta /p} \Biggr)^{v/2} \\ \leq &C \sum_{n=1}^{\infty } n^{r-2-(2-p)(1-\theta )v/2p} \log ^{v}n\bigl(E \vert X \vert ^{p} \bigr)^{v/2}< \infty . \end{aligned}$$
(3.6)

Then it follows from (3.5) and (3.6) that \(I_{1}<\infty \) holds.

By the definition of \(X^{(2)}_{ni}\) and Lemmas 2.2 and 2.3, we get

$$\begin{aligned} I_{2} \leq &C \sum_{n=1}^{\infty } n^{r-2-v/p}\Biggl[\sum_{i=1}^{n}E \bigl\vert X^{(2)}_{ni} \bigr\vert ^{v}+\Biggl( \sum _{i=1}^{n}E \bigl\vert X^{(2)}_{ni} \bigr\vert ^{2}\Biggr)^{v/2}\Biggr] \\ \leq &C \sum_{n=1}^{\infty } n^{r-2-v/p} \Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{v}I\bigl( \vert a_{ni}X_{i} \vert \leq 2n^{1/p}\bigr)+\sum _{i=1}^{n}n^{v/p}P\bigl( \vert a_{ni}X_{i} \vert >n^{1/p}\bigr)\Biggr] \\ &{}+ C \sum_{n=1}^{\infty } n^{r-2-v/p} \Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{2}I\bigl( \vert a_{ni}X_{i} \vert \leq 2n^{1/p}\bigr)+\sum _{i=1}^{n}n^{2/p}P\bigl( \vert a_{ni}X_{i} \vert >n^{1/p}\bigr) \Biggr]^{v/2} \\ :=&I_{21}+I_{22}. \end{aligned}$$

Combining Lemmas 2.5 and 2.6, we obtain \(I_{21}<\infty \).

The proof of \(I_{22}<\infty \) is divided into the following four cases.

Case 1: \(1<\alpha <2\), \(\alpha \leq rp\). Noting that \(p<\alpha \), by (2.1)–(2.2), we have \(E\vert X\vert ^{\alpha }<\infty \); then

$$\begin{aligned} I_{22} \leq &C \sum_{n=1}^{\infty } n^{r-2-v/p}\Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{2}I\bigl( \vert a_{ni}X_{i} \vert \leq 2n^{1/p}\bigr) \Biggr]^{v/2} \\ &{} +C \sum_{n=1}^{\infty } n^{r-2}\Biggl[\sum_{i=1}^{n}P\bigl( \vert a_{ni}X_{i} \vert >n^{1/p}\bigr) \Biggr]^{v/2} \\ \leq &C \sum_{n=1}^{\infty } n^{r-2-v/p} \Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{ \alpha }\bigl(2n^{1/p} \bigr)^{2-\alpha }\Biggr]^{v/2} +C\sum_{n=1}^{\infty } n^{r-2}\Biggl[ \sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{\alpha } \bigl(n^{-\alpha /p}\bigr)\Biggr]^{v/2} \\ \leq &C\sum_{n=1}^{\infty } n^{r-2-[(\alpha /p)-1]v/2} \bigl(E \vert X \vert ^{ \alpha }\bigr)^{v/2}< \infty . \end{aligned}$$
(3.7)

Case 2: \(1<\alpha <2\), \(\alpha > rp\). Noting that \(rp<2\), by (2.3), we obtain \(E\vert X\vert ^{rp}<\infty \); then

$$\begin{aligned} I_{22} \leq &C \sum_{n=1}^{\infty } n^{r-2-v/p}\Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{rp} \bigl(2n^{1/p}\bigr)^{2-rp}\Biggr]^{v/2} \\ &{}+C\sum _{n=1}^{\infty } n^{r-2}\Biggl[\sum _{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{rp}n^{-rp/p}\Biggr]^{v/2} \\ \leq &C\sum_{n=1}^{\infty } n^{r-2-(r-1)v/2} \bigl(E \vert X \vert ^{rp}\bigr)^{v/2}< \infty . \end{aligned}$$
(3.8)

Case 3: \(\alpha \geq 2\), \(\alpha \leq rp\). Noting that \(rp\geq 2\), by (2.1)–(2.2), we get \(E\vert X\vert ^{2}<\infty \), and then

$$\begin{aligned} I_{22} \leq &C \sum_{n=1}^{\infty } n^{r-2-v/p}\Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{2} \Biggr]^{v/2} +C\sum_{n=1}^{ \infty } n^{r-2}\Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{2}n^{-2/p} \Biggr]^{v/2} \\ \leq &C\sum_{n=1}^{\infty } n^{r-2-[(2/p)-1]v/2} \bigl(E \vert X \vert ^{2}\bigr)^{v/2}< \infty . \end{aligned}$$
(3.9)

Case 4: \(\alpha \geq 2\), \(\alpha > rp\). Then \(E\vert X\vert ^{rp}<\infty \). If \(rp<2\), the proof is the same as that of Case 2; if \(rp\geq 2\), the proof is the same as that of Case 3.

Then it follows from (3.7)–(3.9) that \(I_{2}<\infty \) holds.

The proof of \(I_{4}<\infty \) is divided into the following three cases.

Case 1: \(0<\lambda <1\). By (3.4), the \(C_{r}\) inequality, Lemma 2.7, and (2.1)–(2.3), we have

$$\begin{aligned} I_{4} = &\sum_{n=1}^{\infty } n^{r-2-\lambda /p}E \Biggl\vert \sum_{i=1}^{n}X^{(4)}_{ni} \Biggr\vert ^{\lambda } \\ \leq &\sum_{n=1}^{\infty } n^{r-2-\lambda /p}\sum _{i=1}^{n}E \bigl\vert X^{(4)}_{ni} \bigr\vert ^{ \lambda } \\ \leq &C \sum_{n=1}^{\infty } n^{r-2-\lambda /p} \sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{\lambda }I\bigl( \vert a_{ni}X_{i} \vert >n^{1/p}\bigr)< \infty . \end{aligned}$$
(3.10)

Case 2: \(1\leq \lambda \leq 2\). It follows from (3.3), the \(C_{r}\) inequality, Jensen's inequality, Lemmas 2.2, 2.3, and 2.7, and (2.1)–(2.3) that

$$\begin{aligned} I_{4} = &\sum_{n=1}^{\infty } n^{r-2-\lambda /p}E \Biggl\vert \sum_{i=1}^{n} \bigl(X^{(4)}_{ni}-EX^{(4)}_{ni}\bigr) \Biggr\vert ^{\lambda } \\ \leq &\sum_{n=1}^{\infty } n^{r-2-\lambda /p}\sum _{i=1}^{n}E \bigl\vert X^{(4)}_{ni} \bigr\vert ^{ \lambda } \\ \leq &C \sum_{n=1}^{\infty } n^{r-2-\lambda /p} \sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{\lambda }I\bigl( \vert a_{ni}X_{i} \vert >n^{1/p}\bigr)< \infty . \end{aligned}$$
(3.11)

Case 3: \(\lambda >2\). By (3.3), the \(C_{r}\) inequality, Jensen's inequality, Lemmas 2.2 and 2.7, and (2.1)–(2.3), we have

$$\begin{aligned} I_{4} =& \sum_{n=1}^{\infty } n^{r-2-\lambda /p}E \Biggl\vert \sum_{i=1}^{n} \bigl(X^{(4)}_{ni}-EX^{(4)}_{ni}\bigr) \Biggr\vert ^{\lambda } \\ \leq &C\sum_{n=1}^{\infty } n^{r-2-\lambda /p} \Biggl\{ \sum_{i=1}^{n}E \bigl\vert X^{(4)}_{ni} \bigr\vert ^{\lambda }+\Biggl(\sum _{i=1}^{n}E \bigl\vert X^{(4)}_{ni} \bigr\vert ^{2}\Biggr)^{ \lambda /2}\Biggr\} \\ \leq &C \sum_{n=1}^{\infty } n^{r-2-\lambda /p} \sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{\lambda }I\bigl( \vert a_{ni}X_{i} \vert >n^{1/p}\bigr) \\ &{}+ C \sum_{n=1}^{\infty } n^{r-2-\lambda /p} \Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{2}I\bigl( \vert a_{ni}X_{i} \vert >n^{1/p}\bigr) \Biggr]^{ \lambda /2} \\ :=&I_{41}+I_{42}. \end{aligned}$$

From Lemma 2.7 and (2.1)–(2.3), we obtain \(I_{41}<\infty \).

The proof of \(I_{42}<\infty \) is divided into the following two cases.

Case a: \(\alpha \leq rp\). Taking \(q=\max \{(r-1)\beta ,\lambda \}>2\), by (2.1)–(2.2) and (3.2), we have \(E\vert X\vert ^{q}<\infty \) and

$$\begin{aligned} I_{42} \leq &C\sum_{n=1}^{\infty } n^{r-2-\lambda /p}\Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{q}n^{(2-q)/p}I \bigl( \vert a_{ni}X_{i} \vert >n^{1/p}\bigr) \Biggr]^{ \lambda /2} \\ \leq &C\sum_{n=1}^{\infty } n^{r-2-\lambda /p} \bigl[n^{q/\alpha }E \vert X \vert ^{q}n^{(2-q)/p} \bigr]^{ \lambda /2} \\ =&C\sum_{n=1}^{\infty } n^{r-2-q\lambda /2\beta }\bigl[E \vert X \vert ^{q}\bigr]^{ \lambda /2}< \infty . \end{aligned}$$
(3.12)

Case b: \(\alpha > rp\). Letting \(q=\max \{rp,\lambda \}>2\), it follows from (2.3) that \(E\vert X\vert ^{q}<\infty \). If \(\alpha \geq q \), by (3.1), we have

$$\begin{aligned} I_{42} \leq &C\sum_{n=1}^{\infty } n^{r-2-\lambda /p}\Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{q}n^{(2-q)/p}I \bigl( \vert a_{ni}X_{i} \vert >n^{1/p}\bigr) \Biggr]^{ \lambda /2} \\ \leq &C\sum_{n=1}^{\infty } n^{r-2-\lambda /p}\bigl[nE \vert X \vert ^{q}n^{(2-q)/p} \bigr]^{ \lambda /2} \\ =&C\sum_{n=1}^{\infty } n^{r-2-(q-p)\lambda /2p}\bigl[E \vert X \vert ^{q}\bigr]^{ \lambda /2}< \infty . \end{aligned}$$
(3.13)

If \(\alpha < q \), then \((r-1)\beta < rp<\alpha <q \); by (3.2), we have

$$\begin{aligned} I_{42} \leq &C\sum_{n=1}^{\infty } n^{r-2-\lambda /p}\Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{q}n^{(2-q)/p}I \bigl( \vert a_{ni}X_{i} \vert >n^{1/p}\bigr) \Biggr]^{ \lambda /2} \\ \leq &C\sum_{n=1}^{\infty } n^{r-2-\lambda /p} \bigl[n^{q/\alpha }E \vert X \vert ^{q}n^{(2-q)/p} \bigr]^{ \lambda /2} \\ =&C\sum_{n=1}^{\infty } n^{r-2-q\lambda /2\beta }\bigl[E \vert X \vert ^{q}\bigr]^{ \lambda /2}< \infty . \end{aligned}$$
(3.14)

Then it follows from (3.10)–(3.14) that \(I_{4}<\infty \).

Similarly to the proofs of \(I_{2}<\infty \) and \(I_{4}<\infty \), we can show \(I_{3}<\infty \) and \(I_{5}<\infty \) as well.

Necessity. By (2.4), we have

$$ \sum_{n=1}^{\infty } n^{r-2}P\Biggl(\max _{1\leq k\leq n} \Biggl\vert \sum_{i=1}^{k} a_{ni}X_{i} \Biggr\vert >\varepsilon n^{1/p} \Biggr) < \infty ,\quad \forall \varepsilon >0. $$
(3.15)

Set \(a_{ni}=1\) for \(1\leq i\leq n\), \(n\geq 1\); then (3.15) can be rewritten as follows:

$$ \sum_{n=1}^{\infty } n^{r-2}P\Biggl(\max _{1\leq k\leq n} \Biggl\vert \sum_{i=1}^{k} X_{i} \Biggr\vert >\varepsilon n^{1/p}\Biggr) < \infty ,\quad \forall \varepsilon >0, $$
(3.16)

which implies that \(EX=0\) and \(E\vert X\vert ^{rp}<\infty \) (see Theorem 2 in Peligrad and Gut [24]). Take \(a_{ni}=0\) for \(1\leq i\leq n-1\), \(n\geq 1\), and \(a_{nn}= n^{1/\alpha }\); then (3.15) can be rewritten as follows:

$$ \sum_{n=1}^{\infty } n^{r-2}P\bigl( \vert X_{n} \vert >\varepsilon n^{1/ \beta }\bigr) < \infty ,\quad \forall \varepsilon >0, $$
(3.17)

which is equivalent to \(E\vert X\vert ^{(r-1)\beta }<\infty \). The proof is completed. □
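For completeness, the equivalence used in the last step is the standard tail–moment relation (a well-known fact, not proved here): for \(s>-1\) and \(b>0\),

$$ \sum_{n=1}^{\infty }n^{s}P\bigl( \vert X \vert >\varepsilon n^{1/b}\bigr)< \infty \mbox{ for all } \varepsilon >0 \quad \Longleftrightarrow \quad E \vert X \vert ^{(s+1)b}< \infty , $$

and taking \(s=r-2\), \(b=\beta \) gives exactly \(E\vert X\vert ^{(r-1)\beta }<\infty \).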

References

1. Qiu, D.H., Chen, P.Y., Xiao, J.: Complete moment convergence for sequences of END random variables. Acta Math. Appl. Sin. 40(3), 436–448 (2017)
2. Liu, L.: Precise large deviations for dependent random variables with heavy tails. Stat. Probab. Lett. 79(9), 1290–1298 (2009)
3. Joag-Dev, K., Proschan, F.: Negative association of random variables with applications. Ann. Stat. 11(1), 286–295 (1983)
4. Liu, L.: Necessary and sufficient conditions for moderate deviations of dependent random variables with heavy tails. Sci. China Ser. A, Math. 53(6), 1421–1434 (2010)
5. Chen, Y., Chen, A., Ng, K.W.: The strong law of large numbers for extended negatively dependent random variables. J. Appl. Probab. 47(4), 908–922 (2010)
6. Wu, Y.F., Guan, M.: Convergence properties of the partial sums for sequences of END random variables. J. Korean Math. Soc. 49(6), 1097–1110 (2012)
7. Shen, A.T.: Probability inequalities for END sequence and their applications. J. Inequal. Appl. (2011). https://doi.org/10.1186/1029-242X-2011-98
8. Wang, S.J., Wang, X.J.: Precise large deviations for random sums of END real-valued random variables with consistent variation. J. Math. Anal. Appl. 402(2), 660–667 (2013)
9. Wang, X.J., Hu, T.C., Volodin, A., Hu, S.H.: Complete convergence for weighted sums and arrays of rowwise extended negatively dependent random variables. Commun. Stat., Theory Methods 42(13), 2391–2401 (2013)
10. Wang, X.J., Wang, S.J., Hu, S.H.: On complete convergence of weighted sums for arrays of rowwise extended negatively dependent random variables. Stoch. Int. J. Probab. Stoch. Process. 85(6), 1060–1072 (2013)
11. Wang, X.J., Li, X.Q., Hu, S.H., Wang, X.H.: On complete convergence for an extended negatively dependent sequence. Commun. Stat., Theory Methods 43(14), 2923–2937 (2014)
12. Wang, X.J., Zheng, L.L., Xu, C., Hu, S.H.: Complete consistency for the estimator of nonparametric regression models based on extended negatively dependent errors. Statistics 49(2), 396–407 (2015)
13. Qiu, D.H., Chen, P.Y., Antonini, R.G., Volodin, A.: On the complete convergence for arrays of rowwise extended negatively dependent random variables. J. Korean Math. Soc. 50(2), 379–392 (2013)
14. Hsu, P.L., Robbins, H.: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. 33, 25–31 (1947)
15. Chow, Y.S.: On the rate of moment complete convergence of sample sums and extremes. Bull. Inst. Math. Acad. Sin. 16, 177–201 (1988)
16. Chen, P.Y., Wang, D.C.: Complete moment convergence for sequence of identically distributed φ-mixing random variables. Acta Math. Sin. Engl. Ser. 26(4), 679–690 (2010)
17. Liang, H.Y., Li, D.L., Rosalsky, A.: Complete moment and integral convergence for sums of negatively associated random variables. Acta Math. Sin. Engl. Ser. 26(3), 419–432 (2010)
18. Qiu, D.H., Liu, X.D., Chen, P.Y.: Complete moment convergence for maximal partial sums under NOD setup. J. Inequal. Appl. (2015). https://doi.org/10.1186/s13660-015-0577-8
19. Qiu, D.H., Chen, P.Y.: Complete and complete moment convergence for weighted sums of widely orthant dependent random variables. Acta Math. Sin. Engl. Ser. 39(9), 1539–1548 (2014)
20. Chen, P.Y., Sung, S.H.: On complete convergence and complete moment convergence for weighted sums of ρ-mixing random variables. J. Inequal. Appl. (2018). https://doi.org/10.1186/s13660-018-1710-2
21. Sung, S.H.: Complete convergence for weighted sums of ρ-mixing random variables. Discrete Dyn. Nat. Soc. (2010). https://doi.org/10.1155/2010/630608
22. Qiu, D.H., Xiao, J.: Complete moment convergence for Sung’s type weighted sums under END setup. Acta Math. Sci. 38A(6), 1103–1110 (2018)
23. Wu, Y., Wang, X.J., Hu, S.H.: Complete moment convergence for weighted sums of widely dependent random variables and its application in nonparametric regression model. Stat. Probab. Lett. 127, 56–66 (2017)
24. Peligrad, M., Gut, A.: Almost-sure results for a class of dependent random variables. J. Theor. Probab. 12, 87–104 (1999)

Acknowledgements

The authors would like to thank the referees for their valuable comments and suggestions, which have improved the quality of the manuscript greatly.

Availability of data and materials

Not applicable.

Funding

This work was jointly supported by the National Natural Science Foundation of China (61773217), the Key University Science Research Project of Anhui Province (KJ2019A0700), the Key Program in the Youth Talent Support Plan in Universities of Anhui Province (gxyqZD2016317), Hunan Provincial Science and Technology Project Foundation (2019RS1033), the Scientific Research Fund of Hunan Provincial Education Department (18A013), Hunan Normal University National Outstanding Youth Cultivation Project (XP1180101), and the Construct Program of the Key Discipline in Hunan Province.

Author information

Contributions

The authors contributed equally and significantly in writing this paper. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Quanxin Zhu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Song, M., Zhu, Q. Complete moment convergence of extended negatively dependent random variables. J Inequal Appl 2020, 150 (2020). https://doi.org/10.1186/s13660-020-02416-7

MSC

  • 60F15

Keywords

  • END random variables
  • Weighted sums
  • Complete convergence
  • Complete moment convergence