
Theorems of complete convergence and complete integral convergence for END random variables under sub-linear expectations

Abstract

The goal of this paper is to establish complete convergence and complete integral convergence for END sequences of random variables in a sub-linear expectation space. Using the Markov inequality, we extend several complete convergence and complete integral convergence theorems for END sequences of random variables to the sub-linear expectation setting, and we provide an approach to studying this subject.

Introduction

Classical probability theorems are widely used in many fields, but they hold only under model certainty. However, uncertainty arises in finance, for instance in risk measures, non-linear stochastic calculus and statistics. In such settings sub-linear expectations and capacities are not additive, and the limit theorems of classical probability spaces are no longer valid; the study of limit theorems under sub-linear expectations therefore becomes more complex. Peng Shige [1,2,3] of Shandong University constructed the basic concepts of sub-linear expectation and gave a complete axiomatic framework for sub-linear expectation theory. This axiom system makes up for the deficiency of the limit theorems of classical probability spaces. Since the general framework of sub-linear expectation was introduced by Peng Shige, many scholars have paid close attention to it, and many excellent results have been established. For example, Zhang [4,5,6] studied the sub-linear expectation space in depth and proved some important inequalities under sub-linear expectations; Xu and Zhang [7] proved a three-series theorem for independent random variables under sub-linear expectations, with applications; Wu and Jiang [8] proved the strong law of large numbers and the law of the iterated logarithm in the sub-linear expectation space; Chen [9] obtained the strong law of large numbers for independent and identically distributed sequences in the sub-linear expectation space, as well as a central limit theorem for weighted sums. Complete convergence and complete integral convergence are comparatively well developed in the limit theory of probability. The notion of complete convergence was raised by Hsu and Robbins [10], and Chow [11] introduced complete moment convergence. However, to the best of our knowledge, apart from Liu [12], Chen [13] and Wu and Guan [14], few authors have discussed these properties for END random variables.
Sung [15] introduced the notion of uniform integrability used here; building on it, we obtain complete convergence and complete integral convergence for arrays of END random variables under uniform integrability, cases that were not considered in Sung [15]. Uniform integrability of sequences of random variables is a more general condition than Cesàro uniform integrability [16, 17].

Complete integral convergence is a stronger version of complete convergence, and both are important problems in classical probability theory. Many related results have already been obtained in classical probability spaces; see, for example, Gut and Stadtmuller [18], Qiu and Chen [19], Wu and Jiang [20] and Feng and Wang [21]. Nevertheless, complete convergence and complete integral convergence under sub-linear expectations still need to be developed. In this paper we establish complete convergence and complete integral convergence for END random variables under sub-linear expectations and generalize the results of [22] to the sub-linear expectation space.

Preliminaries

We use the framework and notions of Peng [1]. Let \((\varOmega , \mathcal{F})\) be a given measurable space and let \(\mathcal{H}\) be a linear space of real functions defined on \((\varOmega , \mathcal{F})\) such that if \(X_{1}, X_{2},\ldots ,X_{n} \in \mathcal{H}\) then \(\varphi (X_{1},\ldots ,X_{n})\in \mathcal{H}\) for each \(\varphi \in \mathrm{C}_{l,\operatorname{Lip}}(\mathbb{R}^{n})\), where \(\mathrm{C}_{l,\operatorname{Lip}}(\mathbb{R}^{n})\) denotes the linear space of local Lipschitz functions φ satisfying

$$\begin{aligned} \bigl\vert \varphi (\mathbf{x})- \varphi (\mathbf{y}) \bigr\vert \leq c \bigl(1+ \vert \mathbf{x} \vert ^{m}+ \vert \mathbf{y} \vert ^{m}\bigr) \vert \mathbf{x}-\mathbf{y} \vert ,\quad \forall \mathbf{x}, \mathbf{y} \in \mathbb{R}^{n}, \end{aligned}$$

for some \(c>0\), \(m\in \mathbb{N}\) depending on φ. \(\mathcal{H}\) is considered as a space of random variables. In this case we denote \(X \in \mathcal{H}\).
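As a concrete illustration (not part of the paper's argument), \(\varphi (x)=x^{2}\) lies in \(C_{l,\operatorname{Lip}}(\mathbb{R})\) with \(m=1\) and \(c=1\), since \(|x^{2}-y^{2}|=|x+y|\,|x-y|\leq (1+|x|+|y|)|x-y|\). A quick numerical sanity check of this bound:

```python
import random

def lip_bound_holds(phi, c, m, x, y):
    """Check |phi(x) - phi(y)| <= c * (1 + |x|**m + |y|**m) * |x - y|."""
    return abs(phi(x) - phi(y)) <= c * (1 + abs(x) ** m + abs(y) ** m) * abs(x - y) + 1e-9

random.seed(0)
phi = lambda t: t * t  # |x^2 - y^2| = |x+y||x-y| <= (1 + |x| + |y|)|x-y|
assert all(lip_bound_holds(phi, 1, 1,
                           random.uniform(-100.0, 100.0),
                           random.uniform(-100.0, 100.0))
           for _ in range(10000))
```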

Definition 2.1

([1])

A sub-linear expectation \(\mathbb{\hat{E}}\) on \(\mathcal{H}\) is a function \(\mathbb{\hat{E}} : \mathcal{H} \rightarrow \bar{\mathbb{R}}\) satisfying the following properties: for all \(X,Y \in \mathcal{H}\), we have

  1. (a)

    monotonicity: if \(X \geq Y\) then \(\mathbb{\hat{E}} X \geq \mathbb{\hat{E}} Y\);

  2. (b)

    the constant preserving property: \(\mathbb{\hat{E}}c = c\);

  3. (c)

    sub-additivity: \(\mathbb{\hat{E}}(X+Y) \leq \mathbb{\hat{E}}X + \mathbb{\hat{E}}Y\), whenever \(\mathbb{\hat{E}}X + \mathbb{\hat{E}}Y \) is not of the form \(+\infty - \infty \) or \(-\infty + \infty \);

  4. (d)

    positive homogeneity: \(\mathbb{\hat{E}}(\lambda X) = \lambda \mathbb{\hat{E}}X\), \(\lambda \geq 0\).

Here \(\bar{\mathbb{R}} = [-\infty ,\infty ]\). The triple \((\varOmega , \mathcal{H}, \mathbb{\hat{E}})\) is called a sub-linear expectation space. Given a sub-linear expectation \(\mathbb{\hat{E}}\), we denote the conjugate expectation \(\hat{\varepsilon }\) (also written \(\hat{\mathcal{E}}\)) of \(\mathbb{\hat{E}}\) by

$$\begin{aligned} \hat{\varepsilon }X := -\mathbb{\hat{E}}(-X),\quad \forall X \in \mathcal{H}. \end{aligned}$$

For the pair of capacities \((\mathbb{V},\mathcal{V})\) introduced below, one then has

$$\begin{aligned} \mathbb{\hat{E}}f \leq \mathbb{V}(A) \leq \mathbb{\hat{E}}g,\qquad \hat{\varepsilon }f \leq \mathcal{V}(A) \leq \hat{\varepsilon }g,\quad \mbox{if } f \leq I(A) \leq g, f,g \in \mathcal{H}. \end{aligned}$$

From the definition, it is easily shown that for all \(X,Y \in \mathcal{H}\)

$$\begin{aligned}& \begin{aligned} &\hat{\varepsilon }X \leq \mathbb{\hat{E}}X,\qquad \mathbb{\hat{E}}(X+c) = \mathbb{ \hat{E}}X +c, \\ &\mathbb{\hat{E}}(X-Y) \geq \mathbb{\hat{E}}X - \mathbb{\hat{E}}Y. \end{aligned} \end{aligned}$$
(2.1)
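The properties in (2.1) can be checked on a toy model. A sub-linear expectation can be represented as an upper expectation over a family of linear expectations; the following sketch (our illustration, with two hypothetical priors P and Q on a three-point sample space) verifies (2.1) together with properties (b), (c) and (d):

```python
# Toy model (ours, not the paper's): a sub-linear expectation realized as
# the upper expectation over two priors P, Q on a three-point sample space.
P = (0.2, 0.3, 0.5)
Q = (0.5, 0.25, 0.25)

def E(X):
    """Sub-linear expectation  E^[X] = max(E_P[X], E_Q[X])."""
    return max(sum(p * x for p, x in zip(P, X)),
               sum(q * x for q, x in zip(Q, X)))

def eps(X):
    """Conjugate expectation  eps^[X] = -E^[-X]."""
    return -E(tuple(-x for x in X))

X = (1.0, -2.0, 3.0)
Y = (0.5, 4.0, -1.0)
c, lam = 7.0, 2.5

assert eps(X) <= E(X)                                                # (2.1)
assert abs(E(tuple(x + c for x in X)) - (E(X) + c)) < 1e-9           # (2.1)
assert E(tuple(x - y for x, y in zip(X, Y))) >= E(X) - E(Y) - 1e-9   # (2.1)
assert E(tuple(x + y for x, y in zip(X, Y))) <= E(X) + E(Y) + 1e-9   # (c)
assert abs(E(tuple(lam * x for x in X)) - lam * E(X)) < 1e-9         # (d)
assert abs(E((c, c, c)) - c) < 1e-9                                  # (b)
```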

If \(\mathbb{\hat{E}}Y = \hat{\varepsilon }Y \), then \(\mathbb{\hat{E}}(X + aY) = \mathbb{\hat{E}}X + a\mathbb{\hat{E}}Y\) for any \(a \in \mathbb{R}\). Next, we consider the capacities corresponding to the sub-linear expectations. Let \(\mathcal{G} \subset \mathcal{F}\). A function \(V : \mathcal{G} \rightarrow [0,1]\) is called a capacity if

$$\begin{aligned} V(\emptyset ) = 0,\qquad V(\varOmega ) = 1\quad \mbox{and}\quad V(A) \leq V(B)\quad \mbox{for } \forall A \subseteq B, A,B \in \mathcal{G}. \end{aligned}$$

It is called sub-additive if \(V(A \cup B) \leq V(A) + V(B)\) for all \(A,B \in \mathcal{G}\) with \(A \cup B \in \mathcal{G}\). In the sub-linear space \((\varOmega , \mathcal{H}, \mathbb{\hat{E}})\), we denote a pair \((\mathbb{V},\mathcal{V})\) of capacities by

$$\begin{aligned} \mathbb{V}(A):= \inf \bigl\{ \mathbb{\hat{E}}\xi ; I(A)\leq \xi , \xi \in \mathcal{H}\bigr\} ,\qquad \mathcal{V}(A):= 1 - \mathbb{V}\bigl(A^{c}\bigr),\quad \forall A \in \mathcal{F}, \end{aligned}$$

where \(A^{c}\) is the complement set of A. By definition of \(\mathbb{V}\) and \(\mathcal{V}\), it is obvious that \(\mathbb{V}\) is sub-additive, and

$$\begin{aligned} \mathcal{V} [A]\leq \mathbb{V}[A],\quad \forall A \in \mathcal{F}. \end{aligned}$$

This implies the Markov inequality: \(\forall X \in \mathcal{H}\),

$$\begin{aligned} \mathbb{V}\bigl( \vert X \vert \geq x\bigr) \leq \mathbb{\hat{E}}\bigl( \vert X \vert ^{p}\bigr)/x^{p},\quad \forall x > 0,p >0, \end{aligned}$$

from \(I(|X|\geq x) \leq |X|^{p}/x^{p} \in \mathcal{H}\).
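On the same kind of finite two-prior model, the upper capacity of an event reduces to the maximum of the two probabilities, and the Markov inequality above can be checked numerically (a sketch under these assumptions; P, Q and X are hypothetical):

```python
P = (0.2, 0.3, 0.5)
Q = (0.5, 0.25, 0.25)

def E(X):
    """Sub-linear expectation as an upper expectation over {P, Q}."""
    return max(sum(p * x for p, x in zip(P, X)),
               sum(q * x for q, x in zip(Q, X)))

def V(event):
    """Upper capacity: on this finite model, V(A) = max(P(A), Q(A))."""
    return max(sum(p * e for p, e in zip(P, event)),
               sum(q * e for q, e in zip(Q, event)))

X = (0.5, -2.0, 3.0)
for x in (0.5, 1.0, 2.5):
    for p_exp in (1, 2, 3):
        event = tuple(1 if abs(xi) >= x else 0 for xi in X)
        markov_bound = E(tuple(abs(xi) ** p_exp for xi in X)) / x ** p_exp
        assert V(event) <= markov_bound + 1e-9   # V(|X| >= x) <= E^|X|^p / x^p
```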

Definition 2.2

([1])

A sequence of random variables \(\{X_{n}; n \geq 1\}\) is said to be upper (resp. lower) extended negatively dependent if there is some dominating constant \(K \geq 1\) such that

$$\begin{aligned} \mathbb{\hat{E}} \Biggl(\prod_{i=1}^{n} \varphi _{i}(X_{i}) \Biggr) \leq K \prod _{i=1}^{n}\mathbb{\hat{E}}\bigl(\varphi _{i}(X_{i})\bigr),\quad \forall n \geq 2, \end{aligned}$$

whenever the non-negative functions \(\varphi _{i}(x) \in C_{l,\operatorname{Lip}}( \mathbb{R})\), \(i = 1,2,\ldots \) are all non-decreasing (resp. all non-increasing). They are called extended negatively dependent (END) if they are both upper extended negatively dependent and lower extended negatively dependent.

It is obvious that, if \(\{X_{n}; n \geq 1\}\) is a sequence of extended negatively dependent random variables and \(f_{1}(x), f_{2}(x),\ldots \in C_{l,\operatorname{Lip}}(\mathbb{R})\) are non-decreasing (resp. non-increasing) functions, then \(\{f_{n}(X_{n});n\geq 1\}\) is also a sequence of END random variables.

Definition 2.3

([4])

The Choquet integrals/expectations \((C_{\mathbb{V}},C_{\mathcal{V}})\) are defined by

$$ C_{V}[X]= \int _{0}^{\infty }V(X\geq t)\,\mathrm{d}t+ \int _{-\infty }^{0} \bigl[V(X\geq t)-1 \bigr]\, \mathrm{d}t, $$

with V being replaced by \(\mathbb{V}\) and \(\mathcal{V}\), respectively.
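For a non-negative random variable on a finite sample space, the Choquet integral reduces to a finite sum, since \(V(X\geq t)\) is constant between consecutive values of X. A sketch under the same hypothetical two-prior capacity as above:

```python
P = (0.2, 0.3, 0.5)
Q = (0.5, 0.25, 0.25)

def V(event):
    """Upper capacity of an event, given as a tuple of 0/1 indicators."""
    return max(sum(p * e for p, e in zip(P, event)),
               sum(q * e for q, e in zip(Q, event)))

def choquet(X):
    """C_V[X] = int_0^inf V(X >= t) dt for non-negative discrete X:
    V(X >= t) is constant on each interval between consecutive values."""
    pts = sorted(set(X) | {0.0})
    return sum((hi - lo) * V(tuple(1 if x >= hi else 0 for x in X))
               for lo, hi in zip(pts, pts[1:]))

X = (1.0, 3.0, 2.0)
# Intervals (0,1], (1,2], (2,3] contribute 1*1.0 + 1*0.8 + 1*0.3 = 2.1.
assert abs(choquet(X) - 2.1) < 1e-9
# The Choquet integral dominates each linear expectation E_P, E_Q.
assert choquet(X) >= sum(p * x for p, x in zip(P, X)) - 1e-9
```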

Throughout this paper, C denotes a positive constant whose value may vary from place to place.

Lemma 2.1

([4], Theorem 3.1)

Assume that \(\{X_{i};u_{n} \leq i\leq m_{n}\}\) is an array of row-wise END random variables in \((\varOmega ,\mathcal{H},\hat{\mathbb{E}})\) with \(\hat{\mathbb{E}}X_{i} \leq 0\) for \(u_{n}\leq i\leq m_{n}\). Let \(B_{n}=\sum_{i = u _{n}}^{m_{n}}\hat{\mathbb{E}}[X_{i}^{2}]\). Then, for any given n and all \(x>0\), \(y>0\),

$$\begin{aligned} \mathbb{V} \Biggl(\sum_{i=u_{n}}^{m_{n}}X_{i} \geq x \Biggr)\leq \mathbb{V} \Bigl(\max_{u_{n}\leq i\leq m_{n}}X_{i} \geq y \Bigr)+K \exp \biggl\{ \frac{x}{y}-\frac{x}{y}\ln \biggl(1+ \frac{xy}{B_{n}} \biggr) \biggr\} . \end{aligned}$$

Here K is the dominating constant in Definition 2.2.

For \(0<\mu <1\), let \(g(x)\in C_{l,\operatorname{Lip}}(\mathbb{R})\) be a non-increasing function such that \(0\leq g(x)\leq 1\) for all x and \(g(x)=1\) if \(x\leq \mu \), \(g(x)=0\) if \(x>1\). Then

$$\begin{aligned} I \bigl( \vert x \vert \leq \mu \bigr)\leq g\bigl( \vert x \vert \bigr)\leq I \bigl( \vert x \vert \leq 1 \bigr). \end{aligned}$$
(2.2)
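One admissible choice of g (ours, for illustration) is the piecewise-linear function that equals 1 up to μ, decreases linearly on (μ,1], and vanishes beyond 1; it is Lipschitz with constant \(1/(1-\mu )\), hence lies in \(C_{l,\operatorname{Lip}}(\mathbb{R})\), and satisfies the sandwich (2.2):

```python
MU = 0.5  # the constant mu in (0, 1); this particular value is our choice

def g(x):
    """Non-increasing, 0 <= g <= 1: equals 1 on (-inf, MU], 0 on (1, inf),
    linear in between (Lipschitz with constant 1/(1 - MU))."""
    if x <= MU:
        return 1.0
    if x > 1.0:
        return 0.0
    return (1.0 - x) / (1.0 - MU)

# Sandwich (2.2): I(|x| <= MU) <= g(|x|) <= I(|x| <= 1) on a grid.
for k in range(4001):
    x = -2.0 + 0.001 * k
    ax = abs(x)
    lower = 1.0 if ax <= MU else 0.0
    upper = 1.0 if ax <= 1.0 else 0.0
    assert lower <= g(ax) <= upper
```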

Lemma 2.2

Assume that \(\{X_{ni};u_{n}\leq i\leq m_{n},n\geq 1\}\) is an array of row-wise END random variables. Let \(\{h_{n};n\geq 1\}\) and \(\{k_{n};n \geq 1\}\) be increasing sequences of positive constants with \(h_{n}\rightarrow \infty \), \(k_{n}\rightarrow \infty \) as \(n\rightarrow \infty \) and \(\frac{h_{n}}{k_{n}}\rightarrow 0\). Suppose that, for some \(r>0\),

$$\begin{aligned} \sup_{n\geq 1}k_{n}^{-1}\sum _{i=u_{n}}^{m_{n}}\hat{\mathbb{E}} \vert X _{ni} \vert ^{r}< \infty \end{aligned}$$
(2.3)

and

$$\begin{aligned} \lim_{n\rightarrow \infty }k_{n}^{-1}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h _{n}} \biggr) \biggr)=0. \end{aligned}$$
(2.4)

Then the following statements hold:

$$\begin{aligned}& \lim_{n\rightarrow \infty }k_{n}^{-\frac{\alpha }{r}}\sum _{i=u_{n}} ^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{\alpha } \biggl(1-g \biggl(\frac{ \vert X _{ni} \vert ^{r}}{k_{n}} \biggr) \biggr)=0, \quad \textit{for any } 0< \alpha \leq r, \end{aligned}$$
(2.5)
$$\begin{aligned}& \lim_{n\rightarrow \infty }k_{n}^{-\frac{\beta }{r}}\sum _{i=u_{n}} ^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{\beta } g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{k _{n}} \biggr)=0, \quad \textit{for any } \beta >r. \end{aligned}$$
(2.6)

Proof

Since \(\frac{h_{n}}{k_{n}}\rightarrow 0\) as \(n\rightarrow \infty \), there exists N such that \(h_{n}\leq k _{n}\) for \(n>N\). Since \(g(x)\in C_{l,\operatorname{Lip}}(\mathbb{R})\) is non-increasing and \(h_{n}\leq k_{n}\), by (2.2) we obtain

$$\begin{aligned} 1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{k_{n}} \biggr)\leq 1-g \biggl( \frac{ \vert X _{ni} \vert ^{r}}{h_{n}} \biggr), \end{aligned}$$
(2.7)

and \(1-g(\frac{|X_{ni}|^{r}}{k_{n}})\leq I (|X_{ni}|^{r}>\mu k _{n} )\). For \(0<\alpha \leq r\) and \(0<\mu <1\), combining (2.7) and (2.4), for \(n>N\) we get

$$\begin{aligned} k_{n}^{-\frac{\alpha }{r}}\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X _{ni} \vert ^{\alpha } \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{k_{n}} \biggr) \biggr) & =k_{n} ^{-1}\cdot k_{n}^{-\frac{\alpha }{r}+1}\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{\alpha } \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{k _{n}} \biggr) \biggr) \\ & \leq k_{n}^{-1}\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{ \alpha }\biggl( \frac{ \vert X_{ni} \vert ^{r}}{\mu }\biggr)^{\frac{r-\alpha }{r}} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{k_{n}} \biggr) \biggr) \\ & \leq \mu ^{\frac{\alpha -r}{r}}\cdot k_{n}^{-1}\sum _{i=u_{n}} ^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h _{n}} \biggr) \biggr) \\ & \rightarrow 0 \quad (n\rightarrow \infty ). \end{aligned}$$

Therefore, (2.5) has been proven.

Now we prove (2.6). Set \(A_{n}=k_{n}^{-\frac{ \beta }{r}}\sum_{i=u_{n}}^{m_{n}}\hat{\mathbb{E}}|X_{ni}|^{ \beta } g (\frac{|X_{ni}|^{r}}{k_{n}} ) \). Using (2.1), we conclude that

$$\begin{aligned} A_{n} &= k_{n}^{-\frac{\beta }{r}}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{\beta } g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{k_{n}} \biggr) \\ &\leq k_{n}^{-\frac{\beta }{r}}\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{\beta } g \biggl( \frac{\mu \vert X_{ni} \vert ^{r}}{1} \biggr)+ k_{n}^{-\frac{\beta }{r}} \sum _{i=u_{n}}^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{\beta } \biggl(g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{k_{n}} \biggr)-g \biggl( \frac{\mu \vert X_{ni} \vert ^{r}}{1} \biggr) \biggr) \\ &=:A_{n1}+A_{n2}. \end{aligned}$$

For \(A_{n1}\), note that \(\frac{\beta }{r}>1\), \(0<\mu <1\), \(g (\frac{ \mu |X_{ni}|^{r}}{1} )\leq I (\mu |X_{ni}|^{r}\leq 1 )\), and \(k_{n}\rightarrow \infty \) as \(n\rightarrow \infty \); combining these with (2.3), we obtain

$$\begin{aligned} A_{n1} &= k_{n}^{-\frac{\beta }{r}}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{\beta } g \biggl(\frac{\mu \vert X_{ni} \vert ^{r}}{1} \biggr) \\ &= k_{n}^{-\frac{\beta }{r}+1}k_{n}^{-1}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \vert X_{ni} \vert ^{\beta -r} g \biggl( \frac{\mu \vert X _{ni} \vert ^{r}}{1} \biggr) \\ &\leq k_{n}^{-\frac{\beta }{r}+1}k_{n}^{-1}\sum _{i=u_{n}}^{m _{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(\frac{1}{\mu } \biggr)^{\frac{ \beta -r}{r}} \\ &\leq (\mu k_{n})^{1-\frac{\beta }{r}}\cdot \sup_{n\geq 1}k_{n}^{-1} \sum_{i=u_{n}}^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \\ &\rightarrow 0 \quad (n\rightarrow \infty ). \end{aligned}$$

For \(A_{n2}\), because of (2.2), we obtain \(I(|X_{ni}|^{r}>j) \leq 1-g (\frac{|X_{ni}|^{r}}{j} ) \). So we have

$$\begin{aligned} & \vert X_{ni} \vert ^{\beta } \biggl(g \biggl( \frac{ \vert X_{ni} \vert ^{r}}{k_{n}} \biggr) -g \biggl(\frac{\mu \vert X_{ni} \vert ^{r}}{1} \biggr) \biggr) \\ &\quad \leq \vert X_{ni} \vert ^{\beta }I \bigl(1< \vert X_{ni} \vert ^{r}\leq k_{n} \bigr) \\ & \quad = \vert X_{ni} \vert ^{r} \vert X_{ni} \vert ^{\beta -r}I \bigl(1< \vert X_{ni} \vert ^{r} \leq k_{n} \bigr) \\ & \quad \leq \vert X_{ni} \vert ^{r}\sum _{j=1}^{k_{n}-1}(j+1)^{ \frac{\beta -r}{r}}I \bigl(j< \vert X_{ni} \vert ^{r}\leq j+1 \bigr) \\ & \quad = \sum_{j=1}^{k_{n}-1} \vert X_{ni} \vert ^{r}(j+1)^{\frac{\beta -r}{r}}I \bigl( \vert X_{ni} \vert ^{r}>j \bigr) -\sum _{j=1}^{k_{n}-1} \vert X_{ni} \vert ^{r}(j+1)^{\frac{ \beta -r}{r}}I \bigl( \vert X_{ni} \vert ^{r}>j+1 \bigr) \\ & \quad \leq \sum_{j=1}^{k_{n}-1} \vert X_{ni} \vert ^{r}(j+1)^{ \frac{\beta -r}{r}}I \bigl( \vert X_{ni} \vert ^{r}>j \bigr) -\sum _{j=2} ^{k_{n}} \vert X_{ni} \vert ^{r}j^{\frac{\beta -r}{r}}I \bigl( \vert X_{ni} \vert ^{r}>j \bigr) \\ &\quad \leq \sum_{j=1}^{k_{n}-1} \vert X_{ni} \vert ^{r} \bigl((j+1)^{\frac{ \beta -r}{r}} -j^{\frac{\beta -r}{r}} \bigr)I \bigl( \vert X_{ni} \vert ^{r}>j \bigr)+C \vert X _{ni} \vert ^{r}I \bigl( \vert X_{ni} \vert ^{r}>1 \bigr) \\ & \quad \leq \sum_{j=1}^{k_{n}-1} \vert X_{ni} \vert ^{r} \bigl((j+1)^{\frac{ \beta -r}{r}} -j^{\frac{\beta -r}{r}} \bigr) \biggl(1-g \biggl(\frac{ \vert X _{ni} \vert ^{r}}{j} \biggr) \biggr)+C \vert X_{ni} \vert ^{r}. \end{aligned}$$

Since \(\frac{h_{n}}{k_{n}}\rightarrow 0\) as \(n\rightarrow \infty \), \(\frac{\beta }{r}>1\) and \(g(x)\in C_{l,\operatorname{Lip}}(\mathbb{R})\) is non-increasing, combining (2.3) and (2.4) yields

$$\begin{aligned} A_{n2}\leq{} & k_{n}^{-\frac{\beta }{r}}\sum _{i=u_{n}}^{m_{n}} \sum_{j=1}^{h_{n}} \bigl((j+1)^{\frac{\beta -r}{r}}-j^{\frac{ \beta -r}{r}} \bigr) \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X _{ni} \vert ^{r}}{j} \biggr) \biggr) \\ &{} +k_{n}^{-\frac{\beta }{r}}\sum_{i=u_{n}}^{m_{n}} \sum_{j=h_{n}+1}^{k_{n}-1} \bigl((j+1)^{\frac{\beta -r}{r}}-j^{\frac{ \beta -r}{r}} \bigr) \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X _{ni} \vert ^{r}}{j} \biggr) \biggr) +Ck_{n}^{-\frac{\beta }{r}} \sum_{i=u_{n}}^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \\ \leq {}& k_{n}^{-\frac{\beta }{r}}\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{1} \biggr) \biggr) \bigl((h_{n}+1)^{\frac{\beta -r}{r}}-1 \bigr) \\ &{} + k_{n}^{-\frac{\beta }{r}}\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h _{n}+1} \biggr) \biggr) \bigl(k_{n}^{\frac{\beta -r}{r}}-(h_{n}+1)^{\frac{ \beta -r}{r}} \bigr) +Ck_{n}^{-\frac{\beta }{r}}\sum_{i=u_{n}} ^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \\ \leq {}& k_{n}^{-\frac{\beta }{r}}\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \bigl((h_{n}+1)^{\frac{\beta -r}{r}}-1 \bigr) +k_{n}^{-1} \sum_{i=u_{n}}^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X _{ni} \vert ^{r}}{h_{n}} \biggr) \biggr)\\ &{} +Ck_{n}^{-\frac{\beta }{r}+1} \cdot k_{n}^{-1} \sum_{i=u_{n}}^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \\ \leq {}& k_{n}^{-\frac{\beta }{r}}\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} (h_{n}+1 )^{ \frac{\beta -r}{r}} +k_{n}^{-1}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h _{n}} \biggr) \biggr)\\ &{}+Ck_{n}^{-\frac{\beta }{r}+1}\cdot k_{n}^{-1} \sum _{i=u_{n}}^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \\ = {}& 
\biggl(\frac{h_{n}+1}{k_{n}} \biggr)^{\frac{\beta -r}{r}}k_{n} ^{-1}\sum_{i=u_{n}}^{m_{n}} \hat{ \mathbb{E}} \vert X_{ni} \vert ^{r} +k _{n}^{-1}\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h_{n}} \biggr) \biggr) \\ &{}+Ck_{n} ^{-\frac{\beta }{r}+1}\cdot k_{n}^{-1}\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \\ \leq {}& \biggl(\frac{h_{n}+1}{k_{n}} \biggr)^{\frac{\beta -r}{r}} \cdot \sup _{n\geq 1}k_{n}^{-1}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} +k_{n}^{-1}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h _{n}} \biggr) \biggr)\\ &{}+Ck_{n}^{-\frac{\beta }{r}+1}\cdot \sup_{n\geq 1}k_{n}^{-1} \sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \\ \rightarrow {}& 0 \quad (n\rightarrow \infty ). \end{aligned}$$

Now (2.6) has been proven. The proof is completed. □

Main results

Theorem 3.1

Assume that \(\{X_{ni};u_{n}\leq i\leq m_{n},n\geq 1\}\) is an array of row-wise END random variables, and that \(\{h_{n};n\geq 1\}\) and \(\{k_{n};n \geq 1\}\) are two increasing sequences of positive constants with \(h_{n}\rightarrow \infty \), \(k_{n}\rightarrow \infty \) as \(n\rightarrow \infty \). Suppose that, for some \(1\leq r <2\), condition (2.3) holds together with

$$\begin{aligned}& \sum_{n=1}^{\infty } (m_{n}-u_{n} )\frac{h_{n}}{k _{n}}< \infty , \end{aligned}$$
(3.1)
$$\begin{aligned}& \sum_{n=1}^{\infty } \biggl( \frac{h_{n}}{k_{n}} \biggr)^{ \frac{2-r}{r}}< \infty , \end{aligned}$$
(3.2)
$$\begin{aligned}& \sum_{n=1}^{\infty }k_{n}^{-1} \sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h _{n}} \biggr) \biggr)< \infty . \end{aligned}$$
(3.3)

Then, for all \(\varepsilon >0\), we have

$$ \sum_{n=1}^{\infty }\mathbb{V} \Biggl(\sum_{i=u_{n}}^{m _{n}}(X_{ni}-\hat{ \mathbb{E}}X_{ni})>\varepsilon k_{n}^{\frac{1}{r}} \Biggr)< \infty $$
(3.4)

and

$$ \sum_{n=1}^{\infty }\mathbb{V} \Biggl(\sum_{i=u_{n}}^{m _{n}}(X_{ni}-\hat{ \mathcal{E}}X_{ni})< -\varepsilon k_{n}^{\frac{1}{r}} \Biggr)< \infty . $$
(3.5)

In particular, if \(\hat{\mathbb{E}}X_{ni}=\hat{\mathcal{E}}X_{ni}\), then

$$ \sum_{n=1}^{\infty }\mathbb{V} \Biggl( \Biggl\vert \sum_{i=u _{n}}^{m_{n}}(X_{ni}- \hat{\mathbb{E}}X_{ni}) \Biggr\vert >\varepsilon k _{n}^{\frac{1}{r}} \Biggr)< \infty . $$
(3.6)

Theorem 3.2

Assume that \(\{X_{ni};u_{n}\leq i\leq m_{n},n\geq 1\}\) is an array of row-wise END random variables, and that \(\{h_{n};n\geq 1\}\) and \(\{k_{n};n \geq 1\}\) are two increasing sequences of positive constants with \(h_{n}\rightarrow \infty \), \(k_{n}\rightarrow \infty \) as \(n\rightarrow \infty \). Suppose that, for some \(1\leq r <2\), conditions (2.3), (3.1) and (3.2) hold together with

$$\begin{aligned}& \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h _{n}} \biggr) \biggr)\leq C_{\mathbb{V}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X _{ni} \vert ^{r}}{h_{n}} \biggr) \biggr), \end{aligned}$$
(3.7)
$$\begin{aligned}& \sum_{n=1}^{\infty }k_{n}^{-1} \sum_{i=u_{n}}^{m_{n}}C _{\mathbb{V}} \biggl\{ \vert X_{ni} \vert ^{r} \biggl(1-g \biggl( \frac{ \vert X_{ni} \vert ^{r}}{h _{n}} \biggr) \biggr) \biggr\} < \infty . \end{aligned}$$
(3.8)

If, in addition, \(\hat{\mathbb{E}}X_{ni}= \hat{\mathcal{E}}X_{ni}\), then, for all \(\varepsilon >0\), we have

$$ \sum_{n=1}^{\infty }k_{n}^{-1}C_{\mathbb{V}} \Biggl( \Biggl\vert \sum_{i=u_{n}}^{m_{n}} (X_{ni}-\hat{\mathbb{E}}X_{ni}) \Biggr\vert - \varepsilon k_{n}^{\frac{1}{r}} \Biggr)_{+}^{r}< \infty . $$
(3.9)

Proof of Theorem 3.1

For an array of row-wise END random variables \(\{X_{ni};u_{n}\leq i\leq m_{n},n\geq 1\}\), to ensure that the truncated random variables are also END, the truncation functions should belong to \(C_{l,\operatorname{Lip}}\). For all \(u_{n}\leq i\leq m_{n}\), \(n \geq 1\), \(\lambda \geq 0\) and \(\varepsilon >0\), we define

$$\begin{aligned} & Y_{ni}=-\lambda k_{n}^{\frac{1}{r}}I \bigl\{ X_{ni}< -\lambda k_{n} ^{\frac{1}{r}} \bigr\} +X_{ni} I \bigl\{ \vert X_{ni} \vert \leq \lambda k_{n} ^{\frac{1}{r}} \bigr\} + \lambda k_{n}^{\frac{1}{r}}I \bigl\{ X_{ni}> \lambda k_{n}^{\frac{1}{r}} \bigr\} , \\ & Z_{ni}=X_{ni}-Y_{ni}= \bigl(X_{ni}+ \lambda k_{n}^{\frac{1}{r}} \bigr) I \bigl\{ X_{ni}< -\lambda k_{n}^{\frac{1}{r}} \bigr\} + \bigl(X_{ni}- \lambda k_{n}^{\frac{1}{r}} \bigr)I \bigl\{ X_{ni}>\lambda k_{n}^{ \frac{1}{r}} \bigr\} . \end{aligned}$$

From this it is easy to see that \(Y_{ni}\leq \vert Y_{ni} \vert \leq \lambda k_{n}^{\frac{1}{r}}\), \(|Y_{ni}|\leq |X_{ni}|\) and

$$ \vert Z_{ni} \vert \leq \vert X_{ni} \vert I \bigl\{ \vert X_{ni} \vert > \lambda k_{n}^{\frac{1}{r}} \bigr\} \leq \vert X_{ni} \vert \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert }{\lambda k_{n}^{ \frac{1}{r}}} \biggr) \biggr). $$
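In other words, \(Y_{ni}\) is simply \(X_{ni}\) clipped to \([-\lambda k_{n}^{1/r},\lambda k_{n}^{1/r}]\) and \(Z_{ni}\) is the overshoot. A quick sketch (with a hypothetical threshold value) of the stated bounds:

```python
# lam_k stands for the threshold lam * k_n**(1/r); the value is illustrative.
def truncate(x, lam_k):
    """Return (Y, Z): Y is x clipped to [-lam_k, lam_k], Z = x - Y."""
    y = max(-lam_k, min(x, lam_k))
    return y, x - y

lam_k = 2.0
for x in (-5.0, -2.0, -0.5, 0.0, 1.9, 2.0, 3.7):
    y, z = truncate(x, lam_k)
    assert y + z == x                        # X_ni = Y_ni + Z_ni
    assert abs(y) <= lam_k and abs(y) <= abs(x)
    indicator = 1.0 if abs(x) > lam_k else 0.0
    assert abs(z) <= abs(x) * indicator      # |Z_ni| <= |X_ni| I(|X_ni| > lam_k)
```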

Now we prove (3.4). For all \(\varepsilon >0\), it suffices to verify that

$$\begin{aligned}& I_{1}=\sum_{n=1}^{\infty }\mathbb{V} \Biggl(\sum_{i=u _{n}}^{m_{n}}Z_{ni}> \varepsilon k_{n}^{\frac{1}{r}} \Biggr)< \infty , \\& I_{2}=\sum_{n=1}^{\infty }\mathbb{V} \Biggl(\sum_{i=u _{n}}^{m_{n}} (Y_{ni}-\hat{ \mathbb{E}}Y_{ni} ) >\varepsilon k_{n}^{\frac{1}{r}} \Biggr)< \infty , \\& I_{3}=\lim_{n\rightarrow \infty }k_{n}^{-\frac{1}{r}}\sum _{i=u _{n}}^{m_{n}} (\hat{\mathbb{E}}Y_{ni}- \hat{\mathbb{E}}X_{ni} )=0. \end{aligned}$$

By the Markov inequality, (3.1) and (3.3), and noting that \(\sum_{i=u_{n}}^{m_{n}}Z_{ni}>\varepsilon k_{n}^{\frac{1}{r}}\) implies \(Z_{ni}\neq 0\), i.e. \(\vert X_{ni} \vert >\lambda k_{n}^{\frac{1}{r}}\), for some \(u_{n}\leq i\leq m_{n}\), we conclude that

$$\begin{aligned} I_{1} &= \sum_{n=1}^{\infty } \mathbb{V} \Biggl(\sum_{i=u_{n}}^{m_{n}}Z_{ni}> \varepsilon k_{n}^{\frac{1}{r}} \Biggr) \\ &\leq \sum_{n=1}^{\infty }\mathbb{V} \bigl(\exists i;u_{n} \leq i\leq m_{n},\mbox{ such that } \vert X_{ni} \vert >\lambda k_{n}^{\frac{1}{r}} \bigr) \\ &\leq \sum_{n=1}^{\infty }\sum _{i=u_{n}}^{m_{n}} \mathbb{V} \bigl( \vert X_{ni} \vert >\lambda k_{n}^{\frac{1}{r}} \bigr) \leq \sum _{n=1}^{\infty }\sum_{i=u_{n}}^{m_{n}} \frac{ \hat{\mathbb{E}} \vert X_{ni} \vert ^{r}}{\lambda ^{r}k_{n}} \\ &\leq C\sum_{n=1}^{\infty }k_{n}^{-1} \sum_{i=u_{n}} ^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h_{n}} \biggr) \biggr)+ C\sum_{n=1}^{\infty }k_{n}^{-1} \sum_{i=u_{n}}^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h_{n}} \biggr) \\ &\leq C\sum_{n=1}^{\infty }k_{n}^{-1} \sum_{i=u_{n}} ^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h_{n}} \biggr) \biggr)+ C\sum_{n=1}^{\infty }k_{n}^{-1} \sum_{i=u_{n}}^{m_{n}}\hat{\mathbb{E}}h_{n} g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h_{n}} \biggr) \\ &\leq C\sum_{n=1}^{\infty }k_{n}^{-1} \sum_{i=u_{n}} ^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h_{n}} \biggr) \biggr)+ C\sum_{n=1}^{\infty }k_{n}^{-1} \sum_{i=u_{n}}^{m_{n}}h_{n} \\ &\leq C\sum_{n=1}^{\infty }k_{n}^{-1} \sum_{i=u_{n}} ^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h_{n}} \biggr) \biggr)+ C\sum_{n=1}^{\infty } (m_{n}-u_{n} )\frac{h_{n}}{k_{n}} \\ &< \infty . \end{aligned}$$

Next we consider \(I_{2}\). In Lemma 2.1, let \(B_{n}=\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} (Y_{ni}-\hat{\mathbb{E}}Y_{ni} )^{2}\), \(x= \varepsilon k_{n}^{\frac{1}{r}}\) and \(y=2\lambda k_{n}^{\frac{1}{r}}\), and take \(\lambda =\frac{\varepsilon }{2}\), so that \(y=2\lambda k_{n}^{\frac{1}{r}}= \varepsilon k_{n}^{\frac{1}{r}}\). For all \(u_{n}\leq i\leq m_{n}\), \(n\geq 1\) and \(\varepsilon >0\), we have

$$ \hat{\mathbb{E}} (Y_{ni}-\hat{\mathbb{E}}Y_{ni} )^{2} \leq 2\hat{\mathbb{E}} \bigl(Y_{ni}^{2}+ ( \hat{\mathbb{E}}Y_{ni} ) ^{2} \bigr)\leq 4\hat{ \mathbb{E}}Y_{ni}^{2} $$

and

$$ Y_{ni}-\hat{\mathbb{E}}Y_{ni}\leq \lambda k_{n}^{\frac{1}{r}}+ \hat{\mathbb{E}} \vert Y_{ni} \vert \leq \varepsilon k_{n}^{\frac{1}{r}}. $$

Since \(Y_{ni}-\hat{\mathbb{E}}Y_{ni}\leq \varepsilon k_{n}^{\frac{1}{r}} \) for every i, we have \(\mathbb{V} (\max_{u_{n}\leq i\leq m_{n}} (Y_{ni}-\hat{\mathbb{E}}Y_{ni} )> \varepsilon k_{n}^{\frac{1}{r}} )=0 \). So we can get

$$\begin{aligned} I_{2} &= \sum_{n=1}^{\infty }\mathbb{V} \Biggl(\sum_{i=u _{n}}^{m_{n}} (Y_{ni} - \hat{\mathbb{E}}Y_{ni} )>\varepsilon k_{n}^{\frac{1}{r}} \Biggr) \\ &\leq \sum_{n=1}^{\infty } \biggl\{ \mathbb{V} \Bigl(\max_{u_{n} \leq i\leq m_{n}} (Y_{ni}-\hat{\mathbb{E}}Y_{ni} )>\varepsilon k_{n}^{\frac{1}{r}} \Bigr)+K \exp \biggl(1-\ln \biggl(1+ \frac{ \varepsilon ^{2}k_{n}^{\frac{2}{r}}}{B_{n}} \biggr) \biggr) \biggr\} \\ &= \sum_{n=1}^{\infty }K \exp \biggl(1-\ln \biggl(1+\frac{ \varepsilon ^{2}k_{n}^{\frac{2}{r}}}{B_{n}} \biggr) \biggr) \leq C \sum _{n=1}^{\infty }\exp \ln \biggl(\frac{B_{n}}{B_{n}+ \varepsilon ^{2}k_{n}^{\frac{2}{r}}} \biggr) \\ &\leq C\sum_{n=1}^{\infty }k_{n}^{-\frac{2}{r}}B_{n} = C \sum_{n=1}^{\infty }k_{n}^{-\frac{2}{r}} \sum_{i=u_{n}} ^{m_{n}}\hat{\mathbb{E}} (Y_{ni}-\hat{\mathbb{E}}Y_{ni} ) ^{2} \leq C\sum _{n=1}^{\infty }k_{n}^{-\frac{2}{r}}\sum _{i=u_{n}}^{m_{n}}\hat{\mathbb{E}}Y_{ni}^{2} \\ &\leq C\sum_{n=1}^{\infty }k_{n}^{-\frac{2}{r}} \sum_{i=u_{n}}^{m_{n}}\hat{\mathbb{E}}X_{ni}^{2} g \biggl(\frac{ \mu \vert X_{ni} \vert }{\lambda k_{n}^{\frac{1}{r}}} \biggr)+ C\sum_{n=1} ^{\infty }k_{n}^{-\frac{2}{r}}\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \bigl(\lambda k_{n}^{\frac{1}{r}} \bigr)^{2} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert }{\lambda k_{n}^{\frac{1}{r}}} \biggr) \biggr) \\ &\leq C\sum_{n=1}^{\infty }k_{n}^{-\frac{2}{r}} \sum_{i=u_{n}}^{m_{n}}\hat{\mathbb{E}}X_{ni}^{2} g \biggl(\frac{ \mu \vert X_{ni} \vert }{\lambda k_{n}^{\frac{1}{r}}} \biggr)+C\sum_{n=1} ^{\infty }\sum_{i=u_{n}}^{m_{n}}\mathbb{V} \bigl( \vert X_{ni} \vert > \mu \lambda k_{n}^{\frac{1}{r}} \bigr) \\ &:=I_{21}+I_{22}. \end{aligned}$$

From the proof of \(I_{1}\), it is easy to prove that

$$\begin{aligned} I_{22} &= C\sum_{n=1}^{\infty }\sum _{i=u_{n}}^{m_{n}} \mathbb{V} \bigl( \vert X_{ni} \vert >\mu \lambda k_{n}^{\frac{1}{r}} \bigr) \\ &\leq C\sum_{n=1}^{\infty }\sum _{i=u_{n}}^{m_{n}}\frac{ \hat{\mathbb{E}} \vert X_{ni} \vert ^{r}}{(\mu \lambda )^{r}k_{n}} \\ &< \infty . \end{aligned}$$

Noting that \(g (\frac{\mu |X_{ni}|}{\lambda k_{n}^{\frac{1}{r}}} ) \leq 1\) and \(g (\frac{\mu ^{r}|X_{ni}|^{r}}{\lambda ^{r}k_{n}} )-g (\frac{|X_{ni}|^{r}}{h_{n}} )\leq I \{\mu h_{n}< |X _{ni}|^{r}\leq \frac{\lambda ^{r}k_{n}}{\mu ^{r}} \}\), combining (2.1), (2.3), (3.2) and (3.3) we get

$$\begin{aligned} \begin{aligned} I_{21} ={}& C\sum_{n=1}^{\infty }k_{n}^{-\frac{2}{r}} \sum_{i=u_{n}}^{m_{n}}\hat{\mathbb{E}}X_{ni}^{2} g \biggl(\frac{ \vert X _{ni} \vert ^{r}}{h_{n}} \biggr) +C\sum_{n=1}^{\infty }k_{n}^{- \frac{2}{r}} \sum_{i=u_{n}}^{m_{n}}\hat{\mathbb{E}}X_{ni}^{2} g \biggl(\frac{\mu \vert X_{ni} \vert }{\lambda k_{n}^{\frac{1}{r}}} \biggr)\\ &{}- C \sum_{n=1}^{\infty }k_{n}^{-\frac{2}{r}} \sum_{i=u_{n}} ^{m_{n}}\hat{\mathbb{E}}X_{ni}^{2} g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h_{n}} \biggr) \\ \leq{}& C\sum_{n=1}^{\infty }k_{n}^{-\frac{2}{r}} \sum_{i=u_{n}}^{m_{n}}\hat{\mathbb{E}}X_{ni}^{2} g \biggl(\frac{ \vert X _{ni} \vert ^{r}}{h_{n}} \biggr) +C\sum_{n=1}^{\infty }k_{n}^{- \frac{2}{r}} \sum_{i=u_{n}}^{m_{n}}\hat{\mathbb{E}}X_{ni}^{2} \biggl(g \biggl(\frac{\mu \vert X_{ni} \vert }{\lambda k_{n}^{\frac{1}{r}}} \biggr)-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h_{n}} \biggr) \biggr) \\ ={}& C\sum_{n=1}^{\infty }k_{n}^{-\frac{2}{r}} \sum_{i=u _{n}}^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \vert X_{ni} \vert ^{2-r}g \biggl(\frac{ \vert X _{ni} \vert ^{r}}{h_{n}} \biggr) \\ &{}+C\sum _{n=1}^{\infty }k_{n}^{- \frac{2}{r}}\sum _{i=u_{n}}^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \vert X _{ni} \vert ^{2-r} \biggl(g \biggl(\frac{\mu \vert X_{ni} \vert }{\lambda k_{n}^{ \frac{1}{r}}} \biggr)-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h_{n}} \biggr) \biggr) \\ \leq{} &C\sum_{n=1}^{\infty }k_{n}^{-\frac{2}{r}} \sum_{i=u_{n}}^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r}h_{n}^{ \frac{2-r}{r}}g \biggl( \frac{ \vert X_{ni} \vert ^{r}}{h_{n}} \biggr) \\ &{}+C\sum_{n=1}^{\infty } \biggl(\frac{\lambda }{\mu } \biggr)^{2-r}k _{n}^{-\frac{2}{r}} \sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X _{ni} \vert ^{r}k_{n}^{\frac{2-r}{r}} \biggl(g \biggl(\frac{\mu \vert X_{ni} \vert }{ \lambda k_{n}^{\frac{1}{r}}} \biggr)-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h _{n}} \biggr) \biggr) \\ \leq{}& C\sum_{n=1}^{\infty } \biggl( \frac{h_{n}}{k_{n}} \biggr) 
^{\frac{2-r}{r}}k_{n}^{-1}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} +C\sum_{n=1}^{\infty }k_{n}^{-1} \sum_{i=u_{n}}^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1 -g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h_{n}} \biggr) \biggr) \\ \leq{}& C\sum_{n=1}^{\infty } \biggl( \frac{h_{n}}{k_{n}} \biggr) ^{\frac{2-r}{r}}\cdot \sup_{n\geq 1} k_{n}^{-1}\sum_{i=u_{n}} ^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} +C\sum _{n=1}^{\infty }k _{n}^{-1}\sum _{i=u_{n}}^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1 -g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h_{n}} \biggr) \biggr) \\ < {}&\infty . \end{aligned} \end{aligned}$$

So we have \(I_{2}<\infty \).

Finally, we prove \(I_{3}\rightarrow 0 \). We only need to prove \(\lim_{n\rightarrow \infty } k_{n}^{-\frac{1}{r}}\sum_{i=u_{n}}^{m_{n}} \vert \hat{\mathbb{E}}Y_{ni}- \hat{\mathbb{E}}X_{ni} \vert =0\). Since \(\frac{h_{n}}{k_{n}}\rightarrow 0\), there exists N such that \(h_{n} \leq \lambda ^{r}k_{n} \) for \(n>N\); thus \(1-g (\frac{|X_{ni}|}{\lambda k _{n}^{\frac{1}{r}}} )\leq 1-g (\frac{|X_{ni}|^{r}}{\lambda ^{r} k_{n}} )\leq 1-g (\frac{|X_{ni}|^{r}}{h_{n}} ) \). Noting that (3.3) implies (2.4), and combining this with \(\vert \hat{\mathbb{E}}Y_{ni}- \hat{\mathbb{E}}X_{ni} \vert \leq \hat{\mathbb{E}} \vert Y_{ni}-X _{ni} \vert \), we get

$$\begin{aligned} & \lim_{n\rightarrow \infty } k_{n}^{-\frac{1}{r}} \sum_{i=u_{n}}^{m_{n}} \vert \hat{ \mathbb{E}}Y_{ni}- \hat{\mathbb{E}}X_{ni} \vert \\ &\quad \leq \lim_{n\rightarrow \infty } k_{n}^{-\frac{1}{r}}\sum _{i=u_{n}}^{m_{n}}\hat{\mathbb{E}} \vert Y_{ni}-X_{ni} \vert = \lim_{n\rightarrow \infty } k_{n}^{-\frac{1}{r}}\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert Z_{ni} \vert \\ &\quad \leq \lim_{n\rightarrow \infty } k_{n}^{-\frac{1}{r}}\sum _{i=u_{n}}^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert }{\lambda k_{n}^{\frac{1}{r}}} \biggr) \biggr) \\ &\quad \leq \lim_{n\rightarrow \infty } k_{n}^{-\frac{1}{r}} \sum _{i=u_{n}}^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert \biggl(\frac{ \vert X_{ni} \vert }{\mu \lambda k_{n}^{\frac{1}{r}}} \biggr)^{r-1} \biggl(1-g \biggl( \frac{ \vert X_{ni} \vert }{\lambda k_{n}^{\frac{1}{r}}} \biggr) \biggr) \\ &\quad \leq C\cdot \lim_{n\rightarrow \infty } k_{n}^{-1} \sum_{i=u_{n}}^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{\lambda ^{r} k_{n}} \biggr) \biggr) \\ &\quad \leq C\cdot \lim_{n\rightarrow \infty } k_{n}^{-1} \sum_{i=u_{n}}^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h_{n}} \biggr) \biggr) \\ &\quad = 0. \end{aligned}$$
(3.10)
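The last chain of inequalities involving \(1-g(\cdot ) \) relies only on the fact that, in this setting, the truncation function g is non-increasing on \([0,\infty ) \): if \(h_{n}\leq \lambda ^{r}k_{n} \), then

$$\frac{ \vert X_{ni} \vert ^{r}}{\lambda ^{r}k_{n}}\leq \frac{ \vert X_{ni} \vert ^{r}}{h_{n}} \quad \Longrightarrow \quad g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{\lambda ^{r}k_{n}} \biggr)\geq g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h_{n}} \biggr) \quad \Longrightarrow \quad 1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{\lambda ^{r}k_{n}} \biggr)\leq 1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h_{n}} \biggr). $$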

So we obtain \(I_{3}\rightarrow 0 \), and (3.4) is established.

Next we prove (3.5). Since \(\{-X_{ni}; u _{n}\leq i \leq m_{n}, n\geq 1 \} \) is also an array of row-wise END random variables, we may apply (3.4) with \(\{-X_{ni}; u_{n}\leq i \leq m_{n}, n\geq 1 \} \) in place of \(\{X_{ni}; u_{n}\leq i \leq m_{n}, n\geq 1 \} \); together with \(\hat{\mathbb{E}}X _{ni}=-\hat{\mathcal{E}}[-X_{ni}] \), this yields (3.5). Finally, we prove (3.6). Since \(\hat{\mathbb{E}}X_{ni}=\hat{\mathcal{E}}X_{ni} \), we obtain

$$\begin{aligned} &\sum_{n=1}^{\infty }\mathbb{V} \Biggl( \Biggl\vert \sum_{i=u _{n}}^{m_{n}} (X_{ni}- \hat{\mathbb{E}}X_{ni} ) \Biggr\vert > \varepsilon k_{n}^{\frac{1}{r}} \Biggr) \\ & \quad \leq \sum_{n=1}^{\infty }\mathbb{V} \Biggl(\sum _{i=u_{n}} ^{m_{n}} (X_{ni}-\hat{ \mathbb{E}}X_{ni} ) >\varepsilon k _{n}^{\frac{1}{r}} \Biggr)+ \sum_{n=1}^{\infty }\mathbb{V} \Biggl(- \sum_{i=u_{n}}^{m_{n}} (X_{ni}-\hat{ \mathcal{E}}X _{ni} ) >\varepsilon k_{n}^{\frac{1}{r}} \Biggr) \\ &\quad =\sum_{n=1}^{\infty }\mathbb{V} \Biggl(\sum _{i=u_{n}} ^{m_{n}} (X_{ni}-\hat{ \mathbb{E}}X_{ni} ) >\varepsilon k _{n}^{\frac{1}{r}} \Biggr)+ \sum_{n=1}^{\infty }\mathbb{V} \Biggl(\sum _{i=u_{n}}^{m_{n}} (X_{ni}-\hat{ \mathcal{E}}X _{ni} ) < -\varepsilon k_{n}^{\frac{1}{r}} \Biggr) \\ &\quad < \infty . \end{aligned}$$

The proof is completed. □

Proof of Theorem 3.2

We know that \(\hat{\mathbb{E}}|X_{ni}|^{r} (1-g (\frac{|X_{ni}|^{r}}{h_{n}} ) )\leq C_{ \mathbb{V}} (|X_{ni}|^{r} (1-g (\frac{|X_{ni}|^{r}}{h_{n}} ) ) ) \), hence (3.8) implies (3.3). We have

$$\begin{aligned} & \sum_{n=1}^{\infty }k_{n}^{-1}C_{\mathbb{V}} \Biggl( \Biggl\vert \sum_{i=u_{n}}^{m_{n}} (X_{ni} -\hat{\mathbb{E}}X_{ni} ) \Biggr\vert - \varepsilon k_{n}^{\frac{1}{r}} \Biggr)_{+}^{r} \\ & \quad = \sum_{n=1}^{\infty }k_{n}^{-1} \int _{0}^{\infty } \mathbb{V} \Biggl( \Biggl\vert \sum _{i=u_{n}}^{m_{n}} (X_{ni}- \hat{ \mathbb{E}}X_{ni} ) \Biggr\vert -\varepsilon k_{n}^{\frac{1}{r}}>t ^{\frac{1}{r}} \Biggr)\,\mathrm{d}t \\ & \quad \leq \sum_{n=1}^{\infty }k_{n}^{-1} \int _{0}^{k_{n}} \mathbb{V} \Biggl( \Biggl\vert \sum _{i=u_{n}}^{m_{n}} (X_{ni}- \hat{ \mathbb{E}}X_{ni} ) \Biggr\vert > \varepsilon k_{n}^{ \frac{1}{r}} \Biggr)\,\mathrm{d}t\\ &\qquad {}+ \sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty }\mathbb{V} \Biggl( \Biggl\vert \sum _{i=u _{n}}^{m_{n}} (X_{ni}-\hat{ \mathbb{E}}X_{ni} ) \Biggr\vert >t ^{\frac{1}{r}} \Biggr)\, \mathrm{d}t \\ & \quad \leq\sum_{n=1}^{\infty }\mathbb{V} \Biggl( \Biggl\vert \sum_{i=u_{n}}^{m_{n}} (X_{ni}-\hat{\mathbb{E}} X_{ni} ) \Biggr\vert > \varepsilon k_{n}^{\frac{1}{r}} \Biggr)\\ &\qquad {}+ \sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty } \mathbb{V} \Biggl( \Biggl\vert \sum _{i=u_{n}}^{m_{n}} (X_{ni}- \hat{ \mathbb{E}}X_{ni} ) \Biggr\vert >t^{\frac{1}{r}} \Biggr) \, \mathrm{d}t \\ &\quad =:I_{4}+I_{5}. \end{aligned}$$
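The first equality above is the Choquet-type representation of \(C_{\mathbb{V}} \) applied to the non-negative random variable \(( \vert S_{n} \vert -\varepsilon k_{n}^{\frac{1}{r}} )_{+}^{r} \), where we write \(S_{n}=\sum_{i=u_{n}}^{m_{n}} (X_{ni}-\hat{\mathbb{E}}X_{ni} ) \) as shorthand here: for \(\xi \geq 0 \),

$$C_{\mathbb{V}}(\xi )= \int _{0}^{\infty }\mathbb{V}(\xi >t)\,\mathrm{d}t, \qquad \mathbb{V} \bigl( \bigl( \vert S_{n} \vert -\varepsilon k_{n}^{\frac{1}{r}} \bigr)_{+}^{r}>t \bigr) =\mathbb{V} \bigl( \vert S_{n} \vert -\varepsilon k_{n}^{\frac{1}{r}}>t^{\frac{1}{r}} \bigr), $$

the second identity holding because \(x\mapsto x^{r} \) is increasing on \([0,\infty ) \).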

To prove (3.9), it suffices to show \(I_{4}<\infty \) and \(I_{5}<\infty \). By Theorem 3.1, we obtain \(I_{4}<\infty \). For all \(u_{n}\leq i \leq m_{n}\), \(n\geq 1\), \(t\geq k_{n}\), \(\delta >0 \), we define

$$\begin{aligned} & Y_{ni}=-\delta t^{\frac{1}{r}}I \bigl\{ X_{ni}< -\delta t^{ \frac{1}{r}} \bigr\} +X_{ni}I \bigl\{ \vert X_{ni} \vert \leq \delta t^{ \frac{1}{r}} \bigr\} +\delta t^{\frac{1}{r}}I \bigl\{ X_{ni}>\delta t ^{\frac{1}{r}} \bigr\} , \\ & Z_{ni}=X_{ni}-Y_{ni}= \bigl(X_{ni}+ \delta t^{\frac{1}{r}} \bigr)I \bigl\{ X_{ni}< -\delta t^{\frac{1}{r}} \bigr\} + \bigl(X_{ni}-\delta t^{\frac{1}{r}} \bigr)I \bigl\{ X_{ni}>\delta t^{\frac{1}{r}} \bigr\} . \end{aligned}$$

From these definitions we get

$$ \vert Z_{ni} \vert \leq \vert X_{ni} \vert I \bigl\{ \vert X_{ni} \vert > \delta t^{\frac{1}{r}} \bigr\} \leq \vert X_{ni} \vert \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert }{\delta t^{\frac{1}{r}}} \biggr) \biggr). $$
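In classical notation the pair \((Y_{ni}, Z_{ni}) \) is simply a clipping of \(X_{ni} \) at level \(c=\delta t^{\frac{1}{r}} \) together with its overshoot. A minimal numerical sketch (the function name `clip_decompose` and the sample values are illustrative, not from the paper):

```python
def clip_decompose(x, c):
    """Split x as x = y + z, where y is x clipped to [-c, c]
    (the role of Y_ni with c = delta * t**(1/r)) and z is the
    overshoot (the role of Z_ni), nonzero only when |x| > c."""
    y = max(-c, min(c, x))
    z = x - y
    return y, z

# The overshoot satisfies |z| <= |x| * 1{|x| > c}, the bound used for |Z_ni|:
for x in [-5.0, -0.3, 0.0, 0.7, 9.0]:
    y, z = clip_decompose(x, 2.0)
    assert x == y + z and abs(y) <= 2.0
    assert abs(z) <= (abs(x) if abs(x) > 2.0 else 0.0)
```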

Next we prove \(I_{5}<\infty \). Let \(I_{6}= \sum_{n=1} ^{\infty }k_{n}^{-1}\int _{k_{n}}^{\infty }\mathbb{V} (\sum_{i=u_{n}}^{m_{n}} (X_{ni}-\hat{\mathbb{E}}X_{ni} )>t ^{\frac{1}{r}} )\,\mathrm{d}t \). Note that

$$\begin{aligned} I_{6}= {}& \sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{ \infty }\mathbb{V} \Biggl(\sum _{i=u_{n}}^{m_{n}} (X_{ni}- \hat{ \mathbb{E}}X_{ni} )>t^{\frac{1}{r}} \Biggr)\,\mathrm{d}t \\ \leq {}& \sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{ \infty }\mathbb{V} \Biggl(\sum _{i=u_{n}}^{m_{n}} Z_{ni} >\frac{t ^{\frac{1}{r}}}{3} \Biggr)\,\mathrm{d}t+ \sum_{n=1}^{\infty }k _{n}^{-1} \int _{k_{n}}^{\infty }\mathbb{V} \Biggl(\sum _{i=u_{n}}^{m_{n}} (Y_{ni}-\hat{ \mathbb{E}}Y_{ni} ) >\frac{t^{\frac{1}{r}}}{3} \Biggr)\,\mathrm{d}t \\ &{} + \sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty }\mathbb{V} \Biggl(t^{-\frac{1}{r}} \sum_{i=u_{n}}^{m_{n}} (\hat{\mathbb{E}}Y_{ni}- \hat{\mathbb{E}}X_{ni} ) > \frac{1}{3} \Biggr)\,\mathrm{d}t \\ =:{} &I_{61}+I_{62}+I_{63}. \end{aligned}$$

Because of (2.7) and (3.8), we get

$$\begin{aligned} I_{61} & =\sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{ \infty }\mathbb{V} \Biggl(\sum _{i=u_{n}}^{m_{n}} Z_{ni} >\frac{t ^{\frac{1}{r}}}{3} \Biggr)\,\mathrm{d}t \\ & \leq \sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{ \infty }\mathbb{V} \bigl(\exists \ i; u_{n}\leq i \leq m_{n},\mathrm{such \ that} \ \vert X_{ni} \vert >\delta t^{\frac{1}{r}} \bigr)\,\mathrm{d}t \\ & \leq \sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{ \infty }\sum_{i=u_{n}}^{m_{n}} \mathbb{V} \bigl( \vert X_{ni} \vert > \delta t^{\frac{1}{r}} \bigr)\,\mathrm{d}t \\ & \leq \sum_{n=1}^{\infty }k_{n}^{-1} \sum_{i=u_{n}} ^{m_{n}}C_{\mathbb{V}} \bigl( \vert X_{ni} \vert ^{r}I \bigl( \vert X_{ni} \vert ^{r}> k_{n} \bigr) \bigr) \\ & \leq \sum_{n=1}^{\infty }k_{n}^{-1} \sum_{i=u_{n}} ^{m_{n}}C_{\mathbb{V}} \biggl( \vert X_{ni} \vert ^{r} \biggl(1-g \biggl( \frac{ \vert X _{ni} \vert ^{r}}{k_{n}} \biggr) \biggr) \biggr) \\ & \leq \sum_{n=1}^{\infty }k_{n}^{-1} \sum_{i=u_{n}} ^{m_{n}}C_{\mathbb{V}} \biggl( \vert X_{ni} \vert ^{r} \biggl(1-g \biggl( \frac{ \vert X _{ni} \vert ^{r}}{h_{n}} \biggr) \biggr) \biggr) \\ & < \infty . \end{aligned}$$

So we have \(I_{61}<\infty \).

Next we consider \(I_{62}\). Take \(B_{n}=\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} (Y_{ni}-\hat{\mathbb{E}}Y_{ni} )^{2}\), \(x=\frac{t ^{\frac{1}{r}}}{3}\) and \(y=\frac{t^{\frac{1}{r}}}{6} \) in Lemma 2.1. For all \(u_{n}\leq i \leq m_{n}\), \(n\geq 1\), \(t\geq k _{n}\), choose \(\delta =\frac{1}{12} \); then \(Y_{ni}-\hat{\mathbb{E}}Y_{ni}\leq 2\delta t^{\frac{1}{r}} = \frac{t ^{\frac{1}{r}}}{6} \) and \(\hat{\mathbb{E}} (Y_{ni}- \hat{\mathbb{E}}Y_{ni} )^{2}\leq 4\hat{\mathbb{E}}Y_{ni}^{2} \). By Lemma 2.1, we get

$$\begin{aligned} & \sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty } \mathbb{V} \Biggl(\sum _{i=u_{n}}^{m_{n}} (Y_{ni}- \hat{\mathbb{E}}Y_{ni} )>\frac{t^{\frac{1}{r}}}{3} \Biggr) \,\mathrm{d}t \\ &\quad \leq \sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty }\mathbb{V} \biggl(\max _{u_{n}\leq i\leq m_{n}} (Y_{ni}-\hat{\mathbb{E}}Y_{ni} )> \frac{t^{\frac{1}{r}}}{6} \biggr) \,\mathrm{d}t\\ &\qquad {}+ \sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty }K\exp \biggl\{ 2-2\ln \biggl(1+ \frac{2\delta t^{\frac{2}{r}}}{3B_{n}} \biggr) \biggr\} \,\mathrm{d}t \\ &\quad \leq C\sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty } \bigl(t^{-\frac{2}{r}}B_{n} \bigr)^{2}\,\mathrm{d}t = C \sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty } \Biggl(t^{-\frac{2}{r}}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \bigl(Y_{ni}- \hat{\mathbb{E}}Y_{ni} \bigr)^{2} \Biggr)^{2} \,\mathrm{d}t \\ &\quad \leq C\sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty } \Biggl(t^{-\frac{2}{r}}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}}Y_{ni}^{2} \Biggr)^{2}\,\mathrm{d}t \\ &\quad \leq C\sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty } \Biggl\{ t^{-\frac{2}{r}}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \biggl[ \vert X_{ni} \vert ^{2}g \biggl(\frac{\mu \vert X_{ni} \vert }{\delta t^{\frac{1}{r}}} \biggr)+ \delta ^{2}t^{\frac{2}{r}} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert }{\delta t^{\frac{1}{r}}} \biggr) \biggr) \biggr] \Biggr\} ^{2} \,\mathrm{d}t \\ & \quad \leq C\sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty } \Biggl\{ t^{-\frac{2}{r}}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{2}g \biggl(\frac{\mu \vert X_{ni} \vert }{\delta t^{\frac{1}{r}}} \biggr) \Biggr\} ^{2}\,\mathrm{d}t\\ &\qquad {}+ C\sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty } \Biggl\{ \sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \biggl(1-g \biggl( \frac{ \vert X_{ni} \vert }{\delta t^{\frac{1}{r}}} \biggr) \biggr) \Biggr\} ^{2} \,\mathrm{d}t \\ & \quad \leq C\sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty } \Biggl\{ t^{-\frac{2}{r}}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{2}g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{k_{n}} \biggr)\\ &\qquad {} + t^{-\frac{2}{r}}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \vert X_{ni} \vert ^{2-r} \biggl(g \biggl(\frac{\mu \vert X_{ni} \vert }{\delta t^{\frac{1}{r}}} \biggr)-g \biggl( \frac{ \vert X_{ni} \vert ^{r}}{k_{n}} \biggr) \biggr) \Biggr\} ^{2}\,\mathrm{d}t \\ &\qquad {} + C\sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty } \Biggl(\sum _{i=u_{n}}^{m_{n}} \mathbb{V} \bigl( \vert X_{ni} \vert > \mu \delta t^{\frac{1}{r}} \bigr) \Biggr)^{2}\,\mathrm{d}t \\ &\quad \leq C\sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty } \Biggl\{ t^{-\frac{2}{r}}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{2}g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{k_{n}} \biggr) \\ &\qquad {}+ t^{-1}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(g \biggl(\frac{\mu \vert X_{ni} \vert }{\delta t^{\frac{1}{r}}} \biggr)-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{k_{n}} \biggr) \biggr) \Biggr\} ^{2} \,\mathrm{d}t \\ &\qquad {}+ C\sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty } \Biggl(\sum _{i=u_{n}}^{m_{n}} \mathbb{V} \bigl( \vert X_{ni} \vert > \mu \delta t^{\frac{1}{r}} \bigr) \Biggr)^{2}\,\mathrm{d}t \\ & \quad \leq C\sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty } \Biggl\{ t^{-\frac{2}{r}}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{2}g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{k_{n}} \biggr) \Biggr\} ^{2}\,\mathrm{d}t \\ &\qquad {}+ C\sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty } \Biggl\{ t^{-1}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{k_{n}} \biggr) \biggr) \Biggr\} ^{2}\,\mathrm{d}t \\ &\qquad {} + C\sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty } \Biggl(\sum _{i=u_{n}}^{m_{n}} \mathbb{V} \bigl( \vert X_{ni} \vert > \mu \delta t^{\frac{1}{r}} \bigr) \Biggr)^{2}\,\mathrm{d}t \\ &\quad =:I_{62}^{\prime }+I_{62}^{\prime \prime }+I_{62}^{\prime \prime \prime }. \end{aligned}$$
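In the display above, the first term produced by Lemma 2.1 vanishes because \(Y_{ni}-\hat{\mathbb{E}}Y_{ni}\leq \frac{t^{\frac{1}{r}}}{6} \) makes the event \(\{\max_{u_{n}\leq i\leq m_{n}} (Y_{ni}-\hat{\mathbb{E}}Y_{ni} )>\frac{t^{\frac{1}{r}}}{6} \} \) empty, while the passage from the exponential bound to the quadratic one uses the elementary estimate

$$K\exp \biggl\{ 2-2\ln \biggl(1+\frac{2\delta t^{\frac{2}{r}}}{3B_{n}} \biggr) \biggr\} =\frac{Ke^{2}}{ (1+\frac{2\delta t^{\frac{2}{r}}}{3B_{n}} )^{2}} \leq Ke^{2} \biggl(\frac{3B_{n}}{2\delta t^{\frac{2}{r}}} \biggr)^{2} =C \bigl(t^{-\frac{2}{r}}B_{n} \bigr)^{2}. $$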

By an argument similar to the proof of \(I_{21}\), using (2.3), (3.2) and (3.3), we obtain

$$\begin{aligned} I_{62}^{\prime } ={}& C\sum_{n=1}^{\infty }k_{n}^{-1} \int _{k _{n}}^{\infty } \Biggl(t^{-\frac{2}{r}}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{2}g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{k_{n}} \biggr) \Biggr) ^{2}\, \mathrm{d}t \\ \leq {}&C\sum_{n=1}^{\infty }k_{n}^{-1} \Biggl(\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{2}g \biggl(\frac{ \vert X _{ni} \vert ^{r}}{k_{n}} \biggr) \Biggr)^{2} \int _{k_{n}}^{\infty } t ^{-\frac{4}{r}}\,\mathrm{d}t \\ \leq{}& C\sum_{n=1}^{\infty }k_{n}^{-\frac{4}{r}} \Biggl(\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{2}g \biggl(\frac{ \vert X _{ni} \vert ^{r}}{k_{n}} \biggr) \Biggr)^{2} = C\sum_{n=1}^{\infty } \Biggl(k_{n}^{-\frac{2}{r}}\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{2}g \biggl( \frac{ \vert X_{ni} \vert ^{r}}{k_{n}} \biggr) \Biggr) ^{2} \\ \leq {}& C\sum_{n=1}^{\infty } \Biggl\{ \biggl( \frac{h_{n}}{k_{n}} \biggr) ^{\frac{2-r}{r}}k_{n}^{-1}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r}+k_{n}^{-1}\sum _{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h _{n}} \biggr) \biggr) \Biggr\} ^{2} \\ \leq{} & C \Biggl(\sum_{n=1}^{\infty } \biggl( \frac{h_{n}}{k_{n}} \biggr) ^{\frac{2-r}{r}}\cdot \sup_{n\geq 1} k_{n}^{-1}\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \Biggr)^{2}\\ &{}+C \Biggl\{ \sum_{n=1}^{\infty } k_{n}^{-1}\sum_{i=u_{n}} ^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h _{n}} \biggr) \biggr) \Biggr\} ^{2} \\ < {}& \infty . \end{aligned}$$

By (2.7) and (3.3), it follows that

$$\begin{aligned} I_{62}^{\prime \prime } & = C\sum_{n=1}^{\infty }k_{n}^{-1} \Biggl\{ \sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl( \frac{ \vert X _{ni} \vert ^{r}}{k_{n}} \biggr) \biggr) \Biggr\} ^{2} \int _{k_{n}} ^{\infty }t^{-2}\,\mathrm{d}t \\ & \leq C \Biggl\{ \sum_{n=1}^{\infty }k_{n}^{-1} \sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X _{ni} \vert ^{r}}{k_{n}} \biggr) \biggr) \Biggr\} ^{2} \\ & \leq C \Biggl\{ \sum_{n=1}^{\infty }k_{n}^{-1} \sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X _{ni} \vert ^{r}}{h_{n}} \biggr) \biggr) \Biggr\} ^{2} \\ & < \infty . \end{aligned}$$

Similarly to the proof of \(I_{22}\), we have

$$\begin{aligned} \begin{aligned} I_{62}^{\prime \prime \prime } & \leq C\sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty } \Biggl(\sum _{i=u_{n}}^{m_{n}} \frac{ \hat{\mathbb{E}} \vert X_{ni} \vert ^{r}}{\mu ^{r}\delta ^{r}t} \Biggr)^{2} \, \mathrm{d}t \\ & \leq C\sum_{n=1}^{\infty }k_{n}^{-1} \Biggl(\sum_{i=u _{n}}^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \Biggr)^{2} \int _{k _{n}}^{\infty }t^{-2}\,\mathrm{d}t \\ & \leq C \Biggl(\sum_{n=1}^{\infty }k_{n}^{-1} \sum_{i=u _{n}}^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \Biggr)^{2} \\ & < \infty . \end{aligned} \end{aligned}$$

That is to say \(I_{62}<\infty \).

By an argument similar to the proof of (3.10), we now handle \(I_{63}\). For \(t\geq k_{n}\) we have \(t^{-\frac{1}{r}}\leq k_{n}^{-\frac{1}{r}} \), and for all sufficiently large n, \(h_{n}\leq \delta ^{r}k_{n}\leq \delta ^{r}t \), thus \(1-g (\frac{|X _{ni}|}{\delta t^{\frac{1}{r}}} )\leq 1-g (\frac{|X_{ni}|}{ \delta k_{n}^{\frac{1}{r}}} )\leq 1-g (\frac{|X_{ni}|^{r}}{ \delta ^{r}k_{n}} )\leq 1-g (\frac{|X_{ni}|^{r}}{h_{n}} ) \). Then we can get

$$\begin{aligned} & \sup_{t\geq k_{n}}t^{-\frac{1}{r}} \Biggl\vert \sum _{i=u _{n}}^{m_{n}} (\hat{\mathbb{E}}Y_{ni}-\hat{ \mathbb{E}} X_{ni} ) \Biggr\vert \\ &\quad \leq \sup_{t\geq k_{n}}t^{-\frac{1}{r}}\sum _{i=u_{n}} ^{m_{n}} \vert \hat{\mathbb{E}}Y_{ni}- \hat{\mathbb{E}}X_{ni} \vert \leq \sup_{t\geq k_{n}} t^{-\frac{1}{r}}\sum_{i=u_{n}} ^{m_{n}}\hat{ \mathbb{E}} \vert Y_{ni}-X_{ni} \vert \\ & \quad = \sup_{t\geq k_{n}}t^{-\frac{1}{r}}\sum _{i=u_{n}} ^{m_{n}}\hat{\mathbb{E}} \vert Z_{ni} \vert \leq \sup_{t\geq k_{n}}t^{- \frac{1}{r}}\sum _{i=u_{n}}^{m_{n}}\hat{\mathbb{E}} \vert X_{ni} \vert \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert }{\delta t^{\frac{1}{r}}} \biggr) \biggr) \\ &\quad \leq k_{n}^{-\frac{1}{r}}\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert \biggl(1-g \biggl( \frac{ \vert X_{ni} \vert }{\delta k _{n}^{\frac{1}{r}}} \biggr) \biggr) \\ &\quad \leq k_{n}^{-\frac{1}{r}}\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert \biggl(\frac{ \vert X_{ni} \vert }{\mu \delta k_{n}^{ \frac{1}{r}}} \biggr)^{r-1} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert }{\delta k _{n}^{\frac{1}{r}}} \biggr) \biggr) \\ &\quad \leq C\cdot k_{n}^{-1}\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{ \delta ^{r} k_{n}} \biggr) \biggr) \\ &\quad \leq C\cdot k_{n}^{-1}\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} \vert X_{ni} \vert ^{r} \biggl(1-g \biggl(\frac{ \vert X_{ni} \vert ^{r}}{h _{n}} \biggr) \biggr) \\ &\quad \rightarrow 0. \end{aligned}$$

Recall that \(I_{63}=\sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty }\mathbb{V} (t^{-\frac{1}{r}} \sum_{i=u_{n}}^{m_{n}} (\hat{\mathbb{E}}Y_{ni}- \hat{\mathbb{E}}X_{ni} ) >\frac{1}{3} )\,\mathrm{d}t \). Since \(\sup_{t\geq k_{n}}t^{-\frac{1}{r}} \vert \sum_{i=u_{n}}^{m_{n}} (\hat{\mathbb{E}}Y_{ni}- \hat{\mathbb{E}} X_{ni} ) \vert \rightarrow 0 \) as \(n\rightarrow \infty \), for all sufficiently large n we have \(\mathbb{V} (t ^{-\frac{1}{r}}\sum_{i=u_{n}}^{m_{n}} (\hat{\mathbb{E}}Y _{ni} -\hat{\mathbb{E}}X_{ni} ) >\frac{1}{3} )= 0 \) for every \(t\geq k_{n}\), so \(I_{63}<\infty \). Combining \(I_{61}<\infty \), \(I_{62}<\infty \) and \(I_{63}<\infty \), we have \(I_{6}<\infty \). Replacing \(\{X_{ni}; u_{n}\leq i \leq m_{n}, n\geq 1 \} \) by \(\{-X _{ni}; u_{n}\leq i \leq m_{n}, n\geq 1 \} \) in \(I_{6}\), and using \(\hat{\mathbb{E}}X_{ni}=\hat{\mathcal{E}}X_{ni} \), we get

$$\begin{aligned} &\sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty } \mathbb{V} \Biggl(\sum _{i=u_{n}}^{m_{n}} \bigl(-X_{ni}- \hat{\mathbb{E}} (-X_{ni} ) \bigr)>t^{\frac{1}{r}} \Biggr) \,\mathrm{d}t \\ &\quad = \sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty }\mathbb{V} \Biggl(\sum _{i=u_{n}}^{m_{n}} - (X_{ni}- \hat{ \mathbb{E}}X_{ni} )>t^{\frac{1}{r}} \Biggr)\,\mathrm{d}t \\ &\quad < \infty . \end{aligned}$$

By \(I_{6}<\infty \), it is obvious that

$$\begin{aligned} I_{5} ={}&\sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{ \infty }\mathbb{V} \Biggl( \Biggl\vert \sum _{i=u_{n}}^{m_{n}} (X _{ni}-\hat{ \mathbb{E}}X_{ni} ) \Biggr\vert >t^{\frac{1}{r}} \Biggr) \, \mathrm{d}t \\ \leq{}&\sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty } \mathbb{V} \Biggl(\sum _{i=u_{n}}^{m_{n}} (X_{ni}- \hat{ \mathbb{E}}X_{ni} )>t^{\frac{1}{r}} \Biggr)\,\mathrm{d}t \\ &{}+ \sum _{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty } \mathbb{V} \Biggl(\sum _{i=u_{n}}^{m_{n}} - (X_{ni}- \hat{ \mathbb{E}}X_{ni} )>t^{\frac{1}{r}} \Biggr)\,\mathrm{d}t \\ < {}&\infty . \end{aligned}$$

So we obtain \(I_{5}<\infty \); in other words, (3.9) is proved and the proof is completed. □

References

  1. Peng, S.: Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation. Stoch. Process. Appl. 118(12), 2223–2253 (2008)

  2. Peng, S.: A new central limit theorem under sublinear expectations. J. Math. 53(8), 1989–1994 (2008)

  3. Peng, S.: G-Brownian motion and related stochastic calculus of Itô type. Stoch. Anal. Appl. 34(2), 139–161 (2007)

  4. Zhang, L.X.: Exponential inequalities under the sub-linear expectations with applications to laws of the iterated logarithm. Sci. China Math. 59(12), 2503–2526 (2016)

  5. Zhang, L.X.: Rosenthal's inequalities for independent and negatively dependent random variables under sub-linear expectations with applications. Sci. China Math. 59(4), 751–768 (2016)

  6. Zhang, L.X., Lin, J.H.: Marcinkiewicz’s strong law of large numbers for nonlinear expectations. Stat. Probab. Lett. 137, 269–276 (2018)

  7. Xu, J.P., Zhang, L.X.: Three series theorem for independent random variables under sub-linear expectations with applications. Acta Math. Sin. Engl. Ser. 35, 172–184 (2018)

  8. Wu, Q.Y., Jiang, Y.Y.: Strong law of large numbers and Chover’s law of the iterated logarithm under sub-linear expectations. J. Math. Anal. Appl. 460(1), 252–270 (2018)

  9. Chen, Z.: Strong laws of large numbers for sub-linear expectations. Sci. China Math. 59(5), 945–954 (2016)

  10. Hsu, P.L., Robbins, H.: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 33(2), 25–31 (1947)

  11. Chow, Y.S.: On the rate of moment complete convergence of sample sums and extremes. Bull. Inst. Math. Acad. Sin. 16, 177–201 (1988)

  12. Liu, L.: Precise large deviations for dependent random variables with heavy tails. Stat. Probab. Lett. 79, 1209–1298 (2009)

  13. Chen, Y.Q., Chen, A.Y., Ng, K.W.: The strong law of large numbers for extended negatively dependent random variables. J. Appl. Probab. 47, 908–922 (2010)

  14. Wu, Y.F., Guan, M.: Convergence properties of the partial sums for sequences of END random variables. J. Korean Math. Soc. 49, 1097–1110 (2012)

  15. Sung, H.S., Lisawadi, S., Volodin, A.: Weak laws of large numbers for arrays under a condition of uniform integrability. J. Korean Math. Soc. 45, 289–300 (2008)

  16. Chandra, T.K.: Uniform integrability in the Cesàro sense and the weak law of large numbers. Sankhya, Ser. A 51, 309–317 (1989)

  17. Chandra, T.K., Goswami, A.: Cesàro α-integrability and laws of large numbers. J. Theor. Probab. 16, 655–669 (2003)

  18. Gut, A., Stadtmuller, U.: An intermediate Baum–Katz theorem. Stat. Probab. Lett. 81, 1486–1492 (2011)

  19. Qiu, D.H., Chen, P.Y.: Complete moment convergence for i.i.d. random variables. Stat. Probab. Lett. 91, 76–82 (2014)

  20. Wu, Q.Y., Jiang, Y.Y.: Complete convergence and complete moment convergence for negatively associated sequences of random variables. J. Inequal. Appl. 2016, 157 (2016)

  21. Feng, F.X.: Complete convergence for weighted sums of negatively dependent random variables under the sub-linear expectations. Commun. Stat., Theory Methods 1(1), 1–16 (2018)

  22. Wu, Y.F., Peng, J.Y., Hu, T.C.: Limiting behaviour for arrays of row-wise END random variables under conditions of h-integrability. Stoch. Int. J. Probab. Stoch. Process. 87(3), 409–423 (2015)

Acknowledgements

The authors are grateful to the editors and anonymous referees for their helpful comments and suggestions that improved the clarity and readability of this article.

Availability of data and materials

Not applicable.

Authors’ information

Qunying Wu, professor, doctor, working in the field of probability and statistics.

Funding

This paper was supported by the National Natural Science Foundation of China (11661029), the Support Program of the Guangxi China Science Foundation (2018GXNSFAA281011) and Support Program of the Guangxi China Science Foundation (2018GXNSFAA294131).

Author information

Contributions

All authors contributed equally to the writing of this paper. All the authors read and approved the final manuscript.

Corresponding author

Correspondence to Qunying Wu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite this article

Liang, Z., Wu, Q. Theorems of complete convergence and complete integral convergence for END random variables under sub-linear expectations. J Inequal Appl 2019, 114 (2019). https://doi.org/10.1186/s13660-019-2064-0


MSC

  • 60F15

Keywords

  • Sub-linear expectation
  • Complete convergence
  • Complete integral convergence
  • END random variables