# Further Spitzer’s law for widely orthant dependent random variables

## Abstract

Spitzer’s law is obtained for the maximum partial sums of widely orthant dependent random variables under moment conditions that are sharper than those previously known.

## 1 Introduction and main results

It is well known that the sample mean, based on a sequence of independent random variables with a common distribution, is a strongly consistent estimator of the population mean. This property is the Kolmogorov strong law of large numbers, one of the key results of modern probability theory.

For the independent case, the Kolmogorov strong law of large numbers is obtained by the Kolmogorov three-series theorem, which is based on the Kolmogorov inequality or the moment inequality for the maximum partial sums, see Petrov [11] for more detail. For some dependent cases, such as negatively associated random variables and $$\rho ^{*}$$-mixing random variables, the moment inequalities for the maximum partial sums are obtained by Matula [10] and Utev and Peligrad [14], respectively.

For the pairwise independent case, no moment inequality for the maximum partial sums is known. Etemadi [6] used a subsequence method to prove the Kolmogorov strong law of large numbers. Matula [10] extended it to pairwise negatively quadrant dependent (NQD) random variables. Using Etemadi’s subsequence method, Kruglov [8] gave the following sufficient condition for the Kolmogorov strong law of large numbers for nonnegative random variables.

### Theorem A

Let $$\{X_{n},n\geq 1\}$$ be a sequence of nonnegative random variables, and let $$\{b_{n},n\geq 1\}$$ be a bounded sequence of nonnegative numbers. Suppose that

$$\sum^{\infty }_{n=1}n^{-1}P \Biggl\{ \Biggl\vert \sum^{n}_{k=1}(X_{k}-b_{k}) \Biggr\vert >\varepsilon n \Biggr\} < \infty ,\quad \forall \varepsilon >0.$$
(1.1)

Then the Kolmogorov strong law of large numbers holds:

$$n^{-1}\sum^{n}_{k=1}(X_{k}-b_{k}) \rightarrow 0\quad \textit{a.s.}$$
(1.2)

Unfortunately, (1.1) does not imply (1.2) without the nonnegativity assumption; see Remarks 1.1 and 1.2 in Bai et al. [1]. However, if (1.1) is strengthened to

$$\sum^{\infty }_{n=1}n^{-1}P \Biggl\{ \max_{1\leq m\leq n} \Biggl\vert \sum^{m}_{k=1}(X_{k}-b_{k}) \Biggr\vert >\varepsilon n \Biggr\} < \infty ,\quad \forall \varepsilon >0,$$
(1.3)

then (1.3) implies (1.2) without any further assumptions. This fact can be proved by a subsequence method (see, for example, Chen et al. [2]), which differs from Etemadi’s subsequence method. Spitzer [13] first established (1.1) for independent and identically distributed (i.i.d.) random variables $$\{X_{n}\}$$ with $$b_{n}=EX_{1}$$ by a combinatorial method. One can label (1.3) as Spitzer’s law for maximum partial sums of random variables. In the independent case, (1.1) and (1.3) are equivalent. In some dependent cases, for example, the pairwise independent case, (1.1) is strictly weaker than (1.3); see Bai et al. [1]. So it is more important to obtain (1.3) than (1.1).

Let us recall the concept of widely orthant dependent (WOD) random variables which was introduced by Wang et al. [15] as follows.

### Definition 1.1

A sequence of random variables $$\{X_{n}, n\geq 1\}$$ is said to be widely upper orthant dependent (WUOD) if, for each $$n\ge 1$$, there exists a positive number $$g_{U}(n)$$ such that, for all real numbers $$x_{k}$$, $$1\le k\le n$$,

$$P\{X_{1}>x_{1}, \ldots , X_{n}>x_{n}\} \le g_{U}(n)\prod_{k=1}^{n} P \{X_{k}>x_{k} \}.$$

It is said to be widely lower orthant dependent (WLOD) if, for each $$n\ge 1$$, there exists a positive number $$g_{L}(n)$$ such that, for all real numbers $$x_{k}$$, $$1\le k\le n$$,

$$P\{X_{1}\le x_{1}, \ldots , X_{n}\le x_{n}\}\le g_{L}(n)\prod_{k=1}^{n} P\{X_{k}\le x_{k}\},$$

and it is said to be WOD if it is both WUOD and WLOD.

Wang et al. [15] provided many examples of WLOD and WUOD random variables. They also gave examples of WOD random variables that are neither positively dependent nor negatively dependent.

In Definition 1.1, $$g_{U}(n)$$, $$g_{L}(n)$$, $$n\ge 1$$, are called dominating coefficients. The sequence $$\{X_{n}, n\ge 1\}$$ is said to be extended negatively dependent (END) if $$g_{U}(n)=g_{L}(n)=M$$ for all $$n\ge 1$$ and some positive constant M. In particular, $$\{X_{n}, n\ge 1\}$$ is said to be negatively orthant dependent (NOD), or negatively dependent, if $$M=1$$. When $$n=2$$ and $$g_{U}(2)=g_{L}(2)=1$$, $$X_{1}$$ and $$X_{2}$$ are said to be NQD, and $$X_{1}$$ and $$X_{2}$$ are independent if the inequalities are replaced by equalities.

Since the class of WOD random variables contains END random variables and NOD random variables as special cases, it is interesting to study the limiting behavior of WOD random variables. In fact, a number of authors studied the limiting behavior of WOD random variables. We refer to Wang and Cheng [16] for renewal theory, to Wang et al. [17] for risk theory, to Chen et al. [5] and Lang et al. [9] for complete convergence, and to Guan et al. [7] for complete moment convergence.

Recently, Chen and Sung [3] obtained Spitzer’s law for maximum partial sums of WOD random variables as follows.

### Theorem B

Let $$\{ X_{n}, n\ge 1\}$$ be a sequence of WOD random variables with the same distribution as X, and with dominating coefficients $$g_{L}(n)$$, $$g_{U}(n)$$ for $$n\ge 1$$. Suppose that there exists a nondecreasing positive function $$g(x)$$ on $$[0, \infty )$$ such that $$\max \{g_{L}(n), g_{U}(n)\}\le g(n)$$ for $$n\ge 1$$. Assume that one of the following conditions holds.

1. (I)

$$g(x)/x^{\tau }\downarrow$$ for some $$0<\tau <1$$, and $$E \vert X \vert g( \vert X \vert )<\infty$$.

2. (II)

There exists a nondecreasing positive function $$h(x)$$ on $$[0, \infty )$$ such that $$h(x)/x \downarrow$$, $$\sum_{n=1}^{\infty }g(n)/(nh^{\gamma }(n))<\infty$$ for some $$\gamma >0$$, and $$E \vert X \vert h( \vert X \vert )<\infty$$.

Then

$$\sum_{n=1}^{\infty }n^{-1} P \Biggl\{ \max_{1\le m\le n} \Biggl\vert \sum_{k=1}^{m} X_{k}-mEX \Biggr\vert >n\varepsilon \Biggr\} < \infty , \quad \forall \varepsilon >0.$$
(1.4)

Note that Chen and Sung [4] generalized Theorem B(I) to weighted sums $$\sum_{k=1}^{m} a_{nk}X_{k}$$, where the weights $$\{a_{nk}\}$$ satisfy $$\sum_{k=1}^{n} \vert a_{nk} \vert ^{\alpha }=O(n)$$ for some $$\alpha \ge 2$$.

When $$g(x)=\max \{1, x^{\alpha }\}$$ (or $$\log ^{\alpha }x$$) for some $$\alpha >0$$, the moment condition in (II) is $$E \vert X \vert ^{1+\delta }<\infty$$ (or $$E \vert X \vert \log ^{\delta } \vert X \vert <\infty$$, correspondingly) for some $$\delta >0$$, which is weaker than that in (I), where, and in what follows, $$\log x=\log _{e} \max \{x, e\}$$. When $$g(x)=(\log \log x)^{\alpha }$$ for some $$\alpha >0$$, the moment condition in (I) is $$E \vert X \vert (\log \log \vert X \vert )^{\alpha }<\infty$$. In this paper, we improve the moment condition when $$g(x)=\max \{1, x^{\alpha }\}$$, $$\log ^{\alpha }x$$, or $$(\log \log x)^{\alpha }$$ for some $$\alpha >0$$. In fact, we provide a weaker moment condition for a fairly general class of functions g containing the above examples.

Now we state the main results. Some preliminary lemmas and the proofs of the main results will be detailed in Sect. 2. Some examples satisfying the main results will be presented in Sect. 3.

### Theorem 1.1

Let $$\{X_{n}, n\ge 1\}$$ be a sequence of WOD random variables with the same distribution as X and with dominating coefficients $$g_{L}(n)$$, $$g_{U}(n)$$ for $$n\ge 1$$. Suppose that there exist nondecreasing positive functions $$g(x)$$ and $$h(x)$$ on $$[0, \infty )$$ such that $$\max \{g_{L}(n), g_{U}(n)\}\le g(n)$$ for $$n\ge 1$$, $$g(x) =O(x^{\alpha })$$ for some $$\alpha >0$$, $$x/h(x) \uparrow \infty$$,

$$\limsup_{x\to \infty } \frac{h(x)}{h ( x/h(x) )}< \infty ,$$
(1.5)

and

$$\sum_{n=1}^{\infty }n^{-1} g(n) \exp \bigl( -\delta h(n) \bigr) < \infty \quad \textit{for some }\delta >0.$$
(1.6)

If $$E \vert X \vert h( \vert X \vert )<\infty$$, then (1.4) holds.

### Theorem 1.2

Let $$\{X_{n}, n\ge 1\}$$ be a sequence of WOD random variables with the same distribution as X and with dominating coefficients $$g_{L}(n)$$, $$g_{U}(n)$$ for $$n\ge 1$$. Suppose that there exists a nondecreasing positive function $$g(x)$$ on $$[0, \infty )$$ such that $$\max \{g_{L}(n), g_{U}(n)\}\le g(n)$$ for $$n\ge 1$$, $$g(x)/x^{\tau }\downarrow$$ for some $$0<\tau <1$$, and

$$\limsup_{x\to \infty } \frac{g(x)}{g ( x/g(x) )}< \infty .$$
(1.7)

If $$E \vert X \vert \sqrt{g( \vert X \vert )}<\infty$$, then (1.4) holds.

### Remark 1.1

Chen and Sung [3, 4] proved Theorem 1.2 without condition (1.7). However, the moment condition of Chen and Sung [3, 4] is $$E \vert X \vert g( \vert X \vert )<\infty$$, which is stronger than $$E \vert X \vert \sqrt{g( \vert X \vert )}<\infty$$. Under the stronger moment condition $$E \vert X \vert g( \vert X \vert )<\infty$$, Chen and Sung [4] generalized Theorem 1.2 to weighted sums $$\sum_{k=1}^{n} a_{nk}X_{k}$$ satisfying $$\sum_{k=1}^{n} \vert a_{nk} \vert ^{\alpha }=O(n)$$ for some $$\alpha >0$$, where $$0<\tau <\min \{1,\alpha \}$$. When $$a_{nk}=1$$ for $$1\le k\le n$$ and $$n\ge 1$$, $$\sum_{k=1}^{n} \vert a_{nk} \vert ^{\alpha }=O(n)$$ for any $$\alpha >0$$, and hence the condition $$0<\tau <\min \{1,\alpha \}$$ reduces to $$0<\tau <1$$.

### Remark 1.2

It is important to find functions satisfying (1.5) or (1.7). A number of slowly varying functions, for example, $$\log x$$, $$\log \log x$$, and $$\log \log \log x$$, satisfy (1.5). However, there exist slowly varying functions that do not satisfy (1.5), for example, $$\exp (\log x/\log \log x)$$ and $$\exp (\log x/\log \log \log x)$$. Further, there exists a non-slowly-varying function satisfying (1.5). Define $$h(x)$$ by

$$h(x)= \textstyle\begin{cases} 1,& 0\le x< 2, \\ 2^{n}, & 2^{2^{n}}\le x< 2^{2^{n+1}}, n\ge 0. \end{cases}$$

Then we have that

$$\frac{h(x)}{h(x/h(x))}= \textstyle\begin{cases} 2, &2^{2^{n}}\le x< 2^{2^{n}+n},n\ge 1, \\ 1, & 2^{2^{n} +n}\le x< 2^{2^{n+1}}, n\ge 1, \end{cases}$$

which implies that $$\limsup_{x\to \infty } h(x)/h(x/h(x))=2$$. Hence $$h(x)$$ satisfies (1.5). On the other hand, we also have that

$$\frac{h(2^{2^{n}})}{h(2^{2^{n}}/2)}=2, \quad \text{if }n\ge 1,$$

which implies that $$h(x)$$ is not a slowly varying function.
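The two-valued ratio above can also be confirmed numerically. The following sketch (the function name `h` is ours, mirroring the definition in Remark 1.2) evaluates $$h(x)/h(x/h(x))$$ over a range of x and checks that only the values 1 and 2 occur:

```python
# Step function from Remark 1.2: h(x) = 1 on [0, 2),
# and h(x) = 2^n on [2^(2^n), 2^(2^(n+1))) for n >= 0.
def h(x):
    if x < 2:
        return 1
    n = 0
    while 2 ** (2 ** (n + 1)) <= x:
        n += 1
    return 2 ** n

# The ratio h(x)/h(x/h(x)) is always a power-of-two quotient, so integer
# division is exact here; for x >= 16 it takes only the values 1 and 2,
# confirming limsup h(x)/h(x/h(x)) = 2, i.e., condition (1.5).
ratios = {h(x) // h(x / h(x)) for x in range(16, 60000)}
assert ratios == {1, 2}
```

For instance, x = 300 lies in $$[2^{2^{3}}, 2^{2^{3}+3})$$, so h(300) = 8 while h(300/8) = h(37.5) = 4, giving ratio 2; x = 5000 lies in the complementary part of the same block and gives ratio 1.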

### Corollary 1.1

Let $$\{X_{n}, n\ge 1\}$$ be a sequence of WOD random variables with the same distribution as X and with dominating coefficients $$g_{L}(n)$$, $$g_{U}(n)$$ for $$n\ge 1$$. Suppose that there exists a nondecreasing positive function $$g(x)$$ on $$[0, \infty )$$ such that $$\max \{g_{L}(n), g_{U}(n)\}\le g(n)$$ for $$n\ge 1$$. Assume that one of the following conditions holds.

1. (I)

$$g(x)=O(x^{\alpha })$$ for some $$\alpha >0$$, and $$E \vert X \vert \log \vert X \vert <\infty$$.

2. (II)

$$g(x)=O((\log x)^{\alpha })$$ for some $$\alpha >0$$, and $$E \vert X \vert \log \log \vert X \vert <\infty$$.

3. (III)

$$g(x)=O((\log \log x)^{\alpha })$$ for some $$\alpha >0$$, and $$E \vert X \vert (\log \log \vert X \vert )^{\alpha /2}<\infty$$.

Then (1.4) holds.

Theorem 1.1 can be generalized as follows.

### Theorem 1.3

Let $$r\ge 1$$. Let $$\{X_{n}, n\ge 1\}$$, $$g(x)$$, and $$h(x)$$ be as in Theorem 1.1 except (1.6). Assume, instead of (1.6), that

$$\sum_{n=1}^{\infty }n^{r-2} g(n) \exp \bigl( -\delta h(n) \bigr) < \infty \quad \textit{for some }\delta >0.$$

If $$E( \vert X \vert h( \vert X \vert ))^{r}<\infty$$, then

$$\sum_{n=1}^{\infty }n^{r-2} P \Biggl\{ \max_{1\le m\le n} \Biggl\vert \sum_{k=1}^{m} X_{k}-mEX \Biggr\vert >n\varepsilon \Biggr\} < \infty , \quad \forall \varepsilon >0.$$
(1.8)

Theorem 1.2 can also be generalized as follows.

### Theorem 1.4

Let $$1\le r<2$$. Let $$\{X_{n}, n\ge 1\}$$ and $$g(x)$$ be as in Theorem 1.2 except $$0<\tau <1$$. If $$0<\tau <2-r$$ and $$E( \vert X \vert \sqrt{g( \vert X \vert )})^{r}<\infty$$, then (1.8) holds.

### Remark 1.3

When $$r=1$$, Theorems 1.3 and 1.4 reduce to Theorems 1.1 and 1.2, respectively. We can consider the following more general result:

$$\sum_{n=1}^{\infty }n^{r-2} P \Biggl\{ \max_{1\le m\le n} \Biggl\vert \sum_{k=1}^{m} X_{k}-mEX \Biggr\vert >n^{\theta }\varepsilon \Biggr\} < \infty , \quad \forall \varepsilon >0,$$

where $$\theta >1/2$$. If $$\theta =1$$, this reduces to (1.8). However, the proofs of Theorems 1.3 and 1.4 cannot be applied to this case.

Throughout this paper, the symbol C denotes a positive constant which is not necessarily the same one in each appearance. For an event A, $$I(A)$$ denotes the indicator function of the event A.

## 2 Lemmas and proofs

To prove the main results, we need the following lemmas. The first is a Fuk–Nagaev type inequality for WOD random variables, due to Shen et al. [12].

### Lemma 2.1

Let $$\{Y_{n},n\geq 1\}$$ be a sequence of WOD random variables with dominating coefficients $$g_{L}(n)$$, $$g_{U}(n)$$ for $$n\ge 1$$. If $$EY_{n}=0$$ and $$EY^{2}_{n}<\infty$$ for $$n\geq 1$$, then, for any $$n\ge 1$$, $$x>0$$, and $$y>0$$,

$$P \Biggl\{ \Biggl\vert \sum^{n}_{k=1}Y_{k} \Biggr\vert >x \Biggr\} \leq 2 P \Bigl\{ \max_{1\leq k\leq n} \vert Y_{k} \vert >y \Bigr\} +2g(n)\exp \biggl\{ - \frac{x^{2}}{2(xy+M_{n})} \biggr\} ,$$

where $$M_{n}=\sum^{n}_{k=1}EY^{2}_{k}$$ and $$g(n)=\max \{g_{L}(n), g_{U}(n)\}$$.

The following lemma is simple, but it is useful for proving Theorem 1.1.

### Lemma 2.2

Let $$f(x)$$ be a positive function defined on $$[0, \infty )$$ such that $$f(x) \uparrow$$ and $$x/f(x) \uparrow \infty$$. If $$\limsup_{x\to \infty } f(x)/f(x/f(x))<\infty$$, then $$\limsup_{x\to \infty } f(x)/f(\gamma x/f(x))<\infty$$ for any $$\gamma >0$$.

### Proof

If f is bounded, then the result is obvious. We now assume that $$0< f(x) \uparrow \infty$$. Let $$D=\sup_{x\ge 0} f(x)/f(x/f(x))$$. Since $$0< f(x) \uparrow \infty$$, there exists $$x_{0}$$ such that $$f(x)\ge 1/\gamma$$ if $$x\ge x_{0}$$. It follows that, for $$x\ge x_{0}$$,

$$f(x) \le D f \bigl(x/f(x) \bigr)\le D f(\gamma x),$$

and hence, for $$x/f(x)\ge x_{0}$$,

$$\frac{f(x)}{f(\gamma x/f(x))} \le \frac{ D f(x/f(x))}{f(\gamma x/f(x))} \le \frac{ D^{2} f(\gamma x/f(x))}{f(\gamma x/f(x))}=D^{2},$$

which proves the result. □

### Proof of Theorem 1.1

Without loss of generality, we can assume that $$X\geq 0$$ a.s. Let $$\varepsilon >0$$ be given. Since $$EX<\infty$$ and $$EXh(X)<\infty$$, there exists $$A>0$$ such that $$\max \{EXI(X>A), EXh(X)I(X>A)\}<\min \{\varepsilon /8, \varepsilon ^{2}/(64 \delta )\}$$. Let

$$Y_{k}=(X_{k}-A)I(X_{k}>A), \quad\quad Z_{k}=X_{k}I(X_{k}\leq A)+AI(X_{k}>A).$$

Then $$X_{k}=Y_{k}+Z_{k}$$. Hence, to prove (1.4), it suffices to prove that

$$\sum_{n=1}^{\infty }n^{-1}P \Biggl\{ \max_{1\leq m\leq n} \Biggl\vert \sum^{m}_{k=1}(Y_{k}-EY_{k}) \Biggr\vert >\varepsilon n/2 \Biggr\} < \infty$$
(2.1)

and

$$\sum_{n=1}^{\infty }n^{-1}P \Biggl\{ \max_{1\leq m\leq n} \Biggl\vert \sum^{m}_{k=1}(Z_{k}-EZ_{k}) \Biggr\vert >\varepsilon n/2 \Biggr\} < \infty .$$
(2.2)

By the same argument as for $$I_{1}<\infty$$ in Chen and Sung [3], (2.2) holds. So we only need to prove (2.1). Noting that $$EY_{k}\leq EXI(X>A)<\varepsilon /8$$, we have that

\begin{aligned} \max_{1\leq m\leq n} \Biggl\vert \sum^{m}_{k=1}(Y_{k}-EY_{k}) \Biggr\vert &\leq \sum^{n}_{k=1}Y_{k}+ \sum^{n}_{k=1}EY_{k} \\ &< \sum^{n}_{k=1}Y_{k}+ \varepsilon n/8. \end{aligned}

Therefore, to prove (2.1), it suffices to prove that

$$\sum_{n=1}^{\infty }n^{-1}P \Biggl\{ \sum^{n}_{k=1}Y_{k}>3\varepsilon n/8 \Biggr\} < \infty .$$
(2.3)

For $$n>A$$ and $$1\le k\le n$$, let

$$W_{nk}=(X_{k}-A)I(A< X_{k}\leq n)+(n-A)I(X_{k}>n).$$

Then $$EW_{nk}\leq EXI(X>A)<\varepsilon /8$$ and $$W_{nk}=Y_{k}$$ if $$X_{k}\le n$$. It follows that

\begin{aligned} \Biggl\{ \sum^{n}_{k=1}Y_{k}>3 \varepsilon n/8 \Biggr\} &\subset \Biggl\{ \sum^{n}_{k=1}W_{nk}>3 \varepsilon n/8 \Biggr\} \cup \Biggl( \bigcup^{n}_{k=1} \{X_{k}>n\} \Biggr) \\ &\subset \Biggl\{ \Biggl\vert \sum^{n}_{k=1} (W_{nk}-EW_{nk}) \Biggr\vert > \varepsilon n/4 \Biggr\} \cup \Biggl(\bigcup^{n}_{k=1} \{X_{k}>n\} \Biggr). \end{aligned}

So, to prove (2.3), it is enough to show that

$$\sum_{n=1}^{\infty }n^{-1}P \Biggl\{ \Biggl\vert \sum^{n}_{k=1}(W_{nk}-EW_{nk}) \Biggr\vert >\varepsilon n/4 \Biggr\} < \infty ,$$
(2.4)

since $$\sum^{\infty }_{n=1}n^{-1}P\{\bigcup^{n}_{k=1}\{X_{k}>n\}\}\leq \sum^{\infty }_{n=1}P\{X>n\}\leq EX<\infty$$.

Since $$h(x)\uparrow$$ and $$x/h(x)\uparrow$$, we have that, for $$n>A$$,

\begin{aligned} M_{n}&:=\sum^{n}_{k=1}E(W_{nk}-EW_{nk})^{2} \leq \sum^{n}_{k=1}EW_{nk}^{2} \\ &\leq nEX^{2}I(A< X\leq n)+n^{3}P\{X>n\} \\ &\leq nE \biggl[Xh(X)\cdot \frac{X}{h(X)}I(A< X\leq n) \biggr]+n^{3}P \bigl\{ Xh(X)>nh(n), X>n \bigr\} \\ &\leq \frac{n^{2}}{h(n)}\cdot EXh(X)I(A< X\le n) +\frac{n^{2}}{h(n)} \cdot EXh(X)I(X> n) \\ &= \frac{n^{2}}{h(n)}\cdot EXh(X)I(X>A) \\ &\leq \frac{\varepsilon ^{2} n^{2}}{64 \delta h(n)}. \end{aligned}

Note that $$\{W_{nk}, 1\le k\le n\}$$ is a sequence of WOD random variables, since each $$W_{nk}$$ is a monotone transformation of $$X_{k}$$ (see Proposition 1.1 in Wang et al. [15]). Then we have by Lemma 2.1 with $$x=\varepsilon n/4$$ and $$y=\varepsilon n /(16 \delta h(n))$$ that

\begin{aligned} &P \Biggl\{ \Biggl\vert \sum^{n}_{k=1}(W_{nk}-EW_{nk}) \Biggr\vert >\varepsilon n/4 \Biggr\} \\ &\quad \leq 2P \Bigl\{ \max_{1\leq k\leq n} \vert W_{nk}-EW_{nk} \vert >\varepsilon n / \bigl(16 \delta h(n) \bigr) \Bigr\} \\ & \quad\quad {} +2g(n)\exp \biggl\{ - \frac{\varepsilon ^{2}n^{2}}{32 (\varepsilon ^{2} n^{2}/(64 \delta h(n))+\varepsilon ^{2} n^{2}/(64 \delta h(n)) )} \biggr\} \\ &\quad \leq 2P \Bigl\{ \max_{1\leq k\leq n}W_{nk} +\varepsilon /8> \varepsilon n / \bigl(16 \delta h(n) \bigr) \Bigr\} +2g(n)\exp \bigl\{ - \delta h(n) \bigr\} . \end{aligned}
(2.5)

Since $$n/h(n)\uparrow \infty$$, we have that $$\varepsilon n/(16\delta h(n))-\varepsilon /8> \varepsilon n/(17 \delta h(n))$$ for all large n. We also have by (1.5) and Lemma 2.2 that $$\sup_{n\ge 1} h(n)/h(\varepsilon n/(17 \delta h(n)))\le D$$ for some constant $$D>0$$. It follows that

\begin{aligned} &\sum^{\infty }_{n=1} n^{-1} P \Bigl\{ \max_{1\leq k\leq n}W_{nk} + \varepsilon /8>\varepsilon n/ \bigl(16 \delta h(n) \bigr) \Bigr\} \\ &\quad \le C\sum^{\infty }_{n=1} n^{-1} P \Bigl\{ \max_{1\leq k\leq n}W_{nk}> \varepsilon n/ \bigl( 17 \delta h(n) \bigr) \Bigr\} \\ &\quad \le C\sum^{\infty }_{n=1} n^{-1} P \Bigl\{ \max_{1\leq k\leq n}X_{k}> \varepsilon n/ \bigl( 17 \delta h(n) \bigr) \Bigr\} \\ &\quad \le C\sum^{\infty }_{n=1} P \bigl\{ X> \varepsilon n/ \bigl( 17 \delta h(n) \bigr) \bigr\} \\ &\quad = C\sum^{\infty }_{n=1}P \biggl\{ Xh(X)> \frac{\varepsilon n}{17 \delta h(n)} h \bigl(\varepsilon n/ \bigl( 17 \delta h(n) \bigr) \bigr) \biggr\} \\ &\quad \le C\sum^{\infty }_{n=1}P \bigl\{ Xh(X)> \varepsilon n/(17 \delta D) \bigr\} \\ &\quad \leq C \varepsilon ^{-1} 17 \delta D E \bigl[Xh(X) \bigr]< \infty . \end{aligned}
(2.6)

Therefore, (2.4) holds by (2.5), (2.6), and (1.6). The proof is completed. □

### Proof of Theorem 1.2

Without loss of generality, we can assume that $$X\geq 0$$ a.s. Let $$\varepsilon >0$$ be given. Since $$EX<\infty$$, there exists $$A>0$$ such that $$EXI(X>A)<\varepsilon /8$$. Let

$$Y_{k}=(X_{k}-A)I(X_{k}>A), \quad\quad Z_{k}=X_{k}I(X_{k}\leq A)+AI(X_{k}>A).$$

Then $$X_{k}=Y_{k}+Z_{k}$$. As in the proof of Theorem 1.1, it suffices to prove that

$$\sum_{n=1}^{\infty }n^{-1}P \Biggl\{ \sum^{n}_{k=1}Y_{k}>3\varepsilon n/8 \Biggr\} < \infty .$$

For $$n/\sqrt{g(n)}>A$$ and $$1\le k\le n$$, let

$$W_{nk}=(X_{k}-A)I \bigl(A< X_{k}\leq n/ \sqrt{g(n)} \bigr)+ \bigl(n/\sqrt{g(n)}-A \bigr)I \bigl(X_{k}>n/ \sqrt{g(n)} \bigr).$$

Then $$EW_{nk}\leq EXI(X>A)<\varepsilon /8$$ and $$W_{nk}=Y_{k}$$ if $$X_{k}\le n/\sqrt{g(n)}$$. It follows that

\begin{aligned} \Biggl\{ \sum^{n}_{k=1}Y_{k}>3 \varepsilon n/8 \Biggr\} &\subset \Biggl\{ \sum^{n}_{k=1}W_{nk}>3 \varepsilon n/8 \Biggr\} \cup \Biggl( \bigcup^{n}_{k=1} \bigl\{ X_{k}>n/\sqrt{g(n)} \bigr\} \Biggr) \\ &\subset \Biggl\{ \Biggl\vert \sum^{n}_{k=1} (W_{nk}-EW_{nk}) \Biggr\vert > \varepsilon n/4 \Biggr\} \cup \Biggl(\bigcup^{n}_{k=1} \bigl\{ X_{k}>n/\sqrt{g(n)} \bigr\} \Biggr). \end{aligned}

By (1.7) and $$0< g(x)\uparrow$$, we have that $$\sup_{n\ge 1} g(n)/g(n/\sqrt{g(n)})\le \sup_{n\ge 1} g(n)/g(n/g(n)) \le D$$ for some constant $$D>0$$ (note that $$g(n)\ge 1$$, since $$\max \{g_{L}(n), g_{U}(n)\}\le g(n)$$). Then

\begin{aligned} \sum_{n=1}^{\infty }n^{-1} P \Biggl\{ \bigcup^{n}_{k=1} \bigl\{ X_{k}>n/ \sqrt{g(n)} \bigr\} \Biggr\} &\leq \sum^{\infty }_{n=1}P \bigl\{ X>n/ \sqrt{g(n)} \bigr\} \\ &=\sum^{\infty }_{n=1}P \biggl\{ X \sqrt{g(X)} > \frac{n}{\sqrt{g(n)}} \sqrt{g \bigl(n/\sqrt{g(n)} \bigr)} \biggr\} \\ &\le \sum^{\infty }_{n=1}P \bigl\{ X \sqrt{g(X)} > n/\sqrt{D} \bigr\} \\ &\le \sqrt{D} E \bigl[X\sqrt{g(X)} \bigr]< \infty . \end{aligned}
(2.7)

It remains to prove that

$$I:=\sum_{n=1, n/\sqrt{g(n)}>A}^{\infty }n^{-1}P \Biggl\{ \Biggl\vert \sum^{n}_{k=1}(W_{nk}-EW_{nk}) \Biggr\vert >\varepsilon n/4 \Biggr\} < \infty ,$$
(2.8)

where $$\sum_{n=1, n/\sqrt{g(n)}>A}^{\infty }$$ means that the summation is taken over all positive integers n satisfying $$n/\sqrt{g(n)}>A$$. The rest of the proof is similar to that of Chen and Sung [3]. We have by Markov’s inequality and Lemma 2.6 in Chen and Sung [3] that

\begin{aligned} I&\le C \sum_{n=1, n/\sqrt{g(n)}>A}^{\infty }n^{-3} E \Biggl\vert \sum^{n}_{k=1}(W_{nk}-EW_{nk}) \Biggr\vert ^{2} \\ &\le C \sum_{n=1, n/\sqrt{g(n)}>A}^{\infty }n^{-3} g(n) \sum^{n}_{k=1} EW^{2}_{nk} \\ &\le C \sum_{n=1}^{\infty }n^{-2} g(n) EX^{2} I \bigl(X\le n/\sqrt{g(n)} \bigr) + C \sum _{n=1}^{\infty }P \bigl\{ X>n/\sqrt{g(n)} \bigr\} . \end{aligned}
(2.9)

Interchanging the order of summation, we have that

\begin{aligned} &\sum_{n=1}^{\infty }n^{-2} g(n) EX^{2} I \bigl(X\le n/\sqrt{g(n)} \bigr) \\ &\quad =\sum_{n=1}^{\infty }n^{-2} g(n) \sum_{i=1}^{n} EX^{2} I \bigl((i-1)/\sqrt{g(i-1)}< X \le i/\sqrt{g(i)} \bigr) \\ &\quad =\sum_{i=1}^{\infty }EX^{2} I \bigl((i-1)/\sqrt{g(i-1)}< X\le i/\sqrt{g(i)} \bigr) \sum _{n=i}^{\infty }n^{-2} g(n) \\ &\quad \le C \sum_{i=1}^{\infty }EX^{2} I \bigl((i-1)/\sqrt{g(i-1)}< X\le i/\sqrt{g(i)} \bigr) i^{-1} g(i) \\ &\quad = C \sum_{i=1}^{\infty }E \biggl[ X \sqrt{g(X)} \cdot \frac{X}{\sqrt{g(X)}} I \bigl((i-1)/\sqrt{g(i-1)}< X\le i/ \sqrt{g(i)} \bigr) \biggr] i^{-1} g(i) \\ &\quad \le C\sum_{i=1}^{\infty }E \bigl[ X \sqrt{g(X)} I \bigl((i-1)/\sqrt{g(i-1)}< X \le i/\sqrt{g(i)} \bigr) \bigr] \frac{ i/\sqrt{g(i)}}{\sqrt{ g(i/\sqrt{g(i)})}} i^{-1} g(i) \\ &\quad \le C\sqrt{D}\sum_{i=1}^{\infty }E \bigl[ X\sqrt{g(X)} I \bigl((i-1)/\sqrt{g(i-1)}< X \le i/\sqrt{g(i)} \bigr) \bigr] \\ &\quad =C\sqrt{D} E \bigl[X\sqrt{g(X)} \bigr]< \infty . \end{aligned}
(2.10)

We also have by the proof of (2.7) that

$$\sum_{n=1}^{\infty }P \bigl\{ X>n/\sqrt{g(n)} \bigr\} \le \sqrt{D} E \bigl[X\sqrt{g(X)} \bigr]< \infty .$$
(2.11)

Hence (2.8) holds by (2.9)–(2.11). The proof is completed. □

### Proof of Corollary 1.1

If (I) holds, then all conditions of Theorem 1.1 are satisfied provided that $$h(x):=\log x$$. If (II) holds, then all conditions of Theorem 1.1 are satisfied provided that $$h(x):=\log \log x$$. If (III) holds, then all conditions of Theorem 1.2 are satisfied provided that $$g(x):=C(\log \log x)^{\alpha }$$. Hence the result follows from Theorems 1.1 and 1.2. □
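To illustrate case (I), conditions (1.5) and (1.6) can be checked directly for $$h(x)=\log x$$; this routine verification is sketched here for the reader (recall $$\log x=\log _{e}\max \{x,e\}$$, so $$\exp (-\delta \log n)=n^{-\delta }$$ for $$n\ge 3$$):

```latex
% Condition (1.5) with h(x) = \log x:
\limsup_{x\to\infty} \frac{h(x)}{h(x/h(x))}
  = \lim_{x\to\infty} \frac{\log x}{\log x - \log\log x} = 1 < \infty .
% Condition (1.6) with g(n) \le C n^{\alpha}: taking \delta = \alpha + 1,
\sum_{n=1}^{\infty} n^{-1} g(n) \exp(-\delta \log n)
  \le C + C \sum_{n=3}^{\infty} n^{\alpha - 1 - \delta}
  = C + C \sum_{n=3}^{\infty} n^{-2} < \infty .
```

Cases (II) and (III) are verified in the same way, with $$h(x)=\log \log x$$ in (II) and $$g(x)=C(\log \log x)^{\alpha }$$ in (III).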

### Proof of Theorem 1.3

The proof is the same as that of Theorem 1.1 and is omitted. □

### Proof of Theorem 1.4

The proof is the same as that of Theorem 1.2 and is omitted. □

## 3 Examples

In this section, we provide two examples satisfying Theorem 1.1 or Theorem 1.2.

Assume that $$\{ (Y_{n}, Z_{n}), n\ge 1\}$$ is a sequence of i.i.d. random vectors such that, for each $$n\ge 1$$, $$Y_{n}$$ and $$Z_{n}$$ have the same distribution as X, and they are dependent according to the Farlie–Gumbel–Morgenstern copula

$$C(u,v)=uv+\theta _{n} uv(1-u) (1-v), \quad (u,v) \in [0,1]\times [0,1],$$

where $$-1\le \theta _{n} \le 1$$.

Set $$X_{n}= Y_{(n+1)/2}$$ if n is odd, and $$X_{n}=Z_{n/2}$$ if n is even. Then $$\{X_{n}, n \ge 1\}$$ is a sequence of WOD random variables with

$$g_{L}(n)=g_{U}(n)= \textstyle\begin{cases} \prod_{i=1}^{[n/2]} a(\theta _{i}), &n\ge 2, \\ 1, & n=1, \end{cases}$$

where

$$a(\theta _{n})= \textstyle\begin{cases} 1+\theta _{n}, &0< \theta _{n}\le 1, \\ 1, & -1\le \theta _{n}\le 0 \end{cases}$$

(see Examples in Wang et al. [15]).
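As a numerical sanity check of this construction (an illustrative sketch, not part of the paper; the function names are ours), one can verify the orthant-dependence bounds for a single FGM pair with uniform marginals, where the joint upper-orthant probability factors as $$P\{U>u, V>v\}=(1-u)(1-v)(1+\theta uv)$$, so the bounds hold with dominating coefficient $$1+\theta$$ for $$\theta \ge 0$$:

```python
# FGM copula C(u, v) = P(U <= u, V <= v) for uniform marginals U, V.
def fgm_copula(u, v, theta):
    return u * v + theta * u * v * (1 - u) * (1 - v)

# P(U > u, V > v) by inclusion-exclusion.
def upper_orthant(u, v, theta):
    return 1 - u - v + fgm_copula(u, v, theta)

theta = 0.7
grid = [i / 20 for i in range(21)]
for u in grid:
    for v in grid:
        # WUOD bound: joint tail <= (1 + theta) * product of marginal tails
        assert upper_orthant(u, v, theta) <= (1 + theta) * (1 - u) * (1 - v) + 1e-12
        # WLOD bound: C(u, v) <= (1 + theta) * u * v
        assert fgm_copula(u, v, theta) <= (1 + theta) * u * v + 1e-12
```

This matches $$a(\theta _{n})=1+\theta _{n}$$ for $$0<\theta _{n}\le 1$$ in the display above; for $$\theta _{n}\le 0$$ the pair is negatively dependent and the coefficient 1 suffices.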

The following example satisfies the conditions of Theorem 1.1 but does not satisfy the conditions of Theorem 1.2.

### Example 3.1

Let $$E \vert X \vert \log \vert X \vert <\infty$$, and $$\theta _{n}=1$$ if $$n=2^{k}$$, $$k\ge 0$$, and $$\theta _{n}=0$$ otherwise. Then $$g_{L}(n)=g_{U}(n)=2^{k}$$ for $$2^{k}\le n<2^{k+1}$$, $$k\ge 0$$. If we take $$g(x)=\max \{1, x\}$$ and $$h(x)=\log x$$, then $$g(x)$$ and $$h(x)$$ satisfy all the conditions of Theorem 1.1. In order to apply Theorem 1.2, we must take $$g(x)=\log ^{2} x$$ by the moment condition. However, $$g(x)$$ does not satisfy the condition $$\max \{g_{L}(n), g_{U}(n)\}\le g(n)$$ of Theorem 1.2.
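The closed form $$g_{L}(n)=g_{U}(n)=2^{k}$$ for $$2^{k}\le n<2^{k+1}$$ can be confirmed numerically (a sketch under the example's choice of $$\theta _{n}$$; the function name is ours):

```python
# In Example 3.1, theta_i = 1 exactly when i is a power of 2, so
# a(theta_i) = 2 there and 1 otherwise, and the dominating coefficient
# is g_L(n) = g_U(n) = prod_{i <= n/2} a(theta_i) for n >= 2.
def dominating_coefficient(n):
    if n == 1:
        return 1
    prod = 1
    for i in range(1, n // 2 + 1):
        if i & (i - 1) == 0:  # i is a power of 2
            prod *= 2
    return prod

# Claimed closed form: g(n) = 2^k for 2^k <= n < 2^(k+1),
# i.e., 2 ** floor(log2 n), computed exactly via bit_length.
for n in range(2, 1025):
    assert dominating_coefficient(n) == 2 ** (n.bit_length() - 1)
```

In particular the coefficients grow linearly in n, which is why $$g(x)=\max \{1,x\}$$ works in Theorem 1.1 but no $$g(x)=O(\log ^{2}x)$$ can dominate them.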

The following example satisfies the conditions of Theorem 1.2 but does not satisfy the conditions of Theorem 1.1.

### Example 3.2

Let $$E \vert X \vert \sqrt{\log _{2} \log _{2} \vert X \vert }<\infty$$, and $$\theta _{n}=1$$ if $$n=2^{-1+2^{2^{k}}}$$, $$k\ge 1$$, and $$\theta _{n}=0$$ otherwise, where $$\log _{2} x=\max \{1, \log _{e} x/\log _{e} 2\}$$. Then we have that

$$g_{L}(n)=g_{U}(n)= \textstyle\begin{cases} 1, &1\le n< 16, \\ 2^{k}, &2^{2^{2^{k}}}\le n< 2^{2^{2^{k+1}}}, k\ge 1. \end{cases}$$

If we take $$g(x)=\log _{2} \log _{2} x$$, then $$g(x)$$ satisfies all the conditions of Theorem 1.2. In order to apply Theorem 1.1, we must take $$h(x)= \sqrt{\log _{2} \log _{2} x}$$ by the moment condition. However, $$g(x)$$ and $$h(x)$$ do not satisfy (1.6) of Theorem 1.1.


## References

1. Bai, P., Chen, P., Sung, S.H.: On complete convergence and the strong law of large numbers for pairwise independent random variables. Acta Math. Hung. 142, 502–518 (2014)

2. Chen, P., Hu, T.-C., Volodin, A.: A note on the rate of complete convergence for maximum of partial sums for moving average processes in Rademacher type Banach spaces. Lobachevskii J. Math. 21, 45–55 (2006)

3. Chen, P., Sung, S.H.: A Spitzer-type law of large numbers for widely orthant dependent random variables. Stat. Probab. Lett. 154, 108544 (2019)

4. Chen, P., Sung, S.H.: Complete convergence for weighted sums of widely orthant-dependent random variables. J. Inequal. Appl. 2021, 45 (2021)

5. Chen, W., Wang, Y., Cheng, D.: An inequality of widely dependent random variables and its applications. Lith. Math. J. 56, 16–31 (2016)

6. Etemadi, N.: An elementary proof of the strong law of large numbers. Z. Wahrscheinlichkeitstheor. Verw. Geb. 55, 119–122 (1981)

7. Guan, L., Xiao, Y., Zhao, Y.: Complete moment convergence of moving average processes for m-WOD sequence. J. Inequal. Appl. 2021, 16 (2021)

8. Kruglov, V.M.: Strong law of large numbers, stability problems for stochastic models. In: Zolotarev, V.M., Kruglov, V.M., Korolev, V.Yu. (eds.) TVP/VSP (Moscow/Utrecht, 1994), pp. 139–150 (1994)

9. Lang, J., He, T., Yu, Z., Wu, Y., Wang, X.: Complete convergence for randomly weighted sums of random variables and its application in linear-time-invariant systems. Commun. Stat., Simul. Comput. (2021). https://doi.org/10.1080/03610918.2020.1870695

10. Matula, P.: A note on the almost sure convergence of sums of negatively dependent variables. Stat. Probab. Lett. 15, 209–213 (1992)

11. Petrov, V.V.: Sums of Independent Random Variables. Springer, Berlin (1975)

12. Shen, A., Yao, M., Wang, W., Volodin, A.: Exponential probability inequalities for WNOD random variables and their applications. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 110, 251–268 (2016)

13. Spitzer, F.: A combinatorial lemma and its application to probability theory. Trans. Am. Math. Soc. 82, 323–339 (1956)

14. Utev, S., Peligrad, M.: Maximal inequalities and an invariance principle for a class of weakly dependent random variables. J. Theor. Probab. 16, 101–115 (2003)

15. Wang, K., Wang, Y., Gao, Q.: Uniform asymptotics for the finite-time ruin probability of a dependent risk model with a constant interest rate. Methodol. Comput. Appl. Probab. 15, 109–124 (2013)

16. Wang, Y., Cheng, D.: Basic renewal theorems for random walks with widely dependent increments. J. Math. Anal. Appl. 384, 597–606 (2011)

17. Wang, Y., Cui, Z., Wang, K., Ma, X.: Uniform asymptotics of the finite-time ruin probability for all times. J. Math. Anal. Appl. 390, 208–223 (2012)

## Acknowledgements

The authors would like to thank the referees for careful reading of the manuscript and valuable suggestions which helped in improving an earlier version of this paper.

## Funding

The research of Soo Hak Sung is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1F1A1A01050160).

## Author information


### Contributions

All authors read and approved the manuscript.

### Corresponding author

Correspondence to Soo Hak Sung.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests.

## Rights and permissions


Chen, P., Luo, J. & Sung, S.H. Further Spitzer’s law for widely orthant dependent random variables. J Inequal Appl 2021, 183 (2021). https://doi.org/10.1186/s13660-021-02718-4