
Strong laws of large numbers for general random variables in sublinear expectation spaces

Abstract

In this paper, we establish the equivalence between the Kolmogorov maximal inequality and the Hájek–Rényi maximal inequality, in both moment and capacity form, in sublinear expectation spaces. Based on these equivalences, we establish several strong laws of large numbers for general random variables and obtain the growth rate of the partial sums. As a first application, a strong law of large numbers for negatively dependent random variables is obtained. As a second application, we consider the normalizing sequence \(\{\log n\}_{n\ge 1}\) and derive some special limit properties in sublinear expectation spaces.

1 Introduction and notations

The general framework of the sublinear expectation space was introduced by Peng [13]; it is a natural extension of the classical linear expectation space, with linearity replaced by subadditivity and positive homogeneity. This simple generalization provides a very flexible framework for modeling uncertainty problems in statistics and finance. Peng [14] introduced independent and identically distributed random variables and the G-normal distribution, and obtained the weak law of large numbers and the central limit theorem in sublinear expectation spaces. Since then, many authors have investigated limit properties, especially the law of large numbers, in sublinear expectation spaces. For instance, Chen, Wu and Li [5] gave a strong law of large numbers for non-negative product independent random variables; Chen [2] obtained the Kolmogorov strong law of large numbers for independent and identically distributed random variables; Zhang [21] gave necessary and sufficient conditions for the Kolmogorov strong law of large numbers to hold for independent and identically distributed random variables under a continuous sublinear expectation; Zhang [22] obtained a strong law of large numbers for a sequence of extended independent random variables; Chen, Hu and Zong [3] gave several strong laws of large numbers under moment conditions on the partial sums together with conditions similar to Petrov's; Chen, Huang and Wu [4] established an extension of the strong law of large numbers under exponential independence. There are also many results on laws of large numbers under different non-additive probability frameworks; see [1, 6, 12, 18] and the references therein.

However, none of these papers investigates the growth rate of the partial sums. The main purpose of this paper is to explore the growth rate of the partial sums and to investigate the strong law of large numbers for general random variables in sublinear expectation spaces.

In the classical probability setting, Fazekas and Klesov [9] gave a general approach to obtaining the strong law of large numbers for sequences of random variables by using a Hájek–Rényi type maximal inequality. This general approach imposes no restriction on the dependence structure of the random variables. Under the same conditions as those in Fazekas and Klesov [9], Hu and Hu [10] obtained the growth rate for the sums of random variables. Sung, Hu and Volodin [17] improved these results and applied the approach to the weak law of large numbers for tail series. Tómács and Líbor [20] used the probability Hájek–Rényi type maximal inequality to prove the strong law of large numbers. More results and applications of this approach in the classical probability space can be found in Fazekas [8] and the references therein.

In this paper, we extend the approach of Fazekas and Klesov [9] and Hu and Hu [10] to sublinear expectation spaces. In Sect. 2, we first show the equivalence between the Kolmogorov maximal inequality and the Hájek–Rényi maximal inequality, in both moment and capacity form, in sublinear expectation spaces. Based on these, in Sect. 3 we establish several strong laws of large numbers for general random variables in sublinear expectation spaces and obtain the growth rate of the partial sums. As an application, a strong law of large numbers for negatively dependent random variables is obtained, since the Kolmogorov type maximal inequality for negatively dependent random variables is proved in Sect. 2. In Sect. 4, we give another application: we consider the normalizing sequence \(\{\log n\}_{n\ge 1}\) and obtain some limit properties in sublinear expectation spaces.

In the remainder of this section, we give some notations in sublinear expectation spaces. We use the framework and notations in Peng [14] and [15]. Let \((\varOmega ,\mathcal{F})\) be a given measurable space and \(\mathscr{H}\) be a linear space of real measurable functions defined on \((\varOmega , \mathcal{F})\) such that if \(X_{1},\ldots , X_{n} \in \mathscr{H}\) then \(\psi (X_{1},\ldots ,X_{n})\in \mathscr{H}\), for each \(\psi \in C_{l,\mathrm{Lip}}( \mathbb{R}^{n})\), where \(C_{l,\mathrm{Lip}}(\mathbb{R}^{n})\) denotes the linear space of functions ψ satisfying

$$ \bigl\vert \psi (y)-\psi (z) \bigr\vert \leq C\bigl(1+ \vert y \vert ^{m}+ \vert z \vert ^{m}\bigr) \vert y-z \vert , \quad \mbox{for all } y,z\in \mathbb{R}^{n}, $$

for some constants \(C>0\) and \(m \in \mathbb{N}\) depending on ψ.

Here and in the sequel, \(\mathbb{N}\) denotes the set of all non-negative integers and \(\mathbb{N}^{*}\) denotes the set of all positive integers. In general, \(C_{l,\mathrm{Lip}}(\mathbb{R}^{n})\) can be replaced by \(C_{b,\mathrm{Lip}}( \mathbb{R}^{n})\), which is the space of all bounded and Lipschitz continuous functions on \(\mathbb{R}^{n}\). The space \(\mathscr{H}\) is considered as a space of random variables.

Definition 1.1

A sublinear expectation \(\mathbb{E}\) on \(\mathscr{H}\) is a functional \(\mathbb{E}: \mathscr{H}\to \mathbb{R}\cup \{-\infty ,\infty \}\) satisfying the following properties: for all \(X, Y \in \mathscr{H}\), we have

  (a) monotonicity: \(\mathbb{E} [X]\ge \mathbb{E} [Y]\), if \(X \ge Y\);

  (b) constant preserving: \(\mathbb{E} [c] = c\), for \(c \in \mathbb{R}\);

  (c) subadditivity: \(\mathbb{E}[X+Y]\le \mathbb{E} [X] + \mathbb{E} [Y]\), whenever \(\mathbb{E}[X]+\mathbb{E}[Y]\) is not of the form \(\infty -\infty \) or \(-\infty +\infty \);

  (d) positive homogeneity: \(\mathbb{E} [\lambda X] = \lambda \mathbb{E}[X]\), for \(\lambda \ge 0\).

The triple \((\varOmega , \mathscr{H}, \mathbb{E})\) is called a sublinear expectation space. Given a sublinear expectation \(\mathbb{E} \), let us denote the conjugate expectation \(\mathcal{E}\) of \(\mathbb{E}\) by

$$ \mathcal{E}[X]:=-\mathbb{E}[-X],\quad \mbox{for all } X\in \mathscr{H}. $$

Lemma 1.2

(Lemma 2.4 in Peng [14])

A functional \(\mathbb{E}\) defined on \((\varOmega ,\mathscr{H})\) is a sublinear expectation if and only if there exists a family of (finitely additive) linear expectations \(\{ E_{\theta }: \theta \in \varTheta \}\) such that

$$ \mathbb{E}[X]=\max_{\theta \in \varTheta }E_{\theta }[X],\quad \textit{for all } X\in \mathscr{H}. $$
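The representation in Lemma 1.2 is easy to experiment with numerically. The following minimal Python sketch is our illustration, not part of the paper: the finite sample space, the two probability vectors and the test variables are arbitrary choices. It realizes a sublinear expectation as the maximum of two linear expectations and checks properties (b)–(d) of Definition 1.1 together with \(\mathcal{E}\le \mathbb{E}\).

```python
import numpy as np

# Illustrative sketch of Lemma 1.2 on a finite sample space: a sublinear
# expectation realized as the maximum of finitely many linear expectations.
rng = np.random.default_rng(0)

# family {E_theta}: two probability vectors on a 6-point sample space
thetas = [np.array([0.1, 0.2, 0.2, 0.2, 0.2, 0.1]),
          np.array([0.3, 0.1, 0.1, 0.1, 0.1, 0.3])]

def E(X):
    """Sublinear expectation: max over the family of linear expectations."""
    return max(float(p @ X) for p in thetas)

def conj_E(X):
    """Conjugate expectation: conj_E[X] = -E[-X]."""
    return -E(-X)

X, Y = rng.normal(size=6), rng.normal(size=6)

assert abs(E(np.full(6, 3.0)) - 3.0) < 1e-12       # (b) constant preserving
assert E(X + Y) <= E(X) + E(Y) + 1e-12             # (c) subadditivity
assert abs(E(2.5 * X) - 2.5 * E(X)) < 1e-12        # (d) positive homogeneity
assert conj_E(X) <= E(X) + 1e-12                   # conjugate never exceeds E
print("properties (b)-(d) and conj_E <= E verified on this example")
```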

Obviously, for all \(X\in \mathscr{H}\), \(\mathcal{E}[X]\le \mathbb{E}[X]\). Furthermore, \((\mathbb{E}, \mathcal{E})\) can generate a pair \((V,v)\) of capacities by

$$ V(A):=\sup_{\theta \in \varTheta } E_{\theta }[I_{A}],\qquad v(A):=1-V\bigl(A ^{c}\bigr), \quad \mbox{for all } A\in \mathcal{F}, $$

where \(\{E_{\theta }: \theta \in \varTheta \}\) is the family of linear expectations in Lemma 1.2 and \(A^{c}\) is the complement set of A. It is easy to check that V is subadditive, monotonic and

$$ \mathbb{E}[X]\le V(A)\le \mathbb{E}[Y], \qquad \mathcal{E}[X]\le v(A) \le \mathcal{E}[Y], \quad \mbox{if } X\le I_{A}\le Y, X,Y\in \mathscr{H}. $$
(1)

Moreover, if \(I_{A}\in \mathscr{H}\), then \(V(A)=\mathbb{E}[I_{A}]\) and \(v(A)=\mathcal{E}[I_{A}]\).

Remark 1.3

Zhang [21] introduced another way to define the capacities generated by \((\mathbb{E},\mathcal{E})\), that is,

$$ \bar{V}(A):=\inf \bigl\{ \mathbb{E}[X]: I_{A}\le X, X\in \mathscr{H} \bigr\} , \qquad \bar{v}(A):=1-\bar{V}\bigl(A^{c}\bigr),\quad \mbox{for all } A \in \mathcal{F}. $$

Under this definition, \(\bar{V}\) is also subadditive and monotonic, and \((\bar{V},\bar{v})\) satisfies (1). These two pairs of capacities satisfy \(\bar{v}(A)\le v(A)\le V(A)\le \bar{V}(A)\), for all \(A\in \mathcal{F}\). More properties of \((\bar{V},\bar{v})\) can be found in Zhang [21]. It is easy to see that Theorem 2.2, Proposition 2.5 and Theorem 2.7 in this paper still hold when \((V,v)\) is replaced by \((\bar{V},\bar{v})\).

Definition 1.4

  (i) In a sublinear expectation space \((\varOmega , \mathscr{H}, \mathbb{E})\), an n-dimensional random vector Y is said to be independent of another m-dimensional random vector X under \(\mathbb{E}\) if for each test function \(\psi \in C_{l,\mathrm{Lip}}(\mathbb{R} ^{m+n})\) we have

    $$ \mathbb{E} \bigl[\psi (\mathbf{X}, \mathbf{Y} )\bigr] = \mathbb{E} \bigl[ \mathbb{E}\bigl[ \psi (\mathbf{x}, \mathbf{Y} )\bigr]\big|_{\mathbf{x}=\mathbf{X}} \bigr], $$

    whenever \(\overline{\psi }(\mathbf{x}):=\mathbb{E} [|\psi (\mathbf{x}, \mathbf{Y} )| ]<\infty \) for all x and \(\mathbb{E} [|\overline{ \psi }(\mathbf{X})| ]<\infty \).

  (ii) A sequence of random variables \(\{X_{n}\}_{n\ge 1}\) is said to be independent, if \(X_{n+1}\) is independent of \((X_{1},\ldots , X _{n})\) for each \(n\ge 1\).

Definition 1.5

(Definition 1.5 in Zhang [21])

In a sublinear expectation space \((\varOmega ,\mathscr{H},\mathbb{E})\), an n-dimensional random vector Y is said to be negatively dependent on another m-dimensional random vector X under \(\mathbb{E}\) if, for each pair of test functions \(\psi _{1}\in C_{l,\mathrm{Lip}}(\mathbb{R}^{m})\) and \(\psi _{2}\in C_{l,\mathrm{Lip}}(\mathbb{R}^{n})\), we have \(\mathbb{E}[\psi _{1}( \mathbf{X})\psi _{2}(\mathbf{Y})]\le \mathbb{E}[\psi _{1}(\mathbf{X})] \mathbb{E}[\psi _{2}(\mathbf{Y})]\) whenever the following hold: \(\psi _{1}(\mathbf{X}) \ge 0\) and \(\mathbb{E}[\psi _{2}(\mathbf{Y})]\ge 0\); \(\mathbb{E}[|\psi _{1}(\mathbf{X})\psi _{2}(\mathbf{Y})|]<\infty \), \(\mathbb{E}[|\psi _{1}(\mathbf{X})|]<\infty \) and \(\mathbb{E}[|\psi _{2}(\mathbf{Y})|]< \infty \); and either \(\psi _{1}\) and \(\psi _{2}\) are both coordinatewise nondecreasing or both coordinatewise non-increasing.

2 Kolmogorov type and Hájek–Rényi type maximal inequalities

For convenience, let \(\max_{i\in A} a_{i}=0\) and \(\sum_{i\in A} a_{i}=0\) if \(A=\emptyset \).

Let \(\{X_{i}\}_{i\ge 1}\) denote a sequence of random variables in the sublinear expectation space \((\varOmega ,\mathscr{H},\mathbb{E})\), and let the partial sums of the random variables be \(S_{n}=\sum_{i=1}^{n}X_{i}\) for all \(n\in \mathbb{N}^{*}\) and \(S_{0}=0\). The constant c may differ from place to place.

Theorem 2.1

Let \(\{\alpha _{l}\}_{l=1}^{n}\) be a sequence of non-negative numbers and \(r>0\). Then the following two statements are equivalent:

  (i) there exists a constant \(c>0\) such that, for any \(k=1,\ldots ,n\),

    $$ \mathbb{E} \Bigl[\max_{1\le l\le k} \vert S_{l} \vert ^{r} \Bigr] \le c\sum_{l=1}^{k} \alpha _{l}; $$
    (2)

  (ii) there exists a constant \(c>0\) such that, for any nondecreasing positive numbers \(\{\beta _{l}\}_{l=1}^{n}\) and any \(k=1,\ldots ,n\),

    $$ \mathbb{E} \biggl[\max_{1\le l\le k} \biggl\vert \frac{S_{l}}{\beta _{l}} \biggr\vert ^{r} \biggr]\le 4c \sum_{l=1}^{k} \frac{\alpha _{l}}{\beta _{l}^{r}}. $$
    (3)

Proof

In (ii), taking \(\beta _{l}\equiv 1\) for \(l=1,\ldots ,n\), we get (i) from (ii). Now we turn to the proof that (i) implies (ii). The proof is based on the idea of the proof of Theorem 1.1 in Fazekas and Klesov [9].

Fix any nondecreasing positive numbers \(\{\beta _{l}\}_{l=1}^{n}\). Without loss of generality we can assume \(\beta _{1}=1\). For any fixed \(k\in \{1,\ldots ,n\}\), define the following notations:

$$\begin{aligned}& A_{i}=\bigl\{ l: 1\le l\le k \mbox{ and } 2^{i}\le \beta _{l}^{r}< 2^{i+1} \bigr\} , \quad i\in \mathbb{N}, \\& I=\max \{i: A_{i}\neq \emptyset \}, \\& m(i)= \begin{cases} \max \{l: l\in A_{i}\}, & \mbox{if }A_{i}\neq \emptyset , \\ m(i-1), & \mbox{if }A_{i}= \emptyset , \end{cases} \quad i\in \mathbb{N},\quad \mbox{and}\quad m(-1)=0. \end{aligned}$$

It is easy to see that \(I<\infty \), \(m(0)\ge 1\) and \(\{m(i)\}_{i=-1} ^{\infty }\) is nondecreasing. Then, by the subadditivity of \(\mathbb{E}\) and inequality (2), we have

$$\begin{aligned} \mathbb{E} \biggl[\max_{1\le l\le k} \biggl\vert \frac{S_{l}}{\beta _{l}} \biggr\vert ^{r} \biggr] \le & \sum_{i=0}^{I} \mathbb{E} \biggl[\max_{l\in A_{i}} \biggl\vert \frac{S_{l}}{\beta _{l}} \biggr\vert ^{r} \biggr] \le \sum_{i=0}^{I} 2^{-i}\mathbb{E} \Bigl[\max_{l\in A_{i}} \vert S_{l} \vert ^{r} \Bigr] \\ \le & \sum_{i=0}^{I} 2^{-i} \mathbb{E} \Bigl[\max_{l\le m(i)} \vert S _{l} \vert ^{r} \Bigr] \le \sum_{i=0}^{I} 2^{-i}c\sum_{l=1}^{m(i)} \alpha _{l} \\ \le & c\sum_{i=0}^{I} 2^{-i} \sum_{j=0}^{i}\sum _{l=m(j-1)+1}^{m(j)} \alpha _{l} = c \sum _{j=0}^{I} \sum_{l=m(j-1)+1}^{m(j)} \alpha _{l}\sum_{i=j}^{I} 2^{-i} \\ \leq &2c \sum_{j=0}^{I} 2^{-j} \sum_{l=m(j-1)+1}^{m(j)}\alpha _{l} = 2c \sum_{j=0}^{I} 2^{-j}\sum _{l=m(j-1)+1}^{m(j)}\frac{\alpha _{l}}{\beta _{l}^{r}}\beta _{l}^{r} \\ \le &4c \sum_{j=0}^{I} \sum _{l=m(j-1)+1}^{m(j)}\frac{\alpha _{l}}{ \beta _{l}^{r}} = 4c \sum _{l=1}^{k}\frac{\alpha _{l}}{\beta _{l}^{r}}. \end{aligned}$$

Therefore, the proof of this theorem is completed. □
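The blocking argument above is purely deterministic once the bound (2) is available, so its final chain of inequalities can be sanity-checked numerically. The sketch below is our illustration; \(\alpha \), \(\beta \) and r are arbitrary test data. It builds the sets \(A_{i}\), the indices \(m(i)\) and I, and verifies that \(\sum_{i=0}^{I}2^{-i}\sum_{l\le m(i)}\alpha _{l}\le 4\sum_{l=1}^{k}\alpha _{l}/\beta _{l}^{r}\) with \(c=1\).

```python
import numpy as np

# Deterministic sanity check of the blocking step in Theorem 2.1 (with c = 1):
#   sum_{i=0}^{I} 2^{-i} sum_{l <= m(i)} alpha_l  <=  4 sum_{l=1}^{k} alpha_l / beta_l^r.
rng = np.random.default_rng(1)
k, r = 200, 1.5
alpha = rng.random(k)                           # non-negative numbers
beta = np.cumsum(rng.random(k)) + 1.0
beta /= beta[0]                                 # nondecreasing, beta_1 = 1

idx = np.floor(np.log2(beta ** r)).astype(int)  # l is in A_i  <=>  idx[l] == i
I = int(idx.max())
lhs, last = 0.0, 0
for i in range(I + 1):
    in_Ai = np.where(idx == i)[0]
    if in_Ai.size:                              # m(i) = max A_i, else m(i) = m(i-1)
        last = int(in_Ai.max()) + 1             # 1-based m(i)
    lhs += 2.0 ** (-i) * alpha[:last].sum()

rhs = 4.0 * float(np.sum(alpha / beta ** r))
print(f"lhs = {lhs:.4f} <= rhs = {rhs:.4f}: {lhs <= rhs}")
```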

Theorem 2.2

Let \(\{\alpha _{l}\}_{l=1}^{n}\) be a sequence of non-negative numbers and \(r>0\). Then the following two statements are equivalent:

  (i) there exists a constant \(c>0\) such that, for any \(k=1,\ldots ,n\) and any \(\epsilon >0\),

    $$ V \Bigl(\max_{1\le l\le k} \vert S_{l} \vert \ge \epsilon \Bigr) \le c\epsilon ^{-r}\sum_{l=1}^{k} \alpha _{l}; $$
    (4)

  (ii) there exists a constant \(c>0\) such that, for any nondecreasing positive numbers \(\{\beta _{l}\}_{l=1}^{n}\), any \(k=1,\ldots ,n\) and any \(\epsilon >0\),

    $$ V \biggl(\max_{1\le l\le k} \biggl\vert \frac{S_{l}}{\beta _{l}} \biggr\vert \ge \epsilon \biggr)\le 4c \epsilon ^{-r}\sum_{l=1}^{k} \frac{\alpha _{l}}{\beta _{l}^{r}}. $$
    (5)

Proof

In (ii), taking \(\beta _{l}\equiv 1\) for \(l=1,\ldots ,n\), we get (i) from (ii). Now we focus on the proof that (i) implies (ii).

Fix any nondecreasing positive numbers \(\{\beta _{l}\}_{l=1}^{n}\) and fix any \(k\in \{1,\ldots ,n\}\). Without loss of generality we can assume \(\beta _{1}=1\). We use the same notations \(A_{i}\), I and \(m(i)\) defined in the proof of Theorem 2.1. Then, by the subadditivity and monotonicity of V and inequality (4), we have

$$\begin{aligned} V \biggl(\max_{1\le l\le k} \biggl\vert \frac{S_{l}}{\beta _{l}} \biggr\vert \ge \epsilon \biggr) \le &\sum_{i=0}^{I} V \biggl(\max_{l\in A_{i}} \biggl\vert \frac{S_{l}}{\beta _{l}} \biggr\vert \ge \epsilon \biggr) \\ \le & \sum_{i=0}^{I} V \Bigl(\max _{l\in A_{i}} \vert S_{l} \vert \ge \epsilon 2^{i/r} \Bigr) \\ \le & \sum_{i=0}^{I} V \Bigl(\max _{l\le m(i)} \vert S_{l} \vert \ge \epsilon 2^{i/r} \Bigr) \\ \le & \sum_{i=0}^{I}c\epsilon ^{-r}2^{-i}\sum_{l=1}^{m(i)} \alpha _{l}. \end{aligned}$$

As in the proof of Theorem 2.1, we have

$$ \sum_{i=0}^{I}c\epsilon ^{-r}2^{-i}\sum_{l=1}^{m(i)} \alpha _{l} \le 4c \epsilon ^{-r} \sum _{l=1}^{k}\frac{\alpha _{l}}{\beta _{l}^{r}}. $$

Hence, the proof of this theorem is completed. □

Remark 2.3

Inequalities (2) and (4) are Kolmogorov maximal inequalities in moment and capacity form, respectively. Likewise, inequalities (3) and (5) are Hájek–Rényi maximal inequalities in moment and capacity form, respectively.

At the end of this section, we give two Kolmogorov type maximal inequalities for negatively dependent random variables in sublinear expectation spaces. Before doing this, we prove two propositions which will be used in the proofs of these maximal inequalities.

Proposition 2.4

Let \(\{f_{i}\}_{i=1}^{n_{1}}\subset C_{l,\mathrm{Lip}}(\mathbb{R}^{n})\) and \(\{g_{i}\}_{i=1}^{m_{1}}\subset C_{l,\mathrm{Lip}}(\mathbb{R}^{m})\) be coordinatewise nondecreasing or coordinatewise non-increasing functions. If an n-dimensional random vector Y is negatively dependent on an m-dimensional random vector X under \(\mathbb{E}\), then \((f_{1}(\mathbf{Y}),\ldots ,f_{n_{1}}(\mathbf{Y}))\) is negatively dependent on \((g_{1}(\mathbf{X}),\ldots ,g_{m_{1}}(\mathbf{X}))\). In particular, −Y is negatively dependent on −X.

Proof

Take any test functions \(\psi _{1}\in C_{l,\mathrm{Lip}}( \mathbb{R}^{m_{1}})\) and \(\psi _{2}\in C_{l,\mathrm{Lip}}(\mathbb{R}^{n_{1}})\) such that \(\psi _{1}(g_{1}(\mathbf{X}),\ldots , g_{m_{1}}(\mathbf{X})) \ge 0\), \(\mathbb{E}[\psi _{2}(f_{1}(\mathbf{Y}),\ldots ,f_{n_{1}}( \mathbf{Y}))]\ge 0\),

$$ \mathbb{E}\bigl[ \bigl\vert \psi _{1}\bigl(g_{1}( \mathbf{X}),\ldots ,g_{m_{1}}(\mathbf{X})\bigr) \psi _{2} \bigl(f_{1}(\mathbf{Y}),\ldots ,f_{n_{1}}(\mathbf{Y})\bigr) \bigr\vert \bigr]< \infty , $$

\(\mathbb{E}[|\psi _{1}(g_{1}(\mathbf{X}),\ldots ,g_{m_{1}}(\mathbf{X}))|]< \infty \), \(\mathbb{E}[|\psi _{2}(f_{1}(\mathbf{Y}),\ldots ,f_{n_{1}}( \mathbf{Y}))|]<\infty \), and either \(\psi _{1}\) and \(\psi _{2}\) are both coordinatewise nondecreasing or both coordinatewise non-increasing. Define \(\tilde{\psi }_{1}(\cdot )=\psi _{1}(g_{1}(\cdot ),\ldots ,g_{m_{1}}(\cdot ))\) and \(\tilde{\psi }_{2}( \cdot )=\psi _{2}(f_{1}(\cdot ),\ldots ,f_{n_{1}}(\cdot ))\). Then \(\tilde{\psi }_{1}\in C_{l,\mathrm{Lip}}(\mathbb{R}^{m})\) and \(\tilde{\psi } _{2}\in C_{l,\mathrm{Lip}}(\mathbb{R}^{n})\) are either both coordinatewise nondecreasing or both coordinatewise non-increasing. Meanwhile \(\tilde{\psi }_{1}(\mathbf{X})\ge 0\), \(\mathbb{E}[\tilde{\psi }_{2}(\mathbf{Y})]\ge 0\), \(\mathbb{E}[| \tilde{\psi }_{1}(\mathbf{X})\tilde{\psi }_{2}(\mathbf{Y})|]<\infty \), \(\mathbb{E}[|\tilde{\psi }_{1}(\mathbf{X})|]<\infty \) and \(\mathbb{E}[| \tilde{\psi }_{2}(\mathbf{Y})|]<\infty \). Therefore, by the definition of negative dependence, we have

$$\begin{aligned}& \mathbb{E}\bigl[\psi _{1}\bigl(g_{1}(\mathbf{X}),\ldots ,g_{m_{1}}(\mathbf{X})\bigr) \psi _{2}\bigl(f_{1}( \mathbf{Y}),\ldots ,f_{n_{1}}(\mathbf{Y})\bigr)\bigr] \\& \quad = \mathbb{E}\bigl[ \tilde{\psi }_{1}(\mathbf{X})\tilde{\psi }_{2}(\mathbf{Y}) \bigr] \le \mathbb{E}\bigl[\tilde{\psi }_{1}(\mathbf{X})\bigr]\mathbb{E} \bigl[ \tilde{\psi }_{2}(\mathbf{Y})\bigr] \\& \quad = \mathbb{E}\bigl[\psi _{1}\bigl(g_{1}(\mathbf{X}),\ldots ,g_{m_{1}}(\mathbf{X})\bigr)\bigr] \mathbb{E}\bigl[\psi _{2} \bigl(f_{1}(\mathbf{Y}),\ldots ,f_{n_{1}}(\mathbf{Y})\bigr) \bigr]. \end{aligned}$$

Hence \((f_{1}(\mathbf{Y}),\ldots ,f_{n_{1}}(\mathbf{Y}))\) is negatively dependent on \((g_{1}(\mathbf{X}),\ldots ,g_{m_{1}}(\mathbf{X}))\) under \(\mathbb{E}\). □

The following proposition gives the Chebyshev inequalities in sublinear expectation spaces, which were proved in Chen, Wu and Li [5]. However, the proof there is not valid for the capacities defined by Zhang [21] (see Remark 1.3). We give a new proof that is also valid for those capacities.

Proposition 2.5

(Chebyshev inequalities)

Let X be any random variable in \(\mathscr{H}\) and f be any nondecreasing positive function in \(C_{l,\mathrm{Lip}}(\mathbb{R})\). Then, for any \(\epsilon \in \mathbb{R}\),

$$ V(X\ge \epsilon )\le \frac{\mathbb{E}[f(X)]}{f(\epsilon )},\qquad v(X \ge \epsilon )\le \frac{\mathcal{E}[f(X)]}{f(\epsilon )}. $$

Proof

Notice that

$$ I_{X\ge \epsilon }\le \frac{f(X)}{f(\epsilon )}\in \mathscr{H}, $$

then this proposition can be deduced directly from (1) and the positive homogeneity of \(\mathbb{E}\) and \(\mathcal{E}\). □
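For readers who want to see Proposition 2.5 in action, the following Monte Carlo sketch works in the representation used in Sect. 3, where \(\mathcal{P}\) is a finite family of classical probabilities; the two distributions and the choice \(f(x)=e^{x}\) are ours and purely illustrative.

```python
import numpy as np

# Monte Carlo illustration of the Chebyshev inequality
#   V(X >= eps) <= E[f(X)] / f(eps)
# when P is a finite family of classical probabilities.
rng = np.random.default_rng(2)
N = 200_000
samples = [rng.normal(0.0, 1.0, N),      # X under P_1
           rng.normal(0.5, 2.0, N)]      # X under P_2

def V(event):                            # V(A) = sup_P P(A)
    return max(event(s).mean() for s in samples)

def E(g):                                # E[g(X)] = sup_P E_P[g(X)]
    return max(g(s).mean() for s in samples)

f = np.exp                               # positive and nondecreasing
for eps in (0.5, 1.0, 2.0):
    lhs, rhs = V(lambda s: s >= eps), E(f) / f(eps)
    print(f"eps={eps}: V(X>=eps)={lhs:.4f} <= E[f(X)]/f(eps)={rhs:.4f}")
```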

Theorem 2.6

(Kolmogorov maximal inequality in moment type)

Let \(1\le p\le 2\), \(\{X_{l}\}_{l= 1}^{n}\) be a sequence of random variables in sublinear expectation space \((\varOmega ,\mathscr{H},\mathbb{E})\) with \(\mathbb{E}[X _{l}]=\mathcal{E}[X_{l}]=0\) for all \(l=1,\ldots ,n\). If \(X_{k}\) is negatively dependent on \((X_{k+1},\ldots ,X_{n})\) for all \(k=1,\ldots ,n-1\), then

$$ \mathbb{E} \Bigl[\max_{1\le l\le n} \vert S_{l} \vert ^{p} \Bigr] \le 2^{3-p}\sum _{l=1}^{n}\mathbb{E}\bigl[ \vert X_{l} \vert ^{p}\bigr]. $$
(6)

Proof

By Proposition 2.4, we know that \(-X_{k}\) is negatively dependent on \((-X_{k+1},\ldots ,-X_{n})\) for all \(k=1,\ldots ,n-1\). So by Theorem 2.1 in Zhang [21], we have

$$ \mathbb{E} \Bigl[ \Bigl\vert \max_{1\le l\le n}S_{l} \Bigr\vert ^{p} \Bigr] \le 2^{2-p}\sum _{l=1}^{n}\mathbb{E}\bigl[ \vert X_{l} \vert ^{p}\bigr],\quad \mbox{and}\quad \mathbb{E} \Bigl[ \Bigl\vert \max _{1\le l\le n}(-S_{l}) \Bigr\vert ^{p} \Bigr] \le 2^{2-p}\sum_{l=1}^{n}\mathbb{E} \bigl[ \vert X_{l} \vert ^{p}\bigr]. $$

Notice that

$$\begin{aligned} \max_{1\le l\le n} \vert S_{l} \vert ^{p}= \Bigl(\max_{1\le l\le n} \vert S_{l} \vert \Bigr) ^{p} \le & \Bigl\vert \max_{1\le l\le n}S_{l} \Bigr\vert ^{p}+ \Bigl\vert \max_{1 \le l\le n}(-S_{l}) \Bigr\vert ^{p}, \end{aligned}$$

taking \(\mathbb{E}\) on both sides of the above inequality, we can obtain (6) from the subadditivity of \(\mathbb{E}\). □

Theorem 2.7

(Kolmogorov maximal inequality in capacity type)

Let \(1\le p\le 2\), \(\{X_{l}\}_{l= 1}^{n}\) be a sequence of random variables in sublinear expectation space \((\varOmega ,\mathscr{H},\mathbb{E})\) with \(\mathbb{E}[X _{l}]=\mathcal{E}[X_{l}]=0\) for all \(l=1,\ldots ,n\). If \(X_{k}\) is negatively dependent on \((X_{k+1},\ldots ,X_{n})\) for all \(k=1,\ldots ,n-1\), then, for any \(\epsilon >0\),

$$ V \Bigl(\max_{1\le l\le n} \vert S_{l} \vert >\epsilon \Bigr) \le 2^{3-p}\epsilon ^{-p}\sum _{l=1}^{n}\mathbb{E}\bigl[ \vert X_{l} \vert ^{p}\bigr]. $$
(7)

Proof

By Proposition 2.5, we have

$$ V \Bigl(\max_{1\le l\le n} \vert S_{l} \vert >\epsilon \Bigr) \le \epsilon ^{-p}\mathbb{E}\Bigl[\max_{1\le l\le n} \vert S_{l} \vert ^{p}\Bigr]. $$

Then inequality (7) can be deduced from Theorem 2.6. □
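As a quick numerical illustration of (7), consider the degenerate case where \(\mathcal{P}\) is a singleton, so that V is a classical probability and classically independent centered variables satisfy Definition 1.5 with equality. The Rademacher example below, with \(p=2\), is our choice.

```python
import numpy as np

# Monte Carlo check of inequality (7) for i.i.d. Rademacher variables
# (E[X_l] = 0, E[|X_l|^p] = 1), with p = 2 and a singleton family P.
rng = np.random.default_rng(3)
p, n, reps = 2.0, 100, 50_000
X = rng.choice([-1.0, 1.0], size=(reps, n))
max_abs_S = np.abs(np.cumsum(X, axis=1)).max(axis=1)

for eps in (10.0, 15.0, 20.0):
    lhs = (max_abs_S > eps).mean()               # estimates V(max|S_l| > eps)
    rhs = 2 ** (3 - p) * eps ** (-p) * n         # sum of E[|X_l|^p] equals n
    print(f"eps={eps}: estimate={lhs:.4f} <= bound={rhs:.4f}")
```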

3 Strong laws of large numbers and the growth rate of partial sums

In this section and in the sequel, we assume that the sublinear expectation \(\mathbb{E}\) can be represented as

$$ \mathbb{E}[X]=\sup_{P\in \mathcal{P}}E_{P}[X], \quad \mbox{for all } X \in \mathscr{H}, $$

where \(\mathcal{P}\) is a nonempty set of σ-additive probabilities on \(\mathcal{F}\). It is easy to check that

$$ \mathcal{E}[X]=\inf_{P\in \mathcal{P}}E_{P}[X], \quad \mbox{for all } X \in \mathscr{H}, $$

and \((V,v)\) can be rewritten as

$$ V(A)=\sup_{P\in \mathcal{P}}P(A),\qquad v(A)=\inf_{P\in \mathcal{P}}P(A),\quad \mbox{for all } A\in \mathcal{F}. $$

Clearly, V is inner continuous and v is outer continuous, that is, \(V (A_{n})\uparrow V (A)\), if \(A_{n}\uparrow A\); and \(v(A_{n})\downarrow v(A)\), if \(A_{n}\downarrow A\), where \(A_{n}, A\in {\mathcal{F}}\), \(n\ge 1\) (see Lemma 2.1 in Chen, Wu and Li [5]). Theorem 3.2 shows that these properties also hold for \(\mathbb{E}\) and \(\mathcal{E}\).

Definition 3.1

(Definition 3 in Denis, Hu and Peng [7])

A set B is a polar set if \(V (B)=0\) and a property holds quasi-surely if it holds outside a polar set.

Theorem 3.2

(The monotone convergence theorem)

  (i) If random variables \(X_{n} \uparrow X\) and there exists a constant c such that \(X_{n}\ge c\) quasi-surely for all \(n\ge 1\), then \(\mathbb{E}[X_{n}] \uparrow \mathbb{E}[X]\).

  (ii) If random variables \(Y_{n} \downarrow Y\) and there exists a constant c such that \(Y_{n}\le c\) quasi-surely for all \(n\ge 1\), then \(\mathcal{E}[Y_{n}]\downarrow \mathcal{E}[Y]\).

Proof

It is easy to see that (i) is equivalent to (ii). So we only prove (i). By the monotonicity of \(\mathbb{E}\), we know that \(\{\mathbb{E}[X_{n}]\}_{n\ge 1}\) is nondecreasing and \(\mathbb{E}[X]\ge \lim_{n\to \infty }\mathbb{E}[X_{n}]\). On the other hand, it follows from the classical monotone convergence theorem that

$$ \mathbb{E}[X]=\sup_{P\in \mathcal{P}}E_{P}[X]=\sup _{P\in \mathcal{P}} \lim_{n\to \infty }E_{P}[X_{n}] \le \lim_{n\to \infty }\mathbb{E}[X _{n}]. $$

Therefore (i) is proved. □

Let φ be a positive function satisfying

$$ \sum_{n=1}^{\infty } \frac{\varphi (n)}{n^{2}}< \infty \quad \mbox{and}\quad 0< \varphi (x)\uparrow \infty \quad \mbox{on } [c,\infty ) \mbox{ for some } c>0. $$
(8)

For convenience, we define \(\frac{1}{0}=\infty \), \(\frac{1}{\infty }=0\) and \(\varphi (\infty )=\infty \).

Remark 3.3

For any \(0<\delta <1\) and \(\alpha \in \mathbb{R}\), functions \(|x|^{\delta }\) and \(|x|^{\delta }(\log |x|)^{\alpha }\) satisfy (8).
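For instance, for \(\varphi (x)=|x|^{\delta }(\log |x|)^{\alpha }\) with the illustrative values \(\delta =1/2\) and \(\alpha =1\), the two requirements in (8) can be checked numerically:

```python
import numpy as np

# Numerical check of (8) for phi(x) = |x|^delta (log|x|)^alpha,
# illustrative values delta = 0.5, alpha = 1.
delta, alpha = 0.5, 1.0
n = np.arange(2.0, 1_000_001.0)
phi = n ** delta * np.log(n) ** alpha
print("partial sum of phi(n)/n^2:", phi @ (1.0 / n ** 2))   # converges; tail beyond 10^6 is negligible
print("phi increasing on the tail:", bool(np.all(np.diff(phi) > 0)))
```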

Lemma 3.4

(Lemma 1 in Sung, Hu and Volodin [17])

Let \(\{a_{n}\}_{n\ge 1}\) be a sequence of non-negative real numbers such that \(a_{n}>0\) for infinitely many n. Let \(v_{n}=\sum_{i=n}^{\infty } a_{i}\) for \(n\in \mathbb{N}^{*}\) and φ be a positive function satisfying (8). If \(\sum_{n=1}^{\infty }a_{n}<\infty \), then \(\sum_{n=1}^{\infty } a_{n}\varphi (\frac{1}{v_{n}} )< \infty \).

Proposition 3.5

Let \(\{b_{n}\}_{n\ge 1}\) be a nondecreasing unbounded sequence of positive numbers, \(\{\alpha _{n}\}_{n\ge 1}\) be a sequence of non-negative real numbers, and φ be a positive function satisfying (8). Let r be any fixed positive number and for \(n\in \mathbb{N}^{*}\),

$$ v_{n}=\sum_{i=n}^{\infty }\alpha _{i} b_{i}^{-r}\quad \textit{and}\quad \beta _{n}= \max_{1\le i\le n} b_{i}\varphi \biggl(\frac{1}{v_{i}} \biggr) ^{-1/r}. $$

If \(\sum_{n=1}^{\infty }\alpha _{n} b_{n}^{-r}<\infty \), then \(\sum_{n=1}^{\infty }\alpha _{n} \beta _{n}^{-r}<\infty \) and \(\lim_{n\to \infty }\frac{\beta _{n}}{b_{n}}=0\).

Proof

First, consider the case where there exists an integer m such that \(\alpha _{n}=0\) for all \(n\ge m\); then \(\beta _{n}= \max_{1\le i\le m}b_{i}\varphi (\frac{1}{v_{i}} )^{-1/r}\) for \(n\ge m\). Obviously, \(\sum_{n=1}^{\infty }\alpha _{n} \beta _{n}^{-r}<\infty \), and we get \(\lim_{n\to \infty }\frac{\beta _{n}}{b_{n}}=0\) from \(\lim_{n\to \infty }b_{n}=\infty \).

Second, consider the case where \(\alpha _{n}>0\) for infinitely many n. It is easy to see that \(\beta _{n}\le \beta _{n+1}\) and \(\beta _{n}\ge b_{n}\varphi (\frac{1}{v_{n}})^{-1/r}\) for \(n\in \mathbb{N}^{*}\). Then, by Lemma 3.4, we have

$$ \sum_{n=1}^{\infty }\alpha _{n} \beta _{n}^{-r}\le \sum_{n=1}^{\infty } \alpha _{n}b_{n}^{-r}\varphi \biggl( \frac{1}{v_{n}} \biggr)< \infty . $$

Therefore, for any \(k\le n\), we have

$$\begin{aligned} \frac{\beta _{n}}{b_{n}} \le & \frac{\max_{1\le i\le k}b_{i}\varphi (\frac{1}{v_{i}} )^{-1/r}}{b_{n}}+\frac{\max_{k\le i \le n}b_{i}\varphi (\frac{1}{v_{i}} )^{-1/r}}{b_{n}} \\ \le & \frac{\max_{1\le i\le k}b_{i}\varphi (\frac{1}{v_{i}} ) ^{-1/r}}{b_{n}}+\varphi \biggl(\frac{1}{v_{k}} \biggr)^{-1/r}. \end{aligned}$$

Letting \(n\to \infty \), we get \(\limsup_{n\to \infty }\frac{\beta _{n}}{b_{n}}\le \varphi (\frac{1}{v_{k}} )^{-1/r}\), since \(\lim_{n\to \infty }b_{n}=\infty \). Then letting \(k\to \infty \), we get \(\limsup_{n\to \infty }\frac{\beta _{n}}{b_{n}}\le 0\), since \(\lim_{n\to \infty }v_{n}=0\) and \(\varphi (x)\uparrow \infty \) as \(x\uparrow \infty \). Hence \(\lim_{n\to \infty } \frac{\beta _{n}}{b_{n}}=0\). The proof of this proposition is completed. □
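A concrete instance may help fix the orders of magnitude. With the (arbitrary) choices \(b_{n}=n\), \(\alpha _{n}=1\), \(r=2\) and \(\varphi (x)=\sqrt{x}\), one has \(v_{n}\approx 1/n\) and \(\beta _{n}\approx n^{3/4}\), so \(\beta _{n}/b_{n}\approx n^{-1/4}\to 0\). The sketch below computes these quantities, truncating the tail sum at N, which only perturbs \(v_{n}\) near N.

```python
import numpy as np

# Concrete illustration of Proposition 3.5: b_n = n, alpha_n = 1, r = 2,
# phi(x) = sqrt(x).  The tail sum defining v_n is truncated at N.
N, r = 100_000, 2.0
n = np.arange(1.0, N + 1.0)
b, alpha = n, np.ones(N)
v = np.cumsum((alpha / b ** r)[::-1])[::-1]     # v_n = sum_{i >= n} alpha_i / b_i^r
beta = np.maximum.accumulate(b * np.sqrt(1.0 / v) ** (-1.0 / r))

print("sum alpha_n / beta_n^r =", float(alpha @ (1.0 / beta ** r)))   # finite
print("beta_n / b_n at n = 10^2, 10^3, 10^4, 10^5:",
      [round(float(beta[k - 1] / b[k - 1]), 4) for k in (100, 1000, 10_000, 100_000)])
```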

Theorems 3.6, 3.8 and 3.9 give several strong laws of large numbers and the growth rate of the partial sums for general random variables in sublinear expectation spaces.

Theorem 3.6

Let \(\{b_{n}\}_{n\ge 1}\) be a nondecreasing unbounded sequence of positive numbers and \(\{\alpha _{n}\}_{n\ge 1}\) be a sequence of non-negative numbers. Let r and c be any fixed positive numbers and φ be any positive function satisfying (8). For \(n\in \mathbb{N}^{*}\), let

$$ v_{n}=\sum_{i=n}^{\infty }\alpha _{i} b_{i}^{-r} \quad \textit{and} \quad \beta _{n}= \max_{1\le i\le n} b_{i}\varphi \biggl(\frac{1}{v_{i}} \biggr) ^{-1/r}. $$

If for each \(n\in \mathbb{N}^{*}\),

$$ \mathbb{E} \Bigl[\max_{1\le i\le n} \vert S_{i} \vert ^{r} \Bigr] \le c\sum _{i=1}^{n} \alpha _{i} $$
(9)

and \(\sum_{n=1}^{\infty }\alpha _{n} b_{n}^{-r}<\infty \), then \(\lim_{n\to \infty }\frac{S_{n}}{b_{n}}=0\) quasi-surely with the convergence rate \(\frac{S_{n}}{b_{n}}= O(\frac{\beta _{n}}{b_{n}}) \) quasi-surely.

Proof

Step 1. First, consider the case where there exists an integer m such that \(\alpha _{n}=0\) for all \(n\ge m\); then \(\beta _{n}=\max_{1\le i\le m}b_{i}\varphi (\frac{1}{v_{i}} )^{-1/r}\) for \(n\ge m\).

In this case, the sequence \(\{\beta _{n}\}_{n\ge 1}\) is bounded and it follows from the monotone convergence theorem (Theorem 3.2) that

$$ \mathbb{E} \Bigl[\sup_{n\ge 1} \vert S_{n} \vert ^{r} \Bigr]=\lim_{n\to \infty } \mathbb{E} \Bigl[\max _{1\le i\le n} \vert S_{i} \vert ^{r} \Bigr]\le c\sum_{i=1} ^{\infty }\alpha _{i}< \infty , $$

thus \(\sup_{n\ge 1}|S_{n}|<\infty \) quasi-surely.

Therefore \(\lim_{n\to \infty }\frac{S_{n}}{b_{n}}=0\) quasi-surely, since \(\lim_{n\to \infty }b_{n}=\infty \). Moreover, since \(\frac{S_{n}}{b_{n}}=\frac{S_{n}}{\beta _{n}}\frac{\beta _{n}}{b_{n}}\) and \(\beta _{n}\ge \beta _{1}>0\), we obtain the convergence rate \(\frac{S_{n}}{b_{n}}= O (\frac{\beta _{n}}{b_{n}} ) \) quasi-surely.

Step 2. Now consider the case where \(\alpha _{n}>0\) for infinitely many n. By Proposition 3.5, we have \(\sum_{n=1}^{\infty }\alpha _{n}\beta _{n}^{-r}<\infty \) and \(\lim_{n\to \infty }\beta _{n} b_{n}^{-1}=0\). It follows from Theorem 2.1 that

$$ \mathbb{E} \biggl[\max_{1\le l\le n} \biggl\vert \frac{S_{l}}{\beta _{l}} \biggr\vert ^{r} \biggr]\le 4c\sum_{l=1}^{n} \frac{\alpha _{l}}{\beta _{l}^{r}} \le 4c\sum_{l=1}^{\infty } \frac{\alpha _{l}}{\beta _{l}^{r}}< \infty . $$

By the monotone convergence theorem (Theorem 3.2), we have

$$ \mathbb{E} \biggl[\sup_{n\ge 1} \biggl\vert \frac{S_{n}}{\beta _{n}} \biggr\vert ^{r} \biggr]=\lim_{n\to \infty }\mathbb{E} \biggl[\max_{1\le l\le n} \biggl\vert \frac{S_{l}}{\beta _{l}} \biggr\vert ^{r} \biggr] \le 4c\sum_{l=1} ^{\infty }\frac{\alpha _{l}}{\beta _{l}^{r}}< \infty . $$

So \(\sup_{n\ge 1} \vert \frac{S_{n}}{\beta _{n}} \vert <\infty \) quasi-surely and

$$ 0\le \biggl\vert \frac{S_{n}}{b_{n}} \biggr\vert \le \biggl\vert \frac{\beta _{n}}{b _{n}} \biggr\vert \sup_{n\ge 1} \biggl\vert \frac{S_{n}}{\beta _{n}} \biggr\vert . $$

Hence, \(\lim_{n\to \infty }\frac{S_{n}}{b_{n}}=0\) quasi-surely and \(\frac{S_{n}}{b_{n}}=O (\frac{\beta _{n}}{b_{n}} )\) quasi-surely. The proof of this theorem is completed. □

Proposition 3.7

Under the assumptions of Theorem 3.6, the following statements hold:

  (i) if the sequence \(\{\beta _{n}\}_{n\ge 1}\) is bounded, then \(\frac{S_{n}}{\beta _{n}}=O(1)\) quasi-surely;

  (ii) if the sequence \(\{\beta _{n}\}_{n\ge 1}\) is unbounded, then \(\frac{S_{n}}{\beta _{n}}=o(1)\) quasi-surely.

Proof

(i) Assume that \(\{\beta _{n}\}_{n\ge 1}\) is bounded by a constant \(D>0\). Then

$$ \sum_{n=1}^{\infty }\alpha _{n}\le D^{r}\sum_{n=1}^{\infty }\alpha _{n} \beta _{n}^{-r}< \infty . $$

It follows from the monotone convergence theorem (Theorem 3.2) that

$$ \mathbb{E} \Bigl[\sup_{n\ge 1} \vert S_{n} \vert ^{r} \Bigr]=\lim_{n\to \infty } \mathbb{E} \Bigl[\max _{1\le l\le n} \vert S_{l} \vert ^{r} \Bigr]\le c\sum_{l=1} ^{\infty }\alpha _{l}< \infty . $$

Therefore \(\sup_{n\ge 1}|S_{n}|<\infty \) quasi-surely. Since \(\{\beta _{n}\}_{n\ge 1}\) is bounded, we obtain \(\frac{S_{n}}{\beta _{n}}=O(1)\) quasi-surely.

(ii) We turn to the case where \(\{\beta _{n}\}_{n\ge 1}\) is unbounded. Now \(\{\beta _{n}\}_{n\ge 1}\) is a nondecreasing unbounded sequence of positive numbers and, by Proposition 3.5, \(\sum_{n=1}^{\infty }\alpha _{n}\beta _{n}^{-r}<\infty \). Then, applying Theorem 3.6 with \(\{\beta _{n}\}_{n\ge 1}\) in place of \(\{b_{n}\}_{n\ge 1}\), we get \(\lim_{n\to \infty }\frac{S_{n}}{\beta _{n}}=0\) quasi-surely, that is, \(\frac{S_{n}}{\beta _{n}}=o(1)\) quasi-surely. □

Theorem 3.8

Under the assumptions of Theorem 3.6, the following statements hold:

  (i) if the sequence \(\{\beta _{n}\}_{n\ge 1}\) is bounded, then \(\frac{S_{n}}{b_{n}}=O(\frac{\beta _{n}}{b_{n}}) \) quasi-surely;

  (ii) if the sequence \(\{\beta _{n}\}_{n\ge 1}\) is unbounded, then \(\frac{S_{n}}{b_{n}}=o(\frac{\beta _{n}}{b_{n}}) \) quasi-surely.

Proof

Since \(\frac{S_{n}}{b_{n}}=\frac{S_{n}}{\beta _{n}}\frac{\beta _{n}}{b_{n}}\), these two statements follow easily from Theorem 3.6 and Proposition 3.7. □

Theorem 3.9

Let \(\{b_{n}\}_{n\ge 1}\) be a nondecreasing unbounded sequence of positive numbers, \(\{\alpha _{n}\}_{n\ge 1}\) be a sequence of non-negative numbers, r be any fixed positive number and φ be any positive function satisfying (8). For \(n\in \mathbb{N}^{*}\), let

$$ v_{n}=\sum_{i=n}^{\infty }\alpha _{i} b_{i}^{-r} \quad \textit{and}\quad \beta _{n}= \max_{1\le i\le n} b_{i}\varphi \biggl(\frac{1}{v_{i}} \biggr) ^{-1/r}. $$

Assume that \(\sum_{n=1}^{\infty }\alpha _{n} b_{n}^{-r}<\infty \) and there exists a constant \(c>0\) such that for any \(n\in \mathbb{N}^{*}\) and any \(\epsilon >0\)

$$ V \Bigl(\max_{1\le i\le n} \vert S_{i} \vert \ge \epsilon \Bigr) \le c\epsilon ^{-r} \sum _{i=1}^{n} \alpha _{i}. $$
(10)

Then the following statements hold:

  (i) \(\lim_{n\to \infty }\frac{S_{n}}{b_{n}}=0\) quasi-surely and \(\frac{S_{n}}{b_{n}}= O (\frac{\beta _{n}}{b_{n}} ) \) quasi-surely;

  (ii) if the sequence \(\{\beta _{n}\}_{n\ge 1}\) is bounded, then \(\frac{S_{n}}{\beta _{n}}=O(1)\) quasi-surely and \(\frac{S_{n}}{b_{n}}=O (\frac{\beta _{n}}{b_{n}} ) \) quasi-surely;

  (iii) if the sequence \(\{\beta _{n}\}_{n\ge 1}\) is unbounded, then \(\frac{S_{n}}{\beta _{n}}=o(1)\) quasi-surely and \(\frac{S_{n}}{b_{n}}=o (\frac{\beta _{n}}{b_{n}} ) \) quasi-surely.

Proof

(i) By Theorem 2.2, we have for any \(n\in \mathbb{N}^{*}\) and any \(\epsilon >0\)

$$ V \biggl(\max_{1\le i\le n} \biggl\vert \frac{S_{i}}{\beta _{i}} \biggr\vert \ge \epsilon \biggr)\le 4c\epsilon ^{-r}\sum _{i=1}^{n}\frac{\alpha _{i}}{\beta _{i}^{r}}. $$

So for any \(k\in \mathbb{N}^{*}\), it follows from the inner continuity of V that

$$ V \biggl(\sup_{n\ge 1} \biggl\vert \frac{S_{n}}{\beta _{n}} \biggr\vert >k \biggr) \le \lim_{n\to \infty }V \biggl(\max _{1\le i \le n} \biggl\vert \frac{S_{i}}{ \beta _{i}} \biggr\vert \ge k \biggr)\le 4ck^{-r} \sum_{i=1}^{\infty } \frac{ \alpha _{i}}{\beta _{i}^{r}}. $$

From Proposition 3.5, we have \(\sum_{i=1}^{\infty } \alpha _{i} \beta _{i}^{-r}<\infty \). Therefore

$$ \lim_{k\to \infty }V \biggl(\sup_{n\ge 1} \biggl\vert \frac{S_{n}}{\beta _{n}} \biggr\vert >k \biggr)=0. $$

By the monotonicity of V, we have

$$ V \biggl(\sup_{n\ge 1} \biggl\vert \frac{S_{n}}{\beta _{n}} \biggr\vert =\infty \biggr) =V \Biggl(\bigcap_{k=1}^{\infty } \biggl\{ \sup_{n\ge 1} \biggl\vert \frac{S _{n}}{\beta _{n}} \biggr\vert >k \biggr\} \Biggr)\le \lim_{k\to \infty }V \biggl(\sup _{n\ge 1} \biggl\vert \frac{S_{n}}{\beta _{n}} \biggr\vert >k \biggr)=0. $$

Consequently \(\sup_{n\ge 1}|S_{n}|/\beta _{n} <\infty \) quasi-surely. Due to

$$ \frac{ \vert S_{n} \vert }{b_{n}}=\frac{ \vert S_{n} \vert }{\beta _{n}}\frac{\beta _{n}}{b_{n}} $$

and Proposition 3.5, we have \(\lim_{n\to \infty }\frac{S_{n}}{b _{n}}=0\) and \(\frac{S_{n}}{b_{n}}=O (\frac{\beta _{n}}{b_{n}} ) \) quasi-surely.

(ii) If the sequence \(\{\beta _{n}\}_{n\ge 1}\) is bounded by a constant \(D>0\), then

$$ \sum_{n=1}^{\infty }\alpha _{n}\le D^{r}\sum_{n=1}^{\infty }\alpha _{n} \beta _{n}^{-r}< \infty . $$

So for any \(k\in \mathbb{N}^{*}\), it follows from the inner continuity of V and (10) that

$$ V \Bigl(\sup_{n\ge 1} \vert S_{n} \vert >k \Bigr) \le \lim_{n\to \infty }V \Bigl(\max_{1\le i \le n} \vert S_{i} \vert \ge k \Bigr) \le ck^{-r} \sum _{i=1}^{\infty } \alpha _{i}. $$

Therefore

$$ \lim_{k\to \infty }V \Bigl(\sup_{n\ge 1} \vert S_{n} \vert >k \Bigr)=0. $$

By the monotonicity of V, we have

$$ V \Bigl(\sup_{n\ge 1} \vert S_{n} \vert =\infty \Bigr)=V \Biggl(\bigcap_{k=1}^{\infty } \Bigl\{ \sup_{n\ge 1} \vert S_{n} \vert >k \Bigr\} \Biggr) \le \lim_{k\to \infty }V \Bigl(\sup_{n\ge 1} \vert S_{n} \vert >k \Bigr)=0. $$

Consequently \(\sup_{n\ge 1}|S_{n}|<\infty \) quasi-surely. Hence, \(\frac{S_{n}}{\beta _{n}}=O(1)\) quasi-surely and \(\frac{S_{n}}{b_{n}}=O (\frac{\beta _{n}}{b_{n}} ) \) quasi-surely.

(iii) If the sequence \(\{\beta _{n}\}_{n\ge 1}\) is unbounded, then, by Proposition 3.5 we have

$$ \sum_{n=1}^{\infty }\alpha _{n}\beta _{n}^{-r}< \infty . $$

Applying (i) of this theorem with \(\{\beta _{n}\}_{n\ge 1}\) in place of \(\{b_{n}\}_{n\ge 1}\), we get \(\lim_{n\to \infty }S_{n}/\beta _{n}=0\) quasi-surely, and thus \(S_{n}/b_{n}=o(\beta _{n}/b_{n}) \) quasi-surely. □

The result of Theorem 3.8 is more precise than that of Theorem 3.6. When the sublinear expectation space degenerates to the classical probability space, Theorems 3.8 and 3.9 give more precise results than Theorem 2.1 in Fazekas and Klesov [9], Lemma 1.2 in Hu and Hu [10], Theorem 3.4 in Tómács [19] and Theorem 2.4 in Tómács and Líbor [20].

Theorem 3.10

(Strong law of large numbers for negatively dependent random variables)

Let \(1\le p\le 2\), \(\{X_{n}\}_{n\ge 1}\) be a sequence of random variables in sublinear expectation space \((\varOmega ,\mathscr{H}, \mathbb{E})\) with \(\mathbb{E}[X_{n}]=\mathcal{E}[X_{n}]=0\) for all \(n\in \mathbb{N}^{*}\). If \(X_{k}\) is negatively dependent on \((X_{k+1},\ldots ,X_{k+n})\) for all \(n,k\in \mathbb{N}^{*}\) and \(\{b_{n}\}_{n\ge 1}\) is a nondecreasing unbounded sequence of positive numbers with

$$ \sum_{n=1}^{\infty }\frac{\mathbb{E}[ \vert X_{n} \vert ^{p}]}{b_{n}^{p}}< \infty , $$

then

$$ \lim_{n\to \infty }\frac{S_{n}}{b_{n}}=0 \quad \textit{quasi-surely} $$

with the convergence rate \(\frac{S_{n}}{b_{n}}=O(\frac{\beta _{n}}{b_{n}})\) when \(\{\beta _{n}\}_{n\ge 1} \) is bounded, and \(\frac{S_{n}}{b_{n}}=o(\frac{\beta _{n}}{b_{n}})\) when \(\{\beta _{n}\}_{n\ge 1} \) is unbounded, where, for \(n\in \mathbb{N}^{*}\),

$$ \beta _{n}=\max_{1\le i\le n} b_{i}\varphi \biggl( \frac{1}{v_{i}} \biggr) ^{-1/p},\qquad v_{n}=\sum _{i=n}^{\infty }\mathbb{E}\bigl[ \vert X_{i} \vert ^{p}\bigr] b_{i}^{-p} $$

and φ is any positive function satisfying (8).

Proof

Set \(\alpha _{k}=\mathbb{E}[|X_{k}|^{p}]\) for all \(k\in \mathbb{N}^{*}\), then, by Theorem 2.6, we have

$$ \mathbb{E} \Bigl[\max_{1\le k\le n} \vert S_{k} \vert ^{p} \Bigr] \le 2^{3-p}\sum_{k=1}^{n} \alpha _{k},\quad n\ge 1. $$

Due to

$$ \sum_{n=1}^{\infty }\frac{\alpha _{n}}{b_{n}^{p}}=\sum _{n=1}^{\infty }\frac{ \mathbb{E}[ \vert X_{n} \vert ^{p}]}{b_{n}^{p}}< \infty , $$

we can deduce this theorem from Theorems 3.6 and 3.8. □
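The following simulation sketch illustrates Theorem 3.10 in the degenerate classical case of a singleton \(\mathcal{P}\), where quasi-surely means almost surely. The concrete choices (\(b_{n}=n\), \(p=2\), \(\varphi (x)=\sqrt{x}\), uniform variables of variance 1) are ours; as in the example after Proposition 3.5, they give the rate \(\beta _{n}/b_{n}\approx n^{-1/4}\).

```python
import numpy as np

# Simulation sketch of Theorem 3.10 in the classical i.i.d. case:
# b_n = n, p = 2, phi(x) = sqrt(x), so beta_n/b_n is about n^{-1/4}.
rng = np.random.default_rng(4)
N = 200_000
X = rng.uniform(-np.sqrt(3), np.sqrt(3), N)     # mean 0, variance 1
S = np.cumsum(X)
n = np.arange(1.0, N + 1.0)

ratio = np.abs(S / n) / n ** -0.25              # |S_n/b_n| over the predicted rate
print("sup of |S_n/b_n| / n^{-1/4} over n >= 100:", float(ratio[99:].max()))
```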

4 An application to the logarithmically weighted sums

By applying Theorems 3.6 and 3.8 to logarithmically weighted sums, we obtain Theorem 4.2, which sharpens the results of Theorem 8.1 in Fazekas and Klesov [9] and Theorem 2.5 in Hu and Hu [10] under the same conditions in classical probability theory and extends them to the sublinear expectation space. Some of our ideas for obtaining Theorem 4.2 come from these papers.

Lemma 4.1

(Lemma 8.1 in Fazekas and Klesov [9])

Set \(g(i,j)=\sum_{k=i}^{j}\frac{1}{k}\) for \(i\le j\). Then, for any \(0<\beta <1\) and \(1<\gamma <2\), we have

$$ \sum_{k=i}^{j}\sum _{l=i}^{k}\frac{1}{k^{1+\beta }}\frac{1}{l^{1- \beta }}\le \frac{2}{\beta }g^{\gamma }(i,j). $$
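The constant \(2/\beta \) in Lemma 4.1 leaves quite a bit of room, as a quick numerical check confirms; the values of i, j, β, γ below are arbitrary.

```python
import numpy as np

# Numerical sanity check of Lemma 4.1 for i = 5, j = 2000, beta = 0.5, gamma = 1.5.
i, j, beta, gamma = 5, 2000, 0.5, 1.5
k = np.arange(i, j + 1, dtype=float)
inner = np.cumsum(k ** (beta - 1.0))            # sum_{l=i}^{k} 1/l^{1-beta}
lhs = float(np.sum(k ** (-1.0 - beta) * inner))
g = float(np.sum(1.0 / k))                      # g(i,j)
rhs = (2.0 / beta) * g ** gamma
print(f"lhs = {lhs:.4f} <= rhs = {rhs:.4f}: {lhs <= rhs}")
```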

Theorem 4.2

Let \(\{X_{n}\}_{n\ge 1}\) satisfy the following condition (11):

$$\begin{aligned}& \textit{there exist constants } \beta >0, c>0 \textit{ such that} \\& \quad \bigl\vert \mathbb{E}\bigl[\bigl(X_{k}-\mathbb{E}[X_{k}] \bigr) \bigl(X_{l}-\mathbb{E}[X_{l}]\bigr)\bigr] \bigr\vert \le c \biggl(\frac{l}{k} \biggr)^{\beta } ,\quad 1\le l\le k. \end{aligned}$$
(11)

Then

$$ \lim_{n\to \infty }\frac{1}{\log n}\sum _{k=1}^{n}\frac{X_{k}- \mathbb{E}[X_{k}]}{k}=0 \quad \textit{quasi-surely} $$
(12)

and for any \(\epsilon \in (0,1/2)\),

$$ \frac{1}{\log n}\sum_{k=1}^{n} \frac{X_{k}-\mathbb{E}[X_{k}]}{k}=o \biggl(\frac{1}{(\log n)^{\epsilon }} \biggr) \quad \textit{quasi-surely}. $$

Proof

Without loss of generality, we may assume \(0<\beta <1\). Using assumption (11) and Lemma 4.1, we have, for all \(i< j\) and \(1<\gamma <2\),

$$ \mathbb{E} \Biggl[ \Biggl(\sum_{k=i}^{j} \frac{X_{k}-\mathbb{E}[X_{k}]}{k} \Biggr)^{2} \Biggr]\le 2c\sum _{k=i} ^{j}\sum_{l=i}^{k} \frac{1}{k^{1+\beta }}\frac{1}{l^{1-\beta }}\le \frac{4c}{ \beta } \Biggl(\sum _{k=i}^{j}\frac{1}{k} \Biggr)^{\gamma }. $$

By Theorem 1 in Longnecker and Serfling [11], we get, for every \(P\in \mathcal{P}\),

$$ E_{P} \Biggl[\max_{1\le i\le n} \Biggl\vert \sum _{k=1}^{i}\frac{X_{k}- \mathbb{E}[X_{k}]}{k} \Biggr\vert ^{2} \Biggr]\le A_{2,\gamma }\frac{4c}{ \beta } \Biggl(\sum _{k=1}^{n}\frac{1}{k} \Biggr)^{\gamma }, $$

then

$$ \mathbb{E} \Biggl[\max_{1\le i\le n} \Biggl\vert \sum _{k=1}^{i}\frac{X_{k}- \mathbb{E}[X_{k}]}{k} \Biggr\vert ^{2} \Biggr]\le A_{2,\gamma }\frac{4c}{ \beta } \Biggl(\sum _{k=1}^{n}\frac{1}{k} \Biggr)^{\gamma }, $$
(13)

where \(A_{2,\gamma }\) is the constant defined in Theorem 1 of Longnecker and Serfling [11]. Since \(\sum_{k=1}^{n} 1/k=O(\log n)\) as \(n\to \infty \), we denote

$$ \alpha _{n}=\bigl(\log (n+1)\bigr)^{\gamma }-(\log n)^{\gamma } $$

and then by (13) we have for all \(n\ge 1\)

$$ \mathbb{E} \Biggl[\max_{1\le i\le n} \Biggl\vert \sum _{k=1}^{i}\frac{X_{k}- \mathbb{E}[X_{k}]}{k} \Biggr\vert ^{2} \Biggr]\le c_{1}\sum_{k=1}^{n} \alpha _{k}, $$

where \(c_{1}\) is a positive constant. Since \(\lim_{n\to \infty } \gamma n^{-1}(\log n)^{\gamma -1}/\alpha _{n}=1\), there exist \(n_{0}\ge 3\), \(c_{2}>0\) and \(c_{3}>0\) such that, for all \(n\ge n_{0}\),

$$ c_{2}n^{-1}(\log n)^{\gamma -1}\le \alpha _{n} \le c_{3}n^{-1}(\log n)^{ \gamma -1}. $$

Take \(b_{n}=\log (n\vee 2)\), then

$$ v_{1}=\sum_{n=1}^{\infty }\alpha _{n} b_{n}^{-2}< \infty . $$

Hence by Theorem 3.6, (12) holds.

Meanwhile, for \(n\ge n_{0}\), we have

$$ \frac{c_{2}}{2-\gamma }(\log n)^{\gamma -2} \le v_{n}=\sum _{i=n}^{ \infty }\alpha _{i} b_{i}^{-2} \le \frac{c_{3}}{2-\gamma }\bigl(\log (n-1)\bigr)^{ \gamma -2}. $$

Therefore, for any \(\delta \in (0,1)\), \(n\ge n_{0}\),

$$ \beta _{n}=\max_{1\le i\le n}b_{i} v_{i}^{\delta /2}\ge \biggl(\frac{c _{2}}{2-\gamma } \biggr)^{\delta /2}( \log n)^{1-(2-\gamma )\delta /2} , $$

which implies that \(\{\beta _{n}\}_{n\ge 1}\) is unbounded, since \(1-(2-\gamma )\delta /2\in (1/2,1)\). Therefore, by Theorem 3.8, the convergence rate of \(\frac{1}{\log n}\sum_{k=1}^{n}\frac{X_{k}-\mathbb{E}[X_{k}]}{k}\) is \(o(\beta _{n}/b_{n}) \) quasi-surely.

On the other hand, for \(n\ge n_{0}\), we have

$$\begin{aligned} \beta _{n} =&\max_{1\le i\le n}b_{i} v_{i}^{\delta /2}=\max \Bigl\{ \beta _{n_{0}}, \max _{n_{0}\le i\le n}b_{i} v_{i}^{\delta /2} \Bigr\} \\ \le & \max \biggl\{ \beta _{n_{0}}, \biggl(\frac{c_{3}}{2-\gamma } \biggr) ^{\delta /2}(\log n) \bigl(\log (n-1) \bigr)^{(\gamma -2)\delta /2} \biggr\} \\ \le & \max \biggl\{ \beta _{n_{0}}, \biggl(\frac{c_{3}}{2-\gamma } \biggr) ^{\delta /2} \biggl(\frac{\log 3}{\log 2} \biggr)^{(2-\gamma )\delta /2}( \log n)^{1-(2-\gamma )\delta /2} \biggr\} . \end{aligned}$$

Since \(\{\beta _{n}\}_{n\ge 1}\) is unbounded, for sufficiently large n we have

$$ \frac{\beta _{n}}{b_{n}}\le c_{4}(\log n)^{-(2-\gamma )\delta /2}, $$

where \(c_{4}= (\frac{c_{3}}{2-\gamma } )^{\delta /2} (\frac{\log 3}{\log 2} )^{(2-\gamma )\delta /2}\). Let \(\epsilon =(2-\gamma )\delta /2\); then ϵ can take any value in \((0,1/2)\), since δ is an arbitrary number in \((0,1)\) and γ is an arbitrary number in \((1,2)\), and \(\frac{1}{\log n} \sum_{k=1}^{n}\frac{X_{k}-\mathbb{E}[X_{k}]}{k}=o (\frac{1}{(\log n)^{\epsilon }} ) \) quasi-surely. □
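Here is a simulation sketch of Theorem 4.2 in the classical i.i.d. case (our illustration): i.i.d. standard normals satisfy (11) with \(c=1\) and any \(\beta >0\), since the covariances vanish for \(l<k\).

```python
import numpy as np

# Simulation of Theorem 4.2 for i.i.d. standard normals:
# T_n = (1/log n) * sum_{k<=n} X_k / k should go to 0 like o((log n)^{-eps}).
rng = np.random.default_rng(5)
N = 1_000_000
X = rng.standard_normal(N)
n = np.arange(1.0, N + 1.0)
T = np.cumsum(X / n) / np.log(n + 1.0)

for m in (10_000, 100_000, 1_000_000):
    print(f"n={m}: T_n = {T[m - 1]:+.5f},  (log n)^(-0.4) = {np.log(m) ** -0.4:.5f}")
```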

Throughout the rest of this section, the sublinear expectation spaces and the sequence of random variables \(\{X_{n}\}_{n\ge 1}\) are further assumed to satisfy the following two assumptions.

Assumption 1

The sublinear expectation spaces \((\varOmega ,\mathscr{H},\mathbb{E})\) and \((\widetilde{\varOmega },\widetilde{\mathscr{H}},\widetilde{\mathbb{E}})\) satisfy the following continuity property: for all \(X\in \mathscr{H}\) (respectively \(X\in \widetilde{\mathscr{H}}\)) and all \(f_{n}\in C_{b,\mathrm{Lip}}(\mathbb{R})\) with \(f_{n}\downarrow 0\), we have \(\mathbb{E}[f_{n}(X)]\downarrow 0\) (respectively \(\widetilde{\mathbb{E}}[f_{n}(X)]\downarrow 0\)).

Assumption 2

Fix a constant \(\lambda \geq 1\). The sequence \(\{X_{n}\}_{n\ge 1}\) is a sequence of independent random variables in a fixed sublinear expectation space \((\varOmega , \mathscr{H}, \mathbb{E})\) with \(\mathbb{E}[X_{n}]=\mathcal{E}[X_{n}]=0\), \(\overline{\sigma }_{n}=\sqrt{\mathbb{E}[X_{n}^{2}]}\) and \(\underline{\sigma }_{n}=\sqrt{\mathcal{E}[X_{n}^{2}]}\) for \(n\geq 1\), and \(0< \inf_{n\geq 1}\underline{\sigma }_{n}^{2} \leq \sup_{n\geq 1}\overline{\sigma }_{n}^{2} < \infty \). For \(n\ge 1\), denote \(S_{0}=0\), \(S_{n}= \sum_{i=1}^{n} X_{i}\), \(\sigma _{n}:= \frac{\underline{\sigma }_{n} +\overline{\sigma }_{n}}{2}\), \(\lambda _{n}:= \frac{\overline{\sigma }_{n}}{\underline{\sigma }_{n}} \equiv \lambda \), \(B_{n}:= \sqrt{\sum_{i=1}^{n} \sigma _{i}^{2}}\) and \(W_{n}:= \frac{S_{n}}{B_{n}}\).

A random variable ξ is G-normally distributed (denoted by \(\xi \sim N(0, [\underline{\sigma }^{2}, \overline{\sigma }^{2}])\)) under a sublinear expectation \(\widetilde{\mathbb{E}}\) if and only if, for any \(f \in C_{b,\mathrm{Lip}}(\mathbb{R})\), the function \(u(t,x)=\widetilde{\mathbb{E}} [f (x+\sqrt{t} \xi ) ]\) (\(x\in \mathbb{R}\), \(t\ge 0\)) is the unique viscosity solution of the following G-heat equation:

$$ \begin{cases} \partial _{t} u -G ( \partial _{xx}^{2} u ) =0, \quad (x,t)\in \mathbb{R}\times (0,\infty ), \\ u(0,x)=f(x), \end{cases} $$

where \(G(a)=\frac{1}{2}\widetilde{\mathbb{E}}[a\xi ^{2}]\), \(a\in \mathbb{R}\), is determined by the variances \(\bar{\sigma }^{2}:= \widetilde{\mathbb{E}}[\xi ^{2}]\) and \(\underline{\sigma }^{2}:= \widetilde{\mathcal{E}}[\xi ^{2}]\). If \(\bar{\sigma }^{2}=\underline{\sigma }^{2}\), then the G-normal distribution is just the classical normal distribution \(N(0, \bar{\sigma }^{2})\).
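Since the G-heat equation characterizes \(\widetilde{\mathbb{E}}[f(\xi )]\), such expectations can be approximated by a standard explicit finite-difference scheme. The sketch below is our illustration: the grid, domain, boundary treatment and the test function \(f(x)=x^{2}\) are all our choices, the latter because for convex f one has \(\widetilde{\mathbb{E}}[f(\xi )]=E[f(\bar{\sigma } Z)]\) with Z standard normal, giving the reference value \(\bar{\sigma }^{2}\).

```python
import numpy as np

# Explicit finite-difference sketch for the G-heat equation
#   u_t = G(u_xx),  G(a) = (sigma_bar^2 * a^+ - sigma_low^2 * a^-) / 2,
# approximating u(1, 0) = E~[f(xi)] for xi ~ N(0, [sigma_low^2, sigma_bar^2]).
sigma_low, sigma_bar = 0.5, 1.0
f = lambda x: x ** 2                       # convex test function (reference: sigma_bar^2)

L, nx = 8.0, 801
x = np.linspace(-L, L, nx)
dx = x[1] - x[0]
nt = int(np.ceil(1.0 / (0.4 * dx ** 2 / sigma_bar ** 2)))   # explicit stability
dt = 1.0 / nt

u = f(x)
for _ in range(nt):
    uxx = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
    G = 0.5 * (sigma_bar ** 2 * np.maximum(uxx, 0.0)
               - sigma_low ** 2 * np.maximum(-uxx, 0.0))
    u = u + dt * G
    u[0], u[-1] = f(x[0]), f(x[-1])        # crude frozen boundary, fine far from 0

print("u(1,0) ~", u[nx // 2], " (reference", sigma_bar ** 2, "for f(x) = x^2)")
```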

Lemma 4.3

(Theorem 5.1 in Song [16])

Let \((\varOmega ,\mathscr{H},\mathbb{E})\) and \((\widetilde{\varOmega }, \widetilde{\mathscr{H}},\widetilde{\mathbb{E}})\) be sublinear expectation spaces satisfying Assumption 1 and \(\{X_{n}\}_{n\ge 1}\) be a sequence of random variables in \((\varOmega ,\mathscr{H},\mathbb{E})\) satisfying Assumption 2. Then there exist a constant \(\alpha \in (0,1)\), depending on λ, and a constant \(C_{\alpha ,\lambda }>0\), depending on α and λ, such that, for any \(n\ge 1\),

$$ \sup_{ \vert f \vert _{\mathrm{Lip}}\leq 1} \bigl\vert \mathbb{E} \bigl[ f ( W _{n} ) \bigr] -\widetilde{\mathbb{E}} \bigl[f( \xi ) \bigr] \bigr\vert \leq C_{\alpha ,\lambda } \sup_{1\leq i\leq n} \biggl\{ \frac{\mathbb{E}[ \vert X_{i} \vert ^{2+\alpha }]}{\sigma _{i}^{2+\alpha }} \biggl( \frac{\sigma _{i}}{B_{n}} \biggr)^{\alpha } \biggr\} , $$

where ξ is G-normally distributed under \(\widetilde{\mathbb{E}}\) with the fixed λ, \(\sqrt{\widetilde{\mathbb{E}}[\xi ^{2}] }= \frac{2\lambda }{1+\lambda }\) and \(\sqrt{\widetilde{\mathcal{E}}[\xi ^{2}] }= \frac{2 }{1+\lambda }\), and \(|f|_{\mathrm{Lip}}\) is the Lipschitz constant of f.

Theorem 4.4

Let \((\varOmega ,\mathscr{H},\mathbb{E})\) and \((\widetilde{\varOmega }, \widetilde{\mathscr{H}},\widetilde{\mathbb{E}})\) be sublinear expectation spaces satisfying Assumption 1 and \(\{X_{n}\} _{n\ge 1}\) be a sequence of random variables in \((\varOmega ,\mathscr{H}, \mathbb{E})\) satisfying Assumption 2. For the α in Lemma 4.3, if \(\sup_{i\geq 1}\mathbb{E} [|X_{i}|^{2+ \alpha }] < \infty \), then, for any \(f\in C_{b,\mathrm{Lip}}(\mathbb{R})\), we have

$$ \lim_{n\to \infty }\frac{1}{\log n}\sum _{k=1}^{n} \frac{\mathbb{E}[f(W _{k})]}{k}=\widetilde{\mathbb{E}} \bigl[f(\xi ) \bigr] $$
(14)

and

$$ \lim_{n\to \infty }\frac{1}{\log n}\sum _{k=1}^{n} \frac{\mathcal{E}[f(W _{k})]}{k}=\widetilde{\mathcal{E}} \bigl[f(\xi ) \bigr], $$
(15)

and the convergence rate is \(O (\frac{1}{\log n} )\), where ξ is G-normally distributed under \(\widetilde{\mathbb{E}}\) with the fixed λ, \(\sqrt{\widetilde{\mathbb{E}}[\xi ^{2}] }= \frac{2\lambda }{1+\lambda }\) and \(\sqrt{\widetilde{\mathcal{E}}[\xi ^{2}] }= \frac{2 }{1+\lambda }\).

Proof

We can get (15) by considering −f in (14). So we only need to prove (14). It follows from Lemma 4.3 that

$$\begin{aligned}& \limsup_{n\to \infty } \Biggl\vert \frac{1}{\log n}\sum _{k=1}^{n} \frac{ \mathbb{E}[f(W_{k})]}{k}-\widetilde{\mathbb{E}} \bigl[f(\xi )\bigr] \Biggr\vert \\& \quad = \limsup_{n\to \infty } \Biggl\vert \frac{1}{\log n}\sum_{k=1}^{n} \frac{ \mathbb{E}[f(W_{k})]-\widetilde{\mathbb{E}}[f(\xi )]}{k} \Biggr\vert \\& \quad \le \limsup_{n\to \infty }\frac{1}{\log n}\sum _{k=1}^{n} \frac{ \vert \mathbb{E}[f(W_{k})]-\widetilde{\mathbb{E}}[f(\xi )] \vert }{k} \\& \quad \le \limsup_{n\to \infty }\frac{1}{\log n}\sum _{k=1}^{n}\frac{ \vert f \vert _{\mathrm{Lip}}C _{\alpha ,\lambda }}{k}\sup_{1\leq i\leq k} \biggl\{ \frac{\mathbb{E}[ \vert X _{i} \vert ^{2+\alpha }]}{\sigma _{i}^{2+\alpha }} \biggl( \frac{\sigma _{i}}{B _{k}} \biggr)^{\alpha } \biggr\} \\& \quad \le \limsup_{n\to \infty } \vert f \vert _{\mathrm{Lip}}C_{\alpha ,\lambda } \sup_{i \ge 1}\mathbb{E}\bigl[ \vert X_{i} \vert ^{2+\alpha }\bigr]\frac{1}{\log n}\sum_{k=1}^{n} \frac{1}{k (\inf_{i\ge 1}\underline{\sigma }_{i}^{2} ) (k\inf _{i \ge 1}\underline{\sigma }_{i}^{2} )^{\frac{\alpha }{2}}} \\& \quad = \frac{ \vert f \vert _{\mathrm{Lip}}C_{\alpha ,\lambda }\sup_{i\ge 1}\mathbb{E}[ \vert X_{i} \vert ^{2+ \alpha }]}{ (\inf _{i\ge 1}\underline{\sigma }_{i}^{2} ) ^{1+\frac{\alpha }{2}}} \limsup_{n\to \infty }\frac{1}{\log n}\sum _{k=1}^{n}\frac{1}{k^{1+\frac{\alpha }{2}}} \\& \quad = 0. \end{aligned}$$

Therefore, equality (14) holds and the convergence rate is \(O (\frac{1}{\log n} )\). □

Theorem 4.5

Under the assumptions of Theorem 4.4, for \(f\in C_{b,\mathrm{Lip}}( \mathbb{R})\) satisfying the following condition (16):

$$\begin{aligned}& \textit{there exist constants } \beta >0, c>0 \textit{ such that} \\& \quad \bigl\vert \mathbb{E}\bigl[\bigl(f(W_{k})-\mathbb{E} \bigl[f(W_{k})\bigr]\bigr) \bigl(f(W_{l})-\mathbb{E}\bigl[f(W _{l})\bigr]\bigr)\bigr] \bigr\vert \le c \biggl(\frac{l}{k} \biggr)^{\beta },\quad 1\le l\le k, \end{aligned}$$
(16)

we have

$$ \lim_{n\to \infty }\frac{1}{\log n}\sum _{k=1}^{n} \frac{f(W_{k})}{k}= \widetilde{\mathbb{E}} \bigl[f(\xi ) \bigr]\quad \textit{quasi-surely}, $$
(17)

and the convergence rate is \(o(\frac{1}{(\log n)^{\epsilon }} )\), for any \(\epsilon \in (0,1/2)\).

Proof

Under the condition (16), it follows from Theorem 4.2 that, for any \(\epsilon \in (0,1/2)\),

$$ \frac{1}{\log n}\sum_{k=1}^{n} \frac{f(W_{k})-\mathbb{E}[f(W_{k})]}{k}=o \biggl(\frac{1}{(\log n)^{ \epsilon }} \biggr)\quad \mbox{quasi-surely}. $$

On the other hand, from Theorem 4.4, we have

$$ \lim_{n\to \infty }\frac{1}{\log n}\sum_{k=1}^{n} \frac{\mathbb{E}[f(W _{k})]}{k}-\widetilde{\mathbb{E}} \bigl[f(\xi ) \bigr]=O \biggl( \frac{1}{\log n} \biggr). $$

Consequently, we can get (17) and the convergence rate. □
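In the classical case \(\lambda =1\), Theorem 4.5 reduces to an almost sure central limit theorem. The sketch below illustrates (17) for i.i.d. standard normals and \(f(x)=\cos x\) (our choices; covariance bounds of type (16) for \(f(W_{k})\) are classical in this i.i.d. setting), with target \(E[\cos Z]=e^{-1/2}\); note that the logarithmic averaging makes the convergence slow.

```python
import numpy as np

# Almost sure CLT illustration of Theorem 4.5 in the classical case:
# (1/log n) * sum_{k<=n} f(W_k)/k  ->  E[f(Z)], Z ~ N(0,1), f = cos.
rng = np.random.default_rng(6)
N = 1_000_000
S = np.cumsum(rng.standard_normal(N))
k = np.arange(1.0, N + 1.0)
W = S / np.sqrt(k)
avg = np.cumsum(np.cos(W) / k) / np.log(k + 1.0)

print("logarithmic average at n = 10^6:", float(avg[-1]))
print("target E[cos(xi)] = exp(-1/2) =", float(np.exp(-0.5)))
```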

References

  1. Agahi, H., Mohammadpour, A., Mesiar, R., Ouyang, Y.: On a strong law of large numbers for monotone measures. Stat. Probab. Lett. 83, 1213–1218 (2013)


  2. Chen, Z.: Strong laws of large numbers for sub-linear expectations. Sci. China Math. 59(5), 945–954 (2016)


  3. Chen, Z., Hu, C., Zong, G.: Strong laws of large numbers for sub-linear expectation without independence. Commun. Stat., Theory Methods 46(15), 7529–7545 (2017)


  4. Chen, Z., Huang, H., Wu, P.: Extension of the strong law of large numbers for capacities. Math. Control Relat. Fields 9(1), 175–190 (2019)


  5. Chen, Z., Wu, P., Li, B.: A strong law of large numbers for non-additive probabilities. Int. J. Approx. Reason. 54(3), 365–377 (2013)


  6. De Cooman, G., Miranda, E.: Weak and strong laws of large numbers for coherent lower previsions. J. Stat. Plan. Inference 138, 2409–2432 (2008)


  7. Denis, L., Hu, M., Peng, S.: Function spaces and capacity related to a sub-linear expectation: application to G-Brownian motion paths. Potential Anal. 34(2), 139–161 (2011)


  8. Fazekas, I.: On a general approach to the strong laws of large numbers (2014). arXiv:1406.2883v1


  9. Fazekas, I., Klesov, O.: A general approach to the strong laws of large numbers. Theory Probab. Appl. 45(3), 568–583 (2001)


  10. Hu, S., Hu, M.: A general approach rate to the strong law of large numbers. Stat. Probab. Lett. 76(8), 843–851 (2006)


  11. Longnecker, M., Serfling, R.J.: General moment and probability inequalities for the maximum partial sum. Acta Math. Acad. Sci. Hung. 30(1–2), 129–133 (1977)


  12. Maccheroni, F., Marinacci, M.: A strong law of large number for capacities. Ann. Probab. 33, 1171–1178 (2005)


  13. Peng, S.: G-Expectation, G-Brownian motion and related stochastic calculus of Itô type. In: Stochastic Analysis and Applications, pp. 541–567. Springer, Berlin (2007)


  14. Peng, S.: Survey on normal distributions, central limit theorem, Brownian motion and the related stochastic calculus under sub-linear expectations. Sci. China Ser. A, Math. 52(7), 1391–1411 (2009)


  15. Peng, S.: Nonlinear expectations and stochastic calculus under uncertainty—with robust central limit theorem and G-Brownian motion (2010). arXiv:1002.4546v1

  16. Song, Y.: Normal approximation by Stein’s method under sublinear expectations (2017). arXiv:1711.05384v1

  17. Sung, S.H., Hu, T., Volodin, A.: A note on the growth rate in the Fazekas–Klesov general law of large numbers and on the weak law of large numbers for tail series. Publ. Math. (Debr.) 73(1–2), 1–10 (2008)


  18. Terán, P.: A law of large numbers for the possibilistic mean value. Fuzzy Sets Syst. 245, 116–124 (2014)


  19. Tómács, T.: A general method to obtain the rate of convergence in the strong law of large numbers. Ann. Math. Inform. 34, 97–102 (2007)


  20. Tómács, T., Líbor, Z.: A Hájek–Rényi type inequality and its applications. Ann. Math. Inform. 33, 141–149 (2006)


  21. Zhang, L.: Rosenthal’s inequalities for independent and negatively dependent random variables under sub-linear expectations with applications. Sci. China Math. 59(4), 751–768 (2016)


  22. Zhang, L.: Strong limit theorems for extended independent and extended negatively dependent random variables under non-linear expectations (2016). arXiv:1608.00710v1


Acknowledgements

The authors would like to thank the editor and referees for their valuable comments.

Availability of data and materials

Data sharing not applicable to this paper as no data sets were generated or analyzed during the current study.

Funding

This paper is supported by the National Natural Science Foundation of China (Grant Nos. 11601280 and 11871050) and the Natural Science Foundation of Shandong Province of China (Grant Nos. ZR2016AQ11 and ZR2016AQ13).

Author information


Contributions

All authors contributed equally in writing this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Panyu Wu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Huang, W., Wu, P. Strong laws of large numbers for general random variables in sublinear expectation spaces. J Inequal Appl 2019, 143 (2019). https://doi.org/10.1186/s13660-019-2094-7

