
A new proof for the generalized law of large numbers under Choquet expectation

Abstract

In this article, we employ elementary inequalities arising from the sub-linearity of Choquet expectation to give a new proof of the generalized law of large numbers under Choquet expectations induced by 2-alternating capacities, under mild assumptions. This generalizes the Lindeberg–Feller methodology from linear probability theory to the Choquet expectation framework and extends the law of large numbers under Choquet expectation from the strong independence and identical distribution (iid) assumption to convolutional independence combined with a strengthened first moment condition.

1 Introduction

As pointed out in [17] and the references cited therein, the Choquet expectation provides a strong and convincing way to describe volatility uncertainty phenomena in financial markets, as well as to bridge the gap between the traditional and the newly arising sub-linear expectation theory. Recently, Choquet expectation has found application in monetary services aggregation [8]. Therefore, the theoretical study of the law of large numbers (LLNs) and the central limit theorem (CLT) under Choquet expectations has attracted much attention from mathematicians and economists, and significant progress has been made; see, for example, [2, 9, 10].

Reviewing the literature on LLNs under Choquet expectations, we find that the proofs can be classified mainly into two categories. One is the direct method, as in [2, 10], in which the purely probabilistic Lindeberg–Feller idea and the classical inequality techniques, usually applied in the linear expectation case, are borrowed to derive LLNs under Choquet expectations. The other is the indirect method, as in [9], in which the non-additive Choquet expectation is converted into an additive Lebesgue–Stieltjes integral, so that existing additivity properties can be used to derive the Choquet LLNs.

Examining these existing methods carefully, it is not difficult to see that their proofs rely mainly on three assumptions: the additivity of expectations, the independence of the random variables, and their identical distribution. Since the latter two, the "iid" assumption for short, are too strong to be verified and utilized in simulations of real financial practice, one may naturally ask whether there are weaker assumptions under which the desired Choquet LLNs remain valid and their applications are facilitated. The answer is affirmative.

The main goals of this article are: (1) to adopt Choquet expectations induced by 2-alternating capacities as an alternative to the additivity-of-expectations assumption in [9], and to replace the independent and identically distributed (iid) condition on the random variables by a convolutional independence condition combined with a strengthened first moment condition, which together are much weaker than the iid constraints; (2) to prove the LLNs under Choquet expectation through a purely probabilistic argument and classical inequality techniques. This generalizes the Lindeberg–Feller methodology from linear probability theory to the Choquet expectation framework.

The remaining part of this article is organized as follows. Section 2 recalls the definitions of capacity, Choquet expectation, convolutional independence and the strengthened first moment condition, and concisely presents their properties. In Sect. 3, we establish some useful lemmas based on these definitions, properties and some inequality techniques. Section 4 is devoted to a detailed proof of the LLNs under Choquet expectation through a purely probabilistic argument and classical inequality techniques. We give some concluding remarks in the last section.

2 Preliminary

In this section, we recall the definitions and properties concerning capacities and Choquet expectations.

Let (Ω, \({\mathcal{F}}\)) be a measurable space, \(C_{b}(\mathbb{R})\) be the set of all bounded and continuous functions on \(\mathbb{R}\) and \(C_{b}^{2}(\mathbb{R})\) be the set of functions in \(C_{b}(\mathbb{R})\) with bounded, continuous first and second order derivatives.

Definition 1

([3])

A set function \(\mathbb{V}\): \({\mathcal{F}}\to [0,1]\) is called a capacity, if it satisfies the following properties:

$$ \begin{gathered} \mathbb{V}(\emptyset )=0, \qquad \mathbb{V}(\varOmega )=1; \\ \mathbb{V}(A)\leq \mathbb{V}(B) \quad \text{whenever } A \subseteq B \text{ and } A, B \in {\mathcal{F}}. \end{gathered} $$

Especially, a capacity \(\mathbb{V}\) is 2-alternating if for all \(A, B \in {\mathcal{F}}\),

$$ \mathbb{V}(A\cup B)\leq \mathbb{V}(A)+\mathbb{V}(B)-\mathbb{V}(A \cap B). $$
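The 2-alternating property is easy to check numerically on a finite space. The sketch below is a hypothetical illustration of ours (not from the paper): a "distorted probability" \(\mathbb{V}(A)=g(P(A))\) with a concave distortion g is a standard way to produce 2-alternating capacities, and we verify Definition 1 and the 2-alternating inequality by brute force.

```python
from itertools import combinations
from math import sqrt

# Hypothetical example (not from the paper): a distorted probability
# V(A) = g(P(A)) with concave g is a standard source of 2-alternating capacities.
omega = (0, 1, 2, 3)                   # a four-point sample space
p = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}  # an additive probability on omega

def V(A):
    """Capacity V(A) = sqrt(P(A)); sqrt is concave, so V should be 2-alternating."""
    return sqrt(sum(p[w] for w in A))

subsets = [frozenset(c) for r in range(5) for c in combinations(omega, r)]

# Normalization (Definition 1): V(empty) = 0 and V(Omega) = 1.
assert V(frozenset()) == 0.0
assert abs(V(frozenset(omega)) - 1.0) < 1e-9
# Monotonicity: A subset of B implies V(A) <= V(B).
assert all(V(A) <= V(B) + 1e-12 for A in subsets for B in subsets if A <= B)
# 2-alternating: V(A u B) + V(A n B) <= V(A) + V(B) for all A, B.
two_alternating = all(V(A | B) + V(A & B) <= V(A) + V(B) + 1e-12
                      for A in subsets for B in subsets)
print(two_alternating)
```

Replacing the concave distortion by a convex one (e.g. \(g(t)=t^{2}\)) makes the same brute-force check fail, which is a quick way to see that 2-alternating is a genuine restriction.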

Definition 2

([3])

Let X be a random variable on \((\varOmega ,{\mathcal{F}})\). The upper Choquet expectation (integral) of X induced by a capacity \(\mathbb{V}\) on \({\mathcal{F}}\) is defined by

$$ \mathbb{C}_{V}[X]:= \int _{\varOmega } X\,d \mathbb{V}= \int _{0}^{ \infty }\mathbb{V}(X>t)\,dt+ \int _{-\infty }^{0} \bigl[ \mathbb{V}(X>t)-1\bigr]\,dt. $$

The lower Choquet expectation of X induced by \(\mathbb{V}\) is given by

$$ \mathcal{C}_{V}[X]:=-\mathbb{C}_{V}[-X], $$

which is conjugate to the upper expectation and satisfies \(\mathcal{C}_{V}[X]\leq \mathbb{C}_{V}[X]\).

For simplicity, we only consider the upper Choquet expectation in the sequel, since the lower (conjugate) version can be considered similarly.
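On a finite sample space, the tail-integral definition above reduces to a finite sum over the level sets of X, which makes the basic properties easy to test. The following sketch is a hypothetical illustration of ours (the space, capacity and random variables are our own choices): it computes the discrete upper Choquet integral and checks monotonicity, the conjugacy \(\mathcal{C}_{V}[X]=-\mathbb{C}_{V}[-X]\leq \mathbb{C}_{V}[X]\), and sub-additivity for a 2-alternating capacity.

```python
from math import sqrt

# Hypothetical finite example: upper Choquet integral via level sets.
omega = (0, 1, 2, 3)
p = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}

def V(A):
    # sqrt is a concave distortion of p, so V is a 2-alternating capacity
    return sqrt(sum(p[w] for w in A))

def choquet_upper(X):
    """C_V[X] = x_(1) + sum_i (x_(i) - x_(i-1)) * V({X >= x_(i)}), values ascending."""
    atoms = sorted(omega, key=lambda w: X[w])
    vals = [X[w] for w in atoms]
    c = vals[0]
    for i in range(1, len(atoms)):
        upper_set = frozenset(atoms[i:])          # level set {X >= x_(i)}
        c += (vals[i] - vals[i - 1]) * V(upper_set)
    return c

def choquet_lower(X):
    # conjugate (lower) Choquet expectation
    return -choquet_upper({w: -x for w, x in X.items()})

X = {0: -1.0, 1: 0.0, 2: 2.0, 3: 3.0}
Y = {0: -2.0, 1: -1.0, 2: 1.0, 3: 3.0}            # Y <= X pointwise

assert choquet_upper(Y) <= choquet_upper(X)       # monotonicity
assert choquet_lower(X) <= choquet_upper(X)       # lower <= upper
assert choquet_upper({w: X[w] + Y[w] for w in omega}) <= \
       choquet_upper(X) + choquet_upper(Y) + 1e-12  # sub-additivity
print(round(choquet_upper(X), 4))
```

Note that X and Y above happen to be comonotone (same ordering of atoms), so the sub-additivity bound is attained with equality in this particular instance.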

Proposition 1

([3, 11])

Let X, Y be two random variables on \((\varOmega ,{\mathcal{F}})\), and let \(\mathbb{C}_{V}\) be the upper Choquet expectation induced by a capacity \(\mathbb{V}\). Then, we have

  (1) monotonicity: \(\mathbb{C}_{V}[X]\geq \mathbb{C}_{V}[Y]\) for \(X\geq Y\);

  (2) positive homogeneity: \(\mathbb{C}_{V}[\lambda X]= \lambda \mathbb{C}_{V}[X]\), \(\forall \lambda \geq 0\);

  (3) translation invariance: \(\mathbb{C}_{V}[X+a]=\mathbb{C}_{V}[X]+a\), \(\forall a\in \mathbb{R}\).

Compared with Proposition 1, the Choquet expectation induced by a 2-alternating capacity \(\mathbb{V}\) carries more information than the general Choquet expectation does, for example, sub-additivity and sub-linearity, as presented in the following proposition.

Proposition 2

([2, 3, 11])

Let \(\mathbb{C}_{V}\) be the upper Choquet expectation induced by a 2-alternating capacity \(\mathbb{V}\), and let X, Y be two random variables on \((\varOmega ,{\mathcal{F}})\). Then, we have

  (1) sub-additivity:

    $$ \mathbb{C}_{V}[X+Y]\leq \mathbb{C}_{V}[X]+ \mathbb{C}_{V}[Y]; $$

  (2) for any constant \(a \in \mathbb{R}\),

    $$ \mathbb{C}_{V}[aX]=a^{+} \mathbb{C}_{V}[X]-a^{-} \mathcal{C}_{V}[X], $$

    where \(a^{+}=\max \{ a, 0\}\) and \(a^{-}=\max \{ -a, 0\}\);

  (3) sub-linearity:

    $$ -\mathbb{C}_{V}\bigl[ \vert Y \vert \bigr]\leq \mathcal{C}_{V}[Y]\leq \mathbb{C}_{V}[X+Y]- \mathbb{C}_{V}[X]\leq \mathbb{C}_{V}[Y]\leq \mathbb{C}_{V}\bigl[ \vert Y \vert \bigr]. $$

Remark 1

Let \(\mathbb{V}\) be a 2-alternating capacity and \(\mathbb{C}_{V}\), \(\mathcal{C}_{V}\) be the induced upper, lower Choquet expectation, respectively. Then

$$ v(A):=\mathcal{C}_{V}[I_{A}], \quad \forall A\in { \mathcal{F}} $$

defines a capacity, where \(I_{A}\) denotes the indicator function of the set A. Further, if \(A^{c}\) is the complement of A, then

$$ V(A)=1-v\bigl(A^{c}\bigr), \quad \forall A\in {\mathcal{F}}. $$

Next, we recall the definition of the distribution of a random variable.

Definition 3

([2])

Let \(\mathbb{C}_{V}\) be the upper Choquet expectation induced by a capacity \(\mathbb{V}\) on \(\mathcal{F}\). Let X be a random variable on \((\varOmega ,{\mathcal{F}})\). For any function φ on \(\mathbb{R}\) with \(\varphi (X) \in \mathcal{F}\), the distribution function of X is defined by

$$ \mathbb{F}_{X}[\varphi ]:=\mathbb{C}_{V}\bigl[\varphi (X) \bigr]. $$

Random variables X and Y are called identically distributed, if for any \(\varphi \in C_{b}(\mathbb{R}) \) with \(\varphi (X), \varphi (Y)\in {\mathcal{F}}\),

$$ \mathbb{C}_{V}\bigl[\varphi (X)\bigr]=\mathbb{C}_{V}\bigl[ \varphi (Y)\bigr]< \infty . $$

Definition 4

([2])

A sequence \(\{X_{i}\}_{i=1}^{\infty } \) of random variables is said to converge in distribution (in law) under the upper Choquet expectation \(\mathbb{C}_{V}\) on \(\mathcal{F}\) if, for any \(\varphi \in C_{b}(\mathbb{R}) \) with \(\varphi (X_{i}) \in \mathcal{F}\), \(i \ge 1\), the sequence \(\{\mathbb{C}_{V}[\varphi (X_{i})]\}_{i=1}^{\infty } \) converges.

We conclude this section by reviewing two fundamental concepts, convolution and independence, which play important roles in the development of classical probability theory, and then generalize them to the Choquet expectation framework.

Convolution and independence in linear probability theory. Let ξ and η be two random variables on the probability space \((\varOmega ,{\mathcal{F}},P)\), and \(E_{P}[\cdot ]\) be the expectation operator under probability P. ξ and η are said to be of convolution if for any bounded function φ,

$$ E_{P} \bigl[\varphi (\xi +\eta ) \bigr]=E_{P} \bigl[ E_{P} \bigl[ \varphi (x+\eta ) \bigr]|_{x=\xi } \bigr]. $$

Fubini’s theorem implies that

$$\begin{aligned} E_{P}\bigl[\varphi (\xi +\eta ) \bigr]=E_{P} \bigl[ E_{P}\bigl[\varphi (x+\eta )\bigr] |_{x=\xi } \bigr]=E_{P} \bigl[ E_{P}\bigl[\varphi ( \xi +y)\bigr]|_{y= \eta } \bigr]. \end{aligned}$$
(1)

Obviously, if ξ and η are independent, then ξ and η are of convolution. However, the converse may not be true. From this point of view, convolution is weaker than independence in linear probability theory.
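In the linear case, identity (1) is a direct consequence of Fubini's theorem and can be checked by hand. The sketch below is a hypothetical discrete example of ours: for independent discrete ξ and η, it compares the direct expectation of \(\varphi (\xi +\eta )\) with both iterated (convolution) expectations.

```python
from itertools import product

# Hypothetical discrete check of identity (1): for independent xi and eta,
# E[phi(xi + eta)] equals both iterated expectations.
xi  = {-1: 0.3, 0: 0.2, 2: 0.5}     # law of xi  (value -> probability)
eta = {-2: 0.4, 1: 0.6}             # law of eta

def E(law, f):
    """Linear expectation of f under a discrete law."""
    return sum(prob * f(v) for v, prob in law.items())

phi = lambda s: min(max(s, -1.0), 1.0)   # a bounded test function

direct  = sum(p * q * phi(x + y)
              for (x, p), (y, q) in product(xi.items(), eta.items()))
iter_xy = E(xi,  lambda x: E(eta, lambda y: phi(x + y)))   # condition on xi first
iter_yx = E(eta, lambda y: E(xi,  lambda x: phi(x + y)))   # condition on eta first

assert abs(direct - iter_xy) < 1e-12 and abs(direct - iter_yx) < 1e-12
print(round(direct, 4))
```

Under a Choquet expectation the analogous two iterated integrals need not coincide, which is exactly the asymmetry highlighted in Remark 2 below.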

Motivated by this fact, we attempt to define a convolution-combined independence in place of the strong independence assumption to prove the generalized LLNs under Choquet expectation.

Definition 5

(Convolutional Independence)

Let X and Y be two random variables and \(\{X_{i}\}_{i=1}^{\infty } \) be a sequence of random variables on \((\varOmega , {\mathcal{F}})\).

  (i) The random variable X is said to be convolutionally independent of Y, if for any function \(\varphi \in C_{b}(\mathbb{R})\),

    $$ \mathbb{C}_{V}\bigl[\varphi (X+Y)\bigr]=\mathbb{C}_{V} \bigl[\mathbb{C}_{V}\bigl[ \varphi (x+Y)\bigr]|_{x=X} \bigr]. $$

  (ii) The sequence \(\{X_{i}\}_{i=1}^{\infty } \) is said to be convolutionally independent, if \(X_{i+1}\) is convolutionally independent of \(\sum_{j=1}^{i}X_{j}\) for each \(i\geqslant 1\).

Remark 2

A novel feature of this kind of convolution is its asymmetry and directionality. According to the new definition, the order of the marginal Choquet expectations may not be exchangeable. That is, the following could happen:

$$ \mathbb{C}_{V} \bigl[\mathbb{C}_{V}\bigl[\varphi (x+Y) \bigr]|_{x=X} \bigr]\neq\mathbb{C}_{V} \bigl[ \mathbb{C}_{V}\bigl[\varphi (X+y)\bigr] |_{y=Y} \bigr]. $$

Such a property is completely different from the notion of "mutual" independence in linear probability theory, but it is more consistent with financial phenomena.

According to Definition 3, verifying the identical distribution of random variables X and Y for all \(\varphi \in C_{b}(\mathbb{R})\) is hard to implement in applications. We hope to reduce this complex verification to only those functions satisfying the so-called strengthened first moment condition. Therefore, we introduce the following definition.

Definition 6

(The strengthened first moment condition)

A sequence \(\{X_{i}\}_{i=1}^{\infty } \) of random variables on \((\varOmega , {\mathcal{F}})\) is said to satisfy the strengthened first moment condition if the following Choquet expectations are finite for \(i\geqslant 1\):

$$ \begin{gathered} \mathbb{C}_{V}[X_{i}]= \mathbb{C}_{V}[X_{1}], \qquad \mathcal{C}_{V}[X_{i}]= \mathcal{C}_{V}[X_{1}]; \\ \mathcal{C}_{V}\bigl[ \vert X_{i} \vert \bigr]= \mathcal{C}_{V}\bigl[ \vert X_{1} \vert \bigr], \qquad \mathbb{C}_{V}\bigl[ \vert X_{i} \vert \bigr]= \mathbb{C}_{V}\bigl[ \vert X_{1} \vert \bigr]. \end{gathered} $$

3 Useful lemmas

In this section, we apply the convolutional independence and the strengthened first moment condition proposed in Sect. 2, which are clearly weaker than the iid assumption used in the existing methods, and prove two useful lemmas.

Lemma 1

Let \(\{X_{i}\}_{i=1}^{\infty }\) be a sequence of convolutionally independent random variables on \((\varOmega , {\mathcal{F}})\). Let \(\mathbb{V}\) be a 2-alternating capacity defined on \({\mathcal{F}}\), and \(\mathbb{C}_{V}\), \(\mathcal{C}_{V}\) be the induced upper and lower Choquet expectations, respectively. Then, for any \(\varphi \in C_{b}(\mathbb{R})\) and any constants \(y_{i} \in \mathbb{R}\),

$$ I_{1}\leq \mathbb{C}_{V} \Biggl[\varphi \Biggl( \sum_{i=1}^{n} X_{i} \Biggr) \Biggr]-\varphi \Biggl(\sum_{i=1}^{n}y_{i} \Biggr)\leq I_{2}, $$
(2)

where

$$\begin{aligned}& I_{1}:=\sum_{m=1}^{n}\inf _{x\in \mathbb{R}} \bigl\{ \mathbb{C}_{V} \bigl[\varphi (x+X_{n-(m-1)}-y_{m} ) \bigr] -\varphi (x ) \bigr\} , \\& I_{2}:=\sum_{m=1}^{n}\sup _{x\in \mathbb{R}} \bigl\{ \mathbb{C}_{V} \bigl[\varphi (x+X_{n-(m-1)}-y_{m} ) \bigr] -\varphi (x ) \bigr\} . \end{aligned}$$

Proof

Set \(S_{n}:=\sum_{i=1}^{n} X_{i}\), \(S_{0}:=0\). Then

$$ \mathbb{C}_{V}\bigl[\varphi (S_{n} )\bigr]-\varphi \Biggl(\sum_{i=1}^{n}y_{i} \Biggr)=\sum_{m=0}^{n-1}\Delta _{m}, $$
(3)

where

$$ \begin{gathered} \Delta _{0}:=\mathbb{C}_{V} \bigl[\varphi ( S_{n} ) \bigr]-\mathbb{C}_{V} \bigl[\varphi ( S_{n-1}+ y_{1} ) \bigr], \quad \text{for } m=0; \\ \Delta _{m}:=\mathbb{C}_{V} \Biggl[\varphi \Biggl( S_{n-m}+ \sum_{i=1}^{m}y_{i} \Biggr) \Biggr]-\mathbb{C}_{V} \Biggl[\varphi \Biggl( S_{n-(m+1)}+ \sum_{i=1}^{m+1} y_{i} \Biggr) \Biggr], \quad \text{for } m\geq 1. \end{gathered} $$

We now estimate the term \(\Delta _{m}\) for \(0 \leq m \leq n-1\). We let \(h(x)=\mathbb{C}_{V} [\varphi (x+X_{n-m}) ]\) and apply the convolutional independence of \(\{X_{i}\}_{i=1}^{n}\) to derive

$$\begin{aligned} \mathbb{C}_{V} \Biggl[\varphi \Biggl( S_{n-m}+ \sum_{i=1}^{m}y_{i} \Biggr) \Biggr] =& \mathbb{C}_{V} \bigl[\mathbb{C}_{V} \bigl[\varphi ( x+X_{n-m})\bigr] |_{x=S_{n-(m+1)}+\sum _{i=1}^{m}y_{i}} \bigr] \\ =&\mathbb{C}_{V} \Biggl[h \Biggl( S_{n-(m+1)}+ \sum _{i=1}^{m}y_{i} \Biggr) \Biggr], \end{aligned}$$

which, combined with the sub-linearity of \(\mathbb{C}_{V}\) in Proposition 2, implies

$$\begin{aligned} \Delta _{m} =&\mathbb{C}_{V} \Biggl[h \Biggl( S_{n-(m+1)}+ \sum_{i=1}^{m}y_{i} \Biggr) \Biggr] -\mathbb{C}_{V} \Biggl[\varphi \Biggl( S_{n-(m+1)}+ \sum_{i=1}^{m} y_{i}+ y_{m+1} \Biggr) \Biggr] \\ \leq &\mathbb{C}_{V} \Biggl[h \Biggl( S_{n-(m+1)}+ \sum _{i=1}^{m}y_{i} \Biggr) -\varphi \Biggl( S_{n-(m+1)}+\sum_{i=1}^{m} y_{i}+ y_{m+1} \Biggr) \Biggr] \\ \leq &\sup_{x\in \mathbb{R}} \bigl\{ h(x)-\varphi (x+y_{m+1} ) \bigr\} \\ =&\sup_{x\in \mathbb{R}} \bigl\{ \mathbb{C}_{V} \bigl[ \varphi (x+X_{n-m} ) \bigr] -\varphi (x+y_{m+1} ) \bigr\} . \end{aligned}$$

On the other hand, we apply the sub-linearity in Proposition 2 again,

$$\begin{aligned} \Delta _{m} =&\mathbb{C}_{V} \Biggl[h \Biggl( S_{n-(m+1)}+ \sum_{i=1}^{m}y_{i} \Biggr) \Biggr] -\mathbb{C}_{V} \Biggl[\varphi \Biggl( S_{n-(m+1)}+ \sum_{i=1}^{m} y_{i}+ y_{m+1} \Biggr) \Biggr] \\ \ge &\mathcal{C}_{V} \Biggl[h \Biggl( S_{n-(m+1)} +\sum _{i=1}^{m}y_{i} \Biggr) -\varphi \Biggl( S_{n-(m+1)}+\sum_{i=1}^{m} y_{i}+y_{m+1} \Biggr) \Biggr] \\ \ge &\inf_{x\in \mathbb{R}} \bigl\{ h(x)-\varphi (x+y_{m+1} ) \bigr\} \\ =&\inf_{x\in \mathbb{R}} \bigl\{ \mathbb{C}_{V} \bigl[ \varphi (x+X_{n-m} ) \bigr] -\varphi (x+y_{m+1} ) \bigr\} . \end{aligned}$$

That is,

$$ \begin{gathered} \inf_{x\in \mathbb{R}} \bigl\{ \mathbb{C}_{V} \bigl[ \varphi (x+X_{n-m} ) \bigr] -\varphi (x+y_{m+1} ) \bigr\} \\ \quad \leq \Delta _{m} \leq \sup_{x\in \mathbb{R}} \bigl\{ \mathbb{C}_{V} \bigl[\varphi (x+X_{n-m} ) \bigr] -\varphi (x+y_{m+1} ) \bigr\} . \end{gathered} $$

This with (3) implies that

$$ \begin{gathered} \sum_{m=0}^{n-1} \inf_{x\in \mathbb{R}} \bigl\{ \mathbb{C}_{V} \bigl[\varphi (x+X_{n-m} ) \bigr] -\varphi (x+y_{m+1} ) \bigr\} \\ \quad \leq \mathbb{C}_{V} \bigl[\varphi ( S_{n} ) \bigr]- \varphi \Biggl(\sum_{i=1}^{n}y_{i} \Biggr) \\ \quad \leq \sum_{m=0}^{n-1}\sup _{x\in \mathbb{R}} \bigl\{ \mathbb{C}_{V} \bigl[\varphi (x+X_{n-m} ) \bigr] -\varphi (x+y_{m+1} ) \bigr\} . \end{gathered} $$

The desired conclusion (2) then follows directly from the facts that

$$ \begin{gathered} \sup_{x\in \mathbb{R}} \bigl\{ \mathbb{C}_{V} \bigl[ \varphi (x+X_{n-m} ) \bigr] -\varphi (x+y_{m+1} ) \bigr\} \\ \quad =\sup_{x\in \mathbb{R}} \bigl\{ \mathbb{C}_{V} \bigl[ \varphi (x+X_{n-m}-y_{m+1} ) \bigr] -\varphi (x ) \bigr\} , \\ \inf_{x\in \mathbb{R}} \bigl\{ \mathbb{C}_{V} \bigl[ \varphi (x+X_{n-m} ) \bigr] -\varphi (x+y_{m+1} ) \bigr\} \\ \quad =\inf_{x\in \mathbb{R}} \bigl\{ \mathbb{C}_{V} \bigl[ \varphi (x+X_{n-m}-y_{m+1} ) \bigr] -\varphi (x ) \bigr\} , \end{gathered} $$

which completes the proof. □

Further, we estimate the bounds for \(I_{1}\), \(I_{2}\) of Lemma 1 under the strengthened first moment condition.

Lemma 2

Let \(\mathbb{V}\) be a 2-alternating capacity and \(\mathbb{C}_{V}\), \(\mathcal{C}_{V}\) be the induced upper and lower Choquet expectations, respectively. Let the sequence \(\{X_{i}\}_{i=1}^{\infty }\) of random variables satisfy the strengthened first moment condition. Set \(\mathbb{C}_{V}[X_{i}]=\overline{\mu}\) and \(\mathcal{C}_{V}[X_{i}]=\underline{\mu}\), \(i\ge 1\). Then, for each function \(\varphi \in C_{b}^{2}(\mathbb{R})\), there exists a positive constant \(b_{n}(\epsilon )\) with \(b_{n}(\epsilon )\to 0\) as \(n\to \infty \), such that

  (I) \(\sum_{i=1}^{n}\sup_{x\in \mathbb{R}} \{ \mathbb{C}_{V} [ \varphi (x+\frac{X_{i}}{n} ) ]-\varphi (x) \} \leq \sup_{x\in \mathbb{R}}G ( \varphi '(x),\overline{\mu}, \underline{\mu} )+b_{n}(\epsilon )\).

  (II) \(\sum_{i=1}^{n}\inf_{x\in \mathbb{R}} \{ \mathbb{C}_{V} [ \varphi (x+\frac{X_{i}}{n} ) ]-\varphi (x) \} \ge \inf_{x\in \mathbb{R}} G ( \varphi '(x),\overline{\mu}, \underline{\mu} )-b_{n}(\epsilon )\).

Here \(G (x, \overline{\mu}, \underline{\mu} ):=x^{+} \overline{\mu}-x^{-} \underline{\mu}\) with constants \(\underline{\mu}<\overline{\mu}<\infty \).

Proof

Applying the Taylor expansion, we have, for some \(\theta _{1}\) with \(0 \leq \theta _{1}\leq 1\),

$$ \varphi \biggl(x+\frac{X_{i}}{n} \biggr)-\varphi (x) = \varphi '(x)\frac{X_{i}}{n}+J_{n}(x, X_{i}), $$

where \(J_{n}(x, X_{i}):= (\varphi '(x+\theta _{1}\frac{X_{i}}{n})- \varphi '(x) )\frac{X_{i}}{n}\).

Taking the upper Choquet expectation \(\mathbb{C}_{V}\) on both sides of the above equality and applying the sub-linearity of \(\mathbb{C}_{V}\), we get

$$\begin{aligned} -\mathbb{C}_{V}\bigl[ \bigl\vert J_{n}(x, X_{i}) \bigr\vert \bigr]\leq \mathbb{C}_{V} \biggl[\varphi \biggl(x+\frac{X_{i}}{n} \biggr)-\varphi (x) \biggr]- \mathbb{C}_{V} \biggl[\varphi '(x)\frac{X_{i}}{n} \biggr] \leq \mathbb{C}_{V}\bigl[ \bigl\vert J_{n}(x,X_{i}) \bigr\vert \bigr]. \end{aligned}$$

Since \(\mathbb{C}_{V}[X_{i}]=\overline{\mu}\) and \(\mathcal{C}_{V}[X_{i}]=\underline{\mu}\), \(i=1,2,\ldots ,n\), by \((2)\) of Proposition 2 we have

$$ \mathbb{C}_{V} \biggl[\varphi '(x)\frac{X_{i}}{n} \biggr]= \frac{1}{n} G\bigl(\varphi '(x),\overline{\mu}, \underline{\mu}\bigr). $$

Therefore, by the translation invariance of \(\mathbb{C}_{V}\) in Proposition 1,

$$ \begin{aligned}[b] &{-}\mathbb{C}_{V}\bigl[ \bigl\vert J_{n}(x, X_{i}) \bigr\vert \bigr] \\ &\quad \leq \mathbb{C}_{V} \biggl[\varphi \biggl(x+\frac{X_{i}}{n} \biggr) \biggr]-\varphi (x)- \frac{1}{n} G \bigl(\varphi '(x), \overline{\mu}, \underline{\mu} \bigr) \\ &\quad \leq \mathbb{C}_{V}\bigl[ \bigl\vert J_{n}(x,X_{i}) \bigr\vert \bigr]. \end{aligned} $$
(4)

Taking the supremum \(\sup_{x\in \mathbb{R}}\) on both sides of (4), we obtain

$$ \begin{aligned}[b] &\sup_{x\in \mathbb{R}}\bigl\{ -\mathbb{C}_{V}\bigl[ \bigl\vert J_{n}(x, X_{i}) \bigr\vert \bigr]\bigr\} \\ &\quad \leq \sup_{x\in \mathbb{R}} \biggl\{ \mathbb{C}_{V} \biggl[ \varphi \biggl(x+\frac{X_{i}}{n} \biggr) \biggr]-\varphi (x)- \frac{1}{n} G \bigl(\varphi '(x), \overline{\mu}, \underline{ \mu } \bigr) \biggr\} \\ &\quad \leq \sup_{x\in \mathbb{R}}\mathbb{C}_{V}\bigl[ \bigl\vert J_{n}(x,X_{i}) \bigr\vert \bigr]. \end{aligned} $$
(5)

For convenience, denote

$$ T_{n}^{i}(x):=\mathbb{C}_{V} \biggl[\varphi \biggl(x+\frac{X_{i}}{n} \biggr) \biggr]-\varphi (x)- \frac{1}{n} G \bigl(\varphi '(x), \overline{\mu}, \underline{\mu} \bigr). $$

From (5), we have

$$\begin{aligned} -\sum_{i=1}^{n} \sup_{x\in \mathbb{R}}\mathbb{C}_{V}\bigl[ \bigl\vert J_{n}(x,X_{i}) \bigr\vert \bigr] \leq \sum _{i=1}^{n}\sup_{x\in \mathbb{R}} T_{n}^{i}(x) \leq \sum_{i=1}^{n} \sup_{x\in \mathbb{R}}\mathbb{C}_{V}\bigl[ \bigl\vert J_{n}(x,X_{i}) \bigr\vert \bigr]. \end{aligned}$$
(6)

To estimate the right hand term \(\sum_{i=1}^{n}\sup_{x\in \mathbb{R}}\mathbb{C}_{V}[|J_{n}(x,X_{i})|]\) in (6), we apply the strengthened first moment condition

$$ \mathbb{C}_{V}[X_{i}]=\mathbb{C}_{V}[X_{1}], \mathbb{C}_{V}\bigl[ \vert X_{i} \vert \bigr]= \mathbb{C}_{V}\bigl[ \vert X_{1} \vert \bigr]< \infty ,\quad i \geq 1 $$

and for any \(\epsilon >0\),

$$ \mathbb{C}_{V}\bigl[ \vert X_{1} \vert \bigr]< \infty \quad \text{implies}\quad \mathbb{C}_{V} \bigl[ \vert {X_{1}} \vert I_{\{ \vert {X_{1}} \vert > n \epsilon \}} \bigr]\to 0, \quad \text{as } n \to \infty $$

which follows from the definition of Choquet expectation [3, 11]. Thus, we have, for any \(\epsilon >0\),

$$ \begin{aligned}[b] &\sum_{i=1}^{n} \sup_{x\in \mathbb{R}}\mathbb{C}_{V}\bigl[ \bigl\vert J_{n}(x,X_{i}) \bigr\vert \bigr] \\ &\quad \leq \sum_{i=1}^{n} \Bigl\{ \sup _{x\in \mathbb{R}} \mathbb{C}_{V} \bigl[ \bigl\vert J_{n}(x,X_{i}) \bigr\vert I_{\{ \vert \frac{X_{i}}{n} \vert > \epsilon \}} \bigr]+\sup _{x\in \mathbb{R}}\mathbb{C}_{V} \bigl[ \bigl\vert J_{n}(x,X_{i}) \bigr\vert I_{\{ \vert \frac{X_{i}}{n} \vert \leq \varepsilon \}} \bigr] \Bigr\} \\ &\quad \leq \sum_{i=1}^{n} \mathbb{C}_{V} \biggl[ \biggl(\sup_{x\in \mathbb{R}} \biggl\vert \varphi ' \biggl(x+\theta _{1}\frac{X_{i}}{n} \biggr) \biggr\vert +\sup_{x\in \mathbb{R}} \bigl\vert \varphi '(x) \bigr\vert \biggr) \frac{ \vert X_{i} \vert }{n} I_{\{ \vert X_{i} \vert > n \epsilon \}} \biggr] \\ &\qquad {}+\sum_{i=1}^{n} \frac{1}{2}\mathbb{C}_{V} \biggl[\sup_{x\in \mathbb{R}} \biggl\vert \varphi '' \biggl(x+\theta _{2} \frac{X_{i}}{n} \biggr) \biggr\vert \frac{X_{i}^{2}}{n^{2}}I_{\{ \vert {X_{i}} \vert \leq n \epsilon \}} \biggr] \\ &\quad \leq \sum_{i=1}^{n} \frac{1}{n}2 \bigl\Vert \varphi ' \bigr\Vert \mathbb{C}_{V} \bigl[ \vert {X_{i}} \vert I_{\{ \vert {X_{i}} \vert > n \epsilon \}} \bigr] + \sum_{i=1}^{n} \frac{1}{n^{2}} \bigl\Vert \varphi '' \bigr\Vert \mathbb{C}_{V}\bigl[ \vert X_{i} \vert \bigr]n\epsilon \\ &\quad = 2 \bigl\Vert \varphi ' \bigr\Vert \mathbb{C}_{V} \bigl[ \vert {X_{1}} \vert I_{\{ \vert {X_{1}} \vert > n \epsilon \}} \bigr] + \bigl\Vert \varphi '' \bigr\Vert \mathbb{C}_{V}\bigl[ \vert X_{1} \vert \bigr] \epsilon \\ &\quad := b_{n}(\epsilon ) \to 0, \quad n \to \infty , \end{aligned} $$
(7)

where \(\|\varphi \|:= \sup_{x\in \mathbb{R}} |\varphi (x)|\) and \(0 \leq \theta _{1}\), \(\theta _{2} \leq 1\).

Combining (6) and (7), since ϵ is arbitrary, we have, as \(n\to \infty \),

$$\begin{aligned} \sum_{i=1}^{n} \sup_{x\in \mathbb{R}} T_{n}^{i}(x) \leq \sum _{i=1}^{n} \sup_{x\in \mathbb{R}} \mathbb{C}_{V}\bigl[ \bigl\vert J_{n}(x,X_{i}) \bigr\vert \bigr] \leq b_{n}( \epsilon )\to 0. \end{aligned}$$
(8)

With the help of the estimate (8), we now derive the first assertion \((I)\) of this lemma:

$$ \begin{aligned}[b] &\sum_{i=1}^{n} \sup_{x\in \mathbb{R}} \biggl\{ \mathbb{C}_{V} \biggl[ \varphi \biggl(x+\frac{X_{i}}{n} \biggr) \biggr]-\varphi (x) \biggr\} \\ &\quad =\sum_{i=1}^{n}\sup _{x\in \mathbb{R}} \biggl\{ \mathbb{C}_{V} \biggl[\varphi \biggl(x+\frac{X_{i}}{n} \biggr) \biggr]-\varphi (x)- \frac{1}{n} G \bigl(\varphi '(x), \overline{\mu}, \underline{\mu} \bigr)+ \frac{1}{n}G \bigl(\varphi '(x), \overline{\mu}, \underline{ \mu } \bigr) \biggr\} \\ &\quad \leq \sum_{i=1}^{n} \sup _{x\in \mathbb{R}}T_{n}^{i}(x)+\sum _{i=1}^{n} \sup_{x\in \mathbb{R}} \biggl\{ \frac{1}{n} G \bigl(\varphi '(x), \overline{\mu}, \underline{ \mu } \bigr) \biggr\} \\ &\quad \leq b_{n}(\epsilon )+\sup_{x\in \mathbb{R}}G \bigl( \varphi '(x), \overline{\mu}, \underline{\mu} \bigr). \end{aligned} $$

Taking the infimum \(\inf_{x\in \mathbb{R}}\) on both sides of (4) and arguing similarly to the first assertion \((I)\), we obtain the second assertion \((\mathit{II})\) of this lemma. Indeed,

$$\begin{aligned} \sum_{i=1}^{n} \inf_{x\in \mathbb{R}} T_{n}^{i}(x) \geq -\sum _{i=1}^{n} \sup_{x\in \mathbb{R}} \mathbb{C}_{V}\bigl[ \bigl\vert J_{n}(x,X_{i}) \bigr\vert \bigr] \geq -b_{n}( \epsilon ). \end{aligned}$$

Hence

$$\begin{aligned} & \sum_{i=1}^{n} \inf_{x\in \mathbb{R}} \biggl\{ \mathbb{C}_{V} \biggl[ \varphi \biggl(x+\frac{X_{i}}{n} \biggr) \biggr]-\varphi (x) \biggr\} \\ &\quad \geq \sum_{i=1}^{n} \inf _{x\in \mathbb{R}}T_{n}^{i}(x)+\sum _{i=1}^{n} \inf_{x\in \mathbb{R}} \biggl\{ \frac{1}{n} G \bigl(\varphi '(x), \overline{\mu}, \underline{ \mu } \bigr) \biggr\} \\ &\quad \geq -b_{n}(\epsilon )+\inf_{x\in \mathbb{R}}G \bigl( \varphi '(x), \overline{\mu}, \underline{\mu} \bigr). \end{aligned}$$

This completes the proof. □

For later use we quote from [2] a conclusion related to the function G defined in Lemma 2.

Lemma 3

([2])

Let \(G(x,y,z)\) be the function defined in Lemma 2, that is,

$$ G(x,y,z):=x^{+}y-x^{-}z. $$

Then, for any monotonic function \(\varphi \in C_{b}(\mathbb{R})\),

  (I) \(\inf_{y\in D_{n}}\sup_{x\in \mathbb{R}}G ( \varphi '(x), \overline{\mu}- \frac{1}{n}\sum_{i=1}^{n}y_{i}, \underline{\mu}-\frac{1}{n}\sum_{i=1}^{n}y_{i} )=0\).

  (II) \(\inf_{y\in D_{n}}\inf_{x\in \mathbb{R}}G ( \varphi '(x), \overline{\mu}- \frac{1}{n}\sum_{i=1}^{n}y_{i}, \underline{\mu}-\frac{1}{n}\sum_{i=1}^{n}y_{i} )=0\).

Here, \(D_{n}:= \{ y:=( y_{1}, y_{2}, \ldots , y_{n}): y_{i}\in [ \underline{\mu}, \overline{\mu}], i=1,2,\ldots , n \} \).
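To make Lemma 3 concrete, consider the simplest case (a remark of ours, assuming additionally that φ is nondecreasing and differentiable, so that \(\varphi '(x)\geq 0\)). Writing \(\bar{y}:=\frac{1}{n}\sum_{i=1}^{n}y_{i}\), we have \(\varphi '(x)^{+}=\varphi '(x)\) and \(\varphi '(x)^{-}=0\), hence

$$ G \bigl(\varphi '(x), \overline{\mu}-\bar{y}, \underline{\mu}-\bar{y} \bigr)=\varphi '(x) (\overline{\mu}-\bar{y})\geq 0, \quad \forall y\in D_{n}, $$

since \(\bar{y}\leq \overline{\mu}\). Choosing \(y_{i}=\overline{\mu}\) for all i makes G vanish identically in x, so both the inf–sup in \((I)\) and the inf–inf in \((\mathit{II})\) equal zero. The nonincreasing case is symmetric, with the choice \(y_{i}=\underline{\mu}\).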

4 Main results

In this section, we prove the generalized LLNs under Choquet expectation, assuming convolutional independence and the strengthened first moment condition, in Theorems 1 and 2. Theorem 1 treats monotonic functions \(\varphi \in C_{b}(\mathbb{R})\), and Theorem 2 treats general \(\varphi \in C_{b}(\mathbb{R})\). These two theorems generalize the existing LLNs from the strong iid assumption to much weaker conditions.

Theorem 1

(Generalized LLNs for monotonic functions in \(C_{b}(\mathbb{R})\))

Let \(\mathbb{V}\) be a 2-alternating capacity defined on \({\mathcal{F}}\), and \(\mathbb{C}_{V}\), \(\mathcal{C}_{V}\) be the induced upper and lower Choquet expectations, respectively. Let \(\{X_{i}\}_{i=1}^{\infty }\) be a convolutionally independent sequence of random variables on \((\varOmega , {\mathcal{F}})\) with \(\mathbb{C}_{V}[X_{i}]=\overline{\mu}\), \(\mathcal{C}_{V}[X_{i}]= \underline{\mu}\). Assume that \(\{X_{i}\}_{i=1}^{\infty }\) satisfies the strengthened first moment condition, and set \(S_{n}:= \sum_{i=1}^{n}X_{i}\). Then, for each monotonic function \(\varphi \in C_{b}(\mathbb{R})\),

$$ \begin{gathered} (I)\quad \lim_{n\rightarrow \infty } \mathbb{C}_{V} \biggl[\varphi \biggl(\frac{S_{n}}{n} \biggr) \biggr]=\sup_{\underline{\mu}\leq x\leq \overline{\mu}}\varphi (x); \\ (\mathit{II})\quad \lim_{n\rightarrow \infty }\mathcal{C}_{V} \biggl[ \varphi \biggl(\frac{S_{n}}{n} \biggr) \biggr]=\inf _{ \underline{\mu}\leq x\leq \overline{\mu}}\varphi (x). \end{gathered} $$
(9)

Proof

We decompose the proof into two steps: Step 1 proves the conclusion for any monotonic \(\varphi \in C_{b}^{2}(\mathbb{R})\), and Step 2 extends it to any monotonic \(\varphi \in C_{b}(\mathbb{R})\).

Step 1 (for any monotonic \(\varphi \in C_{b}^{2}(\mathbb{R})\)): Set

$$ D_{n}=\bigl\{ y:=(y_{1},y_{2},\ldots , y_{n}): \underline{\mu}\leq y_{i} \leq \overline{\mu}, i=1,\ldots , n\bigr\} . $$

Obviously, \(\sup_{y\in D_{n}}\varphi (\frac{1}{n}\sum_{i=1}^{n}y_{i} )=\sup_{\underline{\mu}\leq x\leq \overline{\mu}} \varphi (x) \). Thus,

$$ \begin{aligned}[b] &\mathbb{C}_{V} \biggl[\varphi \biggl(\frac{1}{n} S_{n} \biggr) \biggr]- \sup _{\underline{\mu}\leq x\leq \overline{\mu}}\varphi (x) \\ &\quad =\mathbb{C}_{V} \biggl[\varphi \biggl(\frac{1}{n} S_{n} \biggr) \biggr]- \sup_{y\in D_{n}}\varphi \Biggl( \frac{1}{n}\sum_{i=1}^{n}y_{i} \Biggr) \\ &\quad =\inf_{y\in D_{n}} \Biggl\{ \mathbb{C}_{V} \biggl[ \varphi \biggl( \frac{1}{n} S_{n} \biggr) \biggr]-\varphi \Biggl(\frac{1}{n}\sum_{i=1}^{n}y_{i} \Biggr) \Biggr\} . \end{aligned} $$
(10)

Applying Lemma 1, \((I)\) of Lemma 2 and \((I)\) of Lemma 3, we derive, for any \(\epsilon >0\),

$$ \begin{aligned}[b] &\inf_{y\in D_{n}} \Biggl\{ \mathbb{C}_{V} \biggl[\varphi \biggl( \frac{1}{n} S_{n} \biggr) \biggr]-\varphi \Biggl(\frac{1}{n}\sum _{i=1}^{n}y_{i} \Biggr) \Biggr\} \\ &\quad \leq \inf_{y\in D_{n}}\sum_{i=1}^{n} \sup_{x\in \mathbb{R}} \biggl\{ \mathbb{C}_{V} \biggl[\varphi \biggl(x+ \frac{X_{i}-y_{i}}{n} \biggr) \biggr] -\varphi (x ) \biggr\} \\ &\quad \leq \inf_{y\in D_{n}} \Biggl\{ \sup_{x\in \mathbb{R}}G \Biggl( \varphi '(x), \overline{\mu}-\frac{1}{n}\sum _{i=1}^{n}y_{i}, \underline{\mu}- \frac{1}{n}\sum_{i=1}^{n} y_{i} \Biggr)+b_{n}(\epsilon ) \Biggr\} \\ &\quad = \inf_{y\in D_{n}} \sup_{x\in \mathbb{R}}G \Biggl( \varphi '(x), \overline{\mu}-\frac{1}{n}\sum _{i=1}^{n}y_{i}, \underline{\mu}- \frac{1}{n}\sum_{i=1}^{n} y_{i} \Biggr)+b_{n}(\epsilon ) \\ &\quad = b_{n}(\epsilon ). \end{aligned} $$
(11)

Further, the term \(b_{n}(\epsilon )\) in (11) is still a positive constant, and, similarly to inequality (7), we have

$$ \begin{aligned} b_{n}(\epsilon ) &= 2 \bigl\Vert \varphi ' \bigr\Vert \bigl\{ \mathbb{C}_{V} \bigl[ \vert {X_{1}} \vert I_{ \{ \vert {X_{1}} \vert > n \epsilon -\mu \}} \bigr] \\ &\quad{} +\mu \mathbb{C}_{V} [I_{\{ \vert {X_{1}} \vert > n \epsilon - \mu \}} ]\bigr\} +{\epsilon } \bigl\Vert \varphi '' \bigr\Vert \bigl( \mathbb{C}_{V}\bigl[ \vert X_{1} \vert \bigr]+ \mu \bigr) \\ & \to 0, \quad \text{as } n \to \infty , \end{aligned} $$

where the definition of Choquet expectation and the fact that \(\mathbb{C}_{V}[I_{\{|{X_{1}} |>n\epsilon -\mu \}}] \leq \frac{1}{n\epsilon -\mu } \mathbb{C}_{V}[|X_{1}|]\to 0\) as \(n\to \infty \) are used. Thus, combining this inequality with (10) and (11), we have

$$\begin{aligned} \limsup_{n\to \infty }\mathbb{C}_{V} \biggl[\varphi \biggl(\frac{1}{n} S_{n} \biggr) \biggr]\leq \sup_{\underline{\mu}\leq x\leq \overline{\mu}} \varphi (x). \end{aligned}$$
(12)

On the other hand, applying Lemma 1, the second assertion \((\mathit{II})\) of Lemma 2 and the second assertion \((\mathit{II})\) of Lemma 3, we have, for any \(\epsilon >0\),

$$\begin{aligned} & \inf_{y\in D_{n}} \Biggl\{ \mathbb{C}_{V} \biggl[\varphi \biggl( \frac{1}{n} S_{n} \biggr) \biggr]-\varphi \Biggl(\frac{1}{n}\sum _{i=1}^{n}y_{i} \Biggr) \Biggr\} \\ &\quad \geq \inf_{y\in D_{n}}\sum_{i=1}^{n} \inf_{x\in \mathbb{R}} \biggl\{ \mathbb{C}_{V} \biggl[\varphi \biggl(x+ \frac{X_{i}-y_{i}}{n} \biggr) \biggr] -\varphi (x ) \biggr\} \\ &\quad \geq \inf_{y\in D_{n}} \Biggl\{ \inf_{x\in \mathbb{R}}G \Biggl( \varphi '(x), \overline{\mu}-\frac{1}{n}\sum _{i=1}^{n}y_{i}, \underline{\mu}- \frac{1}{n}\sum_{i=1}^{n} y_{i} \Biggr)-b_{n}(\epsilon ) \Biggr\} \\ &\quad = \inf_{y\in D_{n}} \inf_{x\in \mathbb{R}}G \Biggl( \varphi '(x), \overline{\mu}-\frac{1}{n}\sum _{i=1}^{n}y_{i}, \underline{\mu}- \frac{1}{n}\sum_{i=1}^{n} y_{i} \Biggr)-b_{n}(\epsilon ) \\ &\quad =-b_{n}(\epsilon ) \to 0, \quad n \to \infty . \end{aligned}$$

According to (10), we get

$$\begin{aligned} \liminf_{n\to \infty }\mathbb{C}_{V} \biggl[\varphi \biggl(\frac{1}{n} S_{n} \biggr) \biggr]\ge \sup _{\underline{\mu}\leq x\leq \overline{\mu}} \varphi (x). \end{aligned}$$
(13)

Combining (12) with (13), we conclude that \((I)\) holds for any monotonic function \(\varphi \in C_{b}^{2}(\mathbb{R})\).

Step 2: For each monotone \(\varphi \in C_{b}(\mathbb{R})\) and each \(\epsilon >0\), there exists a monotone function \(\overline{\varphi }\in C_{b}^{2}(\mathbb{R})\) such that [12, 13]

$$ \sup_{x\in \mathbb{R}} \bigl\vert \varphi (x)-\overline{\varphi }(x) \bigr\vert \leq \epsilon . $$

Applying Step 1 to the function \(\overline{\varphi }(x)\) and using the sub-linearity of \(\mathbb{C}_{V}\), we obtain

$$\begin{aligned} & \biggl\vert \mathbb{C}_{V} \biggl[\varphi \biggl(\frac{S_{n}}{n} \biggr) \biggr]-\sup_{\underline{\mu}\leq x\leq \overline{\mu}}\varphi (x) \biggr\vert \\ &\quad \leq \biggl\vert \mathbb{C}_{V} \biggl[\varphi \biggl( \frac{S_{n}}{n} \biggr) \biggr]-\mathbb{C}_{V} \biggl[ \overline{ \varphi } \biggl(\frac{S_{n}}{n} \biggr) \biggr] \biggr\vert \\ &\qquad{} + \biggl\vert \mathbb{C}_{V} \biggl[\overline{\varphi } \biggl( \frac{S_{n}}{n} \biggr) \biggr]-\sup_{\underline{\mu}\leq x\leq \overline{\mu}}\overline{ \varphi }(x) \biggr\vert \\ &\qquad{} + \Bigl\vert \sup_{\underline{\mu}\leq x\leq \overline{\mu}} \overline{\varphi }(x)- \sup_{\underline{\mu}\leq x\leq \overline{\mu}} \varphi (x) \Bigr\vert \\ &\quad \leq 2 \epsilon + \biggl\vert \mathbb{C}_{V} \biggl[ \overline{\varphi } \biggl(\frac{S_{n}}{n} \biggr) \biggr]-\sup _{ \underline{\mu}\leq x\leq \overline{\mu}}\overline{\varphi }(x) \biggr\vert . \end{aligned}$$
(14)

Letting \(n\to \infty \) in (14) and then letting \(\epsilon \to 0\), we prove conclusion \((I)\) for any monotone function \(\varphi \in C_{b}(\mathbb{R})\).

The second conclusion \((\mathit{II})\) of this theorem follows directly from a combination of the conjugate property,

$$ \mathcal{C}_{V}\bigl[\varphi (X_{i})\bigr]=- \mathbb{C}_{V}\bigl[-\varphi (X_{i})\bigr], \quad \forall X_{i} \in {\mathcal{F}}, i\ge 1, $$

and the proved conclusion \((I)\). This completes the proof. □
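On a finite sample space, the conjugate relation above can be checked numerically. The sketch below is our own illustration, not part of the paper: the particular capacity (an upper envelope of two probabilities) and all function names are illustrative choices. It computes the discrete (asymmetric) Choquet integral and verifies that the lower integral of \(X\) equals minus the upper integral of \(-X\):

```python
def choquet(values, capacity, omega):
    # Discrete (asymmetric) Choquet integral: sort outcomes by decreasing
    # value; the integral is the smallest value plus the value increments
    # weighted by the capacity of the corresponding upper level sets.
    order = sorted(omega, key=lambda w: values[w], reverse=True)
    vals = [values[w] for w in order]
    total = vals[-1]  # uses capacity(omega) = 1
    for i in range(len(order) - 1):
        total += (vals[i] - vals[i + 1]) * capacity(frozenset(order[: i + 1]))
    return total

omega = frozenset({1, 2, 3})

def V(A):
    # An illustrative upper capacity: the upper envelope of two probabilities.
    p = {1: 0.2, 2: 0.3, 3: 0.5}
    q = {1: 0.4, 2: 0.4, 3: 0.2}
    return max(sum(p[w] for w in A), sum(q[w] for w in A))

def v(A):
    # Conjugate (lower) capacity: v(A) = 1 - V(complement of A).
    return 1.0 - V(omega - A)

X = {1: -1.0, 2: 0.5, 3: 2.0}
lower = choquet(X, v, omega)                              # C_v[X], about 0.2
upper = choquet({w: -x for w, x in X.items()}, V, omega)  # C_V[-X], about -0.2
assert abs(lower + upper) < 1e-12                         # C_v[X] = -C_V[-X]
```

The conjugacy \(\mathcal{C}_{V}[X]=-\mathbb{C}_{V}[-X]\) holds for any capacity with \(V(\Omega )=1\), so this check does not depend on the 2-alternating property of the chosen example.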

By applying Theorem 1 to a special pair of functions \(H_{\delta }(x)\) and \(h_{\delta }(x)\) defined below, which can be viewed as continuous approximations of the discontinuous indicator function \(I_{A}\) from above and from below, respectively, we can infer the following two corollaries concerning the upper and lower Choquet expectations \(\mathbb{C}_{V}\) and \(\mathcal{C}_{V}\), or the induced upper and lower capacities \(V(A)\) and \(v(A)\).

Corollary 1

Let the function \(H_{\delta }(x)\) be defined by

$$ H_{\delta }(x)= \textstyle\begin{cases} 1, & x< \underline{\mu}-\delta , \\ \frac{\underline{\mu}-x}{\delta },& \underline{\mu}-\delta \leq x< \underline{\mu}, \\ 0, & \underline{\mu}\leq x< \overline{\mu}, \\ \frac{x-\overline{\mu}}{\delta }, & \overline{\mu}\leq x < \overline{\mu}+\delta , \\ 1, & x\geq \overline{\mu}+\delta . \end{cases} $$

Then we have

$$ 0\leq \mathbb{C}_{V}[I_{\{\frac{S_{n}}{n}< \underline{\mu}-\delta \} \cup \{ \frac{S_{n}}{n}>\overline{\mu}+\delta \}} ] \leq \mathbb{C}_{V}\biggl[H_{\delta }\biggl(\frac{S_{n}}{n} \biggr)\biggr] \to 0 \quad \textit{as } n\to \infty . $$
(15)

Proof

Notice that we cannot directly apply conclusion \((I) \) of Theorem 1 to prove (15), since the function \(H_{\delta }\) is non-monotonic. However, we can split \(H_{\delta }\) into a sum of monotonic functions, for example,

$$ H_{\delta }(x)=H^{(1)}_{\delta }(x)+H^{(2)}_{\delta }(x) $$

with

$$ H^{(1)}_{\delta }(x)= \textstyle\begin{cases} 1, & x< \underline{\mu}-\delta , \\ \frac{\underline{\mu}-x}{\delta },& \underline{\mu}-\delta \leq x< \underline{\mu}, \\ 0, & x\geq \underline{\mu}, \end{cases}\displaystyle \qquad H^{(2)}_{\delta }(x)= \textstyle\begin{cases} 0, & x< \overline{\mu}, \\ \frac{x-\overline{\mu}}{\delta }, & \overline{\mu}\leq x < \overline{\mu}+\delta , \\ 1, & x\geq \overline{\mu}+\delta . \end{cases} $$

From this, we have

$$ \begin{aligned} 0&\leq \mathbb{C}_{V}[I_{\{\frac{S_{n}}{n}< \underline{\mu}-\delta \} \cup \{ \frac{S_{n}}{n}>\overline{\mu}+\delta \}} ] \quad (\text{by monotonicity}) \\ & \leq \mathbb{C}_{V}\biggl[H_{\delta }\biggl(\frac{S_{n}}{n} \biggr)\biggr] \quad (\text{by monotonicity}) \\ & =\mathbb{C}_{V}\biggl[H^{(1)}_{\delta }\biggl( \frac{S_{n}}{n}\biggr)+H^{(2)}_{\delta }\biggl(\frac{S_{n}}{n} \biggr)\biggr] \quad (\text{by splitting} ) \\ & \leq \mathbb{C}_{V}\biggl[H^{(1)}_{\delta }\biggl( \frac{S_{n}}{n}\biggr)\biggr]+ \mathbb{C}_{V}\biggl[H^{(2)}_{\delta } \biggl(\frac{S_{n}}{n}\biggr)\biggr] \quad (\text{by sublinearity}) \\ & \to \sup_{\underline{\mu}\leq x\leq \overline{\mu}}H^{(1)}_{\delta }(x)+\sup _{\underline{\mu}\leq x\leq \overline{\mu}}H^{(2)}_{\delta }(x) \quad (\text{by Theorem } 1) \\ & =0+0=0, \end{aligned} $$

which implies (15). This completes the proof. □
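The pointwise facts used in the chain above can be sanity-checked numerically. In the sketch below (the concrete values of \(\underline{\mu}\), \(\overline{\mu}\), and δ are our own illustrative choices; the paper keeps them abstract), we verify the splitting \(H_{\delta }=H^{(1)}_{\delta }+H^{(2)}_{\delta }\), the domination of the indicator by \(H_{\delta }\), and that the supremum of \(H_{\delta }\) over \([\underline{\mu}, \overline{\mu}]\) is zero:

```python
# Illustrative parameters (not from the paper).
mu_low, mu_up, delta = 0.0, 1.0, 0.25

def H1(x):  # monotone decreasing piece of H_delta
    if x < mu_low - delta: return 1.0
    if x < mu_low: return (mu_low - x) / delta
    return 0.0

def H2(x):  # monotone increasing piece of H_delta
    if x < mu_up: return 0.0
    if x < mu_up + delta: return (x - mu_up) / delta
    return 1.0

def H(x):   # H_delta from Corollary 1, written as the split H1 + H2
    return H1(x) + H2(x)

def indicator(x):  # I on {x < mu_low - delta} ∪ {x > mu_up + delta}
    return 1.0 if (x < mu_low - delta or x > mu_up + delta) else 0.0

lo, hi = mu_low - 2 * delta, mu_up + 2 * delta
grid = [lo + (hi - lo) * k / 1000 for k in range(1001)]
assert all(indicator(x) <= H(x) for x in grid)                 # I <= H_delta
assert max(H(x) for x in grid if mu_low <= x <= mu_up) == 0.0  # sup on [mu_low, mu_up]
```

These two pointwise facts are exactly what the monotonicity step and the final "by Theorem 1" step of the chain rely on.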

Corollary 2

Assume that \(x^{*}\in [\underline{\mu}, \overline{\mu}]\), that \(\delta >0\) is sufficiently small such that \((x^{*}-\delta , x^{*}+\delta )\subset [\underline{\mu}, \overline{\mu}]\) when \(x^{*}\in (\underline{\mu}, \overline{\mu})\), \((x^{*}-\delta ,x^{*})\subset [\underline{\mu}, \overline{\mu}]\) when \(x^{*}=\overline{\mu}\), or \((x^{*}, x^{*}+\delta )\subset [\underline{\mu}, \overline{\mu}]\) when \(x^{*}=\underline{\mu}\), and that the function \(h_{\delta }(x)\) is defined by

$$ \begin{gathered} h_{\delta }(x)= \textstyle\begin{cases} 1-\frac{(x-x^{*})^{2}}{\delta ^{2}}, & \vert x-x^{*} \vert < \delta , \\ 0, & \textit{others}, \end{cases}\displaystyle \quad \textit{for } x^{*} \in (\underline{\mu}, \overline{\mu}), \\ h_{\delta }(x)= \textstyle\begin{cases} 0, & x< x^{*}-\delta , \\ 1-\frac{(x-x^{*})^{2}}{\delta ^{2}}, & x^{*}-\delta \leq x\leq x^{*}, \\ 1,& x> x^{*}, \end{cases}\displaystyle \quad \textit{for } x^{*}= \overline{\mu}; \\ h_{\delta }(x)= \textstyle\begin{cases} 1,& x< x^{*}, \\ 1-\frac{(x-x^{*})^{2}}{\delta ^{2}}, & x^{*}\leq x\leq x^{*}+\delta , \\ 0, & x> x^{*}+\delta , \end{cases}\displaystyle \quad \textit{for } x^{*}= \underline{\mu}. \end{gathered} $$

Then we have

$$ 1\geq \mathbb{C}_{V}[I_{\{|\frac{S_{n}}{n}-x^{*}|< \delta \}}]\geq \mathbb{C}_{V}\biggl[h_{\delta }\biggl(\frac{S_{n}}{n}\biggr) \biggr] \to 1, \quad \textit{as } n\to \infty . $$
(16)

Proof

Noting that the function \(h_{\delta }\) is non-monotonic when \(x^{*}\in (\underline{\mu}, \overline{\mu})\), we decompose it into a sum of monotonic functions in order to apply Theorem 1; for example,

$$ h_{\delta }(x)=h^{(1)}_{\delta }(x)+h^{(2)}_{\delta }(x)-1 $$

with

$$ h^{(1)}_{\delta }(x)= \textstyle\begin{cases} 0, & x\leq x^{*}-\delta , \\ h_{\delta }, & x^{*}-\delta < x\leq x^{*}, \\ 1, & x>x^{*}, \end{cases}\displaystyle \qquad h^{(2)}_{\delta }(x)= \textstyle\begin{cases} 1, & x\leq x^{*}, \\ h_{\delta }, & x^{*}< x\leq x^{*}+\delta , \\ 0, & x>x^{*}+\delta . \end{cases} $$

From this, we have

$$ \begin{aligned} 1&\geq \mathbb{C}_{V}[I_{\{|\frac{S_{n}}{n}-x^{*}|< \delta \}}] \quad (\text{by monotonicity}) \\ & \geq \mathbb{C}_{V}\biggl[h_{\delta }\biggl(\frac{S_{n}}{n} \biggr)\biggr]\quad (\text{by monotonicity}) \\ & =\mathbb{C}_{V}\biggl[h^{(1)}_{\delta }\biggl( \frac{S_{n}}{n}\biggr)+h^{(2)}_{\delta }\biggl(\frac{S_{n}}{n} \biggr)-1\biggr] \\ & =\mathbb{C}_{V}\biggl[h^{(1)}_{\delta }\biggl( \frac{S_{n}}{n}\biggr)-\biggl(1-h^{(2)}_{\delta }\biggl( \frac{S_{n}}{n}\biggr)\biggr)\biggr] \quad (\text{by splitting} ) \\ & \geq \mathbb{C}_{V}\biggl[h^{(1)}_{\delta }\biggl( \frac{S_{n}}{n}\biggr)\biggr]- \mathbb{C}_{V}\biggl[ \biggl(1-h^{(2)}_{\delta }\biggl(\frac{S_{n}}{n}\biggr)\biggr) \biggr] \quad (\text{by sublinearity}) \\ & \to \sup_{\underline{\mu}\leq x\leq \overline{\mu}}h^{(1)}_{\delta }(x)-\sup _{\underline{\mu}\leq x\leq \overline{\mu}}\bigl(1-h^{(2)}_{\delta }(x)\bigr) \quad ( \text{by Theorem } 1) \\ & =1-0=1, \end{aligned} $$

which implies (16).

When \(x^{*}=\overline{\mu}\) or \(x^{*}=\underline{\mu}\), we can apply Theorem 1 directly to obtain (16), since \(h_{\delta }(x)\) is monotone and continuous in these two cases. This completes the proof. □
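As with Corollary 1, the decomposition used above can be checked numerically. The sketch below (the values of \(x^{*}\) and δ are our own illustrative choices for an interior point \(x^{*}\in (\underline{\mu}, \overline{\mu})\)) verifies the identity \(h_{\delta }=h^{(1)}_{\delta }+h^{(2)}_{\delta }-1\) and the bound \(h_{\delta }\leq I_{\{|x-x^{*}|<\delta \}}\):

```python
# Illustrative parameters for an interior x* (not from the paper).
x_star, delta = 0.5, 0.2

def h(x):   # the quadratic bump h_delta for interior x*
    return 1.0 - (x - x_star) ** 2 / delta ** 2 if abs(x - x_star) < delta else 0.0

def h1(x):  # monotone increasing piece
    if x <= x_star - delta: return 0.0
    if x <= x_star: return h(x)
    return 1.0

def h2(x):  # monotone decreasing piece
    if x <= x_star: return 1.0
    if x <= x_star + delta: return h(x)
    return 0.0

grid = [x_star - 2 * delta + 4 * delta * k / 1000 for k in range(1001)]
# Identity h = h1 + h2 - 1 on the whole grid.
assert all(abs(h(x) - (h1(x) + h2(x) - 1.0)) < 1e-12 for x in grid)
# h_delta is dominated by the indicator of {|x - x*| < delta}.
assert all(h(x) <= (1.0 if abs(x - x_star) < delta else 0.0) for x in grid)
```

The domination is what makes the first inequality of the chain in the proof valid, and the identity is what legitimizes the splitting step.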

Combining Theorem 1, Corollary 1, and Corollary 2, we now present the generalized LLNs for functions in \(C_{b}(\mathbb{R})\), the main result of this article.

Theorem 2

(Generalized LLNs for functions in \(C_{b}(\mathbb{R})\))

Let the assumptions in Theorem 1hold. Then, for any function\(\varphi \in C_{b}(\mathbb{R})\),

$$ \begin{gathered} (I)\quad \lim_{n\rightarrow \infty } \mathbb{C}_{V} \biggl[\varphi \biggl(\frac{S_{n}}{n} \biggr) \biggr]=\sup_{\underline{\mu}\leq x\leq \overline{\mu}}\varphi (x); \\ (\mathit{II})\quad \lim_{n\rightarrow \infty }\mathcal{C}_{V} \biggl[ \varphi \biggl(\frac{S_{n}}{n} \biggr) \biggr]=\inf _{ \underline{\mu}\leq x\leq \overline{\mu}}\varphi (x). \end{gathered} $$
(17)

Proof

Since the conclusions \((I)\) and \((\mathit{II})\) are conjugate, we focus on the proof of \((I)\); \((\mathit{II})\) can be derived analogously.

For the conclusion \((I)\), it suffices to show that

$$ \limsup_{n\to \infty }\mathbb{C}_{V} \biggl[\varphi \biggl( \frac{S_{n}}{n}\biggr)\biggr]\leq \sup _{\underline{\mu}\leq x\leq \overline{\mu}}\varphi (x), $$
(18)

and

$$ \liminf_{n\to \infty }\mathbb{C}_{V} \biggl[\varphi \biggl( \frac{S_{n}}{n}\biggr)\biggr]\geq \sup _{\underline{\mu}\leq x\leq \overline{\mu}}\varphi (x). $$
(19)

Let \(x^{*}\in [\underline{\mu}, \overline{\mu}]\) be a point such that \(\varphi (x^{*})=\sup_{\underline{\mu}\leq x\leq \overline{\mu}}\varphi (x)\). We infer from the continuity of φ that, for any \(\epsilon >0\), there exists a \(\delta >0\) such that

$$ \sup_{\underline{\mu}-\delta \leq x\leq \overline{\mu}+\delta } \varphi (x)\leq \varphi \bigl(x^{*} \bigr)+\epsilon . $$

We also let \(M=\sup_{x\in \mathbb{R}}|\varphi (x)|\). Then, since (15) of Corollary 1 implies

$$ V\biggl(\biggl\{ \frac{S_{n}}{n}> \overline{\mu}+\delta \biggr\} \cup \biggl\{ \frac{S_{n}}{n}< \underline{\mu}-\delta \biggr\} \biggr)\to 0, \quad \text{as } n\to \infty , $$

we obtain

$$ \begin{aligned} &\mathbb{C}_{V}\biggl[\varphi \biggl( \frac{S_{n}}{n}\biggr)\biggr] \\ &\quad \leq \mathbb{C}_{V}\biggl[\varphi \biggl(\frac{S_{n}}{n} \biggr)I_{\{ \underline{\mu}-\delta \leq \frac{S_{n}}{n}\leq \overline{\mu}+\delta \}} \biggr] +\mathbb{C}_{V}\biggl[\varphi \biggl( \frac{S_{n}}{n}\biggr)I_{\{\frac{S_{n}}{n}< \underline{\mu}-\delta \}\cup \{\frac{S_{n}}{n}> \overline{\mu}+\delta \}} \biggr] \\ &\quad \leq \sup_{\underline{\mu}-\delta \leq x\leq \overline{\mu}+\delta }\varphi (x)+M\cdot V\biggl(\biggl\{ \frac{S_{n}}{n}> \overline{\mu}+\delta \biggr\} \cup \biggl\{ \frac{S_{n}}{n}< \underline{\mu}-\delta \biggr\} \biggr) \\ &\quad \to \varphi \bigl(x^{*}\bigr)+\epsilon , \quad \text{as } n\to \infty . \end{aligned} $$

Since \(\epsilon >0\) is arbitrary, this implies (18).

On the other hand, we have

$$ \begin{aligned} &\mathbb{C}_{V}\biggl[\varphi \biggl( \frac{S_{n}}{n}\biggr)\biggr]-\varphi \bigl(x^{*}\bigr) \\ &\quad =\mathbb{C}_{V}\biggl[\biggl(\varphi \biggl(\frac{S_{n}}{n}\biggr)-\varphi \bigl(x^{*}\bigr)\biggr)I_{ \{ \vert \frac{S_{n}}{n}-x^{*} \vert < \delta \}}+\biggl( \varphi \biggl(\frac{S_{n}}{n}\biggr)-\varphi \bigl(x^{*}\bigr) \biggr)I_{\{ \vert \frac{S_{n}}{n}-x^{*} \vert \geq \delta \}}\biggr] \\ &\quad \geq \mathbb{C}_{V}\bigl[-\epsilon -2M\cdot I_{\{ \vert \frac{S_{n}}{n}-x^{*} \vert \geq \delta \}}\bigr] \\ &\quad = -\epsilon -2M\cdot v \biggl( \biggl\vert \frac{S_{n}}{n}-x^{*} \biggr\vert \geq \delta \biggr) \\ &\quad =-\epsilon -2M\cdot \biggl(1-V \biggl( \biggl\vert \frac{S_{n}}{n}-x^{*} \biggr\vert < \delta \biggr)\biggr). \end{aligned} $$

Noticing that

$$ V\biggl( \biggl\vert \frac{S_{n}}{n}-x^{*} \biggr\vert < \delta \biggr)=\mathbb{C}_{V}[I_{\{ \vert \frac{S_{n}}{n}-x^{*} \vert < \delta \}}]\geq \mathbb{C}_{V} \biggl[h_{\delta }\biggl(\frac{S_{n}}{n}\biggr)\biggr] $$

and (16) of Corollary 2, and combining these with the above inequality, we arrive at

$$ 1\geq V\biggl( \biggl\vert \frac{S_{n}}{n}-x^{*} \biggr\vert < \delta \biggr)=\mathbb{C}_{V}[I_{\{ \vert \frac{S_{n}}{n}-x^{*} \vert < \delta \}}]\geq \mathbb{C}_{V}\biggl[h_{\delta }\biggl(\frac{S_{n}}{n}\biggr)\biggr] \to 1, \quad n\to \infty , $$
(20)

and thus

$$ v\biggl( \biggl\vert \frac{S_{n}}{n}-x^{*} \biggr\vert \geq \delta \biggr)=1-V\biggl( \biggl\vert \frac{S_{n}}{n}-x^{*} \biggr\vert < \delta \biggr) \to 0, \quad n\to \infty . $$
(21)

We conclude from the above deduction and (21) that, for all sufficiently large \(n\),

$$ \mathbb{C}_{V}\biggl[\varphi \biggl(\frac{S_{n}}{n}\biggr)\biggr]- \varphi \bigl(x^{*}\bigr)\geq - \epsilon -2M \epsilon , $$

from which, since \(\epsilon >0\) is arbitrary, we infer (19),

$$ \liminf_{n\to \infty }\mathbb{C}_{V}\biggl[\varphi \biggl( \frac{S_{n}}{n}\biggr)\biggr]\geq \sup_{\underline{\mu}\leq x\leq \overline{\mu}}\varphi (x). $$

A combination of the conclusions for (18) and (19) completes the proof of this theorem. □
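The statement of Theorem 2 can be illustrated by a simple Monte Carlo experiment. The sketch below is a heuristic stand-in for the capacity framework, not an implementation of it: it models the upper expectation as a supremum over i.i.d. scenarios whose common mean ranges over \([\underline{\mu}, \overline{\mu}]\), and all parameters are our own illustrative choices. It shows the limit \(\sup_{\underline{\mu}\leq x\leq \overline{\mu}}\varphi (x)\) emerging for a concrete bounded continuous φ:

```python
import random

random.seed(0)
mu_low, mu_up, n, n_paths = -1.0, 1.0, 500, 100

def phi(x):
    # A bounded continuous test function, maximized at x = 0.
    return 1.0 / (1.0 + x * x)

# Upper expectation modeled as a sup over scenarios: each scenario draws
# i.i.d. Gaussians with a fixed mean in [mu_low, mu_up] and unit variance.
best = -float("inf")
for j in range(11):
    mean = mu_low + (mu_up - mu_low) * j / 10
    est = sum(
        phi(sum(random.gauss(mean, 1.0) for _ in range(n)) / n)
        for _ in range(n_paths)
    ) / n_paths
    best = max(best, est)

# Theorem 2 (I) predicts the limit sup over [mu_low, mu_up] of phi,
# which here is phi(0) = 1; `best` should be close to it.
assert 0.95 < best <= 1.0
```

Under each fixed-mean scenario the classical LLN drives \(S_{n}/n\) to that mean, so the supremum over scenarios picks out the mean in \([\underline{\mu}, \overline{\mu}]\) maximizing φ, in line with conclusion \((I)\).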

5 Concluding remark

In this article, we employed elementary techniques, such as Taylor's expansion and the inequalities arising from the sub-linearity of Choquet expectation, under the mild assumptions of convolutional independence and the strengthened first moment condition, to give a new proof for the generalized LLNs under Choquet expectation.

The novel features are: (1) The proof is purely probabilistic without using other artificial tools like characteristic functions or PDEs. This can be viewed as an extension of the Lindeberg–Feller ideas for linear probability theory to the Choquet expectation framework. (2) The proof is accomplished under much weaker conditions; thus we generalize the LLNs under Choquet expectation theory from the strong iid assumptions to convolutional independence combined with the strengthened first moment assumptions. This facilitates the application of Choquet expectation in the simulation of financial phenomena.

References

  1. Augustin, T.: Optimal decisions under complex uncertainty—basic notions and a general algorithm for data-based decision making with partial prior knowledge described by interval probability. Z. Angew. Math. Mech. 84(10–11), 678–687 (2004)

  2. Chen, J.: Law of large numbers under Choquet expectations. In: Abstract and Applied Analysis, vol. 2014. Hindawi (2014)

  3. Choquet, G.: Theory of capacities. In: Annales de l'Institut Fourier, vol. 5, pp. 131–295 (1954)

  4. Doob, J.: Classical Potential Theory and Its Probabilistic Counterpart. Springer, Berlin (1984)

  5. Maccheroni, F., Marinacci, M., et al.: A strong law of large numbers for capacities. Ann. Probab. 33(3), 1171–1178 (2005)

  6. Marinacci, M.: Limit laws for non-additive probabilities and their frequentist interpretation. J. Econ. Theory 84(2), 145–195 (1999)

  7. Schmeidler, D.: Subjective probability and expected utility without additivity. Econometrica 57, 571–587 (1989)

  8. Barnett, W.A., Han, Q., Zhang, J.: Monetary services aggregation under uncertainty: a behavioral economics extension using Choquet expectation. J. Econ. Behav. Organ. (2020, in press)

  9. Chareka, P.: The central limit theorem for capacities. Stat. Probab. Lett. 79(12), 1456–1462 (2009)

  10. Li, W.-J., Chen, Z.-J.: Laws of large numbers of negatively correlated random variables for capacities. Acta Math. Appl. Sin. Engl. Ser. 27(4), 749 (2011)

  11. Denneberg, D.: Non-additive Measure and Integral, vol. 27. Springer, Berlin (2013)

  12. Passow, E.: Piecewise monotone spline interpolation. J. Approx. Theory 12(3), 240–241 (1974)

  13. Baolin, Z.: Piecewise cubic spline interpolation. Numer. Comput. Comput. Appl. Chin. Ed. 3, 157–162 (1983)


Acknowledgements

The authors thank the editor and the referees for constructive and pertinent suggestions.

Availability of data and materials

Data sharing not applicable to this article as no data sets were generated or analysed during the current study.

Funding

This work is supported by Natural Science Foundation of Shandong Province (No. BS2014SF015) and by National Natural Science Foundation of China (No. 11526124).

Author information


Contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Jing Chen.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Chen, J., Chen, Z. A new proof for the generalized law of large numbers under Choquet expectation. J Inequal Appl 2020, 158 (2020). https://doi.org/10.1186/s13660-020-02426-5
