A new proof for the generalized law of large numbers under Choquet expectation
Journal of Inequalities and Applications volume 2020, Article number: 158 (2020)
Abstract
In this article, we employ the elementary inequalities arising from the sub-linearity of Choquet expectation to give a new proof for the generalized law of large numbers under Choquet expectations induced by 2-alternating capacities, with mild assumptions. This generalizes the Lindeberg–Feller methodology of linear probability theory to the Choquet expectation framework and extends the law of large numbers under Choquet expectation from the strong independent and identically distributed (iid) assumption to convolutional independence combined with a strengthened first moment condition.
1 Introduction
As pointed out in [1–7] and the references cited therein, the Choquet expectation provides a strong and convincing way to describe volatility uncertainty phenomena in financial markets, and bridges the gap between the traditional and the newly arising sub-linear expectation theory. Recently, the Choquet expectation has also found application in monetary services aggregation [8]. Therefore, the theoretical study of the law of large numbers (LLNs) and the central limit theorem (CLT) under Choquet expectations has attracted much attention from mathematicians and economists, and substantial progress has been made; see, for example, [2, 9, 10].
Reviewing the literature on LLNs under Choquet expectations, we find that the proofs fall mainly into two categories. One is the direct method, as in [2, 10], in which the purely probabilistic Lindeberg–Feller idea and classical inequality techniques, often applied in the linear expectation case, are borrowed to derive the LLNs under Choquet expectations. The other is the indirect method, as in [9], in which the non-additive Choquet expectation is turned into an additive Lebesgue–Stieltjes integral, so that the existing additivity properties can be used to derive the Choquet LLNs.
A careful check of these existing methods shows that their proofs rely mainly on three assumptions: additivity of expectations, independence, and identical distribution of the random variables. Noting that the latter two, the "iid" assumption for short, are too strong to be verified and utilized in the simulation of real financial practice, one may naturally ask whether there are weaker assumptions under which the desired Choquet LLNs remain valid and their applications are facilitated. The answer is affirmative.
The main goals of this article are: (1) to adopt Choquet expectations induced by 2-alternating capacities as an alternative to the first assumption, the additivity of expectations, in [9], and to replace the independent and identical distribution (iid) condition on the random variables by a convolutional independence condition combined with a strengthened first moment condition, which is much weaker than the iid constraints; (2) to prove the LLNs under Choquet expectation through a purely probabilistic argument and classical inequality techniques. This generalizes the Lindeberg–Feller methodology from linear probability theory to the Choquet expectation framework.
The remaining part of this article is organized as follows. Section 2 recalls the definitions of capacity, Choquet expectation, convolutional independence and the strengthened first moment condition, and concisely presents their properties. In Sect. 3, we provide some useful lemmas based on these definitions, properties and some inequality techniques. Section 4 presents a detailed proof of the LLNs under Choquet expectation through a purely probabilistic argument and classical inequality techniques. We give some concluding remarks in the last section.
2 Preliminary
In this section, we recall the definitions and properties concerning capacities and Choquet expectations.
Let (Ω, \({\mathcal{F}}\)) be a measurable space, \(C_{b}(\mathbb{R})\) be the set of all bounded and continuous functions on \(\mathbb{R}\) and \(C_{b}^{2}(\mathbb{R})\) be the set of functions in \(C_{b}(\mathbb{R})\) with bounded, continuous first and second order derivatives.
Definition 1
([3])
A set function \(\mathbb{V}\): \({\mathcal{F}}\to [0,1]\) is called a capacity, if it satisfies the following properties:

(i) \(\mathbb{V}(\emptyset )=0\), \(\mathbb{V}(\varOmega )=1\);

(ii) \(\mathbb{V}(A)\leq \mathbb{V}(B)\) whenever \(A\subseteq B\), \(A, B \in {\mathcal{F}}\).

In particular, a capacity \(\mathbb{V}\) is 2-alternating if for all \(A, B \in {\mathcal{F}}\),

$$ \mathbb{V}(A\cup B)+\mathbb{V}(A\cap B)\leq \mathbb{V}(A)+ \mathbb{V}(B). $$
Definition 2
([3])
Let X be a random variable on \((\varOmega ,{\mathcal{F}})\). The upper Choquet expectation (integral) of X induced by a capacity \(\mathbb{V}\) on \({\mathcal{F}}\) is defined by

$$ \mathbb{C}_{V}[X]:= \int _{0}^{\infty } \mathbb{V}(X\geq t) \,dt + \int _{-\infty }^{0} \bigl[\mathbb{V}(X\geq t)-1 \bigr] \,dt. $$

The lower Choquet expectation of X induced by \(\mathbb{V}\) is given by

$$ \mathcal{C}_{V}[X]:=-\mathbb{C}_{V}[-X], $$

which is conjugate to the upper expectation and satisfies \(\mathcal{C}_{V}[X]\leq \mathbb{C}_{V}[X]\).
For simplicity, we only consider the upper Choquet expectation in the sequel, since the lower (conjugate) version can be considered similarly.
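For a random variable taking finitely many values, the upper Choquet integral reduces to a finite sum over the decreasing rearrangement of its values. The following sketch (the space, capacity and variable names are illustrative, not from the paper) implements this discrete form; with an additive capacity, i.e. a probability measure, both expectations collapse to the ordinary mean, which is the bridging property mentioned in the introduction.

```python
def choquet_upper(values, capacity):
    """Upper Choquet expectation of a finitely-valued X:
    sum of x_(i) * (V(A_i) - V(A_{i-1})), where A_i is the set of the
    i outcomes with the largest values of X."""
    outcomes = sorted(values, key=values.get, reverse=True)
    total, prev_cap, level = 0.0, 0.0, frozenset()
    for w in outcomes:
        level |= {w}                      # grow the level set {X >= x_(i)}
        total += values[w] * (capacity(level) - prev_cap)
        prev_cap = capacity(level)
    return total

def choquet_lower(values, capacity):
    """Lower (conjugate) Choquet expectation: C_v[X] = -C_V[-X]."""
    return -choquet_upper({w: -x for w, x in values.items()}, capacity)

# With an additive capacity (a probability), both reduce to the mean:
X = {'w1': 1.0, 'w2': 2.0, 'w3': 3.0}
prob = lambda A: len(A) / 3               # uniform probability measure
```

Here `choquet_lower` uses the conjugacy \(\mathcal{C}_{V}[X]=-\mathbb{C}_{V}[-X]\) stated above; under `prob`, both functions return the ordinary mean 2.0.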
Proposition 1
Let X, Y be two random variables on \((\varOmega ,{\mathcal{F}})\), and let \(\mathbb{C}_{V}\) be the upper Choquet expectation induced by a capacity \(\mathbb{V}\). Then we have
(1) monotonicity: \(\mathbb{C}_{V}[X]\geq \mathbb{C}_{V}[Y]\) for \(X\geq Y\);

(2) positive homogeneity: \(\mathbb{C}_{V}[\lambda X]= \lambda \mathbb{C}_{V}[X]\), \(\forall \lambda \geq 0\);

(3) translation invariance: \(\mathbb{C}_{V}[X+a]=\mathbb{C}_{V}[X]+a\), \(\forall a\in \mathbb{R}\).
Compared with Proposition 1, the Choquet expectation induced by a 2-alternating capacity \(\mathbb{V}\) carries more structure than the general Choquet expectation, for example sub-additivity and sub-linearity, as presented in the following proposition.
Proposition 2
Let \(\mathbb{C}_{V}\) be the upper Choquet expectation induced by a 2-alternating capacity \(\mathbb{V}\), and let X, Y be two random variables on \((\varOmega ,{\mathcal{F}})\). Then we have
(1) sub-additivity:

$$ \mathbb{C}_{V}[X+Y]\leq \mathbb{C}_{V}[X]+ \mathbb{C}_{V}[Y]; $$

(2) for any constant \(a \in \mathbb{R}\),

$$ \mathbb{C}_{V}[aX]=a^{+} \mathbb{C}_{V}[X]-a^{-} \mathcal{C}_{V}[X], $$

where \(a^{+}=\max \{ a, 0\}\) and \(a^{-}=\max \{ -a, 0\}\);

(3) sub-linearity:

$$ -\mathbb{C}_{V}\bigl[ \vert Y \vert \bigr]\leq \mathcal{C}_{V}[Y]\leq \mathbb{C}_{V}[X+Y]- \mathbb{C}_{V}[X]\leq \mathbb{C}_{V}[Y]\leq \mathbb{C}_{V}\bigl[ \vert Y \vert \bigr]. $$
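The sub-additivity in Proposition 2 can be checked numerically on a toy example. A minimal sketch, assuming the standard discrete Choquet sum and the illustrative submodular capacity \(V(A)=\sqrt{|A|/3}\) on a three-point space (concave transforms of cardinality are 2-alternating); the variable names and values are not from the paper:

```python
import math

# A 2-alternating (submodular) capacity on a 3-point space.
OMEGA = frozenset({'a', 'b', 'c'})
V = lambda A: math.sqrt(len(A) / len(OMEGA))

def choquet(values, capacity):
    """Discrete upper Choquet sum over outcomes sorted by decreasing value."""
    outcomes = sorted(values, key=values.get, reverse=True)
    total, prev, level = 0.0, 0.0, frozenset()
    for w in outcomes:
        level |= {w}
        total += values[w] * (capacity(level) - prev)
        prev = capacity(level)
    return total

X = {'a': 3.0, 'b': 1.0, 'c': 0.0}
Y = {'a': 0.0, 'b': 2.0, 'c': 5.0}
XY = {w: X[w] + Y[w] for w in OMEGA}
# sub-additivity (1) of Proposition 2: C[X + Y] <= C[X] + C[Y]
sub_additive = choquet(XY, V) <= choquet(X, V) + choquet(Y, V)
```

The same capacity also satisfies the 2-alternating inequality \(V(A\cup B)+V(A\cap B)\leq V(A)+V(B)\) for every pair of sets, which is what drives the sub-additivity.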
Remark 1
Let \(\mathbb{V}\) be a 2-alternating capacity and \(\mathbb{C}_{V}\), \(\mathcal{C}_{V}\) be the induced upper and lower Choquet expectations, respectively. Then

$$ V(A):=\mathbb{C}_{V}[I_{A}], \quad A\in {\mathcal{F}}, $$

defines a capacity via the indicator function \(I_{A}\) of the set A. Further, if \(A^{c}\) is the complementary set of A, then

$$ v(A):=\mathcal{C}_{V}[I_{A}]=1-V\bigl(A^{c}\bigr). $$
Next, we recall the definitions of distribution for random variables.
Definition 3
([2])
Let \(\mathbb{C}_{V}\) be the upper Choquet expectation induced by a capacity \(\mathbb{V}\) on \(\mathcal{F}\), and let X be a random variable on \((\varOmega ,{\mathcal{F}})\). For any function φ on \(\mathbb{R}\) with \(\varphi (X) \in \mathcal{F}\), the distribution of X is characterized by the functional \(\varphi \mapsto \mathbb{C}_{V}[\varphi (X)]\).

Random variables X and Y are called identically distributed if, for any \(\varphi \in C_{b}(\mathbb{R}) \) with \(\varphi (X), \varphi (Y)\in {\mathcal{F}}\),

$$ \mathbb{C}_{V}\bigl[\varphi (X)\bigr]=\mathbb{C}_{V}\bigl[\varphi (Y)\bigr]. $$
Definition 4
([2])
A sequence \(\{X_{i}\}_{i=1}^{\infty } \) of random variables is said to converge in distribution (in law) under the upper Choquet expectation \(\mathbb{C}_{V}\) on \(\mathcal{F}\), if for any \(\varphi \in C_{b}(\mathbb{R}) \) with \(\varphi (X_{i}) \in \mathcal{F}\), \(i \ge 1\), the sequence \(\{\mathbb{C}_{V}[\varphi (X_{i})]\}_{i=1}^{\infty } \) converges.
We conclude this section by reviewing two fundamental concepts, convolution and independence, which play important roles in the development of classical probability theory, and by generalizing them to the Choquet expectation framework.
Convolution and independence in linear probability theory. Let ξ and η be two random variables on the probability space \((\varOmega ,{\mathcal{F}},P)\), and let \(E_{P}[\cdot ]\) be the expectation operator under the probability P. ξ and η are said to be of convolution if, for any bounded function φ,

$$ E_{P}\bigl[\varphi (\xi +\eta )\bigr]=E_{P} \bigl[E_{P}\bigl[\varphi (x+ \eta )\bigr]|_{x=\xi } \bigr]. $$

Fubini's theorem implies that

$$ E_{P} \bigl[E_{P}\bigl[\varphi (x+\eta )\bigr]|_{x=\xi } \bigr]=E_{P} \bigl[E_{P}\bigl[\varphi (\xi +y)\bigr]|_{y=\eta } \bigr]. $$

Obviously, if ξ and η are independent, then ξ and η are of convolution. However, the converse may not be true. From this point of view, convolution is weaker than independence in linear probability theory.
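In the discrete case, the convolution identity and its Fubini-exchanged form are elementary to verify. A small sketch with illustrative finite laws (the values and probabilities below are assumptions, not from the paper):

```python
# xi and eta independent with finite laws: value -> probability.
xi  = {-1: 0.4, 2: 0.6}
eta = { 0: 0.5, 3: 0.5}
phi = lambda t: t * t          # bounded on the finite support

def E(law, f):
    """Linear expectation of f under a finite law."""
    return sum(p * f(x) for x, p in law.items())

# joint (product-measure) expectation E_P[phi(xi + eta)]
lhs = sum(px * py * phi(x + y)
          for x, px in xi.items() for y, py in eta.items())
# iterated form E_P[ E_P[phi(x + eta)] |_{x = xi} ]
rhs = E(xi, lambda x: E(eta, lambda y: phi(x + y)))
# Fubini: the exchanged order E_P[ E_P[phi(xi + y)] |_{y = eta} ]
rhs_swapped = E(eta, lambda y: E(xi, lambda x: phi(x + y)))
```

All three quantities coincide here, exactly as additivity plus Fubini's theorem predicts; under a non-additive Choquet expectation the analogous exchange can fail, which motivates Definition 5 below.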
Motivated by this fact, we attempt to define a convolution-combined independence in place of the strong independence assumption to prove the generalized LLNs under Choquet expectation.
Definition 5
(Convolutional Independence)
Let X and Y be two random variables and \(\{X_{i}\}_{i=1}^{\infty } \) be a sequence of random variables on \((\varOmega , {\mathcal{F}})\).
(i) The random variable X is said to be convolutionally independent of Y, if for any function \(\varphi \in C_{b}(\mathbb{R})\),

$$ \mathbb{C}_{V}\bigl[\varphi (X+Y)\bigr]=\mathbb{C}_{V} \bigl[\mathbb{C}_{V}\bigl[ \varphi (x+Y)\bigr]|_{x=X} \bigr]. $$

(ii) The sequence \(\{X_{i}\}_{i=1}^{\infty } \) is said to be convolutionally independent, if \(X_{i+1}\) is convolutionally independent of \(\sum_{j=1}^{i}X_{j}\) for \(i\geqslant 1\).
Remark 2
A novel feature of this kind of convolution is its asymmetry, or directionality. Under this definition of convolution, the order of the marginal Choquet expectations may not be exchangeable; that is, the following can happen:

$$ \mathbb{C}_{V} \bigl[\mathbb{C}_{V}\bigl[\varphi (x+Y)\bigr]|_{x=X} \bigr] \neq \mathbb{C}_{V} \bigl[\mathbb{C}_{V}\bigl[\varphi (X+y)\bigr]|_{y=Y} \bigr]. $$

Such a property is completely different from the notion of "mutual" independence in linear probability theory, but it is more consistent with financial phenomena.
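The asymmetry can be seen already in a toy sub-linear model. The sketch below replaces the Choquet expectation by an upper expectation given as a maximum over two priors for Y, while X keeps a single (additive) law; this stand-in shares the sub-linearity used here but is an illustrative assumption, not the paper's construction, and all numbers are invented:

```python
# Y = +/-1 with ambiguous law: q = P(Y = 1) ranges over two priors.
# X = +/-1 with the single classical law P(X = 1) = P(X = -1) = 1/2.
PRIORS_Y = (0.3, 0.7)
phi = abs

def upper_E_Y(f):
    """Upper expectation over Y: sup over priors q of E_q[f(Y)]."""
    return max(q * f(1) + (1 - q) * f(-1) for q in PRIORS_Y)

def E_X(f):
    """Plain (additive) expectation over X."""
    return 0.5 * f(1) + 0.5 * f(-1)

# inner expectation over Y first, then over X:
order_1 = E_X(lambda x: upper_E_Y(lambda y: phi(x + y)))   # -> 1.4
# inner expectation over X first, then over Y:
order_2 = upper_E_Y(lambda y: E_X(lambda x: phi(x + y)))   # -> 1.0
```

The two iteration orders give 1.4 and 1.0: taking the ambiguous marginal on the inside lets the supremum adapt to each value of x, so the result is strictly larger, exactly the directionality described in Remark 2.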
According to Definition 3, the verification of identical distribution for random variables X and Y is hard to implement for all \(\varphi \in C_{b}(\mathbb{R})\) in applications. We hope to reduce this complex verification to only those functions satisfying the so-called strengthened first moment condition. Therefore, we have the following definition.
Definition 6
(The strengthened first moment condition)
A sequence \(\{X_{i}\}_{i=1}^{\infty } \) of random variables on \((\varOmega , {\mathcal{F}})\) is said to satisfy the strengthened first moment condition if the following Choquet expectations are finite for \(i\geqslant 1\):
3 Useful lemmas
In this section, we apply the convolutional independence and the strengthened first moment condition proposed in Sect. 2, both obviously weaker than the iid assumption used in the existing methods, to prove two useful lemmas.
Lemma 1
Let \(\{X_{i}\}_{i=1}^{\infty }\) be a sequence of convolutionally independent random variables on \((\varOmega , {\mathcal{F}})\). Let \(\mathbb{V}\) be a 2-alternating capacity defined on \({\mathcal{F}}\), and let \(\mathbb{C}_{V}\), \(\mathcal{C}_{V}\) be the induced upper and lower Choquet expectations, respectively. Then, for any \(\varphi \in C_{b}(\mathbb{R})\) and any constants \(y_{i} \in \mathbb{R}\),
where
Proof
Set \(S_{n}:=\sum_{i=1}^{n} X_{i}\), \(S_{0}=0\).
where
We now estimate the term \(\Delta _{m}\) for \(0 \leq m \leq n-1\). We let \(h(x)=\mathbb{C}_{V} [\varphi (x+X_{n-m}) ]\) and apply the convolutional independence of \(\{X_{i}\}_{i=1}^{n}\) to derive
which, combined with the sub-linearity of \(\mathbb{C}_{V}\) in Proposition 2, implies
On the other hand, we apply the sub-linearity in Proposition 2 again,
That is,
This together with (3) implies that
The desired conclusion (2) then follows directly from the facts that
which completes the proof. □
Further, we estimate the bounds for \(I_{1}\), \(I_{2}\) of Lemma 1 under the strengthened first moment condition.
Lemma 2
Let \(\mathbb{V}\) be a 2-alternating capacity and \(\mathbb{C}_{V}\), \(\mathcal{C}_{V}\) be the induced upper and lower Choquet expectations, respectively. Let the sequence \(\{X_{i}\}_{i=1}^{\infty }\) of random variables satisfy the strengthened first moment condition. Set \(\mathbb{C}_{V}[X_{i}]=\overline{\mu}\) and \(\mathcal{C}_{V}[X_{i}]=\underline{\mu}\), \(i\ge 1\). Then, for each function \(\varphi \in C_{b}^{2}(\mathbb{R})\), there exists a positive constant \(b_{n}(\epsilon )\) with \(b_{n}(\epsilon )\to 0\) as \(n\to \infty \), such that
(I) \(\sum_{i=1}^{n}\sup_{x\in \mathbb{R}} \{ \mathbb{C}_{V} [ \varphi (x+\frac{X_{i}}{n} ) ]-\varphi (x) \} \leq \sup_{x\in \mathbb{R}}G ( \varphi '(x),\overline{\mu}, \underline{\mu} )+b_{n}(\epsilon )\);

(II) \(\sum_{i=1}^{n}\inf_{x\in \mathbb{R}} \{ \mathbb{C}_{V} [ \varphi (x+\frac{X_{i}}{n} ) ]-\varphi (x) \} \ge \inf_{x\in \mathbb{R}} G ( \varphi '(x),\overline{\mu}, \underline{\mu} )-b_{n}(\epsilon )\).

Here \(G (x, \overline{\mu}, \underline{\mu} ):=x^{+} \overline{\mu}-x^{-} \underline{\mu}\) with constants \(\underline{\mu}<\overline{\mu}<\infty \).
Proof
Applying the Taylor expansion, we have, for φ and some \(0 \leq \theta _{1}\leq 1\),

$$ \varphi \biggl(x+\frac{X_{i}}{n} \biggr)=\varphi (x)+\varphi '(x) \frac{X_{i}}{n}+J_{n}(x, X_{i}), $$

where \(J_{n}(x, X_{i}):= (\varphi '(x+\theta _{1}\frac{X_{i}}{n})- \varphi '(x) )\frac{X_{i}}{n}\).
Take the upper Choquet expectation \(\mathbb{C}_{V}\) on both sides of the above equality and apply the sub-linearity of \(\mathbb{C}_{V}\),
Since \(\mathbb{C}_{V}[X_{i}]=\overline{\mu}\), \(\mathcal{C}_{V}[X_{i}]=\underline{\mu}\), \(i=1,2,\ldots n\), we have
Therefore, by the translation invariance of \(\mathbb{C}_{V}\) in Proposition 1,
Take supremum \(\sup_{x\in \mathbb{R}}\) on both sides of (4),
For convenience, denote
From (5), we have
To estimate the right hand term \(\sum_{i=1}^{n}\sup_{x\in \mathbb{R}}\mathbb{C}_{V}[|J_{n}(x,X_{i})|]\) in (6), we apply the strengthened first moment condition
and for any \(\epsilon >0\),
derived from the definition of Choquet expectation [3, 11]. Thus, we have, for any \(\epsilon >0\),
where \(\|\varphi \|:= \sup_{x\in \mathbb{R}} |\varphi (x)|\) and \(0 \leq \theta _{1}\), \(\theta _{2} \leq 1\).
Combining (6) and (7), by the arbitrariness of ϵ, as \(n\to \infty \), we have
With the help of the estimates (8), we finally derive the estimate for the first assertion \((I)\) of this lemma.
Taking infimum \(\inf_{x\in \mathbb{R}}\) on both sides of (4) and arguing similarly as for the first assertion \((I)\), we obtain the estimate for the second assertion \((\mathit{II})\) of this lemma.
Hence
This completes the proof. □
For later use we quote from [2] a conclusion related to the function G defined in Lemma 2.
Lemma 3
([2])
Let \(G(x,y,z)\) be the function defined in Lemma 2, that is,

$$ G(x, y, z):=x^{+} y-x^{-} z. $$

Then, for any monotonic function \(\varphi \in C_{b}(\mathbb{R})\),
(I) \(\inf_{y\in D_{n}}\sup_{x\in \mathbb{R}}G ( \varphi '(x), \overline{\mu}- \frac{1}{n}\sum_{i=1}^{n}y_{i}, \underline{\mu}-\frac{1}{n}\sum_{i=1}^{n}y_{i} )=0\).

(II) \(\inf_{y\in D_{n}}\inf_{x\in \mathbb{R}}G ( \varphi '(x), \overline{\mu}- \frac{1}{n}\sum_{i=1}^{n}y_{i}, \underline{\mu}-\frac{1}{n}\sum_{i=1}^{n}y_{i} )=0\).

Here, \(D_{n}:= \{ y:=( y_{1}, y_{2}, \ldots , y_{n}): y_{i}\in [ \underline{\mu}, \overline{\mu}], i=1,2,\ldots , n \} \).
4 Main results
In this section, we shall prove the generalized LLNs under Choquet expectation, assuming the convolutional independence and strengthened first moment conditions, in Theorem 1 and Theorem 2. Theorem 1 treats monotonic functions \(\varphi \in C_{b}(\mathbb{R})\), and Theorem 2 treats general \(\varphi \in C_{b}(\mathbb{R})\). These two theorems generalize the existing LLNs from the strong iid assumption to these much weaker conditions.
Theorem 1
(Generalized LLNs for monotonic functions in \(C_{b}(\mathbb{R})\))
Let \(\mathbb{V}\) be a 2-alternating capacity defined on \({\mathcal{F}}\), and let \(\mathbb{C}_{V}\), \(\mathcal{C}_{V}\) be the induced upper and lower Choquet expectations, respectively. Let \(\{X_{i}\}_{i=1}^{\infty }\) be a convolutionally independent sequence of random variables on \((\varOmega , {\mathcal{F}})\) with \(\mathbb{C}_{V}[X_{i}]=\overline{\mu}\), \(\mathcal{C}_{V}[X_{i}]= \underline{\mu}\). Assume \(\{X_{i}\}_{i=1}^{\infty }\) satisfies the strengthened first moment condition and set \(S_{n}:= \sum_{i=1}^{n}X_{i}\). Then, for each monotonic function \(\varphi \in C_{b}(\mathbb{R})\),

(I) \(\lim_{n\to \infty }\mathbb{C}_{V} [\varphi (\frac{S_{n}}{n} ) ]=\sup_{\underline{\mu}\leq x\leq \overline{\mu}} \varphi (x)\);

(II) \(\lim_{n\to \infty }\mathcal{C}_{V} [\varphi (\frac{S_{n}}{n} ) ]=\inf_{\underline{\mu}\leq x\leq \overline{\mu}} \varphi (x)\).
Proof
We decompose the proof into two steps: Step 1 proves the conclusion for any monotonic \(\varphi \in C_{b}^{2}(\mathbb{R})\), and Step 2 extends it to any monotonic \(\varphi \in C_{b}(\mathbb{R})\).
Step 1 for any monotonic\(\varphi \in C_{b}^{2}(\mathbb{R})\): Set
Obviously, \(\sup_{y\in D_{n}}\varphi (\frac{1}{n}\sum_{i=1}^{n}y_{i} )=\sup_{\underline{\mu}\leq x\leq \overline{\mu}} \varphi (x) \). Thus,
Applying Lemma 1, \((I)\) of Lemma 2 and \((I)\) of Lemma 3, we derive, for any \(\epsilon >0\),
Further, the term \(b_{n}(\epsilon )\) in (11) is still a positive real-valued constant, and, similarly to inequality (7), we have
where the definition of Choquet expectation and the fact that \(\mathbb{C}_{V}[I_{\{|{X_{1}} |>n\epsilon -\mu \}}] \leq \frac{1}{n\epsilon -\mu } \mathbb{C}_{V}[|X_{1}|]\to 0\) as \(n\to \infty \) are used. Thus, combining this inequality with (10) and (11), we have
On the other hand, we apply Lemma 1, the second assertion \((\mathit{II})\) of Lemma 2 and the second assertion \((\mathit{II})\) of Lemma 3, and we have, for any \(\epsilon >0\),
According to (10), we get
Combining (12) with (13), we can prove that conclusion (I) holds for any monotonic function \(\varphi \in C_{b}^{2}(\mathbb{R})\).
Step 2: for each monotone \(\varphi \in C_{b}(\mathbb{R})\), there exists a monotone function \(\overline{\varphi }\in C_{b}^{2}(\mathbb{R})\) such that [12, 13]
Applying Step 1 to the function \(\overline{\varphi }(x)\) and the sub-linearity of \(\mathbb{C}_{V}\), we obtain
Thus, we prove the conclusion \((I) \) for any monotonic function \(\varphi \in C_{b}(\mathbb{R})\).
The second conclusion \((\mathit{II})\) of this theorem follows directly from a combination of the conjugate property

$$ \mathcal{C}_{V} \biggl[\varphi \biggl(\frac{S_{n}}{n} \biggr) \biggr]=- \mathbb{C}_{V} \biggl[-\varphi \biggl(\frac{S_{n}}{n} \biggr) \biggr] $$
and the proved conclusion \((I)\). This completes the proof. □
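Theorem 1 can be illustrated numerically. The sketch below assumes binary increments \(X_{i}\in \{0,1\}\) and realizes the upper expectation of each convolution step in Definition 5 as a maximum over two priors \(q\in \{0.3, 0.7\}\) for \(P(X_{i}=1)\); this is a sub-linear stand-in for \(\mathbb{C}_{V}\), not the paper's Choquet construction, with \(\overline{\mu}=0.7\) and \(\underline{\mu}=0.3\). For the increasing function \(\varphi (x)=x\) on \([0,1]\), the predicted limit is \(\sup_{0.3\leq x\leq 0.7}\varphi (x)=0.7\).

```python
def upper_E_phi_Sn_over_n(n, phi, q_low=0.3, q_high=0.7):
    """Iterated upper expectation of phi(S_n/n) for convolutionally
    independent binary steps X_i in {0, 1}: a backward dynamic program
    that peels off one X_i per pass, maximizing over the two priors."""
    u = [phi(j / n) for j in range(n + 1)]      # u(x) = phi(x) on the grid j/n
    for _ in range(n):                          # one convolution step per pass
        u = [max(q * u[j + 1] + (1 - q) * u[j] for q in (q_low, q_high))
             for j in range(len(u) - 1)]
    return u[0]                                 # value at S_0 = 0

# Increasing phi picks the upper prior at every step, so the upper
# expectation equals the Binomial(n, 0.7) mean of S_n/n, which is 0.7:
result = upper_E_phi_Sn_over_n(50, lambda x: x)   # -> 0.7 (up to rounding)
```

Because φ is increasing, each maximum selects the weight 0.7 on the larger grid value, so the iterated expectation reproduces the upper mean exactly, already at finite n, in line with conclusion \((I)\).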
Applying Theorem 1 to a special pair of functions \(H_{\delta }(x)\) and \(h_{\delta }(x)\) defined below, which can be viewed as continuous approximations from above and below, respectively, to the discontinuous indicator function \(I_{A}\), we can infer the following two corollaries concerning the upper and lower Choquet expectations \(\mathbb{C}_{V}\) and \(\mathcal{C}_{V}\), or the induced upper and lower capacities \(V(A)\) and \(v(A)\).
Corollary 1
Let the function \(H_{\delta }(x)\) be defined by
Then we have
Proof
We cannot directly apply the conclusion \((I) \) of Theorem 1 to prove (15), since the function \(H_{\delta }\) is non-monotonic. However, we can split \(H_{\delta }\) into a sum of monotonic functions, for example,
with
From this, we have
which implies (15). This completes the proof. □
Corollary 2
Assume that \(x^{*}\in [\underline{\mu}, \overline{\mu}]\), that \(\delta >0\) is sufficiently small such that \((x^{*}-\delta , x^{*}+\delta )\subset [\underline{\mu}, \overline{\mu}]\) when \(x^{*}\in (\underline{\mu}, \overline{\mu})\), \((x^{*}-\delta ,x^{*})\subset [\underline{\mu}, \overline{\mu}]\) when \(x^{*}=\overline{\mu}\), or \((x^{*}, x^{*}+\delta )\subset [\underline{\mu}, \overline{\mu}]\) when \(x^{*}=\underline{\mu}\), and that the function \(h_{\delta }(x)\) is defined by
Then we have
Proof
Noting that the function \(h_{\delta }\) is non-monotonic when \(x^{*}\in (\underline{\mu}, \overline{\mu})\), we decompose it into a sum of monotonic functions in order to apply Theorem 1; for example,
with
From this, we have
which implies (16).
When \(x^{*}=\overline{\mu}\) or \(x^{*}=\underline{\mu}\), we can directly apply Theorem 1 to obtain (16), since \(h_{\delta }(x)\) is monotonic and continuous in these two cases. This completes the proof. □
Combining Theorem 1, Corollary 1 and Corollary 2, we shall present the generalized LLNs for functions in \(C_{b}(\mathbb{R})\), the main result of this article.
Theorem 2
(Generalized LLNs for functions in \(C_{b}(\mathbb{R})\))
Let the assumptions of Theorem 1 hold. Then, for any function \(\varphi \in C_{b}(\mathbb{R})\),

(I) \(\lim_{n\to \infty }\mathbb{C}_{V} [\varphi (\frac{S_{n}}{n} ) ]=\sup_{\underline{\mu}\leq x\leq \overline{\mu}} \varphi (x)\);

(II) \(\lim_{n\to \infty }\mathcal{C}_{V} [\varphi (\frac{S_{n}}{n} ) ]=\inf_{\underline{\mu}\leq x\leq \overline{\mu}} \varphi (x)\).
Proof
Since the conclusions \((I)\) and \((\mathit{II})\) are conjugate, we focus on the proof of \((I)\); \((\mathit{II})\) can be derived analogously.
For the conclusion \((I)\), it suffices to show that

$$ \limsup_{n\to \infty } \mathbb{C}_{V} \biggl[\varphi \biggl( \frac{S_{n}}{n} \biggr) \biggr]\leq \sup_{\underline{\mu}\leq x\leq \overline{\mu}} \varphi (x) \quad (18) $$

and

$$ \liminf_{n\to \infty } \mathbb{C}_{V} \biggl[\varphi \biggl( \frac{S_{n}}{n} \biggr) \biggr]\geq \sup_{\underline{\mu}\leq x\leq \overline{\mu}} \varphi (x). \quad (19) $$
Let \(x^{*}\in [\underline{\mu}, \overline{\mu}]\) be a point such that \(\varphi (x^{*})=\sup_{\underline{\mu}\leq x\leq \overline{\mu}}\varphi (x)\). We infer from the continuity of φ that, for any \(\epsilon >0\), there exists a \(\delta >0\) such that
We also let \(M=\sup_{x\in \mathbb{R}}|\varphi (x)|\). Then, applying (15) of Corollary 1 to
we obtain
This implies (18).
On the other hand, we have
Noticing that
and using (16) of Corollary 2 together with the above inequality, we arrive at
and thus
We conclude from the above deduction,
from which we can infer (19),
A combination of the conclusions for (18) and (19) completes the proof of this theorem. □
5 Concluding remark
In this article, we employed elementary techniques, such as Taylor's expansion and the elementary inequalities arising from the sub-linearity of Choquet expectation, together with the mild assumptions of convolutional independence and the strengthened first moment condition, to give a new proof for the generalized LLNs under Choquet expectation.
The novel features are: (1) the proof is purely probabilistic, without using other tools such as characteristic functions or PDEs; this can be viewed as an extension of the Lindeberg–Feller ideas of linear probability theory to the Choquet expectation framework. (2) The proof is accomplished under much weaker conditions; thus we generalize the LLNs under Choquet expectation from the strong iid assumptions to convolutional independence combined with the strengthened first moment assumption. This facilitates the application of Choquet expectation in the simulation of financial phenomena.
References
Augustin, T.: Optimal decisions under complex uncertainty—basic notions and a general algorithm for data-based decision making with partial prior knowledge described by interval probability. Z. Angew. Math. Mech. 84(10–11), 678–687 (2004)
Chen, J.: Law of large numbers under Choquet expectations. In: Abstract and Applied Analysis, vol. 2014. Hindawi (2014)
Choquet, G.: Theory of capacities. In: Annales de L’institut Fourier, vol. 5, pp. 131–295 (1954)
Doob, J.: Classical Potential Theory and Its Probabilistic Counterpart. Springer, Berlin (1984)
Maccheroni, F., Marinacci, M., et al.: A strong law of large numbers for capacities. Ann. Probab. 33(3), 1171–1178 (2005)
Marinacci, M.: Limit laws for non-additive probabilities and their frequentist interpretation. J. Econ. Theory 84(2), 145–195 (1999)
Schmeidler, D.: Subjective probability and expected utility without additivity. Econometrica 57, 571–587 (1989)
Barnett, W.A., Han, Q., Zhang, J.: Monetary services aggregation under uncertainty: a behavioral economics extension using Choquet expectation. J. Econ. Behav. Organ. (2020, in press)
Chareka, P.: The central limit theorem for capacities. Stat. Probab. Lett. 79(12), 1456–1462 (2009)
Li, W.-J., Chen, Z.-J.: Laws of large numbers of negatively correlated random variables for capacities. Acta Math. Appl. Sin. Engl. Ser. 27(4), 749 (2011)
Denneberg, D.: Non-additive Measure and Integral, vol. 27. Springer, Berlin (2013)
Passow, E.: Piecewise monotone spline interpolation. J. Approx. Theory 12(3), 240–241 (1974)
Baolin, Z.: Piecewise cubic spline interpolation. Numer. Comput. Comput. Appl. Chin. Ed. 3, 157–162 (1983)
Acknowledgements
The authors thank the editor and the referees for constructive and pertinent suggestions.
Availability of data and materials
Data sharing not applicable to this article as no data sets were generated or analysed during the current study.
Funding
This work is supported by Natural Science Foundation of Shandong Province (No. BS2014SF015) and by National Natural Science Foundation of China (No. 11526124).
Contributions
All authors contributed equally to this work. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Chen, J., Chen, Z. A new proof for the generalized law of large numbers under Choquet expectation. J Inequal Appl 2020, 158 (2020). https://doi.org/10.1186/s13660-020-02426-5
Keywords
- Law of large numbers
- Choquet expectation
- Convolutional independence
- The strengthened first moment condition
- New proof