Open Access

On Jensen’s inequality, Hölder’s inequality, and Minkowski’s inequality for dynamically consistent nonlinear evaluations

Journal of Inequalities and Applications 2015, 2015:152

https://doi.org/10.1186/s13660-015-0677-5

Received: 4 November 2014

Accepted: 24 April 2015

Published: 7 May 2015

Abstract

In this paper, the dynamically consistent nonlinear evaluations introduced by Peng are considered in the probability space \(L^{2} (\Omega,{\mathcal{F}}, ({\mathcal {F}}_{t} )_{t\geq0},P )\). We investigate the n-dimensional (\(n\geq1\)) Jensen inequality, Hölder inequality, and Minkowski inequality for dynamically consistent nonlinear evaluations in \(L^{1} (\Omega,{\mathcal{F}}, ({\mathcal {F}}_{t} )_{t\geq0},P )\). Furthermore, we give four equivalent conditions on the n-dimensional Jensen inequality for g-evaluations induced by backward stochastic differential equations with non-uniform Lipschitz coefficients in \(L^{p} (\Omega,{\mathcal{F}}, ({\mathcal {F}}_{t} )_{0\leq t\leq T},P )\) (\(1< p\leq2\)). Finally, we give a sufficient condition on g, satisfying the non-uniform Lipschitz condition, under which Hölder’s inequality and Minkowski’s inequality for the corresponding g-evaluation hold true. These results include and extend some existing results.

Keywords

dynamically consistent nonlinear evaluation; g-evaluation; g-expectation; Jensen’s inequality; Hölder’s inequality; Minkowski’s inequality

1 Introduction

It is well known that (see Peng [1, 2]) a dynamically consistent nonlinear evaluation in probability space \(L^{2} (\Omega,{\mathcal {F}}, ({\mathcal{F}}_{t} )_{t\geq0},P )\), where \(\{{\mathcal {F}}_{t}\}_{t\geq0}\) is a given filtration, is a system of operators:
$${\mathcal{E}}_{s,t}[X]:X\in L^{2}(\Omega,{ \mathcal{F}}_{t},P)\mapsto L^{2}(\Omega,{\mathcal{F}}_{s},P), \quad 0\leq s\leq t< \infty, $$
which satisfies the following properties:
  1. (i)

    \({\mathcal{E}}_{s,t}[X_{1}]\geq{\mathcal {E}}_{s,t}[X_{2}]\), if \(X_{1}\geq X_{2}\);

     
  2. (ii)

    \({\mathcal{E}}_{t,t}[X]=X\);

     
  3. (iii)

    \({\mathcal{E}}_{r,s}[{\mathcal {E}}_{s,t}[X]]={\mathcal{E}}_{r,t}[X]\), if \(0\leq r\leq s\leq t<\infty\);

     
  4. (iv)

    \(1_{A}{\mathcal{E}}_{s,t}[X]=1_{A}{\mathcal {E}}_{s,t}[1_{A}X]\), \(\forall A\in{\mathcal{F}}_{s}\).

     

Of course, we can define this notion in \(L^{1} (\Omega,{\mathcal {F}}, ({\mathcal{F}}_{t} )_{t\geq0},P )\).

In a financial market, the evaluation of the discounted value of a derivative is often treated as a dynamically consistent nonlinear evaluation (expectation). The well-known g-evaluation (g-expectation) induced by backward stochastic differential equations (BSDEs for short), put forward by Peng, is a special case of a dynamically consistent nonlinear evaluation (expectation). Nonlinear BSDEs were first introduced by Pardoux and Peng [3], who proved the existence and uniqueness of adapted solutions when the coefficient g is Lipschitz in \((y,z)\) uniformly in \((t,\omega)\), under square-integrability assumptions on the coefficient \(g(t,\omega,y,z)\) and the terminal condition ξ. Later many researchers developed the theory of BSDEs and their applications in a series of papers (see, for example, Hu and Peng [4], Lepeltier and San Martin [5], El Karoui et al. [6], Pardoux [7, 8], Briand et al. [9] and the references therein) under other assumptions on the coefficients, but for a fixed terminal time \(T>0\). In 2000, Chen and Wang [10] obtained an existence and uniqueness theorem for \(L^{2}\) solutions of infinite time interval BSDEs (\(T=\infty\)) by means of the martingale representation theorem and a fixed point theorem. Recently, Zong [11] obtained a result on \(L^{p}\) (\(1< p<2\)) solutions of infinite time interval BSDEs. One of its special cases is an existence and uniqueness theorem for BSDEs with non-uniformly Lipschitz coefficients.

The original motivation for studying nonlinear evaluations (expectations) and g-evaluations (g-expectations) comes from the theory of expected utility, which is the foundation of modern mathematical economics. Chen and Epstein [12] gave an application of dynamically consistent nonlinear evaluations (expectations) to recursive utility; Peng [1, 2, 13–15] and Rosazza Gianin [16] investigated applications of dynamically consistent nonlinear evaluations (expectations) and g-evaluations (g-expectations) to static and dynamic pricing mechanisms and risk measures.

Since the notions of nonlinear evaluation (expectation) and g-evaluation (g-expectation) were introduced, many properties of the nonlinear evaluation (expectation) and g-evaluation (g-expectation) have been studied in [1, 2, 6, 10–31]. In [1, 2], Peng obtained an important result: he proved that if a dynamically consistent nonlinear evaluation \({\mathcal {E}}_{s,t}[\cdot]\) can be dominated by a kind of g-evaluation, then \({\mathcal{E}}_{s,t}[\cdot]\) must be a g-evaluation. Thus, in this case, many problems on dynamically consistent nonlinear evaluations \({\mathcal{E}}_{s,t}[\cdot]\) can be solved through the theory of BSDEs.

It is well known that Jensen’s inequality for classical mathematical expectations holds in general; this is a very important property with many important applications. But for nonlinear expectations, even for the special case of g-expectations, Briand et al. [17] showed that Jensen’s inequality usually does not hold. Hence, under the assumption that g is continuous with respect to t, several papers, such as [18, 19, 25, 27, 28], have been devoted to Jensen’s inequality for g-expectations; with the help of the theory of BSDEs, they obtained necessary and sufficient conditions under which Jensen’s inequality for g-expectations holds in general. Under the assumptions that g does not depend on y and is convex, Chen et al. [18, 19] studied Jensen’s inequality for g-expectations and gave a necessary and sufficient condition on g under which Jensen’s inequality holds for convex functions. Assuming only that g does not depend on y, Jiang and Chen [28] gave another necessary and sufficient condition on g under which Jensen’s inequality holds for convex functions, improving the result of Chen et al. Later, this result was further improved by Hu [25] and Jiang [27]; in fact, Jiang [27] showed that g must be independent of y. In addition, Fan [22] studied Jensen’s inequality for filtration-consistent nonlinear expectations without a domination condition. Jia [26] studied the n-dimensional (\(n>1\)) Jensen inequality for g-expectations and showed that it holds if and only if g is independent of y and linear with respect to z; in other words, the corresponding g-expectation must be linear. This raises a natural question:

For a more general dynamically consistent nonlinear evaluation \({\mathcal{E}}_{s,t}[\cdot]\), what are the necessary and sufficient conditions under which Jensen’s inequality for \({\mathcal{E}}_{s,t}[\cdot]\) holds in general? Roughly speaking, what conditions on \({\mathcal{E}}_{s,t}[\cdot]\) are equivalent to the inequality
$${\mathcal {E}}_{s,t}\bigl[\varphi(\xi)\bigr]\geq\varphi\bigl({ \mathcal{E}}_{s,t}[\xi]\bigr) \quad \mbox{a.s.} $$
holding for any convex function \(\varphi: \mathcal{R}\mapsto\mathcal{R}\)?

One of the objectives of this paper is to investigate this problem. At the same time, this paper will also investigate the sufficient and necessary conditions on \({\mathcal {E}}_{s,t}[\cdot]\) under which the n-dimensional (\(n>1\)) Jensen inequality holds. As applications of these two results, we give four equivalent conditions on the 1-dimensional Jensen inequality and the n-dimensional (\(n>1\)) Jensen inequality for g-evaluations induced by BSDEs with non-uniform Lipschitz coefficients in \(L^{p} (\Omega,{\mathcal{F}}, ({\mathcal {F}}_{t} )_{0\leq t\leq T},P )\) (\(1< p\leq2\)), respectively.

The remainder of this paper is organized as follows: In Section 2, we study the n-dimensional (\(n\geq1\)) Jensen inequality, Hölder inequality, and Minkowski inequality for dynamically consistent nonlinear evaluations in \(L^{1} (\Omega,{\mathcal{F}}, ({\mathcal{F}}_{t} )_{t\geq0},P )\). In Section 3, we give four equivalent conditions on the 1-dimensional Jensen inequality and the n-dimensional (\(n>1\)) Jensen inequality for g-evaluations induced by BSDEs with non-uniform Lipschitz coefficients in \(L^{p} (\Omega,{\mathcal{F}}, ({\mathcal {F}}_{t} )_{0\leq t\leq T},P )\) (\(1< p\leq2\)), respectively. These results generalize the known results on Jensen’s inequality for g-expectations in [18, 19, 22, 25–28, 31]. In Section 4, we give a sufficient condition on g that satisfies the non-uniform Lipschitz condition under which Hölder’s inequality and Minkowski’s inequality for the corresponding g-evaluation hold true.

2 Jensen’s inequality, Hölder’s inequality, and Minkowski’s inequality for dynamically consistent nonlinear evaluations

Let \((\Omega,{\mathcal{F}},P)\) be a probability space carrying a standard d-dimensional Brownian motion \((B_{t})_{t\geq0}\), and let \(({\mathcal{F}}_{t} )_{t\geq0}\) be the natural filtration generated by \((B_{t} )_{t\geq0}\). We always assume that \(({\mathcal{F}}_{t} )_{t\geq0}\) is complete. Let \(T > 0\) be a given real number. In this paper, we always work in the probability space \((\Omega,{\mathcal{F}}_{T},P)\) and only consider processes indexed by \(t\in[0, T ]\). We denote by \(L^{p}(\Omega,{\mathcal{F}}_{t} ,P)\) (\(p\geq1\)) the space of \({\mathcal {F}}_{t}\)-measurable random variables satisfying \(E_{P}[|X|^{p}]<\infty\), and by \(L^{p}_{+}(\Omega,{\mathcal{F}}_{t} ,P)\) the space of non-negative random variables in \(L^{p}(\Omega,{\mathcal{F}}_{t} ,P)\). Let \(1_{A}\) denote the indicator of an event A. For notational simplicity, we write \(L^{p}({\mathcal{F}}_{t}):= L^{p}(\Omega,{\mathcal{F}}_{t} ,P)\) and \(L^{p}_{+}({\mathcal{F}}_{t}):=L^{p}_{+}(\Omega,{\mathcal{F}}_{t} ,P)\). For the convenience of the reader, we recall the notion of a dynamically consistent nonlinear evaluation, defined in \(L^{2}({\mathcal{F}}_{T})\) in Peng [1, 2], but defined here in \(L^{1}({\mathcal{F}}_{T})\).

Definition 2.1

An \({\mathcal{F}}_{t}\)-consistent nonlinear evaluation in \(L^{1}({\mathcal{F}}_{T})\) is a system of operators:
$${\mathcal{E}}_{s,t}[X]:X\in L^{1}({\mathcal{F}}_{t}) \mapsto L^{1}({\mathcal {F}}_{s}), \quad 0\leq s\leq t\leq T, $$
which satisfies the following properties:
  1. (A.1)

    monotonicity: \({\mathcal {E}}_{s,t}[X_{1}]\geq{\mathcal{E}}_{s,t}[X_{2}]\), if \(X_{1}\geq X_{2}\);

     
  2. (A.2)

    \({\mathcal{E}}_{t,t}[X]=X\);

     
  3. (A.3)

    dynamical consistency: \({\mathcal {E}}_{r,s}[{\mathcal{E}}_{s,t}[X]]={\mathcal{E}}_{r,t}[X]\), if \(0\leq r\leq s\leq t\leq T\);

     
  4. (A.4)

    zero one law: \(1_{A}{\mathcal {E}}_{s,t}[X]=1_{A}{\mathcal{E}}_{s,t}[1_{A}X]\), \(\forall A\in{\mathcal{F}}_{s}\).

     

First, we consider Jensen’s inequality for \({\mathcal{F}}_{t}\)-consistent nonlinear evaluations. We have the following results.

Theorem 2.1

Suppose that \({\mathcal {E}}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\), is an \({\mathcal {F}}_{t}\)-consistent nonlinear evaluation in \(L^{1}({\mathcal{F}}_{T})\). Then the following two statements are equivalent:
  1. (i)
    Jensen’s inequality for \({\mathcal{F}}_{t}\)-consistent evaluation \({\mathcal{E}}_{s,t}[\cdot]\) holds in general, i.e., for each convex function \(\varphi: \mathcal{R}\mapsto\mathcal{R}\) and \(\xi\in L^{1}({\mathcal{F}}_{t})\), if \(\varphi(\xi)\in L^{1}({\mathcal{F}}_{t})\), then we have
    $${\mathcal{E}}_{s,t}\bigl[\varphi(\xi)\bigr]\geq\varphi\bigl({\mathcal {E}}_{s,t}[\xi]\bigr) \quad \textit{a.s.}; $$
     
  2. (ii)

    \(\forall(\xi,a,b)\in L^{1}({\mathcal{F}}_{t})\times \mathcal{R}\times\mathcal{R}\), \({\mathcal{E}}_{s,t}[a\xi+b]\geq a{\mathcal{E}}_{s,t}[\xi]+b\) a.s.

     

Proof

First, we prove that (i) implies (ii). Suppose (i) holds. For each \((\xi,a, b)\in L^{1}({\mathcal {F}}_{t} )\times\mathcal{R} \times\mathcal{R}\), let \(\varphi(x):=ax + b\). Obviously, \(\varphi\) is a convex function and \(\varphi(\xi)\in L^{1}({\mathcal {F}}_{t} )\); hence
$${\mathcal{E}}_{s,t}[a\xi+b]={\mathcal {E}}_{s,t}\bigl[\varphi( \xi)\bigr]\geq\varphi\bigl({\mathcal{E}}_{s,t}[\xi]\bigr)= a{ \mathcal{E}}_{s,t}[\xi]+b\quad \mbox{a.s.} $$
Next, we prove that (ii) implies (i). Suppose (ii) holds. For each \((\xi,a, b)\in L^{1}({\mathcal{F}}_{t} )\times \mathcal{R} \times\mathcal{R}\), we have
$$ {\mathcal{E}}_{s,t}[a\xi+b]\geq a{\mathcal {E}}_{s,t}[\xi]+b \quad \mbox{a.s.} $$
(2.1)
On the other hand, any convex function \(\varphi: \mathcal{R}\mapsto\mathcal{R}\) is the supremum of countably many affine functions: there exists a countable set \(\mathcal{D}\subseteq\mathcal{R}^{2}\) such that
$$ \varphi(x)=\sup_{(a,b)\in \mathcal{D}}(ax+b). $$
(2.2)
In view of (2.1), for any \((a,b)\in\mathcal{D}\), we have
$${\mathcal{E}}_{s,t}\bigl[\varphi(\xi)\bigr]\geq{\mathcal {E}}_{s,t}[a\xi+b]\geq a{\mathcal{E}}_{s,t}[\xi]+b \quad \mbox{a.s.}, $$
which implies (i) in view of (2.2). □
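The countable affine representation (2.2) is easy to verify numerically. Below is a minimal sketch, assuming \(\varphi(x)=x^{2}\) as the convex function (an illustrative choice, not from the text); its tangent line with slope a is \(x\mapsto ax-a^{2}/4\), and the supremum of these affine minorants recovers φ.

```python
import numpy as np

# Sketch of representation (2.2), assuming phi(x) = x**2 as the convex
# function. Its tangent line with slope a is x -> a*x - a**2/4, so
# phi(x) = sup over (a, b) in D of (a*x + b) with b = -a**2/4.
slopes = np.linspace(-10.0, 10.0, 2001)   # countable family of slopes a
intercepts = -slopes**2 / 4.0             # b = -a**2/4 for each tangent

def sup_affine(x):
    # supremum of the affine minorants at the point x
    return np.max(slopes * x + intercepts)

xs = np.linspace(-4.0, 4.0, 81)
err = max(abs(x**2 - sup_affine(x)) for x in xs)
print(err)  # tiny: the affine envelope recovers phi up to grid resolution
```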

Theorem 2.2

Suppose that \({\mathcal {E}}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\), is an \({\mathcal {F}}_{t}\)-consistent nonlinear evaluation in \(L^{1}({\mathcal{F}}_{T})\) and that \(n>1\). Then the following two statements are equivalent:
  1. (i)
    the n-dimensional Jensen inequality for an \({\mathcal{F}}_{t}\)-consistent evaluation \({\mathcal{E}}_{s,t}[\cdot]\) holds in general, i.e., for each convex function \(\varphi: \mathcal{R}^{n}\mapsto\mathcal{R}\) and \(\xi_{i}\in L^{1}({\mathcal{F}}_{t})\) (\(i=1,2,\ldots,n\)), if \(\varphi(\xi_{1},\xi_{2},\ldots,\xi_{n})\in L^{1}({\mathcal{F}}_{t})\), then we have
    $${\mathcal{E}}_{s,t} \bigl[\varphi(\xi_{1},\xi_{2}, \ldots,\xi_{n}) \bigr]\geq \varphi \bigl({\mathcal {E}}_{s,t}[ \xi_{1}],{\mathcal{E}}_{s,t}[\xi_{2}],\ldots,{ \mathcal {E}}_{s,t}[\xi_{n}] \bigr) \quad \textit{a.s.}; $$
     
  2. (ii)
    \({\mathcal{E}}_{s,t}\) is linear, i.e.,
    1. (a)

      \({\mathcal{E}}_{s,t}[\lambda X]=\lambda{\mathcal {E}}_{s,t}[X]\) a.s., \(\forall(X,\lambda)\in L^{1}({\mathcal{F}}_{t})\times \mathcal{R}\);

       
    2. (b)

      \({\mathcal{E}}_{s,t}[X+Y]={\mathcal {E}}_{s,t}[X]+{\mathcal{E}}_{s,t}[Y]\) a.s., \(\forall(X,Y)\in L^{1}({\mathcal{F}}_{t})\times L^{1}({\mathcal{F}}_{t})\);

       
    3. (c)

      \({\mathcal{E}}_{s,t}[\mu]=\mu\) a.s., \(\forall\mu\in \mathcal{R}\).

       
     

Proof

We prove (i) implies (ii).

First, we prove (i) implies (ii)(a). For each \((X,\lambda)\in L^{1}({\mathcal{F}}_{t})\times\mathcal{R}\), let \(\varphi(x_{1},x_{2},\ldots,x_{n}):=\lambda x_{1}\) and \(\xi_{1}:=X\). Obviously, \(\varphi(x_{1},x_{2},\ldots,x_{n})\) is a convex function and \(\varphi(\xi_{1},\xi_{2},\ldots,\xi_{n})\in L^{1}({\mathcal{F}}_{t} )\), then we have
$$ {\mathcal{E}}_{s,t}[\lambda X]={\mathcal{E}}_{s,t} \bigl[ \varphi(\xi _{1},\xi_{2},\ldots,\xi_{n}) \bigr]\geq \varphi \bigl({\mathcal {E}}_{s,t}[\xi_{1}],{ \mathcal{E}}_{s,t}[\xi_{2}],\ldots,{\mathcal {E}}_{s,t}[\xi_{n}] \bigr)=\lambda{\mathcal{E}}_{s,t}[X] \quad \mbox{a.s.} $$
(2.3)
On the other hand, let \(\varphi(x_{1},x_{2},\ldots,x_{n}):=x_{1}-(\lambda-1)x_{2}\), \(\xi_{1}:=\lambda X\), and \(\xi_{2}:=X\). By (i), we can deduce that
$$\begin{aligned} {\mathcal {E}}_{s,t}[X] =&{\mathcal {E}}_{s,t} \bigl[\varphi( \xi_{1},\xi_{2},\ldots,\xi_{n}) \bigr]\geq\varphi \bigl({\mathcal {E}}_{s,t}[\xi_{1}],{\mathcal{E}}_{s,t}[ \xi_{2}],\ldots,{\mathcal {E}}_{s,t}[\xi_{n}] \bigr) \\ =&{\mathcal{E}}_{s,t}[\lambda X]-(\lambda-1){\mathcal{E}}_{s,t}[X]\quad \mbox{a.s.}, \end{aligned}$$
i.e.,
$$ {\mathcal{E}}_{s,t}[\lambda X]\leq\lambda{\mathcal {E}}_{s,t}[X] \quad \mbox{a.s.} $$
(2.4)
It follows from (2.3) and (2.4) that (ii)(a) holds true.
Next we prove (ii)(b) holds. For each \((X,Y)\in L^{1}({\mathcal {F}}_{t})\times L^{1}({\mathcal{F}}_{t})\), let \(\varphi(x_{1},x_{2},\ldots,x_{n}):=x_{1}+x_{2}\), \(\xi_{1}:=X\), and \(\xi_{2}:=Y\), then we have
$$\begin{aligned} {\mathcal{E}}_{s,t}[X+Y] =&{\mathcal{E}}_{s,t} \bigl[\varphi( \xi_{1},\xi _{2},\ldots,\xi_{n}) \bigr]\geq\varphi \bigl({\mathcal {E}}_{s,t}[\xi_{1}],{\mathcal{E}}_{s,t}[ \xi_{2}],\ldots,{\mathcal {E}}_{s,t}[\xi_{n}] \bigr) \\ =&{\mathcal{E}}_{s,t}[X]+{\mathcal {E}}_{s,t}[Y] \quad \mbox{a.s.} \end{aligned}$$
(2.5)
On the other hand, let \(\varphi(x_{1},x_{2},\ldots,x_{n}):=x_{1}-x_{2}\), \(\xi_{1}:=X+Y\), and \(\xi_{2}:=Y\). By (i), we have
$$\begin{aligned} {\mathcal {E}}_{s,t}[X] =&{\mathcal {E}}_{s,t} \bigl[\varphi( \xi_{1},\xi_{2},\ldots,\xi_{n}) \bigr]\geq\varphi \bigl({\mathcal {E}}_{s,t}[\xi_{1}],{\mathcal{E}}_{s,t}[ \xi_{2}],\ldots,{\mathcal {E}}_{s,t}[\xi_{n}] \bigr) \\ =&{\mathcal{E}}_{s,t}[X+Y]-{\mathcal {E}}_{s,t}[Y] \quad \mbox{a.s.}, \end{aligned}$$
i.e.,
$$ {\mathcal{E}}_{s,t}[X+Y]\leq{\mathcal {E}}_{s,t}[X]+{ \mathcal{E}}_{s,t}[Y] \quad \mbox{a.s.} $$
(2.6)
Thus, from (2.5) and (2.6), we can see that (ii)(b) holds.
Finally, we prove (ii)(c) holds. For each \(\mu\in\mathcal{R}\), let \(\varphi(x_{1},x_{2},\ldots,x_{n}):=\mu\), then we have
$$ {\mathcal{E}}_{s,t}[\mu]={\mathcal{E}}_{s,t} \bigl[\varphi( \xi_{1},\xi _{2},\ldots,\xi_{n}) \bigr]\geq\varphi \bigl({\mathcal {E}}_{s,t}[\xi_{1}],{\mathcal{E}}_{s,t}[ \xi_{2}],\ldots,{\mathcal {E}}_{s,t}[\xi_{n}] \bigr)= \mu \quad \mbox{a.s.} $$
(2.7)
On the other hand, let \(\varphi(x_{1},x_{2},\ldots,x_{n}):=2x_{1}-\mu\) and \(\xi_{1}:=\mu\). By (i), we can obtain
$${\mathcal {E}}_{s,t}[\mu]={\mathcal {E}}_{s,t} \bigl[\varphi( \xi_{1},\xi_{2},\ldots,\xi_{n}) \bigr]\geq\varphi \bigl({\mathcal {E}}_{s,t}[\xi_{1}],{\mathcal{E}}_{s,t}[ \xi_{2}],\ldots,{\mathcal {E}}_{s,t}[\xi_{n}] \bigr)=2{\mathcal{E}}_{s,t}[\mu]-\mu\quad \mbox{a.s.}, $$
i.e.,
$$ {\mathcal{E}}_{s,t}[\mu]\leq\mu\quad \mbox{a.s.} $$
(2.8)
It follows from (2.7) and (2.8) that (ii)(c) holds true.
In the following, we prove that (ii) implies (i). Suppose (ii) holds. For any \((a_{1},a_{2},\ldots,a_{n},b)\in\mathcal{R}^{n+1}\) and \(\xi_{i}\in L^{1}({\mathcal{F}}_{t})\) (\(i=1,2,\ldots,n\)), we have
$$ {\mathcal{E}}_{s,t} \Biggl[\sum_{i=1}^{n}a_{i} \xi_{i}+b \Biggr]= \sum_{i=1}^{n}a_{i}{ \mathcal{E}}_{s,t}[\xi_{i}]+b \quad \mbox{a.s.} $$
(2.9)
On the other hand, any convex function \(\varphi: \mathcal{R}^{n}\mapsto\mathcal{R}\) is the supremum of countably many affine functions: there exists a countable set \(\mathcal{D}\subseteq\mathcal{R}^{n+1}\) such that
$$ \varphi(x_{1},x_{2},\ldots,x_{n})=\sup _{(a_{1},a_{2},\ldots,a_{n},b)\in \mathcal{D}} \Biggl(\sum_{i=1}^{n}a_{i}x_{i}+b \Biggr). $$
(2.10)
In view of (2.9), for any \((a_{1},a_{2},\ldots,a_{n},b)\in\mathcal{D}\), we have
$${\mathcal {E}}_{s,t}\bigl[\varphi(\xi_{1},\xi_{2}, \ldots,\xi_{n})\bigr]\geq{\mathcal {E}}_{s,t} \Biggl[\sum _{i=1}^{n}a_{i} \xi_{i}+b \Biggr]= \sum_{i=1}^{n}a_{i}{ \mathcal{E}}_{s,t}[\xi_{i}]+b \quad \mbox{a.s.}, $$
which implies (i) in view of (2.10). □

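Theorem 2.2 says that the n-dimensional Jensen inequality forces linearity. As a numerical illustration, consider the static sublinear evaluation \(\mathcal{E}[X]:=\max(E_{P_{1}}[X],E_{P_{2}}[X])\), an assumption chosen only for this sketch, with \(P_{1}\), \(P_{2}\) two Gaussian laws: it is monotone but not linear, and already the 2-dimensional Jensen inequality with the convex function \(\varphi(x_{1},x_{2})=x_{1}+x_{2}\) fails.

```python
import numpy as np

# A static, sublinear evaluation (illustrative assumption): the upper
# expectation over two Gaussian laws, ev(X) = max(E_{P1}[X], E_{P2}[X]).
rng = np.random.default_rng(0)
s = rng.normal(size=200000)

def ev(f):
    # P1: N(0,1); P2: N(1,1), realised through the same samples
    return max(np.mean(f(s)), np.mean(f(s + 1.0)))

# phi(x1, x2) = x1 + x2 is convex, so the 2-dimensional Jensen
# inequality would require ev(X + Y) >= ev(X) + ev(Y).
X = lambda t: t
Y = lambda t: -t
lhs = ev(lambda t: X(t) + Y(t))   # ev of the constant 0, equals 0
rhs = ev(X) + ev(Y)               # roughly 1 + 0 = 1
print(lhs, rhs)                   # lhs < rhs: the inequality fails
```

This is consistent with (ii): only a linear evaluation can satisfy the n-dimensional Jensen inequality for all convex φ.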
The basic version of Hölder’s inequality for the classical mathematical expectation \(E_{P}\) defined in \((\Omega,{\mathcal{F}}_{T},P)\) reads
$$ E_{P}[XY]\leq\bigl(E_{P}\bigl[X^{p}\bigr] \bigr)^{\frac{1}{p}}\bigl(E_{P}\bigl[Y^{q}\bigr] \bigr)^{\frac{1}{q}}, $$
(2.11)
where X, Y are non-negative random variables on \((\Omega,{\mathcal {F}}_{T},P)\) and \(1< p, q<\infty\) are conjugate exponents, i.e., \(\frac{1}{p}+\frac{1}{q}=1\). One may proceed in the following way (cf., e.g., Krein et al. [32], p.43). By elementary calculus, one verifies
$$ab=\inf_{r>0} \biggl(\frac{r^{p}}{p}a^{p}+ \frac{r^{-q}}{q}b^{q} \biggr) $$
for any constant \(a, b\geq0\). This yields \(XY\leq\frac{r^{p}}{p}X^{p}+\frac{r^{-q}}{q}Y^{q}\) a.s. for any \(r>0\). Taking the expectation yields \(E_{P}[XY]\leq\frac{r^{p}}{p}E_{P}[X^{p}]+\frac{r^{-q}}{q}E_{P}[Y^{q}]\) for any \(r>0\), and taking the infimum with respect to r again we arrive at (2.11).
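The elementary infimum identity above can be checked numerically; a minimal sketch, assuming \(p=3\) and \(q=3/2\) as sample conjugate exponents:

```python
import numpy as np

# Check ab = inf_{r>0} (r**p/p * a**p + r**(-q)/q * b**q) on a grid,
# assuming p = 3 and q = 3/2 (conjugate exponents: 1/p + 1/q = 1).
p, q = 3.0, 1.5
rs = np.linspace(1e-3, 10.0, 100000)

def inf_expr(a, b):
    return np.min(rs**p / p * a**p + rs**(-q) / q * b**q)

errs = [abs(inf_expr(a, b) - a * b)
        for a, b in [(0.5, 2.0), (1.0, 1.0), (3.0, 0.25)]]
print(max(errs))  # small: the infimum over r recovers the product ab
```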

By the above argument, we have the following Hölder inequality for \({\mathcal{F}}_{t}\)-consistent nonlinear evaluations.

Theorem 2.3

Suppose that \({\mathcal {E}}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\) is an \({\mathcal {F}}_{t}\)-consistent nonlinear evaluation in \(L^{1}({\mathcal{F}}_{T})\). If \({\mathcal{E}}_{s,t}[\cdot]\) satisfies the following conditions:
  1. (d)

    \({\mathcal{E}}_{s,t}[\xi+\eta]\leq{\mathcal {E}}_{s,t}[\xi]+{\mathcal{E}}_{s,t}[\eta]\) a.s., \(\forall(\xi,\eta)\in L^{1}_{+}({\mathcal{F}}_{t})\times L^{1}_{+}({\mathcal{F}}_{t})\);

     
  2. (e)

    \({\mathcal{E}}_{s,t}[\lambda \xi]\leq\lambda{\mathcal{E}}_{s,t}[\xi]\) a.s., \(\forall\xi\in L^{1}_{+}({\mathcal{F}}_{t})\), \(\lambda\geq0\),

     
then, for any \(X,Y\in L^{1}({\mathcal{F}}_{t})\) and \(|X|^{p}, |Y|^{q}\in L^{1}({\mathcal{F}}_{t})\) (\(p, q>1\) and \(1/p+1/q=1\)), we have
$${\mathcal{E}}_{s,t}\bigl[\vert XY\vert \bigr]\leq \bigl({\mathcal {E}}_{s,t}\bigl[|X|^{p}\bigr] \bigr)^{\frac{1}{p}} \bigl({ \mathcal {E}}_{s,t}\bigl[|Y|^{q}\bigr] \bigr)^{\frac{1}{q}} \quad \textit{a.s.} $$

Similarly, we have the following Minkowski inequality for \({\mathcal {F}}_{t}\)-consistent nonlinear evaluations.

Theorem 2.4

Suppose that \({\mathcal {E}}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\) is an \({\mathcal {F}}_{t}\)-consistent nonlinear evaluation in \(L^{1}({\mathcal{F}}_{T})\). If \({\mathcal{E}}_{s,t}[\cdot]\) satisfies the following conditions:
  1. (d)

    \({\mathcal{E}}_{s,t}[\xi+\eta]\leq{\mathcal {E}}_{s,t}[\xi]+{\mathcal{E}}_{s,t}[\eta]\) a.s., \(\forall(\xi,\eta)\in L^{1}_{+}({\mathcal{F}}_{t})\times L^{1}_{+}({\mathcal{F}}_{t})\);

     
  2. (e)

    \({\mathcal{E}}_{s,t}[\lambda \xi]\leq\lambda{\mathcal{E}}_{s,t}[\xi]\) a.s., \(\forall\xi\in L^{1}_{+}({\mathcal{F}}_{t})\), \(\lambda\geq0\),

     
then, for any \(X,Y\in L^{1}({\mathcal{F}}_{t})\) and \(|X|^{p},|Y|^{p}\in L^{1}({\mathcal{F}}_{t})\) (\(p>1\)), we have
$$ \bigl({\mathcal {E}}_{s,t}\bigl[|X+Y|^{p}\bigr] \bigr)^{\frac{1}{p}}\leq\bigl({\mathcal {E}}_{s,t}\bigl[|X|^{p} \bigr]\bigr)^{\frac{1}{p}}+\bigl({\mathcal {E}}_{s,t} \bigl[|Y|^{p}\bigr]\bigr)^{\frac{1}{p}} \quad \textit{a.s.} $$
(2.12)

Proof

Consider the function \(h:[0,\infty)\times[0,\infty)\mapsto[0,\infty)\) given by
$$ h(x_{1},x_{2})= \bigl(x_{1}^{\frac{1}{p}}+x_{2}^{\frac{1}{p}} \bigr)^{p}= \inf_{r\in\mathcal{Q}\cap(0,1)} \bigl\{ r^{1-p}x_{1}+(1-r)^{1-p}x_{2} \bigr\} , $$
(2.13)
where \(\mathcal{Q}\) is the set of all rational numbers in \(\mathcal{R}\); the infimum over \(\mathcal{Q}\cap(0,1)\) coincides with that over \((0,1)\) by continuity. Let \(x_{1}:=|X|^{p}\) and \(x_{2}:=|Y|^{p}\). From (2.13), we have
$$\bigl(\vert X\vert +|Y|\bigr)^{p}\leq r^{1-p}|X|^{p}+(1-r)^{1-p}|Y|^{p} \quad \mbox{a.s.} $$
for all \(r\in\mathcal{Q}\cap(0,1)\). It follows from (d) and (e) that
$${\mathcal{E}}_{s,t}\bigl[\bigl(\vert X\vert +|Y|\bigr)^{p}\bigr]\leq r^{1-p}{\mathcal {E}}_{s,t}\bigl[|X|^{p}\bigr]+(1-r)^{1-p}{\mathcal{E}}_{s,t}\bigl[|Y|^{p}\bigr] \quad \mbox{a.s.} $$
for all \(r\in\mathcal{Q}\cap(0,1)\). Taking the infimum with respect to r in \(\mathcal{Q}\cap(0,1)\) and applying (2.13) once more, we obtain
$${\mathcal {E}}_{s,t}\bigl[\bigl(\vert X\vert +|Y|\bigr)^{p}\bigr]\leq \bigl\{ \bigl({\mathcal {E}}_{s,t}\bigl[|X|^{p}\bigr]\bigr)^{\frac{1}{p}}+\bigl({\mathcal {E}}_{s,t}\bigl[|Y|^{p}\bigr]\bigr)^{\frac{1}{p}} \bigr\} ^{p} \quad \mbox{a.s.} $$
Since \(|X+Y|\leq|X|+|Y|\), the monotonicity property (A.1) yields (2.12). □
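The infimum representation of \(h\) behind this proof can also be checked numerically; a minimal sketch, assuming \(p=2.5\) as a sample exponent:

```python
import numpy as np

# Check (x1**(1/p) + x2**(1/p))**p = inf over r in (0,1) of
# r**(1-p)*x1 + (1-r)**(1-p)*x2, assuming p = 2.5 as a sample exponent.
p = 2.5
rs = np.linspace(1e-4, 1.0 - 1e-4, 200001)

def h(x1, x2):
    return (x1**(1 / p) + x2**(1 / p))**p

def inf_form(x1, x2):
    return np.min(rs**(1 - p) * x1 + (1 - rs)**(1 - p) * x2)

errs = [abs(h(x1, x2) - inf_form(x1, x2))
        for x1, x2 in [(1.0, 1.0), (0.2, 3.0), (5.0, 0.5)]]
print(max(errs))  # small: the infimum over r recovers h(x1, x2)
```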

3 Jensen’s inequality for g-evaluations

In this section, first, we present some notations, notions, and propositions which are useful in this paper.

Let
$$\begin{aligned}& {\mathcal{S}}^{p}(0,t;P;\mathcal{R}):=\Bigl\{ V:V_{s} \mbox{ is } \mathcal{R}\mbox{-valued } {\mathcal{F}}_{s}\mbox{-adapted continuous process with} \\& \hphantom{{\mathcal{S}}^{p}(0,t;P;\mathcal{R}):={}}E_{P}\Bigl[\sup_{0\leq s\leq t}|V_{s}|^{p} \Bigr]< \infty\Bigr\} , \\& {\mathcal{S}}(0,t;P;\mathcal{R}):=\bigcup_{p>1}{ \mathcal {S}}^{p}(0,t;P;\mathcal{R}) , \\& L^{p}\bigl(0,t;P;\mathcal{R}^{d}\bigr):=\biggl\{ V:V_{s} \mbox{ is } \mathcal{R}^{d}\mbox{-valued and } { \mathcal{F}}_{s} \mbox{-adapted process with} \\& \hphantom{L^{p}\bigl(0,t;P;\mathcal{R}^{d}\bigr):={}}E_{P}\biggl[ \biggl(\int_{0}^{t}|V_{s}|^{2} \, \mathrm{d}s\biggr)^{\frac{p}{2}}\biggr]<\infty\biggr\} , \\& {\mathcal{L}}\bigl(0,t;P;\mathcal{R}^{d}\bigr):=\bigcup _{p>1}L^{p}\bigl(0,t;P;\mathcal{R}^{d} \bigr) , \\& {\mathcal{M}}^{p}(0,t;P;\mathcal{R}):=\biggl\{ V:V_{s} \mbox{ is } \mathcal{R} \mbox{-valued } {\mathcal{F}}_{s} \mbox{-adapted process with} \\& \hphantom{{\mathcal{M}}^{p}(0,t;P;\mathcal{R}):={}}E_{P}\biggl[\biggl(\int _{0}^{t}|V_{s}|\, \mathrm{d}s \biggr)^{p}\biggr]<\infty\biggr\} , \\& {\mathcal{M}}(0,t;P;\mathcal{R}):=\bigcup_{p>1}{ \mathcal {M}}^{p}(0,t;P;\mathcal{R}) \end{aligned}$$
and
$${\mathcal{L}}({\mathcal{F}}_{t}):=\bigcup _{p>1}L^{p}({\mathcal{F}}_{t}). $$
For each \(t\in[0,T ]\), we consider the following BSDE with terminal time t:
$$ y_{s}=X+\int_{s}^{t} g(r,y_{r},z_{r})\, \mathrm{d}r-\int_{s}^{t} z_{r} \cdot\mathrm{d}B_{r}, \quad s\in[0,t]. $$
(3.1)
Here the function g:
$$g(\omega,t,y,z):\Omega\times[0,T]\times\mathcal{R}\times\mathcal {R}^{d}\mapsto\mathcal{R} $$
satisfies the following assumptions:
  1. (B.1)
    there exist two non-negative deterministic functions \(\alpha(t)\) and \(\beta(t)\) such that for all \(y_{1},y_{2}\in\mathcal{R}\), \(z_{1},z_{2}\in\mathcal{R}^{d}\),
    $$\bigl\vert g(t,y_{1},z_{1})-g(t,y_{2},z_{2}) \bigr\vert \leq\alpha(t)|y_{1}-y_{2}|+\beta(t)|z_{1}-z_{2}|, \quad \forall t\in[0,T], $$
    where \(\alpha(t)\) and \(\beta(t)\) satisfy \(\int_{0}^{T}\alpha^{2}(t)\, \mathrm{d}t<\infty\), \(\int_{0}^{T}\beta^{2}(t)\, \mathrm{d}t<\infty\);
     
  2. (B.2)

    \(g(t,0,0)\in{\mathcal{M}}(0,t;P;\mathcal{R})\);

     
  3. (B.3)

    \(g(t,y,0)=0\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., \(\forall y\in\mathcal{R}\).

     

It is well known that (see Zong [11]) if we suppose that the function g satisfies (B.1) and (B.2), then for each given \(X\in{\mathcal {L}}({\mathcal{F}}_{t})\), there exists a unique solution \((Y^{X},Z^{X})\in {\mathcal{S}}(0,t;P;\mathcal{R})\times{\mathcal{L}}(0,t;P;\mathcal {R}^{d})\) of BSDE (3.1).

Example 3.1

For each given \(\xi\in{\mathcal {L}}({\mathcal{F}}_{T})\), the BSDE
$$y_{t}=\xi+\int_{t}^{T} \biggl( \frac{1}{\sqrt[5]{s}}y_{s}+\frac{1}{\sqrt[8]{T-s}}|z_{s}| \biggr)\, \mathrm{d}s-\int_{t}^{T} z_{s}\cdot \mathrm{d}B_{s}, \quad t\in[0,T], $$
has a unique solution in \({\mathcal{S}}(0,T;P;\mathcal{R})\times{\mathcal {L}}(0,T;P;\mathcal{R}^{d})\).
We denote \({\mathcal{E}}^{g} _{s,t}[X] :=Y_{s}^{X}\). We thus define a system of operators:
$${\mathcal{E}}^{g}_{s,t}[X]:X\in{\mathcal{L}}({ \mathcal{F}}_{t})\mapsto {\mathcal{L}}({\mathcal{F}}_{s}), \quad 0\leq s\leq t\leq T. $$
This system is completely determined by the above given function g. We have the following.

Proposition 3.1

We assume that the function g satisfies (B.1) and (B.2). Then the system of operators \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\) is an \({\mathcal {F}}_{t}\)-consistent nonlinear evaluation defined in \({\mathcal {L}}({\mathcal{F}}_{T})\).

The proof of Proposition 3.1 is very similar to that of Corollary 2.9 in [13], so we omit it.

Remark 3.1

From Proposition 3.1, we know that the dynamically consistent nonlinear evaluation \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\) is completely determined by the given function g. Thus, we call \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\) a g-evaluation.
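To see a g-evaluation concretely, here is a minimal backward-induction sketch on a binomial tree, an illustrative discretisation of BSDE (3.1) and not a construction from the text, assuming \(d=1\), the driver \(g(t,y,z)=\beta|z|\), and terminal value \(X=B_{T}\). For this driver one checks directly that \(y_{t}=B_{t}+\beta(T-t)\), \(z_{t}=1\) solves the BSDE, so \({\mathcal{E}}^{g}_{0,T}[B_{T}]=\beta T\), which the scheme reproduces.

```python
import numpy as np

# Backward induction for BSDE (3.1) on a binomial tree (illustrative
# assumptions: d = 1, driver g(t, y, z) = beta*|z|, terminal value B_T).
beta, T, N = 0.5, 1.0, 2000
dt = T / N
sdt = np.sqrt(dt)

# terminal layer: B_T = (2*j - N)*sqrt(dt) at node j = 0..N
y = (2 * np.arange(N + 1) - N) * sdt

for k in range(N - 1, -1, -1):
    up, down = y[1:k + 2], y[:k + 1]
    z = (up - down) / (2 * sdt)                    # martingale-part estimate
    y = 0.5 * (up + down) + beta * np.abs(z) * dt  # y_k = E[y_{k+1}] + g(z)*dt
print(y[0])  # approx. beta*T = 0.5
```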

Definition 3.1

(g-Expectation) (see Zong [11])

Suppose that the function g satisfies (B.1) and (B.3). The g-expectation \({\mathcal{E}}_{g}[\cdot]:{\mathcal{L}}({\mathcal {F}}_{T})\mapsto\mathcal{R}\) is defined by \({\mathcal {E}}_{g}[\xi]=Y_{0}^{\xi}\).

Definition 3.2

(Conditional g-expectation) (see Zong [11])

Suppose that the function g satisfies (B.1) and (B.3). The conditional g-expectation of ξ with respect to \({\mathcal{F}}_{t}\) is defined by \({\mathcal{E}}_{g}[\xi|{\mathcal{F}}_{t}]=Y_{t}^{\xi}\).

Proposition 3.2

(see Zong [11])

\({\mathcal {E}}_{g}[\xi|{\mathcal{F}}_{t}]\) is the unique random variable η in \({\mathcal{L}}({\mathcal{F}}_{t})\) such that
$${\mathcal{E}}_{g}[1_{A}\xi]={\mathcal{E}}_{g}[1_{A} \eta], \quad \forall A\in{\mathcal{F}}_{t}. $$

Proposition 3.3

For any \(\xi_{n}\in{\mathcal {L}}({\mathcal{F}}_{t})\), if \(\lim_{n\rightarrow\infty}\xi_{n}=\xi\) a.s. and \(|\xi_{n}|\leq\eta\) a.s. with \(\eta\in{\mathcal{L}}({\mathcal{F}}_{t})\), then for \(0\leq s\leq t\leq T\),
$$\lim_{n\rightarrow\infty}{\mathcal{E}}^{g}_{s,t}[ \xi_{n}]={\mathcal {E}}^{g}_{s,t}[\xi] \quad \textit{a.s.} $$

The proof of Proposition 3.3 is very similar to that of Theorem 3.1 in Hu and Chen [24], so we omit it.

In the following, we study Jensen’s inequality for g-evaluations. First, we introduce some notions on g.

Definition 3.3

Let \(g: \Omega\times[0,T]\times \mathcal{R}\times\mathcal{R}^{d}\mapsto\mathcal{R}\). The function g is said to be super-homogeneous if for each \((y,z)\in\mathcal{R}\times\mathcal {R}^{d}\) and \(\lambda\in\mathcal{R}\), \(g(t,\lambda y,\lambda z)\geq\lambda g(t,y,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. The function g is said to be positively homogeneous if for each \((y,z)\in\mathcal{R}\times \mathcal{R}^{d}\) and \(\lambda\geq0\), \(g(t,\lambda y,\lambda z)=\lambda g(t,y,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. The function g is said to be sub-additive if, for any \((y,z), (\overline{y},\overline{z})\in\mathcal{R}\times\mathcal{R}^{d}\), \(g(t,y+\overline{y},z+\overline{z})\leq g(t,y,z) +g(t,\overline{y},\overline{z})\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. The function g is said to be super-additive if, for any \((y,z), (\overline{y},\overline{z})\in\mathcal{R}\times\mathcal{R}^{d}\), \(g(t,y+\overline{y},z+\overline{z})\geq g(t,y,z) +g(t,\overline{y},\overline{z})\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s.
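These properties are easy to test on a concrete driver. A minimal sampled check, assuming the sample driver \(g(t,z)=|z|\) (an illustrative choice, independent of y): it is positively homogeneous, sub-additive, and super-homogeneous, the last because \(g\geq0\) and \(g(\lambda z)=|\lambda|g(z)\geq\lambda g(z)\).

```python
import numpy as np

# Sampled check of Definition 3.3 for the sample driver g(t, z) = |z|
# (an illustrative choice, independent of y).
rng = np.random.default_rng(1)
g = lambda z: np.linalg.norm(z)

ok = True
for _ in range(1000):
    z, w = rng.normal(size=3), rng.normal(size=3)
    lam = rng.normal()
    ok &= bool(np.isclose(g(abs(lam) * z), abs(lam) * g(z)))  # pos. homogeneous
    ok &= bool(g(z + w) <= g(z) + g(w) + 1e-12)               # sub-additive
    ok &= bool(g(lam * z) >= lam * g(z) - 1e-12)              # super-homogeneous
print(ok)
```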

Theorem 3.1

Suppose that \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\), is a g-evaluation. Then the following three statements are equivalent:
  1. (i)
    Jensen’s inequality for g-evaluation \({\mathcal{E}}^{g}_{s,t}[\cdot]\) holds in general, i.e., for each convex function \(\varphi(x): \mathcal{R}\mapsto\mathcal{R}\) and each \(\xi\in{\mathcal{L}}({\mathcal{F}}_{t})\), if \(\varphi(\xi)\in{\mathcal {L}}({\mathcal{F}}_{t})\), then we have
    $${\mathcal{E}}^{g}_{s,t}\bigl[\varphi(\xi)\bigr]\geq\varphi \bigl({\mathcal {E}}^{g}_{s,t}[\xi]\bigr) \quad \textit{a.s.}; $$
     
  2. (ii)

    \(\forall(\xi,a,b)\in{\mathcal{L}}({\mathcal {F}}_{t})\times\mathcal{R}\times\mathcal{R}\), \({\mathcal {E}}^{g}_{s,t}[a\xi+b]\geq a{\mathcal{E}}^{g}_{s,t}[\xi]+b\) a.s.;

     
  3. (iii)

    g is independent of y and super-homogeneous with respect to z.

     

Theorem 3.2

Suppose that \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\), is a g-evaluation. Then the following three statements are equivalent:
  1. (i)
    the n-dimensional (\(n>1\)) Jensen inequality for the g-evaluation \({\mathcal{E}}^{g}_{s,t}[\cdot]\) holds in general, i.e., for each convex function \(\varphi: \mathcal{R}^{n}\mapsto\mathcal{R}\) and \(\xi_{i}\in{\mathcal{L}}({\mathcal{F}}_{t})\) (\(i=1,2,\ldots,n\)), if \(\varphi(\xi_{1},\xi_{2},\ldots,\xi_{n})\in{\mathcal{L}}({\mathcal{F}}_{t})\), then we have
    $${\mathcal{E}}^{g}_{s,t} \bigl[\varphi(\xi_{1}, \xi_{2},\ldots,\xi_{n}) \bigr]\geq \varphi \bigl({\mathcal {E}}^{g}_{s,t}[\xi_{1}],{\mathcal{E}}^{g}_{s,t}[ \xi_{2}],\ldots,{\mathcal {E}}^{g}_{s,t}[ \xi_{n}] \bigr) \quad \textit{a.s.}; $$
     
  2. (ii)

    \({\mathcal{E}}^{g}_{s,t}\) is linear in \({\mathcal {L}}({\mathcal{F}}_{t})\);

     
  3. (iii)

    g is independent of y and linear with respect to z, i.e., g is of the form \(g(t,y,z)=g(t,z)=\alpha_{t}\cdot z\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., \(\forall(y,z)\in\mathcal{R}\times\mathcal{R}^{d}\), where α is an \(\mathcal{R}^{d}\)-valued progressively measurable process.

     

In order to prove Theorems 3.1 and 3.2, we need the following lemmas. These lemmas can be found in Zong and Hu [33].

Lemma 3.1

Suppose that the function g satisfies (B.1) and (B.2). Then the following three conditions are equivalent:
  1. (i)

    The function g is independent of y.

     
  2. (ii)
    The corresponding dynamically consistent nonlinear evaluation \({\mathcal{E}}^{g}[\cdot]\) satisfies: for each \(0\leq s\leq t\leq T\), \({\mathcal{F}}_{t}\) measurable simple function X and \(y\in\mathcal{R}\),
    $${\mathcal{E}}^{g}_{s,t}[X+y]={\mathcal{E}}^{g}_{s,t}[X]+y\quad \textit{a.s.} $$
     
  3. (iii)
    The corresponding dynamically consistent nonlinear evaluation \({\mathcal{E}}^{g}[\cdot]\) satisfies: for each \(0\leq s\leq t\leq T\), \(X\in{\mathcal{L}}({\mathcal{F}}_{t})\), and \(\eta \in {\mathcal{L}}({\mathcal{F}}_{s})\),
    $${\mathcal{E}}^{g}_{s,t}[X+\eta]={\mathcal{E}}^{g}_{s,t}[X]+ \eta \quad \textit{a.s.} $$
     

Lemma 3.2

Suppose that the function g satisfies (B.1) and (B.2). Then the following three conditions are equivalent:
  1. (i)

    The function g is positively homogeneous.

     
  2. (ii)
    The corresponding dynamically consistent nonlinear evaluation \({\mathcal{E}}^{g}[\cdot]\) satisfies: for each \(0\leq s\leq t\leq T\), \(\lambda\geq0\), and \({\mathcal{F}}_{t}\) measurable simple function X,
    $${\mathcal{E}}^{g}_{s,t}[\lambda X]=\lambda{ \mathcal{E}}^{g}_{s,t}[X] \quad \textit{a.s.} $$
     
  3. (iii)
    The corresponding dynamically consistent nonlinear evaluation \({\mathcal{E}}^{g}[\cdot]\) is positively homogeneous: for each \(0\leq s\leq t\leq T\), \(\lambda\geq0\), and \(X\in{\mathcal{L}}({\mathcal{F}}_{t})\),
    $${\mathcal{E}}^{g}_{s,t}[\lambda X]=\lambda{ \mathcal{E}}^{g}_{s,t}[X]\quad \textit{a.s.} $$
     

Lemma 3.3

Suppose that the function g satisfies (B.1) and (B.2). Then the following three conditions are equivalent:
  1. (i)

    The function g is sub-additive (super-additive).

     
  2. (ii)
    The corresponding dynamically consistent nonlinear evaluation \({\mathcal{E}}^{g}[\cdot]\) satisfies: for each \(0\leq s\leq t\leq T\) and \({\mathcal{F}}_{t}\) measurable simple functions X and \(\overline{X}\),
    $${\mathcal{E}}^{g}_{s,t}[X+\overline{X}]\leq(\geq)\, { \mathcal{E}}^{g}_{s,t}[X] +{\mathcal{E}}^{g}_{s,t}[ \overline{X}] \quad \textit{a.s.} $$
     
  3. (iii)
    The corresponding dynamically consistent nonlinear evaluation \({\mathcal{E}}^{g}[\cdot]\) is sub-additive (super-additive): for each \(0\leq s\leq t\leq T\) and X, \(\overline{X}\in{\mathcal{L}}({\mathcal{F}}_{t})\),
    $${\mathcal{E}}^{g}_{s,t}[X+\overline{X}]\leq(\geq)\, { \mathcal{E}}^{g}_{s,t}[X] +{\mathcal{E}}^{g}_{s,t}[ \overline{X}] \quad \textit{a.s.} $$
     

Lemma 3.4

Suppose that the functions g and \(\overline{g}\) satisfy (B.1) and (B.2). Then the following three conditions are equivalent:
  1. (i)

    \(g(t,y,z)\geq\overline{g}(t,y,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., \(\forall(y,z)\in\mathcal{R}\times\mathcal{R}^{d}\).

     
  2. (ii)
    The corresponding dynamically consistent nonlinear evaluations \({\mathcal{E}}^{g}[\cdot]\) and \({\mathcal {E}}^{\overline{g}}[\cdot]\) satisfy, for each \(0\leq s\leq t\leq T\) and \({\mathcal{F}}_{t}\) measurable simple function X,
    $${\mathcal{E}}^{g}_{s,t}[X]\geq{\mathcal{E}}^{\overline{g}}_{s,t}[X] \quad \textit{a.s.} $$
     
  3. (iii)
    The corresponding dynamically consistent nonlinear evaluations \({\mathcal{E}}^{g}[\cdot]\) and \({\mathcal {E}}^{\overline{g}}[\cdot]\) satisfy, for each \(0\leq s\leq t\leq T\) and \(X\in{\mathcal{L}}({\mathcal{F}}_{t})\),
    $${\mathcal{E}}^{g}_{s,t}[X]\geq{\mathcal{E}}^{\overline{g}}_{s,t}[X] \quad \textit{a.s.} $$
     
In particular, \({\mathcal {E}}^{g}[\cdot]\equiv{\mathcal{E}}^{\overline{g}}[\cdot]\) if and only if \(g\equiv\overline{g}\).

Proof of Theorem 3.1

From Theorem 2.1, we only need to prove (ii) \(\Leftrightarrow\) (iii). (iii) \(\Rightarrow\) (ii) is obvious.

In the following, we prove (ii) \(\Rightarrow\) (iii). First, we prove that g is independent of y. Suppose that (ii) holds; then, for any \((\xi,y)\in{\mathcal{L}}({\mathcal {F}}_{t})\times \mathcal{R}\),
$$ {\mathcal{E}}^{g}_{s,t}[\xi+y]={\mathcal{E}}^{g}_{s,t}[ \xi]+y\quad \mbox{a.s.} $$
(3.2)
By Lemma 3.1, we can deduce that g is independent of y.
Next we prove that g is super-homogeneous with respect to z. By (ii), we have, for any \((\xi,\lambda)\in{\mathcal {L}}({\mathcal{F}}_{t})\times R\),
$$ \lambda{\mathcal{E}}^{g}_{s,t}[\xi]\leq{\mathcal {E}}^{g}_{s,t}[\lambda\xi] \quad \mbox{a.s.} $$
(3.3)
For each \((s,z)\in[0,t]\times\mathcal{R}^{d}\), let \(Y_{\cdot}^{s,z}\) be the solution of the following stochastic differential equation (SDE for short) defined on \([s,t]\):
$$ Y_{t}^{s,z}=-\int_{s}^{t}g(r,z) \, \mathrm{d}r+z\cdot(B_{t}-B_{s}). $$
(3.4)
From (3.3), we have
$${\mathcal{E}}^{g}_{r,t}\bigl[\lambda Y_{t}^{s,z} \bigr]\geq\lambda{\mathcal {E}}^{g}_{r,t}\bigl[Y_{t}^{s,z} \bigr]=\lambda Y_{r}^{s,z}, \quad 0\leq s\leq r\leq t\leq T. $$
Thus, \((\lambda Y_{r}^{s,z})_{r\in[s,t]}\) is an \({\mathcal{E}}_{g}\)-submartingale. From the decomposition theorem of an \({\mathcal {E}}_{g}\)-supermartingale (see Zong and Hu [33]), it follows that there exists an increasing process \((A_{r})_{r\in[s,t]}\) such that
$$\lambda Y_{t}^{s,z}=-\int_{s}^{t}g(r,Z_{r}) \, \mathrm{d}r +A_{t}-A_{s}+\int_{s}^{t}Z_{r} \cdot\mathrm{d}B_{r}, \quad t\in[s,T]. $$
This with \(\lambda Y_{t}^{s,z}=-\int_{s}^{t}\lambda g(r,z)\, \mathrm{d}r+\int_{s}^{t}\lambda z\cdot\mathrm{d}B_{r}\) yields \(Z_{r}\equiv\lambda z\) and
$$ \lambda g(t,z)\leq g(t,\lambda z), \quad \mathrm{d}P\times\mathrm{d}t \mbox{-a.s.} $$
(3.5)
The proof of Theorem 3.1 is complete. □

Remark 3.2

The condition that g is super-homogeneous with respect to z implies that g is positively homogeneous with respect to z. Indeed, for each fixed \(\lambda>0\), by (3.5), we have \(\frac{1}{\lambda}g(t,\lambda z)\leq g(t,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., i.e.,
$$ g(t,\lambda z)\leq\lambda g(t,z), \quad \mathrm{d}P\times\mathrm{d}t \mbox{-a.s.} $$
(3.6)
Thus by (3.5) and (3.6), for any \(\lambda>0\),
$$ g(t,\lambda z)=\lambda g(t,z), \quad \mathrm{d}P\times\mathrm{d}t \mbox{-a.s.} $$
(3.7)
In particular, choosing \(\lambda=2\), we have \(2 g(t,0)=g(t,0)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. Hence \(g(t,0)=0\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. Thus, (3.7) still holds for \(\lambda=0\).
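
To illustrate Theorem 3.1 and Remark 3.2, a simple example of a generator that is super-homogeneous, but not linear, with respect to z is given, for a constant \(\mu>0\), by
$$g(t,y,z)=\mu|z|, \quad \forall(y,z)\in\mathcal{R}\times\mathcal{R}^{d}. $$
Indeed, g is independent of y and, for every \(\lambda\in\mathcal{R}\),
$$g(t,\lambda z)=\mu|\lambda||z|\geq\lambda\mu|z|=\lambda g(t,z), $$
with equality for \(\lambda\geq0\), in accordance with (3.7). Hence, by Theorem 3.1, Jensen’s inequality holds for the corresponding g-evaluation \({\mathcal{E}}^{g}_{s,t}[\cdot]\), although this evaluation is not linear.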

Proof of Theorem 3.2

From Theorem 2.2, we only need to prove (ii) \(\Leftrightarrow\) (iii). (iii) \(\Rightarrow\) (ii) is obvious.

In the following, we prove (ii) \(\Rightarrow\) (iii). From the proof of Theorem 3.1, we can obtain, for any \(\lambda\in\mathcal{R}\) and \((y,z)\in\mathcal{R}\times\mathcal {R}^{d}\), \(g(t,y,\lambda z)=g(t,\lambda z)\geq\lambda g(t,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. Using the same method, we have \(g(t,y,\lambda z)=g(t,\lambda z)\leq\lambda g(t,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., \(\forall\lambda\in\mathcal{R}\), \((y,z)\in\mathcal{R}\times \mathcal{R}^{d}\). The above arguments imply that, for any \(\lambda\in\mathcal{R}\) and \((y,z)\in \mathcal{R}\times\mathcal{R}^{d}\),
$$ g(t,y,\lambda z)=g(t,\lambda z)=\lambda g(t,z), \quad \mathrm{d}P\times \mathrm{d}t\mbox{-a.s.} $$
(3.8)
On the other hand, by Lemma 3.3, we have, for any \((y,z),(\overline{y},\overline{z})\in\mathcal{R}\times\mathcal{R}^{d}\),
$$ g(t,y+\overline{y},z+\overline{z})=g(t,y,z) +g(t,\overline{y},\overline{z}), \quad \mathrm{d}P\times\mathrm{d}t\mbox{-a.s.} $$
(3.9)
It follows from (3.8) and (3.9) that (iii) holds true. The proof of Theorem 3.2 is complete. □
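
As a consistency check of Theorem 3.2(iii), suppose that \(g(t,y,z)=\alpha_{t}\cdot z\), assuming for simplicity that α is bounded. The BSDE defining \({\mathcal{E}}^{g}_{s,t}[\xi]\) then reads
$$Y_{r}=\xi+\int_{r}^{t}\alpha_{u}\cdot Z_{u}\, \mathrm{d}u-\int_{r}^{t}Z_{u}\cdot\mathrm{d}B_{u}, \quad r\in[s,t], $$
so that \(\mathrm{d}Y_{r}=Z_{r}\cdot(\mathrm{d}B_{r}-\alpha_{r}\, \mathrm{d}r)\). By the well-known Girsanov theorem, under the probability measure \(P_{\alpha}\) with \(\frac{\mathrm{d}P_{\alpha}}{\mathrm{d}P}={ \exp} (\int_{0}^{T}\alpha_{u}\cdot\mathrm{d}B_{u}-\frac{1}{2}\int_{0}^{T}|\alpha_{u}|^{2}\, \mathrm{d}u )\), the process \(B_{r}-\int_{0}^{r}\alpha_{u}\, \mathrm{d}u\) is a Brownian motion, so Y is a \(P_{\alpha}\)-martingale and
$${\mathcal{E}}^{g}_{s,t}[\xi]=Y_{s}=E_{P_{\alpha}}[\xi|{\mathcal{F}}_{s}] \quad \mbox{a.s.}, $$
which is indeed linear in ξ.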

From Theorem 3.1(iii), we know that, for any \(y\in\mathcal{R}\), \(g(t,y,0)=g(t,0)=0\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. Hence, \({\mathcal{E}}^{g}_{s,t}[\cdot]={\mathcal{E}}_{g}[\cdot|{\mathcal {F}}_{s}]\). Thus, Theorem 3.1 can be rewritten as follows.

Corollary 3.1

Suppose that \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\), is a g-evaluation. Then the following four statements are equivalent:
  1. (i)
    Jensen’s inequality for the g-evaluation \({\mathcal{E}}^{g}_{s,t}[\cdot]\) holds in general, i.e., for each convex function \(\varphi(x): \mathcal{R}\mapsto\mathcal{R}\) and each \(\xi\in{\mathcal{L}}({\mathcal{F}}_{t})\), if \(\varphi(\xi)\in{\mathcal {L}}({\mathcal{F}}_{t})\), then we have
    $${\mathcal{E}}^{g}_{s,t}\bigl[\varphi(\xi)\bigr]\geq\varphi \bigl({\mathcal {E}}^{g}_{s,t}[\xi]\bigr) \quad \textit{a.s.}; $$
     
  2. (ii)

    \(\forall(\xi,a,b)\in L^{2}({\mathcal{F}}_{T})\times \mathcal{R}\times\mathcal{R}\), \({\mathcal{E}}^{g}_{0,T}[a\xi+b]\geq a{\mathcal {E}}^{g}_{0,T}[\xi]+b\), and, for any \(y\in\mathcal{R}\), \(g(t,y,0)=0\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s.;

     
  3. (iii)

    \(\forall(\xi,a,b)\in L^{2}({\mathcal{F}}_{t})\times \mathcal{R}\times\mathcal{R}\), \({\mathcal{E}}^{g}_{s,t}[a\xi+b]\geq a{\mathcal {E}}^{g}_{s,t}[\xi]+b\) a.s.;

     
  4. (iv)

    g is independent of y and super-homogeneous with respect to z.

     

Similarly, Theorem 3.2 can be rewritten as follows.

Corollary 3.2

Suppose that \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\), is a g-evaluation. Then the following four statements are equivalent:
  1. (i)
    the n-dimensional (\(n>1\)) Jensen inequality for the g-evaluation \({\mathcal{E}}^{g}_{s,t}[\cdot]\) holds in general, i.e., for each convex function \(\varphi: \mathcal {R}^{n}\mapsto\mathcal{R}\) and \(\xi_{i}\in{\mathcal{L}}({\mathcal{F}}_{t})\) (\(i=1,2,\ldots,n\)), if \(\varphi(\xi_{1},\xi_{2},\ldots,\xi_{n})\in{\mathcal{L}}({\mathcal{F}}_{t})\), then we have
    $${\mathcal{E}}^{g}_{s,t} \bigl[\varphi(\xi_{1}, \xi_{2},\ldots,\xi_{n}) \bigr]\geq \varphi \bigl({\mathcal {E}}^{g}_{s,t}[\xi_{1}],{\mathcal{E}}^{g}_{s,t}[ \xi_{2}],\ldots,{\mathcal {E}}^{g}_{s,t}[ \xi_{n}] \bigr) \quad \textit{a.s.}; $$
     
  2. (ii)

    \({\mathcal{E}}^{g}_{0,T}\) is linear in \(L^{2}({\mathcal{F}}_{T})\) and, for any \(y\in\mathcal{R}\), \(g(t,y,0)=0\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s.;

     
  3. (iii)

    \({\mathcal{E}}^{g}_{s,t}\) is linear in \(L^{2}({\mathcal{F}}_{t})\);

     
  4. (iv)

    for each \((y,z)\in\mathcal{R}\times\mathcal{R}^{d}\), \(g(t,y,z)=g(t,z)=\alpha_{t}\cdot z\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., where α is an \(\mathcal{R}^{d}\)-valued progressively measurable process.

     

Proof of Corollary 3.1

From Proposition 3.3 and Theorem 3.1, we only need to prove (ii) \(\Leftrightarrow\) (iii). It is obvious that (iii) implies (ii).

In the following, we prove that (ii) implies (iii). Suppose (ii) holds. For each \((X,t,k)\in L^{2}({\mathcal{F}}_{T} )\times[0, T]\times\mathcal{R}\), by (ii), we know that for each \(A\in{\mathcal{F}}_{t}\),
$$\begin{aligned} {\mathcal {E}}^{g}_{0,T}\bigl[1_{A}(X+k) \bigr] =&{\mathcal{E}}^{g}_{0,T}[1_{A}X +1_{A}k-k]+k \\ =&{\mathcal{E}}^{g}_{0,T}\bigl[1_{A}X +1_{A^{C}}(-k)\bigr]+k \\ =&{\mathcal {E}}^{g}_{0,t}\bigl[{\mathcal {E}}^{g}_{t,T}\bigl[1_{A}X +1_{A^{C}} (-k) \bigr]\bigr]+k \\ =&{\mathcal{E}}^{g}_{0,t}\bigl[1_{A}{\mathcal {E}}^{g}_{t,T}[X]+1_{A^{C}}(-k)\bigr]+k \\ =&{\mathcal{E}}^{g}_{0,t}\bigl[1_{A}{\mathcal {E}}^{g}_{t,T}[X]+1_{A^{C}}(-k)+k\bigr] \\ =&{\mathcal {E}}^{g}_{0,t}\bigl[1_{A}\bigl({ \mathcal{E}}^{g}_{t,T}[X]+k\bigr)\bigr]. \end{aligned}$$
Thus
$$ {\mathcal{E}}^{g}_{t,T}[X+k]={\mathcal{E}}^{g}_{t,T}[X]+k \quad \mbox{a.s.} $$
(3.10)
For each \(\lambda\neq0\), define \({\mathcal {E}}^{\lambda}_{t,T}[\cdot]:=\frac{{\mathcal {E}}^{g}_{t,T}[\lambda\cdot]}{\lambda}\), \(\forall t\in[0,T]\). It is easy to check that \({\mathcal{E}}^{g}_{t,T}[\cdot]\) and \({\mathcal {E}}^{\lambda}_{t,T}[\cdot]\) are two \({\mathcal{F}}\)-expectations in \(L^{2}({\mathcal{F}}_{T})\) (the notion of \({\mathcal{F}}\)-expectation can be found in Coquet et al. [20]). If \(\lambda>0\), for each \(\xi \in L^{2}({\mathcal{F}}_{T})\), \({\mathcal{E}}^{\lambda}_{0,T}[\xi]\geq{\mathcal {E}}^{g}_{0,T}[\xi]\). In a similar manner to Lemma 4.5 in Coquet et al. [20], we can obtain
$$ {\mathcal{E}}^{\lambda}_{t,T}[\xi]\geq{\mathcal {E}}^{g}_{t,T}[\xi] \quad \mbox{a.s.}, \forall t \in[0,T]. $$
(3.11)
If \(\lambda<0\), for each \(\xi\in L^{2}({\mathcal{F}}_{T})\), \({\mathcal {E}}^{\lambda}_{0,T}[\xi]\leq{\mathcal {E}}^{g}_{0,T}[\xi]\). In a similar manner to Lemma 4.5 in Coquet et al. [20] again, we have
$$ {\mathcal {E}}^{\lambda}_{t,T}[\xi]\leq{\mathcal{E}}^{g}_{t,T}[ \xi] \quad \mbox{a.s.}, \forall t\in[0,T]. $$
(3.12)
From (3.11) and (3.12), we have, for any \((\xi,\lambda)\in L^{2}({\mathcal{F}}_{T})\times \mathcal{R}\),
$$ {\mathcal{E}}^{g}_{t,T}[\lambda\xi]\geq\lambda{\mathcal {E}}^{g}_{t,T}[\xi] \quad \mbox{a.s.}, \forall t \in[0,T]. $$
(3.13)
From (3.10) and (3.13), we have, for any \((\xi,a,b)\in L^{2}({\mathcal {F}}_{T})\times \mathcal{R}\times\mathcal{R}\),
$${\mathcal {E}}^{g}_{t,T}[a\xi+b]\geq a{\mathcal{E}}^{g}_{t,T}[ \xi]+b \quad \mbox{a.s.}, \forall t\in[0,T]. $$
Since, for any \(y\in\mathcal{R}\), \(g(t,y,0)=0\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., we have
$${\mathcal{E}}^{g}_{s,t}[a\xi+b]={\mathcal {E}}^{g}_{s,T}[a\xi+b]\geq a{\mathcal{E}}^{g}_{s,T}[ \xi]+b=a{\mathcal {E}}^{g}_{s,t}[\xi]+b \quad \mbox{a.s.}, \forall(\xi,a,b)\in L^{2}({\mathcal{F}}_{t})\times \mathcal{R} \times\mathcal{R}. $$
Therefore, (iii) holds true. The proof of Corollary 3.1 is complete. □

Proof of Corollary 3.2

From Proposition 3.3 and Theorem 3.2, we only need to prove (ii) \(\Leftrightarrow\) (iii). It is obvious that (iii) implies (ii).

In the following, we prove that (ii) implies (iii). Suppose (ii) holds. By Proposition 3.3, we know that for each sequence \(\{X_{n}\}_{n=1}^{\infty}\subset L^{2}({\mathcal{F}}_{T})\) such that \(X_{n}(\omega)\downarrow0\) for all ω, \({\mathcal {E}}^{g}_{0,T}[X_{n}]\downarrow0\). By the well-known Daniell-Stone theorem (cf., e.g., Yan [34], Theorem 3.6.8, p.83), there exists a unique probability measure \(P_{\alpha}\) defined on \((\Omega,{\mathcal {F}}_{T})\) such that
$$ {\mathcal{E}}^{g}_{0,T}[\xi]=E_{P_{\alpha}}[\xi], \quad \forall\xi\in L^{2}({\mathcal{F}}_{T}) $$
(3.14)
holds. Indeed, from (iv), we know that \(\frac{\mathrm{d}P_{\alpha}}{\mathrm{d}P}={ \exp} (\int_{0}^{T}\alpha_{t}\cdot\mathrm{d}B_{t}-\frac{1}{2}\int_{0}^{T}|\alpha_{t}|^{2}\, \mathrm{d}t )\).
On the other hand, since, for any \(y\in\mathcal{R}\), \(g(t,y,0)=0\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., we can obtain
$$ {\mathcal {E}}^{g}_{s,t}[\xi]={\mathcal{E}}^{g}_{s,T}[ \xi] \quad \mbox{a.s.}, \forall\xi\in L^{2}({\mathcal{F}}_{t}). $$
(3.15)
It follows from (3.14) and (3.15) that
$${\mathcal{E}}^{g}_{s,t}[\xi]=E_{P_{\alpha}}[\xi|{ \mathcal{F}}_{s}] \quad \mbox {a.s.}, \forall\xi\in L^{2}({ \mathcal{F}}_{t}). $$
Therefore, \({\mathcal {E}}^{g}_{s,t}\) is linear in \(L^{2}({\mathcal{F}}_{t})\). The proof of Corollary 3.2 is complete. □

From Corollary 3.2, we can immediately obtain the following.

Theorem 3.3

Suppose that \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\), is a g-evaluation. Then the following two statements are equivalent:
  1. (i)

    \({\mathcal{E}}^{g}_{s,t}\) is linear in \({\mathcal {L}}({\mathcal{F}}_{t})\);

     
  2. (ii)
    there exists a unique probability measure \(P_{\alpha}\) defined on \((\Omega,{\mathcal{F}}_{T})\) such that, for any \(\xi\in{\mathcal{L}}({\mathcal{F}}_{t})\),
    $${\mathcal{E}}^{g}_{s,t}[\xi]=E_{P_{\alpha}}[\xi|{ \mathcal{F}}_{s}] \quad \textit{a.s.} $$
     

The following result can be seen as an extension of Theorem 3.3.

Theorem 3.4

Suppose that \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\), is a g-evaluation. Then the following two statements are equivalent:
  1. (i)
    \({\mathcal{E}}^{g}_{s,t}\) is sublinear in \({\mathcal{L}}({\mathcal{F}}_{t})\), i.e.,
    1. (f)

      \({\mathcal{E}}^{g}_{s,t}[\lambda X]=\lambda{\mathcal {E}}^{g}_{s,t}[X]\) a.s., for any \(X\in{\mathcal{L}}({\mathcal{F}}_{t})\) and \(\lambda\geq0\);

       
    2. (g)

      \({\mathcal{E}}^{g}_{s,t}[X+Y]\leq{\mathcal {E}}^{g}_{s,t}[X]+{\mathcal{E}}^{g}_{s,t}[Y]\) a.s., for any \((X,Y)\in {\mathcal{L}}({\mathcal{F}}_{t})\times{\mathcal{L}}({\mathcal{F}}_{t})\);

       
    3. (h)

      \({\mathcal{E}}^{g}_{s,t}[\mu]=\mu\) a.s., for any \(\mu\in\mathcal{R}\);

       
     
  2. (ii)
    for any \(\xi\in{\mathcal{L}}({\mathcal{F}}_{t})\),
    $${\mathcal{E}}^{g}_{s,t}[\xi]=\sup_{Q_{\theta}\in\Lambda}E_{Q_{\theta}}[ \xi|{\mathcal{F}}_{s}] \quad \textit{a.s.}, $$
    where Λ is a set of probability measures on \((\Omega,{\mathcal{F}}_{T})\) and defined by
    $$\Lambda:=\bigl\{ Q_{\theta}:E_{Q_{\theta}}[\xi]\leq{\mathcal {E}}^{g}_{0,T}[\xi],\forall\xi\in{\mathcal{L}}({ \mathcal{F}}_{T})\bigr\} . $$
     

Proof

It is obvious that (ii) implies (i).

In the following, we prove that (i) implies (ii). Suppose (i) holds. Since \({\mathcal{E}}^{g}_{0,T}[\cdot]\) is a sublinear expectation in \({\mathcal{L}}({\mathcal{F}}_{T})\), by Lemma 2.4 in Peng [35], we know that there exists a family of linear expectations \(\{E_{\theta}:\theta\in\Theta\}\) on \((\Omega,{\mathcal {F}}_{T})\) such that, for any \(\xi\in{\mathcal{L}}({\mathcal{F}}_{T})\),
$$ {\mathcal {E}}^{g}_{0,T}[\xi]=\sup_{\theta\in\Theta}E_{\theta}[ \xi]. $$
(3.16)
On the other hand, by Proposition 3.3, we know that for each sequence \(\{X_{n}\}_{n=1}^{\infty}\subset{\mathcal{L}}({\mathcal{F}}_{T})\) such that \(X_{n}(\omega)\downarrow0\) for all ω, \({\mathcal {E}}^{g}_{0,T}[X_{n}]\downarrow0\). By the well-known Daniell-Stone theorem, we can deduce that for each \(\theta\in\Theta\) and \(\xi\in{\mathcal{L}}({\mathcal{F}}_{T})\), there exists a unique probability measure \(Q_{\theta}\) defined on \((\Omega,{\mathcal{F}}_{T})\) such that
$$ E_{\theta}[\xi]=E_{Q_{\theta}}[\xi]. $$
(3.17)
It follows from (3.16) and (3.17) that, for any \(\xi\in{\mathcal{L}}({\mathcal{F}}_{T})\),
$$ {\mathcal {E}}^{g}_{0,T}[\xi]=\sup_{Q_{\theta}\in\Lambda}E_{Q_{\theta}}[ \xi ]. $$
(3.18)
Let Π be a set of probability measures on \((\Omega,{\mathcal {F}}_{T})\) defined by
$$\Pi:= \biggl\{ P_{\alpha}:\alpha\in\Theta^{g}, \frac{\mathrm{d}P_{\alpha}}{\mathrm{d}P}={ \exp} \biggl(\int_{0}^{T} \alpha_{t}\cdot\mathrm{d}B_{t}-\frac{1}{2}\int _{0}^{T}|\alpha_{t}|^{2}\, \mathrm{d}t \biggr) \biggr\} , $$
where \(\Theta^{g}\) := {\((\alpha_{t})_{t\in[0,T]}:\alpha\) is \(\mathcal{R}^{d}\)-valued, progressively measurable and, for any \((y,z)\in\mathcal{R}\times \mathcal{R}^{d}\), \(\alpha_{t}\cdot z\leq g(t,y,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s.}. In order to prove (ii), now we prove that \(\Pi=\Lambda\).
For any \(\alpha\in\Theta^{g}\), we define \(g^{\alpha}(t,y,z):=\alpha_{t}\cdot z\), \(\forall t\in[0,T]\), \((y,z)\in \mathcal{R}\times\mathcal{R}^{d}\). Then, for any \(\xi\in{\mathcal {L}}({\mathcal{F}}_{T})\), by the well-known Girsanov theorem, we can deduce that
$${\mathcal {E}}^{g^{\alpha}}_{0,T}[\xi]=E_{P_{\alpha}}[\xi]. $$
Since, for any \((y,z)\in\mathcal{R}\times\mathcal{R}^{d}\), \(\alpha_{t}\cdot z=g^{\alpha}(t,y,z)\leq g(t,y,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., it follows from the well-known comparison theorem for BSDEs that \(E_{P_{\alpha}}[\xi]={\mathcal {E}}^{g^{\alpha}}_{0,T}[\xi]\leq{\mathcal{E}}^{g}_{0,T}[\xi]\). Hence \(\Pi\subseteq\Lambda\).
Next let us prove that \(\Lambda\subseteq\Pi\). For each \(Q_{\theta}\in\Lambda\), since \(E_{Q_{\theta}}[\cdot]\leq{\mathcal {E}}^{g}_{0,T}[\cdot]\), \(\forall\xi, \eta\in L^{2}({\mathcal{F}}_{T})\), we have
$$ E_{Q_{\theta}}[\xi+\eta]-E_{Q_{\theta}}[\eta]=E_{Q_{\theta}}[\xi]\leq{ \mathcal {E}}^{g}_{0,T}[\xi]. $$
(3.19)
Denote \(g^{\beta}(t,y,z):=\beta(t)|z|\), \(\forall t\in[0,T]\), \((y,z)\in \mathcal{R}\times\mathcal{R}^{d}\). From Lemmas 3.1 and 3.2 and applying the well-known comparison theorem for BSDEs again, we have
$$ {\mathcal {E}}^{g}_{0,T}[\xi]={\mathcal{E}}_{g}[ \xi]\leq{\mathcal {E}}_{g^{\beta}}[\xi]. $$
(3.20)
From (3.19) and (3.20), we can deduce that \(E_{Q_{\theta}}[\xi+\eta]-E_{Q_{\theta}}[\eta]\leq{\mathcal {E}}_{g^{\beta}}[\xi]\). Then, in a similar manner to Theorem 7.1 in Coquet et al. [20], we know that there exists a unique function \(g^{\theta}\) defined on \(\Omega\times[0,T]\times\mathcal{R}\times \mathcal{R}^{d}\) satisfying the following three conditions:
  1. (H.1)

    \(g^{\theta}(t,y,0)=0\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., \(\forall y\in\mathcal{R}\);

     
  2. (H.2)

    \(|g^{\theta}(t,y_{1},z_{1})-g^{\theta}(t,y_{2},z_{2})|\leq\beta(t)|z_{1}-z_{2}|\), \(\forall(y_{1},z_{1}), (y_{2},z_{2})\in\mathcal{R}\times\mathcal{R}^{d}\), where \(\beta(t)\) is a non-negative deterministic function satisfying that \(\int_{0}^{T}\beta^{2}(t)\, \mathrm{d}t<\infty\);

     
  3. (H.3)

    \({\mathcal{E}}_{g^{\theta}}[\xi|{\mathcal {F}}_{t}]=E_{Q_{\theta}}[\xi|{\mathcal{F}}_{t}]\) a.s., \(\forall\xi\in L^{2}({\mathcal{F}}_{T})\).

     
It follows from the linearity of \(({\mathcal {E}}_{g^{\theta}}[\cdot|{\mathcal{F}}_{t}] )_{t\in[0,T]}\) and Theorem 3.2 that \(g^{\theta}\) is linear with respect to z. Therefore, there exists an \(\mathcal{R}^{d}\)-valued progressively measurable process \((\theta_{t})_{t\in[0,T]}\) such that \(g^{\theta}(t,y,z)=\theta_{t}\cdot z\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., \(\forall(y,z)\in\mathcal{R}\times \mathcal{R}^{d}\). In view of \(Q_{\theta}\in\Lambda\) and (H.3), we have, for each \(\xi\in L^{2}({\mathcal{F}}_{T})\), \({\mathcal {E}}_{g^{\theta}}[\xi]=E_{Q_{\theta}}[\xi]\leq{\mathcal {E}}^{g}_{0,T}[\xi]\). Then in a similar manner to Lemma 4.5 in Coquet et al. [20] and by Lemma 3.4, we can obtain \(g^{\theta}(t,y,z)=\theta_{t}\cdot z\leq g(t,y,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., \(\forall(y,z)\in\mathcal{R}\times\mathcal {R}^{d}\). For θ, we define the probability measure \(P_{\theta}\) satisfying \(\frac{\mathrm{d}P_{\theta}}{\mathrm{d}P}={ \exp} (\int_{0}^{T}\theta_{t}\cdot\mathrm{d}B_{t}-\frac{1}{2}\int_{0}^{T}|\theta_{t}|^{2}\, \mathrm{d}t )\); then \(P_{\theta}\in\Pi\) and \(E_{P_{\theta}}[\xi]={\mathcal {E}}_{g^{\theta}}[\xi]=E_{Q_{\theta}}[\xi]\), \(\forall\xi\in L^{2}({\mathcal {F}}_{T})\). Hence, \(Q_{\theta}=P_{\theta}\in\Pi\). Thus, \(\Lambda\subseteq\Pi\). Therefore, we have \(\Pi=\Lambda\).
Finally, we prove that, for any \(s, t\in[0,T]\) satisfying \(s\leq t\) and \(\xi\in{\mathcal{L}}({\mathcal{F}}_{t})\), \({\mathcal {E}}^{g}_{s,t}[\xi]=\sup_{Q_{\theta}\in\Lambda}E_{Q_{\theta}}[\xi |{\mathcal{F}}_{s}]\) a.s. It follows from (H.3), the well-known comparison theorem for BSDEs, and Proposition 3.3 that
$${\mathcal {E}}^{g}_{s,t}[\xi]\geq{\mathcal{E}}_{g^{\theta}}[ \xi|{\mathcal {F}}_{s}]=E_{Q_{\theta}}[\xi|{\mathcal{F}}_{s}] \quad \mbox{a.s.}, \forall\xi\in{\mathcal{L}}({\mathcal{F}}_{t}). $$
Hence, for any \(s, t\in[0,T]\) satisfying \(s\leq t\) and \(\xi\in{\mathcal{L}}({\mathcal {F}}_{t})\),
$$ {\mathcal {E}}^{g}_{s,t}[\xi]\geq\sup_{Q_{\theta}\in\Lambda}E_{Q_{\theta}}[ \xi |{\mathcal{F}}_{s}] \quad \mbox{a.s.} $$
(3.21)
On the other hand, by Lemmas 3.1, 3.2, and 3.3, we can deduce that g is independent of y and positively homogeneous and sub-additive with respect to z. For any \(\xi\in{\mathcal{L}}({\mathcal{F}}_{T})\), let \((Y^{\xi}_{t},Z^{\xi}_{t} )_{t\in[0,T]}\) denote the solution of the following BSDE:
$$y_{t}=\xi+\int_{t}^{T}g(s,z_{s}) \, \mathrm{d}s-\int_{t}^{T}z_{s}\cdot \mathrm{d}B_{s}, \quad \forall t\in[0,T]. $$
By a measurable selection theorem (cf., e.g., El Karoui and Quenez [21], p.215), we can deduce that there exists a progressively measurable process \(\alpha^{\xi}\in\Theta^{g}\) such that
$$ g\bigl(t,Z^{\xi}_{t}\bigr)=\alpha^{\xi}_{t} \cdot Z^{\xi}_{t}, \quad \mathrm{d}P\times\mathrm{d}t \mbox{-a.s.} $$
(3.22)
From (3.22) and applying the well-known Girsanov theorem, we have \({\mathcal{E}}^{g}_{s,t}[\xi]={\mathcal {E}}^{g}_{s,T}[\xi]=E_{P_{\alpha^{\xi}}}[\xi|{\mathcal{F}}_{s}]\) a.s. Hence, for any \(\xi\in{\mathcal{L}}({\mathcal{F}}_{t})\),
$$ {\mathcal {E}}^{g}_{s,t}[\xi]\leq\sup_{P_{\alpha}\in\Pi}E_{P_{\alpha}}[ \xi |{\mathcal{F}}_{s}]=\sup_{Q_{\theta}\in\Lambda}E_{Q_{\theta}}[ \xi |{\mathcal{F}}_{s}] \quad \mbox{a.s.} $$
(3.23)
It follows from (3.21) and (3.23) that
$${\mathcal {E}}^{g}_{s,t}[\xi]=\sup_{Q_{\theta}\in\Lambda}E_{Q_{\theta}}[ \xi |{\mathcal{F}}_{s}] \quad \mbox{a.s.}, \forall\xi\in{\mathcal {L}}({\mathcal{F}}_{t}). $$
The proof of Theorem 3.4 is complete. □
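
A typical example covered by Theorem 3.4 is the sublinear generator \(g(t,y,z)=\kappa|z|\) with a constant \(\kappa>0\), i.e., the κ-ignorance model of Chen and Epstein [12]. In this case, by the Cauchy-Schwarz inequality, \(\alpha_{t}\cdot z\leq\kappa|z|\) holds for all \(z\in\mathcal{R}^{d}\) if and only if \(|\alpha_{t}|\leq\kappa\), so \(\Theta^{g}\) consists of the progressively measurable processes θ with \(|\theta_{t}|\leq\kappa\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., and the representation in Theorem 3.4(ii) reads
$${\mathcal{E}}^{g}_{s,t}[\xi]=\sup_{|\theta|\leq\kappa}E_{Q_{\theta}}[\xi|{\mathcal{F}}_{s}] \quad \mbox{a.s.}, $$
where \(\frac{\mathrm{d}Q_{\theta}}{\mathrm{d}P}={ \exp} (\int_{0}^{T}\theta_{u}\cdot\mathrm{d}B_{u}-\frac{1}{2}\int_{0}^{T}|\theta_{u}|^{2}\, \mathrm{d}u )\).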

4 Hölder’s inequality and Minkowski’s inequality for g-evaluations

In this section, we give a sufficient condition on g under which Hölder’s inequality and Minkowski’s inequality for g-evaluations hold true.

First, we give the following lemma.

Lemma 4.1

Suppose that the function g satisfies (B.1) and (B.2). Let g satisfy the following conditions:
  1. (i)
    for any \(y_{1}\geq0\), \(y_{2}\geq0\), and \((z_{1},z_{2})\in\mathcal{R}^{d}\times\mathcal{R}^{d}\),
    $$g(t,y_{1}+y_{2},z_{1}+z_{2})\leq g(t,y_{1},z_{1})+g(t,y_{2},z_{2}), \quad \mathrm{d}P\times \mathrm{d}t\textit{-a.s.}; $$
     
  2. (ii)
    for any \(\lambda\geq0\), \(y\geq0\), and \(z\in \mathcal{R}^{d}\),
    $$g(t,\lambda y,\lambda z)\leq\lambda g(t,y,z),\quad \mathrm{d}P\times\mathrm{d}t \textit{-a.s.}, $$
     
then \({\mathcal{E}}^{g}_{s,t}[\cdot]\) satisfies the following conditions:
  1. (j)

    \({\mathcal{E}}^{g}_{s,t}[\xi+\eta]\leq{\mathcal {E}}^{g}_{s,t}[\xi]+{\mathcal{E}}^{g}_{s,t}[\eta]\) a.s., for any \((\xi,\eta)\in{\mathcal{L}}_{+}({\mathcal{F}}_{t})\times{\mathcal {L}}_{+}({\mathcal{F}}_{t})\);

     
  2. (k)

    \({\mathcal{E}}^{g}_{s,t}[\lambda\xi]=\lambda{\mathcal {E}}^{g}_{s,t}[\xi]\) a.s., for any \(\xi\in{\mathcal{L}}_{+}({\mathcal{F}}_{t})\) and \(\lambda\geq0\).

     

The key idea of the proof of Lemma 4.1 is the well-known comparison theorem for BSDEs. The proof is very similar to that of Proposition 4.2 in Jia [26]. So we omit it.
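
A concrete generator satisfying both conditions of Lemma 4.1 is \(g(t,y,z)=\beta(t)|z|\), where \(\beta(t)\) is a non-negative deterministic function with \(\int_{0}^{T}\beta^{2}(t)\, \mathrm{d}t<\infty\) (as in (H.2)). Condition (i) follows from the triangle inequality,
$$g(t,y_{1}+y_{2},z_{1}+z_{2})=\beta(t)|z_{1}+z_{2}|\leq\beta(t)|z_{1}|+\beta(t)|z_{2}| =g(t,y_{1},z_{1})+g(t,y_{2},z_{2}), $$
and condition (ii) holds with equality, since \(g(t,\lambda y,\lambda z)=\beta(t)\lambda|z|=\lambda g(t,y,z)\) for any \(\lambda\geq0\).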

Applying Lemma 4.1 and Theorems 2.3 and 2.4, we immediately have the following Hölder inequality and Minkowski inequality for g-evaluations.

Theorem 4.1

Let g satisfy the conditions of Lemma  4.1. Then, for any \(X, Y\in{\mathcal{L}}({\mathcal{F}}_{t})\) with \(|X|^{p}, |Y|^{q}\in{\mathcal{L}}({\mathcal{F}}_{t})\) (\(p, q>1\) and \(1/p+1/q=1\)), we have
$${\mathcal{E}}^{g}_{s,t}\bigl[\vert XY\vert \bigr]\leq \bigl({\mathcal {E}}^{g}_{s,t}\bigl[|X|^{p}\bigr] \bigr)^{\frac{1}{p}} \bigl({\mathcal {E}}^{g}_{s,t} \bigl[|Y|^{q}\bigr] \bigr)^{\frac{1}{q}}\quad \textit{a.s.} $$

Theorem 4.2

Let g satisfy the conditions of Lemma  4.1. Then, for any \(X, Y\in{\mathcal{L}}({\mathcal{F}}_{t})\) with \(|X|^{p},|Y|^{p}\in{\mathcal{L}}({\mathcal{F}}_{t})\) (\(p>1\)), we have
$$\bigl({\mathcal{E}}^{g}_{s,t}\bigl[|X+Y|^{p}\bigr] \bigr)^{\frac{1}{p}}\leq\bigl({\mathcal {E}}^{g}_{s,t} \bigl[|X|^{p}\bigr]\bigr)^{\frac{1}{p}}+\bigl({\mathcal {E}}^{g}_{s,t}\bigl[|Y|^{p}\bigr] \bigr)^{\frac{1}{p}} \quad \textit{a.s.} $$
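
In particular, if \(g\equiv0\), then \({\mathcal{E}}^{g}_{s,t}[\xi]=E[\xi|{\mathcal{F}}_{s}]\), and Theorems 4.1 and 4.2 reduce to the classical conditional Hölder and Minkowski inequalities. For instance, taking \(p=q=2\) in Theorem 4.1 yields the conditional Cauchy-Schwarz inequality
$$E\bigl[|XY|\big|{\mathcal{F}}_{s}\bigr]\leq \bigl(E\bigl[|X|^{2}\big|{\mathcal{F}}_{s}\bigr] \bigr)^{\frac{1}{2}} \bigl(E\bigl[|Y|^{2}\big|{\mathcal{F}}_{s}\bigr] \bigr)^{\frac{1}{2}} \quad \mbox{a.s.} $$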

Declarations

Acknowledgements

The authors would like to thank the anonymous referees for their careful reading of this paper, correction of errors, and valuable suggestions. The work of Zhaojun Zong, Feng Hu and Chuancun Yin is supported by the National Natural Science Foundation of China (Nos. 11301295 and 11171179), the Doctoral Program Foundation of Ministry of Education of China (Nos. 20123705120005 and 20133705110002), the Program for Scientific Research Innovation Team in Colleges and Universities of Shandong Province of China and the Program for Scientific Research Innovation Team in Applied Probability and Statistics of Qufu Normal University (No. 0230518). The work of Helin Wu is supported by the Scientific and Technological Research Program of Chongqing Municipal Education Commission (No. KJ1400922).

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
School of Statistics, Qufu Normal University
(2)
School of Mathematics, Chongqing University of Technology

References

  1. Peng, SG: Dynamical evaluations. C. R. Acad. Sci. Paris, Ser. I 339, 585-589 (2004)
  2. Peng, SG: Dynamically consistent nonlinear evaluations and expectations (2005). arXiv:math.PR/0501415v1
  3. Pardoux, E, Peng, SG: Adapted solution of a backward stochastic differential equation. Syst. Control Lett. 14, 55-61 (1990)
  4. Hu, Y, Peng, SG: Solution of forward-backward stochastic differential equations. Probab. Theory Relat. Fields 103, 273-283 (1995)
  5. Lepeltier, JP, San Martin, J: Backward stochastic differential equations with continuous coefficient. Stat. Probab. Lett. 32, 425-430 (1997)
  6. El Karoui, N, Peng, SG, Quenez, MC: Backward stochastic differential equations in finance. Math. Finance 7, 1-71 (1997)
  7. Pardoux, E: Generalized discontinuous BSDEs. In: El Karoui, N, Mazliak, L (eds.) Backward Stochastic Differential Equations. Pitman Research Notes in Mathematics Series, vol. 364, pp. 207-219. Longman, Harlow (1997)
  8. Pardoux, E: BSDEs, weak convergence and homogenization of semilinear PDEs. In: Nonlinear Analysis, Differential Equations and Control (Montreal, QC, 1998), pp. 503-549. Kluwer Academic, Dordrecht (1998)
  9. Briand, P, Delyon, B, Hu, Y, Pardoux, E, Stoica, L: \(L^{p}\) Solutions of backward stochastic differential equations. Stoch. Process. Appl. 108, 109-129 (2003)
  10. Chen, ZJ, Wang, B: Infinite time interval BSDEs and the convergence of g-martingales. J. Aust. Math. Soc. A 69, 187-211 (2000)
  11. Zong, ZJ: \(L^{p}\) Solutions of infinite time interval BSDEs and the corresponding g-expectations and g-martingales. Turk. J. Math. 37, 704-718 (2013)
  12. Chen, ZJ, Epstein, L: Ambiguity, risk and asset returns in continuous time. Econometrica 70, 1403-1443 (2002)
  13. Peng, SG: Dynamically nonlinear consistent evaluations and expectations. Lecture notes presented in Weihai Summer School, Weihai (2004)
  14. Peng, SG: Filtration consistent nonlinear expectations and evaluations of contingent claims. Acta Math. Appl. Sinica (Engl. Ser.) 20, 191-214 (2004)
  15. Peng, SG: Modelling derivatives pricing mechanism with their generating functions (2006). arXiv:math.PR/0605599v1
  16. Rosazza Gianin, E: Risk measures via g-expectations. Insur. Math. Econ. 39, 19-34 (2006)
  17. Briand, P, Coquet, F, Hu, Y, Mémin, J, Peng, SG: A converse comparison theorem for BSDEs and related properties of g-expectation. Electron. Commun. Probab. 5, 101-117 (2000)
  18. Chen, ZJ, Kulperger, R, Jiang, L: Jensen’s inequality for g-expectation: part 1. C. R. Acad. Sci. Paris, Ser. I 337, 725-730 (2003)
  19. Chen, ZJ, Kulperger, R, Jiang, L: Jensen’s inequality for g-expectation: part 2. C. R. Acad. Sci. Paris, Ser. I 337, 797-800 (2003)
  20. Coquet, F, Hu, Y, Mémin, J, Peng, SG: Filtration-consistent nonlinear expectations and related g-expectations. Probab. Theory Relat. Fields 123, 1-27 (2002)
  21. El Karoui, N, Quenez, MC: Non-linear pricing theory and backward stochastic differential equations. In: Runggaldier, WJ (ed.) Financial Mathematics. Lecture Notes in Mathematics, vol. 1656, pp. 191-246. Springer, Heidelberg (1996)
  22. Fan, SJ: Jensen’s inequality for filtration consistent nonlinear expectation without domination condition. J. Math. Anal. Appl. 345, 678-688 (2008)
  23. Hu, F: Dynamically consistent nonlinear evaluations with their generating functions in \(L^{p}\). Acta Math. Sin. Engl. Ser. 29, 815-832 (2013)
  24. Hu, F, Chen, ZJ: Generalized g-expectations and related properties. Stat. Probab. Lett. 80, 191-195 (2010)
  25. Hu, Y: On Jensen’s inequality for g-expectation and for nonlinear expectation. Arch. Math. 85, 572-680 (2005)
  26. Jia, GY: On Jensen’s inequality and Hölder’s inequality for g-expectation. Arch. Math. 94, 489-499 (2010)
  27. Jiang, L: Jensen’s inequality for backward stochastic differential equation. Chin. Ann. Math., Ser. B 27, 553-564 (2006)
  28. Jiang, L, Chen, ZJ: On Jensen’s inequality for g-expectation. Chin. Ann. Math., Ser. B 25, 401-412 (2004)
  29. Peng, SG: Backward SDE and related g-expectation. In: El Karoui, N, Mazliak, L (eds.) Backward Stochastic Differential Equations. Pitman Research Notes in Mathematics Series, vol. 364, pp. 141-159. Longman, Harlow (1997)
  30. Peng, SG: Monotonic limit theorem of BSDE and nonlinear decomposition theorem of Doob-Meyer’s type. Probab. Theory Relat. Fields 113, 473-499 (1999)
  31. Zong, ZJ: Jensen’s inequality for generalized Peng’s g-expectations and its applications. Abstr. Appl. Anal. 2013, Article ID 683047 (2013)
  32. Krein, SG, Petunin, YI, Semenov, EM: Interpolation of Linear Operators. Translations of Mathematical Monographs, vol. 54. Am. Math. Soc., Providence (1982) (Translated from the Russian by J Szucs)
  33. Zong, ZJ, Hu, F: \(L^{p}\) Weak convergence method on BSDEs with non-uniformly Lipschitz coefficients and its applications (submitted)
  34. Yan, JA: Lecture Note on Measure Theory, 2nd edn. Science Press, Beijing (2005) (Chinese version)
  35. Peng, SG: Survey on normal distributions, central limit theorem, Brownian motion and the related stochastic calculus under sublinear expectations. Sci. China Ser. A 52, 1391-1411 (2009)

Copyright

© Zong et al.; licensee Springer. 2015