A class of backward stochastic Bellman–Bihari’s inequality and its applications

*Correspondence: wuhaomoonsky@163.com. School of Mathematics and Statistics, South-Central University For Nationalities, Wuhan, Hubei 430000, P.R. China. Full list of author information is available at the end of the article.

Abstract: In this paper, we propose and prove several different forms of the backward stochastic Bellman–Bihari inequality. Then, as two applications, two different types of comparison theorems for backward stochastic differential equations under a stochastic non-Lipschitz condition are presented.


Introduction
In [9], Gronwall was the first to provide Gronwall's inequality, in differential form. Later, Bellman [2] put forward the integral form of Gronwall's inequality. Since then, motivated by a variety of problems, Gronwall's inequality has been extended and used considerably in numerous articles, and it became a useful tool for solving many problems in the field of differential equations; we refer the reader to [4,5,12,13,19] and the references therein. To meet the needs of the development of stochastic differential equations, many scholars have generalized Gronwall's inequality. Wang and Fan [16] established the following backward stochastic Gronwall inequality.

Proposition 1.2 Let β(ω, t) be a strictly positive F-adapted stochastic process satisfying
If the nonnegative F_t-adapted stochastic process μ(ω, t) satisfies the following conditions: where a ≥ 0 is a constant, then, for each t ∈ [0, T], we have In particular, if a = 0, we have μ(t) = 0.
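The displayed conditions and conclusion of Proposition 1.2 are lost in this copy. Up to notation, the backward stochastic Gronwall inequality of [16] can be summarized as follows; this is a reconstruction for orientation only, and the precise hypotheses, in particular the integrability conditions on β and μ, are those of [16]:

```latex
\text{If } \quad
\mu(t) \;\le\; \mathbb{E}\Big[\, a + \int_t^T \beta(s)\,\mu(s)\,\mathrm{d}s \,\Big|\, \mathcal{F}_t \Big]
\quad \text{for all } t \in [0,T],
\quad \text{then} \quad
\mu(t) \;\le\; a\,\mathbb{E}\Big[\, e^{\int_t^T \beta(s)\,\mathrm{d}s} \,\Big|\, \mathcal{F}_t \Big].
```

In particular, the conclusion forces μ ≡ 0 when a = 0, which is the case actually used in the comparison theorems below.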
They used this proposition to prove a comparison theorem of L^p solutions for one-dimensional backward stochastic differential equations under a stochastic Lipschitz condition. Hun et al. [10] generalized the backward stochastic Gronwall inequality to the situation of a random time horizon, and the authors of [1] generalized a result of Lipovan on Gronwall-like inequalities to a more general form.
Later, Bihari [3] put forward a useful generalization of the Gronwall–Bellman inequality, called Bihari's inequality, which provides explicit bounds on unknown functions. This inequality has also had many applications in the field of differential equations. Zhang and Zhu [20] used Bihari's inequality to study non-Lipschitz stochastic differential equations driven by multi-parameter Brownian motion. In [17], Bihari's inequality was used to study non-Lipschitz stochastic Volterra-type equations with jumps. Furthermore, Wu et al. [18] used Bihari's inequality to analyze the solvability of anticipated backward stochastic differential equations, and Fan [6] used it to study the existence, uniqueness, and stability of L^1 solutions for multidimensional backward stochastic differential equations with generators of one-sided Osgood type. With further study of stochastic differential equations, it was found that the original Bihari inequality could no longer meet the needs of applications, so people began to generalize it. The authors of [11] studied some new Gronwall–Bellman–Bihari type integral inequalities with singular as well as nonsingular kernels, generalizing some existing results; as an application, the behavior of solutions of fractional stochastic differential equations was investigated. The authors of [8] analyzed some new nonlinear Gronwall–Bellman–Bihari type inequalities with a singular kernel via the k-fractional integral of Riemann–Liouville, which can be used to study properties of solutions of fractional differential equations.
To the best of our knowledge, so far there has been little study of backward stochastic Bihari inequalities. Motivated by the above articles, in this paper we mainly generalize the following Bihari inequality from [3] to the backward stochastic setting.

Proposition 1.3
Let ρ : R_+ → R_+ be a continuous and nondecreasing function, and let β(s), f(s) be two nonnegative functions on R_+ such that, for some a > 0, the stated integral inequality holds. Here G(x) := ∫_c^x (1/ρ(y)) dy is well defined for some c > 0 and G^{-1}(·) is the inverse function of G.
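The displayed formulas of Proposition 1.3 are lost in this copy. Up to notation, the classical Bihari inequality reads:

```latex
u(t) \;\le\; a + \int_0^t f(s)\,\rho\big(u(s)\big)\,\mathrm{d}s
\quad \Longrightarrow \quad
u(t) \;\le\; G^{-1}\Big( G(a) + \int_0^t f(s)\,\mathrm{d}s \Big),
```

valid for all t such that G(a) + ∫_0^t f(s) ds lies in the domain of G^{-1}. Taking ρ(u) = u recovers the Gronwall–Bellman bound u(t) ≤ a·exp(∫_0^t f(s) ds).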
We will study several different forms of the backward stochastic Bihari inequality and give two applications. It is necessary to point out that the proof methods in [3,14,15] for Bihari's inequality are no longer applicable, since β(s) in the theorems of this paper depends on ω, while β(s) in Proposition 1.3 is independent of ω.
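For a concrete feel for the deterministic bound in Proposition 1.3, the following sketch checks it numerically in the illustrative case ρ(u) = 2√u and f ≡ 1 (the function names, discretization, and parameter choices here are our own, not from the paper). In this case G(x) = √x − √c and G^{-1}(z) = (z + √c)^2, so the bound reads u(t) ≤ (√a + t)^2; the extremal ODE u′ = 2√u, u(0) = a, attains it exactly.

```python
# Numerical sanity check of the classical (forward, deterministic) Bihari
# inequality for the illustrative choice rho(u) = 2*sqrt(u), f(s) = 1.
# Then G(x) = sqrt(x) - sqrt(c), G^{-1}(z) = (z + sqrt(c))^2, and the
# bound is u(t) <= (sqrt(a) + t)^2.  The extremal case is the ODE
# u'(t) = 2*sqrt(u(t)), u(0) = a, whose exact solution (sqrt(a) + t)^2
# attains the bound with equality.
import math


def bihari_bound(a: float, t: float) -> float:
    """Bihari bound G^{-1}(G(a) + t) for rho(u) = 2*sqrt(u), f = 1."""
    return (math.sqrt(a) + t) ** 2


def euler_solution(a: float, t_end: float, n: int = 10_000) -> float:
    """Explicit Euler scheme for the extremal ODE u' = 2*sqrt(u), u(0) = a."""
    h = t_end / n
    u = a
    for _ in range(n):
        u += h * 2.0 * math.sqrt(u)
    return u


a, t_end = 1.0, 1.0
u_num = euler_solution(a, t_end)   # approximates (sqrt(a) + t)^2 at t = 1
bound = bihari_bound(a, t_end)
print(u_num, bound)
```

With a = 1 the exact extremal solution is (1 + t)^2, so at t = 1 the Euler approximation should sit just below the bound 4; by convexity of the exact solution, the explicit Euler iterates never exceed it.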

Notations
For x, y ∈ R, we use |x| to denote the Euclidean norm of x and ⟨x, y⟩ to denote the Euclidean inner product. For B ∈ R^d, |B| represents √(Tr(BB*)). Let (Ω, F, P) be a complete probability space carrying a d-dimensional Brownian motion W, and let {F_t}_{t∈[0,T]} be the natural filtration generated by W. For a Euclidean space H, we introduce the following spaces: In the following, ρ is a nondecreasing continuous concave function from R_+ to R_+ such that ρ(0) = 0 and ∫_{0+} (1/ρ(s)) ds = +∞, and G(x) := ∫_c^x (1/ρ(y)) dy is well defined for some c > 0, with G^{-1}(·) the inverse function of G.
Since ρ is concave and ρ(0) = 0, one can find a pair of positive constants a and b such that ρ(x) ≤ a + bx for all x ≥ 0. We make the following convention: the letter C will denote a positive constant whose value may vary from one place to another; moreover, C depends only on the constants appearing in the theorems below.
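One standard way to produce such constants (our own filling of a routine step): for a concave ρ with ρ(0) = 0, the map x ↦ ρ(x)/x is nonincreasing on (0, ∞), so

```latex
\rho(x) \le \rho(1) \quad (0 \le x \le 1),
\qquad
\rho(x) = \frac{\rho(x)}{x}\,x \le \rho(1)\,x \quad (x \ge 1),
```

hence ρ(x) ≤ ρ(1) + ρ(1)x for all x ≥ 0; that is, one may take a = b = ρ(1), assuming ρ(1) > 0.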

Main results
Before giving our main results, we need the following lemma.

Lemma 3.1 The function G(x), x > 0, defined above is concave.
Proof Since ρ is nondecreasing, the derivative G′(x) = 1/ρ(x) is nonincreasing on (0, ∞); in particular, for any 0 < x_1 < x_2 we have G′(x_2) ≤ G′(x_1). A differentiable function with nonincreasing derivative is concave. Thus G(x), x > 0, is a concave function.
Theorem 3.2 If the nonnegative F_t-adapted stochastic process μ(ω, t) satisfies the following conditions: In particular, if a = 0, we have μ(t) = 0.
Proof Set η = ∫_0^T β(s)ρ(μ(s)) ds. By the martingale representation theorem, there exists a stochastic process By the assumptions of the theorem, we know that μ(t) ≤ μ̃(t). Moreover, Applying the differential formula to G^{-1}(G(μ̃(t)) + ∫_0^t β(s) ds), we have Since ρ is nondecreasing and Integrating on [t, T] and taking the conditional expectation with respect to F_t, noting that μ̃(T) = a, leads to By μ(t) ≤ μ̃(t), we obtain

Remark 3.3 We will illustrate that Since G is a concave function, by Jensen's inequality, we have

The following lemma is a slight extension of Theorem 3.2.

Lemma 3.4 Let α(ω, t), β(ω, t) be two nonnegative F-adapted stochastic processes, one of them strictly positive. If the following conditions hold: where W(x) := ∫_c^x 1/(y + ρ(y)) dy is well defined for some c > 0 and W^{-1}(·) is the inverse function of W. In particular, if a = 0, we have μ(t) = 0.
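The Jensen step invoked in Remark 3.3 (its displayed formula is lost in this copy) is the conditional Jensen inequality for the concave function G:

```latex
\mathbb{E}\big[\,G(X)\,\big|\,\mathcal{F}_t\,\big]
\;\le\;
G\big(\mathbb{E}[\,X \,|\, \mathcal{F}_t\,]\big),
```

valid for any integrable random variable X, since G is concave by Lemma 3.1.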
In the following theorem, the forward Gronwall–Bellman inequality in [14] is generalized to a backward stochastic Gronwall–Bellman inequality. Moreover, in contrast to the Gronwall–Bellman inequality in [14], the processes α(ω, t), β(ω, t) in the following theorem depend on ω; thus, the proof method in this paper is also different from the method in [14].

Theorem 3.5 Let α(ω, t), β(ω, t) be two nonnegative F-adapted stochastic processes. If the following conditions are satisfied:
Proof Set It follows that μ(t) ≤ μ̃(t). Set By the martingale representation theorem, there exists a stochastic process Thus, μ̃ Furthermore, this leads to Set From the above equality, we get Integrating on [t, T] and taking the conditional expectation with respect to F_t on both sides of the above inequality, we have Then From Theorem 1 in [16], we have Then we have

From Theorem 3.5, we get the following lemma. If the following conditions hold:

• n(t) is a strictly positive F-adapted stochastic process and is decreasing with respect to t;
• μ(ω, t) is a nonnegative F-adapted stochastic process and E[sup_{t∈[0,T]} μ(ω, t)] < ∞;
By the above theorem, we complete the proof.
Next, we will extend the Bellman–Bihari inequality in Theorem 2 of [15] to a backward stochastic Bellman–Bihari inequality. The proof method in this paper is also different from the method in [15].
Theorem 4.1 Let α(ω, t), β(ω, t) be two nonnegative F-adapted stochastic processes, one of them strictly positive. If the following conditions hold: then, for each t ∈ [0, T], we have where W, W^{-1} are the functions in Lemma 3.4.
Proof Set It follows that μ(t) ≤ μ̃(t). Set By the martingale representation theorem, there exists a stochastic process Thus, μ̃ Furthermore, this leads to From the above equality, we get Integrating on [t, T] and taking the conditional expectation with respect to F_t on both sides of the above inequality, we have From Lemma 3.4, we have
Remark 4.2 Obviously, since the process appearing in (H3) is a stochastic process, the assumption in (H3) about f is different from the assumption in [7].

Conclusion
In this paper, we mainly studied several different forms of the backward stochastic Bellman–Bihari inequality. Our approach builds on the methods used for backward stochastic Gronwall inequalities and for forward Bellman–Bihari inequalities. As far as we know, there has been little study of backward stochastic Bellman–Bihari inequalities. Just as backward stochastic Gronwall inequalities possess some essential features compared with their forward counterparts, backward stochastic Bellman–Bihari inequalities also enjoy some essentially different features, which can be applied to solve problems on BSDEs. Our further interest lies in more applications of backward stochastic Bellman–Bihari inequalities.