The weighted reverse Poincaré-type estimates for the difference of two convex vectors

Abstract

In this paper we develop Caccioppoli-type estimates for arbitrary convex vectors and for vectors having both convex and concave components. To do this, we first derive these estimates for smooth convex vectors and then, through mollification, extend the results to arbitrary convex vectors. Estimates of this type are valuable in problems of financial mathematics, in particular for establishing optimal investment strategies.

1 Introduction

Mathematical inequalities form a branch of mathematics with wide applications in optimization theory, mathematical finance, the theory of uniform approximation, ordinary and partial differential equations, game theory, mathematical economics, etc. Thus the development of new inequalities often puts on a firm foundation the heuristic techniques used in the applied sciences.

The classical work of Hardy, Littlewood and Polya [1] established inequalities as a field of mathematics in its own right. Hardy et al. [1] and Beckenbach and Bellman [2] are considered the classical references in this field. For the use of inequalities in optimization and uniform approximation, we refer to [3] and [4]. In mathematical finance the payoff function of various options (for example, European and American options) is usually convex, and this property forces the corresponding value function to be convex with respect to the underlying stock price (see El Karoui et al. [5] and Hobson [6] for details). Traders and practitioners dealing with real-world financial markets use the value function to construct an optimal hedging strategy for the options. When the value function is unknown, they use the above convexity property to construct uniform approximations to the unknown optimal hedging process. In this construction one has to estimate certain weighted integrals involving the weak partial derivative of the value function. For this purpose, Shashiashvili and Shashiashvili [7] introduced a particular weighted integral inequality for the derivative of convex functions bounded from below, with a very particular weight function, and thereby opened a new direction in the field of weighted inequalities. Hussain et al. [8, 9] extended this work to a variety of convex functions and subsequently applied it to the hedging problems of financial mathematics. Saleem and Shashiashvili [10] studied the weighted reverse Poincaré inequality for the difference of two weak subsolutions.

For convenience we will use the following notations and definitions:

\(I=(a,b)=I(x_{0},r)\), where

$$x_{0}=\frac{a+b}{2}\quad \text{and} \quad r=\frac{a+b}{2}-a $$

and \(\bar{I}=[a,b]\) denotes the corresponding closed interval.

The n-dimensional vector

$$ F(x)= \bigl(f_{1}(x),f_{2}(x), \ldots,f_{n}(x) \bigr) $$
(1.1)

is smooth convex if

$$ \frac{d^{2}}{dx^{2}} f_{i}(x)\geq0 \quad \forall i=1,2, \ldots,n. $$
(1.2)

The vector \(F(x)\) in (1.1) is arbitrary convex provided

$$\begin{aligned} {f_{i}}\bigl(\lambda{x}+(1-\lambda){y}\bigr)\leq \lambda{f_{i}}(x)+(1-\lambda ){f_{i}}(y)\quad \forall i=1,2,\ldots,n \end{aligned}$$
(1.3)

for each \(\lambda\in[0,1]\) and all \(x, y\in\mathbb{R}\).

Let \(\chi_{[1,j]}^{[j+1,n]}[a,b]\) be the class of vectors whose first j components are convex functions and whose remaining \(n-j\) components are concave on the interval \([a,b]\), and let \(\chi _{[j+1,n]}^{[1,j]}[a,b]\) be the class of vectors whose first j components are concave functions and whose remaining components are convex on the interval \([a,b]\). It is trivial that if \(F(x)\in\chi_{[1,j]}^{[j+1,n]}[a,b]\) then \(-F(x)\in\chi_{[j+1,n]}^{[1,j]}[a,b]\).

The vector addition and scalar multiplication are defined in the usual way:

For

$$F(x)= \bigl(f_{1}(x),f_{2}(x),\ldots,f_{n}(x) \bigr) $$

and

$$G(x)= \bigl(g_{1}(x),g_{2}(x),\ldots,g_{n}(x) \bigr) $$

the vector addition is defined as

$$F(x)+G(x)= \bigl(f_{1}(x)+g_{1}(x),f_{2}(x)+g_{2}(x), \ldots,f_{n}(x)+g_{n}(x) \bigr) $$

and scalar multiplication as

$$\alpha F(x)= \bigl(\alpha f_{1}(x),\alpha f_{2}(x),\ldots, \alpha f_{n}(x) \bigr). $$

The vector composition is defined as follows:

$$F\circ G(x)=F\bigl(G(x)\bigr)= \bigl(f_{1}\bigl(g_{1}(x) \bigr),f_{2}\bigl(g_{2}(x)\bigr),\ldots,f_{n} \bigl(g_{n}(x)\bigr) \bigr). $$

The vector \(F(x)\) is said to be increasing (decreasing) if the functions \(f_{i}(x)\) are increasing (decreasing) for all \(i=1,2,\ldots,n\).
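To make these componentwise conventions concrete, the following Python sketch (our own illustration, not part of the original paper; the names `VectorFn` and `is_convex_on` are hypothetical) encodes the addition, scalar multiplication, and componentwise composition defined above, together with a crude grid test of the midpoint form of inequality (1.3).

```python
import numpy as np

class VectorFn:
    """An n-dimensional vector F(x) = (f_1(x), ..., f_n(x)) of real-valued functions."""
    def __init__(self, components):
        self.components = list(components)        # callables f_i : R -> R

    def __call__(self, x):
        return np.array([f(x) for f in self.components])

    def __add__(self, other):                     # (F + G)(x) = (f_i(x) + g_i(x))_i
        return VectorFn([lambda x, f=f, g=g: f(x) + g(x)
                         for f, g in zip(self.components, other.components)])

    def scale(self, alpha):                       # (alpha F)(x) = (alpha f_i(x))_i
        return VectorFn([lambda x, f=f: alpha * f(x) for f in self.components])

    def compose(self, other):                     # componentwise: (f_i(g_i(x)))_i
        return VectorFn([lambda x, f=f, g=g: f(g(x))
                         for f, g in zip(self.components, other.components)])

def is_convex_on(f, a, b, m=100):
    """Crude test of the midpoint form of inequality (1.3) on a grid of pairs in [a, b]."""
    xs = np.linspace(a, b, m)
    return all(f(0.5 * (x + y)) <= 0.5 * (f(x) + f(y)) + 1e-12
               for x in xs for y in xs)

F = VectorFn([np.exp, lambda x: x**2, abs])                    # three convex components
G = VectorFn([lambda x: x**4, np.cosh, lambda x: max(x, 0.0)])
H = F + G                                                      # Proposition 1.1(i)
print(all(is_convex_on(f, -1.0, 1.0) for f in H.components))   # True
```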

The following properties are easy to prove. For proof, one can refer to [11] and Proposition 1.1.7 of [4].

Proposition 1.1

For convex vectors, we have:

  1. (i)

    Adding two convex vectors, we obtain a convex vector.

  2. (ii)

    Multiplication of a convex vector by a positive scalar results in a convex vector.

  3. (iii)

    If \(F:I\rightarrow\mathbb{R}^{n}\) is a convex vector and \(G:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is an increasing convex vector, then \(G\circ F\) is a convex vector.

Proposition 1.2

Let \(F(x), G(x)\in\chi_{[1,j]}^{[j+1,n]}[a,b]\). Then:

  1. (i)

    \(F(x)+G(x)\in\chi_{[1,j]}^{[j+1,n]}[a,b]\).

  2. (ii)

    For any positive scalar α,

    $$\alpha F(x)\in\chi_{[1,j]}^{[j+1,n]}[a,b]. $$
  3. (iii)

    Let \(F(x)\in\chi_{[1,j]}^{[j+1,n]}[a,b]\), and \(G(x)\) be the vector such that \(g_{i}(x)\) are increasing functions \(\forall i=1,\ldots,j\) and \(g_{i}(x)\) are decreasing functions \(\forall i=j+1,\ldots,n\); then

    $$G\circ F(x)\in\chi_{[1,j]}^{[j+1,n]}[a,b]. $$

The paper is organized as follows:

In the next section, we develop reverse Poincaré-type inequalities for the difference of two smooth convex vectors. Then, through a classical mollification technique, we pass to arbitrary convex vectors.

In the last section, we prove the existence, integrability, and a weighted energy inequality for the weak derivative of arbitrary convex vectors. These results can be directly applied to problems of mathematical finance, especially to discrete time hedging of European and American type options.

2 The reverse Poincaré-type inequalities for smooth vectors and approximation of arbitrary convex vectors by smooth ones

Let \(h(x)\) be a non-negative, twice continuously differentiable weight function on \([a,b]\) satisfying

$$ h(a)=h(b)=0,\qquad h'(a)=h'(b)=0, $$
(2.1)

With such a weight function we arrive at the following result of Hussain, Pečarić, and Shashiashvili [12].

Lemma 2.1

Let \(f(x)\) and \(g(x)\) be smooth convex functions and let \(h(x)\) be a non-negative weight function defined on the interval I and satisfying (2.1). Then we have

$$\begin{aligned}& \int _{I} \bigl(f'(x)-g'(x) \bigr)^{2}h(x)\,dx \\& \quad \leq \int _{I} \biggl[ \frac{ (f(x)-g(x) )^{2}}{2}+\sup_{x\in I}\bigl\vert f(x)-g(x)\bigr\vert \bigl( f(x)+g(x) \bigr) \biggr]\bigl\vert h''(x) \bigr\vert \,dx. \end{aligned}$$
(2.2)
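As a numerical sanity check of Lemma 2.1, the sketch below evaluates both sides of (2.2) by the trapezoidal rule for one concrete pair of smooth, non-negative convex functions and the weight \(h(x)=(x-a)^{2}(b-x)^{2}\) used later in Corollary 2.8; the functions, the interval, and the helper `trap` are our own illustrative choices.

```python
import numpy as np

def trap(y, x):                          # composite trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

a, b = 0.0, 2.0
x = np.linspace(a, b, 20001)

f,  g  = np.exp(x), x**2 + 1.0           # two smooth, non-negative convex functions
fp, gp = np.exp(x), 2.0 * x              # their derivatives

h   = (x - a)**2 * (b - x)**2                                          # h = h' = 0 at a, b
hpp = 12.0 * (x - a)**2 - 12.0 * (b - a) * (x - a) + 2.0 * (b - a)**2  # h''(x)

lhs = trap((fp - gp)**2 * h, x)
sup = np.max(np.abs(f - g))
rhs = trap(((f - g)**2 / 2.0 + sup * (f + g)) * np.abs(hpp), x)
print(lhs <= rhs, lhs, rhs)              # the weighted energy of f' - g' is dominated by the rhs
```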

This result gives the following estimate for n-dimensional convex vectors.

Lemma 2.2

Let \(F(x)\) and \(G(x)\) be two n-dimensional smooth convex vectors on the interval I and let \(h(x)\) be a smooth non-negative weight function satisfying (2.1); then the following energy estimate is valid:

$$\begin{aligned}& \int _{I}\bigl\vert F'(x)-G'(x) \bigr\vert ^{2}h(x) \,dx \\& \quad \leq\sum_{i=1}^{n} \int _{I} \biggl[\frac{({f_{i}(x)-g_{i}(x)})^{2}}{2}+\sup_{x\in I} \bigl\vert f_{i}(x)-g_{i}(x)\bigr\vert \bigl(f_{i}(x)+g_{i}(x) \bigr) \biggr]\bigl\vert h''(x)\bigr\vert \,dx. \end{aligned}$$
(2.3)

If \(f(x)\) and \(g(x)\) are concave functions on the interval I then \(-f(x)\) and \(-g(x)\) become convex. Hence, we get the following result.

Lemma 2.3

Let \(f(x)\) and \(g(x)\) be any two smooth concave functions on the interval I and \(h(x)\) be a non-negative weight function satisfying (2.1). The following estimate holds:

$$\begin{aligned}& \int _{I} \bigl(f'(x)-g'(x) \bigr)^{2}h(x)\,dx \\& \quad \leq \int _{I} \biggl[ \frac{ (f(x)-g(x) )^{2}}{2}-\sup_{x\in I}\bigl\vert f(x)-g(x)\bigr\vert \bigl( f(x)+g(x) \bigr) \biggr]\bigl\vert h''(x) \bigr\vert \,dx. \end{aligned}$$
(2.4)

Taking supremum norms on the right-hand side of (2.3), we find the following.

Corollary 2.4

Let \(F(x)\) and \(G(x)\) be n-dimensional smooth convex vectors on the interval I and let the non-negative weight function \(h(x)\) satisfy (2.1); then we get the following estimate:

$$\begin{aligned}& \int _{I}\bigl\vert F'(x)-G'(x) \bigr\vert ^{2}h(x)\,dx \\& \quad \leq \sum_{i=1}^{n} \biggl[ \frac{1}{2}\bigl\Vert f_{i}(x)-g_{i}(x)\bigr\Vert ^{2}_{L^{\infty}} + \bigl\Vert f_{i}(x)-g_{i}(x) \bigr\Vert _{L^{\infty}} \bigl(\bigl\Vert f_{i}(x)\bigr\Vert _{L^{\infty}}+ \bigl\Vert g_{i}(x)\bigr\Vert _{L^{\infty}} \bigr) \biggr] \\& \qquad {}\times \int _{I} \bigl\vert h''(x)\bigr\vert \,dx. \end{aligned}$$
(2.5)

For concave n-dimensional vectors we have the following estimate.

Corollary 2.5

Let \(F(x)\) and \(G(x)\) be n-dimensional smooth concave vectors on the interval I and let \(h(x)\) be a non-negative smooth weight function satisfying (2.1); then

$$\begin{aligned}& \int _{I}\bigl\vert F'(x)-G'(x) \bigr\vert ^{2}h(x) \,dx \\& \quad \leq\sum_{i=1}^{n} \int _{I} \biggl[\frac{({f_{i}(x)-g_{i}(x)})^{2}}{2}-\sup_{x\in I} \bigl\vert f_{i}(x)-g_{i}(x)\bigr\vert \bigl(f_{i}(x)+g_{i}(x) \bigr) \biggr]\bigl\vert h''(x)\bigr\vert \,dx. \end{aligned}$$
(2.6)

The next theorem gives the reverse Poincaré inequality for the difference of vectors belonging to \(\chi_{[1,j]}^{[j+1,n]}[a,b]\).

Theorem 2.6

Let \(F(x)\) and \(G(x)\) belong to \(\chi_{[1,j]}^{[j+1,n]}[a,b]\) and let \(h(x)\) be a non-negative weight function satisfying (2.1); then the following inequality is valid:

$$\begin{aligned}& \int _{I}\bigl\vert F'(x)-G'(x) \bigr\vert ^{2}h(x)\,dx \\& \quad \leq \sum^{n}_{i=1} \int _{I}\frac{ (f_{i}(x)-g_{i}(x) )^{2}}{2}\bigl\vert h''(x)\bigr\vert \,dx + \sum^{j}_{i=1}\sup _{x\in I} \bigl\vert f_{i}(x)-g_{i}(x)\bigr\vert \int _{I} \bigl(f_{i}(x)+g_{i}(x) \bigr)\bigl\vert h''(x)\bigr\vert \,dx \\& \qquad {}- \sum^{n}_{i=j+1}\sup _{x\in I}\bigl\vert f_{i}(x)-g_{i}(x)\bigr\vert \int _{I} \bigl(f_{i}(x)+g_{i}(x) \bigr)\bigl\vert h''(x)\bigr\vert \,dx. \end{aligned}$$
(2.7)

Proof

Let us express

$$\begin{aligned}& \int _{I}\bigl\vert F'(x)-G'(x) \bigr\vert ^{2}h(x)\,dx \\& \quad =\sum^{n}_{i=1} \int _{I} \bigl(f_{i}'(x)-g_{i}'(x) \bigr)^{2}h(x)\,dx \end{aligned}$$
(2.8)
$$\begin{aligned}& \quad =\sum^{j}_{i=1} \int _{I} \bigl(f_{i}'(x)-g_{i}'(x) \bigr)^{2}h(x)\,dx+\sum^{n}_{i=j+1} \int _{I} \bigl(f_{i}'(x)-g_{i}'(x) \bigr)^{2}h(x)\,dx. \end{aligned}$$
(2.9)

Applying Lemma 2.1 to each term of the first sum on the right side of the latter expression (i.e. for \(i=1,\ldots,j\)), we obtain

$$\begin{aligned}& \int _{I} \bigl(f_{i}'(x)-g_{i}'(x) \bigr)^{2}h(x)\,dx \\& \quad \leq \int _{I} \biggl[\frac{ (f_{i}(x)-g_{i}(x) )^{2}}{2}+\sup_{x\in I}\bigl\vert f_{i}(x)-g_{i}(x)\bigr\vert \bigl(f_{i}(x)+g_{i}(x) \bigr) \biggr]\bigl\vert h''(x)\bigr\vert \,dx, \end{aligned}$$
(2.10)

and applying Lemma 2.3 to each term of the second sum (for \(i=j+1,\ldots,n\)), we get

$$\begin{aligned}& \int _{I} \bigl(f_{i}'(x)-g_{i}'(x) \bigr)^{2}h(x)\,dx \\& \quad \leq \int _{I} \biggl[\frac{ (f_{i}(x)-g_{i}(x) )^{2}}{2}-\sup_{x\in I}\bigl\vert f_{i}(x)-g_{i}(x)\bigr\vert \bigl(f_{i}(x)+g_{i}(x) \bigr) \biggr]\bigl\vert h''(x)\bigr\vert \,dx. \end{aligned}$$
(2.11)

Summing the inequalities (2.10) and (2.11) over i, we obtain the required inequality (2.7). □
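A numerical sanity check of (2.7) is given below for one concrete pair of two-component vectors with a convex, non-negative first component and a concave, non-positive second component; the particular functions and the weight \(h(x)=(x-a)^{2}(b-x)^{2}\) are our own illustrative choices.

```python
import numpy as np

def trap(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

a, b = 0.0, 2.0
x = np.linspace(a, b, 20001)
h   = (x - a)**2 * (b - x)**2
hpp = 12.0 * (x - a)**2 - 12.0 * (b - a) * (x - a) + 2.0 * (b - a)**2

# F, G in chi_{[1,1]}^{[2,2]}[a,b]: first component convex, second concave.
F,  G  = [np.exp(x), -x**2],     [x**2 + 1.0, -np.exp(x)]
Fp, Gp = [np.exp(x), -2.0 * x],  [2.0 * x,    -np.exp(x)]

lhs = sum(trap((fp - gp)**2 * h, x) for fp, gp in zip(Fp, Gp))

quad = sum(trap((f - g)**2 / 2.0 * np.abs(hpp), x) for f, g in zip(F, G))
conv = np.max(np.abs(F[0] - G[0])) * trap((F[0] + G[0]) * np.abs(hpp), x)   # '+' block
conc = np.max(np.abs(F[1] - G[1])) * trap((F[1] + G[1]) * np.abs(hpp), x)   # '-' block
print(lhs <= quad + conv - conc, lhs, quad + conv - conc)
```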

Remark 2.7

Using the supremum norm in (2.7), we obtain the following inequality:

$$\begin{aligned}& \int _{I}\bigl\vert F'(x)-G'(x) \bigr\vert ^{2}h(x)\,dx \\& \quad \leq \sum_{i=1}^{n} \biggl[ \frac{1}{2}\bigl\Vert f_{i}(x)-g_{i}(x)\bigr\Vert ^{2}_{L^{\infty}} + \bigl\Vert f_{i}(x)-g_{i}(x)\bigr\Vert _{L^{\infty}} \bigl(\bigl\Vert f_{i}(x)\bigr\Vert _{L^{\infty}}+\bigl\Vert g_{i}(x)\bigr\Vert _{L^{\infty}} \bigr) \biggr] \\& \qquad {}\times \int _{I} \bigl\vert h''(x)\bigr\vert \,dx. \end{aligned}$$
(2.12)

Corollary 2.8

Let \(F(x)\) and \(G(x)\) be any two twice continuously differentiable n-dimensional convex vectors defined on the closed bounded interval \([a, b]\), and let the weight function \(h(x)\) be given, as in [12] and [7], by

$$h(x)=(x-a)^{2}(b-x)^{2}, \quad a\leq x\leq b. $$

Then we get the estimate

$$\begin{aligned}& \int _{I} \bigl\vert F'(x)-G'(x) \bigr\vert ^{2}h(x)\,dx \\& \quad \leq \frac {4\sqrt{3}}{9}\sum_{i=1}^{n} \biggl[\frac{1}{2}\bigl\Vert f_{i}(x)-g_{i}(x)\bigr\Vert ^{2}_{L^{\infty}} + \bigl\Vert f_{i}(x)-g_{i}(x) \bigr\Vert _{L^{\infty}} \bigl( \bigl\Vert f_{i}(x)\bigr\Vert _{L^{\infty}}+ \bigl\Vert g_{i}(x)\bigr\Vert _{L^{\infty}} \bigr) \biggr] \\& \qquad {}\times (b-a)^{3}. \end{aligned}$$
(2.13)

Proof

From [12], we have

$$\int _{I}\bigl\vert h''(x)\bigr\vert \,dx=\frac{4\sqrt{3}}{9}(b-a)^{3}. $$

Using the latter value in (2.5), we obtain the desired estimate. □
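The value of \(\int_{I}\vert h''(x)\vert \,dx\) quoted from [12] can be confirmed numerically; the short sketch below (the interval \([1,4]\) is an arbitrary illustrative choice) compares a trapezoidal quadrature of \(\vert h''\vert \) with the closed form \(\frac{4\sqrt{3}}{9}(b-a)^{3}\).

```python
import numpy as np

def trap(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

a, b = 1.0, 4.0
x = np.linspace(a, b, 200001)
hpp = 12.0 * (x - a)**2 - 12.0 * (b - a) * (x - a) + 2.0 * (b - a)**2  # h'' for h=(x-a)^2(b-x)^2

print(trap(np.abs(hpp), x))                   # numerical value of int |h''| dx
print(4.0 * np.sqrt(3.0) / 9.0 * (b - a)**3)  # closed form used above; the two values agree
```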

We define the vector convolution for \(F(x)\in\chi _{[1,j]}^{[j+1,n]}[a,b] \) in the following way:

Assume

$$F(x)= \bigl(f_{1}(x),f_{2}(x),\ldots,f_{n}(x) \bigr) $$

to be an n-dimensional vector and

$$\epsilon= (\epsilon_{1},\epsilon_{2},\ldots, \epsilon_{n} ),\quad \text{with } \epsilon_{i}> 0, $$

where \(\epsilon{\rightarrow0} \) means \(\max (\epsilon _{1},\epsilon_{2},\ldots,\epsilon_{n} )\rightarrow0\). Take

$$\theta_{\epsilon}(x)= \bigl(\theta_{\epsilon_{1}}(x),\theta _{\epsilon_{2}}(x),\ldots,\theta_{\epsilon_{n}}(x) \bigr) $$

and

$$\theta_{\epsilon_{i}}(x)= \textstyle\begin{cases} c_{i} \exp\frac{1}{|x|^{2}-\epsilon_{i}^{2}} & \text{if } \vert x\vert < \epsilon_{i}, \\ 0 &\text{if } \vert x\vert \geq\epsilon_{i} \end{cases} $$

\(\forall i=1,2,\ldots,n\), where \(c_{i}\) is the constant such that

$$\int _{\mathbb{R}}\theta_{\epsilon_{i}}(x)\,dx=1 \quad \forall i=1,2, \ldots, n. $$

Now we define the convolution as

$$F\ast\theta_{\epsilon}(x)= (f_{1}\ast\theta_{\epsilon _{1}},f_{2} \ast\theta_{\epsilon_{2}},\ldots,f_{n}\ast\theta _{\epsilon_{n}} ), $$

where \(f_{i}\ast\theta_{\epsilon_{i}}\) is defined as

$$f_{\epsilon_{i}}(x) = \int _{\mathbb{R}}f_{i}(x-y)\theta_{\epsilon_{i}}(y)\, dy. $$

If \(f_{i}\) is continuous then \(f_{\epsilon_{i}}\) converges uniformly to \(f_{i}\) in any compact subset \(K_{i}\subseteq I\) i.e.

$$\begin{aligned}& \vert f_{\epsilon_{i}}-f_{i}\vert \underset{\epsilon _{i}\rightarrow0}{\longrightarrow} 0;\quad \mbox{this implies that} \\& \vert F_{\epsilon}-F\vert ^{2}=\sum _{i=1}^{n}\vert f_{\epsilon _{i}}-f_{i} \vert ^{2}\underset{\epsilon _{i}\rightarrow0}{ \longrightarrow} 0. \end{aligned}$$
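The sketch below (our own illustration; the test function \(f(x)=|x|\), the grid sizes, and the numerically computed normalizing constant are arbitrary choices) implements the mollifier \(\theta_{\epsilon_{i}}\) and the convolution \(f_{\epsilon_{i}}\) defined above, and illustrates both the uniform convergence \(f_{\epsilon_{i}}\rightarrow f_{i}\) on a compact subset and the preservation of convexity established next.

```python
import numpy as np

def trap(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def mollifier(y, eps):
    """theta_eps on the grid y, normalized numerically so that it integrates to 1.
    The constant factor exp(1/eps**2) cancels after normalization and avoids underflow."""
    out = np.zeros_like(y)
    inside = np.abs(y) < eps
    out[inside] = np.exp(1.0 / (y[inside]**2 - eps**2) + 1.0 / eps**2)
    return out / trap(out, y)

def mollify(f, xs, eps):
    """f_eps(x) = int f(x - y) theta_eps(y) dy, evaluated on the grid xs."""
    ys = np.linspace(-eps, eps, 2001)
    theta = mollifier(ys, eps)
    return np.array([trap(f(x - ys) * theta, ys) for x in xs])

f  = np.abs                                   # convex on I = (-1, 1) but not smooth at 0
xs = np.linspace(-0.5, 0.5, 101)              # a compact subset of I
for eps in (0.2, 0.1, 0.05):
    fe = mollify(f, xs, eps)
    print(eps, np.max(np.abs(fe - f(xs))))    # uniform error shrinks as eps -> 0
print(np.all(np.diff(fe, 2) >= -1e-6))        # discrete convexity, up to quadrature error
```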

We claim that \(F_{\epsilon}\in\chi_{[1,j]}^{[j+1,n]}[a,b]\), i.e. \(f_{\epsilon_{i}}\) is a convex function for \(i=1,2,\ldots,j\) and a concave function for \(i=j+1,\ldots,n\). This can be seen in the following way:

We can write

$$\begin{aligned} f_{\epsilon_{i}}\bigl(\lambda x_{1}+(1-\lambda)x_{2} \bigr) =& \int _{I} f_{i} \bigl(\lambda x_{1}+(1- \lambda)x_{2}-y \bigr)\theta_{\epsilon_{i}}(y)\, dy \\ =& \int _{I} f_{i} \bigl[\lambda(x_{1}-y)+(1- \lambda) (x_{2}-y) \bigr]\theta_{\epsilon_{i}}(y)\, dy. \end{aligned}$$
(2.14)

For \(i=1,\ldots,j\), we have

$$\begin{aligned}& \leq \int _{I} \bigl[\lambda f_{i}(x_{1}-y)+(1- \lambda )f_{i}(x_{2}-y) \bigr]\theta_{\epsilon_{i}}(y)\,dy \\& =\lambda \int _{I} f_{i}(x_{1}-y) \theta_{\epsilon _{i}}(y)\,dy+(1-\lambda) \int _{I} f_{i}(x_{2}-y) \theta_{\epsilon_{i}}(y)\,dy \\& =\lambda f_{\epsilon_{i}}(x_{1})+(1-\lambda)f_{\epsilon_{i}}(x_{2}), \end{aligned}$$

while, for \(i=j+1,\ldots, n\),

$$\begin{aligned}& \geq\lambda \int _{I} f_{i}(x_{1}-y)\theta_{\epsilon_{i}}(y)\,dy+(1-\lambda) \int _{I} f_{i}(x_{2}-y) \theta_{\epsilon_{i}}(y)\,dy \\& =\lambda f_{\epsilon_{i}}(x_{1})+(1-\lambda)f_{\epsilon_{i}}(x_{2}). \end{aligned}$$

3 Existence of weak derivative and reverse Poincaré-type inequality for arbitrary convex vectors

Throughout this section we use \(I_{k}=I(x_{0},r_{k})\) where the radius \(r_{k}\) is defined as

$$r_{k}=r \biggl(\frac{k+1}{k+2} \biggr). $$

It is trivial that \(I_{k}\subset I_{k+1}\) and \(\bigcup_{k=1}^{\infty}I_{k}=I\). As usual, \(C_{0}^{\infty}(I)\) denotes the space of infinitely differentiable functions with compact support in I. We take a particular weight function for the interval I, namely

$$h(x)=\bigl[r^{2}-(x-x_{0})^{2} \bigr]^{2}. $$

It is trivial that \(h(x)\) is non-negative \(\forall x\in I\) and \(h(x)=h'(x)=0\) for all \(x\in\partial I\). We also define the corresponding weight functions \(h_{k}\) for the interval \(I_{k}\) as

$$h_{k}(x)=\bigl[{r_{k}}^{2}-(x-x_{0})^{2} \bigr]^{2}. $$
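A minimal sketch of the exhausting intervals \(I_{k}\) and the corresponding weights \(h_{k}\) (with the arbitrary choice \(x_{0}=0\), \(r=1\)):

```python
x0, r = 0.0, 1.0                                     # I = (x0 - r, x0 + r)

def r_k(k):      return r * (k + 1.0) / (k + 2.0)    # radii increase to r
def h(x):        return (r**2 - (x - x0)**2)**2
def h_k(x, k):   return (r_k(k)**2 - (x - x0)**2)**2

print([round(r_k(k), 3) for k in range(1, 6)])       # 0.667, 0.75, 0.8, 0.833, 0.857
print(h_k(x0 + r_k(3), 3), h_k(x0 - r_k(3), 3))      # h_k vanishes at the endpoints of I_k
```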

We arrive at the following result.

Theorem 3.1

An arbitrary convex vector \(F(x)\) possesses a weak derivative \(F'(x)\) on the interval \(I=I(x_{0},r)\) and satisfies

$$\int _{I}\bigl\vert F'(x)\bigr\vert ^{2}h(x)\,dx< \infty, $$

where \(h(x)\) is the weight function satisfying (2.1).

Proof

Take \(F_{\epsilon}(x)\), the mollification of the arbitrary convex vector \(F(x)\) as defined in (1.1).

Since \(F(x)\) is continuous on the interval I, by the standard properties of mollification, on any closed interval \(I_{k}\subset I\) we have

$$\sup_{I_{k}}\bigl\vert f_{\epsilon,i}(x)-f_{i}(x) \bigr\vert ^{2} \underset{\epsilon\rightarrow0}{\longrightarrow} 0. $$

Let us choose \(\epsilon=\frac{1}{m}\), \(m=1,2,\ldots\); then the above convergence becomes

$$\sup_{I_{k}}\bigl\vert f_{m,i}(x)-f_{i}(x) \bigr\vert ^{2} \underset{m\rightarrow\infty}{\longrightarrow} 0. $$

Since \(I_{k}\subset I_{k+l}\subset I\) for \(k,l\in\mathbb{N}\), we may write inequality (2.5) for the vectors \(F_{p}\) and \(F_{m}\), \(p,m\in\mathbb{N}\), on the interval \(I_{k+l}\):

$$\begin{aligned}& \int _{I_{k+l}}\bigl\vert F_{p}'(x)-F_{m}'(x) \bigr\vert ^{2}h_{k+l}(x)\,dx \\& \quad \leq\sum_{i=1}^{n} \biggl[\frac{1}{2}\bigl\Vert f_{p,i}(x)-f_{m,i}(x)\bigr\Vert ^{2}_{L^{\infty}} + \bigl\Vert f_{p,i}(x)-f_{m,i}(x)\bigr\Vert _{L^{\infty}} \bigl(\bigl\Vert f_{p,i}(x)\bigr\Vert _{L^{\infty}}+ \bigl\Vert f_{m,i}(x)\bigr\Vert _{L^{\infty}} \bigr) \biggr] \\& \qquad {}\times \int _{I_{k+l}} \bigl\vert h_{k+l}''(x) \bigr\vert \,dx. \end{aligned}$$
(3.1)

Denote

$$\int _{I_{k+l}} \bigl\vert h_{k+l}''(x) \bigr\vert \,dx=c_{k+l} $$

and

$$\begin{aligned}& \hat{c}_{k+l}=\min_{x\in I_{k}} h_{k+l}(x)>0, \\& \hat{c}_{{k+l}} \int _{I_{k}} \bigl\vert F_{p}'(x)-F_{m}'(x) \bigr\vert ^{2}\,dx \\& \quad \leq {c}_{{k+l}}\sum_{i=1}^{n} \biggl[\frac{1}{2}\bigl\Vert f_{p,i}(x)-f_{m,i}(x)\bigr\Vert ^{2}_{L^{\infty}} + \bigl\Vert f_{p,i}(x)-f_{m,i}(x)\bigr\Vert _{L^{\infty}} \bigl(\bigl\Vert f_{p,i}(x)\bigr\Vert _{L^{\infty}}+ \bigl\Vert f_{m,i}(x)\bigr\Vert _{L^{\infty}} \bigr) \biggr], \end{aligned}$$
(3.2)

since

$$\bigl\Vert f_{p,i}(x)-f_{m,i}(x)\bigr\Vert _{L_{I_{k+l}}^{\infty}} \rightarrow 0, \quad m,p\rightarrow\infty. $$

This implies that

$$\lim_{m,p\rightarrow\infty} \int _{I_{k}} \bigl\vert F_{p}'(x)-F_{m}'(x) \bigr\vert ^{2}\,dx= \lim_{m,p\rightarrow \infty}\sum _{i=1}^{n} \int _{I_{k}} \bigl(f_{p,i}'(x)-f_{m,i}'(x) \bigr)^{2}\,dx=0. $$

By the completeness of the space \(L^{2}(I_{k})\), there exists an n-dimensional measurable vector

$$g_{k}=(g_{k,1},g_{k,2},\ldots,g_{k,n}) $$

such that

$$\lim_{m\rightarrow \infty}\sum_{i=1}^{n} \int _{I_{k}} \bigl(f_{m,i}'(x)-g_{k,i}(x) \bigr)^{2}\,dx=0. $$

Let us extend each \(g_{k}\) outside the interval \(I_{k}\) by 0, and let us define, componentwise,

$$g_{i}(x)=\limsup_{k\rightarrow\infty} {g_{k,i}}(x),\quad i=1,2,\ldots,n. $$

It is trivial that \(g(x)=g_{k}(x)\) almost everywhere on the interval \(I_{k}\).

We claim that

$$g(x)= \bigl(g_{1}(x),g_{2}(x),\ldots,g_{n}(x) \bigr) $$

is the weak derivative of

$$F(x)= \bigl(f_{1}(x),f_{2}(x),\ldots,f_{n}(x) \bigr). $$

To show this it is enough to prove that \(g_{i}(x)\) is the weak partial derivative of \(f_{i}(x)\) for all \(i=1,2,\ldots,n\).

To do this, let us take \(\phi\in C_{0}^{\infty}(I)\). Then \(\operatorname{supp}\phi\subset I_{k}\) for some k.

Hence

$$\int _{I_{k}} f_{m,i}'(x)\phi(x)\,dx=- \int _{I_{k}} f_{m,i} (x)\phi'(x)\,dx. $$

Since

$$\sup_{I_{k}}\bigl\vert f_{m,i}(x)-f_{i}(x)\bigr\vert \underset{m\rightarrow\infty}{\longrightarrow} 0 $$

and

$$\bigl\Vert f_{m,i}'(x)-g_{i}(x)\bigr\Vert _{L^{2}(I_{k})} \underset{m\rightarrow\infty}{\longrightarrow} 0, $$

letting \(m\rightarrow\infty\) in the above identity we obtain

$$\int _{I_{k}} {g_{i}(x)}\phi(x)\,dx=- \int _{I_{k}}f_{i}(x)\phi'(x)\,dx. $$

Thus \(g_{i}(x)\) is the weak derivative of \(f_{i}(x)\) for \(i=1,2,\ldots ,n\).

Writing the inequality (2.5) for \(F=F_{m}\) and \(G=0\), we have

$$\int _{I_{k+l}}\bigl\vert F_{m}'(x)\bigr\vert ^{2}h_{k+l}(x)\,dx\leq\sum_{i=1}^{n} \biggl[\bigl\Vert f_{m,i}(x)\bigr\Vert ^{2}_{L_{I_{k+l}}^{\infty}} +\frac{1}{2}\bigl\Vert f_{m,i}(x)\bigr\Vert ^{2}_{L_{I_{k+l}}^{\infty}} \biggr] \int _{I_{k+l}}\bigl\vert h_{k+l}''(x)\bigr\vert \,dx $$

and we denote

$$\int _{I_{k+l}}\bigl\vert h_{k+l}''(x)\bigr\vert \,dx =c_{k+l}; $$

we get

$$\int _{I_{k+l}}\bigl\vert F_{m}'(x)\bigr\vert ^{2}h_{k+l}(x)\,dx\leq\sum_{i=1}^{n} \biggl[\bigl\Vert f_{m,i}(x)\bigr\Vert ^{2}_{L_{I_{k+l}}^{\infty}} +\frac{1}{2}\bigl\Vert f_{m,i}(x)\bigr\Vert ^{2}_{L_{I_{k+l}}^{\infty}} \biggr]c_{k+l}. $$

Taking the limit as \(m\rightarrow\infty\), we get

$$\int _{I_{k+l}}\bigl\vert F'(x)\bigr\vert ^{2}h_{k+l}(x)\,dx\leq\frac{3}{2}\sum_{i=1}^{n}\bigl\Vert f_{i}(x) \bigr\Vert ^{2}_{L_{I_{k+l}}^{\infty}}c_{k+l}. $$

Since \(I_{k}\subseteq I_{k+l}\), we have

$$\int _{I_{k}}\bigl\vert F'(x)\bigr\vert ^{2}h_{k+l}(x)\,dx\leq\frac{3}{2}\sum_{i=1}^{n}\bigl\Vert f_{i}(x)\bigr\Vert ^{2}_{L_{I_{k+l}}^{\infty}}c_{k+l}. $$

In the last integral, letting \(l\rightarrow\infty\) we find

$$\int _{I_{k}}\bigl\vert F'(x)\bigr\vert ^{2}h_{k}(x)\,dx\leq\frac{3}{2}\sum_{i=1}^{n}\bigl\Vert f_{i}(x)\bigr\Vert ^{2}_{L^{\infty}_{I}}c_{\infty}< \infty, $$

where \(c_{\infty}=\lim_{l\rightarrow\infty}c_{k+l}=\int_{I}\vert h''(x)\vert \,dx\).

Since the bound on the right-hand side does not depend on k, letting \(k\rightarrow\infty\) we conclude

$$\int _{I}\bigl\vert F'(x)\bigr\vert ^{2}h(x)\,dx< \infty. $$

This completes the proof. □
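As an illustration of Theorem 3.1 (our own example, not part of the original paper), take the non-smooth convex vector \(F(x)=(\vert x-x_{0}\vert ,\max(x-x_{0},0))\) on \(I=(-1,1)\). Its weak derivative is the vector of sign/step functions, the weighted energy \(\int_{I}\vert F'(x)\vert ^{2}h(x)\,dx\) is finite, and the defining integration-by-parts identity can be checked numerically against a smooth, compactly supported test function.

```python
import numpy as np

def trap(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

x0, r = 0.0, 1.0
x = np.linspace(x0 - r, x0 + r, 2**18 + 1)            # dyadic grid on I = (-1, 1)
h = (r**2 - (x - x0)**2)**2

F  = [np.abs(x - x0),      np.maximum(x - x0, 0.0)]   # non-smooth convex components
Fp = [np.sign(x - x0),     (x > x0).astype(float)]    # their weak derivatives

print(sum(trap(fp**2 * h, x) for fp in Fp))           # finite weighted energy

# test function phi in C_0^infinity(I), supported in |x| < 1/2
phi = np.zeros_like(x)
inside = np.abs(x) < 0.5
phi[inside] = np.exp(1.0 / (x[inside]**2 - 0.25))
dphi = np.gradient(phi, x[1] - x[0])

for f, g in zip(F, Fp):
    print(trap(g * phi, x), -trap(f * dphi, x))       # the two values nearly coincide
```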

Theorem 3.2

Let \(F(x)\) and \(G(x)\) be two arbitrary (not necessarily smooth) vectors belonging to \(\chi_{[j+1,n]}^{[1,j]}[a,b]\) and let \(h(x)\) be the non-negative weight function satisfying (2.1) on the interval I; then the following estimate holds:

$$\begin{aligned}& \int _{I} \bigl\vert F'(x)-G'(x) \bigr\vert ^{2}h(x)\,dx \\& \quad \leq \sum_{i=1}^{n} \biggl[\frac{1}{2} \bigl\Vert f_{i}(x)-g_{i}(x)\bigr\Vert ^{2}_{L^{\infty}}+\bigl\Vert f_{i}(x)-g_{i}(x) \bigr\Vert _{L^{\infty}} \bigl(\bigl\Vert f_{i}(x)\bigr\Vert _{L^{\infty}}+\bigl\Vert g_{i}(x)\bigr\Vert _{L^{\infty}} \bigr) \biggr] \\& \qquad {}\times \int _{I} \bigl\vert h''(x)\bigr\vert \,dx. \end{aligned}$$
(3.3)

Proof

Let \(F_{m}(x)\) and \(G_{m}(x)\), \(m=1,2,\ldots\), be the smooth approximations of \(F(x)\) and \(G(x)\), respectively. For each k and l there exists an integer \(m_{k+l}\) such that \(F_{m}\) and \(G_{m}\) are smooth over the interval \(I_{k+l}\) for all \(m\geq m_{k+l}\), and \(F_{m}(x)\) and \(G_{m}(x)\) converge uniformly on \(I_{k+l}\) to \(F(x)\) and \(G(x)\), respectively.

Let us write the inequality (2.5) for the functions \(F_{m}(x)\) and \(G_{m}(x)\) on the interval \(I_{k+l}\) as

$$\begin{aligned}& \int _{I_{k+l}}\bigl\vert F_{m}'(x)-G_{m}'(x) \bigr\vert ^{2}h_{k+l}(x)\,dx \\& \quad \leq \int _{I_{k+l}} \Biggl[\frac{1}{2}\sum ^{n}_{i=1}\bigl\Vert f_{m,i}(x)-g_{m,i}(x) \bigr\Vert _{L^{\infty}}^{2} + \sum^{n}_{i=1} \bigl\Vert f_{m,i}(x)-g_{m,i}(x)\bigr\Vert _{L^{\infty}} \\& \qquad {}\times \bigl(\bigl\Vert f_{m,i}(x)\bigr\Vert _{L^{\infty}}+\bigl\Vert g_{m,i}(x) \bigr\Vert _{L^{\infty}} \bigr) \Biggr]\bigl\vert h_{k+l}''(x)\bigr\vert \,dx. \end{aligned}$$

Taking the limit as \(m\rightarrow\infty\), we get

$$\begin{aligned}& \int _{I_{k+l}}\bigl\vert F'(x)-G'(x) \bigr\vert ^{2}h_{k+l}(x)\,dx \\& \quad \leq \int _{I_{k+l}} \Biggl[\frac{1}{2}\sum ^{n}_{i=1} \bigl\Vert f_{i}(x)-g_{i}(x) \bigr\Vert _{L^{\infty}}^{2} + \sum^{n}_{i=1} \bigl\Vert f_{i}(x)-g_{i}(x)\bigr\Vert _{L^{\infty}} \\& \qquad {} \times \bigl(\bigl\Vert f_{i}(x)\bigr\Vert _{L^{\infty}}+\bigl\Vert g_{i}(x)\bigr\Vert _{L^{\infty}} \bigr) \Biggr]\bigl\vert h_{k+l}''(x)\bigr\vert \,dx. \end{aligned}$$

As \(I_{k}\subset I_{k+l}\), taking the limit as \(l\rightarrow\infty\), we obtain

$$\begin{aligned}& \int _{I_{k}}\bigl\vert F'(x)-G'(x) \bigr\vert ^{2}h_{k}(x)\,dx \\& \quad \leq \int _{I} \Biggl[\frac{1}{2}\sum ^{n}_{i=1}\bigl\Vert f_{i}(x)-g_{i}(x) \bigr\Vert _{L^{\infty}}^{2}+ \sum^{n}_{i=1} \bigl\Vert f_{i}(x)-g_{i}(x)\bigr\Vert _{L^{\infty}} \\& \qquad {} \times \bigl(\bigl\Vert f_{i}(x)\bigr\Vert _{L^{\infty}}+\bigl\Vert g_{i}(x)\bigr\Vert _{L^{\infty}} \bigr) \Biggr]\bigl\vert h''(x)\bigr\vert \,dx. \end{aligned}$$

Using Theorem 3.1, we get

$$\int _{I}\bigl\vert F'(x)-G'(x) \bigr\vert ^{2}h(x)\,dx< \infty. $$

Letting \(k\rightarrow\infty\) in the last inequality, we arrive at (3.3), which completes the proof. □
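Finally, inequality (3.3) can be checked numerically for a pair of non-smooth vectors with a concave first component and a convex second component (our own illustrative example), using the weight \(h(x)=(1-x^{2})^{2}\) of Section 3 on \(I=(-1,1)\).

```python
import numpy as np

def trap(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

a, b = -1.0, 1.0
x = np.linspace(a, b, 2**18 + 1)
h   = (1.0 - x**2)**2            # equals (x - a)^2 (b - x)^2 on [-1, 1]
hpp = 12.0 * x**2 - 4.0          # h''(x)

# concave first components, convex second components, with their weak derivatives
F,  G  = [-np.abs(x - 0.2),  np.abs(x)],        [-x**2,     np.maximum(x, 0.0)]
Fp, Gp = [-np.sign(x - 0.2), np.sign(x)],       [-2.0 * x,  (x > 0).astype(float)]

lhs = sum(trap((fp - gp)**2 * h, x) for fp, gp in zip(Fp, Gp))

c   = trap(np.abs(hpp), x)       # = 4*sqrt(3)/9 * (b - a)^3
rhs = sum(0.5 * np.max(np.abs(f - g))**2
          + np.max(np.abs(f - g)) * (np.max(np.abs(f)) + np.max(np.abs(g)))
          for f, g in zip(F, G)) * c
print(lhs <= rhs, lhs, rhs)
```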

References

  1. Hardy, GH, Littlewood, JE, Polya, G: Inequalities. Cambridge University Press, Cambridge (1934)

  2. Beckenbach, EF, Bellman, R: Inequalities. Springer, Berlin (1965)

  3. Cheney, EW: Approximation Theory III. Academic Press, New York (1980)

  4. Niculescu, CP, Persson, L-E: Convex Functions and Their Applications. Springer, Berlin (2004)

  5. El Karoui, N, Jeanblanc-Picque, M, Shreve, SE: Robustness of the Black and Scholes formula. Math. Finance 8(2), 93-126 (1998)

  6. Hobson, DG: Volatility misspecification, option pricing and superreplication via coupling. Ann. Appl. Probab. 8(1), 193-205 (1998)

  7. Shashiashvili, K, Shashiashvili, M: Estimation of the derivative of the convex function by means of its uniform approximation. J. Inequal. Pure Appl. Math. 6(4), Article 113 (2005)

  8. Hussain, S, Shashiashvili, M: Discrete time hedging of the American option. Math. Finance 20(4), 647-670 (2010)

  9. Hussain, S, Rehman, N: Estimate for the discrete time hedging error of the American option on a dividend paying stock. Math. Inequal. Appl. 15, 137-163 (2012)

  10. Saleem, MS, Shashiashvili, M: The weighted reverse Poincaré inequality for the difference of two weak subsolutions. Bull. Georgian Natl. Acad. Sci. 4(3), 24-28 (2010)

  11. Pečarić, J, Proschan, F, Tong, YL: Convex Functions, Partial Orderings, and Statistical Applications. Mathematics in Science and Engineering, vol. 187. Academic Press, Boston (1992)

  12. Hussain, S, Pečarić, J, Shashiashvili, M: The weighted square integral inequalities for the first derivative of the function of a real variable. J. Inequal. Appl. 2008, Article ID 343024 (2008)

Author information

Correspondence to Muhammad Shoaib Saleem.

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite this article

Saleem, M.S., Pečarić, J., Hussain, S. et al. The weighted reverse Poincaré-type estimates for the difference of two convex vectors. J Inequal Appl 2016, 194 (2016). https://doi.org/10.1186/s13660-016-1133-x
