
On an upper bound for Sherman’s inequality

Abstract

Considering a weighted relation of majorization, Sherman obtained a useful generalization of the classical majorization inequality. The aim of this paper is to extend Sherman’s inequality to convex functions of higher order. An upper bound for Sherman’s inequality, as well as for generalized Sherman’s inequality, is given with some applications. As easy consequences, some new bounds for Jensen’s inequality can be derived.

1 Introduction and preliminaries

For any given partial ordering of a set χ, real-valued functions ϕ defined on χ, which satisfy \(\phi(x)\leq\phi(y)\) whenever \(x\preceq y\), are variously referred to as ‘monotonic’, ‘isotonic’, or ‘order-preserving’. We consider a partial ordering of majorization.

For two vectors \(\mathbf{x}, \mathbf{y}\in\mathbb{R}^{m}\) we say that x majorizes y or y is majorized by x and write \(\mathbf{y}\prec\mathbf {x}\) if

$${ \sum_{i=1}^{k}} y_{[i]} \leq { \sum_{i=1}^{k}} x_{[i]}\quad\text{for }k=1,\ldots,m-1\quad\text{and}\quad { \sum _{i=1}^{m}} y_{i}={ \sum _{i=1}^{m}} x_{i}. $$

Here \(x_{[i]}\) and \(y_{[i]}\) denote the elements of x and y sorted in decreasing order. Majorization thus compares how a fixed total is distributed among the components of a vector: \(\mathbf{y}\prec\mathbf{x}\) means that the entries of y are less spread out than those of x. For example, \((2,2,2)\prec(3,2,1)\).

For the concept of majorization, the order-preserving functions were first systematically studied by Schur (see [1, 2], p.79). In his honor, such functions are said to be ‘convex in the sense of Schur’, ‘Schur convex’, or ‘S-convex.’

Many of the inequalities that arise from a majorization can be obtained simply by identifying an appropriate order-preserving function. Historically, such inequalities have often been proved by direct methods without an awareness that a majorization underlies the validity of the inequality. The classical example of this is the Hadamard determinant inequality, where the underlying majorization was discovered by Schur.

Instead of discussing a partial order on vectors in \(\mathbb{R}^{m}\), it is natural to consider ordering matrices. Majorization in its usual sense applies to vectors with fixed element totals. For \(m\times n\) matrices, several avenues of generalization are open. Two matrices can be considered as being ordered if one is obtainable from the other by postmultiplication by a doubly stochastic matrix. This relates \(m\times n\) matrices with fixed row totals. There are, however, several variations on this approach that merit attention.

An important tool in the study of majorization is the next theorem, due to Hardy et al. [3], which gives connections with matrix theory, more specifically with doubly stochastic matrices, i.e. nonnegative square matrices in which every row and every column sums to one.

Theorem 1

Let \(\mathbf{x}=(x_{1},\ldots,x_{m})\), \(\mathbf{y}=(y_{1},\ldots,y_{m})\in \mathbb{R}^{m}\). Then the following statements are equivalent:

  (i) \(\mathbf{y}\prec\mathbf{x}\);

  (ii) there is a doubly stochastic matrix A such that \(\mathbf{y}=\mathbf{xA}\);

  (iii) the inequality \(\sum_{i=1}^{m}\phi(y_{i}) \leq\sum_{i=1}^{m}\phi(x_{i})\) holds for each convex continuous function \(\phi :\mathbb{R}\rightarrow\mathbb{R}\).

For many purposes, the condition \(\mathbf{y}=\mathbf{xA}\) is more convenient than the partial sums conditions defining majorization.
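To make the equivalence concrete, here is a minimal numerical sketch (our own hypothetical data; numpy assumed available) checking all three statements of Theorem 1 for one pair x, y related by a doubly stochastic matrix:

```python
import numpy as np

# A hypothetical doubly stochastic matrix: every row and column sums to one.
x = np.array([5.0, 1.0, 3.0])
A = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.1, 0.6]])
assert np.allclose(A.sum(axis=0), 1.0) and np.allclose(A.sum(axis=1), 1.0)

y = x @ A                                       # (ii): y = xA

# (i): partial sums of the decreasing rearrangements, plus equal totals.
xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
assert np.all(np.cumsum(ys)[:-1] <= np.cumsum(xs)[:-1] + 1e-12)
assert np.isclose(x.sum(), y.sum())

# (iii): sum phi(y_i) <= sum phi(x_i) for a convex continuous phi.
phi = lambda t: t**2
assert phi(y).sum() <= phi(x).sum() + 1e-12
```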

Sherman [4] considered a weighted relation of majorization,

$${ \sum_{i=1}^{k}} b_{i}y_{i} \leq { \sum_{j=1}^{l}} a_{j}x_{j}$$

for nonnegative weights \(a_{j}\) and \(b_{i}\), and proved a more general result involving a row stochastic \(k\times l\) matrix, i.e. a matrix \(\mathbf{A}=(a_{ij})\in\mathcal{M}_{kl}(\mathbb{R})\) such that

$$\begin{aligned} &a_{ij} \geq0\quad\text{for all }i=1,\ldots,k,j=1,\ldots,l, \\ & \sum_{j=1}^{l} a_{ij} =1\quad\text{for all }i=1,\ldots,k. \end{aligned}$$

By \(\mathbf{A}^{T}=(a_{ji})\in\mathcal{M}_{lk}(\mathbb{R})\) we denote the transpose of A.

Sherman’s result can be formulated as the following theorem (see [4]).

Theorem 2

Let \(\mathbf{x}\in[\alpha,\beta]^{l}\), \(\mathbf{y}\in[\alpha,\beta]^{k}\), \(\mathbf{a}\in[0,\infty)^{l}\), \(\mathbf{b}\in[0,\infty)^{k}\) and

$$ \mathbf{y}=\mathbf{x}\mathbf{A}^{T}\quad\textit{and}\quad\mathbf {a}=\mathbf{bA} $$
(1.1)

for some row stochastic matrix \(\mathbf{A}\in\mathcal{M}_{kl}(\mathbb{R})\). Then for every convex function \(\phi:[\alpha,\beta]\rightarrow\mathbb {R}\) we have

$$ \sum_{i=1}^{k}b_{i} \phi(y_{i})\leq\sum_{j=1}^{l}a_{j} \phi(x_{j}). $$
(1.2)

If ϕ is concave, then the reverse inequality in (1.2) holds.

Notice that if we set \(k=l\) and \(a_{j}=b_{i}\) for all \(i,j=1,\ldots,k\), the condition \(\mathbf{a}=\mathbf{bA}\) ensures that the columns of A are stochastic as well, so in that case we deal with doubly stochastic matrices. Then, as a special case of Sherman’s inequality, we get a weighted version of the majorization inequality:

$${ \sum_{i=1}^{k}} a_{i}\phi(y_{i})\leq { \sum_{i=1}^{k}} a_{i} \phi(x_{i}). $$

Denoting \(A_{k}=\sum_{i=1}^{k}a_{i}\) and putting \(y_{1}=y_{2}=\cdots=y_{k}=\frac{1}{A_{k}}\sum_{i=1}^{k}a_{i}x_{i}\), we obtain Jensen’s inequality in the form

$$\phi \Biggl( \frac{1}{A_{k}}{ \sum _{i=1}^{k}} a_{i}x_{i} \Biggr) \leq\frac{1}{A_{k}}{ \sum _{i=1}^{k}} a_{i} \phi(x_{i}). $$

Particularly, for \(A_{k}=1\), we have

$$ \phi \Biggl( { \sum_{i=1}^{k}} a_{i}x_{i} \Biggr) \leq { \sum_{i=1}^{k}} a_{i} \phi(x_{i}). $$
(1.3)

On the other hand, the proof of Sherman’s inequality (1.2) is based on Jensen’s inequality (1.3). Since the matrix \(\mathbf{A}\in\mathcal{M}_{kl}(\mathbb{R})\) is row stochastic and (1.1) holds, for every convex function \(\phi:[\alpha,\beta]\rightarrow\mathbb{R}\) we have

$$\begin{aligned} { \sum_{i=1}^{k}} b_{i}\phi(y_{i}) & ={ \sum_{i=1}^{k}} b_{i} \phi \Biggl( { \sum_{j=1}^{l}} x_{j}a_{ij} \Biggr) \leq { \sum_{i=1}^{k}} b_{i}{ \sum_{j=1}^{l}} a_{ij} \phi ( x_{j} ) \\ & ={ \sum_{j=1}^{l}} \Biggl( { \sum_{i=1}^{k}} b_{i}a_{ij} \Biggr) \phi ( x_{j} ) ={ \sum_{j=1}^{l}} a_{j}\phi(x_{j}). \end{aligned}$$
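The same computation is easy to mirror numerically. A minimal sketch (hypothetical data): draw a row stochastic A and nonnegative weights b at random, construct \(\mathbf{y}=\mathbf{xA}^{T}\) and \(\mathbf{a}=\mathbf{bA}\) so that (1.1) holds by design, and check (1.2) for a convex φ:

```python
import numpy as np

# Hypothetical data for Theorem 2; y and a are *constructed* so (1.1) holds.
rng = np.random.default_rng(0)
k, l = 3, 4
A = rng.random((k, l))
A /= A.sum(axis=1, keepdims=True)        # rows now sum to one
x = rng.uniform(0.0, 10.0, size=l)       # x in [alpha, beta]^l
b = rng.uniform(0.0, 2.0, size=k)        # nonnegative weights

y = A @ x                                # y = x A^T, i.e. y_i = sum_j a_ij x_j
a = b @ A                                # a = bA

phi = np.exp                             # a convex function
assert (b * phi(y)).sum() <= (a * phi(x)).sum() + 1e-9   # inequality (1.2)
```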

In this paper, we consider the difference of the two sides of Sherman’s inequality,

$$\sum_{j=1}^{l}a_{j} \phi(x_{j})-\sum_{i=1}^{k}b_{i} \phi(y_{i}) $$

and establish generalizations of Sherman’s inequality (1.2) which hold for n-convex functions, which in special cases are convex in the usual sense. Moreover, we obtain an extension to real, not necessarily nonnegative, entries of the vectors a and b and of the matrix A. Some related results can be found in [5, 6].

The notion of n-convexity was defined in terms of divided differences by Popoviciu. Divided differences are an important tool for dealing with functions when discussing their degree of smoothness. A function \(\phi :[\alpha ,\beta]\rightarrow\mathbb{R}\) is n-convex, \(n\geq0\), if its nth order divided differences \([x_{0},\ldots,x_{n};\phi]\) are nonnegative for all choices of \((n+1)\) distinct points \(x_{i}\in[\alpha,\beta]\), \(i=0,\ldots,n\). Thus, a 0-convex function is nonnegative, a 1-convex function is nondecreasing, and a 2-convex function is convex in the usual sense. If ϕ is n-convex, then without loss of generality we can assume that ϕ is n-times differentiable and \(\phi^{(n)}\geq0\) (see [7]).
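As a quick illustration (not from the paper), the divided differences in this definition can be computed by the standard recursion:

```python
# Popoviciu's divided differences via the standard recursion:
#   [x0; phi] = phi(x0),
#   [x0,...,xn; phi] = ([x1,...,xn; phi] - [x0,...,x_{n-1}; phi]) / (xn - x0).
def divided_difference(points, phi):
    if len(points) == 1:
        return phi(points[0])
    return (divided_difference(points[1:], phi)
            - divided_difference(points[:-1], phi)) / (points[-1] - points[0])

# phi(t) = t**3 is 3-convex: every third-order divided difference equals
# the leading coefficient 1, hence is nonnegative.
print(divided_difference([0.0, 0.5, 2.0, 3.0], lambda t: t**3))   # 1.0
```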

The techniques that we use in this paper are based on classical real analysis and on the Abel-Gontscharoff interpolation. The Abel-Gontscharoff interpolation problem in the real case was introduced in 1935 by Whittaker [8] and subsequently studied by Gontscharoff [9] and Davis [10]. The next theorem presents the Abel-Gontscharoff interpolating polynomial for two points with integral remainder (see [11]).

Theorem 3

Let \(n,m\in\mathbb{N}\), \(n\geq2\), \(0\leq m\leq n-1\), and \(\phi\in C^{n}([\alpha,\beta])\). Then

$$\phi(u)=Q_{n-1} ( \alpha,\beta,\phi,u ) +R ( \phi,u ) , $$

where \(Q_{n-1}\) is the Abel-Gontscharoff interpolating polynomial for two points of degree \(n-1\), i.e.

$$\begin{aligned} Q_{n-1} ( \alpha,\beta,\phi,u ) ={}&\sum_{s=0}^{m}\frac{ ( u-\alpha ) ^{s}}{s!}\phi^{(s)}(\alpha) \\ &{} +\sum_{r=0}^{n-m-2} \Biggl[ \sum _{s=0}^{r}\frac{ ( u-\alpha ) ^{m+1+s} ( \alpha-\beta ) ^{r-s}}{ ( m+1+s ) ! ( r-s ) !} \Biggr] \phi^{(m+1+r)}(\beta) \end{aligned}$$

and the remainder is given by

$$R ( \phi,u ) = \int_{\alpha}^{\beta}G_{mn}(u,t) \phi^{(n)}(t)\,dt, $$

where \(G_{mn}(u,t)\) is Green’s function defined by

$$ G_{mn}(u,t)=\frac{1}{(n-1)!}\textstyle\begin{cases}{ \sum_{s=0}^{m}} \binom{n-1}{s} ( u-\alpha ) ^{s} ( \alpha-t ) ^{n-s-1} , & \alpha\leq t\leq u;\\ -{ \sum_{s=m+1}^{n-1}} \binom{n-1}{s} ( u-\alpha ) ^{s} ( \alpha-t ) ^{n-s-1}, & u\leq t\leq\beta. \end{cases} $$
(1.4)

Remark 1

Further, for \(\alpha\leq t\), \(u\leq\beta\) the following inequalities hold:

$$\begin{aligned} &(-1)^{n-m-1}\frac{\partial^{s}G_{mn}(u,t)}{\partial u^{s}} \geq 0,\quad 0\leq s\leq m, \\ &(-1)^{n-s}\frac{\partial^{s}G_{mn}(u,t)}{\partial u^{s}} \geq 0,\quad m+1\leq s\leq n-1. \end{aligned}$$
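Definition (1.4) can be transcribed directly into code. The following sketch (function and argument names are ours) evaluates \(G_{mn}(u,t)\) and checks it against the particular case \(G_{12}\) that appears as (3.11) in Section 3:

```python
from math import comb, factorial

def G(m, n, u, t, alpha, beta):
    """Green's function G_mn(u, t) of (1.4) on [alpha, beta].

    beta does not enter the formula itself; it is kept to mirror the notation.
    """
    if t <= u:
        terms = range(0, m + 1)      # first branch of (1.4)
        sign = 1.0
    else:
        terms = range(m + 1, n)      # second branch, with a minus sign
        sign = -1.0
    return sign * sum(comb(n - 1, s) * (u - alpha)**s * (alpha - t)**(n - 1 - s)
                      for s in terms) / factorial(n - 1)

# Sanity check against (3.11): G_12(u, t) = u - t for t <= u and 0 for u <= t.
assert abs(G(1, 2, u=0.7, t=0.3, alpha=0.0, beta=1.0) - 0.4) < 1e-12
assert G(1, 2, u=0.3, t=0.7, alpha=0.0, beta=1.0) == 0.0
```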

2 Generalizations of Sherman’s theorem

Applying interpolation by the Abel-Gontscharoff polynomial we derive an identity related to generalized Sherman’s inequality.

Theorem 4

Let \(\mathbf{x}\in[\alpha,\beta]^{l}\), \(\mathbf{y}\in[\alpha,\beta]^{k}\), \(\mathbf{a}\in\mathbb{R}^{l}\) and \(\mathbf{b}\in\mathbb{R}^{k}\) be such that (1.1) holds for some matrix \(\mathbf{A}\in\mathcal{M}_{kl}(\mathbb{R})\) whose entries satisfy the condition \({ \sum_{j=1}^{l}} a_{ij}=1\), \(i=1,\ldots,k\). Let \(n,m\in\mathbb{N}\), \(n\geq2\), \(0\leq m\leq n-1\), \(\phi\in C^{n}([\alpha,\beta])\), and \(G_{mn}\) be defined by (1.4). Then

$$\begin{aligned} & \sum_{j=1}^{l}a_{j} \phi(x_{j})-\sum_{i=1}^{k}b_{i} \phi (y_{i}) \\ &\quad =\sum_{s=2}^{m}\frac{\phi^{(s)}(\alpha)}{s!} \Biggl( \sum_{j=1}^{l}a_{j}(x_{j}- \alpha)^{s}-\sum_{i=1}^{k}b_{i}(y_{i}- \alpha )^{s} \Biggr) \\ &\qquad{} +\sum_{r=0}^{n-m-2}\sum _{s=0}^{r}\frac{(-1)^{r-s}(\beta-\alpha)^{r-s} \phi^{(m+1+r)}(\beta)}{(m+1+s)!(r-s)!} \Biggl( { \sum_{j=1}^{l}} a_{j}(x_{j}-\alpha)^{m+1+s}-{ \sum_{i=1}^{k}} b_{i}(y_{i}- \alpha)^{m+1+s} \Biggr) \\ &\qquad{} + \int_{\alpha}^{\beta} \Biggl( \sum _{j=1}^{l}a_{j}G_{mn}(x_{j},t)- \sum_{i=1}^{k}b_{i}G_{mn}(y_{i},t) \Biggr) \phi ^{(n)}(t)\,dt. \end{aligned}$$
(2.1)

Proof

Using Theorem 3 we can represent every function \(\phi\in C^{n}([\alpha,\beta])\) in the form

$$\begin{aligned} \phi(u) =&\sum_{s=0}^{m}\frac{(u-\alpha)^{s}}{s!} \phi^{(s)}(\alpha) \\ &{} +\sum_{r=0}^{n-m-2} \Biggl[ \sum _{s=0}^{r}\frac{(u-\alpha)^{m+1+s}(-1)^{r-s}(\beta-\alpha)^{r-s}}{(m+1+s)!(r-s)!} \Biggr] \phi^{(m+1+r)}(\beta) \\ &{} + \int_{\alpha}^{\beta}G_{mn}(u,t) \phi^{(n)}(t)\,dt. \end{aligned}$$
(2.2)

By an easy calculation, applying (2.2) in \(\sum_{j=1}^{l}a_{j}\phi(x_{j})-\sum_{i=1}^{k}b_{i}\phi(y_{i})\), we get

$$\begin{aligned} & \sum_{j=1}^{l}a_{j} \phi(x_{j})-\sum_{i=1}^{k}b_{i} \phi (y_{i}) \\ &\quad =\sum_{s=0}^{m}\frac{\phi^{(s)}(\alpha)}{s!} \Biggl( \sum_{j=1}^{l}a_{j}(x_{j}- \alpha)^{s}-\sum_{i=1}^{k}b_{i}(y_{i}- \alpha )^{s} \Biggr) \\ & \qquad{}+\sum_{r=0}^{n-m-2}\sum _{s=0}^{r}\frac{(-1)^{r-s}(\beta-\alpha)^{r-s} \phi^{(m+1+r)}(\beta)}{(m+1+s)!(r-s)!} \Biggl( { \sum_{j=1}^{l}} a_{j}(x_{j}-\alpha)^{m+1+s}-{ \sum_{i=1}^{k}} b_{i}(y_{i}- \alpha)^{m+1+s} \Biggr) \\ &\qquad{} + \int_{\alpha}^{\beta} \Biggl( \sum _{j=1}^{l}a_{j}G_{mn}(x_{j},t)- \sum_{i=1}^{k}b_{i}G_{mn}(y_{i},t) \Biggr) \phi^{(n)}(t)\,dt. \end{aligned}$$

Since (1.1) holds,

$$\frac{\phi^{(s)}(\alpha)}{s!} \Biggl( \sum_{j=1}^{l}a_{j}(x_{j}- \alpha)^{s}-\sum_{i=1}^{k}b_{i}(y_{i}- \alpha)^{s} \Biggr) =0\quad\text{for }s=0,1. $$

Therefore, (2.1) follows. □
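As a sanity check, the identity (2.1) can be verified by quadrature. The sketch below (hypothetical data; scipy assumed; it reuses the function G from the sketch after Remark 1) compares both sides for \(\phi=\cos\), \(n=4\), \(m=2\):

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

alpha, beta, n, m = 0.0, 1.0, 4, 2
phi = np.cos
d = {0: np.cos, 1: lambda t: -np.sin(t), 2: lambda t: -np.cos(t),
     3: np.sin, 4: np.cos}                 # derivatives of cos up to order 4

A = np.array([[0.5, 0.5], [0.25, 0.75], [0.9, 0.1]])   # row stochastic, 3 x 2
x = np.array([0.2, 0.8])
b = np.array([1.0, 2.0, 0.5])
y = A @ x                                  # (1.1) holds by construction
a = b @ A

lhs = (a * phi(x)).sum() - (b * phi(y)).sum()

def bracket(p):    # sum_j a_j (x_j - alpha)^p - sum_i b_i (y_i - alpha)^p
    return (a * (x - alpha)**p).sum() - (b * (y - alpha)**p).sum()

rhs = sum(d[s](alpha) / factorial(s) * bracket(s) for s in range(2, m + 1))
rhs += sum((-1)**(r - s) * (beta - alpha)**(r - s) * d[m + 1 + r](beta)
           * bracket(m + 1 + s) / (factorial(m + 1 + s) * factorial(r - s))
           for r in range(n - m - 1) for s in range(r + 1))

def F(t):          # the weight in the integral term, cf. (3.4)
    return (sum(aj * G(m, n, xj, t, alpha, beta) for aj, xj in zip(a, x))
            - sum(bi * G(m, n, yi, t, alpha, beta) for bi, yi in zip(b, y)))

rhs += quad(lambda t: F(t) * d[n](t), alpha, beta)[0]
assert abs(lhs - rhs) < 1e-6
```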

The following theorem extends Sherman’s result to convex functions of higher order and to real, not necessarily nonnegative entries of the vectors a, b, and the matrix A.

Theorem 5

Suppose that all the assumptions of Theorem 4 hold. Additionally, let ϕ be n-convex on \([\alpha,\beta]\) and

$$ \sum_{i=1}^{k}b_{i}G_{mn} ( y_{i},t ) \leq\sum_{j=1}^{l}a_{j}G_{mn} ( x_{j},t ) ,\quad t\in[\alpha,\beta]. $$
(2.3)

Then

$$\begin{aligned} &\sum_{j=1}^{l}a_{j}\phi ( x_{j} ) -\sum_{i=1}^{k}b_{i} \phi ( y_{i} ) \\ &\quad \geq\sum_{s=2}^{m}\frac{\phi^{(s)}(\alpha)}{s!} \Biggl( \sum_{j=1}^{l}a_{j} ( x_{j}-\alpha ) ^{s}-\sum_{i=1}^{k}b_{i} ( y_{i}-\alpha ) ^{s} \Biggr) \\ &\qquad{} +\sum_{r=0}^{n-m-2}\sum _{s=0}^{r}\frac{ ( -1 ) ^{r-s} ( \beta-\alpha ) ^{r-s}\phi^{(m+1+r)}(\beta)}{(m+1+s)!(r-s)!} \\ &\qquad{}\times\Biggl( \sum _{j=1}^{l}a_{j} ( x_{j}-\alpha ) ^{m+1+s}-\sum_{i=1}^{k}b_{i} ( y_{i}-\alpha ) ^{m+1+s} \Biggr) . \end{aligned}$$
(2.4)

If the reverse inequality in (2.3) holds, then the reverse inequality in (2.4) holds.

Proof

Since \(\phi\in C^{n}([\alpha,\beta])\) is n-convex, \(\phi^{(n)}\geq0\) on \([\alpha,\beta]\). Hence, we can apply Theorem 4 to get (2.4). □

Notice that when we take into account Sherman’s condition of nonnegativity of the vectors a, b, and the matrix A, the assumption (2.3) is satisfied whenever \(G_{mn} ( \cdot,t ) \), \(t\in[\alpha,\beta]\), is convex on \([\alpha,\beta]\), since we may then apply Sherman’s theorem to \(G_{mn}(\cdot,t)\). So, for \(n=2\) and \(0\leq m\leq1\), the assumption (2.3) is immediately satisfied and then the inequality (2.4) holds. Moreover, in that case, the right-hand side of (2.4) is equal to zero, so we have

$$\sum_{j=1}^{l}a_{j} \phi(x_{j})-\sum_{i=1}^{k}b_{i} \phi (y_{i})\geq0, $$

i.e. we get Sherman’s inequality as a direct consequence. For an arbitrary \(n\geq3\) and \(0\leq m\leq1\), we use Remark 1, i.e. we consider the following inequality:

$$(-1)^{n-2}\frac{\partial^{2}G_{mn}(u,t)}{\partial u^{2}}\geq0. $$

Hence, the convexity of \(G_{mn}(\cdot,t)\) depends on the parity of n. If n is even, then \(\frac{\partial^{2}G_{mn}(u,t)}{\partial u^{2}}\geq0\), i.e. \(G_{mn}(\cdot,t)\) is convex, the assumption (2.3) is satisfied, and the inequality (2.4) holds. For odd n we get the reverse inequality. For all other choices of m, Theorem 6 below gives the corresponding generalization.
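This parity behaviour can also be observed numerically, e.g. by a centered second difference in u (a rough sketch reusing G from above; grid and tolerance are arbitrary):

```python
import numpy as np

# For m = 1: u -> G_mn(u, t) should be convex when n is even and concave
# when n is odd, as discussed above.
alpha, beta, t, h, m = 0.0, 1.0, 0.4, 1e-3, 1
for n in (3, 4, 5, 6):
    sign = 1.0 if n % 2 == 0 else -1.0
    for u in np.linspace(alpha + h, beta - h, 50):
        d2 = (G(m, n, u + h, t, alpha, beta) - 2.0 * G(m, n, u, t, alpha, beta)
              + G(m, n, u - h, t, alpha, beta)) / h**2
        assert sign * d2 >= -1e-6
```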

Theorem 6

Suppose that all the assumptions of Theorem 2 hold. Additionally, let \(n,m\in\mathbb{N}\), \(n\geq3\), \(2\leq m\leq n-1\), and \(\phi\in C^{n}([\alpha,\beta])\) be n-convex.

  (i) If \(n-m\) is odd, then the inequality (2.4) holds.

  (ii) If \(n-m\) is even, then the reverse inequality in (2.4) holds.

Proof

(i) By Remark 1, the following inequality holds:

$$(-1)^{n-m-1}\frac{\partial^{2}G_{mn}(u,t)}{\partial u^{2}}\geq0,\quad \alpha\leq u,t\leq\beta. $$

In case \(n-m\) is odd (\(n-m-1\) is even), we have

$$\frac{\partial^{2}G_{mn}(u,t)}{\partial u^{2}}\geq0, $$

i.e. \(G_{mn} ( \cdot,t ) \), \(t\in[\alpha,\beta ]\), is convex on \([\alpha,\beta]\). Then by Sherman’s theorem we have

$$\sum_{i=1}^{k}b_{i}G_{mn} ( y_{i},t ) \leq\sum_{j=1}^{l}a_{j}G_{mn} ( x_{j},t ) ,\quad t\in[\alpha,\beta], $$

i.e. the assumption (2.3) is satisfied. Hence, applying Theorem 5 we get (2.4).

(ii) This part can be proved similarly. □

Theorem 7

Suppose that all the assumptions of Theorem 2 hold. Additionally, let \(n,m\in\mathbb{N}\), \(n\geq2\), \(0\leq m\leq n-1\), \(\phi\in C^{n}([\alpha,\beta])\) be n-convex and \(F:[\alpha,\beta ]\rightarrow\mathbb{R}\) be defined by

$$\begin{aligned} F(t) ={}&\sum_{s=2}^{m}\frac{\phi^{(s)}(\alpha)}{s!} ( t-\alpha ) ^{s} \\ &{} +\sum_{r=0}^{n-m-2}\sum _{s=0}^{r}\frac{ ( -1 ) ^{r-s} ( \beta-\alpha ) ^{r-s}}{ ( m+1+s ) ! ( r-s ) !}\phi^{(m+1+r)}(\beta) ( t-\alpha ) ^{m+1+s}. \end{aligned}$$
(2.5)
  (i) If (2.4) holds and F is convex, then the inequality (1.2) holds.

  (ii) If the reverse of (2.4) holds and F is concave, then the reverse inequality in (1.2) holds.

Proof

(i) Suppose that (2.4) holds. If F is convex, then by Sherman’s theorem we have

$$\sum_{j=1}^{l}a_{j}F ( x_{j} ) -\sum_{i=1}^{k}b_{i}F ( y_{i} ) \geq0, $$

which, changing the order of summation, can be written in the form

$$\begin{aligned} &\sum_{s=2}^{m}\frac{\phi^{(s)}(\alpha)}{s!} \Biggl( \sum_{j=1}^{l}a_{j} ( x_{j}-\alpha ) ^{s}-\sum_{i=1}^{k}b_{i} ( y_{i}-\alpha ) ^{s} \Biggr) \\ &\qquad{} +\sum_{r=0}^{n-m-2}\sum _{s=0}^{r}\frac{ ( -1 ) ^{r-s} ( \beta-\alpha ) ^{r-s}\phi^{(m+1+r)}(\beta)}{(m+1+s)!(r-s)!}\\ &\qquad{}\times \Biggl( \sum _{j=1}^{l}a_{j} ( x_{j}-\alpha ) ^{m+1+s}-\sum_{i=1}^{k}b_{i} ( y_{i}-\alpha ) ^{m+1+s} \Biggr) \\ &\quad \geq0. \end{aligned}$$

Therefore, the right-hand side of (2.4) is nonnegative and the inequality (1.2) immediately follows.

(ii) This part can be proved similarly. □

Remark 2

Note that the function \(t\mapsto(t-\alpha)^{p}\) is convex on \([\alpha,\beta]\) for each \(p=2,\ldots,n-1\), so, by Sherman’s theorem, \(\sum_{j=1}^{l}a_{j} ( x_{j}-\alpha ) ^{p}-\sum_{i=1}^{k}b_{i} ( y_{i}-\alpha ) ^{p}\geq0\) for each \(p=2,\ldots,n-1\).

  (i) If (2.4) holds and, in addition, \(\phi^{(s)}(\alpha )\geq0\) for \(s=0,\ldots,m\), and \(\phi^{(m+1+s)}(\beta)\geq0\) if \(r-s\) is even and \(\phi^{(m+1+s)}(\beta)\leq0\) if \(r-s\) is odd, for \(s=0,\ldots,r\) and \(r=0,\ldots,n-m-2\), then the right-hand side of (2.4) is nonnegative, i.e. the inequality (1.2) holds.

  (ii) If the reverse of (2.4) holds and, in addition, \(\phi^{(s)}(\alpha)\leq0\) for \(s=0,\ldots,m\), and \(\phi^{(m+1+s)}(\beta)\leq 0\) if \(r-s\) is even and \(\phi^{(m+1+s)}(\beta)\geq0\) if \(r-s\) is odd, for \(s=0,\ldots,r\) and \(r=0,\ldots,n-m-2\), then the right-hand side of (2.4) is nonpositive, i.e. the reverse inequality in (1.2) holds.

3 Upper bound for generalized Sherman’s inequality

In the previous section, under the special conditions imposed in Theorem 7 and Remark 2, we obtained the following estimate for the Sherman difference:

$$\begin{aligned} &\sum_{j=1}^{l}a_{j}\phi ( x_{j} ) -\sum_{i=1}^{k}b_{i} \phi ( y_{i} ) \\ &\quad \geq\sum_{s=2}^{m}\frac{\phi^{(s)}(\alpha)}{s!} \Biggl( \sum_{j=1}^{l}a_{j} ( x_{j}-\alpha ) ^{s}-\sum_{i=1}^{k}b_{i} ( y_{i}-\alpha ) ^{s} \Biggr) \\ &\qquad{} +\sum_{r=0}^{n-m-2}\sum _{s=0}^{r}\frac{ ( -1 ) ^{r-s} ( \beta-\alpha ) ^{r-s}\phi^{(m+1+r)}(\beta)}{(m+1+s)!(r-s)!} \\ &\qquad{}\times \Biggl( \sum _{j=1}^{l}a_{j} ( x_{j}-\alpha ) ^{m+1+s}-\sum_{i=1}^{k}b_{i} ( y_{i}-\alpha ) ^{m+1+s} \Biggr) \\ &\quad \geq0. \end{aligned}$$
(3.1)

In this section we present upper bounds for the obtained generalizations. In the proofs of some of the estimates we use recent results related to the Čebyšev functional, which for two Lebesgue integrable functions \(f,g:[a,b]\rightarrow \mathbb{R}\) is defined by

$$T(f,g)=\frac{1}{b-a} \int_{a}^{b}f(t)g(t)\,dt-\frac{1}{b-a} \int_{a}^{b}f(t)\,dt\cdot \frac{1}{b-a} \int_{a}^{b}g(t)\,dt. $$

With \(\Vert \cdot \Vert _{p}\), \(1\leq p\leq\infty\), we denote the usual Lebesgue norms on the space \(L_{p}[a,b]\).

Theorem 8

([12], Theorem 1)

Let \(f:[a,b]\rightarrow\mathbb{R}\) be a Lebesgue integrable function and \(g:[a,b]\rightarrow\mathbb{R}\) be an absolutely continuous function with \((\cdot-a)(b-\cdot)[g^{\prime }]^{2}\in L_{1}[a,b]\). Then

$$ \bigl|T(f,g)\bigr|\leq\frac{1}{\sqrt{2}} \bigl[ T(f,f) \bigr] ^{\frac{1}{2}} \frac {1}{\sqrt{b-a}} \biggl( \int_{a}^{b}(x-a) (b-x) \bigl[ g^{\prime }(x) \bigr] ^{2}\,dx \biggr) ^{\frac{1}{2}}. $$
(3.2)

The constant \(\frac{1}{\sqrt{2}}\) in (3.2) is the best possible.

Theorem 9

([12], Theorem 2)

Assume that \(g:[a,b]\rightarrow \mathbb{R}\) is monotonic nondecreasing on \([a,b]\) and \(f:[a,b]\rightarrow\mathbb {R}\) is absolutely continuous with \(f^{\prime}\in L_{\infty}[a,b]\). Then

$$ \bigl|T(f,g)\bigr|\leq\frac{1}{2(b-a)}\bigl\Vert f^{\prime}\bigr\Vert _{\infty }\int _{a}^{b}(x-a) (b-x)\,dg(x). $$
(3.3)

The constant \(\frac{1}{2}\) in (3.3) is the best possible.
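Both bounds are straightforward to check by quadrature. A minimal sketch (hypothetical f and g on \([0,1]\); scipy assumed; since the g chosen here is smooth, \(dg(x)=g^{\prime}(x)\,dx\)):

```python
import numpy as np
from scipy.integrate import quad

a_, b_ = 0.0, 1.0
f = lambda t: np.sin(3.0 * t)   # absolutely continuous, f' in L_inf
g = lambda t: t**2              # absolutely continuous and nondecreasing
gp = lambda t: 2.0 * t          # g'

mean = lambda h: quad(h, a_, b_)[0] / (b_ - a_)
T = lambda u, v: mean(lambda t: u(t) * v(t)) - mean(u) * mean(v)

lhs = abs(T(f, g))

# Right-hand side of (3.2), with f and g in the roles of Theorem 8.
w = quad(lambda t: (t - a_) * (b_ - t) * gp(t)**2, a_, b_)[0]
bound_32 = np.sqrt(T(f, f) / 2.0) * np.sqrt(w / (b_ - a_))

# Right-hand side of (3.3); ||f'||_inf = 3 for f(t) = sin(3t).
w2 = quad(lambda t: (t - a_) * (b_ - t) * gp(t), a_, b_)[0]
bound_33 = 3.0 / (2.0 * (b_ - a_)) * w2

assert lhs <= bound_32 + 1e-12 and lhs <= bound_33 + 1e-12
```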

To simplify the notation, we define the function \(\mathcal{F}:[\alpha ,\beta]\rightarrow\mathbb{R}\) by

$$ \mathcal{F}(t)=\sum_{j=1}^{l}a_{j}G_{mn} ( x_{j},t ) -\sum_{i=1}^{k}b_{i}G_{mn} ( y_{i},t ) , $$
(3.4)

under the assumptions of Theorem 4. We also consider the Čebyšev functional

$$T(\mathcal{F},\mathcal{F})=\frac{1}{\beta-\alpha} \int_{\alpha}^{\beta }\mathcal{F}^{2}(t)\,dt- \biggl( \frac{1}{\beta-\alpha} \int_{\alpha}^{\beta }\mathcal{F}(t)\,dt \biggr) ^{2}. $$

Theorem 10

Suppose that all the assumptions of Theorem 4 hold. Additionally, let \((\cdot-\alpha)(\beta-\cdot)(\phi^{(n+1)})^{2}\in L_{1}[\alpha,\beta]\) and \(\mathcal{F}\) be defined as in (3.4). Then the following identity holds:

$$\begin{aligned} &\sum_{j=1}^{l}a_{j}\phi ( x_{j} ) -\sum_{i=1}^{k}b_{i} \phi ( y_{i} ) \\ &\quad =\sum_{s=2}^{m}\frac{\phi^{(s)}(\alpha)}{s!} \Biggl[ \sum_{j=1}^{l}a_{j} ( x_{j}-\alpha ) ^{s}-\sum_{i=1}^{k}b_{i} ( y_{i}-\alpha ) ^{s} \Biggr] \\ &\qquad{} +\sum_{r=0}^{n-m-2}\sum _{s=0}^{r}\frac{ ( -1 ) ^{r-s} ( \beta-\alpha ) ^{r-s}\phi^{(m+1+r)}(\beta)}{(s+1+m)!(r-s)!} \Biggl[ \sum _{j=1}^{l}a_{j} ( x_{j}-\alpha ) ^{m+1+s}-\sum_{i=1}^{k}b_{i} ( y_{i}-\alpha ) ^{m+1+s} \Biggr] \\ & \qquad{}+\frac{\phi^{(n-1)}(\beta)-\phi^{(n-1)}(\alpha)}{\beta-\alpha} \int _{\alpha }^{\beta}\mathcal{F}(t)\,dt+R_{n}( \alpha,\beta;\phi), \end{aligned}$$
(3.5)

where the remainder \(R_{n}(\alpha,\beta;\phi)\) satisfies the estimation

$$\bigl\vert R_{n}(\alpha,\beta;\phi)\bigr\vert \leq\sqrt{ \frac{\beta -\alpha }{2}} \bigl[ T(\mathcal{F},\mathcal{F}) \bigr] ^{\frac{1}{2}}\biggl\vert \int_{\alpha}^{\beta}(t-\alpha) (\beta-t) \bigl[ \phi^{(n+1)}(t) \bigr] ^{2}\,dt\biggr\vert ^{\frac{1}{2}}. $$

Proof

By applying Theorem 8 for \(f\rightarrow\mathcal{F}\) and \(g\rightarrow\phi^{(n)}\) we get

$$\begin{aligned} &\biggl\vert \frac{1}{\beta-\alpha} \int_{\alpha}^{\beta}\mathcal {F}(t)\phi^{(n)}(t) \,dt-\frac{1}{\beta-\alpha} \int_{\alpha}^{\beta }\mathcal {F}(t)\,dt\cdot \frac{1}{\beta-\alpha} \int_{\alpha}^{\beta}\phi^{(n)}(t)\,dt \biggr\vert \\ &\quad \leq \frac{1}{\sqrt{2}} \bigl[ T(\mathcal{F},\mathcal{F}) \bigr] ^{\frac{1}{2}}\frac{1}{\sqrt{\beta-\alpha}}\biggl\vert \int_{\alpha }^{\beta }(t-\alpha) (\beta-t) \bigl[ \phi^{(n+1)}(t) \bigr] ^{2}\,dt\biggr\vert ^{\frac{1}{2}}. \end{aligned}$$

Therefore we have

$$\int_{\alpha}^{\beta}\mathcal{F}(t)\phi^{(n)}(t) \,dt=\frac{\phi^{(n-1)}(\beta)-\phi^{(n-1)}(\alpha)}{\beta-\alpha} \int_{\alpha}^{\beta}\mathcal{F}(t) \,dt+R_{n}(\alpha,\beta;\phi), $$

where the remainder \(R_{n}(\alpha,\beta;\phi)\) satisfies the stated estimate. Now from the identity (2.1) we obtain (3.5). □

The following Grüss type inequality also holds.

Theorem 11

Suppose that all the assumptions of Theorem 4 hold. Additionally, let \(\phi^{(n+1)}\geq0\) on \([\alpha,\beta]\) and \(\mathcal {F}\) be defined as in (3.4). Then the identity (3.5) holds and the remainder \(R_{n}(\alpha,\beta;\phi)\) satisfies the bound

$$ \bigl\vert R_{n}(\alpha,\beta;\phi)\bigr\vert \leq\bigl\Vert \mathcal {F}^{\prime}\bigr\Vert _{\infty} \biggl\{ \frac{ ( \beta-\alpha ) [ \phi^{(n-1)}(\beta)+\phi ^{(n-1)}(\alpha) ] }{2}- \bigl( \phi^{(n-2)}(\beta)-\phi^{(n-2)}(\alpha) \bigr) \biggr\} . $$
(3.6)

Proof

Applying Theorem 9 for \(f\rightarrow\mathcal{F}\) and \(g\rightarrow \phi^{(n)}\) we obtain

$$\begin{aligned} &\biggl\vert \frac{1}{\beta-\alpha} \int_{\alpha}^{\beta}\mathcal {F}(t)\phi^{(n)}(t) \,dt-\frac{1}{\beta-\alpha} \int_{\alpha}^{\beta }\mathcal {F}(t)\,dt\cdot\frac{1}{\beta-\alpha} \int_{\alpha}^{\beta}\phi ^{(n)}(t)\,dt\biggr\vert \\ & \quad \leq\frac{1}{2(\beta-\alpha)}\bigl\Vert \mathcal{F}^{\prime}\bigr\Vert _{\infty} \int_{\alpha}^{\beta}(t-\alpha) (\beta-t) \phi^{(n+1)}(t)\,dt. \end{aligned}$$
(3.7)

Since

$$\begin{aligned} &\int_{\alpha}^{\beta}(t-\alpha) (\beta-t) \phi^{(n+1)}(t)\,dt\\ &\quad=\int _{\alpha}^{\beta} \bigl[ 2t-(\alpha+\beta) \bigr] \phi^{(n)}(t)\,dt \\ & \quad=(\beta-\alpha) \bigl[ \phi^{(n-1)}(\beta)+\phi^{(n-1)}(\alpha) \bigr] -2 \bigl( \phi^{(n-2)}(\beta)-\phi^{(n-2)}(\alpha) \bigr) , \end{aligned}$$

using the identities (2.1) and (3.7) we deduce (3.6). □

Next we present an upper bound for the generalized Sherman inequality in the form of an Ostrowski-type inequality.

Theorem 12

Suppose that all the assumptions of Theorem 4 hold. Let \((p,q)\) be a pair of conjugate exponents, that is, \(1\leq p\), \(q\leq\infty\), \(\frac{1}{p}+\frac{1}{q}=1\). Then

$$\begin{aligned} & \Biggl|\sum_{j=1}^{l}a_{j}\phi ( x_{j} ) -\sum_{i=1}^{k}b_{i} \phi ( y_{i} ) \\ & \qquad{}- \sum_{s=2}^{m}\frac{\phi^{(s)}(\alpha)}{s!} \Biggl[ \sum_{j=1}^{l}a_{j} ( x_{j}-\alpha ) ^{s}-\sum_{i=1}^{k}b_{i} ( y_{i}-\alpha ) ^{s} \Biggr] \\ &\qquad{} -\sum_{r=0}^{n-m-2}\sum _{s=0}^{r}\frac{ ( -1 ) ^{r-s} ( \beta-\alpha ) ^{r-s}\phi^{(m+1+r)}(\beta)}{(m+1+s)!(r-s)!} \\ &\qquad{}\times \Biggl( \sum _{j=1}^{l}a_{j} ( x_{j}-\alpha ) ^{m+1+s}-\sum_{i=1}^{k}b_{i} ( y_{i}-\alpha ) ^{m+1+s} \Biggr) \Biggr| \\ &\quad \leq\bigl\Vert \phi^{(n)}\bigr\Vert _{p} \Biggl( \int_{\alpha}^{\beta }\Biggl\vert \sum _{j=1}^{l}a_{j}G_{mn} ( x_{j},t ) -\sum_{i=1}^{k}b_{i}G_{mn} ( y_{i},t ) \Biggr\vert ^{q}\,dt \Biggr) ^{\frac {1}{q}}. \end{aligned}$$
(3.8)

The constant on the right-hand side of (3.8) is sharp for \(1< p\leq\infty\) and the best possible for \(p=1\).

Proof

Applying the well-known Hölder inequality to the identity (2.1) we have

$$\begin{aligned} & \Biggl|\sum_{j=1}^{l}a_{j} \phi(x_{j})-\sum_{i=1}^{k}b_{i} \phi(y_{i}) \\ &\qquad{} -\sum_{s=2}^{m}\frac{\phi^{(s)}(\alpha)}{s!} \Biggl( \sum_{j=1}^{l}a_{j}(x_{j}- \alpha)^{s}-\sum_{i=1}^{k}b_{i}(y_{i}- \alpha )^{s} \Biggr) \\ &\qquad{} -\sum_{r=0}^{n-m-2}\sum _{s=0}^{r}\frac{(-1)^{r-s}(\beta -\alpha)^{r-s}\phi^{(m+1+r)}(\beta)}{(m+1+s)!(r-s)!} \\ &\qquad{}\times \Biggl( { \sum_{j=1}^{l}} a_{j}(x_{j}-\alpha)^{m+1+s}-{ \sum_{i=1}^{k}} b_{i}(y_{i}- \alpha)^{m+1+s} \Biggr) \Biggr| \\ &\quad =\Biggl\vert \int_{\alpha}^{\beta} \Biggl( \sum _{j=1}^{l}a_{j}G_{mn}(x_{j},t)- \sum_{i=1}^{k}b_{i}G_{mn}(y_{i},t) \Biggr) \phi^{(n)}(t)\,dt\Biggr\vert \\ &\quad \leq\bigl\Vert \phi^{ ( n ) }\bigr\Vert _{p} \biggl( \int_{\alpha}^{\beta}\bigl\vert \mathcal{F}(t)\bigr\vert ^{q}\,dt \biggr) ^{\frac{1}{q}}, \end{aligned}$$
(3.9)

where \(\mathcal{F}(t)\) is defined as in (3.4).

For the proof of the sharpness of the constant \(( \int_{\alpha}^{\beta} \vert \mathcal{F}(t)\vert ^{q}\,dt ) ^{\frac{1}{q}}\) let us find a function ϕ for which the equality in (3.9) is obtained.

For \(1< p<\infty\) take ϕ to be such that

$$\phi^{(n)}(t)=\operatorname{sgn}\mathcal{F}(t)\bigl\vert \mathcal{F}(t)\bigr\vert ^{\frac {1}{p-1}}. $$

For \(p=\infty\) take

$$\phi^{(n)}(t)=\operatorname{sgn}\mathcal{F}(t). $$

For \(p=1\) we prove that

$$ \biggl\vert \int_{\alpha}^{\beta}\mathcal{F}(t)\phi^{(n)}(t)\,dt \biggr\vert \leq\max_{t\in[\alpha,\beta]}\bigl\vert \mathcal{F}(t)\bigr\vert \biggl( \int_{\alpha}^{\beta}\bigl\vert \phi^{(n)}(t)\bigr\vert \,dt \biggr) $$
(3.10)

is the best possible inequality.

Suppose that \(\vert \mathcal{F}(t)\vert \) attains its maximum at \(t_{0}\in[ \alpha,\beta]\).

First we assume that \(\mathcal{F}(t_{0})>0\). For ϵ small enough we define \(\phi_{\epsilon}(t)\) by

$$\phi_{\epsilon}(t) := \textstyle\begin{cases} 0, & \alpha\leq t\leq t_{0},\\ \frac{1}{\epsilon n!} ( t-t_{0} ) ^{n}, & t_{0}\leq t\leq t_{0}+\epsilon,\\ \frac{1}{n!} ( t-t_{0} ) ^{n-1}, & t_{0}+\epsilon\leq t\leq \beta. \end{cases} $$

Then for ϵ small enough

$$\biggl\vert \int_{\alpha}^{\beta}\mathcal{F}(t)\phi_{\epsilon}^{(n)}(t)\,dt \biggr\vert =\biggl\vert \int_{t_{0}}^{t_{0}+\epsilon}\mathcal{F}(t)\frac{1}{\epsilon }\,dt \biggr\vert =\frac{1}{\epsilon} \int_{t_{0}}^{t_{0}+\epsilon}\mathcal {F}(t)\,dt. $$

Now from the inequality (3.10) we have

$$\frac{1}{\epsilon} \int_{t_{0}}^{t_{0}+\epsilon}\mathcal{F}(t)\,dt\leq \mathcal{F}(t_{0}) \int_{t_{0}}^{t_{0}+\epsilon}\frac{1}{\epsilon }\,dt= \mathcal{F}(t_{0}). $$

Since

$$\lim_{\epsilon\rightarrow0}\frac{1}{\epsilon} \int _{t_{0}}^{t_{0}+\epsilon }\mathcal{F}(t)\,dt= \mathcal{F}(t_{0}) $$

the statement follows.

In the case \(\mathcal{F}(t_{0})<0\), we define \(\phi_{\epsilon}(t)\) by

$$\phi_{\epsilon}(t) := \textstyle\begin{cases} \frac{1}{n!} ( t-t_{0}-\epsilon ) ^{n-1}, & \alpha\leq t\leq t_{0},\\ -\frac{1}{\epsilon n!} ( t-t_{0}-\epsilon ) ^{n}, & t_{0}\leq t\leq t_{0}+\epsilon,\\ 0, & t_{0}+\epsilon\leq t\leq\beta, \end{cases} $$

and the rest of the proof is the same as above. □

In the sequel we consider a particular case of Green’s function \(G_{mn}(u,t)\) defined by (1.4). For \(n=2\), \(m=1\), we have

$$ G_{12}(u,t)=\textstyle\begin{cases} u-t, & \alpha\leq t\leq u,\\ 0, & u\leq t\leq\beta, \end{cases} $$
(3.11)

and

$$\begin{aligned} & \sum_{j=1}^{l}a_{j}G_{12}(x_{j},t)- \sum_{i=1}^{k}b_{i}G_{12}(y_{i},t) \\ & \quad =\sum_{\mathbf{j}}a_{\mathbf{j}}(x_{\mathbf{j}}-t)- \sum_{\mathbf{i}}b_{\mathbf{i}}(y_{\mathbf{i}}-t),\quad \mathbf{j}\in\{j;x_{j}\geq t\}, \mathbf{i}\in \{i;y_{i}\geq t\}. \end{aligned}$$
(3.12)

As an easy consequence of Theorem 12, choosing \(n=2\) and \(m=1\), we get the following corollary.

Corollary 1

Let \(\phi\in C^{2}([\alpha,\beta])\), \(\mathbf{x}\in [\alpha,\beta]^{l}\), \(\mathbf{y}\in[\alpha,\beta]^{k}\), \(\mathbf{a}\in\mathbb{R}^{l}\), and \(\mathbf{b}\in\mathbb{R}^{k}\) be such that (1.1) holds for some matrix \(\mathbf{A}\in\mathcal{M}_{kl}(\mathbb{R})\) whose entries satisfy the condition \({ \sum_{j=1}^{l}} a_{ij}=1\), \(i=1,\ldots,k\). Let \((p,q)\) be a pair of conjugate exponents, that is, \(1\leq p\), \(q\leq\infty\), \(\frac{1}{p}+\frac{1}{q}=1\). Then

$$\begin{aligned} & \Biggl\vert \sum_{j=1}^{l}a_{j} \phi ( x_{j} ) -\sum_{i=1}^{k}b_{i} \phi ( y_{i} ) \Biggr\vert \\ &\quad \leq\bigl\Vert \phi^{\prime\prime}\bigr\Vert _{p} \Biggl( \int_{\alpha }^{\beta}\Biggl\vert \sum _{j=1}^{l}a_{j}G_{12} ( x_{j},t ) -\sum_{i=1}^{k}b_{i}G_{12} ( y_{i},t ) \Biggr\vert ^{q}\,dt \Biggr) ^{\frac{1}{q}}. \end{aligned}$$
(3.13)

The constant on the right-hand side of (3.13) is sharp for \(1< p\leq\infty\) and the best possible for \(p=1\).

Remark 3

If, in addition, the vectors a, b and the matrix A are nonnegative and ϕ is convex, then the difference \(\sum_{j=1}^{l}a_{j}\phi ( x_{j} ) -\sum_{i=1}^{k}b_{i}\phi ( y_{i} ) \) is nonnegative and we have

$$\begin{aligned} 0 & \leq\sum_{j=1}^{l}a_{j}\phi ( x_{j} ) -\sum_{i=1}^{k}b_{i} \phi ( y_{i} ) \\ & \leq\bigl\Vert \phi^{\prime\prime}\bigr\Vert _{p} \Biggl( \int_{\alpha }^{\beta}\Biggl\vert \sum _{j=1}^{l}a_{j}G_{12} ( x_{j},t ) -\sum_{i=1}^{k}b_{i}G_{12} ( y_{i},t ) \Biggr\vert ^{q}\,dt \Biggr) ^{\frac{1}{q}}. \end{aligned}$$
(3.14)
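Before turning to special cases, here is a numerical illustration of (3.14) (hypothetical data with \(p=q=2\); this only exercises the bound, not the sharpness claim):

```python
import numpy as np
from scipy.integrate import quad

alpha, beta = 0.0, 1.0
G12 = lambda u, t: u - t if t <= u else 0.0      # the function (3.11)

A = np.array([[0.5, 0.5], [0.25, 0.75], [0.9, 0.1]])   # row stochastic
x = np.array([0.2, 0.8])
b = np.array([1.0, 2.0, 0.5])                    # nonnegative weights
y = A @ x
a = b @ A                                        # so (1.1) holds

phi = np.exp                                     # convex; phi'' = exp
diff = (a * phi(x)).sum() - (b * phi(y)).sum()

def Fdiff(t):                                    # the difference (3.12)
    return (sum(aj * G12(xj, t) for aj, xj in zip(a, x))
            - sum(bi * G12(yi, t) for bi, yi in zip(b, y)))

p = q = 2.0
norm_phi2 = quad(lambda t: np.exp(t)**p, alpha, beta)[0] ** (1.0 / p)
norm_F = quad(lambda t: abs(Fdiff(t))**q, alpha, beta)[0] ** (1.0 / q)
assert 0.0 <= diff <= norm_phi2 * norm_F + 1e-12
```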

In the sequel we consider some particular cases of this result.

  (i) If \(k=l\) and all the weights \(b_{i}\) and \(a_{j}\) are equal, we get the following estimate for the weighted majorization inequality:

    $$\begin{aligned} 0 & \leq\sum_{i=1}^{k}a_{i}\phi ( x_{i} ) -\sum_{i=1}^{k}a_{i} \phi ( y_{i} ) \\ & \leq\bigl\Vert \phi^{\prime\prime}\bigr\Vert _{p} \Biggl( \int _{\alpha }^{\beta}\Biggl\vert \sum _{i=1}^{k}a_{i}G_{12} ( x_{i},t ) -\sum_{i=1}^{k}a_{i}G_{12} ( y_{i},t ) \Biggr\vert ^{q}\,dt \Biggr) ^{\frac{1}{q}}. \end{aligned}$$
    (3.15)
  (ii) If we denote \(A_{k}=\sum_{i=1}^{k}a_{i}\) and put \(y_{1}=y_{2}=\cdots=y_{k}=\frac{1}{A_{k}}\sum_{i=1}^{k}a_{i}x_{i}=\bar{x}\), then from (3.15) we easily get an estimate for Jensen’s inequality:

    $$\begin{aligned} 0 & \leq\sum_{i=1}^{k}a_{i}\phi ( x_{i} ) -A_{k}\phi ( \bar{x} ) \\ & \leq\bigl\Vert \phi^{\prime\prime}\bigr\Vert _{p} \Biggl( \int _{\alpha }^{\beta}\Biggl\vert \sum _{i=1}^{k}a_{i}G_{12} ( x_{i},t ) -A_{k}G_{12} ( \bar{x},t ) \Biggr\vert ^{q}\,dt \Biggr) ^{\frac{1}{q}}. \end{aligned}$$

    In particular, setting \(A_{k}=1\), we have \(\bar{x}=\sum_{i=1}^{k}a_{i}x_{i}\) and

    $$\begin{aligned} 0 & \leq\sum_{i=1}^{k}a_{i}\phi ( x_{i} ) -\phi ( \bar {x} ) \\ & \leq\bigl\Vert \phi^{\prime\prime}\bigr\Vert _{p} \Biggl( \int _{\alpha }^{\beta}\Biggl\vert \sum _{i=1}^{k}a_{i}G_{12} ( x_{i},t ) -G_{12} ( \bar{x},t ) \Biggr\vert ^{q}\,dt \Biggr) ^{\frac{1}{q}}. \end{aligned}$$
  (iii) For \(k=2\), \(a_{1}=a_{2}=1\), \(x_{1}=\alpha\), and \(x_{2}=\beta\), we have

    $$\begin{aligned} 0 & \leq\phi ( \alpha ) +\phi ( \beta ) -2\phi \biggl( \frac{\alpha+\beta}{2} \biggr) \\ & \leq\bigl\Vert \phi^{\prime\prime}\bigr\Vert _{p} \biggl( \int _{\alpha }^{\beta} \biggl( G_{12} ( \alpha,t ) +G_{12} ( \beta,t ) -2G_{12} \biggl( \frac{\alpha+\beta}{2},t \biggr) \biggr) ^{q}\,dt \biggr) ^{\frac{1}{q}}, \end{aligned}$$

    where

    $$G_{12} ( \alpha,t ) +G_{12} ( \beta,t ) -2G_{12} \biggl( \frac{\alpha+\beta}{2},t \biggr) =\textstyle\begin{cases} t-\alpha, & \alpha\leq t\leq\frac{\alpha+\beta}{2},\\ \beta-t, & \frac{\alpha+\beta}{2}\leq t\leq\beta. \end{cases} $$

    In particular, for \(p=1\), \(q=\infty\), we obtain

    $$0\leq\frac{\phi ( \alpha ) +\phi ( \beta ) }{2}-\phi \biggl( \frac{\alpha+\beta}{2} \biggr) \leq \frac{1}{4} \bigl( \phi^{\prime}(\beta)-\phi^{\prime}(\alpha) \bigr) (\beta-\alpha ), $$

    where the constant \(\frac{1}{4}\) is the best possible in the sense that it cannot be replaced by a smaller constant.
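As a quick numerical illustration of this last bound (our own example): for \(\phi(t)=t^{4}\) on \([0,1]\) the left-hand side equals \(\frac{0+1}{2}- ( \frac{1}{2} ) ^{4}=0.4375\), while the right-hand side equals \(\frac{1}{4}(4-0)(1-0)=1\), so the inequality holds with room to spare.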

4 Applications

In this section we apply Corollary 1, i.e. the inequality (3.14), to derive lower and upper bounds for some relationships between well-known means.

Let \(0<\alpha<\beta\), \(\mathbf{x}\in[\alpha,\beta]^{l}\), and let \(\mathbf{a}\in[0,\infty)^{l}\) have entries satisfying \(\sum_{j=1}^{l}a_{j}=1\). The weighted power mean of order \(s\in\mathbb {R}\) is defined by

$$M_{s}(\mathbf{a};\mathbf{x})= \textstyle\begin{cases} ( { \sum _{j=1}^{l}} a_{j}x_{j}^{s} ) ^{\frac{1}{s}}, & s\neq0, \\ { \prod_{j=1}^{l}} x_{j}^{a_{j}}, & s=0.\end{cases} $$

The classical weighted means are defined as

$$\begin{aligned} &M_{0}(\mathbf{a};\mathbf{x}) =G(\mathbf{a}; \mathbf{x})=\prod_{j=1}^{l}x_{j}^{a_{j}},\quad \text{the geometric mean,} \\ &M_{1}(\mathbf{a};\mathbf{x)} =A(\mathbf{a};\mathbf{x})= \sum_{j=1}^{l}a_{j}x_{j},\quad \text{the arithmetic mean,} \\ &M_{-1}(\mathbf{a};\mathbf{x)} =H(\mathbf{a};\mathbf{x})= \frac {1}{\sum_{j=1}^{l}\frac{a_{j}}{x_{j}}},\quad\text{the harmonic mean.} \end{aligned}$$

In an analogous way, for \(\mathbf{y}\in[\alpha,\beta]^{k}\), \(\mathbf{b}\in[0,\infty)^{k}\) with \(\sum_{i=1}^{k}b_{i}=1\), we define

$$M_{s}(\mathbf{b};\mathbf{y})= \textstyle\begin{cases} ( { \sum _{i=1}^{k}} b_{i}y_{i}^{s} ) ^{\frac{1}{s}}, & s\neq0, \\ { \prod_{i=1}^{k}} y_{i}^{b_{i}}, & s=0,\end{cases} $$

and then accordingly the classical weighted means \(G(\mathbf{b};\mathbf{y})\), \(A(\mathbf{b};\mathbf{y})\), and \(H(\mathbf{b};\mathbf{y})\).
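For numerical experiments with the corollaries below, a small helper (our own naming) implementing \(M_{s}\), with the \(s=0\) case taken as the weighted geometric mean:

```python
import numpy as np

def power_mean(s, w, x):
    """Weighted power mean M_s(w; x); w nonnegative with sum(w) == 1."""
    w, x = np.asarray(w, dtype=float), np.asarray(x, dtype=float)
    if s == 0:
        return float(np.prod(x ** w))      # geometric mean
    return float((w @ x**s) ** (1.0 / s))

w = np.array([0.2, 0.5, 0.3])
x = np.array([1.0, 2.0, 4.0])
# Classical ordering H <= G <= A of the harmonic, geometric, arithmetic means.
assert power_mean(-1, w, x) <= power_mean(0, w, x) <= power_mean(1, w, x)
```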

Corollary 2

Let \(s\geq2\) and \(0<\alpha<\beta\). Let \(\mathbf{x}\in[\alpha,\beta ]^{l}\), \(\mathbf{y}\in[\alpha,\beta]^{k}\), \(\mathbf{a}\in[ 0,\infty )^{l}\), and \(\mathbf{b}\in[0,\infty)^{k}\) be such that (1.1) holds for some row stochastic matrix \(\mathbf{A} \in\mathcal{M}_{kl}(\mathbb{R})\) and the entries of a and b satisfy the condition \(\sum_{j=1}^{l}a_{j}=\sum_{i=1}^{k}b_{i}=1\). Then

$$\begin{aligned} &0 \leq M_{s}^{s}(\mathbf{a};\mathbf{x})-M_{s}^{s}( \mathbf{b};\mathbf {y})\leq s(s-1) \biggl( \int_{\alpha}^{\beta}t^{p(s-2)}\,dt \biggr) ^{\frac {1}{p}}\Vert \mathcal{G}\Vert _{q}, \\ &1 \leq\frac{G(\mathbf{b};\mathbf{y})}{G(\mathbf{a};\mathbf{x})}\leq \exp \biggl[ \biggl( \int_{\alpha}^{\beta} \biggl( \frac{1}{t^{2}} \biggr) ^{p}\,dt \biggr) ^{\frac{1}{p}}\Vert \mathcal{G}\Vert _{q} \biggr] , \\ &0 \leq\frac{H(\mathbf{b};\mathbf{y})-H(\mathbf{a};\mathbf{x})}{H(\mathbf{a};\mathbf{x})H(\mathbf{b};\mathbf{y})}\leq \biggl( \int _{\alpha }^{\beta} \biggl( \frac{2}{t^{3}} \biggr) ^{p}\,dt \biggr) ^{\frac{1}{p}}\Vert \mathcal{G}\Vert _{q}, \end{aligned}$$

where \(\mathcal{G}(t)\) denotes the difference (3.12).

The constants on the right-hand sides of these inequalities are sharp for \(1< p\leq \infty\) and the best possible for \(p=1\).

Proof

Apply (3.14) to the functions \(\phi(x)=x^{s}\), \(\phi(x)=-\ln x\), and \(\phi(x)=\frac{1}{x}\), respectively. □

Remark 4

Particular cases of the previous results for \(p=1\), \(q=\infty\):

$$\begin{aligned} &0 \leq M_{s}^{s}(\mathbf{a};\mathbf{x})-M_{s}^{s}( \mathbf{b};\mathbf{y})\leq s\bigl(\beta^{s-1}- \alpha^{s-1}\bigr)\Vert \mathcal{G}\Vert _{\infty}, \\ &1 \leq\frac{G(\mathbf{b};\mathbf{y})}{G(\mathbf{a};\mathbf{x})}\leq \exp \biggl( \frac{\beta-\alpha}{\alpha\beta} \Vert \mathcal{G}\Vert _{\infty} \biggr) , \\ &0 \leq\frac{H(\mathbf{b};\mathbf{y})-H(\mathbf{a};\mathbf{x})}{H(\mathbf{a};\mathbf{x})H(\mathbf{b};\mathbf{y})}\leq \biggl( \frac {\beta^{2}-\alpha^{2}}{\alpha^{2}\beta^{2}} \biggr) \Vert \mathcal {G}\Vert _{\infty}. \end{aligned}$$

Replacing \(x_{j}\rightarrow\frac{1}{x_{j}}\), \(y_{i}\rightarrow\frac {1}{y_{i}}\), we have \(A(\mathbf{a};\mathbf{x})\rightarrow\frac{1}{H(\mathbf{a};\mathbf{x})}\), \(A(\mathbf{b};\mathbf{y})\rightarrow\frac{1}{H(\mathbf {b};\mathbf{y})}\), and the last inequality becomes

$$0\leq A(\mathbf{a};\mathbf{x})-A(\mathbf{b};\mathbf{y})\leq \bigl( \beta ^{2}-\alpha^{2} \bigr) \Vert \tilde{\mathcal{G}}\Vert _{\infty}, $$

where \(\tilde{\mathcal{G}}(t)=\sum_{\mathbf{j}}a_{\mathbf{j}}(\frac {1}{x_{\mathbf{j}}}-t)-\sum_{\mathbf{i}}b_{\mathbf{i}}(\frac {1}{y_{\mathbf{i}}}-t)\), \(\mathbf{j}\in\{j;\frac{1}{x_{j}}\geq t\}\), \(\mathbf{i}\in\{i;\frac{1}{y_{i}}\geq t\}\).

Corollary 3

Let \(0<\alpha<\beta\) and \(\mathbf{x}\in[\alpha,\beta]^{l}\), \(\mathbf{y}\in[\alpha,\beta]^{k}\), \(\mathbf{a}\in[0,\infty)^{l}\), and \(\mathbf{b}\in[0,\infty)^{k}\) be such that (1.1) holds for some row stochastic matrix \(\mathbf{A}\in\mathcal{M}_{kl}(\mathbb{R})\) and the entries of a and b satisfy the condition \(\sum_{j=1}^{l}a_{j}=\sum_{i=1}^{k}b_{i}=1\). Then

$$\begin{aligned} &1 \leq\frac{\prod_{j=1}^{l}x_{j}^{a_{j}x_{j}}}{\prod_{i=1}^{k}y_{i}^{b_{i}y_{i}}}\leq\exp \biggl[ \biggl( \int_{\alpha}^{\beta} \biggl( \frac{1}{t} \biggr) ^{p}\,dt \biggr) ^{\frac{1}{p}}\Vert \mathcal{G}\Vert _{q} \biggr] , \\ &0 \leq\sum_{j=1}^{l}a_{j}e^{x_{j}}- \sum_{i=1}^{k}b_{i}e^{y_{i}} \leq \biggl( \int_{\alpha}^{\beta}e^{tp}\,dt \biggr) ^{\frac{1}{p}}\Vert \mathcal {G}\Vert _{q}, \\ &1 \leq\frac{{ \prod_{j=1}^{l}} (1+e^{x_{j}})^{a_{j}}}{{ \prod_{i=1}^{k}} (1+e^{y_{i}})^{b_{i}}}\leq\exp \biggl( \biggl( \int_{\alpha}^{\beta} \biggl( \frac{e^{t}}{(1+e^{t})^{2}} \biggr) ^{p}\,dt \biggr) ^{\frac{1}{p}}\Vert \mathcal{G}\Vert _{q} \biggr) , \end{aligned}$$

where \(\mathcal{G}(t)\) denotes the difference (3.12).

The constants on the right-hand sides of these inequalities are sharp for \(1< p\leq \infty\) and the best possible for \(p=1\).

Proof

Apply (3.14) to the functions \(\phi(x)=x\ln x\), \(\phi(x)=e^{x}\), and \(\phi(x)=\ln(1+e^{x})\), respectively. □

Remark 5

Particular cases of the previous results for \(p=1\), \(q=\infty\):

$$\begin{aligned} &1 \leq\frac{\prod_{j=1}^{l}x_{j}^{a_{j}x_{j}}}{\prod_{i=1}^{k}y_{i}^{b_{i}y_{i}}}\leq \biggl( \frac{\beta}{\alpha} \biggr) ^{\Vert \mathcal{G}\Vert _{\infty}}, \\ &0 \leq\sum_{j=1}^{l}a_{j}e^{x_{j}}- \sum_{i=1}^{k}b_{i}e^{y_{i}}\leq\bigl(e^{\beta}-e^{\alpha}\bigr)\Vert \mathcal{G}\Vert _{\infty}, \\ &1 \leq\frac{{ \prod_{j=1}^{l}} (1+e^{x_{j}})^{a_{j}}}{{ \prod_{i=1}^{k}} (1+e^{y_{i}})^{b_{i}}}\leq\exp \biggl[ \frac{e^{\beta}-e^{\alpha}}{ ( 1+e^{\beta} ) (1+e^{\alpha})}\Vert \mathcal{G} \Vert _{\infty } \biggr] . \end{aligned}$$

If we consider the particular case of our result given in Remark 3(ii) and apply it to the convex functions from the previous two corollaries, we can obtain some new upper bounds for Jensen’s inequality. Related estimates are given in [13–15].

Finally, we indicate one more interesting application.

Using (2.4), under the assumptions of Theorem 5, we define the linear functional \(A:C^{n}([\alpha,\beta])\rightarrow\mathbb{R}\) by

$$\begin{aligned} A(\phi) ={}&\sum_{j=1}^{l}a_{j} \phi ( x_{j} ) -\sum_{i=1}^{k}b_{i} \phi ( y_{i} ) \\ & {}-\sum_{s=2}^{m}\frac{\phi^{(s)}(\alpha)}{s!} \Biggl[ \sum_{j=1}^{l}a_{j} ( x_{j}-\alpha ) ^{s}-\sum_{i=1}^{k}b_{i} ( y_{i}-\alpha ) ^{s} \Biggr] \\ &{} -\sum_{r=0}^{n-m-2}\sum _{s=0}^{r}\frac{ ( -1 ) ^{r-s} ( \beta-\alpha ) ^{r-s}\phi^{(m+1+r)}(\beta)}{(m+1+s)!(r-s)!}\\ &{}\times \Biggl[ \sum _{j=1}^{l}a_{j} ( x_{j}-\alpha ) ^{m+1+s}-\sum_{i=1}^{k}b_{i} ( y_{i}-\alpha ) ^{m+1+s} \Biggr] . \end{aligned}$$

It is obvious that if \(\phi\in C^{n}([\alpha,\beta])\) is n-convex, then \(A(\phi)\geq0\). Using the linearity and positivity of this functional we may derive corresponding mean-value theorems. Moreover, we may produce new classes of exponentially convex functions and, as a result, obtain new Cauchy-type means by applying the same method as in [5, 16].

5 Conclusions

In this paper we give generalizations of Sherman’s inequality from which the classical majorization inequality, as well as Jensen’s inequality, follows as a special case. The obtained results hold for convex functions of higher order, which in a special case are convex in the usual sense. Moreover, the obtained generalizations represent an extension to real, not necessarily nonnegative, entries of the vectors a and b and of the matrix A. The methods used are based on classical real analysis and on the application of the Abel-Gontscharoff formula and Green’s function, and they can be extended to the investigation of other inequalities.

References

  1. Schur, I: Über eine Klasse von Mittelbildungen mit Anwendungen auf die Determinantentheorie. Sitzungsber. Berl. Math. Ges. 22, 9-20 (1923)

  2. Marshall, AW, Olkin, I, Arnold, BC: Inequalities: Theory of Majorization and Its Applications, 2nd edn. Springer, New York (2011)

  3. Hardy, GH, Littlewood, JE, Pólya, G: Inequalities, 2nd edn. Cambridge Mathematical Library. Cambridge University Press, Cambridge (1952)

  4. Sherman, S: On a theorem of Hardy, Littlewood, Pólya and Blackwell. Proc. Natl. Acad. Sci. USA 37, 826-831 (1951)

  5. Agarwal, RP, Ivelić Bradanović, S, Pečarić, J: Generalizations of Sherman’s inequality by Lidstone’s interpolating polynomial. J. Inequal. Appl. 2016, 6 (2016)

  6. Ivelić Bradanović, S, Pečarić, J: Generalizations of Sherman’s inequality. Period. Math. Hung. (to appear)

  7. Pečarić, J, Proschan, F, Tong, YL: Convex Functions, Partial Orderings and Statistical Applications. Academic Press, New York (1992)

  8. Whittaker, JM: Interpolatory Function Theory. Cambridge University Press, Cambridge (1935)

  9. Gontscharoff, VL: Theory of Interpolation and Approximation of Functions. Gostekhizdat, Moscow (1954)

  10. Davis, PJ: Interpolation and Approximation. Blaisdell, Boston (1961)

  11. Agarwal, RP, Wong, PJY: Error Inequalities in Polynomial Interpolation and Their Applications. Kluwer Academic, Dordrecht (1993)

  12. Cerone, P, Dragomir, SS: Some new Ostrowski-type bounds for the Čebyšev functional and applications. J. Math. Inequal. 8(1), 159-170 (2014)

  13. Budimir, I, Dragomir, SS, Pečarić, J: Further reverse results for Jensen’s discrete inequality and applications in information theory. J. Inequal. Pure Appl. Math. 2(1), Article ID 5 (2001)

  14. Cirtoaje, V: The best upper bound for Jensen’s inequality. Aust. J. Math. Anal. Appl. 7(2), Article ID 22 (2011)

  15. Dragomir, SS: A converse result for Jensen’s discrete inequality via Grüss’ inequality and applications in information theory. An. Univ. Oradea, Fasc. Mat. 7, 178-189 (1999-2000)

  16. Jakšetić, J, Pečarić, J: Exponential convexity method. J. Convex Anal. 20(1), 181-197 (2013)


Acknowledgements

The research of the first and third author has been supported by the Croatian Science Foundation under the project 5435. The authors would like to thank the referees for their valuable comments and suggestions which essentially improved the quality of this paper.

Author information

Correspondence to Naveed Latif.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to deriving all the results of this article, and they read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article

Ivelić Bradanović, S., Latif, N. & Pečarić, J. On an upper bound for Sherman’s inequality. J Inequal Appl 2016, 165 (2016). https://doi.org/10.1186/s13660-016-1091-3