Sub-super-stabilizability of certain bivariate means via mean-convexity

Abstract

In this paper, we first show that the first Seiffert mean P is concave whereas the second Seiffert mean T and the Neuman-Sándor mean NS are convex. As applications, we establish the sub-stabilizability/super-stabilizability of certain bivariate means. Open problems are derived as well.

Introduction

A (bivariate) mean m is a binary map from \((0,\infty )\times (0, \infty )\) into \((0,\infty )\) satisfying

$$\begin{aligned} \min (x,y)\leq m(x,y)\leq \max (x,y) \end{aligned}$$

for all \(x,y>0\). Symmetric (resp. homogeneous/continuous/monotone) means are defined in the usual way; see, for instance, [1]. The standard examples of such means are the following:

$$\begin{aligned} &{A(x,y)= \frac{x+y}{2};\qquad G(x,y)=\sqrt{xy};\qquad H(x,y)= \frac{2xy}{x+y};\qquad Q(x,y)=\sqrt{\frac{x^{2}+y^{2}}{2}};} \\ &{L(x,y)= \frac{y-x}{\ln y-\ln x},\qquad L(x,x)=x;\qquad I(x,y)=e^{-1} \biggl(\frac{y^{y}}{x^{x}} \biggr) ^{1/(y-x)},\qquad I(x,x)=x;} \\ &{P(x,y)= \frac{y-x}{2\arcsin \frac{y-x}{y+x}},\qquad T(x,y)= \frac{y-x}{2\arctan \frac{y-x}{y+x}},\qquad NS(x,y)=\frac{y-x}{2\operatorname{arcsinh}\frac{y-x}{y+x}},} \end{aligned}$$

with \(P(x,x)=T(x,x)=NS(x,x)=x\), known as the arithmetic mean, geometric mean, harmonic mean, quadratic (or root-square) mean, logarithmic mean, identric mean, first Seiffert mean [2], second Seiffert mean [3], and the Neuman-Sándor mean [4], respectively. Recently, the three means P, T, and NS have been the subject of intensive research because of their interesting properties and nice relationships. For more details about these three means and their bounds in terms of other familiar means, we refer the reader to [4-10] and the references therein.

As usual, we identify a mean m with its value at \((x,y)\) by setting \(m=m(x,y)\) for simplicity. All the previous means are symmetric homogeneous monotone continuous.

For two means \(m_{1}\) and \(m_{2}\), we write \(m_{1}\leq m_{2}\) if \(m_{1}(x,y)\leq m_{2}(x,y)\) for every \(x,y>0\) and \(m_{1}< m_{2}\) if \(m_{1}(x,y)< m_{2}(x,y)\) for all \(x,y>0\) with \(x\neq y\). We say that \(m_{1}\) and \(m_{2}\) are comparable if either \(m_{1}\leq m_{2}\) or \(m_{2}\leq m_{1}\). The previous means are comparable with the chain of inequalities

$$\begin{aligned} \min < H< G< L< P< I< A< NS< T< Q< \max. \end{aligned}$$
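For readers who wish to experiment, the ten means above and this chain of inequalities can be checked numerically. The following Python sketch (our own naming, not part of the paper) codes each mean directly from its definition:

```python
import math

# The ten means of the Introduction, coded directly from their definitions;
# the diagonal x == y is handled separately where the formula degenerates.
def A(x, y): return (x + y) / 2
def G(x, y): return math.sqrt(x * y)
def H(x, y): return 2 * x * y / (x + y)
def Q(x, y): return math.sqrt((x * x + y * y) / 2)
def L(x, y): return x if x == y else (y - x) / (math.log(y) - math.log(x))
def I(x, y): return x if x == y else math.exp(-1) * (y**y / x**x) ** (1 / (y - x))
def P(x, y): return x if x == y else (y - x) / (2 * math.asin((y - x) / (y + x)))
def T(x, y): return x if x == y else (y - x) / (2 * math.atan((y - x) / (y + x)))
def NS(x, y): return x if x == y else (y - x) / (2 * math.asinh((y - x) / (y + x)))

def chain_holds(x, y):
    """Check min < H < G < L < P < I < A < NS < T < Q < max at (x, y), x != y."""
    vals = [min(x, y), H(x, y), G(x, y), L(x, y), P(x, y), I(x, y),
            A(x, y), NS(x, y), T(x, y), Q(x, y), max(x, y)]
    return all(u < v for u, v in zip(vals, vals[1:]))
```

For instance, `chain_holds(1.0, 2.0)` returns `True`.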

Unless otherwise stated, all means considered are further assumed to be symmetric. Let \(m_{1}\), \(m_{2}\), \(m_{3}\) be three means. We set, for all \(x,y>0\) (see [11]),

$$\begin{aligned} {\mathcal{R}} (m_{1},m_{2},m_{3} ) (x,y)=m_{1} \bigl(m_{2} (x,m _{3} ),m_{2} (m_{3},y ) \bigr)\quad\mbox{with } m_{3}:=m _{3}(x,y). \end{aligned}$$

A mean m is called stable if \({\mathcal{R}}(m,m,m)=m\). The two trivial means min and max are stable. Let \(m_{1}\) and \(m_{2}\) be two nontrivial stable means. We say that m is \((m_{1},m_{2})\)-stabilizable if \({\mathcal{R}}(m_{1},m,m_{2})=m\). Following [12], there exists a unique \((m_{1},m_{2})\)-stabilizable mean, provided that \(m_{1}\) and \(m_{2}\) are cross means (see [12] for the definition and details).
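The map \({\mathcal{R}}\) lends itself to a quick numerical illustration. The sketch below (an illustrative aid, with the means A, G, L as defined in the Introduction) checks that \({\mathcal{R}}(A,A,A)=A\) and, anticipating Theorem 1.1, that \({\mathcal{R}}(A,L,G)=L\); both identities hold exactly, so the numerical agreement is to machine precision.

```python
import math

def A(x, y): return (x + y) / 2
def G(x, y): return math.sqrt(x * y)
def L(x, y): return x if x == y else (y - x) / (math.log(y) - math.log(x))

def resultant(m1, m2, m3):
    """The resultant mean-map R(m1, m2, m3)(x, y) = m1(m2(x, m3), m2(m3, y)),
    where m3 is evaluated at (x, y) first."""
    def r(x, y):
        t = m3(x, y)
        return m1(m2(x, t), m2(t, y))
    return r
```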

If, moreover, \(m_{1}\) and \(m_{2}\) are comparable, then we say that m is \((m_{1},m_{2})\)-sub-stabilizable if \({\mathcal{R}}(m_{1},m,m _{2})\leq m\) and m is between \(m_{1}\) and \(m_{2}\); see [13]. If this latter mean inequality is strict, then we say that m is strictly \((m_{1},m_{2})\)-sub-stabilizable. The super-stabilizability of m is defined in an analogous manner (by reversing the previous mean inequalities). In short, we can say that m is \((m_{1},m_{2})\)-super-stabilizable if and only if \(m^{*}\) is \((m_{2}^{*},m_{1}^{*})\)-sub-stabilizable, where \(m^{*}\) refers to the dual mean of m defined by \(m^{*}(x,y)= (m(x^{-1},y^{-1} ))^{-1}\) for all \(x,y>0\). For a large study about sub-stabilizable and super-stabilizable means and the related results, see [13].

The following results will also be needed in the sequel.

Theorem 1.1

[11]

The following assertions hold:

  1. (i)

    The means A, H, G, and Q are stable.

  2. (ii)

    The mean L is simultaneously \((A,G)\)-stabilizable and \((H,A)\)-stabilizable, whereas I is \((G,A)\)-stabilizable.

Theorem 1.2

The following assertions hold:

  1. (i)

    L is strictly \((G,A)\)-super-stabilizable and strictly \((A,H)\)-sub-stabilizable, whereas I is strictly \((A,G)\)-sub-stabilizable; see [13].

  2. (ii)

    P is strictly \((A,G)\)-sub-stabilizable and strictly \((G,A)\)-super-stabilizable; see [13] and [14], respectively.

Part (ii) of Theorem 1.2 was proved in [14] in a lengthy way and later proved again by the authors in [15] in a simple and fast way. No result has yet been proved about the stabilizability, sub-stabilizability, or super-stabilizability of the means T and NS.

The remainder of this paper is organized as follows. Section 2 contains basic notions about convexity of bivariate means and some needed lemmas. Section 3 is devoted to showing the convexity/concavity of the three standard means P, NS, and T. Section 4 contains some applications of the previous results to the sub/super-stabilizability of certain bivariate means. Some open problems of interest are derived as well.

Mean-convexity and needed tools

The convexity concept for bivariate means can be defined as for real functions involving two variables. Precisely, a mean m is called convex if it satisfies

$$\begin{aligned} m \bigl((1-t)x_{1}+tx_{2},(1-t)y_{1}+ty_{2} \bigr)\leq (1-t)m(x_{1},y_{1})+tm(x _{2},y_{2}) \end{aligned}$$

for all \(x_{1},y_{1},x_{2},y_{2}>0\) and \(t\in [0,1]\). We say that m is concave if the previous inequality is reversed. The mean m is strictly convex (resp. strictly concave) if the previous inequality (resp. reversed inequality) is strict for all \((x_{1},y_{1})\neq (x _{2},y_{2})\) and \(t\in (0,1)\).

We say that m is partially convex (resp. concave) if the real functions \(x\longmapsto m(x,y)\) for fixed \(y>0\) and \(y\longmapsto m(x,y)\) for fixed \(x>0\) are convex (resp. concave) on \((0,\infty )\). If m is symmetric homogeneous, then m is partially convex (resp. concave) if and only if the map \(x\longmapsto m(x,1)\) is convex (resp. concave) on \((1,\infty )\). It is clear that every convex (resp. concave) mean is partially convex (resp. concave). The converse of this property is not always true. However, for a special class of regular means, it remains true, as confirmed by the following result.

Lemma 2.1

Let m be a homogeneous continuous mean. Then m is convex (resp. concave) if and only if the real function \(x\longmapsto m(x,1)\) is convex (resp. concave) on \((0,\infty )\).

Proof

It follows from [16], p.24. □

Remark 2.1

For a mean of class \(C^{2}\) (i.e., twice differentiable on \((0,\infty )\times (0,\infty )\) with continuous second derivatives), we can give a direct proof of the previous lemma without referring to the general result of [16] stated for general positively homogeneous functions. In fact, let m be a symmetric homogeneous mean. Denote \(\phi (x)=m(x,1)\). Then \(m(x,y)=y\phi (x/y)\) for all \(x,y>0\). If m is \(C^{2}\), then m is convex (resp. concave) if and only if its Hessian defined by

$$\begin{aligned} \nabla^{2}m(x,y):=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{}} \frac{\partial^{2}m}{\partial x^{2}}(x,y) & \frac{\partial^{2}m}{\partial x\,\partial y}(x,y) \\ \frac{\partial^{2}m}{\partial x\,\partial y}(x,y) & \frac{\partial^{2}m}{\partial y^{2}}(x,y) \end{array}\displaystyle \right ) \end{aligned}$$

is (symmetric) positive semidefinite (resp. negative semidefinite). Simple computations lead to

$$\begin{aligned} \nabla^{2}m(x,y)=\frac{1}{y^{3}}\phi^{\prime\prime} \biggl( \frac{x}{y} \biggr)\left ( \textstyle\begin{array}{@{}c@{\quad}c@{}} y^{2} & -xy \\ -xy & x^{2} \end{array}\displaystyle \right ), \end{aligned}$$

and the desired result follows: since \(\det (\nabla^{2}m(x,y) )=0\) for all \(x,y>0\), the Hessian is positive (resp. negative) semidefinite if and only if \(\phi^{\prime\prime}(x/y)\geq 0\) (resp. \(\phi^{\prime\prime}(x/y)\leq 0\)).
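The computation of this remark can be reproduced symbolically. The following SymPy sketch, with a generic function φ, confirms both the stated form of the Hessian of \(m(x,y)=y\phi (x/y)\) and the vanishing of its determinant:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
phi = sp.Function('phi')
m = y * phi(x / y)                       # m(x, y) = y*phi(x/y)

hess = sp.hessian(m, (x, y))
# phi''(x/y) recovered from d^2/dx^2 [phi(x/y)] = phi''(x/y)/y^2
phipp = y**2 * sp.diff(phi(x / y), x, 2)
expected = (phipp / y**3) * sp.Matrix([[y**2, -x * y], [-x * y, x**2]])
```

Both `hess - expected` and the determinant of `hess` simplify to zero.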

In order to give more application examples, we need to recall another mean of interest. Let p be a real number, and let

$$\begin{aligned} \forall x, y>0,\quad A_{p}:=A_{p}(x,y)= \biggl( \frac{x^{p}+y^{p}}{2} \biggr)^{1/p} \end{aligned}$$

with \(A_{0}(x,y)=\lim_{p\rightarrow 0}A_{p}(x,y)=\sqrt{xy}=G(x,y)\). Such a mean is known as the power (binomial) mean of order p. It is easy to see that \(A_{1}=A\), \(A_{-1}=H\), \(A_{2}=Q\), and \(A_{1/2}= \frac{A+G}{2}\). As proved in [11], \(A_{p}\) is stable for all real p. Further, it is easy to see that \(A_{p}^{*}=A_{-p}\) for all real p.
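A small implementation of \(A_{p}\), together with the dual operation, illustrates the identities \(A_{1}=A\), \(A_{-1}=H\), \(A_{1/2}=\frac{A+G}{2}\), and \(A_{p}^{*}=A_{-p}\) (a sketch with our own naming):

```python
import math

def power_mean(x, y, p):
    """Power (binomial) mean A_p; the limit case p = 0 is the geometric mean."""
    if p == 0:
        return math.sqrt(x * y)
    return ((x**p + y**p) / 2) ** (1 / p)

def dual(m):
    """Dual mean m*(x, y) = (m(1/x, 1/y))^(-1)."""
    return lambda x, y: 1 / m(1 / x, 1 / y)
```

The strict monotonicity of \(p\longmapsto A_{p}(x,y)\) for \(x\neq y\) (Lemma 2.2(i) below) is also easy to observe with this function.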

The next lemma will be needed in the sequel.

Lemma 2.2

With the above, the following statements hold:

  1. (i)

     The maps \((x,y)\longmapsto A_{p}(x,y)\) for fixed \(p\in {\mathbb{R}}\) and \(p\longmapsto A_{p}(x,y)\) for fixed \(x,y>0\) are continuous and strictly increasing.

  2. (ii)

    Let \(x,y>0\) be fixed. If \(p\geq 1\), then the map \(p\longmapsto A _{p}(x,y)\) is concave, that is (since \(p\longmapsto A_{p}(x,y)\) is continuous),

$$\begin{aligned} \forall p_{1},p_{2}\geq 1,\quad \frac{A_{p_{1}}(x,y)+A _{p_{2}}(x,y)}{2}\leq A_{\frac{p_{1}+p_{2}}{2}}(x,y). \end{aligned}$$

Proof

(i) It is a well-known result. See a generalization in [17, 18].

(ii) See, for instance, [19-21]. □

Now, we can state the following examples.

Example 2.1

It is easy to check that the trivial mean \((x,y)\longmapsto \min (x,y)\) is concave, whereas \((x,y)\longmapsto \max (x,y)\) is convex. More generally (see [22]), if \(p\neq 0\), then simple computation leads to

$$\begin{aligned} \frac{d^{2}A_{p}}{dx^{2}}(x,1)=\frac{p-1}{4}x^{p-2} \biggl( \frac{x^{p}+1}{2} \biggr)^{1/p-2}. \end{aligned}$$

For \(p=0\), we have \(A_{0}(x,1)=\sqrt{x}\), which is strictly concave. It follows by Lemma 2.1 that \(A_{p}\) is strictly concave for \(p<1\) and strictly convex for \(p>1\). In particular, the geometric mean G and the harmonic mean H are strictly concave, whereas the quadratic mean Q is strictly convex. The arithmetic mean A is both convex and concave since it is affine.
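The sign pattern of this example is easy to probe numerically with a central second difference of \(x\longmapsto A_{p}(x,1)\); the sketch below (our helper names) also checks the displayed closed form of the second derivative for \(p\neq 0\):

```python
def Ap1(x, p):
    """One-variable trace x -> A_p(x, 1) used in Lemma 2.1 (p = 0 gives sqrt(x))."""
    if p == 0:
        return x ** 0.5
    return ((x**p + 1) / 2) ** (1 / p)

def Ap1_dd(x, p):
    """Closed form of d^2/dx^2 A_p(x, 1) from Example 2.1 (requires p != 0)."""
    return (p - 1) / 4 * x**(p - 2) * ((x**p + 1) / 2) ** (1 / p - 2)

def second_difference(f, x, h=1e-4):
    """Central second difference; its sign tracks convexity/concavity."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)
```

The second difference is negative for \(p<1\), positive for \(p>1\), and vanishes (up to rounding) for the affine case \(p=1\).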

Example 2.2

As proved in [23], the real functions \(x\longmapsto L(x,1)\) and \(x\longmapsto I(x,1)\) are (strictly) concave on \((0,\infty )\). By Lemma 2.1 we then deduce that L and I are also strictly concave.

Example 2.3

Let \(S:=S(x,y)=x^{x/(x+y)}y^{y/(x+y)}\) be the weighted geometric mean. A simple computation (directly or via the logarithmic derivative) leads to

$$\begin{aligned} \frac{d^{2}}{dx^{2}}S(x,1)=x^{\frac{x}{x+1}} \biggl( \frac{(\ln x)^{2}}{(x+1)^{4}}+ \frac{1}{x(x+1)^{2}} \biggr). \end{aligned}$$

Since this quantity is positive for all \(x>0\), we conclude by Lemma 2.1 that S is strictly convex.
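The displayed second derivative of \(S(x,1)\) can be double-checked symbolically; in the following SymPy sketch the residual between the symbolic second derivative and the claimed closed form is evaluated numerically at a few sample points:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
S1 = x ** (x / (x + 1))                               # S(x, 1)
claimed = S1 * (sp.log(x)**2 / (x + 1)**4 + 1 / (x * (x + 1)**2))
residual = sp.diff(S1, x, 2) - claimed                # should vanish identically
```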

The following remark is worth stating.

Remark 2.2

The previous examples were just stated as direct illustrations of the related lemmas. However, their assertions are particular cases of Minkowski’s inequalities for difference means and Gini means, which were first proved in [24] and [25], respectively. It is also worth mentioning that the study of the log-convexity of these two families of means with respect to their parameters can be found, for instance, in [26-29].

Before proving that the means NS and T are strictly convex and that P is strictly concave, we need another lemma.

Lemma 2.3

Let g be a real function such that \(g(x)>0\) for all \(x>1\) and set

$$\begin{aligned} \forall x>1,\quad f(x)=\frac{x-1}{g(x)}. \end{aligned}$$

If \(g\in C^{2}(1,\infty )\), then \(f\in C^{2}(1,\infty )\). Further, f is strictly convex (resp. concave) on \((1,\infty )\) if and only if

$$\begin{aligned} 2(x-1) \bigl(g^{\prime}(x) \bigr)^{2}\geq (\leq )\, g(x) \bigl(2g^{\prime}(x)+(x-1)g ^{\prime\prime}(x) \bigr) \end{aligned}$$

for all \(x>1\), with equality holding on no subinterval \((a,b)\subset (1,\infty )\).

Proof

This is a simple exercise in real analysis; we leave the details to the reader. □
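For completeness, the elementary computation behind Lemma 2.3 amounts to the identity \(f^{\prime\prime}g^{3}=2(x-1)(g^{\prime})^{2}-g (2g^{\prime}+(x-1)g^{\prime\prime} )\), which can be confirmed symbolically with a generic g (a SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x')
g = sp.Function('g')
f = (x - 1) / g(x)

# f'' * g^3 against the claimed right-hand side, for an arbitrary C^2 function g
lhs = sp.diff(f, x, 2) * g(x)**3
rhs = (2 * (x - 1) * sp.diff(g(x), x)**2
       - g(x) * (2 * sp.diff(g(x), x) + (x - 1) * sp.diff(g(x), x, 2)))
```

Since \(g>0\) on \((1,\infty )\), the sign of \(f^{\prime\prime}\) is that of the right-hand side, which is the criterion used in the proofs of the next section.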

Convexity of P, NS, and T

We start this section by stating the following result.

Proposition 3.1

Let m be a symmetric homogeneous monotone mean. Assume that m is \(C^{2}\) and (strictly) convex. Then \(m^{*}\) is (strictly) concave.

Proof

First, it is well known that if m is symmetric homogeneous monotone, then so is \(m^{*}\). According to Lemma 2.1, we have to show that the real function \(x\longmapsto m^{*}(x,1)\) is concave, where \(m^{*}(x,1)=\frac{x}{m(x,1)}\) for all \(x>0\). Set \(\phi (x)=m(x,1)\) and \(\phi^{*}(x)=m^{*}(x,1)\) for simplicity. We have \(\phi (x)=x\phi ( \frac{1}{x})\) for all \(x>0\). By differentiating this last equality with respect to x we find (after simple computation) that

$$ \forall x>0,\quad x\phi^{\prime}(x)=\phi (x)-\phi^{\prime} \biggl(\frac{1}{x}\biggr)\leq \phi (x) $$
(3.1)

since m is monotone (i.e., ϕ is increasing). Now, the relationship

$$\begin{aligned} \forall x>0,\quad \phi^{*}(x)=\frac{x}{\phi (x)} \end{aligned}$$

by differentiation with respect to x yields (after elementary computation)

$$\begin{aligned} \bigl(\phi^{*}\bigr)^{\prime\prime}(x):=\frac{d^{2}}{dx^{2}} \phi^{*}(x)=\frac{2x ( \phi^{\prime}(x) )^{2}-x\phi (x)\phi^{\prime\prime}(x)-2\phi (x)\phi^{\prime}(x)}{ (\phi (x) )^{3}}. \end{aligned}$$

The numerator of the last expression can be written as follows:

$$\begin{aligned} 2x \bigl(\phi^{\prime}(x) \bigr)^{2}-x\phi (x) \phi^{\prime\prime}(x)-2\phi (x)\phi^{\prime}(x)=2 \phi^{\prime}(x) \bigl(x\phi^{\prime}(x) -\phi (x) \bigr)-x\phi (x)\phi^{\prime\prime}(x), \end{aligned}$$

which, with (3.1) and the fact that \(\phi (x)>0\), \(\phi^{\prime}(x) \geq 0\), and \(\phi^{\prime\prime}(x)\geq 0\), yields the desired result and so completes the proof. □

Remark 3.1

The converse of the previous proposition is in general false, that is, the concavity of m does not imply the convexity of \(m^{*}\). In fact, G is concave, and \(G^{*}=G\) is also concave. Also, it is not hard to verify that \(L^{*}\) is concave, too, as is L.

Remark 3.2

In the previous proposition, the hypothesis “m is monotone” is essential. In fact, let us consider the contraharmonic mean C defined by \(C(x,y)=\frac{x^{2}+y^{2}}{x+y}\) for all \(x,y>0\). It is easy to see that C is convex. Further, C is not monotone (see [1]). The dual \(C^{*}\) of C satisfies \(C^{*}(x,1)=\frac{x ^{2}+x}{x^{2}+1}=1+\frac{x-1}{x^{2}+1}\). A simple computation leads to (we can also apply Lemma 2.3 with \(g(x)=x^{2}+1\))

$$\begin{aligned} \frac{d^{2}}{dx^{2}}C^{*}(x,1)=2 \frac{x^{3}-3x^{2}-3x+1}{(x^{2}+1)^{3}}=2\frac{(x+1)(x-2-\sqrt{3})(x-2+ \sqrt{3})}{(x^{2}+1)^{3}}. \end{aligned}$$

It follows that \(C^{*}\) is neither concave nor convex, although C is convex.

Now, we discuss the convexity of the three standard means P, NS, and T.

Theorem 3.2

The first Seiffert mean P is strictly concave.

Proof

Following Lemma 2.1, it is sufficient to show that \(x\longmapsto P(x,1)\) is strictly concave for \(x>1\). According to the explicit form of \(P(x,1)\), we set \(g(x)=\arcsin \frac{x-1}{x+1}\) for \(x>1\). Then we have (by elementary computations)

$$\begin{aligned} g^{\prime}(x)=\frac{1}{(x+1)\sqrt{x}},\qquad g^{\prime\prime}(x)=-\frac{3x+1}{2x \sqrt{x}(x+1)^{2}}. \end{aligned}$$

Using Lemma 2.3, we have to show that

$$\begin{aligned} \arcsin \frac{x-1}{x+1}>\frac{4\sqrt{x}(x-1)}{x^{2}+6x+1} \end{aligned}$$

for all \(x>1\). Noting that \(x^{2}+6x+1=2(x+1)^{2}-(x-1)^{2}\) and putting \(t=\frac{x-1}{x+1}\), we have \(x=\frac{1+t}{1-t}\) with \(0< t<1\) and

$$\begin{aligned} \frac{\sqrt{x}}{x+1}=\sqrt{\frac{x}{(x+1)^{2}}}=\frac{1}{2}\sqrt{1-t ^{2}}. \end{aligned}$$

Substituting these quantities into the last inequality, we see that it suffices to prove that (after simple reduction)

$$\begin{aligned} \arcsin t>\frac{2t\sqrt{1-t^{2}}}{2-t^{2}} \end{aligned}$$

for all \(0< t<1\). Now, we set, for \(0< t<1\),

$$\begin{aligned} \Phi (t)=\arcsin t-\frac{2t\sqrt{1-t^{2}}}{2-t^{2}}. \end{aligned}$$

Simple computations lead to

$$\begin{aligned} \Phi^{\prime}(t)=\frac{1}{\sqrt{1-t^{2}}}-2 \frac{(2-t^{2})(1-2t^{2})+2t ^{2}(1-t^{2})}{(2-t^{2})^{2}\sqrt{1-t^{2}}} = \frac{t^{2}(t^{2}+2)}{(2-t ^{2})^{2}\sqrt{1-t^{2}}}. \end{aligned}$$

It follows that \(\Phi^{\prime}(t)>0\) for all \(0< t<1\), and so Φ is strictly increasing for \(0< t<1\). We then deduce that \(\Phi (t)>\Phi (0)=0\), so completing the proof. □
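The key inequality \(\Phi (t)>0\) of this proof, and the resulting midpoint concavity of \(x\longmapsto P(x,1)\), are easy to confirm numerically (an illustrative sketch):

```python
import math

def Phi(t):
    """Phi(t) = arcsin(t) - 2t*sqrt(1-t^2)/(2-t^2), claimed positive on (0, 1)."""
    return math.asin(t) - 2 * t * math.sqrt(1 - t * t) / (2 - t * t)

def P1(x):
    """One-variable trace x -> P(x, 1) of the first Seiffert mean."""
    return x if x == 1 else (x - 1) / (2 * math.asin((x - 1) / (x + 1)))
```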

For the mean NS, we have the following:

Theorem 3.3

The Neuman-Sándor mean NS is strictly convex.

Proof

Similarly as before, we take \(g(x)=\operatorname{arcsinh}\frac{x-1}{x+1}\) for \(x>1\). Simple computations lead to

$$\begin{aligned} g^{\prime}(x)=\frac{\sqrt{2}}{(x+1)\sqrt{1+x^{2}}},\qquad g^{\prime\prime}(x)=- \sqrt{2} \frac{2x^{2}+x+1}{(x+1)^{2}(1+x^{2})\sqrt{1+x^{2}}}. \end{aligned}$$

According to Lemma 2.3, we have to show (after elementary simplification and reduction) that

$$\begin{aligned} \operatorname{arcsinh}\frac{x-1}{x+1}< 2\sqrt{2} \frac{(x-1)\sqrt{1+x^{2}}}{3x ^{2}+2x+3} \end{aligned}$$

for all \(x>1\). Here, we write \(3x^{2}+2x+3=2(x+1)^{2}+(x-1)^{2}\), and as in the previous proof, we set \(t=\frac{x-1}{x+1}\), \(x= \frac{1+t}{1-t}\) with \(0< t<1\). The last inequality becomes (after simple manipulations)

$$\begin{aligned} \operatorname{arcsinh}t< \frac{2t\sqrt{1+t^{2}}}{2+t^{2}}. \end{aligned}$$

Now, for \(0< t<1\), we set

$$\begin{aligned} \Phi (t)=\frac{2t\sqrt{1+t^{2}}}{2+t^{2}}-\operatorname{arcsinh}t. \end{aligned}$$

A simple computation leads to

$$\begin{aligned} \Phi^{\prime}(t)=2 \frac{(2+t^{2})(1+2t^{2})-2t^{2}(1+t^{2})}{(2+t^{2})^{2}\sqrt{1+t ^{2}}}-\frac{1}{\sqrt{1+t^{2}}} \end{aligned}$$

or, equivalently (after simplification and reduction),

$$\begin{aligned} \Phi^{\prime}(t)=\frac{t^{2}(2-t^{2})}{(2+t^{2})^{2}\sqrt{1+t^{2}}}. \end{aligned}$$

Clearly, \(\Phi^{\prime}(t)>0\) for all \(0< t<1\), and so Φ is strictly increasing for \(0< t<1\). We then deduce that \(\Phi (t)>\Phi (0)=0\) for all \(0< t<1\), which is the desired inequality. The proof is complete. □
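As for the first Seiffert mean, the inequality just proved and the resulting midpoint convexity of \(x\longmapsto NS(x,1)\) can be confirmed numerically (a sketch; we call the auxiliary function Psi to avoid clashing with the Φ of the previous proof):

```python
import math

def Psi(t):
    """Psi(t) = 2t*sqrt(1+t^2)/(2+t^2) - arcsinh(t), claimed positive on (0, 1)."""
    return 2 * t * math.sqrt(1 + t * t) / (2 + t * t) - math.asinh(t)

def NS1(x):
    """One-variable trace x -> NS(x, 1) of the Neuman-Sandor mean."""
    return x if x == 1 else (x - 1) / (2 * math.asinh((x - 1) / (x + 1)))
```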

Finally, we state the following result.

Theorem 3.4

The second Seiffert mean T is strictly convex.

Proof

It is similar to the previous ones. We just present here a short proof since the related computations are easy. In fact, setting \(g(x)= \arctan \frac{x-1}{x+1}\), \(x>1\), we have

$$\begin{aligned} g^{\prime}(x)=\frac{1}{1+x^{2}},\qquad g^{\prime\prime}(x)=\frac{-2x}{(1+x^{2})^{2}}. \end{aligned}$$

The inequality corresponding to Lemma 2.3 is equivalent to (after all computation and reduction)

$$\begin{aligned} \forall x>1,\quad \arctan \frac{x-1}{x+1}< \frac{x-1}{x+1}, \end{aligned}$$

which is valid since \(\arctan z< z\) for \(z>0\). The proof is finished. □

Remark 3.3

  1. (i)

    According to Proposition 3.1, we deduce that \(NS^{*}\) and \(T^{*}\) are strictly concave.

  2. (ii)

    Judging from their graphs, the means NS and T do not seem to be log-convex. The mean P is log-concave since it is concave.

Application for sub-super-stabilizability

As already pointed out before, this section displays some applications of mean-convexity to the so-called sub/super-stabilizability of some standard means. Let \(m_{1}\), m, \(m_{2}\) be three means such that \(m_{1}< m< m_{2}\). In some situations, it may be of interest to show that \(m< m (m_{1},m_{2} )\) or \(m (m_{1},m_{2} )< m\), that is, \(m(x,y)<(>)\,m (m_{1}(x,y),m_{2}(x,y) )\) for all \(x,y>0\). Many inequalities of this type are well known in the literature, such as \(L< L(A,G)\), \(I(A,G)< I\), and \(T(A,Q)< T\); see, for instance, [13, 30]. In what follows, we will see that the strict convexity/concavity of m, when combined with its sub/super-stabilizability, can be used to obtain some of these (composed) mean-inequalities. This is described in the following result.
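The three composed mean-inequalities just quoted are easy to test numerically; the sketch below (with the means coded as in the Introduction) checks them at a few sample points:

```python
import math

def A(x, y): return (x + y) / 2
def G(x, y): return math.sqrt(x * y)
def Q(x, y): return math.sqrt((x * x + y * y) / 2)
def L(x, y): return x if x == y else (y - x) / (math.log(y) - math.log(x))
def I(x, y): return x if x == y else math.exp(-1) * (y**y / x**x) ** (1 / (y - x))
def T(x, y): return x if x == y else (y - x) / (2 * math.atan((y - x) / (y + x)))
```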

Theorem 4.1

Let m be a mean. Then the following assertions hold:

  1. (i)

    If m is strictly convex and \((A,A_{p})\)-sub-stabilizable for some \(p\in {\mathbb{R}}\), then we have \(m (A,A_{p} )< m\).

  2. (ii)

    If m is strictly concave and \((A,A_{p})\)-super-stabilizable for some \(p\in {\mathbb{R}}\), then we have \(m< m (A,A_{p} )\).

Proof

(i) By assumption we have \({\mathcal{R}}(A,m,A_{p})\leq m\), that is, by the definition of \({\mathcal{R}}\) and A,

$$\begin{aligned} \forall x,y>0,\quad \frac{m(x,A_{p})+m(y,A_{p})}{2}\leq m, \end{aligned}$$

which, with the strict convexity of m, immediately implies the desired inequality.

(ii) It is analogous to (i) by similar arguments. □

It is worth mentioning that the sub-stabilizability and super-stabilizability in the previous theorem are not required to be strict, and so we have the same conclusions when we replace both of them by stabilizability. For example, L is strictly concave and \((A,G)\)-stabilizable. Then Theorem 4.1(ii) immediately yields \(L< L(A,G)\).

Now, let us examine another example, which further illustrates how mean-convexity can be used to establish the sub-stabilizability of a certain bivariate mean.

Theorem 4.2

Let m be a strictly concave mean. Assume that there exists \(r<1\) such that \(\frac{A+A_{r}}{2}\leq m< A\). Then m is strictly \((A,A_{r})\)-sub-stabilizable.

Proof

First, since \(\frac{A+A_{r}}{2}\leq m< A\) and \(r<1\), we have \(A_{r}< m< A\). Now, we need to show that \({\mathcal{R}}(A,m,A_{r})< m\). We have, for all \(x,y>0\), \(x\neq y\),

$$\begin{aligned} {\mathcal{R}} (A,m,A_{r} ) (x,y):= \frac{m(x,A_{r})+m(y,A_{r})}{2}\quad \mbox{with } A_{r}:=A_{r}(x,y), \end{aligned}$$

which, with the fact that m is strictly concave, yields

$$\begin{aligned} \mathcal{R} (A,m,A_{r} ) (x,y)< m(A,A_{r})\quad \mbox{with } A:= \frac{x+y}{2}. \end{aligned}$$

It follows that

$$\begin{aligned} {\mathcal{R}} (A,m,A_{r} ) (x,y)< m (A,A_{r} )< A (A,A _{r} ):=\frac{A+A_{r}}{2}\leq m, \end{aligned}$$

and the desired result is obtained. □

As a particular case of the previous theorem, we have the following result.

Corollary 4.3

The mean P is strictly \((A,G)\)-sub-stabilizable.

Proof

In [13] the authors proved this result by three different methods. We give here a fourth method based on the previous arguments. In fact, following [31, 32], we have \(\frac{2}{\pi }A+(1-\frac{2}{ \pi })G< P\), which implies that \(\frac{A+G}{2}< P\). The desired result follows from the previous theorem by taking \(m=P\) and \(r=0\). □
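The argument of this corollary can be replayed numerically: the inequalities \(\frac{A+G}{2}< P< A\) and \({\mathcal{R}}(A,P,G)< P\) both hold at sample points (an illustrative sketch):

```python
import math

def A(x, y): return (x + y) / 2
def G(x, y): return math.sqrt(x * y)
def P(x, y): return x if x == y else (y - x) / (2 * math.asin((y - x) / (y + x)))

def R_APG(x, y):
    """R(A, P, G)(x, y) = (P(x, g) + P(g, y)) / 2 with g = G(x, y)."""
    g = G(x, y)
    return (P(x, g) + P(g, y)) / 2
```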

Another result of interest is presented in the following:

Theorem 4.4

Let m be a strictly convex mean. Assume that there exists \(r>1\) such that \(A< m\leq \frac{A+A_{r}}{2}\). Then m is strictly \((A,A_{r})\)-super-stabilizable.

Proof

Since \(r>1\), we have \(A< m\leq \frac{A+A_{r}}{2}< A_{r}\). We then need to show that \(m<{\mathcal{R}}(A,m,A_{r})\). By the definition of \({\mathcal{R}}\) we have, for all \(x,y>0\),

$$\begin{aligned} {\mathcal{R}} (A,m,A_{r} ) (x,y)=A \bigl(m(x,A_{r}),m(y,A_{r}) \bigr)=\frac{m(x,A_{r})+m(y,A_{r})}{2}\quad \mbox{with } A_{r}:=A_{r}(x,y). \end{aligned}$$

This, with the strict convexity of m, allows us to write, with \(x\neq y\),

$$\begin{aligned} {\mathcal{R}} (A,m,A_{r} ) (x,y)>m \biggl(\frac{x+y}{2},A_{r} \biggr)=m(A,A _{r})\quad\mbox{with } A:=A(x,y)=\frac{x+y}{2}. \end{aligned}$$

Since \(A< m\), we have \(A(A,A_{r})< m(A,A_{r})\), that is, \(m(A,A_{r})>\frac{A+A _{r}}{2}\). This with our assumption yields \({\mathcal{R}} (A,m,A _{r} )>m\), which completes the proof. □

The two following corollaries, whose proofs follow from that of the previous theorem, assert that the assumption \(m<\frac{A+A_{r}}{2}\), \(r>1\), is satisfied in the two particular cases \(m=NS\) and \(m=T\).

Corollary 4.5

The mean NS is strictly \((A,Q)\)-super-stabilizable.

Proof

Following [33], we have \(NS<\frac{2}{3}A+\frac{1}{3}Q\), which is stronger than \(NS<\frac{A+Q}{2}\). The desired result follows from the previous theorem by taking \(m=NS\) and \(r=2\) since \(A_{2}=Q\). □

Corollary 4.6

The mean T is strictly \((A,A_{3})\)-super-stabilizable.

Proof

Since T is convex, Theorem 4.4 immediately implies the desired result, provided that the inequality \((A{<})T<\frac{A+A_{3}}{2}\) holds. Such an inequality is established in the following proposition. □

Proposition 4.7

We have

$$\begin{aligned} T< \frac{A+A_{3}}{2}. \end{aligned}$$

Proof

Since \(T<\frac{A+2Q}{3}\) (see [34]), it suffices to show that \(\frac{A+2Q}{3}<\frac{A+A_{3}}{2}\), that is, \(Q<\frac{A+3A_{3}}{4}\). By the weighted arithmetic-geometric mean inequality we have

$$\begin{aligned} A^{1/4}A_{3}^{3/4}< \frac{A+3A_{3}}{4}, \end{aligned}$$

and so it suffices to prove that \(Q< A^{1/4}A_{3}^{3/4}\). By the homogeneity of the involved means and their definitions, this amounts (after a simple reduction) to the inequality \((x^{2}+1)^{2}<(x+1)(x^{3}+1)\) for all \(x>0\) with \(x\neq 1\). After simple manipulations, this latter inequality is reduced to \(x(x-1)^{2}>0\), and the desired result is obtained, completing the proof. □
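The final algebraic step can be verified symbolically; the SymPy sketch below expands and factors the gap between the two sides:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
# (x+1)(x^3+1) - (x^2+1)^2, which should factor as x*(x-1)^2
gap = sp.expand((x + 1) * (x**3 + 1) - (x**2 + 1)**2)
```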

Remark 4.1

From the three previous corollaries we immediately deduce that \(P^{*}\) is strictly \((H,G)\)-super-stabilizable, \(NS^{*}\) is strictly \((H,A_{-2})\)-sub-stabilizable, and \(T^{*}\) is strictly \((H,A_{-3})\)-sub-stabilizable, respectively.

Remark 4.2

In Theorem 4.4, the assumption \(m<\frac{A+A_{r}}{2}\) implies by Lemma 2.2 that \(m< A_{\frac{1+r}{2}}\), \(r>1\). Hence, if m admits a power mean upper bound \(m< A_{s}\), where s is the best possible, then we should have \(s\leq \frac{1+r}{2}\). For example, if \(m=NS\), then we know that \(NS< A_{4/3}\) with \(A_{4/3}\) the best power bound of NS (see [35]), and so we should have \(\frac{1+r}{2} \geq 4/3\), that is, \(r\geq 5/3\). Corollary 4.5 confirms that \(r=2\geq 5/3\) is a convenient case, but perhaps \(r=2\) is not the best possible. See more details in the next section.

Remark 4.3

Following Example 2.1, the mean \(\frac{A+A_{r}}{2}\) is strictly concave for \(r<1\) and strictly convex for \(r>1\). This, combined with Theorem 4.2 and Theorem 4.4, respectively, yields that \(\frac{A+A_{r}}{2}\) is strictly \((A,A_{r})\)-sub-stabilizable for \(r<1\) and strictly \((A,A_{r})\)-super-stabilizable for \(r>1\).

Some open problems

We end this paper by stating some open problems as the purpose for future research. These problems are derived from the previous theoretical results and their proofs.

Problem 1

Let p, q, r be three real numbers such that \(p< r< q\). Do there exist \(\alpha \in (0,1)\) and \(\beta \in (0,1)\), \(\alpha <\beta \), satisfying the following double mean-inequality:

$$\begin{aligned} (1-\alpha )A_{p}+\alpha A_{q}< A_{r}< (1-\beta )A_{p}+\beta A_{q}? \end{aligned}$$

If the answer is positive, what are the best α and β?

Problem 2

Let m be a convex mean (resp. concave mean) such that \(m>A\) (resp. \(m< A\)). Under what conditions does there exist \(r>1\) (resp. \(r<1\)) such that

$$\begin{aligned} m< \frac{A+A_{r}}{2} \biggl(\textit{resp. } m>\frac{A+A_{r}}{2} \biggr)? \end{aligned}$$

When such r exists, what is the best one?

Corollary 4.5 and Corollary 4.6 assert that Problem 2 has a positive answer when \(m=NS>A\) and \(m=T>A\), with \(r=2\) and \(r=3\), respectively. In parallel, Corollary 4.3 gives a positive answer for \(m=P< A\) with \(r=0\). Of course, we can then ask what is the best possible \(r>1\) such that \(NS<\frac{A+A_{r}}{2}\). A similar question can be stated for T and P. About this, we state the following conjectures.

Problem 3

(1) The best \(r>1\) satisfying \(T< \frac{A+A_{r}}{2}\) is

$$\begin{aligned} r:=r_{T}=\frac{\ln 2}{\ln (\frac{2\pi }{8-\pi } )}\approx 2.695325\cdots. \end{aligned}$$

(2) The best \(r>1\) such that \(NS<\frac{A+A_{r}}{2}\) is

$$\begin{aligned} r:=r_{NS}=\frac{\ln 2}{\ln (2/\alpha )},\quad \textit{with } \alpha = \frac{2}{\ln (1+\sqrt{2})}-1,\qquad r_{NS}\approx 1.524164\cdots. \end{aligned}$$

(3) The best \(r<1\) satisfying \(\frac{A+A_{r}}{2}< P\) is

$$\begin{aligned} r:=r_{P}=\frac{\ln 2}{\ln (\frac{2\pi }{4-\pi } )}\approx 0.348218\cdots. \end{aligned}$$
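The three conjectured constants are straightforward to evaluate numerically; in the sketch below we take \(\alpha =2/\ln (1+\sqrt{2})-1\), which reproduces the decimal value stated for \(r_{NS}\):

```python
import math

# Conjectured constants of Problem 3, evaluated numerically
r_T = math.log(2) / math.log(2 * math.pi / (8 - math.pi))
alpha = 2 / math.log(1 + math.sqrt(2)) - 1
r_NS = math.log(2) / math.log(2 / alpha)
r_P = math.log(2) / math.log(2 * math.pi / (4 - math.pi))
```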

However, the conclusion of Problem 2 does not always hold for any \(m>A\). In fact, if we take, for example, \(m=C>A\), where C is the contraharmonic mean, then there is no \(s>1\) such that \(C< A_{s}\). Indeed, if \(C\leq A_{s}\) for some \(s>1\), then by virtue of the homogeneity of C and \(A_{s}\) we should have

$$\begin{aligned} \forall x>0,\quad \biggl(\frac{x^{2}+1}{x+1} \biggr)^{s}\leq \frac{x ^{s}+1}{2}. \end{aligned}$$

This last inequality is impossible: for x large enough, the left-hand side behaves like \(x^{s}\), whereas the right-hand side behaves like \(x^{s}/2\), a contradiction. This, with the help of Lemma 2.2, shows that there is no \(r>1\) such that \(C<\frac{A+A_{r}}{2}\), and our claim is then justified.

Finally, we end this paper by mentioning the following. In the previous study, we have seen that the standard symmetric homogeneous monotone means that are less than A (such as min, H, G, L, \(L^{*}\), I, and P) are concave, whereas those that are greater than A (like max, C, S, T, and NS) are convex. We have also seen that the nonmonotone mean \(C^{*}\) is neither convex nor concave, with \(C>A\) and so \(C^{*}< A^{*}=H< A\). This, with Proposition 3.1, leads us to raise the following open problem.

Problem 4

Let m be a symmetric homogeneous strictly monotone mean.

  1. (i)

    Prove or disprove that Proposition 3.1 holds for m not necessarily of class \(C^{2}\).

  2. (ii)

    Prove or disprove that if \(m< A\), then m is strictly concave and if \(m>A\), then m is strictly convex.

References

  1. Raïssouli, M, Sándor, J: On a method of construction of new means with applications. J. Inequal. Appl. 2013, Article ID 2013:89 (2013)

    MathSciNet  MATH  Google Scholar 

  2. Seiffert, HJ: Problem 887. Nieuw Arch. Wiskd. 11, 176 (1993)

    MathSciNet  Google Scholar 

  3. Seiffert, HJ: Aufgabe 16. Wurzel 29, 87 (1995)

    Google Scholar 

  4. Neuman, E, Sándor, J: On the Schwab-Borchardt mean. Math. Pannon. 14(2), 253-266 (2003)

    MathSciNet  MATH  Google Scholar 

  5. Chu, YM, Long, BY, Gong, WM, Song, YQ: Sharps bounds for Seiffert and Neuman-Sándor means in terms of generalized logarithmic means. J. Inequal. Appl. 2013, 10 (2013)

    MathSciNet  Article  MATH  Google Scholar 

  6. Chu, YM, Long, BY: Bounds of the Neuman-Sándor mean using power and identric means. Abstr. Appl. Anal. 2013, Article ID 832591 (2013)

    MathSciNet  MATH  Google Scholar 

  7. Jiang, W-D, Qi, F: Sharps bounds for Neuman-Sándor’s mean in terms of the root-mean-square. Period. Math. Hung. 69, 134-138 (2014)

    MathSciNet  Article  MATH  Google Scholar 

  8. Jiang, W-D, Qi, F: Sharps bounds for the Neuman-Sándor mean in terms of the power and contraharmonic means. Cogent Mathematics 2015(2), 995951 (2015)

    MathSciNet  MATH  Google Scholar 

  9. Neuman, E, Sándor, J: On the Schwab-Borchardt mean II. Math. Pannon. 17(1), 49-59 (2006)

    MathSciNet  MATH  Google Scholar 

  10. Sándor, J: On certain inequalities for means III. Arch. Math. 16(1), 34-40 (2001)

    MathSciNet  Article  MATH  Google Scholar 

  11. Raïssouli, M: Stability and stabilizability for means. Appl. Math. E-Notes 11, 159-174 (2011)

    MathSciNet  MATH  Google Scholar 

  12. Raïssouli, M: Positive answer for a conjecture about stabilizable means. J. Inequal. Appl. 2013, 467 (2013)

    MathSciNet  Article  MATH  Google Scholar 

  13. Raïssouli, M, Sándor, J: Sub-stabilizable and super-stabilizable bivariate means. J. Inequal. Appl. 2014, Article ID 2014:28 (2014)

    Article  MATH  Google Scholar 

  14. Anisiu, MC, Anisiu, V: The first Seiffert mean is strictly \((G,A)\)-super-stabilizable. J. Inequal. Appl. 2014, 185 (2014)

    MathSciNet  Article  Google Scholar 

  15. Sándor, J, Raïssouli, M: On a simple and short proof for the strict \((G,A)\)-super-stabilizability of the first seiffert mean. Preprint

  16. Niculescu, CP: Invitation to Convex Function Theory. https://www.researchgate.net/253010852

  17. Stolarsky, KB: Generalizations of the logarithmic mean. Math. Mag. 48, 87-92 (1975)

    MathSciNet  Article  MATH  Google Scholar 

  18. Stolarsky, KB: The power and generalized logarithmic means. Am. Math. Mon. 87, 545-548 (1980)

    MathSciNet  Article  MATH  Google Scholar 

  19. Bege, A, Bukor, J, Tòth, JT: On (log-)convexity of power mean. Ann. Math. Inform. 42, 3-7 (2013) Available online at http://ami.ektf.hu

  20. Mildorf, TJ: A sharp bound on the two variable power mean. Math. Reflec. 2 (2006), Available online at https://services.artofproblemsolving.com/download.php?id=yxr0ywnobwvudhmvmy84lzm4zgfhmjy3nt nmzgnknwfhzdewyzy0zgm1yzvjzwuwzmzlogy1&rn=ysbzagfyccbib3vuzc5qrey=

  21. Sándor, J: A note on log-convexity of power means. Ann. Math. Inform. 45, 107-110 (2015)

  22. Matkowski, J, Rätz, J: Convex Functions with Respect to an Arbitrary Mean. International Series of Numerical Mathematics, vol. 123. Birkhäuser Verlag, Basel (1997)

  23. Sándor, J: Monotonicity and convexity properties of means. Octogon Math. Mag. 7(2), 22-27 (1999)

  24. Losonczi, L, Páles, Zs: Minkowski’s inequality for two variable difference means. Proc. Am. Math. Soc. 126, 779-791 (1998)

  25. Losonczi, L, Páles, Zs: Minkowski’s inequality for two variable Gini means. Acta Sci. Math. 62, 413-425 (1996)

  26. Qi, F: Logarithmic convexities of the extended mean values. Proc. Am. Math. Soc. 130(6), 1787-1796 (2002)

  27. Yang, Zh-H: On the log-convexity of two-parameter homogeneous functions. Math. Inequal. Appl. 10(3), 499-516 (2007)

  28. Yang, Zh-H: On the monotonicity and log-convexity of a four-parameter homogeneous mean. J. Inequal. Appl. 2008, Article ID 149286 (2008)

  29. Yang, Zh-H: Bivariate Log-Convexity of more Extended Means and its Applications. arXiv:1408.2252 [math.CA]

  30. Costin, I, Toader, G: Optimal evaluations of some Seiffert type means by power means. Appl. Math. Comput. 219, 4745-4754 (2013)

  31. Jiang, WD, Qi, F: Some sharp inequalities involving Seiffert and other means and their concise proofs. Math. Inequal. Appl. 15(4), 1007-1017 (2012)

  32. Liu, H, Meng, XJ: The optimal convex combination bounds for Seiffert’s mean. J. Inequal. Appl. 2011, Article ID 686834 (2011)

  33. Neuman, E: A note on a certain bivariate mean. J. Math. Inequal. 6(4), 637-643 (2012)

  34. Chu, YM, Wang, MK, Gong, WM: Two sharp double inequalities for Seiffert mean. J. Inequal. Appl. 2011, 44 (2011)

  35. Yang, ZH: Sharp power means bounds for Neuman-Sándor mean (2012). arXiv:1208.0895

Acknowledgements

The authors would like to thank the anonymous referees for their comments and for bringing several references to our attention.

Author information

Corresponding author

Correspondence to Mustapha Raïssouli.

Additional information

Competing interests

The authors declare that they have no competing interests regarding the publication of the present manuscript.

Authors’ contributions

Both authors worked in coordination. Both carried out the proofs, and both read and approved the final version of the manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Raïssouli, M., Sándor, J. Sub-super-stabilizability of certain bivariate means via mean-convexity. J Inequal Appl 2016, 273 (2016). https://doi.org/10.1186/s13660-016-1212-z

  • DOI: https://doi.org/10.1186/s13660-016-1212-z

MSC

  • 26E60

Keywords

  • bivariate mean
  • convexity of mean
  • sub-stabilizable mean
  • super-stabilizable mean
  • mean-inequalities