 Research
 Open Access
On the generalised sum of squared logarithms inequality
Journal of Inequalities and Applications volume 2015, Article number: 101 (2015)
Abstract
Assume \(n\geq2\). Consider the elementary symmetric polynomials \(e_{k}(y_{1},y_{2},\ldots, y_{n})\) and denote by \(E_{0},E_{1},\ldots,E_{n-1}\) the elementary symmetric polynomials in reverse order, \(E_{k}(y_{1},y_{2},\ldots,y_{n}):=e_{n-k}(y_{1},y_{2},\ldots,y_{n})= \sum_{i_{1}<\cdots<i_{n-k}} y_{i_{1}}y_{i_{2}}\cdots y_{i_{n-k}}\), \(k\in\{0,1,\ldots,n-1\}\). Let, moreover, S be a nonempty subset of \(\{0,1,\ldots,n-1\}\). We investigate necessary and sufficient conditions on the function \(f:I\to\mathbb{R}\), where \(I\subset\mathbb{R}\) is an interval, such that the inequality \(f(a_{1})+f(a_{2})+\cdots+f(a_{n})\leq f(b_{1})+f(b_{2})+\cdots+f(b_{n})\) (∗) holds for all \(a=(a_{1},a_{2},\ldots,a_{n})\in I^{n}\) and \(b=(b_{1},b_{2},\ldots,b_{n})\in I^{n}\) satisfying \(E_{k}(a)< E_{k}(b)\) for \(k\in S\) and \(E_{k}(a)=E_{k}(b)\) for \(k\in\{0,1,\ldots,n-1\}\setminus S\). As a corollary, we obtain our inequality (∗) if \(2\leq n\leq4\), \(f(x)=\log^{2}x\) and \(S=\{1,\ldots,n-1\}\), which is the sum of squared logarithms inequality previously known for \(2\le n\le3\).
Introduction – the sum of squared logarithms inequality
In a previous contribution [1] the sum of squared logarithms inequality has been introduced and proved for the particular cases \(n=2,3\). For \(n=3\) it reads: let \(a_{1},a_{2},a_{3},b_{1},b_{2},b_{3}>0\) be given positive numbers such that
Then
The general form of this inequality can be conjectured as follows.
Definition 1.1
The standard elementary symmetric polynomials \(e_{1},\ldots,e_{n-1},e_{n}\) are
note that \(e_{n}=y_{1}y_{2}\cdots y_{n}\).
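Numerically, the \(e_{k}\) can be read off from the coefficients of the monic polynomial \(\prod_{i=1}^{n}(x+y_{i})\). A minimal sketch (our own illustration, assuming numpy is available; the helper name is hypothetical):

```python
import numpy as np

def elementary_symmetric(y):
    """Return [e_1, ..., e_n] for the numbers y_1, ..., y_n."""
    # np.poly builds the monic polynomial with roots -y_i, i.e. prod_i (x + y_i);
    # its coefficient list is [1, e_1, e_2, ..., e_n].
    return list(np.poly([-yi for yi in y])[1:])

# For y = (1, 2, 3): e_1 = 6, e_2 = 11, e_3 = 6.
print(elementary_symmetric([1.0, 2.0, 3.0]))
```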
Conjecture 1.2
(Sum of squared logarithms inequality)
Let \(a_{1},a_{2},\ldots,a_{n}\), \(b_{1},b_{2},\ldots,b_{n}\) be given positive numbers. Then the condition
implies that
Remark 1.3
Note that the conclusions of Conjecture 1.2 are trivial provided we have equality everywhere, i.e.
In this case the numbers \(a_{1},\ldots, a_{n}\) and \(b_{1},\ldots, b_{n}\) coincide up to a permutation, which can be seen by comparing the characteristic polynomials of two matrices with eigenvalues \(a_{1},\ldots ,a_{n}\) and \(b_{1},\ldots,b_{n}\). From this perspective, having equality only in the last product \(e_{n}\) and strict inequality elsewhere seems to be the most difficult case.
Based on extensive random sampling on \(\mathbb{R}_{+}^{n}\) for small numbers n it has been conjectured that Conjecture 1.2 might be true for arbitrary \(n\in\mathbb{N}\). The sum of squared logarithms inequality has immediate important applications in matrix analysis ([2]; see also [3]) as well as in nonlinear elasticity theory [4–7]. In matrix analysis it implies that the global minimiser over all rotations to
at given \(F\in\operatorname{GL}^{+}(n)\) is realised by the orthogonal factor \(R=\operatorname{polar}(F)\) (such that \(R^{T} F=\sqrt{F^{T}F}\)). Here, \(\|X\|^{2}:=\sum_{i,j=1}^{n} X_{ij}^{2}\) denotes the Frobenius matrix norm and \(\operatorname{Log}: \operatorname{GL}(n) \to\mathfrak{gl}(n)=\mathbb{R}^{n\times n}\) is the multivalued matrix logarithm, i.e. any solution \(Z=\operatorname{Log} X\in\mathbb{C}^{n\times n}\) of \(\exp(Z)=X\), and \(\operatorname{sym}_{*}(Z)=\frac{1}{2} ( Z^{*}+Z )\).
Recently, the case \(n=2\) was used to verify the polyconvexity condition in nonlinear elasticity [4, 5] for a certain class of isotropic energy functions. For more background information on the sum of squared logarithms inequality we refer the reader to [1].
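The random sampling mentioned above is straightforward to reproduce. The following sketch (our own illustration, assuming numpy; the function names are hypothetical) rejection-samples pairs with unit product and ordered elementary symmetric functions, then tests the conclusion for \(n=3\):

```python
import numpy as np

rng = np.random.default_rng(0)

def esym(y):
    # coefficients of prod_i (x + y_i) are [1, e_1, ..., e_n]
    return np.poly(-np.asarray(y))[1:]

def check_conjecture(n=3, trials=20000):
    """Rejection-sample pairs a, b > 0 with e_n(a) = e_n(b) = 1 and
    e_k(a) <= e_k(b) for k < n; test sum(log^2 a_i) <= sum(log^2 b_i)."""
    tested = 0
    for _ in range(trials):
        a = rng.uniform(0.2, 5.0, n)
        b = rng.uniform(0.2, 5.0, n)
        a /= a.prod() ** (1.0 / n)   # normalise: e_n(a) = 1
        b /= b.prod() ** (1.0 / n)   # normalise: e_n(b) = 1
        if np.all(esym(a)[:-1] <= esym(b)[:-1]):
            tested += 1
            if np.log(a) @ np.log(a) > np.log(b) @ np.log(b) + 1e-9:
                return False, tested
    return True, tested

ok, tested = check_conjecture()
print(ok)   # True: no counterexample among the sampled admissible pairs
```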
In this paper we extend the investigation of the validity of Conjecture 1.2 by considering arbitrary functions f instead of \(f(x)=\log^{2} x\). We formulate this more general problem and we are able to extend Conjecture 1.2 to the case \(n=4\). The same methods should also be useful for proving the statement for \(n=5,6\). However, the necessary technicalities prevent us from discussing these cases in this paper.
In addition, we present ideas which might be helpful in attacking the fully general case, namely arbitrary f and arbitrary n.
The generalised inequality
In order to generalise Conjecture 1.2 in the directions hinted at in the introduction, we consider from now on a nonstandard definition of the elementary symmetric polynomials. In fact, for \(n\geq2\) it will be more convenient for us to reverse their numbering and define \(E_{0},E_{1},\ldots,E_{n-1}\) by
In particular, now
Let \(I\subset\mathbb{R}\) be an open interval and let
Let S be a nonempty subset of \(\{0,1,\ldots,n-1\}\) and assume that \(a,b\in\Delta_{n}\) are such that
In this section we investigate necessary and sufficient conditions for a (smooth) function \(f:I\to\mathbb{R}\), such that the inequality
holds for all \(a,b\in\Delta_{n}\) satisfying assumption (2.4).
Remark 2.1
The formulation of the above problem has a certain monotonicity structure: we assume that ‘\(E(a)< E(b)\)’ and want to prove that ‘\(F(a)< F(b)\)’. Therefore our idea is to consider a curve y connecting the points a and b, such that \(E(y(t))\) ‘increases’. Then the function \(g(t)=F(y(t))\) should also increase and therefore \(g'(t)>0\) must hold. From this we are able to derive necessary and sufficient conditions on the function f.
This approach motivates the following definition.
Definition 2.2
(b dominates a, \(a\preceq b\))
Let \(a,b\in\Delta_{n}\). We will say that b dominates a and denote \(a\preceq b\) if there exists a piecewise differentiable mapping \(y:[0,1]\to\Delta_{n}\) (i.e. y is continuous on \([0,1]\) and differentiable in all but at most countably many points) such that \(y(0)=a\), \(y(1)=b\), \(y_{i}(t)\neq y_{j}(t)\) for \(i\neq j\) and all but at most countably many \(t\in[0,1]\) and the functions
are nondecreasing on the interval \([0,1]\).
If \(a\preceq b\), then \(E_{k}(a)=A_{k}(0)\leq A_{k}(1)=E_{k}(b)\), so it follows from Definition 2.2 that a, b satisfy assumption (2.4) with S being the set of all k for which \(A_{k}(t)\) is not a constant function on \([0,1]\).
We are ready to formulate the main results of this section.
Theorem 2.3
Assume that \(a,b\in\Delta_{n}\) and let \(a\preceq b\). Let \(S\subseteq\{0,1,\ldots,n-1\}\) denote the set of all integers k with \(E_{k}(a)< E_{k}(b)\). Moreover, let \(f\in C^{n}(I)\) be such that
Then the following inequality holds:
A partially reverse statement is also true.
Theorem 2.4
Let \(f\in C^{n}(I)\) be such that the inequality
holds for all \(a,b\in\Delta_{n}\) satisfying
for some subset \(S\subseteq\{0,1,\ldots,n-1\}\). Then f satisfies property (2.5), i.e.
In this respect, we can formulate another conjecture.
Conjecture 2.5
Let S be a nonempty subset of \(\{0,1,\ldots,n-1\}\) and assume that \(a,b\in\Delta_{n}\) are such that (2.4) is satisfied, i.e.
Then there exists a curve y satisfying the conditions from Definition 2.2 and thus \(a\preceq b\).
Remark 2.6
In concrete applications of Theorem 2.3 and Theorem 2.4 one would like to know whether condition (2.4) already implies \(a\preceq b\). This is Conjecture 2.5. Unfortunately, we are able to prove Conjecture 2.5 only for \(2\leq n\leq4\), \(I=(0,\infty)\) and \(S\subseteq\{1,2,\ldots,n-1\}\) (see the next section).
Example 2.7
It is easy to see that if \(I=(0,\infty)\) then the function \(f(x)=\log^{2}x\) satisfies property (2.5) for \(S=\{1,2,\ldots,n-1\}\). Indeed, we proceed by induction on n. For \(n=2\) and \(k=1\) the property is immediate. Moreover, for \(k\geq2\) and \(n\geq3\) we get
by the induction hypothesis, since the second summand vanishes. It remains to check property (2.5) for \(k=1\), which is also immediate.
Note also that property (2.5) is not true for \(k=0\). Therefore Theorem 2.3 and Theorem 2.4 for \(f(x)=\log^{2}x\) attain the following formulation.
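The sign computations in Example 2.7 can be spot-checked symbolically. The sketch below (our own, assuming sympy; `condition_holds` is a hypothetical helper) samples \((-1)^{n+k}(x^{k}f'(x))^{(n-1)}\) for \(f(x)=\log^{2}x\) at a few points of \((0,\infty)\), and also exhibits the failure for \(k=0\):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.log(x)**2

def condition_holds(n, k, samples):
    """Sample (-1)**(n+k) * d^(n-1)/dx^(n-1) [x**k * f'(x)] and check <= 0."""
    expr = (-1)**(n + k) * sp.diff(x**k * sp.diff(f, x), x, n - 1)
    g = sp.lambdify(x, sp.simplify(expr))
    return all(g(t) <= 1e-12 for t in samples)

samples = [0.01, 0.1, 0.5, 1.0, 2.0, 10.0, 100.0]
# Property (2.5) holds for every k in {1, ..., n-1} ...
print(all(condition_holds(n, k, samples)
          for n in (2, 3, 4) for k in range(1, n)))   # True
# ... but fails for k = 0.
print(condition_holds(2, 0, samples))                  # False
```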
Corollary 2.8
Assume that \(a,b\in\mathbb{R}_{+}^{n}\) are such that \(a\preceq b\) and \(a_{1}a_{2}\cdots a_{n}=b_{1}b_{2}\cdots b_{n} \). Then
and this inequality fails if the constraint \(a_{1}a_{2}\cdots a_{n}=b_{1}b_{2}\cdots b_{n}\) is replaced by the weaker one \(a_{1}a_{2}\cdots a_{n}\leq b_{1}b_{2}\cdots b_{n} \).
In order to see that the weaker condition is not sufficient for the inequality to hold, consider the case
Then \(a \preceq b\) and \(a_{1}a_{2}\cdots a_{n}\leq b_{1}b_{2}\cdots b_{n}\), but
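A concrete instance of this failure (our own illustrative choice, not necessarily the case displayed above; assuming numpy) is \(a=(\frac{1}{2},\frac{1}{2},\frac{1}{2})\), \(b=(1,1,1)\): one checks that \(a\preceq b\) (e.g. along a slightly perturbed diagonal path), every \(E_{k}\) increases and the product only weakly, yet \(\sum\log^{2}a_{i}>\sum\log^{2}b_{i}\):

```python
import numpy as np

# Hypothetical illustrative choice (ours): a_i = 1/2, b_i = 1, n = 3.
n = 3
a = np.full(n, 0.5)
b = np.ones(n)

esym = lambda y: np.poly(-y)[1:]          # [e_1, ..., e_n]

print(bool(np.all(esym(a) <= esym(b))))   # True: every e_k increases
print(bool(a.prod() <= b.prod()))         # True: weaker product constraint holds
print(float(np.sum(np.log(a)**2)))        # 3*log(2)^2, about 1.44 ...
print(float(np.sum(np.log(b)**2)))        # ... yet this is 0
```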
Remark 2.9
Corollary 2.8 is a weaker statement than Conjecture 1.2 since we assume that \(a\preceq b\). If Conjecture 2.5 is true, then Conjecture 1.2 follows.
Example 2.10
The function \(f(x)=x^{p}\) (\(x>0\)) with \(p\in(0,1)\) satisfies property (2.5) for the set \(S=\{0,1,\ldots,n-1\}\). Indeed, for each \(n\geq2\) and \(0\leq k\leq n-1\), we have
The above product is not greater than 0, because among the factors \(k+p-1,k+p-2,\ldots,k+p-(n-1)\) there are exactly \(n-1-k\) negative ones.
Similarly, the function \(f(x)=x^{p}\) for \(p\in(-1,0)\) satisfies property (2.5) for the set \(S=\{1,2,\ldots,n-1\}\), because \(p<0\) and among the factors \(k+p-1,k+p-2,\ldots,k+p-(n-1)\) there are exactly \(n-k\) negative ones. On the other hand, property (2.5) is not true for \(k=0\).
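As in Example 2.7, these sign counts can be spot-checked symbolically (our own sketch, assuming sympy; `cond` is a hypothetical helper):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

def cond(p, n, k, samples=(0.1, 0.5, 1.0, 3.0, 25.0)):
    """Sample (-1)**(n+k) * (x**k * f'(x))**(n-1) for f(x) = x**p; check <= 0."""
    expr = (-1)**(n + k) * sp.diff(x**k * sp.diff(x**p, x), x, n - 1)
    g = sp.lambdify(x, expr)
    return all(g(t) <= 1e-12 for t in samples)

# p in (0,1): property (2.5) holds for every k, including k = 0.
print(all(cond(sp.Rational(1, 2), n, k)
          for n in (2, 3, 4) for k in range(n)))       # True
# p in (-1,0): it holds for k >= 1 but fails for k = 0.
print(all(cond(sp.Rational(-1, 2), n, k)
          for n in (2, 3, 4) for k in range(1, n)))    # True
print(cond(sp.Rational(-1, 2), 3, 0))                  # False
```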
Thus, as above, we have the following.
Corollary 2.11
Assume that \(a,b\in(0,\infty)^{n}\) are such that \(a\preceq b\) and \(a_{1}a_{2}\cdots a_{n}=b_{1}b_{2}\cdots b_{n} \). If \(p\in(-1,1)\), then
This inequality fails for \(-1< p<0\) (but remains true for \(0< p<1\)) if the constraint \(a_{1}a_{2}\cdots a_{n}=b_{1}b_{2}\cdots b_{n}\) is replaced by the weaker one \(a_{1}a_{2}\cdots a_{n}\leq b_{1}b_{2}\cdots b_{n} \).
Proof of Theorem 2.3
If S is empty, then \(E_{k}(a)=E_{k}(b)\) for all \(k\in\{0,1,\ldots,n-1\} \) and hence \(a=b\), which immediately implies the inequality. We therefore assume that S is nonempty.
Let \(y:[0,1]\to\Delta_{n}\) be the curve connecting points a and b as in Definition 2.2. Consider the function
where \(A_{k}(t)=E_{k}(y(t))-E_{k}(a)\) is a nondecreasing mapping. Our goal is to show that the function
is nondecreasing on \([0,1]\), i.e. we show that \(\eta'(t)\geq 0\) a.e. on \((0,1)\).
To this end, fix \(i\in\{1,2,\ldots,n\}\). Since \(p(t,y_{i}(t))=0\) for all \(t\in(0,1)\), we obtain
for all \(t\in(0,1)\) and therefore
which gives
This equality holds if \(y_{i}(t)\neq y_{j}(t)\) for \(i\neq j\), which is true for all but countably many values of \(t\in(0,1)\). For those values of t we get
Fix \(t\in(0,1)\) such that \(y_{i}(t)\neq y_{j}(t)\) for \(i\neq j\) and write \(y_{i}=y_{i}(t)\) for simplicity. Since \(A'_{k}(t)\geq0\), we will be done if we show that
To this end, consider the polynomial
The degree of g equals \(n-1\) and the coefficient of \(x^{n-1}\) is equal to \(\widehat{D}\). Moreover,
Therefore the function \(h(x)=g(x)+(-1)^{n+k}x^{k}f'(x)\) has n different roots \(y_{1},y_{2},\ldots,y_{n}\) in the interval I. It follows that the function
has a root in the interval I, and since \((-1)^{n+k}(x^{k}f'(x))^{(n-1)}\leq0\) for all \(x\in I\), it follows that \(\widehat{D}\geq0\), which completes the proof of Theorem 2.3. □
Proof of Theorem 2.4
Suppose, to the contrary, that \((-1)^{k+n}(x^{k}f'(x))^{(n-1)}>0\) for some \(x\in I\) and some \(k\in S\). Then \((-1)^{k+n}(x^{k}f'(x))^{(n-1)}>0\) holds for all x belonging to some interval J contained in I. Choose the numbers \(a_{1}< a_{2}<\cdots<a_{n}\) from J and consider
Then for all sufficiently small t (\(0< t<\varepsilon\)), there exist different numbers \(y_{i}(t)\) belonging to J, such that
Then
and since \(t>0\), we see that a and \(b=y(t)\) satisfy (2.8). We will be done if we show that
We proceed in the same way as in the proof of Theorem 2.3. We define
and this time we want to show that \(\eta'(t)<0\) for \(0< t<\varepsilon\).
By the inverse mapping theorem (see the proof of Proposition 3.4 below for a more detailed explanation), \(y\in C^{1}(0,\varepsilon)\) and therefore
Now, as before, write \(y_{i}=y_{i}(t)\) for simplicity. Our goal is therefore to prove that
Consider the polynomial
The degree of g equals \(n-1\) and the coefficient of \(x^{n-1}\) is equal to \(\widehat{D}\). Moreover, the function \(h(x)=g(x)+(-1)^{n+k}x^{k}f'(x)\) has n different roots \(y_{1},y_{2},\ldots,y_{n}\) in the interval J. It follows that the function
has a root in the interval J. Since \((-1)^{n+k}(x^{k}f'(x))^{(n-1)}>0\) for all \(x\in J\), it follows that \(\widehat{D}<0\), which completes the proof of Theorem 2.4. □
Construction of the connecting curve
In this section we prove that condition (2.4) implies \(a\preceq b\) if \(2\leq n\leq4\), \(I=(0,\infty)\) and \(S\subseteq\{1,2,\ldots,n-1\}\). However, we start with a construction of the desired curve for a general interval I, integer \(n\geq2\) and set \(S\subseteq\{0,1,\ldots,n-1\}\).
For \(a,b\in\Delta_{n}\), we say that \(a< b\) if \(a\neq b\) and \(E_{k}(a)\leq E_{k}(b)\) for all \(k=0,1,\ldots,n-1\). We say that \(a\leq b\) if \(a< b\) or \(a=b\).
Definition 3.1
For \(a< b\) denote by \(\mathcal{C}(a,b)\) the set of all piecewise differentiable (i.e. continuous and differentiable in all but at most countably many points) curves y in \(\Delta_{n}\) satisfying:

(a)
the curve \(y(t)\) starts at a (i.e. \(y(0)=a\), if the curve \(y(t)\) is parametrised by the interval \([0,\varepsilon]\));

(b)
\(y(t)\in\operatorname{int}(\Delta_{n})\) for all but at most countably many values t;

(c)
the mappings \(E_{k}(y(t))\) are nondecreasing in t and \(E_{k}(y(t))\leq E_{k}(b)\) for all t and each \(k=0,1,\ldots,n-1\).
Note that a curve in \(\mathcal{C}(a,b)\) does not necessarily end at the point b.
Proposition 3.2
Let \(n\geq2\) be a positive integer and let S be a nonempty subset of \(\{0,1,\ldots,n-1\}\). Let, moreover, \(a,b\in\Delta_{n}\) be such that (2.4) holds. Furthermore, suppose that for all \(c\in\Delta_{n}\) with \(a\leq c< b\) the set \(\mathcal{C}(c,b)\) is nonempty. Then \(a\preceq b\).
Proof
Each element (curve) of \(\mathcal{C}(a,b)\) is a (closed) subset of \(\Delta_{n}\). We equip the set \(\mathcal{C}(a,b)\) with the inclusion relation ⊆, obtaining a nonempty partially ordered set \((\mathcal{C}(a,b),\subseteq)\). We are going to show that each chain \(\{y_{i}\}_{i\in\mathcal{I}}\) has an upper bound in \(\mathcal{C}(a,b)\).
To achieve this, consider the curve
i.e. the concatenation of the curves \(y_{i}\). Then obviously \(y_{0}\) satisfies conditions (a) and (c) of Definition 3.1. To prove (b) assume that \(y_{0}\) is parametrised on \([0,1]\). Then for each positive integer k the curve \(y_{k}\), defined as the restriction of \(y_{0}\) to the interval \([0,1-{1\over k}]\), is contained in some curve \(y_{i}\in\mathcal{C}(a,b)\) of the given chain \(\{y_{i}\} \). Therefore \(y_{k}(t)\) is piecewise differentiable and satisfies condition (b) for each positive integer k. Moreover,
Hence \(y_{0}\) is piecewise differentiable and satisfies (b) as well.
Now, by the Kuratowski-Zorn lemma, there exists a maximal element y in \((\mathcal{C}(a,b),\subseteq)\). We show that y is the desired curve connecting the points a and b, which will imply that \(a\preceq b\).
To this end, it is enough to show that, if the curve y is parametrised on \([0,1]\), then \(y(1)=b\). Suppose, to the contrary, that \(y(1)=c\neq b\). Then \(a\leq c< b\), and hence the set \(\mathcal{C}(c,b)\) is nonempty. Thus the curve y can be extended beyond the point c, which contradicts the fact that y is a maximal element in \(\mathcal{C}(a,b)\). This completes the proof of Proposition 3.2. □
From now on assume that \(I=(0,\infty)\) and S is a nonempty subset of \(\{1,2,\ldots,n-1\}\).
In order to prove that (2.4) implies \(a\preceq b\), it suffices to show that the sets \(\mathcal{C}(a,b)\) for \(a,b\in\Delta_{n}\) with \(a< b\) are nonempty. This is implied by the following conjecture, which we will prove later for \(n\leq4\).
Conjecture 3.3
Let \(n\geq2\) be an integer and \(a\in\Delta_{n}\). Let S be a nonempty subset of \(\{1,2,\ldots,n-1\}\) with the property that there exist \(A_{k}>0\) for \(k\in S\) such that all the roots of the polynomial
are real (and hence negative). Then there exist mappings \(B_{k}:[0,\varepsilon ]\to\mathbb{R}\) (\(k\in S\)) continuous on \([0,\varepsilon]\), differentiable on \((0,\varepsilon)\), and nondecreasing with \(B_{k}(0)=0\) such that \(\sum_{k\in S} B_{k}(t)\) is increasing on \([0,\varepsilon]\) and for all sufficiently small values of \(t>0\) the polynomial
has n distinct real (and hence negative) roots.
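For a first numerical impression of the conjecture (our own illustration, assuming numpy), take \(n=3\) and \(a=(1,2,2)\): raising the coefficient of \(x^{2}\) (i.e. \(E_{2}\)) splits the double root into two nearby real ones, while raising only the coefficient of \(x\) (i.e. \(E_{1}\)) pushes it off the real axis:

```python
import numpy as np

def real_roots(coeffs, tol=1e-8):
    r = np.roots(coeffs)
    return np.sort(r[np.abs(r.imag) < tol].real)

a = np.array([1.0, 2.0, 2.0])      # a_1 < a_2 = a_3: the delicate case
base = np.poly(-a)                  # coefficients of (x+1)(x+2)^2
t = 1e-3

# Perturbing E_2 (the x^2 coefficient): three distinct real roots appear.
p2 = base + t * np.array([0, 1, 0, 0])
print(len(real_roots(p2)))          # 3

# Perturbing only E_1 (the x coefficient): a single real root remains.
p1 = base + t * np.array([0, 0, 1, 0])
print(len(real_roots(p1)))          # 1
```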
Now we show how Conjecture 3.3 implies that the sets \(\mathcal{C}(a,b)\) are nonempty.
Proposition 3.4
Let n and S be such that the conjecture holds. Let, moreover, \(a,b\in\Delta_{n}\) be such that (2.4) holds. Then the set \(\mathcal{C}(a,b)\) is nonempty.
Proof
Consider the polynomials
Then
where \(A_{k}>0\) for all \(k\in S\). According to the conjecture, there exist nondecreasing mappings \(B_{k}:[0,\varepsilon]\to\mathbb{R}\), continuous on \([0,\varepsilon]\) and differentiable on \((0,\varepsilon)\), with \(B_{k}(0)=0\), such that \(\sum_{k\in S}B_{k}(t)\) is increasing on \([0,\varepsilon]\) and for all \(t\in(0,\varepsilon)\) the polynomial
has n distinct real (and hence negative) roots \(y_{n}(t)<y_{n-1}(t)<\cdots<y_{1}(t)<0\). We show that \(y(t)=(y_{1}(t),y_{2}(t),\ldots,y_{n}(t))\) defines a differentiable curve (parametrised on \([0,\varepsilon]\)) that belongs to \(\mathcal{C}(a,b)\), provided ε is chosen in such a way that \(B_{k}(\varepsilon)\leq A_{k}\) for \(k\in S\).
Consider the mapping \(\Psi :\overline{\Delta_{n}}\to\Psi (\overline{\Delta_{n}})\) given by
Then it follows from Remark 1.3 that the mapping Ψ is injective, hence Ψ is a continuous bijection defined on a closed subset of \(\mathbb{R}^{n}\). Therefore the restriction \(\Psi_{U}\) of Ψ to a neighbourhood U of a is continuously invertible and thus
(here we put \(B_{k}(t)=0\) for \(k\notin S\)) is a curve starting at a; note that \(\Psi(a)+(B_{0}(t),B_{1}(t), \ldots ,B_{n-1}(t))\) is contained in \(\Psi(U)\) for sufficiently small ε. Moreover, \(y(t)\in\Delta_{n}\). Hence condition (a) is satisfied. Since \(y(t)\in\operatorname{int}(\Delta_{n})\) for all \(t\in(0,\varepsilon)\), condition (b) holds. It is also clear that (c) is satisfied, since \(E_{k}(y(t))=E_{k}(a)+B_{k}(t)\leq E_{k}(a)+A_{k}=E_{k}(b)\) for all \(k\in\{0,1,\ldots,n-1\}\).
It remains to prove that \(y(t)\) is differentiable on \((0,\varepsilon)\). This, however, is a consequence of the inverse mapping theorem, if we show that
To this end, let \(V(y)\) be the \(n\times n\) Vandermonde-type matrix given by \(V_{ij}(y)=(y_{i})^{n-j}\) (\(1\leq i,j\leq n\)). This matrix is obtained from the standard Vandermonde matrix
by reversing the order of columns of W.
Since [8]
where \(y^{(k)} = (y_{1},\ldots,y_{k-1},y_{k+1},\ldots,y_{n})\) is y with its kth component removed, it follows from the general formula
that
and thus
It is well known that
Therefore we obtain
which completes the proof of Proposition 3.4. □
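The Vandermonde determinant identity used in the proof above is easy to confirm numerically (our own sketch, assuming numpy; the comparison is up to sign, since reversing the column order only flips the sign):

```python
import numpy as np
from itertools import combinations

y = np.array([0.7, 1.9, 3.2, 5.5])

# V_ij = y_i^(n-j): np.vander uses decreasing powers by default,
# i.e. exactly the column-reversed Vandermonde matrix V from the proof.
V = np.vander(y)
detV = np.linalg.det(V)

prod = np.prod([yi - yj for yi, yj in combinations(y, 2)])
print(np.isclose(abs(detV), abs(prod)))  # True
```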
Lemma 3.5
Assume that \(n\geq3\) is odd and let \(0< a_{1}\leq a_{2}\leq\cdots\leq a_{n}\). Let, moreover, \(A_{k}\geq0\) for \(k=1,2,\ldots,(n-1)/2\) with at least one \(A_{k}\) not equal to 0. Consider the polynomials
Then the polynomial P has exactly one root in the interval \((-a_{1},0)\) and at most two roots in the interval \((-a_{n},-a_{n-1})\). Moreover, the polynomial Q has exactly one root in the interval \((-\infty,-a_{n})\) and at most two roots in the interval \((-a_{2},-a_{1})\).
Proof
That P has exactly one root in \((-a_{1},0)\) follows immediately from the observation that \(P(-a_{1})<0\), \(P(0)>0\) and \(P'(x)>0\) on \((-a_{1},0)\).
Now we show that Q has exactly one root in \((-\infty,-a_{n})\).
Dividing the equation \(Q(x)=0\) by \(x^{n}a_{1}a_{2}\cdots a_{n}\) and substituting \(z=1/x\) and \(b_{i}=1/a_{i}\) yield the equation \(P_{0}(z)=0\), where
for some nonnegative numbers \(B_{k}\), not all equal to 0. We already know that \(P_{0}\) has exactly one root in the interval \((-b_{n},0)\), so it follows that Q has exactly one root in the interval \((-\infty,-a_{n})\).
Now we prove that Q has at most two roots in the interval \((-a_{2},-a_{1})\). To the contrary, suppose that Q has at least three roots in \((-a_{2},-a_{1})\). Since \(Q(-a_{2})>0\) and \(Q(-a_{1})>0\), it follows that Q has an even number of roots in the interval \((-a_{2},-a_{1})\), hence at least four.
Let \(0>-c_{1}\geq-c_{2}\geq\cdots\geq-c_{n-1}\) be the roots of \(p'(x)=0\), where
Then \(a_{1}< c_{1}<a_{2}\). The polynomial \(Q(x)\) is decreasing on the interval \([-a_{2},-c_{1}]\), so it has at most one root in this interval. Therefore the polynomial Q has at least three roots in the interval \((-c_{1},-a_{1})\), and consequently the equation \(Q''(x)=0\) has a root in \((-c_{1},-a_{1})\). But \(Q''(x)>0\) for all \(x>-c_{1}\), a contradiction. Hence Q has at most two roots in \((-a_{2},-a_{1})\).
Finally, to prove that P has at most two roots in the interval \((-a_{n},-a_{n-1})\), divide the equation \(P(x)=0\) by \(x^{n}a_{1}a_{2}\cdots a_{n}\) and substitute \(z=1/x\) and \(b_{i}=1/a_{i}\). This reduces to the equation \(Q_{0}(z)=0\), where
for some nonnegative numbers \(B_{k}\), not all equal to 0. We already know that \(Q_{0}\) has at most two roots in the interval \((-b_{n-1},-b_{n})\), so it follows that P has at most two roots in the interval \((-a_{n},-a_{n-1})\). This completes the proof of Lemma 3.5. □
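Assuming, for odd n, that the displayed polynomials take the form \(P(x)=\prod_{i}(x+a_{i})+\sum_{k}A_{k}x^{2k-1}\) and \(Q(x)=\prod_{i}(x+a_{i})+\sum_{k}A_{k}x^{2k}\) (this reconstruction is our assumption, inferred from how the lemma is applied below), the stated root locations can be checked numerically for a sample instance (assuming numpy):

```python
import numpy as np

def real_roots(coeffs, tol=1e-8):
    r = np.roots(coeffs)
    return np.sort(r[np.abs(r.imag) < tol].real)

a = np.array([1.0, 2.0, 3.0])            # odd n = 3, take A_1 = 0.5
base = np.poly(-a)                        # (x+1)(x+2)(x+3)
P = base + 0.5 * np.array([0, 0, 1, 0])   # assumed odd-power term  A_1 * x
Q = base + 0.5 * np.array([0, 1, 0, 0])   # assumed even-power term A_1 * x^2

rP, rQ = real_roots(P), real_roots(Q)
print(len(rP), bool(-a[0] < rP[0] < 0))   # exactly one real root, in (-a_1, 0)
print(len(rQ), bool(rQ[0] < -a[2]))       # exactly one real root, in (-inf, -a_n)
```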
The same proof yields an analogous result for even values of n.
Lemma 3.6
Assume that \(n\geq2\) is even and let \(0< a_{1}\leq a_{2}\leq\cdots\leq a_{n}\). Let, moreover, \(A_{k}\geq0\) for \(k=1,2,\ldots,n/2\) with at least one \(A_{k}\) not equal to 0. Consider the polynomials
Then the polynomial P has exactly one root in each of the intervals \((-\infty,-a_{n})\) and \((-a_{1},0)\), and Q has at most two roots in each of the intervals \((-a_{n},-a_{n-1})\) and \((-a_{2},-a_{1})\).
Proof
The same proof as that for Lemma 3.5 can be used. □
Now we turn to the proof of Conjecture 3.3 for \(2\leq n\leq 4\) and an arbitrary nonempty set \(S\subseteq\{1,2,\ldots,n-1\}\).
We first make some useful general remarks.
Let \(I(a)=\{i\in\{1,2,\ldots,n-1\} : a_{i}=a_{i+1}\}\). If \(I(a)\) is empty, then the conjecture holds. Indeed, if \(k\in S\), then all the roots of the polynomial
are, for all sufficiently small \(t>0\), real and distinct.
On the other hand, if \(I(a)=\{1,2,\ldots,n-1\}\), then only the set \(S=\{1,2,\ldots,n-1\}\) can satisfy the assumptions of the conjecture. Indeed, suppose that \(l\notin S\) and let \(-b_{1}\geq-b_{2}\geq\cdots\geq-b_{n}\) be the roots of
Then by the inequality of arithmetic and geometric means, we obtain
and hence \(b_{1}=b_{2}=\cdots=b_{n}\). Since \(E_{0}(a)=E_{0}(b)\), it follows that \(a=b\), i.e. \(A_{k}=0\) for all \(k\in S\), a contradiction.
Let I be a nonempty subset of \(\{1,2,\ldots,n-1\}\). We observe that the conjecture is true for a set S and all \(a\in\Delta_{n}\) with \(I(a)=I\), if it is true for the set \(T=\{n-k : k\in S\}\) and all \(b\in\Delta_{n}\) with \(I(b)=\{n-i : i\in I\}\). Indeed, if all the roots of the polynomial
are real, then substituting \(x=1/z\) and \(a_{i}=1/b_{i}\), we infer that all the roots of the polynomial
are real. Hence there exist mappings \(C_{l}(t)\) with \(C_{l}(0)=0\), continuous on \([0,\varepsilon]\), differentiable on \((0,\varepsilon)\) and nondecreasing such that the polynomial
has n distinct real roots. Substituting \(z=1/x\) and \(b_{i}=1/a_{i}\), we infer that the polynomial
has n distinct real roots.
For \(n=2\) the only possibility for the set S is \(\{1\}\) and it is enough to notice that the polynomial \((x+a_{1})(x+a_{2})+t x\) has two distinct real roots for any \(t>0\).
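For \(n=2\) the claim is a one-line discriminant computation:

```latex
(x+a_1)(x+a_2)+tx \;=\; x^2+(a_1+a_2+t)\,x+a_1a_2,
\qquad
\Delta(t) \;=\; (a_1+a_2+t)^2-4a_1a_2 \;=\; (a_1-a_2)^2+2t(a_1+a_2)+t^2 \;>\;0
\quad\text{for } t>0,
```

so the two roots are real and distinct, while \(E_{0}=a_{1}a_{2}\) is unchanged and \(E_{1}=a_{1}+a_{2}+t\) increases.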
Assume now \(n=3\). Then, in view of the above remarks, we have to consider two cases: (1) \(a_{1}< a_{2}=a_{3}\); (2) \(a_{1}=a_{2}=a_{3}\).
(1) If \(2\notin S\), then the condition of Conjecture 3.3 cannot be satisfied since, for \(A_{1}>0\), according to Lemma 3.5 the polynomial
has only one real root in the interval \((-a_{1},0)\) and obviously no roots in \(\mathbb{R}\setminus(-a_{1},0)\). Thus P has only one real root for all \(A_{1}>0\). We can therefore assume \(2\in S\), and for all sufficiently small \(t>0\), the polynomial
has three distinct real roots.
(2) According to the above remarks, \(S=\{1,2\}\). Then the polynomial \((x+a_{1})^{3}+t a_{1}x+t x^{2}\) has three distinct real roots for all sufficiently small \(t>0\).
Assume \(n=4\). In this case we have five possibilities: (1) \(a_{1}=a_{2}< a_{3}<a_{4}\); (2) \(a_{1}< a_{2}=a_{3}<a_{4}\); (3) \(a_{1}< a_{2}=a_{3}=a_{4}\); (4) \(a_{1}=a_{2}< a_{3}=a_{4}\); (5) \(a_{1}=a_{2}=a_{3}=a_{4}\).
(1) We note that \(S\neq\{2\}\), since, by Lemma 3.6, the polynomial
has at most two real roots in the interval \((-a_{4},-a_{3})\) and obviously no roots in \(\mathbb{R}\setminus(-a_{4},-a_{3})\). Thus Q has at most two real roots. Therefore S contains an odd integer k. Then for all sufficiently small \(t>0\), the polynomial \((x+a_{1})^{2}(x+a_{3})(x+a_{4})+t x^{k}\) has four distinct real roots.
(2) Note that \(2\in S\), since by Lemma 3.6, the polynomial
has at most two real roots. Then for all sufficiently small \(t>0\), the polynomial
has four distinct real roots.
(3) We observe that \(\{1,2\}\subset S\) or \(\{2,3\}\subset S\), since by Lemma 3.6, each of the polynomials
as well as
has at most two real roots. Moreover, we prove that \(S\neq\{1,2\}\).
Suppose that the polynomial \(Q(x)=(x+a_{1})(x+a_{2})^{3}+A_{1}x+A_{2}x^{2}\) has four real roots. Let \(Q_{1}(x)=(x+a_{1})(x+a_{2})^{3}\) and \(Q_{2}(x)=A_{1}x+A_{2}x^{2}\). Let \(-c\neq-a_{2}\) be the root of the polynomial \(Q_{1}'(x)\) and let \(-d\) be the root of \(Q_{2}'(x)\).
If \(d< c\), then Q is decreasing on \((-\infty,-c]\), so Q has at most one root in this interval. Therefore Q has at least three roots in the interval \((-c,0)\). Thus \(Q''(x)\) has a root in the interval \((-c,0)\), which is impossible, since \(Q''(x)>0\) on \((-c,0)\).
If \(a_{2}\geq d\geq c\), then Q is increasing on the interval \([-c,0)\) and decreasing on the interval \((-\infty,-d]\), so Q must have at least two roots in the interval \((-d,-c)\). But \(Q(x)<0\) on this interval.
Finally, if \(d>a_{2}\), then Q may only have roots in the union \((-\infty,-a_{2})\cup(-a_{1},0)\). But Q is increasing on \((-a_{1},0)\), so Q has three roots in \((-\infty,-a_{2})\). This, however, is impossible, since \(Q''(x)>0\) for \(x\in(-\infty,-a_{2})\). Thus \(\{2,3\}\subseteq S\) and the polynomial
has, for all sufficiently small \(t>0\), four distinct roots.
(4) Since the polynomial \((x+a_{1})^{2}(x+a_{3})^{2}+A_{2}x^{2}\) has no real roots, \(1\in S\) or \(3\in S\). Then, for \(k=1\) or \(k=3\), the polynomial \((x+a_{1})^{2}(x+a_{3})^{2}+t x^{k}\) has, for all sufficiently small \(t>0\), four distinct real roots.
(5) In view of the above remarks, \(S=\{1,2,3\}\). Consider
Then for all sufficiently small \(t>0\), \(a_{1}^{2}-t^{2}>0\), and the polynomial r has four distinct real roots, because
Thus we have proved the following.
Corollary 3.7
Conjecture 3.3 is true if \(2\leq n\leq4\) and S is an arbitrary nonempty subset of \(\{1,2,\ldots,n-1\}\).
This implies that the sum of squared logarithms inequality (Conjecture 1.2) holds also for \(n=4\).
Corollary 3.8
(Sum of squared logarithms inequality for \(n=4\))
Let \(a_{1},a_{2},a_{3},a_{4},b_{1},b_{2}, b_{3},b_{4}>0\) be given positive numbers such that
Then
Proof
Use Corollary 3.7 and observe that S may be an arbitrary subset of \(\{1,2,3\}\). □
Corollary 3.9
Let \(n\geq2\) be an integer and let T be an arbitrary subset of \(\{1,2,\ldots,n-1\}\). Assume that Conjecture 3.3 holds for n and for any nonempty subset S of T. Let, moreover, \(f\in C^{n}(0,\infty)\). Then the inequality
holds for all \(a,b\in\Delta_{n}\) satisfying
if and only if
Proof
Assume first (3.9) holds and let \(a,b\in\Delta_{n}\) satisfy (3.8). Consider any \(c\in\Delta_{n}\) with \(a\leq c< b\). Then the pair c, b satisfies condition (2.4) for some nonempty subset S of T. Therefore by Proposition 3.4, the set \(\mathcal{C}(c,b)\) is nonempty and hence by Proposition 3.2, \(a\preceq b\). Now Theorem 2.3 implies that inequality (2.6) holds.
Conversely, if (2.6) holds for all \(a,b\in\Delta_{n}\) satisfying (3.8), then (2.6) also holds for all \(a,b\in\Delta_{n}\) satisfying condition (2.4) with \(S=T\). Thus Theorem 2.4 implies (3.9). This completes the proof. □
Outlook
Our result generalises and extends previous results on the sum of squared logarithms inequality. Indeed, compared to the proof in [1] our development here views the problem from a different angle in that it is not the logarithm function that defines the problem, but a certain monotonicity property in the geometry of polynomials, explicitly stated in Conjecture 3.3.
If one tries to adapt the above proof of Conjecture 3.3 for \(n\leq4\) to the case \(n\geq5\), one has to deal with approximately \(2^{n}\) cases considered separately. Therefore it is clear that the extension to natural numbers n beyond \(n=6\), say, is out of reach with such a method. Instead, a general argument should be found to prove or disprove Conjecture 3.3 for general n. Furthermore, it might be worthwhile to develop a better understanding of the differential inequality condition \((-1)^{n+k}(x^{k} f'(x))^{(n-1)}\leq0\).
References
 1.
Bîrsan, M, Neff, P, Lankeit, J: Sum of squared logarithms – an inequality relating positive definite matrices and their matrix logarithm. J. Inequal. Appl. 2013, 168 (2013). doi:10.1186/1029-242X-2013-168
 2.
Neff, P, Nakatsukasa, Y, Fischle, A: A logarithmic minimization property of the unitary polar factor in the spectral norm and the Frobenius matrix norm. SIAM J. Matrix Anal. Appl. 35, 1132-1154 (2014). arXiv:1302.3235v4
 3.
Lankeit, J, Neff, P, Nakatsukasa, Y: The minimization of matrix logarithms – on a fundamental property of the unitary polar factor. Linear Algebra Appl. 449, 28-42 (2014)
 4.
Neff, P, Ghiba, ID, Lankeit, J, Martin, R, Steigmann, D: The exponentiated Hencky-logarithmic strain energy. Part II: coercivity, planar polyconvexity and existence of minimizers. Z. Angew. Math. Phys. (2014, to appear). arXiv:1408.4430v1
 5.
Neff, P, Lankeit, J, Ghiba, ID: The exponentiated Hencky-logarithmic strain energy. Part I: constitutive issues and rank-one convexity. J. Elast. (2014, to appear). arXiv:1403.3843
 6.
Neff, P, Lankeit, J, Madeo, A: On Grioli’s minimum property and its relation to Cauchy’s polar decomposition. Int. J. Eng. Sci. 80, 209-217 (2014)
 7.
Neff, P, Eidel, B, Osterbrink, F, Martin, R: A Riemannian approach to strain measures in nonlinear elasticity. C. R. Acad. Sci., Méc. 342(4), 254-257 (2014)
 8.
Dannan, FM, Neff, P, Thiel, C: On the sum of squared logarithms inequality and related inequalities (2015, submitted). arXiv:1411.1290
Acknowledgements
We thank Johannes Lankeit (Universität Paderborn) as well as Robert Martin (Universität Duisburg-Essen) for their help in revising this paper.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
Both authors contributed fully to all parts of this paper. Both authors read and approved the final manuscript.
Rights and permissions
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
Cite this article
Pompe, W., Neff, P. On the generalised sum of squared logarithms inequality. J Inequal Appl 2015, 101 (2015). https://doi.org/10.1186/s13660-015-0623-6
MSC
 26D05
 26D07
Keywords
 elementary symmetric polynomials
 logarithm
 matrix logarithm
 inequality
 characteristic polynomial
 invariants
 positive definite matrices
 inequalities