
Tracial and majorisation Heinz mean-type inequalities for matrices

Abstract

The Heinz mean of two nonnegative real numbers a, b with parameter \(0\le\nu\le1\) is \(H_{\nu}(a , b)=\frac{a^{\nu}b^{1-\nu} +a^{1-\nu}b^{\nu}}{2}\). In this paper we present tracial Heinz mean-type inequalities for positive definite matrices and apply them to prove a majorisation version of the Heinz mean inequality.

1 Introduction

The arithmetic-geometric mean inequality for two positive real numbers a, b is \(\sqrt{ab}\le\frac{a+b}{2}\), where equality holds if and only if \(a=b\). Heinz means, introduced in [1], interpolate in a certain way between the arithmetic and geometric means. For all nonnegative real numbers a, b and \(0\le\nu\le1\), the Heinz mean is defined as

$$H_{\nu}(a , b)=\frac{a^{\nu}b^{1-\nu}+a^{1-\nu}b^{\nu}}{2} . $$

The function \(H_{\nu}\) is symmetric about the point \(\nu=\frac{1}{2}\). Note that \(H_{0}(a , b)=H_{1}(a , b)=\frac{a+b}{2}\), \(H_{\frac{1}{2}}(a , b)=\sqrt{ab}\), and

$$ H_{\frac{1}{2}}(a , b)\le H_{\nu}(a , b)\le H_{1}(a , b) $$
(1)

for every \(0\le\nu\le1\), and equality holds if and only if \(a=b\).
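As a quick numerical illustration of the scalar inequality (1) (a minimal sketch, not part of the original argument; the helper `heinz` below is an ad hoc name), one can sample random nonnegative a, b and ν with NumPy:

```python
import numpy as np

def heinz(a, b, nu):
    """Scalar Heinz mean H_nu(a, b)."""
    return (a**nu * b**(1 - nu) + a**(1 - nu) * b**nu) / 2

rng = np.random.default_rng(0)
for _ in range(1000):
    a, b = rng.uniform(0, 10, size=2)
    nu = rng.uniform(0, 1)
    # H_{1/2} = geometric mean <= H_nu <= H_1 = arithmetic mean.
    assert np.sqrt(a * b) - 1e-12 <= heinz(a, b, nu) <= (a + b) / 2 + 1e-12
```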

Let \(M_{n}(\mathbb {C})\) denote the space of all \(n\times n \) matrices. We shall denote the eigenvalues and singular values of a matrix \(A\in M_{n}(\mathbb {C})\) by \(\lambda_{j}(A)\) and \(\sigma_{j}(A)\), respectively. We assume that singular values are sorted in non-increasing order. For two Hermitian matrices \(A,B\in M_{n}(\mathbb {C})\), \(A\ge B\) means that \(A-B\) is positive semi-definite. In particular, \(A\ge0\) means A is positive semi-definite. Let us write \(A>0\) when A is positive definite. \(|A|\) shall denote the modulus \(|A|=(A^{*}A)^{\frac{1}{2}}\) and \(\operatorname{tr}(A)=\sum_{j=1}^{n} \lambda_{j}(A)\).

The basic properties of singular values and the trace function, some of which are used to establish the matrix inequalities in this paper, are collected in the following theorems.

Theorem 1.1

Assume that \(X, Y \in M_{n}(\mathbb {C})\), \(A, B\in M_{n}(\mathbb {C})^{+}\), \(\alpha\in \mathbb {C}\), and \(j=1,2,\ldots, n\).

  1. (1)

    \(\sigma_{j}(X)= \sigma_{j}(X^{*})= \sigma_{j}(|X|)\) and \(\sigma_{j}(\alpha X)=|\alpha|\sigma_{j}(X)\).

  2. (2)

    If \(A\le B\), then \(\sigma_{j}(A) \le\sigma_{j}(B)\).

  3. (3)

    \(\sigma_{j}(A^{r}) = ( \sigma_{j}(A) )^{r}\), for every positive real number r.

  4. (4)

    \(\sigma_{j}(XY^{*}) = \sigma_{j}(YX^{*})\).

  5. (5)

    \(\sigma_{j}(XY) \le\|X\| \sigma_{j}(Y)\).

  6. (6)

    \(\sigma_{j}(YXY^{*}) \le \|Y\|^{2} \sigma_{j}(X)\).
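The properties in Theorem 1.1 are standard; the following minimal NumPy sketch (not from the paper; `sv` and `matrix_abs` are ad hoc helpers) illustrates parts (1), (4), and (5) on random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def sv(M):
    # Singular values in non-increasing order.
    return np.linalg.svd(M, compute_uv=False)

def matrix_abs(M):
    # |M| = (M^* M)^{1/2}, computed from the spectral decomposition of M^* M.
    w, v = np.linalg.eigh(M.conj().T @ M)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

# Part (1): sigma_j(X) = sigma_j(X^*) = sigma_j(|X|).
assert np.allclose(sv(X), sv(X.conj().T)) and np.allclose(sv(X), sv(matrix_abs(X)))
# Part (4): sigma_j(X Y^*) = sigma_j(Y X^*).
assert np.allclose(sv(X @ Y.conj().T), sv(Y @ X.conj().T))
# Part (5): sigma_j(X Y) <= ||X|| sigma_j(Y), with ||X|| the spectral norm.
assert np.all(sv(X @ Y) <= np.linalg.norm(X, 2) * sv(Y) + 1e-10)
```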

Theorem 1.2

Assume that \(X, Y \in M_{n}(\mathbb {C})\), \(\alpha\in \mathbb {C}\).

  1. (1)

    \(\operatorname{tr}(X+Y)=\operatorname{tr}(X)+\operatorname{tr}(Y)\).

  2. (2)

    \(\operatorname{tr}(XY)=\operatorname{tr}(YX)\).

  3. (3)

    For \(A\in M_{n}(\mathbb {C})^{+}\), \(\operatorname{tr}(A) \ge0\), and \(\operatorname{tr}(A)=0\) only if \(A=0\).

The absolute value for matrices does not satisfy \(|XY|=|X|\cdot|Y|\); however, a weaker version of this is the following:

If \(Y=U|Y|\) is the polar decomposition of Y, with unitary U, then

$$ \bigl|XY^{*}\bigr|=U\bigl|\bigl(|X|\cdot|Y|\bigr)\bigr| U^{*} $$
(2)

and

$$ \lambda_{j} \bigl(\bigl| XY^{*}\bigr| \bigr)= \sigma _{j}\bigl( |X|\cdot|Y|\bigr) . $$
(3)
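Identity (3) can be checked numerically; the sketch below (an illustration only, not from the paper; `matrix_abs` is an ad hoc helper) compares the eigenvalues of \(|XY^{*}|\), which for a positive semi-definite matrix coincide with its singular values, with the singular values of \(|X|\cdot|Y|\):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def matrix_abs(M):
    # |M| = (M^* M)^{1/2} via the spectral decomposition of M^* M.
    w, v = np.linalg.eigh(M.conj().T @ M)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

# lambda_j(|X Y^*|): |X Y^*| is psd, so its eigenvalues are the singular values of X Y^*.
lhs = np.linalg.svd(X @ Y.conj().T, compute_uv=False)
# sigma_j(|X| |Y|).
rhs = np.linalg.svd(matrix_abs(X) @ matrix_abs(Y), compute_uv=False)
assert np.allclose(lhs, rhs)
```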

The Young inequality is among the most important inequalities in matrix theory. We present here the following theorem from [2, 3].

Theorem 1.3

Let \(A, B\in M_{n}(\mathbb {C})\) be positive semi-definite. If \(p, q>1\) with \(\frac{1}{p}+\frac{1}{q}=1\), then

$$ \sigma_{j}(AB)\le\sigma_{j} \biggl( \frac{1}{p}A^{p}+\frac{1}{q}B^{q} \biggr)\quad \textit{for } j=1, 2,\ldots, n , $$
(4)

where equality holds if and only if \(A^{p}=B^{q}\).

Corollary 1.4

Let \(A, B\in M_{n}(\mathbb {C})\) be positive semi-definite. If \(p, q>1\) with \(\frac{1}{p}+\frac{1}{q}=1\), then

$$ \operatorname{tr}\bigl(|AB|\bigr)\le\frac{1}{p}\operatorname{tr}\bigl(A^{p} \bigr)+ \frac{1}{q}\operatorname{tr}\bigl(B^{q} \bigr) , $$
(5)

where equality holds if and only if \(A^{p}=B^{q}\).
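A minimal numerical sanity check of Corollary 1.4 (not part of the paper; `rand_psd` and `psd_power` are illustrative helpers), using the fact that \(\operatorname{tr}(|AB|)\) equals the sum of the singular values of AB:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5

def rand_psd(n):
    # Random positive semi-definite matrix G^* G.
    g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return g.conj().T @ g

def psd_power(M, t):
    # M^t for a Hermitian positive semi-definite M, via its spectral decomposition.
    w, v = np.linalg.eigh(M)
    return (v * np.clip(w, 0, None)**t) @ v.conj().T

A, B = rand_psd(n), rand_psd(n)
p = 1.7
q = p / (p - 1)                          # conjugate exponent, 1/p + 1/q = 1

# tr|AB| equals the sum of the singular values of AB.
lhs = np.linalg.svd(A @ B, compute_uv=False).sum()
rhs = np.trace(psd_power(A, p)).real / p + np.trace(psd_power(B, q)).real / q
assert lhs <= rhs + 1e-8
```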

Another interesting inequality is the following version of the triangle inequality for the matrix absolute value [1, 4].

Theorem 1.5

Let X and Y be \(n\times n\) matrices. Then there exist unitaries U, V such that

$$ |X+Y|\le U|X|U^{*}+V|Y|V^{*} . $$
(6)

We are interested in finding which versions of inequality (1) hold for positive semi-definite matrices A, B. For example, do we have

$$ \sqrt{|AB|} \le\bigl|H_{\nu}(A , B)\bigr|\le H_{1}(A , B) ? $$
(7)

Or do we have

$$ \sqrt{\sigma_{j}(AB)} \le\sigma_{j} \bigl(H_{\nu}(A , B) \bigr)\le\lambda_{j}\bigl(H_{1}(A , B)\bigr) ? $$
(8)

Here

$$H_{\nu}(A , B)=\frac{A^{\nu}B^{1-\nu}+A^{1-\nu}B^{\nu}}{2} . $$

Bhatia and Davis [5] extended inequality (1) to the matrix case; they showed that it holds for positive semi-definite matrices in the following form:

$$ \bigl|\!\bigl|\!\bigl|A^{\frac{1}{2}}B^{\frac{1}{2}}\bigr|\!\bigr|\!\bigr|\le\bigl|\!\bigl|\!\bigl|H_{\nu}(A , B)\bigr|\!\bigr|\!\bigr|\le\biggl|\!\biggl|\!\biggl|\frac{A+B}{2}\biggr|\!\biggr|\!\biggr|, $$
(9)

where \(|\!|\!|\cdot|\!|\!|\) is any unitarily invariant norm. An example in [6] shows that the singular value analogue of the first inequality in (9) does not hold. One of the results of the present article, Theorem 1.6 below, is a tracial version of the Heinz mean-type inequalities for matrices.
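As a side illustration of (9) (a minimal sketch, not from the paper; `rand_psd`, `psd_power`, and `trace_norm` are ad hoc helpers), one can check the inequality numerically for the trace norm, a standard example of a unitarily invariant norm:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5

def rand_psd(n):
    g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return g.conj().T @ g

def psd_power(M, t):
    w, v = np.linalg.eigh(M)
    return (v * np.clip(w, 0, None)**t) @ v.conj().T

def trace_norm(M):
    # The trace norm (sum of singular values) is unitarily invariant.
    return np.linalg.svd(M, compute_uv=False).sum()

A, B = rand_psd(n), rand_psd(n)
for nu in (0.1, 0.3, 0.5, 0.8):
    H_nu = (psd_power(A, nu) @ psd_power(B, 1 - nu)
            + psd_power(A, 1 - nu) @ psd_power(B, nu)) / 2
    assert trace_norm(psd_power(A, 0.5) @ psd_power(B, 0.5)) <= trace_norm(H_nu) + 1e-8
    assert trace_norm(H_nu) <= trace_norm((A + B) / 2) + 1e-8
```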

Theorem 1.6

Let A, B be two positive semi-definite matrices in \(M_{n}(\mathbb {C})\). Then

$$\operatorname{tr}\bigl(\sqrt{|AB|}\bigr) \le \operatorname{tr}\bigl(H_{1} \bigl(\bigl|A^{\nu }B^{1-\nu}\bigr| ,\bigl|A^{1-\nu}B^{\nu}\bigr| \bigr) \bigr) \le \operatorname{tr}\bigl(H_{1}(A , B) \bigr) . $$

Equality holds if and only if \(A=B\).

For a real vector \(X=(x_{1} , x_{2} , \ldots,x_{n})\), let \(X^{\downarrow}=(x_{1}^{\downarrow} , x_{2}^{\downarrow} ,\ldots ,x_{n}^{\downarrow})\) be the decreasing rearrangement of X. Let X and Y be two vectors in \(\mathbb {R}^{n}\). We say X is (weakly) submajorised by Y, in symbols \(X \prec_{w} Y\), if

$$\sum_{j=1}^{k} x_{j}^{\downarrow} \leq\sum_{j=1}^{k} y_{j}^{\downarrow} ,\quad1\leq k\leq n . $$

X is majorised by Y, in symbols \(X \prec Y\), if X is submajorised by Y and

$$\sum_{j=1}^{n} x_{j}^{\downarrow} = \sum_{j=1}^{n} y_{j}^{\downarrow}. $$

Definition 1.7

If \(A, B\in M_{n}(\mathbb {C})\), then we write \(A \prec_{w} B\) to denote that A is weakly majorised by B, meaning that

$$\sum_{j=1}^{k}\sigma_{j}(A) \le \sum_{j=1}^{k}\sigma_{j}(B), \quad\mbox{for all } 1\le k\le n . $$

If \(A \prec_{w} B\) and

$$\operatorname{tr}\bigl(|A|\bigr) = \operatorname{tr}\bigl(|B|\bigr) , $$

then we say that A is majorised by B, in symbols \(A \prec B \).

Let \(S(A)\) denote the n-vector whose coordinates are the singular values of A. Then we write \(A \prec_{w} B\) (\(A \prec B\)) when \(S(A) \prec_{w} S(B)\) (\(S(A) \prec S(B)\)) .
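For concreteness, weak majorisation and majorisation of such singular value vectors can be tested with a few lines of NumPy (an illustrative sketch; `weakly_majorised` and `majorised` are ad hoc names):

```python
import numpy as np

def weakly_majorised(x, y, tol=1e-10):
    # x ≺_w y: every partial sum of the decreasing rearrangement of x
    # is dominated by the corresponding partial sum for y.
    xs = np.sort(np.asarray(x, dtype=float))[::-1]
    ys = np.sort(np.asarray(y, dtype=float))[::-1]
    return bool(np.all(np.cumsum(xs) <= np.cumsum(ys) + tol))

def majorised(x, y, tol=1e-10):
    # x ≺ y: weak majorisation plus equality of the total sums.
    return weakly_majorised(x, y, tol) and abs(np.sum(x) - np.sum(y)) <= tol
```

Applied to the singular value vectors \(S(A)\) and \(S(B)\) (e.g. `np.linalg.svd(A, compute_uv=False)`), these two predicates express the relations \(A \prec_{w} B\) and \(A \prec B\) of Definition 1.7.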

The following theorem has been proved in [1].

Theorem 1.8

If X and Y are two matrices in \(M_{n}(\mathbb {C})\), then

$$ S^{r}(XY)\prec_{w} S^{r}(X) S^{r}(Y) \quad\textit{for all } r>0 . $$
(10)

2 Main results

We present here the matrix inequalities that we will use in the proof of our main results. The next theorem has been proved in [6].

Theorem 2.1

For positive semi-definite matrices A and B and for all \(j=1, 2,\ldots, n\)

$$\sigma_{j} \bigl(H_{\nu}(A , B) \bigr)\le \sigma_{j} \bigl(H_{1}(A , B) \bigr) , $$

for every \(\nu\in[0 , 1]\).

Thus, this proves that the second inequality in (8) holds. The arithmetic-geometric mean inequality

$$\sqrt{ab}\le\frac{a+b}{2} $$

has been studied extensively in the matrix setting; much of this work is associated with Bhatia and Kittaneh. They established the next inequality in [7]:

$$ \sigma_{j} \bigl(A^{*}B \bigr)\le\lambda_{j} \biggl(\frac{AA^{*}+BB^{*}}{2} \biggr) , $$
(11)

where A and B are two matrices in \(M_{n}(\mathbb {C})\). They also studied many possible versions of this inequality in [8], and put a lot of emphasis on what they described as level three inequalities [9]. Drury [10] answered the key question in this area in the following theorem.

Theorem 2.2

For positive semi-definite matrices A and B in \(M_{n}(\mathbb {C})\) and for all \(j=1, 2,\ldots, n\)

$$\sqrt{\sigma_{j}(AB)}\le\lambda_{j} \bigl(H_{1}(A , B) \bigr) . $$

We will show that in both Theorems 2.1 and 2.2 equality holds if and only if \(A=B\). It is still unknown whether

$$\sqrt{\sigma_{j}(AB)}\le\sigma_{j} \bigl(H_{\nu}(A , B) \bigr) $$

for every \(\nu\in(0 , 1)\). However, by using Theorems 2.1 and 2.2, we present a different version of this inequality.

Lemma 2.3

For positive semi-definite matrices A and B in \(M_{n}(\mathbb {C})\) and for all \(j=1, 2,\ldots, n\)

$$ \sqrt{\sigma_{j}(AB)}\le\lambda_{j} \bigl(H_{1} \bigl(\bigl|A^{\nu}B^{1-\nu}\bigr| ,\bigl|A^{1-\nu }B^{\nu}\bigr| \bigr) \bigr) $$
(12)

for every \(\nu\in(0 , 1)\).

Proof

We first aim to show that

$$\sigma_{j}(AB) \le\sigma_{j} \bigl(A^{\nu}B^{1-\nu}B^{\nu}A^{1-\nu} \bigr). $$

We have

$$\begin{aligned} \sigma_{j}(AB)&=\sigma_{j} \bigl(A^{1-\nu}A^{\nu}B^{1-\nu}B^{\nu }A^{1-\nu}A^{\nu-1} \bigr) \\ &\le\|A\|^{1-\nu}\sigma_{j} \bigl(A^{\nu}B^{1-\nu}B^{\nu}A^{1-\nu} \bigr)\|A \|^{\nu -1} \quad\bigl(\mbox{by part (5) of Theorem }1.1 \bigr). \end{aligned}$$
(13)

As \(\nu-1<0\), the matrix \(A^{\nu-1}\) exists only if A is invertible. Therefore, to prove (13) we shall assume that A is invertible. This assumption entails no loss of generality, for if A were not invertible, we could replace A by \(A+\varepsilon I\), which is invertible and satisfies \(\sigma_{j}((A+\varepsilon I)B)\rightarrow \sigma_{j}(AB)\) as \(\varepsilon\rightarrow0\) for every \(B\in M_{n}(\mathbb {C})\) and \(j=1, 2,\ldots, n\). Thus, (13) holds for noninvertible A as a limiting case of the invertible case.

By using equation (3), we get

$$\sigma_{j} \bigl(A^{\nu}B^{1-\nu}B^{\nu}A^{1-\nu} \bigr) =\sigma_{j} \bigl(\bigl|A^{\nu}B^{1-\nu}\bigr| \cdot\bigl|A^{1-\nu}B^{\nu}\bigr| \bigr). $$

Hence, by using Theorem 2.2,

$$\sqrt{\sigma_{j}(AB)} \le\sqrt{\sigma_{j} \bigl(\bigl|A^{\nu}B^{1-\nu}\bigr|\cdot\bigl|A^{1-\nu}B^{\nu }\bigr| \bigr)}\le\lambda_{j} \bigl(H_{1} \bigl(\bigl|A^{\nu}B^{1-\nu }\bigr| ,\bigl|A^{1-\nu }B^{\nu}\bigr| \bigr) \bigr) . $$

 □
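A minimal numerical check of inequality (12) (an illustration only, not part of the proof; `rand_psd`, `psd_power`, and `matrix_abs` are ad hoc helpers):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5

def rand_psd(n):
    g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return g.conj().T @ g

def psd_power(M, t):
    w, v = np.linalg.eigh(M)
    return (v * np.clip(w, 0, None)**t) @ v.conj().T

def matrix_abs(M):
    # |M| = (M^* M)^{1/2}.
    return psd_power(M.conj().T @ M, 0.5)

A, B = rand_psd(n), rand_psd(n)
lhs = np.sqrt(np.linalg.svd(A @ B, compute_uv=False))        # sqrt(sigma_j(AB)), decreasing
for nu in (0.2, 0.5, 0.9):
    H1 = (matrix_abs(psd_power(A, nu) @ psd_power(B, 1 - nu))
          + matrix_abs(psd_power(A, 1 - nu) @ psd_power(B, nu))) / 2
    rhs = np.sort(np.linalg.eigvalsh(H1))[::-1]               # lambda_j(H1), decreasing
    assert np.all(lhs <= rhs + 1e-8)
```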

Remark 2.4

Note that Lemma 2.3 generalizes Theorem 2.2; in fact, Theorem 2.2 is the special case \(\nu=1\) of Lemma 2.3.

Theorem 2.5

Let A, B be two positive semi-definite matrices in \(M_{n}(\mathbb {C})\). Then

$$\operatorname{tr}\bigl(\sqrt{|AB|}\bigr) \le \operatorname{tr}\bigl(H_{1} \bigl(\bigl|A^{\nu }B^{1-\nu}\bigr| ,\bigl|A^{1-\nu}B^{\nu}\bigr| \bigr) \bigr) \le \operatorname{tr}\bigl(H_{1}(A , B) \bigr) . $$

Proof

By the definition of the trace, we have

$$\begin{aligned} \operatorname{tr}\bigl(\sqrt{|AB|}\bigr) =& \sum_{j=1}^{n} \lambda_{j}\bigl(\sqrt{|AB|}\bigr)\\ =& \sum_{j=1}^{n} \sqrt{\sigma_{j}(AB)} \quad \bigl(\mbox{by part (3) of Theorem }1.1\bigr) \\ \le& \sum_{j=1}^{n} \lambda_{j}\bigl(H_{1}\bigl(\bigl|A^{\nu }B^{1-\nu}\bigr| ,\bigl|A^{1-\nu }B^{\nu}\bigr|\bigr)\bigr) \quad\bigl(\mbox{using inequality } (12)\bigr)\\ =& \operatorname{tr}\bigl(H_{1}\bigl(\bigl|A^{\nu}B^{1-\nu}\bigr| , \bigl|A^{1-\nu}B^{\nu}\bigr|\bigr)\bigr) \\ =& \frac{1}{2}\operatorname{tr}\bigl(\bigl|A^{\nu}B^{1-\nu}\bigr|\bigr)+\frac{1}{2} \operatorname{tr}\bigl(\bigl|A^{1-\nu}B^{\nu}\bigr|\bigr)\\ \le& \frac{1}{2} \bigl( \operatorname{tr}\bigl(\nu A+(1-\nu)B\bigr) +\operatorname{tr}\bigl(\nu B+(1-\nu)A\bigr) \bigr). \end{aligned}$$

In the last step we applied Corollary 1.4 with \(p=\frac{1}{\nu}\) and \(q=\frac{1}{1-\nu}\) to the first summand, and with \(p=\frac{1}{1-\nu}\) and \(q=\frac{1}{\nu}\) to the second one.

Therefore,

$$\begin{aligned} \operatorname{tr}\bigl(\sqrt{|AB|}\bigr) \le& \operatorname{tr}\bigl(H_{1}\bigl(\bigl|A^{\nu }B^{1-\nu}\bigr| , \bigl|A^{1-\nu}B^{\nu}\bigr|\bigr)\bigr)\\ \le& \frac{1}{2} \bigl( \nu \operatorname{tr}(A)+(1-\nu)\operatorname{tr}(B)+(1-\nu)\operatorname{tr}(A)+\nu \operatorname{tr}(B) \bigr)\\ = & \frac{1}{2}\operatorname{tr}(A+B)=\operatorname{tr}\bigl(H_{1}(A , B)\bigr). \end{aligned}$$

 □
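The whole tracial chain of Theorem 2.5 can be sanity-checked numerically; the sketch below (not part of the paper, with the same ad hoc helpers as above) compares the three traces on random positive semi-definite matrices:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5

def rand_psd(n):
    g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return g.conj().T @ g

def psd_power(M, t):
    w, v = np.linalg.eigh(M)
    return (v * np.clip(w, 0, None)**t) @ v.conj().T

def matrix_abs(M):
    return psd_power(M.conj().T @ M, 0.5)

A, B = rand_psd(n), rand_psd(n)
nu = 0.3
# tr(sqrt(|AB|)) = sum of the square roots of the singular values of AB.
left = np.sqrt(np.linalg.svd(A @ B, compute_uv=False)).sum()
middle = 0.5 * np.trace(matrix_abs(psd_power(A, nu) @ psd_power(B, 1 - nu))
                        + matrix_abs(psd_power(A, 1 - nu) @ psd_power(B, nu))).real
right = 0.5 * np.trace(A + B).real
assert left <= middle + 1e-8
assert middle <= right + 1e-8
```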

Theorem 2.6

Let \(A, B\in M_{n}(\mathbb {C})\) be two positive semi-definite matrices and let \(0< \nu< 1\). Then the following conditions are equivalent:

  1. (1)

    \(\operatorname{tr}(\sqrt{|AB|}) = \operatorname{tr}(H_{1}(A , B))\).

  2. (2)

    \(\operatorname{tr}(H_{1}(|A^{\nu}B^{1-\nu}| ,|A^{1-\nu}B^{\nu}|)) = \operatorname{tr}(H_{1}(A , B))\).

  3. (3)

    \(\operatorname{tr}(|H_{\nu}(A , B)|) = \operatorname{tr}(H_{1}(A , B))\).

  4. (4)

    \(A = B\).

Proof

We shall show that \((1)\Longrightarrow (2)\Longrightarrow(4)\Longrightarrow(1)\) and \((3)\Longrightarrow (2)\Longrightarrow(4)\Longrightarrow(3)\).

Assume (1): \(\operatorname{tr}(\sqrt{|AB|}) = \operatorname{tr}(H_{1}(A , B))\). Then the argument in the proof of the above theorem implies

$$\operatorname{tr}\bigl(H_{1} \bigl(\bigl|A^{\nu}B^{1-\nu}\bigr| ,\bigl|A^{1-\nu}B^{\nu}\bigr| \bigr) \bigr) = \operatorname{tr}\bigl(H_{1}(A , B) \bigr) . $$

If the equation in part (2) holds, then from what was proved in the last theorem we conclude that

$$\begin{aligned} \operatorname{tr}\bigl(H_{1}(A , B)\bigr) = & \operatorname{tr}\bigl(H_{1}\bigl(\bigl|A^{\nu}B^{1-\nu}\bigr| , \bigl|A^{1-\nu}B^{\nu}\bigr|\bigr)\bigr)\\ = & \frac{1}{2}\operatorname{tr}\bigl(\bigl|A^{\nu}B^{1-\nu}\bigr| + \bigl|A^{1-\nu}B^{\nu}\bigr|\bigr)\\ \le& \frac{1}{2}\bigl( \operatorname{tr}\bigl(\nu A+(1-\nu)B\bigr)+\operatorname{tr}\bigl(\nu B+(1-\nu)A\bigr) \bigr)=\operatorname{tr}\bigl(H_{1}(A , B)\bigr) . \end{aligned}$$

Thus,

$$ \operatorname{tr}\bigl(\bigl|A^{\nu}B^{1-\nu}\bigr| \bigr) + \operatorname{tr}\bigl(\bigl|A^{1-\nu}B^{\nu}\bigr| \bigr) = \operatorname{tr}\bigl(\nu A+(1-\nu)B \bigr)+ \operatorname{tr}\bigl(\nu B+(1-\nu)A \bigr) . $$
(14)

By Corollary 1.4, this equality holds if and only if

$$\operatorname{tr}\bigl(\bigl|A^{\nu}B^{1-\nu}\bigr| \bigr) = \operatorname{tr}\bigl(\nu A+(1-\nu )B\bigr)\quad\mbox{and}\quad \operatorname{tr}\bigl(\bigl|A^{1-\nu}B^{\nu}\bigr| \bigr) = \operatorname{tr}\bigl(\nu B+(1-\nu)A \bigr), $$

and therefore \(A^{1-\nu}=B^{\nu}\), \(B^{1-\nu}=A^{\nu}\), which implies \(A=B\). It is clear that \((4)\Longrightarrow(1)\).

Now we show that \((3)\Longrightarrow(2)\Longrightarrow (4)\Longrightarrow(3)\). Assume (3): \(\operatorname{tr}(|H_{\nu}(A , B)|) = \operatorname{tr}(H_{1}(A , B))\). Then

$$\begin{aligned} \operatorname{tr}\bigl(H_{1}(A , B)\bigr) = & \operatorname{tr}\bigl(\bigl|H_{\nu}(A , B)\bigr|\bigr)\\ = & \frac{1}{2}\operatorname{tr}\bigl(\bigl|A^{\nu}B^{1-\nu} + A^{1-\nu}B^{\nu}\bigr|\bigr)\\ \le& \frac{1}{2}\bigl[ \operatorname{tr}\bigl(U\bigl|A^{\nu}B^{1-\nu}\bigr|U^{*}\bigr) + \operatorname{tr}\bigl(V\bigl|A^{1-\nu}B^{\nu}\bigr|V^{*}\bigr)\bigr] \quad\bigl(\mbox{by the triangle inequality }(6)\bigr) \end{aligned}$$

for some unitaries U and \(V\in M_{n}(\mathbb {C})\).

Thus,

$$\begin{aligned} \operatorname{tr}\bigl(H_{1}(A , B)\bigr) \le& \frac{1}{2}\operatorname{tr}\bigl(\bigl|A^{\nu }B^{1-\nu}\bigr| + \bigl|A^{1-\nu}B^{\nu}\bigr|\bigr) \\ = & \operatorname{tr}\bigl(H_{1}\bigl(\bigl|A^{\nu}B^{1-\nu}\bigr| , \bigl|A^{1-\nu}B^{\nu}\bigr|\bigr)\bigr) \\ \le& \operatorname{tr}\bigl(H_{1}(A , B)\bigr) \quad(\mbox{by Theorem }2.5), \end{aligned}$$

thereby proving (2). \((2)\Longrightarrow(4)\) was shown in the first part. It is clear that \((4)\Longrightarrow(3)\). □

The following two corollaries are almost immediate from Theorem 2.6.

Corollary 2.7

For positive semi-definite matrices A and B in \(M_{n}(\mathbb {C})\) and for all \(j=1, 2,\ldots, n\)

$$\sqrt{\sigma_{j}(AB)} = \lambda_{j} \bigl(H_{1}(A , B) \bigr) , $$

if and only if \(A = B\).

Corollary 2.8

For positive semi-definite matrices A and B in \(M_{n}(\mathbb {C})\) and for all \(j=1, 2,\ldots, n\)

$$\sigma_{j} \bigl(H_{\nu}(A , B) \bigr) = \lambda_{j} \bigl(H_{1}(A , B) \bigr) , $$

for \(\nu\in[0 , 1]\) if and only if \(A = B\).

We do not know whether

$$\sqrt{\sigma_{j}(AB)} \le\sigma_{j} \bigl(H_{\nu}(A , B) \bigr) \le\lambda_{j} \bigl(H_{1}(A , B) \bigr) $$

for every \(\nu\in[0 , 1]\).

To answer this question, we just need to know whether

$$\sqrt{\sigma_{j}(AB)} \le\sigma_{j} \bigl(H_{\nu}(A , B) \bigr) $$

for every \(\nu\in[0 , 1]\).

In the rest of this paper, we apply the singular value inequalities for these means to present a new majorisation version of the mean inequalities.

Lemma 2.9

Let A and B be two positive semi-definite matrices. Then

$$S^{\frac{1}{2}}(AB) \prec_{w} \frac{1}{2} \bigl(S(A) + S(B) \bigr) . $$

Proof

By Theorem 1.8,

$$\sum_{j=1}^{k} \sigma_{j}(AB)^{\frac{1}{2}} \le\sum_{j=1}^{k} \lambda_{j}(A)^{\frac{1}{2}} \lambda_{j}(B)^{\frac{1}{2}} \quad\mbox{for every } 1\le k\le n . $$

By using an arithmetic-geometric mean inequality for singular values of A and B,

$$\sum_{j=1}^{k} \sigma_{j}(AB)^{\frac{1}{2}} \le\sum_{j=1}^{k}\frac{1}{2} \lambda_{j}(A) + \sum_{j=1}^{k} \frac{1}{2} \lambda_{j}(B) \quad\mbox{for every } 1\le k\le n . $$

Thus,

$$\sum_{j=1}^{k} \sigma_{j}(AB)^{\frac{1}{2}} \le\sum_{j=1}^{k}\frac{1}{2}\bigl( \lambda_{j}(A) + \lambda_{j}(B)\bigr) \quad\mbox{for every } 1\le k\le n , $$

which implies \(S^{\frac{1}{2}}(AB) \prec_{w} \frac{1}{2}(S(A) + S(B)) \). □
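A short numerical illustration of Lemma 2.9 (an ad hoc sketch, not from the paper): the partial sums of \(S^{\frac{1}{2}}(AB)\) are compared with those of \(\frac{1}{2}(S(A)+S(B))\).

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6
g1 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
g2 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A, B = g1.conj().T @ g1, g2.conj().T @ g2        # random positive semi-definite matrices

# S^{1/2}(AB): square roots of the singular values of AB, in decreasing order.
left = np.sqrt(np.linalg.svd(A @ B, compute_uv=False))
# (S(A) + S(B))/2: for psd matrices the singular values are the eigenvalues.
right = (np.linalg.svd(A, compute_uv=False) + np.linalg.svd(B, compute_uv=False)) / 2
# Weak majorisation: every partial sum on the left is dominated by the one on the right.
assert np.all(np.cumsum(left) <= np.cumsum(right) + 1e-8)
```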

Lemma 2.10

If A and B are positive semi-definite matrices in \(M_{n}(\mathbb {C})\), then

$$\sqrt{|AB|} \prec_{w} H_{1}(A , B) . $$

Proof

It is a direct result of the definition of majorisation and Theorem 2.2. □

Lemma 2.11

If A and B are positive semi-definite matrices in \(M_{n}(\mathbb {C})\), then

$$H_{\nu}(A , B) \prec_{w} H_{1}(A , B) . $$

Proof

It is a direct result of the definition of majorisation and Theorem 2.1. □

It is interesting to know whether

$$\sqrt{|AB|} \prec_{w} H_{\nu}(A , B) . $$

Lemma 2.12

If A and B are positive semi-definite matrices in \(M_{n}(\mathbb {C})\), then

$$\sqrt{|AB|} \prec_{w} H_{1} \bigl(\bigl|A^{\nu}B^{1-\nu}\bigr| ,\bigl|A^{1-\nu}B^{\nu}\bigr| \bigr) . $$

Proof

It is a direct result of the definition of majorisation and Lemma 2.3. □

The results to this point lead to the following theorem about majorisation for positive definite matrices.

Theorem 2.13

For every two positive matrices A and B in \(M_{n}(\mathbb {C})\), the following conditions are equivalent:

  1. (1)

    \(S^{\frac{1}{2}}(AB) \prec\frac{1}{2}(S(A) + S(B)) \).

  2. (2)

    \(\sqrt{|AB|} \prec H_{1}(A , B)\).

  3. (3)

    \(H_{\nu}(A , B) \prec H_{1}(A , B) \).

  4. (4)

    \(\sqrt{|AB|} \prec H_{1}(|A^{\nu}B^{1-\nu}| ,|A^{1-\nu }B^{\nu }|) \).

  5. (5)

    \(A = B\).

References

  1. Bhatia, R: Matrix Analysis. Springer, New York (1997)


  2. Ando, T: Matrix Young inequalities. Oper. Theory, Adv. Appl. 75, 33-38 (1995)


  3. Hirzallah, O, Kittaneh, F: Matrix Young inequalities for the Hilbert-Schmidt norm. Linear Algebra Appl. 308, 77-84 (2000)


  4. Thompson, RC: Convex and concave functions of singular values of matrix sums. Pac. J. Math. 66, 285-290 (1976)


  5. Bhatia, R, Davis, C: More matrix forms of the arithmetic-geometric mean inequality. SIAM J. Matrix Anal. Appl. 14, 132-136 (1993)


  6. Audenaert, KMR: A singular value inequality for Heinz mean. Linear Algebra Appl. 422, 279-283 (2007)


  7. Bhatia, R, Kittaneh, F: On the singular values of a product of operators. SIAM J. Matrix Anal. Appl. 11, 272-277 (1990)


  8. Bhatia, R, Kittaneh, F: Notes on matrix arithmetic-geometric mean inequalities. Linear Algebra Appl. 308, 203-211 (2000)


  9. Bhatia, R, Kittaneh, F: On the singular values of a product of operators. Linear Algebra Appl. 428, 2177-2191 (2008)


  10. Drury, SW: On a question of Bhatia and Kittaneh. Linear Algebra Appl. 437, 1955-1960 (2012)



Acknowledgements

This work was supported by the Department of Mathematical Sciences at Isfahan University of Technology, Iran.

Author information


Correspondence to Seyed Mahmoud Manjegani.

Additional information

Competing interests

The author declares that he has no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Mahmoud Manjegani, S. Tracial and majorisation Heinz mean-type inequalities for matrices. J Inequal Appl 2016, 23 (2016). https://doi.org/10.1186/s13660-016-0965-8

