# On Anderson–Taylor type inequalities for τ-measurable operators

## Introduction

Let $$\mathbb{M}_{n}(\mathbb{C})$$ be the space of $$n\times n$$ complex matrices. For two Hermitian matrices $$A, B\in\mathbb{M}_{n}(\mathbb{C})$$, $$A>B$$ ($$A \geq B$$) means that $$A-B$$ is positive (semi-)definite; in particular, $$A>0$$ ($$A\geq0$$) means that A is positive (semi-)definite. Of course, $$B<A$$ ($$B \leq A$$) is not distinguished from $$A>B$$ ($$A \geq B$$). The comparison of Hermitian matrices in this way is called the Löwner partial order. If $$A\geq0$$, then it has a unique square root $$A^{\frac{1}{2}}\geq0$$. Let $$\operatorname{tr}A$$ denote the trace of A. In view of applications in probability theory, Anderson and Taylor [1, Proposition 1] proved a quadratic inequality for real numbers. In 1983, Olkin [9, Proposition] established a stronger matrix version of the Anderson–Taylor inequality as well as a related trace inequality. Using the well-known arithmetic-geometric mean inequality for singular values due to Bhatia and Kittaneh [3], Zhan [10] gave a trace inequality for sums of positive semi-definite matrices, which generalizes the Anderson–Taylor quadratic inequality for real numbers. Recently, Lin [8] provided a complement to Olkin's generalization of the Anderson–Taylor trace inequality and some related results for M-matrices.

In this article we consider τ-measurable operators affiliated with a finite von Neumann algebra equipped with a normal faithful finite trace τ. Following the method of Lin and Zhan [8, 10] and using the notion of generalized singular value studied by Fack and Kosaki [5], we obtain generalizations of the Anderson–Taylor type inequalities of [8] and [10] to the case of τ-measurable operators.

## Preliminaries

Unless stated otherwise, throughout the paper $$\mathcal{M}$$ will always denote a finite von Neumann algebra acting on a complex separable Hilbert space $$\mathcal{H}$$, with a normal faithful finite trace τ. We denote the identity in $$\mathcal{M}$$ by $$\mathbf{1}$$ and let $$\mathcal{P}$$ denote the projection lattice of $$\mathcal{M}$$. A closed, densely defined linear operator x in $$\mathcal{H}$$ with domain $$D(x)\subseteq\mathcal{H}$$ is said to be affiliated with $$\mathcal{M}$$ if $$u^{*}xu=x$$ for all unitary u in the commutant $$\mathcal{M}^{\prime}$$ of $$\mathcal{M}$$. If x is affiliated with $$\mathcal{M}$$, then x is said to be τ-measurable if for every $$\epsilon>0$$ there exists a projection $$e\in\mathcal{M}$$ such that $$e(\mathcal{H})\subseteq D(x)$$ and $$\tau(\mathbf{1}-e)<\epsilon$$. The set of all τ-measurable operators will be denoted by $$L_{0}(\mathcal{M}, \tau)$$, or simply $$L_{0}(\mathcal{M})$$. The set $$L_{0}(\mathcal{M})$$ is a $$*$$-algebra with sum and product being the respective closures of the algebraic sum and product. The space $$L_{0}(\mathcal{M})$$ is a partially ordered vector space under the ordering $$x\geq0$$ defined by $$(x\xi, \xi)\geq0$$ for all $$\xi\in D(x)$$. When $$x\geq0$$ is invertible, we write $$x>0$$.

Recall that the geometric mean of two positive definite operators x and y, denoted by $$x\sharp y$$, is the positive definite solution of the operator equation $$zy^{-1}z=x$$ and it has the explicit expression

$$x\sharp y=y^{\frac{1}{2}}\bigl(y^{-\frac{1}{2}}xy^{-\frac{1}{2}} \bigr)^{\frac {1}{2}}y^{\frac{1}{2}}.$$

From this expression we see that $$x\sharp y=y\sharp x$$, as well as the monotonicity property: $$x\sharp y\geq x\sharp z$$ whenever $$y\geq z>0$$ and $$x>0$$. One motivation for the geometric mean is the following arithmetic-geometric mean inequality:

$$\frac{x+y}{2}\geq x\sharp y.$$
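Taking $$\mathcal{M}=\mathbb{M}_{n}(\mathbb{C})$$ with $$\tau=\mathrm{Tr}$$ gives the simplest tracial setting, where the inequality above can be checked numerically. The sketch below (the helper names `sqrtm_psd`, `geomean`, and `rand_pd` are ours, not from the paper) implements the explicit expression for $$x\sharp y$$ and verifies that $$\frac{x+y}{2}-x\sharp y$$ is positive semi-definite for random positive definite matrices:

```python
import numpy as np

def sqrtm_psd(a):
    # Hermitian square root of a positive semi-definite matrix via eigendecomposition.
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def geomean(x, y):
    # x # y = y^{1/2} (y^{-1/2} x y^{-1/2})^{1/2} y^{1/2}
    ys = sqrtm_psd(y)
    ys_inv = np.linalg.inv(ys)
    return ys @ sqrtm_psd(ys_inv @ x @ ys_inv) @ ys

def rand_pd(n, rng):
    # A random complex positive definite matrix.
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return a @ a.conj().T + np.eye(n)

rng = np.random.default_rng(0)
x, y = rand_pd(4, rng), rand_pd(4, rng)

# Arithmetic-geometric mean inequality: (x + y)/2 - x # y is positive semi-definite.
gap = (x + y) / 2 - geomean(x, y)
print(np.linalg.eigvalsh(gap).min())  # nonnegative up to rounding
```

The same helpers are convenient for testing the later geometric-mean inequalities as well.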

A remarkable property of the geometric mean is a maximal characterization which is a generalization of the result in [4, Theorem 3.7]; see Lemma 3.4 in Sect. 3 for more details.

### Definition 2.1

Let $$x\in L_{0}(\mathcal{M})$$ and $$t>0$$. The “tth singular number (or generalized s-number) of x” is defined by

$$\mu_{t}(x)=\inf\bigl\{ \Vert xe \Vert : e \in\mathcal{P}, \tau( \mathbf{1}-e)\leq t\bigr\} .$$

We will denote simply by $$\mu(x)$$ the function $$t\rightarrow\mu_{t}(x)$$. The generalized singular number function $$t\rightarrow\mu_{t}(x)$$ is decreasing and right-continuous. Furthermore,

\begin{aligned} \mu(uxv)\leq \Vert v \Vert \Vert u \Vert \mu(x),\quad u,v\in \mathcal{M}, x\in L_{0}(\mathcal{M}) \end{aligned}
(2.1)

and

\begin{aligned} \mu\bigl(f(x)\bigr)=f\bigl(\mu(x)\bigr) \end{aligned}
(2.2)

whenever $$0\leq x\in L_{0}(\mathcal{M})$$ and f is an increasing continuous function on $$[0,\infty)$$ satisfying $$f(0)=0$$. See [5] for basic properties and detailed information on the generalized s-numbers.
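For $$\mathcal{M}=\mathbb{M}_{n}(\mathbb{C})$$ with $$\tau=\mathrm{Tr}$$, $$\mu_{t}(x)$$ is the right-continuous step function equal to the $$k$$th largest singular value of x on $$[k-1,k)$$, and (2.1) reduces to the familiar singular-value bound $$s_{k}(uxv)\leq\|u\|\,\|v\|\,s_{k}(x)$$. A quick numerical check of this matrix case (variable names ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
x = rng.standard_normal((n, n))
u = rng.standard_normal((n, n))
v = rng.standard_normal((n, n))

s = lambda a: np.linalg.svd(a, compute_uv=False)  # singular values, decreasing
op_norm = lambda a: s(a)[0]                       # operator norm = largest s-number

# (2.1) in the matrix case: s_k(u x v) <= ||u|| ||v|| s_k(x) for every k.
lhs = s(u @ x @ v)
rhs = op_norm(u) * op_norm(v) * s(x)
print(bool(np.all(lhs <= rhs + 1e-10)))
```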

Let $$\mathbb{M}_{2}(\mathcal{M})$$ denote the linear space of $$2\times 2$$ matrices

$$x= \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix}$$

with entries $$x_{jk}\in\mathcal{M}$$, $$j, k=1,2$$. Let $$\mathcal{H}^{2}=\mathcal{H}\oplus\mathcal{H}$$; then $$\mathbb{M}_{2}(\mathcal{M})$$ is a von Neumann algebra acting on the Hilbert space $$\mathcal{H}^{2}$$. For $$x\in\mathbb{M}_{2}(\mathcal{M})$$, define $$\tau_{2}(x)=\sum_{j=1}^{2}\tau(x_{jj})$$. Then $$\tau_{2}$$ is a normal faithful finite trace on $$\mathbb{M}_{2}(\mathcal{M})$$. The direct sum of operators $$x_{1}, x_{2}\in L_{0}(\mathcal{M})$$, denoted by $$\bigoplus_{j=1}^{2} x_{j}$$, is the block-diagonal operator matrix defined on $$\mathcal{H}^{2}$$ by

$$\bigoplus_{j=1}^{2} x_{j}= \begin{bmatrix} x_{1} & 0 \\ 0 & x_{2} \end{bmatrix} .$$

## Anderson–Taylor type inequalities

To present our main results, we first give the following lemma. Since it can be obtained in a similar way to [9, Lemma], we omit the proof.

### Lemma 3.1

Let $$x, y\in\mathcal{M}$$ with $$x>0$$, $$y\geq0$$. Then

\begin{aligned} (x+y)^{-1}y(x+y)^{-1}\leq x^{-1}-(x+y)^{-1}. \end{aligned}
(3.1)

Our next result provides an operator generalization of a matrix quadratic inequality of Olkin [9].

### Theorem 3.2

Let $$z, x_{j}\in\mathcal{M}$$ with $$z>0$$ and $$x_{j}\geq0$$ ($$j=1,2,\ldots,n$$), then

\begin{aligned} z^{-1}>\sum_{k=1}^{n} \Biggl(z+\sum_{j=1}^{k} x_{j} \Biggr)^{-1}x_{k}\Biggl(z+\sum_{j=1}^{k} x_{j}\Biggr)^{-1} . \end{aligned}
(3.2)

### Proof

Let $$x=z+x_{0}+\sum_{j=1}^{k-1} x_{j}$$, $$y=x_{k}$$, with $$x_{0}\equiv0$$. By an application of (3.1) we obtain

\begin{aligned} &\sum_{k=1}^{n}\Biggl(z+x_{0}+ \sum_{j=1}^{k} x_{j} \Biggr)^{-1} x_{k} \Biggl(z+x_{0}+\sum _{j=1}^{k} x_{j}\Biggr)^{-1} \\ &\quad\leq\sum_{k=1}^{n} \Biggl(\Biggl(z+\sum _{j=0}^{k-1}x_{j} \Biggr)^{-1}-\Biggl(z+\sum_{j=0}^{k} x_{j}\Biggr)^{-1} \Biggr) \\ &\quad=z^{-1}-\Biggl(z+\sum_{j=1}^{n} x_{j}\Biggr)^{-1} \\ &\quad< z^{-1}, \end{aligned}

which completes the proof of (3.2). □

An immediate consequence of (3.2), together with the tracial property $$\tau(ab)=\tau(ba)$$, is the following:

\begin{aligned} &\sum_{k=1}^{n} \tau \Biggl( \Biggl(z+\sum _{j=1}^{k} x_{j} \Biggr)^{-1}x_{k}\Biggl(z+\sum_{j=1}^{k} x_{j}\Biggr)^{-1} \Biggr) \\ &\quad=\sum_{k=1}^{n} \tau \Biggl(x_{k} \Biggl(z+\sum_{j=1}^{k} x_{j} \Biggr)^{-2} \Biggr) \\ &\quad< \tau\bigl(z^{-1}\bigr). \end{aligned}
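In the matrix case, both (3.2) and the trace consequence above are easy to test numerically. The sketch below (the helper `rand_psd` and all variable names are ours) checks that the difference of the two sides of (3.2) has strictly positive spectrum for random inputs:

```python
import numpy as np

def rand_psd(n, rng):
    # Random real positive semi-definite matrix.
    a = rng.standard_normal((n, n))
    return a @ a.T

rng = np.random.default_rng(2)
n, terms = 4, 6
z = rand_psd(n, rng) + np.eye(n)            # z > 0
xs = [rand_psd(n, rng) for _ in range(terms)]

acc = np.zeros((n, n))
s = z.copy()
for xk in xs:
    s = s + xk                              # z + sum_{j<=k} x_j
    s_inv = np.linalg.inv(s)
    acc += s_inv @ xk @ s_inv

gap = np.linalg.inv(z) - acc                # (3.2): z^{-1} minus the sum
print(np.linalg.eigvalsh(gap).min() > 0)    # strict positive definiteness
print(np.trace(gap) > 0)                    # trace consequence
```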

Furthermore, if $$x_{1}>0$$ and $$x_{j}\geq0$$ ($$j=2,\ldots,n$$), then applying (3.1) to the partial sums $$\sum_{j=1}^{k}x_{j}$$ for $$k\geq2$$, and noting that the $$k=1$$ term equals $$x_{1}^{-1}$$, we observe that

\begin{aligned} 2x_{1}^{-1}>\sum_{k=1}^{n} \Biggl(\sum_{j=1}^{k} x_{j} \Biggr)^{-1}x_{k}\Biggl(\sum_{j=1}^{k} x_{j}\Biggr)^{-1}. \end{aligned}
(3.3)

In what follows, we first give an inequality complementary to (3.3). To obtain it, we need several lemmas.

### Lemma 3.3

Let $$x, z\in\mathcal{M}$$ be Hermitian with $$x>0$$ (hence invertible), and let $$y\in\mathcal{M}$$. Then the $$2\times2$$ operator matrix

\begin{aligned} \begin{bmatrix} x & y \\ y^{*} & z \end{bmatrix} \end{aligned}

is positive semi-definite if and only if $$z\geq y^{*}x^{-1}y$$.

### Proof

Put $$D=z-y^{*}x^{-1}y$$; then

\begin{aligned} \begin{bmatrix} x & y \\ y^{*} & z \end{bmatrix} = \begin{bmatrix} x & y \\ y^{*} & y^{*}x^{-1}y \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & D \end{bmatrix} . \end{aligned}

Note that

\begin{aligned} \begin{bmatrix} x & y \\ y^{*} & y^{*}x^{-1}y \end{bmatrix} = \begin{bmatrix} x^{\frac{1}{2}} & x^{-\frac{1}{2}}y \\ 0 & 0 \end{bmatrix} ^{*} \cdot \begin{bmatrix} x^{\frac{1}{2}} & x^{-\frac{1}{2}}y \\ 0 & 0 \end{bmatrix} \geq0. \end{aligned}
(3.4)

Hence, the positive semi-definiteness of D is sufficient for $$\begin{bmatrix} x & y \\ y^{*} & z \end{bmatrix}$$ to be positive semi-definite.

On the other hand, it is also evident from (3.4) that for any $$\nu\in\mathcal{H}$$, the vector $$\begin{bmatrix} -x^{-1}y\nu \\ \nu \end{bmatrix}$$ belongs to the null space of $$\begin{bmatrix} x & y \\ y^{*} & y^{*}x^{-1}y \end{bmatrix}$$; therefore,

\begin{aligned} \left\langle \begin{bmatrix} -x^{-1}y\nu\\ \nu \end{bmatrix} , \begin{bmatrix} x & y \\ y^{*} & z \end{bmatrix} \cdot \begin{bmatrix} -x^{-1}y\nu\\ \nu \end{bmatrix} \right\rangle =\langle\nu,D\nu\rangle, \end{aligned}

consequently, the positive semi-definiteness of D is necessary to ensure that $$\begin{bmatrix} x & y \\ y^{*} & z \end{bmatrix}$$ is positive semi-definite. □

Lemma 3.3 says that the set of positive Hermitian operators $$z\in\mathcal{M}$$ such that $$\begin{bmatrix} x & y \\ y^{*} & z \end{bmatrix}$$ is positive semi-definite has a minimum, namely $$z=y^{*}x^{-1}y$$.
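Lemma 3.3 is the familiar Schur complement criterion, and in the matrix case it can be exercised numerically. In the sketch below (variable names ours), z is built as $$y^{*}x^{-1}y$$ plus an arbitrary positive semi-definite slack, and the resulting block matrix is checked to be positive semi-definite:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
a = rng.standard_normal((n, n))
x = a @ a.T + np.eye(n)                  # x > 0
y = rng.standard_normal((n, n))
d = rng.standard_normal((n, n))
d = d @ d.T                              # arbitrary PSD "slack" D
z = y.T @ np.linalg.inv(x) @ y + d       # so that z >= y* x^{-1} y

block = np.block([[x, y], [y.T, z]])
# Lemma 3.3 predicts the block matrix is positive semi-definite.
print(np.linalg.eigvalsh(block).min() >= -1e-8)
```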

In the next result, we give a generalization of [4, Theorem 3.7].

### Lemma 3.4

For all $$x, z\in\mathcal{M}$$ with $$x, z>0$$, the set of all Hermitian $$y\in\mathcal{M}$$ such that

\begin{aligned} \begin{bmatrix} x & y \\ y & z \end{bmatrix} \geq0 \end{aligned}

has a maximal element, namely $$x\sharp z$$.

### Proof

If

\begin{aligned} \begin{bmatrix} x & y \\ y & z \end{bmatrix} \geq0, \end{aligned}

then via Lemma 3.3, $$z\geq yx^{-1}y$$, and hence

\begin{aligned} x^{-\frac{1}{2}}zx^{-\frac{1}{2}}\geq x^{-\frac {1}{2}}yx^{-1}yx^{-\frac{1}{2}}= \bigl(x^{-\frac{1}{2}}yx^{-\frac{1}{2}}\bigr)^{2}. \end{aligned}

From the operator monotonicity of the square root function, it follows that

\begin{aligned} y\leq x^{\frac{1}{2}}\bigl(x^{-\frac{1}{2}}zx^{-\frac{1}{2}} \bigr)^{\frac{1}{2}}x^{\frac {1}{2}}=x\sharp z. \end{aligned}

This shows the maximality property of $$x\sharp z$$, i.e.,

\begin{aligned} x\sharp z =\max \left\{y\biggm| \begin{bmatrix} x & y \\ y^{*} & z \end{bmatrix} \geq0, y=y^{*} \right\}. \end{aligned}

□

Applying Lemma 3.4 to the sum of the positive semi-definite operator matrices $$\begin{bmatrix} x_{i} & x_{i}\sharp y_{i} \\ x_{i}\sharp y_{i} & y_{i} \end{bmatrix}$$, $$i=1,2,\ldots,n$$, we get the following inequality:

\begin{aligned} \Biggl(\sum_{i=1}^{n} x_{i}\Biggr)\sharp\Biggl(\sum_{i=1}^{n} y_{i}\Biggr)\geq\sum_{i=1}^{n} (x_{i}\sharp y_{i}). \end{aligned}
(3.5)
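Inequality (3.5), the superadditivity of the geometric mean, can likewise be sanity-checked for matrices. The helpers below repeat the explicit formula for $$x\sharp y$$ (the names `sqrtm_psd`, `geomean`, `rand_pd` are ours, not from the paper):

```python
import numpy as np

def sqrtm_psd(a):
    # Hermitian square root via eigendecomposition.
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def geomean(x, y):
    # x # y = y^{1/2} (y^{-1/2} x y^{-1/2})^{1/2} y^{1/2}
    ys = sqrtm_psd(y)
    ys_inv = np.linalg.inv(ys)
    return ys @ sqrtm_psd(ys_inv @ x @ ys_inv) @ ys

def rand_pd(n, rng):
    a = rng.standard_normal((n, n))
    return a @ a.T + np.eye(n)

rng = np.random.default_rng(5)
n, m = 4, 3
xs = [rand_pd(n, rng) for _ in range(m)]
ys = [rand_pd(n, rng) for _ in range(m)]

# (3.5): (sum x_i) # (sum y_i) >= sum (x_i # y_i) in the Löwner order.
lhs = geomean(sum(xs), sum(ys))
rhs = sum(geomean(x, y) for x, y in zip(xs, ys))
print(np.linalg.eigvalsh(lhs - rhs).min() >= -1e-8)
```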

The operator geometric mean has properties similar to those of the matrix geometric mean in [2]. The proof of the next lemma is similar to that of [8, Lemma 2.2]; we include it for easy reference.

### Lemma 3.5

Let $$x, y\in\mathcal{M}$$ with $$x>0$$ and y Hermitian. Then

\begin{aligned} x\sharp\bigl(yx^{-1}y\bigr)\geq y. \end{aligned}
(3.6)

### Proof

We may assume that y is invertible; the general case follows from a continuity argument. In fact, via Lemma 3.4, the notion of geometric mean can be extended to cover positive semi-definite operators. As in the proof of Lemma 3.3, it is easy to check that

\begin{aligned} \begin{bmatrix} x & y \\ y & yx^{-1}y \end{bmatrix} \geq0. \end{aligned}

Now from Lemma 3.4, the desired inequality follows. □

### Remark 3.6

Observe that, by the arithmetic-geometric mean inequality $$x\sharp(yx^{-1}y)\leq\frac{x+yx^{-1}y}{2}$$, inequality (3.6) is a refinement of the following inequality for $$\mathcal{M}\ni x>0$$ and Hermitian $$y\in\mathcal{M}$$:

\begin{aligned} x+yx^{-1}y\geq2y. \end{aligned}
(3.7)
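For matrices, both (3.6) and (3.7) can be checked numerically, including for Hermitian y that is not positive. A minimal sketch (helper names ours; y is assumed invertible, which holds generically, so that $$yx^{-1}y>0$$):

```python
import numpy as np

def sqrtm_psd(a):
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def geomean(x, y):
    # x # y = y^{1/2} (y^{-1/2} x y^{-1/2})^{1/2} y^{1/2}
    ys = sqrtm_psd(y)
    ys_inv = np.linalg.inv(ys)
    return ys @ sqrtm_psd(ys_inv @ x @ ys_inv) @ ys

rng = np.random.default_rng(6)
n = 4
a = rng.standard_normal((n, n))
x = a @ a.T + np.eye(n)           # x > 0
y = rng.standard_normal((n, n))
y = (y + y.T) / 2                 # Hermitian, possibly indefinite

q = y @ np.linalg.inv(x) @ y      # y x^{-1} y >= 0
m = geomean(x, q)                 # x # (y x^{-1} y)
print(np.linalg.eigvalsh(m - y).min() >= -1e-6)          # (3.6)
print(np.linalg.eigvalsh(x + q - 2 * y).min() >= -1e-6)  # (3.7)
```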

### Lemma 3.7

Let $$x, y\in\mathcal{M}$$ with $$x, y>0$$. Then $$x\sharp y\geq y$$ if and only if $$x\geq y$$.

Now we are ready to state our main result. It can be obtained in a similar way to [8, Theorem 2.5]; for completeness, we include a simple proof.

### Theorem 3.8

Let $$x_{i}\in\mathcal{M}$$ with $$x_{i}>0$$ for $$i=1,2,\ldots,n$$. Then

\begin{aligned} \sum_{j=1}^{n} \Biggl(\sum _{i=1}^{j} x_{i} \Biggr)x_{j}^{-1}\Biggl(\sum_{i=1}^{j} x_{i}\Biggr)>\frac {1}{2}\sum_{k=1}^{n} \sum_{j=1}^{k}\sum _{i=1}^{j}x_{i}. \end{aligned}
(3.8)

Moreover, the constant $$1/2$$ is best possible.

### Proof

Interchanging the order of summation, we deduce that

\begin{aligned} \sum_{k=1}^{n}\sum _{j=1}^{k}\sum_{i=1}^{j}x_{i} &=\sum_{j=1}^{n} \sum _{k=j}^{n}\Biggl(\sum_{i=1}^{j} x_{i}\Biggr) \\ &=\sum_{j=1}^{n}(n-j+1)\sum _{i=1}^{j} x_{i} \\ &=\sum_{i=1}^{n} x_{i} \sum _{j=i}^{n}(n-j+1) \\ &=\sum_{i=1}^{n} \binom{n-i+2}{2} x_{i} \\ &>\frac{1}{2}\sum_{i=1}^{n} (n-i+1)^{2} x_{i}, \end{aligned}

i.e.,

\begin{aligned} 2\sum_{k=1}^{n}\sum _{j=1}^{k}\sum_{i=1}^{j}x_{i} >\sum_{i=1}^{n} (n-i+1)^{2} x_{i}. \end{aligned}
(3.9)

On the other hand, combining Lemma 3.5, inequality (3.5), the monotonicity of the geometric mean, and (3.9), we obtain

\begin{aligned} \sum_{k=1}^{n}\sum _{j=1}^{k}\sum_{i=1}^{j}x_{i} &=\sum_{j=1}^{n} (n-j+1) \Biggl(\sum _{i=1}^{j} x_{i}\Biggr) \\ &\leq\sum_{j=1}^{n} \bigl((n-j+1)^{2}x_{j} \bigr)\sharp \Biggl\{ \Biggl(\sum_{i=1}^{j} x_{i}\Biggr)x_{j}^{-1}\Biggl(\sum _{i=1}^{j} x_{i}\Biggr) \Biggr\} \\ &\leq \Biggl\{ \sum_{j=1}^{n} \bigl((n-j+1)^{2}x_{j}\bigr) \Biggr\} \sharp \Biggl\{ \sum _{j=1}^{n}\Biggl(\sum _{i=1}^{j} x_{i}\Biggr)x_{j}^{-1} \Biggl(\sum_{i=1}^{j} x_{i}\Biggr) \Biggr\} \\ &< \Biggl\{ 2\sum_{k=1}^{n}\sum _{j=1}^{k}\sum_{i=1}^{j}x_{i} \Biggr\} \sharp \Biggl\{ \sum_{j=1}^{n}\Biggl( \sum_{i=1}^{j} x_{i} \Biggr)x_{j}^{-1}\Biggl(\sum_{i=1}^{j} x_{i}\Biggr) \Biggr\} \\ &= \Biggl\{ \sum_{k=1}^{n}\sum _{j=1}^{k}\sum_{i=1}^{j}x_{i} \Biggr\} \sharp \Biggl\{ 2\sum_{j=1}^{n} \Biggl(\sum_{i=1}^{j} x_{i} \Biggr)x_{j}^{-1}\Biggl(\sum_{i=1}^{j} x_{i}\Biggr) \Biggr\} . \end{aligned}

Hence, the assertion follows from Lemma 3.7. □

Regarding the proof that the constant $$1/2$$ in (3.8) is best possible, it can be carried out by the method of [8, Appendix], so we omit it.
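As with the earlier results, (3.8) is easy to probe numerically in the matrix case. The sketch below (variable names ours) compares both sides of (3.8) in the Löwner order for random positive definite $$x_{i}$$:

```python
import numpy as np

def rand_pd(n, rng):
    # Random real positive definite matrix.
    a = rng.standard_normal((n, n))
    return a @ a.T + np.eye(n)

rng = np.random.default_rng(3)
n, m = 3, 5
xs = [rand_pd(n, rng) for _ in range(m)]

# Partial sums S_j = x_1 + ... + x_j (0-indexed below).
S, acc = [], np.zeros((n, n))
for x in xs:
    acc = acc + x
    S.append(acc.copy())

lhs = sum(S[j] @ np.linalg.inv(xs[j]) @ S[j] for j in range(m))
rhs = sum(S[j] for k in range(m) for j in range(k + 1)) / 2  # (1/2) triple sum
gap = lhs - rhs                                              # (3.8)
print(np.linalg.eigvalsh(gap).min() > 0)
```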

### Remark 3.9

Under the same condition as in Theorem 3.8, we have the following inequality:

\begin{aligned} \tau \Biggl( \sum_{j=1}^{n} x_{j}^{-1}\Biggl(\sum_{i=1}^{j} x_{i}\Biggr)^{2} \Biggr) >\frac{1}{2}\sum _{k=1}^{n}\sum_{j=1}^{k} \sum_{i=1}^{j}\tau(x_{i}). \end{aligned}
(3.10)

## M-operator analog

In this section, we extend some results for M-matrices established in [8] to M-operators. Motivated by the definition of an M-matrix (see [7, Definition 2.4.3]), we define an M-operator as follows.

### Definition 4.1

Let $$x\in\mathcal{M}$$ be a positive invertible operator. Then x is called an M-operator if $$x=s\mathbf{1}-x_{1}$$, where $$x_{1}\geq0$$ and $$s>r(x_{1})$$, with $$r(x_{1})$$ the spectral radius of $$x_{1}$$.

Next we give the following lemma without proof, as it is immediate from [6, p. 117].

### Lemma 4.2

Let $$x, x+y\in\mathcal{M}$$ be two M-operators with $$y\geq0$$. Then

\begin{aligned} x^{-1}-(x+y)^{-1}\geq(x+y)^{-1}y(x+y)^{-1}. \end{aligned}
(4.1)

Observe that the following result can also be derived from Theorem 3.2; here we give another proof.

### Proposition 4.3

Let $$x_{1}, x_{1}+\sum_{i=2}^{n} x_{i} \in\mathcal{M}$$ be M-operators with $$x_{i}\geq0$$ for $$i=2,\ldots,n$$. Then

\begin{aligned} 2x_{1}^{-1}>\sum_{j=1}^{n} \Biggl(\sum_{i=1}^{j} x_{i} \Biggr)^{-1} x_{j} \Biggl(\sum_{i=1}^{j} x_{i}\Biggr)^{-1}. \end{aligned}
(4.2)

### Proof

Observe that $$x_{1}+\sum_{i=2}^{j} x_{i} \in\mathcal{M}$$ is an M-operator for $$j=2,\ldots,n$$. Moreover, since the $$j=1$$ term on the right-hand side of (4.2) equals $$x_{1}^{-1}$$, inequality (4.2) is the same as

\begin{aligned} x_{1}^{-1}>\sum_{j=2}^{n} \Biggl(\sum_{i=1}^{j} x_{i} \Biggr)^{-1} x_{j} \Biggl(\sum_{i=1}^{j} x_{i}\Biggr)^{-1}. \end{aligned}
(4.3)

In fact, let $$x=\sum_{i=1}^{j-1}x_{i}$$, $$y=x_{j}$$ ($$2\leq j\leq n$$). By Lemma 4.2 we have

\begin{aligned} \Biggl(\sum_{i=1}^{j-1} x_{i} \Biggr)^{-1}-\Biggl(\sum_{i=1}^{j-1} x_{i}+x_{j}\Biggr)^{-1}\geq\Biggl(\sum _{i=1}^{j-1} x_{i}+x_{j} \Biggr)^{-1}x_{j}\Biggl(\sum_{i=1}^{j-1} x_{i}+x_{j}\Biggr)^{-1}, \end{aligned}

i.e.,

\begin{aligned} \Biggl(\sum_{i=1}^{j-1} x_{i}\Biggr)^{-1}-\Biggl(\sum_{i=1}^{j} x_{i}\Biggr)^{-1}\geq\Biggl(\sum_{i=1}^{j} x_{i}\Biggr)^{-1}x_{j}\Biggl(\sum _{i=1}^{j} x_{i}\Biggr)^{-1}. \end{aligned}
(4.4)

Summing inequality (4.4) from $$j=2$$ to n, we deduce that

\begin{aligned} \sum_{j=2}^{n} \Biggl(\Biggl(\sum _{i=1}^{j-1} x_{i}\Biggr)^{-1}- \Biggl(\sum_{i=1}^{j} x_{i} \Biggr)^{-1} \Biggr)\geq\sum_{j=2}^{n} \Biggl(\Biggl(\sum_{i=1}^{j} x_{i} \Biggr)^{-1}x_{j}\Biggl(\sum_{i=1}^{j} x_{i}\Biggr)^{-1} \Biggr), \end{aligned}

Since the left-hand side telescopes to $$x_{1}^{-1}-(\sum_{i=1}^{n} x_{i})^{-1}<x_{1}^{-1}$$, we get the desired result. □

### Remark 4.4

Under the assumption of Proposition 4.3, taking the trace in (4.2), we immediately derive that

\begin{aligned} 2\tau\bigl(x_{1}^{-1}\bigr)>\tau\Biggl(\sum _{j=1}^{n} x_{j} \Biggl(\sum _{i=1}^{j} x_{i} \Biggr)^{-2}\Biggr). \end{aligned}
(4.5)
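Finally, (4.2) and (4.5) can be checked numerically for Hermitian M-operators. In the sketch below (variable names ours), $$x_{1}=s\mathbf{1}-b$$ with $$b\geq0$$ and $$s>r(b)$$ as in Definition 4.1; note that with positivity taken in the operator order, any positive invertible Hermitian matrix is of this form:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 4, 5

# x1 = s*1 - b with b >= 0 (operator order) and s > r(b): an M-operator.
b = rng.standard_normal((n, n))
b = b @ b.T
s = np.linalg.eigvalsh(b).max() + 1.0
x1 = s * np.eye(n) - b

def rand_psd(rng):
    a = rng.standard_normal((n, n))
    return a @ a.T

xs = [x1] + [rand_psd(rng) for _ in range(m - 1)]

acc, S = np.zeros((n, n)), np.zeros((n, n))
for xj in xs:
    S = S + xj                               # partial sum x_1 + ... + x_j
    S_inv = np.linalg.inv(S)
    acc += S_inv @ xj @ S_inv

gap = 2 * np.linalg.inv(x1) - acc            # (4.2)
print(np.linalg.eigvalsh(gap).min() > 0)
print(2 * np.trace(np.linalg.inv(x1)) > np.trace(acc))  # (4.5)
```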

## References

1. Anderson, T.W., Taylor, J.B.: An inequality for a sum of quadratic forms with applications to probability theory. Linear Algebra Appl. 30, 93–99 (1980)

2. Bhatia, R.: Positive Definite Matrices. Princeton University Press, Princeton (2007)

3. Bhatia, R., Kittaneh, F.: On the singular values of a product of operators. SIAM J. Matrix Anal. Appl. 11, 272–277 (1990)

4. Carlen, E.: Trace inequalities and quantum entropy: an introductory course. In: Entropy and the Quantum: Arizona School of Analysis with Applications, March 16–20, 2009, University of Arizona. Contemporary Mathematics, vol. 529. Am. Math. Soc., Providence (2010)

5. Fack, T., Kosaki, H.: Generalized s-numbers of τ-measurable operators. Pac. J. Math. 123, 269–300 (1986)

6. Horn, R.A., Johnson, C.R.: Topics in Matrix Analysis. Cambridge University Press, Cambridge (1991)

7. Kalauch, A.: Positive-off-diagonal operators on ordered normed spaces and maximum principles for M-operators. Dissertation, TU Dresden (2006)

8. Lin, M.: Notes on an Anderson–Taylor type inequality. Electron. J. Linear Algebra 26, 63–70 (2013)

9. Olkin, I.: An inequality for a sum of forms. Linear Algebra Appl. 52, 529–532 (1983)

10. Zhan, X.: On some matrix inequalities. Linear Algebra Appl. 376, 299–303 (2004)

### Acknowledgements

The author is grateful to the editor and referees for their useful suggestions.


## Funding

This research is supported by the National Natural Science Foundation of China (Nos. 11701255 and 11761067).

## Author information


### Contributions

The main idea of this paper was proposed by the author, JJSh. The author prepared the manuscript, performed all the steps of the proofs, and read and approved the final manuscript.

### Corresponding author

Correspondence to Jingjing Shao.

## Ethics declarations

### Competing interests

The author declares that there is no conflict of interest regarding the publication of this paper.