
Matrix operator inequalities based on Mond, Pecaric, Jensen and Ando's

Abstract

This work discusses matrix operator inequalities derived from Mond and Pecaric's matrix inequalities, together with matrix operator inequalities for convex and concave functions that arise in the inequalities of Jensen and Ando for positive linear maps.

1. Introduction

Operator inequalities play an important role in various mathematical and statistical contexts. In the early study of operator inequalities many results were obtained; useful references are mainly the literature [1–4]. In the last twenty years new results on operator inequalities have continued to appear; readers may refer to Mond and Pecaric [5], Beesack [1], Hansen [6], Kubo and Ando [2], and so on. In [5], Mond and Pecaric gave several matrix operator inequalities associated with positive linear maps by means of concavity and convexity theorems. Beesack and Pĕcarić [1] proved Jensen's inequality for real valued convex functions. Hansen [6] also gave operator inequalities associated with Jensen's inequality. In this article, we give some new matrix operator inequalities based on those of Mond, Pecaric, Jensen and Ando.

This article is organized as follows. In Section 2, based on Mond and Pecaric's operator inequalities, we give matrix operator inequalities for real valued convex functions. In Section 3, we first present some new matrix operator inequalities by means of Jensen's and Ando's matrix inequalities; we then give the Jensen inequality for convex functions in matrix form and study its properties further.

Throughout the article, $A_j$ ($j=1,2,\dots,k$) denote positive definite Hermitian matrices of order $n\times n$ with eigenvalues in the interval $[m,M]$ ($0<m<M$), and $U_j$ ($j=1,2,\dots,k$) denote $r\times n$ matrices such that $\sum_{j=1}^{k}U_jU_j^*=I$. If $A$ is an $n\times n$ Hermitian matrix, then there exists a unitary matrix $U$ such that $A=U^*\operatorname{diag}(\lambda_1,\dots,\lambda_n)U$, where $\lambda_i$ ($i=1,2,\dots,n$) are the eigenvalues of $A$; $f(A)$ is then defined by $f(A)=U^*\operatorname{diag}(f(\lambda_1),\dots,f(\lambda_n))U$. Finally, $A\ge B$ means that $A-B$ is positive semi-definite.

If $F(t)=F(f(t),g(t))$, then we use $F(A)$ to denote the matrix operator $F(f(A),g(A))$, that is, $F(A)=F(f(A),g(A))$, while $F(A,B)$ denotes a matrix function of two variables, as defined in the known literature.

2. Matrix operator inequalities based on Mond and Pecaric's

In [5], Mond and Pecaric gave two important matrix inequalities, stated here as Theorems A and B:

Theorem A. Let $A_j$ ($j=1,2,\dots,k$) be Hermitian matrices of order $n\times n$ with eigenvalues in the interval $[m,M]$, and let $U_j$ ($j=1,2,\dots,k$) be $r\times n$ matrices such that $\sum_{j=1}^{k}U_jU_j^*=I$. If $f$ is a real valued continuous convex function on $[m,M]$, then

$$\sum_{j=1}^{k}U_jf(A_j)U_j^*\ \le\ \frac{MI-\sum_{j=1}^{k}U_jA_jU_j^*}{M-m}\,f(m)+\frac{\sum_{j=1}^{k}U_jA_jU_j^*-mI}{M-m}\,f(M).$$
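As a purely numerical illustration of Theorem A (not part of the original paper), the following Python sketch builds random Hermitian matrices $A_j$ with spectra in $[m,M]$ and matrices $U_j$ with $\sum_j U_jU_j^*=I$, and checks that the right-hand side minus the left-hand side is positive semidefinite. The helper names (rand_hermitian, matrix_fn, rand_isometries, is_psd) are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_hermitian(n, m, M):
    """Random n x n Hermitian matrix with eigenvalues rescaled into [m, M]."""
    H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H = (H + H.conj().T) / 2
    w, Q = np.linalg.eigh(H)
    w = m + (M - m) * (w - w.min()) / (w.max() - w.min())
    return (Q * w) @ Q.conj().T

def matrix_fn(A, f):
    """f(A) for Hermitian A via the spectral decomposition A = Q diag(w) Q*."""
    w, Q = np.linalg.eigh(A)
    return (Q * f(w)) @ Q.conj().T

def rand_isometries(k, r, n):
    """U_1, ..., U_k of size r x n with sum_j U_j U_j* = I_r."""
    V = [rng.standard_normal((r, n)) + 1j * rng.standard_normal((r, n)) for _ in range(k)]
    S = sum(Vj @ Vj.conj().T for Vj in V)           # positive definite r x r
    S_inv_half = matrix_fn(S, lambda w: w ** -0.5)  # S^{-1/2}
    return [S_inv_half @ Vj for Vj in V]

def is_psd(X, tol=1e-9):
    return np.linalg.eigvalsh((X + X.conj().T) / 2).min() >= -tol

k, n, r, m, M = 3, 5, 4, 1.0, 4.0
f = np.exp                                          # a sample convex function on [m, M]
A = [rand_hermitian(n, m, M) for _ in range(k)]
U = rand_isometries(k, r, n)

X = sum(Uj @ Aj @ Uj.conj().T for Uj, Aj in zip(U, A))
lhs = sum(Uj @ matrix_fn(Aj, f) @ Uj.conj().T for Uj, Aj in zip(U, A))
rhs = (M * np.eye(r) - X) / (M - m) * f(m) + (X - m * np.eye(r)) / (M - m) * f(M)
print(is_psd(rhs - lhs))                            # expect True
```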

Theorem B. Let $A_j$ ($j=1,2,\dots,k$) be Hermitian matrices of order $n\times n$ with eigenvalues in the interval $[m,M]$, and let $U_j$ ($j=1,2,\dots,k$) be $r\times n$ matrices such that $\sum_{j=1}^{k}U_jU_j^*=I$. If $f$ is a continuous convex function on $[m,M]$, $J$ is an interval such that $J\supset[m,M]$, and $F(u,v)$ is a real valued continuous function defined on $J\times J$ which is matrix increasing in its first variable, then

$$F\Bigl(\sum_{j=1}^{k}U_jf(A_j)U_j^*,\ f\Bigl(\sum_{j=1}^{k}U_jA_jU_j^*\Bigr)\Bigr)\ \le\ \max_{x\in[m,M]}F\Bigl(\frac{M-x}{M-m}f(m)+\frac{x-m}{M-m}f(M),\ f(x)\Bigr)I\ =\ \max_{\theta\in[0,1]}F\bigl(\theta f(m)+(1-\theta)f(M),\ f(\theta m+(1-\theta)M)\bigr)I.$$

Starting from Theorems A and B, we can carry out a deeper study and obtain further results of wider applicability.

Theorem 2.1. Let $A_j$ ($j=1,2,\dots,k$) be Hermitian matrices of order $n\times n$ with eigenvalues in the interval $[m,M]$, and let $U_j$ ($j=1,2,\dots,k$) be $r\times n$ matrices such that $\sum_{j=1}^{k}U_jU_j^*=I$. If $f$ is a real valued continuous convex function on $[m,M]$, then

$$\sum_{j=1}^{k}U_jf(mI+MI-A_j)U_j^*\ \le\ \frac{\sum_{j=1}^{k}U_jA_jU_j^*-mI}{M-m}f(m)+\frac{MI-\sum_{j=1}^{k}U_jA_jU_j^*}{M-m}f(M)\ \le\ f(m)I+f(M)I-\sum_{j=1}^{k}U_jf(A_j)U_j^*.$$

Proof. By the proof of Theorem 1 of [5], we know that for any real valued convex function, the following inequality holds

$$f(m+M-z)\ \le\ \frac{M-(m+M-z)}{M-m}f(m)+\frac{(m+M-z)-m}{M-m}f(M)\ =\ \frac{z-m}{M-m}f(m)+\frac{M-z}{M-m}f(M)\ =\ f(m)+f(M)-\Bigl(\frac{M-z}{M-m}f(m)+\frac{z-m}{M-m}f(M)\Bigr)\ \le\ f(m)+f(M)-f(z),\qquad m+M-z\in[m,M].$$

It follows at once from the above inequality that

$$f(mI+MI-A_j)\ \le\ f(m)I+f(M)I-f(A_j),$$

where $mI\le A_j\le MI$ ($j=1,2,\dots,k$). Compressing with the $U_j$ and summing, the following matrix inequality is deduced from the one above:

$$\sum_{j=1}^{k}U_jf(mI+MI-A_j)U_j^*\ \le\ f(m)I+f(M)I-\sum_{j=1}^{k}U_jf(A_j)U_j^*.$$

The middle expression in the statement of Theorem 2.1 is obtained by applying Theorem A to the convex function $t\mapsto f(m+M-t)$ for the first inequality and to $f$ itself for the second.

□
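The scalar inequality chain used in this proof can be probed numerically; the small sketch below is our own illustration, with $f=\exp$ as a sample convex function, and checks both inequalities on a grid.

```python
import numpy as np

m, M = 1.0, 4.0
f = np.exp                                    # sample convex function on [m, M]
z = np.linspace(m, M, 1001)                   # then m + M - z also lies in [m, M]

lhs    = f(m + M - z)
middle = (z - m) / (M - m) * f(m) + (M - z) / (M - m) * f(M)
right  = f(m) + f(M) - f(z)

# both inequalities of the scalar chain should hold pointwise
print(np.all(lhs <= middle + 1e-12), np.all(middle <= right + 1e-12))
```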

Similarly, we can get the following theorem:

Theorem 2.2. Let $A_j$ ($j=1,2,\dots,k$) be Hermitian matrices of order $n\times n$ with eigenvalues in the interval $[m,M]$, and let $U_j$ ($j=1,2,\dots,k$) be $r\times n$ matrices such that $\sum_{j=1}^{k}U_jU_j^*=I$. If $f$ is a real valued continuous convex function on $[m,M]$, then

$$\sum_{j=1}^{k}U_jf(mI+MI-A_j)U_j^*\ \ge\ 2f\Bigl(\frac{m+M}{2}\Bigr)I-\sum_{j=1}^{k}U_jf(A_j)U_j^*\ \ge\ 2f\Bigl(\frac{m+M}{2}\Bigr)I-\Bigl(\frac{MI-\sum_{j=1}^{k}U_jA_jU_j^*}{M-m}f(m)+\frac{\sum_{j=1}^{k}U_jA_jU_j^*-mI}{M-m}f(M)\Bigr).$$

Proof. By the proof of Theorem 2.1, we know that for any real valued convex function the following inequalities hold:

$$f(m+M-z)\ \ge\ 2f\Bigl(\frac{m+M}{2}\Bigr)-f(z)\ \ge\ 2f\Bigl(\frac{m+M}{2}\Bigr)-\Bigl(\frac{M-z}{M-m}f(m)+\frac{z-m}{M-m}f(M)\Bigr),\qquad m+M-z\in[m,M].$$

It follows from the above inequality that

$$f(mI+MI-A_j)\ \ge\ 2f\Bigl(\frac{m+M}{2}\Bigr)I-f(A_j)\ \ge\ 2f\Bigl(\frac{m+M}{2}\Bigr)I-\Bigl(\frac{MI-A_j}{M-m}f(m)+\frac{A_j-mI}{M-m}f(M)\Bigr),$$

where $mI\le A_j\le MI$ ($j=1,2,\dots,k$). Further, the following matrix inequality can be deduced from the one above:

$$\sum_{j=1}^{k}U_jf(mI+MI-A_j)U_j^*\ \ge\ 2f\Bigl(\frac{m+M}{2}\Bigr)I-\sum_{j=1}^{k}U_jf(A_j)U_j^*\ \ge\ 2f\Bigl(\frac{m+M}{2}\Bigr)I-\Bigl(\frac{MI-\sum_{j=1}^{k}U_jA_jU_j^*}{M-m}f(m)+\frac{\sum_{j=1}^{k}U_jA_jU_j^*-mI}{M-m}f(M)\Bigr).$$

□

As a special case of Theorem B, we give the following:

Theorem 2.3. Let $A_j$ ($j=1,2,\dots,k$) be Hermitian matrices of order $n\times n$ with eigenvalues in the interval $[m,M]$ and let $U_j$ ($j=1,2,\dots,k$) be $r\times n$ matrices such that $\sum_{j=1}^{k}U_jU_j^*=I$. If $f$ is a continuous convex function on $[m,M]$, $J$ is an interval with $J\supset[m,M]$, and $F(u,v)=u-v$, which is matrix increasing in its first variable, then the following inequality holds:

$$\sum_{j=1}^{k}U_jf(A_j)U_j^*-f\Bigl(\sum_{j=1}^{k}U_jA_jU_j^*\Bigr)\ \le\ \max_{\theta\in[0,1]}\bigl(\theta f(m)+(1-\theta)f(M)-f(\theta m+(1-\theta)M)\bigr)I.$$

Taking $F(u,v)=u/v$ instead, we can also get $\sum_{j=1}^{k}U_jf(A_j)U_j^*\ \le\ \max_{\theta\in[0,1]}\dfrac{\theta f(m)+(1-\theta)f(M)}{f(\theta m+(1-\theta)M)}\,f\Bigl(\sum_{j=1}^{k}U_jA_jU_j^*\Bigr)$.
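The additive constant on the right-hand side of Theorem 2.3 is a one-dimensional maximization over $\theta$ and is easy to evaluate numerically; the sketch below (ours) does this by a grid search for the sample choice $f(t)=t^2$, for which the maximum is the familiar value $(M-m)^2/4$.

```python
import numpy as np

m, M = 1.0, 4.0
f = lambda t: t ** 2                          # sample convex function

theta = np.linspace(0.0, 1.0, 100001)
gap = theta * f(m) + (1 - theta) * f(M) - f(theta * m + (1 - theta) * M)

# for f(t) = t^2 the maximal Jensen gap is (M - m)^2 / 4
print(gap.max(), (M - m) ** 2 / 4)
```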

3. Matrix operator inequalities based on Jensen and Ando's

In this section we discuss two issues: the first is matrix operator inequalities obtained from an important lemma, and the second is the Jensen inequality in matrix form, together with further study of its properties.

In the first part, we suppose A is a positive definite Hermitian matrix of order n × n with mI ≤ A ≤ MI, and f(t) is a nonnegative concave function on [m,M] with 0 < m < M; then f(A) is defined as a positive operator by the usual functional calculus. A real valued function f is said to be operator monotone on the interval [m,M] if A ≥ B implies f(A) ≥ f(B) for all positive definite Hermitian matrices A and B whose eigenvalues are contained in [m,M], and f is said to be operator concave on [m,M] if f(λA+(1-λ)B) ≥ λf(A)+(1-λ)f(B) for all real numbers 0 ≤ λ ≤ 1 and all positive definite Hermitian matrices A and B whose eigenvalues are contained in [m,M].

We begin with the following lemma.

Lemma 3.1. Let $A_j$ ($j=1,2,\dots,k$) be positive definite Hermitian matrices of order $n\times n$ with eigenvalues in $[m,M]$, let $f(t)$ be a nonnegative real valued continuous strictly concave twice differentiable function on $[m,M]$ with $0<m<M$, and let $U_j$ ($j=1,2,\dots,k$) be $r\times n$ matrices such that $\sum_{j=1}^{k}U_jU_j^*=I$. Then for any given $\alpha>0$,

$$\sum_{j=1}^{k}U_jf(A_j)U_j^*\ \ge\ \alpha f\Bigl(\sum_{j=1}^{k}U_jA_jU_j^*\Bigr)+\beta I$$ holds for

$$\beta=\beta(m,M,f,\alpha)=at_0+b-\alpha f(t_0),$$

where $t_0$ is defined as the unique solution of $f'(t)=\frac{a}{\alpha}$ when $f'(M)\le\frac{a}{\alpha}\le f'(m)$, with $a=\frac{f(M)-f(m)}{M-m}$ and $b=\frac{Mf(m)-mf(M)}{M-m}$; otherwise $t_0$ is defined as $M$ or $m$ according as $\frac{a}{\alpha}\le f'(M)$ or $f'(m)\le\frac{a}{\alpha}$.

Proof. Take $h(t)=at+b-\alpha f(t)$. Since $f(t)$ is strictly concave, its derivative $f'(t)$ is strictly decreasing. Hence, if $f'(M)\le\frac{a}{\alpha}\le f'(m)$, then $h'(t)=0$ occurs in the interval $[m,M]$ only at the point $t_0$. Since $h''(t)=-\alpha f''(t)>0$, this means that $\min_{m\le t\le M}h(t)=h(t_0)\equiv\beta$. Next, consider $\frac{a}{\alpha}\le f'(M)$. Since $h'(t)\le0$, $h(t)$ is decreasing on $[m,M]$; therefore $\min_{m\le t\le M}h(t)=h(t_0)\equiv\beta$ with $t_0=M$. Similarly, if $f'(m)\le\frac{a}{\alpha}$, then $\min_{m\le t\le M}h(t)=h(t_0)\equiv\beta$ with $t_0=m$. It follows that $at+b\ge\alpha f(t)+\beta$ for $t\in[m,M]$.

Applying this inequality to $\sum_{j=1}^{k}U_jA_jU_j^*$, we have

$$a\sum_{j=1}^{k}U_jA_jU_j^*+bI\ \ge\ \alpha f\Bigl(\sum_{j=1}^{k}U_jA_jU_j^*\Bigr)+\beta I,$$

where $mI\le\sum_{j=1}^{k}U_jA_jU_j^*\le MI$.

On the other hand, since $f(t)$ is concave, we have $f(t)\ge at+b$ for $t\in[m,M]$. Applying this inequality to $A_j$, we have $f(A_j)\ge aA_j+bI$, and then

$$\sum_{j=1}^{k}U_jf(A_j)U_j^*\ \ge\ a\sum_{j=1}^{k}U_jA_jU_j^*+bI.$$

Combining these two inequalities we obtain

$$\sum_{j=1}^{k}U_jf(A_j)U_j^*\ \ge\ \alpha f\Bigl(\sum_{j=1}^{k}U_jA_jU_j^*\Bigr)+\beta I.$$

□
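The case analysis defining $t_0$ and $\beta$ in Lemma 3.1 translates directly into a small numerical routine. The sketch below is our own, with $f(t)=\sqrt{t}$ as a sample strictly concave function and a bisection solver for $f'(t_0)=a/\alpha$; it computes $\beta$ and checks the scalar bound $at+b\ge\alpha f(t)+\beta$ on $[m,M]$.

```python
import numpy as np

def beta_of(f, fprime, m, M, alpha):
    """beta(m, M, f, alpha) of Lemma 3.1 for a strictly concave f on [m, M]."""
    a = (f(M) - f(m)) / (M - m)
    b = (M * f(m) - m * f(M)) / (M - m)
    target = a / alpha
    if fprime(M) <= target <= fprime(m):
        # interior case: f' is strictly decreasing, so bisect for f'(t0) = a/alpha
        lo, hi = m, M
        for _ in range(200):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if fprime(mid) > target else (lo, mid)
        t0 = (lo + hi) / 2
    else:
        # boundary case: h(t) = a t + b - alpha f(t) is monotone on [m, M]
        t0 = M if target <= fprime(M) else m
    return a * t0 + b - alpha * f(t0)

m, M, alpha = 1.0, 4.0, 1.2
f, fprime = np.sqrt, lambda t: 0.5 / np.sqrt(t)     # sample strictly concave function

beta = beta_of(f, fprime, m, M, alpha)
a = (f(M) - f(m)) / (M - m)
b = (M * f(m) - m * f(M)) / (M - m)
t = np.linspace(m, M, 1001)
print(beta, np.all(a * t + b >= alpha * f(t) + beta - 1e-12))   # expect True
```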

By similar arguments, the following consequences of Lemma 3.1 can be obtained easily.

Remark 3.1. If we put α = 1 in Lemma 3.1, then

$$-\beta I\ \ge\ f\Bigl(\sum_{j=1}^{k}U_jA_jU_j^*\Bigr)-\sum_{j=1}^{k}U_jf(A_j)U_j^*$$

holds for $\beta=at_0+b-f(t_0)$, where $t_0$ satisfies $f'(t_0)=a$. Since $f$ is strictly concave, it can be deduced that $f'(M)\le a\le f'(m)$.

The following corollaries are special cases of Lemma 3.1.

Corollary 3.1. Let $A_j$ ($j=1,2,\dots,k$) be positive definite Hermitian matrices of order $n\times n$ with eigenvalues in the interval $[m,M]$, that is, $0<mI\le A_j\le MI$, let $0<p<1$ (resp. $p<0$ or $p>1$), and let $U_j$ ($j=1,2,\dots,k$) be $r\times n$ matrices such that $\sum_{j=1}^{k}U_jU_j^*=I$. Then for any given $\alpha>0$,

$$\sum_{j=1}^{k}U_jA_j^pU_j^*\ \ge\ \alpha\Bigl(\sum_{j=1}^{k}U_jA_jU_j^*\Bigr)^p+\beta I\qquad\Bigl(\text{resp. }\sum_{j=1}^{k}U_jA_j^pU_j^*\ \le\ \alpha\Bigl(\sum_{j=1}^{k}U_jA_jU_j^*\Bigr)^p+\beta I\Bigr)$$

holds for

$$\beta=\beta(m,M,t^p,\alpha)=\begin{cases}\alpha(p-1)\Bigl(\dfrac{1}{\alpha p}\cdot\dfrac{M^p-m^p}{M-m}\Bigr)^{\frac{p}{p-1}}+\dfrac{Mm^p-mM^p}{M-m}, & \text{if } pM^{p-1}\le\dfrac{1}{\alpha}\cdot\dfrac{M^p-m^p}{M-m}\le pm^{p-1},\\[2mm]\min\bigl\{(1-\alpha)M^p,\ (1-\alpha)m^p\bigr\}, & \text{otherwise},\end{cases}$$

resp.

$$\beta=\beta(m,M,t^p,\alpha)=\begin{cases}\alpha(p-1)\Bigl(\dfrac{1}{\alpha p}\cdot\dfrac{M^p-m^p}{M-m}\Bigr)^{\frac{p}{p-1}}+\dfrac{Mm^p-mM^p}{M-m}, & \text{if } pm^{p-1}\le\dfrac{1}{\alpha}\cdot\dfrac{M^p-m^p}{M-m}\le pM^{p-1},\\[2mm]\max\bigl\{(1-\alpha)M^p,\ (1-\alpha)m^p\bigr\}, & \text{otherwise}.\end{cases}$$

Proof. We only prove the former. As in the proof of Lemma 3.1, let $f(t)=t^p$. Since $t_0$ is defined as the unique solution of $f'(t)=\frac{a}{\alpha}$ when $f'(M)\le\frac{a}{\alpha}\le f'(m)$, we get $t_0=\bigl(\frac{a}{p\alpha}\bigr)^{\frac{1}{p-1}}$ and $\beta=\alpha(p-1)\bigl(\frac{1}{\alpha p}\cdot\frac{M^p-m^p}{M-m}\bigr)^{\frac{p}{p-1}}+\frac{Mm^p-mM^p}{M-m}$ from $f'(t_0)=\frac{a}{\alpha}=pt_0^{p-1}$, when $pM^{p-1}\le\frac{1}{\alpha}\cdot\frac{M^p-m^p}{M-m}\le pm^{p-1}$, where $a=\frac{f(M)-f(m)}{M-m}$ and $b=\frac{Mf(m)-mf(M)}{M-m}$. Otherwise $\beta=\min\{(1-\alpha)M^p,(1-\alpha)m^p\}$. Hence the results follow from the conclusion of Lemma 3.1 with $f(t)=t^p$.
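The closed-form $\beta$ of Corollary 3.1 can be cross-checked against a direct minimization of $h(t)=at+b-\alpha t^p$; a short sketch of ours for the concave case $0<p<1$ follows.

```python
import numpy as np

m, M, p, alpha = 1.0, 4.0, 0.5, 1.2            # concave case: 0 < p < 1

a = (M ** p - m ** p) / (M - m)
b = (M * m ** p - m * M ** p) / (M - m)

# closed form from Corollary 3.1
if p * M ** (p - 1) <= a / alpha <= p * m ** (p - 1):
    beta_closed = alpha * (p - 1) * (a / (alpha * p)) ** (p / (p - 1)) + b
else:
    beta_closed = min((1 - alpha) * M ** p, (1 - alpha) * m ** p)

# direct numerical minimum of h(t) = a t + b - alpha t^p on [m, M]
t = np.linspace(m, M, 200001)
beta_grid = (a * t + b - alpha * t ** p).min()

print(beta_closed, beta_grid)                   # should agree to grid accuracy
```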

As a special case of Corollary 3.1, we get the following corollary.

Corollary 3.2. Let $A_j$ ($j=1,2,\dots,k$) be positive definite Hermitian matrices of order $n\times n$ with eigenvalues in the interval $[m,M]$, that is, $0<mI\le A_j\le MI$, and let $U_j$ ($j=1,2,\dots,k$) be $r\times n$ matrices such that $\sum_{j=1}^{k}U_jU_j^*=I$. Then for any given $\alpha>0$,

$$\sum_{j=1}^{k}U_jA_j^{-1}U_j^*\ \le\ \alpha\Bigl(\sum_{j=1}^{k}U_jA_jU_j^*\Bigr)^{-1}+\beta I$$

holds for

$$\beta=\beta(m,M,t^{-1},\alpha)=\begin{cases}\dfrac{M+m}{Mm}-2\sqrt{\dfrac{\alpha}{Mm}}, & \text{if } \dfrac{m}{M}\le\alpha\le\dfrac{M}{m},\\[2mm]\max\Bigl\{\dfrac{1-\alpha}{m},\ \dfrac{1-\alpha}{M}\Bigr\}, & \text{if either } 0<\alpha<\dfrac{m}{M} \text{ or } \dfrac{M}{m}<\alpha.\end{cases}$$

In particular,

$$\sum_{j=1}^{k}U_jA_j^{-1}U_j^*-\Bigl(\sum_{j=1}^{k}U_jA_jU_j^*\Bigr)^{-1}\ \le\ \frac{(\sqrt{M}-\sqrt{m})^2}{Mm}\,I,\qquad \sum_{j=1}^{k}U_jA_j^{-1}U_j^*\ \le\ \frac{(M+m)^2}{4Mm}\Bigl(\sum_{j=1}^{k}U_jA_jU_j^*\Bigr)^{-1}.$$

Proof. The results follow from the conclusion of Corollary 3.1 with $f(t)=t^{-1}$ (taking $\alpha=1$ and $\alpha=\frac{(M+m)^2}{4Mm}$, respectively).
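Both Kantorovich-type bounds of Corollary 3.2 are easy to test numerically. In the sketch below (our own, using the special case k = 1, a single co-isometry U built by a QR factorization, and a diagonal A so that A^{-1} is immediate), the two differences are checked for positive semidefiniteness.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, m, M = 6, 3, 1.0, 4.0

# one co-isometry U (r x n) with U U* = I: orthonormal rows via a QR factorization
G = rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r))
Q, _ = np.linalg.qr(G)                          # Q is n x r with Q* Q = I
U = Q.conj().T

# a diagonal positive definite A with spectrum in [m, M] keeps A^{-1} immediate
d = rng.uniform(m, M, n)
A, A_inv = np.diag(d), np.diag(1.0 / d)

X = U @ A @ U.conj().T
X_inv = np.linalg.inv(X)
lhs = U @ A_inv @ U.conj().T

def is_psd(Y, tol=1e-9):
    return np.linalg.eigvalsh((Y + Y.conj().T) / 2).min() >= -tol

add_bound = (np.sqrt(M) - np.sqrt(m)) ** 2 / (M * m) * np.eye(r)
mul_bound = (M + m) ** 2 / (4 * M * m) * X_inv
print(is_psd(add_bound + X_inv - lhs), is_psd(mul_bound - lhs))   # expect True True
```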

Remark 3.2. If we put α = 1 and p = 2 in Corollary 3.1, then

$$\sum_{j=1}^{k}U_jA_j^{2}U_j^*-\Bigl(\sum_{j=1}^{k}U_jA_jU_j^*\Bigr)^{2}\ \le\ \frac{(M-m)^2}{4}\,I.$$

This result was obtained by Liu and Neudecker [7].

The theory of operator means for positive linear operators on a Hilbert space, in connection with Löwner's theory of operator monotone functions, was established by Kubo and Ando [2]. In the following discussion we give, as matrix versions, generalizations of matrix inequalities through the theory of operator means, which we apply to derive Jensen's and Ando's matrix inequalities. To this end we give the following definition.

A map (A,B)→AσB is called an operator mean if the following conditions are satisfied:

(M1) Monotonicity: A ≤ C and B ≤ D imply AσB ≤ CσD;

(M2) upper continuity: A n ↓ A and B n ↓ B imply A n σB n ↓ AσB;

(M3) transformer inequality: U(AσB)U* ≤ (UAU*)σ(UBU*) for any operator U.

If f is a nonnegative operator concave function, we define an operator mean σ through the formula

$$A\,\sigma\,B=A^{\frac12}f\bigl(A^{-\frac12}BA^{-\frac12}\bigr)A^{\frac12}$$

for all positive definite Hermitian matrices A and B of order n × n. A simple example of an operator mean is the geometric mean # defined by

$$A\,\#_{\frac12}\,B=A^{\frac12}\bigl(A^{-\frac12}BA^{-\frac12}\bigr)^{\frac12}A^{\frac12}.$$
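The formula defining $A\sigma B$ can be implemented directly by the spectral calculus. The sketch below (ours) does so and, as a consistency check, verifies that for commuting positive definite A and B the geometric mean reduces to $(AB)^{1/2}$.

```python
import numpy as np

def matrix_fn(A, f):
    """f(A) for Hermitian A via its spectral decomposition."""
    w, Q = np.linalg.eigh(A)
    return (Q * f(w)) @ Q.conj().T

def sigma(A, B, f):
    """A sigma B = A^{1/2} f(A^{-1/2} B A^{-1/2}) A^{1/2}."""
    A_half = matrix_fn(A, np.sqrt)
    A_mhalf = matrix_fn(A, lambda w: 1.0 / np.sqrt(w))
    return A_half @ matrix_fn(A_mhalf @ B @ A_mhalf, f) @ A_half

def geometric_mean(A, B):
    return sigma(A, B, np.sqrt)                 # f(t) = t^{1/2} gives A # B

# consistency check: for commuting (here diagonal) A, B, A # B equals (A B)^{1/2}
rng = np.random.default_rng(2)
A = np.diag(rng.uniform(1.0, 4.0, 4))
B = np.diag(rng.uniform(0.5, 2.0, 4))
print(np.allclose(geometric_mean(A, B), matrix_fn(A @ B, np.sqrt)))   # expect True
```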

Before proceeding further, we present a useful lemma.

Lemma 3.2. Let f be a nonnegative operator concave function on [m,M] with 0 < m < M, let A and B be positive definite Hermitian matrices, and let U be any r × n matrix such that UU* = I. Then the following statements are mutually equivalent:

(i) $U(A\sigma B)U^*\le(UAU^*)\sigma(UBU^*)$;

(ii) $Uf(A)U^*\le f(UAU^*)$.

Proof. That (i) implies (ii) is obvious: take A = I in (i) and note that IσB = f(B) and UU* = I. We now show that (ii) implies (i). In fact,

$$\begin{aligned}U(A\sigma B)U^*&=UA^{\frac12}f\bigl(A^{-\frac12}BA^{-\frac12}\bigr)A^{\frac12}U^*\\&=(UAU^*)^{\frac12}\Bigl[(UAU^*)^{-\frac12}UA^{\frac12}f\bigl(A^{-\frac12}BA^{-\frac12}\bigr)A^{\frac12}U^*(UAU^*)^{-\frac12}\Bigr](UAU^*)^{\frac12}\\&\le(UAU^*)^{\frac12}f\Bigl((UAU^*)^{-\frac12}UA^{\frac12}\bigl(A^{-\frac12}BA^{-\frac12}\bigr)A^{\frac12}U^*(UAU^*)^{-\frac12}\Bigr)(UAU^*)^{\frac12}\\&=(UAU^*)^{\frac12}f\bigl((UAU^*)^{-\frac12}UBU^*(UAU^*)^{-\frac12}\bigr)(UAU^*)^{\frac12}\\&=(UAU^*)\sigma(UBU^*)\end{aligned}$$

which holds because $VV^*=(UAU^*)^{-\frac12}UA^{\frac12}\,A^{\frac12}U^*(UAU^*)^{-\frac12}=I$, where $V=(UAU^*)^{-\frac12}UA^{\frac12}$, so that (ii) applies with $V$ in place of $U$.

We now introduce some new theorems of operator means for concave matrix functions.

Theorem 3.1. Let A and B be positive definite Hermitian matrices such that $0<m_1I\le A\le M_1I$ and $0<m_2I\le B\le M_2I$, let $f(t)$ be a nonnegative real valued continuous strictly concave twice differentiable function on $[m,M]$ with $0<m<M$, and let U be an r × n matrix such that $UU^*=I$. Then

$$U(A\sigma B)U^*\ \ge\ \alpha\,(UAU^*)\sigma(UBU^*)+\beta\,UAU^*$$

holds for any given $\alpha>0$ and $\beta=\beta(m,M,f,\alpha)=at_0+b-\alpha f(t_0)$, where $t_0$ is defined as the unique solution of $f'(t)=\frac{a}{\alpha}$ when $f'(M)\le\frac{a}{\alpha}\le f'(m)$, with $a=\frac{f(M)-f(m)}{M-m}$ and $b=\frac{Mf(m)-mf(M)}{M-m}$; otherwise $t_0$ is defined as $M$ or $m$ according as $\frac{a}{\alpha}\le f'(M)$ or $f'(m)\le\frac{a}{\alpha}$, where $m=\frac{m_2}{M_1}$ and $M=\frac{M_2}{m_1}$.

Similarly, $U(B\sigma A)U^*\ge\alpha\,(UBU^*)\sigma(UAU^*)+\beta\,UBU^*$ holds for $\beta$ defined just as above with $m=\frac{m_1}{M_2}$ and $M=\frac{M_1}{m_2}$.

Proof. We only prove the former. It follows from Lemma 3.1 that for a given α > 0

$$(UAU^*)^{-\frac12}UA^{\frac12}f\bigl(A^{-\frac12}BA^{-\frac12}\bigr)A^{\frac12}U^*(UAU^*)^{-\frac12}\ \ge\ \alpha f\Bigl((UAU^*)^{-\frac12}UA^{\frac12}\bigl(A^{-\frac12}BA^{-\frac12}\bigr)A^{\frac12}U^*(UAU^*)^{-\frac12}\Bigr)+\beta I$$

which holds because $VV^*=(UAU^*)^{-\frac12}UA^{\frac12}\,A^{\frac12}U^*(UAU^*)^{-\frac12}=I$ and $\beta=\beta\bigl(\frac{m_2}{M_1},\frac{M_2}{m_1},f,\alpha\bigr)$, where $V=(UAU^*)^{-\frac12}UA^{\frac12}$. Consequently,

$$\begin{aligned}U(A\sigma B)U^*&=UA^{\frac12}f\bigl(A^{-\frac12}BA^{-\frac12}\bigr)A^{\frac12}U^*\\&=(UAU^*)^{\frac12}\Bigl[(UAU^*)^{-\frac12}UA^{\frac12}f\bigl(A^{-\frac12}BA^{-\frac12}\bigr)A^{\frac12}U^*(UAU^*)^{-\frac12}\Bigr](UAU^*)^{\frac12}\\&\ge(UAU^*)^{\frac12}\Bigl[\alpha f\Bigl((UAU^*)^{-\frac12}UA^{\frac12}\bigl(A^{-\frac12}BA^{-\frac12}\bigr)A^{\frac12}U^*(UAU^*)^{-\frac12}\Bigr)+\beta I\Bigr](UAU^*)^{\frac12}\\&=(UAU^*)^{\frac12}\Bigl[\alpha f\bigl((UAU^*)^{-\frac12}UBU^*(UAU^*)^{-\frac12}\bigr)+\beta I\Bigr](UAU^*)^{\frac12}\\&=\alpha\,(UAU^*)\sigma(UBU^*)+\beta\,UAU^*.\end{aligned}$$

Remark 3.3. If we put α = 1 in Theorem 3.1, then we have the following:

$$-\beta\,UAU^*\ \ge\ (UAU^*)\sigma(UBU^*)-U(A\sigma B)U^*\qquad\bigl(\text{resp. }-\beta\,UBU^*\ \ge\ (UBU^*)\sigma(UAU^*)-U(B\sigma A)U^*\bigr)$$

holds for $\beta=at_0+b-f(t_0)$ and $t_0$ such that $f'(t_0)=a$, where $m=\frac{m_2}{M_1}$ and $M=\frac{M_2}{m_1}$ (resp. $m=\frac{m_1}{M_2}$ and $M=\frac{M_1}{m_2}$).

If we put $f(t)=t^{\frac12}$ in Theorem 3.1, the following Corollary 3.3 follows.

Corollary 3.3. Let A and B be positive definite Hermitian matrices such that $0<m_1I\le A\le M_1I$ and $0<m_2I\le B\le M_2I$, and let U be an r × n matrix such that $UU^*=I$. Then for any given $\alpha>0$,

$$U\bigl(A\,\#_{\frac12}\,B\bigr)U^*\ \ge\ \alpha\,(UAU^*)\,\#_{\frac12}\,(UBU^*)+\beta\,UAU^*$$

holds for

$$\beta=\beta\bigl(m,M,t^{\frac12},\alpha\bigr)=\begin{cases}\dfrac{Mm^{\frac12}-mM^{\frac12}}{M-m}-\dfrac{\alpha^2}{4}\bigl(M^{\frac12}+m^{\frac12}\bigr), & \text{if } \dfrac{2\sqrt{m}}{\sqrt{M}+\sqrt{m}}\le\alpha\le\dfrac{2\sqrt{M}}{\sqrt{M}+\sqrt{m}},\\[2mm]\min\bigl\{(1-\alpha)M^{\frac12},\ (1-\alpha)m^{\frac12}\bigr\}, & \text{otherwise},\end{cases}$$

where $m=\frac{m_2}{M_1}$ and $M=\frac{M_2}{m_1}$, and

$$U\bigl(B\,\#_{\frac12}\,A\bigr)U^*\ \ge\ \alpha\,(UBU^*)\,\#_{\frac12}\,(UAU^*)+\beta\,UBU^*$$

holds for $\beta$ defined just as above with $m=\frac{m_1}{M_2}$ and $M=\frac{M_1}{m_2}$.
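Corollary 3.3 can also be tested numerically. In the sketch below (our own, with diagonal A and B, a random co-isometry U, and $\alpha=1$), the matrix $U(A\#_{1/2}B)U^*-\alpha\,(UAU^*)\#_{1/2}(UBU^*)-\beta\,UAU^*$ is checked for positive semidefiniteness, with $\beta$ computed from the formula above.

```python
import numpy as np

def matrix_fn(A, f):
    w, Q = np.linalg.eigh(A)
    return (Q * f(w)) @ Q.conj().T

def gmean(A, B):
    """A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    Ah = matrix_fn(A, np.sqrt)
    Amh = matrix_fn(A, lambda w: 1.0 / np.sqrt(w))
    return Ah @ matrix_fn(Amh @ B @ Amh, np.sqrt) @ Ah

rng = np.random.default_rng(3)
n, r, alpha = 6, 3, 1.0
m1, M1, m2, M2 = 1.0, 2.0, 1.0, 2.0

# diagonal positive definite A, B with the prescribed spectral bounds
A = np.diag(rng.uniform(m1, M1, n))
B = np.diag(rng.uniform(m2, M2, n))

# co-isometry U (r x n) with U U* = I
Q, _ = np.linalg.qr(rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r)))
U = Q.conj().T

# beta from Corollary 3.3 with m = m2/M1 and M = M2/m1
m, M = m2 / M1, M2 / m1
sm, sM = np.sqrt(m), np.sqrt(M)
if 2 * sm / (sM + sm) <= alpha <= 2 * sM / (sM + sm):
    beta = (M * sm - m * sM) / (M - m) - alpha ** 2 / 4 * (sM + sm)
else:
    beta = min((1 - alpha) * sM, (1 - alpha) * sm)

UA, UB = U @ A @ U.conj().T, U @ B @ U.conj().T
diff = U @ gmean(A, B) @ U.conj().T - alpha * gmean(UA, UB) - beta * UA
print(np.linalg.eigvalsh((diff + diff.conj().T) / 2).min() >= -1e-9)   # expect True
```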

In the remaining part, guided by the classical Jensen inequality for convex functions, we discuss the Jensen inequality in matrix form for Hermitian matrices and study its properties further.

We say that a convex function f defined on I = [m,M] satisfies a Jensen inequality for Hermitian matrices if $f\bigl(\sum_{i=1}^{k}P_iA_i\bigr)\le\sum_{i=1}^{k}P_if(A_i)$, where the $A_i$ ($i=1,\dots,k$) are commuting $n\times n$ Hermitian matrices with eigenvalues in $[m,M]$ and $P_i>0$ ($i=1,\dots,k$) with $\sum_{i=1}^{k}P_i=1$.

Lemma 3.3. Let f be a convex function on I = [m,M], let $P_i>0$ ($i=1,\dots,k$) with $\sum_{i=1}^{k}P_i=1$, and let $\lambda_{ij}\in[m,M]$ ($i=1,\dots,k$; $j=1,\dots,n$). Then there exist $\alpha_j,\beta_j\in[0,1]$ with $\alpha_j+\beta_j=1$ (constructed in the proof below) such that

$$\sum_{i=1}^{k}P_if(\lambda_{ij})-f\Bigl(\sum_{i=1}^{k}P_i\lambda_{ij}\Bigr)\ \le\ \alpha_jf(m)+\beta_jf(M)-f(\alpha_jm+\beta_jM).$$

Proof. Since $\lambda_{ij}\in[m,M]$, there exist $u_{ij},v_{ij}\in[0,1]$ with $u_{ij}+v_{ij}=1$ such that $\lambda_{ij}=u_{ij}m+v_{ij}M$, $j=1,2,\dots,n$. Hence

$$\begin{aligned}\sum_{i=1}^{k}P_if(\lambda_{ij})-f\Bigl(\sum_{i=1}^{k}P_i\lambda_{ij}\Bigr)&=\sum_{i=1}^{k}P_if(u_{ij}m+v_{ij}M)-f\Bigl(\sum_{i=1}^{k}P_i(u_{ij}m+v_{ij}M)\Bigr)\\&=\sum_{i=1}^{k}P_if\bigl(u_{ij}m+(1-u_{ij})M\bigr)-f\Bigl(\sum_{i=1}^{k}P_i\bigl(u_{ij}m+(1-u_{ij})M\bigr)\Bigr)\\&\le\sum_{i=1}^{k}P_i\bigl(u_{ij}f(m)+(1-u_{ij})f(M)\bigr)-f\Bigl(m\sum_{i=1}^{k}P_iu_{ij}+M\sum_{i=1}^{k}P_i(1-u_{ij})\Bigr)\\&=f(m)\sum_{i=1}^{k}P_iu_{ij}+f(M)\Bigl(1-\sum_{i=1}^{k}P_iu_{ij}\Bigr)-f\Bigl(m\sum_{i=1}^{k}P_iu_{ij}+M\Bigl(1-\sum_{i=1}^{k}P_iu_{ij}\Bigr)\Bigr).\end{aligned}$$

Denoting $\sum_{i=1}^{k}P_iu_{ij}=\alpha_j$ and $1-\sum_{i=1}^{k}P_iu_{ij}=\beta_j$, we have $0\le\alpha_j,\beta_j\le1$ with $\alpha_j+\beta_j=1$.

Consequently,

$$\sum_{i=1}^{k}P_if(\lambda_{ij})-f\Bigl(\sum_{i=1}^{k}P_i\lambda_{ij}\Bigr)\ \le\ \alpha_jf(m)+\beta_jf(M)-f(\alpha_jm+\beta_jM).$$

From this lemma, we have

Theorem 3.2. Let f be a convex function on I = [m,M], let $A_i$ ($i=1,\dots,k$) be commuting Hermitian matrices of order $n\times n$ with eigenvalues in $[m,M]$, and let $P_i>0$ ($i=1,\dots,k$) with $\sum_{i=1}^{k}P_i=1$. Then

$$\sum_{i=1}^{k}P_if(A_i)-f\Bigl(\sum_{i=1}^{k}P_iA_i\Bigr)\ \le\ \max_{j}\bigl(\alpha_jf(m)+\beta_jf(M)-f(\alpha_jm+\beta_jM)\bigr)I.$$

Proof. Since the $A_i$ commute, there is a unitary matrix U such that $A_i=U^*\operatorname{diag}(\lambda_{i1},\dots,\lambda_{in})U$ for every i. By Lemma 3.3, we get

$$\begin{aligned}\sum_{i=1}^{k}P_if(A_i)-f\Bigl(\sum_{i=1}^{k}P_iA_i\Bigr)&=\sum_{i=1}^{k}P_iU^*\operatorname{diag}\bigl(f(\lambda_{i1}),\dots,f(\lambda_{in})\bigr)U-f\Bigl(\sum_{i=1}^{k}P_iU^*\operatorname{diag}(\lambda_{i1},\dots,\lambda_{in})U\Bigr)\\&=U^*\operatorname{diag}\Bigl(\sum_{i=1}^{k}P_if(\lambda_{i1})-f\Bigl(\sum_{i=1}^{k}P_i\lambda_{i1}\Bigr),\dots,\sum_{i=1}^{k}P_if(\lambda_{in})-f\Bigl(\sum_{i=1}^{k}P_i\lambda_{in}\Bigr)\Bigr)U\\&\le U^*\operatorname{diag}\bigl(\alpha_1f(m)+\beta_1f(M)-f(\alpha_1m+\beta_1M),\dots,\alpha_nf(m)+\beta_nf(M)-f(\alpha_nm+\beta_nM)\bigr)U\\&\le\max_{j}\bigl(\alpha_jf(m)+\beta_jf(M)-f(\alpha_jm+\beta_jM)\bigr)I.\end{aligned}$$

Applying Theorem 3.2 with $f(x)=x^2$, we obtain the following corollary, which we call a pre-Grüss inequality for Hermitian matrices.

Corollary 3.4. Let $A_i$ ($i=1,\dots,k$) and $P_i$ be defined as above. Then

$$\sum_{i=1}^{k}P_iA_i^2-\Bigl(\sum_{i=1}^{k}P_iA_i\Bigr)^2\ \le\ \frac14(M-m)^2I.$$

Proof. By Theorem 3.2, we obtain at once the following inequality

$$\sum_{i=1}^{k}P_iA_i^2-\Bigl(\sum_{i=1}^{k}P_iA_i\Bigr)^2\ \le\ \max_{j}\bigl(\alpha_jm^2+\beta_jM^2-(\alpha_jm+\beta_jM)^2\bigr)I\ =\ \max_{j}\alpha_j\beta_j(M-m)^2I\ \le\ \frac14(M-m)^2I.$$
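A quick numerical check of this pre-Grüss bound (ours, not part of the original proof): the commuting Hermitian matrices are generated from a common random eigenbasis with spectra in $[m,M]$.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, m, M = 5, 4, 1.0, 3.0

# commuting Hermitian A_i: a common unitary eigenbasis Q and random spectra in [m, M]
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, _ = np.linalg.qr(G)
A = [Q @ np.diag(rng.uniform(m, M, n)) @ Q.conj().T for _ in range(k)]

P = rng.uniform(0.1, 1.0, k)
P = P / P.sum()                                # positive weights summing to 1

S1 = sum(p * Ai @ Ai for p, Ai in zip(P, A))   # sum_i P_i A_i^2
S2 = sum(p * Ai for p, Ai in zip(P, A))        # sum_i P_i A_i
gap = (M - m) ** 2 / 4 * np.eye(n) - (S1 - S2 @ S2)
print(np.linalg.eigvalsh((gap + gap.conj().T) / 2).min() >= -1e-9)     # expect True
```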

For $P_1=P_2=\cdots=P_k=\frac1k$, let us define $P_0=\bigl\{\frac1k,\frac2k,\cdots,\frac{k-1}{k}\bigr\}$.

From Theorem 3.2, we have

Corollary 3.5. Let f, $A_i$ ($i=1,\dots,k$) and $P_i$ be defined as above. Then

$$\frac{f(A_1)+f(A_2)+\cdots+f(A_k)}{k}-f\Bigl(\frac{A_1+A_2+\cdots+A_k}{k}\Bigr)\ \le\ \max_{p\in P_0}\bigl(pf(m)+(1-p)f(M)-f(pm+(1-p)M)\bigr)I.$$

Theorem 3.3. Let f, $A_i$ ($i=1,\dots,k$) and $P_i$ be defined as above. Then we have

$$\sum_{i=1}^{k}P_if(A_i)-f\Bigl(\sum_{i=1}^{k}P_iA_i\Bigr)\ \le\ f(m)I+f(M)I-2f\Bigl(\frac{M+m}{2}\Bigr)I.$$

Proof. From Theorem 3.2, we have

$$\sum_{i=1}^{k}P_if(A_i)-f\Bigl(\sum_{i=1}^{k}P_iA_i\Bigr)\ \le\ U^*\operatorname{diag}\bigl(\alpha_1f(m)+\beta_1f(M)-f(\alpha_1m+\beta_1M),\dots,\alpha_nf(m)+\beta_nf(M)-f(\alpha_nm+\beta_nM)\bigr)U.$$

Indeed,

$$\begin{aligned}\alpha_jf(m)+\beta_jf(M)-f(\alpha_jm+\beta_jM)&\le f(m)+f(M)-f(\alpha_jM+\beta_jm)-f(\alpha_jm+\beta_jM)\\&=f(m)+f(M)-2\Bigl(\tfrac12f(\alpha_jM+\beta_jm)+\tfrac12f(\alpha_jm+\beta_jM)\Bigr)\\&\le f(m)+f(M)-2f\Bigl(\frac{M+m}{2}\Bigr),\end{aligned}$$

where the first inequality uses $\beta_jf(m)+\alpha_jf(M)\ge f(\beta_jm+\alpha_jM)$ and the last one uses the convexity of f at the midpoint.

Hence,

$$\sum_{i=1}^{k}P_if(A_i)-f\Bigl(\sum_{i=1}^{k}P_iA_i\Bigr)\ \le\ f(m)I+f(M)I-2f\Bigl(\frac{M+m}{2}\Bigr)I.$$

Moreover, we obtain the following further conclusion.

Theorem 3.4. Let f, $A_i$ ($i=1,\dots,k$) and $P_i$ be defined as above, and let f moreover be a differentiable convex function. Then we have

$$\sum_{i=1}^{k}P_if(A_i)-f\Bigl(\sum_{i=1}^{k}P_iA_i\Bigr)\ \le\ \frac14(M-m)\bigl(f'(M)-f'(m)\bigr)I.$$

Proof. Since f is convex, we have

$$f(\alpha_jm+\beta_jM)\ \ge\ f(m)+\beta_j(M-m)f'(m),\qquad f(\alpha_jm+\beta_jM)\ \ge\ f(M)+\alpha_j(m-M)f'(M).$$

Therefore,

$$\begin{aligned}\alpha_jf(m)+\beta_jf(M)-f(\alpha_jm+\beta_jM)&=\alpha_j\bigl(f(m)-f(\alpha_jm+\beta_jM)\bigr)+\beta_j\bigl(f(M)-f(\alpha_jm+\beta_jM)\bigr)\\&\le\alpha_j\bigl(f(m)-f(m)-\beta_j(M-m)f'(m)\bigr)+\beta_j\bigl(f(M)-f(M)-\alpha_j(m-M)f'(M)\bigr)\\&=\alpha_j\beta_j(m-M)f'(m)+\alpha_j\beta_j(M-m)f'(M)\\&=\alpha_j\beta_j(M-m)\bigl(f'(M)-f'(m)\bigr)\\&\le\frac14(M-m)\bigl(f'(M)-f'(m)\bigr).\end{aligned}$$

Hence,

$$\sum_{i=1}^{k}P_if(A_i)-f\Bigl(\sum_{i=1}^{k}P_iA_i\Bigr)\ \le\ \max_{j}\bigl(\alpha_jf(m)+\beta_jf(M)-f(\alpha_jm+\beta_jM)\bigr)I\ \le\ \frac14(M-m)\bigl(f'(M)-f'(m)\bigr)I.$$

Thus, the proof is complete.
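The scalar estimate behind Theorem 3.4, namely $\alpha f(m)+(1-\alpha)f(M)-f(\alpha m+(1-\alpha)M)\le\frac14(M-m)(f'(M)-f'(m))$ for $\alpha\in[0,1]$, can be checked on a grid; a small sketch of ours with $f=\exp$ follows.

```python
import numpy as np

m, M = 1.0, 3.0
f, fprime = np.exp, np.exp                    # sample differentiable convex function

a = np.linspace(0.0, 1.0, 100001)             # plays the role of the weights alpha_j
gap = a * f(m) + (1 - a) * f(M) - f(a * m + (1 - a) * M)
bound = 0.25 * (M - m) * (fprime(M) - fprime(m))

print(gap.max() <= bound + 1e-12, gap.max(), bound)
```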

References

  1. Beesack PR, Pĕcarić JE: On Jensen's inequality for convex functions. J Math Anal Appl 1985, 110: 536–552. 10.1016/0022-247X(85)90315-4


  2. Kubo F, Ando T: Means of positive linear operators. Math Ann 1980, 246: 205–224. 10.1007/BF01371042


  3. Hansen F: An operator inequality. Math Ann 1980, 246: 249–250. 10.1007/BF01371046


  4. Gohberg I, Lancaster P, Rodman L: Matrices and Indefinite Scalar Products, OT8. Birkhäuser, Basel; 1983.


  5. Mond B, Pećarić JE: Matrix Inequalities for convex functions. J Math Anal Appl 1997, 209: 147–153. 10.1006/jmaa.1997.5353


  6. Hansen F: Operator inequalities associated with Jensen's inequality. In Survey on Classical Inequalities. Edited by: Rassias TM. Kluwer, Dordrecht; 2000:67–98.


  7. Liu S, Neudecker H: Several matrix Kantorovich-type inequalities. J Math Anal Appl 1996, 197: 23–26. 10.1006/jmaa.1996.0003


  8. Simić S: On a global upper bound for Jensen's inequality. J Math Anal Appl 2008, 343: 414–419. 10.1016/j.jmaa.2008.01.060



Author information


Corresponding author

Correspondence to Wu Junliang.

Additional information

Competing interests

The authors declare that they have no competing interests. Xue Lanlan takes responsibility for the content of the article.

Authors' contributions

WJ carried out the matrix operator theory studies and participated in the conception and design of the study. XL conceived of the study and participated in its design and in drafting the manuscript. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Lanlan, X., Junliang, W. Matrix operator inequalities based on Mond, Pecaric, Jensen and Ando's. J Inequal Appl 2012, 148 (2012). https://doi.org/10.1186/1029-242X-2012-148
