Research | Open Access

# Order generalised gradient and operator inequalities

Journal of Inequalities and Applications 2015, 2015:49

https://doi.org/10.1186/s13660-015-0574-y

• Accepted: 23 January 2015

## Abstract

We introduce the notion of order generalised gradient, a generalisation of the notion of subgradient, in the context of operator-valued functions. We state some operator inequalities of Hermite-Hadamard and Jensen types. We discuss the connection between the notion of order generalised gradient and the Gâteaux derivative of operator-valued functions. We state a characterisation of operator convexity via an inequality concerning the order generalised gradient.

## Keywords

• operator convex function
• operator inequality

## MSC

• 47A63
• 46E40

## 1 Background

Convex functions play a crucial role in many fields of mathematics, most prominently in optimisation theory. Two important inequalities characterise convex functions, namely Jensen’s and Hermite-Hadamard’s inequalities. In 1905-1906, Jensen defined convex functions as follows: $$f: I\subset\mathbb{R} \rightarrow\mathbb{R}$$ is a convex function if and only if
$$f \biggl(\frac{a+b}{2} \biggr) \leq\frac{f(a)+f(b)}{2}\quad \text{for any } a,b\in I.$$
(1)
Inequality (1) is referred to as Jensen’s inequality. Hermite-Hadamard’s inequality provides a refinement for Jensen’s inequality, namely, for a convex function $$f: I\subset\mathbb{R} \rightarrow\mathbb{R}$$,
$$f \biggl(\frac{a+b}{2} \biggr) \leq\frac{1}{b-a}\int _{a}^{b}f(x)\,dx \leq\frac {f(a)+f(b)}{2}\quad \text{for any } a,b\in I.$$
(2)
We refer the reader to Section 2 for further details regarding these inequalities.
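As a quick numerical illustration (ours, not part of the paper), one can check both (1) and (2) for a concrete convex function; the choice $$f(x)=e^{x}$$ on $$[0,1]$$ and the grid size are our own assumptions.

```python
import numpy as np

# Sanity check of Jensen's inequality (1) and the Hermite-Hadamard
# inequality (2) for the convex function f(x) = e^x on [a, b] = [0, 1].
f = np.exp
a, b = 0.0, 1.0

xs = np.linspace(a, b, 10001)
integral_mean = f(xs).mean()        # approximates (1/(b-a)) * int_a^b f(x) dx

midpoint = f((a + b) / 2)           # left-hand side of (1) and (2)
endpoint_mean = (f(a) + f(b)) / 2   # right-hand side of (1) and (2)

# midpoint <= mean value of f <= average of the endpoint values
assert midpoint <= integral_mean <= endpoint_mean
```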
Similarly to the case of real-valued functions, the operator convexity can be characterised by some operator inequalities. Hansen and Pedersen  characterise operator convexity via a non-commutative generalisation of Jensen’s inequality. If f is a real continuous function on an interval I, and $$\mathcal{A}(H)$$ is the set of bounded self-adjoint operators on a Hilbert space H with spectra in I, then f is operator convex if and only if
$$f \Biggl(\sum_{i=1}^{n} a_{i}^{*}x_{i}a_{i} \Biggr)\leq\sum _{i=1}^{n} a_{i}^{*}f(x_{i})a_{i}$$
for $$x_{1},\dots, x_{n}\in\mathcal{A}(H)$$ and $$a_{1},\dots, a_{n}\in\mathcal{B}(H)$$ with $$\sum_{i=1}^{n} a_{i}^{*}a_{i}=\mathbf{1}$$. We refer the reader to Section 2 for further details regarding this characterisation.
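This condition can be tested numerically in finite dimensions. The sketch below (our construction, not from the paper) takes $$n=2$$, the operator convex function $$f(x)=x^{2}$$, and self-adjoint $$a_{1}=P^{1/2}$$, $$a_{2}=(\mathbf{1}-P)^{1/2}$$ for some $$0\leq P\leq\mathbf{1}$$, so that $$a_{1}^{*}a_{1}+a_{2}^{*}a_{2}=\mathbf{1}$$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def sym(m):
    """Random m-by-m real symmetric matrix (a self-adjoint operator)."""
    g = rng.standard_normal((m, m))
    return (g + g.T) / 2

def psd_sqrt(p):
    """Symmetric square root of a positive semidefinite matrix via eigh."""
    w, v = np.linalg.eigh(p)
    return v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.T

# Build a_1, a_2 with a_1* a_1 + a_2* a_2 = 1:
g = rng.standard_normal((n, n))
p = g @ g.T
p /= np.linalg.eigvalsh(p).max() + 1e-9      # scale so 0 <= P <= I
a1, a2 = psd_sqrt(p), psd_sqrt(np.eye(n) - p)

x1, x2 = sym(n), sym(n)

# f(x) = x^2 is operator convex, so
# f(a1* x1 a1 + a2* x2 a2) <= a1* f(x1) a1 + a2* f(x2) a2.
s = a1 @ x1 @ a1 + a2 @ x2 @ a2
lhs = s @ s
rhs = a1 @ (x1 @ x1) @ a1 + a2 @ (x2 @ x2) @ a2
assert np.linalg.eigvalsh(rhs - lhs).min() >= -1e-8   # rhs - lhs is PSD
```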
One of the useful differential properties of convex functions is the fact that their one-sided directional derivatives exist universally [2, p.213]. Just as the ordinary two-sided directional derivatives of a differentiable function can be described in terms of gradient vectors, the one-sided directional derivatives can be described in terms of ‘subgradient’ vectors [2, p.213]. A vector $$x^{*}$$ is said to be a subgradient of a convex function $$f:K\subset\mathbb{R}^{n}\rightarrow\mathbb{R}$$ at point x if
$$f(x)-f(y) \geq x^{*} \cdot(x-y) \quad\text{for all } y \in K.$$
(3)
This condition is referred to as the subgradient inequality [2, p.214]. If (3) holds for every $$x\in K$$, then (3) characterises the convexity of f (cf. Eisenberg [3, Theorem 1]).

In this paper, we introduce the notion of order generalised gradient (cf. Section 3) for operator-valued functions, which is a generalisation of (3) (without the assumption of convexity) in the setting of bounded self-adjoint operators on a Hilbert space. Furthermore, we state some inequalities of Hermite-Hadamard and Jensen types for the order generalised gradient in Section 4. Finally, in Section 5, we state the connection between the order generalised gradient and the Gâteaux derivative of operator-valued functions. We state a characterisation of convexity analogous to (3) in the context of operator-valued functions.

## 2 Inequalities for convex functions

This section serves as a point of reference for known results regarding some inequalities related to convex functions (both real-valued and operator-valued functions).

### 2.1 Jensen’s inequality

Jensen’s inequality for convex functions plays a crucial role in the theory of inequalities due to the fact that other inequalities, such as the arithmetic-geometric mean, Hölder, Minkowski and Ky Fan’s inequalities, can be obtained as particular cases of it.

Let C be a convex subset of the linear space X and f be a convex function on C. If $$\mathbf{p}=(p_{1},\dots, p_{n})$$ is a probability sequence and $$\mathbf{x}=(x_{1},\dots,x_{n}) \in C^{n}$$, then
$$f \Biggl(\sum_{i=1}^{n} p_{i}x_{i} \Biggr) \leq\sum_{i=1}^{n} p_{i}f(x_{i}).$$
(4)
This inequality is referred to as Jensen’s inequality. Recently, Dragomir  obtained the following refinement of Jensen’s inequality:
\begin{aligned} f \Biggl(\sum_{j=1}^{n} p_{j}x_{j} \Biggr) \leq& \min_{k\in\{1,\dots, n\}} \biggl[ (1-p_{k}) f \biggl(\frac{\sum_{j=1}^{n}p_{j}x_{j}-p_{k}x_{k}}{1-p_{k}} \biggr)+ p_{k}f(x_{k}) \biggr] \\ \leq& \frac{1}{n} \Biggl[\sum_{k=1}^{n} (1-p_{k}) f \biggl(\frac {\sum_{j=1}^{n}p_{j}x_{j}-p_{k}x_{k}}{1-p_{k}} \biggr)+ \sum _{k=1}^{n} p_{k}f(x_{k}) \Biggr] \\ \leq&\max_{k\in\{1,\dots, n\}} \biggl[ (1-p_{k}) f \biggl( \frac {\sum_{j=1}^{n}p_{j}x_{j}-p_{k}x_{k}}{1-p_{k}} \biggr)+ p_{k}f(x_{k}) \biggr] \\ \leq& \sum_{j=1}^{n} p_{j} f(x_{j}), \end{aligned}
(5)
where f, $$x_{k}$$ and $$p_{k}$$ are as defined above. For other refinements of Jensen’s inequality, we refer the reader to Pečarić and Dragomir  and Dragomir .
The above result provides an approach different from the one that Pečarić and Dragomir  obtained in 1989:
\begin{aligned} f \Biggl(\sum_{i=1}^{n}p_{i}x_{i} \Biggr) \leq& \sum_{i_{1},\dots ,i_{k+1}=1}^{n} p_{i_{1}}\dots p_{i_{k+1}} f \biggl(\frac{x_{i_{1}} + \cdots + x_{i_{k+1}} }{k+1} \biggr) \\ \leq& \sum_{i_{1},\dots,i_{k}=1}^{n} p_{i_{1}} \dots p_{i_{k}} f \biggl(\frac{x_{i_{1}} + \cdots+ x_{i_{k}} }{k} \biggr) \\ \leq& \dots\leq\sum_{i=1}^{n} p_{i} f(x_{i}) \end{aligned}
(6)
for $$k\geq1$$, and p, x are as defined above.
If $$q_{1},\dots, q_{k}\geq0$$ with $$\sum_{j=1}^{k} q_{j} =1$$, then the following refinement obtained in 1994 by Dragomir  also holds:
\begin{aligned} f \Biggl(\sum_{i=1}^{n}p_{i}x_{i} \Biggr) \leq& \sum_{i_{1},\dots ,i_{k}=1}^{n} p_{i_{1}}\dots p_{i_{k}} f \biggl(\frac{x_{i_{1}} + \cdots+ x_{i_{k}}}{k} \biggr) \\ \leq& \sum_{i_{1},\dots,i_{k}=1}^{n} p_{i_{1}}\dots p_{i_{k}} f(q_{1}x_{i_{1}} + \cdots+q_{k}x_{i_{k}}) \\ \leq& \sum_{i=1}^{n} p_{i} f(x_{i}), \end{aligned}
(7)
where $$1\leq k\leq n$$ and p, x are as defined above.

For more refinements and applications related to the generalised triangle inequality, the arithmetic-geometric mean inequality, the f-divergence measures, Ky Fan’s inequality, etc., we refer the readers to  and .

### 2.2 Hermite-Hadamard’s inequality

The following inequality also holds for any convex function f defined on $$\mathbb{R}$$:
$$(b-a)f \biggl(\frac{a+b}{2} \biggr) \leq\int _{a}^{b}f(x)\,dx\leq(b-a)\frac {f(a)+f(b)}{2}, \quad a,b\in\mathbb{R}.$$
(8)
It was first discovered by Hermite in 1881 in the journal Mathesis . However, this result went unnoticed in the mathematical literature and was not widely known as Hermite’s result .

Beckenbach, a leading expert on the history and the theory of convex functions, wrote that this inequality was proven by Hadamard in 1893 . In 1974, Mitrinović found Hermite’s note in Mathesis . Since (8) was known as Hadamard’s inequality, the inequality is now commonly referred to as Hermite-Hadamard’s inequality .

Hermite-Hadamard’s inequality has been extended in many different directions. One of the extensions of this inequality is in the vector space setting. We start with the following definitions and notation. Let X be a vector space and x, y be two distinct vectors in X. We define the segment generated by x and y to be the set
$$[x,y]:= \bigl\{ (1-t)x+ty, t\in[0,1]\bigr\} .$$
For any real-valued function f defined on the segment $$[x,y]$$, there exists an associated function $$g_{x,y}: [0,1] \rightarrow\mathbb{R}$$ with
$$g_{x,y}(t)=f\bigl[(1-t)x+ty\bigr].$$
We remark that f is convex on $$[x,y]$$ if and only if $$g_{x,y}$$ is convex on $$[0,1]$$. For any convex function defined on a segment $$[x,y] \subset X$$, we have the Hermite-Hadamard integral inequality (cf. Dragomir [14, p.2] and Dragomir [15, p.2]):
$$f \biggl(\frac{x+y}{2} \biggr) \leq\int_{0}^{1}f \bigl[(1-t)x+ty\bigr]\,dt\leq\frac {f(x)+f(y)}{2},\quad x,y\in X,$$
(9)
which can be derived from the classical Hermite-Hadamard inequality (8) applied to the convex function $$g_{x,y}:[0,1]\rightarrow\mathbb {R}$$. Considering the function $$f(x)=\Vert x\Vert ^{p}$$ ($$x\in X$$ and $$1\leq p<\infty$$), which is convex on X, we have the following norm inequality (derived from (9)) [16, p.106]:
$$\biggl\Vert \frac{x+y}{2}\biggr\Vert ^{p} \leq \int_{0}^{1}\bigl\Vert (1-t)x+ty\bigr\Vert ^{p}\,dt\leq \frac {\Vert x\Vert ^{p}+\Vert y\Vert ^{p}}{2}$$
(10)
for any $$x,y\in X$$.
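A small numerical sketch of the norm inequality (10) (our own choices of X, p, and discretisation, not from the paper): take $$X=\mathbb{R}^{5}$$ with the Euclidean norm, $$p=3$$, and approximate the integral on a fine grid.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(5)
y = rng.standard_normal(5)
p = 3  # any 1 <= p < infinity works

# || (1-t) x + t y ||^p on a fine grid; the mean approximates the
# integral in (10) over t in [0, 1]
ts = np.linspace(0.0, 1.0, 20001)
vals = np.linalg.norm(np.outer(1 - ts, x) + np.outer(ts, y), axis=1) ** p
integral = vals.mean()

lhs = np.linalg.norm((x + y) / 2) ** p                          # left side of (10)
rhs = (np.linalg.norm(x) ** p + np.linalg.norm(y) ** p) / 2     # right side of (10)
assert lhs <= integral <= rhs
```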

### 2.3 Non-commutative generalisation of Jensen’s inequality

Hansen  discussed Jensen’s operator inequality for operator monotone functions. Motivated by Aujla’s work  on the matrix convexity of functions of two variables, Hansen  characterised operator convex functions of two variables in terms of a non-commutative generalisation of Jensen’s inequality (cf. [19, Theorem 3.1]). A simplified proof of this result formulated for matrices is given in Aujla . The case for several variables is given in Hansen . The case for self-adjoint elements in the algebra $$M_{n}$$ of n-square matrices is given in Hansen and Pedersen . Finally, Hansen and Pedersen  presented a generalisation of the above results for self-adjoint operators defined on a Hilbert space.

### Theorem 1

We denote by $$\mathcal{B}(H)$$ the Banach algebra of all bounded linear operators on the Hilbert space H. If f is a real continuous function on an interval I, and $$\mathcal {A}(H)$$ is the set of bounded self-adjoint operators on a Hilbert space H with spectra in I, then the following conditions are equivalent:
1. (i)

f is operator convex;

2. (ii)

$$f(\sum_{i=1}^{n} a_{i}^{*}x_{i}a_{i})\leq\sum_{i=1}^{n} a_{i}^{*}f(x_{i})a_{i}$$ for $$x_{1},\dots, x_{n}\in\mathcal{A}(H)$$ and $$a_{1},\dots, a_{n}\in\mathcal{B}(H)$$ with $$\sum_{i=1}^{n} a_{i}^{*}a_{i}=\mathbf{1}$$;

3. (iii)

$$f(v^{*}xv)\leq v^{*}f(x)v$$ for any $$x\in\mathcal{A}(H)$$ and any isometry $$v\in\mathcal{B}(H)$$;

4. (iv)

$$pf(pxp + s (\mathbf{1}-p))p\leq pf(x)p$$ for every self-adjoint operator $$x\in\mathcal{A}(H)$$ with spectrum in I, every projection $$p\in\mathcal{B}(H)$$, and every $$s\in I$$.

Recall the following definition of a subgradient .

### Definition 2

A vector $$x^{*}$$ is said to be a subgradient of a convex function $$f:K\subset\mathbb{R}^{n}\rightarrow\mathbb{R}$$ at point x if
$$f(x)-f(y) \geq x^{*} \cdot(x-y) \quad\text{for all } y \in K.$$

The following theorem is a useful characterisation of convexity (cf. Eisenberg [3, Theorem 1]).

### Theorem 3

If U is a nonempty open subset of $$\mathbb{R}^{n}$$, $$f:U \rightarrow \mathbb{R}$$ is a differentiable function on U, and K is a convex subset of U, then f is convex on K if and only if
$$f(x)-f(y) \geq(x-y)^{T}f'(y) \quad\textit{for all } x,y \in K$$
(11)
where $$f'(y)$$ denotes the gradient of f at y.

This theorem has been generalised and employed in obtaining optimality conditions of a non-differentiable minimax programming problem in complex spaces (cf. Lai and Liu ). Note that $$(x-y)^{T}f'(y)$$ can be written as $$f'(y)\cdot(x-y)$$, which can be interpreted as the directional derivative of f at point y in $$x-y$$ direction.
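The gradient inequality (11) is easy to probe numerically. The sketch below (our example, not from the paper) uses the differentiable convex function $$f(x)=\Vert x\Vert^{2}$$ on $$\mathbb{R}^{4}$$, whose gradient is $$f'(y)=2y$$; here (11) reduces to $$\Vert x\Vert^{2}-\Vert y\Vert^{2}\geq 2y\cdot(x-y)$$, with gap $$\Vert x-y\Vert^{2}$$.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(v):
    """f(x) = ||x||^2, a differentiable convex function on R^n."""
    return v @ v

def grad_f(v):
    """Gradient f'(y) = 2y."""
    return 2.0 * v

# gradient inequality (11): f(x) - f(y) >= f'(y) . (x - y) for all x, y
for _ in range(100):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    assert f(x) - f(y) >= grad_f(y) @ (x - y) - 1e-12
```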

## 3 Order generalised gradient

Throughout the paper, we use the following notation. We denote by $$\mathcal{B}(H)$$ the Banach algebra of all bounded linear operators on the Hilbert space $$(H,\langle\cdot,\cdot\rangle)$$, and by $$\mathcal {A}(H)$$ the linear subspace of all self-adjoint operators on H. We denote by $$\mathcal{P}_{+}(H) \subset\mathcal{A}(H)$$ the convex cone of all positive definite operators defined on H, that is, $$P \in\mathcal {P}_{+}(H)$$ if and only if $$\langle Px,x\rangle\geq0$$ for all $$x\in H$$, and $$\langle Px,x\rangle=0$$ implies $$x=0$$. This gives a partial ordering (we refer to it as the operator order) on $$\mathcal{A}(H)$$, where two elements $$A,B \in\mathcal{A}(H)$$ satisfy $$A \leq B$$ if and only if $$B-A\in \mathcal{P}_{+}(H)$$.

### Definition 4

Let $$\mathcal{C}$$ be a convex set in $$\mathcal{A}(H)$$. A function $$f:\mathcal{C} \rightarrow\mathcal{A}(H)$$ has the function $$\nabla _{f}:\mathcal{C}\times\mathcal{A}(H) \rightarrow\mathcal{A}(H)$$ as an order generalised gradient if
$$f(A)-f(B) \geq\nabla_{f}(B, A-B) \quad\text{for any } A,B \in\mathcal{C}$$
(12)
in the operator order of $$\mathcal{A}(H)$$.

### Remark 5

We remark that in (12), if f is a real-valued differentiable function on an open set $$U\subset\mathbb{R}^{n}$$, and $$\nabla_{f}$$ is the gradient of f, then (12) becomes (11). We also note that there is no assumption of convexity at this point. We discuss the convexity case in Section 5.

### Proposition 6

If $$Q \in\mathcal{A}(H)$$ and $$f:\mathcal{A}(H) \rightarrow\mathcal{A}(H)$$, $$f(A)=QA^{2}Q$$, then
$$\nabla_{f}(B,X):=Q(BX+XB)Q$$
(13)
is an order generalised gradient for f.

### Proof

Observe that $$BX+XB\in\mathcal{A}(H)$$ and if $$P \in\mathcal{A}(H)$$ then $$P(BX+XB)P \in\mathcal{A}(H)$$. We need to prove that
$$f(A)-f(B)\geq\nabla_{f}(B,A-B)$$
for any $$A,B \in\mathcal{A}(H)$$, that is,
$$QA^{2}Q-QB^{2}Q \geq Q\bigl[B(A-B)+(A-B)B \bigr]Q.$$
(14)
Since
$$Q\bigl[B(A-B)+(A-B)B\bigr]Q = QBAQ-QB^{2}Q+QABQ-QB^{2}Q,$$
hence (14) is equivalent to
$$QA^{2}Q-QB^{2}Q\geq QBAQ-QB^{2}Q+QABQ-QB^{2}Q$$
which is also equivalent to
$$Q(A-B)^{2}Q\geq0$$
which always holds. This completes the proof. □
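Proposition 6 can be checked numerically in finite dimensions, where the inequality (14) amounts to positive semidefiniteness of a matrix. The sketch below (ours, with random symmetric matrices standing in for self-adjoint operators) verifies that the difference collapses to $$Q(A-B)^{2}Q\geq0$$.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5

def sym(m):
    """Random m-by-m real symmetric matrix (a self-adjoint operator)."""
    g = rng.standard_normal((m, m))
    return (g + g.T) / 2

A, B, Q = sym(n), sym(n), sym(n)

# Inequality (14): Q A^2 Q - Q B^2 Q >= Q [B(A-B) + (A-B)B] Q
lhs = Q @ (A @ A - B @ B) @ Q
rhs = Q @ (B @ (A - B) + (A - B) @ B) @ Q
diff = lhs - rhs                     # equals Q (A - B)^2 Q, a PSD matrix
assert np.linalg.eigvalsh(diff).min() >= -1e-9
```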

We denote by $$\mathcal{P}(H) \subset\mathcal{A}(H)$$ the convex cone of all nonnegative operators defined on H.

### Proposition 7

If $$P\in\mathcal{P}(H)$$, then the function $$f: \mathcal{A}(H) \rightarrow\mathcal {A}(H)$$, $$f(A)=APA$$ has
$$\nabla_{f}(B,X):=XPB+ BPX$$
(15)
as an order generalised gradient for f.

### Proof

Observe that $$XPB+BPX \in\mathcal{A}(H)$$. We need to prove that
\begin{aligned} APA-BPB \geq& (A-B)PB+BP(A-B) \\ =& APB-BPB+BPA-BPB, \end{aligned}
that is,
$$APA-APB-BPA+BPB \geq0.$$
But $$APA-APB-BPA+BPB = (A-B)P(A-B)$$, and $$(A-B)P(A-B)\geq0$$ since $$P\geq0$$, which completes the proof. □

Recall that $$\mathcal{P}_{+}(H) \subset\mathcal{A}(H)$$ denotes the convex cone of all positive definite operators defined on H, that is, $$P \in\mathcal{P}_{+}(H)$$ if and only if $$\langle Px,x\rangle\geq0$$ for all $$x\in H$$, and $$\langle Px,x\rangle=0$$ implies $$x=0$$.

### Proposition 8

Let $$f:\mathcal{P}_{+}(H)\rightarrow\mathcal{A}(H)$$ be defined by
$$f(A) = QA^{-1} Q,$$
where $$Q\in\mathcal{A}(H)$$. The function $$\nabla_{f}:\mathcal{P}_{+}(H) \times \mathcal {P}_{+}(H) \rightarrow\mathcal{A}(H)$$ with
$$\nabla_{f}(B,X)=-QB^{-1}XB^{-1}Q$$
is an order generalised gradient for f.

### Proof

For $$B\in\mathcal{P}_{+}(H)$$ we have $$B^{-1} \in\mathcal{P}_{+}(H)$$, hence $$B^{-1}XB^{-1} \in\mathcal{P}_{+}(H)$$ for any $$X\in\mathcal{P}_{+}(H)$$, and thus $$QB^{-1}XB^{-1}Q$$ is a nonnegative operator; in particular, $$\nabla_{f}(B,X) \in\mathcal{A}(H)$$. We need to prove that
$$QA^{-1}Q-QB^{-1}Q \geq-QB^{-1}(A-B)B^{-1}Q,$$
that is,
$$QA^{-1}(B-A)B^{-1}Q + QB^{-1}(A-B)B^{-1}Q \geq0$$
or equivalently
$$QA^{-1}(B-A)B^{-1}Q - QB^{-1}(B-A)B^{-1}Q \geq0$$
or
$$Q\bigl(A^{-1}-B^{-1}\bigr) (B-A)B^{-1}Q \geq0.$$
But
\begin{aligned} Q\bigl(A^{-1}-B^{-1}\bigr) (B-A)B^{-1}Q =& Q \bigl(A^{-1}-B^{-1}\bigr)AA^{-1}(B-A)B^{-1}Q \\ =& Q\bigl(A^{-1}-B^{-1}\bigr)A\bigl(A^{-1}-B^{-1} \bigr)Q\geq0, \end{aligned}
which is true since for $$A\in\mathcal{P}_{+}(H)$$ we have that
$$\bigl(A^{-1}-B^{-1}\bigr)A\bigl(A^{-1}-B^{-1} \bigr)\geq0$$
and $$Q\in\mathcal{A}(H)$$. □
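As with Proposition 6, this can be checked numerically (our sketch, using random positive definite matrices for $$A,B$$ and a random symmetric Q): the defining inequality (12) for $$f(A)=QA^{-1}Q$$ reduces to positive semidefiniteness of the matrix below.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5

def pos_def(m):
    """Random m-by-m positive definite matrix."""
    g = rng.standard_normal((m, m))
    return g @ g.T + np.eye(m)

A, B = pos_def(n), pos_def(n)
g = rng.standard_normal((n, n))
Q = (g + g.T) / 2                    # self-adjoint Q

# Inequality (12) for f(A) = Q A^{-1} Q with
# grad_f(B, X) = -Q B^{-1} X B^{-1} Q:
Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)
diff = (Q @ Ai @ Q - Q @ Bi @ Q) + Q @ Bi @ (A - B) @ Bi @ Q
# equals Q (A^{-1} - B^{-1}) A (A^{-1} - B^{-1}) Q, a PSD matrix
assert np.linalg.eigvalsh(diff).min() >= -1e-6
```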

## 4 Inequalities involving order generalised gradients

We start this section with the following definition.

### Definition 9

An order generalised gradient $$\nabla_{f}:\mathcal{C} \times\mathcal{A}(H) \rightarrow\mathcal{A}(H)$$ is
1. (i)
operator convex if
$$\nabla_{f} (B, \alpha X + \beta Y) \leq\alpha\nabla_{f}(B,X)+ \beta \nabla_{f}(B,Y)$$
for any $$B\in\mathcal{C}$$, $$X,Y \in\mathcal{A}(H)$$ and $$\alpha,\beta\geq0$$ with $$\alpha+\beta=1$$;

2. (ii)
sub-additive if
$$\nabla_{f} (B, X + Y) \leq\nabla_{f}(B,X)+ \nabla_{f}(B,Y)$$
for any $$B\in\mathcal{C}$$ and $$X,Y \in\mathcal{A}(H)$$;

3. (iii)
positive homogeneous if
$$\nabla_{f} (B, \alpha X ) = \alpha\nabla_{f}(B,X)$$
for any $$B\in\mathcal{C}$$, $$X\in\mathcal{A}(H)$$ and $$\alpha\geq0$$;

4. (iv)
operator linear if
$$\nabla_{f} (B, \alpha X + \beta Y) = \alpha\nabla_{f}(B,X)+ \beta\nabla_{f}(B,Y)$$
for any $$B\in\mathcal{C}$$, $$X,Y \in\mathcal{A}(H)$$ and $$\alpha,\beta\in \mathbb{R}$$.

It can be seen that if $$\nabla_{f}(\cdot,\cdot)$$ is operator linear, then it is positive homogeneous and sub-additive. If $$\nabla_{f}(\cdot,\cdot)$$ is positive homogeneous and sub-additive, then it is operator convex.

### Theorem 10

Let $$f:\mathcal{C}\rightarrow\mathcal{A}(H)$$ be operator convex and $$\nabla _{f}:\mathcal {C} \times\mathcal{A}(H) \rightarrow\mathcal{A}(H)$$ be an order generalised gradient for f. Then, for any $$A,B \in\mathcal{C}$$ and $$t\in[0,1]$$, we have the inequalities
\begin{aligned} &{-}(1-t) \nabla_{f}\bigl(B, -t(B-A)\bigr)-t \nabla_{f}\bigl(A, (1-t) (B-A)\bigr) \\ &\quad \geq tf(A)+(1-t)f(B)-f\bigl(tA+(1-t)B\bigr) \\ &\quad \geq \nabla_{f}\bigl(tA+(1-t)B,0\bigr). \end{aligned}
(16)

### Proof

If we write the definition of $$\nabla_{f}$$ for B instead of A, we get
$$f(B)-f(A) \geq\nabla_{f}(A,B-A),$$
which is equivalent to
$$- \nabla_{f}(A,B-A)\geq f(A)-f(B).$$
Therefore, for any $$A,B\in\mathcal{C}$$, we have the gradient inequalities
$$- \nabla_{f}(A,B-A)\geq f(A)-f(B)\geq \nabla_{f}(B, A-B).$$
(17)
Since $$\mathcal{C}$$ is a convex set, by (17) we have
\begin{aligned} - \nabla_{f}\bigl(A,(1-t) (B-A)\bigr) \geq& f(A)-f \bigl(tA+(1-t)B\bigr) \\ \geq& \nabla_{f}\bigl(tA+(1-t)B, -(1-t) (B-A)\bigr) \end{aligned}
(18)
and
\begin{aligned} - \nabla_{f}\bigl(B,-t(B-A)\bigr) \geq& f(B)-f \bigl(tA+(1-t)B\bigr) \\ \geq& \nabla_{f}\bigl(tA+(1-t)B, t(B-A)\bigr) \end{aligned}
(19)
for any $$t\in(0,1)$$.
If we multiply (18) by t and (19) by $$(1-t)$$ and add the obtained inequalities, then we get
\begin{aligned} & {-}t\nabla_{f}\bigl(A,(1-t) (B-A)\bigr) - (1-t) \nabla_{f}\bigl(B, -t(B-A)\bigr) \\ &\quad \geq tf(A)+(1-t)f(B) - f\bigl(tA+(1-t)B\bigr) \\ &\quad \geq t \nabla_{f}\bigl(tA+(1-t)B, -(1-t) (B-A)\bigr)+ (1-t) \nabla_{f}\bigl(tA+(1-t)B, t(B-A)\bigr). \end{aligned}
Since $$\nabla_{f}(\cdot,\cdot)$$ is operator convex, we also know that
\begin{aligned} &t\nabla_{f}\bigl(tA+(1-t)B,-(1-t) (B-A)\bigr) + (1-t) \nabla_{f}\bigl(tA+(1-t)B,t(B-A)\bigr) \\ &\quad \geq \nabla_{f}\bigl(tA+(1-t)B, -t(1-t) (B-A)+(1-t)t(B-A)\bigr) \\ &\quad = \nabla_{f}\bigl(tA+(1-t)B,0\bigr), \end{aligned}
which completes the proof. □

### Corollary 11

Under the assumptions of Theorem  10,
1. (1)
If $$\nabla_{f}(\cdot,\cdot)$$ is positive homogeneous, then we have
\begin{aligned} & {-}t(1-t)\bigl[\nabla_{f}(B,A-B)+\nabla_{f}(A,B-A) \bigr] \\ &\quad \geq tf(A)+(1-t)f(B) -f\bigl(tA+(1-t)B\bigr)\geq0. \end{aligned}
(20)

2. (2)
If $$\nabla_{f}(\cdot,\cdot)$$ is operator linear, then
\begin{aligned} & t(1-t)\bigl[\nabla_{f}(B,B-A)-\nabla_{f}(A,B-A) \bigr] \\ &\quad \geq tf(A)+(1-t)f(B) -f\bigl(tA+(1-t)B\bigr)\geq0. \end{aligned}
(21)

### 4.1 Hermite-Hadamard type operator inequalities

In this subsection, we will state inequalities of Hermite-Hadamard type for order generalised gradients.

### Corollary 12

Under the assumptions of Theorem  10, if $$\nabla_{f}$$ is positive homogeneous, then we have the following inequality:
\begin{aligned}[b] &{-}\frac{1}{6}\bigl[\nabla_{f}(B,A-B)+ \nabla_{f}(A,B-A)\bigr] \\ &\quad \geq \frac{f(A)+f(B)}{2} -\int_{0}^{1} f\bigl(tA+(1-t)B\bigr)\,dt \geq0. \end{aligned}
(22)

We obtain (22) by integrating (20) over $$t \in[0,1]$$.
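Inequality (22) can be illustrated numerically (our sketch, not from the paper): take $$f(A)=A^{2}$$, which is operator convex, with the order generalised gradient $$\nabla_{f}(B,X)=BX+XB$$ (Proposition 6 with $$Q=\mathbf{1}$$), and approximate the operator-valued integral by a trapezoidal rule.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4

def sym(m):
    g = rng.standard_normal((m, m))
    return (g + g.T) / 2

A, B = sym(n), sym(n)

def f(M):                     # f(A) = A^2, operator convex
    return M @ M

def grad_f(Bm, X):            # its order generalised gradient (Q = 1)
    return Bm @ X + X @ Bm

# trapezoidal approximation of int_0^1 f(tA + (1-t)B) dt
ts = np.linspace(0.0, 1.0, 2001)
vals = np.array([f(t * A + (1 - t) * B) for t in ts])
w = np.full(len(ts), 1.0)
w[0] = w[-1] = 0.5
integral = (w[:, None, None] * vals).sum(axis=0) / (len(ts) - 1)

mid = (f(A) + f(B)) / 2 - integral                    # middle gap in (22)
upper = -(grad_f(B, A - B) + grad_f(A, B - A)) / 6    # left bound in (22)

tol = -1e-6
assert np.linalg.eigvalsh(mid).min() >= tol           # right inequality in (22)
assert np.linalg.eigvalsh(upper - mid).min() >= tol   # left inequality in (22)
```

For this f both sides can be computed in closed form: the gap is $$(A-B)^{2}/6$$ and the upper bound is $$(A-B)^{2}/3$$, so both eigenvalue checks hold exactly up to quadrature error.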

### Example 13

1. 1.
We consider the function $$f(A)=QA^{2}Q$$ with $$Q\in\mathcal{A}(H)$$. We note that the order generalised gradient
$$\nabla_{f}(B,X)=Q(BX+XB)Q$$
is operator linear. Then
\begin{aligned} \nabla_{f}(B,X) - \nabla_{f}(A,X) =& Q(BX+XB)Q - Q(AX+XA)Q \\ =& Q\bigl[(B-A)X+X(B-A)\bigr]Q. \end{aligned}
For $$X=B-A$$, we then get
$$\nabla_{f}(B,B-A)-\nabla_{f}(A,B-A) =2Q(B-A)^{2}Q.$$
Applying inequality (21) we have
\begin{aligned} &2t(1-t)Q(B-A)^{2}Q \\ &\quad \geq Q\bigl[tA^{2}+(1-t)B^{2}-\bigl(tA+(1-t)B \bigr)^{2}\bigr]Q \geq0 \end{aligned}
(23)
for any $$A,B \in\mathcal{A}(H)$$ and $$Q\in\mathcal{A}(H)$$.

2. 2.
We consider the function $$f(A)=APA$$ with $$P\in\mathcal{P}(H)$$. We note that the order generalised gradient
$$\nabla_{f}(B,X)= XPB +BPX$$
is operator linear. Then
\begin{aligned} \nabla_{f}(B,X) - \nabla_{f}(A,X) =& XPB +BPX-XPA-APX \\ =&XP(B-A)+(B-A)PX. \end{aligned}
If $$X=B-A$$, we then get
$$\nabla_{f}(B,B-A) - \nabla_{f}(A,B-A) = 2(B-A)P(B-A).$$
Applying inequality (21) we have
\begin{aligned} &2t(1-t) (B-A)P(B-A) \\ &\quad \geq tAPA+(1-t)BPB -\bigl(tA+(1-t)B\bigr)P\bigl(tA+(1-t)B\bigr)\geq0 \end{aligned}
for any $$A,B \in\mathcal{A}(H)$$ and $$P\in\mathcal{P}(H)$$.

3. 3.
For $$f(A)=QA^{-1}Q$$ with $$Q\in\mathcal{A}(H)$$ and $$A\in\mathcal{P}_{+}(H)$$, we note that the order generalised gradient
$$\nabla_{f}(B,X)= -QB^{-1}XB^{-1}Q$$
is operator linear. Then
$$\nabla_{f}(B,X)-\nabla_{f}(A,X) =-QB^{-1}XB^{-1}Q+QA^{-1}XA^{-1}Q.$$
For $$X=B-A$$, we get
\begin{aligned} &\nabla_{f}(B,B-A) - \nabla_{f}(A,B-A) \\ &\quad = -QB^{-1}(B-A)B^{-1}Q + QA^{-1}(B-A)A^{-1}Q \\ &\quad = -Q\bigl(B^{-1}-B^{-1}AB^{-1}\bigr)Q + Q \bigl(A^{-1}BA^{-1}-A^{-1}\bigr)Q \\ &\quad = QA^{-1}BA^{-1}Q + Q B^{-1}AB^{-1}Q -QB^{-1}Q-QA^{-1}Q. \end{aligned}
By (21) we have the inequality
\begin{aligned} & t(1-t)\bigl[QA^{-1}BA^{-1}Q+ QB^{-1}AB^{-1}Q-QB^{-1}Q-QA^{-1}Q \bigr] \\ &\quad \geq tQA^{-1}Q+(1-t)QB^{-1}Q-Q\bigl(tA+(1-t)B \bigr)^{-1}Q\geq0 \end{aligned}
for any $$A,B\in\mathcal{P}_{+}(H)$$ and $$Q\in\mathcal{A}(H)$$.
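The first of these examples, inequality (23), admits a direct numerical check (our sketch, with random symmetric matrices): both gaps reduce to multiples of $$Q(A-B)^{2}Q$$, which is positive semidefinite.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4

def sym(m):
    g = rng.standard_normal((m, m))
    return (g + g.T) / 2

A, B, Q = sym(n), sym(n), sym(n)
tol = -1e-9

for t in np.linspace(0.0, 1.0, 11):
    C = t * A + (1 - t) * B
    # middle term of (23); equals t(1-t) Q (A-B)^2 Q
    mid = Q @ (t * A @ A + (1 - t) * B @ B - C @ C) @ Q
    # upper bound of (23)
    upper = 2 * t * (1 - t) * Q @ (A - B) @ (A - B) @ Q
    assert np.linalg.eigvalsh(mid).min() >= tol
    assert np.linalg.eigvalsh(upper - mid).min() >= tol
```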

### 4.2 Jensen type operator inequalities

In this subsection, we will state inequalities of Jensen type for order generalised gradients.

### Theorem 14

Let $$f:\mathcal{C}\subset\mathcal{A}(H) \rightarrow\mathcal{A}(H)$$ be a function that possesses $$\nabla_{f}: \mathcal{C} \times\mathcal{A}(H) \rightarrow \mathcal{A}(H)$$ as an order generalised gradient. Then, for any $$A_{i} \in \mathcal {C}$$, $$i\in\{1,\dots, n\}$$ and $$p_{i}\geq0$$ with $$P_{n}:=\sum_{i=1}^{n} p_{i} >0$$, we have the inequalities
\begin{aligned}[b] & {-}\frac{1}{P_{n}} \sum _{j=1}^{n} p_{j} \nabla_{f} \Biggl(A_{j}, \frac{1}{P_{n}} \sum _{i=1}^{n} p_{i}A_{i} - A_{j} \Biggr) \\ &\quad \geq{\frac{1}{P_{n}} \sum_{j=1}^{n}p_{j}f(A_{j}) - f \Biggl(\frac{1}{P_{n}} \sum_{i=1}^{n} p_{i}A_{i} \Biggr)} \\ &\quad \geq{\frac{1}{P_{n}} \sum_{j=1}^{n} p_{j} \nabla_{f} \Biggl(\frac{1}{P_{n}} \sum _{i=1}^{n}p_{i}A_{i} , A_{j}-\frac{1}{P_{n}} \sum_{i=1}^{n}p_{i}A_{i} \Biggr).} \end{aligned}
(24)

### Proof

From the definition of an order generalised gradient we have
$$-\nabla_{f}(A,B-A) \geq f(A)-f(B) \geq \nabla_{f}(B,A-B).$$
(25)
Now, if we choose $$A=A_{j}$$, $$j\in\{1,\dots,n\}$$ and $$B=(1/P_{n}) \sum_{i=1}^{n} p_{i} A_{i}$$ in (25), then we get
\begin{aligned}[b] & {-}\nabla_{f} \Biggl(A_{j}, \frac{1}{P_{n}}\sum_{i=1}^{n} p_{i} A_{i}-A_{j} \Biggr) \\ &\quad \geq{f(A_{j}) - f \Biggl(\frac{1}{P_{n}} \sum _{i=1}^{n} p_{i} A_{i} \Biggr)} \\ &\quad \geq{\nabla_{f} \Biggl(\frac{1}{P_{n}} \sum _{i=1}^{n} p_{i} A_{i}, A_{j}-\frac {1}{P_{n}} \sum_{i=1}^{n} p_{i} A_{i} \Biggr)} \end{aligned}
(26)
for any $$j\in\{1,\dots, n\}$$. We obtain the desired inequalities (24) by multiplying the inequalities in (26) by $$p_{j}\geq0$$, summing over j from 1 to n, and dividing the resulting inequalities by $$P_{n}$$. □
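A finite-dimensional sketch of (24) (our choices, not from the paper): take $$f(A)=A^{2}$$ with its linear order generalised gradient $$\nabla_{f}(B,X)=BX+XB$$, three random symmetric matrices $$A_{j}$$, and weights $$p=(1,2,3)$$.

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 4, 3

def sym(m):
    g = rng.standard_normal((m, m))
    return (g + g.T) / 2

As = [sym(n) for _ in range(k)]
p = np.array([1.0, 2.0, 3.0])
Pk = p.sum()                          # P_n in the paper's notation

def f(M):
    return M @ M

def grad_f(Bm, X):
    return Bm @ X + X @ Bm

Abar = sum(pi * Ai for pi, Ai in zip(p, As)) / Pk     # weighted mean

# the three terms of (24)
jensen_gap = sum(pi * f(Ai) for pi, Ai in zip(p, As)) / Pk - f(Abar)
upper = -sum(pi * grad_f(Ai, Abar - Ai) for pi, Ai in zip(p, As)) / Pk
lower = sum(pi * grad_f(Abar, Ai - Abar) for pi, Ai in zip(p, As)) / Pk

tol = -1e-9
assert np.linalg.eigvalsh(jensen_gap - lower).min() >= tol
assert np.linalg.eigvalsh(upper - jensen_gap).min() >= tol
```

Since this $$\nabla_{f}$$ is operator linear, the lower bound collapses to $$\nabla_{f}(\bar A,0)=0$$, recovering the Jensen-type inequality (28).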

### Corollary 15

Under the assumptions of Theorem  14, we have the following results:
1. (1)
If $$\nabla_{f}:\mathcal{C} \times\mathcal{A}(H) \rightarrow\mathcal{A}(H)$$ is operator convex, then
$$\frac{1}{P_{n}} \sum_{j=1}^{n} p_{j}f(A_{j}) - f \Biggl(\frac{1}{P_{n}} \sum _{i=1}^{n} p_{i}A_{i} \Biggr) \geq\nabla_{f} \Biggl(\frac{1}{P_{n}} \sum _{i=1}^{n} p_{i}A_{i},0 \Biggr) .$$
(27)

2. (2)
If $$\nabla_{f}$$ is linear, then $$\nabla_{f}(B,0)=0$$ for any B, and we get Jensen’s inequality
$$\frac{1}{P_{n}} \sum_{j=1}^{n} p_{j}f(A_{j}) - f \Biggl(\frac{1}{P_{n}} \sum _{i=1}^{n} p_{i}A_{i} \Biggr) \geq0.$$
(28)

3. (3)
If $$\nabla_{f}$$ is linear, we have
\begin{aligned} &\frac{1}{P_{n}} \sum_{j=1}^{n}p_{j} \nabla _{f} \Biggl(A_{j},A_{j} - \frac{1}{P_{n}} \sum_{i=1}^{n} p_{i}A_{i} \Biggr) \\ &\quad \geq \frac{1}{P_{n}} \sum_{j=1}^{n} p_{j}f(A_{j}) - f \Biggl(\frac {1}{P_{n}} \sum _{i=1}^{n} p_{i}A_{i} \Biggr) \geq0. \end{aligned}
(29)

### Theorem 16

Under the assumptions of Theorem  14, for any $$A \in\mathcal{C}$$ we have the following inequalities:
\begin{aligned} &\frac{1}{P_{n}} \sum_{j=1}^{n} p_{j}f(A_{j})- \frac{1}{P_{n}} \sum _{j=1}^{n} p_{j} \nabla_{f}(A,A_{j}-A) \\ &\quad \geq f(A) \\ &\quad \geq \frac{1}{P_{n}} \sum_{j=1}^{n} p_{j}f(A_{j})+ \frac{1}{P_{n}} \sum _{j=1}^{n} p_{j} \nabla_{f}(A_{j},A-A_{j}). \end{aligned}

### Proof

From (25) we also have
$$-\nabla_{f}(A,A_{j}-A)\geq f(A)-f(A_{j})\geq\nabla_{f}(A_{j},A-A_{j}).$$
(30)
If we multiply (30) by $$p_{j}\geq0$$, take the sum over j from 1 to n, and divide the resulting inequalities by $$P_{n}$$, then
\begin{aligned} - \frac{1}{P_{n}} \sum_{j=1}^{n} p_{j} \nabla_{f}(A,A_{j}-A) \geq& f(A)- \frac {1}{P_{n}} \sum_{j=1}^{n} p_{j}f(A_{j}) \\ \geq& \frac{1}{P_{n}} \sum_{j=1}^{n} p_{j} \nabla_{f}(A_{j},A-A_{j}), \end{aligned}
which completes the proof. □

### Remark 17

If $$\nabla_{f}$$ is linear in Theorem 16, then we get simpler inequalities such as
$$f(A) \geq\frac{1}{P_{n}} \sum_{j=1}^{n} p_{j}f(A_{j}) + \frac{1}{P_{n}} \sum _{j=1}^{n} p_{j} \nabla_{f}(A_{j},A) - \frac{1}{P_{n}} \sum_{j=1}^{n} p_{j} \nabla _{f}(A_{j},A_{j})$$
and
$$f(A) \leq\frac{1}{P_{n}} \sum_{j=1}^{n} p_{j}f(A_{j}) - \frac{1}{P_{n}} \sum _{j=1}^{n} p_{j} \nabla_{f}(A,A_{j}) + \frac{1}{P_{n}} \sum_{j=1}^{n} p_{j} \nabla_{f}(A,A).$$
Therefore, if $$A\in\mathcal{A}(H)$$ is such that
$$\frac{1}{P_{n}} \sum_{j=1}^{n} p_{j} \nabla_{f}(A_{j},A) \geq\frac{1}{P_{n}} \sum_{j=1}^{n} p_{j} \nabla_{f}(A_{j},A_{j}),$$
then we have the Slater type inequality (cf. Slater  and Pečarić )
$$f(A) \geq\frac{1}{P_{n}} \sum_{j=1}^{n} p_{j}f(A_{j}) .$$

## 5 Connection with Gâteaux derivatives

In this section, we consider the connection between order generalised gradients and Gâteaux derivatives. We refer the reader to Dragomir  for some inequalities of Jensen type, involving Gâteaux derivatives of convex functions in linear spaces.

Let $$\mathcal{C} \subset\mathcal{A}(H)$$ be a convex set. Then $$f:\mathcal{C} \rightarrow\mathcal{A}(H)$$ is said to be operator convex if for all $$t\in[0,1]$$ and $$A,B \in\mathcal{C}$$, we have
$$f\bigl[(1-t)A+tB\bigr] \leq(1-t)f(A)+tf(B).$$

### Lemma 18

Let $$F:\mathbb{R}\rightarrow\mathcal{B}(H)$$ be a function such that $$\lim_{t\rightarrow0^{\pm}} F(t)$$ exists. Then $$\lim_{t\rightarrow0^{\pm}} F(t)$$ is a bounded linear operator and
$$\Bigl\langle \Bigl[\lim_{t\rightarrow0^{\pm}} F(t) \Bigr] x,y \Bigr\rangle = \lim_{t\rightarrow0^{\pm}} \bigl\langle F(t) x,y \bigr\rangle$$
for all nonzero $$x,y \in H$$.

### Proof

We provide the proof for the right-sided limit, as the proof for the left-sided limit follows similarly. Let $$\varepsilon>0$$ and for $$x,y \in H$$, where $$x,y\neq0$$, set $$\varepsilon_{0} = \varepsilon/(\Vert x\Vert _{H} \Vert y\Vert _{H})$$. Since $$\lim_{t\rightarrow0^{+}} F(t)=L$$, there exists $$\delta_{0}$$ such that
$$\bigl\Vert F(t)-L\bigr\Vert _{\mathcal{B}(H)} < \varepsilon_{0}$$
when $$0< t<\delta_{0}$$. Note that $$L \in\mathcal{B}(H)$$ since $$\mathcal{B}(H)$$ is a Banach space, hence $$F(t)-L$$ is also a bounded linear operator. Now, we have
\begin{aligned} \bigl\vert \bigl\langle F(t) x,y \bigr\rangle - \langle L x,y \rangle\bigr\vert \leq& \bigl\Vert \bigl(F(t)-L\bigr) x\bigr\Vert _{H} \Vert y\Vert _{H} \\ \leq&\bigl\Vert F(t)-L\bigr\Vert _{\mathcal{B}(H)}\Vert x\Vert _{H} \Vert y\Vert _{H} < \varepsilon_{0} \Vert x\Vert _{H} \Vert y\Vert _{H}=\varepsilon, \end{aligned}
which completes the proof. □

### Lemma 19

Let $$f: \mathcal{A}(H) \rightarrow\mathcal{A}(H)$$ be operator convex and $$A \in\mathcal{A}(H)$$. Then, for all $$B\in\mathcal{A}(H)$$, both limits
$$\bigl(\nabla_{-} f(A)\bigr) (B) = \lim_{t\rightarrow0^{-}} \frac{f(A+tB)-f(A)}{t}$$
and
$$\bigl(\nabla_{+} f(A)\bigr) (B)=\lim_{t\rightarrow0^{+}} \frac{f(A+tB)-f(A)}{t}$$
exist and are bounded self-adjoint operators.

### Proof

Fix an arbitrary $$B\in\mathcal{A}(H)$$, and let
$$G(t) = \frac{f(A+tB)-f(A)}{t}, \quad t\in\mathbb{R}\setminus\{0\}.$$
We want to show that G is nondecreasing. Let $$0 < t_{1}<t_{2}$$, then
\begin{aligned} f(A+t_{1}B)-f(A) =& f \biggl[\frac{t_{1}}{t_{2}}(A+t_{2}B)+ \biggl(1-\frac {t_{1}}{t_{2}} \biggr)A \biggr]-f(A) \\ \leq& \frac{t_{1}}{t_{2}} f(A+t_{2}B) + \biggl(1-\frac{t_{1}}{t_{2}} \biggr)f(A)-f(A) \\ =& \frac{t_{1}}{t_{2}} \bigl[f(A+t_{2}B)-f(A)\bigr]. \end{aligned}
Thus,
$$\frac{f(A+t_{1}B)-f(A) }{t_{1}} \leq\frac{f(A+t_{2}B)-f(A)}{t_{2}}.$$
Also,
\begin{aligned} \frac{f(A-t_{2}B)-f(A) }{-t_{2}} =& - \frac{f[A+t_{2}(-B)]-f(A)}{t_{2}} \\ \leq& - \frac{f[A+t_{1}(-B)]-f(A)}{t_{1}} = \frac{f(A-t_{1}B)-f(A)}{-t_{1}}. \end{aligned}
Note also that
\begin{aligned} f(A) = f \biggl(\frac{2A+t_{1}B-t_{1}B}{2} \biggr) =& f \biggl[\frac{1}{2}(A+t_{1}B)+ \frac{1}{2}(A-t_{1}B) \biggr] \\ \leq& \frac{1}{2} f(A+t_{1}B) +\frac{1}{2}f(A-t_{1}B), \end{aligned}
which implies that
$$2f(A) \leq f(A+t_{1}B) + f(A-t_{1}B);$$
and thus
$$f(A+t_{1}B)-f(A) \geq-\bigl[f(A-t_{1}B) -f(A)\bigr],$$
which implies that
$$\frac{f(A+t_{1}B)-f(A) }{t_{1}} \geq\frac{f(A-t_{1}B)-f(A)}{-t_{1}}.$$
Combining the above inequalities, we conclude that G is nondecreasing on $$\mathbb{R}\setminus\{0\}$$. This proves that both $$(\nabla_{-} f(A))(B)$$ and $$(\nabla_{+}f(A))(B)$$ exist and are bounded linear operators by Lemma 18. Note that for all $$t \in\mathbb{R}$$, $$t\neq0$$ and $$A,B \in\mathcal{A}(H)$$,
$$\frac{f[B+t(A-B)]-f(B)}{t}$$
is a self-adjoint operator. If $$x,y \in H$$, then Lemma 18 gives us
\begin{aligned} & \biggl\langle \biggl[\lim_{t\rightarrow0^{\pm}}\frac {f[B+t(A-B)]-f(B)}{t} \biggr] x,y \biggr\rangle \\ &\quad = \lim_{t\rightarrow 0^{\pm}} \biggl\langle \biggl[\frac{f[B+t(A-B)]-f(B)}{t} \biggr] x,y \biggr\rangle \\ &\quad = \lim_{t\rightarrow0^{\pm}} \biggl\langle x, \biggl[\frac {f[B+t(A-B)]-f(B)}{t} \biggr] y \biggr\rangle \\ &\quad = \biggl\langle x, \lim_{t\rightarrow0^{\pm}} \biggl[\frac {f[B+t(A-B)]-f(B)}{t} \biggr] y \biggr\rangle , \end{aligned}
which completes the proof. □
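The monotonicity of the difference quotient G is transparent in a concrete case (our sketch, not from the paper): for $$f(A)=A^{2}$$ one has $$G(t)=AB+BA+tB^{2}$$, so $$G(t_{2})-G(t_{1})=(t_{2}-t_{1})B^{2}\geq0$$, including across $$t=0$$.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 4

g1 = rng.standard_normal((n, n)); A = (g1 + g1.T) / 2
g2 = rng.standard_normal((n, n)); B = (g2 + g2.T) / 2

def f(M):                     # f(A) = A^2, operator convex
    return M @ M

def G(t):
    """Difference quotient (f(A + tB) - f(A)) / t from Lemma 19."""
    return (f(A + t * B) - f(A)) / t

# G is nondecreasing in the operator order: G(t2) - G(t1) = (t2 - t1) B^2
for t1, t2 in [(-1.0, -0.5), (-0.5, 0.3), (0.3, 1.0)]:
    assert np.linalg.eigvalsh(G(t2) - G(t1)).min() >= -1e-9
```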

### Theorem 20

Let $$\mathcal{C} \subset\mathcal{A}(H)$$ be a convex set and $$f:\mathcal{C} \rightarrow\mathcal{A}(H)$$ be operator convex. Then $$\nabla_{\pm}f$$ defined by
$$\bigl(\nabla_{\pm}f(A)\bigr) (B) =\lim _{t\rightarrow0^{\pm}}\frac{f(A+tB)-f(A)}{t}, \quad A,B \in\mathcal{C}$$
(31)
are order generalised gradients for f.

### Proof

Let $$t\in(0,1)$$ and $$A,B \in\mathcal{C}$$. Since f is operator convex, we have
\begin{aligned} \frac{f[B+t(A-B)]-f(B)}{t} =& \frac{f[(1-t)B+tA]-f(B)}{t} \\ \leq& \frac{(1-t)f(B)+tf(A)-f(B)}{t} = f(A)-f(B). \end{aligned}
This is equivalent to
$$K:=f(A)-f(B)- \frac{f[B+t(A-B)]-f(B)}{t} \in\mathcal{P}_{+}(H).$$
Note that for all $$x\in H$$,
$$\Bigl\langle \Bigl[ \lim_{t\rightarrow0^{\pm}} K \Bigr] x,x \Bigr\rangle = \lim_{t\rightarrow0^{\pm}} \langle K x,x \rangle$$
by Lemma 18. Since $$K \in\mathcal{P}_{+}(H)$$, $$\langle K x,x \rangle\geq 0$$, hence $$\langle [ \lim_{t\rightarrow0^{\pm}} K ] x,x \rangle\geq0$$, which implies that
$$\lim_{t\rightarrow0^{\pm}} \biggl[f(A)-f(B)- \frac {f[B+t(A-B)]-f(B)}{t} \biggr] \in \mathcal{P}_{+}(H).$$
Therefore,
$$\bigl(\nabla_{+} f(B) \bigr) (A-B) =\lim_{t\rightarrow0^{+}} \frac {f[B+t(A-B)]-f(B)}{t} \leq f(A)-f(B).$$
Lemma 19 gives us
$$\bigl(\nabla_{-} f(B) \bigr) (A-B)\leq \bigl(\nabla_{+} f(B) \bigr) (A-B),$$
which implies that
$$\bigl(\nabla_{-} f(B) \bigr) (A-B) \leq f(A)-f(B).$$
Thus both $$\nabla_{+} f$$ and $$\nabla_{-} f$$ are order generalised gradients. □
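As a sanity check of Theorem 20, the gradient inequality can be tested numerically. The sketch below (a minimal illustration, assuming numpy is available) uses the operator convex function $$f(X)=X^{2}$$, whose Gâteaux derivative is $$(\nabla f(A))(B)=AB+BA$$, with real symmetric matrices standing in for self-adjoint operators; in this case the gap $$f(A)-f(B)-(\nabla f(B))(A-B)$$ equals $$(A-B)^{2}$$ exactly, hence is positive semidefinite.

```python
import numpy as np

rng = np.random.default_rng(0)

def sym(n):
    # random real symmetric matrix, standing in for a self-adjoint operator
    M = rng.standard_normal((n, n))
    return (M + M.T) / 2

def grad_square(A, B):
    # Gateaux derivative of f(X) = X^2 at A in direction B
    return A @ B + B @ A

A, B = sym(4), sym(4)

# gap of the order generalised gradient inequality: f(A) - f(B) - (grad f(B))(A - B)
K = A @ A - B @ B - grad_square(B, A - B)

# for f(X) = X^2 the gap is exactly (A - B)^2, hence positive semidefinite
assert np.allclose(K, (A - B) @ (A - B))
assert np.linalg.eigvalsh(K).min() >= -1e-10
```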

### Proposition 21

Let $$f: \mathcal{A}(H) \rightarrow\mathcal{A}(H)$$ be operator convex and $$A \in \mathcal{A}(H)$$. The right Gâteaux derivative of f is sub-additive, i.e.
$$\bigl(\nabla_{+} f(A)\bigr) (B+C) \leq\bigl(\nabla_{+}f(A)\bigr) (B) + \bigl( \nabla_{+}f(A)\bigr) (C)$$
for any $$B,C \in\mathcal{A}(H)$$. The left Gâteaux derivative of f is super-additive, i.e.
$$\bigl(\nabla_{-} f(A)\bigr) (B+C) \geq\bigl(\nabla_{-}f(A)\bigr) (B) + \bigl( \nabla_{-}f(A)\bigr) (C)$$
for any $$B,C \in\mathcal{A}(H)$$.

### Proof

Since f is operator convex, we have the following for any $$B,C \in\mathcal{A}(H)$$ and $$t >0$$:
\begin{aligned} \frac{f[A+t(B+C)]-f(A)}{t} =& \frac{f[\frac{1}{2} (A+2tB)+\frac{1}{2} (A+2tC)]-f(A)}{t} \\ \leq& \frac{f(A+2tB)-f(A)}{2t} +\frac{f(A+2tC)-f(A)}{2t}. \end{aligned}
By a similar argument to the proof of Theorem 20, we conclude that
\begin{aligned} \bigl(\nabla_{+} f(A)\bigr) (B+C) =&\lim_{t\rightarrow0^{+}}\frac {f[A+t(B+C)]-f(A)}{t} \\ \leq& \lim_{t\rightarrow0^{+}} \frac{f(A+2tB)-f(A)}{2t} +\lim _{t\rightarrow0^{+}}\frac{f(A+2tC)-f(A)}{2t} \\ =&\bigl(\nabla_{+}f(A)\bigr) (B) + \bigl(\nabla_{+}f(A)\bigr) (C) \end{aligned}
as desired. The proof for the left Gâteaux derivative of f follows similarly. □
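For Gâteaux differentiable f the derivative is additive in its second argument, so the sub-additivity and super-additivity of Proposition 21 both hold with equality. A minimal numerical illustration (assuming numpy) with the operator convex function $$f(X)=X^{2}$$:

```python
import numpy as np

rng = np.random.default_rng(1)

def sym(n):
    # random real symmetric matrix
    M = rng.standard_normal((n, n))
    return (M + M.T) / 2

def grad_square(A, B):
    # Gateaux derivative of f(X) = X^2 at A in direction B
    return A @ B + B @ A

A, B, C = sym(3), sym(3), sym(3)

# additivity in the direction: the equality case of Proposition 21
assert np.allclose(grad_square(A, B + C),
                   grad_square(A, B) + grad_square(A, C))
```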

### Remark 22

We remark that the Gâteaux lateral derivatives are always positively homogeneous with respect to the second variable, i.e. for any function $$f: \mathcal{A}(H) \rightarrow\mathcal{A}(H)$$ and fixed $$A\in\mathcal{A}(H)$$,
$$\bigl(\nabla_{\pm}f(A)\bigr) (\alpha B) = \alpha\bigl( \nabla_{\pm}f(A)\bigr) (B)$$
for all $$\alpha\geq0$$ and $$B\in\mathcal{A}(H)$$. The Gâteaux derivative, when it exists, is homogeneous with respect to the second variable, i.e. for any function $$f: \mathcal{A}(H) \rightarrow \mathcal {A}(H)$$ and fixed $$A\in\mathcal{A}(H)$$,
$$\bigl(\nabla f(A)\bigr) (\alpha B) = \alpha\bigl(\nabla f(A)\bigr) (B)$$
for all $$\alpha\in\mathbb{R}$$ and $$B\in\mathcal{A}(H)$$.

The following result restates Theorem 3 in the setting of operator-valued functions.

### Corollary 23

Let $$\mathcal{C} \subset\mathcal{A}(H)$$ be a convex set and $$f:\mathcal{C} \rightarrow\mathcal{A}(H)$$ be a Gâteaux differentiable function. Then f is operator convex if and only if $$\nabla f$$ defined by
$$\bigl(\nabla f(A)\bigr) (B) =\lim_{t\rightarrow0} \frac{f(A+tB)-f(A)}{t}, \quad A,B \in\mathcal{C}$$
(32)
is an order generalised gradient.

### Proof

For any $$A,B \in\mathcal{C}$$, if f is operator convex, then by Theorem 20
$$\bigl(\nabla_{\pm}f(A)\bigr) (B) =\lim_{t\rightarrow0^{\pm}} \frac{f(A+tB)-f(A)}{t}$$
are order generalised gradients. Since f is assumed to be Gâteaux differentiable, both limits are equal, hence
$$\bigl(\nabla f(A)\bigr) (B) =\lim_{t\rightarrow0}\frac{f(A+tB)-f(A)}{t}$$
is an order generalised gradient for any $$A,B \in\mathcal{C}$$. Conversely, suppose that $$\nabla f$$ is an order generalised gradient; then
$$\bigl(\nabla f(B)\bigr) (A-B)\leq f(A)-f(B)$$
for any $$A,B \in\mathcal{C}$$. Let $$C,D \in\mathcal{C}$$, $$t\in(0,1)$$, and choose $$A=C$$ and $$B=tC+(1-t)D$$. Then we have
$$(1-t) \bigl(\nabla f\bigl[tC+(1-t)D\bigr]\bigr) (C-D)\leq f(C)-f \bigl[tC+(1-t)D\bigr].$$
(33)
Let $$A=D$$ and $$B=tC+(1-t)D$$. Then we have
$$(-t) \bigl(\nabla f\bigl[tC+(1-t)D\bigr]\bigr) (C-D)\leq f(D)-f \bigl[tC+(1-t)D\bigr].$$
(34)
Multiply (33) by t and (34) by $$(1-t)$$, and add the resulting inequalities to obtain
$$f\bigl[tC+(1-t)D\bigr] \leq tf(C)+(1-t)f(D),$$
which completes the proof. □
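The gradient inequality underlying Corollary 23 can also be tested on another standard operator convex function, $$f(X)=X^{-1}$$ on the positive definite operators, whose Gâteaux derivative is $$(\nabla f(A))(B)=-A^{-1}BA^{-1}$$. The sketch below (assuming numpy) checks positive semidefiniteness of the gap on random positive definite matrices; here the gap equals $$A^{-1}-2B^{-1}+B^{-1}AB^{-1}=X^{\mathsf{T}}X$$ with $$X=A^{1/2}B^{-1}-A^{-1/2}$$.

```python
import numpy as np

rng = np.random.default_rng(2)

def spd(n):
    # random symmetric positive definite matrix
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

def grad_inv(A, B):
    # Gateaux derivative of f(X) = X^{-1} at A in direction B
    Ai = np.linalg.inv(A)
    return -Ai @ B @ Ai

A, B = spd(4), spd(4)

# order generalised gradient inequality: f(A) - f(B) >= (grad f(B))(A - B)
K = np.linalg.inv(A) - np.linalg.inv(B) - grad_inv(B, A - B)
assert np.linalg.eigvalsh(K).min() >= -1e-8
```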

The following result follows by Corollary 12 and the fact that the Gâteaux lateral derivatives are positively homogeneous.

### Corollary 24

Let $$f:\mathcal{C}\subset\mathcal{A}(H) \rightarrow\mathcal{A}(H)$$ be operator convex. The following inequality holds:
\begin{aligned} &{-}\frac{1}{6} \bigl[\bigl(\nabla_{\pm}f(B)\bigr) (A-B)+\bigl( \nabla_{\pm}f(A)\bigr) (B-A)\bigr] \\ &\quad \geq \frac{f(A)+f(B)}{2} - \int_{0}^{1} f \bigl(tA+(1-t)B\bigr)\,dt \geq0. \end{aligned}
The above inequality also holds for $$\nabla f$$ when f is Gâteaux differentiable.
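For $$f(X)=X^{2}$$ the whole chain in Corollary 24 can be evaluated in closed form: $$\int_{0}^{1}(tA+(1-t)B)^{2}\,dt=\frac{A^{2}+B^{2}}{3}+\frac{AB+BA}{6}$$, the trapezoid gap equals $$\frac{1}{6}(A-B)^{2}$$, and the gradient upper bound equals $$\frac{1}{3}(A-B)^{2}$$. The sketch below (a numerical illustration, assuming numpy) checks both inequalities on random symmetric matrices.

```python
import numpy as np

rng = np.random.default_rng(3)

def sym(n):
    # random real symmetric matrix
    M = rng.standard_normal((n, n))
    return (M + M.T) / 2

def psd(M, tol=1e-8):
    # positive semidefiniteness up to roundoff
    return np.linalg.eigvalsh((M + M.T) / 2).min() >= -tol

A, B = sym(4), sym(4)
D = A - B

# exact value of int_0^1 (tA + (1-t)B)^2 dt
integral = (A @ A + B @ B) / 3 + (A @ B + B @ A) / 6

# -(1/6)[(grad f(B))(A-B) + (grad f(A))(B-A)] = (1/3)(A-B)^2 for f(X) = X^2
upper = D @ D / 3

# trapezoid gap (f(A)+f(B))/2 - integral, which equals (1/6)(A-B)^2
gap = (A @ A + B @ B) / 2 - integral

assert np.allclose(gap, D @ D / 6)
assert psd(upper - gap) and psd(gap)
```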

### Example 25

(1)
We note that the function $$f(x)=-\log(x)$$ is operator convex. The log function is (operator) Gâteaux differentiable with the following explicit formula for the derivative (cf. Pedersen [27, p.155]):
$$\bigl(\nabla\log(A)\bigr) (B)=\int_{0}^{\infty} (sI+A)^{-1}B(sI+A)^{-1}\,ds$$
for $$A,B\in\mathcal{A}(H)$$ and I the identity operator. Thus, we have the following inequality:
\begin{aligned} &\frac{1}{6} \biggl[\int_{0}^{\infty} (sI+B)^{-1}(A-B) (sI+B)^{-1}\,ds \\ &\qquad {}+\int_{0}^{\infty} (sI+A)^{-1}(B-A) (sI+A)^{-1}\,ds \biggr] \\ &\quad \geq -\frac{\log(A)+\log(B)}{2} + \int_{0}^{1} \log \bigl(tA+(1-t)B\bigr)\,dt \geq0. \end{aligned}

(2)
We consider the operator convex function $$f(x)=x\log(x)$$, and using the following representation (cf. Pedersen [27, p.155])
$$\log(t)=\int_{0}^{\infty} \frac{t-1}{(s+t)(s+1)}\,ds,$$
and noting the fact that $$\frac{d}{dt} t\log(t)= \log(t)$$, we have
$$\bigl(\nabla f(A)\bigr) (B)=\int_{0}^{\infty} \frac{1}{s+1} (sI+A)^{-1}(A-I)B\, ds.$$
Then we have the following inequalities:
\begin{aligned} &{-}\frac{1}{6} \biggl[\int_{0}^{\infty} \frac{1}{s+1} (sI+B)^{-1}(B-I) (A-B)\,ds \\ &\qquad {}+\int_{0}^{\infty} \frac{1}{s+1} (sI+A)^{-1}(A-I) (B-A)\,ds \biggr] \\ &\quad \geq\frac{A\log(A)+B\log(B)}{2} - \int_{0}^{1} \bigl[tA+(1-t)B\bigr]\log\bigl(tA+(1-t)B\bigr)\,dt\geq0. \end{aligned}
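The integral representation of $$\nabla\log$$ used in part (1) can be checked against a finite-difference quotient of the matrix logarithm. The following sketch (assuming numpy and scipy are available) integrates each entry of $$(sI+A)^{-1}B(sI+A)^{-1}$$ over $$[0,\infty)$$ by quadrature for a small positive definite matrix.

```python
import numpy as np
from scipy.linalg import logm
from scipy.integrate import quad

rng = np.random.default_rng(4)
n = 2
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # symmetric positive definite
C = rng.standard_normal((n, n))
B = (C + C.T) / 2                # self-adjoint direction

def integrand(s, i, j):
    # entry (i, j) of (sI + A)^{-1} B (sI + A)^{-1}
    R = np.linalg.inv(s * np.eye(n) + A)
    return (R @ B @ R)[i, j]

# (grad log(A))(B) via the integral representation
grad = np.array([[quad(integrand, 0, np.inf, args=(i, j))[0]
                  for j in range(n)] for i in range(n)])

# one-sided finite-difference quotient of the matrix logarithm
h = 1e-6
fd = (logm(A + h * B) - logm(A)).real / h

assert np.allclose(grad, fd, atol=1e-4)
```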

The following results follow by Theorems 14 and 16.

### Corollary 26

(Jensen type inequality)

Let $$f:\mathcal{C}\subset\mathcal{A}(H) \rightarrow\mathcal{A}(H)$$ be operator convex. Then, for any $$A_{i} \in\mathcal{C}$$, $$i\in\{1,\dots, n\}$$ and $$p_{i}\geq0$$ with $$P_{n}:=\sum_{i=1}^{n} p_{i} >0$$, we have the inequalities
\begin{aligned} & {-}\frac{1}{P_{n}} \sum _{j=1}^{n} p_{j} \bigl( \nabla_{\pm}f (A_{j})\bigr) \Biggl( \frac {1}{P_{n}} \sum _{i=1}^{n} p_{i}A_{i} - A_{j} \Biggr) \\ &\quad \geq{\frac{1}{P_{n}} \sum_{j=1}^{n}p_{j}f(A_{j}) - f \Biggl(\frac{1}{P_{n}} \sum_{i=1}^{n} p_{i}A_{i} \Biggr)} \\ &\quad \geq{\frac{1}{P_{n}} \sum_{j=1}^{n} p_{j} \Biggl(\nabla_{\pm}f \Biggl(\frac {1}{P_{n}} \sum _{i=1}^{n}p_{i}A_{i} \Biggr) \Biggr) \Biggl( A_{j}-\frac{1}{P_{n}} \sum _{i=1}^{n}p_{i}A_{i} \Biggr).} \end{aligned}
We also have, for any $$A \in\mathcal{C}$$,
\begin{aligned} &\frac{1}{P_{n}} \sum_{j=1}^{n} p_{j}f(A_{j})- \frac{1}{P_{n}} \sum _{j=1}^{n} p_{j} \bigl( \nabla_{\pm}f(A)\bigr) (A_{j}-A) \\ &\quad \geq f(A) \\ &\quad \geq \frac{1}{P_{n}} \sum_{j=1}^{n} p_{j}f(A_{j})+ \frac{1}{P_{n}} \sum _{j=1}^{n} p_{j} \bigl(\nabla_{\pm}f(A_{j})\bigr) (A-A_{j}). \end{aligned}
The above inequalities also hold for $$\nabla f$$ when f is Gâteaux differentiable.
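The first chain of inequalities in Corollary 26 can again be tested with $$f(X)=X^{2}$$, where $$(\nabla f(A))(B)=AB+BA$$. The sketch below (assuming numpy) checks both inequalities, in the operator order, for random symmetric matrices and random normalised weights.

```python
import numpy as np

rng = np.random.default_rng(5)

def sym(n):
    # random real symmetric matrix
    M = rng.standard_normal((n, n))
    return (M + M.T) / 2

def grad_square(A, B):
    # Gateaux derivative of f(X) = X^2 at A in direction B
    return A @ B + B @ A

def psd(M, tol=1e-8):
    # positive semidefiniteness up to roundoff
    return np.linalg.eigvalsh((M + M.T) / 2).min() >= -tol

n, m = 3, 5
As = [sym(n) for _ in range(m)]
p = rng.random(m)
p /= p.sum()                     # normalised weights, P_n = 1

Abar = sum(w * X for w, X in zip(p, As))   # weighted operator mean

upper = -sum(w * grad_square(X, Abar - X) for w, X in zip(p, As))
mid = sum(w * X @ X for w, X in zip(p, As)) - Abar @ Abar
lower = sum(w * grad_square(Abar, X - Abar) for w, X in zip(p, As))

assert psd(upper - mid) and psd(mid - lower)
```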

### Example 27

(1)
We have the following inequalities for the operator convex function $$f(x)=-\log(x)$$:
\begin{aligned} & \frac{1}{P_{n}} \sum _{j=1}^{n} p_{j} \int_{0}^{\infty} (sI+A_{j})^{-1} \Biggl( \frac{1}{P_{n}} \sum _{i=1}^{n} p_{i}A_{i} - A_{j} \Biggr) (sI+A_{j})^{-1}\,ds \\ &\quad \geq{-\frac{1}{P_{n}} \sum_{j=1}^{n}p_{j} \log(A_{j}) + \log \Biggl(\frac {1}{P_{n}} \sum _{i=1}^{n} p_{i}A_{i} \Biggr)} \\ &\quad \geq-\frac{1}{P_{n}} \sum_{j=1}^{n} p_{j} \int_{0}^{\infty} \Biggl(sI+ \frac {1}{P_{n}} \sum_{i=1}^{n}p_{i}A_{i} \Biggr)^{-1}\\ &\qquad {} \times\Biggl( A_{j}-\frac{1}{P_{n}} \sum _{i=1}^{n}p_{i}A_{i} \Biggr) \Biggl(sI+\frac{1}{P_{n}} \sum_{i=1}^{n}p_{i}A_{i} \Biggr)^{-1}\,ds \end{aligned}
and
\begin{aligned} &{-}\frac{1}{P_{n}} \sum_{j=1}^{n} p_{j}\log(A_{j})+ \frac{1}{P_{n}} \sum _{j=1}^{n} p_{j} \int_{0}^{\infty} (sI+A)^{-1}(A_{j}-A) (sI+A)^{-1}\,ds \\ &\quad \geq -\log(A) \\ &\quad \geq- \frac{1}{P_{n}} \sum_{j=1}^{n} p_{j}\log(A_{j})- \frac{1}{P_{n}} \sum _{j=1}^{n} p_{j} \int_{0}^{\infty}(sI+A_{j})^{-1}(A-A_{j}) (sI+A_{j})^{-1}\,ds. \end{aligned}

(2)
We have the following inequalities for the operator convex function $$f(x)=x\log(x)$$:
\begin{aligned} & {-}\frac{1}{P_{n}} \sum _{j=1}^{n} p_{j} \int_{0}^{\infty} \frac{1}{s+1} (sI+A_{j})^{-1}(A_{j}-I) \Biggl( \frac{1}{P_{n}} \sum_{i=1}^{n} p_{i}A_{i} - A_{j} \Biggr)\,ds \\ &\quad \geq{\frac{1}{P_{n}} \sum_{j=1}^{n}p_{j}A_{j} \log(A_{j}) - \Biggl(\frac {1}{P_{n}} \sum _{i=1}^{n} p_{i}A_{i} \Biggr) \log \Biggl(\frac{1}{P_{n}} \sum_{i=1}^{n} p_{i}A_{i} \Biggr)} \\ &\quad \geq\frac{1}{P_{n}} \sum_{j=1}^{n} p_{j} \int_{0}^{\infty} \frac{1}{s+1} \Biggl(sI+ \frac{1}{P_{n}} \sum_{i=1}^{n}p_{i}A_{i} \Biggr)^{-1}\\ &\qquad {} \times \Biggl(\frac {1}{P_{n}} \sum _{i=1}^{n}p_{i}A_{i} -I \Biggr) \Biggl( A_{j}-\frac{1}{P_{n}} \sum_{i=1}^{n}p_{i}A_{i} \Biggr)\,ds \end{aligned}
and
\begin{aligned} &\frac{1}{P_{n}} \sum_{j=1}^{n} p_{j}A_{j} \log(A_{j})- \frac{1}{P_{n}} \sum _{j=1}^{n} p_{j} \int _{0}^{\infty} \frac{1}{s+1} (sI+A)^{-1}(A-I) (A_{j}-A)\,ds \\ &\quad \geq A\log(A) \\ &\quad \geq \frac{1}{P_{n}} \sum_{j=1}^{n} p_{j}A_{j}\log(A_{j})+ \frac{1}{P_{n}} \sum _{j=1}^{n} p_{j} \int _{0}^{\infty} \frac{1}{s+1} (sI+A_{j})^{-1}(A_{j}-I) (A-A_{j})\,ds. \end{aligned}

## 6 Conclusions

For a function $$f:\mathcal{C} \rightarrow\mathcal{A}(H)$$ defined on a convex set $$\mathcal{C} \subset\mathcal{A}(H)$$, the function $$\nabla_{f}:\mathcal{C}\times \mathcal {A}(H) \rightarrow\mathcal{A}(H)$$ is an order generalised gradient if
$$f(A)-f(B) \geq\nabla_{f}(B, A-B) \quad\text{for any } A,B \in\mathcal{C}$$
in the operator order of $$\mathcal{A}(H)$$. We have the following operator inequalities.
(1)
Operator inequalities of Hermite-Hadamard type:
\begin{aligned} &{-}\frac{1}{6}\bigl[\nabla_{f}(B,A-B)+\nabla_{f}(A,B-A) \bigr] \\ &\quad \geq\frac{f(A)+f(B)}{2} -\int_{0}^{1} f \bigl(tA+(1-t)B\bigr)\,dt \geq0 \quad \text{for any } A,B\in\mathcal{C}. \end{aligned}

(2)
Operator inequalities of Jensen type:
\begin{aligned} & {-}\frac{1}{P_{n}} \sum _{j=1}^{n} p_{j} \nabla_{f} \Biggl(A_{j}, \frac{1}{P_{n}} \sum_{i=1}^{n} p_{i}A_{i} - A_{j} \Biggr) \\ &\quad \geq{\frac{1}{P_{n}} \sum_{j=1}^{n}p_{j}f(A_{j}) - f \Biggl(\frac{1}{P_{n}} \sum_{i=1}^{n} p_{i}A_{i} \Biggr)} \\ &\quad \geq{\frac{1}{P_{n}} \sum_{j=1}^{n} p_{j} \nabla_{f} \Biggl(\frac{1}{P_{n}} \sum _{i=1}^{n}p_{i}A_{i} , A_{j}-\frac{1}{P_{n}} \sum_{i=1}^{n}p_{i}A_{i} \Biggr);} \end{aligned}
and
\begin{aligned} &\frac{1}{P_{n}} \sum_{j=1}^{n} p_{j}f(A_{j})- \frac{1}{P_{n}} \sum _{j=1}^{n} p_{j} \nabla_{f}(A,A_{j}-A) \\ &\quad \geq f(A) \\ &\quad \geq \frac{1}{P_{n}} \sum_{j=1}^{n} p_{j}f(A_{j})+ \frac{1}{P_{n}} \sum _{j=1}^{n} p_{j} \nabla_{f}(A_{j},A-A_{j}) \end{aligned}
for any $$A\in\mathcal{C}$$, $$A_{i} \in\mathcal{C}$$, $$i\in\{1,\dots, n\}$$ and $$p_{i}\geq0$$ with $$P_{n}:=\sum_{i=1}^{n} p_{i} >0$$.

(3)
Operator inequalities of Slater type: if $$\nabla_{f}$$ is linear and $$A\in\mathcal{A}(H)$$ is such that
$$\frac{1}{P_{n}} \sum_{j=1}^{n} p_{j} \nabla_{f}(A_{j},A) \geq\frac{1}{P_{n}} \sum_{j=1}^{n} p_{j} \nabla_{f}(A_{j},A_{j}),$$
then
$$f(A) \geq\frac{1}{P_{n}} \sum_{j=1}^{n} p_{j}f(A_{j})$$
for any $$A\in\mathcal{C}$$, $$A_{i} \in\mathcal{C}$$, $$i\in\{1,\dots, n\}$$ and $$p_{i}\geq0$$ with $$P_{n}:=\sum_{i=1}^{n} p_{i} >0$$.
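A scalar illustration of the Slater type inequality (operators on a one-dimensional space), with the illustrative choice $$f(x)=x^{2}$$ and $$\nabla_{f}(x,b)=2xb$$, which is linear in its second argument: for positive points $$x_{j}$$, the smallest A satisfying the hypothesis is $$A=(\sum p_{j}x_{j}^{2})/(\sum p_{j}x_{j})$$. Assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(6)

# scalar case: f(x) = x^2, grad_f(x, b) = 2 x b (linear in b)
x = rng.random(5) + 0.5          # positive points x_j
p = rng.random(5)
p /= p.sum()                     # weights with P_n = 1

m1 = np.dot(p, x)                # weighted mean of x_j
m2 = np.dot(p, x ** 2)           # weighted mean of x_j^2

A = m2 / m1                      # satisfies sum p_j 2 x_j A >= sum p_j 2 x_j^2
assert 2 * m1 * A >= 2 * m2 - 1e-12

# Slater type conclusion: f(A) >= weighted mean of f(x_j)
assert A ** 2 >= m2 - 1e-12
```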

The notion of order generalised gradient extends that of subgradient to operator-valued functions, without assuming convexity. It is also connected to the Gâteaux lateral derivatives. If f is operator convex, then $$\nabla_{\pm}f$$ defined by
$$\bigl(\nabla_{\pm}f(A)\bigr) (B) =\lim _{t\rightarrow0^{\pm}}\frac{f(A+tB)-f(A)}{t}, \quad A,B \in\mathcal{C}$$
is an order generalised gradient. Furthermore, if $$f:\mathcal{C} \rightarrow\mathcal{A}(H)$$ is a Gâteaux differentiable function, we have the following characterisation: f is operator convex if and only if $$\nabla f$$ defined by
$$\bigl(\nabla f(A)\bigr) (B) =\lim_{t\rightarrow0} \frac{f(A+tB)-f(A)}{t}, \quad A,B \in\mathcal{C}$$
is an order generalised gradient. This characterisation of convexity is a generalised version of Theorem 3 of Section 2 (cf. Eisenberg [3, Theorem 1]).

## Declarations

### Acknowledgements

The research of E Kikianty is supported by the Claude Leon Foundation. The authors would like to thank the anonymous referees for valuable suggestions that have been incorporated in the final version of the manuscript.

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

## Authors’ Affiliations

(1)
School of Engineering and Science, Victoria University, PO Box 14428, Melbourne, Victoria, 8001, Australia
(2)
Department of Pure and Applied Mathematics, University of Johannesburg, PO Box 524, Auckland Park, 2006, South Africa

## References 