The interpolation of Young’s inequality using dyadics
Journal of Inequalities and Applications volume 2019, Article number: 140 (2019)
Abstract
In this article we interpolate Young’s inequality using a delicate treatment of dyadics. Although there are other simple methods to prove these results, we present this new approach hoping to reveal more of the hidden properties of such inequalities.
1 Introduction
Young’s inequality for real numbers states that, for \(a,b\in \mathbb{R}^{+}\) and \(p,q>1\) satisfying \(\frac{1}{p}+\frac{1}{q}=1\), we have
$$ ab\leq \frac{a^{p}}{p}+\frac{b^{q}}{q}. $$(1.1)
This inequality is equivalent to saying
$$ a^{\frac{1}{p}}b^{\frac{1}{q}}\leq \frac{a}{p}+\frac{b}{q} $$(1.2)
or
$$ a^{\nu }b^{1-\nu }\leq \nu a+(1-\nu )b,\quad 0\leq \nu \leq 1. $$(1.3)
Thus, this inequality is about comparing a product of powers of a and b with a convex sum of a and b.
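The weighted form \(a^{\nu }b^{1-\nu }\leq \nu a+(1-\nu )b\) is easy to verify numerically; a minimal sketch over an arbitrary grid of sample values:

```python
# Numerical sanity check of the weighted Young (AM-GM) inequality
# a^nu * b^(1-nu) <= nu*a + (1-nu)*b  for a, b > 0 and 0 <= nu <= 1.
def weighted_am_gm_holds(a, b, nu, tol=1e-12):
    return a**nu * b**(1 - nu) <= nu * a + (1 - nu) * b + tol

values = [0.1, 0.5, 1.0, 2.0, 7.3]
weights = [0.0, 0.25, 0.5, 0.75, 1.0]
assert all(weighted_am_gm_holds(a, b, nu)
           for a in values for b in values for nu in weights)
print("weighted Young inequality verified on the sample grid")
```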
Recall that when \(p+q\in \mathbb{N}\), we have
$$ (a+b)^{p+q}=\sum_{n=0}^{p+q}\binom{p+q}{n}a^{p+q-n}b^{n}. $$
Thus, \(a^{p+q}\) and \(b^{p+q}\) are the first and last terms in the expansion of \((a+b)^{p+q}\). On the other hand, \(a^{p}b^{q}\) is a term in the “middle” of the expansion, occurring when \(n=q\).
This raises the question of whether we can compare \(a^{p}b^{q}\) with terms before we reach the first and last ones. That is, is it possible to have an interpolated inequality of the form
$$ a^{p}b^{q}\leq \alpha\, a^{p+r}b^{q-r}+\beta\, a^{q-r}b^{p+r} $$
for \(p,q>0\) and \(\alpha +\beta =1\)? We can use inequality (1.3) to obtain the following interpolated version, whose simple proof appeared in [8].
Proposition 1.1
Let \(a,b\in \mathbb{R}^{+}\) and let \(p\geq q\geq r\geq 0\). Then
$$ a^{p}b^{q}\leq \frac{p-q+r}{p-q+2r}a^{p+r}b^{q-r}+\frac{r}{p-q+2r}a^{q-r}b^{p+r}. $$(1.4)
Proof
Let \(\frac{p-q+r}{p-q+2r}=\nu \). Then \(\frac{r}{p-q+2r}=1-\nu \), and hence, by (1.3),
$$ \begin{aligned} \frac{p-q+r}{p-q+2r}a^{p+r}b^{q-r}+\frac{r}{p-q+2r}a^{q-r}b^{p+r} &=\nu \bigl(a^{p+r}b^{q-r}\bigr)+(1-\nu )\bigl(a^{q-r}b^{p+r}\bigr) \\ &\geq \bigl(a^{p+r}b^{q-r}\bigr)^{\nu }\bigl(a^{q-r}b^{p+r}\bigr)^{1-\nu } \\ &=a^{p}b^{q}, \end{aligned} $$
where the last line is easily obtained by simplifying the powers. □
Now inequality (1.4) is an equality when \(r=0\) and reduces to the well-known Young inequality when \(r=q\). However, as r increases from 0 to q, we obtain “interpolated” inequalities.
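Inequality (1.4) and its endpoint behavior can be checked numerically; a minimal sketch with arbitrary sample values:

```python
def young_r_bound(a, b, p, q, r):
    # Right-hand side of the interpolated inequality for p >= q >= r >= 0.
    if r == 0:
        return a**p * b**q  # the r = 0 equality case
    nu = (p - q + r) / (p - q + 2 * r)
    return nu * a**(p + r) * b**(q - r) + (1 - nu) * a**(q - r) * b**(p + r)

a, b, p, q = 1.7, 0.6, 3.0, 2.0
lhs = a**p * b**q
# the bound holds for every r in [0, q] ...
for r in [0.0, 0.5, 1.0, 1.5, 2.0]:
    assert lhs <= young_r_bound(a, b, p, q, r) + 1e-12
# ... and r = q recovers the classical Young bound
classical = p / (p + q) * a**(p + q) + q / (p + q) * b**(p + q)
assert abs(young_r_bound(a, b, p, q, q) - classical) < 1e-12
```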
At this point, one starts asking how good these interpolated inequalities are when compared with the original Young inequality. That is, for \(0\leq r_{1}\leq r_{2}\leq q\), is there any comparison between
$$ \frac{p-q+r_{1}}{p-q+2r_{1}}a^{p+r_{1}}b^{q-r_{1}}+\frac{r_{1}}{p-q+2r_{1}}a^{q-r_{1}}b^{p+r_{1}}\quad \text{and}\quad \frac{p-q+r_{2}}{p-q+2r_{2}}a^{p+r_{2}}b^{q-r_{2}}+\frac{r_{2}}{p-q+2r_{2}}a^{q-r_{2}}b^{p+r_{2}}? $$
We will show that these r-versions increase as r increases from 0 to q, meaning that all these interpolated bounds lie between the two sides of Young’s inequality. See Proposition 3.7 below for the general proof. It should be noted that in this article we deal with rings in general; the scalar versions have already been shown in [8].
Then, the natural question, encountered with many types of inequalities, is whether one can generalize these inequalities to spaces other than the real numbers; among the most common such spaces is the space \(\mathbb{M}_{n}\) of all \(n\times n\) complex matrices.
The matrix version of Young’s inequality states that, for \(X\in \mathbb{M}_{n}\) and \(A,B\in \mathbb{M}_{n}^{+}\) (the class of positive semidefinite matrices in \(\mathbb{M}_{n}\)), we have
for any unitarily invariant norm \(\|\!|\ \|\!|\) on \(\mathbb{M}_{n}\). This inequality was first proved in [1].
Using log-convexity, it was proved in [6] that for such A, B, X we have
and that these inequalities interpolate increasingly between
Again, see Proposition 3.7 below for the general result.
Before proceeding any further, we remark that our restriction \(p\geq q\) is artificial: if \(p<q\), we simply switch the coefficients.
The literature is rich in Young-type inequalities that refine the original inequality by adding a term to its left-hand side, yielding a sharper inequality. For example, in [4] the authors proved that when \(A,B\in \mathbb{M}_{n}^{+}\) and \(X\in \mathbb{M}_{n}\), then for the conjugate exponents p, q we have
where \(r=\max \{p,q\}\).
On the other hand, it was proved in [5] that, for the same A, B, X, p, q, r, we have
for any unitarily invariant norm \(\|\!|\ \|\!|\).
We refer the reader to [9] and the references therein for a general discussion of the Young matrix inequality.
In this work we tackle the problem via a different approach, introducing infinitely many Young-type inequalities, among which the known Young inequality is the weakest.
We shall present our proofs in terms of what we define as a ring-pair and a norm-mean mapping. This setting is general and can be applied to any ring, not necessarily \(\mathbb{R}\) or \(\mathbb{M}_{n}\), once certain properties hold. The idea of our proof is based on a delicate treatment of the dyadic expansions of real numbers.
We emphasize that the new delicate approach presented in this paper is the main goal of this work. The applications in \(\mathbb{R}\) or \(\mathbb{M}_{n}\) have already been dealt with in [6,7,8].
2 Needed setup
Recall that a dyadic is an expression of the form \(\frac{1}{2^{i}}\) for some \(i\in \mathbb{N}\). It is known that if \(\alpha \in (0,1)\) then α is the sum, possibly an infinite sum, of dyadics. That is, there is a sequence of naturals \(\{i_{n}:1\leq n\leq N\}\) such that \(\alpha =\sum_{n=1}^{N}\frac{1}{2^{i_{n}}}\), with the possibility that \(N=\infty \). In this context, we assume \(i_{j}< i_{j+1}\) for all j. A dyadic \(\frac{1}{2^{i}}\) will be said to be in the dyadic expansion of α if \(\frac{1}{2^{i}}\) appears in the above sum. In this case, we write \(\frac{1}{2^{i}}\in D(\alpha )\). The first observation we need in our proofs in this article is that we can write this sum in blocks. Namely, let
At each step, if the set corresponding to \(N_{j+1}\) is empty, stop the process (that gives a finite outer sum in (2.1)). Moreover, let \(K_{j+1}=\infty \) when the corresponding minimum is taken over an empty set. In that case, if such \(K_{j}\) exists, the process stops, giving a finite outer sum and an infinite inner sum in (2.1). Then we would have
We shall call \(\sum_{i=N_{j}}^{K_{j}}\frac{1}{2^{i}}\) a block of the dyadic expansion of α.
The other observation is that if \(\alpha ,\beta \in (0,1)\) are such that \(\alpha +\beta =1\), then we can write two disjoint dyadic representations of α and β due to the fact that \(\sum_{i=1}^{\infty }\frac{1}{2^{i}}=1\). Moreover, if we assume that \(\alpha >\beta \), then \(\alpha >\frac{1}{2}\), so \(\frac{1}{2}\) would be the first dyadic of α. In this case, \(N_{1}=1\) and if
then
In other words, if \(\frac{1}{2^{n}}\) is the last dyadic of the jth block of α, then \(\frac{1}{2^{n+1}}\) is the first dyadic in the jth block of β. This observation will be used efficiently in our proofs.
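The disjointness of the two expansions and the block alignment just described can be illustrated computationally. The sketch below uses exact rational arithmetic; the choice \(\alpha =7/10\) is arbitrary, but should be non-dyadic so that the greedy expansion does not terminate:

```python
from fractions import Fraction

def dyadic_exponents(alpha, depth=20):
    # Greedy dyadic expansion: exponents i with 1/2^i in D(alpha), up to `depth`.
    exps, rest = [], Fraction(alpha)
    for i in range(1, depth + 1):
        d = Fraction(1, 2**i)
        if d <= rest:
            exps.append(i)
            rest -= d
    return exps

def blocks(exps):
    # Group consecutive exponents into the blocks of the expansion.
    out = []
    for e in exps:
        if out and e == out[-1][-1] + 1:
            out[-1].append(e)
        else:
            out.append([e])
    return out

alpha, beta = Fraction(7, 10), Fraction(3, 10)   # alpha + beta = 1, alpha > 1/2
a_exps, b_exps = dyadic_exponents(alpha), dyadic_exponents(beta)
assert set(a_exps).isdisjoint(b_exps)            # the two expansions are disjoint
# the last dyadic of the j-th block of alpha is followed by
# the first dyadic of the j-th block of beta
for ab, bb in zip(blocks(a_exps), blocks(b_exps)):
    assert bb[0] == ab[-1] + 1
```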
Now we develop the setup needed for our proofs. We remind the reader that the process of writing the dyadic expansion of a number is well known; however, we present it here in a way that helps accomplish the proofs in the next section.
The next lemma will play a crucial role in the proof of our main result. This lemma gives the dyadic expansion of numbers in \((0,1)\) in a way that can be used for our results.
Lemma 2.1
Let \(\alpha ,\beta >0\) be such that \(\beta \geq 2\alpha \), and let
Then
(1) For \(n\geq 3\), we have
$$ y_{n}=\beta \Biggl[1+\sum_{i=1}^{n-2}(-1)^{i} \prod_{j=1}^{i}2^{r_{n-j}} \Biggr]+(-1)^{n-1} \alpha \cdot 2^{r_{1}+\cdots +r_{n-1}}. $$(2.2)
(2) If \(y_{n}\neq 0\) for all \(n\in \mathbb{N}\), then the jth blocks of the dyadic expansions of \(\frac{\beta -\alpha }{\beta }\) and \(\frac{\alpha }{\beta }\) are
$$ \frac{1}{2^{r_{0}+\cdots +r_{2j-2}}}\sum_{i=1}^{r_{2j-1}} \frac{1}{2^{i}}\quad \text{and}\quad \frac{1}{2^{r_{1}+\cdots +r_{2j-1}}}\sum_{i=1}^{r_{2j}} \frac{1}{2^{i}}, $$(2.3)
respectively.
(3) If \(y_{n}=0\) for some \(n\in \mathbb{N}\), and if this n is the first such index, then
$$ \frac{\beta -\alpha }{\beta }=\sum_{j=1}^{[(n-1)/2]+1} \frac{1}{2^{r_{1}+r_{2}+\cdots +r_{2j-2}}}\sum_{i=1}^{r_{2j-1}} \frac{1}{2^{i}} $$
and
$$ \frac{\alpha }{\beta }=\sum_{j=1}^{[n/2]} \frac{1}{2^{r_{1}+r_{2}+\cdots +r_{2j-1}}}\sum_{i=1}^{r_{2j}} \frac{1}{2^{i}}, $$
with the convention that \(r_{n}=\infty \).
Proof
The first statement follows by an easy induction; hence, we proceed to proving the second statement.
Observe that \(\frac{\beta -\alpha }{\beta }\geq \frac{1}{2}\) because \(\beta \geq 2\alpha \). Consequently, the first dyadic of \(\frac{\beta -\alpha }{\beta }\) is \(\frac{1}{2}\), and the first block of its dyadic expansion has the form \(\sum_{i=1}^{k_{1}} \frac{1}{2^{i}}\), where
But when simplified, the above expression reduces exactly to \(k_{1}=\max \{k\in \mathbb{N}: 2^{k}\alpha \leq \beta \}\). That is, \(k_{1}=r_{1}\). This means that the first block of the dyadic expansion of \(\frac{\beta -\alpha }{\beta }\) is \(\sum_{i=1}^{r_{1}} \frac{1}{2^{i}}\).
Now, since \(\frac{\beta -\alpha }{\beta }+\frac{\alpha }{\beta }=1\), the first dyadic of \(\frac{\alpha }{\beta }\) will be the first dyadic after the last dyadic of the first block of \(\frac{\beta -\alpha }{\beta }\), which is \(2^{r_{1}}\). That is, the first block of \(\frac{\alpha }{ \beta }\) will have the form \(\frac{1}{2^{r_{1}}} \sum_{i=1}^{k_{2}} \frac{1}{2^{i}}\), where
Again, by simplifying the above expression, we obtain \(k_{2}=\max \{k\in \mathbb{N}:(\beta -2^{r_{1}}\alpha )2^{k}\leq \beta \}\), which means that \(k_{2}=r_{2}\). This proves (2.3) for \(j=1\).
Now assume that (2.3) holds for all \(j\leq n\) for some \(n\in \mathbb{N}\). We prove that the \((n+1)\)st blocks of \(\frac{\beta -\alpha }{\beta }\) and \(\frac{\alpha }{\beta }\) are
respectively.
We prove this for \(\frac{\beta -\alpha }{\beta }\) and, the other case being similar, leave it to the reader.
Observe that the first dyadic of the \((n+1)\)st block of \(\frac{\beta -\alpha }{\beta }\) is the first dyadic after the last dyadic of the nth block of \(\frac{\alpha }{\beta }\). But by the inductive step, this last dyadic is \(\frac{1}{2^{r_{1}+r_{2}+\cdots +r_{2n}}}\). Consequently, the \((n+1)\)st block of \(\frac{\beta -\alpha }{\beta }\) will be
where
By adding the geometric series in this expression, then multiplying by
and using equation (2.2), the above expression reduces to
which means \(k'=r_{2n+1}\). This completes the inductive proof.
For the third statement, observe first that \(n\geq 2\). We prove the case when n is even and leave the odd case to the reader. If \(y_{n}=0\), then \(r_{n-1}\) is the last finite index (by the convention, \(r_{n}=\infty \)). Since n is even, \(n-1\) is odd, and hence the last finite block belongs to \(\frac{\beta -\alpha }{\beta }\). In this case, using the computations in (2.4),
where we replaced n by \(\frac{n}{2}-1\) because in (2.4) we were interested in the index \(r_{2n+1}\) and now we want \(r_{n-1}\). But then this last equation can be written as
because n is even. As for \(\frac{\alpha }{\beta }\), we have \(\frac{\alpha }{\beta }=1-\frac{\beta -\alpha }{\beta }\), which reduces to the wanted formula by noting that \(1=\sum_{i=1}^{\infty } \frac{1}{2^{i}}\). This completes the proof of the lemma. □
3 The interpolated Young inequality
Definition 3.1
Let M be a ring, and let \(M'\subset M\) be such that
-
\(A^{x}\) is well defined for all \(A\in M'\) and \(x>0\).
-
\(A^{x}\in M'\) for all \(A\in M'\) and \(x>0\).
Then the pair \((M,M')\) will be called a ring-pair.
Now, if \(F:M\longrightarrow [0,\infty )\) satisfies
-
\(F(AXB)\leq \frac{1}{2}F(A^{2}X+XB^{2})\), \(\forall A,B\in M'\), \(X\in M\),
-
\(F(A+B)\leq F(A)+F(B)\),
then F will be called a norm-mean mapping on \((M,M')\).
Example 3.2
Let \(F:\mathbb{M}_{n}\longrightarrow [0,\infty )\) be defined by \(F(X)=\|\!|X\|\!|\). Then F is norm-mean on \((\mathbb{M}_{n},\mathbb{M} _{n}^{+})\). Here, \(\|\!|\ \|\!|\) is any unitarily invariant norm. Indeed, the inequality \(\|\!|AXB\|\!|\leq \frac{1}{2}\|\!|A^{2}X+XB^{2}\|\!|\) was proved for all \(A,B\in \mathbb{M}_{n}^{+}\) and \(X\in \mathbb{M}_{n}\) in [2].
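For a quick numerical illustration of this example (a sketch, not a proof; random samples, and the Frobenius norm as the unitarily invariant norm):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd(n):
    # a random positive semidefinite matrix
    M = rng.standard_normal((n, n))
    return M @ M.T

n = 4
A, B = random_psd(n), random_psd(n)
X = rng.standard_normal((n, n))

# check F(AXB) <= (1/2) F(A^2 X + X B^2) for F = Frobenius norm
lhs = np.linalg.norm(A @ X @ B, 'fro')
rhs = 0.5 * np.linalg.norm(A @ A @ X + X @ B @ B, 'fro')
assert lhs <= rhs + 1e-9
```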
We prove the following interpolating inequality for norm-mean mappings.
Proposition 3.3
Let \((M,M')\) be a ring-pair, \(A,B\in M'\), and \(X\in M\). Then, for \(p,q\geq 0\) and \(r\leq \min \{p,q\}\), we have
for any norm-mean mapping F on \((M,M')\).
Proof
Observe that
where we have used the inequality \(F(AXB)\leq \frac{1}{2}F(A^{2}X+XB^{2})\) satisfied by F. □
The following lemma is the result of successive applications of Proposition 3.3.
Lemma 3.4
Let \((M,M')\) be a ring-pair, \(A,B\in M'\), and \(0< r\leq q\leq p\). If
then, for the norm-mean mapping F on \((M,M')\),
Proof
If \(k_{1}=1\), then a direct application of (3.1) yields
On the other hand, if \(k_{1}>1\), we use induction on \(m\in \{1,2,\ldots ,k_{1}-1\}\). If \(m=1\), the statement follows exactly as in the case \(k_{1}=1\).
Suppose now that, for any \(m\in \{1,2,\ldots ,k_{1}-1\}\), we have
To prove it for \(m+1\), observe first that \(k_{1}=\max \{k\in \mathbb{N}:2^{k}\cdot r\leq p-q+2r\}\) and that \(m<k_{1}\); hence \(2^{m+1}\cdot r\leq p-q+2r\), that is, \(q+2^{m}\cdot r-r\leq p-2^{m}\cdot r+r\). Consequently, we can apply (3.1) to the term \(F(A^{p-2^{m}\cdot r+r}XB^{q+2^{m}\cdot r-r})\) of (3.3), after adding and subtracting \(2^{m}\cdot r\), to get
Combine this with (3.3) to get
This completes the proof. □
Now, what happens if we apply Lemma 3.4 to the term \(F(A^{p-2^{k_{1}}\cdot r+r}XB^{q+2^{k_{1}}\cdot r-r})\) appearing in the lemma? Observe first that the A power is now smaller than the B power. That is, \(p-2^{k_{1}}\cdot r+r< q+2^{k_{1}}\cdot r-r\). Indeed, if \(p-2^{k_{1}}\cdot r+r\geq q+2^{k_{1}}\cdot r-r\), we would have \(2^{k_{1}+1}\cdot r\leq p-q+2r\), contradicting the definition of \(k_{1}\). Thus, we may apply the lemma with p replaced by \(q_{1}:=q+2^{k_{1}}\cdot r-r\) and q replaced by \(p_{1}:=p-2^{k_{1}}\cdot r+r\). Since \(p-q+2r-2^{k_{1}}\cdot r\leq \min \{q+2^{k_{1}}\cdot r-r, p-2^{k_{1}}\cdot r+r\}\), we may apply the lemma using \(r^{(1)}=p-q+2r-2^{k_{1}}\cdot r\). Observe that in this case the new power of A will be “the old smaller power minus the new r” and the new power of B will be “the old larger power plus the new r”, which gives the powers \(p-2^{k_{1}}\cdot r+r-(p-q+2r-2^{k_{1}}\cdot r)=q-r\) for A and \(q+2^{k_{1}}\cdot r-r+(p-q+2r-2^{k_{1}}\cdot r)=p+r\) for B.
where
where the last line is an immediate substitution of the values of \(p_{1}\), \(q_{1}\), and \(r^{(1)}\).
Going back to the notations of Lemma 2.1, let \(\beta =p-q+2r\) and \(\alpha =r\). Then clearly, \(\beta \geq 2\alpha \) because \(p\geq q\). Consequently, we may rewrite the definitions of \(k_{1}\) and \(k_{2}\) as
In other words,
If we substitute inequality (3.4) in inequality (3.2), we get
By applying Lemma 3.4 successively, we obtain the following lemma, whose proof is an easy induction.
Lemma 3.5
Let \((M,M')\) be a ring-pair, \(X\in M\), \(A,B\in M'\), and \(0< r\leq q \leq p\). If \(y_{n}\neq0\) for all \(n\in \mathbb{N}\), then for each \(n\in \mathbb{N}\), we have
where
and
On the other hand, if n is the first index such that \(y_{n}=0\), then we have the above formula for that particular n. In this case, \(G(n)\) will be added to one of the sums.
To better understand the statement of this lemma, we strongly advise the reader to consider a numerical example.
The above dyadic blocks have already appeared in Lemma 2.1. Thus, we have, using Lemma 2.1,
and
for each \(n\in \mathbb{N}\).
Consequently, by letting \(n\to \infty \), we get our first main result.
Theorem 3.6
Let \((M,M')\) be a ring-pair, and let \(F:M\to [0,\infty )\) be norm-mean on \((M,M')\). Then, for \(p\geq q\geq r\geq 0\), we have
for all \(X\in M\) and \(A,B\in M'\).
The next result tells us that these r-versions are better than the original inequality (1.2) as they increase with r.
Proposition 3.7
Let \((M,M')\) be a ring-pair, \(F:M\to [0,\infty )\) be norm-mean on \((M,M')\), \(X\in M\), \(A,B\in M'\), and \(p\geq q>0\). Then the function
is increasing on \([0,q]\).
Proof
Let \(0< r_{1}< r_{2}\leq q\). Apply inequality (3.6) taking \(r=r_{1}\) to get
Now, apply the same inequality (3.6) on \(F(A^{p+r_{1}}XB ^{q-r_{1}})\) taking \(r=r_{2}-r_{1}\) to get
On the other hand, applying inequality (3.6) on \(F(A^{q-r _{1}}XB^{p+r_{1}})\) taking \(r=r_{2}-r_{1}\) and observing that B has the larger power yields
Now, taking (3.8) and (3.9) into consideration, we get
This proves that f is increasing on \([0,q]\). □
In fact, this monotone behavior of f tells us a lot about the well-known Young inequality
$$ F\bigl(A^{p}XB^{q}\bigr)\leq \frac{p}{p+q}F\bigl(A^{p+q}X\bigr)+\frac{q}{p+q}F\bigl(XB^{p+q}\bigr). $$
Observe that \(f(0)=F(A^{p}XB^{q})\) and \(f(q)=\frac{p}{p+q}F(A^{p+q}X)+ \frac{q}{p+q}F(XB^{p+q})\). Hence, this Young inequality is the weakest inequality among all the interpolated inequalities! However, it is the only one that isolates A from B.
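In the scalar setting (\(F(x)=|x|\), which is norm-mean on \((\mathbb{R},\mathbb{R}^{+})\)), this monotone behavior is easy to observe numerically; a sketch with arbitrary sample values:

```python
def f(r, a, b, p, q):
    # scalar instance of the function in Proposition 3.7, with F(x) = |x|
    if r == 0:
        return a**p * b**q
    c = p - q + 2 * r
    return (p - q + r) / c * a**(p + r) * b**(q - r) + r / c * a**(q - r) * b**(p + r)

a, b, p, q = 2.0, 0.7, 3.5, 1.5
vals = [f(i * q / 100, a, b, p, q) for i in range(101)]
assert all(u <= v + 1e-9 for u, v in zip(vals, vals[1:]))  # increasing on [0, q]
assert abs(vals[0] - a**p * b**q) < 1e-9                   # f(0) = a^p b^q
# f(q) is the classical Young bound
assert abs(vals[-1] - (p / (p + q) * a**(p + q) + q / (p + q) * b**(p + q))) < 1e-9
```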
It is worth trying to prove the monotone behavior of f using other techniques, because in the above proof we relied on our proof of inequality (3.6). Thus, if one could prove that f is increasing by some other method, that would yield another proof of inequality (3.6).
The following result allows us to treat different powers of A, B. The proof can be easily obtained by induction on n.
Corollary 3.8
Let \((M,M')\) be a ring-pair, \(F:M\to [0,\infty )\) be norm-mean on \((M,M')\), \(X\in M\), \(A,B\in M'\). Also, let \(\{p_{i},q_{i},r_{i}:i=1,\ldots, n\}\) be nonnegative numbers satisfying
Then
4 Applications in \(\mathbb{R}\) and \(L^{p}\) spaces
We begin by asking which functions on \(\mathbb{R}\) are norm-mean.
Proposition 4.1
If \(f:\mathbb{R}\longrightarrow [0,\infty )\) is continuous, then f is norm-mean on \((\mathbb{R},\mathbb{R}^{+})\) if and only if
$$ f(x)=\textstyle\begin{cases} \alpha x, & x\geq 0, \\ -\beta x, & x< 0 \end{cases} $$
for some \(\alpha ,\beta >0\).
Proof
Direct computations show the “if” part. For the “only if” part, suppose that \(f:\mathbb{R}\longrightarrow [0,\infty )\) is a continuous norm-mean mapping on \((\mathbb{R},\mathbb{R}^{+})\). Then, clearly, \(f(0)=0\). Let \(x,y\in \mathbb{R}\), and let \(\sigma =\frac{x+y}{|x+y|}\) (with \(\sigma =1\) if \(x+y=0\)). Observe that
Since f is continuous, f is convex.
Now, if \(x,y>0\) or if \(x,y<0\), we have
where we have used the facts that f is convex and \(f(0)=0\).
Thus, we have proved that \(f(x+y)\geq f(x)+f(y)\) when \(x,y>0\) or \(x,y<0\). But because f is norm-mean, we have \(f(x+y)\leq f(x)+f(y)\). This proves that \(f(x+y)=f(x)+f(y)\) when \(x,y>0\) or \(x,y<0\). But the only continuous nonnegative function satisfying this additivity on each half-line is
$$ f(x)=\textstyle\begin{cases} \alpha x, & x\geq 0, \\ -\beta x, & x< 0 \end{cases} $$
for some \(\alpha ,\beta >0\). □
Corollary 4.2
Let a, b be any positive numbers, and let \(p\geq q\geq r\geq 0\). Then
$$ a^{p}b^{q}\leq \frac{p-q+r}{p-q+2r}a^{p+r}b^{q-r}+\frac{r}{p-q+2r}a^{q-r}b^{p+r}. $$(4.1)
We treat some special cases of this inequality, where some nice symmetries appear:
1. When \(p=q\), the above inequality reduces to the well-known inequality \(x+\frac{1}{x}\geq 2\) for \(x>0\).
2. Let \(r>0\) be arbitrary, \(q=mr\), and \(p=nq\) for some \(n,m\in \mathbb{N}\). By letting \(a^{mr}=x\) and \(b^{mr}=y\), inequality (4.1) reduces to: given \(x,y>0\), we have
$$ x^{n}y\leq \frac{nm-m+1}{nm-m+2}\sqrt[m]{\frac{x}{y}} \;x^{n}y+ \frac{1}{nm-m+2}\sqrt[m]{\frac{y}{x}}\,xy^{n}, \quad n,m\in \mathbb{N}. $$
Observe that this inequality becomes an equality as \(m\to \infty \).
3. Let \(r>0\) be arbitrary, \(n,m\in \mathbb{N}\), \(q=mr\), \(p=(n+m)r\), \(a^{nr}=x\), and \(b^{nr}=y\). Then we get: given \(x>0\), we have
$$ x\leq \frac{(n+1)x}{n+2}\sqrt[n]{\frac{x}{y}}+\frac{y}{n+2}\sqrt[n]{\frac{y}{x}},\quad \forall y>0, n\in \mathbb{N}. $$
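The third special case can be tested numerically; a sketch with arbitrary sample values (note that equality occurs at \(x=y\)):

```python
def rhs3(x, y, n):
    # right-hand side of the third special case above
    return (n + 1) * x / (n + 2) * (x / y)**(1.0 / n) + y / (n + 2) * (y / x)**(1.0 / n)

for x in [0.2, 1.0, 3.7]:
    for y in [0.5, 2.0, 10.0]:
        for n in [1, 2, 5]:
            assert x <= rhs3(x, y, n) + 1e-12
assert abs(rhs3(1.0, 1.0, 3) - 1.0) < 1e-12   # equality at x = y
```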
Corollary 4.3
Let \(f,g:X\to \mathbb{R}\) be measurable functions on the measure space X. Then
$$ \int_{X} \vert f \vert ^{p} \vert g \vert ^{q}\,d\mu \leq \frac{p-q+r}{p-q+2r} \int_{X} \vert f \vert ^{p+r} \vert g \vert ^{q-r}\,d\mu +\frac{r}{p-q+2r} \int_{X} \vert f \vert ^{q-r} \vert g \vert ^{p+r}\,d\mu $$
for \(p\geq q\geq r\geq 0\).
As a consequence of this last corollary, we get the following result, which has its own impact in the theory of \(L^{p}\) spaces.
Corollary 4.4
Let \(f,g:X\to \mathbb{R}\) be measurable functions on the measure space X, and let \(q\leq p\). If there exists \(r\leq q\) such that both \(f^{p+r}g^{q-r}\) and \(f^{q-r}g^{p+r}\) are integrable, then so is \(f^{p}g^{q}\).
Observe that if, in the above corollary, \(g=1\), then the result can be stated as follows.
Corollary 4.5
For \(0< k< p<\ell \), we have \(L^{\ell }(X)\cap L^{k}(X)\subset L^{p}(X)\). In particular, if \(f\in L^{\ell }(X)\cap L^{k}(X)\), then
$$ \Vert f \Vert _{p}^{p}\leq \frac{p-k}{\ell -k} \Vert f \Vert _{\ell }^{\ell }+\frac{\ell -p}{\ell -k} \Vert f \Vert _{k}^{k}. $$(4.2)
Proof
In Corollary 4.3, replace g by 1, \(q-r\) by k, and \(p+r\) by ℓ. Then \(p-q+2r=\ell -k\), \(p-q+r=p-k\), and \(r=\ell -p\). Consequently,
$$ \int_{X} \vert f \vert ^{p}\,d\mu \leq \frac{p-k}{\ell -k} \int_{X} \vert f \vert ^{\ell }\,d\mu +\frac{\ell -p}{\ell -k} \int_{X} \vert f \vert ^{k}\,d\mu , $$
which implies the wanted inequality. □
The inclusion part of this corollary is well known in the theory of \(L^{p}\) spaces, but it is usually stated as follows: if \(0< q< p< r\), then \(L^{r}(X)\cap L^{q}(X)\subset L^{p}(X)\).
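Assuming counting measure on a finite set, so that integrals become finite sums, the bound (4.2) can be checked directly; a sketch with an arbitrary sample sequence:

```python
def norm_p_p(f, p):
    # ||f||_p^p with respect to counting measure on a finite set
    return sum(abs(v)**p for v in f)

f = [0.3, 1.7, 2.2, 0.9, 4.1]
k, p, ell = 1.0, 2.5, 4.0                 # 0 < k < p < ell
lam = (p - k) / (ell - k)
lhs = norm_p_p(f, p)
rhs = lam * norm_p_p(f, ell) + (1 - lam) * norm_p_p(f, k)
assert lhs <= rhs + 1e-9                  # the bound of (4.2)
```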
It is worth mentioning the nice relation between the norm inequality (4.2) and the known inequality in the literature. We refer the reader to any standard book on \(L^{p}\) spaces, for example, [3], p. 185.
Proposition 4.6
([3], p. 185)
If \(0< k< p<\ell \), then \(L^{k}\cap L^{\ell }\subset L^{p}\) and
That is,
Observe that this upper bound is strongly related to that of (4.2) by the known Young inequality:
That is,
5 Applications in \(\mathbb{M}_{n}\)
Theorem 5.1
Let \(X\in \mathbb{M}_{n}\), \(A,B\in \mathbb{M}_{n}^{+}\). If \(0\leq r \leq q\leq p\), then
for any unitarily invariant norm \(\|\!|\ \|\!|\).
The proof is immediate because any unitarily invariant norm is norm-mean on \((\mathbb{M}_{n},\mathbb{M}_{n}^{+})\). See Example 3.2.
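A numerical sketch of Theorem 5.1 for the Frobenius norm, forming fractional matrix powers via an eigendecomposition (random samples; an illustration, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)

def mpow(A, t):
    # fractional power of a symmetric positive definite matrix
    w, V = np.linalg.eigh(A)
    return (V * w**t) @ V.T

n = 4
M1, M2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A, B = M1 @ M1.T + np.eye(n), M2 @ M2.T + np.eye(n)   # positive definite
X = rng.standard_normal((n, n))
fro = lambda Y: np.linalg.norm(Y, 'fro')

p, q, r = 3.0, 2.0, 1.0                               # 0 <= r <= q <= p
c = p - q + 2 * r
lhs = fro(mpow(A, p) @ X @ mpow(B, q))
rhs = ((p - q + r) / c) * fro(mpow(A, p + r) @ X @ mpow(B, q - r)) \
      + (r / c) * fro(mpow(A, q - r) @ X @ mpow(B, p + r))
assert lhs <= rhs + 1e-8
```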
Remark
One can use the fact that the function \(g:(0, \infty )\times (0,\infty )\to [0,\infty )\) defined by \(g(p,q)=\|\!|A ^{p} X B^{q}\|\!|\) is log-convex to prove the above inequality; as one can see in [6]. However, our purpose in this article is not only the result itself, but the new delicate treatment of dyadics and its relation with Young’s inequality.
At this point we remark that the well-known inequality (1.5) is not valid for arbitrary unitarily invariant norms. That is, the following generalization is not true [4]:
Following the lines of our generalization, one can prove the following 2-norm version with the aid of (4.1). We refer the reader to [8] for the following two results.
Theorem 5.2
Let \(A,B\in \mathbb{M}_{n}^{+}\), \(X\in \mathbb{M}_{n}\), and \(p\geq q \geq 0\). Then, for each \(0\leq r\leq q\), we have
Again, we prove that these 2-norm versions interpolate increasingly between
Proposition 5.3
Let \(A,B\in \mathbb{M}_{n}^{+}\), \(X\in \mathbb{M}_{n}\), and \(p\geq q \geq 0\). Then the function
is increasing on \([0,q]\).
Proof
Following our notations in Theorem 5.2, observe that
where
Since, by Proposition 3.7, \(f_{ij}\) is increasing for every i, j, and since \(|y_{ij}|^{2}>0\), the result follows. □
Now following Corollary 3.8 and Theorem 5.2 we can prove the following.
Corollary 5.4
Let \(A,B\in \mathbb{M}_{n}^{+}\), \(X\in \mathbb{M}_{n}\), and \(\{p_{i},q _{i},r_{i}\}\) be nonnegative numbers such that
Then
References
Ando, T.: Matrix Young inequalities. Oper. Theory, Adv. Appl. 75, 33–38 (1995)
Bhatia, R., Davis, C.: More matrix forms of the arithmetic-geometric mean inequality. SIAM J. Matrix Anal. Appl. 14, 132–136 (1993)
Folland, G.: Real Analysis: Modern Techniques and Their Applications, 2nd edn. Wiley, New York (1999)
Hirzallah, O., Kittaneh, F.: Matrix Young inequalities for the Hilbert–Schmidt norm. Linear Algebra Appl. 308, 77–84 (2000)
Kittaneh, F., Manasrah, Y.: Improved Young and Heinz inequalities for matrices. J. Math. Anal. Appl. 361, 262–269 (2010)
Sababheh, M.: Interpolated inequalities for unitarily invariant norms. Linear Algebra Appl. 475, 240–250 (2015)
Sababheh, M.: Log and harmonically log-convex functions related to matrix norms. Oper. Matrices 10, 453–465 (2016)
Sababheh, M., Yousef, A., Khalil, R.: Interpolated Young and Heinz inequalities. Linear Multilinear Algebra 63, 2232–2244 (2015)
Zou, L., Jiang, Y.: Inequalities for unitarily invariant norms. J. Math. Inequal. 6, 279–287 (2012)
Acknowledgements
The authors are very grateful to the editorial board and the reviewers, whose comments improved the quality of the paper.
Availability of data and materials
Not applicable.
Funding
Not applicable.
Author information
Contributions
This entire work has been completed by the authors. The authors read and approved the final version of the manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Sababheh, M., Yousef, A. The interpolation of Young’s inequality using dyadics. J Inequal Appl 2019, 140 (2019). https://doi.org/10.1186/s13660-019-2093-8