In this section we first present a few important definitions which will be useful later on. In the sequel, let I be an open interval in \(\mathbb{R}\).
The next four definitions are given in [13].
Definition 5.1
A function \(f:I \rightarrow \mathbb{R}\) is n-exponentially convex in the Jensen sense if
$$ \sum_{i,j=1}^{n} {\varsigma _{i} \varsigma _{j} f \biggl( \frac{x _{i} + x_{j}}{2} \biggr)} \ge 0 $$
holds for every \(\varsigma _{i} \in \mathbb{R}\) and \(x_{i} \in I\) \((1 \leq i \leq n)\).
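The condition above says precisely that the matrix \(\bigl[f \bigl(\frac{x_{i}+x_{j}}{2} \bigr) \bigr]_{i,j=1}^{n}\) is positive semi-definite for every choice of points. The following minimal numerical sketch (not from [13]; the test function \(f(x)=e^{x}\) and the sampled points are assumptions made only for illustration) spot-checks this for a classical exponentially convex function.

```python
# The points x_i and the test function f(x) = exp(x) are illustrative assumptions;
# for exp the Jensen-sense matrix is rank one, hence positive semi-definite.
import numpy as np

def jensen_matrix(f, xs):
    """Matrix with (i, j) entry f((x_i + x_j) / 2), as in Definition 5.1."""
    xs = np.asarray(xs, dtype=float)
    return f((xs[:, None] + xs[None, :]) / 2.0)

rng = np.random.default_rng(0)
for _ in range(100):
    xs = rng.uniform(-2.0, 2.0, size=5)          # five points of I = (-2, 2)
    M = jensen_matrix(np.exp, xs)
    assert np.linalg.eigvalsh(M).min() > -1e-10  # PSD up to rounding error
print("f = exp passes the Jensen-sense spot check for n = 5")
```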
Definition 5.2
A function \(f:I \rightarrow \mathbb{R}\) is n-exponentially convex if it is n-exponentially convex in the Jensen sense and continuous on I.
Definition 5.3
A function \(f:I \rightarrow \mathbb{R}\) is exponentially convex in the Jensen sense if it is n-exponentially convex in the Jensen sense for all \(n \in \mathbb{N}\).
Definition 5.4
A function \(f:I \rightarrow \mathbb{R}\) is exponentially convex if it is exponentially convex in the Jensen sense and continuous.
A log-convex function is defined as follows (see [14, p. 7]):
Definition 5.5
A function \(f :I\rightarrow \bigl(0,\infty \bigr)\) is said to be log-convex or multiplicatively convex if logf is convex. Equivalently, f is log-convex if for all \(x,y \in I\) and for all \(\lambda \in \left[0, 1 \right ]\), the inequality
$$ f \bigl(\lambda x + (1- \lambda )y \bigr)\leq f^{\lambda } (x ) f^{1-\lambda } (y ) $$
holds. If the inequality reverses, then f is said to be log-concave.
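As a quick illustration of this definition (the test function is an assumption chosen for the example, not taken from the paper), the sketch below samples the defining inequality for the Gamma function, which is log-convex on \((0,\infty )\).

```python
# Spot check of the log-convexity inequality for a known log-convex function.
import math
import random

random.seed(1)
for _ in range(1000):
    x, y = random.uniform(0.1, 5.0), random.uniform(0.1, 5.0)
    lam = random.random()
    lhs = math.gamma(lam * x + (1.0 - lam) * y)
    rhs = math.gamma(x) ** lam * math.gamma(y) ** (1.0 - lam)
    assert lhs <= rhs * (1.0 + 1e-12)   # small slack for floating point
print("Gamma satisfies the log-convexity inequality on all sampled points")
```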
Divided difference of a function is defined as follows (see [14, p. 14]):
Definition 5.6
The nth-order divided difference of a function \(f: \left[a,b \right] \rightarrow \mathbb{R}\) at mutually distinct points \(x_{0},\ldots,x _{n}\in \left[a,b \right]\) is defined recursively by
$$\begin{aligned} &\left[x_{i}; f \right] = f \left(x_{i} \right),\quad i \in \left\{0,\ldots,n\right\}, \\ &\left[ x_{0},\ldots,x_{n};f\right] =\frac{ \left[ x_{1},\ldots,x_{n};f \right] - \left[ x_{0},\ldots,x_{n-1};f \right] }{x_{n}-x_{0}}. \end{aligned}$$
The value \(\left[x_{0},\ldots,x_{n}; f\right]\) is independent of the order of the points \(x_{0},\ldots,x_{n}\).
The n-convex functions can be characterized by the nth-order divided difference (see [14, p. 15]).
Definition 5.7
A function \(f : \left[a,b \right ]\rightarrow \mathbb{R}\) is said to be n-convex \(\left(n\geq 0 \right )\) if and only if for all choices of \(\left (n+1 \right )\) distinct points \(x_{0},\ldots,x_{n} \in \left[a,b \right ]\), the nth-order divided difference of f satisfies \(\left[x_{0},\ldots,x_{n};f \right] \geq 0\).
Remark 5.8
Note that 0-convex functions are non-negative functions, 1-convex functions are increasing functions, and 2-convex functions are simply the convex functions.
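A small sketch of the recursion in Definition 5.6 and of the sign condition in Definition 5.7 may be helpful; the function \(f(x)=x^{4}\) and the sampled points are illustrative assumptions, not data from the paper.

```python
# Recursive divided difference of Definition 5.6, with a spot check of Definition 5.7
# and Remark 5.8 for the convex (i.e. 2-convex) function x**4.
import itertools
import random

def divided_difference(points, f):
    """nth-order divided difference [x_0, ..., x_n; f] at mutually distinct points."""
    if len(points) == 1:
        return f(points[0])
    return (divided_difference(points[1:], f)
            - divided_difference(points[:-1], f)) / (points[-1] - points[0])

random.seed(2)
pts = random.sample(range(-10, 10), 3)   # three mutually distinct points
f = lambda x: x ** 4                     # convex, hence 2-convex

# Definition 5.7 / Remark 5.8: the second-order divided difference is non-negative.
assert divided_difference(pts, f) >= 0

# The value does not depend on the ordering of the points (noted after Definition 5.6).
vals = [divided_difference(list(p), f) for p in itertools.permutations(pts)]
assert all(abs(v - vals[0]) < 1e-9 for v in vals)
print("[x0, x1, x2; x^4] =", vals[0])
```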
Next we study the n-exponential convexity and log-convexity of the functions associated with the linear functionals \(\Phi _{i}\) \((i=1,2,3)\) as defined in (4.1)–(4.3).
Theorem 5.9
Let \(\Omega = \left\{f_{s} : s \in I \subseteq \mathbb{R} \right\}\) be a family of functions defined on \(\left[a,b \right]\) such that the function \(s \mapsto \left[z_{0},z_{1},z_{2}; f_{s} \right]\) is n-exponentially convex in the Jensen sense on I for every three mutually distinct points \(z_{0},z_{1},z_{2} \in \left[a,b \right]\). Let \(\Phi _{i}\) \((i=1,2,3)\) be the linear functionals as defined in (4.1)–(4.3). Then the following statements hold:
(i) The function \(s \mapsto \Phi _{i} \left(f_{s} \right)\) is n-exponentially convex in the Jensen sense on I, and the matrix \(\left[\Phi _{i} \left(f_{\frac{s_{j}+s_{k}}{2}} \right) \right]_{j,k=1}^{m}\) is positive semi-definite for all \(m\in \mathbb{N}\), \(m\leq n\), and \(s_{1},\ldots,s_{m} \in I\). In particular,
$$ \det \left[\Phi _{i} \left(f_{\frac{s_{j}+s_{k}}{2}}\right) \right]_{j,k=1}^{m}\geq 0,\quad \forall \, m \in \mathbb{N},\, m\leq n. $$
(ii) If the function \(s \mapsto \Phi _{i} (f_{s} )\) is continuous on I, then it is n-exponentially convex on I.
Proof
The idea of the proof is the same as that of the proof of Theorem 9 in [5]. □
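Since the functionals (4.1)–(4.3) are not reproduced in this section, the following hedged sketch illustrates part (i) only with a stand-in positive linear functional \(\Phi (f)=[z_{0},z_{1},z_{2};f]\) (a second-order divided difference, which is exactly the kind of building block appearing in the hypothesis) and the family of Example 5.13 below; the concrete points and parameters are assumptions made for the example.

```python
# Hedged illustration of Theorem 5.9(i): the matrix [Phi(f_{(s_j+s_k)/2})] should be
# positive semi-definite with non-negative determinant.  Phi is a stand-in functional,
# not the Phi_i of (4.1)-(4.3).
import numpy as np

def f(s, x):
    """Family of Example 5.13 on (0, infinity); only s outside {0, 1} is used here."""
    return x ** s / (s * (s - 1.0))

def second_dd(z, g):
    """[z0, z1, z2; g] for mutually distinct z0, z1, z2."""
    z0, z1, z2 = z
    return ((g(z2) - g(z1)) / (z2 - z1) - (g(z1) - g(z0)) / (z1 - z0)) / (z2 - z0)

z = (0.5, 1.3, 2.7)                    # three distinct points in an assumed [a, b] = [0.5, 3]
s_values = [0.25, 0.6, 1.7, 2.4]       # s_1, ..., s_m in I, avoiding {0, 1}

Phi = lambda s: second_dd(z, lambda x: f(s, x))
M = np.array([[Phi((sj + sk) / 2.0) for sk in s_values] for sj in s_values])

print("smallest eigenvalue:", np.linalg.eigvalsh(M).min())   # >= 0 up to rounding
print("determinant:", np.linalg.det(M))                      # >= 0, as in Theorem 5.9(i)
```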
The following corollary is an immediate consequence of Theorem 5.9.
Corollary 5.10
Let \(\Omega = \left\{f_{s} : s \in I \subseteq \mathbb{R} \right\}\) be a family of functions defined on \(\left[a,b \right]\) such that the function \(s \mapsto \left[z_{0},z_{1},z_{2}; f_{s} \right]\) is exponentially convex in the Jensen sense on I for every three mutually distinct points \(z_{0},z_{1},z_{2} \in \left[a,b \right]\). Let \(\Phi _{i}\) \((i=1,2,3)\) be the linear functionals as defined in (4.1)–(4.3). Then the following statements hold:
(i) The function \(s \mapsto \Phi _{i} \left(f_{s} \right)\) is exponentially convex in the Jensen sense on I, and the matrix \(\left[\Phi _{i} \left(f_{\frac{s_{j}+s_{k}}{2}} \right) \right]_{j,k=1}^{m}\) is positive semi-definite for all \(m\in \mathbb{N}\) and \(s_{1},\ldots,s_{m} \in I\). In particular,
$$ \det \left[\Phi _{i} \left(f_{\frac{s_{j}+s_{k}}{2}} \right) \right]_{j,k=1}^{m}\geq 0,\quad \forall \, m \in \mathbb{N}. $$
(ii) If the function \(s \mapsto \Phi _{i} \left(f_{s} \right)\) is continuous on I, then it is exponentially convex on I.
Corollary 5.11
Let \(\Omega = \left\{f_{s} : s \in I \subseteq \mathbb{R} \right\}\) be a family of functions defined on \(\left[a,b \right]\) such that the function \(s \mapsto \left[z_{0},z_{1},z_{2}; f_{s} \right]\) is 2-exponentially convex in the Jensen sense on I for every three mutually distinct points \(z_{0},z_{1},z_{2}\in \left[a,b \right]\). Let \(\Phi _{i}\) \((i=1,2,3)\) be the linear functionals as defined in (4.1)–(4.3), and assume further that \(\Phi _{i} \left(f_{s} \right)\) \((i=1,2,3)\) is strictly positive for \(f_{s} \in \Omega \). Then the following statements hold:
(i) If the function \(s \mapsto \Phi _{i} \left(f_{s} \right)\) is continuous on I, then it is 2-exponentially convex on I, hence log-convex on I, and for \(\tilde{r},s,\tilde{t}\in I\) such that \(\tilde{r}< s<\tilde{t}\) we have
$$ \left[ \Phi _{i} \left(f_{s} \right) \right] ^{\tilde{t}-\tilde{r}} \leq \left[ \Phi _{i} \left(f_{\tilde{r}} \right) \right] ^{\tilde{t}-s} \left[ \Phi _{i} \left(f_{\tilde{t}} \right) \right] ^{s-\tilde{r}},\quad i\in \left\{1,2,3\right\}, $$
(5.1)
which is known as Lyapunov’s inequality. If \(\tilde{r}<\tilde{t}<s\) or \(s<\tilde{r}<\tilde{t}\), then the reversed inequality holds in (5.1).
(ii) If the function \(s \mapsto \Phi _{i} (f_{s} )\) is differentiable on I, then for every \(s,q,u,v \in I\) such that \(s \leq u\) and \(q \leq v\), we have
$$ {\mu _{s,q}} \left(\Phi _{i}, \Omega \right) \leq {\mu _{u,v}} \left(\Phi _{i}, \Omega \right), \quad i\in \{1,2,3\}, $$
(5.2)
where
$$ \mu _{s,q} \left(\Phi _{i},\Omega \right) = \textstyle\begin{cases} \left( \frac{\Phi _{i} \left(f_{s} \right)}{\Phi _{i} \left(f_{q} \right)} \right) ^{\frac{1}{s-q}}, & s \ne q, \\ \exp { \left( \frac{\frac{d}{ds} \Phi _{i} \left(f_{s} \right)}{\Phi _{i} \left(f_{s} \right)} \right)}, & s = q, \end{cases} $$
(5.3)
for \(f_{s},f_{q} \in \Omega \).
Proof
The idea of the proof is the same as that of the proof of Corollary 5 given in [5]. □
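The next sketch spot-checks Lyapunov’s inequality (5.1) and the monotonicity (5.2) of the means (5.3) numerically; as before, it uses the stand-in functional \(\Phi (f)=[z_{0},z_{1},z_{2};f]\) and the family of Example 5.13 rather than the \(\Phi _{i}\) of (4.1)–(4.3), so it only illustrates the mechanism under that assumption.

```python
# Hedged numerical check of (5.1) and (5.2) for a stand-in positive linear functional.
import math

def f(s, x):
    if s not in (0.0, 1.0):
        return x ** s / (s * (s - 1.0))
    return -math.log(x) if s == 0.0 else x * math.log(x)

def second_dd(z, g):
    z0, z1, z2 = z
    return ((g(z2) - g(z1)) / (z2 - z1) - (g(z1) - g(z0)) / (z1 - z0)) / (z2 - z0)

z = (0.5, 1.3, 2.7)
Phi = lambda s: second_dd(z, lambda x: f(s, x))   # strictly positive since f_s is convex

# Lyapunov's inequality (5.1) for r < s < t:
r, s, t = 0.3, 1.6, 2.9
assert Phi(s) ** (t - r) <= Phi(r) ** (t - s) * Phi(t) ** (s - r) * (1 + 1e-12)

# Monotonicity (5.2) of the means (5.3): mu_{s,q} <= mu_{u,v} whenever s <= u, q <= v.
def mu(s, q):
    return (Phi(s) / Phi(q)) ** (1.0 / (s - q))   # s = q case of (5.3) not needed here
assert mu(0.4, 1.8) <= mu(0.9, 2.5) + 1e-12
print("Lyapunov and monotonicity checks passed for the stand-in functional")
```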
Remark 5.12
Note that the results of Theorem 5.9 and Corollaries 5.10 and 5.11 still hold when two of the points \(z_{0},z_{1},z_{2} \in \left[a,b \right]\) coincide, say \(z_{1}=z_{0}\), for a family of differentiable functions \(f_{s}\) such that the function \(s \mapsto \left[z_{0},z_{1},z_{2}; f_{s} \right]\) is n-exponentially convex in the Jensen sense (exponentially convex in the Jensen sense, log-convex in the Jensen sense on I); furthermore, they still hold when all three points coincide for a family of twice differentiable functions with the same property.
There are several families of functions which fulfil the conditions of Theorem 5.9, Corollaries 5.10 and 5.11, and Remark 5.12, so these results can be applied to them. Here we present an example of such a family of functions; for more examples see [6].
Example 5.13
Consider the family of functions
$$ \tilde{\Omega }= \bigl\{ f_{s}: \bigl(0,\infty\bigr)\rightarrow \mathbb{R}: s \in \mathbb{R} \bigr\} $$
defined by
$$ f_{s} (x ) = \textstyle\begin{cases} \frac{x^{s}}{s (s-1 )}, & s \notin \{ 0,1\}, \\ -\log x, & s = 0, \\ x\log x, & s = 1. \end{cases} $$
Here \(\frac{d^{2}}{dx^{2}}f_{s} (x )=x^{s-2}=e^{ (s-2 ) \log x}> 0\), which shows that \(f_{s}\) is convex for \(x>0\) and \(s\mapsto \frac{d^{2}}{dx^{2}}f_{s} (x )\) is exponentially convex by definition.
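A minimal sketch of this family (the evaluation point, the step size and the sampled parameters are assumptions made only for illustration): it checks by finite differences that \(\frac{d^{2}}{dx^{2}}f_{s} (x )=x^{s-2}\) in all three cases, and that the matrix \(\bigl[x^{\frac{s_{j}+s_{k}}{2}-2}\bigr]_{j,k}\) is positive semi-definite, which is the Jensen-sense condition for the map \(s\mapsto x^{s-2}\).

```python
# The family f_s of Example 5.13 with its second derivative x**(s-2),
# plus a rank-one / PSD spot check of s |-> x**(s-2) = exp((s-2) log x).
import math
import numpy as np

def f(s, x):
    if s == 0.0:
        return -math.log(x)
    if s == 1.0:
        return x * math.log(x)
    return x ** s / (s * (s - 1.0))

def second_derivative(g, x, h=1e-4):
    """Central finite-difference approximation of g''(x)."""
    return (g(x + h) - 2.0 * g(x) + g(x - h)) / h ** 2

x = 1.7
for s in (0.0, 1.0, -0.5, 2.3):
    assert abs(second_derivative(lambda t: f(s, t), x) - x ** (s - 2.0)) < 1e-5

s_values = np.array([-1.0, 0.3, 1.4, 2.2])
M = x ** ((s_values[:, None] + s_values[None, :]) / 2.0 - 2.0)
assert np.linalg.matrix_rank(M, tol=1e-10) == 1      # rank-one matrix ...
assert np.linalg.eigvalsh(M).min() > -1e-10          # ... hence positive semi-definite
print("second-derivative identity and PSD check passed")
```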
In order to prove that the function \(s\mapsto \left[z_{0},z_{1},z _{2};f_{s} \right]\) is exponentially convex, it is enough to show that
$$ \sum_{j,k=1}^{n} \varsigma _{j} \varsigma _{k} \left[z_{0},z_{1},z_{2};f _{\frac{s_{j}+s_{k}}{2}} \right]= \left[z_{0},z_{1},z_{2}; \sum_{j,k=1} ^{n} \varsigma _{j} \varsigma _{k} f_{\frac{s_{j}+s_{k}}{2}} \right] \geq 0, $$
(5.4)
for all \(n \in \mathbb{N}\), \(\varsigma _{j},s_{j} \in \mathbb{R}\), \(j\in \left\{ 1,\ldots,n \right\}\). By Definition 5.7, inequality (5.4) will hold if \(\Xi (x ): = \sum_{j,k=1}^{n} \varsigma _{j} \varsigma _{k} f_{\frac{s_{j}+s_{k}}{2}} (x )\) is convex. Since \(s\mapsto \frac{d^{2}}{dx^{2}}f _{s} (x )\) is exponentially convex, that is,
$$ \sum_{j,k=1}^{n} \varsigma _{j} \varsigma _{k} f_{\frac{s_{j}+s_{k}}{2} }^{\prime \prime } (x ) \geq 0,\quad \forall \, n \in \mathbb{N},\, \varsigma _{j},s_{j} \in \mathbb{R}, \,j\in \{ 1, \ldots,n\}, $$
which shows that Ξ is convex; hence inequality (5.4) follows. Since the function \(s\mapsto \left[z_{0},z_{1}, z_{2};f_{s} \right]\) is exponentially convex, it is in particular exponentially convex in the Jensen sense, and by Corollary 5.10 the function \(s \mapsto \Phi _{i} \left(f_{s} \right)\) \((i=1,2,3)\) is exponentially convex in the Jensen sense. Since these mappings are continuous, \(s \mapsto \Phi _{i} \left(f_{s} \right)\) \((i=1,2,3)\) is exponentially convex.
If \(\tilde{r},s, \tilde{t}\in \mathbb{R}\) are such that \(\tilde{r}< s< \tilde{t}\), then from (5.1) we have
$$ \Phi _{i} \left(f_{s} \right) \leq \left[ \Phi _{i} \left(f_{ \tilde{r}} \right) \right] ^{\frac{\tilde{t}-s}{\tilde{t}-\tilde{r}}} \left[ \Phi _{i} \left(f_{\tilde{t}} \right) \right] ^{\frac{s- \tilde{r}}{ \tilde{t}-\tilde{r}}}, \quad i\in \left\{1,2,3\right\}. $$
(5.5)
If \(\tilde{r}<\tilde{t}<s\) or \(s<\tilde{r}<\tilde{t}\), then the reversed inequality holds in (5.5).
In particular, for \(i\in \left\{1,2,3\right\}\) and \(\tilde{r},s,\tilde{t} \in \mathbb{R} \setminus \left\{0,1\right\}\) such that \(\tilde{r}< s<\tilde{t}\), we have
$$\begin{aligned} &\frac{-S^{{s}}\left( \mathbf{p}\right) + \left( {p_{1}}\log \left( {\frac{1}{p _{1}}} \right) \right ) ^{s}+\sum_{k=2}^{n} \left( \log \left( {\frac{1}{p _{k}}} \right) \right ) ^{s} \left ( P_{k}^{s}-{P_{k-1}^{s}} \right ) }{ {s \left( s-1 \right ) }} \\ & \quad\leq \left[ \frac{-S^{{\tilde{r}}}\left( \mathbf{p}\right) + \left( {p_{1}} \log \left( {\frac{1}{p_{1}}} \right ) \right) ^{\tilde{r}}+\sum_{k=2} ^{n} \left( \log \left ( {\frac{1}{p_{k}}} \right) \right) ^{ \tilde{r}} \left( P_{k}^{\tilde{r}}-{P_{k-1}^{\tilde{r}}} \right) }{ {{\tilde{r}} \left( {\tilde{r}}-1 \right ) }} \right] ^{\frac{{\tilde{t}}-s}{ {\tilde{t}}-{\tilde{r}}}} \\ &\qquad{} \times \left[ \frac{-S^{\tilde{t}}\left( \mathbf{p}\right) + \left ( {p _{1}}\log \left( {\frac{1}{p_{1}}} \right)\right ) ^{\tilde{t}}+ \sum_{k=2}^{n} \left( \log \left ( {\frac{1}{p_{k}}} \right) \right) ^{\tilde{t}} \left( P_{k}^{\tilde{t}}-{P_{k-1}^{\tilde{t}}} \right ) }{ {\tilde{t}} \left( {\tilde{t}}-1 \right) } \right] ^{\frac{s-{\tilde{r}}}{ {\tilde{t}}-{\tilde{r}}}}, \\ &\frac{ \bigl(-S\left(\mathbf{p}\right)\bigr)^{s} - \bigl({p_{1}}\log p_{1} \bigr) ^{s} - \sum_{k=2}^{n} \bigl(\log p_{k} \bigr)^{s} \left (P_{k}^{s}-P _{k-1}^{s} \right)}{{s \left ( s-1 \right ) }} \\ &\quad \leq \left[ \frac{ \bigl(-S\left(\mathbf{p}\right) \bigr)^{\tilde{r}} - \bigl({p _{1}}\log p_{1} \bigr)^{\tilde{r}} - \sum_{k=2}^{n} \bigl(\log p _{k} \bigr)^{\tilde{r}} \left (P_{k}^{\tilde{r}}-P_{k-1}^{\tilde{r}} \right )}{ {{\tilde{r}} \left( {\tilde{r}}-1 \right) }} \right]^{\frac{{\tilde{t}}-s}{ {\tilde{t}}-{\tilde{r}}}} \\ & \qquad{}\times \left[ \frac{ \bigl(-S\left(\mathbf{p}\right) \bigr)^{\tilde{t}} - \bigl({p_{1}}\log p_{1} \bigr)^{\tilde{t}} - \sum_{k=2}^{n} \bigl(\log p _{k} \bigr)^{\tilde{t}} \left (P_{k}^{\tilde{t}}-P_{k-1}^{\tilde{t}} \right ) }{{\tilde{t}} \left( {\tilde{t}}-1 \right) } \right]^{\frac{s-{\tilde{r}}}{{\tilde{t}}- {\tilde{r}}}} \end{aligned}$$
and
$$\begin{aligned} &\frac{1}{s \left( s-1 \right) } \Biggl[ -Z^{s} \left( r,t,H_{n,r,t}\right ) + \left( \log \left( \bigl( 1+r \bigr) ^{t}H _{n,r,t} \right) ^{\frac{1}{ \left( 1+r \right) ^{t}H_{n,r,t}}} \right) ^{s} \\ &\qquad{}+ \sum_{k=2}^{n} \Biggl[ \left( \log \left( {{ \bigl( k+r \bigr) ^{t}H_{n,r,t}}} \right) ^{C_{k,n,r,t}} \right) ^{s}- \left( \log \left( {{ \bigl( k+r \bigr) ^{t}H_{n,r,t}}} \right) ^{{C_{k-1,n,r,t}}} \right) ^{s} \Biggr] \Biggr] \\ &\quad\leq \left( \frac{1}{\tilde{r} \left( \tilde{r}-1 \right ) } \right) ^{\frac{{\tilde{t}}-s}{{\tilde{t}}-{\tilde{r}}}} \Biggl[ -Z^{\tilde{r}} \left( r,t,H_{n,r,t} \right) + \left( \log \left( \bigl( 1+r \bigr) ^{t}H_{n,r,t} \right) ^{\frac{1}{ \left( 1+r \right) ^{t}H_{n,r,t}}} \right) ^{\tilde{r}} \\ &\qquad{}+ \sum_{k=2}^{n} \left[ \left( \log \left( {{ \bigl( k+r \bigr) ^{t}H_{n,r,t}}} \right) ^{{C_{k,n,r,t}}} \right) ^{ \tilde{r}}- \left( \log \left( {{ \bigl( k+r \bigr) ^{t}H_{n,r,t}}} \right) ^{{C_{k-1,n,r,t}}} \right) ^{\tilde{r}} \right] \Biggr] ^{\frac{{\tilde{t}}-s}{{\tilde{t}}-{\tilde{r}}}} \\ &\qquad{}\times \left( \frac{1}{\tilde{t} \left( \tilde{t}-1 \right) } \right) ^{\frac{{s-\tilde{r}}}{{\tilde{t}}-{\tilde{r}}}} \Biggl[ -Z^{\tilde{t}} \left( r,t,H_{n,r,t} \right) + \left( \log \left( \bigl( 1+r \bigr) ^{t}H_{n,r,t} \right) ^{\frac{1}{ \left( 1+r \right) ^{t}H_{n,r,t}}} \right) ^{\tilde{t}} \\ &\qquad{}+ \sum_{k=2}^{n} \left[ \left( \log \left( {{ \bigl( k+r \bigr) ^{t}H_{n,r,t}}} \right) ^{{{C_{k,n,r,t}}}} \right) ^{ \tilde{t}}- \left( \log \left( {{ \bigl( k+r \bigr) ^{t}H_{n,r,t}}} \right) ^{{C_{k-1,n,r,t}}} \right) ^{\tilde{t}} \right] \Biggr] ^{\frac{{s-\tilde{r}}}{{\tilde{t}}-{\tilde{r}}}}. \end{aligned}$$
In this case, \({\mu _{s,q}} \left(\Phi _{i}, \tilde{\Omega } \right)\) \((i=1,2,3)\) defined in (5.3) becomes
$$ \mu _{s,q} \left( \Phi _{i},{\tilde{\Omega }} \right) = \textstyle\begin{cases} \left ( {\frac{{\Phi _{i} \left({f _{s}}\right)}}{{\Phi _{i} (f _{q})}}} \right) ^{\frac{1}{s - q}}, & s \neq q, \\ \exp \left( {\frac{{1 - 2s}}{{s\left(s - 1\right)}} - \frac{{\Phi _{i} \left({f_{0}} {f _{s}}\right)}}{{\Phi _{i}\left ({ f _{s}}\right)}}} \right ), & s = q \notin \{ 0,1 \}, \\ \exp \left( {1 - \frac{{\Phi _{i} \left( f _{0}^{2}\right)}}{{2\Phi _{i} \left({f _{0}}\right)}}} \right), & s = q = 0, \\ \exp \left( { - 1 - \frac{{\Phi _{i} \left(f_{0} f _{1}\right)}}{{2\Phi _{i} \left( f _{1}\right)}}} \right ), & s = q =1. \end{cases} $$
In particular for \(i=1\), we have
$$\begin{aligned} &\Phi _{1} \left( f_{s} \right) =\frac{1}{s \left( s-1 \right) } \left[-S ^{s}\left( \mathbf{p}\right)+ {p_{1}^{s}}\log ^{s} \left( {\frac{1}{p_{1}}} \right) + \sum_{k=2}^{n} {\log ^{s}} \left( {\frac{1}{p_{k}}} \right) \left( P _{k}^{s}-{P_{k-1}^{s}} \right) \right], \\ &\phantom{\Phi _{1} \left( f_{s} \right) =}s\notin \{0,1\}, \\ &\Phi _{1} \left( f_{0} \right) =\log \left( \frac{S\left( \mathbf{p}\right) }{ {p_{1}}\log \left ( {\frac{1}{p_{1}}} \right) } \right) +\sum_{k=2} ^{n}\log \left( \frac{{P_{k-1}}}{P_{k}} \right), \\ &\Phi _{1} \left( f_{1} \right) =\log \left( \frac{ \left( p_{1} \log \left( {\frac{1}{p_{1}}} \right ) \right) ^{p_{1}\log \left( {\frac{1}{p_{1}}} \right) }}{ \bigl( S\left( \mathbf{p} \right )\bigr) ^{S\left( \mathbf{p}\right) }} \right) \\ &\phantom{\Phi _{1} \left( f_{1} \right) =}{}+ \sum_{k=2}^{n}\log \left( \frac{{ \left ( {P_{k}}\log \left( {\frac{1}{p_{k}}} \right)\right ) }^{{P_{k}}\log \left ( {\frac{1}{p _{k}}} \right ) }}{ \left ( P_{k-1}\log \left ( {\frac{1}{p_{k}}} \right) \right ) ^{P_{k-1}\log \left ( {\frac{1}{p_{k}}} \right) }} \right), \\ &\Phi _{1} \left(f_{0}^{2} \right) = \sum _{k=2}^{n} \left[\log ^{2} \left({P_{k}} \log \left({\frac{1}{p_{k}}} \right) \right)-\log ^{2} \left(P_{k-1}\log \left({\frac{1}{p_{k}}} \right) \right) \right] \\ &\phantom{\Phi _{1} \left(f_{0}^{2} \right) =}{}+ \log ^{2} \left({p_{1}}\log \left({\frac{1}{p_{1}}} \right) \right)- \log ^{2} \bigl(S\left(\mathbf{p}\right) \bigr), \\ &\Phi _{1} \left( f_{0}f_{1} \right) =S\left( \mathbf{p}\right) \log ^{2} \bigl( S\left( \mathbf{p}\right) \bigr)- {p_{1}} \log \left( {\frac{1}{p_{1}}} \right) \log ^{2} \left( {p_{1}}\log \left( {\frac{1}{p_{1}}} \right) \right) \\ &\phantom{\Phi _{1} \left( f_{0}f_{1} \right) =}{}-\sum_{k=2}^{n}\log \left( { \frac{1}{p_{k}}} \right) {P_{k}} \log ^{2} \left( {P_{k}}\log \left( {\frac{1}{p_{k}}} \right) \right) \\ &\phantom{\Phi _{1} \left( f_{0}f_{1} \right) =}{}+\sum_{k=2}^{n}\log \left( { \frac{1}{p_{k}}} \right)P_{k-1} \log ^{2} \left( P_{k-1}\log \left( {\frac{1}{p_{k}}} \right) \right) \end{aligned}$$
and
$$\begin{aligned} \Phi _{1} \left( f_{0}f_{s} \right) ={}& \frac{1}{s \left( s-1 \right) } \log \left( \frac{ \bigl( S\left( \mathbf{p}\right)\bigr) ^{ \bigl( S\left( \mathbf{p}\right)\bigr) ^{s}}}{ \left( p_{1}\log \left( {\frac{1}{p_{1}}} \right)\right) ^{ {p_{1}^{s}} \log ^{s} \left( {\frac{1}{p_{1}}} \right) }} \right) \\ &{}+\frac{1}{s \left( s-1 \right) }\sum_{k=2}^{n}\log \left( \frac{ { \left( {P_{k-1}}\log \left( {\frac{1}{p_{k}}} \right)\right) } ^{ {P_{k-1}^{s}}\log ^{s} \left( {\frac{1}{p_{k}}} \right) }}{ \left( P_{k}\log \left( {\frac{1}{p_{k}}} \right) \right) ^{ {P_{k}^{s}}\log ^{s} \left( {\frac{1}{p_{k}}} \right) }} \right),\quad s\notin \{0,1\}. \end{aligned}$$
If \(\Phi _{i}\) \((i=1,2,3)\) is positive, then Theorem 4.2 applied for \(f=f_{s} \in {\tilde{\Omega }}\) and \(g=f_{q} \in {\tilde{\Omega }}\) yields that there exists \(\xi _{i} \in \left[a,b \right]\) such that
$$ {\xi _{i} ^{s-q}} = {\frac{{\Phi _{i} \left(f _{s} \right)}}{{\Phi _{i} \left(f _{q} \right)}}},\quad i \in \left \{1,2,3 \right\}. $$
Since the function \(\xi _{i} \mapsto \xi _{i}^{s-q}\) is invertible for \(s \neq q\), we have
$$ a\le \left({\frac{{\Phi _{i} \left({f_{s}} \right )}}{{\Phi _{i} \left({f _{q}} \right)}}} \right)^{\frac{1}{s-q}} \le b, \quad i \in \left\{1,2,3\right\}, $$
which, together with the fact that \(\mu _{s,q} \left(\Phi _{i},{\tilde{\Omega }} \right)\) \((i=1,2,3)\) is continuous, symmetric and monotone (by (5.2)), shows that \(\mu _{s,q} \left(\Phi _{i}, \tilde{\Omega }\right)\) is a mean.
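To close, a hedged numerical spot check of this mean property; once more the \(\Phi _{i}\) of (4.1)–(4.3) are replaced by the stand-in functional \(\Phi (f)=[z_{0},z_{1},z_{2};f]\) on an assumed interval \([a,b]\), so the check illustrates the statement only under that assumption.

```python
# Check that mu_{s,q}(Phi, Omega~) = (Phi(f_s)/Phi(f_q))**(1/(s-q)) lies in [a, b]
# for a stand-in second-order divided-difference functional on [a, b].
import random

def f(s, x):
    return x ** s / (s * (s - 1.0))     # s outside {0, 1} suffices for this check

def second_dd(z, g):
    z0, z1, z2 = z
    return ((g(z2) - g(z1)) / (z2 - z1) - (g(z1) - g(z0)) / (z1 - z0)) / (z2 - z0)

a, b = 0.5, 3.0
random.seed(3)
for _ in range(200):
    z = sorted(random.uniform(a, b) for _ in range(3))               # distinct a.s.
    s, q = random.uniform(1.1, 4.0), random.uniform(-2.0, 0.9)       # s != q, away from {0, 1}
    Phi = lambda t: second_dd(z, lambda x: f(t, x))
    mu = (Phi(s) / Phi(q)) ** (1.0 / (s - q))
    assert a - 1e-9 <= mu <= b + 1e-9
print("mu_{s,q} stays inside [a, b] on all sampled cases")
```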