
Theory of ϕ-Jensen variance and its applications in higher education

Abstract

This paper introduces the theory of ϕ-Jensen variance. Our main motivation is to extend the connotation of the analysis of variance and facilitate its applications in probability, statistics and higher education. To this end, we first introduce the relevant concepts and properties of the interval function. Next, we study several characteristics of the log-concave function and prove an interesting quasi-log concavity conjecture. Next, we introduce the theory of ϕ-Jensen variance and study the monotonicity of the interval function \(\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} )\) by means of the log concavity. Finally, we demonstrate the applications of our results in higher education, show that the hierarchical teaching model is ‘normally’ better than the traditional teaching model under the appropriate hypotheses, and study the monotonicity of the interval function \(\operatorname{Var} \mathscr{A} (X _{ [ {a,b} ] } )\).

1 Introduction

This paper introduces the theory of ϕ-Jensen variance. Our main motivation is to extend the connotation of the analysis of variance and facilitate its applications in probability, statistics and higher education. Our research results have important theoretical significance and reference value for the higher education systems. The proofs of these results are both interesting and difficult. A large number of algebraic, functional analysis, probability, statistics and inequality theories are used in this paper.

Higher education is an important social activity. One of the interesting problems in higher education is whether we should advocate a hierarchical teaching model. This problem has always been controversial in educational circles and has attracted the attention of some mathematics workers [1–5]. In this paper, we study the problem from the angle of the analysis of variance, so as to decide on the superiority or the inferiority of the hierarchical teaching model and the traditional teaching model. The research methods of the problem are based on the theory of ϕ-Jensen variance.

Now we recall the concepts of the hierarchical teaching model and the traditional teaching model as follows [1].

The usual teaching model assumes that the score of each student in a university is treated as a continuous random variable, written as \({X _{I}}\), which takes values in the real interval \(I=[0,1]\) and whose probability density function \({p_{I}}:I\rightarrow ( {0,\infty} ) \) is continuous. Suppose we now divide the students into m classes written as

$$\operatorname{Class} [ {{a_{0}},{a_{1}}} ] ,\quad \operatorname{Class} [ {{a_{1}},{a_{2}}} ],\quad\ldots,\quad \operatorname{Class} [ {{a_{i}},{a_{i+1}}} ],\quad\ldots,\quad \operatorname{Class} [ {{a_{m-1}},{a_{m}}} ], $$

where \(0={a_{0}}\leqslant{a_{1}}\leqslant\cdots\leqslant {a_{m}}=1\), \(m\geqslant2\), and \({a_{i}}\), \({a_{i+1}}\) are the lowest and the highest allowable scores of the students of \(\operatorname{Class} [ {{a_{i}},{a_{i+1}}} ] \), respectively. Then we say that the set

$$ \operatorname{HTM} \{ {{a_{0}},\ldots,{a_{m}},{p_{I}}} \} \triangleq \bigl\{ \operatorname{Class}[a_{0},a_{1}], \operatorname{Class}[a_{1},a_{2}],\ldots , \operatorname{Class}[a_{m-1},a_{m}],{p_{I}} \bigr\} $$
(1)

is a hierarchical teaching model, and that the traditional teaching model, denoted by \(\operatorname{HTM} \{ {a_{0},a_{m},{p_{I}}} \}\), is just the special \({\operatorname{HTM}} \{ {{a_{0}},\ldots,{a_{m}},{p_{I}}} \} \) with \(m=1\).

If \(a_{0}=-\infty\), \(a_{m}=\infty\), then the \(\operatorname{HTM} \{ {{-\infty}, \ldots, {\infty},p_{\mathbb{R}}} \}\) and the \(\operatorname{HTM} \{ {{-\infty},{\infty},{p_{\mathbb{R}}}} \}\) are called the generalized hierarchical teaching model and the generalized traditional teaching model, respectively, where, here and in what follows, \(\mathbb{R}\triangleq (-\infty,\infty )\).

In order to study the hierarchical and the traditional teaching models from the angle of the analysis of variance, we need to recall the definition of the truncated random variable as follows [1].

Let \({X _{I}}\in I\) be a continuous random variable with continuous probability density function \({p_{I}}:I\rightarrow ( {0,\infty} ) \). If \({X _{J}}\in J\subseteq I\) is also a continuous random variable and its probability density function is

$$ {p_{J}}:J\rightarrow ( {0,\infty} ),\qquad {p_{J}} ( t ) \triangleq\frac{{{p_{I}} ( t ) }}{{\int_{J}{{p_{I}}}}}, $$
(2)

then we say that the random variable \({X _{J}}\) is a truncated random variable of the random variable \({X _{I}}\), written as \({X _{J}}\subseteq{X _{I}}\). If \({X _{J}}\subseteq{X _{I}}\) and \(J\subset I\), then we say that the random variable \({X _{J}}\) is a proper truncated random variable of the random variable \({X _{I}}\), written as \({X_{J}}\subset{X _{I}}\). Here I and J are n-dimensional intervals (see Section 2).

We point out a basic property of the truncated random variable as follows [1]: Let \({X _{I}}\in I\) be a continuous random variable with continuous probability density function \({p_{I}}:I\rightarrow ( {0,\infty} ) \). If \({X _{{I_{\ast}}}}\subseteq{X _{I}}\), \({X _{{I^{\ast}}}}\subseteq{X _{I}}\) and \({I_{\ast}}\subseteq{I^{\ast}}\), then \({X _{{I_{\ast}}}}\subseteq{X _{{I^{\ast}}}}\), while if \({X _{{I_{\ast}}}}\subseteq{X _{I}}\), \({X _{{I^{\ast}}}}\subseteq{X _{I}}\) and \({I_{\ast}}\subset{I^{\ast}}\), then \({X _{{I_{\ast }}}}\subset{X _{{I^{\ast}}}} \).

According to the definitions of the mathematical expectation \(\mathrm{E}\varphi ( {{X _{J}}} ) \) and the variance \(\operatorname{Var}\varphi ( {{X _{J}}} )\), we easily get

$$ \mathrm{E}\varphi ( {{X _{J}}} ) \triangleq\int_{J}{{p_{J}} \varphi}=\frac{{\int_{J}{{p_{I}} \varphi}}}{{\int_{J}{{p_{I}}}}} $$
(3)

and

$$ \operatorname{Var}\varphi ( {{X _{J}}} ) \triangleq\mathrm{E} { \bigl[ {\varphi ( {{X _{J}}} ) -\mathrm{E}\varphi ( {{X _{J}}} ) } \bigr] ^{2}}=\frac{{\int_{J}{{p_{I}} {\varphi^{2}}}}}{{\int_{J}{{p_{I}}}}}-{ \biggl( {\frac{{\int_{J}{{p_{I}} \varphi}}}{{\int_{J}{{p_{I}}}}}} \biggr) ^{2}}. $$
(4)
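To make formulas (3) and (4) concrete, the following short numerical sketch (in Python with NumPy; the standard normal parent density, the sub-interval \([a,b]\) and the test function φ are illustrative assumptions, not part of the model above) computes the truncated expectation and variance from (3)-(4) and compares them with a Monte Carlo estimate obtained by sampling the parent variable and discarding values outside \([a,b]\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parent density p_I: the standard normal density (log-concave).
p = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
phi = lambda t: t**2                      # an illustrative test function varphi

def trap(y, x):                           # simple trapezoidal rule for integrals
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

a, b = -0.5, 1.5                          # the sub-interval J = [a, b]
x = np.linspace(a, b, 20001)

# Formulas (3) and (4): moments of the truncated variable X_J via integrals of p_I over J.
E_phi = trap(p(x) * phi(x), x) / trap(p(x), x)
Var_phi = trap(p(x) * phi(x)**2, x) / trap(p(x), x) - E_phi**2

# Monte Carlo check: sample the parent variable, keep only values falling in J.
sample = rng.standard_normal(1_000_000)
kept = sample[(sample >= a) & (sample <= b)]
print(E_phi, phi(kept).mean())            # the two expectations should nearly agree
print(Var_phi, phi(kept).var())           # the two variances should nearly agree
```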

In the \(\operatorname{HTM} \{ {{a_{0}},\ldots,{a_{m}},{p_{I}}} \}\), the score of each student in \(\operatorname{Class} [ {{a_{i}},{a_{i+1}}} ] \) is also a random variable, written as \({X _{ [ {{a_{i}},{a_{i+1}}} ] }}\). Since \([ {{a_{i}},{a_{i+1}}} ] \subseteq I\), it is a truncated random variable of the random variable \({X _{I}}\), where \(i=0,1,\ldots,m-1\). Assume that the \(j-i\) classes

$$\operatorname{Class} [ {{a_{i}},{a_{i+1}}} ] ,\quad \operatorname{Class} [ {{ a_{i+1}},{a_{i+2}}} ] ,\quad \ldots,\quad\operatorname{Class} [ {{a_{j-1}},{a_{j}}} ] $$

are merged into one, written as \(\operatorname{Class} [ {{a_{i}},{a_{j}}} ] \). Since \([ {{a_{i}},{a_{j}}} ] \subseteq I\), we know that \({X _{ [ {{a_{i}},{a_{j}}} ] }}\) is also a truncated random variable of the random variable \({X _{I}}\), where \(0\leqslant i< j\leqslant m\). In general, we have

$$ {X _{ [ {{a_{i}},{a_{j}}} ] }}\subseteq{X_{ [ {{a_{{i^{\prime}}}},{a_{j^{\prime}}}} ] }}\subseteq{X_{I}},\quad \forall {i^{\prime}},i,j,j^{\prime}:0\leqslant{i^{\prime }} \leqslant i< j \leqslant j^{\prime}\leqslant m. $$
(5)

In the \(\operatorname{HTM} \{ {{a_{0}},\ldots,{a_{m}},{p_{I}}} \}\), we are concerned with the relationship between the variance \(\operatorname{Var}{X _{ [ {{a_{i}},{a_{j}}} ] }}\) and \(\operatorname{Var}{X _{I}}\), so as to decide on the superiority or the inferiority of the hierarchical and the traditional teaching models. If

$$ \operatorname{Var} {X _{ [ {{a_{i}},{a_{j}}} ] }}\leqslant\operatorname{Var} {X _{ [ {{a_{{i^{\prime}}}},{a_{j}}} ] }}, \quad\forall {i^{\prime}} ,i,j: 0 \leqslant{i^{\prime}}< i< j\leqslant m, $$
(6)

then we say that the \(\operatorname{HTM} \{ {{a_{0}},\ldots,{a_{m}},{p_{I}}} \}\) is left increasing. If

$$ \operatorname{Var} {X _{ [ {{a_{i}},{a_{j}}} ] }}\leqslant\operatorname{Var} {X _{ [ {{a_{i}},{a_{{j^{\prime}}}}} ] }},\quad \forall i,j,{j^{\prime}}:0\leqslant i< j< {j^{\prime}}\leqslant m, $$
(7)

then we say that the \(\operatorname{HTM} \{ {{a_{0}},\ldots,{a_{m}},{p_{I}}} \}\) is right increasing. If the hierarchical teaching model is both left and right increasing, i.e.,

$$ \operatorname{Var} {X _{ [ {{a_{i}},{a_{j}}} ] }}\leqslant\operatorname{Var} {X _{ [ {{a_{i^{\prime}}},{a_{{j^{\prime}}}}} ] }}\leqslant \operatorname{Var} {X _{I }},\quad \forall i,j,{i^{\prime}},{j^{\prime}}:0 \leqslant{i^{\prime}}\leqslant i< j\leqslant{j^{\prime}}\leqslant m, $$
(8)

then we say that the hierarchical teaching model is increasing.

If a hierarchical teaching model is increasing, then in view of the usual meaning of the variance, we tend to think that this hierarchical teaching model is better than the traditional teaching model. Otherwise, this hierarchical teaching model is probably not worth promoting.
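As a small illustration of definitions (6)-(8) (a Python/NumPy sketch; the bell-shaped score density on \([0,1]\) and the particular class intervals are illustrative assumptions only), one can check numerically that, for such a density, the variance of the truncated score does not decrease when a class interval \([a_{i},a_{j}]\) is enlarged:

```python
import numpy as np

# Illustrative score density on I = [0, 1]: an (unnormalized) bell curve; formulas
# (3)-(4) only use ratios of integrals, so normalization is not needed here.
p = lambda t: np.exp(-(t - 0.6)**2 / (2 * 0.15**2))

def trap(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def trunc_var(a, b, n=20001):
    """Var X_[a,b], computed from formula (4) with varphi(t) = t."""
    x = np.linspace(a, b, n)
    w = p(x)
    Z = trap(w, x)
    m = trap(w * x, x) / Z
    return trap(w * x**2, x) / Z - m**2

# Nested class intervals, cf. inequalities (6), (7) and (8):
print(trunc_var(0.45, 0.75) <= trunc_var(0.30, 0.75))   # left increasing
print(trunc_var(0.45, 0.75) <= trunc_var(0.45, 0.90))   # right increasing
print(trunc_var(0.30, 0.90) <= trunc_var(0.00, 1.00))   # increasing, hence (8)
```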

In this paper, we study the hierarchical and the traditional teaching models from the angle of the analysis of variance. In other words, we study the monotonicity of the hierarchical teaching model, so as to decide on the superiority or the inferiority of the hierarchical and the traditional teaching models. In particular, we need to find the conditions such that inequalities (6), (7) and (8) hold (see Theorem 6) by means of the theory of ϕ-Jensen variance.

In order to facilitate the description of the theory of ϕ-Jensen variance, in Section 2, we introduce the relevant concepts and properties of the interval functions, in Section 3, we study several characteristics of the log-concave function. In particular, we will prove the interesting quasi-log concavity conjecture in [1]. In Section 4, we introduce the theory of ϕ-Jensen variance and study the monotonicity of the interval function \(\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} )\) by means of the log concavity. In Section 5, we demonstrate the applications of our results in higher education, show that the hierarchical teaching model is ‘normally’ better than the traditional teaching model under the appropriate hypotheses, and study the monotonicity of the interval function \(\operatorname{Var} \mathscr{A} (X _{ [ {a,b} ] } )\).

2 Interval function

To study the theory of ϕ-Jensen variance, we need to introduce the relevant concepts and properties of the interval functions in this section.

We will use the following notations in this paper.

$$\begin{aligned}& \mathbf{a}\triangleq ( {{a_{1}}, \ldots,{a_{n}}} ),\qquad \mathbf {b}\triangleq ( {{b_{1}}, \ldots,{b_{n}}} ),\qquad \lambda \triangleq ( {{\lambda_{1}}, \ldots ,{\lambda_{n}}} ), \qquad\mathbf{0}\triangleq (0,\ldots,0 ), \\& \lambda\mathbf{a} + ( {1 - \lambda} )\mathbf{b}\triangleq \bigl( {{ \lambda_{1}} {a_{1}} + ( {1 - {\lambda_{1}}} ){b_{1}}, \ldots,{\lambda_{n}} {a_{n}} + ( {1 - { \lambda_{n}}} ){b_{n}}} \bigr), \\& \varDelta \mathbf{a} \triangleq ( {\varDelta {a_{1}},\ldots, \varDelta {a_{n}}} ), \qquad \varDelta \mathbf{b}\triangleq ( {\varDelta {b_{1}}, \ldots ,\varDelta {b_{n}}} ), \\& \mathbf{a} \leqslant\mathbf{b} \quad\Leftrightarrow\quad {a_{1}} \leqslant{b_{1}},\qquad \ldots,\qquad{a_{n}} \leqslant{b_{n}}. \end{aligned}$$

If \(\mathbf{a}\leqslant\mathbf{b}\) and there exists \(j\in \{ 1,2,\ldots,n \}\) such that \(a_{j}< b_{j}\), then we say that a is less than b or b is greater than a, written as \(\mathbf{a}<\mathbf{b}\) or \(\mathbf{b}>\mathbf{a}\).

Let \(I_{j}\subseteq\mathbb{R}\), \(j=1,\ldots,n\), be intervals. Then we say that the set \(I\triangleq I_{1}\times\cdots\times I_{n}\) is an n-dimensional interval, where the product × is the Cartesian product.

If \(\mathbf{a},\mathbf{b}\in I\), then we say that the set

$$[ {\mathbf{a},\mathbf{b}} ] \triangleq \bigl\{ {\lambda \mathbf{a}+ ( {1- \lambda} ) \mathbf{b}| {\lambda\in{{ [ {0,1} ] }^{n}}} } \bigr\} $$

is an n-dimensional generalized closed interval of I.

Clearly, for the n-dimensional generalized closed interval, we have

$$ [ {\mathbf{a},\mathbf{b}} ] = [ {{a_{1}},{b_{1}}} ] \times \cdots\times [ {{a_{n}},{b_{n}}} ]= [ {\mathbf{b},\mathbf{a}} ], \quad\forall \mathbf {a},\mathbf{b} \in I. $$
(9)

Let \(I\subseteq\mathbb{R}^{n}\) be an n-dimensional interval. Then we say that the set

$$\overline{I}\triangleq\bigl\{ { { [ {\mathbf{a},\mathbf {b}} ] } | \mathbf{a},\mathbf{b}\in I} \bigr\} $$

is a closed interval set of the interval I.

We remark here that the closed interval set is a convex set, i.e.,

$$ J\in\overline{I},\qquad K\in\overline{I}, \qquad \theta\in[0,1] \quad\Rightarrow\quad (1-\theta)J+ \theta K \in\overline{I}, $$
(10)

here we define

$$\theta [ {\mathbf{a},\mathbf{b}} ]\triangleq [ {\theta \mathbf{a},\theta \mathbf{b}} ],\quad \forall \theta\in\mathbb{R}. $$

Let \(\overline{I}\) be the closed interval set of the interval I. We say that the mapping \(G:\overline{I}\rightarrow\mathbb{R}\) is an interval function. The image of the closed interval \([ {\mathbf{a},\mathbf{b}} ] \) is written as \(G [ {\mathbf{a},\mathbf{b}} ]\), and the interval function \(G:\overline{I}\rightarrow\mathbb{R}\) can also be expressed as \(G [ {\mathbf{a},\mathbf{b}} ] \) (\([ {\mathbf{a},\mathbf {b}} ]\in\overline{I} \)).

By (9), for the interval function \(G:\overline{I}\rightarrow \mathbb{R}\), we have

$$ G [ {\mathbf{a},\mathbf{b}} ]=G [ {\mathbf{b},\mathbf {a}} ],\quad \forall \mathbf{a},\mathbf{b}\in I. $$
(11)

That is to say, the image \(G [ {\mathbf{a},\mathbf{b}} ]\) of the closed interval \([\mathbf{a},\mathbf{b}]\) is a symmetric function.

Let \(G:\overline{I}\rightarrow\mathbb{R} \) be an interval function, and let \(a_{j}< b_{j}\), \(j=1,\ldots,n\). If

$$ [ {\mathbf{a},\mathbf{b}} ]\subset [ {\mathbf{a},\mathbf {b}+\varDelta \mathbf{b}} ] \subseteq I\quad\Rightarrow\quad G [ {\mathbf{a},\mathbf{b}} ] < G [ {\mathbf{a}, \mathbf {b}+\varDelta \mathbf{b}} ], $$
(12)

then we say that the interval function \(G:\overline{I}\rightarrow \mathbb{R} \) is right increasing. If

$$ [ {\mathbf{a},\mathbf{b}} ] \subset [ {\mathbf{a}-\varDelta \mathbf{a},\mathbf{b}} ] \subseteq I\quad\Rightarrow\quad G [ {\mathbf{a},\mathbf{b}} ] < G [ { \mathbf{a}-\varDelta \mathbf{a},\mathbf{b}} ], $$
(13)

then we say that the interval function \(G:\overline{I}\rightarrow \mathbb{R} \) is left increasing. If the interval function \(G:\overline{I}\rightarrow\mathbb{R} \) is both left increasing and right increasing, i.e.,

$$ [ {\mathbf{a},\mathbf{b}} ]\subset [ {\mathbf{c},\mathbf {d}} ] \subseteq I \quad\Rightarrow\quad G [ {\mathbf{a},\mathbf{b}} ] < G [ {\mathbf{c},\mathbf {d}} ], $$
(14)

then we say that the interval function \(G:\overline{I}\rightarrow \mathbb{R} \) is increasing.

If G or −G is left increasing, then we say that the interval function \(G:\overline{I}\rightarrow\mathbb{R} \) is left monotone. If G or −G is right increasing, then we say that the interval function \(G:\overline{I}\rightarrow\mathbb{R}\) is right monotone. If G or −G is increasing, then we say that the interval function \(G:\overline{I}\rightarrow\mathbb{R} \) is monotone.

We remark here that if an interval function \(G:\overline{I}\rightarrow \mathbb{R}\), here I is an interval, is increasing, then the graph of the function

$$Z=G[x,y],\quad (x,y)\in I^{2} $$

looks like a drain or a valley. For example, the interval function

$$G:\overline{[0,1]}\rightarrow\mathbb{R},\qquad G[x,y]=|x-y| $$

is increasing, the graph of the function

$$Z=|x-y|, \quad (x,y)\in[0,1]^{2} $$

looks like a drain or a valley, see Figure 1.

Figure 1. The graph of the function \(z=|x-y|\), \((x,y)\in[0,1]^{2}\).

If \({X}\in I\), where \(I\subseteq\mathbb{R}^{n}\) is an n-dimensional interval, is a continuous random variable, and its probability density function \({p}:I\rightarrow ( {0,\infty} ) \) is continuous, then the interval function

$$ G:\overline{I}\rightarrow [ {0,1} ],\qquad G [ {\mathbf {a},\mathbf{b}} ] \triangleq \int_{ [\mathbf{a},\mathbf{b} ]}{p} $$

is increasing, where

$$P \bigl(X\in[\mathbf{a},\mathbf{b}] \bigr)\triangleq G [ {\mathbf {a}, \mathbf{b}} ] $$

is the probability of the random event ‘\(X\in[\mathbf{a},\mathbf {b}]\)’. In other words,

$$ [ {\mathbf{a},\mathbf{b}} ]\subset [ {\mathbf{c},\mathbf {d}} ] \subseteq I \quad\Rightarrow\quad 0\leqslant P \bigl(X\in[\mathbf {a},\mathbf{b}] \bigr)< P \bigl(X\in[ \mathbf{c},\mathbf{d}] \bigr)\leqslant1. $$
(15)

For the monotonicity of the interval function, we have the following proposition.

Proposition 1

Let \(G:\overline{I}\rightarrow\mathbb{R} \), where \(I\subseteq\mathbb{R}^{n}\) is an n-dimensional interval, be an interval function, and the partial derivatives of \(G [ {\mathbf {a},\mathbf{b}} ]\) exist, where \([ {\mathbf{a},\mathbf {b}} ] \in\overline{I}\). Then we have the following two assertions.

(I) If

$$ a_{j}< b_{j} \quad\Rightarrow\quad\frac{\partial{G [ {\mathbf{a},\mathbf {b}} ] }}{\partial{b_{j}}}> 0, \quad j=1, \ldots, n, $$
(16)

then the interval function \(G:\overline{I}\rightarrow\mathbb{R} \) is right increasing.

(II) If

$$ a_{j}>b_{j} \quad\Rightarrow\quad\frac{\partial{G [ {\mathbf{a},\mathbf {b}} ] }}{\partial{b_{j}}}< 0,\quad j= 1, \ldots,n, $$
(17)

then the interval function \(G:\overline{I}\rightarrow\mathbb{R} \) is left increasing.

Proof

We first prove assertion (I). Let

$$a_{j}< b_{j},\quad 1\leqslant j\leqslant n, \qquad [ {\mathbf{a}, \mathbf {b}} ]\subset [ {\mathbf{a},\mathbf{b}+\varDelta \mathbf{b}} ] \subseteq I , $$

here \(\varDelta \mathbf{b}>\mathbf{0}\). Hence there exists \(j\in \{ 1,\ldots,n \}\) such that \(\varDelta b_{j}>0\). According to the theory of analysis and (16), we know that the function \(G [ {\mathbf{a},\mathbf{b}} ]\) is strictly increasing with respect to \(b_{j}\), hence

$$G [ {\mathbf{a},\mathbf{b}} ] < G [ {\mathbf{a},\mathbf {b}+\varDelta \mathbf{b}} ]. $$

That is to say, the interval function \(G:\overline{I}\rightarrow\mathbb {R} \) is right increasing. Assertion (I) is proved.

Next we prove assertion (II) as follows. Let

$$a_{j}< b_{j},\quad 1\leqslant j\leqslant n, \qquad [ {\mathbf{a}, \mathbf {b}} ]\subset [ {\mathbf{a}-\varDelta \mathbf{a},\mathbf{b}} ] \subseteq I, $$

here \(\varDelta \mathbf{a}>\mathbf{0}\). Hence there exists \(j\in \{ 1,\ldots,n \}\) such that \(\varDelta a_{j}>0\).

By (11), and using the switch \(\mathbf{a}\leftrightarrow\mathbf {b}\) in (17), we get

$$b_{j}>a_{j} \quad\Rightarrow\quad\frac{\partial{G [ {\mathbf{b},\mathbf {a}} ] }}{\partial{a_{j}}}= \frac{\partial{G [ {\mathbf {a},\mathbf{b}} ] }}{\partial{a_{j}}}< 0,\quad j=1,\ldots,n. $$

Hence

$$a_{j}< b_{j}\quad\Rightarrow\quad\frac{\partial{G [ {\mathbf{a},\mathbf {b}} ] }}{\partial{a_{j}}}< 0,\quad j=1, \ldots,n. $$

That is to say, the function \(G [ {\mathbf{a},\mathbf{b}} ]\) is strictly decreasing with respect to \(a_{j}\), hence

$$G [ {\mathbf{a},\mathbf{b}} ] < G [ {\mathbf{a}-\varDelta \mathbf{a},\mathbf{b}} ]. $$

In other words, the interval function \(G:\overline{I}\rightarrow\mathbb {R} \) is left increasing. Assertion (II) is proved. The proof of Proposition 1 is completed. □

In Section 4.5, we will demonstrate the applications of Proposition 1.

As an application of Proposition 1, we have the following example.

Example 1

Let \({X}\in I\), where I is an interval, be a continuous random variable, and let its probability density function \({p}:I\rightarrow ( {0,\infty} ) \) be continuous, as well as let the function \(\varphi: I\rightarrow\mathbb{R}\) be continuous and strictly increasing. Then the interval function

$$ \mathrm{E}\varphi (X_{[a,b]} )\triangleq \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \frac{\int_{a}^{b}{p(x)\varphi(x) \,\mathrm{d}x}}{\int_{a}^{b}{p(x)\,\mathrm{d}x}}, & a \ne b, \\ \varphi(a), & a = b, \end{array}\displaystyle \right .\quad \forall [ {a,b} ] \in\overline{I} $$
(18)

is right increasing, and the interval function \(-\mathrm{E}\varphi (X_{[a,b]} )\) is left increasing, where \(X_{[a,b]}\subseteq X\), and \(\mathrm{E}\varphi (X_{[a,b]} )\) is the mathematical expectation of \(\varphi (X_{[a,b]} )\).

Proof

Let \([ {a,b} ] \in\overline{I}\) and \(a\ne b\). Then we have

$$\begin{aligned} \frac{\partial{\mathrm{E}\varphi (X_{[a,b]} ) }}{\partial {b}} =&\frac{p(b)\varphi(b)\int_{a}^{b}{p(x) \,\mathrm{d}x}-p(b)\int_{a}^{b}{p(x)\varphi(x)\,\mathrm{d}x}}{ [\int_{a}^{b}{p(x)\,\mathrm{d}x} ]^{2}} \\ =&\frac{p(b)\int_{a}^{b}{ [\varphi(b)-\varphi(x) ]p(x)\,\mathrm{d}x}}{ [\int_{a}^{b}{p(x)\,\mathrm{d}x} ]^{2}}\\ >& 0. \end{aligned}$$

By Proposition 1, \(\mathrm{E}\varphi (X_{[a,b]} )\) is right increasing and \(-\mathrm{E}\varphi (X_{[a,b]} )\) is left increasing. This ends the proof. □
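A brief numerical sketch of the computation above (Python/NumPy; the exponential density kernel and the increasing test function φ are illustrative choices, not taken from the text) compares a central finite difference of \(\mathrm{E}\varphi (X_{[a,b]} )\) in b with the closed-form partial derivative obtained in the proof:

```python
import numpy as np

p = lambda t: np.exp(-t)          # illustrative density on I = (0, infinity)
phi = lambda t: np.log1p(t)       # a strictly increasing test function varphi

def trap(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def E_phi(a, b, n=40001):
    """E varphi(X_[a,b]) computed from (18) for a != b."""
    x = np.linspace(a, b, n)
    return trap(p(x) * phi(x), x) / trap(p(x), x)

a, b, h = 0.5, 2.0, 1e-5
numeric = (E_phi(a, b + h) - E_phi(a, b - h)) / (2 * h)

# Closed form from the proof: p(b) * int_a^b [phi(b) - phi(x)] p(x) dx / (int_a^b p)^2.
x = np.linspace(a, b, 40001)
closed = p(b) * trap((phi(b) - phi(x)) * p(x), x) / trap(p(x), x) ** 2

print(numeric, closed)            # the two values nearly agree and both are positive
```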

In Section 4.6, we will demonstrate the applications of Example 1.

Now we introduce the convexity and the concavity of the interval functions as follows.

The interval function \(G:\overline{I}\rightarrow\mathbb{R}\) is said to be convex if

$$ J\in\overline{I},\qquad K\in\overline{I}, \qquad \theta\in[0,1]\quad\Rightarrow\quad (1- \theta)GJ+\theta GK\geqslant G \bigl[(1-\theta)J+\theta K \bigr], $$
(19)

where \((1-\theta)J+\theta K\in\overline{I}\) by (10). The interval function \(G:\overline{I}\rightarrow\mathbb{R}\) is said to be concave if −G is convex.

For example, the interval function

$$G_{\gamma}:\overline{\mathbb{R}}\rightarrow\mathbb{R},\qquad G_{\gamma }[x,y]=|x-y|^{\gamma},\quad \gamma\geqslant1, $$

is convex.

Indeed, since the function \(|t|^{\gamma} \) (\(t\in\mathbb{R}\)) is convex, by Jensen’s inequality [6–10], we know that for any \([a,b]\in\overline{\mathbb{R}}\), \([c,d]\in\overline{\mathbb{R}}\), \(\theta \in[0,1]\), we have

$$\begin{aligned} (1-\theta)G_{\gamma}[a,b]+\theta G_{\gamma}[c,d] =&(1-\theta)\vert a-b\vert ^{\gamma}+\theta \vert c-d\vert ^{\gamma} \\ \geqslant& \bigl\vert (1-\theta) (a-b)+\theta(c-d)\bigr\vert ^{\gamma} \\ =& G_{\gamma} \bigl[(1-\theta)[a,b]+\theta[c,d] \bigr]. \end{aligned}$$
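The following small check (a Python/NumPy sketch; the exponent γ and the random endpoints are arbitrary choices) verifies convexity inequality (19) for \(G_{\gamma}\) on a few randomly generated pairs of intervals:

```python
import numpy as np

gamma = 2.5
G = lambda x, y: abs(x - y) ** gamma          # the interval function G_gamma[x, y]

rng = np.random.default_rng(1)
for _ in range(5):
    a, b, c, d = rng.uniform(-3, 3, size=4)   # two intervals [a, b] and [c, d]
    theta = rng.uniform()
    lhs = (1 - theta) * G(a, b) + theta * G(c, d)
    # (1-theta)[a,b] + theta[c,d] has endpoints (1-theta)a + theta c and (1-theta)b + theta d.
    rhs = G((1 - theta) * a + theta * c, (1 - theta) * b + theta * d)
    print(lhs >= rhs - 1e-12)                 # convexity inequality (19)
```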

We remark here that the interval function \(G:\overline{I}\rightarrow \mathbb{R} \) is convex if and only if the function

$$G_{*}:I^{2}\rightarrow\mathbb{R},\qquad G_{*}(x) \triangleq G[\mathbf {a},\mathbf{b}] $$

is convex, where

$$x=(x_{1},\ldots, x_{2n}),\qquad x_{i}=a_{i},\qquad x_{n+i}=b_{i}, \quad 1\leqslant i\leqslant n, $$

and the function \(G_{*}\) is convex if and only if the following Hessian matrix

$$\biggl[\frac{\partial^{2}G_{*}(x)}{\partial{x_{i}}\,\partial{x_{j}}} \biggr]_{2n\times2n} $$

is non-negative.

3 Log concavity and quasi-log concavity

Convexity and concavity are essential attributes of functions; their study and applications are important topics in mathematics.

To study the theory of ϕ-Jensen variance, in this section, we need to study the log concavity and the quasi-log concavity of functions.

3.1 Log concavity

There are many types of convexity and concavity for functions. One of them is the log concavity which has many applications in probability and statistics.

Recall the definition of a log-concave function [1, 11–20] as follows.

The function \({p}:I\rightarrow ( {0,\infty} ) \), here I is an n-dimensional interval, is called a log-concave function if logp is a concave function, i.e.,

$$ p \bigl[ \theta\mathbf{a}+ ( 1-\theta ) \mathbf{b} \bigr] \geqslant p^{\theta} (\mathbf{a} ) p^{1-\theta} ( \mathbf{b} ) , \quad\forall ( \mathbf{a},\mathbf{b})\in I^{2}, \forall \theta\in [ {0,1} ]. $$
(20)

If \(-\log{p}\) is a concave function, then we say that the function \(p:I\rightarrow ( {0,\infty} ) \) is a log-convex function.

In [20], the authors apply the log concavity to study the Roy model, and several interesting results are obtained. In particular, we have the following (see p.1128 in [20]): If D is a log concave random variable, then

$$ \frac{\partial\operatorname{Var} [ D|D>d ] }{\partial d}\leqslant 0 \quad\mbox{and}\quad \frac{\partial\operatorname{Var} [ D|D\leqslant d ] }{\partial d}\geqslant 0. $$
(21)

Unfortunately, their results did not include the case where D is a truncated random variable.

In this paper, we apply the log concavity of functions to generalize the inequalities in (21) to the case where D is a truncated random variable (see Remark 2).

To prepare for the proofs of the results in Section 4.5, we need to study several characteristics of the log-concave function as follows.

For the log-concave function, we can easily get the following Propositions 2 and 3 by the theory of analysis [1].

Proposition 2

Let the function \({p: {I}}\rightarrow ( {0,\infty} )\), here I is an interval, be differentiable. Then the function p is a log-concave function if and only if the function \({ ( {\log{p}} ) ^{{\prime}}}\) is monotone decreasing, i.e., if \(a,b\in I\), \(a< b\), then we have

$$ ( \log{p} ) ^{{\prime }}(a)\geqslant ( \log{p} ) ^{{\prime }}(b), $$
(22)

where \(( \log{p} ) ^{{\prime }}\) is the derivative of the function logp.

Proposition 3

Let the function \({p:{I}}\rightarrow ( {0,\infty} )\), here I is an interval, be twice differentiable. Then the function p is a log-concave function if and only if

$$ ( \log{p} ) ^{{\prime\prime }}(t )\leqslant0 \quad\Leftrightarrow\quad{p} ( t ) {p} {^{\prime\prime }} ( t ) -{ \bigl[ {{p} {^{\prime}} ( t ) } \bigr] ^{2}}\leqslant0,\quad \forall t\in I, $$
(23)

where \(( \log{p} ) ^{{\prime\prime }}\) is the second order derivative of the function logp.

For other characteristics of the log-concave function, we have the following non-trivial result.

Theorem 1

Let the function \({p}:I\rightarrow ( {0,\infty} )\), here I is an interval, be differentiable. Then the function p is a log-concave function if and only if

$$ {p} ( b ) -{p} ( a ) -\frac{{p^{\prime} ( b ) }}{{{p} ( b ) }}\int_{a}^{b}{{p} ( t ) \,\mathrm{d}t}\geqslant0, \quad\forall ( {a,b} ) \in{I^{2}}. $$
(24)

Proof

Assume that the function p is a log-concave function; we prove inequality (24) as follows.

We define an auxiliary function as follows:

$$ F:{I^{2}} \to ( { - \infty,\infty} ),\qquad F ( {a,b} ) \triangleq{p} ( b ) - {p} ( a ) - ( \log {p} ) ^{{\prime }}(b )\int_{a}^{b} {{p} ( t )\,\mathrm{d}t}. $$

If \(a = b\), then \(F ( {a,b} ) = 0\). Inequality (24) holds. We assume that \(b \ne a\) below.

Note that the function \({p}:I \to ( {0,\infty} )\) is differentiable. By the Cauchy mean value theorem, there exists a real number \(\theta\in ( {0,1} )\) such that

$$ \frac{{{p} ( b ) - {p} ( a )}}{{\int_{a}^{b} {{p} ( t )\,\mathrm{d}t} }} = \frac{{p^{\prime} [ {a + \theta ( {b - a} )} ]}}{{{p} [ {a + \theta ( {b - a} )} ]}} = ( \log{p} ){^{\prime}} { \bigl[ {a + \theta ( {b - a} )} \bigr]}. $$
(25)

If \(a < b \), then

$$ a < a + \theta ( {b - a} ) < b. $$
(26)

Combining with Proposition 2, (25) and (26), we obtain that

$$ \frac{{F ( {a,b} )}}{{\int_{a}^{b} {{p} ( t )\,\mathrm{d}t} }} = ( \log{p} ){^{\prime}} { \bigl[ {a + \theta ( {b - a} )} \bigr]} - ( \log{p} ){^{\prime}} { ( b )} \geqslant0. $$
(27)

Since \(\int_{a}^{b} {{p} ( t )\,\mathrm{d}t} > 0\), we have \(F ( {a,b} ) \geqslant0\) by (27). This proves inequality (24) for the case where \(a < b\).

If \(a>b\), then

$$ b< a+\theta ( {b-a} ) < a. $$
(28)

Combining with Proposition 2, (25) and (28), we obtain that

$$ \frac{{F ( {a,b} ) }}{{\int_{a}^{b}{{p} ( t ) \,\mathrm{d}t}}}= ( \log{p} ){^{\prime}} { \bigl[ {a + \theta ( {b - a} )} \bigr]} - ( \log{p} ){^{\prime}} { ( b )} \leqslant0. $$
(29)

Since \(\int_{a}^{b}{{p} ( t ) \,\mathrm{d}t}<0\), we have \(F ( {a,b} ) \geqslant0\) by (29). So inequality (24) is also valid for the last case.

Next, assume that inequality (24) holds; we prove that the function p is a log-concave function as follows.

According to Proposition 2, we just need to prove (22) where \(a,b\in I\) and \(a< b\).

Assume that \(a, b\in I\) and \(a< b\). By exchanging \(a \leftrightarrow b\) in (24), we get

$$ {p} ( {a} ) -{p} ( {b} ) -\frac{{p ^{\prime} ( {a} ) }}{{{p} ( {a} ) }}\int_{b}^{a}{{p} ( t ) \,\mathrm{d}t}\geqslant0. $$
(30)

By adding (24) and (30), we get

$$ \bigl[ ( \log{p} ){^{\prime}} { (a )} - ( \log{p} ){^{\prime}} { ( b )} \bigr]\int_{a}^{b}{{p} ( t ) \,\mathrm{d}t} \geqslant0. $$
(31)

Since \(\int_{a}^{b}{{p} ( t ) \,\mathrm{d}t}>0\), we get (22) by (31). The proof of Theorem 1 is completed. □
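Inequality (24) can be checked numerically as follows (a Python/NumPy sketch; the standard normal density is just one convenient log-concave example and is not mandated by Theorem 1):

```python
import numpy as np

p  = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)   # a log-concave density
dp = lambda t: -t * p(t)                                 # its derivative p'

def trap(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

rng = np.random.default_rng(2)
for _ in range(5):
    a, b = rng.uniform(-3, 3, size=2)
    x = np.linspace(a, b, 20001)          # signed integral: works for a < b and a > b
    lhs = p(b) - p(a) - dp(b) / p(b) * trap(p(x), x)
    print(lhs >= -1e-12)                  # inequality (24)
```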

In Sections 3.2 and 4.5, we will demonstrate the applications of Theorem 1.

For the log concavity, we have the following interesting example.

Example 2

Let the function \(p:(\alpha,\beta )\rightarrow(0,\infty)\) be a probability density function of a random variable X, and let the probability distribution function of X be

$$P:(\alpha,\beta)\rightarrow[0,1], \qquad P(x)\triangleq\int_{\alpha }^{x}p(t) \,\mathrm{d}t. $$

If \(p:(\alpha,\beta)\rightarrow(0,\infty)\) is a differentiable log-concave function, then \(P:(\alpha,\beta)\rightarrow[0,1]\) is also a log-concave function, i.e.,

$$ 0\leqslant \bigl[P (\alpha< X\leqslant a ) \bigr]^{1-\theta } \bigl[ P (\alpha< X\leqslant b ) \bigr]^{\theta} \leqslant P \bigl[\alpha< X\leqslant(1-\theta)a+\theta b \bigr] \leqslant1, $$
(32)

where \((a,b)\in(\alpha,\beta)^{2}\), \(\theta\in [ {0,1} ]\), and \(P (\alpha< X\leqslant x )\triangleq P(x)\) is the probability of random event ‘\(\alpha< X\leqslant x\)’.

Proof

Set

$$p(x)=e^{\psi(x)},\quad \forall x\in(\alpha,\beta). $$

Since \(p:(\alpha,\beta)\rightarrow(0,\infty)\) is a differentiable log-concave function, by Proposition 2, we know that the function

$$(\log{p} )^{\prime}(x)=\psi^{\prime}(x),\quad x\in{(\alpha ,\beta)}, $$

is monotone decreasing, hence

$$ \psi^{\prime}(x)\leqslant\psi^{\prime}(t),\quad \forall {t}: \alpha< t \leqslant x, \forall {x}: \alpha< x< \beta. $$
(33)

By

$$P^{\prime}(x)=p(x),\qquad P^{\prime\prime}(x)=p^{\prime}(x)= \psi^{\prime}(x)p(x), $$

and (33), we have

$$\begin{aligned} P(x)P^{\prime\prime}(x)- \bigl[P^{\prime}(x) \bigr]^{2} =& \psi ^{\prime}(x)p(x)\int_{\alpha}^{x}p(t) \,\mathrm{d}t-p^{2}(x) \\ =& p(x) \biggl[\int_{\alpha}^{x} \psi^{\prime}(x)p(t)\,\mathrm{d}t-p(x) \biggr] \\ \leqslant& p(x) \biggl[\int_{\alpha}^{x} \psi^{\prime}(t)p(t)\,\mathrm{d}t-p(x) \biggr] \\ = & p(x) \biggl[\int_{\alpha}^{x}e^{\psi(t)} \,\mathrm{d}\psi (t)-p(x) \biggr] \\ = & p(x) \bigl[e^{\psi(x)}-e^{\psi(\alpha+0)}-p(x) \bigr] \\ = & -p(x)p(\alpha+0) \\ \leqslant& 0,\quad \forall x\in{(\alpha,\beta)}. \end{aligned}$$

According to Proposition 3, we know that the function \(P:(\alpha,\beta)\rightarrow[0,1]\) is a log-concave function. The proof is completed. □
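A quick numerical illustration of Example 2 (Python/NumPy; the standard normal density and the lower limit used to approximate α are assumptions of the sketch) checks the log concavity of the distribution function stated in (32):

```python
import numpy as np

# Standard normal density (log-concave) and its distribution function P.
p = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)

def P(x, alpha=-10.0, n=20001):
    t = np.linspace(alpha, x, n)
    y = p(t)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2)

rng = np.random.default_rng(3)
for _ in range(5):
    a, b = rng.uniform(-2, 2, size=2)
    th = rng.uniform()
    # Log concavity of P, i.e., inequality (32):
    print(P((1 - th) * a + th * b) >= P(a) ** (1 - th) * P(b) ** th - 1e-12)
```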

In Section 5.1, we will demonstrate the applications of Example 2.

3.2 Quasi-log concavity

Now we recall the definitions of the quasi-log concavity and the quasi-log convexity as follows [1].

A differentiable function \(p:I\rightarrow(0,\infty)\), here I is an interval, is said to be quasi-log concave if

$$ G_{p} [{a,b} ]\triangleq \biggl( \int_{a}^{b}p \biggr) \bigl[ p^{\prime}(b)-p^{\prime}(a) \bigr] -{ \bigl[ p(b)-p(a) \bigr] ^{2}}\leqslant0,\quad \forall a,b\in I. $$
(34)

If inequality (34) is reversed, then the function \(p:I\rightarrow(0,\infty)\) is said to be quasi-log convex.

We remark here that the function

$$G_{p}:\overline{I}\rightarrow\mathbb{R},\qquad G_{p}[a,b]= \biggl( \int_{a}^{b}p \biggr) \bigl[ p^{\prime}(b)-p^{\prime}(a) \bigr] -{ \bigl[ p(b)-p(a) \bigr] ^{2}} $$

is an interval function. If the function \(p:I\rightarrow ( {0,\infty} ) \) is twice continuously differentiable, then inequality (34) can be rewritten as

$$ G_{p} [ {a,b} ] \triangleq\int_{a}^{b}p \int_{a}^{b}p^{{\prime \prime}}- \biggl( \int _{a}^{b}p^{\prime} \biggr) ^{2} \leqslant0,\quad \forall [a,b]\in\overline{I}. $$
(35)

The significance of the quasi-log concavity in the analysis of variance is as follows (see Theorem 5.1 in [1]): Let \({X _{I}}\) be a continuous random variable and its probability density function \(p:I\rightarrow ( {0,\infty} ) \) be twice continuously differentiable. Then the function \(p:I\rightarrow (0,\infty)\) is quasi-log concave if and only if

$$ 0\leqslant\operatorname{Var} \bigl[ ( \log{p} ) ^{{\prime }} ( X _{{}[ a,b]} ) \bigr] \leqslant-\mathrm{E} \bigl[ ( \log{p} ) ^{{\prime\prime}} ( X _{[ a,b]} ) \bigr] ,\quad \forall [a,b]\in\overline{I}. $$
(36)

We remark here that, for twice continuously differentiable functions, quasi-log concavity implies log concavity, quasi-log convexity implies log convexity, and log convexity implies quasi-log convexity [1].

An interesting conjecture was proposed by Wen et al. in [1] as follows.

Corollary 1

(Quasi-log concavity conjecture [1])

Let the function \(p:I\rightarrow(0,\infty)\), here I is an interval, be differentiable. If p is log concave, then p is quasi-log concave.

Now we prove Corollary 1 which is a corollary of Theorem 1.

Proof

Let p be differentiable and log concave, and let \(a,b\in I\). Without loss of generality, we may assume that \(a< b\).

Since p is log concave, by Proposition 2, we have

$$ \frac{p^{\prime}(a)}{p(a)}={ (\log{p} ) ^{\prime} { ( {a} ) }}\geqslant{ (\log{p} ) ^{\prime} { ( {b} ) }}=\frac{p^{\prime}(b)}{p(b)}. $$
(37)

Assume that \(p(b)-p(a)\geqslant0\). By Theorem 1, we know that (24) holds. Hence

$$ \bigl[{p} ( b ) -{p} ( a ) \bigr]^{2} \geqslant \bigl[{p} ( b ) -{p} ( a ) \bigr] \frac{{p^{\prime} ( b ) }}{{{p} ( b ) }}\int_{a}^{b}{{p}} . $$
(38)

According to (37), (38) and \(\int_{a}^{b}{p}>0\), we get

$$\begin{aligned} G_{p} [{a,b} ] \triangleq& \biggl( \int_{a}^{b}p \biggr) \bigl[ p^{\prime}(b)-p^{\prime}(a) \bigr] -{ \bigl[ p(b)-p(a) \bigr] ^{2}} \\ \leqslant& \biggl( \int_{a}^{b}p \biggr) \bigl[ p^{\prime}(b)-p^{\prime}(a) \bigr] - \bigl[{p} ( b ) -{p} ( a ) \bigr] \frac{{p^{\prime} ( b ) }}{{{p} ( b ) }}\int_{a}^{b}{{p}} \\ =& \frac{p(b) [p^{\prime}(b)-p^{\prime}(a) ]-p^{\prime }(b) [p(b)-p(a) ]}{p(b)}\int_{a}^{b}{{p}} \\ =& \frac{-p(b)p^{\prime}(a)+p^{\prime}(b)p(a)}{p(b)}\int_{a}^{b}{{p}} \\ =& p(a) \biggl[\frac{p^{\prime}(b)}{p(b)}-\frac{p^{\prime }(a)}{p(a)} \biggr]\int _{a}^{b}{{p}} \\ \leqslant& 0. \end{aligned}$$

That is to say, (34) holds.

Assume that \(p(b)-p(a)<0\). Then \(p(a)-p(b)>0\). By the proof of Theorem 1 we know that (30) holds. Hence

$$ \bigl[{p} ( a ) -{p} ( b ) \bigr]^{2} \geqslant \bigl[{p} ( a ) -{p} ( b ) \bigr] \frac{{p^{\prime} ( a ) }}{{{p} ( a ) }}\int_{b}^{a}{{p}} . $$
(39)

According to (37), (39) and \(\int_{a}^{b}{p}>0\), we get

$$\begin{aligned} G_{p} [{a,b} ] \triangleq& \biggl( \int_{a}^{b}p \biggr) \bigl[ p^{\prime}(b)-p^{\prime}(a) \bigr] -{ \bigl[ p(b)-p(a) \bigr] ^{2}} \\ = & \biggl( \int_{a}^{b}p \biggr) \bigl[ p^{\prime}(b)-p^{\prime}(a) \bigr] -{ \bigl[ p(a)-p(b) \bigr] ^{2}} \\ \leqslant& \biggl( \int_{a}^{b}p \biggr) \bigl[ p^{\prime}(b)-p^{\prime}(a) \bigr] - \bigl[{p} ( a ) -{p} ( b ) \bigr] \frac{{p^{\prime} ( a ) }}{{{p} ( a ) }}\int_{b}^{a}{{p}} \\ = & \biggl( \int_{a}^{b}p \biggr) \bigl[ p^{\prime}(b)-p^{\prime}(a) \bigr] - \bigl[{p} ( b ) -{p} ( a ) \bigr] \frac{{p^{\prime} ( a ) }}{{{p} ( a ) }}\int_{a}^{b}{{p}} \\ =& \frac{p(a) [p^{\prime}(b)-p^{\prime}(a) ]-p^{\prime }(a) [p(b)-p(a) ]}{p(a)}\int_{a}^{b}{{p}} \\ =& \frac{p(a)p^{\prime}(b)-p^{\prime}(a)p(b)}{p(a)}\int_{a}^{b}{{p}} \\ =& p(b) \biggl[\frac{p^{\prime}(b)}{p(b)}-\frac{p^{\prime }(a)}{p(a)} \biggr]\int _{a}^{b}{{p}} \\ \leqslant& 0. \end{aligned}$$

That is to say, (34) still holds. Hence p is quasi-log concave.

We remark here that if the function \({ ( {\log{p}} ) ^{{\prime}}}\) is strictly decreasing, then equality in (34) holds if and only if \(a=b\). This completes the proof of Corollary 1. □
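Corollary 1 can also be illustrated numerically (a Python/NumPy sketch; the standard normal density is an illustrative log-concave choice) by evaluating the interval function \(G_{p}[a,b]\) of (34) at random pairs of points and checking that it is non-positive:

```python
import numpy as np

# A log-concave density p and its derivative (standard normal, illustrative).
p  = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
dp = lambda t: -t * p(t)

def trap(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def G_p(a, b, n=20001):
    """The interval function G_p[a, b] defined in (34)."""
    x = np.linspace(a, b, n)
    return trap(p(x), x) * (dp(b) - dp(a)) - (p(b) - p(a)) ** 2

rng = np.random.default_rng(4)
for _ in range(5):
    a, b = rng.uniform(-3, 3, size=2)
    print(G_p(a, b) <= 1e-12)        # quasi-log concavity, as Corollary 1 predicts
```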

Corollary 1 implies the following interesting corollary.

Corollary 2

Let the function \(p:I\rightarrow (0,\infty)\), here I is an interval, be twice continuously differentiable. Then p is quasi-log concave if and only if p is log concave.

4 Theory of ϕ-Jensen variance

The covariance and the variance are important numerical characteristics of random variables. Indeed, the research and the application of these indexes are important topics in probability and statistics. In this section, we generalize the traditional covariance and the variance of random variables, and define ϕ-covariance, ϕ-variance, ϕ-Jensen variance, ϕ-Jensen covariance, integral variance and γ-order variance. We also study the relationships among these ‘variances’. In Section 4.5, we study the monotonicity of the interval function involving the ϕ-Jensen variance by means of the log concavity.

In the following discussion, we assume the following.

I is an n-dimensional interval (or an n-dimensional, closed and bounded domain in \(\mathbb{R}^{n}\)). \({X}\triangleq ( {{X _{1}},\ldots,{X _{n}}} ) \in I\) is an n-dimensional continuous random variable, and its probability density function \({p}:I\rightarrow ( {0,\infty} ) \) is continuous. The functions \({\varphi_{i}}:I\rightarrow J\) and \(\varphi:I\rightarrow J\) are continuous, where J is an interval and \(i=1,\ldots,m\), \(m\geqslant2\). The function \(\phi:J\rightarrow\mathbb{R} \) is continuous and non-constant. Here \(\phi^{\prime} ( x )\), \(\phi^{\prime\prime} ( x )\) and \(\phi^{\prime \prime\prime} ( x )\) denote the derivative, the second order derivative and the third order derivative of the function \(\phi ( x )\), respectively.

4.1 ϕ-Variance

The signed square root of the real number t is defined as

$$\sqrt[\pm]{t}\triangleq\sqrt{\vert t\vert }\operatorname{sign} ( t ) \in \mathbb{R}, $$

where \(\operatorname{sign} ( t ) \) is the sign function, which is similar to the function \(\sqrt[3]{t}\).

The functional

$$ {\operatorname{Cov}_{\phi}} ( {{\varphi_{i}},{ \varphi_{j}}} ) \triangleq \mathrm{E} \bigl[ {\sqrt[\pm]{{\phi ( {{ \varphi_{i}}} ) -\phi ( {\mathrm{E} {\varphi_{i}}} ) }}\cdot\sqrt[\pm]{{\phi ( {{\varphi _{j}}} ) -\phi ( {\mathrm{E} { \varphi_{j}}} ) }}} \bigr],\quad 1\leqslant i,j \leqslant m, $$
(40)

is called the ϕ-covariance of the random variables \({\varphi_{i}} ( {{X }} ) \) and \({\varphi_{j}} ( {{X }} )\), and the non-negative functional

$$ {\operatorname{Var}_{\phi}} {\varphi}\triangleq{\operatorname{Cov}_{\phi}} ( {\varphi ,\varphi} ) =\mathrm{E}\bigl\vert {\phi ( \varphi ) -\phi ( {\mathrm{E}\varphi} ) }\bigr\vert $$
(41)

the ϕ-variance of the random variable \(\varphi ( {{X }} )\), here the functional

$$\mathrm{E}\varphi\triangleq\int_{I}{{p}\varphi} $$

is the mathematical expectation of the random variable \(\varphi ( {{X }} ) \).

We remark here that [21] studied the convergence of the generalized integral

$$\mathrm{E}\phi (\psi+ \delta )\triangleq\int_{1}^{\infty}{p \phi (\psi+ \delta )}, $$

which is a generalized mathematical expectation of the random variable \(\phi [\psi (X )+\delta (X ) ]\) in the interval \([1,\infty)\).

We now define the ϕ-covariance matrix \([ \operatorname{Cov}_{\phi} ( \varphi_{i},\varphi_{j} ) ] _{m\times m}\) of the random variables \({\varphi_{1}} ( {X} ), \ldots , {\varphi_{m}} ( {X } )\) as follows:

$$ { \bigl[ {{\operatorname{Cov}_{\phi}} ( {{\varphi_{i}}, { \varphi_{j}}} )} \bigr]_{m \times m}} \triangleq \left.\begin{bmatrix} {{b_{1,1}}} & {{b_{1,2}}} & \cdots & {{b_{1,m}}} \\ {{b_{2,1}}} & {{b_{2,2}}} & \cdots & {{b_{2,m}}} \\ \vdots & \vdots & \ddots & \vdots \\ {{b_{m,1}}} & {{b_{m,2}}} & \cdots & {{b_{m,m}}} \end{bmatrix} \right._{m \times m}, $$
(42)

where

$$ b_{i,j}\triangleq{{{\operatorname{Cov}}_{\phi}} ( {{\varphi _{i}},{\varphi_{j}}} ) },\quad i,j=1,\ldots,m. $$
(43)

For the ϕ-covariance matrix, we have the following proposition.

Proposition 4

The ϕ-covariance matrix \([\operatorname{Cov} _{\phi} ( \varphi_{i},\varphi_{j} ) ] _{m\times m}\) of the random variables \({\varphi_{1}} ( {X } ),\ldots, {\varphi_{m}} ( {X} ) \) is non-negative.

Proof

Indeed, if we set

$$ {a_{k}}\triangleq\phi ( {{\varphi_{k}}} ) -\phi ( { \mathrm{E} {\varphi_{k}}} ),\quad k=1,\ldots,m, $$
(44)

then

$${b_{i,j}}=\mathrm{E} \bigl( {\sqrt[\pm]{{{a_{i}}}}\cdot \sqrt[\pm ]{{{a_{j}}}}} \bigr),\quad i,j=1,\ldots,m. $$

Hence, for any \(x= ( {{x_{1}},\ldots,{x_{m}}} ) \in{\mathbb{R} ^{m}}\), we have

$$\begin{aligned} \mathrm{E} { \Biggl( {\sum_{k = 1}^{m} {{x_{k}}\sqrt[ \pm]{{{a_{k}}}}} } \Biggr)^{2}} =& \mathrm{E} \Biggl[ {\sum_{k = 1}^{m} {x_{k}^{2}{{ \bigl( {\sqrt[ \pm]{{{a_{k}}}}} \bigr)}^{2}}} + 2\sum_{1 \leqslant i < j \leqslant m} {{x_{i}} {x_{j}} \bigl( \sqrt[ \pm]{{{a_{i}}}} \cdot\sqrt[ \pm]{{{a_{j}}}} \bigr)}} \Biggr]\\ = &\sum_{k = 1}^{m} {\mathrm{E} \bigl( { \sqrt[ \pm]{{{a_{k}}}}\cdot \sqrt[ \pm]{{{a_{k}}}}} \bigr)x_{k}^{2}} + 2\sum_{1 \leqslant i < j \leqslant m} {\mathrm{E} \bigl( {\sqrt[ \pm]{{{a_{i}}}}\cdot\sqrt[ \pm]{{{a_{j}}}}} \bigr){x_{i}} {x_{j}}}\\ = &\sum_{k = 1}^{m} {{b_{k,k}}x_{k}^{2}} + 2\sum_{1 \leqslant i < j \leqslant m} {{b_{i,j}} {x_{i}} {x_{j}}}\\ = &x{ \bigl[ {{{\operatorname{\operatorname{Cov}} }_{\phi}} ( {{ \varphi _{i}},{\varphi_{j}}} )} \bigr]_{m \times m}} {x^{_{T}}}\\ \geqslant& 0. \end{aligned}$$

That is to say, the ϕ-covariance matrix \({ [ {{{\operatorname{\operatorname{Cov}} }_{\phi}} ( {{\varphi_{i}},{\varphi _{j}}} )} ]_{m \times m}}\) of the random variables \({\varphi_{1}} ( {X } ), \ldots, {\varphi_{m}} ( {X} )\) is non-negative. The proof of Proposition 4 is completed. □
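The following sketch (Python/NumPy; the density p, the convex function ϕ and the test functions \(\varphi_{k}\) are illustrative choices) builds the ϕ-covariance matrix (42) for three random variables and confirms numerically that it is non-negative definite, as Proposition 4 asserts:

```python
import numpy as np

def ssqrt(t):                             # signed square root: sqrt(|t|) * sign(t)
    return np.sqrt(np.abs(t)) * np.sign(t)

def trap(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

# Illustrative data: a density p on I = [0, 1], a convex phi and three functions varphi_k.
x = np.linspace(0.0, 1.0, 20001)
p = 2 * x + 0.5
p = p / trap(p, x)                        # normalize so that the density integrates to 1
phi = lambda t: t**2                      # a convex phi
varphis = [x, np.sin(x), np.exp(x)]       # varphi_1, varphi_2, varphi_3 on the grid

E = lambda f: trap(p * f, x)              # expectation with respect to p

# a_k = phi(varphi_k) - phi(E varphi_k), cf. (44).
a_k = [phi(v) - phi(E(v)) for v in varphis]

m = len(a_k)
B = np.array([[E(ssqrt(a_k[i]) * ssqrt(a_k[j])) for j in range(m)] for i in range(m)])

print(np.linalg.eigvalsh(B))              # all eigenvalues are (numerically) non-negative
```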

According to Proposition 4 and the quadratic form theory, all the principal minors of the ϕ-covariance matrix are non-negative. In particular, all the \(2\times2\) principal minors of the ϕ-covariance matrix are non-negative. Hence

$$ \det \begin{bmatrix} {{b_{i,i}}} & {{b_{i,j}}} \\ {{b_{j,i}}} & {{b_{j,j}}} \end{bmatrix} = {b_{i,i}} {b_{j,j}}-{ ( {{b_{i,j}}} )^{2}} \geqslant0. $$

So, if \({b_{i,i}}>0\), \({b_{j,j}}>0\), we can define the functional

$$ {\rho_{\phi} ( {{\varphi_{i}},{\varphi_{j}}} ) }\triangleq \frac{{{b_{i,j}}}}{{\sqrt{{b_{i,i}}}\sqrt{{b_{j,j}}}}}\in [ {-1,1} ] $$
(45)

as a ϕ-correlation coefficient of the random variables \({ \varphi_{i}} ( {{X}} ) \) and \({\varphi_{j}} ( {{X }} ) \), where \(b_{i,j}\) is defined by (43), and \(i,j=1,\ldots,m\).

4.2 ϕ-Jensen variance

We say that the function

$$a \circ b:\mathbb{R}^{2} \rightarrow\mathbb{R},\qquad a \circ b= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \sqrt[\pm]{ab}, & a \ne b, \\ a, & a=b \end{array}\displaystyle \right . $$

is a signed square root product of two real numbers a and b [22].

For the signed square root product \(a \circ b\), we have

$$\lim_{b\rightarrow a}{a \circ b}=\sqrt{a^{2}}=|a| \quad\mbox{and}\quad a \circ a=a, \quad \forall (a,b)\in\mathbb{R}^{2}. $$

Hence the function \(a \circ b\) is discontinuous if \(a=b<0\), and \(a \circ a=a\) is similar to the formula \(\sqrt[3]{a^{3}}=a\).

Since

$$ a,b\in\mathbb{R},\qquad a\ne b \quad\Rightarrow\quad a\circ b=\sqrt[\pm]{a}\times\sqrt [\pm]{b} \quad \mbox{and}\quad a\times b = a\circ b\sqrt{|ab|}, $$
(46)

and

$$ a,b\in\mathbb{R}\quad\Rightarrow \quad a\circ b=b\circ a\leqslant\sqrt[\pm ]{a}\times\sqrt[ \pm]{b}, $$
(47)

we know that the properties of the signed square root product \(a \circ b\) are similar to the product \(a \times b\) and the formula

$$\sqrt[3]{ab}=\sqrt[3]{a}\times\sqrt[3]{b}. $$

The graph of the function \(a \circ b\) is depicted in Figure 2.

Figure 2. The graph of the function \(a \circ b\), \((a,b)\in[-1,1]^{2}\).
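Below is a small computational sketch of the signed square root product (Python/NumPy; the random test values are arbitrary), checking properties (46) and (47) and the behaviour \(a\circ a=a\):

```python
import numpy as np

def ssqrt(t):                      # signed square root: sqrt(|t|) * sign(t)
    return np.sqrt(abs(t)) * np.sign(t)

def circ(a, b):                    # the signed square root product a o b
    return a if a == b else ssqrt(a * b)

rng = np.random.default_rng(5)
for _ in range(5):
    a, b = rng.uniform(-2, 2, size=2)
    # Property (46): a o b = ssqrt(a)*ssqrt(b) and a*b = (a o b)*sqrt(|ab|) for a != b.
    print(np.isclose(circ(a, b), ssqrt(a) * ssqrt(b)),
          np.isclose(a * b, circ(a, b) * np.sqrt(abs(a * b))))
    # Property (47): a o b = b o a <= ssqrt(a)*ssqrt(b).
    print(np.isclose(circ(a, b), circ(b, a)), circ(a, b) <= ssqrt(a) * ssqrt(b) + 1e-12)

print(circ(-0.5, -0.5))            # a o a = a, even for negative a (here -0.5)
```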

Assume that the function ϕ is a convex function. Then we say that the functional

$$ {\operatorname{JCov}_{\phi}} ( {{\varphi_{i}},{\varphi _{j}}} ) \triangleq{\mathrm{E} \bigl[ {\phi ( \varphi_{i} ) -\phi ( {\mathrm{E}\varphi_{i} } ) } \bigr]\circ \bigl[ {\phi ( \varphi_{j} ) -\phi ( {\mathrm{E}\varphi_{j} } ) } \bigr] },\quad 1\leqslant i,j \leqslant m, $$
(48)

is a ϕ-Jensen covariance of the random variables \({\varphi_{i}} ( {{X }} ) \) and \({\varphi_{j}} ( {{X }} )\), and the functional

$$ \operatorname{JVar}_{\phi}{\varphi}={\operatorname{JCov}_{\phi}} ( {{\varphi},{\varphi}} ) $$
(49)

is a ϕ-Jensen variance of the random variable \(\varphi ( {{X}} )\).

According to the definition and Jensen’s inequality [6–10], we have

$$ \operatorname{JVar}_{\phi}{\varphi}=\mathrm{E}\phi ( \varphi ) -\phi ( {\mathrm{E}\varphi} ) \geqslant0. $$
(50)
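As a numerical illustration of (50), and of the fact that the ϕ-Jensen variance never exceeds the ϕ-variance (41) (stated as (54) below), the following sketch (Python/NumPy; the density, the convex ϕ and the function φ are illustrative assumptions) computes both quantities directly from their definitions:

```python
import numpy as np

def trap(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

x = np.linspace(0.0, 1.0, 20001)
p = np.exp(-x)
p = p / trap(p, x)                        # an illustrative normalized density on [0, 1]
phi = lambda t: np.exp(t)                 # a convex phi
v = x**2                                  # varphi(X) evaluated on the grid

E = lambda f: trap(p * f, x)              # expectation with respect to p

JVar = E(phi(v)) - phi(E(v))              # phi-Jensen variance, formula (50)
Var_phi = E(np.abs(phi(v) - phi(E(v))))   # phi-variance, formula (41)

print(JVar >= 0, JVar <= Var_phi)         # (50), and the comparison stated in (54)
```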

According to the above definition, we have the following relationship between the ϕ-Jensen covariance \({\operatorname{JCov}_{\phi}} ( {{\varphi _{i}},{\varphi_{j}}} )\) and the ϕ-covariance \(\operatorname{Cov}_{\phi} ( {{\varphi_{i}},{\varphi _{j}}} )\).

Proposition 5

If \(\vert \Omega_{i,j}\vert =0\), where

$$\Omega_{i,j}\triangleq \bigl\{ t\in\Omega| {\phi \bigl[ \varphi_{i} (t) \bigr] -\phi ( \mathrm{E}\varphi_{i} ) }= {\phi \bigl[ \varphi_{j}(t) \bigr] -\phi (\mathrm{E} \varphi_{j} ) } \bigr\} ,\quad 1\leqslant i\ne j\leqslant m, $$

and \(\vert \Omega_{i,j}\vert \) is the measure of the set \(\Omega _{i,j}\), then we have

$$ \operatorname{JCov}_{\phi} ( {{\varphi _{i}},{ \varphi_{j}}} )=\operatorname{Cov}_{\phi} ( {{\varphi _{i}},{\varphi_{j}}} ). $$
(51)

Proof

Since \(\vert \Omega_{i,j}\vert =0\), by (44), we have

$$ \int_{\Omega_{i,j}}pa_{i}\circ a_{j}=0 \quad\mbox{and}\quad \int_{\Omega_{i,j}}p\sqrt[\pm]{a_{i}}\times \sqrt[\pm]{a_{j}}=0. $$
(52)

From (46) and (52), we get

$$\begin{aligned} {{\operatorname{JCov}_{\phi}} ( {{\varphi _{i}},{ \varphi_{j}}} )} =& {\mathrm{E} \bigl[ {\phi ( \varphi _{i} ) -\phi ( {\mathrm{E}\varphi_{i} } ) } \bigr]\circ \bigl[ {\phi ( \varphi_{j} ) -\phi ( {\mathrm{E}\varphi_{j} } ) } \bigr] } \\ =& \int_{\Omega}pa_{i}\circ a_{j} \\ =& \int_{\Omega_{i,j}}pa_{i}\circ a_{j}+\int _{\Omega\setminus\Omega _{i,j}}pa_{i}\circ a_{j} \\ =&\int_{\Omega\setminus\Omega_{i,j}}pa_{i}\circ a_{j} \\ =&\int_{\Omega\setminus\Omega_{i,j}}p\sqrt[\pm]{a_{i}}\times\sqrt[\pm ]{a_{j}} \\ =&\int_{\Omega_{i,j}}p\sqrt[\pm]{a_{i}}\times\sqrt[ \pm]{a_{j}}+\int_{\Omega\setminus\Omega_{i,j}}p\sqrt[\pm]{a_{i}} \times\sqrt[\pm ]{a_{j}} \\ =&\int_{\Omega}p\sqrt[\pm]{a_{i}}\times\sqrt[ \pm]{a_{j}} \\ =&{\operatorname{Cov}_{\phi}} ( {{\varphi _{i}},{ \varphi_{j}}} ). \end{aligned}$$

This ends the proof of Proposition 5. □

In addition, according to inequality (47), we have

$$ {\operatorname{JCov}_{\phi}} ( {{\varphi _{i}},{ \varphi_{j}}} )\leqslant\operatorname{Cov}_{\phi} ( {{\varphi _{i}},{\varphi_{j}}} ), $$
(53)

and

$$ 0 \leqslant\operatorname{JVar}_{\phi}{\varphi} \leqslant{ \operatorname{Var}_{\phi}} {\varphi}. $$
(54)

Unfortunately, the ϕ-Jensen covariance matrix \({ [ {{{\operatorname{\operatorname{JCov}} }_{\phi}} ( {{\varphi _{i}},{\varphi_{j}}} )} ]_{m \times m}}\) of the random variables \({\varphi_{1}} ( {X } ),\ldots, {\varphi_{m}} ( {X} ) \) is not non-negative in general. But since

$$\begin{aligned} &\bigl\vert {\operatorname{JCov}_{\phi}} ( {{\varphi_{i}},{ \varphi _{j}}} )\bigr\vert \\ &\quad=\bigl\vert {\mathrm{E} \bigl[ {\phi ( \varphi_{i} ) -\phi ( { \mathrm{E}\varphi_{i} } ) } \bigr]\circ \bigl[ {\phi ( \varphi_{j} ) -\phi ( {\mathrm{E}\varphi_{j} } ) } \bigr] }\bigr\vert \\ &\quad\leqslant \mathrm{E}\bigl\vert \bigl[ {\phi ( \varphi_{i} ) - \phi ( {\mathrm{E}\varphi_{i} } ) } \bigr]\circ \bigl[ {\phi ( \varphi_{j} ) -\phi ( {\mathrm{E}\varphi_{j} } ) } \bigr] \bigr\vert \\ &\quad= \mathrm{E} \bigl(\sqrt{\bigl\vert {\phi ( \varphi_{i} ) -\phi ( {\mathrm{E}\varphi_{i} } ) }\bigr\vert }\times\sqrt{\bigl\vert {\phi ( \varphi_{j} ) -\phi ( {\mathrm{E}\varphi_{j} } ) }\bigr\vert } \bigr) \\ &\quad\leqslant \sqrt{\mathrm{E}\bigl\vert {\phi ( \varphi_{i} ) -\phi ( {\mathrm{E}\varphi_{i} } ) }\bigr\vert }\times\sqrt{\mathrm{E} \bigl\vert {\phi ( \varphi_{j} ) -\phi ( {\mathrm{E} \varphi_{j} } ) }\bigr\vert } \\ &\quad= \sqrt{{\operatorname{Cov}_{\phi}} ( {{\varphi_{i}},{ \varphi _{i}}} )}\times\sqrt{{\operatorname{Cov}_{\phi}} ( {{ \varphi _{j}},{\varphi _{j}}} )}, \end{aligned}$$

if \(\sqrt{{\operatorname{Cov}_{\phi}} ( {{\varphi_{i}},{\varphi _{i}}} )}>0\), \(\sqrt{{\operatorname{Cov}_{\phi}} ( {{\varphi _{j}},{\varphi _{j}}} )}>0\), we can define the functional

$$ {\rho_{\phi}^{*} ( {{\varphi_{i}},{ \varphi_{j}}} ) }\triangleq \frac{{\operatorname{JCov}_{\phi}} ( {{\varphi_{i}},{\varphi _{j}}} )}{{\sqrt{{\operatorname{Cov}_{\phi}} ( {{\varphi _{i}},{\varphi _{i}}} )}\sqrt{{\operatorname{Cov}_{\phi}} ( {{\varphi_{j}},{\varphi _{j}}} )}}}\in [ {-1,1} ] $$
(55)

as a ϕ-Jensen correlation coefficient of the random variables \({\varphi_{i}} ( {{X}} ) \) and \({\varphi_{j}} ( {{X }} ) \), where \(i,j=1,\ldots,m\).

A natural question is why we define the ϕ-Jensen variance. One of the reasons is that we have the following relationship between the ϕ-Jensen variance \(\operatorname{JVar}_{\phi}{\varphi}\) and the variance [9, 10]:

$$ \operatorname{Var}\varphi\triangleq\mathrm{E} { ( {\varphi- \mathrm{E}\varphi } )^{2}} = \mathrm{E} {\varphi^{2}} - { ( { \mathrm{E}\varphi} )^{2}}. $$
(56)

Theorem 2

Let the function \(\phi:J\rightarrow ( -\infty,\infty ) \) be twice continuously differentiable and \(\phi^{\prime\prime} ( x ) \geqslant0\), \(\forall x\in J\), and let the function \(\varphi:I\rightarrow J\) be continuous. Then we have the inequalities

$$ \frac{1}{2}\mathop{\inf} _{t\in I} \bigl\{ {{ \phi^{\prime\prime} \bigl[ {\varphi ( t ) } \bigr] }} \bigr\} \leqslant \frac{{\operatorname{JVar}_{\phi}\varphi }}{{\operatorname{Var}\varphi}}\leqslant\frac{1}{2}\mathop{\sup} _{t\in I} \bigl\{ {{\phi^{\prime\prime} \bigl[ {\varphi ( t ) } \bigr] }} \bigr\} . $$
(57)

Suppose that \(I,J \subset(0,\infty)\) are two intervals, and \(\varphi :I\rightarrow J\) is a monotonic function. If we set \(\phi^{\prime \prime}=\varphi^{-1}>0\), then

$$\phi=\iint\varphi^{-1}\triangleq\int\,\mathrm{d}t\int \varphi^{-1}(t)\,\mathrm{d}t, $$

where \(\varphi^{-1}\) is the inverse function of the function φ. Hence inequalities (57) can be rewritten as

$$ \mathop{\inf} _{t\in I} \{t \}\leqslant\mathrm {E}_{\varphi}(X) \triangleq\frac{{2\operatorname{JVar}_{\iint\varphi ^{-1} }\varphi}}{{\operatorname{Var}\varphi}}\leqslant\mathop{\sup} _{t\in I} \{t \} . $$
(58)

We say that the functional \(\mathrm{E}_{\varphi}(X)\) is the φ-mathematical expectation of the random variable \(X _{I}\) and the functional \(\operatorname{JVar}_{\iint\varphi^{-1}}\varphi\) is an integral variance of the random variable \(\varphi ( {{X}} ) \).

In order to facilitate applications in Section 5, we now introduce a special ϕ-Jensen variance, which is called a γ-order variance.

We define a function \({\phi_{\gamma}}\) as follows:

$$ {\phi_{\gamma}}: ( {0 ,\infty} ) \rightarrow \mathbb{R} ,\qquad { \phi_{\gamma}} ( t ) \triangleq\frac {2}{{\gamma ( {\gamma-1} ) }}{ t ^{\gamma}},\quad \gamma\ne0,1. $$
(59)

Then

$$\phi_{\gamma}^{\prime\prime}(t)=2t^{\gamma-2}>0, \quad \forall t\in(0, \infty ), \forall \gamma\in\mathbb{R}. $$

Hence \(\phi_{\gamma}\) is a convex function.

Let \(\varphi(t)>0\), \(\forall t\in I\). Then we say that the functional

$$ {\operatorname{Var}^{ [ \gamma ] }}\varphi\triangleq \operatorname{JVar}_{{\phi_{\gamma}}} \varphi=\frac{2}{\gamma ( {\gamma -1} ) } \bigl[\mathrm{E} {\varphi}^{\gamma}- (\mathrm{E} {\varphi} )^{\gamma} \bigr] $$
(60)

is a γ-order variance of the random variable \(\varphi ( {{X }} )\).

In general, for any real number γ, we define the γ-order variance of the random variable \(\varphi ({{X}} ) \) as follows [22]:

$$ \operatorname{Var}^{[\gamma] }\varphi\triangleq \left \{\textstyle\begin{array}{@{}l@{\quad}l} \frac{2}{{\gamma ( {\gamma-1} ) }} [\mathrm{E}{\varphi }^{\gamma}- (\mathrm{E}{\varphi} )^{\gamma} ], & \gamma\ne0,1, \\ \lim_{\gamma\rightarrow0}\operatorname{Var}^{[\gamma] }\varphi=2 [ \ln (\mathrm{E}\varphi)-\mathrm{E}(\ln{\varphi}) ], & \gamma=0, \\ \lim_{\gamma\rightarrow1}\operatorname{Var}^{[\gamma] }\varphi=2 [ \mathrm{E}(\varphi \ln{\varphi})-(\mathrm{E}\varphi) \ln(\mathrm{E}\varphi) ], & \gamma =1. \end{array}\displaystyle \right . $$
(61)

Note that, from definition (61), we have

$$ \operatorname{Var}^{[\gamma] }\varphi\geqslant 0, \quad\forall \gamma\in \mathbb{R}. $$
(62)

Hence we may say that the functional \((\operatorname{Var}^{[\gamma] }\varphi )^{1/\gamma}\) is a γ-order mean variance of the random variable \(\varphi ({{X }} ) \), where \(\gamma\ne0\).
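The following sketch (Python/NumPy; the density and the positive function φ are illustrative choices) evaluates definition (61) for several values of γ, including the limiting cases \(\gamma=0,1\), and checks inequality (62) as well as the fact that \(\gamma=2\) recovers the traditional variance:

```python
import numpy as np

def trap(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

x = np.linspace(0.0, 1.0, 20001)
p = 1.5 - x
p = p / trap(p, x)                          # an illustrative density on [0, 1]
v = x + 0.5                                 # a positive varphi(X)
E = lambda f: trap(p * f, x)                # expectation with respect to p

def var_gamma(g):
    """gamma-order variance, definition (61)."""
    if abs(g) < 1e-12:
        return 2 * (np.log(E(v)) - E(np.log(v)))
    if abs(g - 1) < 1e-12:
        return 2 * (E(v * np.log(v)) - E(v) * np.log(E(v)))
    return 2 / (g * (g - 1)) * (E(v**g) - E(v) ** g)

for g in [-1.0, 0.0, 0.5, 1.0, 2.0, 3.0]:
    print(g, var_gamma(g) >= 0)             # inequality (62)
print(np.isclose(var_gamma(2.0), E(v**2) - E(v) ** 2))   # gamma = 2 gives Var varphi
```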

Since \(\phi_{\gamma}\) is a convex function, according to Theorem 2 and the continuity, we have

$$ \mathop{\inf} _{t\in I} \bigl\{ \bigl[ \varphi ( t ) \bigr]^{\gamma-2} \bigr\} \leqslant\frac{{\operatorname{Var}^{[\gamma]} \varphi}}{{\operatorname{Var}\varphi}}\leqslant\mathop{\sup} _{t\in I} \bigl\{ \bigl[ \varphi ( t ) \bigr]^{\gamma-2} \bigr\} ,\quad \forall \gamma\in\mathbb{R} . $$
(63)

In [22], the authors defined the Dresher variance mean \(V_{\gamma,\delta}(\varphi)\) of the random variable \(\varphi(X)\) and obtained the Dresher-type inequality (see Theorem 2 in [22]) and the following V-E inequality (see (7) in [22]):

$$ \frac{{\operatorname{Var}^{[\gamma]} \varphi}}{{\operatorname {Var}^{[\delta]}\varphi}}\geqslant \frac{\delta}{\gamma} (\mathrm{E}\varphi )^{\gamma-\delta}, $$
(64)

where \(\gamma>\delta\geqslant1\) and the coefficient \(\delta/\gamma\) is the best constant; the authors also demonstrated the applications of these results in space science (see (55)-(60) in [22]).

Based on the above analysis, we know that the ϕ-Jensen variance and the γ-order variance are natural extensions of the traditional variance

$$\operatorname{Var}\varphi\triangleq \operatorname{Var}^{[2]}\varphi. $$

According to Theorem 2, we may use the ϕ-Jensen variance \(\operatorname{JVar}_{\phi}{\varphi}\) to replace the traditional variance Varφ. For example, we may use the integral variance \({\operatorname{JVar}_{\iint\varphi^{-1} }\varphi}\) or the γ-order variance \(\operatorname{Var}^{[\gamma]}{\varphi}\) to replace the traditional variance Varφ. If \(\varphi(t)\leqslant0\) for some \(t\in{I}\), then we may use the \(\phi_{\gamma}^{*} \)-Jensen variance \(\operatorname{JVar}_{\phi_{\gamma}^{*} }{\varphi}\) to replace the traditional variance Varφ, where

$$ {\phi_{\gamma}^{*}}:\mathbb{R}\rightarrow[ {0 ,\infty} ), \qquad{\phi_{\gamma}^{*}} ( t ) \triangleq\frac{2}{{\gamma ( {\gamma-1} ) }}{ \vert t\vert ^{\gamma}},\quad \gamma>1, $$
(65)

which is a convex function.

We remark here that

$$ \operatorname{JVar}_{\phi_{\gamma}^{*} }{\varphi}=\operatorname{Var}^{[\gamma ]}{ \varphi} \quad\mbox{if } \varphi>0 \mbox{ and } \gamma>1. $$
(66)

Remark 1

Theorem 1 in [7] implies the following results: Let the function \(\phi: [ {0,\infty} ) \rightarrow\mathbb{R} \) be twice continuously differentiable, let both ϕ and \(\phi^{{\prime\prime}}\) be convex, and let the function \(\varphi:I\rightarrow [ {0,\infty} ) \) be continuous. Then we have the inequalities

$$ {\phi}^{{\prime\prime}} \biggl( \frac{{\operatorname{Var}^{[3]}\varphi }}{{\operatorname{Var}\varphi}} \biggr) \leqslant \frac{2{\operatorname{JVar}_{\phi }\varphi}}{{\operatorname{Var}\varphi}}\leqslant\frac{\max \{\phi^{{\prime\prime}}(\varphi)\}+\mathrm{E}\phi^{{\prime \prime }}(\varphi)+\phi^{{\prime\prime}}(\mathrm{E}\varphi)}{3}. $$
(67)

Therefore, there are close relationships among the \({\operatorname{JVar}_{\phi }\varphi}\), \({\operatorname{Var}^{[\gamma]}\varphi}\) and Varφ.

4.3 Proof of Theorem 2

In this section, we will use the following notations [23–25]:

$$\begin{aligned}& \mathbf{x}\triangleq ( {{x_{1}}, \ldots,{x_{n}}} ),\qquad \phi ( \mathbf{x} )\triangleq \bigl( {\phi ( {x_{1}} ), \ldots ,\phi ( {x_{n}} )} \bigr), \qquad \mathbf{p}\triangleq ( {{p_{1}}, \ldots,{p_{n}}} ), \\& {\Omega^{n}}\triangleq \Biggl\{ \mathbf{p} \in{{ ( {0,\infty} )}^{n}}\bigg| {\sum_{i = 1}^{n} {p_{i}} = 1} \Biggr\} ,\qquad S\triangleq \bigl\{ { ( {{t_{1}},{t_{2}}} ) \in{{ [ {0,\infty} )}^{2}}| {{t_{1}} + {t_{2}}} \leqslant1} \bigr\} , \\& A ( {\mathbf{x},\mathbf{p}} )\triangleq\sum_{i = 1}^{n} {{p_{i}} {x_{i}}},\qquad {w_{i,j}} ( { \mathbf{x},\mathbf{p},{t_{1}},{t_{2}}} )\triangleq{t_{1}} {x_{i}} + {t_{2}} {x_{j}} + ( {1 - {t_{1}} - {t_{2}}} )A ( {\mathbf{x},\mathbf{p}} ). \end{aligned}$$

In order to prove Theorem 2, we need three lemmas as follows.

In [22], the authors proved the following Lemma 1 by means of the theory of linear algebra.

Lemma 1

(Lemma 1 in [22])

Let the function \(\phi :J\rightarrow\mathbb{R} \) be twice continuously differentiable. If \(\mathbf{x}\in{J^{n}}\), \(\mathbf{p}\in{\Omega^{n}}\), then we have the following identity:

$$ A \bigl( {\phi ( \mathbf{x} ) ,\mathbf{p}} \bigr) -\phi \bigl( {A ( {\mathbf{x}, \mathbf{p}} ) } \bigr) =\sum_{1\leqslant i< j\leqslant n}{{p_{i}} {p_{j}} \biggl\{ { \iint_{S}{ \phi^{\prime\prime} \bigl[ {{w_{i,j}} ( {\mathbf {x}, \mathbf{p},{t_{1}},{t_{2}}} ) } \bigr] \,\mathrm{d} {t_{1}}\,\mathrm{d} {t_{2}}}} \biggr\} } { ( {{x_{i}}-{x_{j}}} ) ^{2}}. $$
(68)

Lemma 2

Let the function \(\phi:J\rightarrow\mathbb{R} \) be twice continuously differentiable and \(\phi^{\prime\prime} ( x ) \geqslant0\), \(\forall x\in J\). If \(\mathbf{x}\in{J^{n}}\), \(\mathbf{p}\in{\Omega^{n}}\), then we have the following inequalities:

$$ \frac{1}{2}\mathop{\inf} _{t\in J} \bigl\{ { { \phi^{\prime\prime} ( t ) }} \bigr\} \leqslant\frac{{ {A ( {\phi ( \mathbf{x} ) ,\mathbf{p}} ) -\phi ( {A ( {\mathbf{x},\mathbf{p}} ) } ) } }}{{A ( {{\mathbf {x}^{2}},\mathbf{p}} ) -{A^{2}} ( {\mathbf{x},\mathbf{p}} ) }} \leqslant\frac {1}{2}\mathop{\sup} _{t\in J} \bigl\{ { { \phi^{\prime\prime} ( t ) }} \bigr\} . $$
(69)

Proof

We just need to prove the second inequality in (69), because the proof of the first inequality in (69) is similar.

In identity (68), set \(\phi ( t ) = {t^{2}}\). From \(\iint_{S} \,\mathrm{d}{t_{1}}\,\mathrm{d}{t_{2}} =1/2\), we get

$$ A \bigl( {{\mathbf{x}^{2}},\mathbf{p}} \bigr) - {A^{2}} ( { \mathbf{x},\mathbf{p}} ) = \sum_{1 \leqslant i < j \leqslant n} {{p_{i}} {p_{j}}} { ( {{x_{i}} - {x_{j}}} )^{2}}. $$
(70)

According to Lemma 1 and (70), and noting that \({w_{i,j}} ( {\mathbf{x},\mathbf{p},{t_{1}},{t_{2}}} ) \in J\), we get

$$\begin{aligned} &\frac{{ {A ( {\phi ( \mathbf{x} ),\mathbf{p}} ) - \phi ( {A ( {\mathbf{x},\mathbf{p}} )} )}}}{{A ( {{\mathbf {x}^{2}},\mathbf{p}} ) - {A^{2}} ( {\mathbf{x},\mathbf{p}} )}} \\ &\quad= \frac{{{\sum_{1 \leqslant i < j \leqslant n} {{p_{i}}{p_{j}} \{ {\iint_{S} {\phi'' [ {{w_{i,j}} ( {\mathbf{x},\mathbf {p},{t_{1}},{t_{2}}} )} ]\,\mathrm{d}{t_{1}} \,\mathrm{d}{t_{2}}}} \}} {{ ( {{x_{i}} - {x_{j}}} )}^{2}}}}}{{\sum_{1 \leqslant i < j \leqslant n} {{p_{i}}{p_{j}}} {{ ( {{x_{i}} - {x_{j}}} )}^{2}}}} \\ &\quad\leqslant\frac{{\sum_{1 \leqslant i < j \leqslant n} {{p_{i}}{p_{j}} [ {\iint_{S} {\mathop{\sup} _{t \in J} \{ { {\phi'' ( t )} } \}\,\mathrm{d}{t_{1}}\,\mathrm{d}{t_{2}}}} ]} {{ ( {{x_{i}} - {x_{j}}} )}^{2}}}}{{ \sum_{1 \leqslant i < j \leqslant n} {{p_{i}}{p_{j}}} {{ ( {{x_{i}} - {x_{j}}} )}^{2}}}} \\ &\quad={\iint_{S} {\mathop{\sup} _{t \in J} \bigl\{ { { \phi'' ( t )} } \bigr\} \,\mathrm{d} {t_{1}} \,\mathrm{d} {t_{2}}}} \\ &\quad=\frac{1}{2}\mathop{\sup} _{t \in J} \bigl\{ { { \phi'' ( t )}} \bigr\} . \end{aligned}$$

The second inequality in (69) is proved. This ends the proof. □
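The two-sided bound (69) is easy to test numerically. The sketch below (not part of the paper) uses \(\phi(t)=e^{t}\) on \(J=[0,2]\), so that \(\inf_{J}\phi''=1\) and \(\sup_{J}\phi''=e^{2}\); the sample points and weights are randomly generated assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
x = rng.uniform(0.0, 2.0, n)                  # x in J = [0, 2]
p = rng.uniform(size=n); p /= p.sum()         # p in Omega^n

phi = np.exp                                  # phi''(t) = e^t > 0 on J
A = lambda v: np.dot(p, v)                    # A(v, p)

ratio = (A(phi(x)) - phi(A(x))) / (A(x**2) - A(x)**2)
lower = 0.5 * np.exp(0.0)                     # (1/2) inf_{t in J} phi''(t)
upper = 0.5 * np.exp(2.0)                     # (1/2) sup_{t in J} phi''(t)

print(lower, ratio, upper)
assert lower <= ratio <= upper                # inequality (69)
```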

One of the integral analogues of Lemma 2 is the following Lemma 3.

Lemma 3

Under the hypotheses of Theorem  2, we have the following inequalities:

$$ \frac{1}{2}\mathop{\inf} _{t \in I} \bigl\{ { { \phi^{\prime\prime} \bigl[ {\varphi ( t )} \bigr]} } \bigr\} \leqslant \frac{{ {\int_{I} {p} \phi ( \varphi ) - \phi ( {\int_{I} {p} \varphi} )}}}{{\int_{I} {p} {\varphi^{2}} - {{ ( {\int_{I} {p} \varphi} )}^{2}}}} \leqslant \frac{1}{2}\mathop{\sup} _{t \in I} \bigl\{ { {\phi^{\prime\prime} \bigl[ {\varphi ( t )} \bigr]} } \bigr\} . $$
(71)

Proof

We just need to prove the second inequality in (71), because the proof of the first inequality in (71) is similar.

Let \(T \triangleq \{ {\varDelta {I_{1}}, \ldots, \varDelta {I_{m}}} \}\) be a partition of I. Pick any \({\eta_{i}} \in \varDelta {I_{i}}\), \(1\leqslant i \leqslant m\), and set

$$\begin{aligned}& \eta \triangleq ( {{\eta_{1}},{\eta_{2}}, \ldots,{ \eta_{m}}} ) \in {I^{m}}, \qquad \Vert T \Vert \triangleq\mathop{ \max} _{1 \leqslant i \leqslant m} \mathop{\max} _{X,Y \in \varDelta {I_{i}}} \bigl\{ {\Vert {X - Y} \Vert } \bigr\} ,\\& {\mathbf{p}_{*}} ( \eta )\triangleq \bigl( {{p_{*1}} ( \eta ), \ldots,{p_{*m}} ( \eta )} \bigr)\triangleq \frac{{ ( {{p} ( {{\eta_{1}}} ) \vert {\varDelta {I_{1}}} \vert , \ldots,{p} ( {{\eta_{m}}} )\vert {\varDelta {I_{m}}} \vert } )}}{{\sum_{i = 1}^{m} {{p} ( {{\eta_{i}}} )\vert {\varDelta {I_{i}}} \vert } }}, \end{aligned}$$

where \(\Vert {X - Y} \Vert \) is the Euclidean norm of \(X - Y\), \(\vert {\varDelta {I_{i}}} \vert \) is the measure of \(\varDelta {I_{i}}\), i.e., its n-dimensional volume, and \({\mathbf{p}_{*}} ( \eta ) \in{\Omega ^{m}}\), i.e.,

$$\sum_{i = 1}^{m} {{p_{*i}} ( \eta )} = 1. $$

Since

$$\mathop{\lim} _{\Vert T \Vert \to0} \sum_{i = 1}^{m} {{p} ( {{\eta_{i}}} )} \vert {\varDelta {I_{i}}} \vert = \int_{I} {{p}} = 1, $$

according to the definition of the Riemann integral and Lemma 2, we get

$$\begin{aligned} \frac{{ {\int_{I} {{p}} \phi ( \varphi ) - \phi ( { \int_{I} {{p}} \varphi} )} }}{{\int_{I} {{p}} {\varphi^{2}} - {{ ( {\int_{I} {{p}} \varphi} )}^{2}}}} & = \mathop{\lim} _{\Vert T \Vert \to0} \frac{{ {A ( {\phi ( {\varphi ( \eta )} ),{\mathbf{p}_{*}} ( \eta )} ) - \phi ( {A ( {\varphi ( \eta ),{\mathbf {p}_{*}} ( \eta )} )} )} }}{{A ( {{\varphi^{2}} ( \eta ), {\mathbf{p}_{*}} ( \eta )} ) - {A^{2}} ( {\varphi ( \eta ), {\mathbf{p}_{*}} ( \eta )} )}} \\ & \leqslant\mathop{\lim} _{\Vert T \Vert \to0} \frac{1}{2}\mathop{\sup} _{t \in I} \bigl\{ { {\phi '' \bigl[ {\varphi ( t )} \bigr]} } \bigr\} \\ & = \frac{1}{2}\mathop{\sup} _{t \in I} \bigl\{ { { \phi'' \bigl[ {\varphi ( t )} \bigr]}} \bigr\} . \end{aligned}$$

The second inequality in (71) is proved. This ends the proof of Lemma 3. □

The proof of Theorem 2 is now relatively easy.

Proof

We just need to prove the second inequality in (57), because the proof of the first inequality in (57) is similar.

According to (56) and Lemma 3, we get

$$\frac{{\operatorname{JVar}_{\phi}\varphi}}{{\operatorname{Var}\varphi}}=\frac{{{\mathrm{E}\phi ( \varphi ) -\phi ( {\mathrm{E}\varphi } ) }}}{{\mathrm{E}{{ ( {\varphi-\mathrm{E}\varphi} ) }^{2}}}}=\frac {{{\int_{I}{p}\phi ( \varphi ) -\phi ( {\int_{I}{p}\varphi} ) } }}{{\int_{I}{p}{\varphi^{2}}-{{ ( {\int_{I}{p}\varphi} ) }^{2}}}}\leqslant \frac{1}{2}\mathop{\sup} _{t\in I} \bigl\{ {{\phi^{\prime\prime} \bigl[ {\varphi ( t ) } \bigr] }} \bigr\} . $$

This proves the second inequality in (57). The proof of Theorem 2 is completed. □

A large number of results from algebra, functional analysis and inequality theory are used in the proof of Theorem 2. Based on these results, we obtained Lemma 2, which is the discrete form of Theorem 2; according to Lemma 2 and the definition of the Riemann integral, we obtained Lemma 3 and thereby the proof of Theorem 2. Therefore, the proof of Theorem 2 is both interesting and very difficult.

4.4 An example in the generalized traditional teaching model

To illustrate the significance of the ϕ-Jensen variance, the integral variance and the γ-order variance, we provide the following example.

In the generalized traditional teaching model \(\operatorname{HTM} \{ {-\infty,\infty, {p_{\mathbb{R}}}} \}\), suppose that the score of a student is \({X \in{J}}\), where \(J=(\mu, \infty)\), \(0 \leqslant\mu< \infty\), and μ is the average score of the students. To stimulate a student's enthusiasm for learning, we may give the student a bonus payment \(\mathscr{A} ( {{X }} )\), where \(X> \mu\). The function \(\mathscr{A} :J\rightarrow ( {0,\infty} ) \) is called an allowance function of \(\operatorname{HTM} \{ {-\infty,\infty, {p_{\mathbb{R}}}} \}\) [1]. In general, we define the allowance function \(\mathscr{A}\) as follows:

$$ \mathscr{A}: J\rightarrow ( {0,\infty} ), \qquad\mathscr{A} ( t ) \triangleq c{ ( {t- \mu} ) ^{\alpha}},\quad c>0, \alpha>0. $$
(72)

Assume that \(s=\mathscr{A}(X)\), \(X\in J\). Then

$$ X=\mathscr{A}^{-1}(s)= \biggl(\frac{s}{c} \biggr)^{1/\alpha}+ \mu. $$
(73)

Hence

$$ \phi=\iint\mathscr{A}^{-1}\triangleq\int\,\mathrm{d} s\int \mathscr{A}^{-1}(s)\,\mathrm{d}s=\frac{c^{-1/\alpha}s^{1/\alpha +2}}{(1/\alpha+1)(1/\alpha+2)}+\frac{1}{2}{ \mu}s^{2}+C_{1}s+C_{0}, $$
(74)

here we define the constants \(C_{0}\triangleq0\) and \(C_{1}\triangleq0\). Therefore, the integral variance of the random variable \(\mathscr{A} ( { {X}} ) \) is

$$\begin{aligned} {\operatorname{JVar}_{\iint\mathscr{A}^{-1} }\mathscr{A} } =&\mathrm{E}\phi ( \mathscr{A} ) -\phi ( {\mathrm{E}\mathscr{A} } ) \\ =&\mathrm{E} \biggl[\frac{c^{-1/\alpha}\mathscr{A}^{1/\alpha+2}}{(1/\alpha +1)(1/\alpha+2)}+\frac{1}{2}{\mu} \mathscr{A}^{2} \biggr]- \biggl[\frac {c^{-1/\alpha}(\mathrm{E}\mathscr{A})^{1/\alpha+2}}{(1/\alpha+1)(1/\alpha +2)}+\frac{1}{2}{ \mu}(\mathrm{E}\mathscr{A})^{2} \biggr] \\ =& \frac{c^{-1/\alpha}}{2}\operatorname{Var}^{[1/\alpha+2]}(\mathscr{A})+ \frac {1}{2}{\mu}\operatorname{Var}(\mathscr{A}) \\ =& \frac{1}{2}c^{2}\operatorname{Var}^{[1/\alpha+2]}(X- \mu)^{\alpha}+\frac {1}{2}c^{2}{\mu}\operatorname{Var}(X- \mu)^{\alpha} \\ =& \frac{1}{2}c^{2} \bigl[\operatorname{Var}^{[1/\alpha+2]}(X- \mu)^{\alpha}+{\mu }\operatorname{Var}(X-\mu)^{\alpha} \bigr] , \end{aligned}$$

i.e.,

$$ {\operatorname{JVar}_{\iint\mathscr{A}^{-1} }\mathscr{A}}=\frac{1}{2}c^{2} \bigl[\operatorname{Var}^{[1/\alpha+2]}(X-\mu)^{\alpha}+{\mu} \operatorname{Var}(X-\mu )^{\alpha} \bigr], $$
(75)

and the \(\mathscr{A}\)-mathematical expectation of the random variable X is

$$ \mathrm{E}_{\mathscr{A}}(X)\triangleq\frac{{2\operatorname{JVar}_{\iint\mathscr {A}^{-1} }\mathscr{A} }}{{\operatorname{Var}\mathscr{A} }}=\frac{\operatorname{Var}^{[1/\alpha+2]}(X-\mu)^{\alpha}}{\operatorname {Var}(X-\mu)^{\alpha}}+\mu \in J. $$
(76)

On the other hand, by inequality (64), we have

$$ \mathrm{E}_{\mathscr{A}}(X)\geqslant\frac{2}{1/\alpha+2} \bigl[\mathrm {E}(X- \mu)^{\alpha} \bigr]^{1/\alpha}+\mu,\quad \forall \alpha>0, $$
(77)

where \([\mathrm{E}(X-\mu)^{\alpha} ]^{1/\alpha}\) is the α-power mean [22, 26, 27] of the random variable \(X-\mu\).
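For a concrete feel for (75)-(77), the following sketch (not from the paper) evaluates \(\mathrm{E}_{\mathscr{A}}(X)\) and the lower bound in (77) in closed form, under the hypothetical assumptions \(\mu=0.5\), \(\alpha=0.8\) and \(X-\mu\) exponentially distributed with unit rate, so that \(\mathrm{E}(X-\mu)^{\beta}=\Gamma(1+\beta)\); the constant c cancels in (76).

```python
import math

mu, alpha = 0.5, 0.8                                       # hypothetical parameters
gamma_ord = 1.0 / alpha + 2.0                              # the order 1/alpha + 2 in (75)-(76)

# Moments of Y = X - mu ~ Exponential(1):  E Y^beta = Gamma(1 + beta).
EYa  = math.gamma(1.0 + alpha)                             # E (X - mu)^alpha
EY2a = math.gamma(1.0 + 2.0 * alpha)                       # E (X - mu)^(2 alpha)
EYag = math.gamma(1.0 + alpha * gamma_ord)                 # E (X - mu)^(alpha * gamma)

var_gamma = 2.0 / (gamma_ord * (gamma_ord - 1.0)) * (EYag - EYa ** gamma_ord)
var_plain = EY2a - EYa ** 2

E_A_of_X = var_gamma / var_plain + mu                      # (76)
lower_77 = 2.0 / gamma_ord * EYa ** (1.0 / alpha) + mu     # right-hand side of (77)

print(E_A_of_X, lower_77)
assert E_A_of_X >= lower_77 and E_A_of_X > mu              # (77), and E_A(X) lies in J
```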

4.5 Monotonicity of the interval function \(\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} ) \)

In this section, we apply the log concavity of function to study the monotonicities of the interval function \(\operatorname{JVar}_{\phi }\varphi ( {{X _{ [ {a,b} ] }}} ) \) involving a ϕ-Jensen variance. In particular, we generalize inequalities in (21) to the case where D is a truncated random variable (see Remark 2). Our purpose is to study the hierarchical and the traditional teaching models from the angle of the analysis of variance, so as to decide on the superiority or the inferiority of the hierarchical teaching model and the traditional teaching model.

Let \(X_{[a,b]}\) be a truncated random variable of X, where the probability density function \({p}:I\rightarrow ( {0,\infty} ) \) of X is continuous. Then, by (4), (50) and the definition of the truncated random variable, we know that the ϕ-Jensen variance of the random variable \(\varphi (X_{[a,b]} )\) is

$$ \operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} ) \triangleq \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \frac{{\int_{a}^{b} {{p}\phi\bullet\varphi} }}{{\int_{a}^{b} {{p}} }} - \phi ( {\frac{{\int_{a}^{b} {{p}\varphi} }}{{\int_{a}^{b} {{p}} }}} ), & a \ne b, \\ 0, & {a = b}, \end{array}\displaystyle \right . \quad \forall [{a,b} ] \in\overline{I}, $$
(78)

which is a non-negative interval function, where \(\phi\bullet\varphi \triangleq\phi ( {\varphi} )\) is a composite function.
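For later reference, the interval function (78) is straightforward to evaluate numerically. The helper below is a minimal sketch (not from the paper): the name `jvar_truncated` and the midpoint-rule grid size are illustrative assumptions, and the demo takes X standard normal with \(\phi(x)=x^{2}\), \(\varphi(t)=t\), in which case (78) reduces to the ordinary variance of \(X_{[a,b]}\).

```python
import numpy as np

def jvar_truncated(phi, varphi, p, a, b, n=20_000):
    """phi-Jensen variance of varphi(X_[a,b]) as in (78), via midpoint Riemann sums;
    p is the density of X, restricted here to [a, b]."""
    if a == b:
        return 0.0
    t = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)   # midpoints
    w = p(t) * (b - a) / n                                         # p(t) dt
    mass = w.sum()
    mean_varphi = np.dot(w, varphi(t)) / mass
    return np.dot(w, phi(varphi(t))) / mass - phi(mean_varphi)

# Demo: standard normal density, phi(x) = x^2, varphi(t) = t.
p = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
print(jvar_truncated(lambda x: x**2, lambda t: t, p, -1.0, 1.0))   # Var of X_[-1, 1]
```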

The main result of this section is the following Theorem 3.

Theorem 3

Let the function \({p}:I\rightarrow ( {0,\infty } ) \) be differentiable and log-concave, and let the functions \(\phi :J\rightarrow\mathbb{R} \) and \(\varphi:I\rightarrow J \) be thrice differentiable and twice differentiable, respectively, which satisfy the following conditions:

$$\phi ^{{\prime\prime}} ( x ) >0,\quad \forall x\in J \quad\textit{and} \quad{ \varphi^{{\prime}}} ( t ) >0, \quad\forall t\in I, $$

where I and J are intervals. Then we have the following two assertions.

  1. (I)

    If \(\phi^{{\prime\prime\prime}} ( x )\geqslant 0\), \(\forall x\in J\), and \(\varphi^{{\prime\prime}} ( t )\geqslant 0\), \(\forall t\in I\), then the interval function \(\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} ) \) (\([ {a,b} ]\in\overline{I} \)) is right increasing.

  2. (II)

    If \(\phi^{{\prime\prime\prime}} ( x ) \leqslant 0\), \(\forall x\in J\), and \(\varphi^{{\prime\prime}} ( t ) \leqslant 0\), \(\forall t\in I\), then the interval function \(\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} ) \) (\([ {a,b} ]\in\overline{I} \)) is left increasing.

Here the interval function \(\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} ) \) is defined by (78).

Two real numbers α and β are said to have the same sign [28], written as \(\alpha\sim\beta\), if

$$ \alpha>0\quad\Rightarrow\quad\beta>0,\qquad \alpha=0\quad\Rightarrow\quad\beta=0 \quad\mbox{and}\quad \alpha < 0 \quad\Rightarrow\quad\beta< 0. $$
(79)

In the following discussion, we set

$$ w \triangleq\frac{{\int_{a}^{b}{p\varphi}}}{{\int_{a}^{b}p}}=\frac{{\int_{a}^{b}{p ( t )\varphi ( t ) }\,\mathrm{d}t}}{{\int_{a}^{b}p ( t )\,\mathrm{d}t}},\quad a\ne b. $$
(80)

In order to prove Theorem 3, we need four lemmas as follows.

Lemma 4

Let the functions \({p}:I\rightarrow ( {0,\infty } ) \) and \(\varphi:I\rightarrow J \) be continuous, and let the function \(\phi :J\rightarrow\mathbb{R} \) be differentiable. If we set

$$ H ( {a,b} ) \triangleq \bigl[ {\phi\bullet\varphi ( b ) -{\phi{^{\prime}}} ( w ) \bigl( {\varphi ( b ) -w} \bigr) } \bigr] \int_{a}^{b}p- \int_{a}^{b}{p\phi\bullet\varphi}, $$
(81)

then we have

$$ \frac{\partial\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} )}{{\partial b}}\sim H ( {a,b} ),\quad \forall [ {a,b} ] \in \overline{I}, $$
(82)

where I and J are intervals, \(\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} )\) and w are defined by (78) and (80), respectively.

Proof

According to the above definition, we have the following formula:

$$ c>0, \qquad\alpha\in\mathbb{R} \quad\Rightarrow \quad c\alpha\sim \alpha. $$
(83)

By the identity

$$ \frac{\partial}{{\partial b}}\int_{a}^{b}{f}\triangleq \frac{\partial}{{ \partial b}}\int_{a}^{b}{f ( t ) } \,\mathrm{d}t\equiv f ( b ),\quad \forall ( {a,b} ) \in{I^{2}}, $$
(84)

we get

$$ \frac{{\partial w}}{{\partial b}}=\frac{{p ( b ) \varphi ( b ) \int_{a}^{b}p-p ( b ) \int_{a}^{b}{p\varphi }}}{{{{ ( {\int_{a}^{b}p} ) }^{2}}}}=\frac{{p ( b ) }}{{\int_{a}^{b}p}} \bigl( { \varphi ( b ) -w} \bigr), $$
(85)

and

$$ \frac{{\partial}}{{\partial b}}{\frac{{\int_{a}^{b}{p\phi\bullet \varphi}}}{{\int_{a}^{b}p}}}= \frac{{p ( b ) \phi\bullet \varphi ( b ) \int_{a}^{b}p-p ( b ) \int_{a}^{b}{p\phi\bullet\varphi }}}{{{{ ( {\int_{a}^{b}p} ) }^{2}}}}. $$
(86)

According to (83)-(86) and (78), we have

$$\begin{aligned} \frac{\partial\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} )}{{\partial b}}& =\frac{\partial}{{\partial b}} \biggl[ {\frac{{\int_{a}^{b}{p\phi\bullet\varphi}}}{{\int_{a}^{b}p}}- \phi ( w ) } \biggr] \\ & =\frac{{p ( b ) \phi\bullet\varphi ( b ) \int_{a}^{b}p-p ( b ) \int_{a}^{b}{p\phi\bullet\varphi }}}{{{{ ( {\int_{a}^{b}p} ) }^{2}}}}-{\phi^{{\prime}}} ( w )\frac{{ \partial w}}{{\partial b}} \\ & =\frac{{p ( b ) \phi\bullet\varphi ( b ) \int_{a}^{b}p-p ( b ) \int_{a}^{b}{p\phi\bullet\varphi }}}{{{{ ( {\int_{a}^{b}p} ) }^{2}}}}-{\phi^{{\prime}}} ( w ) \frac {{p ( b ) }}{{\int_{a}^{b}p}} \bigl( { \varphi ( b ) -w} \bigr) \\ & =\frac{{p ( b ) }}{{{{ ( {\int_{a}^{b}p} ) }^{2}}}} \biggl[ {\phi\bullet\varphi ( b ) \int _{a}^{b}p-\int_{a}^{b}{p \phi \bullet\varphi}-{\phi{^{\prime}}} ( w ) \bigl( {\varphi ( b ) -w} \bigr) \int_{a}^{b}p} \biggr] \\ & =\frac{{p ( b ) }}{{{{ ( {\int_{a}^{b}p} ) }^{2}}}} \biggl\{ { \bigl[ {\phi\bullet\varphi ( b ) -{ \phi{^{\prime }}} ( w ) \bigl( {\varphi ( b ) -w} \bigr) } \bigr] \int _{a}^{b}p-\int_{a}^{b}{p \phi\bullet\varphi}} \biggr\} \\ & \sim \bigl[ {\phi\bullet\varphi ( b ) -{\phi^{{\prime }}} ( w ) \bigl( { \varphi ( b ) -w} \bigr) } \bigr] \int_{a}^{b}p- \int_{a}^{b}{p\phi\bullet\varphi} \\ &= H(a,b). \end{aligned}$$

Hence (82) holds. This ends the proof. □

Lemma 5

Let the function \({p}:I\rightarrow ( {0,\infty } ) \) be continuous, and let the functions \(\varphi:I\rightarrow J \) and \(\phi :J\rightarrow\mathbb{R} \) be differentiable and twice differentiable, respectively. Then we have

$$ \frac{{\partial H ( {a,b} ) }}{{\partial b}}= \biggl( {\varphi ^{{\prime}} ( b ) \int _{a}^{b}p} \biggr) \int_{w}^{\varphi ( b ) }{ \phi^{{\prime\prime}} ( t ) }\,\mathrm{d}t-p ( b ) \phi^{{\prime\prime}} ( w ) { \bigl( {\varphi ( b ) -w} \bigr) ^{2}}, \quad\forall [ {a,b} ] \in\overline{I}, $$
(87)

where I and J are intervals, w and \(H ( {a,b} )\) are defined by (80) and (81), respectively.

Proof

By (85), we have

$$\begin{aligned} &{\frac{\partial [ {\phi\bullet\varphi ( b ) -\phi^{\prime} ( w ) ( {\varphi ( b ) -w} ) } ] }{{\partial b}}} \\ &\quad={\frac{\partial [ {\phi\bullet\varphi ( b ) } ] }{{\partial b}}}-{\frac{\partial [ {\phi ^{\prime} ( w ) ( {\varphi ( b ) -w} ) } ] }{{\partial b}}} \\ &\quad={\phi^{\prime}\bullet\varphi ( b ) \varphi^{\prime} ( b )} - \phi^{{\prime\prime}} ( w ) \frac{{\partial w}}{{\partial b}} \bigl( {\varphi ( b ) -w} \bigr) - \phi ^{{\prime }} ( w ) \biggl( {\varphi^{{\prime}} ( b ) - \frac{{\partial w}}{{\partial b}}} \biggr)\\ &\quad={\phi^{\prime}\bullet\varphi ( b ) \varphi^{\prime} ( b )} - \phi^{{\prime\prime}} ( w ) \frac{{p ( b ) }}{{\int_{a}^{b}p}} \bigl( {\varphi ( b ) -w} \bigr)^{2} -\phi^{{\prime }} ( w ) \biggl[ {\varphi^{{\prime}} ( b ) -\frac {{p ( b ) }}{{\int_{a}^{b}p}} \bigl( {\varphi ( b ) -w} \bigr)} \biggr] \\ &\quad= \bigl[{\phi^{\prime}\bullet\varphi ( b )}-\phi^{{\prime }} ( w ) \bigr]\varphi^{{\prime}} ( b ) -\phi ^{{\prime\prime}} ( w ) \frac{{p ( b ) }}{{\int_{a}^{b}p}} \bigl( {\varphi ( b ) -w} \bigr)^{2} + \phi^{{\prime }} ( w ) { \frac{{p ( b ) }}{{\int_{a}^{b}p}} \bigl( {\varphi ( b ) -w} \bigr)}. \end{aligned}$$

Hence from (81) and (84), we get

$$\begin{aligned} \frac{{\partial H ( {a,b} ) }}{{\partial b}} ={}&\frac{\partial }{{\partial b}} \biggl\{ { \bigl[ {\phi\bullet\varphi ( b ) -\phi ^{{\prime }} ( w ) \bigl( {\varphi ( b ) -w} \bigr) } \bigr] \int _{a}^{b}p-\int_{a}^{b}{p \phi\bullet\varphi}} \biggr\} \\ ={}& {\frac{\partial [ {\phi\bullet\varphi ( b ) -\phi^{\prime} ( w ) ( {\varphi ( b ) -w} ) } ] }{{\partial b}}} \int_{a}^{b}p \\ &{} + \bigl[ {\phi\bullet\varphi ( b ) -\phi^{{\prime}} ( w ) \bigl( {\varphi ( b ) -w} \bigr) } \bigr] p ( b ) -p ( b ) \phi\bullet\varphi ( b ) \\ ={}& \bigl[{\phi^{\prime}\bullet\varphi ( b )}-\phi^{{\prime }} ( w ) \bigr]\varphi^{{\prime}} ( b )\int_{a}^{b}p- \phi^{{\prime\prime}} ( w )p ( b ) \bigl( {\varphi ( b ) -w} \bigr)^{2} \\ &{} +\phi^{{\prime }} ( w )p ( b ) \bigl( {\varphi ( b ) -w} \bigr)+ \bigl[ { \phi\bullet\varphi ( b ) -\phi^{{\prime}} ( w ) \bigl( {\varphi ( b ) -w} \bigr) } \bigr] p ( b ) -p ( b ) \phi\bullet\varphi ( b ) \\ ={}& \biggl( {\varphi^{{\prime}} ( b ) \int_{a}^{b}p} \biggr) \bigl[ {\phi^{{\prime}}\bullet\varphi ( b ) -\phi^{{\prime }} ( w ) } \bigr] -p ( b ) \phi^{{\prime\prime}} ( w ) { \bigl( {\varphi ( b ) -w} \bigr) ^{2}} \\ ={}& \biggl( {\varphi^{{\prime}} ( b ) \int_{a}^{b}p} \biggr) \int_{w}^{\varphi ( b ) }{\phi^{{\prime\prime}} ( t ) }\,\mathrm{d}t-p ( b ) \phi^{{\prime\prime}} ( w ) { \bigl( {\varphi ( b ) -w} \bigr) ^{2}}. \end{aligned}$$

The proof is completed. □

Lemma 6

Let the function \({p}:I\rightarrow ( {0,\infty } ) \) be differentiable and log-concave, and let the function \(\varphi :I\rightarrow J \) be twice differentiable and satisfy the following condition:

$${\varphi^{{\prime}}} ( t ) >0,\quad \forall t\in I. $$

If we set

$$ {H_{\ast}} ( {a,b} ) \triangleq{\varphi^{{\prime}}} ( b ) \int _{a}^{b}p-p ( b ) \bigl( {\varphi ( b ) -w} \bigr), $$
(88)

where w is defined by (80), then we have

$$ \frac{{\partial{H_{\ast}} ( {a,b} ) }}{{\partial b}}> \varphi^{{\prime\prime}} ( b ) \int_{a}^{b}p,\quad \forall [a,b]\in\overline{I}, a\ne b. $$
(89)

Proof

By (84) and (85), we get

$$\begin{aligned} \frac{{\partial{H_{\ast}} ( {a,b} ) }}{{\partial b}}& =\frac{ \partial}{{\partial b}} \biggl[ {\varphi^{{\prime}} ( b ) \int_{a}^{b}p-p ( b ) \bigl( {\varphi ( b ) -w} \bigr) } \biggr] \\ & =\varphi^{{\prime\prime}} ( b ) \int_{a}^{b}p+ \varphi ^{{\prime}} ( b ) p ( b ) -p^{{\prime}} ( b ) \bigl( {\varphi ( b ) -w} \bigr) -p ( b ) \biggl( {\varphi ^{\prime} (b ) -\frac{{\partial w}}{{\partial b}}} \biggr) \\ & =\varphi^{{\prime\prime}} ( b ) \int_{a}^{b}p-p^{\prime } ( b ) \bigl( {\varphi ( b ) -w} \bigr) +p ( b ) \frac{{\partial w}}{{\partial b}} \\ & =\varphi^{{\prime\prime}} ( b ) \int_{a}^{b}p-p^{{\prime }} ( b ) \bigl( {\varphi ( b ) -w} \bigr) +p ( b ) \frac{{p ( b ) }}{{\int_{a}^{b}p}} \bigl( { \varphi ( b ) -w} \bigr) \\ &=\varphi^{{\prime\prime}} ( b ) \int_{a}^{b}p+ \frac {{p ( b ) }}{{\int_{a}^{b}p}} \bigl( {\varphi ( b ) -w} \bigr) \biggl[ {p ( b ) - \frac{{p^{\prime} ( b ) }}{{p ( b ) }}\int_{a}^{b}p} \biggr], \end{aligned}$$

i.e.,

$$ \frac{{\partial{H_{\ast}} ( {a,b} ) }}{{\partial b}}=\varphi ^{{\prime\prime}} ( b ) \int_{a}^{b}p+ \frac{{p ( b ) }}{{\int_{a}^{b}p}} \bigl( {\varphi ( b ) -w} \bigr) \biggl[ {p ( b ) - \frac{{p^{\prime} ( b ) }}{{p ( b ) }}\int_{a}^{b}p} \biggr]. $$
(90)

Since the function \(p:I\rightarrow ( {0,\infty} ) \) is differentiable and log-concave, according to Theorem 1, we have

$$ p ( b ) -\frac{{{p^{{\prime}}} ( b ) }}{{p ( b ) }}\int_{a}^{b}p \geqslant p ( a ) >0,\quad \forall ( {a,b} ) \in{I^{2}}. $$
(91)

If \(a< b\), then, by \({\varphi^{{\prime}}} ( t ) >0\), \(\forall t\in I\), we have

$$ \varphi ( a ) < w\triangleq\frac{{\int_{a}^{b}{{p}\varphi }}}{{\int_{a}^{b}{{p}}}}< \varphi ( b ). $$
(92)

Combining with (90)-(92) and \(\int_{a}^{b}p>0\), we get (89).

If \(a>b\), then, by \({\varphi^{{\prime}}} ( t ) >0\), \(\forall t\in I\), we have

$$ \varphi ( b ) < w\triangleq\frac{{\int_{a}^{b}{{p}\varphi }}}{{\int_{a}^{b}{{p}}}}< \varphi ( a ). $$
(93)

Combining with (90), (91), (93) and \(\int_{a}^{b}p<0\), we get (89). This ends the proof. □

Lemma 7

Under the hypotheses of Theorem  3, if

$$ a,b\in I,\qquad a< b, \quad\textit{and}\quad \phi^{{\prime\prime\prime }} ( x )\geqslant0, \quad\forall x\in J, $$
(94)

or

$$ a,b\in I,\qquad a>b,\quad \textit{and} \quad\phi^{{\prime\prime\prime }} ( x )\leqslant0, \quad\forall x\in J, $$
(95)

then we have

$$ \frac{{\partial H ( {a,b} ) }}{{\partial b}} \geqslant \bigl[ {\varphi ( b ) -w} \bigr] \phi^{{\prime\prime}} ( w ) {H_{\ast}} ( {a,b} ) , $$
(96)

where \(H ( {a,b} )\), \({H_{\ast}} ( {a,b} )\) and w are defined by (81), (88) and (80), respectively.

Proof

First, we assume that (94) holds. Then (92) holds by the proof of Lemma 6. Now we prove that inequality (96) holds as follows.

From (94), we know that the function \(\phi^{\prime\prime}\) is increasing, hence from (92) we get

$$ \phi^{\prime\prime} ( w )\leqslant\phi^{\prime\prime } ( t )\leqslant \phi^{\prime\prime} \bigl( \varphi(b) \bigr) \quad\mbox{for } w\leqslant t\leqslant \varphi(b). $$
(97)

By (92) and (97), we have

$$ \int_{w}^{\varphi ( b ) }{\phi^{\prime\prime} ( t ) } \,\mathrm{d}t\geqslant\int_{w}^{\varphi ( b ) }{ \phi^{{\prime \prime}} ( w ) }\,\mathrm{d}t= \bigl( {\varphi ( b ) -w} \bigr) \phi ^{\prime\prime } ( w )>0. $$
(98)

From \({\varphi^{{\prime}}}(t)>0\), \(\forall t\in I\), and \(a,b\in I\), \(a< b\), we know that

$$ {\varphi^{{\prime}} ( b ) \int_{a}^{b}p}>0. $$
(99)

By (98), (99) and Lemma 5, we have

$$\begin{aligned} \frac{{\partial H ( {a,b} ) }}{{\partial b}}& = \biggl( {\varphi ^{{\prime}} ( b ) \int _{a}^{b}p} \biggr) \int_{w}^{\varphi ( b ) }{ \phi^{{\prime\prime}} ( t ) }\,\mathrm{d}t-p ( b ) \phi^{{\prime\prime}} ( w ) { \bigl( {\varphi ( b ) -w} \bigr) ^{2}} \\ & \geqslant \biggl( {\varphi^{{\prime}} ( b ) \int_{a}^{b}p} \biggr) \bigl( {\varphi ( b ) -w} \bigr) \phi^{{\prime\prime}} ( w ) -p ( b ) \phi^{{\prime\prime}} ( w ) { \bigl( {\varphi ( b ) -w} \bigr) ^{2}} \\ &= \bigl[ {\varphi ( b ) -w} \bigr] \phi^{{\prime\prime}} ( w ) {H_{\ast}} ( {a,b} ), \end{aligned}$$

that is to say, inequality (96) holds.

Next, we assume that (95) holds. Then (93) holds by the proof of Lemma 6. Now we prove that inequality (96) also holds as follows.

From (95) we know that the function \(\phi^{\prime\prime}\) is decreasing, hence from (93) we get

$$ \phi^{\prime\prime} ( w )\leqslant\phi^{\prime\prime } ( t )\leqslant \phi^{\prime\prime} \bigl( \varphi(b) \bigr) \quad\mbox{for } \varphi(b)\leqslant t \leqslant w. $$
(100)

By (93) and (100), we have

$$ \int_{\varphi ( b )}^{w }{\phi^{\prime\prime} ( t ) } \,\mathrm{d}t\geqslant\int_{\varphi ( b )}^{w }{ \phi^{{\prime \prime}} ( w ) }\,\mathrm{d}t= \bigl( {w-\varphi ( b )} \bigr) \phi ^{\prime\prime } ( w )>0. $$
(101)

From \({\varphi^{{\prime}}}(t)>0\), \(\forall t\in I\), and \(a,b\in I\), \(a>b\), we know that

$$ {\varphi^{{\prime}} ( b ) \int_{b}^{a}p}>0. $$
(102)

By (101), (102) and Lemma 5, we have

$$\begin{aligned} \frac{{\partial H ( {a,b} )}}{{\partial b}} & = \biggl( {\varphi ^{{\prime}} ( b )\int _{a}^{b} p } \biggr)\int_{w}^{\varphi ( b )} {\phi^{{\prime\prime}} ( t )} \,\mathrm{d}t - p ( b )\phi^{{\prime\prime}} ( w ){ \bigl( {\varphi ( b ) - w} \bigr)^{2}} \\ & = \biggl( {\varphi^{{\prime}} ( b )\int_{b}^{a} p } \biggr)\int_{\varphi ( b )}^{w} {\phi^{\prime\prime} ( t )} \,\mathrm{d}t - p ( b )\phi^{{\prime\prime}} ( w ){ \bigl( {\varphi ( b ) - w} \bigr)^{2}} \\ & \geqslant \biggl( {\varphi^{{\prime}} ( b )\int_{b}^{a} p } \biggr) \bigl( {w - \varphi ( b )} \bigr)\phi^{\prime\prime} ( w ) - p ( b )\phi^{{\prime\prime}} ( w ){ \bigl( {\varphi ( b ) - w} \bigr)^{2}} \\ & = \biggl( {\varphi^{{\prime}} ( b )\int_{a}^{b} p } \biggr) \bigl( {\varphi ( b )-w} \bigr)\phi^{\prime\prime} ( w ) - p ( b )\phi^{{\prime\prime}} ( w ){ \bigl( {\varphi ( b ) - w} \bigr)^{2}} \\ &= \bigl[ {\varphi ( b ) -w} \bigr] \phi^{{\prime\prime}} ( w ) {H_{\ast}} ( {a,b} ). \end{aligned}$$

That is to say, inequality (96) still holds. This ends the proof of Lemma 7. □

The proof of Theorem 3 is as follows.

Proof

We first prove assertion (I). By Proposition 1, we just need to prove that

$$ \frac{\partial\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} )}{{\partial b}}>0,\quad \forall [ {a,b} ] \in\overline{I}, a< b. $$
(103)

Suppose that

$$ x\in J\quad\Rightarrow\quad\phi^{{\prime\prime\prime}} ( x ) \geqslant0, \quad t\in I\quad\Rightarrow\quad \varphi^{{\prime\prime}} ( t ) \geqslant0,\quad a,b\in I, a< b. $$
(104)

We prove (103) as follows.

By (104), we have

$$ \varphi^{{\prime\prime}} ( b ) \int_{a}^{b}p \geqslant0. $$
(105)

According to Lemma 6 and (105), we have

$$ \frac{{\partial{H_{\ast}} ( {a,b} ) }}{{\partial b}} >\varphi ^{{\prime\prime}} ( b ) \int_{a}^{b}p \geqslant0\quad\Rightarrow\quad \frac{{\partial{H_{\ast}} ( {a,b} ) }}{{\partial b}}>0. $$
(106)

According to \(b>a\), (106) and

$$ \lim_{b\rightarrow a}{w}=\varphi(a), $$
(107)

we get

$$ {H_{\ast}} ( {a,b} ) >{H_{\ast}} ( {a,a} ) =0. $$
(108)

From \(\phi^{{\prime\prime}} ( x ) >0\), \(\forall x\in J \) and (92), we get

$$ \bigl[ {\varphi ( b ) - w} \bigr]\phi^{{\prime\prime }} ( w ) >0. $$
(109)

Combining with (108), (109) and Lemma 7, we get

$$ \frac{{\partial H ( {a,b} )}}{{\partial b}} \geqslant \bigl[ {\varphi ( b ) -w} \bigr] \phi^{{\prime\prime}} ( w ) {H_{\ast}} ( {a,b} ) > 0\quad\Rightarrow\quad \frac{{\partial H ( {a,b} )}}{{\partial b}} >0. $$
(110)

From (107), (110) and \(a< b\), we get

$$ H ( {a,b} ) > H ( {a,a} ) = 0. $$
(111)

Combining with (111) and Lemma 4, we get

$$ \frac{\partial\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} )}{{\partial b}} \sim H ( {a,b} )> 0\quad\Rightarrow\quad \frac{\partial\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} )}{{\partial b}}>0. $$

Hence (103) holds. Assertion (I) is proved.

Next, we prove assertion (II) as follows. By Proposition 1, we just need to prove that

$$ \frac{\partial\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} )}{{\partial b}}< 0, \quad\forall [ {a,b} ] \in\overline{I}, a>b. $$
(112)

Suppose that

$$ x\in J\quad\Rightarrow\quad\phi^{{\prime\prime\prime}} ( x ) \leqslant0, \quad t\in I\quad\Rightarrow\quad \varphi^{{\prime\prime}} ( t ) \leqslant0,\quad a,b\in I, a>b. $$
(113)

We prove (112) as follows:

$$\begin{aligned} a>b \quad&\Rightarrow\quad\frac{{\partial{H_{*}} ( {a,b} )}}{{\partial b}}>\varphi^{{\prime\prime}} ( b ) \int _{a}^{b}p \geqslant 0 \quad\Rightarrow\quad {H_{*}} ( {a,b} ) < {H_{*}} ( {a,a} ) = 0\\ \quad&\Rightarrow\quad\frac{{\partial H ( {a,b} )}}{{\partial b}} \geqslant \bigl( {\varphi ( b ) - w} \bigr) \phi^{{\prime\prime}} ( w ){H_{*}} ( {a,b} ) > 0 \\ \quad&\Rightarrow\quad H ( {a,b} ) < H ( {a,a} ) = 0 \quad\Rightarrow\quad\frac{\partial\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} )}{{\partial b}}\sim H ( {a,b} ) < 0. \end{aligned}$$

Hence (112) holds. Assertion (II) is also proved.

We remark here that the proof of Theorem 3 can be rewritten as

$$\begin{aligned} \frac{\partial\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} )}{{\partial b}} & \sim H ( {a,b} )\sim (b-a ) \frac{{\partial H ( {a,b} )}}{{\partial b}} \sim (b-a ) \bigl[ {\varphi ( b ) -w} \bigr]{H_{\ast}} ( {a,b} )\\ & \sim{H_{\ast}} ( {a,b} ) \sim (b-a ) \frac {{\partial{H_{\ast}} ( {a,b} ) }}{{\partial b}} \sim b-a \left \{ \textstyle\begin{array}{@{}l@{\quad}l} >0,& a< b, \\ < 0, & a>b. \end{array}\displaystyle \right . \end{aligned}$$

The proof of Theorem 3 is completed. □

A large number of results from analysis and inequality theory are used in the proof of Theorem 3. Based on these results, we obtained Theorem 1 and Lemmas 4-7, and according to Theorem 1 and Lemmas 4-7, we obtained the proof of Theorem 3. Therefore, the proof of Theorem 3 is also both interesting and very difficult.

Remark 2

Let \(D\in\mathbb{R}\) be a log concave random variable. In (103) and (112), if we set \(\phi(x)\equiv x^{2}\), \(\varphi(t)\equiv t\), \(I=\mathbb{R}\), then we get the inequalities

$$ \frac{\partial\operatorname{Var}D_{ [ d,\infty ] }}{\partial d}=\frac{\partial}{\partial d} \biggl[ {\frac{{\int_{d}^{\infty}{{p(t)}t^{2} \,\mathrm{d}t}}}{{\int_{d}^{\infty}{{p(t)}\,\mathrm{d}t}}}- \biggl( { \frac{{\int_{d}^{\infty}{{p(t)}t\,\mathrm{d}t}}}{{\int_{d}^{\infty}{{p(t)}\,\mathrm{d}t}}}} \biggr) ^{2}} \biggr] \leqslant0,\quad \forall d\in \mathbb{R}, $$
(114)

and

$$ \frac{\partial\operatorname{Var}D_{ [ -\infty,d ] }}{\partial d}=\frac{\partial}{\partial d} \biggl[ {\frac{{\int_{-\infty }^{d}{{p(t)}t^{2}\,\mathrm{d}t}}}{{\int_{-\infty}^{d}{{p(t)}\,\mathrm{d}t}}}- \biggl( { \frac{{\int_{-\infty }^{d}{{p(t)}t\,\mathrm{d}t}}}{{\int_{-\infty}^{d}{{p(t)}\,\mathrm{d}t}}}} \biggr) ^{2}} \biggr] \geqslant 0, \quad\forall d\in \mathbb{R}, $$
(115)

where \(p:\mathbb{R}\rightarrow(0,\infty)\) is a differentiable log-concave function. In other words, we have generalized the inequalities in (21) to the case where D is a truncated random variable.
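The sketch below (not part of the paper) checks (114) and (115) numerically for the log-concave standard normal density; the integration window \([-10,10]\) and the grid sizes are assumptions used to approximate the improper integrals.

```python
import numpy as np

p = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)   # log-concave density of D

def var_on(lo, hi, n=40_000):
    """Variance of D truncated to [lo, hi], via midpoint Riemann sums."""
    t = np.linspace(lo, hi, n, endpoint=False) + (hi - lo) / (2 * n)
    w = p(t) * (hi - lo) / n
    m = np.dot(w, t) / w.sum()
    return np.dot(w, t**2) / w.sum() - m**2

ds = np.linspace(-2.0, 2.0, 9)
upper_tail = [var_on(d, 10.0) for d in ds]              # Var D_[d, +infinity)
lower_tail = [var_on(-10.0, d) for d in ds]             # Var D_(-infinity, d]

assert all(np.diff(upper_tail) <= 1e-9)                 # non-increasing in d, as in (114)
assert all(np.diff(lower_tail) >= -1e-9)                # non-decreasing in d, as in (115)
print(upper_tail[0], upper_tail[-1], lower_tail[0], lower_tail[-1])
```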

In Section 4.6, we will demonstrate the applications of Theorem 3.

4.6 Corollaries of Theorem 3

The content of Theorem 3 is very rich; it implies the following four interesting corollaries.

Corollary 3

Let X be a continuous random variable and its probability density function \({p}:I\rightarrow ( {0,\infty} ) \) be a differentiable log-concave function, and let the twice differentiable function \(\varphi: I\rightarrow J \) satisfy the following conditions:

$$I,J \subset(0,\infty),\qquad {\varphi^{{\prime}}} ( t ) >0,\qquad {\varphi^{{\prime\prime}}} ( t ) \geqslant0, \quad\forall t\in I, $$

where I and J are intervals. Then the interval function \(\operatorname{JVar} _{\iint\varphi^{-1} }\varphi ( {{X _{ [ {a,b} ] }}} ) \) (\([ {a,b} ]\in\overline{I} \)) is right increasing.

Proof

Set \(\phi^{\prime\prime}=\varphi^{-1}\), where \(\varphi ^{-1}\) is the inverse function of the function φ. Since

$$I,J \subset(0,\infty),\qquad {\varphi^{{\prime}}} ( t ) >0,\qquad {\varphi^{{\prime\prime}}} ( t ) \geqslant0,\quad \forall t\in I, $$

we have

$$\phi^{\prime\prime}(x)>0, \qquad\phi^{\prime\prime\prime}(x)>0, \quad\forall x\in J. $$

By assertion (I) in Theorem 3, the interval function

$$\operatorname{JVar} _{\iint\varphi^{-1} }\varphi ( {{X _{ [ {a,b} ] }}} )\equiv{ \operatorname{JVar} _{\phi}\varphi} ( {{X _{ [ {a,b} ] }}} ) \quad\bigl( [ {a,b} ]\in\overline{I} \bigr) $$

is right increasing. This ends the proof. □

In Theorem 3, if we set \(\phi(x)\equiv x^{2}\) and \(\varphi(t)\equiv t\), then we get the following.

Corollary 4

Let X be a continuous random variable and its probability density function \({p}:I\rightarrow ( {0,\infty} ) \) be differentiable and log-concave. Then the interval function \(\operatorname{Var}{{X _{ [ {a,b} ] }}} \) (\([ {a,b} ]\in \overline{I} \)) is increasing.

In Section 5.2, we will demonstrate the applications of Corollary 4 in the hierarchical teaching model.

Corollary 5

Let X be a continuous random variable and its probability density function \({p}:I\rightarrow ( {0,\infty} ) \) be a differentiable function, and let the twice differentiable function \(\varphi:I\rightarrow J \) satisfy the following condition:

$${\varphi^{{\prime}}} ( t ) >0, \quad\forall t\in I, $$

where I is an interval. If the function

$$p \bullet\varphi^{-1} \bigl(\varphi^{-1} \bigr)^{\prime}\triangleq p\bigl(\varphi ^{-1}\bigr) \bigl( \varphi^{-1} \bigr)^{\prime} $$

is a log-concave function, then the interval function \(\operatorname{Var}\varphi ({{X _{ [ {a,b} ] }}} )\) is increasing, where \(\varphi^{-1}\) is the inverse function of φ, and \({{X _{ [ {a,b} ] }}}\subseteq X\).

Proof

To be more precise, we write

$$\operatorname{Var}^{(p_{I})}\varphi ({{X _{ [ {a,b} ] }}} )\triangleq \operatorname{Var}\varphi ({{X _{ [ {a,b} ] }}} ) \quad\mbox{and}\quad p_{I} \triangleq p, \operatorname{Var}^{(p_{I})}{{X _{ [ {a,b} ] }}}\triangleq \operatorname{Var} {{X _{ [ {a,b} ] }}}. $$

Without loss of generality, we may assume \(a< b\). Set

$$\begin{aligned}& x=\varphi(t)\quad\Leftrightarrow\quad t=\varphi^{-1}(x),\qquad p_{J}^{*}=p_{I} \bullet \varphi^{-1} \bigl(\varphi^{-1} \bigr)^{\prime},\quad x \in J=\varphi (I),\\& a^{*}=\varphi(a)< \varphi(b)=b^{*}, \end{aligned}$$

then

$$p_{J}^{*}(x)>0, \quad\int_{J}{p_{J}^{*}}= \int_{I}{p_{I}}=1. $$

Hence \(p_{J}^{*}\) is a probability density function of a random variable on J. Since

$$\begin{aligned} \operatorname{Var}^{(p_{I})}\varphi ({{X _{ [ {a,b} ] }}} )&= \frac{\int_{a}^{b}{p_{I}(t)\varphi^{2}(t)\,\mathrm{d}t}}{\int_{a}^{b}{p_{I}(t)\,\mathrm{d}t}} - \biggl(\frac{\int_{a}^{b}{p_{I}(t)\varphi(t)\,\mathrm{d}t}}{ \int_{a}^{b}{p_{I}(t)\,\mathrm{d}t}} \biggr)^{2} \\ & =\frac{\int_{a^{*}}^{b^{*}}{p_{J}^{*}(x)\varphi^{2}(\varphi ^{-1}(x))\,\mathrm{d}x}}{\int_{a^{*}}^{b^{*}}{p_{J}^{*}(x)\,\mathrm{d}x}} - \biggl(\frac{\int_{a^{*}}^{b^{*}}{p_{J}^{*}(x)\varphi(\varphi ^{-1}(x))\,\mathrm{d}x}}{ \int_{a^{*}}^{b^{*}}{p_{J}^{*}(x)\,\mathrm{d}x}} \biggr)^{2} \\ & =\frac{\int_{a^{*}}^{b^{*}}{p_{J}^{*}(x)x^{2}\,\mathrm{d}x}}{\int_{a^{*}}^{b^{*}}{p_{J}^{*}(x)\,\mathrm{d}x}} - \biggl(\frac{\int_{a^{*}}^{b^{*}}{p_{J}^{*}(x)x\,\mathrm{d}x}}{ \int_{a^{*}}^{b^{*}}{p_{J}^{*}(x)\,\mathrm{d}x}} \biggr)^{2} \\ &=\operatorname{Var}^{(p_{J}^{*})}{{X _{ [ {a^{*},b^{*}} ] }^{*}}}, \end{aligned}$$

and \(p_{J}^{*}=p_{I} \bullet\varphi^{-1} (\varphi^{-1} )^{\prime}\) is a differentiable log-concave function, by Corollary 4, the interval function \(\operatorname{Var}^{(p_{J}^{*})}{{X _{ [ {a^{*},b^{*}} ] }^{*}}} \) (\([ {a^{*},b^{*}} ]\in\overline{J} \)) is increasing, i.e.,

$$\bigl[ {a^{*},b^{*}} \bigr]\subset \bigl[ {c^{*},d^{*}} \bigr]\subseteq J\quad\Rightarrow\quad \operatorname{Var}^{(p_{J}^{*})}{{X _{ [ {a^{*},b^{*}} ] }^{*}}}< \operatorname{Var}^{(p_{J}^{*})}{{X _{ [ {c^{*},d^{*}} ] }^{*}}}. $$

Since \({\varphi^{{\prime}}} ( t ) >0\), \(\forall t\in I\), and

$$\bigl[ {a^{*},b^{*}} \bigr]\subset \bigl[ {c^{*},d^{*}} \bigr]\subseteq J\quad\Leftrightarrow\quad [ {a,b} ] \subset [ {c,d} ]\subseteq I,\qquad \operatorname{Var}^{(p_{I})}\varphi ({{X _{ [ {a,b} ] }}} )=\operatorname{Var}^{(p_{J}^{*})}{{X _{ [ {a^{*},b^{*}} ] }^{*}}}, $$

we have

$$[ {a,b} ]\subset [ {c,d} ]\subseteq I\quad\Rightarrow \quad \operatorname{Var}^{(p_{I})}\varphi ({{X _{ [ {a,b} ] }}} )< \operatorname{Var}^{(p_{I})}\varphi ({{X _{ [ {c,d} ] }}} ). $$

That is to say, the interval function \(\operatorname{Var}\varphi ({{X _{ [ {a,b} ] }}} ) \) (\([ {a,b} ]\in\overline {I} \)) is increasing. The proof of Corollary 5 is completed. □

In Section 5.3, we will demonstrate the applications of Corollary 5 in the generalized traditional teaching model.

In Theorem 3, if I is an n-dimensional interval, then we have the following result.

Corollary 6

Let the probability density function \({p_{j}}:I_{j}\rightarrow ( {0,\infty} ) \) of the random variable \(X _{j}\) be a differentiable log-concave function, and let \(\varphi_{j} :I_{j}\rightarrow ( {0,\infty} ) \) be twice differentiable, which satisfy the following conditions:

$${\varphi_{j} ^{\prime}} ( t_{j} ) >0,\qquad { \varphi_{j} ^{\prime \prime}} ( t_{j} ) \geqslant0,\quad \forall t_{j}\in I_{j}, $$

where \(1\leqslant j\leqslant n\), \(n\geqslant2\), and let

$$\varphi: I\rightarrow(0,\infty), \qquad\varphi(t)\triangleq\prod _{j=1}^{n}\varphi_{j} (t_{j} ), $$

where \(I=I_{1}\times\cdots\times I_{n}\), \(t= (t_{1},\ldots ,t_{n} )\). If \(\gamma\geqslant2\), and \({X _{1}}, \ldots,{X _{n}} \) are independent random variables, then the interval function

$$ \operatorname{Var}^{[\gamma]}\varphi ( {{X _{ [ {\mathbf{a},\mathbf {b}} ] }}} ) \triangleq \left \{ { \textstyle\begin{array}{@{}l@{\quad}l} {\frac{2}{\gamma ({\gamma-1} )} [\frac{{\int_{[\mathbf {a},\mathbf{b}]} {{p}\varphi^{\gamma}} }}{{\int_{[\mathbf{a},\mathbf {b}]} {{p}} }} - ( {\frac{{\int_{[\mathbf{a},\mathbf{b}]} {{p}\varphi} }}{{\int_{[\mathbf {a},\mathbf{b}]}{{p}} }}} )^{\gamma} ],} & {\mathbf{a} \ne\mathbf{b}}, \\ {0,} & {\mathbf{a} = \mathbf{b}}, \end{array}\displaystyle } \right . \quad \forall [ { \mathbf{a},\mathbf{b}} ] \in\overline{I}, $$
(116)

is right increasing, where \(p:I\rightarrow(0,\infty)\) is the probability density function of the n-dimensional random variable \(X\triangleq(X_{1},\ldots,X_{n})\), and \(X_{[\mathbf{a},\mathbf {b}]}\subseteq X\).

Proof

Let

$$a_{j}< b_{j},\quad 1\leqslant j\leqslant n,\qquad [ {\mathbf{a}, \mathbf {b}} ] \subset \bigl[ {\mathbf{a},\mathbf{b}^{\prime}} \bigr] \subseteq I, \qquad\gamma\geqslant2. $$

We just need to prove that

$$ \operatorname{Var}^{[\gamma]}\varphi ( {{X _{ [ {\mathbf{a},\mathbf {b}} ] }}} ) < \operatorname{Var}^{[\gamma]}\varphi ( {{X _{ [ {\mathbf {a},\mathbf{b}^{\prime}} ] }}} ). $$
(117)

Set

$$A_{j}\triangleq\frac{{\int_{a_{j}}^{b_{j}} {{p_{j}}\varphi_{j}^{\gamma } } }}{{\int_{a_{j}}^{b_{j}} {{p_{j}}} }},\qquad B_{j}\triangleq \biggl(\frac {{\int_{a_{j}}^{b_{j}} {{p_{j}}\varphi_{j}} }}{{\int_{a_{j}}^{b_{j}} {{p_{j}}} }} \biggr)^{\gamma},\qquad A_{j}^{\prime} \triangleq\frac{{\int_{a_{j}}^{b_{j}^{\prime}} {{p_{j}}\varphi_{j}^{\gamma} } }}{{\int_{a_{j}}^{b_{j}^{\prime}} {{p_{j}}} }},\qquad B_{j}^{\prime}\triangleq \biggl( \frac{{\int_{a_{j}}^{b_{j}^{\prime}} {{p_{j}}\varphi_{j}} }}{{\int_{a_{j}}^{b_{j}^{\prime}} {{p_{j}}} }} \biggr)^{\gamma}. $$

According to the facts that

$$\begin{aligned}& {\phi_{\gamma} ^{\prime\prime}} ( x )=x^{\gamma-2}>0,\qquad { \phi_{\gamma} ^{\prime\prime\prime}} ( x )= (\gamma-2 )x^{\gamma-3}\geqslant0,\quad \forall x\in(0,\infty), \\& {\varphi_{j} ^{\prime}} ( t_{j} ) >0,\qquad { \varphi_{j} ^{\prime \prime}} ( t_{j} ) \geqslant0,\quad \forall t_{j}\in I_{j}, j=1,\ldots,n, \end{aligned}$$

together with Theorem 3 and Example 1, we have

$$\begin{aligned}& 0< A_{j}-B_{j}\leqslant A_{j}^{\prime}-B_{j}^{\prime},\quad j=1, \ldots,n, \end{aligned}$$
(118)
$$\begin{aligned}& 0< B_{j}\leqslant B_{j}^{\prime}, \quad j=1,\ldots,n, \end{aligned}$$
(119)

and there is some \(j\), \(1\leqslant j\leqslant n\), for which equality fails in (118) and (119), where the function \({\phi_{\gamma}}\) is defined by (59).

Since \({X _{1}}, \ldots,{X _{n}}\) are independent random variables, we have

$$\begin{aligned}& p(t)=\prod_{j=1}^{n}p_{j}(t_{j}),\qquad \int_{[\mathbf{a},\mathbf{b}]} {{p}\varphi^{\gamma}}\equiv \prod _{j=1}^{n}{\int_{[a_{j},b_{j}]} {{p_{j}}\varphi_{j} ^{\gamma}}}, \\& \operatorname{Var}^{[\gamma]}\varphi ( {{X _{ [ {\mathbf{a},\mathbf{b}} ] }}} ) \equiv\frac{2}{\gamma ({\gamma-1} )} \Biggl(\prod_{j=1}^{n}A_{j}- \prod_{j=1}^{n}B_{j} \Biggr). \end{aligned}$$

Hence inequality (117) can be rewritten as

$$ 0< \prod_{j=1}^{n}A_{j}-\prod _{j=1}^{n}B_{j} < \prod _{j=1}^{n}A_{j}^{\prime}- \prod_{j=1}^{n}B_{j}^{\prime}. $$
(120)

We prove inequalities (120) by means of the mathematical induction as follows.

(I) Let \(n=2\). From (118) and (119), we get

$$\begin{aligned}& 0< ( A_{1}-B_{1} ) ( A_{2}-B_{2} ) \leqslant \bigl( A_{1}^{\prime}-B_{1}^{\prime} \bigr) \bigl( A_{2}^{\prime}-B_{2}^{\prime} \bigr), \end{aligned}$$
(121)
$$\begin{aligned}& 0< B_{1} ( A_{2}-B_{2} )\leqslant B_{1}^{\prime} \bigl( A_{2}^{\prime}-B_{2}^{\prime} \bigr), \end{aligned}$$
(122)
$$\begin{aligned}& 0< B_{2} ( A_{1}-B_{1} )\leqslant B_{2}^{\prime} \bigl( A_{1}^{\prime}-B_{1}^{\prime} \bigr). \end{aligned}$$
(123)

From

$$\begin{aligned}& A_{1}A_{2}-B_{1}B_{2}= ( A_{1}-B_{1} ) ( A_{2}-B_{2} )+ B_{1} ( A_{2}-B_{2} )+B_{2} ( A_{1}-B_{1} ), \\& A_{1}^{\prime}A_{2}^{\prime}-B_{1}^{\prime}B_{2}^{\prime}= \bigl( A_{1}^{\prime}-B_{1}^{\prime} \bigr) \bigl( A_{2}^{\prime}-B_{2}^{\prime} \bigr)+ B_{1}^{\prime} \bigl( A_{2}^{\prime}-B_{2}^{\prime} \bigr)+B_{2}^{\prime} \bigl( A_{1}^{\prime}-B_{1}^{\prime} \bigr), \end{aligned}$$

and (121)-(123), we get

$$ 0< A_{1}A_{2}-B_{1}B_{2}\leqslant A_{1}^{\prime}A_{2}^{\prime}-B_{1}^{\prime}B_{2}^{\prime}. $$
(124)

Since there is some \(j\), \(1\leqslant j\leqslant n\), for which equality fails in (118) and (119), equality in (124) cannot hold. That is to say, inequalities (120) hold when \(n=2\).

(II) Suppose that

$$ 0< \prod_{j=1}^{n-1}A_{j}-\prod _{j=1}^{n-1}B_{j} \leqslant\prod _{j=1}^{n-1}A_{j}^{\prime}- \prod_{j=1}^{n-1}B_{j}^{\prime},\quad n \geqslant3. $$
(125)

By (118), (119), (125),

$$0< \prod_{j=1}^{n-1}B_{j} \leqslant\prod_{j=1}^{n-1}B_{j}^{\prime} , $$

and the proof of the case \(n=2\), we have

$$0< A_{n}\prod_{j=1}^{n-1}A_{j}-B_{n} \prod_{j=1}^{n-1}B_{j} \leqslant A_{n}^{\prime}\prod_{j=1}^{n-1}A_{j}^{\prime}-B_{n}^{\prime} \prod_{j=1}^{n-1}B_{j}^{\prime}, $$

i.e.,

$$ 0< \prod_{j=1}^{n}A_{j}-\prod _{j=1}^{n}B_{j}\leqslant\prod _{j=1}^{n}A_{j}^{\prime}- \prod_{j=1}^{n}B_{j}^{\prime}. $$
(126)

Since there is some \(j\), \(1\leqslant j\leqslant n\), for which equality fails in (118) and (119), equality in (126) cannot hold. That is to say, inequalities (120) hold. The proof of Corollary 6 is completed. □

As an application of Corollary 6, we have the following example.

Example 3

In Corollary 6, if we set

$$I_{j}= ({\alpha_{j},\beta_{j}} ),\qquad \varphi_{j} (t_{j} )=\int_{\alpha_{j}}^{t_{j}}{p_{j} }, \quad j=1,\ldots,n, $$

then

$$\varphi(t)\triangleq\prod_{j=1}^{n} \varphi_{j} (t_{j} )=P (\alpha_{1}< X_{1} \leqslant t_{1},\ldots,\alpha _{n}< X_{n} \leqslant t_{n} ), $$

which is the probability of the random event

$$\alpha_{1}< X_{1}\leqslant t_{1},\qquad\ldots,\qquad \alpha_{n}< X_{n}\leqslant t_{n}, $$

and \(\varphi: I\rightarrow[0,1]\) is the probability distribution function of X, where \({X _{1}}, \ldots,{X _{n}} \) are independent random variables. If \({p_{j}}:I_{j}\rightarrow ( {0,\infty} ) \) is differentiable, increasing and log-concave, then

$${\varphi_{j} ^{\prime}} ( t_{j} ) = {p_{j} } ( t_{j} )>0,\qquad {\varphi_{j} ^{\prime\prime}} ( t_{j} ) ={p_{j} ^{\prime }} ( t_{j} )\geqslant0,\quad \forall t_{j}\in I_{j}, $$

where \(1\leqslant j\leqslant n\). By Corollary 6, the interval function

$$\operatorname{Var}^{[\gamma]}\varphi ( {{X _{ [ {\mathbf{a},\mathbf {b}} ] }}} ) \quad\bigl([ \mathbf{a},\mathbf{b}]\in\overline {I} \bigr) $$

is right increasing.

5 Applications in higher education

5.1 k-Normal distribution

The normal distribution [29-32] is regarded as the most prominent probability distribution in probability and statistics. To facilitate the applications in Sections 5.2 and 5.3, we recall in this section the concept of the k-normal distribution as follows: If the probability density function of the random variable X is

$$ p ( {t;\mu,\sigma,k} ) \triangleq\frac{{{k^{1-{k^{-1}}}}}}{{ 2\Gamma ( {{k^{-1}}} ) \sigma}}\exp \biggl( { - \frac{{{{\vert {t-\mu} \vert }^{k}}}}{{k{\sigma^{k}}}}} \biggr) , $$
(127)

then we say that the random variable X follows a k-normal distribution [1], or X follows a generalized normal distribution [32, 33], denoted by \(X \sim{N_{k}} ( {\mu ,\sigma} ) \), where \(t\in\mathbb{R}\), the parameters \(\mu\in\mathbb{R}\), \(\sigma \in ( {0,\infty} )\), \(k\in ( {1,\infty} )\), and \(\Gamma ( s ) \) is the gamma function. The graph of the function \(p ( {t;0,1,k} )\) is depicted in Figure 3.

Figure 3. The graph of the function \(p( {t;0 ,1 ,k})\), where \(-4 \leqslant t \leqslant4\), \(1< k\leqslant3\).

Clearly, \(p ( {t;\mu,\sigma,2} )\) is just the density of the normal distribution \(N ( {\mu,\sigma} )\) with mean μ and standard deviation σ. Moreover, both \(p ( {t;\mu,\sigma,k} )\) and the probability distribution function

$$P ( {x;\mu,\sigma,k} )\triangleq\int_{-\infty}^{x}p ( {t;\mu,\sigma,k} )\,\mathrm{d}t $$

are log-concave functions by Proposition 3 and Example 2. By (127) and [1, 32], we easily get

$$ \begin{aligned} &\mathrm{E}X =\mu, \qquad\mathrm{E} {\vert {X -\mathrm{E}X }\vert ^{k}}={\sigma^{k}},\\ & \operatorname{Var} {X }= \frac{{{k^{2{k^{ - 1}}}} \Gamma ( {3{k^{ - 1}}} )}}{{\Gamma ( {{k^{ - 1}}} )}} {\sigma^{2}}\left \{ \textstyle\begin{array}{@{}l@{\quad}l} { > {\sigma^{2}},} & {1 < k < 2}, \\ { = {\sigma^{2}},} & {k = 2}, \\ { < {\sigma^{2}},} & {k > 2}, \end{array}\displaystyle \right . \end{aligned} $$
(128)

where μ, \(\sigma^{k}\) and σ are the mathematical expectation, the k-order absolute central moment and the k-order mean absolute central moment of the random variable X, respectively.

We remark here that there are close relationships between the k-normal distribution and the Weibull distribution [1].
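A quick numerical check of (127)-(128), not from the paper: we take the hypothetical values \(\mu=0\), \(\sigma=1\), \(k=3\) and approximate the improper integrals on \([-30,30]\); the computed normalisation, mean, k-order absolute central moment and variance should match (128).

```python
import numpy as np
from math import gamma

def p_k(t, mu, sigma, k):
    """The k-normal density (127)."""
    c = k ** (1 - 1.0 / k) / (2 * gamma(1.0 / k) * sigma)
    return c * np.exp(-np.abs(t - mu) ** k / (k * sigma ** k))

mu, sigma, k = 0.0, 1.0, 3.0
t = np.linspace(-30.0, 30.0, 400_000)
w = p_k(t, mu, sigma, k) * (t[1] - t[0])

print(w.sum())                                      # ~ 1 (normalisation)
print(np.dot(w, t))                                 # ~ mu
print(np.dot(w, np.abs(t - mu) ** k))               # ~ sigma^k
var_num = np.dot(w, (t - mu) ** 2)
var_128 = k ** (2.0 / k) * gamma(3.0 / k) / gamma(1.0 / k) * sigma ** 2
print(var_num, var_128)                             # agree; both < sigma^2 since k > 2
```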

5.2 Applications in the hierarchical teaching model

In the hierarchical teaching model or the traditional teaching model, the score of each student is treated as a random variable \({X _{I}}\in I= [ {0,1} ]\). By the central limit theorem [34], we may assume that \({X _{I}}\subseteq X \sim {N_{2}} ( {\mu,\sigma } )\), where μ is the average score of the students and σ is the standard deviation of the score. If the proportions of top and bottom students are insignificant, that is to say, the variance VarX of the random variable X is very small, then according to formulas (128) and Figure 3 we may assume that there is a real number \(k\in ( {2,\infty} ) \) such that \({X _{I}}\subseteq X \sim {N_{k}} ( {\mu,\sigma} ) \). Otherwise, we may assume that there is a real number \(k\in ( {1,2} ) \) such that \({X _{I}}\subseteq X \sim{N_{k}} ( {\mu,\sigma} ) \). Here \(\mu\in ( {0,1} ) \) is the average score of the students and σ is the k-order mean absolute central moment of the score. We can estimate the number k by means of a sampling procedure.

Based on the above analysis, the identity \({\phi_{\gamma}^{\prime\prime\prime }} ( x )= (\gamma-2 )x^{\gamma-3}\) (where the function \({\phi_{\gamma}}\) is defined by (59)), Theorem 3, Corollary 4 and formulas (128), we obtain the following proposition.

Proposition 6

In the hierarchical teaching model \(\operatorname{HTM} \{ {{a_{0}},\ldots,{a_{m}},{p_{I}}} \}\), assume that \({X _{I}}\subset X \sim{N_{k}} ( {\mu,\sigma} )\), \(k>1\). Then we have the following three assertions.

  1. (I)

    If \(\gamma\geqslant2\), \(0\leqslant i< j \leqslant{j^{\prime}}\leqslant m\), then we have the following inequality:

    $$ {\operatorname{Var}^{ [ \gamma ] }} {X _{ [ {{a_{i}},{a_{j}}} ] }}\leqslant{ \operatorname{Var}^{ [ \gamma ] }} {X _{ [ {{a_{i}},{a_{{j^{\prime}}}}} ] }}. $$
    (129)
  2. (II)

    If \(1<\gamma\leqslant2\), \(0\leqslant{i^{\prime}}\leqslant i< j \leqslant m\), then we have the following inequality:

    $$ {\operatorname{Var}^{ [ \gamma ] }} {X _{ [ {{a_{i}},{a_{j}}} ] }}\leqslant{ \operatorname{Var}^{ [ \gamma ] }} {X _{ [ {{a_{{i^{\prime}}}},{a_{j}}} ] }}. $$
    (130)
  3. (III)

    If \(0\leqslant{i^{\prime}}\leqslant i< j \leqslant{j^{\prime}}\leqslant m\), then we have the following inequalities:

    $$ \operatorname{Var} {X _{ [ {{a_{i}},{a_{j}}} ] }}\leqslant \operatorname{Var} {X _{ [ {{a_{{i^{\prime}}}},{a_{{j^{\prime}}}}} ] }}\leqslant \operatorname{Var} {X _{I}}\leqslant \operatorname{Var} {X }=\frac{{{k^{2{k^{ - 1}}}} \Gamma ( {3{k^{ - 1}}} )}}{{\Gamma ( {{k^{ - 1}}} )}} {\sigma^{2}}. $$
    (131)

Remark 3

According to Proposition 6, we know that the \(\operatorname{HTM} \{ {{a_{0}},\ldots,{a_{m}},{p_{I}}} \}\) is increasing under the hypotheses

$${X _{I}}\subset X \sim{N_{k}} ( {\mu,\sigma} ),\quad k>1. $$

Therefore, we may conclude that the hierarchical teaching model is ‘normally’ better than the traditional teaching model by the central limit theorem and Proposition 6.
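The following sketch (not from the paper) illustrates the chain (131) for a hypothetical k-normal score distribution with \(\mu=0.6\), \(\sigma=0.2\), \(k=3\) and nested class intervals \([0.5,0.7]\subset[0.3,0.9]\subset[0,1]\); the truncated variances are computed by midpoint Riemann sums.

```python
import numpy as np
from math import gamma

mu, sigma, k = 0.6, 0.2, 3.0                       # hypothetical parameters
p = lambda t: (k ** (1 - 1.0 / k) / (2 * gamma(1.0 / k) * sigma)
               * np.exp(-np.abs(t - mu) ** k / (k * sigma ** k)))

def var_on(a, b, n=200_000):
    """Variance of X truncated to [a, b]."""
    t = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)
    w = p(t) * (b - a) / n
    m = np.dot(w, t) / w.sum()
    return np.dot(w, t**2) / w.sum() - m**2

inner = var_on(0.5, 0.7)                           # Var X_[a_i, a_j]
outer = var_on(0.3, 0.9)                           # Var X_[a_i', a_j']
whole = var_on(0.0, 1.0)                           # Var X_I
var_X = k ** (2.0 / k) * gamma(3.0 / k) / gamma(1.0 / k) * sigma ** 2   # Var X by (128)

print(inner, outer, whole, var_X)
assert inner <= outer <= whole <= var_X            # the chain (131)
```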

Remark 4

In [1], the authors proved that the probability density function of the k-normal distribution is quasi-log concave and showed that the generalized hierarchical teaching model is ‘normally’ better than the generalized traditional teaching model. That is to say, in the \(\operatorname{HTM} \{ {{-\infty}, \ldots, {\infty},p_{\mathbb{R}}} \}\), if \(X_{\mathbb{R}} \sim{N_{2}} ( {\mu,\sigma} )\), then we have the following inequalities:

$$ \operatorname{Var} {X _{ [ {{a_{i}},{a_{j}}} ] }}\leqslant \operatorname{Var} {X_{\mathbb{R}} }=\sigma^{2},\quad \forall {i,j}:0\leqslant i< j \leqslant m. $$
(132)

Therefore, Proposition 6 is a generalization of (132).

5.3 Applications in the generalized traditional teaching model

Next, we demonstrate the applications of Corollary 5 in the generalized traditional teaching model.

In the generalized traditional teaching model \(\operatorname{HTM} \{ {-\infty,\infty,{p_{\mathbb{R}}}} \} \), according to the central limit theorem, we may assume that the score X of each student follows a k-normal distribution, i.e., \(X \sim{N_{k}} ( {\mu,\sigma} )\), \(k>1\), where \(\mu>0\) is the average score of the students and \(\sigma>0 \) is the k-order mean absolute central moment of the score.

In the \(\operatorname{HTM} \{ {{-\infty},{\infty},{p_{\mathbb{R}}}} \}\), assume that

$$ X_{[a,b]}\subset X_{(\mu,\infty)}\subset X_{\mathbb{R}} \sim{N_{k}} ( {\mu,\sigma} ),\qquad \mathscr{A}(x)=c(x-\mu)^{k-1},\quad k>1, c>0, x>\mu>0, $$

then we have the following inequalities (see Theorem 5.3 in [1]):

$$ 0\leqslant\operatorname{Var}\mathscr{A} (X_{[a,b]} )\leqslant c\sigma^{k}\mathrm{E}\mathscr{A}^{\prime } (X_{[a,b]} ). $$
(133)

In the \(\operatorname{HTM} \{ {-\infty,\infty,{p_{\mathbb{R}}}} \} \), for the general allowance function (72), we have the following.

Proposition 7

In the \(\operatorname{HTM} \{ {-\infty,\infty ,{p_{\mathbb{R}}}} \} \), assume that the score X of each student follows a k-normal distribution, where \(k>1\). Then we have the following two assertions.

  1. (I)

    If \(0<\alpha\leqslant1\), then the interval function \(\operatorname{Var} \mathscr{A} (X _{ [ {a,b} ] } ) \) (\([ {a,b} ]\in\overline{ (\mu,\infty )} \)) is increasing.

  2. (II)

    If \(1<\alpha<k\), then the interval function \(\operatorname{Var} \mathscr{A} (X _{ [ {a,b} ] } ) \) (\([ {a,b} ]\in\overline{ [\mu^{*},\infty )} \)) is also increasing. Here

    $$\begin{aligned}& \mathscr{A} ( t ) \triangleq c{ ( {t-\mu} ) ^{\alpha}}, \qquad\operatorname{Var} \mathscr{A} (X _{ [ {a,b} ] } )\triangleq\frac{\int_{a}^{b}{p\mathscr{A}^{2}}}{\int_{a}^{b}{p}}- \biggl( \frac{\int_{a}^{b}{p\mathscr{A}}}{\int_{a}^{b}{p}} \biggr)^{2}, \\& \mu^{*}\triangleq \mu+\sigma \biggl[\frac{\alpha (\alpha-1 )}{k-\alpha} \biggr]^{\frac{1}{k}}. \end{aligned}$$

Proof

By (72), we have

$$\mathscr{A}^{\prime} ( t ) = c\alpha{ ( {t-\mu} ) ^{\alpha-1 }}> 0,\quad \forall t>\mu. $$

According to Corollary 5, we just need to prove that the function \(p_{J}^{*}\triangleq p\bullet\mathscr{A} ^{-1} (\mathscr{A} ^{-1} )^{\prime} \) is a differentiable log-concave function under the hypotheses of assertions (I) and (II).

By (72) and (73), we have

$$\begin{aligned} \log{p_{J}^{\ast}} ( s )&= \log p\bullet {{\mathscr{A} ^{ - 1}} ( s )} { \bigl( {{\mathscr{A} ^{ - 1}} ( s )} \bigr)'} \\ & =\log \biggl[ {\frac{{{k^{1-{k^{-1}}}}}}{{2\Gamma ( {{k^{-1}}} ) \sigma}}\exp \biggl( {-\frac{{{{\vert {{\mathscr {A} ^{-1}} ( s ) -\mu} \vert }^{k}}}}{{k{\sigma^{k}}}}} \biggr) \frac{\mathrm{d}}{{\mathrm{d}s}}{\mathscr{A} ^{-1}} ( s ) } \biggr] \\ & =\log{ \biggl[\frac{{{k^{1-{k^{-1}}}}}}{{2\Gamma ( {{k^{-1}}} ) \sigma}} \biggr]}- {\frac{1}{{k{\sigma^{k}}}}{{ \biggl( { \frac{s}{c}} \biggr) }^{\frac{k}{\alpha}}}+\log \biggl[ \frac{1}{\alpha c} \biggl(\frac {s}{c} \biggr)^{\frac{1-\alpha}{\alpha}} \biggr] } \\ & =\log{ \biggl[\frac{{{k^{1-{k^{-1}}}}}}{{2\Gamma ( {{k^{-1}}} ) \sigma}} \biggr]}- {\frac{1}{{k{\sigma^{k}c^{\frac {k}{\alpha}}}}}{{s }^{\frac{k}{\alpha}}}+\frac{1-\alpha}{\alpha}\log {s}-\log{ \bigl(\alpha c^{\frac{1}{\alpha}} \bigr)}}, \end{aligned}$$

and

$$\begin{aligned} \frac{\mathrm{d}^{2}}{\mathrm{d}s^{2}}\log{p_{J}^{\ast}} ( s )&= \frac{\mathrm{d}^{2}}{\mathrm{d}s^{2}} \biggl(- {\frac{1}{{k{\sigma ^{k}c^{\frac{k}{\alpha}}}}}{{s }^{\frac{k}{\alpha}}}+ \frac{1-\alpha }{\alpha}\log{s}} \biggr) =- {\frac{k-\alpha}{{{\alpha}^{2}{\sigma^{k}c^{\frac{k}{\alpha }}}}}{{s }^{\frac{k}{\alpha}-2}}-\frac{1-\alpha}{\alpha}s^{-2}} \\ &\sim- (k-\alpha ){{ \biggl(\frac{s}{c\sigma^{\alpha}} \biggr) }^{\frac{k}{\alpha}}-\alpha (1-\alpha )}. \end{aligned}$$

Hence

$$\begin{aligned} & \frac{\mathrm{d}^{2}}{\mathrm{d}s^{2}}\log{p_{J}^{\ast}} ( s )\leqslant0 \\ &\quad\Leftrightarrow\quad{-} (k-\alpha ){{ \biggl(\frac{s}{c\sigma^{\alpha }} \biggr) }^{\frac{k}{\alpha}}-\alpha (1-\alpha )}\leqslant0 \\ &\quad\Leftrightarrow\quad{-} (k-\alpha ){{ \biggl[\frac{c{ ( {t - \mu } )^{\alpha}}}{c\sigma^{\alpha}} \biggr] }^{\frac{k}{\alpha }}-\alpha (1-\alpha )}\leqslant0 \\ &\quad\Leftrightarrow \quad(k-\alpha ){{ \biggl(\frac{{ {t - \mu} }}{\sigma} \biggr) }^{k}+\alpha (1-\alpha )}\geqslant0 \\ &\quad\Leftrightarrow\quad 0< \alpha\leqslant1, \quad t\geqslant\mu \quad\mbox{or}\quad 1< \alpha < k,\quad t \geqslant\mu^{*}=\mu+\sigma \biggl[\frac{\alpha (\alpha-1 )}{k-\alpha} \biggr]^{\frac{1}{k}}. \end{aligned}$$

Therefore, the function \(p_{J}^{*}\triangleq p\bullet\mathscr{A} ^{-1} (\mathscr{A} ^{-1} )^{\prime} \) is a differentiable log-concave function under the hypotheses of assertions (I) and (II). This completes the proof of Proposition 7. □
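The sign analysis in the proof above can be checked directly. The sketch below (not from the paper) evaluates the second derivative of \(\log p_{J}^{*}(s)\) obtained in the proof for the hypothetical parameters \(k=3\), \(\alpha=2\), \(c=1\), \(\sigma=0.5\), \(\mu=0\) (so \(1<\alpha<k\), assertion (II)), and confirms that it is non-positive exactly for \(s\geqslant\mathscr{A}(\mu^{*})\), i.e., for \(t\geqslant\mu^{*}\).

```python
import numpy as np

k, alpha, c, sigma, mu = 3.0, 2.0, 1.0, 0.5, 0.0            # hypothetical parameters
mu_star = mu + sigma * (alpha * (alpha - 1.0) / (k - alpha)) ** (1.0 / k)

def d2_log_pJ(s):
    """Second derivative of log p_J^*(s), as computed in the proof of Proposition 7."""
    return (-(k - alpha) / (alpha**2 * sigma**k * c**(k / alpha)) * s ** (k / alpha - 2)
            - (1.0 - alpha) / alpha * s ** (-2.0))

s_star = c * (mu_star - mu) ** alpha                        # image of mu^* under the allowance function
s = np.linspace(0.2 * s_star, 3.0 * s_star, 2001)

assert np.all(d2_log_pJ(s[s >= s_star]) <= 1e-12)           # log-concave for t >= mu^*
assert np.any(d2_log_pJ(s[s < s_star]) > 0)                 # concavity fails below mu^*
print(mu_star, s_star)
```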

6 Conclusion

Variances and covariances are important concepts in the analysis of variance since they can be used as quantitative tools in mathematical models involving probability and statistics. The motivation of this paper is to extend the connotation of the analysis of variance and facilitate its applications in probability, statistics and higher education. In the applications, one of our main purposes is to study the hierarchical and the traditional teaching models from the angle of the analysis of variance, so as to decide on the superiority or the inferiority of the hierarchical teaching model and the traditional teaching model.

In this paper, we first introduce the relevant concepts and properties of the interval functions. Next, we study several characteristics of the log-concave function, and prove the interesting quasi-log concavity conjecture in [1]. Next, we generalize the traditional covariance and the variance of random variables and define ϕ-covariance, ϕ-variance, ϕ-Jensen variance, ϕ-Jensen covariance, integral variance and γ-order variance, and study the relationships among these ‘variances’, as well as study the monotonicity of the interval function \(\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} )\). Finally, we demonstrate the applications of our results in higher education. Based on the monotonicity of the interval function \({\operatorname{Var}^{ [ \gamma ] }}{X _{ [ {{a},{b}} ] }} \) (\([a,b]\in\overline{I} \)), we show that the hierarchical teaching model is ‘normally’ better than the traditional teaching model under the hypotheses that \({X _{I}}\subset X \sim{N_{k}} ( {\mu,\sigma} )\), \(k>1\). We also study the monotonicity of the interval function \(\operatorname{Var} \mathscr{A} (X _{ [ {a,b} ] } )\) involving an allowance function \(\mathscr{A}\). Theorems 1 and 2 are the main theoretical basis and Theorem 3 is one of main results of this paper.

A large number of results from algebra, functional analysis, probability, statistics and inequality theory are used in this paper. The proofs of our results are both interesting and difficult, and these results would be hard to obtain by means of the existing probability and statistics theories alone. Some of our proof methods can also be found in the references of this paper.

Based on the above analysis, we conclude that the theory of ϕ-Jensen variance is of great theoretical significance and application value in inequality theory, probability, statistics and higher education.

References

  1. Wen, JJ, Han, TY, Cheng, SS: Quasi-log concavity conjecture and its applications in statistics. J. Inequal. Appl. 2014, Article ID 339 (2014)


  2. Hawkins, GE, Brown, SD, Steyvers, M, Wagenmakers, EJ: Context effects in multi-alternative decision making: empirical data and a Bayesian model. Cogn. Sci. 36, 498-516 (2012)


  3. Yang, CF, Pu, YJ: Bayes analysis of hierarchical teaching. Math. Pract. Theory 34(9), 107-113 (2004) (in Chinese)

  4. de Valpine, P: Shared challenges and common ground for Bayesian and classical analysis of hierarchical models. Ecol. Appl. 19, 584-588 (2009)

  5. Carlin, BP, Gelfand, AE, Smith, AFM: Hierarchical Bayesian analysis of change point problem. Appl. Stat. 41(2), 389-405 (1992)

  6. Pečarić, JE, Proschan, F, Tong, YL: Convex Functions, Partial Orderings, and Statistical Applications. Academic Press, New York (1992)

  7. Wen, JJ: The inequalities involving Jensen functions. J. Syst. Sci. Math. Sci. 27(2), 208-218 (2007) (in Chinese)

  8. Wen, JJ, Zhang, ZH: Jensen type inequalities involving homogeneous polynomials. J. Inequal. Appl. 2010, Article ID 850215 (2010)

  9. Gao, CB, Wen, JJ: Theory of surround system and associated inequalities. Comput. Math. Appl. 63, 1621-1640 (2012)

  10. Wen, JJ, Gao, CB, Wang, WL: Inequalities of J-P-S-F type. J. Math. Inequal. 7(2), 213-225 (2013)

  11. An, MY: Log-Concave Probability Distributions: Theory and Statistical Testing. Duke University Press, Durham (1995)

  12. An, MY: Log-concavity and statistical inference of linear index models. Manuscript, Duke University (1997)

  13. Finner, H, Roters, M: Log-concavity and inequalities for Chi-square, F and Beta distributions with applications in multiple comparisons. Stat. Sin. 7, 771-787 (1997)

  14. Miravete, EJ: Preserving log-concavity under convolution: comment. Econometrica 70(3), 1253-1254 (2002). doi:10.1111/1468-0262.00327

  15. Al-Zahrani, B, Stoyanov, J: On some properties of life distributions with increasing elasticity and log-concavity. Appl. Math. Sci. 2(48), 2349-2361 (2008)

  16. Caramanis, C, Mannor, S: An inequality for nearly log-concave distributions with applications to learning. IEEE Trans. Inf. Theory 53(3), 1043-1057 (2007)

  17. Tong, YL: An adaptive solution to ranking and selection problems. Ann. Stat. 6(3), 658-672 (1978)

  18. Bagnoli, M, Bergstrom, T: Log-concave probability and its applications. Mimeo, University of Michigan (1991)

  19. Patel, JK, Read, CB: Handbook of the Normal Distribution, 2nd edn. CRC Press, Boca Raton (1996)

  20. Heckman, JJ, Honoré, BE: The empirical content of the Roy model. Econometrica 58(5), 1121-1149 (1990)

  21. Wen, JJ, Han, TY, Gao, CB: Convergence tests on constant Dirichlet series. Comput. Math. Appl. 62(9), 3472-3489 (2011)

  22. Wen, JJ, Han, TY, Cheng, SS: Inequalities involving Dresher variance mean. J. Inequal. Appl. 2013, Article ID 366 (2013)

  23. Pečarić, JE, Wen, JJ, Wang, WL, Tao, L: A generalization of Maclaurin’s inequalities and its applications. Math. Inequal. Appl. 8(4), 583-598 (2005)

  24. Wen, JJ, Wang, WL: Inequalities involving generalized interpolation polynomial. Comput. Math. Appl. 56(4), 1045-1058 (2008)

  25. Wen, JJ, Cheng, SS: Closed balls for interpolating quasi-polynomials. Comput. Appl. Math. 30(3), 545-570 (2011)

  26. Wen, JJ, Wang, WL: Chebyshev type inequalities involving permanents and their applications. Linear Algebra Appl. 422(1), 295-303 (2007)

  27. Wen, JJ, Wu, SH, Han, TY: Minkowski-type inequalities involving Hardy function and symmetric functions. J. Inequal. Appl. 2014, Article ID 186 (2014)

  28. Wen, JJ, Cheng, SS, Gao, C: Optimal sublinear inequalities involving geometric and power means. Math. Bohem. 134(2), 133-149 (2009)

  29. Bryc, W: The Normal Distribution: Characterizations with Applications. Springer, Berlin (1995)

  30. Spiegel, MR: Theory and Problems of Probability and Statistics, pp. 109-111. McGraw-Hill, New York (1992)

  31. Whittaker, ET, Robinson, G: The Calculus of Observations: A Treatise on Numerical Mathematics, 4th edn., pp. 164-208. Dover, New York (1967)

  32. Nadarajah, S: A generalized normal distribution. J. Appl. Stat. 32(7), 685-694 (2005)

  33. Varanasi, MK, Aazhang, B: Parametric generalized Gaussian density estimation. J. Acoust. Soc. Am. 86(4), 1404-1415 (1989)

  34. Johnson, OT: Information Theory and the Central Limit Theorem, p. 88. Imperial College Press, London (2004)

Acknowledgements

This work is supported in part by the Natural Science Foundation of China (No. 61309015), the Scientific Research Fund of the Education Department of Sichuan Province of China (No. 07ZA207, No. 14ZB0372), the Natural Science Fund of Chengdu University for Young Researchers (No. 2013XJZ08), and the Doctoral Research Fund of Chengdu University (No. 20819022).

Author information

Corresponding author

Correspondence to Yi Huang.

Additional information

Competing interests

The authors declare that they have no conflicts of interest in this joint work.

Authors’ contributions

All authors contributed equally and significantly to this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Wen, J., Huang, Y. & Cheng, S.S. Theory of ϕ-Jensen variance and its applications in higher education. J Inequal Appl 2015, 270 (2015). https://doi.org/10.1186/s13660-015-0796-z

