
Quasi-log concavity conjecture and its applications in statistics

Abstract

This paper is motivated by several interesting problems in statistics. We first define the concept of quasi-log concavity and propose a conjecture involving quasi-log concavity. By means of analysis and inequality theories, several interesting results related to the conjecture are obtained; in particular, we prove that log concavity implies quasi-log concavity under proper hypotheses. As applications, we first prove that the probability density function of the k-normal distribution is quasi-log concave. Next, we point out the significance of quasi-log concavity in the analysis of variance. We then prove that the generalized hierarchical teaching model is usually better than the generalized traditional teaching model. Finally, we demonstrate the applications of our results in the research of the allowance function in the generalized traditional teaching model.

MSC:26D15, 62J10.

1 Introduction

Convexity and concavity are essential attributes of functions, and their study and applications are important topics in mathematics (see [1–12]).

There are many types of convexity and concavity; one of them is log concavity, which has many applications in statistics (see [2, 4, 7–12]). In [4], the authors apply log concavity to study the Roy model and obtain some interesting results (see p.1128 in [4]), which include the following: If D is a log-concave random variable, then

\[ \frac{\partial \operatorname{Var}[D \mid D > d]}{\partial d} \le 0 \quad\text{and}\quad \frac{\partial \operatorname{Var}[D \mid D \le d]}{\partial d} \ge 0. \]
(1.1)

Recall the definitions of a log-concave function (see [1–5]) and a β-log-concave function (see [13]): If the function p: I → (0,∞) satisfies the inequality

\[ p[\theta u + (1-\theta)v] \ge e^{-\beta} p^{\theta}(u)\, p^{1-\theta}(v), \quad \forall (u,v) \in I^2, \ \forall \theta \in [0,1], \]
(1.2)

then we say that the function p: I → (0,∞) is a β-log-concave function, where β ∈ [0,∞). A 0-log-concave function is called a log-concave function. In other words, the function p: I → (0,∞) is log concave if and only if log p is a concave function. If log p is a convex function, then we call p: I → (0,∞) a log-convex function. Here I is an interval (or a higher-dimensional interval).

For log-concave functions, we have the following results (see [5]). Let the function p: I → (0,∞) be differentiable, where I is an interval. Then p is log concave if and only if the function (log p)′ is decreasing, i.e., if v₁, v₂ ∈ I and v₁ < v₂, then we have

\[ [\log p(v_1)]' \ge [\log p(v_2)]'. \]
(1.3)

Let the function p: I → (0,∞) be twice differentiable. Then p is log concave if and only if

\[ p(x)p''(x) - [p'(x)]^2 \le 0, \quad \forall x \in I. \]
(1.4)

For convenience, let

\[ p \triangleq p(x), \qquad \int_a^b p \triangleq \int_a^b p(x)\,dx, \qquad p'(x) \triangleq \frac{dp(x)}{dx}, \qquad p''(x) \triangleq \frac{d^2 p(x)}{dx^2}, \qquad p'''(x) \triangleq \frac{d^3 p(x)}{dx^3}, \qquad p^{(n)}(x) \triangleq \frac{d^n p(x)}{dx^n}, \quad n \ge 4. \]

It is well known that log concavity has a wide range of applications in probability and statistics (see [2, 4, 7–12]). However, quasi-log concavity also has fascinating significance in probability and statistics; see Section 4 and Section 5. The main object of this paper is to introduce the quasi-log concavity of a function and to demonstrate its applications in the analysis of variance.

Now we introduce the definition of quasi-log concavity and quasi-log convexity as follows.

Definition 1.1 A differentiable function p: I → (0,∞) is said to be quasi-log concave if the inequality

\[ G_p[a,b] \triangleq \Bigl(\int_a^b p\Bigr)\bigl[p'(b) - p'(a)\bigr] - \bigl[p(b) - p(a)\bigr]^2 \le 0, \quad \forall a,b \in I \]
(1.5)

holds, where I is an interval. If inequality (1.5) is reversed, then the function p: I → (0,∞) is said to be quasi-log convex.

We remark here that if the function p: I → (0,∞) is twice continuously differentiable, then inequality (1.5) can be rewritten as follows:

\[ G_p[a,b] \triangleq \int_a^b p \int_a^b p'' - \Bigl(\int_a^b p'\Bigr)^2 \le 0, \quad \forall a,b \in I. \]
(1.6)

Now we show that, for twice continuously differentiable functions, quasi-log concavity implies log concavity, and quasi-log convexity implies log convexity.

Indeed, suppose that p: I → (0,∞) is twice continuously differentiable and quasi-log concave. Then (1.6) holds. Hence

\[ p(x)p''(x) - [p'(x)]^2 = \lim_{b \to x} \frac{1}{(b-x)^2} \biggl\{ \int_x^b p(t)\,dt \int_x^b p''(t)\,dt - \biggl[ \int_x^b p'(t)\,dt \biggr]^2 \biggr\} \le 0 \]

for all x ∈ I, so that

\[ \frac{d^2 \log p(x)}{dx^2} = \frac{p''(x)p(x) - [p'(x)]^2}{p^2(x)} \le 0, \quad \forall x \in I. \]

Therefore, (1.4) holds and p is log concave on I. Similarly, we can prove that quasi-log convexity implies log convexity.

On the other hand, we can prove that, for twice continuously differentiable functions, log convexity implies quasi-log convexity.

Indeed, suppose that p: I → (0,∞) is twice continuously differentiable and log convex. Then (1.4) is reversed. Hence

\[ p''(x)p(x) - [p'(x)]^2 \ge 0, \ \forall x \in I \;\Rightarrow\; p''(x) \ge 0, \ |p'(x)| \le \sqrt{p(x)p''(x)}, \ \forall x \in I \;\Rightarrow\; \Bigl(\int_a^b p'\Bigr)^2 \le \Bigl(\int_a^b |p'|\Bigr)^2 \le \Bigl(\int_a^b \sqrt{p\,p''}\Bigr)^2 \le \int_a^b p \int_a^b p'', \quad \forall a,b \in I, \]

that is, inequality (1.6) is reversed; here we used the Cauchy inequality

\[ \Bigl(\int_a^b fg\Bigr)^2 \le \int_a^b f^2 \int_a^b g^2. \]

Therefore, p is quasi-log convex on I.
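This implication is easy to check numerically. The sketch below is an illustration we add here, not part of the original argument: it takes the log-convex function p(x) = exp(x²) and verifies that ∫p · ∫p″ ≥ (∫p′)² on a sample interval, i.e., that inequality (1.6) is reversed; the integration rule and the chosen interval are arbitrary.

```python
import math

def integrate(f, a, b, n=20000):
    # midpoint-rule integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# a log-convex example: p(x) = exp(x^2), so p' = 2x*p and p'' = (2 + 4x^2)*p
p   = lambda x: math.exp(x * x)
dp  = lambda x: 2 * x * p(x)
d2p = lambda x: (2 + 4 * x * x) * p(x)

a, b = -0.7, 1.3
lhs = integrate(p, a, b) * integrate(d2p, a, b)  # ∫p · ∫p''
rhs = integrate(dp, a, b) ** 2                   # (∫p')^2
print(lhs >= rhs)  # True: inequality (1.6) is reversed for a log-convex p
```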

Unfortunately, we have not found a connection between quasi-log concavity and β-log concavity for β > 0.

Based on the above analysis, we have reason to propose a conjecture (abbreviated as quasi-log concavity conjecture) as follows.

Conjecture 1.1 (Quasi-log concavity conjecture)

Suppose that the function p: I → (0,∞) is twice continuously differentiable. If p is log concave, then p is quasi-log concave. Here I is an interval.

We have carried out many experiments with mathematical software to test Conjecture 1.1 and have found no counter-example.
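Experiments of this kind are easy to reproduce. The following sketch is our illustration (the density, the sampling range and the tolerance are arbitrary choices): it evaluates G_p[a,b] from (1.5) for the log-concave standard normal density at random pairs (a, b) and counts sign violations.

```python
import math
import random

def G(p, dp, a, b, n=2000):
    # G_p[a,b] = (∫_a^b p) * (p'(b) - p'(a)) - (p(b) - p(a))^2, as in (1.5)
    h = (b - a) / n
    integral = sum(p(a + (i + 0.5) * h) for i in range(n)) * h  # midpoint rule
    return integral * (dp(b) - dp(a)) - (p(b) - p(a)) ** 2

# a log-concave density: the standard normal
p  = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
dp = lambda x: -x * p(x)

random.seed(0)
violations = 0
for _ in range(500):
    a, b = sorted(random.uniform(-4.0, 4.0) for _ in range(2))
    if b - a > 1e-6 and G(p, dp, a, b) > 1e-7:
        violations += 1
print(violations)  # 0: no counter-example found
```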

We remark that similar concepts may be defined for sequences {x_n}_{n=1}^∞ ⊂ (0,∞). We first define

\[ \Delta x_n \triangleq x_{n+1} - x_n, \quad n \in \mathbb{N} \triangleq \{1,2,\ldots\}, \qquad \sum_a^b x_n \triangleq \sum_{a \le n < b} x_n, \quad a,b \in \mathbb{N}, \ a < b, \]
\[ G_{x_n}[a,b] \triangleq \Bigl(\sum_a^b x_n\Bigr)(\Delta x_b - \Delta x_a) - (x_b - x_a)^2, \quad a,b \in \mathbb{N}, \ a < b, \]

and the sequence {x_n}_{n=1}^∞ ⊂ (0,∞) is called a log-concave sequence if

\[ x_a x_{a+2} - x_{a+1}^2 \le 0, \quad \forall a \in \mathbb{N}, \]
(1.7)

and is called quasi-log concave if

\[ G_{x_n}[a,b] \le 0, \quad \forall a,b \in \mathbb{N}, \ a < b. \]
(1.8)

Setting b = a+1 in (1.8), inequality (1.8) reduces to (1.7). Hence, for sequences {x_n}_{n=1}^∞ ⊂ (0,∞), quasi-log concavity implies log concavity. Similarly, we can define log-convex sequences and quasi-log convexity of a sequence. We expect inter-relations between these concepts, but they will be dealt with elsewhere.
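For sequences these quantities are directly computable. The sketch below is our illustration, assuming the summation convention ∑_a^b x_n = ∑_{a≤n<b} x_n stated above: it checks that with b = a+1 the quantity G_{x_n}[a,b] reduces to x_a x_{a+2} − x_{a+1}², and tests the log-concave sequence of binomial coefficients for quasi-log concavity.

```python
from math import comb

def G_seq(x, a, b):
    # G_{x_n}[a,b] = (sum_{a<=n<b} x_n)(Dx_b - Dx_a) - (x_b - x_a)^2,
    # where Dx_n = x_{n+1} - x_n (0-based indices into the list x)
    D = lambda n: x[n + 1] - x[n]
    return sum(x[a:b]) * (D(b) - D(a)) - (x[b] - x[a]) ** 2

# a log-concave sequence: the binomial coefficients C(20, n)
x = [comb(20, n) for n in range(21)]

# with b = a+1, G reduces to x_a*x_{a+2} - x_{a+1}^2, i.e. condition (1.7)
a = 5
assert G_seq(x, a, a + 1) == x[a] * x[a + 2] - x[a + 1] ** 2

# quasi-log concavity over all admissible index pairs
print(all(G_seq(x, a, b) <= 0 for a in range(19) for b in range(a + 1, 20)))
```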

In this paper, we are concerned with Conjecture 1.1 and demonstrate the applications of our results in the analysis of variance and in comparing the generalized hierarchical teaching model with the generalized traditional teaching model. Our motivation is to study several interesting problems in statistics.

In Section 2, we take up Conjecture 1.1. In Section 3, we give several illustrative examples. In Section 4, we prove that the probability density function of the k-normal distribution is quasi-log concave. In Section 5, we demonstrate the applications of these results: we show that the generalized hierarchical teaching model is normally better than the generalized traditional teaching model (see Remark 5.3), and we point out the significance of quasi-log concavity in the analysis of variance and in the generalized traditional teaching model.

2 Study of Conjecture 1.1

For Conjecture 1.1, we have the following five theorems.

Theorem 2.1 Let the function p: I → (0,∞) be twice continuously differentiable, log concave and monotone. If

\[ 0 < \sup_{t \in I} \Bigl\{ \bigl| |(\log p)'| - \sqrt{-(\log p)''} \bigr| \Bigr\} \le \inf_{t \in I} \Bigl\{ |(\log p)'| + \sqrt{-(\log p)''} \Bigr\}, \]
(2.1)

then p is quasi-log concave.

Proof Since the function p is log concave, we have

\[ (\log p)'' = [\log p(t)]'' \le 0, \quad \forall t \in I, \]

so the square roots in (2.1) are well defined.

Without loss of generality, we may assume that

\[ a, b \in I, \qquad a < b. \]

Note that for any positive real number λ and any real numbers x, y, we have the inequality

\[ xy \le \Bigl( \frac{\lambda}{2} x + \frac{y}{2\lambda} \Bigr)^2, \]
(2.2)

where equality holds if and only if λ²x = y.

According to inequality (2.1), there exists a positive real number λ such that

\[ 0 < \sup_{t \in I} \Bigl\{ \bigl| |(\log p)'| - \sqrt{-(\log p)''} \bigr| \Bigr\} \le \lambda \le \inf_{t \in I} \Bigl\{ |(\log p)'| + \sqrt{-(\log p)''} \Bigr\}. \]
(2.3)

From (2.3) we know that, for this positive real number λ, we have

\[ \lambda^2 p(t) - 2\lambda |p'(t)| + p''(t) \le 0, \quad \forall t \in I, \]
(2.4)

and

\[ \lambda^2 p(t) + 2\lambda |p'(t)| + p''(t) \ge 0, \quad \forall t \in I. \]
(2.5)

Indeed, since

\[ (\log p)' = \frac{p'}{p}, \qquad (\log p)'' = \frac{p'' p - (p')^2}{p^2}, \qquad \frac{p''}{p} = [(\log p)']^2 + (\log p)'', \]
(2.6)

inequality (2.4) is equivalent to the inequalities

\[ |(\log p)'| - \sqrt{-(\log p)''} \le \lambda \le |(\log p)'| + \sqrt{-(\log p)''}, \quad \forall t \in I, \]
(2.7)

and inequality (2.5) is equivalent to the inequalities

\[ \lambda \ge -\bigl[ |(\log p)'| - \sqrt{-(\log p)''} \bigr], \quad \forall t \in I, \]
(2.8)

or

\[ \lambda \le -\bigl[ |(\log p)'| + \sqrt{-(\log p)''} \bigr], \quad \forall t \in I. \]
(2.9)

Hence if inequalities (2.3) hold, then both inequality (2.4) and inequality (2.5) hold. That is to say, inequalities (2.4) and (2.5) are equivalent to inequalities (2.3).

Since p: I → (0,∞) is monotone, p′ does not change sign on I, and we obtain

\[ \Bigl(\int_a^b p'\Bigr)^2 = \Bigl(\int_a^b |p'|\Bigr)^2. \]
(2.10)

Combining (2.2), (2.4), (2.5) and (2.10), we get

\[
\begin{aligned}
\int_a^b p \int_a^b p'' - \Bigl(\int_a^b p'\Bigr)^2
&\le \Bigl( \frac{\lambda}{2}\int_a^b p + \frac{1}{2\lambda}\int_a^b p'' \Bigr)^2 - \Bigl(\int_a^b p'\Bigr)^2 \\
&= \Bigl( \int_a^b \frac{\lambda^2 p + p''}{2\lambda} \Bigr)^2 - \Bigl(\int_a^b |p'|\Bigr)^2 \\
&= \Bigl( \int_a^b \frac{\lambda^2 p + p''}{2\lambda} + \int_a^b |p'| \Bigr)\Bigl( \int_a^b \frac{\lambda^2 p + p''}{2\lambda} - \int_a^b |p'| \Bigr) \\
&= \frac{1}{4\lambda^2}\Bigl[ \int_a^b \bigl(\lambda^2 p + 2\lambda|p'| + p''\bigr) \Bigr]\Bigl[ \int_a^b \bigl(\lambda^2 p - 2\lambda|p'| + p''\bigr) \Bigr] \le 0.
\end{aligned}
\]

This means that inequality (1.6) holds.

The proof of Theorem 2.1 is completed. □

Corollary 2.1 Let the function p: [α,β] → (0,∞) be thrice continuously differentiable and log concave. If

\[ p' \le 0, \qquad p''' > 0, \qquad [(\log p)''']^2 \ge -4[(\log p)'']^3, \quad \forall x \in [\alpha,\beta], \]
(2.11)

then the function p: [α,β] → (0,∞) is quasi-log concave.

Proof Let

\[ \varphi = \bigl| |(\log p)'| - \sqrt{-(\log p)''} \bigr|, \qquad \psi = |(\log p)'| + \sqrt{-(\log p)''}. \]

From (2.11), we have

\[
\begin{aligned}
&\varphi = -(\log p)' - \sqrt{-(\log p)''} > 0, \qquad \psi = -(\log p)' + \sqrt{-(\log p)''} \ge \varphi > 0, \\
&\frac{d\varphi}{dx} = -(\log p)'' + \frac{(\log p)'''}{2\sqrt{-(\log p)''}} \le 0, \qquad \frac{d\psi}{dx} = -(\log p)'' - \frac{(\log p)'''}{2\sqrt{-(\log p)''}} \ge 0,
\end{aligned}
\]

and

\[ 0 < \varphi(x) \le \varphi(\alpha) \le \psi(\alpha) \le \psi(x), \quad \forall x \in [\alpha,\beta], \]

hence

\[ 0 < \sup_{x \in [\alpha,\beta]} \Bigl\{ \bigl| |(\log p)'| - \sqrt{-(\log p)''} \bigr| \Bigr\} = \varphi(\alpha) \le \psi(\alpha) = \inf_{x \in [\alpha,\beta]} \Bigl\{ |(\log p)'| + \sqrt{-(\log p)''} \Bigr\}. \]

By Theorem 2.1, the function p: [α,β] → (0,∞) is quasi-log concave. This ends the proof. □

Theorem 2.2 Let the function p: [α,β] → (0,∞) be twice continuously differentiable and log concave. If

\[ (\beta-\alpha)^2 \sup_{x \in [\alpha,\beta]} \bigl\{ |(\log p(x))''|^2 \bigr\} - 2 \inf_{x \in [\alpha,\beta]} \bigl\{ |(\log p(x))''| \bigr\} \le 0, \]
(2.12)

then p: [α,β] → (0,∞) is quasi-log concave.

Proof Now we prove that (1.6) holds as follows.

Without loss of generality, we assume that a, b ∈ [α,β] and a < b. Note that

\[ G_p[a,b] \triangleq \int_a^b p \int_a^b p'' - \Bigl(\int_a^b p'\Bigr)^2, \qquad \frac{\partial G_p[a,b]}{\partial b} = p(b)\int_a^b p'' + p''(b)\int_a^b p - 2p'(b)\int_a^b p', \]

and

\[ \frac{\partial^2 G_p[a,b]}{\partial b \, \partial a} = -p(b)p''(a) - p''(b)p(a) + 2p'(b)p'(a). \]

According to the Lagrange mean value theorem, there are two real numbers a*, b*,

\[ a^*, b^* \in [\alpha,\beta] \quad\text{and}\quad a < a^* < b^* < b, \]

such that

\[ \frac{G_p[a,b]}{b-a} = \frac{G_p[a,b] - G_p[a,a]}{b-a} = \left. \frac{\partial G_p[a,b]}{\partial b} \right|_{b = b^*}, \]

and

\[ \frac{ \partial G_p[a,b^*]/\partial b - \partial G_p[b^*,b^*]/\partial b }{a - b^*} = \left. \frac{\partial^2 G_p[a,b]}{\partial b \, \partial a} \right|_{(a,b) = (a^*,b^*)}, \]

hence

\[ G_p[a,b] = (b-a)(b^* - a)\bigl[ p(b^*)p''(a^*) + p''(b^*)p(a^*) - 2p'(b^*)p'(a^*) \bigr]. \]
(2.13)

From (2.6) and the Lagrange mean value theorem, we get

\[
\begin{aligned}
& p(b^*)p''(a^*) + p''(b^*)p(a^*) - 2p'(b^*)p'(a^*) \\
&\quad = p(a^*)p(b^*)\bigl\{ \bigl[ (\log p(b^*))' - (\log p(a^*))' \bigr]^2 + (\log p(a^*))'' + (\log p(b^*))'' \bigr\} \\
&\quad = p(a^*)p(b^*)\bigl\{ \bigl[ (b^* - a^*)(\log p(\xi))'' \bigr]^2 + (\log p(a^*))'' + (\log p(b^*))'' \bigr\} \\
&\quad \le p(a^*)p(b^*)\bigl\{ (\beta-\alpha)^2 \bigl[ (\log p(\xi))'' \bigr]^2 - |(\log p(a^*))''| - |(\log p(b^*))''| \bigr\} \\
&\quad \le p(a^*)p(b^*)\Bigl[ (\beta-\alpha)^2 \sup_{x \in [\alpha,\beta]} \bigl\{ |(\log p(x))''|^2 \bigr\} - 2\inf_{x \in [\alpha,\beta]} \bigl\{ |(\log p(x))''| \bigr\} \Bigr] \le 0,
\end{aligned}
\]

i.e.,

\[ p(b^*)p''(a^*) + p''(b^*)p(a^*) - 2p'(b^*)p'(a^*) \le 0, \]
(2.14)

where

\[ \alpha \le a < a^* < \xi < b^* < b \le \beta. \]
(2.15)

Combining (2.13), (2.14) and (2.15), we get inequality (1.6).

This completes the proof of Theorem 2.2. □

Theorem 2.3 Let the function φ: (α,β) → (−∞,∞) be thrice continuously differentiable. If

\[ \varphi''(x) > 0, \qquad 2\varphi''(x)\varphi'(x) - \varphi'''(x) \ge 0, \quad \forall x \in (\alpha,\beta), \]
(2.16)

then the function

\[ p: (\alpha,\beta) \to (0,\infty), \qquad p(x) \triangleq c e^{-\varphi(x)}, \quad c > 0, \]

is quasi-log concave.

Proof Let

\[ G_p[a,b] \triangleq \int_a^b p \int_a^b p'' - \Bigl(\int_a^b p'\Bigr)^2. \]

We just need to show that

\[ G_p[a,b] \le 0, \quad \forall a,b \in (\alpha,\beta). \]
(2.17)

Since

\[ G_p[a,b] \equiv G_p[b,a], \qquad G_p[a,a] = 0, \quad \forall a,b \in (\alpha,\beta), \]

without loss of generality, we can assume that

\[ \alpha < b < a < \beta, \qquad c = 1, \]
(2.18)

and a is a fixed constant.

Note that

\[ \log p(x) = -\varphi(x), \qquad \frac{d \log p(x)}{dx} = \frac{p'}{p} = -\varphi', \]

and

\[ \frac{d^2 \log p(x)}{dx^2} = \frac{p'' p - (p')^2}{p^2} = -\varphi''. \]

Hence

\[ p'(x) = -\varphi' p, \quad \forall x \in (\alpha,\beta), \]
(2.19)

and

\[ p''(x) = \bigl[ (\varphi')^2 - \varphi'' \bigr] p, \quad \forall x \in (\alpha,\beta). \]
(2.20)

From

\[ \frac{\partial G_p[a,b]}{\partial b} = p(b)\int_a^b p'' + p''(b)\int_a^b p - 2p'(b)\int_a^b p', \]

(2.19) and (2.20), we may see that

\[ \frac{\partial G_p[a,b]}{\partial b} = p(b) \Bigl\{ \int_a^b p'' + \bigl[ (\varphi'(b))^2 - \varphi''(b) \bigr] \int_a^b p + 2\varphi'(b) \int_a^b p' \Bigr\}. \]
(2.21)

Let

\[ F(a,b) \triangleq \frac{1}{p(b)} \frac{\partial G_p[a,b]}{\partial b} = \int_a^b p'' + \bigl[ (\varphi'(b))^2 - \varphi''(b) \bigr] \int_a^b p + 2\varphi'(b) \int_a^b p'. \]
(2.22)

Then

\[
\begin{aligned}
\frac{\partial F(a,b)}{\partial b}
&= p''(b) + \bigl[ (\varphi'(b))^2 - \varphi''(b) \bigr]p(b) + \bigl[ 2\varphi''(b)\varphi'(b) - \varphi'''(b) \bigr]\int_a^b p + 2\Bigl[ \varphi''(b)\int_a^b p' + \varphi'(b)p'(b) \Bigr] \\
&= 2\bigl[ (\varphi'(b))^2 - \varphi''(b) \bigr]p(b) + \bigl[ 2\varphi''(b)\varphi'(b) - \varphi'''(b) \bigr]\int_a^b p + 2\bigl\{ \varphi''(b)\bigl[ p(b) - p(a) \bigr] - (\varphi'(b))^2 p(b) \bigr\} \\
&= \bigl[ 2\varphi''(b)\varphi'(b) - \varphi'''(b) \bigr]\int_a^b p - 2\varphi''(b)p(a),
\end{aligned}
\]

i.e.,

\[ \frac{\partial F(a,b)}{\partial b} = \bigl[ 2\varphi''(b)\varphi'(b) - \varphi'''(b) \bigr]\int_a^b p - 2\varphi''(b)p(a). \]
(2.23)

Based on assumption (2.16), the fact that ∫_a^b p < 0 (recall b < a) and (2.23), we have

\[ \frac{\partial F(a,b)}{\partial b} < 0, \quad \forall b \in (\alpha,a). \]
(2.24)

From (2.24), (2.18) and (2.22), we have

\[ F(a,b) > F(a,a) = 0, \qquad \frac{\partial G_p[a,b]}{\partial b} > 0, \quad \forall b \in (\alpha,a). \]
(2.25)

By (2.25) and (2.18), we get

\[ G_p[a,b] < G_p[a,a] = 0, \quad \forall b \in (\alpha,a). \]
(2.26)

That is to say, inequality (2.17) holds.

We remark that equality in (2.17) holds if and only if a = b.

The proof of Theorem 2.3 is completed. □

Theorem 2.4 Let the function φ: (α,β) → (−∞,∞) be four times continuously differentiable. If

\[ \varphi''(x) > 0, \qquad 2[\varphi''(x)]^3 - \varphi^{(4)}(x)\varphi''(x) + [\varphi'''(x)]^2 \ge 0, \quad \forall x \in (\alpha,\beta), \]
(2.27)

and

\[ G_p[\alpha,\beta] \triangleq \int_\alpha^\beta p \int_\alpha^\beta p'' - \Bigl(\int_\alpha^\beta p'\Bigr)^2 \le 0, \]
(2.28)

then the function

\[ p: (\alpha,\beta) \to (0,\infty), \qquad p(x) \triangleq c e^{-\varphi(x)}, \quad c > 0, \]

is quasi-log concave, where c is a constant.

In order to prove Theorem 2.4, we need the following lemma.

Lemma 2.1 Under the assumptions of Theorem  2.4, if

\[ \alpha < a < b < \beta^* < \beta, \]
(2.29)

then we have

\[ G_p[a,b] \le \max\bigl\{ 0, G_p[a,\beta^*] \bigr\}. \]
(2.30)

Proof Without loss of generality, we can assume that c=1 and a is a fixed constant.

We continue to use the proof of Theorem 2.3. Note that equation (2.23) can be rewritten as

\[ \frac{\partial F(a,b)}{\partial b} = \varphi''(b) \Bigl(\int_a^b p\Bigr) F^*(a,b), \]
(2.31)

where

\[ F^*(a,b) \triangleq 2\varphi'(b) - \frac{\varphi'''(b)}{\varphi''(b)} - \frac{2p(a)}{\int_a^b p}. \]
(2.32)

Based on assumption (2.27), the fact that ∫_a^b p > 0 and (2.32), we have

\[ \frac{\partial F^*(a,b)}{\partial b} = \frac{2[\varphi''(b)]^3 - \varphi^{(4)}(b)\varphi''(b) + [\varphi'''(b)]^2}{[\varphi''(b)]^2} + \frac{2p(a)p(b)}{\bigl(\int_a^b p\bigr)^2} > 0, \quad \forall b \in (a,\beta^*), \]
(2.33)

which means that F*(a,b) is strictly increasing in the variable b ∈ (a, β*).

From (2.32), we may see that

\[ \lim_{b \to a^+} F^*(a,b) = F^*(a,a^+) = -\infty. \]
(2.34)

We prove inequality (2.30) in two cases (A) and (B).

(A) Assume that

\[ F^*(a,\beta^*) > 0. \]
(2.35)

By (2.34), (2.35) and the intermediate value theorem, there exists exactly one number b̄ ∈ (a, β*) such that

\[ F^*(a,\bar{b}) = 0. \]
(2.36)

From (2.33) and (2.31), we get

\[ a < b < \bar{b} \;\Rightarrow\; F^*(a,b) < F^*(a,\bar{b}) = 0 \;\Rightarrow\; \frac{\partial F(a,b)}{\partial b} < 0, \]

and

\[ \bar{b} < b < \beta^* \;\Rightarrow\; F^*(a,b) > F^*(a,\bar{b}) = 0 \;\Rightarrow\; \frac{\partial F(a,b)}{\partial b} > 0, \]

hence F(a,b) is strictly decreasing for b ∈ (a, b̄] and strictly increasing for b ∈ [b̄, β*).

If F(a,β*) ≤ 0, then, since F(a,a) = 0, we have

\[
\begin{aligned}
& a < b \le \bar{b} \;\Rightarrow\; F(a,b) < F(a,a) = 0, \\
& \bar{b} \le b < \beta^* \;\Rightarrow\; F(a,b) \le F(a,\beta^*) \le 0, \\
& \Rightarrow\; F(a,b) = \frac{1}{p(b)}\frac{\partial G_p[a,b]}{\partial b} \le 0, \quad \forall b \in (a,\beta^*) \;\Rightarrow\; \frac{\partial G_p[a,b]}{\partial b} \le 0, \quad \forall b \in (a,\beta^*),
\end{aligned}
\]

and

\[ G_p[a,b] \le G_p[a,a] = 0, \quad \forall b \in (a,\beta^*). \]

This means that inequality (2.30) holds.

Now we assume that

\[ F(a,\beta^*) > 0. \]
(2.37)

Since F(a,b) is strictly decreasing for b ∈ (a, b̄], we have

\[ b \in (a,\bar{b}] \;\Rightarrow\; F(a,b) < F(a,a) = 0. \]
(2.38)

By (2.37), (2.38), the fact that F(a,b) is strictly increasing for b ∈ [b̄, β*), and continuity, we know that there exists a unique real number b̂ ∈ (b̄, β*) such that

\[ F(a,\hat{b}) = 0. \]
(2.39)

Since

\[
\begin{aligned}
& a < b \le \bar{b} \;\Rightarrow\; F(a,b) < F(a,a) = 0 \;\Rightarrow\; \frac{\partial G_p[a,b]}{\partial b} = p(b)F(a,b) < 0, \\
& \bar{b} < b < \hat{b} \;\Rightarrow\; F(a,b) < F(a,\hat{b}) = 0 \;\Rightarrow\; \frac{\partial G_p[a,b]}{\partial b} = p(b)F(a,b) < 0, \\
& \Rightarrow\; \frac{\partial G_p[a,b]}{\partial b} < 0, \quad \forall b \in (a,\hat{b}),
\end{aligned}
\]

and

\[ \hat{b} < b < \beta^* \;\Rightarrow\; F(a,b) > F(a,\hat{b}) = 0 \;\Rightarrow\; \frac{\partial G_p[a,b]}{\partial b} = p(b)F(a,b) > 0, \]

we know that G_p[a,b] is strictly decreasing for b ∈ (a, b̂] and strictly increasing for b ∈ [b̂, β*), so that

\[ G_p[a,b] \le \max\bigl\{ G_p[a,a], G_p[a,\beta^*] \bigr\} = \max\bigl\{ 0, G_p[a,\beta^*] \bigr\}. \]
(2.40)

This means that inequality (2.30) also holds.

(B) Assume that

\[ F^*(a,\beta^*) \le 0. \]
(2.41)

Since F*(a,b) is strictly increasing in b ∈ (a, β*), we have

\[
\begin{aligned}
& F^*(a,b) < F^*(a,\beta^*) \le 0, \quad \forall b \in (a,\beta^*), \\
& \Rightarrow\; \frac{\partial F(a,b)}{\partial b} = \varphi''(b)\Bigl(\int_a^b p\Bigr)F^*(a,b) \le 0, \quad \forall b \in (a,\beta^*), \\
& \Rightarrow\; F(a,b) \le F(a,a) = 0, \quad \forall b \in (a,\beta^*), \\
& \Rightarrow\; \frac{\partial G_p[a,b]}{\partial b} = p(b)F(a,b) \le 0, \quad \forall b \in (a,\beta^*),
\end{aligned}
\]

and

\[ G_p[a,b] \le G_p[a,a] = 0 \le \max\bigl\{ 0, G_p[a,\beta^*] \bigr\}, \quad \forall b \in (a,\beta^*). \]

That is to say, inequality (2.30) still holds.

The proof of Lemma 2.1 is completed. □

The proof of Theorem 2.4 is now relatively easy.

Proof of Theorem 2.4 We just need to show that (2.17) holds. Without loss of generality, we assume that

\[ \alpha < a < b < \beta, \qquad c = 1. \]
(2.42)

Let α*, β* ∈ (α,β) be such that

\[ \alpha < \alpha^* < a < b < \beta^* < \beta. \]
(2.43)

By Lemma 2.1, inequality (2.30) holds.

We define the auxiliary function p* as follows:

\[ p^*: (-\beta,-\alpha) \to (0,\infty), \qquad p^*(x) \triangleq c e^{-\varphi(-x)}, \quad c > 0. \]

Then, by (2.27), we have

\[ [\varphi(-x)]'' = \varphi''(-x) > 0, \qquad [\varphi(-x)]''' = -\varphi'''(-x), \qquad [\varphi(-x)]^{(4)} = \varphi^{(4)}(-x), \quad \forall x \in (-\beta,-\alpha), \]

and

\[ 2\bigl\{ [\varphi(-x)]'' \bigr\}^3 - [\varphi(-x)]^{(4)}[\varphi(-x)]'' + \bigl\{ [\varphi(-x)]''' \bigr\}^2 = 2[\varphi''(-x)]^3 - \varphi^{(4)}(-x)\varphi''(-x) + [\varphi'''(-x)]^2 \ge 0, \quad \forall x \in (-\beta,-\alpha). \]

According to Lemma 2.1 (applied to p*) and

\[ -\beta < -\beta^* < -a < -\alpha^* < -\alpha, \]

we get

\[ G_{p^*}[-\beta^*, -a] \le \max\bigl\{ 0, G_{p^*}[-\beta^*, -\alpha^*] \bigr\}. \]
(2.44)

Since

\[ G_p[b,a] = \int_b^a p(x)\,dx \int_b^a p''(x)\,dx - \Bigl[\int_b^a p'(x)\,dx\Bigr]^2 \equiv G_p[a,b], \]

we have

\[ G_{p^*}[-\beta^*, -a] = G_p[a,\beta^*], \qquad G_{p^*}[-\beta^*, -\alpha^*] = G_p[\alpha^*,\beta^*], \]

and inequality (2.44) can be rewritten as

\[ G_p[a,\beta^*] \le \max\bigl\{ 0, G_p[\alpha^*,\beta^*] \bigr\}. \]
(2.45)

Combining inequalities (2.30) and (2.45), we get

\[ G_p[a,b] \le \max\bigl\{ 0, G_p[a,\beta^*] \bigr\} \le \max\bigl\{ 0, G_p[\alpha^*,\beta^*] \bigr\}. \]
(2.46)

In (2.46), let α* → α and β* → β; we get

\[ G_p[a,b] \le \max\bigl\{ 0, G_p[\alpha,\beta] \bigr\}. \]
(2.47)

According to conditions (2.28) and (2.47), inequality (2.17) holds.

This completes the proof of Theorem 2.4. □

Theorem 2.5 Let the function φ: (α,β) → (−∞,∞) be four times continuously differentiable. If

\[ \lim_{x \to \beta^-} \varphi(x) = \infty, \qquad \lim_{x \to \beta^-} \varphi'(x)e^{-\varphi(x)} = 0, \qquad \lim_{x \to \beta^-} \frac{[\varphi'(x)]^2}{\varphi''(x)} e^{-\varphi(x)} = 0, \qquad \int_\alpha^\beta p < \infty, \]
(2.48)

and (2.27) holds, then the function

\[ p: (\alpha,\beta) \to (0,\infty), \qquad p(x) \triangleq c e^{-\varphi(x)}, \quad c > 0, \]

is quasi-log concave.

Proof We just need to show that (2.17) holds. Without loss of generality, we assume that

\[ \alpha < a < b < \beta, \qquad c = 1. \]
(2.49)

Letting β* → β in Lemma 2.1, we have

\[ G_p[a,b] \le \max\bigl\{ 0, G_p[a,\beta] \bigr\}. \]
(2.50)

To complete the proof of inequality (2.17), by (2.50), we just need to show that

\[ G_p[a,\beta] \le 0, \quad \forall a \in (\alpha,\beta). \]
(2.51)

Now we regard the real number a as a variable. By condition (2.48), we have

\[ p'(\beta) \triangleq \lim_{x \to \beta^-} p'(x) = 0, \qquad p(\beta) \triangleq \lim_{x \to \beta^-} p(x) = 0 \]

and

\[ G_p[a,\beta] = \Bigl(\int_a^\beta p\Bigr)\bigl[p'(\beta) - p'(a)\bigr] - \bigl[p(\beta) - p(a)\bigr]^2 = -p'(a)\int_a^\beta p - \bigl[p(a)\bigr]^2 = p(a)\Bigl[\varphi'(a)\int_a^\beta p - p(a)\Bigr], \]

i.e.,

\[ G_p[a,\beta] = p(a)\phi(a), \]
(2.52)

where

\[ \phi(a) \triangleq \varphi'(a)\int_a^\beta p - p(a), \quad a \in (\alpha,\beta). \]
(2.53)

If φ′(a) ≤ 0, then ϕ(a) < 0 and (2.51) holds by (2.52). So we may assume that φ′(a) > 0.

Note that

\[ \frac{d\phi(a)}{da} = \varphi''(a)\int_a^\beta p - \varphi'(a)p(a) - p'(a) = \varphi''(a)\int_a^\beta p > 0. \]

Since

\[ \varphi''(x) > 0, \quad \forall x \in (\alpha,\beta), \]

the function φ′ is increasing, so the limit

\[ \lim_{a \to \beta^-} \varphi'(a) \]

exists (finite or infinite).

If

\[ 0 < \varphi'(a) \le \lim_{a \to \beta^-} \varphi'(a) < \infty, \]

then from ∫_α^β p < ∞ we have

\[ \lim_{a \to \beta^-} \int_a^\beta p = \lim_{a \to \beta^-} \Bigl( \int_\alpha^\beta p - \int_\alpha^a p \Bigr) = 0 \]
(2.54)

and

\[ \lim_{a \to \beta^-} \varphi'(a)\int_a^\beta p = 0. \]
(2.55)

If

\[ \lim_{a \to \beta^-} \varphi'(a) = \infty, \]

then, by (2.54), (2.48) and L'Hospital's rule, we have

\[ \lim_{a \to \beta^-} \varphi'(a)\int_a^\beta p = \lim_{a \to \beta^-} \frac{\int_a^\beta p}{[\varphi'(a)]^{-1}} = \lim_{a \to \beta^-} \frac{d\bigl(\int_a^\beta p\bigr)/da}{d[\varphi'(a)]^{-1}/da} = \lim_{a \to \beta^-} \frac{p(a)[\varphi'(a)]^2}{\varphi''(a)} = \lim_{x \to \beta^-} \frac{[\varphi'(x)]^2}{\varphi''(x)} e^{-\varphi(x)} = 0, \]

that is to say, (2.55) also holds. Hence

\[ \phi(a) < \phi(\beta^-) = \lim_{a \to \beta^-} \Bigl[ \varphi'(a)\int_a^\beta p - p(a) \Bigr] = \lim_{a \to \beta^-} \varphi'(a)\int_a^\beta p = 0. \]

By (2.52), inequality (2.51) holds.

The proof of Theorem 2.5 is completed. □

3 Four illustrative examples

To illustrate the meaning of quasi-log concavity, we give four examples.

Example 3.1 The function

\[ p: [0,\pi] \to (0,\infty), \qquad p(x) \triangleq \exp(\sin x) \]

is quasi-log concave.

Proof Indeed, if

\[ x \in I^* \triangleq \Bigl[ 0, \frac{\pi}{2} \Bigr] \quad\text{or}\quad \Bigl[ \frac{\pi}{2}, \pi \Bigr], \]

then p(x) is twice continuously differentiable, log concave and monotone on I*, and

\[ \Bigl| |(\log p)'| - \sqrt{-(\log p)''} \Bigr| = \bigl| |\cos x| - \sqrt{\sin x} \bigr| \le \max\bigl\{ |\cos x|, \sqrt{\sin x} \bigr\} \le 1, \]
\[ |(\log p)'| + \sqrt{-(\log p)''} = |\cos x| + \sqrt{\sin x} \ge \cos^2 x + \sin x \ge \cos^2 x + \sin^2 x = 1, \]

hence

\[ 0 < \sup_{x \in I^*} \Bigl\{ \Bigl| |(\log p)'| - \sqrt{-(\log p)''} \Bigr| \Bigr\} = 1 = \inf_{x \in I^*} \Bigl\{ |(\log p)'| + \sqrt{-(\log p)''} \Bigr\}. \]

By Theorem 2.1, the function p(x) is quasi-log concave on I*. That is to say, for any a, b ∈ I*, inequality (1.5) holds.

Let

\[ 0 \le a < \frac{\pi}{2} < b \le \pi. \]

Since

\[ p'(b) < 0 < p'(a), \]

inequality (1.5) still holds. The proof is completed. □
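As a numerical sanity check of this example (an illustration we add; the sampled pairs (a, b) are arbitrary), one can evaluate G_p[a,b] from (1.5) for p(x) = exp(sin x) directly:

```python
import math

def G(p, dp, a, b, n=4000):
    # G_p[a,b] from (1.5); the integral is evaluated by the midpoint rule
    h = (b - a) / n
    return sum(p(a + (i + 0.5) * h) for i in range(n)) * h * (dp(b) - dp(a)) \
        - (p(b) - p(a)) ** 2

p  = lambda x: math.exp(math.sin(x))
dp = lambda x: math.cos(x) * p(x)

pairs = [(0.0, 1.0), (0.3, math.pi / 2), (1.0, 3.0), (0.0, math.pi)]
ok = all(G(p, dp, a, b) <= 1e-9 for a, b in pairs)
print(ok)  # True
```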

Example 3.2 The function

\[ p: (0,\beta) \to (0,\infty), \qquad p(x) \triangleq \exp(\arctan x) \]

is quasi-log concave, where

\[ \beta = \frac{1}{9}\bigl( 7 + 4\sqrt[3]{10} + \sqrt[3]{100} \bigr) = 2.251036399304479\ldots \]

is a root of the equation

\[ \frac{16x^3}{(1+x^2)^6} - \frac{2x}{(1+x^2)^2}\biggl[ \frac{48x^3}{(1+x^2)^4} - \frac{24x}{(1+x^2)^3} \biggr] + \biggl[ \frac{2}{(1+x^2)^2} - \frac{8x^2}{(1+x^2)^3} \biggr]^2 = 0. \]
(3.1)

Proof Indeed, in Theorem 2.4, set φ(x) = −arctan x; then

\[ p(x) = \exp\bigl[-\varphi(x)\bigr] = \exp(\arctan x), \quad x \in (0,\beta). \]

By means of Mathematica software, we get

\[ \varphi''(x) = \frac{2x}{(1+x^2)^2} > 0, \quad \forall x \in (0,\beta), \]
\[ 2[\varphi''(x)]^3 - \varphi^{(4)}(x)\varphi''(x) + [\varphi'''(x)]^2 = \frac{16x^3}{(1+x^2)^6} - \frac{2x}{(1+x^2)^2}\biggl[ \frac{48x^3}{(1+x^2)^4} - \frac{24x}{(1+x^2)^3} \biggr] + \biggl[ \frac{2}{(1+x^2)^2} - \frac{8x^2}{(1+x^2)^3} \biggr]^2 \ge 0, \quad \forall x \in (0,\beta], \]

where equality holds if and only if x = β.

Since

\[ 0 < \int_0^\beta p < \infty \]

and

\[ G_p[0,\beta] \triangleq \int_0^\beta p \int_0^\beta p'' - \Bigl(\int_0^\beta p'\Bigr)^2 = -7.095040628958467\ldots < 0, \]

so (2.27) and (2.28) hold. By Theorem 2.4, the function p(x) is quasi-log concave. This ends the proof. □

Example 3.3 The function

\[ p: (0,\infty) \to (0,\infty), \qquad p(x) \triangleq x^\alpha, \]

is quasi-log concave, where α>0.

Proof Note that inequality (1.6) can be rewritten as

\[ \frac{\alpha}{\alpha+1}\bigl(b^{\alpha+1} - a^{\alpha+1}\bigr)\bigl(b^{\alpha-1} - a^{\alpha-1}\bigr) - \bigl(b^\alpha - a^\alpha\bigr)^2 \le 0, \quad \forall a,b \in I. \]
(3.2)

If 0 < α ≤ 1, then

\[ \frac{\alpha}{\alpha+1}\bigl(b^{\alpha+1} - a^{\alpha+1}\bigr)\bigl(b^{\alpha-1} - a^{\alpha-1}\bigr) \le 0, \]

and inequality (3.2) holds. Let α > 1. Then

\[ \bigl(b^{\alpha+1} - a^{\alpha+1}\bigr)\bigl(b^{\alpha-1} - a^{\alpha-1}\bigr) \ge 0. \]

Since

\[ 0 < \frac{\alpha}{\alpha+1} < 1 \]

and

\[ \bigl(b^{\alpha+1} - a^{\alpha+1}\bigr)\bigl(b^{\alpha-1} - a^{\alpha-1}\bigr) - \bigl(b^\alpha - a^\alpha\bigr)^2 = -a^{\alpha-1}b^{\alpha-1}(a-b)^2 \le 0, \]

we have

\[ \frac{\alpha}{\alpha+1}\bigl(b^{\alpha+1} - a^{\alpha+1}\bigr)\bigl(b^{\alpha-1} - a^{\alpha-1}\bigr) - \bigl(b^\alpha - a^\alpha\bigr)^2 \le \bigl(b^{\alpha+1} - a^{\alpha+1}\bigr)\bigl(b^{\alpha-1} - a^{\alpha-1}\bigr) - \bigl(b^\alpha - a^\alpha\bigr)^2 \le 0, \]

that is to say, inequality (3.2) still holds. The proof is completed. □

Example 3.4 The function

\[ p: (0,\infty) \to (0,\infty), \qquad p(x) \triangleq \exp\bigl(-e^x\bigr) \]

is quasi-log concave.

Proof Indeed,

\[ 0 < \int_0^\infty p = 0.21938393439552026\ldots < \infty, \qquad \varphi(x) = e^x, \qquad \lim_{x \to \infty} \varphi(x) = \lim_{x \to \infty} e^x = \infty, \]

and

\[ \lim_{x \to \infty} \varphi'(x)e^{-\varphi(x)} = \lim_{x \to \infty} \frac{[\varphi'(x)]^2}{\varphi''(x)} e^{-\varphi(x)} = \lim_{x \to \infty} e^x \exp\bigl(-e^x\bigr) = 0, \]

hence the conditions in (2.48) hold. Since

\[ \varphi''(x) = e^x > 0, \quad \forall x \in (0,\infty), \]

and

\[ 2[\varphi''(x)]^3 - \varphi^{(4)}(x)\varphi''(x) + [\varphi'''(x)]^2 = 2e^{3x} > 0, \quad \forall x \in (0,\infty), \]

inequalities in (2.27) hold. By Theorem 2.5, the function p is quasi-log concave. This ends the proof. □
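The constant ∫_0^∞ p quoted above can be reproduced numerically. This sketch is our addition; the cutoff at x = 10 and the midpoint rule are arbitrary choices, and the tail beyond the cutoff is negligible since exp(−e¹⁰) is astronomically small:

```python
import math

def integral(f, a, b, n=100000):
    # midpoint rule
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# ∫_0^∞ exp(-e^x) dx, truncated at x = 10
val = integral(lambda x: math.exp(-math.exp(x)), 0.0, 10.0)
print(round(val, 8))  # 0.21938393
```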

In the next section, we demonstrate the applications of Theorem 2.3 and Theorem 2.5 in the theory of the k-normal distribution.

4 Quasi-log concavity of pdf of k-normal distribution

The normal distribution (see [14–16]) is considered the most prominent probability distribution in statistics. Besides the important central limit theorem, which says that the mean of a large number of random variables drawn from a common distribution is, under mild conditions, distributed approximately normally, the normal distribution is also tractable in the sense that a large number of related results can be derived explicitly and that many qualitative properties may be stated in terms of various inequalities.

But perhaps one of the main practical uses of the normal distribution is to model empirical distributions of many different random variables encountered in practice. In such a case, a possible generalization would be families of distributions having more than two parameters (namely the mean and the standard deviation) which may be used to fit empirical distributions more accurately. Examples of such generalizations are the normal-exponential-gamma distribution, which contains three parameters, and the Pearson distribution, which contains four parameters for simulating different skewness and kurtosis values.

In this section, we first introduce another generalization of the normal distribution as follows: If the probability density function of the random variable X is

\[ p(x;\mu,\sigma,k) \triangleq \frac{k^{1-k^{-1}}}{2\Gamma(k^{-1})\sigma} \exp\Bigl( -\frac{|x-\mu|^k}{k\sigma^k} \Bigr), \]
(4.1)

then we say that the random variable X follows the k-normal distribution or generalized normal distribution (see [17] or [18]), denoted by X ∼ N_k(μ,σ), where

\[ x \in (-\infty,\infty), \qquad \mu \in (-\infty,\infty), \qquad \sigma \in (0,\infty), \qquad k \in (1,\infty), \]

and Γ(s) is the well-known gamma function.

For the probability density function p(x;μ,σ,k) of the k-normal distribution, the graphs of the functions p(x;0,1,3/2), p(x;0,1,2) and p(x;0,1,5/2) are depicted in Figure 1, and p(x;0,1,k) is depicted in Figure 2.

Figure 1 The graphs of the functions p(x;0,1,3/2), p(x;0,1,2) and p(x;0,1,5/2), where −4 ≤ x ≤ 4.

Figure 2 The graph of the function p(x;0,1,k), where −4 ≤ x ≤ 4 and 1 < k ≤ 3.

Clearly, when k = 2, p(x;μ,σ,k) is just the probability density function of the normal distribution N(μ,σ) with mean μ and standard deviation σ, and it is easily checked that p(x;0,1,k) is symmetric about 0 and that

\[ p(x;0,1,k) = \sigma p(\sigma x + \mu; \mu, \sigma, k), \quad \forall x \in (-\infty,\infty). \]
(4.2)

According to (4.1), we get (see (2) in [17])

\[ p\bigl( x; \mu, \sigma s^{-1/s}, s \bigr) = \frac{s}{2\sigma\Gamma(1/s)} \exp\Bigl( -\Bigl| \frac{x-\mu}{\sigma} \Bigr|^s \Bigr). \]
(4.3)

According to the results of [17], we may easily show that (see [17], p.688)

\[ p(x;\mu,\sigma,k) > 0, \qquad \int_{-\infty}^{\infty} p(x;\mu,\sigma,k)\,dx = 1, \]
(4.4)
\[ EX = \mu, \]
(4.5)
\[ E|X - EX|^k = \sigma^k, \]
(4.6)

and

\[ E(X - EX)^2 = \frac{k^{2k^{-1}}\,\Gamma(3k^{-1})}{\Gamma(k^{-1})}\,\sigma^2 \;
\begin{cases} > \sigma^2, & 1 < k < 2, \\ = \sigma^2, & k = 2, \\ < \sigma^2, & k > 2. \end{cases} \]
(4.7)

Here μ, σ k and σ are the mathematical expectation, k-order absolute central moment and k-order mean absolute central moment of the random variable X, respectively.
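Formulas (4.4) and (4.6) can be checked numerically. The sketch below is our illustration (the truncation at ±20 and the grid size are arbitrary choices): it implements the density (4.1) with μ = 0, σ = 1 and verifies that ∫p = 1 and E|X−μ|^k = σ^k = 1 for a few values of k.

```python
import math

def knormal_pdf(x, mu=0.0, sigma=1.0, k=2.0):
    # density (4.1): k^(1-1/k) / (2*Gamma(1/k)*sigma) * exp(-|x-mu|^k / (k*sigma^k))
    c = k ** (1 - 1 / k) / (2 * math.gamma(1 / k) * sigma)
    return c * math.exp(-abs(x - mu) ** k / (k * sigma ** k))

def integral(f, a, b, n=100000):
    # midpoint rule; the density tails beyond |x| = 20 are negligible here
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

results = {}
for k in (1.5, 2.0, 3.0):
    total = integral(lambda x: knormal_pdf(x, k=k), -20.0, 20.0)               # (4.4): should be 1
    kmom = integral(lambda x: abs(x) ** k * knormal_pdf(x, k=k), -20.0, 20.0)  # (4.6): should be 1
    results[k] = (total, kmom)
    print(k, round(total, 6), round(kmom, 6))
```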

We remark here that if

\[ p_0(x) = \exp\Bigl[ -\frac{(x-\mu)^k}{k\sigma^k} \Bigr], \quad x \in (\mu,\infty), \]

then

\[ w(x) \triangleq -p_0'(x) = \frac{(x-\mu)^{k-1}}{\sigma^k} \exp\Bigl[ -\frac{(x-\mu)^k}{k\sigma^k} \Bigr] > 0 \]

and

\[ \int_\mu^\infty w(x)\,dx = 1, \]

where w(x) is the probability density function of a Weibull distribution. Therefore, there are close relationships between the k-normal distribution and the Weibull distribution.

Next, we study the quasi-log concavity of the probability density function of the k-normal distribution.

Theorem 4.1 The probability density function p(x;μ,σ,k) of the k-normal distribution is quasi-log concave on (,) for all μ(,), σ(0,) and k(1,).

Proof In view of (4.2), we may assume that

\[ (\mu,\sigma) = (0,1). \]

For convenience, let

\[ p(x) \triangleq p(x;0,1,k) = c\exp\bigl[-\varphi(x)\bigr], \]

where

\[ c = \frac{k^{1-k^{-1}}}{2\Gamma(k^{-1})} > 0, \qquad \varphi(x) = \frac{|x|^k}{k}. \]

Then

\[ G_p[a,a] = 0 = G_p[b,b], \qquad G_p[a,b] = G_p[b,a] \quad\text{and}\quad G_p[a,b] = G_p[-b,-a], \]
(4.8)

where the last equality holds because p is even.

Now we show that

\[ G_p[a,b] \le 0, \quad \forall a,b \in (-\infty,\infty) \]
(4.9)

in two steps (A) and (B).

(A) We first consider the case where k ≥ 2.

By (4.8) and continuity, without loss of generality, we may assume that either

\[ \text{(i)}\ 0 \le a < b, \qquad\text{or}\qquad \text{(ii)}\ a < 0 < b. \]

We first consider case (i): 0 ≤ a < b.

By (4.4), we have

\[ \int_0^\infty p = \frac{1}{2} < \infty. \]

Since

\[ \lim_{x \to \infty} \varphi(x) = \lim_{x \to \infty} \frac{x^k}{k} = \infty, \qquad \lim_{x \to \infty} \varphi'(x)e^{-\varphi(x)} = \lim_{x \to \infty} x^{k-1}\exp\Bigl(-\frac{x^k}{k}\Bigr) = 0, \]

and

\[ \lim_{x \to \infty} \frac{[\varphi'(x)]^2}{\varphi''(x)} e^{-\varphi(x)} = \lim_{x \to \infty} \frac{x^{2k-2}}{(k-1)x^{k-2}} \exp\Bigl(-\frac{x^k}{k}\Bigr) = 0, \]

equations in (2.48) hold. Since

\[ \varphi''(x) = (k-1)x^{k-2} > 0, \quad \forall x \in (0,\infty), \]

and

\[ 2[\varphi''(x)]^3 - \varphi^{(4)}(x)\varphi''(x) + [\varphi'''(x)]^2 = 2(k-1)^3 x^{3k-6} - (k-1)^2(k-2)(k-3)x^{2k-6} + (k-1)^2(k-2)^2 x^{2k-6} = 2(k-1)^3 x^{3k-6} + (k-1)^2(k-2)x^{2k-6} > 0, \quad \forall x \in (0,\infty), \]

inequalities in (2.27) hold. By Theorem 2.5, inequality (4.9) holds.

Next, we consider case (ii): a < 0 < b.

Since

\[ p'(b) < 0 < p'(a), \]

we have

\[ G_p[a,b] = \Bigl(\int_a^b p\Bigr)\bigl[p'(b) - p'(a)\bigr] - \Bigl(\int_a^b p'\Bigr)^2 \le \Bigl(\int_a^b p\Bigr)\bigl[p'(b) - p'(a)\bigr] < 0, \]

that is, inequality (4.9) still holds.

(B) Next we assume that 1 < k < 2.

Since

\[ \varphi''(x) = (k-1)x^{k-2} > 0, \quad \forall x \in (0,\infty), \]

and

\[ 2\varphi''(x)\varphi'(x) - \varphi'''(x) = 2(k-1)x^{2k-3} + (k-1)(2-k)x^{k-3} \ge 0, \quad \forall x \in (0,\infty), \]

inequalities in (2.16) hold. By Theorem 2.3, inequality (4.9) holds.

Based on the above analysis, inequality (4.9) is proved.

The proof of Theorem 4.1 is completed. □

In the next section, we demonstrate the applications of Theorem 4.1 in the generalized hierarchical teaching model and the generalized traditional teaching model.

5 Applications in statistics

5.1 Hierarchical teaching model and truncated random variable

We first introduce the hierarchical teaching model as follows.

The usual teaching model treats the math score of each student in a class as a continuous random variable, written ξ_I, which takes values in the real interval I = [a₀, a_m], and whose probability density function p_I: I → (0,∞) is continuous. Suppose we now divide the students into m classes, written as

\[ \mathrm{Class}[a_0,a_1], \ \mathrm{Class}[a_1,a_2], \ \ldots, \ \mathrm{Class}[a_{m-1},a_m], \]

where

\[ 0 \le a_0 < a_1 < \cdots < a_m, \qquad m \ge 2, \]

and

\[ a_i, \ a_{i+1}, \quad i = 0,1,\ldots,m-1, \]

are the lowest and the highest allowable scores of the students of Class[ a i , a i + 1 ], respectively. We introduce a set

\[ \mathrm{HTM}\{a_0,\ldots,a_m, p_I\} \triangleq \bigl\{ \mathrm{Class}[a_0,a_1], \mathrm{Class}[a_1,a_2], \ldots, \mathrm{Class}[a_{m-1},a_m], p_I \bigr\}, \]

called a hierarchical teaching model (see [19–22]); the traditional teaching model, denoted by HTM{a₀, a_m, p_I}, is just the special case of HTM{a₀,…,a_m, p_I} with m = 1.

If a₀ = −∞ and a_m = ∞, then HTM{−∞,…,∞, p_I} and HTM{−∞,∞, p_I} are called the generalized hierarchical teaching model and the generalized traditional teaching model, respectively.

In order to study the hierarchical teaching model and the traditional teaching model from the angle of the analysis of variance, we need to recall the definition of a truncated random variable.

Definition 5.1 Let ξ_I ∈ I be a continuous random variable with continuous probability density function p_I: I → (0,∞). If ξ_J ∈ J ⊂ I is also a continuous random variable and its probability density function is

\[ p_J: J \to (0,\infty), \qquad p_J(t) \triangleq \frac{p_I(t)}{\int_J p_I}, \]

then we call the random variable ξ_J a truncated random variable of the random variable ξ_I, written ξ_J ⪯ ξ_I. If ξ_J ⪯ ξ_I and J ≠ I, then we call ξ_J a proper truncated random variable of ξ_I, written ξ_J ≺ ξ_I. Here I and J are (possibly higher-dimensional) intervals.

We point out that a basic property of truncated random variables is as follows: Let ξ_I ∈ I be a continuous random variable with continuous probability density function p_I: I → (0,∞). If

\[ \xi_{I'} \preceq \xi_I, \qquad \xi_{I''} \preceq \xi_{I'} \quad\text{and}\quad I'' \subset I' \subset I, \]

then ξ_{I″} ⪯ ξ_I. If

\[ \xi_{I'} \preceq \xi_I, \qquad \xi_{I''} \preceq \xi_I \quad\text{and}\quad I'' \subset I' \subset I, \]

then ξ_{I″} ⪯ ξ_{I′}.

Indeed, by Definition 5.1, the probability density functions of the truncated random variables ξ_{I′} and ξ_{I″} are

\[ p_{I'}: I' \to (0,\infty), \qquad p_{I'}(t) = \frac{p_I(t)}{\int_{I'} p_I}, \qquad\qquad p_{I''}: I'' \to (0,\infty), \qquad p_{I''}(t) = \frac{p_{I'}(t)}{\int_{I''} p_{I'}}, \]

respectively. Thus, the probability density function of ξ_{I″} can be rewritten as

\[ p_{I''}: I'' \to (0,\infty), \qquad p_{I''}(t) = \frac{p_I(t)/\int_{I'} p_I}{\int_{I''} \bigl( p_I / \int_{I'} p_I \bigr)} = \frac{p_I(t)}{\int_{I''} p_I}. \]

Hence

\[ I'' \subset I' \subset I \;\Rightarrow\; \xi_{I''} \preceq \xi_I \quad\text{and}\quad \xi_{I''} \preceq \xi_{I'}. \]

According to the definitions of the mathematical expectation Eφ(ξ_J) and the variance Var φ(ξ_J), together with Definition 5.1, we easily get

\[ E\varphi(\xi_J) \triangleq \int_J p_J \varphi = \frac{\int_J p_I \varphi}{\int_J p_I} \]
(5.1)

and

\[ \operatorname{Var}\varphi(\xi_J) \triangleq E\bigl[\varphi(\xi_J) - E\varphi(\xi_J)\bigr]^2 = \frac{\int_J p_I \varphi^2}{\int_J p_I} - \biggl(\frac{\int_J p_I \varphi}{\int_J p_I}\biggr)^2, \]
(5.2)

where ξ J ξ I , and the function φ:J(,) of the random variable ξ J is continuous.
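Formulas (5.1) and (5.2) with φ(x) = x give the mean and variance of a truncated random variable directly. The following sketch is our illustration (the density and the intervals J are arbitrary choices): it computes these moments for truncations of the standard normal and observes that each truncated variance stays below Var ξ_I = 1, in the spirit of (5.4).

```python
import math

def truncated_mean_var(p, a, b, n=50000):
    # (5.1) and (5.2) with phi(x) = x: moments of xi_J for J = [a, b]
    h = (b - a) / n
    xs = [a + (i + 0.5) * h for i in range(n)]
    m0 = sum(p(x) for x in xs) * h          # ∫_J p_I
    m1 = sum(x * p(x) for x in xs) * h      # ∫_J p_I · x
    m2 = sum(x * x * p(x) for x in xs) * h  # ∫_J p_I · x²
    mean = m1 / m0
    return mean, m2 / m0 - mean ** 2

# xi_I: standard normal on I = (-inf, inf), with Var xi_I = 1
p = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

for a, b in [(-1.0, 1.0), (0.0, 2.0), (-3.0, 0.5)]:
    mean, var = truncated_mean_var(p, a, b)
    print(var < 1.0)  # True for each interval: truncation reduces the variance
```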

In the generalized hierarchical teaching model HTM{−∞,…,∞, p_I}, the math score of each student in Class[a_i, a_{i+1}] is also a random variable, written ξ_{[a_i, a_{i+1}]}. Since

\[ [a_i, a_{i+1}] \subset I, \quad i = 0,1,\ldots,m-1, \]

ξ_{[a_i, a_{i+1}]} is a truncated random variable of the random variable ξ_I. Assume that the j−i classes, i.e.,

\[ \mathrm{Class}[a_i,a_{i+1}], \ \mathrm{Class}[a_{i+1},a_{i+2}], \ \ldots, \ \mathrm{Class}[a_{j-1},a_j], \]

are merged into one, written Class[a_i, a_j]. Since [a_i, a_j] ⊂ I, we know that ξ_{[a_i, a_j]} is a truncated random variable of ξ_I, where 0 ≤ i < j ≤ m. In general, we have

\[ \xi_{[a_i,a_j]} \preceq \xi_{[a_{i'},a_{j'}]} \preceq \xi_I, \quad \forall i', i, j, j': \ 0 \le i' \le i < j \le j' \le m. \]
(5.3)

In the generalized hierarchical teaching model HTM{−∞,…,∞, p_I}, we are concerned with the relationship between the variance Var ξ_{[a_i, a_j]} and the variance Var ξ_I, where 0 ≤ i < j ≤ m, so as to decide the superiority and inferiority of the hierarchical and the traditional teaching models.

If

\[ \operatorname{Var}\xi_{[a_i,a_j]} \le \operatorname{Var}\xi_I, \quad \forall i,j: \ 0 \le i < j \le m, \]
(5.4)

then, in view of the usual meaning of the variance, we tend to think that this generalized hierarchical teaching model is better than the generalized traditional teaching model. Otherwise, this generalized hierarchical teaching model is probably not worth promoting. Here I = (−∞,∞).

In this section, one of our purposes is to study the generalized hierarchical teaching model and the generalized traditional teaching model from the standpoint of the analysis of variance, so as to compare the two models. In particular, we study the conditions under which inequality (5.4) holds (see Theorem 5.2).

In the generalized hierarchical teaching model HTM{,,, p_I}, by means of mathematical software, we can choose the parameters $a_1,a_2,\ldots,a_{m-1}\in(-\infty,\infty)$ such that the 'variance'

$$\operatorname{Var}\bigl(\operatorname{Var}\xi_{[a_0,a_1]},\ldots,\operatorname{Var}\xi_{[a_{m-1},a_m]}\bigr)\triangleq\frac{1}{m}\sum_{j=0}^{m-1}\left(\operatorname{Var}\xi_{[a_j,a_{j+1}]}-\frac{1}{m}\sum_{i=0}^{m-1}\operatorname{Var}\xi_{[a_i,a_{i+1}]}\right)^2$$
(5.5)

of

$$\operatorname{Var}\xi_{[a_0,a_1]},\operatorname{Var}\xi_{[a_1,a_2]},\ldots,\operatorname{Var}\xi_{[a_{m-1},a_m]}$$

is minimal; the purpose is to make the scores of the classes

$$\mathrm{Class}[a_0,a_1],\mathrm{Class}[a_1,a_2],\ldots,\mathrm{Class}[a_{m-1},a_m]$$

stable. Here $a_0=-\infty$ and $a_m=\infty$.
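The minimization of (5.5) 'by means of mathematical software' can be sketched as follows. This hypothetical example (our own, not from the paper) takes $m=2$, standard normal scores, and a grid search over the single cut point $a_1$; by symmetry the two class variances coincide at $a_1=0$, which the search recovers. The finite bounds $\pm 10$ stand in for $\mp\infty$-type endpoints.

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule for the integral of f over [a, b]."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Standard normal scores; finite bounds stand in for -inf and +inf.
p = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
LO, HI = -10.0, 10.0

def trunc_var(a, b):
    """Var xi_[a,b] via (5.2) with phi(x) = x."""
    mass = simpson(p, a, b)
    m1 = simpson(lambda x: x * p(x), a, b) / mass
    m2 = simpson(lambda x: x * x * p(x), a, b) / mass
    return m2 - m1 * m1

def spread(a1):
    """The 'variance of class variances' (5.5) for m = 2 and cut point a1."""
    v = [trunc_var(LO, a1), trunc_var(a1, HI)]
    mean_v = sum(v) / 2
    return sum((vi - mean_v) ** 2 for vi in v) / 2

# Grid search for the cut point minimizing (5.5).
best_spread, best_cut = min((spread(c / 10), c / 10) for c in range(-10, 11))
```

For asymmetric parent densities or larger $m$ the same objective can be fed to any general-purpose optimizer instead of a grid search.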

Remark 5.1 We remark here that if $\xi_I\in I$ is a continuous random variable with continuous probability density function $p_I:I\to(0,\infty)$, then the integral $\int_Ip_I$ converges (see [23]) and satisfies the following conditions:

$$\int_Ip_I=1,\qquad P_I(x)\triangleq P(\xi_I<x)=\int_{(-\infty,x)\cap I}p_I,\quad\forall x\in I.$$
(5.6)

We call the function $P_I:I\to[0,1]$ the probability distribution function of the random variable $\xi_I$, where $P_I(x)$ is the probability of the random event '$\xi_I<x$', and $I$ is an interval.

5.2 Applications in the analysis of variance

The analysis of variance is one of the central topics in statistics. Recently, the authors of [24] have expanded the scope of the analysis of variance and obtained some interesting results.

In this section, we point out the significance of quasi-log concavity in the analysis of variance as follows.

Theorem 5.1 Let $\xi_I$ be a continuous random variable whose probability density function $p_I:I\to(0,\infty)$ is twice continuously differentiable. Then the function $p_I:I\to(0,\infty)$ is quasi-log concave if and only if

$$0\le\operatorname{Var}\bigl[(\log p_I)'(\xi_{[a,b]})\bigr]\le E\bigl[-(\log p_I)''(\xi_{[a,b]})\bigr],\quad\forall a,b\in I,a<b,$$
(5.7)

where $\xi_{[a,b]}\in[a,b]$ is a truncated random variable of the random variable $\xi_I$.

Proof By identities (2.6) and (5.1) with (5.2), we get

$$\begin{aligned}
&\operatorname{Var}\bigl[(\log p_I)'(\xi_{[a,b]})\bigr]+E\bigl[(\log p_I)''(\xi_{[a,b]})\bigr]\\
&\quad=\operatorname{Var}\left(\frac{p_I'}{p_I}\right)+E\left[\frac{p_Ip_I''-(p_I')^2}{p_I^2}\right]\\
&\quad=\frac{\int_a^b(p_I'/p_I)^2p_I}{\int_a^bp_I}-\left(\frac{\int_a^b(p_I'/p_I)p_I}{\int_a^bp_I}\right)^2+\frac{\int_a^b\frac{p_Ip_I''-(p_I')^2}{p_I^2}\,p_I}{\int_a^bp_I}\\
&\quad=\frac{\int_a^bp_I''}{\int_a^bp_I}-\left(\frac{\int_a^bp_I'}{\int_a^bp_I}\right)^2\\
&\quad=\frac{\int_a^bp_I\int_a^bp_I''-\bigl(\int_a^bp_I'\bigr)^2}{\bigl(\int_a^bp_I\bigr)^2},
\end{aligned}$$

i.e.,

$$\operatorname{Var}\bigl[(\log p_I)'(\xi_{[a,b]})\bigr]+E\bigl[(\log p_I)''(\xi_{[a,b]})\bigr]=\frac{\int_a^bp_I\int_a^bp_I''-\bigl(\int_a^bp_I'\bigr)^2}{\bigl(\int_a^bp_I\bigr)^2}.$$
(5.8)

According to identity (5.8), inequality (1.6) can be rewritten as (5.7).

This completes the proof of Theorem 5.1. □

Remark 5.2 According to Theorem 5.1, quasi-log concavity is of great significance in the analysis of variance.
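Identity (5.8) and inequality (5.7) can be checked numerically for a concrete log-concave density. The sketch below is our own illustration, not part of the paper: it uses the standard normal, for which $(\log p)'(x)=-x$ and $(\log p)''(x)=-1$, and compares the two sides of (5.8) computed by Simpson integration.

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule for the integral of f over [a, b]."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Standard normal: p'(x) = -x p(x), p''(x) = (x^2 - 1) p(x).
p   = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
dp  = lambda x: -x * p(x)
ddp = lambda x: (x * x - 1) * p(x)

a, b = -1.0, 2.5
I0 = simpson(p, a, b)
I1 = simpson(dp, a, b)
I2 = simpson(ddp, a, b)

# Left side of (5.8): Var[(log p)'(xi_[a,b])] + E[(log p)''(xi_[a,b])].
Ex  = simpson(lambda x: x * p(x), a, b) / I0
Ex2 = simpson(lambda x: x * x * p(x), a, b) / I0
lhs = (Ex2 - Ex ** 2) - 1.0   # Var(-xi) = Var(xi); E[(log p)''] = -1

# Right side of (5.8).
rhs = (I0 * I2 - I1 ** 2) / I0 ** 2
```

The two sides agree, and the left side is nonpositive, i.e. (5.7) holds on this interval, as the quasi-log concavity of the normal density requires.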

5.3 Applications in the generalized hierarchical teaching model

Now we demonstrate the application of Theorem 4.1 in the generalized hierarchical teaching model.

In the generalized hierarchical teaching model HTM{,,, p_I}, the math score of each student is treated as a random variable $\xi_I$, where $\xi_I\in I=(-\infty,\infty)$. By the central limit theorem (see [25]), we may assume that the random variable $\xi_I$ follows a normal distribution, that is, $\xi_I\sim N_2(\mu,\sigma)$, where $\mu$ is the average score of the students and $\sigma$ is the standard deviation of the score. Hence

$$p_I(x)=\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right],\quad\forall x\in I.$$
(5.9)

We remark here that if the math score $\xi_I$ of each student satisfies

$$\xi_I\in[0,1]\quad\text{and}\quad\mu\in[0,1],$$

then, by (5.9), we have

$$P(\xi_I<0)\approx0,\qquad P(\xi_I>1)\approx0.$$

Hence we can, approximately, use the generalized hierarchical teaching model in place of the hierarchical teaching model.

Based on the above analysis and Theorem 4.1, we have the following theorem.

Theorem 5.2 In the generalized hierarchical teaching model HTM{,,, p_I}, assume that $\xi_I\sim N_2(\mu,\sigma)$. Then we have the following inequality:

$$\operatorname{Var}\xi_{[a_i,a_j]}\le\operatorname{Var}\xi_I=\sigma^2,\quad\forall i,j:0\le i<j\le m.$$
(5.10)

Proof Note that

$$p_I(t)=p(t;\mu,\sigma,2)$$

is quasi-log concave by Theorem 4.1; hence inequality (5.7) holds by Theorem 5.1, so we obtain

$$\operatorname{Var}\bigl[(\log p_I)'(\xi_{[a_i,a_j]})\bigr]\le E\bigl[-(\log p_I)''(\xi_{[a_i,a_j]})\bigr].$$
(5.11)

Note that

$$\operatorname{Var}(\varphi+C)=\operatorname{Var}(\varphi),\qquad\operatorname{Var}(C\varphi)=C^2\operatorname{Var}(\varphi),\qquad E(C\varphi)=CE(\varphi),$$
$$(\log p_I)'(\xi_{[a_i,a_j]})=-\frac{\xi_{[a_i,a_j]}-\mu}{\sigma^2},\qquad-(\log p_I)''(\xi_{[a_i,a_j]})=\frac{1}{\sigma^2},\qquad E(1)=1,$$

where $C$ is a constant. By (5.11), we have

$$\operatorname{Var}\left(-\frac{\xi_{[a_i,a_j]}-\mu}{\sigma^2}\right)\le E\left(\frac{1}{\sigma^2}\right)\;\Longleftrightarrow\;\frac{1}{\sigma^4}\operatorname{Var}\xi_{[a_i,a_j]}\le\frac{1}{\sigma^2}E(1)\;\Longleftrightarrow\;\operatorname{Var}\xi_{[a_i,a_j]}\le\operatorname{Var}\xi_I=\sigma^2,$$

that is to say, inequality (5.10) holds.

This completes the proof of Theorem 5.2. □

Remark 5.3 According to Theorem 5.2, we may conclude that the generalized hierarchical teaching model is normally better than the generalized traditional teaching model.
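Inequality (5.10) is easy to confirm numerically. In the following hypothetical sketch (our own, not part of the paper) the score distribution is $N_2(\mu,\sigma)$ with $\mu=70$ and $\sigma=10$ (illustrative values only), and the truncated variance is computed for every pair of cut points; each one stays below $\sigma^2=100$.

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule for the integral of f over [a, b]."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

mu, sigma = 70.0, 10.0   # illustrative score distribution N_2(mu, sigma)
p = lambda x: math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def trunc_var(a, b):
    """Var xi_[a,b] via (5.2) with phi(x) = x."""
    mass = simpson(p, a, b)
    m1 = simpson(lambda x: x * p(x), a, b) / mass
    m2 = simpson(lambda x: x * x * p(x), a, b) / mass
    return m2 - m1 * m1

cuts = [40.0, 55.0, 70.0, 85.0, 100.0]   # finite stand-ins for a_0 < ... < a_m
variances = [trunc_var(cuts[i], cuts[j])
             for i in range(len(cuts)) for j in range(i + 1, len(cuts))]
```

Every merged class $\mathrm{Class}[a_i,a_j]$ therefore has a smaller score variance than the whole population, which is the quantitative content of Theorem 5.2.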

5.4 Applications in the generalized traditional teaching model

Next, we demonstrate the applications of Theorem 4.1 in the generalized traditional teaching model as follows.

In the generalized traditional teaching model HTM{,, p_I}, the math score of each student is treated as a random variable $\xi_I$, where $\xi_I\in I=(-\infty,\infty)$. By the central limit theorem (see [25]), we may assume that the random variable $\xi_I$ follows a normal distribution, that is, $\xi_I\sim N_2(\mu,\sigma)$, where $\mu$ is the average score of the students and $\sigma$ is the standard deviation of the score. If the top and bottom students are insignificant, that is to say, the variance $\operatorname{Var}\xi_I$ of the random variable $\xi_I$ is close to 0, then, according to Figure 1 and Figure 2 with formula (4.7), we may assume that there is a real number $k\in(2,\infty)$ such that $\xi_I\sim N_k(\mu,\sigma)$. Otherwise, we may assume that there is a real number $k\in(1,2)$ such that $\xi_I\sim N_k(\mu,\sigma)$. We can estimate the number $k$ by means of a sampling procedure.

In the generalized traditional teaching model HTM{,, p_I}, we may assume that

$$\xi_J\propto\xi_I,\qquad\xi_I\sim N_k(\mu,\sigma),\quad k>1,$$

where $J=(\mu,\infty)$, $\mu\in(0,\infty)$ is the average math score of the students, and $\sigma$ is the $k$-order mean absolute central moment of the score. Then the probability density function of $\xi_J$ is

$$p_J(t)=\frac{p(t;\mu,\sigma,k)}{\int_Jp(x;\mu,\sigma,k)\,dx},\quad\forall t\in J.$$

In the generalized traditional teaching model HTM{,, p_I}, suppose that the math score of a student is $\xi_J$. In order to stimulate the learning enthusiasm of the students, we may want to give each student a bonus payment $A(\xi_J)$. The function

$$A:J\to(0,\infty)$$

may be regarded as an allowance function.

In the generalized traditional teaching model HTM{,, p_I}, we define the allowance function as follows:

$$A:J\to(0,\infty),\qquad A(x)\triangleq c(x-\mu)^{k-1},\quad c>0,k>1.$$
(5.12)

For the above allowance function (5.12), we have the following theorem.

Theorem 5.3 In the generalized traditional teaching model HTM{,, p_I}, assume that

$$\xi_J\propto\xi_I,\qquad\xi_I\sim N_k(\mu,\sigma),\quad k>1.$$

Then we have the following inequalities:

$$0\le\operatorname{Var}\bigl[A(\xi_{[a,b]})\bigr]\le c\sigma^kE\bigl[A'(\xi_{[a,b]})\bigr],\quad\forall a,b\in J,a<b.$$
(5.13)

Here the allowance function A is defined by (5.12).

Proof By Theorem 4.1, the function $p_J$ is quasi-log concave on $J$. Hence inequalities (5.7) hold by Theorem 5.1. Note that

$$\operatorname{Var}(CA)=C^2\operatorname{Var}(A),\qquad E(CA)=CE(A),$$
$$(\log p_I)'(\xi_{[a,b]})=-\frac{(\xi_{[a,b]}-\mu)^{k-1}}{\sigma^k}=-\frac{1}{c\sigma^k}A(\xi_{[a,b]}),$$

and

$$-(\log p_I)''(\xi_{[a,b]})=\frac{1}{c\sigma^k}A'(\xi_{[a,b]}),$$

where $C$ is a constant. By inequalities (5.7), we get

$$\operatorname{Var}\bigl[(\log p_I)'(\xi_{[a,b]})\bigr]\le E\bigl[-(\log p_I)''(\xi_{[a,b]})\bigr],\qquad\operatorname{Var}\left[-\frac{1}{c\sigma^k}A(\xi_{[a,b]})\right]\le E\left[\frac{1}{c\sigma^k}A'(\xi_{[a,b]})\right],$$

and

$$\frac{1}{(c\sigma^k)^2}\operatorname{Var}\bigl[A(\xi_{[a,b]})\bigr]\le\frac{1}{c\sigma^k}E\bigl[A'(\xi_{[a,b]})\bigr].$$

That is to say, inequalities (5.13) hold.

This completes the proof of Theorem 5.3. □
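Inequality (5.13) can likewise be checked numerically. The sketch below (our own illustration) assumes, consistently with the derivatives used in the proof of Theorem 5.3, that on $J=(\mu,\infty)$ the $k$-normal density is proportional to $\exp(-(x-\mu)^k/(k\sigma^k))$; the normalizing constant cancels in the truncated expectations and is omitted. All parameter values are illustrative.

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule for the integral of f over [a, b]."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

mu, sigma, k, c = 0.0, 1.0, 3.0, 2.0       # illustrative parameters, k > 1, c > 0
q  = lambda x: math.exp(-((x - mu) ** k) / (k * sigma ** k))  # unnormalized k-normal on J
A  = lambda x: c * (x - mu) ** (k - 1)                        # allowance function (5.12)
dA = lambda x: c * (k - 1) * (x - mu) ** (k - 2)              # A'

a, b = 0.2, 1.8                            # an interval [a, b] inside J = (mu, inf)
mass = simpson(q, a, b)
EA   = simpson(lambda x: q(x) * A(x), a, b) / mass
EA2  = simpson(lambda x: q(x) * A(x) ** 2, a, b) / mass
EdA  = simpson(lambda x: q(x) * dA(x), a, b) / mass
var_A = EA2 - EA ** 2                      # Var[A(xi_[a,b])]
```

In words: the dispersion of the bonus payments over any score band $[a,b]$ is controlled by $c\sigma^k$ times the average marginal bonus, which is what (5.13) asserts.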

Remark 5.4 A large number of results from inequality analysis and statistical theory are used in this paper; some of the theories used in the proofs of our results can be found in [5, 23, 24, 26-34].

References

  1. Tong Y: An adaptive solution to ranking and selection problems. Ann. Stat. 1978, 6(3): 658-672. 10.1214/aos/1176344210

  2. Bagnoli M, Bergstrom T: Log-concave probability and its applications. Mimeo (1991)

  3. Patel JK, Read CB: Handbook of the Normal Distribution. 2nd edition. CRC Press, Boca Raton; 1996.

  4. Heckman JJ, Honoré BE: The empirical content of the Roy model. Econometrica 1990, 58(5): 1121-1149. 10.2307/2938303

  5. Wang WL: Approaches to Prove Inequalities. Harbin Institute of Technology Press, Harbin; 2011. (in Chinese)

  6. Wen JJ, Gao CB, Wang WL: Inequalities of J-P-S-F type. J. Math. Inequal. 2013, 7(2): 213-225.

  7. Mohtashami Borzadaran GR, Mohtashami Borzadaran HA: Log concavity property for some well-known distributions. Surv. Math. Appl. 2011, 6: 203-219.

  8. An MY: Log-concave probability distributions: theory and statistical testing. Working paper, Duke University (1995)

  9. An MY: Log-concavity and statistical inference of linear index modes. Manuscript, Duke University (1997)

  10. Finner H, Roters M: Log-concavity and inequalities for Chi-square, F and Beta distributions with applications in multiple comparisons. Stat. Sin. 1997, 7: 771-787.

  11. Miravete EJ: Preserving log-concavity under convolution: Comment. Econometrica 2002, 70(3): 1253-1254. 10.1111/1468-0262.00327

  12. Al-Zahrani B, Stoyanov J: On some properties of life distributions with increasing elasticity and log-concavity. Appl. Math. Sci. 2008, 2(48): 2349-2361.

  13. Caramanis C, Mannor S: An inequality for nearly log-concave distributions with applications to learning. IEEE Trans. Inf. Theory 2007, 53(3): 1043-1057.

  14. Wlodzimierz B: The Normal Distribution: Characterizations with Applications. Springer, Berlin; 1995.

  15. Spiegel MR: Theory and Problems of Probability and Statistics. McGraw-Hill, New York; 1992: 109-111.

  16. Whittaker ET, Robinson G: The Calculus of Observations: A Treatise on Numerical Mathematics. 4th edition. Dover, New York; 1967: 164-208.

  17. Nadarajah S: A generalized normal distribution. J. Appl. Stat. 2005, 32(7): 685-694. 10.1080/02664760500079464

  18. Varanasi MK, Aazhang B: Parametric generalized Gaussian density estimation. J. Acoust. Soc. Am. 1989, 86(4): 1404-1415. 10.1121/1.398700

  19. Hawkins GE, Brown SD, Steyvers M, Wagenmakers EJ: Context effects in multi-alternative decision making: empirical data and a Bayesian model. Cogn. Sci. 2012, 36: 498-516. 10.1111/j.1551-6709.2011.01221.x

  20. Yang CF, Pu YJ: Bayes analysis of hierarchical teaching. Math. Pract. Theory 2004, 34(9): 107-113. (in Chinese)

  21. de Valpine P: Shared challenges and common ground for Bayesian and classical analysis of hierarchical models. Ecol. Appl. 2009, 19: 584-588. 10.1890/08-0562.1

  22. Carlin BP, Gelfand AE, Smith AFM: Hierarchical Bayesian analysis of change point problem. Appl. Stat. 1992, 41(2): 389-405. 10.2307/2347570

  23. Wen JJ, Han TY, Gao CB: Convergence tests on constant Dirichlet series. Comput. Math. Appl. 2011, 62(9): 3472-3489. 10.1016/j.camwa.2011.08.064

  24. Wen JJ, Han TY, Cheng SS: Inequalities involving Dresher variance mean. J. Inequal. Appl. 2013, 2013: Article ID 366.

  25. Johnson OT: Information Theory and the Central Limit Theorem. Imperial College Press, London; 2004: 88.

  26. Wen JJ, Zhang ZH: Jensen type inequalities involving homogeneous polynomials. J. Inequal. Appl. 2010, 2010: Article ID 850215.

  27. Wen JJ, Cheng SS: Optimal sublinear inequalities involving geometric and power means. Math. Bohem. 2009, 2009(2): 133-149.

  28. Pečarić JE, Wen JJ, Wang WL, Tao L: A generalization of Maclaurin's inequalities and its applications. Math. Inequal. Appl. 2005, 8(4): 583-598.

  29. Gao CB, Wen JJ: Theory of surround system and associated inequalities. Comput. Math. Appl. 2012, 63: 1621-1640. 10.1016/j.camwa.2012.03.037

  30. Wen JJ, Wang WL: Inequalities involving generalized interpolation polynomial. Comput. Math. Appl. 2008, 56(4): 1045-1058. 10.1016/j.camwa.2008.01.032

  31. Wen JJ, Wang WL: Chebyshev type inequalities involving permanents and their applications. Linear Algebra Appl. 2007, 422(1): 295-303. 10.1016/j.laa.2006.10.014

  32. Wen JJ, Cheng SS: Closed balls for interpolating quasi-polynomials. Comput. Appl. Math. 2011, 30(3): 545-570.

  33. Wen JJ, Wu SH, Gao CB: Sharp lower bounds involving circuit layout system. J. Inequal. Appl. 2013, 2013: Article ID 592.

  34. Wen JJ, Wu SH, Tian YH: Minkowski-type inequalities involving Hardy function and symmetric functions. J. Inequal. Appl. 2014, 2014: Article ID 186.


Acknowledgements

This work was supported in part by the Natural Science Foundation of China (No. 61309015) and the Natural Science Foundation of Sichuan Province Education Department (No. 07ZA207). The authors are indebted to several unknown referees for many useful comments and keen observations which led to the present improved version of the paper as it stands.

Author information

Correspondence to Tian Yong Han.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in this paper. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Wen, J.J., Han, T.Y. & Cheng, S.S. Quasi-log concavity conjecture and its applications in statistics. J Inequal Appl 2014, 339 (2014). https://doi.org/10.1186/1029-242X-2014-339


Keywords

  • quasi-log concavity
  • quasi-log concavity conjecture
  • truncated random variable
  • hierarchical teaching model
  • k-normal distribution