Some inequalities on generalized entropies

Abstract

We give several inequalities on generalized entropies involving Tsallis entropies, using some inequalities obtained by the improvements of Young’s inequality. We also give a generalized Han’s inequality.

MSC:26D15, 94A17.

1 Introduction

Generalized entropies have been studied by many researchers (we refer the interested reader to [1, 2]). Rényi [3] and Tsallis [4] entropies are well known as one-parameter generalizations of Shannon’s entropy, being intensively studied not only in the field of classical statistical physics [5–7], but also in the field of quantum physics in relation to entanglement [8–11]. The Tsallis entropy is a natural one-parameter extended form of the Shannon entropy, hence it can be applied to known models which describe systems of great interest in atomic physics [12]. However, to the best of our knowledge, the physical relevance of the parameter of the Tsallis entropy has been highly debated and has not been completely clarified yet, the parameter being considered as a measure of the non-extensivity of the system under consideration. One of the authors of the present paper studied the Tsallis entropy and the Tsallis relative entropy from the mathematical point of view. Firstly, fundamental properties of the Tsallis relative entropy were discussed in [13]. The uniqueness theorem for the Tsallis entropy and Tsallis relative entropy was studied in [14]. Following this result, an axiomatic characterization of a two-parameter extended relative entropy was given in [15]. In [16], information theoretical properties of the Tsallis entropy and some inequalities for conditional and joint Tsallis entropies were derived. These entropies are used again in the present paper, to derive the generalized Han’s inequality. In [17], matrix trace inequalities for the Tsallis entropy were studied. And in [18], the maximum entropy principle for the Tsallis entropy and the minimization of the Fisher information in Tsallis statistics were studied. Quite recently, we provided mathematical inequalities for some divergences in [19], considering that it is important to study mathematical inequalities for the development of new entropies. In this paper, we define a further generalized entropy based on the Tsallis and Rényi entropies and study its mathematical properties by the use of scalar inequalities, in order to develop the theory of entropies.

We start from the weighted quasilinear mean for some continuous and strictly monotonic function $\psi: I \to \mathbb{R}$, defined by

$$M_\psi(x_1,x_2,\dots,x_n)\equiv\psi^{-1}\left(\sum_{j=1}^n p_j\,\psi(x_j)\right),$$
(1)

where $\sum_{j=1}^n p_j=1$, $p_j>0$, $x_j\in I$ for $j=1,2,\dots,n$ and $n\in\mathbb{N}$. If we take $\psi(x)=x$, then $M_\psi(x_1,x_2,\dots,x_n)$ coincides with the weighted arithmetic mean $A(x_1,x_2,\dots,x_n)\equiv\sum_{j=1}^n p_jx_j$. If we take $\psi(x)=\log(x)$, then $M_\psi(x_1,x_2,\dots,x_n)$ coincides with the weighted geometric mean $G(x_1,x_2,\dots,x_n)\equiv\prod_{j=1}^n x_j^{p_j}$.

If $\psi(x)=x$ and $x_j=\ln_q\frac{1}{p_j}$, then $M_\psi(x_1,x_2,\dots,x_n)$ is equal to the Tsallis entropy [4]:

$$H_q(p_1,p_2,\dots,p_n)\equiv-\sum_{j=1}^n p_j^q\ln_q p_j=\sum_{j=1}^n p_j\ln_q\frac{1}{p_j}\quad(q\geq0,\ q\neq1),$$
(2)

where $\{p_1,p_2,\dots,p_n\}$ is a probability distribution with $p_j>0$ for all $j=1,2,\dots,n$, and the q-logarithmic function for $x>0$ is defined by $\ln_q(x)\equiv\frac{x^{1-q}-1}{1-q}$, which uniformly converges to the usual logarithmic function $\log(x)$ in the limit $q\to1$. Therefore, the Tsallis entropy converges to the Shannon entropy in the limit $q\to1$:

$$\lim_{q\to1}H_q(p_1,p_2,\dots,p_n)=H_1(p_1,p_2,\dots,p_n)\equiv-\sum_{j=1}^n p_j\log p_j.$$
(3)
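As a quick numerical illustration, the following minimal sketch (assuming NumPy is available; the probability distribution is an arbitrary example) computes the q-logarithm and the Tsallis entropy (2) and checks that $H_q$ approaches the Shannon entropy (3) as $q\to1$.

import numpy as np

def ln_q(x, q):
    # q-logarithm: (x**(1-q) - 1)/(1-q); reduces to log(x) as q -> 1
    return np.log(x) if abs(q - 1.0) < 1e-12 else (x**(1.0 - q) - 1.0) / (1.0 - q)

def tsallis_entropy(p, q):
    # H_q(p) = sum_j p_j ln_q(1/p_j) = -sum_j p_j^q ln_q(p_j)
    p = np.asarray(p, dtype=float)
    return float(np.sum(p * ln_q(1.0 / p, q)))

p = [0.5, 0.3, 0.2]
shannon = float(-np.sum(np.asarray(p) * np.log(p)))
for q in (0.5, 0.9, 0.999, 1.5):
    print(q, tsallis_entropy(p, q))
print("Shannon entropy:", shannon)   # H_q tends to this value as q -> 1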

Thus, we find that the Tsallis entropy is one of the generalizations of the Shannon entropy. It is known that the Rényi entropy [3] is also a generalization of the Shannon entropy. Here, we review a quasilinear entropy [1] as another generalization of the Shannon entropy. For a continuous and strictly monotonic function $\phi$ on $(0,1]$, the quasilinear entropy is given by

$$I^\phi(p_1,p_2,\dots,p_n)\equiv-\log\phi^{-1}\left(\sum_{j=1}^n p_j\,\phi(p_j)\right).$$
(4)

If we take $\phi(x)=\log(x)$ in (4), then we have $I^{\log}(p_1,p_2,\dots,p_n)=H_1(p_1,p_2,\dots,p_n)$. We may redefine the quasilinear entropy by

$$I_1^\psi(p_1,p_2,\dots,p_n)\equiv\log\psi^{-1}\left(\sum_{j=1}^n p_j\,\psi\!\left(\frac{1}{p_j}\right)\right)$$
(5)

for a continuous and strictly monotonic function $\psi$ on $(0,\infty)$. If we take $\psi(x)=\log(x)$ in (5), we have $I_1^{\log}(p_1,p_2,\dots,p_n)=H_1(p_1,p_2,\dots,p_n)$. The case $\psi(x)=x^{1-q}$ is also useful in practice, since we recapture the Rényi entropy, namely $I_1^{x^{1-q}}(p_1,p_2,\dots,p_n)=R_q(p_1,p_2,\dots,p_n)$, where the Rényi entropy [3] is defined by

$$R_q(p_1,p_2,\dots,p_n)\equiv\frac{1}{1-q}\log\left(\sum_{j=1}^n p_j^q\right).$$
(6)
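A similar sketch (again assuming NumPy; the value $q=0.7$ and the distribution are arbitrary choices) illustrates that the quasilinear entropy (5) with $\psi(x)=x^{1-q}$ reproduces the Rényi entropy (6).

import numpy as np

def quasilinear_entropy(p, psi, psi_inv):
    # I_1^psi(p) = log( psi^{-1}( sum_j p_j psi(1/p_j) ) ), Eq. (5)
    p = np.asarray(p, dtype=float)
    return float(np.log(psi_inv(np.sum(p * psi(1.0 / p)))))

def renyi_entropy(p, q):
    # R_q(p) = log( sum_j p_j^q ) / (1 - q), Eq. (6)
    p = np.asarray(p, dtype=float)
    return float(np.log(np.sum(p**q)) / (1.0 - q))

q = 0.7
p = [0.6, 0.3, 0.1]
psi = lambda x: x**(1.0 - q)
psi_inv = lambda y: y**(1.0 / (1.0 - q))
print(quasilinear_entropy(p, psi, psi_inv))   # coincides with R_q(p)
print(renyi_entropy(p, q))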

From the viewpoint of applications to source coding, the relation between the weighted quasilinear mean and the Rényi entropy has been studied in Chapter 5 of [1] in the following way.

Theorem A ([1])

For all real numbers $q>0$ and integers $D>1$, there exists a code $(x_1,x_2,\dots,x_n)$ such that

$$\frac{R_q(p_1,p_2,\dots,p_n)}{\log D}\le M_{D^{\frac{1-q}{q}x}}(x_1,x_2,\dots,x_n)<\frac{R_q(p_1,p_2,\dots,p_n)}{\log D}+1,$$
(7)

where the exponential function $D^{\frac{1-q}{q}x}$ is defined on $[1,\infty)$.

By simple calculations, we find that

$$\lim_{q\to1}M_{D^{\frac{1-q}{q}x}}(x_1,x_2,\dots,x_n)=\sum_{j=1}^n p_jx_j$$

and

$$\lim_{q\to1}\frac{R_q(p_1,p_2,\dots,p_n)}{\log D}=-\sum_{j=1}^n p_j\log_D p_j.$$

Therefore, Theorem A appears as a generalization of Shannon’s famous source coding theorem:

$$-\sum_{j=1}^n p_j\log_D p_j\le\sum_{j=1}^n p_jx_j<-\sum_{j=1}^n p_j\log_D p_j+1.$$

Motivated by the above results and recent advances on the Tsallis entropy theory, we investigate the mathematical results for generalized entropies involving Tsallis entropies and quasilinear entropies, using some inequalities obtained by improvements of Young’s inequality.

Definition 1.1 For a continuous and strictly monotonic function $\psi$ on $(0,\infty)$ and two probability distributions $\{p_1,p_2,\dots,p_n\}$ and $\{r_1,r_2,\dots,r_n\}$ with $p_j>0$, $r_j>0$ for all $j=1,2,\dots,n$, the quasilinear relative entropy is defined by

$$D_1^\psi(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)\equiv-\log\psi^{-1}\left(\sum_{j=1}^n p_j\,\psi\!\left(\frac{r_j}{p_j}\right)\right).$$
(8)

The quasilinear relative entropy coincides with the Shannon relative entropy if ψ(x)=log(x), i.e.,

$$D_1^{\log}(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)=-\sum_{j=1}^n p_j\log\frac{r_j}{p_j}=D_1(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n).$$

We denote by $R_q(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)$ the Rényi relative entropy [3] defined by

$$R_q(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)\equiv\frac{1}{q-1}\log\left(\sum_{j=1}^n p_j^qr_j^{1-q}\right).$$
(9)

This is another particular case of the quasilinear relative entropy, namely for $\psi(x)=x^{1-q}$ we have

$$D_1^{x^{1-q}}(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)=-\log\left(\sum_{j=1}^n p_j\left(\frac{r_j}{p_j}\right)^{1-q}\right)^{\frac{1}{1-q}}=\frac{1}{q-1}\log\left(\sum_{j=1}^n p_j^qr_j^{1-q}\right)=R_q(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n).$$

We denote by

$$D_q(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)\equiv\sum_{j=1}^n p_j^q(\ln_q p_j-\ln_q r_j)=-\sum_{j=1}^n p_j\ln_q\frac{r_j}{p_j}$$
(10)

the Tsallis relative entropy which converges to the usual relative entropy (divergence, K-L information) in the limit q1:

$$\lim_{q\to1}D_q(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)=D_1(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)\equiv\sum_{j=1}^n p_j(\log p_j-\log r_j).$$
(11)
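The following sketch (NumPy assumed; the two distributions are arbitrary examples) evaluates the Tsallis relative entropy (10) and checks that it approaches the Kullback-Leibler divergence (11) as $q\to1$.

import numpy as np

def ln_q(x, q):
    return np.log(x) if abs(q - 1.0) < 1e-12 else (x**(1.0 - q) - 1.0) / (1.0 - q)

def tsallis_relative_entropy(p, r, q):
    # D_q(p||r) = -sum_j p_j ln_q(r_j/p_j), Eq. (10)
    p, r = np.asarray(p, float), np.asarray(r, float)
    return float(-np.sum(p * ln_q(r / p, q)))

p = np.array([0.5, 0.3, 0.2])
r = np.array([0.4, 0.4, 0.2])
kl = float(np.sum(p * (np.log(p) - np.log(r))))
for q in (0.5, 0.99, 1.01, 1.5):
    print(q, tsallis_relative_entropy(p, r, q))
print("Kullback-Leibler divergence:", kl)   # D_q tends to this value as q -> 1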

See [2, 5–7, 13–20] and references therein for recent advances and applications of the Tsallis entropy. We easily find that the Tsallis relative entropy is a special case of the Csiszár f-divergence [21–23], defined for a convex function $f$ on $(0,\infty)$ with $f(1)=0$ by

$$D_f(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)\equiv\sum_{j=1}^n r_jf\!\left(\frac{p_j}{r_j}\right),$$
(12)

since $f(x)=-x\ln_q(1/x)$ is convex on $(0,\infty)$, vanishes at $x=1$ and

$$D_{-x\ln_q(1/x)}(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)=D_q(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n).$$

Furthermore, we define the dual function with respect to a convex function f by

$$f^*(t)=tf\!\left(\frac{1}{t}\right)$$
(13)

for $t>0$. Then the function $f^*(t)$ is also convex on $(0,\infty)$. In addition, we define the f-divergence for incomplete probability distributions $\{a_1,a_2,\dots,a_n\}$ and $\{b_1,b_2,\dots,b_n\}$, where $a_i>0$ and $b_i>0$, in the following way:

$$\widetilde{D}_f(a_1,a_2,\dots,a_n\|b_1,b_2,\dots,b_n)\equiv\sum_{j=1}^n a_jf\!\left(\frac{b_j}{a_j}\right).$$
(14)
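As a small numerical check (NumPy assumed; the distributions and the value of $q$ are arbitrary), the f-divergence (12) with $f(x)=-x\ln_q(1/x)$ indeed returns the Tsallis relative entropy (10).

import numpy as np

def ln_q(x, q):
    return np.log(x) if abs(q - 1.0) < 1e-12 else (x**(1.0 - q) - 1.0) / (1.0 - q)

def f_divergence(p, r, f):
    # D_f(p||r) = sum_j r_j f(p_j/r_j), Eq. (12)
    p, r = np.asarray(p, float), np.asarray(r, float)
    return float(np.sum(r * f(p / r)))

q = 1.4
f = lambda x: -x * ln_q(1.0 / x, q)       # convex on (0, inf) and f(1) = 0
p = np.array([0.5, 0.3, 0.2])
r = np.array([0.4, 0.4, 0.2])
d_q = float(-np.sum(p * ln_q(r / p, q)))  # Tsallis relative entropy, Eq. (10)
print(f_divergence(p, r, f), d_q)         # the two values coincide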

On the other hand, the study of refinements of Young’s inequality has made great progress in the papers [24–35]. In the present paper, we give some inequalities on Tsallis entropies by applying two types of inequalities obtained in [29, 32]. In addition, we give the generalized Han’s inequality for the Tsallis entropy in the final section.

2 Tsallis quasilinear entropy and Tsallis quasilinear relative entropy

By analogy with (5), we may define the following entropy.

Definition 2.1 For a continuous and strictly monotonic function $\psi$ on $(0,\infty)$ and $q\ge0$ with $q\ne1$, the Tsallis quasilinear entropy (q-quasilinear entropy) is defined by

$$I_q^\psi(p_1,p_2,\dots,p_n)\equiv\ln_q\psi^{-1}\left(\sum_{j=1}^n p_j\,\psi\!\left(\frac{1}{p_j}\right)\right),$$
(15)

where { p 1 , p 2 ,, p n } is a probability distribution with p j >0 for all j=1,2,,n.

We notice that if ψ does not depend on q, then lim q 1 I q ψ ( p 1 , p 2 ,, p n )= I 1 ψ ( p 1 , p 2 ,, p n ).

For $x>0$ and $q\ge0$ with $q\ne1$, we define the q-exponential function as the inverse function of the q-logarithmic function by $\exp_q(x)\equiv\{1+(1-q)x\}^{1/(1-q)}$ if $1+(1-q)x>0$; otherwise it is undefined. If we take $\psi(x)=\ln_q(x)$, then we have $I_q^{\ln_q}(p_1,p_2,\dots,p_n)=H_q(p_1,p_2,\dots,p_n)$. Furthermore, we have

$$I_q^{x^{1-q}}(p_1,p_2,\dots,p_n)=\ln_q\left(\sum_{j=1}^n p_j\,p_j^{q-1}\right)^{\frac{1}{1-q}}=\ln_q\left(\sum_{j=1}^n p_j^q\right)^{\frac{1}{1-q}}=\frac{\left[\left(\sum_{j=1}^n p_j^q\right)^{\frac{1}{1-q}}\right]^{1-q}-1}{1-q}=\sum_{j=1}^n\frac{p_j^q-p_j}{1-q}=H_q(p_1,p_2,\dots,p_n).$$

Proposition 2.2 The Tsallis quasilinear entropy is nonnegative:

$$I_q^\psi(p_1,p_2,\dots,p_n)\ge0.$$

Proof We assume that $\psi$ is an increasing function. Then we have $\psi\!\left(\frac{1}{p_j}\right)\ge\psi(1)$ from $\frac{1}{p_j}\ge1$ for $p_j>0$ for all $j=1,2,\dots,n$. Thus, we have $\sum_{j=1}^n p_j\psi\!\left(\frac{1}{p_j}\right)\ge\psi(1)$, which implies $\psi^{-1}\!\left(\sum_{j=1}^n p_j\psi\!\left(\frac{1}{p_j}\right)\right)\ge1$, since $\psi^{-1}$ is also increasing; hence $I_q^\psi(p_1,p_2,\dots,p_n)=\ln_q\psi^{-1}\!\left(\sum_{j=1}^n p_j\psi\!\left(\frac{1}{p_j}\right)\right)\ge\ln_q1=0$. For the case that $\psi$ is a decreasing function, we can prove it similarly. □

We note here that the q-exponential function gives us the following connection between the Rényi entropy and Tsallis entropy [36]:

$$\exp R_q(p_1,p_2,\dots,p_n)=\exp_qH_q(p_1,p_2,\dots,p_n).$$
(16)

We should note here that $\exp_qH_q(p_1,p_2,\dots,p_n)$ is always defined, since we have

$$1+(1-q)H_q(p_1,p_2,\dots,p_n)=\sum_{j=1}^n p_j^q>0.$$
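A minimal numerical sketch of identity (16) (NumPy assumed; the distribution and $q$ are arbitrary):

import numpy as np

def exp_q(x, q):
    # q-exponential: {1 + (1-q) x}^{1/(1-q)}, defined when 1 + (1-q) x > 0
    return np.exp(x) if abs(q - 1.0) < 1e-12 else (1.0 + (1.0 - q) * x)**(1.0 / (1.0 - q))

p = np.array([0.5, 0.3, 0.2])
q = 0.8
H_q = float(np.sum((p**q - p) / (1.0 - q)))    # Tsallis entropy, Eq. (2)
R_q = float(np.log(np.sum(p**q)) / (1.0 - q))  # Renyi entropy, Eq. (6)
print(np.exp(R_q), exp_q(H_q, q))              # both sides of (16) agree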

From (16), we have the following proposition.

Proposition 2.3 Let $A\equiv\{A_i:i=1,2,\dots,k\}$ be a partition of $\{1,2,\dots,n\}$ and put $p_i^A\equiv\sum_{j\in A_i}p_j$. Then we have

$$H_q(p_1,p_2,\dots,p_n)\ge H_q\bigl(p_1^A,p_2^A,\dots,p_k^A\bigr),$$
(17)
$$R_q(p_1,p_2,\dots,p_n)\ge R_q\bigl(p_1^A,p_2^A,\dots,p_k^A\bigr).$$
(18)

Proof We use the generalized Shannon additivity (which is often called q-additivity) for the Tsallis entropy (see [14] for example):

$$H_q(x_{11},\dots,x_{nm_n})=H_q(x_1,\dots,x_n)+\sum_{i=1}^n x_i^qH_q\!\left(\frac{x_{i1}}{x_i},\dots,\frac{x_{im_i}}{x_i}\right),$$
(19)

where $x_{ij}\ge0$, $x_i\equiv\sum_{j=1}^{m_i}x_{ij}$ ($i=1,\dots,n$; $j=1,\dots,m_i$). Thus, we have

$$H_q(p_1,p_2,\dots,p_n)\ge H_q\bigl(p_1^A,p_2^A,\dots,p_k^A\bigr),$$
(20)

since the second term of the right-hand side in (19) is nonnegative because of nonnegativity of the Tsallis entropy. Thus, we have

$$\exp R_q(p_1,p_2,\dots,p_n)=\exp_qH_q(p_1,p_2,\dots,p_n)\ge\exp_qH_q\bigl(p_1^A,p_2^A,\dots,p_k^A\bigr)=\exp R_q\bigl(p_1^A,p_2^A,\dots,p_k^A\bigr),$$

since exp q is a monotone increasing function. Hence, the inequality

$$R_q(p_1,p_2,\dots,p_n)\ge R_q\bigl(p_1^A,p_2^A,\dots,p_k^A\bigr)$$
(21)

holds, which proves the present proposition. □
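A short numerical sketch of the monotonicity under coarse-graining stated in Proposition 2.3 (NumPy assumed; the distribution and the partition are arbitrary examples):

import numpy as np

def tsallis_entropy(p, q):
    p = np.asarray(p, float)
    return float(np.sum((p**q - p) / (1.0 - q)))

def renyi_entropy(p, q):
    p = np.asarray(p, float)
    return float(np.log(np.sum(p**q)) / (1.0 - q))

q = 1.3
p = np.array([0.1, 0.2, 0.15, 0.25, 0.3])
partition = [[0, 1], [2, 3, 4]]                              # A_1, A_2
p_A = np.array([p[block].sum() for block in partition])      # coarse-grained distribution
print(tsallis_entropy(p, q), ">=", tsallis_entropy(p_A, q))  # inequality (17)
print(renyi_entropy(p, q), ">=", renyi_entropy(p_A, q))      # inequality (18)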

Definition 2.4 For a continuous and strictly monotonic function $\psi$ on $(0,\infty)$ and two probability distributions $\{p_1,p_2,\dots,p_n\}$ and $\{r_1,r_2,\dots,r_n\}$ with $p_j>0$, $r_j>0$ for all $j=1,2,\dots,n$, the Tsallis quasilinear relative entropy is defined by

$$D_q^\psi(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)\equiv-\ln_q\psi^{-1}\left(\sum_{j=1}^n p_j\,\psi\!\left(\frac{r_j}{p_j}\right)\right).$$
(22)

For $\psi(x)=\ln_q(x)$, the Tsallis quasilinear relative entropy becomes the Tsallis relative entropy, that is,

$$D_q^{\ln_q}(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)=-\sum_{j=1}^n p_j\ln_q\frac{r_j}{p_j}=D_q(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n),$$

and for $\psi(x)=x^{1-q}$, we have

$$D_q^{x^{1-q}}(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)=-\ln_q\left(\sum_{j=1}^n p_j\left(\frac{r_j}{p_j}\right)^{1-q}\right)^{\frac{1}{1-q}}=-\ln_q\left(\sum_{j=1}^n p_j^qr_j^{1-q}\right)^{\frac{1}{1-q}}=-\frac{\left[\left(\sum_{j=1}^n p_j^qr_j^{1-q}\right)^{\frac{1}{1-q}}\right]^{1-q}-1}{1-q}=\sum_{j=1}^n\frac{p_j-p_j^qr_j^{1-q}}{1-q}=D_q(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n).$$
(23)

We give a sufficient condition on nonnegativity of the Tsallis quasilinear relative entropy.

Proposition 2.5 If ψ is a concave increasing function or a convex decreasing function, then we have nonnegativity of the Tsallis quasilinear relative entropy:

$$D_q^\psi(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)\ge0.$$

Proof We firstly assume that $\psi$ is a concave increasing function. The concavity of $\psi$ shows that we have $\psi\!\left(\sum_{j=1}^n p_j\frac{r_j}{p_j}\right)\ge\sum_{j=1}^n p_j\psi\!\left(\frac{r_j}{p_j}\right)$, which is equivalent to $\psi(1)\ge\sum_{j=1}^n p_j\psi\!\left(\frac{r_j}{p_j}\right)$. From the assumption, $\psi^{-1}$ is also increasing, so that we have $1\ge\psi^{-1}\!\left(\sum_{j=1}^n p_j\psi\!\left(\frac{r_j}{p_j}\right)\right)$. Therefore, we have $-\ln_q\psi^{-1}\!\left(\sum_{j=1}^n p_j\psi\!\left(\frac{r_j}{p_j}\right)\right)\ge0$, since $\ln_qx$ is increasing and $\ln_q(1)=0$. For the case that $\psi$ is a convex decreasing function, we can prove nonnegativity of the Tsallis quasilinear relative entropy similarly. □

Remark 2.6 The following two functions satisfy the sufficient condition in the above proposition.

  (i) $\psi(x)=\ln_qx$ for $q\ge0$, $q\ne1$.

  (ii) $\psi(x)=x^{1-q}$ for $q\ge0$, $q\ne1$.

It is notable that the following identity holds:

$$\exp R_q(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)=\exp_{2-q}D_q(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n).$$
(24)

We should note here that $\exp_{2-q}D_q(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)$ is always defined, since we have

$$1+(q-1)D_q(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)=\sum_{j=1}^n p_j^qr_j^{1-q}>0.$$
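A minimal numerical check of identity (24) (NumPy assumed; the distributions and $q$ are arbitrary):

import numpy as np

def ln_q(x, q):
    return np.log(x) if abs(q - 1.0) < 1e-12 else (x**(1.0 - q) - 1.0) / (1.0 - q)

def exp_q(x, q):
    return np.exp(x) if abs(q - 1.0) < 1e-12 else (1.0 + (1.0 - q) * x)**(1.0 / (1.0 - q))

p = np.array([0.5, 0.3, 0.2])
r = np.array([0.25, 0.25, 0.5])
q = 1.6
D_q = float(-np.sum(p * ln_q(r / p, q)))                      # Tsallis relative entropy, Eq. (10)
R_q = float(np.log(np.sum(p**q * r**(1.0 - q))) / (q - 1.0))  # Renyi relative entropy, Eq. (9)
print(np.exp(R_q), exp_q(D_q, 2.0 - q))                       # both sides of (24) agree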

We also find that (24) implies the monotonicity of the Rényi relative entropy.

Proposition 2.7 Under the same assumptions as in Proposition 2.3 and with $r_i^A\equiv\sum_{j\in A_i}r_j$, we have

$$R_q(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)\ge R_q\bigl(p_1^A,p_2^A,\dots,p_k^A\|r_1^A,r_2^A,\dots,r_k^A\bigr).$$
(25)

Proof We recall that the Tsallis relative entropy is a special case of the f-divergence, so that it has the same properties as the f-divergence. Since $\exp_{2-q}$ is a monotone increasing function for $0\le q\le2$ and the f-divergence has monotonicity [21, 23], we have

$$\exp R_q(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)=\exp_{2-q}D_q(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)\ge\exp_{2-q}D_q\bigl(p_1^A,p_2^A,\dots,p_k^A\|r_1^A,r_2^A,\dots,r_k^A\bigr)=\exp R_q\bigl(p_1^A,p_2^A,\dots,p_k^A\|r_1^A,r_2^A,\dots,r_k^A\bigr),$$

which proves the statement. □

3 Inequalities for Tsallis quasilinear entropy and f-divergence

In this section, we give inequalities for the Tsallis quasilinear entropy and the f-divergence. For this purpose, we review a result obtained in [29] as one of the generalizations of the refined Young inequality.

Proposition 3.1 ([29])

For two probability vectors $p=\{p_1,p_2,\dots,p_n\}$ and $r=\{r_1,r_2,\dots,r_n\}$ such that $p_j>0$, $r_j>0$, $\sum_{j=1}^n p_j=\sum_{j=1}^n r_j=1$, and $x=\{x_1,x_2,\dots,x_n\}$ such that $x_i\ge0$, we have

$$\min_{1\le i\le n}\left\{\frac{r_i}{p_i}\right\}T(f,x,p)\le T(f,x,r)\le\max_{1\le i\le n}\left\{\frac{r_i}{p_i}\right\}T(f,x,p),$$
(26)

where

$$T(f,x,p)\equiv\sum_{j=1}^n p_jf(x_j)-f\!\left(\psi^{-1}\!\left(\sum_{j=1}^n p_j\,\psi(x_j)\right)\right),$$
(27)

for a continuous increasing function $\psi:I\to I$ and a function $f:I\to J$ such that

$$f\bigl(\psi^{-1}\bigl((1-\lambda)\psi(a)+\lambda\psi(b)\bigr)\bigr)\le(1-\lambda)f(a)+\lambda f(b)$$
(28)

for any $a,b\in I$ and any $\lambda\in[0,1]$.

We have the following inequalities on the Tsallis quasilinear entropy and Tsallis entropy.

Theorem 3.2 For $q\ge0$, a continuous and strictly monotonic function $\psi$ on $(0,\infty)$ and a probability distribution $\{r_1,r_2,\dots,r_n\}$ with $r_j>0$ for all $j=1,2,\dots,n$, we have

$$\begin{aligned}
0&\le n\min_{1\le i\le n}\{r_i\}\left\{\ln_q\psi^{-1}\!\left(\frac{1}{n}\sum_{j=1}^n\psi\!\left(\frac{1}{r_j}\right)\right)-\frac{1}{n}\sum_{j=1}^n\ln_q\frac{1}{r_j}\right\}\\
&\le I_q^\psi(r_1,r_2,\dots,r_n)-H_q(r_1,r_2,\dots,r_n)\\
&\le n\max_{1\le i\le n}\{r_i\}\left\{\ln_q\psi^{-1}\!\left(\frac{1}{n}\sum_{j=1}^n\psi\!\left(\frac{1}{r_j}\right)\right)-\frac{1}{n}\sum_{j=1}^n\ln_q\frac{1}{r_j}\right\}.
\end{aligned}$$

Proof If we take the uniform distribution $p=\{\frac{1}{n},\dots,\frac{1}{n}\}\equiv u$ in Proposition 3.1, then we have

$$n\min_{1\le i\le n}\{r_i\}\,T_n(f,x,u)\le T_n(f,x,r)\le n\max_{1\le i\le n}\{r_i\}\,T_n(f,x,u)$$
(29)

(which coincides with Theorem 3.3 in [29]). In the inequalities (29), we put $f(x)=-\ln_q(x)$ and $x_j=\frac{1}{r_j}$ for any $j=1,2,\dots,n$; then we obtain the statement. □

Corollary 3.3 For $q\ge0$ and a probability distribution $\{r_1,r_2,\dots,r_n\}$ with $r_j>0$ for all $j=1,2,\dots,n$, we have

$$\begin{aligned}
0&\le n\min_{1\le i\le n}\{r_i\}\left\{\ln_q\!\left(\frac{1}{n}\sum_{j=1}^n\frac{1}{r_j}\right)-\frac{1}{n}\sum_{j=1}^n\ln_q\frac{1}{r_j}\right\}\\
&\le\ln_qn-H_q(r_1,r_2,\dots,r_n)\\
&\le n\max_{1\le i\le n}\{r_i\}\left\{\ln_q\!\left(\frac{1}{n}\sum_{j=1}^n\frac{1}{r_j}\right)-\frac{1}{n}\sum_{j=1}^n\ln_q\frac{1}{r_j}\right\}.
\end{aligned}$$
(30)

Proof Put ψ(x)=x in Theorem 3.2. □

Remark 3.4 Corollary 3.3 improves the well-known inequalities $0\le H_q(r_1,r_2,\dots,r_n)\le\ln_qn$. If we take the limit $q\to1$, the inequalities (30) recover Proposition 1 in [25].
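The refined bounds (30) can be checked numerically with a short sketch (NumPy assumed; the distribution and $q$ are arbitrary):

import numpy as np

def ln_q(x, q):
    return np.log(x) if abs(q - 1.0) < 1e-12 else (x**(1.0 - q) - 1.0) / (1.0 - q)

q = 0.7
r = np.array([0.5, 0.3, 0.2])
n = len(r)
H_q = float(np.sum(r * ln_q(1.0 / r, q)))
gap = float(ln_q(np.mean(1.0 / r), q) - np.mean(ln_q(1.0 / r, q)))   # bracketed term in (30)
print(n * r.min() * gap, "<=", ln_q(float(n), q) - H_q, "<=", n * r.max() * gap)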

We also have the following inequalities.

Theorem 3.5 For two probability distributions $p=\{p_1,p_2,\dots,p_n\}$ and $r=\{r_1,r_2,\dots,r_n\}$, and an incomplete probability distribution $t=\{t_1,t_2,\dots,t_n\}$ with $t_j\equiv\frac{p_j^2}{r_j}$, we have

$$0\le\min_{1\le i\le n}\left\{\frac{r_i}{p_i}\right\}\left(\widetilde{D}_{f^*}(t\|p)-f\!\left(\sum_{j=1}^n t_j\right)\right)\le D_f(p\|r)\le\max_{1\le i\le n}\left\{\frac{r_i}{p_i}\right\}\left(\widetilde{D}_{f^*}(t\|p)-f\!\left(\sum_{j=1}^n t_j\right)\right).$$
(31)

Proof Put $x_j=\frac{p_j}{r_j}$ in Proposition 3.1 with $\psi(x)=x$. Since we have the relation

$$\sum_{j=1}^n p_jf\!\left(\frac{p_j}{r_j}\right)=\sum_{j=1}^n p_j\frac{p_j}{r_j}f^*\!\left(\frac{r_j}{p_j}\right)=\sum_{j=1}^n t_jf^*\!\left(\frac{p_j}{t_j}\right),$$

we have the statement. □
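A small numerical sketch of Theorem 3.5 (NumPy assumed; the convex function $f(x)=x\log x$, which satisfies $f(1)=0$, and the distributions are arbitrary choices):

import numpy as np

f = lambda x: x * np.log(x)          # convex on (0, inf) with f(1) = 0
f_star = lambda x: x * f(1.0 / x)    # dual function, Eq. (13); here f*(x) = -log(x)

p = np.array([0.5, 0.3, 0.2])
r = np.array([0.3, 0.4, 0.3])
t = p**2 / r                         # incomplete distribution t_j = p_j^2 / r_j

D_f = float(np.sum(r * f(p / r)))               # f-divergence, Eq. (12)
D_dual = float(np.sum(t * f_star(p / t)))       # tilde D_{f*}(t||p), Eq. (14)
bracket = D_dual - f(t.sum())
ratios = r / p
print(ratios.min() * bracket, "<=", D_f, "<=", ratios.max() * bracket)   # inequality (31)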

Corollary 3.6 ([25])

Under the same assumption as in Theorem 3.5, we have

$$0\le\min_{1\le i\le n}\left\{\frac{r_i}{p_i}\right\}\left(\log\!\left(\sum_{j=1}^n t_j\right)-D_1(p\|r)\right)\le D_1(r\|p)\le\max_{1\le i\le n}\left\{\frac{r_i}{p_i}\right\}\left(\log\!\left(\sum_{j=1}^n t_j\right)-D_1(p\|r)\right).$$

Proof If we take $f(x)=-\log(x)$ in Theorem 3.5, then we have

$$D_f(p\|r)=-\sum_{j=1}^n r_j\log\frac{p_j}{r_j}=\sum_{j=1}^n r_j\log\frac{r_j}{p_j}=D_1(r\|p).$$

Since $f^*(x)=x\log(x)$ and $t_j=\frac{p_j^2}{r_j}$, we also have

$$\widetilde{D}_{f^*}(t\|p)-f\!\left(\sum_{j=1}^n t_j\right)=\sum_{j=1}^n t_j\frac{p_j}{t_j}\log\frac{p_j}{t_j}+\log\!\left(\sum_{j=1}^n t_j\right)=\sum_{j=1}^n p_j\log\frac{r_j}{p_j}+\log\!\left(\sum_{j=1}^n t_j\right)=-\sum_{j=1}^n p_j\log\frac{p_j}{r_j}+\log\!\left(\sum_{j=1}^n t_j\right)=\log\!\left(\sum_{j=1}^n t_j\right)-D_1(p\|r).$$

 □

4 Inequalities for Tsallis entropy

We firstly give Lagrange’s identity [37], to establish an alternative generalization of refined Young’s inequality.

Lemma 4.1 (Lagrange’s identity)

For two vectors { a 1 , a 2 ,, a n } and { b 1 , b 2 ,, b n }, we have

$$\left(\sum_{k=1}^n a_k^2\right)\left(\sum_{k=1}^n b_k^2\right)-\left(\sum_{k=1}^n a_kb_k\right)^2=\frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n(a_ib_j-a_jb_i)^2=\sum_{1\le i<j\le n}(a_ib_j-a_jb_i)^2.$$
(32)

Theorem 4.2 Let $f:I\to\mathbb{R}$ be a twice differentiable function such that there exist real constants $m$ and $M$ so that $0\le m\le f''(x)\le M$ for any $x\in I$. Then we have

$$\frac{m}{2}\sum_{1\le i<j\le n}p_ip_j(x_j-x_i)^2\le\sum_{j=1}^n p_jf(x_j)-f\!\left(\sum_{j=1}^n p_jx_j\right)\le\frac{M}{2}\sum_{1\le i<j\le n}p_ip_j(x_j-x_i)^2,$$
(33)

where $p_j>0$ with $\sum_{j=1}^n p_j=1$ and $x_j\in I$ for all $j=1,2,\dots,n$.

Proof We consider the function $g:I\to\mathbb{R}$ defined by $g(x)\equiv f(x)-\frac{m}{2}x^2$. Since we have $g''(x)=f''(x)-m\ge0$, $g$ is a convex function. Applying Jensen’s inequality, we thus have

$$\sum_{j=1}^n p_jg(x_j)\ge g\!\left(\sum_{j=1}^n p_jx_j\right),$$
(34)

where $p_j>0$ with $\sum_{j=1}^n p_j=1$ and $x_j\in I$ for all $j=1,2,\dots,n$. From the inequality (34), we have

$$\begin{aligned}
\sum_{j=1}^n p_jf(x_j)-f\!\left(\sum_{j=1}^n p_jx_j\right)&\ge\frac{m}{2}\left\{\sum_{j=1}^n p_jx_j^2-\left(\sum_{j=1}^n p_jx_j\right)^2\right\}\\
&=\frac{m}{2}\left\{\left(\sum_{j=1}^n p_j\right)\left(\sum_{j=1}^n p_jx_j^2\right)-\left(\sum_{j=1}^n p_jx_j\right)^2\right\}\\
&=\frac{m}{2}\sum_{1\le i<j\le n}\bigl(\sqrt{p_i}\sqrt{p_j}\,x_j-\sqrt{p_j}\sqrt{p_i}\,x_i\bigr)^2=\frac{m}{2}\sum_{1\le i<j\le n}p_ip_j(x_j-x_i)^2.
\end{aligned}$$

In the above calculations, we used Lemma 4.1. Thus, we proved the first part of the inequalities. Similarly, one can prove the second part of the inequalities by using the function $h:I\to\mathbb{R}$ defined by $h(x)\equiv\frac{M}{2}x^2-f(x)$. We omit the details. □

Lemma 4.3 For $\{p_1,p_2,\dots,p_n\}$ with $p_j>0$ and $\sum_{j=1}^n p_j=1$, and $\{x_1,x_2,\dots,x_n\}$ with $x_j>0$, we have

$$\sum_{1\le i<j\le n}p_ip_j(x_j-x_i)^2=\sum_{j=1}^n p_j\left(x_j-\sum_{i=1}^n p_ix_i\right)^2.$$
(35)

Proof We denote

$$\bar{x}=\sum_{i=1}^n p_ix_i.$$

The left-hand side becomes

$$\begin{aligned}
\sum_{1\le i<j\le n}p_ip_j(x_j-x_i)^2&=\frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n p_ip_j(x_j-x_i)^2=\frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n p_ip_j\bigl(x_j^2+x_i^2-2x_jx_i\bigr)\\
&=\frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n p_ip_jx_j^2+\frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n p_ip_jx_i^2-\sum_{i=1}^n\sum_{j=1}^n p_ip_jx_jx_i\\
&=\frac{1}{2}\sum_{i=1}^n p_i\sum_{j=1}^n p_jx_j^2+\frac{1}{2}\sum_{i=1}^n p_ix_i^2\sum_{j=1}^n p_j-\sum_{i=1}^n p_ix_i\sum_{j=1}^n p_jx_j\\
&=\sum_{j=1}^n p_jx_j^2-\bar{x}^2.
\end{aligned}$$

Similarly, a straightforward computation yields

$$\sum_{j=1}^n p_j\left(x_j-\sum_{i=1}^n p_ix_i\right)^2=\sum_{j=1}^n p_j\bigl(x_j^2-2x_j\bar{x}+\bar{x}^2\bigr)=\sum_{j=1}^n p_jx_j^2-2\bar{x}^2+\bar{x}^2=\sum_{j=1}^n p_jx_j^2-\bar{x}^2.$$

This concludes the proof. □

Corollary 4.4 Under the assumptions of Theorem 4.2, we have

$$\frac{m}{2}\sum_{j=1}^n p_j\left(x_j-\sum_{i=1}^n p_ix_i\right)^2\le\sum_{j=1}^n p_jf(x_j)-f\!\left(\sum_{j=1}^n p_jx_j\right)\le\frac{M}{2}\sum_{j=1}^n p_j\left(x_j-\sum_{i=1}^n p_ix_i\right)^2.$$
(36)

Remark 4.5 Corollary 4.4 gives a form similar to Cartwright-Field’s inequality [38]:

$$\frac{1}{2M}\sum_{j=1}^n p_j\left(x_j-\sum_{i=1}^n p_ix_i\right)^2\le\sum_{j=1}^n p_jx_j-\prod_{j=1}^n x_j^{p_j}\le\frac{1}{2m}\sum_{j=1}^n p_j\left(x_j-\sum_{i=1}^n p_ix_i\right)^2,$$
(37)

where $p_j>0$ for all $j=1,2,\dots,n$ and $\sum_{j=1}^n p_j=1$, $m\equiv\min\{x_1,x_2,\dots,x_n\}>0$ and $M\equiv\max\{x_1,x_2,\dots,x_n\}$.
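A short numerical sketch of Theorem 4.2 and Corollary 4.4 (NumPy assumed; the weights, the points and the choice $f(x)=-\log x$ are arbitrary; since $f''(x)=1/x^2$ is decreasing, $m$ and $M$ are taken at the largest and smallest data points):

import numpy as np

p = np.array([0.2, 0.5, 0.3])
x = np.array([1.0, 2.0, 4.0])
f = lambda t: -np.log(t)
m, M = 1.0 / x.max()**2, 1.0 / x.min()**2      # bounds on f''(t) = 1/t^2 over the data range

jensen_gap = float(np.sum(p * f(x)) - f(np.sum(p * x)))
variance = float(np.sum(p * (x - np.sum(p * x))**2))   # right-hand side of Lemma 4.3
print(m / 2 * variance, "<=", jensen_gap, "<=", M / 2 * variance)   # inequalities (36)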

We also have the following inequalities for the Tsallis entropy.

Theorem 4.6 For two probability distributions $\{p_1,p_2,\dots,p_n\}$ and $\{r_1,r_2,\dots,r_n\}$ with $p_j>0$, $r_j>0$ and $\sum_{j=1}^n p_j=\sum_{j=1}^n r_j=1$, we have

(38)

where $m_q$ and $M_q$ are positive numbers depending on the parameter $q\ge0$ and satisfying $m_q\le qr_j^{q+1}\le M_q$ and $m_q\le qp_j^{q+1}\le M_q$ for all $j=1,2,\dots,n$.

Proof Applying Theorem 4.2 for the convex function $-\ln_q(x)$ and $x_j=\frac{1}{r_j}$, we have

$$\frac{m_q}{2}\sum_{1\le i<j\le n}p_ip_j\left(\frac{1}{r_j}-\frac{1}{r_i}\right)^2\le-\sum_{j=1}^n p_j\ln_q\frac{1}{r_j}+\ln_q\left(\sum_{j=1}^n\frac{p_j}{r_j}\right)\le\frac{M_q}{2}\sum_{1\le i<j\le n}p_ip_j\left(\frac{1}{r_j}-\frac{1}{r_i}\right)^2,$$
(39)

since the second derivative of $-\ln_q(x)$ is $qx^{-q-1}$. Putting $r_j=p_j$ for all $j=1,2,\dots,n$ in the inequalities (39), it follows that

$$\frac{m_q}{2}\sum_{1\le i<j\le n}p_ip_j\left(\frac{1}{p_j}-\frac{1}{p_i}\right)^2\le-\sum_{j=1}^n p_j\ln_q\frac{1}{p_j}+\ln_qn\le\frac{M_q}{2}\sum_{1\le i<j\le n}p_ip_j\left(\frac{1}{p_j}-\frac{1}{p_i}\right)^2.$$
(40)

From the inequalities (39) and (40), we have the statement. □

Remark 4.7 The first part of the inequalities (40) gives another improvement of the well-known inequalities $0\le H_q(r_1,r_2,\dots,r_n)\le\ln_qn$.

Corollary 4.8 For two probability distributions $\{p_1,p_2,\dots,p_n\}$ and $\{r_1,r_2,\dots,r_n\}$ with $p_j>0$, $r_j>0$ and $\sum_{j=1}^n p_j=\sum_{j=1}^n r_j=1$, we have

(41)

where $m_1$ and $M_1$ are positive numbers satisfying $m_1\le r_j^2\le M_1$ and $m_1\le p_j^2\le M_1$ for all $j=1,2,\dots,n$.

Proof Take the limit $q\to1$ in Theorem 4.6. □

Remark 4.9 The second part of the inequalities (41) gives the reverse inequality for the so-called information inequality [39, Theorem 2.6.3]

$$0\le\sum_{j=1}^n p_j\log\frac{1}{r_j}-\sum_{j=1}^n p_j\log\frac{1}{p_j},$$
(42)

which is equivalent to the non-negativity of the relative entropy

$$D_1(p_1,p_2,\dots,p_n\|r_1,r_2,\dots,r_n)\ge0.$$

Using the inequality (42), we derive the following result.

Proposition 4.10 For two probability distributions $\{p_1,p_2,\dots,p_n\}$ and $\{r_1,r_2,\dots,r_n\}$ with $0<p_j<1$, $0<r_j<1$ and $\sum_{j=1}^n p_j=\sum_{j=1}^n r_j=1$, we have

$$\sum_{j=1}^n(1-p_j)\log\frac{1}{1-p_j}\le\sum_{j=1}^n(1-p_j)\log\frac{1}{1-r_j}.$$
(43)

Proof In the inequality (42), we replace $p_j$ by $\frac{1-p_j}{n-1}$ and $r_j$ by $\frac{1-r_j}{n-1}$, which satisfy $\sum_{j=1}^n\frac{1-p_j}{n-1}=\sum_{j=1}^n\frac{1-r_j}{n-1}=1$. Then we have the present proposition. □

5 A generalized Han’s inequality

In order to state our result, we give the definitions of the Tsallis conditional entropy and the Tsallis joint entropy.

Definition 5.1 ([16, 40])

For the conditional probability p( x i | y j ) and the joint probability p( x i , y j ), we define the Tsallis conditional entropy and the Tsallis joint entropy by

$$H_q(x|y)\equiv-\sum_{i,j}p(x_i,y_j)^q\ln_qp(x_i|y_j)\quad(q\ge0,\ q\ne1)$$
(44)

and

$$H_q(x,y)\equiv-\sum_{i,j}p(x_i,y_j)^q\ln_qp(x_i,y_j)\quad(q\ge0,\ q\ne1).$$
(45)

We summarize briefly the following chain rules representing relations between the Tsallis conditional entropy and the Tsallis joint entropy.

Proposition 5.2 ([16, 40])

Assume that x, y are random variables. Then

$$H_q(x,y)=H_q(x)+H_q(y|x).$$
(46)

Proposition 5.2 implies the following propositions.

Proposition 5.3 ([16])

Suppose x 1 , x 2 ,, x n are random variables. Then

$$H_q(x_1,x_2,\dots,x_n)=\sum_{i=1}^nH_q(x_i|x_{i-1},\dots,x_1).$$
(47)

Proposition 5.4 ([16, 40])

For $q\ge1$ and two random variables $x$ and $y$, we have the following inequality:

$$H_q(x|y)\le H_q(x).$$
(48)

Consequently, we have the following self-bounding property of the Tsallis joint entropy.

Theorem 5.5 (Generalized Han’s inequality)

Let $x_1,x_2,\dots,x_n$ be random variables. Then for $q\ge1$, we have the following inequality:

$$H_q(x_1,\dots,x_n)\le\frac{1}{n-1}\sum_{i=1}^nH_q(x_1,\dots,x_{i-1},x_{i+1},\dots,x_n).$$

Proof Since the Tsallis joint entropy has a symmetry H q (x,y)= H q (y,x), we have

$$\begin{aligned}
H_q(x_1,\dots,x_n)&=H_q(x_1,\dots,x_{i-1},x_{i+1},\dots,x_n)+H_q(x_i|x_1,\dots,x_{i-1},x_{i+1},\dots,x_n)\\
&\le H_q(x_1,\dots,x_{i-1},x_{i+1},\dots,x_n)+H_q(x_i|x_1,\dots,x_{i-1})
\end{aligned}$$

by the use of Proposition 5.2 and Proposition 5.4. Summing both sides over $i$ from 1 to $n$, we have

$$\begin{aligned}
nH_q(x_1,\dots,x_n)&=\sum_{i=1}^nH_q(x_1,\dots,x_{i-1},x_{i+1},\dots,x_n)+\sum_{i=1}^nH_q(x_i|x_1,\dots,x_{i-1},x_{i+1},\dots,x_n)\\
&\le\sum_{i=1}^nH_q(x_1,\dots,x_{i-1},x_{i+1},\dots,x_n)+H_q(x_1,\dots,x_n),
\end{aligned}$$

due to Proposition 5.3. Therefore, we have the present theorem. □

Remark 5.6 Theorem 5.5 recovers the original Han’s inequality [41, 42] if we take the limit as $q\to1$.
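A short numerical sketch of the generalized Han inequality (NumPy assumed; the joint distribution of three binary random variables is generated at random):

import numpy as np

def ln_q(x, q):
    return np.log(x) if abs(q - 1.0) < 1e-12 else (x**(1.0 - q) - 1.0) / (1.0 - q)

def tsallis_entropy(probs, q):
    # H_q of a (possibly multi-dimensional) pmf: -sum p^q ln_q(p)
    p = np.asarray(probs, float).ravel()
    p = p[p > 0]
    return float(-np.sum(p**q * ln_q(p, q)))

rng = np.random.default_rng(0)
P = rng.random((2, 2, 2))
P /= P.sum()                      # joint pmf of three binary random variables

q, n = 1.7, 3
joint = tsallis_entropy(P, q)
leave_one_out = sum(tsallis_entropy(P.sum(axis=i), q) for i in range(n))
print(joint, "<=", leave_one_out / (n - 1))   # Theorem 5.5 with q >= 1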

6 Conclusion

We gave an improvement of Young’s inequalities for scalar numbers. Using this result, we gave several inequalities on generalized entropies involving Tsallis entropies. We also provided a generalized Han’s inequality, based on the conditional Tsallis entropy and the joint Tsallis entropy.

References

  1. Aczél J, Daróczy Z: On Measures of Information and Their Characterizations. Academic Press, San Diego; 1975.

  2. Furuichi S: Tsallis entropies and their theorems, properties and applications. In Aspects of Optical Sciences and Quantum Information. Edited by: Abdel-Aty M. Research Signpost, Trivandrum; 2007:1–86.

  3. Rényi A: On measures of entropy and information. 1. In Proc. 4th Berkeley Symp., Mathematical and Statistical Probability. University of California Press, Berkeley; 1961:547–561.

  4. Tsallis C: Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52: 479–487. 10.1007/BF01016429

  5. Tsallis C, et al.: Nonextensive Statistical Mechanics and Its Applications. Edited by: Abe S, Okamoto Y. Springer, Berlin; 2001. See also the comprehensive list of references at http://tsallis.cat.cbpf.br/biblio.htm

  6. Tsallis C: Introduction to Nonextensive Statistical Mechanics: Approaching a Complex World. Springer, Berlin; 2009.

  7. Tsallis C: Entropy. In Encyclopedia of Complexity and Systems Science. Springer, Berlin; 2009.

  8. Sebawe Abdalla M, Thabet L: Nonclassical properties of a model for modulated damping under the action of an external force. Appl. Math. Inf. Sci. 2011, 5: 570–588.

  9. El-Barakaty A, Darwish M, Obada A-SF: Purity loss for a Cooper pair box interacting dispersively with a nonclassical field under phase damping. Appl. Math. Inf. Sci. 2011, 5: 122–131.

  10. Sun L-H, Li G-X, Ficek Z: Continuous variables approach to entanglement creation and processing. Appl. Math. Inf. Sci. 2010, 4: 315–339.

  11. Ficek Z: Quantum entanglement processing with atoms. Appl. Math. Inf. Sci. 2009, 3: 375–393.

  12. Furuichi S: A note on a parametrically extended entanglement-measure due to Tsallis relative entropy. Information 2006, 9: 837–844.

  13. Furuichi S, Yanagi K, Kuriyama K: Fundamental properties of Tsallis relative entropy. J. Math. Phys. 2004, 45: 4868–4877. 10.1063/1.1805729

  14. Furuichi S: On uniqueness theorems for Tsallis entropy and Tsallis relative entropy. IEEE Trans. Inf. Theory 2005, 47: 3638–3645.

  15. Furuichi S: An axiomatic characterization of a two-parameter extended relative entropy. J. Math. Phys. 2010., 51: Article ID 123302

  16. Furuichi S: Information theoretical properties of Tsallis entropies. J. Math. Phys. 2006., 47: Article ID 023302

  17. Furuichi S: Matrix trace inequalities on Tsallis entropies. J. Inequal. Pure Appl. Math. 2008., 9(1): Article ID 1

  18. Furuichi S: On the maximum entropy principle and the minimization of the Fisher information in Tsallis statistics. J. Math. Phys. 2009., 50: Article ID 013303

  19. Furuichi S, Mitroi F-C: Mathematical inequalities for some divergences. Physica A 2012, 391: 388–400. 10.1016/j.physa.2011.07.052

  20. Furuichi S: Inequalities for Tsallis relative entropy and generalized skew information. Linear Multilinear Algebra 2012, 59: 1143–1158.

  21. Csiszár I: Axiomatic characterizations of information measures. Entropy 2008, 10: 261–273. 10.3390/e10030261

  22. Csiszár I: Information measures: a critical survey. In Transactions of the Seventh Prague Conference on Information Theory, Statistical Decision Functions, Random Processes. Reidel, Dordrecht; 1978:73–86.

  23. Csiszár I, Shields PC: Information theory and statistics: a tutorial. Found. Trends Commun. Inf. Theory 2004, 1: 417–528. 10.1561/0100000004

  24. Bobylev, NA, Krasnoselsky, MA: Extremum analysis (degenerate cases), Moscow. Preprint (1981), 52 pages (in Russian)

  25. Dragomir S: Bounds for the normalised Jensen functional. Bull. Aust. Math. Soc. 2006, 74: 471–478. 10.1017/S000497270004051X

  26. Kittaneh F, Manasrah Y: Improved Young and Heinz inequalities for matrices. J. Math. Anal. Appl. 2010, 36: 262–269.

  27. Aldaz JM: Self-improvement of the inequality between arithmetic and geometric means. J. Math. Inequal. 2009, 3: 213–216.

  28. Aldaz JM: Comparison of differences between arithmetic and geometric means. Tamkang J. Math. 2011, 42: 445–451.

  29. Mitroi FC: About the precision in Jensen-Steffensen inequality. An. Univ. Craiova, Ser. Math. Comput. Sci 2010, 37: 73–84.

  30. Furuichi S: On refined Young inequalities and reverse inequalities. J. Math. Inequal. 2011, 5: 21–31.

  31. Furuichi S: Refined Young inequalities with Specht’s ratio. J. Egypt. Math. Soc. 2012, 20: 46–49. 10.1016/j.joems.2011.12.010

  32. Minculete N: A result about Young inequality and several applications. Sci. Magna 2011, 7: 61–68.

  33. Minculete N: A refinement of the Kittaneh-Manasrah inequality. Creat. Math. Inform. 2011, 20: 157–162.

  34. Furuichi S, Minculete N: Alternative reverse inequalities for Young’s inequality. J. Math. Inequal. 2011, 5: 595–600.

  35. Minculete N, Furuichi S: Several applications of Cartwright-Field’s inequality. Int. J. Pure Appl. Math. 2011, 71: 19–30.

  36. Masi M: A step beyond Tsallis and Rényi entropies. Phys. Lett. A 2005, 338: 217–224. 10.1016/j.physleta.2005.01.094

  37. Weisstein EW: CRC Concise Encyclopedia of Mathematics. 2nd edition. CRC Press, Boca Raton; 2003.

  38. Cartwright DI, Field MJ: A refinement of the arithmetic mean-geometric mean inequality. Proc. Am. Math. Soc. 1978, 71: 36–38. 10.1090/S0002-9939-1978-0476971-2

  39. Cover TM, Thomas JA: Elements of Information Theory. 2nd edition. Wiley, New York; 2006.

  40. Daróczy Z: General information functions. Inf. Control 1970, 16: 36–51. 10.1016/S0019-9958(70)80040-7

  41. Han T: Nonnegative entropy measures of multivariate symmetric correlations. Inf. Control 1978, 36: 133–156. 10.1016/S0019-9958(78)90275-9

  42. Boucheron S, Lugosi G, Bousquet O: Concentration inequalities. In Advanced Lectures on Machine Learning. Springer, Berlin; 2003:208–240.

Acknowledgements

We would like to thank the anonymous reviewer for providing valuable comments to improve the manuscript. The author SF was supported in part by the Japanese Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Encouragement of Young Scientists (B), 20740067. The author NM was supported in part by the Romanian Ministry of Education, Research and Innovation through the PNII Idei project 842/2008. The author FCM was supported by CNCSIS Grant 420/2008.

Author information

Correspondence to Shigeru Furuichi.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The work presented here was carried out in collaboration between all authors. The study was initiated by SF. The author SF also played the role of the corresponding author. All authors contributed equally and significantly in writing this article. All authors have contributed to, seen and approved the manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
