A note on Schur-concave functions

Abstract

In this paper we consider a class of Schur-concave functions with some measure properties. The isoperimetric inequality and the Brunn-Minkowski inequality for this class of functions are presented. Applications in geometric programming and optimization theory are also derived.

MSC:26B25, 26B15, 52A40.

1 Introduction

About 100 years ago, the properties of such notions as length, area, volume, as well as the probability of events, were abstracted under the banner of the word measure. Here we revisit the notion of measure in a somewhat unusual way. More exactly, we study some measure properties of a special class of Schur-concave functions, which will be revealed via discrete versions of the isoperimetric inequality and the Brunn-Minkowski inequality.

The notion of Schur-convex function was introduced by I. Schur in 1923 and has had interesting applications in analytic inequalities, elementary quantum mechanics and quantum information theory. See [15]. Let $x = (x_1, \dots, x_n)$ and $y = (y_1, \dots, y_n)$ be two vectors from $\mathbb{R}^n$.

Definition 1 We say that x is majorized by y, and denote it by $x \prec y$, if the decreasing rearrangements $x_{[1]} \ge x_{[2]} \ge \cdots \ge x_{[n]}$ and $y_{[1]} \ge y_{[2]} \ge \cdots \ge y_{[n]}$ of the components of x and y satisfy $\sum_{i=1}^{k} x_{[i]} \le \sum_{i=1}^{k} y_{[i]}$ ($1 \le k \le n-1$) and $\sum_{i=1}^{n} x_{[i]} = \sum_{i=1}^{n} y_{[i]}$.
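The majorization relation of Definition 1 is easy to test in practice. The following minimal sketch (the helper name is_majorized is ours) sorts both vectors in decreasing order and compares partial sums.

def is_majorized(x, y, tol=1e-12):
    """Return True if x is majorized by y in the sense of Definition 1."""
    if len(x) != len(y):
        raise ValueError("vectors must have the same length")
    xs = sorted(x, reverse=True)   # x_[1] >= x_[2] >= ... >= x_[n]
    ys = sorted(y, reverse=True)
    sx = sy = 0.0
    for k in range(len(xs) - 1):   # partial sums for k = 1, ..., n-1
        sx += xs[k]
        sy += ys[k]
        if sx > sy + tol:
            return False
    return abs(sum(xs) - sum(ys)) <= tol   # the total sums must agree

# Example: (2, 2, 2) is majorized by (1, 2, 3), which is majorized by (0, 1, 5).
print(is_majorized([2, 2, 2], [1, 2, 3]), is_majorized([1, 2, 3], [0, 1, 5]))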

Definition 2 The function $F: A \to \mathbb{R}$, where $A \subset \mathbb{R}^n$, is called Schur-convex if $x \prec y$ implies $F(x) \le F(y)$. A function F is called Schur-concave if $-F$ is Schur-convex.

An important source of Schur-convex functions can be found in Merkle [16]. Guan [10, 11] proved that all elementary symmetric functions and the symmetric means of order k are Schur-concave functions. Other families of Schur-convex functions are studied in [1–7, 20–24, 27].

In [26] a class of analytic inequalities for Schur-convex functions, built from solutions of a second-order nonlinear differential equation, was established. These analytic inequalities were used to infer some geometric inequalities, such as the isoperimetric inequality. Li and Trudinger [14] considered a special class of inequalities for elementary symmetric functions that are relevant to the study of partial differential equations associated with curvature problems.

We recall here a classical result concerning Schur-convexity in the case of smooth functions. See [19].

Theorem 1 Let $F(x) = F(x_1, \dots, x_n)$ be a symmetric function with continuous partial derivatives on $I^n = I \times I \times \cdots \times I$, where I is an open interval. Then $F: I^n \to \mathbb{R}$ is Schur-convex if and only if the inequality

$$ (x_i - x_j) \left( \frac{\partial F}{\partial x_i} - \frac{\partial F}{\partial x_j} \right) \ge 0 $$
(1.1)

holds on $I^n$ for each $i, j \in \{1, \dots, n\}$. It is strictly Schur-convex if the inequality (1.1) is strict for $x_i \ne x_j$, $1 \le i, j \le n$. Any such function F is Schur-concave if the inequality (1.1) is reversed.
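As a simple illustration of the criterion (1.1) (our own example), take the symmetric function $F(x) = x_1^2 + \cdots + x_n^2$ on $\mathbb{R}^n$. Then

$$ (x_i - x_j) \left( \frac{\partial F}{\partial x_i} - \frac{\partial F}{\partial x_j} \right) = (x_i - x_j)(2x_i - 2x_j) = 2(x_i - x_j)^2 \ge 0, $$

so F is Schur-convex and $-F$ is Schur-concave.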

We present a discrete version of the isoperimetric inequality related to a special class of Schur-concave functions. The reason we discuss the isoperimetric inequality in the context of Schur-concave functions is the following well-known property of every Schur-concave function F, which is also an essential property of the volume measure:

$$ F(x_1, \dots, x_n) \le F\left( \frac{x_1 + \cdots + x_n}{n}, \dots, \frac{x_1 + \cdots + x_n}{n} \right). $$
(1.2)

In other words, if F is used as an area measure, the inequality (1.2) says that among all polygons with n edges and fixed perimeter, the regular polygon with equal edges has the largest area.

The main result of the paper concerning the discrete isoperimetric inequality will be presented in Section 2. Generalized geometric programming refers to optimization problems that involve signomial functions, which have applications in process synthesis, process design, molecular conformation and chemical equilibrium. See [9]. In Section 3 we introduce a class of Schur-convex functions and emphasize the relevance of such functions to convexification transformations for geometric programming. In addition, we consider a family of symmetric functions which have applications in fully nonlinear elliptic equations such as the Monge-Ampère equation. See Theorem 6.4 and Corollary 6.5 in [13].

2 A discrete isoperimetric inequality

In this section we define some volume measures by using a family of Schur-concave functions. Discrete versions of the isoperimetric inequality and of the Brunn-Minkowski inequality will confirm that our approach is consistent. We first recall a well-known result concerning the Brunn-Minkowski inequality for convex bodies, that is, nonempty compact convex subsets of $\mathbb{R}^n$.

Theorem 2 Let $\lambda \in (0,1)$ and let K, L be two convex bodies. Then we have

$$ \left( \mathrm{Vol}_n\bigl( (1-\lambda) K + \lambda L \bigr) \right)^{1/n} \ge (1-\lambda) \left( \mathrm{Vol}_n(K) \right)^{1/n} + \lambda \left( \mathrm{Vol}_n(L) \right)^{1/n}, $$
(2.1)

with equality when K and L are identical up to a translation.

We replace the volume measure $\mathrm{Vol}_n(K)$ by a Schur-concave function of the form $F_n(x_1, \dots, x_n) = f(x_1) + \cdots + f(x_n)$, where f is a nonnegative concave function. In the rest of the paper, $F_n$ will be called the n-dimensional volume function.

Theorem 3 Let $\lambda \in [0,1]$. Then, for each nonnegative concave function f, we have

$$ \left( F_n\bigl( (1-\lambda) x + \lambda y \bigr) \right)^{1/n} \ge (1-\lambda) \left( F_n(x) \right)^{1/n} + \lambda \left( F_n(y) \right)^{1/n}, \quad x, y \in \mathbb{R}^n, $$

where $F_n(x_1, \dots, x_n) = f(x_1) + \cdots + f(x_n)$.

Proof Let $g(x_1, \dots, x_n) = (x_1 + \cdots + x_n)^{1/n}$, defined on $\mathbb{R}_+^n$, which is globally concave and nondecreasing in each variable. What we need to prove is the concavity of the function $g(f(x_1), \dots, f(x_n))$, which holds because it is the composition of a nondecreasing globally concave function with concave components. □
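A quick numerical sanity check of Theorem 3, as a sketch under the assumption $f(t) = \sqrt{t}$ (nonnegative and concave on $[0, \infty)$; the helper names are ours):

import random

f = lambda t: t ** 0.5                        # nonnegative and concave on [0, inf)
Fn = lambda x: sum(f(t) for t in x)           # the n-dimensional volume function

random.seed(0)
n = 5
for _ in range(1000):
    x = [random.uniform(0, 10) for _ in range(n)]
    y = [random.uniform(0, 10) for _ in range(n)]
    lam = random.random()
    z = [(1 - lam) * a + lam * b for a, b in zip(x, y)]
    lhs = Fn(z) ** (1 / n)
    rhs = (1 - lam) * Fn(x) ** (1 / n) + lam * Fn(y) ** (1 / n)
    assert lhs >= rhs - 1e-9                  # the Brunn-Minkowski-type inequality
print("inequality of Theorem 3 verified on 1000 random samples")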

Let us consider a more difficult problem concerning the classical isoperimetric inequality for convex bodies in $\mathbb{R}^n$.

Theorem 4 (See [18])

Let K be a convex body in $\mathbb{R}^n$ and let B be a closed ball in $\mathbb{R}^n$. Then we have

$$ \left( \frac{\mathrm{Vol}_n(K)}{\mathrm{Vol}_n(B)} \right)^{\frac{1}{n}} \le \left( \frac{S_{n-1}(K)}{S_{n-1}(B)} \right)^{\frac{1}{n-1}}, $$
(2.2)

with equality if and only if K is a ball. Here, $S_{n-1}(K)$ denotes the surface area of the convex body K.

We replace the volume measure $\mathrm{Vol}_n$ with $F_n$, the n-dimensional volume function defined above. Notice that the well-known measures (perimeter, area, volume) are Schur-concave functions. For example, in $\mathbb{R}^3$ the functions $F(x_1, x_2, x_3) = x_1 x_2 x_3$, $F(x_1, x_2, x_3) = 2(x_1 x_2 + x_1 x_3 + x_2 x_3)$ and $F(x_1, x_2, x_3) = 4(x_1 + x_2 + x_3)$ (the volume, the surface area and the total edge length of a rectangular box with edges $x_1, x_2, x_3$) are Schur-concave. In fact, $F_n(x_1, \dots, x_n)$ may be interpreted as the n-dimensional volume of a body with edges $x_1, \dots, x_n$.

The difficulty here is to develop a connection between the n-dimensional volume functions $F_n$ for different dimensions. We define the connection between the n-dimensional volume function $F_n$ and the (n-1)-dimensional volume function $F_{n-1}$ in the following way:

$$ F_{n-1}(x_1, \dots, x_n) = F_n\left( \frac{x_1 + \cdots + x_n}{n}, x_2, \dots, x_n \right) + \cdots + F_n\left( x_1, \dots, x_{n-1}, \frac{x_1 + \cdots + x_n}{n} \right). $$

Remark 1 It is easy to check that if $F_n$ is a Schur-concave function, then $F_{n-1}$ is also a Schur-concave function. Hence, our relation between $F_n$ and $F_{n-1}$ is well defined.
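In code, the passage from $F_n$ to $F_{n-1}$ simply replaces each coordinate in turn by the arithmetic mean and sums the resulting values; a minimal sketch (the helper name F_n_minus_1 is ours):

def F_n_minus_1(Fn, x):
    """(n-1)-dimensional volume function obtained from Fn by the relation above."""
    m = sum(x) / len(x)
    return sum(Fn(x[:i] + [m] + x[i + 1:]) for i in range(len(x)))

# Example with the additive volume function built from f(t) = sqrt(t).
Fn = lambda v: sum(t ** 0.5 for t in v)
print(F_n_minus_1(Fn, [1.0, 4.0, 9.0, 16.0]))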

Now, we are able to present the discrete form of the isoperimetric inequality in the context of this type of n-dimensional volume functions.

Theorem 5 Let $(x_1, \dots, x_n) \in \mathbb{R}_+^n$ and $F_n(x_1, \dots, x_n) = f(x_1) + \cdots + f(x_n)$, where $f: \mathbb{R}_+ \to \mathbb{R}_+$ is a concave function. Then we have the following isoperimetric inequality:

$$ \left( \frac{F_n(x_1, \dots, x_n)}{F_n\left( \frac{x_1 + \cdots + x_n}{n}, \dots, \frac{x_1 + \cdots + x_n}{n} \right)} \right)^{\frac{1}{n}} \le \left( \frac{F_{n-1}(x_1, \dots, x_n)}{F_{n-1}\left( \frac{x_1 + \cdots + x_n}{n}, \dots, \frac{x_1 + \cdots + x_n}{n} \right)} \right)^{\frac{1}{n-1}}. $$
(2.3)

Proof If we denote $M = f\left( \frac{x_1 + \cdots + x_n}{n} \right)$, the inequality (2.3) becomes

$$ \left( \frac{f(x_1) + \cdots + f(x_n)}{n M} \right)^{\frac{1}{n}} \le \left( \frac{(n-1)\bigl( f(x_1) + \cdots + f(x_n) \bigr) + n M}{n^2 M} \right)^{\frac{1}{n-1}}. $$

Since $M = f\left( \frac{x_1 + \cdots + x_n}{n} \right) \ge \frac{f(x_1) + \cdots + f(x_n)}{n} =: x$ by the concavity of f, it is sufficient to prove that for each $x \le M$ we have

$$ \left( \frac{x}{M} \right)^{\frac{1}{n}} \le \left( \frac{(n-1) x + M}{n M} \right)^{\frac{1}{n-1}}. $$
(2.4)

If we consider the function $h(x) = \frac{1}{n}(\ln x - \ln M) - \frac{1}{n-1}\bigl( \ln((n-1)x + M) - \ln(nM) \bigr)$, we need to prove that h is nonpositive for every $0 < x \le M$.

Computing $h'(x) = \frac{M - x}{n x \bigl( (n-1) x + M \bigr)} \ge 0$, we see that h is nondecreasing on $(0, M]$. Since $h(M) = 0$, it follows that $h(x) \le 0$ for every $0 < x \le M$. □
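The following sketch checks inequality (2.3) numerically under the assumption $f(t) = \ln(1+t)$, a concave nonnegative function on $[0, \infty)$ (the helper names are ours):

import math
import random

f = lambda t: math.log(1.0 + t)                 # concave and nonnegative on [0, inf)

def Fn(x):
    return sum(f(t) for t in x)

def Fn_minus_1(x):
    m = sum(x) / len(x)
    return sum(Fn(x[:i] + [m] + x[i + 1:]) for i in range(len(x)))

random.seed(1)
n = 6
for _ in range(1000):
    x = [random.uniform(0.01, 10) for _ in range(n)]
    e = [sum(x) / n] * n                         # the "equal edges" configuration
    lhs = (Fn(x) / Fn(e)) ** (1 / n)
    rhs = (Fn_minus_1(x) / Fn_minus_1(e)) ** (1 / (n - 1))
    assert lhs <= rhs + 1e-9                     # discrete isoperimetric inequality (2.3)
print("inequality (2.3) verified on 1000 random samples")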

In the following, we extend the class of Schur-concave functions which verify (2.3). Consider the elementary symmetric functions of n variables given by

$$ E_n^k = F_n^k \Big/ \binom{n}{k}, \quad \text{where } F_n^k = \sum_{1 \le i_1 < \cdots < i_k \le n} \, \prod_{j=1}^{k} x_{i_j}, \quad k = 1, 2, \dots, n. $$

Proposition 1 Each elementary symmetric function $F_n^k$ satisfies the isoperimetric inequality (2.3).

Proof For $k = 1$, the inequality (2.3) is obvious. If $F_n = F_n^n$, the inequality (2.3) becomes the classical means inequality, i.e., the geometric mean is greater than or equal to the harmonic mean. For simplicity, we present the proof only in the case $k = 2$. In this case, the inequality (2.3) reduces to

$$ n^n \left( E_n^1 \right)^2 \left( E_n^2 \right)^{n-1} \le \left( 2 \left( E_n^1 \right)^2 + (n-2) E_n^2 \right)^n. $$
(2.5)

We consider the function $g(x) = \bigl( 2x + (n-2) a \bigr)^n - n^n x a^{n-1}$, where $a = E_n^2$. By Newton's inequalities we have $\left( E_n^1 \right)^2 \ge E_n^2$, and the inequality (2.5) is equivalent to $g(x) \ge 0$ for $x = \left( E_n^1 \right)^2$, which holds for all $x \ge a$. □
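A numerical check of Proposition 1 in the case $k = 2$, as a sketch (the helper names elem_sym and iso_holds are ours): compute $F_n^2$ directly and test (2.3) at random positive points.

import random
from itertools import combinations

def elem_sym(x, k):
    """Elementary symmetric function F_n^k of the entries of x."""
    total = 0.0
    for idx in combinations(range(len(x)), k):
        p = 1.0
        for i in idx:
            p *= x[i]
        total += p
    return total

def iso_holds(x, k, tol=1e-9):
    """Check the discrete isoperimetric inequality (2.3) for F_n = F_n^k at the point x."""
    n = len(x)
    m = sum(x) / n
    e = [m] * n
    Fn = lambda v: elem_sym(v, k)
    Fn1 = lambda v: sum(Fn(v[:i] + [m] + v[i + 1:]) for i in range(n))
    return (Fn(x) / Fn(e)) ** (1 / n) <= (Fn1(x) / Fn1(e)) ** (1 / (n - 1)) + tol

random.seed(2)
print(all(iso_holds([random.uniform(0.1, 5) for _ in range(5)], 2)
          for _ in range(500)))                  # expected: True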

Moreover, it can be easily seen that if $F_n$ is a Schur-concave function with the property

$$ F_n\left( \frac{x_1 + \cdots + x_n}{n}, x_2, \dots, x_n \right) + \cdots + F_n\left( x_1, \dots, x_{n-1}, \frac{x_1 + \cdots + x_n}{n} \right) \ge (n-1) F_n(x_1, \dots, x_n) + F_n\left( \frac{x_1 + \cdots + x_n}{n}, \dots, \frac{x_1 + \cdots + x_n}{n} \right), $$
(2.6)

then the inequality (2.3) holds. Equality holds in (2.6) when $F_n = f(x_1) + \cdots + f(x_n)$, but (2.6) is not necessary for (2.3). For example, the fundamental symmetric polynomial of degree n, $F_n^n = x_1 x_2 \cdots x_n$, satisfies (2.3) but not (2.6).
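A small numerical illustration of this remark (a sketch of our own, with the sample point $(0.1, 0.1, 5.8)$): for the product $F_n(x) = x_1 \cdots x_n$, inequality (2.3) holds at this point while (2.6) fails.

def prod(v):
    p = 1.0
    for t in v:
        p *= t
    return p

x = [0.1, 0.1, 5.8]
n = len(x)
m = sum(x) / n                       # = 2.0
e = [m] * n
Fn1 = lambda v: sum(prod(v[:i] + [m] + v[i + 1:]) for i in range(n))

iso_23 = (prod(x) / prod(e)) ** (1 / n) <= (Fn1(x) / Fn1(e)) ** (1 / (n - 1))
ineq_26 = Fn1(x) >= (n - 1) * prod(x) + prod(e)
print(iso_23, ineq_26)               # True False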

Finally, it remains an open question whether all Schur-concave functions satisfy the isoperimetric inequality (2.3).

3 Schur-convexity of a family of symmetric functions and applications

Let us consider the following family of symmetric functions:

$$ F_n^k = \sum_{1 \le i_1 < \cdots < i_k \le n} \, \prod_{j=1}^{k} f(x_{i_j}), \quad k = 1, 2, \dots, n, $$

where f is a positive function.

In this section, we apply the Schur-convexity of such a family of symmetric functions to derive some applications in generalized geometric programming.

For the case $k = 1$, if f is a convex function, then the Schur-convexity of $F_n^1$ is obvious. See the Hardy-Littlewood-Pólya inequality [12]. In [20] an extensive study concerning the Schur-convexity of the above class of symmetric functions was given. For the convenience of the reader, we recall here the proof of the Schur-convexity of $F_n^k$ only in the cases $k = 2$ and $k = n-1$.

We say that a function $f: \Omega \to \mathbb{R}_+$ is log-convex if the function $\log f$ is convex. If f is a log-convex function, then f is also a convex function. See [18].

Theorem 6 Let $I \subset \mathbb{R}$ be a convex set with a nonempty interior. If $f: I \to \mathbb{R}_+$ is differentiable in the interior of I, continuous on I, positive and log-convex, then $F_n^2(x) = \sum_{1 \le i < j \le n} f(x_i) f(x_j)$, $x = (x_1, \dots, x_n) \in I^n$, is Schur-convex on $I^n$.

Proof We can write $F_n^2$ in the following form:

$$ F_n^2(x) = f(x_1) f(x_2) + \bigl( f(x_1) + f(x_2) \bigr) \sum_{i=3}^{n} f(x_i) + \sum_{3 \le i < j \le n} f(x_i) f(x_j). $$

Thus, we have

$$ (x_1 - x_2) \left( \frac{\partial F_n^2(x)}{\partial x_1} - \frac{\partial F_n^2(x)}{\partial x_2} \right) = (x_1 - x_2) \left( f'(x_1) f(x_2) - f'(x_2) f(x_1) + \bigl( f'(x_1) - f'(x_2) \bigr) \sum_{i=3}^{n} f(x_i) \right). $$

Since f is a log-convex function ($f'/f$ is monotone increasing), we deduce that f is convex and we have

$$ (x_1 - x_2) \bigl( f'(x_1) f(x_2) - f'(x_2) f(x_1) \bigr) \ge 0, $$

respectively,

$$ (x_1 - x_2) \bigl( f'(x_1) - f'(x_2) \bigr) \ge 0. $$

In conclusion, we obtain

$$ (x_1 - x_2) \left( \frac{\partial F_n^2(x)}{\partial x_1} - \frac{\partial F_n^2(x)}{\partial x_2} \right) \ge 0, $$

and it follows that $F_n^2$ is a Schur-convex function. □
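Theorem 6 can be probed numerically; the sketch below assumes the log-convex choice $f(t) = e^t$ and checks that $F_n^2(x) \le F_n^2(y)$ whenever $x \prec y$, where x is obtained from y by a "Robin Hood" transfer on two coordinates (which always produces a majorized point).

import math
import random
from itertools import combinations

f = math.exp                                     # positive and log-convex

def F2(x):
    return sum(f(x[i]) * f(x[j]) for i, j in combinations(range(len(x)), 2))

random.seed(3)
for _ in range(1000):
    y = [random.uniform(-2, 2) for _ in range(5)]
    i, j = random.sample(range(5), 2)
    t = random.random() * abs(y[i] - y[j]) / 2   # move the two entries closer together
    x = list(y)
    x[i], x[j] = min(y[i], y[j]) + t, max(y[i], y[j]) - t
    assert F2(x) <= F2(y) + 1e-9                 # Schur-convexity: x is majorized by y
print("F_n^2 respects the majorization order on these samples")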

Theorem 7 Let $I \subset \mathbb{R}$ be a convex set with a nonempty interior. If $f: I \to \mathbb{R}_+$ is differentiable in the interior of I, continuous on I, positive and log-convex, then $F_n^{n-1}(x) = \sum_{1 \le i_1 < \cdots < i_{n-1} \le n} \prod_{j=1}^{n-1} f(x_{i_j})$ is Schur-convex on $I^n$.

Proof We can write $F_n^{n-1}$ in the following form:

$$ F_n^{n-1}(x) = \prod_{i=3}^{n} f(x_i) \left( f(x_1) + f(x_2) + f(x_1) f(x_2) \sum_{j=3}^{n} \frac{1}{f(x_j)} \right). $$

Hence, we have

$$ (x_1 - x_2) \left( \frac{\partial F_n^{n-1}(x)}{\partial x_1} - \frac{\partial F_n^{n-1}(x)}{\partial x_2} \right) = (x_1 - x_2) \prod_{i=3}^{n} f(x_i) \left( f'(x_1) - f'(x_2) + \sum_{j=3}^{n} \frac{1}{f(x_j)} \bigl( f'(x_1) f(x_2) - f'(x_2) f(x_1) \bigr) \right) \ge 0, $$

by the same arguments as in the proof of Theorem 6. □

Among the many applications of Schur-convex functions, we recall here some results in optimization problems. Generalized geometric programming refers to optimization problems that involve signomial functions, which have applications in process synthesis, process design, molecular conformation and chemical equilibrium. See [9]. A signomial function is a sum of products of the independent variables, each of them exponentiated to some nonzero rational number. A posynomial is a signomial function with positive coefficients.

In geometric programming, one investigates the properties of functions $f: \mathbb{R} \to \mathbb{R}$ such that the substitution $x = f(y)$ convexifies a posynomial.

The main idea in solving such optimization problems is to convexify the function to be optimized. The supremum of a convex function is easy to find because it is attained on the boundary of the domain. Moreover, in the case of Schur-convex functions, the minimum is attained when all the independent variables are equal.

This is the reason why we investigate transformations that convexify a given function. In fact, in geometric programming, the problem is reduced to the study of the convexity of the transformation.

Consider a family of strictly positive and monotone functions $f_i: \mathbb{R} \to \mathbb{R}$, $i = 1, \dots, n$, and define $P_n(y_1, \dots, y_n) = \prod_{i=1}^{n} f_i(y_i)$. We recall here a recent result from [9].

Theorem 8 If $f_i''(y) f_i(y) - \left( f_i'(y) \right)^2 \ge 0$ for every $i = 1, \dots, n$ and every $y \in \mathbb{R}$, then the transformation function $P_n(y_1, \dots, y_n)$ is convex on $\mathbb{R}^n$.
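The condition $f_i'' f_i - (f_i')^2 \ge 0$ says precisely that each $f_i$ is log-convex. A midpoint-convexity spot check of $P_n$, as a sketch with the log-convex, increasing factors $e^y + e^{2y}$ and $1 + e^y$ (our own choices):

import math
import random

# Two strictly positive, increasing, log-convex factors (f'' f - (f')^2 >= 0).
f1 = lambda y: math.exp(y) + math.exp(2 * y)
f2 = lambda y: 1.0 + math.exp(y)

def P(y):
    return f1(y[0]) * f2(y[1])

random.seed(4)
for _ in range(2000):
    a = [random.uniform(-3, 3) for _ in range(2)]
    b = [random.uniform(-3, 3) for _ in range(2)]
    mid = [(s + t) / 2 for s, t in zip(a, b)]
    assert P(mid) <= (P(a) + P(b)) / 2 + 1e-9    # midpoint convexity of P_n
print("midpoint-convexity check of P_n passed")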

Consider the following family of functions:

$$ P_n^k = \sum_{1 \le i_1 < \cdots < i_k \le n} \, \prod_{j=1}^{k} f_{i_j}(x_{i_j}), \quad k = 1, 2, \dots, n. $$

We are now in a position to use the Schur-convexity of our family of symmetric functions.

Theorem 9 Let $f_i: \mathbb{R} \to \mathbb{R}$, $i = 1, \dots, n$, be a family of strictly positive and monotone functions. If each function $f_i$ is log-convex, then the function $P_n^k$ is convex. Moreover, the function $P_n^k$ is Schur-convex provided that $f_1 = \cdots = f_n$.

Proof Taking into account Theorem 8 and the fact that the constant function is log-convex, each term of $P_n^k$ is convex; summing term by term, we obtain that all functions $P_n^k$ are convex. The Schur-convexity in the case $f_1 = \cdots = f_n$ follows from Theorems 6 and 7. □

Finally, we refer to some inequalities for elementary symmetric functions which are related to the study of partial differential equations associated with curvature problems and have applications to fully nonlinear elliptic equations [8, 14]. Other results on min-max inequalities and optimization theory can be found in [17, 25].

Proposition 2 Consider the symmetric function $F_k^l(x_1, x_2, \dots, x_n) = \frac{F_n^1 F_n^l}{F_n^k}$, where $x_i > 0$, $i = 1, \dots, n$. Then $F_k^l$ is Schur-convex for each $1 \le l \le k \le n$.

Remark 2 The convexity of such functions is an open problem and has applications in fully nonlinear elliptic equations such as the Monge-Ampère equation. See Theorem 6.4 and Corollary 6.5 in [13], where the above functions are defined on a particular convex cone.

Proof For simplicity, we denote by $\sigma_k$ the elementary symmetric function $F_n^k$, by $\sigma_i(x_1)$ the elementary symmetric function of order i in the variables $x_2, x_3, \dots, x_n$ only, by $\sigma_i(x_2)$ the elementary symmetric function of order i in the variables $x_1, x_3, \dots, x_n$ only, and by $\sigma_i(x_1, x_2)$ the elementary symmetric function of order i in the variables $x_3, x_4, \dots, x_n$ only.

In order to study the Schur-convexity of $f = F_k^l$, we need to evaluate the two partial derivatives

$$ \frac{\partial f}{\partial x_1} = \frac{\bigl( \sigma_l + \sigma_1 \sigma_{l-1}(x_1) \bigr) \sigma_k - \sigma_1 \sigma_l \sigma_{k-1}(x_1)}{\sigma_k^2}, \qquad \frac{\partial f}{\partial x_2} = \frac{\bigl( \sigma_l + \sigma_1 \sigma_{l-1}(x_2) \bigr) \sigma_k - \sigma_1 \sigma_l \sigma_{k-1}(x_2)}{\sigma_k^2}, $$

and it follows that

$$ (x_1 - x_2) \left( \frac{\partial f}{\partial x_1} - \frac{\partial f}{\partial x_2} \right) = \frac{(x_1 - x_2)^2 \, \sigma_1}{\sigma_k^2} \bigl( \sigma_l \sigma_{k-2}(x_1, x_2) - \sigma_k \sigma_{l-2}(x_1, x_2) \bigr). $$

Now, we need to study the sign of $\sigma_l \sigma_{k-2}(x_1, x_2) - \sigma_k \sigma_{l-2}(x_1, x_2)$. Taking into account that

$$ \begin{aligned} \sigma_l &= x_1 x_2 \sigma_{l-2}(x_1, x_2) + (x_1 + x_2) \sigma_{l-1}(x_1, x_2) + \sigma_l(x_1, x_2), \\ \sigma_k &= x_1 x_2 \sigma_{k-2}(x_1, x_2) + (x_1 + x_2) \sigma_{k-1}(x_1, x_2) + \sigma_k(x_1, x_2), \end{aligned} $$

we obtain

$$ \sigma_l \sigma_{k-2}(x_1, x_2) - \sigma_k \sigma_{l-2}(x_1, x_2) = (x_1 + x_2) \bigl( \sigma_{l-1}(x_1, x_2) \sigma_{k-2}(x_1, x_2) - \sigma_{k-1}(x_1, x_2) \sigma_{l-2}(x_1, x_2) \bigr). $$

We need to prove that $\sigma_{l-1}(x_1, x_2) \sigma_{k-2}(x_1, x_2) \ge \sigma_{k-1}(x_1, x_2) \sigma_{l-2}(x_1, x_2)$. If we denote by $E_i$ the arithmetic mean of the elementary symmetric function of order i, we have $E_i^2 \ge E_{i-1} E_{i+1}$, the so-called Newton's inequalities. Hence, Newton's inequalities imply that $E_{l-1} E_{k-2} \ge E_{k-1} E_{l-2}$ for $k \ge l$.

Since, for each $k \ge l$, we have

$$ \binom{n-2}{l-1} \binom{n-2}{k-2} \ge \binom{n-2}{k-1} \binom{n-2}{l-2}, $$

it is obvious that $\sigma_{l-1}(x_1, x_2) \sigma_{k-2}(x_1, x_2) \ge \sigma_{k-1}(x_1, x_2) \sigma_{l-2}(x_1, x_2)$.

Hence, the Schur-convexity of the function $F_k^l$ follows. □
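The Schur-convexity asserted in Proposition 2 can also be probed numerically; a sketch of our own (with $l = 2$, $k = 3$, and central finite differences for the criterion (1.1)):

import random
from itertools import combinations

def sigma(x, k):
    """Elementary symmetric function of order k of the entries of x (sigma_0 = 1)."""
    if k == 0:
        return 1.0
    total = 0.0
    for idx in combinations(range(len(x)), k):
        p = 1.0
        for i in idx:
            p *= x[i]
        total += p
    return total

def F(x, l, k):
    return sigma(x, 1) * sigma(x, l) / sigma(x, k)

def schur_diff(x, l, k, i, j, h=1e-6):
    """(x_i - x_j)(dF/dx_i - dF/dx_j) via central finite differences, cf. (1.1)."""
    def partial(t):
        xp, xm = list(x), list(x)
        xp[t] += h
        xm[t] -= h
        return (F(xp, l, k) - F(xm, l, k)) / (2 * h)
    return (x[i] - x[j]) * (partial(i) - partial(j))

random.seed(5)
l, k = 2, 3
print(all(schur_diff([random.uniform(0.1, 5) for _ in range(5)], l, k, 0, 1) >= -1e-6
          for _ in range(500)))                  # expected: True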

Remark 3 By a similar argument, it follows that the function $\left( F_k^l \right)^{\alpha}$ is Schur-convex for each $\alpha > 0$.

References

  1. Chu Y-M, Wang G-D, Zhang X-M: The Schur multiplicative and harmonic convexities of the complete symmetric function. Math. Nachr. 2011, 284(5–6):653–663. 10.1002/mana.200810197

  2. Chu Y-M, Xia W-F, Zhao TH: Schur convexity for a class of symmetric functions. Sci. China Math. 2010, 53(2):465–474. 10.1007/s11425-009-0188-2

  3. Chu Y-M, Zhang X-M, Wang G-D: The Schur geometrical convexity of the extended mean values. J. Convex Anal. 2008, 15(4):707–718.

  4. Chu Y-M, Wang G-D, Zhang X-H: Schur convexity and Hadamard’s inequality. Math. Inequal. Appl. 2010, 13(4):725–731.

  5. Chu Y-M, Xia W-F: Solution of an open problem for Schur convexity or concavity of the Gini mean values. Sci. China Ser. A 2009, 52(10):2099–2106. 10.1007/s11425-009-0116-5

  6. Chu Y-M, Sun T-C: The Schur harmonic convexity for a class of symmetric functions. Acta Math. Sci., Ser. B 2010, 30(5):1501–1506.

  7. Chu Y-M, Lv Y-P: The Schur harmonic convexity of the Hamy symmetric function and its applications. J. Inequal. Appl. 2009., 2009:

  8. Gilbarg D, Trudinger NS: Elliptic Partial Differential Equations of Second Order. Springer, Berlin; 1983.

  9. Gounaris CE, Floudas CA: Convexity of products of univariate functions and convexification transformations for geometric programming. J. Optim. Theory Appl. 2008, 138: 407–427. 10.1007/s10957-008-9402-6

  10. Guan K: Schur-convexity of complete elementary symmetric function. J. Inequal. Appl. 2006., 2006:

  11. Guan K: Some properties of a class of symmetric functions. J. Math. Anal. Appl. 2007, 336: 70–80. 10.1016/j.jmaa.2007.02.064

  12. Hardy GH, Littlewood JE, Pólya G: Inequalities. Cambridge Mathematical Library, Cambridge; 1952. Reprinted (1988)

  13. Krylov NV: On the general notion of fully nonlinear second-order elliptic equations. Trans. Am. Math. Soc. 1995, 347: 857–895. 10.1090/S0002-9947-1995-1284912-8

  14. Li M, Trudinger NS: On some inequalities for elementary symmetric functions. Bull. Aust. Math. Soc. 1994, 50: 317–326. 10.1017/S0004972700013770

  15. Marshall AW, Olkin I: Inequalities: Theory of Majorization and Its Applications. Academic Press, New York; 1979.

  16. Merkle M: Conditions for convexity of a derivative and applications to the gamma and digamma function. Facta Univ., Math. Inform. 2001, 16: 13–20.

  17. Niculescu CP, Rovenţa I: Fan’s inequality in geodesic spaces. Appl. Math. Lett. 2009, 22: 1529–1533. 10.1016/j.aml.2009.03.020

  18. Niculescu CP, Persson L-E: Convex Functions and Their Applications. A Contemporary Approach. Springer, New York; 2006.

  19. Roberts AW, Varberg DE: Convex Functions. Academic Press, New York; 1973.

  20. Rovenţa I: Schur-convexity of a class of symmetric functions. An. Univ. Craiova, Ser. Mat. Inform. 2010, 37(1):12–18.

  21. Sun T-C, Lv Y-P, Chu Y-M: Schur multiplicative and harmonic convexities of generalized Heronian mean in n variables and their applications. Int. J. Pure Appl. Math. 2009, 55(1):25–33.

  22. Xia W-F, Chu Y-M: The Schur convexity of Gini mean values in the sense of harmonic mean. Acta Math. Sci., Ser. B 2011, 31(3):1103–1112.

  23. Xia W-F, Chu Y-M: The Schur harmonic convexity of Lehmer means. Int. Math. Forum 2009, 4(41–44):2009–2015.

  24. Xia W-F, Wang G-D, Chu Y-M: Schur convexity and inequalities for a class of symmetric functions. Int. J. Pure Appl. Math. 2010, 58(4):435–452.

  25. Zhang X-M: Optimization of Schur-convex functions. Math. Inequal. Appl. 1998, 1(3):319–330.

  26. Zhang X-M: Schur-convex functions and isoperimetric inequalities. Proc. Am. Math. Soc. 1998, 126(2):461–470. 10.1090/S0002-9939-98-04151-3

  27. Zhang X-M, Chu Y-M: Convexity of the integral arithmetic mean of a convex function. Rocky Mt. J. Math. 2010, 40(3):1061–1068. 10.1216/RMJ-2010-40-3-1061

Acknowledgement

This work was supported by a grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI, project number PN-II-RU-TE-2011-3-0223.

Author information

Correspondence to Ionel Rovenţa.

Additional information

Competing interests

The author declares that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Rovenţa, I. A note on Schur-concave functions. J Inequal Appl 2012, 159 (2012). https://doi.org/10.1186/1029-242X-2012-159

  • DOI: https://doi.org/10.1186/1029-242X-2012-159
