On Kedlaya-type inequalities for weighted means

In 2016 we proved that for every symmetric, repetition invariant and Jensen concave mean $\mathscr{M}$ the Kedlaya-type inequality
$$ \mathscr{A} \bigl(x_{1},\mathscr{M}(x_{1},x_{2}), \ldots,\mathscr{M}(x_{1},\ldots,x_{n}) \bigr) \le \mathscr{M} \bigl( x_{1}, \mathscr{A}(x_{1},x_{2}), \ldots,\mathscr{A}(x_{1},\ldots,x_{n}) \bigr) $$
holds for an arbitrary $(x_{n})$ ($\mathscr{A}$ stands for the arithmetic mean). We are going to prove the weighted counterpart of this inequality. More precisely, if $(x_{n})$ is a vector with corresponding (non-normalized) weights $(\lambda_{n})$ and $\mathscr{M}_{i=1}^{n}(x_{i},\lambda_{i})$ denotes the weighted mean, then, under analogous conditions on $\mathscr{M}$, the inequality holds for every $(x_{n})$ and $(\lambda_{n})$ such that the sequence $(\frac{\lambda_{k}}{\lambda_{1}+\cdots+\lambda_{k}})$ is nonincreasing.

In this setting, Kedlaya's result can be expressed briefly as: the geometric mean is a Kedlaya mean. Nevertheless, a natural problem arises: to find a broad family of Kedlaya means. For example, it is quite easy to prove that the min and the arithmetic mean are Kedlaya means. Moreover, convex combinations of Kedlaya means are again Kedlaya means.
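The claim that min and the arithmetic mean are Kedlaya means can be spot-checked numerically. The following sketch (not part of the paper; the function names `amean` and `kedlaya_holds` are ours) tests the non-weighted Kedlaya inequality on random data:

```python
import random

def amean(xs):
    # arithmetic mean of a list
    return sum(xs) / len(xs)

def kedlaya_holds(M, x, tol=1e-9):
    # Check A(x1, M(x1,x2), ..., M(x1,...,xn)) <= M(x1, A(x1,x2), ..., A(x1,...,xn))
    n = len(x)
    lhs = amean([M(x[:k]) for k in range(1, n + 1)])
    rhs = M([amean(x[:k]) for k in range(1, n + 1)])
    return lhs <= rhs + tol

random.seed(0)
for M in (min, amean):
    for _ in range(1000):
        x = [random.uniform(0.1, 10) for _ in range(random.randint(2, 6))]
        assert kedlaya_holds(M, x)
```

For the arithmetic mean both sides coincide, so the inequality holds with equality; for min it is typically strict.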
An approach to this problem was given recently by the authors in [3]. We are going to present this result in a while, but we need to introduce some properties of means first.
We say that M is symmetric, (strictly) increasing, and Jensen convex (concave) if, for all n ∈ N, the n-variable restriction M|_{I^n} is symmetric, (strictly) increasing in each of its variables, and Jensen convex (concave) on I^n, respectively.
A mean M is called repetition invariant if, for all n, m ∈ N and (x_1, ..., x_n) ∈ I^n, the following identity is satisfied:
M(x_1, ..., x_1, x_2, ..., x_2, ..., x_n, ..., x_n) = M(x_1, x_2, ..., x_n),
where each entry x_i on the left-hand side is repeated m times. Having this in hand, let us recall one of the most important results from [3].
Five years later, in 1999, Kedlaya [23] improved his result to a weighted setting. In more detail, he showed the following.

Theorem 1.2 (Kedlaya) Let x_1, ..., x_n, λ_1, ..., λ_n be positive real numbers and define Λ_k := λ_1 + ... + λ_k. If the sequence (λ_i/Λ_i)_{i=1}^n is nonincreasing, then
$$ \sum_{k=1}^{n} \frac{\lambda_k}{\Lambda_n} \Biggl( \prod_{i=1}^{k} x_i^{\lambda_i} \Biggr)^{1/\Lambda_k} \le \prod_{k=1}^{n} \Biggl( \frac{1}{\Lambda_k} \sum_{i=1}^{k} \lambda_i x_i \Biggr)^{\lambda_k/\Lambda_n}. $$

Motivated by these preliminaries, we are going to tackle a weighted counterpart of the Kedlaya inequality. Before this can be done, we need to give some introduction to weighted means in an abstract setting. We should realize that there is no formal agreement concerning this definition: weighted means have been introduced for particular families only.
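Theorem 1.2 can be illustrated numerically for the weighted geometric and arithmetic means. The sketch below is ours, not the paper's; it constructs weights whose ratio sequence λ_k/(λ_1 + ... + λ_k) is nonincreasing by prescribing the ratios directly, and then checks the weighted mixed-mean inequality on random data:

```python
import math
import random

def wamean(xs, ws):
    # weighted arithmetic mean
    return sum(w * x for x, w in zip(xs, ws)) / sum(ws)

def wgmean(xs, ws):
    # weighted geometric mean (computed via logarithms)
    s = sum(ws)
    return math.exp(sum(w * math.log(x) for x, w in zip(xs, ws)) / s)

random.seed(1)
for _ in range(500):
    n = random.randint(2, 6)
    x = [random.uniform(0.1, 10) for _ in range(n)]
    # lambda_1 / Lambda_1 = 1; choosing lambda_k = r/(1-r) * Lambda_{k-1}
    # gives lambda_k / Lambda_k = r, so sorting the r's downward makes the
    # ratio sequence nonincreasing, as Theorem 1.2 requires
    lam = [random.uniform(0.5, 2.0)]
    ratios = sorted([random.uniform(0.05, 0.9) for _ in range(n - 1)], reverse=True)
    for r in ratios:
        lam.append(r / (1 - r) * sum(lam))
    lhs = wamean([wgmean(x[:k], lam[:k]) for k in range(1, n + 1)], lam)
    rhs = wgmean([wamean(x[:k], lam[:k]) for k in range(1, n + 1)], lam)
    assert lhs <= rhs * (1 + 1e-9)
```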
In this situation let us present weighted deviation and quasi-deviation means only. A formal definition of weighted means in the abstract setting will be introduced in the following section.
For an interval I and a deviation function E : I^2 → R (that is, E(x, ·) is continuous and strictly decreasing with E(x, x) = 0 for all x ∈ I), and for x ∈ I^n, we define the mean y = D_E(x) as the unique solution of the equation
E(x_1, y) + ... + E(x_n, y) = 0. (1.2)
Its weighted counterpart is defined, for any x ∈ I^n and λ ∈ R^n_+, as the unique solution of the equation
λ_1 E(x_1, y) + ... + λ_n E(x_n, y) = 0. (1.3)
This definition can be generalized further: if E satisfies the property that, for all x, y ∈ I with x < y, the mapping t ↦ E(x, t)/E(y, t) is strictly increasing on (x, y), then equations (1.2) and (1.3) define the so-called quasi-deviation and weighted quasi-deviation means, respectively.
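Since a weighted deviation mean is defined implicitly by equation (1.3), it can be computed by a one-dimensional root search on [min(x), max(x)]. A minimal sketch (ours), assuming the convention that E(x, ·) is strictly decreasing, so the weighted sum of deviations is strictly decreasing in y:

```python
def weighted_deviation_mean(E, x, lam, tol=1e-12):
    # Solve sum_i lam_i * E(x_i, y) = 0 by bisection on [min(x), max(x)].
    # Assumes E(x, .) is continuous, strictly decreasing, and E(x, x) = 0,
    # so the sum is strictly decreasing in y and changes sign on the interval.
    lo, hi = min(x), max(x)
    def F(y):
        return sum(l * E(xi, y) for xi, l in zip(x, lam))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if F(mid) > 0:
            lo = mid  # root lies above mid, since F is decreasing
        else:
            hi = mid
    return (lo + hi) / 2

# E(x, y) = x - y recovers the weighted arithmetic mean
arith = weighted_deviation_mean(lambda x, y: x - y, [1.0, 2.0, 6.0], [1.0, 1.0, 2.0])
```

With E(x, y) = x − y the root of (1.3) is (λ_1x_1 + ... + λ_nx_n)/(λ_1 + ... + λ_n), so `arith` above is (1 + 2 + 12)/4 = 3.75.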
Consequently, whenever we deal with a family which is a particular case of quasi-deviation means, the weighted means are immediately defined. In this way we obtain quasi-arithmetic means, Gini means, Bajraktarević means, etc. (cf. [24] for definitions) in their weighted setting.
Nevertheless, for the purpose of the present note, we need to separate the definition of weighted means from any particular family. This will be accomplished in the next section.

Weighted means
In this section we will introduce the notion of weighted means. Before we begin, let us underline a few important facts. Weighted means are used very often in the literature. Most usually they are obtained by attaching weights to some symmetric operator (for example, (λ_1x_1 + ... + λ_nx_n)/(λ_1 + ... + λ_n) instead of (x_1 + ... + x_n)/n). This is done in such a way that if we put λ_1 = λ_2 = ... = λ_n (very often the weights are required to be normalized, that is, λ_1 + ... + λ_n = 1; see e.g. [25]), then the weighted mean reduces to the non-weighted one. Due to this fact, whenever we talk about a weighted mean, its non-weighted counterpart is repetition invariant.
Let us also underline that in this definition weights are taken from some arbitrary ring R ⊂ R. In fact, there are three particular rings which are significantly more important than any other: the ring of integers and the fields of rational numbers and real numbers.
As we will see, every repetition invariant mean generates (in a unique way) a weighted mean with rational weights (roughly speaking, this is implied by scaling invariance; see the definition below). Real weights are also of special interest because, each time we are dealing with a quasi-deviation mean, we naturally want all real weights to be admissible.
A weighted mean on I over R or, in other words, an R-weighted mean on I is a function M : ⋃_{n=1}^∞ (I^n × W_n(R)) → I satisfying conditions (i)–(iv) presented below. Elements belonging to I will be called entries; elements of R are called weights.
(iii) Mean value property: for all n ∈ N and all (x, λ) ∈ I^n × W_n(R),
min(x_1, ..., x_n) ≤ M(x, λ) ≤ max(x_1, ..., x_n).
(iv) Elimination principle: for all n ∈ N, all (x, λ) ∈ I^n × W_n(R), and all j ∈ {1, ..., n} such that λ_j = 0,
M(x, λ) = M((x_1, ..., x_{j−1}, x_{j+1}, ..., x_n), (λ_1, ..., λ_{j−1}, λ_{j+1}, ..., λ_n)),
i.e., entries with a zero weight can be omitted.
Proof If λ_k = 0, then the statement follows immediately from the elimination principle. Otherwise, for i ∈ {1, ..., n}, define λ'_i := δ_{ik}λ_k, where δ stands for the Kronecker symbol. Applying the elimination principle iteratively n − 1 times, and then using the reduction principle, we obtain exactly the identity to be proved.
In the following theorem we will prove that a weighted mean defined over a ring R can be extended to a mean weighted over its quotient field Quot(R). Moreover, if M is symmetric/monotone, then so is its extension.
Proof Fix n ∈ N, x ∈ I^n, and λ ∈ W_n(Quot(R)). Then there exists q ∈ R such that qλ ∈ W_n(R) (for example, a product of all denominators). We define the extension by M̄(x, λ) := M(x, qλ). To prove the correctness of this definition, it suffices to show that it does not depend on the selection of q. Indeed, take q' ∈ R such that q'λ ∈ W_n(R). We need to verify that the equality M(x, qλ) = M(x, q'λ) is valid; this follows by applying the nullhomogeneity of M (twice). In order to verify the nullhomogeneity of M̄, observe that every positive element of Quot(R) can be represented as a/b for some a, b ∈ R_+. Then, obviously, bq · (a/b) · λ ∈ W_n(R), and the claim follows. To prove the reduction principle, take λ, μ ∈ W_n(Quot(R)) arbitrarily. Then there exist q, r ∈ R such that qλ, rμ ∈ W_n(R). In this case we also have qrλ, qrμ ∈ W_n(R); then the concatenated weight vector (qrλ, qrμ) belongs to W_{2n}(R) and equals qr times the concatenation of λ and μ. Having these properties, we obtain the reduction principle for M̄. The two remaining properties (mean value property, elimination principle) are obvious. The last assertion is simply implied by (2.1).
What we are going to prove now is that every repetition invariant (non-weighted) mean can be associated with a Z-weighted and, by virtue of Theorem 2.2, a Q-weighted mean. In fact, this operation can also be reversed. Proof Clearly, the transformations described in the theorem are inverses of each other. Let M be a repetition invariant mean on I and let the weighted mean M be given by (2.2). We need to show that M satisfies all properties (i)–(iv) listed in the definition of weighted means. First observe that M obviously admits the mean value property. The elimination principle is also immediate because, if λ_j = 0, then the element x_j does not appear on the right-hand side of (2.2).
Let us now verify the nullhomogeneity in the weights. For t ∈ N_+, this follows by applying the repetition invariance of M. Finally, we will prove the reduction principle. We may assume that λ, μ ∈ N^n. The required identity, for all x ∈ I^n, follows by applying Lemma 2.1 iteratively to encompass each block appearing on the left-hand side.
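The construction (2.2) of a Z-weighted mean from a repetition invariant mean amounts to repeating each entry according to its nonnegative integer weight and applying the non-weighted mean. A minimal sketch (function names are ours):

```python
def weighted_from_repetition(M, x, lam):
    # Formula (2.2), sketched: repeat each entry x_i according to its
    # nonnegative integer weight lam_i and apply the non-weighted mean M.
    assert all(isinstance(l, int) and l >= 0 for l in lam) and any(lam)
    expanded = [xi for xi, l in zip(x, lam) for _ in range(l)]
    return M(expanded)

def amean(xs):
    # non-weighted arithmetic mean, used as the repetition invariant mean M
    return sum(xs) / len(xs)

# weights (2, 1) on entries (3, 9) expand to (3, 3, 9)
val = weighted_from_repetition(amean, [3.0, 9.0], [2, 1])
```

Note that doubling all weights leaves the value unchanged, which is exactly the nullhomogeneity in the weights (a consequence of repetition invariance).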
Let us now introduce some natural properties of weighted means. A weighted mean M : ⋃_{n=1}^∞ (I^n × W_n(R)) → I is said to be symmetric if, for all n ∈ N, x ∈ I^n, λ ∈ W_n(R), and all permutations σ ∈ S_n, M(x ∘ σ, λ ∘ σ) = M(x, λ). We will call a weighted mean M Jensen concave if, for all n ∈ N, x, y ∈ I^n, and λ ∈ W_n(R),
M((x + y)/2, λ) ≥ (M(x, λ) + M(y, λ))/2.
If, on the above indicated domain, the reversed inequality is satisfied, then M is said to be Jensen convex. First observe that, given a (symmetric) Jensen concave R-weighted mean M on I, the mean (x, λ) ↦ −M(−x, λ) is a (symmetric) Jensen convex R-weighted mean on −I. Therefore, everything that we obtain in terms of Jensen concavity can be rewritten for Jensen convexity, and vice versa. Another important observation is that, due to the mean value property, means are locally bounded functions. Therefore, as a consequence of the celebrated Bernstein–Doetsch theorem (cf. [26, 27]), Jensen concavity or Jensen convexity is equivalent to concavity or convexity, respectively. In particular, this implies continuity with respect to the entries over the interior of I^n.
A weighted mean M is said to be continuous in the weights if, for all n ∈ N and x ∈ I n , the mapping λ → M(x, λ) is continuous on W n (R).
The following two statements are easy to see. Usually, instead of explicitly writing down weights, we can consider a function with finite range as the argument of the given mean. Let R be a subring of R. We say that an interval is an R-interval if both of its endpoints belong to R. The Cartesian product of two R-intervals will be called an R-rectangle. The length of an interval D will be denoted by |D|.
Given R-intervals D and E, a function f : D → I (resp. f : D × E → I) is called R-simple if it is constant on each member of some finite partition of D into R-intervals (resp. of D × E into R-rectangles). One can easily see that, for every x ∈ D and y ∈ E, the mappings f(x, ·) and f(·, y) are R-simple functions on E and D, respectively.
A subset H ⊆ R or H ⊆ R^2 will be called R-simple if its characteristic function is R-simple. It is easy to see that a set H is R-simple if and only if it is the disjoint union of finitely many R-intervals or R-rectangles, respectively.
For an R-simple set H ⊆ R, the sum of the lengths of the decomposing R-intervals will be denoted by |H|. In fact, this is the Lebesgue measure of H.
In this section we will prove two important lemmas. The first one asserts that, for R-intervals D and E and a given ratio θ, there exists an R-simple set H ⊆ D × E such that:
1. for all x ∈ D, |{y : (x, y) ∈ H}| = θ · |E|;
2. for all y ∈ E, |{x : (x, y) ∈ H}| = θ · |D|.
A set H with the above properties will be called a θ-proportional subset of D × E.
Proof Let D and E be arbitrary R-intervals. Let us recall first that there exists an affine bijection ϕ : [0, 1)^2 → D × E, which can be given explicitly in terms of the endpoints of D and E. Finally, define a suitable set H_0 ⊆ [0, 1)^2; it is simple to verify that H_0 is a θ-proportional subset of [0, 1)^2, and therefore the set H := ϕ(H_0) has the desired properties. The inequality stated in the next result will be called the Jensen–Fubini inequality in the sequel. We remind the reader that the symbol A stands for the arithmetic mean.

Lemma 2.7 Let D and E be Q-intervals, let M be a symmetric and Jensen concave Q-weighted mean on I, and let f : D × E → I be a Q-simple function. Then
A_{x ∈ D} ( M_{y ∈ E} f(x, y) ) ≤ M_{y ∈ E} ( A_{x ∈ D} f(x, y) ), (2.6)
where the A- and M-integrals are understood as the corresponding weighted means of the finitely many values of f, weighted by the lengths of the pieces on which f is constant.
In addition, the validity of the reversed inequality in (2.6) characterizes the Jensen convexity of M.
Having this, we can stretch f to a Z-simple function. On the other hand, the nullhomogeneity (in the weights) of M and of A implies that both sides of (2.6) remain unchanged under this rescaling. Therefore we may assume that the initial function f is Z-simple and that D, E are Z-intervals. Furthermore (just to keep the notation simple), we can shift the bottom-left corner of D × E to the origin, that is, we assume that D = [0, n), E = [0, m) for some m, n ∈ N. Then we can construct a matrix (a_{i,j}), i ∈ {1, ..., n}, j ∈ {1, ..., m}, with entries in I such that f ≡ a_{i,j} on [i − 1, i) × [j − 1, j). Finally, applying the Jensen concavity of M to the column averages (a_{1,j} + ... + a_{n,j})/n, we obtain (2.6).
To complete the proof, assume that (2.6) holds for all Q-simple functions f : D × E → I. To prove the Jensen concavity of the mean M, let x, y ∈ I^n and λ ∈ W_n(Q). We may assume that λ_i > 0 for all i ∈ {1, ..., n}. Let E be a Q-interval which is partitioned into Q-intervals E_1, ..., E_n with |E_i| proportional to λ_i, i ∈ {1, ..., n}. Now construct the function f : [0, 2) × E → I by f(t, s) := x_i for t ∈ [0, 1), s ∈ E_i, and f(t, s) := y_i for t ∈ [1, 2), s ∈ E_i. Then, obviously, f is a Q-simple function. Applying (2.6) to this f, it follows that
M((x + y)/2, λ) ≥ (M(x, λ) + M(y, λ))/2,
which shows that M is Jensen concave, indeed. The last assertion of the theorem can be obtained by the transformation defined in (2.5).

Results: the weighted Kedlaya inequality
We are heading toward the inequality which is the main target of the present paper.
The nonincreasingness of the ratio sequence (λ_i/(λ_1 + ... + λ_i)) will be a key assumption for Kedlaya-type inequalities; therefore we also set
V_n(R) := { λ ∈ W^0_n(R) : (λ_k/(λ_1 + ... + λ_k))_{k=1}^n is nonincreasing },
and we let V(R) stand for the set of all infinite weight sequences λ ∈ W^0(R) with this property.
Given n ∈ N and a weight sequence λ ∈ W^0_n(R), we say that a weighted mean M : ⋃_{n=1}^∞ (I^n × W_n(R)) → I satisfies the n-variable λ-weighted Kedlaya inequality or, shortly, the (n, λ)-Kedlaya inequality, if
A_{k=1}^n ( M_{i=1}^k (x_i, λ_i), λ_k ) ≤ M_{k=1}^n ( A_{i=1}^k (x_i, λ_i), λ_k ) (3.1)
for all x ∈ I^n. If λ ∈ W^0(R) and this inequality holds for all n ∈ N, then we say that M satisfies the λ-weighted Kedlaya inequality or, shortly, the λ-Kedlaya inequality. The main result of the present note provides a sufficient condition on the weight sequence λ and the weighted mean M under which the n-variable λ-weighted Kedlaya inequality is satisfied by M.
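For numerical experiments it is convenient to compute the gap between the two sides of the (n, λ)-Kedlaya inequality for a given weighted mean. A small sketch (ours), assuming M is supplied as a function of an entry list and a weight list:

```python
def weighted_kedlaya_gap(M, x, lam):
    # Gap (right side minus left side) of the (n, lam)-Kedlaya inequality:
    #   A_{k=1..n}( M(x_1..x_k, lam_1..lam_k), lam_k )
    #     <= M_{k=1..n}( A(x_1..x_k, lam_1..lam_k), lam_k ).
    # A nonnegative gap means the inequality holds for this data.
    def wamean(xs, ws):
        return sum(w * v for v, w in zip(xs, ws)) / sum(ws)
    n = len(x)
    inner_M = [M(x[:k], lam[:k]) for k in range(1, n + 1)]
    inner_A = [wamean(x[:k], lam[:k]) for k in range(1, n + 1)]
    return M(inner_A, lam) - wamean(inner_M, lam)
```

For instance, with M(x, λ) = min(x) (which ignores positive weights), x = (10, 1, 10) and constant weights, the gap is 5.5 − 4 = 1.5 ≥ 0, in line with min being a Kedlaya mean.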
Theorem 3.1 Let n ∈ N, λ ∈ V_n(Q), and let M : ⋃_{n=1}^∞ (I^n × W_n(Q)) → I be a symmetric and Jensen concave Q-weighted mean on I. Then M satisfies the n-variable λ-weighted Kedlaya inequality (3.1).
On the other hand, if M is a symmetric and Jensen convex Q-weighted mean on I, then (3.1) holds with reversed inequality.
Take an arbitrary vector x ∈ I^n and, for k ∈ {1, ..., n}, denote by m_k and a_k the weighted means M_{i=1}^k(x_i, λ_i) and A_{i=1}^k(x_i, λ_i), respectively, and write Λ_k := λ_1 + ... + λ_k. In what follows, we are going to prove a stepwise inequality (3.2) for all j ∈ {2, ..., n}. Then, applying this inequality for all j ∈ {2, ..., n}, summing up side by side and performing a simple reduction, and finally dividing both sides by Λ_n, we arrive at (3.1). For the sake of convenience, let us rewrite (3.2) in the equivalent form (3.3). To prove this, we will define a Q-simple function f : [0, Λ_j)^2 → R_+ such that the respective sides of inequalities (2.6) and (3.3) coincide; namely, f equals m_{k−1} on H_k and m_k on B_k \ H_k for k = 1, ..., j − 1, and x_k on C_k for k = 1, ..., j.
To verify the correctness of this definition, we need to check a containment condition for each k ∈ {1, ..., j}. An elementary calculation shows that this condition holds if and only if λ_k/(λ_1 + ... + λ_k) ≥ λ_j/(λ_1 + ... + λ_j), which follows from the assumption on the weight vector λ.
Then, by the symmetry of M and the definition of the M-integral, we can compute the inner mean for all x_0 ∈ [0, Λ_{j−1}), where Λ_k := λ_1 + ... + λ_k. Calculating the weighted arithmetic mean with respect to x then proves that the left-hand sides of (3.3) and (2.6) are equal to each other. Finally, we shall prove that this is also the case for the right-hand sides. For y_0 ∈ [0, Λ_1), the required equality is the consequence of the trivial equality m_1 = x_1. For k ∈ {2, ..., j} and y_0 ∈ [Λ_{k−1}, Λ_k), we see that f(x, y_0) equals m_{k−1}, m_k, or x_k on H_k, B_k \ H_k, and C_k, respectively; by the proportionality property of H_k, we know the lengths of the corresponding slices. Obviously, the total length of the slice {x : (x, y_0) ∈ B_k ∪ C_k} equals Λ_j. Using this and an easy-to-see identity, the corresponding sides of (3.3) and (2.6) coincide. As the Jensen concavity of M implies the Jensen–Fubini inequality (2.6), we obtain (3.3), hence (3.2) and, finally, the desired inequality (3.1). The last assertion of the theorem can be obtained by the transformation defined in (2.5).
We have two immediate corollaries. In particular, if M is a symmetric and Jensen concave Q-weighted mean on I and λ ∈ V(Q), then (3.1) holds for all n ∈ N. On the other hand, if M is a symmetric and Jensen convex Q-weighted mean on I, then (3.1) holds with reversed inequality for all n ∈ N.
Taking the constant sequence λ_n = 1 in the above corollary, we arrive at a statement which was one of the main results of [3]: if M is a symmetric and Jensen concave repetition invariant mean on I, then (1.1) holds for all n ∈ N and x ∈ I^n. On the other hand, if M is a symmetric and Jensen convex repetition invariant mean on I, then (1.1) holds with reversed inequality for all n ∈ N and x ∈ I^n.
We first show that M is a homogeneous, Jensen convex, symmetric mean. The homogeneity and symmetry are obvious. For the proof of the Jensen convexity, let x, y ∈ [0, ∞)^n and λ ∈ W_n(R). We have to verify that
M((x + y)/2, λ) ≤ (M(x, λ) + M(y, λ))/2. (3.6)
If the left-hand side is zero, then there is nothing to prove. In the other case, by the definition of the mean, we have λ_1(x_1 + y_1) + ... + λ_n(x_n + y_n) > 0. If λ_1x_1 + ... + λ_nx_n = 0, then M(x, λ) = 0 and λ_ix_i = 0 for all i ∈ {1, ..., n}. Therefore,
M((x + y)/2, λ) = (λ_1(x_1 + y_1)^2 + ... + λ_n(x_n + y_n)^2) / (2(λ_1(x_1 + y_1) + ... + λ_n(x_n + y_n))) = (λ_1y_1^2 + ... + λ_ny_n^2) / (2(λ_1y_1 + ... + λ_ny_n)) = (M(x, λ) + M(y, λ))/2.
In the other subcase, λ_1y_1 + ... + λ_ny_n = 0, a completely analogous argument shows that (3.6) is valid, too. Therefore, in the rest of the proof of the Jensen convexity, we may assume that λ_1x_1 + ... + λ_nx_n > 0 and λ_1y_1 + ... + λ_ny_n > 0. Denote M(x, λ) and M(y, λ) by u and v, respectively. Then the definition of the mean M yields two identities which, combined with the convexity of the function t ↦ t^2, imply, after a simple calculation, the inequality (3.6). Let λ ∈ W(Q) be any sequence with positive terms. We show that M satisfies the reversed λ-Kedlaya inequality. On the other hand, assume that M satisfies the reversed λ-Kedlaya inequality (3.1). In order to obtain λ ∈ V(Q), by Corollary 3.5 it suffices to verify that M satisfies conditions (i) and (ii) of this result. Condition (i) is trivially valid, and a direct computation shows that (ii) also holds. Thus, by Corollary 3.5, the sequence (λ_n/(λ_1 + ... + λ_n))_{n=1}^∞ must be nonincreasing, i.e., λ ∈ V(Q) should be valid.
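The mean discussed in this proof appears, from the displayed quotients, to be the contraharmonic-type mean M(x, λ) = (λ_1x_1^2 + ... + λ_nx_n^2)/(λ_1x_1 + ... + λ_nx_n), with the value 0 when the denominator vanishes. Assuming this reading, its Jensen convexity can be spot-checked numerically (our code):

```python
import random

def contraharmonic(x, lam):
    # M(x, lam) = sum(lam_i x_i^2) / sum(lam_i x_i); set M := 0 when the
    # denominator vanishes (all weighted entries zero), as in the proof
    den = sum(l * v for v, l in zip(x, lam))
    if den == 0:
        return 0.0
    return sum(l * v * v for v, l in zip(x, lam)) / den

random.seed(2)
for _ in range(1000):
    n = random.randint(2, 5)
    lam = [random.uniform(0.1, 3) for _ in range(n)]
    x = [random.uniform(0, 5) for _ in range(n)]
    y = [random.uniform(0, 5) for _ in range(n)]
    mid = [(a + b) / 2 for a, b in zip(x, y)]
    # Jensen convexity (3.6): M((x+y)/2) <= (M(x) + M(y)) / 2
    assert contraharmonic(mid, lam) <= (contraharmonic(x, lam) + contraharmonic(y, lam)) / 2 + 1e-9
```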
At the very end of this section let us emphasize that the Kedlaya property is stable under affine transformations of means. More precisely we can establish the following simple lemma.

Lemma 3.6
Let I be an interval, R a ring, n ∈ N, and λ ∈ W^0_n(R). Let a, b ∈ R with a ≠ 0. If an R-weighted mean M on I satisfies the (n, λ)-Kedlaya inequality (3.1) and a > 0, then this inequality is also satisfied by the affinely transformed mean M_{a,b} on aI + b, defined by M_{a,b}(x, λ) := aM((x − b)/a, λ) + b (with (x − b)/a understood entrywise). If a < 0, then M_{a,b} satisfies the inequality (3.1) with the reversed inequality sign.
We note that a similar invariance property holds concerning Jensen convexity and concavity of means.
From now on, we will extensively use Theorem 3.1. To make the notation easier, let us define, for every n ∈ N:
(a) Q_n to be the set of all λ ∈ W^0_n(Q) such that the (n, λ)-Kedlaya inequality is satisfied by every symmetric and Jensen concave Q-weighted mean;
(b) R_n to be the set of all λ ∈ W^0_n(R) such that the (n, λ)-Kedlaya inequality is satisfied by every symmetric and Jensen concave R-weighted mean which is continuous in the weights.
It is quite easy to observe that these properties do not depend on the selection of the domain (cf. [28]). This notation will mostly be used to separate the (technical) assumptions of Theorem 3.1 from the requirements on the family of means. In fact, the requirements of Jensen concavity and symmetry were chosen precisely so that the assumptions of these two results are satisfied; each other collection of constraints leads to an analogous family of sets.
Some properties of these sets are implied directly by their definition. For example, as an immediate consequence of continuity in the weights, we see that R_n is a closed subset of W^0_n(R). Furthermore, the nullhomogeneity in the weights implies that R_n is a cone, that is, cλ ∈ R_n for all c > 0 and λ ∈ R_n. Having introduced this notation, Theorem 3.1 and Example 1 imply that V_n(Q) ⊆ Q_n and that every λ ∈ Q_n satisfies
λ_{n−1}/(λ_1 + ... + λ_{n−1}) ≥ λ_n/(λ_1 + ... + λ_n). (3.7)
But Q_n ⊆ R_n and R_n is closed in W^0_n(R); therefore we obtain a generalization of Theorem 1.2 to a broad family of R-weighted means. Proposition 3.7 For every n ∈ N with n ≥ 2, we have V_n(R) ⊆ R_n, and every λ ∈ R_n satisfies (3.7). Proof As in the case of Q_n, the second assertion is a consequence of Example 1; we only have to prove the first one. Let us keep the convention that whenever a sequence λ is defined, Λ denotes its sequence of partial sums. Define the set B := { x ∈ (0, 1]^n | x_1 = 1 > x_2 and x is nonincreasing }.

Discussion
In this section we will apply the results already obtained to some important families of means.
Each subsection consists of the definition of the family, a characterization of Jensen concavity and, finally, applications via the notion of R_n. Let us stress that, to obtain particular examples, we need to use Proposition 3.7. This purely technical step will, however, be omitted, just to keep the presentation more compact.

Deviation means
Given a function E : I × I → R vanishing on the diagonal of I × I, continuous and strictly decreasing with respect to the second variable (we will call such a function a deviation function), we can define a mean D_E : ⋃_{n=1}^∞ (I^n × W_n(R)) → I in the following manner (cf. Daróczy [10]). For every n ∈ N, every vector x = (x_1, ..., x_n) ∈ I^n and λ = (λ_1, ..., λ_n) ∈ W_n(R), the weighted deviation mean (or Daróczy mean) D_E(x, λ) is the unique solution y of the equation
λ_1E(x_1, y) + ... + λ_nE(x_n, y) = 0.
According to [15], deviation means are symmetric weighted means which are continuous in the weights. The increasingness of a deviation mean D_E is equivalent to the increasingness of the deviation E in its first variable. All these properties and characterizations are consequences of the general results obtained in a series of papers by Losonczi [4–9] (for Bajraktarević means and Gini means), by Daróczy [10, 11], Daróczy–Losonczi [12], and Daróczy–Páles [13, 14] (for deviation means), and by Páles [15–21] (for deviation and quasi-deviation means).
The only property which requires some calculations is the characterization of the Jensen concavity of a deviation mean.

Lemma 4.1 Let E : I × I → R be a deviation function which is differentiable with respect to its second variable and such that ∂_2E(t, t) is nonvanishing for t ∈ I. Then D_E is Jensen concave if and only if the mapping
E*(x, t) := E(x, t)/∂_2E(t, t), (x, t) ∈ I × I, (4.1)
is Jensen concave.
Proof Define Ẽ(x, t) := −E(−x, −t) for (x, t) ∈ (−I)^2, and let Ẽ* be associated to Ẽ by (4.1). In particular, Ẽ* is Jensen convex if and only if E* is Jensen concave. Furthermore, in view of the identity D_Ẽ(−x, λ) = −D_E(x, λ), we see that D_E is Jensen concave if and only if D_Ẽ is Jensen convex. Moreover, applying [19, Theorem 6] with appropriate substitutions, we see that D_Ẽ is Jensen convex if and only if Ẽ* is Jensen convex.
Finally, binding all equivalences above, one can easily finish the proof.
Based on the above lemma, it is simple now to formulate a corollary which is important in view of Proposition 3.7.
Proposition 4.2 Let E : I × I → R be a deviation function which is differentiable with respect to its second variable such that ∂ 2 E(t, t) is nonvanishing for t ∈ I and the mapping E * defined by (4.1) is Jensen concave. Then D E satisfies the (n, λ)-weighted Kedlaya inequality for all n ∈ N and λ ∈ R n .
Observe that if E(x, y) = f(x) − f(y) for some continuous, strictly monotone function f : I → R, then the deviation mean D_E reduces to the quasi-arithmetic mean A_f. Therefore, deviation means include quasi-arithmetic means. One can also notice that Bajraktarević means and Gini means also form subclasses of deviation means.

Homogeneous deviation means
It is well known [21] that a deviation mean generated by a continuous deviation function E : R_+^2 → R is homogeneous if and only if E is of the form E(x, y) = g(y)f(x/y) for some continuous functions f, g : R_+ → R such that f vanishes at 1 and g is positive. Clearly, the deviation mean generated by E is determined by the function f only; therefore, as we are going to deal with homogeneous deviation means, let E_f denote the corresponding deviation mean.
Let us just mention that homogeneous deviation means generalize power means. Indeed, whenever I = R_+ and f = π_p, where π_p(x) := x^p if p ≠ 0 and π_0(x) := ln x, then E_{π_p} coincides with P_p for all p ∈ R. The following result is also known [22, Theorem 2.3]. This theorem has an immediate corollary which is implied by the definition of R_n itself; its usefulness is shown by Proposition 3.7.
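A homogeneous deviation mean can be computed from its generator f alone, since the positive factor g cancels in the defining equation. The sketch below (ours) uses π_p normalized so that it vanishes at 1 and is increasing — our assumption: (t^p − 1)/p for p ≠ 0 and ln t for p = 0 — and checks the result against the explicit power mean:

```python
import math

def f_p(p):
    # pi_p normalized to vanish at 1 and be increasing (our assumption):
    # (t^p - 1)/p for p != 0, ln t for p = 0
    if p == 0:
        return math.log
    return lambda t: (t ** p - 1) / p

def hom_deviation_mean(p, x, lam, tol=1e-12):
    # E(x, y) = g(y) f(x / y); since g > 0, the defining equation reduces to
    # sum_i lam_i * f(x_i / y) = 0, which is decreasing in y, so bisect
    f = f_p(p)
    lo, hi = min(x), max(x)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(l * f(v / mid) for v, l in zip(x, lam)) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def power_mean(p, x, lam):
    # explicit weighted power mean P_p for comparison
    s = sum(lam)
    if p == 0:
        return math.exp(sum(l * math.log(v) for v, l in zip(x, lam)) / s)
    return (sum(l * v ** p for v, l in zip(x, lam)) / s) ** (1 / p)

for p in (-1, 0, 0.5, 1, 2):
    assert abs(hom_deviation_mean(p, [1.0, 2.0, 5.0], [2.0, 1.0, 1.0])
               - power_mean(p, [1.0, 2.0, 5.0], [2.0, 1.0, 1.0])) < 1e-8
```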

Quasi-arithmetic means
The idea of quasi-arithmetic means was first glimpsed in a pioneering paper by Knopp [29]. The theory was somewhat later axiomatized in a series of three independent but nearly simultaneous papers by De Finetti [30], Kolmogorov [31], and Nagumo [32] at the beginning of the 1930s.
The weighted mean A_f : ⋃_{n=1}^∞ (I^n × W_n(R)) → I, defined by
A_f(x, λ) := f^{−1}( (λ_1f(x_1) + ... + λ_nf(x_n)) / (λ_1 + ... + λ_n) )
for a continuous, strictly monotone function f : I → R, is called the weighted quasi-arithmetic mean generated by f. Quasi-arithmetic means are a natural generalization of power means. Indeed, as in the case of deviation means, for all p ∈ R, the means A_{π_p} and P_p are equal. These means share most of the properties of power means; in particular, it is easy to verify that they are symmetric and strictly increasing. In fact, they admit even more properties of power means (cf. [31, 33]). Similarly to the case of Theorem 4.3, the following result can also be used to obtain results concerning the Kedlaya inequality; let us stress again the meaningfulness of Proposition 3.7. Proposition 4.6 Let f : I → R be a twice continuously differentiable function with a nonvanishing first derivative such that either f'' is identically zero or f'' is nowhere zero and the ratio function f'/f'' is a convex and negative function on I. Then A_f satisfies the (n, λ)-weighted Kedlaya inequality for all n ∈ N and λ ∈ R_n.
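The defining formula of the weighted quasi-arithmetic mean translates directly into code. A minimal sketch (ours), with the generator and its inverse supplied explicitly:

```python
import math

def quasi_arithmetic(f, f_inv, x, lam):
    # A_f(x, lam) = f^{-1}( (lam_1 f(x_1) + ... + lam_n f(x_n)) / (lam_1 + ... + lam_n) )
    return f_inv(sum(l * f(v) for v, l in zip(x, lam)) / sum(lam))

# f = ln gives the weighted geometric mean, f = identity the arithmetic mean
g = quasi_arithmetic(math.log, math.exp, [1.0, 4.0], [1.0, 1.0])  # geometric mean of 1 and 4
```

Here `g` equals the geometric mean sqrt(1 · 4) = 2.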

Gini means
Given two real numbers p, q ∈ R, define the function χ_{p,q} : R_+ → R by
χ_{p,q}(t) := (t^p − t^q)/(p − q) if p ≠ q, and χ_{p,p}(t) := t^p ln t.
In this case, the function E_{p,q} : R_+^2 → R defined by E_{p,q}(x, y) := y^p χ_{p,q}(x/y) is a deviation function on R_+. The weighted deviation mean generated by E_{p,q} will be denoted by G_{p,q} and will be called the weighted Gini mean of parameters p, q (cf. [34]). One can easily see that G_{p,q} has the following explicit form:
G_{p,q}(x, λ) = ( (λ_1x_1^p + ... + λ_nx_n^p)/(λ_1x_1^q + ... + λ_nx_n^q) )^{1/(p−q)} if p ≠ q,
G_{p,p}(x, λ) = exp( (λ_1x_1^p ln x_1 + ... + λ_nx_n^p ln x_n)/(λ_1x_1^p + ... + λ_nx_n^p) ).
Clearly, in the particular case q = 0, the mean G_{p,q} reduces to the pth power mean P_p. It is also obvious that G_{p,q} = G_{q,p}. It is well known [5, 7] that G_{p,q} is concave if and only if
min(p, q) ≤ 0 ≤ max(p, q) ≤ 1. (4.3)
Therefore, as an immediate consequence, we have the following.

Proposition 4.7
If p, q ∈ R satisfy (4.3), then G p,q satisfies the (n, λ)-weighted Kedlaya inequality for all n ∈ N and λ ∈ R n .
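The explicit form of the weighted Gini mean is easy to implement directly. A sketch (ours), covering the limiting case p = q and using the symmetry G_{p,q} = G_{q,p}:

```python
import math

def gini_mean(p, q, x, lam):
    # Explicit weighted Gini mean G_{p,q}; for q = 0 it reduces to the power mean P_p
    if p == q:
        num = sum(l * v ** p * math.log(v) for v, l in zip(x, lam))
        den = sum(l * v ** p for v, l in zip(x, lam))
        return math.exp(num / den)
    if q > p:
        p, q = q, p  # G_{p,q} = G_{q,p}
    num = sum(l * v ** p for v, l in zip(x, lam))
    den = sum(l * v ** q for v, l in zip(x, lam))
    return (num / den) ** (1 / (p - q))
```

For instance, G_{1,0} is the weighted arithmetic mean and G_{2,1} is the contraharmonic mean.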

Power means
Let us just recall from the previous sections that P_p = G_{p,0} = A_{π_p}; therefore power means were already covered by the previous results. In fact, we can use either Proposition 4.6 or 4.7 to obtain the following.

Proposition 4.8
For every p ≤ 1 the power mean P p satisfies the (n, λ)-weighted Kedlaya inequality for all n ∈ N and λ ∈ R n .

Conclusions
The main result of the paper, the weighted Kedlaya inequality established in Theorem 3.1, generalizes Kedlaya's result of 1999, which was established for the geometric mean. The inequality has several particular cases in the classes of deviation means, quasi-arithmetic means, Gini means, and power means.