Open Access

Optimal lower and upper bounds for the geometric convex combination of the error function

Journal of Inequalities and Applications 2015, 2015:382

https://doi.org/10.1186/s13660-015-0906-y

Received: 11 February 2015

Accepted: 23 November 2015

Published: 3 December 2015

Abstract

For \(x\in R\), the error function \(\operatorname{erf}(x)\) is defined as
$$\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}} \int_{0}^{x}e^{-t^{2}}\,dt. $$
In this paper, we answer the question: what are the greatest value p and the least value q, such that the double inequality \(\operatorname {erf}(M_{p}(x,y;\lambda))\leq G(\operatorname{erf}(x),\operatorname {erf}(y);\lambda)\leq\operatorname{erf}(M_{q}(x,y;\lambda))\) holds for all \(x,y\geq1\) (or \(0< x,y<1\)) and \(\lambda\in(0,1)\)? Here, \(M_{r}(x,y;\lambda)=(\lambda x^{r}+(1-\lambda)y^{r})^{1/r}\) (\(r\neq0\)), \(M_{0}(x,y;\lambda)=x^{\lambda}y^{1-\lambda}\) and \(G(x,y;\lambda )=x^{\lambda}y^{1-\lambda}\) are the weighted power and the weighted geometric mean, respectively.

Keywords

error function, power mean, functional inequalities

MSC

33B20, 26D15

1 Introduction

For \(x\in R\), the error function \(\operatorname{erf}(x)\) is defined as
$$\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}} \int_{0}^{x}e^{-t^{2}}\,dt. $$
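As a quick numerical illustration (ours, not part of the paper), the integral definition above can be checked against Python's built-in `math.erf`; the midpoint-rule quadrature and the name `erf_quad` below are an illustrative sketch only.

```python
import math

def erf_quad(x, n=100_000):
    """Approximate erf(x) = (2/sqrt(pi)) * integral_0^x exp(-t^2) dt
    with a simple composite midpoint rule (illustrative only)."""
    h = x / n
    s = sum(math.exp(-((i + 0.5) * h) ** 2) for i in range(n))
    return 2.0 / math.sqrt(math.pi) * h * s

for x in (0.5, 1.0, 2.0):
    print(x, erf_quad(x), math.erf(x))  # the two columns agree closely
```

For these sample points the quadrature matches `math.erf` to better than \(10^{-8}\).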

The most important properties of this function are collected, for example, in [1, 2]. In the recent past, the error function has been a topic of recurring interest, and a great number of results on this subject have been reported in the literature [3–16]. Perhaps surprisingly, the error function also finds application in heat conduction, in addition to probability theory [17, 18].

In 1933, Aumann [19] introduced a generalized notion of convexity, the so-called MN-convexity, where M and N are mean values. A function \(f:[0,\infty)\to[0,\infty)\) is MN-convex if \(f(M(x,y))\leq N(f(x),f(y))\) for all \(x,y\in[0,\infty)\). The usual convexity is the special case in which M and N are both arithmetic means. Furthermore, applications of MN-convexity reveal a wealth of beautiful inequalities involving a broad range of functions, from elementary ones, such as the sine and cosine functions, to special ones, such as the Γ function, the Gaussian hypergeometric function, and the Bessel functions. For details on MN-convexity and its applications the reader is referred to [20–25].

Let \(\lambda\in(0,1)\). We define \(A(x,y;\lambda)=\lambda x+(1-\lambda)y\), \(G(x,y;\lambda)=x^{\lambda}y^{1-\lambda}\), \(H(x,y;\lambda)=\frac{xy}{\lambda y+(1-\lambda)x}\) and \(M_{r}(x,y;\lambda)=(\lambda x^{r}+(1-\lambda)y^{r})^{1/r}\) (\(r\neq0\)), \(M_{0}(x,y;\lambda)=x^{\lambda}y^{1-\lambda}\). These are commonly known as the weighted arithmetic mean, weighted geometric mean, weighted harmonic mean, and weighted power mean of two positive numbers x and y, respectively. It is well known that the inequalities
$$H(x,y;\lambda)=M_{-1}(x,y;\lambda)< G(x,y;\lambda)=M_{0}(x,y; \lambda )< A(x,y;\lambda)=M_{1}(x,y;\lambda) $$
hold for all \(\lambda\in(0,1)\) and \(x,y>0\) with \(x\neq y\).
By elementary computations, one has
$$ \lim_{r\to-\infty}M_{r}(x,y;\lambda)=\min(x,y) $$
(1.1)
and
$$\lim_{r\to+\infty}M_{r}(x,y;\lambda)=\max(x,y). $$
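The chain \(H<G<A\) and the limits above can be verified numerically; the sketch below (our own helper `power_mean`, not from the paper) uses a large finite \(|r|\) as a stand-in for \(r\to\pm\infty\).

```python
import math

def power_mean(x, y, lam, r):
    """Weighted power mean M_r(x, y; lambda); M_0 is the weighted geometric mean."""
    if r == 0:
        return x ** lam * y ** (1 - lam)
    return (lam * x ** r + (1 - lam) * y ** r) ** (1 / r)

x, y, lam = 2.0, 5.0, 0.3
H = power_mean(x, y, lam, -1)   # weighted harmonic mean
G = power_mean(x, y, lam, 0)    # weighted geometric mean
A = power_mean(x, y, lam, 1)    # weighted arithmetic mean
assert min(x, y) < H < G < A < max(x, y)

# M_r approaches min / max as r -> -inf / +inf; +-200 is "large" here
assert abs(power_mean(x, y, lam, -200) - min(x, y)) < 0.05
assert abs(power_mean(x, y, lam, 200) - max(x, y)) < 0.05
```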
In [26], Alzer proved that \(c_{1}(\lambda)=\frac{\lambda+(1-\lambda )\operatorname{erf}(1)}{\operatorname{erf}(1/(1-\lambda))}\) and \(c_{2}(\lambda)=1\) are the best possible factors such that the double inequality
$$ c_{1}(\lambda)\operatorname{erf}\bigl(H(x,y;\lambda) \bigr)\leq A\bigl(\operatorname{erf}(x),\operatorname{erf}(y);\lambda\bigr)\leq c_{2}(\lambda)\operatorname{erf}\bigl(H(x,y;\lambda)\bigr) $$
(1.2)
holds for all \(x, y \in[1,+\infty)\) and \(\lambda\in(0,1/2)\).
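Alzer's double inequality (1.2) can be spot-checked numerically; the sketch below (our helper names `H` and `check_alzer`) tests it for a few triples with \(x,y\geq1\) and \(\lambda\in(0,1/2)\), using the factor \(c_{1}(\lambda)\) as defined in the text.

```python
import math

def H(x, y, lam):
    """Weighted harmonic mean, as defined in the paper."""
    return x * y / (lam * y + (1 - lam) * x)

def check_alzer(x, y, lam):
    """Check inequality (1.2) for one triple (x, y, lambda)."""
    c1 = (lam + (1 - lam) * math.erf(1)) / math.erf(1 / (1 - lam))
    left = c1 * math.erf(H(x, y, lam))
    mid = lam * math.erf(x) + (1 - lam) * math.erf(y)
    right = math.erf(H(x, y, lam))
    return left <= mid <= right

assert check_alzer(1.5, 2.0, 0.3)
assert check_alzer(1.0, 10.0, 0.45)
```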

Inspired by (1.2), it is natural to ask: does the inequality \(\operatorname{erf}(M(x,y))\leq N(\operatorname{erf}(x),\operatorname {erf}(y))\) hold for other means M, N, such as geometric, harmonic or power means?

In [27, 28], the authors found the greatest values \(\alpha_{1}\), \(\alpha_{2}\) and the least values \(\beta_{1}\), \(\beta_{2}\), such that the double inequalities
$$\operatorname{erf}\bigl(M_{\alpha_{1}}(x,y;\lambda)\bigr)\leq A\bigl( \operatorname {erf}(x),\operatorname{erf}(y);\lambda\bigr)\leq\operatorname {erf} \bigl(M_{\beta_{1}}(x,y;\lambda)\bigr) $$
and
$$\operatorname{erf}\bigl(M_{\alpha_{2}}(x,y;\lambda)\bigr)\leq H\bigl( \operatorname {erf}(x),\operatorname{erf}(y);\lambda\bigr)\leq\operatorname {erf} \bigl(M_{\beta_{2}}(x,y;\lambda)\bigr) $$
hold for all \(x,y\geq1\) (or \(0< x,y<1\)) and \(\lambda\in(0,1)\).
In the following we answer the question: what are the greatest value p and the least value q, such that the double inequality
$$\operatorname{erf}\bigl(M_{p}(x,y;\lambda)\bigr)\leq G\bigl( \operatorname {erf}(x),\operatorname{erf}(y);\lambda\bigr)\leq\operatorname {erf} \bigl(M_{q}(x,y;\lambda)\bigr) $$
holds for all \(x,y\geq1\) (or \(0< x,y<1\)) and \(\lambda\in(0,1)\)?

2 Lemmas

In this section we present two lemmas, which will be used in the proof of our main results.

Lemma 2.1

Let \(r\neq0\), \(r_{0}=-1-\frac{2}{e\sqrt{\pi}\operatorname{erf}(1)}=-1.4926\ldots \) , and \(u(x)=\log\operatorname{erf}(x^{1/r})\). Then the following statements are true:
  1. (1)

    if \(r< r_{0}\), then \(u(x)\) is strictly convex on \([1,+\infty)\);

     
  2. (2)

    if \(r_{0}\leq r<0\), then \(u(x)\) is strictly concave on \((0,1]\);

     
  3. (3)

    if \(r>0\), then \(u(x)\) is strictly concave on \((0,+\infty)\).

     

Proof

Simple computations lead to
$$ u'(x)=\frac{2e^{-x^{2/r}}x^{1/r-1}}{r\sqrt{\pi}\operatorname{erf}(x^{1/r})} $$
(2.1)
and
$$ u''(x)=\frac{2e^{-x^{2/r}}x^{1/r-2}}{r^{2}\sqrt{\pi}\operatorname {erf}^{2}(x^{1/r})}g(x), $$
(2.2)
where
$$ g(x)=\bigl(-2x^{2/r}+1-r\bigr)\operatorname{erf} \bigl(x^{1/r}\bigr)-\frac{2}{\sqrt{\pi }}e^{-x^{2/r}}x^{1/r}. $$
(2.3)
Then
$$\begin{aligned}& g'(x)=4x^{2/r-1}g_{1}(x), \end{aligned}$$
(2.4)
$$\begin{aligned}& g_{1}(x)=-\frac{1}{r}\operatorname{erf}\bigl(x^{1/r} \bigr)-\frac{1}{2\sqrt {\pi}}e^{-x^{2/r}}x^{-1/r}, \end{aligned}$$
(2.5)
and
$$ g_{1}'(x)=\frac{1}{2r^{2}\sqrt{\pi}}e^{-x^{2/r}}x^{-1/r-1} \bigl[(2r-4)x^{2/r}+r\bigr]. $$
(2.6)

We divide the proof into two cases.

Case 1. If \(r<0\), then (2.6), (2.5), and (2.3) lead to
$$\begin{aligned}& g_{1}'(x)< 0, \end{aligned}$$
(2.7)
$$\begin{aligned}& \lim_{x\to0^{+}}g_{1}(x)>0, \qquad \lim _{x\to+\infty}g_{1}(x)=-\infty, \end{aligned}$$
(2.8)
$$\begin{aligned}& \lim_{x\to0^{+}}g(x)=-\infty,\qquad \lim_{x\to+\infty}g(x)=0, \end{aligned}$$
(2.9)
and
$$ g(1)=(-1-r)\operatorname{erf}(1)-\frac{2}{e\sqrt{\pi}}. $$
(2.10)

Inequality (2.7) implies that \(g_{1}(x)\) is strictly decreasing on \((0,+\infty)\).

It follows from (2.4), the monotonicity of \(g_{1}(x)\), and (2.8) that there exists \(x_{1}\in(0,+\infty)\), such that \(g(x)\) is strictly increasing on \((0,x_{1}]\) and strictly decreasing on \([x_{1},+\infty)\).

From the piecewise monotonicity of \(g(x)\) and (2.9) we clearly see that there exists \(x_{2}\in(0,+\infty)\), such that \(g(x)<0\) for \(x\in(0,x_{2})\) and \(g(x)>0\) for \(x\in(x_{2},+\infty)\).

Case 1.1. If \(r< r_{0}\), then from (2.10) we know that \(g(1)>0\). This leads to \(g(x)>0\) for \(x\in[1,+\infty)\). Therefore (2.2) leads to the conclusion that \(u(x)\) is strictly convex on \([1,+\infty)\).

Case 1.2. If \(r_{0}\leq r<0\), then (2.10) implies that \(g(1)\leq0\). This leads to \(g(x)\leq0\) for \(x\in(0,1]\). Therefore (2.2) leads to the conclusion that \(u(x)\) is strictly concave on \((0,1]\).

Case 2. If \(r>0\), then (2.5) and (2.3) imply that
$$ g_{1}(x)< 0 $$
(2.11)
and
$$ \lim_{x\to0^{+}}g(x)=0 $$
(2.12)
for \(x\in(0,+\infty)\).

It follows from (2.11), (2.4), and (2.12) that \(g(x)<0\). Therefore (2.2) leads to the conclusion that \(u(x)\) is strictly concave on \((0,+\infty)\). □
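Lemma 2.1 can be probed numerically; the sketch below (an illustrative check, not a proof) estimates the sign of \(u''\) by a central second difference for representative values of r in each of the three cases.

```python
import math

R0 = -1 - 2 / (math.e * math.sqrt(math.pi) * math.erf(1))  # r0 = -1.4926...

def u(x, r):
    """u(x) = log erf(x^(1/r)) from Lemma 2.1."""
    return math.log(math.erf(x ** (1 / r)))

def second_diff(x, r, h=1e-3):
    """Central second difference; positive => locally convex, negative => concave."""
    return u(x - h, r) + u(x + h, r) - 2 * u(x, r)

# (1) r < r0: u is convex on [1, +inf)
assert all(second_diff(x, -2.0) > 0 for x in (1.1, 2.0, 5.0))
# (2) r0 <= r < 0: u is concave on (0, 1]
assert all(second_diff(x, -1.0) < 0 for x in (0.3, 0.6, 0.9))
# (3) r > 0: u is concave on (0, +inf)
assert all(second_diff(x, 1.0) < 0 for x in (0.5, 1.0, 3.0))
```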

Lemma 2.2

The function \(h(x)=2x^{2}+\frac{xe^{-x^{2}}}{\int_{0}^{x}e^{-t^{2}}\,dt}\) is strictly increasing on \((0,+\infty)\).

Proof

Simple computations lead to
$$ h'(x)=\frac{h_{1}(x)}{(\int_{0}^{x}e^{-t^{2}}\,dt)^{2}}, $$
(2.13)
where
$$\begin{aligned}& h_{1}(x)=4x \biggl( \int_{0}^{x}e^{-t^{2}}\,dt \biggr)^{2}+\bigl(1-2x^{2}\bigr)e^{-x^{2}} \int_{0}^{x}e^{-t^{2}}\,dt-xe^{-2x^{2}}, \\& \lim_{x\to0^{+}}h_{1}(x)=0, \end{aligned}$$
(2.14)
and
$$ h_{1}'(x)=4 \biggl( \int_{0}^{x}e^{-t^{2}}\,dt \biggr)^{2}+\bigl(4x^{3}+2x\bigr)e^{-x^{2}} \int_{0}^{x}e^{-t^{2}}\,dt+2x^{2}e^{-2x^{2}}>0 $$
(2.15)
for \(x\in(0,+\infty)\).

Hence, \(h(x)\) is strictly increasing on \((0,+\infty)\), as follows from (2.15), (2.14), and (2.13). □
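A numerical sketch of Lemma 2.2 (ours, using \(\int_{0}^{x}e^{-t^{2}}\,dt=\frac{\sqrt{\pi}}{2}\operatorname{erf}(x)\)) confirms the monotonicity of h and the two values used later, \(h(x)\to1\) as \(x\to0^{+}\) and \(1-h(1)=r_{0}\).

```python
import math

R0 = -1 - 2 / (math.e * math.sqrt(math.pi) * math.erf(1))  # r0 = -1.4926...

def h(x):
    """h(x) = 2x^2 + x e^{-x^2} / integral_0^x e^{-t^2} dt."""
    integral = math.sqrt(math.pi) / 2 * math.erf(x)
    return 2 * x * x + x * math.exp(-x * x) / integral

xs = [0.01, 0.1, 0.5, 1.0, 2.0, 5.0]
vals = [h(x) for x in xs]
assert all(a < b for a, b in zip(vals, vals[1:]))  # strictly increasing
assert abs(h(0.001) - 1) < 1e-3                    # h(x) -> 1 as x -> 0+
assert abs((1 - h(1.0)) - R0) < 1e-12              # 1 - h(1) = r0 exactly
```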

3 Main results

Theorem 3.1

Let \(\lambda\in(0,1)\) and \(r_{0}=-1-\frac {2}{e\sqrt{\pi}\operatorname{erf}(1)}=-1.4926\ldots\) . Then the double inequality
$$ \operatorname{erf}\bigl(M_{p}(x,y;\lambda)\bigr)\leq G \bigl(\operatorname{erf}(x),\operatorname{erf}(y);\lambda\bigr)\leq \operatorname{erf}\bigl(M_{q}(x,y;\lambda)\bigr) $$
(3.1)
holds for all \(x,y\geq1\) if and only if \(p=-\infty\) and \(q\geq r_{0}\).
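Before the proof, the bounds can be spot-checked numerically; the sketch below (our helper `check_31`) verifies (3.1) with \(q=r_{0}\), reading \(p=-\infty\) as \(\min(x,y)\) via (1.1), on a small grid of points with \(x,y\geq1\).

```python
import math

R0 = -1 - 2 / (math.e * math.sqrt(math.pi) * math.erf(1))  # r0 = -1.4926...

def power_mean(x, y, lam, r):
    """Weighted power mean M_r(x, y; lambda), r != 0."""
    return (lam * x ** r + (1 - lam) * y ** r) ** (1 / r)

def check_31(x, y, lam):
    """erf(min(x,y)) <= G(erf(x), erf(y); lam) <= erf(M_{r0}(x, y; lam))."""
    lower = math.erf(min(x, y))                    # p = -inf, by (1.1)
    g = math.erf(x) ** lam * math.erf(y) ** (1 - lam)
    upper = math.erf(power_mean(x, y, lam, R0))    # q = r0
    return lower <= g <= upper

assert all(check_31(x, y, lam)
           for x in (1.0, 1.5, 3.0)
           for y in (2.0, 5.0, 10.0)
           for lam in (0.1, 0.5, 0.9))
```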

Proof

First of all, we prove that inequality (3.1) holds if \(p=-\infty\) and \(q\geq r_{0}\). It follows from (1.1) that the first inequality in (3.1) is true if \(p=-\infty\). Since the weighted power mean \(M_{t}(x,y;\lambda)\) is strictly increasing with respect to t on R, it suffices to prove that the second inequality in (3.1) is true for \(r_{0}\leq q<0\).

If \(r_{0}\leq q<0\) and \(u(z)=\log\operatorname{erf}(z^{1/q})\), then Lemma 2.1(2) leads to
$$ \lambda u(s)+(1-\lambda)u(t)\leq u\bigl(\lambda s+(1-\lambda)t \bigr) $$
(3.2)
for \(\lambda\in(0,1)\) and \(s,t\in(0,1]\).

Let \(s=x^{q}\), \(t=y^{q}\), and \(x,y\geq1\). Then (3.2) leads to the second inequality in (3.1).

Second, we prove that the second inequality in (3.1) implies \(q\geq r_{0}\).

Let \(x\geq1\) and \(y\geq1\). Then the second inequality in (3.1) leads to
$$ D(x,y):=\operatorname{erf}\bigl(M_{q}(x,y;\lambda) \bigr)-G\bigl(\operatorname {erf}(x),\operatorname{erf}(y);\lambda\bigr)\geq0. $$
(3.3)
It follows from (3.3) that
$$D(y,y)=\frac{\partial}{\partial x}D(x,y)|_{x=y}=0 $$
and
$$ \frac{\partial^{2}}{\partial x^{2}}D(x,y)|_{x=y}=\frac{\lambda(1-\lambda)\operatorname {erf}'(y)}{y} \biggl[q-1+ \biggl(2y^{2}+\frac{ye^{-y^{2}}}{\int ^{y}_{0}e^{-t^{2}}\,dt} \biggr) \biggr]. $$
(3.4)
Therefore,
$$q\geq\lim_{y\to 1^{+}}\biggl(1-2y^{2}-\frac{ye^{-y^{2}}}{\int^{y}_{0}e^{-t^{2}}\,dt} \biggr)=r_{0} $$
follows from (3.3) and (3.4) together with Lemma 2.2.
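Evaluating the limit above at \(y=1\) indeed recovers \(r_{0}\), since \(1-2-e^{-1}/\int_{0}^{1}e^{-t^{2}}\,dt=-1-\frac{2}{e\sqrt{\pi}\operatorname{erf}(1)}\); a quick numerical confirmation (our sketch):

```python
import math

# r0 from Lemma 2.1 / Theorem 3.1
R0 = -1 - 2 / (math.e * math.sqrt(math.pi) * math.erf(1))

def limit_expr(y):
    """1 - 2y^2 - y e^{-y^2} / integral_0^y e^{-t^2} dt, i.e. 1 - h(y)."""
    integral = math.sqrt(math.pi) / 2 * math.erf(y)
    return 1 - 2 * y * y - y * math.exp(-y * y) / integral

assert abs(limit_expr(1.0) - R0) < 1e-12   # the limit at y = 1 equals r0
assert round(R0, 4) == -1.4926
```

Since \(h\) is increasing (Lemma 2.2), \(1-h(y)\) is largest on \([1,+\infty)\) as \(y\to1^{+}\), which is why the constraint on q is tightest there.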

Finally, we prove that the first inequality in (3.1) implies \(p=-\infty\). We distinguish two cases.

Case I. \(p\geq0\). Then for any fixed \(y\in[1,+\infty)\) we have
$$\lim_{x\to+\infty}\operatorname{erf}\bigl(M_{p}(x,y; \lambda)\bigr)=1 $$
and
$$\lim_{x\to+\infty}G\bigl(\operatorname{erf}(x),\operatorname {erf}(y); \lambda\bigr)=\operatorname{erf}^{1-\lambda}(y)< 1, $$
which contradicts the first inequality in (3.1).
Case II. \(-\infty< p<0\). Let \(x\geq1\), \(\alpha=\lambda ^{1/p}\) and \(y\to +\infty\). Then the first inequality in (3.1) leads to
$$ E(x):=\operatorname{erf}^{\lambda}(x)-\operatorname{erf}( \alpha x)\geq0. $$
(3.5)
It follows from (3.5) that
$$ \lim_{x\to+\infty}E(x)=0 $$
(3.6)
and
$$ E'(x)=\frac{2\lambda}{\sqrt{\pi}}e^{-x^{2}} \biggl[ \operatorname {erf}^{\lambda-1}(x)-\frac{\alpha}{\lambda}e^{(1-\alpha ^{2})x^{2}} \biggr]. $$
(3.7)
Since \(\alpha>1\), we have
$$ \lim_{x\to +\infty} \biggl[\operatorname{erf}^{\lambda-1}(x)- \frac{\alpha }{\lambda}e^{(1-\alpha^{2})x^{2}} \biggr]=1. $$
(3.8)

It follows from (3.7) and (3.8) that there exists a sufficiently large \(\eta_{1}\in[1,+\infty)\), such that \(E'(x)>0\) for \(x\in(\eta_{1},+\infty)\). Hence \(E(x)\) is strictly increasing on \([\eta_{1},+\infty)\).

From the monotonicity of \(E(x)\) on \([\eta_{1},+\infty)\) and (3.6) we conclude that there exists \(\eta_{2}\in[1,+\infty)\), such that \(E(x)<0\) for \(x\in(\eta_{2},+\infty)\), which contradicts (3.5). □
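Case II can be illustrated numerically with the hypothetical choice \(\lambda=1/2\), \(p=-1\) (so \(\alpha=\lambda^{1/p}=2\)); as the proof predicts, \(E(x)\) is eventually negative:

```python
import math

lam, p = 0.5, -1.0
alpha = lam ** (1 / p)          # alpha = 2 > 1 since p < 0 and lam < 1

def E(x):
    """E(x) = erf^lam(x) - erf(alpha * x), as in (3.5)."""
    return math.erf(x) ** lam - math.erf(alpha * x)

# E(x) >= 0 would be needed for the lower bound with finite p < 0,
# but E is negative for large x, matching the contradiction in Case II.
assert E(3.0) < 0
assert E(5.0) < 0
```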

Theorem 3.2

Let \(\lambda\in(0,1)\) and let \(r_{0}=-1-\frac{2}{e\sqrt{\pi}\operatorname{erf}(1)}=-1.4926\ldots\) be as in Theorem 3.1. Then the double inequality
$$ \operatorname{erf}\bigl(M_{\mu}(x,y;\lambda)\bigr)\leq G \bigl(\operatorname{erf}(x),\operatorname{erf}(y);\lambda\bigr)\leq \operatorname{erf}\bigl(M_{\nu}(x,y;\lambda)\bigr) $$
(3.9)
holds for all \(0< x,y<1\) if and only if \(\mu\leq r_{0}\) and \(\nu\geq 0\).
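As for Theorem 3.1, the boundary case can be spot-checked numerically before the proof; the sketch below (our helper `check_39`) verifies (3.9) with \(\mu=r_{0}\) and \(\nu=0\) on a small grid of points with \(0<x,y<1\).

```python
import math

R0 = -1 - 2 / (math.e * math.sqrt(math.pi) * math.erf(1))  # r0 = -1.4926...

def power_mean(x, y, lam, r):
    """Weighted power mean M_r(x, y; lambda); M_0 is the weighted geometric mean."""
    if r == 0:
        return x ** lam * y ** (1 - lam)
    return (lam * x ** r + (1 - lam) * y ** r) ** (1 / r)

def check_39(x, y, lam):
    """erf(M_{r0}) <= G(erf(x), erf(y); lam) <= erf(M_0) for 0 < x, y < 1."""
    lower = math.erf(power_mean(x, y, lam, R0))   # mu = r0
    g = math.erf(x) ** lam * math.erf(y) ** (1 - lam)
    upper = math.erf(power_mean(x, y, lam, 0))    # nu = 0
    return lower <= g <= upper

assert all(check_39(x, y, lam)
           for x in (0.1, 0.4, 0.8)
           for y in (0.2, 0.5, 0.9)
           for lam in (0.25, 0.5, 0.75))
```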

Proof

First of all, we prove that (3.9) holds if \(\mu\leq r_{0}\) and \(\nu\geq0\).

If \(\mu\leq r_{0}\) and \(u(z)=\log\operatorname{erf}(z^{1/\mu})\), then Lemma 2.1(1) leads to
$$ u\bigl(\lambda s+(1-\lambda)t\bigr)\leq\lambda u(s)+(1- \lambda)u(t) $$
(3.10)
for \(\lambda\in(0,1)\), \(s,t>1\).

Let \(s=x^{\mu}\), \(t=y^{\mu}\), and \(0< x,y<1\). Then (3.10) leads to the first inequality in (3.9).

If \(\nu>0\) and \(u(z)=\log\operatorname{erf}(z^{1/\nu})\), then Lemma 2.1(3) leads to
$$ \lambda u(s)+(1-\lambda)u(t)\leq u\bigl(\lambda s+(1-\lambda)t \bigr) $$
(3.11)
for \(\lambda\in(0,1)\), \(0< s,t<1\).

Therefore, the second inequality in (3.9) follows from \(s=x^{\nu}\), \(t=y^{\nu}\), and \(0< x,y<1\) together with (3.11) when \(\nu>0\); the case \(\nu=0\) then follows by letting \(\nu\to0^{+}\), since \(M_{\nu}(x,y;\lambda)\) is continuous with respect to \(\nu\).

Second, we prove that the second inequality in (3.9) implies \(\nu\geq0\).

Let \(0< x,y<1\). Then the second inequality in (3.9) leads to
$$ J(x,y):=\operatorname{erf}\bigl(M_{\nu}(x,y;\lambda) \bigr)-G\bigl(\operatorname {erf}(x),\operatorname{erf}(y);\lambda\bigr)\geq 0. $$
(3.12)
It follows from (3.12) that
$$J(y,y)=\frac{\partial}{\partial x}J(x,y)|_{x=y}=0 $$
and
$$ \frac{\partial^{2}}{\partial x^{2}}J(x,y)|_{x=y}=\frac{\lambda(1-\lambda)\operatorname {erf}'(y)}{y} \biggl[\nu-1+ \biggl(2y^{2}+\frac{ye^{-y^{2}}}{\int _{0}^{y}e^{-t^{2}}\,dt} \biggr) \biggr]. $$
(3.13)
Hence, from (3.12) and (3.13) together with Lemma 2.2 we know that
$$\nu\geq\lim_{y\to0^{+}} \biggl[1- \biggl(2y^{2}+ \frac{ye^{-y^{2}}}{\int _{0}^{y}e^{-t^{2}}\,dt} \biggr) \biggr]=0. $$

Finally, we prove that the first inequality in (3.9) implies \(\mu\leq r_{0}\).

Letting \(y\to1^{-}\), the first inequality in (3.9) leads to
$$ L(x):=G\bigl(\operatorname{erf}(x),\operatorname{erf}(1);\lambda \bigr)-\operatorname{erf}\bigl(M_{\mu}(x,1;\lambda)\bigr)\geq 0 $$
(3.14)
for \(0< x<1\).
It follows from (3.14) that
$$ L(1)=0 $$
(3.15)
and
$$ L'(x)=\frac{2\lambda e^{-x^{2}}}{\sqrt{\pi}} \bigl[\operatorname{erf}^{1-\lambda }(1) \operatorname{erf}^{\lambda-1}(x)-x^{\mu-1}\bigl(\lambda x^{\mu}+1-\lambda\bigr)^{1/\mu-1}e^{x^{2}-(\lambda x^{\mu}+1-\lambda)^{2/\mu}} \bigr]. $$
(3.16)
Let
$$ L_{1}(x)=\log \bigl[\operatorname{erf}^{1-\lambda}(1) \operatorname {erf}^{\lambda-1}(x) \bigr]-\log \bigl[x^{\mu-1}\bigl( \lambda x^{\mu}+1-\lambda\bigr)^{1/\mu-1}e^{x^{2}-(\lambda x^{\mu}+1-\lambda)^{2/\mu}} \bigr]. $$
(3.17)
Then
$$\begin{aligned}& \lim_{x\to1^{-}}L_{1}(x)=0, \\& L'_{1}(x)=(\lambda-1)\frac{\operatorname{erf}'(x)}{\operatorname {erf}(x)}- \frac{(\mu-1)(1-\lambda)}{x(\lambda x^{\mu}+1-\lambda)}-2x+2\lambda x^{\mu-1}\bigl(\lambda x^{\mu}+1- \lambda\bigr)^{2/\mu-1}, \end{aligned}$$
(3.18)
and
$$ \lim_{x\to1^{-}}L_{1}'(x)=(1- \lambda) \biggl[-\mu-1-\frac{2}{e \sqrt{\pi}\operatorname{erf}(1)} \biggr]. $$
(3.19)

If \(\mu>r_{0}\), then from (3.19) we clearly see that there exists a small \(\delta_{1}>0\), such that \(L_{1}'(x)<0\) for \(x\in(1-\delta_{1},1)\). Therefore, \(L_{1}(x)\) is strictly decreasing on \([1-\delta_{1},1]\).

The monotonicity of \(L_{1}(x)\) on \([1-\delta_{1},1]\) and (3.18) imply that there exists \(\delta_{2}>0\), such that \(L_{1}(x)>0\) for \(x\in(1-\delta_{2},1)\).

Hence, (3.16) and (3.17) lead to \(L(x)\) being strictly increasing on \([1-\delta_{2},1]\). It follows from the monotonicity of \(L(x)\) and (3.15) that there exists \(\delta_{3}>0\), such that \(L(x)<0\) for \(x\in(1-\delta_{3},1)\), which contradicts (3.14). □
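The necessity argument can be illustrated with the hypothetical choice \(\mu=-1\) (which exceeds \(r_{0}=-1.4926\ldots\)) and \(\lambda=1/2\); as the proof predicts, \(L(x)\) is then negative just to the left of \(x=1\) (our sketch):

```python
import math

lam, mu = 0.5, -1.0   # mu > r0 = -1.4926...

def L(x):
    """L(x) = G(erf(x), erf(1); lam) - erf(M_mu(x, 1; lam)), as in (3.14)."""
    M = (lam * x ** mu + (1 - lam)) ** (1 / mu)   # M_mu(x, 1; lam)
    return math.erf(x) ** lam * math.erf(1) ** (1 - lam) - math.erf(M)

assert abs(L(1.0)) < 1e-12                        # L(1) = 0, cf. (3.15)
assert all(L(x) < 0 for x in (0.9, 0.95, 0.99))   # L < 0 near 1 when mu > r0
```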

Declarations

Acknowledgements

This research was supported by the Natural Science Foundation of China under Grants 61174076, 61374086, 11371125, and 11401191, and the Natural Science Foundation of Zhejiang Province under Grant LY13A010004. The authors wish to thank the anonymous referees for their careful reading of the manuscript and their fruitful comments and suggestions.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
School of Science, Huzhou Teachers College, Huzhou, China
(2)
School of Automation, Nanjing University of Science and Technology, Nanjing, China
(3)
School of Mathematics and Computation Sciences, Hunan City University, Yiyang, China

References

  1. Abramowitz, M, Stegun, I (eds.): Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover, New York (1965)
  2. Oldham, K, Myland, J, Spanier, J: An Atlas of Functions: With Equator, the Atlas Function Calculator, 2nd edn. Springer, New York (2009)
  3. Clendenin, W: Rational approximations for the error function and for similar functions. Commun. ACM 4, 354-355 (1961)
  4. Hart, RG: A close approximation related to the error function. Math. Comput. 20, 600-602 (1966)
  5. Cody, WJ: Rational Chebyshev approximations for the error function. Math. Comput. 23, 631-637 (1969)
  6. Matta, F, Reichel, A: Uniform computation of the error function and other related functions. Math. Comput. 25, 339-344 (1971)
  7. Bajić, B: On the computation of the inverse of the error function by means of the power expansion. Bull. Math. Soc. Sci. Math. Roum. 17(65), 115-121 (1973)
  8. Blair, JM, Edwards, CA, Johnson, JH: Rational Chebyshev approximations for the inverse of the error function. Math. Comput. 30(136), loose microfiche suppl., 7-68 (1976)
  9. Bhaduri, RK, Jennings, BK: Note on the error function. Am. J. Phys. 44(6), 590-592 (1976)
  10. Zimmerman, IH: Extending Menzel’s closed-form approximation for the error function. Am. J. Phys. 44(6), 592-593 (1976)
  11. Elbert, Á, Laforgia, A: An inequality for the product of two integrals relating to the incomplete gamma function. J. Inequal. Appl. 5, 39-51 (2000)
  12. Gawronski, W, Müller, J, Reinhard, M: Reduced cancellation in the evaluation of entire functions and applications to the error function. SIAM J. Numer. Anal. 45(6), 2564-2576 (2007)
  13. Baricz, Á: Mills’ ratio: monotonicity patterns and functional inequalities. J. Math. Anal. Appl. 340, 1362-1370 (2008)
  14. Alzer, H: Functional inequalities for the error function. II. Aequ. Math. 78(1-2), 113-121 (2009)
  15. Dominici, D: Some properties of the inverse error function. Contemp. Math. 457, 191-203 (2008)
  16. Temme, NM: Error functions, Dawson’s and Fresnel integrals. In: NIST Handbook of Mathematical Functions, pp. 159-171. U.S. Dept. Commerce, Washington (2010)
  17. Kharin, SN: A generalization of the error function and its application in heat conduction problems. In: Differential Equations and Their Applications, vol. 176, pp. 51-59 (1981) (in Russian)
  18. Chaudhry, MA, Qadir, A, Zubair, SM: Generalized error functions with applications to probability and heat conduction. Int. J. Appl. Math. 9(3), 259-278 (2002)
  19. Aumann, G: Konvexe Funktionen und die Induktion bei Ungleichungen zwischen Mittelwerten. Münchner Sitzungsber. 109, 403-415 (1933)
  20. Anderson, GD, Vamanamurthy, MK, Vuorinen, M: Generalized convexity and inequalities. J. Math. Anal. Appl. 335(2), 1294-1308 (2007)
  21. Gronau, D: Selected topics on functional equations. In: Functional Analysis, IV (Dubrovnik, 1993). Various Publ. Ser. (Aarhus), vol. 43, pp. 63-84. Aarhus Univ., Aarhus (1994)
  22. Gronau, D, Matkowski, J: Geometrical convexity and generalization of the Bohr-Mollerup theorem on the gamma function. Math. Pannon. 4, 153-160 (1993)
  23. Gronau, D, Matkowski, J: Geometrically convex solutions of certain difference equations and generalized Bohr-Mollerup type theorems. Results Math. 26, 290-297 (1994)
  24. Matkowski, J: \(\mathbf{L}^{p}\)-Like paranorms. In: Selected Topics in Functional Equations and Iteration Theory. Proceedings of the Austrian-Polish Seminar (Graz, 1991). Grazer Math. Ber., vol. 316, pp. 103-138. Karl-Franzens-Univ. Graz, Graz (1992)
  25. Niculescu, CP: Convexity according to the geometric mean. Math. Inequal. Appl. 3, 155-167 (2000)
  26. Alzer, H: Error function inequalities. Adv. Comput. Math. 33(3), 349-379 (2010)
  27. Xia, W, Chu, Y: Optimal inequalities for the convex combination of error function. J. Math. Inequal. 9(1), 85-99 (2015)
  28. Chu, Y, Li, Y, Xia, W, Zhang, X: Best possible inequalities for the harmonic mean of error function. J. Inequal. Appl. 2014, 525 (2014)

Copyright

© Li et al. 2015
