# On the Hurwitz zeta function with an application to the beta-exponential distribution

## Abstract

We prove a monotonicity property of the Hurwitz zeta function which, in turn, translates into a chain of inequalities for polygamma functions of different orders. We provide a probabilistic interpretation of our result by exploiting a connection between the Hurwitz zeta function and the cumulants of the beta-exponential distribution.

## 1 Main result

Let $$\zeta (x,s)=\sum_{k=0}^{+\infty } (k+s)^{-x}$$ be the Hurwitz zeta function [1, 2] defined for $$(x,s)\in (1,+\infty )\times (0,+\infty )$$, and, for any $$a>0$$ and $$b>0$$, consider the function

$$x\mapsto f(x,a,b) = \bigl( \zeta (x,b) - \zeta (x,a+b) \bigr)^{ \frac{1}{x}}$$
(1)

defined on $$[1,+\infty )$$, where $$f(1,a,b)$$ is defined by continuity as

$$f(1,a,b)=\sum_{k=0}^{\infty } \biggl( \frac{1}{k+b}-\frac{1}{k+a+b} \biggr)=\sum _{k=0}^{\infty }\frac{a}{(k+b)(k+a+b)}.$$
(2)

The function $$f(x,a,b)$$ can be alternatively written, with a geometric flavor, as

$$f(x,a,b) = \bigl( \Vert \boldsymbol{v}_{a+b} \Vert _{x}^{x} - \Vert \boldsymbol{v}_{b} \Vert _{x}^{x} \bigr)^{\frac{1}{x}},$$

where, for any $$s>0$$, $$\boldsymbol{v}_{s}$$ is an infinite-dimensional vector whose kth component coincides with $$(k-1+s)^{-1}$$.
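For readers who wish to experiment with the definitions above, the following sketch (our own, not from the paper) evaluates $$f(x,a,b)$$ numerically, assuming SciPy's Hurwitz zeta `zeta(x, q)` and `digamma`; the $$x=1$$ case uses the telescoped form of (2).

```python
# Numerical sketch of f(x, a, b) from (1)-(2); the helper name `f`
# mirrors the paper's notation, the implementation is ours.
from scipy.special import zeta, digamma

def f(x, a, b):
    """(zeta(x,b) - zeta(x,a+b))^(1/x); at x = 1 the series (2)
    telescopes against the digamma series, giving psi(a+b)-psi(b)."""
    if x == 1:
        return digamma(a + b) - digamma(b)
    return (zeta(x, b) - zeta(x, a + b)) ** (1.0 / x)

# a = 1: zeta(x,b) - zeta(x,1+b) = b^(-x), so f(x,1,b) = 1/b
for x in [1, 1.5, 2, 5, 10]:
    assert abs(f(x, 1.0, 2.0) - 0.5) < 1e-10

# Theorem 1: increasing in x when 0 < a < 1
vals = [f(x, 0.5, 1.0) for x in [1, 2, 3, 4]]
assert all(u < v for u, v in zip(vals, vals[1:]))
```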

The main result of the paper establishes that the function $$x\mapsto f(x,a,b)$$ is monotone on $$[1,+\infty )$$, with the direction of monotonicity determined solely by the value of a. More specifically:

### Theorem 1

For any $$b>0$$, the function $$x\mapsto f(x,a,b)$$ defined on $$[1,+\infty )$$ is increasing (see Footnote 1) if $$0< a<1$$, decreasing if $$a>1$$, and constantly equal to $$\frac{1}{b}$$ if $$a=1$$.

Remarkably, when the first argument x of f is a positive integer, say $$x=n\in \mathbb{N}\setminus \{0\}$$, the monotonicity property established by Theorem 1 translates into a chain of inequalities in terms of polygamma functions of different orders, which might be of independent interest. Namely, for any $$b>0$$ and any $$0< a_{1}<1<a_{2}$$, the following holds:

$$\textstyle\begin{cases} \psi ^{(0)}(b+a_{1})-\psi ^{(0)}(b)< \cdots < ( \frac{(-1)^{n} (\psi ^{(n)}(b+a_{1})-\psi ^{(n)}(b) )}{n!} )^{\frac{1}{n+1}}< \cdots < \frac{1}{b}, \\ \psi ^{(0)}(b+a_{2})-\psi ^{(0)}(b)>\cdots > ( \frac{(-1)^{n} (\psi ^{(n)}(b+a_{2})-\psi ^{(n)}(b) )}{n!} )^{\frac{1}{n+1}}> \cdots > \frac{1}{b}, \end{cases}$$
(3)

where $$\psi ^{(m)}$$ for $$m\in \mathbb{N}$$ denotes the polygamma function of order m, defined as the derivative of order $$m+1$$ of the logarithm of the gamma function.
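As a sanity check, the chain (3) can be verified numerically with SciPy's `polygamma`. Each polygamma difference is multiplied by the sign factor $$(-1)^{n}$$, which makes the bracket positive since $$\psi ^{(n)}$$ is, up to the sign $$(-1)^{n+1}$$, a positive multiple of the Hurwitz zeta function; the helper name `term` and the parameter values are our own choices.

```python
# Check of the chain (3): term(n, a, b) equals f(n+1, a, b).
from math import factorial
from scipy.special import polygamma

def term(n, a, b):
    return ((-1) ** n * (polygamma(n, b + a) - polygamma(n, b))
            / factorial(n)) ** (1.0 / (n + 1))

b = 1.0
seq1 = [term(n, 0.5, b) for n in range(6)]   # 0 < a_1 < 1
seq2 = [term(n, 2.0, b) for n in range(6)]   # a_2 > 1
assert all(x < y < 1 / b for x, y in zip(seq1, seq1[1:]))  # increasing
assert all(x > y > 1 / b for x, y in zip(seq2, seq2[1:]))  # decreasing
```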

Theorem 1 and the derived inequalities in (3) add to the existing literature on inequalities and monotonicity properties of the Hurwitz zeta function [1, 3–6] and of polygamma functions [7–11], respectively. The last part of the statement of Theorem 1 is immediately verified: when $$a=1$$, $$f(x,a,b)$$ simplifies to a telescoping series, which gives $$f(x,1,b)=\frac{1}{b}$$ for every $$x\in [1,+\infty )$$. The rest of the proof is presented in Sect. 2, while Sect. 3 is dedicated to an application of Theorem 1 to the study of the so-called beta-exponential distribution [12, 13], obtained by applying a log-transformation to a beta distributed random variable. Specifically, the chain of inequalities in (3), formally derived in Sect. 3, translates into an analogous monotonicity property for the cumulants of the beta-exponential distribution. Furthermore, the dichotomy observed in Theorem 1, determined by the position of a with respect to 1, is shown to hold for the beta-exponential distribution at the level of (i) its cumulants (whether function (6) is increasing or not), (ii) its dispersion (Corollary 2), (iii) the shape of its density (log-convex or log-concave, Proposition 1 and Fig. 1), and (iv) its hazard function (increasing or decreasing, Proposition 2).

## 2 Proof of Theorem 1

The proof of Theorem 1 relies on Lemma 1, stated in what follows. Lemma 1 considers two sequences and establishes the monotonicity of a third one, defined as a function of the first two, whose direction depends on how the two original sequences compare with each other. The same dichotomy, in Theorem 1, is driven by the position of the real number a with respect to 1.

### Lemma 1

Let $$(s_{n})_{n\geq 1}$$ and $$(r_{n})_{n\geq 1}$$ be two sequences in $$(0,1)$$ and define, for $$N\geq 1$$,

$$u_{N}\overset{\mathrm{def}}{=} \Biggl(1+\sum _{n=1}^{N} (s_{n}-r_{n}) \Biggr)\ln \Biggl(1+\sum_{n=1}^{N} (s_{n}-r_{n}) \Biggr)-\sum_{n=1}^{N}(s_{n} \ln s_{n}-r_{n}\ln r_{n}).$$

By convention, we set $$u_{0}=0$$. Two cases are considered:

1. If $$r_{n}\leq s_{n}$$ for all $$n\geq 1$$, then, for all $$N\geq 0$$, we have $$u_{N+1}\geq u_{N}$$, with equality if and only if $$s_{N+1}=r_{N+1}$$;

2. If $$s_{n+1}\leq r_{n+1}\leq s_{n}\leq r_{n}$$ for all $$n\geq 1$$, then, for all $$N\geq 0$$, we have $$u_{N+1}\leq u_{N}$$, with equality if and only if $$s_{N+1}=r_{N+1}$$.

Moreover, if $$\sum_{n=1}^{\infty } \vert s_{n}-r_{n} \vert <\infty$$ and the series $$\sum_{n=1}^{\infty } (s_{n}\ln s_{n}-r_{n}\ln r_{n})$$ converges absolutely (see Remark 1), then $$u_{\infty }\overset{\mathrm{def}}{=}\lim_{N\to +\infty }u_{N}$$ exists and is given by

$$u_{\infty }= \Biggl(1+\sum_{n=1}^{\infty } (s_{n}-r_{n}) \Biggr)\ln \Biggl(1+ \sum _{n=1}^{\infty } (s_{n}-r_{n}) \Biggr)-\sum_{n=1}^{\infty }(s_{n} \ln s_{n}-r_{n}\ln r_{n}),$$

which satisfies $$u_{\infty }\geq 0$$ in case 1 and $$u_{\infty }\leq 0$$ in case 2. In both cases, $$u_{\infty }= 0$$ if and only if the two sequences $$(r_{n})_{n\geq 1}$$ and $$(s_{n})_{n\geq 1}$$ coincide.
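Lemma 1 can be illustrated numerically. The sketch below, with geometric sequences of our own choosing that satisfy the ordering of each case, computes $$u_{0},\dots ,u_{N}$$ directly from the definition and checks the claimed monotonicity.

```python
# Numerical illustration of Lemma 1 (sequence choices are ours).
import math

def u_seq(s, r, n_terms):
    """Return u_0, ..., u_{n_terms} from Lemma 1 for sequences s, r."""
    out = []
    for N in range(n_terms + 1):
        d = 1 + sum(s[:N]) - sum(r[:N])
        out.append(d * math.log(d)
                   - sum(si * math.log(si) - ri * math.log(ri)
                         for si, ri in zip(s[:N], r[:N])))
    return out

N = 20
s = [2.0 ** -(n + 1) for n in range(N)]
r1 = [0.25 * x for x in s]   # case 1: r_n < s_n
r2 = [1.5 * x for x in s]    # case 2: s_{n+1} < r_{n+1} < s_n < r_n
u1, u2 = u_seq(s, r1, N), u_seq(s, r2, N)
assert all(a < b for a, b in zip(u1, u1[1:]))  # increasing (case 1)
assert all(a > b for a, b in zip(u2, u2[1:]))  # decreasing (case 2)
```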

### Remark 1

Note that, in case 2, we have

\begin{aligned} 1+\sum_{n=1}^{N} (s_{n}-r_{n})=(1-r_{1})+s_{N}+ \sum_{n=1}^{{N-1}}(s_{n}-r_{n+1}) \geq (1-r_{1})+s_{N}> 0, \end{aligned}

so that all quantities defined in the lemma make sense. As for the absolute convergence of $$\sum_{n=1}^{\infty } (s_{n}\ln s_{n}-r_{n}\ln r_{n})$$ invoked in Lemma 1, the mean value theorem applied to $$t\mapsto t\ln t$$ yields

\begin{aligned} \vert s\ln s -r\ln r \vert \leq \bigl(1+ \vert \ln r \vert \bigr) (s-r) ,\quad \forall 0< r\leq s< 1, \end{aligned}

so that the series converges absolutely as soon as $$\sum_{n=1}^{\infty }(1+ \vert \ln r_{n} \vert ) \vert s_{n}-r_{n} \vert <\infty$$. This condition is met, in particular, by the polynomially decaying sequences used in the proof of Theorem 1.

### Proof

(case $$N=0$$). We first study the case $$N=0$$ and define

\begin{aligned} h_{r_{1}}(s_{1})=(1+s_{1}-r_{1})\ln (1+s_{1}-r_{1})-s_{1}\ln s_{1}+r_{1} \ln r_{1}. \end{aligned}

For $$s_{1}=r_{1}$$, we trivially have $$h_{r_{1}}(r_{1})=0$$. A straightforward computation shows that

\begin{aligned} h_{r_{1}}'(s_{1})=\ln (1+s_{1}-r_{1})- \ln s_{1}=\ln \bigl((1-r_{1})+s_{1}\bigr)- \ln s_{1}>0 \end{aligned}

since $$r_{1}<1$$. Hence $$h_{r_{1}}$$ is an increasing function on $$(0,1)$$. Since $$h_{r_{1}}(r_{1})=0$$, we immediately get that $$h_{r_{1}}$$ is positive on $$(r_{1},1)$$ and negative on $$(0,r_{1})$$, thus proving both cases for $$N=0$$. □

### Proof

(case $$N\geq 1$$). We now consider the case $$N\geq 1$$ and define

\begin{aligned} h_{r_{1},\ldots,r_{N+1},s_{1},\ldots,s_{N}}(s_{N+1})={}& u_{N+1}-u_{N} \\ ={}& \Biggl(1+\sum_{n=1}^{N+1} (s_{n}-r_{n}) \Biggr)\ln \Biggl(1+ \sum _{n=1}^{N+1} (s_{n}-r_{n}) \Biggr) \\ & {} - \Biggl(1+\sum_{n=1}^{N} (s_{n}-r_{n}) \Biggr)\ln \Biggl(1+\sum _{n=1}^{N} (s_{n}-r_{n}) \Biggr) \\ & {} -s_{N+1}\ln s_{N+1}+r_{N+1}\ln r_{N+1}. \end{aligned}

We trivially get that $$h_{r_{1},\ldots,r_{N+1},s_{1},\ldots,s_{N}}(r_{N+1})=0$$. Moreover, we have

\begin{aligned} h_{r_{1},\ldots,r_{N+1},s_{1},\ldots,s_{N}}'(s_{N+1})&=\ln \Biggl(1+ \sum _{n=1}^{N+1} (s_{n}-r_{n}) \Biggr)-\ln s_{N+1} \\ &\overset{\text{(1)}}{=} \ln \Biggl(s_{N+1}+(1-r_{N+1})+\sum _{n=1}^{N} (s_{n}-r_{n}) \Biggr)-\ln s_{N+1} \\ &\overset{\text{(2)}}{=} \ln \Biggl(s_{N+1}+(1-r_{1})+\sum _{n=1}^{N} (s_{n}-r_{n+1}) \Biggr)-\ln s_{N+1}. \end{aligned}

Equality $$(1)$$ shows that, under the conditions of case 1, the derivative $$h_{r_{1},\ldots,r_{N+1},s_{1},\ldots,s_{N}}'$$ is positive on $$(r_{N+1},1)$$, while equality $$(2)$$ shows that, under the conditions of case 2, it is positive on $$(0,r_{N+1})$$. Since $$h_{r_{1},\ldots,r_{N+1},s_{1},\ldots,s_{N}}(r_{N+1})=0$$, it follows that $$h_{r_{1},\ldots,r_{N+1},s_{1},\ldots,s_{N}}$$ is positive on $$(r_{N+1},1)$$ in case 1 and negative on $$(0,r_{N+1})$$ in case 2. This establishes the monotonicity of $$(u_{N})_{N\geq 0}$$, together with the conditions for strict monotonicity, in both cases. The extension from finite N to the limit $$N\to \infty$$ follows from these results and Remark 1. □

### Proof

(of Theorem 1). We want to study the variations of

$$x \mapsto f(x,a,b)= \bigl(\zeta (x,b)-\zeta (x,a+b) \bigr)^{ \frac{1}{x}}$$

on $$[1,\infty )$$, for which it is enough, by continuity, to focus on $$(1,\infty )$$. Since f is positive, its variations are equivalent to those of

\begin{aligned} F(x,a,b)&\overset{\mathrm{def}}{=}\ln f(x,a,b) \\ &= \frac{1}{x}\ln \bigl(\zeta (x,b)-\zeta (x,a+b) \bigr)= \frac{1}{x}\ln \Biggl( \sum_{k=0}^{\infty } \frac{1}{(k+b)^{x}}- \frac{1}{(k+a+b)^{x}} \Biggr) \\ &=-\ln b+ \frac{1}{x} \ln \Biggl( \sum_{k=0}^{\infty } \biggl( \frac{b}{k+b} \biggr)^{x}- \biggl(\frac{b}{k+a+b} \biggr)^{x} \Biggr). \end{aligned}

A straightforward computation shows that

\begin{aligned} \partial _{x} F(x,a,b)= \frac{H(x,a,b)}{x^{2} (\sum_{k=0}^{\infty } (\frac{b}{k+b} )^{x}- (\frac{b}{k+a+b} )^{x} )}. \end{aligned}

Hence the sign of the derivative $$\partial _{x} F(x,a,b)$$ is the same as that of $$H(x,a,b)$$ defined by

\begin{aligned} H(x,a,b)\overset{\mathrm{def}}{=} {}&\sum_{k=0}^{\infty } \biggl( \frac{b}{k+b} \biggr)^{x} \ln \biggl(\frac{b}{k+b} \biggr)^{x}- \biggl(\frac{b}{k+b+a} \biggr)^{x} \ln \biggl(\frac{b}{k+b+a} \biggr)^{x} \\ &{} - \Biggl(\sum_{k=0}^{\infty } \biggl( \frac{b}{k+b} \biggr)^{x}- \biggl(\frac{b}{k+a+b} \biggr)^{x} \Biggr)\ln \Biggl(\sum_{k=0}^{ \infty } \biggl(\frac{b}{k+b} \biggr)^{x}- \biggl(\frac{b}{k+a+b} \biggr)^{x} \Biggr) \\ ={}&\sum_{k=1}^{\infty } \biggl(\frac{b}{k+b} \biggr)^{x} \ln \biggl(\frac{b}{k+b} \biggr)^{x} - \biggl(\frac{b}{k+b+a-1} \biggr)^{x} \ln \biggl(\frac{b}{k+b+a-1} \biggr)^{x} \\ &{}- \Biggl(1+\sum_{k=1}^{\infty } \biggl( \frac{b}{k+b} \biggr)^{x}- \biggl(\frac{b}{k+a-1+b} \biggr)^{x} \Biggr) \\ &{}\times \ln \Biggl(1+\sum_{k=1}^{\infty } \biggl(\frac{b}{k+b} \biggr)^{x}- \biggl(\frac{b}{k+a-1+b} \biggr)^{x} \Biggr), \end{aligned}

which can be rewritten as

\begin{aligned} \sum_{n=1}^{\infty } (s_{n} \ln s_{n}-r_{n}\ln r_{n}) - \Biggl(1+ \sum _{n=1}^{\infty } (s_{n}- r_{n}) \Biggr)\ln \Biggl(1+\sum_{n=1}^{ \infty } (s_{n}-r_{n}) \Biggr), \end{aligned}

where, for all $$n \geq 1$$, we have defined

$$s_{n}= \biggl(\frac{b}{n+b} \biggr)^{x} \quad \text{and} \quad r_{n}= \biggl(\frac{b}{n+a-1+b} \biggr)^{x}.$$

Note that, for any values of $$a>0$$ and $$b>0$$, we have $$\sum_{n=1}^{\infty } \vert s_{n}-r_{n} \vert <\infty$$, since $$x>1$$ implies both $$\sum_{n=1}^{\infty } s_{n}<\infty$$ and $$\sum_{n=1}^{\infty } r_{n}<\infty$$. Moreover, when $$a>1$$, we have $$0< r_{n}<s_{n}< 1$$, while, when $$0< a<1$$, we have $$0< r_{n+1}< s_{n}< r_{n}<1$$ for all $$n\geq 1$$ and $$x>1$$. We can then apply Lemma 1 to obtain that $$H(x,a,b)$$, and thus $$\partial _{x} F(x,a,b)$$, is negative if $$a>1$$ (case 1 of Lemma 1) and positive if $$0< a<1$$ (case 2 of Lemma 1), which concludes the proof. □
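As a quick numerical sanity check of the orderings of $$(s_{n})$$ and $$(r_{n})$$ invoked in the proof (the parameter values below are our own choices):

```python
# Orderings of s_n and r_n from the proof of Theorem 1.
b, x = 0.7, 2.5

def s(n):
    return (b / (n + b)) ** x

def r(n, a):
    return (b / (n + a - 1 + b)) ** x

# a > 1: 0 < r_n < s_n < 1
assert all(0 < r(n, 2.3) < s(n) < 1 for n in range(1, 50))

# 0 < a < 1: interlacing s_{n+1} < r_{n+1} < s_n < r_n < 1
a = 0.4
assert all(s(n + 1) < r(n + 1, a) < s(n) < r(n, a) < 1
           for n in range(1, 50))
```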

## 3 Probabilistic interpretation: application to the beta-exponential distribution

The aim of this section is to identify a connection between Theorem 1 and the beta-exponential distribution. The latter arises by taking a log-transformation of a beta random variable. More specifically, let V be a beta random variable with parameters $$a>0$$ and $$b>0$$; then we say that X is a beta-exponential random variable with parameters a and b if $$X = -\ln (1-V)$$, and use the notation $$X\sim \operatorname{BE}(a,b)$$. A three-parameter generalization of the beta-exponential distribution is studied in [13]; a related family of distributions, named generalized exponential, is investigated in [12]. The density function of $$X\sim \operatorname{BE}(a,b)$$ is given by

$$g(x;a,b) = \frac{1}{B(a,b)}\bigl(1-\mathrm{e}^{-x} \bigr)^{a-1}\mathrm{e}^{-bx} \mathbf{1}_{(0,+\infty )}(x),$$
(4)

where $$B(a,b)$$ denotes the beta function (see Footnote 2). See the right panel of Fig. 1 for an illustration of the densities $$x\mapsto g(x;a,1)$$ for different values of a. The corresponding cumulant-generating function can be written as follows:

\begin{aligned} K(t)&\overset{\mathrm{def}}{=}\ln \mathbb{E}\bigl(\exp (tX)\bigr) =\ln \varGamma (a+b)+ \ln \varGamma (b-t)-\ln \varGamma (b)-\ln \varGamma (a+b-t), \end{aligned}

provided that $$t< b$$ (see Sect. 3 of [13]). This implies that, for any $$n\geq 1$$, the nth cumulant of X, denoted by $$\kappa _{n}(a,b)$$, is given by

$$\kappa _{n}(a,b) =(-1)^{n} \bigl(\psi ^{(n-1)}(b)-\psi ^{(n-1)}(b+a) \bigr).$$
(5)
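A quick Monte Carlo check of the first two cumulants in (5), namely the mean $$\kappa _{1}(a,b)=\psi ^{(0)}(a+b)-\psi ^{(0)}(b)$$ and the variance $$\kappa _{2}(a,b)=\psi ^{(1)}(b)-\psi ^{(1)}(a+b)$$, assuming NumPy's beta sampler; the parameter values and seed are arbitrary choices of ours.

```python
# Monte Carlo check of (5) for n = 1, 2: sample X = -ln(1 - V),
# V ~ Beta(a, b), and compare with the polygamma expressions.
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(0)
a, b = 2.5, 1.5
x = -np.log(1.0 - rng.beta(a, b, size=1_000_000))

k1 = digamma(a + b) - digamma(b)            # (5) with n = 1
k2 = polygamma(1, b) - polygamma(1, a + b)  # (5) with n = 2
assert abs(x.mean() - k1) < 0.01
assert abs(x.var() - k2) < 0.01
```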

An interesting relation across cumulants of different orders is then obtained as a straightforward application of Theorem 1. Before stating the result, and for the sake of compactness, we define on $$\mathbb{N}\setminus \{0\}$$, for any $$a>0$$ and $$b>0$$, the function

$$n\mapsto f_{\mathrm{BE}}(n,a,b)= \biggl(\frac{\kappa _{n}(a,b)}{(n-1)!} \biggr)^{\frac{1}{n}}.$$
(6)

### Corollary 1

For any $$b>0$$, the function $$n\mapsto f_{\mathrm{BE}}(n,a,b)$$ defined on $$\mathbb{N}\setminus \{0\}$$ is increasing if $$0< a<1$$, decreasing if $$a>1$$, and constantly equal to $$\frac{1}{b}$$ if $$a=1$$.

### Proof

The proof follows by observing that, when $$n\in \mathbb{N}\setminus \{0\}$$, $$f_{\mathrm{BE}}(n,a,b)=f(n,a,b)$$, with the latter defined in (1) and (2). This can be seen, when $$n>1$$, by applying twice the identity $$\psi ^{(n-1)}(s)=(-1)^{n} (n-1)! \zeta (n,s)$$, and, when $$n=1$$, by applying twice the identity $$\psi ^{(0)}(s)=-\gamma +\sum_{k=0}^{\infty } \frac{s-1}{(k+1)(k+s)}$$, where γ is the Euler–Mascheroni constant, which holds for any $$s>-1$$ (see Identity 6.3.16 in [14]). □

As a by-product, a combination of Corollary 1 and (5) proves the chain of inequalities (3) presented in Sect. 1. Corollary 1 highlights the critical role played by the exponential distribution with mean $$\frac{1}{b}$$, a special case of the beta-exponential distribution recovered from (4) by setting $$a=1$$. In this special instance, the cumulants simplify to $$\kappa _{n}(1,b)=b^{-n}(n-1)!$$, which makes $$f_{\mathrm{BE}}(n,1,b)=\frac{1}{b}$$ for every $$n\in \mathbb{N}\setminus \{0\}$$. Within the beta-exponential family, the case $$a=1$$ then creates a dichotomy by identifying two subclasses of densities, namely $$\{g(x;a,b) : 0< a<1\}$$, whose cumulants $$\kappa _{n}(a,b)$$ make $$f_{\mathrm{BE}}(n,a,b)$$ an increasing function of n, and $$\{g(x;a,b) : a>1\}$$, for which $$f_{\mathrm{BE}}(n,a,b)$$ is a decreasing function of n. The left panel of Fig. 1 illustrates Corollary 1: it displays the function $$b\mapsto f_{\mathrm{BE}}(n,a,b)$$ for values of $$n\in \{1,2,3\}$$ and $$a\in [0.4,4]$$, and one can appreciate that, for any b in the considered range $$[0,10]$$, the order of the values taken by $$f_{\mathrm{BE}}(n,a,b)$$ agrees with Corollary 1.

The first two cumulants of a random variable X have a simple interpretation in terms of its first two moments, namely $$\kappa _{1} = \mathbb{E}[X]$$ and $$\kappa _{2} = \operatorname{Var}[X]$$. A special case of Corollary 1, focusing on the case $$n\in \{1,2\}$$, then provides an interesting result relating the dispersion of the beta-exponential distribution with its mean. Specifically,

### Corollary 2

For any $$b>0$$, the beta-exponential random variable $$X\sim \mathrm{BE}(a,b)$$ is characterized by over-dispersion $$(\sqrt{\operatorname{Var}[X]}>\mathbb{E}[X] )$$ if $$0< a<1$$, under-dispersion $$(\sqrt{\operatorname{Var}[X]}<\mathbb{E}[X] )$$ if $$a>1$$, and equi-dispersion $$(\sqrt{\operatorname{Var}[X]}=\mathbb{E}[X] )$$ if $$a=1$$.

The behavior of the cumulants is not the only distinctive feature separating the two subclasses of density functions corresponding to $$0< a<1$$ and $$a>1$$. For any b, the value of a determines the shape of the density, as displayed in the right panel of Fig. 1 and summarized by the next proposition, whose proof is immediate from the identity $$\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}\ln g(x;a,b)=-(a-1) \frac{\mathrm{e}^{-x}}{(1-\mathrm{e}^{-x})^{2}}$$ and thus omitted: for any $$0< a<1$$, the density is log-convex (curves in blue on the right panel of Fig. 1), while, for any $$a>1$$, the density is log-concave (curves in red); the case $$a=1$$ corresponds to the exponential distribution (curve in black).

### Proposition 1

For any $$b>0$$, the beta-exponential density $$g(x;a,b)$$ is log-convex if $$0< a<1$$ and log-concave if $$a>1$$.
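Proposition 1 can be checked numerically by inspecting the sign of the second differences of $$\ln g(\cdot ;a,b)$$ on a grid; the sketch below uses the density (4) and parameter values of our own choosing.

```python
# Log-convexity (a < 1) vs log-concavity (a > 1) of the BE density (4).
import numpy as np
from scipy.special import betaln

def log_g(x, a, b):
    """ln g(x; a, b) = (a-1) ln(1-e^{-x}) - b x - ln B(a, b)."""
    return (a - 1) * np.log1p(-np.exp(-x)) - b * x - betaln(a, b)

x = np.linspace(0.1, 8.0, 400)
for a, sign in [(0.5, 1.0), (3.0, -1.0)]:
    d2 = np.diff(log_g(x, a, 1.2), n=2)  # discrete 2nd derivative
    assert np.all(sign * d2 > 0)         # convex for a<1, concave for a>1
```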

The same dichotomy within the beta-exponential distribution is further highlighted by the behavior of the corresponding hazard function, defined for an absolutely continuous random variable X as the function $$x\mapsto \frac{f_{X}(x)}{1-F_{X}(x)}$$, where $$f_{X}$$ and $$F_{X}$$ are, respectively, the probability density function and the cumulative distribution function of X.

### Proposition 2

For any $$b>0$$, the hazard function of the beta-exponential distribution with parameters a and b is decreasing if $$a<1$$, increasing if $$a>1$$, and constantly equal to b if $$a=1$$.

### Proof

The result follows from the log-convexity and log-concavity properties of $$g(x;a,b)$$ established in Proposition 1 (see [15]). □
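Proposition 2 can likewise be illustrated numerically: since $$F_{X}(x)=\mathbb{P}(V\leq 1-\mathrm{e}^{-x})$$ is a regularized incomplete beta function, the hazard is computable with SciPy's `betainc`; the grid and parameter values below are our own choices.

```python
# Hazard g/(1-F) of BE(a, b) on a grid: decreasing for a < 1,
# increasing for a > 1.
import numpy as np
from scipy.special import betainc, betaln

def hazard(x, a, b):
    log_g = (a - 1) * np.log1p(-np.exp(-x)) - b * x - betaln(a, b)
    cdf = betainc(a, b, 1.0 - np.exp(-x))   # F_X(x) = I_{1-e^{-x}}(a,b)
    return np.exp(log_g) / (1.0 - cdf)

x = np.linspace(0.1, 5.0, 200)
h_low, h_high = hazard(x, 0.5, 1.0), hazard(x, 3.0, 1.0)
assert np.all(np.diff(h_low) < 0)    # decreasing when a < 1
assert np.all(np.diff(h_high) > 0)   # increasing when a > 1
```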

Finally, it is worth remarking that an analogous dichotomy holds within the class of gamma density functions with shape parameter $$a>0$$ and rate parameter $$b>0$$, and that, once again, the exponential distribution with mean $$\frac{1}{b}$$, the special case recovered by setting $$a=1$$, lies at the border between the two subclasses. The nth cumulant of the gamma distribution is $$\kappa _{n}=ab^{-n}(n-1)!$$, which makes the function $$n\mapsto (\frac{\kappa _{n}(a,b)}{(n-1)!} )^{\frac{1}{n}}=\frac{a^{1/n}}{b}$$, defined on $$\mathbb{N}\setminus \{0\}$$, increasing if $$0< a<1$$, decreasing if $$a>1$$, and constantly equal to $$\frac{1}{b}$$ if $$a=1$$. Similarly, the gamma density is log-convex if $$0< a<1$$ and log-concave if $$a>1$$; thus, the corresponding hazard function is decreasing if $$a<1$$, increasing if $$a>1$$, and constantly equal to b if $$a=1$$.

1. Throughout the paper, we say that a function f is increasing (resp. decreasing) if $$x< y$$ implies $$f(x)< f(y)$$ (resp. $$f(x)>f(y)$$), and that a quantity A is positive (resp. negative) if $$A>0$$ (resp. $$A<0$$).

2. In this article $$B(a,b)$$ is defined as $$B(a,b)=\int _{0}^{+\infty }(1-\mathrm{e}^{-x})^{a-1}\mathrm{e}^{-bx} \,\mathrm{d}x$$, which reduces to the usual beta integral through the change of variables $$y=1-\mathrm{e}^{-x}$$.

## References

1. Berndt, B.C.: On the Hurwitz zeta-function. Rocky Mt. J. Math. 2(1), 151–157 (1972)

2. Srivastava, H.M., Choi, J.: Zeta and q-Zeta Functions and Associated Series and Integrals. Elsevier, London (2012)

3. Simsek, Y.: q-Dedekind type sums related to q-zeta function and basic L-series. J. Math. Anal. Appl. 318(1), 333–351 (2006)

4. Simsek, Y.: On twisted q-Hurwitz zeta function and q-two-variable L-function. Appl. Math. Comput. 187(1), 466–473 (2007)

5. Srivastava, H.M., Jankov, D., Pogány, T.K., Saxena, R.K.: Two-sided inequalities for the extended Hurwitz–Lerch Zeta function. Comput. Math. Appl. 62(1), 516–522 (2011)

6. Leping, H., Mingzhe, G.: A Hilbert integral inequality with Hurwitz Zeta function. J. Math. Inequal. 7(3), 377–387 (2013)

7. Alzer, H.: Inequalities for the gamma and polygamma functions. Abhandlungen Math. Semin. Univ. Hamb. 68(1), 363–372 (1998)

8. Alzer, H.: Mean-value inequalities for the polygamma functions. Aequ. Math. 61(1–2), 151–161 (2001)

9. Batir, N.: Some new inequalities for gamma and polygamma functions. J. Inequal. Pure Appl. Math. 6(4), 1–9 (2005)

10. Qi, F., Guo, S., Guo, B.-N.: Complete monotonicity of some functions involving polygamma functions. J. Comput. Appl. Math. 233(9), 2149–2160 (2010)

11. Guo, B.-N., Qi, F., Zhao, J.-L., Luo, Q.-M.: Sharp inequalities for polygamma functions. Math. Slovaca 65(1), 103–120 (2015)

12. Gupta, R.D., Kundu, D.: Generalized exponential distributions. Aust. N. Z. J. Stat. 41(2), 173–188 (1999)

13. Nadarajah, S., Kotz, S.: The beta exponential distribution. Reliab. Eng. Syst. Saf. 91(6), 689–697 (2006)

14. Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions: With Formulas, Graphs, and Mathematical Tables. Dover, New York (1965)

15. Barlow, R.E., Proschan, F.: Statistical Theory of Reliability and Life Testing: Probability Models. Holt, Rinehart & Winston, New York (1975)

### Acknowledgements

The authors would like to thank Marco Mazzola for fruitful suggestions.

### Availability of data and materials

Data sharing not applicable to this article as no datasets were generated or analysed during the current study.

## Funding

O.M. would like to thank Université Lyon 1, Université Jean Monnet and Institut Camille Jordan for material support. This work was developed in the framework of the Ulysses Program for French–Irish collaborations (43135ZK), the Grenoble Alpes Data Institute and the LABEX MILYON (ANR-10-LABX-0070) of Université de Lyon, within the program “Investissements d’Avenir” (ANR-11-IDEX-0007) and (ANR-15-IDEX-02) operated by the French National Research Agency (ANR).

## Author information


### Contributions

All three authors contributed equally to the present manuscript. All authors read and approved the final manuscript.

### Corresponding author

Correspondence to Julyan Arbel.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests.
