# Extended cumulative entropy based on kth lower record values for the coherent systems lifetime

## Abstract

Kayal (Probab. Eng. Inf. Sci. 30(4):640–662, 2016) first proposed the generalized cumulative entropy based on lower record values. Motivated by this work, Tahmasebi and Eskandarzadeh (Stat. Probab. Lett. 126:164–172, 2017) recently proposed an extended cumulative entropy (ECE) based on kth lower record values. In this paper, we obtain some properties of the ECE. We study this measure of information for the lifetime of coherent systems with identically distributed components. We define the conditional ECE for the system lifetime and discuss some of its properties. We also use this idea to propose a measure of extended cumulative past inaccuracy. Finally, we propose empirical estimators of these measures and study their large sample properties.

## Introduction

Information measures play a fundamental role in various areas of science such as probability and statistics, financial analysis, engineering, and information theory; see, e.g., Cover and Thomas . One of the most important measures of uncertainty in probability and statistics is the entropy of a random phenomenon. Let X denote the random lifetime of a system or a component with probability density function (pdf) f and survival function $$\bar{F}=1-F$$. Shannon  introduced a measure of uncertainty associated with X as follows:

\begin{aligned} H(X)=-\mathbb{E}\bigl[\log f(X)\bigr]=- \int _{0}^{+\infty }f(x)\log f(x)\,dx, \end{aligned}
(1.1)

where “log” stands for the natural logarithm, with the convention $$0\log 0=0$$; $$\mathbb{E}(\cdot)$$ denotes expectation. $$H(X)$$ can be used to measure how far the distribution of X is from a uniform distribution. Some alternative information measures have been proposed in the literature. Rao et al.  defined another measure of information, called the cumulative residual entropy (CRE), given by

\begin{aligned} {\mathcal{E}}(X)= \int _{0}^{+\infty }\bar{F}(x)\varLambda (x)\,dx, \end{aligned}

where $$\varLambda (x)=-\log \bar{F}(x)$$. An information measure similar to $${\mathcal{E}}(X)$$ is the cumulative entropy (CE) defined as follows (see Di Crescenzo and Longobardi ):

\begin{aligned} {\mathcal{CE}}(X)= \int _{0}^{+\infty }F(x)\tilde{\varLambda }(x)\,dx, \end{aligned}
(1.2)

where $$\tilde{\varLambda }(x)=-\log F(x)$$. Note that $${\mathcal{CE}}(X)\geq 0$$ and that $${\mathcal{CE}}(X)=0$$ if and only if X is degenerate, i.e., $$X=c$$ for some constant c. The CE can be seen as a dispersion measure (see Toomaj et al. ). More properties of the CE in past lifetime are available in Di Crescenzo and Longobardi  and Navarro et al. .

Recently, Di Crescenzo and Toomaj  discussed some properties of a new weighted distribution based on stochastic orders and introduced the reversed relevation transform in connection with the CE function. Some new connections of the CRE with the residual lifetime are given by Kapodistria and Psarrakos  using the relevation transform. Psarrakos and Navarro  generalized the concept of CRE by relating it to the mean time between record values and to the relevation transform, and they also considered a dynamic version of this new measure. Sordo and Psarrakos  provided comparison results for the cumulative residual entropy of systems and their dynamic versions. Toomaj et al.  used the CRE for coherent and mixed systems when the component lifetimes are identically distributed. Kayal  proposed the generalized cumulative entropy based on lower record values and obtained various results on it. Calì et al.  studied the generalized cumulative past information in coherent systems.

Let $$\{X_{n}, n\geq 1\}$$ be a sequence of independent and identically distributed random variables with cumulative distribution function (cdf) F and pdf f. An observation $$X_{j}$$ will be called a lower record value if its value is less than the values of all previous observations. Thus, $$X_{j}$$ is a lower record value if $$X_{j}< X_{i}$$ for every $$i< j$$. For a fixed positive integer k, Dziubdziela and Kopocinski  defined the sequence $$\{L_{n(k)}, n\geq 1\}$$ of kth lower record times for the sequence $$\{X_{n}, n\geq 1\}$$ as follows:

$$L_{1(k)}=1 ,\qquad L_{n+1(k)}=\min \{j>L_{n(k)} :X_{k:L_{n(k)}+k-1}>X_{k:k+j-1} \},$$

where $$X_{j:m}$$ denotes the jth order statistic in a sample of size m. Then $$\{X_{n(k)}:=X_{k:L_{n(k)}+k-1}\}$$ is called a sequence of kth lower record values of $$\{X_{n}, n\geq 1\}$$. The pdf of $$X_{n(k)}$$ is given by Dziubdziela and Kopocinski  as follows:

\begin{aligned} f_{n(k)}(x)=\frac{k^{n}}{(n-1)!}\bigl[F(x)\bigr]^{k-1} \bigl[\tilde{\varLambda }(x)\bigr]^{n-1}f(x). \end{aligned}
(1.3)

The cdf corresponding to (1.3) can be obtained as

\begin{aligned} F_{n(k)}(x) =\bigl[F(x)\bigr]^{k}\sum _{i=0}^{n-1} \frac{[k\tilde{\varLambda }(x)]^{i}}{i!}. \end{aligned}
(1.4)

Now, let X be a nonnegative absolutely continuous random variable with cdf F. Then Tahmasebi and Eskandarzadeh  defined the following extension of the CE of X:

\begin{aligned} {\mathcal{CE}}_{n,k}(X) =& \int _{0}^{+\infty }k\bigl[F_{n+1(k)}(x)-F_{n(k)}(x) \bigr]\,dx \\ =&\frac{k^{n+1}}{n!} \int _{0}^{\infty }\varphi _{n,k} \bigl(F_{X}(x)\bigr)\,dx \\ =&\frac{k^{n+1}}{n!} \int _{0}^{1} \frac{\varphi _{n,k}(u)}{f_{X}(F^{-1}_{X}(u))}\,du,\quad \text{for } n=1,2,\ldots, k\geq 1, \end{aligned}
(1.5)

where $$\varphi _{n,k}(u)=u^{k}(-\log u)^{n}\geq 0$$, $$0< u<1$$. Note that $$\varphi _{n,k}(0)=\varphi _{n,k}(1)=0$$. For $$k=1$$, Equation (1.5) reduces to the generalized cumulative entropy due to Kayal .
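For numerical work, the last expression in (1.5) is convenient: it only requires the density evaluated along the quantile function. The following minimal sketch (plain Python, midpoint rule; the helper name `ece_quantile` is ours, not from the paper) checks it against the standard uniform case, where $$f_{X}(F^{-1}_{X}(u))=1$$ and the ECE equals $$(k/(k+1))^{n+1}$$ (cf. Example 2.1(ii) with $$b=1$$):

```python
import math

def ece_quantile(qdens, n, k, N=200_000):
    """Midpoint-rule evaluation of Eq. (1.5):
    CE_{n,k}(X) = k^{n+1}/n! * int_0^1 u^k (-log u)^n / f(F^{-1}(u)) du,
    where qdens(u) = f(F^{-1}(u))."""
    s = 0.0
    for i in range(N):
        u = (i + 0.5) / N
        s += u**k * (-math.log(u))**n / qdens(u)
    return k**(n + 1) / math.factorial(n) * s / N

# Uniform(0,1): f(F^{-1}(u)) = 1 and CE_{n,k} = (k/(k+1))^{n+1}.
approx = ece_quantile(lambda u: 1.0, n=2, k=2)
exact = (2 / 3) ** 3
```

The same routine works for any absolutely continuous distribution once $$f(F^{-1}(u))$$ is supplied.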

Equation (1.5) defines a new CE based on the idea of the GCRE introduced by Psarrakos and Navarro ; it is obtained by relating the concept of CE to the mean time between lower k-record values and to the relevation transform (see Krakowski  and Baxter ). They called it the extended cumulative entropy (ECE). In reliability theory, the performance characteristics of coherent systems are of great importance. Accordingly, this paper is organized as follows. In Sect. 2, we present general properties of the ECE, including stochastic ordering, linear transformations, and bounds. In Sect. 3, we study $${\mathcal{CE}}_{n,k}(T)$$ when T is the lifetime of a coherent system with identically distributed components. In Sect. 4, we obtain some results on the conditional ECE of a system lifetime. In Sect. 5, we propose an extended cumulative past inaccuracy (ECPI) measure and study a symmetric measure of distance in coherent and mixed systems. Finally, in Sect. 6, empirical estimators of the ECE and ECPI are presented. Throughout this paper, the terms ‘increasing’ and ‘decreasing’ are used in a nonstrict sense.

## General properties on the ECE

In this section, we study some general properties of the ECE. We first present the following example.

### Example 2.1

Let X denote the lifetime of a system or a unit.

(i)

If X has the Fréchet distribution with $$F(x)=e^{\frac{-\theta }{x}}$$, $$x>0$$, $$\theta >0$$, then for $$n>1$$ we have $${\mathcal{CE}}_{n,k}(X)=\frac{k^{2}\theta }{n(n-1)}=k^{2}{ \mathcal{CE}}_{n,1}(X)$$.

(ii)

If X has a uniform distribution in $$(0, b)$$, then we have

\begin{aligned} {\mathcal{CE}}_{n,k}(X)=\frac{k}{k+1}{\mathcal{CE}}_{n-1,k}(X)= \biggl( \frac{k}{k+1} \biggr)^{2}{\mathcal{CE}}_{n-2,k}(X)=b \biggl( \frac{k}{k+1} \biggr)^{n+1}. \end{aligned}
(iii)

If X has an inverse Weibull distribution with cdf $$F(x)=\exp (-(\frac{\alpha }{x})^{\beta })$$, $$x>0$$, $$\alpha ,\beta >0$$, then it holds that

\begin{aligned} {\mathcal{CE}}_{n,k}(X)= \frac{\alpha k^{\frac{\beta +1}{\beta }}}{\beta n!}\varGamma \biggl( \frac{n\beta -1}{\beta }\biggr)=k^{\frac{\beta +1}{\beta }}{ \mathcal{CE}}_{n,1}(X), \end{aligned}

where $$\varGamma (\cdot)$$ is the complete gamma function.
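These closed forms can be verified numerically through the quantile form of (1.5). For the Fréchet cdf, $$F^{-1}(u)=-\theta /\log u$$ and $$f(F^{-1}(u))=u(\log u)^{2}/\theta $$; for the inverse Weibull, $$f(F^{-1}(u))=(\beta /\alpha )u(-\log u)^{(\beta +1)/\beta }$$. A sketch (the helper name is ours):

```python
import math

def ece_quantile(qdens, n, k, N=400_000):
    """Midpoint-rule evaluation of Eq. (1.5) in quantile form,
    with qdens(u) = f(F^{-1}(u))."""
    s = 0.0
    for i in range(N):
        u = (i + 0.5) / N
        s += u**k * (-math.log(u))**n / qdens(u)
    return k**(n + 1) / math.factorial(n) * s / N

# (i) Frechet, F(x) = exp(-theta/x): f(F^{-1}(u)) = u (log u)^2 / theta.
n, k, theta = 3, 2, 1.5
frechet = ece_quantile(lambda u: u * math.log(u)**2 / theta, n, k)
frechet_exact = k**2 * theta / (n * (n - 1))

# (iii) inverse Weibull: f(F^{-1}(u)) = (beta/alpha) u (-log u)^{(beta+1)/beta}.
n, k, alpha, beta = 2, 2, 1.0, 2.0
invweib = ece_quantile(
    lambda u: (beta / alpha) * u * (-math.log(u)) ** ((beta + 1) / beta), n, k)
invweib_exact = (alpha * k ** ((beta + 1) / beta)
                 / (beta * math.factorial(n)) * math.gamma((n * beta - 1) / beta))
```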

In the following, we state various results on the ECE, including basic properties such as stochastic orderings, bounds, the effect of linear transformations, and a two-dimensional representation of the ECE. The proofs follow the same lines as Kayal  and Di Crescenzo and Toomaj . Let us recall some stochastic orders; for details, see Shaked and Shanthikumar .

### Definition 2.1

Assume that X and Y are nonnegative random variables with cdfs F and G, respectively, then

1.

X is smaller than Y in the usual stochastic order (denoted by $$X\leq _{st}Y$$) if $$P(X\geq x)\leq P(Y\geq x)$$ for all x.

2.

X is smaller than Y in the hazard rate order, denoted by $$X \leq _{hr}Y$$, if $$\lambda _{X}(x)\geq \lambda _{Y}(x)$$ for all x, where $$\lambda _{X}(x)$$ and $$\lambda _{Y}(x)$$ are the failure rate functions of X and Y, respectively.

3.

X is smaller than Y in the dispersive order, denoted by $$X \leq _{\mathrm{disp}}Y$$, if $$f(F^{-1}(u))\geq g(G^{-1}(u))$$ for all $$u\in (0,1)$$, where $$F^{-1}$$ and $$G^{-1}$$ are right-continuous inverses of F and G, respectively.

4.

X is said to have decreasing failure rate (DFR) if $$\lambda _{X}(x)=\frac{f(x)}{\bar{F}(x)}$$ is decreasing in x.

5.

X is smaller than Y in the convex transform order, denoted by $$X \leq _{c}Y$$, if $$G^{-1}F(x)$$ is a convex function on the support of X.

6.

X is smaller than Y in the decreasing convex order, denoted by $$X \leq ^{dcx}Y$$, if $$\mathbb{E}(\phi (X))\leq \mathbb{E}(\phi (Y))$$ for all decreasing convex functions ϕ such that the expectations exist.

7.

X is smaller than Y in the star order, denoted by $$X \leq _{*}Y$$, if $$\frac{G^{-1}F(x)}{x}$$ is increasing in $$x\geq 0$$.

8.

X is smaller than Y in the superadditive order, denoted by $$X \leq _{\mathrm{su}}Y$$, if $$G^{-1}F(t+u)\geq G^{-1}F(t)+G^{-1}F(u)$$ for $$t\geq 0$$, $$u\geq 0$$.

### Theorem 2.1

Let X and Y be absolutely continuous nonnegative random variables with cdfs F and G, respectively. If $$X\leq _{\mathrm{disp}}Y$$, then for any $$k\geq 1$$ and $$n=1,2,\ldots$$ we have

\begin{aligned} {\mathcal{CE}}_{n,k}(X)\leq {\mathcal{CE}}_{n,k}(Y). \end{aligned}

### Proof

The proof is similar to the proof of Lemma 3 in Klein et al.  and hence it is omitted. □

### Proposition 2.1

If $$X\leq _{hr}Y$$ and X or Y is DFR, then

\begin{aligned} {\mathcal{CE}}_{n,k}(X)\leq {\mathcal{CE}}_{n,k}(Y). \end{aligned}

### Proof

If $$X\leq _{hr}Y$$ and X or Y is DFR, then $$X\leq _{\mathrm{disp}}Y$$. Therefore, from Theorem 2.1 the desired result follows. □

### Proposition 2.2

Let X and Y be two nonnegative random variables with pdfs f and g, respectively. If $$X\leq _{\mathrm{su}}Y$$ ($$X\leq _{*}Y$$ or $$X\leq _{c}Y$$), then $${\mathcal{CE}}_{n,k}(X)\leq {\mathcal{CE}}_{n,k}(Y)$$.

### Proof

If $$X\leq _{\mathrm{su}}Y$$ ($$X\leq _{*}Y$$ or $$X\leq _{c}Y$$), then $$X\leq _{\mathrm{disp}}Y$$ due to Ahmed et al. . Therefore, the desired result follows from Theorem 2.1. □

### Proposition 2.3

Let X be a nonnegative random variable with decreasing pdf f such that $$f(0)\leq 1$$. Then

\begin{aligned} {\mathcal{CE}}_{n,k}(X)\geq {\mathcal{CE}}_{n,k}(U), \end{aligned}

where $$U\sim \operatorname{Uniform}(0,1)$$ and $${\mathcal{CE}}_{n,k}(U)= (\frac{k}{k+1} )^{n+1}$$.

### Proof

The nonnegative random variable X has a decreasing pdf if and only if $$U\leq _{c} X$$, where $$U\sim \operatorname{Uniform}(0,1)$$ (see Shaked and Shanthikumar ). Hence, from Proposition 2.2 the desired result follows. □

### Proposition 2.4

Suppose that X and Y are two independent nonnegative random variables. If X and Y have log-concave densities, then

\begin{aligned} {\mathcal{CE}}_{n,k}(X+Y)\geq \max \bigl\{ {\mathcal{CE}}_{n,k}(X),{ \mathcal{CE}}_{n,k}(Y)\bigr\} . \end{aligned}

### Proof

The proof is similar to that of Theorem 3.2 of Di Crescenzo and Toomaj . □

### Proposition 2.5

Let X be a random variable with cdf F. Further, let $$Y=aX+b$$, where $$a>0$$ and $$b\geq 0$$. Then

(i)

$${\mathcal{CE}}_{n,k}(Y)=a{\mathcal{CE}}_{n,k}(X)$$,

(ii)

$${\mathcal{CE}}_{n,k}(X)=0$$ if and only if X is degenerate.

### Proposition 2.6

Let X be a nonnegative random variable with cdf F and reversed hazard rate $$r(z)$$, $$z>0$$. Then, for any $$k\geq 1$$ and $$n=1,2,\ldots$$, we have

\begin{aligned} {\mathcal{CE}}_{n,k}(X)=\frac{k^{n+1}}{n!} \int _{0}^{+\infty }r(z) \biggl\{ \int _{0}^{F(z)}\varphi _{n-1,k}(u)\,du \biggr\} \,dz. \end{aligned}

### Remark 2.1

Let X be a nonnegative random variable with cdf F, then we have

\begin{aligned} {\mathcal{CE}}_{n,k}(X)\leq k^{n+1}{\mathcal{CE}}_{n,1}(X), \end{aligned}

where $${\mathcal{CE}}_{n,1}(X)$$ is the generalized cumulative entropy (see Kayal ).

### Remark 2.2

Let X be a nonnegative absolutely continuous random variable. Then

$${\mathcal{CE}}_{n,k}(X)\geq \sum_{i=0}^{n} \frac{(-1)^{i}k^{n+1}}{i!(n-i)!} \int _{0}^{+\infty }\bigl[F(x)\bigr]^{i+k} \,dx.$$
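The lower bound in Remark 2.2 can be checked against the exact value for the standard uniform distribution, for which $$\int _{0}^{+\infty }[F(x)]^{i+k}\,dx=1/(i+k+1)$$ and $${\mathcal{CE}}_{n,k}(X)=(k/(k+1))^{n+1}$$. A small sketch (plain Python; variable names are ours):

```python
import math

n, k = 2, 2
# Exact ECE of Uniform(0,1): (k/(k+1))^{n+1}.
lhs = (k / (k + 1)) ** (n + 1)
# Lower bound of Remark 2.2 with int_0^infty F^{i+k} dx = 1/(i+k+1).
rhs = sum((-1) ** i * k ** (n + 1)
          / (math.factorial(i) * math.factorial(n - i) * (i + k + 1))
          for i in range(n + 1))
```

Here $$\text{lhs}=8/27$$ and $$\text{rhs}=2/15$$, so the bound holds with room to spare.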

Let X and Y denote the lifetimes of two components of a system with joint distribution function $$F(x,y)$$. Then the bivariate ECE is defined as follows:

\begin{aligned} {\mathcal{CE}}_{n,k}(X,Y) =&\frac{k^{n+1}}{n!} \int _{0}^{+\infty } \int _{0}^{+\infty }\bigl[F(x,y)\bigr]^{k} \bigl[\tilde{\varLambda }(x,y)\bigr]^{n}\,dx\,dy, \end{aligned}
(2.1)

where $$\tilde{\varLambda }(x,y)=-\log F(x,y)$$. Using the binomial expansion in (2.1), we obtain the following proposition.

### Proposition 2.7

Suppose that the nonnegative random variables X and Y are independent with joint distribution function $$F(x,y)$$. Then

\begin{aligned} {\mathcal{CE}}_{n,k}(X,Y)=\frac{1}{k}\sum _{i=0}^{n}{\mathcal{CE}}_{n-i,k}(X){ \mathcal{CE}}_{i,k}(Y). \end{aligned}
(2.2)
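Proposition 2.7 can be verified numerically for two independent Uniform(0,1) lifetimes, where $$F(x,y)=xy$$ and the ECE of a standard uniform equals $$(k/(k+1))^{i+1}$$; the left side of (2.2) is evaluated with a two-dimensional midpoint rule (a sketch; names are ours):

```python
import math

n, k = 2, 2
N = 600
h = 1.0 / N

# Left side of (2.2): (k^{n+1}/n!) * double integral of (xy)^k (-log xy)^n.
lhs = 0.0
for i in range(N):
    x = (i + 0.5) * h
    for j in range(N):
        v = x * (j + 0.5) * h
        lhs += v**k * (-math.log(v))**n
lhs *= k**(n + 1) / math.factorial(n) * h * h

# Right side of (2.2), using CE_{m,k}(Uniform(0,1)) = (k/(k+1))^{m+1}.
ce = lambda m: (k / (k + 1)) ** (m + 1)
rhs = sum(ce(n - i) * ce(i) for i in range(n + 1)) / k
```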

### Proposition 2.8

Let X be a symmetric random variable with respect to the finite mean $$\mu =\mathbb{E}(X)$$, i.e., $$F(x+\mu )=1-F(\mu -x)$$ for all $$x\in \mathbb{R}$$. Then

\begin{aligned} {\mathcal{CE}}_{n,k}(X)={\mathcal{E}}_{n,k}(X), \end{aligned}

where $${\mathcal{E}}_{n,k}(X)=\int _{0}^{+\infty }\frac{k^{n+1}}{n!}[\bar{F}_{X}(x)]^{k}[ \varLambda (x)]^{n}\,dx$$ is the generalized cumulative residual entropy (see Tahmasebi et al. ).

The concept of elasticity in life expectancy is an important feature in life tables. It should be noted that $$V_{1}(X)=\frac{{\mathcal{E}}_{1,1}(X)}{\mathbb{E}(X)}$$ is the elasticity in life expectancy with respect to proportional hazards models with survival function $$\bar{F}_{k}(x)=[\bar{F}(x)]^{k}$$ (see Leser  and Rao ). Recently, by using $${\mathcal{E}}_{n-1,k}(X)$$, Psarrakos and Toomaj  obtained the following approximation:

\begin{aligned} \frac{{\mathcal{E}}_{n-1,k}(X)-{\mathcal{E}}_{n-1,1}(X)}{{\mathcal{E}}_{n-1,1}(X)} \approx -V_{n}(X)k, \end{aligned}

where $$V_{n}(X)=n \frac{{\mathcal{E}}_{n,1}(X)}{{\mathcal{E}}_{n-1,1}(X)}-n+1$$ is the elasticity of expected interepoch intervals in a nonhomogeneous Poisson process (NHPP) with respect to a proportional hazards model.

Let us now investigate the ECE within the proportional reversed hazards model (PRHM). We recall that two random variables X and $${X}^{\ast }_{\theta }$$ satisfy the PRHM if their distribution functions are related by the following identity, for $$\theta >0$$:

$$F^{\ast }_{\theta }(x)=\bigl[F(x) \bigr]^{\theta },\quad x\in \mathbb{R}.$$
(2.3)

For instance, for some properties of such a model associated with aging notions and the reversed relevation transform, see Gupta and Gupta  and Di Crescenzo and Toomaj , respectively. In this case, we assume that X and $${X}^{\ast }_{\theta }$$ are nonnegative absolutely continuous random variables. Due to Equation (1.5), and noting that $$\tilde{\varLambda }^{\ast }_{\theta }(x)=-\log F^{\ast }_{\theta }(x)=\theta \tilde{\varLambda }(x)$$, we obtain the ECE measure of $$X^{\ast }_{\theta }$$ as follows, for $$\theta >0$$:

\begin{aligned} {\mathcal{CE}}_{n,k}\bigl(X^{\ast }_{\theta }\bigr)= \frac{\theta ^{n}k^{n+1}}{n!} \int _{0}^{+\infty }\bigl[F(x)\bigr]^{k\theta } \bigl[\tilde{\varLambda }(x)\bigr]^{n}\,dx. \end{aligned}

### Proposition 2.9

Let X and $${X}^{\ast }_{\theta }$$ be nonnegative absolutely continuous random variables satisfying the PRHM as specified in (2.3), with $$\theta >0$$. If $$\theta \geq (\leq ) 1$$, then we have

$${\mathcal{CE}}_{n,k}\bigl(X^{\ast }_{\theta }\bigr) \leq (\geq ) \theta ^{n}{ \mathcal{CE}}_{n,k}(X).$$

### Proof

Clearly, for $$\theta \geq (\leq )1$$, we have $$[F(x)]^{k\theta }\leq (\geq )[F(x)]^{k}$$ for all $$x\geq 0$$, and then the result follows immediately from (1.5). □
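Proposition 2.9 can be illustrated for X uniform on $$(0,1)$$, where $$F^{\ast }_{\theta }(x)=x^{\theta }$$ and the integral above evaluates in closed form to $$\theta ^{n}k^{n+1}/(k\theta +1)^{n+1}$$ (an elementary computation of ours, not taken from the paper). A quick numerical check of both the closed form and the inequality:

```python
import math

n, k, theta = 2, 2, 2.0

# Midpoint rule for CE_{n,k}(X*_theta) when X ~ Uniform(0,1), so that
# F*_theta(x) = x^theta on (0,1) and the integrand is x^{k*theta}(-log x)^n.
N = 200_000
s = 0.0
for i in range(N):
    x = (i + 0.5) / N
    s += x ** (k * theta) * (-math.log(x)) ** n
numeric = theta**n * k**(n + 1) / math.factorial(n) * s / N

closed = theta**n * k**(n + 1) / (k * theta + 1) ** (n + 1)  # our closed form
ce_x = (k / (k + 1)) ** (n + 1)                              # ECE of X itself
```

Since $$\theta =2\geq 1$$, the inequality $${\mathcal{CE}}_{n,k}(X^{\ast }_{\theta })\leq \theta ^{n}{\mathcal{CE}}_{n,k}(X)$$ should hold.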

## ECE of coherent systems

A system is said to be coherent if it does not have any irrelevant components and its structure function is monotone. Now, let the component lifetimes have the common distribution $$F_{X}$$. Suppose that T is the lifetime of a coherent system with m identically distributed (id) components, then its distribution function $$F_{T}$$ can be written as

\begin{aligned} F_{T}(t)=q\bigl(F_{X}(t)\bigr), \end{aligned}

where $$q:[0,1]\rightarrow [0,1]$$ is a distortion function that depends on the structure of the system and on the copula of the component lifetimes. Note that q is a continuous increasing function such that $$q(0)=0$$ and $$q(1)=1$$ (for more details on coherent systems, see Burkschat and Navarro  and Navarro et al. ). A special case of coherent systems is the k-out-of-n system, which fails when the kth component failure occurs. For example, for a 2-out-of-3 system with i.i.d. components, we have $$q(u)=3u^{2}-2u^{3}$$. Also, for a parallel system with lifetime $$T=\max (X_{1},X_{2},\ldots,X_{m})$$, we have $$q(u)=u^{m}$$. Hence, the ECE of the random lifetime T can be obtained as follows:

\begin{aligned} {\mathcal{CE}}_{n,k}(T) =& \int _{0}^{+\infty }\frac{k^{n+1}}{n!} \bigl[F_{T}(x)\bigr]^{k}\bigl[- \log F_{T}(x) \bigr]^{n}\,dx \\ =&\frac{k^{n+1}}{n!} \int _{0}^{+\infty }\varphi _{n,k} \bigl(F_{T}(x)\bigr)\,dx \\ =&\frac{k^{n+1}}{n!} \int _{0}^{+\infty }\varphi _{n,k}\bigl(q \bigl(F_{X}(x)\bigr)\bigr)\,dx \\ =&\frac{k^{n+1}}{n!} \int _{0}^{1} \frac{\varphi _{n,k}(q(u))}{f_{X}(F^{-1}_{X}(u))}\,du. \end{aligned}
(3.1)

For example, for a parallel system with m i.i.d. components uniformly distributed on $$(0,1)$$, we have

\begin{aligned} {\mathcal{CE}}_{n,k}(T)=m^{n} \biggl( \frac{k}{1+km} \biggr)^{n+1}\leq { \mathcal{CE}}_{n,k}(X)= \biggl(\frac{k}{k+1} \biggr)^{n+1}. \end{aligned}
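The closed form above follows from (3.1) with $$q(u)=u^{m}$$ and can be confirmed numerically (a sketch; the helper name `ece_parallel_uniform` is ours):

```python
import math

def ece_parallel_uniform(m, n, k, N=200_000):
    """ECE of T = max of m i.i.d. Uniform(0,1) lifetimes via (3.1):
    distortion q(u) = u^m and f(F^{-1}(u)) = 1."""
    s = 0.0
    for i in range(N):
        v = ((i + 0.5) / N) ** m          # q(u) at u = (i+0.5)/N
        s += v**k * (-math.log(v))**n
    return k**(n + 1) / math.factorial(n) * s / N

m, n, k = 3, 2, 2
approx = ece_parallel_uniform(m, n, k)
exact = m**n * (k / (1 + k * m)) ** (n + 1)   # closed form from the text
```

Setting $$m=1$$ recovers $${\mathcal{CE}}_{n,k}(X)=(k/(k+1))^{n+1}$$ for a single component.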

As an application of Equation (3.1), we have the following example.

### Example 3.1

We consider a coherent system with lifetime $$T=\max \{X_{1},\min \{X_{2},X_{3},X_{4}\}\}$$ and i.i.d. components having a common exponential distribution with mean θ. From (3.1) we obtain

\begin{aligned} {\mathcal{CE}}_{3,2}(T)=(0.3157)\theta ,\qquad {\mathcal{CE}}_{3,3}(T)=(0.5692) \theta . \end{aligned}

Note that $${\mathcal{CE}}_{3,2}(T)<{\mathcal{CE}}_{3,3}(T)$$. Now, if the system has dependent identical exponential components with an exchangeable copula C, then we have

\begin{aligned} {\mathcal{CE}}_{3,2}(T)=(2.66)\theta \int _{0}^{1} \frac{\varphi _{3,2}(q_{1}(u))}{1-u}\,du,\qquad { \mathcal{CE}}_{3,3}(T)=(13.5) \theta \int _{0}^{1}\frac{\varphi _{3,3}(q_{1}(u))}{1-u}\,du, \end{aligned}

where $$q_{1}(u)=3C(u,u,1,1)-3C(u,u,u,1)+C(u,u,u,u)$$. Assume that the component lifetimes are dependent with the Farlie–Gumbel–Morgenstern (FGM) copula as follows:

\begin{aligned} C(u_{1},u_{2},u_{3},u_{4})=u_{1}u_{2}u_{3}u_{4} \bigl[1+\alpha (1-u_{1}) (1-u_{2}) (1-u_{3}) (1-u_{4})\bigr],\quad -1\leq \alpha \leq 1. \end{aligned}

Then, for $$\alpha =\frac{1}{3}$$, we obtain $$q_{1}(u)=3u^{2}-3u^{3}+u^{4}[1+\frac{(1-u)^{4}}{3}]$$ and

\begin{aligned} {\mathcal{CE}}_{3,2}(T)=(0.3152)\theta ,\qquad {\mathcal{CE}}_{3,3}(T)=(0.5684) \theta . \end{aligned}

Finally, we see that $${\mathcal{CE}}_{3,2}(T)$$ and $${\mathcal{CE}}_{3,3}(T)$$ decrease when the dependence parameter α increases.
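The figures in this example can be reproduced by numerical integration of (3.1) (a sketch, not the authors' code; helper names are ours). For exponential components with mean θ we have $$f_{X}(F^{-1}_{X}(u))=(1-u)/\theta $$, and in the independent case the distortion reduces to $$q_{1}(u)=3u^{2}-3u^{3}+u^{4}$$, i.e., the FGM distortion with $$\alpha =0$$; we take $$\theta =1$$:

```python
import math

def ece_system_exp(q, n, k, theta=1.0, N=400_000):
    """ECE of a coherent system via (3.1) for exponential components with
    mean theta, where f_X(F_X^{-1}(u)) = (1 - u)/theta."""
    s = 0.0
    for i in range(N):
        u = (i + 0.5) / N
        v = q(u)
        s += v**k * (-math.log(v))**n * theta / (1.0 - u)
    return k**(n + 1) / math.factorial(n) * s / N

alpha = 1.0 / 3.0
q_iid = lambda u: 3*u**2 - 3*u**3 + u**4                          # alpha = 0
q_fgm = lambda u: 3*u**2 - 3*u**3 + u**4 * (1 + alpha*(1 - u)**4)

c32 = ece_system_exp(q_iid, 3, 2)       # ~ 0.3157 for theta = 1
c33 = ece_system_exp(q_iid, 3, 3)       # ~ 0.5692
c32_dep = ece_system_exp(q_fgm, 3, 2)   # slightly smaller than c32
```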

### Proposition 3.1

Let T be the lifetime of a coherent system with i.d. components and with distortion function q. If $$\varphi _{n,k}(q(u))\geq (\leq )\varphi _{n,k}(u)$$ for all $$u\in (0,1)$$, then we have

\begin{aligned} {\mathcal{CE}}_{n,k}(T)\geq (\leq ){\mathcal{CE}}_{n,k}(X). \end{aligned}

### Proposition 3.2

Assume that the components have cdf $$F_{X}$$, pdf $$f_{X}$$, and support S. Let T be the lifetime of a coherent system with i.d. components and with distortion function q.

(i)

If $$f(x)\leq M$$ for all $$x\in S$$, then

\begin{aligned} {\mathcal{CE}}_{n,k}(T)\geq \frac{k^{n+1}}{M n!} \int _{0}^{1}\varphi _{n,k} \bigl(q(u)\bigr)\,du. \end{aligned}
(ii)

If $$f(x)\geq L>0$$ for all $$x\in S$$, then

\begin{aligned} {\mathcal{CE}}_{n,k}(T)\leq \frac{k^{n+1}}{L n!} \int _{0}^{1}\varphi _{n,k} \bigl(q(u)\bigr)\,du. \end{aligned}

### Example 3.2

(i)

Let T be the lifetime of a coherent system with i.d. components having an exponential distribution with mean θ. The density is decreasing with $$f(x)\leq M=\frac{1}{\theta }$$, and hence, by part (i) of Proposition 3.2,

\begin{aligned} {\mathcal{CE}}_{n,k}(T)\geq \frac{\theta k^{n+1}}{ n!} \int _{0}^{1} \varphi _{n,k} \bigl(q(u)\bigr)\,du. \end{aligned}
(ii)

Let T be the lifetime of a coherent system with i.d. components having a Pareto type II distribution with cdf $$F(x)=1- (\frac{\beta }{\beta +x} )^{\alpha }$$, $$x>0$$. The density is decreasing with $$f(x)\leq M=f(0)=\frac{\alpha }{\beta }$$, and hence

\begin{aligned} {\mathcal{CE}}_{n,k}(T)\geq \frac{\beta k^{n+1}}{\alpha n!} \int _{0}^{1}\varphi _{n,k} \bigl(q(u)\bigr)\,du. \end{aligned}

### Proposition 3.3

Suppose that T is the lifetime of a coherent system with i.d. components and with distortion function q. Let $$\varphi _{n,k}(u)=u^{k}[-\log (u)]^{n}$$. Then

\begin{aligned} B_{1,n}{\mathcal{CE}}_{n,k}(X_{1})\leq { \mathcal{CE}}_{n,k}(T)\leq B_{2,n}{ \mathcal{CE}}_{n,k}(X_{1}), \end{aligned}

where $$B_{1,n}=\inf_{u\in (0,1)} ( \frac{\varphi _{n,k}(q(u))}{\varphi _{n,k}(u)} )$$ and $$B_{2,n}=\sup_{u\in (0,1)} ( \frac{\varphi _{n,k}(q(u))}{\varphi _{n,k}(u)} )$$.

### Proof

The proof is similar to the proof of Proposition 3 of Calì et al.  and hence it is omitted. □

For example, if the distortion function of a 2-out-of-3 system with i.i.d. components is $$q(u)=3u^{2}-2u^{3}$$, then we obtain

\begin{aligned} 0 \leq {\mathcal{CE}}_{3,2}(T) \leq 1.15{\mathcal{CE}}_{3,2}(X_{1}) \end{aligned}

and

\begin{aligned} 0\leq {\mathcal{CE}}_{3,3}(T)\leq 1.03 {\mathcal{CE}}_{3,3}(X_{1}). \end{aligned}
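The constants 1.15 and 1.03 in the bounds above are approximations of the suprema $$B_{2,3}$$ from Proposition 3.3 for $$k=2$$ and $$k=3$$; they can be recovered by a grid search (a sketch; function names are ours):

```python
import math

def b2(q, n, k, N=100_000):
    """Grid approximation of B_{2,n} = sup_u phi_{n,k}(q(u)) / phi_{n,k}(u)."""
    best = 0.0
    for i in range(1, N):
        u = i / N
        v = q(u)
        r = (v**k * (-math.log(v))**n) / (u**k * (-math.log(u))**n)
        best = max(best, r)
    return best

q23 = lambda u: 3*u**2 - 2*u**3        # 2-out-of-3 distortion
b_k2 = b2(q23, n=3, k=2)               # ~ 1.15, attained near u = 0.38
b_k3 = b2(q23, n=3, k=3)               # ~ 1.03, attained near u = 0.44
```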

In Table 1, we give the distortion functions for all the coherent systems with 1–4 i.i.d. components. Also, in Table 2, we give $${\mathcal{CE}}_{n,2}(T)$$ for these systems when the components have a uniform distribution in $$(0,1)$$.

In the following proposition, we compare the ECE of two systems with distinct lifetimes.

### Proposition 3.4

Suppose that $$T_{1}$$ and $$T_{2}$$ are the lifetimes of two coherent systems with i.d. components and with distortion functions $$q_{1}$$ and $$q_{2}$$, respectively. Let $$\varphi _{n,k}(u)=u^{k}[-\log (u)]^{n}$$. Then

\begin{aligned} D_{1,n} {\mathcal{CE}}_{n,k}(T_{1})\leq { \mathcal{CE}}_{n,k}(T_{2}) \leq D_{2,n} { \mathcal{CE}}_{n,k}(T_{1}), \end{aligned}

where $$D_{1,n}=\inf_{u\in (0,1)} ( \frac{\varphi _{n,k}(q_{2}(u))}{\varphi _{n,k}(q_{1}(u))} )$$ and $$D_{2,n}=\sup_{u\in (0,1)} ( \frac{\varphi _{n,k}(q_{2}(u))}{\varphi _{n,k}(q_{1}(u))} )$$.

It is clear that if $$D_{2,n}\leq 1$$, then $${\mathcal{CE}}_{n,k}(T_{2})\leq {\mathcal{CE}}_{n,k}(T_{1})$$. Now consider two coherent systems with i.i.d. components. Suppose that $$T_{1}=X_{2:2}=\max (X_{1},X_{2})$$ is the lifetime of a two-component parallel system with $$q_{1}(u)=u^{2}$$ and that $$T_{2}$$ is the lifetime of a 2-out-of-3 system with $$q_{2}(u)=3u^{2}-2u^{3}$$; then from the previous proposition we obtain

\begin{aligned} {\mathcal{CE}}_{2,2}(T_{2})\leq 9 {\mathcal{CE}}_{2,2}(T_{1}), \end{aligned}

since $$D_{2,2}=\lim_{u\rightarrow 0^{+}} (\frac{q_{2}(u)}{q_{1}(u)} )^{2}=9$$.

In the following example, we consider a parallel system with dependent and identically distributed (d.i.d.) components and obtain bounds for $${\mathcal{CE}}_{n,k}(T)$$.

### Example 3.3

Let $$T=\max (X_{1},X_{2})$$ be the lifetime of a parallel system with d.i.d. components. If the component lifetimes are dependent with the FGM copula as

\begin{aligned} C(u_{1},u_{2})=u_{1}u_{2} \bigl[1+\alpha (1-u_{1}) (1-u_{2})\bigr], \quad 0 \leq u_{1},u_{2}\leq 1, -1\leq \alpha \leq 1, \end{aligned}

then $$q(u)=u^{2}[1+\alpha (1-u)^{2}]$$. So, from Proposition 3.3, we obtain

\begin{aligned} {\mathcal{CE}}_{3,2}(T)\leq 8 {\mathcal{CE}}_{3,2}(X_{1}). \end{aligned}

Also, if $$L\leq f(x)\leq M$$, then from Proposition 3.2 we obtain

\begin{aligned} \frac{0.2}{M}\leq {\mathcal{CE}}_{3,2}(T)\leq \frac{0.2}{L}. \end{aligned}

### Example 3.4

Suppose that $$T=\max (X_{1},X_{2})$$ is the lifetime of a parallel system with d.i.d. components. If the component lifetimes are dependent with the Clayton–Oakes copula as follows:

\begin{aligned} C(u_{1},u_{2})=\frac{u_{1}u_{2}}{u_{1}+u_{2}-u_{1}u_{2}},\quad 0 \leq u_{1},u_{2}\leq 1, \end{aligned}

then $$q(u)=\frac{u}{2-u}$$. Hence, from Proposition 3.3, we obtain

\begin{aligned} {\mathcal{CE}}_{4,2}(T)\leq 16 {\mathcal{CE}}_{4,2}(X_{1}). \end{aligned}

Also, if $$L\leq f(x)\leq M$$, then from Proposition 3.2 we obtain

\begin{aligned} \frac{0.17}{M}\leq {\mathcal{CE}}_{4,2}(T)\leq \frac{0.17}{L}. \end{aligned}

An application of (3.1) is the comparison of the ECEs of two coherent systems that have the same structure but different i.d. component lifetimes. Thus we have the following proposition.

### Proposition 3.5

Let $$T_{1}$$ and $$T_{2}$$ be the lifetimes of two coherent systems with the same structure and with i.d. components having common distributions F and G, respectively. If $$X\leq _{\mathrm{disp}} Y$$, then for any $$k\geq 1$$ and $$n = 1,2,\ldots$$, we have

\begin{aligned} {\mathcal{CE}}_{n,k}(T_{1})\leq {\mathcal{CE}}_{n,k}(T_{2}). \end{aligned}

### Proof

Since both systems have a common distortion function q and the same structure, the proof follows from Equation (3.1) and the assumption on the dispersive order. □

### Corollary 3.1

Under the assumptions of Proposition 3.5, if $$X\leq _{hr}Y$$ and X or Y is DFR, then $${\mathcal{CE}}_{n,k}(T_{1})\leq {\mathcal{CE}}_{n,k}(T_{2})$$.

### Corollary 3.2

Under the assumptions of Proposition 3.5, if $$X\leq _{\mathrm{su}}Y$$ ($$X\leq _{*}Y$$ or $$X\leq _{c}Y$$), then $${\mathcal{CE}}_{n,k}(T_{1})\leq {\mathcal{CE}}_{n,k}(T_{2})$$.

### Theorem 3.1

Let $$T_{1}$$ and $$T_{2}$$ be the lifetimes of two coherent systems with the same structure and with i.d. components having common distributions F and G, respectively. If $${\mathcal{CE}}_{n,k}(X)\leq {\mathcal{CE}}_{n,k}(Y)$$ and

\begin{aligned} \inf_{u\in A_{1}} \biggl[ \frac{\varphi _{n,k}(q(u))}{\varphi _{n,k}(u)} \biggr]\geq \sup _{u \in A_{2}} \biggl[\frac{\varphi _{n,k}(q(u))}{\varphi _{n,k}(u)} \biggr], \end{aligned}

for $$A_{1}=\{u\in [0,1]: f(F^{-1}(u))>g(G^{-1}(u))\}$$ and $$A_{2}=\{u\in [0,1]: f(F^{-1}(u))\leq g(G^{-1}(u))\}$$, then $${\mathcal{CE}}_{n,k}(T_{1})\leq {\mathcal{CE}}_{n,k}(T_{2})$$.

### Proof

Since $${\mathcal{CE}}_{n,k}(X)\leq {\mathcal{CE}}_{n,k}(Y)$$, we have from (1.5) that

\begin{aligned} {\mathcal{CE}}_{n,k}(Y)-{\mathcal{CE}}_{n,k}(X)= \frac{k^{n+1}}{n!} \int _{0}^{1}\Delta (u)\,du\geq 0, \end{aligned}

where $$\Delta (u)=\frac{\varphi _{n,k}(u)}{g_{Y}(G^{-1}_{Y}(u))}- \frac{\varphi _{n,k}(u)}{f_{X}(F^{-1}_{X}(u))}$$. Note that $$\Delta (u)\geq 0$$ on $$A_{1}$$ and $$\Delta (u)\leq 0$$ on $$A_{2}$$. It follows from (3.1) and the assumption $$\inf_{u\in A_{1}}\frac{\varphi _{n,k}(q(u))}{\varphi _{n,k}(u)}\geq \sup_{u\in A_{2}}\frac{\varphi _{n,k}(q(u))}{\varphi _{n,k}(u)}$$ that

\begin{aligned} &{\mathcal{CE}}_{n,k}(T_{2})-{\mathcal{CE}}_{n,k}(T_{1}) \\ &\quad = \frac{k^{n+1}}{n!} \int _{0}^{1}\frac{\varphi _{n,k}(q(u))}{\varphi _{n,k}(u)}\Delta (u) \,du \\ &\quad =\frac{k^{n+1}}{n!} \int _{A_{1}} \frac{\varphi _{n,k}(q(u))}{\varphi _{n,k}(u)}\Delta (u) \,du+ \frac{k^{n+1}}{n!} \int _{A_{2}} \frac{\varphi _{n,k}(q(u))}{\varphi _{n,k}(u)}\Delta (u) \,du \\ &\quad \geq \inf_{u\in A_{1}} \frac{\varphi _{n,k}(q(u))}{\varphi _{n,k}(u)} \int _{A_{1}} \frac{k^{n+1}}{n!}\Delta (u) \,du+\sup_{u\in A_{2}} \frac{\varphi _{n,k}(q(u))}{\varphi _{n,k}(u)} \int _{A_{2}} \frac{k^{n+1}}{n!}\Delta (u) \,du \\ &\quad \geq \sup_{u\in A_{2}} \frac{\varphi _{n,k}(q(u))}{\varphi _{n,k}(u)} \int _{0}^{1} \frac{k^{n+1}}{n!}\Delta (u) \,du \\ &\quad \geq 0. \end{aligned}
(3.2)

So, the proof is completed. □

### Remark 3.1

Under the assumptions of Theorem 3.1, if q is strictly increasing in $$(0,1)$$, then $$X\leq _{\mathrm{disp}}Y$$ if and only if $$T_{1}\leq _{\mathrm{disp}}T_{2}$$.

### Proof

The proof follows from Theorem 2.9 of Navarro et al. . □

### Remark 3.2

Let T be the lifetime of a coherent system with cdf $$F_{T}$$. Then we have

\begin{aligned} {\mathcal{CE}}(T)\geq D^{*}_{G}(T), \end{aligned}

where $$D^{*}_{G}(T)=\frac{D_{G}(T)}{2}=\int _{0}^{+\infty }F_{T}(x)\bar{F}_{T}(x)\,dx$$, and $$D_{G}(T)$$ is the Gini mean difference as a dispersion measure.

## Conditional ECE

Suppose that X is the lifetime of a system on a probability space $$(\varOmega ,{\mathcal{F}},\mathbb{P})$$ such that $$\mathbb{E}|X|<\infty$$. We denote by $$\mathbb{E}(X|{\mathcal{G}})$$ the conditional expectation of X given a sub-σ-field $${\mathcal{G}}\subset {\mathcal{F}}$$. Here, we define the conditional ECE of X and discuss some of its properties.

### Definition 4.1

Let X be the lifetime of a system with cdf $$F_{X}$$. Then, for a given σ-field $${\mathcal{F}}$$, the conditional ECE is defined as follows:

\begin{aligned} {\mathcal{CE}}_{n,k}(X|{\mathcal{F}}) =&\frac{k^{n+1}}{n!} \int _{ \mathbb{R^{+}}}\bigl[\mathbb{P}(X\leq x| { \mathcal{F}})\bigr]^{k}\bigl[-\log \bigl( \mathbb{P}(X\leq x|{ \mathcal{F}})\bigr)\bigr]^{n}\,dx \\ =&\frac{k^{n+1}}{n!} \int _{\mathbb{R^{+}}} \bigl(\mathbb{E}[I_{(X \leq x)}| {\mathcal{F}}] \bigr)^{k}\bigl[-\log \bigl(\mathbb{E}[I_{(X\leq x)}|{ \mathcal{F}}]\bigr)\bigr]^{n}\,dx. \end{aligned}

### Lemma 4.1

Let X be the lifetime of a coherent system with i.d. components. If $${\mathcal{F}}=\{\emptyset ,\varOmega \}$$, then $${\mathcal{CE}}_{n,k}(X|{\mathcal{F}})={\mathcal{CE}}_{n,k}(X)$$.

The following proposition says that the conditional ECE has the “super-martingale property”.

### Proposition 4.1

Let $$X\in L^{p}$$ for some $$p>2$$. Then, for σ-fields $${\mathcal{G}}\subset {\mathcal{F}}$$,

\begin{aligned} \mathbb{E}\bigl({\mathcal{CE}}_{n,k}(X|{\mathcal{F}})|{\mathcal{G}} \bigr)\leq { \mathcal{CE}}_{n,k}(X|{\mathcal{G}}). \end{aligned}
(4.1)

### Proof

The proof follows by applying Jensen's inequality to the concave function $$t^{k}(-\log t)^{n}$$, $$0< t<1$$, as follows:

\begin{aligned} \mathbb{E}\bigl({\mathcal{CE}}_{n,k}(X|{\mathcal{F}})|{\mathcal{G}} \bigr) =&\frac{k^{n+1}}{n!} \int _{\mathbb{R^{+}}}\mathbb{E} \bigl[\bigl( \mathbb{P}(X\leq x| { \mathcal{F}})\bigr)^{k}\bigl[-\log \mathbb{P}(X \leq x|{\mathcal{F}})\bigr]^{n}|{ \mathcal{G}} \bigr]\,dx \\ \leq &\frac{k^{n+1}}{n!} \int _{\mathbb{R^{+}}} \bigl(\mathbb{E}\bigl[ \mathbb{P}(X\leq x| {\mathcal{F}})|{\mathcal{G}}\bigr] \bigr)^{k} \bigl[-\log \mathbb{E}\bigl[\mathbb{P}(X\leq x|{\mathcal{F}})|{\mathcal{G}}\bigr] \bigr]^{n}\,dx \\ =&\frac{k^{n+1}}{n!} \int _{\mathbb{R^{+}}} \bigl(\mathbb{E}\bigl[ \mathbb{E}(I_{(X\leq x)}| {\mathcal{F}})|{\mathcal{G}}\bigr] \bigr)^{k}\bigl[- \log \mathbb{E}\bigl[\mathbb{E}(I_{(X\leq x)}|{ \mathcal{F}})|{\mathcal{G}}\bigr]\bigr]^{n}\,dx \\ =&\frac{k^{n+1}}{n!} \int _{\mathbb{R^{+}}} \bigl[\mathbb{E}(I_{(X \leq x)}|{ \mathcal{G}}) \bigr]^{k}\bigl[-\log \mathbb{E}(I_{(X\leq x)}|{ \mathcal{G}})\bigr]^{n}\,dx \\ =&\frac{k^{n+1}}{n!} \int _{\mathbb{R^{+}}} \bigl[\mathbb{P}(X \leq x|{ \mathcal{G}}) \bigr]^{k}\bigl[-\log \mathbb{P}(X\leq x|{ \mathcal{G}}) \bigr]^{n}\,dx \\ =&{\mathcal{CE}}_{n,k}(X|{\mathcal{G}}). \end{aligned}

□

From the Markov property for the lifetime random variables $$T_{1}$$, $$T_{2}$$, and $$T_{3}$$, we have the following lemma.

### Lemma 4.2

If $$T_{1}\rightarrow T_{2}\rightarrow T_{3}$$ is a Markov chain, then

(i)

$${\mathcal{CE}}_{n,k}(T_{3}|T_{2},T_{1})={\mathcal{CE}}_{n,k}(T_{3}|T_{2})$$;

(ii)

$$\mathbb{E}[{\mathcal{CE}}_{n,k}(T_{3}|T_{2})]\leq \mathbb{E}[{ \mathcal{CE}}_{n,k}(T_{3}|T_{1})]$$.

### Proof

(i) By using the Markov property and the definition of $${\mathcal{CE}}_{n,k}(T_{3}|T_{2},T_{1})$$, the result follows.

(ii) Let $${\mathcal{G}}=\sigma (T_{1})$$ and $${\mathcal{F}}=\sigma (T_{1},T_{2})$$, then from (4.1) we have

\begin{aligned} \mathbb{E}\bigl[{\mathcal{CE}}_{n,k}(T_{3}|T_{1}) \bigr] \geq &\mathbb{E}\bigl( \mathbb{E}\bigl[{\mathcal{CE}}_{n,k}(T_{3}|T_{1},T_{2})|T_{1} \bigr]\bigr) \\ =&\mathbb{E}\bigl[{\mathcal{CE}}_{n,k}(T_{3}|T_{1},T_{2}) \bigr] \\ =&\mathbb{E}\bigl[{\mathcal{CE}}_{n,k}(T_{3}|T_{2}) \bigr], \end{aligned}

and the result follows. □

### Theorem 4.1

Let $$X\in L^{p}$$ for some $$p>2$$ be the lifetime of a system, and let $${\mathcal{F}}$$ be a σ-field. Then $$\mathbb{E}({\mathcal{CE}}_{n,k}(X|{\mathcal{F}}))=0$$ if and only if X is $${\mathcal{F}}$$-measurable.

### Proof

Suppose that $$\mathbb{E}({\mathcal{CE}}_{n,k}(X|{\mathcal{F}}))=0$$; since $${\mathcal{CE}}_{n,k}(X|{\mathcal{F}})$$ is nonnegative, this gives $${\mathcal{CE}}_{n,k}(X|{\mathcal{F}})=0$$ almost surely. Now, using the definition of $${\mathcal{CE}}_{n,k}(X|{\mathcal{F}})$$, we conclude that $$\mathbb{E}(I_{(X\leq x)}|{\mathcal{F}})=0$$ or 1. Hence, using relation (24) of Rao et al. , X is $${\mathcal{F}}$$-measurable.

Conversely, suppose that X is $${\mathcal{F}}$$-measurable. Again using relation (24) of Rao et al. , we have $$P(X\leq x|{\mathcal{F}})=0$$ or 1 for almost all $$x\in \mathbb{R^{+}}$$, and the result follows. □

### Remark 4.1

Let $$X>0$$ be a random variable, and let $$\phi :\mathbb{R}^{+}\rightarrow \mathbb{R}^{+}$$ be strictly convex ($${\phi ''}>0$$) or strictly concave ($${\phi ''}<0$$). If $$\mathbb{E}(\phi (X))=\phi (\mathbb{E}(X))$$, then $$X=\mathbb{E}(X)$$ almost surely.

### Theorem 4.2

For any random variable X and any σ-field $${\mathcal{F}}$$, we have

\begin{aligned} \mathbb{E}\bigl({\mathcal{CE}}_{n,k}(X|{\mathcal{F}})\bigr)\leq { \mathcal{CE}}_{n,k}(X), \end{aligned}
(4.2)

and equality holds if and only if X is independent of $${\mathcal{F}}$$.

### Proof

Inequality (4.2) follows from (4.1) by taking $${\mathcal{F}}=\{\emptyset ,\varOmega \}$$. Now, suppose that X is independent of $${\mathcal{F}}$$; then clearly

\begin{aligned} \mathbb{P}(X\leq x| {\mathcal{F}})=\mathbb{P}(X\leq x). \end{aligned}
(4.3)

Using Definition 4.1 and (4.3), we have

\begin{aligned} \mathbb{E}\bigl({\mathcal{CE}}_{n,k}(X|{\mathcal{F}})\bigr)= { \mathcal{CE}}_{n,k}(X). \end{aligned}

Conversely, suppose that equality holds in (4.2). Set $$W:=\mathbb{P}(X\leq x| {\mathcal{F}})$$; since $$\phi (w)=w^{k}[-\log w]^{n}$$ is strictly concave and $$\mathbb{E}( \phi (W))=\phi (\mathbb{E}(W))$$, we have $$\mathbb{P}(X\leq x| {\mathcal{F}})=\mathbb{P}(X\leq x)$$, i.e., X is independent of $${\mathcal{F}}$$. □

## Extended cumulative past inaccuracy measure

Let X and Y be two nonnegative random variables with distribution functions $$F(x)$$, $$G(x)$$, respectively. If $$F (x)$$ is the actual cdf corresponding to the observations and $$G(x)$$ is the cdf assigned by the experimenter, then the cumulative past inaccuracy measure between $$F(x)$$ and $$G(x)$$ is defined by Thapliyal and Taneja  as follows:

$$I(F,G)=- \int _{0}^{+\infty }F(x)\log G(x)\,dx.$$
(5.1)

When $$G(x)= F(x)$$, (5.1) reduces to the cumulative entropy studied by Di Crescenzo and Longobardi .

In analogy with the measure defined in Equation (5.1), we now introduce the extended cumulative past inaccuracy (ECPI) defined as

$$I_{n,k}(F,G)=\frac{k^{n+1}}{n!} \int _{0}^{+\infty }\bigl[F(x)\bigr]^{k} \bigl[-\log G(x)\bigr]^{n}\,dx.$$
(5.2)
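Definition (5.2) can be checked against closed forms. For instance, if X is uniform on $$(0,1)$$ (so $$F(x)=x$$) and Y is the maximum of two independent uniforms (so $$G(x)=x^{2}$$), then $$\int _{0}^{1}x^{k}(-\log x)^{n}\,dx=n!/(k+1)^{n+1}$$ gives $$I_{n,k}(F,G)=2^{n}k^{n+1}/(k+1)^{n+1}$$. The following Python sketch (the helper name `ecpi` is ours, not from the paper) verifies this numerically:

```python
import numpy as np
from math import factorial

def ecpi(F, G, n, k, grid):
    """Numerically evaluate the ECPI (5.2):
    I_{n,k}(F,G) = k^{n+1}/n! * int [F(x)]^k [-log G(x)]^n dx."""
    u = grid
    y = F(u)**k * (-np.log(G(u)))**n
    # trapezoidal rule on the supplied grid
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(u))
    return k**(n + 1) / factorial(n) * integral

# X uniform on (0,1): F(x) = x;  Y the maximum of two uniforms: G(x) = x^2.
# Closed form: I_{n,k} = 2^n * k^(n+1) / (k+1)^(n+1).
x = np.linspace(1e-9, 1.0, 200001)
val = ecpi(lambda t: t, lambda t: t**2, n=1, k=1, grid=x)   # exact value is 1/2
```

The same routine can be pointed at any pair of cdfs on a common grid, which is convenient for checking the ordering results below.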

As applications of Equation (5.2), we have the following properties.

### Theorem 5.1

Let X and Y be two nonnegative absolutely continuous random variables with cdfs F and G, respectively. Then we have

$$I_{n,k}(F,G)=\mathbb{E} \bigl[\tilde{h}_{n,k}(X) \bigr],$$
(5.3)

where

$$\tilde{h}_{n,k}(x)=\frac{k^{n+1}}{n!} \int _{x}^{+\infty }\bigl[F(z)\bigr]^{k-1} \bigl[- \log G(z)\bigr]^{n}\,dz.$$

### Proof

From Equation (5.2) and Fubini's theorem, it follows that

\begin{aligned} I_{n,k}(F,G) =&\frac{k^{n+1}}{n!} \int _{0}^{+\infty }\bigl[F(x)\bigr]^{k} \bigl[-\log G(x)\bigr]^{n}\,dx \\ =&\frac{k^{n+1}}{n!} \int _{0}^{+\infty } \biggl[ \int _{0}^{x}\,dF(t) \biggr] \bigl[F(x) \bigr]^{k-1}\bigl[-\log G(x)\bigr]^{n}\,dx \\ =& \int _{0}^{+\infty } \biggl[ \int _{t}^{\infty } \frac{k^{n+1}[F(x)]^{k-1}[-\log G(x)]^{n}}{n!}\,dx \biggr] \,dF(t)= \int _{0}^{+ \infty }\tilde{h}_{n,k}(t) \,dF(t). \end{aligned}

Therefore, the stated results follow. □

### Proposition 5.1

Let X and Y be nonnegative random variables with distribution functions F and G, respectively. If $$X\leq _{st}Y$$, then for $$n,k\geq 1$$ we have

\begin{aligned} I_{n,k}(G,F)\leq {\mathcal{CE}}_{n,k}(X)\leq I_{n,k}(F,G). \end{aligned}

### Proof

The proof is similar to that of Proposition 7 of Calì et al. . □

### Proposition 5.2

Let X and Y be nonnegative random variables with distribution functions F and G, respectively. If $$X\leq _{dcx}Y$$, then for $$n,k\geq 1$$ we have

\begin{aligned} {\mathcal{CE}}_{n,k}(X)\leq I_{n,k}(G,F). \end{aligned}

### Proof

Since $${\mathcal{CE}}_{n,k}(X)$$ and $$I_{n,k}(G,F)$$ can be expressed as mean values of $$\tilde{h}_{n,k}(\cdot)$$, the proof follows by noting that $$\tilde{h}_{n,k}(\cdot)$$ is a decreasing convex function. □

Park et al.  have recently suggested an extension of KL information in terms of the distribution function, which can be called cumulative Kullback–Leibler information (CKL), as follows:

\begin{aligned} \operatorname{CKL}(F,G)=\operatorname{CKL}(X,Y)=I_{1,1}(F,G)-{ \mathcal{CE}}(X)+\mathbb{E}(X)- \mathbb{E}(Y). \end{aligned}

This measure of information was also studied by Di Crescenzo and Longobardi . In the following, we define a symmetric version of $$\operatorname{CKL}(X,Y)$$.

### Definition 5.1

Let X and Y be nonnegative random variables with distribution functions F and G, respectively. Then the symmetric CKL is defined as follows:

\begin{aligned} \operatorname{SCKL}(X,Y)=\operatorname{CKL}(X,Y)+\operatorname{CKL}(Y,X)= \int _{0}^{+\infty }\bigl[F(x)-G(x)\bigr]\log \frac{F(x)}{G(x)}\,dx. \end{aligned}

Note that $$\operatorname{SCKL}(X,Y)\geq 0$$ and that it is symmetric in its arguments. When comparing systems pairwise, we can determine which system has a distribution closer to that of the parallel system or of the series system. Accordingly, in the following we propose a symmetric measure of distance for mixed systems.
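For a quick illustration of Definition 5.1, take X uniform on $$(0,1)$$ and Y the maximum of two independent uniforms, so $$F(x)=x$$ and $$G(x)=x^{2}$$. Then $$\operatorname{SCKL}(X,Y)=\int _{0}^{1}(x-x^{2})(-\log x)\,dx=1/4-1/9=5/36$$. A short Python sketch (the helper name `sckl` is ours) confirms this numerically:

```python
import numpy as np

def sckl(F, G, u):
    """SCKL(X,Y) = int (F - G) log(F / G) dx for cdf values F, G on the grid u."""
    y = (F - G) * np.log(F / G)
    # trapezoidal rule; endpoints are kept away from 0 and 1 to avoid log(0)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(u))

u = np.linspace(1e-9, 1.0 - 1e-9, 200001)
# X uniform on (0,1), Y the maximum of two uniforms: F(x) = x, G(x) = x^2.
# Closed form: SCKL = int_0^1 (x - x^2)(-log x) dx = 5/36.
val = sckl(u, u**2, u)
```

Since the integrand $$(F-G)\log (F/G)$$ is nonnegative, the computed value is positive, in line with $$\operatorname{SCKL}(X,Y)\geq 0$$.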

### Lemma 5.1

Let X, Y, and Z be random variables with cdfs F, G, and H, respectively. If $$X\leq _{st}Y\leq _{st}Z$$, then $$\operatorname{SCKL}(X,Y)\leq \operatorname{SCKL}(X,Z)$$ and $$\operatorname{SCKL}(Y,Z)\leq \operatorname{SCKL}(X,Z)$$.

### Proof

Proof is similar to the proof of Lemma 1 in Toomaj et al.  and hence it is omitted. □

A mixed system is a stochastic mixture of coherent systems; hence, any coherent system is a mixed system (Toomaj and Doostparast ). Toomaj and Doostparast  obtained an expression for the Shannon entropy of mixed r-out-of-n systems when the lifetimes of the components are independent and identically distributed. Toomaj  discussed the Rényi entropy of a mixed system's lifetime. Kayal  studied a generalized entropy of mixed systems whose component lifetimes are independent and identically distributed.

If T is the lifetime of any arbitrary mixed system, then it is well known that $$X_{1:n}\leq _{st}T\leq _{st}X_{n:n}$$ (see Samaniego  and Barlow and Proschan ). Therefore, we can find a system whose structure (or distribution) is similar (or closer) to that of the parallel system or the series system.

### Proposition 5.3

Let T be the lifetime of a mixed (or coherent) system based on i.i.d. component lifetimes $$X_{1},X_{2},\ldots,X_{n}$$. Then $$\operatorname{SCKL}(T,X_{i:n})\leq \operatorname{SCKL}(X_{1:n},X_{n:n})$$ for $$i=1,n$$.

Now, we propose a symmetric measure of distance (DS) for T as follows:

\begin{aligned} \operatorname{DS}(T)=\frac{\operatorname{SCKL}(T,X_{1:n})-\operatorname{SCKL}(T,X_{n:n})}{\operatorname{SCKL}(X_{1:n},X_{n:n})}. \end{aligned}

Note that $$|\operatorname{DS}(T)|\leq 1$$. If $$\operatorname{DS}(T)$$ is close to 1 (respectively, −1), then the distribution of T is close to that of the parallel (series) system. In Table 3, we have computed the values of $$\operatorname{DS}(T)$$, the standard deviation, CE, and the Gini mean difference for all coherent systems with 1–4 i.i.d. components having the uniform distribution on $$(0,1)$$. We can also see that $$D^{*}_{G}(T)\leq {\mathcal{CE}}(T)\leq \sigma (T)$$, and that $$\operatorname{DS}(T)=1$$ if and only if $$T\equiv ^{d}X_{n:n}$$, whereas $$\operatorname{DS}(T)=-1$$ if and only if $$T\equiv ^{d}X_{1:n}$$.
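The DS measure is straightforward to evaluate numerically from the system cdfs. The sketch below (helper names are ours) uses three i.i.d. uniform components, for which $$F_{1:3}(x)=1-(1-x)^{3}$$, $$F_{3:3}(x)=x^{3}$$, and the 2-out-of-3 system has $$F_{T}(x)=3x^{2}-2x^{3}$$; by construction DS equals −1 for the series system and 1 for the parallel system, and any coherent system falls in between:

```python
import numpy as np

def sckl(F, G, u):
    """SCKL between two cdfs sampled on the grid u: int (F - G) log(F / G) dx."""
    y = (F - G) * np.log(F / G)   # equals 0 wherever F == G
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(u))

u = np.linspace(1e-9, 1.0 - 1e-9, 200001)
F1 = 1.0 - (1.0 - u)**3          # series system X_{1:3}, i.i.d. uniform components
F3 = u**3                        # parallel system X_{3:3}
FT = 3.0 * u**2 - 2.0 * u**3     # 2-out-of-3 system

def ds(F_T):
    denom = sckl(F1, F3, u)
    return (sckl(F_T, F1, u) - sckl(F_T, F3, u)) / denom

# ds(F1) = -1 and ds(F3) = 1 by construction; |ds(FT)| <= 1 by Proposition 5.3.
```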

## Empirical measures of ECE and ECPI

Let $$X_{1},X_{2}, \ldots ,X_{m}$$ be a random sample of size m from an absolutely continuous cumulative distribution function $$F(x)$$. If $$X_{1:m}\leq X_{2:m}\leq \cdots\leq X_{m:m}$$ denote the order statistics of the sample $$X_{1},X_{2}, \ldots ,X_{m}$$, then the empirical distribution function is defined as follows:

$$\hat{F}_{m}(x)=\textstyle\begin{cases} 0, & x< X_{1:m}, \\ \frac{i}{m},& X_{i:m}\le x< X_{i+1:m},\quad i=1,2,\ldots,m-1, \\ 1, & x\geq X_{m:m}. \end{cases}$$

Thus the empirical measure of ECE is obtained as

\begin{aligned} {\mathcal{CE}}_{n,k}(\hat{F}_{m}) =& \frac{k^{n+1}}{n!} \int _{0}^{+\infty }\bigl[ \hat{F}_{m}(x) \bigr]^{k} \bigl(-\log \hat{F}_{m}(x) \bigr)^{n}\,dx \\ =&\frac{k^{n+1}}{n!}\sum_{i=1}^{m-1} \int _{X_{i:m}}^{X_{i+1:m}} \biggl( \frac{i}{m} \biggr)^{k} \biggl(-\log \biggl(\frac{i}{m} \biggr) \biggr)^{n}\,dx \\ =&\frac{k^{n+1}}{n!}\sum_{i=1}^{m-1}U_{i} \biggl(\frac{i}{m} \biggr)^{k}[- \log i+\log m]^{n} \\ =&\frac{k^{n+1}}{n!}\sum_{i=1}^{m-1} \sum_{j=0}^{n}(-1)^{j} \binom{n}{j}U_{i} \biggl(\frac{i}{m} \biggr)^{k}[\log i]^{j}[\log m]^{n-j}, \end{aligned}
(6.1)

where $$U_{i}=X_{i+1:m}-X_{i:m}$$ denotes the ith sample spacing. The following example provides an application of the empirical measure of ECE to real data.

### Example 6.1

Consider the data set from Blischke and Murthy , concerning the failure times of 84 mechanical components.

0.040, 1.866, 2.385, 3.443, 0.301, 1.876, 2.481, 3.467, 0.309, 1.899, 2.610, 3.478, 0.557, 1.911, 2.625, 3.578, 0.943, 1.912, 2.632, 3.595, 1.070, 1.914, 2.646, 3.699, 1.124, 1.981, 2.661, 3.779, 1.248, 2.010, 2.688, 3.924, 1.281, 2.038, 2.823, 4.035, 1.281, 2.085, 2.890, 4.121, 1.303, 2.089, 2.902, 4.167, 1.432, 2.097, 2.934, 4.240, 1.480, 2.135, 2.962, 4.255, 1.505, 2.154, 2.964, 4.278, 1.506, 2.190, 3.000, 4.305, 1.568, 2.194, 3.103, 4.376, 1.615, 2.223, 3.114, 4.449, 1.619, 2.224, 3.117, 4.485, 1.652, 2.229, 3.166, 4.570, 1.652, 2.300, 3.344, 4.602, 1.757, 2.324, 3.376, 4.663.

Then, from the data set, we compute $${\mathcal{CE}}_{5,1}(\hat{F}_{m})=0.1564$$, $${\mathcal{CE}}_{5,2}(\hat{F}_{m})=0.5280$$, $${\mathcal{CE}}_{5,3}(\hat{F}_{m})= 0.6701$$, $${\mathcal{CE}}_{5,4}(\hat{F}_{m})=0.8043$$, and $${\mathcal{CE}}_{5,5}(\hat{F}_{m})=0.9892$$. Figure 1 shows $${\mathcal{CE}}_{n,2}(\hat{F}_{m})$$ as a function of n; the empirical measure of ECE is decreasing in n for $$n\geq 2$$.
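The estimator above reduces to a single weighted sum of the sample spacings. A minimal Python sketch (the function name is ours; it uses the $$k^{n+1}/n!$$ normalizing constant from the definition of ECE):

```python
import numpy as np
from math import factorial

def empirical_ece(sample, n, k):
    """Empirical ECE: (k^{n+1}/n!) * sum_i U_i (i/m)^k (log(m/i))^n,
    where U_i are the spacings of the ordered sample."""
    x = np.sort(np.asarray(sample, dtype=float))
    m = len(x)
    i = np.arange(1, m)                    # i = 1, ..., m-1
    U = np.diff(x)                         # sample spacings U_i = X_{i+1:m} - X_{i:m}
    return k**(n + 1) / factorial(n) * np.sum(U * (i / m)**k * np.log(m / i)**n)
```

Applying `empirical_ece` to the 84 failure times above with the various choices of n and k reproduces the kind of values reported in this example.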

Recall that the well-known Glivenko–Cantelli theorem states that

$$\sup_{x} \bigl\vert \hat{F}_{m}(x)-F(x) \bigr\vert \rightarrow 0 \quad \text{a.s. as } m\rightarrow \infty .$$

Using this result, the following theorem asserts that $${\mathcal{CE}}_{n,k}(\hat{F}_{m})$$ converges almost surely to $${\mathcal{CE}}_{n,k}(X)$$; the proof follows the same lines as that of Theorem 9 of Rao et al. .

### Theorem 6.1

Let X be a nonnegative and absolutely continuous random variable with cdf F. Then, for any such X in $$L^{p}$$ for some $$p>2$$, we have

$${\mathcal{CE}}_{n,k}(\hat{F}_{m})\rightarrow { \mathcal{CE}}_{n,k}(F) \quad \textit{a.s. as } m\rightarrow \infty .$$

### Proof

From (6.1), we have

\begin{aligned} &\frac{n!}{(-1)^{n}k^{n+1}}{\mathcal{CE}}_{n,k}(\hat{F}_{m}) \\ &\quad = \int _{0}^{1}\bigl[ \hat{F}_{m}(x) \bigr]^{k}\bigl[\log \hat{F}_{m}(x)\bigr]^{n} \,dx+ \int _{1}^{+\infty }\bigl[ \hat{F}_{m}(x) \bigr]^{k}\bigl[\log \hat{F}_{m}(x)\bigr]^{n} \,dx \\ &\quad =: J_{1}+J_{2}. \end{aligned}
(6.2)

Using the dominated convergence theorem and the Glivenko–Cantelli theorem, we have

\begin{aligned} \int _{0}^{1}\bigl[\hat{F}_{m}(x) \bigr]^{k}\bigl[\log \hat{F}_{m}(x)\bigr]^{n} \,dx \rightarrow \int _{0}^{1}\bigl[F(x)\bigr]^{k} \bigl[\log {F}(x)\bigr]^{n}\,dx \quad \text{as } m\rightarrow \infty . \end{aligned}
(6.3)

By Markov's inequality applied to the empirical survival function $$\hat{\bar{F}}_{m}$$, it follows that

\begin{aligned} x^{p}\hat{\bar{F}}_{m}(x)\leq \frac{1}{m} \sum_{i=1}^{m}X_{i}^{p}. \end{aligned}
(6.4)

Moreover, by the strong law of large numbers, $$\frac{1}{m}\sum_{i=1}^{m}X_{i}^{p}\rightarrow E(X^{p})$$ almost surely, so that $$\sup_{m}(\frac{1}{m}\sum_{i=1}^{m}X_{i}^{p})<\infty$$ a.s.; hence

\begin{aligned} \hat{\bar{F}}_{m}(x)\leq x^{-p} \Biggl(\sup _{m}\Biggl(\frac{1}{m}\sum _{i=1}^{m}X_{i}^{p} \Biggr) \Biggr)=Cx^{-p}. \end{aligned}
(6.5)

Now, by applying the dominated convergence theorem and using the bound (6.5), we conclude

\begin{aligned} \lim_{m\rightarrow \infty }J_{2}= \int _{1}^{\infty }\bigl[F(x)\bigr]^{k} \bigl[\log {F}(x)\bigr]^{n}\,dx. \end{aligned}
(6.6)

Using (6.2)–(6.6), the result follows. □

According to Equation (5.2), we define the empirical ECPI as follows:

\begin{aligned} I_{n,k}(\hat{F}_{m},\hat{G}_{m}) =& \frac{k^{n+1}}{n!} \int _{0}^{+ \infty }\bigl[\hat{F}_{m}(u) \bigr]^{k}\bigl[-\log \hat{G}_{m}(u)\bigr]^{n} \,du \\ =&\frac{k^{n+1}}{n!}\sum_{j=1}^{m-1} \biggl(-\log \biggl(\frac{j}{m}\biggr) \biggr)^{n} \int _{Y_{j:m}}^{Y_{j+1:m}}\bigl[\hat{F}_{m}(u) \bigr]^{k}\,du, \end{aligned}
(6.7)

where $$Y_{1:m},Y_{2:m},\ldots,Y_{m:m}$$ are the order statistics of the new sample. Let us denote by

$$N_{j}=\sum_{i=1}^{m}\mathbf{1}_{\{X_{i}\leq Y_{j:m}\}},\quad j=1,2,\ldots,m,$$

the number of random variables of the first sample that are less than or equal to the jth order statistic of the second sample. Moreover, we rename by $$X_{j,1}< X_{j,2}< \cdots$$ the random variables of the first sample belonging to interval $$(Y_{j:m},Y_{j+1:m}]$$. So, we have

\begin{aligned} \int _{Y_{j:m}}^{Y_{j+1:m}}\bigl[\hat{F}_{m}(u) \bigr]^{k}\,du= \biggl( \frac{N_{j}}{m} \biggr)^{k}[Y_{j+1:m}-Y_{j:m}]+ \frac{1}{m}\sum_{r=1}^{N_{j+1}-N_{j}}[Y_{j+1:m}-X_{j,r}]. \end{aligned}

Then

\begin{aligned} I_{n,k}(\hat{F}_{m},\hat{G}_{m})= \frac{k^{n+1}}{n!}\sum_{j=1}^{m-1} \Biggl[ \biggl( \frac{N_{j}}{m} \biggr)^{k}[Y_{j+1:m}-Y_{j:m}]+ \frac{1}{m}\sum _{r=1}^{N_{j+1}-N_{j}}[Y_{j+1:m}-X_{j,r}] \Biggr] \biggl(- \log \frac{j}{m} \biggr)^{n}. \end{aligned}

Clearly, $$I_{n,k}(\hat{G}_{m},\hat{F}_{m})$$ can be obtained by symmetry.
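Since both empirical cdfs are step functions, the integral in (6.7) can be computed exactly by summing over the merged breakpoints of the two samples. A Python sketch of this computation (the function name is ours):

```python
import numpy as np
from math import factorial, log

def empirical_ecpi(xs, ys, n, k):
    """Empirical ECPI: (k^{n+1}/n!) * int [F_hat]^k [-log G_hat]^n du,
    integrating the empirical step functions exactly over [Y_{1:m}, Y_{m:m}]."""
    xs, ys = np.sort(xs), np.sort(ys)
    m = len(ys)
    # breakpoints of both empirical cdfs inside the integration range
    grid = np.unique(np.concatenate([xs, ys]))
    grid = grid[(grid >= ys[0]) & (grid <= ys[-1])]
    total = 0.0
    for a, b in zip(grid[:-1], grid[1:]):
        u = 0.5 * (a + b)                      # both cdfs are constant on (a, b)
        Fh = np.searchsorted(xs, u) / len(xs)  # F_hat(u)
        Gh = np.searchsorted(ys, u) / m        # G_hat(u) >= 1/m on this range
        total += Fh**k * (-log(Gh))**n * (b - a)
    return k**(n + 1) / factorial(n) * total
```

For two identical two-point samples $$\{1,2\}$$ with $$n=k=1$$, the only contribution is $$\frac{1}{2}\log 2$$ over the interval $$(1,2)$$, which matches a direct evaluation of (6.7).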

### Theorem 6.2

Let X and Y be nonnegative and absolutely continuous random variables with cdfs F and G, respectively. Then, for any such X and Y in $$L^{p}$$ for some $$p>2$$, we have

\begin{aligned} I_{n,k}(\hat{F}_{m},\hat{G}_{m})\rightarrow I_{n,k}(F,G) \quad \textit{a.s. as } m\rightarrow \infty . \end{aligned}

### Proof

The proof is similar to that of Theorem 5 of Calì et al. . □

## Conclusions

In this paper, we have obtained various properties of ECE. This concept of cumulative entropy can be applied to measure the uncertainty contained in the associated past lifetime. We studied this measure of uncertainty for the lifetimes of coherent systems with identically distributed components. We also discussed the conditional ECE of a system lifetime. Moreover, we proposed a measure of ECPI and its empirical version, and studied a symmetric measure of distance for coherent and mixed systems. Finally, we proposed estimators of these measures using an empirical approach and studied numerical results of ECE for lifetime data. Further, the empirical measures of ECE and ECPI have been shown to converge to a normal distribution when the random sample is taken from a continuous distribution.

## References

1. Ahmed, A.N., Alzaid, A., Bartoszewicz, J., Kochar, S.C.: Dispersive and superadditive ordering. Adv. Appl. Probab. 18(4), 1019–1022 (1986)

2. Barlow, R.E., Proschan, F.: Statistical Theory of Reliability and Life Testing. To Begin With, Silver Spring (1981)

3. Baxter, L.A.: Reliability applications of the relevation transform. Nav. Res. Logist. Q. 29, 323–330 (1982)

4. Blischke, W.R., Murthy, D.N.P.: Reliability. Wiley, New York (2000)

5. Burkschat, M., Navarro, J.: Stochastic comparisons of systems based on sequential order statistics via properties of distorted distributions. Probab. Eng. Inf. Sci. 32, 246–274 (2018)

6. Calì, C., Longobardi, M., Navarro, J.: Properties for generalized cumulative past measures of information. Probab. Eng. Inf. Sci. (2018). https://doi.org/10.1017/S0269964818000360

7. Cover, T.A., Thomas, J.A.: Elements of Information Theory. Wiley, New York (2006)

8. Di Crescenzo, A., Longobardi, M.: On cumulative entropies. J. Stat. Plan. Inference 139, 4072–4087 (2009)

9. Di Crescenzo, A., Longobardi, M.: Some properties and applications of cumulative Kullback-Leibler information. Appl. Stoch. Models Bus. Ind. 31, 875–891 (2015)

10. Di Crescenzo, A., Toomaj, A.: Extension of the past lifetime and its connection to the cumulative entropy. J. Appl. Probab. 52, 1156–1174 (2015)

11. Di Crescenzo, A., Toomaj, A.: Further results on the generalized cumulative entropy. Kybernetika 53(5), 959–982 (2017)

12. Dziubdziela, W., Kopocinski, B.: Limiting properties of the k-th record value. Zastos. Mat. 15, 187–190 (1976)

13. Gupta, R.C., Gupta, R.D.: Proportional reversed hazard rate model and its applications. J. Stat. Plan. Inference 137, 3525–3536 (2007)

14. Kapodistria, S., Psarrakos, G.: Some extensions of the residual lifetime and its connection to the cumulative residual entropy. Probab. Eng. Inf. Sci. 26, 129–146 (2012)

15. Kayal, S.: On generalized cumulative entropies. Probab. Eng. Inf. Sci. 30(4), 640–662 (2016)

16. Kayal, S.: On a generalized entropy of mixed systems. J. Stat. Manag. Syst. 22(6), 1183–1198 (2019)

17. Klein, I., Mangold, B., Doll, M.: Cumulative paired ϕ-entropy. Entropy 18(7), 248 (2016)

18. Krakowski, M.: The relevation transform and a generalization of the gamma distribution function. Rev. Fr. Autom. Inform. Rech. Opér. 7(2), 107–120 (1973)

19. Leser, C.E.V.: Variations in mortality and life-expectation. Popul. Stud. 9, 67–71 (1955)

20. Navarro, J., del Aguila, Y., Asadi, M.: Some new results on the cumulative residual entropy. J. Stat. Plan. Inference 140, 310–322 (2010)

21. Navarro, J., del Aguila, Y., Sordo, M.A., Suárez-Llorens, A.: Stochastic ordering properties for systems with dependent identically distributed components. Appl. Stoch. Models Bus. Ind. 29, 264–278 (2013)

22. Park, S., Rao, M., Shin, D.W.: On cumulative residual Kullback–Leibler information. Stat. Probab. Lett. 82, 2025–2032 (2012)

23. Psarrakos, G., Navarro, J.: Generalized cumulative residual entropy and record values. Metrika 76, 623–640 (2013)

24. Psarrakos, G., Toomaj, A.: On the elasticity of expected interepoch intervals in a non-homogeneous Poisson process under small variations of hazard rate. Probab. Eng. Inf. Sci. (2019). https://doi.org/10.1017/S0269964819000135

25. Rao, M.: More on a new concept of entropy and information. J. Theor. Probab. 18, 967–981 (2005)

26. Rao, M., Chen, Y., Vemuri, B.C., Wang, F.: Cumulative residual entropy: a new measure of information. IEEE Trans. Inf. Theory 50(6), 1220–1228 (2004)

27. Samaniego, F.J.: System Signatures and Their Applications in Engineering Reliability. Springer, New York (2007)

28. Shaked, M., Shanthikumar, J.G.: Stochastic Orders. Springer, New York (2007)

29. Shannon, C.: A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–432 (1948)

30. Sordo, M.A., Psarrakos, G.: Stochastic comparisons of interfailure times under a relevation replacement policy. J. Appl. Probab. 54, 134–145 (2017)

31. Tahmasebi, S., Eskandarzadeh, M.: Generalized cumulative entropy based on kth lower record values. Stat. Probab. Lett. 126, 164–172 (2017)

32. Tahmasebi, S., Eskandarzadeh, M., Jafari, A.A.: An extension of generalized cumulative residual entropy. J. Stat. Theory Appl. 16(1), 165–177 (2017)

33. Thapliyal, R., Taneja, H.C.: Dynamic cumulative residual and past inaccuracy measures. J. Stat. Theory Appl. 14(4), 399–412 (2015)

34. Toomaj, A.: Renyi entropy properties of mixed systems. Commun. Stat., Theory Methods 46(2), 906–916 (2017)

35. Toomaj, A., Doostparast, M.: A note on signature–based expressions for the entropy of mixed r-out-of-n systems. Nav. Res. Logist. 61(3), 202–206 (2014)

36. Toomaj, A., Doostparast, M.: On the Kullback–Leibler information for mixed systems. Int. J. Syst. Sci. 47(10), 2458–2465 (2016)

37. Toomaj, A., Sunoj, S.M., Navarro, J.: Some properties of the cumulative residual entropy of coherent and mixed systems. J. Appl. Probab. 54, 379–393 (2017)

### Acknowledgements

The authors would like to thank the editors and reviewers for their valuable contributions, which greatly improved the readability of this paper.


## Funding

The authors state that there is no funding source for this paper.

## Author information


### Contributions

The authors have equally made contributions. All authors read and approved the final manuscript.

### Corresponding author

Correspondence to Saeid Tahmasebi.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests. The authors state that no funding source or sponsor has participated in the realization of this work.
