Open Access

A sharp bound for the ergodic distribution of an inventory control model under the assumption that demands and inter-arrival times are dependent

Journal of Inequalities and Applications 2014, 2014:75

https://doi.org/10.1186/1029-242X-2014-75

Received: 15 October 2013

Accepted: 17 January 2014

Published: 13 February 2014

Abstract

In this study, a stochastic process representing a single-item inventory control model with $(s, S)$-type policy is constructed for the case in which the demands of a customer are dependent on the inter-arrival times between consecutive arrivals. Under the assumption that the demands can be expressed as a monotone convex function of the inter-arrival times, it is proved that this process is ergodic, and a closed form of the ergodic distribution is given. Moreover, a sharp lower bound for this distribution is obtained.

Keywords

dependence; ergodic distribution; inventory model of type $(s, S)$

1 Introduction

Consider a single-item inventory control model as follows. Customers arrive at the depot at random times $\{T_n\}$, and the amounts of their demands are modeled by a sequence of random variables $\{\eta_n\}$. If there is enough supply in stock, the customer's demand is met from the stock; otherwise, an immediate replenishment order is placed so as to raise the inventory level to an order-up-to level $S > 0$. In other words, if $X(t)$ denotes the stock level just before the arrival of a customer at time $t$ and $\eta$ is the amount of his/her demand, then
$$X(t) = \begin{cases} X(t) - \eta, & X(t) - \eta > s, \\ S, & X(t) - \eta \le s. \end{cases}$$

Here $s \ge 0$ is a pre-defined control level. We will assume that no product is returned or defective and that the supplier of this depot is reliable, so that replenishment is not delayed. This model is known as an $(s, S)$-type policy inventory control model, and it has been studied extensively under various assumptions in the literature (see Scarf [1], Rabta and Aissani [2], Chen and Yang [3], Khaniyev and Atalay [4], Khaniyev and Aksop [5]).
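The replenishment rule above is straightforward to put in code. The following is a minimal sketch (not from the paper; `simulate_sS` and all parameter values are hypothetical) that applies the $(s, S)$ update to a stream of (inter-arrival time, demand) pairs, with each demand generated as a function of the inter-arrival time, as in the dependent model studied below.

```python
import random

def simulate_sS(S, s, pairs):
    """Apply the (s, S) update rule to a sequence of (inter-arrival, demand)
    pairs and return the stock level observed after each customer is served.
    Illustrative helper only; the paper studies the continuous-time process."""
    level = S
    history = []
    for _, demand in pairs:
        if level - demand > s:
            level -= demand      # demand met from stock
        else:
            level = S            # order up to the level S
        history.append(level)
    return history

# Demands proportional to the inter-arrival time (a dependent pair, eta = 2*xi).
rng = random.Random(1)
pairs = [(xi, 2.0 * xi) for xi in (rng.expovariate(1.0) for _ in range(1000))]
levels = simulate_sS(10.0, 2.0, pairs)
# After every update the level lies in (s, S]: either replenished to S,
# or left strictly above s.
assert all(2.0 < lvl <= 10.0 for lvl in levels)
```

The helper tracks only the embedded post-arrival levels; between arrivals the stock is constant, which is all the continuous-time construction of Section 2 adds.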

The classical inventory model assumes that the inter-arrival times of the customers and the amounts of the demands are mutually independent random variables. However, real-life problems are generally too complex for the assumptions of classical inventory control theory to hold. This shows itself mainly in the structure of the demand amounts and of the inter-arrival times between consecutive customers.

In most cases, these demands cannot be modeled by independent random variables. For example, demands can depend on the day of the week, so that demand is high at weekends and low on weekdays, or there can be seasonal demand for an item (see Sethi and Cheng [6]). The amount of the demand can also depend on the inventory level: if the supplier can offer a wide selection of items, then he/she can increase the probability of making a sale (see Urban [7]).

The model investigated in this study assumes that the demands are dependent on the inter-arrival times. This assumption makes sense especially in situations where the demands can be met by only one supplier and reaching that supplier is not easy for the customers. In this case, if a customer has not made a demand for a long time, then he or she will probably need more items. This is the situation in rural districts or in regions which are difficult to access.

The main purpose of this paper is to investigate the ergodicity of an inventory control model with $(s, S)$-policy under the assumption that the amounts of the demands are dependent on the inter-arrival times. The next section gives a mathematical construction of the studied stochastic process $X(t)$. Section 3 lists the notation used in this study, and Section 4 gives the main results. The last section contains some discussion.

2 Mathematical construction of the process X ( t )

Let $\{(\xi_i, \eta_i)\}$ be a sequence of independent and identically distributed random pairs, where $\xi_i$ and $\eta_i$ are dependent random variables with joint distribution $G(t,x)$ and marginal distributions $\Phi$ and $F$, respectively; that is,
$$G(t,x) = P\{\xi_i \le t,\ \eta_i \le x\}, \quad i = 1, 2, \dots,$$
and
$$\Phi(t) = P\{\xi_i \le t\}, \qquad F(x) = P\{\eta_i \le x\}.$$

Moreover, let η i be absolutely continuous random variables.

Let us construct a sequence of integer-valued random variables $\{N_n\}$ as follows:
$$N_0 = 0, \qquad N_1 = \min\{n \ge 1 : S - Y_n < s\}, \qquad N_m = \min\{n \ge N_{m-1}+1 : S - (Y_n - Y_{N_{m-1}}) < s\}, \quad m = 2, 3, \dots,$$
where $Y_n = \sum_{i=1}^{n} \eta_i$, $n = 1, 2, \dots$. For the sake of simplicity, we will use the following notation:
$$\xi_{kn} = \xi_{N_{k-1}+n}, \qquad \eta_{kn} = \eta_{N_{k-1}+n}, \quad k = 1, 2, \dots,\ n = 1, 2, \dots.$$
Let us construct a stochastic process $X_1(t)$ as follows:
$$X_1(t) = S - \sum_{i=1}^{\nu_1(t)} \eta_i, \quad t > 0.$$
Here
$$\nu_1(t) = \max\{n \ge 0 : T_{1n} \le t\},$$
and $T_{1n} = \sum_{i=1}^{n} \xi_{1i}$, $n = 1, 2, \dots$, with $T_{10} = 0$.

Whenever the value of this process falls below the pre-defined control level $s$, we kill the process, and a new replica $X_2(t)$ of the process $X_1(t)$ is constructed with initial value $S$. Let us denote this time by $\tau_1$; that is,
$$\tau_1 = \inf\{t > 0 : X_1(t) < s\}.$$
So the new process X 2 ( t ) can be expressed as follows:
$$X_2(t) = S - \sum_{i=1}^{\nu_2(t)} \eta_{2i}, \quad t > 0,$$

where $\eta_{2i}$ and $\nu_2(t)$ are defined similarly to $\eta_{1i}$ and $\nu_1(t)$, respectively.

In a similar way, let us construct a sequence of stochastic processes { X n ( t ) , n = 1 , 2 , } . By using these sequences of stochastic processes, we can define the desired stochastic process X ( t ) as follows:
$$X(t) = \sum_{n=1}^{\infty} X_n(t - \tau_{n-1})\, I_{[\tau_{n-1}, \tau_n)}(t), \quad t > 0,$$
where $I_A(\cdot)$ is the indicator function of the set $A$, i.e.,
$$I_A(t) = \begin{cases} 1, & t \in A, \\ 0, & \text{otherwise}, \end{cases}$$

and $\tau_0 = 0$.

3 Notation

In this section, the notations used in this article are given.
$$p(t,x)\,dx = P\{X_1(t) \in dx\},$$
$$U_\theta(t) = \sum_{n=0}^{\infty} R_1^{*n}(t), \qquad R_n(t) = P\{\tau_n \le t\} = R_1^{*n}(t), \quad n = 1, 2, \dots,$$
$$p_n(t,x) = \big(p(\cdot,x) * R_n\big)(t) \equiv \int_0^{t} p(t-u, x)\, R_n(du),$$
$$Y_{n:m} = \sum_{i=n}^{m} \eta_i, \quad m \ge n; \qquad Y_{n:m} = 0, \quad m < n,$$
$$F_n(x) = F^{*n}(x), \qquad U(x) \equiv \sum_{n=0}^{\infty} F^{*n}(x), \qquad F^{*0}(x) = 1, \quad x \ge 0,$$
$$f(x)\,dx = F(dx), \qquad \Phi_n(t) = \Phi^{*n}(t), \qquad \bar F(x) = 1 - F(x), \qquad \gamma = S - s,$$
where $F^{*n}$ and $\Phi^{*n}$ denote the $n$-fold convolutions of $F$ and $\Phi$ with themselves, and $R_1^{*n}$ that of $R_1(t) = P\{\tau_1 \le t\}$.

4 Main results

In Theorem 4.2 below, the ergodicity of the constructed stochastic process is obtained, under the assumption that the demands can be expressed as a monotone increasing function of the inter-arrival times, by utilizing a theorem from Gihman and Skorohod [8]. Then, with an additional assumption of convexity, an upper bound is obtained for the first period's distribution function (that is, for the distribution function of $X_1(t)$, $0 \le t < \tau_1$). An explicit expression for the ergodic characteristics of the process $X(t)$ is given in Corollary 4.1. In Theorem 4.4 a lower bound for the ergodic distribution is obtained.

Proposition 4.1 (see Feller [9])

For all $t \ge 0$, the following equation holds true:
$$\sum_{n=1}^{\infty} n\,\big[F_{n-1}(t) - F_n(t)\big] = U(t).$$

Definition 4.1 (Lehmann [10])

A pair of random variables ( X , Y ) is positively quadrant dependent if
$$P\{X \le x,\ Y \le y\} \ge P\{X \le x\}\,P\{Y \le y\}, \quad \forall x, y \in \mathbb{R}. \tag{1}$$

Proposition 4.2 (Lehmann [10])

If $X$ and $Y$ are positively quadrant dependent, then the following inequality holds:
$$P\{X > x,\ Y > y\} \ge P\{X > x\}\,P\{Y > y\}.$$
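In the model studied below, the pair $(\xi_1, \eta_1) = (\xi_1, h(\xi_1))$ with $h$ increasing is comonotone, and a comonotone pair is positively quadrant dependent, since $P\{X \le x, Y \le y\} = \min\{F(x), G(y)\} \ge F(x)G(y)$. A small numeric illustration (added here; the choices $Z \sim \mathrm{Exp}(1)$ and $h(z) = z^2$ are arbitrary):

```python
import math

def F(x):
    """CDF of Z ~ Exp(1)."""
    return 1.0 - math.exp(-x) if x > 0 else 0.0

def G(y):
    """CDF of Y = Z**2 (h(z) = z**2 is increasing on [0, inf))."""
    return F(math.sqrt(y)) if y > 0 else 0.0

for x in (0.1, 0.5, 1.0, 2.0):
    for y in (0.1, 0.5, 1.0, 4.0):
        joint = min(F(x), G(y))   # P{Z <= x, Z**2 <= y} for the comonotone pair
        assert joint >= F(x) * G(y) - 1e-12   # quadrant-dependence inequality (1)
```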

Theorem 4.1 Let $E[\xi_1] < \infty$ and $E[\eta_1] > 0$. If $\xi_1$ and $\eta_1$ are positively quadrant dependent, then $E[\tau_1] < \infty$.

Proof Note that
$$E[\tau_1] = E\Big[\sum_{i=1}^{N_1}\xi_i\Big] = \sum_{n=1}^{\infty} n\,E[\xi_1 \mid N_1 = n]\,P\{N_1 = n\} = \sum_{n=1}^{\infty} n \int_0^{\infty} P\{\xi_1 > x,\ N_1 = n\}\,dx. \tag{2}$$
On the other hand, for $n = 1, 2$ we have
$$P\{\xi_1 > x,\ N_1 = n\} \le P\{\xi_1 > x\}, \tag{3}$$
and for $n \ge 3$ we have
$$\begin{aligned} P\{\xi_1 > x,\ N_1 = n\} &= P\{\xi_1 > x,\ Y_{n-1} \le \gamma < Y_n\} \\ &= \int_0^{\gamma} P\{\xi_1 > x,\ \eta_1 \in dv,\ Y_{2:n-1} \le \gamma - v < Y_{2:n}\} \\ &= \int_0^{\gamma} P\{Y_{n-2} \le \gamma - v < Y_{n-1}\}\,P\{\xi_1 > x,\ \eta_1 \in dv\} \\ &= \int_0^{\gamma}\int_0^{\gamma-v} \bar F(\gamma - v - w)\,F_{n-2}(dw)\,P\{\xi_1 > x,\ \eta_1 \in dv\} \\ &= \int_0^{\gamma} \big[F_{n-2}(\gamma - v) - F_{n-1}(\gamma - v)\big]\,P\{\xi_1 > x,\ \eta_1 \in dv\}. \end{aligned} \tag{4}$$
By applying Proposition 4.2, we get from (4)
$$P\{\xi_1 > x,\ N_1 = n\} \le \int_0^{\gamma} \big[F_{n-2}(\gamma - v) - F_{n-1}(\gamma - v)\big]\,P\{\eta_1 \in dv\}\,P\{\xi_1 > x\} = \big[F_{n-1}(\gamma) - F_n(\gamma)\big]\,P\{\xi_1 > x\}. \tag{5}$$
Substituting (5) into (2) and using Proposition 4.1 yields
$$E[\tau_1] \le \sum_{n=1}^{2} n\int_0^{\infty} P\{\xi_1 > x\}\,dx + \int_0^{\infty}\sum_{n=3}^{\infty} n\,\big[F_{n-1}(\gamma) - F_n(\gamma)\big]\,P\{\xi_1 > x\}\,dx = E[\xi_1]\big[U(\gamma) + 2F_2(\gamma) - F(\gamma) + 2\big].$$
Since $U(\gamma)$ is finite for all finite values of $\gamma$ (Feller [9]), we get
$$E[\tau_1] \le E[\xi_1]\big[U(\gamma) + 2F_2(\gamma) - F(\gamma) + 2\big] \le \big(4 + U(\gamma)\big)E[\xi_1] < \infty.$$

 □
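As a sanity check on Theorem 4.1 (a Monte Carlo illustration added here, not part of the paper), take the fully dependent pair $\eta_i = \xi_i$ with $\xi_i \sim \mathrm{Exp}(1)$, which is positively quadrant dependent; then $U(\gamma) = 1 + \gamma$, and by memorylessness $E[\tau_1] = \gamma + 1$, comfortably below the bound $(4 + U(\gamma))E[\xi_1]$:

```python
import random

def sample_tau1(gamma, rng):
    """One draw of tau_1 when eta_i = xi_i ~ Exp(1): accumulate arrivals until
    the cumulative demand exceeds gamma = S - s; return the elapsed time."""
    demand = elapsed = 0.0
    while demand <= gamma:
        xi = rng.expovariate(1.0)
        elapsed += xi
        demand += xi      # eta_i = xi_i: demand equals the inter-arrival time
    return elapsed

rng = random.Random(7)
gamma = 3.0
n = 20000
est = sum(sample_tau1(gamma, rng) for _ in range(n)) / n
bound = (4.0 + (1.0 + gamma)) * 1.0   # (4 + U(gamma)) * E[xi_1], U(gamma) = 1 + gamma
assert abs(est - (gamma + 1.0)) < 0.1  # E[tau_1] = gamma + 1 in this special case
assert est <= bound                    # the bound of Theorem 4.1 holds (4 < 8)
```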

Theorem 4.2 Let $\eta_n = h(\xi_n)$, $n = 1, 2, \dots$, where $h \in C(\mathbb{R}^+)$ is a monotone increasing function with $h(0) \ge 0$ and $E[\eta_1] = \mu < \infty$.

(A) If $\sup_{x \in \mathbb{R}^+} h(x) > \gamma$, then the process $X(t)$ is ergodic.

(B) If $h(x) < \gamma$ for all $x \in \mathbb{R}^+$, then, additionally, let
$$\int_0^{\infty} \Big(1 - \frac{h(x)}{\gamma}\Big)\, dx < \infty.$$
Then the process $X(t)$ is ergodic.

Proof It is known that the following conditions are sufficient for the process $X(t)$ to be ergodic (Gihman and Skorohod [8]):

1. For a sequence of random variables $\{\gamma_n\}$ such that $0 \le \gamma_1 < \gamma_2 < \cdots$, the process $X_n \equiv X(\gamma_n)$ must form an ergodic Markov chain.

2. $E[\gamma_{n+1} - \gamma_n] < \infty$, $n = 1, 2, \dots$.

Observe that $X_n \equiv X(\tau_n)$ forms an ergodic Markov chain because $X(\tau_n) = S$ for each $n \ge 0$.

To see that $E[\tau_{n+1} - \tau_n] < \infty$, it is enough to show only that $E[\tau_1] < \infty$, because $\tau_1, \tau_2 - \tau_1, \tau_3 - \tau_2, \dots$ are identically distributed. Note that
$$E[\tau_1] = E\Big[\sum_{i=1}^{N_1}\xi_{1i}\Big] = E\Big[\sum_{i=1}^{N_1} E[\xi_{1i}\mid N_1]\Big] = E\big[N_1\, E[\xi_1 \mid N_1]\big].$$
On the other hand, from Proposition 4.1 and equation (4) we have
$$E[\tau_1] = \int_0^{\infty}\int_{h(x)}^{\gamma} U(\gamma - y)\,F(dy)\,dx.$$
Here, if $a > b$, then we take $\int_a^b f\,dt = 0$.
(A) Assume that there exists $x \in \mathbb{R}^+$ such that $h(x) > \gamma$, and denote the infimum of such numbers by $x^*$ ($< \infty$); that is,
$$x^* = \inf\{x > 0 : h(x) > \gamma\}.$$
Then we get
$$\int_0^{\infty}\int_{h(x)}^{\gamma} U(\gamma - y)\,F(dy)\,dx = \int_0^{x^*}\int_{h(x)}^{\gamma} U(\gamma - y)\,F(dy)\,dx \le \int_0^{x^*}\int_0^{\gamma} U(\gamma - y)\,F(dy)\,dx = x^*\big(U(\gamma) - 1\big) < \infty.$$

(B) If $h(x) < \gamma$ for all $x \in \mathbb{R}^+$, then
$$\int_0^{\infty}\int_{h(x)}^{\gamma} U(\gamma - y)\,F(dy)\,dx \le \sup_{y\in[0,\gamma]}\{U(\gamma - y)f(y)\}\int_0^{\infty}\big(\gamma - h(x)\big)\,dx = \gamma\,\sup_{y\in[0,\gamma]}\{U(\gamma - y)f(y)\}\int_0^{\infty}\Big(1 - \frac{h(x)}{\gamma}\Big)dx < \infty.$$

Therefore $E[\tau_1] < \infty$.

Now, put $\gamma_n = \tau_n$ in conditions 1 and 2 to see that the process $X(t)$ is ergodic. □
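To see what part (B) requires, consider the example (added here, not from the paper) $h(x) = \gamma(1 - e^{-x})$: it is continuous, monotone increasing, satisfies $h(x) < \gamma$ for all $x$, and $1 - h(x)/\gamma = e^{-x}$, whose integral over $[0, \infty)$ equals $1 < \infty$, so the process is ergodic by part (B). A crude quadrature confirms the value of the integral:

```python
import math

GAMMA = 5.0   # any positive value of gamma = S - s works the same way

def integrand(x):
    """1 - h(x)/gamma for the example h(x) = gamma * (1 - exp(-x))."""
    h = GAMMA * (1.0 - math.exp(-x))
    return 1.0 - h / GAMMA          # simplifies to exp(-x)

# trapezoidal rule on [0, 40]; the tail beyond 40 is below 1e-17
n, b = 200000, 40.0
dx = b / n
total = 0.5 * (integrand(0.0) + integrand(b)) * dx
total += sum(integrand(i * dx) for i in range(1, n)) * dx
assert abs(total - 1.0) < 1e-6    # condition (B)'s integral is finite (= 1)
```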

Lemma 4.1 For every measurable function $g$, the following equation holds true:
$$E\big[g(X(t))\big] = \sum_{n=0}^{\infty} \big(g * p_n(t,\cdot)\big)(S), \tag{6}$$
where $(g * p_n(t,\cdot))(S) = \int_0^{S} g(x)\, p_n(t, S-x)\, dx$.
Proof For every $t_n < t$, we have
$$\begin{aligned} E\big[g(X_n(t-\tau_n))\,\chi(X_n(t-\tau_n)) \mid \tau_n = t_n\big] &= E\big[g(X_n(t-t_n))\,\chi(X_n(t-t_n))\big] = E\big[g(X_1(t-t_n))\,\chi(X_1(t-t_n))\big] \\ &= \int_0^{\infty} g(x)\,P\{X_1(t-t_n)\in dx\} = \int_0^{S} g(x)\,p(t-t_n, S-x)\,dx. \end{aligned}$$
Here
$$\chi(x) = \begin{cases} 1, & x \ge 0, \\ 0, & x < 0. \end{cases}$$
Therefore,
$$E\big[g(X_n(t-\tau_n))\,I_{[\tau_n,\tau_{n+1})}(t)\big] = \int_0^{t}\int_0^{S} g(x)\,p(t-t_n, S-x)\,dx\,P\{\tau_n \in dt_n\}$$
and
$$E\big[g(X(t))\big] = \sum_{n=0}^{\infty}\int_0^{t}\int_0^{S} g(x)\,p(t-t_n, S-x)\,dx\,P\{\tau_n \in dt_n\} = \sum_{n=0}^{\infty}\int_0^{S} g(x)\,p_n(t, S-x)\,dx.$$

 □

In Theorem 4.2 it is proved that, under some assumptions, the process $X(t)$ is ergodic. Therefore $\lim_{t\to\infty} P\{X(t) \le x\}$ exists for every $x \in (s, S)$. Let us denote by $X$ a random variable which admits this limit as its distribution; that is,
$$P\{X \le x\} = \lim_{t\to\infty} P\{X(t) \le x\}, \quad x \in (s, S).$$
Corollary 4.1 For every measurable function $g$, the following equation holds:
$$E[g(X)] = \frac{1}{E[\tau_1]} \int_0^{\infty} \big(g * p(u,\cdot)\big)(S)\, du.$$
Proof Note that from Lemma 4.1 we have
$$E\big[g(X(t))\big] = \sum_{n=0}^{\infty}\int_0^{S} g(x)\,p_n(t, S-x)\,dx = \sum_{n=0}^{\infty}\int_0^{S} g(x)\int_0^{t} p(t-u, S-x)\,P\{\tau_n\in du\}\,dx = \int_0^{S} g(x)\int_0^{t} p(t-u, S-x)\,U_\theta(du)\,dx = \int_0^{S} g(x)\,\big(p(\cdot, S-x) * U_\theta\big)(t)\,dx.$$
On the other hand, it is well known from the key renewal theorem that
$$\lim_{t\to\infty}\big(p(\cdot, S-x) * U_\theta\big)(t) = \frac{1}{E[\tau_1]}\int_0^{\infty} p(t, S-x)\,dt.$$
Therefore we get
$$E[g(X)] = \int_0^{S} g(x)\,\frac{1}{E[\tau_1]}\int_0^{\infty} p(t, S-x)\,dt\,dx = \frac{1}{E[\tau_1]}\int_0^{\infty}\int_0^{S} g(x)\,p(t, S-x)\,dx\,dt = \frac{1}{E[\tau_1]}\int_0^{\infty}\big(g * p(t,\cdot)\big)(S)\,dt.$$

 □

The following theorem can be obtained by an application of Theorem 4.2, Lemma 4.1 and Corollary 4.1.

Theorem 4.3 For all $x \ge 0$,
$$P\{X \le x\} = \frac{1}{E[\tau_1]}\int_0^{\infty}\int_0^{x} p(t, S-y)\,dy\,dt.$$

Remark Since $X(t)$ is ergodic and $\lim_{t\to\infty} P\{X(t) \le x\} = P\{X \le x\}$, the distribution in Theorem 4.3 is the ergodic distribution of the process $X(t)$.
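Theorem 4.3 expresses the ergodic distribution as a time average over one regeneration cycle. This renewal-reward structure can be checked by simulation; the following sketch (added here, with assumed parameters $\eta_i = 2\xi_i$, $\xi_i \sim \mathrm{Exp}(1)$, $S = 10$, $s = 2$, $x = 6$) estimates the long-run fraction of time with $X(t) \le x$, which works out to $2/5$ in this case:

```python
import random

def cycle_stats(S, s, a, x, rng):
    """Simulate one regeneration cycle with eta_i = a * xi_i, xi_i ~ Exp(1).
    Returns (cycle length tau_1, time spent in the cycle with X(t) <= x).
    The level just after the n-th arrival at time t is S - a*t, since the
    cumulative demand equals a times the sum of the inter-arrival times."""
    t = 0.0
    hit = None                     # first time the level drops to <= x
    while True:
        t += rng.expovariate(1.0)
        level = S - a * t
        if hit is None and level <= x:
            hit = t
        if level < s:              # level falls below s: the cycle ends
            return t, t - (hit if hit is not None else t)

rng = random.Random(3)
S_, s_, a_, x_ = 10.0, 2.0, 2.0, 6.0
tot, below = 0.0, 0.0
for _ in range(20000):
    tau, b = cycle_stats(S_, s_, a_, x_, rng)
    tot += tau
    below += b
# By memorylessness E[tau_1] = (S - s)/a + 1 = 5 and the expected first
# passage below x is (S - x)/a + 1 = 3, so the limit fraction is (5-3)/5.
assert abs(below / tot - 0.4) < 0.02
```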

Lemma 4.2 In addition to the assumptions of Theorem 4.2, let $h(x)$ be a convex function. Then the following inequality holds for $t < \tau_1$:
$$P\{X_1(t) \le x\} \le \sum_{n=1}^{\infty}\int_{n h^{-1}\left(\frac{S-x}{n}\right)}^{t} \bar\Phi(t-y)\,\Phi_n(dy). \tag{7}$$

Here, if $a > b$, we take $\int_a^b dt = 0$. The inequality in (7) will be $\ge$ when $h$ is a concave function.

Proof Note that
$$\begin{aligned} P\{X_1(t)\le x\} &= P\Big\{S - \sum_{i=1}^{\nu_1(t)}\eta_{1i} \le x\Big\} = \sum_{n=1}^{\infty} P\Big\{\sum_{i=1}^{\nu_1(t)}\eta_{1i} \ge S-x,\ \nu_1(t)=n\Big\} \\ &= \sum_{n=1}^{\infty} P\Big\{\sum_{i=1}^{n}\eta_i \ge S-x,\ \sum_{i=1}^{n}\xi_i \le t < \sum_{i=1}^{n+1}\xi_i\Big\} = \sum_{n=1}^{\infty} P\Big\{\sum_{i=1}^{n} h(\xi_i) \ge S-x,\ \sum_{i=1}^{n}\xi_i \le t < \sum_{i=1}^{n+1}\xi_i\Big\} \\ &= \sum_{n=1}^{\infty} P\Big\{\frac{1}{n}\sum_{i=1}^{n} h(\xi_i) \ge \frac{S-x}{n},\ \sum_{i=1}^{n}\xi_i \le t < \sum_{i=1}^{n+1}\xi_i\Big\} \le \sum_{n=1}^{\infty} P\Big\{h\Big(\frac{1}{n}\sum_{i=1}^{n}\xi_i\Big) \ge \frac{S-x}{n},\ \sum_{i=1}^{n}\xi_i \le t < \sum_{i=1}^{n+1}\xi_i\Big\} \\ &= \sum_{n=1}^{\infty} P\Big\{\frac{1}{n}\sum_{i=1}^{n}\xi_i \ge h^{-1}\Big(\frac{S-x}{n}\Big),\ \sum_{i=1}^{n}\xi_i \le t < \sum_{i=1}^{n+1}\xi_i\Big\} = \sum_{n=1}^{\infty}\int_{n h^{-1}\left(\frac{S-x}{n}\right)}^{t}\bar\Phi(t-y)\,\Phi_n(dy). \end{aligned} \tag{8}$$

 □

Remark Note that the series in (7) is convergent.

Theorem 4.4 Under the assumptions of Lemma 4.2, we have
$$P\{X \le x\} \ge \frac{1}{E[\tau_1]}\int_0^{\infty}\Big[1 - \sum_{n=1}^{\infty}\int_{n h^{-1}\left(\frac{x}{n}\right)}^{t}\bar\Phi(t-y)\,\Phi_n(dy)\Big]\,dt, \quad x \ge 0. \tag{9}$$
Proof From Theorem 4.3 we have
$$P\{X \le x\} = \frac{1}{E[\tau_1]}\int_0^{\infty}\int_0^{x} p(t, S-y)\,dy\,dt,$$
which is also the ergodic distribution of $X(t)$. Therefore, with an application of Lemma 4.2, we get
$$P\{X \le x\} = \frac{1}{E[\tau_1]}\int_0^{\infty}\int_{S-x}^{S} p(t, y)\,dy\,dt = \frac{1}{E[\tau_1]}\int_0^{\infty} P\{X_1(t) \ge S-x\}\,dt = \frac{1}{E[\tau_1]}\int_0^{\infty}\big[1 - P\{X_1(t) \le S-x\}\big]\,dt \ge \frac{1}{E[\tau_1]}\int_0^{\infty}\Big[1 - \sum_{n=1}^{\infty}\int_{n h^{-1}\left(\frac{x}{n}\right)}^{t}\bar\Phi(t-y)\,\Phi_n(dy)\Big]\,dt. \tag{10}$$

 □

Proposition 4.3 The inequality in Theorem  4.4 is sharp; that is, there exists a convex function such that (9) is satisfied with equality.

Proof To prove this proposition, it is enough to show that (7) is sharp. Let $h(x) = ax$, $a > 0$. From (8) we have
$$P\{X_1(t) \le x\} = \sum_{n=1}^{\infty} P\Big\{\sum_{i=1}^{n}\xi_i \ge \frac{S-x}{a},\ \sum_{i=1}^{n}\xi_i \le t < \sum_{i=1}^{n+1}\xi_i\Big\} = \sum_{n=1}^{\infty}\int_{(S-x)/a}^{t}\bar\Phi(t-y)\,\Phi_n(dy). \tag{11}$$

Therefore, (7) is satisfied with equality. Moreover, note that in this case the assumptions of Theorem 4.2 are satisfied, so the process $X(t)$ is ergodic and $\lim_{t\to\infty} P\{X(t) \le x\}$ exists. The proof of the proposition follows by substituting (11) into (10). □
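In the linear case the right-hand side of (11) can even be evaluated in closed form under an extra assumption added here, namely $\xi_i \sim \mathrm{Exp}(1)$: the sum $\sum_{n\ge1} \Phi_n(dy)$ is then the Poisson renewal density $dy$, so (11) equals $1 - e^{-(t - (S-x)/a)}$ for $t \ge (S-x)/a$ (and $0$ otherwise). A Monte Carlo simulation of the first cycle agrees:

```python
import math, random

def X1(t, S, a, rng):
    """Stock level at time t in the first cycle when eta_i = a * xi_i and
    xi_i ~ Exp(1): X_1(t) = S - a * T, where T is the last arrival time <= t."""
    arrival, last = 0.0, 0.0
    while True:
        arrival += rng.expovariate(1.0)
        if arrival > t:
            return S - a * last
        last = arrival

rng = random.Random(42)
S_, a_, t_, x_ = 10.0, 2.0, 5.0, 4.0
trials = 50000
hits = sum(X1(t_, S_, a_, rng) <= x_ for _ in range(trials))
exact = 1.0 - math.exp(-(t_ - (S_ - x_) / a_))   # closed form of (11): 1 - e^{-(t-c)}
assert abs(hits / trials - exact) < 0.01
```

Here $X_1(t) \le x$ exactly when some arrival falls in the window $[(S-x)/a,\ t]$, which for a Poisson process has probability $1 - e^{-(t-(S-x)/a)}$; this is the same closed form.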

5 Conclusion

In this study, a stochastic model is constructed for an inventory in which the customers' demands depend on their arrival times. This assumption is important for modeling real-life problems, such as supply chains delivering items to researchers at the poles or in space. In Theorem 4.2, under some assumptions, it is proved that the stochastic process $X(t)$ is ergodic. Moreover, an explicit expression for the ergodic characteristics of the process $X(t)$ is obtained. A bound for the ergodic distribution, which is sharp, is given in Theorem 4.4.

Declarations

Acknowledgements

This study was partially supported by TÜBİTAK 110T559 coded project.

Authors’ Affiliations

(1)
Department of Industrial Engineering, TOBB University of Economics and Technology
(2)
Institute of Cybernetics, Azerbaijan National Academy of Sciences
(3)
Science and Society Department, The Scientific and Technological Research Council of Turkey

References

  1. Scarf H: The optimality of (s, S) policies in the dynamic inventory problem. In: Mathematical Methods in the Social Sciences 1959. Proceedings of the First Stanford Symposium. Edited by: Sheshinski E, Weiss Y. Stanford University Press, Stanford; 1959:196–202.
  2. Rabta B, Aissani D: Strong stability in an (R, s, S) inventory model. Int. J. Prod. Econ. 2005, 97:159–171. doi:10.1016/j.ijpe.2004.06.050
  3. Chen Z, Yang Y: Optimality of (s, S, p) policy in a general inventory-pricing model with uniform demands. Oper. Res. Lett. 2010, 38:256–260. doi:10.1016/j.orl.2010.04.004
  4. Khaniyev T, Atalay KD: On the weak convergence of the ergodic distribution for an inventory model of type (s, S). Hacet. J. Math. Stat. 2010, 39(4):599–611.
  5. Khaniyev T, Aksop C: Asymptotic results for an inventory model of type (s, S) with a generalized beta interference of chance. TWMS J. Appl. Eng. Math. 2011, 1(2):223–236.
  6. Sethi SP, Cheng F: Optimality of (s, S) policies in inventory models with Markovian demand. Oper. Res. 1997, 45(6):931–939. doi:10.1287/opre.45.6.931
  7. Urban TL: Inventory models with inventory-level-dependent demand: a comprehensive review and unifying theory. Eur. J. Oper. Res. 2005, 162:792–804. doi:10.1016/j.ejor.2003.08.065
  8. Gihman II, Skorohod AV: Theory of Stochastic Processes, vol. II. 1st edition. Springer, Berlin; 1975.
  9. Feller W: An Introduction to Probability Theory and Its Applications, vol. II. 1st edition. Wiley, New York; 1971.
  10. Lehmann EL: Some concepts of dependence. Ann. Math. Stat. 1966, 37:1137–1153. doi:10.1214/aoms/1177699260

Copyright

© Hanalioğlu (Khaniyev) and Aksop; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.