
Optimal consumption of the stochastic Ramsey problem for non-Lipschitz diffusion

Abstract

The stochastic Ramsey problem is considered in a growth model with a production function of Cobb-Douglas form. The existence of a unique classical solution is proved for the Hamilton-Jacobi-Bellman equation associated with the optimization problem, and a synthesis of the optimal consumption policy in terms of this solution is proposed.

MSC:49L20, 49L25, 91B62.

1 Introduction

We are concerned with the stochastic Ramsey problem in a growth model discussed by Merton [1]. For a recent contribution in this direction, we refer to [2]. A firm produces goods according to the Cobb-Douglas production function $x^\gamma$ for capital $x$, where $0<\gamma<1$ (cf. Barro and Sala-i-Martin [3]). The stock of capital $X_t$ at time $t$ is modeled by the stochastic differential equation

$dX_t = X_t^\gamma\,dt + \sigma X_t\,dB_t,\quad t>0,\qquad X_0 = x>0,\ \sigma\neq 0,$

on a complete probability space $(\Omega,\mathcal{F},P)$ carrying a standard Brownian motion $\{B_t\}$, endowed with the natural filtration $\mathcal{F}_t = \sigma(B_s,\,s\le t)$.

The capital stock can be consumed, and the consumption flow at time $t$ is assumed to be of the form $c_tX_t$. The rate of consumption $c=\{c_t\}$ per unit of capital is called an admissible policy if $\{c_t\}$ is an $\{\mathcal{F}_t\}$-progressively measurable process such that

$0\le c_t\le 1\quad\text{for all } t\ge 0 \text{ a.s.}$
(1.1)

We denote by $\mathcal{A}$ the set of admissible policies. Given a policy $c\in\mathcal{A}$, the capital stock process $\{X_t\}$ obeys the equation

$dX_t = \big[X_t^\gamma - c_tX_t\big]dt + \sigma X_t\,dB_t,\qquad X_0 = x>0.$
(1.2)

The objective is to find an optimal policy $c^*=\{c_t^*\}$ so as to maximize the expected discounted utility of consumption

$J_x(c) = E\Big[\int_0^\infty e^{-\alpha t}U(c_tX_t)\,dt\Big]$
(1.3)

over $c\in\mathcal{A}$, where $\alpha>0$ is a discount rate and $U(x)$ is a utility function in $C^2(0,\infty)\cap C[0,\infty)$, which is assumed to have the following properties:

$U'(\infty) = U(0) = 0,\qquad U'(0+) = U(\infty) = \infty,\qquad U''<0.$
(1.4)

The Hamilton-Jacobi-Bellman (HJB for short) equation associated with this problem is given by

$\alpha u(x) = \frac{1}{2}\sigma^2x^2u''(x) + x^\gamma u'(x) + \tilde{U}\big(x,u'(x)\big),\quad x>0,$
(1.5)

where

$\tilde{U}(x,y) = \max_{0\le c\le 1}\big\{U(cx) - cxy\big\}\quad\text{for } x,y>0.$
(1.6)
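
As a concrete illustration (not needed for the results below), take the power utility $U(x) = x^\delta$ with $0<\delta<1$, which satisfies (1.4). Differentiating $c\mapsto U(cx)-cxy$ and comparing the critical point with the constraint $c\le1$, the maximization in (1.6) can be carried out in closed form:

$\frac{\partial}{\partial c}\big\{(cx)^\delta - cxy\big\} = x\big\{\delta(cx)^{\delta-1} - y\big\} = 0 \quad\Longleftrightarrow\quad cx = (\delta/y)^{1/(1-\delta)},$

$\tilde{U}(x,y) = \begin{cases} (1-\delta)\,\delta^{\delta/(1-\delta)}\,y^{-\delta/(1-\delta)} & \text{if } y\ge U'(x) = \delta x^{\delta-1},\\ x^\delta - xy & \text{otherwise},\end{cases}$

so the maximizer is $c = (\delta/y)^{1/(1-\delta)}/x$ when this quantity does not exceed $1$, and $c=1$ otherwise, in agreement with the feedback form (5.2) derived in Section 5.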

This kind of economic growth problem has been studied by Kamien and Schwartz [4] and Sethi and Thompson [[5], Chapter 11]. However, the optimization problem there remains unsolved: it is not guaranteed that (1.2) admits a unique positive solution $\{X_t\}$, nor that the value function is a viscosity solution of the HJB equation. The main difficulty stems from the fact that (1.5) is a degenerate nonlinear equation of elliptic type with the non-Lipschitz coefficient $x^\gamma$ on $(0,\infty)$. The problem is also studied analytically in [6], but on a finite time horizon; the resulting HJB equation is a parabolic partial differential equation (PDE, for short), which is very different from the elliptic PDE dealt with in the present paper.

In this paper, we establish the existence of a unique positive solution $\{X_t\}$ to (1.2) and of a classical solution $u$ of (1.5) by the theory of viscosity solutions. For details of the theory of viscosity solutions, we refer to the works [7, 8] and [9]. An optimal policy is shown to exist and is expressed in terms of $u$.

This paper is organized as follows. In Section 2, we show that (1.2) admits a unique positive solution. In Section 3, we show the existence of a viscosity solution $u$ of the HJB equation (1.5). Section 4 is devoted to the $C^2$-regularity of this solution. In Section 5, we present a synthesis of the optimal consumption policy.

2 Preliminaries

In this section, we show the existence of a unique solution $\{X_t\}$ to (1.2).

Proposition 2.1 There exists a unique positive solution $\{X_t\}=\{X_t^x\}$ to (1.2), for each $c\in\mathcal{A}$, such that

$E[X_t] \le \big\{(1-\gamma)t + x^{1-\gamma}\big\}^{1/(1-\gamma)},$
(2.1)
$E\big[X_t^2\big] \le e^{\sigma^2t}\big\{2(1-\lambda)t + x^{2(1-\lambda)}\big\}^{1/(1-\lambda)},\qquad \lambda := (1+\gamma)/2,$
(2.2)
$\forall\varepsilon>0,\ \exists C_\varepsilon>0\ \text{s.t.}\quad E\big[\big|X_t^x - X_t^y\big|\big] \le C_\varepsilon|x-y| + \varepsilon\big(1 + t^{1/(1-\gamma)} + x + y\big),\quad x,y>0.$
(2.3)

Proof We set $x_t = X_t^{1-\gamma}$. Then, by Ito's formula and (1.2),

$dx_t = (1-\gamma)X_t^{-\gamma}\,dX_t + \frac{\sigma^2}{2}(1-\gamma)(-\gamma)X_t^{1-\gamma}\,dt = (1-\gamma)\Big[1 - \Big(c_t + \frac{\sigma^2}{2}\gamma\Big)x_t\Big]dt + (1-\gamma)\sigma x_t\,dB_t,\qquad x_0 = x^{1-\gamma}.$
(2.4)

By linearity, (2.4) has a unique solution $\{x_t\}$. Since

$d\hat{x}_t = (1-\gamma)\Big[-\Big(c_t + \frac{\sigma^2}{2}\gamma\Big)\hat{x}_t\Big]dt + (1-\gamma)\sigma\hat{x}_t\,dB_t,\qquad \hat{x}_0 = x^{1-\gamma}$
(2.5)

has a positive solution $\{\hat{x}_t\}$, we see by the comparison theorem [[10], Chapter 6, Theorem 1.1] that $x_t \ge \hat{x}_t > 0$ holds almost surely (a.s.). Therefore, (1.2) admits a unique positive solution $\{X_t\}$ of the form $X_t = x_t^{1/(1-\gamma)}$, which satisfies $\sup_{0\le t\le T}E[X_t^4] < \infty$ for each $T>0$.

Let $\theta_t$ be the right-hand side of (2.1) and $\phi_t = E[X_t]$. Obviously, we see that $\theta_t$ is the unique solution of

$d\theta_t = \theta_t^\gamma\,dt,\qquad \theta_0 = x>0.$

By (1.2) and Jensen's inequality,

$d\phi_t = dE[X_t] = E\big[X_t^\gamma - c_tX_t\big]dt \le \phi_t^\gamma\,dt.$

Since $\theta_0 = \phi_0 = x$, we deduce $\phi_t \le \theta_t$, which implies (2.1).

Similarly, let $\Theta_t$ be the right-hand side of (2.2) and $\Phi_t = E[X_t^2]$. By substitution, it is easy to see that $\bar{\Theta}_t := e^{-\sigma^2t}\Theta_t$ is the unique solution of

$d\bar{\Theta}_t = 2\bar{\Theta}_t^\lambda\,dt,\qquad \bar{\Theta}_0 = x^2>0.$

Hence

$d\Theta_t = e^{\sigma^2t}\big(2\bar{\Theta}_t^\lambda + \sigma^2\bar{\Theta}_t\big)dt \ge \big(2\Theta_t^\lambda + \sigma^2\Theta_t\big)dt.$

Furthermore, by (1.2), Ito's formula and Jensen's inequality,

$d\Phi_t = dE\big[X_t^2\big] = E\big[2X_t^{2\lambda} - 2c_tX_t^2 + \sigma^2X_t^2\big]dt \le \big(2\Phi_t^\lambda + \sigma^2\Phi_t\big)dt.$

Thus, since $\Phi_0 = \Theta_0$, we deduce $\Phi_t \le \Theta_t$, which implies (2.2).

Next, let $\{Y_t\}$ denote the solution $\{X_t^y\}$ of (1.2) with $Y_0 = y$, and set $y_t = Y_t^{1-\gamma}$. Then, by (2.4),

$d(x_t - y_t) = -(1-\gamma)\Big(c_t + \frac{\sigma^2}{2}\gamma\Big)(x_t - y_t)\,dt + (1-\gamma)\sigma(x_t - y_t)\,dB_t,$

which implies

$x_t - y_t = (x_0 - y_0)\exp\Big\{-(1-\gamma)\Big(\int_0^tc_s\,ds + \frac{\sigma^2}{2}\gamma t\Big) + (1-\gamma)\sigma B_t - \frac{\sigma^2}{2}(1-\gamma)^2t\Big\}.$

Setting $\beta = 1/(1-\gamma)>1$, we have

$E\big[|x_t - y_t|^\beta\big] \le |x_0 - y_0|^\beta E\Big[\exp\Big\{\sigma B_t - \frac{\sigma^2}{2}t\Big\}\Big] = \big|x^{1-\gamma} - y^{1-\gamma}\big|^{1/(1-\gamma)} \le |x-y|.$
(2.6)

By Young's inequality [11], for any $\varepsilon_0>0$,

$|x^\beta - y^\beta| \le \beta\big(x^{\beta-1} + y^{\beta-1}\big)|x-y| \le \beta\Big[\frac{1}{\beta}\Big(\frac{1}{\varepsilon_0}\Big)^\beta|x-y|^\beta + \frac{\beta-1}{\beta}\big\{\varepsilon_0\big(x^{\beta-1} + y^{\beta-1}\big)\big\}^{\beta/(\beta-1)}\Big] \le \Big(\frac{1}{\varepsilon_0}\Big)^\beta|x-y|^\beta + (\beta-1)(2\varepsilon_0)^{\beta/(\beta-1)}\big(x^\beta + y^\beta\big),\quad x,y\ge0.$

Hence, for any $\varepsilon>0$, we can choose $C_\varepsilon>0$ such that

$|x^\beta - y^\beta| \le C_\varepsilon|x-y|^\beta + \varepsilon\big(1 + x^\beta + y^\beta\big),\quad x,y\ge0.$

Therefore, by (2.1) and (2.6), we have

$E\big[|X_t - Y_t|\big] = E\big[\big|x_t^\beta - y_t^\beta\big|\big] \le C_\varepsilon E\big[|x_t - y_t|^\beta\big] + \varepsilon E\big[1 + x_t^\beta + y_t^\beta\big] \le C_\varepsilon|x-y| + \varepsilon E[1 + X_t + Y_t] \le C_\varepsilon|x-y| + \varepsilon\big\{1 + 2^\beta\big(t^\beta + x\big) + 2^\beta\big(t^\beta + y\big)\big\},$

which implies (2.3). □
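
The transformation $x_t = X_t^{1-\gamma}$ used in the proof also suggests a simple way to simulate (1.2) and to check the moment bound (2.1) numerically. The following sketch is purely illustrative: the parameter values, the constant policy $c_t\equiv c$ and the Euler-Maruyama discretization are assumptions, not part of the paper.

```python
import numpy as np

# Monte Carlo sketch (illustrative): simulate (1.2) through the transformed
# variable x_t = X_t^{1-gamma} of (2.4) and compare E[X_T] with the bound (2.1).
rng = np.random.default_rng(0)
gamma, sigma, x0 = 0.4, 0.3, 1.0          # hypothetical parameters
c = 0.5                                   # a constant admissible policy, 0 <= c <= 1
T, n_steps, n_paths = 2.0, 2000, 20000
dt = T / n_steps

xt = np.full(n_paths, x0 ** (1.0 - gamma))        # x_0 = X_0^{1-gamma}
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), n_paths)
    drift = (1.0 - gamma) * (1.0 - (c + 0.5 * sigma**2 * gamma) * xt)
    xt = xt + drift * dt + (1.0 - gamma) * sigma * xt * dB
    xt = np.maximum(xt, 0.0)                      # discretization safeguard

X_T = xt ** (1.0 / (1.0 - gamma))
bound = ((1.0 - gamma) * T + x0 ** (1.0 - gamma)) ** (1.0 / (1.0 - gamma))
print(f"E[X_T] ~ {X_T.mean():.4f}  <=  bound (2.1) = {bound:.4f}")
```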

Remark 2.1 The uniqueness for (1.2) is violated if $x=0$ and $c_t$ is deterministic, since both $0$ and the limit process $\chi_t := \lim_{x\to0+}X_t^x$ satisfy (1.2) with $X_0 = 0$, while

$E\big[\chi_t^{1-\gamma}\big] = E\Big[\int_0^t(1-\gamma)\Big\{1 - \Big(c_s + \frac{\sigma^2}{2}\gamma\Big)\chi_s^{1-\gamma}\Big\}ds\Big] > 0,\quad t>0.$
(2.7)

3 Viscosity solutions of the HJB equation

Definition 3.1 Let $u\in C(0,\infty)$. Then $u$ is called a viscosity solution of (1.5) if the following relations are satisfied:

$\alpha u(x) \le \frac{1}{2}\sigma^2x^2q + x^\gamma p + \tilde{U}(x,p),\quad \forall(p,q)\in J^{2,+}u(x),\ x>0,$
$\alpha u(x) \ge \frac{1}{2}\sigma^2x^2q + x^\gamma p + \tilde{U}(x,p),\quad \forall(p,q)\in J^{2,-}u(x),\ x>0,$

where $J^{2,+}u(x)$ and $J^{2,-}u(x)$ are the second-order superjets and subjets [7].

Define the value function $u$ by

$u(x) = \sup_{c\in\mathcal{A}}E\Big[\int_0^\infty e^{-\alpha t}U(c_tX_t)\,dt\Big],$
(3.1)

where the supremum is taken over all systems $(\Omega,\mathcal{F},P,\{\mathcal{F}_t\};\{B_t\},\{c_t\})$.

In this section, we study the viscosity solution $u$ of the HJB equation (1.5). Due to Proposition 2.1, we can show that the value function $u$ has the following properties.

Lemma 3.1 We assume (1.4). Then we have

$0 \le u(x) \le \zeta(x) := x + \zeta_0,\quad x>0$
(3.2)

for some constant $\zeta_0>0$, and, for any $\rho>0$, there exists $C_\rho>0$ such that

$|u(x) - u(y)| \le C_\rho|x-y| + \rho(1+x+y),\quad x,y>0.$
(3.3)

Proof Clearly, $u$ is nonnegative. By concavity, there is $\bar{C}>0$ such that

$U(x) \le \alpha2^{-1/(1-\gamma)}x + \bar{C},\quad x\ge0.$

By (1.1) and (2.1), we have

$E\Big[\int_0^\infty e^{-\alpha t}U(c_tX_t)\,dt\Big] \le E\Big[\int_0^\infty e^{-\alpha t}\big(\alpha2^{-1/(1-\gamma)}X_t + \bar{C}\big)dt\Big] \le \int_0^\infty e^{-\alpha t}\big\{\alpha\big(t^{1/(1-\gamma)} + x\big) + \bar{C}\big\}dt = x + \alpha\int_0^\infty e^{-\alpha t}t^{1/(1-\gamma)}\,dt + \bar{C}/\alpha,$

which implies (3.2).

Now, let $\rho>0$ be arbitrary. By (1.4), there is $\delta>0$ such that $U(x)\le\rho$ for all $x\in[0,\delta]$. Furthermore,

$|U(x) - U(y)| \le U'(\delta)|x-y|,\quad x,y\ge\delta.$

Thus, we obtain a constant $C_\rho>0$, depending on $\rho>0$, such that

$|U(x) - U(y)| \le C_\rho|x-y| + \rho,\quad x,y\ge0.$
(3.4)

By (1.1), (2.3) and (3.4), we get

$|u(x) - u(y)| \le \sup_{c\in\mathcal{A}}E\Big[\int_0^\infty e^{-\alpha t}\big|U(c_tX_t) - U(c_tY_t)\big|\,dt\Big] \le \sup_{c\in\mathcal{A}}E\Big[\int_0^\infty e^{-\alpha t}\big\{C_\rho|X_t - Y_t| + \rho\big\}dt\Big] \le C_\rho\int_0^\infty e^{-\alpha t}\big\{C_\varepsilon|x-y| + \varepsilon\big(1 + t^{1/(1-\gamma)} + x + y\big)\big\}dt + \rho/\alpha \le C\big\{C_\rho C_\varepsilon|x-y| + (\varepsilon + \rho)(1+x+y)\big\},\quad x,y>0,$
(3.5)

where the constant C>0 is independent of ε, ρ>0. Replacing ρ by ρ/2C and choosing sufficiently small ε>0, we deduce (3.3). □

Remark 3.1 The continuity of u is immediate from (3.3).

Theorem 3.1 We assume (1.4). Then the value function u is a viscosity solution of (1.5).

Proof According to [12], the viscosity property of u follows from the dynamic programming principle for u, that is,

$u(x) = \sup_{c\in\mathcal{A}}E\Big[\int_0^\tau e^{-\alpha t}U(c_tX_t)\,dt + e^{-\alpha\tau}u(X_\tau)\Big],\quad x>0$
(3.6)

for any bounded stopping time $\tau\ge0$, where the supremum is taken over all systems $(\Omega,\mathcal{F},P,\{\mathcal{F}_t\};\{B_t\},\{c_t\})$. Let $\bar{u}(x)$ be the right-hand side of (3.6). First, let $\tau=r$ be non-random, and set $\tilde{X}_t = X_{t+r}$ and $\tilde{B}_t = B_{t+r} - B_r$. Then we have

$d\tilde{X}_t = \big[\tilde{X}_t^\gamma - \tilde{c}_t\tilde{X}_t\big]dt + \sigma\tilde{X}_t\,d\tilde{B}_t,\qquad \tilde{X}_0 = X_r$

for the shifted process $\tilde{c}=\{\tilde{c}_t\}$ of $c$ by $r$, i.e., $\tilde{c}_t = c_{t+r}$. It is easy to see that

$e^{\alpha r}E\Big[\int_r^\infty e^{-\alpha t}U(c_tX_t)\,dt\,\Big|\,\mathcal{F}_r\Big] = E\Big[\int_0^\infty e^{-\alpha t}U(\tilde{c}_t\tilde{X}_t)\,dt\,\Big|\,\mathcal{F}_r\Big] = J_{X_r}(\tilde{c})\quad\text{a.s.}$

with respect to the conditional probability $P(\cdot\,|\,\mathcal{F}_r)$. We take $\zeta_1>0$ such that $x^\gamma \le \alpha x + \zeta_1$ and $\zeta_0>0$ sufficiently large to obtain

$-\alpha\zeta(x) + \frac{1}{2}\sigma^2x^2\zeta''(x) + x^\gamma\zeta'(x) = -\alpha(x + \zeta_0) + x^\gamma \le -\alpha\zeta_0 + \zeta_1 \le 0.$

By (3.2) in Lemma 3.1, Ito’s formula and Doob’s inequality, we have

$E\Big[\sup_{0\le t\le T}e^{-\alpha t}J_{X_t}(\tilde{c})\Big] \le E\Big[\sup_{0\le t\le T}e^{-\alpha t}\zeta(X_t)\Big] \le \zeta(x) + C,\quad T>0$

for some constant C>0. Hence, approximating τ by a sequence of countably valued stopping times, we see that

$E\big[e^{-\alpha\tau}J_{X_\tau}(\tilde{c})\big] = E\Big[\int_\tau^\infty e^{-\alpha t}U(c_tX_t)\,dt\Big].$

Thus

$J_x(c) = E\Big[\int_0^\tau e^{-\alpha t}U(c_tX_t)\,dt + \int_\tau^\infty e^{-\alpha t}U(c_tX_t)\,dt\Big] \le E\Big[\int_0^\tau e^{-\alpha t}U(c_tX_t)\,dt + e^{-\alpha\tau}u(X_\tau)\Big].$

Taking the supremum, we deduce $u \le \bar{u}$.

We shall show the reverse inequality in the case that $\tau=r$ is constant. For any $\varepsilon>0$, we consider a sequence $\{S_j : j=1,\dots,n+1\}$ of disjoint subsets of $(0,\infty)$ such that

$\operatorname{diam}(S_j) < \delta,\qquad \bigcup_{j=1}^nS_j = (0,R)\quad\text{and}\quad S_{n+1} = [R,\infty)$

for $\delta, R>0$ chosen later. We take $x_j\in S_j$ and $c^{(j)}\in\mathcal{A}$ such that

$u(x_j) - \varepsilon \le J_{x_j}\big(c^{(j)}\big),\quad j=1,\dots,n+1.$
(3.7)

By the same argument as in (3.5), we note that

$\big|J_x\big(c^{(j)}\big) - J_y\big(c^{(j)}\big)\big| + |u(x) - u(y)| \le C_\varepsilon|x-y| + \frac{\varepsilon}{4}(1+x+y),\quad x,y>0$

for some constant $C_\varepsilon>0$. We choose $0<\delta<1$ such that $C_\varepsilon\delta < \varepsilon/2$. Then we have

$\big|J_x\big(c^{(j)}\big) - J_y\big(c^{(j)}\big)\big| + |u(x) - u(y)| \le \varepsilon(1+x),\quad x,y\in S_j,\ j=1,2,\dots,n.$
(3.8)

Hence, by (3.7) and (3.8),

$J_{X_r}\big(c^{(j)}\big) = J_{X_r}\big(c^{(j)}\big) - J_{x_j}\big(c^{(j)}\big) + J_{x_j}\big(c^{(j)}\big) \ge -\varepsilon(1+X_r) + u(x_j) - \varepsilon \ge -2\varepsilon(1+X_r) + u(X_r) - \varepsilon \ge -3\varepsilon(1+X_r) + u(X_r)\quad\text{if } X_r\in S_j,\ j=1,\dots,n.$
(3.9)

By definition, we find $c\in\mathcal{A}$ such that

$\bar{u}(x) - \varepsilon \le E\Big[\int_0^re^{-\alpha t}U(c_tX_t)\,dt + e^{-\alpha r}u(X_r)\Big].$

In view of [[10], Chapter 6, Theorem 1.1], we can take $c$, $c^{(j)}$ on the same probability space. Define

$c_t^r = c_t1_{\{t<r\}} + c_{t-r}^{(j)}1_{\{r\le t\}}\quad\text{if } X_r\in S_j,\ j=1,\dots,n+1,$

where $1_{\{\cdot\}}$ denotes the indicator function. Then $\{c_t^r\}$ belongs to $\mathcal{A}$. Let $\{X_t^r\}$ be the solution of

$dX_t^r = \big[\big(X_t^r\big)^\gamma - c_t^rX_t^r\big]dt + \sigma X_t^r\,dB_t,\qquad X_0^r = x>0.$

Clearly, $X_t^r = X_t$ a.s. if $t<r$. Further, for each $j=1,\dots,n+1$, we have on $\{X_r\in S_j\}$

$X_{t+r}^r = X_r + \int_r^{t+r}\big[\big(X_s^r\big)^\gamma - c_s^rX_s^r\big]ds + \int_r^{t+r}\sigma X_s^r\,dB_s = X_r + \int_0^t\big[\big(X_{s+r}^r\big)^\gamma - c_s^{(j)}X_{s+r}^r\big]ds + \int_0^t\sigma X_{s+r}^r\,d\tilde{B}_s\quad\text{a.s.}$

Hence, $X_{t+r}^r$ coincides a.s. on $\{X_r\in S_j\}$ with the solution $X_t^{(j)}$ of (1.2) for $(\tilde{\Omega},\tilde{\mathcal{F}},\tilde{P},\{\tilde{\mathcal{F}}_t\};\{\tilde{B}_t\},\{c_t^{(j)}\})$ with $X_0^{(j)} = X_r$. Thus, we get

$J_{X_r}\big(\tilde{c}^r\big) = E^{\tilde{P}}\Big[\int_0^\infty e^{-\alpha t}U\big(c_{t+r}^rX_{t+r}^r\big)dt\,\Big|\,\tilde{\mathcal{F}}_r\Big] = E^{\tilde{P}}\Big[\int_0^\infty e^{-\alpha t}U\big(c_t^{(j)}X_t^{(j)}\big)dt\,\Big|\,\tilde{\mathcal{F}}_r\Big] = J_{X_r}\big(c^{(j)}\big)\quad\text{a.s. on }\{X_r\in S_j\},\ j=1,\dots,n+1,$
(3.10)

where $E^{\tilde{P}}$ denotes the expectation with respect to $\tilde{P}$.

Now, we fix $x>0$ and, by (2.1), (2.2) and (3.2), choose $R>0$ such that

$\sup_{c\in\mathcal{A}}E\big[u(X_r)1_{\{X_r\ge R\}}\big] \le \sup_{c\in\mathcal{A}}E\big[\zeta(X_r)1_{\{X_r\ge R\}}\big] \le \sup_{c\in\mathcal{A}}\frac{1}{R}E\big[X_r^2 + \zeta_0X_r\big] \le \frac{C_0}{R}\big(1 + x + x^2\big) < \varepsilon,$
(3.11)

where the constant $C_0>0$ depends only on $r$ and $\zeta_0$. By (3.9), (3.10) and (3.11), we have

$E\Big[\int_r^\infty e^{-\alpha t}U\big(c_t^rX_t^r\big)dt\Big] = E\Big[E\Big[\int_r^\infty e^{-\alpha t}U\big(c_t^rX_t^r\big)dt\,\Big|\,\mathcal{F}_r\Big]\Big] = E\big[e^{-\alpha r}J_{X_r}\big(\tilde{c}^r\big)\big] = E\Big[\sum_{j=1}^{n+1}e^{-\alpha r}J_{X_r}\big(c^{(j)}\big)1_{\{X_r\in S_j\}}\Big] \ge E\Big[\sum_{j=1}^{n}e^{-\alpha r}\big\{u(X_r) - 3\varepsilon(1+X_r)\big\}1_{\{X_r\in S_j\}}\Big] \ge E\big[e^{-\alpha r}\big\{u(X_r) - u(X_r)1_{\{X_r\ge R\}}\big\}\big] - 3\varepsilon E[1+X_r] \ge E\big[e^{-\alpha r}u(X_r)\big] - \varepsilon - 3\varepsilon C(1+x)$

for some constant C>0 independent of ε. Thus

$u(x) \ge E\Big[\int_0^re^{-\alpha t}U\big(c_t^rX_t^r\big)dt + \int_r^\infty e^{-\alpha t}U\big(c_t^rX_t^r\big)dt\Big] \ge E\Big[\int_0^re^{-\alpha t}U(c_tX_t)\,dt + e^{-\alpha r}u(X_r)\Big] - \varepsilon - 3\varepsilon C(1+x) \ge \bar{u}(x) - 2\varepsilon - 3\varepsilon C(1+x).$

Letting $\varepsilon\to0$, we get $\bar{u}\le u$.

In the general case, by the above argument, we note that

$u(X_r) = u(\tilde{X}_0) \ge E\Big[\int_0^se^{-\alpha t}U(\tilde{c}_t\tilde{X}_t)\,dt + e^{-\alpha s}u(\tilde{X}_s)\,\Big|\,\mathcal{F}_r\Big] = E\Big[\int_0^se^{-\alpha t}U(c_{t+r}X_{t+r})\,dt + e^{-\alpha s}u(X_{s+r})\,\Big|\,\mathcal{F}_r\Big]\quad\text{a.s.},\ s,r\ge0.$

Hence $\big\{e^{-\alpha s}u(X_s) + \int_0^se^{-\alpha t}U(c_tX_t)\,dt\big\}$ is a supermartingale. By the optional sampling theorem,

$u(X_0) \ge E\Big[\int_0^\tau e^{-\alpha t}U(c_tX_t)\,dt + e^{-\alpha\tau}u(X_\tau)\,\Big|\,\mathcal{F}_0\Big]\quad\text{a.s.}$

Taking the expectation and then the supremum over $\mathcal{A}$, we conclude that $\bar{u}\le u$. Noting the continuity of $u$, we obtain (3.6). □

4 Classical solutions

In this section, using the viscosity solutions technique, we show the $C^2$-regularity of the viscosity solution $u$ of (1.5). For any fixed $0<a<b$, we consider the boundary value problem

$\alpha w = \frac{1}{2}\sigma^2x^2w'' + x^\gamma w' + \tilde{U}\big(x,w'\big)\quad\text{in }(a,b),$
(4.1)

with boundary condition

$w(a) = u(a),\qquad w(b) = u(b),$
(4.2)

given by u.

Proposition 4.1 Let $w_i\in C[a,b]$, $i=1,2$, be two viscosity solutions of (4.1), (4.2). Then, under (1.4), we have

$w_1 = w_2.$

Proof It is sufficient to show that $w_1\le w_2$. Suppose that there exists $x_0\in[a,b]$ such that $w_1(x_0) - w_2(x_0) > 0$. Clearly, by (4.2), $x_0\ne a,b$, and we find $\bar{x}\in(a,b)$ such that

$\varrho := \sup_{x\in[a,b]}\big\{w_1(x) - w_2(x)\big\} = w_1(\bar{x}) - w_2(\bar{x}) > 0.$

Define

$\Psi_k(x,y) = w_1(x) - w_2(y) - \frac{k}{2}|x-y|^2$

for $k>0$. Then there exists $(x_k,y_k)\in[a,b]^2$ such that

$\Psi_k(x_k,y_k) = \sup_{(x,y)\in[a,b]^2}\Psi_k(x,y) \ge \Psi_k(\bar{x},\bar{x}) = \varrho,$
(4.3)

from which

$\frac{k}{2}|x_k - y_k|^2 < w_1(x_k) - w_2(y_k).$

Thus

$|x_k - y_k| \to 0\quad\text{as } k\to\infty.$
(4.4)

Furthermore, by the definition of $(x_k,y_k)$,

$\Psi_k(x_k,y_k) \ge \Psi_k(x_k,x_k).$

Hence, by uniform continuity

$\frac{k}{2}|x_k - y_k|^2 \le w_2(x_k) - w_2(y_k) \le \sup_{|x-y|\le\rho}\big|w_2(x) - w_2(y)\big| \to 0\quad\text{as } k\to\infty\text{ and then }\rho\to0.$
(4.5)

By (4.3), (4.4) and (4.5), extracting a subsequence, we have

$(x_k,y_k) \to (\tilde{x},\tilde{x})\in(a,b)^2\quad\text{as } k\to\infty.$
(4.6)

Now, we may consider that $(x_k,y_k)\in(a,b)^2$ for sufficiently large $k$. Applying Ishii's lemma [7] to $\Psi_k(x,y)$, we obtain $X,Y\in\mathbb{R}$ such that

$\big(k(x_k - y_k), X\big)\in\bar{J}^{2,+}w_1(x_k),\qquad \big(k(x_k - y_k), Y\big)\in\bar{J}^{2,-}w_2(y_k),\qquad \begin{pmatrix} X & 0\\ 0 & -Y\end{pmatrix} \le 3k\begin{pmatrix} 1 & -1\\ -1 & 1\end{pmatrix}.$
(4.7)

By Definition 3.1,

$\alpha w_1(x_k) \le \frac{1}{2}\sigma^2x_k^2X + x_k^\gamma\mu + \tilde{U}(x_k,\mu),\qquad \alpha w_2(y_k) \ge \frac{1}{2}\sigma^2y_k^2Y + y_k^\gamma\mu + \tilde{U}(y_k,\mu),$

where $\mu = k(x_k - y_k)$. Putting these inequalities together, we get

$\alpha\big\{w_1(x_k) - w_2(y_k)\big\} \le \frac{1}{2}\sigma^2\big(x_k^2X - y_k^2Y\big) + \big(x_k^\gamma - y_k^\gamma\big)\mu + \big\{\tilde{U}(x_k,\mu) - \tilde{U}(y_k,\mu)\big\} = I_1 + I_2 + I_3,\quad\text{say}.$

By (4.5) and (4.7), it is clear that

$I_1 = \frac{\sigma^2}{2}\big(x_k^2X - y_k^2Y\big) \le \frac{\sigma^2}{2}\,3k(x_k - y_k)^2 \to 0\quad\text{as } k\to\infty.$

Also, by (4.5)

$I_2 = k\big(x_k^\gamma - y_k^\gamma\big)(x_k - y_k) \le k\gamma a^{\gamma-1}|x_k - y_k|^2 \to 0\quad\text{as } k\to\infty.$

By (1.6), (3.4), (4.5) and (4.6), we have

$I_3 \le \max_{0\le c\le1}\big|U(cx_k) - U(cy_k)\big| + |x_k - y_k||\mu| \le C_\rho|x_k - y_k| + \rho + k|x_k - y_k|^2 \to 0\quad\text{as } k\to\infty\text{ and then }\rho\to0.$

Consequently, by (4.6), we deduce that

$\alpha\varrho \le \alpha\big\{w_1(\tilde{x}) - w_2(\tilde{x})\big\} \le 0,$

which is a contradiction. □

Theorem 4.1 We assume (1.4). Then there exists a solution $u\in C^2(0,\infty)$ of (1.5).

Proof For any 0<a<b, we recall the boundary value problem (4.1), (4.2). Since

$U(0) \le U'(x)(0 - x) + U(x),\quad x>0,$

we have

$K_0 := \sup_{0<x\le a}xU'(x) < \infty.$

Hence, by (1.4)

$\big|U(cx_1) - U(cx_2)\big| \le cU'(ca)|x_1 - x_2| \le \frac{K_0}{a}|x_1 - x_2|,\quad x_1,x_2\in[a,b],\ 0<c\le1.$

Also, by (1.6)

$\big|\tilde{U}(x_1,y_1) - \tilde{U}(x_2,y_2)\big| \le \max_{0\le c\le1}\big|U(cx_1) - U(cx_2)\big| + |x_1y_1 - x_2y_2| \le \frac{K_0}{a}|x_1 - x_2| + |x_1 - x_2||y_1| + b|y_1 - y_2|,\quad y_1,y_2>0.$

Thus the nonlinear term of (4.1) is Lipschitz continuous. By uniform ellipticity on $[a,b]$, the standard theory of nonlinear elliptic equations yields a unique solution $w\in C^2(a,b)\cap C[a,b]$ of (4.1), (4.2). For details, we refer to [[13], Theorem 17.18] and [[14], Chapter 5, Theorem 3.7]. Clearly, by Theorem 3.1, $u$ is a viscosity solution of (4.1), (4.2). Therefore, by Proposition 4.1, we have $w = u$, and hence $u$ is smooth. Since $a$, $b$ are arbitrary, we obtain the assertion. □
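
For illustration only, the Dirichlet problem (4.1), (4.2) can also be approximated numerically. The sketch below uses an upwind finite-difference discretization of (4.1) on $[a,b]$ combined with a policy-iteration loop over the control $c\in[0,1]$; the power utility, the parameter values and the boundary data are hypothetical placeholders (the paper prescribes the boundary data through the value function $u$).

```python
import numpy as np

# Finite-difference policy iteration for the Dirichlet problem (4.1)-(4.2) (illustrative).
# Utility, parameters and boundary data are hypothetical; the paper takes w(a)=u(a), w(b)=u(b).
alpha, sigma, gamma, delta = 0.5, 0.3, 0.4, 0.5   # delta: exponent of an assumed power utility
U = lambda z: z ** delta

a, b, n = 0.5, 5.0, 400
x = np.linspace(a, b, n)
h = x[1] - x[0]
w = x.copy()                           # initial guess; its endpoints serve as boundary data
cgrid = np.linspace(0.0, 1.0, 101)     # brute-force search over admissible c in [0, 1]

for _ in range(200):
    dw = np.gradient(w, h)                                  # gradient of the current iterate
    # frozen control: maximize U(c x) - c x w'(x) over the grid of c (cf. (1.6))
    vals = U(cgrid[None, :] * x[:, None]) - cgrid[None, :] * x[:, None] * dw[:, None]
    c = cgrid[np.argmax(vals, axis=1)]
    # assemble alpha*w - (1/2)sigma^2 x^2 w'' - (x^gamma - c x) w' = U(c x), upwinded drift
    A = np.zeros((n, n))
    rhs = U(c * x)
    A[0, 0] = A[-1, -1] = 1.0
    rhs[0], rhs[-1] = w[0], w[-1]                           # Dirichlet boundary data (4.2)
    for i in range(1, n - 1):
        diff = 0.5 * sigma**2 * x[i]**2 / h**2
        drift = x[i]**gamma - c[i] * x[i]
        up, dn = max(drift, 0.0) / h, max(-drift, 0.0) / h
        A[i, i - 1] = -(diff + dn)
        A[i, i + 1] = -(diff + up)
        A[i, i] = alpha + 2.0 * diff + up + dn
    w_new = np.linalg.solve(A, rhs)
    if np.max(np.abs(w_new - w)) < 1e-8:
        w = w_new
        break
    w = w_new

print("approximate solution of (4.1)-(4.2) at the midpoint:", w[n // 2])
```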

5 Optimal consumption

In this section, we give a synthesis of the optimal policy $c^*=\{c_t^*\}$ for the optimization problem (1.3) subject to (1.2). We consider the stochastic differential equation

$dX_t^* = \big[\big(X_t^*\big)^\gamma - \eta\big(X_t^*\big)X_t^*\big]dt + \sigma X_t^*\,dB_t,\qquad X_0^* = x>0,$
(5.1)

where $\eta(x) = I\big(x,u'(x)\big)$ and $I(x,y)$ denotes the maximizer of (1.6) for $x,y>0$, i.e.,

$I(x,y) = \begin{cases} (U')^{-1}(y)/x & \text{if } U'(x)\le y,\\ 1 & \text{otherwise}.\end{cases}$
(5.2)
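
In code, the feedback map (5.2) is a simple case distinction. The sketch below is illustrative: the power utility and the placeholder marginal value $u'$ are assumptions. It implements $I(x,y)$ and the consumption rate $\eta(x) = I(x,u'(x))$, and checks the identity $\eta(x)x = \min\{(U')^{-1}(u'(x)), x\}$ used in the proof of Lemma 5.2.

```python
import numpy as np

# Feedback map (5.2) for an assumed power utility U(z) = z**delta (illustrative only).
delta = 0.5
U_prime = lambda z: delta * z ** (delta - 1.0)                 # U'(z) = delta z^{delta-1}
U_prime_inv = lambda y: (delta / y) ** (1.0 / (1.0 - delta))   # (U')^{-1}(y)

def I(x, y):
    """Maximizer of c -> U(c x) - c x y over [0, 1], as in (5.2)."""
    return U_prime_inv(y) / x if U_prime(x) <= y else 1.0

# hypothetical placeholder for the marginal value u'(x); the paper obtains u' from (1.5)
u_prime = lambda x: 1.0 / np.sqrt(x)

def eta(x):
    return I(x, u_prime(x))

x = 2.0
print(eta(x) * x, min(U_prime_inv(u_prime(x)), x))   # the two expressions coincide
```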

Our objective is to prove the following.

Theorem 5.1 We assume (1.4). Then the optimal consumption policy $\{c_t^*\}$ is given by

$c_t^* = \eta\big(X_t^*\big).$
(5.3)

To obtain the optimal consumption policy $\{c_t^*\}$, we first study the properties of the value function $u$ and the existence of a strong solution $\{X_t^*\}$ of (5.1). We need the following lemmas.

Lemma 5.1 Under (1.4), the value function $u$ is concave. In addition, we have

$u'(x) > 0\quad\text{for } x>0,$
(5.4)
$u'(0+) = \infty.$
(5.5)

Proof Let $x_i>0$, $i=1,2$. For any $\varepsilon>0$, there exists $c^{(i)}\in\mathcal{A}$ such that

$u(x_i) - \varepsilon < E\Big[\int_0^\infty e^{-\alpha t}U\big(c_t^{(i)}X_t^{(i)}\big)dt\Big],$

where $\{X_t^{(i)}\}$ is the solution of (1.2) corresponding to $c^{(i)}$ with $X_0^{(i)} = x_i$. Let $0\le\xi\le1$, and set

$\bar{c}_t = \frac{\xi c_t^{(1)}X_t^{(1)} + (1-\xi)c_t^{(2)}X_t^{(2)}}{\xi X_t^{(1)} + (1-\xi)X_t^{(2)}},$

which belongs to $\mathcal{A}$. Define $\{\bar{X}_t\}$ and $\{\tilde{X}_t\}$ by

$d\bar{X}_t = \big[\bar{X}_t^\gamma - \bar{c}_t\bar{X}_t\big]dt + \sigma\bar{X}_t\,dB_t,\qquad \bar{X}_0 = \xi x_1 + (1-\xi)x_2,\qquad \tilde{X}_t = \xi X_t^{(1)} + (1-\xi)X_t^{(2)}.$

By concavity,

$\tilde{X}_t \le \xi x_1 + (1-\xi)x_2 + \int_0^t\big[\tilde{X}_s^\gamma - \bar{c}_s\tilde{X}_s\big]ds + \int_0^t\sigma\tilde{X}_s\,dB_s\quad\text{a.s.}$

By the comparison theorem, we have

$\tilde{X}_t \le \bar{X}_t\quad\text{for all } t\ge0\text{ a.s.}$

Thus, by (1.4)

$u\big(\xi x_1 + (1-\xi)x_2\big) \ge E\Big[\int_0^\infty e^{-\alpha t}U(\bar{c}_t\bar{X}_t)\,dt\Big] \ge E\Big[\int_0^\infty e^{-\alpha t}U(\bar{c}_t\tilde{X}_t)\,dt\Big] = E\Big[\int_0^\infty e^{-\alpha t}U\big(\xi c_t^{(1)}X_t^{(1)} + (1-\xi)c_t^{(2)}X_t^{(2)}\big)dt\Big] \ge \xi E\Big[\int_0^\infty e^{-\alpha t}U\big(c_t^{(1)}X_t^{(1)}\big)dt\Big] + (1-\xi)E\Big[\int_0^\infty e^{-\alpha t}U\big(c_t^{(2)}X_t^{(2)}\big)dt\Big] \ge \xi u(x_1) + (1-\xi)u(x_2) - \varepsilon.$

Therefore, letting $\varepsilon\to0$, we obtain the concavity of $u$.

To prove (5.4), recall from Theorem 4.1 that $u$ is smooth. We first show that $u'(x)\ge0$ for $x>0$. If not, then $u'(a_0)<0$ for some $a_0>0$ and, by concavity,

$0 \le u(x) \le u'(a_0)(x - a_0) + u(a_0) \to -\infty\quad\text{as } x\to\infty,$

which is a contradiction. Suppose now that $u'(z) = 0$ for some $z>0$. Then, by concavity, we have $u'(x) = 0$ for all $x\ge z$. Hence, by (1.5) and (1.6),

$\alpha u(z) = \alpha u(x) = \tilde{U}(x,0) = U(x),\quad x\ge z.$

This is contrary to (1.4). Thus, we obtain (5.4).

Next, by definition, we have

$0 < E\Big[\int_0^\infty e^{-\alpha t}U(\check{X}_t)\,dt\Big] \le u(x),\quad x>0,$

where $\{\check{X}_t\}$ is the solution of (1.2) corresponding to $c_t\equiv1$. As in (2.7), the limit process $\check{\chi}_t := \lim_{x\to0+}\check{X}_t$ is different from $0$. Hence

$0 < E\Big[\int_0^\infty e^{-\alpha t}U(\check{\chi}_t)\,dt\Big] \le u(0+).$

Suppose that $u'(0+)<\infty$. By (1.5) and concavity, we get $u(0+) = 0$, which is a contradiction. This implies (5.5). □

Lemma 5.2 Under (1.4), there exists a unique positive strong solution $\{X_t^*\}$ of (5.1).

Proof Let $\{N_t\}$ be the solution of (1.2) corresponding to $c_t\equiv0$. We can take the Brownian motion $\{B_t\}$ on the canonical probability space [[4], p.71]. Since $0\le\eta\le1$, a probability measure $\hat{P}$ is defined by

$d\hat{P}/dP = \exp\Big\{-\int_0^t\eta(N_s)/\sigma\,dB_s - \frac{1}{2}\int_0^t\big(\eta(N_s)/\sigma\big)^2ds\Big\}$

for every $t\ge0$. Girsanov's theorem yields that

$\hat{B}_t := B_t + \int_0^t\eta(N_s)/\sigma\,ds\quad\text{is a Brownian motion under }\hat{P}.$

Hence

$dN_t = \big[N_t^\gamma - \eta(N_t)N_t\big]dt + \sigma N_t\,d\hat{B}_t\quad\text{under }\hat{P}.$

Thus, (5.1) admits a weak solution.

Now, by (5.2), we have

$\eta(x)x = \min\big\{(U')^{-1}\big(u'(x)\big),\,x\big\}.$

Hence, by (1.4) and concavity,

$\frac{d}{dx}(U')^{-1}\big(u'(x)\big) = \frac{u''(x)}{U''\big((U')^{-1}(u'(x))\big)} \ge 0.$

Thus, $\eta(x)x$ is nondecreasing on $(0,\infty)$. Rewriting (5.1) in the form of (2.4), we obtain $X_t^*>0$ a.s. Then we see that pathwise uniqueness holds for (5.1). Therefore, by the Yamada-Watanabe theorem [10], we deduce that (5.1) admits a unique strong solution $\{X_t^*\}$. □

Proof of Theorem 5.1 Since $\{c_t^*\}$ satisfies (1.1), it belongs to $\mathcal{A}$. By Lemma 5.1, we note that

$0 < u'(x)x \le u(x) - u(0+) < u(x),\quad x>0.$

Hence, by (2.2) and (3.2),

$E\Big[\int_0^t\big\{e^{-\alpha s}u'\big(X_s^*\big)X_s^*\big\}^2ds\Big] \le E\Big[\int_0^t\big\{e^{-\alpha s}u\big(X_s^*\big)\big\}^2ds\Big] \le E\Big[\int_0^te^{-\alpha s}\zeta\big(X_s^*\big)^2ds\Big] < \infty.$

This yields that $\big\{\int_0^te^{-\alpha s}u'\big(X_s^*\big)X_s^*\,dB_s\big\}$ is a martingale. By (1.6), (5.3) and Ito's formula,

$E\big[e^{-\alpha t}u\big(X_t^*\big)\big] = u(x) + E\Big[\int_0^te^{-\alpha s}\Big\{-\alpha u\big(X_s^*\big) + \big(X_s^*\big)^\gamma u'\big(X_s^*\big) - c_s^*X_s^*u'\big(X_s^*\big) + \frac{1}{2}\sigma^2\big(X_s^*\big)^2u''\big(X_s^*\big)\Big\}ds\Big] = u(x) - E\Big[\int_0^te^{-\alpha s}U\big(c_s^*X_s^*\big)ds\Big].$

By (2.1) and (3.2), it is clear that

$E\big[e^{-\alpha t}u\big(X_t^*\big)\big] \le E\big[e^{-\alpha t}\zeta\big(X_t^*\big)\big] \le e^{-\alpha t}\big\{(1-\gamma)t + x^{1-\gamma}\big\}^{1/(1-\gamma)} + e^{-\alpha t}\zeta_0 \to 0\quad\text{as } t\to\infty.$

Letting $t\to\infty$, we deduce

$E\Big[\int_0^\infty e^{-\alpha t}U\big(c_t^*X_t^*\big)dt\Big] = u(x).$

By the same calculation as above, we obtain

$E\Big[\int_0^\infty e^{-\alpha t}U(c_tX_t)\,dt\Big] \le u(x)$

for any $c\in\mathcal{A}$. The proof is complete. □

Remark 5.1 From the proof of Theorem 5.1, it follows that the solution $u$ of the HJB equation (1.5) coincides with the value function. This implies that uniqueness holds for (1.5).
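
Finally, the synthesis (5.3) can be illustrated by integrating the closed-loop equation (5.1) with the feedback $\eta$. The sketch below is purely illustrative: the power utility, the placeholder for $u'$ and the Euler-Maruyama scheme are assumptions; in the paper, $\eta$ is built from the derivative of the actual solution $u$ of (1.5).

```python
import numpy as np

# Euler-Maruyama sketch for the closed-loop equation (5.1) with feedback eta (illustrative).
rng = np.random.default_rng(1)
gamma, sigma, delta = 0.4, 0.3, 0.5
U_prime = lambda z: delta * z ** (delta - 1.0)
U_prime_inv = lambda y: (delta / y) ** (1.0 / (1.0 - delta))
u_prime = lambda x: 1.0 / np.sqrt(x)      # hypothetical stand-in for u'; not the true value function

def eta(x):
    # consumption rate (5.2)-(5.3): eta(x) x = min{(U')^{-1}(u'(x)), x}
    return min(U_prime_inv(u_prime(x)), x) / x

T, n_steps, x0 = 5.0, 5000, 1.0
dt = T / n_steps
X = x0
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt))
    X = X + (X ** gamma - eta(X) * X) * dt + sigma * X * dB
    X = max(X, 1e-8)                      # guard against the degenerate boundary at 0
print("X_T under the feedback policy:", X)
```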

References

  1. Merton RC: An asymptotic theory of growth under uncertainty. Rev. Econ. Stud. 1975, 42: 375-393. 10.2307/2296851

  2. Baten MA, Kamil AA: Optimal consumption in a stochastic Ramsey model with Cobb-Douglas production function. Int. J. Math. Math. Sci. 2013. 10.1155/2013/684757

  3. Barro RJ, Sala-i-Martin X: Economic Growth. 2nd edition. MIT Press, Cambridge; 2004.

  4. Kamien MI, Schwartz NL: Dynamic Optimization. 2nd edition. North-Holland, Amsterdam; 1991.

  5. Sethi SP, Thompson GL: Optimal Control Theory. 2nd edition. Kluwer Academic, Boston; 2000.

  6. Morimoto H, Zhou XY: Optimal consumption in a growth model with the Cobb-Douglas production function. SIAM J. Control Optim. 2009, 47(6): 2991-3006. 10.1137/070709153

  7. Crandall MG, Ishii H, Lions PL: User's guide to viscosity solutions of second order partial differential equations. Bull. Am. Math. Soc. 1992, 27: 1-67. 10.1090/S0273-0979-1992-00266-5

  8. Koike S: A Beginner's Guide to the Theory of Viscosity Solutions. MSJ Memoirs. Math. Soc. Japan, Tokyo; 2004.

  9. Darling RWR, Pardoux E: Backwards SDE with random terminal time and applications to semilinear elliptic PDE. Ann. Probab. 1997, 25: 1135-1159.

  10. Ikeda N, Watanabe S: Stochastic Differential Equations and Diffusion Processes. North-Holland, Amsterdam; 1981.

  11. Pales Z: A general version of Young's inequality. Arch. Math. 1992, 58(4): 360-365. 10.1007/BF01189925

  12. Fleming WH, Soner HM: Controlled Markov Processes and Viscosity Solutions. Springer, New York; 1993.

  13. Gilbarg D, Trudinger NS: Elliptic Partial Differential Equations of Second Order. Springer, Berlin; 1983.

  14. Morimoto H: Stochastic Control and Mathematical Modeling: Applications in Economics. Cambridge University Press, Cambridge; 2010.


Acknowledgements

I would like to thank Professor H Morimoto for his useful help. The research was supported by the National Natural Science Foundation of China (11171275) and the Fundamental Research Funds for the Central Universities (XDJK2012C045).

Author information

Correspondence to Chuandi Liu.

Additional information

Competing interests

The author declares that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite this article

Liu, C. Optimal consumption of the stochastic Ramsey problem for non-Lipschitz diffusion. J Inequal Appl 2014, 391 (2014). https://doi.org/10.1186/1029-242X-2014-391