
  • Research
  • Open Access

Viscosity approximation methods for hierarchical optimization problems in CAT(0) spaces

Journal of Inequalities and Applications 2013, 2013:471

https://doi.org/10.1186/1029-242X-2013-471

  • Received: 10 July 2013
  • Accepted: 15 October 2013
  • Published:

Abstract

This paper aims at investigating viscosity approximation methods for solving a system of variational inequalities in a CAT(0) space. Two algorithms are given. Under certain appropriate conditions, we prove that the iterative schemes converge strongly to the unique solution of the hierarchical optimization problem. The results presented in this paper mainly improve and extend the corresponding results of Shi and Chen (J. Appl. Math. 2012:421050, 2012, doi:10.1155/2012/421050), Wangkeeree and Preechasilp (J. Inequal. Appl. 2013:93, 2013, doi:10.1186/1029-242X-2013-93) and others.

MSC:47H09, 47H05.

Keywords

  • viscosity approximation method
  • variational inequality
  • hierarchical optimization problems
  • CAT(0) space
  • common fixed point

1 Introduction

The concept of variational inequalities plays an important role in various kinds of problems in pure and applied sciences (see, for example, [1–11]). Moreover, the theory of variational inequalities has undergone rapid development and prolific growth through the work of many researchers.

In a CAT(0) space, Saejung [12] studied the convergence theorems of the following Halpern iterations for a nonexpansive mapping $T$. Let $u$ be fixed and let $x_t \in C$ be the unique fixed point of the contraction $x \mapsto tu \oplus (1-t)Tx$; i.e.,
$$x_t = tu \oplus (1-t)Tx_t,$$
(1.1)
where $t \in (0,1)$, and let $x_0, u \in C$ be arbitrarily chosen and
$$x_{n+1} = \alpha_n u \oplus (1-\alpha_n)Tx_n, \quad n \ge 0,$$
(1.2)

where $\alpha_n \in (0,1)$. It is proved that $\{x_t\}$ converges strongly as $t \to 0$ to $\tilde{x} \in F(T)$ such that $\tilde{x} = P_{F(T)}u$, and that $\{x_n\}$ converges strongly as $n \to \infty$ to $\tilde{x} \in F(T)$ under certain appropriate conditions on $\{\alpha_n\}$, where $P_C$ denotes the metric projection from $X$ onto $C$.
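As a concrete illustration (ours, not taken from [12]), the following minimal sketch runs the Halpern iteration (1.2) in $\mathbb{R}^2$ with the Euclidean metric, a complete CAT(0) space in which $\alpha_n u \oplus (1-\alpha_n)Tx_n$ is just the convex combination; the rotation $T$ and the choice $\alpha_n = 1/(n+1)$ are illustrative assumptions.

```python
import numpy as np

def halpern(T, u, x0, n_iters=10000):
    """Halpern iteration x_{n+1} = a_n*u + (1 - a_n)*T(x_n) with a_n = 1/(n+1)."""
    x = x0
    for n in range(n_iters):
        a = 1.0 / (n + 1)            # a_n -> 0 and sum a_n = infinity
        x = a * u + (1 - a) * T(x)   # in R^2 the operation "oplus" is the convex combination
    return x

theta = 0.5                          # rotation angle; a rotation is an isometry, hence nonexpansive
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: R @ x                  # F(T) = {0}
u = np.array([1.0, 2.0])
print(halpern(T, u, np.array([5.0, -3.0])))   # tends to P_{F(T)} u = 0
```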

In 2012, Shi and Chen [13] studied the convergence theorems of the following Moudafi viscosity iterations for a nonexpansive mapping $T$: for a contraction $f$ on $C$ and $t \in (0,1)$, let $x_t \in C$ be the unique fixed point of the contraction $x \mapsto tf(x) \oplus (1-t)Tx$; i.e.,
$$x_t = tf(x_t) \oplus (1-t)Tx_t,$$
(1.3)
and $x_0 \in C$ is arbitrarily chosen and
$$x_{n+1} = \alpha_n f(x_n) \oplus (1-\alpha_n)Tx_n, \quad n \ge 0,$$
(1.4)
where $\{\alpha_n\} \subset (0,1)$. They proved that $\{x_t\}$ defined by (1.3) converges strongly as $t \to 0$ to $\tilde{x} \in F(T)$ such that $\tilde{x} = P_{F(T)}f(\tilde{x})$ in the framework of a CAT(0) space satisfying the property $\mathcal{P}$, i.e., if for $x, u, y_1, y_2 \in X$,
$$d(x, P_{[x,y_1]}u)\,d(x, y_1) \le d(x, P_{[x,y_2]}u)\,d(x, y_2) + d(x, u)\,d(y_1, y_2).$$

Furthermore, they also obtained that $\{x_n\}$ defined by (1.4) converges strongly as $n \to \infty$ to $\tilde{x} \in F(T)$ under certain appropriate conditions imposed on $\{\alpha_n\}$.

By using the concept of quasilinearization, which was introduced by Berg and Nikolaev [14], Wangkeeree and Preechasilp [15] studied the strong convergence theorems of iterative schemes (1.3) and (1.4) in CAT(0) spaces without the property $\mathcal{P}$. They proved that iterative schemes (1.3) and (1.4) converge strongly to $\tilde{x}$ such that $\tilde{x} = P_{F(T)}f(\tilde{x})$, which is the unique solution of the variational inequality (VIP)
$$\langle \overrightarrow{\tilde{x}f(\tilde{x})}, \overrightarrow{x\tilde{x}} \rangle \ge 0, \quad \forall x \in F(T).$$
(1.5)
In this paper, we are interested in the following so-called hierarchical optimization problems (HOP). More precisely, let $f, g: C \to C$ be two contractions with coefficient $\alpha \in (0,1)$, and let $T_1, T_2: C \to C$ be two nonexpansive mappings such that $F(T_1)$ and $F(T_2)$ are nonempty. The class of hierarchical optimization problems (HOP) consists in finding $(\tilde{x}, \tilde{y}) \in F(T_1) \times F(T_2)$ such that the following inequalities hold:
$$\begin{cases} \langle \overrightarrow{\tilde{x}f(\tilde{y})}, \overrightarrow{x\tilde{x}} \rangle \ge 0, & \forall x \in F(T_1), \\ \langle \overrightarrow{\tilde{y}g(\tilde{x})}, \overrightarrow{y\tilde{y}} \rangle \ge 0, & \forall y \in F(T_2). \end{cases}$$
(1.6)
For this purpose, we introduce the following iterative schemes:
$$\begin{cases} x_t = tf(T_2 y_t) \oplus (1-t)T_1 x_t, \\ y_t = tg(T_1 x_t) \oplus (1-t)T_2 y_t, \end{cases}$$
(1.7)
where $t \in (0,1)$, and
$$\begin{cases} x_0, y_0 \in C, \\ x_{n+1} = \alpha_n f(T_2 y_n) \oplus (1-\alpha_n)T_1 x_n, \\ y_{n+1} = \alpha_n g(T_1 x_n) \oplus (1-\alpha_n)T_2 y_n, \quad n \ge 0, \end{cases}$$
(1.8)

where $\{\alpha_n\} \subset (0,1)$ satisfies

(H1) $\alpha_n \to 0$;

(H2) $\sum_{n=0}^{\infty} \alpha_n = \infty$;

(H3) either $\sum_{n=0}^{\infty} |\alpha_{n+1} - \alpha_n| < \infty$ or $\lim_{n \to \infty} \alpha_{n+1}/\alpha_n = 1$.

We prove that iterative schemes (1.7) and (1.8) converge strongly to $(\tilde{x}, \tilde{y}) \in F(T_1) \times F(T_2)$ such that $\tilde{x} = P_{F(T_1)}f(\tilde{y})$ and $\tilde{y} = P_{F(T_2)}g(\tilde{x})$, which is the unique solution of (1.6).
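The following sketch (ours, not part of the paper) runs the explicit scheme (1.8) in the Euclidean plane, a complete CAT(0) space where $\oplus$ reduces to the convex combination. The mappings $T_1, T_2$ (metric projections onto closed convex sets, hence nonexpansive), the contractions $f, g$, and the choice $\alpha_n = 1/(n+1)$, which satisfies (H1)-(H3), are illustrative assumptions.

```python
import numpy as np

# Illustrative data in R^2: T1, T2 are metric projections (nonexpansive),
# f, g are contractions with coefficient alpha = 1/2.  All choices are ours.
T1 = lambda x: np.clip(x, -1.0, 1.0)            # projection onto the box [-1,1]^2, so F(T1) = [-1,1]^2
T2 = lambda y: np.maximum(y, 0.0)               # projection onto the quadrant R^2_+, so F(T2) = R^2_+
f  = lambda y: 0.5 * y + np.array([2.0, 0.0])   # contraction with coefficient 1/2
g  = lambda x: 0.5 * x + np.array([0.0, -3.0])  # contraction with coefficient 1/2

x, y = np.array([4.0, -4.0]), np.array([-2.0, 5.0])    # x_0, y_0 arbitrary
for n in range(20000):
    a = 1.0 / (n + 1)                                  # alpha_n satisfying (H1)-(H3)
    # both updates use the old pair (x_n, y_n), exactly as in (1.8)
    x, y = (a * f(T2(y)) + (1 - a) * T1(x),            # x_{n+1} = alpha_n f(T2 y_n) (+) (1 - alpha_n) T1 x_n
            a * g(T1(x)) + (1 - a) * T2(y))            # y_{n+1} = alpha_n g(T1 x_n) (+) (1 - alpha_n) T2 y_n

print("x_n ->", x, "   P_{F(T1)} f(y_n) =", T1(f(y)))
print("y_n ->", y, "   P_{F(T2)} g(x_n) =", T2(g(x)))
```

In this particular example the unique solution works out to $\tilde{x} = (1, 0)$ and $\tilde{y} = (0.5, 0)$, which the iterates approach (slowly, since $\alpha_n = 1/(n+1)$); the printed projection checks approximate $\tilde{x} = P_{F(T_1)}f(\tilde{y})$ and $\tilde{y} = P_{F(T_2)}g(\tilde{x})$.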

2 Preliminaries

Let $(X, d)$ be a metric space. A geodesic path joining $x \in X$ to $y \in X$ (or, more briefly, a geodesic from $x$ to $y$) is a map $c: [0, l] \to X$ such that $c(0) = x$, $c(l) = y$, and $d(c(t), c(t')) = |t - t'|$ for all $t, t' \in [0, l]$. In particular, $c$ is an isometry and $d(x, y) = l$. The image of $c$ is called a geodesic segment joining $x$ and $y$. When it is unique, this geodesic segment is denoted by $[x, y]$. The space $(X, d)$ is said to be a geodesic space if every two points of $X$ are joined by a geodesic, and $X$ is said to be uniquely geodesic if there is exactly one geodesic joining $x$ and $y$ for each $x, y \in X$. A subset $Y \subseteq X$ is said to be convex if $Y$ includes every geodesic segment joining any two of its points.

A geodesic triangle $\triangle(x_1, x_2, x_3)$ in a geodesic metric space $(X, d)$ consists of three points $x_1$, $x_2$, and $x_3$ in $X$ (the vertices of $\triangle$) and a geodesic segment between each pair of vertices (the edges of $\triangle$). A comparison triangle for the geodesic triangle $\triangle(x_1, x_2, x_3)$ in $(X, d)$ is a triangle $\overline{\triangle}(x_1, x_2, x_3) := \triangle(\bar{x}_1, \bar{x}_2, \bar{x}_3)$ in the Euclidean plane $\mathbb{E}^2$ such that $d_{\mathbb{E}^2}(\bar{x}_i, \bar{x}_j) = d(x_i, x_j)$ for $i, j \in \{1, 2, 3\}$.

A geodesic space is said to be a CAT(0) space if all geodesic triangles satisfy the following comparison axiom.

CAT(0): Let $\triangle$ be a geodesic triangle in $X$, and let $\overline{\triangle}$ be a comparison triangle for $\triangle$. Then $\triangle$ is said to satisfy the CAT(0) inequality if for all $x, y \in \triangle$ and all comparison points $\bar{x}, \bar{y} \in \overline{\triangle}$,
$$d(x, y) \le d_{\mathbb{E}^2}(\bar{x}, \bar{y}).$$
Let $x, y \in X$. By [[16], Lemma 2.1(iv)], for each $t \in [0, 1]$ there exists a unique point $z \in [x, y]$ such that
$$d(x, z) = t\,d(x, y), \qquad d(y, z) = (1-t)\,d(x, y).$$
(2.1)

From now on, we will use the notation $(1-t)x \oplus ty$ for the unique point $z$ satisfying (2.1).
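For example, in a Hilbert space, which is a standard complete CAT(0) space, the segment $[x, y]$ is the ordinary line segment and $(1-t)x \oplus ty$ is the convex combination; the one-line verification of (2.1) below is added for illustration:
$$z = (1-t)x + ty \;\Longrightarrow\; d(x, z) = \|z - x\| = t\|y - x\| = t\,d(x, y), \qquad d(y, z) = \|z - y\| = (1-t)\|y - x\| = (1-t)\,d(x, y).$$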

We now collect some elementary facts about CAT(0) spaces which will be used in the proofs of our main results.

Lemma 2.1 Let $X$ be a CAT(0) space. Then:
  (i) (see [[16], Lemma 2.4]) for each $x, y, z \in X$ and $t \in [0, 1]$, one has
$$d((1-t)x \oplus ty, z) \le (1-t)d(x, z) + t\,d(y, z);$$
(2.2)
  (ii) (see [17]) for each $x, y, z \in X$ and $t, s \in [0, 1]$, one has
$$d((1-t)x \oplus ty, (1-s)x \oplus sy) \le |t - s|\,d(x, y);$$
(2.3)
  (iii) (see [18]) for each $x, y, z, w \in X$ and $t \in [0, 1]$, one has
$$d((1-t)x \oplus ty, (1-t)z \oplus tw) \le (1-t)d(x, z) + t\,d(y, w);$$
(2.4)
  (iv) (see [19]) for each $x, y, z \in X$ and $t \in [0, 1]$, one has
$$d((1-t)z \oplus tx, (1-t)z \oplus ty) \le t\,d(x, y);$$
(2.5)
  (v) (see [16]) for each $x, y, z \in X$ and $t \in [0, 1]$, one has
$$d^2((1-t)x \oplus ty, z) \le (1-t)d^2(x, z) + t\,d^2(y, z) - t(1-t)d^2(x, y).$$
(2.6)

Let $C$ be a nonempty subset of a complete CAT(0) space $X$. Recall that a self-mapping $T: C \to C$ is nonexpansive on $C$ if and only if $d(Tx, Ty) \le d(x, y)$ for all $x, y \in C$. A point $x \in C$ is called a fixed point of $T$ if $x = Tx$. We denote by $F(T)$ the set of all fixed points of $T$. A self-mapping $f: C \to C$ is a contraction on $C$ if there exists a constant $\alpha \in (0, 1)$ such that $d(fx, fy) \le \alpha\,d(x, y)$ for all $x, y \in C$. Banach's contraction principle [20] guarantees that $f$ has a unique fixed point when $C$ is a nonempty closed convex subset of a complete metric space.

Fixed-point theory in CAT(0) spaces was first studied by Kirk (see [19, 21]). He showed that every nonexpansive (single-valued) mapping defined on a bounded closed convex subset of a complete CAT(0) space always has a fixed point. Since then, the fixed-point theory for single-valued and multivalued mappings in CAT(0) spaces has developed rapidly.

Berg and Nikolaev [14] introduced the concept of quasilinearization as follows.

Let $(X, d)$ be a metric space. Let us formally denote a pair $(a, b) \in X \times X$ by $\overrightarrow{ab}$ and call it a vector. Then quasilinearization is defined as the mapping $\langle \cdot, \cdot \rangle : (X \times X) \times (X \times X) \to \mathbb{R}$ given by
$$\langle \overrightarrow{ab}, \overrightarrow{cd} \rangle = \frac{1}{2}\bigl(d^2(a, d) + d^2(b, c) - d^2(a, c) - d^2(b, d)\bigr), \quad a, b, c, d \in X.$$
(2.7)

It is easily seen that $\langle \overrightarrow{ab}, \overrightarrow{cd} \rangle = \langle \overrightarrow{cd}, \overrightarrow{ab} \rangle$, $\langle \overrightarrow{ab}, \overrightarrow{cd} \rangle = -\langle \overrightarrow{ba}, \overrightarrow{cd} \rangle$ and $\langle \overrightarrow{ax}, \overrightarrow{cd} \rangle + \langle \overrightarrow{xb}, \overrightarrow{cd} \rangle = \langle \overrightarrow{ab}, \overrightarrow{cd} \rangle$ for all $a, b, c, d, x \in X$.

We say that X satisfies the Cauchy-Schwarz inequality if
$$\langle \overrightarrow{ab}, \overrightarrow{cd} \rangle \le d(a, b)\,d(c, d)$$
(2.8)

for all $a, b, c, d \in X$.

It is known [[14], Corollary 3] that a geodesically connected metric space is a CAT(0) space if and only if it satisfies the Cauchy-Schwarz inequality.
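As a quick illustration (ours, not from [14]), the snippet below evaluates the quasilinearization product (2.7) directly from the metric and checks the Cauchy-Schwarz inequality (2.8) on random points of the Euclidean plane, a complete CAT(0) space in which $\langle \overrightarrow{ab}, \overrightarrow{cd} \rangle$ reduces to the inner product $\langle b - a, d - c \rangle$.

```python
import numpy as np

def quasi_inner(d, a, b, c, e):
    """Quasilinearization <ab, ce> = (d(a,e)^2 + d(b,c)^2 - d(a,c)^2 - d(b,e)^2) / 2."""
    return 0.5 * (d(a, e) ** 2 + d(b, c) ** 2 - d(a, c) ** 2 - d(b, e) ** 2)

d = lambda p, q: np.linalg.norm(p - q)        # Euclidean metric on R^2

rng = np.random.default_rng(0)
for _ in range(1000):
    a, b, c, e = rng.normal(size=(4, 2))
    val = quasi_inner(d, a, b, c, e)
    assert abs(val - np.dot(b - a, e - c)) < 1e-9       # reduces to the inner product in R^2
    assert val <= d(a, b) * d(c, e) + 1e-9              # Cauchy-Schwarz inequality (2.8)
print("quasilinearization and Cauchy-Schwarz checks passed")
```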

Recently, Dehghan and Rooin [22] presented a characterization of the metric projection in CAT(0) spaces as follows.

Lemma 2.2 Let $C$ be a nonempty closed and convex subset of a complete CAT(0) space $X$, $x \in X$ and $u \in C$. Then $u = P_C x$ if and only if $\langle \overrightarrow{yu}, \overrightarrow{ux} \rangle \ge 0$ for all $y \in C$.

Let $\{x_n\}$ be a bounded sequence in a CAT(0) space $X$. For $x \in X$, we set
$$r(x, \{x_n\}) = \limsup_{n \to \infty} d(x, x_n).$$
The asymptotic radius $r(\{x_n\})$ of $\{x_n\}$ is given by
$$r(\{x_n\}) = \inf\{r(x, \{x_n\}) : x \in X\},$$
and the asymptotic center $A(\{x_n\})$ of $\{x_n\}$ is the set
$$A(\{x_n\}) = \{x \in X : r(x, \{x_n\}) = r(\{x_n\})\}.$$
It is known from Proposition 7 of [23] that for each bounded sequence $\{x_n\}$ in a complete CAT(0) space, $A(\{x_n\})$ consists of exactly one point. A sequence $\{x_n\} \subset X$ is said to $\Delta$-converge to $x \in X$ if $A(\{x_{n_k}\}) = \{x\}$ for every subsequence $\{x_{n_k}\}$ of $\{x_n\}$. We write $x_n \stackrel{\Delta}{\to} x$ to denote that $\{x_n\}$ $\Delta$-converges to $x$. The uniqueness of the asymptotic center implies that the CAT(0) space $X$ satisfies Opial's property, i.e., for a given $\{x_n\} \subset X$ such that $x_n \stackrel{\Delta}{\to} x$ and any given $y \in X$ with $y \ne x$, the following holds:
$$\limsup_{n \to \infty} d(x_n, x) < \limsup_{n \to \infty} d(x_n, y).$$

Lemma 2.3 [24]

Assume that $X$ is a complete CAT(0) space. Then:
  (i) Every bounded sequence in $X$ always has a $\Delta$-convergent subsequence.
  (ii) If $C$ is a closed convex subset of $X$ and $T: C \to X$ is a nonexpansive mapping, then the conditions $x_n \stackrel{\Delta}{\to} x$ and $d(x_n, Tx_n) \to 0$ imply $x \in C$ and $Tx = x$.

The following lemma shows a characterization of $\Delta$-convergence.

Lemma 2.4 [23]

Let $X$ be a complete CAT(0) space, $\{x_n\}$ a sequence in $X$, and $x \in X$. Then $x_n \stackrel{\Delta}{\to} x$ if and only if $\limsup_{n \to \infty} \langle \overrightarrow{xx_n}, \overrightarrow{xy} \rangle \le 0$ for all $y \in X$.

Lemma 2.5 [23]

Let $\{a_n\}$ be a sequence of nonnegative real numbers satisfying the property
$$a_{n+1} \le (1 - \alpha_n)a_n + \alpha_n \beta_n, \quad n \ge 0,$$
where $\{\alpha_n\} \subset (0, 1)$ and $\{\beta_n\} \subset \mathbb{R}$ are such that
  1. $\sum_{n=0}^{\infty} \alpha_n = \infty$;
  2. $\limsup_{n \to \infty} \beta_n \le 0$ or $\sum_{n=0}^{\infty} |\alpha_n \beta_n| < \infty$.

Then $\{a_n\}$ converges to zero as $n \to \infty$.
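A small numerical illustration of Lemma 2.5 (added here; the parameter choices are ours): with $\alpha_n = 1/(n+1)$, so that $\sum_n \alpha_n = \infty$, and $\beta_n = 1/\sqrt{n+1}$, so that $\limsup_n \beta_n \le 0$, the recursion is driven to zero.

```python
# Worst case of Lemma 2.5: take equality in a_{n+1} <= (1 - alpha_n) a_n + alpha_n beta_n.
a = 10.0
for n in range(1_000_000):
    alpha = 1.0 / (n + 1)           # sum of alpha_n diverges
    beta = 1.0 / (n + 1) ** 0.5     # limsup beta_n = 0
    a = (1 - alpha) * a + alpha * beta
print(a)   # of order 1e-3 here, and it keeps decreasing to 0 as n grows
```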

Lemma 2.6 [15]

Let $X$ be a complete CAT(0) space. Then:
  (i) for each $u, x, y \in X$, one has
$$d^2(x, u) \le d^2(y, u) + 2\langle \overrightarrow{xy}, \overrightarrow{xu} \rangle;$$
(2.9)
  (ii) for any $u, v \in X$ and $t \in [0, 1]$, let $u_t = tu \oplus (1-t)v$. Then, for all $x, y \in X$:
    (a) $\langle \overrightarrow{u_t x}, \overrightarrow{u_t y} \rangle \le t\langle \overrightarrow{ux}, \overrightarrow{u_t y} \rangle + (1-t)\langle \overrightarrow{vx}, \overrightarrow{u_t y} \rangle$;
    (b) $\langle \overrightarrow{u_t x}, \overrightarrow{uy} \rangle \le t\langle \overrightarrow{ux}, \overrightarrow{uy} \rangle + (1-t)\langle \overrightarrow{vx}, \overrightarrow{uy} \rangle$;
    (c) $\langle \overrightarrow{u_t x}, \overrightarrow{u_t y} \rangle \le t\langle \overrightarrow{ux}, \overrightarrow{uy} \rangle + (1-t)\langle \overrightarrow{vx}, \overrightarrow{vy} \rangle$.

3 Main results

Now we are ready to give our main results in this paper.

Let $(X, d)$ be a metric space. Define a mapping $\hat{d}: (X \times X) \times (X \times X) \to \mathbb{R}^+$ by
$$\hat{d}\bigl((x_1, y_1), (x_2, y_2)\bigr) = d(x_1, x_2) + d(y_1, y_2)$$

for all $x_1, x_2, y_1, y_2 \in X$. Then it is easy to verify that $(X \times X, \hat{d})$ is a metric space, and $(X \times X, \hat{d})$ is complete if and only if $(X, d)$ is complete.

Lemma 3.1 Let $C$ be a closed convex subset of a complete CAT(0) space. Let $f, g: C \to C$ be two contractions with coefficient $\alpha \in (0, 1)$, and let $T_1, T_2: C \to C$ be two nonexpansive mappings. For any $t \in (0, 1)$, define another mapping $G_t: C \times C \to C \times C$ by
$$G_t(x, y) = \bigl(tf(T_2 y) \oplus (1-t)T_1 x,\; tg(T_1 x) \oplus (1-t)T_2 y\bigr).$$

Then $G_t$ is a contraction on $C \times C$.

Proof For any $(x_1, y_1), (x_2, y_2) \in C \times C$ and $t \in (0, 1)$, we have
$$\begin{aligned}
\hat{d}\bigl(G_t(x_1, y_1), G_t(x_2, y_2)\bigr) &= d\bigl(tf(T_2 y_1) \oplus (1-t)T_1 x_1,\; tf(T_2 y_2) \oplus (1-t)T_1 x_2\bigr) \\
&\quad + d\bigl(tg(T_1 x_1) \oplus (1-t)T_2 y_1,\; tg(T_1 x_2) \oplus (1-t)T_2 y_2\bigr) \\
&\le t\,d\bigl(f(T_2 y_1), f(T_2 y_2)\bigr) + (1-t)\,d(T_1 x_1, T_1 x_2) \\
&\quad + t\,d\bigl(g(T_1 x_1), g(T_1 x_2)\bigr) + (1-t)\,d(T_2 y_1, T_2 y_2) \\
&\le \bigl(1 - t(1-\alpha)\bigr)\bigl(d(x_1, x_2) + d(y_1, y_2)\bigr) \\
&= \bigl(1 - t(1-\alpha)\bigr)\,\hat{d}\bigl((x_1, y_1), (x_2, y_2)\bigr).
\end{aligned}$$
This implies that $G_t$ is a contraction mapping. Therefore there exists a unique fixed point $(x_t, y_t) \in C \times C$ of $G_t$ such that
$$\begin{cases} x_t = tf(T_2 y_t) \oplus (1-t)T_1 x_t, \\ y_t = tg(T_1 x_t) \oplus (1-t)T_2 y_t. \end{cases}$$

 □
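Since $G_t$ is a $\bigl(1 - t(1-\alpha)\bigr)$-contraction on the complete metric space $(C \times C, \hat{d})$, the pair $(x_t, y_t)$ in (1.7) can in principle be computed by Picard iteration. The sketch below (ours) does this in $\mathbb{R}^2$ for one fixed $t$, with the same illustrative choices of $T_1, T_2, f, g$ used in the sketch at the end of Section 1.

```python
import numpy as np

T1 = lambda x: np.clip(x, -1.0, 1.0)            # projection onto [-1,1]^2 (nonexpansive)
T2 = lambda y: np.maximum(y, 0.0)               # projection onto R^2_+ (nonexpansive)
f  = lambda y: 0.5 * y + np.array([2.0, 0.0])   # contraction, coefficient 1/2
g  = lambda x: 0.5 * x + np.array([0.0, -3.0])  # contraction, coefficient 1/2

def G(t, x, y):
    """G_t(x, y) = (t f(T2 y) (+) (1-t) T1 x,  t g(T1 x) (+) (1-t) T2 y)."""
    return (t * f(T2(y)) + (1 - t) * T1(x),
            t * g(T1(x)) + (1 - t) * T2(y))

t = 1e-3
x, y = np.zeros(2), np.zeros(2)
for _ in range(50000):          # Picard iterates of a contraction converge (Banach's principle)
    x, y = G(t, x, y)
print(x, y)                     # (x_t, y_t); for small t this lies near the limit pair of Theorem 3.2
```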

Theorem 3.2 Let $C$ be a closed convex subset of a complete CAT(0) space $X$, and let $T_1, T_2: C \to C$ be two nonexpansive mappings such that $F(T_1)$ and $F(T_2)$ are nonempty. Let $f, g$ be two contractions on $C$ with coefficient $0 < \alpha < 1$. For each $t \in (0, 1]$, let $\{x_t\}$ and $\{y_t\}$ be given by (1.7). Then $x_t \to \tilde{x}$ and $y_t \to \tilde{y}$ as $t \to 0$, where $\tilde{x} = P_{F(T_1)}f(\tilde{y})$ and $\tilde{y} = P_{F(T_2)}g(\tilde{x})$, and $(\tilde{x}, \tilde{y})$ is the unique solution of HOP (1.6).

Proof We first show that $\{x_t\}$ and $\{y_t\}$ are bounded. Indeed, take $(p, q) \in F(T_1) \times F(T_2)$ to derive that
$$\begin{aligned}
d(x_t, p) + d(y_t, q) &= d\bigl(tf(T_2 y_t) \oplus (1-t)T_1 x_t, p\bigr) + d\bigl(tg(T_1 x_t) \oplus (1-t)T_2 y_t, q\bigr) \\
&\le t\,d\bigl(f(T_2 y_t), p\bigr) + (1-t)\,d(T_1 x_t, p) + t\,d\bigl(g(T_1 x_t), q\bigr) + (1-t)\,d(T_2 y_t, q) \\
&\le t\,d\bigl(f(T_2 y_t), f(q)\bigr) + t\,d\bigl(f(q), p\bigr) + (1-t)\,d(T_1 x_t, p) \\
&\quad + t\,d\bigl(g(T_1 x_t), g(p)\bigr) + t\,d\bigl(g(p), q\bigr) + (1-t)\,d(T_2 y_t, q) \\
&\le t\alpha\,d(y_t, q) + t\,d\bigl(f(q), p\bigr) + (1-t)\,d(x_t, p) + t\alpha\,d(x_t, p) + t\,d\bigl(g(p), q\bigr) + (1-t)\,d(y_t, q).
\end{aligned}$$
After simplifying, we have
$$d(x_t, p) + d(y_t, q) \le \frac{1}{1-\alpha}\bigl(d(f(q), p) + d(g(p), q)\bigr).$$
Hence $\{x_t\}$ and $\{y_t\}$ are bounded, and so are $\{T_1 x_t\}$, $\{T_2 y_t\}$, $\{f(T_2 y_t)\}$ and $\{g(T_1 x_t)\}$. Consequently,
$$\begin{aligned}
d(x_t, T_1 x_t) + d(y_t, T_2 y_t) &= d\bigl(tf(T_2 y_t) \oplus (1-t)T_1 x_t, T_1 x_t\bigr) + d\bigl(tg(T_1 x_t) \oplus (1-t)T_2 y_t, T_2 y_t\bigr) \\
&= t\,d\bigl(f(T_2 y_t), T_1 x_t\bigr) + t\,d\bigl(g(T_1 x_t), T_2 y_t\bigr) \to 0 \quad (\text{as } t \to 0).
\end{aligned}$$
In particular, we have
$$d(x_t, T_1 x_t) \to 0, \qquad d(y_t, T_2 y_t) \to 0 \quad (\text{as } t \to 0).$$
(3.1)

Next we prove that $\{x_t\}$ is relatively compact as $t \to 0$.

In fact, let $\{t_n\} \subset (0, 1)$ be any sequence such that $t_n \to 0$ as $n \to \infty$. Put $x_n := x_{t_n}$ and $y_n := y_{t_n}$. We now prove that $\{(x_n, y_n)\}$ contains a subsequence converging strongly to $(\tilde{x}, \tilde{y})$, where $\tilde{x} = P_{F(T_1)}f(\tilde{y})$, $\tilde{y} = P_{F(T_2)}g(\tilde{x})$, and $(\tilde{x}, \tilde{y})$ is a solution of HOP (1.6).

In fact, since $\{x_n\}$ and $\{y_n\}$ are both bounded, by Lemma 2.3(i), (ii) and (3.1) we may assume that $x_n \stackrel{\Delta}{\to} \tilde{x}$ and $y_n \stackrel{\Delta}{\to} \tilde{y}$, with $\tilde{x} \in F(T_1)$ and $\tilde{y} \in F(T_2)$. Hence it follows from Lemma 2.6 that
$$\begin{aligned}
d^2(x_n, \tilde{x}) + d^2(y_n, \tilde{y}) &= \langle \overrightarrow{x_n\tilde{x}}, \overrightarrow{x_n\tilde{x}} \rangle + \langle \overrightarrow{y_n\tilde{y}}, \overrightarrow{y_n\tilde{y}} \rangle \\
&\le t_n\langle \overrightarrow{f(T_2 y_n)\tilde{x}}, \overrightarrow{x_n\tilde{x}} \rangle + (1-t_n)\langle \overrightarrow{T_1(x_n)\tilde{x}}, \overrightarrow{x_n\tilde{x}} \rangle \\
&\quad + t_n\langle \overrightarrow{g(T_1 x_n)\tilde{y}}, \overrightarrow{y_n\tilde{y}} \rangle + (1-t_n)\langle \overrightarrow{T_2(y_n)\tilde{y}}, \overrightarrow{y_n\tilde{y}} \rangle \\
&\le t_n\langle \overrightarrow{f(T_2 y_n)\tilde{x}}, \overrightarrow{x_n\tilde{x}} \rangle + (1-t_n)\,d(T_1 x_n, \tilde{x})\,d(x_n, \tilde{x}) \\
&\quad + t_n\langle \overrightarrow{g(T_1 x_n)\tilde{y}}, \overrightarrow{y_n\tilde{y}} \rangle + (1-t_n)\,d(T_2 y_n, \tilde{y})\,d(y_n, \tilde{y}) \\
&\le t_n\langle \overrightarrow{f(T_2 y_n)\tilde{x}}, \overrightarrow{x_n\tilde{x}} \rangle + (1-t_n)\,d^2(x_n, \tilde{x}) + t_n\langle \overrightarrow{g(T_1 x_n)\tilde{y}}, \overrightarrow{y_n\tilde{y}} \rangle + (1-t_n)\,d^2(y_n, \tilde{y}).
\end{aligned}$$
(3.2)
After simplifying, we have
$$\begin{aligned}
d^2(x_n, \tilde{x}) + d^2(y_n, \tilde{y}) &\le \langle \overrightarrow{f(T_2 y_n)\tilde{x}}, \overrightarrow{x_n\tilde{x}} \rangle + \langle \overrightarrow{g(T_1 x_n)\tilde{y}}, \overrightarrow{y_n\tilde{y}} \rangle \\
&= \langle \overrightarrow{f(T_2 y_n)f(\tilde{y})}, \overrightarrow{x_n\tilde{x}} \rangle + \langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_n\tilde{x}} \rangle + \langle \overrightarrow{g(T_1 x_n)g(\tilde{x})}, \overrightarrow{y_n\tilde{y}} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_n\tilde{y}} \rangle \\
&\le d\bigl(f(T_2 y_n), f(\tilde{y})\bigr)\,d(x_n, \tilde{x}) + \langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_n\tilde{x}} \rangle + d\bigl(g(T_1 x_n), g(\tilde{x})\bigr)\,d(y_n, \tilde{y}) + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_n\tilde{y}} \rangle \\
&\le 2\alpha\,d(x_n, \tilde{x})\,d(y_n, \tilde{y}) + \langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_n\tilde{x}} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_n\tilde{y}} \rangle \\
&\le \alpha\bigl(d^2(x_n, \tilde{x}) + d^2(y_n, \tilde{y})\bigr) + \langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_n\tilde{x}} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_n\tilde{y}} \rangle
\end{aligned}$$
and thus
$$d^2(x_n, \tilde{x}) + d^2(y_n, \tilde{y}) \le \frac{1}{1-\alpha}\Bigl[\langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_n\tilde{x}} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_n\tilde{y}} \rangle\Bigr].$$
(3.3)
Since $x_n \stackrel{\Delta}{\to} \tilde{x}$ and $y_n \stackrel{\Delta}{\to} \tilde{y}$, by Lemma 2.4,
$$\limsup_{n \to \infty} \langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_n\tilde{x}} \rangle \le 0 \quad \text{and} \quad \limsup_{n \to \infty} \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_n\tilde{y}} \rangle \le 0.$$
Hence we have
$$\limsup_{n \to \infty}\Bigl[\langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_n\tilde{x}} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_n\tilde{y}} \rangle\Bigr] \le \limsup_{n \to \infty} \langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_n\tilde{x}} \rangle + \limsup_{n \to \infty} \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_n\tilde{y}} \rangle \le 0.$$
(3.4)

It follows from (3.3) that $d^2(x_n, \tilde{x}) + d^2(y_n, \tilde{y}) \to 0$. Hence $x_n \to \tilde{x}$ and $y_n \to \tilde{y}$.

Next we show that $(\tilde{x}, \tilde{y}) \in F(T_1) \times F(T_2)$, which solves HOP (1.6).

Indeed, for each $x \in F(T_1)$ and $y \in F(T_2)$, we have
$$\begin{aligned}
d^2(x_t, x) &= d^2\bigl(tf(T_2 y_t) \oplus (1-t)T_1 x_t, x\bigr) \\
&\le t\,d^2\bigl(f(T_2 y_t), x\bigr) + (1-t)\,d^2(T_1 x_t, x) - t(1-t)\,d^2\bigl(f(T_2 y_t), T_1 x_t\bigr) \\
&\le t\,d^2\bigl(f(T_2 y_t), x\bigr) + (1-t)\,d^2(x_t, x) - t(1-t)\,d^2\bigl(f(T_2 y_t), T_1 x_t\bigr).
\end{aligned}$$
This implies that
$$d^2(x_t, x) \le d^2\bigl(f(T_2 y_t), x\bigr) - (1-t)\,d^2\bigl(f(T_2 y_t), T_1 x_t\bigr).$$
Letting $t = t_n \to 0$, taking the limit and noting that $d(y_t, T_2 y_t) \to 0$ and $d(x_t, T_1 x_t) \to 0$, we have
$$d^2(\tilde{x}, x) \le d^2\bigl(f(\tilde{y}), x\bigr) - d^2\bigl(f(\tilde{y}), \tilde{x}\bigr).$$
Hence
$$\langle \overrightarrow{\tilde{x}f(\tilde{y})}, \overrightarrow{x\tilde{x}} \rangle = \frac{1}{2}\bigl[d^2(\tilde{x}, \tilde{x}) + d^2\bigl(f(\tilde{y}), x\bigr) - d^2\bigl(f(\tilde{y}), \tilde{x}\bigr) - d^2(\tilde{x}, x)\bigr] \ge 0.$$
Similarly, we can prove that
$$\langle \overrightarrow{\tilde{y}g(\tilde{x})}, \overrightarrow{y\tilde{y}} \rangle \ge 0.$$

That is, $(\tilde{x}, \tilde{y})$ solves inequality (1.6).

Finally, we show that the entire net $\{x_t\}$ converges to $\tilde{x}$ and $\{y_t\}$ converges to $\tilde{y}$. In fact, for any sequence $\{s_n\} \subset (0, 1)$ such that $s_n \to 0$ (as $n \to \infty$), assume that $x_{s_n} \to \hat{x}$ and $y_{s_n} \to \hat{y}$. By the same argument as above, we get that $(\hat{x}, \hat{y}) \in F(T_1) \times F(T_2)$ solves inequality (1.6). Hence we have
$$\begin{cases} \langle \overrightarrow{\tilde{x}f(\tilde{y})}, \overrightarrow{\tilde{x}\hat{x}} \rangle \le 0, \\ \langle \overrightarrow{\tilde{y}g(\tilde{x})}, \overrightarrow{\tilde{y}\hat{y}} \rangle \le 0 \end{cases}$$
(3.5)
and
$$\begin{cases} \langle \overrightarrow{\hat{x}f(\hat{y})}, \overrightarrow{\hat{x}\tilde{x}} \rangle \le 0, \\ \langle \overrightarrow{\hat{y}g(\hat{x})}, \overrightarrow{\hat{y}\tilde{y}} \rangle \le 0. \end{cases}$$
(3.6)
Adding up (3.5) and (3.6), we get that
$$\begin{aligned}
0 &\ge \langle \overrightarrow{\tilde{x}f(\tilde{y})}, \overrightarrow{\tilde{x}\hat{x}} \rangle + \langle \overrightarrow{\tilde{y}g(\tilde{x})}, \overrightarrow{\tilde{y}\hat{y}} \rangle - \langle \overrightarrow{\hat{x}f(\hat{y})}, \overrightarrow{\tilde{x}\hat{x}} \rangle - \langle \overrightarrow{\hat{y}g(\hat{x})}, \overrightarrow{\tilde{y}\hat{y}} \rangle \\
&= \langle \overrightarrow{\tilde{x}f(\hat{y})}, \overrightarrow{\tilde{x}\hat{x}} \rangle + \langle \overrightarrow{f(\hat{y})f(\tilde{y})}, \overrightarrow{\tilde{x}\hat{x}} \rangle - \langle \overrightarrow{\hat{x}\tilde{x}}, \overrightarrow{\tilde{x}\hat{x}} \rangle - \langle \overrightarrow{\tilde{x}f(\hat{y})}, \overrightarrow{\tilde{x}\hat{x}} \rangle \\
&\quad + \langle \overrightarrow{\tilde{y}g(\hat{x})}, \overrightarrow{\tilde{y}\hat{y}} \rangle + \langle \overrightarrow{g(\hat{x})g(\tilde{x})}, \overrightarrow{\tilde{y}\hat{y}} \rangle - \langle \overrightarrow{\hat{y}\tilde{y}}, \overrightarrow{\tilde{y}\hat{y}} \rangle - \langle \overrightarrow{\tilde{y}g(\hat{x})}, \overrightarrow{\tilde{y}\hat{y}} \rangle \\
&= \langle \overrightarrow{\tilde{x}\hat{x}}, \overrightarrow{\tilde{x}\hat{x}} \rangle + \langle \overrightarrow{\tilde{y}\hat{y}}, \overrightarrow{\tilde{y}\hat{y}} \rangle - \langle \overrightarrow{f(\tilde{y})f(\hat{y})}, \overrightarrow{\tilde{x}\hat{x}} \rangle - \langle \overrightarrow{g(\tilde{x})g(\hat{x})}, \overrightarrow{\tilde{y}\hat{y}} \rangle \\
&\ge d^2(\tilde{x}, \hat{x}) + d^2(\tilde{y}, \hat{y}) - d\bigl(f(\tilde{y}), f(\hat{y})\bigr)\,d(\tilde{x}, \hat{x}) - d\bigl(g(\tilde{x}), g(\hat{x})\bigr)\,d(\tilde{y}, \hat{y}) \\
&\ge d^2(\tilde{x}, \hat{x}) + d^2(\tilde{y}, \hat{y}) - 2\alpha\,d(\tilde{y}, \hat{y})\,d(\tilde{x}, \hat{x}) \\
&\ge (1-\alpha)\bigl[d^2(\tilde{x}, \hat{x}) + d^2(\tilde{y}, \hat{y})\bigr].
\end{aligned}$$

Since $0 < \alpha < 1$, we have that $d^2(\tilde{x}, \hat{x}) + d^2(\tilde{y}, \hat{y}) = 0$, and so $\tilde{x} = \hat{x}$ and $\tilde{y} = \hat{y}$. Hence the entire net $\{x_t\}$ converges to $\tilde{x}$ and $\{y_t\}$ converges to $\tilde{y}$, and $(\tilde{x}, \tilde{y})$ solves HOP (1.6). This completes the proof of Theorem 3.2. □

Theorem 3.3 Let $C$ be a closed convex subset of a complete CAT(0) space $X$, and let $T_1, T_2: C \to C$ be two nonexpansive mappings such that $F(T_1)$ and $F(T_2)$ are nonempty. Let $f, g$ be two contractions on $C$ with coefficient $0 < \alpha < 1$. Let $\{x_n\}$ and $\{y_n\}$ be the sequences defined by (1.8). If conditions (H1)-(H3) are satisfied, then $x_n \to \tilde{x}$ and $y_n \to \tilde{y}$ as $n \to \infty$, where $\tilde{x} = P_{F(T_1)}f(\tilde{y})$ and $\tilde{y} = P_{F(T_2)}g(\tilde{x})$, which solves HOP (1.6).

Proof First we show that $\{x_n\}$ and $\{y_n\}$ are bounded. Indeed, taking $(p, q) \in F(T_1) \times F(T_2)$, it follows that
$$\begin{aligned}
d(x_{n+1}, p) + d(y_{n+1}, q) &= d\bigl(\alpha_n f(T_2 y_n) \oplus (1-\alpha_n)T_1 x_n, p\bigr) + d\bigl(\alpha_n g(T_1 x_n) \oplus (1-\alpha_n)T_2 y_n, q\bigr) \\
&\le \alpha_n d\bigl(f(T_2 y_n), p\bigr) + (1-\alpha_n)d(T_1 x_n, p) + \alpha_n d\bigl(g(T_1 x_n), q\bigr) + (1-\alpha_n)d(T_2 y_n, q) \\
&\le \alpha_n d\bigl(f(T_2 y_n), f(q)\bigr) + \alpha_n d\bigl(f(q), p\bigr) + (1-\alpha_n)d(T_1 x_n, p) \\
&\quad + \alpha_n d\bigl(g(T_1 x_n), g(p)\bigr) + \alpha_n d\bigl(g(p), q\bigr) + (1-\alpha_n)d(T_2 y_n, q) \\
&\le \alpha_n\alpha\,d(y_n, q) + \alpha_n d\bigl(f(q), p\bigr) + (1-\alpha_n)d(x_n, p) + \alpha_n\alpha\,d(x_n, p) + \alpha_n d\bigl(g(p), q\bigr) + (1-\alpha_n)d(y_n, q) \\
&= \bigl(1 - \alpha_n(1-\alpha)\bigr)\bigl[d(x_n, p) + d(y_n, q)\bigr] + \alpha_n(1-\alpha)\,\frac{d(f(q), p) + d(g(p), q)}{1-\alpha} \\
&\le \max\Bigl\{d(x_n, p) + d(y_n, q),\; \frac{d(f(q), p) + d(g(p), q)}{1-\alpha}\Bigr\}.
\end{aligned}$$
By induction, we can prove that
$$d(x_n, p) + d(y_n, q) \le \max\Bigl\{d(x_0, p) + d(y_0, q),\; \frac{d(f(q), p) + d(g(p), q)}{1-\alpha}\Bigr\}$$
(3.7)

for all $n \in \mathbb{N}$. This implies that $\{x_n\}$ and $\{y_n\}$ are bounded, and so are $\{T_1 x_n\}$, $\{T_2 y_n\}$, $\{f(T_2 y_n)\}$ and $\{g(T_1 x_n)\}$.

We claim that $d(x_{n+1}, x_n) \to 0$ and $d(y_{n+1}, y_n) \to 0$. Indeed, for some appropriate constant $M > 0$, we have
$$\begin{aligned}
d(x_{n+1}, x_n) + d(y_{n+1}, y_n) &= d\bigl(\alpha_n f(T_2 y_n) \oplus (1-\alpha_n)T_1 x_n,\; \alpha_{n-1} f(T_2 y_{n-1}) \oplus (1-\alpha_{n-1})T_1 x_{n-1}\bigr) \\
&\quad + d\bigl(\alpha_n g(T_1 x_n) \oplus (1-\alpha_n)T_2 y_n,\; \alpha_{n-1} g(T_1 x_{n-1}) \oplus (1-\alpha_{n-1})T_2 y_{n-1}\bigr) \\
&\le d\bigl(\alpha_n f(T_2 y_n) \oplus (1-\alpha_n)T_1 x_n,\; \alpha_n f(T_2 y_{n-1}) \oplus (1-\alpha_n)T_1 x_{n-1}\bigr) \\
&\quad + d\bigl(\alpha_n f(T_2 y_{n-1}) \oplus (1-\alpha_n)T_1 x_{n-1},\; \alpha_{n-1} f(T_2 y_{n-1}) \oplus (1-\alpha_{n-1})T_1 x_{n-1}\bigr) \\
&\quad + d\bigl(\alpha_n g(T_1 x_n) \oplus (1-\alpha_n)T_2 y_n,\; \alpha_n g(T_1 x_{n-1}) \oplus (1-\alpha_n)T_2 y_{n-1}\bigr) \\
&\quad + d\bigl(\alpha_n g(T_1 x_{n-1}) \oplus (1-\alpha_n)T_2 y_{n-1},\; \alpha_{n-1} g(T_1 x_{n-1}) \oplus (1-\alpha_{n-1})T_2 y_{n-1}\bigr) \\
&\le \alpha_n d\bigl(f(T_2 y_n), f(T_2 y_{n-1})\bigr) + (1-\alpha_n)d(T_1 x_n, T_1 x_{n-1}) + |\alpha_n - \alpha_{n-1}|\,d\bigl(f(T_2 y_{n-1}), T_1 x_{n-1}\bigr) \\
&\quad + \alpha_n d\bigl(g(T_1 x_n), g(T_1 x_{n-1})\bigr) + (1-\alpha_n)d(T_2 y_n, T_2 y_{n-1}) + |\alpha_n - \alpha_{n-1}|\,d\bigl(g(T_1 x_{n-1}), T_2 y_{n-1}\bigr) \\
&\le \alpha_n\alpha\,d(T_2 y_n, T_2 y_{n-1}) + (1-\alpha_n)d(T_1 x_n, T_1 x_{n-1}) + |\alpha_n - \alpha_{n-1}|\,d\bigl(f(T_2 y_{n-1}), T_1 x_{n-1}\bigr) \\
&\quad + \alpha_n\alpha\,d(T_1 x_n, T_1 x_{n-1}) + (1-\alpha_n)d(T_2 y_n, T_2 y_{n-1}) + |\alpha_n - \alpha_{n-1}|\,d\bigl(g(T_1 x_{n-1}), T_2 y_{n-1}\bigr) \\
&\le \bigl(1 - \alpha_n(1-\alpha)\bigr)\bigl[d(x_n, x_{n-1}) + d(y_n, y_{n-1})\bigr] + M|\alpha_n - \alpha_{n-1}|.
\end{aligned}$$
By conditions (H2) and (H3) and Lemma 2.5, we have
$$d(x_{n+1}, x_n) + d(y_{n+1}, y_n) \to 0,$$
(3.8)

and thus $d(x_{n+1}, x_n) \to 0$ and $d(y_{n+1}, y_n) \to 0$.

Consequently, by condition (H1), we have
$$\begin{aligned}
d(x_n, T_1 x_n) + d(y_n, T_2 y_n) &\le d(x_n, x_{n+1}) + d(x_{n+1}, T_1 x_n) + d(y_n, y_{n+1}) + d(y_{n+1}, T_2 y_n) \\
&= d(x_n, x_{n+1}) + d\bigl(\alpha_n f(T_2 y_n) \oplus (1-\alpha_n)T_1 x_n, T_1 x_n\bigr) \\
&\quad + d(y_n, y_{n+1}) + d\bigl(\alpha_n g(T_1 x_n) \oplus (1-\alpha_n)T_2 y_n, T_2 y_n\bigr) \\
&= d(x_n, x_{n+1}) + \alpha_n d\bigl(f(T_2 y_n), T_1 x_n\bigr) + d(y_n, y_{n+1}) + \alpha_n d\bigl(g(T_1 x_n), T_2 y_n\bigr) \\
&\to 0 \quad (n \to \infty).
\end{aligned}$$
(3.9)
This implies that
$$d(x_n, T_1 x_n) \to 0, \qquad d(y_n, T_2 y_n) \to 0 \quad (\text{as } n \to \infty).$$
(3.10)
Let $\{x_t\}$ and $\{y_t\}$ be two nets in $C$ such that
$$\begin{cases} x_t = tf(T_2 y_t) \oplus (1-t)T_1 x_t, \\ y_t = tg(T_1 x_t) \oplus (1-t)T_2 y_t. \end{cases}$$
By Theorem 3.2, we have that $x_t \to \tilde{x}$ and $y_t \to \tilde{y}$ as $t \to 0$, where $\tilde{x} = P_{F(T_1)}f(\tilde{y})$ and $\tilde{y} = P_{F(T_2)}g(\tilde{x})$, which solves the variational inequality (1.6). Now, we claim that
$$\limsup_{n \to \infty}\Bigl[\langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_n\tilde{x}} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_n\tilde{y}} \rangle\Bigr] \le 0.$$
From Lemma 2.6, we have
$$\begin{aligned}
d^2(x_t, x_n) + d^2(y_t, y_n) &= \langle \overrightarrow{x_t x_n}, \overrightarrow{x_t x_n} \rangle + \langle \overrightarrow{y_t y_n}, \overrightarrow{y_t y_n} \rangle \\
&\le t\langle \overrightarrow{f(T_2 y_t)x_n}, \overrightarrow{x_t x_n} \rangle + (1-t)\langle \overrightarrow{T_1(x_t)x_n}, \overrightarrow{x_t x_n} \rangle \\
&\quad + t\langle \overrightarrow{g(T_1 x_t)y_n}, \overrightarrow{y_t y_n} \rangle + (1-t)\langle \overrightarrow{T_2(y_t)y_n}, \overrightarrow{y_t y_n} \rangle \\
&= t\langle \overrightarrow{f(T_2 y_t)f(\tilde{y})}, \overrightarrow{x_t x_n} \rangle + t\langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_t x_n} \rangle + t\langle \overrightarrow{\tilde{x}x_t}, \overrightarrow{x_t x_n} \rangle + t\langle \overrightarrow{x_t x_n}, \overrightarrow{x_t x_n} \rangle \\
&\quad + t\langle \overrightarrow{g(T_1 x_t)g(\tilde{x})}, \overrightarrow{y_t y_n} \rangle + t\langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_t y_n} \rangle + t\langle \overrightarrow{\tilde{y}y_t}, \overrightarrow{y_t y_n} \rangle + t\langle \overrightarrow{y_t y_n}, \overrightarrow{y_t y_n} \rangle \\
&\quad + (1-t)\langle \overrightarrow{T_1(x_t)T_1(x_n)}, \overrightarrow{x_t x_n} \rangle + (1-t)\langle \overrightarrow{T_1(x_n)x_n}, \overrightarrow{x_t x_n} \rangle \\
&\quad + (1-t)\langle \overrightarrow{T_2(y_t)T_2(y_n)}, \overrightarrow{y_t y_n} \rangle + (1-t)\langle \overrightarrow{T_2(y_n)y_n}, \overrightarrow{y_t y_n} \rangle \\
&\le t\alpha\,d(y_t, \tilde{y})\,d(x_t, x_n) + t\langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_t x_n} \rangle + t\,d(\tilde{x}, x_t)\,d(x_t, x_n) + t\,d^2(x_t, x_n) \\
&\quad + t\alpha\,d(x_t, \tilde{x})\,d(y_t, y_n) + t\langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_t y_n} \rangle + t\,d(\tilde{y}, y_t)\,d(y_t, y_n) + t\,d^2(y_t, y_n) \\
&\quad + (1-t)\,d^2(x_t, x_n) + (1-t)\,d\bigl(T_1(x_n), x_n\bigr)\,d(x_t, x_n) + (1-t)\,d^2(y_t, y_n) + (1-t)\,d\bigl(T_2(y_n), y_n\bigr)\,d(y_t, y_n) \\
&\le t\alpha\,d(y_t, \tilde{y})M + t\langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_t x_n} \rangle + t\,d(\tilde{x}, x_t)M + t\,d^2(x_t, x_n) \\
&\quad + t\alpha\,d(x_t, \tilde{x})M + t\langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_t y_n} \rangle + t\,d(\tilde{y}, y_t)M + t\,d^2(y_t, y_n) \\
&\quad + (1-t)\,d^2(x_t, x_n) + (1-t)\,d\bigl(T_1(x_n), x_n\bigr)M + (1-t)\,d^2(y_t, y_n) + (1-t)\,d\bigl(T_2(y_n), y_n\bigr)M \\
&\le \bigl[d^2(x_t, x_n) + d^2(y_t, y_n)\bigr] + tM\alpha\bigl[d(x_t, \tilde{x}) + d(y_t, \tilde{y})\bigr] + tM\bigl[d(\tilde{x}, x_t) + d(\tilde{y}, y_t)\bigr] \\
&\quad + M\bigl[d\bigl(T_1(x_n), x_n\bigr) + d\bigl(T_2(y_n), y_n\bigr)\bigr] + t\Bigl[\langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_t x_n} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_t y_n} \rangle\Bigr],
\end{aligned}$$
where $M \ge \max\bigl\{\sup\{d(x_t, x_n) : t \in (0, 1), n \ge 0\},\; \sup\{d(y_t, y_n) : t \in (0, 1), n \ge 0\}\bigr\}$. Simplifying this, we have
$$\langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_n x_t} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_n y_t} \rangle \le M(1+\alpha)\bigl[d(x_t, \tilde{x}) + d(y_t, \tilde{y})\bigr] + \frac{M}{t}\bigl[d\bigl(T_1(x_n), x_n\bigr) + d\bigl(T_2(y_n), y_n\bigr)\bigr].$$
(3.11)
Taking the upper limit as $n \to \infty$ first and then letting $t \to 0$ on both sides of (3.11), we have
$$\limsup_{t \to 0}\,\limsup_{n \to \infty}\Bigl[\langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_n x_t} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_n y_t} \rangle\Bigr] \le 0.$$
Since $x_t \to \tilde{x}$ and $y_t \to \tilde{y}$ as $t \to 0$, by the continuity of the metric $d$ it follows that
$$\limsup_{t \to 0}\Bigl[\langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_n x_t} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_n y_t} \rangle\Bigr] = \langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_n\tilde{x}} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_n\tilde{y}} \rangle.$$
This implies that, for any $\epsilon > 0$, there exists $\delta > 0$ such that for any $t \in (0, \delta)$ we have
$$\langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_n\tilde{x}} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_n\tilde{y}} \rangle \le \langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_n x_t} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_n y_t} \rangle + \epsilon.$$
(3.12)
Taking the upper limit as $n \to \infty$ first and then letting $t \to 0$ in (3.12), we obtain
$$\limsup_{n \to \infty}\Bigl[\langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_n\tilde{x}} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_n\tilde{y}} \rangle\Bigr] \le \epsilon.$$
Since $\epsilon > 0$ is arbitrary, we have
$$\limsup_{n \to \infty}\Bigl[\langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_n\tilde{x}} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_n\tilde{y}} \rangle\Bigr] \le 0.$$
Finally, we prove that $x_n \to \tilde{x}$ and $y_n \to \tilde{y}$ as $n \to \infty$. Indeed, taking $u_n = \alpha_n\tilde{x} \oplus (1-\alpha_n)T_1 x_n$ and $v_n = \alpha_n\tilde{y} \oplus (1-\alpha_n)T_2 y_n$ for any $n \in \mathbb{N}$, it follows from Lemma 2.6(i) that
$$\begin{aligned}
d^2(x_{n+1}, \tilde{x}) + d^2(y_{n+1}, \tilde{y}) &\le d^2(u_n, \tilde{x}) + d^2(v_n, \tilde{y}) + 2\langle \overrightarrow{x_{n+1}u_n}, \overrightarrow{x_{n+1}\tilde{x}} \rangle + 2\langle \overrightarrow{y_{n+1}v_n}, \overrightarrow{y_{n+1}\tilde{y}} \rangle \\
&\le (1-\alpha_n)^2\bigl[d^2(x_n, \tilde{x}) + d^2(y_n, \tilde{y})\bigr] \\
&\quad + 2\bigl[\alpha_n\langle \overrightarrow{f(T_2 y_n)u_n}, \overrightarrow{x_{n+1}\tilde{x}} \rangle + (1-\alpha_n)\langle \overrightarrow{T_1(x_n)u_n}, \overrightarrow{x_{n+1}\tilde{x}} \rangle\bigr] \\
&\quad + 2\bigl[\alpha_n\langle \overrightarrow{g(T_1 x_n)v_n}, \overrightarrow{y_{n+1}\tilde{y}} \rangle + (1-\alpha_n)\langle \overrightarrow{T_2(y_n)v_n}, \overrightarrow{y_{n+1}\tilde{y}} \rangle\bigr] \\
&\le (1-\alpha_n)^2\bigl[d^2(x_n, \tilde{x}) + d^2(y_n, \tilde{y})\bigr] \\
&\quad + 2\bigl[\alpha_n^2\langle \overrightarrow{f(T_2 y_n)\tilde{x}}, \overrightarrow{x_{n+1}\tilde{x}} \rangle + \alpha_n(1-\alpha_n)\langle \overrightarrow{f(T_2 y_n)T_1(x_n)}, \overrightarrow{x_{n+1}\tilde{x}} \rangle \\
&\qquad + (1-\alpha_n)\alpha_n\langle \overrightarrow{T_1(x_n)\tilde{x}}, \overrightarrow{x_{n+1}\tilde{x}} \rangle + (1-\alpha_n)^2\langle \overrightarrow{T_1(x_n)T_1(x_n)}, \overrightarrow{x_{n+1}\tilde{x}} \rangle\bigr] \\
&\quad + 2\bigl[\alpha_n^2\langle \overrightarrow{g(T_1 x_n)\tilde{y}}, \overrightarrow{y_{n+1}\tilde{y}} \rangle + \alpha_n(1-\alpha_n)\langle \overrightarrow{g(T_1 x_n)T_2(y_n)}, \overrightarrow{y_{n+1}\tilde{y}} \rangle \\
&\qquad + (1-\alpha_n)\alpha_n\langle \overrightarrow{T_2(y_n)\tilde{y}}, \overrightarrow{y_{n+1}\tilde{y}} \rangle + (1-\alpha_n)^2\langle \overrightarrow{T_2(y_n)T_2(y_n)}, \overrightarrow{y_{n+1}\tilde{y}} \rangle\bigr] \\
&= (1-\alpha_n)^2\bigl[d^2(x_n, \tilde{x}) + d^2(y_n, \tilde{y})\bigr] + 2\alpha_n\bigl[\langle \overrightarrow{f(T_2 y_n)\tilde{x}}, \overrightarrow{x_{n+1}\tilde{x}} \rangle + \langle \overrightarrow{g(T_1 x_n)\tilde{y}}, \overrightarrow{y_{n+1}\tilde{y}} \rangle\bigr] \\
&= (1-\alpha_n)^2\bigl[d^2(x_n, \tilde{x}) + d^2(y_n, \tilde{y})\bigr] + 2\alpha_n\bigl[\langle \overrightarrow{f(T_2 y_n)f(\tilde{y})}, \overrightarrow{x_{n+1}\tilde{x}} \rangle + \langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_{n+1}\tilde{x}} \rangle \\
&\qquad + \langle \overrightarrow{g(T_1 x_n)g(\tilde{x})}, \overrightarrow{y_{n+1}\tilde{y}} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_{n+1}\tilde{y}} \rangle\bigr] \\
&\le (1-\alpha_n)^2\bigl[d^2(x_n, \tilde{x}) + d^2(y_n, \tilde{y})\bigr] + 2\alpha_n\alpha\bigl[d(y_n, \tilde{y})\,d(x_{n+1}, \tilde{x}) + d(x_n, \tilde{x})\,d(y_{n+1}, \tilde{y})\bigr] \\
&\quad + 2\alpha_n\bigl[\langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_{n+1}\tilde{x}} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_{n+1}\tilde{y}} \rangle\bigr] \\
&\le (1-\alpha_n)^2\bigl[d^2(x_n, \tilde{x}) + d^2(y_n, \tilde{y})\bigr] + \alpha_n\alpha\bigl[d^2(y_n, \tilde{y}) + d^2(x_{n+1}, \tilde{x}) + d^2(x_n, \tilde{x}) + d^2(y_{n+1}, \tilde{y})\bigr] \\
&\quad + 2\alpha_n\bigl[\langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_{n+1}\tilde{x}} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_{n+1}\tilde{y}} \rangle\bigr],
\end{aligned}$$
(3.13)
which implies that
$$\begin{aligned}
d^2(x_{n+1}, \tilde{x}) + d^2(y_{n+1}, \tilde{y}) &\le \frac{1 - (2-\alpha)\alpha_n + \alpha_n^2}{1 - \alpha\alpha_n}\bigl[d^2(x_n, \tilde{x}) + d^2(y_n, \tilde{y})\bigr] \\
&\quad + \frac{2\alpha_n}{1 - \alpha\alpha_n}\bigl[\langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_{n+1}\tilde{x}} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_{n+1}\tilde{y}} \rangle\bigr] \\
&\le \frac{1 - (2-\alpha)\alpha_n}{1 - \alpha\alpha_n}\bigl[d^2(x_n, \tilde{x}) + d^2(y_n, \tilde{y})\bigr] \\
&\quad + \frac{2\alpha_n}{1 - \alpha\alpha_n}\bigl[\langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_{n+1}\tilde{x}} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_{n+1}\tilde{y}} \rangle\bigr] + \frac{\alpha_n^2}{1-\alpha}M,
\end{aligned}$$
(3.14)
where $M > \sup_{n \ge 0}\bigl[d^2(x_n, \tilde{x}) + d^2(y_n, \tilde{y})\bigr]$. Thus,
$$d^2(x_{n+1}, \tilde{x}) + d^2(y_{n+1}, \tilde{y}) \le (1 - \gamma_n)\bigl[d^2(x_n, \tilde{x}) + d^2(y_n, \tilde{y})\bigr] + \gamma_n\beta_n,$$
(3.15)
where
$$\gamma_n = \frac{2(1-\alpha)\alpha_n}{1 - \alpha\alpha_n} \quad \text{and} \quad \beta_n = \frac{(1 - \alpha\alpha_n)\alpha_n}{2(1-\alpha)^2}M + \frac{1}{1-\alpha}\bigl[\langle \overrightarrow{f(\tilde{y})\tilde{x}}, \overrightarrow{x_{n+1}\tilde{x}} \rangle + \langle \overrightarrow{g(\tilde{x})\tilde{y}}, \overrightarrow{y_{n+1}\tilde{y}} \rangle\bigr].$$

Note that $\sum_{n=0}^{\infty}\gamma_n = \infty$ by (H2), and $\limsup_{n \to \infty}\beta_n \le 0$ by (H1) and the claim above. Applying Lemma 2.5, we have $d^2(x_n, \tilde{x}) + d^2(y_n, \tilde{y}) \to 0$. Hence $x_n \to \tilde{x}$ and $y_n \to \tilde{y}$ as $n \to \infty$. This completes the proof of Theorem 3.3. □

Declarations

Acknowledgements

The authors would like to express their thanks to the referees for their helpful suggestions and comments. This work was supported by the Scientific Research Fund of the Sichuan Provincial Education Department (11ZA221) and the Scientific Research Fund of the Science and Technology Department of Sichuan Province (2011JYZ010).

Authors’ Affiliations

(1)
Department of Mathematics, Yibin University, Yibin, Sichuan, 644007, China
(2)
College of Statistics and Mathematics, Yunnan University of Finance and Economics, Kunming, Yunnan, 650221, China

References

  1. Buzogany E, Mezei I, Varga V: Two-variable variational-hemivariational inequalities. Stud. Univ. Babeş-Bolyai, Math. 2002, 47: 31–41.
  2. Kassay G, Kolumban J, Pales Z: On Nash stationary points. Publ. Math. (Debr.) 1999, 54: 267–279.
  3. Kassay G, Kolumban J, Pales Z: Factorization of Minty and Stampacchia variational inequality systems. Eur. J. Oper. Res. 2002, 143: 377–389. 10.1016/S0377-2217(02)00290-4
  4. Kassay G, Kolumban J: System of multi-valued variational inequalities. Publ. Math. (Debr.) 2000, 56: 185–195.
  5. Verma RU: Projection methods, algorithms, and a new system of nonlinear variational inequalities. Comput. Math. Appl. 2001, 41: 1025–1031. 10.1016/S0898-1221(00)00336-9
  6. Kinderlehrer D, Stampacchia G: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York; 1980.
  7. Yamada I, Ogura N: Hybrid steepest descent method for variational inequality problem over the fixed point set of certain quasi-nonexpansive mappings. Numer. Funct. Anal. Optim. 2004, 25: 619–655.
  8. Xu HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. 10.1016/j.jmaa.2004.04.059
  9. Zegeye H, Shahzad N: Strong convergence theorem for a common point of solution of variational inequality and fixed point problem. Adv. Fixed Point Theory 2012, 2: 374–397.
  10. Luo H, Wang Y: Iterative approximation for the common solutions of a infinite variational inequality system for inverse-strongly accretive mappings. J. Math. Comput. Sci. 2012, 2: 1660–1670.
  11. Dhompongsa S, Kaewkhao A, Panyanak B: On Kirk's strong convergence theorem for multivalued nonexpansive mappings on CAT(0) spaces. Nonlinear Anal., Theory Methods Appl. 2012, 75(2): 459–468. 10.1016/j.na.2011.08.046
  12. Saejung S: Halpern's iteration in CAT(0) spaces. Fixed Point Theory Appl. 2010, 2010: Article ID 471781. 10.1155/2010/471781
  13. Shi LY, Chen RD: Strong convergence of viscosity approximation methods for nonexpansive mappings in CAT(0) spaces. J. Appl. Math. 2012, 2012: Article ID 421050. 10.1155/2012/421050
  14. Berg ID, Nikolaev IG: Quasilinearization and curvature of Alexandrov spaces. Geom. Dedic. 2008, 133: 195–218. 10.1007/s10711-008-9243-3
  15. Wangkeeree R, Preechasilp P: Viscosity approximation methods for nonexpansive mappings in CAT(0) spaces. J. Inequal. Appl. 2013, 2013: Article ID 93. 10.1186/1029-242X-2013-93
  16. Dhompongsa S, Panyanak B: On Δ-convergence theorems in CAT(0) spaces. Comput. Math. Appl. 2008, 56(10): 2572–2579. 10.1016/j.camwa.2008.05.036
  17. Chaoha P, Phon-on A: A note on fixed point sets in CAT(0) spaces. J. Math. Anal. Appl. 2006, 320(2): 983–987. 10.1016/j.jmaa.2005.08.006
  18. Bridson MR, Haefliger A: Metric Spaces of Non-Positive Curvature. Grundlehren der Mathematischen Wissenschaften 319. Springer, Berlin; 1999.
  19. Kirk WA: Geodesic geometry and fixed point theory. II. In International Conference on Fixed Point Theory and Applications. Yokohama Publishers, Yokohama; 2004: 113–142.
  20. Banach S: Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fundam. Math. 1922, 3: 133–181.
  21. Kirk WA: Geodesic geometry and fixed point theory. In Seminar of Mathematical Analysis (Malaga/Seville, 2002/2003), Colecc. Abierta 64. University of Seville, Secretary of Publications, Seville; 2003: 195–225.
  22. Dehghan H, Rooin J: A characterization of metric projection in CAT(0) spaces. In International Conference on Functional Equation, Geometric Functions and Applications (ICFGA 2012), Payame Noor University, Tabriz, Iran, 10-12 May 2012; 2012: 41–43.
  23. Kakavandi BA: Weak topologies in complete CAT(0) metric spaces. Proc. Am. Math. Soc. 2013, 141: 1029–1039.
  24. Kirk WA, Panyanak B: A concept of convergence in geodesic spaces. Nonlinear Anal., Theory Methods Appl. 2008, 68(12): 3689–3696. 10.1016/j.na.2007.04.011

Copyright

© Liu and Chang; licensee Springer. 2013

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
