
  • Research
  • Open Access

Superimposed algorithms for the split equilibrium problems and fixed point problems

Journal of Inequalities and Applications 2014, 2014:380

https://doi.org/10.1186/1029-242X-2014-380

  • Received: 29 March 2014
  • Accepted: 11 September 2014
  • Published:

Abstract

The split common problem of finding equilibrium points and fixed points is studied. A parallel superimposed algorithm is introduced to solve this split common problem, and strong convergence theorems are established with some analysis techniques.

MSC: 49J30, 47H09, 65K10.

Keywords

  • split equilibrium problem
  • split fixed point
  • nonexpansive mapping
  • parallel superimposed algorithms

1 Introduction

In the present article, our main purpose is to study the split problem. First, we recall some relevant background in the literature.

Problem 1: the split feasibility problem

Let $C$ and $Q$ be two nonempty closed convex subsets of Hilbert spaces $H_1$ and $H_2$, respectively, and let $A : H_1 \to H_2$ be a bounded linear operator. The problem of finding a point $x$ such that
$$x \in C \quad\text{and}\quad Ax \in Q$$
(1.1)
is called the split feasibility problem; it was first introduced by Censor and Elfving [1] in finite-dimensional Hilbert spaces. Such problems arise in intensity-modulated radiation therapy when one attempts to describe physical dose constraints and equivalent uniform dose constraints within a single model. When $C \subseteq \mathbb{R}^N$ and $Q \subseteq \mathbb{R}^M$ are a single pair of sets, Censor and Elfving [1] introduced the simultaneous multiprojections algorithm:
$$x_{n+1} = A^{-1}\bigl(\lambda_1 I + \lambda_2 A^{\ast}A\bigr)^{-1}\bigl(\lambda_1 v_1^{n+1} + \lambda_2 A^{\ast}A v_2^{n+1}\bigr), \quad n \ge 0,$$
(1.2)
where $\lambda_1 > 0$, $\lambda_2 > 0$, $\lambda_1 + \lambda_2 = 1$, $v_i^{n+1} = P_{C_i}f_i(b_n)$ ($i = 1, 2$), and $b_n$ is the solution of the equation
$$\sum_{i=1}^{2}\lambda_i f_i(b_n) = \sum_{i=1}^{2} f_i\bigl(v_i^{n}\bigr).$$
Note that the simultaneous multiprojections algorithm (1.2) involves the matrix inversion $A^{-1}$ at each iterative step, which is very time-consuming, particularly when the dimensions are large. To overcome this drawback, Byrne [2] derived a new algorithm, called the CQ-algorithm:
$$x_{n+1} = P_C\bigl(x_n - \tau A^{\ast}(I - P_Q)Ax_n\bigr), \quad n \ge 0,$$

where $\tau \in (0, 2/L)$ with $L$ being the largest eigenvalue of the matrix $A^{\ast}A$, $I$ is the identity matrix (or operator), and $P_C$ and $P_Q$ denote the orthogonal projections onto $C$ and $Q$, respectively. The CQ-algorithm and its variants have since been studied extensively for the split feasibility problem; see, for instance, [3–13].
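
To make the CQ iteration concrete, the following is a minimal numerical sketch in $\mathbb{R}^N$ and $\mathbb{R}^M$; the ball-shaped sets, the step size $\tau = 1/L$, and all names are illustrative choices of ours, not taken from [2].

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, iters=500):
    """Byrne's CQ iteration: x_{n+1} = P_C(x_n - tau * A^T (I - P_Q) A x_n)."""
    L = np.linalg.norm(A, 2) ** 2        # largest eigenvalue of A^T A (squared spectral norm of A)
    tau = 1.0 / L                        # any tau in (0, 2/L) is admissible
    x = x0
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - tau * A.T @ (Ax - proj_Q(Ax)))
    return x

# Toy split feasibility problem: C = unit ball in R^3, Q = ball of radius 2 in R^5.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))
proj_C = lambda x: x / max(1.0, np.linalg.norm(x))
proj_Q = lambda y: y / max(1.0, np.linalg.norm(y) / 2.0)
x = cq_algorithm(A, proj_C, proj_Q, rng.normal(size=3))
print(np.linalg.norm(A @ x - proj_Q(A @ x)))   # residual of the constraint Ax in Q; should be ~0
```

Since the output of `proj_C` always lies in $C$, only the $Q$-residual needs to be monitored here.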

Problem 2: the split common fixed point problem

Since every closed convex subset of a Hilbert space is the fixed point set of its associated metric projection, the split feasibility problem becomes a special case of the split common fixed point problem of finding a point $x$ with the property
$$x \in \operatorname{Fix}(U) \quad\text{and}\quad Ax \in \operatorname{Fix}(T).$$
(1.3)
This problem was first introduced by Censor and Segal [14], who devised an algorithm generating a sequence $\{x_n\}$ by the iterative procedure
$$x_{n+1} = U\bigl(x_n - \gamma A^{\ast}(I - T)Ax_n\bigr), \quad n \in \mathbb{N}.$$
(1.4)
Moudafi [15] extended (1.4) to the following relaxed algorithm:
$$x_{n+1} = U_{\alpha_n}\bigl(x_n + \gamma A^{\ast}(T - \beta I)Ax_n\bigr), \quad n \in \mathbb{N},$$

where $\beta \in (0,1)$ and $\alpha_n \in (0,1)$ are relaxation parameters. Subsequently, Wang and Xu [16] considered a general cyclic algorithm. Very recently, the split problem has also been extended to other settings, such as split monotone variational inclusions and split variational inequalities; please refer to [15, 17–22] and [23–25].
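
As a quick check of how these formulations relate (this simply spells out special case 2 listed below), taking $U = P_C$ and $T = P_Q$ in (1.4) gives
$$x_{n+1} = P_C\bigl(x_n - \gamma A^{\ast}(I - P_Q)Ax_n\bigr),$$
which is exactly Byrne's CQ-iteration with $\gamma$ playing the role of $\tau$.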

Problem 3: the equilibrium problem

Consider the following equilibrium problem: find $x^{\ast} \in C$ such that
$$F(x^{\ast}, x) \ge 0, \quad \forall x \in C,$$
(1.5)

where $F : C \times C \to \mathbb{R}$ is a bifunction. We denote by $\operatorname{EP}(F)$ the set of solutions of (1.5).

The equilibrium problem, in its various forms, finds application in optimization problems, fixed point problems, and convex minimization problems; in other words, equilibrium problems provide a unified model for problems arising in physics, engineering, economics, and so on (see [26–29]).
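
Two standard special cases (our illustrations, in the spirit of [26]) may help fix ideas: taking $F(x, y) = \langle Mx, y - x\rangle$ for a mapping $M : C \to H$ turns (1.5) into the classical variational inequality
$$\text{find } x^{\ast} \in C \text{ such that } \langle Mx^{\ast}, x - x^{\ast}\rangle \ge 0, \quad \forall x \in C,$$
while taking $F(x, y) = \varphi(y) - \varphi(x)$ for a convex function $\varphi$ turns (1.5) into the minimization of $\varphi$ over $C$.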

Motivated by the split common fixed point problem and the equilibrium problem, He and Du [30] presented the following split equilibrium problem and fixed point problem:
$$\text{Find a point } x \in \operatorname{Fix}(T) \cap \operatorname{EP}(F) \text{ such that } Ax \in \operatorname{Fix}(S) \cap \operatorname{EP}(G),$$
(1.6)
where $\operatorname{Fix}(S)$ and $\operatorname{Fix}(T)$ are the sets of fixed points of two nonlinear mappings $S$ and $T$, respectively, $\operatorname{EP}(F)$ and $\operatorname{EP}(G)$ are the solution sets of two equilibrium problems with bifunctions $F$ and $G$, respectively, and $A$ is a bounded linear mapping. Denote the solution set of (1.6) by
$$\Gamma = \bigl\{x \in \operatorname{Fix}(T) \cap \operatorname{EP}(F) : Ax \in \operatorname{Fix}(S) \cap \operatorname{EP}(G)\bigr\}.$$

Special cases

  1. If $F = 0$ and $G = 0$, then (1.6) reduces to the following split common fixed point problem, which has been considered by many authors, for example, [14, 15, 17] and [21]:
     $$\text{Find a point } x \in \operatorname{Fix}(T) \text{ such that } Ax \in \operatorname{Fix}(S).$$
     (1.7)
  2. If $S = P_Q$ and $T = P_C$, then (1.7) reduces to the split feasibility problem (1.1).
  3. If $S$ and $T$ are both identity operators, then (1.6) reduces to the split equilibrium problem considered in [18]:
     $$\text{Find a point } x \in \operatorname{EP}(F) \text{ such that } Ax \in \operatorname{EP}(G).$$

Building on the work in this direction, in this paper we develop new algorithms for solving the split equilibrium and fixed point problem (1.6). We first introduce a parallel superimposed algorithm and then establish strong convergence theorems by means of standard analysis techniques.

2 Preliminaries

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$.

Definition 2.1 A mapping $T : C \to C$ is called nonexpansive if
$$\|Tx - Ty\| \le \|x - y\|$$

for all $x, y \in C$.

We will use $\operatorname{Fix}(T)$ to denote the set of fixed points of $T$, that is, $\operatorname{Fix}(T) = \{x \in C : x = Tx\}$.

Definition 2.2 A mapping $f : C \to C$ is called contractive if
$$\|f(x) - f(y)\| \le \rho\|x - y\|$$

for all $x, y \in C$ and some constant $\rho \in (0,1)$. In this case, we call $f$ a $\rho$-contraction.

Definition 2.3 A bounded linear operator $B : H \to H$ is called strongly positive if there exists a constant $\gamma > 0$ such that
$$\langle Bx, x\rangle \ge \gamma\|x\|^2$$

for all $x \in H$.

Definition 2.4 We call $P_C : H \to C$ the metric projection if, for each $x \in H$,
$$\|x - P_C(x)\| = \inf\{\|x - y\| : y \in C\}.$$
It is well known that the metric projection $P_C : H \to C$ is characterized by
$$\langle x - P_C(x), y - P_C(x)\rangle \le 0$$
for all $x \in H$, $y \in C$. From this, we can deduce that $P_C$ is firmly nonexpansive, that is,
$$\|P_C(x) - P_C(y)\|^2 \le \langle x - y, P_C(x) - P_C(y)\rangle$$
(2.1)

for all $x, y \in H$. Hence $P_C$ is also nonexpansive.
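
As a small numerical illustration of Definition 2.4 and inequality (2.1), the sketch below projects onto a Euclidean ball; the set $C$ and the helper names are our choices, not part of the paper.

```python
import numpy as np

def project_ball(x, center, radius):
    """Metric projection onto the closed ball C = {y : ||y - center|| <= radius}."""
    d = x - center
    dist = np.linalg.norm(d)
    return x.copy() if dist <= radius else center + radius * d / dist

# Spot-check the firm nonexpansiveness inequality (2.1) at two random points.
rng = np.random.default_rng(0)
center, radius = np.zeros(3), 1.0
x, y = rng.normal(size=3), rng.normal(size=3)
Px, Py = project_ball(x, center, radius), project_ball(y, center, radius)
lhs = np.linalg.norm(Px - Py) ** 2        # ||P_C(x) - P_C(y)||^2
rhs = np.dot(x - y, Px - Py)              # <x - y, P_C(x) - P_C(y)>
print(lhs <= rhs + 1e-12)                 # True, as (2.1) predicts
```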

It is well known that in a real Hilbert space $H$ the following two equalities hold:
$$\|tx + (1-t)y\|^2 = t\|x\|^2 + (1-t)\|y\|^2 - t(1-t)\|x - y\|^2$$
(2.2)
for all $x, y \in H$ and $t \in [0,1]$, and
$$\|x + y\|^2 = \|x\|^2 + 2\langle x, y\rangle + \|y\|^2$$
(2.3)
for all $x, y \in H$. It follows that
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle$$
(2.4)

for all $x, y \in H$.

Throughout this paper, we assume that a bifunction $F : C \times C \to \mathbb{R}$ satisfies the following conditions:

  (H1) $F(x, x) = 0$ for all $x \in C$;
  (H2) $F$ is monotone, i.e., $F(x, y) + F(y, x) \le 0$ for all $x, y \in C$;
  (H3) for each $x, y, z \in C$, $\lim_{t \downarrow 0} F(tz + (1-t)x, y) \le F(x, y)$;
  (H4) for each $x \in C$, $y \mapsto F(x, y)$ is convex and lower semicontinuous.

Lemma 2.5 ([31])

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $F : C \times C \to \mathbb{R}$ be a bifunction which satisfies conditions (H1)-(H4). Let $r > 0$ and $x \in C$. Then there exists $z \in C$ such that
$$F(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0, \quad \forall y \in C.$$
Further, if $U_r^F(x) = \{z \in C : F(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0, \forall y \in C\}$, then the following hold:

  (i) $U_r^F$ is single-valued and $U_r^F$ is firmly nonexpansive, i.e., for any $x, y \in H$, $\|U_r^F x - U_r^F y\|^2 \le \langle U_r^F x - U_r^F y, x - y\rangle$;
  (ii) $\operatorname{EP}(F)$ is closed and convex and $\operatorname{EP}(F) = \operatorname{Fix}(U_r^F)$.
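
A quick consequence worth recording (our remark; it is not stated explicitly above but follows at once from the definitions): when $F \equiv 0$, the inequality defining $U_r^F(x)$ reduces to
$$\langle y - z, z - x\rangle \ge 0, \quad \forall y \in C,$$
which is exactly the characterization of the metric projection given after Definition 2.4, so $U_r^0 = P_C$ for every $r > 0$. This is what turns the resolvent-based algorithms of Section 3 into projection algorithms in the corollaries there.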

     

Lemma 2.6 ([32])

Let the mapping $U_r^F$ be defined as in Lemma 2.5. Then, for $r, s > 0$ and $x, y \in H$,
$$\|U_r^F(x) - U_s^F(y)\| \le \|x - y\| + \left|\frac{s - r}{s}\right|\|U_s^F(y) - y\|.$$

Lemma 2.7 ([33])

Let $\{x_n\}$ and $\{y_n\}$ be two bounded sequences in a Banach space $X$ and let $\{\beta_n\}$ be a sequence in $[0,1]$ with $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$. Suppose that
$$x_{n+1} = (1 - \beta_n)y_n + \beta_n x_n$$
for all $n \ge 0$ and
$$\limsup_{n\to\infty}\bigl(\|y_{n+1} - y_n\| - \|x_{n+1} - x_n\|\bigr) \le 0.$$

Then $\lim_{n\to\infty}\|y_n - x_n\| = 0$.

Lemma 2.8 ([34])

Let $C$ be a closed convex subset of a real Hilbert space $H$ and let $S : C \to C$ be a nonexpansive mapping. Then the mapping $I - S$ is demiclosed. That is, if $\{x_n\}$ is a sequence in $C$ such that $x_n \to x$ weakly and $(I - S)x_n \to y$ strongly, then $(I - S)x = y$.

Lemma 2.9 ([35])

Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n)a_n + \delta_n, \quad n \in \mathbb{N},$$
where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence such that

  (1) $\sum_{n=1}^{\infty}\gamma_n = \infty$;
  (2) $\limsup_{n\to\infty}\delta_n/\gamma_n \le 0$ or $\sum_{n=1}^{\infty}|\delta_n| < \infty$.

Then $\lim_{n\to\infty}a_n = 0$.

3 Main results

In this section, we introduce our algorithm and prove our main results.

Let $H_1$ and $H_2$ be two real Hilbert spaces and let $C$ and $D$ be two nonempty closed convex subsets of $H_1$ and $H_2$, respectively. Let $A : H_1 \to H_2$ be a bounded linear operator with adjoint $A^{\ast}$, and let $B$ be a strongly positive bounded linear operator on $H_1$ with coefficient $\gamma > 0$. Let $f : C \to C$ be a $\rho$-contraction, and let $F : C \times C \to \mathbb{R}$ and $G : D \times D \to \mathbb{R}$ be two bifunctions satisfying conditions (H1)-(H4). Let $S : D \to D$ and $T : C \to C$ be two nonexpansive mappings.

Algorithm 3.1 Taking $x_0 \in H_1$ arbitrarily, we define a sequence $\{x_n\}$ by the following:
$$x_{n+1} = \alpha_n\sigma f(x_n) + \beta_n x_n + \bigl((1-\beta_n)I - \alpha_n B\bigr)TU_{\lambda_n}^{F}\bigl(x_n + \delta A^{\ast}(SU_{\gamma_n}^{G} - I)Ax_n\bigr)$$
(3.1)

for all $n \in \mathbb{N}$, where $\{\lambda_n\}$ and $\{\gamma_n\}$ are two real number sequences in $(0, \infty)$, $\delta \in (0, 1/\|A\|^2)$ and $\sigma > 0$ are two constants, and $\{\alpha_n\}$ and $\{\beta_n\}$ are two real number sequences in $(0,1)$.
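
For readers who wish to experiment, here is a minimal finite-dimensional sketch of a single step of (3.1). The function name, the calling convention, and the assumption that the resolvents $U_{\lambda_n}^{F}$, $U_{\gamma_n}^{G}$ of Lemma 2.5 are available as user-supplied routines are ours; this is an illustration, not the authors' implementation.

```python
import numpy as np

def step_3_1(x, A, B, T, S, U_F, U_G, f, alpha, beta, lam, gam, sigma, delta):
    """One step of iteration (3.1).  A, B are matrices; T, S, f are callables;
    U_F(., lam) and U_G(., gam) are the resolvents of Lemma 2.5 (user supplied)."""
    Ax = A @ x
    y = x + delta * (A.T @ (S(U_G(Ax, gam)) - Ax))   # y_n = x_n + delta A*(S U^G_{gamma_n} - I) A x_n
    t = T(U_F(y, lam))                               # T U^F_{lambda_n} y_n
    # x_{n+1} = alpha sigma f(x_n) + beta x_n + ((1 - beta) I - alpha B) T U^F_{lambda_n} y_n
    return alpha * sigma * f(x) + beta * x + (1.0 - beta) * t - alpha * (B @ t)
```

With $S = T = I$ and $F \equiv 0$, $G \equiv 0$ (so that the resolvents reduce to metric projections), the same routine realizes Algorithm 3.7 below (with $D = Q$).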

Theorem 3.2 Suppose $\Gamma \ne \emptyset$ and suppose the following conditions hold:

  (C1) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$;
  (C2) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$;
  (C3) $\liminf_{n\to\infty}\lambda_n > 0$ and $\lim_{n\to\infty}\lambda_{n+1}/\lambda_n = 1$;
  (C4) $\liminf_{n\to\infty}\gamma_n > 0$ and $\lim_{n\to\infty}\gamma_{n+1}/\gamma_n = 1$;
  (C5) $\sigma\rho < \gamma$.

Then the sequence $\{x_n\}$ generated by algorithm (3.1) converges strongly to $p = \operatorname{Proj}_{\Gamma}(\sigma f + I - B)p$, which solves the following variational inequality (VI):
$$\langle(\sigma f - B)x, y - x\rangle \le 0, \quad \forall y \in \Gamma.$$
(3.2)
Proof First, we note that the solution of (3.2) is unique; we denote it by $p$, that is, $p = \operatorname{Proj}_{\Gamma}(\sigma f + I - B)p$. Then we have $p \in \operatorname{Fix}(T) \cap \operatorname{EP}(F)$ and $Ap \in \operatorname{Fix}(S) \cap \operatorname{EP}(G)$. Set $z_n = U_{\gamma_n}^{G}Ax_n$, $y_n = x_n + \delta A^{\ast}(SU_{\gamma_n}^{G} - I)Ax_n$, and $u_n = U_{\lambda_n}^{F}\bigl(x_n + \delta A^{\ast}(SU_{\gamma_n}^{G} - I)Ax_n\bigr)$ for all $n \in \mathbb{N}$; thus $u_n = U_{\lambda_n}^{F}y_n$. From Lemma 2.5, we know that $U_{\lambda_n}^{F}$ and $U_{\gamma_n}^{G}$ are firmly nonexpansive. By these facts, we have the following conclusions:
$$\|z_n - Ap\| = \|U_{\gamma_n}^{G}Ax_n - Ap\| \le \|Ax_n - Ap\|,$$
(3.3)
$$\|u_n - p\| = \|U_{\lambda_n}^{F}y_n - p\| \le \|y_n - p\|$$
(3.4)
and
$$\|SU_{\gamma_n}^{G}Ax_n - Ap\|^2 \le \|U_{\gamma_n}^{G}Ax_n - Ap\|^2 \le \|Ax_n - Ap\|^2 - \|U_{\gamma_n}^{G}Ax_n - Ax_n\|^2.$$
(3.5)
Applying Lemma 2.6, we deduce
$$\|u_{n+1} - u_n\| = \|U_{\lambda_{n+1}}^{F}y_{n+1} - U_{\lambda_n}^{F}y_n\| \le \|y_{n+1} - y_n\| + \left|\frac{\lambda_{n+1} - \lambda_n}{\lambda_{n+1}}\right|\|u_{n+1} - y_{n+1}\|$$
(3.6)
and
$$\|z_{n+1} - z_n\| = \|U_{\gamma_{n+1}}^{G}Ax_{n+1} - U_{\gamma_n}^{G}Ax_n\| \le \|Ax_{n+1} - Ax_n\| + \left|\frac{\gamma_{n+1} - \gamma_n}{\gamma_{n+1}}\right|\|z_{n+1} - Ax_{n+1}\|.$$
(3.7)
From (3.1), we have
$$\begin{aligned}
\|x_{n+1} - p\| &= \bigl\|\alpha_n\bigl(\sigma f(x_n) - Bp\bigr) + \beta_n(x_n - p) + \bigl((1-\beta_n)I - \alpha_n B\bigr)(Tu_n - p)\bigr\|\\
&\le \alpha_n\sigma\|f(x_n) - f(p)\| + \alpha_n\|\sigma f(p) - Bp\| + \beta_n\|x_n - p\| + (1-\beta_n-\alpha_n\gamma)\|u_n - p\|.
\end{aligned}$$
(3.8)
Using (2.3), we get
$$\|y_n - p\|^2 = \|x_n - p + \delta A^{\ast}(Sz_n - Ax_n)\|^2 = \|x_n - p\|^2 + \delta^2\|A^{\ast}(Sz_n - Ax_n)\|^2 + 2\delta\langle x_n - p, A^{\ast}(Sz_n - Ax_n)\rangle.$$
(3.9)
Since $A$ is a linear operator with adjoint $A^{\ast}$, we have
$$\begin{aligned}
\langle x_n - p, A^{\ast}(Sz_n - Ax_n)\rangle &= \langle A(x_n - p), Sz_n - Ax_n\rangle\\
&= \bigl\langle Ax_n - Ap + Sz_n - Ax_n - (Sz_n - Ax_n), Sz_n - Ax_n\bigr\rangle\\
&= \langle Sz_n - Ap, Sz_n - Ax_n\rangle - \|Sz_n - Ax_n\|^2.
\end{aligned}$$
(3.10)
Again from (2.3), we obtain
$$\langle Sz_n - Ap, Sz_n - Ax_n\rangle = \tfrac{1}{2}\bigl(\|Sz_n - Ap\|^2 + \|Sz_n - Ax_n\|^2 - \|Ax_n - Ap\|^2\bigr).$$
(3.11)
From (3.5), (3.10), and (3.11), we have
$$\begin{aligned}
\langle x_n - p, A^{\ast}(Sz_n - Ax_n)\rangle &= \tfrac{1}{2}\bigl(\|Sz_n - Ap\|^2 + \|Sz_n - Ax_n\|^2 - \|Ax_n - Ap\|^2\bigr) - \|Sz_n - Ax_n\|^2\\
&\le \tfrac{1}{2}\bigl(\|Ax_n - Ap\|^2 - \|z_n - Ax_n\|^2 + \|Sz_n - Ax_n\|^2 - \|Ax_n - Ap\|^2\bigr) - \|Sz_n - Ax_n\|^2\\
&= -\tfrac{1}{2}\|z_n - Ax_n\|^2 - \tfrac{1}{2}\|Sz_n - Ax_n\|^2.
\end{aligned}$$
(3.12)
Substituting (3.12) into (3.9), we deduce
$$\begin{aligned}
\|y_n - p\|^2 &\le \|x_n - p\|^2 + \delta^2\|A\|^2\|Sz_n - Ax_n\|^2 + 2\delta\bigl(-\tfrac{1}{2}\|z_n - Ax_n\|^2 - \tfrac{1}{2}\|Sz_n - Ax_n\|^2\bigr)\\
&= \|x_n - p\|^2 + \bigl(\delta^2\|A\|^2 - \delta\bigr)\|Sz_n - Ax_n\|^2 - \delta\|z_n - Ax_n\|^2\\
&\le \|x_n - p\|^2.
\end{aligned}$$
It follows that
$$\|y_n - p\| \le \|x_n - p\|.$$
Thus, from (3.8), we get
$$\begin{aligned}
\|x_{n+1} - p\| &\le \alpha_n\sigma\rho\|x_n - p\| + \alpha_n\|\sigma f(p) - Bp\| + \beta_n\|x_n - p\| + (1-\beta_n-\alpha_n\gamma)\|x_n - p\|\\
&= \bigl[1 - (\gamma - \sigma\rho)\alpha_n\bigr]\|x_n - p\| + \alpha_n\|\sigma f(p) - Bp\|\\
&\le \max\Bigl\{\|x_n - p\|, \frac{\|\sigma f(p) - Bp\|}{\gamma - \sigma\rho}\Bigr\}.
\end{aligned}$$

By induction, the boundedness of the sequence $\{x_n\}$ follows.

Next, we estimate $\|u_{n+1} - u_n\|$. Observe that
$$\begin{aligned}
\|y_{n+1} - y_n\|^2 &= \bigl\|x_{n+1} - x_n + \delta\bigl[A^{\ast}(Sz_{n+1} - Ax_{n+1}) - A^{\ast}(Sz_n - Ax_n)\bigr]\bigr\|^2\\
&= \|x_{n+1} - x_n\|^2 + \delta^2\bigl\|A^{\ast}(Sz_{n+1} - Ax_{n+1}) - A^{\ast}(Sz_n - Ax_n)\bigr\|^2\\
&\quad + 2\delta\bigl\langle x_{n+1} - x_n, A^{\ast}\bigl[(Sz_{n+1} - Ax_{n+1}) - (Sz_n - Ax_n)\bigr]\bigr\rangle\\
&\le \|x_{n+1} - x_n\|^2 + \delta^2\|A\|^2\bigl\|Sz_{n+1} - Sz_n - (Ax_{n+1} - Ax_n)\bigr\|^2\\
&\quad + 2\delta\bigl\langle Ax_{n+1} - Ax_n, Sz_{n+1} - Sz_n - (Ax_{n+1} - Ax_n)\bigr\rangle\\
&= \|x_{n+1} - x_n\|^2 + \delta^2\|A\|^2\bigl\|Sz_{n+1} - Sz_n - (Ax_{n+1} - Ax_n)\bigr\|^2\\
&\quad + 2\delta\bigl\langle Sz_{n+1} - Sz_n, Sz_{n+1} - Sz_n - (Ax_{n+1} - Ax_n)\bigr\rangle - 2\delta\bigl\|Sz_{n+1} - Sz_n - (Ax_{n+1} - Ax_n)\bigr\|^2\\
&= \|x_{n+1} - x_n\|^2 + \delta^2\|A\|^2\bigl\|Sz_{n+1} - Sz_n - (Ax_{n+1} - Ax_n)\bigr\|^2\\
&\quad + \delta\bigl(\|Sz_{n+1} - Sz_n\|^2 + \bigl\|Sz_{n+1} - Sz_n - (Ax_{n+1} - Ax_n)\bigr\|^2 - \|Ax_{n+1} - Ax_n\|^2\bigr)\\
&\quad - 2\delta\bigl\|Sz_{n+1} - Sz_n - (Ax_{n+1} - Ax_n)\bigr\|^2\\
&= \|x_{n+1} - x_n\|^2 + \bigl(\delta^2\|A\|^2 - \delta\bigr)\bigl\|Sz_{n+1} - Sz_n - (Ax_{n+1} - Ax_n)\bigr\|^2 + \delta\bigl(\|Sz_{n+1} - Sz_n\|^2 - \|Ax_{n+1} - Ax_n\|^2\bigr)\\
&\le \|x_{n+1} - x_n\|^2 + \bigl(\delta^2\|A\|^2 - \delta\bigr)\bigl\|Sz_{n+1} - Sz_n - (Ax_{n+1} - Ax_n)\bigr\|^2 + \delta\bigl(\|z_{n+1} - z_n\|^2 - \|Ax_{n+1} - Ax_n\|^2\bigr).
\end{aligned}$$
(3.13)
Since $\delta \in (0, 1/\|A\|^2)$, we derive by virtue of (3.7) and (3.13) that
$$\|y_{n+1} - y_n\|^2 \le \|x_{n+1} - x_n\|^2 + \delta\left|\frac{\gamma_{n+1} - \gamma_n}{\gamma_{n+1}}\right|\bigl(\|z_{n+1} - z_n\| + \|Ax_{n+1} - Ax_n\|\bigr).$$
(3.14)
According to (3.6) and (3.14), we have
$$\begin{aligned}
\|u_{n+1} - u_n\|^2 &= \|U_{\lambda_{n+1}}^{F}y_{n+1} - U_{\lambda_n}^{F}y_n\|^2\\
&\le \Bigl(\|y_{n+1} - y_n\| + \Bigl|\frac{\lambda_{n+1} - \lambda_n}{\lambda_{n+1}}\Bigr|\|u_{n+1} - y_{n+1}\|\Bigr)^2\\
&\le \|y_{n+1} - y_n\|^2 + \Bigl|\frac{\lambda_{n+1} - \lambda_n}{\lambda_{n+1}}\Bigr|\Bigl(2\|y_{n+1} - y_n\|\|u_{n+1} - y_{n+1}\| + \Bigl|\frac{\lambda_{n+1} - \lambda_n}{\lambda_{n+1}}\Bigr|\|u_{n+1} - y_{n+1}\|^2\Bigr)\\
&\le \|x_{n+1} - x_n\|^2 + \Bigl|\frac{\lambda_{n+1} - \lambda_n}{\lambda_{n+1}}\Bigr|\Bigl(2\|y_{n+1} - y_n\|\|u_{n+1} - y_{n+1}\| + \Bigl|\frac{\lambda_{n+1} - \lambda_n}{\lambda_{n+1}}\Bigr|\|u_{n+1} - y_{n+1}\|^2\Bigr)\\
&\quad + \delta\Bigl|\frac{\gamma_{n+1} - \gamma_n}{\gamma_{n+1}}\Bigr|\bigl(\|z_{n+1} - z_n\| + \|Ax_{n+1} - Ax_n\|\bigr)\\
&\le \|x_{n+1} - x_n\|^2 + \Bigl(\Bigl|\frac{\lambda_{n+1} - \lambda_n}{\lambda_{n+1}}\Bigr| + \delta\Bigl|\frac{\gamma_{n+1} - \gamma_n}{\gamma_{n+1}}\Bigr|\Bigr)M,
\end{aligned}$$
(3.15)
where $M > 0$ is a constant such that
$$\sup_n\Bigl\{2\|y_{n+1} - y_n\|\|u_{n+1} - y_{n+1}\| + \Bigl|\frac{\lambda_{n+1} - \lambda_n}{\lambda_{n+1}}\Bigr|\|u_{n+1} - y_{n+1}\|^2 + \delta\bigl(\|z_{n+1} - z_n\| + \|Ax_{n+1} - Ax_n\|\bigr)\Bigr\} \le M.$$
Therefore,
$$\|u_{n+1} - u_n\| \le \|x_{n+1} - x_n\| + \Bigl(\Bigl|\frac{\lambda_{n+1} - \lambda_n}{\lambda_{n+1}}\Bigr| + \delta\Bigl|\frac{\gamma_{n+1} - \gamma_n}{\gamma_{n+1}}\Bigr|\Bigr)M.$$
(3.16)
From (3.1), we can write $x_{n+1} = \beta_n x_n + (1-\beta_n)w_n$, where $w_n = Tu_n + \frac{\alpha_n}{1-\beta_n}\bigl(\sigma f(x_n) - BTu_n\bigr)$ for all $n \in \mathbb{N}$. Then we have
$$\begin{aligned}
\|w_{n+1} - w_n\| &= \Bigl\|Tu_{n+1} - Tu_n + \frac{\alpha_{n+1}}{1-\beta_{n+1}}\bigl(\sigma f(x_{n+1}) - BTu_{n+1}\bigr) - \frac{\alpha_n}{1-\beta_n}\bigl(\sigma f(x_n) - BTu_n\bigr)\Bigr\|\\
&\le \|Tu_{n+1} - Tu_n\| + \frac{\alpha_{n+1}}{1-\beta_{n+1}}\|\sigma f(x_{n+1}) - BTu_{n+1}\| + \frac{\alpha_n}{1-\beta_n}\|\sigma f(x_n) - BTu_n\|\\
&\le \|u_{n+1} - u_n\| + \frac{\alpha_{n+1}}{1-\beta_{n+1}}\|\sigma f(x_{n+1}) - BTu_{n+1}\| + \frac{\alpha_n}{1-\beta_n}\|\sigma f(x_n) - BTu_n\|\\
&\le \|x_{n+1} - x_n\| + \Bigl(\Bigl|\frac{\lambda_{n+1} - \lambda_n}{\lambda_{n+1}}\Bigr| + \delta\Bigl|\frac{\gamma_{n+1} - \gamma_n}{\gamma_{n+1}}\Bigr|\Bigr)M\\
&\quad + \frac{\alpha_{n+1}}{1-\beta_{n+1}}\|\sigma f(x_{n+1}) - BTu_{n+1}\| + \frac{\alpha_n}{1-\beta_n}\|\sigma f(x_n) - BTu_n\|.
\end{aligned}$$
Noting conditions (C1)-(C4) and the boundedness of the sequences $\{u_n\}$, $\{y_n\}$, $\{z_n\}$, $\{Ax_n\}$, $\{f(x_n)\}$, and $\{BTu_n\}$, we have
$$\limsup_{n\to\infty}\bigl(\|w_{n+1} - w_n\| - \|x_{n+1} - x_n\|\bigr) \le 0.$$
By Lemma 2.7, we deduce
$$\lim_{n\to\infty}\|x_n - w_n\| = 0.$$
Hence,
$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = \lim_{n\to\infty}(1-\beta_n)\|x_n - w_n\| = 0.$$
(3.17)
Since $x_{n+1} - x_n = \alpha_n\bigl(\sigma f(x_n) - BTu_n\bigr) + (1-\beta_n)(Tu_n - x_n)$, we obtain
$$\|Tu_n - x_n\| \le \frac{1}{1-\beta_n}\bigl\{\alpha_n\|\sigma f(x_n) - BTu_n\| + \|x_{n+1} - x_n\|\bigr\}.$$
Thus,
$$\lim_{n\to\infty}\|x_n - Tu_n\| = 0.$$
(3.18)
Using the firm nonexpansiveness of $U_{\lambda_n}^{F}$, we have
$$\begin{aligned}
\|u_n - p\|^2 &= \|U_{\lambda_n}^{F}y_n - p\|^2 \le \|y_n - p\|^2 - \|U_{\lambda_n}^{F}y_n - y_n\|^2 = \|y_n - p\|^2 - \|u_n - y_n\|^2\\
&= \|y_n - p\|^2 - \bigl\|u_n - x_n - \delta A^{\ast}(SU_{\gamma_n}^{G} - I)Ax_n\bigr\|^2\\
&= \|y_n - p\|^2 - \|u_n - x_n\|^2 - \delta^2\bigl\|A^{\ast}(SU_{\gamma_n}^{G} - I)Ax_n\bigr\|^2 + 2\delta\bigl\langle u_n - x_n, A^{\ast}(SU_{\gamma_n}^{G} - I)Ax_n\bigr\rangle.
\end{aligned}$$
(3.19)
Applying (2.4) to (3.1), we deduce
$$\begin{aligned}
\|x_{n+1} - p\|^2 &= \bigl\|\alpha_n\bigl(\sigma f(x_n) - Bp\bigr) + \beta_n(x_n - Tu_n) + (I - \alpha_n B)(Tu_n - p)\bigr\|^2\\
&\le \bigl\|(I - \alpha_n B)(Tu_n - p) + \beta_n(x_n - Tu_n)\bigr\|^2 + 2\alpha_n\bigl\langle\sigma f(x_n) - Bp, x_{n+1} - p\bigr\rangle\\
&\le \bigl[\|I - \alpha_n B\|\|Tu_n - p\| + \beta_n\|x_n - Tu_n\|\bigr]^2 + 2\alpha_n\|\sigma f(x_n) - Bp\|\|x_{n+1} - p\|\\
&\le \bigl[(1 - \alpha_n\gamma)\|u_n - p\| + \beta_n\|x_n - Tu_n\|\bigr]^2 + 2\alpha_n\|\sigma f(x_n) - Bp\|\|x_{n+1} - p\|\\
&= (1-\alpha_n\gamma)^2\|u_n - p\|^2 + \beta_n^2\|x_n - Tu_n\|^2 + 2(1-\alpha_n\gamma)\beta_n\|u_n - p\|\|x_n - Tu_n\|\\
&\quad + 2\alpha_n\|\sigma f(x_n) - Bp\|\|x_{n+1} - p\|.
\end{aligned}$$
(3.20)
It follows from (3.19) that
$$\|x_{n+1} - p\|^2 \le \|x_n - p\|^2 - \|u_n - y_n\|^2 + \beta_n^2\|x_n - Tu_n\|^2 + 2(1-\alpha_n\gamma)\beta_n\|u_n - p\|\|x_n - Tu_n\| + 2\alpha_n\|\sigma f(x_n) - Bp\|\|x_{n+1} - p\|.$$
(3.21)
Then
$$\begin{aligned}
\|u_n - y_n\|^2 &\le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \beta_n^2\|x_n - Tu_n\|^2 + 2(1-\alpha_n\gamma)\beta_n\|u_n - p\|\|x_n - Tu_n\|\\
&\quad + 2\alpha_n\|\sigma f(x_n) - Bp\|\|x_{n+1} - p\|\\
&\le \bigl(\|x_n - p\| + \|x_{n+1} - p\|\bigr)\|x_{n+1} - x_n\| + \beta_n^2\|x_n - Tu_n\|^2 + 2(1-\alpha_n\gamma)\beta_n\|u_n - p\|\|x_n - Tu_n\|\\
&\quad + 2\alpha_n\|\sigma f(x_n) - Bp\|\|x_{n+1} - p\|.
\end{aligned}$$
This together with (C1), (3.17), and (3.18) implies that
$$\lim_{n\to\infty}\|u_n - y_n\| = 0.$$
(3.22)
From (3.20), we have
$$\begin{aligned}
\|x_{n+1} - p\|^2 &\le (1-\alpha_n\gamma)^2\|u_n - p\|^2 + \beta_n^2\|x_n - Tu_n\|^2 + 2(1-\alpha_n\gamma)\beta_n\|u_n - p\|\|x_n - Tu_n\|\\
&\quad + 2\alpha_n\|\sigma f(x_n) - Bp\|\|x_{n+1} - p\|\\
&\le \|y_n - p\|^2 + \beta_n^2\|x_n - Tu_n\|^2 + 2(1-\alpha_n\gamma)\beta_n\|u_n - p\|\|x_n - Tu_n\| + 2\alpha_n\|\sigma f(x_n) - Bp\|\|x_{n+1} - p\|\\
&\le \|x_n - p\|^2 + \bigl(\delta^2\|A\|^2 - \delta\bigr)\|Sz_n - Ax_n\|^2 - \delta\|z_n - Ax_n\|^2 + \beta_n^2\|x_n - Tu_n\|^2\\
&\quad + 2(1-\alpha_n\gamma)\beta_n\|u_n - p\|\|x_n - Tu_n\| + 2\alpha_n\|\sigma f(x_n) - Bp\|\|x_{n+1} - p\|.
\end{aligned}$$
Hence,
$$\begin{aligned}
\bigl(\delta - \delta^2\|A\|^2\bigr)\|Sz_n - Ax_n\|^2 + \delta\|z_n - Ax_n\|^2 &\le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \beta_n^2\|x_n - Tu_n\|^2\\
&\quad + 2(1-\alpha_n\gamma)\beta_n\|u_n - p\|\|x_n - Tu_n\| + 2\alpha_n\|\sigma f(x_n) - Bp\|\|x_{n+1} - p\|\\
&\le \bigl(\|x_n - p\| + \|x_{n+1} - p\|\bigr)\|x_{n+1} - x_n\| + \beta_n^2\|x_n - Tu_n\|^2\\
&\quad + 2(1-\alpha_n\gamma)\beta_n\|u_n - p\|\|x_n - Tu_n\| + 2\alpha_n\|\sigma f(x_n) - Bp\|\|x_{n+1} - p\|,
\end{aligned}$$
which implies that
$$\lim_{n\to\infty}\|Sz_n - Ax_n\| = \lim_{n\to\infty}\|z_n - Ax_n\| = 0.$$
So,
$$\lim_{n\to\infty}\|Sz_n - z_n\| = 0.$$
(3.23)
Note that
$$\|y_n - x_n\| = \delta\bigl\|A^{\ast}(SU_{\gamma_n}^{G} - I)Ax_n\bigr\| \le \delta\|A\|\|Sz_n - Ax_n\|.$$
Therefore,
$$\lim_{n\to\infty}\|x_n - y_n\| = 0.$$
(3.24)
From (3.18), (3.22), and (3.24), we get
$$\lim_{n\to\infty}\|x_n - Tx_n\| = 0.$$
(3.25)
Now, we show that $\limsup_{n\to\infty}\langle(\sigma f - B)p, x_n - p\rangle \le 0$. Choose a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that
$$\limsup_{n\to\infty}\langle(\sigma f - B)p, x_n - p\rangle = \lim_{i\to\infty}\langle(\sigma f - B)p, x_{n_i} - p\rangle.$$
(3.26)
Since the sequence $\{x_{n_i}\}$ is bounded, we can choose a subsequence $\{x_{n_{i_j}}\}$ of $\{x_{n_i}\}$ converging weakly to some point $z$. For the sake of convenience, we assume (without loss of generality) that $x_{n_i} \rightharpoonup z$. Consequently, we derive from the above conclusions that
$$y_{n_i} \rightharpoonup z, \qquad u_{n_i} \rightharpoonup z, \qquad Ax_{n_i} \rightharpoonup Az \quad\text{and}\quad z_{n_i} \rightharpoonup Az.$$
(3.27)

By the demiclosedness principle for nonexpansive mappings applied to $S$ and $T$ (see Lemma 2.8), we deduce $z \in \operatorname{Fix}(T)$ and $Az \in \operatorname{Fix}(S)$ (according to (3.25) and (3.23), respectively).

Next, we show that $z \in \operatorname{EP}(F)$. Since $u_n = U_{\lambda_n}^{F}y_n$, we have
$$F(u_n, y) + \frac{1}{\lambda_n}\langle y - u_n, u_n - y_n\rangle \ge 0, \quad \forall y \in C.$$
(3.28)
It follows from the monotonicity of $F$ that
$$\frac{1}{\lambda_n}\langle y - u_n, u_n - y_n\rangle \ge F(y, u_n),$$
(3.29)
and hence
$$\Bigl\langle y - u_{n_i}, \frac{u_{n_i} - y_{n_i}}{\lambda_{n_i}}\Bigr\rangle \ge F(y, u_{n_i}).$$
(3.30)
Since $\|u_n - y_n\| \to 0$, $u_{n_i} \rightharpoonup z$, and $\liminf_{n\to\infty}\lambda_n > 0$, we obtain $\frac{u_{n_i} - y_{n_i}}{\lambda_{n_i}} \to 0$. It follows that $0 \ge F(y, z)$. For $t$ with $0 < t \le 1$ and $y \in C$, let $y_t = ty + (1-t)z \in C$. It follows that $F(y_t, z) \le 0$. So,
$$0 = F(y_t, y_t) \le tF(y_t, y) + (1-t)F(y_t, z) \le tF(y_t, y).$$
(3.31)
Therefore, $0 \le F(y_t, y)$. Letting $t \downarrow 0$ and using (H3), we obtain $0 \le F(z, y)$ for every $y \in C$. This implies that $z \in \operatorname{EP}(F)$. Similarly, we can prove that $Az \in \operatorname{EP}(G)$. Thus we deduce $z \in \operatorname{Fix}(T) \cap \operatorname{EP}(F)$ and $Az \in \operatorname{Fix}(S) \cap \operatorname{EP}(G)$, that is, $z \in \Gamma$. Therefore,
$$\limsup_{n\to\infty}\langle(\sigma f - B)p, x_n - p\rangle = \lim_{i\to\infty}\langle(\sigma f - B)p, x_{n_i} - p\rangle = \langle(\sigma f - B)p, z - p\rangle \le 0.$$
(3.32)
Finally, we prove $x_n \to p$. From (3.1), we have
$$\begin{aligned}
\|x_{n+1} - p\|^2 &= \bigl\langle\alpha_n\bigl(\sigma f(x_n) - Bp\bigr) + \beta_n(x_n - p) + \bigl((1-\beta_n)I - \alpha_n B\bigr)(Tu_n - p), x_{n+1} - p\bigr\rangle\\
&= \alpha_n\bigl\langle\sigma f(x_n) - Bp, x_{n+1} - p\bigr\rangle + \beta_n\langle x_n - p, x_{n+1} - p\rangle + \bigl\langle\bigl((1-\beta_n)I - \alpha_n B\bigr)(Tu_n - p), x_{n+1} - p\bigr\rangle\\
&\le \alpha_n\sigma\bigl\langle f(x_n) - f(p), x_{n+1} - p\bigr\rangle + \alpha_n\bigl\langle\sigma f(p) - Bp, x_{n+1} - p\bigr\rangle + \beta_n\|x_n - p\|\|x_{n+1} - p\|\\
&\quad + (1-\beta_n-\alpha_n\gamma)\|Tu_n - p\|\|x_{n+1} - p\|\\
&\le \bigl[1 - (\gamma - \sigma\rho)\alpha_n\bigr]\|x_n - p\|\|x_{n+1} - p\| + \alpha_n\bigl\langle\sigma f(p) - Bp, x_{n+1} - p\bigr\rangle\\
&\le \frac{1 - (\gamma - \sigma\rho)\alpha_n}{2}\|x_n - p\|^2 + \frac{1}{2}\|x_{n+1} - p\|^2 + \alpha_n\bigl\langle\sigma f(p) - Bp, x_{n+1} - p\bigr\rangle.
\end{aligned}$$
(3.33)
It follows that
$$\|x_{n+1} - p\|^2 \le \bigl[1 - (\gamma - \sigma\rho)\alpha_n\bigr]\|x_n - p\|^2 + 2\alpha_n\bigl\langle\sigma f(p) - Bp, x_{n+1} - p\bigr\rangle.$$
(3.34)

Applying Lemma 2.9 and (3.32) to (3.34), we deduce $x_n \to p$. The proof is completed. □

Algorithm 3.3 Taking $x_0 \in H_1$ arbitrarily, we define a sequence $\{x_n\}$ by the following:
$$x_{n+1} = \alpha_n\sigma f(x_n) + \beta_n x_n + \bigl((1-\beta_n)I - \alpha_n B\bigr)T\bigl(x_n + \delta A^{\ast}(S - I)Ax_n\bigr)$$
(3.35)

for all $n \in \mathbb{N}$, where $\delta \in (0, 1/\|A\|^2)$ and $\sigma > 0$ are two constants and $\{\alpha_n\}$ and $\{\beta_n\}$ are two real number sequences in $(0,1)$.

Corollary 3.4 Suppose $\Gamma_1 = \{x \in \operatorname{Fix}(T) : Ax \in \operatorname{Fix}(S)\} \ne \emptyset$ and suppose the following conditions hold:

  (C1) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$;
  (C2) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$;
  (C3) $\sigma\rho < \gamma$.

Then the sequence $\{x_n\}$ generated by algorithm (3.35) converges strongly to $p = \operatorname{Proj}_{\Gamma_1}(\sigma f + I - B)p$, which solves the following VI:
$$\langle(\sigma f - B)x, y - x\rangle \le 0, \quad \forall y \in \Gamma_1.$$
Algorithm 3.5 Taking $x_0 \in H_1$ arbitrarily, we define a sequence $\{x_n\}$ by the following:
$$x_{n+1} = \alpha_n\sigma f(x_n) + \beta_n x_n + \bigl((1-\beta_n)I - \alpha_n B\bigr)U_{\lambda_n}^{F}\bigl(x_n + \delta A^{\ast}(U_{\gamma_n}^{G} - I)Ax_n\bigr)$$
(3.36)

for all $n \in \mathbb{N}$, where $\{\lambda_n\}$ and $\{\gamma_n\}$ are two real number sequences in $(0, \infty)$, $\delta \in (0, 1/\|A\|^2)$ and $\sigma > 0$ are two constants, and $\{\alpha_n\}$ and $\{\beta_n\}$ are two real number sequences in $(0,1)$.

Corollary 3.6 Suppose $\Gamma_2 = \{x \in \operatorname{EP}(F) : Ax \in \operatorname{EP}(G)\} \ne \emptyset$ and suppose the following conditions hold:

  (C1) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$;
  (C2) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$;
  (C3) $\liminf_{n\to\infty}\lambda_n > 0$ and $\lim_{n\to\infty}\lambda_{n+1}/\lambda_n = 1$;
  (C4) $\liminf_{n\to\infty}\gamma_n > 0$ and $\lim_{n\to\infty}\gamma_{n+1}/\gamma_n = 1$;
  (C5) $\sigma\rho < \gamma$.

Then the sequence $\{x_n\}$ generated by algorithm (3.36) converges strongly to $p = \operatorname{Proj}_{\Gamma_2}(\sigma f + I - B)p$, which solves the following VI:
$$\langle(\sigma f - B)x, y - x\rangle \le 0, \quad \forall y \in \Gamma_2.$$
Algorithm 3.7 Taking $x_0 \in H_1$ arbitrarily, we define a sequence $\{x_n\}$ by the following:
$$x_{n+1} = \alpha_n\sigma f(x_n) + \beta_n x_n + \bigl((1-\beta_n)I - \alpha_n B\bigr)P_C\bigl(x_n + \delta A^{\ast}(P_Q - I)Ax_n\bigr)$$
(3.37)

for all $n \in \mathbb{N}$, where $\delta \in (0, 1/\|A\|^2)$ and $\sigma > 0$ are two constants and $\{\alpha_n\}$ and $\{\beta_n\}$ are two real number sequences in $(0,1)$.

Corollary 3.8 Suppose $\Gamma_3 = \{x \in C : Ax \in Q\} \ne \emptyset$ and suppose the following conditions hold:

  (C1) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$;
  (C2) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$;
  (C3) $\sigma\rho < \gamma$.

Then the sequence $\{x_n\}$ generated by algorithm (3.37) converges strongly to $p = \operatorname{Proj}_{\Gamma_3}(\sigma f + I - B)p$, which solves the following VI:
$$\langle(\sigma f - B)x, y - x\rangle \le 0, \quad \forall y \in \Gamma_3.$$
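
To illustrate how Algorithm 3.7 and Corollary 3.8 can be used in practice, here is a toy instance in $\mathbb{R}^3$ and $\mathbb{R}^4$. The balls $C$ and $Q$, the choices $B = I$ (so $\gamma = 1$), $f(x) = x/2$ (a $\tfrac{1}{2}$-contraction, so $\sigma\rho < \gamma$), $\sigma = 1$, and all parameter values are our illustrative choices, made only so that (C1)-(C3) hold; they are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 3))
P_C = lambda x: x / max(1.0, np.linalg.norm(x))          # projection onto the unit ball in R^3
P_Q = lambda y: y / max(1.0, np.linalg.norm(y) / 2.0)    # projection onto the ball of radius 2 in R^4
f = lambda x: 0.5 * x                                    # rho-contraction with rho = 1/2
sigma = 1.0
delta = 0.9 / np.linalg.norm(A, 2) ** 2                  # delta in (0, 1/||A||^2)

x = rng.normal(size=3)
for n in range(1, 2001):
    alpha, beta = 1.0 / (n + 1), 0.5                     # (C1), (C2) hold for these choices
    Ax = A @ x
    t = P_C(x + delta * A.T @ (P_Q(Ax) - Ax))            # P_C(x_n + delta A^T (P_Q - I) A x_n)
    x = alpha * sigma * f(x) + beta * x + (1.0 - beta) * t - alpha * t   # ((1-beta)I - alpha B) t with B = I
print(np.linalg.norm(x - P_C(x)), np.linalg.norm(A @ x - P_Q(A @ x)))    # both residuals should be (near) 0
```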

Declarations

Acknowledgements

Li-Jun Zhu was supported in part by NSFC 61362033 and NZ13087. Yeong-Cheng Liou was supported in part by NSC 101-2628-E-230-001-MY3 and NSC 101-2622-E-230-005-CC3.

Authors’ Affiliations

(1)
School of Mathematics and Information Science, Beifang University of Nationalities, Yinchuan, 750021, China
(2)
School of Mathematics and Information Technology, Nanjing Xiaozhuang University, Nanjing, 211171, China
(3)
Department of Information Management, Cheng Shiu University, Kaohsiung, 833, Taiwan
(4)
Center for Fundamental Science, Kaohsiung Medical University, Kaohsiung, 807, Taiwan
(5)
Department of Mathematics, Tianjin Polytechnic University, Tianjin, 300387, China

References

  1. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221-239. doi:10.1007/BF02142692
  2. Byrne C: Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18: 441-453. doi:10.1088/0266-5611/18/2/310
  3. Ceng LC, Ansari QH, Yao JC: An extragradient method for split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64: 633-642. doi:10.1016/j.camwa.2011.12.074
  4. Dang Y, Gao Y: The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 2011, 27: Article ID 015007
  5. Wang F, Xu HK: Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010, 2010: Article ID 102085. doi:10.1155/2010/102085
  6. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26: Article ID 105018
  7. Yang Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004, 20: 1261-1266. doi:10.1088/0266-5611/20/4/014
  8. Yao Y, Wu J, Liou YC: Regularized methods for the split feasibility problem. Abstr. Appl. Anal. 2012, 2012: Article ID 140679. doi:10.1155/2012/140679
  9. Yao Y, Kim TH, Chebbi S, Xu HK: A modified extragradient method for the split feasibility and fixed point problems. J. Nonlinear Convex Anal. 2012, 13: 383-396
  10. Zhao J, Yang Q: Several solution methods for the split feasibility problem. Inverse Probl. 2005, 21: 1791-1799. doi:10.1088/0266-5611/21/5/017
  11. Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655-1665. doi:10.1088/0266-5611/21/5/009
  12. Wang Z, Yang Q, Yang Y: The relaxed inexact projection methods for the split feasibility problem. Appl. Math. Comput. 2010. doi:10.1016/j.amc.2010.11.058
  13. Ceng LC, Ansari QH, Yao JC: Relaxed extragradient methods for finding minimum-norm solutions of the split feasibility problem. Nonlinear Anal. 2012, 75: 2116-2125. doi:10.1016/j.na.2011.10.012
  14. Censor Y, Segal A: The split common fixed point problem for directed operators. J. Convex Anal. 2009, 16: 587-600
  15. Moudafi A: The split common fixed-point problem for demi-contractive mappings. Inverse Probl. 2010, 26: 587-600
  16. Wang F, Xu HK: Cyclic algorithms for split feasibility problems in Hilbert spaces. Nonlinear Anal. 2011, 74: 4105-4111. doi:10.1016/j.na.2011.03.044
  17. Moudafi A: A note on the split common fixed-point problem for quasi-nonexpansive operators. Nonlinear Anal. 2011, 74: 4083-4087. doi:10.1016/j.na.2011.03.041
  18. He ZH: The split equilibrium problems and its convergence algorithms. J. Inequal. Appl. 2012, 2012: Article ID 162
  19. He ZH, Du WS: Nonlinear algorithms approach to split common solution problems. Fixed Point Theory Appl. 2012, 2012: Article ID 130
  20. Moudafi A: Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150: 275-283. doi:10.1007/s10957-011-9814-6
  21. Byrne C, Censor Y, Gibali A, Reich S: The split common null point problem. J. Nonlinear Convex Anal. 2012, 13: 759-775
  22. Censor Y, Gibali A, Reich S: Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59: 301-323. doi:10.1007/s11075-011-9490-5
  23. Yao Y, Marino G, Muglia L: A modified Korpelevich’s method convergent to the minimum norm solution of a variational inequality. Optimization 2014, 63: 559-569. doi:10.1080/02331934.2012.674947
  24. Yao Y, Marino G, Xu HK, Liou YC: Construction of minimum norm fixed points of pseudocontractions in Hilbert spaces. J. Inequal. Appl. 2014, 2014: Article ID 206
  25. Chang SS, Wang L, Tang YK, Yang L: The split common fixed point problem for total asymptotically strictly pseudocontractive mappings. J. Appl. Math. 2012, 2012: Article ID 385638
  26. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123-145
  27. Moudafi A: Weak convergence theorems for nonexpansive mappings and equilibrium problems. J. Nonlinear Convex Anal. 2008, 9: 37-43
  28. Yao Y, Noor MA, Liou YC: On iterative methods for equilibrium problems. Nonlinear Anal. 2009, 70: 497-507. doi:10.1016/j.na.2007.12.021
  29. Ceng LC, Al-Homidan S, Ansari QH, Yao JC: An iterative scheme for equilibrium problems and fixed point problems of strict pseudocontraction mappings. J. Comput. Appl. Math. 2009, 223: 967-974. doi:10.1016/j.cam.2008.03.032
  30. He ZH, Du WS: On hybrid split problem and its nonlinear algorithms. Fixed Point Theory Appl. 2013, 2013: Article ID 47
  31. Combettes PL, Hirstoaga SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6: 117-136
  32. Cianciaruso F, Marino G, Muglia L, Yao Y: A hybrid projection algorithm for finding solutions of mixed equilibrium problem and variational inequality problem. Fixed Point Theory Appl. 2010, 2010: Article ID 383740
  33. Suzuki T: Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces. Fixed Point Theory Appl. 2005, 2005: 103-123
  34. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990
  35. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240-256. doi:10.1112/S0024610702003332

Copyright

© Zhu et al.; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
