
Strong convergence of iterative algorithms for the split equality problem

Abstract

Let $H_1$, $H_2$, $H_3$ be real Hilbert spaces, let $C\subseteq H_1$, $Q\subseteq H_2$ be two nonempty closed convex sets, and let $A: H_1\to H_3$, $B: H_2\to H_3$ be two bounded linear operators. The split equality problem (SEP) is to find $x\in C$, $y\in Q$ such that $Ax=By$. Recently, Moudafi presented the ACQA and RACQA algorithms for solving SEP; however, both algorithms converge only weakly. The aim of this paper is therefore to construct new algorithms for SEP for which strong convergence is guaranteed. First, we define the concept of the minimal norm solution of SEP. Using Tychonov regularization, we introduce two methods for obtaining such a minimal norm solution. We then introduce two algorithms, viewed as modifications of Moudafi's ACQA and RACQA algorithms and of the KM-CQ algorithm, respectively, that converge strongly to a solution of SEP. More importantly, the modifications of Moudafi's ACQA and RACQA algorithms converge strongly to the minimal norm solution of SEP. Finally, we introduce some other algorithms that converge strongly to a solution of SEP.

1 Introduction and preliminaries

Let C and Q be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and let $A: H_1\to H_2$ be a bounded linear operator. The split feasibility problem (SFP) is to find a point $x$ satisfying

$$x\in C,\qquad Ax\in Q,$$

if such a point exists. The SFP was first introduced by Censor and Elfving [1] and has attracted many authors' attention due to its applications in signal processing [1]. Various algorithms have been invented to solve it (see [2–7]).

Recently, Moudafi [8] proposed the following split equality problem (SEP): let $H_1$, $H_2$, $H_3$ be real Hilbert spaces, let $C\subseteq H_1$, $Q\subseteq H_2$ be two nonempty closed convex sets, and let $A: H_1\to H_3$, $B: H_2\to H_3$ be two bounded linear operators. Find $x\in C$, $y\in Q$ satisfying

$$Ax = By.$$
(1.1)

When B=I, SEP reduces to the well-known SFP. In the paper [8], Moudafi gave the following iterative algorithms for solving the split equality problem.

Alternating CQ-algorithm (ACQA):

$$\begin{cases} x_{k+1} = P_C\bigl(x_k - \gamma_k A^*(Ax_k - By_k)\bigr),\\ y_{k+1} = P_Q\bigl(y_k + \gamma_k B^*(Ax_{k+1} - By_k)\bigr).\end{cases}$$

Relaxed alternating CQ-algorithm (RACQA):

$$\begin{cases} x_{k+1} = P_{C_k}\bigl(x_k - \gamma A^*(Ax_k - By_k)\bigr),\\ y_{k+1} = P_{Q_k}\bigl(y_k + \beta B^*(Ax_{k+1} - By_k)\bigr).\end{cases}$$

However, the above algorithms converge only weakly to a solution of SEP.
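To make the ACQA scheme concrete, here is a minimal finite-dimensional sketch. The matrices $A$, $B$, the boxes $C=Q=[0,1]^2$, the starting points, and the step size are all illustrative assumptions, not data from the paper.

```python
import numpy as np

# Finite-dimensional sketch of Moudafi's ACQA iteration (illustrative data).
A = np.array([[1.0, 0.0], [0.0, 2.0]])   # A : H1 -> H3
B = np.array([[2.0, 0.0], [0.0, 1.0]])   # B : H2 -> H3

def P_C(x):  # projection onto the box C = [0,1]^2
    return np.clip(x, 0.0, 1.0)

def P_Q(y):  # projection onto the box Q = [0,1]^2
    return np.clip(y, 0.0, 1.0)

# For this data rho(G*G) = 5, so any constant step gamma < 2/5 works.
gamma = 0.2
x, y = np.array([0.9, 0.1]), np.array([0.2, 0.8])
for _ in range(200):
    x = P_C(x - gamma * A.T @ (A @ x - B @ y))
    y = P_Q(y + gamma * B.T @ (A @ x - B @ y))   # note: uses the new x, as in ACQA
residual = np.linalg.norm(A @ x - B @ y)
```

For this small problem the residual $\|Ax-By\|$ drops rapidly, although the paper's point is precisely that in infinite dimensions only weak convergence is guaranteed.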

It is therefore the aim of this paper to construct new algorithms for SEP for which strong convergence is guaranteed. The paper is organized as follows. In Section 2, we define the concept of the minimal norm solution of SEP (1.1); using Tychonov regularization, we obtain a net of solutions of regularized minimization problems that approximates this minimal norm solution (see Theorem 2.4). In Section 3, we introduce an algorithm, viewed as a modification of Moudafi's ACQA and RACQA algorithms, and prove its strong convergence; more importantly, its limit is the minimum-norm solution of SEP (1.1) (see Theorem 3.2). In Section 4, we introduce a KM-CQ-like iterative algorithm that converges strongly to a solution of SEP (1.1) (see Theorem 4.3). In Section 5, we introduce some other iterative algorithms that converge strongly to a solution of SEP (1.1).

Throughout the rest of this paper, $I$ denotes the identity operator on a Hilbert space $H$, $\operatorname{Fix}(T)$ is the set of fixed points of an operator $T$, and $\nabla f$ is the gradient of a functional $f: H\to\mathbb{R}$. An operator $T$ on $H$ is nonexpansive if $\|Tx-Ty\|\le\|x-y\|$ for all $x,y\in H$; $T$ is said to be averaged if there exist $0<\alpha<1$ and a nonexpansive operator $N$ such that $T=(1-\alpha)I+\alpha N$.

Let $P_S$ denote the projection from $H$ onto a nonempty closed convex subset $S$ of $H$; that is,

$$P_S(w) = \arg\min_{x\in S}\|x-w\|.$$

It is well known that $P_S(w)$ is characterized by the inequality

$$\langle w-P_S(w),\, x-P_S(w)\rangle \le 0,\quad \forall x\in S,$$

and that $P_S$ is nonexpansive and averaged.
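The characterizing inequality is easy to verify numerically. In the sketch below the closed convex set $S=[0,1]^2$ is an illustrative choice, for which the projection is coordinatewise clipping.

```python
import numpy as np

# Check <w - P_S(w), x - P_S(w)> <= 0 for the box S = [0,1]^2.
def P_S(w):
    return np.clip(w, 0.0, 1.0)   # projection onto a box = clipping

w = np.array([1.7, -0.3])         # a point outside S
p = P_S(w)                        # its projection, (1.0, 0.0)
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.uniform(0.0, 1.0, size=2)       # arbitrary point of S
    assert np.dot(w - p, x - p) <= 1e-12    # the inequality holds
```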

We now collect some elementary facts which will be used in the proofs of our main results.

Lemma 1.1 [9, 10]

Let $X$ be a Banach space, $C$ a closed convex subset of $X$, and $T: C\to C$ a nonexpansive mapping with $\operatorname{Fix}(T)\neq\emptyset$. If $\{x_n\}$ is a sequence in $C$ converging weakly to $x$ and $\{(I-T)x_n\}$ converges strongly to $y$, then $(I-T)x=y$.

Lemma 1.2 [11]

Let $\{s_n\}$ be a sequence of nonnegative real numbers, $\{\alpha_n\}$ a sequence in $[0,1]$ with $\sum_{n=1}^{\infty}\alpha_n=\infty$, $\{u_n\}$ a sequence of nonnegative real numbers with $\sum_{n=1}^{\infty}u_n<\infty$, and $\{t_n\}$ a sequence of real numbers with $\limsup_{n\to\infty}t_n\le 0$. Suppose that

$$s_{n+1} \le (1-\alpha_n)s_n + \alpha_nt_n + u_n,\quad \forall n\in\mathbb{N}.$$

Then $\lim_{n\to\infty}s_n=0$.
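A quick numerical illustration of Lemma 1.2, with the illustrative choices $\alpha_n=1/(n+1)$ (so $\sum\alpha_n=\infty$), $t_n=1/n$ (so $\limsup t_n\le 0$) and $u_n=1/n^2$ (summable):

```python
# The recursion of Lemma 1.2 drives s_n to 0 despite the perturbations u_n.
s = 10.0
for n in range(1, 100001):
    alpha = 1.0 / (n + 1)
    s = (1 - alpha) * s + alpha * (1.0 / n) + 1.0 / n ** 2
# after 10^5 steps s is close to 0
```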

Lemma 1.3 [12]

Let $\{w_n\}$, $\{z_n\}$ be bounded sequences in a Banach space, and let $\{\beta_n\}$ be a sequence in $[0,1]$ satisfying

$$0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1.$$

Suppose that $w_{n+1}=(1-\beta_n)w_n+\beta_nz_n$ and $\limsup_{n\to\infty}\bigl(\|z_{n+1}-z_n\|-\|w_{n+1}-w_n\|\bigr)\le 0$. Then $\lim_{n\to\infty}\|z_n-w_n\|=0$.

Lemma 1.4 [13]

Let $f$ be a convex and differentiable functional, and let $C$ be a closed convex subset of $H$. Then $x\in C$ is a solution of the problem

$$\min_{x\in C} f(x)$$

if and only if $x\in C$ satisfies the optimality condition

$$\langle \nabla f(x),\, v-x\rangle \ge 0,\quad \forall v\in C.$$

Moreover, if $f$ is, in addition, strictly convex and coercive, then the minimization problem has a unique solution.

Lemma 1.5 [3]

Let $A$ and $B$ be averaged operators and suppose that $\operatorname{Fix}(A)\cap\operatorname{Fix}(B)$ is nonempty. Then $\operatorname{Fix}(A)\cap\operatorname{Fix}(B)=\operatorname{Fix}(AB)=\operatorname{Fix}(BA)$.

2 Minimum-norm solution of SEP

In this section, we define the concept of the minimal norm solution of SEP (1.1). Using Tychonov regularization, we obtain a net of solutions of regularized minimization problems that approximates this minimal norm solution.

We use Γ to denote the solution set of SEP, i.e.,

$$\Gamma = \bigl\{(x,y)\in H_1\times H_2 : Ax=By,\ x\in C,\ y\in Q\bigr\}$$

and assume the consistency of SEP so that Γ is closed, convex and nonempty.

Let $S=C\times Q$ in $H=H_1\times H_2$ and define $G: H\to H_3$ by $G=[A,\,-B]$, so that $Gw=Ax-By$ for $w=(x,y)$; then $G^*G: H\to H$ has the matrix form

$$G^*G = \begin{bmatrix} A^*A & -A^*B\\ -B^*A & B^*B \end{bmatrix}.$$

The original problem can now be reformulated as finding $w=(x,y)\in S$ with $Gw=0$ or, more generally, minimizing $\|Gw\|$ over $w\in S$. Therefore solving SEP (1.1) is equivalent to solving the minimization problem

$$\min_{w\in S} f(w) = \tfrac12\|Gw\|^2,$$
(2.1)

which is in general ill-posed. A classical way to deal with such a possibly ill-posed problem is the well-known Tychonov regularization, which approximates a solution of problem (2.1) by the unique minimizer of the regularized problem:

$$\min_{w\in S} f_\alpha(w) = \tfrac12\|Gw\|^2 + \tfrac12\alpha\|w\|^2,$$
(2.2)

where $\alpha>0$ is the regularization parameter. Denote by $w_\alpha=(x_\alpha,y_\alpha)$ the unique solution of (2.2).

Proposition 2.1 For any $\alpha>0$, the solution $w_\alpha=(x_\alpha,y_\alpha)$ of (2.2) is uniquely defined. Moreover, $w_\alpha$ is characterized by the inequality

$$\langle G^*Gw_\alpha + \alpha w_\alpha,\, w-w_\alpha\rangle \ge 0,\quad \forall w\in S,$$

i.e.,

$$\langle A^*(Ax_\alpha - By_\alpha) + \alpha x_\alpha,\, x-x_\alpha\rangle \ge 0,\quad \forall x\in C,$$

and

$$\langle -B^*(Ax_\alpha - By_\alpha) + \alpha y_\alpha,\, y-y_\alpha\rangle \ge 0,\quad \forall y\in Q.$$

Proof It is well known that $f(w)=\tfrac12\|Gw\|^2$ is convex and differentiable with gradient $\nabla f(w)=G^*Gw$, and $f_\alpha(w)=f(w)+\tfrac12\alpha\|w\|^2$. Hence $f_\alpha$ is strictly convex, coercive, and differentiable with gradient

$$\nabla f_\alpha(w) = G^*Gw + \alpha w.$$

It follows from Lemma 1.4 that $w_\alpha$ is characterized by the inequality

$$\langle G^*Gw_\alpha + \alpha w_\alpha,\, w-w_\alpha\rangle \ge 0,\quad \forall w\in S.$$
(2.3)

Taking $w=(x,y_\alpha)$ with $x\in C$ and $w=(x_\alpha,y)$ with $y\in Q$ in (2.3), we get

$$\langle A^*(Ax_\alpha - By_\alpha) + \alpha x_\alpha,\, x-x_\alpha\rangle \ge 0,\quad \forall x\in C,$$

and

$$\langle -B^*(Ax_\alpha - By_\alpha) + \alpha y_\alpha,\, y-y_\alpha\rangle \ge 0,\quad \forall y\in Q.$$

 □

Definition 2.2 An element $\tilde w=(\tilde x,\tilde y)\in\Gamma$ is said to be the minimal norm solution of SEP (1.1) if $\|\tilde w\| = \inf_{w\in\Gamma}\|w\|$.

The next result collects some useful properties of { w α }, the unique solution of (2.2).

Proposition 2.3 Let $w_\alpha$ be given as the unique solution of (2.2). Then the following assertions hold.

  1. (i)

     $\|w_\alpha\|$ is decreasing for $\alpha\in(0,\infty)$.

  2. (ii)

     $\alpha\mapsto w_\alpha$ defines a continuous curve from $(0,\infty)$ to $H$.

Proof Let $\alpha>\beta>0$. Since $w_\alpha$ and $w_\beta$ are the unique minimizers of $f_\alpha$ and $f_\beta$, respectively, we get

$$\tfrac12\|Gw_\alpha\|^2 + \tfrac12\alpha\|w_\alpha\|^2 \le \tfrac12\|Gw_\beta\|^2 + \tfrac12\alpha\|w_\beta\|^2,$$
$$\tfrac12\|Gw_\beta\|^2 + \tfrac12\beta\|w_\beta\|^2 \le \tfrac12\|Gw_\alpha\|^2 + \tfrac12\beta\|w_\alpha\|^2.$$

Adding these inequalities yields $\tfrac12(\alpha-\beta)\|w_\alpha\|^2 \le \tfrac12(\alpha-\beta)\|w_\beta\|^2$, hence $\|w_\alpha\|\le\|w_\beta\|$. That is, $\|w_\alpha\|$ is decreasing for $\alpha\in(0,\infty)$.

By Proposition 2.1, we have

$$\langle G^*Gw_\alpha + \alpha w_\alpha,\, w_\beta-w_\alpha\rangle \ge 0$$

and

$$\langle G^*Gw_\beta + \beta w_\beta,\, w_\alpha-w_\beta\rangle \ge 0.$$

It follows that

$$\langle w_\alpha-w_\beta,\, \alpha w_\alpha-\beta w_\beta\rangle \le \langle w_\alpha-w_\beta,\, G^*G(w_\beta-w_\alpha)\rangle \le 0.$$

Hence

$$\alpha\|w_\alpha-w_\beta\|^2 \le (\beta-\alpha)\langle w_\alpha-w_\beta,\, w_\beta\rangle.$$

It turns out that

$$\|w_\alpha-w_\beta\| \le \frac{|\alpha-\beta|}{\alpha}\|w_\beta\|.$$

Thus $\alpha\mapsto w_\alpha$ defines a continuous curve from $(0,\infty)$ to $H$. □

Theorem 2.4 Let $w_\alpha$ be given as the unique solution of (2.2). Then $w_\alpha$ converges strongly as $\alpha\to 0$ to the minimum-norm solution $\tilde w$ of SEP (1.1).

Proof For any $\alpha>0$, since $w_\alpha$ is the minimizer of (2.2), it follows that

$$\tfrac12\|Gw_\alpha\|^2 + \tfrac12\alpha\|w_\alpha\|^2 \le \tfrac12\|G\tilde w\|^2 + \tfrac12\alpha\|\tilde w\|^2.$$

Since $\tilde w\in\Gamma$ is a solution of SEP, $G\tilde w=0$, so

$$\tfrac12\|Gw_\alpha\|^2 + \tfrac12\alpha\|w_\alpha\|^2 \le \tfrac12\alpha\|\tilde w\|^2.$$

Hence $\|w_\alpha\|\le\|\tilde w\|$ for all $\alpha>0$; that is, $\{w_\alpha\}$ is a bounded net in $H=H_1\times H_2$.

Take any sequence $\{\alpha_n\}$ with $\lim_{n\to\infty}\alpha_n=0$ and abbreviate $w_{\alpha_n}$ as $w_n$. It suffices to prove that $\{w_n\}$ contains a subsequence converging strongly to $\tilde w$.

Indeed, $\{w_n\}$ is bounded and $S$ is closed and convex. Passing to a subsequence if necessary, we may assume that $\{w_n\}$ converges weakly to a point $\hat w\in S$. By Proposition 2.1,

$$\langle G^*Gw_n + \alpha_nw_n,\, \tilde w-w_n\rangle \ge 0.$$

It follows that

$$\langle Gw_n,\, G\tilde w-Gw_n\rangle \ge \alpha_n\langle w_n,\, w_n-\tilde w\rangle.$$

Since $\tilde w\in\Gamma$, i.e., $G\tilde w=0$, it turns out that

$$\|Gw_n\|^2 \le \alpha_n\langle w_n,\, \tilde w-w_n\rangle.$$

Using $\|w_n\|\le\|\tilde w\|$, we get

$$\|Gw_n\|^2 \le \alpha_n\|\tilde w\|^2 \to 0.$$

Furthermore, since $\{w_n\}$ converges weakly to $\hat w$, $\{Gw_n\}$ converges weakly to $G\hat w$, and the weak lower semicontinuity of the norm gives $\|G\hat w\|\le\liminf_{n\to\infty}\|Gw_n\|=0$. Hence $G\hat w=0$, i.e., $\hat w\in\Gamma$.

Finally, we prove that $\hat w=\tilde w$ and that the convergence is strong, which finishes the proof.

Since $\{w_n\}$ converges weakly to $\hat w$ and $\|w_n\|\le\|\tilde w\|$, we get

$$\|\hat w\| \le \liminf_{n\to\infty}\|w_n\| \le \|\tilde w\| = \min\bigl\{\|w\| : w\in\Gamma\bigr\}.$$

This shows that $\hat w$ is a point of $\Gamma$ of minimum norm; by the uniqueness of the minimum-norm element, we obtain $\hat w=\tilde w$. Moreover, $\|\hat w\|\le\liminf_{n\to\infty}\|w_n\|\le\limsup_{n\to\infty}\|w_n\|\le\|\tilde w\|=\|\hat w\|$, so $\|w_n\|\to\|\tilde w\|$; combined with the weak convergence $w_n\rightharpoonup\tilde w$, this yields strong convergence in the Hilbert space $H$. □
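Theorem 2.4 can be observed numerically. The sketch below uses the illustrative choices $A=B=I$ on $\mathbb{R}^2$, $C=[1,2]^2$, $Q=[0,3]^2$, for which $\Gamma=\{(x,x): x\in[1,2]^2\}$ and the minimum-norm solution is $\tilde w=((1,1),(1,1))$; each $w_\alpha$ is approximated by projected gradient descent on $f_\alpha$.

```python
import numpy as np

# Tychonov-regularization sketch for Theorem 2.4 (illustrative data).
def P_S(w):                    # projection onto S = C x Q = [1,2]^2 x [0,3]^2
    return np.concatenate([np.clip(w[:2], 1.0, 2.0),
                           np.clip(w[2:], 0.0, 3.0)])

def grad_f(w):                 # gradient of 0.5*||Ax - By||^2 with A = B = I
    r = w[:2] - w[2:]
    return np.concatenate([r, -r])

def w_alpha(alpha, gamma=0.4, iters=20000):
    # projected gradient descent on f_alpha; gamma < 2/(rho(G*G)+alpha)
    w = np.array([2.0, 2.0, 3.0, 0.0])
    for _ in range(iters):
        w = P_S(w - gamma * (grad_f(w) + alpha * w))
    return w

approx = {a: w_alpha(a) for a in (1.0, 0.1, 0.001)}
# as alpha -> 0, w_alpha approaches the minimum-norm solution (1, 1, 1, 1)
```

Consistent with Proposition 2.3(i), the norm $\|w_\alpha\|$ grows toward $\|\tilde w\|$ as $\alpha$ decreases.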

Finally, we introduce another method to get the minimum-norm solution of SEP.

Lemma 2.5 Let $T=I-\gamma G^*G$, where $0<\gamma<2/\rho(G^*G)$ and $\rho(G^*G)$ is the spectral radius of the self-adjoint operator $G^*G$ on $H$. Then we have the following:

  1. (1)

     $\|T\|\le 1$ (i.e., $T$ is nonexpansive) and $T$ is averaged;

  2. (2)

     $\operatorname{Fix}(T)=\{(x,y)\in H : Ax=By\}$ and $\operatorname{Fix}(P_ST)=\operatorname{Fix}(P_S)\cap\operatorname{Fix}(T)=\Gamma$;

  3. (3)

     $w\in\operatorname{Fix}(P_ST)$ if and only if $w$ is a solution of the variational inequality $\langle G^*Gw,\, v-w\rangle\ge 0$, $\forall v\in S$.

Proof (1) It is easily seen that $\|T\|\le 1$; we only prove that $T=I-\gamma G^*G$ is averaged. Indeed, choose $0<\beta<1$ such that $\gamma/(1-\beta)<2/\rho(G^*G)$; then $T=I-\gamma G^*G=\beta I+(1-\beta)V$, where $V=I-\frac{\gamma}{1-\beta}G^*G$ is nonexpansive. That is, $T$ is averaged.

(2) If $w\in\{(x,y)\in H : Ax=By\}$, then $Gw=0$ and obviously $w\in\operatorname{Fix}(T)$. Conversely, if $w\in\operatorname{Fix}(T)$, then $w=w-\gamma G^*Gw$, hence $\gamma G^*Gw=0$ and $\|Gw\|^2=\langle G^*Gw,w\rangle=0$, so $w\in\{(x,y)\in H : Ax=By\}$. This leads to $\operatorname{Fix}(T)=\{(x,y)\in H : Ax=By\}$.

Now we prove $\operatorname{Fix}(P_ST)=\operatorname{Fix}(P_S)\cap\operatorname{Fix}(T)=\Gamma$. Since $\operatorname{Fix}(T)=\{(x,y)\in H : Ax=By\}$ and $\operatorname{Fix}(P_S)=S$, the identity $\operatorname{Fix}(P_S)\cap\operatorname{Fix}(T)=\Gamma$ is obvious. On the other hand, since $\operatorname{Fix}(P_S)\cap\operatorname{Fix}(T)=\Gamma\neq\emptyset$ and both $P_S$ and $T$ are averaged, Lemma 1.5 gives $\operatorname{Fix}(P_ST)=\operatorname{Fix}(P_S)\cap\operatorname{Fix}(T)$.

(3) For $w\in S$,

$$\langle G^*Gw,\, v-w\rangle \ge 0,\ \forall v\in S \iff \bigl\langle w-(w-\gamma G^*Gw),\, v-w\bigr\rangle \ge 0,\ \forall v\in S \iff w=P_S(w-\gamma G^*Gw) \iff w\in\operatorname{Fix}(P_ST).$$

 □

Remark 2.6 Take a constant $\gamma$ such that $0<\gamma<2/\rho(G^*G)$, where $\rho(G^*G)$ is the spectral radius of the self-adjoint operator $G^*G$. For $\alpha\in\bigl(0,\frac{2-\gamma\|G^*G\|}{2\gamma}\bigr)$, we define a mapping

$$W_\alpha(w) := P_S\bigl[(1-\alpha\gamma)I - \gamma G^*G\bigr]w.$$

It is easy to check that $W_\alpha$ is contractive. So, $W_\alpha$ has a unique fixed point, denoted by $w_\alpha$; that is,

$$w_\alpha = P_S\bigl[(1-\alpha\gamma)I - \gamma G^*G\bigr]w_\alpha.$$
(2.4)

Theorem 2.7 Let $w_\alpha$ be given by (2.4). Then $w_\alpha$ converges strongly as $\alpha\to 0$ to the minimum-norm solution $\tilde w$ of SEP (1.1).

Proof Let $\check w$ be a point in $\Gamma$. Since $\alpha\in\bigl(0,\frac{2-\gamma\|G^*G\|}{2\gamma}\bigr)$, the operator $I-\frac{\gamma}{1-\alpha\gamma}G^*G$ is nonexpansive. It follows that

$$\begin{aligned} \|w_\alpha-\check w\| &= \bigl\|P_S\bigl[(1-\alpha\gamma)I-\gamma G^*G\bigr]w_\alpha - P_S\bigl[\check w-\gamma G^*G\check w\bigr]\bigr\| \\ &\le \Bigl\|(1-\alpha\gamma)\Bigl[w_\alpha-\tfrac{\gamma}{1-\alpha\gamma}G^*Gw_\alpha\Bigr] - (1-\alpha\gamma)\Bigl[\check w-\tfrac{\gamma}{1-\alpha\gamma}G^*G\check w\Bigr] - \alpha\gamma\check w\Bigr\| \\ &\le (1-\alpha\gamma)\Bigl\|\Bigl(w_\alpha-\tfrac{\gamma}{1-\alpha\gamma}G^*Gw_\alpha\Bigr) - \Bigl(\check w-\tfrac{\gamma}{1-\alpha\gamma}G^*G\check w\Bigr)\Bigr\| + \alpha\gamma\|\check w\| \\ &\le (1-\alpha\gamma)\|w_\alpha-\check w\| + \alpha\gamma\|\check w\|. \end{aligned}$$

Hence,

$$\|w_\alpha-\check w\| \le \|\check w\|.$$

Then $\{w_\alpha\}$ is bounded.

From (2.4), we have

$$\bigl\|w_\alpha - P_S[I-\gamma G^*G]w_\alpha\bigr\| \le \alpha\gamma\|w_\alpha\| \to 0.$$

Next we show that $\{w_\alpha\}$ is relatively norm compact as $\alpha\to 0^+$. In fact, assume that $\{\alpha_n\}\subseteq\bigl(0,\frac{2-\gamma\|G^*G\|}{2\gamma}\bigr)$ is such that $\alpha_n\to 0^+$ as $n\to\infty$. Put $w_n:=w_{\alpha_n}$; then

$$\bigl\|w_n - P_S[I-\gamma G^*G]w_n\bigr\| \le \alpha_n\gamma\|w_n\| \to 0.$$

By the property of the projection, we deduce that

$$\begin{aligned} \|w_\alpha-\check w\|^2 &= \bigl\|P_S\bigl[(1-\alpha\gamma)I-\gamma G^*G\bigr]w_\alpha - P_S\bigl[\check w-\gamma G^*G\check w\bigr]\bigr\|^2 \\ &\le \bigl\langle \bigl[(1-\alpha\gamma)I-\gamma G^*G\bigr]w_\alpha - \bigl[\check w-\gamma G^*G\check w\bigr],\, w_\alpha-\check w\bigr\rangle \\ &= (1-\alpha\gamma)\Bigl\langle \Bigl(w_\alpha-\tfrac{\gamma}{1-\alpha\gamma}G^*Gw_\alpha\Bigr) - \Bigl(\check w-\tfrac{\gamma}{1-\alpha\gamma}G^*G\check w\Bigr),\, w_\alpha-\check w\Bigr\rangle - \alpha\gamma\langle\check w,\, w_\alpha-\check w\rangle \\ &\le (1-\alpha\gamma)\|w_\alpha-\check w\|^2 - \alpha\gamma\langle\check w,\, w_\alpha-\check w\rangle. \end{aligned}$$

Therefore,

$$\|w_\alpha-\check w\|^2 \le \langle\check w,\, \check w-w_\alpha\rangle.$$

In particular,

$$\|w_n-\check w\|^2 \le \langle\check w,\, \check w-w_n\rangle,\quad \forall\check w\in\Gamma.$$

Since $\{w_n\}$ is bounded, there exists a subsequence of $\{w_n\}$ which converges weakly to a point $\tilde w$; without loss of generality, we may assume that $\{w_n\}$ converges weakly to $\tilde w$. Noticing that

$$\bigl\|w_n - P_S[I-\gamma G^*G]w_n\bigr\| \le \alpha_n\gamma\|w_n\| \to 0,$$

by Lemma 1.1 we get $\tilde w\in\operatorname{Fix}(P_S[I-\gamma G^*G])=\Gamma$.

Taking $\check w=\tilde w$ in

$$\|w_n-\check w\|^2 \le \langle\check w,\, \check w-w_n\rangle,\quad \forall\check w\in\Gamma,$$

we have

$$\|w_n-\tilde w\|^2 \le \langle\tilde w,\, \tilde w-w_n\rangle.$$

Consequently, the weak convergence of $\{w_n\}$ to $\tilde w$ actually implies strong convergence; that is, $\{w_\alpha\}$ is relatively norm compact as $\alpha\to 0^+$.

On the other hand, letting $n\to\infty$ in

$$\|w_n-\check w\|^2 \le \langle\check w,\, \check w-w_n\rangle,\quad \forall\check w\in\Gamma,$$

we get

$$\|\tilde w-\check w\|^2 \le \langle\check w,\, \check w-\tilde w\rangle,\quad \forall\check w\in\Gamma.$$

This implies that

$$\langle\check w,\, \check w-\tilde w\rangle \ge 0,\quad \forall\check w\in\Gamma,$$

which is equivalent to

$$\langle\tilde w,\, \check w-\tilde w\rangle \ge 0,\quad \forall\check w\in\Gamma.$$

It follows that $\tilde w=P_\Gamma(0)$. Therefore, every cluster point of $\{w_\alpha\}$ equals $\tilde w$. So $w_\alpha\to\tilde w$ $(\alpha\to 0)$, the minimum-norm solution of SEP. □

3 Modification of Moudafi’s ACQA and RACQA algorithms

In this section, we introduce the following algorithm, which is viewed as a modification of Moudafi's ACQA and RACQA algorithms. The purpose of the modification is to obtain strong convergence.

Algorithm 3.1 For an arbitrary point $w_0=(x_0,y_0)\in H=H_1\times H_2$, the sequence $\{w_n\}=\{(x_n,y_n)\}$ is generated by the iterative scheme

$$w_{n+1} = P_S\bigl\{(1-\alpha_n)\bigl[I-\gamma G^*G\bigr]w_n\bigr\},$$
(3.1)

i.e.,

$$\begin{cases} x_{n+1} = P_C\bigl\{(1-\alpha_n)\bigl[x_n-\gamma A^*(Ax_n-By_n)\bigr]\bigr\}, & n\ge 0,\\ y_{n+1} = P_Q\bigl\{(1-\alpha_n)\bigl[y_n+\gamma B^*(Ax_n-By_n)\bigr]\bigr\}, & n\ge 0, \end{cases}$$

where $\{\alpha_n\}$ is a sequence in $(0,1)$ such that

  1. (i)

     $\lim_{n\to\infty}\alpha_n=0$;

  2. (ii)

     $\sum_{n=0}^{\infty}\alpha_n=\infty$;

  3. (iii)

     $\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_n|<\infty$ or $\lim_{n\to\infty}|\alpha_{n+1}-\alpha_n|/\alpha_n=0$.
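A finite-dimensional sketch of Algorithm 3.1 with $\alpha_n=1/(n+1)$, which satisfies (i)-(iii). The data $A=B=I$ on $\mathbb{R}^2$, $C=[1,2]^2$, $Q=[0,3]^2$ are illustrative assumptions; for them the minimum-norm solution of SEP is $x=y=(1,1)$, and the iterates indeed approach it.

```python
import numpy as np

# Sketch of Algorithm 3.1 (illustrative data: A = B = I, C = [1,2]^2, Q = [0,3]^2).
gamma = 0.4                      # gamma < 2/rho(G*G) = 1 for A = B = I
x, y = np.array([2.0, 2.0]), np.array([0.0, 3.0])
for n in range(1, 50001):
    alpha = 1.0 / (n + 1)        # satisfies conditions (i)-(iii)
    r = x - y                    # A x_n - B y_n with A = B = I
    x, y = (np.clip((1 - alpha) * (x - gamma * r), 1.0, 2.0),   # P_C
            np.clip((1 - alpha) * (y + gamma * r), 0.0, 3.0))   # P_Q
# x and y are now both close to (1, 1), the minimum-norm solution
```

Note the contrast with the ACQA sketch earlier: the damping factor $1-\alpha_n$ selects the minimum-norm solution rather than an arbitrary one.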

Now, we prove the strong convergence of the iterative algorithm.

Theorem 3.2 The sequence $\{w_n\}$ generated by algorithm (3.1) converges strongly to the minimum-norm solution $\tilde w$ of SEP (1.1).

Proof Let $R_n$ and $R$ be defined by

$$R_nw := P_S\bigl\{(1-\alpha_n)\bigl[I-\gamma G^*G\bigr]w\bigr\} = P_S\bigl[(1-\alpha_n)Tw\bigr],\qquad Rw := P_S\bigl(I-\gamma G^*G\bigr)w = P_S(Tw),$$

where $T=I-\gamma G^*G$. By Lemma 2.5 it is easy to see that $R_n$ is a contraction with contraction constant $1-\alpha_n$, and algorithm (3.1) can be written as $w_{n+1}=R_nw_n$.

For any $\hat w\in\Gamma$ (so that $T\hat w=\hat w$ and $\hat w=P_S(T\hat w)$), we have

$$\|R_n\hat w-\hat w\| = \bigl\|P_S\bigl[(1-\alpha_n)T\hat w\bigr] - P_S(T\hat w)\bigr\| \le \alpha_n\|T\hat w\| = \alpha_n\|\hat w\|.$$

Hence,

$$\|w_{n+1}-\hat w\| = \|R_nw_n-\hat w\| \le \|R_nw_n-R_n\hat w\| + \|R_n\hat w-\hat w\| \le (1-\alpha_n)\|w_n-\hat w\| + \alpha_n\|\hat w\| \le \max\bigl\{\|w_n-\hat w\|,\|\hat w\|\bigr\}.$$

It follows that $\|w_n-\hat w\|\le\max\{\|w_0-\hat w\|,\|\hat w\|\}$, so $\{w_n\}$ is bounded.

Next we prove that $\lim_{n\to\infty}\|w_{n+1}-w_n\|=0$. Indeed,

$$\|w_{n+1}-w_n\| = \|R_nw_n-R_{n-1}w_{n-1}\| \le \|R_nw_n-R_nw_{n-1}\| + \|R_nw_{n-1}-R_{n-1}w_{n-1}\| \le (1-\alpha_n)\|w_n-w_{n-1}\| + \|R_nw_{n-1}-R_{n-1}w_{n-1}\|.$$

Notice that

$$\|R_nw_{n-1}-R_{n-1}w_{n-1}\| = \bigl\|P_S\bigl[(1-\alpha_n)Tw_{n-1}\bigr] - P_S\bigl[(1-\alpha_{n-1})Tw_{n-1}\bigr]\bigr\| \le |\alpha_n-\alpha_{n-1}|\,\|Tw_{n-1}\| \le |\alpha_n-\alpha_{n-1}|\,\|w_{n-1}\|.$$

Hence,

$$\|w_{n+1}-w_n\| \le (1-\alpha_n)\|w_n-w_{n-1}\| + |\alpha_n-\alpha_{n-1}|\,\|w_{n-1}\|.$$

By virtue of assumptions (i)-(iii) and Lemma 1.2, we have

$$\lim_{n\to\infty}\|w_{n+1}-w_n\| = 0.$$

Therefore,

$$\|w_n-Rw_n\| \le \|w_{n+1}-w_n\| + \|R_nw_n-Rw_n\| \le \|w_{n+1}-w_n\| + \alpha_n\|Tw_n\| \le \|w_{n+1}-w_n\| + \alpha_n\|w_n\| \to 0.$$

The demiclosedness principle ensures that each weak limit point of $\{w_n\}$ is a fixed point of the nonexpansive mapping $R=P_ST$, that is, a point of the solution set $\Gamma$ of SEP (1.1).

At last, we prove that $\lim_{n\to\infty}\|w_{n+1}-\tilde w\|=0$.

Choose $0<\beta<1$ such that $\gamma/(1-\beta)<2/\rho(G^*G)$; then $T=I-\gamma G^*G=\beta I+(1-\beta)V$, where $V=I-\frac{\gamma}{1-\beta}G^*G$ is nonexpansive. Taking $z\in\Gamma$, we deduce that

$$\begin{aligned} \|w_{n+1}-z\|^2 &= \bigl\|P_S\bigl[(1-\alpha_n)Tw_n\bigr]-z\bigr\|^2 \le \bigl\|(1-\alpha_n)Tw_n-z\bigr\|^2 \\ &\le (1-\alpha_n)\|Tw_n-z\|^2 + \alpha_n\|z\|^2 \\ &\le \bigl\|\beta(w_n-z)+(1-\beta)(Vw_n-z)\bigr\|^2 + \alpha_n\|z\|^2 \\ &= \beta\|w_n-z\|^2 + (1-\beta)\|Vw_n-z\|^2 - \beta(1-\beta)\|w_n-Vw_n\|^2 + \alpha_n\|z\|^2 \\ &\le \|w_n-z\|^2 - \beta(1-\beta)\|w_n-Vw_n\|^2 + \alpha_n\|z\|^2. \end{aligned}$$

Then

$$\begin{aligned} \beta(1-\beta)\|w_n-Vw_n\|^2 &\le \|w_n-z\|^2 - \|w_{n+1}-z\|^2 + \alpha_n\|z\|^2 \\ &\le \bigl(\|w_n-z\|+\|w_{n+1}-z\|\bigr)\,\|w_n-w_{n+1}\| + \alpha_n\|z\|^2 \to 0. \end{aligned}$$

Since $T=I-\gamma G^*G=\beta I+(1-\beta)V$, it follows that $\lim_{n\to\infty}\|Tw_n-w_n\|=0$.

Take a subsequence $\{w_{n_k}\}$ of $\{w_n\}$ such that

$$\limsup_{n\to\infty}\langle w_n-\tilde w,\, -\tilde w\rangle = \lim_{k\to\infty}\langle w_{n_k}-\tilde w,\, -\tilde w\rangle.$$

By virtue of the boundedness of $\{w_n\}$, we may further assume, with no loss of generality, that $\{w_{n_k}\}$ converges weakly to a point $\check w$. Since $\|Rw_n-w_n\|\to 0$, the demiclosedness principle gives $\check w\in\operatorname{Fix}(R)=\operatorname{Fix}(P_ST)=\Gamma$. Noticing that $\tilde w$ is the projection of the origin onto $\Gamma$, we get

$$\limsup_{n\to\infty}\langle w_n-\tilde w,\, -\tilde w\rangle = \lim_{k\to\infty}\langle w_{n_k}-\tilde w,\, -\tilde w\rangle = \langle\check w-\tilde w,\, -\tilde w\rangle \le 0.$$

Finally, we compute

$$\begin{aligned} \|w_{n+1}-\tilde w\|^2 &= \bigl\|P_S\bigl[(1-\alpha_n)Tw_n\bigr]-P_S(T\tilde w)\bigr\|^2 \le \bigl\|(1-\alpha_n)Tw_n-T\tilde w\bigr\|^2 \\ &= \bigl\|(1-\alpha_n)(Tw_n-\tilde w)-\alpha_n\tilde w\bigr\|^2 \\ &= (1-\alpha_n)^2\|Tw_n-\tilde w\|^2 + \alpha_n^2\|\tilde w\|^2 + 2\alpha_n(1-\alpha_n)\langle Tw_n-\tilde w,\, -\tilde w\rangle \\ &\le (1-\alpha_n)\|w_n-\tilde w\|^2 + \alpha_n\bigl[\alpha_n\|\tilde w\|^2 + 2(1-\alpha_n)\langle Tw_n-\tilde w,\, -\tilde w\rangle\bigr]. \end{aligned}$$

Since $\limsup_{n\to\infty}\langle w_n-\tilde w,\, -\tilde w\rangle\le 0$ and $\|w_n-Tw_n\|\to 0$, we know that $\limsup_{n\to\infty}\bigl(\alpha_n\|\tilde w\|^2 + 2(1-\alpha_n)\langle Tw_n-\tilde w,\, -\tilde w\rangle\bigr)\le 0$. By Lemma 1.2, we conclude that $\lim_{n\to\infty}\|w_{n+1}-\tilde w\|=0$. This completes the proof. □

Remark 3.3 When $B=I$, the iteration (3.1) becomes

$$\begin{cases} x_{n+1} = P_C\bigl\{(1-\alpha_n)\bigl[x_n-\gamma A^*(Ax_n-y_n)\bigr]\bigr\},\\ y_{n+1} = P_Q\bigl\{(1-\alpha_n)\bigl[y_n+\gamma(Ax_n-y_n)\bigr]\bigr\}. \end{cases}$$

By Theorem 3.2, we can get the following result.

Corollary 3.4 For an arbitrary point $w_0=(x_0,y_0)\in H=H_1\times H_2$, let the sequence $\{w_n\}=\{(x_n,y_n)\}$ be generated by the iterative scheme

$$\begin{cases} x_{n+1} = P_C\bigl\{(1-\alpha_n)\bigl[x_n-\gamma A^*(Ax_n-y_n)\bigr]\bigr\}, & n\ge 0,\\ y_{n+1} = P_Q\bigl\{(1-\alpha_n)\bigl[y_n+\gamma(Ax_n-y_n)\bigr]\bigr\}, & n\ge 0, \end{cases}$$

where $\{\alpha_n\}$ is a sequence in $(0,1)$ such that

  1. (i)

     $\lim_{n\to\infty}\alpha_n=0$;

  2. (ii)

     $\sum_{n=0}^{\infty}\alpha_n=\infty$;

  3. (iii)

     $\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_n|<\infty$ or $\lim_{n\to\infty}|\alpha_{n+1}-\alpha_n|/\alpha_n=0$.

Then $\{x_n\}$ converges strongly to the minimum-norm solution of SFP.

4 KM-CQ-like iterative algorithm for SEP

In this section, we establish a KM-CQ-like algorithm converging strongly to a solution of SEP.

Algorithm 4.1 For an arbitrary initial point $w_0=(x_0,y_0)$, the sequence $\{w_n=(x_n,y_n)\}$ is generated by the iteration

$$w_{n+1} = (1-\beta_n)w_n + \beta_nP_S\bigl[(1-\alpha_n)\bigl(I-\gamma G^*G\bigr)\bigr]w_n,$$
(4.1)

i.e.,

$$\begin{cases} x_{n+1} = (1-\beta_n)x_n + \beta_nP_C\bigl\{(1-\alpha_n)\bigl[x_n-\gamma A^*(Ax_n-By_n)\bigr]\bigr\}, & n\ge 0,\\ y_{n+1} = (1-\beta_n)y_n + \beta_nP_Q\bigl\{(1-\alpha_n)\bigl[y_n+\gamma B^*(Ax_n-By_n)\bigr]\bigr\}, & n\ge 0, \end{cases}$$

where $\{\alpha_n\}$ and $\{\beta_n\}$ are sequences in $(0,1)$ such that

  1. (i)

     $\lim_{n\to\infty}\alpha_n=0$, $\sum_{n=0}^{\infty}\alpha_n=\infty$;

  2. (ii)

     $\lim_{n\to\infty}|\alpha_{n+1}-\alpha_n|=0$;

  3. (iii)

     $0<\liminf_{n\to\infty}\beta_n\le\limsup_{n\to\infty}\beta_n<1$.
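A finite-dimensional sketch of the KM-CQ-like scheme (4.1) with $\alpha_n=1/(n+1)$ and $\beta_n=1/2$, which satisfy (i)-(iii). As before, $A=B=I$ on $\mathbb{R}^2$, $C=[1,2]^2$, $Q=[0,3]^2$ are illustrative assumptions, so any pair $x=y\in[1,2]^2$ solves the SEP.

```python
import numpy as np

# Sketch of Algorithm 4.1 (illustrative data: A = B = I, C = [1,2]^2, Q = [0,3]^2).
gamma, beta = 0.4, 0.5
x, y = np.array([2.0, 2.0]), np.array([0.0, 3.0])
for n in range(1, 50001):
    alpha = 1.0 / (n + 1)
    r = x - y                                                # A x_n - B y_n
    px = np.clip((1 - alpha) * (x - gamma * r), 1.0, 2.0)    # P_C step
    py = np.clip((1 - alpha) * (y + gamma * r), 0.0, 3.0)    # P_Q step
    x = (1 - beta) * x + beta * px       # Krasnoselskii-Mann averaging
    y = (1 - beta) * y + beta * py
residual = np.linalg.norm(x - y)         # ||Ax - By|| with A = B = I
```

The averaging with weight $\beta_n$ is what distinguishes (4.1) from Algorithm 3.1; both drive the equality residual to zero.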

Lemma 4.2 If $z\in\operatorname{Fix}(T)=\operatorname{Fix}(I-\gamma G^*G)$, then for any $w\in H$ we have $\|Tw-z\|^2 \le \|w-z\|^2 - \beta(1-\beta)\|Vw-w\|^2$, where $\beta$ and $V$ are the same as in Lemma 2.5(1).

Proof According to Lemma 2.5(1), we know that $T=\beta I+(1-\beta)V$, where $0<\beta<1$ and $V$ is nonexpansive. It is easy to check that $z\in\operatorname{Fix}(T)=\operatorname{Fix}(V)$, and

$$\begin{aligned} \|Tw-z\|^2 &= \bigl\|\beta(w-z)+(1-\beta)(Vw-z)\bigr\|^2 \\ &= \beta\|w-z\|^2 + (1-\beta)\|Vw-z\|^2 - \beta(1-\beta)\|Vw-w\|^2 \\ &\le \beta\|w-z\|^2 + (1-\beta)\|w-z\|^2 - \beta(1-\beta)\|Vw-w\|^2 \\ &= \|w-z\|^2 - \beta(1-\beta)\|Vw-w\|^2. \end{aligned}$$

 □

Theorem 4.3 The sequence $\{w_n\}$ generated by algorithm (4.1) converges strongly to a solution of SEP (1.1).

Proof Let $\hat w$ be any solution of SEP. According to Lemma 2.5, $\hat w\in\operatorname{Fix}(P_ST)=\operatorname{Fix}(P_S)\cap\operatorname{Fix}(T)$, where $T=I-\gamma G^*G$, and

$$\begin{aligned} \|w_{n+1}-\hat w\| &= \bigl\|(1-\beta_n)(w_n-\hat w) + \beta_n\bigl(P_S\bigl[(1-\alpha_n)T\bigr]w_n-\hat w\bigr)\bigr\| \\ &\le (1-\beta_n)\|w_n-\hat w\| + \beta_n\bigl\|P_S\bigl[(1-\alpha_n)T\bigr]w_n - P_S\bigl[(1-\alpha_n)T\bigr]\hat w\bigr\| + \beta_n\bigl\|P_S\bigl[(1-\alpha_n)T\bigr]\hat w - \hat w\bigr\| \\ &\le (1-\beta_n)\|w_n-\hat w\| + \beta_n(1-\alpha_n)\|w_n-\hat w\| + \beta_n\alpha_n\|\hat w\| \\ &= (1-\beta_n\alpha_n)\|w_n-\hat w\| + \beta_n\alpha_n\|\hat w\| \le \max\bigl\{\|w_n-\hat w\|,\|\hat w\|\bigr\}. \end{aligned}$$

By induction,

$$\|w_n-\hat w\| \le \max\bigl\{\|w_0-\hat w\|,\|\hat w\|\bigr\}.$$

Hence $\{w_n\}$ is bounded. Moreover,

$$\bigl\|P_S\bigl[(1-\alpha_n)T\bigr]w_n-\hat w\bigr\| \le \bigl\|(1-\alpha_n)Tw_n-\hat w\bigr\| = \bigl\|(1-\alpha_n)(Tw_n-\hat w)-\alpha_n\hat w\bigr\| \le (1-\alpha_n)\|w_n-\hat w\| + \alpha_n\|\hat w\|.$$

Since $\{w_n\}$ is bounded, the sequences $\{Tw_n\}$, $\{(1-\alpha_n)Tw_n\}$ and $\{P_S[(1-\alpha_n)T]w_n\}$ are also bounded.

Let $z_n=P_S[(1-\alpha_n)T]w_n$ and let $M>0$ be such that $M=\sup_{n\ge 1}\|Tw_n\|$. We observe that

$$\bigl\|P_S\bigl[(1-\alpha_{n+1})T\bigr]w_n - P_S\bigl[(1-\alpha_n)T\bigr]w_n\bigr\| \le \bigl\|(1-\alpha_{n+1})Tw_n - (1-\alpha_n)Tw_n\bigr\| = |\alpha_n-\alpha_{n+1}|\,\|Tw_n\| \le M|\alpha_n-\alpha_{n+1}|.$$

Hence,

$$\begin{aligned} \|z_{n+1}-z_n\| &\le \bigl\|P_S\bigl[(1-\alpha_{n+1})T\bigr]w_{n+1} - P_S\bigl[(1-\alpha_{n+1})T\bigr]w_n\bigr\| + \bigl\|P_S\bigl[(1-\alpha_{n+1})T\bigr]w_n - P_S\bigl[(1-\alpha_n)T\bigr]w_n\bigr\| \\ &\le (1-\alpha_{n+1})\|w_{n+1}-w_n\| + M|\alpha_n-\alpha_{n+1}|. \end{aligned}$$

Since $0<\alpha_n<1$ and $\lim_{n\to\infty}|\alpha_{n+1}-\alpha_n|=0$, we obtain

$$\|z_{n+1}-z_n\| - \|w_{n+1}-w_n\| \le M|\alpha_n-\alpha_{n+1}|$$

and

$$\limsup_{n\to\infty}\bigl(\|z_{n+1}-z_n\| - \|w_{n+1}-w_n\|\bigr) \le 0.$$

Using Lemma 1.3, we get

$$\lim_{n\to\infty}\bigl\|P_S\bigl[(1-\alpha_n)T\bigr]w_n - w_n\bigr\| = \lim_{n\to\infty}\|z_n-w_n\| = 0.$$

Therefore,

$$\|w_{n+1}-w_n\| = \beta_n\bigl\|P_S\bigl[(1-\alpha_n)T\bigr]w_n - w_n\bigr\| \to 0.$$

Let $R_n$ and $R$ be defined by

$$R_nw := P_S\bigl\{(1-\alpha_n)\bigl[I-\gamma G^*G\bigr]w\bigr\} = P_S\bigl[(1-\alpha_n)Tw\bigr],\qquad Rw := P_S\bigl(I-\gamma G^*G\bigr)w = P_S(Tw).$$

We find

$$\|w_n-Rw_n\| \le \|w_n-w_{n+1}\| + \|w_{n+1}-Rw_n\| \le \|w_n-w_{n+1}\| + (1-\beta_n)\|w_n-Rw_n\| + \beta_n\|R_nw_n-Rw_n\|.$$

So we have

$$\|w_n-Rw_n\| \le \frac{\|w_n-w_{n+1}\|}{\beta_n} + \|R_nw_n-Rw_n\| \le \frac{\|w_n-w_{n+1}\|}{\beta_n} + \bigl\|(1-\alpha_n)Tw_n-Tw_n\bigr\| \le \frac{\|w_n-w_{n+1}\|}{\beta_n} + M\alpha_n.$$

By assumptions (i) and (iii), we have

$$\lim_{n\to\infty}\|w_n-Rw_n\| = 0.$$

On the other hand, $\{w_n\}$ is bounded, so some subsequence of $\{w_n\}$ converges weakly to a point $\check w$; without loss of generality, we may assume that $\{w_n\}$ converges weakly to $\check w$. Since $\|Rw_n-w_n\|\to 0$, the demiclosedness principle yields $\check w\in\operatorname{Fix}(R)=\operatorname{Fix}(P_ST)=\operatorname{Fix}(P_S)\cap\operatorname{Fix}(T)=\Gamma$.

Finally, we prove that $\lim_{n\to\infty}\|w_{n+1}-\check w\|=0$. To this end, we calculate

$$\begin{aligned} \|w_{n+1}-\check w\|^2 &= \bigl\|(1-\beta_n)(w_n-\check w) + \beta_n\bigl(P_S\bigl[(1-\alpha_n)T\bigr]w_n - P_S(T\check w)\bigr)\bigr\|^2 \\ &\le (1-\beta_n)\|w_n-\check w\|^2 + \beta_n\bigl\|P_S\bigl[(1-\alpha_n)T\bigr]w_n - P_S(T\check w)\bigr\|^2 \\ &\le (1-\beta_n)\|w_n-\check w\|^2 + \beta_n\bigl\|(1-\alpha_n)(Tw_n-\check w) - \alpha_n\check w\bigr\|^2 \\ &= (1-\beta_n)\|w_n-\check w\|^2 + \beta_n\bigl[(1-\alpha_n)^2\|Tw_n-\check w\|^2 + \alpha_n^2\|\check w\|^2 + 2\alpha_n(1-\alpha_n)\langle Tw_n-\check w,\, -\check w\rangle\bigr] \\ &\le (1-\alpha_n\beta_n)\|w_n-\check w\|^2 + \alpha_n\beta_n\bigl[2(1-\alpha_n)\langle Tw_n-\check w,\, -\check w\rangle + \alpha_n\|\check w\|^2\bigr]. \end{aligned}$$

By Lemma 1.2, we only need to prove that

$$\limsup_{n\to\infty}\langle Tw_n-\check w,\, -\check w\rangle \le 0.$$

By Lemma 2.5, $T$ is averaged, that is, $T=\beta I+(1-\beta)V$, where $0<\beta<1$ and $V$ is nonexpansive. Then, for $z\in\operatorname{Fix}(P_ST)$, we have

$$\begin{aligned} \|w_{n+1}-z\|^2 &\le (1-\beta_n)\|w_n-z\|^2 + \beta_n\bigl\|(1-\alpha_n)(Tw_n-z) - \alpha_nz\bigr\|^2 \\ &\le (1-\beta_n)\|w_n-z\|^2 + \beta_n\bigl[(1-\alpha_n)\|Tw_n-z\|^2 + \alpha_n\|z\|^2\bigr] \\ &\le (1-\beta_n)\|w_n-z\|^2 + \beta_n\bigl[\|Tw_n-z\|^2 + \alpha_n\|z\|^2\bigr]. \end{aligned}$$

By Lemma 4.2, we get

$$\|w_{n+1}-z\|^2 \le \|w_n-z\|^2 - \beta_n\beta(1-\beta)\|Vw_n-w_n\|^2 + \beta_n\alpha_n\|z\|^2.$$

Let $K>0$ be such that $\|w_n-z\|\le K$ for all $n$; then

$$\beta_n\beta(1-\beta)\|Vw_n-w_n\|^2 \le \|w_n-z\|^2 - \|w_{n+1}-z\|^2 + \beta_n\alpha_n\|z\|^2 \le 2K\|w_n-w_{n+1}\| + \beta_n\alpha_n\|z\|^2.$$

Hence,

$$\beta(1-\beta)\|Vw_n-w_n\|^2 \le \frac{2K\|w_n-w_{n+1}\|}{\beta_n} + \alpha_n\|z\|^2.$$

Since $\|w_n-w_{n+1}\|\to 0$ and $\liminf_{n\to\infty}\beta_n>0$, we get

$$\|Vw_n-w_n\| \to 0,$$

and therefore

$$\|Tw_n-w_n\| \to 0.$$

It follows that

$$\limsup_{n\to\infty}\langle Tw_n-\check w,\, -\check w\rangle = \limsup_{n\to\infty}\langle w_n-\check w,\, -\check w\rangle.$$

Since $\{w_n\}$ converges weakly to $\check w$, the right-hand side equals $0$, so

$$\limsup_{n\to\infty}\langle Tw_n-\check w,\, -\check w\rangle \le 0.$$

 □

Similarly to the proof of Theorem 4.3, one can show that the following iterative algorithm also converges strongly to a solution of SEP; since the proof is analogous, we omit it.

Algorithm 4.4 For an arbitrary initial point $w_0=(x_0,y_0)$, the sequence $\{w_n=(x_n,y_n)\}$ is generated by the iteration

$$w_{n+1} = (1-\beta_n)(1-\alpha_n)\bigl(I-\gamma G^*G\bigr)w_n + \beta_nP_S\bigl[(1-\alpha_n)\bigl(I-\gamma G^*G\bigr)\bigr]w_n,$$
(4.2)

i.e.,

$$\begin{cases} x_{n+1} = (1-\beta_n)(1-\alpha_n)\bigl[x_n-\gamma A^*(Ax_n-By_n)\bigr] + \beta_nP_C\bigl\{(1-\alpha_n)\bigl[x_n-\gamma A^*(Ax_n-By_n)\bigr]\bigr\},\\ y_{n+1} = (1-\beta_n)(1-\alpha_n)\bigl[y_n+\gamma B^*(Ax_n-By_n)\bigr] + \beta_nP_Q\bigl\{(1-\alpha_n)\bigl[y_n+\gamma B^*(Ax_n-By_n)\bigr]\bigr\}, \end{cases}$$

where $\{\alpha_n\}$ and $\{\beta_n\}$ are sequences in $(0,1)$ such that

  1. (i)

     $\lim_{n\to\infty}\alpha_n=0$, $\sum_{n=0}^{\infty}\alpha_n=\infty$;

  2. (ii)

     $\lim_{n\to\infty}|\alpha_{n+1}-\alpha_n|=0$;

  3. (iii)

     $0<\liminf_{n\to\infty}\beta_n\le\limsup_{n\to\infty}\beta_n<1$.

5 Other iterative methods

In this section, we introduce some other iterative algorithms which converge strongly to a solution of SEP.

According to Lemma 2.5, we know that $w=(x,y)$ belongs to the solution set $\Gamma$ of SEP (1.1) if and only if $w\in\operatorname{Fix}\bigl(P_S(I-\gamma G^*G)\bigr)$. Moreover, $P_S(I-\gamma G^*G)$ is a nonexpansive mapping. That is to say, the essence of SEP is to find a fixed point of the nonexpansive mapping $P_S(I-\gamma G^*G)$.

For the fixed point of a nonexpansive mapping, the following results have been obtained.

In 1974, Ishikawa [14] introduced the following iteration:

$$\begin{cases} x_0\in C,\\ y_n = (1-\beta_n)x_n + \beta_nTx_n, & n\ge 0,\\ x_{n+1} = (1-\alpha_n)x_n + \alpha_nTy_n, & n\ge 0, \end{cases}$$

where $x_0\in C$ is an arbitrary (but fixed) element in $C$, and $\{\alpha_n\}$, $\{\beta_n\}$ are two sequences in $(0,1)$. He proved that if $0\le\alpha_n\le\beta_n\le 1$, $\beta_n\to 0$ and $\sum_{n=1}^{\infty}\alpha_n\beta_n=\infty$, then $\{x_n\}$ converges strongly to a fixed point of $T$.
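A small numerical sketch of the Ishikawa scheme for an illustrative nonexpansive map: $T$ is the rotation of $\mathbb{R}^2$ by $90^\circ$, whose only fixed point is the origin, and $\alpha_n=\beta_n=1/\sqrt{n}$, which satisfy $0\le\alpha_n\le\beta_n\le 1$, $\beta_n\to 0$ and $\sum\alpha_n\beta_n=\sum 1/n=\infty$.

```python
import numpy as np

# Ishikawa iteration for the rotation T of R^2 by 90 degrees (Fix(T) = {0}).
T = np.array([[0.0, -1.0], [1.0, 0.0]])   # an isometry, hence nonexpansive
x = np.array([1.0, 0.0])
for n in range(1, 5001):
    a = b = 1.0 / np.sqrt(n)              # alpha_n = beta_n = 1/sqrt(n)
    y = (1 - b) * x + b * (T @ x)
    x = (1 - a) * x + a * (T @ y)
# x is now (numerically) at the unique fixed point 0
```

Note that the plain Picard iteration $x_{n+1}=Tx_n$ would merely rotate forever for this $T$; the averaging is what forces convergence.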

In 2004, Xu [15] studied the viscosity iteration for nonexpansive mappings. He considered the iteration process

$$x_{n+1} = \alpha_nf(x_n) + (1-\alpha_n)Tx_n,\quad n\ge 0,$$

where $f$ is a contraction on $C$ and $x_0$ is an arbitrary (but fixed) element in $C$. He proved that if $\alpha_n\to 0$, $\sum_{n=0}^{\infty}\alpha_n=\infty$, and either $\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_n|<\infty$ or $\lim_{n\to\infty}(\alpha_{n+1}/\alpha_n)=1$, then $\{x_n\}$ converges strongly to a fixed point of $T$.
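The viscosity scheme can be tried on the same illustrative rotation map; the contraction $f$ below (an arbitrary choice with contraction constant $1/4$) pulls the iterates toward the unique fixed point $0$ of $T$.

```python
import numpy as np

# Viscosity iteration for the rotation T of R^2 by 90 degrees (Fix(T) = {0}).
T = np.array([[0.0, -1.0], [1.0, 0.0]])
u = np.array([3.0, 4.0])

def f(x):                       # illustrative contraction, constant 1/4
    return 0.5 * u + 0.25 * x

x = np.array([1.0, 0.0])
for n in range(1, 20001):
    alpha = 1.0 / (n + 1)       # alpha_n -> 0, sum alpha_n = infinity
    x = alpha * f(x) + (1 - alpha) * (T @ x)
# x is close to the fixed point 0 of T
```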

Halpern’s iteration is as follows:

$$x_{n+1} = \alpha_nu + (1-\alpha_n)Tx_n,\quad n\ge 0,$$

where $u\in C$ is an arbitrary (but fixed) element in $C$.

Mann’s iteration method produces a sequence $\{x_n\}$ via the recursion

$$x_{n+1} = \alpha_nx_n + (1-\alpha_n)Tx_n,\quad n\ge 0,$$

where the initial guess $x_0\in C$ is chosen arbitrarily. However, this scheme has only weak convergence, even in a Hilbert space.

In 2005, Kim and Xu [16] modified Mann’s iteration scheme so that the modified iteration method still works in a Banach space. Let $C$ be a closed convex subset of a Banach space and $T: C\to C$ a nonexpansive mapping with $\operatorname{Fix}(T)\neq\emptyset$. Define $\{x_n\}$ by

$$\begin{cases} x_0\in C,\\ y_n = \alpha_nx_n + (1-\alpha_n)Tx_n, & n\ge 0,\\ x_{n+1} = \beta_nu + (1-\beta_n)Ty_n, & n\ge 0, \end{cases}$$

where $u\in C$ is an arbitrary (but fixed) element in $C$, and $\{\alpha_n\}$, $\{\beta_n\}$ are two sequences in $(0,1)$. They proved that if $\alpha_n\to 0$, $\beta_n\to 0$, $\sum_{n=1}^{\infty}\alpha_n=\infty$, $\sum_{n=1}^{\infty}\beta_n=\infty$, and $\sum_{n=1}^{\infty}|\alpha_{n+1}-\alpha_n|<\infty$, $\sum_{n=1}^{\infty}|\beta_{n+1}-\beta_n|<\infty$, then $\{x_n\}$ converges strongly to a fixed point of $T$.

Therefore, we have the following iterative algorithms which converge strongly to a solution of SEP.

Algorithm 5.1

$$\begin{cases} w_0 = (x_0,y_0)\in H = H_1\times H_2,\\ v_n = (1-\beta_n)w_n + \beta_nP_STw_n, & n\ge 0,\\ w_{n+1} = (1-\alpha_n)w_n + \alpha_nP_STv_n, & n\ge 0, \end{cases}$$

in coordinates:

$$\begin{cases} x_0\in H_1,\quad y_0\in H_2,\\ z_n = x_n - \gamma A^*(Ax_n-By_n),\qquad h_n = y_n + \gamma B^*(Ax_n-By_n),\\ x_{n+1} = (1-\alpha_n)x_n + \alpha_nP_C\bigl[(1-\beta_n)z_n + \beta_n(I-\gamma A^*A)P_Cz_n + \beta_n\gamma A^*BP_Qh_n\bigr],\\ y_{n+1} = (1-\alpha_n)y_n + \alpha_nP_Q\bigl[(1-\beta_n)h_n + \beta_n\gamma B^*AP_Cz_n + \beta_n(I-\gamma B^*B)P_Qh_n\bigr], \end{cases}$$

where $w_0=(x_0,y_0)$ is an arbitrary (but fixed) element in $H$, $T=I-\gamma G^*G$, and $\{\alpha_n\}$, $\{\beta_n\}$ are two sequences in $(0,1)$. If $0\le\alpha_n\le\beta_n\le 1$, $\beta_n\to 0$ and $\sum_{n=1}^{\infty}\alpha_n\beta_n=\infty$, then $\{w_n\}$ converges strongly to a solution of SEP.

Algorithm 5.2

$$w_{n+1} = \alpha_nf(w_n) + (1-\alpha_n)P_STw_n,\quad n\ge 0,$$

in coordinates:

$$\begin{cases} x_{n+1} = \alpha_nP_{H_1}f(x_n,y_n) + (1-\alpha_n)P_C\bigl[x_n-\gamma A^*(Ax_n-By_n)\bigr],\\ y_{n+1} = \alpha_nP_{H_2}f(x_n,y_n) + (1-\alpha_n)P_Q\bigl[y_n+\gamma B^*(Ax_n-By_n)\bigr], \end{cases}$$

where $f$ is a contraction on $H=H_1\times H_2$, $w_0=(x_0,y_0)$ is an arbitrary (but fixed) element in $H$, and $T=I-\gamma G^*G$. If $\alpha_n\to 0$, $\sum_{n=0}^{\infty}\alpha_n=\infty$, and either $\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_n|<\infty$ or $\lim_{n\to\infty}(\alpha_{n+1}/\alpha_n)=1$, then $\{w_n\}$ converges strongly to a solution of SEP.

Algorithm 5.3

$$\begin{cases} w_0 = (x_0,y_0),\quad u = (x_1,y_1)\in H = H_1\times H_2,\\ v_n = \alpha_nw_n + (1-\alpha_n)P_STw_n, & n\ge 0,\\ w_{n+1} = \beta_nu + (1-\beta_n)P_STv_n, & n\ge 0, \end{cases}$$

in coordinates:

$$\begin{cases} x_0,x_1\in H_1,\quad y_0,y_1\in H_2,\\ z_n = x_n - \gamma A^*(Ax_n-By_n),\qquad h_n = y_n + \gamma B^*(Ax_n-By_n),\\ x_{n+1} = \beta_nx_1 + (1-\beta_n)P_C\bigl[\alpha_nz_n + (1-\alpha_n)(I-\gamma A^*A)P_Cz_n + (1-\alpha_n)\gamma A^*BP_Qh_n\bigr],\\ y_{n+1} = \beta_ny_1 + (1-\beta_n)P_Q\bigl[\alpha_nh_n + (1-\alpha_n)\gamma B^*AP_Cz_n + (1-\alpha_n)(I-\gamma B^*B)P_Qh_n\bigr], \end{cases}$$

where $u$, $w_0$ are arbitrary (but fixed) elements in $H$, $T=I-\gamma G^*G$, and $\{\alpha_n\}$, $\{\beta_n\}$ are two sequences in $(0,1)$. If $\alpha_n\to 0$, $\beta_n\to 0$, $\sum_{n=1}^{\infty}\alpha_n=\infty$, $\sum_{n=1}^{\infty}\beta_n=\infty$, and $\sum_{n=1}^{\infty}|\alpha_{n+1}-\alpha_n|<\infty$, $\sum_{n=1}^{\infty}|\beta_{n+1}-\beta_n|<\infty$, then $\{w_n\}$ converges strongly to a solution of SEP.

References

  1. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994,8(2–4):221–239.


  2. Byrne C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Problems 2002,18(2):441–453. 10.1088/0266-5611/18/2/310


  3. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Problems 2004,20(1):103–120. 10.1088/0266-5611/20/1/006


  4. Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Problems 2005,21(5):1655–1665. 10.1088/0266-5611/21/5/009


  5. Xu H-K: A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Problems 2006,22(6):2021–2034. 10.1088/0266-5611/22/6/007


  6. Yang Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Problems 2004,20(4):1261–1266. 10.1088/0266-5611/20/4/014


  7. Yang Q, Zhao J: Generalized KM theorems and their applications. Inverse Problems 2006,22(3):833–844. 10.1088/0266-5611/22/3/006


  8. Moudafi A: A relaxed alternating CQ-algorithm for convex feasibility problems. Nonlinear Anal., Theory Methods Appl. 2013, 79: 117–121.


  9. Goebel K, Kirk WA Cambridge Studies in Advanced Mathematics 28. In Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.


  10. Goebel K, Reich S: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York; 1984.


  11. Aoyama K, Kimura Y, Takahashi W, Toyoda M: Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal., Theory Methods Appl. 2007,67(8):2350–2360. 10.1016/j.na.2006.08.032


  12. Suzuki T: Strong convergence theorems for infinite families of nonexpansive mappings in general Banach space. Fixed Point Theory Appl. 2005, 1: 103–123.


  13. Engl HW, Hanke M, Neubauer A Mathematics and Its Applications 375. In Regularization of Inverse Problems. Kluwer Academic, Dordrecht; 1996.


  14. Ishikawa S: Fixed points by a new iteration method. Proc. Am. Math. Soc. 1974, 44: 147–150. 10.1090/S0002-9939-1974-0336469-5


  15. Xu H-K: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. 10.1016/j.jmaa.2004.04.059


  16. Kim TH, Xu H-K: Strong convergence of modified Mann iterations. Nonlinear Anal. 2005, 61: 51–60. 10.1016/j.na.2004.11.011



Acknowledgements

This research was supported by NSFC Grants No:11071279; No:11226125; No:11301379.

Author information


Corresponding author

Correspondence to Rudong Chen.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The main idea of this paper was proposed by LS, RC and YW prepared the manuscript initially and performed all the steps of the proofs in this research. All authors read and approved the final manuscript.

Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Shi, L.Y., Chen, R. & Wu, Y. Strong convergence of iterative algorithms for the split equality problem. J Inequal Appl 2014, 478 (2014). https://doi.org/10.1186/1029-242X-2014-478
