- Research Article
- Open Access
Approximating Curve and Strong Convergence of the CQ Algorithm for the Split Feasibility Problem
Journal of Inequalities and Applications volume 2010, Article number: 102085 (2010)
Abstract
Using the idea of Tikhonov's regularization, we present properties of the approximating curve for the split feasibility problem (SFP) and obtain the minimum-norm solution of SFP as the strong limit of the approximating curve. It is known that in the infinite-dimensional setting, Byrne's CQ algorithm (Byrne, 2002) has only weak convergence. We introduce a modification of Byrne's CQ algorithm in such a way that strong convergence is guaranteed and the limit is also the minimum-norm solution of SFP.
1. Introduction
Let $C$ and $Q$ be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively. The problem under consideration in this article is formulated as finding a point $x^*$ satisfying the property:

$$x^* \in C, \qquad Ax^* \in Q, \tag{1.1}$$

where $A : H_1 \to H_2$ is a bounded linear operator. Problem (1.1), referred to by Censor and Elfving [1] as the split feasibility problem (SFP), attracts many authors' attention due to its application in signal processing [1]. Various algorithms have been invented to solve it (see [2–7] and the references therein).
In particular, Byrne [2] introduced the so-called CQ algorithm. Take an initial guess $x_0 \in H_1$ arbitrarily, and define $\{x_n\}$ recursively as

$$x_{n+1} = P_C\bigl(x_n - \gamma A^*(I - P_Q)Ax_n\bigr), \qquad n \ge 0, \tag{1.2}$$

where $0 < \gamma < 2/\lambda$, where $P_C$ denotes the projector onto $C$, and where $\lambda$ is the spectral radius of the self-adjoint operator $A^*A$. Then the sequence $\{x_n\}$ generated by (1.2) converges strongly to a solution of SFP (1.1) whenever $H_1$ is finite-dimensional and whenever there exists a solution to SFP (1.1).
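For readers who wish to experiment, the following is a minimal finite-dimensional sketch of iteration (1.2); the choices of $C$ and $Q$ (boxes, so that $P_C$ and $P_Q$ have closed forms), of the matrix $A$, and of the step size $\gamma$ are illustrative assumptions, not data from the paper.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, n_iter=500):
    """CQ iteration (1.2): x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q) A x_n)."""
    x = x0.copy()
    for _ in range(n_iter):
        Ax = A @ x
        # A^T (I - P_Q) A x is the gradient of 1/2 ||(I - P_Q) A x||^2
        x = proj_C(x - gamma * (A.T @ (Ax - proj_Q(Ax))))
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
proj_C = lambda x: np.clip(x, -1.0, 1.0)      # C = [-1, 1]^6, projection is a clip
proj_Q = lambda y: np.clip(y, -0.5, 0.5)      # Q = [-0.5, 0.5]^4
gamma = 1.0 / np.linalg.norm(A, 2) ** 2       # 0 < gamma < 2 / lambda, lambda = ||A^T A||
x = cq_algorithm(A, proj_C, proj_Q, rng.standard_normal(6), gamma)
print(np.max(np.abs(A @ x)))                  # <= 0.5 (up to tolerance) when the SFP is solvable
```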
However, the CQ algorithm need not converge strongly in the case when $H_1$ is infinite-dimensional. Let us mention that the CQ algorithm can be regarded as a special case of the well-known Krasnosel'skii-Mann algorithm for approximating fixed points of nonexpansive mappings [3]. This iterative method is introduced in [8] and defined as follows. Take an initial guess $x_0$ arbitrarily, and define $\{x_n\}$ recursively as

$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n Tx_n, \qquad n \ge 0, \tag{1.3}$$

where $\{\alpha_n\}$ is a sequence in $[0,1]$ satisfying $\sum_{n=0}^{\infty}\alpha_n(1 - \alpha_n) = \infty$. If $T$ is nonexpansive with a nonempty fixed point set, then the sequence $\{x_n\}$ generated by (1.3) converges weakly to a fixed point of $T$. It is known that the Krasnosel'skii-Mann algorithm is in general not strongly convergent (see [9, 10] for counterexamples) and neither is the CQ algorithm.
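As a brief illustration of (1.3), here is a sketch with one concrete nonexpansive map; taking $T$ to be a box projection and the constant choice $\alpha_n = 1/2$ (which satisfies $\sum_n \alpha_n(1-\alpha_n) = \infty$) is an assumption made only for this example.

```python
import numpy as np

def km_iteration(T, x0, alphas):
    """Krasnosel'skii-Mann iteration (1.3): x_{n+1} = (1 - a_n) x_n + a_n T(x_n)."""
    x = x0.copy()
    for a in alphas:
        x = (1 - a) * x + a * T(x)
    return x

T = lambda x: np.clip(x, -1.0, 1.0)               # projection onto [-1, 1]^3 is nonexpansive
x = km_iteration(T, np.array([3.0, -4.0, 0.2]), [0.5] * 100)
print(x)                                           # approaches a fixed point of T, i.e. a point of the box
```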
It is therefore the aim of this paper to construct a new algorithm so that strong convergence is guaranteed. The paper is organized as follows. In the next section, some useful lemmas are given. In Section 3, we define the concept of the minimum-norm solution of SFP (1.1). Using Tikhonov's regularization, we obtain a continuous curve approximating this minimum-norm solution. Together with some properties of this approximating curve, we introduce, in Section 4, a modification of Byrne's CQ algorithm so that strong convergence is guaranteed and its limit is the minimum-norm solution of SFP (1.1).
2. Preliminaries
Throughout the rest of this paper, $I$ denotes the identity operator on $H_1$, $\operatorname{Fix}(T)$ the set of the fixed points of an operator $T$, and $\nabla f$ the gradient of a functional $f$. The notation "$\to$" denotes strong convergence and "$\rightharpoonup$" weak convergence.
Recall that an operator $T$ from $H_1$ into itself is called nonexpansive if

$$\|Tx - Ty\| \le \|x - y\|, \qquad x, y \in H_1; \tag{2.1}$$

contractive if there exists $\rho \in (0,1)$ such that

$$\|Tx - Ty\| \le \rho\|x - y\|, \qquad x, y \in H_1; \tag{2.2}$$

monotone if

$$\langle Tx - Ty,\ x - y\rangle \ge 0, \qquad x, y \in H_1. \tag{2.3}$$

Obviously, contractions are nonexpansive, and if $T$ is nonexpansive, then $I - T$ is monotone (see [11]).
Let $P_C$ denote the projection from $H_1$ onto a nonempty closed convex subset $C$ of $H_1$; that is,

$$P_Cx = \arg\min_{y \in C}\|x - y\|, \qquad x \in H_1. \tag{2.4}$$

It is well known that $P_Cx$ is characterized by the inequality

$$\langle x - P_Cx,\ z - P_Cx\rangle \le 0, \qquad z \in C. \tag{2.5}$$

Consequently, $P_C$ is nonexpansive.
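The characterization (2.5) is easy to check numerically; the sketch below does so for the closed unit ball of $\mathbb{R}^3$, whose projection has the closed form $x/\max(1,\|x\|)$ (this choice of $C$ is an assumption made only for the illustration).

```python
import numpy as np

def proj_unit_ball(x):
    """Projection onto the closed unit ball: x / max(1, ||x||)."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

rng = np.random.default_rng(1)
x = 3.0 * rng.standard_normal(3)          # an arbitrary point, typically outside the ball
px = proj_unit_ball(x)
# Check <x - P_C x, z - P_C x> <= 0 for sampled points z of C, i.e. inequality (2.5).
for _ in range(1000):
    z = proj_unit_ball(rng.standard_normal(3))
    assert np.dot(x - px, z - px) <= 1e-12
print("inequality (2.5) verified on 1000 sampled points of C")
```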
The lemma below is referred to as the demiclosedness principle for nonexpansive mappings (see [12]).
Lemma 2.1 (demiclosedness principle).
Let $C$ be a nonempty closed convex subset of $H_1$ and $T : C \to C$ a nonexpansive mapping with $\operatorname{Fix}(T) \neq \emptyset$. If $\{x_n\}$ is a sequence in $C$ weakly converging to $x$ and if the sequence $\{(I - T)x_n\}$ converges strongly to $y$, then $(I - T)x = y$. In particular, if $y = 0$, then $x \in \operatorname{Fix}(T)$.
Let $f : H_1 \to \mathbb{R}$ be a functional. Recall that

(i) $f$ is convex if

$$f(\lambda x + (1 - \lambda)y) \le \lambda f(x) + (1 - \lambda)f(y), \qquad \lambda \in (0,1),\ x, y \in H_1; \tag{2.6}$$

(ii) $f$ is strictly convex if the strict inequality in (2.6) holds for all distinct $x, y \in H_1$;

(iii) $f$ is strongly convex if there exists a constant $\beta > 0$ such that

$$f(\lambda x + (1 - \lambda)y) \le \lambda f(x) + (1 - \lambda)f(y) - \tfrac{1}{2}\beta\lambda(1 - \lambda)\|x - y\|^2, \qquad \lambda \in (0,1),\ x, y \in H_1; \tag{2.7}$$

(iv) $f$ is coercive if $f(x) \to \infty$ whenever $\|x\| \to \infty$.

It is easily seen that if $f$ is strongly convex, then it is coercive. See [13] for more details about convex functions.
The following lemma gives the optimality condition for the minimizer of a convex functional over a closed convex subset.
Lemma 2.2 (see [14]).
Let $f$ be a convex and differentiable functional on $H_1$ and let $K$ be a closed convex subset of $H_1$. Then $x^* \in K$ is a solution of the problem

$$\min_{x \in K} f(x) \tag{2.8}$$

if and only if $x^*$ satisfies the following optimality condition:

$$\langle \nabla f(x^*),\ x - x^*\rangle \ge 0, \qquad x \in K. \tag{2.9}$$

Moreover, if $f$ is, in addition, strictly convex and coercive, then problem (2.8) has a unique solution.
The following is a sufficient condition for a real sequence to converge to zero.
Lemma 2.3 (see [15]).
Let $\{a_n\}$ be a nonnegative real sequence satisfying

$$a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n\delta_n, \qquad n \ge 0, \tag{2.10}$$

where the sequences $\{\gamma_n\}$ and $\{\delta_n\}$ satisfy the conditions:

(1) $\{\gamma_n\} \subset (0,1)$;

(2) $\sum_{n=0}^{\infty}\gamma_n = \infty$;

(3) either $\limsup_{n\to\infty}\delta_n \le 0$ or $\sum_{n=0}^{\infty}|\gamma_n\delta_n| < \infty$.

Then $\lim_{n\to\infty}a_n = 0$.
3. Approximating Curves
The convexly constrained linear problem requires solving the constrained linear system (cf. [16, 17])

$$Ax = b, \qquad x \in C, \tag{3.1}$$

where $b \in H_2$. A classical way to deal with such a possibly ill-posed problem is the well-known Tikhonov regularization, which approximates a solution of problem (3.1) by the unique minimizer of the regularized problem

$$\min_{x \in C}\ \tfrac{1}{2}\|Ax - b\|^2 + \tfrac{1}{2}\alpha\|x\|^2, \tag{3.2}$$

where $\alpha > 0$ is known as the regularization parameter.
We now try to transfer this idea of Tikhonov's regularization method for solving the constrained linear inverse problem (3.1) to the case of SFP (1.1).
It is not hard to find that SFP (1.1) is equivalent to the minimization problem

$$\min_{x \in C}\ f(x) := \tfrac{1}{2}\|(I - P_Q)Ax\|^2. \tag{3.3}$$

Motivated by Tikhonov's regularization, we consider the minimization problem

$$\min_{x \in C}\ f_\alpha(x) := \tfrac{1}{2}\|(I - P_Q)Ax\|^2 + \tfrac{1}{2}\alpha\|x\|^2, \tag{3.4}$$

where $\alpha > 0$ is the regularization parameter. Denote by $x_\alpha$ the unique solution of (3.4); namely,

$$x_\alpha = \arg\min_{x \in C} f_\alpha(x). \tag{3.5}$$
Proposition 3.1.
For any $\alpha > 0$, the minimizer $x_\alpha$ given by (3.5) is uniquely defined. Moreover, $x_\alpha$ is characterized by the inequality

$$\langle A^*(I - P_Q)Ax_\alpha + \alpha x_\alpha,\ x - x_\alpha\rangle \ge 0, \qquad x \in C. \tag{3.6}$$

Proof.

Let

$$f_\alpha(x) = f(x) + \tfrac{\alpha}{2}\|x\|^2, \qquad f(x) = \tfrac{1}{2}\|(I - P_Q)Ax\|^2. \tag{3.7}$$

Since $f$ is convex and differentiable with gradient (see [13])

$$\nabla f(x) = A^*(I - P_Q)Ax, \tag{3.8}$$

$f_\alpha$ is strictly convex, coercive, and differentiable with gradient

$$\nabla f_\alpha(x) = A^*(I - P_Q)Ax + \alpha x. \tag{3.9}$$

Thus, applying Lemma 2.2 yields the assertion (3.6), as desired.
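To make the regularization concrete, here is a small numerical sketch, under the same illustrative box assumptions used earlier, that approximates $x_\alpha$ by projected gradient descent on $f_\alpha$ using the gradient (3.9); the step size $1/(\|A\|^2 + \alpha)$ and all problem data are assumptions for the example only. As $\alpha$ decreases to $0$, the computed points approach the minimum-norm solution (cf. Theorem 3.4 below).

```python
import numpy as np

def x_alpha(A, proj_C, proj_Q, alpha, n_iter=3000):
    """Approximate the minimizer x_alpha of (3.4) by projected gradient descent."""
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + alpha)   # a safe step size for the gradient (3.9)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax)) + alpha * x     # gradient (3.9) of f_alpha
        x = proj_C(x - step * grad)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
proj_C = lambda x: np.clip(x, -1.0, 1.0)               # C = [-1, 1]^6
b = A @ np.full(6, 0.5)                                # make the SFP solvable by construction
proj_Q = lambda y: np.clip(y, b - 0.1, b + 0.1)        # Q = a small box around A(0.5, ..., 0.5)
for alpha in (1.0, 0.1, 0.01, 0.001):
    xa = x_alpha(A, proj_C, proj_Q, alpha)
    print(f"alpha = {alpha:7.3f}   ||x_alpha|| = {np.linalg.norm(xa):.4f}")
```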
The next result collects some useful properties of $x_\alpha$.
Proposition 3.2.
The following assertions hold.
(a) $\|x_\alpha\|$ is decreasing for $\alpha \in (0, \infty)$.

(b) $f(x_\alpha)$ is increasing for $\alpha \in (0, \infty)$.

(c) $\alpha \mapsto x_\alpha$ defines a continuous curve from $(0, \infty)$ to $H_1$.
Proof.
Let $\alpha > \beta > 0$ be fixed. Since $x_\alpha$ and $x_\beta$ are the (unique) minimizers over $C$ of $f_\alpha$ and $f_\beta$, respectively, we get

$$f(x_\alpha) + \tfrac{\alpha}{2}\|x_\alpha\|^2 \le f(x_\beta) + \tfrac{\alpha}{2}\|x_\beta\|^2, \tag{3.10}$$

$$f(x_\beta) + \tfrac{\beta}{2}\|x_\beta\|^2 \le f(x_\alpha) + \tfrac{\beta}{2}\|x_\alpha\|^2. \tag{3.11}$$

Adding up (3.10) and (3.11) yields

$$\tfrac{\alpha - \beta}{2}\bigl(\|x_\alpha\|^2 - \|x_\beta\|^2\bigr) \le 0, \tag{3.12}$$

which implies that $\|x_\alpha\| \le \|x_\beta\|$. Hence (a) holds.

It follows from (3.11) that

$$f(x_\beta) - f(x_\alpha) \le \tfrac{\beta}{2}\bigl(\|x_\alpha\|^2 - \|x_\beta\|^2\bigr), \tag{3.13}$$

which together with (a) implies

$$f(x_\beta) \le f(x_\alpha), \tag{3.14}$$

and therefore (b) holds.

By Proposition 3.1, we have that

$$\langle A^*(I - P_Q)Ax_\alpha + \alpha x_\alpha,\ x_\beta - x_\alpha\rangle \ge 0 \tag{3.15}$$

and also that

$$\langle A^*(I - P_Q)Ax_\beta + \beta x_\beta,\ x_\alpha - x_\beta\rangle \ge 0. \tag{3.16}$$

Adding up (3.15) and (3.16), we get

$$\langle A^*(I - P_Q)Ax_\alpha - A^*(I - P_Q)Ax_\beta,\ x_\beta - x_\alpha\rangle + \langle \alpha x_\alpha - \beta x_\beta,\ x_\beta - x_\alpha\rangle \ge 0. \tag{3.17}$$

Since $A^*(I - P_Q)A$ is monotone, we obtain from the last relation

$$\langle \alpha x_\alpha - \beta x_\beta,\ x_\beta - x_\alpha\rangle \ge 0. \tag{3.18}$$

It turns out that

$$\|x_\alpha - x_\beta\| \le \frac{|\alpha - \beta|}{\alpha}\,\|x_\beta\|. \tag{3.19}$$

Thus (c) holds.
Let $\Gamma := C \cap A^{-1}(Q)$, where $A^{-1}(Q) = \{x \in H_1 : Ax \in Q\}$. In what follows, we assume that $\Gamma \neq \emptyset$; that is, the solution set of SFP (1.1) is nonempty. The fact that $\Gamma$ is a nonempty closed convex set thus allows us to introduce the concept of the minimum-norm solution of SFP (1.1).

Definition 3.3.

An element $x^\dagger \in \Gamma$ is said to be the minimum-norm solution of SFP (1.1) if $\|x^\dagger\| = \min\{\|x\| : x \in \Gamma\}$. In other words, $x^\dagger = P_\Gamma(0)$ is the projection of the origin onto the solution set $\Gamma$ of SFP (1.1). Thus the minimum-norm solution $x^\dagger$ of SFP (1.1) exists and is unique.
Theorem 3.4.
Let $x_\alpha$ be given as in (3.5). Then $x_\alpha$ converges strongly, as $\alpha \to 0$, to the minimum-norm solution $x^\dagger$ of SFP (1.1).
Proof.
We first show that the inequality

$$\|x_\alpha\| \le \|z\|, \qquad z \in \Gamma, \tag{3.20}$$

holds for any $\alpha > 0$. To this end, observe that, for $z \in \Gamma$,

$$f(x_\alpha) + \tfrac{\alpha}{2}\|x_\alpha\|^2 \le f(z) + \tfrac{\alpha}{2}\|z\|^2. \tag{3.21}$$

Since $z \in \Gamma$, $f(z) = 0 \le f(x_\alpha)$. It follows from (3.21) that

$$\tfrac{\alpha}{2}\|x_\alpha\|^2 \le \tfrac{\alpha}{2}\|z\|^2, \tag{3.22}$$

and (3.20) is proven.

Let now $\{\alpha_n\}$ be a sequence of positive numbers such that $\alpha_n \to 0$ as $n \to \infty$, and let $x_{\alpha_n}$ be abbreviated as $x_n$. All we need to prove is that $\{x_n\}$ contains a subsequence converging strongly to $x^\dagger$. Since $\{x_n\}$ is bounded (by (3.20)) and since $C$ is closed and convex, by passing to a subsequence if necessary, we may assume that $\{x_n\}$ converges weakly to a point $\tilde{x} \in C$. By Proposition 3.1, we deduce that

$$\langle A^*(I - P_Q)Ax_n + \alpha_nx_n,\ x - x_n\rangle \ge 0, \qquad x \in C. \tag{3.23}$$

It turns out that

$$\langle A^*(I - P_Q)Ax_n,\ x - x_n\rangle \ge \alpha_n\langle x_n,\ x_n - x\rangle, \qquad x \in C. \tag{3.24}$$

Since $Az \in Q$ for $z \in \Gamma$, the characterizing inequality (2.5) gives

$$\langle Ax_n - P_QAx_n,\ Az - P_QAx_n\rangle \le 0, \tag{3.25}$$

and this implies that

$$\langle A^*(I - P_Q)Ax_n,\ x_n - z\rangle \ge \|(I - P_Q)Ax_n\|^2. \tag{3.26}$$

Now by combining (3.26) and (3.24) (with $x = z$), we get

$$\|(I - P_Q)Ax_n\|^2 \le \alpha_n\langle x_n,\ z - x_n\rangle \le \alpha_n\|x_n\|\bigl(\|z\| + \|x_n\|\bigr) \le 2\alpha_n\|z\|^2, \tag{3.27}$$

where the last inequality follows from (3.20). Consequently, we get

$$\|(I - P_Q)Ax_n\| \to 0 \quad \text{as } n \to \infty. \tag{3.28}$$

Note that $A$ is also weakly continuous and hence $Ax_n \rightharpoonup A\tilde{x}$. Now, due to (3.28), we can use the demiclosedness principle (Lemma 2.1) to conclude that $A\tilde{x} \in \operatorname{Fix}(P_Q)$. That is, $P_QA\tilde{x} = A\tilde{x}$, or $A\tilde{x} \in Q$; therefore, $\tilde{x} \in \Gamma$.

We next prove that $x_n \to \tilde{x}$ in norm and that $\tilde{x} = x^\dagger$, and this finishes the proof. To see this, we have that the weak convergence to $\tilde{x}$ of $\{x_n\}$ together with (3.20) implies that

$$\|\tilde{x}\| \le \liminf_{n\to\infty}\|x_n\| \le \|z\|, \qquad z \in \Gamma. \tag{3.29}$$

This shows that $\tilde{x}$ is also a point in $\Gamma$ which assumes minimum norm. Due to the uniqueness of the minimum-norm element, we must have $\tilde{x} = x^\dagger$. Moreover, since $\tilde{x} \in \Gamma$, inequality (3.20) with $z = \tilde{x}$ gives $\limsup_{n\to\infty}\|x_n\| \le \|\tilde{x}\|$, which together with (3.29) yields $\|x_n\| \to \|\tilde{x}\|$; combined with the weak convergence $x_n \rightharpoonup \tilde{x}$, this implies $x_n \to \tilde{x} = x^\dagger$ in norm.
Remark 3.5.
The above argument shows that if the solution set $\Gamma$ of SFP (1.1) is empty, then the net of norms, $\{\|x_\alpha\|\}$, diverges to $\infty$ as $\alpha \to 0$.
4. A Modified CQ Algorithm
It is a standard way to use contractions to approximate nonexpansive mappings. We follow this idea and use contractions to approximate the nonexpansive mapping $P_C\bigl(I - \gamma A^*(I - P_Q)A\bigr)$ in order to modify Byrne's CQ algorithm. More precisely, we introduce the following algorithm, which is viewed as a modification of Byrne's CQ algorithm. The purpose of such a modification lies in the hope of strong convergence.
Algorithm 4.1.
For an arbitrary guess $x_0$, the sequence $\{x_n\}$ is generated by the iterative algorithm

$$x_{n+1} = P_C\bigl[(1 - \alpha_n)\bigl(x_n - \gamma A^*(I - P_Q)Ax_n\bigr)\bigr], \qquad n \ge 0, \tag{4.1}$$

where $\gamma$ is as in (1.2) and $\{\alpha_n\}$ is a sequence in $(0,1)$ such that

(1) $\lim_{n\to\infty}\alpha_n = 0$;

(2) $\sum_{n=0}^{\infty}\alpha_n = \infty$;

(3) either $\sum_{n=0}^{\infty}|\alpha_{n+1} - \alpha_n| < \infty$ or $\lim_{n\to\infty}\alpha_n/\alpha_{n+1} = 1$.

Note that a prototype of $\{\alpha_n\}$ is $\alpha_n = 1/n$ for all $n \ge 1$.
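As an illustration, the following sketch runs iteration (4.1) on the same kind of toy box-constrained instance used earlier; all problem data, and the choice $\alpha_n = 1/(n+2)$ (which satisfies conditions (1)–(3)), are assumptions for the example only. In line with Theorem 4.3 below, the iterates should settle near the minimum-norm solution.

```python
import numpy as np

def modified_cq(A, proj_C, proj_Q, x0, gamma, n_iter=5000):
    """Iteration (4.1): x_{n+1} = P_C[(1 - a_n)(x_n - gamma * A^T (I - P_Q) A x_n)]."""
    x = x0.copy()
    for n in range(n_iter):
        a_n = 1.0 / (n + 2)                            # a choice of alpha_n in (0,1) satisfying (1)-(3)
        y = x - gamma * (A.T @ (A @ x - proj_Q(A @ x)))  # the CQ step of (1.2)
        x = proj_C((1.0 - a_n) * y)                    # shrink toward the origin, then project onto C
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
proj_C = lambda x: np.clip(x, -1.0, 1.0)               # C = [-1, 1]^6
b = A @ np.full(6, 0.5)                                # make the SFP solvable by construction
proj_Q = lambda y: np.clip(y, b - 0.1, b + 0.1)        # Q = a small box around A(0.5, ..., 0.5)
gamma = 1.0 / np.linalg.norm(A, 2) ** 2                # 0 < gamma < 2 / lambda
x = modified_cq(A, proj_C, proj_Q, rng.standard_normal(6), gamma)
print(np.linalg.norm(x))                               # approximately the norm of the minimum-norm solution
```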
To prove the convergence of algorithm (4.1) (see Theorem 4.3 below), we need the following lemma.
Lemma 4.2.
Set $T := P_C\bigl(I - \gamma A^*(I - P_Q)A\bigr)$, where $0 < \gamma < 2/\lambda$, with $\lambda$ being the spectral radius of the self-adjoint operator $A^*A$.

(i) $T$ is an averaged mapping; namely, $T = (1 - \beta)I + \beta V$, where $\beta \in (0,1)$ is a constant and $V$ is a nonexpansive mapping from $H_1$ into itself.

(ii) $\operatorname{Fix}(T) = \Gamma$; consequently, $\operatorname{Fix}(T) \neq \emptyset$.
Proof.
(i) That the operator $I - \gamma A^*(I - P_Q)A$ is averaged whenever $0 < \gamma < 2/\lambda$ is actually proved in [3].

To see (ii), we first observe that the inclusion $\Gamma \subseteq \operatorname{Fix}(T)$ holds trivially. It remains to prove the implication $x \in \operatorname{Fix}(T) \Rightarrow x \in \Gamma$. To see this, we notice that the relation $x = Tx = P_C\bigl(x - \gamma A^*(I - P_Q)Ax\bigr)$ is equivalent, by (2.5), to the relation $\langle x - \gamma A^*(I - P_Q)Ax - x,\ z - x\rangle \le 0$ for all $z \in C$. It turns out that

$$\langle A^*(I - P_Q)Ax,\ z - x\rangle \ge 0, \qquad z \in C. \tag{4.2}$$

Since the solution set $\Gamma \neq \emptyset$, we can take $z \in \Gamma$. Now since $Az \in Q$, we have by (2.5),

$$\langle Ax - P_QAx,\ Az - P_QAx\rangle \le 0. \tag{4.3}$$

It follows from (4.2) and (4.3) that

$$\|(I - P_Q)Ax\|^2 = \langle Ax - P_QAx,\ Ax - Az\rangle + \langle Ax - P_QAx,\ Az - P_QAx\rangle \le 0. \tag{4.4}$$

This shows that $Ax = P_QAx \in Q$; that is, $x \in \Gamma$.

Finally, since $T = P_C\bigl(I - \gamma A^*(I - P_Q)A\bigr)$, and both $P_C$ and $I - \gamma A^*(I - P_Q)A$ are averaged, we have that $T$, being the composition of two averaged mappings, is averaged. This proves (i) and completes the proof.
Theorem 4.3.
The sequence $\{x_n\}$ generated by algorithm (4.1) converges strongly to the minimum-norm solution $x^\dagger$ of SFP (1.1).
Proof.
Define operators $T$ and $V_n$ ($n \ge 0$) on $H_1$ by

$$Tx = P_C\bigl(I - \gamma A^*(I - P_Q)A\bigr)x, \qquad V_nx = P_C\bigl[(1 - \alpha_n)\bigl(I - \gamma A^*(I - P_Q)A\bigr)x\bigr], \tag{4.5}$$

where $T$ is averaged by Lemma 4.2.

It is readily seen that $V_n$ is a contraction with contractive constant $1 - \alpha_n$. Namely,

$$\|V_nx - V_ny\| \le (1 - \alpha_n)\|x - y\|, \qquad x, y \in H_1. \tag{4.6}$$

Also we may rewrite algorithm (4.1) as

$$x_{n+1} = V_nx_n, \qquad n \ge 0. \tag{4.7}$$

We first prove that $\{x_n\}$ is a bounded sequence. Indeed, since $\Gamma \neq \emptyset$, we can take any $z \in \Gamma$ (thus $Tz = z$ by Lemma 4.2) to deduce that

$$\|x_{n+1} - z\| = \|V_nx_n - z\| \le \|V_nx_n - V_nz\| + \|V_nz - z\| \le (1 - \alpha_n)\|x_n - z\| + \|V_nz - z\|. \tag{4.8}$$

Note that

$$\|V_nz - z\| = \bigl\|P_C\bigl[(1 - \alpha_n)z\bigr] - P_Cz\bigr\| \le \alpha_n\|z\|. \tag{4.9}$$

Substituting (4.9) into (4.8), we get

$$\|x_{n+1} - z\| \le (1 - \alpha_n)\|x_n - z\| + \alpha_n\|z\|. \tag{4.10}$$

By induction, we can easily show that, for all $n \ge 0$,

$$\|x_n - z\| \le \max\bigl\{\|x_0 - z\|,\ \|z\|\bigr\}. \tag{4.11}$$

In particular, $\{x_n\}$ is bounded.

We now claim that

$$\|x_{n+1} - x_n\| \to 0. \tag{4.12}$$

To see this, we compute

$$\|x_{n+1} - x_n\| = \|V_nx_n - V_{n-1}x_{n-1}\| \le \|V_nx_n - V_nx_{n-1}\| + \|V_nx_{n-1} - V_{n-1}x_{n-1}\| \le (1 - \alpha_n)\|x_n - x_{n-1}\| + \|V_nx_{n-1} - V_{n-1}x_{n-1}\|. \tag{4.13}$$

Letting $M > 0$ be a constant such that $\bigl\|x_n - \gamma A^*(I - P_Q)Ax_n\bigr\| \le M$ for all $n$, we find

$$\|V_nx_{n-1} - V_{n-1}x_{n-1}\| \le |\alpha_n - \alpha_{n-1}|\,\bigl\|x_{n-1} - \gamma A^*(I - P_Q)Ax_{n-1}\bigr\| \le M|\alpha_n - \alpha_{n-1}|. \tag{4.14}$$

Substituting (4.14) into (4.13), we arrive at

$$\|x_{n+1} - x_n\| \le (1 - \alpha_n)\|x_n - x_{n-1}\| + M|\alpha_n - \alpha_{n-1}|. \tag{4.15}$$

By virtue of the assumptions (1)–(3) on $\{\alpha_n\}$, we can apply Lemma 2.3 to (4.15) to obtain (4.12). Consequently, we also have

$$\|x_n - Tx_n\| \to 0. \tag{4.16}$$

This follows from the following computations:

$$\|x_n - Tx_n\| \le \|x_n - x_{n+1}\| + \|x_{n+1} - Tx_n\| = \|x_n - x_{n+1}\| + \|V_nx_n - Tx_n\| \le \|x_n - x_{n+1}\| + \alpha_nM \to 0. \tag{4.17}$$

Therefore, the demiclosedness principle (Lemma 2.1) ensures that each weak limit point of $\{x_n\}$ is a fixed point of the nonexpansive mapping $T$, that is, a point of the solution set $\Gamma$ of SFP (1.1).
One of the key ingredients of the proof is the following conclusion:

$$\limsup_{n\to\infty}\,\bigl\langle x^\dagger,\ x^\dagger - y_n\bigr\rangle \le 0, \qquad \text{where } y_n := x_n - \gamma A^*(I - P_Q)Ax_n, \tag{4.18}$$

where $x^\dagger$ is the minimum-norm element of $\Gamma$ (i.e., the projection $x^\dagger = P_\Gamma(0)$). Since

$$\bigl\langle x^\dagger,\ x^\dagger - y_n\bigr\rangle = \bigl\langle x^\dagger,\ x^\dagger - x_n\bigr\rangle + \bigl\langle x^\dagger,\ x_n - y_n\bigr\rangle, \tag{4.19}$$

to prove (4.18), it suffices to prove that

$$\|x_n - y_n\| \to 0, \tag{4.20}$$

$$\limsup_{n\to\infty}\,\bigl\langle x^\dagger,\ x^\dagger - x_n\bigr\rangle \le 0. \tag{4.21}$$

To prove (4.20), we use Lemma 4.2 (and its proof) to get $\operatorname{Fix}(T) = \Gamma$ and the fact that $B := I - \gamma A^*(I - P_Q)A$ is averaged. Write $B = (1 - \beta)I + \beta V$ for some constant $\beta \in (0,1)$ and a nonexpansive mapping $V$. Then we derive, by taking a point $z \in \Gamma$ (so that $Bz = Vz = z$), that

$$\|x_{n+1} - z\|^2 \le \bigl\|(1 - \alpha_n)Bx_n - z\bigr\|^2 \le \|Bx_n - z\|^2 + \alpha_nM \le \|x_n - z\|^2 - \beta(1 - \beta)\|x_n - Vx_n\|^2 + \alpha_nM. \tag{4.22}$$

It turns out that (for some constant $M > 0$ large enough that $2\|Bx_n\|\,\|Bx_n - z\| + \|Bx_n\|^2 \le M$ and $\|x_n - z\| + \|x_{n+1} - z\| \le M$ for all $n$)

$$\beta(1 - \beta)\|x_n - Vx_n\|^2 \le \|x_n - z\|^2 - \|x_{n+1} - z\|^2 + \alpha_nM \le M\|x_n - x_{n+1}\| + \alpha_nM. \tag{4.23}$$

Now since $\|x_{n+1} - x_n\| \to 0$ and $\alpha_n \to 0$, (4.23) implies $\|x_n - Vx_n\| \to 0$; hence $\|x_n - y_n\| = \|x_n - Bx_n\| = \beta\|x_n - Vx_n\| \to 0$, and (4.20) follows.

To prove (4.21), we take a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ so that

$$\limsup_{n\to\infty}\,\bigl\langle x^\dagger,\ x^\dagger - x_n\bigr\rangle = \lim_{k\to\infty}\,\bigl\langle x^\dagger,\ x^\dagger - x_{n_k}\bigr\rangle. \tag{4.24}$$

Since $\{x_{n_k}\}$ is bounded, we may further assume with no loss of generality that $\{x_{n_k}\}$ converges weakly to a point $\hat{x}$. Noticing that $\hat{x} \in \Gamma$ (by the demiclosedness argument above) and that $x^\dagger$ is the projection of the origin onto $\Gamma$, and applying (2.5), we arrive at

$$\lim_{k\to\infty}\,\bigl\langle x^\dagger,\ x^\dagger - x_{n_k}\bigr\rangle = \bigl\langle x^\dagger,\ x^\dagger - \hat{x}\bigr\rangle = \bigl\langle 0 - x^\dagger,\ \hat{x} - x^\dagger\bigr\rangle \le 0. \tag{4.25}$$

This is (4.21).
Finally we prove that $x_n \to x^\dagger$ in norm. To see this, we compute

$$\|x_{n+1} - x^\dagger\|^2 = \bigl\|P_C\bigl[(1 - \alpha_n)y_n\bigr] - P_Cx^\dagger\bigr\|^2 \le \bigl\|(1 - \alpha_n)(y_n - x^\dagger) - \alpha_nx^\dagger\bigr\|^2 \le (1 - \alpha_n)\|x_n - x^\dagger\|^2 + \alpha_n\delta_n, \tag{4.26}$$

using $\|y_n - x^\dagger\| \le \|x_n - x^\dagger\|$ (which holds since $I - \gamma A^*(I - P_Q)A$ is nonexpansive and fixes $x^\dagger$), where

$$\delta_n := 2(1 - \alpha_n)\bigl\langle x^\dagger,\ x^\dagger - y_n\bigr\rangle + \alpha_n\|x^\dagger\|^2 \tag{4.27}$$

satisfies the property (due to (4.18))

$$\limsup_{n\to\infty}\delta_n \le 0. \tag{4.28}$$

We therefore can apply Lemma 2.3 to (4.26) to conclude that $\|x_n - x^\dagger\| \to 0$. This completes the proof.
References
1. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numerical Algorithms 1994, 8(2–4):221–239.
2. Byrne C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Problems 2002, 18(2):441–453. 10.1088/0266-5611/18/2/310
3. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Problems 2004, 20(1):103–120. 10.1088/0266-5611/20/1/006
4. Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Problems 2005, 21(5):1655–1665. 10.1088/0266-5611/21/5/009
5. Xu H-K: A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Problems 2006, 22(6):2021–2034. 10.1088/0266-5611/22/6/007
6. Yang Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Problems 2004, 20(4):1261–1266. 10.1088/0266-5611/20/4/014
7. Yang Q, Zhao J: Generalized KM theorems and their applications. Inverse Problems 2006, 22(3):833–844. 10.1088/0266-5611/22/3/006
8. Mann WR: Mean value methods in iteration. Proceedings of the American Mathematical Society 1953, 4:506–510. 10.1090/S0002-9939-1953-0054846-3
9. Genel A, Lindenstrauss J: An example concerning fixed points. Israel Journal of Mathematics 1975, 22(1):81–86. 10.1007/BF02757276
10. Güler O: On the convergence of the proximal point algorithm for convex minimization. SIAM Journal on Control and Optimization 1991, 29(2):403–419. 10.1137/0329022
11. Alber Y, Ryazantseva I: Nonlinear Ill-Posed Problems of Monotone Type. Springer, Dordrecht, The Netherlands; 2006:xiv+410.
12. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory, Cambridge Studies in Advanced Mathematics. Volume 28. Cambridge University Press, Cambridge, UK; 1990:viii+244.
13. Aubin J-P: Optima and Equilibria: An Introduction to Nonlinear Analysis, Graduate Texts in Mathematics. Volume 140. Springer, Berlin, Germany; 1993:xvi+417.
14. Engl HW, Hanke M, Neubauer A: Regularization of Inverse Problems, Mathematics and Its Applications. Volume 375. Kluwer Academic Publishers, Dordrecht, The Netherlands; 1996:viii+321.
15. Xu H-K: Iterative algorithms for nonlinear operators. Journal of the London Mathematical Society. Second Series 2002, 66(1):240–256. 10.1112/S0024610702003332
16. Eicke B: Iteration methods for convexly constrained ill-posed problems in Hilbert space. Numerical Functional Analysis and Optimization 1992, 13(5–6):413–429. 10.1080/01630569208816489
17. Neubauer A: Tikhonov-regularization of ill-posed linear operator equations on closed convex sets. Journal of Approximation Theory 1988, 53(3):304–320. 10.1016/0021-9045(88)90025-1
Acknowledgment
H. K. Xu was supported in part by NSC 97-2628-M-110-003-MY3, and by DGES MTM2006-13997-C02-01.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Wang, F., Xu, HK. Approximating Curve and Strong Convergence of the CQ Algorithm for the Split Feasibility Problem. J Inequal Appl 2010, 102085 (2010). https://doi.org/10.1155/2010/102085
DOI: https://doi.org/10.1155/2010/102085
Keywords
- Minimization Problem
- Weak Convergence
- Nonexpansive Mapping
- Strong Convergence
- Bounded Linear Operator