# Adaptively relaxed algorithms for solving the split feasibility problem with a new step size

Haiyun Zhou^{1, 2} and Peiyuan Wang^{2, 3} (corresponding author)

*Journal of Inequalities and Applications* **2014**:448

https://doi.org/10.1186/1029-242X-2014-448

© Zhou and Wang; licensee Springer. 2014

**Received: **26 February 2014

**Accepted: **20 October 2014

**Published: **5 November 2014

## Abstract

In the present paper, we propose several kinds of adaptively relaxed iterative algorithms with a new step size for solving the split feasibility problem in real Hilbert spaces. The proposed algorithms never terminate prematurely, whereas the known algorithms in the literature may. Several weak and strong convergence theorems for the proposed algorithms are established. Some numerical experiments are also included to illustrate the effectiveness of the proposed algorithms.

**MSC:**46E20, 47J20, 47J25.

## Keywords

## 1 Introduction

Since its inception in 1994, the split feasibility problem (SFP) [1] has been attracting researchers’ interest [2, 3] due to its extensive applications in signal processing and image reconstruction [4], with particular progress in intensity-modulated radiation therapy [5, 6].

Let *C* and *Q* be nonempty closed convex subsets of ${H}_{1}$ and ${H}_{2}$, respectively, and let $A:{H}_{1}\to {H}_{2}$ be a bounded linear operator. Then the SFP can be formulated as finding a point $\stackrel{\u02c6}{x}$ with the property

$\stackrel{\u02c6}{x}\in C,\phantom{\rule{2em}{0ex}}A\stackrel{\u02c6}{x}\in Q.$ (1.1)

The set of solutions of SFP (1.1) is denoted by $\mathrm{\Gamma}=C\cap {A}^{-1}(Q)$.

The well-known CQ algorithm generates a sequence $\{{x}_{n}\}$ by the recursion

${x}_{n+1}={P}_{C}({x}_{n}-{\tau}_{n}{A}^{\ast}(I-{P}_{Q})A{x}_{n}),\phantom{\rule{1em}{0ex}}n\ge 1,$ (1.2)

where the step size ${\tau}_{n}$ is chosen in the open interval $(0,2/{\parallel A\parallel}^{2})$, while ${P}_{C}$ and ${P}_{Q}$ are the orthogonal projections onto *C* and *Q*, respectively.

Define a functional $f:{H}_{1}\to \mathbb{R}$ by

$f(x)=\frac{1}{2}{\parallel (I-{P}_{Q})Ax\parallel}^{2},\phantom{\rule{1em}{0ex}}x\in {H}_{1}.$ (1.3)

Then *f* is differentiable and has a Lipschitz gradient given by

$\mathrm{\nabla}f(x)={A}^{\ast}(I-{P}_{Q})Ax,\phantom{\rule{1em}{0ex}}x\in {H}_{1},$

and SFP (1.1) can be solved by the gradient-projection algorithm

${x}_{n+1}={P}_{C}({x}_{n}-{\tau}_{n}\mathrm{\nabla}f({x}_{n})),\phantom{\rule{1em}{0ex}}n\ge 1,$ (1.8)

where ${\tau}_{n}\in (0,2/L)$, while *L* is the Lipschitz constant of ∇*f*. Noting that $L={\parallel A\parallel}^{2}$, we see immediately that (1.8) is exactly CQ algorithm (1.2).

We note that, in algorithms (1.2) and (1.8) mentioned above, the choice of the step size ${\tau}_{n}$ depends heavily on the operator (matrix) norm $\parallel A\parallel $. This means that for an actual implementation of CQ algorithm (1.2), one first has to know at least an upper bound of the operator (matrix) norm $\parallel A\parallel $, which is in general difficult. To overcome this difficulty, several authors proposed various adaptive methods, which permit the step size ${\tau}_{n}$ to be selected self-adaptively; see [7–9].
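To make the dependence on $\parallel A\parallel $ concrete, the following is a minimal numerical sketch of the fixed-step CQ iteration (1.2); the box-shaped sets *C* and *Q* and the particular matrix are hypothetical choices for illustration only.

```python
import numpy as np

def cq_fixed_step(A, x0, proj_C, proj_Q, n_iter=200):
    """CQ iteration x_{n+1} = P_C(x_n - tau * A^T (I - P_Q) A x_n) with a fixed
    step tau in (0, 2/||A||^2); note it needs the spectral norm of A up front."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2   # any value in (0, 2/||A||^2) works
    x = x0.astype(float)
    for _ in range(n_iter):
        Ax = A @ x
        x = proj_C(x - tau * (A.T @ (Ax - proj_Q(Ax))))
    return x

# hypothetical instance: C = [0,1]^3, Q = [1,2]^2, with A chosen so a solution exists
A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
x = cq_fixed_step(A, np.zeros(3),
                  lambda v: np.clip(v, 0.0, 1.0),   # P_C: box projection
                  lambda v: np.clip(v, 1.0, 2.0))   # P_Q: box projection
```

For this consistent toy instance the final iterate lies in *C* with $Ax$ in *Q*; with a fixed step, even an upper bound of $\parallel A\parallel $ suffices, at the price of a smaller step.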

To completely avoid estimating $\parallel A\parallel $, López *et al.* [11] introduced another choice of the step size sequence $\{{\tau}_{n}\}$ as follows:

${\tau}_{n}=\frac{{\rho}_{n}f({x}_{n})}{{\parallel \mathrm{\nabla}f({x}_{n})\parallel}^{2}},$ (1.11)

where $\{{\rho}_{n}\}$ is chosen in the open interval $(0,4)$. By virtue of the step size (1.11), López *et al.* [11] introduced four kinds of algorithms for solving SFP (1.1).

However, if $\mathrm{\nabla}f({x}_{n})=0$ at some step, then the step size (1.11) is no longer well defined, and the algorithms of López *et al.* [11] have to terminate at the *n*th step of iterations. In this case ${x}_{n}$ is not necessarily a solution of SFP (1.1), since ${x}_{n}$ may not be in *C*; Algorithm 4.1 in [11] is such a case. To make up for this flaw, we introduce a new choice of the step size sequence $\{{\tau}_{n}\}$ as follows:

${\tau}_{n}=\frac{{\rho}_{n}{f}_{n}({x}_{n})}{{(\parallel \mathrm{\nabla}{f}_{n}({x}_{n})\parallel +{\sigma}_{n})}^{2}},$ (1.12)

where $0<{\rho}_{n}<4$, $0<{\sigma}_{n}<1$, and the relaxed functionals $\{{f}_{n}\}$ and sets $\{{Q}_{n}\}$ will be defined in Section 3.
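As a preview, the step size (1.12) can be transcribed directly; the relaxed objective ${f}_{n}(x)=\frac{1}{2}{\parallel (I-{P}_{{Q}_{n}})Ax\parallel}^{2}$ and the projection `proj_Qn` below are the assumed ingredients, and the role of ${\sigma}_{n}>0$ is that ${\tau}_{n}$ stays well defined even when the gradient vanishes.

```python
import numpy as np

def step_size_112(A, x, proj_Qn, rho_n=2.0, sigma_n=0.5):
    """tau_n = rho_n * f_n(x_n) / (||grad f_n(x_n)|| + sigma_n)^2  -- step (1.12),
    with f_n(x) = 0.5*||(I - P_{Q_n}) A x||^2, grad f_n(x) = A^T (I - P_{Q_n}) A x.
    Here rho_n in (0,4) and sigma_n in (0,1); sigma_n > 0 prevents division by zero."""
    r = A @ x - proj_Qn(A @ x)              # residual (I - P_{Q_n}) A x
    f = 0.5 * np.dot(r, r)
    grad = A.T @ r
    return rho_n * f / (np.linalg.norm(grad) + sigma_n) ** 2

# even at a point where grad f_n vanishes, tau_n is defined (and equals 0)
A = np.eye(2)
tau = step_size_112(A, np.zeros(2), lambda y: np.minimum(y, 0.0))
```

At a point where $\mathrm{\nabla}{f}_{n}$ vanishes, (1.11) would involve $0/0$, while (1.12) simply returns ${\tau}_{n}=0$.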

The purpose of this paper is to introduce a new choice of the step size sequence $\{{\tau}_{n}\}$ that makes the associated algorithms never terminate. A new stop rule is also given, which ensures that the $(n+1)$th iteration ${x}_{n+1}$ is a solution of SFP (1.1) and the iterative process stops. Several weak and strong convergence results are presented. Numerical experiments are included to illustrate the effectiveness of the proposed algorithms and the applications in signal processing of the CQ algorithm with the step size selected in this paper.

The rest of this paper is organized as follows. In the next section, some necessary concepts and important facts are collected. The weak and strong convergence theorems of the proposed algorithms with step size (1.12) are established in Section 3. Finally in Section 4, we provide some numerical experiments to illustrate the effectiveness and applications of the proposed algorithms with step size (1.12) to inverse problems arising from signal processing.

## 2 Preliminaries

Throughout this paper, we assume that SFP (1.1) is consistent, *i.e.*, $\mathrm{\Gamma}\ne \mathrm{\varnothing}$. We denote by ℝ the set of real numbers. Let ${H}_{1}$ and ${H}_{2}$ be real Hilbert spaces, and let the letter *I* denote the identity mapping on ${H}_{1}$ or ${H}_{2}$. If $f:{H}_{1}\to \mathbb{R}$ is a differentiable (subdifferentiable) functional, then we denote by ∇*f* (*∂f*) the gradient (subdifferential) of *f*. Given a sequence $\{{x}_{n}\}$ in ${H}_{1}$, ‘${x}_{n}\to x$’ (resp. ‘${x}_{n}\rightharpoonup x$’) denotes the strong (resp. weak) convergence of $\{{x}_{n}\}$ to *x*, and ${w}_{w}({x}_{n})$ denotes the set of weak cluster points of $\{{x}_{n}\}$. The symbols $\u3008\cdot ,\cdot \u3009$ and $\parallel \cdot \parallel $ denote the inner products and norms of the Hilbert spaces ${H}_{1}$ and ${H}_{2}$. Let $T:{H}_{1}\to {H}_{1}$ be a mapping. We use $Fix(T)$ to denote the set of fixed points of *T*, and $dom(T)$ the domain of *T*.

Some equalities in the Hilbert space ${H}_{1}$ play very important roles in solving linear and nonlinear problems arising from the real world:

${\parallel x+y\parallel}^{2}={\parallel x\parallel}^{2}+2\u3008x,y\u3009+{\parallel y\parallel}^{2}$ (2.1)

and

${\parallel tx+(1-t)y\parallel}^{2}=t{\parallel x\parallel}^{2}+(1-t){\parallel y\parallel}^{2}-t(1-t){\parallel x-y\parallel}^{2}$ (2.2)

for all $x,y\in {H}_{1}$ and $t\in \mathbb{R}$.

Recall that a mapping $T:{H}_{1}\to {H}_{1}$ is said to be:

- (i) nonexpansive if $\parallel Tx-Ty\parallel \le \parallel x-y\parallel$ (2.3) for all $x,y\in {H}_{1}$;
- (ii) firmly nonexpansive if ${\parallel Tx-Ty\parallel}^{2}\le {\parallel x-y\parallel}^{2}-{\parallel (I-T)x-(I-T)y\parallel}^{2}$ (2.4) for all $x,y\in {H}_{1}$;
- (iii) *λ*-averaged if there exist some $\lambda \in (0,1)$ and another nonexpansive mapping $S:{H}_{1}\to {H}_{1}$ such that $T=(1-\lambda )I+\lambda S$. (2.5)

The following proposition describes the characterizations of firmly nonexpansive mappings (see [12]).

**Proposition 2.1**

*Let*$T:dom(T)\subset {H}_{1}\to {H}_{1}$

*be a mapping*.

*Then the following statements are equivalent*.

- (i)
*T**is firmly nonexpansive*; - (ii)
$I-T$

*is firmly nonexpansive*; - (iii)
${\parallel Tx-Ty\parallel}^{2}\le \u3008x-y,Tx-Ty\u3009$

*for all*$x,y\in {H}_{1}$; - (iv)
*T**is*$\frac{1}{2}$-*averaged*; - (v)
$2T-I$

*is nonexpansive*.

Recall that the metric (orthogonal) projection ${P}_{C}$ onto a nonempty closed convex subset *C* of ${H}_{1}$ is defined as follows: for each $x\in {H}_{1}$, there exists a unique point ${P}_{C}x\in C$ with the property:

$\parallel x-{P}_{C}x\parallel \le \parallel x-y\parallel ,\phantom{\rule{1em}{0ex}}\mathrm{\forall}y\in C.$

Now we list some basic properties of ${P}_{C}$ below; see [12] for details.

**Proposition 2.2**

- (p_{1}) *Given* $x\in {H}_{1}$ *and* $z\in C$. *Then* $z={P}_{C}x$ *if and only if we have the inequality* $\u3008x-z,y-z\u3009\le 0$ *for all* $y\in C$;
- (p_{2}) $\u3008x-y,{P}_{C}x-{P}_{C}y\u3009\ge {\parallel {P}_{C}x-{P}_{C}y\parallel}^{2}$ *for all* $x,y\in {H}_{1}$;
- (p_{3}) ${\parallel {P}_{C}x-{P}_{C}y\parallel}^{2}\le {\parallel x-y\parallel}^{2}-{\parallel (I-{P}_{C})x-(I-{P}_{C})y\parallel}^{2}$ *for all* $x,y\in {H}_{1}$;
- (p_{4}) $2{P}_{C}-I$ *is nonexpansive*;
- (p_{5}) $\u3008x-{P}_{C}x,y-{P}_{C}x\u3009\le 0$ *for all* $x\in {H}_{1}$, $y\in C$; *in particular*,
- (p_{6}) ${\parallel {P}_{C}x-y\parallel}^{2}\le {\parallel x-y\parallel}^{2}-{\parallel x-{P}_{C}x\parallel}^{2}$ *for all* $x\in {H}_{1}$, $y\in C$.

From (p_{2}), (p_{3}), and (p_{4}), we see immediately that both ${P}_{C}$ and $(I-{P}_{C})$ are firmly nonexpansive and $\frac{1}{2}$-averaged.
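These properties are easy to sanity-check numerically. A small sketch for the projection onto a Euclidean ball (a hypothetical choice of *C*), spot-checking firm nonexpansiveness and the nonexpansiveness of the reflection $2{P}_{C}-I$:

```python
import numpy as np

def proj_ball(x, r=1.0):
    """P_C for C = the closed ball of radius r centered at the origin."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

rng = np.random.default_rng(1)
ok_firm, ok_reflect = True, True
for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    px, py = proj_ball(x), proj_ball(y)
    # firm nonexpansiveness: <x - y, P_C x - P_C y> >= ||P_C x - P_C y||^2
    ok_firm &= np.dot(x - y, px - py) >= np.dot(px - py, px - py) - 1e-10
    # the reflection 2 P_C - I is nonexpansive
    ok_reflect &= (np.linalg.norm((2 * px - x) - (2 * py - y))
                   <= np.linalg.norm(x - y) + 1e-10)
```

Both flags remain true over random samples, as the proposition predicts for any metric projection.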

Recall that a function $f:{H}_{1}\to \mathbb{R}$ is convex if and only if we have the relation

$f(tx+(1-t)y)\le tf(x)+(1-t)f(y),\phantom{\rule{1em}{0ex}}\mathrm{\forall}t\in [0,1],\mathrm{\forall}x,y\in {H}_{1}.$

An element $\xi \in {H}_{1}$ is said to be a subgradient of *f* at *x* if

$f(y)\ge f(x)+\u3008\xi ,y-x\u3009,\phantom{\rule{1em}{0ex}}\mathrm{\forall}y\in {H}_{1}.$

If *f* has at least one subgradient at *x*, it is said to be subdifferentiable at *x*. The set of subgradients of *f* at the point *x* is called the subdifferential of *f* at *x*, and is denoted by $\partial f(x)$. A function *f* is called subdifferentiable if it is subdifferentiable at every $x\in {H}_{1}$. If *f* is convex and differentiable, then $\partial f(x)=\{\mathrm{\nabla}f(x)\}$ for every $x\in {H}_{1}$. A function $f:{H}_{1}\to \mathbb{R}$ is said to be weakly lower semi-continuous (w-lsc) at *x* if ${x}_{n}\rightharpoonup x$ implies

$f(x)\le {\underline{lim}}_{n}f({x}_{n}).$

*f* is said to be w-lsc on ${H}_{1}$ if it is w-lsc at every point $x\in {H}_{1}$.

It is well known that for a convex function $f:{H}_{1}\to \mathbb{R}$, it is w-lsc on ${H}_{1}$ if and only if it is lsc on ${H}_{1}$.

It is an easy exercise to prove the following conclusions (see [13, 14]).

**Proposition 2.3** *Let* *f* *be given as in* (1.3). *Then the following conclusions hold*:

- (i) *f* *is convex and differentiable*;
- (ii) $\mathrm{\nabla}f(x)={A}^{\ast}(I-{P}_{Q})Ax$, $x\in {H}_{1}$;
- (iii) *f* *is w*-*lsc on* ${H}_{1}$;
- (iv) ∇*f* *is* ${\parallel A\parallel}^{2}$-*Lipschitz*: $\parallel \mathrm{\nabla}f(x)-\mathrm{\nabla}f(y)\parallel \le {\parallel A\parallel}^{2}\parallel x-y\parallel ,\phantom{\rule{1em}{0ex}}x,y\in {H}_{1}.$
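The gradient formula in (ii) can be verified numerically; a small sketch with a hypothetical matrix *A* and *Q* taken as the nonpositive orthant, comparing the closed form with central finite differences:

```python
import numpy as np

# Proposition 2.3(ii): grad f(x) = A^T (I - P_Q) A x for
# f(x) = 0.5*||(I - P_Q) A x||^2.  A and Q below are hypothetical choices.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
proj_Q = lambda y: np.minimum(y, 0.0)       # Q = nonpositive orthant

def f(x):
    r = A @ x - proj_Q(A @ x)
    return 0.5 * np.dot(r, r)

def grad_f(x):
    return A.T @ (A @ x - proj_Q(A @ x))

x = np.array([0.5, 0.4])
eps = 1e-6
numeric = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                    for e in np.eye(2)])
# the closed-form and finite-difference gradients agree
```

The same check works for any closed convex *Q* whose projection is computable.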

Recall that a sequence $\{{x}_{n}\}$ is said to be Fejér monotone with respect to a nonempty closed convex subset *C* in ${H}_{1}$ if

$\parallel {x}_{n+1}-z\parallel \le \parallel {x}_{n}-z\parallel ,\phantom{\rule{1em}{0ex}}\mathrm{\forall}z\in C,\mathrm{\forall}n\ge 1.$

**Proposition 2.4** (see [11, 15]) *Let* *C* *be a nonempty closed convex subset in* ${H}_{1}$. *If the sequence* $\{{x}_{n}\}$ *is Fejér monotone w.r.t.* *C*, *then the following hold*:

- (i) $\{{x}_{n}\}$ *converges weakly to a point* $\stackrel{\u02c6}{x}\in C$ *if and only if* ${w}_{w}({x}_{n})\subset C$;
- (ii) *the sequence* $\{{P}_{C}{x}_{n}\}$ *converges strongly*;
- (iii) *if* ${x}_{n}\rightharpoonup \stackrel{\u02c6}{x}\in C$, *then* $\stackrel{\u02c6}{x}={lim}_{n}{P}_{C}{x}_{n}$.

**Proposition 2.5** (see [16]) *Let* $\{{\alpha}_{n}\}$ *be a sequence of nonnegative real numbers such that*

${\alpha}_{n+1}\le (1-{t}_{n}){\alpha}_{n}+{t}_{n}{b}_{n},\phantom{\rule{1em}{0ex}}n\ge 1,$

*where* $\{{t}_{n}\}$ *is a sequence in* $(0,1)$ *and* $\{{b}_{n}\}$ *is a sequence in* ℝ *such that*:

- (i) ${\sum}_{n=1}^{\mathrm{\infty}}{t}_{n}=\mathrm{\infty}$;
- (ii) ${\overline{lim}}_{n}{b}_{n}\le 0$ *or* ${\sum}_{n=1}^{\mathrm{\infty}}|{t}_{n}{b}_{n}|<\mathrm{\infty}$.

*Then* ${\alpha}_{n}\to 0$ ($n\to \mathrm{\infty}$).
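A quick numerical illustration of Proposition 2.5; the concrete choices ${t}_{n}={b}_{n}=1/(n+1)$ below are arbitrary sequences satisfying (i) and (ii).

```python
# Illustration of Proposition 2.5 with the recursion taken with equality:
#   alpha_{n+1} = (1 - t_n) * alpha_n + t_n * b_n,
# t_n = 1/(n+1)  (so sum t_n diverges, condition (i)),
# b_n = 1/(n+1)  (so limsup b_n = 0 <= 0, condition (ii)).
alpha = 1.0
for n in range(1, 100_000):
    t = 1.0 / (n + 1)
    b = 1.0 / (n + 1)
    alpha = (1 - t) * alpha + t * b
# alpha is now close to 0, as the proposition predicts
```

The decay is slow (roughly of order $log n/n$ for these choices), which is consistent with the purely asymptotic nature of the proposition.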

## 3 Main results

Throughout this section, we assume that the sets *C* and *Q* are given as sublevel sets of convex functions *c* and *q* as follows:

$C=\{x\in {H}_{1}:c(x)\le 0\}\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}Q=\{y\in {H}_{2}:q(y)\le 0\},$

where $c:{H}_{1}\to \mathbb{R}$ and $q:{H}_{2}\to \mathbb{R}$ are convex. We assume, in addition, that *c* and *q* are subdifferentiable on ${H}_{1}$ and ${H}_{2}$, respectively, and that *∂c* and *∂q* are bounded mappings (*i.e.*, bounded on bounded sets). Given an arbitrary initial data ${x}_{1}\in {H}_{1}$, assume that ${x}_{n}$ is the current value for $n\ge 1$. We introduce two sequences of half-spaces as follows:

${C}_{n}=\{x\in {H}_{1}:c({x}_{n})\le \u3008{\xi}_{n},{x}_{n}-x\u3009\},$

where ${\xi}_{n}\in \partial c({x}_{n})$, and

${Q}_{n}=\{y\in {H}_{2}:q(A{x}_{n})\le \u3008{\eta}_{n},A{x}_{n}-y\u3009\},$

where ${\eta}_{n}\in \partial q(A{x}_{n})$. Clearly, $C\subseteq {C}_{n}$ and $Q\subseteq {Q}_{n}$ for all $n\ge 1$.
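The half-spaces are computationally attractive because projecting onto a half-space has a closed form. A sketch of the construction for ${C}_{n}$ (the helper `subgrad_c`, which returns one subgradient, is an assumed ingredient):

```python
import numpy as np

def make_proj_Cn(c, subgrad_c, x_n):
    """Build P_{C_n} for C_n = {x : c(x_n) <= <xi_n, x_n - x>}, xi_n in dc(x_n).
    By the subgradient inequality, C is contained in C_n for convex c."""
    xi = subgrad_c(x_n)
    b = np.dot(xi, x_n) - c(x_n)            # C_n = {x : <xi, x> <= b}
    ss = np.dot(xi, xi)
    def proj(x):
        v = np.dot(xi, x) - b
        return x if v <= 0 or ss == 0 else x - (v / ss) * xi
    return proj

# example: C = l1 ball {x : ||x||_1 <= 1}, c(x) = ||x||_1 - 1, xi = sign(x)
c = lambda x: np.abs(x).sum() - 1.0
proj_Cn = make_proj_Cn(c, np.sign, np.array([2.0, 0.0]))
```

Here, at the current point $(2,0)$ the half-space is $\{x:{x}_{1}\le 1\}$, which indeed contains the $\ell_1$ ball.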

More precisely, we introduce the following relaxed CQ algorithm in an adaptive way.

**Algorithm 3.1** Choose an initial data ${x}_{1}\in {H}_{1}$ arbitrarily. Assume that the *n*th iterate ${x}_{n}$ has been constructed; then we compute the $(n+1)$th iteration ${x}_{n+1}$ via the formula

${x}_{n+1}={P}_{{C}_{n}}({x}_{n}-{\tau}_{n}\mathrm{\nabla}{f}_{n}({x}_{n})),\phantom{\rule{1em}{0ex}}{\tau}_{n}=\frac{{\rho}_{n}{f}_{n}({x}_{n})}{{(\parallel \mathrm{\nabla}{f}_{n}({x}_{n})\parallel +{\sigma}_{n})}^{2}},$ (3.7)

where ${f}_{n}(x)=\frac{1}{2}{\parallel (I-{P}_{{Q}_{n}})Ax\parallel}^{2}$ and $\mathrm{\nabla}{f}_{n}(x)={A}^{\ast}(I-{P}_{{Q}_{n}})Ax$, with $0<{\rho}_{n}<4$ and $0<{\sigma}_{n}<1$. If ${x}_{n+1}={x}_{n}$ for some $n\ge 1$, then ${x}_{n}$ is a solution of the SFP (1.1) and the iterative process stops; otherwise, we set $n:=n+1$ and go on to (3.7) to compute the next iteration ${x}_{n+2}$.

We remark in passing that if ${x}_{n+1}={x}_{n}$ for some $n\ge 1$, then ${x}_{m}={x}_{n}$ for all $m\ge n+1$; consequently, ${lim}_{m\to \mathrm{\infty}}{x}_{m}={x}_{n}$ is a solution of SFP (1.1). Thus, we may assume that the sequence $\{{x}_{n}\}$ generated by Algorithm 3.1 is infinite.
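Putting the pieces together, here is one possible numpy sketch of Algorithm 3.1, assuming the relaxed iteration ${x}_{n+1}={P}_{{C}_{n}}({x}_{n}-{\tau}_{n}\mathrm{\nabla}{f}_{n}({x}_{n}))$ with the step size (1.12); the linear test functions *c* and *q* below are hypothetical.

```python
import numpy as np

def algorithm_31(A, x1, c, subgrad_c, q, subgrad_q,
                 rho=2.0, sigma=0.5, n_iter=200):
    """Adaptively relaxed CQ sketch: x_{n+1} = P_{C_n}(x_n - tau_n grad f_n(x_n)),
    f_n(x) = 0.5*||(I - P_{Q_n}) A x||^2, tau_n as in (1.12).
    Stops early when x_{n+1} = x_n (the paper's stop rule)."""
    def proj_relaxed(point, g, s, ref):
        # projection onto the half-space {u : g(ref) + <s, u - ref> <= 0}
        v = g(ref) + np.dot(s, point - ref)
        ss = np.dot(s, s)
        return point if v <= 0 or ss == 0 else point - (v / ss) * s
    x = x1.astype(float)
    for _ in range(n_iter):
        Ax = A @ x
        r = Ax - proj_relaxed(Ax, q, subgrad_q(Ax), Ax)   # (I - P_{Q_n}) A x_n
        grad = A.T @ r
        tau = rho * (0.5 * np.dot(r, r)) / (np.linalg.norm(grad) + sigma) ** 2
        x_next = proj_relaxed(x - tau * grad, c, subgrad_c(x), x)  # P_{C_n} y_n
        if np.array_equal(x_next, x):
            break
        x = x_next
    return x

# toy instance: C = {x : x1 + x2 <= 2}, Q = {y : y1 >= 0} (both half-spaces)
sol = algorithm_31(np.eye(2), np.array([3.0, 3.0]),
                   c=lambda v: v[0] + v[1] - 2.0,
                   subgrad_c=lambda v: np.array([1.0, 1.0]),
                   q=lambda v: -v[0],
                   subgrad_q=lambda v: np.array([-1.0, 0.0]))
```

On this toy instance the iteration settles at a point of $\mathrm{\Gamma}$ after a couple of steps and then triggers the stop rule.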

**Theorem 3.2** *Assume that* ${\underline{lim}}_{n}{\rho}_{n}(4-{\rho}_{n})\ge \rho >0$. *Then the sequence* $\{{x}_{n}\}$ *generated by Algorithm * 3.1 *converges weakly to a solution* $\stackrel{\u02c6}{x}$ *of SFP* (1.1), *where* $\stackrel{\u02c6}{x}={lim}_{n\to \mathrm{\infty}}{P}_{\mathrm{\Gamma}}{x}_{n}$.

*Proof* Let $z\in \mathrm{\Gamma}$ be fixed, and set ${y}_{n}={x}_{n}-{\tau}_{n}\mathrm{\nabla}{f}_{n}({x}_{n})$. By virtue of (2.1), (3.7), and Proposition 2.2(p_{6}), we have the following:

- (i)
$\{{x}_{n}\}$ is Fejér monotone w.r.t. Γ; in particular,

- (ii)
$\{{x}_{n}\}$ is a bounded sequence;

- (iii)
${\sum}_{n=1}^{\mathrm{\infty}}{\rho}_{n}(4-{\rho}_{n}){f}_{n}^{2}({x}_{n})/{(\parallel \mathrm{\nabla}{f}_{n}({x}_{n})\parallel +{\sigma}_{n})}^{2}<\mathrm{\infty}$; and

- (iv)$\sum _{n=1}^{\mathrm{\infty}}{\parallel {y}_{n}-{P}_{{C}_{n}}{y}_{n}\parallel}^{2}<\mathrm{\infty}.$(3.12)

Note that $\parallel \mathrm{\nabla}{f}_{n}({x}_{n})\parallel +{\sigma}_{n}\le {\parallel A\parallel}^{2}\parallel {x}_{n}-z\parallel +1$ for $z\in \mathrm{\Gamma}$. This, together with (3.13), implies that ${f}_{n}({x}_{n})\to 0$, that is, $\parallel (I-{P}_{{Q}_{n}})A{x}_{n}\parallel \to 0$. By our assumption that *∂q* is a bounded mapping, we see that there exists a constant $M>0$ such that $\parallel {\eta}_{n}\parallel \le M$, $\mathrm{\forall}{\eta}_{n}\in \partial q(A{x}_{n})$.

By the w-lsc of *q*, we have $q(A{x}^{\ast})\le {\underline{lim}}_{k}q(A{x}_{{n}_{k}})\le 0$, which implies that $A{x}^{\ast}\in Q$. We next prove ${x}^{\ast}\in C$. Firstly, from (3.12) (iv), we know that $\parallel {y}_{n}-{P}_{{C}_{n}}{y}_{n}\parallel \to 0$. Notice that

Since *∂c* is a bounded mapping, there exists ${M}_{1}>0$ such that

Then the w-lsc of *c* implies that $c({x}^{\ast})\le {\underline{lim}}_{k}c({x}_{{n}_{k}})\le 0$; thus ${x}^{\ast}\in C$ and ${w}_{w}({x}_{n})\subset \mathrm{\Gamma}$, completing the proof. □

We introduce a little more general algorithm as follows.

**Algorithm 3.3** Choose an initial data ${x}_{1}\in {H}_{1}$ arbitrarily. Assume that the *n*th iteration ${x}_{n}$ has been constructed; then we compute the $(n+1)$th iteration ${x}_{n+1}$ via the formula

${x}_{n+1}={\beta}_{n}{x}_{n}+(1-{\beta}_{n}){P}_{{C}_{n}}({x}_{n}-{\tau}_{n}\mathrm{\nabla}{f}_{n}({x}_{n})),$ (3.15)

where the step size $\{{\tau}_{n}\}$ is as before and $\{{\beta}_{n}\}$ is a sequence in $(0,1)$ satisfying ${\overline{lim}}_{n}{\beta}_{n}<1$. If ${x}_{n+1}={x}_{n}$ for some $n\ge 1$, then ${x}_{n}$ is a solution of the SFP (1.1) and the iterative process stops; otherwise, we set $n:=n+1$ and go on to (3.15) to compute the next iteration ${x}_{n+2}$.
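Relative to Algorithm 3.1, Algorithm 3.3 only averages the update with the previous iterate. Assuming the Mann-type form ${x}_{n+1}={\beta}_{n}{x}_{n}+(1-{\beta}_{n}){T}_{n}{x}_{n}$, where ${T}_{n}{x}_{n}$ denotes one Algorithm 3.1-style step, a single update reads:

```python
import numpy as np

def relaxed_step(x, T_step, beta):
    """One Algorithm 3.3-style update x_{n+1} = beta*x_n + (1 - beta)*T(x_n),
    beta in (0,1) with limsup beta_n < 1; T is one adaptively relaxed CQ step
    (its exact form is assumed here, any candidate map can be plugged in)."""
    return beta * x + (1.0 - beta) * T_step(x)

# averaging damps the move of the underlying step
out = relaxed_step(np.array([2.0]), lambda v: 0.5 * v, beta=0.5)
```

The averaging keeps the Fejér monotonicity of the underlying step while allowing some inertia toward the previous iterate.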

We have the following weak convergence theorem.

**Theorem 3.4** *Assume that* ${\underline{lim}}_{n}{\rho}_{n}(4-{\rho}_{n})\ge \rho >0$. *Then the sequence* $\{{x}_{n}\}$ *generated by Algorithm * 3.3 *converges weakly to a solution* $\stackrel{\u02c6}{x}$ *of the SFP* (1.1) *where* $\stackrel{\u02c6}{x}={lim}_{n\to \mathrm{\infty}}{P}_{\mathrm{\Gamma}}{x}_{n}$.

*Proof* Let $z\in \mathrm{\Gamma}$ be fixed and set ${y}_{n}={x}_{n}-{\tau}_{n}\mathrm{\nabla}{f}_{n}({x}_{n})$. By virtue of (2.1), (2.2), (3.15), (3.10), and Proposition 2.2(p_{6}), we have the following:

- (i)
$\{{x}_{n}\}$ is Fejér monotone w.r.t. Γ; in particular,

- (ii)
$\{{x}_{n}\}$ is a bounded sequence;

- (iii)
${\sum}_{n=1}^{\mathrm{\infty}}(1-{\beta}_{n}){\rho}_{n}(4-{\rho}_{n}){f}_{n}^{2}({x}_{n})/{(\parallel \mathrm{\nabla}{f}_{n}({x}_{n})\parallel +{\sigma}_{n})}^{2}<\mathrm{\infty}$; and

- (iv)
${\sum}_{n=1}^{\mathrm{\infty}}(1-{\beta}_{n}){\parallel {y}_{n}-{P}_{{C}_{n}}{y}_{n}\parallel}^{2}<\mathrm{\infty}$.

By our assumptions on $\{{\beta}_{n}\}$ and $\{{\rho}_{n}\}$, we have $\frac{{f}_{n}({x}_{n})}{\parallel \mathrm{\nabla}{f}_{n}({x}_{n})\parallel +{\sigma}_{n}}\to 0$ and ${y}_{n}-{P}_{{C}_{n}}{y}_{n}\to 0$. The rest of the argument follows exactly from the corresponding parts of the proof of Theorem 3.2, so we omit the details. This completes the proof. □

We remark that Theorem 3.4 generalizes Theorem 3.2: taking ${\beta}_{n}\equiv 0$ in Theorem 3.4 recovers Theorem 3.2. It would be an interesting piece of work to compare the convergence rates of Algorithms 3.1 and 3.3.

Generally speaking, Algorithms 3.1 and 3.3 enjoy only weak convergence in the framework of infinite-dimensional spaces, and therefore modifications of Algorithms 3.1 and 3.3 are needed in order to obtain strong convergence. Considerable efforts have been made, and several interesting results have been reported recently; see [17–20]. Below is our modification of Algorithms 3.1 and 3.3.

**Algorithm 3.5** Choose an arbitrary initial data ${x}_{1}\in {H}_{1}$. Assume that the *n*th iteration ${x}_{n}\in {H}_{1}$ has been constructed. Set

If ${x}_{n+1}={x}_{n}$ for some $n\ge 1$, then ${x}_{n}$ is a solution of SFP (1.1) and the iterative process stops; otherwise, we set $n:=n+1$ and go on to (3.17)-(3.20) to compute the next iteration ${x}_{n+2}$.

**Theorem 3.6** *Assume that* ${\underline{lim}}_{n}{\rho}_{n}(4-{\rho}_{n})\ge \rho >0$. *Then the sequence* $\{{x}_{n}\}$ *generated by Algorithm * 3.5 *converges strongly to a solution* ${x}^{\ast}$ *of SFP* (1.1), *where* ${x}^{\ast}={P}_{\mathrm{\Gamma}}({x}_{1})$.

*Proof* Firstly, we show that $\mathrm{\Gamma}\subset {Z}_{n}$ for all $n\ge 1$, that is, (3.21) holds.

Assume that $\mathrm{\Gamma}\subset {Z}_{k}$ for some $k\ge 1$, and let $z\in \mathrm{\Gamma}$. It then follows from Proposition 2.2(p_{1}) that

This implies that $z\in {Z}_{k+1}$ and hence $\mathrm{\Gamma}\subset {Z}_{k+1}$. Consequently, $\mathrm{\Gamma}\subset {Z}_{n}$ for all $n\ge 1$, and thus (3.21) holds true.

By the construction of $\{{Z}_{n}\}$ and Proposition 2.2(p_{1}), we see that ${x}_{n}={P}_{{Z}_{n}}{x}_{1}$. It then follows from (3.20) that

This shows that ${lim}_{n}\parallel {x}_{n}-{x}_{1}\parallel $ exists, denoted by *d*.

From this one derives that ${x}_{n+1}-{x}_{n}\to 0$ ($n\to \mathrm{\infty}$).

Since *∂q* is a bounded mapping, we have

Since *q* is w-lsc on ${H}_{2}$, we derive

which implies that $A\stackrel{\u02c6}{x}\in Q$.

Since *∂c* is a bounded mapping, we immediately obtain

The w-lsc of *c* ensures that

consequently, ${x}_{{n}_{j}}\to \stackrel{\u02c6}{x}$, since ${x}_{{n}_{j}}\stackrel{w}{\u27f6}\stackrel{\u02c6}{x}$.

This implies that $\stackrel{\u02c6}{x}={P}_{\mathrm{\Gamma}}{x}_{1}$ by Proposition 2.2(p_{1}). Therefore $\{{x}_{n}\}$ converges strongly to $\stackrel{\u02c6}{x}={P}_{\mathrm{\Gamma}}{x}_{1}$ because of the uniqueness of ${P}_{\mathrm{\Gamma}}{x}_{1}$. This completes the proof. □

**Algorithm 3.7** Choose an arbitrary initial data ${x}_{1}\in {H}_{1}$. Assume that the *n*th iteration ${x}_{n}\in {H}_{1}$ has been constructed. Set

If ${x}_{n+1}={x}_{n}$ for some $n\ge 1$, then ${x}_{n}$ is a solution of SFP (1.1) and the iterative process stops; otherwise, we set $n:=n+1$ and go on to (3.39)-(3.41) to compute the next iteration ${x}_{n+2}$.

Along the proof lines of Theorem 3.6 we can prove the following.

**Theorem 3.8** *Assume that* ${\underline{lim}}_{n}{\rho}_{n}(4-{\rho}_{n})\ge \rho >0$; *then the sequence* $\{{x}_{n}\}$ *generated by Algorithm * 3.7 *converges strongly to a solution* ${x}^{\ast}$ *of SFP* (1.1), *where* ${x}^{\ast}={P}_{\mathrm{\Gamma}}({x}_{1})$.

The proof of Theorem 3.8 is similar to that of Theorem 3.6, and therefore we omit its details.

We next turn our attention to another kind of algorithm.

**Algorithm 3.9** Choose an arbitrary initial data ${x}_{1}\in {H}_{1}$. Assume that the *n*th iteration ${x}_{n}\in {H}_{1}$ has been constructed; then we compute the $(n+1)$th iteration ${x}_{n+1}$ via the recursion:

where the step size ${\tau}_{n}$ is given by (1.12), $g:{H}_{1}\to {H}_{1}$ is a contraction with contractive coefficient $\delta \in (0,1)$, and $\{{\alpha}_{n}\}$ is a real sequence in $(0,1)$. If ${x}_{n+1}={x}_{n}$ for some $n\ge 1$, then ${x}_{n}$ is an approximate solution of SFP (1.1) (the approximation rule will be given below) and the iterative process stops; otherwise, we set $n:=n+1$ and go on to (3.43) to compute the next iteration ${x}_{n+2}$.

Such an ${x}_{n}$ is called an approximate solution of SFP (1.1). If $e({x}_{n},{\tau}_{n})=0$, then ${x}_{n}$ is a solution of SFP (1.1).
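A numerical sketch in the spirit of Algorithm 3.9, assuming the viscosity-type recursion ${x}_{n+1}={\alpha}_{n}g({x}_{n})+(1-{\alpha}_{n}){P}_{C}({x}_{n}-{\tau}_{n}\mathrm{\nabla}f({x}_{n}))$ with ${\alpha}_{n}=1/(n+1)$, which satisfies (C_{1}); the sets, the matrix, and the constant map *g* below are hypothetical choices.

```python
import numpy as np

def viscosity_cq(A, x1, proj_C, proj_Q, g, rho=2.0, sigma=1e-3, n_iter=500):
    """Viscosity-type sketch: x_{n+1} = a_n g(x_n) + (1 - a_n) P_C(x_n - tau_n grad f(x_n)),
    a_n = 1/(n+1) (so a_n -> 0 and sum a_n = inf), tau_n as in (1.12)."""
    x = x1.astype(float)
    for n in range(1, n_iter + 1):
        r = A @ x - proj_Q(A @ x)
        grad = A.T @ r
        tau = rho * (0.5 * np.dot(r, r)) / (np.linalg.norm(grad) + sigma) ** 2
        alpha = 1.0 / (n + 1)
        x = alpha * g(x) + (1 - alpha) * proj_C(x - tau * grad)
    return x

# toy: C = Q = [0,1]^2, A = I, g(x) = 0 (a constant map, trivially a contraction);
# the expected limit is P_Gamma(g(x*)) = P_Gamma(0) = 0
x = viscosity_cq(np.eye(2), np.array([5.0, 5.0]),
                 lambda v: np.clip(v, 0, 1), lambda v: np.clip(v, 0, 1),
                 lambda v: np.zeros_like(v))
```

The anchoring term $\alpha_n g(x_n)$ slowly pulls the iterates toward $g$, which is what yields strong (rather than weak) convergence.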

**Theorem 3.10** *Assume that* $\{{\alpha}_{n}\}$ *and* $\{{\rho}_{n}\}$ *satisfy the conditions* (C_{1}) ${\alpha}_{n}\to 0$, ${\sum}_{n=1}^{\mathrm{\infty}}{\alpha}_{n}=\mathrm{\infty}$ *and* (C_{2}) ${\underline{lim}}_{n}{\rho}_{n}(4-{\rho}_{n})>0$, *respectively*. *Then the sequence* $\{{x}_{n}\}$ *generated by Algorithm* 3.9 *converges strongly to a solution* ${x}^{\ast}$ *of SFP* (1.1), *where* ${x}^{\ast}={P}_{\mathrm{\Gamma}}g({x}^{\ast})$; *equivalently*, ${x}^{\ast}$ *solves the following variational inequality*:

$\u3008(I-g){x}^{\ast},x-{x}^{\ast}\u3009\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall}x\in \mathrm{\Gamma}.$

*Proof*First of all, we show there exists a unique ${x}^{\ast}\in \mathrm{\Gamma}$ such that ${x}^{\ast}={P}_{\mathrm{\Gamma}}g({x}^{\ast})$. Indeed, since ${P}_{\mathrm{\Gamma}}g:{H}_{1}\to {H}_{1}$ is a contraction with the contractive coefficient $\delta \in (0,1)$, by the Banach contractive mapping principle, we conclude that there exists a unique ${x}^{\ast}\in {H}_{1}$ such that ${x}^{\ast}={P}_{\mathrm{\Gamma}}g({x}^{\ast})\in \mathrm{\Gamma}$, equivalently, ${x}^{\ast}$ solves the following variational inequality:

for all $n\ge 1$.

By virtue of Proposition 2.2(p_{6}), (3.48), and (3.51), noting that ${x}^{\ast}\in C\subset {C}_{n}$ for all $n\ge 1$, we have

for all $n\ge 1$, therefore $\{{x}_{n}\}$ is bounded; so are $\{{y}_{n}\}$ and $\{{z}_{n}\}$.

Finally, we show that ${x}_{n}\to {x}^{\ast}$ ($n\to \mathrm{\infty}$).

We consider two possible cases.

Case 1. $\{{s}_{n}\}$ is eventually nonincreasing, *i.e.*, there exists some integer ${n}_{0}\ge 1$ such that

The w-lsc of *q* implies that

and thus $A\stackrel{\u02c6}{x}\in Q$.

Similarly, the w-lsc of *c* implies that

We then obtain from condition (C_{1}) that

Applying Proposition 2.5 to (3.58), we derive that ${s}_{n}\to 0$ as $n\to \mathrm{\infty}$, *i.e.*, ${x}_{n}\to {x}^{\ast}$ as $n\to \mathrm{\infty}$.

Case 2. Otherwise, there exists a subsequence $\{{s}_{{n}_{k}}\}$ of $\{{s}_{n}\}$ with ${s}_{{n}_{k}}\le {s}_{{n}_{k}+1}$ for all $k\ge 1$. In this case, for $n>{n}_{0}$, define $\tau (n)=max\{k\le n:{s}_{k}\le {s}_{k+1}\}$. Then $\tau (n)\to \mathrm{\infty}$ as $n\to \mathrm{\infty}$, ${s}_{\tau (n)}\le {s}_{\tau (n)+1}$ for all $n>{n}_{0}$, and ${s}_{n}\le {s}_{\tau (n)+1}$ for all $n>{n}_{0}$; see [20] for details.

and $\parallel (I-{P}_{{C}_{\tau (n)}}){z}_{\tau (n)}\parallel \to 0$ as $n\to \mathrm{\infty}$.

At this point, by virtue of a similar reasoning to the corresponding parts in case 1, we can deduce that ${\overline{lim}}_{n}\u3008(g-I){x}^{\ast},{z}_{\tau (n)}-{x}^{\ast}\u3009={\overline{lim}}_{n}\u3008(g-I){x}^{\ast},{x}_{\tau (n)}-{x}^{\ast}\u3009\le 0$.

from which one derives that ${\overline{lim}}_{n}{s}_{\tau (n)}\le 0$, and hence ${s}_{\tau (n)}\to 0$ as $n\to \mathrm{\infty}$. From this it turns out that ${s}_{\tau (n)+1}\to 0$ as $n\to \mathrm{\infty}$, since ${s}_{\tau (n)+1}-{s}_{\tau (n)}\to 0$ as $n\to \mathrm{\infty}$. Consequently, ${s}_{n}\to 0$, as $n\to \mathrm{\infty}$, since $0\le {s}_{n}\le {s}_{\tau (n)+1}\to 0$ as $n\to \mathrm{\infty}$. This completes the proof. □

By using an argument like the method in Theorem 3.10, we have the following more general algorithm and convergence theorem.

**Algorithm 3.11** Choose an arbitrary initial data ${x}_{1}\in {H}_{1}$. Assume that the *n*th iteration ${x}_{n}\in {H}_{1}$ has been constructed; then we compute the $(n+1)$th iteration ${x}_{n+1}$ via the recursion

where $\{{\beta}_{n}\}$ is a real sequence in $[0,1)$ satisfying ${\overline{lim}}_{n}{\beta}_{n}<1$, $\{{\alpha}_{n}\}$ is a real sequence in $(0,1)$ satisfying conditions (C_{1}) ${\alpha}_{n}\to 0$ and (C_{2}) ${\sum}_{n=1}^{\mathrm{\infty}}{\alpha}_{n}=\mathrm{\infty}$, $g:{H}_{1}\to {H}_{1}$ is a contraction with contractive coefficient $\delta \in (0,1)$ and ${\tau}_{n}$ is given by (1.12). If ${x}_{n+1}={x}_{n}$ for some $n\ge 1$, then ${x}_{n}$ is an approximate solution of SFP (1.1) and the iterative process stops; otherwise, we set $n:=n+1$ and go on to (3.59) to compute the next iteration ${x}_{n+2}$.

**Theorem 3.12** *Assume that* ${\underline{lim}}_{n}{\rho}_{n}(4-{\rho}_{n})>0$. *Then the sequence generated by Algorithm* 3.11 *converges strongly to a solution* ${x}^{\ast}$ *of the SFP* (1.1), *where* ${x}^{\ast}={P}_{\mathrm{\Gamma}}g({x}^{\ast})$; *equivalently*, ${x}^{\ast}$ *is a solution of the following variational inequality*:

$\u3008(I-g){x}^{\ast},x-{x}^{\ast}\u3009\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall}x\in \mathrm{\Gamma}.$

## 4 Numerical experiments

In this section we consider linear inverse problems of the form

$y=Ax+\epsilon ,$ (4.1)

where $y\in {\mathrm{R}}^{M}$ is the observed data, $x\in {\mathrm{R}}^{N}$ is the signal to be recovered, and the noise is denoted by *ε*; $A:{\mathrm{R}}^{N}\to {\mathrm{R}}^{M}$ denotes the bounded linear observation operator. *A* is sparse and its range is not closed in most inverse problems; thus *A* is often ill-conditioned and the problem is ill-posed. When *x* has a sparse expansion, finding the solutions of (4.1) can be seen as finding a solution to the least-squares problem

$\underset{{\parallel x\parallel}_{1}\le t}{min}\frac{1}{2}{\parallel y-Ax\parallel}_{2}^{2}$ (4.2)

for any real number $t>0$.

When we set $C=\{x\in {\mathrm{R}}^{N}:{\parallel x\parallel}_{1}\le t\}$ and $Q=\{y\}$, it is a particular case of SFP (1.1); see [11]. Therefore, we continue by applying the CQ algorithm to solve (4.2). We compute the projection onto *C* through a soft thresholding method; see [11, 21–23].
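The projection onto the $\ell_1$ ball $C=\{x:{\parallel x\parallel}_{1}\le t\}$ amounts to soft thresholding with a data-dependent threshold; a standard sorting-based routine (one common variant, sketched here):

```python
import numpy as np

def project_l1_ball(x, t):
    """P_C for C = {x : ||x||_1 <= t}: soft thresholding S_theta(x) =
    sign(x)*max(|x| - theta, 0) with theta chosen so the output has l1 norm t."""
    a = np.abs(x)
    if a.sum() <= t:
        return x.copy()                       # already feasible
    u = np.sort(a)[::-1]                      # magnitudes, descending
    css = np.cumsum(u)
    idx = np.arange(1, len(u) + 1)
    k = np.nonzero(u * idx > css - t)[0][-1]  # largest k with u_k > (css_k - t)/k
    theta = (css[k] - t) / (k + 1.0)
    return np.sign(x) * np.maximum(a - theta, 0.0)
```

The threshold search costs $O(N\mathrm{log}N)$ due to the sort, which is negligible next to the matrix-vector products of the CQ iteration.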

Next, following the examples in [11, 22], we choose two similar particular problems, compressed sensing and image deconvolution, both of which are covered by (4.1). The experiments compare the performance of the proposed step size (1.12) with that of the step size in [11], and analyze some properties of (1.12).

### 4.1 Compressed sensing

The original sparse signal contains *m* = 50 spikes with amplitude ±1, distributed randomly over the whole domain. The plot can be seen on the top of Figure 1. Then we set the observation dimension $M={2}^{10}$, and a matrix *A* of order $M\times N$ is also generated randomly. A standard Gaussian noise with variance ${\sigma}_{\epsilon}^{2}={10}^{-4}$ is added. Let *t* = 50 in (4.2).

where ${x}^{\ast}$ is an estimated signal of *x*.

The second and third plots in Figure 1 correspond to the results of applying Algorithm 3.1 with step sizes (1.11) and (1.12), respectively. The recovered result by Algorithm 3.3 with step size (1.12) is shown in the fourth plot. For the fifth plot, we set ${\beta}_{n}={(n+1)}^{-k}$, $k=1,2,3,\dots $ ; once $k\ge 3$, the number of iteration steps of Algorithm 3.3 approaches that in the second plot, while the restored precision is a little poorer than the others.

### 4.2 Image deconvolution

In this subsection, we continue by applying Algorithms 3.1 and 3.3 to recover the blurred Cameraman image. In the experiments, from [22, 24] we employ Haar wavelets and the blur point spread function ${h}_{ij}={(1+{i}^{2}+{j}^{2})}^{-1}$, for $i,j=-4,\dots ,4$; the noise variance is ${\sigma}^{2}=2$. The size of the image is $N=M={256}^{2}$. The threshold value is hand-tuned for the best SNR improvement. *t* is the sum of all the original pixel values.

## 5 Concluding remarks

In this paper we have proposed several kinds of adaptively relaxed iterative algorithms with a new variable step size ${\tau}_{n}$ for solving SFP (1.1). The key feature is that the new variable step size ${\tau}_{n}$ contains a sequence of positive numbers $\{{\sigma}_{n}\}$ in its denominator. Because of this, the proposed algorithms with relaxed iterations never terminate at any iteration step. On the other hand, unlike the previously known algorithms, our stop rule is that the iteration process stops if ${x}_{n+1}={x}_{n}$ for some $n\ge 1$.

By means of new analysis techniques, we have proved several kinds of weak and strong convergence theorems of the proposed algorithms for solving SFP (1.1), which improve, extend, and complement those existing in the literature. We remark that all convergence results in this paper still hold true if we use the step size ${\tau}_{n}$ given by (1.11) in place of the step size given by (1.12); in such a case, the stop rules should be modified. We would like to point out that our Theorems 3.10 and 3.12 are closely related to a class of variational inequalities.

Finally, numerical experiments have been presented to illustrate the effectiveness of the proposed algorithms and their applications in signal processing. The numerical results tell us that the choice of the parameter ${\sigma}_{n}$ in the step size (1.12) may affect the convergence rate of the iterative algorithms, and ${\sigma}_{n}$ should be chosen as small as possible; for instance, we can choose ${\sigma}_{n}$ such that ${\sigma}_{n}\to 0$ as $n\to \mathrm{\infty}$.

## Declarations

### Acknowledgements

This research was supported by the National Natural Science Foundation of China (11071053).

## References

1. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. *Numer. Algorithms* 1994, **8:**221–239.
2. López G, Martín-Márquez V, Xu HK: Iterative algorithms for the multiple-sets split feasibility problem. In *Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems*. Edited by: Censor Y, Jiang M, Wang G. Medical Physics Publishing, Madison; 2009:243–279.
3. Chang SS, Kim JK, Cho YJ, Sim JY: Weak and strong convergence theorems of solutions to split feasibility problem for nonspreading type mapping in Hilbert spaces. *Fixed Point Theory Appl.* 2014, **2014:** Article ID 11.
4. Stark H, Yang Y: *Vector Space Projections: A Numerical Approach to Signal and Image Processing, Neural Nets and Optics*. Wiley, New York; 1998.
5. Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. *Phys. Med. Biol.* 2006, **51:**2353–2365.
6. Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. *Inverse Probl.* 2005, **21:**2071–2084.
7. Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. *Inverse Probl.* 2005, **21:**1655–1665.
8. Li M: Improved relaxed CQ methods for solving the split feasibility problem. *Adv. Model. Optim.* 2011, **13:**305–318.
9. Abdellah B, Muhammad AN, Mohamed K, Sheng ZH: On descent-projection method for solving the split feasibility problems. *J. Glob. Optim.* 2012, **54:**627–639.
10. Yang Q: On variable-step relaxed projection algorithm for variational inequalities. *J. Math. Anal. Appl.* 2005, **302:**166–179.
11. López G, Martín-Márquez V, Wang FH, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. *Inverse Probl.* 2012, **28:** Article ID 085004.
12. Goebel K, Kirk WA: *Topics in Metric Fixed Point Theory*. Cambridge University Press, Cambridge; 1990.
13. Aubin JP: *Optima and Equilibria: An Introduction to Nonlinear Analysis*. Springer, Berlin; 1993.
14. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. *Inverse Probl.* 2004, **20:**103–120.
15. Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. *SIAM Rev.* 1996, **38:**367–426.
16. Xu HK: Iterative algorithms for nonlinear operators. *J. Lond. Math. Soc.* 2002, **66:**240–256.
17. Wang FH, Xu HK: Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. *J. Inequal. Appl.* 2010, Article ID 102085.
18. Dang YZ, Gao Y: The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. *Inverse Probl.* 2011, **27:** Article ID 015007.
19. Yu X, Shahzad N, Yao YH: Implicit and explicit algorithms for solving the split feasibility problem. *Optim. Lett.* 2012, **6:**1447–1462.
20. Yao YH, Postolache M, Liou YC: Strong convergence of a self-adaptive method for the split feasibility problem. *Fixed Point Theory Appl.* 2013, **2013:** Article ID 201.
21. Daubechies I, Fornasier M, Loris I: Accelerated projected gradient method for linear inverse problems with sparsity constraints. *J. Fourier Anal. Appl.* 2008, **14:**764–792.
22. Figueiredo MA, Nowak RD, Wright SJ: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. *IEEE J. Sel. Top. Signal Process.* 2007, **1:**586–598.
23. Starck JL, Murtagh F, Fadili JM: *Sparse Image and Signal Processing: Wavelets, Curvelets, Morphological Diversity*. Cambridge University Press, Cambridge; 2010.
24. Figueiredo MAT: An EM algorithm for wavelet-based image restoration. *IEEE Trans. Image Process.* 2003, **12:**906–917.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.