# General iterative scheme based on the regularization for solving a constrained convex minimization problem

## Abstract

It is well known that the regularization method plays an important role in solving a constrained convex minimization problem. In this article, we introduce implicit and explicit iterative schemes based on the regularization for solving a constrained convex minimization problem. We establish results on the strong convergence of the sequences generated by the proposed schemes to a solution of the minimization problem. Such a point is also a solution of a variational inequality. We also apply the algorithm to solve a split feasibility problem.

MSC: 47H09, 47H05, 47H06, 47J25, 47J05.

## 1 Introduction

The gradient-projection algorithm is a classical and powerful method for solving constrained convex optimization problems and has been studied by many authors (see  and the references therein). The method has recently been applied to solve split feasibility problems, which find applications in image reconstruction and intensity-modulated radiation therapy (see ).

Consider the problem of minimizing f over the constraint set C (assuming that C is a nonempty closed and convex subset of a real Hilbert space H). If $f:H\to \mathbb{R}$ is a convex and continuously Fréchet differentiable functional, the gradient-projection algorithm generates a sequence ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ determined by the gradient of f and the metric projection onto C. Under the condition that f has a Lipschitz continuous and strongly monotone gradient, the sequence ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ converges strongly to a minimizer of f in C. If the gradient of f is only assumed to be inverse strongly monotone, then ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ in general converges only weakly when H is infinite-dimensional.
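The gradient-projection iteration described above can be sketched numerically. The concrete objective $f\left(x\right)={\left(x-3\right)}^{2}$ and constraint set $C=\left[0,2\right]$ below are illustrative choices only (they do not come from the text); for them the constrained minimizer is $x=2$ and $\mathrm{\nabla }f$ is Lipschitz with $L=2$, so any step $0<\gamma <\frac{2}{L}=1$ is admissible.

```python
# Gradient-projection sketch for min_{x in C} f(x), with the illustrative
# choices f(x) = (x - 3)^2 and C = [0, 2]; grad f has Lipschitz constant L = 2.

def grad_f(x):
    return 2.0 * (x - 3.0)

def proj_C(x):
    # metric projection onto the interval [0, 2]
    return min(max(x, 0.0), 2.0)

def gradient_projection(x0, gamma=0.5, n_iter=100):
    # x_{n+1} = Proj_C (x_n - gamma * grad f(x_n)), gamma in (0, 2/L)
    x = x0
    for _ in range(n_iter):
        x = proj_C(x - gamma * grad_f(x))
    return x

x_star = gradient_projection(x0=0.0)   # converges to the minimizer x = 2
```

In finite dimensions (as here) the iterates converge to a constrained minimizer; the point of the paper is that in infinite-dimensional H this convergence is in general only weak.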

Recently, Xu  gave an operator-oriented approach as an alternative to the gradient-projection method and to the relaxed gradient-projection algorithm, namely, an averaged mapping approach. He also presented two modifications of gradient-projection algorithms which are shown to have strong convergence.

On the other hand, regularization, in particular the traditional Tikhonov regularization, is usually used to solve ill-posed optimization problems [24, 25]. Under some conditions, we know that the regularization method is weakly convergent.

The purpose of this paper is to present the general iterative method combining the regularization method and the averaged mapping approach. We first propose implicit and explicit iterative schemes for solving a constrained convex minimization problem and prove that the methods converge strongly to a solution of the minimization problem, which is also a solution of the variational inequality. Furthermore, we use the above method to solve a split feasibility problem.

## 2 Preliminaries

Throughout the paper, we assume that H is a real Hilbert space whose inner product and norm are denoted by $〈\cdot ,\cdot 〉$ and $\parallel \cdot \parallel$, respectively, and that C is a nonempty closed convex subset of H. The set of fixed points of a mapping T is denoted by $Fix\left(T\right)$, that is, $Fix\left(T\right)=\left\{x\in H:Tx=x\right\}$. We write ${x}_{n}⇀x$ to indicate that the sequence $\left\{{x}_{n}\right\}$ converges weakly to x. The fact that the sequence $\left\{{x}_{n}\right\}$ converges strongly to x is denoted by ${x}_{n}\to x$. The following definition and results are needed in the subsequent sections.

Recall that a mapping $T:H\to H$ is said to be L-Lipschitzian if

$\parallel Tx-Ty\parallel \le L\parallel x-y\parallel ,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in H,$
(1)

where $L>0$ is a constant. In particular, if $L\in \left[0,1\right)$, then T is called a contraction on H; if $L=1$, then T is called a nonexpansive mapping on H. T is called firmly nonexpansive if $2T-I$ is nonexpansive, or equivalently, $〈x-y,Tx-Ty〉\ge {\parallel Tx-Ty\parallel }^{2}$, $\mathrm{\forall }x,y\in H$. Alternatively, T is firmly nonexpansive if and only if T can be expressed as $T=\frac{1}{2}\left(I+W\right)$, where $W:H\to H$ is nonexpansive.

Definition 2.1 A mapping $T:H\to H$ is said to be an averaged mapping if it can be written as the average of the identity I and a nonexpansive mapping; that is,

$T=\left(1-\alpha \right)I+\alpha W,$
(2)

where α is a number in $\left(0,1\right)$ and $W:H\to H$ is nonexpansive. More precisely, when (2) holds, we say that T is α-averaged. Clearly, a firmly nonexpansive mapping (in particular, projection) is a $\frac{1}{2}$-averaged map.

Proposition 2.1 [16, 26]

For given operators $W,T,V:H\to H$:

1. (i)

If $T=\left(1-\alpha \right)W+\alpha V$ for some $\alpha \in \left(0,1\right)$ and if W is averaged and V is nonexpansive, then T is averaged.

2. (ii)

T is firmly nonexpansive if and only if the complement $I-T$ is firmly nonexpansive.

3. (iii)

If $T=\left(1-\alpha \right)W+\alpha V$ for some $\alpha \in \left(0,1\right)$ and if W is firmly nonexpansive and V is nonexpansive, then T is averaged.

4. (iv)

The composite of finitely many averaged mappings is averaged. That is, if each of the mappings ${\left\{{T}_{i}\right\}}_{i=1}^{N}$ is averaged, then so is the composite ${T}_{1}\cdots {T}_{N}$. In particular, if ${T}_{1}$ is ${\alpha }_{1}$-averaged and ${T}_{2}$ is ${\alpha }_{2}$-averaged, then the composite ${T}_{1}{T}_{2}$ is α-averaged, where $\alpha ={\alpha }_{1}+{\alpha }_{2}-{\alpha }_{1}{\alpha }_{2}$.
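Proposition 2.1(iv) can be checked numerically on scalar maps. The maps $W_1\left(x\right)=-x$ and $W_2\left(x\right)=\mathrm{sin}x$ below are illustrative nonexpansive choices; the check recovers the nonexpansive part $W$ of the composite from the predicted constant $\alpha ={\alpha }_{1}+{\alpha }_{2}-{\alpha }_{1}{\alpha }_{2}$ and verifies it is nonexpansive on random samples.

```python
import math
import random

# Illustrative nonexpansive maps on R and two averaged maps built from them:
# T1 = (1-a1)I + a1*W1 is a1-averaged, T2 = (1-a2)I + a2*W2 is a2-averaged.
a1, a2 = 0.3, 0.6
W1 = lambda x: -x
W2 = math.sin
T1 = lambda x: (1 - a1) * x + a1 * W1(x)
T2 = lambda x: (1 - a2) * x + a2 * W2(x)

# Proposition 2.1(iv) predicts T1∘T2 is alpha-averaged with
alpha = a1 + a2 - a1 * a2
# so W = (T1(T2(x)) - (1-alpha)x) / alpha should be nonexpansive.
W = lambda x: (T1(T2(x)) - (1 - alpha) * x) / alpha

random.seed(0)
ok = all(
    abs(W(x) - W(y)) <= abs(x - y) + 1e-12
    for x, y in ((random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(1000))
)
```

Here $\alpha =0.3+0.6-0.18=0.72$, and the recovered W is indeed 1-Lipschitz on the sampled pairs.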

Recall that the metric (or nearest point) projection from H onto C is the mapping ${P}_{C}:H\to C$ which assigns to each point $x\in H$ the unique point ${P}_{C}x\in C$ satisfying the property

$\parallel x-{P}_{C}x\parallel =\underset{y\in C}{inf}\parallel x-y\parallel =:d\left(x,C\right).$
(3)

Lemma 2.1 For given $x\in H$:

1. (i)

$z={P}_{C}x$ if and only if

$〈x-z,y-z〉\le 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C;$
2. (ii)

$z={P}_{C}x$ if and only if

${\parallel x-z\parallel }^{2}\le {\parallel x-y\parallel }^{2}-{\parallel y-z\parallel }^{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C;$
3. (iii)
$〈{P}_{C}x-{P}_{C}y,x-y〉\ge {\parallel {P}_{C}x-{P}_{C}y\parallel }^{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in H.$

Consequently, ${P}_{C}$ is nonexpansive.
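The variational characterization in Lemma 2.1(i) can be illustrated with the closed unit ball in ${\mathbb{R}}^{2}$ (an illustrative choice of C, not from the text): for $z={P}_{C}x$, the inner product $〈x-z,y-z〉$ is nonpositive for every $y\in C$.

```python
import math
import random

# Check Lemma 2.1(i) for C = closed unit ball in R^2 (illustrative choice):
# z = P_C(x) satisfies <x - z, y - z> <= 0 for all y in C.

def proj_ball(x):
    # metric projection onto {y : ||y|| <= 1}
    n = math.hypot(x[0], x[1])
    return x if n <= 1.0 else (x[0] / n, x[1] / n)

x = (3.0, -4.0)          # a point outside C
z = proj_ball(x)         # z = x / ||x|| = (0.6, -0.8)

random.seed(1)
ok = True
for _ in range(1000):
    # sample y from the ball by rejection
    y = (random.uniform(-1, 1), random.uniform(-1, 1))
    if math.hypot(*y) > 1.0:
        continue
    inner = (x[0] - z[0]) * (y[0] - z[0]) + (x[1] - z[1]) * (y[1] - z[1])
    ok = ok and inner <= 1e-12
```

Equality in (i) is attained at $y=z$ itself; for every other sampled $y\in C$ the inner product is strictly negative.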

Lemma 2.2 The following inequality holds in a Hilbert space X:

${\parallel x+y\parallel }^{2}\le {\parallel x\parallel }^{2}+2〈y,x+y〉,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in X.$

Lemma 2.3 

In a Hilbert space H, we have

Lemma 2.4 (Demiclosedness principle )

Let C be a closed and convex subset of a Hilbert space H, and let $T:C\to C$ be a nonexpansive mapping with $Fix\left(T\right)\ne \mathrm{\varnothing }$. If ${\left\{{x}_{n}\right\}}_{n=1}^{\mathrm{\infty }}$ is a sequence in C weakly converging to x and if ${\left\{\left(I-T\right){x}_{n}\right\}}_{n=1}^{\mathrm{\infty }}$ converges strongly to y, then $\left(I-T\right)x=y$. In particular, if $y=0$, then $x\in Fix\left(T\right)$.

Definition 2.2 A nonlinear operator G with domain $D\left(G\right)\subseteq H$ and range $R\left(G\right)\subseteq H$ is said to be:

1. (i)

monotone if

$〈x-y,Gx-Gy〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in D\left(G\right),$
2. (ii)

β-strongly monotone if there exists $\beta >0$ such that

$〈x-y,Gx-Gy〉\ge \beta {\parallel x-y\parallel }^{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in D\left(G\right),$
3. (iii)

ν-inverse strongly monotone (for short, ν-ism) if there exists $\nu >0$ such that

$〈x-y,Gx-Gy〉\ge \nu {\parallel Gx-Gy\parallel }^{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in D\left(G\right).$

Proposition 2.2 

Let $T:H\to H$ be an operator from H to itself.

1. (i)

T is nonexpansive if and only if the complement $I-T$ is $\frac{1}{2}$-ism.

2. (ii)

If T is ν-ism, then for $\gamma >0$, γT is $\frac{\nu }{\gamma }$-ism.

3. (iii)

T is averaged if and only if the complement $I-T$ is ν-ism for some $\nu >1/2$. Indeed, for $\alpha \in \left(0,1\right)$, T is α-averaged if and only if $I-T$ is $\frac{1}{2\alpha }$-ism.
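Part (i) of Proposition 2.2 can be checked numerically for a concrete nonexpansive map. Taking $T=\mathrm{cos}$ on ℝ (an illustrative choice), the complement $G=I-T$ should satisfy the $\frac{1}{2}$-ism inequality $\left(x-y\right)\left(G\left(x\right)-G\left(y\right)\right)\ge \frac{1}{2}{\left(G\left(x\right)-G\left(y\right)\right)}^{2}$.

```python
import math
import random

# Proposition 2.2(i): T nonexpansive  <=>  G = I - T is (1/2)-ism.
# T(x) = cos(x) is an illustrative nonexpansive map on R.
T = math.cos
G = lambda x: x - T(x)

random.seed(2)
ok = all(
    (x - y) * (G(x) - G(y)) >= 0.5 * (G(x) - G(y)) ** 2 - 1e-12
    for x, y in ((random.uniform(-4, 4), random.uniform(-4, 4)) for _ in range(1000))
)
```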

Lemma 2.5 

Assume that $\left\{{a}_{n}\right\}$ is a sequence of nonnegative real numbers such that

${a}_{n+1}\le \left(1-{\gamma }_{n}\right){a}_{n}+{\gamma }_{n}{\delta }_{n},\phantom{\rule{1em}{0ex}}n\ge 0,$

where $\left\{{\gamma }_{n}\right\}$ is a sequence in $\left(0,1\right)$ and $\left\{{\delta }_{n}\right\}$ is a sequence in $\mathbb{R}$ such that

1. (i)

${\sum }_{n=1}^{\mathrm{\infty }}{\gamma }_{n}=\mathrm{\infty }$;

2. (ii)

${lim sup}_{n\to \mathrm{\infty }}{\delta }_{n}\le 0$ or ${\sum }_{n=1}^{\mathrm{\infty }}{\gamma }_{n}|{\delta }_{n}|<\mathrm{\infty }$.

Then ${lim}_{n\to \mathrm{\infty }}{a}_{n}=0$.
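Lemma 2.5 can be illustrated numerically with the hypothetical choices ${\gamma }_{n}=\frac{1}{n+2}$ (so ${\sum }_{n}{\gamma }_{n}=\mathrm{\infty }$) and ${\delta }_{n}=\frac{1}{n+1}$ (so ${lim sup}_{n}{\delta }_{n}=0\le 0$); the recursion then drives ${a}_{n}$ to 0 at roughly a $\frac{\mathrm{log}n}{n}$ rate.

```python
# Illustration of Lemma 2.5:
#   a_{n+1} = (1 - gamma_n) a_n + gamma_n delta_n
# with gamma_n = 1/(n+2) (divergent sum) and delta_n = 1/(n+1) (limsup <= 0).
a = 1.0
for n in range(200000):
    gamma = 1.0 / (n + 2)
    delta = 1.0 / (n + 1)
    a = (1 - gamma) * a + gamma * delta
# a has decayed to roughly log(n)/n, far below its initial value 1.0
```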

## 3 Main results

We now look at the constrained convex minimization problem:

$\underset{x\in C}{min}f\left(x\right),$
(4)

where C is a closed and convex subset of a Hilbert space H and $f:C\to \mathbb{R}$ is a real-valued convex function. Assume that problem (4) is consistent, and let S denote its solution set. If f is Fréchet differentiable, then the gradient-projection algorithm (GPA) generates a sequence ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ according to the recursive formula

${x}_{n+1}={Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right)\left({x}_{n}\right),\phantom{\rule{1em}{0ex}}n\ge 0,$
(5)

or more generally,

${x}_{n+1}={Proj}_{C}\left(I-{\gamma }_{n}\mathrm{\nabla }f\right)\left({x}_{n}\right),\phantom{\rule{1em}{0ex}}n\ge 0,$
(6)

where, in both (5) and (6), the initial guess ${x}_{0}$ is taken from C arbitrarily, and the parameters γ and ${\gamma }_{n}$ are positive real numbers.

As a matter of fact, it is known that if $\mathrm{\nabla }f$ fails to be strongly monotone and is only $\frac{1}{L}$-ism, namely, there is a constant $L>0$ such that

$〈x-y,\mathrm{\nabla }f\left(x\right)-\mathrm{\nabla }f\left(y\right)〉\ge \frac{1}{L}{\parallel \mathrm{\nabla }f\left(x\right)-\mathrm{\nabla }f\left(y\right)\parallel }^{2},\phantom{\rule{1em}{0ex}}x,y\in C,$

then, under suitable assumptions on γ or ${\gamma }_{n}$, algorithms (5) and (6) still converge, but only in the weak topology.

Now consider the regularized minimization problem

$\underset{x\in C}{min}{f}_{\alpha }\left(x\right):=\underset{x\in C}{min}\left\{f\left(x\right)+\frac{\alpha }{2}{\parallel x\parallel }^{2}\right\},$

where $\alpha >0$ is the regularization parameter, and again f is convex with a $\frac{1}{L}$-ism gradient $\mathrm{\nabla }f$.

The corresponding regularization method is defined as follows:

${x}_{n+1}={Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n}}\right)\left({x}_{n}\right).$

It is known that ${x}_{n}⇀\stackrel{˜}{x}$, where $\stackrel{˜}{x}$ is a solution of constrained convex minimization problem (4).
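The regularized iteration ${x}_{n+1}={Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n}}\right)\left({x}_{n}\right)$ can be sketched as follows; the problem data $f\left(x\right)={\left(x-3\right)}^{2}$, $C=\left[0,2\right]$ and the schedule ${\alpha }_{n}=\frac{1}{n+1}$ are illustrative choices only.

```python
# Regularized gradient-projection sketch:
#   x_{n+1} = Proj_C (I - gamma * grad f_{alpha_n}) (x_n),
# where f_alpha(x) = f(x) + (alpha/2) x^2, so grad f_alpha = grad f + alpha*I.
# Illustrative data: f(x) = (x - 3)^2 on C = [0, 2], minimizer x = 2, L = 2.

def grad_f(x):
    return 2.0 * (x - 3.0)

def proj_C(x):
    return min(max(x, 0.0), 2.0)

gamma = 0.4                              # 0 < gamma < 2/L with L = 2
x = 0.0
for n in range(5000):
    alpha_n = 1.0 / (n + 1.0)            # regularization parameter, -> 0
    x = proj_C(x - gamma * (grad_f(x) + alpha_n * x))
```

Since ${\alpha }_{n}\to 0$, the iterates approach the solution of the unregularized problem (4); in this one-dimensional example the convergence is, of course, strong.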

Let $h:C\to H$ be a contraction with a constant $\rho \in \left(0,1\right)$. In this section, we introduce the following scheme, which generates a net $\left\{{x}_{s,{t}_{s}}\right\}$ in an implicit way:

${x}_{s,{t}_{s}}={P}_{C}\left[sh\left({x}_{s,{t}_{s}}\right)+\left(1-s\right){T}_{{t}_{s}}{x}_{s,{t}_{s}}\right],$
(7)

where $0<\gamma <\frac{2}{L}$, ${t}_{s}\in \left(0,\frac{2}{\gamma }-L\right)$. Let ${T}_{{t}_{s}}$ and s satisfy the following conditions:

1. (i)

$\lambda :=\lambda \left({t}_{s}\right)=\frac{2-\gamma \left(L+{t}_{s}\right)}{4}$;

2. (ii)

${P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{t}_{s}}\right)=\lambda I+\left(1-\lambda \right){T}_{{t}_{s}}$.

Consider a mapping

${Q}_{s}x={P}_{C}\left[sh\left(x\right)+\left(1-s\right){T}_{{t}_{s}}x\right],\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in C.$
(8)

It is easy to see that ${Q}_{s}$ is a contraction. Indeed, we have

$\begin{array}{rcl}\parallel {Q}_{s}x-{Q}_{s}y\parallel & \le & \parallel sh\left(x\right)+\left(1-s\right){T}_{{t}_{s}}x-\left[sh\left(y\right)+\left(1-s\right){T}_{{t}_{s}}y\right]\parallel \\ \le & \rho s\parallel x-y\parallel +\left(1-s\right)\parallel x-y\parallel =\left[1-\left(1-\rho \right)s\right]\parallel x-y\parallel .\end{array}$

Hence, ${Q}_{s}$ has a unique fixed point in C, denoted by ${x}_{s,{t}_{s}}$, which uniquely solves fixed point equation (7).

We prove the strong convergence of ${\left\{{x}_{s,{t}_{s}}\right\}}_{{t}_{s}\in \left(0,\frac{2}{\gamma }-L\right)}$ to a solution ${x}^{\ast }$ of minimization problem (4), which is also the unique solution of the variational inequality

$〈\left(I-h\right){x}^{\ast },{x}^{\ast }-z〉\le 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }z\in S.$
(9)

For an arbitrary initial guess ${x}_{0}\in C$ and a sequence $\left\{{\alpha }_{n}\right\}\subset \left(0,\frac{2}{\gamma }-L\right)$, we also propose the following scheme, which generates a sequence $\left\{{x}_{n}\right\}$ in an explicit way:

${x}_{n+1}={P}_{C}\left[{\theta }_{n}h\left({x}_{n}\right)+\left(1-{\theta }_{n}\right){T}_{n}{x}_{n}\right],\phantom{\rule{1em}{0ex}}n\ge 0,$
(10)

where $0<\gamma <\frac{2}{L}$, ${\lambda }_{n}=\frac{2-\gamma \left(L+{\alpha }_{n}\right)}{4}$ and ${P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n}}\right)={\lambda }_{n}I+\left(1-{\lambda }_{n}\right){T}_{n}$ for each $n\ge 0$. We prove that this sequence $\left\{{x}_{n}\right\}$ converges strongly to a minimizer ${x}^{\ast }\in S$ of (4).
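The explicit scheme (10) can be sketched numerically. All problem data below are illustrative assumptions: $f\left(x\right)={\left(x-3\right)}^{2}$ on $C=\left[0,2\right]$ (so $L=2$ and $S=\left\{2\right\}$), the $\frac{1}{2}$-contraction $h\left(x\right)=\frac{x}{2}$, $\gamma =0.4$, ${\theta }_{n}=\frac{1}{n+2}$ and ${\alpha }_{n}=\frac{1}{{\left(n+2\right)}^{2}}=o\left({\theta }_{n}\right)$, which satisfy conditions (i)-(iv) of the explicit scheme. The map ${T}_{n}$ is recovered from the identity ${P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n}}\right)={\lambda }_{n}I+\left(1-{\lambda }_{n}\right){T}_{n}$.

```python
# Sketch of explicit scheme (10):
#   x_{n+1} = P_C [ theta_n h(x_n) + (1 - theta_n) T_n x_n ].
# Illustrative data: f(x) = (x-3)^2 on C = [0, 2], so L = 2 and S = {2};
# h(x) = x/2 is a 1/2-contraction; theta_n = 1/(n+2), alpha_n = theta_n^2.

def grad_f(x):
    return 2.0 * (x - 3.0)

def proj_C(x):
    return min(max(x, 0.0), 2.0)

gamma = 0.4
h = lambda x: 0.5 * x

def T_n(x, alpha):
    # recover T_n from P_C(I - gamma*grad f_alpha) = lambda_n I + (1-lambda_n) T_n
    lam = (2.0 - gamma * (2.0 + alpha)) / 4.0
    y = proj_C(x - gamma * (grad_f(x) + alpha * x))
    return (y - lam * x) / (1.0 - lam)

x = 0.0
for n in range(20000):
    theta = 1.0 / (n + 2.0)
    alpha = 1.0 / (n + 2.0) ** 2
    x = proj_C(theta * h(x) + (1.0 - theta) * T_n(x, alpha))
```

For this data the iterates approach the unique solution $2\in S$, in agreement with the strong convergence asserted above.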

### 3.1 Convergence of the implicit scheme

Proposition 3.1 If $0<\gamma <\frac{2}{L}$, $\alpha \in \left(0,\frac{2}{\gamma }-L\right)$, and $\mathrm{\nabla }f$ is $\frac{1}{L}$-ism, then

$\begin{array}{c}{Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{\alpha }\right)=\left(1-{\mu }_{\alpha }\right)I+{\mu }_{\alpha }{T}_{\alpha },\hfill \\ {Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right)=\left(1-\mu \right)I+\mu T,\hfill \end{array}$

where ${\mu }_{\alpha }=\frac{2+\gamma \left(L+\alpha \right)}{4}$, $\mu =\frac{2+\gamma L}{4}$.

In addition, for all $x\in C$,

$\parallel {T}_{\alpha }x-Tx\parallel \le \alpha M\left(x\right),$

where

$M\left(x\right)=\gamma \left(5\parallel x\parallel +\parallel Tx\parallel \right).$

Proof Since $\mathrm{\nabla }f$ is $\frac{1}{L}$-ism, $\mathrm{\nabla }{f}_{\alpha }=\mathrm{\nabla }f+\alpha I$ is $\frac{1}{L+\alpha }$-ism, and hence $\gamma \mathrm{\nabla }{f}_{\alpha }$ is $\frac{1}{\gamma \left(L+\alpha \right)}$-ism. By Proposition 2.2, $I-\gamma \mathrm{\nabla }{f}_{\alpha }$ is $\frac{\gamma \left(L+\alpha \right)}{2}$-averaged. Since ${Proj}_{C}$ is $\frac{1}{2}$-averaged, Proposition 2.1 implies that ${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{\alpha }\right)$ is ${\mu }_{\alpha }$-averaged, i.e.,

${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{\alpha }\right)=\left(1-{\mu }_{\alpha }\right)I+{\mu }_{\alpha }{T}_{\alpha },$

where ${\mu }_{\alpha }=\frac{2+\gamma \left(L+\alpha \right)}{4}$. The same case holds for ${Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right)$.

Hence,

$\begin{array}{c}\parallel {Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{\alpha }\right)x-{Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right)x\parallel \hfill \\ \phantom{\rule{1em}{0ex}}=\parallel \left(\mu -{\mu }_{\alpha }\right)x+{\mu }_{\alpha }{T}_{\alpha }x-\mu Tx\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \parallel \left(I-\gamma \mathrm{\nabla }{f}_{\alpha }\right)x-\left(I-\gamma \mathrm{\nabla }f\right)x\parallel \hfill \\ \phantom{\rule{1em}{0ex}}=\gamma \parallel \mathrm{\nabla }{f}_{\alpha }\left(x\right)-\mathrm{\nabla }f\left(x\right)\parallel =\alpha \gamma \parallel x\parallel ,\hfill \end{array}$

then

$\begin{array}{c}\parallel {\mu }_{\alpha }\left({T}_{\alpha }x\right)-\mu Tx\parallel \le |\mu -{\mu }_{\alpha }|\parallel x\parallel +\alpha \gamma \parallel x\parallel ,\hfill \\ \parallel {T}_{\alpha }x-Tx\parallel \le \frac{\alpha \gamma \left(5\parallel x\parallel +\parallel Tx\parallel \right)}{2+\gamma \left(L+\alpha \right)}\le \alpha M\left(x\right),\hfill \end{array}$

where $M\left(x\right)=\gamma \left(5\parallel x\parallel +\parallel Tx\parallel \right)$. □
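The bound $\parallel {T}_{\alpha }x-Tx\parallel \le \alpha M\left(x\right)$ of Proposition 3.1 can be checked numerically; the problem data below ($f\left(x\right)={\left(x-3\right)}^{2}$ on $C=\left[0,2\right]$, $\gamma =0.4$, $\alpha =0.1$) are illustrative choices, and ${T}_{\alpha }$, T are recovered from the averaging identities in the proposition.

```python
import random

# Numerical check of Proposition 3.1:
#   ||T_alpha x - T x|| <= alpha * M(x),  M(x) = gamma*(5||x|| + ||Tx||).
# Illustrative data: f(x) = (x-3)^2 on C = [0, 2], so L = 2.

gamma, L, alpha = 0.4, 2.0, 0.1

def proj_C(x):
    return min(max(x, 0.0), 2.0)

def T_reg(x, a):
    # recover T_a from Proj_C(I - gamma*grad f_a) = (1 - mu_a) I + mu_a T_a;
    # a = 0 gives the unregularized map T.
    mu = (2.0 + gamma * (L + a)) / 4.0
    y = proj_C(x - gamma * (2.0 * (x - 3.0) + a * x))
    return (y - (1.0 - mu) * x) / mu

random.seed(3)
ok = True
for _ in range(1000):
    x = random.uniform(0.0, 2.0)
    Tx = T_reg(x, 0.0)
    M = gamma * (5.0 * abs(x) + abs(Tx))
    ok = ok and abs(T_reg(x, alpha) - Tx) <= alpha * M + 1e-12
```

The bound holds with considerable slack here, which is consistent with the crude constant $M\left(x\right)$ used in the proof.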

Proposition 3.2 Let $h:C\to H$ be a contraction with constant $\rho \in \left(0,1\right)$, let $0<\gamma <\frac{2}{L}$, and let ${t}_{s}$ be continuous with respect to s with ${t}_{s}=o\left(s\right)$. Suppose that problem (4) is consistent, and let S denote the solution set. For each $s\in \left(0,1\right)$, let ${x}_{s,{t}_{s}}$ denote the unique solution of fixed point equation (7). Then the following properties hold for the net $\left\{{x}_{s,{t}_{s}}\right\}$:

1. (i)

${\left\{{x}_{s,{t}_{s}}\right\}}_{{t}_{s}\in \left(0,\frac{2}{\gamma }-L\right)}$ is bounded;

2. (ii)

${lim}_{s\to 0}\parallel {x}_{s,{t}_{s}}-{T}_{{t}_{s}}{x}_{s,{t}_{s}}\parallel =0$;

3. (iii)

${x}_{s,{t}_{s}}$ defines a continuous curve from $\left(0,1\right)$ into C.

Proof (i) Take any $p\in S$, then

${x}_{s,{t}_{s}}-p={P}_{C}\left[sh\left({x}_{s,{t}_{s}}\right)+\left(1-s\right){T}_{{t}_{s}}{x}_{s,{t}_{s}}\right]-{P}_{C}p.$

Therefore,

$\begin{array}{rcl}\parallel {x}_{s,{t}_{s}}-p\parallel & \le & s\rho \parallel {x}_{s,{t}_{s}}-p\parallel +s\parallel h\left(p\right)-p\parallel +\left(1-s\right)\parallel {T}_{{t}_{s}}{x}_{s,{t}_{s}}-Tp\parallel \\ \le & s\rho \parallel {x}_{s,{t}_{s}}-p\parallel +s\parallel h\left(p\right)-p\parallel +\left(1-s\right)\left[\parallel {T}_{{t}_{s}}{x}_{s,{t}_{s}}-{T}_{{t}_{s}}p\parallel +\parallel {T}_{{t}_{s}}p-Tp\parallel \right]\\ \le & \left[1-\left(1-\rho \right)s\right]\parallel {x}_{s,{t}_{s}}-p\parallel +s\parallel h\left(p\right)-p\parallel +\left(1-s\right){t}_{s}\parallel M\left(p\right)\parallel ,\end{array}$

hence,

$\parallel {x}_{s,{t}_{s}}-p\parallel \le \frac{\parallel h\left(p\right)-p\parallel }{1-\rho }+\left(1-s\right)\frac{{t}_{s}}{s\left(1-\rho \right)}\parallel M\left(p\right)\parallel .$
(11)

So, $\left\{{x}_{s,{t}_{s}}\right\}$ is bounded.

1. (ii)
$\parallel {x}_{s,{t}_{s}}-{T}_{{t}_{s}}{x}_{s,{t}_{s}}\parallel \le \parallel sh\left({x}_{s,{t}_{s}}\right)-s{T}_{{t}_{s}}{x}_{s,{t}_{s}}\parallel \to 0.$
2. (iii)

Take $s,{s}_{0}\in \left(0,1\right)$, and calculate

$\begin{array}{c}\parallel {x}_{s,{t}_{s}}-{x}_{{s}_{0},{t}_{{s}_{0}}}\parallel \le \parallel sh\left({x}_{s,{t}_{s}}\right)+\left(1-s\right){T}_{{t}_{s}}{x}_{s,{t}_{s}}-\left[{s}_{0}h\left({x}_{{s}_{0},{t}_{{s}_{0}}}\right)+\left(1-{s}_{0}\right){T}_{{t}_{{s}_{0}}}{x}_{{s}_{0},{t}_{{s}_{0}}}\right]\parallel \hfill \\ \phantom{\parallel {x}_{s,{t}_{s}}-{x}_{{s}_{0},{t}_{{s}_{0}}}\parallel }=\parallel s\left(h\left({x}_{s,{t}_{s}}\right)-h\left({x}_{{s}_{0},{t}_{{s}_{0}}}\right)\right)+\left(s-{s}_{0}\right)\left[h\left({x}_{{s}_{0},{t}_{{s}_{0}}}\right)-{T}_{{t}_{{s}_{0}}}{x}_{{s}_{0},{t}_{{s}_{0}}}\right]\hfill \\ \phantom{\parallel {x}_{s,{t}_{s}}-{x}_{{s}_{0},{t}_{{s}_{0}}}\parallel \le }+\left(1-s\right)\left[{T}_{{t}_{s}}{x}_{s,{t}_{s}}-{T}_{{t}_{s}}{x}_{{s}_{0},{t}_{{s}_{0}}}\right]+\left(1-s\right)\left[{T}_{{t}_{s}}{x}_{{s}_{0},{t}_{{s}_{0}}}-{T}_{{t}_{{s}_{0}}}{x}_{{s}_{0},{t}_{{s}_{0}}}\right]\parallel \hfill \\ \phantom{\parallel {x}_{s,{t}_{s}}-{x}_{{s}_{0},{t}_{{s}_{0}}}\parallel }\le s\rho \parallel {x}_{s,{t}_{s}}-{x}_{{s}_{0},{t}_{{s}_{0}}}\parallel +\left(1-s\right)\parallel {x}_{s,{t}_{s}}-{x}_{{s}_{0},{t}_{{s}_{0}}}\parallel \hfill \\ \phantom{\parallel {x}_{s,{t}_{s}}-{x}_{{s}_{0},{t}_{{s}_{0}}}\parallel \le }+\left(1-s\right)\parallel {T}_{{t}_{s}}{x}_{{s}_{0},{t}_{{s}_{0}}}-{T}_{{t}_{{s}_{0}}}{x}_{{s}_{0},{t}_{{s}_{0}}}\parallel +|s-{s}_{0}|\parallel h\left({x}_{{s}_{0},{t}_{{s}_{0}}}\right)-{T}_{{t}_{{s}_{0}}}{x}_{{s}_{0},{t}_{{s}_{0}}}\parallel ,\hfill \end{array}$
(12)
$\begin{array}{c}\parallel {T}_{{t}_{s}}{x}_{{s}_{0},{t}_{{s}_{0}}}-{T}_{{t}_{{s}_{0}}}{x}_{{s}_{0},{t}_{{s}_{0}}}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}=\parallel \frac{4{P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{t}_{s}}\right)-\left[2-\gamma \left(L+{t}_{s}\right)\right]I}{2+\gamma \left(L+{t}_{s}\right)}{x}_{{s}_{0},{t}_{{s}_{0}}}\hfill \\ \phantom{\rule{2em}{0ex}}-\frac{4{P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{t}_{{s}_{0}}}\right)-\left[2-\gamma \left(L+{t}_{{s}_{0}}\right)\right]I}{2+\gamma \left(L+{t}_{{s}_{0}}\right)}{x}_{{s}_{0},{t}_{{s}_{0}}}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \parallel \frac{4{P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{t}_{s}}\right)}{2+\gamma \left(L+{t}_{s}\right)}{x}_{{s}_{0},{t}_{{s}_{0}}}-\frac{4{P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{t}_{{s}_{0}}}\right)}{2+\gamma \left(L+{t}_{{s}_{0}}\right)}{x}_{{s}_{0},{t}_{{s}_{0}}}\parallel \hfill \\ \phantom{\rule{2em}{0ex}}+\parallel \frac{-\left[2-\gamma \left(L+{t}_{s}\right)\right]}{2+\gamma \left(L+{t}_{s}\right)}{x}_{{s}_{0},{t}_{{s}_{0}}}+\frac{\left[2-\gamma \left(L+{t}_{{s}_{0}}\right)\right]}{2+\gamma \left(L+{t}_{{s}_{0}}\right)}{x}_{{s}_{0},{t}_{{s}_{0}}}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}=\parallel \frac{4\left[2+\gamma \left(L+{t}_{{s}_{0}}\right)\right]{P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{t}_{s}}\right){x}_{{s}_{0},{t}_{{s}_{0}}}-4\left[2+\gamma \left(L+{t}_{s}\right)\right]{P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{t}_{{s}_{0}}}\right){x}_{{s}_{0},{t}_{{s}_{0}}}}{\left[2+\gamma \left(L+{t}_{s}\right)\right]\left[2+\gamma \left(L+{t}_{{s}_{0}}\right)\right]}\parallel \hfill \\ \phantom{\rule{2em}{0ex}}+\frac{4\gamma |{t}_{s}-{t}_{{s}_{0}}|\parallel {x}_{{s}_{0},{t}_{{s}_{0}}}\parallel }{\left[2+\gamma \left(L+{t}_{s}\right)\right]\left[2+\gamma \left(L+{t}_{{s}_{0}}\right)\right]}\hfill \\ \phantom{\rule{1em}{0ex}}=\parallel \frac{4\gamma \left({t}_{{s}_{0}}-{t}_{s}\right){P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{t}_{s}}\right){x}_{{s}_{0},{t}_{{s}_{0}}}}{\left[2+\gamma \left(L+{t}_{s}\right)\right]\left[2+\gamma \left(L+{t}_{{s}_{0}}\right)\right]}\hfill \\ \phantom{\rule{2em}{0ex}}+\frac{4\left(2+\gamma \left(L+{t}_{s}\right)\right)\left[{P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{t}_{s}}\right){x}_{{s}_{0},{t}_{{s}_{0}}}-{P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{t}_{{s}_{0}}}\right){x}_{{s}_{0},{t}_{{s}_{0}}}\right]}{\left[2+\gamma \left(L+{t}_{s}\right)\right]\left[2+\gamma \left(L+{t}_{{s}_{0}}\right)\right]}\parallel \hfill \\ \phantom{\rule{2em}{0ex}}+\frac{4\gamma |{t}_{s}-{t}_{{s}_{0}}|\parallel {x}_{{s}_{0},{t}_{{s}_{0}}}\parallel }{\left[2+\gamma \left(L+{t}_{s}\right)\right]\left[2+\gamma \left(L+{t}_{{s}_{0}}\right)\right]}\hfill \\ \phantom{\rule{1em}{0ex}}\le {M}_{1}|{t}_{s}-{t}_{{s}_{0}}|.\hfill \end{array}$
(13)

So, by (12) and (13),

$\parallel {x}_{s,{t}_{s}}-{x}_{{s}_{0},{t}_{{s}_{0}}}\parallel \to 0\phantom{\rule{1em}{0ex}}\left(s\to {s}_{0}\right).$

□

Theorem 3.1 Assume that minimization problem (4) is consistent, and let S denote the solution set. Assume that the gradient $\mathrm{\nabla }f$ is $\frac{1}{L}$-ism. Let $h:C\to H$ be a ρ-contraction with $\rho \in \left[0,1\right)$, and define

${x}_{s,{t}_{s}}={P}_{C}\left[sh\left({x}_{s,{t}_{s}}\right)+\left(1-s\right){T}_{{t}_{s}}{x}_{s,{t}_{s}}\right],$

where $0<\gamma <\frac{2}{L}$, ${t}_{s}\in \left(0,\frac{2}{\gamma }-L\right)$, ${t}_{s}=o\left(s\right)$. Let ${T}_{{t}_{s}}$ satisfy the following conditions:

1. (i)

$\lambda :=\lambda \left({t}_{s}\right)=\frac{2-\gamma \left(L+{t}_{s}\right)}{4}$;

2. (ii)

${P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{t}_{s}}\right)=\lambda I+\left(1-\lambda \right){T}_{{t}_{s}}$.

Then the net $\left\{{x}_{s,{t}_{s}}\right\}$ converges strongly as $s\to 0$ to a minimizer of problem (4), which is also the unique solution of the variational inequality

${x}^{\ast }\in S,\phantom{\rule{1em}{0ex}}〈\left(I-h\right){x}^{\ast },x-{x}^{\ast }〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in S.$

Proof Set ${Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right)=\left(1-\tau \right)I+\tau T$, $\tau =\frac{2-\gamma L}{4}$. Let ${y}_{s,{t}_{s}}=sh\left({x}_{s,{t}_{s}}\right)+\left(1-s\right){T}_{{t}_{s}}{x}_{s,{t}_{s}}$, ${t}_{s}\in \left(0,\frac{2}{\gamma }-L\right)$.

We then have ${x}_{s,{t}_{s}}={P}_{C}{y}_{s,{t}_{s}}$. For any given $z\in S$, we have $z={P}_{C}\left(I-\gamma \mathrm{\nabla }f\right)z$, and we obtain

$\begin{array}{rcl}{x}_{s,{t}_{s}}-z& =& {P}_{C}{y}_{s,{t}_{s}}-{y}_{s,{t}_{s}}+{y}_{s,{t}_{s}}-z\\ =& {P}_{C}{y}_{s,{t}_{s}}-{y}_{s,{t}_{s}}+sh\left({x}_{s,{t}_{s}}\right)+\left(1-s\right){T}_{{t}_{s}}{x}_{s,{t}_{s}}-Tz\\ =& {P}_{C}{y}_{s,{t}_{s}}-{y}_{s,{t}_{s}}+s\left[h\left({x}_{s,{t}_{s}}\right)-h\left(z\right)\right]+s\left[h\left(z\right)-Tz\right]+\left(1-s\right)\left[{T}_{{t}_{s}}{x}_{s,{t}_{s}}-Tz\right].\end{array}$

Next we prove that $\left\{{x}_{s,{t}_{s}}\right\}\to {x}^{\ast }\in S$, which is also the unique solution of the variational inequality. We have

$\begin{array}{rcl}{\parallel {x}_{s,{t}_{s}}-z\parallel }^{2}& =& 〈{P}_{C}{y}_{s,{t}_{s}}-{y}_{s,{t}_{s}},{P}_{C}{y}_{s,{t}_{s}}-z〉+s〈h\left({x}_{s,{t}_{s}}\right)-h\left(z\right),{x}_{s,{t}_{s}}-z〉\\ +s〈h\left(z\right)-Tz,{x}_{s,{t}_{s}}-z〉+\left(1-s\right)〈{T}_{{t}_{s}}{x}_{s,{t}_{s}}-Tz,{x}_{s,{t}_{s}}-z〉\\ \le & s\rho {\parallel {x}_{s,{t}_{s}}-z\parallel }^{2}+s〈h\left(z\right)-Tz,{x}_{s,{t}_{s}}-z〉+\left(1-s\right)〈{T}_{{t}_{s}}{x}_{s,{t}_{s}}-Tz,{x}_{s,{t}_{s}}-z〉\\ \le & \left[1-\left(1-\rho \right)s\right]{\parallel {x}_{s,{t}_{s}}-z\parallel }^{2}+s〈h\left(z\right)-Tz,{x}_{s,{t}_{s}}-z〉\\ +\left(1-s\right){t}_{s}\parallel M\left(z\right)\parallel \parallel {x}_{s,{t}_{s}}-z\parallel .\end{array}$

So,

${\parallel {x}_{s,{t}_{s}}-z\parallel }^{2}\le \frac{〈h\left(z\right)-Tz,{x}_{s,{t}_{s}}-z〉}{1-\rho }+\frac{\left(1-s\right){t}_{s}\parallel M\left(z\right)\parallel \parallel {x}_{s,{t}_{s}}-z\parallel }{s\left(1-\rho \right)}.$
(14)

Consequently, if ${x}_{{s}_{n},{t}_{{s}_{n}}}⇀p$ for some $p\in S$, then taking $z=p$ in (14) shows that ${x}_{{s}_{n},{t}_{{s}_{n}}}\to p$. Next, we prove that

$\parallel {x}_{s,{t}_{s}}-{Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right){x}_{s,{t}_{s}}\parallel \to 0.$

Indeed,

$\begin{array}{rcl}\parallel {Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{t}_{s}}\right){x}_{s,{t}_{s}}-{x}_{s,{t}_{s}}\parallel & =& \parallel \lambda {x}_{s,{t}_{s}}+\left(1-\lambda \right){T}_{{t}_{s}}\left({x}_{s,{t}_{s}}\right)-{x}_{s,{t}_{s}}\parallel \\ \le & \parallel {T}_{{t}_{s}}\left({x}_{s,{t}_{s}}\right)-{x}_{s,{t}_{s}}\parallel .\end{array}$

So,

$\parallel {x}_{s,{t}_{s}}-{Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right){x}_{s,{t}_{s}}\parallel \le \gamma {t}_{s}\parallel {x}_{s,{t}_{s}}\parallel +\parallel {T}_{{t}_{s}}\left({x}_{s,{t}_{s}}\right)-{x}_{s,{t}_{s}}\parallel \to 0.$

Finally, we prove that $\left\{{x}_{s,{t}_{s}}\right\}\to {x}^{\ast }\in S$, which is also the unique solution of the variational inequality. We only need to prove that if ${x}_{{s}_{n},{t}_{{s}_{n}}}⇀\stackrel{˜}{x}$, then

$〈\left(I-h\right)\stackrel{˜}{x},x-\stackrel{˜}{x}〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in S.$

Suppose that ${x}_{{s}_{n},{t}_{{s}_{n}}}⇀\stackrel{˜}{x}$. By Lemma 2.4 and $\parallel {x}_{s,{t}_{s}}-{Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right){x}_{s,{t}_{s}}\parallel \to 0$, we have $\stackrel{˜}{x}={Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right)\stackrel{˜}{x}$, and it follows that $\stackrel{˜}{x}\in S$. Note that ${x}_{{s}_{n},{t}_{{s}_{n}}}\to \stackrel{˜}{x}$ by (14). From the definition

${x}_{s,{t}_{s}}={P}_{C}\left[sh\left({x}_{s,{t}_{s}}\right)+\left(1-s\right){T}_{{t}_{s}}{x}_{s,{t}_{s}}\right],$

we have

$\left(I-h\right)\left({x}_{{s}_{n},{t}_{{s}_{n}}}\right)=\frac{1}{{s}_{n}}\left({P}_{C}{y}_{{s}_{n},{t}_{{s}_{n}}}-{y}_{{s}_{n},{t}_{{s}_{n}}}\right)-\frac{1}{{s}_{n}}\left[\left(I-{T}_{{t}_{{s}_{n}}}\right){x}_{{s}_{n},{t}_{{s}_{n}}}\right]+\left[{x}_{{s}_{n},{t}_{{s}_{n}}}-{T}_{{t}_{{s}_{n}}}{x}_{{s}_{n},{t}_{{s}_{n}}}\right].$

So

$\begin{array}{c}〈\left(I-h\right)\left({x}_{{s}_{n},{t}_{{s}_{n}}}\right),{x}_{{s}_{n},{t}_{{s}_{n}}}-z〉\hfill \\ \phantom{\rule{1em}{0ex}}=\frac{1}{{s}_{n}}〈{P}_{C}{y}_{{s}_{n},{t}_{{s}_{n}}}-{y}_{{s}_{n},{t}_{{s}_{n}}},{x}_{{s}_{n},{t}_{{s}_{n}}}-z〉\hfill \\ \phantom{\rule{2em}{0ex}}-\frac{1}{{s}_{n}}〈{x}_{{s}_{n},{t}_{{s}_{n}}}-{T}_{{t}_{{s}_{n}}}{x}_{{s}_{n},{t}_{{s}_{n}}},{x}_{{s}_{n},{t}_{{s}_{n}}}-z〉+〈{x}_{{s}_{n},{t}_{{s}_{n}}}-{T}_{{t}_{{s}_{n}}}{x}_{{s}_{n},{t}_{{s}_{n}}},{x}_{{s}_{n},{t}_{{s}_{n}}}-z〉\hfill \\ \phantom{\rule{1em}{0ex}}\le -\frac{1}{{s}_{n}}〈\left(I-{T}_{{t}_{{s}_{n}}}\right){x}_{{s}_{n},{t}_{{s}_{n}}},{x}_{{s}_{n},{t}_{{s}_{n}}}-z〉+〈\left(I-{T}_{{t}_{{s}_{n}}}\right){x}_{{s}_{n},{t}_{{s}_{n}}},{x}_{{s}_{n},{t}_{{s}_{n}}}-z〉\hfill \\ \phantom{\rule{1em}{0ex}}\le -\frac{1}{{s}_{n}}\left[〈\left(I-T\right){x}_{{s}_{n},{t}_{{s}_{n}}},{x}_{{s}_{n},{t}_{{s}_{n}}}-z〉-\frac{1}{{s}_{n}}〈\left(T-{T}_{{t}_{{s}_{n}}}\right){x}_{{s}_{n},{t}_{{s}_{n}}},{x}_{{s}_{n},{t}_{{s}_{n}}}-z〉\right]\hfill \\ \phantom{\rule{2em}{0ex}}+〈{x}_{{s}_{n},{t}_{{s}_{n}}}-{T}_{{t}_{{s}_{n}}}{x}_{{s}_{n},{t}_{{s}_{n}}},{x}_{{s}_{n},{t}_{{s}_{n}}}-z〉,\hfill \end{array}$

then

$〈\left(I-h\right)\stackrel{˜}{x},\stackrel{˜}{x}-z〉=\underset{n\to \mathrm{\infty }}{lim}〈\left(I-h\right)\left({x}_{{s}_{n},{t}_{{s}_{n}}}\right),{x}_{{s}_{n},{t}_{{s}_{n}}}-z〉\le 0.$
(15)

So, $\left\{{x}_{s,{t}_{s}}\right\}\to {x}^{\ast }\in S$, which is also the unique solution of the variational inequality. □

### 3.2 Convergence of the explicit scheme

Theorem 3.2 Assume that minimization problem (4) is consistent, and let S denote the solution set. Assume that the gradient $\mathrm{\nabla }f$ is $\frac{1}{L}$-ism. Let $h:C\to H$ be a ρ-contraction with $\rho \in \left[0,1\right)$. Let a sequence ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ be generated by the following hybrid gradient projection algorithm:

${x}_{n+1}={P}_{C}\left[{\theta }_{n}h\left({x}_{n}\right)+\left(1-{\theta }_{n}\right){T}_{n}\left({x}_{n}\right)\right],\phantom{\rule{1em}{0ex}}n=0,1,2,\dots ,$
(16)

where $0<\gamma <\frac{2}{L}$, ${P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n}}\right)={\lambda }_{n}I+\left(1-{\lambda }_{n}\right){T}_{n}$ and ${\lambda }_{n}=\frac{2-\gamma \left(L+{\alpha }_{n}\right)}{4}$, and, in addition, assume that the following conditions are satisfied for ${\left\{{\theta }_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ and ${\left\{{\alpha }_{n}\right\}}_{n=0}^{\mathrm{\infty }}$:

1. (i)

${\theta }_{n}\to 0$; ${\alpha }_{n}=o\left({\theta }_{n}\right)$;

2. (ii)

${\sum }_{n=0}^{\mathrm{\infty }}{\theta }_{n}=\mathrm{\infty }$;

3. (iii)

${\sum }_{n=0}^{\mathrm{\infty }}|{\theta }_{n+1}-{\theta }_{n}|<\mathrm{\infty }$;

4. (iv)

${\sum }_{n=0}^{\mathrm{\infty }}|{\alpha }_{n+1}-{\alpha }_{n}|<\mathrm{\infty }$.

Then the sequence ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ converges in norm to a minimizer of (4) which is also the unique solution of the variational inequality (VI)

${x}^{\ast }\in S,\phantom{\rule{1em}{0ex}}〈\left(I-h\right){x}^{\ast },x-{x}^{\ast }〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in S.$

In other words, ${x}^{\ast }$ is the unique fixed point of the contraction ${Proj}_{S}h$,

${x}^{\ast }={Proj}_{S}h\left({x}^{\ast }\right).$

Proof (1) We first prove that ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ is bounded. Set ${Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right)=\left(1-\tau \right)I+\tau T$, $\tau =\frac{2-\gamma L}{4}$. Indeed, we have, for $\stackrel{˜}{x}\in S$,

$\begin{array}{c}\parallel {x}_{n+1}-\stackrel{˜}{x}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}=\parallel {P}_{C}\left[{\theta }_{n}h\left({x}_{n}\right)+\left(1-{\theta }_{n}\right){T}_{n}\left({x}_{n}\right)\right]-{P}_{C}\stackrel{˜}{x}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \parallel {\theta }_{n}h\left({x}_{n}\right)+\left(1-{\theta }_{n}\right){T}_{n}\left({x}_{n}\right)-\stackrel{˜}{x}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}=\parallel {\theta }_{n}\left(h\left({x}_{n}\right)-h\left(\stackrel{˜}{x}\right)\right)+{\theta }_{n}\left(h\left(\stackrel{˜}{x}\right)-\stackrel{˜}{x}\right)+\left(1-{\theta }_{n}\right)\left({T}_{n}\left({x}_{n}\right)-\stackrel{˜}{x}\right)\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le {\theta }_{n}\rho \parallel {x}_{n}-\stackrel{˜}{x}\parallel +{\theta }_{n}\parallel h\left(\stackrel{˜}{x}\right)-\stackrel{˜}{x}\parallel +\left(1-{\theta }_{n}\right)\left[\parallel {x}_{n}-\stackrel{˜}{x}\parallel +\parallel {T}_{n}\left(\stackrel{˜}{x}\right)-T\left(\stackrel{˜}{x}\right)\parallel \right]\hfill \\ \phantom{\rule{1em}{0ex}}\le \left(1-\left(1-\rho \right){\theta }_{n}\right)\parallel {x}_{n}-\stackrel{˜}{x}\parallel +{\theta }_{n}\parallel h\left(\stackrel{˜}{x}\right)-\stackrel{˜}{x}\parallel +{\alpha }_{n}\parallel M\left(\stackrel{˜}{x}\right)\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le max\left\{\parallel {x}_{n}-\stackrel{˜}{x}\parallel ,\frac{1}{1-\rho }\left[\parallel h\left(\stackrel{˜}{x}\right)-\stackrel{˜}{x}\parallel +\parallel M\left(\stackrel{˜}{x}\right)\parallel \right]\right\}.\hfill \end{array}$

So, $\left\{{x}_{n}\right\}$ is bounded.

(2) Next we prove that $\parallel {x}_{n+1}-{x}_{n}\parallel \to 0$ as $n\to \mathrm{\infty }$.

$\begin{array}{c}\parallel {x}_{n+1}-{x}_{n}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}=\parallel {P}_{C}\left[{\theta }_{n}h\left({x}_{n}\right)+\left(1-{\theta }_{n}\right){T}_{n}{x}_{n}\right]-{P}_{C}\left[{\theta }_{n-1}h\left({x}_{n-1}\right)+\left(1-{\theta }_{n-1}\right){T}_{n-1}{x}_{n-1}\right]\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \parallel \left[{\theta }_{n}h\left({x}_{n}\right)+\left(1-{\theta }_{n}\right){T}_{n}{x}_{n}\right]-\left[{\theta }_{n-1}h\left({x}_{n-1}\right)+\left(1-{\theta }_{n-1}\right){T}_{n-1}{x}_{n-1}\right]\parallel \hfill \\ \phantom{\rule{1em}{0ex}}=\parallel {\theta }_{n}\left(h\left({x}_{n}\right)-h\left({x}_{n-1}\right)\right)+\left(1-{\theta }_{n}\right)\left({T}_{n}{x}_{n}-{T}_{n}{x}_{n-1}\right)\hfill \\ \phantom{\rule{2em}{0ex}}+\left({\theta }_{n}-{\theta }_{n-1}\right)\left(h\left({x}_{n-1}\right)-{T}_{n}{x}_{n-1}\right)+\left(1-{\theta }_{n-1}\right)\left({T}_{n}{x}_{n-1}-{T}_{n-1}{x}_{n-1}\right)\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \left[1-\left(1-\rho \right){\theta }_{n}\right]\parallel {x}_{n}-{x}_{n-1}\parallel +{M}_{2}|{\theta }_{n}-{\theta }_{n-1}|+\left(1-{\theta }_{n-1}\right)\parallel {T}_{n}{x}_{n-1}-{T}_{n-1}{x}_{n-1}\parallel ,\hfill \end{array}$

Moreover,

$\begin{array}{c}\parallel {T}_{n}{x}_{n-1}-{T}_{n-1}{x}_{n-1}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}=\parallel \frac{4{P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n}}\right)-\left[2-\gamma \left(L+{\alpha }_{n}\right)\right]I}{2+\gamma \left(L+{\alpha }_{n}\right)}{x}_{n-1}\hfill \\ \phantom{\rule{2em}{0ex}}-\frac{4{P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n-1}}\right)-\left[2-\gamma \left(L+{\alpha }_{n-1}\right)\right]I}{2+\gamma \left(L+{\alpha }_{n-1}\right)}{x}_{n-1}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \parallel \frac{4{P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n}}\right)}{2+\gamma \left(L+{\alpha }_{n}\right)}{x}_{n-1}-\frac{4{P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n-1}}\right)}{2+\gamma \left(L+{\alpha }_{n-1}\right)}{x}_{n-1}\parallel \hfill \\ \phantom{\rule{2em}{0ex}}+\parallel \frac{-\left[2-\gamma \left(L+{\alpha }_{n}\right)\right]}{2+\gamma \left(L+{\alpha }_{n}\right)}{x}_{n-1}+\frac{\left[2-\gamma \left(L+{\alpha }_{n-1}\right)\right]}{2+\gamma \left(L+{\alpha }_{n-1}\right)}{x}_{n-1}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}=\parallel \frac{4\left[2+\gamma \left(L+{\alpha }_{n-1}\right)\right]{P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n}}\right){x}_{n-1}-4\left[2+\gamma \left(L+{\alpha }_{n}\right)\right]{P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n-1}}\right){x}_{n-1}}{\left[2+\gamma \left(L+{\alpha }_{n}\right)\right]\left[2+\gamma \left(L+{\alpha }_{n-1}\right)\right]}\parallel \hfill \\ \phantom{\rule{2em}{0ex}}+\frac{4\gamma |{\alpha }_{n}-{\alpha }_{n-1}|\parallel {x}_{n-1}\parallel }{\left[2+\gamma \left(L+{\alpha }_{n}\right)\right]\left[2+\gamma \left(L+{\alpha }_{n-1}\right)\right]}\hfill \\ \phantom{\rule{1em}{0ex}}=\parallel \frac{4\gamma \left({\alpha }_{n-1}-{\alpha }_{n}\right){P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n}}\right){x}_{n-1}}{\left[2+\gamma \left(L+{\alpha }_{n}\right)\right]\left[2+\gamma \left(L+{\alpha }_{n-1}\right)\right]}\hfill \\ \phantom{\rule{2em}{0ex}}+\frac{4\left(2+\gamma \left(L+{\alpha }_{n}\right)\right)\left[{P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n}}\right){x}_{n-1}-{P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n-1}}\right){x}_{n-1}\right]}{\left[2+\gamma \left(L+{\alpha }_{n}\right)\right]\left[2+\gamma \left(L+{\alpha }_{n-1}\right)\right]}\parallel \hfill \\ \phantom{\rule{2em}{0ex}}+\frac{4\gamma |{\alpha }_{n}-{\alpha }_{n-1}|\parallel {x}_{n-1}\parallel }{\left[2+\gamma \left(L+{\alpha }_{n}\right)\right]\left[2+\gamma \left(L+{\alpha }_{n-1}\right)\right]}\hfill \\ \phantom{\rule{1em}{0ex}}\le \frac{4\gamma |{\alpha }_{n-1}-{\alpha }_{n}|\parallel {P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n}}\right){x}_{n-1}\parallel }{\left[2+\gamma \left(L+{\alpha }_{n}\right)\right]\left[2+\gamma \left(L+{\alpha }_{n-1}\right)\right]}\hfill \\ \phantom{\rule{2em}{0ex}}+\frac{4\gamma |{\alpha }_{n-1}-{\alpha }_{n}|\left[2+\gamma \left(L+{\alpha }_{n}\right)\right]\parallel {x}_{n-1}\parallel }{\left[2+\gamma \left(L+{\alpha }_{n}\right)\right]\left[2+\gamma \left(L+{\alpha }_{n-1}\right)\right]}\hfill \\ \phantom{\rule{2em}{0ex}}+\frac{4\gamma |{\alpha }_{n}-{\alpha }_{n-1}|\parallel {x}_{n-1}\parallel }{\left[2+\gamma \left(L+{\alpha }_{n}\right)\right]\left[2+\gamma \left(L+{\alpha }_{n-1}\right)\right]}\hfill \\ \phantom{\rule{1em}{0ex}}\le |{\alpha }_{n-1}-{\alpha }_{n}|\left[\gamma \parallel {P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n}}\right){x}_{n-1}\parallel +5\gamma \parallel {x}_{n-1}\parallel \right]\hfill \\ \phantom{\rule{1em}{0ex}}\le {M}_{3}|{\alpha }_{n-1}-{\alpha }_{n}|.\hfill \end{array}$

So,

$\parallel {x}_{n+1}-{x}_{n}\parallel \le \left[1-\left(1-\rho \right){\theta }_{n}\right]\parallel {x}_{n}-{x}_{n-1}\parallel +{M}_{2}|{\theta }_{n}-{\theta }_{n-1}|+{M}_{3}|{\alpha }_{n-1}-{\alpha }_{n}|,$

so that, by Lemma 2.5,

$\parallel {x}_{n+1}-{x}_{n}\parallel \to 0.$
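
As a quick aside, the convergence mechanism used in this step can be observed numerically. The sketch below is illustrative only: the constants ρ, M₂, M₃, the starting value, and the choices ${\theta }_{n}=1/n$, ${\alpha }_{n}=1/{n}^{2}$ are all hypothetical; the recursion is iterated with the inequality treated as an equality, its worst case.

```python
# Worst case of the recursion from step (2), taken with equality:
#   a_{n+1} = (1 - (1 - rho) * theta_n) * a_n
#             + M2 * |theta_n - theta_{n-1}| + M3 * |alpha_{n-1} - alpha_n|
# All constants are hypothetical. theta_n = 1/n is non-summable, while the
# difference terms are summable, so Lemma 2.5-type reasoning forces a_n -> 0.
rho, M2, M3 = 0.5, 10.0, 10.0
a = 5.0                              # a_1: a made-up value of ||x_1 - x_0||
theta_prev, alpha_prev = 1.0, 1.0    # theta_1, alpha_1
for n in range(2, 200001):
    theta, alpha = 1.0 / n, 1.0 / n ** 2
    a = ((1 - (1 - rho) * theta) * a
         + M2 * abs(theta - theta_prev) + M3 * abs(alpha_prev - alpha))
    theta_prev, alpha_prev = theta, alpha
# a has decayed close to 0, mirroring ||x_{n+1} - x_n|| -> 0
```

Replacing ${\theta }_{n}=1/n$ by a summable sequence (so that ${\sum }_{n}{\theta }_{n}<\mathrm{\infty }$) leaves $a$ bounded away from zero, which is why a condition of the type ${\sum }_{n=0}^{\mathrm{\infty }}{\theta }_{n}=\mathrm{\infty }$ is imposed.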

(3) Next we show that $\parallel {x}_{n}-{T}_{n}{x}_{n}\parallel \to 0$.

Indeed, it follows that

$\begin{array}{rcl}\parallel {x}_{n}-{T}_{n}{x}_{n}\parallel & \le & \parallel {x}_{n}-{x}_{n+1}\parallel +\parallel {x}_{n+1}-{T}_{n}{x}_{n}\parallel \\ & =& \parallel {x}_{n}-{x}_{n+1}\parallel +\parallel {P}_{C}\left[{\theta }_{n}h\left({x}_{n}\right)+\left(1-{\theta }_{n}\right){T}_{n}\left({x}_{n}\right)\right]-{P}_{C}{T}_{n}\left({x}_{n}\right)\parallel \\ & \le & \parallel {x}_{n}-{x}_{n+1}\parallel +{\theta }_{n}\parallel h\left({x}_{n}\right)-{T}_{n}{x}_{n}\parallel \to 0.\end{array}$

Now we show that

$\underset{n\to \mathrm{\infty }}{lim sup}〈h\left({x}^{\ast }\right)-{x}^{\ast },{x}_{n}-{x}^{\ast }〉\le 0.$

Let $\left\{{x}_{{n}_{k}}\right\}$ be a subsequence of $\left\{{x}_{n}\right\}$ such that ${x}_{{n}_{k}}⇀\stackrel{˜}{x}$, and observe that

$\begin{array}{rcl}\parallel {P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n}}\right){x}_{n}-{x}_{n}\parallel & =& \parallel {\lambda }_{n}{x}_{n}+\left(1-{\lambda }_{n}\right){T}_{n}{x}_{n}-{x}_{n}\parallel \\ & =& \left(1-{\lambda }_{n}\right)\parallel {T}_{n}{x}_{n}-{x}_{n}\parallel \\ & \le & \parallel {T}_{n}{x}_{n}-{x}_{n}\parallel ,\end{array}$

hence we have

$\begin{array}{c}\parallel {P}_{C}\left(I-\gamma \mathrm{\nabla }f\right){x}_{n}-{x}_{n}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \parallel {P}_{C}\left(I-\gamma \mathrm{\nabla }f\right){x}_{n}-{P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n}}\right){x}_{n}\parallel +\parallel {P}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n}}\right){x}_{n}-{x}_{n}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \gamma {\alpha }_{n}\parallel {x}_{n}\parallel +\parallel {T}_{n}{x}_{n}-{x}_{n}\parallel \to 0.\hfill \end{array}$

So

$\underset{n\to \mathrm{\infty }}{lim}\parallel {P}_{C}\left(I-\gamma \mathrm{\nabla }f\right){x}_{n}-{x}_{n}\parallel =0,$

By Lemma 2.4, it follows that

$\stackrel{˜}{x}={P}_{C}\left(I-\gamma \mathrm{\nabla }f\right)\left(\stackrel{˜}{x}\right).$

Thus $\stackrel{˜}{x}\in S$. Since ${x}^{\ast }={Proj}_{S}h\left({x}^{\ast }\right)$, we have $〈h\left({x}^{\ast }\right)-{x}^{\ast },\stackrel{˜}{x}-{x}^{\ast }〉\le 0$, and consequently

$\underset{n\to \mathrm{\infty }}{lim sup}〈h\left({x}^{\ast }\right)-{x}^{\ast },{x}_{n}-{x}^{\ast }〉\le 0.$

It follows that

$\begin{array}{c}{\parallel {x}_{n+1}-{x}^{\ast }\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}={\parallel {P}_{C}\left[{\theta }_{n}h\left({x}_{n}\right)+\left(1-{\theta }_{n}\right){T}_{n}\left({x}_{n}\right)\right]-{P}_{C}{x}^{\ast }\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}\le {\parallel {\theta }_{n}\left(h\left({x}_{n}\right)-{x}^{\ast }\right)+\left(1-{\theta }_{n}\right)\left({T}_{n}{x}_{n}-T{x}^{\ast }\right)\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}={\parallel {\theta }_{n}\left(h\left({x}_{n}\right)-h\left({x}^{\ast }\right)\right)+\left(1-{\theta }_{n}\right)\left({T}_{n}{x}_{n}-T{x}^{\ast }\right)+{\theta }_{n}\left(h\left({x}^{\ast }\right)-{x}^{\ast }\right)\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}\le {\parallel {\theta }_{n}\left(h\left({x}_{n}\right)-h\left({x}^{\ast }\right)\right)+\left(1-{\theta }_{n}\right)\left({T}_{n}{x}_{n}-T{x}^{\ast }\right)\parallel }^{2}+2{\theta }_{n}〈h\left({x}^{\ast }\right)-{x}^{\ast },{x}_{n+1}-{x}^{\ast }〉\hfill \\ \phantom{\rule{1em}{0ex}}\le {\theta }_{n}{\parallel h\left({x}_{n}\right)-h\left({x}^{\ast }\right)\parallel }^{2}+\left(1-{\theta }_{n}\right){\parallel \left({T}_{n}{x}_{n}-{x}^{\ast }\right)\parallel }^{2}+2{\theta }_{n}〈h\left({x}^{\ast }\right)-{x}^{\ast },{x}_{n+1}-{x}^{\ast }〉\hfill \\ \phantom{\rule{1em}{0ex}}\le {\theta }_{n}{\rho }^{2}{\parallel {x}_{n}-{x}^{\ast }\parallel }^{2}+\left(1-{\theta }_{n}\right){\parallel \left({T}_{n}{x}_{n}-{T}_{n}{x}^{\ast }+{T}_{n}{x}^{\ast }-T{x}^{\ast }\right)\parallel }^{2}\hfill \\ \phantom{\rule{2em}{0ex}}+2{\theta }_{n}〈h\left({x}^{\ast }\right)-{x}^{\ast },{x}_{n+1}-{x}^{\ast }〉\hfill \\ \phantom{\rule{1em}{0ex}}\le {\theta }_{n}{\rho }^{2}{\parallel {x}_{n}-{x}^{\ast }\parallel }^{2}+\left(1-{\theta }_{n}\right){\left[\parallel {x}_{n}-{x}^{\ast }\parallel +{\alpha }_{n}M\left({x}^{\ast }\right)\right]}^{2}+2{\theta }_{n}〈h\left({x}^{\ast }\right)-{x}^{\ast },{x}_{n+1}-{x}^{\ast }〉\hfill \\ \phantom{\rule{1em}{0ex}}\le \left[1-\left(1-\rho \right){\theta }_{n}\right]{\parallel {x}_{n}-{x}^{\ast }\parallel }^{2}+\left(1-{\theta }_{n}\right)\left[2{\alpha }_{n}M\left({x}^{\ast }\right)\parallel {x}_{n}-{x}^{\ast }\parallel +{\alpha }_{n}^{2}{\left(M\left({x}^{\ast }\right)\right)}^{2}\right]\hfill \\ \phantom{\rule{2em}{0ex}}+2{\theta }_{n}〈h\left({x}^{\ast }\right)-{x}^{\ast },{x}_{n+1}-{x}^{\ast }〉.\hfill \end{array}$

Hence,

$\begin{array}{c}{\parallel {x}_{n+1}-{x}^{\ast }\parallel }^{2}\le \left(1-{\beta }_{n}\right){\parallel {x}_{n}-{x}^{\ast }\parallel }^{2}+{\beta }_{n}{\delta }_{n},\hfill \\ {\beta }_{n}=\left(1-\rho \right){\theta }_{n},\hfill \\ {\delta }_{n}=\frac{1}{1-\rho }\left[\frac{{\alpha }_{n}}{{\theta }_{n}}\left(1-{\theta }_{n}\right)2M\left({x}^{\ast }\right)\parallel {x}_{n}-{x}^{\ast }\parallel +\left(1-{\theta }_{n}\right){\left(M\left({x}^{\ast }\right)\right)}^{2}\frac{{\alpha }_{n}^{2}}{{\theta }_{n}}\hfill \\ \phantom{{\delta }_{n}=}+2〈h\left({x}^{\ast }\right)-{x}^{\ast },{x}_{n+1}-{x}^{\ast }〉\right],\hfill \end{array}$
(17)

Since ${lim}_{n\to \mathrm{\infty }}{\beta }_{n}=0$, ${\sum }_{n=0}^{\mathrm{\infty }}{\beta }_{n}=\mathrm{\infty }$ and, by ${\alpha }_{n}=o\left({\theta }_{n}\right)$ together with the inequality above, ${lim sup}_{n\to \mathrm{\infty }}{\delta }_{n}\le 0$, Lemma 2.5 yields ${x}_{n}\to {x}^{\ast }$. □

## 4 Application of the iterative method

Next, we give an application of Theorem 3.2 to the split feasibility problem (SFP for short), which was introduced by Censor and Elfving . The SFP is to find a point ${x}^{\ast }$ such that

${x}^{\ast }\in C\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}A{x}^{\ast }\in Q,$
(18)

where C and Q are nonempty closed convex subsets of Hilbert spaces ${H}_{1}$ and ${H}_{2}$, respectively. $A:{H}_{1}\to {H}_{2}$ is a bounded linear operator.

It is clear that ${x}^{\ast }$ is a solution of split feasibility problem (18) if and only if ${x}^{\ast }\in C$ and $A{x}^{\ast }-{P}_{Q}A{x}^{\ast }=0$.

We define the proximity function f by

$f\left(x\right)=\frac{1}{2}{\parallel Ax-{P}_{Q}Ax\parallel }^{2},$

and consider the convex optimization problem

$\underset{x\in C}{min}f\left(x\right)=\underset{x\in C}{min}\frac{1}{2}{\parallel Ax-{P}_{Q}Ax\parallel }^{2}.$
(19)

Then ${x}^{\ast }$ solves split feasibility problem (18) if and only if ${x}^{\ast }$ solves minimization problem (19) with the minimum value equal to 0. Byrne  introduced the so-called CQ algorithm to solve the SFP:

${x}_{n+1}={P}_{C}\left(I-\mu {A}^{\ast }\left(I-{P}_{Q}\right)A\right){x}_{n},\phantom{\rule{1em}{0ex}}n\ge 0,$
(20)

where $0<\mu <\frac{2}{\parallel {A}^{\ast }A\parallel }=\frac{2}{{\parallel A\parallel }^{2}}$.

He proved that the sequence $\left\{{x}_{n}\right\}$ generated by (20) converges weakly to a solution of the SFP.
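
In finite dimensions, the CQ iteration (20) is straightforward to run whenever ${P}_{C}$ and ${P}_{Q}$ have closed forms. The following Python sketch (using numpy) is purely illustrative: the box sets $C={\left[-1,1\right]}^{2}$ and $Q={\left[-0.5,0.5\right]}^{3}$, the matrix $A$, and the starting point are hypothetical choices, not data from the text.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])                  # hypothetical bounded linear operator
proj_C = lambda x: np.clip(x, -1.0, 1.0)    # P_C for the box C = [-1, 1]^2
proj_Q = lambda y: np.clip(y, -0.5, 0.5)    # P_Q for the box Q = [-0.5, 0.5]^3

L = np.linalg.norm(A, 2) ** 2               # ||A||^2 (largest singular value squared)
mu = 1.0 / L                                # step size, 0 < mu < 2 / ||A||^2

x = np.array([2.0, -2.0])
for _ in range(200):
    Ax = A @ x
    # x_{n+1} = P_C( x_n - mu * A^T (I - P_Q) A x_n )
    x = proj_C(x - mu * A.T @ (Ax - proj_Q(Ax)))

residual = np.linalg.norm(A @ x - proj_Q(A @ x))   # 0 exactly when Ax lies in Q
```

For this consistent toy problem the residual drops to numerical zero, i.e., the limit satisfies $x\in C$ and $Ax\in Q$.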

Now we consider the regularization technique. Let

${f}_{\alpha }\left(x\right)=\frac{1}{2}{\parallel Ax-{P}_{Q}Ax\parallel }^{2}+\frac{\alpha }{2}{\parallel x\parallel }^{2},$

Then we propose the following iterative scheme:

${x}_{n+1}={P}_{C}\left[{\theta }_{n}h\left({x}_{n}\right)+\left(1-{\theta }_{n}\right){T}_{n}{x}_{n}\right],\phantom{\rule{1em}{0ex}}n\ge 0,$

where $h:C\to {H}_{1}$ is a contraction with coefficient $\rho \in \left(0,1\right)$,

$\begin{array}{c}{P}_{C}\left[I-\mu \left({A}^{\ast }\left(I-{P}_{Q}\right)A+{\alpha }_{n}I\right)\right]={\lambda }_{n}I+\left(1-{\lambda }_{n}\right){T}_{n},\hfill \\ {\lambda }_{n}=\frac{2-\mu \left({\parallel A\parallel }^{2}+{\alpha }_{n}\right)}{4}.\hfill \end{array}$

Applying Theorem 3.2, we obtain the following result.

Theorem 4.1 Assume that split feasibility problem (18) is consistent, and let the sequence $\left\{{x}_{n}\right\}$ be generated by

${x}_{n+1}={P}_{C}\left[{\theta }_{n}h\left({x}_{n}\right)+\left(1-{\theta }_{n}\right){T}_{n}{x}_{n}\right],\phantom{\rule{1em}{0ex}}n\ge 0,$

where

$\begin{array}{c}{P}_{C}\left[I-\mu \left({A}^{\ast }\left(I-{P}_{Q}\right)A+{\alpha }_{n}I\right)\right]={\lambda }_{n}I+\left(1-{\lambda }_{n}\right){T}_{n},\hfill \\ {\lambda }_{n}=\frac{2-\mu \left({\parallel A\parallel }^{2}+{\alpha }_{n}\right)}{4},\hfill \end{array}$

and the sequences $\left\{{\theta }_{n}\right\}\subset \left(0,1\right)$ and $\left\{{\alpha }_{n}\right\}$ satisfy the following conditions:

(i) ${\theta }_{n}\to 0$; $0<\mu <\frac{2}{\parallel {A}^{\ast }A\parallel }=\frac{2}{{\parallel A\parallel }^{2}}$;

(ii) ${\sum }_{n=0}^{\mathrm{\infty }}{\theta }_{n}=\mathrm{\infty }$;

(iii) ${\sum }_{n=0}^{\mathrm{\infty }}|{\theta }_{n+1}-{\theta }_{n}|<\mathrm{\infty }$;

(iv) ${\sum }_{n=0}^{\mathrm{\infty }}|{\alpha }_{n+1}-{\alpha }_{n}|<\mathrm{\infty }$;

(v) ${\alpha }_{n}=o\left({\theta }_{n}\right)$.

Then the sequence $\left\{{x}_{n}\right\}$ converges strongly to the solution of split feasibility problem (18).

Proof By the definition of the proximity function f, we have

$\mathrm{\nabla }f\left(x\right)={A}^{\ast }\left(I-{Proj}_{Q}\right)Ax,$

and $\mathrm{\nabla }f$ is $1/{\parallel A\parallel }^{2}$-ism. Indeed, since ${Proj}_{Q}$ is a $1/2$-averaged mapping, $I-{Proj}_{Q}$ is 1-ism, so that

$\begin{array}{c}〈\mathrm{\nabla }f\left(x\right)-\mathrm{\nabla }f\left(y\right),x-y〉-1/{\parallel A\parallel }^{2}\cdot {\parallel \mathrm{\nabla }f\left(x\right)-\mathrm{\nabla }f\left(y\right)\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}=〈{A}^{\ast }\left(I-{Proj}_{Q}\right)Ax-{A}^{\ast }\left(I-{Proj}_{Q}\right)Ay,x-y〉\hfill \\ \phantom{\rule{2em}{0ex}}-1/{\parallel A\parallel }^{2}\cdot {\parallel {A}^{\ast }\left(I-{Proj}_{Q}\right)Ax-{A}^{\ast }\left(I-{Proj}_{Q}\right)Ay\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}=〈{A}^{\ast }\left[\left(I-{Proj}_{Q}\right)Ax-\left(I-{Proj}_{Q}\right)Ay\right],x-y〉\hfill \\ \phantom{\rule{2em}{0ex}}-1/{\parallel A\parallel }^{2}\cdot {\parallel {A}^{\ast }\left[\left(I-{Proj}_{Q}\right)Ax-\left(I-{Proj}_{Q}\right)Ay\right]\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}=〈\left(I-{Proj}_{Q}\right)Ax-\left(I-{Proj}_{Q}\right)Ay,Ax-Ay〉\hfill \\ \phantom{\rule{2em}{0ex}}-1/{\parallel A\parallel }^{2}\cdot {\parallel {A}^{\ast }\left[\left(I-{Proj}_{Q}\right)Ax-\left(I-{Proj}_{Q}\right)Ay\right]\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}\ge {\parallel \left(I-{Proj}_{Q}\right)Ax-\left(I-{Proj}_{Q}\right)Ay\parallel }^{2}\hfill \\ \phantom{\rule{2em}{0ex}}-{\parallel \left(I-{Proj}_{Q}\right)Ax-\left(I-{Proj}_{Q}\right)Ay\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}=0.\hfill \end{array}$

Hence, $〈\mathrm{\nabla }f\left(x\right)-\mathrm{\nabla }f\left(y\right),x-y〉\ge 1/{\parallel A\parallel }^{2}\cdot {\parallel \mathrm{\nabla }f\left(x\right)-\mathrm{\nabla }f\left(y\right)\parallel }^{2}$.
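
As an informal numerical cross-check of this inequality (an aside, not part of the proof; the matrix, the box $Q$, and the sampling below are arbitrary illustrative choices), one can evaluate both sides at random pairs of points:

```python
import numpy as np

# Check <grad f(x) - grad f(y), x - y> >= (1/||A||^2) ||grad f(x) - grad f(y)||^2
# for grad f(x) = A^T (I - P_Q) A x with a box Q; all concrete data is made up.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
proj_Q = lambda y: np.clip(y, -1.0, 1.0)
grad_f = lambda x: A.T @ (A @ x - proj_Q(A @ x))

L = np.linalg.norm(A, 2) ** 2
ok = True
for _ in range(1000):
    x = 3.0 * rng.standard_normal(3)
    y = 3.0 * rng.standard_normal(3)
    g = grad_f(x) - grad_f(y)
    # small slack for floating-point rounding
    ok = ok and (g @ (x - y) >= (g @ g) / L - 1e-9)
```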

Set ${f}_{\alpha }\left(x\right)=f\left(x\right)+\frac{\alpha }{2}{\parallel x\parallel }^{2}$; consequently,

$\begin{array}{rcl}\mathrm{\nabla }{f}_{\alpha }\left(x\right)& =& \mathrm{\nabla }f\left(x\right)+\alpha I\left(x\right)\\ =& {A}^{\ast }\left(I-{Proj}_{Q}\right)Ax+\alpha x.\end{array}$

Let $\gamma =\mu$ and $L={\parallel A\parallel }^{2}$; then the iterative scheme is equivalent to

${x}_{n+1}={P}_{C}\left[{\theta }_{n}h\left({x}_{n}\right)+\left(1-{\theta }_{n}\right){T}_{n}{x}_{n}\right],\phantom{\rule{1em}{0ex}}n\ge 0,$

where $0<\gamma <\frac{2}{L}$, ${P}_{C}\left[I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n}}\right]={\lambda }_{n}I+\left(1-{\lambda }_{n}\right){T}_{n}$ and ${\lambda }_{n}=\frac{2-\gamma \left(L+{\alpha }_{n}\right)}{4}$.

The conclusion now follows immediately from Theorem 3.2. □
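
To see Theorem 4.1 at work on a small example, the scheme can be run in finite dimensions. Everything concrete below is a hypothetical choice (box sets $C$ and $Q$, the matrix $A$, the contraction $h\left(x\right)=x/2$, and ${\theta }_{n}=1/\left(n+1\right)$, ${\alpha }_{n}=1/{\left(n+1\right)}^{2}$, which satisfy (i)-(v)); ${T}_{n}$ is recovered from its defining relation as ${T}_{n}=\left({P}_{C}\left[I-\mu \left({A}^{\ast }\left(I-{P}_{Q}\right)A+{\alpha }_{n}I\right)\right]-{\lambda }_{n}I\right)/\left(1-{\lambda }_{n}\right)$:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])                  # hypothetical operator
proj_C = lambda x: np.clip(x, -1.0, 1.0)    # P_C, C = [-1, 1]^2
proj_Q = lambda y: np.clip(y, -0.5, 0.5)    # P_Q, Q = [-0.5, 0.5]^3
h = lambda x: 0.5 * x                       # contraction with coefficient rho = 1/2

normA2 = np.linalg.norm(A, 2) ** 2          # ||A||^2
mu = 1.0 / normA2                           # 0 < mu < 2 / ||A||^2

x = np.array([2.0, -2.0])
for n in range(1, 5001):
    theta = 1.0 / (n + 1)                   # theta_n -> 0, sum theta_n = infinity
    alpha = 1.0 / (n + 1) ** 2              # alpha_n = o(theta_n), summable differences
    lam = (2.0 - mu * (normA2 + alpha)) / 4.0                     # lambda_n
    Ax = A @ x
    Gx = proj_C(x - mu * (A.T @ (Ax - proj_Q(Ax)) + alpha * x))   # P_C(I - mu grad f_alpha) x
    Tx = (Gx - lam * x) / (1.0 - lam)       # T_n x from the defining relation
    x = proj_C(theta * h(x) + (1.0 - theta) * Tx)

residual = np.linalg.norm(A @ x - proj_Q(A @ x))   # 0 exactly when Ax lies in Q
```

With $h\left(x\right)=x/2$, the variational inequality selects the minimum-norm solution (here the origin): in this run the iterates quickly enter the solution set, after which the residual is zero and the iterates drift slowly toward 0.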

## Author’s contributions

The author read and approved the final manuscript.

## References

1. Levitin ES, Polyak BT: Constrained minimization methods. Zh. Vychisl. Mat. Mat. Fiz. 1966, 6: 787–823.

2. Calamai PH, More JJ: Projected gradient methods for linearly constrained problems. Math. Program. 1987, 39: 93–116. 10.1007/BF02592073

3. Polyak BT: Introduction to Optimization. Optimization Software, New York; 1987.

4. Su M, Xu HK: Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 2010, 1: 35–43.

5. Moudafi A: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241: 46–55. 10.1006/jmaa.1999.6615

6. Xu HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. 10.1016/j.jmaa.2004.04.059

7. Ceng LC, Ansari QH, Yao JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. TMA 2011, 74: 5286–5302. 10.1016/j.na.2011.05.005

8. Marino G, Xu HK: Convergence of generalized proximal point algorithm. Commun. Pure Appl. Anal. 2004, 3: 791–808.

9. Marino G, Xu HK: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2006, 318: 43–52. 10.1016/j.jmaa.2005.05.028

10. Tian M: A general iterative algorithm for nonexpansive mappings in Hilbert spaces. Nonlinear Anal. TMA 2010, 73: 689–694. 10.1016/j.na.2010.03.058

11. Yao Y, Liou YC, Chen CP: Algorithms construction for nonexpansive mappings and inverse-strongly monotone mapping. Taiwan. J. Math. 2011, 15: 1979–1998.

12. Yao Y, Chen R, Liou Y-C: A unified implicit algorithm for solving the triple-hierarchical constrained optimization problem. Math. Comput. Model. 2012, 55: 1506–1515. 10.1016/j.mcm.2011.10.041

13. Yao Y, Liou Y-C, Kang SM: Two-step projection methods for a system of variational inequality problems in Banach spaces. J. Glob. Optim. 2013. 10.1007/s10898-011-9804-0

14. Wiyada K, Praairat J, Poom K: Generalized systems of variational inequalities and projection methods for inverse-strongly monotone mapping. Discrete Dyn. Nat. Soc. 2011. 10.1155/2011/976505

15. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692

16. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006

17. Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problem. Inverse Probl. 2005, 21: 2071–2084. 10.1088/0266-5611/21/6/017

18. Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365. 10.1088/0031-9155/51/10/001

19. Xu HK: A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021–2034. 10.1088/0266-5611/22/6/007

20. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert space. Inverse Probl. 2010., 26: Article ID 105018

21. Lopez G, Martin V, Xu HK: Perturbation techniques for nonexpansive mapping with applications. Nonlinear Anal., Real World Appl. 2009, 10: 2369–2383. 10.1016/j.nonrwa.2008.04.020

22. Lopez G, Martin V, Xu HK: Iterative algorithms for the multiple-sets split feasibility problem. In Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems. Edited by: Censor Y, Jiang M, Wang G. Medical Physica Publishing, Madison; 2009:243–279.

23. Xu HK: Averaged mapping and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. 10.1007/s10957-011-9837-z

24. Xu HK: A regularization method for the proximal point algorithm. J. Glob. Optim. 2006, 36: 115–125. 10.1007/s10898-006-9002-7

25. Cho YJ, Petrot N: Regularization and iterative method for general variational inequality problem in Hilbert spaces. J. Inequal. Appl. 2011., 2011: Article ID 21

26. Combettes PL: Solving monotone inclusions via composition of nonexpansive averaged operators. Optimization 2004, 53(5–6):475–504. 10.1080/02331930412331327157

27. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990.

## Acknowledgements

The author thanks the referees for their helpful comments, which notably improved the presentation of this manuscript. This work was supported by the Fundamental Research Funds for the Central Universities (the Special Fund of Science in Civil Aviation University of China: No. 3122013K004).

## Author information


### Corresponding author

Correspondence to Ming Tian.

### Competing interests

The author declares that they have no competing interests.

## Rights and permissions


Tian, M. General iterative scheme based on the regularization for solving a constrained convex minimization problem. J Inequal Appl 2013, 550 (2013). https://doi.org/10.1186/1029-242X-2013-550


### Keywords

• averaged mapping 