# Regularized gradient-projection methods for equilibrium and constrained convex minimization problems

## Abstract

In this article, based on Marino and Xu’s method, an iterative method which combines the regularized gradient-projection algorithm (RGPA) and the averaged mappings approach is proposed for finding a common solution of equilibrium and constrained convex minimization problems. Under suitable conditions, it is proved that the sequences generated by implicit and explicit schemes converge strongly. The results of this paper extend and improve some existing results.

MSC:58E35, 47H09, 65J15.

## 1 Introduction

Let H be a real Hilbert space with the inner product $〈\cdot ,\cdot 〉$ and the induced norm $\parallel \cdot \parallel$. Let C be a nonempty, closed and convex subset of H. We need some nonlinear operators which are introduced below.

Let $T,A:H\to H$ be nonlinear operators.

• T is nonexpansive if $\parallel Tx-Ty\parallel \le \parallel x-y\parallel$ for all $x,y\in H$.

• T is Lipschitz continuous if there exists a constant $L>0$ such that $\parallel Tx-Ty\parallel \le L\parallel x-y\parallel$ for all $x,y\in H$.

• A is a strongly positive bounded linear operator if there exists a constant $\overline{\gamma }>0$ such that $〈Ax,x〉\ge \overline{\gamma }{\parallel x\parallel }^{2}$ for all $x\in H$.

• $A:H\to H$ is monotone if $〈x-y,Ax-Ay〉\ge 0$ for all $x,y\in H$.

• Given a number $\eta >0$, $A:H\to H$ is η-strongly monotone if $〈x-y,Ax-Ay〉\ge \eta {\parallel x-y\parallel }^{2}$ for all $x,y\in H$.

• Given a number $\upsilon >0$, $A:H\to H$ is υ-inverse strongly monotone (υ-ism) if $〈x-y,Ax-Ay〉\ge \upsilon {\parallel Ax-Ay\parallel }^{2}$ for all $x,y\in H$.

Inverse strongly monotone operators have been studied widely and applied to solve practical problems in various fields, for instance, traffic assignment problems (see [4, 5]).

• T is firmly nonexpansive if and only if $2T-I$ is nonexpansive or, equivalently, $〈x-y,Tx-Ty〉\ge {\parallel Tx-Ty\parallel }^{2}$ for all $x,y\in H$.

• $T:H\to H$ is said to be an averaged mapping if $T=\left(1-\alpha \right)I+\alpha S$, where α is a number in $\left(0,1\right)$ and $S:H\to H$ is nonexpansive. In particular, projections are $\left(1/2\right)$-averaged mappings.

Averaged mappings have been investigated extensively.

Let ϕ be a bifunction of $C×C$ into $\mathbb{R}$, where $\mathbb{R}$ is the set of real numbers. The equilibrium problem for $\varphi :C×C\to \mathbb{R}$ is to find $x\in C$ such that

$\varphi \left(x,y\right)\ge 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C.$
(1.1)

The set of solutions of (1.1) is denoted by $EP\left(\varphi \right)$. Given a mapping $T:C\to H$, let $\varphi \left(x,y\right)=〈Tx,y-x〉$ for all $x,y\in C$. Then $z\in EP\left(\varphi \right)$ if and only if $〈Tz,y-z〉\ge 0$ for all $y\in C$, i.e., z is a solution of the variational inequality. Numerous problems in physics, optimization and economics reduce to finding a solution of (1.1), and several methods have been proposed to solve the equilibrium problem.

In 2000, Moudafi introduced the viscosity approximation method for nonexpansive mappings. Let h be a contraction on H. Starting with an arbitrary initial ${x}_{0}\in H$, define a sequence $\left\{{x}_{n}\right\}$ recursively by

${x}_{n+1}={\alpha }_{n}h\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right)T{x}_{n},\phantom{\rule{1em}{0ex}}n\ge 0,$
(1.2)

where $\left\{{\alpha }_{n}\right\}$ is a sequence in $\left(0,1\right)$. Xu  proved that under certain conditions on $\left\{{\alpha }_{n}\right\}$, the sequence $\left\{{x}_{n}\right\}$ generated by (1.2) converges strongly to the unique solution ${x}^{\ast }\in F\left(T\right)$ of the variational inequality

$〈\left(I-h\right){x}^{\ast },x-{x}^{\ast }〉\ge 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in F\left(T\right).$

We use $F\left(T\right)$ to denote the set of fixed points of the mapping T; that is, $F\left(T\right)=\left\{x\in H:x=Tx\right\}$.
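As a minimal numerical sketch of the viscosity scheme (1.2), take $H={\mathbb{R}}^{2}$, let T be the metric projection onto the closed unit ball (so $F\left(T\right)$ is the ball itself), and let h be a constant map, a contraction with coefficient 0. All data here are illustrative and not from the paper; the limit should be the projection of h's value $\left(3,0\right)$ onto $F\left(T\right)$, namely $\left(1,0\right)$.

```python
import numpy as np

def proj_unit_ball(x):
    """Metric projection onto the closed unit ball: a nonexpansive
    mapping whose fixed-point set F(T) is the ball itself."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def viscosity(h, T, x0, steps=5000):
    """Moudafi's iteration x_{n+1} = a_n h(x_n) + (1 - a_n) T x_n
    with a_n = 1/(n+1), which satisfies a_n -> 0 and sum a_n = infinity."""
    x = np.asarray(x0, dtype=float)
    for n in range(steps):
        a = 1.0 / (n + 1)
        x = a * h(x) + (1 - a) * T(x)
    return x

# h: the constant map x -> (3, 0), a contraction with coefficient 0.
h = lambda x: np.array([3.0, 0.0])
x_star = viscosity(h, proj_unit_ball, [5.0, 4.0])
print(x_star)  # close to (1, 0), the projection of (3, 0) onto F(T)
```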

In 2006, Marino and Xu  introduced a general iterative method for nonexpansive mappings. Let h be a contraction on H with a coefficient $\rho \in \left(0,1\right)$, and let A be a strongly positive bounded linear operator on H with a constant $\overline{\gamma }>0$. Starting with an arbitrary initial guess ${x}_{0}\in H$, define a sequence $\left\{{x}_{n}\right\}$ recursively by

${x}_{n+1}={\alpha }_{n}\gamma h\left({x}_{n}\right)+\left(I-{\alpha }_{n}A\right)T{x}_{n},\phantom{\rule{1em}{0ex}}n\ge 0,$
(1.3)

where $\left\{{\alpha }_{n}\right\}$ is a sequence in $\left(0,1\right)$, and $0<\gamma <\overline{\gamma }/\rho$ is a constant. It is proved that the sequence $\left\{{x}_{n}\right\}$ converges strongly to the unique solution ${x}^{\ast }\in F\left(T\right)$ of the variational inequality

$〈\left(\gamma h-A\right){x}^{\ast },x-{x}^{\ast }〉\le 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in F\left(T\right).$

For finding the common solution of $EP\left(\varphi \right)$ and a fixed point problem, Takahashi and Takahashi  introduced the following iterative scheme by the viscosity approximation method in a Hilbert space: ${x}_{1}\in H$ and

$\left\{\begin{array}{c}\varphi \left({u}_{n},y\right)+\frac{1}{{r}_{n}}〈y-{u}_{n},{u}_{n}-{x}_{n}〉\ge 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C,\hfill \\ {x}_{n+1}={\alpha }_{n}h\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right)S{u}_{n}\hfill \end{array}$
(1.4)

for all $n\in \mathbb{N}$, where $\left\{{\alpha }_{n}\right\}\subset \left(0,1\right)$ and $\left\{{r}_{n}\right\}\subset \left(0,\mathrm{\infty }\right)$ satisfy appropriate conditions. Further, they proved that $\left\{{x}_{n}\right\}$ and $\left\{{u}_{n}\right\}$ converge strongly to $z\in F\left(S\right)\cap EP\left(\varphi \right)$, where $z={P}_{F\left(S\right)\cap EP\left(\varphi \right)}h\left(z\right)$.

On the other hand, let $f:C\to \mathbb{R}$ be a convex function, and consider the following minimization problem:

$\underset{x\in C}{min}f\left(x\right).$
(1.5)

Assume that the constrained convex minimization problem (1.5) is solvable, and let U denote the set of solutions of (1.5). Then the gradient-projection algorithm (GPA) generates a sequence ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ according to the recursive formula

${x}_{n+1}={Proj}_{C}\left(I-{\gamma }_{n}\mathrm{\nabla }f\right){x}_{n},$

where the parameters ${\gamma }_{n}$ are positive real numbers, and ${Proj}_{C}$ is the metric projection from H onto C. It is known that the convergence of the sequence ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ depends on the behavior of the gradient $\mathrm{\nabla }f$. If $\mathrm{\nabla }f$ is only assumed to be inverse strongly monotone, then ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ converges only weakly to a minimizer of (1.5). If $\mathrm{\nabla }f$ is Lipschitz continuous and strongly monotone, then ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ converges strongly to a minimizer of (1.5), provided the parameters ${\gamma }_{n}$ satisfy appropriate conditions.
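The GPA recursion can be sketched concretely. The following toy instance (illustrative data, not from the paper) minimizes $f\left(x\right)=\frac{1}{2}{\parallel Mx-b\parallel }^{2}$ over the nonnegative orthant, whose metric projection is a coordinatewise clip; in this finite-dimensional example weak and strong convergence coincide, so a fixed step $\gamma \in \left(0,2/L\right)$ drives the iterates to the constrained minimizer.

```python
import numpy as np

# Concrete instance of (1.5) with illustrative data: minimize
# f(x) = 0.5 * ||M x - b||^2 over C = {x : x >= 0}, whose metric
# projection is a coordinatewise clip.
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])
b = np.array([-2.0, 3.0])

grad_f = lambda x: M.T @ (M @ x - b)     # Lipschitz-continuous gradient
proj_C = lambda x: np.maximum(x, 0.0)    # Proj_C for the orthant

L = np.linalg.norm(M.T @ M, 2)           # Lipschitz constant of grad f
gamma = 1.0 / L                          # fixed step in (0, 2/L)

x = np.array([5.0, 5.0])
for _ in range(200):
    # x_{n+1} = Proj_C (I - gamma * grad f) x_n
    x = proj_C(x - gamma * grad_f(x))
print(x)  # the constrained minimizer, (0, 3) for this data
```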

As is well known, Xu gave an averaged mapping approach to the gradient-projection method and constructed a counterexample showing that the sequence generated by the gradient-projection method need not converge strongly in an infinite-dimensional space. Moreover, he presented two modifications of the gradient-projection method which are shown to have strong convergence.

In 2011, motivated by Xu, Ceng  proposed the following iterative algorithm:

${x}_{n+1}={Proj}_{C}\left[{s}_{n}\gamma V{x}_{n}+\left(I-{s}_{n}\mu F\right){T}_{n}{x}_{n}\right],\phantom{\rule{1em}{0ex}}n\ge 0,$
(1.6)

where $V:C\to H$ is an l-Lipschitzian mapping with a constant $l>0$, and $F:C\to H$ is a k-Lipschitzian and η-strongly monotone operator with constants $k,\eta >0$. Let $0<\mu <2\eta /{k}^{2}$, $0\le \gamma l<\tau$, and $\tau =1-\sqrt{1-\mu \left(2\eta -\mu {k}^{2}\right)}$. Let ${T}_{n}$ and ${s}_{n}$ satisfy ${s}_{n}=\frac{2-{\lambda }_{n}L}{4}$, ${Proj}_{C}\left(I-{\lambda }_{n}\mathrm{\nabla }f\right)={s}_{n}I+\left(1-{s}_{n}\right){T}_{n}$. Under suitable conditions, it is proved that the sequence ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ generated by (1.6) converges strongly to a minimizer ${x}^{\ast }$ of (1.5).

In 2012, Tian and Liu  introduced the following iterative method in a Hilbert space: ${x}_{1}\in C$ and

$\left\{\begin{array}{c}\varphi \left({u}_{n},y\right)+\frac{1}{{\beta }_{n}}〈y-{u}_{n},{u}_{n}-{x}_{n}〉\ge 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C,\hfill \\ {x}_{n+1}={\alpha }_{n}\gamma V{u}_{n}+\left(I-{\alpha }_{n}A\right){T}_{n}{u}_{n}\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\in \mathbb{N},\hfill \end{array}$
(1.7)

where ${u}_{n}={Q}_{{\beta }_{n}}{x}_{n}$, ${Proj}_{C}\left(I-{\lambda }_{n}\mathrm{\nabla }f\right)={s}_{n}I+\left(1-{s}_{n}\right){T}_{n}$, ${s}_{n}=\frac{2-{\lambda }_{n}L}{4}$ and $\left\{{\lambda }_{n}\right\}\subset \left(0,2/L\right)$, and $\left\{{\alpha }_{n}\right\}$, $\left\{{\beta }_{n}\right\}$, $\left\{{s}_{n}\right\}$ satisfy appropriate conditions. Further, they proved that the sequence $\left\{{x}_{n}\right\}$ converges strongly to a point $q\in U\cap EP\left(\varphi \right)$, which solves the variational inequality

$〈\left(A-\gamma V\right)q,q-z〉\le 0,\phantom{\rule{1em}{0ex}}z\in U\cap EP\left(\varphi \right).$

This was the first time that the equilibrium problem and the constrained convex minimization problem were solved jointly.

Since, in general, the minimization problem (1.5) has more than one solution, regularization is needed. Now we consider the following regularized minimization problem:

$\underset{x\in C}{min}{f}_{\alpha }\left(x\right):=f\left(x\right)+\frac{\alpha }{2}{\parallel x\parallel }^{2},$

where $\alpha >0$ is the regularization parameter, and f is a convex function with a $1/L$-ism gradient $\mathrm{\nabla }f$. Then the regularized GPA generates a sequence ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ by the following recursive formula:

${x}_{n+1}={Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n}}\right){x}_{n}={Proj}_{C}\left[{x}_{n}-\gamma \left(\mathrm{\nabla }f+{\alpha }_{n}I\right)\left({x}_{n}\right)\right],$
(1.8)

where ${\alpha }_{n}>0$ is a parameter, γ is a constant with $0<\gamma <2/L$, and ${Proj}_{C}$ is the metric projection from H onto C. It is known that the sequence ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ generated by algorithm (1.8) converges weakly to a minimizer of (1.5) in the setting of infinite-dimensional spaces.
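A sketch of the regularized GPA (1.8), with illustrative data not from the paper: the objective $f\left(x\right)=\frac{1}{2}{\left({x}_{1}+{x}_{2}-2\right)}^{2}$ has a whole line of minimizers, and the vanishing regularization parameters ${\alpha }_{n}$ steer the iterates toward the minimum-norm minimizer $\left(1,1\right)$. Here $C=H$, so ${Proj}_{C}=I$, and ${\alpha }_{n}={\left(n+1\right)}^{-1/2}$ is one admissible schedule with ${\alpha }_{n}\to 0$.

```python
import numpy as np

# Illustrative instance, not from the paper: f(x) = 0.5*(x1 + x2 - 2)^2
# has the whole line x1 + x2 = 2 as its solution set U. Here C = H,
# so Proj_C = I, and grad f is 1/L-ism with L = 2.
grad_f = lambda x: (x[0] + x[1] - 2.0) * np.ones(2)

gamma = 0.45                      # fixed step in (0, 2/L) = (0, 1)
x = np.array([4.0, -2.0])
for n in range(10000):
    alpha_n = 1.0 / np.sqrt(n + 1)   # regularization parameters, alpha_n -> 0
    # x_{n+1} = Proj_C [ x_n - gamma * (grad f + alpha_n I) x_n ]
    x = x - gamma * (grad_f(x) + alpha_n * x)
print(x)  # drifts toward the minimum-norm minimizer (1, 1)
```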

In this paper, motivated and inspired by the above results, we introduce a new iterative method: ${x}_{1}\in H$ and

$\left\{\begin{array}{c}\varphi \left({u}_{n},y\right)+\frac{1}{{\beta }_{n}}〈y-{u}_{n},{u}_{n}-{x}_{n}〉\ge 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C,\hfill \\ {x}_{n+1}={\alpha }_{n}rV{u}_{n}+\left(I-{\alpha }_{n}A\right){T}_{{\lambda }_{n}}{u}_{n}\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\in \mathbb{N},\hfill \end{array}$
(1.9)

for finding an element of $U\cap EP\left(\varphi \right)$, where ${u}_{n}={Q}_{{\beta }_{n}}{x}_{n}$, ${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)={\theta }_{n}I+\left(1-{\theta }_{n}\right){T}_{{\lambda }_{n}}$, ${\theta }_{n}=\frac{2-\gamma \left(L+{\lambda }_{n}\right)}{4}$, $\gamma \in \left(0,2/L\right)$. Under appropriate conditions, it is proved that the sequence $\left\{{x}_{n}\right\}$ generated by (1.9) converges strongly to a point $z\in U\cap EP\left(\varphi \right)$, which solves the variational inequality

$〈\left(A-rV\right)z,z-x〉\le 0,\phantom{\rule{1em}{0ex}}x\in U\cap EP\left(\varphi \right).$

## 2 Preliminaries

In this section we introduce some useful properties and lemmas which will be used in the proofs for the main results in the next section.

Some properties of averaged mappings are gathered in the proposition below.

Proposition 2.1 [7, 8]

Let the operators $S,T,V:H\to H$ be given:

1. (i)

If $T=\left(1-\alpha \right)S+\alpha V$ for some $\alpha \in \left(0,1\right)$ and if S is averaged and V is nonexpansive, then T is averaged.

2. (ii)

The composition of finitely many averaged mappings is averaged. That is, if each of the mappings ${\left\{{T}_{i}\right\}}_{i=1}^{N}$ is averaged, then so is the composite ${T}_{1}\cdots {T}_{N}$. In particular, if ${T}_{1}$ is ${\alpha }_{1}$-averaged and ${T}_{2}$ is ${\alpha }_{2}$-averaged, where ${\alpha }_{1},{\alpha }_{2}\in \left(0,1\right)$, then the composite ${T}_{1}{T}_{2}$ is α-averaged, where $\alpha ={\alpha }_{1}+{\alpha }_{2}-{\alpha }_{1}{\alpha }_{2}$.

3. (iii)

If the mappings ${\left\{{T}_{i}\right\}}_{i=1}^{N}$ are averaged and have a common fixed point, then

$\bigcap _{i=1}^{N}Fix\left({T}_{i}\right)=Fix\left({T}_{1}\cdots {T}_{N}\right).$

Here the notation $Fix\left(T\right)$ denotes the set of fixed points of the mapping T; that is, $Fix\left(T\right):=\left\{x\in H:Tx=x\right\}$.

The following proposition gathers some results on the relationship between averaged mappings and inverse strongly monotone operators.

Proposition 2.2 [7, 22]

Let $T:H\to H$ be given. We have:

1. (i)

T is nonexpansive if and only if the complement $I-T$ is ($1/2$)-ism;

2. (ii)

If T is υ-ism, then for $\gamma >0$, γT is ($\upsilon /\gamma$)-ism;

3. (iii)

T is averaged if and only if the complement $I-T$ is υ-ism for some $\upsilon >1/2$; indeed, for $\alpha \in \left(0,1\right)$, T is α-averaged if and only if $I-T$ is ($1/2\alpha$)-ism.

Lemma 2.1 

Assume that ${\left\{{a}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ is a sequence of nonnegative real numbers such that

${a}_{n+1}\le \left(1-{\gamma }_{n}\right){a}_{n}+{\gamma }_{n}{\delta }_{n}+{\beta }_{n},\phantom{\rule{1em}{0ex}}n\ge 0,$

where ${\left\{{\gamma }_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ and ${\left\{{\beta }_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ are sequences in $\left(0,1\right)$ and ${\left\{{\delta }_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ is a sequence in $\mathbb{R}$ such that

1. (i)

${\sum }_{n=0}^{\mathrm{\infty }}{\gamma }_{n}=\mathrm{\infty }$;

2. (ii)

either ${lim sup}_{n\to \mathrm{\infty }}{\delta }_{n}\le 0$ or ${\sum }_{n=0}^{\mathrm{\infty }}{\gamma }_{n}|{\delta }_{n}|<\mathrm{\infty }$;

3. (iii)

${\sum }_{n=0}^{\mathrm{\infty }}{\beta }_{n}<\mathrm{\infty }$.

Then ${lim}_{n\to \mathrm{\infty }}{a}_{n}=0$.

The so-called demiclosed principle for nonexpansive mappings will often be used.

Lemma 2.2 (Demiclosed principle )

Let C be a closed and convex subset of a Hilbert space H and let $T:C\to C$ be a nonexpansive mapping with $Fix\left(T\right)\ne \mathrm{\varnothing }$. If ${\left\{{x}_{n}\right\}}_{n=1}^{\mathrm{\infty }}$ is a sequence in C weakly converging to x and if ${\left\{\left(I-T\right){x}_{n}\right\}}_{n=1}^{\mathrm{\infty }}$ converges strongly to y, then $\left(I-T\right)x=y$. In particular, if $y=0$, then $x\in Fix\left(T\right)$.

Lemma 2.3 

Let H be a Hilbert space, let C be a closed and convex subset of H, let $V:C\to H$ be a Lipschitzian operator with a coefficient $l>0$, and let $A:C\to H$ be a strongly positive bounded linear operator with a coefficient $\overline{\gamma }>0$. Then, for $0<r<\overline{\gamma }/l$,

$\begin{array}{c}〈x-y,\left(A-rV\right)x-\left(A-rV\right)y〉\hfill \\ \phantom{\rule{1em}{0ex}}\ge \left(\overline{\gamma }-rl\right){\parallel x-y\parallel }^{2}\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in C.\hfill \end{array}$

That is, $A-rV$ is strongly monotone with a coefficient $\overline{\gamma }-rl$.

Recall that the metric (nearest point) projection ${Proj}_{C}$ from a real Hilbert space H onto a closed and convex subset C of H is defined as follows: given $x\in H$, ${Proj}_{C}x$ is the unique point in C with the property

$\parallel x-{Proj}_{C}x\parallel =inf\left\{\parallel x-y\parallel :y\in C\right\}.$

${Proj}_{C}$ is characterized as follows.

Lemma 2.4 Let C be a closed and convex subset of a real Hilbert space H. Given $x\in H$ and $y\in C$. Then $y={Proj}_{C}x$ if and only if the following inequality holds:

$〈x-y,y-z〉\ge 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }z\in C.$
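Lemma 2.4 can be checked numerically for a set whose projection is explicit, e.g., a box, where ${Proj}_{C}$ is a coordinatewise clip (an illustrative check, not part of the paper's argument):

```python
import numpy as np

# C = the box [0, 1]^2; its metric projection is a coordinatewise clip.
proj_C = lambda x: np.clip(x, 0.0, 1.0)

x = np.array([2.0, -0.5])
y = proj_C(x)                        # y = Proj_C x, here (1, 0)
# Lemma 2.4: y = Proj_C x  iff  <x - y, y - z> >= 0 for every z in C.
rng = np.random.default_rng(0)
zs = rng.uniform(0.0, 1.0, size=(100, 2))      # sample points of C
ok = all((x - y) @ (y - z) >= -1e-12 for z in zs)
print(y, ok)  # prints [1. 0.] True
```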

Lemma 2.5 

Assume that A is a strongly positive bounded linear operator on a Hilbert space H with a coefficient $\overline{\gamma }>0$, and let $0<t\le {\parallel A\parallel }^{-1}$. Then $\parallel I-tA\parallel \le 1-t\overline{\gamma }$.

For solving the equilibrium problem for a bifunction $\varphi :C×C\to \mathbb{R}$, let us assume that ϕ satisfies the following conditions:

(A1) $\varphi \left(x,x\right)=0$ for all $x\in C$;

(A2) ϕ is monotone, i.e., $\varphi \left(x,y\right)+\varphi \left(y,x\right)\le 0$ for all $x,y\in C$;

(A3) for each $x,y,z\in C$, ${lim}_{t↓0}\varphi \left(tz+\left(1-t\right)x,y\right)\le \varphi \left(x,y\right)$;

(A4) for each $x\in C$, $y↦\varphi \left(x,y\right)$ is convex and lower semicontinuous.

Lemma 2.6 

Let C be a nonempty, closed and convex subset of H and let ϕ be a bifunction of $C×C$ into $\mathbb{R}$ satisfying (A1)-(A4). Let $r>0$ and $x\in H$. Then there exists $z\in C$ such that

$\varphi \left(z,y\right)+\frac{1}{r}〈y-z,z-x〉\ge 0\phantom{\rule{1em}{0ex}}\mathit{\text{for all}}y\in C.$

Lemma 2.7 

Assume that $\varphi :C×C\to \mathbb{R}$ satisfies (A1)-(A4). For $r>0$ and $x\in H$, define a mapping ${Q}_{r}:H\to C$ as follows:

${Q}_{r}\left(x\right)=\left\{z\in C:\varphi \left(z,y\right)+\frac{1}{r}〈y-z,z-x〉\ge 0\phantom{\rule{0.25em}{0ex}}\mathrm{\forall }y\in C\right\}.$

Then the following hold:

1. (1)

${Q}_{r}$ is single-valued;

2. (2)

${Q}_{r}$ is firmly nonexpansive, i.e., ${\parallel {Q}_{r}x-{Q}_{r}y\parallel }^{2}\le 〈{Q}_{r}x-{Q}_{r}y,x-y〉$ for any $x,y\in H$;

3. (3)

$F\left({Q}_{r}\right)=EP\left(\varphi \right)$;

4. (4)

$EP\left(\varphi \right)$ is closed and convex.

In the sequel, we use the following notation:

• ${x}_{n}\to x$ means that ${x}_{n}\to x$ strongly;

• ${x}_{n}⇀x$ means that ${x}_{n}\to x$ weakly.

## 3 Main results

Recall that throughout this paper we denote by U the solution set of the constrained convex minimization problem (1.5) and by $EP\left(\varphi \right)$ the solution set of the equilibrium problem (1.1).

Let H be a real Hilbert space and C be a nonempty, closed and convex subset of H. Let $V:C\to H$ be Lipschitzian with a constant $l>0$, let $A:C\to H$ be a strongly positive bounded linear operator with a coefficient $\overline{\gamma }>0$, and let $0<r<\overline{\gamma }/l$. Suppose that the gradient $\mathrm{\nabla }f$ is $1/L$-ism. Let ${Q}_{{\beta }_{n}}$ be a mapping defined as in Lemma 2.7. We now consider the following mapping ${S}_{n}$ on H defined by

${S}_{n}\left(x\right)={\alpha }_{n}rV{Q}_{{\beta }_{n}}\left(x\right)+\left(I-{\alpha }_{n}A\right){T}_{{\lambda }_{n}}{Q}_{{\beta }_{n}}\left(x\right)\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in H,n\in \mathbb{N},$

where ${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)={\theta }_{n}I+\left(1-{\theta }_{n}\right){T}_{{\lambda }_{n}}$, ${\theta }_{n}=\frac{2-\gamma \left(L+{\lambda }_{n}\right)}{4}$, and $\gamma \in \left(0,2/L\right)$, $\left\{{\alpha }_{n}\right\}\subset \left(0,1\right)$. It is easy to see that ${S}_{n}$ is a contraction. Indeed, by Lemma 2.5 and Lemma 2.7, we have for each $x,y\in H$

$\begin{array}{c}\parallel {S}_{n}x-{S}_{n}y\parallel \hfill \\ \phantom{\rule{1em}{0ex}}=\parallel \left({\alpha }_{n}rV{Q}_{{\beta }_{n}}x-{\alpha }_{n}rV{Q}_{{\beta }_{n}}y\right)+\left(I-{\alpha }_{n}A\right){T}_{{\lambda }_{n}}{Q}_{{\beta }_{n}}x-\left(I-{\alpha }_{n}A\right){T}_{{\lambda }_{n}}{Q}_{{\beta }_{n}}y\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le {\alpha }_{n}rl\parallel x-y\parallel +\left(1-{\alpha }_{n}\overline{\gamma }\right)\parallel x-y\parallel \hfill \\ \phantom{\rule{1em}{0ex}}=\left(1-{\alpha }_{n}\left(\overline{\gamma }-rl\right)\right)\parallel x-y\parallel .\hfill \end{array}$

Since $0<1-{\alpha }_{n}\left(\overline{\gamma }-rl\right)<1$, it follows that ${S}_{n}$ is a contraction. Therefore, by the Banach contraction principle, ${S}_{n}$ has a unique fixed point ${x}_{n}\in H$ such that

${x}_{n}={\alpha }_{n}rV{Q}_{{\beta }_{n}}\left({x}_{n}\right)+\left(I-{\alpha }_{n}A\right){T}_{{\lambda }_{n}}{Q}_{{\beta }_{n}}\left({x}_{n}\right).$

Note that ${x}_{n}$ indeed depends on V as well, but we will suppress this dependence of ${x}_{n}$ on V for simplicity of notation throughout the rest of this paper.

The following theorem summarizes the properties of the sequence $\left\{{x}_{n}\right\}$.

Theorem 3.1 Let C be a nonempty, closed and convex subset of a Hilbert space H. Let ϕ be a bifunction from $C×C$ to $\mathbb{R}$ satisfying (A1)-(A4), let $f:C\to \mathbb{R}$ be a real-valued convex function, and assume that the gradient $\mathrm{\nabla }f$ is $1/L$-ism with a constant $L>0$. Assume that $U\cap EP\left(\varphi \right)\ne \mathrm{\varnothing }$. Let $V:C\to H$ be Lipschitzian with a constant $l>0$, let $A:C\to H$ be a strongly positive bounded linear operator with a coefficient $\overline{\gamma }>0$, and let $0<r<\overline{\gamma }/l$. Let the sequences $\left\{{u}_{n}\right\}$ and $\left\{{x}_{n}\right\}$ be generated by

$\left\{\begin{array}{c}\varphi \left({u}_{n},y\right)+\frac{1}{{\beta }_{n}}〈y-{u}_{n},{u}_{n}-{x}_{n}〉\ge 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C,\hfill \\ {x}_{n}={\alpha }_{n}rV{u}_{n}+\left(I-{\alpha }_{n}A\right){T}_{{\lambda }_{n}}{u}_{n}\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\in \mathbb{N},\hfill \end{array}$
(3.1)

where ${u}_{n}={Q}_{{\beta }_{n}}{x}_{n}$, ${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)={\theta }_{n}I+\left(1-{\theta }_{n}\right){T}_{{\lambda }_{n}}$, ${\theta }_{n}=\frac{2-\gamma \left(L+{\lambda }_{n}\right)}{4}$ and $\gamma \in \left(0,2/L\right)$. Let $\left\{{\beta }_{n}\right\}$, $\left\{{\alpha }_{n}\right\}$ and $\left\{{\lambda }_{n}\right\}$ satisfy the following conditions:

1. (i)

$\left\{{\beta }_{n}\right\}\subset \left(0,\mathrm{\infty }\right)$, ${lim inf}_{n\to \mathrm{\infty }}{\beta }_{n}>0$;

2. (ii)

$\left\{{\alpha }_{n}\right\}\subset \left(0,1\right)$, ${lim}_{n\to \mathrm{\infty }}{\alpha }_{n}=0$;

3. (iii)

$\left\{{\lambda }_{n}\right\}\subset \left(0,2/\gamma -L\right)$, ${\lambda }_{n}=o\left({\alpha }_{n}\right)$.

Then the sequence $\left\{{x}_{n}\right\}$ converges strongly to a point $z\in U\cap EP\left(\varphi \right)$, which solves the variational inequality

$〈\left(A-rV\right)z,z-x〉\le 0,\phantom{\rule{1em}{0ex}}x\in U\cap EP\left(\varphi \right).$
(3.2)

Equivalently, we have $z={P}_{U\cap EP\left(\varphi \right)}\left(I-A+rV\right)\left(z\right)$.

Proof It is well known that $\stackrel{ˆ}{x}\in C$ solves the minimization problem (1.5) if and only if for each fixed $0<\gamma <2/L$, $\stackrel{ˆ}{x}$ solves the fixed-point equation

$\stackrel{ˆ}{x}={Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right)\stackrel{ˆ}{x}=\frac{2-\gamma L}{4}\stackrel{ˆ}{x}+\frac{2+\gamma L}{4}T\stackrel{ˆ}{x}.$

It is clear that $\stackrel{ˆ}{x}=T\stackrel{ˆ}{x}$, i.e., $\stackrel{ˆ}{x}\in U=Fix\left(T\right)$.

First, we assume that ${\alpha }_{n}\in \left(0,{\parallel A\parallel }^{-1}\right)$. By Lemma 2.5, we obtain $\parallel I-{\alpha }_{n}A\parallel \le 1-{\alpha }_{n}\overline{\gamma }$. Let $p\in U\cap EP\left(\varphi \right)$, then from ${u}_{n}={Q}_{{\beta }_{n}}{x}_{n}$, we have

$\parallel {u}_{n}-p\parallel =\parallel {Q}_{{\beta }_{n}}{x}_{n}-{Q}_{{\beta }_{n}}p\parallel \le \parallel {x}_{n}-p\parallel$
(3.3)

for all $n\in \mathbb{N}$. Thus, we have from (3.3)

$\begin{array}{c}\parallel {x}_{n}-p\parallel \hfill \\ \phantom{\rule{1em}{0ex}}=\parallel {\alpha }_{n}rV{u}_{n}+\left(I-{\alpha }_{n}A\right){T}_{{\lambda }_{n}}{u}_{n}-p\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \parallel {\alpha }_{n}rV{u}_{n}-{\alpha }_{n}A\left(p\right)\parallel +\parallel I-{\alpha }_{n}A\parallel \cdot \parallel {T}_{{\lambda }_{n}}{u}_{n}-p\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \parallel {\alpha }_{n}rV{u}_{n}-{\alpha }_{n}rV\left(p\right)\parallel +\parallel {\alpha }_{n}rV\left(p\right)-{\alpha }_{n}A\left(p\right)\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+\left(1-{\alpha }_{n}\overline{\gamma }\right)\parallel {T}_{{\lambda }_{n}}{u}_{n}-{T}_{{\lambda }_{n}}\left(p\right)+{T}_{{\lambda }_{n}}\left(p\right)-Tp\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le {\alpha }_{n}rl\parallel {u}_{n}-p\parallel +{\alpha }_{n}\parallel rV\left(p\right)-A\left(p\right)\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+\left(1-{\alpha }_{n}\overline{\gamma }\right)\parallel {u}_{n}-p\parallel +\left(1-{\alpha }_{n}\overline{\gamma }\right)\parallel {T}_{{\lambda }_{n}}\left(p\right)-Tp\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \left(1-{\alpha }_{n}\left(\overline{\gamma }-rl\right)\right)\parallel {x}_{n}-p\parallel +{\alpha }_{n}\parallel rV\left(p\right)-A\left(p\right)\parallel +\left(1-{\alpha }_{n}\overline{\gamma }\right)\parallel {T}_{{\lambda }_{n}}\left(p\right)-Tp\parallel .\hfill \end{array}$

It follows that

$\parallel {x}_{n}-p\parallel \le \frac{1}{\overline{\gamma }-rl}\parallel rV\left(p\right)-A\left(p\right)\parallel +\frac{1-{\alpha }_{n}\overline{\gamma }}{{\alpha }_{n}\left(\overline{\gamma }-rl\right)}\parallel {T}_{{\lambda }_{n}}\left(p\right)-Tp\parallel .$
(3.4)

For $x\in C$, note that

${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)x={\theta }_{n}x+\left(1-{\theta }_{n}\right){T}_{{\lambda }_{n}}x$

and

${Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right)x=\theta x+\left(1-\theta \right)Tx,$

where ${\theta }_{n}=\frac{2-\gamma \left(L+{\lambda }_{n}\right)}{4}$ and $\theta =\frac{2-\gamma L}{4}$.

Then we get

$\begin{array}{rcl}\parallel \left({\theta }_{n}-\theta \right)x+{T}_{{\lambda }_{n}}x-Tx+\theta Tx-{\theta }_{n}{T}_{{\lambda }_{n}}x\parallel & =& \parallel {Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)x-{Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right)x\parallel \\ \le & \gamma {\lambda }_{n}\parallel x\parallel .\end{array}$

Since ${\theta }_{n}=\frac{2-\gamma \left(L+{\lambda }_{n}\right)}{4}$ and $\theta =\frac{2-\gamma L}{4}$, so that $\theta -{\theta }_{n}=\frac{\gamma {\lambda }_{n}}{4}$, there exists $M\left(x\right)>0$ such that

$\parallel {T}_{{\lambda }_{n}}x-Tx\parallel \le \frac{{\lambda }_{n}\gamma \left(5\parallel x\parallel +\parallel Tx\parallel \right)}{2+\gamma \left(L+{\lambda }_{n}\right)}\le {\lambda }_{n}M\left(x\right),$
(3.5)

where $M\left(x\right)=\gamma \left(5\parallel x\parallel +\parallel Tx\parallel \right)$.

It follows from (3.4) and (3.5) that

$\parallel {x}_{n}-p\parallel \le \frac{1}{\overline{\gamma }-rl}\parallel rV\left(p\right)-A\left(p\right)\parallel +\frac{M\left(p\right)}{\overline{\gamma }-rl}\cdot \frac{{\lambda }_{n}}{{\alpha }_{n}}.$

Since ${\lambda }_{n}=o\left({\alpha }_{n}\right)$, there exists a real number ${M}^{\mathrm{\prime }}>0$ such that $\frac{{\lambda }_{n}}{{\alpha }_{n}}\le {M}^{\mathrm{\prime }}$, and

$\begin{array}{c}\parallel {x}_{n}-p\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \frac{1}{\overline{\gamma }-rl}\parallel rV\left(p\right)-A\left(p\right)\parallel +\frac{{M}^{\mathrm{\prime }}M\left(p\right)}{\overline{\gamma }-rl}\hfill \\ \phantom{\rule{1em}{0ex}}=\frac{\parallel rV\left(p\right)-A\left(p\right)\parallel +{M}^{\mathrm{\prime }}M\left(p\right)}{\overline{\gamma }-rl}.\hfill \end{array}$

Hence $\left\{{x}_{n}\right\}$ is bounded and we also obtain that $\left\{{u}_{n}\right\}$ is bounded.

Next, we show that $\parallel {x}_{n}-{u}_{n}\parallel \to 0$.

Indeed, for any $p\in U\cap EP\left(\varphi \right)$, by Lemma 2.7, we have

$\begin{array}{c}{\parallel {u}_{n}-p\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}={\parallel {Q}_{{\beta }_{n}}{x}_{n}-{Q}_{{\beta }_{n}}p\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}\le 〈{x}_{n}-p,{u}_{n}-p〉\hfill \\ \phantom{\rule{1em}{0ex}}=\frac{1}{2}\left({\parallel {x}_{n}-p\parallel }^{2}+{\parallel {u}_{n}-p\parallel }^{2}-{\parallel {u}_{n}-{x}_{n}\parallel }^{2}\right).\hfill \end{array}$

This implies that

${\parallel {u}_{n}-p\parallel }^{2}\le {\parallel {x}_{n}-p\parallel }^{2}-{\parallel {u}_{n}-{x}_{n}\parallel }^{2}.$
(3.6)

Then from (3.5), we derive that

$\begin{array}{c}{\parallel {x}_{n}-p\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}={\parallel {\alpha }_{n}rV{u}_{n}+\left(I-{\alpha }_{n}A\right){T}_{{\lambda }_{n}}{u}_{n}-p\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}={\parallel \left(I-{\alpha }_{n}A\right)\left({T}_{{\lambda }_{n}}{u}_{n}-p\right)+{\alpha }_{n}rV{u}_{n}-{\alpha }_{n}A\left(p\right)\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}\le {\parallel \left(I-{\alpha }_{n}A\right)\left({T}_{{\lambda }_{n}}{u}_{n}-p\right)\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+2\parallel \left(I-{\alpha }_{n}A\right)\left({T}_{{\lambda }_{n}}{u}_{n}-p\right)\parallel \cdot \parallel {\alpha }_{n}rV{u}_{n}-{\alpha }_{n}A\left(p\right)\parallel +{\parallel {\alpha }_{n}rV{u}_{n}-{\alpha }_{n}A\left(p\right)\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}\le {\left(1-{\alpha }_{n}\overline{\gamma }\right)}^{2}{\parallel {T}_{{\lambda }_{n}}{u}_{n}-{T}_{{\lambda }_{n}}p+{T}_{{\lambda }_{n}}p-Tp\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+2\left(1-{\alpha }_{n}\overline{\gamma }\right)\parallel {T}_{{\lambda }_{n}}{u}_{n}-p\parallel \parallel {\alpha }_{n}rV{u}_{n}-{\alpha }_{n}A\left(p\right)\parallel +{\alpha }_{n}^{2}{\parallel rV{u}_{n}-A\left(p\right)\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}\le {\left(1-{\alpha }_{n}\overline{\gamma }\right)}^{2}{\left(\parallel {u}_{n}-p\parallel +\parallel {T}_{{\lambda }_{n}}p-Tp\parallel \right)}^{2}\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+2\left(1-{\alpha }_{n}\overline{\gamma }\right)\left(\parallel {u}_{n}-p\parallel +\parallel {T}_{{\lambda }_{n}}p-Tp\parallel \right)\cdot \left({\alpha }_{n}rl\parallel {u}_{n}-p\parallel +{\alpha }_{n}\parallel rV\left(p\right)-A\left(p\right)\parallel \right)\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+{\alpha }_{n}^{2}{\parallel rV{u}_{n}-A\left(p\right)\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}\le 
\left({\left(1-{\alpha }_{n}\overline{\gamma }\right)}^{2}+2\left(1-{\alpha }_{n}\overline{\gamma }\right){\alpha }_{n}rl\right){\parallel {u}_{n}-p\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+{\left(1-{\alpha }_{n}\overline{\gamma }\right)}^{2}{\lambda }_{n}^{2}\cdot {\left(M\left(p\right)\right)}^{2}+2{\left(1-{\alpha }_{n}\overline{\gamma }\right)}^{2}\parallel {u}_{n}-p\parallel {\lambda }_{n}M\left(p\right)\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+2\left(1-{\alpha }_{n}\overline{\gamma }\right)\parallel {u}_{n}-p\parallel {\alpha }_{n}\parallel rV\left(p\right)-A\left(p\right)\parallel +2\left(1-{\alpha }_{n}\overline{\gamma }\right){\lambda }_{n}M\left(p\right){\alpha }_{n}rl\parallel {u}_{n}-p\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+2\left(1-{\alpha }_{n}\overline{\gamma }\right){\lambda }_{n}M\left(p\right){\alpha }_{n}\parallel rV\left(p\right)-A\left(p\right)\parallel +{\alpha }_{n}^{2}{\left(rl\parallel {u}_{n}-p\parallel +\parallel rV\left(p\right)-A\left(p\right)\parallel \right)}^{2}.\hfill \end{array}$

It follows from (3.6) that

$\begin{array}{c}\left({\left(1-{\alpha }_{n}\overline{\gamma }\right)}^{2}+2\left(1-{\alpha }_{n}\overline{\gamma }\right){\alpha }_{n}rl\right){\parallel {u}_{n}-{x}_{n}\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}\le \left({\left(1-{\alpha }_{n}\overline{\gamma }\right)}^{2}+2\left(1-{\alpha }_{n}\overline{\gamma }\right){\alpha }_{n}rl-1\right){\parallel {x}_{n}-p\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+{\lambda }_{n}^{2}\cdot {\left(M\left(p\right)\right)}^{2}+2\parallel {u}_{n}-p\parallel \left({\lambda }_{n}M\left(p\right)+{\alpha }_{n}\parallel rV\left(p\right)-A\left(p\right)\parallel \right)\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+2{\alpha }_{n}{\lambda }_{n}M\left(p\right)\left(rl\parallel {u}_{n}-p\parallel +\parallel rV\left(p\right)-A\left(p\right)\parallel \right)\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+{\alpha }_{n}^{2}{\left(rl\parallel {u}_{n}-p\parallel +\parallel rV\left(p\right)-A\left(p\right)\parallel \right)}^{2}.\hfill \end{array}$

Since both $\left\{{x}_{n}\right\}$ and $\left\{{u}_{n}\right\}$ are bounded and ${\alpha }_{n}\to 0$, ${\lambda }_{n}\to 0$, it follows that $\parallel {u}_{n}-{x}_{n}\parallel \to 0$.

We claim that $\parallel {x}_{n}-{T}_{{\lambda }_{n}}{x}_{n}\parallel \to 0$. Indeed,

$\begin{array}{c}\parallel {x}_{n}-{T}_{{\lambda }_{n}}{x}_{n}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}=\parallel {x}_{n}-{T}_{{\lambda }_{n}}{u}_{n}+{T}_{{\lambda }_{n}}{u}_{n}-{T}_{{\lambda }_{n}}{x}_{n}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \parallel {x}_{n}-{T}_{{\lambda }_{n}}{u}_{n}\parallel +\parallel {T}_{{\lambda }_{n}}{u}_{n}-{T}_{{\lambda }_{n}}{x}_{n}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le {\alpha }_{n}\parallel rV{u}_{n}-A{T}_{{\lambda }_{n}}{u}_{n}\parallel +\parallel {u}_{n}-{x}_{n}\parallel .\hfill \end{array}$

Since ${\alpha }_{n}\to 0$ and $\parallel {u}_{n}-{x}_{n}\parallel \to 0$, we obtain that

$\parallel {x}_{n}-{T}_{{\lambda }_{n}}{x}_{n}\parallel \to 0.$

Moreover, since

$\begin{array}{c}\parallel {u}_{n}-{T}_{{\lambda }_{n}}{u}_{n}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}=\parallel {u}_{n}-{x}_{n}+{x}_{n}-{T}_{{\lambda }_{n}}{x}_{n}+{T}_{{\lambda }_{n}}{x}_{n}-{T}_{{\lambda }_{n}}{u}_{n}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \parallel {u}_{n}-{x}_{n}\parallel +\parallel {x}_{n}-{T}_{{\lambda }_{n}}{x}_{n}\parallel +\parallel {T}_{{\lambda }_{n}}{x}_{n}-{T}_{{\lambda }_{n}}{u}_{n}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \parallel {u}_{n}-{x}_{n}\parallel +\parallel {x}_{n}-{T}_{{\lambda }_{n}}{x}_{n}\parallel +\parallel {x}_{n}-{u}_{n}\parallel \hfill \end{array}$

and

$\parallel {x}_{n}-{T}_{{\lambda }_{n}}{u}_{n}\parallel \le \parallel {u}_{n}-{x}_{n}\parallel +\parallel {T}_{{\lambda }_{n}}{u}_{n}-{u}_{n}\parallel ,$

we have $\parallel {u}_{n}-{T}_{{\lambda }_{n}}{u}_{n}\parallel \to 0$ and $\parallel {x}_{n}-{T}_{{\lambda }_{n}}{u}_{n}\parallel \to 0$.

Since $\left\{{u}_{n}\right\}$ is bounded, without loss of generality, we can assume that ${u}_{{n}_{i}}⇀z$. Next, we show that $z\in U\cap EP\left(\varphi \right)$.

By (3.5), we have

$\begin{array}{c}\parallel {u}_{n}-T{u}_{n}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \parallel {u}_{n}-{T}_{{\lambda }_{n}}{u}_{n}\parallel +\parallel {T}_{{\lambda }_{n}}{u}_{n}-T{u}_{n}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \parallel {u}_{n}-{T}_{{\lambda }_{n}}{u}_{n}\parallel +{\lambda }_{n}M\left({u}_{n}\right)\to 0.\hfill \end{array}$

So, by Lemma 2.2, we get $z\in Fix\left(T\right)=U$.

Next, we show that $z\in EP\left(\varphi \right)$. Since ${u}_{n}={Q}_{{\beta }_{n}}{x}_{n}$, for any $y\in C$, we obtain

$\varphi \left({u}_{n},y\right)+\frac{1}{{\beta }_{n}}〈y-{u}_{n},{u}_{n}-{x}_{n}〉\ge 0.$

From (A2), we have

$\frac{1}{{\beta }_{n}}〈y-{u}_{n},{u}_{n}-{x}_{n}〉\ge \varphi \left(y,{u}_{n}\right)$

and hence

$〈y-{u}_{{n}_{i}},\frac{{u}_{{n}_{i}}-{x}_{{n}_{i}}}{{\beta }_{{n}_{i}}}〉\ge \varphi \left(y,{u}_{{n}_{i}}\right).$

Since $\frac{{u}_{{n}_{i}}-{x}_{{n}_{i}}}{{\beta }_{{n}_{i}}}\to 0$ and ${u}_{{n}_{i}}⇀z$, it follows from (A4) that $\varphi \left(y,z\right)\le 0$ for any $y\in C$.

For $y\in C$, let ${z}_{t}=ty+\left(1-t\right)z$ for all $t\in \left(0,1\right]$; then ${z}_{t}\in C$ and hence $\varphi \left({z}_{t},z\right)\le 0$.

Thus, from (A1) and (A4), we have

$\begin{array}{rcl}0& =& \varphi \left({z}_{t},{z}_{t}\right)\\ \le & t\varphi \left({z}_{t},y\right)+\left(1-t\right)\varphi \left({z}_{t},z\right)\\ \le & t\varphi \left({z}_{t},y\right)\end{array}$

and hence $\varphi \left({z}_{t},y\right)\ge 0$. Letting $t\to {0}^{+}$, it follows from (A3) that $\varphi \left(z,y\right)\ge 0$ for any $y\in C$; hence $z\in EP\left(\varphi \right)$. Therefore, $z\in U\cap EP\left(\varphi \right)$.
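For intuition about the resolvent ${Q}_{{\beta }_{n}}$ used above, it helps to consider a case where it has a closed form. The following sketch is an illustration added here (not part of the original argument): it takes the monotone bifunction $\varphi \left(u,y\right)=〈Mu,y-u〉$ with $M$ positive semidefinite and $C=H={\mathbb{R}}^{d}$, for which the defining inequality forces ${Q}_{\beta }x={\left(I+\beta M\right)}^{-1}x$.

```python
import numpy as np

# Illustration (ours, not from the paper): for the monotone bifunction
# phi(u, y) = <M u, y - u> with M positive semidefinite and C = H = R^d,
# the defining inequality phi(u, y) + (1/beta)<y - u, u - x> >= 0 (all y)
# forces M u + (u - x)/beta = 0, i.e. Q_beta x = (I + beta*M)^{-1} x.
rng = np.random.default_rng(0)
d = 5
B = rng.standard_normal((d, d))
M = B.T @ B                                  # positive semidefinite
beta = 0.7

def Q(beta, x):
    return np.linalg.solve(np.eye(d) + beta * M, x)

x = rng.standard_normal(d)
u = Q(beta, x)
assert np.allclose(M @ u + (u - x) / beta, 0.0)   # defining condition

# Q_beta is firmly nonexpansive: <Qx - Qy, x - y> >= ||Qx - Qy||^2,
# which is what underlies estimates such as (3.16).
y = rng.standard_normal(d)
v = Q(beta, y)
assert np.dot(u - v, x - y) >= np.dot(u - v, u - v) - 1e-12
print("resolvent checks passed")
```

The last assertion checks firm nonexpansiveness of ${Q}_{\beta }$, the property of the resolvent that the proofs rely on.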

On the other hand, we note that

$\begin{array}{rcl}{x}_{n}-z& =& {\alpha }_{n}rV{u}_{n}+\left(I-{\alpha }_{n}A\right){T}_{{\lambda }_{n}}{u}_{n}-z\\ =& \left(I-{\alpha }_{n}A\right)\left({T}_{{\lambda }_{n}}{u}_{n}-z\right)+{\alpha }_{n}rV{u}_{n}-{\alpha }_{n}A\left(z\right)\\ =& \left(I-{\alpha }_{n}A\right)\left({T}_{{\lambda }_{n}}{u}_{n}-z\right)+{\alpha }_{n}r\left(V{u}_{n}-Vz\right)+{\alpha }_{n}\left(rV\left(z\right)-A\left(z\right)\right).\end{array}$

Hence, we obtain from (3.3) and (3.5) that

$\begin{array}{rcl}{\parallel {x}_{n}-z\parallel }^{2}& =& 〈\left(I-{\alpha }_{n}A\right)\left({T}_{{\lambda }_{n}}{u}_{n}-z\right),{x}_{n}-z〉\\ +{\alpha }_{n}r〈V{u}_{n}-Vz,{x}_{n}-z〉+{\alpha }_{n}〈rV\left(z\right)-A\left(z\right),{x}_{n}-z〉\\ \le & \left(1-{\alpha }_{n}\overline{\gamma }\right)\parallel {T}_{{\lambda }_{n}}{u}_{n}-z\parallel \parallel {x}_{n}-z\parallel +{\alpha }_{n}rl\parallel {u}_{n}-z\parallel \parallel {x}_{n}-z\parallel \\ +{\alpha }_{n}〈rV\left(z\right)-A\left(z\right),{x}_{n}-z〉\\ \le & \left(1-{\alpha }_{n}\overline{\gamma }\right)\left(\parallel {T}_{{\lambda }_{n}}{u}_{n}-{T}_{{\lambda }_{n}}z\parallel +\parallel {T}_{{\lambda }_{n}}z-Tz\parallel \right)\parallel {x}_{n}-z\parallel \\ +{\alpha }_{n}rl\parallel {u}_{n}-z\parallel \parallel {x}_{n}-z\parallel +{\alpha }_{n}〈rV\left(z\right)-A\left(z\right),{x}_{n}-z〉\\ \le & \left(1-{\alpha }_{n}\overline{\gamma }\right){\parallel {x}_{n}-z\parallel }^{2}+\left(1-{\alpha }_{n}\overline{\gamma }\right)\parallel {T}_{{\lambda }_{n}}z-Tz\parallel \parallel {x}_{n}-z\parallel \\ +{\alpha }_{n}rl{\parallel {x}_{n}-z\parallel }^{2}+{\alpha }_{n}〈rV\left(z\right)-A\left(z\right),{x}_{n}-z〉\\ \le & \left(1-{\alpha }_{n}\left(\overline{\gamma }-rl\right)\right){\parallel {x}_{n}-z\parallel }^{2}\\ +\left(1-{\alpha }_{n}\overline{\gamma }\right){\lambda }_{n}M\left(z\right)\parallel {x}_{n}-z\parallel +{\alpha }_{n}〈rV\left(z\right)-A\left(z\right),{x}_{n}-z〉.\end{array}$

It follows that

${\parallel {x}_{n}-z\parallel }^{2}\le \frac{1}{\overline{\gamma }-rl}\frac{{\lambda }_{n}}{{\alpha }_{n}}M\left(z\right)\parallel {x}_{n}-z\parallel +\frac{1}{\overline{\gamma }-rl}〈rV\left(z\right)-A\left(z\right),{x}_{n}-z〉.$

In particular,

${\parallel {x}_{{n}_{i}}-z\parallel }^{2}\le \frac{1}{\overline{\gamma }-rl}\frac{{\lambda }_{{n}_{i}}}{{\alpha }_{{n}_{i}}}M\left(z\right)\parallel {x}_{{n}_{i}}-z\parallel +\frac{1}{\overline{\gamma }-rl}〈rV\left(z\right)-A\left(z\right),{x}_{{n}_{i}}-z〉.$
(3.7)

Since ${u}_{{n}_{i}}⇀z$ and $\parallel {u}_{n}-{x}_{n}\parallel \to 0$, we have ${x}_{{n}_{i}}⇀z$; together with ${\lambda }_{n}=o\left({\alpha }_{n}\right)$, it follows from (3.7) that ${x}_{{n}_{i}}\to z$ as $i\to \mathrm{\infty }$.

Next, we show that z solves the variational inequality (3.2). Observe that ${x}_{n}={\alpha }_{n}rV{u}_{n}+\left(I-{\alpha }_{n}A\right){T}_{{\lambda }_{n}}{u}_{n}$.

Hence, we conclude that

$\left(A-rV\right){x}_{n}=-\frac{1}{{\alpha }_{n}}\left(I-{\alpha }_{n}A\right)\left(I-{T}_{{\lambda }_{n}}{Q}_{{\beta }_{n}}\right){x}_{n}+r\left(V{u}_{n}-V{x}_{n}\right).$

Since ${T}_{{\lambda }_{n}}{Q}_{{\beta }_{n}}$ is nonexpansive, we have $I-{T}_{{\lambda }_{n}}{Q}_{{\beta }_{n}}$ is monotone. Note that for any given $x\in U\cap EP\left(\varphi \right)$,

$\begin{array}{c}〈\left(A-rV\right){x}_{n},{x}_{n}-x〉\hfill \\ \phantom{\rule{1em}{0ex}}=-\frac{1}{{\alpha }_{n}}〈\left(I-{\alpha }_{n}A\right)\left(I-{T}_{{\lambda }_{n}}{Q}_{{\beta }_{n}}\right){x}_{n},{x}_{n}-x〉+r〈V{u}_{n}-V{x}_{n},{x}_{n}-x〉\hfill \\ \phantom{\rule{1em}{0ex}}=-\frac{1}{{\alpha }_{n}}〈\left(I-{T}_{{\lambda }_{n}}{Q}_{{\beta }_{n}}\right){x}_{n}-\left(I-{T}_{{\lambda }_{n}}{Q}_{{\beta }_{n}}\right)x,{x}_{n}-x〉\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}-\frac{1}{{\alpha }_{n}}〈\left(I-{T}_{{\lambda }_{n}}{Q}_{{\beta }_{n}}\right)x,{x}_{n}-x〉+〈A\left(I-{T}_{{\lambda }_{n}}{Q}_{{\beta }_{n}}\right){x}_{n},{x}_{n}-x〉\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+r〈V{u}_{n}-V{x}_{n},{x}_{n}-x〉\hfill \\ \phantom{\rule{1em}{0ex}}\le -\frac{1}{{\alpha }_{n}}〈\left(I-{T}_{{\lambda }_{n}}{Q}_{{\beta }_{n}}\right)x,{x}_{n}-x〉+〈A\left(I-{T}_{{\lambda }_{n}}{Q}_{{\beta }_{n}}\right){x}_{n},{x}_{n}-x〉\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+r〈V{u}_{n}-V{x}_{n},{x}_{n}-x〉\hfill \\ \phantom{\rule{1em}{0ex}}\le \frac{1}{{\alpha }_{n}}\parallel x-{T}_{{\lambda }_{n}}x\parallel \parallel {x}_{n}-x\parallel +\parallel A\left(I-{T}_{{\lambda }_{n}}{Q}_{{\beta }_{n}}\right){x}_{n}\parallel \parallel {x}_{n}-x\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+rl\parallel {u}_{n}-{x}_{n}\parallel \parallel {x}_{n}-x\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \frac{{\lambda }_{n}}{{\alpha }_{n}}M\left(x\right)\parallel {x}_{n}-x\parallel +\parallel A\parallel \parallel {x}_{n}-{T}_{{\lambda }_{n}}{Q}_{{\beta }_{n}}{x}_{n}\parallel \parallel {x}_{n}-x\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+rl\parallel {u}_{n}-{x}_{n}\parallel \parallel {x}_{n}-x\parallel .\hfill \end{array}$

Now, replacing n with ${n}_{i}$ in the above inequality, and letting $i\to \mathrm{\infty }$, since ${\lambda }_{n}=o\left({\alpha }_{n}\right)$, $\parallel {x}_{n}-{T}_{{\lambda }_{n}}{u}_{n}\parallel \to 0$, and $\parallel {u}_{n}-{x}_{n}\parallel \to 0$, we have

$〈\left(A-rV\right)z,z-x〉\le 0.$

It follows that $z\in U\cap EP\left(\varphi \right)$ is a solution of the variational inequality (3.2). Further, by the uniqueness of the solution of the variational inequality (3.2), we conclude that ${x}_{n}\to z$ as $n\to \mathrm{\infty }$.

The variational inequality (3.2) can be rewritten as

$〈\left(I-A+rV\right)z-z,z-x〉\ge 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in U\cap EP\left(\varphi \right).$

By Lemma 2.4, it is equivalent to the following fixed-point equation:

${P}_{U\cap EP\left(\varphi \right)}\left(I-A+rV\right)z=z.$

This completes the proof. □

Theorem 3.2 Let C be a nonempty, closed and convex subset of a Hilbert space H. Let $\varphi$ be a bifunction from $C×C$ to $\mathbb{R}$ satisfying (A1)-(A4), let $f:C\to \mathbb{R}$ be a real-valued convex function, and assume that the gradient $\mathrm{\nabla }f$ is $1/L$-ism with a constant $L>0$. Assume that $U\cap EP\left(\varphi \right)\ne \mathrm{\varnothing }$. Let $V:C\to H$ be Lipschitz continuous with a constant $l>0$, let $A:C\to H$ be a strongly positive bounded linear operator with a coefficient $\overline{\gamma }>0$, and assume that $0<rl<\overline{\gamma }$. Let the sequences $\left\{{u}_{n}\right\}$ and $\left\{{x}_{n}\right\}$ be generated by ${x}_{1}\in H$ and

$\left\{\begin{array}{c}\varphi \left({u}_{n},y\right)+\frac{1}{{\beta }_{n}}〈y-{u}_{n},{u}_{n}-{x}_{n}〉\ge 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C,\hfill \\ {x}_{n+1}={\alpha }_{n}rV{u}_{n}+\left(I-{\alpha }_{n}A\right){T}_{{\lambda }_{n}}{u}_{n}\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\in \mathbb{N},\hfill \end{array}$
(3.8)

where ${u}_{n}={Q}_{{\beta }_{n}}{x}_{n}$, ${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)={\theta }_{n}I+\left(1-{\theta }_{n}\right){T}_{{\lambda }_{n}}$, ${\theta }_{n}=\frac{2-\gamma \left(L+{\lambda }_{n}\right)}{4}$ and $\gamma \in \left(0,2/L\right)$. Let $\left\{{\beta }_{n}\right\}$, $\left\{{\alpha }_{n}\right\}$ and $\left\{{\lambda }_{n}\right\}$ satisfy the following conditions:

(C1) $\left\{{\beta }_{n}\right\}\subset \left(0,\mathrm{\infty }\right)$, ${lim inf}_{n\to \mathrm{\infty }}{\beta }_{n}>0$, ${\sum }_{n=1}^{\mathrm{\infty }}|{\beta }_{n+1}-{\beta }_{n}|<\mathrm{\infty }$;

(C2) $\left\{{\alpha }_{n}\right\}\subset \left(0,1\right)$, ${lim}_{n\to \mathrm{\infty }}{\alpha }_{n}=0$, ${\sum }_{n=1}^{\mathrm{\infty }}{\alpha }_{n}=\mathrm{\infty }$, ${\sum }_{n=1}^{\mathrm{\infty }}|{\alpha }_{n+1}-{\alpha }_{n}|<\mathrm{\infty }$;

(C3) $\left\{{\lambda }_{n}\right\}\subset \left(0,2/\gamma -L\right)$, ${\lambda }_{n}=o\left({\alpha }_{n}\right)$, ${\sum }_{n=1}^{\mathrm{\infty }}|{\lambda }_{n+1}-{\lambda }_{n}|<\mathrm{\infty }$.

Then the sequence $\left\{{x}_{n}\right\}$ converges strongly to a point $z\in U\cap EP\left(\varphi \right)$, which solves the variational inequality (3.2).
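Before turning to the proof, the explicit scheme (3.8) can be sketched numerically. The toy instance below uses simplifying choices of ours, not from the paper: $H={\mathbb{R}}^{2}$, $C={\left[0,1\right]}^{2}$, $\varphi \equiv 0$ (so that ${Q}_{{\beta }_{n}}={P}_{C}$), $A=I$ (so $\overline{\gamma }=1$), $f\left(x\right)=\frac{1}{2}{\parallel x-b\parallel }^{2}$ (so $L=1$), and ${T}_{{\lambda }_{n}}$ recovered from the averaged decomposition stated after (3.8).

```python
import numpy as np

# A minimal numerical sketch of the explicit scheme (3.8), under simplifying
# assumptions NOT made in the paper: H = R^2, C = [0,1]^2, phi == 0 (so that
# Q_beta = P_C), A = I (gamma_bar = 1), f(x) = 0.5*||x - b||^2 (so L = 1),
# gamma = 1, r = 0.5, and V Lipschitz with l = 0.5 (hence r*l < gamma_bar).
b = np.array([2.0, -1.0])
proj_C = lambda x: np.clip(x, 0.0, 1.0)            # P_C for the unit box
grad_f = lambda x: x - b                           # gradient of f
V = lambda x: 0.5 * (np.array([5.0, 5.0]) - x)     # Lipschitz, l = 0.5
r, gamma, L = 0.5, 1.0, 1.0

def T(lam, x):
    # T_lambda recovered from Proj_C(I - gamma*grad f_lambda) = theta*I + (1 - theta)*T_lambda
    theta = (2.0 - gamma * (L + lam)) / 4.0
    return (proj_C(x - gamma * (grad_f(x) + lam * x)) - theta * x) / (1.0 - theta)

x = np.array([0.3, 0.9])
for n in range(1, 3000):
    alpha = 1.0 / np.sqrt(n + 1.0)    # satisfies (C2)
    lam = alpha / (n + 1.0)           # lambda_n = o(alpha_n), satisfies (C3)
    u = proj_C(x)                     # u_n = Q_{beta_n} x_n reduces to P_C when phi == 0
    x = alpha * r * V(u) + (1.0 - alpha) * T(lam, u)

# Here U = Fix(T) = argmin_C f = {P_C(b)} = {(1, 0)} is a singleton,
# so x_n should approach (1, 0) regardless of V.
print(x)
```

Since $U\cap EP\left(\varphi \right)$ is a singleton in this toy instance, the limit predicted by the theorem is ${P}_{C}\left(b\right)=\left(1,0\right)$, independently of the choice of $V$ and $r$.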

Proof It is well known that:

(a)

$\stackrel{ˆ}{x}\in C$ solves the minimization problem (1.5) if and only if for each fixed $0<\gamma <2/L$, $\stackrel{ˆ}{x}$ solves the fixed-point equation

$\stackrel{ˆ}{x}={Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right)\stackrel{ˆ}{x}=\frac{2-\gamma L}{4}\stackrel{ˆ}{x}+\frac{2+\gamma L}{4}T\stackrel{ˆ}{x}.$

It is clear that $\stackrel{ˆ}{x}=T\stackrel{ˆ}{x}$, i.e., $\stackrel{ˆ}{x}\in U=Fix\left(T\right)$.

(b)

The gradient $\mathrm{\nabla }f$ is $1/L$-ism, and hence $\mathrm{\nabla }{f}_{{\lambda }_{n}}=\mathrm{\nabla }f+{\lambda }_{n}I$ is $\frac{1}{L+{\lambda }_{n}}$-ism.

(c)

${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)$ is $\frac{2+\gamma \left(L+{\lambda }_{n}\right)}{4}$-averaged for $\gamma \in \left(0,2/L\right)$; in particular, the following relation holds:

${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)=\frac{2-\gamma \left(L+{\lambda }_{n}\right)}{4}I+\frac{2+\gamma \left(L+{\lambda }_{n}\right)}{4}{T}_{{\lambda }_{n}}={\theta }_{n}I+\left(1-{\theta }_{n}\right){T}_{{\lambda }_{n}}.$
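The decomposition in (c) can be checked numerically: defining ${T}_{{\lambda }_{n}}$ from the displayed identity should yield a nonexpansive map whenever $\gamma \in \left(0,2/L\right)$ and ${\lambda }_{n}\in \left(0,2/\gamma -L\right)$. A small sketch, with illustrative choices of $f$ and $C$ that are ours rather than the paper's:

```python
import numpy as np

# Illustrative sanity check (choices of f and C are ours, not the paper's):
# define T_lam := (Proj_C(I - gamma*grad f_lam) - theta*I)/(1 - theta) with
# theta = (2 - gamma*(L + lam))/4; by (c), T_lam should be nonexpansive for
# gamma in (0, 2/L) and lam in (0, 2/gamma - L).
rng = np.random.default_rng(1)
d = 4
B = rng.standard_normal((d, d))
Q = B.T @ B                              # f(x) = 0.5 x'Qx, grad f = Qx
L = np.linalg.norm(Q, 2)                 # Lipschitz constant of grad f
gamma = 1.0 / L                          # inside (0, 2/L)
lam = 0.1 * (2.0 / gamma - L)            # inside (0, 2/gamma - L)
proj_C = lambda x: np.clip(x, -1.0, 1.0)

def T(lam, x):
    theta = (2.0 - gamma * (L + lam)) / 4.0
    step = proj_C(x - gamma * (Q @ x + lam * x))   # grad f_lam = grad f + lam*I
    return (step - theta * x) / (1.0 - theta)

for _ in range(200):
    x, y = rng.standard_normal(d), rng.standard_normal(d)
    assert np.linalg.norm(T(lam, x) - T(lam, y)) <= np.linalg.norm(x - y) + 1e-10
print("T_lambda is nonexpansive on all sampled pairs")
```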

Since ${\alpha }_{n}\to 0$, we may assume that ${\alpha }_{n}\in \left(0,{\parallel A\parallel }^{-1}\right)$. Now, we first show that $\left\{{x}_{n}\right\}$ is bounded. Indeed, pick $p\in U\cap EP\left(\varphi \right)$, since ${u}_{n}={Q}_{{\beta }_{n}}{x}_{n}$, by Lemma 2.7, we know that

$\parallel {u}_{n}-p\parallel =\parallel {Q}_{{\beta }_{n}}{x}_{n}-{Q}_{{\beta }_{n}}p\parallel \le \parallel {x}_{n}-p\parallel .$
(3.9)

Thus, we derive from (3.5) that

$\begin{array}{rcl}\parallel {x}_{n+1}-p\parallel & =& \parallel {\alpha }_{n}rV{u}_{n}+\left(I-{\alpha }_{n}A\right){T}_{{\lambda }_{n}}{u}_{n}-p\parallel \\ =& \parallel \left(I-{\alpha }_{n}A\right)\left({T}_{{\lambda }_{n}}{u}_{n}-p\right)+{\alpha }_{n}rV{u}_{n}-{\alpha }_{n}A\left(p\right)\parallel \\ \le & \left(1-{\alpha }_{n}\overline{\gamma }\right)\parallel {T}_{{\lambda }_{n}}{u}_{n}-p\parallel +{\alpha }_{n}\parallel rV{u}_{n}-A\left(p\right)\parallel \\ \le & \left(1-{\alpha }_{n}\overline{\gamma }\right)\left(\parallel {T}_{{\lambda }_{n}}{u}_{n}-{T}_{{\lambda }_{n}}p\parallel +\parallel {T}_{{\lambda }_{n}}p-Tp\parallel \right)\\ +{\alpha }_{n}\left(\parallel rV{u}_{n}-rV\left(p\right)\parallel +\parallel rV\left(p\right)-A\left(p\right)\parallel \right)\\ \le & \left(1-{\alpha }_{n}\overline{\gamma }\right)\left(\parallel {u}_{n}-p\parallel +{\lambda }_{n}M\left(p\right)\right)\\ +{\alpha }_{n}\left(rl\parallel {u}_{n}-p\parallel +\parallel rV\left(p\right)-A\left(p\right)\parallel \right)\\ =& \left(1-{\alpha }_{n}\left(\overline{\gamma }-rl\right)\right)\parallel {u}_{n}-p\parallel +\left(1-{\alpha }_{n}\overline{\gamma }\right){\lambda }_{n}M\left(p\right)+{\alpha }_{n}\parallel rV\left(p\right)-A\left(p\right)\parallel \\ \le & \left(1-{\alpha }_{n}\left(\overline{\gamma }-rl\right)\right)\parallel {x}_{n}-p\parallel +{\lambda }_{n}M\left(p\right)+{\alpha }_{n}\parallel rV\left(p\right)-A\left(p\right)\parallel \\ =& \left(1-{\alpha }_{n}\left(\overline{\gamma }-rl\right)\right)\parallel {x}_{n}-p\parallel +{\alpha }_{n}\left(\overline{\gamma }-rl\right)\left[\frac{{\lambda }_{n}}{{\alpha }_{n}}\frac{M\left(p\right)}{\overline{\gamma }-rl}+\frac{\parallel rV\left(p\right)-A\left(p\right)\parallel }{\overline{\gamma }-rl}\right].\end{array}$

Since ${\lambda }_{n}=o\left({\alpha }_{n}\right)$, there exists a real number ${M}^{\mathrm{\prime }}>0$ such that $\frac{{\lambda }_{n}}{{\alpha }_{n}}\le {M}^{\mathrm{\prime }}$ for all $n$. Thus,

$\parallel {x}_{n+1}-p\parallel \le \left(1-{\alpha }_{n}\left(\overline{\gamma }-rl\right)\right)\parallel {x}_{n}-p\parallel +{\alpha }_{n}\left(\overline{\gamma }-rl\right)\frac{{M}^{\mathrm{\prime }}M\left(p\right)+\parallel rV\left(p\right)-A\left(p\right)\parallel }{\overline{\gamma }-rl}.$

By induction, we have

$\parallel {x}_{n}-p\parallel \le max\left\{\parallel {x}_{1}-p\parallel ,\frac{{M}^{\mathrm{\prime }}M\left(p\right)+\parallel rV\left(p\right)-A\left(p\right)\parallel }{\overline{\gamma }-rl}\right\},\phantom{\rule{1em}{0ex}}n\ge 1.$

Hence $\left\{{x}_{n}\right\}$ is bounded. From (3.9), we also derive that $\left\{{u}_{n}\right\}$ is bounded.

Next, we show that $\parallel {x}_{n+1}-{x}_{n}\parallel \to 0$.

Indeed, since

${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)=\frac{2-\gamma \left(L+{\lambda }_{n}\right)}{4}I+\frac{2+\gamma \left(L+{\lambda }_{n}\right)}{4}{T}_{{\lambda }_{n}},$

we have

${T}_{{\lambda }_{n}}=\frac{4{Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)-\left[2-\gamma \left(L+{\lambda }_{n}\right)\right]I}{2+\gamma \left(L+{\lambda }_{n}\right)}.$

So, we obtain that

$\begin{array}{c}\parallel {T}_{{\lambda }_{n}}\left({u}_{n-1}\right)-{T}_{{\lambda }_{n-1}}\left({u}_{n-1}\right)\parallel \hfill \\ \phantom{\rule{1em}{0ex}}=\parallel \frac{4{Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)-\left[2-\gamma \left(L+{\lambda }_{n}\right)\right]I}{2+\gamma \left(L+{\lambda }_{n}\right)}{u}_{n-1}\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}-\frac{4{Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n-1}}\right)-\left[2-\gamma \left(L+{\lambda }_{n-1}\right)\right]I}{2+\gamma \left(L+{\lambda }_{n-1}\right)}{u}_{n-1}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \parallel \frac{4{Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)}{2+\gamma \left(L+{\lambda }_{n}\right)}{u}_{n-1}-\frac{4{Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n-1}}\right)}{2+\gamma \left(L+{\lambda }_{n-1}\right)}{u}_{n-1}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+\parallel \frac{\left[2-\gamma \left(L+{\lambda }_{n}\right)\right]I}{2+\gamma \left(L+{\lambda }_{n}\right)}{u}_{n-1}-\frac{\left[2-\gamma \left(L+{\lambda }_{n-1}\right)\right]I}{2+\gamma \left(L+{\lambda }_{n-1}\right)}{u}_{n-1}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}=\parallel \frac{4\left[2+\gamma \left(L+{\lambda }_{n-1}\right)\right]{Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)-4\left[2+\gamma \left(L+{\lambda }_{n}\right)\right]{Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n-1}}\right)}{\left[2+\gamma \left(L+{\lambda }_{n}\right)\right]\left[2+\gamma \left(L+{\lambda }_{n-1}\right)\right]}{u}_{n-1}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+\frac{4\gamma |{\lambda }_{n}-{\lambda }_{n-1}|}{\left[2+\gamma \left(L+{\lambda }_{n}\right)\right]\left[2+\gamma \left(L+{\lambda }_{n-1}\right)\right]}\parallel {u}_{n-1}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}=\parallel \frac{4\gamma \left({\lambda }_{n-1}-{\lambda }_{n}\right){Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)}{\left[2+\gamma \left(L+{\lambda }_{n}\right)\right]\left[2+\gamma \left(L+{\lambda }_{n-1}\right)\right]}{u}_{n-1}\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+\frac{4\left[2+\gamma \left(L+{\lambda }_{n}\right)\right]\left({Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)-{Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n-1}}\right)\right)}{\left[2+\gamma \left(L+{\lambda }_{n}\right)\right]\left[2+\gamma \left(L+{\lambda }_{n-1}\right)\right]}{u}_{n-1}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+\frac{4\gamma |{\lambda }_{n}-{\lambda }_{n-1}|}{\left[2+\gamma \left(L+{\lambda }_{n}\right)\right]\left[2+\gamma \left(L+{\lambda }_{n-1}\right)\right]}\parallel {u}_{n-1}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \frac{4\gamma |{\lambda }_{n}-{\lambda }_{n-1}|\parallel {Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right){u}_{n-1}\parallel }{\left[2+\gamma \left(L+{\lambda }_{n}\right)\right]\left[2+\gamma \left(L+{\lambda }_{n-1}\right)\right]}\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+\frac{4\left[2+\gamma \left(L+{\lambda }_{n}\right)\right]\parallel {Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right){u}_{n-1}-{Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n-1}}\right){u}_{n-1}\parallel }{\left[2+\gamma \left(L+{\lambda }_{n}\right)\right]\left[2+\gamma \left(L+{\lambda }_{n-1}\right)\right]}\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+\frac{4\gamma |{\lambda }_{n}-{\lambda }_{n-1}|}{\left[2+\gamma \left(L+{\lambda }_{n}\right)\right]\left[2+\gamma \left(L+{\lambda }_{n-1}\right)\right]}\parallel {u}_{n-1}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le |{\lambda }_{n}-{\lambda }_{n-1}|\left[\gamma \parallel {Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right){u}_{n-1}\parallel +4\gamma \parallel {u}_{n-1}\parallel +\gamma \parallel {u}_{n-1}\parallel \right]\hfill \\ \phantom{\rule{1em}{0ex}}\le K|{\lambda }_{n}-{\lambda }_{n-1}|\hfill \end{array}$

for some appropriate constant $K>0$ such that

$K\ge \gamma \parallel {Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right){u}_{n-1}\parallel +5\gamma \parallel {u}_{n-1}\parallel .$

Thus, we get

$\begin{array}{c}\parallel {x}_{n+1}-{x}_{n}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}=\parallel \left[{\alpha }_{n}rV{u}_{n}+\left(I-{\alpha }_{n}A\right){T}_{{\lambda }_{n}}{u}_{n}\right]-\left[{\alpha }_{n-1}rV{u}_{n-1}+\left(I-{\alpha }_{n-1}A\right){T}_{{\lambda }_{n-1}}{u}_{n-1}\right]\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \parallel {\alpha }_{n}rV{u}_{n}-{\alpha }_{n}rV{u}_{n-1}\parallel +\parallel {\alpha }_{n}rV{u}_{n-1}-{\alpha }_{n-1}rV{u}_{n-1}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+\parallel \left(I-{\alpha }_{n}A\right)\left({T}_{{\lambda }_{n}}{u}_{n}-{T}_{{\lambda }_{n}}{u}_{n-1}\right)\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+\parallel \left(I-{\alpha }_{n}A\right){T}_{{\lambda }_{n}}{u}_{n-1}-\left(I-{\alpha }_{n-1}A\right){T}_{{\lambda }_{n-1}}{u}_{n-1}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le {\alpha }_{n}rl\parallel {u}_{n}-{u}_{n-1}\parallel +|{\alpha }_{n}-{\alpha }_{n-1}|r\parallel V{u}_{n-1}\parallel +\left(1-{\alpha }_{n}\overline{\gamma }\right)\parallel {u}_{n}-{u}_{n-1}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+\parallel {T}_{{\lambda }_{n}}{u}_{n-1}-{T}_{{\lambda }_{n-1}}{u}_{n-1}\parallel +\parallel {\alpha }_{n-1}A{T}_{{\lambda }_{n-1}}{u}_{n-1}-{\alpha }_{n}A{T}_{{\lambda }_{n}}{u}_{n-1}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \left(1-{\alpha }_{n}\left(\overline{\gamma }-rl\right)\right)\parallel {u}_{n}-{u}_{n-1}\parallel +|{\alpha }_{n}-{\alpha }_{n-1}|r\parallel V{u}_{n-1}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+\parallel {T}_{{\lambda }_{n}}{u}_{n-1}-{T}_{{\lambda }_{n-1}}{u}_{n-1}\parallel \left(1+{\alpha }_{n-1}\parallel A\parallel \right)+|{\alpha }_{n}-{\alpha }_{n-1}|\parallel A{T}_{{\lambda }_{n}}{u}_{n-1}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \left(1-{\alpha }_{n}\left(\overline{\gamma }-rl\right)\right)\parallel {u}_{n}-{u}_{n-1}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+|{\alpha }_{n}-{\alpha }_{n-1}|\left(r\parallel V{u}_{n-1}\parallel +\parallel A{T}_{{\lambda }_{n}}{u}_{n-1}\parallel \right)\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+|{\lambda }_{n}-{\lambda }_{n-1}|\left(K+K\parallel A\parallel \right).\hfill \end{array}$

Since both $\left\{V{u}_{n-1}\right\}$ and $\left\{A{T}_{{\lambda }_{n}}{u}_{n-1}\right\}$ are bounded, we can take a constant $E>0$ such that

$E\ge r\parallel V{u}_{n-1}\parallel +\parallel A{T}_{{\lambda }_{n}}{u}_{n-1}\parallel ,\phantom{\rule{1em}{0ex}}n\ge 1.$

Consequently,

$\begin{array}{rcl}\parallel {x}_{n+1}-{x}_{n}\parallel & \le & \left(1-{\alpha }_{n}\left(\overline{\gamma }-rl\right)\right)\parallel {u}_{n}-{u}_{n-1}\parallel \\ +E|{\alpha }_{n}-{\alpha }_{n-1}|+|{\lambda }_{n}-{\lambda }_{n-1}|\left(K+\parallel A\parallel K\right).\end{array}$
(3.10)

From ${u}_{n+1}={Q}_{{\beta }_{n+1}}{x}_{n+1}$ and ${u}_{n}={Q}_{{\beta }_{n}}{x}_{n}$, we note that

$\varphi \left({u}_{n+1},y\right)+\frac{1}{{\beta }_{n+1}}〈y-{u}_{n+1},{u}_{n+1}-{x}_{n+1}〉\ge 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C$
(3.11)

and

$\varphi \left({u}_{n},y\right)+\frac{1}{{\beta }_{n}}〈y-{u}_{n},{u}_{n}-{x}_{n}〉\ge 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C.$
(3.12)

Putting $y={u}_{n}$ in (3.11) and $y={u}_{n+1}$ in (3.12), we have

$\varphi \left({u}_{n+1},{u}_{n}\right)+\frac{1}{{\beta }_{n+1}}〈{u}_{n}-{u}_{n+1},{u}_{n+1}-{x}_{n+1}〉\ge 0$

and

$\varphi \left({u}_{n},{u}_{n+1}\right)+\frac{1}{{\beta }_{n}}〈{u}_{n+1}-{u}_{n},{u}_{n}-{x}_{n}〉\ge 0.$

So, from (A2), we have

$〈{u}_{n+1}-{u}_{n},\frac{{u}_{n}-{x}_{n}}{{\beta }_{n}}-\frac{{u}_{n+1}-{x}_{n+1}}{{\beta }_{n+1}}〉\ge 0$

and hence

$〈{u}_{n+1}-{u}_{n},{u}_{n}-{u}_{n+1}+{u}_{n+1}-{x}_{n}-\frac{{\beta }_{n}}{{\beta }_{n+1}}\left({u}_{n+1}-{x}_{n+1}\right)〉\ge 0.$

Since ${lim inf}_{n\to \mathrm{\infty }}{\beta }_{n}>0$, we may assume, without loss of generality, that there exists a real number $a>0$ such that ${\beta }_{n}>a$ for all $n\in \mathbb{N}$. Thus, we have

$\begin{array}{rcl}{\parallel {u}_{n+1}-{u}_{n}\parallel }^{2}& \le & 〈{u}_{n+1}-{u}_{n},{x}_{n+1}-{x}_{n}+\left(1-\frac{{\beta }_{n}}{{\beta }_{n+1}}\right)\left({u}_{n+1}-{x}_{n+1}\right)〉\\ \le & \parallel {u}_{n+1}-{u}_{n}\parallel \left\{\parallel {x}_{n+1}-{x}_{n}\parallel +|1-\frac{{\beta }_{n}}{{\beta }_{n+1}}|\parallel {u}_{n+1}-{x}_{n+1}\parallel \right\},\end{array}$

which implies

$\parallel {u}_{n+1}-{u}_{n}\parallel \le \parallel {x}_{n+1}-{x}_{n}\parallel +\frac{1}{a}|{\beta }_{n+1}-{\beta }_{n}|{M}_{1},$
(3.13)

where ${M}_{1}=sup\left\{\parallel {u}_{n}-{x}_{n}\parallel :n\in \mathbb{N}\right\}$.
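Estimate (3.13) can be spot-checked numerically in an affine model where the resolvent is explicit, namely ${Q}_{\beta }x={\left(I+\beta M\right)}^{-1}x$ for the monotone bifunction $\varphi \left(u,y\right)=〈Mu,y-u〉$ with $M$ positive semidefinite (an illustration of ours, not part of the proof):

```python
import numpy as np

# Numerical spot-check (illustrative, not part of the proof) of the estimate
# behind (3.13): for the resolvent Q_beta x = (I + beta*M)^{-1} x of the
# monotone bifunction phi(u, y) = <M u, y - u> (M positive semidefinite),
#   ||Q_b' x' - Q_b x|| <= ||x' - x|| + |1 - b/b'| * ||Q_b' x' - x'||.
rng = np.random.default_rng(2)
d = 6
B = rng.standard_normal((d, d))
M = B.T @ B

def Q(beta, x):
    return np.linalg.solve(np.eye(d) + beta * M, x)

for _ in range(200):
    x, xp = rng.standard_normal(d), rng.standard_normal(d)
    b, bp = rng.uniform(0.5, 2.0, size=2)
    lhs = np.linalg.norm(Q(bp, xp) - Q(b, x))
    rhs = np.linalg.norm(xp - x) + abs(1.0 - b / bp) * np.linalg.norm(Q(bp, xp) - xp)
    assert lhs <= rhs + 1e-10
print("the estimate behind (3.13) holds on all samples")
```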

From (3.10) and (3.13), we obtain

$\begin{array}{rcl}\parallel {x}_{n+1}-{x}_{n}\parallel & \le & \left(1-{\alpha }_{n}\left(\overline{\gamma }-rl\right)\right)\left(\parallel {x}_{n}-{x}_{n-1}\parallel +\frac{1}{a}|{\beta }_{n}-{\beta }_{n-1}|{M}_{1}\right)\\ +E|{\alpha }_{n}-{\alpha }_{n-1}|+|{\lambda }_{n}-{\lambda }_{n-1}|\left(K+\parallel A\parallel K\right)\\ \le & \left(1-{\alpha }_{n}\left(\overline{\gamma }-rl\right)\right)\parallel {x}_{n}-{x}_{n-1}\parallel +\frac{{M}_{1}}{a}|{\beta }_{n}-{\beta }_{n-1}|\\ +E|{\alpha }_{n}-{\alpha }_{n-1}|+|{\lambda }_{n}-{\lambda }_{n-1}|\left(K+\parallel A\parallel K\right)\\ \le & \left(1-{\alpha }_{n}\left(\overline{\gamma }-rl\right)\right)\parallel {x}_{n}-{x}_{n-1}\parallel \\ +{M}_{2}\left(|{\beta }_{n}-{\beta }_{n-1}|+|{\alpha }_{n}-{\alpha }_{n-1}|+|{\lambda }_{n}-{\lambda }_{n-1}|\right),\end{array}$

where ${M}_{2}=max\left\{\frac{{M}_{1}}{a},E,K+\parallel A\parallel K\right\}$. Hence, by Lemma 2.1, we have

$\underset{n\to \mathrm{\infty }}{lim}\parallel {x}_{n+1}-{x}_{n}\parallel =0.$
(3.14)

Then, from (3.13), (3.14) and $|{\beta }_{n+1}-{\beta }_{n}|\to 0$, we have

$\underset{n\to \mathrm{\infty }}{lim}\parallel {u}_{n+1}-{u}_{n}\parallel =0.$
(3.15)

For any $p\in U\cap EP\left(\varphi \right)$, by the same argument as in the proof of Theorem 3.1, we have

${\parallel {u}_{n}-p\parallel }^{2}\le {\parallel {x}_{n}-p\parallel }^{2}-{\parallel {u}_{n}-{x}_{n}\parallel }^{2}.$
(3.16)

Then, from (3.5) and (3.16), by the same argument as in the proof of Theorem 3.1, we derive that

$\begin{array}{rcl}{\parallel {x}_{n+1}-p\parallel }^{2}& \le & \left({\left(1-{\alpha }_{n}\overline{\gamma }\right)}^{2}+2\left(1-{\alpha }_{n}\overline{\gamma }\right){\alpha }_{n}rl\right)\left({\parallel {x}_{n}-p\parallel }^{2}-{\parallel {u}_{n}-{x}_{n}\parallel }^{2}\right)\\ +{\lambda }_{n}^{2}\cdot {\left(M\left(p\right)\right)}^{2}+2\parallel {u}_{n}-p\parallel \left({\lambda }_{n}M\left(p\right)+{\alpha }_{n}\parallel rV\left(p\right)-A\left(p\right)\parallel \right)\\ +2{\alpha }_{n}{\lambda }_{n}M\left(p\right)\left(rl\parallel {u}_{n}-p\parallel +\parallel rV\left(p\right)-A\left(p\right)\parallel \right)\\ +{\alpha }_{n}^{2}{\left(rl\parallel {u}_{n}-p\parallel +\parallel rV\left(p\right)-A\left(p\right)\parallel \right)}^{2}.\end{array}$

Since both $\left\{{x}_{n}\right\}$ and $\left\{{u}_{n}\right\}$ are bounded, ${\alpha }_{n}\to 0$, ${\lambda }_{n}\to 0$, and $\parallel {x}_{n+1}-{x}_{n}\parallel \to 0$, we have

$\underset{n\to \mathrm{\infty }}{lim}\parallel {x}_{n}-{u}_{n}\parallel =0.$
(3.17)

Next,

$\begin{array}{c}\parallel {x}_{n}-{T}_{{\lambda }_{n}}{x}_{n}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}=\parallel {x}_{n}-{x}_{n+1}+{x}_{n+1}-{T}_{{\lambda }_{n}}{u}_{n}+{T}_{{\lambda }_{n}}{u}_{n}-{T}_{{\lambda }_{n}}{x}_{n}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \parallel {x}_{n}-{x}_{n+1}\parallel +\parallel {x}_{n+1}-{T}_{{\lambda }_{n}}{u}_{n}\parallel +\parallel {T}_{{\lambda }_{n}}{u}_{n}-{T}_{{\lambda }_{n}}{x}_{n}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le \parallel {x}_{n}-{x}_{n+1}\parallel +{\alpha }_{n}\parallel rV{u}_{n}-A{T}_{{\lambda }_{n}}{u}_{n}\parallel +\parallel {u}_{n}-{x}_{n}\parallel \hfill \end{array}$

and then

$\parallel {x}_{n}-{T}_{{\lambda }_{n}}{x}_{n}\parallel \to 0.$
(3.18)

It follows that $\parallel {u}_{n}-{T}_{{\lambda }_{n}}{u}_{n}\parallel \to 0$.

Now, we show that

$\underset{n\to \mathrm{\infty }}{lim sup}〈{x}_{n}-z,\left(rV-A\right)z〉\le 0,$

where $z\in U\cap EP\left(\varphi \right)$ is the unique solution of the variational inequality (3.2).

Indeed, since $\left\{{x}_{n}\right\}$ is bounded, we can take a subsequence $\left\{{x}_{{n}_{k}}\right\}$ of $\left\{{x}_{n}\right\}$ such that ${x}_{{n}_{k}}⇀\stackrel{˜}{x}$ and

$\underset{n\to \mathrm{\infty }}{lim sup}〈{x}_{n}-z,\left(rV-A\right)z〉=\underset{k\to \mathrm{\infty }}{lim}〈{x}_{{n}_{k}}-z,\left(rV-A\right)z〉.$
(3.19)

By (3.17) and ${x}_{{n}_{k}}⇀\stackrel{˜}{x}$, we derive that ${u}_{{n}_{k}}⇀\stackrel{˜}{x}$.

Note that

$\begin{array}{rcl}\parallel {u}_{n}-T{u}_{n}\parallel & \le & \parallel {u}_{n}-{T}_{{\lambda }_{n}}{u}_{n}\parallel +\parallel {T}_{{\lambda }_{n}}{u}_{n}-T{u}_{n}\parallel \\ \le & \parallel {u}_{n}-{T}_{{\lambda }_{n}}{u}_{n}\parallel +{\lambda }_{n}M\left({u}_{n}\right).\end{array}$

Hence, by $\parallel {u}_{n}-{T}_{{\lambda }_{n}}{u}_{n}\parallel \to 0$, we get $\parallel {u}_{n}-T{u}_{n}\parallel \to 0$.

In terms of Lemma 2.2, we get $\stackrel{˜}{x}\in Fix\left(T\right)=U$.

Then, by the same argument as in the proof of Theorem 3.1, we have $\stackrel{˜}{x}\in U\cap EP\left(\varphi \right)$.

Since $z\in U\cap EP\left(\varphi \right)$ is the solution of the variational inequality (3.2), we derive from (3.19) that

$\underset{n\to \mathrm{\infty }}{lim sup}〈{x}_{n}-z,\left(rV-A\right)z〉\le 0.$
(3.20)

Finally, we show that ${x}_{n}\to z$.

As a matter of fact,

$\begin{array}{rcl}{x}_{n+1}-z& =& {\alpha }_{n}rV{u}_{n}+\left(I-{\alpha }_{n}A\right){T}_{{\lambda }_{n}}{u}_{n}-z\\ =& \left(I-{\alpha }_{n}A\right)\left({T}_{{\lambda }_{n}}{u}_{n}-z\right)+{\alpha }_{n}rV{u}_{n}-{\alpha }_{n}A\left(z\right)\\ =& \left(I-{\alpha }_{n}A\right)\left({T}_{{\lambda }_{n}}{u}_{n}-z\right)+{\alpha }_{n}r\left(V{u}_{n}-Vz\right)+{\alpha }_{n}\left(rV\left(z\right)-A\left(z\right)\right).\end{array}$

So, from (3.5) and (3.9), we derive

$\begin{array}{rcl}{\parallel {x}_{n+1}-z\parallel }^{2}& =& 〈\left(I-{\alpha }_{n}A\right)\left({T}_{{\lambda }_{n}}{u}_{n}-z\right),{x}_{n+1}-z〉\\ +{\alpha }_{n}r〈V{u}_{n}-Vz,{x}_{n+1}-z〉+{\alpha }_{n}〈rV\left(z\right)-A\left(z\right),{x}_{n+1}-z〉\\ \le & \left(1-{\alpha }_{n}\overline{\gamma }\right)\parallel {T}_{{\lambda }_{n}}{u}_{n}-z\parallel \parallel {x}_{n+1}-z\parallel +{\alpha }_{n}rl\parallel {u}_{n}-z\parallel \parallel {x}_{n+1}-z\parallel \\ +{\alpha }_{n}〈rV\left(z\right)-A\left(z\right),{x}_{n+1}-z〉\\ \le & \left(1-{\alpha }_{n}\overline{\gamma }\right)\left(\parallel {T}_{{\lambda }_{n}}{u}_{n}-{T}_{{\lambda }_{n}}z\parallel +\parallel {T}_{{\lambda }_{n}}z-Tz\parallel \right)\parallel {x}_{n+1}-z\parallel \\ +{\alpha }_{n}rl\parallel {u}_{n}-z\parallel \parallel {x}_{n+1}-z\parallel +{\alpha }_{n}〈rV\left(z\right)-A\left(z\right),{x}_{n+1}-z〉\\ \le & \left(1-{\alpha }_{n}\overline{\gamma }\right)\parallel {u}_{n}-z\parallel \parallel {x}_{n+1}-z\parallel +\left(1-{\alpha }_{n}\overline{\gamma }\right)\parallel {T}_{{\lambda }_{n}}z-Tz\parallel \parallel {x}_{n+1}-z\parallel \\ +{\alpha }_{n}rl\parallel {u}_{n}-z\parallel \parallel {x}_{n+1}-z\parallel +{\alpha }_{n}〈rV\left(z\right)-A\left(z\right),{x}_{n+1}-z〉\\ \le & \left(1-{\alpha }_{n}\left(\overline{\gamma }-rl\right)\right)\parallel {x}_{n}-z\parallel \parallel {x}_{n+1}-z\parallel \\ +\left(1-{\alpha }_{n}\overline{\gamma }\right){\lambda }_{n}M\left(z\right)\parallel {x}_{n+1}-z\parallel +{\alpha }_{n}〈rV\left(z\right)-A\left(z\right),{x}_{n+1}-z〉\\ \le & \left(1-{\alpha }_{n}\left(\overline{\gamma }-rl\right)\right)\frac{1}{2}\left({\parallel {x}_{n}-z\parallel }^{2}+{\parallel {x}_{n+1}-z\parallel }^{2}\right)\\ +{\alpha }_{n}\left[〈rV\left(z\right)-A\left(z\right),{x}_{n+1}-z〉+\frac{{\lambda }_{n}}{{\alpha }_{n}}M\left(z\right)\parallel {x}_{n+1}-z\parallel \right].\end{array}$

It follows that

$\begin{array}{c}{\parallel {x}_{n+1}-z\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}\le \frac{1-{\alpha }_{n}\left(\overline{\gamma }-rl\right)}{1+{\alpha }_{n}\left(\overline{\gamma }-rl\right)}{\parallel {x}_{n}-z\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+\frac{2{\alpha }_{n}}{1+{\alpha }_{n}\left(\overline{\gamma }-rl\right)}\left[〈rV\left(z\right)-A\left(z\right),{x}_{n+1}-z〉+\frac{{\lambda }_{n}}{{\alpha }_{n}}M\left(z\right)\parallel {x}_{n+1}-z\parallel \right]\hfill \\ \phantom{\rule{1em}{0ex}}\le \left(1-{\alpha }_{n}\left(\overline{\gamma }-rl\right)\right){\parallel {x}_{n}-z\parallel }^{2}\hfill \\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+\frac{2{\alpha }_{n}}{1+{\alpha }_{n}\left(\overline{\gamma }-rl\right)}\left[〈rV\left(z\right)-A\left(z\right),{x}_{n+1}-z〉+\frac{{\lambda }_{n}}{{\alpha }_{n}}M\left(z\right)\parallel {x}_{n+1}-z\parallel \right].\hfill \end{array}$

Since $\left\{{x}_{n}\right\}$ is bounded, we can take a constant ${M}_{3}>0$ such that

${M}_{3}\ge M\left(z\right)\parallel {x}_{n+1}-z\parallel ,\phantom{\rule{1em}{0ex}}n\ge 1.$

Then, we obtain that

${\parallel {x}_{n+1}-z\parallel }^{2}\le \left(1-{\alpha }_{n}\left(\overline{\gamma }-rl\right)\right){\parallel {x}_{n}-z\parallel }^{2}+{\alpha }_{n}{\delta }_{n},$
(3.21)

where ${\delta }_{n}=\frac{2}{1+{\alpha }_{n}\left(\overline{\gamma }-rl\right)}\left[〈rV\left(z\right)-A\left(z\right),{x}_{n+1}-z〉+\frac{{\lambda }_{n}}{{\alpha }_{n}}{M}_{3}\right]$.

By (3.20) and ${\lambda }_{n}=o\left({\alpha }_{n}\right)$, we get ${lim sup}_{n\to \mathrm{\infty }}{\delta }_{n}\le 0$. Applying Lemma 2.1 to (3.21) now yields ${x}_{n}\to z$ as $n\to \mathrm{\infty }$. □
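The recursion (3.21) has the standard form $a_{n+1}\le (1-{\alpha }_{n}c)a_{n}+{\alpha }_{n}{\delta }_{n}$ handled by Lemma 2.1, which forces $a_{n}\to 0$ whenever $\sum {\alpha }_{n}=\mathrm{\infty }$ and ${lim sup}_{n}{\delta }_{n}\le 0$. The following is a small numerical sketch of that mechanism; the choices ${\alpha }_{n}=1/n$, ${\delta }_{n}=1/n$, and $c=0.5$ are illustrative, not values from the paper.

```python
# Numerical sketch of the contraction-type recursion behind (3.21):
#   a_{n+1} = (1 - alpha_n * c) * a_n + alpha_n * delta_n,
# with sum alpha_n = infinity and delta_n -> 0, so a_n -> 0.
# alpha_n = 1/n, delta_n = 1/n, c = 0.5 are illustrative choices.

def run_recursion(a0=10.0, c=0.5, steps=20000):
    a = a0
    for n in range(1, steps + 1):
        alpha = 1.0 / n          # step sizes: sum alpha_n diverges
        delta = 1.0 / n          # perturbations: delta_n -> 0
        a = (1 - alpha * c) * a + alpha * delta
    return a

print(run_recursion())  # tends to 0 as the number of steps grows
```

The decay is slow (roughly polynomial in $n$ for these step sizes), which is typical for diminishing-step schemes of this kind.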

## 4 Application

In this section, we give an application of Theorem 3.2 to the split feasibility problem (SFP for short), which was introduced by Censor and Elfving [25]. Since its inception in 1994, the SFP has received much attention (see [21, 26, 27]) due to its applications in signal processing and image reconstruction, with particular progress in intensity-modulated radiation therapy.

The SFP can mathematically be formulated as the problem of finding a point x with the property

$x\in C\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}Bx\in Q,$
(4.1)

where C and Q are nonempty closed convex subsets of Hilbert spaces ${H}_{1}$ and ${H}_{2}$, respectively. $B:{H}_{1}\to {H}_{2}$ is a bounded linear operator.

It is clear that ${x}^{\ast }$ is a solution to the split feasibility problem (4.1) if and only if ${x}^{\ast }\in C$ and $B{x}^{\ast }-{Proj}_{Q}B{x}^{\ast }=0$. We define the proximity function f by

$f\left(x\right)=\frac{1}{2}{\parallel Bx-{Proj}_{Q}Bx\parallel }^{2}$

and consider the constrained convex minimization problem

$\underset{x\in C}{min}f\left(x\right)=\underset{x\in C}{min}\frac{1}{2}{\parallel Bx-{Proj}_{Q}Bx\parallel }^{2}.$
(4.2)

Then ${x}^{\ast }$ solves the split feasibility problem (4.1) if and only if ${x}^{\ast }$ solves the minimization problem (4.2) with the minimum value equal to 0. Byrne [7] introduced the so-called CQ algorithm to solve the SFP:

${x}_{n+1}={Proj}_{C}\left(I-\gamma {B}^{\ast }\left(I-{Proj}_{Q}\right)B\right){x}_{n},\phantom{\rule{1em}{0ex}}n\ge 0,$
(4.3)

where $0<\gamma <2/{\parallel B\parallel }^{2}$. He proved that the sequence $\left\{{x}_{n}\right\}$ generated by (4.3) converges weakly to a solution of the SFP.
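Since ${\mathrm{\nabla }f\left(x\right)={B}^{\ast }\left(I-{Proj}_{Q}\right)Bx}$, the CQ iteration (4.3) is just a projected gradient step on the proximity function. The sketch below runs it on a made-up toy instance (boxes for C and Q, a diagonal B), chosen only so that both projections have closed forms; none of this data comes from the paper.

```python
# Toy instance of Byrne's CQ iteration (4.3) for the SFP (4.1):
# find x in C with Bx in Q.  C = [0,1]^2 and Q = [0,1]^2 are boxes,
# so the metric projections are coordinatewise clipping; B and the
# boxes are illustrative data, not from the paper.
import numpy as np

B = np.array([[2.0, 0.0],
              [0.0, 1.0]])
proj_C = lambda x: np.clip(x, 0.0, 1.0)      # projection onto C
proj_Q = lambda y: np.clip(y, 0.0, 1.0)      # projection onto Q

def cq_iterate(x0, iters=500):
    L = np.linalg.norm(B, 2) ** 2            # Lipschitz constant ||B||^2
    gamma = 1.0 / L                          # step size in (0, 2/||B||^2)
    x = x0
    for _ in range(iters):
        grad = B.T @ (B @ x - proj_Q(B @ x)) # gradient of the proximity function
        x = proj_C(x - gamma * grad)
    return x

x = cq_iterate(np.array([5.0, -3.0]))
# on exit, x lies in C and Bx lies (approximately) in Q
```

For this instance the solution set is $\left[0,0.5\right]×\left[0,1\right]$, and the iteration lands in it after a few steps; in general (4.3) guarantees only weak convergence, which motivates the strongly convergent scheme proposed next.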

In order to obtain a strong convergence iterative sequence to solve the SFP, we propose the following algorithm:

$\left\{\begin{array}{c}\varphi \left({u}_{n},y\right)+\frac{1}{{\beta }_{n}}〈y-{u}_{n},{u}_{n}-{x}_{n}〉\ge 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C,\hfill \\ {x}_{n+1}={\alpha }_{n}rV{u}_{n}+\left(I-{\alpha }_{n}A\right){T}_{{\lambda }_{n}}{u}_{n}\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\in \mathbb{N},\hfill \end{array}$
(4.4)

where ${u}_{n}={Q}_{{\beta }_{n}}{x}_{n}$. Let $\left\{{T}_{{\lambda }_{n}}\right\}$ satisfy the following conditions:

1. (i)

${Proj}_{C}\left(I-\gamma \left({B}^{\ast }\left(I-{Proj}_{Q}\right)B+{\lambda }_{n}I\right)\right)={\theta }_{n}I+\left(1-{\theta }_{n}\right){T}_{{\lambda }_{n}}$ and $\gamma \in \left(0,2/{\parallel B\parallel }^{2}\right)$;

2. (ii)

${\theta }_{n}=\frac{2-\gamma \left(L+{\lambda }_{n}\right)}{4}$ for all n,

where $V:C\to H$ is Lipschitzian with a constant $l>0$ and $A:C\to H$ is a strongly positive bounded linear operator with a constant $\overline{\gamma }>0$. Suppose that $0<rl<\overline{\gamma }$. We can show that the sequence $\left\{{x}_{n}\right\}$ generated by (4.4) converges strongly to a solution of SFP (4.1) if the sequence $\left\{{\alpha }_{n}\right\}\subset \left(0,1\right)$ and the sequence $\left\{{\lambda }_{n}\right\}$ of parameters satisfy appropriate conditions.
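Conditions (i)-(ii) let ${T}_{{\lambda }_{n}}$ be recovered from the projected regularized-gradient step, since ${T}_{{\lambda }_{n}}u=\left({Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)u-{\theta }_{n}u\right)/\left(1-{\theta }_{n}\right)$. The sketch below runs the resulting iteration on the same toy SFP instance as above, in the special case $\phi \equiv 0$ (so ${u}_{n}={x}_{n}$) and $A=I$; the choices of V, r, and the parameter sequences are illustrative ones satisfying ${\lambda }_{n}=o\left({\alpha }_{n}\right)$ and $\sum {\alpha }_{n}=\mathrm{\infty }$, not values prescribed by the paper.

```python
# Hedged numerical sketch of scheme (4.4) for a toy SFP, in the special
# case phi = 0 (hence u_n = x_n) and A = I.  V, r, alpha_n, lambda_n are
# illustrative choices, not from the paper.
import numpy as np

B = np.array([[2.0, 0.0],
              [0.0, 1.0]])
proj_C = lambda x: np.clip(x, 0.0, 1.0)      # C = [0,1]^2
proj_Q = lambda y: np.clip(y, 0.0, 1.0)      # Q = [0,1]^2
V = lambda x: 0.5 * x                        # Lipschitz with l = 0.5
r = 0.5                                      # r*l = 0.25 < gamma_bar = 1

def rgpa_iterate(x0, iters=2000):
    L = np.linalg.norm(B, 2) ** 2
    gamma = 1.0 / L                          # gamma in (0, 2/L)
    x = x0
    for n in range(1, iters + 1):
        alpha, lam = 1.0 / n, 1.0 / n**2     # lambda_n = o(alpha_n)
        grad = B.T @ (B @ x - proj_Q(B @ x)) + lam * x   # grad f_{lambda_n}
        theta = (2.0 - gamma * (L + lam)) / 4.0          # condition (ii)
        T_x = (proj_C(x - gamma * grad) - theta * x) / (1.0 - theta)
        x = alpha * r * V(x) + (1.0 - alpha) * T_x       # scheme (4.4), A = I
    return x

x = rgpa_iterate(np.array([5.0, -3.0]))
```

With these choices the limit point z is characterized by the variational inequality $〈rV\left(z\right)-A\left(z\right),x-z〉\le 0$ over the solution set, which here selects a point near the origin.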

Applying Theorem 3.2, we obtain the following result.

Theorem 4.1 Assume that the split feasibility problem (4.1) is consistent. Let the sequence $\left\{{x}_{n}\right\}$ be generated by (4.4), where the sequences $\left\{{\beta }_{n}\right\},\left\{{\alpha }_{n}\right\}\subset \left(0,1\right)$ and the sequence $\left\{{\lambda }_{n}\right\}$ satisfy the conditions (C1)-(C3). Then the sequence $\left\{{x}_{n}\right\}$ converges strongly to a solution of the split feasibility problem (4.1).

Proof By the definition of the proximity function f, we have

$\mathrm{\nabla }f\left(x\right)={B}^{\ast }\left(I-{Proj}_{Q}\right)Bx$

and $\mathrm{\nabla }f$ is Lipschitz continuous, i.e.,

$\parallel \mathrm{\nabla }f\left(x\right)-\mathrm{\nabla }f\left(y\right)\parallel \le L\parallel x-y\parallel ,$

where $L={\parallel B\parallel }^{2}$.

Set ${f}_{{\lambda }_{n}}\left(x\right)=f\left(x\right)+\frac{{\lambda }_{n}}{2}{\parallel x\parallel }^{2}$; consequently,

$\begin{array}{rcl}\mathrm{\nabla }{f}_{{\lambda }_{n}}\left(x\right)& =& \mathrm{\nabla }f\left(x\right)+{\lambda }_{n}I\left(x\right)\\ =& {B}^{\ast }\left(I-{Proj}_{Q}\right)Bx+{\lambda }_{n}x.\end{array}$
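The formula for $\mathrm{\nabla }{f}_{{\lambda }_{n}}$ can be checked against central finite differences of ${f}_{{\lambda }_{n}}$ itself; the matrix B, the box Q, and the point x below are illustrative data chosen so that ${Proj}_{Q}$ is locally constant (hence f is differentiable) at x.

```python
# Finite-difference sanity check (illustrative data) of
#   grad f_lambda(x) = B^T (I - Proj_Q) B x + lambda * x
# for f(x) = 0.5 * ||Bx - Proj_Q(Bx)||^2 + (lambda/2) * ||x||^2.
import numpy as np

B = np.array([[2.0, 1.0],
              [0.0, 1.0]])
proj_Q = lambda y: np.clip(y, 0.0, 1.0)      # Q = [0,1]^2
lam = 0.1

f_lam = lambda x: (0.5 * np.sum((B @ x - proj_Q(B @ x))**2)
                   + 0.5 * lam * np.sum(x**2))
grad_f_lam = lambda x: B.T @ (B @ x - proj_Q(B @ x)) + lam * x

x = np.array([3.0, -2.0])                    # Bx = (4, -2), away from Q's corners
eps = 1e-6
num = np.array([(f_lam(x + eps * e) - f_lam(x - eps * e)) / (2 * eps)
                for e in np.eye(2)])
assert np.allclose(num, grad_f_lam(x), atol=1e-4)  # formula matches the numerics
```

The check is valid only where ${Proj}_{Q}\left(Bx\right)$ does not sit on the boundary structure of Q; at such points the squared-distance term remains differentiable with the same gradient formula.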

Then the iterative scheme (4.4) is equivalent to

$\left\{\begin{array}{c}\varphi \left({u}_{n},y\right)+\frac{1}{{\beta }_{n}}〈y-{u}_{n},{u}_{n}-{x}_{n}〉\ge 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C,\hfill \\ {x}_{n+1}={\alpha }_{n}rV{u}_{n}+\left(I-{\alpha }_{n}A\right){T}_{{\lambda }_{n}}{u}_{n}\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\in \mathbb{N},\hfill \end{array}$
(4.5)

where ${u}_{n}={Q}_{{\beta }_{n}}{x}_{n}$ and $\left\{{T}_{{\lambda }_{n}}\right\}$ satisfies the following conditions:

1. (i)

${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)={\theta }_{n}I+\left(1-{\theta }_{n}\right){T}_{{\lambda }_{n}}$ and $\gamma \in \left(0,2/L\right)$;

2. (ii)

${\theta }_{n}=\frac{2-\gamma \left(L+{\lambda }_{n}\right)}{4}$ for all n.

By Theorem 3.2, we obtain the conclusion immediately. □

## 5 Conclusion

Methods for solving the equilibrium problem (EP) and the constrained convex minimization problem have each been studied extensively in Hilbert spaces. In 2012, Tian and Liu [20] proposed an iterative method for finding a common solution of an EP and a constrained convex minimization problem. In this paper, for the first time, we combine the regularized gradient-projection algorithm with the averaged mappings approach to propose implicit and explicit algorithms for finding a common solution of an EP and a constrained convex minimization problem; this common solution also solves a certain variational inequality.

## References

1. Brezis H: Operateurs Maximaux Monotones et Semi-Groups de Contractions dans les Espaces de Hilbert. North-Holland, Amsterdam; 1973.

2. Jaiboon C, Kumam P: A hybrid extragradient viscosity approximation method for solving equilibrium problems and fixed point problems of infinitely many nonexpansive mappings. Fixed Point Theory Appl. 2009. doi:10.1155/2009/374815

3. Jitpeera T, Katchang P, Kumam P: A viscosity of Cesàro mean approximation methods for a mixed equilibrium, variational inequalities, and fixed point problems. Fixed Point Theory Appl. 2011. doi:10.1155/2011/945051

4. Bertsekas DP, Gafni EM: Projection methods for variational inequalities with applications to the traffic assignment problem. Math. Program. Stud. 1982, 17: 139–159. 10.1007/BFb0120965

5. Han D, Lo HK: Solving non-additive traffic assignment problems, a descent method for cocoercive variational inequalities. Eur. J. Oper. Res. 2004, 159: 529–544. 10.1016/S0377-2217(03)00423-5

6. Bauschke H, Borwein J: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367–426. 10.1137/S0036144593251710

7. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006

8. Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53: 475–504. 10.1080/02331930412331327157

9. Xu HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. 10.1007/s10957-011-9837-z

10. Yamada I: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Edited by: Butnariu D, Censor Y, Reich S. Elsevier, New York; 2001:473–504.

11. Takahashi S, Takahashi W: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 2007, 331: 506–515. 10.1016/j.jmaa.2006.08.036

12. Combettes PL, Hirstoaga SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6: 117–136.

13. Flam SD, Antipin AS: Equilibrium programming using proximal-like algorithms. Math. Program. 1997, 78: 29–41.

14. He HM, Liu SY, Cho YJ: An explicit method for systems of equilibrium problems and fixed points of infinite family of nonexpansive mappings. J. Comput. Appl. Math. 2011, 235: 4128–4139. 10.1016/j.cam.2011.03.003

15. Qin XL, Cho YJ, Kang SM: Convergence analysis on hybrid projection algorithms for equilibrium problems and variational inequality problems. Math. Model. Anal. 2009, 14: 335–351. 10.3846/1392-6292.2009.14.335-351

16. Moudafi A: Viscosity approximation method for fixed-points problems. J. Math. Anal. Appl. 2000, 241: 46–55. 10.1006/jmaa.1999.6615

17. Xu HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. 10.1016/j.jmaa.2004.04.059

18. Marino G, Xu HK: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2006, 318: 43–52. 10.1016/j.jmaa.2005.05.028

19. Ceng LC, Ansari QH, Yao JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74: 5286–5302. 10.1016/j.na.2011.05.005

20. Tian M, Liu L: General iterative methods for equilibrium and constrained convex minimization problem. Optimization 2012. doi:10.1080/02331934.2012.713361

21. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010., 26: Article ID 105018

22. Martinez-Yanes C, Xu HK: Strong convergence of the CQ method for fixed-point iteration processes. Nonlinear Anal. 2006, 64: 2400–2411. 10.1016/j.na.2005.08.018

23. Hundal H: An alternating projection that does not converge in norm. Nonlinear Anal. 2004, 57: 35–61. 10.1016/j.na.2003.11.004

24. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.

25. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692

26. López G, Martin-Márquez V, Wang FH, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012., 28: Article ID 085004

27. Zhao JL, Zhang YJ, Yang QZ: Modified projection methods for the split feasibility problem and the multiple-set split feasibility problem. Appl. Math. Comput. 2012, 219: 1644–1653. 10.1016/j.amc.2012.08.005

## Acknowledgements

The authors wish to thank the referees for their helpful comments, which notably improved the presentation of this manuscript. This work was supported by the Fundamental Research Funds for the Central Universities (No. ZXH2012K001).

## Author information


### Corresponding author

Correspondence to Ming Tian.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All the authors read and approved the final manuscript.


Tian, M., Huang, LH. Regularized gradient-projection methods for equilibrium and constrained convex minimization problems. J Inequal Appl 2013, 243 (2013). https://doi.org/10.1186/1029-242X-2013-243 