
Methods for solving constrained convex minimization problems and finding zeros of the sum of two operators in Hilbert spaces

Abstract

In this paper, let H be a real Hilbert space and let C be a nonempty, closed, and convex subset of H. Let \(A:C\rightarrow H\) be an α-inverse-strongly monotone mapping and let \(B:H\rightarrow H\) be a maximal monotone operator whose domain is included in C. Let U denote the solution set of a constrained convex minimization problem, and assume that \((A+B)^{-1}0\cap U\neq\emptyset\). Based on the viscosity approximation method and the gradient-projection algorithm, we propose composite iterative algorithms to find a common solution of the two problems under study; we then regularize the scheme and use the regularized gradient-projection algorithm to find the unique solution. The point \(q\in (A+B)^{-1}0\cap U\) which we find solves the variational inequality \(\langle(I-f)q, p-q\rangle\geq0\), \(\forall p\in(A+B)^{-1}0\cap U\). Under suitable conditions, the split feasibility problem can be transformed into a constrained convex minimization problem, and the problem of finding zeros of the sum of two operators can be transformed into a variational inequality problem and a fixed point problem. Furthermore, new strong convergence theorems and applications are obtained in Hilbert spaces, which are useful in nonlinear analysis and optimization.

1 Introduction

Throughout this paper, let H be a real Hilbert space with the inner product \(\langle\cdot,\cdot\rangle\) and norm \(\|\cdot\|\). Let C be a nonempty, closed, and convex subset of H. Let \(\mathbb{N}\) and \(\mathbb{R}\) be the sets of positive integers and real numbers, respectively. In the following, we introduce some operators which will be used in this paper.

  • \(f:C\rightarrow C\) is a contraction if there exists \(k\in(0,1)\) such that \(\|f(x)-f(y)\|\leq k\|x-y\|\) for all \(x,y\in C\).

  • \(T:C\rightarrow C\) is nonexpansive if \(\|Tx-Ty\|\leq\|x-y\|\) for all \(x,y\in C\).

  • \(V:C\rightarrow C\) is Lipschitz continuous if there exists a constant \(L>0\) such that \(\|Vx-Vy\|\leq L\|x-y\|\) for all \(x,y\in C\).

  • \(W:C\rightarrow H\) is a strict pseudo-contraction [1] if there exists \(t\in\mathbb{R}\) with \(0\leq t<1\) such that \(\|Wx-Wy\|^{2}\leq\|x-y\|^{2}+t\|(I-W)x-(I-W)y\|^{2}\) for all \(x,y\in C\).

  • \(P_{C}: H\rightarrow C\) is the metric projection if \(\|x-P_{C}x\|\leq\|x-y\|\) for all \(x\in H\) and \(y\in C\). \(P_{C}\) is firmly nonexpansive if \(\|P_{C}x-P_{C}y\|^{2}\leq\langle P_{C}x-P_{C}y,x-y\rangle\) for all \(x,y\in H\) (a numerical sketch follows this list).

  • \(A:H\rightarrow H\) is monotone if \(\langle x-y, Ax-Ay\rangle\geq0\) for all \(x,y\in H\).

  • Given a number \(\eta>0\), \(A:H\rightarrow H\) is η-strongly monotone if \(\langle x-y, Ax-Ay\rangle\geq\eta\|x-y\|^{2}\) for all \(x,y\in H\).

  • Given a number \(\alpha>0\), \(A:C\rightarrow H\) is α-inverse strongly monotone (α-ism) if \(\langle x-y, Ax-Ay\rangle\geq \alpha\|Ax-Ay\|^{2}\) for all \(x,y\in C\).
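
To make the projection concrete, here is a minimal numerical sketch (our own illustration, not part of the original text): it implements the metric projection onto a closed Euclidean ball and checks the firm-nonexpansiveness inequality on random points.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Metric projection P_C onto the closed ball C = {x : ||x|| <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

rng = np.random.default_rng(0)
for _ in range(5):
    x, y = rng.normal(size=3), rng.normal(size=3)
    px, py = project_ball(x), project_ball(y)
    # Firm nonexpansiveness: ||P_C x - P_C y||^2 <= <P_C x - P_C y, x - y>.
    assert np.linalg.norm(px - py) ** 2 <= np.dot(px - py, x - y) + 1e-12
print("firm nonexpansiveness verified on random samples")
```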

We first consider the problem of finding zero points of a maximal monotone operator:

$$ B^{-1}0=\{x\in H: 0\in Bx\}, $$

where B is a mapping of H into \(2^{H}\). The effective domain of B is denoted by domB or \(D(B)\), that is, \(\operatorname{dom}B=\{x\in H: Bx\neq\emptyset\}\). A multi-valued mapping B is said to be a monotone operator on H if \(\langle x-y, u-v\rangle\geq0\) for all \(x,y\in \operatorname{dom}B\), \(u\in Bx\), \(v\in By\). A monotone operator B on H is said to be maximal if its graph is not properly contained in the graph of any other monotone operator on H. For a maximal monotone operator B on H and \(r>0\), we may define a single-valued operator \(J_{r}=(I+rB)^{-1}: H\rightarrow \operatorname{dom}B\), which is called the resolvent of B for r. It is well known that \(B^{-1}0=\operatorname{Fix}(J_{r})\) for all \(r>0\) and that the resolvent \(J_{r}\) is firmly nonexpansive, i.e.,

$$ \|J_{r}x-J_{r}y\|^{2}\leq\langle x-y, J_{r}x-J_{r}y\rangle,\quad \forall x,y\in H. $$
(1.1)
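
As a concrete illustration (our own, under the assumption \(B=\partial|\cdot|\) on \(\mathbb{R}\)): the resolvent \(J_{r}=(I+rB)^{-1}\) is then the soft-thresholding map, and (1.1) can be checked numerically.

```python
import numpy as np

def resolvent_abs(x, r):
    """Resolvent J_r = (I + r*B)^{-1} for B = the subdifferential of |.| on R;
    the inclusion x in y + r*d|y| is solved by the soft-thresholding map."""
    return np.sign(x) * max(abs(x) - r, 0.0)

r = 0.7
rng = np.random.default_rng(1)
for _ in range(5):
    x, y = rng.normal(), rng.normal()
    jx, jy = resolvent_abs(x, r), resolvent_abs(y, r)
    # Firm nonexpansiveness (1.1): |J_r x - J_r y|^2 <= (J_r x - J_r y)(x - y).
    assert (jx - jy) ** 2 <= (jx - jy) * (x - y) + 1e-12
# Fix(J_r) = B^{-1}0 = {0}: soft-thresholding fixes only the origin (for r > 0).
assert resolvent_abs(0.0, r) == 0.0
```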

Several authors have introduced algorithms for finding zeros of operators (see [2]) and of monotone operators (see [3]).

We consider the following constrained convex minimization problem:

$$ \min_{x\in C}g(x), $$
(1.2)

where \(g:C\rightarrow\mathbb{R}\) is a real-valued convex function. Assume that the constrained convex minimization problem (1.2) is solvable, and let U denote the solution set of (1.2). Several methods for solving constrained convex minimization problems have been proposed (see [4] and [5]). The gradient-projection algorithm generates a sequence \(\{x_{n}\}_{n=0}^{\infty}\) according to the recursive formula:

$$ x_{n+1}=P_{C}(I-\beta\nabla g)x_{n}, \quad\forall n\geq0, $$
(1.3)

or more generally,

$$ x_{n+1}=P_{C}(I-\beta_{n}\nabla g)x_{n}, \quad\forall n\geq0, $$
(1.4)

where the parameters \(\beta_{n}\) are positive real numbers and \(P_{C}\) is the metric projection from H onto C. It is well known that the convergence of algorithms (1.3) and (1.4) depends on the behavior of the gradient \(\nabla g\). If \(\nabla g\) is only assumed to be inverse-strongly monotone, then the sequence \(\{x_{n}\}\) defined by algorithm (1.3) or (1.4) can only converge weakly to a minimizer of (1.2). If \(\nabla g\) is Lipschitz continuous and strongly monotone, then the sequence generated by (1.3) or (1.4) converges strongly to a minimizer of (1.2).
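
As an illustration of (1.3), here is a minimal sketch under our own toy assumptions (\(g(x)=\frac{1}{2}\|Mx-b\|^{2}\) over a box C; the data M, b are hypothetical): \(\nabla g\) is Lipschitz with constant \(L=\|M^{T}M\|\), and the projection is a coordinatewise clip.

```python
import numpy as np

# Assumed toy problem: minimize g(x) = 0.5*||M x - b||^2 over the box C = [0, 1]^2.
M = np.array([[2.0, 0.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
grad_g = lambda x: M.T @ (M @ x - b)
P_C = lambda x: np.clip(x, 0.0, 1.0)   # metric projection onto the box
L = np.linalg.norm(M.T @ M, 2)         # Lipschitz constant of grad g (spectral norm)
beta = 1.0 / L                         # any fixed beta in (0, 2/L)

x = np.zeros(2)
for _ in range(500):
    x = P_C(x - beta * grad_g(x))      # GPA step (1.3)
print(x)                               # approximate minimizer of g over C
```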

However, the minimization problem (1.2) may have more than one solution, so regularization is essential for selecting a unique solution of (1.2). Some authors have used regularization methods to solve minimization problems (see [6]), and other methods have been applied to hierarchical minimization problems (see [7]). Now, we consider the following regularized minimization problem:

$$ \min_{x\in C}g_{\lambda}(x):=g(x)+\frac{\lambda}{2}\|x \|^{2}, $$

where \(\lambda>0\) is the regularization parameter and g is a convex function with a \(1/L\)-ism gradient \(\nabla g\). Then the regularized gradient-projection algorithm generates a sequence \(\{x_{n}\}_{n=0}^{\infty}\) by the following recursive formula:

$$ x_{n+1}=P_{C}(I-\beta\nabla g_{\lambda _{n}})x_{n}=P_{C}\bigl(x_{n}-\beta( \nabla g+\lambda_{n}I) (x_{n})\bigr), $$
(1.5)

where the parameter \(\lambda_{n}>0\), β is a constant with \(0<\beta<2/L\), and \(P_{C}\) is the metric projection from H onto C. It is known that the sequence \(\{x_{n}\}_{n=0}^{\infty}\) generated by algorithm (1.5) converges weakly to a minimizer of (1.2) in the setting of infinite-dimensional spaces (see [8]).
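
Continuing the toy problem above (again our own assumption, not the paper's example), the regularized iteration (1.5) only changes the gradient step by the extra term \(\lambda_{n}x_{n}\); a vanishing choice such as \(\lambda_{n}=1/(n+1)\) is admissible.

```python
import numpy as np

# Same assumed toy problem as before; (1.5) adds the term lambda_n * x_n.
M = np.array([[2.0, 0.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
grad_g = lambda x: M.T @ (M @ x - b)
P_C = lambda x: np.clip(x, 0.0, 1.0)
beta = 1.0 / np.linalg.norm(M.T @ M, 2)        # beta in (0, 2/L)

x = np.zeros(2)
for n in range(500):
    lam = 1.0 / (n + 1)                        # regularization parameter, lambda_n -> 0
    x = P_C(x - beta * (grad_g(x) + lam * x))  # RGPA step (1.5)
print(x)
```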

The subdifferential of a lower semicontinuous convex function and the indicator function will also be used in this paper. See Section 3 for more details as regards ∂h and \(\partial i_{C}\).

In 2000, Moudafi [9] introduced the viscosity approximation method for nonexpansive mappings, which was later extended in [10]. Let f be a contraction on H. Starting with an arbitrary initial point \(x_{0}\in H\), define a sequence \(\{x_{n}\}\) recursively by

$$ x_{n+1} = \alpha_{n}f(x_{n})+(1- \alpha_{n})Tx_{n}, \quad n\geq0, $$
(1.6)

where T is a nonexpansive mapping; we use \(\operatorname{Fix}(T)\) to denote the set of fixed points of T, i.e., \(\operatorname{Fix}(T)=\{x\in H: x=Tx\}\).

In 2007, for finding a common element of the solution set of an equilibrium problem \(EP(F)\) and the set of fixed points of a nonexpansive mapping, Takahashi and Takahashi [11] introduced the following iterative scheme by the viscosity approximation method in a Hilbert space: \(x_{1}\in H\) and

$$ \left \{ \textstyle\begin{array}{@{}l} F(u_{n}, y)+\frac{1}{r_{n}}\langle y-u_{n}, u_{n}-x_{n}\rangle\geq0, \quad \forall y\in C,\\ x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n})T(u_{n}),\quad \forall n\in \mathbb{N}, \end{array}\displaystyle \right . $$
(1.7)

where \(\{\alpha_{n}\}\subset(0,1)\) and \(\{r_{n}\}\subset(0,\infty)\) satisfy some appropriate conditions. Further, they proved that \(\{x_{n}\}\) and \(\{u_{n}\}\) converge strongly to \(z\in \operatorname{Fix}(T)\cap EP(F)\), where \(z=P_{\operatorname{Fix}(T)\cap EP(F)}f(z)\).

In 2012, Tian and Liu [12] introduced the following iterative method in a Hilbert space: \(x_{1}\in C\) and

$$ \left \{ \textstyle\begin{array}{@{}l} F(u_{n}, y)+\frac{1}{r_{n}}\langle y-u_{n}, u_{n}-x_{n}\rangle\geq0, \quad \forall y\in C,\\ x_{n+1}=\alpha_{n}\gamma f(u_{n})+(I-\alpha_{n}A)T_{n}(u_{n}),\quad \forall n\in\mathbb{N}, \end{array}\displaystyle \right . $$
(1.8)

where \(F: C\times C\rightarrow\mathbb{R}\), \(u_{n}=Q_{\beta_{n}}(x_{n})\), \(P_{C}(I-\lambda_{n}\nabla g)=\theta_{n}I+(1-\theta_{n})T_{n}\), \(\theta_{n}=\frac{2-\lambda_{n}L}{4}\), \(\{\lambda_{n}\}\subset(0,2/L)\), and \(\{\alpha_{n}\}\), \(\{r_{n}\}\), \(\{\theta_{n}\}\) satisfy appropriate conditions. Further, they proved that the sequence \(\{x_{n}\}\) converges strongly to a point \(q\in U\cap EP(F)\), which solves the variational inequality

$$ \bigl\langle (A-\gamma f)q,q-z\bigr\rangle \leq0, \quad z\in U\cap EP(F). $$

This was the first time that the equilibrium problem and the constrained convex minimization problem were solved together.

Also in 2012, Lin and Takahashi [13] proposed the following iterative scheme in a Hilbert space: \(x_{1}=x\in H\) and \(\{x_{n}\}\subset H\) is a sequence generated by

$$ x_{n+1}=\alpha_{n}\gamma g(x_{n})+(I- \alpha_{n}V)J_{\lambda_{n}}(I-\lambda _{n}A)T_{r_{n}}x_{n}, \quad\forall n\in\mathbb{N}. $$
(1.9)

Under appropriate conditions, it is proved that the sequence \(\{x_{n}\}\) generated by (1.9) converges strongly to a point \(z_{0}\in(A+B)^{-1}0\cap F^{-1}0\) which is a unique fixed point of \(P_{(A+B)^{-1}0\cap F^{-1}0}(I-V+\gamma g)\) in \((A+B)^{-1}0\cap F^{-1}0\). This point \(z_{0}\) is also a unique solution of the hierarchical variational inequality

$$ \bigl\langle (V-\gamma g)z_{0}, q-z_{0}\bigr\rangle \geq0, \quad \forall q\in(A+B)^{-1}0\cap F^{-1}0. $$

In 2013, Kong et al. [14] proposed a multistep hybrid extragradient method for triple hierarchical variational inequalities.

In this paper, motivated and inspired by the above results, we introduce two new iterative algorithms. The first is: \(x_{1}\in C\) and

$$ \left \{ \textstyle\begin{array}{@{}l} u_{n}=J_{r_{n}}(I-r_{n}A)(x_{n}),\\ x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n})T_{n}(u_{n}),\quad \forall n\in\mathbb{N}, \end{array}\displaystyle \right . $$
(1.10)

to find a common element of \((A+B)^{-1}0\cap U\), where \(T_{n}=P_{C}(I-\beta_{n}\nabla g)\), \(0< b\leq\beta_{n}\leq2/L\).

The second is: \(x_{1}\in C\) and

$$ \left \{ \textstyle\begin{array}{@{}l} u_{n}=J_{r_{n}}(I-r_{n}A)(x_{n}),\\ x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n})T_{\lambda_{n}}(u_{n}),\quad \forall n\in\mathbb{N}, \end{array}\displaystyle \right . $$
(1.11)

to find a unique solution of \((A+B)^{-1}0\cap U\), where \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), \(\nabla g_{\lambda_{n}}=\nabla g+\lambda_{n}I\), \(\beta\in(0,2/L)\).

Under suitable conditions, it is proved that both of the sequences \(\{x_{n}\}\) generated by (1.10) and (1.11) converge strongly to a point \(q\in(A+B)^{-1}0\cap U\), which solves the variational inequality

$$ \bigl\langle (I-f)q,q-p\bigr\rangle \leq0, \quad \forall p \in(A+B)^{-1}0\cap U. $$
(1.12)

Equivalently, \(q=P_{(A+B)^{-1}0\cap U}f(q)\).
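
To fix ideas, here is a schematic sketch of iteration (1.11) under our own simplifying assumptions (not the paper's numerical example): \(A=0\) and \(B=\partial i_{C}\), so that \(J_{r_{n}}(I-r_{n}A)=P_{C}\); \(g(x)=\frac{1}{2}\|Mx-b\|^{2}\) with hypothetical data M, b; and \(f(x)=x/2\) as the contraction.

```python
import numpy as np

# Toy instantiation of (1.11); all problem data below are our assumptions.
M = np.array([[2.0, 0.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
grad_g = lambda x: M.T @ (M @ x - b)
P_C = lambda x: np.clip(x, 0.0, 1.0)           # also plays the role of J_{r_n} here
f = lambda x: 0.5 * x                          # contraction with constant k = 1/2
beta = 1.0 / np.linalg.norm(M.T @ M, 2)        # beta in (0, 2/L)

x = np.ones(2)
for n in range(2000):
    alpha = 1.0 / (n + 2)                      # alpha_n -> 0, sum alpha_n = infinity
    lam = 1.0 / (n + 2) ** 2                   # lambda_n = o(alpha_n)
    u = P_C(x)                                 # u_n = J_{r_n}(I - r_n A)(x_n)
    t = P_C(u - beta * (grad_g(u) + lam * u))  # T_{lambda_n}(u_n)
    x = alpha * f(x) + (1 - alpha) * t         # viscosity step (1.11)
print(x)                                       # approximates q = P_{(A+B)^{-1}0 ∩ U} f(q)
```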

The main purpose of this paper is to find a solution of \((A+B)^{-1}0\cap U\) by using the gradient-projection algorithm. Then we use the regularized gradient-projection composite iterative method to find the unique solution of \((A+B)^{-1}0\cap U\). In the case that the maximal monotone operator \(B=\partial i_{C}\), the problem of finding a unique solution in \((A+B)^{-1}0\cap U\) is equivalent to the problem of finding a unique solution in \(VI(C,A)\cap U\). In the case \(B=\partial i_{C}\) and \(A=I-W\), \((A+B)^{-1}0\) coincides with \(\operatorname{Fix}(W)\).

The paper is organized as follows: in Section 2, we introduce some useful properties and lemmas. In Section 3, we prove our main results and apply them to the variational inequality problem, the fixed point problem, and the split feasibility problem. In the final section, we give conclusions based on the main results.

We will use the following notations:

  1. ‘⇀’ for weak convergence and ‘→’ for strong convergence;

  2. \(\operatorname{Fix}(T)\) denotes the set of fixed points of the mapping T;

  3. U denotes the solution set of (1.2);

  4. ‘GPA’ for the gradient-projection algorithm and ‘RGPA’ for the regularized gradient-projection algorithm.

2 Preliminaries

In this section, we give our preliminaries which will be useful for the main results in the next section.

Throughout this paper, we always assume that C is a nonempty, closed, and convex subset of a real Hilbert space H.

The following inequality holds in an inner product space X:

$$ \|x+y\|^{2}\leq\|x\|^{2}+2\langle y,x+y \rangle, \quad \forall x,y\in X. $$
(2.1)

We need some facts and tools in a real Hilbert space H which are listed as lemmas below.

Firstly, we recall that the metric (nearest point) projection from H onto C is the mapping \(P_{C}: H\rightarrow C\) defined as follows: given \(x\in H\), \(P_{C}x\) is the unique point in C with the property

$$ \|x-P_{C}x\|=\inf_{y\in C}\|x-y\|=: d(x,C). $$

\(P_{C}\) is characterized as follows.

Lemma 2.1

Let \(x\in H\) and \(y\in C\). Then \(y=P_{C}x\) if and only if the following inequality holds:

$$ \langle x-y,y-z\rangle\geq0, \quad \forall z\in C. $$

Then we introduce the following lemma which is about the resolvent of the maximal monotone operator.

Lemma 2.2

(see [15–17])

Let H be a real Hilbert space and let B be a maximal monotone operator on H. For \(r>0\) and \(x\in H\), define the resolvent \(J_{r}x\). Then the following holds:

$$ \frac{s-t}{s}\langle J_{s}x-J_{t}x, J_{s}x-x\rangle\geq\|J_{s}x-J_{t}x \|^{2} $$

for all \(s,t>0\) and \(x\in H\). In particular,

$$ \|J_{s}x-J_{t}x\|\leq\bigl(|s-t|/s\bigr)\|x-J_{s}x\| $$

for all \(s,t>0\) and \(x\in H\).

Besides, the following two lemmas are extremely important in the proof of theorems.

Lemma 2.3

[18]

Assume that \(\{a_{n}\}_{n=0}^{\infty}\) is a sequence of nonnegative real numbers such that

$$a_{n+1} \leq(1-\gamma_{n})a_{n} + \gamma_{n}\delta_{n} +\beta _{n}, \quad n \geq0, $$

where \(\{\gamma_{n}\}_{n=0}^{\infty}\) and \(\{\beta_{n}\}_{n=0}^{\infty}\) are sequences in \((0,1)\) and \(\{\delta_{n}\}_{n=0}^{\infty}\) is a sequence in \(\mathbb{R}\) such that

  (i) \(\sum_{n=0}^{\infty}\gamma_{n} = \infty\);

  (ii) either \(\limsup_{n\rightarrow\infty}\delta_{n} \leq0\) or \(\sum_{n=0}^{\infty}\gamma_{n}|\delta_{n}| < \infty\);

  (iii) \(\sum_{n=0}^{\infty}\beta_{n} < \infty\).

Then \(\lim_{n\rightarrow\infty}a_{n} = 0\).

The so-called demiclosed principle for nonexpansive mappings will often be used.

Lemma 2.4

(Demiclosed principle [19])

Let \(T : C\rightarrow C\) be a nonexpansive mapping with \(\operatorname{Fix}(T)\neq\emptyset\). If \(\{x_{n}\}_{n=1}^{\infty}\) is a sequence in C weakly converging to x and if \(\{(I-T)x_{n}\}_{n=1}^{\infty}\) converges strongly to y, then \((I-T)x = y\). In particular, if \(y = 0\), then \(x\in \operatorname{Fix}(T)\).

The lemma below implies the uniqueness of the solution of the variational inequality (1.12).

Lemma 2.5

[20]

Let H be a Hilbert space, C a closed convex subset of H, and \(f:C\rightarrow C\) a contraction with coefficient \(\alpha<1\). Then

$$ \bigl\langle x-y, (I-f)x-(I-f)y\bigr\rangle \geq(1-\alpha)\|x-y\|^{2}, \quad x,y\in C. $$

That is, \(I-f\) is strongly monotone with coefficient \(1-\alpha\).

3 Main results

We always assume that H is a real Hilbert space and C is a nonempty, closed, and convex subset of H. Let \(P_{C}: H\rightarrow C\) be the metric projection. Let \(f: C\rightarrow C\) be a contraction with the constant \(k\in(0,1)\). Let \(A: C\rightarrow H\) be an α-inverse-strongly monotone mapping with \(\alpha>0\), and let \(B: H\rightarrow H\) be a maximal monotone operator whose domain is included in C. Let \(J_{r}=(I+r B)^{-1}\) be the resolvent of B for \(r>0\). Suppose that \(\nabla g\) is \(1/L\)-ism. Consider the two mappings \(G_{n}\) and \(S_{n}\),

$$\begin{aligned}& G_{n}(x)=\alpha_{n}f(x)+(1-\alpha_{n})T_{n}J_{r_{n}}(I-r_{n}A) (x),\quad \forall x\in C, n\in\mathbb{N}, \end{aligned}$$
(3.1)
$$\begin{aligned}& S_{n}(x)=\alpha_{n}f(x)+(1-\alpha_{n})T_{\lambda _{n}}J_{r_{n}}(I-r_{n}A) (x),\quad \forall x\in C, n\in\mathbb{N}, \end{aligned}$$
(3.2)

where \(T_{n}=P_{C}(I-\beta_{n}\nabla g)\), \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), \(0< b\leq\beta_{n}\leq2/L\), \(\nabla g_{\lambda_{n}}=\nabla g+\lambda_{n}I\), \(\lambda_{n}\in(0,2/\beta-L)\), \(\beta\in(0,2/L)\), and \(\{\alpha_{n}\}\subset(0,1)\). It is easy to prove that \(\nabla g_{\lambda_{n}}\) is \(\frac{1}{L+\lambda_{n}}\)-ism and that \(T_{\lambda_{n}}\) is nonexpansive. It is also easy to see that if \(0< r\leq2\alpha\), then \(I-rA\) is a nonexpansive mapping of C into H. Indeed, we have, for all \(x,y\in C\),

$$\begin{aligned} \bigl\| (I-rA)x-(I-rA)y\bigr\| ^{2} =&\bigl\| x-y-r(Ax-Ay) \bigr\| ^{2} \\ =&\|x-y\|^{2}-2r\langle x-y, Ax-Ay\rangle+r^{2}\|Ax-Ay \|^{2} \\ \leq&\|x-y\|^{2}-2r\alpha\|Ax-Ay\|^{2}+r^{2} \|Ax-Ay\|^{2} \\ =&\|x-y\|^{2}+r(r-2\alpha)\|Ax-Ay\|^{2} \\ \leq&\|x-y\|^{2}. \end{aligned}$$
(3.3)

Thus, \(I-rA\) is a nonexpansive mapping of C into H.

Then we claim that both \(G_{n}\) and \(S_{n}\) are contractions. Indeed, by (1.1) and (3.1)-(3.3), we have, for each \(x,y\in C\),

$$\begin{aligned} \bigl\| G_{n}(x)-G_{n}(y)\bigr\| =&\bigl\| \bigl(\alpha_{n}f(x)- \alpha_{n}f(y)\bigr) \\ &{}+(1-\alpha _{n}) \bigl(T_{n}J_{r_{n}}(I-r_{n}A) (x)-T_{n}J_{r_{n}}(I-r_{n}A) (y)\bigr)\bigr\| \\ \leq&\alpha_{n}k\|x-y\|+(1-\alpha_{n})\bigl\| J_{r_{n}}(I-r_{n}A) (x)-J_{r_{n}}(I-r_{n}A) (y)\bigr\| \\ \leq&\alpha_{n}k\|x-y\|+(1-\alpha_{n}) \bigl\| (I-r_{n}A) (x)-(I-r_{n}A) (y)\bigr\| \\ \leq&\alpha_{n}k\|x-y\|+(1-\alpha_{n})\|x-y\| \\ =&\bigl(1-\alpha_{n}(1-k)\bigr)\|x-y\|. \end{aligned}$$

Similarly,

$$\begin{aligned} \bigl\| S_{n}(x)-S_{n}(y)\bigr\| \leq\bigl(1-\alpha_{n}(1-k) \bigr)\|x-y\|. \end{aligned}$$

Since \(0<1-\alpha_{n}(1-k)<1\), both \(G_{n}\) and \(S_{n}\) are contractions. Thus, by the Banach contraction principle, \(G_{n}\) has a unique fixed point \(x_{n}^{f}\in C\) such that

$$ x_{n}^{f}=\alpha_{n}f\bigl(x_{n}^{f} \bigr)+(1-\alpha _{n})T_{n}J_{r_{n}}(I-r_{n}A) \bigl(x_{n}^{f}\bigr). $$

Similarly, \(S_{n}\) has a unique fixed point \(x_{n}^{*}\in C\) such that

$$ x_{n}^{*}=\alpha_{n}f\bigl(x_{n}^{*} \bigr)+(1-\alpha_{n})T_{\lambda _{n}}J_{r_{n}}(I-r_{n}A) \bigl(x_{n}^{*}\bigr). $$

For simplicity, we will write \(x_{n}\) for \(x_{n}^{f}\) and \(x_{n}^{*}\) provided no confusion occurs. Below we prove the convergence of \(\{x_{n}\}\) and establish the existence of a point \(q\in(A+B)^{-1}0\cap U\) which solves the variational inequality

$$ \bigl\langle (I-f)q,p-q\bigr\rangle \geq0,\quad \forall p \in(A+B)^{-1}0\cap U. $$
(3.4)

Equivalently, \(q=P_{(A+B)^{-1}0\cap U}f(q)\).
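
Since each \(G_{n}\) (or \(S_{n}\)) is a contraction, its fixed point \(x_{n}\) can in principle be computed by an inner Banach iteration. The following schematic sketch (with hypothetical callables `f`, `T_n`, and `J_rA` standing for the maps above) is our own illustration, not part of the original proof.

```python
import numpy as np

def implicit_iterate(alpha_n, f, T_n, J_rA, x0, tol=1e-10, max_iter=10_000):
    """Fixed point x_n of G_n(x) = alpha_n*f(x) + (1 - alpha_n)*T_n(J_rA(x)).
    G_n is a (1 - alpha_n*(1 - k))-contraction, so Banach iteration converges."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = alpha_n * f(x) + (1 - alpha_n) * T_n(J_rA(x))
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new
```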

The following is our main result.

Theorem 3.1

Let H be a real Hilbert space and let C be a nonempty, closed, and convex subset of H. Let \(P_{C}:H\rightarrow C\) be the metric projection. Let \(f:C\rightarrow C\) be a contraction with the constant \(k\in(0,1)\). Let \(A:C\rightarrow H\) be an α-inverse-strongly monotone mapping with \(\alpha>0\). Let \(B: H\rightarrow H\) be a maximal monotone operator whose domain is included in C. Let \(J_{r}=(I+r B)^{-1}\) be the resolvent of B for \(r>0\). Suppose that \(\nabla g\) is \(1/L\)-ism with \(L>0\). Assume that \((A+B)^{-1}0\cap U\neq\emptyset\). Using the GPA, let the sequences \(\{u_{n}\}\) and \(\{x_{n}\}\) be generated by

$$ \left \{ \textstyle\begin{array}{@{}l} u_{n}=J_{r_{n}}(I-r_{n}A)(x_{n}),\\ x_{n}=\alpha_{n}f(x_{n})+(1-\alpha_{n})T_{n}(u_{n}), \quad \forall n\in\mathbb{N}, \end{array}\displaystyle \right . $$
(3.5)

When we regularize the scheme by using the RGPA, the sequence generated by (3.5) changes into the following:

$$ \left \{ \textstyle\begin{array}{@{}l} u_{n}=J_{r_{n}}(I-r_{n}A)(x_{n}),\\ x_{n}=\alpha_{n}f(x_{n})+(1-\alpha_{n})T_{\lambda_{n}}(u_{n}),\quad \forall n\in\mathbb{N}, \end{array}\displaystyle \right . $$
(3.6)

where \(T_{n}=P_{C}(I-\beta_{n}\nabla g)\), \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), \(\nabla g_{\lambda_{n}}=\nabla g+\lambda_{n}I\), \(\beta\in(0,2/L)\), \(0< b\leq\beta_{n}\leq2/L\). Let \(\{\alpha_{n}\}\), \(\{r_{n}\}\), and \(\{\lambda_{n}\}\) satisfy the following conditions:

  (i) \(\{\alpha_{n}\}\subset(0,1)\), \(\lim_{n\rightarrow\infty}\alpha_{n}=0\);

  (ii) \(\{r_{n}\}\subset(0,\infty)\), \(0< l\leq r_{n}\leq2\alpha\);

  (iii) \(\{\lambda_{n}\}\subset(0,2/\beta-L)\), \(\lambda_{n}=o(\alpha_{n})\).

Then the sequence \(\{x_{n}\}\) converges strongly to a point \(q\in (A+B)^{-1}0\cap U\), which solves the variational inequality (3.4).

Proof

It is well known that \(\tilde{x}\in C\) solves the minimization problem (1.2) if and only if for each fixed \(0<\beta<2/L\), \(\tilde{x}\) solves the fixed point equation

$$ \tilde{x}=P_{C}(I-\beta\nabla g)\tilde{x}=T\tilde{x}. $$

It is clear that \(\tilde{x}=T\tilde{x}\), i.e., \(\tilde{x}\in U=\operatorname{Fix}(T)\). Since T is nonexpansive, U is closed and convex.

As in [21], we have, for any \(r>0\),

$$\begin{aligned} q\in(A+B)^{-1}0\quad \Longleftrightarrow&\quad 0\in Aq+Bq \\ \quad \Longleftrightarrow&\quad0\in rAq+rBq \\ \quad \Longleftrightarrow&\quad q-rAq\in q+rBq \\ \quad \Longleftrightarrow&\quad q=J_{r}(I-rA)q \\ \quad \Longleftrightarrow&\quad q\in \operatorname{Fix}\bigl(J_{r}(I-rA) \bigr). \end{aligned}$$
(3.7)

If \(0< r\leq2\alpha\), we see from (1.1) and (3.3) that \(J_{r}(I-rA)\) is nonexpansive. Thus \(\operatorname{Fix}(J_{r}(I-rA))\) is closed and convex.

In the first step, we show that \(\{x_{n}\}\) is bounded. Indeed, pick any \(p\in(A+B)^{-1}0\cap U\) and put \(M_{n}=J_{r_{n}}(I-r_{n}A)\). Since \(u_{n}=J_{r_{n}}(I-r_{n}A)(x_{n})\) and \(p=J_{r_{n}}(I-r_{n}A)(p)\), we know that for any \(n\in\mathbb{N}\),

$$ \|u_{n}-p\|=\bigl\| M_{n}(x_{n})-M_{n}(p) \bigr\| \leq\|x_{n}-p\|. $$
(3.8)

For \(x\in C\), we note that

$$ P_{C}(I-\beta\nabla g_{\lambda_{n}})x=T_{\lambda_{n}}x $$

and

$$ P_{C}(I-\beta\nabla g)x=Tx. $$

Then we get

$$\begin{aligned} \|T_{\lambda_{n}}x-Tx\| =&\bigl\| P_{C}(I-\beta\nabla g_{\lambda_{n}})x-P_{C}(I-\beta\nabla g)x\bigr\| \leq\lambda_{n}\beta\|x\|. \end{aligned}$$
(3.9)

Thus, by (3.5) and (3.8), we derive that

$$\begin{aligned} \|x_{n}-p\| =&\bigl\| \alpha_{n}f(x_{n})+(1- \alpha_{n})T_{n}(u_{n})-p\bigr\| \\ \leq&\bigl\| \alpha_{n}f(x_{n})-\alpha_{n}f(p)\bigr\| +\bigl\| \alpha_{n}f(p)-\alpha _{n}p\bigr\| +(1-\alpha_{n}) \bigl\| T_{n}(u_{n})-T_{n}(p)\bigr\| \\ \leq&\alpha_{n}k\|x_{n}-p\|+\alpha_{n}\bigl\| (I-f)p \bigr\| +(1-\alpha_{n})\| u_{n}-p\| \\ \leq&\bigl(1-\alpha_{n}(1-k)\bigr)\|x_{n}-p\|+ \alpha_{n}\bigl\| (I-f)p\bigr\| . \end{aligned}$$

Then we have

$$\begin{aligned} \|x_{n}-p\| \leq&\frac{1}{1-k}\bigl\| (I-f)p\bigr\| , \end{aligned}$$

and hence \(\{x_{n}\}\) is bounded. From (3.8), we also derive that \(\{u_{n}\}\) is bounded.

Similarly, by (3.6) and (3.8), we obtain

$$\begin{aligned} \|x_{n}-p\| =&\bigl\| \alpha_{n}f(x_{n})+(1- \alpha_{n})T_{\lambda_{n}}(u_{n})-p\bigr\| \\ \leq&\bigl\| \alpha_{n}f(x_{n})-\alpha_{n}f(p)\bigr\| +\bigl\| \alpha_{n}f(p)-\alpha _{n}p\bigr\| +(1-\alpha_{n}) \bigl\| T_{\lambda_{n}}(u_{n})-T_{\lambda_{n}}(p)\bigr\| \\ &{}+(1-\alpha_{n})\bigl\| T_{\lambda_{n}}(p)-T(p)\bigr\| \\ \leq&\alpha_{n}k\|x_{n}-p\|+\alpha_{n}\bigl\| (I-f)p \bigr\| +(1-\alpha_{n})\| u_{n}-p\|+(1-\alpha_{n}) \bigl\| T_{\lambda_{n}}(p)-T(p)\bigr\| \\ \leq&\bigl(1-\alpha_{n}(1-k)\bigr)\|x_{n}-p\|+ \alpha_{n}\bigl\| (I-f)p\bigr\| +(1-\alpha _{n})\bigl\| T_{\lambda_{n}}(p)-T(p) \bigr\| . \end{aligned}$$

It follows from (3.9) that

$$\begin{aligned} \|x_{n}-p\| \leq&\frac{1}{1-k}\bigl\| (I-f)p\bigr\| +\frac{1-\alpha_{n}}{\alpha _{n}(1-k)} \bigl\| T_{\lambda_{n}}(p)-T(p)\bigr\| \\ \leq&\frac{1}{1-k}\bigl\| (I-f)p\bigr\| +\frac{(1-\alpha_{n})\beta}{1-k}\cdot\frac {\lambda_{n}}{\alpha_{n}}\|p\|. \end{aligned}$$

Since \(\lambda_{n}=o(\alpha_{n})\), there exists a real number \(R>0\) such that \(\frac{\lambda_{n}}{\alpha_{n}}\leq R\), and

$$\begin{aligned} \|x_{n}-p\| \leq&\frac{1}{1-k}\bigl\| (I-f)p\bigr\| +\frac{(1-\alpha_{n})\beta}{1-k}R\|p\| =\frac{\|(I-f)p\|+(1-\alpha_{n})\beta R\|p\|}{1-k}. \end{aligned}$$

Hence \(\{x_{n}\}\) is bounded. From (3.8), we also see that \(\{u_{n}\}\) is bounded.

In the second step, we prove that \(\|x_{n}-u_{n}\|\rightarrow0\). Indeed, for any \(p\in(A+B)^{-1}0\cap U\), by (1.1), we derive that

$$\begin{aligned} \|u_{n}-p\|^{2} =&\bigl\| J_{r_{n}}(I-r_{n}A) (x_{n})-J_{r_{n}}(I-r_{n}A) (p)\bigr\| ^{2} \\ \leq&\bigl\langle (I-r_{n}A) (x_{n})-(I-r_{n}A) (p),u_{n}-p\bigr\rangle \\ =&\langle x_{n}-p, u_{n}-p\rangle-r_{n}\langle Ax_{n}-Ap, u_{n}-p\rangle \\ =&\frac{1}{2}\bigl(\|x_{n}-p\|^{2}+ \|u_{n}-p\|^{2}-\|u_{n}-x_{n}\| ^{2}\bigr)-r_{n}\langle Ax_{n}-Ap, u_{n}-p\rangle \\ \leq&\frac{1}{2}\bigl(\|x_{n}-p\|^{2}+ \|u_{n}-p\|^{2}-\|u_{n}-x_{n} \|^{2}\bigr). \end{aligned}$$

This implies that

$$ \|u_{n}-p\|^{2}\leq\|x_{n}-p \|^{2}-\|u_{n}-x_{n}\|^{2}. $$
(3.10)

From (3.5), (3.10) and (2.1), we derive that

$$\begin{aligned} \|x_{n}-p\|^{2} =&\bigl\| \alpha_{n}f(x_{n})+(1- \alpha_{n})T_{n}(u_{n})-p\bigr\| ^{2} \\ =&\bigl\| \alpha_{n}f(x_{n})-\alpha_{n}p+(1- \alpha_{n})T_{n}(u_{n})-(1-\alpha _{n})T_{n}(p)\bigr\| ^{2} \\ \leq&(1-\alpha_{n})^{2}\|u_{n}-p \|^{2}+2\alpha_{n}\bigl\langle f(x_{n})-p, x_{n}-p\bigr\rangle \\ \leq&\|x_{n}-p\|^{2}-\|u_{n}-x_{n} \|^{2}+2\alpha_{n}\bigl(k\|x_{n}-p\|+\bigl\| (I-f)p\bigr\| \bigr)\cdot\|x_{n}-p\|. \end{aligned}$$

Since \(\alpha_{n}\rightarrow0\), it follows that \(\lim_{n\rightarrow\infty}\|x_{n}-u_{n}\|=0\).

Similarly, from (3.6), (3.9), (3.10), and (2.1), we derive that

$$\begin{aligned} \|x_{n}-p\|^{2} \leq&(1-\alpha_{n}) \bigl( \|x_{n}-p\|^{2}-\|u_{n}-x_{n} \|^{2}+2\|u_{n}-p\|\cdot \lambda_{n}\beta\|p\|+ \lambda_{n}^{2}\beta^{2}\|p\|^{2}\bigr) \\ &{}+2\alpha_{n}\bigl(k\|x_{n}-p\|+\bigl\| (I-f)p\bigr\| \bigr)\cdot \|x_{n}-p\|. \end{aligned}$$

Hence, we obtain

$$\begin{aligned} (1-\alpha_{n})\|u_{n}-x_{n}\|^{2} \leq&(2\alpha_{n}k-\alpha_{n})\|x_{n}-p \|^{2}+2(1-\alpha_{n})\lambda _{n}\beta \|u_{n}-p\|\cdot\|p\| \\ &{}+(1-\alpha_{n})\lambda_{n}^{2} \beta^{2}\|p\|^{2}+2\alpha_{n}\bigl\| (I-f)p\bigr\| \cdot \|x_{n}-p\|. \end{aligned}$$

Since both \(\{x_{n}\}\) and \(\{u_{n}\}\) are bounded and \(\alpha_{n}\rightarrow0\), \(\lambda_{n}\rightarrow0\), it follows that \(\|u_{n}-x_{n}\|\rightarrow0\).

In the third step, from (3.5), we show that \(\|x_{n}-T_{n}(x_{n})\|\rightarrow0\). Indeed,

$$\begin{aligned} \bigl\| x_{n}-T_{n}(x_{n})\bigr\| =&\bigl\| x_{n}-T_{n}(u_{n})+T_{n}(u_{n})-T_{n}(x_{n}) \bigr\| \\ \leq&\bigl\| x_{n}-T_{n}(u_{n})\bigr\| + \bigl\| T_{n}(u_{n})-T_{n}(x_{n})\bigr\| \\ \leq&\alpha_{n}\bigl\| f(x_{n})-T_{n}(u_{n}) \bigr\| +\|u_{n}-x_{n}\|. \end{aligned}$$

Since \(\alpha_{n}\rightarrow0\) and \(\|x_{n}-u_{n}\|\rightarrow0\), we obtain \(\|x_{n}-T_{n}(x_{n})\|\rightarrow0\).

Thus,

$$\begin{aligned} \bigl\| u_{n}-T_{n}(u_{n})\bigr\| =&\bigl\| u_{n}-x_{n}+x_{n}-T_{n}(x_{n})+T_{n}(x_{n})-T_{n}(u_{n}) \bigr\| \\ \leq&\|u_{n}-x_{n}\|+\bigl\| x_{n}-T_{n}(x_{n}) \bigr\| +\bigl\| T_{n}(x_{n})-T_{n}(u_{n})\bigr\| \\ \leq&\|u_{n}-x_{n}\|+\bigl\| x_{n}-T_{n}(x_{n}) \bigr\| +\|x_{n}-u_{n}\| \end{aligned}$$

and

$$ \bigl\| x_{n}-T_{n}(u_{n})\bigr\| \leq\|x_{n}-u_{n} \|+\bigl\| u_{n}-T_{n}(u_{n})\bigr\| , $$

we have \(\|u_{n}-T_{n}(u_{n})\|\rightarrow0\) and \(\|x_{n}-T_{n}(u_{n})\| \rightarrow0\).

Similarly, from (3.6), we show that \(\|x_{n}-T_{\lambda_{n}}(x_{n})\|\rightarrow0\). Indeed,

$$\begin{aligned} \bigl\| x_{n}-T_{\lambda_{n}}(x_{n})\bigr\| \leq& \alpha_{n}\bigl\| f(x_{n})-T_{\lambda_{n}}(u_{n})\bigr\| + \|u_{n}-x_{n}\|. \end{aligned}$$

Since \(\alpha_{n}\rightarrow0\) and \(\|u_{n}-x_{n}\|\rightarrow0\), we obtain \(\|x_{n}-T_{\lambda_{n}}(x_{n})\|\rightarrow0\).

Therefore,

$$\begin{aligned} \bigl\| u_{n}-T_{\lambda_{n}}(u_{n})\bigr\| \leq& \|u_{n}-x_{n}\|+\bigl\| x_{n}-T_{\lambda_{n}}(x_{n}) \bigr\| +\|x_{n}-u_{n}\| \end{aligned}$$

and

$$ \bigl\| x_{n}-T_{\lambda_{n}}(u_{n})\bigr\| \leq\|u_{n}-x_{n} \|+\bigl\| T_{\lambda _{n}}(u_{n})-u_{n}\bigr\| , $$

we have \(\|u_{n}-T_{\lambda_{n}}(u_{n})\|\rightarrow0\) and \(\|x_{n}-T_{\lambda_{n}}(u_{n})\|\rightarrow0\).

In the fourth step, we show that \(q\in U\).

Consider a subsequence \(\{u_{n_{i}}\}\) of \(\{u_{n}\}\). Since \(\{u_{n}\}\) is bounded, without loss of generality, we can assume that \(u_{n_{i}}\rightharpoonup q\).

We first consider the sequence generated by the GPA (3.5). From the boundedness of \(\{u_{n_{i}}\}\), \(\beta_{n_{i}}\rightarrow\beta\), and \(\|u_{n_{i}}-T_{n_{i}}(u_{n_{i}})\|\rightarrow0\), we distinguish two cases to show \(q\in U\).

Case 1. \(\lim_{i\rightarrow\infty}\beta_{n_{i}}=\beta=\frac{2}{L}\).

Observe that

$$\begin{aligned} \biggl\| P_{C}\biggl(I-\frac{2}{L}\nabla g\biggr)u_{n_{i}}-u_{n_{i}} \biggr\| \leq&\biggl\| P_{C}\biggl(I-\frac{2}{L}\nabla g \biggr)u_{n_{i}}-P_{C}(I-\beta_{n_{i}}\nabla g)u_{n_{i}}\biggr\| \\ &{}+\bigl\| P_{C}(I-\beta_{n_{i}}\nabla g)u_{n_{i}}-u_{n_{i}} \bigr\| \\ \leq&\biggl\| \biggl(I-\frac{2}{L}\nabla g\biggr)u_{n_{i}}-(I- \beta_{n_{i}}\nabla g)u_{n_{i}}\biggr\| \\ &{}+\bigl\| P_{C}(I-\beta_{n_{i}}\nabla g)u_{n_{i}}-u_{n_{i}} \bigr\| \\ \leq&\biggl(\frac{2}{L}-\beta_{n_{i}}\biggr)\bigl\| \nabla g(u_{n_{i}})\bigr\| +\bigl\| T_{n_{i}}(u_{n_{i}})-u_{n_{i}}\bigr\| . \end{aligned}$$

Then we conclude that

$$ \lim_{i\rightarrow\infty}\biggl\| u_{n_{i}}-P_{C}\biggl(I- \frac{2}{L}\nabla g\biggr)u_{n_{i}}\biggr\| =0. $$

Since \(\nabla g\) is \(\frac{1}{L}\)-ism, \(P_{C}(I-\frac{2}{L}\nabla g)\) is a nonexpansive self-mapping on C. Indeed, we have, for each \(x,y\in C\),

$$\begin{aligned} \biggl\| P_{C}\biggl(I-\frac{2}{L}\nabla g\biggr)x-P_{C} \biggl(I-\frac{2}{L}\nabla g\biggr)y\biggr\| ^{2} \leq&\biggl\| \biggl(I- \frac{2}{L}\nabla g\biggr)x-\biggl(I-\frac{2}{L}\nabla g\biggr)y \biggr\| ^{2} \\ =&\biggl\| x-y-\frac{2}{L}\bigl(\nabla g(x)-\nabla g(y)\bigr)\biggr\| ^{2} \\ =&\|x-y\|^{2}-\frac{4}{L}\bigl\langle x-y, \nabla g(x)-\nabla g(y)\bigr\rangle \\ &{}+\frac{4}{L^{2}}\bigl\| \nabla g(x)-\nabla g(y)\bigr\| ^{2} \\ \leq&\|x-y\|^{2}-\frac{4}{L^{2}}\bigl\| \nabla g(x)-\nabla g(y) \bigr\| ^{2} \\ &{}+\frac{4}{L^{2}}\bigl\| \nabla g(x)-\nabla g(y)\bigr\| ^{2} \\ =&\|x-y\|^{2}. \end{aligned}$$

Case 2. \(0< b\leq \lim_{i\rightarrow\infty}\beta_{n_{i}}=\beta<\frac{2}{L}\).

Observe that

$$\begin{aligned} \bigl\| P_{C}(I-\beta\nabla g)u_{n_{i}}-u_{n_{i}}\bigr\| \leq& \bigl\| P_{C}(I-\beta\nabla g)u_{n_{i}}-P_{C}(I- \beta_{n_{i}}\nabla g)u_{n_{i}}\bigr\| \\ &{}+\bigl\| P_{C}(I-\beta_{n_{i}}\nabla g)u_{n_{i}}-u_{n_{i}} \bigr\| \\ \leq&\bigl\| (I-\beta\nabla g)u_{n_{i}}-(I-\beta_{n_{i}}\nabla g)u_{n_{i}}\bigr\| \\ &{}+\bigl\| P_{C}(I-\beta_{n_{i}}\nabla g)u_{n_{i}}-u_{n_{i}} \bigr\| \\ \leq&|\beta-\beta_{n_{i}}|\cdot\bigl\| \nabla g(u_{n_{i}})\bigr\| + \bigl\| T_{n_{i}}(u_{n_{i}})-u_{n_{i}}\bigr\| . \end{aligned}$$

Then we conclude that

$$ \lim_{i\rightarrow\infty}\bigl\| u_{n_{i}}-P_{C}(I-\beta\nabla g)u_{n_{i}}\bigr\| =0. $$

Both \(P_{C}(I-\frac{2}{L}\nabla g)\) and \(P_{C}(I-\beta\nabla g)\) are nonexpansive. Then, by the above two cases and Lemma 2.4, we derive that

$$ q=P_{C}(I-\beta\nabla g)q=Tq. $$

This shows that \(q\in \operatorname{Fix}(T)=U\).

For the regularized scheme, we consider the sequence generated by (3.6), which uses the RGPA. By (3.9), we have

$$\begin{aligned} \bigl\| u_{n}-T(u_{n})\bigr\| \leq&\bigl\| u_{n}-T_{\lambda_{n}}(u_{n}) \bigr\| +\bigl\| T_{\lambda _{n}}(u_{n})-T(u_{n})\bigr\| \\ \leq&\bigl\| u_{n}-T_{\lambda_{n}}(u_{n})\bigr\| + \lambda_{n}\beta\|u_{n}\|. \end{aligned}$$

Since \(\|u_{n}-T_{\lambda_{n}}(u_{n})\|\rightarrow0\) and \(\lambda_{n}\rightarrow0\), we have \(\|u_{n}-T(u_{n})\|\rightarrow0\). Thus, we get by Lemma 2.4 that \(q\in \operatorname{Fix}(T)=U\).

In the fifth step, we show that \(q\in(A+B)^{-1}0\).

Take \(r_{0}\in[l,2\alpha]\). Putting \(z_{n}=(I-r_{n}A)x_{n}\), we have from Lemma 2.2 that

$$\begin{aligned} \bigl\| J_{r_{0}}(I-r_{0}A)x_{n}-u_{n} \bigr\| \leq&\bigl\| J_{r_{0}}(I-r_{0}A)x_{n}-J_{r_{0}}(I-r_{n}A)x_{n} \bigr\| \\ &{}+\bigl\| J_{r_{0}}(I-r_{n}A)x_{n}-u_{n}\bigr\| \\ \leq&\bigl\| (I-r_{0}A)x_{n}-(I-r_{n}A)x_{n} \bigr\| \\ &{}+\bigl\| J_{r_{0}}(z_{n})-J_{r_{n}}(z_{n})\bigr\| \\ \leq&|r_{n}-r_{0}|\cdot\bigl\| A(x_{n})\bigr\| \\ &{}+\frac{|r_{n}-r_{0}|}{r_{0}}\bigl\| J_{r_{0}}(z_{n})-z_{n}\bigr\| , \end{aligned}$$
(3.11)

we also have

$$\begin{aligned} \bigl\| J_{r_{0}}(I-r_{0}A)x_{n}-x_{n} \bigr\| \leq\bigl\| J_{r_{0}}(I-r_{0}A)x_{n}-u_{n}\bigr\| + \|u_{n}-x_{n}\|. \end{aligned}$$
(3.12)

Take any subsequence \(\{x_{n_{i}}\}\) of \(\{x_{n}\}\). Since \(\{x_{n}\}\) is bounded, \(\{x_{n_{i}}\}\) is bounded and \(\{r_{n_{i}}\}\subset[l,2\alpha]\). Without loss of generality, there exist a subsequence \(\{x_{n_{i_{j}}}\}\) of \(\{x_{n_{i}}\}\) and a subsequence \(\{r_{n_{i_{j}}}\}\) of \(\{r_{n_{i}}\}\) such that \(x_{n_{i_{j}}}\rightharpoonup q\) and \(r_{n_{i_{j}}}\rightarrow r_{0}\) for some \(r_{0}\in[l,2\alpha]\). Since \(\{x_{n_{i_{j}}}\}\subset C\) and C is closed and convex, we have \(q\in C\). Using \(r_{n_{i_{j}}}\rightarrow r_{0}\) and (3.11), we have

$$ \bigl\| J_{r_{0}}(I-r_{0}A)x_{n_{i_{j}}}-u_{n_{i_{j}}}\bigr\| \rightarrow0. $$

Furthermore, we have from \(\|x_{n_{i_{j}}}-u_{n_{i_{j}}}\|\rightarrow0\) and (3.12)

$$ \bigl\| J_{r_{0}}(I-r_{0}A)x_{n_{i_{j}}}-x_{n_{i_{j}}}\bigr\| \rightarrow0. $$

Since \(J_{r_{0}}(I-r_{0}A)\) is nonexpansive, we have from Lemma 2.4 that \(q=J_{r_{0}}(I-r_{0}A)q\). By (3.7), we obtain \(q\in(A+B)^{-1}0\).

Thus, we have \(q\in(A+B)^{-1}0\cap U\).

On the other hand, from the sequence \(\{x_{n}\}\) generated by (3.5), we note that

$$\begin{aligned} x_{n}-q =&\alpha_{n}f(x_{n})+(1- \alpha_{n})T_{n}(u_{n})-q \\ =&\alpha_{n}f(x_{n})-\alpha_{n}f(q)+ \alpha_{n}f(q)-\alpha _{n}q+(1-\alpha_{n}) \bigl(T_{n}(u_{n})-q\bigr). \end{aligned}$$

Hence, we obtain from (3.8) that

$$\begin{aligned} \|x_{n}-q\|^{2} =&\alpha_{n}\bigl\langle (f-I)q,x_{n}-q\bigr\rangle \\ &{}+\bigl\langle \alpha_{n}\bigl(f(x_{n})-f(q)\bigr)+(1- \alpha _{n}) \bigl(T_{n}(u_{n})-T_{n}(q) \bigr), x_{n}-q\bigr\rangle \\ \leq&\alpha_{n}\bigl\langle (f-I)q,x_{n}-q\bigr\rangle \\ &{}+\alpha_{n}k\|x_{n}-q\|^{2}+(1- \alpha_{n})\|u_{n}-q\|\cdot\| x_{n}-q\| \\ \leq&\alpha_{n}\bigl\langle (f-I)q,x_{n}-q\bigr\rangle + \bigl(1-\alpha_{n}(1-k)\bigr)\| x_{n}-q\|^{2}. \end{aligned}$$

It follows that

$$ \|x_{n}-q\|^{2} \leq\frac{1}{1-k}\bigl\langle (f-I)q,x_{n}-q\bigr\rangle . $$

In particular,

$$ \|x_{n_{i}}-q\|^{2} \leq\frac{1}{1-k}\bigl\langle (f-I)q,x_{n_{i}}-q\bigr\rangle . $$
(3.13)

Since \(x_{n_{i}}\rightharpoonup q\), it follows from (3.13) that \(x_{n_{i}}\rightarrow q\) as \(i\rightarrow\infty\).

Similarly, using the RGPA, for the sequence \(\{x_{n}\}\) generated by (3.6) we note that

$$\begin{aligned} x_{n}-q =&\alpha_{n}f(x_{n})- \alpha_{n}f(q)+\alpha_{n}f(q)-\alpha _{n}q+(1- \alpha_{n}) \bigl(T_{\lambda_{n}}(u_{n})-q\bigr). \end{aligned}$$

Hence, we obtain from (3.8) and (3.9)

$$\begin{aligned} \|x_{n}-q\|^{2} \leq&\alpha_{n}\bigl\langle (f-I)q,x_{n}-q\bigr\rangle +(1-\alpha_{n}+\alpha _{n}k)\|x_{n}-q\|^{2} \\ &{}+(1-\alpha_{n})\lambda_{n}\beta\|q\|\cdot \|x_{n}-q\|. \end{aligned}$$

It follows that

$$ \|x_{n}-q\|^{2} \leq\frac{\langle (f-I)q,x_{n}-q\rangle}{1-k}+\frac{(1-\alpha_{n})\lambda_{n}\beta\|q\| \cdot\|x_{n}-q\|}{(1-k)\alpha_{n}}. $$

In particular,

$$ \|x_{n_{i}}-q\|^{2} \leq\frac{(1-\alpha_{n_{i}})\beta}{1-k}\cdot\frac{\lambda_{n_{i}}}{\alpha_{n_{i}}}\|q\|\cdot\|x_{n_{i}}-q\|+\frac{1}{1-k}\bigl\langle (f-I)q,x_{n_{i}}-q\bigr\rangle . $$
(3.14)

Since \(x_{n_{i}}\rightharpoonup q\) and \(\lambda_{n}=o(\alpha_{n})\), it follows from (3.14) that \(x_{n_{i}}\rightarrow q\) as \(i\rightarrow\infty\).

Finally, we show that q solves the variational inequality (3.4).

From the sequence \(\{x_{n}\}\) generated by (3.5), we observe that

$$\begin{aligned} x_{n} =&\alpha_{n}f(x_{n})+(1- \alpha_{n})T_{n}(u_{n}) \\ =&\alpha_{n}f(x_{n})+(1-\alpha_{n})T_{n}J_{r_{n}}(I-r_{n}A) (x_{n}). \end{aligned}$$

Hence, we conclude that

$$ (I-f) (x_{n})=-\frac{1}{\alpha _{n}}\bigl(I-T_{n}J_{r_{n}}(I-r_{n}A) \bigr) (x_{n})-T_{n}J_{r_{n}}(I-r_{n}A) (x_{n})+x_{n}. $$

Since \(T_{n}J_{r_{n}}(I-r_{n}A)\) is nonexpansive, we find that \(I-T_{n}J_{r_{n}}(I-r_{n}A)\) is monotone. Note that, for any given \(z\in(A+B)^{-1}0\cap U\),

$$\begin{aligned} \bigl\langle (I-f)x_{n},x_{n}-z\bigr\rangle =&- \frac{1}{\alpha_{n}}\bigl\langle \bigl(I-T_{n}J_{r_{n}}(I-r_{n}A) \bigr)x_{n} \\ &{}-\bigl(I-T_{n}J_{r_{n}}(I-r_{n}A) \bigr)z,x_{n}-z\bigr\rangle -\bigl\langle T_{n}(u_{n})-x_{n}, x_{n}-z\bigr\rangle \\ \leq&\bigl\| T_{n}(u_{n})-x_{n}\bigr\| \cdot \|x_{n}-z\|. \end{aligned}$$

Now, replacing n with \(n_{i}\) in the above inequality, and letting \(i\rightarrow\infty\), since \(\{x_{n}\}\) is bounded, \(\|T_{n}(u_{n})-x_{n}\|\rightarrow0\), we have

$$ \bigl\langle (I-f)q,q-z\bigr\rangle =\lim_{i\rightarrow\infty}\bigl\langle (I-f)x_{n_{i}},x_{n_{i}}-z\bigr\rangle \leq0. $$

By a similar argument, the sequence \(\{x_{n}\}\) generated by (3.6) yields analogous results, namely:

$$ (I-f) (x_{n})=-\frac{1}{\alpha_{n}}\bigl(I-T_{\lambda _{n}}J_{r_{n}}(I-r_{n}A) \bigr) (x_{n})-T_{\lambda _{n}}J_{r_{n}}(I-r_{n}A) (x_{n})+x_{n}. $$

Since \(T_{\lambda_{n}}J_{r_{n}}(I-r_{n}A)\) is nonexpansive, we find that \(I-T_{\lambda_{n}}J_{r_{n}}(I-r_{n}A)\) is monotone. Note that for any given \(z\in(A+B)^{-1}0\cap U\), by (3.9), we get

$$\begin{aligned} \bigl\langle (I-f) (x_{n}),x_{n}-z\bigr\rangle \leq& \frac{\lambda_{n}}{\alpha_{n}}\beta\|z\|\cdot\|x_{n}-z\|+\bigl\| T_{\lambda_{n}}(u_{n})-x_{n} \bigr\| \cdot\|x_{n}-z\|. \end{aligned}$$

Then replacing n with \(n_{i}\) in the above inequality, and letting \(i\rightarrow\infty\), since \(\lambda_{n}=o(\alpha_{n})\), \(\|T_{\lambda_{n}}(u_{n})-x_{n}\|\rightarrow0\), we also have

$$ \bigl\langle (I-f)q,q-z\bigr\rangle =\lim_{i\rightarrow\infty}\bigl\langle (I-f)x_{n_{i}},x_{n_{i}}-z\bigr\rangle \leq0. $$

Therefore, for both sequences generated by the GPA (3.5) and the RGPA (3.6), we obtain the same conclusion: because of the arbitrariness of \(z\in(A+B)^{-1}0\cap U\), we see that \(q\in(A+B)^{-1}0\cap U\) is a solution of the variational inequality (3.4). Further, by the uniqueness of the solution of the variational inequality (3.4), we conclude that \(x_{n}\rightarrow q\) as \(n\rightarrow\infty\).

The variational inequality (3.4) can be rewritten as

$$ \bigl\langle f(q)-q,q-z\bigr\rangle \geq0,\quad \forall z\in(A+B)^{-1}0 \cap U. $$

By Lemma 2.1, it is equivalent to the following fixed point equation:

$$ P_{(A+B)^{-1}0\cap U}f(q)=q. $$

This completes the proof. □

Theorem 3.2

Let H be a real Hilbert space and let C be a nonempty, closed, and convex subset of H. Let \(P_{C}:H\rightarrow C\) be the metric projection. Let \(f:C\rightarrow C\) be a contraction with the constant \(k\in(0,1)\). Let \(A:C\rightarrow H\) be an α-inverse-strongly monotone mapping with \(\alpha>0\) and let \(B: H\rightarrow H\) be a maximal monotone operator whose domain is included in C. Let \(J_{r}=(I+r B)^{-1}\) be the resolvent of B for \(r>0\). Suppose that \(\nabla g\) is \(1/L\)-ism with \(L>0\). Assume that \((A+B)^{-1}0\cap U\neq\emptyset\). Let the sequences \(\{u_{n}\}\) and \(\{x_{n}\}\) be generated by \(x_{1}\in C\) and

$$ \left \{ \textstyle\begin{array}{@{}l} u_{n}=J_{r_{n}}(I-r_{n}A)(x_{n}),\\ x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n})T_{n}(u_{n}),\quad \forall n\in\mathbb{N}, \end{array}\displaystyle \right . $$
(3.15)

When we regularize the scheme by using the RGPA, the sequence generated by (3.15) changes into the following:

$$ \left \{ \textstyle\begin{array}{@{}l} u_{n}=J_{r_{n}}(I-r_{n}A)(x_{n}),\\ x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n})T_{\lambda_{n}}(u_{n}),\quad \forall n\in\mathbb{N}, \end{array}\displaystyle \right . $$
(3.16)

where \(T_{n}=P_{C}(I-\beta_{n}\nabla g)\), \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), \(\nabla g_{\lambda_{n}}=\nabla g+\lambda_{n}I\), \(0< b\leq\beta_{n}\leq\frac{2}{L}\), \(\sum_{n=1}^{\infty}|\beta_{n}-\beta_{n+1}|<\infty\), \(\beta\in (0,2/L)\). Let \(\{\alpha_{n}\}\), \(\{r_{n}\}\), and \(\{\lambda_{n}\}\) satisfy the following conditions:

  (C1) \(\{\alpha_{n}\}\subset(0,1)\), \(\lim_{n\rightarrow\infty}\alpha_{n}=0\), \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(\sum_{n=1}^{\infty}|\alpha_{n+1}-\alpha_{n}|<\infty\);

  (C2) \(\{r_{n}\}\subset(0,\infty)\), \(0< l\leq r_{n}\leq2\alpha\), \(\sum_{n=1}^{\infty}|r_{n+1}-r_{n}|<\infty\);

  (C3) \(\{\lambda_{n}\}\subset(0,2/\beta-L)\), \(\lambda_{n}=o(\alpha_{n})\), \(\sum_{n=1}^{\infty}|\lambda_{n+1}-\lambda_{n}|<\infty\).

Then the sequences \(\{x_{n}\}\) generated by (3.15) and (3.16) both converge strongly to a point \(q\in(A+B)^{-1}0\cap U\), which solves the variational inequality (3.4).

Proof

It is clear that \(\hat{x}\in C\) solves the minimization problem (1.2) if and only if for each fixed \(0< b\leq\beta\leq2/L\), \(\hat{x}\) solves the fixed point equation

$$ \hat{x}=P_{C}(I-\beta\nabla g)\hat{x}=T\hat{x}, $$

and \(\hat{x}=T\hat{x}\), i.e., \(\hat{x}\in U=\operatorname{Fix}(T)\).

Now, we first show that \(\{x_{n}\}\) is bounded. Indeed, pick any \(p\in(A+B)^{-1}0\cap U\), and by (3.8) and (3.15) we derive that

$$\begin{aligned} \|x_{n+1}-p\| =&\bigl\| \alpha_{n}f(x_{n})+(1- \alpha_{n})T_{n}(u_{n})-p\bigr\| \\ \leq&\alpha_{n}\bigl\| f(x_{n})-f(p)\bigr\| +(1-\alpha_{n}) \bigl\| T_{n}(u_{n})-T_{n}(p)\bigr\| +\alpha_{n} \bigl\| f(p)-p\bigr\| \\ \leq&\alpha_{n}k\|x_{n}-p\|+(1-\alpha_{n}) \|u_{n}-p\|+\alpha_{n}\bigl\| f(p)-p\bigr\| \\ \leq&\bigl(1-\alpha_{n}(1-k)\bigr)\|x_{n}-p\|+ \alpha_{n}\bigl\| f(p)-p\bigr\| . \end{aligned}$$

By induction, we have

$$ \|x_{n}-p\|\leq\max\biggl\{ \|x_{1}-p\|,\frac{1}{1-k} \bigl\| f(p)-p\bigr\| \biggr\} ,\quad n\geq1. $$

Hence, \(\{x_{n}\}\) is bounded. From (3.8), we also see that \(\{u_{n}\}\) is bounded.

Similarly, we derive from (3.9) and (3.16) that

$$\begin{aligned} \|x_{n+1}-p\| \leq&\bigl(1-\alpha_{n}(1-k)\bigr) \|x_{n}-p\| \\ &{}+\alpha_{n}(1-k)\biggl[\frac{\lambda_{n}\beta(1-\alpha_{n})}{\alpha _{n}(1-k)}\|p\|+ \frac{\alpha_{n}}{\alpha_{n}(1-k)}\cdot\bigl\| f(p)-p\bigr\| \biggr]. \end{aligned}$$

Since \(\lambda_{n}=o(\alpha_{n})\), there exists a real number \(a>0\) such that \(\frac{\lambda_{n}}{\alpha_{n}}\leq a\).

Thus,

$$ \|x_{n+1}-p\|\leq\bigl(1-\alpha_{n}(1-k)\bigr) \|x_{n}-p\|+\alpha_{n}(1-k)\frac {a\beta\|p\|+\|f(p)-p\|}{1-k}. $$

By induction, we have

$$ \|x_{n}-p\|\leq\max\biggl\{ \|x_{1}-p\|,\frac{1}{1-k} \bigl(a\beta\|p\|+\bigl\| f(p)-p\bigr\| \bigr)\biggr\} ,\quad n\geq1. $$

Hence, \(\{x_{n}\}\) is bounded. From (3.8), we also see that \(\{u_{n}\}\) is bounded.

Next, we show that \(\|x_{n+1}-x_{n}\|\rightarrow0\).

Indeed, since \(\nabla g\) is \(1/L\)-ism, \(P_{C}(I-\beta_{n}\nabla g)\) is nonexpansive, and we derive from (3.15) that

$$\begin{aligned} \bigl\| T_{n}(u_{n-1})-T_{n-1}(u_{n-1})\bigr\| =& \bigl\| P_{C}(I-\beta_{n}\nabla g) (u_{n-1})-P_{C}(I- \beta_{n-1}\nabla g) (u_{n-1})\bigr\| \\ \leq&\bigl\| (I-\beta_{n}\nabla g) (u_{n-1})-(I- \beta_{n-1}\nabla g) (u_{n-1})\bigr\| \\ =&|\beta_{n}-\beta_{n-1}|\cdot\bigl\| \nabla g(u_{n-1}) \bigr\| . \end{aligned}$$

Thus, we get

$$\begin{aligned} \|x_{n+1}-x_{n}\| =&\bigl\| \alpha_{n}f(x_{n})+(1- \alpha_{n})T_{n}(u_{n})-\bigl(\alpha _{n-1}f(x_{n-1}) \\ &{}+(1-\alpha_{n-1})T_{n-1}(u_{n-1})\bigr)\bigr\| \\ \leq&\alpha_{n}\bigl\| f(x_{n})-f(x_{n-1})\bigr\| +\bigl\| \alpha_{n}f(x_{n-1})-\alpha _{n-1}f(x_{n-1}) \bigr\| \\ &{}+(1-\alpha_{n})\bigl\| T_{n}(u_{n})-T_{n}(u_{n-1}) \bigr\| \\ &{}+(1-\alpha_{n})\bigl\| T_{n}(u_{n-1})-T_{n-1}(u_{n-1}) \bigr\| \\ &{}+\bigl\| (1-\alpha_{n})T_{n-1}(u_{n-1})-(1-\alpha _{n-1})T_{n-1}(u_{n-1})\bigr\| \\ \leq&\alpha_{n}k\|x_{n}-x_{n-1}\|+(1- \alpha_{n})\|u_{n}-u_{n-1}\| \\ &{}+(1-\alpha_{n})|\beta_{n}-\beta_{n-1}|\cdot \bigl\| \nabla g(u_{n-1})\bigr\| \\ &{}+|\alpha_{n}-\alpha_{n-1}|\bigl(\bigl\| f(x_{n-1}) \bigr\| +\bigl\| T_{n-1}(u_{n-1})\bigr\| \bigr) \\ \leq&\alpha_{n}k\|x_{n}-x_{n-1}\|+(1- \alpha_{n})\|u_{n}-u_{n-1}\| \\ &{}+M_{1}\bigl(|\beta_{n}-\beta_{n-1}|+| \alpha_{n}-\alpha_{n-1}|\bigr) \end{aligned}$$
(3.17)

for some appropriate constant \(M_{1}>0\) such that

$$ M_{1}\geq\max\bigl\{ \bigl\| \nabla g(u_{n-1})\bigr\| , \bigl\| f(x_{n-1})\bigr\| +\bigl\| T_{n-1}(u_{n-1})\bigr\| \bigr\} , \quad \forall n\geq1. $$

Similarly, since \(\nabla g\) is \(1/L\)-ism, \(P_{C}(I-\beta\nabla g_{\lambda_{n}})=T_{\lambda_{n}}\) is nonexpansive, and we derive from (3.16) that

$$\begin{aligned} \bigl\| T_{\lambda_{n}}(u_{n-1})-T_{\lambda_{n-1}}(u_{n-1})\bigr\| \leq& \beta|\lambda_{n}-\lambda_{n-1}| \|u_{n-1}\|. \end{aligned}$$

Thus, we get

$$\begin{aligned} \|x_{n+1}-x_{n}\| \leq&\alpha_{n}k \|x_{n}-x_{n-1}\|+(1-\alpha_{n}) \|u_{n}-u_{n-1}\| \\ &{}+(1-\alpha_{n})\beta|\lambda_{n}- \lambda_{n-1}|\cdot \|u_{n-1}\| \\ &{}+|\alpha_{n}-\alpha_{n-1}|\bigl(\bigl\| f(x_{n-1}) \bigr\| +\bigl\| T_{\lambda _{n-1}}(u_{n-1})\bigr\| \bigr) \\ \leq&\alpha_{n}k\|x_{n}-x_{n-1}\|+(1- \alpha_{n})\|u_{n}-u_{n-1}\| \\ &{}+M_{1}^{\prime}\bigl(|\lambda_{n}- \lambda_{n-1}|+|\alpha_{n}-\alpha_{n-1}|\bigr) \end{aligned}$$
(3.18)

for some appropriate constant \(M_{1}^{\prime}>0\) such that

$$ M_{1}^{\prime}\geq\max\bigl\{ \beta\|u_{n-1}\|, \bigl\| f(x_{n-1})\bigr\| +\bigl\| T_{\lambda _{n-1}}(u_{n-1})\bigr\| \bigr\} , \quad \forall n\geq1. $$

Since \(u_{n+1}=J_{r_{n+1}}(I-r_{n+1}A)(x_{n+1})\) and \(u_{n}=J_{r_{n}}(I-r_{n}A)(x_{n})\), we get from Lemma 2.2 and (3.3) that

$$\begin{aligned} \|u_{n+1}-u_{n}\| =&\bigl\| J_{r_{n+1}}(I-r_{n+1}A) (x_{n+1})-J_{r_{n}}(I-r_{n}A) (x_{n})\bigr\| \\ \leq&\bigl\| J_{r_{n+1}}(I-r_{n+1}A) (x_{n+1})-J_{r_{n+1}}(I-r_{n+1}A) (x_{n})\bigr\| \\ &{}+\bigl\| J_{r_{n+1}}(I-r_{n+1}A) (x_{n})-J_{r_{n+1}}(I-r_{n}A) (x_{n})\bigr\| \\ &{}+\bigl\| J_{r_{n+1}}(I-r_{n}A) (x_{n})-J_{r_{n}}(I-r_{n}A) (x_{n})\bigr\| \\ \leq&\bigl\| x_{n+1}-x_{n}\bigr\| +|r_{n+1}-r_{n}| \cdot\bigl\| A(x_{n})\bigr\| \\ &{}+\frac{|r_{n+1}-r_{n}|}{r_{n+1}}\bigl\| J_{r_{n+1}}(I-r_{n}A) (x_{n})-(I-r_{n}A) (x_{n})\bigr\| . \end{aligned}$$

Since \(0< l\leq r_{n}\leq2\alpha\), we have

$$\begin{aligned} \|u_{n+1}-u_{n}\| \leq&\|x_{n+1}-x_{n} \|+|r_{n+1}-r_{n}|\cdot\bigl\| A(x_{n})\bigr\| \\ &{}+\frac{|r_{n+1}-r_{n}|}{l}\bigl\| J_{r_{n+1}}(I-r_{n}A) (x_{n})-(I-r_{n}A) (x_{n})\bigr\| \\ \leq&\|x_{n+1}-x_{n}\|+|r_{n+1}-r_{n}|M_{2}, \end{aligned}$$
(3.19)

where \(M_{2}=\sup\{\|A(x_{n})\|, \frac{1}{l}\|J_{r_{n+1}}(I-r_{n}A)(x_{n})-(I-r_{n}A)(x_{n})\|: n\in\mathbb{N}\}\).

From (3.17) (respectively, (3.18)) and (3.19), we obtain

$$\begin{aligned} \|x_{n+1}-x_{n}\| \leq&\alpha_{n}k\|x_{n}-x_{n-1}\|+(1-\alpha_{n}) \bigl(\|x_{n}-x_{n-1}\|+|r_{n}-r_{n-1}|M_{2}\bigr) \\ &{}+\max\bigl\{ M_{1},M_{1}^{\prime}\bigr\} \bigl(|\beta_{n}-\beta_{n-1}|+|\lambda_{n}-\lambda_{n-1}|+|\alpha_{n}-\alpha_{n-1}|\bigr) \\ \leq&\bigl(1-\alpha_{n}(1-k)\bigr)\|x_{n}-x_{n-1}\| \\ &{}+M_{3}\bigl(|r_{n}-r_{n-1}|+|\beta_{n}-\beta_{n-1}|+|\lambda_{n}-\lambda_{n-1}|+|\alpha_{n}-\alpha_{n-1}|\bigr), \end{aligned}$$

where \(M_{3}=\max\{M_{1},M_{1}^{\prime},M_{2}\}\). Hence by Lemma 2.3, we have

$$ \lim_{n\rightarrow\infty}\|x_{n+1}-x_{n} \|=0. $$
(3.20)

Then, from (3.19), (3.20), and \(|r_{n+1}-r_{n}|\rightarrow0\), we have

$$ \lim_{n\rightarrow\infty}\|u_{n+1}-u_{n} \|=0. $$
(3.21)

For any \(p\in(A+B)^{-1}0\cap U\), by the same argument as in the proof of Theorem 3.1, we have

$$ \|u_{n}-p\|^{2}\leq\|x_{n}-p \|^{2}-\|u_{n}-x_{n}\|^{2}. $$
(3.22)

Then, for the sequence generated by the GPA (3.15), using (3.22) and the same argument as in the proof of Theorem 3.1, we derive that

$$\begin{aligned} \|x_{n+1}-p\|^{2} =&\bigl\| \alpha_{n}f(x_{n})+(1- \alpha_{n})T_{n}(u_{n})-p\bigr\| ^{2} \\ =&\bigl\| \alpha_{n}\bigl(f(x_{n})-p\bigr)+(1- \alpha_{n})T_{n}(u_{n})-(1-\alpha _{n})T_{n}(p)\bigr\| ^{2} \\ \leq&\alpha_{n}^{2}\bigl\| f(x_{n})-p \bigr\| ^{2}+2\alpha_{n}(1-\alpha_{n})\bigl\| f(x_{n})-p\bigr\| \cdot\|u_{n}-p\| +(1-\alpha_{n})^{2}\|u_{n}-p \|^{2} \\ \leq&\alpha_{n}\bigl(\bigl\| f(x_{n})-p\bigr\| ^{2}+2 \bigl\| f(x_{n})-p\bigr\| \cdot\|u_{n}-p\|\bigr)+\| u_{n}-p \|^{2} \\ \leq&\alpha_{n}\bigl(\bigl\| f(x_{n})-p\bigr\| ^{2}+2 \bigl\| f(x_{n})-p\bigr\| \cdot\|u_{n}-p\|\bigr) \\ &{}+\|x_{n}-p\|^{2}-\|u_{n}-x_{n} \|^{2}, \end{aligned}$$

and hence

$$\begin{aligned} \|x_{n}-u_{n}\|^{2} \leq&\|x_{n}-p \|^{2}-\|x_{n+1}-p\|^{2} \\ &{}+\alpha_{n}\bigl(\bigl\| f(x_{n})-p\bigr\| ^{2}+2 \bigl\| f(x_{n})-p\bigr\| \cdot\|u_{n}-p\|\bigr) \\ =&\|x_{n}-x_{n+1}\|^{2}+2\langle x_{n}-x_{n+1}, x_{n+1}-p\rangle \\ &{}+\alpha_{n}\bigl(\bigl\| f(x_{n})-p\bigr\| ^{2}+2 \bigl\| f(x_{n})-p\bigr\| \cdot\|u_{n}-p\|\bigr) \\ \leq&\|x_{n}-x_{n+1}\|^{2}+2\|x_{n}-x_{n+1} \|\cdot\|x_{n+1}-p\| \\ &{}+\alpha_{n}\bigl(\bigl\| f(x_{n})-p\bigr\| ^{2}+2 \bigl\| f(x_{n})-p\bigr\| \cdot\|u_{n}-p\|\bigr). \end{aligned}$$

Since \(\{x_{n}\}\) is bounded, \(\alpha_{n}\rightarrow0\) and \(\|x_{n}-x_{n+1}\|\rightarrow0\), we have

$$ \lim_{n\rightarrow\infty}\|x_{n}-u_{n} \|=0. $$
(3.23)

Next, we derive that

$$\begin{aligned} \bigl\| x_{n}-T_{n}(x_{n})\bigr\| =&\bigl\| x_{n}-x_{n+1}+x_{n+1}-T_{n}(x_{n}) \bigr\| \\ \leq&\|x_{n}-x_{n+1}\|+\bigl\| \alpha_{n}f(x_{n})+(1- \alpha _{n})T_{n}(u_{n})-T_{n}(x_{n}) \bigr\| \\ \leq&\|x_{n}-x_{n+1}\|+\alpha_{n} \bigl\| f(x_{n})-T_{n}(x_{n})\bigr\| +(1-\alpha _{n})\|u_{n}-x_{n}\|. \end{aligned}$$

From (3.20), (3.23), and \(\alpha_{n}\rightarrow0\), we have

$$ \bigl\| x_{n}-T_{n}(x_{n})\bigr\| \rightarrow0, $$

and it follows that \(\|u_{n}-T_{n}(u_{n})\|\rightarrow0\).

Similarly, for the sequence generated by the RGPA (3.16), using (3.9), (3.22), and the same argument as in the proof of Theorem 3.1, we derive that

$$\begin{aligned} \|x_{n+1}-p\|^{2} \leq&\|x_{n}-p\|^{2}- \|u_{n}-x_{n}\|^{2}+2\|u_{n}-p\|\cdot \lambda_{n}\beta \|p\|+\lambda_{n}^{2} \beta^{2}\|p\|^{2} \\ &{}+\alpha_{n}\bigl(2\bigl(\|u_{n}-p\|+\lambda_{n} \beta\|p\|\bigr)\cdot\bigl\| f(x_{n})-p\bigr\| +\bigl\| f(x_{n})-p\bigr\| ^{2} \bigr) \end{aligned}$$

and hence

$$\begin{aligned} \|x_{n}-u_{n}\|^{2} \leq&\|x_{n+1}-x_{n} \|\bigl(\|x_{n}-p\|+\|x_{n+1}-p\|\bigr) \\ &{}+2\beta\lambda_{n}\|u_{n}-p\|\cdot\|p\|+ \lambda_{n}^{2}\beta^{2}\|p\|^{2} \\ &{}+\alpha_{n}\bigl(2\bigl(\|u_{n}-p\|+\beta \lambda_{n}\|p\|\bigr)\cdot\bigl\| f(x_{n})-p\bigr\| +\bigl\| f(x_{n})-p \bigr\| ^{2}\bigr). \end{aligned}$$

Since \(\{x_{n}\}\), \(\{f(x_{n})\}\), and \(\{u_{n}\}\) are bounded, \(\alpha_{n}\rightarrow0\), \(\lambda_{n}\rightarrow0\), and \(\|x_{n+1}-x_{n}\|\rightarrow0\), we also derive the result (3.23).

Next, we derive that

$$\begin{aligned} \bigl\| x_{n}-T_{\lambda_{n}}(x_{n})\bigr\| \leq& \|x_{n}-x_{n+1}\|+\alpha_{n}\bigl\| f(x_{n})-T_{\lambda_{n}}(u_{n}) \bigr\| +\| u_{n}-x_{n}\|. \end{aligned}$$

From (3.20), (3.23), and \(\alpha_{n}\rightarrow0\), we also have

$$ \bigl\| x_{n}-T_{\lambda_{n}}(x_{n})\bigr\| \rightarrow0. $$

It follows that \(\|u_{n}-T_{\lambda_{n}}(u_{n})\|\rightarrow0\).

Now we show that

$$ \limsup_{n\rightarrow\infty}\bigl\langle x_{n}-q,-(I-f)q\bigr\rangle \leq0, $$

where \(q\in(A+B)^{-1}0\cap U\) is the unique solution of the variational inequality (3.4).

Indeed, take a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that

$$ \limsup_{n\rightarrow\infty}\bigl\langle x_{n}-q,-(I-f)q\bigr\rangle =\lim_{k\rightarrow\infty}\bigl\langle x_{n_{k}}-q,-(I-f)q\bigr\rangle . $$

Since \(\{x_{n}\}\) is bounded, without loss of generality, we may assume that \(x_{n_{k}}\rightharpoonup\hat{x}\).

By the same argument as in the proof of Theorem 3.1, we have \(\hat{x}\in (A+B)^{-1}0\cap U\).

Since \(q=P_{(A+B)^{-1}0\cap U}f(q)\), it follows that

$$ \limsup_{n\rightarrow\infty}\bigl\langle (I-f)q,q-x_{n} \bigr\rangle =\bigl\langle (I-f)q,q-\hat{x}\bigr\rangle \leq0. $$
(3.24)

Finally, we show that \(x_{n}\rightarrow q\).

In fact, for the sequence generated by the GPA (3.15),

$$\begin{aligned} x_{n+1}-q =&\alpha_{n}f(x_{n})+(1- \alpha_{n})T_{n}(u_{n})-q \\ =&\alpha_{n}\bigl(f(x_{n})-f(q)\bigr)+ \alpha_{n}\bigl(f(q)-q\bigr)+(1-\alpha _{n}) \bigl(T_{n}(u_{n})-T_{n}(q)\bigr). \end{aligned}$$

So, from (2.1) and (3.22), we obtain

$$\begin{aligned} \|x_{n+1}-q\|^{2} =&\bigl\| \alpha_{n} \bigl(f(x_{n})-f(q)\bigr)+\alpha_{n}\bigl(f(q)-q\bigr)+(1- \alpha _{n}) \bigl(T_{n}(u_{n})-T_{n}(q) \bigr)\bigr\| ^{2} \\ \leq&(1-\alpha_{n})^{2}\bigl\| T_{n}(u_{n})-T_{n}(q) \bigr\| ^{2} \\ &{}+2\alpha_{n}\bigl\langle f(x_{n})-f(q)-(I-f)q,x_{n+1}-q \bigr\rangle \\ \leq&(1-\alpha_{n})^{2}\|u_{n}-q \|^{2}+2\alpha_{n}k\|x_{n}-q\|\cdot\| x_{n+1}-q\| \\ &{}+2\alpha_{n}\bigl\langle -(I-f)q,x_{n+1}-q\bigr\rangle \\ \leq&(1-\alpha_{n})^{2}\|x_{n}-q \|^{2}+\alpha_{n}k\bigl(\|x_{n}-q\|^{2}+ \| x_{n+1}-q\|^{2}\bigr) \\ &{}+2\alpha_{n}\bigl\langle -(I-f)q,x_{n+1}-q\bigr\rangle . \end{aligned}$$

It follows that

$$\begin{aligned} \|x_{n+1}-q\|^{2} \leq&\frac{1-2\alpha_{n}+\alpha_{n}k}{1-\alpha_{n}k} \|x_{n}-q\| ^{2}+\frac{\alpha_{n}^{2}}{1-\alpha_{n}k}\|x_{n}-q \|^{2} \\ &{}+\frac{2\alpha_{n}}{1-\alpha_{n}k}\bigl\langle -(I-f)q,x_{n+1}-q\bigr\rangle \\ \leq&\bigl(1-2(1-k)\alpha_{n}\bigr)\|x_{n}-q \|^{2}+\frac{\alpha_{n}^{2}}{1-\alpha _{n}k}\|x_{n}-q\|^{2} \\ &{}+\frac{2\alpha_{n}}{1-\alpha_{n}k}\bigl\langle -(I-f)q,x_{n+1}-q\bigr\rangle \\ \leq&\bigl(1-2(1-k)\alpha_{n}\bigr)\|x_{n}-q \|^{2}+2(1-k)\alpha_{n}\biggl[\frac{\alpha _{n}}{2(1-k)(1-\alpha_{n}k)}M \\ &{}+\frac{1}{(1-k)(1-\alpha_{n}k)}\bigl\langle -(I-f)q,x_{n+1}-q\bigr\rangle \biggr] \\ =&\bigl(1-2(1-k)\alpha_{n}\bigr)\|x_{n}-q \|^{2}+2(1-k)\alpha_{n}\delta_{n}, \end{aligned}$$

where \(\delta_{n}=\frac{\alpha_{n}}{2(1-k)(1-\alpha_{n}k)}M+\frac {1}{(1-k)(1-\alpha_{n}k)}\langle -(I-f)q,x_{n+1}-q\rangle\), and \(M=\sup\{\|x_{n}-q\|^{2}: n\in\mathbb{N}\}\).

It is easy to see that \(\lim_{n\rightarrow\infty}2(1-k)\alpha_{n}=0\), \(\sum_{n=1}^{\infty}2(1-k)\alpha_{n}=\infty\), and \(\limsup_{n\rightarrow\infty}\delta_{n}\leq0\) by (3.24). Hence, by Lemma 2.3, the sequence \(\{x_{n}\}\) converges strongly to q.

Similarly, for the sequence generated by the RGPA (3.16),

$$\begin{aligned} x_{n+1}-q =&\alpha_{n}\bigl(f(x_{n})-f(q)\bigr)+ \alpha_{n}\bigl(f(q)-q\bigr)+(1-\alpha _{n}) \bigl(T_{\lambda_{n}}(u_{n})-T_{\lambda_{n}}(q)\bigr) \\ &{}+(1-\alpha_{n}) \bigl(T_{\lambda_{n}}(q)-T(q)\bigr). \end{aligned}$$

So, from (3.9) and (3.22), we derive

$$\begin{aligned} \|x_{n+1}-q\|^{2} \leq&\bigl(1-\alpha_{n}(1-k) \bigr)\|x_{n}-q\|\cdot\|x_{n+1}-q\|+\lambda_{n} \beta \|q\|\cdot\|x_{n+1}-q\| \\ &{}+\alpha_{n}\bigl\langle -(I-f)q,x_{n+1}-q\bigr\rangle \\ \leq&\bigl(1-\alpha_{n}(1-k)\bigr)\frac{1}{2}\bigl( \|x_{n}-q\|^{2}+\|x_{n+1}-q\| ^{2}\bigr) \\ &{}+\alpha_{n}\biggl(\bigl\langle -(I-f)q,x_{n+1}-q\bigr\rangle +\frac{\lambda_{n}}{\alpha_{n}}\beta\|q\|\cdot\| x_{n+1}-q\|\biggr). \end{aligned}$$

It follows that

$$\begin{aligned} \|x_{n+1}-q\|^{2} \leq&\bigl(1-\alpha_{n}(1-k) \bigr)\|x_{n}-q\|^{2} \\ &{}+\frac{2\alpha_{n}}{1+\alpha_{n}(1-k)}\biggl(\bigl\langle -(I-f)q,x_{n+1}-q\bigr\rangle +\frac{\lambda_{n}}{\alpha_{n}}\beta\|q\|\cdot\| x_{n+1}-q\|\biggr) \end{aligned}$$

Since \(\{x_{n}\}\) is bounded, we can take a constant \(M^{\prime}>0\) such that

$$ M^{\prime}\geq\|x_{n+1}-q\|, \quad n\geq1. $$

Then we obtain

$$ \|x_{n+1}-q\|^{2}\leq\bigl(1- \alpha_{n}(1-k)\bigr)\|x_{n}-q\|^{2}+ \alpha_{n}\delta_{n}, $$
(3.25)

where \(\delta_{n}=\frac{2}{1+\alpha_{n}(1-k)}[\langle -(I-f)q,x_{n+1}-q\rangle+\frac{\lambda_{n}}{\alpha_{n}}\beta\|q\| M^{\prime}]\).

By (3.24) and \(\lambda_{n}=o(\alpha_{n})\), we get \(\limsup_{n\rightarrow\infty}\delta_{n}\leq0\). Now applying Lemma 2.3 to (3.25) concludes that \(x_{n}\rightarrow q\) as \(n\rightarrow\infty\). The variational inequality (3.4) can be rewritten as

$$ \bigl\langle f(q)-q,q-z\bigr\rangle \geq0, \quad\forall z\in(A+B)^{-1}0 \cap U. $$

By Lemma 2.1, it is equivalent to the following fixed point equation:

$$ P_{(A+B)^{-1}0\cap U}f(q)=q. $$

This completes the proof. □

In the following, based on Theorem 3.2 and taking the RGPA as an example, i.e., the sequence generated by (3.16), we give new strong convergence theorems in Hilbert spaces, which are useful in nonlinear analysis and optimization.

In 1994, Censor and Elfving [22] introduced the split feasibility problem (SFP). Various algorithms were then introduced to solve it (see [13, 17, 23], and [21, 24, 25]). Recently, many authors have paid attention to the split feasibility problem (SFP) due to its wide application in signal processing and image reconstruction (see [15, 16] and [26]).

Let C and Q be nonempty, closed, and convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. Then the SFP under consideration in this paper can be formulated mathematically as finding a point x satisfying the following property:

$$ x\in C\quad \mbox{and}\quad Fx\in Q, $$
(3.26)

where \(F:H_{1}\rightarrow H_{2}\) is a bounded linear operator. It is clear that \(x^{*}\) is a solution to the split feasibility problem (3.26) if and only if \(x^{*}\in C\) and \(Fx^{*}-P_{Q}Fx^{*}=0\). We define the proximity function g by

$$ g(x)=\frac{1}{2}\|Fx-P_{Q}Fx\|^{2}. $$

Consider the constrained convex minimization problem

$$ \min_{x\in C}g(x)=\min_{x\in C} \frac{1}{2}\|Fx-P_{Q}Fx\|^{2}. $$
(3.27)

Then \(x^{*}\) solves the SFP (3.26) if and only if \(x^{*}\) solves the minimization problem (3.27) with the minimum equal to 0.

In particular, Byrne [24] introduced the so-called CQ algorithm. Take an initial guess \(x_{0}\in H_{1}\) arbitrarily, and define \(\{x_{n}\}\) recursively as follows:

$$ x_{n+1}=P_{C}\bigl(I-\beta F^{*}(I-P_{Q})F\bigr)x_{n}, \quad n\geq0, $$
(3.28)

where \(0<\beta<2/\|F\|^{2}\) and \(P_{C}\) denotes the projection onto C. Then the sequence \(\{x_{n}\}\) generated by (3.28) converges weakly to a solution of the SFP.
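As a concrete illustration of the CQ iteration (3.28), the following is a minimal numerical sketch under assumptions of our own (not from [24]): \(H_{1}=\mathbb{R}^{m}\), \(H_{2}=\mathbb{R}^{p}\), C and Q are boxes so that both projections are coordinatewise clippings, and F is a random matrix.

```python
import numpy as np

# Minimal sketch of the CQ algorithm (3.28); all concrete choices below
# (box sets, random F) are illustrative assumptions, not the paper's.
rng = np.random.default_rng(0)
m, p = 5, 3
F = rng.standard_normal((p, m))

P_C = lambda x: np.clip(x, -1.0, 1.0)   # projection onto C = [-1,1]^m
P_Q = lambda y: np.clip(y, 0.0, 2.0)    # projection onto Q = [0,2]^p

beta = 1.0 / np.linalg.norm(F, 2) ** 2  # any beta in (0, 2/||F||^2)

x = rng.standard_normal(m)              # arbitrary initial guess x_0 in H_1
for _ in range(5000):
    # x_{n+1} = P_C (I - beta F*(I - P_Q) F) x_n
    x = P_C(x - beta * F.T @ (F @ x - P_Q(F @ x)))

# This instance is consistent (x = 0 solves it), so g(x) should be near 0.
print(0.5 * np.linalg.norm(F @ x - P_Q(F @ x)) ** 2)
```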

Let \(\alpha>0\) and let A be an α-inverse-strongly monotone mapping of C into H. Let B be a maximal monotone operator on a Hilbert space H such that the domain of B is included in C. Let \(J_{r}=(I+rB)^{-1}\) be the resolvent of B for \(r>0\). In order to obtain a strongly convergent iterative sequence that solves the SFP, we propose a new algorithm as follows: \(x_{1}\in C\),

$$ \left \{ \textstyle\begin{array}{@{}l} u_{n}=J_{r_{n}}(I-r_{n}A)(x_{n}),\\ x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n})T_{\lambda_{n}}(u_{n}),\quad \forall n\in\mathbb{N}, \end{array}\displaystyle \right . $$
(3.29)

where \(f:C\rightarrow C\) is a contraction with constant \(k\in(0,1)\), \(T_{\lambda_{n}}=P_{C}(I-\beta(F^{*}(I-P_{Q})F+\lambda_{n}I))\) for all n, and \(\beta\in(0,2/\|F\|^{2})\). We can show that the sequence \(\{x_{n}\}\) generated by (3.29) converges strongly to a solution of the SFP (3.26) provided the sequence \(\{\alpha_{n}\}\subset(0,1)\). Applying Theorem 3.2, we obtain the following result.

Theorem 3.3

Assume that the split feasibility problem (3.26) is consistent. Let the sequence \(\{x_{n}\}\) be generated by (3.29), where the sequences \(\{\alpha_{n}\}\) and \(\{\lambda_{n}\}\) satisfy the conditions (C1) and (C2). Then the sequence \(\{x_{n}\}\) converges strongly to a point \(q\in(A+B)^{-1}0\cap V\), where V denotes the solution set of the SFP (3.26).

Proof

By the definition of the proximity function g, we have

$$ \nabla g(x)=F^{*}(I-P_{Q})Fx. $$
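This gradient formula can be read off from the standard fact that half the squared distance to Q, \(u\mapsto\frac{1}{2}\|u-P_{Q}u\|^{2}\), is differentiable with gradient \(I-P_{Q}\); composing with the bounded linear operator F, the chain rule gives

$$ \nabla g(x)=F^{*}\bigl((I-P_{Q}) (Fx)\bigr)=F^{*}(I-P_{Q})Fx. $$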

Since \(P_{Q}\) is a \(1/2\)-averaged mapping, \(I-P_{Q}\) is 1-ism. For all \(x,y\in C\), we obtain

$$\begin{aligned} &\bigl\langle \nabla g(x)-\nabla g(y),x-y\bigr\rangle -1/\|F\|^{2}\cdot \bigl\| \nabla g(x)-\nabla g(y)\bigr\| ^{2} \\ &\quad=\bigl\langle F^{*}(I-P_{Q})Fx-F^{*}(I-P_{Q})Fy,x-y \bigr\rangle \\ &\qquad{}-1/\|F\|^{2}\cdot\bigl\| F^{*}(I-P_{Q})Fx-F^{*}(I-P_{Q})Fy \bigr\| ^{2} \\ &\quad=\bigl\langle F^{*}\bigl((I-P_{Q})Fx-(I-P_{Q})Fy \bigr),x-y\bigr\rangle \\ &\qquad{} -1/\|F\|^{2}\cdot\bigl\| F^{*}\bigl((I-P_{Q})Fx-(I-P_{Q})Fy \bigr)\bigr\| ^{2} \\ &\quad=\bigl\langle (I-P_{Q})Fx-(I-P_{Q})Fy,Fx-Fy\bigr\rangle \\ &\qquad{}-1/\|F\|^{2}\cdot\bigl\| F^{*}\bigl((I-P_{Q})Fx-(I-P_{Q})Fy \bigr)\bigr\| ^{2} \\ &\quad\geq\bigl\| (I-P_{Q})Fx-(I-P_{Q})Fy\bigr\| ^{2} -\bigl\| (I-P_{Q})Fx-(I-P_{Q})Fy\bigr\| ^{2} \\ &\quad=0. \end{aligned}$$

So ∇g is \(1/\|F\|^{2}\)-ism.
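This inverse-strong monotonicity is easy to probe numerically; the following check (our illustration, with a random F and a box Q) tests the displayed inequality on random pairs.

```python
import numpy as np

# Numerical sanity check that grad g = F*(I - P_Q)F is 1/||F||^2-ism.
rng = np.random.default_rng(2)
p, m = 3, 5
F = rng.standard_normal((p, m))
P_Q = lambda y: np.clip(y, 0.0, 2.0)          # Q = [0,2]^p, an assumption
grad_g = lambda x: F.T @ (F @ x - P_Q(F @ x))

L = np.linalg.norm(F, 2) ** 2                 # the ism constant is 1/L
for _ in range(1000):
    x, y = rng.standard_normal(m), rng.standard_normal(m)
    d = grad_g(x) - grad_g(y)
    # <grad g(x) - grad g(y), x - y> >= (1/L) ||grad g(x) - grad g(y)||^2
    assert d @ (x - y) >= (1.0 / L) * (d @ d) - 1e-9
print("1/||F||^2-ism inequality held on all sampled pairs")
```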

Set \(g_{\lambda_{n}}(x)=g(x)+\frac{\lambda_{n}}{2}\|x\|^{2}\). Consequently,

$$\begin{aligned} \nabla g_{\lambda_{n}}(x) =&\nabla g(x)+\lambda_{n}I(x) =F^{*}(I-P_{Q})Fx+\lambda_{n}x. \end{aligned}$$

Then the iterative scheme (3.29) is equivalent to

$$ \left \{ \textstyle\begin{array}{@{}l} u_{n}=J_{r_{n}}(I-r_{n}A)(x_{n}),\\ x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n})T_{\lambda_{n}}(u_{n}),\quad \forall n\in\mathbb{N}, \end{array}\displaystyle \right . $$

where \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\) for all n, and \(\beta\in(0,2/\|F\|^{2})\). Hence all the hypotheses of Theorem 3.2 are satisfied, and the conclusion follows. □
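Before turning to further applications, here is a minimal numerical sketch of scheme (3.29); every concrete ingredient is an assumption of ours for illustration: we take \(B=\partial i_{C}\) (so, as verified below, the resolvent \(J_{r}\) is \(P_{C}\) for every \(r>0\)), \(A(x)=x\) (which is 1-ism with its zero \(0\in C\)), \(f(x)=x/2\), and box sets C and Q.

```python
import numpy as np

# Illustrative sketch of scheme (3.29); B = subdifferential of the indicator
# of C, so the resolvent J_r is simply P_C (our simplifying assumption).
rng = np.random.default_rng(1)
m, p = 5, 3
F = rng.standard_normal((p, m))

P_C = lambda x: np.clip(x, -1.0, 1.0)
P_Q = lambda y: np.clip(y, 0.0, 2.0)
A = lambda x: x                       # alpha-ism with alpha = 1
f = lambda x: 0.5 * x                 # contraction with k = 1/2
J = lambda r, y: P_C(y)               # resolvent of B = partial i_C, any r > 0

beta = 1.0 / np.linalg.norm(F, 2) ** 2    # beta in (0, 2/||F||^2)
x = rng.standard_normal(m)
for n in range(1, 20001):
    alpha_n = 1.0 / (n + 1)               # (C1)-type choice
    lam_n = 1.0 / (n + 1) ** 2            # lambda_n = o(alpha_n)
    r_n = 0.5                             # r_n in (0, 2*alpha)
    u = J(r_n, x - r_n * A(x))
    T_u = P_C(u - beta * (F.T @ (F @ u - P_Q(F @ u)) + lam_n * u))
    x = alpha_n * f(x) + (1 - alpha_n) * T_u
# In this instance (A+B)^{-1}0 and V both contain 0 and their intersection
# is {0}, so the iterates should approach 0.
print(np.linalg.norm(x))
```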

We now give two further applications of Theorem 3.2.

Let h be a proper lower semicontinuous convex function from a Hilbert space H into \((-\infty,\infty]\). Then the subdifferential ∂h of h is defined as follows:

$$ \partial h(x)=\bigl\{ z\in H: h(x)+\langle z, y-x\rangle\leq h(y), \forall y\in H \bigr\} $$

for all \(x\in H\). From Rockafellar [23], we know that ∂h is a maximal monotone operator. Let \(i_{C}\) be the indicator function of C (C is a nonempty, closed, and convex subset of H), i.e.,

$$ i_{C}(x)=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} 0, & x\in C,\\ \infty,& x\notin C. \end{array}\displaystyle \right . $$

Then \(i_{C}\) is a proper lower semicontinuous convex function on H and the subdifferential \(\partial i_{C}\) of \(i_{C}\) is a maximal monotone operator. So we can define the resolvent \(J_{r}\) of \(\partial i_{C}\) for \(r>0\), i.e.,

$$ J_{r}x=(I+r\partial i_{C})^{-1}x $$

for all \(x\in H\). We have, for any \(x\in H\) and \(q\in C\),

$$\begin{aligned} q=J_{r}x \quad \Longleftrightarrow&\quad x\in q+r\partial i_{C}(q) \\ \quad \Longleftrightarrow&\quad x\in q+rN_{C}(q) \\ \quad \Longleftrightarrow&\quad x-q\in rN_{C}(q) \\ \quad \Longleftrightarrow&\quad \frac{1}{r}\langle x-q,p-q\rangle\leq0, \quad \forall p\in C \\ \quad \Longleftrightarrow&\quad \langle x-q,p-q\rangle\leq0,\quad \forall p\in C \\ \quad \Longleftrightarrow&\quad q=P_{C}x, \end{aligned}$$

where \(N_{C}(q)\) is the normal cone to C at q, i.e.,

$$ N_{C}(q)=\bigl\{ z\in H: \langle z,p-q\rangle\leq0, \forall p\in C \bigr\} . $$
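The identification \(J_{r}=P_{C}\) above is easy to verify numerically; the following sketch (our illustration, with C a box so that \(P_{C}\) is coordinatewise clipping) checks the normal-cone inequality \(\langle x-q,p-q\rangle\leq0\) at \(q=P_{C}x\).

```python
import numpy as np

# Check that q = P_C x satisfies <x - q, p - q> <= 0 for all p in C,
# i.e., x - q lies in the normal cone N_C(q); C = [-1,1]^m is an assumption.
rng = np.random.default_rng(3)
m = 5
P_C = lambda x: np.clip(x, -1.0, 1.0)

x = 3.0 * rng.standard_normal(m)
q = P_C(x)                               # equals J_r x for every r > 0
for _ in range(1000):
    p = rng.uniform(-1.0, 1.0, size=m)   # random point of C
    assert (x - q) @ (p - q) <= 1e-9
print("q = P_C x satisfies the normal-cone inequality")
```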

Based on Theorem 3.2, we prove a strong convergence theorem for inverse-strongly monotone operators in a Hilbert space.

Theorem 3.4

Let C be a nonempty, closed, and convex subset of the Hilbert space H. Let \(A:C\rightarrow H\) be an α-inverse-strongly monotone mapping with \(\alpha>0\). Let \(f:C\rightarrow C\) be a k-contraction mapping with \(0< k<1\). Suppose that ∇g is \(1/L\)-ism with \(L>0\). Let \(x_{1}=x\in C\) and let \(\{x_{n}\}\subset C\) be a sequence generated by

$$ x_{n+1}=\alpha_{n}f(x_{n})+(1- \alpha_{n})T_{\lambda_{n}}P_{C}(I-r_{n}A) (x_{n}) $$

for all \(n\in\mathbb{N}\), where \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), \(\nabla g_{\lambda_{n}}=\nabla g+\lambda_{n}I\), and \(\beta\in(0,2/L)\). Let \(\{\alpha_{n}\}\), \(\{r_{n}\}\), and \(\{\lambda_{n}\}\) satisfy the conditions (C1)-(C3) which appear in Theorem 3.2. Suppose \(VI(C,A)\cap U\neq\emptyset\). Then \(\{x_{n}\}\) converges strongly to a point \(q_{0}\in VI(C,A)\cap U\), where \(q_{0}\) is the unique fixed point of \(P_{VI(C,A)\cap U}f\). This point \(q_{0}\) is also the unique solution of the hierarchical variational inequality

$$ \bigl\langle (I-f)q_{0},z-q_{0}\bigr\rangle \geq0,\quad z\in VI(C,A)\cap U. $$

Proof

Put \(B=\partial i_{C}\) in Theorem 3.2. Then for \(r_{n}>0\), we have that \(J_{r_{n}}=P_{C}\). Furthermore we have \((A+\partial i_{C})^{-1}0=VI(C,A)\). Indeed, for \(q\in C\), we have

$$\begin{aligned} q\in(A+\partial i_{C})^{-1}0\quad \Longleftrightarrow&\quad 0\in Aq+\partial i_{C}(q) \\ \quad \Longleftrightarrow&\quad 0\in Aq+N_{C}(q) \\ \quad \Longleftrightarrow&\quad -Aq\in N_{C}(q) \\ \quad \Longleftrightarrow&\quad \langle-Aq,p-q\rangle\leq0,\quad \forall p\in C \\ \quad \Longleftrightarrow&\quad \langle Aq,p-q\rangle\geq0,\quad \forall p\in C \\ \quad \Longleftrightarrow&\quad q\in VI(C,A). \end{aligned}$$

Thus we obtain the desired result by Theorem 3.2. □
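To see Theorem 3.4 at work, here is a minimal sketch under assumptions of our own: C is a box, \(A=\nabla\varphi\) with \(\varphi(x)=\frac{1}{2}\|x-c\|^{2}\) (so A is 1-ism and \(VI(C,A)=\{P_{C}c\}\)), and we take the same quadratic as the objective g, so that \(U=VI(C,A)\) and the intersection is nonempty.

```python
import numpy as np

# Illustrative instance of the scheme in Theorem 3.4; all concrete choices
# (box C, quadratic A and g, f(x) = x/2) are our assumptions.
rng = np.random.default_rng(4)
m = 5
c = 2.0 * np.ones(m)                     # c lies outside C, so P_C c = ones
P_C = lambda x: np.clip(x, -1.0, 1.0)    # C = [-1,1]^m
A = lambda x: x - c                      # 1-inverse-strongly monotone
grad_g = lambda x: x - c                 # grad g is 1/L-ism with L = 1
f = lambda x: 0.5 * x                    # k = 1/2 contraction

beta = 1.0                               # beta in (0, 2/L)
x = rng.standard_normal(m)
for n in range(1, 5001):
    alpha_n = 1.0 / (n + 1)
    lam_n = 1.0 / (n + 1) ** 2
    r_n = 1.0                            # r_n in (0, 2*alpha) = (0, 2)
    u = P_C(x - r_n * A(x))              # J_{r_n} = P_C since B = partial i_C
    x = alpha_n * f(x) + (1 - alpha_n) * P_C(u - beta * (grad_g(u) + lam_n * u))
print(np.linalg.norm(x - P_C(c)))        # distance to q_0 = P_C c, tends to 0
```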

Recall that a mapping \(W: C\rightarrow H\) is called a widely strict pseudo-contraction if there exists \(r\in\mathbb{R}\) with \(r<1\) such that

$$ \|Wx-Wy\|^{2}\leq\|x-y\|^{2}+r\bigl\| (I-W)x-(I-W)y \bigr\| ^{2},\quad \forall x,y\in C. $$

We call such W a widely r-strict pseudo-contraction. If \(0\leq r<1\), then W is a strict pseudo-contraction. Based on Theorem 3.2, we obtain the following result, which generalizes Zhou’s strong convergence theorem [25] for strict pseudo-contractions in a Hilbert space.

Theorem 3.5

Let C be a nonempty, closed, and convex subset of a Hilbert space H. Let \(W:C\rightarrow H\) be a widely r-strict pseudo-contraction with \(r<1\) (\(r\in\mathbb{R}\)) and suppose that \(\operatorname{Fix}(W)\neq\emptyset\). Let \(f:C\rightarrow C\) be a k-contraction with \(0< k<1\). Suppose that ∇g is \(1/L\)-ism with \(L>0\). Let \(x_{1}=x\in C\) and let \(\{x_{n}\}\subset C\) be a sequence generated by

$$ x_{n+1}=\alpha_{n}f(x_{n})+(1- \alpha_{n})T_{\lambda_{n}}P_{C}\bigl\{ (1-t_{n})W+t_{n}I \bigr\} (x_{n}) $$

for all \(n\in\mathbb{N}\), where \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), \(\nabla g_{\lambda_{n}}=\nabla g+\lambda_{n}I\), and \(\beta\in(0,2/L)\). Let \(\{\alpha_{n}\}\) and \(\{\lambda_{n}\}\) satisfy the conditions (C1) and (C3), respectively, which appear in Theorem 3.2, and let \(\{t_{n}\}\) satisfy:

  1. \(\{t_{n}\}\subset(-\infty, 1)\);

  2. \(r\leq t_{n}\leq b<1\);

  3. \(\sum_{n=1}^{\infty}|t_{n}-t_{n+1}|<\infty\).

Then \(\{x_{n}\}\) converges strongly to a point \(q_{0}\in \operatorname{Fix}(W)\cap U\) which is a unique fixed point of \(P_{\operatorname{Fix}(W)\cap U}f\) in \(\operatorname{Fix}(W)\cap U\).

Proof

Put \(B=\partial i_{C}\) and \(A=I-W\) in Theorem 3.2. Furthermore, we put \(a=1-b\), \(r_{n}=1-t_{n}\), and \(2\alpha=1-r\) in Theorem 3.2. From \(\{t_{n}\}\subset(-\infty, 1)\) and \(r\leq t_{n}\leq b<1\), we get \(\{r_{n}\}\subset(0,\infty)\) and \(0< a\leq r_{n}\leq2\alpha\). We also get

$$ \sum_{n=1}^{\infty}|r_{n+1}-r_{n}|= \sum_{n=1}^{\infty }|t_{n+1}-t_{n}|< \infty $$

and

$$ I-r_{n}A=I-(1-t_{n}) (I-W)=(1-t_{n})W+t_{n}I. $$

Furthermore we have \((A+\partial i_{C})^{-1}0=\operatorname{Fix}(W)\). Indeed, for \(q\in C\), we have

$$\begin{aligned} q\in(A+\partial i_{C})^{-1}0\quad \Longleftrightarrow&\quad 0\in Aq+\partial i_{C}(q) \\ \quad \Longleftrightarrow&\quad 0\in q-Wq+N_{C}(q) \\ \quad \Longleftrightarrow&\quad Wq-q\in N_{C}(q) \\ \quad \Longleftrightarrow& \quad\langle Wq-q,p-q\rangle\leq0,\quad \forall p\in C \\ \quad \Longleftrightarrow&\quad P_{C}W(q)=q. \end{aligned}$$

Since \(\operatorname{Fix}(W)\neq\emptyset\), we see from [25] that \(\operatorname{Fix}(P_{C}W)=\operatorname{Fix}(W)\). Thus we obtain the desired result by Theorem 3.2. □
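As with the previous results, Theorem 3.5 can be illustrated numerically. In the sketch below every concrete choice is ours: \(W(x)=-2x\) is a widely \(\frac{1}{3}\)-strict pseudo-contraction (since \(\|Wx-Wy\|^{2}=4\|x-y\|^{2}\leq\|x-y\|^{2}+\frac{1}{3}\cdot9\|x-y\|^{2}\)) with \(\operatorname{Fix}(W)=\{0\}\), the constant choice \(t_{n}=\frac{1}{2}\) satisfies conditions (1)-(3), and \(g(x)=\frac{1}{2}\|x\|^{2}\) puts 0 in U as well.

```python
import numpy as np

# Illustrative instance of the scheme in Theorem 3.5 (all choices are ours).
rng = np.random.default_rng(5)
m = 5
P_C = lambda x: np.clip(x, -1.0, 1.0)    # C = [-1,1]^m
W = lambda x: -2.0 * x                   # widely (1/3)-strict pseudo-contraction
f = lambda x: 0.5 * x                    # k = 1/2 contraction of C into C
grad_g = lambda x: x                     # g(x) = ||x||^2 / 2, so L = 1

beta, t = 0.5, 0.5                       # beta in (0, 2/L); r = 1/3 <= t_n < 1
x = rng.standard_normal(m)
for n in range(1, 2001):
    alpha_n = 1.0 / (n + 1)
    lam_n = 1.0 / (n + 1) ** 2
    y = P_C((1 - t) * W(x) + t * x)      # P_C{(1 - t_n)W + t_n I}(x_n)
    x = alpha_n * f(x) + (1 - alpha_n) * P_C(y - beta * (grad_g(y) + lam_n * y))
print(np.linalg.norm(x))                 # iterates approach q_0 = 0
```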

With these applications of Theorem 3.2 in hand, we give our conclusion in the next section.

4 Conclusion

In a real Hilbert space, methods for solving the constrained convex minimization problem have been extensively studied. Recently, Tian and Liu were the first to propose composite iterative algorithms to find a common solution of an equilibrium problem and a constrained convex minimization problem. In this paper, for solving constrained convex minimization problems and finding zeros of the sum of two operators in Hilbert spaces, we use two algorithms: the gradient-projection algorithm (GPA) and the regularized gradient-projection algorithm (RGPA). With them we prove new strong convergence theorems, finding a common solution by the GPA and a unique solution by the RGPA. Taking the RGPA as an example, several new strong convergence theorems are obtained. Under suitable conditions, the constrained convex minimization problem can be transformed into the SFP, and zeros of the sum of two operators can be transformed into the variational inequality problem and the fixed point problem, which play important roles in nonlinear analysis and optimization.

References

  1. Browder, FE, Petryshyn, WV: Construction of fixed points of nonlinear mappings in Hilbert spaces. J. Math. Anal. Appl. 20, 197-228 (1967)

  2. Ceng, LC, Ansari, QH, Khan, AR, Yao, JC: Strong convergence of composite iterative schemes for zeros of m-accretive operators in Banach spaces. Nonlinear Anal., Theory Methods Appl. 70, 1830-1840 (2009)

  3. Ceng, LC, Ansari, QH, Khan, AR, Yao, JC: Viscosity approximation methods for strongly positive and monotone operators. Fixed Point Theory 10, 35-71 (2009)

  4. Ceng, LC, Ansari, QH, Yao, JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 74(16), 5286-5302 (2011)

  5. Ceng, LC, Ansari, QH, Yao, JC: Extragradient-projection method for solving constrained convex minimization problems. Numer. Algebra Control Optim. 1(3), 341-359 (2011)

  6. Ceng, LC, Ansari, QH, Wen, CF: Multi-step implicit iterative methods with regularization for minimization problems and fixed point problems. J. Inequal. Appl. 2013, Article ID 240 (2013)

  7. Sahu, DR, Ansari, QH, Yao, YC: A unified hybrid iterative method for hierarchical minimization problems. J. Comput. Appl. Math. 253, 208-221 (2013)

  8. Xu, HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)

  9. Moudafi, A: Viscosity approximation method for fixed-points problems. J. Math. Anal. Appl. 241, 46-55 (2000)

  10. Marino, G, Xu, HK: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 318, 43-52 (2006)

  11. Takahashi, S, Takahashi, W: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 331, 506-515 (2007)

  12. Tian, M, Liu, L: General iterative methods for equilibrium and constrained convex minimization problem. Optimization 63(9), 1367-1385 (2014)

  13. Lin, LJ, Takahashi, W: A general iterative method for hierarchical variational inequality problems in Hilbert spaces and applications. Positivity 16, 429-453 (2012)

  14. Kong, ZR, Ceng, LC, Ansari, QH, Pang, CT: Multistep hybrid extragradient method for triple hierarchical variational inequalities. Abstr. Appl. Anal. 2013, Article ID 718624 (2013)

  15. Eshita, K, Takahashi, W: Approximating zero points of accretive operators in general Banach spaces. JP J. Fixed Point Theory Appl. 2, 105-116 (2007)

  16. Takahashi, S, Takahashi, W, Toyoda, M: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 147, 27-41 (2010)

  17. Takahashi, W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)

  18. Xu, HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360-378 (2011)

  19. Hundal, H: An alternating projection that does not converge in norm. Nonlinear Anal. 57, 35-61 (2004)

  20. Xu, HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279-291 (2004)

  21. Aoyama, K, Kimura, Y, Takahashi, W, Toyoda, M: On a strongly nonexpansive sequence in Hilbert spaces. J. Nonlinear Convex Anal. 8, 471-489 (2007)

  22. Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)

  23. Rockafellar, RT: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209-216 (1970)

  24. Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)

  25. Zhou, H: Convergence theorems of fixed points for k-strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 69, 456-462 (2008)

  26. Yang, Q, Zhao, J: Generalized KM theorems and their applications. Inverse Probl. 22(3), 833-844 (2006)


Acknowledgements

Ming Tian was supported by the Foundation of Tianjin key Laboratory for Advanced Signal Processing and the Fundamental Research Funds for the Central Universities (No. 3122015L007). Yeong-Cheng Liou was supported in part by a grant from MOST NSC 101-2628-E-230-001-MY3 and NSC 103-2923-E-037-001-MY3. This research is supported partially by Kaohsiung Medical University ‘Aim for the Top Universities Grant, grant No. KMU-TP103F00’.

Author information

Correspondence to Yeong-Cheng Liou.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All the authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite this article

Tian, M., Jiao, SW. & Liou, YC. Methods for solving constrained convex minimization problems and finding zeros of the sum of two operators in Hilbert spaces. J Inequal Appl 2015, 227 (2015). https://doi.org/10.1186/s13660-015-0743-z