Open Access

A viscosity splitting algorithm for solving inclusion and equilibrium problems

  • Buthinah A Bin Dehaish1,
  • Abdul Latif2,
  • Huda O Bakodah1 and
  • Xiaolong Qin3

Journal of Inequalities and Applications 2015, 2015:50

https://doi.org/10.1186/s13660-015-0554-2

Received: 9 October 2014

Accepted: 7 January 2015

Published: 11 February 2015

Abstract

In this paper, we present a viscosity splitting algorithm with computational errors for finding common solutions of inclusion and equilibrium problems. Strong convergence theorems are established in the framework of real Hilbert spaces. Applications are also provided to support the main results.

Keywords

equilibrium problem; variational inequality; splitting algorithm; nonexpansive mapping; fixed point

1 Introduction

Splitting algorithms have recently received much attention because many nonlinear problems arising in applied areas such as image recovery, signal processing, and machine learning are mathematically modeled as a nonlinear operator equation whose operator is decomposed as the sum of two nonlinear operators; see, for example, [1–4] and the references therein. The central problem is to iteratively find a zero point of the sum of two monotone operators.

One of the classical methods of studying the problem \(0\in Tx\), where T is a maximal monotone operator, is the proximal point algorithm (PPA), which was initiated by Martinet [5] and further developed by Rockafellar [6]. The PPA and its dual version in the context of convex programming, the method of multipliers of Hestenes and Powell, have been extensively studied and are known to yield as special cases decomposition methods such as the method of partial inverses [7], the Douglas-Rachford splitting method, and the alternating direction method of multipliers [8]. In the case of \(T=A+B\), where A and B are monotone mappings, the splitting method \(x_{n+1}=(I+r_{n}B)^{-1}(I-r_{n}A)x_{n}\), \(n=0,1,\ldots\) , where \(r_{n}>0\), was proposed by Lions and Mercier [9] and by Passty [10]. Many nonlinear problems arising in engineering areas involve more than one constraint. To solve such problems, we have to find a solution which simultaneously solves two or more subproblems, or which solves one subproblem on the solution set of another; see [11–18] and the references therein. The viscosity approximation method, which was introduced by Moudafi [19], has been extensively investigated by many authors for solving such problems; see, for example, [20–23] and the references therein. The highlight of the viscosity approximation method is that the desired limit point is not only a solution of a nonlinear problem but also the unique solution of a classical monotone variational inequality. The aim of this paper is to introduce and investigate a viscosity splitting algorithm with computational errors for finding common solutions of inclusion and equilibrium problems. The common solution is the unique solution of a monotone variational inequality. Strong convergence is guaranteed without the aid of compactness assumptions or metric projections. The main results improve the corresponding results in [11–15].
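The splitting iteration \(x_{n+1}=(I+r_{n}B)^{-1}(I-r_{n}A)x_{n}\) can be sketched numerically. The following minimal Python example, on \(H=\mathbb{R}\) with illustrative choices not taken from the paper, uses \(Ax=x-3\) (the gradient of \(\frac{1}{2}(x-3)^{2}\), which is 1-inverse-strongly monotone) and \(B=\partial|\cdot|\), whose resolvent \((I+rB)^{-1}\) is the soft-thresholding map.

```python
# Forward-backward splitting x_{n+1} = (I + rB)^{-1}(I - rA)x_n on H = R.
# Illustrative operators (not from the paper): Ax = x - 3 and B = d|.|.

def soft_threshold(x, r):
    """Resolvent (I + r*d|.|)^{-1}: shrink x toward 0 by r."""
    if x > r:
        return x - r
    if x < -r:
        return x + r
    return 0.0

def forward_backward(x0, r=0.5, iters=200):
    x = x0
    for _ in range(iters):
        x = soft_threshold(x - r * (x - 3.0), r)  # (I + rB)^{-1}(I - rA)x
    return x

# The unique zero of A + B solves 0 in x - 3 + d|x|, i.e. x = 2.
print(forward_backward(10.0))  # ≈ 2.0
```

The iterates approach the zero point \(x=2\) geometrically, since the forward step \(I-rA\) is a contraction here.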

The organization of this paper is as follows. In Section 2, we provide some necessary preliminaries. In Section 3, a viscosity splitting algorithm with computational errors is investigated. A strong convergence theorem is established. In Section 4, applications of the main results are discussed.

2 Preliminaries

From now on, we always assume that H is a real Hilbert space with the inner product \(\langle\cdot,\cdot\rangle\) and the norm \(\|\cdot\|\). Let C be a nonempty, closed, and convex subset of H.

Let F be a bifunction of \(C\times C\) into \(\mathbb{R}\), where \(\mathbb{R}\) denotes the set of real numbers. We consider the following equilibrium problem in the terminology of Blum and Oettli [24], which is also known as the Ky Fan inequality [25]:
$$ \mbox{Find } x\in C \mbox{ such that } F(x,y)\geq0,\quad \forall y\in C. $$
(2.1)
In this paper, the set of such an \(x\in C\) is denoted by \(EP(F)\), i.e., \(EP(F)=\{x\in C: F(x,y)\geq0, \forall y\in C\}\).
To study equilibrium problem (2.1), we may assume that F satisfies the following conditions:
  1. (A1)

    \(F(x,x)=0\) for all \(x\in C\);

     
  2. (A2)

    F is monotone, i.e., \(F(x,y)+F(y,x)\leq0\) for all \(x,y\in C\);

     
  3. (A3)
    for each \(x,y,z\in C\),
    $$\limsup_{t\downarrow0}F\bigl(tz+(1-t)x,y\bigr)\leq F(x,y); $$
     
  4. (A4)

    for each \(x\in C\), \(y\mapsto F(x,y)\) is convex and lower semicontinuous.

     
Let S be a mapping on C. \(F(S)\) stands for the fixed point set of S. Recall that S is said to be contractive iff there exists a constant \(\kappa\in(0,1)\) such that
$$\|Sx-Sy\|\leq\kappa\|x-y\|,\quad \forall x,y\in C. $$
S is said to be nonexpansive iff
$$\|Sx-Sy\|\leq\|x-y\|, \quad\forall x,y\in C. $$
If S is nonexpansive and C is closed, convex, and bounded, then \(F(S)\) is nonempty; see [26] and the references therein.
Let \(A:C\rightarrow H\) be a mapping. Recall that A is said to be monotone iff
$$\langle Ax-Ay,x-y\rangle\geq0, \quad\forall x,y\in C. $$
A is said to be strongly monotone iff there exists a constant \(\alpha>0\) such that
$$\langle Ax-Ay,x-y\rangle\geq\alpha\|x-y\|^{2},\quad \forall x,y\in C. $$
For such a case, we say that A is an α-strongly monotone mapping. A is said to be inverse-strongly monotone iff there exists a constant \(\alpha>0\) such that
$$\langle Ax-Ay,x-y\rangle\geq\alpha\|Ax-Ay\|^{2}, \quad\forall x,y\in C. $$
For such a case, we say that A is an α-inverse-strongly monotone mapping. It is clear that A is inverse-strongly monotone if and only if \(A^{-1}\) is strongly monotone.
A set-valued mapping \(T:H\rightarrow2^{H}\) is said to be monotone iff for all \(x,y\in H\), \(f\in Tx\), and \(g\in Ty\) imply \(\langle x-y,f-g\rangle\geq0\). A monotone mapping \(T:H\rightarrow2^{H}\) is maximal iff the graph \(G(T)\) of T is not properly contained in the graph of any other monotone mapping. It is well known that a monotone mapping T is maximal iff, for any \((x,f)\in H\times H\), \(\langle x-y,f-g\rangle\geq0\) for all \((y,g)\in G(T)\) implies \(f\in Tx\). Let A be a monotone mapping of C into H and \(N_{C}v\) the normal cone to C at \(v\in C\), i.e.,
$$N_{C}v=\bigl\{ w\in H:\langle v-u,w\rangle\geq0, \forall u\in C\bigr\} , $$
and define a mapping T on C by
$$Tv= \begin{cases} Av+N_{C}v, &v\in C,\\ \emptyset, & v\notin C. \end{cases} $$
Then T is maximal monotone and \(0\in Tv\) iff \(\langle Av,u-v\rangle\geq0\) for all \(u\in C\); see [27] and the references therein. Let I denote the identity operator on H and let \(B:H\rightarrow2^{H}\) be a maximal monotone operator. Then we can define, for each \(r>0\), a nonexpansive single-valued mapping \(J_{r}:H\rightarrow H\) by \(J_{r}=(I+rB)^{-1}\). It is called the resolvent of B. We know that \(B^{-1}0=F(J_{r})\) for all \(r>0\) and \(J_{r}\) is firmly nonexpansive, that is,
$$\|J_{r}x-J_{r}y\|^{2}\leq\langle J_{r}x-J_{r}y, x-y\rangle,\quad x,y\in H. $$
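The firm nonexpansiveness of \(J_{r}\) can be checked numerically. Below is a minimal sketch on \(H=\mathbb{R}\), assuming the illustrative operator \(B=\partial|\cdot|\), whose resolvent is the soft-thresholding map.

```python
# Numerical check (H = R, illustrative B = d|.|) that the resolvent
# J_r = (I + rB)^{-1} is firmly nonexpansive:
# |J_r x - J_r y|^2 <= <J_r x - J_r y, x - y>.
import random

def J(x, r=1.0):
    """Resolvent of B = d|.| (soft-thresholding with parameter r)."""
    return x - r if x > r else (x + r if x < -r else 0.0)

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    d = J(x) - J(y)
    assert d * d <= d * (x - y) + 1e-12
print("firmly nonexpansive on all samples")
```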

In order to prove our main results, we also need the following lemmas.

Lemma 2.1

[24]

Let C be a nonempty, closed, and convex subset of H and let \(F:C\times C\rightarrow\mathbb{R}\) be a bifunction satisfying (A1)-(A4). Then, for any \(r>0\) and \(x\in H\), there exists \(z\in C\) such that
$$F(z,y)+\frac{1}{r}\langle y-z,z-x\rangle\geq0, \quad\forall y\in C. $$
Further, define
$$T_{r}x=\biggl\{ z\in C: F(z,y)+\frac{1}{r}\langle y-z,z-x\rangle \geq 0, \forall y\in C\biggr\} $$
for all \(r>0\) and \(x\in H\). Then, the following hold:
  1. (a)

    \(F(T_{r})=EP(F)\);

     
  2. (b)

    \(T_{r}\) is single-valued;

     
  3. (c)
    \(T_{r}\) is firmly nonexpansive, i.e., for any \(x,y\in H\),
    $$\|T_{r}x-T_{r}y\|^{2}\leq\langle T_{r}x-T_{r}y, x-y\rangle; $$
     
  4. (d)

    The solution set is closed and convex.

     

Lemma 2.2

Let C be a nonempty, closed, and convex subset of H. Let \(A:C\rightarrow H\) be a mapping and let \(B:H\rightrightarrows H\) be a maximal monotone operator. Then, for each \(r>0\), \(F(J_{r}(I-r A))=(A+B)^{-1}(0)\).

Proof

$$\begin{aligned} p\in F\bigl(J_{r}(I-r A)\bigr) &\quad\Longleftrightarrow\quad p=J_{r}(I-r A)p\\ &\quad\Longleftrightarrow\quad p-rAp\in p+rBp\\ &\quad\Longleftrightarrow\quad 0\in Ap+Bp\\ &\quad\Longleftrightarrow\quad p\in(A+B)^{-1}(0). \end{aligned}$$
 □

The following lemma appears implicitly in [16]. For the sake of completeness, we still give the proof.

Lemma 2.3

Let C be a nonempty, closed, and convex subset of H and let \(F:C\times C\rightarrow\mathbb{R}\) be a bifunction satisfying (A1)-(A4). For \(s,t>0\), let \(T_{s}\) and \(T_{t}\) be defined as in Lemma  2.1. Then, for every \(x\in H\),
$$\|T_{s}x-T_{t}x\|\leq\frac{|s-t|}{t} \|x-T_{t}x\|. $$

Proof

Putting \(\varsigma=T_{s}x\) and \(\tau=T_{t}x\), we find that \(F(\tau,\varsigma)+\frac{1}{t}\langle\varsigma-\tau,\tau-x\rangle\geq0 \) and \(F(\varsigma,\tau)+\frac{1}{s}\langle\tau-\varsigma,\varsigma-x\rangle \geq0\). It follows that
$$\frac{1}{s}\langle\tau-\varsigma,\varsigma-x\rangle+\frac{1}{t} \langle \varsigma-\tau,\tau-x\rangle\geq0. $$
Hence, we have
$$\biggl\langle \tau-\varsigma,\varsigma-x+\frac{s}{t}(x-\tau)\biggr\rangle \geq0. $$
Since \(\varsigma-x+\frac{s}{t}(x-\tau)=(\varsigma-\tau)+ (1-\frac{s}{t} )(\tau-x)\), the Cauchy-Schwarz inequality yields
$$\|\tau-\varsigma\|\leq\biggl|\frac{t-s}{t}\biggr|\|\tau-x\|. $$
This proves the lemma. □
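Lemma 2.3 can be illustrated numerically. For the bifunction \(F(x,y)=y^{2}-x^{2}\) on \(C=\mathbb{R}\), conditions (A1)-(A4) hold, and a one-line computation (ours, not stated in the paper) shows that the resolvent of Lemma 2.1 reduces to \(T_{r}x=x/(1+2r)\).

```python
# Numerical check of the Lemma 2.3 bound for the illustrative bifunction
# F(x,y) = y^2 - x^2 on C = R, whose resolvent is T_r x = x / (1 + 2r).
import random

def T(r, x):
    return x / (1.0 + 2.0 * r)

random.seed(1)
for _ in range(1000):
    x = random.uniform(-10, 10)
    s, t = random.uniform(0.1, 5), random.uniform(0.1, 5)
    lhs = abs(T(s, x) - T(t, x))
    rhs = abs(s - t) / t * abs(x - T(t, x))
    assert lhs <= rhs + 1e-12  # ||T_s x - T_t x|| <= |s-t|/t ||x - T_t x||
print("Lemma 2.3 bound holds on all samples")
```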

Lemma 2.4

[28]

Let \(\{x_{n}\}\) and \(\{y_{n}\}\) be bounded sequences in H and let \(\{\beta_{n}\}\) be a sequence in \((0,1)\) with \(0<\liminf_{n\rightarrow\infty}\beta_{n}\leq\limsup_{n\rightarrow\infty }\beta_{n}<1\). Suppose that \(x_{n+1}=(1-\beta_{n})y_{n}+\beta_{n}x_{n}\) for all integers \(n\geq 0\) and
$$\limsup_{n\rightarrow\infty}\bigl(\|y_{n+1}-y_{n}\|- \|x_{n+1}-x_{n}\|\bigr)\leq0. $$
Then \(\lim_{n\rightarrow\infty}\|y_{n}-x_{n}\|=0\).

Lemma 2.5

[29]

Assume that \(\{\alpha_{n}\}\) is a sequence of nonnegative real numbers such that
$$\alpha_{n+1}\leq(1-\gamma_{n})\alpha_{n}+ \delta_{n}+e_{n}, $$
where \(\{\gamma_{n}\}\) is a real number sequence in \((0,1)\), and \(\{\delta _{n}\}\) and \(\{e_{n}\}\) are nonnegative real number sequences such that
  1. (i)

    \(\sum_{n=1}^{\infty}\gamma_{n}=\infty\);

     
  2. (ii)

    \(\limsup_{n\rightarrow\infty}\delta_{n}/\gamma_{n}\leq0\) or \(\sum_{n=1}^{\infty}\delta_{n}<\infty\);

     
  3. (iii)

    \(\sum_{n=1}^{\infty}e_{n}<\infty\).

     
Then \(\lim_{n\rightarrow\infty}\alpha_{n}=0\).
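Lemma 2.5 can be illustrated by a quick simulation with illustrative parameter choices: \(\gamma_{n}=1/(n+1)\) (so \(\sum\gamma_{n}=\infty\)), \(\delta_{n}=\gamma_{n}/n\) (so \(\delta_{n}/\gamma_{n}\rightarrow0\)), and \(e_{n}=1/n^{2}\) (summable).

```python
# Simulation of Lemma 2.5 (illustrative parameters): the recursion
# a_{n+1} = (1 - gamma_n) a_n + delta_n + e_n is forced to 0.
a = 1.0
for n in range(1, 200000):
    gamma = 1.0 / (n + 1)   # sum gamma_n diverges
    delta = gamma / n       # delta_n / gamma_n -> 0
    e = 1.0 / n**2          # sum e_n converges
    a = (1 - gamma) * a + delta + e
print(a)  # small: a_n -> 0
```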

3 Main results

Theorem 3.1

Let C be a nonempty, closed, and convex subset of H and let F be a bifunction from \(C\times C\) to \(\mathbb{R}\) which satisfies (A1)-(A4). Let \(f:C\rightarrow C\) be a contraction with the contractive constant \(\kappa\in(0,1)\). Let \(A:C\rightarrow H\) be an α-inverse-strongly monotone mapping and let \(B:H\rightrightarrows H\) be a maximal monotone mapping. Assume \((A+B)^{-1}(0)\cap EP(F)\neq\emptyset\). Let \(\{r_{n}\}\) and \(\{s_{n}\}\) be positive real number sequences. Let \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), and \(\{\gamma_{n}\}\) be real number sequences in \((0,1)\) such that \(\alpha_{n}+\beta_{n}+\gamma_{n}=1\). Let \(\{e_{n}\}\) be a sequence in H such that \(\sum_{n=1}^{\infty}\|e_{n}\|<\infty\). Let \(\{x_{n}\}\) be a sequence generated in the following process: \(x_{1}\in C\), \(x_{n+1}=\alpha_{n} f(x_{n})+\beta_{n}x_{n}+\gamma_{n}y_{n}\), where \(\{y_{n}\}\) is a sequence in C such that \(F(y_{n},y)+\frac{1}{s_{n}}\langle y-y_{n},y_{n}-J_{r_{n}} (x_{n}-r_{n}Ax_{n}+e_{n} )\rangle\geq0\), \(\forall y\in C\). Assume that the control sequences \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), \(\{r_{n}\}\), and \(\{s_{n}\}\) satisfy the conditions: \(\lim_{n\rightarrow\infty}\alpha_{n}=0\), \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(0<\liminf_{n\rightarrow\infty}\beta_{n}\leq\limsup_{n\rightarrow\infty}\beta_{n}<1\), \(0<\liminf_{n\rightarrow\infty}s_{n}\), \(0< r\leq r_{n}\leq r'<2\alpha\), \(\lim_{n\rightarrow\infty}|r_{n}-r_{n+1}|=\lim_{n\rightarrow\infty}|s_{n}-s_{n+1}|=0\), where r and \(r'\) are real constants. Then \(\{x_{n}\}\) converges strongly to \(q=P_{(A+B)^{-1}(0)\cap EP(F)}f(q)\).

Proof

From Lemmas 2.1 and 2.2, we see that \((A+B)^{-1}(0)\) and \(EP(F)\) are closed and convex. It follows that the projection onto the intersection \((A+B)^{-1}(0)\cap EP(F)\) is well defined. For any \(x,y\in C\), we see that
$$\begin{aligned} &\bigl\| (I-r_{n}A)x-(I-r_{n}A)y\bigr\| ^{2} \\ &\quad=\|x-y\|^{2}-2r_{n}\langle x-y, Ax-Ay \rangle+r_{n}^{2}\| Ax-Ay\|^{2} \\ &\quad\leq\|x-y\|^{2}-r_{n}(2\alpha-r_{n}) \|Ax-Ay\|^{2}. \end{aligned}$$
By using the condition imposed on \(\{r_{n}\}\), we see that \(\|(I-r_{n}A)x-(I-r_{n}A)y\|\leq\|x-y\|\). This proves that \(I-r_{n}A\) is nonexpansive. Let \(p\in(A+B)^{-1}(0)\cap EP(F)\) be fixed arbitrarily. By using Lemma 2.1, we find that \(y_{n}=T_{s_{n}}J_{r_{n}} (x_{n}-r_{n}Ax_{n}+e_{n})\). It follows that
$$\begin{aligned} \|x_{n+1}-p\| \leq{}&\alpha_{n}\bigl\| f(x_{n})-p\bigr\| + \beta_{n}\|x_{n}-p\|+\gamma_{n}\|y_{n}-p\| \\ \leq{}&\alpha_{n}\bigl\| f(x_{n})-f(p)\bigr\| +\alpha_{n}\bigl\| f(p)-p \bigr\| +\beta_{n}\|x_{n}-p\| \\ &{} +\gamma_{n}\bigl\| J_{r_{n}} (x_{n}-r_{n}Ax_{n}+e_{n})-J_{r_{n}} (p-r_{n}Ap)\bigr\| \\ \leq{}&\alpha_{n}\kappa\|x_{n}-p\|+\alpha_{n} \bigl\| f(p)-p\bigr\| +\beta_{n}\|x_{n}-p\| \\ &{} +\gamma_{n}\bigl\| (x_{n}-r_{n}Ax_{n}+e_{n})- (p-r_{n}Ap)\bigr\| \\ \leq{}& \bigl(1-\alpha_{n}(1-\kappa)\bigr)\|x_{n}-p\|+ \alpha_{n}\bigl\| f(p)-p\bigr\| +\|e_{n}\| \\ \leq{}&\max\biggl\{ \|x_{n}-p\|,\frac{\|f(p)-p\|}{1-\kappa}\biggr\} + \|e_{n}\| \\ \leq{}&\cdots \\ \leq{}&\max\biggl\{ \|x_{1}-p\|,\frac{\|f(p)-p\|}{1-\kappa}\biggr\} +\sum _{i=1}^{\infty}\| e_{i}\|< \infty. \end{aligned}$$
This implies that \(\{x_{n}\}\) is bounded, and so is \(\{y_{n}\}\). Letting \(\xi_{n}=\frac{x_{n+1}-\beta_{n}x_{n}}{1-\beta_{n}}\), we have
$$\begin{aligned} \xi_{n+1}-\xi_{n} ={}&\frac{\alpha_{n+1} (f(x_{n+1})-y_{n+1})+(1-\beta_{n+1})y_{n+1}}{1-\beta _{n+1}} \\ &{} -\frac{\alpha_{n} (f(x_{n})-y_{n})+(1-\beta_{n})y_{n}}{1-\beta_{n}} \\ ={}&\frac{\alpha_{n+1} (f(x_{n+1})-y_{n+1})}{1-\beta_{n+1}}-\frac{\alpha _{n} (f(x_{n})-y_{n})}{1-\beta_{n}}+y_{n+1}-y_{n}. \end{aligned}$$
It follows that
$$ \|\xi_{n+1}-\xi_{n}\| \leq\frac{\alpha_{n+1}}{1-\beta_{n+1}} \bigl\| f(x_{n+1})-y_{n+1}\bigr\| +\frac{\alpha_{n}}{1-\beta_{n}}\bigl\| f(x_{n})-y_{n} \bigr\| +\|y_{n+1}-y_{n}\|. $$
(3.1)
Put \(u_{n}=x_{n}-r_{n}Ax_{n}+e_{n}\). Since B is monotone, we see that
$$\biggl\langle J_{r_{n+1}}u_{n}-J_{r_{n}}u_{n}, \frac{u_{n}-J_{r_{n+1}}u_{n}}{r_{n+1}}- \frac{u_{n}-J_{r_{n}}u_{n}}{r_{n}}\biggr\rangle \geq0. $$
It follows that
$$\biggl\langle J_{r_{n+1}}u_{n}-J_{r_{n}}u_{n}, \biggl(1-\frac{r_{n+1}}{r_{n}}\biggr) (u_{n}-J_{r_{n}}u_{n}) \biggr\rangle \geq\| J_{r_{n+1}}u_{n}-J_{r_{n}}u_{n} \|^{2}. $$
This in turn implies that
$$ \|J_{r_{n+1}}u_{n}-J_{r_{n}}u_{n}\|\leq \frac{|r_{n+1}-r_{n}|}{r_{n}}\| u_{n}-J_{r_{n}}u_{n}\|. $$
(3.2)
Putting \(z_{n}=J_{r_{n}}(x_{n}-r_{n}Ax_{n}+e_{n})\), we have
$$ \begin{aligned}[b] \|z_{n+1}-z_{n}\| \leq{}& \bigl\| J_{r_{n+1}} (x_{n+1}-r_{n+1}Ax_{n+1}+e_{n+1})-J_{r_{n+1}} (x_{n}-r_{n}Ax_{n}+e_{n})\bigr\| \\ &{} +\bigl\| J_{r_{n+1}} (x_{n}-r_{n}Ax_{n}+e_{n})-J_{r_{n}} (x_{n}-r_{n}Ax_{n}+e_{n})\bigr\| \\ \leq{}&\bigl\| (x_{n+1}-r_{n+1}Ax_{n+1}+e_{n+1})- (x_{n}-r_{n}Ax_{n}+e_{n})\bigr\| \\ &{} +\bigl\| J_{r_{n+1}} (x_{n}-r_{n}Ax_{n}+e_{n})-J_{r_{n}} (x_{n}-r_{n}Ax_{n}+e_{n}) \bigr\| . \end{aligned} $$
(3.3)
From (3.2) and (3.3), we find that
$$ \begin{aligned}[b] \|z_{n+1}-z_{n}\| \leq{}&\bigl\| (x_{n+1}-r_{n+1}Ax_{n+1}+e_{n+1})- (x_{n}-r_{n}Ax_{n}+e_{n})\bigr\| \\ &{} +\frac{|r_{n+1}-r_{n}|}{r_{n}}\|u_{n}-J_{r_{n}}u_{n}\| \\ \leq{}&\|x_{n+1}-x_{n}\|+|r_{n+1}-r_{n}| \biggl(\|Ax_{n}\|+\frac{\|u_{n}-J_{r_{n}}u_{n}\| }{r_{n}}\biggr) \\ &{} +\|e_{n+1}\|+\|e_{n}\|. \end{aligned} $$
(3.4)
By using Lemma 2.3, we find that
$$\begin{aligned} \|y_{n+1}-y_{n}\| \leq{}&\bigl\| T_{s_{n+1}}J_{r_{n+1}} (x_{n+1}-r_{n+1}Ax_{n+1}+e_{n+1})-T_{s_{n+1}}J_{r_{n}} (x_{n}-r_{n}Ax_{n}+e_{n})\bigr\| \\ &{} +\bigl\| T_{s_{n+1}}J_{r_{n}} (x_{n}-r_{n}Ax_{n}+e_{n})-T_{s_{n}}J_{r_{n}} (x_{n}-r_{n}Ax_{n}+e_{n})\bigr\| \\ \leq{}&\|z_{n+1}-z_{n}\|+\frac{\|T_{s_{n}}z_{n}-z_{n}\| }{s_{n}}|s_{n+1}-s_{n}|. \end{aligned}$$
(3.5)
From (3.4) and (3.5), we find that
$$\begin{aligned} \|y_{n+1}-y_{n}\| \leq{}&\|x_{n+1}-x_{n} \|+|r_{n+1}-r_{n}|\biggl(\|Ax_{n}\|+ \frac{\|u_{n}-J_{r_{n}}u_{n}\| }{r_{n}}\biggr) \\ &{} +\frac{\|T_{s_{n}}z_{n}-z_{n}\|}{s_{n}}|s_{n+1}-s_{n}|+\|e_{n+1}\| + \|e_{n}\|. \end{aligned}$$
(3.6)
Substituting (3.6) into (3.1), we arrive at
$$\begin{aligned} \|\xi_{n+1}-\xi_{n}\|-\|x_{n+1}-x_{n}\| \leq{}&\frac{\alpha_{n+1}}{1-\beta_{n+1}}\bigl\| f(x_{n+1})-y_{n+1}\bigr\| + \frac{\alpha_{n}}{1-\beta_{n}}\bigl\| f(x_{n})-y_{n}\bigr\| \\ &{} +|r_{n+1}-r_{n}|\biggl(\|Ax_{n}\|+ \frac{\|u_{n}-J_{r_{n}}u_{n}\|}{r_{n}}\biggr) \\ &{} +\frac{\|T_{s_{n}}z_{n}-z_{n}\|}{s_{n}}|s_{n+1}-s_{n}|+\|e_{n+1}\| + \|e_{n}\|. \end{aligned}$$
It follows from the conditions that
$$\limsup_{n\rightarrow\infty} \bigl(\|\xi_{n+1}-\xi_{n}\|- \|x_{n+1}-x_{n}\| \bigr)=0. $$
By using Lemma 2.4, we see that \(\lim_{n\rightarrow\infty}\|\xi_{n}-x_{n}\|=0\), which in turn implies that
$$ \lim_{n\rightarrow\infty}\|x_{n+1}-x_{n} \|=0. $$
(3.7)
Since \(J_{r_{n}}\) is firmly nonexpansive, we find that
$$\begin{aligned} \|z_{n}-p\|^{2} \leq{}&\bigl\langle (x_{n}-r_{n}Ax_{n}+e_{n})- (p-r_{n}Ap),z_{n}-p\bigr\rangle \\ ={}& \frac{1}{2} \bigl(\bigl\| (x_{n}-r_{n}Ax_{n}+e_{n})- (p-r_{n}Ap)\bigr\| ^{2}+\|z_{n}-p\| ^{2} \\ &{} -\bigl\| \bigl((x_{n}-r_{n}Ax_{n}+e_{n})- (p-r_{n}Ap) \bigr)-(z_{n}-p)\bigr\| ^{2}\bigr) \\ \leq{}&\frac{1}{2} \bigl(\|x_{n}-p\|^{2}+ \|e_{n}\|\bigl(\|e_{n}\|+2\|x_{n}-p\|\bigr)+ \|z_{n}-p\| ^{2} \\ &{} -\bigl\| x_{n}-z_{n}-r_{n}(Ax_{n}-Ap)+e_{n} \bigr\| ^{2} \bigr) \\ \leq{}&\frac{1}{2} \bigl(\|x_{n}-p\|^{2}+ \|e_{n}\|\bigl(\|e_{n}\|+2\|x_{n}-p\|\bigr)+ \|z_{n}-p\| ^{2}-\|x_{n}-z_{n} \|^{2} \\ &{} -\bigl\| r_{n}(Ax_{n}-Ap)-e_{n}\bigr\| ^{2}+2 \|x_{n}-z_{n}\|\bigl\| r_{n}(Ax_{n}-Ap)-e_{n} \bigr\| \bigr). \end{aligned}$$
It follows that
$$\begin{aligned} \|z_{n}-p\|^{2} \leq{}&\|x_{n}-p \|^{2}+\|e_{n}\|\bigl(\|e_{n}\|+2\|x_{n}-p\|\bigr)- \|x_{n}-z_{n}\|^{2} \\ &{} +2r_{n}\|x_{n}-z_{n}\|\|Ax_{n}-Ap \|+2\|x_{n}-z_{n}\|\|e_{n}\|. \end{aligned}$$
(3.8)
Since \(T_{s_{n}}\) is nonexpansive, we find from (3.8) that
$$\begin{aligned} \| x_{n+1}-p\|^{2} \leq{}&\alpha_{n} \bigl\| f(x_{n})-p\bigr\| ^{2}+\beta_{n}\|x_{n}-p \|^{2}+\gamma_{n}\|y_{n}-p\|^{2} \\ \leq{}&\alpha_{n} \bigl\| f(x_{n})-p\bigr\| ^{2}+ \beta_{n}\|x_{n}-p\|^{2}+\gamma_{n} \|z_{n}-p\|^{2} \\ \leq{}&\alpha_{n} \bigl\| f(x_{n})-p\bigr\| ^{2}+ \|x_{n}-p\|^{2}+\|e_{n}\|\bigl(\|e_{n}\|+2 \|x_{n}-p\| \bigr)-\gamma_{n}\|x_{n}-z_{n} \|^{2} \\ &{} +2r_{n}\|x_{n}-z_{n}\|\|Ax_{n}-Ap \|+2\|x_{n}-z_{n}\|\|e_{n}\|. \end{aligned}$$
It follows that
$$\begin{aligned} \gamma_{n}\|x_{n}-z_{n} \|^{2} \leq{}&\alpha_{n} \bigl\| f(x_{n})-p \bigr\| ^{2}+\|e_{n}\|\bigl(\|e_{n}\|+2\|x_{n}-p\|+2 \|x_{n}-z_{n}\|\bigr) \\ &{} +2r_{n}\|x_{n}-z_{n}\|\|Ax_{n}-Ap\|+ \|x_{n}-x_{n+1}\|\bigl(\|x_{n}-p\|+\| x_{n+1}-p \|\bigr). \end{aligned}$$
(3.9)
Since A is inverse-strongly monotone, we find that
$$\begin{aligned} \|z_{n}-p\|^{2} &\leq\bigl\| (x_{n}-r_{n}Ax_{n})-(p-r_{n}Ap)+e_{n} \bigr\| ^{2} \\ &\leq\bigl\| (x_{n}-p)-r_{n}(Ax_{n}-Ap)\bigr\| ^{2}+ \|e_{n}\|\bigl(\|e_{n}\|+2\|x_{n}-p\|\bigr) \\ &\leq\|x_{n}-p\|^{2}-r_{n}(2\alpha-r_{n}) \|Ax_{n}-Ap\|^{2}+\|e_{n}\|\bigl(\|e_{n}\|+2\| x_{n}-p\|\bigr). \end{aligned}$$
Hence, we have
$$\begin{aligned} \bigl\| x_{n+1}-p\bigr\| ^{2} \leq{}&\alpha_{n} \bigl\| f(x_{n})-p\bigr\| ^{2}+\beta_{n}\|x_{n}-p \|^{2}+\gamma_{n}\|y_{n}-p\|^{2} \\ \leq{}&\alpha_{n} \bigl\| f(x_{n})-p\bigr\| ^{2}+ \beta_{n}\|x_{n}-p\|^{2}+\gamma_{n}\bigl\| (x_{n}-r_{n}Ax_{n}+e_{n})- (p-r_{n}Ap)\bigr\| ^{2} \\ \leq{}&\alpha_{n} \bigl\| f(x_{n})-p\bigr\| ^{2}+ \|x_{n}-p\|^{2}-r_{n}(2\alpha-r_{n}) \gamma_{n}\| Ax_{n}-Ap\|^{2} \\ &{} +\|e_{n}\|\bigl(\|e_{n}\|+2\|x_{n}-p\|\bigr). \end{aligned}$$
This implies that
$$\begin{aligned} r_{n}(2\alpha-r_{n})\gamma_{n} \|Ax_{n}-Ap\|^{2} \leq{}&\alpha_{n} \bigl\| f(x_{n})-p\bigr\| ^{2}+ \|x_{n}-x_{n+1}\|\bigl(\|x_{n}-p\|+\|x_{n+1}-p\|\bigr)\\ &{} +\|e_{n}\|\bigl(\|e_{n}\|+2\|x_{n}-p\|\bigr). \end{aligned}$$
By using the conditions imposed on the control sequences, we find from (3.7) that \(\lim_{n\rightarrow\infty}\|Ax_{n}-Ap\|=0\). Combining this with (3.9) and the conditions, we obtain
$$ \lim_{n\rightarrow\infty}\|x_{n}-z_{n}\|=0. $$
(3.10)
Since \(T_{s_{n}}\) is firmly nonexpansive, we find that
$$\begin{aligned} \|y_{n}-p\|^{2} &\leq\langle z_{n}-p,y_{n}-p \rangle \\ &= \frac{1}{2} \bigl(\|z_{n}-p\|^{2}+ \|y_{n}-p\|^{2}-\|y_{n}-z_{n} \|^{2} \bigr). \end{aligned}$$
That is,
$$\begin{aligned} \|y_{n}-p\|^{2} &\leq\|z_{n}-p\|^{2}- \|y_{n}-z_{n}\|^{2} \\ &\leq\|x_{n}-p\|^{2}+\|e_{n}\|\bigl(\|e_{n} \|+2\|x_{n}-p\|\bigr)-\|y_{n}-z_{n}\|^{2}. \end{aligned}$$
It follows that
$$\begin{aligned} \| x_{n+1}-p\|^{2} &\leq\alpha_{n} \bigl\| f(x_{n})-p\bigr\| ^{2}+\beta_{n}\|x_{n}-p \|^{2}+\gamma_{n}\|y_{n}-p\|^{2} \\ &\leq\alpha_{n} \bigl\| f(x_{n})-p\bigr\| ^{2}+ \|x_{n}-p\|^{2}+\|e_{n}\|\bigl(\|e_{n}\|+2 \|x_{n}-p\| \bigr)-\gamma_{n}\|y_{n}-z_{n} \|^{2}. \end{aligned}$$
This implies that
$$\begin{aligned} \gamma_{n}\|y_{n}-z_{n}\|^{2} \leq{}& \alpha_{n} \bigl\| f(x_{n})-p\bigr\| ^{2}+\bigl(\|x_{n}-p \|+\| x_{n+1}-p\|\bigr)\|x_{n}-x_{n+1}\| \\ &{} +\|e_{n}\|\bigl(\|e_{n}\|+2\|x_{n}-p\|\bigr). \end{aligned}$$
By using the conditions imposed on the control sequences, we find from (3.7) that
$$ \lim_{n\rightarrow\infty}\|z_{n}-y_{n}\|=0. $$
(3.11)
It follows from (3.10) and (3.11) that
$$ \lim_{n\rightarrow\infty}\|x_{n}-y_{n}\|=0. $$
(3.12)
Since the mapping \(P_{(A+B)^{-1}(0)\cap EP(F)}f\) is contractive, it has a unique fixed point, which we denote by q. That is, \(q=P_{(A+B)^{-1}(0)\cap EP(F)}f(q)\). Now, we are in a position to show \(\limsup_{n\rightarrow\infty}\langle f(q)-q,x_{n}-q\rangle\leq0\). To this end, we choose a subsequence \(\{x_{n_{i}}\}\) of \(\{x_{n}\}\) such that
$$\limsup_{n\rightarrow\infty}\bigl\langle f(q)-q,x_{n}-q\bigr\rangle =\lim_{i\rightarrow\infty}\bigl\langle f(q)-q,x_{n_{i}}-q\bigr\rangle . $$
Since \(\{x_{n_{i}}\}\) is bounded, we can choose a subsequence \(\{x_{n_{i_{j}}}\}\) of \(\{x_{n_{i}}\}\) which converges weakly to some point \(\bar{x}\). We may assume, without loss of generality, that \(\{x_{n_{i}}\}\) converges weakly to \(\bar{x}\). Now, we are in a position to show \(\bar{x}\in(A+B)^{-1}(0)\cap EP(F)\).
First, we prove \(\bar{x}\in(A+B)^{-1}(0)\). Notice that
$$\frac{x_{n}-z_{n}+e_{n}}{r_{n}}-Ax_{n}\in Bz_{n}. $$
Let \(\mu\in B\nu\). Since B is monotone, we find that
$$\biggl\langle \frac{x_{n}-z_{n}+e_{n}}{r_{n}}-Ax_{n}-\mu, z_{n}-\nu \biggr\rangle \geq0. $$
Letting \(i\rightarrow\infty\) and using (3.10) together with \(\|e_{n}\|\rightarrow0\), we find that \(\langle-A\bar{x}-\mu, \bar{x}-\nu\rangle\geq0\). Since B is maximal monotone, this implies \(-A\bar{x}\in B\bar{x}\), that is, \(\bar{x}\in(A+B)^{-1}(0)\).
Next, we prove \(\bar{x}\in EP(F)\). Notice that
$$F(y_{n},y)+\frac{1}{s_{n}}\langle y-y_{n},y_{n}-z_{n} \rangle\geq0, \quad\forall y\in C. $$
By using condition (A2), we see that \(\frac{1}{s_{n}}\langle y-y_{n},y_{n}-z_{n}\rangle\geq F(y,y_{n})\), \(\forall y\in C\). Replacing n by \(n_{i}\), we arrive at
$$\biggl\langle y-y_{n_{i}},\frac{y_{n_{i}}-z_{n_{i}}}{s_{n_{i}}}\biggr\rangle \geq F(y,y_{n_{i}}), \quad\forall y\in C. $$
By using the condition \(\liminf_{n\rightarrow\infty}s_{n}>0\), (3.11), and (3.12), we find that \(\{y_{n_{i}}\}\) converges weakly to \(\bar{x}\). It follows from (A4) that \(F(y,\bar{x})\leq0\) for all \(y\in C\). For \(t\in(0,1]\) and \(y\in C\), put \(y_{t}=ty+(1-t)\bar{x}\). By (A1) and (A4), \(0=F(y_{t},y_{t})\leq tF(y_{t},y)+(1-t)F(y_{t},\bar{x})\leq tF(y_{t},y)\), that is, \(F(y_{t},y)\geq0\). Letting \(t\downarrow0\) and using (A3), we obtain \(F(\bar{x},y)\geq0\). This proves that \(\bar{x}\in EP(F)\). Since \(q=P_{(A+B)^{-1}(0)\cap EP(F)}f(q)\), this shows that \(\limsup_{n\rightarrow\infty}\langle f(q)-q,x_{n}-q\rangle=\langle f(q)-q,\bar{x}-q\rangle\leq0\). Note that
$$\begin{aligned} \| x_{n+1}-q\|^{2} \leq{}&\alpha_{n} \bigl\langle f(x_{n})-q,x_{n+1}-q\bigr\rangle +\beta_{n} \|x_{n}-q\|\| x_{n+1}-q\| \\ &{} +\gamma_{n}\|y_{n}-q\|\|x_{n+1}-q\| \\ \leq{}&\alpha_{n} \bigl\langle f(x_{n})-f(q),x_{n+1}-q \bigr\rangle +\alpha_{n} \bigl\langle f(q)-q,x_{n+1}-q\bigr\rangle \\ &{} +\beta_{n}\|x_{n}-q\|\|x_{n+1}-q\|+ \gamma_{n}\bigl(\|x_{n}-q\|+\|e_{n}\|\bigr)\| x_{n+1}-q\| \\ \leq{}&\frac{\alpha_{n}\kappa+\beta_{n}+\gamma_{n}}{2}\bigl(\|x_{n}-q\|^{2}+\| x_{n+1}-q\|^{2}\bigr) \\ &{} +\alpha_{n} \bigl\langle f(q)-q,x_{n+1}-q\bigr\rangle +\|e_{n}\|\|x_{n+1}-q\|. \end{aligned}$$
It follows that
$$\| x_{n+1}-q\|^{2} \leq \bigl(1-\alpha_{n}(1- \kappa) \bigr)\|x_{n}-q\|^{2}+2\alpha_{n} \bigl\langle f(q)-q,x_{n+1}-q\bigr\rangle +2\|x_{n+1}-q \|\|e_{n}\|. $$
By using Lemma 2.5, we find that \(\lim_{n\rightarrow\infty}\|x_{n}-q\| =0\). This completes the proof. □
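The iteration of Theorem 3.1 can be sketched numerically on \(H=\mathbb{R}\). All choices below are illustrative, not from the paper: \(F(x,y)=y^{2}-x^{2}\) (so \(T_{s}x=x/(1+2s)\)), \(A=I\) (which is 1-inverse-strongly monotone), \(B=0\) (so \(J_{r}=I\)), and the contraction \(f(x)=0.5x+1\). Here \((A+B)^{-1}(0)\cap EP(F)=\{0\}\), so \(x_{n}\) should approach \(q=0\).

```python
# Toy run of the viscosity splitting algorithm of Theorem 3.1 on H = R.
# All operators and parameter sequences below are illustrative choices.

def run(x1=5.0, n_iters=5000):
    x = x1
    for n in range(1, n_iters + 1):
        alpha = 1.0 / (n + 2)        # alpha_n -> 0, sum alpha_n = inf
        beta = 0.5                   # liminf/limsup of beta_n in (0,1)
        gamma = 1.0 - alpha - beta
        r, s = 0.5, 1.0              # 0 < r <= r_n <= r' < 2*alpha_A = 2
        e = 1.0 / (n + 1) ** 2       # summable computational errors
        u = x - r * x + e            # x_n - r_n A x_n + e_n  (A = I)
        z = u                        # J_{r_n} u  (B = 0, so J = I)
        y = z / (1.0 + 2.0 * s)      # T_{s_n} z for F(x,y) = y^2 - x^2
        x = alpha * (0.5 * x + 1.0) + beta * x + gamma * y
    return x

print(run())  # close to the common solution q = 0
```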

From Theorem 3.1, we have the following result on the equilibrium problem immediately.

Corollary 3.2

Let C be a nonempty, closed, and convex subset of H and let F be a bifunction from \(C\times C\) to \(\mathbb{R}\) which satisfies (A1)-(A4). Assume \(EP(F)\neq\emptyset\). Let \(f:C\rightarrow C\) be a contraction with the contractive constant \(\kappa\in(0,1)\). Let \(\{s_{n}\}\) be a positive real number sequence. Let \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), and \(\{\gamma_{n}\}\) be real number sequences in \((0,1)\) such that \(\alpha_{n}+\beta_{n}+\gamma_{n}=1\). Let \(\{e_{n}\}\) be a sequence in H such that \(\sum_{n=1}^{\infty}\|e_{n}\|<\infty\). Let \(\{x_{n}\}\) be a sequence generated in the following process: \(x_{1}\in C\), \(x_{n+1}=\alpha_{n} f(x_{n})+\beta_{n}x_{n}+\gamma_{n}y_{n}\), where \(\{y_{n}\}\) is a sequence such that \(F(y_{n},y)+\frac{1}{s_{n}}\langle y-y_{n},y_{n}-x_{n}-e_{n}\rangle\geq0\), \(\forall y\in C\). Assume that the control sequences \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), and \(\{s_{n}\}\) satisfy the conditions: \(\lim_{n\rightarrow\infty}\alpha_{n}=0\), \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(0<\liminf_{n\rightarrow\infty}\beta_{n}\leq\limsup_{n\rightarrow\infty}\beta_{n}<1\), \(0<\liminf_{n\rightarrow\infty}s_{n}\), \(\lim_{n\rightarrow\infty}|s_{n}-s_{n+1}|=0\). Then \(\{x_{n}\}\) converges strongly to \(q=P_{EP(F)}f(q)\).

From Theorem 3.1, we have the following result on the inclusion problem immediately.

Corollary 3.3

Let C be a nonempty, closed, and convex subset of H and let \(f:C\rightarrow C\) be a contraction with the contractive constant \(\kappa\in(0,1)\). Let \(A:C\rightarrow H\) be an α-inverse-strongly monotone mapping and let \(B:H\rightrightarrows H\) be a maximal monotone mapping such that the domain of B is in C. Assume \((A+B)^{-1}(0)\neq\emptyset\). Let \(\{r_{n}\}\) be a positive real number sequence. Let \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), and \(\{\gamma_{n}\}\) be real number sequences in \((0,1)\) such that \(\alpha_{n}+\beta_{n}+\gamma_{n}=1\). Let \(\{ e_{n}\}\) be a sequence in H such that \(\sum_{n=1}^{\infty}\|e_{n}\|<\infty\). Let \(\{x_{n}\}\) be a sequence generated in the following process: \(x_{1}\in C\) and \(x_{n+1}=\alpha_{n} f(x_{n})+\beta_{n}x_{n}+\gamma_{n}J_{r_{n}} (x_{n}-r_{n}Ax_{n}+e_{n} )\), \(n\geq1\). Assume that the control sequences \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), and \(\{r_{n}\}\) satisfy the following conditions: \(\lim_{n\rightarrow\infty}\alpha_{n}=0\), \(\sum_{n=1}^{\infty}\alpha _{n}=\infty\), \(0<\liminf_{n\rightarrow\infty}\beta_{n}\leq\limsup_{n\rightarrow\infty}\beta_{n}<1\), \(0< r\leq r_{n}\leq r'<2\alpha\), \(\lim_{n\rightarrow\infty}|r_{n}-r_{n+1}|=0\), where r and \(r'\) are real constants. Then \(\{x_{n}\}\) converges strongly to \(q=P_{(A+B)^{-1}(0)}f(q)\).

4 Applications

In this section, we give some results on equilibrium problems, variational inequalities, and convex functions.

Lemma 4.1

[12]

Let G be a bifunction from \(C\times C\) to \(\mathbb{R}\) which satisfies (A1)-(A4), and let W be a multivalued mapping of H into itself defined by
$$ Wx= \begin{cases} \{z\in H:G(x,y)\geq\langle y-x,z\rangle, \forall y\in C\},&x\in C,\\ \emptyset, &x\notin C. \end{cases} $$
(4.1)
Then W is a maximal monotone operator with the domain \(D(W)\subset C\) and \(EP(G)=W^{-1}(0)\).

Theorem 4.2

Let C be a nonempty, closed, and convex subset of H and let F and G be two bifunctions from \(C\times C\) to \(\mathbb{R}\) which satisfy (A1)-(A4). Assume that \(EP(G)\cap EP(F)\neq\emptyset\). Let \(f:C\rightarrow C\) be a contraction with the contractive constant \(\kappa\in(0,1)\). Let \(\{r_{n}\}\) and \(\{s_{n}\}\) be positive real number sequences. Let \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), and \(\{\gamma_{n}\}\) be real number sequences in \((0,1)\) such that \(\alpha_{n}+\beta_{n}+\gamma_{n}=1\). Let \(\{e_{n}\}\) be a sequence in H such that \(\sum_{n=1}^{\infty}\|e_{n}\|<\infty\). Let \(\{x_{n}\}\) be a sequence generated in the following process: \(x_{1}\in C\), \(x_{n+1}=\alpha_{n} f(x_{n})+\beta_{n}x_{n}+\gamma_{n}y_{n}\), \(n\geq1\), where \(F(y_{n},y)+\frac{1}{s_{n}}\langle y-y_{n},y_{n}-(I+r_{n}W)^{-1} (x_{n}+e_{n} )\rangle\geq0\), \(\forall y\in C\). Assume that the control sequences \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), \(\{r_{n}\}\), and \(\{s_{n}\}\) satisfy the following conditions: \(\lim_{n\rightarrow\infty}\alpha_{n}=0\), \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(0<\liminf_{n\rightarrow\infty}\beta_{n}\leq\limsup_{n\rightarrow\infty}\beta_{n}<1\), \(0<\liminf_{n\rightarrow\infty}s_{n}\), \(0< r\leq r_{n}\), \(\lim_{n\rightarrow\infty}|r_{n}-r_{n+1}|=\lim_{n\rightarrow\infty}|s_{n}-s_{n+1}|=0\), where r is a real constant. Then \(\{x_{n}\}\) converges strongly to \(q=P_{EP(G)\cap EP(F)}f(q)\).

Recall that the classical variational inequality is to find an \(x\in C\) such that
$$ \langle Ax,y-x\rangle\geq0, \quad\forall y\in C, $$
(4.2)
where \(A:C\rightarrow H\) is a monotone mapping. The solution set of (4.2) is denoted by \(VI(C,A)\). Projection methods have been recently investigated for solving variational inequality (4.2). It is well known that x is a solution to (4.2) iff x is a fixed point of the mapping \(\operatorname{Proj}_{C}(I-rA)\), where \(\operatorname{Proj}_{C}\) is the metric projection from H onto C and I denotes the identity on H. If A is inverse-strongly monotone, then \(\operatorname{Proj}_{C}(I-rA)\) is nonexpansive. Moreover, if C is bounded, closed, and convex, then the existence of solutions of the variational inequality is guaranteed by the nonexpansivity of the mapping \(\operatorname{Proj}_{C}(I-rA)\). Next, we consider solutions of variational inequality (4.2). Let \(i_{C}\) be a function defined by
$$i_{C}(x)= \begin{cases} 0,&x\in C,\\ \infty,&x\notin C. \end{cases} $$
It is easy to see that \(i_{C}\) is a proper lower semicontinuous convex function on H, and the subdifferential \(\partial i_{C}\) of \(i_{C}\) is maximal monotone. Define the resolvent \(J_{r}:=(I+r\partial i_{C})^{-1}\) of the subdifferential operator \(\partial i_{C}\). Letting \(x=J_{r}y\), we find that
$$\begin{aligned} y\in x+r\partial i_{C}x &\quad\Longleftrightarrow\quad y\in x+rN_{C}x \\ &\quad\Longleftrightarrow\quad\langle y-x,v-x\rangle\leq0,\quad\forall v\in C \\ &\quad\Longleftrightarrow\quad x=\operatorname{Proj}_{C}y, \end{aligned}$$
where \(N_{C}x:=\{e\in H:\langle e, v-x\rangle\leq0,\forall v\in C\}\). Putting \(B=\partial i_{C}\) in Theorem 3.1, we find the following result immediately.

Theorem 4.3

Let C be a nonempty, closed, and convex subset of H and let \(f:C\rightarrow C\) be a contraction with the contractive constant \(\kappa\in(0,1)\). Let \(A:C\rightarrow H\) be an α-inverse-strongly monotone mapping such that \(VI(C,A)\neq \emptyset\). Let \(\{r_{n}\}\) be a positive real number sequence. Let \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), and \(\{\gamma_{n}\}\) be real number sequences in \((0,1)\) such that \(\alpha_{n}+\beta_{n}+\gamma_{n}=1\). Let \(\{ e_{n}\}\) be a sequence in H such that \(\sum_{n=1}^{\infty}\|e_{n}\|<\infty\). Let \(\{x_{n}\}\) be a sequence generated in the following process: \(x_{1}\in C\), \(x_{n+1}=\alpha_{n} f(x_{n})+\beta_{n}x_{n}+\gamma_{n}y_{n}\), \(n\geq 1\), \(y_{n}=\operatorname{Proj}_{C} (x_{n}-r_{n}Ax_{n}+e_{n} )\). Assume that the control sequences \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), and \(\{r_{n}\}\) satisfy the conditions: \(\lim_{n\rightarrow\infty}\alpha_{n}=0\), \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(0<\liminf_{n\rightarrow\infty}\beta _{n}\leq\limsup_{n\rightarrow\infty}\beta_{n}<1\), \(0< r\leq r_{n}\leq r'<2\alpha\), \(\lim_{n\rightarrow\infty}|r_{n}-r_{n+1}|=0\), where r and \(r'\) are real constants. Then \(\{x_{n}\}\) converges strongly to \(q=P_{VI(C,A)}f(q)\).

Proof

Put \(F(x,y)=0\) for any \(x,y\in C\) and \(s_{n}=1\). From Theorem 3.1, we draw the desired conclusion immediately. □
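As a rough numerical check, the scheme of Theorem 4.3 can be run on toy data. In the sketch below (all problem data are our illustrative choices, not from the paper) \(C=[0,1]^{2}\) and \(Ax=x-b\), which is 1-inverse-strongly monotone, so \(VI(C,A)=\{\operatorname{Proj}_{C}b\}\) and the iterates should approach that point.

```python
import numpy as np

b = np.array([2.0, 0.5])
proj_C = lambda y: np.clip(y, 0.0, 1.0)   # C = [0, 1]^2
A = lambda x: x - b                        # 1-inverse-strongly monotone
f = lambda x: 0.5 * x                      # contraction, kappa = 1/2

x = np.array([0.3, 0.9])                   # x_1 in C
for n in range(1, 2000):
    alpha = 1.0 / (2 * (n + 1))            # alpha_n -> 0, sum alpha_n = infinity
    beta = 0.5                             # liminf/limsup of beta_n in (0, 1)
    gamma = 1.0 - alpha - beta
    r = 1.0                                # 0 < r <= r_n <= r' < 2*alpha_A = 2
    e = np.full(2, 1.0 / n**2)             # summable computational errors
    y = proj_C(x - r * A(x) + e)           # y_n = Proj_C(x_n - r_n A x_n + e_n)
    x = alpha * f(x) + beta * x + gamma * y

print(x)  # close to the VI solution Proj_C(b) = [1.0, 0.5]
```

Since \(VI(C,A)\) is a singleton here, the viscosity limit \(q=P_{VI(C,A)}f(q)\) is that single point regardless of the choice of f.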

Now, we are in a position to consider the problem of finding minimizers of proper lower semicontinuous convex functions. For a proper lower semicontinuous convex function \(g:H\rightarrow(-\infty,\infty]\), the subdifferential mapping ∂g of g is defined by \(\partial g(x)=\{x^{*}\in H:g(x)+\langle y-x,x^{*}\rangle\leq g(y),\forall y\in H\}\), \(\forall x\in H\). Rockafellar [30] proved that ∂g is a maximal monotone operator. It is easy to verify that \(0\in\partial g(v)\) iff \(g(v)=\min_{x\in H} g(x)\).
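As a concrete illustration (our own example, not from the paper): for \(g(x)=|x|\) on \(H=\mathbb{R}\), the resolvent \(J_{r}=(I+r\partial g)^{-1}\) has the closed form known as soft-thresholding, its fixed points are exactly the minimizers of g, and \(0\in\partial g(0)=[-1,1]\) confirms that 0 is the minimizer.

```python
import numpy as np

def soft_threshold(x, r):
    # Resolvent of the subdifferential of g(x) = |x|:
    # J_r(x) = sign(x) * max(|x| - r, 0).
    return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

# Brute-force check that J_r(x) minimizes z -> g(z) + (z - x)^2 / (2 r).
x, r = 1.3, 0.5
zs = np.linspace(-3.0, 3.0, 60001)
z_star = zs[np.argmin(np.abs(zs) + (zs - x) ** 2 / (2 * r))]
print(soft_threshold(x, r), z_star)   # both close to 0.8

# 0 is in dg(0), so 0 minimizes g and is a fixed point of J_r.
print(soft_threshold(0.0, r))         # 0.0
```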

Theorem 4.4

Let \(g:H\rightarrow(-\infty,+\infty]\) be a proper convex lower semicontinuous function such that \((\partial g)^{-1}(0)\) is not empty. Let f be a contraction on H with the contractive constant \(\kappa\in(0,1)\). Let \(\{r_{n}\}\) be a positive real number sequence. Let \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), and \(\{\gamma_{n}\}\) be real number sequences in \((0,1)\) such that \(\alpha_{n}+\beta_{n}+\gamma_{n}=1\). Let \(\{e_{n}\}\) be a sequence in H such that \(\sum_{n=1}^{\infty}\|e_{n}\|<\infty\). Let \(\{x_{n}\}\) be a sequence generated in the following process: \(x_{1}\in H\), \(x_{n+1}=\alpha_{n} f(x_{n})+\beta_{n}x_{n}+\gamma_{n}y_{n}\), \(n\geq 1\), \(y_{n}=\arg\min_{z\in H}\{g(z)+\frac{\|z-x_{n}-e_{n}\|^{2}}{2r_{n}}\}\). Assume that the control sequences \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), and \(\{r_{n}\}\) satisfy the conditions: \(\lim_{n\rightarrow\infty}\alpha_{n}=0\), \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(0<\liminf_{n\rightarrow\infty}\beta_{n}\leq\limsup_{n\rightarrow\infty}\beta_{n}<1\), \(0< r\leq r_{n}<\infty\), \(\lim_{n\rightarrow\infty}|r_{n}-r_{n+1}|=0\), where r is a real constant. Then \(\{x_{n}\}\) converges strongly to \(q=P_{(\partial g)^{-1}(0)}f(q)\).

Proof

Since \(g:H\rightarrow(-\infty,\infty]\) is a proper convex and lower semicontinuous function, we see that the subdifferential ∂g of g is maximal monotone. Putting \(F(x,y)=0\) for any \(x,y\in C\), \(s_{n}=1\), and \(A=0\), we find \(y_{n}=J_{r_{n}}(x_{n}+e_{n})\). It follows that
$$y_{n}=\arg\min_{z\in H}\biggl\{ g(z)+\frac{\|z-x_{n}-e_{n}\|^{2}}{2r_{n}} \biggr\} $$
is equivalent to
$$0\in\partial g(y_{n})+\frac{1}{r_{n}}(y_{n}-x_{n}-e_{n}). $$
Hence, we have
$$x_{n}+e_{n}\in y_{n}+r_{n}\partial g(y_{n}). $$
By using Theorem 3.1, we find the desired conclusion immediately. □
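The scheme of Theorem 4.4 can likewise be sketched numerically. Below (toy data of our own choosing, not from the paper) \(g(z)=\|z-c\|^{2}/2\), whose proximal step has the closed form \(\arg\min_{z}\{g(z)+\|z-w\|^{2}/(2r)\}=(rc+w)/(1+r)\) and whose unique minimizer is c, so \((\partial g)^{-1}(0)=\{c\}\).

```python
import numpy as np

c = np.array([1.0, -2.0])
prox_g = lambda w, r: (r * c + w) / (1.0 + r)  # closed-form proximal map of g
f = lambda x: 0.25 * x + 1.0                   # contraction, kappa = 1/4

x = np.zeros(2)                                # x_1
for n in range(1, 2000):
    alpha = 1.0 / (2 * (n + 1))                # alpha_n -> 0, sum alpha_n = infinity
    beta = 0.5
    gamma = 1.0 - alpha - beta
    r = 1.0                                    # 0 < r <= r_n
    e = np.full(2, 1.0 / n**2)                 # summable errors enter via x_n + e_n
    # y_n = argmin_z { g(z) + ||z - x_n - e_n||^2 / (2 r_n) }
    y = prox_g(x + e, r)
    x = alpha * f(x) + beta * x + gamma * y

print(x)  # close to the minimizer c = [1.0, -2.0]
```

Again \((\partial g)^{-1}(0)\) is a singleton, so the limit \(q=P_{(\partial g)^{-1}(0)}f(q)\) is c itself.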

Declarations

Acknowledgements

This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University under grant No. (58-130-35-HiCi). The authors, therefore, acknowledge technical and financial support of KAU.

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

Authors’ Affiliations

(1)
Department of Mathematics, Faculty of Science for Girls, King Abdulaziz University
(2)
Department of Mathematics, King Abdulaziz University
(3)
Department of Mathematics, Faculty of Science, King Abdulaziz University

References

  1. Lions, PL, Mercier, B: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964-979 (1979)
  2. Fattorini, HO: Infinite-Dimensional Optimization and Control Theory. Cambridge University Press, Cambridge (1999)
  3. Iiduka, H: Fixed point optimization algorithm and its application to network bandwidth allocation. J. Comput. Appl. Math. 236, 1733-1742 (2012)
  4. Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)
  5. Martinet, B: Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Oper. 4, 154-159 (1970)
  6. Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877-898 (1976)
  7. Spingarn, JE: Applications of the method of partial inverses to convex programming: decomposition. Math. Program. 32, 199-223 (1985)
  8. Eckstein, J, Bertsekas, DP: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293-318 (1992)
  9. Lions, PL, Mercier, B: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964-979 (1979)
  10. Passty, GB: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72, 383-390 (1979)
  11. Takahashi, S, Takahashi, W: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 331, 506-515 (2007)
  12. Takahashi, S, Takahashi, W, Toyoda, M: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 147, 27-41 (2010)
  13. Kamimura, S, Takahashi, W: Approximating solutions of maximal monotone operators in Hilbert spaces. J. Approx. Theory 106, 226-240 (2000)
  14. Iiduka, H, Takahashi, W: Strong convergence theorems for nonexpansive mappings and inverse-strongly monotone mappings. Nonlinear Anal. 61, 341-350 (2005)
  15. Qin, X, Cho, SY, Wang, L: A regularization method for treating zero points of the sum of two monotone operators. Fixed Point Theory Appl. 2014, Article ID 75 (2014)
  16. Colao, V, Acedo, GG, Marino, G: An implicit method for finding common solutions of variational inequalities and systems of equilibrium problems and fixed points of infinite family of nonexpansive mappings. Nonlinear Anal. 71, 2708-2715 (2009)
  17. Wang, ZM, Zhang, X: Shrinking projection methods for systems of mixed variational inequalities of Browder type, systems of mixed equilibrium problems and fixed point problems. J. Nonlinear Funct. Anal. 2014, 15 (2014)
  18. Maingé, PE: A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 47, 1499-1515 (2008)
  19. Moudafi, A: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 241, 46-55 (2000)
  20. Maingé, PE: A viscosity method with no spectral radius requirements for the split common fixed point problem. Eur. J. Oper. Res. 235, 17-27 (2014)
  21. Cho, SY, Kang, SM: Approximation of common solutions of variational inequalities via strict pseudocontractions. Acta Math. Sci. 32, 1607-1618 (2012)
  22. Cho, SY, Kang, SM: Approximation of fixed points of pseudocontraction semigroups based on a viscosity iterative process. Appl. Math. Lett. 24, 224-228 (2011)
  23. Zegeye, H, Shahzad, N: Strong convergence theorem for a common point of solution of variational inequality and fixed point problem. Adv. Fixed Point Theory 2, 374-397 (2012)
  24. Blum, E, Oettli, W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123-145 (1994)
  25. Fan, K: A minimax inequality and applications. In: Shisha, O (ed.) Inequalities III, pp. 103-113. Academic Press, New York (1972)
  26. Browder, FE: Nonexpansive nonlinear operators in a Banach space. Proc. Natl. Acad. Sci. USA 54, 1041-1044 (1965)
  27. Rockafellar, RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 149, 75-88 (1970)
  28. Suzuki, T: Strong convergence of Krasnoselskii and Mann’s type sequences for one-parameter nonexpansive semigroups without Bochner integrals. J. Math. Anal. Appl. 305, 227-239 (2005)
  29. Liu, LS: Ishikawa and Mann iterative process with errors for nonlinear strongly accretive mappings in Banach spaces. J. Math. Anal. Appl. 194, 114-125 (1995)
  30. Rockafellar, RT: Convex Analysis. Princeton University Press, Princeton (1970)

Copyright

© Bin Dehaish et al.; licensee Springer. 2015