Open Access

On the role of the coefficients in the strong convergence of a general type Mann iterative scheme

Journal of Inequalities and Applications 2015, 2015:118

https://doi.org/10.1186/s13660-015-0641-4

Received: 4 November 2014

Accepted: 24 March 2015

Published: 3 April 2015

Abstract

Let H be a Hilbert space. Let \((W_{n})_{n\in\mathbb{N}}\) be a suitable family of mappings. Let S be a nonexpansive mapping and D be a strongly monotone operator. We study the convergence of the general scheme \(x_{n+1}=W_{n}(\alpha_{n}Sx_{n}+(1-\alpha_{n})(I-\mu_{n}D)x_{n}) \) in dependence on the coefficients \((\alpha_{n})_{n\in\mathbb{N}}\), \((\mu_{n})_{n\in\mathbb{N}}\).

Keywords

iterative methods; nonexpansive mappings; strongly monotone operators; dependence on the coefficients; variational inequality

MSC

47H09; 58E35; 47H10; 65J25

1 Introduction and motivations

The approximation of fixed points of nonlinear mappings is a wide and active research area, and its applications arise more and more widely in the calculus of variations and optimization. The starting point of many papers is a modification of Mann’s iterative method [1],
$$x_{n+1}=\alpha_{n} x_{n}+(1-\alpha_{n})Tx_{n}, $$
in order to obtain strong convergence results.
Many of these modified Mann schemes yield approximation sequences by suitable convex combinations like
$$x_{n+1}=\alpha_{n}g(x_{n})+(1-\alpha_{n})Vy_{n}, $$
where g, V, and \((y_{n})_{n\in\mathbb{N}}\) are suitably chosen (see, for instance, Halpern [2], Ishikawa [3], Moudafi [4], Nakajo and Takahashi [5]).
In this paper, we instead focus on the following iterative method:
$$x_{n+1}=W_{n} \bigl(\alpha_{n} Sx_{n}+(1- \alpha_{n}) (I-\mu_{n}D)x_{n} \bigr). $$
This method is very different from most of the existing methods in the literature; let us immediately discuss some motivations.
Let H be a Hilbert space and \(f:H\to\mathbb{R}\) be a convex and lower semicontinuous functional. Our interest is focused on the minimization problem
$$ \min_{x\in C} f(x), $$
(1.1)
where the constraint set C is a closed and convex subset of H.

The following theorem is proved in [6].

Theorem 1.1

Let H be a Hilbert space and \(f:H\to\mathbb{R}\) be a convex functional. Then
  1. (a)

    \(f(x_{0})=\min_{H}f(x)\) if and only if \(0\in\partial f(x_{0})\).

     
  2. (b)

    Let \(C\subset H\). Then \(f(x_{0})=\min_{C}f(x)\) if and only if \((-\partial f(x_{0})\cap\partial\delta_{C}(x_{0}))\neq\emptyset\), where \(\delta_{C}\) is the indicator function of C.

     

Denote by Σ the set of solutions of (1.1). Let us start with the simple case in which \(f:H\to\mathbb{R}\) is a convex and continuously Fréchet differentiable functional.

By the definition of an indicator function we recall that (see [6])
$$ \textstyle\partial\delta_{C}(x_{0})= \left \{ \begin{array}{@{}l@{\quad}l} \emptyset, & x_{0}\in H\setminus C,\\ 0, & x_{0}\in\mathring{C}, \\ \{x^{*}\in H:\sup_{C}\langle x^{*},x\rangle=\langle x^{*},x_{0}\rangle\}, & x_{0}\in C\setminus\mathring{C}. \end{array} \right . $$
(1.2)
Since \(f(\cdot)\) is Fréchet differentiable, \(\partial f(x_{0})\) is a singleton, \(\{\nabla f(x_{0})\}\); hence Theorem 1.1(b) of [6] ensures that \(x_{0}\in C\) is a solution of (1.1) if and only if \(-\nabla f(x_{0})\in\partial\delta_{C}(x_{0})\), i.e.
$$\bigl\langle \nabla f(x_{0}),x_{0} \bigr\rangle \leq \bigl\langle \nabla f(x_{0}),x \bigr\rangle ,\quad \forall x\in C. $$
In other words \(x_{0}\in C\) is a solution of (1.1) if and only if
$$ \bigl\langle \nabla f(x_{0}),x-x_{0} \bigr\rangle \geq0,\quad \forall x\in C. $$
(1.3)
From (1.3), for every \(\gamma>0\), \(x_{0}\) is a solution for (1.1) if and only if
$$ \bigl\langle x_{0}- \bigl(x_{0}-\gamma\nabla f(x_{0}) \bigr),x-x_{0} \bigr\rangle \geq0, \quad\forall x\in C, $$
(1.4)
and, in view of Browder’s characterization of the metric projections \(P_{C}\), to solve (1.4) is equivalent to finding \(x_{0}\) such that
$$x_{0}=P_{C} (I-\gamma\nabla f )x_{0}. $$

Therefore, to solve problem (1.1) (respectively, to approximate solutions of (1.1)) is equivalent to solving (resp. to approximating the solutions of) a fixed point problem which involves the operator \(\nabla f\).

It is well known that, by the convexity of the functional f, the operator \(\nabla f\) is a monotone operator; indeed, since
$$\begin{aligned}[b] & f(x)\geq f(y)+ \bigl\langle \nabla f(y),x-y \bigr\rangle , \quad\forall x\in H, \\ & f(y)\geq f(x)+ \bigl\langle \nabla f(x),y-x \bigr\rangle , \quad\forall y\in H, \end{aligned} $$
it easily follows that
$$\bigl\langle \nabla f(y)-\nabla f(x),y-x \bigr\rangle \geq0,\quad \forall x,y\in H. $$
If we assume that \(\nabla f\) is \(L_{f}\)-lipschitzian then, by Baillon-Haddad’s results [7], we have
$$\bigl\langle \nabla f(y)-\nabla f(x),y-x \bigr\rangle \geq\frac{1}{L_{f}} \bigl\| \nabla f(x)-\nabla f(y)\bigr\| ^{2}, \quad\forall x,y\in H, $$
i.e. \(\nabla f\) is \(\frac{1}{L_{f}}\)-inverse strongly monotone.
Under such a hypothesis on \(\nabla f\), Takahashi and Toyoda in [8] proved that \(P_{C} (I-\frac{1}{L_{f}}\nabla f)\) is a nonexpansive mapping; hence to solve (1.1) (resp. to approximate a solution of (1.1)) is equivalent to finding (resp. to approximating) a fixed point of the nonexpansive mapping \(P_{C} (I-\frac{1}{L_{f}}\nabla f )\). Xu in 2011 [9] showed that, even if \(\Sigma\neq\emptyset\), it is not guaranteed that the natural iteration
$$ x_{n+1}=P_{C} \biggl(I-\frac{1}{L_{f}}\nabla f \biggr)x_{n}= \biggl(P_{C} \biggl(I-\frac{1}{L_{f}} \nabla f \biggr) \biggr)^{n}x_{0}, $$
(1.5)
strongly converges to a solution of Σ. An example is given in the following.

Example 1.2

[9]

Following Hundal [10], there exist in \(H=l^{2}\) two closed and convex subsets \(C_{1}\) and \(C_{2}\) such that: (i) \(C_{1}\cap C_{2}\neq \emptyset\), and (ii) the sequence generated by \(x_{0}\in C_{2}\) and the formula \(x_{n}=(P_{C_{2}}P_{C_{1}})^{n}x_{0}\) converges weakly but not strongly.

Let \(f(x)=\frac{1}{2}\|x-P_{C_{1}}x\|^{2}\). We consider the minimization of \(f(x)\) over \(C_{2}\). It follows that \(\nabla f(x)=(I-P_{C_{1}})x\). Since \(P_{C_{1}}\) is firmly nonexpansive, i.e., 1-inverse strongly monotone, iteration (1.5) becomes
$$x_{n+1}=P_{C_{2}}(I-\nabla f)x_{n}=P_{C_{2}}P_{C_{1}}x_{n}, $$
that is, the sequence generated by (ii).
If we add to the lipschitzianity of \(\nabla f\) the (stronger) assumption that \(\nabla f\) is a \(\sigma_{f}\)-strongly monotone operator, i.e.
$$\bigl\langle \nabla f(y)-\nabla f(x),y-x \bigr\rangle \geq\sigma_{f} \|x-y\|^{2},\quad \forall x,y\in H, $$
then the mapping \(P_{C} (I-\frac{\sigma _{f}}{L_{f}^{2}}\nabla f )\) is a contraction; therefore the contraction principle ensures that problem (1.1) has a unique solution \(x^{*}\) and the iterative sequence
$$ x_{n+1}=P_{C} \biggl(I-\frac{\sigma_{f}}{L_{f}^{2}}\nabla f \biggr)x_{n} $$
(1.6)
strongly converges to \(x^{*}\).
Notice that, if \(C=H\), then \(P_{C}=I\) and the iteration
$$x_{n+1}= \biggl(I-\frac{\sigma_{f}}{L_{f}^{2}}\nabla f \biggr)x_{n} $$
strongly converges to the unique zero of \(\nabla f\).
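As a quick numerical illustration (all concrete choices below are ours, not the paper's), iteration (1.6) with \(C=H\) can be sketched for a strongly convex quadratic \(f(x)=\frac{1}{2}\langle Ax,x\rangle-\langle b,x\rangle\), whose gradient \(\nabla f(x)=Ax-b\) is \(\sigma_{f}\)-strongly monotone and \(L_{f}\)-lipschitzian with \(\sigma_{f}\), \(L_{f}\) the extreme eigenvalues of A:

```python
import numpy as np

# f(x) = 0.5 x^T A x - b^T x, so grad f(x) = A x - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, -1.0])

eigs = np.linalg.eigvalsh(A)
sigma, L = eigs[0], eigs[-1]             # strong monotonicity / Lipschitz constants
step = sigma / L**2                      # the step sigma_f / L_f^2 from (1.6)

x = np.zeros(2)
for _ in range(2000):
    x = x - step * (A @ x - b)           # x_{n+1} = (I - (sigma/L^2) grad f) x_n

x_star = np.linalg.solve(A, b)           # the unique zero of grad f
assert np.allclose(x, x_star, atol=1e-8)
```

The contraction factor here is at most \(1-\frac{\sigma_{f}^{2}}{L_{f}^{2}}<1\), so the iterates converge linearly to the unique minimizer.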

Hence a natural question is how to use the good properties of strongly monotone operators to find a solution of (1.1) when \(\nabla f\) is only lipschitzian.

A well-known approach is to consider a regularized problem; an example is to appeal to Tikhonov’s regularized problem:
$$\min_{x\in C} \biggl[f(x)+\frac{\varepsilon}{2}\|x\|^{2} \biggr], $$
where \(\varepsilon>0\) is given.
This approach arises from the following idea: if \(\nabla f\) is only lipschitzian (for instance nonexpansive), we can perturb problem (1.1) by a convex and differentiable functional g, with \(\nabla g\) a \(\sigma_{g}\)-strongly monotone and \(L_{g}\)-lipschitzian operator, and consider
$$ \min_{x\in C}f(x)+\varepsilon g(x). $$
(1.7)
The operator \((\nabla f+\varepsilon\nabla g)\) is a lipschitzian and strongly monotone operator, so the minimum problem (1.7) has a unique solution and, for a suitable \(\lambda>0\),
$$x_{n+1}=P_{C} \bigl(I-\lambda(\nabla f+\varepsilon\nabla g) \bigr)x_{n} $$
strongly converges to this solution.
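A minimal numerical sketch of why the regularization helps, under assumed toy data (our example, not the paper's): for the merely monotone gradient \(\nabla f(x)=Ax\) with A singular, the plain gradient iteration leaves the kernel component of the iterate untouched, so its limit depends on the starting point, while the Tikhonov-regularized iteration is a contraction with a unique limit:

```python
import numpy as np

# f(x) = 0.5 <Ax, x> with singular A: every point of ker A = span{(1,1)} is a minimizer.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # grad f(x) = A x: monotone, NOT strongly monotone
eps, lam = 1e-3, 0.4                        # assumed regularization weight and step size

x_plain = np.array([2.0, 0.0])
x_reg = np.array([2.0, 0.0])
for _ in range(100000):
    x_plain = x_plain - lam * (A @ x_plain)          # unregularized gradient step
    x_reg = x_reg - lam * (A @ x_reg + eps * x_reg)  # step for grad(f + (eps/2)||.||^2)

print(x_plain)  # the ker A component survives: the limit (1, 1) depends on the start
print(x_reg)    # the regularized problem has a unique solution, here the origin
```
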
Let us observe that
$$\begin{aligned} x_{n+1} =&P_{C} \bigl(I-\lambda(\nabla f+\varepsilon\nabla g) \bigr)x_{n}=P_{C}(I-\lambda\nabla f-\lambda\varepsilon \nabla g)x_{n} \\ =&P_{C} \biggl(\lambda(I-\nabla f)+(1-\lambda) \biggl(I- \frac{\lambda \varepsilon}{(1-\lambda)} \nabla g \biggr) \biggr)x_{n} \\ =&P_{C} \bigl(\lambda(I-\nabla f)+(1-\lambda) (I-\gamma \varepsilon \nabla g ) \bigr)x_{n}, \end{aligned}$$
i.e. \((x_{n})_{n\in\mathbb{N}}\) is generated by the composition of the projection \(P_{C}\) and the convex combination of two maps: the first is a nonexpansive mapping; the second is a contraction built from a strongly monotone operator. In fact, for a suitable choice of λ (and \(\gamma :=\frac{\lambda}{1-\lambda}\)), we find that
  • \((I-\nabla f)\) is a nonexpansive mapping;

  • the mapping \((I-\gamma\varepsilon\nabla g )\) is a contraction.

For these reasons we are interested in the iteration
$$ x_{n+1}=W_{n} \bigl(\alpha_{n} Sx_{n}+(1-\alpha_{n}) (I-\mu_{n}D)x_{n} \bigr), $$
(1.8)
under the following hypotheses:
Hypotheses ()
  • \((\alpha_{n})_{n\in\mathbb{N}}\) is a sequence in \([0,1)\).

  • \(S:H\to H\) is a nonexpansive mapping not necessarily with fixed points.

  • \(D:H \to H\) is a σ-strongly monotone operator and L-lipschitzian.

  • \(0<\mu_{n}\leq\mu\) with \(\mu< \frac{2\sigma}{L^{2}}\), \(\rho=\frac{2\sigma-\mu L^{2}}{2}\).

  • \((W_{n})_{n\in\mathbb{N}}\) is a sequence of mappings defined on H such that \(F:=\bigcap_{n\in\mathbb{N}} \operatorname{Fix}(W_{n})\neq \emptyset\) and
    1. (h1)
      \(W_{n}:H\to H\) are nonexpansive mappings, uniformly asymptotically regular on bounded subsets \(B\subset H\), i.e.
      $$\lim_{n\to\infty}\sup_{x\in B}\|W_{n+1}x-W_{n}x \|=0, $$
       
    2. (h2)

      it is possible to define a nonexpansive mapping \(W:H\to H\), with \(Wx:=\lim_{n\to\infty}W_{n}x\) such that \(\operatorname{Fix}(W)=F\).

       

An interesting example of a sequence \((W_{n})_{n\in\mathbb{N}}\) satisfying our hypotheses is the following.

Example 1.3

Let f be a convex and lower semicontinuous functional on H. We recall that the proximal operator of f on H is defined as
$$\operatorname{prox}_{\lambda f}(x):=\mathop{\operatorname{argmin}}\limits_{v\in H} \biggl\{ f(v)+ \frac{1}{2\lambda}\|x-v\|^{2} \biggr\} , $$
where \(\lambda>0\).
The proximal operator enjoys the following properties:
  1. (1)

    it is a single-valued firmly nonexpansive mapping (hence nonexpansive);

     
  2. (2)

    it coincides with \(P_{C}\) if \(f(x)=\delta_{C}(x)\);

     
  3. (3)

    \(\operatorname{prox}_{\lambda f}=(I+\lambda\partial f)^{-1}\) i.e. it is the resolvent of the subdifferential of f;

     
  4. (4)

    \(\operatorname{prox}_{\lambda f}x=\operatorname{prox}_{\nu f} (\frac{\nu}{\lambda}x+ (1-\frac{\nu}{\lambda} )\operatorname{prox}_{\lambda f}x )\);

     
  5. (5)
    $$x^{*}=\operatorname{prox}_{\lambda f} \bigl(x^{*} \bigr)\quad\Leftrightarrow\quad0\in \partial f \bigl(x^{*} \bigr). $$
     
If \((\lambda_{n})_{n\in\mathbb{N}}\) converges to \(\lambda>0\), then \(W_{n}:=\operatorname{prox}_{\lambda_{n} f}\) satisfies (h1) and (h2) with \(W:=\operatorname{prox}_{\lambda f}\). In fact, the sets of fixed points coincide by (3) and (5). Moreover, by (4),
$$\begin{aligned} \|W_{n+1}x-W_{n}x\| =&\bigl\| \operatorname{prox}_{\lambda_{n+1} f}(x)- \operatorname{prox}_{\lambda_{n} f}(x)\bigr\| \\ =&\biggl\| \operatorname{prox}_{\lambda_{n} f} \biggl(\frac{\lambda _{n}}{\lambda_{n+1}}x+ \biggl(1- \frac{\lambda_{n}}{\lambda_{n+1}} \biggr)\operatorname {prox}_{\lambda_{n+1} f}x \biggr)- \operatorname{prox}_{\lambda_{n} f}(x)\biggr\| \\ \leq&\biggl\| \biggl(\frac{\lambda_{n}}{\lambda_{n+1}}x+ \biggl(1-\frac {\lambda _{n}}{\lambda_{n+1}} \biggr) \operatorname{prox}_{\lambda_{n+1} f}x \biggr)-x\biggr\| \\ =& \biggl\vert 1-\frac{\lambda_{n}}{\lambda_{n+1}} \biggr\vert \| x-\operatorname{prox}_{\lambda _{n+1} f}x \|, \end{aligned}$$
so if x lies in a bounded subset, the uniform asymptotical regularity follows.
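A concrete one-dimensional check (our toy example, not from the paper): for \(f(x)=|x|\) on \(H=\mathbb{R}\), the proximal operator has the well-known soft-thresholding closed form, and both property (4) and the bound displayed above can be verified numerically:

```python
import numpy as np

def prox_abs(lam, x):
    # prox of f(x) = |x|: the classical soft-thresholding formula
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

lam, nu, x = 0.5, 0.2, 2.0

# property (4): prox_{lam f} x = prox_{nu f}((nu/lam) x + (1 - nu/lam) prox_{lam f} x)
lhs = prox_abs(lam, x)
rhs = prox_abs(nu, (nu / lam) * x + (1 - nu / lam) * prox_abs(lam, x))
assert np.isclose(lhs, rhs)

# the bound used for (h1): |prox_{l2 f} x - prox_{l1 f} x| <= |1 - l1/l2| |x - prox_{l2 f} x|
l1, l2 = 0.5, 0.6
gap = abs(prox_abs(l2, x) - prox_abs(l1, x))
bound = abs(1 - l1 / l2) * abs(x - prox_abs(l2, x))
assert gap <= bound + 1e-9
```
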

In any case we have the following.

Remark 1.4

If \(C=\bigcap_{n\in\mathbb{N}}C_{n}\), where \(C_{n}\subset H\) are closed and convex for all \(n\in\mathbb{N}\), we can always suppose that \(C=\bigcap _{n\in\mathbb{N}}\operatorname{Fix}(W_{n})\), where \((W_{n})_{n\in\mathbb {N}}\) is a sequence of nonexpansive mappings satisfying (h1) and (h2). Indeed, starting from the sequence of nonexpansive mappings \(T_{n}=P_{C_{n}}\), we can always construct a sequence \((W_{n})_{n\in\mathbb{N}}\) such that \(C=\bigcap _{n\in \mathbb{N}}C_{n}=\bigcap_{n\in\mathbb{N}}\operatorname{Fix}(T_{n})=\bigcap _{n\in\mathbb {N}}\operatorname{Fix}(W_{n})\) and it satisfies (h1) and (h2) (see [11–14] for details).

Moreover, regarding the strongly monotone operator D we note that the sequence of operators \(B_{n}x:=(I-\mu_{n}D)x\) is a sequence of contractions when the sequence \((\mu_{n})_{n\in\mathbb{N}}\) lies in an opportune interval. Such an interval can be detected by the following lemma, proved by Kim and Xu.

Lemma 1.5

[15]

Let \(D:H\to H\) be σ-strongly monotone and L-lipschitzian. If \(\mu<\frac{2\sigma}{L^{2}}\), \(\rho=\frac {2\sigma-\mu L^{2}}{2}\), and \((\mu_{n})_{n\in\mathbb{N}}\subset(0,\mu ]\), then
$$\bigl\| (I-\mu_{n}D)x-(I-\mu_{n}D)y\bigr\| \leq(1-\mu_{n}\rho) \|x-y\|, $$
i.e. \((I-\mu_{n}D)\) is a \((1-\mu_{n}\rho)\)-contraction.
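Lemma 1.5 can be checked numerically on a simple linear example (the choices of D, σ, L below are ours): take \(Dx=Ax\) with A symmetric positive definite, so that D is σ-strongly monotone and L-lipschitzian for the extreme eigenvalues σ, L of A:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[4.0, 1.0], [1.0, 3.0]])      # D x := A x, symmetric positive definite
eigs = np.linalg.eigvalsh(A)
sigma, L = eigs[0], eigs[-1]                 # sigma-strongly monotone, L-lipschitzian

mu = 0.9 * 2 * sigma / L**2                  # any mu < 2 sigma / L^2
rho = (2 * sigma - mu * L**2) / 2            # as in Lemma 1.5

for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    lhs = np.linalg.norm((x - mu * A @ x) - (y - mu * A @ y))
    rhs = (1 - mu * rho) * np.linalg.norm(x - y)
    assert lhs <= rhs + 1e-12                # (I - mu D) is a (1 - mu rho)-contraction
```
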
In this paper we study some asymptotic behaviors of the sequence generated by iteration (1.8), supposing that there exists (finite or infinite)
$$\tau:=\lim_{n\to\infty}\frac{\alpha_{n}}{\mu_{n}}. $$
We will be able to show that (1.8) strongly converges to a solution of the variational inequality
$$\bigl\langle \tau(I-S)x+Dx,y-x \bigr\rangle \geq0,\quad \forall y\in F, $$
when \(\tau\in[0,+\infty)\), and to a special solution of
$$\bigl\langle (I-S)x,y-x \bigr\rangle \geq0, \quad\forall y\in F, $$
if \(\tau=+\infty\).
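To make the scheme and the case \(\tau\in[0,+\infty)\) concrete, here is a small numerical sketch of (1.8) with toy choices (ours, not the paper's): \(W_{n}=P_{C}\) for a closed ball C, S a coordinate swap (an isometry, hence nonexpansive), \(Dx=x\) (so \(\sigma=L=1\)), and coefficients with \(\tau=0\). The iterates approach the minimal-norm point of C, i.e. the solution of \(\langle Dx,y-x\rangle\geq0\) for all \(y\in C\):

```python
import numpy as np

c, r = np.array([2.0, 0.0]), 1.0             # C: closed ball of radius r centered at c

def proj_C(x):
    # metric projection P_C onto the ball
    d = x - c
    nd = np.linalg.norm(d)
    return x if nd <= r else c + (r / nd) * d

def S(x):
    # coordinate swap: an isometry, hence nonexpansive
    return np.array([x[1], x[0]])

x = np.array([0.0, 3.0])
for n in range(1, 20001):
    alpha = 1.0 / (n + 1) ** 2               # alpha_n
    mu = 1.0 / np.sqrt(n + 1)                # mu_n; tau = lim alpha_n/mu_n = 0
    # x_{n+1} = W_n(alpha_n S x_n + (1 - alpha_n)(I - mu_n D) x_n), with W_n = P_C, D = I
    x = proj_C(alpha * S(x) + (1 - alpha) * (x - mu * x))

print(x)  # approaches (1, 0), the minimal-norm point of C
```
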

Our research is not far from the research area studied by Moudafi and Maingé, also known as the hierarchical fixed point approach (see [16–19]).

2 Some asymptotic behaviors of the iterative scheme

To study the asymptotic behavior of our method
$$ x_{n+1}=W_{n} \bigl(\alpha_{n} Sx_{n}+(1-\alpha_{n}) (I-\mu_{n}D)x_{n} \bigr) $$
(2.1)
we suppose that there exists
$$\tau:=\lim_{n\to\infty}\frac{\alpha_{n}}{\mu_{n}}. $$
The method can be equivalently written as
$$x_{n+1}=W_{n}y_{n}, $$
where \(y_{n}:=\alpha_{n}Sx_{n}+(1-\alpha_{n})B_{n}x_{n}\) and \(B_{n}=(I-\mu_{n}D)\). We will use the following convenient notations:
  • We say that \(\zeta_{n}=o(\eta_{n})\) if \(\frac{\zeta_{n}}{\eta _{n}}\to0\) as \(n\to\infty\).

  • We say that \(\zeta_{n}=O(\eta_{n})\) if there exist \(K,N>0\) such that \(N\leq \vert \frac{\zeta_{n}}{\eta_{n}}\vert \leq K\) for every \(n\in\mathbb{N}\).

A central role in proving the convergence results is played by the boundedness of the sequence \((x_{n})_{n\in\mathbb{N}}\), and we want to highlight this role. An expected case occurs when S and \((W_{n})_{n\in\mathbb{N}}\) have common fixed points.

Proposition 2.1

Suppose that (2.1) satisfies Hypotheses ().

If \(\operatorname{Fix}(S)\cap F\neq\emptyset\) then \((x_{n})_{n\in \mathbb{N}}\) is bounded.

Proof

If \(z\in\operatorname{Fix}(S)\cap F\), then
$$\begin{aligned} \|x_{n+1}-z\| \leq& \bigl\| \alpha_{n}Sx_{n}+(1- \alpha_{n})B_{n}x_{n}-z\bigr\| \\ \leq& \alpha_{n}\|Sx_{n}-z\|+(1- \alpha_{n}) \|B_{n}x_{n}-B_{n}z\| +(1- \alpha_{n}) \|B_{n}z-z\| \\ \leq& \alpha_{n}\|x_{n}-z\|+(1-\alpha_{n}) (1- \mu_{n}\rho)\|x_{n}-z\| +(1-\alpha_{n})\mu_{n}\|Dz\| \\ \leq& \bigl(1-(1-\alpha_{n})\mu_{n}\rho \bigr) \|x_{n}-z\|+(1-\alpha _{n})\mu _{n}\rho \frac{\|Dz\|}{\rho}. \end{aligned}$$
(2.2)
Calling \(\beta_{n}:=(1-\alpha_{n})\mu_{n}\rho\) we have
$$\begin{aligned} \|x_{n+1}-z\| \leq (1-\beta_{n})\|x_{n}-z\|+ \beta_{n}\frac{\|Dz\| }{\rho} \leq\max \biggl\{ \|x_{n}-z\|, \frac{\|Dz\|}{\rho} \biggr\} . \end{aligned}$$
Since, by an inductive process, one can see that
$$\|x_{n}-z\|\leq \max \biggl\{ \|x_{0}-z\|,\frac{\|Dz\|}{\rho} \biggr\} , $$
the claim follows. □

Notice that, in this case, boundedness does not depend on any additional hypotheses on the sequences \((\alpha_{n})_{n\in\mathbb{N}}\) and \((\mu_{n})_{n\in\mathbb{N}}\) in \([0,1]\).

On the contrary, in the following proposition the boundedness of the sequence is guaranteed by the assumptions on the coefficients.

Proposition 2.2

Let us suppose that (2.1) satisfies Hypotheses (). Let \((\alpha_{n})_{n\in\mathbb{N}}\) be a sequence in \([0,1]\) and let \((\mu_{n})_{n\in\mathbb{N}}\) be a sequence in \((0,\mu)\). Assume that
  1. (B)

    either \(\alpha_{n}=O (\mu_{n} )\) or \(\alpha_{n}=o (\mu_{n} )\) (a sufficient condition is that there exists \(\lim_{n\to\infty} \frac{\alpha _{n}}{\mu _{n}}=\tau\in[0,+ \infty)\)).

     
Then \((x_{n})_{n\in\mathbb{N}}\) is bounded.

Proof

Let \(z \in F\). Then for every \(n\in\mathbb{N}\),
$$\begin{aligned} \|x_{n+1}-z\| \leq& \bigl\| \alpha_{n}Sx_{n}+(1- \alpha_{n})B_{n}x_{n}-z\bigr\| \\ \leq& \alpha_{n}\|Sx_{n}-Sz\|+\alpha_{n} \|Sz-z \|+(1-\alpha_{n})\| B_{n}x_{n}-B_{n}z \|+(1-\alpha_{n})\|B_{n}z-z\| \\ \leq& \alpha_{n}\|x_{n}-z\|+\alpha_{n}\|Sz-z \|+(1-\alpha_{n}) (1-\mu_{n}\rho )\| x_{n}-z\| +(1-\alpha_{n})\mu_{n}\|Dz\| \\ \leq& \bigl(1-(1-\alpha_{n})\mu_{n}\rho \bigr) \|x_{n}-z\|+\alpha_{n}\| Sz-z\| +(1-\alpha_{n}) \mu_{n}\rho\frac{\|Dz\|}{\rho}. \end{aligned}$$
(2.3)
Since (B) holds, there exist \(\gamma>0\) and \(N_{0}\) such that, for all \(n>N_{0}\), \(\alpha_{n}\leq\gamma(1-\alpha_{n})\mu_{n}\); hence
$$\begin{aligned} \|x_{n+1}-z\| \leq& \bigl(1-(1-\alpha_{n})\mu_{n} \rho \bigr)\|x_{n}-z\|+ \gamma (1-\alpha_{n})\mu_{n} \|Sz-z\|+(1-\alpha_{n})\mu_{n}\rho\frac{\|Dz\|}{\rho} \\ \leq& \bigl(1-(1-\alpha_{n})\mu_{n}\rho \bigr) \|x_{n}-z\|+ (1-\alpha_{n})\mu_{n}\rho \frac{\gamma\|Sz-z\|+\|Dz\|}{\rho}. \end{aligned}$$
Calling \(\beta_{n}:=(1-\alpha_{n})\mu_{n}\rho\) we have
$$\begin{aligned} \|x_{n+1}-z\| \leq& (1-\beta_{n})\|x_{n}-z\|+ \beta_{n}\frac{\gamma\| Sz-z\| +\|Dz\|}{\rho} \\ \leq&\max \biggl\{ \|x_{n}-z\|, \frac{\gamma\|Sz-z\|+\|Dz\|}{\rho } \biggr\} . \end{aligned}$$
Since, by an inductive process, one can see that
$$\|x_{n}-z\|\leq \max \biggl\{ \|x_{i}-z\|, \frac{\|Dz\|+\gamma\|Sz-z\|}{\rho}: i=0,\ldots ,N_{0} \biggr\} , $$
the claim follows. □
It is remarkable that, by boundedness, we can deduce the asymptotical regularity of the iterative sequence, i.e. that
$$\|x_{n+1}-x_{n}\|\to0, \quad\mbox{as } n\to\infty, $$
which is often a key step in proving convergence results when the mappings involved are continuous.

To prove it, we use Xu’s lemma.

Lemma 2.3

[20]

Assume \((a_{n})_{n \in\mathbb{N}}\) is a sequence of nonnegative numbers such that
$$ a_{n+1}\leq(1-\gamma_{n})a_{n}+ \delta_{n},\quad n\geq0, $$
where \((\gamma_{n})_{n}\) is a sequence in \((0,1)\) and \((\delta_{n})_{n}\) is a real sequence such that:
  1. (1)

    \(\sum_{n=1}^{\infty}\gamma_{n}=\infty\);

     
  2. (2)

    \(\limsup_{n\to\infty}\delta_{n}/\gamma_{n}\leq0\) or \(\sum_{n=1}^{\infty}|\delta_{n}|<\infty\).

     
Then \(\lim_{n\to\infty}a_{n}=0\).
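A quick numerical instance of the lemma (the sequences below are our choice): with \(\gamma_{n}=n^{-1/2}\), so that \(\sum_{n}\gamma_{n}=\infty\), and \(\delta_{n}=\gamma_{n}/n\), so that \(\delta_{n}/\gamma_{n}\to0\), the recursion indeed drives \(a_{n}\) to 0:

```python
a = 5.0
for n in range(2, 100001):
    gamma = n ** -0.5            # gamma_n in (0,1), sum gamma_n = infinity
    delta = gamma / n            # delta_n / gamma_n = 1/n -> 0
    a = (1 - gamma) * a + delta  # a_{n+1} <= (1 - gamma_n) a_n + delta_n (with equality)
print(a)  # tends to 0 as the number of steps grows
```
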

Proposition 2.4

Let Hypotheses () be satisfied. We suppose that \(\lim_{n\to\infty}\frac{\alpha _{n}}{\mu _{n}}=\tau\in[0,+\infty)\) and that:
  1. (H1)

    \(\sum_{n=1}^{\infty}\mu_{n}=\infty\) and \(|\mu _{n}-\mu_{n-1}|=o(\mu_{n})\);

     
  2. (H2)

    \(|\alpha_{n}-\alpha_{n-1}|=o(\mu_{n})\);

     
  3. (H3)

    \(\sup_{z\in B}\|W_{n}z-W_{n-1}z\|=o(\mu_{n})\), with \(B\subset H\) bounded.

     
Then \((x_{n})_{n\in\mathbb{N}}\) is asymptotically regular.

Remark 2.5

Note that, for \((W_{n})_{n\in\mathbb{N}}\) as in Example 1.3, hypothesis (H3) reduces to a hypothesis on \((\lambda_{n})_{n\in\mathbb{N}}\) since
$$\lim_{n\to\infty}\frac{\|W_{n+1}x-W_{n}x\|}{\mu_{n}}=\lim_{n\to \infty} \frac {|\lambda_{n+1}-\lambda_{n}|}{\mu_{n}}. $$

Proof of Proposition 2.4

First of all, from Proposition 2.2, \((x_{n})_{n\in\mathbb{N}}\) is bounded.

If we denote by \(y_{n}=\alpha_{n}Sx_{n}+(1-\alpha_{n})B_{n}x_{n}\) then
$$x_{n+1}-x_{n}=W_{n}y_{n}-W_{n-1}y_{n-1}=W_{n}y_{n}-W_{n}y_{n-1}+W_{n}y_{n-1}-W_{n-1}y_{n-1}, $$
so, passing to the norm and by using the nonexpansivity of \((W_{n})_{n\in \mathbb{N}}\),
$$\begin{aligned} \|x_{n+1}-x_{n}\|\leq\|y_{n}-y_{n-1} \|+\|W_{n}y_{n-1}-W_{n-1}y_{n-1}\|. \end{aligned}$$
(2.4)
Now let us observe that
$$\begin{aligned} y_{n}-y_{n-1} =&\alpha_{n}(Sx_{n}-Sx_{n-1})+( \alpha_{n}-\alpha _{n-1})Sx_{n-1}+(1- \alpha_{n}) (B_{n}x_{n}-B_{n-1}x_{n-1}) \\ &{}+(1-\alpha_{n})B_{n-1}x_{n-1}-(1- \alpha_{n-1})B_{n-1}x_{n-1} \\ =&\alpha_{n}(Sx_{n}-Sx_{n-1})+( \alpha_{n}-\alpha _{n-1}) (Sx_{n-1}-B_{n-1}x_{n-1}) \\ &{} +(1-\alpha_{n}) (B_{n}x_{n}-B_{n-1}x_{n-1}). \end{aligned}$$
Therefore replacing the last equality in (2.4) and by using the boundedness of \((x_{n})_{n\in\mathbb{N}}\), we obtain
$$\begin{aligned} \|x_{n+1}-x_{n}\| \leq& \alpha_{n} \|Sx_{n}-Sx_{n-1}\|+|\alpha _{n}- \alpha_{n-1}|O(1)+(1-\alpha_{n})\|B_{n}x_{n}-B_{n-1}x_{n-1} \| \\ &{}+\|W_{n}y_{n-1}-W_{n-1}y_{n-1}\| \\ \leq&\alpha_{n}\|x_{n}-x_{n-1}\|+| \alpha_{n}-\alpha _{n-1}|O(1)+(1-\alpha_{n}) \|B_{n}x_{n}-B_{n}x_{n-1}\| \\ &{}+(1-\alpha_{n})\|B_{n}x_{n-1}-B_{n-1}x_{n-1} \|+\| W_{n}y_{n-1}-W_{n-1}y_{n-1}\| \\ \leq&\alpha_{n}\|x_{n}-x_{n-1}\|+| \alpha_{n}-\alpha _{n-1}|O(1)+(1-\alpha_{n}) (1- \mu_{n}\rho)\|x_{n}-x_{n-1}\| \\ &{}+(1-\alpha_{n})|\mu_{n-1}-\mu_{n}| \|Dx_{n-1}\|+\| W_{n}y_{n-1}-W_{n-1}y_{n-1} \| \\ \leq& \bigl(1-(1-\alpha_{n})\rho\mu_{n} \bigr) \|x_{n}-x_{n-1}\|+\| W_{n}y_{n-1}-W_{n-1}y_{n-1} \| \\ &{}+ \bigl(|\alpha_{n}-\alpha_{n-1}|+(1- \alpha_{n})| \mu_{n-1}-\mu_{n}| \bigr)O(1). \end{aligned}$$
(2.5)
Denoting
$$\begin{aligned} &a_{n}:=\|x_{n}-x_{n-1}\|,\qquad \gamma_{n}:=(1- \alpha_{n})\rho\mu_{n}, \\ &\delta_{n}:=\|W_{n}y_{n-1}-W_{n-1}y_{n-1} \|+ \bigl(|\alpha_{n}-\alpha _{n-1}|+(1-\alpha_{n})| \mu_{n-1}-\mu_{n}| \bigr)O(1), \end{aligned}$$
(2.5) becomes
$$a_{n+1}\leq(1-\gamma_{n})a_{n}+ \delta_{n}. $$
Thus, our hypotheses (H1), (H2), and (H3), are enough to ensure, by Lemma 2.3, that \((x_{n})_{n\in\mathbb{N}}\) is asymptotically regular. □

Remark 2.6

By the previous proof, it is clear that the hypothesis \(\tau\in [0,+\infty)\) is needed only to ensure the boundedness of \((x_{n})_{n\in \mathbb{N}}\). So, more generally, boundedness together with (H1), (H2), and (H3) is enough to prove asymptotical regularity.

From now on we will suppose that \(\mu_{n}\to0\) as \(n\to\infty\); then, when \(\tau\in[0,+\infty)\), we also have \(\alpha_{n}=\frac{\alpha_{n}}{\mu_{n}}\mu_{n}\to\tau\cdot0=0\) as \(n\to\infty\).

Since we are searching for solutions of variational inequalities on fixed point sets, we give some sufficient conditions under which the set of weak limit points of \((x_{n})_{n\in\mathbb{N}}\) lies in F.

Proposition 2.7

Let Hypotheses () be satisfied. Let us suppose that \(\lim_{n\to\infty}\alpha_{n}= \lim_{n\to \infty}{\mu_{n}}=0\) and that \(\lim_{n\to\infty}\frac{\alpha_{n}}{\mu _{n}}=\tau\in[0,+\infty)\), and let \((x_{n})_{n\in\mathbb{N}}\) defined by (2.1) be asymptotically regular. Then \(\omega_{w}(x_{n})\subset F\).

Proof

The proof is based on Opial’s condition. The condition on τ gives the boundedness of our sequence by Proposition 2.2.

Let thus \(z\in\omega_{w}(x_{n})\) and let \((x_{n_{k}})_{k\in\mathbb{N}}\) be a subsequence weak convergent to z. If \(z\notin F\) then \(z\neq Wz\) and
$$\begin{aligned} \liminf_{k\to\infty} \|x_{n_{k}}-z\| < & \liminf _{k\to\infty} \| x_{n_{k}}-Wz\| \\ \leq& \liminf_{k\to\infty} \bigl[\|x_{n_{k}}-x_{n_{k}+1}\|+ \|x_{n_{k}+1}-Wz\| \bigr] \\ \leq& \bigl(\mbox{by asymptotical regularity of } (x_{n})_{n\in\mathbb{N}} \bigr) \\ \leq& \liminf_{k\to\infty} \bigl[\|W_{n_{k}}y_{n_{k}}-W_{n_{k}}z \|+\| W_{n_{k}}z-Wz\| \bigr] \\ \bigl(\mbox{by condition (h2) on }(W_{n})_{n\in\mathbb{N}} \bigr) \leq& \liminf_{k\to\infty} \|y_{n_{k}}-z\| \\ (\mbox{since }\alpha_{n}\to0) \leq& \liminf_{k\to\infty} (1-\alpha _{n_{k}})\|B_{n_{k}}x_{n_{k}}-z\| \\ =& \liminf_{k\to\infty} (1-\alpha_{n_{k}})\|x_{n_{k}}- \mu _{n_{k}}Dx_{n_{k}}-z\| \\ \leq& \liminf_{k\to\infty} \bigl[\|x_{n_{k}}-z\|+ \mu_{n_{k}}\|Dx_{n_{k}}\|\bigr]. \end{aligned}$$
Therefore, the boundedness of \((x_{n})_{n\in\mathbb{N}}\), along with the hypothesis \(\mu_{n}\to0\), produces the contradiction
$$\liminf_{k\to\infty} \|x_{n_{k}}-z\|< \liminf _{k\to\infty} \| x_{n_{k}}-Wz\| \leq\liminf_{k\to\infty} \|x_{n_{k}}-z\|. $$
 □

Now we are able to prove our first convergence result.

Theorem 2.8

Let Hypotheses () be satisfied. Let us suppose that \(\mu_{n}\to0\) and there exists
$$\lim_{n\to\infty}\frac{\alpha_{n}}{\mu_{n}}=\tau\in[0,+\infty). $$
Moreover, suppose that
  1. (H1)

    \(\sum_{n=1}^{\infty}\mu_{n}=\infty\) and \(|\mu _{n}-\mu_{n-1}|=o(\mu_{n})\);

     
  2. (H2)

    \(|\alpha_{n}-\alpha_{n-1}|=o(\mu_{n})\);

     
  3. (H3)

    \(\sup_{z\in B}\|W_{n}z-W_{n-1}z\|=o(\mu_{n})\), with \(B\subset H\) bounded.

     
Then \((x_{n})_{n\in\mathbb{N}}\) defined by (2.1) strongly converges in F to \(x^{*}\), that is, the unique solution of the variational inequality problem
$$ \bigl\langle \tau(I-S)x+Dx, y-x \bigr\rangle \geq0,\quad \forall y \in F. $$
(2.6)

Proof

Recall that, since S is nonexpansive, \((I-S)\) is \(\frac {1}{2}\)-inverse strongly monotone, so the operator \((\tau(I-S)+D)\) is a strongly monotone operator. Since F is closed and convex, problem (2.6) has a unique solution in F, which we indicate by \(x^{*}\).

The hypotheses on τ furnish, by Proposition 2.2, the boundedness of \((x_{n})_{n\in\mathbb{N}}\). Then, in view of hypotheses (H1), (H2), and (H3), we can apply Proposition 2.4 to obtain asymptotical regularity. This allows one to apply Proposition 2.7 to get \(\omega _{w}(x_{n})\subset F\). So, let \(x^{*}\in F\), the unique solution of (2.6); by using the convexity of the norm and the subdifferential inequality
$$\|x+y\|^{2}\leq\|x\|^{2}+2\langle y,x+y\rangle,\quad \forall x,y \in H, $$
we have, denoting again \(B_{n}=(I-\mu_{n}D)\),
$$\begin{aligned} \bigl\| x_{n+1}-x^{*}\bigr\| ^{2} \leq& \bigl\| \alpha_{n} \bigl(Sx_{n}-x^{*} \bigr)+(1-\alpha _{n}) \bigl(B_{n}x_{n}-x^{*} \bigr)\bigr\| ^{2} \\ =& \bigl\| \alpha_{n} \bigl(Sx_{n}-Sx^{*} \bigr)+ \alpha_{n} \bigl(Sx^{*}-x^{*} \bigr)+(1-\alpha _{n}) \bigl(B_{n}x_{n}-B_{n}x^{*} \bigr) \\ &{}+(1-\alpha_{n}) \bigl(B_{n}x^{*}-x^{*} \bigr) \bigr\| ^{2} \\ =& \bigl\| \alpha_{n} \bigl(Sx_{n}-Sx^{*} \bigr)+(1-\alpha _{n}) \bigl(B_{n}x_{n}-B_{n}x^{*} \bigr) \\ &{}- \bigl(\alpha _{n}(I-S)x^{*}+(1-\alpha_{n})\mu_{n}Dx^{*} \bigr)\bigr\| ^{2} \\ \leq& \alpha_{n}\bigl\| x_{n}-x^{*}\bigr\| ^{2}+(1- \alpha_{n}) (1-\mu_{n}\rho)\bigl\| x_{n}-x^{*} \bigr\| ^{2} \\ &{}-2 \bigl\langle \bigl(\alpha_{n}(I-S)x^{*}+(1- \alpha_{n}) \mu _{n}Dx^{*} \bigr),x_{n+1}-x^{*} \bigr\rangle \\ =& \bigl(1-(1-\alpha_{n})\mu_{n}\rho \bigr) \bigl\| x_{n}-x^{*}\bigr\| ^{2} \\ &{}-2(1-\alpha_{n})\mu_{n} \biggl\langle \frac{\alpha_{n}}{(1-\alpha _{n})\mu _{n}}(I-S)x^{*}+Dx^{*},x_{n+1}-x^{*} \biggr\rangle . \end{aligned}$$
(2.7)
Denoting by
$$\begin{aligned} & a_{n}=\bigl\| x_{n}-x^{*}\bigr\| ^{2},\qquad \gamma_{n}= (1-\alpha_{n})\mu_{n}\rho, \\ &\delta_{n}=-\frac{2}{\rho} \biggl\langle \frac{\alpha_{n}}{(1-\alpha_{n})\mu _{n}}(I-S)x^{*}+Dx^{*},x_{n+1}-x^{*} \biggr\rangle , \end{aligned}$$
(2.7) can be written \(a_{n+1}\leq(1-\gamma_{n})a_{n}+\gamma _{n}\delta _{n}\).

To invoke Xu’s Lemma 2.3, since \(\sum_{n}\gamma _{n}=\infty\) by (H1), we only need to prove that \(\limsup_{n\to\infty}\delta_{n}\leq0\).

There exists a subsequence \((x_{n_{k}})_{k\in\mathbb{N}}\) of \((x_{n})_{n\in \mathbb{N}}\) such that
$$\begin{aligned} \limsup_{n\to\infty}\delta_{n}&=\limsup_{n\to\infty} \biggl\langle \frac{\alpha_{n}}{(1-\alpha_{n})\mu _{n}}(I-S)x^{*}+Dx^{*},x^{*}-x_{n+1} \biggr\rangle \\ &=\lim_{k\to\infty} \biggl\langle \frac {\alpha_{n_{k}}}{(1-\alpha_{n_{k}})\mu _{n_{k}}}(I-S)x^{*}+Dx^{*},x^{*}-x_{{n_{k}}+1} \biggr\rangle . \end{aligned}$$
Since \((x_{n_{k}})_{k\in\mathbb{N}}\) is bounded, we can suppose that \((x_{n_{k}})_{k\in\mathbb{N}}\) weakly converges to p. Proposition 2.7 gives \(p\in F\). By using the asymptotical regularity of \((x_{n})_{n\in\mathbb{N}}\) we have
$$\begin{aligned} &\limsup_{n\to\infty} \biggl\langle \frac{\alpha_{n}}{(1-\alpha_{n})\mu _{n}}(I-S)x^{*}+Dx^{*},x^{*}-x_{n+1} \biggr\rangle \\ &\quad=\lim_{k\to\infty} \biggl\langle \frac{\alpha_{n_{k}}}{(1-\alpha _{n_{k}})\mu _{n_{k}}}(I-S)x^{*}+Dx^{*},x^{*}-x_{{n_{k}}+1} \biggr\rangle \\ &\quad=\lim_{k\to\infty} \biggl[ \biggl\langle \frac{\alpha_{n_{k}}}{(1-\alpha _{n_{k}})\mu _{n_{k}}}(I-S)x^{*}+Dx^{*},x^{*}-x_{n_{k}} \biggr\rangle \\ &\qquad{}+ \biggl\langle \frac{\alpha _{n_{k}}}{(1-\alpha_{n_{k}})\mu _{n_{k}}}(I-S)x^{*}+Dx^{*},x_{{n_{k}}}-x_{{n_{k}}+1} \biggr\rangle \biggr] \\ &\quad=\lim_{k\to\infty} \biggl\langle \frac{\alpha_{n_{k}}}{(1-\alpha _{n_{k}})\mu _{n_{k}}}(I-S)x^{*}+Dx^{*},x^{*}-x_{{n_{k}}} \biggr\rangle \\ &\quad= \bigl\langle \tau(I-S)x^{*}+Dx^{*},x^{*}-p \bigr\rangle \leq0 \quad\bigl(\mbox{since }x^{*} \mbox{ is the solution of (2.6)} \bigr). \end{aligned}$$
 □

Remark 2.9

Let us remark that, in the study of the behavior of \((x_{n})_{n\in \mathbb {N}}\) for \(\tau\in[0,+\infty)\), the set of fixed points of S never appears; all the properties, including the strong convergence, have been proved using only the hypotheses on the control sequences.

Let us now suppose \(\lim_{n\to\infty}\frac{\alpha _{n}}{\mu _{n}}=\tau=+\infty\). In this case, necessarily \(\mu_{n}\to0\) as \(n\to \infty\). Therefore either \(\alpha_{n}\to\alpha>0\) or \(\alpha_{n}\to0\) too and \(\mu_{n}=o(\alpha_{n})\).

By Proposition 2.1, if \(\operatorname{Fix}(S)\cap F\) is nonempty, the boundedness of \((x_{n})_{n\in\mathbb{N}}\) follows. On the contrary, if there are no common fixed points, the boundedness is not guaranteed as shown by the following counterexample.

Example 2.10

Let us consider \(H=\mathbb{R}\), \(x_{0}=1\), \(W_{n}x=Dx=x\), \(Sx=x+1\), \(\alpha_{n}=\frac{1}{\sqrt{n}}\), and \(\mu _{n}=\frac{1}{n}\).

Our method generates the sequence of positive numbers:
$$x_{n+1}=\frac{1}{\sqrt{n}}(x_{n}+1)+ \biggl(1- \frac{1}{\sqrt{n}} \biggr) \biggl(1-\frac{1}{n} \biggr)x_{n}. $$
If there existed \(M>0\) such that \(x_{n}< M\) for every n, then we would note that, for every k,
$$\begin{aligned} x_{k+1}-x_{k} =&\frac{x_{k}}{\sqrt{k}}+\frac{1}{\sqrt{k}}+ \biggl(1-\frac {1}{\sqrt{k}} \biggr) \biggl(1-\frac{1}{k} \biggr)x_{k}-x_{k} \\ =&\frac{1}{\sqrt{k}}-\frac{x_{k}}{k} \biggl(1-\frac{1}{\sqrt {k}} \biggr) \geq\frac{1}{\sqrt{k}}-\frac{M}{k} \\ =&\frac{1}{\sqrt{k}} \biggl(1-\frac{M}{\sqrt{k}} \biggr)\geq \frac {1}{2\sqrt{k}},\quad \mbox{for } k>4M^{2}, \end{aligned}$$
and, since \(\sum_{k}\frac{1}{\sqrt{k}}=\infty\), this contradicts the boundedness of \((x_{n})_{n\in\mathbb{N}}\).
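The divergence in Example 2.10 is easy to observe numerically (simulation code ours):

```python
# Example 2.10: W_n = D = I, S x = x + 1, alpha_n = 1/sqrt(n), mu_n = 1/n, x_0 = 1.
x = 1.0
for n in range(1, 200001):
    a, m = n ** -0.5, 1.0 / n
    x = a * (x + 1) + (1 - a) * (1 - m) * x   # x_{n+1} from the scheme
print(x)  # grows without bound, on the order of sqrt(n)
```
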
Nevertheless, we explicitly note that if \(W_{n}=P_{C}\) and there exist solutions of the variational inequality problem
$$\bigl\langle (I-S)x,y-x \bigr\rangle \geq0,\quad \forall y\in C, $$
then the boundedness is ensured even if \(F\cap\operatorname {Fix}(S)=\emptyset\). This is shown in the following proposition.

Proposition 2.11

Let C be a closed and convex subset of H. Let us suppose that the variational inequality problem
$$\bigl\langle (I-S)x,y-x \bigr\rangle \geq0, \quad\forall y\in C, $$
has at least one solution \(x^{*}\). Then the sequence defined by
$$x_{n+1}=P_{C} \bigl(\alpha_{n}Sx_{n}+(1- \alpha_{n})B_{n}x_{n} \bigr) $$
is bounded.

Proof

We know that, for all \(\eta\in(0,1]\), we have
$$ x^{*}=P_{C} \bigl(\eta Sx^{*}+(1-\eta)x^{*} \bigr). $$
(2.8)
Taking \(W_{n}=P_{C}\), we have
$$\begin{aligned} \bigl\| x_{n+1}-x^{*}\bigr\| \leq& \bigl\| P_{C} \bigl(\alpha_{n}Sx_{n}+(1- \alpha _{n})B_{n}x_{n} \bigr)-P_{C} \bigl( \alpha _{n}Sx^{*}+(1-\alpha_{n})B_{n}x^{*} \bigr) \bigr\| \\ &{}+\bigl\| P_{C} \bigl(\alpha_{n}Sx^{*}+(1-\alpha_{n})B_{n}x^{*} \bigr)-x^{*}\bigr\| \quad \bigl(\mbox{as in Proposition 2.1 in (2.8)} \bigr) \\ \leq& \bigl(1-(1-\alpha_{n})\mu_{n}\rho \bigr) \bigl\| x_{n}-x^{*}\bigr\| \\ &{}+\bigl\| P_{C} \bigl(\alpha _{n}Sx^{*}+(1-\alpha_{n})B_{n}x^{*} \bigr)-x^{*}\bigr\| \quad \bigl(\mbox{taking }\eta=\alpha_{n}\mbox{ in (2.8)} \bigr) \\ \leq& \bigl(1-(1-\alpha_{n})\mu_{n}\rho \bigr) \bigl\| x_{n}-x^{*}\bigr\| \\ &{}+\bigl\| P_{C} \bigl(\alpha _{n}Sx^{*}+(1-\alpha_{n})B_{n}x^{*} \bigr)-P_{C} \bigl(\alpha_{n} Sx^{*}+(1-\alpha_{n})x^{*} \bigr)\bigr\| \\ \leq& \bigl(1-(1-\alpha_{n})\mu_{n}\rho \bigr) \bigl\| x_{n}-x^{*}\bigr\| +(1-\alpha_{n})\mu_{n}\rho \frac {\|Dx^{*}\|}{\rho}. \end{aligned}$$
So the boundedness follows as in Proposition 2.1. □
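Proposition 2.11 can be illustrated by a minimal one-dimensional sketch. All concrete choices below are hypothetical illustrations, not taken from the paper: \(C=[0,\infty)\) (so \(P_{C}=\max\{\cdot,0\}\)), \(Sx=\frac{x}{2}+1\) (nonexpansive), \(D=I\), \(\alpha_{n}=n^{-1/2}\), \(\mu_{n}=n^{-1}\). Here the variational inequality \(\langle(I-S)x,y-x\rangle\geq0\) on C is solved by \(x^{*}=2\), and the projected iterates remain bounded, as the proposition predicts:

```python
def run(x0=50.0, N=5000):
    """Projected iteration x_{n+1} = P_C(a_n * S(x_n) + (1 - a_n)(1 - m_n) x_n)
    on C = [0, inf), with S(x) = x/2 + 1 and D = I (so (I - m_n D)x = (1 - m_n)x),
    a_n = 1/sqrt(n), m_n = 1/n.  Returns the last iterate and the running maximum."""
    x, top = x0, x0
    for n in range(1, N):
        a, m = n ** -0.5, 1.0 / n
        x = max(0.0, a * (x / 2.0 + 1.0) + (1.0 - a) * (1.0 - m) * x)
        top = max(top, x)
    return x, top
```

Starting from \(x_{0}=50\), the iterates never exceed the initial value and drift toward the VI solution \(x^{*}=2\).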

Therefore it is meaningful to prove convergence results if \(\operatorname{Fix}(S)\cap F\neq\emptyset\).

Theorem 2.12

Let Hypotheses () be satisfied. Let us suppose that
$$\lim_{n\to\infty} \mu_{n}= 0,\qquad \lim _{n\to\infty} \alpha_{n}=\alpha \in[0,1),\qquad \lim _{n\to\infty} \frac{\alpha_{n}}{\mu_{n}}=\tau=+\infty, $$
and \(\operatorname{Fix}(S)\cap F\neq\emptyset\). Moreover, suppose that:
  1. (H1s)

    \(\sum_{n=1}^{\infty}\mu_{n}=\infty\) and \(|\mu _{n}-\mu_{n-1}|=o(\alpha_{n}\mu_{n})\);

     
  2. (H2s)

    \(|\alpha_{n}-\alpha_{n-1}|=o(\alpha_{n}\mu_{n})\);

     
  3. (H3s)

    \(\sup_{z\in B}\|W_{n}z-W_{n-1}z\|=o(\alpha_{n}\mu_{n})\), with \(B\subset H\) bounded.

     
  4. (H4)

    \(\vert \frac{1}{\alpha_{n}}-\frac {1}{\alpha _{n-1}}\vert =O(\mu_{n})\).

     
(Note that (H1s), (H2s), (H3s) are stronger than (H1), (H2), (H3) of Theorem  2.8.)
Then \((x_{n})_{n\in\mathbb{N}}\) defined by (2.1) strongly converges to \(\bar{x}\in F\cap\operatorname{Fix}(S)\), that is, the unique solution of the variational inequality problem
$$ \langle Dx, y-x\rangle\geq0, \quad\forall y\in F\cap \operatorname{Fix}(S). $$
(2.9)

Remark 2.13

Note that, if \(\alpha_{n}\to\alpha>0\), the requirements (H1s), (H2s), (H3s) reduce to (H1), (H2), (H3).

Proof

If \(\operatorname{Fix}(S)\cap F\neq\emptyset\), \((x_{n})_{n\in\mathbb {N}}\) is bounded by Proposition 2.1. Since (H1s)-(H2s)-(H3s) imply (H1)-(H2)-(H3), by using Proposition 2.4, we see that \((x_{n})_{n\in\mathbb{N}}\) is asymptotically regular. Let us divide the proof into steps.

Step 1. \(\|x_{n+1}-x_{n}\|=o(\alpha_{n})\).

Proof of Step 1

We need to prove that
$$\lim_{n\to\infty}\frac{\|x_{n+1}-x_{n}\|}{\alpha_{n}}=0. $$
If \(\alpha_{n}\to\alpha>0\) we do not need to prove anything; so let \(\alpha=0\). Dividing by \(\alpha_{n}\) in (2.5) of Proposition 2.4 we have
$$\begin{aligned} \frac{\|x_{n+1}-x_{n}\|}{\alpha_{n}} \leq& \bigl(1-(1-\alpha_{n})\rho\mu _{n} \bigr)\frac{\| x_{n}-x_{n-1}\|}{\alpha_{n}}+\frac{\|W_{n}y_{n-1}-W_{n-1}y_{n-1}\|}{\alpha _{n}} \\ &{}+\frac{(|\alpha_{n}-\alpha_{n-1}|+(1-\alpha_{n})|\mu_{n-1}-\mu _{n}|)}{\alpha_{n}}O(1) \\ =& \bigl(1-(1-\alpha_{n})\rho\mu_{n} \bigr) \frac{\|x_{n}-x_{n-1}\|}{\alpha_{n}}\pm \bigl(1-(1-\alpha_{n})\rho\mu_{n} \bigr) \frac{\|x_{n}-x_{n-1}\|}{\alpha_{n-1}} \\ &{}+\frac{\|W_{n}y_{n-1}-W_{n-1}y_{n-1}\|}{\alpha_{n}}+\frac{(|\alpha _{n}-\alpha_{n-1}|+(1-\alpha_{n})|\mu_{n-1}-\mu_{n}|)}{\alpha_{n}}O(1) \\ \leq& \bigl(1-(1-\alpha_{n})\rho\mu_{n} \bigr) \frac{\|x_{n}-x_{n-1}\|}{\alpha _{n-1}}+ \biggl\vert \frac{1}{\alpha_{n}}-\frac{1}{\alpha_{n-1}} \biggr\vert \| x_{n}-x_{n-1}\| \\ &{}+\frac{\|W_{n}y_{n-1}-W_{n-1}y_{n-1}\|}{\alpha_{n}}+\frac{(|\alpha _{n}-\alpha_{n-1}|+(1-\alpha_{n})|\mu_{n-1}-\mu_{n}|)}{\alpha_{n}}O(1). \end{aligned}$$
The boundedness of \((x_{n})_{n\in\mathbb{N}}\) and (H4) give
$$\begin{aligned} \frac{\|x_{n}-x_{n+1}\|}{\alpha_{n}} \leq& \bigl(1-(1-\alpha_{n})\rho\mu _{n} \bigr)\frac {\|x_{n}-x_{n-1}\|}{\alpha_{n-1}}+O(\mu_{n})\|x_{n-1}-x_{n}\| \\ &{}+\frac{\|W_{n}y_{n-1}-W_{n-1}y_{n-1}\|}{\alpha_{n}}+\frac{(|\alpha _{n}-\alpha_{n-1}|+|\mu_{n-1}-\mu_{n}|)}{\alpha_{n}}O(1), \end{aligned}$$
so denoting
$$\begin{aligned}[b] &a_{n}=\frac{\|x_{n}-x_{n-1}\|}{\alpha_{n-1}}, \qquad\gamma_{n}= (1-\alpha _{n})\mu _{n}\rho, \\ &\delta_{n}= \biggl[O(\mu_{n})\|x_{n-1}-x_{n} \|+\frac{\| W_{n}y_{n-1}-W_{n-1}y_{n-1}\|}{\alpha_{n}}+\frac{(|\alpha_{n}-\alpha _{n-1}|+|\mu_{n-1}-\mu_{n}|)}{\alpha_{n}} \biggr]O(1), \end{aligned} $$
our inequality can be written as \(a_{n+1}\leq(1-\gamma_{n})a_{n}+\delta _{n}\). In view of (H1s), (H2s), and (H3s), we can apply the Xu Lemma 2.3 to conclude that \(\|x_{n+1}-x_{n}\|=o(\alpha_{n})\). □

Step 2. \(\omega_{w}(x_{n})\subset F\cap\operatorname{Fix}(S)\).

Proof of Step 2

Let \(z\in F\cap\operatorname{Fix}(S)\); then by the boundedness and the subdifferential inequality
$$\begin{aligned} \|x_{n+1}-z\|^{2} \leq& \bigl\| \alpha_{n}(Sx_{n}-z)+(1- \alpha_{n}) (B_{n}x_{n}-z)\bigr\| ^{2} \\ \leq&\bigl\| \alpha_{n}(Sx_{n}-z)+(1-\alpha_{n}) (x_{n}-z)\bigr\| ^{2}-2\mu_{n}\langle Dx_{n},x_{n+1}-z\rangle \\ \leq& \alpha_{n}\|Sx_{n}-z\|^{2}+(1- \alpha_{n})\|x_{n}-z\|^{2}-\alpha _{n}(1- \alpha _{n})\|Sx_{n}-x_{n}\|^{2} \\ &{} +2\mu_{n}\langle Dx_{n},z-x_{n+1}\rangle \\ \leq& \|x_{n}-z\|^{2}-\alpha_{n}(1- \alpha_{n})\|Sx_{n}-x_{n}\|^{2}+2 \mu_{n}O(1), \end{aligned}$$
we have
$$\begin{aligned} \alpha_{n}(1-\alpha_{n})\|Sx_{n}-x_{n} \|^{2} \leq&\|x_{n}-z\|^{2}-\|x_{n+1}-z\| ^{2}+2\mu_{n}O(1) \\ \leq&\|x_{n}-x_{n+1}\|O(1)+2\mu_{n}O(1). \end{aligned}$$
Dividing by \(\alpha_{n}\) we obtain
$$\begin{aligned} (1-\alpha_{n})\|Sx_{n}-x_{n}\|^{2} \leq& \frac{\|x_{n}-x_{n+1}\|}{\alpha _{n}}O(1)+2\frac{\mu_{n}}{\alpha_{n}}O(1). \end{aligned}$$
Since \(\tau=+\infty\), by using Step 1 we obtain \(\|x_{n}-Sx_{n}\|\to0\) as \(n\to \infty\), and the demiclosedness principle for nonexpansive mappings guarantees that \(\omega_{w}(x_{n})\subset\operatorname{Fix}(S)\). By Opial’s condition, if \(z\in\omega_{w}(x_{n})\subset\operatorname{Fix}(S)\), \((x_{n_{k}})_{k\in\mathbb{N}}\) weakly converges to z, and \(z\notin F\), then
$$\begin{aligned} \liminf_{k\to\infty} \|x_{n_{k}}-z\| < & \liminf _{k\to\infty} \| x_{n_{k}}-Wz\| \\ \leq& \liminf_{k\to\infty} \bigl[\|x_{n_{k}}-x_{n_{k}+1}\|+ \|x_{n_{k}+1}-Wz\| \bigr] \\ \leq& \liminf_{k\to\infty} \bigl[\|x_{n_{k}}-x_{n_{k}+1}\|+\| W_{n_{k}}y_{n_{k}}-W_{n_{k}}z\|+\|W_{n_{k}}z-Wz\|\bigr] \\ \leq& \liminf_{k\to\infty} \bigl[\|x_{n_{k}}-x_{n_{k}+1}\|+ \|y_{n_{k}}-z\|+\| W_{n_{k}}z-Wz\|\bigr] \\ \leq& \liminf_{k\to\infty} \bigl[\|x_{n_{k}}-x_{n_{k}+1} \|+\alpha_{n_{k}}\| x_{n_{k}}-z\| \\ &{}+(1-\alpha_{n_{k}})\|B_{n_{k}}x_{n_{k}}-z\|+ \|W_{n_{k}}z-Wz\| \bigr] \\ \leq& \liminf_{k\to\infty} \bigl[\|x_{n_{k}}-x_{n_{k}+1} \|+\|x_{n_{k}}-z\| \\ &{}+(1-\alpha_{n_{k}})\mu_{n_{k}}\|Dx_{n_{k}}\|+ \|W_{n_{k}}z-Wz\| \bigr] \\ \leq& \liminf_{k\to\infty} \|x_{n_{k}}-z\|, \end{aligned}$$
which is absurd. So we have \(\omega_{w}(x_{n})\subset F\cap\operatorname{Fix}(S)\). □

Finally we conclude our proof, showing the convergence of the sequence.

Step 3. \((x_{n})_{n\in\mathbb{N}}\) strongly converges to \(\bar{x}\) satisfying (2.9).

Proof of Step 3

Let \(\bar{x}\) be the unique solution of the variational inequality problem (2.9). Since \(\bar{x}\in F\cap\operatorname{Fix}(S)\), we have
$$\begin{aligned} \|x_{n+1}-\bar{x}\|^{2} \leq& \bigl\| \alpha_{n}(Sx_{n}- \bar{x})+(1-\alpha _{n}) (B_{n}x_{n}-\bar{x}) \bigr\| ^{2} \\ =& \bigl\| \alpha_{n}(Sx_{n}-\bar{x})+(1-\alpha_{n}) (B_{n}x_{n}-B_{n}\bar {x})+(1-\alpha _{n}) (B_{n}\bar{x}-\bar{x})\bigr\| ^{2} \\ =& \bigl\| \alpha_{n}(Sx_{n}-\bar{x})+(1-\alpha_{n}) (B_{n}x_{n}-B_{n}\bar {x})-(1-\alpha _{n})\mu_{n}D\bar{x}\bigr\| ^{2} \\ \leq& \alpha_{n}\|x_{n}-\bar{x}\|^{2}+(1- \alpha_{n}) (1-\mu_{n}\rho)\| x_{n}-\bar {x} \|^{2}-2 \bigl\langle (1-\alpha_{n})\mu_{n}D \bar{x},x_{n+1}-\bar{x} \bigr\rangle \\ =& \bigl(1-(1-\alpha_{n})\mu_{n}\rho \bigr) \|x_{n}-\bar{x}\|^{2}-2(1-\alpha_{n})\mu _{n}\langle D\bar{x},x_{n+1}-\bar{x}\rangle. \end{aligned}$$
Denoting
$$\begin{aligned} a_{n}=\|x_{n}-\bar{x}\|^{2},\qquad \gamma_{n}= (1-\alpha_{n})\mu_{n}\rho, \qquad\delta _{n}=\langle D\bar{x},\bar{x}-x_{n+1}\rangle, \end{aligned}$$
our inequality can be written as
$$a_{n+1}\leq(1-\gamma_{n})a_{n}+\frac{2}{\rho} \gamma_{n}\delta_{n}. $$
To invoke the Xu Lemma 2.3 we need to prove that \(\limsup_{n\to\infty}\delta_{n}\leq0\).
There exists a subsequence \((x_{n_{k}})_{k\in\mathbb{N}}\) of \((x_{n})_{n\in \mathbb{N}}\) such that
$$\limsup_{n\to\infty}\langle D\bar{x},\bar{x}-x_{n+1}\rangle= \lim_{k\to \infty}\langle D\bar{x},\bar{x}-x_{{n_{k}}+1}\rangle. $$
Since \((x_{n_{k}})_{k\in\mathbb{N}}\) is bounded, we may suppose that \((x_{n_{k}})_{k\in\mathbb{N}}\) weakly converges to p. Step 2 guarantees that \(p\in F\cap\operatorname{Fix}(S)\). By using the asymptotical regularity of \((x_{n_{k}})_{k\in\mathbb{N}}\) we have
$$\begin{aligned} \limsup_{n\to\infty}\langle D\bar{x},\bar{x}-x_{n+1} \rangle &=\lim_{k\to\infty}\langle D\bar{x},\bar{x}-x_{{n_{k}}+1} \rangle \\ &=\lim_{k\to\infty} \bigl[\langle D\bar{x},\bar{x}-x_{n_{k}} \rangle +\langle D\bar{x},x_{{n_{k}}}-x_{{n_{k}}+1}\rangle \bigr]=\lim _{k\to\infty}\langle D\bar {x},\bar{x}-x_{{n_{k}}}\rangle \\ &=\langle D\bar{x},\bar{x}-p\rangle\leq0. \end{aligned}$$
 □

 □

Theorem 2.14

Let Hypotheses () be satisfied. Let us suppose that
$$\lim_{n\to\infty}\mu_{n}=\lim_{n\to\infty} \alpha_{n}=0 \quad\textit{and}\quad \tau =\lim_{n\to\infty} \frac{\alpha_{n}}{\mu_{n}}=+\infty. $$
Let us suppose that \((x_{n})_{n\in\mathbb{N}}\) is bounded. Moreover, suppose that
  1. (H1s)

    \(\sum_{n=1}^{\infty}\mu_{n}=\infty\) and \(|\mu _{n}-\mu_{n-1}|=o(\alpha_{n}\mu_{n})\);

     
  2. (H2s)

    \(|\alpha_{n}-\alpha_{n-1}|=o(\alpha_{n}\mu_{n})\);

     
  3. (H3s)

    \(\sup_{z\in B}\|W_{n}z-W_{n-1}z\|=o(\alpha_{n}\mu_{n})\), with \(B\subset H\) bounded;

     
  4. (H4)

    \(\vert \frac{1}{\alpha_{n}}-\frac {1}{\alpha _{n-1}}\vert =O(\mu_{n})\).

     
Let \(\bar{\Sigma}\) be the set of solutions of the variational inequality problem
$$ \bigl\langle (I-S)x, y-x \bigr\rangle \geq0,\quad \forall y\in F, $$
(2.10)
and let us suppose that \(\bar{\Sigma}\neq\emptyset\).
Then \((x_{n})_{n\in\mathbb{N}}\) defined by (2.1) strongly converges to \(\tilde{x}\), that is, the unique solution of the variational inequality problem
$$ \langle Dx, y-x\rangle\geq0,\quad \forall y\in\bar{\Sigma}. $$
(2.11)

Proof

Since \(\bar{\Sigma}\) coincides with the set of fixed points of the nonexpansive mapping \(P_{F}S\), it is closed and convex. So (2.11) has a unique solution.

Let us note that (H1s)-(H2s)-(H3s) imply (H1)-(H2)-(H3); hence, by using Proposition 2.4, \((x_{n})_{n\in\mathbb{N}}\) is asymptotically regular. We divide the proof into steps.

Step 1. \(\|x_{n+1}-x_{n}\|=o(\alpha_{n})\).

Proof

As for Step 1 of Theorem 2.12. □

Step 2. \(\omega_{w}(x_{n})\subset\bar{\Sigma}\).

Proof of Step 2

Denoting \(y_{n}=\alpha_{n}Sx_{n}+(1-\alpha_{n})B_{n}x_{n}\), we have
$$\begin{aligned} x_{n}-y_{n} =&x_{n}- \alpha_{n}Sx_{n}-(1-\alpha_{n}) (x_{n}- \mu_{n}Dx_{n}) \\ =& x_{n}-\alpha_{n}Sx_{n}-(1- \alpha_{n})x_{n}+(1-\alpha_{n})\mu _{n}Dx_{n} \\ =& \alpha_{n}(I-S)x_{n}+(1-\alpha_{n}) \mu_{n}Dx_{n}. \end{aligned}$$
(2.12)
The hypotheses \(\alpha_{n}\to0\) and \(\mu_{n}\to0\) allow one to conclude that \(\|x_{n}-y_{n}\|\to0\). Consequently,
$$\|y_{n}-W_{n}y_{n}\|\leq\|y_{n}-x_{n} \|+\|x_{n}-W_{n}y_{n}\|=\|y_{n}-x_{n} \|+\| x_{n}-x_{n+1}\| \to0, $$
as \(n\to\infty\). Moreover,
$$\begin{aligned} x_{n}-x_{n+1} =&x_{n}-W_{n}y_{n}=(x_{n}-y_{n})+(y_{n}-W_{n}y_{n}) \\ =&\alpha_{n}(I-S)x_{n}+(1-\alpha_{n}) (x_{n}-B_{n}x_{n})+(I-W_{n})y_{n} \\ =&\alpha_{n}(I-S)x_{n}+(1-\alpha_{n}) \mu_{n}Dx_{n}+(I-W_{n})y_{n}. \end{aligned}$$
Dividing by \(\alpha_{n}\) we have
$$w_{n}:=\frac{x_{n}-x_{n+1}}{\alpha_{n}}=(I-S)x_{n}+\frac{(1-\alpha_{n})\mu _{n}}{\alpha_{n}}Dx_{n}+ \frac{1}{\alpha_{n}}(I-W_{n})y_{n}. $$
For all \(z\in F\),
$$\begin{aligned} \langle w_{n},x_{n}-z\rangle =& \bigl\langle (I-S)x_{n},x_{n}-z \bigr\rangle +\frac {(1-\alpha _{n})\mu_{n}}{\alpha_{n}}\langle Dx_{n},x_{n}-z\rangle \\ &{}+\frac{1}{\alpha_{n}} \bigl\langle (I-W_{n})y_{n},x_{n}-z \bigr\rangle \quad \bigl(\mbox{by monotonicity of }(I-S) \bigr) \\ \geq& \bigl\langle (I-S)z,x_{n}-z \bigr\rangle +\frac{(1-\alpha_{n})\mu_{n}}{\alpha _{n}} \langle Dx_{n},x_{n}-z\rangle \\ &{}+\frac{1}{\alpha_{n}} \bigl\langle (I-W_{n})y_{n},x_{n}-y_{n} \bigr\rangle +\frac {1}{\alpha _{n}} \bigl\langle (I-W_{n})y_{n},y_{n}-z \bigr\rangle . \end{aligned}$$
Since \(z\in F\), \(z=W_{n}z\) for all \(n\in\mathbb{N}\), and \((I-W_{n})\) is monotone:
$$\begin{aligned} \langle w_{n},x_{n}-z\rangle \geq& \bigl\langle (I-S)z,x_{n}-z \bigr\rangle +\frac {(1-\alpha _{n})\mu_{n}}{\alpha_{n}}\langle Dx_{n},x_{n}-z\rangle \\ &{}+\frac{1}{\alpha_{n}} \bigl\langle (I-W_{n})y_{n},x_{n}-y_{n} \bigr\rangle +\frac {1}{\alpha _{n}} \bigl\langle (I-W_{n})y_{n}+(I-W_{n})z,y_{n}-z \bigr\rangle \\ \geq& \bigl\langle (I-S)z,x_{n}-z \bigr\rangle +\frac{(1-\alpha_{n})\mu_{n}}{\alpha _{n}} \langle Dx_{n},x_{n}-z\rangle +\frac{1}{\alpha_{n}} \bigl\langle (I-W_{n})y_{n},x_{n}-y_{n} \bigr\rangle . \end{aligned}$$
By using (2.12)
$$\begin{aligned} \langle w_{n},x_{n}-z\rangle \geq& \bigl\langle (I-S)z,x_{n}-z \bigr\rangle +\frac {(1-\alpha_{n})\mu_{n}}{\alpha_{n}}\langle Dx_{n},x_{n}-z\rangle \\ &{}+ \bigl\langle (I-W_{n})y_{n},(I-S)x_{n} \bigr\rangle +\frac{(1-\alpha_{n})\mu _{n}}{\alpha _{n}} \bigl\langle (I-W_{n})y_{n},Dx_{n} \bigr\rangle . \end{aligned}$$
Let us denote by \((x_{n_{k}})_{k\in\mathbb{N}}\) a subsequence weakly converging to p; by the same proof as Proposition 2.7 one can see that the boundedness of \((x_{n})\), combined with the assumptions \(\mu_{n}\to0\) and \(\alpha_{n}\to0\), is enough to guarantee that \(p\in F\). We have
$$\begin{aligned} \langle w_{n_{k}},x_{n}-z\rangle \geq& \bigl\langle (I-S)z,x_{n_{k}}-z \bigr\rangle +\frac{(1-\alpha_{n_{k}})\mu _{n_{k}}}{\alpha_{n_{k}}}\langle Dx_{n_{k}},x_{n_{k}}-z\rangle \\ &{}+ \bigl\langle (I-W_{n_{k}})y_{n_{k}},(I-S)x_{n_{k}} \bigr\rangle +\frac{(1-\alpha _{n_{k}})\mu_{n_{k}}}{\alpha_{n_{k}}} \bigl\langle (I-W_{n_{k}})y_{n_{k}},Dx_{n_{k}} \bigr\rangle . \end{aligned}$$
Letting \(k\to\infty\), since \(w_{n}\to0\) by Step 1, \(\|(I-W_{n})y_{n}\|\to0\) and \(\tau=+\infty\), we have
$$0\geq \bigl\langle (I-S)z,p-z \bigr\rangle , \quad\forall z\in F. $$
If we replace z by \(p+\eta(z-p)\), \(\eta\in(0,1)\), we have
$$\bigl\langle (I-S) \bigl(p+\eta(z-p) \bigr),p-z \bigr\rangle \leq0. $$
Letting \(\eta\to0\), finally,
$$\bigl\langle (I-S)p,p-z \bigr\rangle \leq0,\quad \forall z\in F, $$
i.e. the claim follows. □

Step 3. Convergence of the sequence.

Proof of Step 3

Let \(\tilde{x}\) be the unique solution of the variational inequality problem (2.11). As in Theorem 2.8 we have
$$\begin{aligned} \|x_{n+1}-\tilde{x}\|^{2} \leq& \bigl(1-(1- \alpha_{n})\mu_{n}\rho \bigr)\|x_{n}-\tilde {x}\| ^{2} \\ &{}-2(1-\alpha_{n})\mu_{n} \biggl\langle \frac{\alpha_{n}}{(1-\alpha_{n})\mu _{n}}(I-S) \tilde{x}+D\tilde{x},x_{n+1}-\tilde{x} \biggr\rangle . \end{aligned}$$
Denoting
$$\begin{aligned} & a_{n}=\|x_{n}-\tilde{x}\|^{2},\qquad \gamma_{n}= (1-\alpha_{n})\mu_{n}\rho, \\ &\delta_{n}=\frac{2}{\rho} \biggl\langle \frac{\alpha_{n}}{(1-\alpha_{n})\mu _{n}}(I-S) \tilde{x}+D\tilde{x},\tilde{x}-x_{n+1} \biggr\rangle , \end{aligned}$$
our inequality can be written as
$$a_{n+1}\leq(1-\gamma_{n})a_{n}+\frac{2}{\rho} \gamma_{n}\delta_{n}. $$
To invoke the Xu Lemma 2.3 we need to prove that \(\limsup_{n\to\infty}\delta_{n}\leq0\).
There exists a subsequence \((x_{n_{k}})_{k\in\mathbb{N}}\) of \((x_{n})_{n\in \mathbb{N}}\) such that
$$\limsup_{n\to\infty} \biggl\langle \frac{\alpha_{n}}{(1-\alpha_{n})\mu _{n}}(I-S)\tilde{x}+D \tilde{x},\tilde{x}-x_{n+1} \biggr\rangle =\lim_{k\to \infty} \biggl\langle \frac{\alpha_{n_{k}}}{(1-\alpha_{n_{k}})\mu_{n_{k}}}(I-S) \tilde {x}+D\tilde{x},\tilde{x}-x_{{n_{k}}+1} \biggr\rangle . $$
Since \((x_{n_{k}})_{k\in\mathbb{N}}\) is bounded, we may suppose that \((x_{n_{k}})_{k\in\mathbb{N}}\) weakly converges to p. We know, by Step 2, that \(p\in\bar{\Sigma}\subset F\). By using the asymptotical regularity of \((x_{n})_{n\in\mathbb{N}}\) we have
$$\begin{aligned} &\limsup_{n\to\infty} \biggl\langle \frac{\alpha_{n}}{(1-\alpha_{n})\mu _{n}}(I-S) \tilde{x}+D\tilde{x},\tilde{x}-x_{n+1} \biggr\rangle \\ &\quad=\lim_{k\to \infty} \biggl\langle \frac{\alpha_{n_{k}}}{(1-\alpha_{n_{k}})\mu_{n_{k}}}(I-S) \tilde {x}+D\tilde{x}, \tilde{x}-x_{{n_{k}}+1} \biggr\rangle \\ &\quad=\lim_{k\to\infty} \biggl[ \biggl\langle \frac{\alpha_{n_{k}}}{(1-\alpha _{n_{k}})\mu _{n_{k}}}(I-S) \tilde{x}+D\tilde{x},\tilde{x}-x_{n_{k}} \biggr\rangle \\ &\qquad{}+ \biggl\langle \frac{\alpha_{n_{k}}}{(1-\alpha_{n_{k}})\mu_{n_{k}}}(I-S) \tilde {x}+D\tilde{x},x_{n_{k}}-x_{{n_{k}}+1} \biggr\rangle \biggr] \\ &\quad=\lim_{k\to\infty} \biggl\langle \frac{\alpha_{n_{k}}}{(1-\alpha _{n_{k}})\mu _{n_{k}}}(I-S) \tilde{x}+D \tilde{x},\tilde{x}-x_{{n_{k}}} \biggr\rangle . \end{aligned}$$
Since \(\tau=\infty\), \(p\in F\), and \(\tilde{x}\in\bar{\Sigma}\),
$$\bigl\langle (I-S)\tilde{x},\tilde{x}-x_{n_{k}} \bigr\rangle \to \bigl\langle (I-S)\tilde {x},\tilde{x}-p \bigr\rangle \leq0. $$
Moreover, since \(p\in\bar{\Sigma}\) and \(\tilde{x}\) is the solution of (2.11)
$$\langle D\tilde{x},\tilde{x}-x_{n_{k}}\rangle\to\langle D\tilde {x}, \tilde {x}-p\rangle\leq0, $$
so we have
$$\lim_{k\to\infty} \biggl\langle \frac{\alpha_{n_{k}}}{(1-\alpha_{n_{k}})\mu _{n_{k}}}(I-S)\tilde{x}+D \tilde{x},\tilde{x}-x_{{n_{k}}} \biggr\rangle \leq0, $$
and the claim is proved. □

 □

Before we show some applications, we would like to focus on some open questions.

Open Question 1

Since \(F\cap\operatorname{Fix}(S)\subset\bar {\Sigma}\), we conjecture that the solution of (2.9) is a solution of (2.11) too, i.e. if \(F\cap\operatorname {Fix}(S)\neq\emptyset\), \(\bar{x}\) of Theorem 2.8 coincides with \(\tilde{x}\) of Theorem 2.14.

Open Question 2

As we have seen above in Proposition 2.11, the existence of solutions of the variational inequality problem
$$\bigl\langle (I-S)x,y-x \bigr\rangle \geq0, \quad\forall y\in C, $$
implies the boundedness of the sequence generated by
$$x_{n+1}=P_{C} \biggl(I-\alpha_{n} \biggl((I-S)+ \frac{(1-\alpha_{n})\mu _{n}}{\alpha _{n}}D \biggr) \biggr)x_{n}. $$
By Proposition 2.1, if \(\operatorname{Fix}(S)\cap F\neq \emptyset\), the sequence generated by our method
$$x_{n+1}=W_{n} \bigl(\alpha_{n}Sx_{n}+(1- \alpha_{n}) (I-\mu_{n}D)x_{n} \bigr) $$
is bounded. We do not know if the existence of solutions of
$$\bigl\langle (I-S)x,y-x \bigr\rangle \geq0, \quad\forall y\in F, $$
implies the boundedness of the sequence generated by
$$x_{n+1}=W_{n} \bigl(\alpha_{n}Sx_{n}+(1- \alpha_{n}) (I-\mu_{n}D)x_{n} \bigr) $$
(i.e., in general, when \(W_{n}\) replaces \(P_{C}\)).

3 Applications

Let f and g be convex and Fréchet differentiable functionals. Let \(\nabla f\) be \(L_{f}\)-lipschitzian and let \(\nabla g\) be \(\sigma_{g}\)-strongly monotone and \(L_{g}\)-lipschitzian. Let us consider
$$\min_{C} \bigl(f(x)+\varepsilon g(x) \bigr), $$
where \(\varepsilon>0\) is given and C is a closed and convex subset of H. Without loss of generality we can suppose that \(C=\bigcap_{n\in \mathbb {N}}\operatorname{Fix}(W_{n})\), where \((W_{n})_{n\in\mathbb{N}}\) is a suitable family of nonexpansive mappings. We have the following.

Theorem 3.1

Pick two sequences such that \((\mu_{n})_{n\in\mathbb{N}}\subset (0,\frac {2\sigma_{g}}{L_{g}^{2}})\) and
$$\lim_{n\to\infty}\frac{\alpha_{n}}{\mu_{n}}=\frac{1}{\varepsilon}, $$
where \(\mu_{n}\to0\), as \(n\to\infty\), and
  1. (H1)

    \(\sum_{n=1}^{\infty}\mu_{n}=\infty\) and \(|\mu _{n}-\mu_{n-1}|=o(\mu_{n})\);

     
  2. (H2)

    \(|\alpha_{n}-\alpha_{n-1}|=o(\mu_{n})\);

     
  3. (H3)

    \(\sup_{z\in B}\|W_{n}z-W_{n-1}z\|=o(\mu_{n})\), with \(B\subset H\) bounded.

     
Then \((x_{n})_{n\in\mathbb{N}}\) generated by
$$x_{n+1}=W_{n} \biggl(\alpha_{n} \biggl(I- \frac{1}{L_{f}}\nabla f \biggr) (x_{n})+(1-\alpha_{n}) \biggl(I-\frac{\mu_{n}}{L_{f}}\nabla g \biggr) (x_{n}) \biggr) $$
strongly converges to \(x^{*}\), that is, the unique solution of the variational inequality problem
$$ \bigl\langle \nabla f(x)+\varepsilon\nabla g(x), y-x \bigr\rangle \geq0, \quad\forall y\in C. $$
(3.1)

Proof

The proof follows by Theorem 2.8 since \((I-\frac{1}{L_{f}}\nabla f)\) is nonexpansive and \((\frac{1}{L_{f}}\nabla g)\) is a strongly monotone and lipschitzian operator. □

Choosing \(\mu_{n}=\frac{1}{n}\) we immediately obtain the following.

Corollary 3.2

The sequence generated by
$$x_{n+1}=W_{n} \biggl(I-\frac{1}{nL_{f}} \biggl(\nabla f(x_{n})+ \biggl(1-\frac {1}{n} \biggr)\frac{\nabla g(x_{n})}{\varepsilon} \biggr) \biggr) $$
strongly converges to \(x^{*}\), that is, the unique solution of the variational inequality problem
$$ \bigl\langle \nabla f(x)+\varepsilon\nabla g(x), y-x \bigr\rangle \geq0, \quad\forall y\in C. $$
(3.2)

Following [21], let \(f(x)=\frac{1}{2}\|Ax-b\|^{2}\) where A is a linear and bounded operator and \(b\in H\). Let \(g(x)=\frac{1}{2}\|x\| ^{2}\). The next corollary easily follows.

Corollary 3.3

The sequence \((x_{n})_{n\in\mathbb{N}}\) generated by
$$x_{n+1}=W_{n} \biggl(I-\frac{1}{n\|A\|^{2}} \biggl(A^{*}Ax_{n}-A^{*}b+ \biggl(1-\frac {1}{n} \biggr) \frac{x_{n}}{\varepsilon} \biggr) \biggr), $$
strongly converges to \(x^{*}\), that is, the unique solution of the variational inequality problem
$$ \bigl\langle A^{*}Ax-A^{*}b+\varepsilon x, y-x \bigr\rangle \geq0,\quad \forall y\in C, $$
(3.3)
i.e. \(x^{*}\) is the unique solution of
$$\min_{C} \frac{1}{2}\|Ax-b\|^{2}+ \frac{1}{2}\varepsilon\|x\|^{2}. $$
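A minimal numerical sketch of this scheme follows, assuming for illustration only \(W_{n}=I\), \(A=I\) on \(\mathbb{R}^{2}\), and \(\varepsilon=1\), and writing the gradient of \(\frac{1}{2}\|Ax-b\|^{2}\) explicitly as \(A^{*}(Ax-b)\). Under these choices the iteration is a diminishing-step gradient method for \(\min\frac{1}{2}\|x-b\|^{2}+\frac{1}{2}\|x\|^{2}\), whose minimizer is \(b/2\):

```python
import numpy as np

def regularized_least_squares(b, N=4000):
    """x_{n+1} = x_n - (1/(n ||A||^2)) * (grad f(x_n) + (1 - 1/n) x_n / eps),
    with W_n = I, A = I and eps = 1, so grad f(x) = A*(Ax - b) = x - b.
    The iterates approach b/2, the minimizer of 0.5||x-b||^2 + 0.5||x||^2."""
    x = np.zeros_like(b)
    for n in range(1, N):
        x = x - (1.0 / n) * ((x - b) + (1.0 - 1.0 / n) * x)
    return x

b = np.array([2.0, -4.0])
x = regularized_least_squares(b)   # close to b/2 = [1, -2]
```

The concrete data (A, b, the dimension) are hypothetical; the point is only to see the damped gradient structure of the scheme.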
Let us now consider the least absolute shrinkage and selection operator problem, briefly called the lasso problem. Let \(H=\mathbb{R}^{n}\); the lasso problem is the minimization problem defined as
$$\min_{C}\frac{1}{2}\|Ax-b\|_{2}^{2}+ \frac{1}{2}\|x\|_{1}, $$
where A is an \(m\times n\) matrix, \(x\in\mathbb{R}^{n}\), \(b\in\mathbb {R}^{m}\) [22]. We consider a lasso problem that admits solutions. This ill-posed problem can be regularized as
$$\min_{\mathbb{R}^{n}}\frac{1}{2}\|Ax-b\|_{2}^{2}+ \gamma\|x\|_{1}+\frac {1}{2}\varepsilon\|x\|_{2}^{2}+ \delta_{C}(x). $$
This regularization, called an elastic net, is studied in [23].
Taking into account Example 1.3, the proximal operator of \(\| \cdot\|_{1}\) on \(\mathbb{R}^{n}\) is defined as
$$\operatorname{prox}_{\gamma\|\cdot\|_{1}}(x):=\mathop{\operatorname{argmin}}\limits_{v\in\mathbb {R}^{n}} \biggl\{ \gamma\|v\| _{1}+\frac{1}{2}\|x-v\|^{2} \biggr\} . $$
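This operator has the well-known closed form of componentwise soft thresholding, \((\operatorname{prox}_{\gamma\|\cdot\|_{1}}(x))_{i}=\operatorname{sign}(x_{i})\max\{|x_{i}|-\gamma,0\}\); a short Python sketch:

```python
import numpy as np

def soft_threshold(x, gamma):
    """prox of gamma * ||.||_1: componentwise sign(x_i) * max(|x_i| - gamma, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

# entries with |x_i| <= gamma are set to zero, the others are shrunk toward zero
soft_threshold(np.array([3.0, -0.5, 1.0]), 1.0)
```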
In [22] the author proved the following.

Proposition 3.4

[22]

If g is a convex and Fréchet differentiable functional on H, a point \(x^{*}\) is a solution of the lasso problem if and only if
$$x^{*}=\operatorname{prox}_{\lambda f}(I-\lambda\nabla g)x^{*}. $$
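This characterization can be checked numerically: with \(\lambda=1\) and \(g(x)=\frac{1}{2}\|Ax-b\|_{2}^{2}\), the map \(x\mapsto\operatorname{prox}_{\gamma\|\cdot\|_{1}}(x-A^{*}(Ax-b))\) is the classical forward-backward (ISTA) step. A sketch with a hypothetical diagonal matrix A, for which the lasso solution is available componentwise in closed form:

```python
import numpy as np

def soft_threshold(x, gamma):
    # prox of gamma * ||.||_1 (componentwise soft thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def lasso_fixed_point(A, b, gamma, iters=300):
    """Iterate x <- prox_{gamma ||.||_1}(x - A^T (A x - b)).
    By Proposition 3.4 (lambda = 1, valid here since ||A||^2 <= 1),
    a fixed point solves min 0.5 ||A x - b||^2 + gamma ||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - A.T @ (A @ x - b), gamma)
    return x

A = np.diag([0.8, 0.6])          # hypothetical diagonal example
b = np.array([2.0, 1.0])
x = lasso_fixed_point(A, b, gamma=0.1)
# for diagonal A the solution is (a_i b_i - gamma)/a_i^2 whenever that value is positive
```

The returned x is a fixed point of the ISTA map, in agreement with Proposition 3.4.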

Thus, by Theorem 2.8, we have the following.

Theorem 3.5

Pick two sequences such that
$$\lim_{n\to\infty}\frac{\alpha_{n}}{\mu_{n}}=0 $$
and \(\mu_{n}\to0\), as \(n\to\infty\). Moreover, suppose that
  1. (H1)

    \(\sum_{n=1}^{\infty}\mu_{n}=\infty\) and \(|\mu _{n}-\mu_{n-1}|=o(\mu_{n})\);

     
  2. (H2)

    \(|\alpha_{n}-\alpha_{n-1}|=o(\mu_{n})\).

     
Then \((x_{n})_{n\in\mathbb{N}}\) generated by
$$x_{n+1}=P_{C} \bigl(\alpha_{n} \operatorname{prox}_{\gamma\|\cdot\| _{1}} \bigl(I-A^{*}A+A^{*}b \bigr)x_{n}+(1- \alpha _{n}) (1- \mu_{n})x_{n} \bigr) $$
strongly converges to \(x^{*}\in C\), that is, the unique solution of
$$\langle x,y-x\rangle\geq0,\quad \forall y\in\operatorname {Fix} \bigl( \operatorname{prox}_{\gamma\|\cdot \|_{1}} \bigl(I-A^{*}A+A^{*}b \bigr) \bigr)\cap C, $$
i.e. the minimum \(\|\cdot \| _{2}\)-norm solution of the lasso problem.

Proof

It is enough to choose \(S=\operatorname{prox}_{\gamma\|\cdot\| _{1}}(I-A^{*}A+A^{*}b)\), \(W_{n}=P_{C}\), and \(D=I\) in Theorem 2.8. □

By Theorem 2.12, one can prove the following.

Theorem 3.6

Pick \(u\in H\). Let \(\mu_{n}=\frac{1}{n}\) and \(\alpha_{n}=\alpha>0\). Let \((W_{n})_{n\in \mathbb{N}}\) be such that \(\sup_{z\in B}\|W_{n}z-W_{n-1}z\|=o(\frac{1}{n})\), with \(B\subset H\) bounded. Then \((x_{n})_{n\in\mathbb{N}}\) generated by
$$x_{n+1}=W_{n}\bigl(\alpha\operatorname{prox}_{\gamma\|\cdot\| _{1}} \bigl(I-A^{*}A+A^{*}b \bigr)x_{n}+(1-\alpha ) \bigl(\mu_{n}u+(1- \mu_{n})x_{n} \bigr)\bigr) $$
strongly converges to \(x^{*}\), that is, the unique solution of the variational inequality problem
$$ \langle x-u, y-x\rangle\geq0, \quad\forall y\in F\cap \operatorname{Fix} \bigl(\operatorname{prox}_{\gamma\|\cdot\|_{1}} \bigl(I-A^{*}A+A^{*}b \bigr) \bigr), $$
(3.4)
i.e. the solution of the lasso problem nearest to u.

Declarations

Acknowledgements

Supported by Ministero dell’Universitá e della Ricerca of Italy. The authors are extremely grateful to the anonymous referees for their useful comments and suggestions.

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

Authors’ Affiliations

(1)
Dipartimento di Matematica, Universitá della Calabria
(2)
Department of Mathematics, King Abdulaziz University

References

  1. Mann, WR: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506-510 (1953)
  2. Halpern, B: Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 73, 957-961 (1967)
  3. Ishikawa, S: Fixed points and iteration of a nonexpansive mapping in a Banach space. Proc. Am. Math. Soc. 59, 65-71 (1976)
  4. Moudafi, A: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 241, 46-55 (2000)
  5. Nakajo, K, Takahashi, W: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 279(2), 372-379 (2003)
  6. Deimling, K: Nonlinear Functional Analysis. Dover, New York (2010) (first edition: Springer, Berlin (1985))
  7. Baillon, JB, Haddad, G: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 26(2), 137-150 (1977)
  8. Takahashi, W, Toyoda, M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 118(2), 417-428 (2003)
  9. Xu, HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150(2), 360-378 (2011)
  10. Hundal, HS: An alternating projection that does not converge in norm. Nonlinear Anal., Theory Methods Appl. 57(1), 35-61 (2004)
  11. Atsushiba, S, Takahashi, W: Strong convergence theorems for a finite family of nonexpansive mappings and applications. B. N. Prasad birth centenary commemoration volume. Indian J. Math. 41(3), 435-453 (1999)
  12. Marino, G, Muglia, L: On the auxiliary mappings generated by a family of mappings and solutions of variational inequalities problems. Optim. Lett. 9, 263-282 (2015)
  13. Marino, G, Muglia, L, Yao, Y: The uniform asymptotical regularity of families of mappings and solutions of variational inequality problems. J. Nonlinear Convex Anal. 15(3), 477-492 (2014)
  14. Shimoji, K, Takahashi, W: Strong convergence to common fixed points of infinite nonexpansive mappings and applications. Taiwan. J. Math. 5, 387-404 (2001)
  15. Xu, HK, Kim, TH: Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 119(1), 185-201 (2003)
  16. Cianciaruso, F, Marino, G, Muglia, L, Yao, Y: On a two-step algorithm for hierarchical fixed point problems and variational inequalities. J. Inequal. Appl. 2009, Article ID 208692 (2009). doi:10.1155/2009/208692
  17. Moudafi, A, Maingé, P-E: Towards viscosity approximations of hierarchical fixed-points problems. Fixed Point Theory Appl. 2006, Article ID 95453 (2006)
  18. Maingé, P-E, Moudafi, A: Strong convergence of an iterative method for hierarchical fixed-points problems. Pac. J. Optim. 3, 529-538 (2007)
  19. Marino, G, Xu, HK: Explicit hierarchical fixed point approach to variational inequalities. J. Optim. Theory Appl. 149(1), 61-78 (2011)
  20. Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. (2) 66, 240-256 (2002)
  21. Reich, S, Xu, H-K: An iterative approach to a constrained least squares problem. Abstr. Appl. Anal. 2003(8), 503-512 (2003)
  22. Xu, H-K: Properties and iterative methods for the Lasso and its variants. Chin. Ann. Math., Ser. B 35(3), 501-518 (2014)
  23. Zou, H, Hastie, T: Regularization and variable selection via the elastic net. J. R. Stat. Soc., Ser. B 67, 301-320 (2005)

Copyright

© Marino and Muglia; licensee Springer. 2015