On the role of the coefficients in the strong convergence of a general type Mann iterative scheme
Journal of Inequalities and Applications volume 2015, Article number: 118 (2015)
Abstract
Let H be a Hilbert space. Let \((W_{n})_{n\in\mathbb{N}}\) be a suitable family of mappings. Let S be a nonexpansive mapping and D a strongly monotone operator. We study the convergence of the general scheme \(x_{n+1}=W_{n}(\alpha_{n}Sx_{n}+(1-\alpha_{n})(I-\mu_{n}D)x_{n})\) as it depends on the coefficients \((\alpha_{n})_{n\in\mathbb{N}}\), \((\mu_{n})_{n\in\mathbb{N}}\).
1 Introduction and motivations
The approximation of fixed points of nonlinear mappings is a broad and active research area whose applications arise ever more widely in the calculus of variations and optimization. The starting point of many papers is a modification of Mann’s iterative method [1],
$$x_{n+1}=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n}, $$
in order to obtain strong convergence results.
Many of these modified Mann schemes yield approximation sequences by suitable convex combinations like
where g, V, and \((y_{n})_{n\in\mathbb{N}}\) are suitably chosen (see, for instance, Halpern [2], Ishikawa [3], Moudafi [4], Nakajo and Takahashi [5]).
In this paper, we instead focus on the following iterative method:
$$x_{n+1}=W_{n} \bigl(\alpha_{n}Sx_{n}+(1-\alpha_{n}) (I-\mu_{n}D)x_{n} \bigr). $$
This method differs markedly from most existing methods in the literature, and we now discuss some of the motivations behind it.
Let H be a Hilbert space and \(f:H\to\mathbb{R}\) be convex and lower semicontinuous. Our interest is focused on the minimization problem
$$\min_{x\in C}f(x), \quad (1.1) $$
where C is a closed and convex constraint subset of H.
The following theorem is proved in [6].
Theorem 1.1
Let H be a Hilbert space and \(f:H\to\mathbb{R}\) be a convex functional. Then
- (a) \(f(x_{0})=\min_{H}f(x)\) if and only if \(0\in\partial f(x_{0})\).
- (b) Let \(C\subset H\). Then \(f(x_{0})=\min_{C}f(x)\) if and only if \((-\partial f(x_{0}))\cap\partial\delta_{C}(x_{0})\neq\emptyset\), where \(\delta_{C}\) is the indicator function of C.
Denote by Σ the set of solutions of (1.1). Let us start with the simple case in which \(f:H\to\mathbb{R}\) is a convex and continuously Fréchet differentiable functional.
By the definition of an indicator function we recall that (see [6])
Since \(f(\cdot)\) is Fréchet differentiable, \(\partial f(x_{0})\) is the singleton \(\{\nabla f(x_{0})\}\); hence Theorem 1.1(b) of [6] ensures that \(x_{0}\in C\) is a solution of (1.1) if and only if \(-\nabla f(x_{0})\in\partial\delta_{C}(x_{0})\), i.e.
In other words, \(x_{0}\in C\) is a solution of (1.1) if and only if
$$\bigl\langle\nabla f(x_{0}), y-x_{0}\bigr\rangle\geq0,\quad\forall y\in C. \quad (1.3) $$
From (1.3), for every \(\gamma>0\), \(x_{0}\) is a solution of (1.1) if and only if
$$\bigl\langle x_{0}- \bigl(x_{0}-\gamma\nabla f(x_{0}) \bigr), y-x_{0} \bigr\rangle\geq0,\quad\forall y\in C, \quad (1.4) $$
and, in view of Browder’s characterization of the metric projection \(P_{C}\), to solve (1.4) is equivalent to finding \(x_{0}\) such that \(x_{0}=P_{C} (x_{0}-\gamma\nabla f(x_{0}) )\).
Therefore, solving problem (1.1) (respectively, approximating solutions of (1.1)) is equivalent to solving (resp. approximating the solutions of) a fixed point problem involving the operator ∇f.
It is well known, by the convexity of the functional f, that the operator ∇f is a monotone operator; indeed, since
$$f(y)\geq f(x)+\bigl\langle\nabla f(x), y-x\bigr\rangle,\quad\forall x,y\in H, $$
it easily follows that
$$\bigl\langle\nabla f(x)-\nabla f(y), x-y\bigr\rangle\geq0,\quad\forall x,y\in H. $$
If we assume that ∇f is \(L_{f}\)-lipschitzian then, by Baillon-Haddad’s results [7], we have
$$\bigl\langle\nabla f(x)-\nabla f(y), x-y\bigr\rangle\geq\frac{1}{L_{f}}\bigl\|\nabla f(x)-\nabla f(y)\bigr\|^{2},\quad\forall x,y\in H, $$
i.e. ∇f is \(\frac{1}{L_{f}}\)-inverse strongly monotone.
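As a quick numerical sanity check (our own illustration, not part of the original argument), one can verify the Baillon-Haddad inequality on a quadratic functional, whose gradient is linear and whose Lipschitz constant \(L_{f}\) is the largest eigenvalue of the Hessian:

```python
import numpy as np

rng = np.random.default_rng(0)

# f(x) = 0.5 * x^T A x with A symmetric positive semidefinite:
# grad f(x) = A x, and L_f is the largest eigenvalue of A.
M = rng.standard_normal((4, 4))
A = M.T @ M
L_f = np.linalg.eigvalsh(A).max()

# Baillon-Haddad: <grad f(x) - grad f(y), x - y> >= (1/L_f)||grad f(x) - grad f(y)||^2
ok = True
for _ in range(100):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    g = A @ x - A @ y
    ok = ok and (g @ (x - y) >= g @ g / L_f - 1e-10)
print(ok)  # True
```

In the eigenbasis of A the inequality reduces to \(\lambda_{i}\leq L_{f}\) for every eigenvalue, which is why the check succeeds for any symmetric positive semidefinite A.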
Under such a hypothesis on ∇f, Takahashi and Toyoda in [8] proved that \(P_{C} (I-\frac{1}{L_{f}}\nabla f)\) is a nonexpansive mapping; hence solving (1.1) (resp. approximating a solution of (1.1)) is equivalent to finding (resp. approximating) a fixed point of the nonexpansive mapping \(P_{C} (I-\frac{1}{L_{f}}\nabla f )\). Xu in 2011 [9] showed that, even if \(\Sigma\neq\emptyset\), it is not guaranteed that the natural iteration
$$x_{n+1}=P_{C} \biggl(I-\frac{1}{L_{f}}\nabla f \biggr)x_{n} \quad (1.5) $$
strongly converges to an element of Σ. An example is given in the following.
Example 1.2
[9]
Following Hundal [10], there exist in \(H=l^{2}\) two closed and convex subsets \(C_{1}\) and \(C_{2}\) such that: (i) \(C_{1}\cap C_{2}\neq \emptyset\), and (ii) the sequence generated by \(x_{0}\in C_{2}\) and the formula \(x_{n}=(P_{C_{2}}P_{C_{1}})^{n}x_{0}\) weakly converges but does not strongly converge.
Let \(f(x)=\frac{1}{2}\|x-P_{C_{1}}x\|^{2}\). We deal with minimizing \(f(x)\) on \(C_{2}\). It follows that \(\nabla f(x)=(I-P_{C_{1}})x\). Since \(P_{C_{1}}\) is firmly nonexpansive, i.e., 1-inverse strongly monotone, iteration (1.5) becomes
$$x_{n+1}=P_{C_{2}} \bigl(I-(I-P_{C_{1}}) \bigr)x_{n}=P_{C_{2}}P_{C_{1}}x_{n}, $$
that is, the sequence generated by (ii).
If we add to the lipschitzianity of ∇f the (stronger) assumption that ∇f is a \(\sigma_{f}\)-strongly monotone operator, i.e.
$$\bigl\langle\nabla f(x)-\nabla f(y), x-y\bigr\rangle\geq\sigma_{f}\|x-y\|^{2},\quad\forall x,y\in H, $$
then the mapping \(P_{C} (I-\frac{\sigma _{f}}{L_{f}^{2}}\nabla f )\) is a contraction; therefore the contraction principle ensures that problem (1.1) has a unique solution \(x^{*}\) and the iterative sequence
$$x_{n+1}=P_{C} \biggl(I-\frac{\sigma_{f}}{L_{f}^{2}}\nabla f \biggr)x_{n} $$
strongly converges to \(x^{*}\).
Notice that, if \(C=H\), then \(P_{C}=I\) and the iteration
$$x_{n+1}= \biggl(I-\frac{\sigma_{f}}{L_{f}^{2}}\nabla f \biggr)x_{n} $$
strongly converges to a zero of ∇f.
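This contracting iteration can be sketched numerically (our own illustration, on a quadratic whose constants \(\sigma_{f}\), \(L_{f}\) are the extreme eigenvalues of the Hessian; the matrix and vector below are assumptions chosen for the demo):

```python
import numpy as np

# f(x) = 0.5 x^T A x - b^T x, so grad f(x) = A x - b is sigma_f-strongly
# monotone and L_f-lipschitzian, with sigma_f, L_f the extreme eigenvalues
# of the symmetric positive definite matrix A.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, -1.0])
eigs = np.linalg.eigvalsh(A)
sigma_f, L_f = eigs[0], eigs[-1]

# the case C = H: x_{n+1} = (I - (sigma_f / L_f^2) grad f) x_n
x = np.zeros(2)
for _ in range(500):
    x = x - (sigma_f / L_f**2) * (A @ x - b)

x_star = np.linalg.solve(A, b)            # the unique zero of grad f
print(np.allclose(x, x_star, atol=1e-6))  # True
```

The iterates converge linearly, with contraction factor bounded by the one given by the strong monotonicity of ∇f.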
Hence a natural question is how to use the good properties of strongly monotone operators to find a solution of (1.1) if ∇f is only lipschitzian.
A well-known approach is to consider a regularized problem; a classical example is Tikhonov’s regularized problem:
where \(\varepsilon>0\) is given.
This approach arises from the following idea: if ∇f is only lipschitzian (for instance nonexpansive), we can perturb problem (1.1) by a convex and differentiable functional g such that ∇g is a \(\sigma_{g}\)-strongly monotone and \(L_{g}\)-lipschitzian operator, and consider
$$\min_{x\in C} \bigl(f(x)+\varepsilon g(x) \bigr). \quad (1.7) $$
The operator \((\nabla f+\varepsilon\nabla g)\) is a lipschitzian and strongly monotone operator, the minimum problem (1.7) has a unique solution and, for a suitable \(\lambda>0\),
$$x_{n+1}=P_{C} \bigl(I-\lambda(\nabla f+\varepsilon\nabla g) \bigr)x_{n} $$
strongly converges to this solution.
Let us observe that
$$P_{C} \bigl(I-\lambda(\nabla f+\varepsilon\nabla g) \bigr)x=P_{C} \bigl(\lambda(I-\nabla f)x+(1-\lambda) (I-\gamma\varepsilon\nabla g )x \bigr),\quad\gamma:=\frac{\lambda}{1-\lambda}, $$
i.e. \((x_{n})_{n\in\mathbb{N}}\) is generated by the composition of the projection \(P_{C}\) and a convex combination of two maps: the first is a nonexpansive mapping; the second is a strongly monotone operator. In fact, for a suitable choice of λ (and \(\gamma:=\frac{\lambda}{1-\lambda}\)):
- \((I-\nabla f)\) is a nonexpansive mapping;
- the mapping \((I-\gamma\varepsilon\nabla g )\) is a contraction.
For these reasons we are interested in the iteration
$$x_{n+1}=W_{n} \bigl(\alpha_{n}Sx_{n}+(1-\alpha_{n}) (I-\mu_{n}D)x_{n} \bigr) \quad (1.8) $$
under the following hypotheses:
Hypotheses (ℋ)
- \((\alpha_{n})_{n\in\mathbb{N}}\) is a sequence in \([0,1)\).
- \(S:H\to H\) is a nonexpansive mapping, not necessarily with fixed points.
- \(D:H\to H\) is a σ-strongly monotone and L-lipschitzian operator.
- \(0<\mu_{n}\leq\mu\) with \(\mu< \frac{2\sigma}{L^{2}}\), \(\rho=\frac{2\sigma-\mu L^{2}}{2}\).
- \((W_{n})_{n\in\mathbb{N}}\) is a sequence of mappings defined on H such that \(F:=\bigcap_{n\in\mathbb{N}} \operatorname{Fix}(W_{n})\neq \emptyset\) and
  - (h1) \(W_{n}:H\to H\) are nonexpansive mappings, uniformly asymptotically regular on bounded subsets \(B\subset H\), i.e. $$\lim_{n\to\infty}\sup_{x\in B}\|W_{n+1}x-W_{n}x \|=0; $$
  - (h2) it is possible to define a nonexpansive mapping \(W:H\to H\), with \(Wx:=\lim_{n\to\infty}W_{n}x\), such that \(\operatorname{Fix}(W)=F\).
An interesting example of sequence \((W_{n})_{n\in\mathbb{N}}\) satisfying our hypotheses is the following.
Example 1.3
Let f be a convex and lower semicontinuous functional on H. We recall that the proximal operator of f on H is defined as
$$\operatorname{prox}_{\lambda f}(x):=\mathop{\operatorname{argmin}}_{y\in H} \biggl(f(y)+\frac{1}{2\lambda}\|x-y\|^{2} \biggr), $$
where \(\lambda>0\).
The proximal operator obeys:
- (1) it is a single-valued firmly nonexpansive mapping (hence nonexpansive);
- (2) it coincides with \(P_{C}\) if \(f(x)=\delta_{C}(x)\);
- (3) \(\operatorname{prox}_{\lambda f}=(I+\lambda\partial f)^{-1}\), i.e. it is the resolvent of the subdifferential of f;
- (4) \(\operatorname{prox}_{\lambda f}x=\operatorname{prox}_{\nu f} (\frac{\nu}{\lambda}x+ (1-\frac{\nu}{\lambda} )\operatorname{prox}_{\lambda f}x )\);
- (5) $$x^{*}=\operatorname{prox}_{\lambda f} \bigl(x^{*} \bigr)\quad\Leftrightarrow\quad0\in \partial f \bigl(x^{*} \bigr). $$
If \((\lambda_{n})_{n\in\mathbb{N}}\) converges to \(\lambda>0\) then \(W_{n}:=\operatorname{prox}_{\lambda_{n} f}\) satisfies (h1) and (h2) with \(W:=\operatorname{prox}_{\lambda f}\). In fact, the sets of fixed points coincide by (3) and (5). Moreover, by (4),
so if x lies in a bounded subset, the uniform asymptotical regularity follows.
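For \(f=\|\cdot\|_{1}\) the proximal operator is the componentwise soft threshold, and the uniform asymptotic regularity (h1) of \(W_{n}=\operatorname{prox}_{\lambda_{n}\|\cdot\|_{1}}\) can be checked directly, since the soft threshold is 1-lipschitzian in λ. The sketch below is our own illustration; the sampled box standing in for a bounded subset B and the specific \(\lambda_{n}\) are assumptions:

```python
import numpy as np

def prox_l1(x, lam):
    """prox of lam*||.||_1: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(1)
lam = lambda n: 1.0 + 1.0 / n            # lambda_n -> 1 as n -> infinity

# Since |prox_{a|.|}(t) - prox_{b|.|}(t)| <= |a - b| for every t,
# sup_{x in B} ||W_{n+1}x - W_n x|| <= sqrt(d) |lambda_{n+1} - lambda_n| -> 0.
X = rng.uniform(-5.0, 5.0, size=(1000, 3))   # sample standing in for a bounded B
for n in (10, 100, 1000):
    gap = np.max(np.linalg.norm(prox_l1(X, lam(n + 1)) - prox_l1(X, lam(n)), axis=1))
    assert gap <= np.sqrt(3) * abs(lam(n + 1) - lam(n)) + 1e-12
print(gap)  # the gap vanishes, uniformly over the sample
```

The bound is uniform in x, which is exactly what (h1) requires on bounded subsets.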
In any case we have the following.
Remark 1.4
If \(C=\bigcap_{n\in\mathbb{N}}C_{n}\), where \(C_{n}\subset H\) are closed and convex for all \(n\in\mathbb{N}\), we can always suppose that \(C=\bigcap _{n\in\mathbb{N}}\operatorname{Fix}(W_{n})\), where \((W_{n})_{n\in\mathbb {N}}\) is a sequence of nonexpansive mappings satisfying (h1) and (h2). Indeed, starting from the sequence of nonexpansive mappings \(T_{n}=P_{C_{n}}\), we can always construct a sequence \((W_{n})_{n\in\mathbb{N}}\) satisfying (h1) and (h2) such that \(C=\bigcap _{n\in \mathbb{N}}C_{n}=\bigcap_{n\in\mathbb{N}}\operatorname{Fix}(T_{n})=\bigcap _{n\in\mathbb {N}}\operatorname{Fix}(W_{n})\) (see for details [11–14]).
Moreover, regarding the strongly monotone operator D, we note that the sequence of operators \(B_{n}x:=(I-\mu_{n}D)x\) is a sequence of contractions when the sequence \((\mu_{n})_{n\in\mathbb{N}}\) lies in a suitable interval. Such an interval can be detected by the following lemma, proved by Kim and Xu.
Lemma 1.5
[15]
Let \(D:H\to H\) be σ-strongly monotone and L-lipschitzian. If \(\mu<\frac{2\sigma}{L^{2}}\), \(\rho=\frac {2\sigma-\mu L^{2}}{2}\), and \((\mu_{n})_{n\in\mathbb{N}}\subset(0,\mu ]\), then
$$\bigl\|(I-\mu_{n}D)x-(I-\mu_{n}D)y\bigr\|\leq(1-\mu_{n}\rho)\|x-y\|,\quad\forall x,y\in H, $$
i.e. \((I-\mu_{n}D)\) is a \((1-\mu_{n}\rho)\)-contraction.
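A hedged numerical check of Lemma 1.5 (our own illustration; the diagonal operator, μ, and the sampled points are assumptions): take \(Dx=Ax\) with \(A=\operatorname{diag}(1,2)\), so \(\sigma=1\), \(L=2\), and μ must satisfy \(\mu<2\sigma/L^{2}=0.5\):

```python
import numpy as np

# D x = A x with A = diag(1, 2): sigma = 1 (strong monotonicity constant),
# L = 2 (lipschitz constant), so we need mu < 2*sigma/L**2 = 0.5.
A = np.diag([1.0, 2.0])
sigma, L = 1.0, 2.0
mu = 0.4
rho = (2 * sigma - mu * L**2) / 2        # rho = (2 sigma - mu L^2)/2 = 0.2

rng = np.random.default_rng(2)
for mu_n in (0.05, 0.1, 0.25, 0.4):      # any mu_n in (0, mu]
    for _ in range(200):
        x, y = rng.standard_normal(2), rng.standard_normal(2)
        lhs = np.linalg.norm((x - mu_n * A @ x) - (y - mu_n * A @ y))
        assert lhs <= (1 - mu_n * rho) * np.linalg.norm(x - y) + 1e-12
print("(I - mu_n D) verified to be a (1 - mu_n rho)-contraction")
```

For this diagonal example the true Lipschitz constant of \(I-\mu_{n}A\) is \(\max(|1-\mu_{n}|,|1-2\mu_{n}|)\), which is indeed below the lemma's bound \(1-\mu_{n}\rho\).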
In this paper we study some asymptotic behaviors of the sequence generated by iteration (1.8), supposing that there exists (finite or infinite)
$$\tau:=\lim_{n\to\infty}\frac{\alpha_{n}}{\mu_{n}}. $$
We will be able to show that (1.8) strongly converges to a solution of the variational inequality
$$\bigl\langle \bigl(\tau(I-S)+D \bigr)x^{*}, x-x^{*} \bigr\rangle\geq0,\quad\forall x\in F, $$
when \(\tau\in[0,+\infty)\), and to a special solution of
if \(\tau=+\infty\).
Our research is close to the area studied by Moudafi and Maingé, also known as the hierarchical fixed point approach (see [16–19]).
2 Some asymptotic behaviors of the iterative scheme
To study the asymptotic behavior of our method
$$x_{n+1}=W_{n} \bigl(\alpha_{n}Sx_{n}+(1-\alpha_{n}) (I-\mu_{n}D)x_{n} \bigr), \quad (2.1) $$
we suppose that there exists
$$\tau:=\lim_{n\to\infty}\frac{\alpha_{n}}{\mu_{n}}\in[0,+\infty]. $$
The method can be equivalently written as
$$x_{n+1}=W_{n}y_{n}, $$
where \(y_{n}:=\alpha_{n}Sx_{n}+(1-\alpha_{n})B_{n}x_{n}\) and \(B_{n}=(I-\mu_{n}D)\). We will use the following convenient notations:
- We say that \(\zeta_{n}=o(\eta_{n})\) if \(\frac{\zeta_{n}}{\eta_{n}}\to0\) as \(n\to\infty\).
- We say that \(\zeta_{n}=O(\eta_{n})\) if there exist \(K,N>0\) such that \(N\leq \vert \frac{\zeta_{n}}{\eta_{n}}\vert \leq K\).
A central role in proving the convergence results is played by the boundedness of the sequence \((x_{n})_{n\in\mathbb{N}}\), and we want to highlight this role. An expected case occurs when S and \((W_{n})_{n\in\mathbb{N}}\) have common fixed points.
Proposition 2.1
Suppose that (2.1) satisfies Hypotheses (ℋ).
If \(\operatorname{Fix}(S)\cap F\neq\emptyset\) then \((x_{n})_{n\in \mathbb{N}}\) is bounded.
Proof
If \(z\in\operatorname{Fix}(S)\cap F\)
Setting \(\beta_{n}:=(1-\alpha_{n})\mu_{n}\rho\) we have
Since, by an inductive process, one can see that
the claim follows. □
Notice that, in this case, boundedness does not depend on any hypotheses on \((\alpha_{n})_{n\in\mathbb{N}}\), \((\mu_{n})_{n\in\mathbb{N}}\), sequences in \([0,1]\).
On the contrary, in the following proposition the boundedness of the sequence is guaranteed by the assumptions on the coefficients.
Proposition 2.2
Let us suppose that (2.1) satisfies Hypotheses (ℋ). Let \((\alpha_{n})_{n\in\mathbb{N}}\) be a sequence in \([0,1]\) and let \((\mu_{n})_{n\in\mathbb{N}}\) be a sequence in \((0,\mu)\). Assume that
- (B) either \(\alpha_{n}=O (\mu_{n} )\) or \(\alpha_{n}=o (\mu_{n} )\) (a sufficient condition is that there exists \(\lim_{n\to\infty} \frac{\alpha_{n}}{\mu_{n}}=\tau\in[0,+\infty)\)).
Then \((x_{n})_{n\in\mathbb{N}}\) is bounded.
Proof
Let \(z \in F\). Then for every \(n\in\mathbb{N}\),
Since (B) holds, there exist \(\gamma>0\) and \(N_{0}\) such that, for all \(n>N_{0}\), \(\alpha_{n}\leq\gamma(1-\alpha_{n})\mu_{n}\); hence
Setting \(\beta_{n}:=(1-\alpha_{n})\mu_{n}\rho\) we have
Since, by an inductive process, one can see that
the claim follows. □
It is remarkable that, from boundedness, we can deduce the asymptotical regularity of the iterative sequence, i.e. that
$$\lim_{n\to\infty}\|x_{n+1}-x_{n}\|=0, $$
which is often a key step in proving convergence results when the mappings involved are continuous. To prove it, we use the Xu lemma.
Lemma 2.3
[20]
Assume \((a_{n})_{n \in\mathbb{N}}\) is a sequence of nonnegative numbers such that
$$a_{n+1}\leq(1-\gamma_{n})a_{n}+\delta_{n}, $$
where \((\gamma_{n})_{n}\) is a sequence in \((0,1)\) and \((\delta_{n})_{n}\) is a sequence in ℝ such that:
- (1) \(\sum_{n=1}^{\infty}\gamma_{n}=\infty\);
- (2) \(\limsup_{n\to\infty}\delta_{n}/\gamma_{n}\leq0\) or \(\sum_{n=1}^{\infty}|\delta_{n}|<\infty\).
Then \(\lim_{n\to\infty}a_{n}=0\).
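As a numerical sketch of how Lemma 2.3 operates (the concrete choices of \(\gamma_{n}\) and \(\delta_{n}\) below are ours, for illustration only; the recursion is run with equality, the worst case of the bound):

```python
# gamma_n = 1/n, delta_n = n**-1.5: sum gamma_n diverges and
# delta_n / gamma_n = n**-0.5 -> 0, so the lemma forces a_n -> 0.
a = 1.0
for n in range(1, 10**6):
    gamma, delta = 1.0 / n, n**-1.5
    a = (1 - gamma) * a + delta          # equality: worst case of the bound
print(a)  # close to 0, and decreasing as n grows
```

For this particular choice the iterates decay roughly like \(2/\sqrt{n}\), illustrating that the rate of convergence depends on how fast \(\delta_{n}/\gamma_{n}\) vanishes.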
Proposition 2.4
Let Hypotheses (ℋ) be satisfied. We suppose that \(\lim_{n\to\infty}\frac{\alpha _{n}}{\mu _{n}}=\tau\in[0,+\infty)\) and that:
- (H1) \(\sum_{n=1}^{\infty}\mu_{n}=\infty\) and \(|\mu_{n}-\mu_{n-1}|=o(\mu_{n})\);
- (H2) \(|\alpha_{n}-\alpha_{n-1}|=o(\mu_{n})\);
- (H3) \(\sup_{z\in B}\|W_{n}z-W_{n-1}z\|=o(\mu_{n})\), with \(B\subset H\) bounded.
Then \((x_{n})_{n\in\mathbb{N}}\) is asymptotically regular.
Remark 2.5
Note that, for \((W_{n})_{n\in\mathbb{N}}\) as in Example 1.3, hypothesis (H3) reduces to a hypothesis on \((\lambda_{n})_{n\in\mathbb{N}}\) since
Proof of Proposition 2.4
First of all, from Proposition 2.2, \((x_{n})_{n\in\mathbb{N}}\) is bounded.
If we denote by \(y_{n}=\alpha_{n}Sx_{n}+(1-\alpha_{n})B_{n}x_{n}\) then
so, taking norms and using the nonexpansivity of \((W_{n})_{n\in\mathbb{N}}\),
Now let us observe that
Therefore, substituting the last equality into (2.4) and using the boundedness of \((x_{n})_{n\in\mathbb{N}}\), we obtain
Denoting
(2.4) becomes
Thus, our hypotheses (H1), (H2), and (H3), are enough to ensure, by Lemma 2.3, that \((x_{n})_{n\in\mathbb{N}}\) is asymptotically regular. □
Remark 2.6
By the previous proof, it is clear that the hypothesis \(\tau\in[0,+\infty)\) is needed only to ensure the boundedness of \((x_{n})_{n\in\mathbb{N}}\). So, more generally, boundedness, (H1), (H2), and (H3) are enough to prove asymptotical regularity.
From now on we will suppose that \(\mu_{n}\to0\), as \(n\to\infty\); then, since τ is nonnegative, either \(\alpha_{n}\to0\), as \(n\to\infty\), or \(\alpha_{n}=0\).
Since we are searching for solutions of variational inequalities on fixed point sets, we give some sufficient conditions under which the set of weak limits of \((x_{n})_{n\in\mathbb{N}}\) lies in F.
Proposition 2.7
Let Hypotheses (ℋ) be satisfied. Let us suppose that \(\lim_{n\to\infty}\alpha_{n}= \lim_{n\to\infty}{\mu_{n}}=0\) and that \(\lim_{n\to\infty}\frac{\alpha_{n}}{\mu_{n}}=\tau\in[0,+\infty)\), and let \((x_{n})_{n\in\mathbb{N}}\) defined by (2.1) be asymptotically regular. Then \(\omega_{w}(x_{n})\subset F\).
Proof
The proof is based on Opial’s condition. The condition on τ gives the boundedness of our sequence by Proposition 2.2.
Let thus \(z\in\omega_{w}(x_{n})\) and let \((x_{n_{k}})_{k\in\mathbb{N}}\) be a subsequence weakly convergent to z. If \(z\notin F\) then \(z\neq Wz\) and
Therefore, the boundedness of \((x_{n})_{n\in\mathbb{N}}\), along with the hypothesis \(\mu_{n}\to0\), produces the contradiction
□
Now we are able to prove our first convergence result.
Theorem 2.8
Let Hypotheses (ℋ) be satisfied. Let us suppose that \(\mu_{n}\to0\) and there exists
$$\lim_{n\to\infty}\frac{\alpha_{n}}{\mu_{n}}=\tau\in[0,+\infty). $$
Moreover, suppose that
- (H1) \(\sum_{n=1}^{\infty}\mu_{n}=\infty\) and \(|\mu_{n}-\mu_{n-1}|=o(\mu_{n})\);
- (H2) \(|\alpha_{n}-\alpha_{n-1}|=o(\mu_{n})\);
- (H3) \(\sup_{z\in B}\|W_{n}z-W_{n-1}z\|=o(\mu_{n})\), with \(B\subset H\) bounded.
Then \((x_{n})_{n\in\mathbb{N}}\) defined by (2.1) strongly converges in F to \(x^{*}\), the unique solution of the variational inequality problem
$$\bigl\langle \bigl(\tau(I-S)+D \bigr)x^{*}, x-x^{*} \bigr\rangle\geq0,\quad\forall x\in F. \quad (2.6) $$
Proof
Recall that, since S is nonexpansive, \((I-S)\) is \(\frac {1}{2}\)-inverse strongly monotone, so the operator \((\tau(I-S)+D)\) is a strongly monotone operator. Since F is closed and convex, problem (2.6) has a unique solution in F, which we indicate by \(x^{*}\).
The hypotheses on τ yield, by Proposition 2.2, the boundedness of \((x_{n})_{n\in\mathbb{N}}\). Then, in view of hypotheses (H1), (H2), and (H3), we can apply Proposition 2.4 to obtain asymptotical regularity. This allows us to apply Proposition 2.7 to get \(\omega_{w}(x_{n})\subset F\). So, let \(x^{*}\in F\) be the unique solution of (2.6); by using the convexity of the norm and the subdifferential inequality
we have, denoting again \(B_{n}=(I-\mu_{n}D)\),
Denoting by
(2.7) can be written as \(a_{n+1}\leq(1-\gamma_{n})a_{n}+\gamma_{n}\delta_{n}\).
To invoke the Xu Lemma 2.3, since \(\sum_{n}\gamma _{n}=\infty\) from (H1), we need to prove only that \(\limsup_{n\to\infty}\delta_{n}\leq0\).
There exists a subsequence \((x_{n_{k}})_{k\in\mathbb{N}}\) of \((x_{n})_{n\in \mathbb{N}}\) such that
Since \((x_{n_{k}})_{k\in\mathbb{N}}\) is bounded, we can suppose that \((x_{n_{k}})_{k\in\mathbb{N}}\) weakly converges to p. Proposition 2.7 gives \(p\in F\). By using the asymptotical regularity of \((x_{n})_{n\in\mathbb{N}}\) we have
□
Remark 2.9
Let us remark that, in the study of the behavior of \((x_{n})_{n\in\mathbb{N}}\) for \(\tau\in[0,+\infty)\), the set of fixed points of S never appears; all the properties, including the strong convergence, have been proved using only the hypotheses on the control sequences.
Let us now suppose \(\lim_{n\to\infty}\frac{\alpha _{n}}{\mu _{n}}=\tau=+\infty\). In this case, necessarily \(\mu_{n}\to0\) as \(n\to \infty\). Therefore either \(\alpha_{n}\to\alpha>0\) or \(\alpha_{n}\to0\) too and \(\mu_{n}=o(\alpha_{n})\).
By Proposition 2.1, if \(\operatorname{Fix}(S)\cap F\) is nonempty, the boundedness of \((x_{n})_{n\in\mathbb{N}}\) follows. On the contrary, if there are no common fixed points, the boundedness is not guaranteed as shown by the following counterexample.
Example 2.10
Let us consider \(H=\mathbb{R}\), \(x_{0}=1\), \(W_{n}x=Dx=x\), \(Sx=x+1\), \(\alpha_{n}=\frac{1}{\sqrt{n}}\), and \(\mu _{n}=\frac{1}{n}\).
Our method gives the sequence of positive numbers
$$x_{n+1}=\frac{1}{\sqrt{n}}(x_{n}+1)+ \biggl(1-\frac{1}{\sqrt{n}} \biggr) \biggl(1-\frac{1}{n} \biggr)x_{n}. $$
If there exists \(M>0\) such that \(x_{n}< M\) then we note that, for every k,
and this contradicts the boundedness of \((x_{n})_{n\in\mathbb{N}}\).
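The divergence in Example 2.10 can also be observed numerically. In this setting the scheme reduces to \(x_{n+1}=\alpha_{n}(x_{n}+1)+(1-\alpha_{n})(1-\mu_{n})x_{n}\); the sketch below (our own illustration) simply iterates it:

```python
import math

# W_n = identity, D = identity, S x = x + 1, alpha_n = 1/sqrt(n),
# mu_n = 1/n, x_0 = 1: Fix(S) is empty and alpha_n / mu_n -> +infinity.
x = 1.0
snapshots = {}
for n in range(1, 100001):
    alpha, mu = 1.0 / math.sqrt(n), 1.0 / n
    x = alpha * (x + 1.0) + (1.0 - alpha) * (1.0 - mu) * x
    if n in (100, 10000, 100000):
        snapshots[n] = x
print(snapshots)  # strictly increasing snapshots: the sequence is unbounded
```

Heuristically the increments satisfy \(x_{n+1}-x_{n}\approx n^{-1/2}-x_{n}/n\), so the iterates grow roughly like \(\sqrt{n}\), in agreement with the unboundedness claimed in the example.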
Nevertheless, we explicitly note that if \(W_{n}=P_{C}\) and there exist solutions of the variational inequality problem
then the boundedness is ensured even if \(F\cap\operatorname {Fix}(S)=\emptyset\). This is shown in the following proposition.
Proposition 2.11
Let C be a closed and convex subset of H. Let us suppose that the variational inequality problem
has at least one solution \(x^{*}\). Then the sequence defined by
is bounded.
Proof
We know that, for all \(\eta\in(0,1]\), we have
Taking \(W_{n}=P_{C}\), we have
So the boundedness follows as in Proposition 2.1. □
Therefore it is meaningful to prove convergence results if \(\operatorname{Fix}(S)\cap F\neq\emptyset\).
Theorem 2.12
Let Hypotheses (ℋ) be satisfied. Let us suppose that
$$\lim_{n\to\infty}\frac{\alpha_{n}}{\mu_{n}}=+\infty $$
and \(\operatorname{Fix}(S)\cap F\neq\emptyset\). Moreover, suppose that:
- (H1s) \(\sum_{n=1}^{\infty}\mu_{n}=\infty\) and \(|\mu_{n}-\mu_{n-1}|=o(\alpha_{n}\mu_{n})\);
- (H2s) \(|\alpha_{n}-\alpha_{n-1}|=o(\alpha_{n}\mu_{n})\);
- (H3s) \(\sup_{z\in B}\|W_{n}z-W_{n-1}z\|=o(\alpha_{n}\mu_{n})\), with \(B\subset H\) bounded;
- (H4) \(\vert \frac{1}{\alpha_{n}}-\frac{1}{\alpha_{n-1}}\vert =O(\mu_{n})\).
(Note that (H1s), (H2s), (H3s) are stronger than (H1), (H2), (H3) of Theorem 2.8.)
Then \((x_{n})_{n\in\mathbb{N}}\) defined by (2.1) strongly converges to \(\bar{x}\in F\cap\operatorname{Fix}(S)\), that is, the unique solution of the variational inequality problem
Remark 2.13
Note that, if \(\alpha_{n}\to\alpha>0\), the requirements (H1s), (H2s), (H3s) reduce to (H1), (H2), (H3).
Proof
If \(\operatorname{Fix}(S)\cap F\neq\emptyset\), \((x_{n})_{n\in\mathbb{N}}\) is bounded by Proposition 2.1. Since (H1s)-(H2s)-(H3s) imply (H1)-(H2)-(H3), by using Proposition 2.4, we see that \((x_{n})_{n\in\mathbb{N}}\) is asymptotically regular. Let us divide the proof into steps.
Step 1. \(\|x_{n+1}-x_{n}\|=o(\alpha_{n})\).
Proof of Step 1
We need to prove that
$$\lim_{n\to\infty}\frac{\|x_{n+1}-x_{n}\|}{\alpha_{n}}=0. $$
If \(\alpha_{n}\to\alpha>0\) we do not need to prove anything; so let \(\alpha=0\). Dividing by \(\alpha_{n}\) in (2.5) of Proposition 2.4 we have
The boundedness of \((x_{n})_{n\in\mathbb{N}}\) and (H4) give
so denoting
our inequality can be written as \(a_{n+1}\leq(1-\gamma_{n})a_{n}+\delta _{n}\). In view of (H1s), (H2s), and (H3s), we can apply the Xu Lemma 2.3 to conclude that \(\|x_{n+1}-x_{n}\|=o(\alpha_{n})\). □
Step 2. \(\omega_{w}(x_{n})\subset F\cap\operatorname{Fix}(S)\).
Proof of Step 2
Let \(z\in F\cap\operatorname{Fix}(S)\); then by the boundedness and the subdifferential inequality
we have
Dividing by \(\alpha_{n}\) we obtain
Since \(\tau=+\infty\), by using Step 1 we obtain \(\|x_{n}-Sx_{n}\|\to0\) as \(n\to\infty\); the demiclosedness principle for nonexpansive mappings then guarantees that \(\omega_{w}(x_{n})\subset\operatorname{Fix}(S)\). By Opial’s condition, if \(z\in\omega_{w}(x_{n})\subset\operatorname{Fix}(S)\), \((x_{n_{k}})_{k\in\mathbb{N}}\) weakly converges to z and \(z\notin F\), then
which is absurd. So we have \(\omega_{w}(x_{n})\subset F\cap\operatorname{Fix}(S)\). □
Finally we conclude our proof, showing the convergence of the sequence.
Step 3. \((x_{n})_{n\in\mathbb{N}}\) strongly converges to \(\bar{x}\) satisfying (2.9).
Proof of Step 3
Let \(\bar{x}\) be the unique solution of the variational inequality problem (2.9). Since \(\bar{x}\in F\cap\operatorname{Fix}(S)\), we have
Denoting
our inequality can be written as
To invoke the Xu Lemma 2.3 we need to prove that \(\limsup_{n\to\infty}\delta_{n}\leq0\).
There exists a subsequence \((x_{n_{k}})_{k\in\mathbb{N}}\) of \((x_{n})_{n\in \mathbb{N}}\) such that
Since \((x_{n_{k}})_{k\in\mathbb{N}}\) is bounded, we can suppose that \((x_{n_{k}})_{k\in\mathbb{N}}\) weakly converges to p. Step 2 guarantees that \(p\in F\cap\operatorname{Fix}(S)\). By using the asymptotical regularity of \((x_{n_{k}})_{k\in\mathbb{N}}\) we have
□
□
Theorem 2.14
Let Hypotheses (ℋ) be satisfied. Let us suppose that
$$\lim_{n\to\infty}\frac{\alpha_{n}}{\mu_{n}}=+\infty. $$
Let us suppose that \((x_{n})_{n\in\mathbb{N}}\) is bounded. Moreover, suppose that
- (H1s) \(\sum_{n=1}^{\infty}\mu_{n}=\infty\) and \(|\mu_{n}-\mu_{n-1}|=o(\alpha_{n}\mu_{n})\);
- (H2s) \(|\alpha_{n}-\alpha_{n-1}|=o(\alpha_{n}\mu_{n})\);
- (H3s) \(\sup_{z\in B}\|W_{n}z-W_{n-1}z\|=o(\alpha_{n}\mu_{n})\), with \(B\subset H\) bounded;
- (H4) \(\vert \frac{1}{\alpha_{n}}-\frac{1}{\alpha_{n-1}}\vert =O(\mu_{n})\).
Let \(\bar{\Sigma}\) be the set of solutions of the variational inequality problem
and let us suppose that \(\bar{\Sigma}\neq\emptyset\).
Then \((x_{n})_{n\in\mathbb{N}}\) defined by (2.1) strongly converges to \(\tilde{x}\), that is, the unique solution of the variational inequality problem
Proof
Since \(\bar{\Sigma}\) coincides with the set of fixed points of the nonexpansive mapping \(P_{F}S\), it is closed and convex. So (2.11) has a unique solution.
Let us note that (H1s)-(H2s)-(H3s) imply (H1)-(H2)-(H3); hence, by using Proposition 2.4, \((x_{n})_{n\in\mathbb{N}}\) is asymptotically regular. We divide the proof into steps.
Step 1. \(\|x_{n+1}-x_{n}\|=o(\alpha_{n})\).
Proof
As for Step 1 of Theorem 2.12. □
Step 2. \(\omega_{w}(x_{n})\subset\bar{\Sigma}\).
Proof of Step 2
Denoting by \(y_{n}=\alpha_{n}Sx_{n}+(1-\alpha_{n})B_{n}x_{n}\), we have
Hypotheses \(\alpha_{n}\to0\) and \(\mu_{n}\to0\) allow one to conclude that \(\|x_{n}-y_{n}\|\to0\). Consequently,
as \(n\to\infty\). Moreover,
Dividing by \(\alpha_{n}\) we have
For all \(z\in F\),
Since \(z\in F\), \(z=W_{n}z\) for all \(n\in\mathbb{N}\), and \((I-W_{n})\) is monotone:
By using (2.12)
Let us denote by \((x_{n_{k}})_{k\in\mathbb{N}}\) a subsequence weakly converging to p; by the same proof as Proposition 2.7 one can see that the boundedness of \((x_{n})\), combined with the assumptions \(\mu_{n}\to0\) and \(\alpha_{n}\to0\), is enough to guarantee that \(p\in F\). We have
Letting \(k\to\infty\), since \(w_{n}\to0\) by Step 1, \(\|(I-W_{n})y_{n}\|\to0\), and \(\tau=+\infty\), we have
If we replace z by \(p+\eta(z-p)\), \(\eta\in(0,1)\), we have
Letting \(\eta\to0\), finally,
i.e. the claim follows. □
Step 3. Convergence of the sequence.
Proof of Step 3
Let \(\tilde{x}\) be the unique solution of the variational inequality problem (2.11). As in Theorem 2.8 we have
Denoting
our inequality can be written as
To invoke the Xu Lemma 2.3 we need to prove that \(\limsup_{n\to\infty}\delta_{n}\leq0\).
There exists a subsequence \((x_{n_{k}})_{k\in\mathbb{N}}\) of \((x_{n})_{n\in \mathbb{N}}\) such that
Since \((x_{n_{k}})_{k\in\mathbb{N}}\) is bounded, we can suppose that \((x_{n_{k}})_{k\in\mathbb{N}}\) weakly converges to p. We know, by Step 2, that \(p\in\bar{\Sigma}\subset F\). By using the asymptotical regularity of \((x_{n})_{n\in\mathbb{N}}\) we have
Since \(\tau=\infty\), \(p\in F\), and \(\tilde{x}\in\bar{\Sigma}\),
Moreover, since \(p\in\bar{\Sigma}\) and \(\tilde{x}\) is the solution of (2.11),
so we have
and the claim is proved. □
□
Before we show some applications, we would like to focus on some open questions.
Open Question 1
Since \(F\cap\operatorname{Fix}(S)\subset\bar{\Sigma}\), we conjecture that the solution of (2.9) is a solution of (2.11) too, i.e. if \(F\cap\operatorname{Fix}(S)\neq\emptyset\), \(\bar{x}\) of Theorem 2.12 coincides with \(\tilde{x}\) of Theorem 2.14.
Open Question 2
As we have seen above in Proposition 2.11, the existence of solutions of the variational inequality problem
implies the boundedness of the sequence generated by
By Proposition 2.1, if \(\operatorname{Fix}(S)\cap F\neq\emptyset\), the sequence generated by our method
is bounded. We do not know if the existence of solutions of
implies the boundedness of the sequence generated by
(i.e., in general, when \(W_{n}\) replaces \(P_{C}\)).
3 Applications
Let f and g be convex and Fréchet differentiable functionals. Let ∇f be \(L_{f}\)-lipschitzian and let ∇g be \(\sigma_{g}\)-strongly monotone and \(L_{g}\)-lipschitzian. Let us consider
$$\min_{x\in C} \bigl(f(x)+\varepsilon g(x) \bigr), $$
where \(\varepsilon>0\) is given and C is a closed and convex subset of H. Without loss of generality we can suppose that \(C=\bigcap_{n\in\mathbb{N}}\operatorname{Fix}(W_{n})\), with \((W_{n})_{n\in\mathbb{N}}\) a suitable sequence of nonexpansive mappings. We have the following.
Theorem 3.1
Pick two sequences such that \((\mu_{n})_{n\in\mathbb{N}}\subset (0,\frac {2\sigma_{g}}{L_{g}^{2}})\) and
where \(\mu_{n}\to0\), as \(n\to\infty\), and
- (H1) \(\sum_{n=1}^{\infty}\mu_{n}=\infty\) and \(|\mu_{n}-\mu_{n-1}|=o(\mu_{n})\);
- (H2) \(|\alpha_{n}-\alpha_{n-1}|=o(\mu_{n})\);
- (H3) \(\sup_{z\in B}\|W_{n}z-W_{n-1}z\|=o(\mu_{n})\), with \(B\subset H\) bounded.
Then \((x_{n})_{n\in\mathbb{N}}\) generated by
strongly converges to \(x^{*}\), that is, the unique solution of the variational inequality problem
Proof
The proof follows from Theorem 2.8, since \((I-\frac{1}{L_{f}}\nabla f)\) is nonexpansive and \(\frac{1}{L_{f}}\nabla g\) is a strongly monotone and lipschitzian operator. □
Choosing \(\mu_{n}=\frac{1}{n}\) we immediately obtain the following.
Corollary 3.2
The sequence generated by
strongly converges to \(x^{*}\), that is, the unique solution of the variational inequality problem
Following [21], let \(f(x)=\frac{1}{2}\|Ax-b\|^{2}\) where A is a linear and bounded operator and \(b\in H\). Let \(g(x)=\frac{1}{2}\|x\| ^{2}\). The next corollary easily follows.
Corollary 3.3
The \((x_{n})_{n\in\mathbb{N}}\) generated by
strongly converges to \(x^{*}\), that is, the unique solution of the variational inequality problem
i.e. \(x^{*}\) is the unique solution of
Let us consider the least absolute shrinkage and selection operator problem, briefly the lasso problem. Let \(H=\mathbb{R}^{n}\); the lasso problem is the minimization problem defined as
where A is an \(m\times n\) matrix, \(x\in\mathbb{R}^{n}\), \(b\in\mathbb{R}^{m}\) [22]. We consider a lasso problem with solutions. This ill-posed problem can be regularized as
This regularization, called an elastic net, is studied in [23].
Taking into account Example 1.3, the proximal operator of \(\|\cdot\|_{1}\) on \(\mathbb{R}^{n}\) is given componentwise by
$$\bigl(\operatorname{prox}_{\lambda\|\cdot\|_{1}}x \bigr)_{i}=\operatorname{sign}(x_{i})\max \bigl(\vert x_{i}\vert -\lambda,0 \bigr). $$
In [22] the author proved the following.
Proposition 3.4
[22]
If g is a convex and Fréchet differentiable functional on H, a point \(x^{*}\) is a solution of the lasso problem if and only if
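The proximal fixed-point characterization of lasso solutions can be illustrated numerically. The sketch below is our own illustration: the instance A, b, the penalty γ, and the use of the plain proximal-gradient (ISTA) iteration are all assumptions, with \(\|A\|<1\) so that the inner map is nonexpansive:

```python
import numpy as np

def prox_l1(x, lam):
    """prox of lam*||.||_1: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Tiny lasso instance min_x gamma*||x||_1 + 0.5*||Ax - b||^2, with ||A|| < 1
# so that the inner map x -> x - A^T(Ax - b) is nonexpansive.
A = np.array([[0.6, 0.1],
              [0.1, 0.5],
              [0.0, 0.2]])
b = np.array([1.0, -0.2, 0.3])
gamma = 0.05

x = np.zeros(2)
for _ in range(2000):                     # plain proximal-gradient iteration
    x = prox_l1(x - A.T @ (A @ x - b), gamma)

# fixed-point characterization: x* solves the lasso problem iff
# x* = prox_{gamma||.||_1}(x* - A^T(A x* - b))
fp = prox_l1(x - A.T @ (A @ x - b), gamma)
print(np.allclose(x, fp, atol=1e-8))      # True
```

Here \(A^{T}A\) is positive definite, so the composed map is a contraction and the iterates reach the fixed point, which is exactly the proximal characterization of the lasso solution.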
Thus, by Theorem 2.8, we have the following.
Theorem 3.5
Pick two sequences such that
and \(\mu_{n}\to0\), as \(n\to\infty\). Moreover, suppose that
- (H1) \(\sum_{n=1}^{\infty}\mu_{n}=\infty\) and \(|\mu_{n}-\mu_{n-1}|=o(\mu_{n})\);
- (H2) \(|\alpha_{n}-\alpha_{n-1}|=o(\mu_{n})\).
Then \((x_{n})_{n\in\mathbb{N}}\) generated by
strongly converges to \(x^{*}\in C\), that is, the unique solution of
i.e. the minimum \(\|\cdot\|_{2}\)-norm solution of the lasso problem.
Proof
It is enough to choose \(S=\operatorname{prox}_{\gamma\|\cdot\|_{1}}(I-A^{*}A+A^{*}b)\) and \(W_{n}=P_{C}\). □
By Theorem 2.12, one can prove the following.
Theorem 3.6
Pick \(u\in H\). Let \(\mu_{n}=\frac{1}{n}\) and \(\alpha_{n}=\alpha>0\). Let \((W_{n})_{n\in\mathbb{N}}\) be such that \(\sup_{z\in B}\|W_{n}z-W_{n-1}z\|=o(\frac{1}{n})\), with \(B\subset H\) bounded. Then \((x_{n})_{n\in\mathbb{N}}\) generated by
strongly converges to \(x^{*}\), that is, the unique solution of the variational inequality problem
i.e. the solution of the lasso problem nearest to u.
References
Mann, WR: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506-510 (1953)
Halpern, B: Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 73, 957-961 (1967)
Ishikawa, S: Fixed points and iteration of a nonexpansive mapping in a Banach space. Proc. Am. Math. Soc. 59, 65-71 (1976)
Moudafi, A: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 241, 46-55 (2000)
Nakajo, K, Takahashi, W: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 279(2), 372-379 (2003)
Deimling, K: Nonlinear Functional Analysis. Dover, New York (2010) (first edition: Springer, Berlin (1985))
Baillon, JB, Haddad, G: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 26(2), 137-150 (1977)
Takahashi, W, Toyoda, M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 118(2), 417-428 (2003)
Xu, HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150(2), 360-378 (2011)
Hundal, HS: An alternating projection that does not converge in norm. Nonlinear Anal., Theory Methods Appl. 57(1), 35-61 (2004)
Atsushiba, S, Takahashi, W: Strong convergence theorems for a finite family of nonexpansive mappings and applications. B. N. Prasad birth centenary commemoration volume. Indian J. Math. 41(3), 435-453 (1999)
Marino, G, Muglia, L: On the auxiliary mappings generated by a family of mappings and solutions of variational inequalities problems. Optim. Lett. 9, 263-282 (2015)
Marino, G, Muglia, L, Yao, Y: The uniform asymptotical regularity of families of mappings and solutions of variational inequality problems. J. Nonlinear Convex Anal. 15(3), 477-492 (2014)
Shimoji, K, Takahashi, W: Strong convergence to common fixed points of infinite nonexpansive mappings and applications. Taiwan. J. Math. 5, 387-404 (2001)
Xu, HK, Kim, TH: Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 119(1), 185-201 (2003)
Cianciaruso, F, Marino, G, Muglia, L, Yao, Y: On a two-step algorithm for hierarchical fixed point problems and variational inequalities. J. Inequal. Appl. 2009, Article ID 208692 (2009). doi:10.1155/2009/208692
Moudafi, A, Maingé, P-E: Towards viscosity approximations of hierarchical fixed-points problems. Fixed Point Theory Appl. 2006, Article ID 95453 (2006)
Maingé, P-E, Moudafi, A: Strong convergence of an iterative method for hierarchical fixed-points problems. Pac. J. Optim. 3, 529-538 (2007)
Marino, G, Xu, HK: Explicit hierarchical fixed point approach to variational inequalities. J. Optim. Theory Appl. 149(1), 61-78 (2011)
Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2, 1-17 (2002)
Reich, S, Xu, H-K: An iterative approach to a constrained least squares problem. Abstr. Appl. Anal. 2003(8), 503-512 (2003)
Xu, H-K: Properties and iterative methods for the Lasso and its variants. Chin. Ann. Math., Ser. B 35(3), 501-518 (2014)
Zou, H, Hastie, T: Regularization and variable selection via the elastic net. J. R. Stat. Soc., Ser. B 67, 301-320 (2005)
Acknowledgements
Supported by Ministero dell’Universitá e della Ricerca of Italy. The authors are extremely grateful to the anonymous referees for their useful comments and suggestions.
Competing interests
The authors declare that there is no conflict of interests.
Authors’ contributions
All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.
Rights and permissions
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
About this article
Cite this article
Marino, G., Muglia, L. On the role of the coefficients in the strong convergence of a general type Mann iterative scheme. J Inequal Appl 2015, 118 (2015). https://doi.org/10.1186/s13660-015-0641-4