
A modified regularization method for finding zeros of monotone operators in Hilbert spaces

Abstract

We study a regularization method for solving the variational inclusion problem for the sum of two monotone operators in Hilbert spaces. A strong convergence theorem is established under relaxed conditions, which in particular improves and recovers the corresponding result of Qin et al. (Fixed Point Theory Appl. 2014:75, 2014). We also apply our main result to the convex minimization problem, the fixed point problem, and the variational inequality problem. Finally, we provide numerical examples supporting the main result.

1 Introduction

Let C be a nonempty subset of a real Hilbert space H. Define the domain and the range of an operator \(B:H\rightarrow2^{H}\) by \(D(B)=\{x\in H: Bx\neq\emptyset\}\) and \(R(B)=\bigcup\{Bx: x\in D(B)\}\), respectively. The inverse of B, denoted by \(B^{-1}\), is defined by \(x\in B^{-1}y\) if and only if \(y\in Bx\). We study the problem of finding \(\hat{x}\) such that

$$0\in A\hat{x}+B\hat{x}, $$

where \(A:C\rightarrow H\) is an operator and \(B:D(B)\subset H\rightarrow2^{H}\) is a set-valued operator. This problem is called the variational inclusion problem. Many typical problems arising in science, applied science, economics, and engineering, such as machine learning, image restoration, and signal recovery, can be cast in this form. In particular, it includes, as special cases, the variational inequality problem, the split feasibility problem, the linear inverse problem, and the following convex minimization problem:

$$\min_{x\in H} F(x)+G(x), $$

where \(F:H\rightarrow\mathbb{R}\) is a smooth convex function, and \(G:H\rightarrow\mathbb{R}\) is a non-smooth convex function. That is,

$$F(\hat{x})+G(\hat{x})=\min_{x\in H}F(x)+G(x)\quad \Leftrightarrow\quad 0\in \nabla F(\hat{x})+\partial G(\hat{x}), $$

where ∇F is the gradient of F and ∂G is the subdifferential of G defined by

$$\partial G(x)=\bigl\{ z\in H: \langle y-x,z\rangle+G(x)\leq G(y), \forall y\in H \bigr\} . $$

For \(r>0\), define the mapping \(T_{r}:C\rightarrow D(B)\) as follows:

$$ T_{r}=(I+rB)^{-1}(I-rA). $$
(1.1)

We see that

$$ T_{r}x=x\quad \Leftrightarrow\quad x=(I+rB)^{-1}(x-rAx)\quad \Leftrightarrow\quad x-rAx\in x+rBx\quad \Leftrightarrow\quad 0\in Ax+Bx, $$

which shows that the fixed point set of \(T_{r}\) coincides with \((A+B)^{-1}(0)\), the solution set of the variational inclusion problem. This suggests the following iteration process: \(x_{0}\in C\) and

$$x_{n+1}=(I+r_{n}B)^{-1}(x_{n}-r_{n}Ax_{n})=T_{r_{n}}x_{n}, \quad n\geq0, $$

where \(\{r_{n}\}\subset(0,\infty)\) and \(D(B)\subset C\). This method is called a forward-backward splitting algorithm [1, 2]. If \(A\equiv0\), then we obtain the proximal point algorithm [3–6] and if \(B\equiv0\), then we obtain the gradient method [7]. However, it is noted that the sequences generated by these schemes converge weakly in general. In the literature, many methods have been suggested to solve the variational inclusion problem for maximal monotone operators; see, e.g., [8–12].
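To make the scheme concrete, here is a minimal numerical sketch of the forward-backward iteration. The instance is our own illustration, not from the paper: \(A=\nabla F\) for \(F(x)=\frac{1}{2}\|x-b\|^{2}\) (which is 1-inverse strongly monotone) and \(B=\partial\|\cdot\|_{1}\), whose resolvent is componentwise soft-thresholding.

```python
import numpy as np

def soft_threshold(y, tau):
    # Resolvent (I + tau * d||.||_1)^{-1}: componentwise soft-thresholding.
    return np.sign(y) * np.maximum(np.abs(y) - tau, 0.0)

def forward_backward(A, resolvent, x0, r, n_iter=500):
    # x_{n+1} = (I + r B)^{-1}(x_n - r A x_n) = T_r x_n
    x = x0
    for _ in range(n_iter):
        x = resolvent(x - r * A(x), r)
    return x

# Hypothetical instance: A = grad F for F(x) = 0.5 * ||x - b||^2, which is
# 1-inverse strongly monotone, and B = d||.||_1.
b = np.array([3.0, -0.2, 1.0])
A = lambda x: x - b
x_star = forward_backward(A, soft_threshold, np.zeros(3), r=0.5)
# The zero of A + B here is the soft-threshold of b at level 1, i.e. (2, 0, 0).
print(x_star)
```

Note that the step size must stay in \((0,2\alpha)\); here \(\alpha=1\) and \(r=0.5\).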

Very recently, Qin et al. [13] proved the following theorem in Hilbert spaces.

Theorem Q

Let \(A:C\rightarrow H\) be an α-inverse strongly monotone mapping and let B be a maximal monotone operator on H. Assume that \(D(B)\subset C\) and \((A+B)^{-1}(0)\) is nonempty. Let \(f:C\rightarrow C\) be a fixed k-contraction and let \(J_{r_{n}}=(I+r_{n}B)^{-1}\). Let \(\{z_{n}\}\) be a sequence in C generated by the following process: \(z_{0}\in C\) and

$$ \begin{aligned} &w_{n} = \alpha_{n}f(z_{n})+(1- \alpha_{n})z_{n}, \\ &z_{n+1} = J_{r_{n}}(w_{n}-r_{n}Aw_{n}+e_{n}), \quad n\geq0, \end{aligned} $$
(1.2)

where \(\{\alpha_{n}\}\subset(0,1)\), \(\{e_{n}\}\subset H\), and \(\{r_{n}\}\subset(0,2\alpha)\). If the control sequences satisfy the following restrictions:

  1. (a)

    \(\alpha_{n}\rightarrow0\), \(\sum_{n=0}^{\infty}\alpha_{n}=\infty\) and \(\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_{n}|<\infty\);

  2. (b)

    \(0< a\leq r_{n}\leq b<2\alpha\) and \(\sum_{n=0}^{\infty}|r_{n+1}-r_{n}|<\infty\);

  3. (c)

    \(\sum_{n=0}^{\infty}\|e_{n}\|<\infty\).

Then \(\{z_{n}\}\) converges strongly to a point \(\overline{x}\in(A+B)^{-1}(0)\), where \(\overline{x}=P_{(A+B)^{-1}(0)}f(\overline{x})\).

In this paper, motivated by Qin et al. [13], we prove that the above theorem still holds even if the additional conditions that \(\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_{n}|<\infty\) and \(\sum_{n=0}^{\infty}|r_{n+1}-r_{n}|<\infty\) are removed. As a direct consequence, we obtain some results concerning the fixed point problem of strict pseudocontractions, the convex minimization problem and the variational inequality problem. We also provide examples as well as numerical results.

2 Preliminaries and lemmas

We now provide some basic concepts, definitions and lemmas which will be used in the sequel.

Let C be a nonempty, closed, and convex subset of a real Hilbert space H with norm \(\|\cdot\|\) and inner product \(\langle\cdot,\cdot\rangle\). For each \(x\in H\), there exists a unique nearest point in C, denoted by \(P_{C}x\), such that \(\|x-P_{C}x\|=\min_{y\in C}\|x-y\|\). Then \(P_{C}\) is called the metric projection of H onto C. For \(x\in H\), we know that

$$ \langle x-P_{C}x,y-P_{C}x\rangle\leq0 $$
(2.1)

for all \(y\in C\). Recall that the mapping \(T:C\rightarrow C\) is said to be

  1. (i)

    nonexpansive if \(\|Tx-Ty\|\leq\|x-y\|\) for all \(x,y\in C\);

  2. (ii)

    k-contractive if there exists \(0< k<1\) such that

    $$ \|Tx-Ty\|\leq k\|x-y\| $$

    for all \(x,y\in C\);

  3. (iii)

    firmly nonexpansive if

    $$ \|Tx-Ty\|^{2}\leq\|x-y\|^{2}-\bigl\Vert (I-T)x-(I-T)y\bigr\Vert ^{2} $$

    for all \(x,y\in C\).

  4. (iv)

    monotone if \(\langle Tx-Ty,x-y\rangle\geq0\) for all \(x,y\in C\);

  5. (v)

    α-inverse strongly monotone if there exists \(\alpha>0\) such that

    $$ \langle Tx-Ty,x-y\rangle\geq\alpha\|Tx-Ty\|^{2} $$

    for all \(x,y\in C\). We denote by \(F(T)\) the fixed points set of T, that is, \(F(T)=\{x\in C: x=Tx\}\).

A set-valued operator B is said to be monotone if, for all \(x,y\in D(B)\),

$$ \langle u-v,x-y \rangle\geq0 \quad \text{for all } u\in Bx, v\in By. $$

A monotone operator B is said to be maximal if \(R(I+rB)=H\) for all \(r>0\) (see Minty [14]). For a maximal monotone operator B on H and \(r>0\), we define the single-valued resolvent \(J_{r} : H\rightarrow D(B)\) by \(J_{r}=(I+rB)^{-1}\). It is well known that \(J_{r}\) is firmly nonexpansive and \(F(J_{r})=B^{-1}(0)\).
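For intuition, the firm nonexpansiveness of a resolvent can be checked numerically on a simple instance. The sketch below (an illustrative choice of ours, not from the paper) takes \(B=\partial|\cdot|\) on \(\mathbb{R}\), whose resolvent is the scalar soft-thresholding map.

```python
import numpy as np

def J(x, r):
    # Resolvent J_r = (I + r B)^{-1} for B = d|.| on R: scalar soft-thresholding.
    return np.sign(x) * max(abs(x) - r, 0.0)

rng = np.random.default_rng(0)
r = 0.7
for _ in range(1000):
    x, y = rng.uniform(-5.0, 5.0, size=2)
    Jx, Jy = J(x, r), J(y, r)
    # Firm nonexpansiveness: |Jx - Jy|^2 <= |x - y|^2 - |(I-J)x - (I-J)y|^2
    assert (Jx - Jy) ** 2 <= (x - y) ** 2 - ((x - Jx) - (y - Jy)) ** 2 + 1e-12
```

Every random pair satisfies the firm nonexpansiveness inequality, and \(F(J_{r})=\{0\}=B^{-1}(0)\) here.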

We now collect some crucial lemmas.

Lemma 2.1

[15]

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let the mapping \(A:C\rightarrow H\) be α-inverse strongly monotone and \(r>0\) be a constant. Then we have

$$ \bigl\Vert (I-r A)x-(I-r A)y\bigr\Vert ^{2}\leq\|x-y \|^{2}+r(r-2\alpha)\|Ax-Ay\|^{2} $$

for all \(x,y\in C\). In particular, if \(0< r\leq2\alpha\), then \(I-r A\) is nonexpansive.

Lemma 2.2

[9]

Let \(A:C\rightarrow H\) be a mapping, let \(B:D(B)\subset H\rightarrow2^{H}\) be a monotone operator, and let \(T_{r}\) be defined by (1.1). Then \(\|x-T_{s}x\| \leq2\|x-T_{r}x\|\) for all \(0< s\leq r\) and \(x\in C\).

Lemma 2.3

[16]

Let C be a nonempty, closed, and convex subset of a Hilbert space H, and \(T:C\rightarrow C\) be a nonexpansive mapping with \(F(T)\neq\emptyset\). If \(x_{n}\rightharpoonup x\) and \(\|x_{n}-Tx_{n}\|\rightarrow0\), then \(x\in F(T)\).

Lemma 2.4

[17]

Let \(\{a_{n}\}\) and \(\{c_{n}\}\) be sequences of nonnegative real numbers such that

$$ a_{n+1}\leq(1-\delta_{n})a_{n}+b_{n}+c_{n}, \quad n\geq0, $$

where \(\{\delta_{n}\}\) is a sequence in \((0,1)\) and \(\{b_{n}\}\) is a real sequence. Assume \(\sum_{n=0}^{\infty}c_{n}<\infty\). Then the following results hold:

  1. (i)

    If \(b_{n}\leq\delta_{n}M\) for some \(M\geq0\), then \(\{a_{n}\}\) is a bounded sequence.

  2. (ii)

    If \(\sum_{n=0}^{\infty}\delta_{n}=\infty\) and \(\limsup_{n\rightarrow\infty}b_{n}/\delta_{n}\leq0\), then \(\lim_{n\rightarrow\infty}a_{n}=0\).

We need the following crucial lemma proved by He-Yang [18].

Lemma 2.5

[18]

Assume \(\{s_{n}\}\) is a sequence of nonnegative real numbers such that

$$ s_{n+1}\leq(1-\gamma_{n})s_{n}+ \gamma_{n}\delta_{n},\quad n\geq0 $$

and

$$ s_{n+1}\leq s_{n}-\eta_{n}+\rho_{n}, \quad n\geq0, $$

where \(\{\gamma_{n}\}\) is a sequence in \((0,1)\), \(\{\eta_{n}\}\) is a sequence of nonnegative real numbers, and \(\{\delta_{n}\}\) and \(\{\rho_{n}\}\) are real sequences such that

  1. (i)

    \(\sum_{n=0}^{\infty}\gamma_{n}=\infty\),

  2. (ii)

    \(\lim_{n\rightarrow\infty}\rho_{n}=0\),

  3. (iii)

\(\lim_{k\rightarrow\infty}\eta_{n_{k}}=0\) implies \(\limsup_{k\rightarrow\infty}\delta_{n_{k}}\leq0\) for any subsequence \(\{n_{k}\}\) of \(\{n\}\).

Then \(\lim_{n\rightarrow\infty}s_{n}=0\).

3 Main results

In this section, we present the main theorem of this paper.

Theorem 3.1

Let \(A:C\rightarrow H\) be an α-inverse strongly monotone mapping and let B be a maximal monotone operator on H such that \(D(B)\subset C\) and \((A+B)^{-1}(0)\) is nonempty. Let \(f:C\rightarrow C\) be a k-contraction. Assume that \(\{\alpha_{n}\}\subset(0,1)\), \(\{ e_{n}\}\subset H\), and \(\{r_{n}\}\subset(0,2\alpha)\) with the following restrictions:

  1. (a)

    \(\alpha_{n}\rightarrow0\) and \(\sum_{n=0}^{\infty}\alpha_{n}=\infty\);

  2. (b)

    \(0< a\leq r_{n}\leq b<2\alpha\);

  3. (c)

    \(\sum_{n=0}^{\infty}\|e_{n}\|<\infty\) or \(\|e_{n}\|/\alpha _{n}\rightarrow0\).

Then the sequence \(\{z_{n}\}\) generated by (1.2) converges strongly to a point \(\overline{x}\in(A+B)^{-1}(0)\), where \(\overline{x}=P_{(A+B)^{-1}(0)}f(\overline{x})\).

Proof

Let \(\{x_{n}\}\) be a sequence generated by \(x_{0}\in C\) and

$$\begin{aligned}& y_{n} = \alpha_{n}f(x_{n})+(1- \alpha_{n})x_{n}, \\& x_{n+1} = J_{r_{n}}(y_{n}-r_{n}Ay_{n}), \quad n\geq0. \end{aligned}$$

Firstly, we show that \(\{x_{n}\}\) and \(\{z_{n}\}\) are asymptotically equivalent, that is, \(\|x_{n}-z_{n}\|\rightarrow0\). Indeed,

$$\begin{aligned} \|y_{n}-w_{n}\| =&\bigl\Vert \alpha_{n} \bigl(f(x_{n})-f(z_{n})\bigr)+(1-\alpha _{n}) (x_{n}-z_{n})\bigr\Vert \\ \leq&\alpha_{n} k\|x_{n}-z_{n}\|+(1- \alpha_{n})\|x_{n}-z_{n}\| \\ =& \bigl(1-\alpha_{n}(1-k) \bigr)\|x_{n}-z_{n} \|. \end{aligned}$$

Using Lemma 2.1, condition (b), and the fact that \(J_{r_{n}}\) is nonexpansive, we obtain

$$\begin{aligned} \|x_{n+1}-z_{n+1}\| =&\bigl\Vert J_{r_{n}}(y_{n}-r_{n}Ay_{n})-J_{r_{n}}(w_{n}-r_{n}Aw_{n}+e_{n}) \bigr\Vert \\ \leq&\bigl\Vert (y_{n}-r_{n}Ay_{n})-(w_{n}-r_{n}Aw_{n}+e_{n}) \bigr\Vert \\ \leq&\|y_{n}-w_{n}\|+\|e_{n}\| \\ \leq& \bigl(1-\alpha_{n}(1-k) \bigr)\|x_{n}-z_{n} \|+\|e_{n}\|. \end{aligned}$$

Applying Lemma 2.4(ii) with conditions (a) and (c), we conclude that \(\|x_{n}-z_{n}\|\rightarrow0\).

On the other hand, it can be checked that \(P_{(A+B)^{-1}(0)}f\) is a contraction. So there exists a unique point \(\overline{x}\in C\) such that

$$ \overline{x}=P_{(A+B)^{-1}(0)}f(\overline{x}). $$
(3.1)

To finish our proof, it suffices to show that \(x_{n}\rightarrow \overline{x}\) as \(n\rightarrow\infty\).

We next show that \(\{x_{n}\}\) is bounded. Fixing \(p\in(A+B)^{-1}(0)\), we obtain

$$\begin{aligned} \|y_{n}-p\| =&\bigl\Vert \alpha_{n}\bigl(f(x_{n})-f(p) \bigr)+\alpha_{n}\bigl(f(p)-p\bigr)+(1-\alpha _{n}) (x_{n}-p)\bigr\Vert \\ \leq&\alpha_{n} k\|x_{n}-p\|+\alpha_{n}\bigl\Vert f(p)-p\bigr\Vert +(1-\alpha_{n})\| x_{n}-p\| \\ =& \bigl(1-\alpha_{n}(1-k) \bigr)\|x_{n}-p\|+ \alpha_{n}\bigl\Vert f(p)-p\bigr\Vert . \end{aligned}$$

It follows that

$$\begin{aligned} \|x_{n+1}-p\| =&\bigl\Vert J_{r_{n}}(y_{n}-r_{n}Ay_{n})-J_{r_{n}}(p-r_{n}Ap) \bigr\Vert \\ \leq&\|y_{n}-p\| \\ \leq& \bigl(1-\alpha_{n}(1-k) \bigr)\|x_{n}-p\|+ \alpha_{n}\bigl\Vert f(p)-p\bigr\Vert . \end{aligned}$$

Hence \(\{x_{n}\}\) is bounded by Lemma 2.4(i). So are \(\{f(x_{n})\} \) and \(\{y_{n}\}\). Observing

$$\begin{aligned} \|y_{n}-\overline{x}\|^{2} =&\alpha_{n} \bigl\langle f(x_{n})-f(\overline{x}),y_{n}-\overline{x} \bigr\rangle +\alpha_{n} \bigl\langle f(\overline{x})-\overline{x},y_{n}- \overline{x} \bigr\rangle +(1-\alpha _{n}) \langle x_{n}- \overline{x},y_{n}-\overline{x} \rangle \\ \leq&\alpha_{n}k\|x_{n}-\overline{x}\|\|y_{n}- \overline{x}\|+\alpha _{n} \bigl\langle f(\overline{x})- \overline{x},y_{n}-\overline{x} \bigr\rangle +(1-\alpha _{n}) \|x_{n}-\overline{x}\|\|y_{n}-\overline{x}\| \\ =& \bigl(1-\alpha_{n}(1-k) \bigr)\|x_{n}-\overline{x}\| \|y_{n}-\overline {x}\|+\alpha_{n} \bigl\langle f( \overline{x})-\overline{x},y_{n}-\overline{x} \bigr\rangle \\ \leq&\frac{1}{2} \bigl(1-\alpha_{n}(1-k) \bigr) \bigl( \|x_{n}-\overline{x}\| ^{2}+\|y_{n}-\overline{x} \|^{2} \bigr)+\alpha_{n} \bigl\langle f(\overline{x})- \overline{x},y_{n}-\overline{x} \bigr\rangle , \end{aligned}$$

we have

$$ \|y_{n}-\overline{x}\|^{2}\leq \biggl(1-\frac{2\alpha_{n}(1-k)}{1+\alpha _{n}(1-k)} \biggr)\|x_{n}-\overline{x}\|^{2}+\frac{2\alpha_{n}}{1+\alpha _{n}(1-k)} \bigl\langle f(\overline{x})-\overline{x},y_{n}-\overline{x} \bigr\rangle . $$

So, by Lemma 2.1 and the firm nonexpansiveness of \(J_{r_{n}}\), we have

$$\begin{aligned} \|x_{n+1}-\overline{x}\|^{2} =&\bigl\Vert J_{r_{n}}(y_{n}-r_{n}Ay_{n})-J_{r_{n}}( \overline{x}-r_{n}A\overline {x})\bigr\Vert ^{2} \\ \leq&\bigl\Vert (y_{n}-r_{n}Ay_{n})-( \overline{x}-r_{n}A\overline{x})\bigr\Vert ^{2} \\ &{}-\bigl\Vert (I-J_{r_{n}}) (y_{n}-r_{n}Ay_{n})-(I-J_{r_{n}}) (\overline {x}-r_{n}A\overline{x})\bigr\Vert ^{2} \\ \leq&\|y_{n}-\overline{x}\|^{2}-r_{n}(2 \alpha-r_{n})\|Ay_{n}-A\overline {x}\|^{2} \\ &{}-\bigl\Vert (I-J_{r_{n}}) (y_{n}-r_{n}Ay_{n})-(I-J_{r_{n}}) (\overline {x}-r_{n}A\overline{x})\bigr\Vert ^{2} \\ \leq& \biggl(1-\frac{2\alpha_{n}(1-k)}{1+\alpha_{n}(1-k)} \biggr)\| x_{n}-\overline{x} \|^{2}+\frac{2\alpha_{n}}{1+\alpha_{n}(1-k)} \bigl\langle f(\overline{x})- \overline{x},y_{n}-\overline{x} \bigr\rangle \\ &{} -r_{n}(2\alpha- r_{n})\|Ay_{n}-A \overline{x}\|^{2} \\ &{}-\bigl\Vert (I-J_{r_{n}}) (y_{n}-r_{n}Ay_{n})-(I-J_{r_{n}}) ( \overline {x}-r_{n}A\overline{x})\bigr\Vert ^{2}. \end{aligned}$$

This implies that

$$ \|x_{n+1}-\overline{x}\|^{2} \leq \biggl(1- \frac{2\alpha _{n}(1-k)}{1+\alpha_{n}(1-k)} \biggr)\|x_{n}-\overline{x}\|^{2}+ \frac {2\alpha_{n}}{1+\alpha_{n}(1-k)} \bigl\langle f(\overline{x})-\overline{x},y_{n}- \overline{x} \bigr\rangle $$
(3.2)

and

$$\begin{aligned} \|x_{n+1}-\overline{x}\|^{2} \leq& \|x_{n}-\overline{x}\| ^{2}-r_{n}(2\alpha- r_{n})\|Ay_{n}-A\overline{x}\|^{2} \\ &{}- \bigl\Vert (I-J_{r_{n}}) (y_{n}-r_{n}Ay_{n})-(I-J_{r_{n}}) (\overline {x}-r_{n}A\overline{x})\bigr\Vert ^{2} \\ &{}+ \frac{2\alpha_{n}}{1+\alpha_{n}(1-k)}\bigl\Vert f(\overline{x})-\overline {x}\bigr\Vert \|y_{n}-\overline{x}\|. \end{aligned}$$
(3.3)

We set, for all \(n\geq0\), \(s_{n}=\|x_{n}-\overline{x}\|^{2}\), \(\gamma_{n}=\frac{2\alpha_{n}(1-k)}{1+\alpha_{n}(1-k)}\), \(\delta_{n}=\frac{1}{1-k} \langle f(\overline{x})-\overline{x},y_{n}-\overline{x} \rangle\), \(\rho_{n}=\frac{2\alpha_{n}}{1+\alpha_{n}(1-k)}\|f(\overline{x})-\overline{x}\|\|y_{n}-\overline{x}\|\), and \(\eta_{n}=r_{n}(2\alpha- r_{n})\|Ay_{n}-A\overline{x}\|^{2}+\|(I-J_{r_{n}})(y_{n}-r_{n}Ay_{n})-(I-J_{r_{n}})(\overline{x}-r_{n}A\overline{x})\|^{2}\). We can check that these sequences satisfy conditions (i) and (ii) in Lemma 2.5. Then (3.2) and (3.3) can be rewritten as the following inequalities:

$$ s_{n+1}\leq(1-\gamma_{n})s_{n}+ \gamma_{n}\delta_{n},\quad n\geq0 $$

and

$$ s_{n+1}\leq s_{n}-\eta_{n}+\rho_{n}, \quad n\geq0. $$

To complete the proof, we verify that condition (iii) in Lemma 2.5 is satisfied. Let \(\{n_{k}\}\subset\{n\}\) be such that \(\eta_{n_{k}}\rightarrow0\). Then, by condition (b), we have

$$ \lim_{k\rightarrow\infty}\|Ay_{n_{k}}-A\overline{x}\|=0 $$

and

$$ \lim_{k\rightarrow\infty}\bigl\Vert (I-J_{r_{n_{k}}}) (y_{n_{k}}-r_{n_{k}}Ay_{n_{k}})-(I-J_{r_{n_{k}}}) ( \overline {x}-r_{n_{k}}A\overline{x})\bigr\Vert =0. $$

Hence we obtain

$$ \lim_{k\rightarrow\infty}\bigl\Vert y_{n_{k}}-J_{r_{n_{k}}}(y_{n_{k}}-r_{n_{k}}Ay_{n_{k}}) \bigr\Vert =0. $$

By Lemma 2.2 and condition (b), we have

$$ \bigl\Vert J_{a}(y_{n_{k}}-aAy_{n_{k}})-y_{n_{k}} \bigr\Vert \leq2\bigl\Vert J_{r_{n_{k}}}(y_{n_{k}}-r_{n_{k}}Ay_{n_{k}})-y_{n_{k}} \bigr\Vert \rightarrow0, $$

where \(J_{a}=(I+aB)^{-1}\). Since \(\{y_{n}\}\) is bounded and \(J_{a}(I-aA)\) is nonexpansive, Lemma 2.3 gives \(\omega_{w}(y_{n_{k}})\subset(A+B)^{-1}(0)\), where \(\omega_{w}(y_{n_{k}})\) denotes the set of weak cluster points of \(\{y_{n_{k}}\}\). Hence

$$ \limsup_{k\rightarrow\infty}\bigl\langle f(\overline{x})-\overline {x},y_{n_{k}}-\overline{x}\bigr\rangle =\bigl\langle f(\overline{x})- \overline {x},y-\overline{x}\bigr\rangle \leq0, $$

where \(y\in\omega_{w}(y_{n_{k}})\) is a weak cluster point along which the limit superior is attained; the last inequality follows from (2.1) and (3.1). It follows that \(\limsup_{k\rightarrow\infty}\delta_{n_{k}}\leq0\). So, by Lemma 2.5, we conclude that \(x_{n}\rightarrow\overline{x}\), and hence \(z_{n}\rightarrow\overline{x}\), as \(n\rightarrow\infty\). This completes the proof. □

Remark 3.2

Theorem 3.1 removes the additional conditions \(\sum_{n=0}^{\infty }|\alpha_{n+1}-\alpha_{n}|<\infty\) and \(\sum_{n=0}^{\infty}|r_{n+1}-r_{n}|<\infty\) imposed in the main theorem of Qin et al. [13].

4 Applications and numerical examples

In this section, we give some applications of our result to the variational inequality problem, the fixed point problem of strict pseudocontractions and the convex minimization problem.

4.1 Variational inequality problem

Let C be a nonempty, closed, and convex subset of a Hilbert space H. The variational inequality problem is to find \(x\in C\) such that

$$ \langle Ax,y-x\rangle\geq0, \quad \forall y\in C. $$
(4.1)

The solution set of (4.1) is denoted by \(\operatorname{VI}(A,C)\). It is well known that \(F (P_{C}(I-rA) )=\operatorname{VI}(A,C)\) for all \(r>0\). Define the indicator function of C, denoted by \(i_{C}\), as \(i_{C}(x)=0\) if \(x\in C\) and \(i_{C}(x)=\infty\) if \(x\notin C\). We see that \(\partial i_{C}\) is maximal monotone. So, for \(r>0\), we can define \(J_{r}=(I+r\partial i_{C})^{-1}\). Moreover, \(x=J_{r}y\) if and only if \(x=P_{C}y\). Hence we obtain the following result.

Theorem 4.1

Let \(A:C\rightarrow H\) be an α-inverse strongly monotone mapping such that \(\operatorname{VI}(A,C)\) is nonempty. Let \(f:C\rightarrow C\) be a k-contraction. Let \(\{z_{n}\}\) be a sequence in C defined by \(z_{0}\in C\) and

$$ \begin{aligned} &w_{n} = \alpha_{n}f(z_{n})+(1- \alpha_{n})z_{n}, \\ &z_{n+1} = P_{C}(w_{n}-r_{n}Aw_{n}+e_{n}), \quad n\geq0, \end{aligned} $$
(4.2)

where \(\{\alpha_{n}\}\subset(0,1)\), \(\{e_{n}\}\subset H\), and \(\{r_{n}\}\subset(0,2\alpha)\). Assume that

  1. (a)

    \(\alpha_{n}\rightarrow0\) and \(\sum_{n=0}^{\infty}\alpha_{n}=\infty\);

  2. (b)

    \(0< a\leq r_{n}\leq b<2\alpha\);

  3. (c)

    \(\sum_{n=0}^{\infty}\|e_{n}\|<\infty\) or \(\|e_{n}\|/\alpha _{n}\rightarrow0\).

Then \(\{z_{n}\}\) converges strongly to a point \(\overline{x}\in \operatorname{VI}(A,C)\), where \(\overline{x}=P_{\operatorname{VI}(A,C)}f(\overline{x})\).
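A minimal sketch of scheme (4.2) on a hypothetical instance of our own: C is a box, so \(P_{C}\) (the resolvent of \(\partial i_{C}\)) is a componentwise clip; \(A(x)=x-b\) is 1-inverse strongly monotone; \(f(x)=x/5\); and \(e_{n}=0\).

```python
import numpy as np

# Hypothetical instance of scheme (4.2): C = [-1, 1]^3, so the resolvent of
# d i_C is the componentwise clip P_C; A(x) = x - b is 1-inverse strongly
# monotone; f(x) = x/5 is a (1/5)-contraction; e_n = 0.
P_C = lambda x: np.clip(x, -1.0, 1.0)
b = np.array([2.0, 0.3, -4.0])
A = lambda x: x - b

z = np.zeros(3)
for n in range(100000):
    alpha, r = 1.0 / (n + 2), 0.5   # alpha_n -> 0, sum alpha_n = inf; r_n in (0, 2)
    w = alpha * (z / 5.0) + (1 - alpha) * z
    z = P_C(w - r * A(w))
print(z)  # approaches the unique VI solution P_C(b) = (1, 0.3, -1)
```

Since \(\operatorname{VI}(A,C)\) is the singleton \(\{P_{C}(b)\}\) here, the limit is independent of the choice of contraction f.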

4.2 Fixed point problem of strict pseudocontractions

A mapping \(T:C\rightarrow C\) is called β-strictly pseudocontractive if there exists \(\beta\in[0,1)\) such that

$$\|Tx-Ty\|^{2}\leq\|x-y\|^{2}+\beta\bigl\Vert (I-T)x-(I-T)y \bigr\Vert ^{2} $$

for all \(x,y\in C\). It is well known that if T is β-strictly pseudocontractive, then \(I-T\) is \(\frac{1-\beta}{2}\)-inverse strongly monotone. Moreover, by putting \(A=I-T\), we have \(F(T)=\operatorname{VI}(A,C)\). So we immediately obtain the following result.

Theorem 4.2

Let \(T:C\rightarrow C\) be a β-strict pseudocontraction such that \(F(T)\neq\emptyset\) and let \(f:C\rightarrow C\) be a k-contraction. Let \(\{z_{n}\}\) be a sequence in C defined by \(z_{0}\in C\) and

$$ \begin{aligned} &w_{n} = \alpha_{n}f(z_{n})+(1- \alpha_{n})z_{n}, \\ &z_{n+1} = P_{C} \bigl((1-r_{n})w_{n}+r_{n}Tw_{n}+e_{n} \bigr),\quad n\geq0, \end{aligned} $$
(4.3)

where \(\{\alpha_{n}\}\subset(0,1)\), \(\{e_{n}\}\subset H\), and \(\{r_{n}\}\subset(0,1-\beta)\). Assume that

  1. (a)

    \(\alpha_{n}\rightarrow0\) and \(\sum_{n=0}^{\infty}\alpha_{n}=\infty\);

  2. (b)

    \(0< a\leq r_{n}\leq b<1-\beta\);

  3. (c)

    \(\sum_{n=0}^{\infty}\|e_{n}\|<\infty\) or \(\|e_{n}\|/\alpha _{n}\rightarrow0\).

Then \(\{z_{n}\}\) converges strongly to a point \(\overline{x}\in F(T)\), where \(\overline{x}=P_{F(T)}f(\overline{x})\).
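As a sanity check of scheme (4.3), the sketch below uses a hypothetical instance of our own: \(C=\mathbb{R}^{2}\) (so \(P_{C}\) is the identity), T a rotation by 90°, which is nonexpansive and hence 0-strictly pseudocontractive with \(F(T)=\{0\}\), \(f(x)=x/5\), and \(e_{n}=0\).

```python
import numpy as np

# Hypothetical instance of scheme (4.3): C = R^2 (P_C = identity) and
# T = rotation by 90 degrees, a 0-strict pseudocontraction with F(T) = {0}.
R = np.array([[0.0, -1.0], [1.0, 0.0]])   # T w = R @ w

z = np.array([4.0, -3.0])
for n in range(500):
    alpha, r = 1.0 / (n + 2), 0.5   # alpha_n -> 0, sum alpha_n = inf; r_n in (0, 1 - beta)
    w = alpha * (z / 5.0) + (1 - alpha) * z
    z = (1 - r) * w + r * (R @ w)   # e_n = 0, P_C = identity
print(np.linalg.norm(z))  # tends to 0 = P_{F(T)} f(0)
```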

4.3 Convex minimization problem

We next consider the following convex minimization problem:

$$ \min_{x\in H} F(x)+G(x), $$

where \(F:H\rightarrow\mathbb{R}\) is a convex and differentiable function and \(G:H\rightarrow\mathbb{R}\) is a convex and lower semi-continuous function. It is well known that if ∇F is \((1/L)\)-Lipschitz continuous, then it is L-inverse strongly monotone [19]. Moreover, ∂G is maximal monotone [20]. Putting \(A=\nabla F\) and \(B=\partial G\), we then obtain the following result.

Theorem 4.3

Let H be a Hilbert space. Let \(F:H\rightarrow\mathbb{R}\) be a convex and differentiable function with \((1/L)\)-Lipschitz continuous gradient ∇F and \(G:H\rightarrow\mathbb{R}\) be a convex and lower semi-continuous function such that \(\Omega:=(\nabla F+\partial G)^{-1}(0)\neq\emptyset\). Let \(f:H\rightarrow H\) be a k-contraction. Let \(\{z_{n}\}\) be generated by \(z_{0}\in H\) and

$$ \begin{aligned} &w_{n} = \alpha_{n}f(z_{n})+(1- \alpha_{n})z_{n}, \\ &z_{n+1} = J_{r_{n}} \bigl(w_{n}-r_{n} \nabla F(w_{n})+e_{n} \bigr),\quad n\geq0, \end{aligned} $$
(4.4)

where \(J_{r_{n}}=(I+r_{n}\partial G)^{-1}\), \(\{\alpha_{n}\}\subset (0,1)\), \(\{e_{n}\}\subset H\), and \(\{r_{n}\}\subset(0,2L)\). Assume that

  1. (a)

    \(\alpha_{n}\rightarrow0\) and \(\sum_{n=0}^{\infty}\alpha_{n}=\infty\);

  2. (b)

    \(0< a\leq r_{n}\leq b<2L\);

  3. (c)

    \(\sum_{n=0}^{\infty}\|e_{n}\|<\infty\) or \(\|e_{n}\|/\alpha _{n}\rightarrow0\).

Then \(\{z_{n}\}\) converges strongly to a minimizer \(\overline{x}\) of \(F+G\), where \(\overline{x}=P_{\Omega}f(\overline{x})\).

We next provide the example as well as its numerical results.

Example 4.4

Let \(H=\mathbb{R}^{3}\). Minimize the following \(\ell_{1}\)-least square problem:

$$ \min_{x\in\mathbb{R}^{3}} \|x\|_{1}+\frac{1}{2}\|x\|_{2}^{2}+(2,1,3)x-5, $$

where \(x=(t, u, v)^{T}\).

Let \(F(x)=\frac{1}{2}\|x\|_{2}^{2}+(2,1,3)x-5\) and \(G(x)=\|x\|_{1}\). Then \(\nabla F(x)=(t+2,u+1,v+3)^{T}\). Moreover, ∇F is 1-Lipschitz continuous and hence it is 1-inverse strongly monotone.

From [21] we know that, for \(r>0\),

$$\begin{aligned}& (I+r\partial G)^{-1}(x) \\& \quad =\bigl(\max\bigl\{ \vert t\vert -r,0\bigr\} \operatorname{sign}(t),\max\bigl\{ \vert u\vert -r,0\bigr\} \operatorname{sign}(u),\max\bigl\{ \vert v\vert -r,0\bigr\} \operatorname{sign}(v)\bigr)^{T}. \end{aligned}$$

Let \(z_{n}=(t_{n},u_{n},v_{n})^{T}\). Set \(f(x)=\frac{x}{5}\) and choose \(\alpha_{n}=\frac{10^{-6}}{n+1}\), \(r_{n}=0.5\), and \(e_{n}=\frac{1}{(n+1)^{3}}(1,1,1)^{T}\). For the initial point \(z_{0}=(t_{0},u_{0},v_{0})^{T}=(-3, 10, 4)^{T}\), computing \(\{z_{n}\}\) by algorithm (4.4), we obtain the numerical results, up to an error of \(10^{-7}\), in Table 1.

Table 1 Numerical results of Example 4.4 for iteration process (4.4)

From Table 1, we see that \(\{z_{n}\}\) converges to \((-1, 0, -2)^{T}\), which is the minimizer of \(F+G\), with minimum value −7.5.
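Iteration (4.4) for Example 4.4 is short enough to reproduce. The sketch below uses the stated choices of f, \(\alpha_{n}\), \(r_{n}\), and \(e_{n}\); the iteration count is our own.

```python
import numpy as np

def soft_threshold(y, r):
    # (I + r dG)^{-1} for G = ||.||_1: componentwise soft-thresholding.
    return np.sign(y) * np.maximum(np.abs(y) - r, 0.0)

c = np.array([2.0, 1.0, 3.0])
grad_F = lambda x: x + c          # F(x) = 0.5*||x||_2^2 + (2,1,3)x - 5

z = np.array([-3.0, 10.0, 4.0])   # z_0
for n in range(200):
    alpha = 1e-6 / (n + 1)
    r = 0.5
    e = np.ones(3) / (n + 1) ** 3
    w = alpha * (z / 5.0) + (1 - alpha) * z
    z = soft_threshold(w - r * grad_F(w) + e, r)
print(np.round(z, 5))  # approaches the minimizer (-1, 0, -2)
```

The iterates approach \((-1,0,-2)^{T}\), consistent with Table 1.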

References

  1. Lions, PL, Mercier, B: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964-979 (1979)


  2. Passty, GB: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72, 383-390 (1979)


  3. Brézis, H, Lions, PL: Produits infinis de resolvantes. Isr. J. Math. 29, 329-345 (1978)


  4. Güler, O: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 29, 403-419 (1991)


  5. Martinet, B: Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Oper. 4, 154-158 (1970)


  6. Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877-898 (1976)


  7. Dunn, JC: Convexity, monotonicity, and gradient processes in Hilbert space. J. Math. Anal. Appl. 53, 145-158 (1976)


  8. Combettes, PL: Iterative construction of the resolvent of a sum of maximal monotone operators. J. Convex Anal. 16, 727-748 (2009)


  9. López, G, Martín-Márquez, V, Wang, F, Xu, HK: Forward-backward splitting methods for accretive operators in Banach spaces. Abstr. Appl. Anal. 2012, Article ID 109236 (2012)


  10. Takahashi, W: Viscosity approximation methods for resolvents of accretive operators in Banach spaces. J. Fixed Point Theory Appl. 1, 135-147 (2007)


  11. Wang, F, Cui, H: On the contraction-proximal point algorithms with multi-parameters. J. Glob. Optim. 54, 485-491 (2012)


  12. Xu, HK: A regularization method for the proximal point algorithm. J. Glob. Optim. 36, 115-125 (2006)


  13. Qin, X, Cho, SY, Wang, L: A regularization method for treating zero points of the sum of two monotone operators. Fixed Point Theory Appl. 2014, 75 (2014)


  14. Minty, GJ: On the maximal domain of a monotone function. Mich. Math. J. 8, 135-137 (1961)


  15. Nadezhkina, N, Takahashi, W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 128, 191-201 (2006)


  16. Browder, FE: Nonexpansive nonlinear operators in a Banach space. Proc. Natl. Acad. Sci. USA 54, 1041-1044 (1965)


  17. Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240-256 (2002)


  18. He, S, Yang, C: Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, Article ID 942315 (2013)


  19. Baillon, JB, Haddad, G: Quelques proprietes des operateurs angle-bornes et cycliquement monotones. Isr. J. Math. 26, 137-150 (1977)


  20. Rockafellar, RT: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209-216 (1970)


  21. Hale, ET, Yin, W, Zhang, Y: A fixed-point continuation method for \(\ell_{1}\)-regularized minimization with applications to compressed sensing. Tech. rep., CAAM TR07-07 (2007)


Acknowledgements

The first author would like to thank University of Phayao. The second author and the corresponding author would like to thank the Thailand Research Fund under the project RTA5780007 and Chiang Mai University.

Author information


Corresponding author

Correspondence to Suthep Suantai.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Cholamjiak, P., Cholamjiak, W. & Suantai, S. A modified regularization method for finding zeros of monotone operators in Hilbert spaces. J Inequal Appl 2015, 220 (2015). https://doi.org/10.1186/s13660-015-0739-8

