# Iterative algorithms for finding the zeroes of sums of operators

## Abstract

Let ${H}_{1}$, ${H}_{2}$ be real Hilbert spaces, $C\subseteq {H}_{1}$ be a nonempty closed convex set with $0\notin C$, and let $A:{H}_{1}\to {H}_{2}$, $B:{H}_{1}\to {H}_{2}$ be two bounded linear operators. We consider the problem of finding $x\in C$ such that $Ax=-Bx$ (equivalently, $0=Ax+Bx$). Recently, Eckstein and Svaiter presented several splitting methods for finding a zero of the sum of monotone operators A and B. However, those algorithms depend heavily on the maximal monotonicity of A and B. In this paper, we describe algorithms for finding a zero of the sum of A and B that do not require A and B to be maximal monotone.

## 1 Introduction and preliminaries

Let ${H}_{1}$, ${H}_{2}$, ${H}_{3}$ be real Hilbert spaces, $C\subseteq {H}_{1}$ be a nonempty closed convex set with $0\notin C$. Let $A:{H}_{1}\to {H}_{2}$, $B:{H}_{1}\to {H}_{2}$ be two bounded linear operators. We consider the interesting problem of finding $x\in C$ such that

$Ax=-Bx.$

(1.1)

For convenience, we denote the problem by $\mathcal{P}$.

For $\mathcal{P}$ it is generally difficult to find zeroes of A and B separately. To overcome this difficulty, Eckstein and Svaiter [1] presented splitting methods for finding a zero of the sum of monotone operators A and B. Three basic families of splitting methods for this problem were identified in [1]:

1. (i)

The Douglas/Peaceman-Rachford family, whose iteration is given by

$\begin{array}{c}{y}_{k}=\left[2{\left(I+\xi B\right)}^{-1}-I\right]{x}_{k},\hfill \\ {z}_{k}=\left[2{\left(I+\xi A\right)}^{-1}-I\right]{y}_{k},\hfill \\ {x}_{k+1}=\left(1-{\rho }_{k}\right){x}_{k}+{\rho }_{k}{z}_{k},\hfill \end{array}$

where $\xi >0$ is a fixed scalar, and $\left\{{\rho }_{k}\right\}\subseteq \left(0,1\right]$ is a sequence of relaxation parameters.

2. (ii)

The double backward splitting method, with iteration given by

$\begin{array}{c}{y}_{k}={\left(I+{\lambda }_{k}B\right)}^{-1}{x}_{k},\hfill \\ {x}_{k+1}={\left(I+{\lambda }_{k}A\right)}^{-1}{y}_{k},\hfill \end{array}$

where $\left\{{\lambda }_{k}\right\}\subseteq \left(0,\mathrm{\infty }\right)$ is a sequence of regularization parameters.

3. (iii)

The forward-backward splitting method, with iteration given by

$\begin{array}{c}{y}_{k}=\left(I-{\lambda }_{k}A\right){x}_{k},\hfill \\ {x}_{k+1}={\left(I+{\lambda }_{k}B\right)}^{-1}{y}_{k},\hfill \end{array}$

where $\left\{{\lambda }_{k}\right\}\subseteq \left(0,\mathrm{\infty }\right)$ is a sequence of regularization parameters.
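To make scheme (iii) concrete, here is a small numerical sketch (not from the paper; the data are illustrative assumptions): taking $A=\mathrm{\nabla }f$ for $f\left(x\right)=\frac{1}{2}{\parallel x-b\parallel }^{2}$ and B the normal cone of a box C, the backward step ${\left(I+{\lambda }_{k}B\right)}^{-1}$ is the projection ${P}_{C}$, and the forward-backward iteration reduces to projected gradient descent, whose limit is ${P}_{C}\left(b\right)$.

```python
import numpy as np

# Toy instance of scheme (iii): Ax = x - b (gradient of 0.5*||x - b||^2),
# B = normal cone of the box C = [0,1]^2, whose resolvent (I + lam*B)^{-1} is P_C.
# A zero of A + B is then exactly the projection of b onto C.
b = np.array([2.0, 0.5])
proj_C = lambda x: np.clip(x, 0.0, 1.0)    # backward step (I + lam*B)^{-1}

x = np.zeros(2)
lam = 0.5                                  # fixed step size lambda_k
for _ in range(200):
    y = x - lam * (x - b)                  # forward step (I - lam*A)
    x = proj_C(y)                          # backward step

print(x)  # ≈ [1.0, 0.5], i.e. P_C(b)
```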

Convergence results for scheme (i), in the case in which $\left\{{\rho }_{k}\right\}$ is contained in a compact subset of $\left(0,1\right)$, can be found in [2]; the convergence analysis of the double backward scheme (ii) can be found in [3] and [4]; for the standard convergence analysis of (iii), see [5]. However, these convergence results depend heavily on the maximal monotonicity of A and B. It is therefore the aim of this paper to construct new algorithms for problem $\mathcal{P}$ that do not require A and B to be maximal monotone.

The paper is organized as follows. In Section 2, we define the minimal norm solution of the problem $\mathcal{P}$ (1.1). Using Tychonov regularization, we obtain a net of solutions of regularized minimization problems approximating this minimal norm solution (see Theorem 2.4). In Section 3, we introduce an algorithm and prove its strong convergence; more importantly, its limit is the minimum-norm solution of the problem $\mathcal{P}$ (1.1) (see Theorem 3.2). In Section 4, we introduce a KM-CQ-like iterative algorithm which converges strongly to a solution of the problem $\mathcal{P}$ (1.1) (see Theorem 4.3).

Throughout the rest of this paper, I denotes the identity operator on a Hilbert space H, $Fix\left(T\right)$ the set of fixed points of an operator T, and $\mathrm{\nabla }f$ the gradient of a functional $f:H\to R$. An operator T on a Hilbert space H is nonexpansive if $\parallel Tx-Ty\parallel \le \parallel x-y\parallel$ for all x and y in H. T is said to be averaged if there exist $0<\alpha <1$ and a nonexpansive operator N such that $T=\left(1-\alpha \right)I+\alpha N$.

We know that the projection ${P}_{C}$ from H onto a nonempty closed convex subset C of H is a typical example of a nonexpansive and averaged mapping, which is defined by

${P}_{C}\left(w\right)=arg\underset{x\in C}{min}\parallel x-w\parallel .$

It is well known that ${P}_{C}\left(w\right)$ is characterized by the inequality

$〈w-{P}_{C}\left(w\right),x-{P}_{C}\left(w\right)〉\le 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in C.$
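As a quick numerical illustration (a sketch under the assumption that C is the closed unit ball, for which ${P}_{C}\left(w\right)=w/max\left(1,\parallel w\parallel \right)$), the characterizing inequality can be checked directly against sampled points of C:

```python
import numpy as np

rng = np.random.default_rng(0)

# Projection onto the closed unit ball C = {x : ||x|| <= 1}.
proj_C = lambda w: w / max(1.0, np.linalg.norm(w))

w = 5.0 * rng.normal(size=3)   # a point outside C (almost surely)
p = proj_C(w)

# <w - P_C(w), x - P_C(w)> <= 0 for every x in C; sample many x in C
# (projecting random vectors guarantees membership in C).
worst = max(float(np.dot(w - p, proj_C(rng.normal(size=3)) - p))
            for _ in range(1000))
print(worst)  # nonpositive up to rounding
```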

We now collect some elementary facts which will be used in the proofs of our main results.

Lemma 1.1 [6, 7]

Let X be a Banach space, C a closed convex subset of X, and $T:C\to C$ a nonexpansive mapping with $Fix\left(T\right)\ne \mathrm{\varnothing }$. If $\left\{{x}_{n}\right\}$ is a sequence in C weakly converging to x and if $\left\{\left(I-T\right){x}_{n}\right\}$ converges strongly to y, then $\left(I-T\right)x=y$.

Lemma 1.2 [8]

Let $\left\{{s}_{n}\right\}$ be a sequence of nonnegative real numbers, $\left\{{\alpha }_{n}\right\}$ a sequence of real numbers in $\left[0,1\right]$ with ${\sum }_{n=1}^{\mathrm{\infty }}{\alpha }_{n}=\mathrm{\infty }$, $\left\{{u}_{n}\right\}$ a sequence of nonnegative real numbers with ${\sum }_{n=1}^{\mathrm{\infty }}{u}_{n}<\mathrm{\infty }$, and $\left\{{t}_{n}\right\}$ a sequence of real numbers with ${lim sup}_{n}{t}_{n}\le 0$. Suppose that

${s}_{n+1}=\left(1-{\alpha }_{n}\right){s}_{n}+{\alpha }_{n}{t}_{n}+{u}_{n},\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\in N.$

Then ${lim}_{n\to \mathrm{\infty }}{s}_{n}=0$.
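A numerical sanity check of Lemma 1.2, with toy sequences (chosen here purely for illustration) satisfying the hypotheses:

```python
# Toy sequences: alpha_n = 1/(n+1) has a divergent sum, t_n = 1/n has
# limsup t_n = 0 <= 0, and u_n = 1/n^2 is summable.
# Lemma 1.2 then predicts s_n -> 0.
s = 1.0
for n in range(1, 200_000):
    alpha, t, u = 1.0 / (n + 1), 1.0 / n, 1.0 / n ** 2
    s = (1 - alpha) * s + alpha * t + u

print(s)  # small, and tending to 0 as the horizon grows
```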

Lemma 1.3 [9]

Let $\left\{{w}_{n}\right\}$, $\left\{{z}_{n}\right\}$ be bounded sequences in a Banach space and let $\left\{{\beta }_{n}\right\}$ be a sequence in $\left[0,1\right]$ which satisfies the following condition: $0<{lim inf}_{n\to \mathrm{\infty }}{\beta }_{n}\le {lim sup}_{n\to \mathrm{\infty }}{\beta }_{n}<1$. If ${w}_{n+1}=\left(1-{\beta }_{n}\right){w}_{n}+{\beta }_{n}{z}_{n}$ and ${lim sup}_{n\to \mathrm{\infty }}\left(\parallel {z}_{n+1}-{z}_{n}\parallel -\parallel {w}_{n+1}-{w}_{n}\parallel \right)\le 0$, then ${lim}_{n\to \mathrm{\infty }}\parallel {z}_{n}-{w}_{n}\parallel =0$.

Lemma 1.4 [10]

Let f be a convex and differentiable functional and let C be a closed convex subset of H. Then $x\in C$ is a solution of the problem

$\underset{x\in C}{min}f\left(x\right)$

if and only if $x\in C$ satisfies the following optimality condition:

$〈\mathrm{\nabla }f\left(x\right),v-x〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }v\in C.$

Moreover, if f is, in addition, strictly convex and coercive, then the minimization problem has a unique solution.

Lemma 1.5 [11]

Let A and B be averaged operators and suppose that $Fix\left(A\right)\cap Fix\left(B\right)$ is nonempty. Then $Fix\left(A\right)\cap Fix\left(B\right)=Fix\left(AB\right)=Fix\left(BA\right)$.

## 2 The minimum-norm solution of the problem $\mathcal{P}$

In this section, we propose the concept of the minimal norm solution of $\mathcal{P}$ (1.1). Then, using Tychonov regularization, we obtain the minimal norm solution as the limit of a net of solutions of regularized minimization problems.

We use Γ to denote the solution set of $\mathcal{P}$, i.e.,

$\mathrm{\Gamma }=\left\{x\in {H}_{1},Ax=-Bx,x\in C\right\}$

and we assume that $\mathcal{P}$ is consistent, so that Γ is nonempty; Γ is then also closed and convex.

Let $H={H}_{1}×{H}_{1}$, $M=\left\{\left(x,x\right),x\in {H}_{1}\right\}\subseteq H$, P be the linear operator from ${H}_{1}$ onto M, and P has the matrix form

$P=\left[\begin{array}{c}I\\ I\end{array}\right],$

that is to say, $P\left(x\right)=\left(x,x\right)$, $\mathrm{\forall }x\in {H}_{1}$.

Define $G:H\to {H}_{2}$ by $G\left(\left(x,y\right)\right)=Ax+By$, $\mathrm{\forall }\left(x,y\right)\in H$. Then G has the matrix form $G=\left[A,B\right]$, and $GP=A+B$, ${P}^{\ast }{G}^{\ast }GP={A}^{\ast }A+{A}^{\ast }B+{B}^{\ast }A+{B}^{\ast }B$.
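In finite dimensions, the block identities $GP=A+B$ and ${P}^{\ast }{G}^{\ast }GP={A}^{\ast }A+{A}^{\ast }B+{B}^{\ast }A+{B}^{\ast }B$ are easy to verify numerically; the random matrices below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random bounded linear operators A, B : R^3 -> R^2 as matrices.
A, B = rng.normal(size=(2, 3)), rng.normal(size=(2, 3))

# P stacks x into (x, x); G = [A, B] acts on pairs (x, y) as Ax + By.
P = np.vstack([np.eye(3), np.eye(3)])          # 6 x 3
G = np.hstack([A, B])                          # 2 x 6

assert np.allclose(G @ P, A + B)
assert np.allclose(P.T @ G.T @ G @ P,
                   A.T @ A + A.T @ B + B.T @ A + B.T @ B)
print("block identities verified")
```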

The problem can now be reformulated as finding $x\in C$ with $GPx=0$, or solving the following minimization problem:

$\underset{x\in C}{min}f\left(x\right)=\frac{1}{2}{\parallel GPx\parallel }^{2},$
(2.1)

which is ill-posed. A classical way is the well-known Tychonov regularization, which approximates a solution of problem (2.1) by the unique minimizer of the regularized problem:

$\underset{x\in C}{min}{f}_{\alpha }\left(x\right)=\frac{1}{2}{\parallel GPx\parallel }^{2}+\frac{1}{2}\alpha {\parallel x\parallel }^{2},$
(2.2)

where $\alpha >0$ is the regularization parameter. Denote by ${x}_{\alpha }$ the unique solution of (2.2).

Proposition 2.1 For $\alpha >0$, the solution ${x}_{\alpha }$ of (2.2) is uniquely defined. ${x}_{\alpha }$ is characterized by the inequality

$〈{P}^{\ast }{G}^{\ast }GP{x}_{\alpha }+\alpha {x}_{\alpha },x-{x}_{\alpha }〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in C.$

Proof Obviously, $f\left(x\right)=\frac{1}{2}{\parallel GPx\parallel }^{2}$ is convex and differentiable with gradient $\mathrm{\nabla }f\left(x\right)={P}^{\ast }{G}^{\ast }GPx$. Since ${f}_{\alpha }\left(x\right)=f\left(x\right)+\frac{1}{2}\alpha {\parallel x\parallel }^{2}$, we see that ${f}_{\alpha }$ is strictly convex, coercive, and differentiable with gradient

$\mathrm{\nabla }{f}_{\alpha }\left(x\right)={P}^{\ast }{G}^{\ast }GPx+\alpha x.$

According to Lemma 1.4, ${x}_{\alpha }$ is characterized by the inequality

$〈{P}^{\ast }{G}^{\ast }GP{x}_{\alpha }+\alpha {x}_{\alpha },x-{x}_{\alpha }〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in C.$
(2.3)

□

Definition 2.2 An element $\stackrel{˜}{x}\in \mathrm{\Gamma }$ is said to be the minimal norm solution of the problem $\mathcal{P}$ (1.1) if $\parallel \stackrel{˜}{x}\parallel ={inf}_{x\in \mathrm{\Gamma }}\parallel x\parallel$.

The following proposition collects some useful properties of the net $\left\{{x}_{\alpha }\right\}$ of unique solutions of (2.2).

Proposition 2.3 Let ${x}_{\alpha }$ be given as the unique solution of (2.2). Then we have:

1. (i)

$\parallel {x}_{\alpha }\parallel$ is decreasing for $\alpha \in \left(0,\mathrm{\infty }\right)$.

2. (ii)

$\alpha ↦{x}_{\alpha }$ defines a continuous curve from $\left(0,\mathrm{\infty }\right)$ to ${H}_{1}$.

Proof Let $\alpha >\beta >0$. Since ${x}_{\alpha }$ and ${x}_{\beta }$ are the unique minimizers of ${f}_{\alpha }$ and ${f}_{\beta }$, respectively, we get

$\begin{array}{c}\frac{1}{2}{\parallel GP{x}_{\alpha }\parallel }^{2}+\frac{1}{2}\alpha {\parallel {x}_{\alpha }\parallel }^{2}\le \frac{1}{2}{\parallel GP{x}_{\beta }\parallel }^{2}+\frac{1}{2}\alpha {\parallel {x}_{\beta }\parallel }^{2},\hfill \\ \frac{1}{2}{\parallel GP{x}_{\beta }\parallel }^{2}+\frac{1}{2}\beta {\parallel {x}_{\beta }\parallel }^{2}\le \frac{1}{2}{\parallel GP{x}_{\alpha }\parallel }^{2}+\frac{1}{2}\beta {\parallel {x}_{\alpha }\parallel }^{2}.\hfill \end{array}$

Adding these two inequalities gives $\frac{1}{2}\left(\alpha -\beta \right)\left({\parallel {x}_{\alpha }\parallel }^{2}-{\parallel {x}_{\beta }\parallel }^{2}\right)\le 0$, and since $\alpha >\beta$, it follows that $\parallel {x}_{\alpha }\parallel \le \parallel {x}_{\beta }\parallel$. Thus $\parallel {x}_{\alpha }\parallel$ is decreasing for $\alpha \in \left(0,\mathrm{\infty }\right)$.

According to Proposition 2.1, we get

$〈{P}^{\ast }{G}^{\ast }GP{x}_{\alpha }+\alpha {x}_{\alpha },{x}_{\beta }-{x}_{\alpha }〉\ge 0,$

and

$〈{P}^{\ast }{G}^{\ast }GP{x}_{\beta }+\beta {x}_{\beta },{x}_{\alpha }-{x}_{\beta }〉\ge 0.$

It follows that

$〈{x}_{\alpha }-{x}_{\beta },\alpha {x}_{\alpha }-\beta {x}_{\beta }〉\le 〈{x}_{\alpha }-{x}_{\beta },{P}^{\ast }{G}^{\ast }GP\left({x}_{\beta }-{x}_{\alpha }\right)〉\le 0.$

Thus

$\alpha {\parallel {x}_{\alpha }-{x}_{\beta }\parallel }^{2}\le \left(\alpha -\beta \right)〈{x}_{\beta }-{x}_{\alpha },{x}_{\beta }〉.$

It turns out that

$\parallel {x}_{\alpha }-{x}_{\beta }\parallel \le \frac{|\alpha -\beta |}{\alpha }\parallel {x}_{\beta }\parallel .$

Hence, $\alpha ↦{x}_{\alpha }$ is a continuous curve from $\left(0,\mathrm{\infty }\right)$ to ${H}_{1}$. □

Theorem 2.4 Let ${x}_{\alpha }$ be the unique solution of (2.2). Then ${x}_{\alpha }$ converges strongly to the minimum-norm solution $\stackrel{˜}{x}$ of $\mathcal{P}$ (1.1) as $\alpha \to 0$.

Proof Since ${x}_{\alpha }$ is the minimizer of (2.2), for any $0<\alpha <\mathrm{\infty }$ we get

$\frac{1}{2}{\parallel GP{x}_{\alpha }\parallel }^{2}+\frac{1}{2}\alpha {\parallel {x}_{\alpha }\parallel }^{2}\le \frac{1}{2}{\parallel GP\stackrel{˜}{x}\parallel }^{2}+\frac{1}{2}\alpha {\parallel \stackrel{˜}{x}\parallel }^{2}.$

Since $\stackrel{˜}{x}\in \mathrm{\Gamma }$ is a solution for $\mathcal{P}$,

$\frac{1}{2}{\parallel GP{x}_{\alpha }\parallel }^{2}+\frac{1}{2}\alpha {\parallel {x}_{\alpha }\parallel }^{2}\le \frac{1}{2}\alpha {\parallel \stackrel{˜}{x}\parallel }^{2}.$

It follows that $\parallel {x}_{\alpha }\parallel \le \parallel \stackrel{˜}{x}\parallel$ for all $\alpha >0$. Thus $\left\{{x}_{\alpha }\right\}$ is a bounded net in ${H}_{1}$.

All we need to prove is that for any sequence $\left\{{\alpha }_{n}\right\}$ such that ${\alpha }_{n}\to 0$, $\left\{{x}_{{\alpha }_{n}}\right\}$ contains a subsequence converging strongly to $\stackrel{˜}{x}$. For convenience, we set ${x}_{n}={x}_{{\alpha }_{n}}$.

In fact, since $\left\{{x}_{n}\right\}$ is bounded, by passing to a subsequence if necessary, we may assume that $\left\{{x}_{n}\right\}$ converges weakly to a point $\stackrel{ˆ}{x}\in C$. Due to Proposition 2.1, we get

$〈{P}^{\ast }{G}^{\ast }GP{x}_{n}+{\alpha }_{n}{x}_{n},\stackrel{˜}{x}-{x}_{n}〉\ge 0.$

It turns out that

$〈GP{x}_{n},GP\stackrel{˜}{x}-GP{x}_{n}〉\ge {\alpha }_{n}〈{x}_{n},{x}_{n}-\stackrel{˜}{x}〉.$

Since $\stackrel{˜}{x}\in \mathrm{\Gamma }$, it follows that

$〈GP{x}_{n},-GP{x}_{n}〉\ge {\alpha }_{n}〈{x}_{n},{x}_{n}-\stackrel{˜}{x}〉.$

Noting that $\parallel {x}_{n}\parallel \le \parallel \stackrel{˜}{x}\parallel$, we have

${\parallel GP{x}_{n}\parallel }^{2}\le 2{\alpha }_{n}{\parallel \stackrel{˜}{x}\parallel }^{2}\to 0.$

Moreover, since $\left\{{x}_{n}\right\}$ converges weakly to $\stackrel{ˆ}{x}\in C$, $\left\{GP{x}_{n}\right\}$ converges weakly to $GP\stackrel{ˆ}{x}$. By the weak lower semicontinuity of the norm, it follows that $GP\stackrel{ˆ}{x}=0$, i.e., $\stackrel{ˆ}{x}\in \mathrm{\Gamma }$.

Finally, we prove that $\stackrel{ˆ}{x}=\stackrel{˜}{x}$ and this finishes the proof.

Since $\left\{{x}_{n}\right\}$ converges weakly to $\stackrel{ˆ}{x}$ and $\parallel {x}_{n}\parallel \le \parallel \stackrel{˜}{x}\parallel$, one can deduce that

$\parallel \stackrel{ˆ}{x}\parallel \le \underset{n}{lim inf}\parallel {x}_{n}\parallel \le \parallel \stackrel{˜}{x}\parallel =min\left\{\parallel x\parallel :x\in \mathrm{\Gamma }\right\}.$

This shows that $\stackrel{ˆ}{x}$ is also a minimum-norm point of Γ. By the uniqueness of the minimum-norm element, we get $\stackrel{ˆ}{x}=\stackrel{˜}{x}$. Moreover, the above inequalities force $\parallel {x}_{n}\parallel \to \parallel \stackrel{ˆ}{x}\parallel$, which together with the weak convergence of $\left\{{x}_{n}\right\}$ to $\stackrel{ˆ}{x}$ yields strong convergence ${x}_{n}\to \stackrel{ˆ}{x}=\stackrel{˜}{x}$. □

Finally, we will introduce another method to get the minimum-norm solution of the problem $\mathcal{P}$.

Lemma 2.5 Let $T=I-\gamma {P}^{\ast }{G}^{\ast }GP$, where $0<\gamma <2/\rho \left({P}^{\ast }{G}^{\ast }GP\right)$ with $\rho \left({P}^{\ast }{G}^{\ast }GP\right)$ being the spectral radius of the self-adjoint operator ${P}^{\ast }{G}^{\ast }GP$ on ${H}_{1}$. Then we have the following:

1. (1)

$\parallel T\parallel \le 1$ (i.e. T is nonexpansive) and averaged;

2. (2)

$Fix\left(T\right)=\left\{x\in {H}_{1},Ax=-Bx\right\}$, $Fix\left({P}_{C}T\right)=Fix\left({P}_{C}\right)\cap Fix\left(T\right)=\mathrm{\Gamma }$;

3. (3)

$x\in Fix\left({P}_{C}T\right)$ if and only if x is a solution of the variational inequality $〈{P}^{\ast }{G}^{\ast }GPx,v-x〉\ge 0$, $\mathrm{\forall }v\in C$.

Proof (1) It is easily seen that $\parallel T\parallel \le 1$; we only need to prove that $T=I-\gamma {P}^{\ast }{G}^{\ast }GP$ is averaged. Indeed, choose $0<\beta <1$ such that $\gamma /\left(1-\beta \right)<2/\rho \left({P}^{\ast }{G}^{\ast }GP\right)$; then $T=I-\gamma {P}^{\ast }{G}^{\ast }GP=\beta I+\left(1-\beta \right)V$, where $V=I-\left(\gamma /\left(1-\beta \right)\right){P}^{\ast }{G}^{\ast }GP$ is a nonexpansive mapping. That is to say, T is averaged.

2. (2)

If $x\in \left\{x\in {H}_{1},Ax=-Bx\right\}$, it is obvious that $x\in Fix\left(T\right)$. Conversely, assume that $x\in Fix\left(T\right)$. Then $x=x-\gamma {P}^{\ast }{G}^{\ast }GPx$, hence $\gamma {P}^{\ast }{G}^{\ast }GPx=0$, and so ${\parallel GPx\parallel }^{2}=〈{P}^{\ast }{G}^{\ast }GPx,x〉=0$; we get $x\in \left\{x\in {H}_{1},Ax=-Bx\right\}$. Hence $Fix\left(T\right)=\left\{x\in {H}_{1},Ax=-Bx\right\}$.

Now we prove $Fix\left({P}_{C}T\right)=Fix\left({P}_{C}\right)\cap Fix\left(T\right)=\mathrm{\Gamma }$. Since $Fix\left(T\right)=\left\{x\in {H}_{1},Ax=-Bx\right\}$ and $Fix\left({P}_{C}\right)=C$, the identity $Fix\left({P}_{C}\right)\cap Fix\left(T\right)=\mathrm{\Gamma }$ is obvious. On the other hand, since $Fix\left({P}_{C}\right)\cap Fix\left(T\right)=\mathrm{\Gamma }\ne \mathrm{\varnothing }$ and both ${P}_{C}$ and T are averaged, Lemma 1.5 gives $Fix\left({P}_{C}T\right)=Fix\left({P}_{C}\right)\cap Fix\left(T\right)$.

3. (3)
$\begin{array}{rcl}〈{P}^{\ast }{G}^{\ast }GPx,v-x〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }v\in C\phantom{\rule{1em}{0ex}}& ⇔& \phantom{\rule{1em}{0ex}}〈x-\left(x-\gamma {P}^{\ast }{G}^{\ast }GPx\right),v-x〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }v\in C\\ ⇔& \phantom{\rule{1em}{0ex}}x={P}_{C}\left(x-\gamma {P}^{\ast }{G}^{\ast }GPx\right)\\ ⇔& \phantom{\rule{1em}{0ex}}x\in Fix\left({P}_{C}T\right).\end{array}$

□
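Lemma 2.5(1) can be checked numerically in finite dimensions; the matrix below is an illustrative assumption, with $\beta =1/4$ and $\gamma =1/\rho$, so that $\gamma /\left(1-\beta \right)<2/\rho$:

```python
import numpy as np

rng = np.random.default_rng(2)

# M plays the role of P*G*GP: self-adjoint and positive semidefinite.
GP = rng.normal(size=(2, 3))
M = GP.T @ GP
rho = float(np.max(np.linalg.eigvalsh(M)))   # spectral radius of M

gamma = 1.0 / rho                            # any gamma in (0, 2/rho) works
T = np.eye(3) - gamma * M
norm_T = np.linalg.norm(T, 2)                # operator (spectral) norm of T

# Averaged decomposition T = beta*I + (1-beta)*V with gamma/(1-beta) < 2/rho.
beta = 0.25
V = (T - beta * np.eye(3)) / (1 - beta)      # V = I - (gamma/(1-beta)) * M
norm_V = np.linalg.norm(V, 2)

print(norm_T <= 1 + 1e-12, norm_V <= 1 + 1e-12)  # True True
```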

Remark 2.6 Choose a constant γ satisfying $0<\gamma <2/\rho \left({P}^{\ast }{G}^{\ast }GP\right)$. For $\alpha \in \left(0,\frac{2-\gamma \parallel {P}^{\ast }{G}^{\ast }GP\parallel }{2\gamma }\right)$, we define a mapping

${W}_{\alpha }\left(x\right):={P}_{C}\left[\left(1-\alpha \gamma \right)I-\gamma {P}^{\ast }{G}^{\ast }GP\right]x.$

It is clear that ${W}_{\alpha }$ is a contraction. Hence ${W}_{\alpha }$ has a unique fixed point ${x}_{\alpha }$, which satisfies

${x}_{\alpha }={P}_{C}\left[\left(1-\alpha \gamma \right)I-\gamma {P}^{\ast }{G}^{\ast }GP\right]{x}_{\alpha }.$
(2.4)
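Since ${W}_{\alpha }$ is a contraction, ${x}_{\alpha }$ can be computed by simple Banach iteration. A toy sketch (the data are illustrative assumptions, not from the paper): here $A+B$ is represented by the matrix below, $C=\left\{x:{x}_{2}\ge 1\right\}$ (note $0\notin C$), $\mathrm{\Gamma }=\left\{x:{x}_{1}=0,{x}_{2}\ge 1\right\}$, and the minimum-norm solution is $\left(0,1\right)$, which for this particular toy problem is already the exact fixed point of ${W}_{\alpha }$.

```python
import numpy as np

# M = P*G*GP = (A+B)^T (A+B) for the toy choice A+B = [[1,0],[0,0]].
M = np.array([[1.0, 0.0], [0.0, 0.0]])
proj_C = lambda x: np.array([x[0], max(x[1], 1.0)])   # P_C for C = {x : x2 >= 1}

gamma, alpha = 1.0, 0.01      # gamma < 2/rho(M) = 2; alpha in the allowed range
x = np.array([5.0, 5.0])
for _ in range(2000):         # Banach iteration for the contraction W_alpha
    x = proj_C((1 - alpha * gamma) * x - gamma * (M @ x))

print(x)  # ≈ (0, 1), the minimum-norm solution
```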

Theorem 2.7 Let ${x}_{\alpha }$ be given as (2.4). Then ${x}_{\alpha }$ converges strongly to the minimum-norm solution $\stackrel{˜}{x}$ of the problem $\mathcal{P}$ (1.1) as $\alpha \to 0$.

Proof Choose $\stackrel{ˇ}{x}\in \mathrm{\Gamma }$. Noting that $\alpha \in \left(0,\frac{2-\gamma \parallel {P}^{\ast }{G}^{\ast }GP\parallel }{2\gamma }\right)$, the operator $I-\frac{\gamma }{1-\alpha \gamma }{P}^{\ast }{G}^{\ast }GP$ is nonexpansive, and it turns out that

$\begin{array}{rcl}\parallel {x}_{\alpha }-\stackrel{ˇ}{x}\parallel & =& \parallel {P}_{C}\left[\left(1-\alpha \gamma \right)I-\gamma {P}^{\ast }{G}^{\ast }GP\right]{x}_{\alpha }-{P}_{C}\left[\stackrel{ˇ}{x}-\gamma {P}^{\ast }{G}^{\ast }GP\stackrel{ˇ}{x}\right]\parallel \\ \le & \parallel \left[\left(1-\alpha \gamma \right)I-\gamma {P}^{\ast }{G}^{\ast }GP\right]{x}_{\alpha }-\left[\stackrel{ˇ}{x}-\gamma {P}^{\ast }{G}^{\ast }GP\stackrel{ˇ}{x}\right]\parallel \\ =& \parallel \left(1-\alpha \gamma \right)\left[{x}_{\alpha }-\frac{\gamma }{1-\alpha \gamma }{P}^{\ast }{G}^{\ast }GP{x}_{\alpha }\right]\\ -\left(1-\alpha \gamma \right)\left[\stackrel{ˇ}{x}-\frac{\gamma }{1-\alpha \gamma }{P}^{\ast }{G}^{\ast }GP\stackrel{ˇ}{x}\right]-\alpha \gamma \stackrel{ˇ}{x}\parallel \\ \le & \left(1-\alpha \gamma \right)\parallel \left({x}_{\alpha }-\frac{\gamma }{1-\alpha \gamma }{P}^{\ast }{G}^{\ast }GP{x}_{\alpha }\right)-\left(\stackrel{ˇ}{x}-\frac{\gamma }{1-\alpha \gamma }{P}^{\ast }{G}^{\ast }GP\stackrel{ˇ}{x}\right)\parallel +\alpha \gamma \parallel \stackrel{ˇ}{x}\parallel \\ \le & \left(1-\alpha \gamma \right)\parallel {x}_{\alpha }-\stackrel{ˇ}{x}\parallel +\alpha \gamma \parallel \stackrel{ˇ}{x}\parallel .\end{array}$

That is,

$\parallel {x}_{\alpha }-\stackrel{ˇ}{x}\parallel \le \parallel \stackrel{ˇ}{x}\parallel .$

Hence $\left\{{x}_{\alpha }\right\}$ is bounded.

Taking (2.4) into account, we have

$\parallel {x}_{\alpha }-{P}_{C}\left[I-\gamma {P}^{\ast }{G}^{\ast }GP\right]{x}_{\alpha }\parallel \le \alpha \parallel \gamma {x}_{\alpha }\parallel \to 0.$

We assert that $\left\{{x}_{\alpha }\right\}$ is relatively norm compact as $\alpha \to {0}^{+}$. In fact, assume that $\left\{{\alpha }_{n}\right\}\subseteq \left(0,\frac{2-\gamma \parallel {P}^{\ast }{G}^{\ast }GP\parallel }{2\gamma }\right)$ and ${\alpha }_{n}\to {0}^{+}$ as $n\to \mathrm{\infty }$. For convenience, we put ${x}_{n}:={x}_{{\alpha }_{n}}$, we get

$\parallel {x}_{n}-{P}_{C}\left[I-\gamma {P}^{\ast }{G}^{\ast }GP\right]{x}_{n}\parallel \le {\alpha }_{n}\parallel \gamma {x}_{n}\parallel \to 0.$

Since ${P}_{C}$ is nonexpansive, one concludes that

$\begin{array}{rcl}{\parallel {x}_{\alpha }-\stackrel{ˇ}{x}\parallel }^{2}& =& {\parallel {P}_{C}\left[\left(1-\alpha \gamma \right)I-\gamma {P}^{\ast }{G}^{\ast }GP\right]{x}_{\alpha }-{P}_{C}\left[\stackrel{ˇ}{x}-\gamma {P}^{\ast }{G}^{\ast }GP\stackrel{ˇ}{x}\right]\parallel }^{2}\\ \le & 〈\left[\left(1-\alpha \gamma \right)I-\gamma {P}^{\ast }{G}^{\ast }GP\right]{x}_{\alpha }-\left[\stackrel{ˇ}{x}-\gamma {P}^{\ast }{G}^{\ast }GP\stackrel{ˇ}{x}\right],{x}_{\alpha }-\stackrel{ˇ}{x}〉\\ =& 〈\left(1-\alpha \gamma \right)\left[{x}_{\alpha }-\frac{\gamma }{1-\alpha \gamma }{P}^{\ast }{G}^{\ast }GP{x}_{\alpha }\right]\\ -\left(1-\alpha \gamma \right)\left[\stackrel{ˇ}{x}-\frac{\gamma }{1-\alpha \gamma }{P}^{\ast }{G}^{\ast }GP\stackrel{ˇ}{x}\right],{x}_{\alpha }-\stackrel{ˇ}{x}〉-\alpha \gamma 〈\stackrel{ˇ}{x},{x}_{\alpha }-\stackrel{ˇ}{x}〉\\ \le & \left(1-\alpha \gamma \right){\parallel {x}_{\alpha }-\stackrel{ˇ}{x}\parallel }^{2}-\alpha \gamma 〈\stackrel{ˇ}{x},{x}_{\alpha }-\stackrel{ˇ}{x}〉.\end{array}$

That is,

${\parallel {x}_{\alpha }-\stackrel{ˇ}{x}\parallel }^{2}\le 〈-\stackrel{ˇ}{x},{x}_{\alpha }-\stackrel{ˇ}{x}〉.$

Thus,

${\parallel {x}_{n}-\stackrel{ˇ}{x}\parallel }^{2}\le 〈-\stackrel{ˇ}{x},{x}_{n}-\stackrel{ˇ}{x}〉,\phantom{\rule{1em}{0ex}}\mathrm{\forall }\stackrel{ˇ}{x}\in \mathrm{\Gamma }.$

Since $\left\{{x}_{n}\right\}$ is bounded, there exists a subsequence of $\left\{{x}_{n}\right\}$ which converges weakly to a point $\stackrel{˜}{x}$. Without loss of generality, we may assume that $\left\{{x}_{n}\right\}$ converges weakly to $\stackrel{˜}{x}$. Noting that

$\parallel {x}_{n}-{P}_{C}\left[I-\gamma {P}^{\ast }{G}^{\ast }GP\right]{x}_{n}\parallel \le {\alpha }_{n}\parallel \gamma {x}_{n}\parallel \to 0,$

and applying Lemma 1.1, we obtain $\stackrel{˜}{x}\in Fix\left({P}_{C}\left[I-\gamma {P}^{\ast }{G}^{\ast }GP\right]\right)=\mathrm{\Gamma }$.

Since

${\parallel {x}_{n}-\stackrel{ˇ}{x}\parallel }^{2}\le 〈-\stackrel{ˇ}{x},{x}_{n}-\stackrel{ˇ}{x}〉,\phantom{\rule{1em}{0ex}}\mathrm{\forall }\stackrel{ˇ}{x}\in \mathrm{\Gamma },$

it follows that

${\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}\le 〈-\stackrel{˜}{x},{x}_{n}-\stackrel{˜}{x}〉.$

Since $\left\{{x}_{n}\right\}$ converges weakly to $\stackrel{˜}{x}$, the right-hand side tends to 0, and hence $\left\{{x}_{n}\right\}$ converges strongly to $\stackrel{˜}{x}$. That is to say, $\left\{{x}_{\alpha }\right\}$ is relatively norm compact as $\alpha \to {0}^{+}$.

Moreover, again using

${\parallel {x}_{n}-\stackrel{ˇ}{x}\parallel }^{2}\le 〈-\stackrel{ˇ}{x},{x}_{n}-\stackrel{ˇ}{x}〉,\phantom{\rule{1em}{0ex}}\mathrm{\forall }\stackrel{ˇ}{x}\in \mathrm{\Gamma },$

and letting $n\to \mathrm{\infty }$, we have

${\parallel \stackrel{˜}{x}-\stackrel{ˇ}{x}\parallel }^{2}\le 〈-\stackrel{ˇ}{x},\stackrel{˜}{x}-\stackrel{ˇ}{x}〉,\phantom{\rule{1em}{0ex}}\mathrm{\forall }\stackrel{ˇ}{x}\in \mathrm{\Gamma }.$

This implies that

$〈-\stackrel{ˇ}{x},\stackrel{ˇ}{x}-\stackrel{˜}{x}〉\le 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }\stackrel{ˇ}{x}\in \mathrm{\Gamma }.$

It follows that

$〈-\stackrel{˜}{x},\stackrel{ˇ}{x}-\stackrel{˜}{x}〉\le 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }\stackrel{ˇ}{x}\in \mathrm{\Gamma }.$

It turns out that $\stackrel{˜}{x}={P}_{\mathrm{\Gamma }}\left(0\right)$. Consequently, each cluster point of ${x}_{\alpha }$ equals $\stackrel{˜}{x}$. Thus ${x}_{\alpha }\to \stackrel{˜}{x}$ ($\alpha \to 0$), the minimum-norm solution of the problem $\mathcal{P}$. □

## 3 Iterative algorithm for the minimum-norm solution of the problem $\mathcal{P}$

In this section, we introduce the following algorithm and prove its strong convergence; more importantly, its limit is the minimum-norm solution of the problem $\mathcal{P}$.

Algorithm 3.1 For an arbitrary point ${x}_{0}\in {H}_{1}$ the sequence $\left\{{x}_{n}\right\}$ is generated by the iterative algorithm

${x}_{n+1}={P}_{C}\left\{\left(1-{\alpha }_{n}\right)\left[I-\gamma {P}^{\ast }{G}^{\ast }GP\right]{x}_{n}\right\},$
(3.1)

where $\left\{{\alpha }_{n}\right\}$ is a sequence in $\left(0,1\right)$ such that

1. (i)

${lim}_{n}{\alpha }_{n}=0$;

2. (ii)

${\sum }_{n=0}^{\mathrm{\infty }}{\alpha }_{n}=\mathrm{\infty }$;

3. (iii)

${\sum }_{n=0}^{\mathrm{\infty }}|{\alpha }_{n+1}-{\alpha }_{n}|<\mathrm{\infty }$ or ${lim}_{n}|{\alpha }_{n+1}-{\alpha }_{n}|/{\alpha }_{n}=0$.
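A minimal sketch of Algorithm 3.1 on toy data (illustrative assumptions, not from the paper): $M={P}^{\ast }{G}^{\ast }GP$ is represented by a $2×2$ matrix, $C=\left\{x:{x}_{2}\ge 1\right\}$, and the minimum-norm solution of $\mathcal{P}$ is $\left(0,1\right)$.

```python
import numpy as np

# Toy data: M = (A+B)^T (A+B) = [[1,0],[0,0]], C = {x : x2 >= 1}.
M = np.array([[1.0, 0.0], [0.0, 0.0]])
proj_C = lambda x: np.array([x[0], max(x[1], 1.0)])

gamma = 1.0                                # 0 < gamma < 2/rho(M) = 2
x = np.array([5.0, -3.0])
for n in range(5000):
    alpha = 1.0 / (n + 2)                  # satisfies conditions (i)-(iii)
    x = proj_C((1 - alpha) * (x - gamma * (M @ x)))   # iteration (3.1)

print(x)  # ≈ (0, 1), the minimum-norm solution
```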

Now, we prove the strong convergence of the iterative algorithm.

Theorem 3.2 The sequence $\left\{{x}_{n}\right\}$ generated by algorithm (3.1) converges strongly to the minimum-norm solution $\stackrel{˜}{x}$ of the problem $\mathcal{P}$ (1.1).

Proof Let ${R}_{n}$ and R be defined by

$\begin{array}{c}{R}_{n}x:={P}_{C}\left\{\left(1-{\alpha }_{n}\right)\left[I-\gamma {P}^{\ast }{G}^{\ast }GP\right]\right\}x={P}_{C}\left[\left(1-{\alpha }_{n}\right)Tx\right],\hfill \\ Rx:={P}_{C}\left(I-\gamma {P}^{\ast }{G}^{\ast }GP\right)x={P}_{C}\left(Tx\right),\hfill \end{array}$

where $T=I-\gamma {P}^{\ast }{G}^{\ast }GP$. By Lemma 2.5, it is easy to see that ${R}_{n}$ is a contraction with contraction constant $1-{\alpha }_{n}$. Algorithm (3.1) can be written as ${x}_{n+1}={R}_{n}{x}_{n}$.

For any $\stackrel{ˆ}{x}\in \mathrm{\Gamma }$, we have

$\begin{array}{rcl}\parallel {R}_{n}\stackrel{ˆ}{x}-\stackrel{ˆ}{x}\parallel & =& \parallel {P}_{C}\left[\left(1-{\alpha }_{n}\right)T\stackrel{ˆ}{x}\right]-\stackrel{ˆ}{x}\parallel \\ =& \parallel {P}_{C}\left[\left(1-{\alpha }_{n}\right)T\stackrel{ˆ}{x}\right]-{P}_{C}\left(T\stackrel{ˆ}{x}\right)\parallel \\ \le & \parallel \left(1-{\alpha }_{n}\right)T\stackrel{ˆ}{x}-T\stackrel{ˆ}{x}\parallel \\ =& {\alpha }_{n}\parallel T\stackrel{ˆ}{x}\parallel \le {\alpha }_{n}\parallel \stackrel{ˆ}{x}\parallel .\end{array}$

Hence,

$\begin{array}{rcl}\parallel {x}_{n+1}-\stackrel{ˆ}{x}\parallel & =& \parallel {R}_{n}{x}_{n}-\stackrel{ˆ}{x}\parallel \le \parallel {R}_{n}{x}_{n}-{R}_{n}\stackrel{ˆ}{x}\parallel +\parallel {R}_{n}\stackrel{ˆ}{x}-\stackrel{ˆ}{x}\parallel \\ \le & \left(1-{\alpha }_{n}\right)\parallel {x}_{n}-\stackrel{ˆ}{x}\parallel +{\alpha }_{n}\parallel \stackrel{ˆ}{x}\parallel \\ \le & max\left\{\parallel {x}_{n}-\stackrel{ˆ}{x}\parallel ,\parallel \stackrel{ˆ}{x}\parallel \right\}.\end{array}$

It follows that $\parallel {x}_{n}-\stackrel{ˆ}{x}\parallel \le max\left\{\parallel {x}_{0}-\stackrel{ˆ}{x}\parallel ,\parallel \stackrel{ˆ}{x}\parallel \right\}$. So $\left\{{x}_{n}\right\}$ is bounded.

Next we prove that ${lim}_{n}\parallel {x}_{n+1}-{x}_{n}\parallel =0$.

Indeed,

$\begin{array}{rcl}\parallel {x}_{n+1}-{x}_{n}\parallel & =& \parallel {R}_{n}{x}_{n}-{R}_{n-1}{x}_{n-1}\parallel \\ \le & \parallel {R}_{n}{x}_{n}-{R}_{n}{x}_{n-1}\parallel +\parallel {R}_{n}{x}_{n-1}-{R}_{n-1}{x}_{n-1}\parallel \\ \le & \left(1-{\alpha }_{n}\right)\parallel {x}_{n}-{x}_{n-1}\parallel +\parallel {R}_{n}{x}_{n-1}-{R}_{n-1}{x}_{n-1}\parallel .\end{array}$

Notice that

$\begin{array}{rcl}\parallel {R}_{n}{x}_{n-1}-{R}_{n-1}{x}_{n-1}\parallel & =& \parallel {P}_{C}\left[\left(1-{\alpha }_{n}\right)T{x}_{n-1}\right]-{P}_{C}\left[\left(1-{\alpha }_{n-1}\right)T{x}_{n-1}\right]\parallel \\ \le & \parallel \left(1-{\alpha }_{n}\right)T{x}_{n-1}-\left(1-{\alpha }_{n-1}\right)T{x}_{n-1}\parallel \\ =& |{\alpha }_{n}-{\alpha }_{n-1}|\parallel T{x}_{n-1}\parallel \\ \le & |{\alpha }_{n}-{\alpha }_{n-1}|\parallel {x}_{n-1}\parallel .\end{array}$

Hence

$\parallel {x}_{n+1}-{x}_{n}\parallel \le \left(1-{\alpha }_{n}\right)\parallel {x}_{n}-{x}_{n-1}\parallel +|{\alpha }_{n}-{\alpha }_{n-1}|\parallel {x}_{n-1}\parallel .$

By virtue of the assumptions (i)-(iii) and Lemma 1.2, we have

$\underset{n}{lim}\parallel {x}_{n+1}-{x}_{n}\parallel =0.$

Therefore,

$\begin{array}{rcl}\parallel {x}_{n}-R{x}_{n}\parallel & \le & \parallel {x}_{n+1}-{x}_{n}\parallel +\parallel {R}_{n}{x}_{n}-R{x}_{n}\parallel \\ \le & \parallel {x}_{n+1}-{x}_{n}\parallel +\parallel \left(1-{\alpha }_{n}\right)T{x}_{n}-T{x}_{n}\parallel \\ \le & \parallel {x}_{n+1}-{x}_{n}\parallel +{\alpha }_{n}\parallel {x}_{n}\parallel \to 0.\end{array}$

The demiclosedness principle (Lemma 1.1) ensures that each weak limit point of $\left\{{x}_{n}\right\}$ is a fixed point of the nonexpansive mapping $R={P}_{C}T$, that is, a point of the solution set Γ of the problem $\mathcal{P}$ (1.1).

Finally, we will prove that ${lim}_{n}\parallel {x}_{n+1}-\stackrel{˜}{x}\parallel =0$.

Choose $0<\beta <1$, such that $\gamma /\left(1-\beta \right)<2/\rho \left({P}^{\ast }{G}^{\ast }GP\right)$, then $T=I-\gamma {P}^{\ast }{G}^{\ast }GP=\beta I+\left(1-\beta \right)V$, where $V=I-\gamma /\left(1-\beta \right){P}^{\ast }{G}^{\ast }GP$ is a nonexpansive mapping. Taking $z\in \mathrm{\Gamma }$, we deduce that

$\begin{array}{rcl}{\parallel {x}_{n+1}-z\parallel }^{2}& =& {\parallel {P}_{C}\left[\left(1-{\alpha }_{n}\right)T{x}_{n}\right]-z\parallel }^{2}\\ \le & {\parallel \left(1-{\alpha }_{n}\right)T{x}_{n}-z\parallel }^{2}\\ \le & \left(1-{\alpha }_{n}\right){\parallel T{x}_{n}-z\parallel }^{2}+{\alpha }_{n}{\parallel z\parallel }^{2}\\ \le & {\parallel \beta \left({x}_{n}-z\right)+\left(1-\beta \right)\left(V{x}_{n}-z\right)\parallel }^{2}+{\alpha }_{n}{\parallel z\parallel }^{2}\\ \le & \beta {\parallel \left({x}_{n}-z\right)\parallel }^{2}+\left(1-\beta \right){\parallel \left(V{x}_{n}-z\right)\parallel }^{2}-\beta \left(1-\beta \right){\parallel {x}_{n}-V{x}_{n}\parallel }^{2}+{\alpha }_{n}{\parallel z\parallel }^{2}\\ \le & {\parallel \left({x}_{n}-z\right)\parallel }^{2}-\beta \left(1-\beta \right){\parallel {x}_{n}-V{x}_{n}\parallel }^{2}+{\alpha }_{n}{\parallel z\parallel }^{2}.\end{array}$

Then

$\begin{array}{rcl}\beta \left(1-\beta \right){\parallel {x}_{n}-V{x}_{n}\parallel }^{2}& \le & {\parallel {x}_{n}-z\parallel }^{2}-{\parallel {x}_{n+1}-z\parallel }^{2}+{\alpha }_{n}{\parallel z\parallel }^{2}\\ =& \left(\parallel {x}_{n}-z\parallel +\parallel {x}_{n+1}-z\parallel \right)\left(\parallel {x}_{n}-z\parallel -\parallel {x}_{n+1}-z\parallel \right)+{\alpha }_{n}{\parallel z\parallel }^{2}\\ \le & \left(\parallel {x}_{n}-z\parallel +\parallel {x}_{n+1}-z\parallel \right)\parallel {x}_{n}-{x}_{n+1}\parallel +{\alpha }_{n}{\parallel z\parallel }^{2}\to 0.\end{array}$

Since $T=I-\gamma {P}^{\ast }{G}^{\ast }GP=\beta I+\left(1-\beta \right)V$, we have $T{x}_{n}-{x}_{n}=\left(1-\beta \right)\left(V{x}_{n}-{x}_{n}\right)$, and it follows that ${lim}_{n}\parallel T{x}_{n}-{x}_{n}\parallel =0$.

Take a subsequence $\left\{{x}_{{n}_{k}}\right\}$ of $\left\{{x}_{n}\right\}$ such that ${lim sup}_{n}〈{x}_{n}-\stackrel{˜}{x},-\stackrel{˜}{x}〉={lim}_{k}〈{x}_{{n}_{k}}-\stackrel{˜}{x},-\stackrel{˜}{x}〉$.

By virtue of the boundedness of ${x}_{n}$, we may further assume with no loss of generality that ${x}_{{n}_{k}}$ converges weakly to a point $\stackrel{ˇ}{x}$. Since $\parallel R{x}_{n}-{x}_{n}\parallel \to 0$, using the demiclosedness principle, $\stackrel{ˇ}{x}\in Fix\left(R\right)=Fix\left({P}_{C}T\right)=\mathrm{\Gamma }$. Noticing that $\stackrel{˜}{x}$ is the projection of the origin onto Γ, we get

$\underset{n}{lim sup}〈{x}_{n}-\stackrel{˜}{x},-\stackrel{˜}{x}〉=\underset{k}{lim}〈{x}_{{n}_{k}}-\stackrel{˜}{x},-\stackrel{˜}{x}〉=〈\stackrel{ˇ}{x}-\stackrel{˜}{x},-\stackrel{˜}{x}〉\le 0.$

Finally, we compute

$\begin{array}{rcl}{\parallel {x}_{n+1}-\stackrel{˜}{x}\parallel }^{2}& =& {\parallel {P}_{C}\left[\left(1-{\alpha }_{n}\right)T{x}_{n}\right]-\stackrel{˜}{x}\parallel }^{2}\\ =& {\parallel {P}_{C}\left[\left(1-{\alpha }_{n}\right)T{x}_{n}\right]-{P}_{C}T\stackrel{˜}{x}\parallel }^{2}\\ \le & {\parallel \left(1-{\alpha }_{n}\right)T{x}_{n}-T\stackrel{˜}{x}\parallel }^{2}\\ =& {\parallel \left(1-{\alpha }_{n}\right)T{x}_{n}-\stackrel{˜}{x}\parallel }^{2}\\ =& {\parallel \left(1-{\alpha }_{n}\right)\left(T{x}_{n}-\stackrel{˜}{x}\right)+{\alpha }_{n}\left(-\stackrel{˜}{x}\right)\parallel }^{2}\\ =& {\left(1-{\alpha }_{n}\right)}^{2}{\parallel \left(T{x}_{n}-\stackrel{˜}{x}\right)\parallel }^{2}+{\alpha }_{n}^{2}{\parallel \stackrel{˜}{x}\parallel }^{2}+2{\alpha }_{n}\left(1-{\alpha }_{n}\right)〈T{x}_{n}-\stackrel{˜}{x},-\stackrel{˜}{x}〉\\ \le & \left(1-{\alpha }_{n}\right){\parallel \left(T{x}_{n}-\stackrel{˜}{x}\right)\parallel }^{2}+{\alpha }_{n}\left[{\alpha }_{n}{\parallel \stackrel{˜}{x}\parallel }^{2}+2\left(1-{\alpha }_{n}\right)〈T{x}_{n}-\stackrel{˜}{x},-\stackrel{˜}{x}〉\right].\end{array}$

Since ${lim sup}_{n}〈{x}_{n}-\stackrel{˜}{x},-\stackrel{˜}{x}〉\le 0$ and $\parallel {x}_{n}-T{x}_{n}\parallel \to 0$, we know that ${lim sup}_{n}\left({\alpha }_{n}{\parallel \stackrel{˜}{x}\parallel }^{2}+2\left(1-{\alpha }_{n}\right)〈T{x}_{n}-\stackrel{˜}{x},-\stackrel{˜}{x}〉\right)\le 0$. By Lemma 1.2, we conclude that ${lim}_{n}\parallel {x}_{n+1}-\stackrel{˜}{x}\parallel =0$. This completes the proof. □
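For concreteness, the Halpern-type iteration ${x}_{n+1}={P}_{C}\left[\left(1-{\alpha }_{n}\right)T{x}_{n}\right]$ analyzed above can be sketched in finite dimensions. The matrices P and G below are toy placeholders (not the operators constructed from A and B in the paper), $C=\left\{x:{x}_{i}\ge 1\right\}$ is an assumed closed convex set with $0\notin C$, and γ is taken in $\left(0,2/\parallel {P}^{\ast }{G}^{\ast }GP\parallel \right)$:

```python
import numpy as np

def halpern_cq(P, G, x0, gamma, n_iter, proj_C):
    """Sketch of x_{n+1} = P_C[(1 - alpha_n) T x_n], T = I - gamma P*G*GP,
    with alpha_n = 1/(n + 2), so alpha_n -> 0 and sum alpha_n diverges."""
    M = P.T @ G.T @ G @ P              # matrix form of P*G*GP
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        alpha = 1.0 / (n + 2)
        Tx = x - gamma * (M @ x)       # T x_n
        x = proj_C((1.0 - alpha) * Tx)
    return x

# Toy instance: P = I, G = (1, -1), so GPx = 0 iff x_1 = x_2; the solution
# set in C = {x >= 1} is the ray {x_1 = x_2 >= 1}, with minimum-norm point (1, 1).
P = np.eye(2)
G = np.array([[1.0, -1.0]])
x = halpern_cq(P, G, x0=[3.0, 0.0], gamma=0.5,
               n_iter=100, proj_C=lambda y: np.maximum(y, 1.0))
```

In this toy instance the iterates settle on the minimum-norm solution $\left(1,1\right)$, consistent with the limit being the projection of the origin onto Γ.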

## 4 KM-CQ-like iterative algorithm for the problem $\mathcal{P}$

In this section, we establish a KM-CQ-like algorithm that converges strongly to a solution of the problem $\mathcal{P}$.

Algorithm 4.1 For an arbitrary initial point ${x}_{0}$, the sequence $\left\{{x}_{n}\right\}$ is generated by the iteration:

${x}_{n+1}=\left(1-{\beta }_{n}\right){x}_{n}+{\beta }_{n}{P}_{C}\left[\left(1-{\alpha }_{n}\right)\left(I-\gamma {P}^{\ast }{G}^{\ast }GP\right)\right]{x}_{n},$
(4.1)

where $\left\{{\alpha }_{n}\right\}$ and $\left\{{\beta }_{n}\right\}$ are sequences in $\left(0,1\right)$ such that

1. (i)

${lim}_{n\to \mathrm{\infty }}{\alpha }_{n}=0$, ${\sum }_{n=0}^{\mathrm{\infty }}{\alpha }_{n}=\mathrm{\infty }$;

2. (ii)

${lim}_{n\to \mathrm{\infty }}|{\alpha }_{n+1}-{\alpha }_{n}|=0$;

3. (iii)

$0<{lim inf}_{n\to \mathrm{\infty }}{\beta }_{n}\le {lim sup}_{n\to \mathrm{\infty }}{\beta }_{n}<1$.
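As a rough illustration (not part of the paper's analysis), iteration (4.1) can be sketched in finite dimensions. The matrices P and G, the set $C=\left\{x:{x}_{i}\ge 1\right\}$, the parameter choices ${\alpha }_{n}=1/\left(n+2\right)$ and ${\beta }_{n}=1/2$ (which satisfy (i)-(iii)), and the step size γ are all illustrative assumptions:

```python
import numpy as np

def km_cq(P, G, x0, gamma, n_iter, proj_C, beta_n=0.5):
    """Sketch of (4.1): x_{n+1} = (1 - beta_n) x_n
                               + beta_n P_C[(1 - alpha_n)(I - gamma P*G*GP) x_n]."""
    M = P.T @ G.T @ G @ P                  # matrix form of P*G*GP
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        alpha = 1.0 / (n + 2)              # satisfies (i) and (ii)
        Tx = x - gamma * (M @ x)           # T x_n with T = I - gamma P*G*GP
        x = (1.0 - beta_n) * x + beta_n * proj_C((1.0 - alpha) * Tx)
    return x

# Toy instance: GPx = 0 iff x_1 = x_2; the solutions in C = {x >= 1} form the
# ray {x_1 = x_2 >= 1}, whose minimum-norm point is (1, 1).
P = np.eye(2)
G = np.array([[1.0, -1.0]])
x = km_cq(P, G, x0=[3.0, 0.0], gamma=0.5, n_iter=100,
          proj_C=lambda y: np.maximum(y, 1.0))
```

Here the relaxation step $\left(1-{\beta }_{n}\right){x}_{n}+{\beta }_{n}\left(\cdot \right)$ averages the current iterate with the projected Halpern step, and the iterates approach $\left(1,1\right)$.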

Lemma 4.2 If $z\in Fix\left(T\right)=Fix\left(I-\gamma {P}^{\ast }{G}^{\ast }GP\right)$, then for any x we have ${\parallel Tx-z\parallel }^{2}\le {\parallel x-z\parallel }^{2}-\beta \left(1-\beta \right){\parallel Vx-x\parallel }^{2}$, where β and V are the same as in Lemma 2.5(1).

Proof By Lemma 2.5(1), we know that $T=\beta I+\left(1-\beta \right)V$, where $0<\beta <1$ and V is nonexpansive. It is clear that $z\in Fix\left(T\right)=Fix\left(V\right)$, and

$\begin{array}{rcl}{\parallel Tx-z\parallel }^{2}& =& {\parallel \beta x+\left(1-\beta \right)Vx-z\parallel }^{2}\\ \le & \beta {\parallel x-z\parallel }^{2}+\left(1-\beta \right){\parallel Vx-z\parallel }^{2}-\beta \left(1-\beta \right){\parallel Vx-x\parallel }^{2}\\ \le & \beta {\parallel x-z\parallel }^{2}+\left(1-\beta \right){\parallel x-z\parallel }^{2}-\beta \left(1-\beta \right){\parallel Vx-x\parallel }^{2}\\ =& {\parallel x-z\parallel }^{2}-\beta \left(1-\beta \right){\parallel Vx-x\parallel }^{2}.\end{array}$

□
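The estimate in Lemma 4.2 is the identity ${\parallel \beta a+\left(1-\beta \right)b\parallel }^{2}=\beta {\parallel a\parallel }^{2}+\left(1-\beta \right){\parallel b\parallel }^{2}-\beta \left(1-\beta \right){\parallel a-b\parallel }^{2}$ combined with the nonexpansivity of V. As a numerical sanity check in a toy finite-dimensional setting, one can use the standard averagedness splitting of a gradient step, $\beta =1-\gamma L/2$ and $V=\left(T-\beta I\right)/\left(1-\beta \right)$ with $L=\parallel {P}^{\ast }{G}^{\ast }GP\parallel$; these constants are an assumption here, the paper's exact ones come from Lemma 2.5:

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.eye(2)
G = np.array([[1.0, -1.0]])
M = P.T @ G.T @ G @ P                      # symmetric positive semidefinite
L = np.linalg.norm(M, 2)                   # spectral norm of P*G*GP
gamma = 0.5                                # any gamma in (0, 2/L)
beta = 1.0 - gamma * L / 2.0               # averagedness constant of T
T = np.eye(2) - gamma * M                  # T = I - gamma P*G*GP
V = (T - beta * np.eye(2)) / (1.0 - beta)  # so that T = beta I + (1 - beta) V

z = np.array([2.0, 2.0])                   # M z = 0, hence T z = z
for _ in range(1000):
    x = rng.normal(size=2)
    lhs = np.linalg.norm(T @ x - z) ** 2
    rhs = (np.linalg.norm(x - z) ** 2
           - beta * (1.0 - beta) * np.linalg.norm(V @ x - x) ** 2)
    assert lhs <= rhs + 1e-9               # the inequality of Lemma 4.2
```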

Theorem 4.3 The sequence $\left\{{x}_{n}\right\}$ generated by algorithm (4.1) converges strongly to a solution of the problem $\mathcal{P}$.

Proof For any solution $\stackrel{ˆ}{x}$ of the problem $\mathcal{P}$, according to Lemma 2.5, $\stackrel{ˆ}{x}\in Fix\left({P}_{C}T\right)=Fix\left({P}_{C}\right)\cap Fix\left(T\right)$, where $T=I-\gamma {P}^{\ast }{G}^{\ast }GP$, and

$\begin{array}{rcl}\parallel {x}_{n+1}-\stackrel{ˆ}{x}\parallel & =& \parallel \left(1-{\beta }_{n}\right){x}_{n}+{\beta }_{n}{P}_{C}\left[\left(1-{\alpha }_{n}\right)T\right]{x}_{n}-\stackrel{ˆ}{x}\parallel \\ =& \parallel \left(1-{\beta }_{n}\right)\left({x}_{n}-\stackrel{ˆ}{x}\right)+{\beta }_{n}\left({P}_{C}\left[\left(1-{\alpha }_{n}\right)T\right]{x}_{n}-\stackrel{ˆ}{x}\right)\parallel \\ \le & \left(1-{\beta }_{n}\right)\parallel {x}_{n}-\stackrel{ˆ}{x}\parallel +{\beta }_{n}\parallel {P}_{C}\left[\left(1-{\alpha }_{n}\right)T\right]{x}_{n}-\stackrel{ˆ}{x}\parallel \\ \le & \left(1-{\beta }_{n}\right)\parallel {x}_{n}-\stackrel{ˆ}{x}\parallel \\ +{\beta }_{n}\parallel {P}_{C}\left[\left(1-{\alpha }_{n}\right)T\right]{x}_{n}-{P}_{C}\left[\left(1-{\alpha }_{n}\right)T\right]\stackrel{ˆ}{x}\parallel \\ +{\beta }_{n}\parallel {P}_{C}\left[\left(1-{\alpha }_{n}\right)T\right]\stackrel{ˆ}{x}-\stackrel{ˆ}{x}\parallel \\ \le & \left(1-{\beta }_{n}\right)\parallel {x}_{n}-\stackrel{ˆ}{x}\parallel +{\beta }_{n}\left(1-{\alpha }_{n}\right)\parallel {x}_{n}-\stackrel{ˆ}{x}\parallel +{\beta }_{n}{\alpha }_{n}\parallel \stackrel{ˆ}{x}\parallel \\ =& \left(1-{\beta }_{n}{\alpha }_{n}\right)\parallel {x}_{n}-\stackrel{ˆ}{x}\parallel +{\beta }_{n}{\alpha }_{n}\parallel \stackrel{ˆ}{x}\parallel \\ \le & max\left\{\parallel {x}_{n}-\stackrel{ˆ}{x}\parallel ,\parallel \stackrel{ˆ}{x}\parallel \right\}.\end{array}$

One can deduce that

$\parallel {x}_{n}-\stackrel{ˆ}{x}\parallel \le max\left\{\parallel {x}_{0}-\stackrel{ˆ}{x}\parallel ,\parallel \stackrel{ˆ}{x}\parallel \right\}.$

Hence, $\left\{{x}_{n}\right\}$ is bounded and so is $\left\{T{x}_{n}\right\}$. Moreover,

$\begin{array}{rcl}\parallel {P}_{C}\left[\left(1-{\alpha }_{n}\right)T\right]{x}_{n}-\stackrel{ˆ}{x}\parallel & \le & \parallel \left(1-{\alpha }_{n}\right)T{x}_{n}-\stackrel{ˆ}{x}\parallel \\ =& \parallel \left(1-{\alpha }_{n}\right)\left[T{x}_{n}-\stackrel{ˆ}{x}\right]-{\alpha }_{n}\stackrel{ˆ}{x}\parallel \\ \le & \left(1-{\alpha }_{n}\right)\parallel {x}_{n}-\stackrel{ˆ}{x}\parallel +{\alpha }_{n}\parallel \stackrel{ˆ}{x}\parallel \\ \le & max\left\{\parallel {x}_{n}-\stackrel{ˆ}{x}\parallel ,\parallel \stackrel{ˆ}{x}\parallel \right\}.\end{array}$

Since $\left\{{x}_{n}\right\}$ is bounded, we see that $\left\{T{x}_{n}\right\}$, $\left\{\left(1-{\alpha }_{n}\right)T{x}_{n}\right\}$, and $\left\{{P}_{C}\left[\left(1-{\alpha }_{n}\right)T\right]{x}_{n}\right\}$ are also bounded.

Let ${z}_{n}={P}_{C}\left[\left(1-{\alpha }_{n}\right)T\right]{x}_{n}$, and let $M>0$ be such that $M={sup}_{n\ge 1}\parallel T{x}_{n}\parallel$. Noting that

$\begin{array}{rcl}\parallel {P}_{C}\left[\left(1-{\alpha }_{n+1}\right)T\right]{x}_{n}-{P}_{C}\left[\left(1-{\alpha }_{n}\right)T\right]{x}_{n}\parallel & \le & \parallel \left(1-{\alpha }_{n+1}\right)T{x}_{n}-\left(1-{\alpha }_{n}\right)T{x}_{n}\parallel \\ =& \parallel \left({\alpha }_{n}-{\alpha }_{n+1}\right)T{x}_{n}\parallel \\ \le & M|{\alpha }_{n}-{\alpha }_{n+1}|.\end{array}$

One concludes that

$\begin{array}{rcl}\parallel {z}_{n+1}-{z}_{n}\parallel & =& \parallel {P}_{C}\left[\left(1-{\alpha }_{n+1}\right)T\right]{x}_{n+1}-{P}_{C}\left[\left(1-{\alpha }_{n}\right)T\right]{x}_{n}\parallel \\ \le & \parallel {P}_{C}\left[\left(1-{\alpha }_{n+1}\right)T\right]{x}_{n+1}-{P}_{C}\left[\left(1-{\alpha }_{n+1}\right)T\right]{x}_{n}\parallel \\ +\parallel {P}_{C}\left[\left(1-{\alpha }_{n+1}\right)T\right]{x}_{n}-{P}_{C}\left[\left(1-{\alpha }_{n}\right)T\right]{x}_{n}\parallel \\ \le & \left(1-{\alpha }_{n+1}\right)\parallel {x}_{n+1}-{x}_{n}\parallel +\parallel {P}_{C}\left[\left(1-{\alpha }_{n+1}\right)T\right]{x}_{n}-{P}_{C}\left[\left(1-{\alpha }_{n}\right)T\right]{x}_{n}\parallel \\ \le & \left(1-{\alpha }_{n+1}\right)\parallel {x}_{n+1}-{x}_{n}\parallel +M|{\alpha }_{n}-{\alpha }_{n+1}|.\end{array}$

Since $0<{\alpha }_{n}<1$ and ${lim}_{n\to \mathrm{\infty }}|{\alpha }_{n+1}-{\alpha }_{n}|=0$, we have

$\parallel {z}_{n+1}-{z}_{n}\parallel -\parallel {x}_{n+1}-{x}_{n}\parallel \le M|{\alpha }_{n}-{\alpha }_{n+1}|,$

and

$\underset{n\to \mathrm{\infty }}{lim sup}\left(\parallel {z}_{n+1}-{z}_{n}\parallel -\parallel {x}_{n+1}-{x}_{n}\parallel \right)\le 0.$

Applying Lemma 1.3, we get

$\underset{n\to \mathrm{\infty }}{lim}\parallel {P}_{C}\left[\left(1-{\alpha }_{n}\right)T\right]{x}_{n}-{x}_{n}\parallel =\underset{n\to \mathrm{\infty }}{lim}\parallel {z}_{n}-{x}_{n}\parallel =0.$

Hence,

$\begin{array}{rcl}\parallel {x}_{n+1}-{x}_{n}\parallel & =& \parallel \left(1-{\beta }_{n}\right){x}_{n}+{\beta }_{n}{P}_{C}\left[\left(1-{\alpha }_{n}\right)T\right]{x}_{n}-{x}_{n}\parallel \\ =& {\beta }_{n}\parallel {P}_{C}\left[\left(1-{\alpha }_{n}\right)T\right]{x}_{n}-{x}_{n}\parallel \to 0.\end{array}$

Let ${R}_{n}$ and R be defined by

$\begin{array}{c}{R}_{n}x:={P}_{C}\left\{\left(1-{\alpha }_{n}\right)\left[I-\gamma {P}^{\ast }{G}^{\ast }GP\right]\right\}x={P}_{C}\left[\left(1-{\alpha }_{n}\right)Tx\right],\hfill \\ Rx:={P}_{C}\left(I-\gamma {P}^{\ast }{G}^{\ast }GP\right)x={P}_{C}\left(Tx\right).\hfill \end{array}$

Noting that

$\begin{array}{rcl}\parallel {x}_{n}-R{x}_{n}\parallel & \le & \parallel {x}_{n}-{x}_{n+1}\parallel +\parallel {x}_{n+1}-R{x}_{n}\parallel \\ =& \parallel {x}_{n}-{x}_{n+1}\parallel +\parallel \left(1-{\beta }_{n}\right){x}_{n}+{\beta }_{n}{R}_{n}{x}_{n}-R{x}_{n}\parallel \\ \le & \parallel {x}_{n}-{x}_{n+1}\parallel +\left(1-{\beta }_{n}\right)\parallel {x}_{n}-R{x}_{n}\parallel +{\beta }_{n}\parallel {R}_{n}{x}_{n}-R{x}_{n}\parallel .\end{array}$

So, we have

$\begin{array}{rcl}\parallel {x}_{n}-R{x}_{n}\parallel & \le & \parallel {x}_{n}-{x}_{n+1}\parallel /{\beta }_{n}+\parallel {R}_{n}{x}_{n}-R{x}_{n}\parallel \\ =& \parallel {x}_{n}-{x}_{n+1}\parallel /{\beta }_{n}+\parallel {P}_{C}\left[\left(1-{\alpha }_{n}\right)T\right]{x}_{n}-{P}_{C}T{x}_{n}\parallel \\ \le & \parallel {x}_{n}-{x}_{n+1}\parallel /{\beta }_{n}+\parallel \left(1-{\alpha }_{n}\right)T{x}_{n}-T{x}_{n}\parallel \\ \le & \parallel {x}_{n}-{x}_{n+1}\parallel /{\beta }_{n}+M{\alpha }_{n}.\end{array}$

By assumption, we have

$\underset{n\to \mathrm{\infty }}{lim}\parallel {x}_{n}-R{x}_{n}\parallel =0.$

Furthermore, since $\left\{{x}_{n}\right\}$ is bounded, there exists a subsequence of $\left\{{x}_{n}\right\}$ which converges weakly to a point $\stackrel{ˇ}{x}$. Without loss of generality, we may assume that $\left\{{x}_{n}\right\}$ converges weakly to $\stackrel{ˇ}{x}$. Since $\parallel R{x}_{n}-{x}_{n}\parallel \to 0$, using the demiclosedness principle we know that $\stackrel{ˇ}{x}\in Fix\left(R\right)=Fix\left({P}_{C}T\right)=Fix\left({P}_{C}\right)\cap Fix\left(T\right)=\mathrm{\Gamma }$.

Finally, we will prove that ${lim}_{n}\parallel {x}_{n+1}-\stackrel{ˇ}{x}\parallel =0$. In fact,

$\begin{array}{rcl}{\parallel {x}_{n+1}-\stackrel{ˇ}{x}\parallel }^{2}& =& {\parallel \left(1-{\beta }_{n}\right){x}_{n}+{\beta }_{n}{P}_{C}\left[\left(1-{\alpha }_{n}\right)T\right]{x}_{n}-{P}_{C}T\stackrel{ˇ}{x}\parallel }^{2}\\ \le & \left(1-{\beta }_{n}\right){\parallel {x}_{n}-\stackrel{ˇ}{x}\parallel }^{2}+{\beta }_{n}{\parallel {P}_{C}\left[\left(1-{\alpha }_{n}\right)T\right]{x}_{n}-{P}_{C}T\stackrel{ˇ}{x}\parallel }^{2}\\ \le & \left(1-{\beta }_{n}\right){\parallel {x}_{n}-\stackrel{ˇ}{x}\parallel }^{2}+{\beta }_{n}{\parallel \left(1-{\alpha }_{n}\right)T{x}_{n}-\stackrel{ˇ}{x}\parallel }^{2}\\ =& \left(1-{\beta }_{n}\right){\parallel {x}_{n}-\stackrel{ˇ}{x}\parallel }^{2}+{\beta }_{n}{\parallel \left(1-{\alpha }_{n}\right)\left(T{x}_{n}-\stackrel{ˇ}{x}\right)-{\alpha }_{n}\stackrel{ˇ}{x}\parallel }^{2}\\ =& \left(1-{\beta }_{n}\right){\parallel {x}_{n}-\stackrel{ˇ}{x}\parallel }^{2}+{\beta }_{n}\left[{\left(1-{\alpha }_{n}\right)}^{2}{\parallel T{x}_{n}-\stackrel{ˇ}{x}\parallel }^{2}+{\alpha }_{n}^{2}{\parallel \stackrel{ˇ}{x}\parallel }^{2}\\ +2{\alpha }_{n}\left(1-{\alpha }_{n}\right)〈T{x}_{n}-\stackrel{ˇ}{x},-\stackrel{ˇ}{x}〉\right]\\ \le & \left(1-{\beta }_{n}\right){\parallel {x}_{n}-\stackrel{ˇ}{x}\parallel }^{2}+{\beta }_{n}\left[\left(1-{\alpha }_{n}\right){\parallel {x}_{n}-\stackrel{ˇ}{x}\parallel }^{2}+{\alpha }_{n}^{2}{\parallel \stackrel{ˇ}{x}\parallel }^{2}\\ +2{\alpha }_{n}\left(1-{\alpha }_{n}\right)〈T{x}_{n}-\stackrel{ˇ}{x},-\stackrel{ˇ}{x}〉\right]\\ =& \left(1-{\alpha }_{n}{\beta }_{n}\right){\parallel {x}_{n}-\stackrel{ˇ}{x}\parallel }^{2}+{\alpha }_{n}{\beta }_{n}\left[2\left(1-{\alpha }_{n}\right)〈T{x}_{n}-\stackrel{ˇ}{x},-\stackrel{ˇ}{x}〉+{\alpha }_{n}{\parallel \stackrel{ˇ}{x}\parallel }^{2}\right].\end{array}$

Using Lemma 1.2, we only need to prove that

$\underset{n\to \mathrm{\infty }}{lim sup}〈T{x}_{n}-\stackrel{ˇ}{x},-\stackrel{ˇ}{x}〉\le 0.$

By Lemma 2.5, T is averaged; that is, $T=\beta I+\left(1-\beta \right)V$, where $0<\beta <1$ and V is nonexpansive. Hence, for $z\in Fix\left({P}_{C}T\right)$, we have

$\begin{array}{rcl}{\parallel {x}_{n+1}-z\parallel }^{2}& =& {\parallel \left(1-{\beta }_{n}\right){x}_{n}+{\beta }_{n}{P}_{C}\left[\left(1-{\alpha }_{n}\right)T\right]{x}_{n}-z\parallel }^{2}\\ \le & \left(1-{\beta }_{n}\right){\parallel {x}_{n}-z\parallel }^{2}+{\beta }_{n}{\parallel \left(1-{\alpha }_{n}\right)T{x}_{n}-z\parallel }^{2}\\ =& \left(1-{\beta }_{n}\right){\parallel {x}_{n}-z\parallel }^{2}+{\beta }_{n}{\parallel \left(1-{\alpha }_{n}\right)\left(T{x}_{n}-z\right)-{\alpha }_{n}z\parallel }^{2}\\ \le & \left(1-{\beta }_{n}\right){\parallel {x}_{n}-z\parallel }^{2}+{\beta }_{n}\left[\left(1-{\alpha }_{n}\right){\parallel T{x}_{n}-z\parallel }^{2}+{\alpha }_{n}{\parallel z\parallel }^{2}\right]\\ \le & \left(1-{\beta }_{n}\right){\parallel {x}_{n}-z\parallel }^{2}+{\beta }_{n}\left[{\parallel T{x}_{n}-z\parallel }^{2}+{\alpha }_{n}{\parallel z\parallel }^{2}\right].\end{array}$

By Lemma 4.2, we have

$\begin{array}{rcl}{\parallel {x}_{n+1}-z\parallel }^{2}& \le & \left(1-{\beta }_{n}\right){\parallel {x}_{n}-z\parallel }^{2}\\ +{\beta }_{n}\left[{\parallel {x}_{n}-z\parallel }^{2}-\beta \left(1-\beta \right){\parallel V{x}_{n}-{x}_{n}\parallel }^{2}+{\alpha }_{n}{\parallel z\parallel }^{2}\right]\\ \le & {\parallel {x}_{n}-z\parallel }^{2}-{\beta }_{n}\beta \left(1-\beta \right){\parallel V{x}_{n}-{x}_{n}\parallel }^{2}+{\beta }_{n}{\alpha }_{n}{\parallel z\parallel }^{2}.\end{array}$

Let $N>0$ be such that $\parallel {x}_{n}-z\parallel \le N$ for all n; then it follows that

$\begin{array}{rcl}{\beta }_{n}\beta \left(1-\beta \right){\parallel V{x}_{n}-{x}_{n}\parallel }^{2}& \le & {\parallel {x}_{n}-z\parallel }^{2}-{\parallel {x}_{n+1}-z\parallel }^{2}+{\beta }_{n}{\alpha }_{n}{\parallel z\parallel }^{2}\\ \le & 2N|\parallel {x}_{n}-z\parallel -\parallel {x}_{n+1}-z\parallel |+{\beta }_{n}{\alpha }_{n}{\parallel z\parallel }^{2}\\ \le & 2N\parallel {x}_{n}-{x}_{n+1}\parallel +{\beta }_{n}{\alpha }_{n}{\parallel z\parallel }^{2}.\end{array}$

Hence,

$\beta \left(1-\beta \right){\parallel V{x}_{n}-{x}_{n}\parallel }^{2}\le \frac{2N\parallel {x}_{n}-{x}_{n+1}\parallel }{{\beta }_{n}}+{\alpha }_{n}{\parallel z\parallel }^{2}.$

Since $\parallel {x}_{n}-{x}_{n+1}\parallel \to 0$, we get

$\parallel V{x}_{n}-{x}_{n}\parallel \to 0.$

Therefore, since $T{x}_{n}-{x}_{n}=\left(1-\beta \right)\left(V{x}_{n}-{x}_{n}\right)$,

$\parallel T{x}_{n}-{x}_{n}\parallel \to 0.$

It follows that

$\underset{n\to \mathrm{\infty }}{lim sup}〈T{x}_{n}-\stackrel{ˇ}{x},-\stackrel{ˇ}{x}〉=\underset{n\to \mathrm{\infty }}{lim sup}〈{x}_{n}-\stackrel{ˇ}{x},-\stackrel{ˇ}{x}〉.$

Since $\left\{{x}_{n}\right\}$ converges weakly to $\stackrel{ˇ}{x}$, it follows that

$\underset{n\to \mathrm{\infty }}{lim sup}〈T{x}_{n}-\stackrel{ˇ}{x},-\stackrel{ˇ}{x}〉\le 0.$

□

Arguing as in the proof of Theorem 4.3, one can show that the following iterative algorithm also converges strongly to a solution of the problem $\mathcal{P}$; since the proof is similar, we omit it.

Algorithm 4.4 For an arbitrary initial point ${x}_{0}$, the sequence $\left\{{x}_{n}\right\}$ is generated by the iteration:

${x}_{n+1}=\left(1-{\beta }_{n}\right)\left(1-{\alpha }_{n}\right)\left(I-\gamma {P}^{\ast }{G}^{\ast }GP\right){x}_{n}+{\beta }_{n}{P}_{C}\left[\left(1-{\alpha }_{n}\right)\left(I-\gamma {P}^{\ast }{G}^{\ast }GP\right)\right]{x}_{n},$
(4.2)

where $\left\{{\alpha }_{n}\right\}$ and $\left\{{\beta }_{n}\right\}$ are sequences in $\left(0,1\right)$ such that

1. (i)

${lim}_{n\to \mathrm{\infty }}{\alpha }_{n}=0$, ${\sum }_{n=0}^{\mathrm{\infty }}{\alpha }_{n}=\mathrm{\infty }$;

2. (ii)

${lim}_{n\to \mathrm{\infty }}|{\alpha }_{n+1}-{\alpha }_{n}|=0$;

3. (iii)

$0<{lim inf}_{n\to \mathrm{\infty }}{\beta }_{n}\le {lim sup}_{n\to \mathrm{\infty }}{\beta }_{n}<1$.
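A finite-dimensional sketch of iteration (4.2) may clarify how it differs from (4.1): here the relaxation averages the unprojected and projected Halpern steps. The matrices P and G, the set $C=\left\{x:{x}_{i}\ge 1\right\}$, and the parameter choices ${\alpha }_{n}=1/\left(n+2\right)$, ${\beta }_{n}=1/2$ are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def km_cq_v2(P, G, x0, gamma, n_iter, proj_C, beta_n=0.5):
    """Sketch of (4.2): with y_n = (1 - alpha_n)(I - gamma P*G*GP) x_n,
    x_{n+1} = (1 - beta_n) y_n + beta_n P_C(y_n)."""
    M = P.T @ G.T @ G @ P
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        alpha = 1.0 / (n + 2)
        y = (1.0 - alpha) * (x - gamma * (M @ x))
        x = (1.0 - beta_n) * y + beta_n * proj_C(y)
    return x

# Toy instance: GPx = 0 iff x_1 = x_2; the solutions in C = {x >= 1} are
# the ray {x_1 = x_2 >= 1}, whose minimum-norm point is (1, 1).
P = np.eye(2)
G = np.array([[1.0, -1.0]])
x = km_cq_v2(P, G, x0=[3.0, 0.0], gamma=0.5, n_iter=5000,
             proj_C=lambda y: np.maximum(y, 1.0))
```

In this instance the iterates approach $\left(1,1\right)$ from inside the halved Halpern step, at a rate governed by ${\alpha }_{n}$.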

## References

1. Eckstein J, Svaiter BF: A family of projective splitting methods for the sum of two maximal monotone operators. Math. Program. 2008, 111: 173-199.

2. Eckstein J, Bertsekas D: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55: 293-318. 10.1007/BF01581204

3. Lions PL: Une méthode itérative de résolution d’une inéquation variationnelle. Isr. J. Math. 1978, 31: 204-208. 10.1007/BF02760552

4. Passty GB: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 1979, 72: 383-390. 10.1016/0022-247X(79)90234-8

5. Tseng P: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38: 431-446. 10.1137/S0363012998338806

6. Geobel K, Kirk WA Cambridge Studies in Advanced Mathematics 28. In Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.

7. Goebel K, Reich S: Uniform Convexity, Nonexpansive Mappings, and Hyperbolic Geometry. Dekker, New York; 1984.

8. Aoyama K, Kimura Y, Takahashi W, Toyoda M: Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal., Theory Methods Appl. 2007,67(8):2350-2360. 10.1016/j.na.2006.08.032

9. Suzuki T: Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces. Fixed Point Theory Appl. 2005, 1: 103-123.

10. Engl HW, Hanke M, Neubauer A Mathematics and Its Applications 375. In Regularization of Inverse Problems. Kluwer Academic, Dordrecht; 1996.

11. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004,20(1):103-120. 10.1088/0266-5611/20/1/006

## Acknowledgements

This research was supported by NSFC Grants No. 11071279, No. 11226125, and No. 11301379.

## Author information


### Corresponding author

Correspondence to Rudong Chen.

### Competing interests

The authors declare that they have no competing interests.



Shi, L.Y., Chen, R. & Wu, Y. Iterative algorithms for finding the zeroes of sums of operators. J Inequal Appl 2014, 349 (2014). https://doi.org/10.1186/1029-242X-2014-349