
On a system of monotone variational inclusion problems with fixed-point constraint

Abstract

In this paper, we study the problem of finding the solution of the system of monotone variational inclusion problems recently introduced by Chang et al. (Optimization 70(12):2511–2525, 2020) with the constraint of a fixed-point set of quasipseudocontractive mappings. We propose a new iterative method that employs an inertial technique with self-adaptive step size for approximating the solution of the problem in Hilbert spaces and prove a strong-convergence result for the proposed method under more relaxed conditions. Moreover, we apply our results to study related optimization problems. Finally, we present some numerical experiments to demonstrate the performance of our proposed method, compare it with a related method, and examine how the performance of our method depends on its key parameters.

1 Introduction

In recent years, the split inverse problem (SIP) has received much research attention (see [1, 11, 12, 20, 24, 50] and the references therein) because of its extensive applications, for example, in phase retrieval, signal processing, image recovery, intensity-modulated radiation therapy, data compression, among others (see [13, 14, 42] and the references therein). The SIP model is presented as follows: Find a point

$$ \hat{x}\in H_{1} \quad \text{that solves }\mathrm{IP}_{1} $$
(1.1)

such that

$$ \hat{y} := A\hat{x}\in H_{2} \quad \text{solves }\mathrm{IP}_{2}, $$
(1.2)

where \(H_{1}\) and \(H_{2}\) are real Hilbert spaces, \(\mathrm{IP} _{1}\) denotes an inverse problem formulated in \(H_{1}\) and \(\mathrm{IP} _{2}\) denotes an inverse problem formulated in \(H_{2}\), and \(A : H_{1} \rightarrow H_{2}\) is a bounded linear operator.

Censor and Elfving [14] in 1994 introduced the split feasibility problem (SFP), which was the first instance of the SIP for modeling inverse problems that arise from medical-image reconstruction. Since then, several authors have studied and developed different iterative methods for approximating the solution of the SFP. The SFP has wide areas of applications, for instance, in signal processing, approximation theory, control theory, geophysics, communications, biomedical engineering, etc. [13, 30]. The SFP is formulated as follows:

$$ \text{find a point } \hat{x}\in C \text{ such that } \hat{y}= A \hat{x}\in Q, $$
(1.3)

where C and Q are nonempty closed convex subsets of Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and \(A:H_{1}\rightarrow H_{2}\) is a bounded linear operator.

Moudafi [33] introduced another instance of the SIP known as the split monotone variational inclusion problem (SMVIP). Let \(H_{1}\), \(H_{2}\) be real Hilbert spaces, let \(f_{1}:H_{1}\to H_{1}\) and \(f_{2}:H_{2}\to H_{2}\) be inverse strongly monotone mappings, let \(A:H_{1}\to H_{2}\) be a bounded linear operator, and let \(B_{1}:H_{1}\to 2^{H_{1}}\), \(B_{2}:H_{2}\to 2^{H_{2}}\) be multivalued maximal monotone mappings. The SMVIP is formulated as follows:

$$ \text{find a point } \hat{x}\in H_{1} \text{ such that } 0 \in f_{1}( \hat{x}) + B_{1}(\hat{x}) $$
(1.4)

and

$$ \hat{y}=A\hat{x}\in H_{2} \quad \text{such that } 0\in f_{2}(\hat{y}) + B_{2}( \hat{y}). $$
(1.5)

We point out that if (1.4) and (1.5) are considered separately, then each of (1.4) and (1.5) is a monotone variational inclusion problem (MVIP) with solution set \((B_{1}+f_{1})^{-1}(0)\) and \((B_{2}+f_{2})^{-1}(0)\), respectively. Moudafi in [33] showed that \(x^{*}\in (B_{1}+f_{1})^{-1}(0)\) if and only if \(x^{*}=J_{\lambda }^{B_{1}}(I-\lambda f_{1})(x^{*})\), for all \(\lambda >0\), where \(J_{\lambda }^{B_{1}}:H_{1}\to H_{1}\) is the resolvent operator associated with \(B_{1}\) and λ defined by

$$ J_{\lambda }^{B_{1}}(x)=(I+\lambda B_{1})^{-1}x,\quad x \in H_{1}, \lambda >0. $$
(1.6)

It is known that the resolvent operator \(J_{\lambda }^{B_{1}}\) is single valued, nonexpansive and 1-inverse strongly monotone (see, e.g., [8]).

Moreover, it was shown in [33] that, if \(f_{1}\) is an α-inverse strongly monotone mapping and \(B_{1}\) is a maximal monotone mapping, then \(J_{\lambda }^{B_{1}}(I-\lambda f_{1})\) is averaged for \(0<\lambda <2\alpha \). Consequently, \(J_{\lambda }^{B_{1}}(I-\lambda f_{1})\) is nonexpansive. Furthermore, \((B_{1}+f_{1})^{-1}(0)\) was shown to be closed and convex.
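To fix ideas, the following minimal Python sketch (our illustration, not taken from [33]) realizes the operator \(J_{\lambda }^{B_{1}}(I-\lambda f_{1})\) on \(H_{1}=\mathbb{R}\) with the toy choices \(B_{1}=\partial |\cdot |\), whose resolvent is the soft-thresholding map, and \(f_{1}(x)=x-2\), which is 1-inverse strongly monotone; iterating the averaged operator recovers the unique zero of \(f_{1}+B_{1}\).

```python
import numpy as np

def soft_threshold(x, lam):
    # resolvent J_lam^B of B = subdifferential of |.|, a maximal monotone operator on R
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def forward_backward(x, lam, f):
    # the operator J_lam^B (I - lam f) from (1.6) and the discussion above
    return soft_threshold(x - lam * f(x), lam)

f1 = lambda x: x - 2.0   # toy 1-inverse strongly monotone mapping on R (an assumption)
lam = 0.5                # lam in (0, 2*alpha) with alpha = 1, so the operator is averaged

x = 5.0
for _ in range(60):      # Picard iteration of the averaged operator
    x = forward_backward(x, lam, f1)

print(x)                 # approx 1.0, the unique point with 0 in f1(x) + d|x|
```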

Moudafi [33] pointed out that the SMVIP (1.4) and (1.5) generalizes the split fixed-point problem, split feasibility problem, split variational inequality problem, split equilibrium problem, and split variational inclusion problem, which have been studied extensively by several researchers (e.g., see [3, 5, 10, 21, 25, 27, 36, 46, 49]). Moreover, it finds application in many real-life problems, for instance in sensor networks, in computerized tomography and data compression, in modeling inverse problems arising from phase retrieval [9, 19], and in modeling intensity-modulated radiation therapy treatment planning [13, 14].

If \(f_{1}\equiv 0\equiv f_{2}\), then the SMVIP (1.4) and (1.5) reduces to the following split variational inclusion problem (SVIP):

$$ \text{find a point } \hat{x}\in H_{1} \text{ such that } 0 \in B_{1}( \hat{x}) $$
(1.7)

and

$$ \hat{y}=A\hat{x}\in H_{2} \quad \text{such that } 0\in B_{2}(\hat{y}). $$
(1.8)

Moudafi [33] showed that the SVIP (1.7) and (1.8) includes the SFP (1.3) as a special case. Several authors have studied and proposed different iterative methods for solving the SVIP (1.7) and (1.8); see for instance [22, 27] and the references therein. However, results on the SMVIP (1.4) and (1.5) are relatively scarce in the literature.

Very recently, Yao et al. [48] proposed and studied the convergence of the following iterative method with an inertial extrapolation step for approximating the solution of the SMVIP (1.4) and (1.5) in Hilbert spaces (Algorithm 1), where \(F_{1}:=J_{\lambda }^{B_{1}}(I-\lambda f_{1})\) and \(F_{2}:=J_{\lambda }^{B_{2}}(I-\lambda f_{2})\), \(f_{1}:H_{1}\to H_{1}\) and \(f_{2}:H_{2}\to H_{2}\) are an \(\alpha _{1}\)-inverse strongly monotone mapping and an \(\alpha _{2}\)-inverse strongly monotone mapping, respectively, with \(\alpha =\min \{\alpha _{1}, \alpha _{2}\}\), \(A:H_{1}\to H_{2}\) is a bounded linear operator with adjoint \(A^{*}\), and \(B_{1}:H_{1}\to 2^{H_{1}}\), \(B_{2}:H_{2}\to 2^{H_{2}}\) are multivalued maximal monotone mappings. The authors proved a weak-convergence result for the sequence generated by the proposed algorithm under the following conditions:

  1. (i)

    The solution set \(\mathcal{F}\) is nonempty;

  2. (ii)

    \(\lambda \in (0,2\alpha )\), \(0<\liminf_{n\to \infty }\tau _{n}\le \limsup_{n\to \infty }\tau _{n}<1\);

  3. (iii)

    \(\{\mu _{n}\}_{n=1}^{\infty }\subset \ell _{1}\), i.e., \(\sum_{n=1}^{\infty }|\mu _{n}|<\infty \).

Bauschke and Combettes [6] pointed out that, in solving optimization problems, strong convergence of iterative schemes is more desirable and useful than weak convergence. Therefore, when solving optimization problems, researchers strive to construct algorithms that generate sequences converging strongly to the solution of the problem under investigation.


Algorithm 1

Also, very recently, Chang et al. [16] introduced and studied the following system of monotone variational inclusion problems in Hilbert spaces: find a point \(x^{*}\in H_{1}\) such that

$$ \textstyle\begin{cases} 0\in h_{i}(x^{*})+B_{i}(x^{*}),\quad i=1,2,\ldots ,m;\quad \text{and} \\ y^{*}=Ax^{*} \quad \text{solves } 0\in g_{j}(y^{*})+D_{j}(y^{*}),\quad j=1,2,\ldots ,k, \end{cases} $$
(1.11)

where for each \(i=1,2,\ldots ,m\) and \(j=1,2,\ldots ,k\), \(h_{i}\) and \(g_{j}\) are \(\varphi _{i}\)- and \(\vartheta _{j}\)- inverse strongly monotone mappings on \(H_{1}\) and \(H_{2}\), respectively, where \(\varphi _{i}>0\) and \(\vartheta _{j}>0\), \(B_{i}\) and \(D_{j}\) are multivalued maximal monotone operators on \(H_{1}\) and \(H_{2}\), respectively, and \(A:H_{1}\to H_{2}\) is a bounded linear operator. Set

$$ \phi =\min \{\varphi _{1}, \varphi _{2}, \ldots ,\varphi _{m}; \vartheta _{1},\vartheta _{2},\ldots ,\vartheta _{k}\}, $$
(1.12)

then all \(h_{i}\) and \(g_{j}\) are ϕ-inverse strongly monotone mappings. Moreover, the authors proposed the following inertial forward–backward splitting algorithm with the viscosity technique for approximating the solution of problem (1.11) in Hilbert spaces:

$$ \textstyle\begin{cases} w_{n} = x_{n} + \theta _{n}(x_{n} - x_{n-1}), \\ u_{n} = U(I-\gamma A^{*}(I-T)A)w_{n}, \\ x_{n+1}=\alpha _{n}f(w_{n})+(1-\alpha _{n})u_{n}, \end{cases} $$
(1.13)

where

$$ \textstyle\begin{cases} U:= J_{\lambda }^{B_{1}}(I-\lambda h_{1})\circ J_{\lambda }^{B_{2}}(I- \lambda h_{2})\circ \cdots \circ J_{\lambda }^{B_{m}}(I-\lambda h_{m}), \\ T:= J_{\lambda }^{D_{1}}(I-\lambda g_{1})\circ J_{\lambda }^{D_{2}}(I- \lambda g_{2})\circ \cdots \circ J_{\lambda }^{D_{k}}(I-\lambda g_{k}), \end{cases} $$
(1.14)

for each \(i=1,2,\ldots ,m\) and \(j=1,2,\ldots ,k\), \(h_{i}\), \(g_{j}\), \(B_{i}\), \(D_{j}\) are as defined in (1.11), and \(f:H_{1}\to H_{1}\) is a contraction with contraction constant \(\rho \in (\frac{1}{2}, 1)\). The authors proved a strong-convergence theorem for the proposed method under the following conditions:

  1. (i)

    The solution set Γ is nonempty;

  2. (ii)

    \(\{\alpha _{n}\}\subset (0,1)\) with \(\alpha _{n}\to 0\) and \(\sum_{n=1}^{\infty }\alpha _{n}=\infty \);

  3. (iii)

    \(0<\gamma <\frac{1}{2\|A\|^{2}}\), \(\lambda \in (0,2\phi )\) with ϕ as defined in (1.12);

  4. (iv)

    \(\sum_{n=1}^{\infty }\theta _{n}\|x_{n}-x_{n-1}\|<\infty \), \(\theta _{n} \in [0,1)\).
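For illustration, the following Python sketch runs scheme (1.13)–(1.14) on a toy instance of problem (1.11) with \(m=k=1\); the data \(A\), \(h_{1}\), \(g_{1}\), the contraction f, and the parameter sequences are our illustrative assumptions chosen to satisfy conditions (i)–(iv) above, not choices made in [16].

```python
import numpy as np

def soft(x, lam):                      # resolvent of lam * (subdifferential of ||.||_1)
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# toy instance of (1.11) with m = k = 1 (all data are illustrative assumptions)
A = np.diag([1.0, 2.0])                              # bounded linear operator R^2 -> R^2
h1 = lambda x: x - np.array([2.0, 2.0])              # 1-inverse strongly monotone
g1 = lambda y: y - np.array([2.0, 3.0])              # 1-inverse strongly monotone
lam = 0.5                                            # lam in (0, 2*phi), phi = 1
U = lambda x: soft(x - lam * h1(x), lam)             # (1.14) with m = 1
T = lambda y: soft(y - lam * g1(y), lam)             # (1.14) with k = 1

gamma = 0.9 / (2.0 * np.linalg.norm(A, 2) ** 2)      # gamma < 1/(2||A||^2), condition (iii)
f = lambda x: 0.6 * x                                # contraction with rho = 0.6 in (1/2, 1)

x_prev = x = np.array([5.0, -3.0])
for n in range(1, 300):
    alpha = 1.0 / (n + 1)                            # alpha_n -> 0, sum alpha_n = infinity
    theta = 1.0 / n ** 2                             # a simple choice consistent with (iv) for bounded iterates
    w = x + theta * (x - x_prev)                     # inertial step
    u = U(w - gamma * (A.T @ (A @ w - T(A @ w))))    # U(I - gamma A*(I - T)A) w
    x_prev, x = x, alpha * f(w) + (1 - alpha) * u    # viscosity step

print(x)   # approaches (1, 1), the unique solution of this toy instance
```

In this toy instance the solution set of (1.11) is the singleton \(\{(1,1)\}\), so the limit point does not depend on the choice of the contraction f.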

Remark 1.1

Observe that the problem (1.11) solved by Algorithm (1.13) is more general than the SMVIP (1.4) and (1.5) solved by Algorithm 1; the SMVIP (1.4) and (1.5) is the special case of problem (1.11) with \(m=k=1\). We also point out that the term \(\theta _{n}(x_{n} - x_{n-1})\) in Algorithm 1 and Algorithm (1.13) above is referred to as the inertial term. It is employed in algorithm design to accelerate the rate of convergence. However, we note that condition (iii) of Algorithm 1 and condition (iv) of Algorithm (1.13), imposed to incorporate the inertial term, are too restrictive, which might affect the implementation of the proposed methods. Another drawback of Algorithm (1.13) is that the contraction constant ρ of the contraction f is restricted to the interval \((\frac{1}{2}, 1)\). Moreover, the implementation of the proposed algorithm requires knowledge of the operator norm, which is often very difficult to calculate or even estimate. On the other hand, while Algorithm 1 does not require knowledge of the operator norm for its implementation, the authors were only able to obtain a weak-convergence result for the proposed algorithm.

From the above discussion, it is natural to ask the following questions:

Can we develop a new inertial iterative method with the viscosity technique that does not require knowledge of the operator norm for approximating the solution of the system of monotone variational inclusion problems (1.11), such that condition (iii) of Algorithm 1 and condition (iv) of Algorithm (1.13) are dispensed with, and still obtain a strong-convergence result? Can the contraction constant ρ of the contraction mapping f of Algorithm (1.13) be selected from a larger interval than \((\frac{1}{2}, 1)\)?

Some of our aims in this paper are to provide affirmative answers to the above questions.

Another problem we consider in this paper is the fixed-point problem (FPP). Let C be a nonempty closed convex subset of a real Hilbert space H and let \(S: C\rightarrow C\) be a nonlinear mapping. A point \(\hat{x}\in C\) is called a fixed point of S if \(S\hat{x} = \hat{x}\). We denote by \(F(S)\), the set of all fixed points of S, i.e.,

$$ F(S) = \{\hat{x}\in C: S\hat{x} = \hat{x}\}. $$
(1.15)

In recent years, the study of fixed-point theory for nonlinear mappings has flourished owing to its extensive applications in various fields like economics, compressed sensing, and other applied sciences (see [4, 17, 38] and the references therein).

Recently, optimization problems dealing with finding a common solution of the set of fixed points of nonlinear mappings and the set of solutions of the SMVIP (see, for instance, [3, 22]) were considered. One of the motivations for studying such a common solution problem is its potential application to mathematical models whose constraints can be expressed as FPPs and SMVIPs. Instances of this are found in practical problems such as signal processing, network-resource allocation, and image recovery. One scenario is the network bandwidth-allocation problem for two services in heterogeneous wireless access networks, where the bandwidths of the services are mathematically related (see, for instance, [26, 31] and the references therein).

Motivated by the above results and the current research interest in this direction, in this paper, we study the problem of finding the solution of the system of monotone variational inclusion problems (1.11) with the constraint of a fixed-point set of quasipseudocontractions. Precisely, we consider the following problem: find a point \(x^{*}\in F(S)\) such that

$$ \textstyle\begin{cases} 0\in h_{i}(x^{*})+B_{i}(x^{*}),\quad i=1,2,\ldots ,m; \quad \text{and} \\ y^{*}=Ax^{*} \quad \text{solves}\quad 0\in g_{j}(y^{*})+D_{j}(y^{*}), \quad j=1,2,\ldots ,k, \end{cases} $$
(1.16)

where \(S:H_{1}\to H_{1}\) is a quasipseudocontractive mapping, for each \(i=1,2,\ldots ,m\) and \(j=1,2,\ldots ,k\), \(h_{i}\) and \(g_{j}\) are \(\varphi _{i}\)- and \(\vartheta _{j}\)- inverse strongly monotone mappings on \(H_{1}\) and \(H_{2}\), respectively, where \(\varphi _{i}>0\) and \(\vartheta _{j}>0\), \(B_{i}\) and \(D_{j}\) are multivalued maximal monotone operators on \(H_{1}\) and \(H_{2}\), respectively, and \(A:H_{1}\to H_{2}\) is a bounded linear operator.

Moreover, we introduce a new inertial iterative method that employs the viscosity technique to approximate the solution of the problem in the framework of Hilbert spaces. Furthermore, under mild conditions we prove that the sequence generated by the proposed method converges strongly to a solution of the problem. We point out that the implementation of our algorithm does not require knowledge of the operator norm, and that the contraction constant of the contraction mapping employed in the viscosity technique can be selected in the interval \((0,1)\), a larger interval than the interval \((\frac{1}{2}, 1)\) required in Algorithm (1.13). In addition, we obtain a strong-convergence result while dispensing with condition (iii) of Algorithm 1 and condition (iv) of Algorithm (1.13). We further apply our results to study other optimization problems, and we provide some numerical experiments with graphical illustrations to demonstrate the implementability and efficiency of the proposed method in comparison with some methods in the current literature. Our results in this study improve and extend the recent ones announced by Yao et al. [48], Chang et al. [16], and many other results in the literature.

The paper is organized as follows: In Sect. 2, we recall basic definitions and lemmas employed in the convergence analysis. Section 3 presents the proposed algorithm and highlights some of its features, while in Sect. 4 we analyze the convergence of the proposed method. Section 5 presents applications of our results to some optimization problems. In Sect. 6, we provide some numerical examples with graphical illustrations and compare the performance of our proposed method with some of the existing methods in the literature. Finally, we give some concluding remarks in Sect. 7.

2 Preliminaries

In this section, we present some definitions and results, which will be needed in the following.

In what follows, we denote the weak and strong convergence of a sequence \(\{x_{n}\}\) to a point \(x \in H\) by \(x_{n} \rightharpoonup x\) and \(x_{n} \rightarrow x\), respectively, and \(w_{\omega }(x_{n})\) denotes the set of weak limits of \(\{x_{n}\}\), that is,

$$ w_{\omega }(x_{n}):= \bigl\{ x\in H: x_{n_{j}} \rightharpoonup x \text{ for some subsequence } \{x_{n_{j}}\} \text{ of } \{x_{n}\}\bigr\} ,$$

where H is a real Hilbert space. For a nonempty closed and convex subset C of H, the metric projection [37] \(P_{C}: H\rightarrow C\) is defined, for each \(x\in H\), as the unique element \(P_{C}x\in C\) such that

$$ \Vert x - P_{C}x \Vert = \inf \bigl\{ \Vert x-z \Vert : z\in C\bigr\} .$$

The operator \(P_{C}\) is nonexpansive and has the following properties [34, 44]:

  1. 1.

    it is firmly nonexpansive, that is,

    $$ \Vert P_{C}x - P_{C}y \Vert ^{2} \leq \langle P_{C}x - P_{C}y, x -y\rangle\quad \text{for all } x, y\in C;$$
  2. 2.

    for any \(x\in H\) and \(z\in C\), \(z = P_{C}x\) if and only if

    $$ \langle x - z, z - y\rangle \geq 0 \quad \text{for all } y\in C; $$
    (2.1)
  3. 3.

    for any \(x\in H\) and \(y\in C\),

    $$ \Vert P_{C}x - y \Vert ^{2} + \Vert x - P_{C}x \Vert ^{2} \leq \Vert x - y \Vert ^{2}. $$
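These properties are straightforward to check numerically; the short Python sketch below does so for the projection onto a closed ball (a toy choice of C, our assumption) at randomly drawn points.

```python
import numpy as np
rng = np.random.default_rng(0)

c, r = np.zeros(3), 1.0                          # C = closed ball B(c, r), a toy convex set

def proj_C(x):                                   # metric projection onto the ball
    d = np.linalg.norm(x - c)
    return x if d <= r else c + r * (x - c) / d

x1, x2 = 3.0 * rng.normal(size=3), 3.0 * rng.normal(size=3)
p1, p2 = proj_C(x1), proj_C(x2)
y = proj_C(10.0 * rng.normal(size=3))            # an arbitrary point of C

# property 1 (firm nonexpansiveness): ||P_C x1 - P_C x2||^2 <= <P_C x1 - P_C x2, x1 - x2>
print(np.linalg.norm(p1 - p2) ** 2 <= np.dot(p1 - p2, x1 - x2) + 1e-12)

# property 2, inequality (2.1): <x1 - P_C x1, P_C x1 - y> >= 0 for every y in C
print(np.dot(x1 - p1, p1 - y) >= -1e-12)

# property 3: ||P_C x1 - y||^2 + ||x1 - P_C x1||^2 <= ||x1 - y||^2
print(np.linalg.norm(p1 - y) ** 2 + np.linalg.norm(x1 - p1) ** 2
      <= np.linalg.norm(x1 - y) ** 2 + 1e-12)
```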

Definition 2.1

Let \(T:H\to H\) be a nonlinear mapping and I be the identity mapping on H. The mapping \(I-T\) is said to be demiclosed at zero if, whenever a sequence \(\{x_{n}\}\subset H\) converges weakly to x and \(\|x_{n}-Tx_{n}\|\to 0\), it follows that \(x\in F(T)\).

Definition 2.2

Let C be a nonempty closed convex subset of a real Hilbert space H. A mapping \(T: C\rightarrow C\) is said to be:

  1. (1)

    L-Lipschitz continuous, if there exists a constant \(L>0\) such that

    $$ \Vert Tx-Ty \Vert \leq L \Vert x-y \Vert , \quad \forall x,y \in C; $$

    if \(L\in [0,1)\), then T is called a contraction.

  2. (2)

    nonexpansive if T is 1-Lipschitz continuous;

  3. (3)

    averaged if it can be written as

    $$ T=(1-\alpha )I + \alpha S,$$

    where \(\alpha \in (0,1)\), \(S:C\to C\) is nonexpansive and I is the identity mapping on C;

  4. (4)

    firmly nonexpansive if

    $$\begin{aligned} \Vert Tx-Ty \Vert ^{2}\leq \Vert x-y \Vert ^{2}- \bigl\Vert (I-T)x-(I-T)y \bigr\Vert ^{2},\quad \forall x,y \in C; \end{aligned}$$
  5. (5)

    quasinonexpansive if \(F(T)\neq \emptyset \) and

    $$\begin{aligned} \Vert Tx-p \Vert \leq \Vert x-p \Vert , \quad \forall x\in C, \text{and } p\in F(T); \end{aligned}$$
  6. (6)

    firmly quasinonexpansive if \(F(T)\neq \emptyset \) and

    $$\begin{aligned} \Vert Tx-p \Vert ^{2}\leq \Vert x-p \Vert ^{2}- \bigl\Vert (I-T)x \bigr\Vert ^{2},\quad \forall x\in C, \text{and } p\in F(T); \end{aligned}$$
  7. (7)

    κ-strictly pseudocontractive if there exists \(\kappa \in [0,1)\) such that

    $$\begin{aligned} \Vert Tx-Ty \Vert ^{2}\leq \Vert x-y \Vert ^{2}+ \kappa \bigl\Vert (I-T)x-(I-T)y \bigr\Vert ^{2},\quad \forall x,y\in C; \end{aligned}$$
  8. (8)

    directed if \(F(T)\neq \emptyset \) and \(\langle Tx-p, Tx-x\rangle \leq 0\), \(\forall x\in C\), and \(p\in F(T)\);

  9. (9)

    demicontractive if \(F(T)\neq \emptyset \) and there exists \(\kappa \in [0,1)\) such that

    $$\begin{aligned} \Vert Tx-p \Vert ^{2}\leq \Vert x-p \Vert ^{2}+ \kappa \Vert Tx-x \Vert ^{2},\quad \forall x \in C \text{ and } p\in F(T); \end{aligned}$$
  10. (10)

    monotone if

    $$ \langle Tx-Ty, x-y \rangle \geq 0,\quad \forall x,y \in C; $$
  11. (11)

    L-inverse strongly monotone (L-ism), if there exists \(L >0\) such that

    $$ \langle Tx-Ty, x-y \rangle \geq L \Vert Tx-Ty \Vert ^{2},\quad \forall x,y \in C. $$

Remark 2.3

As pointed out by Bauschke and Combettes [6], \(T:C\to C\) is directed if and only if

$$\begin{aligned} \Vert Tx-p \Vert ^{2}\leq \Vert x-p \Vert ^{2}- \Vert Tx-x \Vert ^{2}, \quad \forall x\in C, \text{and } p\in F(T). \end{aligned}$$

In other words, the class of directed mappings coincides with the class of firmly quasinonexpansive mappings.

Remark 2.4

From the definitions above, we observe that the class of demicontractive mappings includes several other classes of nonlinear mappings, such as the directed mappings, the quasinonexpansive mappings, and the strictly pseudocontractive mappings with fixed points, as special cases. Also, it is well known that every L-ism mapping is \(\frac{1}{L}\)-Lipschitz continuous and monotone, and every Lipschitz continuous operator is uniformly continuous, but the converses of these statements are not always true (see, for example, [41]).

Definition 2.5

A nonlinear operator \(T:C\to C\) is called pseudocontractive if

$$\begin{aligned} \langle Tx-Ty, x-y\rangle \leq \Vert x-y \Vert ^{2},\quad \forall x,y \in C. \end{aligned}$$

The interest of pseudocontractive mappings lies in their connection with monotone mappings, that is, T is a pseudocontraction if and only if \(I-T\) is a monotone mapping. It is well known that T is pseudocontractive if and only if

$$\begin{aligned} \Vert Tx-Ty \Vert ^{2}\leq \Vert x-y \Vert ^{2}+ \bigl\Vert (I-T)x-(I-T)y \bigr\Vert ^{2}, \quad \forall x,y \in C. \end{aligned}$$

Definition 2.6

An operator \(T:C\to C\) is said to be quasipseudocontractive if \(F(T)\neq \emptyset \) and

$$\begin{aligned} \Vert Tx-p \Vert ^{2}\leq \Vert x-p \Vert ^{2}+ \Vert Tx-x \Vert ^{2}, \quad \forall x\in C, p\in F(T). \end{aligned}$$

It is obvious that the class of quasipseudocontractive mappings includes the class of demicontractive mappings and the class of pseudocontractive mappings with a nonempty fixed-point set.

We have the following result on L-Lipschitz quasipseudocontractive mappings.

Lemma 2.7

([15])

Let H be a real Hilbert space and \(T:H\to H\) be an L-Lipschitzian mapping with \(L\geq 1\). Denote

$$\begin{aligned} G:=(1-\psi )I+\psi T \bigl((1-\eta )I+\eta T \bigr). \end{aligned}$$

If \(0<\psi <\eta <\frac{1}{1+\sqrt{1+L^{2}}}\), then the following conclusions hold:

  1. (1)

    \(F(T)=F (T((1-\eta )I+\eta T) )=F(G)\).

  2. (2)

    If \(I-T\) is demiclosed at 0, then \(I-G\) is also demiclosed at 0.

  3. (3)

    In addition, if \(T:H\to H\) is quasipseudocontractive, then the mapping G is quasinonexpansive.
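The following Python sketch illustrates Lemma 2.7 with the toy mapping \(Tx=-2x\) on \(\mathbb{R}^{2}\) (our example, not one taken from [15]): T is 2-Lipschitz and quasipseudocontractive but not quasinonexpansive, while the associated mapping G is quasinonexpansive and has the same fixed point, as the lemma asserts.

```python
import numpy as np

# A toy quasipseudocontractive mapping on R^2 (an illustrative assumption):
# T x = -2x is 2-Lipschitz with F(T) = {0}; it satisfies Definition 2.6 since
# ||Tx||^2 = 4||x||^2 <= ||x||^2 + ||Tx - x||^2 = 10||x||^2, yet it is not
# quasinonexpansive because ||Tx|| = 2||x||.
T = lambda x: -2.0 * x
L = 2.0

bound = 1.0 / (1.0 + np.sqrt(1.0 + L ** 2))      # = 1/(1 + sqrt(5)), about 0.309
psi, eta = 0.2, 0.25                             # 0 < psi < eta < bound, as in Lemma 2.7

def G(x):                                        # G = (1 - psi)I + psi T((1 - eta)I + eta T)
    return (1.0 - psi) * x + psi * T((1.0 - eta) * x + eta * T(x))

x = np.array([3.0, -4.0])
p = np.zeros(2)                                  # the fixed point of T (and of G)
print(np.linalg.norm(G(x) - p) <= np.linalg.norm(x - p))   # True: G is quasinonexpansive
print(G(p))                                      # [0. 0.]: F(G) = F(T), as Lemma 2.7(1) asserts
```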

Lemma 2.8

([49])

(Demiclosedness Principle). Let T be a nonexpansive mapping on a closed convex subset C of a real Hilbert space H. Then, \(I-T\) is demiclosed at any point \(y\in H\), that is, if \(x_{n}\rightharpoonup x\) and \(x_{n}-Tx_{n}\to y\in H\), then \(x-Tx=y\).

Lemma 2.9

([18])

Let H be a real Hilbert space. Then, the following results hold for all \(x,y\in H\) and \(\delta \in \mathbb{R}\):

  1. (i)

    \(\|x + y\|^{2} \leq \|x\|^{2} + 2\langle y, x + y \rangle \);

  2. (ii)

    \(\|x + y\|^{2} = \|x\|^{2} + 2\langle x, y \rangle + \|y\|^{2}\);

  3. (iii)

    \(\|\delta x + (1-\delta ) y\|^{2} = \delta \|x\|^{2} + (1-\delta )\|y \|^{2} -\delta (1-\delta )\|x-y\|^{2}\).

Lemma 2.10

([40])

Let \(\{a_{n}\}\) be a sequence of nonnegative real numbers, \(\{\alpha _{n}\}\) be a sequence in \((0, 1)\) with \(\sum_{n=1}^{\infty }\alpha _{n} = \infty \), and \(\{b_{n}\}\) be a sequence of real numbers. Assume that

$$ a_{n+1}\leq (1 - \alpha _{n})a_{n} + \alpha _{n}b_{n},\quad \textit{for all } n\geq 1.$$

If \(\limsup_{k\rightarrow \infty }b_{n_{k}}\leq 0\) for every subsequence \(\{a_{n_{k}}\}\) of \(\{a_{n}\}\) satisfying \(\liminf_{k\rightarrow \infty }(a_{n_{k+1}} - a_{n_{k}})\geq 0\), then \(\lim_{n\rightarrow \infty }a_{n} =0\).

Lemma 2.11

([32])

Let \(\{a_{n}\}, \{c_{n}\}\subset \mathbb{R}_{+}\), \(\{\sigma _{n}\}\subset (0,1)\), and \(\{b_{n}\}\subset \mathbb{R}\) be sequences such that

$$ a_{n+1}\leq (1-\sigma _{n})a_{n} + b_{n} + c_{n}\quad \textit{for all } n \geq 0.$$

Assume \(\sum_{n=0}^{\infty }|c_{n}|<\infty \). Then, the following results hold:

  1. (i)

    If \(b_{n}\leq \beta \sigma _{n}\) for some \(\beta \geq 0\), then \(\{a_{n}\}\) is a bounded sequence.

  2. (ii)

    If we have

    $$ \sum_{n=0}^{\infty }\sigma _{n} = \infty \quad \textit{and}\quad \limsup_{n \rightarrow \infty }\frac{b_{n}}{\sigma _{n}}\leq 0,$$

then \(\lim_{n\rightarrow \infty }a_{n} =0\).

Lemma 2.12

([7, 47])

Let H be a real Hilbert space and let \(A,S,T,V:H\to H\) be given operators.

  1. (i)

    If \(T=(1-\alpha )S + \alpha V\) for some \(\alpha \in (0,1)\), where \(S:H\to H\) is β-averaged and \(V:H\to H\) is nonexpansive, then T is \(\alpha +(1-\alpha )\beta \)-averaged.

  2. (ii)

    The composite of finitely many averaged mappings is averaged. In particular, if \(T_{i}\) is \(\alpha _{i}\)-averaged, where \(\alpha _{i}\in (0,1)\) for \(i=1,2\), then the composite \(T_{1}\circ T_{2}\) is α-averaged, where \(\alpha =\alpha _{1}+\alpha _{2}-\alpha _{1}\alpha _{2}\).

  3. (iii)

    If the mappings \(\{T_{i}\}_{i=1}^{N}\) are averaged and have a common fixed point, then

    $$ \bigcap_{i=1}^{N} F(T_{i}) = F(T_{1}\circ T_{2}\circ \cdots \circ T_{N}). $$
  4. (iv)

    If A is β-ism and \(\gamma \in (0,\beta ]\), then \(T:=I-\gamma A\) is firmly nonexpansive.

  5. (v)

    T is nonexpansive if and only if its complement \(I-T\) is \(\frac{1}{2}\)-ism.

  6. (vi)

    If T is β-ism, then for \(\gamma >0\), γT is \(\frac{\beta }{\gamma }\)-ism.

  7. (vii)

    T is averaged if and only if its complement \(I-T\) is β-ism for some \(\beta >\frac{1}{2}\). Indeed, for \(\alpha \in (0,1), T\) is α-averaged if and only if \(I-T\) is \(\frac{1}{2\alpha }\)-ism.

  8. (viii)

    T is firmly nonexpansive if and only if its complement \(I-T\) is firmly nonexpansive.

Lemma 2.13

([45])

Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let \(A:H_{1}\to H_{2}\) be a nonzero bounded linear operator with adjoint \(A^{*}\) and let \(T:H_{2}\to H_{2}\) be a nonexpansive mapping. Then, \(A^{*}(I-T)A\) is \(\frac{1}{2\|A\|^{2}}\)-ism.
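The following Python sketch checks the inverse strong monotonicity asserted by Lemma 2.13 at randomly drawn points, with A a random matrix and T a box projection (both illustrative assumptions, not data from [45]).

```python
import numpy as np
rng = np.random.default_rng(2)

A = rng.normal(size=(4, 3))                        # a bounded linear operator R^3 -> R^4
T = lambda y: np.clip(y, -1.0, 1.0)                # projection onto a box: nonexpansive

def S(x):                                          # S = A*(I - T)A
    z = A @ x
    return A.T @ (z - T(z))

beta = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)     # the modulus asserted by Lemma 2.13

x, y = rng.normal(size=3), rng.normal(size=3)
lhs = np.dot(S(x) - S(y), x - y)
rhs = beta * np.linalg.norm(S(x) - S(y)) ** 2
print(lhs >= rhs - 1e-12)                          # True: the 1/(2||A||^2)-ism inequality holds
```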

Lemma 2.14

([44])

Let H be a real Hilbert space, \(r>0\), \(f:H\to H\) be a μ-ism mapping and \(B:H\to 2^{H}\) be a maximal monotone mapping. Then,

  1. (I)

    the following conclusions are equivalent:

    1. (i)

      \(x^{*}\in H\) such that \(0\in f(x^{*})+ B(x^{*})\);

    2. (ii)

      \(x^{*}\in F(J_{r}^{B}(I-rf))\).

  2. (II)

    If \(r\in (0,2\mu )\), then \(J_{r}^{B}(I-rf)\) is averaged.

3 Proposed method

In this section, we present our proposed algorithm and highlight some of its important features. We assume that:

  1. (1)

    \(H_{1}\) and \(H_{2}\) are real Hilbert spaces;

  2. (2)

    For each \(i=1,2,\ldots ,m\) and \(j=1,2,\ldots ,k\), \(h_{i}\) and \(g_{j}\) are \(\varphi _{i}\)- and \(\vartheta _{j}\)- inverse strongly monotone mappings on \(H_{1}\) and \(H_{2}\), respectively, where \(\varphi _{i}>0\) and \(\vartheta _{j}>0\), \(B_{i}\) and \(D_{j}\) are multivalued maximal monotone operators on \(H_{1}\) and \(H_{2}\), respectively. Let \(\phi =\min \{\varphi _{1}, \varphi _{2},\ldots ,\varphi _{m}; \vartheta _{1},\vartheta _{2},\ldots ,\vartheta _{k}\}\), then all \(h_{i}\) and \(g_{j}\) are ϕ-inverse strongly monotone mappings;

  3. (3)

    \(A:H_{1}\to H_{2}\) is a bounded linear operator with adjoint \(A^{*}\) and \(f:H_{1}\to H_{1}\) is a contraction with contraction constant \(\rho \in (0,1)\);

  4. (4)

    \(S:H_{1}\to H_{1}\) is a K-Lipschitz continuous quasipseudocontractive mapping with \(K\ge 1\) such that \(I-S\) is demiclosed at zero;

  5. (5)

    The solution set \(\Omega =\Gamma \cap F(S)\) is nonempty, where

    $$ \Gamma := \Biggl\{ x\in H_{1}:x\in \bigcap _{i=1}^{m} \bigl((h_{i}+B_{i})^{-1}(0) \bigr)\cap \Biggl(A^{-1} \Biggl(\bigcap _{j=1}^{k}(g_{j}+D_{j})^{-1}(0) \Biggr) \Biggr) \Biggr\} . $$
    (3.1)
  6. (6)

    We denote

    $$ \textstyle\begin{cases} U:= J_{\lambda }^{B_{1}}(I-\lambda h_{1})\circ J_{\lambda }^{B_{2}}(I- \lambda h_{2})\circ \cdots \circ J_{\lambda }^{B_{m}}(I-\lambda h_{m}), \\ T:= J_{\lambda }^{D_{1}}(I-\lambda g_{1})\circ J_{\lambda }^{D_{2}}(I- \lambda g_{2})\circ \cdots \circ J_{\lambda }^{D_{k}}(I-\lambda g_{k}). \end{cases} $$
    (3.2)

It was shown in [16] that the operators U and T defined above are averaged mappings.
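The following Python sketch shows one way the composition U in (3.2) can be assembled in practice; the choices \(B_{i}=\partial \|\cdot \|_{1}\) and the mappings \(h_{i}\) are illustrative assumptions, and the Picard iteration at the end merely visualizes that, when a common zero exists, the fixed points of U are exactly the common zeros of the \(h_{i}+B_{i}\) (Lemma 2.12(iii) together with Lemma 2.14(I)).

```python
import numpy as np

def soft(x, lam):                                    # resolvent of lam * (subdifferential of ||.||_1)
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def compose_U(h_list, lam):
    """The composition J_lam^{B_1}(I - lam h_1) o ... o J_lam^{B_m}(I - lam h_m) of (3.2),
    with every B_i taken as the subdifferential of ||.||_1 (an illustrative assumption)."""
    def U(x):
        for h in reversed(h_list):                   # the right-most factor acts first
            x = soft(x - lam * h(x), lam)
        return x
    return U

# two toy phi-ism mappings on R^2 sharing the common zero (1, 1) of h_i + B_i
h1 = lambda x: x - np.array([2.0, 2.0])              # 1-ism
h2 = lambda x: 0.5 * (x - np.array([3.0, 3.0]))      # 2-ism, so phi = 1 and lam in (0, 2)
U = compose_U([h1, h2], lam=0.5)

x = np.array([4.0, -1.0])
for _ in range(200):                                 # Picard iteration of the averaged map U
    x = U(x)
print(x)   # approx (1, 1): by Lemma 2.12(iii), F(U) is the set of common zeros here
```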

We establish the strong-convergence result for the proposed algorithm under the following conditions on the control parameters:

  1. (C1)

    \(\{\alpha _{n}\} \subset (0,1)\) such that \(\lim_{n\rightarrow \infty }\alpha _{n}=0\) and \(\sum_{n=0}^{\infty }\alpha _{n} = \infty \);

  2. (C2)

    \(\{\beta _{n}\}, \{\delta _{n}\},\{\xi _{n}\}\subset (0,1)\) such that \(0< a\le \beta _{n}, \delta _{n}, \xi _{n}\le b<1\);

  3. (C3)

    \(\{\epsilon _{n}\}\) is a positive sequence such that \(\lim_{n\rightarrow \infty }\frac{\epsilon _{n}}{\alpha _{n}}=0\);

  4. (C4)

    \(\theta >0\), \(\lambda \in (0,2\phi )\), \(0<\eta <\mu < \frac{1}{1+\sqrt{1+K^{2}}}\) and \(0 < \tau _{1} \leq \tau _{n} \leq \tau _{2} < 1\).

Now, our main algorithm is presented in Algorithm 2.


Algorithm 2

Remark 3.1

  • We point out that the step size of the proposed method defined in (3.4) does not depend on the norm of the bounded linear operator. This makes our algorithm easy to implement, unlike the methods proposed in [16, 22, 33, 50], which require knowledge of the operator norm for their implementation.

  • Step 1 of the algorithm can be implemented since the value of \(\|x_{n}-x_{n-1}\|\) is known prior to choosing \(\theta _{n}\). Also, observe that in incorporating the inertial term our method does not require stringent conditions such as condition (iii) of Algorithm 1 and condition (iv) of Algorithm (1.13).

  • We note that unlike in Algorithm (1.13), the viscosity technique employed in Step 5 of our algorithm accommodates a larger class of contraction mappings since the contraction constant \(\rho \in (0,1)\).

Remark 3.2

By conditions (C1) and (C3), it follows from (3.3) that

$$ \lim_{n\rightarrow \infty }\theta _{n} \Vert x_{n} - x_{n-1} \Vert = 0 \quad \text{and} \quad \lim_{n\rightarrow \infty } \frac{\theta _{n}}{\alpha _{n}} \Vert x_{n} - x_{n-1} \Vert = 0. $$
(3.5)

Remark 3.3

We note that in (3.4), the choice of the step size \(\gamma _{n}\) is independent of the operator norm \(\|A\|\). Also, the value of γ has no effect on the proposed algorithm but was introduced for clarity. Now, we show that the step size of the algorithm in (3.4) is well defined.

Lemma 3.4

The step sizes \(\{\gamma _{n}\}\) of Algorithm 2 defined by (3.4) are well defined.

Proof

Let \(p\in \Omega \). Then, by Lemma 2.14(I) we have that \(p\in \bigcap_{i=1}^{m} ((h_{i}+B_{i})^{-1}(0) )\) and \(Ap\in \bigcap_{j=1}^{k} ((g_{j}+D_{j})^{-1}(0) )\). From Lemma 2.12(iii) we have \(p\in F(U)\) and \(Ap\in F(T)\). Applying the fact that T is averaged together with Lemma 2.12(vii), we have

$$\begin{aligned} \bigl\Vert A^{*}(I-T)Aw_{n} \bigr\Vert \Vert w_{n}-p \Vert &\ge \bigl\langle A^{*}(I-T)Aw_{n}, w_{n}-p \bigr\rangle \\ &= \bigl\langle (I-T)Aw_{n}-(I-T)Ap, Aw_{n}-Ap \bigr\rangle \\ &\ge \beta \bigl\Vert (I-T)Aw_{n} \bigr\Vert ^{2}, \end{aligned}$$

for some \(\beta >\frac{1}{2}\). This shows that \(\|A^{*}(I-T)Aw_{n}\|>0\) when \(\|(I-T)Aw_{n}\|\ne 0\). Thus, \(\{\gamma _{n}\}\) is well defined. □
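As a complement to Lemma 3.4, the following Python sketch shows how the self-adaptive step size can be computed without any estimate of \(\|A\|\). The closed form used here, \(\gamma _{n}=\tau _{n}\|(T-I)Aw_{n}\|^{2}/\|A^{*}(T-I)Aw_{n}\|^{2}\) when \((T-I)Aw_{n}\ne 0\) and \(\gamma _{n}=\gamma \) otherwise, is inferred from the way \(\gamma _{n}\) enters the estimates of Sect. 4 and from Remark 3.3; it should be read as a sketch under that reading of (3.4), not as a verbatim restatement of the algorithm.

```python
import numpy as np

def step_size(A, T, w, tau, gamma):
    """Self-adaptive step size in the spirit of (3.4).
    The closed form below is an assumption inferred from Lemma 3.4 and the
    computations of Sect. 4; it is not quoted verbatim from the algorithm."""
    r = A @ w - T(A @ w)                     # (I - T)Aw_n
    if np.linalg.norm(r) == 0.0:
        return gamma                         # by Remark 3.3 the value of gamma is immaterial
    s = A.T @ r                              # A*(I - T)Aw_n, nonzero by Lemma 3.4
    return tau * np.linalg.norm(r) ** 2 / np.linalg.norm(s) ** 2

# toy usage with assumed data: T = projection onto a box, tau_n constant in (0, 1)
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
T = lambda y: np.clip(y, -1.0, 1.0)
w = np.array([3.0, -2.0])
print(step_size(A, T, w, tau=0.5, gamma=1.0))   # no estimate of ||A|| is needed
```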

4 Convergence analysis

First, we establish some lemmas before proving the strong-convergence theorem for the proposed algorithm.

Lemma 4.1

Let \(\{x_{n}\}\) be a sequence generated by Algorithm 2. Then, \(\{x_{n}\}\) is bounded.

Proof

Observe that the mapping \(P_{\Omega }\circ f\) is a contraction. Then, by the Banach Contraction Principle there exists an element \(p\in H_{1}\) such that \(p=P_{\Omega }\circ f(p)\). It follows that \(p\in \Omega \), \(Sp=p\), \(Up=p\), and \(TAp=Ap\). By applying Lemma 2.9(ii) and the nonexpansiveness of U, we obtain

$$\begin{aligned} \Vert u_{n}-p \Vert ^{2} &= \bigl\Vert U \bigl(w_{n} +\gamma _{n}A^{*}(T-I)Aw_{n} \bigr)-p \bigr\Vert ^{2} \\ &\leq \bigl\Vert w_{n} +\gamma _{n}A^{*}(T-I)Aw_{n}-p \bigr\Vert ^{2} \end{aligned}$$
(4.1)
$$\begin{aligned} &= \Vert w_{n}-p \Vert ^{2} + \gamma _{n}^{2} \bigl\Vert A^{*}(T-I)Aw_{n} \bigr\Vert ^{2} + 2 \gamma _{n}\bigl\langle w_{n} - p, A^{*}(T-I)Aw_{n} \bigr\rangle . \end{aligned}$$
(4.2)

Again, applying Lemma 2.9(ii) and the nonexpansiveness of T, we have

$$\begin{aligned} &\bigl\langle w_{n} - p, A^{*}(T-I)Aw_{n} \bigr\rangle \\ &\quad =\bigl\langle Aw_{n}-Ap, (T-I)Aw_{n} \bigr\rangle \\ &\quad = \bigl\langle TAw_{n} - Ap - (T-I)Aw_{n} , (T-I)Aw_{n} \bigr\rangle \\ &\quad = \bigl\langle TAw_{n} - Ap , (T-I)Aw_{n} \bigr\rangle - \bigl\langle (T-I)Aw_{n} , (T-I)Aw_{n} \bigr\rangle \\ &\quad = \bigl\langle TAw_{n} - Ap , (T-I)Aw_{n} \bigr\rangle - \bigl\Vert (T-I)Aw_{n} \bigr\Vert ^{2} \\ &\quad =\frac{1}{2} \bigl[ \Vert TAw_{n}-Ap \Vert ^{2} + \bigl\Vert (T-I)Aw_{n} \bigr\Vert ^{2}- \bigl\Vert TAw_{n}-Ap - (T-I)Aw_{n} \bigr\Vert ^{2} \bigr] \\ &\qquad {} - \bigl\Vert (T-I)Aw_{n} \bigr\Vert ^{2} \\ &\quad =\frac{1}{2} \bigl[ \Vert TAw_{n}-Ap \Vert ^{2} + \bigl\Vert (T-I)Aw_{n} \bigr\Vert ^{2} - \Vert Aw_{n} - Ap \Vert ^{2} \bigr] - \bigl\Vert (T-I)Aw_{n} \bigr\Vert ^{2} \\ &\quad =\frac{1}{2} \bigl[ \Vert TAw_{n}-Ap \Vert ^{2} - \Vert Aw_{n} - Ap \Vert ^{2} - \bigl\Vert (T-I)Aw_{n} \bigr\Vert ^{2} \bigr] \\ &\quad \leq \frac{1}{2} \bigl[ \Vert Aw_{n}-Ap \Vert ^{2} - \Vert Aw_{n} - Ap \Vert ^{2} - \bigl\Vert (T-I)Aw_{n} \bigr\Vert ^{2} \bigr] \\ &\quad =-\frac{1}{2} \bigl\Vert (T-I)Aw_{n} \bigr\Vert ^{2}. \end{aligned}$$
(4.3)

Applying (4.3) into (4.2) and using the definition of \(\gamma _{n}\) together with the condition on \(\tau _{n}\), we have

$$\begin{aligned} \Vert u_{n}-p \Vert ^{2} &\leq \Vert w_{n}-p \Vert ^{2} + \gamma _{n}^{2} \bigl\Vert A^{*}(T-I)Aw_{n} \bigr\Vert ^{2} - \gamma _{n} \bigl\Vert (T-I)Aw_{n} \bigr\Vert ^{2} \\ &= \Vert w_{n}-p \Vert ^{2} -\gamma _{n} \bigl[ \bigl\Vert (T-I)Aw_{n} \bigr\Vert ^{2} - \gamma _{n} \bigl\Vert A^{*}(T-I)Aw_{n} \bigr\Vert ^{2}\bigr] \\ &= \Vert w_{n}-p \Vert ^{2} -\gamma _{n}(1- \tau _{n}) \bigl\Vert (T-I)Aw_{n} \bigr\Vert ^{2} \end{aligned}$$
(4.4)
$$\begin{aligned} &\leq \Vert w_{n}-p \Vert ^{2} . \end{aligned}$$
(4.5)

Next, using the triangle inequality, we obtain from Step 2

$$\begin{aligned} \Vert w_{n} - p \Vert & = \bigl\Vert x_{n} + \theta _{n}(x_{n} - x_{n-1}) - p \bigr\Vert \\ &\leq \Vert x_{n} - p \Vert + \theta _{n} \Vert x_{n} - x_{n-1} \Vert \\ & = \Vert x_{n} - p \Vert + \alpha _{n} \frac{\theta _{n}}{\alpha _{n}} \Vert x_{n} - x_{n-1} \Vert . \end{aligned}$$
(4.6)

Since, by Remark 3.2, \(\lim_{n\rightarrow \infty }\frac{\theta _{n}}{\alpha _{n}}\|x_{n} - x_{n-1} \| = 0\), there exists a constant \(M_{1} > 0\) such that \(\frac{\theta _{n}}{\alpha _{n}}\|x_{n} - x_{n-1}\|\leq M_{1}\) for all \(n\geq 1\). Consequently, from (4.6) we obtain

$$ \Vert w_{n} - p \Vert \leq \Vert x_{n} - p \Vert + \alpha _{n}M_{1}. $$
(4.7)

By the conditions on η and μ, and by Lemma 2.7, we know that \(\mathbb{V}\) is quasinonexpansive. Consequently, by applying the triangle inequality, and using (4.5) and (4.7), from Step 4 we have

$$\begin{aligned} \Vert v_{n}-p \Vert &= \bigl\Vert \beta _{n}w_{n} + (1-\beta _{n})\mathbb{V}u_{n} -p \bigr\Vert \\ &\le \beta _{n} \Vert w_{n}-p \Vert + (1-\beta _{n}) \Vert \mathbb{V}u_{n} -p \Vert \\ &\le \beta _{n} \Vert w_{n}-p \Vert + (1-\beta _{n}) \Vert u_{n} -p \Vert \\ &\le \beta _{n} \Vert w_{n}-p \Vert + (1-\beta _{n}) \Vert w_{n} -p \Vert \\ &= \Vert w_{n} -p \Vert \\ &\le \Vert x_{n} - p \Vert + \alpha _{n}M_{1}. \end{aligned}$$
(4.8)

Now, by applying (4.5) and (4.8), from Step 5 it follows that

$$\begin{aligned} \Vert x_{n+1}-p \Vert ={}& \bigl\Vert \alpha _{n}f(w_{n}) + \delta _{n}u_{n} + \xi _{n}v_{n}-p \bigr\Vert \\ ={}& \bigl\Vert \alpha _{n}\bigl(f(w_{n})-f(p)\bigr) + \alpha _{n}\bigl(f(p)-p\bigr) + \delta _{n}(u_{n}-p) + \xi _{n}(v_{n}-p) \bigr\Vert \\ \le{}& \alpha _{n}\rho \vert \vert w_{n}-p \vert \vert + \alpha _{n} \vert \vert f(p)-p \vert \vert + \delta _{n} \vert \vert u_{n}-p \vert \vert + \xi _{n} \vert \vert v_{n}-p \vert \vert \\ \le{}& \alpha _{n}\rho \bigl( \Vert x_{n} - p \Vert + \alpha _{n}M_{1}\bigr) + \alpha _{n} \bigl\Vert f(p)-p \bigr\Vert + \delta _{n}\bigl( \Vert x_{n} - p \Vert + \alpha _{n}M_{1}\bigr) \\ &{} + \xi _{n}\bigl( \Vert x_{n} - p \Vert + \alpha _{n}M_{1} \bigr) \\ ={}&\bigl(\alpha _{n}\rho +(1-\alpha _{n})\bigr) \vert \vert x_{n}-p \vert \vert + \alpha _{n} \vert \vert f(p)-p \vert \vert + \bigl(\alpha _{n}\rho +(1-\alpha _{n})\bigr)\alpha _{n}M_{1} \\ ={}&\bigl(1-\alpha _{n}(1-\rho )\bigr) \Vert x_{n}-p \Vert + \alpha _{n}(1-\rho ) \biggl\{ \frac{ \Vert f(p)-p \Vert }{1-\rho } + \frac{(1-\alpha _{n}(1-\rho ))M_{1}}{1-\rho } \biggr\} \\ \le{}& \bigl(1-\alpha _{n}(1-\rho )\bigr) \Vert x_{n}-p \Vert + \alpha _{n}(1-\rho )M^{*}, \end{aligned}$$

where \(M^{*} :=\sup_{n\in \mathbb{N}} \{\frac{\|f(p)-p\|}{1-\rho } + \frac{(1-\alpha _{n}(1-\rho ))M_{1}}{1-\rho } \}\). Set \(a_{n}:=\|x_{n}-p\|\); \(b_{n}:=\alpha _{n}(1-\rho )M^{*}\); \(c_{n}:=0\), and \(\sigma _{n}:=\alpha _{n}(1-\rho )\). By invoking Lemma 2.11(i) together with the assumptions on the control parameters, we have that \(\{\|x_{n}-p\|\}\) is bounded and this implies that \(\{x_{n}\}\) is bounded. Consequently, \(\{w_{n}\}\), \(\{u_{n}\}\), and \(\{v_{n}\}\) are all bounded. □

Lemma 4.2

Let \(\{x_{n}\}\) be a sequence generated by Algorithm 2 and \(p\in \Omega \). Then, under conditions (C1)–(C4) the following inequality holds for all \(n\in \mathbb{N}\):

$$\begin{aligned} \Vert x_{n+1}-p \Vert ^{2}\leq {}&\biggl(1- \frac{2\alpha _{n}(1-\rho )}{(1-\alpha _{n}\rho )} \biggr) \Vert x_{n} - p \Vert ^{2} \\ &{} + \frac{2\alpha _{n}(1-\rho )}{(1-\alpha _{n}\rho )} \biggl\{ \frac{\alpha _{n}}{2(1-\rho )}M + \frac{3M_{2} ((1-\alpha _{n})^{2}+\alpha _{n}\rho )}{2(1-\rho )} \frac{\theta _{n}}{\alpha _{n}} \Vert x_{n} - x_{n-1} \Vert \\ &{} + \frac{1}{(1-\rho )}\bigl\langle f(p) - p, x_{n+1} -p \bigr\rangle \biggr\} \\ &{}- \frac{\xi _{n}(1-\alpha _{n})(1-\beta _{n})}{(1-\alpha _{n}\rho )} \bigl\{ \gamma _{n}(1-\tau _{n}) \bigl\Vert (T-I)Aw_{n} \bigr\Vert ^{2} +\beta _{n} \Vert w_{n}- \mathbb{V}u_{n} \Vert \bigr\} . \end{aligned}$$

Proof

Let \(p\in \Omega \). Then, by applying the Cauchy–Schwarz inequality together with Lemma 2.9(ii), we obtain

$$\begin{aligned} \Vert w_{n} - p \Vert ^{2} &= \bigl\Vert x_{n} + \theta _{n}(x_{n} - x_{n-1}) - p \bigr\Vert ^{2} \\ &= \Vert x_{n} - p \Vert ^{2} + \theta _{n}^{2} \Vert x_{n} - x_{n-1} \Vert ^{2} + 2 \theta _{n}\langle x_{n} - p, x_{n} - x_{n-1} \rangle \\ &\leq \Vert x_{n} - p \Vert ^{2} + \theta _{n}^{2} \Vert x_{n} - x_{n-1} \Vert ^{2} + 2 \theta _{n} \Vert x_{n} - x_{n-1} \Vert \Vert x_{n} - p \Vert \\ &= \Vert x_{n} - p \Vert ^{2} + \theta _{n} \Vert x_{n} - x_{n-1} \Vert \bigl(\theta _{n} \Vert x_{n} - x_{n-1} \Vert + 2 \Vert x_{n} - p \Vert \bigr) \\ &\leq \Vert x_{n} - p \Vert ^{2} + 3M_{2} \theta _{n} \Vert x_{n} - x_{n-1} \Vert \\ &= \Vert x_{n} - p \Vert ^{2} + 3M_{2}\alpha _{n} \frac{\theta _{n}}{\alpha _{n}} \Vert x_{n} - x_{n-1} \Vert , \end{aligned}$$
(4.9)

where \(M_{2}:= \sup_{n\in \mathbb{N}}\{\|x_{n} - p\|, \theta _{n}\|x_{n} - x_{n-1} \|\}>0\).

Also, by applying Lemma 2.9(iii), (4.4) and (4.9), we have

$$\begin{aligned} \Vert v_{n}-p \Vert ^{2} ={}& \bigl\Vert \beta _{n}w_{n} + (1-\beta _{n})\mathbb{V}u_{n}-p \bigr\Vert ^{2} \\ ={}&\beta _{n} \Vert w_{n}-p \Vert ^{2}+(1- \beta _{n}) \Vert \mathbb{V}u_{n}-p \Vert ^{2}- \beta _{n}(1-\beta _{n}) \Vert w_{n}-\mathbb{V}u_{n} \Vert \\ \le{}& \beta _{n} \Vert w_{n}-p \Vert ^{2}+(1- \beta _{n}) \Vert u_{n}-p \Vert ^{2}-\beta _{n}(1- \beta _{n}) \Vert w_{n}- \mathbb{V}u_{n} \Vert \\ \le{}& \beta _{n} \Vert w_{n}-p \Vert ^{2}+(1- \beta _{n}) \bigl\{ \Vert w_{n}-p \Vert ^{2} - \gamma _{n}(1-\tau _{n}) \bigl\Vert (T-I)Aw_{n} \bigr\Vert ^{2} \bigr\} \\ &{}-\beta _{n}(1-\beta _{n}) \Vert w_{n}-\mathbb{V}u_{n} \Vert \\ ={}& \Vert w_{n}-p \Vert ^{2}-(1-\beta _{n}) \gamma _{n}(1-\tau _{n}) \bigl\Vert (T-I)Aw_{n} \bigr\Vert ^{2} \\ &{}-\beta _{n}(1-\beta _{n}) \Vert w_{n}-\mathbb{V}u_{n} \Vert \end{aligned}$$
(4.10)
$$\begin{aligned} \le{}& \Vert w_{n}-p \Vert ^{2} . \end{aligned}$$
(4.11)

Next, invoking Lemma 2.9(i), and applying (4.5), (4.9) and (4.10) we obtain

$$\begin{aligned} \Vert x_{n+1}-p \Vert ^{2}={}& \bigl\Vert \alpha _{n}f(w_{n})+\delta _{n}u_{n}+\xi _{n}v_{n}-p \bigr\Vert ^{2} \\ \le{}& \bigl\Vert \delta _{n}(u_{n}-p)+\xi _{n}(v_{n}-p) \bigr\Vert ^{2} + 2\alpha _{n} \bigl\langle f(w_{n})-p, x_{n+1}-p \bigr\rangle \\ \le {}&\delta _{n}^{2} \Vert u_{n}-p \Vert ^{2}+\xi _{n}^{2} \Vert v_{n}-p \Vert ^{2}+2 \delta \xi _{n} \Vert u_{n}-p \Vert \Vert v_{n}-p \Vert \\ &{} + 2\alpha _{n}\bigl\langle f(w_{n})-f(p), x_{n+1}-p \bigr\rangle \\ &{} + 2\alpha _{n}\bigl\langle f(p)-p, x_{n+1}-p \bigr\rangle \\ \le{}& \delta _{n}^{2} \Vert u_{n}-p \Vert ^{2}+\xi _{n}^{2} \Vert v_{n}-p \Vert ^{2}+ \delta \xi _{n} \bigl\{ \Vert u_{n}-p \Vert ^{2}+ \Vert v_{n}-p \Vert ^{2} \bigr\} \\ &{} +2\alpha _{n} \rho \Vert w_{n}-p \Vert \Vert x_{n+1}-p \Vert \\ &{} + 2\alpha _{n}\bigl\langle f(p)-p, x_{n+1}-p \bigr\rangle \\ \le{}& \delta _{n}(\delta _{n}+\xi _{n}) \Vert u_{n}-p \Vert ^{2}+ \xi _{n}(\xi _{n}+ \delta _{n}) \Vert v_{n}-p \Vert ^{2} \\ &{} +\alpha _{n}\rho \bigl\{ \Vert w_{n}-p \Vert ^{2} + \Vert x_{n+1}-p \Vert ^{2} \bigr\} \\ &{} + 2\alpha _{n}\bigl\langle f(p)-p, x_{n+1}-p \bigr\rangle \\ \le{}& \delta _{n}(1-\alpha _{n}) \Vert w_{n}-p \Vert ^{2}+ \xi _{n}(1-\alpha _{n}) \bigl\{ \Vert w_{n}-p \Vert ^{2} \\ &{} -(1-\beta _{n})\gamma _{n}(1-\tau _{n}) \bigl\Vert (T-I)Aw_{n} \bigr\Vert ^{2} \\ & {}-\beta _{n}(1-\beta _{n}) \Vert w_{n}- \mathbb{V}u_{n} \Vert \bigr\} + \alpha _{n}\rho \bigl\{ \Vert w_{n}-p \Vert ^{2} + \Vert x_{n+1}-p \Vert ^{2} \bigr\} \\ &{} + 2 \alpha _{n}\bigl\langle f(p)-p, x_{n+1}-p \bigr\rangle \\ ={}& \bigl((1-\alpha _{n})^{2}+\alpha _{n}\rho \bigr) \Vert w_{n}-p \Vert ^{2} \\ &{} - \xi _{n}(1- \alpha _{n}) (1-\beta _{n}) \bigl\{ \gamma _{n}(1- \tau _{n}) \bigl\Vert (T-I)Aw_{n} \bigr\Vert ^{2} +\beta _{n} \Vert w_{n}- \mathbb{V}u_{n} \Vert \bigr\} \\ &{} +\alpha _{n}\rho \Vert x_{n+1}-p \Vert ^{2} + 2\alpha _{n}\bigl\langle f(p)-p, x_{n+1}-p \bigr\rangle \\ \le{}& \bigl((1-\alpha _{n})^{2}+\alpha _{n}\rho \bigr) \biggl\{ \Vert x_{n} - p \Vert ^{2} + 3M_{2}\alpha _{n}\frac{\theta _{n}}{\alpha _{n}} \Vert x_{n} - x_{n-1} \Vert \biggr\} \\ &{}+\alpha _{n}\rho \Vert x_{n+1}-p \Vert ^{2} \\ & {}+ 2\alpha _{n}\bigl\langle f(p)-p, x_{n+1}-p \bigr\rangle - \xi _{n}(1- \alpha _{n}) (1-\beta _{n}) \bigl\{ \gamma _{n}(1-\tau _{n}) \bigl\Vert (T-I)Aw_{n} \bigr\Vert ^{2} \\ &{} +\beta _{n} \Vert w_{n}- \mathbb{V}u_{n} \Vert \bigr\} . \end{aligned}$$

Consequently, we obtain

$$\begin{aligned} \Vert x_{n+1} - p \Vert ^{2}\le{}& \frac{ (1-2\alpha _{n}+\alpha _{n}^{2} + \alpha _{n}\rho )}{(1-\alpha _{n}\rho )} \Vert x_{n} - p \Vert ^{2} \\ &{} + 3M_{2} \frac{ ((1-\alpha _{n})^{2}+\alpha _{n}\rho )}{(1-\alpha _{n}\rho )} \alpha _{n}\frac{\theta _{n}}{\alpha _{n}} \Vert x_{n} - x_{n-1} \Vert \\ & {}+ \frac{2\alpha _{n}}{(1-\alpha _{n}\rho )}\bigl\langle f(p)-p, x_{n+1}-p \bigr\rangle \\ &{} - \frac{\xi _{n}(1-\alpha _{n})(1-\beta _{n})}{(1-\alpha _{n}\rho )} \bigl\{ \gamma _{n}(1-\tau _{n}) \bigl\Vert (T-I)Aw_{n} \bigr\Vert ^{2} \\ &{} +\beta _{n} \Vert w_{n}-\mathbb{V}u_{n} \Vert \bigr\} \\ ={}& \frac{ (1-2\alpha _{n} + \alpha _{n}\rho )}{(1-\alpha _{n}\rho )} \Vert x_{n} - p \Vert ^{2} + \frac{\alpha _{n}^{2}}{(1-\alpha _{n}\rho )} \Vert x_{n} - p \Vert ^{2} \\ &{} + 3M_{2} \frac{ ((1-\alpha _{n})^{2}+\alpha _{n}\rho )}{(1-\alpha _{n}\rho )} \alpha _{n}\frac{\theta _{n}}{\alpha _{n}} \Vert x_{n} - x_{n-1} \Vert \\ &{} + \frac{2\alpha _{n}}{(1-\alpha _{n}\rho )}\bigl\langle f(p)-p, x_{n+1}-p \bigr\rangle \\ &{} - \frac{\xi _{n}(1-\alpha _{n})(1-\beta _{n})}{(1-\alpha _{n}\rho )} \bigl\{ \gamma _{n}(1-\tau _{n}) \bigl\Vert (T-I)Aw_{n} \bigr\Vert ^{2} \\ & {}+\beta _{n} \Vert w_{n}-\mathbb{V}u_{n} \Vert \bigr\} \\ \leq{}& \biggl(1- \frac{2\alpha _{n}(1-\rho )}{(1-\alpha _{n}\rho )} \biggr) \Vert x_{n} - p \Vert ^{2} \\ &{} + \frac{2\alpha _{n}(1-\rho )}{(1-\alpha _{n}\rho )} \biggl\{ \frac{\alpha _{n}}{2(1-\rho )}M + \frac{3M_{2} ((1-\alpha _{n})^{2}+\alpha _{n}\rho )}{2(1-\rho )} \frac{\theta _{n}}{\alpha _{n}} \Vert x_{n} - x_{n-1} \Vert \\ &{} + \frac{1}{(1-\rho )}\bigl\langle f(p) - p, x_{n+1} -p \bigr\rangle \biggr\} \\ &{} - \frac{\xi _{n}(1-\alpha _{n})(1-\beta _{n})}{(1-\alpha _{n}\rho )} \bigl\{ \gamma _{n}(1-\tau _{n}) \bigl\Vert (T-I)Aw_{n} \bigr\Vert ^{2} +\beta _{n} \Vert w_{n}- \mathbb{V}u_{n} \Vert \bigr\} , \end{aligned}$$

where \(M:=\sup \{\|x_{n}-p\|^{2}: n\in \mathbb{N}\}\). Hence, we have the required inequality. □

Lemma 4.3

Suppose \(\{x_{n}\}\) is a sequence generated by Algorithm 2 such that conditions (C1)–(C4) are satisfied. Then, the following inequality holds for all \(p\in \Omega \) and \(n\in \mathbb{N}\):

$$\begin{aligned} \Vert x_{n+1}-p \Vert ^{2}\le{}& (1-\alpha _{n}) \Vert x_{n}-p \Vert ^{2} +\alpha _{n} \biggl\{ \bigl\Vert f(w_{n})-p \bigr\Vert ^{2} + 3M_{2}(1-\alpha _{n}) \frac{\theta _{n}}{\alpha _{n}} \Vert x_{n} - x_{n-1} \Vert \biggr\} \\ &{} -\delta \xi _{n} \Vert u_{n}-v_{n} \Vert ^{2}. \end{aligned}$$

Proof

Let \(p\in \Omega \). By applying Lemma 2.9(iii) together with (4.5), (4.9) and (4.11) we have

$$\begin{aligned} \Vert x_{n+1}-p \Vert ^{2}={}& \bigl\Vert \alpha _{n}f(w_{n})+\delta _{n}u_{n}+\xi _{n}v_{n}-p \bigr\Vert ^{2} \\ \le{}& \alpha _{n} \bigl\Vert f(w_{n})-p \bigr\Vert ^{2}+\delta \Vert u_{n}-p \Vert ^{2}+\xi _{n} \Vert v_{n}-p \Vert ^{2} -\delta \xi _{n} \Vert u_{n}-v_{n} \Vert ^{2} \\ \le{}& \alpha _{n} \bigl\Vert f(w_{n})-p \bigr\Vert ^{2}+\delta \Vert w_{n}-p \Vert ^{2}+\xi _{n} \Vert w_{n}-p \Vert ^{2} -\delta \xi _{n} \Vert u_{n}-v_{n} \Vert ^{2} \\ ={}& \alpha _{n} \bigl\Vert f(w_{n})-p \bigr\Vert ^{2}+(1-\alpha _{n}) \Vert w_{n}-p \Vert ^{2} - \delta \xi _{n} \Vert u_{n}-v_{n} \Vert ^{2} \\ \le{}& \alpha _{n} \bigl\Vert f(w_{n})-p \bigr\Vert ^{2}+(1-\alpha _{n}) \biggl\{ \Vert x_{n}-p \Vert ^{2} + 3M_{2}\alpha _{n}\frac{\theta _{n}}{\alpha _{n}} \Vert x_{n} - x_{n-1} \Vert \biggr\} \\ &{} -\delta \xi _{n} \Vert u_{n}-v_{n} \Vert ^{2} \\ ={}&(1-\alpha _{n}) \Vert x_{n}-p \Vert ^{2} + \alpha _{n} \biggl\{ \bigl\Vert f(w_{n})-p \bigr\Vert ^{2} + 3M_{2}(1-\alpha _{n})\frac{\theta _{n}}{\alpha _{n}} \Vert x_{n} - x_{n-1} \Vert \biggr\} \\ &{} -\delta \xi _{n} \Vert u_{n}-v_{n} \Vert ^{2}, \end{aligned}$$

which is the required inequality. □

Now, we are in a position to state and prove the strong-convergence theorem for the proposed algorithm.

Theorem 4.4

Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces and let \(f:H_{1}\rightarrow H_{1}\) be a contraction with coefficient \(\rho \in (0,1)\). Suppose \(\{x_{n}\}\) is a sequence generated by Algorithm 2 such that conditions (C1)–(C4) hold. Then, the sequence \(\{x_{n}\}\) converges strongly to a point \(\hat{x}\in \Omega \), where \(\hat{x} = P_{\Omega }\circ f(\hat{x})\).

Proof

Let \(\hat{x} = P_{\Omega }\circ f(\hat{x})\). From Lemma 4.2, we obtain

$$\begin{aligned} \Vert x_{n+1}-\hat{x} \Vert ^{2}\leq{}& \biggl(1- \frac{2\alpha _{n}(1-\rho )}{(1-\alpha _{n}\rho )} \biggr) \Vert x_{n} - \hat{x} \Vert ^{2} \\ &{} +\frac{2\alpha _{n}(1-\rho )}{(1-\alpha _{n}\rho )} \biggl\{ \frac{\alpha _{n}}{2(1-\rho )}M + \frac{3M_{2} ((1-\alpha _{n})^{2}+\alpha _{n}\rho )}{2(1-\rho )} \frac{\theta _{n}}{\alpha _{n}} \Vert x_{n} - x_{n-1} \Vert \\ &{} + \frac{1}{(1-\rho )}\bigl\langle f(\hat{x}) - \hat{x}, x_{n+1} - \hat{x} \bigr\rangle \biggr\} . \end{aligned}$$
(4.12)

Next, we claim that the sequence \(\{\|x_{n} - \hat{x}\|\}\) converges to zero. In order to establish this, by Lemma 2.10, it suffices to show that \(\limsup_{k\rightarrow \infty }\langle f(\hat{x}) - \hat{x}, x_{n_{k}+1} -\hat{x} \rangle \leq 0\) for every subsequence \(\{\|x_{n_{k}} - \hat{x}\|\}\) of \(\{\|x_{n} - \hat{x}\|\}\) satisfying

$$ \liminf_{k\rightarrow \infty }\bigl( \Vert x_{n_{k}+1} - \hat{x} \Vert - \Vert x_{n_{k}} - \hat{x} \Vert \bigr) \geq 0.$$

Suppose that \(\{\|x_{n_{k}} - \hat{x}\|\}\) is a subsequence of \(\{\|x_{n} - \hat{x}\|\}\) such that

$$ \liminf_{k\rightarrow \infty }\bigl( \Vert x_{n_{k}+1} - \hat{x} \Vert - \Vert x_{n_{k}} - \hat{x} \Vert \bigr) \geq 0. $$
(4.13)

Again, from Lemma 4.2 we obtain

$$\begin{aligned} &\frac{\xi _{n_{k}}(1-\alpha _{n_{k}})(1-\beta _{n_{k}})}{(1-\alpha _{n_{k}}\rho )} \gamma _{n_{k}}(1-\tau _{n_{k}}) \bigl\Vert (T-I)Aw_{n_{k}} \bigr\Vert ^{2} \\ &\quad \leq \biggl(1- \frac{2\alpha _{n_{k}}(1-\rho )}{(1-\alpha _{n_{k}}\rho )} \biggr) \Vert x_{n_{k}} - p \Vert ^{2} - \Vert x_{{n_{k}}+1}-p \Vert ^{2}+ \frac{2\alpha _{n_{k}}(1-\rho )}{(1-\alpha _{n_{k}}\rho )} \biggl\{ \frac{\alpha _{n_{k}}}{2(1-\rho )}M \\ &\qquad {} + \frac{3M_{2} ((1-\alpha _{n_{k}})^{2}+\alpha _{n_{k}}\rho )}{2(1-\rho )} \frac{\theta _{n_{k}}}{\alpha _{n_{k}}} \Vert x_{n_{k}} - x_{{n_{k}}-1} \Vert \\ &\qquad {}+ \frac{1}{(1-\rho )}\bigl\langle f(p) - p, x_{{n_{k}}+1} -p \bigr\rangle \biggr\} . \end{aligned}$$

By applying (4.13) together with condition (C2) and the fact that \(\lim_{k\rightarrow \infty }\alpha _{n_{k}}=0\), we have

$$ \gamma _{n_{k}}(1-\tau _{n_{k}}) \bigl\Vert (T-I)Aw_{n_{k}} \bigr\Vert ^{2} \rightarrow 0, \quad k\rightarrow \infty . $$

By the definition of \(\gamma _{n}\), we obtain

$$ \tau _{n_{k}}(1-\tau _{n_{k}}) \frac{ \Vert (T-I)Aw_{n_{k}} \Vert ^{4}}{ \Vert A^{*}(T-I)Aw_{n_{k}} \Vert ^{2}} \rightarrow 0,\quad k \rightarrow \infty . $$

Consequently, we have

$$ \frac{ \Vert (T-I)Aw_{n_{k}} \Vert ^{2}}{ \Vert A^{*}(T-I)Aw_{n_{k}} \Vert } \rightarrow 0,\quad k\rightarrow \infty . $$

Since \(\|A^{*}(T-I)Aw_{n_{k}}\|\) is bounded, it follows that

$$ \bigl\Vert (T-I)Aw_{n_{k}} \bigr\Vert \rightarrow 0,\quad k \rightarrow \infty . $$
(4.14)

Consequently, we obtain

$$\begin{aligned} \bigl\Vert A^{*}(T-I)Aw_{n_{k}} \bigr\Vert &\leq \bigl\Vert A^{*} \bigr\Vert \bigl\Vert (T-I)Aw_{n_{k}} \bigr\Vert \\ &= \Vert A \Vert \bigl\Vert (T-I)Aw_{n_{k}} \bigr\Vert \rightarrow 0,\quad k\rightarrow \infty . \end{aligned}$$
(4.15)

Following a similar argument, from Lemma 4.2 we obtain

$$ \beta _{n_{k}} \Vert w_{n_{k}}-\mathbb{V}u_{n_{k}} \Vert \rightarrow 0,\quad k \rightarrow \infty . $$

By condition (C2), it follows that

$$ \Vert w_{n_{k}}-\mathbb{V}u_{n_{k}} \Vert \rightarrow 0,\quad k\rightarrow \infty . $$
(4.16)

Next, from Lemma 4.3 we obtain

$$\begin{aligned} \delta \xi _{n_{k}} \Vert u_{n_{k}}-v_{n_{k}} \Vert ^{2}\le{}& (1-\alpha _{n_{k}}) \Vert x_{n_{k}}-\hat{x} \Vert ^{2} - \Vert x_{n_{k}+1}-\hat{x} \Vert ^{2}+ \alpha _{n_{k}} \biggl\{ \bigl\Vert f(w_{n_{k}})-\hat{x} \bigr\Vert ^{2} \\ & {}+ 3M_{2}(1-\alpha _{n_{k}}) \frac{\theta _{n_{k}}}{\alpha _{n_{k}}} \Vert x_{n_{k}} - x_{{n_{k}}-1} \Vert \biggr\} . \end{aligned}$$

By (4.13), Remark 3.2, and the fact that \(\lim_{k\rightarrow \infty }\alpha _{n_{k}}=0\), we obtain

$$ \delta \xi _{n_{k}} \Vert u_{n_{k}}-v_{n_{k}} \Vert ^{2}\rightarrow 0,\quad k \rightarrow \infty . $$
(4.17)

Consequently, we have

$$ \Vert u_{n_{k}}-v_{n_{k}} \Vert \rightarrow 0,\quad k\rightarrow \infty . $$
(4.18)

By Remark 3.2, we have

$$\begin{aligned} \Vert w_{n_{k}}- x_{n_{k}} \Vert &= \theta _{n_{k}} \Vert x_{n_{k}}-x_{n_{k}-1} \Vert \rightarrow 0,\quad k\rightarrow \infty . \end{aligned}$$
(4.19)

By applying (4.16), from Step 4 we have

$$ \Vert v_{n_{k}}- w_{n_{k}} \Vert \le \beta _{n_{k}} \Vert w_{n_{k}}- w_{n_{k}} \Vert + (1- \beta _{n_{k}}) \Vert \mathbb{V}u_{n_{k}}- w_{n_{k}} \Vert \to 0,\quad k\to \infty . $$
(4.20)

Next, by applying (4.16), (4.18), (4.19) and (4.20) we obtain

$$ \begin{aligned} &\Vert w_{n_{k}}- u_{n_{k}} \Vert \to 0,\quad k\to \infty ; \\ &\Vert x_{n_{k}}- \mathbb{V}u_{n_{k}} \Vert \to 0,\qquad k\to \infty ; \\ & \Vert x_{n_{k}}- v_{n_{k}} \Vert \to 0,\quad k\to \infty , \end{aligned}$$
(4.21)

and

$$ \begin{aligned} &\Vert x_{n_{k}}- u_{n_{k}} \Vert \to 0,\quad k\to \infty ; \\ &\Vert u_{n_{k}}- \mathbb{V}u_{n_{k}} \Vert \to 0,\quad k\to \infty . \end{aligned}$$
(4.22)

Now, applying (4.21) and (4.22) together with the fact that \(\lim_{k\rightarrow \infty }\alpha _{n_{k}}=0\), we obtain

$$\begin{aligned} \Vert x_{n_{k}+1}- x_{n_{k}} \Vert ={}& \bigl\Vert \alpha _{n_{k}}f(w_{n_{k}}) + \delta _{n_{k}}u_{n_{k}} + \xi _{n_{k}}v_{n_{k}}-x_{n_{k}} \bigr\Vert \\ \leq{}& \alpha _{n_{k}} \bigl\Vert f(w_{n_{k}})-x_{n_{k}} \bigr\Vert + \delta _{n_{k}} \Vert u_{n_{k}}-x_{n_{k}} \Vert \\ &{}+\xi _{n_{k}} \Vert v_{n_{k}}-x_{n_{k}} \Vert \to 0,\quad k\to \infty . \end{aligned}$$
(4.23)

To complete the proof, we need to show that \(w_{\omega }(x_{n})\subset \Omega \). Since \(\{x_{n}\}\) is bounded, then \(w_{\omega }(x_{n})\) is nonempty. Let \(x^{*}\in w_{\omega }(x_{n})\) be an arbitrary element. Then, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{k}}\rightharpoonup x^{*}\) as \(k\to \infty \). From (4.22), we have that \(u_{n_{k}}\rightharpoonup x^{*}\) as \(k\to \infty \). Since \(I-\mathbb{V}\) is demiclosed at zero, then it follows from (4.22) and Lemma 2.7 that \(x^{*}\in F(\mathbb{V})=F(S)\). That is, \(w_{\omega }(x_{n})\subset F(S)\).

Next, we show that \(w_{\omega }(x_{n})\subset \Gamma \). From Step 3 and by applying (4.21), we have

$$ \lim_{k\to \infty } \bigl\Vert U\bigl(I +\gamma _{n}A^{*}(T-I)A\bigr)w_{n_{k}}-w_{n_{k}} \bigr\Vert = \lim_{k\to \infty } \Vert u_{n_{k}}-w_{n_{k}} \Vert =0. $$
(4.24)

Since the operators U and \(I +\gamma _{n}A^{*}(T-I)A\) are averaged, it follows from Lemma 2.12(ii) that the composition \(U(I +\gamma _{n}A^{*}(T-I)A)\) is also averaged and consequently nonexpansive. By the Demiclosedness Principle for nonexpansive mappings, and by applying (4.19) and (4.24) we obtain \(U(I +\gamma _{n}A^{*}(T-I)A)x^{*}=x^{*}\). Since \(\Omega \ne \emptyset \), then by Lemma 2.12(iii) we have \(Ux^{*}=x^{*}\) and \((I +\gamma _{n}A^{*}(T-I)A)x^{*}=x^{*}\). It then follows from Lemma 2.12(iii) and Lemma 2.14(I) that

$$ 0\in \bigcap_{i=1}^{m}(h_{i}+B_{i})x^{*}. $$
(4.25)

Since T is nonexpansive, then by the Demiclosedness Principle for nonexpansive mappings, and by applying (4.14) and (4.19) we have \(TAx^{*} = Ax^{*}\). It then follows from Lemma 2.12(iii) and Lemma 2.14(I) that

$$ 0\in \bigcap_{j=1}^{k}(g_{j}+D_{j})Ax^{*}. $$
(4.26)

From (4.25) and (4.26), we obtain \(w_{\omega }(x_{n})\subset \Gamma \). Consequently, we have that \(w_{\omega }(x_{n})\subset \Omega \).

Next, from (4.21) we have that \(w_{\omega }\{v_{n_{k}}\} = w_{\omega }\{x_{n_{k}}\}\). By the boundedness of \(\{x_{n_{k}}\}\), there exists a subsequence \(\{x_{n_{k_{j}}}\}\) of \(\{x_{n_{k}}\}\) such that \(x_{n_{k_{j}}}\rightharpoonup x^{\dagger }\) and

$$ \begin{aligned}\lim_{j\rightarrow \infty }\bigl\langle f(\hat{x}) - \hat{x}, x_{n_{k_{j}}} -\hat{x} \bigr\rangle &= \limsup_{k\rightarrow \infty }\bigl\langle f(\hat{x}) - \hat{x}, x_{n_{k}} -\hat{x} \bigr\rangle \\ &= \limsup_{k\rightarrow \infty } \bigl\langle f(\hat{x}) - \hat{x}, v_{n_{k}} -\hat{x} \bigr\rangle . \end{aligned}$$

Since \(\hat{x}=P_{\Omega }\circ f(\hat{x})\), it follows that

$$\begin{aligned} \limsup_{k\rightarrow \infty }\bigl\langle f(\hat{x}) - \hat{x}, x_{n_{k}} - \hat{x} \bigr\rangle &= \lim_{j\rightarrow \infty }\bigl\langle f(\hat{x}) - \hat{x}, x_{n_{k_{j}}} -\hat{x} \bigr\rangle \\ &= \bigl\langle f(\hat{x}) - \hat{x}, x^{\dagger }-\hat{x} \bigr\rangle \leq 0. \end{aligned}$$
(4.27)

Now, from (4.23) and (4.27), we obtain

$$\begin{aligned} \limsup_{k\rightarrow \infty }\bigl\langle f(\hat{x}) - \hat{x}, x_{n_{k}+1} -\hat{x} \bigr\rangle ={}& \limsup_{k\rightarrow \infty }\bigl\langle f(\hat{x}) - \hat{x}, x_{n_{k}+1} -x_{n_{k}} \bigr\rangle \\ &{}+ \limsup_{k\rightarrow \infty }\bigl\langle f(\hat{x}) - \hat{x}, x_{n_{k}} -\hat{x} \bigr\rangle \\ ={}& \bigl\langle f(\hat{x}) - \hat{x}, x^{\dagger }-\hat{x} \bigr\rangle \leq 0. \end{aligned}$$
(4.28)

Applying Lemma 2.10 to (4.12), and using (4.28) together with the fact that \(\lim_{n\rightarrow \infty }\frac{\theta _{n}}{\alpha _{n}}\|x_{n} - x_{n-1}\| =0\) and \(\lim_{n\rightarrow \infty }\alpha _{n} = 0\), we deduce that \(\lim_{n\rightarrow \infty }\|x_{n} - \hat{x}\|=0\) as required. □

Remark 4.5

The results of this paper improve the results of Yao et al. [48] and Chang et al. [16] in the following ways:

  1. (i)

    Our result extends the result of Yao et al. [48] and the result of Chang et al. [16] from SMVIP (1.4) and (1.5) and a system of monotone variational inclusion problems (1.11), respectively, to the problem of finding a common solution of the system of monotone variational inclusion problems (1.11) and the fixed-point problem of quasipseudocontractions.

  2. (ii)

    While Yao et al. [48] were only able to prove a weak-convergence result, in this paper we established a strong-convergence result for our proposed algorithm.

  3. (iii)

    The proposed method of Chang et al. [16] requires knowledge of the operator norm for its implementation, while our proposed method is independent of the operator norm.

  4. (iv)

    Our method employs a very efficient inertial technique that does not require stringent conditions, such as those imposed in condition (iii) of Algorithm 1 of Yao et al. [48] and condition (iv) of Algorithm (1.13) of Chang et al. [16].

  5. (v)

    The viscosity technique we employed accommodates a larger class of contractions than the one employed by Chang et al. [16].

Remark 4.6

Since the class of quasipseudocontractions contains several other classes of nonlinear mappings such as the pseudocontractions, the demicontractive operators, the quasinonexpansive operators, the directed operators, and the strictly pseudocontractive mappings with fixed points as special cases, our results present a unified framework for studying these classes of operators.

5 Applications

In this section we consider some applications of our results to approximating solutions of related optimization problems in the framework of Hilbert spaces.

5.1 System of equilibrium problems with fixed-point constraint

Let C be a nonempty closed convex subset of a real Hilbert space H, and let \(F:C\times C\to \mathbb{R}\) be a bifunction. The equilibrium problem (EP) for the bifunction F on C is to find a point \(x^{*}\in C\) such that

$$ F\bigl(x^{*}, y\bigr) \ge 0, \quad \forall y\in C. $$
(5.1)

We denote the solution set of the EP (5.1) by \(\operatorname{EP}(F)\). The EP serves as a unifying framework for several mathematical problems, such as variational inequality problems, minimization problems, complementarity problems, saddle-point problems, mathematical programming problems, Nash-equilibrium problems in noncooperative games, and others; see [2, 23, 28, 29, 35] and the references therein. Several problems in economics, physics, and optimization can be formulated as finding a solution of the EP (5.1).
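For instance (a standard special case, recalled here only for illustration), if \(T:C\to H\) is a given monotone operator and we take \(F(x,y)=\langle Tx, y-x\rangle \), then the EP (5.1) reduces to the classical variational inequality problem:

$$ \text{find } x^{*}\in C \text{ such that } \bigl\langle Tx^{*}, y-x^{*} \bigr\rangle \ge 0, \quad \forall y\in C. $$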

In solving the EP (5.1), we assume that the bifunction \(F: C \times C \to \mathbb{R}\) satisfies the following conditions:

  1. (A1)

    \(F(x,x) = 0\) for all \(x \in C\);

  2. (A2)

    F is monotone, that is, \(F(x,y) + F(y,x) \le 0\) for all \(x,y \in C\);

  3. (A3)

    F is upper hemicontinuous, that is, for all \(x,y,z \in C\), \(\lim_{t \downarrow 0} F (tz + (1-t)x,y )\le F(x,y)\);

  4. (A4)

    for each \(x \in C\), \(y \mapsto F(x,y)\) is convex and lower semicontinuous.

The following theorem is required in establishing our next result.

Theorem 5.1

([43])

Let C be a nonempty closed convex subset of a real Hilbert space H and \(F:C\times C\rightarrow \mathbb{R}\) be a bifunction satisfying (A1)(A4). Define a multivalued mapping \(A_{F}: H\rightarrow 2^{H}\) by

$$ A_{F}(x) = \textstyle\begin{cases} \{y\in H: F(x,z)\geq \langle z-x, y \rangle , \forall z\in C\}, & \textit{if } x\in C, \\ \emptyset , & \textit{if } x\notin C. \end{cases} $$

Then, the following hold:

  1. (i)

    \(A_{F}\) is maximal monotone;

  2. (ii)

    \(\operatorname{EP}(F) = A_{F}^{-1}(0)\);

  3. (iii)

    \(T_{r}^{F} = (I + rA_{F})^{-1}\) for \(r>0\), where \(T_{r}^{F}\) is the resolvent of \(A_{F}\) and is given by

    $$ T_{r}^{F}(x) = \biggl\{ y\in C : F(y,z) + \frac{1}{r}\langle z - y, y-x \rangle \geq 0, \forall z\in C \biggr\} . $$
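As an elementary illustration of Theorem 5.1(iii) (a well-known special case, recorded here only as a sanity check), if \(F\equiv 0\) on \(C\times C\), then the defining inequality of \(T_{r}^{F}\) becomes \(\langle z-y, y-x\rangle \ge 0\) for all \(z\in C\), so that

$$ T_{r}^{F}(x) = P_{C}(x), \quad \forall x\in H, r>0, $$

that is, the resolvent of \(A_{F}\) collapses to the metric projection onto C.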

Here, we consider the following system of equilibrium problems (SEPs) with fixed-point constraint:

$$ \textstyle\begin{cases} \text{Find } x^{*}\in F(S) \text{ such that } F_{i}(x^{*}, x) \ge 0, \quad \forall x\in C, i=1,2,\ldots ,m; \quad \text{and} \\ y^{*}=Ax^{*} \quad \text{solves} \quad G_{j}(y^{*}, y) \ge 0, \quad \forall y\in Q, j=1,2,\ldots ,k, \end{cases} $$
(5.2)

where C and Q are nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, \(S:H_{1}\to H_{1}\) is a quasipseudocontractive mapping, for each \(i=1,2,\ldots ,m\) and \(j=1,2,\ldots ,k\), \(F_{i}\) and \(G_{j}\) are bifunctions satisfying conditions (A1)–(A4) above, and \(A:H_{1}\to H_{2}\) is a bounded linear operator. We denote the solution set of problem (5.2) by \(\Gamma _{\mathrm{SEP}}=\bigcap_{i=1}^{m} \operatorname{EP}(F_{i})\cap A^{-1}(\bigcap_{j=1}^{k}\operatorname{EP}(G_{j}))\).

Now, taking \(B_{i}= A_{F_{i}}\), \(i=1,2,\ldots ,m\), and \(D_{j}= A_{G_{j}}\), \(j=1,2,\ldots ,k\), and setting \(h_{i}=g_{j}=0\) in Theorem 4.4, we obtain the following result for approximating solutions of problem (5.2) in Hilbert spaces.

Theorem 5.2

Let C and Q be nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, \(A:H_{1}\to H_{2}\) be a bounded linear operator with adjoint \(A^{*}\), and for each \(i=1,2,\ldots ,m\) and \(j=1,2,\ldots ,k\) let \(F_{i}:C\times C\to \mathbb{R}\) and \(G_{j}:Q\times Q\to \mathbb{R}\) be bifunctions satisfying conditions (A1)(A4). Let \(S:H_{1}\to H_{1}\) be a K-Lipschitz continuous quasipseudocontractive mapping, which is demiclosed at zero and with \(K\ge 1\), and \(f:H_{1}\to H_{1}\) be a contraction with coefficient \(\rho \in (0,1)\). Suppose that the solution set \(\Omega = \Gamma _{\mathrm{SEP}}\cap F(S)\neq \emptyset \), and conditions (C1)(C4) are satisfied. Then, the sequence \(\{x_{n}\}\) generated by the following algorithm converges strongly to a point \(\hat{x}\in \Omega \), where \(\hat{x}=P_{\Omega }\circ f(\hat{x})\).


Algorithm 3

5.2 System of convex minimization problems with fixed-point constraint

Suppose that \(F:H\to \mathbb{R}\) is a convex and differentiable function and \(M:H\to (-\infty ,+\infty ]\) is a proper convex and lower semicontinuous function. It is known that if the gradient \(\triangledown F\) of F is \(\frac{1}{\mu }\)-Lipschitz continuous, then \(\triangledown F\) is μ-inverse strongly monotone. Also, it is known that the subdifferential ∂M of M is maximal monotone (see [39]). Moreover,

$$ F\bigl(x^{*}\bigr)+M\bigl(x^{*}\bigr)=\min _{x\in H}\bigl\{ F(x) + M(x)\bigr\} \quad \iff\quad 0\in \triangledown F \bigl(x^{*}\bigr)+\partial M\bigl(x^{*}\bigr). $$
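To illustrate this equivalence computationally, the following minimal Python sketch (our own illustration, not the authors' Algorithm 2) applies the classical forward-backward step \(x_{k+1}=J_{\lambda \partial M}(x_{k}-\lambda \triangledown F(x_{k}))\) to the toy choice \(F(x)=\frac{1}{2}\|Ax-b\|^{2}\) and \(M(x)=\sigma \|x\|_{1}\), for which the resolvent of ∂M is the soft-thresholding operator; the matrix A, the vector b, the weight σ, and the step size are purely illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # resolvent J_{t*dM} of dM with M = ||.||_1, i.e. the proximal map of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, sigma=0.1, lam=None, iters=500):
    """Minimize F(x) + M(x), F(x) = 0.5*||Ax - b||^2, M(x) = sigma*||x||_1,
    via x_{k+1} = prox_{lam*M}(x_k - lam*grad F(x_k))."""
    if lam is None:
        lam = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size: lam <= 1/L with L = ||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                          # forward (gradient) step on F
        x = soft_threshold(x - lam * grad, lam * sigma)   # backward (resolvent) step on dM
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 50))
    x_true = np.zeros(50)
    x_true[[3, 17, 31]] = [1.0, -2.0, 0.5]
    b = A @ x_true
    x_hat = forward_backward(A, b)
    print("recovered support:", np.nonzero(np.abs(x_hat) > 1e-3)[0])
```

This is only a sketch of the underlying forward-backward mechanism; it omits the inertial, self-adaptive step size, and viscosity components of the method proposed in this paper.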

We consider the following system of convex minimization problems (SCMP) with fixed-point constraint: Find

$$ x^{*}\in F(S) \text{ such that} \quad F_{i} \bigl(x^{*}\bigr) + M_{i}\bigl(x^{*}\bigr)= \min _{x \in H_{1}}\bigl\{ F_{i}(x) + M_{i}(x)\bigr\} ,\quad i=1,2,\ldots ,m, $$
(5.6)

and such that \(y^{*}=Ax^{*}\in H_{2}\) solves

$$ G_{j}\bigl(y^{*}\bigr) + N_{j} \bigl(y^{*}\bigr)= \min_{y\in H_{2}}\bigl\{ G_{j}(y) + N_{j}(y) \bigr\} ,\quad j=1,2,\ldots ,k, $$
(5.7)

where \(S:H_{1}\to H_{1}\) is a quasipseudocontractive mapping, for each \(i=1,2,\ldots ,m\) and \(j=1,2,\ldots ,k\), \(F_{i}:H_{1}\to \mathbb{R}\) and \(G_{j}:H_{2}\to \mathbb{R}\) are convex and differentiable functions, and \(M_{i}:H_{1}\to (-\infty ,+\infty ]\) and \(N_{j}:H_{2}\to (-\infty ,+\infty ]\) are proper convex and lower semicontinuous functions. We denote the solution set of problems (5.6)–(5.7) by \(\Gamma _{\mathrm{SCMP}}\).

Now, for each \(i=1,2,\ldots ,m\) and \(j=1,2,\ldots ,k\), setting \(B_{i}=\partial M_{i}\), \(D_{j}=\partial N_{j}\), \(h_{i}=\triangledown F_{i}\), and \(g_{j}=\triangledown G_{j}\) in Theorem 4.4, we obtain the following result for approximating solutions of problems (5.6)–(5.7) in Hilbert spaces.

Theorem 5.3

Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces, \(A:H_{1}\to H_{2}\) be a bounded linear operator with adjoint \(A^{*}\). For each \(i=1,2,\ldots ,m\) and \(j=1,2,\ldots ,k\), let \(M_{i}:H_{1}\to (-\infty ,+\infty ]\) and \(N_{j}:H_{2}\to (-\infty ,+\infty ]\) be proper convex and lower semicontinuous functions, \(F_{i}:H_{1}\to \mathbb{R}\), \(G_{j}:H_{2}\to \mathbb{R}\) be convex and differentiable functions such that \(\triangledown F_{i}\), \(\triangledown G_{j}\) are \(\frac{1}{\mu }\)-Lipschitz continuous. Let \(S:H_{1}\to H_{1}\) be a K-Lipschitz continuous quasipseudocontractive mapping, which is demiclosed at zero and with \(K\ge 1\), and \(f:H_{1}\to H_{1}\) be a contraction with coefficient \(\rho \in (0,1)\). Suppose that the solution set \(\Omega = \Gamma _{\mathrm{SCMP}}\cap F(S)\neq \emptyset \), and conditions (C1)(C4) are satisfied. Then, the sequence \(\{x_{n}\}\) generated by the following algorithm converges strongly to a point \(\hat{x}\in \Omega \), where \(\hat{x}=P_{\Omega }\circ f(\hat{x})\).


Algorithm 4

6 Numerical examples

Here, we present some numerical experiments in both finite-dimensional and infinite-dimensional Hilbert spaces to illustrate the performance of our proposed method, Algorithm 2, in comparison with Algorithm (1.13). Moreover, we investigate how the choice of the key parameters affects the performance of our method. All numerical computations were carried out using MATLAB R2019b.

In our computations, we choose for each \(n\in \mathbb{N}\): \(\alpha _{n} = \frac{1}{2n+1}\), \(\epsilon _{n} = \frac{1}{(2n+1)^{3}}\), \(\delta _{n}=\xi _{n}=\frac{n}{2n+1}\), \(\beta _{n}=\frac{2n}{3n+2}\), and \(\lambda =0.5\). We let \(f(x) = \frac{1}{6}x\); then f is a contraction with coefficient \(\rho =\frac{1}{6}\). It can easily be verified that the conditions of Theorem 4.4 are satisfied. We take \(\theta _{n}=\frac{1}{3n^{2}}\) and \(\gamma =0.05\) in Algorithm (1.13).
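As a quick sanity check (our own, and purely illustrative; the precise conditions (C1)–(C4) are stated earlier in the paper), one can tabulate these sequences numerically and observe, for instance, that \(\alpha _{n}\to 0\) while \(\epsilon _{n}/\alpha _{n}=\frac{1}{(2n+1)^{2}}\to 0\):

```python
from fractions import Fraction

def alpha(n):
    return Fraction(1, 2 * n + 1)

def eps(n):
    return Fraction(1, (2 * n + 1) ** 3)

def beta(n):
    return Fraction(2 * n, 3 * n + 2)

for n in (1, 10, 100, 1000, 10000):
    print(f"n = {n:>5}:  alpha_n = {float(alpha(n)):.2e},  "
          f"eps_n/alpha_n = {float(eps(n) / alpha(n)):.2e},  beta_n = {float(beta(n)):.4f}")
# alpha_n -> 0, eps_n/alpha_n -> 0, and beta_n -> 2/3 as n grows
```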

Example 6.1

Let \(H_{1}=H_{2}=\mathbb{R}\) with the inner product defined by \(\langle x,y \rangle =xy\) for all \(x,y\in \mathbb{R}\) and the induced usual norm \(|\cdot |\). For \(i,j=1,2,\ldots ,5\), we define the mappings \(h_{i},g_{j}:\mathbb{R}\to \mathbb{R}\) by \(h_{i}(x)=ix+6 \ \forall x\in H_{1}\) and \(g_{j}(y)=2jy-1 \ \forall y\in H_{2}\), and we take \(\lambda =0.18\). Let \(B_{i},D_{j}:\mathbb{R}\to \mathbb{R}\) be defined by \(B_{i}(x)=3ix-2 \ \forall x\in H_{1}\) and \(D_{j}(y)=3jy \ \forall y\in H_{2}\), and define \(A:H_{1}\to H_{2}\) by \(A(x)=-\frac{5}{3}x\) for all \(x\in H_{1}\); then \(A^{*}(y)=-\frac{5}{3}y\) for all \(y\in H_{2}\). Define \(S:\mathbb{R}\to \mathbb{R}\) by \(S(x)=-2x\). Then, S is a 2-Lipschitzian quasipseudocontractive mapping. We choose \(\eta =0.23\) and \(\mu =0.28\).
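Because the operators in this example are affine on \(\mathbb{R}\), the resolvents appearing in the iteration have simple closed forms; for instance, \(J_{\lambda }^{B_{i}}(x)=(I+\lambda B_{i})^{-1}(x)=\frac{x+2\lambda }{1+3i\lambda }\). The short Python sketch below (our own illustration of these building blocks only; it evaluates the plain forward-backward expression \(J_{\lambda }^{B_{i}}(x-\lambda h_{i}(x))\) and is not the full Algorithm 2 with its inertial and viscosity terms) shows how such closed-form resolvents would be coded:

```python
# Closed-form resolvents for the one-dimensional operators of Example 6.1.
# Illustration of the building blocks only; this is not Algorithm 2 itself.

LAM = 0.18  # the step size lambda chosen in Example 6.1

def h(i, x):
    # h_i(x) = i*x + 6
    return i * x + 6

def resolvent_B(i, x, lam=LAM):
    # J_lam^{B_i}(x) with B_i(x) = 3*i*x - 2:
    # solve y + lam*(3*i*y - 2) = x  =>  y = (x + 2*lam) / (1 + 3*i*lam)
    return (x + 2 * lam) / (1 + 3 * i * lam)

def resolvent_D(j, y, lam=LAM):
    # J_lam^{D_j}(y) with D_j(y) = 3*j*y  =>  y / (1 + 3*j*lam)
    return y / (1 + 3 * j * lam)

def forward_backward_step(i, x, lam=LAM):
    # one forward-backward evaluation J_lam^{B_i}(x - lam*h_i(x)) for 0 in (h_i + B_i)(x)
    return resolvent_B(i, x - lam * h(i, x), lam)

if __name__ == "__main__":
    x = -10.0  # the initial point x_0 of Case I
    for i in range(1, 6):
        print(f"i = {i}:  J_lam^B_i(x - lam*h_i(x)) = {forward_backward_step(i, x):.6f}")
```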

Using MATLAB R2019b, we compare the performance of Algorithm 2 with that of Algorithm (1.13). The stopping criterion used for our computation is \(|x_{n+1}-x_{n}|< 10^{-3}\). We plot the errors against the number of iterations in each case. The numerical results are reported in Fig. 1 and Table 1.

Figure 1

Top left: Case I; Top right: Case II; Bottom left: Case III; Bottom right: Case IV

Table 1 Numerical results for Example 6.1 (Experiment 1)

Example 6.2

Let \(H_{1}=(\ell _{2}(\mathbb{R}), \|\cdot \|_{2})=H_{2}\), where \(\ell _{2}(\mathbb{R}):=\{x=(x_{1},x_{2},\ldots ,x_{n},\ldots ), x_{j} \in \mathbb{R}:\sum_{j=1}^{\infty }|x_{j}|^{2}<\infty \}\) and \(\|x\|_{2}=( \sum_{j=1}^{\infty }|x_{j}|^{2})^{\frac{1}{2}}\) for all \(x\in \ell _{2}(\mathbb{R})\). For \(i,j=1,2,\ldots ,5\), we define the mappings \(h_{i},g_{j}:\ell _{2}(\mathbb{R})\to \ell _{2}(\mathbb{R})\) by \(h_{i}(x)=2ix-1\ \forall x\in H_{1}\) and \(g_{j}(y)=jy+2 \ \forall y\in H_{2}\), and we take \(\lambda =0.15\). Let \(B_{i},D_{j}:\ell _{2}(\mathbb{R})\to \ell _{2}(\mathbb{R})\) be defined by \(B_{i}(x)=\frac{7}{3i}x \ \forall x\in H_{1}\) and \(D_{j}(y)=\frac{5}{3j}y \ \forall y\in H_{2}\), and define \(A:H_{1}\to H_{2}\) by \(A(x)=\frac{x}{3}\) for all \(x\in H_{1}\); then \(A^{*}(y)=\frac{y}{3}\) for all \(y\in H_{2}\). Define \(S:\ell _{2}(\mathbb{R})\to \ell _{2}(\mathbb{R})\) by \(S(x)=-\frac{5}{4}x\). Then, S is a \(\frac{5}{4}\)-Lipschitzian quasipseudocontractive mapping. We choose \(\eta =0.29\) and \(\mu =0.34\), and we take \(\lambda = 0.01\) in Algorithm (1.13).
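Since the iterates of this example live in \(\ell _{2}(\mathbb{R})\), any implementation must truncate the sequences to finitely many coordinates. The minimal Python sketch below (our own illustration of the bookkeeping only; the truncation length N is an assumption, not something specified in the paper) constructs the geometric initial points used in Experiment 2 and evaluates the \(\ell _{2}\)-norm appearing in the stopping criterion:

```python
import numpy as np

N = 200  # truncation length used to represent l_2 sequences (illustrative choice)

def geometric(first, ratio, n=N):
    """First n terms of the geometric sequence (first, first*ratio, first*ratio**2, ...)."""
    return first * ratio ** np.arange(n)

# initial points of Case I in Experiment 2
x0 = geometric(-3.0, -1.0 / 3.0)   # (-3, 1, -1/3, ...)
x1 = geometric(1.0, 1.0 / 2.0)     # (1, 1/2, 1/4, ...)

def l2_norm(x):
    # ||x||_2 = (sum_j |x_j|^2)^(1/2)
    return np.sqrt(np.sum(x ** 2))

# the stopping criterion ||x_{n+1} - x_n||_2 < 1e-3 would be checked as:
print("||x1 - x0||_2 =", l2_norm(x1 - x0))
print("stop?", l2_norm(x1 - x0) < 1e-3)
```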

Using MATLAB R2019b, we compare the performance of Algorithm 2 with that of Algorithm (1.13). The stopping criterion used for our computation is \(\|x_{n+1}-x_{n}\|< 10^{-3}\). We plot the errors against the number of iterations in each case. The numerical results are reported in Fig. 2 and Table 2.

Figure 2

Top left: Case I; Top right: Case II; Bottom left: Case III; Bottom right: Case IV

Table 2 Numerical results for Example 6.2 (Experiment 2)

We test Examples 6.1 and 6.2 under the following experiments:

Experiment 1

In this experiment, we study the behavior of our method by fixing the other parameters and varying θ, in order to assess the effect of the parameter θ on the performance of our method.

For Example 6.1, we choose different initial values as follows:

  1. Case I:

    \(x_{0} = -10\), \(x_{1} = 23\);

  2. Case II:

    \(x_{0} = \frac{17}{25}\), \(x_{1} = -32\);

  3. Case III:

    \(x_{0} = 29\), \(x_{1} = -100.23\);

  4. Case IV:

    \(x_{0} = 29\), \(x_{1} = -100.23\);

Also, we consider \(\theta \in \{1.5, 3.0, 4.5, 6.0, 7.5, 9.0\}\), each value of which satisfies Assumption (C4). We use Algorithm (1.13) and Algorithm 2 for the experiment and report the numerical results in Table 1 and Fig. 1.

Experiment 2

In this experiment, we study the behavior of our method by fixing the other parameters and varying \(\tau _{n}\), in order to assess the effect of the parameter \(\tau _{n}\) on the performance of our method.

For Example 6.2, we choose different initial values as follows:

  1. Case I:

    \(x_{0} = (-3, 1, -\frac{1}{3}, \ldots )\), \(x_{1} = (1, \frac{1}{2}, \frac{1}{4},\ldots )\);

  2. Case II:

    \(x_{0} = (1, \frac{1}{7}, \frac{1}{49},\ldots )\), \(x_{1} = (0.1, 0.01, 0.001, \ldots )\);

  3. Case III:

    \(x_{0} = (2, \frac{4}{5}, \frac{8}{25}, \ldots )\), \(x_{1} = (1, -\frac{1}{6}, \frac{1}{36}, \ldots )\);

  4. Case IV:

    \(x_{0} = (-2, \frac{4}{3}, -\frac{8}{9},\ldots )\), \(x_{1} = (5, -0.5, 0.05, \ldots )\).

Also, we consider \(\tau _{n}\in \{\frac{n}{2n + 1}, \frac{n}{5n + 1}, \frac{2n}{3n + 2}, \frac{3n}{4n + 1}, \frac{4n}{5n + 1}, \frac{2n}{5n + 2}\}\), each of which satisfies Assumption (C4). We use Algorithm (1.13) and Algorithm 2 for the experiment and report the numerical results in Table 2 and Fig. 2.

7 Conclusion

We studied the problem of finding the solution of a system of monotone variational inclusion problems with the constraint of a fixed-point set of quasipseudocontractive mappings. We proposed a new iterative method that employs an inertial technique with a self-adaptive step size for approximating the solution of the problem in Hilbert spaces and proved a strong-convergence result for the proposed method under some mild conditions. We further applied our results to study related optimization problems and presented some numerical experiments with graphical illustration to demonstrate the efficiency and applicability of our proposed method. In Examples 6.1 and 6.2, we examined, for each starting point, how the choice of the key parameters affects the performance of our method. The tables and graphs show that the number of iterations and the CPU time of our proposed method remain consistent and well behaved for different choices of these key parameters, and that our method outperforms a related method.

Availability of data and materials

Not applicable.

References

  1. Abbas, M., Al Sharani, M., Ansari, Q.H., Iyiola, O.S., Shehu, Y.: Iterative methods for solving proximal split minimization problem. Numer. Algorithms 78(1), 193–215 (2018)


  2. Alakoya, O.T., Mewomo, O.T.: Viscosity S-iteration method with inertial technique and self-adaptive step size for split variational inclusion, equilibrium and fixed-point problems. Comput. Appl. Math. 41(1), 39 (2022)


  3. Alakoya, O.T., Taiwo, A., Mewomo, O.T.: On system of split generalised mixed equilibrium and fixed point problems for multivalued mappings with no prior knowledge of operator norm. Fixed Point Theory 23(1), 45–74 (2022)


  4. Alakoya, T.O., Jolaoso, L.O., Mewomo, O.T.: Strong convergence and bounded perturbation resilience of a modified forward-backward and splitting algorithm and its application. J. Nonlinear Convex Anal. 23(4), 653–682 (2022)


  5. Alakoya, T.O., Owolabi, A.O.E., Mewomo, O.T.: An inertial algorithm with a self-adaptive step size for a split equilibrium problem and a fixed-point problem of an infinite family of strict pseudo-contractions. J. Nonlinear Var. Anal. 5, 803–829 (2021)


  6. Bauschke, H.H., Combettes, P.L.: A weak-to-strong convergence principle for Fejer-monotone methods in Hilbert spaces. Math. Oper. Res. 26(2), 248–264 (2001)


  7. Boikanyo, O.A.: The viscosity approximation forward-backward splitting method for zeros of the sum of monotone operators. Abstr. Appl. Anal. 2016, Article ID 2371857 (2016)


  8. Brézis, H.: Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert. North-Holland Mathematics Studies, vol. 5. North-Holland, Amsterdam (1973)

  9. Byrne, C.: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441–453 (2002)


  10. Ceng, L.C.: Approximation of common solutions of a split inclusion problem and a fixed-point problem. J. Appl. Numer. Optim. 1, 1–12 (2019)


  11. Ceng, L.C., Coroian, I., Qin, X., Yao, J.C.: A general viscosity implicit iterative algorithm for split variational inclusions with hierarchical variational inequality constraints. Fixed Point Theory 20, 469–482 (2019)


  12. Ceng, L.C., Yao, J.C.: Relaxed and hybrid viscosity methods for general system of variational inequalities with split feasibility problem constraint. Fixed Point Theory Appl. 2013, 43 (2013)


  13. Censor, Y., Borteld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006)


  14. Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)


  15. Chang, S.-S., Wang, L., Qin, L.J.: Split equality fixed point problem for quasi-pseudo-contractive mappings with applications. Fixed Point Theory Appl. 2015, 208 (2015)


  16. Chang, S.-S., Yao, J.-C., Wang, L., Liu, M., Zhao, L.: On the inertial forward-backward splitting technique for solving a system of inclusion problems in Hilbert spaces. Optimization 70(12), 2511–2525 (2020). https://doi.org/10.1080/02331934.2020.1786567


  17. Chen, P., Huang, J., Zhang, X.: A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration. Inverse Probl. 29(2), Article ID 025011 (2013)


  18. Chuang, C.S.: Strong convergence theorems for the split variational inclusion problem in Hilbert spaces. Fixed Point Theory Appl. 2013, 350 (2013)


  19. Combettes, P.L.: The convex feasibility problem in image recovery. Adv. Imaging Electron Phys. 95, 155–270 (1996)


  20. Cui, H.H., Zhang, H.X., Ceng, L.C.: An inertial Censor-Segal algorithm for split common fixed-point problems. Fixed Point Theory 22, 93–103 (2021)


  21. Dang, Y., Sun, J., Xu, H.: Inertial accelerated algorithms for solving a split feasibility problem. J. Ind. Manag. Optim. 13(3), 1383–1394 (2017)


  22. Dilshad, M., Aljohani, A.F., Akram, M.: Iterative scheme for split variational inclusion and a fixed-point problem of a finite collection of nonexpansive mappings. J. Funct. Spaces 2020, Article ID 3567648 (2020)


  23. Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin (2007)


  24. Gibali, A.: A new split inverse problem and an application to least intensity feasible solutions. Pure Appl. Funct. Anal. 2, 243–258 (2017)


  25. Guan, J.L., Ceng, L.C., Hu, B.: Strong convergence theorem for split monotone variational inclusion with constraints of variational inequalities and fixed point problems. J. Inequal. Appl. 2018, 311 (2018)


  26. Iiduka, H.: Fixed point optimization algorithm and its application to network bandwidth allocation. J. Comput. Appl. Math. 236, 1733–1742 (2012)


  27. Kazmi, K.R., Rizvi, S.H.: An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett. 8, 1113–1124 (2014)


  28. Khan, S.H., Alakoya, T.O., Mewomo, O.T.: Relaxed projection methods with self-adaptive step size for solving variational inequality and fixed point problems for an infinite family of multivalued relatively nonexpansive mappings in Banach spaces. Math. Comput. Appl. 25, 54 (2020)


  29. Konnov, I.: Equilibrium Models and Variational Inequalities, vol. 210. Elsevier, Amsterdam (2007)


  30. López, G., Martín-Márquez, V., Xu, H.K.: Iterative algorithms for the multiple-sets split feasibility problem. In: Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems, pp. 243–279. Medical Physics Publ., Madison (2010)


  31. Luo, C., Ji, H., Li, Y.: Utility-based multi-service bandwidth allocation in the 4G heterogeneous wireless networks. In: IEEE Wireless Communication and Networking Conference, pp. 1–5. IEEE Comput. Soc., Los Alamitos (2009). https://doi.org/10.1109/WCNC.2009.4918017


  32. Maingé, P.E.: A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 47, 1499–1515 (2008)


  33. Moudafi, A.: Split monotone variational inclusions. J. Optim. Theory Appl. 150, 275–283 (2011)


  34. Ogwo, G.N., Alakoya, T.O., Mewomo, O.T.: Iterative algorithm with self-adaptive step size for approximating the common solution of variational inequality and fixed point problems. Optimization (2021). https://doi.org/10.1080/02331934.2021.1981897


  35. Ogwo, G.N., Alakoya, T.O., Mewomo, O.T.: Inertial iterative method with self-adaptive step size for finite family of split monotone variational inclusion and fixed point problems in Banach spaces. Demonstr. Math. (2021). https://doi.org/10.1515/dema-2020-0119


  36. Ogwo, G.N., Izuchukwu, C., Mewomo, O.T.: Inertial methods for finding minimum-norm solutions of the split variational inequality problem beyond monotonicity. Numer. Algorithms 88, 1419–1456 (2021)


  37. Ogwo, G.N., Izuchukwu, C., Shehu, Y., Mewomo, O.T.: Convergence of relaxed inertial subgradient extragradient methods for quasimonotone variational inequality problems. J. Sci. Comput. 90(1), 10 (2022)


  38. Olona, M.A., Alakoya, T.O., Owolabi, A.O.-E., Mewomo, O.T.: Inertial shrinking projection algorithm with self-adaptive step size for split generalized equilibrium and fixed point problems for a countable family of nonexpansive multivalued mappings. Demonstr. Math. 54, 47–67 (2021)


  39. Rockafellar, R.T.: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 149, 75–88 (1970)


  40. Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 75, 742–750 (2012)


  41. Shehu, Y., Cholamjiak, P.: Iterative method with inertial for variational inequalities in Hilbert spaces. Calcolo 56(1), 4 (2019)


  42. Taiwo, A., Alakoya, T.O., Mewomo, O.T.: Strong convergence theorem for solving equilibrium problem and fixed point of relatively nonexpansive multi-valued mappings in a Banach space with applications. Asian-Eur. J. Math. 14(8), Article ID 2150137 (2021)


  43. Takahashi, S., Takahashi, W., Toyoda, M.T.: Strong convergence theorem for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 147, 27–41 (2010)


  44. Takahashi, W.: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)


  45. Takahashi, W., Xu, H.K., Yao, J.C.: Iterative methods for generalized split feasibility problems in Hilbert spaces. Set-Valued Var. Anal. 23, 205–221 (2015)


  46. Uzor, V.A., Alakoya, T.O., Mewomo, O.T.: Strong convergence of a self-adaptive inertial Tseng’s extragradient method for pseudomonotone variational inequalities and fixed point problems. Open Math. (2022). https://doi.org/10.1515/math-2022-0429


  47. Xu, H.K.: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360–378 (2011)


  48. Yao, Y., Shehu, Y., Li, X.-H., Dong, Q.-L.: A method with inertial extrapolation step for split monotone inclusion problems. Optimization 70(4), 741–761 (2021)


  49. Zhao, J., Liang, Y., Liu, Y., Cho, Y.J.: Split equilibrium, variational inequality and fixed point problems for multi-valued mappings in Hilbert spaces. Appl. Comput. Math. 17(3), 271–283 (2018)


  50. Zhao, X., Yao, J.C., Yao, Y.: A proximal algorithm for solving split monotone variational inclusions. UPB Sci. Bull., Ser. A 82(3), 43–52 (2020)



Acknowledgements

The authors sincerely thank the reviewers for their careful reading, constructive comments, and fruitful suggestions that improved the manuscript. The research of the first author is wholly supported by the University of KwaZulu-Natal, Durban, South Africa Postdoctoral Fellowship. He is grateful for the funding and financial support. The second author is supported by the National Research Foundation (NRF) of South Africa Incentive Funding for Rated Researchers (Grant Number 119903). The research of the fourth author is partially supported by the grant MOST 108-2115-M-039-005-MY3. The opinions expressed and conclusions arrived are those of the authors and are not necessarily to be attributed to the NRF.

Funding

The first author is funded by the University of KwaZulu-Natal, Durban, South Africa Postdoctoral Fellowship. The third author is supported by the National Research Foundation (NRF) of South Africa Incentive Funding for Rated Researchers (Grant Number 119903). The fourth author is funded by the grant MOST 108-2115-M-039-005-MY3.

Author information


Contributions

Conceptualization of the article was carried out by TO, OT, and VA, methodology by TO and VA, formal analysis, investigation and writing the original draft preparation by TO and VA, software and validation by OT and JC, writing, reviewing and editing by TO, OT, and JC, and project administration by OT and JC. All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

Corresponding author

Correspondence to Jen-Chih Yao.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.



Cite this article

Alakoya, T.O., Uzor, V.A., Mewomo, O.T. et al. On a system of monotone variational inclusion problems with fixed-point constraint. J Inequal Appl 2022, 47 (2022). https://doi.org/10.1186/s13660-022-02782-4

