On a system of monotone variational inclusion problems with fixed-point constraint
Journal of Inequalities and Applications volume 2022, Article number: 47 (2022)
Abstract
In this paper, we study the problem of finding the solution of the system of monotone variational inclusion problems recently introduced by Chang et al. (Optimization 70(12):2511–2525, 2020) with the constraint of a fixed-point set of quasi-pseudocontractive mappings. We propose a new iterative method that employs an inertial technique with a self-adaptive step size for approximating the solution of the problem in Hilbert spaces and prove a strong-convergence result for the proposed method under more relaxed conditions. Moreover, we apply our results to study related optimization problems. Finally, we present some numerical experiments to demonstrate the performance of our proposed method, compare it with a related method, and examine how the key parameters affect its performance.
Introduction
In recent years, the split inverse problem (SIP) has received much research attention (see [1, 11, 12, 20, 24, 50] and the references therein) because of its extensive applications, for example, in phase retrieval, signal processing, image recovery, intensity-modulated radiation therapy, data compression, among others (see [13, 14, 42] and the references therein). The SIP model is presented as follows: Find a point
$$ x^{*}\in H_{1} \text{ that solves } \mathrm{IP}_{1} $$(1.1)
such that
$$ y^{*}=Ax^{*}\in H_{2} \text{ solves } \mathrm{IP}_{2}, $$(1.2)
where \(H_{1}\) and \(H_{2}\) are real Hilbert spaces, \(\mathrm{IP} _{1}\) denotes an inverse problem formulated in \(H_{1}\) and \(\mathrm{IP} _{2}\) denotes an inverse problem formulated in \(H_{2}\), and \(A : H_{1} \rightarrow H_{2}\) is a bounded linear operator.
Censor and Elfving [14] in 1994 introduced the split feasibility problem (SFP), which was the first instance of the SIP for modeling inverse problems that arise in medical-image reconstruction. Since then, several authors have studied and developed different iterative methods for approximating the solution of the SFP. The SFP has wide areas of application, for instance, in signal processing, approximation theory, control theory, geophysics, communications, biomedical engineering, etc. [13, 30]. The SFP is formulated as follows:
$$ \text{find } x^{*}\in C \text{ such that } Ax^{*}\in Q, $$(1.3)
where C and Q are nonempty closed convex subsets of Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and \(A:H_{1}\rightarrow H_{2}\) is a bounded linear operator.
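For intuition, the SFP is classically approached with Byrne's CQ algorithm, \(x_{n+1}=P_{C}(x_{n}-\gamma A^{*}(I-P_{Q})Ax_{n})\) with \(\gamma \in (0, 2/\|A\|^{2})\). The following one-dimensional sketch (the sets \(C=[0,1]\), \(Q=[1,3]\), and the scalar operator are our illustrative choices, not taken from the paper) shows one such iteration:

```python
def proj(x, lo, hi):                 # metric projection onto the interval [lo, hi]
    return min(max(x, lo), hi)

A = 2.0                              # bounded linear operator on R (norm 2)
P_C = lambda x: proj(x, 0.0, 1.0)    # C = [0, 1]
P_Q = lambda y: proj(y, 1.0, 3.0)    # Q = [1, 3]

gamma = 0.4                          # gamma in (0, 2/||A||^2) = (0, 0.5)
x = 0.0
for _ in range(50):
    y = A * x
    x = P_C(x - gamma * A * (y - P_Q(y)))   # CQ step: x <- P_C(x - gamma*A*(I - P_Q)Ax)
# any x with x in C and Ax in Q solves this SFP; here the iterates settle in [0.5, 1]
```

Here the solution set is \([0.5,1]\), and the iteration reaches it quickly because the residual \((I-P_{Q})Ax_{n}\) vanishes once \(Ax_{n}\in Q\).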
Moudafi [33] introduced another instance of the SIP known as the split monotone variational inclusion problem (SMVIP). Let \(H_{1}\), \(H_{2}\) be real Hilbert spaces, let \(f_{1}:H_{1}\to H_{1}\) and \(f_{2}:H_{2}\to H_{2}\) be inverse strongly monotone mappings, let \(A:H_{1}\to H_{2}\) be a bounded linear operator, and let \(B_{1}:H_{1}\to 2^{H_{1}}\), \(B_{2}:H_{2}\to 2^{H_{2}}\) be multivalued maximal monotone mappings. The SMVIP is formulated as follows:
$$ \text{find } x^{*}\in H_{1} \text{ such that } 0\in f_{1}\bigl(x^{*}\bigr)+B_{1}\bigl(x^{*}\bigr) $$(1.4)
and
$$ \text{such that } y^{*}=Ax^{*}\in H_{2} \text{ solves } 0\in f_{2}\bigl(y^{*}\bigr)+B_{2}\bigl(y^{*}\bigr). $$(1.5)
We point out that if (1.4) and (1.5) are considered separately, then each of (1.4) and (1.5) is a monotone variational inclusion problem (MVIP) with solution set \((B_{1}+f_{1})^{-1}(0)\) and \((B_{2}+f_{2})^{-1}(0)\), respectively. Moudafi in [33] showed that \(x^{*}\in (B_{1}+f_{1})^{-1}(0)\) if and only if \(x^{*}=J_{\lambda }^{B_{1}}(I-\lambda f_{1})(x^{*})\), for all \(\lambda >0\), where \(J_{\lambda }^{B_{1}}:H_{1}\to H_{1}\) is the resolvent operator associated with \(B_{1}\) and λ defined by
$$ J_{\lambda }^{B_{1}}:=(I+\lambda B_{1})^{-1}. $$(1.6)
It is known that the resolvent operator \(J_{\lambda }^{B_{1}}\) is single valued, nonexpansive, and 1-inverse strongly monotone (see, e.g., [8]).
Moreover, it was shown in [33] that, if \(f_{1}\) is an α-inverse strongly monotone mapping and \(B_{1}\) is a maximal monotone mapping, then \(J_{\lambda }^{B_{1}}(I-\lambda f_{1})\) is averaged for \(0<\lambda <2\alpha \). Consequently, \(J_{\lambda }^{B_{1}}(I-\lambda f_{1})\) is nonexpansive. Furthermore, \((B_{1}+f_{1})^{-1}(0)\) was shown to be closed and convex.
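The fixed-point characterization \(x^{*}=J_{\lambda }^{B_{1}}(I-\lambda f_{1})(x^{*})\) can be checked numerically in a simple one-dimensional case: take \(B=\partial |\cdot |\) (maximal monotone, whose resolvent is the soft-thresholding map) and \(f(x)=x-b\) (1-inverse strongly monotone). This is a sketch with our own choice of data \(b\), not an instance from the paper:

```python
def resolvent(x, lam):
    # J_lam^B = (I + lam*B)^(-1) for B = subdifferential of |.| (soft-thresholding)
    return (abs(x) - lam) * (1.0 if x > 0 else -1.0) if abs(x) > lam else 0.0

b = 2.0
f = lambda x: x - b            # gradient of (1/2)(x - b)^2, hence 1-ism
lam = 1.0                      # lam in (0, 2*alpha) with alpha = 1

x = 0.0
for _ in range(100):           # forward-backward iteration x = J_lam^B(I - lam*f)(x)
    x = resolvent(x - lam * f(x), lam)
# the limit solves 0 in f(x) + B(x), i.e. it minimizes (1/2)(x - b)^2 + |x|
```

With \(b=2\) the zero of \(f+B\) is \(x^{*}=1\), and one verifies that the limit is indeed a fixed point of \(J_{\lambda }^{B}(I-\lambda f)\).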
Moudafi [33] pointed out that the SMVIP (1.4) and (1.5) generalizes the split fixed-point problem, split feasibility problem, split variational inequality problem, split equilibrium problem, and split variational inclusion problem, which have been studied extensively by several researchers (e.g., see [3, 5, 10, 21, 25, 27, 36, 46, 49]). Moreover, it is applied in solving many real-life problems, such as in sensor networks, in computerized tomography and data compression, in modeling inverse problems arising from phase retrieval [9, 19], and in modeling intensity-modulated radiation therapy treatment planning [13, 14].
If \(f_{1}\equiv 0\equiv f_{2}\), then the SMVIP (1.4) and (1.5) reduces to the following split variational inclusion problem (SVIP):
$$ \text{find } x^{*}\in H_{1} \text{ such that } 0\in B_{1}\bigl(x^{*}\bigr) $$(1.7)
and
$$ \text{such that } y^{*}=Ax^{*}\in H_{2} \text{ solves } 0\in B_{2}\bigl(y^{*}\bigr). $$(1.8)
Moudafi [33] showed that the SVIP (1.7) and (1.8) includes the SFP (1.3) as a special case. Several authors have studied and proposed different iterative methods for solving the SVIP (1.7) and (1.8), see for instance [22, 27] and the references therein. However, results on the SMVIP (1.4) and (1.5) are relatively scanty in the literature.
Very recently, Yao et al. [48] proposed and studied the convergence of the following iterative method with an inertial extrapolation step for approximating the solution of the SMVIP (1.4) and (1.5) in Hilbert spaces (Algorithm 1), where \(F_{1}:=J_{\lambda }^{B_{1}}(I-\lambda f_{1})\) and \(F_{2}:=J_{\lambda }^{B_{2}}(I-\lambda f_{2})\), \(f_{1}:H_{1}\to H_{1}\) and \(f_{2}:H_{2}\to H_{2}\) are an \(\alpha _{1}\)-inverse strongly monotone mapping and an \(\alpha _{2}\)-inverse strongly monotone mapping, respectively, with \(\alpha =\min \{\alpha _{1}, \alpha _{2}\}\), \(A:H_{1}\to H_{2}\) is a bounded linear operator with adjoint \(A^{*}\), and \(B_{1}:H_{1}\to 2^{H_{1}}\), \(B_{2}:H_{2}\to 2^{H_{2}}\) are multivalued maximal monotone mappings. The authors were able to prove a weak-convergence result for the sequence generated by the proposed algorithm under the following conditions:

(i)
The solution set \(\mathcal{F}\) is nonempty;

(ii)
\(\lambda \in (0,2\alpha )\), \(0<\liminf_{n\to \infty }\tau _{n}\le \limsup_{n\to \infty }\tau _{n}<1\);

(iii)
\(\{\mu _{n}\}_{n=1}^{\infty }\subset \ell _{1}\), i.e., \(\sum_{n=1}^{\infty }\mu _{n}<\infty \).
Bauschke and Combettes [6] pointed out that in solving optimization problems, the strong convergence of iterative schemes is more desirable and useful than its weak-convergence counterpart. Therefore, when solving optimization problems, researchers strive to construct algorithms that generate sequences converging strongly to the solution of the problem under investigation.
Also, very recently, Chang et al. [16] introduced and studied the following system of monotone variational inclusion problems in Hilbert spaces: find a point \(x^{*}\in H_{1}\) such that
$$ x^{*}\in \bigcap_{i=1}^{m}(h_{i}+B_{i})^{-1}(0) \quad \text{and}\quad Ax^{*}\in \bigcap_{j=1}^{k}(g_{j}+D_{j})^{-1}(0), $$(1.11)
where for each \(i=1,2,\ldots ,m\) and \(j=1,2,\ldots ,k\), \(h_{i}\) and \(g_{j}\) are \(\varphi _{i}\)- and \(\vartheta _{j}\)-inverse strongly monotone mappings on \(H_{1}\) and \(H_{2}\), respectively, where \(\varphi _{i}>0\) and \(\vartheta _{j}>0\), \(B_{i}\) and \(D_{j}\) are multivalued maximal monotone operators on \(H_{1}\) and \(H_{2}\), respectively, and \(A:H_{1}\to H_{2}\) is a bounded linear operator. Set
$$ \phi =\min \{\varphi _{1}, \varphi _{2},\ldots ,\varphi _{m}; \vartheta _{1},\vartheta _{2},\ldots ,\vartheta _{k}\}; $$(1.12)
then all \(h_{i}\) and \(g_{j}\) are ϕ-inverse strongly monotone mappings. Moreover, the authors proposed the following inertial forward–backward splitting algorithm with the viscosity technique for approximating the solution of problem (1.11) in Hilbert spaces:
where
for each \(i=1,2,\ldots ,m\) and \(j=1,2,\ldots ,k\), \(h_{i}\), \(g_{j}\), \(B_{i}\), \(D_{j}\) are as defined in (1.11), and \(f:H_{1}\to H_{1}\) is a contraction with contraction constant \(\rho \in (\frac{1}{2}, 1)\). The authors proved a strong-convergence theorem for the proposed method under the following conditions:

(i)
The solution set Γ is nonempty;

(ii)
\(\{\alpha _{n}\}\subset (0,1)\) with \(\alpha _{n}\to 0\) and \(\sum_{n=1}^{\infty }\alpha _{n}=\infty \);

(iii)
\(0<\gamma <\frac{1}{2\|A\|^{2}}\), \(\lambda \in (0,2\phi )\) with ϕ as defined in (1.12);

(iv)
\(\sum_{n=1}^{\infty }\theta _{n}\|x_{n}-x_{n-1}\|<\infty \), \(\theta _{n} \in [0,1)\).
Remark 1.1
Observe that the problem (1.11) solved by Algorithm (1.13) is more general than the SMVIP (1.4) and (1.5) solved by Algorithm 1; the SMVIP (1.4) and (1.5) is the special case of problem (1.11) with \(i=j=1\). We also point out that the term \(\theta _{n}(x_{n} - x_{n-1})\) in Algorithm 1 and Algorithm (1.13) above is referred to as the inertial term. It is employed in algorithm design to accelerate the rate of convergence. However, we note that condition (iii) of Algorithm 1 and condition (iv) of Algorithm (1.13), imposed to incorporate the inertial term, are too restrictive and might hinder the implementation of the proposed methods. Another drawback of Algorithm (1.13) is that the contraction constant ρ of the contraction f is restricted to the interval \((\frac{1}{2}, 1)\). Moreover, the implementation of the algorithm requires knowledge of the operator norm, which is often very difficult to calculate or even estimate. On the other hand, while Algorithm 1 does not require knowledge of the operator norm for its implementation, the authors were only able to obtain a weak-convergence result for it.
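To see why the inertial term \(\theta _{n}(x_{n}-x_{n-1})\) accelerates convergence, one can compare a plain Picard iteration for a simple contraction with its inertial variant. This is a toy sketch (the mapping T and the value \(\theta =0.3\) are our illustrative choices, not from Algorithm 1 or Algorithm (1.13)):

```python
T = lambda x: 0.5 * x + 1.0           # toy contraction on R with fixed point x* = 2

def iterate(theta, x0=0.0, tol=1e-8):
    """Inertial Picard iteration: w_n = x_n + theta*(x_n - x_{n-1}), x_{n+1} = T(w_n)."""
    x_prev, x = x0, x0
    for n in range(1000):
        w = x + theta * (x - x_prev)  # inertial extrapolation (theta = 0 gives the plain method)
        x_prev, x = x, T(w)
        if abs(x - 2.0) < tol:
            return n + 1              # iterations needed to reach the tolerance
    return 1000

plain = iterate(theta=0.0)
inertial = iterate(theta=0.3)
# the inertial variant needs fewer iterations on this example
```

On this example the inertial error recursion has spectral radius below the plain contraction factor \(0.5\), which is why fewer iterations suffice; the restrictive summability conditions mentioned above are what such schemes try to avoid.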
From the above discourse, it is natural to ask the following question:
Can we develop a new inertial iterative method with the viscosity technique that does not require knowledge of the operator norm for approximating the solution of the system of monotone variational inclusion problems (1.11), such that condition (iii) of Algorithm 1 and condition (iv) of Algorithm (1.13) are dispensed with, while still obtaining a strong-convergence result? Can the contraction constant ρ of the contraction mapping f of Algorithm (1.13) be selected from a larger interval than \((\frac{1}{2}, 1)\)?
Some of our aims in this paper are to provide affirmative answers to the above questions.
Another problem we consider in this paper is the fixed-point problem (FPP). Let C be a nonempty closed convex subset of a real Hilbert space H and let \(S: C\rightarrow C\) be a nonlinear mapping. A point \(\hat{x}\in C\) is called a fixed point of S if \(S\hat{x} = \hat{x}\). We denote by \(F(S)\) the set of all fixed points of S, i.e.,
$$ F(S):=\{x\in C: Sx=x\}. $$
In recent years, the study of fixed-point theory for nonlinear mappings has flourished owing to its extensive applications in various fields like economics, compressed sensing, and other applied sciences (see [4, 17, 38] and the references therein).
Recently, optimization problems dealing with finding a common solution of the set of fixed points of nonlinear mappings and the set of solutions of the SMVIP (see, for instance, [3, 22]) were considered. One of the motivations for studying such a common solution problem is its potential application to mathematical models whose constraints can be expressed as FPPs and SMVIPs. Instances are found in practical problems such as signal processing, network-resource allocation, and image recovery. One scenario is the network bandwidth-allocation problem for two services in heterogeneous wireless access networks, where the bandwidths of the services are mathematically related (see, for instance, [26, 31] and the references therein).
Motivated by the above results and the current research interest in this direction, in this paper, we study the problem of finding the solution of the system of monotone variational inclusion problems (1.11) with the constraint of a fixed-point set of quasi-pseudocontractions. Precisely, we consider the following problem: find a point \(x^{*}\in F(S)\) such that
$$ x^{*}\in \bigcap_{i=1}^{m}(h_{i}+B_{i})^{-1}(0) \quad \text{and}\quad Ax^{*}\in \bigcap_{j=1}^{k}(g_{j}+D_{j})^{-1}(0), $$
where \(S:H_{1}\to H_{1}\) is a quasi-pseudocontractive mapping, for each \(i=1,2,\ldots ,m\) and \(j=1,2,\ldots ,k\), \(h_{i}\) and \(g_{j}\) are \(\varphi _{i}\)- and \(\vartheta _{j}\)-inverse strongly monotone mappings on \(H_{1}\) and \(H_{2}\), respectively, where \(\varphi _{i}>0\) and \(\vartheta _{j}>0\), \(B_{i}\) and \(D_{j}\) are multivalued maximal monotone operators on \(H_{1}\) and \(H_{2}\), respectively, and \(A:H_{1}\to H_{2}\) is a bounded linear operator.
Moreover, we introduce a new inertial iterative method that employs the viscosity technique to approximate the solution of the problem in the framework of Hilbert spaces. Furthermore, under mild conditions we prove that the sequence generated by the proposed method converges strongly to a solution of the problem. We point out that the implementation of our algorithm does not require knowledge of the operator norm, and the contraction constant of the contraction mapping employed in the viscosity technique can be selected in the interval \((0,1)\), which is larger than the interval \((\frac{1}{2}, 1)\) required in Algorithm (1.13). In addition, we obtain a strong-convergence result while dispensing with condition (iii) of Algorithm 1 and condition (iv) of Algorithm (1.13). We further apply our results to study other optimization problems, and we provide some numerical experiments with graphical illustrations to demonstrate the implementability and efficiency of the proposed method in comparison with some methods in the current literature. Our results in this study improve and extend the recent ones announced by Yao et al. [48], Chang et al. [16], and many other results in the literature.
The paper is organized as follows: In Sect. 2, we recall basic definitions and lemmas employed in the convergence analysis. Section 3 presents the proposed algorithm and highlights some of its features, while in Sect. 4 we analyze the convergence of the proposed method. Section 5 presents applications of our results to some optimization problems. In Sect. 6, we provide some numerical examples with graphical illustrations and compare the performance of our proposed method with some of the existing methods in the literature. Finally, we give some concluding remarks in Sect. 7.
Preliminaries
In this section, we present some definitions and results, which will be needed in the following.
In what follows, we denote the weak and strong convergence of a sequence \(\{x_{n}\}\) to a point \(x \in H\) by \(x_{n} \rightharpoonup x\) and \(x_{n} \rightarrow x\), respectively, and \(w_{\omega }(x_{n})\) denotes the set of weak limits of \(\{x_{n}\}\), that is,
$$ w_{\omega }(x_{n}):= \bigl\{ x\in H: x_{n_{k}}\rightharpoonup x \text{ for some subsequence } \{x_{n_{k}}\} \text{ of } \{x_{n}\} \bigr\} , $$
where H is a real Hilbert space. For a nonempty closed and convex subset C of H, the metric projection [37] \(P_{C}: H\rightarrow C\) is defined, for each \(x\in H\), as the unique element \(P_{C}x\in C\) such that
$$ \Vert x-P_{C}x \Vert =\inf \bigl\{ \Vert x-y \Vert : y\in C \bigr\} . $$
The operator \(P_{C}\) is nonexpansive and has the following properties [34, 44]:

1.
it is firmly nonexpansive, that is,
$$ \Vert P_{C}x - P_{C}y \Vert ^{2} \leq \langle P_{C}x - P_{C}y, x- y\rangle\quad \text{for all } x, y\in C;$$ 
2.
for any \(x\in H\) and \(z\in C\), \(z = P_{C}x\) if and only if
$$ \langle x - z, z - y\rangle \geq 0 \quad \text{for all } y\in C; $$(2.1) 
3.
for any \(x\in H\) and \(y\in C\),
$$ \Vert P_{C}x - y \Vert ^{2} + \Vert x - P_{C}x \Vert ^{2} \leq \Vert x - y \Vert ^{2}. $$
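These properties are easy to verify numerically for a concrete projection. A sketch with C taken to be the interval \([0,1]\) in ℝ (our choice for illustration):

```python
P = lambda x: min(max(x, 0.0), 1.0)   # metric projection onto C = [0, 1]

pts = [-1.5, -0.2, 0.3, 0.9, 2.0]
for x in pts:
    z = P(x)
    # property 1 (firm nonexpansiveness): ||Px - Py||^2 <= <Px - Py, x - y>
    for y in pts:
        assert (P(x) - P(y)) ** 2 <= (P(x) - P(y)) * (x - y) + 1e-12
    # property 2, inequality (2.1): <x - z, z - y> >= 0 for every y in C
    for y in (0.0, 0.25, 0.5, 1.0):
        assert (x - z) * (z - y) >= -1e-12
```

Inequality (2.1) says geometrically that \(z=P_{C}x\) is the point of C at which the vector \(x-z\) makes an obtuse angle with no direction into C.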
Definition 2.1
Let \(T:H\to H\) be a nonlinear mapping and I be the identity mapping on H. The mapping \(I-T\) is said to be demiclosed at zero if, whenever a sequence \(\{x_{n}\}\subset H\) converges weakly to x and \(\|x_{n}-Tx_{n}\|\to 0\), it follows that \(x\in F(T)\).
Definition 2.2
Let C be a nonempty closed convex subset of a real Hilbert space H. A mapping \(T: C\rightarrow C\) is said to be:

(1)
L-Lipschitz continuous if there exists a constant \(L>0\) such that
$$ \Vert Tx-Ty \Vert \leq L \Vert x-y \Vert , \quad \forall x,y \in C; $$if \(L\in [0,1)\), then T is called a contraction.

(2)
nonexpansive if T is 1-Lipschitz continuous;

(3)
averaged if it can be written as
$$ T=(1-\alpha )I + \alpha S,$$where \(\alpha \in (0,1)\), \(S:C\to C\) is nonexpansive and I is the identity mapping on C;

(4)
firmly nonexpansive if
$$\begin{aligned} \Vert Tx-Ty \Vert ^{2}\leq \Vert x-y \Vert ^{2}- \bigl\Vert (I-T)x-(I-T)y \bigr\Vert ^{2},\quad \forall x,y \in C; \end{aligned}$$ 
(5)
quasi-nonexpansive if \(F(T)\neq \emptyset \) and
$$\begin{aligned} \Vert Tx-p \Vert \leq \Vert x-p \Vert , \quad \forall x\in C, \text{and } p\in F(T); \end{aligned}$$ 
(6)
firmly quasi-nonexpansive if \(F(T)\neq \emptyset \) and
$$\begin{aligned} \Vert Tx-p \Vert ^{2}\leq \Vert x-p \Vert ^{2}- \bigl\Vert (I-T)x \bigr\Vert ^{2},\quad \forall x\in C, \text{and } p\in F(T); \end{aligned}$$ 
(7)
κ-strictly pseudocontractive if there exists \(\kappa \in [0,1)\) such that
$$\begin{aligned} \Vert Tx-Ty \Vert ^{2}\leq \Vert x-y \Vert ^{2}+ \kappa \bigl\Vert (I-T)x-(I-T)y \bigr\Vert ^{2},\quad \forall x,y\in C; \end{aligned}$$ 
(8)
directed if \(F(T)\neq \emptyset \) and \(\langle Tx-p, Tx-x\rangle \leq 0\), \(\forall x\in C\), and \(p\in F(T)\);

(9)
demicontractive if \(F(T)\neq \emptyset \) and there exists \(\kappa \in [0,1)\) such that
$$\begin{aligned} \Vert Tx-p \Vert ^{2}\leq \Vert x-p \Vert ^{2}+ \kappa \Vert Tx-x \Vert ^{2},\quad \forall x \in C \text{ and } p\in F(T); \end{aligned}$$ 
(10)
monotone if
$$ \langle Tx-Ty, x-y \rangle \geq 0,\quad \forall x,y \in C; $$ 
(11)
L-inverse strongly monotone (L-ism) if there exists \(L >0\) such that
$$ \langle Tx-Ty, x-y \rangle \geq L \Vert Tx-Ty \Vert ^{2},\quad \forall x,y \in C. $$
Remark 2.3
As pointed out by Bauschke and Combettes [6], \(T:C\to C\) is directed if and only if
$$ \Vert Tx-p \Vert ^{2}\leq \Vert x-p \Vert ^{2}- \Vert Tx-x \Vert ^{2},\quad \forall x\in C, p\in F(T). $$
In other words, the class of directed mappings coincides with the class of firmly quasi-nonexpansive mappings.
Remark 2.4
From the definitions above, we observe that the class of demicontractive mappings includes several other classes of nonlinear mappings, such as the directed mappings, the quasi-nonexpansive mappings, and the strictly pseudocontractive mappings with fixed points, as special cases. Also, it is well known that every L-ism mapping is \(\frac{1}{L}\)-Lipschitz continuous and monotone, and every Lipschitz continuous operator is uniformly continuous, but the converses of these statements are not always true (see, for example, [41]).
Definition 2.5
A nonlinear operator \(T:C\to C\) is called pseudocontractive if
$$ \langle Tx-Ty, x-y\rangle \leq \Vert x-y \Vert ^{2},\quad \forall x,y\in C. $$
The interest of pseudocontractive mappings lies in their connection with monotone mappings, that is, T is a pseudocontraction if and only if \(I-T\) is a monotone mapping. It is well known that T is pseudocontractive if and only if
$$ \Vert Tx-Ty \Vert ^{2}\leq \Vert x-y \Vert ^{2}+ \bigl\Vert (I-T)x-(I-T)y \bigr\Vert ^{2},\quad \forall x,y\in C. $$
Definition 2.6
An operator \(T:C\to C\) is said to be quasi-pseudocontractive if \(F(T)\neq \emptyset \) and
$$ \Vert Tx-p \Vert ^{2}\leq \Vert x-p \Vert ^{2}+ \Vert Tx-x \Vert ^{2},\quad \forall x\in C, p\in F(T). $$
It is obvious that the class of quasi-pseudocontractive mappings includes the class of demicontractive mappings and the class of pseudocontractive mappings with a nonempty fixed-point set.
We have the following result on L-Lipschitz quasi-pseudocontractive mappings.
Lemma 2.7
([15])
Let H be a real Hilbert space and \(T:H\to H\) be an L-Lipschitzian mapping with \(L\geq 1\). Denote
$$ G:=(1-\psi )I+\psi T \bigl((1-\eta )I+\eta T \bigr). $$
If \(0<\psi <\eta <\frac{1}{1+\sqrt{1+L^{2}}}\), then the following conclusions hold:

(1)
\(F(T)=F (T((1-\eta )I+\eta T) )=F(G)\).

(2)
If \(I-T\) is demiclosed at 0, then \(I-G\) is also demiclosed at 0.

(3)
In addition, if \(T:H\to H\) is quasi-pseudocontractive, then the mapping G is quasi-nonexpansive.
Lemma 2.8
([49])
(Demiclosedness Principle). Let T be a nonexpansive mapping on a closed convex subset C of a real Hilbert space H. Then, \(I-T\) is demiclosed at any point \(y\in H\), that is, if \(x_{n}\rightharpoonup x\) and \(x_{n}-Tx_{n}\to y\in H\), then \(x-Tx=y\).
Lemma 2.9
([18])
Let H be a real Hilbert space. Then, the following results hold for all \(x,y\in H\) and \(\delta \in \mathbb{R}\):

(i)
\(\|x + y\|^{2} \leq \|x\|^{2} + 2\langle y, x + y \rangle \);

(ii)
\(\|x + y\|^{2} = \|x\|^{2} + 2\langle x, y \rangle + \|y\|^{2}\);

(iii)
\(\|\delta x + (1-\delta ) y\|^{2} = \delta \|x\|^{2} + (1-\delta )\|y\|^{2} -\delta (1-\delta )\|x-y\|^{2}\).
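These identities follow by expanding the inner products; a quick numerical sanity check in ℝ² (a sketch, with arbitrarily chosen vectors):

```python
dot = lambda u, v: sum(a * b for a, b in zip(u, v))   # Euclidean inner product
nrm2 = lambda u: dot(u, u)                            # squared norm

x, y, d = [1.0, -2.0], [0.5, 3.0], 0.3

# identity (ii): ||x + y||^2 = ||x||^2 + 2<x, y> + ||y||^2
lhs2 = nrm2([a + b for a, b in zip(x, y)])
rhs2 = nrm2(x) + 2 * dot(x, y) + nrm2(y)

# identity (iii): ||d*x + (1-d)*y||^2 = d||x||^2 + (1-d)||y||^2 - d(1-d)||x - y||^2
lhs = nrm2([d * a + (1 - d) * b for a, b in zip(x, y)])
rhs = d * nrm2(x) + (1 - d) * nrm2(y) - d * (1 - d) * nrm2([a - b for a, b in zip(x, y)])
```

Identity (iii) is the one used repeatedly in the convergence analysis below to split convex combinations of iterates.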
Lemma 2.10
([40])
Let \(\{a_{n}\}\) be a sequence of nonnegative real numbers, \(\{\alpha _{n}\}\) be a sequence in \((0, 1)\) with \(\sum_{n=1}^{\infty }\alpha _{n} = \infty \), and \(\{b_{n}\}\) be a sequence of real numbers. Assume that
$$ a_{n+1}\leq (1-\alpha _{n})a_{n}+\alpha _{n}b_{n},\quad n\geq 1. $$
If \(\limsup_{k\rightarrow \infty }b_{n_{k}}\leq 0\) for every subsequence \(\{a_{n_{k}}\}\) of \(\{a_{n}\}\) satisfying \(\liminf_{k\rightarrow \infty }(a_{n_{k}+1} - a_{n_{k}})\geq 0\), then \(\lim_{n\rightarrow \infty }a_{n} =0\).
Lemma 2.11
([32])
Let \(\{a_{n}\}, \{c_{n}\}\subset \mathbb{R}_{+}\), \(\{\sigma _{n}\}\subset (0,1)\), and \(\{b_{n}\}\subset \mathbb{R}\) be sequences such that
$$ a_{n+1}\leq (1-\sigma _{n})a_{n}+b_{n}+c_{n},\quad n\geq 0. $$
Assume \(\sum_{n=0}^{\infty }c_{n}<\infty \). Then, the following results hold:

(i)
If \(b_{n}\leq \beta \sigma _{n}\) for some \(\beta \geq 0\), then \(\{a_{n}\}\) is a bounded sequence.

(ii)
If we have
$$ \sum_{n=0}^{\infty }\sigma _{n} = \infty \quad \textit{and}\quad \limsup_{n \rightarrow \infty }\frac{b_{n}}{\sigma _{n}}\leq 0,$$
then \(\lim_{n\rightarrow \infty }a_{n} =0\).
Lemma 2.12
Let H be a real Hilbert space and let \(A,S,T,V:H\to H\) be given operators.

(i)
If \(T=(1-\alpha )S + \alpha V\) for some \(\alpha \in (0,1)\), where \(S:H\to H\) is β-averaged and \(V:H\to H\) is nonexpansive, then T is \((\alpha +(1-\alpha )\beta )\)-averaged.

(ii)
The composite of finitely many averaged mappings is averaged. In particular, if \(T_{i}\) is \(\alpha _{i}\)-averaged, where \(\alpha _{i}\in (0,1)\) for \(i=1,2\), then the composite \(T_{1}\circ T_{2}\) is α-averaged, where \(\alpha =\alpha _{1}+\alpha _{2}-\alpha _{1}\alpha _{2}\).

(iii)
If the mappings \(\{T_{i}\}_{i=1}^{N}\) are averaged and have a common fixed point, then
$$ \bigcap_{i=1}^{N} F(T_{i}) = F(T_{1}\circ T_{2}\circ \cdots \circ T_{N}). $$ 
(iv)
If A is β-ism and \(\gamma \in (0,\beta ]\), then \(T:=I-\gamma A\) is firmly nonexpansive.

(v)
T is nonexpansive if and only if its complement \(I-T\) is \(\frac{1}{2}\)-ism.

(vi)
If T is β-ism, then for \(\gamma >0\), γT is \(\frac{\beta }{\gamma }\)-ism.

(vii)
T is averaged if and only if its complement \(I-T\) is β-ism for some \(\beta >\frac{1}{2}\). Indeed, for \(\alpha \in (0,1)\), T is α-averaged if and only if \(I-T\) is \(\frac{1}{2\alpha }\)-ism.

(viii)
T is firmly nonexpansive if and only if its complement \(I-T\) is firmly nonexpansive.
Lemma 2.13
([45])
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let \(A:H_{1}\to H_{2}\) be a nonzero bounded linear operator with adjoint \(A^{*}\) and let \(T:H_{2}\to H_{2}\) be a nonexpansive mapping. Then, \(A^{*}(I-T)A\) is \(\frac{1}{2\|A\|^{2}}\)-ism.
Lemma 2.14
([44])
Let H be a real Hilbert space, \(r>0\), \(f:H\to H\) be a μ-ism mapping and \(B:H\to 2^{H}\) be a maximal monotone mapping. Then,

(I)
the following conclusions are equivalent:

(i)
\(x^{*}\in H\) such that \(0\in f(x^{*})+ B(x^{*})\);

(ii)
\(x^{*}\in F(J_{r}^{B}(I-rf))\).


(II)
If \(r\in (0,2\mu )\), then \(J_{r}^{B}(I-rf)\) is averaged.
Proposed method
In this section, we present our proposed algorithm and highlight some of its important features. We assume that:

(1)
\(H_{1}\) and \(H_{2}\) are real Hilbert spaces;

(2)
For each \(i=1,2,\ldots ,m\) and \(j=1,2,\ldots ,k\), \(h_{i}\) and \(g_{j}\) are \(\varphi _{i}\)- and \(\vartheta _{j}\)-inverse strongly monotone mappings on \(H_{1}\) and \(H_{2}\), respectively, where \(\varphi _{i}>0\) and \(\vartheta _{j}>0\), and \(B_{i}\) and \(D_{j}\) are multivalued maximal monotone operators on \(H_{1}\) and \(H_{2}\), respectively. Let \(\phi =\min \{\varphi _{1}, \varphi _{2},\ldots ,\varphi _{m}; \vartheta _{1},\vartheta _{2},\ldots ,\vartheta _{k}\}\); then all \(h_{i}\) and \(g_{j}\) are ϕ-inverse strongly monotone mappings;

(3)
\(A:H_{1}\to H_{2}\) is a bounded linear operator with adjoint \(A^{*}\) and \(f:H_{1}\to H_{1}\) is a contraction with contraction constant \(\rho \in (0,1)\);

(4)
\(S:H_{1}\to H_{1}\) is a K-Lipschitz continuous quasi-pseudocontractive mapping with \(K\ge 1\) such that \(I-S\) is demiclosed at zero;

(5)
The solution set \(\Omega =\Gamma \cap F(S)\) is nonempty, where
$$ \Gamma := \Biggl\{ x\in H_{1}:x\in \bigcap _{i=1}^{m} \bigl((h_{i}+B_{i})^{-1}(0) \bigr)\cap \Biggl(A^{-1} \Biggl(\bigcap _{j=1}^{k}(g_{j}+D_{j})^{-1}(0) \Biggr) \Biggr) \Biggr\} . $$(3.1) 
(6)
We denote
$$ \textstyle\begin{cases} U:= J_{\lambda }^{B_{1}}(I-\lambda h_{1})\circ J_{\lambda }^{B_{2}}(I- \lambda h_{2})\circ \cdots \circ J_{\lambda }^{B_{m}}(I-\lambda h_{m}), \\ T:= J_{\lambda }^{D_{1}}(I-\lambda g_{1})\circ J_{\lambda }^{D_{2}}(I- \lambda g_{2})\circ \cdots \circ J_{\lambda }^{D_{k}}(I-\lambda g_{k}). \end{cases} $$(3.2)
It was shown in [16] that the operators U and T defined above are averaged mappings.
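Concretely, U chains m forward–backward operators \(J_{\lambda }^{B_{i}}(I-\lambda h_{i})\). In one dimension with \(B_{i}=\partial |\cdot |\) and \(h_{i}(x)=x-b_{i}\), this composition is a nest of soft-thresholding steps; the data \(b_{i}\) below are hypothetical, for illustration only:

```python
def soft(x, lam):                      # resolvent of lam * subdifferential(|.|)
    return (abs(x) - lam) * (1.0 if x > 0 else -1.0) if abs(x) > lam else 0.0

def U(x, bs, lam=0.5):
    # J^{B_1}(I - lam*h_1) o ... o J^{B_m}(I - lam*h_m), applied right to left
    for b in reversed(bs):
        x = soft(x - lam * (x - b), lam)   # forward step with h_i(x) = x - b_i, then backward step
    return x

x = 3.0
for _ in range(200):                   # Picard iteration on the averaged map U
    x = U(x, bs=[2.0, 2.0])
# x is (numerically) a fixed point of U
```

Because each factor is averaged with a common fixed point, the composition is averaged (Lemma 2.12(ii)) and the Picard iterates converge to a common zero, here \(x^{*}=1\).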
We establish the strong-convergence result for the proposed algorithm under the following conditions on the control parameters:

(C1)
\(\{\alpha _{n}\} \subset (0,1)\) such that \(\lim_{n\rightarrow \infty }\alpha _{n}=0\) and \(\sum_{n=0}^{\infty }\alpha _{n} = \infty \);

(C2)
\(\{\beta _{n}\}, \{\delta _{n}\},\{\xi _{n}\}\subset (0,1)\) such that \(0< a\le \beta _{n},\delta _{n}\), \(\xi _{n}\le b<1\);

(C3)
\(\{\epsilon _{n}\}\) is a positive sequence such that \(\lim_{n\rightarrow \infty }\frac{\epsilon _{n}}{\alpha _{n}}=0\);

(C4)
\(\theta >0\), \(\lambda \in (0,2\phi )\), \(0<\eta <\mu < \frac{1}{1+\sqrt{1+K^{2}}}\) and \(0 < \tau _{1} \leq \tau _{n} \leq \tau _{2} < 1\).
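For illustration only, one concrete schedule satisfying (C1) and (C3) (our choice, not prescribed by the paper) is \(\alpha _{n}=1/(n+1)\) and \(\epsilon _{n}=1/(n+1)^{2}\), so that \(\epsilon _{n}/\alpha _{n}=1/(n+1)\to 0\):

```python
alpha = lambda n: 1.0 / (n + 1)        # (C1): alpha_n -> 0 and sum alpha_n diverges
eps   = lambda n: 1.0 / (n + 1) ** 2   # positive sequence for (C3)

# (C3): eps_n / alpha_n -> 0; the ratio decreases along the tail of the sequence
ratios = [eps(n) / alpha(n) for n in (10, 100, 1000)]
```

The remaining parameters in (C2) and (C4) can simply be held constant within their stated ranges, e.g. \(\beta _{n}=\delta _{n}=\xi _{n}=\frac{1}{2}\).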
Now, our main algorithm is presented in Algorithm 2.
Remark 3.1

We point out that the step size of the proposed method defined in (3.4) does not depend on the norm of the bounded linear operator. This makes our algorithm easy to implement, unlike the methods proposed in [16, 22, 33, 50], which require knowledge of the operator norm for their implementation.

Step 1 of the algorithm can be implemented since the value of \(\|x_{n}-x_{n-1}\|\) is known prior to choosing \(\theta _{n}\). Also, observe that in incorporating the inertial term our method does not require stringent conditions such as condition (iii) of Algorithm 1 and condition (iv) of Algorithm (1.13).

We note that unlike in Algorithm (1.13), the viscosity technique employed in Step 5 of our algorithm accommodates a larger class of contraction mappings since the contraction constant \(\rho \in (0,1)\).
Remark 3.2
By conditions (C1) and (C3), it follows from (3.3) that
$$ \lim_{n\rightarrow \infty }\frac{\theta _{n}}{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert =0. $$
Remark 3.3
We note that in (3.4), the choice of the step size \(\gamma _{n}\) is independent of the operator norm \(\|A\|\). Also, the value of γ has no effect on the proposed algorithm but was introduced for clarity. Now, we show that the step size of the algorithm in (3.4) is well defined.
Lemma 3.4
The step sizes \(\{\gamma _{n}\}\) of the Algorithm 2 defined by (3.4) are well defined.
Proof
Let \(p\in \Omega \). Then, by Lemma 2.14(I) we have that \(p\in \bigcap_{i=1}^{m} ((h_{i}+B_{i})^{1}(0) )\) and \(Ap\in \bigcap_{j=1}^{k} ((g_{j}+D_{j})^{1}(0) )\). From Lemma 2.12(iii) we have \(p\in F(U)\) and \(Ap\in F(T)\). Applying the fact that T is averaged together with Lemma 2.12(vii), we have
for some \(\beta >\frac{1}{2}\). This shows that \(\|A^{*}(I-T)Aw_{n}\|>0\) whenever \(\|(I-T)Aw_{n}\|\ne 0\). Thus, \(\{\gamma _{n}\}\) is well defined. □
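The definition (3.4) is not reproduced above, but self-adaptive step sizes of this kind standardly take the form \(\gamma _{n}=\tau _{n}\|(I-T)Aw_{n}\|^{2}/\|A^{*}(I-T)Aw_{n}\|^{2}\) when \((I-T)Aw_{n}\ne 0\), and an arbitrary positive value otherwise; we assume that shape purely for illustration. The point of the lemma is that such a step size needs only products with A and \(A^{*}\), never \(\|A\|\):

```python
import numpy as np

A = np.array([[2.0, 0.0], [1.0, 3.0]])          # bounded linear operator (a sample matrix)
T = lambda y: y / max(np.linalg.norm(y), 1.0)   # projection onto the unit ball: nonexpansive

def step_size(w, tau=0.5, fallback=1.0):
    r = A @ w - T(A @ w)          # (I - T)Aw
    s = A.T @ r                   # A*(I - T)Aw; nonzero whenever r is nonzero (Lemma 3.4)
    if np.linalg.norm(r) == 0.0:  # w already solves the subproblem; any gamma works
        return fallback
    return tau * np.linalg.norm(r) ** 2 / np.linalg.norm(s) ** 2

w = np.array([1.0, 1.0])
gamma = step_size(w)              # computed without any knowledge of ||A||
```

Lemma 3.4 guarantees the denominator is nonzero exactly when the numerator is, so the quotient is always well defined.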
Convergence analysis
First, we establish some lemmas before proving the strong-convergence theorem for the proposed algorithm.
Lemma 4.1
Let \(\{x_{n}\}\) be a sequence generated by Algorithm 2. Then, \(\{x_{n}\}\) is bounded.
Proof
Observe that the mapping \(P_{\Omega }\circ f\) is a contraction. Then, by the Banach Contraction Principle there exists an element \(p\in H_{1}\) such that \(p=P_{\Omega }\circ f(p)\). It follows that \(p\in \Omega \), \(Sp=p\), \(Up=p\), and \(TAp=Ap\). By applying Lemma 2.9(ii) and the nonexpansiveness of U, we obtain
Again, applying Lemma 2.9(ii) and the nonexpansiveness of T, we have
Applying (4.3) into (4.2) and using the definition of \(\gamma _{n}\) together with the condition on \(\tau _{n}\), we have
Next, using the triangle inequality, we obtain from Step 2
Since, by Remark 3.2, \(\lim_{n\rightarrow \infty }\frac{\theta _{n}}{\alpha _{n}}\|x_{n} - x_{n-1}\| = 0\), there exists a constant \(M_{1} > 0\) such that \(\frac{\theta _{n}}{\alpha _{n}}\|x_{n} - x_{n-1}\|\leq M_{1}\) for all \(n\geq 1\). Consequently, from (4.6) we obtain
By the conditions on η and μ, and by Lemma 2.7, we know that \(\mathbb{V}\) is quasinonexpansive. Consequently, by applying the triangle inequality, and using (4.5) and (4.7), from Step 4 we have
Now, by applying (4.5) and (4.8), from Step 5 it follows that
where \(M^{*} :=\sup_{n\in \mathbb{N}} \{\frac{\|f(p)-p\|}{1-\rho } + \frac{(1-\alpha _{n}(1-\rho ))M_{1}}{1-\rho } \}\). Set \(a_{n}:=\|x_{n}-p\|\); \(b_{n}:=\alpha _{n}(1-\rho )M^{*}\); \(c_{n}:=0\), and \(\sigma _{n}:=\alpha _{n}(1-\rho )\). By invoking Lemma 2.11(i) together with the assumptions on the control parameters, we have that \(\{\|x_{n}-p\|\}\) is bounded, and this implies that \(\{x_{n}\}\) is bounded. Consequently, \(\{w_{n}\}\), \(\{u_{n}\}\), and \(\{v_{n}\}\) are all bounded. □
Lemma 4.2
Let \(\{x_{n}\}\) be a sequence generated by Algorithm 2 and \(p\in \Omega \). Then, under conditions (C1)–(C4) the following inequality holds for all \(n\in \mathbb{N}\):
Proof
Let \(p\in \Omega \). Then, by applying the Cauchy–Schwarz inequality together with Lemma 2.9(ii), we obtain
where \(M_{2}:= \sup_{n\in \mathbb{N}}\{\|x_{n} - p\|, \theta _{n}\|x_{n} - x_{n-1}\|\}>0\).
Also, by applying Lemma 2.9(iii), (4.4) and (4.9), we have
Next, invoking Lemma 2.9(i), and applying (4.5), (4.9) and (4.10) we obtain
Consequently, we obtain
where \(M:=\sup \{\|x_{n}-p\|^{2}: n\in \mathbb{N}\}\). Hence, we have the required inequality. □
Lemma 4.3
Suppose \(\{x_{n}\}\) is a sequence generated by Algorithm 2 such that conditions (C1)–(C4) are satisfied. Then, the following inequality holds for all \(p\in \Omega \) and \(n\in \mathbb{N}\):
Proof
Let \(p\in \Omega \). By applying Lemma 2.9(iii) together with (4.5), (4.9) and (4.11) we have
which is the required inequality. □
Now, we are in a position to state and prove the strongconvergence theorem for the proposed algorithm.
Theorem 4.4
Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces and let \(f:H_{1}\rightarrow H_{1}\) be a contraction with coefficient \(\rho \in (0,1)\). Suppose \(\{x_{n}\}\) is a sequence generated by Algorithm 2 such that conditions (C1)–(C4) hold. Then, the sequence \(\{x_{n}\}\) converges strongly to a point \(\hat{x}\in \Omega \), where \(\hat{x} = P_{\Omega }\circ f(\hat{x})\).
Proof
Let \(\hat{x} = P_{\Omega }\circ f(\hat{x})\). From Lemma 4.2, we obtain
Next, we claim that the sequence \(\{\|x_{n} - \hat{x}\|\}\) converges to zero. In order to establish this, by Lemma 2.10, it suffices to show that \(\limsup_{k\rightarrow \infty }\langle f(\hat{x}) - \hat{x}, x_{n_{k}+1}- \hat{x} \rangle \leq 0\) for every subsequence \(\{\|x_{n_{k}} - \hat{x}\|\}\) of \(\{\|x_{n} - \hat{x}\|\}\) satisfying
$$ \liminf_{k\rightarrow \infty } \bigl( \Vert x_{n_{k}+1}-\hat{x} \Vert - \Vert x_{n_{k}}-\hat{x} \Vert \bigr)\geq 0. $$
Suppose that \(\{\|x_{n_{k}} - \hat{x}\|\}\) is a subsequence of \(\{\|x_{n} - \hat{x}\|\}\) such that
$$ \liminf_{k\rightarrow \infty } \bigl( \Vert x_{n_{k}+1}-\hat{x} \Vert - \Vert x_{n_{k}}-\hat{x} \Vert \bigr)\geq 0. $$
Again, from Lemma 4.2 we obtain
By applying (4.13) together with condition (C2) and the fact that \(\lim_{k\rightarrow \infty }\alpha _{n_{k}}=0\), we have
By the definition of \(\gamma _{n}\), we obtain
Consequently, we have
Since \(\{\|A^{*}(T-I)Aw_{n_{k}}\|\}\) is bounded, it follows that
Consequently, we obtain
Following a similar argument, from Lemma 4.2 we obtain
By condition (C2), it follows that
Next, from Lemma 4.3 we obtain
By (4.13), Remark 3.2, and the fact that \(\lim_{k\rightarrow \infty }\alpha _{n_{k}}=0\), we obtain
Consequently, we have
By Remark 3.2, we have
By applying (4.16), from Step 4 we have
Next, by applying (4.16), (4.18), (4.19) and (4.20) we obtain
and
Now, applying (4.21) and (4.22) together with the fact that \(\lim_{k\rightarrow \infty }\alpha _{n_{k}}=0\), we obtain
To complete the proof, we need to show that \(w_{\omega }(x_{n})\subset \Omega \). Since \(\{x_{n}\}\) is bounded, \(w_{\omega }(x_{n})\) is nonempty. Let \(x^{*}\in w_{\omega }(x_{n})\) be an arbitrary element. Then, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{k}}\rightharpoonup x^{*}\) as \(k\to \infty \). From (4.22), we have that \(u_{n_{k}}\rightharpoonup x^{*}\) as \(k\to \infty \). Since \(I-\mathbb{V}\) is demiclosed at zero, it follows from (4.22) and Lemma 2.7 that \(x^{*}\in F(\mathbb{V})=F(S)\). That is, \(w_{\omega }(x_{n})\subset F(S)\).
Next, we show that \(w_{\omega }(x_{n})\subset \Gamma \). From Step 3 and by applying (4.21), we have
Since the operators U and \(I +\gamma _{n}A^{*}(T-I)A\) are averaged, it follows from Lemma 2.12(ii) that the composition \(U(I +\gamma _{n}A^{*}(T-I)A)\) is also averaged and consequently nonexpansive. By the Demiclosedness Principle for nonexpansive mappings, and by applying (4.19) and (4.24), we obtain \(U(I +\gamma _{n}A^{*}(T-I)A)x^{*}=x^{*}\). Since \(\Omega \ne \emptyset \), by Lemma 2.12(iii) we have \(Ux^{*}=x^{*}\) and \((I +\gamma _{n}A^{*}(T-I)A)x^{*}=x^{*}\). It then follows from Lemma 2.12(iii) and Lemma 2.14(I) that
Since T is nonexpansive, by the Demiclosedness Principle for nonexpansive mappings, and by applying (4.14) and (4.19), we have \(TAx^{*} = Ax^{*}\). It then follows from Lemma 2.12(iii) and Lemma 2.14(I) that
From (4.25) and (4.26), we obtain \(w_{\omega }(x_{n})\subset \Gamma \). Consequently, we have that \(w_{\omega }(x_{n})\subset \Omega \).
Next, from (4.21) we have that \(w_{\omega }\{v_{n_{k}}\} = w_{\omega }\{x_{n_{k}}\}\). By the boundedness of \(\{x_{n_{k}}\}\), there exists a subsequence \(\{x_{n_{k_{j}}}\}\) of \(\{x_{n_{k}}\}\) such that \(x_{n_{k_{j}}}\rightharpoonup x^{\dagger }\) and
Since \(\hat{x}=P_{\Omega }\circ f(\hat{x})\), it follows that
Now, from (4.23) and (4.27), we obtain
Applying Lemma 2.10 to (4.12), and using (4.28) together with the facts that \(\lim_{n\rightarrow \infty }\frac{\theta _{n}}{\alpha _{n}}\|x_{n} - x_{n-1}\| =0\) and \(\lim_{n\rightarrow \infty }\alpha _{n} = 0\), we deduce that \(\lim_{n\rightarrow \infty }\|x_{n} - \hat{x}\|=0\), as required. □
Remark 4.5
The results of this paper improve the results of Yao et al. [48] and Chang et al. [16] in the following ways:

(i)
Our result extends the results of Yao et al. [48] and Chang et al. [16], from the SMVIP (1.4)–(1.5) and the system of monotone variational inclusion problems (1.11), respectively, to the problem of finding a common solution of the system of monotone variational inclusion problems (1.11) and the fixed-point problem for quasi-pseudocontractions.

(ii)
While Yao et al. [48] were only able to prove a weak-convergence result, in this paper we establish a strong-convergence result for our proposed algorithm.

(iii)
The proposed method of Chang et al. [16] requires knowledge of the operator norm for its implementation, while our proposed method is independent of the operator norm.

(iv)
Our method employs a very efficient inertial technique that does not require stringent conditions, such as condition (iii) of Algorithm 1 of Yao et al. [48] and condition (iv) of Algorithm (1.13) of Chang et al. [16].

(v)
The viscosity technique we employed accommodates a larger class of contractions than the one employed by Chang et al. [16].
Remark 4.6
Since the class of quasi-pseudocontractions contains several other classes of nonlinear mappings, such as the pseudocontractions, the demicontractive operators, the quasi-nonexpansive operators, the directed operators, and the strictly pseudocontractive mappings with fixed points, as special cases, our results present a unified framework for studying these classes of operators.
Applications
In this section we consider some applications of our results to approximating solutions of related optimization problems in the framework of Hilbert spaces.
System of equilibrium problems with fixed-point constraint
Let C be a nonempty closed convex subset of a real Hilbert space H, and let \(F:C\times C\to \mathbb{R}\) be a bifunction. The equilibrium problem (EP) for the bifunction F on C is to find a point \(x^{*}\in C\) such that
We denote the solution set of the EP (5.1) by \(\operatorname{EP}(F)\). The EP serves as a unifying framework for several mathematical problems, such as variational inequality problems, minimization problems, complementarity problems, saddle-point problems, mathematical programming problems, Nash-equilibrium problems in noncooperative games, and others; see [2, 23, 28, 29, 35] and the references therein. Several problems in economics, physics, and optimization can be formulated as finding a solution of the EP (5.1).
In solving the EP (5.1), we assume that the bifunction \(F: C \times C \to \mathbb{R}\) satisfies the following conditions:

(A1)
\(F(x,x) = 0\) for all \(x \in C\);

(A2)
F is monotone, that is, \(F(x,y) + F(y,x) \le 0\) for all \(x,y \in C\);

(A3)
F is upper hemicontinuous, that is, for all \(x,y,z \in C\), \(\lim_{t \downarrow 0} F (tz + (1-t)x,y )\le F(x,y)\);

(A4)
for each \(x \in C\), \(y \mapsto F(x,y)\) is convex and lower semicontinuous.
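As a standard illustration (ours, not taken from this paper): if \(M:C\to H\) is a monotone and continuous mapping, the bifunction below satisfies (A1)–(A4), and the corresponding EP (5.1) is precisely the classical variational inequality for M.

```latex
% Standard example of a bifunction satisfying (A1)-(A4):
F(x,y) = \langle Mx,\, y - x \rangle .
% (A1): F(x,x) = 0.
% (A2): F(x,y) + F(y,x) = \langle Mx - My,\, y - x \rangle \le 0
%       by the monotonicity of M.
% (A3): t \mapsto F(tz + (1-t)x,\, y) is continuous, so upper
%       hemicontinuity holds.
% (A4): y \mapsto F(x,y) is affine, hence convex and lower semicontinuous.
```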
The following theorem is required in establishing our next result.
Theorem 5.1
([43])
Let C be a nonempty closed convex subset of a real Hilbert space H and \(F:C\times C\rightarrow \mathbb{R}\) be a bifunction satisfying (A1)–(A4). Define a multivalued mapping \(A_{F}: H\rightarrow 2^{H}\) by
Then, the following hold:

(i)
\(A_{F}\) is maximal monotone;

(ii)
\(\operatorname{EP}(F) = A_{F}^{-1}(0)\);

(iii)
\(T_{r}^{F} = (I + rA_{F})^{-1}\) for \(r>0\), where \(T_{r}^{F}\) is the resolvent of \(A_{F}\) and is given by
$$ T_{r}^{F}(x) = \biggl\{ y\in C : F(y,z) + \frac{1}{r}\langle z - y, y-x \rangle \geq 0, \forall z\in C \biggr\} . $$
Here, we consider the following system of equilibrium problems (SEPs) with fixed-point constraint:
where C and Q are nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, \(S:H_{1}\to H_{1}\) is a quasi-pseudocontractive mapping, for each \(i=1,2,\ldots ,m\) and \(j=1,2,\ldots ,k\), \(F_{i}\) and \(G_{j}\) are bifunctions satisfying conditions (A1)–(A4) above, and \(A:H_{1}\to H_{2}\) is a bounded linear operator. We denote the solution set of problem (5.2) by \(\Gamma _{\mathrm{SEP}}=\bigcap_{i=1}^{m} \operatorname{EP}(F_{i})\cap A^{-1}(\bigcap_{j=1}^{k}\operatorname{EP}(G_{j}))\).
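In detail (our paraphrase, consistent with the solution set \(\Gamma _{\mathrm{SEP}}\) just defined), the SEP with fixed-point constraint asks for:

```latex
\text{find } x^{*}\in C \text{ with } Sx^{*}=x^{*} \text{ such that }
F_{i}(x^{*},y)\ge 0 \quad \forall y\in C,\ i=1,\ldots ,m,
```
```latex
\text{and } y^{*}=Ax^{*}\in Q \text{ satisfies }
G_{j}(y^{*},z)\ge 0 \quad \forall z\in Q,\ j=1,\ldots ,k.
```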
Now, taking \(B_{i}= A_{F_{i}}\), \(i=1,2,\ldots ,m\), and \(D_{j}= A_{G_{j}}\), \(j=1,2,\ldots ,k\), and setting \(h_{i}=g_{j}=0\) in Theorem 4.4, we obtain the following result for approximating solutions of problem (5.2) in Hilbert spaces.
Theorem 5.2
Let C and Q be nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, \(A:H_{1}\to H_{2}\) be a bounded linear operator with adjoint \(A^{*}\), and for each \(i=1,2,\ldots ,m\) and \(j=1,2,\ldots ,k\) let \(F_{i}:C\times C\to \mathbb{R}\) and \(G_{j}:Q\times Q\to \mathbb{R}\) be bifunctions satisfying conditions (A1)–(A4). Let \(S:H_{1}\to H_{1}\) be a K-Lipschitz continuous quasi-pseudocontractive mapping that is demiclosed at zero, with \(K\ge 1\), and let \(f:H_{1}\to H_{1}\) be a contraction with coefficient \(\rho \in (0,1)\). Suppose that the solution set \(\Omega = \Gamma _{\mathrm{SEP}}\cap F(S)\neq \emptyset \) and that conditions (C1)–(C4) are satisfied. Then, the sequence \(\{x_{n}\}\) generated by the following algorithm converges strongly to a point \(\hat{x}\in \Omega \), where \(\hat{x}=P_{\Omega }\circ f(\hat{x})\).
System of convex minimization problems with fixed-point constraint
Suppose that \(F:H\to \mathbb{R}\) is a convex and differentiable function and that \(M:H\to (-\infty ,+\infty ]\) is a proper convex and lower semicontinuous function. It is known that if \(\nabla F\) is \(\frac{1}{\mu }\)-Lipschitz continuous, then it is μ-inverse strongly monotone, where \(\nabla F\) denotes the gradient of F. Also, it is known that the subdifferential ∂M of M is maximal monotone (see [39]). Moreover,
We consider the following system of convex minimization problems (SCMP) with fixed-point constraint: Find
and such that \(y^{*}=Ax^{*}\in H_{2}\) solves
where \(S:H_{1}\to H_{1}\) is a quasi-pseudocontractive mapping, for each \(i=1,2,\ldots ,m\) and \(j=1,2,\ldots ,k\), \(F_{i}:H_{1}\to \mathbb{R}\) and \(G_{j}:H_{2}\to \mathbb{R}\) are convex and differentiable functions, and \(M_{i}:H_{1}\to (-\infty ,+\infty ]\) and \(N_{j}:H_{2}\to (-\infty ,+\infty ]\) are proper convex and lower semicontinuous functions. We denote the solution set of problem (5.6)–(5.7) by \(\Gamma _{\mathrm{SCMP}}\).
Now, for each \(i=1,2,\ldots ,m\) and \(j=1,2,\ldots ,k\), setting \(B_{i}=\partial M_{i}\), \(D_{j}=\partial N_{j}\), \(h_{i}=\nabla F_{i}\), and \(g_{j}=\nabla G_{j}\) in Theorem 4.4, we obtain the following result for approximating solutions of problem (5.6)–(5.7) in Hilbert spaces.
Theorem 5.3
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces and \(A:H_{1}\to H_{2}\) be a bounded linear operator with adjoint \(A^{*}\). For each \(i=1,2,\ldots ,m\) and \(j=1,2,\ldots ,k\), let \(M_{i}:H_{1}\to (-\infty ,+\infty ]\) and \(N_{j}:H_{2}\to (-\infty ,+\infty ]\) be proper convex and lower semicontinuous functions, and let \(F_{i}:H_{1}\to \mathbb{R}\), \(G_{j}:H_{2}\to \mathbb{R}\) be convex and differentiable functions such that \(\nabla F_{i}\), \(\nabla G_{j}\) are \(\frac{1}{\mu }\)-Lipschitz continuous. Let \(S:H_{1}\to H_{1}\) be a K-Lipschitz continuous quasi-pseudocontractive mapping that is demiclosed at zero, with \(K\ge 1\), and let \(f:H_{1}\to H_{1}\) be a contraction with coefficient \(\rho \in (0,1)\). Suppose that the solution set \(\Omega = \Gamma _{\mathrm{SCMP}}\cap F(S)\neq \emptyset \) and that conditions (C1)–(C4) are satisfied. Then, the sequence \(\{x_{n}\}\) generated by the following algorithm converges strongly to a point \(\hat{x}\in \Omega \), where \(\hat{x}=P_{\Omega }\circ f(\hat{x})\).
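To make the forward-backward mechanism underlying Theorem 5.3 concrete, here is a minimal numerical sketch (our illustration, not the paper's Algorithm 2): we take \(F(x)=\frac{1}{2}\|x-b\|^{2}\) and \(M(x)=\|x\|_{1}\), for which the resolvent \((I+\lambda \,\partial M)^{-1}\) is the soft-thresholding operator, and iterate the basic forward-backward step, whose fixed points are exactly the minimizers of \(F+M\).

```python
import numpy as np

def prox_l1(x, lam):
    # Resolvent (I + lam * d||.||_1)^{-1} of the subdifferential of the
    # l1-norm scaled by lam: componentwise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def forward_backward_step(x, grad_F, lam):
    # One forward-backward step x+ = (I + lam*dM)^{-1}(x - lam*grad F(x)).
    return prox_l1(x - lam * grad_F(x), lam)

# Illustrative data (our choice): F(x) = 0.5*||x - b||^2, so grad F(x) = x - b
# is 1-Lipschitz and hence 1-inverse strongly monotone; M(x) = ||x||_1.
b = np.array([2.0, -0.3, 0.0])
grad_F = lambda x: x - b

x = np.zeros(3)
for _ in range(100):
    x = forward_backward_step(x, grad_F, lam=0.5)
# x converges to the unique minimizer of F + M, namely the soft-thresholding
# of b at level 1: [1.0, 0.0, 0.0].
```

The iteration map here is a contraction, so convergence is geometric; in the general setting of Theorem 5.3 the inertial and viscosity terms are added on top of this basic step.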
Numerical examples
Here, we present some numerical experiments, in both finite-dimensional and infinite-dimensional Hilbert spaces, to illustrate the performance of our proposed method, Algorithm 2, in comparison with Algorithm (1.13). Moreover, we examine how the key parameters affect the performance of our method. All numerical computations were carried out using MATLAB R2019b.
In our computations, we choose, for each \(n\in \mathbb{N}\), \(\alpha _{n} = \frac{1}{2n+1}\), \(\epsilon _{n} = \frac{1}{(2n+1)^{3}}\), \(\delta _{n}=\xi _{n}=\frac{n}{2n+1}\), \(\beta _{n}=\frac{2n}{3n+2}\), and \(\lambda =0.5\). Let \(f(x) = \frac{1}{6}x\); then \(\rho =\frac{1}{6}\) is the Lipschitz constant of f. It can easily be verified that the conditions of Theorem 4.4 are satisfied. We take \(\theta _{n}=\frac{1}{3n^{2}}\) and \(\gamma =0.05\) in Algorithm (1.13).
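The inertial parameter in methods of this type is typically chosen self-adaptively. The sketch below shows the common pattern (our reconstruction of the usual rule; the exact Step 1 of Algorithm 2 may differ, and the function name is ours): capping \(\theta _{n}\) so that \(\theta _{n}\|x_{n}-x_{n-1}\|\le \epsilon _{n}\) with \(\epsilon _{n}=o(\alpha _{n})\) yields \(\lim_{n\to \infty }\frac{\theta _{n}}{\alpha _{n}}\|x_{n}-x_{n-1}\|=0\), the condition used in the proof of Theorem 4.4.

```python
def inertial_parameter(theta, eps_n, x_n, x_prev):
    # Self-adaptive inertial parameter (a common pattern in this literature;
    # not necessarily the paper's exact Step 1): cap theta_n so that
    # theta_n * |x_n - x_{n-1}| <= eps_n.
    diff = abs(x_n - x_prev)
    return min(theta, eps_n / diff) if diff > 0 else theta

n = 10
alpha_n = 1.0 / (2 * n + 1)           # alpha_n as chosen above
eps_n = 1.0 / (2 * n + 1) ** 3        # eps_n = o(alpha_n)
theta_n = inertial_parameter(theta=1.5, eps_n=eps_n, x_n=4.0, x_prev=2.0)
# theta_n/alpha_n * |x_n - x_prev| <= eps_n/alpha_n -> 0 as n -> infinity.
```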
Example 6.1
Let \(H_{1}=H_{2}=\mathbb{R}\) with the inner product defined by \(\langle x,y \rangle =xy\) for all \(x,y\in \mathbb{R}\), and the induced usual norm \(|\cdot |\). For \(i=j=1,2,\ldots ,5\), we define the mappings \(h_{i},g_{j}:\mathbb{R}\to \mathbb{R}\) by \(h_{i}(x)=ix+6 \ \forall x\in H_{1}\) and \(g_{j}(y)=2jy-1 \ \forall y\in H_{2}\), and we take \(\lambda =0.18\). Let \(B_{i},D_{j}:\mathbb{R}\to \mathbb{R}\) be defined by \(B_{i}(x)=3ix-2~\forall x\in H_{1}\) and \(D_{j}(y)=3jy~\forall y\in H_{2}\), and define \(A:H_{1}\to H_{2}\) by \(A(x)=\frac{5}{3}x\) for all \(x\in H_{1}\); then \(A^{*}(y)=\frac{5}{3}y\) for all \(y\in H_{2}\). Define \(S:\mathbb{R}\to \mathbb{R}\) by \(S(x)=2x\). Then, S is a 2-Lipschitzian quasi-pseudocontractive mapping. We choose \(\eta =0.23\) and \(\mu =0.28\).
Using MATLAB R2019b, we compare the performance of Algorithm 2 with Algorithm (1.13). The stopping criterion used for our computation is \(|x_{n+1}-x_{n}|< 10^{-3}\). We plot the graphs of errors against the number of iterations in each case. The numerical results are reported in Fig. 1 and Table 1.
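Because the operators in Example 6.1 are affine on \(\mathbb{R}\), their resolvents can be computed in closed form. The sketch below (our illustration; the generic coefficients a and c stand in for the example's operators, and the values used in the check are our reading of the text) solves the resolvent equation directly:

```python
def resolvent_affine(x, a, c, lam):
    # Resolvent J_{lam B}(x) = (I + lam*B)^{-1}(x) of the affine monotone
    # operator B(y) = a*y - c on the real line (a >= 0): solving
    # y + lam*(a*y - c) = x gives y = (x + lam*c) / (1 + lam*a).
    return (x + lam * c) / (1.0 + lam * a)

# Check the defining equation with illustrative coefficients a = 3, c = 2
# and the step size lam = 0.18 appearing in Example 6.1:
x, a, c, lam = 1.7, 3.0, 2.0, 0.18
y = resolvent_affine(x, a, c, lam)
residual = y + lam * (a * y - c) - x   # should vanish up to rounding
```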
Example 6.2
Let \(H_{1}=(\ell _{2}(\mathbb{R}), \|\cdot \|_{2})=H_{2}\), where \(\ell _{2}(\mathbb{R}):=\{x=(x_{1},x_{2},\ldots ,x_{n},\ldots ), x_{j} \in \mathbb{R}:\sum_{j=1}^{\infty }x_{j}^{2}<\infty \}\) and \(\|x\|_{2}=( \sum_{j=1}^{\infty }x_{j}^{2})^{\frac{1}{2}}\) for all \(x\in \ell _{2}(\mathbb{R})\). For \(i=j=1,2,\ldots ,5\), we define the mappings \(h_{i},g_{j}:\ell _{2}(\mathbb{R})\to \ell _{2}(\mathbb{R})\) by \(h_{i}(x)=2ix-1\ \forall x\in H_{1}\) and \(g_{j}(y)=jy+2 \ \forall y\in H_{2}\), and we take \(\lambda =0.15\). Let \(B_{i},D_{j}:\ell _{2}(\mathbb{R})\to \ell _{2}(\mathbb{R})\) be defined by \(B_{i}(x)=\frac{7}{3i}x \ \forall x\in H_{1}\) and \(D_{j}(y)=\frac{5}{3j}y \ \forall y\in H_{2}\), and define \(A:H_{1}\to H_{2}\) by \(A(x)=\frac{x}{3}\) for all \(x\in H_{1}\); then \(A^{*}(y)=\frac{y}{3}\) for all \(y\in H_{2}\). Define \(S:\ell _{2}(\mathbb{R})\to \ell _{2}(\mathbb{R})\) by \(S(x)=\frac{5}{4}x\). Then, S is a \(\frac{5}{4}\)-Lipschitzian quasi-pseudocontractive mapping. We choose \(\eta =0.29\), \(\mu =0.34\), and \(\lambda = 0.01\) in Algorithm (1.13).
Using MATLAB R2019b, we compare the performance of Algorithm 2 with Algorithm (1.13). The stopping criterion used for our computation is \(\|x_{n+1}-x_{n}\|< 10^{-3}\). We plot the graphs of errors against the number of iterations in each case. The numerical results are reported in Fig. 2 and Table 2.
We test Examples 6.1 and 6.2 under the following experiments:
Experiment 1
In this experiment, we examine the behavior of our method by fixing the other parameters and varying θ, in order to assess the effect of the parameter θ on our method.
For Example 6.1, we choose different initial values as follows:

Case I:
\(x_{0} = 10\), \(x_{1} = 23\);

Case II:
\(x_{0} = \frac{17}{25}\), \(x_{1} = 32\);

Case III:
\(x_{0} = 29\), \(x_{1} = 100.23\);

Case IV:
\(x_{0} = 29\), \(x_{1} = 100.23\);
Also, we consider \(\theta \in \{1.5, 3.0, 4.5, 6.0, 7.5, 9.0\}\), which satisfies Assumption (C4). We use Algorithm (1.13) and Algorithm 2 for the experiment and report the numerical results in Table 1 and Fig. 1.
Experiment 2
In this experiment, we examine the behavior of our method by fixing the other parameters and varying \(\tau _{n}\), in order to assess the effect of the parameter \(\tau _{n}\) on our method.
For Example 6.2, we choose different initial values as follows:

Case I:
\(x_{0} = (3, 1, \frac{1}{3}, \ldots )\), \(x_{1} = (1, \frac{1}{2}, \frac{1}{4},\ldots )\);

Case II:
\(x_{0} = (1, \frac{1}{7}, \frac{1}{49},\ldots )\), \(x_{1} = (0.1, 0.01, 0.001, \ldots )\);

Case III:
\(x_{0} = (2, \frac{4}{5}, \frac{8}{25}, \ldots )\), \(x_{1} = (1, \frac{1}{6}, \frac{1}{36}, \ldots )\);

Case IV:
\(x_{0} = (2, \frac{4}{3}, \frac{8}{9},\ldots )\), \(x_{1} = (5, 0.5, 0.05, \ldots )\).
Also, we consider \(\tau _{n}\in \{\frac{n}{2n + 1}, \frac{n}{5n + 1}, \frac{2n}{3n + 2}, \frac{3n}{4n + 1}, \frac{4n}{5n + 1}, \frac{2n}{5n + 2}\}\), which satisfies Assumption (C4). We use Algorithm (1.13) and Algorithm 2 for the experiment and report the numerical results in Table 2 and Fig. 2.
Conclusion
We studied the problem of finding the solution of a system of monotone variational inclusion problems with the constraint of a fixed-point set of quasi-pseudocontractive mappings. We proposed a new iterative method that employs an inertial technique with a self-adaptive step size for approximating the solution of the problem in Hilbert spaces, and we proved a strong-convergence result for the proposed method under some mild conditions. We further applied our results to study related optimization problems and presented some numerical experiments, with graphical illustrations, to demonstrate the efficiency and applicability of our proposed method. In Examples 6.1 and 6.2, we examined the dependence of the method on its key parameters for each starting point, in order to determine whether their choices affect its performance. The tables and graphs show that the number of iterations and the CPU times for our proposed method remain consistent and well behaved for different choices of these key parameters, and that our method is efficient and outperforms a related method.
Availability of data and materials
Not applicable.
References
Abbas, M., Al Sharani, M., Ansari, Q.H., Iyiola, O.S., Shehu, Y.: Iterative methods for solving proximal split minimization problem. Numer. Algorithms 78(1), 193–215 (2018)
Alakoya, O.T., Mewomo, O.T.: Viscosity S-iteration method with inertial technique and self-adaptive step size for split variational inclusion, equilibrium and fixed-point problems. Comput. Appl. Math. 41(1), 39 (2022)
Alakoya, O.T., Taiwo, A., Mewomo, O.T.: On system of split generalised mixed equilibrium and fixed point problems for multivalued mappings with no prior knowledge of operator norm. Fixed Point Theory 23(1), 45–74 (2022)
Alakoya, T.O., Jolaoso, L.O., Mewomo, O.T.: Strong convergence and bounded perturbation resilience of a modified forward-backward splitting algorithm and its application. J. Nonlinear Convex Anal. 23(4), 653–682 (2022)
Alakoya, T.O., Owolabi, A.O.E., Mewomo, O.T.: An inertial algorithm with a self-adaptive step size for a split equilibrium problem and a fixed-point problem of an infinite family of strict pseudocontractions. J. Nonlinear Var. Anal. 5, 803–829 (2021)
Bauschke, H.H., Combettes, P.L.: A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 26(2), 248–264 (2001)
Boikanyo, O.A.: The viscosity approximation forward-backward splitting method for zeros of the sum of monotone operators. Abstr. Appl. Anal. 2016, Article ID 2371857 (2016)
Brézis, H.: Opérateurs maximaux monotones. Mathematics Studies, vol. 5. North-Holland, Amsterdam (1973)
Byrne, C.: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441–453 (2002)
Ceng, L.C.: Approximation of common solutions of a split inclusion problem and a fixed-point problem. J. Appl. Numer. Optim. 1, 1–12 (2019)
Ceng, L.C., Coroian, I., Qin, X., Yao, J.C.: A general viscosity implicit iterative algorithm for split variational inclusions with hierarchical variational inequality constraints. Fixed Point Theory 20, 469–482 (2019)
Ceng, L.C., Yao, J.C.: Relaxed and hybrid viscosity methods for general system of variational inequalities with split feasibility problem constraint. Fixed Point Theory Appl. 2013, 43 (2013)
Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006)
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)
Chang, S.S., Wang, L., Qin, L.J.: Split equality fixed point problem for quasipseudocontractive mappings with applications. Fixed Point Theory Appl. 2015, 208 (2015)
Chang, S.S., Yao, J.C., Wang, L., Liu, M., Zhao, L.: On the inertial forwardbackward splitting technique for solving a system of inclusion problems in Hilbert spaces. Optimization 70(12), 2511–2525 (2020). https://doi.org/10.1080/02331934.2020.1786567
Chen, P., Huang, J., Zhang, X.: A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration. Inverse Probl. 29(2), Article ID 025011 (2013)
Chuang, C.S.: Strong convergence theorems for the split variational inclusion problem in Hilbert spaces. Fixed Point Theory Appl. 2013, 350 (2013)
Combettes, P.L.: The convex feasibility problem in image recovery. Adv. Imaging Electron Phys. 95, 155–270 (1996)
Cui, H.H., Zhang, H.X., Ceng, L.C.: An inertial Censor–Segal algorithm for split common fixed-point problems. Fixed Point Theory 22, 93–103 (2021)
Dang, Y., Sun, J., Xu, H.: Inertial accelerated algorithms for solving a split feasibility problem. J. Ind. Manag. Optim. 13(3), 1383–1394 (2017)
Dilshad, M., Aljohani, A.F., Akram, M.: Iterative scheme for split variational inclusion and a fixed-point problem of a finite collection of nonexpansive mappings. J. Funct. Spaces 2020, Article ID 3567648 (2020)
Facchinei, F., Pang, J.S.: FiniteDimensional Variational Inequalities and Complementarity Problems. Springer, Berlin (2007)
Gibali, A.: A new split inverse problem and an application to least intensity feasible solutions. Pure Appl. Funct. Anal. 2, 243–258 (2017)
Guan, J.L., Ceng, L.C., Hu, B.: Strong convergence theorem for split monotone variational inclusion with constraints of variational inequalities and fixed point problems. J. Inequal. Appl. 2018, 311 (2018)
Iiduka, H.: Fixed point optimization algorithm and its application to network bandwidth allocation. J. Comput. Appl. Math. 236, 1733–1742 (2012)
Kazmi, K.R., Rizvi, S.H.: An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett. 8, 1113–1124 (2014)
Khan, S.H., Alakoya, T.O., Mewomo, O.T.: Relaxed projection methods with self-adaptive step size for solving variational inequality and fixed point problems for an infinite family of multivalued relatively nonexpansive mappings in Banach spaces. Math. Comput. Appl. 25, 54 (2020)
Konnov, I.: Equilibrium Models and Variational Inequalities, vol. 210. Elsevier, Amsterdam (2007)
López, G., Martín-Márquez, V., Xu, H.K.: Iterative algorithms for the multiple-sets split feasibility problem. In: Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems, pp. 243–279. Medical Physics Publ., Madison (2010)
Luo, C., Ji, H., Li, Y.: Utility-based multi-service bandwidth allocation in the 4G heterogeneous wireless networks. In: IEEE Wireless Communication and Networking Conference, pp. 1–5. IEEE Comput. Soc., Los Alamitos (2009). https://doi.org/10.1109/WCNC.2009.4918017
Maingé, P.E.: A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 47, 1499–1515 (2008)
Moudafi, A.: Split monotone variational inclusions. J. Optim. Theory Appl. 150, 275–283 (2011)
Ogwo, G.N., Alakoya, T.O., Mewomo, O.T.: Iterative algorithm with self-adaptive step size for approximating the common solution of variational inequality and fixed point problems. Optimization (2021). https://doi.org/10.1080/02331934.2021.1981897
Ogwo, G.N., Alakoya, T.O., Mewomo, O.T.: Inertial iterative method with self-adaptive step size for finite family of split monotone variational inclusion and fixed point problems in Banach spaces. Demonstr. Math. (2021). https://doi.org/10.1515/dema-2020-0119
Ogwo, G.N., Izuchukwu, C., Mewomo, O.T.: Inertial methods for finding minimum-norm solutions of the split variational inequality problem beyond monotonicity. Numer. Algorithms 88, 1419–1456 (2021)
Ogwo, G.N., Izuchukwu, C., Shehu, Y., Mewomo, O.T.: Convergence of relaxed inertial subgradient extragradient methods for quasimonotone variational inequality problems. J. Sci. Comput. 90(1), 10 (2022)
Olona, M.A., Alakoya, T.O., Owolabi, A.O.E., Mewomo, O.T.: Inertial shrinking projection algorithm with self-adaptive step size for split generalized equilibrium and fixed point problems for a countable family of nonexpansive multivalued mappings. Demonstr. Math. 54, 47–67 (2021)
Rockafellar, R.T.: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 149, 75–88 (1970)
Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 75, 742–750 (2012)
Shehu, Y., Cholamjiak, P.: Iterative method with inertial for variational inequalities in Hilbert spaces. Calcolo 56(1), 4 (2019)
Taiwo, A., Alakoya, T.O., Mewomo, O.T.: Strong convergence theorem for solving equilibrium problem and fixed point of relatively nonexpansive multivalued mappings in a Banach space with applications. Asian-Eur. J. Math. 14(8), Article ID 2150137 (2021)
Takahashi, S., Takahashi, W., Toyoda, M.T.: Strong convergence theorem for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 147, 27–41 (2010)
Takahashi, W.: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)
Takahashi, W., Xu, H.K., Yao, J.C.: Iterative methods for generalized split feasibility problems in Hilbert spaces. Set-Valued Var. Anal. 23, 205–221 (2015)
Uzor, V.A., Alakoya, T.O., Mewomo, O.T.: Strong convergence of a self-adaptive inertial Tseng's extragradient method for pseudomonotone variational inequalities and fixed point problems. Open Math. (2022). https://doi.org/10.1515/math-2022-0429
Xu, H.K.: Averaged mappings and the gradientprojection algorithm. J. Optim. Theory Appl. 150, 360–378 (2011)
Yao, Y., Shehu, Y., Li, X.H., Dong, Q.L.: A method with inertial extrapolation step for split monotone inclusion problems. Optimization 70(4), 741–761 (2021)
Zhao, J., Liang, Y., Liu, Y., Cho, Y.J.: Split equilibrium, variational inequality and fixed point problems for multivalued mappings in Hilbert spaces. Appl. Comput. Math. 17(3), 271–283 (2018)
Zhao, X., Yao, J.C., Yao, Y.: A proximal algorithm for solving split monotone variational inclusions. UPB Sci. Bull., Ser. A 82(3), 43–52 (2020)
Acknowledgements
The authors sincerely thank the reviewers for their careful reading, constructive comments, and fruitful suggestions that improved the manuscript. The research of the first author is wholly supported by the University of KwaZulu-Natal, Durban, South Africa Postdoctoral Fellowship. He is grateful for the funding and financial support. The second author is supported by the National Research Foundation (NRF) of South Africa Incentive Funding for Rated Researchers (Grant Number 119903). The research of the fourth author is partially supported by the grant MOST 108-2115-M-039-005-MY3. The opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the NRF.
Funding
The first author is funded by the University of KwaZulu-Natal, Durban, South Africa Postdoctoral Fellowship. The third author is supported by the National Research Foundation (NRF) of South Africa Incentive Funding for Rated Researchers (Grant Number 119903). The fourth author is funded by the grant MOST 108-2115-M-039-005-MY3.
Author information
Authors and Affiliations
Contributions
Conceptualization of the article was carried out by TO, OT, and VA, methodology by TO and VA, formal analysis, investigation and writing the original draft preparation by TO and VA, software and validation by OT and JC, writing, reviewing and editing by TO, OT, and JC, and project administration by OT and JC. All authors have accepted responsibility for the entire content of this manuscript and approved its submission.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Alakoya, T.O., Uzor, V.A., Mewomo, O.T. et al. On a system of monotone variational inclusion problems with fixed-point constraint. J Inequal Appl 2022, 47 (2022). https://doi.org/10.1186/s13660-022-02782-4
Received:
Accepted:
Published:
DOI: https://doi.org/10.1186/s13660-022-02782-4
MSC
 65K15
 47J25
 65J15
 90C33
Keywords
 System of monotone variational inclusion problems
 Fixedpoint problem
 Inertial technique
 Selfadaptive step size
 Quasipseudocontractions