Graph convergence with an application for system of variational inclusions and fixed-point problems
Journal of Inequalities and Applications volume 2022, Article number: 112 (2022)
Abstract
This paper aims at proposing an iterative algorithm for finding an element in the intersection of the solution set of a system of variational inclusions and the fixed-point set of a total uniformly L-Lipschitzian mapping. Applying the concepts of graph convergence and the resolvent operator associated with an Ĥ-accretive mapping, a new equivalence relationship between graph convergence and resolvent-operator convergence of a sequence of Ĥ-accretive mappings is established. As an application of the obtained equivalence relationship, the strong convergence of the sequence generated by our proposed iterative algorithm to a common point of the above two sets is proved under suitable hypotheses imposed on the parameters and mappings. At the same time, the notion of \(H(\cdot,\cdot)\)-accretive mapping that appeared in the literature, where \(H(\cdot,\cdot)\) is an (α, β)-generalized accretive mapping, is also investigated and analyzed. We show that the notions of \(H(\cdot,\cdot)\)-accretive and Ĥ-accretive operators are actually the same, and we point out some comments on the results concerning them that are available in the literature.
Introduction
During the last six decades, variational inequality theory, originally introduced for the study of partial differential equations by Hartman and Stampacchia [1], has been recognized as a strong tool in the mathematical study of many nonlinear problems of physics and mechanics, as the complexity of the boundary conditions and the diversity of the constitutive equations lead to variational formulations of inequality type. Many nonlinear problems arising in optimization, operations research, structural analysis, and the engineering sciences can be transformed into variational inequality problems (see, e.g., [2, 3]); consequently, since the appearance of this theory, there has been an increasing interest in extending and generalizing variational inequalities in many different directions using novel and innovative techniques, see, for example, [4, 5] and the references therein. Without doubt, one of the most important and well-known generalizations of variational inequalities is variational inclusions, and thanks to their wide applications in optimization and control, economics and transportation equilibrium, engineering science, etc., the study of different classes of variational inclusion problems continues to attract the interest of many researchers. For more related details, we refer the readers to [6–19] and the references therein.
It is important to emphasize that two central problems in the theory of variational inequalities/inclusions are the existence of solutions and the approximation of solutions by iterative algorithms. This has been one of the main motivations for researchers to develop alternative methods to study iterative algorithms for approximating solutions of various kinds of variational inequality/inclusion problems in the setting of Hilbert and Banach spaces. Among the methods that have appeared in the literature, the resolvent-operator technique is interesting and important, and plays a crucial role in computing approximate solutions of different classes of variational inequality/inclusion problems and their generalizations. More information along with relevant commentaries can be found in [6, 11–14, 17, 18, 20–29] and the references therein.
Monotone operators and accretive mappings have gained impetus, due to their wide range of applicability, in resolving diverse problems emanating from the theory of nonlinear differential equations, integral equations, mathematical economics, optimal control, and so forth. Owing to their importance and their many diverse applications in a huge variety of scientific fields, considerable attention has been paid to the development and generalization of monotone and accretive operators in the framework of different spaces. By the same token, the notion of generalized m-accretive mapping, as an extension of maximal monotone operators and m-accretive mappings, along with a definition of its resolvent operator in a Banach-space setting, was first introduced by Huang and Fang [16] in 2001. Afterwards, many authors showed interest in extending maximal monotone operators and generalized m-accretive mappings, and further generalizations of them have appeared in the literature. For instance, Huang and Fang [15], Ding and Lou [30] and Lee et al. [21], Fang and Huang [12], Xia and Huang [31], Fang and Huang [11], Fang et al. [14], Kazmi and Khan [24] and Peng and Zhu [25], Verma [27, 32], Verma [33], and Lan et al. [17] introduced and studied the notions of η-monotone operators, η-subdifferential operators, H-monotone operators, general H-monotone operators, H-accretive (to avoid confusion, throughout the paper we call them Ĥ-accretive) mappings, \((H,\eta )\)-monotone operators, P-η-accretive (also referred to as \((H,\eta )\)-accretive) mappings, A-monotone operators, \((A,\eta )\)-monotone operators, and \((A,\eta )\)-accretive (also referred to as A-maximal m-relaxed η-accretive) mappings, respectively. Motivated by these advances, in 2008, Sun et al. [34] introduced the class of M-monotone operators as a generalization of maximal monotone and H-monotone operators. With inspiration and motivation from the work of Sun et al. [34], in the same year, Zou and Huang [35] succeeded in introducing the notion of \(H(\cdot,\cdot)\)-accretive mappings in a Banach-space setting as a generalization of generalized m-accretive, H-monotone, Ĥ-accretive, and M-monotone operators.
The notion of graph convergence has attracted many researchers since 1984, after the pioneering work of Attouch [36]. It is worthwhile to stress that the attention of the author in [36] was limited to maximal monotone operators. In later years, considerable research efforts have been made to generalize and study the concept of graph convergence for the generalized monotone operators and generalized accretive mappings available in the literature. For instance, Li and Huang [22] introduced the notion of graph convergence for \(H(\cdot,\cdot)\)-accretive operators in Banach spaces and proved some equivalence theorems between graph convergence and resolvent-operator convergence of a sequence of \(H(\cdot,\cdot)\)-accretive mappings. For a detailed description of the concept of graph convergence for other generalizations of monotone (accretive) operators existing in the literature, we refer the interested reader to [6, 22, 23, 28, 36] and the references therein. Using the properties of graph convergence of \(H(\cdot,\cdot)\)-accretive operators introduced by Li and Huang [22], recently, Tang and Wang [26] constructed a perturbed iterative algorithm for solving a system of ill-posed variational inclusions involving \(H(\cdot,\cdot)\)-accretive operators. At the same time, they proved that, under some suitable conditions, the sequence generated by their proposed iterative algorithm converges strongly to the unique solution of the system of variational inclusions considered in [26].
On the other hand, the theory of fixed points, whose study dates back to the beginning of the 1920s with the pioneering work of the Polish mathematician Stefan Banach [37], is a very attractive subject, which has recently drawn much attention from the communities of physics, engineering, mathematics, etc. The existence of a strong connection between variational inequality problems and fixed-point problems has motivated many investigators to study the problem of finding common elements of the set of solutions of variational inequalities/inclusions and the set of fixed points of given operators. For more details and information, the reader is referred to [4, 38–46] and the references therein.
In addition, after the emergence of the notion of nonexpansive mapping in the 1960s, the number of works dedicated to the study of fixed-point theory for nonexpansive mappings in the setting of different spaces has grown rapidly and has influenced several branches of mathematics. This is mainly because there is a very close relation between the classes of monotone and accretive operators, which arise naturally in the theory of differential equations, and the class of nonexpansive mappings. Due to its many diverse applications in the theory of fixed points, the interest in extending and generalizing the notion of nonexpansive mapping has increased rapidly over the past forty years. One of the first attempts in this direction was carried out by Goebel and Kirk [47] in 1972, who introduced a class of generalized nonexpansive mappings, the so-called asymptotically nonexpansive mappings. In 2005, Sahu [48] succeeded in introducing the concept of nearly asymptotically nonexpansive mapping as a generalization of the notion of asymptotically nonexpansive mapping. One year later, another class of generalized nonexpansive mappings, the so-called total asymptotically nonexpansive mappings, which is essentially more general than the classes of nearly asymptotically nonexpansive mappings and asymptotically nonexpansive mappings, was introduced and studied by Alber et al. [49]. The efforts in this direction have continued and, in a successful attempt by Kiziltunc and Purtas [50], the class of total uniformly L-Lipschitzian mappings was introduced as a unifying framework for the classes of generalized nonexpansive mappings existing in the literature. To find more information about different classes of generalized nonexpansive mappings and relevant commentaries, we refer the reader to [20, 47–51] and the references therein.
Motivated and inspired by the excellent work mentioned above, this paper pursues two purposes. The first objective is to prove the existence of a unique solution for a system of variational inclusions (SVI) involving Ĥ-accretive mappings under suitable mild conditions. With the goal of finding a common point lying in the solution set of the SVI and the set of fixed points of a total uniformly L-Lipschitzian mapping, an iterative algorithm is constructed. Employing the notions of graph convergence and the resolvent operator associated with an Ĥ-accretive mapping, a new equivalence relationship between the graph convergence of a sequence of Ĥ-accretive mappings and the convergence of their associated resolvent operators, respectively, to a given Ĥ-accretive mapping and its associated resolvent operator is established under appropriate conditions. As an application of the obtained equivalence relationship, we prove the strong convergence of the sequence generated by our proposed iterative algorithm to a common point of the set of fixed points of the total uniformly L-Lipschitzian mapping and the set of solutions of the SVI. The second goal of this paper is to investigate and analyze the notion of \(H(\cdot,\cdot)\)-accretive mapping that appeared in [26], where \(H(\cdot,\cdot)\) is an (α, β)-generalized accretive mapping, and to point out some comments concerning it. We prove that, under the assumptions imposed on the \(H(\cdot,\cdot)\)-accretive mapping considered in [26], every \(H(\cdot,\cdot)\)-accretive mapping is actually an Ĥ-accretive mapping and is not a new one. All the results derived by the authors in [26] are reviewed and some remarks regarding them are stated. We show that our results improve and generalize the corresponding results of [26] and recent related works.
Notation and preliminaries
In this section, we briefly present the notation and some preliminary material to be used later in this paper. First, we make clear that all linear spaces used in this paper are assumed to be real. Unless stated otherwise, we denote by X a real Banach space with norm \(\Vert \cdot\Vert \), by \(X^{*}\) its topological dual, and \(\langle \cdot,\cdot\rangle \) will represent the duality pairing of X and \(X^{*}\). We denote by \(S_{X}\) and \(S_{X^{*}}\), respectively, the unit spheres in X and \(X^{*}\). For a given set-valued mapping \(M:X\rightrightarrows X\), the set \(\operatorname{Graph}(M)\) defined by
$$\begin{aligned} \operatorname{Graph}(M):=\bigl\{ (x,u)\in X\times X: u\in M(x)\bigr\} \end{aligned}$$
is called the graph of M.
Let us recall that a normed space X is called strictly convex if \(S_{X}\) is strictly convex, that is, the inequality \(\Vert x+y\Vert <2\) holds for all \(x,y\in S_{X}\) with \(x\neq y\). It is said to be smooth if for every \(x\in S_{X}\) there is exactly one \(x^{*}\in S_{X^{*}}\) such that \(x^{*}(x)=1\). Equivalently, a normed space X is said to be smooth provided the limit \(\lim_{t\rightarrow 0} \frac{\Vert x+ty\Vert -\Vert x\Vert}{t}\) exists for all \(x,y\in S_{X}\). It is known that if a Banach space X is reflexive, then X is strictly convex if and only if \(X^{*}\) is smooth, and X is smooth if and only if \(X^{*}\) is strictly convex.
With each \(x\in X\), we associate the set
$$\begin{aligned} J(x):=\bigl\{ x^{*}\in X^{*}: \bigl\langle x,x^{*}\bigr\rangle = \Vert x \Vert ^{2} \text{ and } \bigl\Vert x^{*} \bigr\Vert = \Vert x \Vert \bigr\} . \end{aligned}$$
The operator \(J:X\rightrightarrows X^{*}\) is called the normalized duality mapping of X. We observe immediately that if \(X=\mathcal{H}\), a Hilbert space, then J is the identity mapping on \(\mathcal{H}\). At the same time, from the Hahn–Banach theorem, it follows that \(J(x)\) is nonempty for each \(x\in X\). In general, the normalized duality mapping is setvalued. However, it is singlevalued in a smooth Banach space.
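As a concrete instance of the last remark (a standard fact from the geometry of Banach spaces, not taken from this paper), the normalized duality mapping of the smooth space \(L^{p}(\Omega )\), \(1<p<\infty \), admits a closed form; the following is a sketch of that formula.

```latex
% Normalized duality mapping of L^p(\Omega), 1 < p < \infty:
% J is single-valued and, for f \neq 0,
J(f) \;=\; \Vert f \Vert _{p}^{\,2-p}\, \vert f \vert ^{p-2} f
\;\in\; L^{q}(\Omega ), \qquad \frac{1}{p}+\frac{1}{q}=1 .
% Check: \langle f, J(f)\rangle
%   = \Vert f\Vert_p^{2-p}\int_\Omega \vert f\vert^{p}\,dx
%   = \Vert f\Vert_p^{2}, \quad \Vert J(f)\Vert_q = \Vert f\Vert_p.
```

In particular, for \(p=2\) this reduces to \(J(f)=f\), matching the Hilbert-space case mentioned above.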
Definition 2.1
For a given real smooth Banach space X, an operator \(T:X\rightarrow X\) is said to be

(i)
accretive if
$$\begin{aligned} \bigl\langle T(x)-T(y),J(x-y)\bigr\rangle \geq 0,\quad \forall x,y\in X; \end{aligned}$$ 
(ii)
strictly accretive if T is accretive and equality holds if and only if \(x=y\);

(iii)
r-strongly accretive if there exists a constant \(r>0\) such that
$$\begin{aligned} \bigl\langle T(x)-T(y),J(x-y)\bigr\rangle \geq r \Vert x-y \Vert ^{2}, \quad \forall x,y\in X; \end{aligned}$$ 
(iv)
α-relaxed accretive if there exists a constant \(\alpha >0\) such that
$$\begin{aligned} \bigl\langle T(x)-T(y),J(x-y)\bigr\rangle \geq -\alpha \Vert x-y \Vert ^{2},\quad \forall x,y\in X; \end{aligned}$$ 
(v)
γ-Lipschitz continuous if there exists a constant \(\gamma >0\) such that
$$\begin{aligned} \bigl\Vert T(x)-T(y) \bigr\Vert \leq \gamma \Vert x-y \Vert ,\quad \forall x,y \in X. \end{aligned}$$
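As a quick illustration, in the Hilbert space \(X=\mathbb{R}\) (where J is the identity, as noted above), these conditions can be sampled numerically. The maps \(T_{1}(x)=2x+\sin x\) (1-strongly accretive, since \(2(x-y)^{2}+(\sin x-\sin y)(x-y)\geq (x-y)^{2}\)) and \(T_{2}(x)=-\frac{1}{2}x\) (\(\frac{1}{2}\)-relaxed accretive but not accretive) are illustrative choices of ours, not taken from the paper.

```python
import itertools
import math
import random

random.seed(0)

def accretivity_bound_holds(T, r, xs):
    """Sample-check <T(x)-T(y), J(x-y)> >= r*|x-y|^2 with J = identity on R."""
    return all((T(x) - T(y)) * (x - y) >= r * (x - y) ** 2 - 1e-9
               for x, y in itertools.product(xs, repeat=2))

xs = [random.uniform(-10.0, 10.0) for _ in range(200)]

T1 = lambda x: 2.0 * x + math.sin(x)   # 1-strongly accretive
T2 = lambda x: -0.5 * x                # 0.5-relaxed accretive (bound -alpha*|x-y|^2)

print(accretivity_bound_holds(T1, 1.0, xs))    # True
print(accretivity_bound_holds(T2, -0.5, xs))   # True  (relaxed-accretive bound)
print(accretivity_bound_holds(T2, 0.0, xs))    # False (T2 is not accretive)
```

Note that the relaxed-accretive check uses the lower bound \(-\alpha \Vert x-y\Vert ^{2}\), in agreement with item (iv) above.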
Definition 2.2
For a given real smooth Banach space X, a set-valued operator \(M:X\rightrightarrows X\) is said to be

(i)
accretive if
$$\begin{aligned} \bigl\langle u-v,J(x-y)\bigr\rangle \geq 0,\quad \forall (x,u),(y,v)\in \operatorname{Graph}(M); \end{aligned}$$ 
(ii)
m-accretive if M is accretive and \((I+\lambda M)(X)=X\) for all \(\lambda >0\), where I denotes the identity mapping on X.
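To make item (ii) concrete: on \(X=\mathbb{R}\) (so J is the identity), the single-valued map \(M(x)=x^{3}\) is accretive, and \(I+\lambda M\) is surjective for every \(\lambda >0\) because \(x\mapsto x+\lambda x^{3}\) is a continuous, strictly increasing bijection of \(\mathbb{R}\); hence M is m-accretive. A minimal numerical sketch (the choice of M and the bisection solver are ours, for illustration only):

```python
def inv_I_plus_lam_M(y, lam, M=lambda t: t ** 3, lo=-1e6, hi=1e6, tol=1e-10):
    """Solve x + lam*M(x) = y by bisection; this works because the map is
    continuous, strictly increasing, and onto R (so I + lam*M is surjective)."""
    f = lambda x: x + lam * M(x) - y
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# (I + lam*M) is invertible for every lam > 0: recover x from y = x + lam*x^3
x = 1.7
for lam in (0.1, 1.0, 25.0):
    y = x + lam * x ** 3
    assert abs(inv_I_plus_lam_M(y, lam) - x) < 1e-6
print("I + lam*M is (numerically) invertible for each sampled lam > 0")
```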
Example 2.3
([52])
Let Ω be a bounded domain in \(\mathbb{R}^{n}\) with smooth boundary ∂Ω, and, for \(1< p<\infty \), let \(L^{p}(\Omega )\) be the space of all Lebesgue measurable functions \(f:\Omega \rightarrow \mathbb{R}\) such that \(\int _{\Omega}\vert f\vert ^{p}\,dx<\infty \). Suppose further that T is a maximal monotone graph in \(\mathbb{R}\). With appropriate domains,

(i)
the operator \(A_{1}u:=-\Delta u+T(u)\) with the homogeneous Neumann boundary condition, and the operator \(A_{2}u:=-\Delta u\), \(-\frac{\partial u}{\partial n}\in T(u)\) on ∂Ω, where Δ denotes the Laplacian, are accretive on \(L^{p}(\Omega )\). Meanwhile,

(ii)
the operator \(A_{3}u:=-\sum_{i} \frac{\partial}{\partial x_{i}} ( \vert \frac{\partial u}{\partial x_{i}} \vert ^{r-1} \frac{\partial u}{\partial x_{i}} )\) is accretive for \(r\geq 1\).
Example 2.4
([52])
The operator −Δ, where Δ denotes the Laplacian, is an maccretive operator.
Remark 2.5
As was pointed out in [52], the interest and importance of the concept of accretive mappings, which was introduced and studied independently by Browder [53] and Kato [54], stems from the fact that many physically significant problems can be modeled in terms of an initial-value problem of the form
$$\begin{aligned} \frac{du}{dt}+A(u)=0,\qquad u(0)=u_{0}, \end{aligned}$$
(2.1)
where A is either an accretive or strongly accretive mapping on an appropriate Banach space. Typical examples of such evolution equations are found in models involving the heat, wave, or Schrödinger equation (see, e.g., Browder [53]). An early fundamental result in the theory of accretive mappings, due to Browder [55], states that the initial-value problem (2.1) is solvable if A is locally Lipschitzian and accretive on X. Utilizing the existence result for equation (2.1), Browder [53] proved that if A is locally Lipschitzian and accretive on X, then A is m-accretive. Obviously, a consequence of this is that the equation \(x+Tx=f\), for a given \(f\in X\), where \(T:=I-A\), has a solution. Martin [56, 57] proved that equation (2.1) is solvable if A is continuous and accretive on X, and using this result, he further established that if A is continuous and accretive, then A is m-accretive. In [52], the author verified that if \(A:X\rightarrow X\) is a Lipschitz and strongly accretive mapping, then A is surjective. Consequently, for each \(f\in X\), the equation \(Ax=f\) has a solution in X.
We note that M is an m-accretive mapping if and only if M is accretive and there is no other accretive mapping whose graph strictly contains \(\operatorname{Graph}(M)\). The m-accretivity is to be understood in terms of inclusion of graphs. If \(M:X\rightrightarrows X\) is an m-accretive mapping, then adding anything to its graph, so as to obtain the graph of a new set-valued mapping, destroys the accretivity; the extended mapping is no longer accretive. In other words, for every pair \((x,u)\in X\times X\backslash \operatorname{Graph}(M)\) there exists \((y,v)\in \operatorname{Graph}(M)\) such that \(\langle u-v,J(x-y)\rangle <0\). Thanks to the abovementioned arguments, a necessary and sufficient condition for a set-valued mapping \(M:X\rightrightarrows X\) to be m-accretive is that the property
$$\begin{aligned} \bigl\langle u-v,J(x-y)\bigr\rangle \geq 0,\quad \forall (y,v)\in \operatorname{Graph}(M), \end{aligned}$$
is equivalent to \(u\in M(x)\). The above characterization of m-accretive mappings provides us with a useful and manageable way of recognizing that an element u belongs to \(M(x)\).
Definition 2.6
Given a smooth Banach space X and a mapping \(\widehat{H}:X\rightarrow X\), the set-valued mapping \(M:X\rightrightarrows X\) is said to be Ĥ-accretive if M is accretive and \((\widehat{H}+\lambda M)(X)=X\) holds for all \(\lambda >0\).
It should be remarked that Fang and Huang [11] were the first to introduce the class of Ĥ-accretive mappings on q-uniformly smooth Banach spaces for some real constant \(q>1\). We recall that, for a given real constant \(q>1\), X is called q-uniformly smooth if there exists a constant \(C>0\) such that \(\rho _{X}(\tau )\leq C\tau ^{q}\), for all \(\tau \in \mathbb{R}^{+}\), where the function \(\rho _{X}:\mathbb{R}^{+}\rightarrow \mathbb{R}^{+}\), the modulus of smoothness of X, is given by
$$\begin{aligned} \rho _{X}(\tau ):=\sup \biggl\{ \frac{ \Vert x+y \Vert + \Vert x-y \Vert }{2}-1: x\in S_{X}, \Vert y \Vert \leq \tau \biggr\} . \end{aligned}$$
At the same time, it should be pointed out that if \(\widehat{H}=I\), then Definition 2.6 reduces to the definition of an m-accretive operator, and if \(X=\mathcal{H}\) and \(\widehat{H}=I\), then Definition 2.6 becomes just the definition of a maximal monotone operator.
The following example shows that, for a given mapping \(\widehat{H}:X\rightarrow X\), an m-accretive mapping need not be Ĥ-accretive.
Example 2.7
Let \(M_{2}(\mathbb{C})\) be the space of all \(2\times 2\) matrices with complex entries. Then, \(M_{2}(\mathbb{C})\) is a Hilbert space together with the inner product \(\langle A,B\rangle :=\operatorname{tr}(AB^{*})\), for all \(A,B\in M_{2}(\mathbb{C})\), where tr denotes the trace, that is, the sum of the diagonal entries, and \(B^{*}\) denotes the Hermitian conjugate (or adjoint) of the matrix B, that is, \(B^{*}=\overline{B^{t}}\), the complex conjugate of the transpose of B. The inner product defined above induces a norm on \(M_{2}(\mathbb{C})\) as follows:
Thereby, the Hilbert space \((M_{2}(\mathbb{C}),\Vert \cdot\Vert )\) is a 2uniformly smooth Banach space. For any
we have
where
Therefore, the set \(\{\mu _{k}:k=1,2,3,4\}\) spans the Hilbert space \(M_{2}(\mathbb{C})\). Letting \(\theta _{k}:=\frac{1}{\sqrt{2}}\mu _{k}\) for \(k=1,2,3,4\), it is easy to see that the set \(\mathfrak{B}\) consisting of the rescaled \(2\times 2\) matrices \(\theta _{k}\) (\(k=1,2,3,4\)) also spans \(M_{2}(\mathbb{C})\). At the same time, \(\Vert \theta _{k}\Vert =1\) for \(k=1,2,3,4\), and \(\langle \theta _{k},\theta _{j}\rangle =0\) for \(1\leq k,j\leq 4\) with \(k\neq j\); that is, the set \(\mathfrak{B}\) is orthonormal. Accordingly, the set \(\mathfrak{B}=\{\theta _{k}:k=1,2,3,4\}\) is an orthonormal basis for the Banach space \(M_{2}(\mathbb{C})\). Let the mappings \(M,\widehat{H}:M_{2}(\mathbb{C})\rightarrow M_{2}(\mathbb{C})\) be defined, respectively, by \(M(A)=\gamma A+\alpha _{1}\theta _{1}+\alpha _{3}\theta _{3}\) and \(\widehat{H}(A)=\gamma A+\alpha _{2}\theta _{2}+\alpha _{4}\theta _{4}\), for all \(A\in M_{2}(\mathbb{C})\), where γ is an arbitrary positive real constant and \(\alpha _{k}\) (\(k=1,2,3,4\)) are arbitrary real constants. Then, for all \(A,B\in M_{2}(\mathbb{C})\), we have
i.e., M is an accretive mapping. By virtue of the fact that for any \(A\in M_{2}(\mathbb{C})\) and \(\lambda >0\), we have
where I is the identity mapping on \(M_{2}(\mathbb{C})\), we conclude that \((I+\lambda M)(M_{2}(\mathbb{C}))=M_{2}(\mathbb{C})\), for every real constant \(\lambda >0\), that is, the mapping \(I+\lambda M\) is surjective for every positive real constant λ. Hence, M is an maccretive mapping. Since for any \(A\in M_{2}(\mathbb{C})\), we have
it follows that
This implies that \({\normalfont{\mathbf{0}}}\notin (\widehat{H}+M)(M_{2}(\mathbb{C}))\), i.e., \(\widehat{H}+M\) is not surjective. Thus, the mapping M is not Ĥ-accretive.
Example 2.8
Let \(H_{2}(\mathbb{C})\) be the set of all \(2\times 2\) Hermitian matrices with complex entries. We recall that a square matrix A is said to be Hermitian (or self-adjoint) if it is equal to its own Hermitian conjugate, i.e., \(A^{*}=\overline{A^{t}}=A\). In the light of this definition, the condition \(A^{*}=A\) implies that the \(2\times 2\) matrix \(A=\left(\begin{array}{cc}a& b\\ c& d\end{array}\right)\) is Hermitian if and only if \(a,d\in \mathbb{R}\) and \(b=\bar{c}\). Therefore,
Then, \(H_{2}(\mathbb{C})\) is a subspace of \(M_{2}(\mathbb{C})\), the space of all \(2\times 2\) matrices with complex entries, with respect to the operations of addition and scalar multiplication defined on \(M_{2}(\mathbb{C})\), when \(M_{2}(\mathbb{C})\) is considered as a real vector space. In other words, \(H_{2}(\mathbb{C})\) together with the mentioned operations is a vector space over \(\mathbb{R}\). We introduce the scalar product on \(H_{2}(\mathbb{C})\) as \(\langle A,B\rangle :=\frac{1}{2}\operatorname{tr}(AB)\), for all \(A,B\in H_{2}(\mathbb{C})\). By an easy check, we observe that \((H_{2}(\mathbb{C}),\langle \cdot,\cdot\rangle )\) is an inner product space. The inner product defined above induces a norm on \(H_{2}(\mathbb{C})\) as follows:
Taking into account that every finitedimensional normed space is a Banach space, it follows that \((H_{2}(\mathbb{C}),\Vert \cdot\Vert )\) is a Hilbert space and so it is a 2uniformly smooth Banach space. Let the mappings \(\widehat{H}_{1},\widehat{H}_{2},M:H_{2}(\mathbb{C})\rightarrow H_{2}( \mathbb{C})\) be defined, respectively, by
for all \(A=\left(\begin{array}{cc}z& x-iy\\ x+iy& w\end{array}\right)\in H_{2}(\mathbb{C})\), where α and β are arbitrary positive real constants, γ and ς are arbitrary nonzero real constants, θ and ξ are arbitrary real constants, and k, l, and p are arbitrary but fixed odd natural numbers.
Then, for any \(A=\left(\begin{array}{cc}z_{1}& x_{1}-iy_{1}\\ x_{1}+iy_{1}& w_{1}\end{array}\right), B=\left(\begin{array}{cc}z_{2}& x_{2}-iy_{2}\\ x_{2}+iy_{2}& w_{2}\end{array}\right)\in H_{2}(\mathbb{C})\), we have
where
Taking into account the fact that k, l, and p are odd natural numbers, it can be easily observed that \(\sum_{j=1}^{l}z_{1}^{l-j}z_{2}^{j-1}\geq 0\), \(\sum_{t=1}^{k}x_{1}^{k-t}x_{2}^{t-1}\geq 0\), \(\sum_{s=1}^{k}y_{1}^{k-s}y_{2}^{s-1}\geq 0\), and \(\sum_{r=1}^{p}w_{1}^{p-r}w_{2}^{r-1}\geq 0\). Since \(\alpha ,\beta >0\), the preceding relation implies that
that is, M is an accretive mapping. Let us define now the functions \(f,g,h,\varphi :\mathbb{R}\rightarrow \mathbb{R}\), for all \(\nu \in \mathbb{R}\), respectively, as
Then, for any \(A=\left(\begin{array}{cc}z& x-iy\\ x+iy& w\end{array}\right)\in H_{2}(\mathbb{C})\), we have
It can be easily seen that \(f(\mathbb{R})=[0,3]\), \(g(\mathbb{R})=[1,+\infty )\), \(h(\mathbb{R})=(-1,1)\), and \(\varphi (\mathbb{R})=(0,4]\). These facts imply that \((\widehat{H}_{1}+M)(H_{2}(\mathbb{C}))\neq H_{2}(\mathbb{C})\), i.e., \(\widehat{H}_{1}+M\) is not surjective, and so M is not an \(\widehat{H}_{1}\)-accretive mapping. Now, let \(\lambda >0\) be an arbitrary real constant and assume that the functions \(\widehat{f},\widehat{g},\widehat{h},\widehat{\varphi}:\mathbb{R} \rightarrow \mathbb{R}\) are defined, respectively, by
Then, for any \(A=\left(\begin{array}{cc}z& x-iy\\ x+iy& w\end{array}\right)\in H_{2}(\mathbb{C})\), we obtain
Relying on the fact that k, l, and p are odd natural numbers, it is easy to see that \(\widehat{f}(\mathbb{R})=\widehat{g}(\mathbb{R})=\widehat{h}(\mathbb{R})= \widehat{\varphi}(\mathbb{R})=\mathbb{R}\). Consequently, \((\widehat{H}_{2}+\lambda M)(H_{2}(\mathbb{C}))=H_{2}(\mathbb{C})\), that is, \(\widehat{H}_{2}+\lambda M\) is surjective. Taking into account the arbitrariness in the choice of \(\lambda >0\), we conclude that M is an \(\widehat{H}_{2}\)-accretive mapping.
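The contrast in Example 2.8 comes down to the ranges of scalar component maps: components of odd-polynomial type, such as \(\nu \mapsto \gamma \nu +\lambda \alpha \nu ^{l}\) with l odd, are onto \(\mathbb{R}\), whereas components with bounded or half-bounded ranges cannot be. The sketch below uses stand-in scalar maps of our own choosing (tanh as a bounded component, a cubic as the odd-polynomial component), not the paper's exact formulas:

```python
import math

# Hypothetical scalar stand-ins: a bounded component such as tanh has range
# (-1, 1) and can never be onto R, while v -> gamma*v + lam*alpha*v**l
# (odd l, gamma, alpha, lam > 0) is continuous, strictly increasing, onto R.
bounded = math.tanh

def onto_component(v, gamma=1.0, lam=0.5, alpha=2.0, l=3):
    return gamma * v + lam * alpha * v ** l

def attains(target, f, lo=-1e6, hi=1e6):
    # For a continuous increasing map, `target` is attained on [lo, hi]
    # iff it lies between the values at the endpoints (intermediate value theorem).
    return f(lo) <= target <= f(hi)

print(attains(5.0, bounded))          # False: 5 lies outside (-1, 1)
print(attains(5.0, onto_component))   # True: some v solves the equation
```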
As was pointed out, if \(\widehat{H}=I\), then the definition of an I-accretive mapping is that of an m-accretive mapping. In fact, the class of Ĥ-accretive mappings has a close relation with that of m-accretive mappings. In the same way as in the proofs of Theorems 2.1 and 2.2 in [11], we obtain the following assertions in a smooth Banach space setting.
Lemma 2.9
Let X be a real smooth Banach space, \(\widehat{H}:X\rightarrow X\) be a strictly accretive mapping, \(M:X\rightrightarrows X\) be an Ĥ-accretive mapping, and let \(x,u\in X\) be given points. If \(\langle u-v,J(x-y)\rangle \geq 0\) holds for all \((y,v)\in \operatorname{Graph}(M)\), then \(u\in M(x)\); that is, M is an m-accretive mapping.
Lemma 2.10
Let X be a real smooth Banach space, \(\widehat{H}:X\rightarrow X\) be a strictly accretive mapping, and \(M:X\rightrightarrows X\) be an Ĥ-accretive mapping. Then, the mapping \((\widehat{H}+\lambda M)^{-1}\) is single-valued for every constant \(\lambda >0\).
It is worth noting that Lemma 2.10 allows us to define the resolvent operator \(R^{\widehat{H}}_{M,\lambda}\) associated with Ĥ, M and an arbitrary real constant \(\lambda >0\) as follows.
Definition 2.11
Let X be a real smooth Banach space, \(\widehat{H}:X\rightarrow X\) be a strictly accretive mapping, and \(M:X\rightrightarrows X\) be an Ĥ-accretive mapping. The resolvent operator \(R^{\widehat{H}}_{M,\lambda}:X\rightarrow X\) associated with Ĥ, M, and an arbitrary positive real constant λ is defined by
$$\begin{aligned} R^{\widehat{H}}_{M,\lambda}(x):=(\widehat{H}+\lambda M)^{-1}(x),\quad \forall x\in X. \end{aligned}$$
By a similar proof as in Theorem 2.3 of [11], we conclude the Lipschitz continuity of the resolvent operator \(R^{\widehat{H}}_{M,\lambda}\) associated with Ĥ, M and \(\lambda >0\) and calculate its Lipschitz constant under some appropriate conditions as follows.
Lemma 2.12
Let X be a real smooth Banach space, \(\widehat{H}:X\rightarrow X\) be an r-strongly accretive mapping, and \(M:X\rightrightarrows X\) be an Ĥ-accretive mapping. Then, the resolvent operator \(R^{\widehat{H}}_{M,\lambda}:X\rightarrow X\) is Lipschitz continuous with constant \(\frac{1}{r}\), i.e.,
$$\begin{aligned} \bigl\Vert R^{\widehat{H}}_{M,\lambda}(x)-R^{\widehat{H}}_{M,\lambda}(y) \bigr\Vert \leq \frac{1}{r} \Vert x-y \Vert ,\quad \forall x,y\in X. \end{aligned}$$
(2.2)
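A one-dimensional sketch of Lemma 2.12: on \(X=\mathbb{R}\) take \(\widehat{H}(x)=2x\) (2-strongly accretive) and \(M(x)=x^{3}\) (Ĥ-accretive, since \(x\mapsto 2x+\lambda x^{3}\) is onto \(\mathbb{R}\)); then the sampled Lipschitz ratio of the resolvent never exceeds \(\frac{1}{r}=\frac{1}{2}\). Both maps are illustrative choices of ours, not from the paper.

```python
import random

def resolvent(y, lam, r=2.0, tol=1e-12):
    """R(y) = (H + lam*M)^{-1}(y) for H(x) = r*x and M(x) = x**3, computed by
    bisection: x -> r*x + lam*x**3 is continuous, strictly increasing, onto R."""
    f = lambda x: r * x + lam * x ** 3 - y
    lo, hi = -1e4, 1e4
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

random.seed(1)
lam = 0.7
pairs = [(random.uniform(-50.0, 50.0), random.uniform(-50.0, 50.0))
         for _ in range(100)]
# Lemma 2.12 predicts |R(u) - R(v)| <= (1/r)|u - v|, here with r = 2.
worst = max(abs(resolvent(u, lam) - resolvent(v, lam)) / abs(u - v)
            for u, v in pairs if u != v)
print(worst <= 0.5 + 1e-6)   # True: sampled Lipschitz ratio is at most 1/r
```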
Remark 2.13
(i) It should be pointed out that Lemmas 2.9, 2.10, and 2.12 improve, respectively, Theorems 2.1–2.3 in [11]. In fact, Theorems 2.1–2.3 in [11] were presented in the framework of a q-uniformly smooth Banach space, whereas our results are given in a smooth Banach space setting.
(ii) There is a small mistake in the statement of Theorem 2.3 of [11]. In fact, in [11, Theorem 2.3], the inequality
must be replaced by (2.2), as we have done in the context of Lemma 2.12.
System of variational inclusions: existence and uniqueness of solution and iterative algorithm
For given real Banach spaces \(X_{1}\) and \(X_{2}\), and mappings \(F:X_{1}\times X_{2}\rightarrow X_{1}\), \(G:X_{1}\times X_{2}\rightarrow X_{2}\), \(\widehat{H}_{1}:X_{1}\rightarrow X_{1}\), \(\widehat{H}_{2}:X_{2}\rightarrow X_{2}\), \(M:X_{1}\rightrightarrows X_{1}\), and \(N:X_{2}\rightrightarrows X_{2}\), we consider the problem of finding \((a,b)\in X_{1}\times X_{2}\) such that
$$\begin{aligned} \textstyle\begin{cases} 0\in F(a,b)+M(a), \\ 0\in G(a,b)+N(b), \end{cases}\displaystyle \end{aligned}$$
(3.1)
which is called a system of variational inclusions \((\operatorname{SVI})\) involving Ĥ-accretive mappings.
It is important to emphasize that by taking different choices of the operators F, G, \(\widehat{H}_{i}\), M, N and the underlying spaces \(X_{i}\) (\(i=1,2\)) in the SVI (3.1), one can easily obtain the problems studied in [12–14, 22, 29, 58] and the references therein.
The following conclusion, which tells us that the SVI (3.1) is equivalent to a fixed-point problem, provides us with a characterization of the solutions of the SVI (3.1).
Lemma 3.1
Let \(X_{1}\) and \(X_{2}\) be two real smooth Banach spaces, and let \(\widehat{H}_{1}:X_{1}\rightarrow X_{1}\) and \(\widehat{H}_{2}:X_{2}\rightarrow X_{2}\) be strictly accretive mappings. Suppose further that \(M:X_{1}\rightrightarrows X_{1}\) is an \(\widehat{H}_{1}\)-accretive operator and \(N:X_{2}\rightrightarrows X_{2}\) is an \(\widehat{H}_{2}\)-accretive operator. Then, the following statements are equivalent:

(i)
\((a,b)\in X_{1}\times X_{2}\) is a solution of the SVI (3.1);

(ii)
For any \(\lambda ,\rho >0\), \((a,b)\) satisfies
$$\begin{aligned} \begin{aligned} \textstyle\begin{cases} a=R^{\widehat{H}_{1}}_{M,\lambda}[\widehat{H}_{1}(a)-\lambda F(a,b)], \\ b=R^{\widehat{H}_{2}}_{N,\rho}[\widehat{H}_{2}(b)-\rho G(a,b)]; \end{cases}\displaystyle \end{aligned} \end{aligned}$$ 
(iii)
For some \(\lambda _{0},\rho _{0}>0\), \((a,b)\) satisfies
$$\begin{aligned} \begin{aligned} \textstyle\begin{cases} a=R^{\widehat{H}_{1}}_{M,\lambda _{0}}[\widehat{H}_{1}(a)-\lambda _{0} F(a,b)], \\ b=R^{\widehat{H}_{2}}_{N,\rho _{0}}[\widehat{H}_{2}(b)-\rho _{0} G(a,b)]. \end{cases}\displaystyle \end{aligned} \end{aligned}$$
Proof
“(i) ⇒ (ii)” Let us first assume that \((a,b)\in X_{1}\times X_{2}\) is a solution of the SVI (3.1). Then, using Definition 2.11, it yields
where \(R^{\widehat{H}_{1}}_{M,\lambda}=(\widehat{H}_{1}+\lambda M)^{-1}\) and \(R^{\widehat{H}_{2}}_{N,\rho}=(\widehat{H}_{2}+\rho N)^{-1}\).
The proof of “(ii) ⇒ (iii)” is obvious.
“(iii) ⇒ (i)” Suppose that for some \(\lambda _{0},\rho _{0}>0\), \((a,b)\) satisfies
Then, in the light of Definition 2.11, we obtain
which implies that
and hence,
i.e., \((a,b)\in X_{1}\times X_{2}\) is a solution of the SVI (3.1). The proof is completed. □
Before proceeding to the main result of this section, we need to recall the following notion that will be used efficiently in its proof.
Definition 3.2
A mapping \(F:X\times X\rightarrow X\) is said to be

(i)
ς-Lipschitz continuous with respect to its first argument if there exists a constant \(\varsigma >0\) such that
$$\begin{aligned} \bigl\Vert F(x_{1},y)-F(x_{2},y) \bigr\Vert \leq \varsigma \Vert x_{1}-x_{2} \Vert ,\quad \forall x_{1},x_{2},y\in X; \end{aligned}$$ 
(ii)
ξ-Lipschitz continuous with respect to its second argument if there exists a constant \(\xi >0\) such that
$$\begin{aligned} \bigl\Vert F(x,y_{1})-F(x,y_{2}) \bigr\Vert \leq \xi \Vert y_{1}-y_{2} \Vert , \quad \forall x,y_{1},y_{2} \in X. \end{aligned}$$
Theorem 3.3
Let \(X_{1}\) and \(X_{2}\) be two real smooth Banach spaces with norms \(\Vert \cdot\Vert _{1}\) and \(\Vert \cdot\Vert _{2}\), respectively, \(\widehat{H}_{1}:X_{1}\rightarrow X_{1}\) be a \(\varrho _{1}\)-strongly accretive and r-Lipschitz continuous mapping, \(\widehat{H}_{2}:X_{2}\rightarrow X_{2}\) be a \(\varrho _{2}\)-strongly accretive and k-Lipschitz continuous mapping, \(M:X_{1}\rightrightarrows X_{1}\) be an \(\widehat{H}_{1}\)-accretive set-valued mapping, and \(N:X_{2}\rightrightarrows X_{2}\) be an \(\widehat{H}_{2}\)-accretive set-valued mapping. Suppose further that the mapping \(F:X_{1}\times X_{2}\rightarrow X_{1}\) is \(\tau _{1}\)-Lipschitz continuous with respect to its first argument and \(\tau _{2}\)-Lipschitz continuous with respect to its second argument, and the mapping \(G:X_{1}\times X_{2}\rightarrow X_{2}\) is \(\theta _{1}\)-Lipschitz continuous with respect to its first argument and \(\theta _{2}\)-Lipschitz continuous with respect to its second argument. If \(r<\varrho _{1}\) and \(k<\varrho _{2}\), then the SVI (3.1) admits a unique solution.
Proof
For any given \(\lambda ,\rho >0\), define \(T_{\lambda}:X_{1}\times X_{2}\rightarrow X_{1}\) and \(S_{\rho}:X_{1}\times X_{2}\rightarrow X_{2}\) for all \((x,y)\in X_{1}\times X_{2}\), by
and
respectively. At the same time, for any given \(\lambda ,\rho >0\), define \(Q_{\lambda ,\rho}:X_{1}\times X_{2}\rightarrow X_{1}\times X_{2}\) by
Making use of (3.2) and Lemma 2.12, it follows that for all \((x_{1},y_{1}),(x_{2},y_{2})\in X_{1}\times X_{2}\),
Taking into account that \(\widehat{H}_{1}\) is r-Lipschitz continuous, and F is \(\tau _{1}\)-Lipschitz continuous with respect to its first argument and \(\tau _{2}\)-Lipschitz continuous with respect to its second argument, we obtain
and
Combining (3.5)–(3.7), we deduce that for all \((x_{1},y_{1}),(x_{2},y_{2})\in X_{1}\times X_{2}\),
By arguments analogous to the previous inequalities (3.5)–(3.8), employing the assumptions, for all \((x_{1},y_{1}),(x_{2},y_{2})\in X_{1}\times X_{2}\), we obtain
Define the function \(\Vert \cdot\Vert _{*}\) on \(X_{1}\times X_{2}\) by
It can be easily seen that \((X_{1}\times X_{2},\Vert \cdot\Vert _{*})\) is a Banach space. Then, applying (3.4), (3.8), and (3.9) yields
where
Since \(r<\varrho _{1}\) and \(k<\varrho _{2}\), we can choose \(\lambda _{0},\rho _{0}>0\) small enough such that
From (3.12) it follows that
and so \(Q_{\lambda _{0},\rho _{0}}\) is a contraction mapping. Then, the Banach Fixed-Point Theorem ensures the existence of a unique \((a,b)\in X_{1}\times X_{2}\) such that \(Q_{\lambda _{0},\rho _{0}}(a,b)=(a,b)\). Thereby, making use of (3.2)–(3.4), we conclude that for some \(\lambda _{0},\rho _{0}>0\),
Accordingly, Lemma 3.1 guarantees that \((a,b)\in X_{1}\times X_{2}\) is the unique solution of the SVI (3.1). This completes the proof. □
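The contraction argument above can be illustrated with a minimal numerical sketch. The scalar mapping Q below is a hypothetical stand-in for \(Q_{\lambda _{0},\rho _{0}}\); only the mechanism taken from the proof is shown, namely that Picard iterates of a contraction converge to its unique fixed point.

```python
# Minimal illustration of the Banach fixed-point argument used in the proof:
# a contraction Q with constant theta < 1 has a unique fixed point, and the
# Picard iterates x_{i+1} = Q(x_i) converge to it geometrically.  The mapping
# below is an illustrative scalar stand-in, not the operator from the paper.

def picard(Q, x0, tol=1e-12, max_iter=10_000):
    """Iterate x_{i+1} = Q(x_i) until successive iterates are within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = Q(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Toy contraction with constant 1/2; its unique fixed point solves x = x/2 + 1.
Q = lambda x: 0.5 * x + 1.0
fixed_point = picard(Q, x0=100.0)
```

Note that the contraction constant \(\vartheta _{\lambda _{0},\rho _{0}}\in (0,1)\) plays exactly the role of the constant \(1/2\) in this toy model.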
Given a real normed space X with a norm \(\Vert \cdot\Vert \), we recall that a nonlinear mapping \(T:X\rightarrow X\) is called nonexpansive if \(\Vert T(x)-T(y)\Vert \leq \Vert x-y\Vert \) for all \(x,y\in X\). It is well known that the class of nonexpansive mappings has a deep and close relation with the classes of monotone and accretive operators that arise naturally in the theory of differential equations. On the other hand, fixed-point theory is an attractive and interesting subject with a large number of applications in various fields of mathematics and other branches of science. At the same time, the study of nonexpansive mappings is a very interesting research area in fixed-point theory. These facts have motivated many researchers to extend the notion of nonexpansive mapping, and several interesting generalized nonexpansive mappings in the framework of different spaces have appeared in the literature. For example, two classes of generalized nonexpansive mappings are recalled in the next definition.
Definition 3.4
A nonlinear mapping \(T:X\rightarrow X\) is said to be

(i)
L-Lipschitzian if there exists a constant \(L>0\) such that
$$\begin{aligned} \bigl\Vert T(x)-T(y) \bigr\Vert \leq L \Vert x-y \Vert ,\quad \forall x,y\in X; \end{aligned}$$ 
(ii)
uniformly L-Lipschitzian if there exists a constant \(L>0\) such that for each \(n\in \mathbb{N}\),
$$\begin{aligned} \bigl\Vert T^{n}(x)-T^{n}(y) \bigr\Vert \leq L \Vert x-y \Vert ,\quad \forall x,y \in X. \end{aligned}$$
It is significant to emphasize that every uniformly L-Lipschitzian mapping is L-Lipschitzian, but the converse need not be true. The following example illustrates that the class of L-Lipschitzian mappings properly contains the class of uniformly L-Lipschitzian mappings.
Example 3.5
Consider \(X=\mathbb{R}\) with the Euclidean norm \(\Vert \cdot\Vert =|\cdot|\) and let the self-mapping T of X be defined by \(T(x)=kx\) for all \(x\in X\), where \(k>1\) is an arbitrary real constant. Taking into account that for all \(x,y\in X\), \(|T(x)-T(y)|=k|x-y|\), it follows that T is a k-Lipschitzian mapping. However, thanks to the fact that \(k>1\), for any distinct \(x,y\in X\) and \(n\in \mathbb{N}\backslash \{1\}\), we obtain \(|T^{n}(x)-T^{n}(y)|=k^{n}|x-y|>k|x-y|\). This fact ensures that T is not a uniformly k-Lipschitzian mapping.
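A quick numerical check of this example, with the illustrative choice \(k=2\), confirms that the one-step Lipschitz ratio stays at k while the n-step ratio grows like \(k^{n}\), so no single constant works for all iterates.

```python
# Numerical check of Example 3.5: T(x) = k*x with k > 1 is k-Lipschitzian,
# but |T^n(x) - T^n(y)| = k^n |x - y| grows without bound in n, so no single
# constant L can bound all iterates T^n simultaneously.  k = 2 is illustrative.

k = 2.0
T = lambda x: k * x

def iterate(T, x, n):
    """Apply T to x exactly n times."""
    for _ in range(n):
        x = T(x)
    return x

x, y = 1.0, 3.0
# One application: the k-Lipschitz bound holds with equality.
ratio1 = abs(T(x) - T(y)) / abs(x - y)
# Five applications: the ratio is k^5, already far above k.
ratio5 = abs(iterate(T, x, 5) - iterate(T, y, 5)) / abs(x - y)
```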
The notion of asymptotically nonexpansive mapping, as a generalization of the concept of nonexpansive mapping, was first introduced and studied by Goebel and Kirk [47].
Definition 3.6
([47])
A nonlinear mapping \(T:X\rightarrow X\) is said to be asymptotically nonexpansive if there exists a sequence \(\{a_{n}\}\subset (0,+\infty )\) with \(\lim_{n\rightarrow \infty}a_{n}=0\) such that for each \(n\in \mathbb{N}\),
Equivalently, we say that the mapping T is asymptotically nonexpansive if there exists a sequence \(\{k_{n}\}\subset [1,+\infty )\) with \(\lim_{n\rightarrow \infty}k_{n}=1\) such that for each \(n\in \mathbb{N}\),
In recent decades, successful attempts in this direction have continued and several other interesting generalizations of nonexpansive and asymptotically nonexpansive mappings have been presented. For instance, in 2006, Alber et al. [49] introduced a class of generalized nonexpansive mappings, the so-called total asymptotically nonexpansive mappings, which is more general than the classes of asymptotically nonexpansive mappings and nearly asymptotically nonexpansive mappings.
Definition 3.7
([49])
A nonlinear mapping \(T:X\rightarrow X\) is said to be total asymptotically nonexpansive (also referred to as \((\{a_{n}\},\{b_{n}\},\phi )\)-total asymptotically nonexpansive) if there exist nonnegative real sequences \(\{a_{n}\}\) and \(\{b_{n}\}\) with \(a_{n},b_{n}\rightarrow 0\) as \(n\rightarrow \infty \) and a strictly increasing continuous function \(\phi :\mathbb{R}^{+}\rightarrow \mathbb{R}^{+}\) with \(\phi (0)=0\) such that for all \(x,y\in X\),
Using a modified Mann iteration process, they also studied the iterative approximation of the fixed point of total asymptotically nonexpansive mappings under some appropriate conditions. Note, in particular, that every asymptotically nonexpansive mapping is total asymptotically nonexpansive with \(b_{n}=0\) (or equivalently \(b_{n}=0\) and \(a_{n}=k_{n}-1\)) for all \(n\in \mathbb{N}\) and \(\phi (t)=t\) for all \(t\geq 0\), but the converse need not be true. In other words, the class of total asymptotically nonexpansive mappings is more general than the class of asymptotically nonexpansive mappings. This fact is shown in the next example.
Example 3.8
For \(1\leq p<\infty \), consider
the classical space consisting of all p-power summable sequences, with the p-norm \(\Vert \cdot\Vert _{p}\) defined on it by
Furthermore, let B denote the closed unit ball in the Banach space \(l^{p}\) and consider \(X:=\mathbb{R}\times B\) with the norm \(\Vert (u,x)\Vert _{X}=|u|+\Vert x\Vert _{p}\), and define the self-mapping T of X by
where
\(\gamma \in (0,1)\) and \(\beta >1\) are arbitrary real constants, m is an arbitrary but fixed odd natural number, \(\lambda \geq m+1\) is an arbitrary but fixed natural number, and \(k_{i},s_{i}\in \mathbb{N}\backslash \{1\}\) (\(i=1,2,\dots ,\frac{m+1}{2}\)) are arbitrary constants. Indeed, the element \(x^{\dagger}\) of \(l^{p}\) can be written as \(x^{\dagger}=\{x_{n}^{\dagger}\}_{n=1}^{\infty}\), where \(x_{i}^{\dagger}=0\) for all \(1\leq i\leq \lambda \), \(x_{\lambda +2i}^{\dagger}=0\) for all \(i\in \mathbb{N}\),
and \(x_{\lambda +2m+j}^{\dagger}=\gamma x_{m+\frac{j+1}{2}}\) for all \(j\in \{2l+1:l\in \mathbb{N}\}\). Taking into account that the mapping T is not continuous at the points \((\beta ,x)\) for all \(x\in B\), we conclude that T is not Lipschitzian and so it is not an asymptotically nonexpansive mapping. For all \((u,x),(v,y)\in [0,\beta ]\times B\) and \((u,x),(v,y)\in ((-\infty ,0)\cup (\beta ,+\infty ) )\times B\), one can show that
The fact that \(x,y\in B\) implies that \(0\leq x_{2i-1}^{k_{i}-j}, y_{2i-1}^{j-1}\leq 1\) for each \(j\in \{1,2,\dots ,k_{i}\}\) and \(0\leq x_{2i}^{s_{i}-r}, y_{2i}^{r-1}\leq 1\) for each \(r\in \{1,2,\dots ,s_{i}\}\) and \(i\in \{1,2,\dots ,\frac{m+1}{2}\}\). Relying on these facts, we conclude that \(0\leq \sum_{j=1}^{k_{i}}x_{2i-1}^{k_{i}-j}y_{2i-1}^{j-1} \leq k_{i}\) and \(0\leq \sum_{r=1}^{s_{i}}x_{2i}^{s_{i}-r}y_{2i}^{r-1} \leq s_{i}\) for each \(i\in \{1,2,\dots ,\frac{m+1}{2}\}\). Thereby, making use of (3.14) it follows that for all \((u,x),(v,y)\in [0,\beta ]\times B\) and \((u,x),(v,y)\in ((-\infty ,0)\cup (\beta ,+\infty ) )\times B\),
If \(u\in [0,\beta ]\) and \(v\in (-\infty ,0)\cup (\beta ,+\infty )\), then in a similar fashion to the preceding analysis, one can prove that for all \(x,y\in B\),
Now, applying (3.15) and (3.16), for all \((u,x),(v,y)\in X\), we obtain
For all \(n\geq 2\) and \((u,x)\in X\), we have
Then, by an argument analogous to those of (3.14) and (3.15), for all \((u,x),(v,y)\in X\) and \(n\geq 2\), one can deduce that
Employing (3.17) and (3.18) and by virtue of the fact that for each \(i\in \{1,2,\dots ,\frac{m+1}{2}\}\), \(0\leq \sum_{j=1}^{k_{i}}x_{2i-1}^{k_{i}-j}y_{2i-1}^{j-1} \leq k_{i}\) and \(0\leq \sum_{r=1}^{s_{i}}x_{2i}^{s_{i}-r}y_{2i}^{r-1} \leq s_{i}\), we conclude that for all \((u,x),(v,y)\in X\) and \(n\in \mathbb{N}\),
where \(\xi =\max \{k_{i},s_{i}:i=1,2,\dots ,\frac{m+1}{2}\}\). Taking \(a_{n}=\gamma ^{n}\) and \(b_{n}=\frac{1}{\beta ^{n}}\) for all \(n\in \mathbb{N}\), the fact that \(0<\gamma <1<\beta \) implies that \(a_{n},b_{n}\rightarrow 0\) as \(n\rightarrow \infty \). Now, define the function \(\phi :[0,+\infty )\rightarrow [0,+\infty )\) by \(\phi (t)=\xi t\) for all \(t\in [0,+\infty )\). Then, for all \((u,x),(v,y)\in X\) and \(n\in \mathbb{N}\), we obtain
that is, T is a \((\{\gamma ^{n}\},\{\frac{1}{\beta ^{n}}\},\phi )\)-total asymptotically nonexpansive mapping.
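The defining inequality of Definition 3.7 can also be tested numerically on sampled points. The mapping, sequences, and function ϕ below are illustrative choices (a nonexpansive map, which trivially satisfies the inequality), not the data of Example 3.8.

```python
# Sketch of a numerical sanity check for Definition 3.7 on sampled points:
# verify ||T^n(x) - T^n(y)|| <= ||x - y|| + a_n * phi(||x - y||) + b_n.
# The mapping T, the sequences a, b, and phi below are illustrative choices.

def is_total_asymptotically_nonexpansive(T, a, b, phi, points, n_max=20):
    """Check the defining inequality on all pairs of sampled points."""
    for n in range(1, n_max + 1):
        for x in points:
            for y in points:
                Tx, Ty = x, y
                for _ in range(n):          # compute T^n(x) and T^n(y)
                    Tx, Ty = T(Tx), T(Ty)
                d = abs(x - y)
                if abs(Tx - Ty) > d + a(n) * phi(d) + b(n) + 1e-12:
                    return False
    return True

# T(x) = x/2 is nonexpansive, hence total asymptotically nonexpansive with
# any nonnegative null sequences and any admissible phi.
ok = is_total_asymptotically_nonexpansive(
    T=lambda x: x / 2,
    a=lambda n: 1 / n, b=lambda n: 1 / 2**n,
    phi=lambda t: t,
    points=[-2.0, -0.5, 0.0, 1.0, 3.0],
)
```

Such a check can only refute the property on the sampled points; it cannot prove it on all of X.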
With the aim of presenting a unifying framework for generalized nonexpansive mappings available in the literature and verifying a general convergence theorem applicable to all these classes of nonlinear mappings, very recently, Kiziltunc and Purtas [50] introduced a new class of generalized nonexpansive mappings as follows.
Definition 3.9
([50])
A nonlinear mapping \(T:X\rightarrow X\) is said to be total uniformly L-Lipschitzian (or \((\{a_{n}\},\{b_{n}\},\phi )\)-total uniformly L-Lipschitzian) if there exist a constant \(L>0\), nonnegative real sequences \(\{a_{n}\}\) and \(\{b_{n}\}\) with \(a_{n},b_{n}\rightarrow 0\) as \(n\rightarrow \infty \) and a strictly increasing continuous function \(\phi :\mathbb{R}^{+}\rightarrow \mathbb{R}^{+}\) with \(\phi (0)=0\) such that for each \(n\in \mathbb{N}\),
It is essential to note that, for given nonnegative real sequences \(\{a_{n}\}\) and \(\{b_{n}\}\) and a strictly increasing continuous function \(\phi :\mathbb{R}^{+}\rightarrow \mathbb{R}^{+}\), an \((\{a_{n}\},\{b_{n}\},\phi )\)-total asymptotically nonexpansive mapping is \((\{a_{n}\},\{b_{n}\},\phi )\)-total uniformly L-Lipschitzian with \(L=1\), but the converse may not be true. The following example illustrates that the class of total uniformly L-Lipschitzian mappings properly contains the class of total asymptotically nonexpansive mappings.
Example 3.10
Let \(X=\mathbb{R}\) be endowed with the Euclidean norm \(\Vert \cdot\Vert =|\cdot|\) and let the self-mapping T of X be defined by
where \(\alpha >0\) and \(\beta >\frac{\alpha +\sqrt{\alpha ^{2}+4}}{2}\) are arbitrary real constants such that \(\alpha \beta >1\). Since the mapping T is discontinuous at the points \(x=0,\alpha ,\frac{1}{\beta}\), it follows that T is not Lipschitzian and so it is not an asymptotically nonexpansive mapping. Take \(a_{n}=\frac{\gamma}{n}\) and \(b_{n}=\frac{\alpha}{k^{n}}\) for each \(n\in \mathbb{N}\), where \(\gamma >0\) and \(k>1\) are arbitrary constants such that \(k\neq \alpha \beta \). Let us now define the function \(\phi :\mathbb{R}^{+}\rightarrow \mathbb{R}^{+}\) by \(\phi (t)=\theta t^{m}\) for all \(t\in \mathbb{R}^{+}\), where \(m\in \mathbb{N}\) and \(\theta \in (0, \frac{k^{m}(\beta ^{2}-\alpha \beta -1)}{\beta \gamma (k-1)^{m}\alpha ^{m}} )\) are arbitrary constants. Selecting \(x=\alpha \) and \(y=\frac{\alpha}{k}\), we have \(T(x)=\frac{1}{\beta}\) and \(T(y)=\beta \). With the help of the fact that \(0<\theta < \frac{k^{m}(\beta ^{2}-\alpha \beta -1)}{\beta \gamma (k-1)^{m}\alpha ^{m}}\), it follows that
which implies that T is not a \((\{\frac{\gamma}{n}\},\{\frac{\alpha}{k^{n}}\},\phi )\)-total asymptotically nonexpansive mapping. However, for all \(x,y\in X\), we obtain
and for all \(n\geq 2\),
due to the fact that \(T^{n}(z)=\frac{1}{\beta}\) for all \(z\in X\) and \(n\geq 2\). Making use of (3.19) and (3.20), we deduce that T is a \((\{\frac{\gamma}{n}\},\{\frac{\alpha}{k^{n}}\},\phi )\)-total uniformly \(\frac{k\beta}{\alpha}\)-Lipschitzian mapping.
Lemma 3.11
Let \(X_{1}\) and \(X_{2}\) be two real Banach spaces with norms \(\Vert \cdot\Vert _{1}\) and \(\Vert \cdot\Vert _{2}\), respectively, and let \(S_{1}:X_{1}\rightarrow X_{1}\) and \(S_{2}:X_{2}\rightarrow X_{2}\) be \((\{a_{i}\}_{i=1}^{\infty},\{b_{i}\}_{i=1}^{\infty},\phi _{1})\)-total uniformly \(L_{1}\)-Lipschitzian and \((\{c_{i}\}_{i=1}^{\infty},\{d_{i}\}_{i=1}^{\infty},\phi _{2})\)-total uniformly \(L_{2}\)-Lipschitzian mappings, respectively. Moreover, let Q and ϕ be self-mappings of \(X_{1}\times X_{2}\) and \(\mathbb{R}^{+}\), respectively, defined by
and
Then, Q is an (\(\{a_{i}+c_{i}\}_{i=1}^{\infty}\), \(\{b_{i}+d_{i}\}_{i=1}^{\infty}\), ϕ)-total uniformly \(\max \{L_{1},L_{2}\}\)-Lipschitzian mapping.
Proof
In view of the fact that for each \(j\in \{1,2\}\), \(\phi _{j}:\mathbb{R}^{+}\rightarrow \mathbb{R}^{+}\) is a strictly increasing function, for all \((x_{1},x_{2}),(y_{1},y_{2})\in X_{1}\times X_{2}\) and \(i\in \mathbb{N}\), we obtain
where \(\Vert \cdot\Vert _{*}\) is a norm on \(X_{1}\times X_{2}\) defined by (3.10). This fact ensures that Q is an (\(\{a_{i}+c_{i}\}_{i=1}^{\infty}\), \(\{b_{i}+d_{i}\}_{i=1}^{\infty}\), ϕ)-total uniformly \(\max \{L_{1},L_{2}\}\)-Lipschitzian mapping. The proof is completed. □
Assume that \(X_{1}\) and \(X_{2}\) are two real smooth Banach spaces with norms \(\Vert \cdot\Vert _{1}\) and \(\Vert \cdot\Vert _{2}\), respectively, \(S_{1}:X_{1}\rightarrow X_{1}\) is an \((\{a_{i}\}_{i=1}^{\infty},\{b_{i}\}_{i=1}^{\infty},\phi _{1})\)-total uniformly \(L_{1}\)-Lipschitzian mapping and \(S_{2}:X_{2}\rightarrow X_{2}\) is a \((\{c_{i}\}_{i=1}^{\infty},\{d_{i}\}_{i=1}^{\infty},\phi _{2})\)-total uniformly \(L_{2}\)-Lipschitzian mapping. Furthermore, let Q be a self-mapping of \(X_{1}\times X_{2}\) defined by (3.21). Denote by \(\operatorname{Fix}(S_{j})\) (\(j=1,2\)) and \(\operatorname{Fix}(Q)\) the sets of all the fixed points of \(S_{j}\) (\(j=1,2\)) and Q, respectively. At the same time, denote by \(\operatorname{SVI}(X_{j},\widehat{H}_{j},M,N,F,G:j=1,2)\) the set of all the solutions of the SVI (3.1), where for \(j=1,2\), the nonlinear mappings \(\widehat{H}_{j}:X_{j}\rightarrow X_{j}\) are strictly accretive, and the set-valued mappings \(M:X_{1}\rightrightarrows X_{1}\) and \(N:X_{2}\rightrightarrows X_{2}\) are \(\widehat{H}_{1}\)-accretive and \(\widehat{H}_{2}\)-accretive, respectively. Using (3.21), we infer that for any \((x_{1},x_{2})\in X_{1}\times X_{2}\), \((x_{1},x_{2})\in \operatorname{Fix}(Q)\) if and only if for \(j=1,2\), \(x_{j}\in \operatorname{Fix}(S_{j})\), that is, \(\operatorname{Fix}(Q)=\operatorname{Fix}(S_{1},S_{2})=\operatorname{Fix}(S_{1}) \times \operatorname{Fix}(S_{2})\). If \((a,b)\in \operatorname{Fix}(Q)\cap \operatorname{SVI}(X_{j},\widehat{H}_{j},M,N,F,G:j=1,2)\), then with the help of Lemma 3.1 it can be easily observed that for each \(i\in \mathbb{N}\),
Using the fixed-point formulation (3.23), we are now able to construct the following iterative algorithm for finding a common element of the two sets \(\operatorname{SVI}(X_{j},\widehat{H}_{j},M,N,F,G:j=1,2)\) and \(\operatorname{Fix}(Q)=\operatorname{Fix}(S_{1},S_{2})\).
Algorithm 3.12
Assume that \(X_{j}\) (\(j=1,2\)), F and G are the same as in the SVI (3.1). For \(i\geq 0\) and \(j=1,2\), let \(\widehat{H}_{i,j}:X_{j}\rightarrow X_{j}\) be strictly accretive, \(M_{i}:X_{1}\rightrightarrows X_{1}\) be an \(\widehat{H}_{i,1}\)-accretive set-valued mapping and \(N_{i}:X_{2}\rightrightarrows X_{2}\) be an \(\widehat{H}_{i,2}\)-accretive set-valued mapping. Suppose further that for \(j=1,2\), \(S_{j}:X_{j}\rightarrow X_{j}\) is a \((\{c_{i,j}\}_{i=0}^{\infty},\{d_{i,j}\}_{i=0}^{\infty},\phi _{j})\)-total uniformly \(L_{j}\)-Lipschitzian mapping. For an arbitrarily chosen initial point \((a_{0},b_{0})\in X_{1}\times X_{2}\), compute the iterative sequence \(\{(a_{i},b_{i})\}_{i=0}^{\infty}\) in \(X_{1}\times X_{2}\) by the iterative schemes
where \(i\in \mathbb{N}\cup \{0\}\); \(\lambda _{i},\rho _{i}>0\) are real constants; and \(\{\alpha _{i}\}_{i=0}^{\infty}\) is a sequence in the interval \([0,1]\) such that \(\limsup_{i}\alpha _{i}<1\).
If \(S_{j}\equiv I_{j}\) (\(j=1,2\)), the identity mapping on \(X_{j}\), then Algorithm 3.12 reduces to the following algorithm.
Algorithm 3.13
Let \(X_{j}\), \(\widehat{H}_{i,j}\), \(M_{i}\), \(N_{i}\), F, G (\(j=1,2\); \(i\in \mathbb{N}\cup \{0\}\)) be the same as in Algorithm 3.12. For any given \((a_{0},b_{0})\in X_{1}\times X_{2}\), define the iterative sequence \(\{(a_{i},b_{i})\}_{i=0}^{\infty}\) in \(X_{1}\times X_{2}\) by the iterative processes
where \(i\in \mathbb{N}\cup \{0\}\); the constants \(\lambda _{i},\rho _{i}>0\) and the sequence \(\{\alpha _{i}\}_{i=0}^{\infty}\) are the same as in Algorithm 3.12.
If \(\widehat{H}_{i,j}=\widehat{H}_{j}\), \(\lambda _{i}=\lambda \) and \(\rho _{i}=\rho \) for each \(i\geq 0\) and \(j\in \{1,2\}\), then Algorithm 3.13 collapses to the following algorithm.
Algorithm 3.14
Suppose that \(X_{j}\) (\(j=1,2\)), F and G are the same as in Algorithm 3.12. For \(j=1,2\), let \(\widehat{H}_{j}:X_{j}\rightarrow X_{j}\) be strictly accretive mappings and, for each \(i\geq 0\), let \(M_{i}:X_{1}\rightrightarrows X_{1}\) be an \(\widehat{H}_{1}\)-accretive set-valued mapping and \(N_{i}:X_{2}\rightrightarrows X_{2}\) be an \(\widehat{H}_{2}\)-accretive set-valued mapping. For any given \((a_{0},b_{0})\in X_{1}\times X_{2}\), compute the iterative sequence \(\{(a_{i},b_{i})\}_{i=0}^{\infty}\) in \(X_{1}\times X_{2}\) by the iterative schemes
where \(i\in \mathbb{N}\cup \{0\}\); \(\lambda ,\rho >0\) are two constants; and the sequence \(\{\alpha _{i}\}_{i=0}^{\infty}\) is the same as in Algorithm 3.12.
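Since the display formulas of Algorithms 3.12–3.14 are not reproduced above, the following is only a schematic scalar model of a relaxed resolvent iteration of the same general shape. All data are illustrative assumptions: \(X_{1}=X_{2}=\mathbb{R}\), \(\widehat{H}_{j}\) the identity, \(M(x)=x\), \(N(y)=y\), \(F(a,b)=0.3b\), \(G(a,b)=0.3a\), \(S_{j}\) the identity, constant parameters \(\lambda _{i}=\lambda \), \(\rho _{i}=\rho \), \(\alpha _{i}=\alpha \), and the system is assumed to take the standard inclusion form \(0\in F(a,b)+M(a)\), \(0\in G(a,b)+N(b)\), whose unique solution is \((0,0)\).

```python
# Schematic scalar model of a relaxed resolvent iteration in the spirit of
# Algorithms 3.12-3.14 (all mappings and constants below are illustrative
# assumptions, not the data of the paper).

lam = rho = 1.0          # resolvent parameters lambda, rho
alpha = 0.5              # relaxation parameter alpha_i, held constant here

def R_M(z):
    # resolvent (H1 + lam*M)^{-1} for H1 = I, M(x) = x: solves x + lam*x = z
    return z / (1.0 + lam)

def R_N(z):
    # resolvent (H2 + rho*N)^{-1} for H2 = I, N(y) = y: solves y + rho*y = z
    return z / (1.0 + rho)

a, b = 5.0, -3.0         # arbitrary starting point (a_0, b_0)
for _ in range(200):
    # relaxed resolvent step: convex combination of the old iterate and the
    # resolvent applied to H(x) - (parameter) * coupling term
    a_new = alpha * a + (1 - alpha) * R_M(a - lam * 0.3 * b)
    b_new = alpha * b + (1 - alpha) * R_N(b - rho * 0.3 * a)
    a, b = a_new, b_new
```

Under these toy choices the iteration matrix has spectral radius below one, so the iterates converge to the solution \((0,0)\), mirroring the strong convergence established in Theorem 4.7.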
Graph convergence and an application
Before turning to the main results of this paper, we need to recall the following definition.
Definition 4.1
Given set-valued mappings \(M_{i}\), \(M:X\rightrightarrows X\) (\(i\geq 0\)), the sequence \(\{M_{i}\}_{i=0}^{\infty}\) is said to be graph-convergent to M, denoted by \(M_{i}\stackrel{G}{\longrightarrow}M\), if for every point \((x,u)\in \operatorname{Graph}(M)\), there exists a sequence of points \((x_{i},u_{i})\in \operatorname{Graph}(M_{i})\) such that \(x_{i}\rightarrow x\) and \(u_{i}\rightarrow u\) as \(i\rightarrow \infty \).
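A scalar illustration of this definition, with illustrative single-valued mappings viewed as set-valued maps with singleton values: for \(M_{i}(x)=x+\frac{1}{i+1}\) and \(M(x)=x\), every point of \(\operatorname{Graph}(M)\) is the limit of points of \(\operatorname{Graph}(M_{i})\), so \(M_{i}\stackrel{G}{\longrightarrow}M\).

```python
# Scalar illustration of graph convergence (Definition 4.1) with illustrative
# single-valued mappings: M_i(x) = x + 1/(i+1) and M(x) = x.  For the point
# (x, M(x)) in Graph(M), choose (x_i, u_i) = (x, M_i(x)) in Graph(M_i); then
# x_i -> x trivially and u_i -> M(x) since 1/(i+1) -> 0.

def M_i(i, x):
    return x + 1.0 / (i + 1)

def M(x):
    return x

x = 2.0                                   # a point (x, M(x)) of Graph(M)
gaps = [abs(M_i(i, x) - M(x)) for i in range(1, 10_001)]
```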
We now establish a new equivalence relationship between the graph convergence of a sequence of Ĥ-accretive mappings and their associated resolvent operators, respectively, to a given Ĥ-accretive mapping and its associated resolvent operator under some appropriate conditions.
Theorem 4.2
Let X be a real smooth Banach space, and \(\widehat{H},\widehat{H}_{i}:X\rightarrow X\) (\(i\geq 0\)) be ϱ-strongly accretive and \(\varrho _{i}\)-strongly accretive mappings, respectively, such that for each \(i\geq 0\) the mapping \(\widehat{H}_{i}\) is \(r_{i}\)-Lipschitz continuous. Suppose that M, \(M_{i}:X\rightrightarrows X\) (\(i\geq 0\)) are Ĥ-accretive and \(\widehat{H}_{i}\)-accretive mappings, respectively. Let the sequence \(\{r_{i}\}_{i=0}^{\infty}\) be bounded and \(\lim_{i\rightarrow \infty}\widehat{H}_{i}(x)=\widehat{H}(x)\) for any \(x\in X\). Assume further that \(\{\lambda _{i}\}_{i=0}^{\infty}\) is a sequence of real positive constants convergent to a positive real constant λ, and let the sequence \(\{\frac{1}{\varrho _{i}}\}_{i=0}^{\infty}\) be bounded. Then, the following statements are equivalent:

(i)
\(M_{i}\stackrel{G}{\longrightarrow}M\);

(ii)
For each sequence \(\{\lambda _{i}\}_{i=0}^{\infty}\) of real positive constants convergent to a positive real constant λ,
$$\begin{aligned} R^{\widehat{H}_{i}}_{M_{i},\lambda _{i}}(z)\rightarrow R^{\widehat{H}}_{M, \lambda}(z),\quad \forall z\in X, \end{aligned}$$where \(R^{\widehat{H}_{i}}_{M_{i},\lambda _{i}}=(\widehat{H}_{i}+\lambda _{i}M_{i})^{-1}\) (\(i\geq 0\)) and \(R^{\widehat{H}}_{M,\lambda}=(\widehat{H}+\lambda M)^{-1}\);

(iii)
For some sequence \(\{\lambda _{i,0}\}_{i=0}^{\infty}\) of real positive constants convergent to some positive real constant \(\lambda _{0}\),
$$\begin{aligned} R^{\widehat{H}_{i}}_{M_{i},\lambda _{i,0}}(z)\rightarrow R^{ \widehat{H}}_{M,\lambda _{0}}(z),\quad \forall z\in X. \end{aligned}$$
Proof
“(i) ⇒ (ii)” Suppose that \(M_{i}\stackrel{G}{\longrightarrow}M\) and let \(\{\lambda _{i}\}_{i=0}^{\infty}\) be a sequence of real positive constants convergent to a constant \(\lambda >0\). Choose \(z\in X\) arbitrarily but fixed. The fact that M is an Ĥ-accretive mapping implies that \((\widehat{H}+\lambda M)(X)=X\), which guarantees the existence of a point \((x,u)\in \operatorname{Graph}(M)\) such that \(z=\widehat{H}(x)+\lambda u\). Then, thanks to Definition 4.1 there exists a sequence \(\{(x_{i},u_{i})\}_{i=0}^{\infty}\subset \operatorname{Graph}(M_{i})\) such that \(x_{i}\rightarrow x\) and \(u_{i}\rightarrow u\) as \(i\rightarrow \infty \). Taking into account that \((x,u)\in \operatorname{Graph}(M)\) and \((x_{i},u_{i})\in \operatorname{Graph}(M_{i})\), it follows that
Picking \(z_{i}=\widehat{H}_{i}(x_{i})+\lambda _{i}u_{i}\) for each \(i\geq 0\) and making use of Lemma 2.12, (4.1), and the assumptions, we derive that for all \(i\geq 0\),
Since \(\lambda _{i}\rightarrow \lambda \) as \(i\rightarrow \infty \) and the sequences \(\{r_{i}\}_{i=0}^{\infty}\), \(\{\frac{1}{\varrho _{i}}\}_{i=0}^{\infty}\) are bounded, it follows that the sequences \(\{\frac{r_{i}}{\varrho _{i}}\}_{i=0}^{\infty}\) and \(\{\frac{\lambda _{i}}{\varrho _{i}}\}_{i=0}^{\infty}\) are also bounded. By virtue of the facts that \(x_{i}\rightarrow x\), \(u_{i}\rightarrow u\) and \(\lambda _{i}\rightarrow \lambda \) as \(i\rightarrow \infty \), we conclude that the right-hand side of (4.2) tends to zero as \(i\rightarrow \infty \), which implies that \(R^{\widehat{H}_{i}}_{M_{i},\lambda _{i}}(z)\rightarrow R^{ \widehat{H}}_{M,\lambda}(z)\), as \(i\rightarrow \infty \).
The proof of “(ii) ⇒ (iii)” is obvious.
“(iii) ⇒ (i)” Assume that for some sequence \(\{\lambda _{i,0}\}_{i=0}^{\infty}\) of real positive constants convergent to some positive real constant \(\lambda _{0}\), \(R^{\widehat{H}_{i}}_{M_{i},\lambda _{i,0}}(z)\rightarrow R^{ \widehat{H}}_{M,\lambda _{0}}(z)\), as \(i\rightarrow \infty \), for all \(z\in X\). Then, for any \((x,u)\in \operatorname{Graph}(M)\), we have \(x=R^{\widehat{H}}_{M,\lambda _{0}}[\widehat{H}(x)+\lambda _{0}u]\) and so \(R^{\widehat{H}_{i}}_{M_{i},\lambda _{i,0}}[\widehat{H}(x)+\lambda _{0}u] \rightarrow x\), as \(i\rightarrow \infty \). Taking \(x_{i}=R^{\widehat{H}_{i}}_{M_{i},\lambda _{i,0}}[\widehat{H}(x)+ \lambda _{0}u]\) for each \(i\geq 0\), we infer that for each \(i\geq 0\), \(\widehat{H}(x)+\lambda _{0}u\in (\widehat{H}_{i}+\lambda _{i,0}M_{i})(x_{i})\). Thus, for each \(i\geq 0\), we can choose \(u_{i}\in M_{i}(x_{i})\) such that \(\widehat{H}(x)+\lambda _{0}u=\widehat{H}_{i}(x_{i})+\lambda _{i,0}u_{i}\). Since \(x_{i}\rightarrow x\) as \(i\rightarrow \infty \), it follows that \(\lambda _{i,0}u_{i}\rightarrow \lambda _{0}u\), as \(i\rightarrow \infty \). Meanwhile, for all \(i\geq 0\), it yields
Taking into account that \(\lambda _{i,0}\rightarrow \lambda _{0}\) and \(\lambda _{i,0}u_{i}\rightarrow \lambda _{0}u\), as \(i\rightarrow \infty \), we deduce that the right-hand side of (4.3) approaches zero, as \(i\rightarrow \infty \), which ensures that \(u_{i}\rightarrow u\) as \(i\rightarrow \infty \). The proof is finished. □
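A one-dimensional instance of the equivalence just proved can be checked numerically. With the illustrative data \(\widehat{H}_{i}=\widehat{H}=I\) and \(M_{i}(x)=c_{i}x\), \(c_{i}\rightarrow c>0\), the resolvents have the closed form \(z/(1+\lambda _{i}c_{i})\) and converge pointwise to \(z/(1+\lambda c)\), mirroring the graph convergence \(M_{i}\stackrel{G}{\longrightarrow}M\).

```python
# Scalar instance of the equivalence in Theorem 4.2 (illustrative data):
# H_i = H = identity, M_i(x) = c_i * x with c_i -> c > 0, and lam_i -> lam.
# Then (H + lam_i * M_i)^{-1}(z) = z / (1 + lam_i * c_i), which converges
# pointwise to z / (1 + lam * c).

lam = 0.7
c = 2.0

def resolvent(z, lam_i, c_i):
    # (I + lam_i * M_i)^{-1}(z) solves x + lam_i * c_i * x = z
    return z / (1.0 + lam_i * c_i)

z = 3.0
limit = resolvent(z, lam, c)
errors = [abs(resolvent(z, lam + 1.0 / i, c + 1.0 / i) - limit)
          for i in range(1, 2001)]
```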
The following corollary is obtained as an immediate consequence of the above theorem.
Corollary 4.3
Suppose that X is a real smooth Banach space, and \(\widehat{H}:X\rightarrow X\) is a ϱ-strongly accretive and γ-Lipschitz continuous mapping. Furthermore, let \(M_{i}\), \(M:X\rightrightarrows X\) be Ĥ-accretive mappings for \(i=1,2,\dots \). Then, the following statements are equivalent:

(i)
\(M_{i}\stackrel{G}{\longrightarrow}M\);

(ii)
For each \(\lambda >0\), \(R^{\widehat{H}}_{M_{i},\lambda}(z) \rightarrow R^{\widehat{H}}_{M, \lambda}(z)\), \(\forall z\in X\);

(iii)
For some \(\lambda _{0}>0\), \(R^{\widehat{H}}_{M_{i},\lambda _{0}}(z) \rightarrow R^{ \widehat{H}}_{M,\lambda _{0}}(z)\), \(\forall z\in X\).
Lemma 4.4
([59])
Let \(\{\delta _{i}\}_{i=0}^{\infty}\) be a sequence of real numbers and let there exist \(\theta \in [0,1)\) and \(\xi >0\) such that
Then,
The following lemma plays a prominent role in the convergence analysis of the iterative algorithms proposed in the previous section.
Lemma 4.5
Suppose that \(\{\sigma _{i}\}_{i=0}^{\infty}\), \(\{\gamma _{i}\}_{i=0}^{\infty}\) and \(\{t_{i}\}_{i=0}^{\infty}\) are three real sequences of nonnegative numbers that satisfy the following conditions:

(i)
\(0\leq \gamma _{i}<1\) for all \(i\geq 0\) and \(\limsup_{i}\gamma _{i}<1\);

(ii)
\(\sigma _{i+1}\leq \gamma _{i}\sigma _{i}+t_{i}\), for all \(i\geq 0\);

(iii)
\(\lim_{i\rightarrow \infty}t_{i}=0\).
Then, \(\lim_{i\rightarrow \infty}\sigma _{i}=0\).
Proof
Let \(\epsilon >0\) be chosen arbitrarily but fixed; since it suffices to establish the claim for all sufficiently small ε, we may assume that \(\epsilon <1-\limsup_{i}\gamma _{i}\). Taking into account that \(\limsup_{i}\gamma _{i}<1-\epsilon \) and \(\lim_{i\rightarrow \infty}t_{i}=0\), one can choose \(i_{0}\in \mathbb{N}\) such that \(\gamma _{i}<1-\epsilon \) and \(t_{i}<\epsilon ^{2}\) for all \(i\geq i_{0}\). In the light of (ii), we deduce that
Then, by taking \(\theta =1-\epsilon \) and \(\xi =\epsilon ^{2}\), from Lemma 4.4, it follows that
which implies that \(\limsup_{i}\sigma _{i}\leq \epsilon \). This completes the proof. □
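The mechanism of Lemma 4.5 can also be seen numerically with the illustrative sequences \(\gamma _{i}=0.9\) (so \(\limsup_{i}\gamma _{i}<1\)) and \(t_{i}=\frac{1}{i+1}\rightarrow 0\): iterating the extreme case of equality in condition (ii) drives \(\sigma _{i}\) to zero.

```python
# Numerical illustration of Lemma 4.5 (illustrative sequences): with
# gamma_i = 0.9 and t_i = 1/(i+1) -> 0, any nonnegative sequence satisfying
# sigma_{i+1} <= gamma_i * sigma_i + t_i tends to 0.  We iterate the extreme
# case of equality, which dominates every admissible sequence.

sigma = 10.0                     # sigma_0, an arbitrary nonnegative start
for i in range(100_000):
    gamma_i = 0.9                # limsup gamma_i = 0.9 < 1
    t_i = 1.0 / (i + 1)          # t_i -> 0
    sigma = gamma_i * sigma + t_i
```

After many iterations sigma is of the order \(t_{i}/(1-\gamma )\), which itself tends to zero.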
Remark 4.6
(i) It should be pointed out that the condition \(\limsup_{i}\gamma _{i}<1\) imposed on the sequence \(\{\gamma _{i}\}\) in Lemma 4.5 is essential and cannot be dropped. To illustrate this fact, let us take \(\sigma _{i}=\beta \), \(t_{i}=\frac{\beta}{i}\), and \(\gamma _{i}=1-\frac{1}{i}\) for all \(i\in \mathbb{N}\), where \(\beta >0\) is an arbitrary but fixed real number. Then, we have \(\sigma _{i+1}\leq \gamma _{i}\sigma _{i}+t_{i}\) for all \(i\in \mathbb{N}\), \(\lim_{i\rightarrow \infty}t_{i}=0\) and \(\limsup_{i}\gamma _{i}=1\), but \(\lim_{i\rightarrow \infty}\sigma _{i}=\beta \neq 0\).
(ii) It is important to emphasize that Lemma 4.5 extends and unifies Lemma 5.1 in [13, 14] and Lemma 2.2 in [60].
We are now ready, as an application of the notion of graph convergence for Ĥ-accretive mappings, to present the most important result of this paper, in which the strong convergence of the iterative sequence generated by Algorithm 3.12 to a common element of the two sets \(\operatorname{SVI}(X_{j},\widehat{H}_{j},M,N,F,G:j=1,2)\) and \(\operatorname{Fix}(Q)\), where Q is the self-mapping of \(X_{1}\times X_{2}\) defined by (3.21), is proved.
Theorem 4.7
Suppose that \(X_{j}\), \(\widehat{H}_{j}\), F, G, M, N (\(j=1,2\)) are the same as in Theorem 3.3 and let all the conditions of Theorem 3.3 hold. Assume that \(\widehat{H}_{i,j}\), \(M_{i}\), \(N_{i}\), \(S_{j}\), \(\lambda _{i}\), and \(\rho _{i}\) (\(i\geq 0\); \(j=1,2\)) are the same as in Algorithm 3.12 such that for each \(i\geq 0\), \(\widehat{H}_{i,1}\) is a \(\varrho _{i,1}\)-strongly accretive and \(r_{i}\)-Lipschitz continuous mapping and \(\widehat{H}_{i,2}\) is a \(\varrho _{i,2}\)-strongly accretive and \(k_{i}\)-Lipschitz continuous mapping. Let Q be a self-mapping of \(X_{1}\times X_{2}\) defined by (3.21) such that \(\operatorname{Fix}(Q)\cap \operatorname{SVI}(X_{j},\widehat{H}_{j},M,N,F,G:j=1,2) \neq \emptyset \). Suppose that \(\lim_{i\rightarrow \infty}\widehat{H}_{i,j}(x_{j})= \widehat{H}_{j}(x_{j})\) for any \(x_{j}\in X_{j}\), \(M_{i}\stackrel{G}{\longrightarrow}M\), \(N_{i}\stackrel{G}{\longrightarrow}N\), \(r_{i}\rightarrow r\), \(k_{i}\rightarrow k\), \(\varrho _{i,j}\rightarrow \varrho _{j}\), as \(i\rightarrow \infty \), and \(L(\vartheta _{\lambda _{0},\rho _{0}}+1)<2\), where \(L=\max \{L_{1},L_{2}\}\) and \(\vartheta _{\lambda _{0},\rho _{0}}\) is the same as in (3.13). Assume further that \(r<\varrho _{1}\) and \(k<\varrho _{2}\) and let there exist constants \(\lambda ,\rho >0\) such that \(\lambda _{i}\rightarrow \lambda \) and \(\rho _{i}\rightarrow \rho \), as \(i\rightarrow \infty \). Then, the iterative sequence \(\{(a_{i},b_{i})\}_{i=0}^{\infty}\) generated by Algorithm 3.12 converges strongly to the only element \((a,b)\in \operatorname{Fix}(Q)\cap \operatorname{SVI}(X_{j},\widehat{H}_{j},M,N,F,G:j=1,2)\).
Proof
Since all the conditions of Theorem 3.3 hold, invoking Theorem 3.3, the SVI (3.1) admits the unique solution \((a,b)\in X_{1}\times X_{2}\). Then, from Lemma 3.1(ii) we infer that
which can be written, for each \(i\geq 0\), as follows:
where the sequence \(\{\alpha _{i}\}_{i=0}^{\infty}\) is the same as in Algorithm 3.12. Applying (3.24), (4.4), Lemma 2.12, and considering the fact that \(S_{1}\) is a \((\{c_{i,1}\}_{i=0}^{\infty},\{d_{i,1}\}_{i=0}^{\infty},\phi _{1})\)-total uniformly \(L_{1}\)-Lipschitzian mapping, we derive that for each \(i\geq 0\),
where for each \(i\geq 0\),
and
Taking into account that for each \(i\geq 0\), the mapping \(\widehat{H}_{i,1}\) is \(r_{i}\)-Lipschitz continuous, and the mapping F is \(\tau _{1}\)-Lipschitz continuous and \(\tau _{2}\)-Lipschitz continuous with respect to its first and second arguments, respectively, it follows that
and
Substituting (4.6) and (4.7) into (4.5), for all \(i\geq 0\), we obtain
By following similar arguments as in the proofs of (4.5)–(4.8) with suitable changes, from (3.24), (4.5), Lemma 2.12, and the assumptions, one can deduce that
where for each \(i\geq 0\),
and
Taking \(L=\max \{L_{1},L_{2}\}\) and making use of (4.8) and (4.9), we conclude that for all \(i\geq 0\),
where ϕ is a self-mapping of \(\mathbb{R}^{+}\) defined by (3.22), and for each \(i\geq 0\),
Since \(r_{i}\rightarrow r\), \(k_{i}\rightarrow k\), \(\lambda _{i}\rightarrow \lambda \), \(\rho _{i}\rightarrow \rho \), \(\varrho _{i,j}\rightarrow \varrho _{j}\) for \(j=1,2\), it follows that \(\vartheta _{\lambda _{i},\rho _{i}}(i)\rightarrow \vartheta _{ \lambda ,\rho}\), as \(i\rightarrow \infty \), where \(\vartheta _{\lambda ,\rho}\) is the same as in (3.11). By virtue of the fact that \(r<\varrho _{1}\) and \(k<\varrho _{2}\), there are some \(\lambda _{0},\rho _{0}>0\) small enough such that \(\vartheta _{\lambda _{0},\rho _{0}}\in (0,1)\). Then, for \(\widehat{\vartheta}_{\lambda _{0},\rho _{0}}= \frac{\vartheta _{\lambda _{0},\rho _{0}}+1}{2}\in (\vartheta _{ \lambda _{0},\rho _{0}},1)\) there exists \(i_{0}\geq 1\) such that \(\vartheta _{\lambda _{i},\rho _{i}}(i)<\widehat{\vartheta}_{\lambda _{0}, \rho _{0}}\) for all \(i\geq i_{0}\). Thereby, from (4.10) we derive that for all \(i\geq i_{0}\),
Letting \(\gamma _{i}=L\widehat{\vartheta}_{\lambda _{0},\rho _{0}}+(1L \widehat{\vartheta}_{\lambda _{0},\rho _{0}})\alpha _{i}\) for each \(i\geq 0\) and thanks to the facts that \(L(\vartheta _{\lambda _{0},\rho _{0}}+1)<2\) and \(\limsup_{i}\alpha _{i}<1\), we deduce that
Owing to the facts that \(M_{i}\stackrel{G}{\longrightarrow}M\) and \(N_{i}\stackrel{G}{\longrightarrow}N\), from Theorem 4.2 it follows that for \(j=1,2\), \(\Vert \varphi _{i,j}\Vert _{j}\rightarrow 0\) as \(i\rightarrow \infty \). Meanwhile, since for \(j=1,2\), \(\widehat{H}_{i,j}(x_{j})\rightarrow \widehat{H}_{j}(x_{j})\) for any \(x_{j}\in X_{j}\), \(\lambda _{i}\rightarrow \lambda \) and \(\rho _{i}\rightarrow \rho \) as \(i\rightarrow \infty \), we conclude that for \(j=1,2\), \(\mu _{i,j}\rightarrow 0\) as \(i\rightarrow \infty \). Relying on the fact that for \(j=1,2\), \(S_{j}\) is a \((\{c_{i,j}\}_{i=0}^{\infty},\{d_{i,j}\}_{i=0}^{\infty},\phi _{j})\)-total uniformly \(L_{j}\)-Lipschitzian mapping, invoking Definition 3.9, for \(j=1,2\) we have \(c_{i,j},d_{i,j}\rightarrow 0\) as \(i\rightarrow \infty \). By assuming \(\sigma _{i}=\Vert (a_{i},b_{i})-(a,b)\Vert _{*}\) and
we infer that \(\lim_{i\rightarrow \infty}t_{i}=0\) and (4.11) can be written as \(\sigma _{i+1}\leq \gamma _{i}\sigma _{i}+t_{i}\) for all \(i\geq 0\). We now note that all the conditions of Lemma 4.5 are satisfied; thus, making use of (4.11) and Lemma 4.5, it follows that \(\sigma _{i}\rightarrow 0\) as \(i\rightarrow \infty \), i.e., \((a_{i},b_{i})\rightarrow (a,b)\) as \(i\rightarrow \infty \). Accordingly, the sequence \(\{(a_{i},b_{i})\}_{i=0}^{\infty}\) generated by Algorithm 3.12 converges strongly to the unique solution of the SVI (3.1), that is, the only element of \(\operatorname{Fix}(Q)\cap \operatorname{SVI}(X_{j},\widehat{H}_{j},M,N,F,G:j=1,2)\). This completes the proof. □
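The convergence mechanism of the last step rests entirely on the recursion \(\sigma _{i+1}\leq \gamma _{i}\sigma _{i}+t_{i}\) with \(\limsup_{i}\gamma _{i}<1\) and \(t_{i}\rightarrow 0\). A minimal numerical sketch, with illustrative choices of \(\gamma _{i}\) and \(t_{i}\) not taken from the paper, shows how \(\sigma _{i}\rightarrow 0\) in the worst case (equality in the bound):

```python
# Illustrate: if sigma_{i+1} <= gamma_i * sigma_i + t_i with
# limsup gamma_i < 1 and t_i -> 0, then sigma_i -> 0 (cf. Lemma 4.5).
# The sequences gamma_i and t_i below are illustrative, not from the paper.

def run(sigma0=10.0, steps=2000):
    sigma = sigma0
    for i in range(1, steps + 1):
        gamma_i = 0.9 + 0.05 / i        # limsup gamma_i = 0.9 < 1
        t_i = 1.0 / i**2                # t_i -> 0
        sigma = gamma_i * sigma + t_i   # worst case: equality in the bound
    return sigma

print(run())  # prints a small positive value near 0
```

The geometric factor \(\gamma _{i}\) contracts the initial error, while the vanishing perturbation \(t_{i}\) contributes only a tail that also tends to zero.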
Taking \(S_{j}\equiv I_{j}\), the identity mapping on \(X_{j}\), the following corollary follows from Theorem 4.7 immediately.
Corollary 4.8
Assume that \(X_{j}\), \(\widehat{H}_{j}\), \(\widehat{H}_{i,j}\), \(M_{i}\), \(N_{i}\), \(\lambda _{i}\), \(\rho _{i}\), F, G, M, N (\(i\geq 0\) and \(j=1,2\)) are the same as in Theorem 3.3 and let all the conditions of Theorem 3.3 hold. Then, the iterative sequence \(\{(a_{i},b_{i})\}_{i=0}^{\infty}\) generated by Algorithm 3.13 converges strongly to the unique solution of the SVI (3.1).
Taking \(S_{j}\equiv I_{j}\), \(\widehat{H}_{i,j}=\widehat{H}_{j}\), \(\lambda _{i}=\lambda \) and \(\rho _{i}=\rho \) for each \(i\geq 0\) and \(j\in \{1,2\}\), we obtain the following corollary as a direct consequence of Theorem 4.7.
Corollary 4.9
Let \(X_{j}\), \(\widehat{H}_{j}\), F, G, M, N (\(j=1,2\)) be the same as in Theorem 4.2 and let all the conditions of Theorem 4.2 hold. Suppose that for each \(i\geq 0\), \(M_{i}:X_{1}\rightrightarrows X_{1}\) is an \(\widehat{H}_{1}\)-accretive set-valued mapping and \(N_{i}:X_{2}\rightrightarrows X_{2}\) is an \(\widehat{H}_{2}\)-accretive set-valued mapping such that \(M_{i}\stackrel{G}{\longrightarrow}M\) and \(N_{i}\stackrel{G}{\longrightarrow}N\). Assume further that \(r<\varrho _{1}\) and \(k<\varrho _{2}\). Then, the iterative sequence \(\{(a_{i},b_{i})\}_{i=0}^{\infty}\) generated by Algorithm 3.14 converges strongly to the unique solution of the SVI (3.1).
\(H(\cdot,\cdot)\)-Accretive operators and some comments
In this section, we turn our attention to investigating and analyzing the notion of an \(H(\cdot,\cdot)\)-accretive operator and the related results available in [26]. Some remarks together with relevant commentaries are also pointed out.
Let us first remark that throughout [26], X is assumed to be a real Banach space such that J is single-valued. As we know, J is single-valued if and only if X is smooth. Hence, throughout the rest of the paper, unless otherwise stated, we assume that X is a real smooth Banach space.
Definition 5.1
([26])
For given single-valued mappings \(A,B:X\rightarrow X\) and \(H:X\times X\rightarrow X\),

(i)
\(H(A,\cdot)\) is said to be α-generalized accretive with respect to A if there exists a constant \(\alpha \in \mathbb{R}\) satisfying
$$\begin{aligned} \bigl\langle H(Ax,u)-H(Ay,u),J(x-y)\bigr\rangle \geq \alpha \Vert x-y \Vert ^{2}, \quad \forall x,y,u\in X; \end{aligned}$$ 
(ii)
\(H(\cdot,B)\) is said to be β-generalized accretive with respect to B if there exists a constant \(\beta \in \mathbb{R}\) such that
$$\begin{aligned} \bigl\langle H(u,Bx)-H(u,By),J(x-y)\bigr\rangle \geq \beta \Vert x-y \Vert ^{2}, \quad \forall x,y,u\in X; \end{aligned}$$ 
(iii)
\(H(\cdot,\cdot)\) is said to be ρ-Lipschitz continuous with respect to A if there exists a constant \(\rho >0\) such that
$$\begin{aligned} \bigl\Vert H(Ax,u)-H(Ay,u) \bigr\Vert \leq \rho \Vert x-y \Vert ,\quad \forall x,y,u \in X; \end{aligned}$$ 
(iv)
\(H(\cdot,\cdot)\) is said to be ς-Lipschitz continuous with respect to B if there exists a constant \(\varsigma >0\) such that
$$\begin{aligned} \bigl\Vert H(u,Bx)-H(u,By) \bigr\Vert \leq \varsigma \Vert x-y \Vert ,\quad \forall x,y,u\in X. \end{aligned}$$
Here, it should be noted that, as was pointed out in [26], the generalized accretivity of the mapping \(H(\cdot,\cdot)\) with respect to B and the Lipschitz continuity of \(H(\cdot,\cdot)\) with respect to B can be defined in a way similar to cases (iv) and (v) of Definition 2.1 in [26], as we have done in parts (ii) and (iv) of Definition 5.1, respectively.
Proposition 5.2
Let \(A,B:X\rightarrow X\) and \(H:X\times X\rightarrow X\) be given mappings and let \(\widehat{H}:X\rightarrow X\) be the mapping defined by \(\widehat{H}(x):=H(Ax,Bx)\) for all \(x\in X\). Then, the following conclusions hold:

(i)
If \(H(\cdot,\cdot)\) is α, β-generalized accretive with respect to A, B, respectively, then Ĥ is \((\alpha +\beta )\)-strongly accretive and hence it is strictly accretive (resp., accretive and \((\alpha +\beta )\)-relaxed accretive) provided that \(\alpha +\beta >0\) (resp., \(\alpha +\beta =0\) and \(\alpha +\beta <0\));

(ii)
If \(H(\cdot,\cdot)\) is \(r_{1}\)-Lipschitz continuous with respect to A and \(r_{2}\)-Lipschitz continuous with respect to B, then Ĥ is \((r_{1}+r_{2})\)-Lipschitz continuous.
Proof
(i) Since \(H(\cdot,\cdot)\) is α, β-generalized accretive with respect to A, B, respectively, for all \(x,y\in X\) we obtain
$$\begin{aligned} \bigl\langle \widehat{H}(x)-\widehat{H}(y),J(x-y)\bigr\rangle ={}&\bigl\langle H(Ax,Bx)-H(Ay,Bx),J(x-y)\bigr\rangle \\ &{}+\bigl\langle H(Ay,Bx)-H(Ay,By),J(x-y)\bigr\rangle \\ \geq{}&(\alpha +\beta ) \Vert x-y \Vert ^{2}. \end{aligned}$$
If \(\alpha +\beta >0\), the last inequality ensures that Ĥ is \((\alpha +\beta )\)strongly accretive and so the fact that Ĥ is strictly accretive is straightforward. For the case when \(\alpha +\beta =0\) (resp., \(\alpha +\beta <0\)), thanks to the preceding inequality we infer that Ĥ is accretive (resp., \((\alpha +\beta )\)relaxed accretive).
(ii) Taking into account that the mapping \(H(\cdot,\cdot)\) is \(r_{1}\)-Lipschitz continuous and \(r_{2}\)-Lipschitz continuous with respect to the mappings A and B, respectively, it follows that for all \(x,y\in X\),
$$\begin{aligned} \bigl\Vert \widehat{H}(x)-\widehat{H}(y) \bigr\Vert &\leq \bigl\Vert H(Ax,Bx)-H(Ay,Bx) \bigr\Vert + \bigl\Vert H(Ay,Bx)-H(Ay,By) \bigr\Vert \\ &\leq (r_{1}+r_{2}) \Vert x-y \Vert , \end{aligned}$$
i.e., Ĥ is \((r_{1}+r_{2})\)-Lipschitz continuous. The proof is finished. □
It is significant to emphasize that every bifunction \(H:X\times X\rightarrow X\) that is α, β-generalized accretive with respect to A, B, respectively, is actually a univariate \((\alpha +\beta )\)-strongly accretive (resp., accretive and \((\alpha +\beta )\)-relaxed accretive) mapping provided that \(\alpha +\beta >0\) (resp., \(\alpha +\beta =0\) and \(\alpha +\beta <0\)), and so is not a new notion. At the same time, thanks to Proposition 5.2(ii), the notion of Lipschitz continuity of the bifunction \(H:X\times X\rightarrow X\) with respect to the mappings \(A,B:X\rightarrow X\) presented in parts (iii) and (iv) of Definition 5.1 is exactly the same as the concept of Lipschitz continuity of the univariate mapping \(\widehat{H}=H(A,B):X\rightarrow X\) that appeared in Definition 2.1(v), and so is not new either.
Definition 5.3
For given single-valued mappings \(A,B:X\rightarrow X\) and \(H:X\times X\rightarrow X\), a set-valued mapping \(M:X\rightrightarrows X\) is said to be \(H(\cdot,\cdot)\)-accretive with respect to the mappings A and B (or simply \(H(\cdot,\cdot)\)-accretive in the following) if M is accretive and \((H(A,B)+\lambda M)(X)=X\) for every \(\lambda >0\).
Remark 5.4
It is worth mentioning that the concept of an \(H(\cdot,\cdot)\)-accretive operator was initially introduced by Zou and Huang [35] in 2008, and was studied for the case when \(H(\cdot,\cdot)\) is α-strongly accretive with respect to A, β-relaxed accretive with respect to B, and \(\alpha >\beta \). Afterwards, several generalizations of this notion appeared in the literature. Recently, this notion has been considered by Tang and Wang [26] and studied in the more general case when \(H(\cdot,\cdot)\) is α-generalized accretive with respect to A and β-generalized accretive with respect to B. It should be pointed out that by defining the mapping \(\widehat{H}:X\rightarrow X\) as \(\widehat{H}(x):=H(Ax,Bx)\) for all \(x\in X\), Definition 5.3 coincides exactly with Definition 2.6. In other words, the concept of an \(H(\cdot,\cdot)\)-accretive operator is actually the same as the notion of the Ĥ-accretive operator introduced and studied by Fang and Huang [11] and is not a new one.
According to the following conclusion, the authors of [26] deduced that every \(H(\cdot,\cdot)\)-accretive operator is maximal under some appropriate conditions.
Lemma 5.5
([26, Theorem 2.1])
Let \(H(\cdot,\cdot)\) be α, β-generalized accretive with respect to A, B, respectively, such that \(\alpha +\beta >0\). Let \(M:X\rightrightarrows X\) be an \(H(\cdot,\cdot)\)-accretive operator with respect to A and B. If the inequality \(\langle u-v,J(x-y)\rangle \geq 0\) holds for all \((y,v)\in \operatorname{Graph}(M)\), then \(u\in M(x)\).
Proof
Defining the mapping \(\widehat{H}:X\rightarrow X\) by \(\widehat{H}(x):=H(Ax,Bx)\) for all \(x\in X\), from the assumptions and using Proposition 5.2(i) it follows that Ĥ is \((\alpha +\beta )\)-strongly accretive and so it is a strictly accretive mapping. At the same time, invoking Remark 5.4, M is an Ĥ-accretive mapping. We now note that all the conditions of Lemma 2.9 are satisfied and so the conclusion follows from Lemma 2.9 immediately. □
It should be noted that the conclusion of Theorem 2.1 in [26] was derived based on Theorem 3.1 in [35] without presenting any proof. In fact, in [35, Theorem 3.1], the authors proved that every \(H(\cdot,\cdot)\)-accretive operator with respect to mappings A and B satisfying the appropriate conditions, where \(H(\cdot,\cdot)\) is α-strongly accretive with respect to A, β-relaxed accretive with respect to B, and \(\alpha >\beta \), is maximal. Tang and Wang [26] concluded the same assertion for \(H(\cdot,\cdot)\)-accretive mappings for the case when \(H(\cdot,\cdot)\) is α, β-generalized accretive with respect to A, B, respectively, and \(\alpha +\beta \neq 0\). However, by following a similar argument as in the proof of [35, Theorem 3.1] with suitable modifications, we found that the condition \(\alpha +\beta \neq 0\) in the context of [26, Theorem 2.1] must be replaced by the condition \(\alpha +\beta >0\), as has been done in the context of Lemma 5.5.
Lemma 5.6
([26, Theorem 2.2])
Let \(H(\cdot,\cdot)\) be α, β-generalized accretive with respect to A, B, respectively, such that \(\alpha +\beta >0\). Let \(M:X\rightrightarrows X\) be an \(H(\cdot,\cdot)\)-accretive operator with respect to A and B. Then, the operator \((H(A,B)+\lambda M)^{-1}\) is single-valued.
Proof
Let us define the mapping \(\widehat{H}:X\rightarrow X\) as \(\widehat{H}(x):=H(Ax,Bx)\) for all \(x\in X\). Then, in the light of the assumptions, Proposition 5.2(i) implies that Ĥ is \((\alpha +\beta )\)-strongly accretive and so it is a strictly accretive mapping. Meanwhile, in view of Remark 5.4, M is an Ĥ-accretive operator. Thereby, all the conditions of Lemma 2.10 are satisfied. Hence, according to Lemma 2.10, the operator \((\widehat{H}+\lambda M)^{-1}=(H(\cdot,\cdot)+\lambda M)^{-1}\) is single-valued for every constant \(\lambda >0\). The proof is completed. □
Remark 5.7
It should be pointed out that the assertion of Theorem 2.2 in [26] is derived similarly to that of Theorem 3.3 in [35]. In fact, in Theorem 3.3 of [35], the authors proved that for a given \(H(\cdot,\cdot)\)-accretive operator \(M:X\rightrightarrows X\) with respect to the mappings A and B, where \(H(\cdot,\cdot)\) is α-strongly accretive with respect to A, β-relaxed accretive with respect to B, and \(\alpha >\beta \), the operator \((H(A,B)+\lambda M)^{-1}\) is single-valued for every constant \(\lambda >0\). Without giving any proof, Tang and Wang [26] claimed that the same assertion holds for the case when \(H(\cdot,\cdot)\) is α, β-generalized accretive with respect to A, B, respectively, and \(\alpha +\beta \neq 0\). However, by following a similar argument as in the proof of Theorem 3.3 presented in [35] with suitable changes, we inferred that the condition \(\alpha +\beta \neq 0\) in the context of [26, Theorem 2.2] must be replaced by the condition \(\alpha +\beta >0\), as we have done in the context of Lemma 5.6.
Based on Theorem 2.2 in [26], the authors defined the resolvent operator associated with an \(H(\cdot,\cdot)\)-accretive operator \(M:X\rightrightarrows X\) as follows.
Definition 5.8
([26, Definition 2.3])
Let \(H(\cdot,\cdot)\) be α, β-generalized accretive with respect to A, B, respectively, such that \(\alpha +\beta >0\). Let \(M:X\rightrightarrows X\) be an \(H(\cdot,\cdot)\)-accretive operator with respect to A and B. For each \(\lambda >0\), the resolvent operator \(R^{H(\cdot,\cdot)}_{M,\lambda}:X\rightarrow X\) is defined by
$$\begin{aligned} R^{H(\cdot,\cdot)}_{M,\lambda}(z)=\bigl(H(A,B)+\lambda M\bigr)^{-1}(z),\quad \forall z\in X. \end{aligned}$$
Note, in particular, that by defining the operator \(\widehat{H}:X\rightarrow X\) as \(\widehat{H}(x):=H(Ax,Bx)\) for all \(x\in X\), thanks to the assumptions, from Proposition 5.2(i) it follows that Ĥ is a strictly accretive operator. Furthermore, by virtue of Remark 5.4, M is an Ĥ-accretive operator. Thus, based on Definition 2.11, for any constant \(\lambda >0\), the resolvent operator \(R^{\widehat{H}}_{M,\lambda}=R^{H(\cdot,\cdot)}_{M,\lambda}:X\rightarrow X\) associated with the \(\widehat{H}=H(\cdot,\cdot)\)-accretive operator M is defined by
$$\begin{aligned} R^{\widehat{H}}_{M,\lambda}(z)=(\widehat{H}+\lambda M)^{-1}(z),\quad \forall z\in X. \end{aligned}$$
Indeed, in view of the discussion mentioned above, the notion of the resolvent operator \(R^{H(\cdot,\cdot)}_{M,\lambda}\) associated with an \(H(\cdot,\cdot)\)-accretive operator \(M:X\rightrightarrows X\) and an arbitrary constant \(\lambda >0\), where \(H(\cdot,\cdot)\) is α, β-generalized accretive with respect to A, B, respectively, and \(\alpha +\beta >0\), is actually the same as the notion of the resolvent operator \(R^{\widehat{H}}_{M,\lambda}\) associated with the Ĥ-accretive operator M and the real constant \(\lambda >0\) given in Definition 2.11, and is not a new one. At the same time, it should be remarked that in Definition 2.3 of [26], the notion of a resolvent operator associated with an \(H(\cdot,\cdot)\)-accretive operator \(M:X\rightrightarrows X\) is defined based on Theorem 2.2 in [26]. However, as was pointed out in Remark 5.7, the condition \(\alpha +\beta \neq 0\) in the context of Theorem 2.2 of [26] must be replaced by the condition \(\alpha +\beta >0\). Hence, this correction must also be made in the context of Definition 2.3 of [26], as has been done in the context of Definition 5.8.
With the aim of proving the Lipschitz continuity of the resolvent operator \(R^{H(\cdot,\cdot)}_{M,\lambda}\) and computing an estimate of its Lipschitz constant, Tang and Wang [26] presented one of the most important results of Sect. 2 of [26] without any proof as follows.
Lemma 5.9
([26, Theorem 2.3])
Let \(H(\cdot,\cdot)\) be α, β-generalized accretive with respect to A, B, respectively, such that \(\alpha +\beta >0\). Let \(M:X\rightrightarrows X\) be an \(H(\cdot,\cdot)\)-accretive operator with respect to A and B. Then, the resolvent operator \(R^{H(\cdot,\cdot)}_{M,\lambda}:X\rightarrow X\) is \(\frac{1}{\alpha +\beta}\)-Lipschitz continuous, that is,
$$\begin{aligned} \bigl\Vert R^{H(\cdot,\cdot)}_{M,\lambda}(u)-R^{H(\cdot,\cdot)}_{M,\lambda}(v) \bigr\Vert \leq \frac{1}{\alpha +\beta} \Vert u-v \Vert ,\quad \forall u,v\in X. \end{aligned}$$
Proof
Defining the mapping \(\widehat{H}:X\rightarrow X\) as \(\widehat{H}(x):=H(Ax,Bx)\) for all \(x\in X\), with the help of the assumptions, from Proposition 5.2(i) it follows that the operator Ĥ is \((\alpha +\beta )\)-strongly accretive. At the same time, by virtue of Remark 5.4, M is an Ĥ-accretive operator. Taking \(r=\alpha +\beta \), Lemma 2.12 ensures that the resolvent operator \(R^{\widehat{H}}_{M,\lambda}=R^{H(\cdot,\cdot)}_{M,\lambda}:X\rightarrow X\) is Lipschitz continuous with constant \(\frac{1}{r}=\frac{1}{\alpha +\beta}\), i.e.,
$$\begin{aligned} \bigl\Vert R^{\widehat{H}}_{M,\lambda}(u)-R^{\widehat{H}}_{M,\lambda}(v) \bigr\Vert \leq \frac{1}{\alpha +\beta} \Vert u-v \Vert \end{aligned}$$
for all \(u,v\in X\). This gives the desired result. □
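In the simplest one-dimensional setting, the bound \(\frac{1}{\alpha +\beta}\) of Lemma 5.9 can be observed directly. The following is a hedged Python sketch; the mappings below are toy choices, not taken from [26]: on \(X=\mathbb{R}\), let \(\widehat{H}(x)=x\) (so \(\alpha +\beta =1\)) and let M be the accretive single-valued mapping \(M(x)=2x\), so that \((\widehat{H}+\lambda M)^{-1}\) has an explicit formula.

```python
# Toy 1-D resolvent of an H-hat-accretive operator (illustrative choices):
# X = R, H_hat(x) = x  (alpha + beta = 1, strongly accretive),
# M(x) = 2x (accretive), so (H_hat + lam*M)(x) = (1 + 2*lam)*x.

def resolvent(z, lam):
    """Solve H_hat(x) + lam*M(x) = z for x; single-valued since 1 + 2*lam > 0."""
    return z / (1.0 + 2.0 * lam)

lam = 0.5
u, v = 3.0, -1.0
lhs = abs(resolvent(u, lam) - resolvent(v, lam))
rhs = abs(u - v) / 1.0    # (1/(alpha+beta)) * |u - v| with alpha + beta = 1
print(lhs <= rhs)          # the Lipschitz estimate of Lemma 5.9 holds here
```

Here the actual Lipschitz constant is \(1/(1+2\lambda )\), which is indeed dominated by \(1/(\alpha +\beta )=1\) for every \(\lambda >0\).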
Example 5.10
([26, Example 2.1])
Let \(X=\mathbb{R}^{2}=(-\infty ,+\infty )\times (-\infty ,+\infty )\) and define \(A,B:\mathbb{R}^{2}\rightarrow \mathbb{R}^{2}\), respectively, by
$$\begin{aligned} A(x)=-x \quad \text{and} \quad B(x)=2x, \quad \forall x\in \mathbb{R}^{2}. \end{aligned}$$
Suppose that the bifunction \(H(\cdot,\cdot):\mathbb{R}^{2}\times \mathbb{R}^{2}\rightarrow \mathbb{R}^{2}\) is defined by \(H(\cdot,\cdot)((x,y))=x+y\), for all \(x,y\in X=\mathbb{R}^{2}\). Thanks to the facts that
$$\begin{aligned} \bigl\langle H(Ax,u)-H(Ay,u),x-y\bigr\rangle =\langle -x+y,x-y\rangle =- \Vert x-y \Vert ^{2} \end{aligned}$$
and
$$\begin{aligned} \bigl\langle H(u,Bx)-H(u,By),x-y\bigr\rangle =\langle 2x-2y,x-y\rangle =2 \Vert x-y \Vert ^{2}, \end{aligned}$$
Tang and Wang [26] deduced that \(H(\cdot,\cdot)\) is −1, 2-generalized accretive with respect to A, B, respectively. By virtue of the facts that
$$\begin{aligned} \bigl\Vert H(Ax,u)-H(Ay,u) \bigr\Vert = \Vert -x+y \Vert = \Vert x-y \Vert \end{aligned}$$
and
$$\begin{aligned} \bigl\Vert H(u,Bx)-H(u,By) \bigr\Vert = \Vert 2x-2y \Vert =2 \Vert x-y \Vert , \end{aligned}$$
it follows that \(H(\cdot,\cdot)\) is 1-Lipschitz continuous and 2-Lipschitz continuous with respect to A and B, respectively. Taking into account that H is not strongly accretive with respect to A, they pointed out that the conditions of Theorems 3.1, 3.3, 3.4, and Definition 3.2 of Zou and Huang [35] are not satisfied.
Let us define the mapping \(\widehat{H}:X\rightarrow X\) by \(\widehat{H}(x):=H(Ax,Bx)\) for all \(x\in X\). Then, for all \(x\in X\), we have \(\widehat{H}(x)=Ax+Bx=-x+2x=x\). Now, taking \(\alpha =-1\), \(\beta =2\), \(r_{1}=1\) and \(r_{2}=2\), in light of the fact that \(\alpha +\beta =-1+2>0\), from parts (i) and (ii) of Proposition 5.2, it is expected that the mapping Ĥ is \((\alpha +\beta )=1\)-strongly accretive and \((r_{1}+r_{2})=3\)-Lipschitz continuous. Since
$$\begin{aligned} \bigl\langle \widehat{H}(x)-\widehat{H}(y),x-y\bigr\rangle =\langle x-y,x-y \rangle = \Vert x-y \Vert ^{2} \end{aligned}$$
and
$$\begin{aligned} \bigl\Vert \widehat{H}(x)-\widehat{H}(y) \bigr\Vert = \Vert x-y \Vert \leq 3 \Vert x-y \Vert \end{aligned}$$
for all \(x,y\in X\), these facts confirm our expectations, i.e., our observations are compatible with the assertions derived in Proposition 5.2.
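These computations can also be checked numerically. The following minimal Python sketch assumes, consistently with \(\widehat{H}(x)=Ax+Bx=x\) above and with the stated accretivity constants, that \(A(x)=-x\) and \(B(x)=2x\) on \(\mathbb{R}^{2}\) with the Euclidean inner product (so that J is the identity):

```python
import random

# Mappings assumed from Example 5.10: A(x) = -x, B(x) = 2x, H(u, v) = u + v,
# hence H_hat(x) = H(Ax, Bx) = -x + 2x = x on R^2.
def H_hat(x):
    return [-xi + 2 * xi for xi in x]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
for _ in range(100):
    x = [random.uniform(-5, 5) for _ in range(2)]
    y = [random.uniform(-5, 5) for _ in range(2)]
    d = [a - b for a, b in zip(x, y)]
    Hd = [a - b for a, b in zip(H_hat(x), H_hat(y))]
    # (alpha+beta)=1 strong accretivity: <H_hat(x)-H_hat(y), x-y> >= |x-y|^2
    assert dot(Hd, d) >= dot(d, d) - 1e-9
    # (r1+r2)=3 Lipschitz continuity: |H_hat(x)-H_hat(y)| <= 3 |x-y|
    assert dot(Hd, Hd) ** 0.5 <= 3 * dot(d, d) ** 0.5 + 1e-9
print("checks passed")
```

Both inequalities hold with room to spare, since for this example Ĥ is exactly the identity.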
Definition 5.11
Let \(A,B:X\rightarrow X\) and \(H:X\times X\rightarrow X\) be three single-valued mappings. Let \(M_{i}\), \(M:X\rightrightarrows X\) be \(H(\cdot,\cdot)\)-accretive operators for \(i=1,2,\dots \). The sequence \(\{M_{i}\}\) is said to be graph-convergent to M, denoted by \(M_{i}\stackrel{G}{\longrightarrow}M\), if for every \((x,u)\in \operatorname{Graph}(M)\), there exists a sequence of points \((x_{i},u_{i})\in \operatorname{Graph}(M_{i})\) such that \(x_{i}\rightarrow x\) and \(u_{i}\rightarrow u\), as \(i\rightarrow \infty \).
As was pointed out, by defining the mapping \(\widehat{H}:X\rightarrow X\) by \(\widehat{H}(x):=H(Ax,Bx)\) for all \(x\in X\), the notion of an \(H(\cdot,\cdot)\)-accretive mapping is exactly the same as the concept of an Ĥ-accretive mapping and is not a new one. Thanks to this fact, we found that if in Definition 5.11 the mappings \(M_{i}\) (\(i\geq 0\)) and M are assumed to be Ĥ-accretive, then Definition 5.11 actually becomes the same as Definition 4.1. In fact, the notion of graph convergence for \(H(\cdot,\cdot)\)-accretive mappings introduced in [26, 60] is exactly the same as the concept of graph convergence for Ĥ-accretive mappings as a special case of Definition 4.1 and is not a new one.
Using the notion of graph convergence for \(H(\cdot,\cdot)\)-accretive operators, Tang and Wang [26] established an equivalence between the graph convergence of a sequence of \(H(\cdot,\cdot)\)-accretive operators and the convergence of their associated resolvent operators, respectively, to a given \(H(\cdot,\cdot)\)-accretive mapping and its associated resolvent operator, as follows.
Theorem 5.12
([26, Theorem 2.4])
Let \(M_{i}\), \(M:X\rightrightarrows X\) be \(H(\cdot,\cdot)\)-accretive operators for \(i=1,2,\dots \). Assume that \(H:X\times X\rightarrow X\) is a single-valued mapping such that

(a)
\(H(A,B)\) is α, β-generalized accretive with respect to A, B, respectively, with \(\alpha +\beta >0\);

(b)
\(H(A,B)\) is \(\gamma _{1}\), \(\gamma _{2}\)-Lipschitz continuous with respect to A, B, respectively.
Then, the following statements are equivalent:

(i)
\(M_{i}\stackrel{G}{\longrightarrow}M\);

(ii)
For each \(\lambda >0\), \(R^{H(\cdot,\cdot)}_{M_{i},\lambda}(u) \rightarrow R^{H(\cdot,\cdot)}_{M,\lambda}(u)\), \(\forall u\in X\);

(iii)
For some \(\lambda _{0}>0\), \(R^{H(\cdot,\cdot)}_{M_{i},\lambda _{0}}(u) \rightarrow R^{H(\cdot,\cdot)}_{M, \lambda _{0}}(u)\), \(\forall u\in X\).
Proof
Let us define the mapping \(\widehat{H}:X\rightarrow X\) by \(\widehat{H}(x):=H(Ax,Bx)\) for all \(x\in X\). Thanks to the assumptions mentioned in parts (a) and (b), from parts (i) and (ii) of Proposition 5.2 it follows that Ĥ is \((\alpha +\beta )\)-strongly accretive and \((\gamma _{1}+\gamma _{2})\)-Lipschitz continuous. Meanwhile, invoking Remark 5.4, \(M_{i}\) (\(i\geq 0\)) and M are Ĥ-accretive mappings and so the resolvent operators \(R^{H(\cdot,\cdot)}_{M_{i},\lambda}\) (\(i\geq 0\)) and \(R^{H(\cdot,\cdot)}_{M,\lambda}\) become actually the same resolvent operators \(R^{\widehat{H}}_{M_{i},\lambda}\) (\(i\geq 0\)) and \(R^{\widehat{H}}_{M,\lambda}\), respectively. Taking \(\varrho =\alpha +\beta \) and \(\gamma =\gamma _{1}+\gamma _{2}\), we note that all the conditions of Corollary 4.9 are satisfied. Now, in the light of Corollary 4.9, it follows that the following statements are equivalent:

(i)
\(M_{i}\stackrel{G}{\longrightarrow}M\);

(ii)
For each \(\lambda >0\), \(R^{H(\cdot,\cdot)}_{M_{i},\lambda}(u)=R^{\widehat{H}}_{M_{i},\lambda}(u) \rightarrow R^{\widehat{H}}_{M,\lambda}(u)=R^{H(\cdot,\cdot)}_{M,\lambda}(u)\), \(\forall u\in X\);

(iii)
For some \(\lambda _{0}>0\), \(R^{H(\cdot,\cdot)}_{M_{i},\lambda _{0}}(u)=R^{\widehat{H}}_{M_{i},\lambda _{0}}(u) \rightarrow R^{\widehat{H}}_{M,\lambda _{0}}(u)=R^{H(\cdot,\cdot)}_{M, \lambda _{0}}(u)\), \(\forall u\in X\).
The proof is completed. □
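The equivalence of Theorem 5.12 can be observed on a one-dimensional toy family; the operators below are illustrative choices, not taken from [26]. With \(\widehat{H}(x)=x\), take \(M_{i}(x)=(1+\frac{1}{i})x\) and \(M(x)=x\); then \(M_{i}\stackrel{G}{\longrightarrow}M\), and the resolvents have the explicit formulas used below:

```python
# Toy illustration of Theorem 5.12 in X = R with H_hat(x) = x (illustrative):
# M_i(x) = (1 + 1/i) x  graph-converges to  M(x) = x, and the associated
# resolvents (H_hat + lam*M_i)^{-1} converge pointwise to (H_hat + lam*M)^{-1}.

def resolvent_Mi(u, lam, i):
    return u / (1.0 + lam * (1.0 + 1.0 / i))

def resolvent_M(u, lam):
    return u / (1.0 + lam)

u, lam = 2.0, 0.7
gaps = [abs(resolvent_Mi(u, lam, i) - resolvent_M(u, lam))
        for i in (1, 10, 100, 1000)]
print(gaps)  # decreasing toward 0, matching (i) => (ii) of Theorem 5.12
```

Conversely, pointwise convergence of these resolvents for a single \(\lambda _{0}>0\) already pins down the limit operator, which is the content of the implication (iii) ⇒ (i).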
For \(i=1,2\), let \(X_{i}\) be a real Banach space and let \(A_{i},B_{i}:X_{i}\rightarrow X_{i}\), \(H_{i}:X_{i}\times X_{i}\rightarrow X_{i}\), \(F:X_{1}\times X_{2}\rightarrow X_{1}\) and \(G:X_{1}\times X_{2}\rightarrow X_{2}\) be nonlinear operators. Recently, Tang and Wang [26] considered and studied the SVI (3.1), where \(M:X_{1}\rightrightarrows X_{1}\) and \(N:X_{2}\rightrightarrows X_{2}\) are \(H_{1}(A_{1},B_{1})\)-accretive and \(H_{2}(A_{2},B_{2})\)-accretive set-valued operators, respectively. In order to present a characterization of the solution of the SVI (3.1) involving the \(H_{i}(\cdot,\cdot)\)-accretive operators M and N (\(i=1,2\)), Tang and Wang [26] gave the following conclusion by using the notion of the resolvent operators \(R^{H_{1}(\cdot,\cdot)}_{M,\lambda}\) and \(R^{H_{2}(\cdot,\cdot)}_{N,\rho}\).
Lemma 5.13
([26, Lemma 3.1])
Let \(X_{1}\) and \(X_{2}\) be two real smooth Banach spaces. Let \(A_{1},B_{1}:X_{1}\rightarrow X_{1}\), \(A_{2},B_{2}:X_{2}\rightarrow X_{2}\) be four single-valued operators, \(H_{1}:X_{1}\times X_{1}\rightarrow X_{1}\) be a single-valued mapping such that \(H_{1}(A_{1},B_{1})\) is \(\alpha _{1}\), \(\beta _{1}\)-generalized accretive with respect to \(A_{1}\), \(B_{1}\), respectively, with \(\alpha _{1}+\beta _{1}>0\), and \(H_{2}:X_{2}\times X_{2}\rightarrow X_{2}\) be a single-valued mapping such that \(H_{2}(A_{2},B_{2})\) is \(\alpha _{2}\), \(\beta _{2}\)-generalized accretive with respect to \(A_{2}\), \(B_{2}\), respectively, with \(\alpha _{2}+\beta _{2}>0\). Let \(M:X_{1}\rightrightarrows X_{1}\) be an \(H_{1}(\cdot,\cdot)\)-accretive set-valued mapping and \(N:X_{2}\rightrightarrows X_{2}\) be an \(H_{2}(\cdot,\cdot)\)-accretive set-valued mapping. Then, the following statements are equivalent:

(i)
\((a,b)\in X_{1}\times X_{2}\) is a solution of the problem (3.1) (involving an \(H_{1}(\cdot,\cdot)\)-accretive operator M and an \(H_{2}(\cdot,\cdot)\)-accretive operator N, that is, [26, problem (3.1)]);

(ii)
For any \(\lambda ,\rho >0\), \((a,b)\) satisfies
$$\begin{aligned} \begin{aligned} \textstyle\begin{cases} a=R^{H_{1}(\cdot,\cdot)}_{M,\lambda}[H_{1}(A_{1}(a),B_{1}(a))-\lambda F(a,b)], \\ b=R^{H_{2}(\cdot,\cdot)}_{N,\rho}[H_{2}(A_{2}(b),B_{2}(b))-\rho G(a,b)]; \end{cases}\displaystyle \end{aligned} \end{aligned}$$ 
(iii)
For some \(\lambda _{0}>0\) and \(\rho _{0}>0\), \((a,b)\) satisfies
$$\begin{aligned} \begin{aligned} \textstyle\begin{cases} a=R^{H_{1}(\cdot,\cdot)}_{M,\lambda _{0}}[H_{1}(A_{1}(a),B_{1}(a))-\lambda _{0} F(a,b)], \\ b=R^{H_{2}(\cdot,\cdot)}_{N,\rho _{0}}[H_{2}(A_{2}(b),B_{2}(b))-\rho _{0} G(a,b)]. \end{cases}\displaystyle \end{aligned} \end{aligned}$$
Proof
Defining the mappings \(\widehat{H}_{i}:X_{i}\rightarrow X_{i}\) for \(i=1,2\) as \(\widehat{H}_{i}(x_{i}):=H_{i}(A_{i}x_{i},B_{i}x_{i})\) for all \(x_{i}\in X_{i}\), in the light of the assumptions it follows from Proposition 5.2(i) that the operators \(\widehat{H}_{i}\) (\(i=1,2\)) are strictly accretive. At the same time, invoking Remark 5.4, we infer that M and N are \(\widehat{H}_{1}\)-accretive and \(\widehat{H}_{2}\)-accretive operators, respectively, and so the resolvent operators \(R^{H_{1}(\cdot,\cdot)}_{M,\lambda}\) and \(R^{H_{2}(\cdot,\cdot)}_{N,\rho}\) become actually the same resolvent operators \(R^{\widehat{H}_{1}}_{M,\lambda}\) and \(R^{\widehat{H}_{2}}_{N,\rho}\), respectively. Now, we note that all the conditions of Lemma 3.1 are satisfied. Hence, Lemma 3.1 ensures that the following statements are equivalent:

(i)
\((a,b)\in X_{1}\times X_{2}\) is a solution of the SVI (3.1);

(ii)
For any \(\lambda ,\rho >0\), \((a,b)\) satisfies
$$\begin{aligned} \begin{aligned} \textstyle\begin{cases} a=R^{\widehat{H}_{1}}_{M,\lambda}[\widehat{H}_{1}(a)-\lambda F(a,b)]=R^{H_{1}(\cdot,\cdot)}_{M, \lambda}[H_{1}(A_{1}(a),B_{1}(a))-\lambda F(a,b)], \\ b=R^{\widehat{H}_{2}}_{N,\rho}[ \widehat{H}_{2}(b)-\rho G(a,b)]=R^{H_{2}(\cdot,\cdot)}_{N,\rho}[H_{2}(A_{2}(b),B_{2}(b))- \rho G(a,b)]; \end{cases}\displaystyle \end{aligned} \end{aligned}$$ 
(iii)
For some \(\lambda _{0}>0\) and \(\rho _{0}>0\), \((a,b)\) satisfies
$$\begin{aligned} \begin{aligned} \textstyle\begin{cases} a=R^{\widehat{H}_{1}}_{M,\lambda _{0}}[\widehat{H}_{1}(a)-\lambda _{0} F(a,b)]=R^{H_{1}(\cdot,\cdot)}_{M,\lambda _{0}}[H_{1}(A_{1}(a),B_{1}(a))- \lambda _{0} F(a,b)], \\ b=R^{\widehat{H}_{2}}_{N,\rho _{0}}[ \widehat{H}_{2}(b)-\rho _{0} G(a,b)]=R^{H_{2}(\cdot,\cdot)}_{N,\rho _{0}}[H_{2}(A_{2}(b),B_{2}(b))- \rho _{0} G(a,b)]. \end{cases}\displaystyle \end{aligned} \end{aligned}$$
This completes the proof. □
Taking into account the abovementioned argument, it is significant to emphasize that, contrary to the claim of the authors in [26], Lemma 5.13 (that is, [26, Lemma 3.1]) actually gives a characterization of the solution of the SVI (3.1) involving an \(\widehat{H}_{1}\)-accretive mapping M and an \(\widehat{H}_{2}\)-accretive mapping N, not of the SVI (3.1) involving \(H_{1}(\cdot,\cdot)\)-accretive and \(H_{2}(\cdot,\cdot)\)-accretive mappings M and N (that is, [26, the problem (3.1)]). Meanwhile, it should be remarked that throughout Sect. 3 of [26], the spaces \(X_{i}\) (\(i=1,2\)) are assumed to be real Banach spaces such that for each \(i\in \{1,2\}\), the normalized duality mapping \(J_{i}:X_{i}\rightrightarrows X_{i}^{*}\) is single-valued. It is known that, in general, \(J_{i}\) (\(i=1,2\)) is single-valued if and only if \(X_{i}\) is smooth. Hence, in the following, we assume that \(X_{i}\) (\(i=1,2\)) are real smooth Banach spaces, as we have done in the context of Lemma 5.13.
Under some suitable conditions, Tang and Wang [26] proved the existence of a unique solution for [26, problem (3.1)] (that is, the SVI (3.1) involving an \(H_{1}(\cdot,\cdot)\)-accretive mapping M and an \(H_{2}(\cdot,\cdot)\)-accretive mapping N) as follows.
Theorem 5.14
([26, Theorem 3.1])
Let \(X_{1}\), \(X_{2}\), \(A_{1}\), \(B_{1}\), \(A_{2}\), \(B_{2}\), \(H_{1}\), \(H_{2}\), M, N be the same as in Lemma 5.13. Furthermore, assume that \(H_{1}(A_{1},B_{1})\) is \(r_{1}\), \(r_{2}\)-Lipschitz continuous with respect to \(A_{1}\), \(B_{1}\), respectively, \(H_{2}(A_{2},B_{2})\) is \(k_{1}\), \(k_{2}\)-Lipschitz continuous with respect to \(A_{2}\), \(B_{2}\), respectively, \(F:X_{1}\times X_{2}\rightarrow X_{1}\) is \(\tau _{1}\)-Lipschitz continuous with respect to its first argument and \(\tau _{2}\)-Lipschitz continuous with respect to its second argument, and \(G:X_{1}\times X_{2}\rightarrow X_{2}\) is \(\theta _{1}\)-Lipschitz continuous with respect to its first argument and \(\theta _{2}\)-Lipschitz continuous with respect to its second argument. If the following inequalities hold:
then the SVI (3.1) (with an \(H_{1}(\cdot,\cdot)\)-accretive mapping M and an \(H_{2}(\cdot,\cdot)\)-accretive mapping N, that is, [26, the problem (3.1)]) admits a unique solution.
Proof
Let us define, for \(i=1,2\), the mapping \(\widehat{H}_{i}:X_{i}\rightarrow X_{i}\) by \(\widehat{H}_{i}(x_{i})=H_{i}(A_{i}x_{i},B_{i}x_{i})\) for all \(x_{i}\in X_{i}\). Since \(H_{1}(A_{1},B_{1})\) is \(\alpha _{1}\), \(\beta _{1}\)-generalized accretive with respect to \(A_{1}\), \(B_{1}\), respectively, with \(\alpha _{1}+\beta _{1}>0\), and \(r_{1}\), \(r_{2}\)-Lipschitz continuous, from parts (i) and (ii) of Proposition 5.2 we conclude that \(\widehat{H}_{1}\) is \((\alpha _{1}+\beta _{1})\)-strongly accretive and \((r_{1}+r_{2})\)-Lipschitz continuous. By an argument analogous to the previous one, from the assumptions and Proposition 5.2 it follows that the operator \(\widehat{H}_{2}\) is \((\alpha _{2}+\beta _{2})\)-strongly accretive and \((k_{1}+k_{2})\)-Lipschitz continuous. Furthermore, thanks to Remark 5.4 we deduce that M and N are \(\widehat{H}_{1}\)-accretive and \(\widehat{H}_{2}\)-accretive mappings, respectively. Then, the problem (3.1) in [26] involving an \(H_{1}(\cdot,\cdot)\)-accretive mapping M and an \(H_{2}(\cdot,\cdot)\)-accretive mapping N coincides exactly with the SVI (3.1) involving an \(\widehat{H}_{1}\)-accretive mapping M and an \(\widehat{H}_{2}\)-accretive mapping N. Taking \(\varrho _{i}=\alpha _{i}+\beta _{i}\) (\(i=1,2\)), \(r=r_{1}+r_{2}\) and \(k=k_{1}+k_{2}\), we have \(\frac{r}{\varrho _{1}}=\frac{r_{1}+r_{2}}{\alpha _{1}+\beta _{1}}<1\) and \(\frac{k}{\varrho _{2}}=\frac{k_{1}+k_{2}}{\alpha _{2}+\beta _{2}}<1\). We now note that all the conditions of Theorem 3.3 are satisfied and so, in accordance with Theorem 3.3, the SVI (3.1) ([26, the problem (3.1)]) admits a unique solution. This completes the proof. □
Based on Lemma 5.13 and by assuming that for all \(i\geq 0\), \(M_{i}\) is an \(H_{1}(\cdot,\cdot)\)-accretive mapping with respect to \(A_{1}\) and \(B_{1}\), and \(N_{i}\) is an \(H_{2}(\cdot,\cdot)\)-accretive mapping with respect to \(A_{2}\) and \(B_{2}\), Tang and Wang [26] constructed the following iterative algorithm for finding an approximate solution of the SVI (3.1) involving an \(H_{1}(\cdot,\cdot)\)-accretive mapping M and an \(H_{2}(\cdot,\cdot)\)-accretive mapping N (that is, [26, the problem (3.1)]).
Algorithm 5.15
([26, Algorithm 3.1])
Step 0: Choose some \(\lambda _{0}>0\) and \(\rho _{0}>0\) to satisfy the two inequalities presented in (3.16) (of [26]). Select an initial point \((a_{0},b_{0})\in X_{1}\times X_{2}\). Set \(i:=0\).
Step i: Given \((a_{i},b_{i})\in X_{1}\times X_{2}\), compute \((a_{i+1},b_{i+1})\in X_{1}\times X_{2}\) by
$$\begin{aligned} \textstyle\begin{cases} a_{i+1}=\alpha _{i}a_{i}+(1-\alpha _{i})R^{H_{1}(\cdot,\cdot)}_{M_{i},\lambda _{0}}[H_{1}(A_{1}(a_{i}),B_{1}(a_{i}))-\lambda _{0} F(a_{i},b_{i})], \\ b_{i+1}=\alpha _{i}b_{i}+(1-\alpha _{i})R^{H_{2}(\cdot,\cdot)}_{N_{i},\rho _{0}}[H_{2}(A_{2}(b_{i}),B_{2}(b_{i}))-\rho _{0} G(a_{i},b_{i})] \end{cases}\displaystyle \end{aligned}$$
for \(i=0,1,2,\ldots \) , where \(0\leq \alpha _{i}<1\) with \(\limsup_{i}\alpha _{i}<1\).
It is also remarkable that by defining the mapping \(\widehat{H}_{i}:X_{i}\rightarrow X_{i}\) for \(i=1,2\) by \(\widehat{H}_{i}(x_{i}):=H_{i}(A_{i}x_{i},B_{i}x_{i})\) for all \(x_{i}\in X_{i}\), with the help of the assumptions and utilizing Proposition 5.2(i), we infer that the operators \(\widehat{H}_{i}\) (\(i=1,2\)) are strictly accretive. In the light of Remark 5.4 we also conclude that M and N are \(\widehat{H}_{1}\)-accretive and \(\widehat{H}_{2}\)-accretive operators, respectively. Meanwhile, the resolvent operators \(R^{H_{1}(\cdot,\cdot)}_{M_{i},\lambda _{0}}\) and \(R^{H_{2}(\cdot,\cdot)}_{N_{i},\rho _{0}}\) (\(i\geq 0\)) become actually the same resolvent operators \(R^{\widehat{H}_{1}}_{M_{i},\lambda _{0}}\) and \(R^{\widehat{H}_{2}}_{N_{i},\rho _{0}}\), respectively. Then, for each \(i\geq 0\), it yields
$$\begin{aligned} \textstyle\begin{cases} a_{i+1}=\alpha _{i}a_{i}+(1-\alpha _{i})R^{\widehat{H}_{1}}_{M_{i},\lambda _{0}}[\widehat{H}_{1}(a_{i})-\lambda _{0} F(a_{i},b_{i})], \\ b_{i+1}=\alpha _{i}b_{i}+(1-\alpha _{i})R^{\widehat{H}_{2}}_{N_{i},\rho _{0}}[\widehat{H}_{2}(b_{i})-\rho _{0} G(a_{i},b_{i})]. \end{cases}\displaystyle \end{aligned}$$
Thereby, we find that Algorithm 5.15 actually becomes the same as Algorithm 3.14 and is not a new one.
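The relaxed scheme of Algorithm 5.15 can be run directly on one-dimensional toy data. The following is a hedged Python sketch; all mappings and parameters below are illustrative choices, not the general setting of [26]: \(X_{1}=X_{2}=\mathbb{R}\), \(\widehat{H}_{1}=\widehat{H}_{2}=I\), \(M_{i}(x)=N_{i}(x)=(1+\frac{1}{i+1})x\), \(F(a,b)=a+\frac{b}{4}\), \(G(a,b)=\frac{a}{4}+b\), for which \((a,b)=(0,0)\) solves the corresponding toy system.

```python
# Toy run of the iterative scheme a_{i+1} = alpha_i*a_i + (1-alpha_i)*R_i[...]
# (illustrative 1-D data, not the general setting of the paper).

def resolvent(z, lam, i):
    """(I + lam*M_i)^{-1}(z) for the toy choice M_i(x) = (1 + 1/(i+1))*x."""
    return z / (1.0 + lam * (1.0 + 1.0 / (i + 1)))

def F(a, b): return a + b / 4.0
def G(a, b): return a / 4.0 + b

lam = rho = 1.0
a, b = 5.0, -3.0
for i in range(200):
    alpha_i = 0.5                     # 0 <= alpha_i < 1, limsup alpha_i < 1
    a_new = alpha_i * a + (1 - alpha_i) * resolvent(a - lam * F(a, b), lam, i)
    b_new = alpha_i * b + (1 - alpha_i) * resolvent(b - rho * G(a, b), rho, i)
    a, b = a_new, b_new
print(a, b)  # both iterates approach 0, the solution of this toy system
```

For these toy data the iteration map is a contraction, so the strong (here: absolute-value) convergence asserted by the theory is visible after a few hundred steps.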
Finally, Tang and Wang [26] closed their paper with its most important result, concerning the strong convergence of the iterative sequence \(\{(a_{i},b_{i})\}_{i=0}^{\infty}\) generated by Algorithm 5.15 to the unique solution of the SVI (3.1) involving an \(H_{1}(\cdot,\cdot)\)-accretive mapping M and an \(H_{2}(\cdot,\cdot)\)-accretive mapping N (that is, [26, the problem (3.1)]).
Theorem 5.16
([26, Theorem 3.2])
Let \(X_{1}\), \(X_{2}\), \(A_{1}\), \(B_{1}\), \(A_{2}\), \(B_{2}\), \(H_{1}\), \(H_{2}\), M, N, F, G be the same as in Theorem 5.14 (that is, [26, Theorem 3.1]). Assume that the following inequalities hold:
Furthermore, let \(M_{i}:X_{1}\rightrightarrows X_{1}\) (\(i=0,1,2,\dots \)) be \(H_{1}(\cdot,\cdot)\)-accretive set-valued mappings such that \(M_{i}\stackrel{G}{\longrightarrow}M\) and \(N_{i}:X_{2}\rightrightarrows X_{2}\) be \(H_{2}(\cdot,\cdot)\)-accretive set-valued mappings such that \(N_{i}\stackrel{G}{\longrightarrow}N\). Then, the sequence generated by Algorithm 5.15 (that is, [26, Algorithm 3.1]) converges strongly to the unique solution of the SVI (3.1) (involving an \(H_{1}(\cdot,\cdot)\)-accretive mapping M and an \(H_{2}(\cdot,\cdot)\)-accretive mapping N, that is, [26, the problem (3.1)]).
Proof
Define, for \(i=1,2\), the operators \(\widehat{H}_{i}:X_{i}\rightarrow X_{i}\) by \(\widehat{H}_{i}(x_{i}):=H_{i}(A_{i}x_{i},B_{i}x_{i})\) for all \(x_{i}\in X_{i}\). In light of the assumptions and by the same arguments as in the proof of Theorem 5.14, we conclude that \(\widehat{H}_{1}\) is an \((\alpha _{1}+\beta _{1})\)-strongly accretive and \((r_{1}+r_{2})\)-Lipschitz continuous mapping, \(\widehat{H}_{2}\) is an \((\alpha _{2}+\beta _{2})\)-strongly accretive and \((k_{1}+k_{2})\)-Lipschitz continuous mapping, for all \(i\geq 0\) the mappings \(M_{i}\) and \(N_{i}\) are \(\widehat{H}_{1}\)-accretive and \(\widehat{H}_{2}\)-accretive, respectively, and the problem (3.1) in [26], involving an \(H_{1}(\cdot,\cdot)\)-accretive mapping M and an \(H_{2}(\cdot,\cdot)\)-accretive mapping N, coincides exactly with the SVI (3.1) involving an \(\widehat{H}_{1}\)-accretive mapping M and an \(\widehat{H}_{2}\)-accretive mapping N. At the same time, Algorithm 5.15 coincides with Algorithm 3.14. Taking \(\varrho _{i}=\alpha _{i}+\beta _{i}\) (\(i=1,2\)), \(r=r_{1}+r_{2}\), and \(k=k_{1}+k_{2}\), we obtain \(\frac{r}{\varrho _{1}}=\frac{r_{1}+r_{2}}{\alpha _{1}+\beta _{1}}<1\) and \(\frac{k}{\varrho _{2}}=\frac{k_{1}+k_{2}}{\alpha _{2}+\beta _{2}}<1\). Since all the conditions of Corollary 4.9 are thus satisfied, Corollary 4.9 ensures that the iterative sequence \(\{(a_{i},b_{i})\}_{i=0}^{\infty}\) generated by Algorithm 5.15 converges strongly to the unique solution of the problem (3.1) in [26]. This completes the proof. □
Availability of data and materials
All data generated or analyzed during this study are included in this manuscript.
References
Hartman, P., Stampacchia, G.: On some nonlinear elliptic differential functional equations. Acta Math. 115, 271–310 (1966)
Browder, F.E.: Fixed point theory and nonlinear problems. Bull. Am. Math. Soc. (N.S.) 9(1), 1–39 (1983)
Gorniewicz, L.: Topological Fixed Point Theory of Multivalued Mappings. Springer, Berlin (2006)
Ansari, Q.H., Balooee, J., Yao, J.C.: Extended general nonlinear quasi-variational inequalities and projection dynamical systems. Taiwan. J. Math. 17(4), 1321–1352 (2013)
Carl, S., Le, V.K., Motreanu, D.: Nonsmooth Variational Problems and Their Inequalities: Comparison Principles and Applications. Springer, New York (2007)
Alimohammady, M., Balooee, J., Cho, Y.J., Roohi, M.: New perturbed finite step iterative algorithms for a system of extended generalized nonlinear mixed quasi-variational inclusions. Comput. Math. Appl. 60, 2953–2970 (2010)
Chang, S.S.: Set-valued variational inclusions in Banach spaces. J. Math. Anal. Appl. 248, 438–454 (2000)
Ding, X.P., Feng, H.R.: The p-step iterative algorithm for a system of generalized mixed quasi-variational inclusions with \((\mathrm{A},\eta )\)-accretive operators in q-uniformly smooth Banach spaces. J. Comput. Appl. Math. 220, 163–174 (2008)
Ding, X.P., Xia, F.Q.: A new class of completely generalized quasi-variational inclusions in Banach spaces. J. Comput. Appl. Math. 147, 369–383 (2002)
Ding, X.P., Yao, J.C.: Existence and algorithm of solutions for mixed quasi-variational-like inclusions in Banach spaces. Comput. Math. Appl. 49, 857–869 (2005)
Fang, Y.P., Huang, N.J.: H-accretive operators and resolvent operator technique for solving variational inclusions in Banach spaces. Appl. Math. Lett. 17, 647–653 (2004)
Fang, Y.P., Huang, N.J.: H-monotone operator and resolvent operator technique for variational inclusions. Appl. Math. Comput. 145, 795–803 (2003)
Fang, Y.P., Huang, N.J.: Iterative algorithm for a system of variational inclusions involving H-accretive operators in Banach spaces. Acta Math. Hung. 108(3), 183–195 (2005)
Fang, Y.P., Huang, N.J., Thompson, H.B.: A new system of variational inclusions with \((\mathrm{H},\eta )\)-monotone operators in Hilbert spaces. Comput. Math. Appl. 49, 365–374 (2005)
Huang, N.J., Fang, Y.P.: A new class of general variational inclusions involving maximal η-monotone mappings. Publ. Math. (Debr.) 62(1–2), 83–98 (2003)
Huang, N.J., Fang, Y.P.: Generalized m-accretive mappings in Banach spaces. J. Sichuan Univ. 38(4), 591–592 (2001)
Lan, H.Y., Cho, Y.J., Verma, R.U.: Nonlinear relaxed cocoercive variational inclusions involving \((\mathrm{A},\eta )\)-accretive mappings in Banach spaces. Comput. Math. Appl. 51(9–10), 1529–1538 (2006)
Tan, B., Qin, X., Yao, J.C.: Strong convergence of self-adaptive inertial algorithms for solving split variational inclusion problems with applications. J. Sci. Comput. 87, 20 (2021)
Tang, Y., Gibali, A.: New self-adaptive step size algorithms for solving split variational inclusion problems and its applications. Numer. Algorithms 83(1), 305–331 (2020)
Balooee, J.: Iterative algorithm with mixed errors for solving a new system of generalized nonlinear variational-like inclusions and fixed point problems in Banach spaces. Chin. Ann. Math. 34B(4), 593–622 (2013)
Lee, C.H., Ansari, Q.H., Yao, J.C.: A perturbed algorithm for strongly nonlinear variational-like inclusions. Bull. Aust. Math. Soc. 62, 417–426 (2000)
Li, X., Huang, N.J.: Graph convergence for the \(\mathrm{H}(\cdot,\cdot)\)-accretive operator in Banach spaces. Appl. Math. Comput. 217, 9053–9061 (2011)
Kazmi, K.R., Khan, H.H., Ahmad, N.: Existence and iterative approximation of solutions of a system of general variational inclusions. Appl. Math. Comput. 215, 110–117 (2009)
Kazmi, K.R., Khan, F.A.: Iterative approximation of a unique solution of a system of variational-like inclusions in real q-uniformly smooth Banach spaces. Nonlinear Anal. 67, 917–929 (2007)
Peng, J.W., Zhu, D.L.: A system of variational inclusions with P-η-accretive operators. J. Comput. Appl. Math. 216, 198–209 (2008)
Tang, G.J., Wang, X.: A perturbed algorithm for a system of variational inclusions involving \(\mathrm{H}(\cdot,\cdot)\)-accretive operators in Banach spaces. J. Comput. Appl. Math. 272, 1–7 (2014)
Verma, R.U.: A-monotonicity and applications to nonlinear variational inclusion problems. J. Appl. Math. Stoch. Anal. 17(2), 193–195 (2004)
Verma, R.U.: General class of implicit variational inclusions and graph convergence on A-maximal relaxed monotonicity. J. Optim. Theory Appl. 155, 196–214 (2012)
Zou, Y.Z., Huang, N.J.: A new system of variational inclusions involving \(\mathrm{H}(\cdot,\cdot)\)-accretive operator in Banach spaces. Appl. Math. Comput. 212(1), 135–144 (2009)
Ding, X.P., Luo, C.L.: Perturbed proximal point algorithms for general quasi-variational-like inclusions. J. Comput. Appl. Math. 113, 153–165 (2000)
Xia, F.Q., Huang, N.J.: Variational inclusions with a general H-monotone operator in Banach spaces. Comput. Math. Appl. 54, 24–30 (2007)
Verma, R.U.: Approximation-solvability of a class of A-monotone variational inclusion problems. J. Korean Soc. Ind. Appl. Math. 8, 55–66 (2004)
Verma, R.U.: Sensitivity analysis for generalized strongly monotone variational inclusions based on the \((\mathrm{A},\eta )\)-resolvent operator technique. Appl. Math. Lett. 19, 1409–1413 (2006)
Sun, J.H., Zhang, L.W., Xiao, X.T.: An algorithm based on resolvent operators for solving variational inequalities in Hilbert spaces. Nonlinear Anal. 69, 3344–3357 (2008)
Zou, Y.Z., Huang, N.J.: \(\mathrm{H}(\cdot,\cdot)\)-accretive operator with an application for solving variational inclusions in Banach spaces. Appl. Math. Comput. 204, 809–816 (2008)
Attouch, H.: Variational Convergence for Functions and Operators. Applicable Mathematics Series. Pitman, London (1984)
Banach, S.: Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fundam. Math. 3, 133–181 (1922)
Yang, J., Liu, H.: The subgradient extragradient method extended to pseudomonotone equilibrium problems and fixed point problems in Hilbert space. Optim. Lett. 14, 1803–1816 (2020)
Balooee, J., Cho, Y.J.: Algorithms for solutions of extended general mixed variational inequalities and fixed points. Optim. Lett. 7, 1929–1955 (2013)
Majee, P., Nahak, C.: A hybrid viscosity iterative method with averaged mappings for split equilibrium problems and fixed point problems. Numer. Algorithms 74(2), 609–635 (2017)
Thuy, N.T.T., Hieu, P.T., Strodiot, J.J.: Regularization methods for accretive variational inequalities over the set of common fixed points of nonexpansive semigroups. Optimization 65(8), 1553–1567 (2016)
Zhao, X., Yao, Y.: Modified extragradient algorithms for solving monotone variational inequalities and fixed point problems. Carpath. J. Math. 69(9), 1987–2002 (2020)
Thong, D.V., Hieu, D.V.: Some extragradient-viscosity algorithms for solving variational inequality problems and fixed point problems. Numer. Algorithms 82, 761–789 (2019)
Yao, Y., Shahzad, N., Yao, J.C.: Convergence of Tseng-type self-adaptive algorithms for variational inequalities and fixed point problems. Carpath. J. Math. 37(3), 541–550 (2021)
Ceng, L.C., Petrusel, A., Qin, X., Yao, J.C.: A modified inertial subgradient extragradient method for solving pseudomonotone variational inequalities and common fixed point problems. Fixed Point Theory 21, 93–108 (2020)
Yao, Y., Postolache, M., Yao, J.C.: Convergence of an extragradient algorithm for fixed point and variational inequality problems. J. Nonlinear Convex Anal. 20(12), 2623–2631 (2019)
Goebel, K., Kirk, W.A.: A fixed point theorem for asymptotically nonexpansive mappings. Proc. Am. Math. Soc. 35, 171–174 (1972)
Sahu, D.R.: Fixed points of demicontinuous nearly Lipschitzian mappings in Banach spaces. Comment. Math. Univ. Carol. 46, 653–666 (2005)
Alber, Y.I., Chidume, C.E., Zegeye, H.: Approximating fixed points of total asymptotically nonexpansive mappings. Fixed Point Theory Appl. 2006, Article ID 10673 (2006)
Kiziltunc, H., Purtas, Y.: On weak and strong convergence of an explicit iteration process for a total asymptotically quasi-nonexpansive mapping in Banach space. Filomat 28(8), 1699–1710 (2014)
Suzuki, T.: Fixed point theorems and convergence theorems for some generalized nonexpansive mappings. J. Math. Anal. Appl. 340(2), 1088–1095 (2008)
Chidume, C.: Geometric Properties of Banach Spaces and Nonlinear Iterations. Lecture Notes in Mathematics. Springer, London (2009)
Browder, F.E.: Nonlinear operators and nonlinear equations of evolution in Banach spaces. Proc. of Symposia in Pure Math. XVIII, Part 2 (1976)
Kato, T.: Nonlinear semigroups and evolution equations. J. Math. Soc. Jpn. 19, 508–520 (1967)
Browder, F.E.: Nonlinear mappings of nonexpansive and accretive type in Banach spaces. Bull. Am. Math. Soc. 73, 875–882 (1967)
Martin, R.H.: A global existence theorem for autonomous differential equations in Banach spaces. Proc. Am. Math. Soc. 26, 307–314 (1970)
Martin, R.H.: Nonlinear Operators and Differential Equations in Banach Spaces. Interscience, New York (1976)
Verma, R.U.: General system of A-monotone nonlinear variational inclusion problems with applications. J. Optim. Theory Appl. 131(1), 151–157 (2006)
Polyak, B.T.: Introduction to Optimization. Optimization Software, New York (1987)
Acknowledgements
Not applicable.
Funding
The research of the second author was supported by the Grant MOST 108-2115-M-039-005-MY3.
Author information
Contributions
JB was a major contributor in writing the manuscript. JCY performed the validation and formal analysis. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Balooee, J., Yao, J.-C. Graph convergence with an application for system of variational inclusions and fixed-point problems. J Inequal Appl 2022, 112 (2022). https://doi.org/10.1186/s13660-022-02848-3
MSC
 47H05
 47H09
 47J20
 47J22
 47J25
 49J40
Keywords
 System of variational inclusions
 Ĥ-accretive mapping
 Total uniformly L-Lipschitzian mapping
 \(H(\cdot,\cdot)\)-accretive mapping
 Resolvent-operator technique
 Iterative algorithm
 Fixed-point problem
 Convergence analysis