- Research
- Open Access
A new subgradient extragradient method for solving the split modified system of variational inequality problems and fixed point problem
Journal of Inequalities and Applications volume 2022, Article number: 51 (2022)
Abstract
We introduce a new subgradient extragradient algorithm utilizing the concept of the set of solutions of the split modified system of variational inequality problems (SMSVIP). Our main theorem is a weak convergence theorem for this algorithm for approximating a solution of the fixed point problem in a real Hilbert space. We also apply these results to the split minimization problem. In the last section, we provide an example to illustrate the potential of our main theorem.
1 Introduction
Let C be a nonempty closed convex subset of a real Hilbert space H. The mapping \(T:C\rightarrow C\) is called nonexpansive if \(\|Tx-Ty\|\leq \|x-y\|\) for all \(x,y\in C\). An element \(x\in C\) is said to be a fixed point of T if \(Tx=x\), and \(F(T)=\{x\in C: Tx=x\}\) denotes the set of fixed points of T. The fixed point problem has been widely studied and developed in the literature; see [5, 11, 26, 27, 29] and the references therein.
We now recall some well-known concepts and results in a real Hilbert space H.
The variational inequality problem (VIP) for a mapping \(A:C\rightarrow H\) is to find a point \(x^{*}\in C\) such that

$$\begin{aligned} \bigl\langle Ax^{*},y-x^{*}\bigr\rangle \geq 0 \end{aligned}$$(1)

for all \(y\in C\). The set of all solutions of the variational inequality is denoted by \(VI(C,A)\). Since its inception by Stampacchia [24] in 1964, the variational inequality problem has attracted interest in several topics arising in structural analysis, physics, economics, optimization, and applied sciences; see [1, 3, 6, 8, 11–13, 15, 18, 20, 30, 32] and the references therein.
Several algorithms for solving the VIP are projection algorithms that employ projections onto the feasible set C of the VIP, or onto some related set, in order to iteratively reach a solution. In 1976, Korpelevich [19] proposed an algorithm for solving the VIP in a Euclidean space, known as the extragradient method. In each iteration of her algorithm, in order to get the next iterate \(x^{k+1}\), two orthogonal projections onto C are calculated, according to the following iterative step. Given the current iterate \(x^{k}\), calculate
$$\begin{aligned} &y^{k}=P_{C}\bigl(x^{k}-\tau Ax^{k}\bigr), \\ &x^{k+1}=P_{C}\bigl(x^{k}-\tau Ay^{k}\bigr) \end{aligned}$$(2)

for all \(k\in \mathbb{N}\), where τ is some positive number and \(P_{C}\) denotes the Euclidean least distance projection onto C.
The convergence was proved in [19] under the assumptions of Lipschitz continuity and pseudo-monotonicity. However, two projections onto C still have to be calculated per iteration. This is acceptable if the set C is simple enough that projections onto it are easily computed; but if C is a general closed and convex set, a minimum distance problem has to be solved twice in order to obtain the next iterate, which might seriously affect the efficiency of the extragradient method. Korpelevich’s extragradient method has been widely studied in the literature; see [2, 4, 7, 9, 14, 16, 17, 22, 28, 31] and the references therein.
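To make the two-projection structure concrete, the following minimal sketch (ours, not from the paper) runs Korpelevich's iteration in the simple case where C is a box, so that \(P_{C}\) is a componentwise clip; the operator F, the box bounds, and the step size τ are hypothetical choices for illustration.

```python
import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box C = [lo, hi]^n (componentwise clip)
    return np.clip(x, lo, hi)

def extragradient(F, x0, lo, hi, tau=0.1, iters=500):
    # Korpelevich's method: two projections onto C per iteration,
    # y_k = P_C(x_k - tau*F(x_k)),  x_{k+1} = P_C(x_k - tau*F(y_k))
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project_box(x - tau * F(x), lo, hi)
        x = project_box(x - tau * F(y), lo, hi)
    return x

# hypothetical demo operator: F(x) = x - b is monotone and 1-Lipschitz,
# and the VIP solution on the box is P_C(b)
b = np.array([2.0, -3.0])
sol = extragradient(lambda x: x - b, x0=np.zeros(2), lo=-1.0, hi=1.0)
print(sol)  # approaches P_C(b) = [1, -1]
```

Here the box makes both projections trivial; the difficulty the text describes arises exactly when `project_box` must be replaced by an iterative minimum distance computation.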
In the past decade, Censor et al. [10] developed the subgradient extragradient algorithm in a Euclidean space, in which they replaced the second projection (2) onto C by a projection onto a specific constructible half-space as follows:
Algorithm 1
(The subgradient extragradient algorithm)
\(\mathbf{Step\ 0:}\) Select a starting point \(x^{0}\in H\) and \(\tau >0\), and set \(k=0\).
\(\mathbf{Step\ 1:}\) Given the current iterate \(x^{k}\), compute

$$\begin{aligned} y^{k}=P_{C}\bigl(x^{k}-\tau Ax^{k}\bigr), \end{aligned}$$

construct the half-space \(T_{k}\), the bounding hyperplane of which supports C at \(y^{k}\),

$$\begin{aligned} T_{k}:=\bigl\{ w\in H : \bigl\langle \bigl(x^{k}-\tau Ax^{k}\bigr)-y^{k}, w-y^{k}\bigr\rangle \leq 0\bigr\} , \end{aligned}$$

and calculate the next iterate

$$\begin{aligned} x^{k+1}=P_{T_{k}}\bigl(x^{k}-\tau Ay^{k}\bigr). \end{aligned}$$

\(\mathbf{Step\ 2:}\) If \(x^{k}=y^{k}\), then stop. Otherwise, set \(k\leftarrow k+1\) and return to Step 1.
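The computational point of Algorithm 1 is that the half-space \(T_{k}\) admits a closed-form projection. The sketch below (our own illustration, not the authors' code) replaces the second projection of the extragradient method by this cheap half-space projection; the operator F, the feasible set, and the step size are hypothetical.

```python
import numpy as np

def project_halfspace(w, a, b):
    # closed-form projection onto the half-space {w : <a, w> <= b}
    viol = a @ w - b
    return w if viol <= 0 else w - (viol / (a @ a)) * a

def subgradient_extragradient(F, proj_C, x0, tau=0.1, iters=500):
    # Algorithm 1: the second projection onto C is replaced by a projection
    # onto the half-space T_k whose bounding hyperplane supports C at y_k
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        z = x - tau * F(x)
        y = proj_C(z)
        a = z - y                  # outward normal of T_k at y_k
        if a @ a < 1e-16:          # x_k = y_k (numerically): stop condition
            x = y
            continue
        x = project_halfspace(x - tau * F(y), a, a @ y)
    return x

# hypothetical demo: F(x) = x - b on the box C = [-1, 1]^2
b = np.array([2.0, -3.0])
F = lambda x: x - b
proj_C = lambda x: np.clip(x, -1.0, 1.0)
sol = subgradient_extragradient(F, proj_C, np.zeros(2))
print(sol)  # approaches the VIP solution [1, -1]
```

Only one projection onto C is computed per iteration; the second one is an inexpensive formula, which is the saving the text describes.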
Furthermore, under some control conditions, they proved a weak convergence theorem for this algorithm.
Very recently, Sripattanet and Kangtunyakarn [23] introduced the following split modified system of variational inequality problems (SMSVIP), which involves finding \((x^{*},y^{*},z^{*})\in C\times C\times C\) such that
and finding \((\bar{x^{*}}=Ax^{*}, \bar{y^{*}}=Ay^{*}, \bar{z^{*}}=Az^{*})\in Q\times Q\times Q\) such that
where \(D_{1},D_{2},D_{3}:C\rightarrow H_{1}\), \(\bar{D_{1}},\bar{D_{2}},\bar{D_{3}}:Q\rightarrow H_{2}\) are six different mappings, \({\zeta,\bar{\zeta } >0,}\) and \(a\in [0,1]\). The sets of all solutions of (4) and (5) are denoted by \(\Psi _{D_{1},D_{2},D_{3}}\) and \(\Psi _{\bar{D_{1}},\bar{D_{2}},\bar{D_{3}}}\), respectively. The set of all solutions of the SMSVIP is denoted by \(\Psi ^{D_{1},D_{2},D_{3}}_{\bar{D_{1}},\bar{D_{2}},\bar{D_{3}}}\), that is,
If we put \(a=0\) in (4) and (5), we have
and
which is a modification of the split general system of variational inequalities (SVIP) [21].
Based on the above works and the observation of a half-space in Algorithm 1 related to the VIP, we introduce a new half-space related to the SMSVIP and prove a weak convergence theorem for the sequence \(\{x_{n}\}\) generated by our new algorithm for approximating the solutions of the SMSVIP. Moreover, using our main result, we obtain additional results involving the split minimization problem. Finally, we present a numerical example to illustrate the computational performance of the proposed algorithm.
2 Preliminaries
We denote weak convergence by \(\rightharpoonup \) and strong convergence by \(\rightarrow \). For every \(x\in \mathcal{H}\), there exists a unique nearest point \(P_{C}x\) in C such that \(\|x-P_{C}x\|\leq \|x-y\|\) for all \(y\in C\). The mapping \(P_{C}\) is called the metric projection of \(\mathcal{H}\) onto C.
The metric projection \(P_{C}\) is characterized by the following two properties:
1. \(P_{C} x\in C\),
2. \(\langle x-P_{C} x,P_{C} x-y\rangle \geq 0\), \(\forall x\in \mathcal{H}\), \(y\in C\),

and if C is a hyperplane, it follows that

$$\begin{aligned} \Vert x-y \Vert ^{2}\geq \Vert x-P_{C} x \Vert ^{2}+ \Vert y-P_{C} x \Vert ^{2}, \quad \forall x\in \mathcal{H}, y\in C. \end{aligned}$$(6)
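Both characterizing properties, and the inequality (6) that property 2 implies for any closed convex C, can be checked numerically. In this sketch (ours, for illustration only) C is a box, whose metric projection is a componentwise clip:

```python
import numpy as np

rng = np.random.default_rng(0)
lo, hi = -1.0, 1.0
P = lambda x: np.clip(x, lo, hi)   # metric projection onto the box C = [lo, hi]^3

# property 2: <x - P_C x, P_C x - y> >= 0 for all x in H and y in C
for _ in range(1000):
    x = 5.0 * rng.normal(size=3)
    y = rng.uniform(lo, hi, size=3)        # an arbitrary point of C
    assert (x - P(x)) @ (P(x) - y) >= -1e-12

# inequality (6): ||x - y||^2 >= ||x - P_C x||^2 + ||y - P_C x||^2
x = 5.0 * rng.normal(size=3)
y = rng.uniform(lo, hi, size=3)
lhs = np.sum((x - y) ** 2)
rhs = np.sum((x - P(x)) ** 2) + np.sum((y - P(x)) ** 2)
assert lhs >= rhs - 1e-12
```

The check of (6) works because expanding \(\|x-y\|^{2}\) around \(P_{C}x\) leaves exactly the nonnegative cross term of property 2.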
Definition 2.1
A mapping \(A:C\rightarrow H\) is called α-inverse strongly monotone if there exists \(\alpha >0\) such that

$$\begin{aligned} \langle Ax-Ay,x-y\rangle \geq \alpha \Vert Ax-Ay \Vert ^{2} \end{aligned}$$

for all \(x,y\in C\).
The following lemmas are needed to prove the main theorem.
Lemma 2.2
Let \(\mathcal{H}\) be a real Hilbert space, and let C be a nonempty closed convex subset of \(\mathcal{H}\). Let \(\{x^{k}\}^{\infty }_{k=0}\subset \mathcal{H}\) be Fejér-monotone with respect to C, i.e., for every \(u\in C\),

$$\begin{aligned} \bigl\Vert x^{k+1}-u \bigr\Vert \leq \bigl\Vert x^{k}-u \bigr\Vert , \quad k\geq 0. \end{aligned}$$

Then \(\{P_{C} x^{k}\}^{\infty }_{k=0}\) converges strongly to some \(z\in C\).
Lemma 2.3
Each Hilbert space \(\mathcal{H}\) satisfies Opial’s condition, i.e., for any sequence \(\{x_{n}\}\subset \mathcal{H}\) with \(x_{n}\rightharpoonup x\), the inequality

$$\begin{aligned} \liminf_{n\rightarrow \infty } \Vert x_{n}-x \Vert < \liminf_{n\rightarrow \infty } \Vert x_{n}-y \Vert \end{aligned}$$

holds for every \(y\in \mathcal{H}\) with \(y\neq x\).
Lemma 2.4
([23])
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces, and let \(C,Q\) be nonempty closed convex subsets of \(H_{1}\) and \(H_{2}\), respectively. Let \(D_{1},D_{2}\), \(D_{3}:C\rightarrow H_{1}\) be \(d_{1},d_{2},d_{3}\)-inverse strongly monotone, respectively, where \(\zeta \in (0,2d^{*})\) with \(d^{*}=\operatorname{min} \{d_{1},d_{2},d_{3}\} \). Let \(\bar{D_{1}},\bar{D_{2}},\bar{D_{3}}:Q\rightarrow H_{2}\) be \(\bar{d_{1}},\bar{d_{2}},\bar{d_{3}}\)-inverse strongly monotone, respectively, where \(\bar{\zeta }\in (0,2\hat{d})\) with \(\hat{d}=\operatorname{min} \{\bar{d_{1}}, \bar{d_{2}},\bar{d_{3}}\}\). Let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator with adjoint \(A^{*}\) and \(\eta \in (0,\frac{1}{L})\) with L being the spectral radius of the operator \(A^{*}A\). Define \(M_{C}:C\rightarrow C\) by
$$\begin{aligned} M_{C}(x)=P_{C}(I-\zeta D_{1}) \bigl(ax+(1-a)P_{C}(I-\zeta D_{2}) \bigl(ax+(1-a)P_{C}(I-\zeta D_{3})x\bigr)\bigr), \end{aligned}$$

\(\forall x\in C\), and define \(M_{Q}:Q\rightarrow Q\) by

$$\begin{aligned} M_{Q}(\hat{x})=P_{Q}(I-\bar{\zeta }\bar{D_{1}}) \bigl(a\hat{x}+(1-a)P_{Q}(I-\bar{\zeta }\bar{D_{2}}) \bigl(a\hat{x}+(1-a)P_{Q}(I-\bar{\zeta }\bar{D_{3}})\hat{x}\bigr)\bigr), \end{aligned}$$
\(\forall \hat{x}\in Q\). Define \(M:C\rightarrow C\) by \(M(x)=M_{C}(x-\eta A^{*}(I-M_{Q})Ax)\) for all \(x\in C\). Then M is a nonexpansive mapping for all \(x\in C\).
Remark 1
From the proof of Lemma 2.4, the mapping \(I-\eta A^{*}(I-M_{Q})A\) is nonexpansive, that is,

$$\begin{aligned} \bigl\Vert \bigl(I-\eta A^{*}(I-M_{Q})A\bigr)x-\bigl(I-\eta A^{*}(I-M_{Q})A\bigr)y \bigr\Vert \leq \Vert x-y \Vert \end{aligned}$$

for all \(x,y\in H_{1}\).
Lemma 2.5
([23])
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces, and let \(C,Q\) be nonempty closed convex subsets of \(H_{1},H_{2} \), respectively. Define the mappings \(D_{1},D_{2},D_{3},\bar{D_{1}},\bar{D_{2}},\bar{D_{3}},M_{C}\), and \(M_{Q}\) as in Lemma 2.4, where \(\zeta \in (0,2d^{*})\) with \(d^{*}=\operatorname{min} \{d_{1},d_{2},d_{3}\}\), \(\bar{\zeta }\in (0,2\hat{d})\) with \(\hat{d}= \operatorname{min} \{\bar{d_{1}}, \bar{d_{2}},\bar{d_{3}}\}\). Let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator with adjoint \(A^{*}\) and \(\eta \in (0,\frac{1}{L})\) with L being the spectral radius of the operator \(A^{*}A\).
Assume that \(\bigcap_{i=1}^{3}\Phi _{i}\neq \emptyset \), where \(\Phi _{i}=\{w\in VI(C,D_{i})\mid Aw=\bar{w}\in VI(Q,\bar{D}_{i})\}\) for all \(i=1,2,3\).
The following statements are equivalent:
(i) \((x^{*},y^{*},z^{*})\in \Psi ^{D_{1},D_{2},D_{3}}_{\bar{D_{1}}, \bar{D_{2}},\bar{D_{3}}}\);
(ii) \(x^{*}=M_{C}(x^{*}-\eta A^{*}(I-M_{Q})Ax^{*})\), where \(y^{*}=P_{C}(I-\zeta D_{2})(ax^{*}+(1-a)z^{*})\), \(z^{*}=P_{C}(I-\zeta D_{3})x^{*}\), \(\bar{x^{*}}=Ax^{*}=P_{Q}(I-\bar{\zeta }\bar{D_{1}})(a\bar{x^{*}}+(1-a)\bar{y^{*}})\), \(\bar{y^{*}}=Ay^{*}=P_{Q}(I-\bar{\zeta }\bar{D_{2}})(a\bar{x^{*}}+(1-a)\bar{z^{*}})\), and \(\bar{z^{*}}=Az^{*}=P_{Q}(I-\bar{\zeta }\bar{D_{3}})\bar{x^{*}}\).
Lemma 2.6
([23])
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces, and let \(C,Q\) be nonempty closed convex subsets of \(H_{1},H_{2}\), respectively. Define the mappings \(D_{1},D_{2},D_{3},\bar{D_{1}},\bar{D_{2}},\bar{D_{3}},M_{C}\), and \(M_{Q}\) as in Lemma 2.4, where \(\zeta \in (0,2d^{*})\) with \(d^{*}=\operatorname{min} \{d_{1},d_{2},d_{3}\}\), \(\bar{\zeta }\in (0,2\hat{d})\) with \(\hat{d}= \operatorname{min} \{\bar{d_{1}}, \bar{d_{2}},\bar{d_{3}}\}\), and \(a\in [0,1]\). Let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator with adjoint \(A^{*}\) and \(\eta \in (0,\frac{1}{L})\) with L being the spectral radius of the operator \(A^{*}A\). Let \(\Phi _{i}=\{w\in VI(C,D_{i})\mid Aw=\bar{w}\in VI(Q,\bar{D}_{i})\}\) for all \(i=1,2,3\), and let \(\bigcap_{i=1}^{3}\Phi _{i}\neq \emptyset \). Then
In order to prove our main result, we need to prove the lemmas involving the split variational inequality problem.
Lemma 2.7
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces, and let \(C,Q\) be nonempty closed convex subsets of \(H_{1},H_{2}\), respectively. Define the mappings \(D_{1},D_{2},D_{3},\bar{D_{1}},\bar{D_{2}},\bar{D_{3}},M_{C}\), and \(M_{Q}\) as in Lemma 2.4, where \(\zeta \in (0,2d^{*})\) with \(d^{*}=\operatorname{min} \{d_{1},d_{2},d_{3}\}\), \(\bar{\zeta }\in (0,2\hat{d})\) with \(\hat{d}= \operatorname{min} \{\bar{d_{1}}, \bar{d_{2}},\bar{d_{3}}\}\), and \(a\in [0,1]\). Let \(\{x_{n}\}\) be a sequence in \(H_{1}\), and let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator with adjoint \(A^{*}\) and \(\eta \in (0,\frac{1}{L})\) with L being the spectral radius of the operator \(A^{*}A\). For every \(n\in \mathbb{N}\), let \(T_{n}=aW_{n}+(1-a)P_{C}(I-\zeta D_{2})(aW_{n}+(1-a)P_{C}(I-\zeta D_{3})W_{n})\) and \(W_{n}=(I-\eta A^{*}(I-M_{Q})A)x_{n}\). If \(x^{*}\in \bigcap_{i=1}^{3}\Phi _{i}\), then
for all \(n\in \mathbb{N}\).
Proof
Let \(x^{*}\in \bigcap_{i=1}^{3}\Phi _{i}\). From Lemma 2.6, we have
It implies that \(x^{*}=M_{C}(I-\eta A^{*}(I-M_{Q})A)x^{*}\), \(y^{*}=P_{C}(I-\zeta D_{2})(ax^{*}+(1-a)z^{*})\), and \(z^{*}=P_{C}(I-\zeta D_{3})x^{*}\), where \(\bar{x^{*}}=Ax^{*}=P_{Q}(I-\bar{\zeta }\bar{D_{1}})(a\bar{x^{*}}+(1-a)\bar{y^{*}})\), \(\bar{y^{*}}=Ay^{*}=P_{Q}(I-\bar{\zeta }\bar{D_{2}})(a\bar{x^{*}}+(1-a)\bar{z^{*}})\), and \(\bar{z^{*}}=Az^{*}=P_{Q}(I-\bar{\zeta }\bar{D_{3}})\bar{x^{*}}\). From Lemma 2.5, we have \((x^{*},y^{*},z^{*})\in \Psi ^{D_{1},D_{2},D_{3}}_{\bar{D_{1}}, \bar{D_{2}},\bar{D_{3}}}\). That is, \((x^{*},y^{*},z^{*})\in \Psi _{D_{1},D_{2},D_{3}}\) and \((\bar{x^{*}},\bar{y^{*}},\bar{z^{*}})\in \Psi _{\bar{D_{1}}, \bar{D_{2}},\bar{D_{3}}}\). From \((\bar{x^{*}},\bar{y^{*}},\bar{z^{*}})\in \Psi _{\bar{D_{1}}, \bar{D_{2}},\bar{D_{3}}}\), we obtain that
It implies that
From the definition of \(x^{*}\), we get \(x^{*}=P_{C}(I-\zeta D_{1})T_{x^{*}}\), where \(T_{x^{*}}=aW_{x^{*}}+(1-a)P_{C}(I-\zeta D_{2})(aW_{x^{*}}+(1-a)P_{C}(I-\zeta D_{3})W_{x^{*}})\) and \(W_{x^{*}}=(I-\eta A^{*}(I-M_{Q})A)x^{*}=x^{*}\).
From Lemma 2.6, we have that \({P_{C}}(I - {\zeta }{D_{1}}),{P_{C}}(I - {\zeta }{D_{2}}) \) and \({P_{C}}(I - {\zeta }{D_{3}}) \) are nonexpansive.
By the definition of \(T_{n}\), Lemma 2.4, and Remark 1, we have
□
3 Main results
Theorem 3.1
Let C and Q be nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and let \(S:C\rightarrow C\) be a nonexpansive mapping. Let \(D_{1},D_{2},D_{3}:C\rightarrow H_{1}\) be \(d_{1},d_{2},d_{3}\)-inverse strongly monotone, respectively, with \(d^{*}=\operatorname{min} \{d_{1},d_{2},d_{3}\} \). Let \(\bar{D_{1}},\bar{D_{2}},\bar{D_{3}}:Q\rightarrow H_{2}\) be \(\bar{d_{1}},\bar{d_{2}},\bar{d_{3}}\)-inverse strongly monotone, respectively, with \(\hat{d}=\operatorname{min} \{\bar{d_{1}},\bar{d_{2}},\bar{d_{3}}\}\). Let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator with adjoint \(A^{*}\) and \(\eta \in (0,\frac{1}{L})\) with L being the spectral radius of the operator \(A^{*}A\). Define \(M_{C}:H_{1}\rightarrow C\) by
$$\begin{aligned} M_{C}(x)=P_{C}(I-\zeta D_{1}) \bigl(ax+(1-a)P_{C}(I-\zeta D_{2}) \bigl(ax+(1-a)P_{C}(I-\zeta D_{3})x\bigr)\bigr), \end{aligned}$$

\(\forall x\in H_{1}\), where \(a\in [0,1)\), \(\zeta \in (0,2d^{*})\), and define \(M_{Q}:H_{2}\rightarrow Q\) by

$$\begin{aligned} M_{Q}(\hat{x})=P_{Q}(I-\bar{\zeta }\bar{D_{1}}) \bigl(a\hat{x}+(1-a)P_{Q}(I-\bar{\zeta }\bar{D_{2}}) \bigl(a\hat{x}+(1-a)P_{Q}(I-\bar{\zeta }\bar{D_{3}})\hat{x}\bigr)\bigr), \end{aligned}$$

\(\forall \hat{x}\in H_{2}\), where \(a\in [0,1)\), \(\bar{\zeta }\in (0,2\hat{d})\). Let the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) be generated by \(x_{1}\in H_{1}\) and
where \(W_{n}=(I-\eta A^{*}(I-M_{Q})A)x_{n}\) and \(T_{n}=aW_{n}+(1-a)P_{C}(I-\zeta D_{2})(aW_{n}+(1-a)P_{C}(I-\zeta D_{3})W_{n})\) for all \(n\in \mathbb{N}\).
Assume that the following conditions hold:
(i) \(\Im =F(S)\cap \bigcap_{i=1}^{3}\Phi _{i}\neq \emptyset \), where \(\Phi _{i}=\{w\in VI(C,D_{i})\mid Aw\in VI(Q,\bar{D}_{i})\}\) for all \(i=1,2,3\);
(ii) \(\alpha _{n}\in [c,d]\subset (0,1)\).
Then \(\{x_{n}\}\) converges weakly to \(x_{0}=\lim_{n\rightarrow \infty }P_{\Im }x_{n}\), where \((x_{0},y_{0},z_{0})\in \Psi ^{D_{1},D_{2},D_{3}}_{\bar{D_{1}}, \bar{D_{2}},\bar{D_{3}}}\), \(y_{0}=P_{C}(I-\zeta D_{2})(ax_{0}+(1-a)z_{0})\), and \(z_{0}=P_{C}(I-\zeta D_{3})x_{0}\) with \(\bar{x_{0}}=Ax_{0}\), \(\bar{y_{0}}=Ay_{0}\), and \(\bar{z_{0}}=Az_{0}\).
Proof
Denote \(k_{n}:=P_{Q_{n}}(T_{n}-\zeta D_{1}(y_{n}))\) for all \(n\geq 0\), and let \(x^{*}\in \Im \). From the definition of \(P_{Q_{n}}\), we have \(y_{n}=P_{Q_{n}}(I-\zeta D_{1})T_{n}\). Let \(M_{n}=T_{n}-\zeta D_{1}(y_{n})\). Since \(C\subseteq Q_{n}\), applying (6) we have
From the monotonicity of \(D_{1}\), we have
which implies that
From (10) and Lemma 2.7, we have
By the definition of \(x_{n+1}\), (11), and Lemma 2.7, we have
So,
Therefore \(\lim_{n \rightarrow \infty }\|x_{n}-x^{*}\|\) exists for all \(x^{*}\in \Im \). Hence \(\{x_{n}\}^{\infty }_{n=0}\) and \(\{k_{n}\}^{\infty }_{n=0}\) are bounded. From the last relations it follows that
or
Thus
By using the same method as above, we have
From (12), we get
so
which implies that
Consider
and by (14), we have
From the property of \(P_{C}\), we have
By the definition of \(T_{n}\), (7), Remark 1, and (18), we have
In addition, by the definition of \(x_{n+1}\) and (19), we have
so
which implies that
From the property of \(P_{C}\), we have
so
By the definition of \(T_{n}\), (7), Remark 1, and (21), we have
In addition, by the definition of \(x_{n+1}\), (11), and (22), we have
Let \(G_{n}=aW_{n}+(1-a)P_{C}(I-\zeta D_{3})W_{n}\). From the property of \(P_{C}\), we have
By the definition of \(T_{n}\) and (25), we have
In addition, by the definition of \(x_{n+1}\) and (26), we have
so
It implies that
From the property of \(P_{C}\), we have
It implies that
By the definition of \(T_{n}\) and (28), we have
In addition, by the definition of \(x_{n+1}\) and (29), we have
Since
From the property of norm, we have
Then we have
From (24) and (31), it implies that
we have
Moreover, from (16), (15), (34), and
we have
Since \(\{x_{n}\}^{\infty }_{n=0}\) is bounded, it has a subsequence \(\{x_{n_{k}}\}^{\infty }_{k=0}\) which weakly converges to some \(\bar{x}\in C\).
Assume \(\bar{x} \notin F(S)\). By the nonexpansiveness of S, Opial’s condition, and (35), we have
This is a contradiction; hence \(\bar{x}\in F(S)\).
Assume \(\bar{x}\notin \bigcap_{i=1}^{3}\Phi _{i}\). From Lemma 2.6, we have \(\bar{x}\notin F(M_{C}(I-\eta A^{*}(I-M_{Q})A))\). By Opial’s condition, (34), and Remark 1, we have
This is a contradiction; hence \(\bar{x}\in \bigcap_{i=1}^{3}\Phi _{i}\).
It implies that
Hence
In order to show that the entire sequence \(\{x_{n}\}\) weakly converges to x̄, assume that another subsequence \(\{x_{n_{j}}\}\) of \(\{x_{n}\}\) satisfies \(x_{n_{j}}\rightharpoonup \hat{x}\) as \(j \rightarrow \infty \), with \(\bar{x}\neq \hat{x}\) and \(\hat{x}\in \Im \). By Opial’s condition, we have
This is a contradiction; thus
It implies that the sequence \(\{x_{n}\}^{\infty }_{n=0}\) weakly converges to \(\bar{x}\in \Im \).
From (34), we have \(\{y_{n}\}^{\infty }_{n=0}\) weakly converges to \(\bar{x}\in \Im \).
Finally, if we take \(U_{n}=P_{\Im }x_{n}\),
by Lemma 2.2, we see that \(\{P_{\Im }x_{n}\}^{\infty }_{n=0}\) converges strongly to some \(z\in \Im \). From (37), we get
Letting \(n\rightarrow \infty \), we also have
and hence \(\bar{x}=z\). Therefore \(\{P_{\Im }x_{n}\}\) converges strongly to \(\bar{x}\in \Im \). This completes the proof. □
4 Application
Let C be a closed convex subset of H. The standard constrained convex optimization problem is to find \(x^{*}\in C\) such that

$$\begin{aligned} \Im \bigl(x^{*}\bigr)=\min_{x\in C}\Im (x), \end{aligned}$$(38)

where \(\Im:C\rightarrow \mathbb{R}\) is a convex, Fréchet differentiable function. The set of all solutions of (38) is denoted by \(\Phi _{\Im }\).
Lemma 4.1
([25] Optimality condition)
A necessary condition of optimality for a point \({x^{*}} \in C\) to be a solution of the minimization problem (38) is that \({x^{*}}\) solves the variational inequality

$$\begin{aligned} \bigl\langle \nabla \Im \bigl(x^{*}\bigr), x-x^{*}\bigr\rangle \geq 0 \end{aligned}$$(39)

for all \(x \in C\). Equivalently, \({x^{*}} \in C\) solves the fixed point equation

$$\begin{aligned} x^{*}=P_{C}\bigl(x^{*}-\zeta \nabla \Im \bigl(x^{*}\bigr)\bigr) \end{aligned}$$

for every \(\zeta > 0\). If, in addition, ℑ is convex, then the optimality condition (39) is also sufficient.
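Lemma 4.1 identifies minimizers of (38) with fixed points of the projected gradient map \(x\mapsto P_{C}(x-\zeta \nabla \Im (x))\). The following sketch iterates this map on a hypothetical objective and feasible set (our choices, not from the paper) and verifies the fixed point equation at the limit:

```python
import numpy as np

# hypothetical demo: minimize f(x) = 0.5*||x - b||^2 over the box C = [-1, 1]^2;
# grad f(x) = x - b is 1-inverse strongly monotone
b = np.array([3.0, 0.5])
grad = lambda x: x - b
P_C = lambda x: np.clip(x, -1.0, 1.0)

zeta = 0.5                        # any step size in (0, 2) works for this demo
x = np.zeros(2)
for _ in range(200):
    x = P_C(x - zeta * grad(x))   # iterate the projected gradient map

print(x)                          # the constrained minimizer P_C(b) = [1, 0.5]
assert np.allclose(x, P_C(x - zeta * grad(x)))   # fixed point equation holds
```

Because this objective is separable, the constrained minimizer is simply \(P_{C}(b)\), which the iteration recovers.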
By using the concept of the split modified system of variational inequalities problem (SMSVIP), we consider the problem for finding \((x^{*},y^{*},z^{*})\in C\times C\times C\) such that
and finding \((\bar{x^{*}}=Ax^{*}, \bar{y^{*}}=Ay^{*}, \bar{z^{*}}=Az^{*})\in Q\times Q\times Q\) such that
where \(\Im _{1},\Im _{2},\Im _{3}: C\rightarrow \mathbb{R}\) with \(\nabla \Im _{1},\nabla \Im _{2},\nabla \Im _{3}\) being the gradients of \(\Im _{1},\Im _{2},\Im _{3}\), respectively, and \(\bar{\Im }_{1},\bar{\Im }_{2},\bar{\Im }_{3}:Q\rightarrow \mathbb{R}\) with \(\nabla \bar{\Im }_{1},\nabla \bar{\Im }_{2},\nabla \bar{\Im }_{3}\) being the gradients of \(\bar{\Im }_{1},\bar{\Im }_{2},\bar{\Im }_{3}\), respectively, \({\zeta,\bar{\zeta } >0}\), and \(a\in [0,1]\). The sets of all solutions of (40) and (41) are denoted by \(\Psi _{\nabla \Im _{1},\nabla \Im _{2},\nabla \Im _{3}}\) and \(\Psi _{\nabla \bar{\Im _{1}},\nabla \bar{\Im _{2}},\nabla \bar{\Im _{3}}}\), respectively. The set of all solutions of the split modified system of variational inequalities (SMSVIP) is denoted by \(\Psi ^{\nabla \Im _{1},\nabla \Im _{2},\nabla \Im _{3}}_{\nabla \bar{\Im _{1}},\nabla \bar{\Im _{2}},\nabla \bar{\Im _{3}}}\), that is,
Lemma 4.2
([23])
Let C and Q be nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. Let \(\Im _{1},\Im _{2},\Im _{3}:C \to \mathbb{R} \) be real-valued convex functions with the gradients \(\nabla \Im _{1},\nabla \Im _{2},\nabla \Im _{3}\) being \(\frac{1}{{{L_{\Im _{1}}}}},\frac{1}{{{L_{\Im _{2}}}}}, \frac{1}{{{L_{\Im _{3}}}}}\)-inverse strongly monotone and continuous, respectively, where \(\zeta \in (0,\frac{2}{L_{\Im }})\) with \(\frac{1}{L_{\Im }} = \operatorname{min} \{\frac{1}{L_{\Im _{1}}}, \frac{1}{L_{\Im _{2}}},\frac{1}{L_{\Im _{3}}}\}\). Let \(\bar{\Im }_{1},\bar{\Im }_{2},\bar{\Im }_{3}:Q \to \mathbb{R} \) be real-valued convex functions with the gradients \(\nabla \bar{\Im _{1}}\), \(\nabla \bar{\Im _{2}}\), \(\nabla \bar{\Im _{3}}\) being \(\frac{1}{{{L_{\bar{\Im _{1}}}}}}, \frac{1}{{{L_{\bar{\Im _{2}}}}}}, \frac{1}{{{L_{\bar{\Im _{3}}}}}} \)-inverse strongly monotone and continuous, respectively, where \(\bar{\zeta }\in (0,\frac{2}{L_{\bar{\Im }}})\) with \(\frac{1}{L_{\bar{\Im }}} = \operatorname{min} \{\frac{1}{L_{\bar{\Im }_{1}}}, \frac{1}{L_{\bar{\Im }_{2}}},\frac{1}{L_{\bar{\Im }_{3}}}\}\). Let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator with adjoint \(A^{*}\) and \(\eta \in (0,\frac{1}{L})\) with L being the spectral radius of the operator \(A^{*}A\). Define \(M_{C}:H_{1}\rightarrow C\) by \(M_{C}(x)=P_{C}(I-\zeta \nabla \Im _{1})(ax+(1-a)P_{C}(I-\zeta \nabla \Im _{2})(ax+(1-a)P_{C}(I-\zeta \nabla \Im _{3})x))\), \(\forall x\in H_{1}\), and define \(M_{Q}:H_{2}\rightarrow Q\) by \(M_{Q}(\hat{x})=P_{Q}(I-\bar{\zeta }\nabla \bar{\Im _{1}})(a\hat{x}+(1-a)P_{Q}(I- \bar{\zeta }\nabla \bar{\Im _{2}})(a\hat{x}+(1-a)P_{Q}(I-\bar{\zeta } \nabla \bar{\Im _{3}})\hat{x}))\), \(\forall \hat{x}\in H_{2}\). Let \(\bigcap_{i=1}^{3}\Phi _{\Im _{i}}\neq \emptyset \) and \(\Phi _{\Im _{i}}=\{ \Im _{i}(x)= \min_{x^{*}\in C}\Im _{i}(x^{*}) : \bar{\Im _{i}}(Ax)= \min_{Ax^{*}\in Q}\bar{\Im _{i}}(Ax^{*}) \}\) for all \(i=1,2,3\). 
Then
Theorem 4.3
Let C and Q be nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and let \(S:C\rightarrow C\) be a nonexpansive mapping. Let \(\Im _{1},\Im _{2},\Im _{3}:C \to \mathbb{R} \) be real-valued convex functions with the gradients \(\nabla \Im _{1},\nabla \Im _{2},\nabla \Im _{3}\) being \(\frac{1}{{{L_{\Im _{1}}}}},\frac{1}{{{L_{\Im _{2}}}}}, \frac{1}{{{L_{\Im _{3}}}}}\)-inverse strongly monotone and continuous, respectively, where \(\zeta \in (0,\frac{2}{L_{\Im }})\) with \(\frac{1}{L_{\Im }}= \operatorname{min} \{\frac{1}{L_{\Im _{1}}},\frac{1}{L_{\Im _{2}}}, \frac{1}{L_{\Im _{3}}}\}\). Let \(\bar{\Im }_{1},\bar{\Im }_{2},\bar{\Im }_{3}:Q \to \mathbb{R} \) be real-valued convex functions with the gradients \(\nabla \bar{\Im _{1}},\nabla \bar{\Im _{2}},\nabla \bar{\Im _{3}}\) being \(\frac{1}{{{L_{\bar{\Im _{1}}}}}}, \frac{1}{{{L_{\bar{\Im _{2}}}}}}, \frac{1}{{{L_{\bar{\Im _{3}}}}}} \)-inverse strongly monotone and continuous, respectively, where \(\bar{\zeta }\in (0,\frac{2}{L_{\bar{\Im }}})\) with \(\frac{1}{L_{\bar{\Im }}}= \operatorname{min} \{\frac{1}{L_{\bar{\Im }_{1}}}, \frac{1}{L_{\bar{\Im }_{2}}},\frac{1}{L_{\bar{\Im }_{3}}}\}\). Let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator with adjoint \(A^{*}\) and \(\eta \in (0,\frac{1}{L})\) with L being the spectral radius of the operator \(A^{*}A\). Define \(M_{C}:H_{1}\rightarrow C\) by \(M_{C}(x)=P_{C}(I-\zeta \nabla \Im _{1})(ax+(1-a)P_{C}(I-\zeta \nabla \Im _{2})(ax+(1-a)P_{C}(I-\zeta \nabla \Im _{3})x))\), \(\forall x\in H_{1}\), and define \(M_{Q}:H_{2}\rightarrow Q\) by \(M_{Q}(\hat{x})=P_{Q}(I-\bar{\zeta }\nabla \bar{\Im _{1}})(a\hat{x}+(1-a)P_{Q}(I- \bar{\zeta }\nabla \bar{\Im _{2}})(a\hat{x}+(1-a)P_{Q}(I-\bar{\zeta } \nabla \bar{\Im _{3}})\hat{x}))\), \(\forall \hat{x}\in H_{2}\). Let the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) be generated by \(x_{1}\in H_{1}\) and
where \(W_{n}=(I-\eta A^{*}(I-M_{Q})A)x_{n}\) and \(T_{n}=aW_{n}+(1-a)P_{C}(I-\zeta \nabla \Im _{2})(aW_{n}+(1-a)P_{C}(I-\zeta \nabla \Im _{3})W_{n})\).
Assume that the following conditions hold:
(i) \(\Im =F(S)\cap \bigcap_{i=1}^{3}\Phi _{\Im _{i}} \neq \emptyset \), where \(\Phi _{\Im _{i}}=\{ \Im _{i}(x)= \min_{x^{*}\in C}\Im _{i}(x^{*}) : \bar{\Im _{i}}(Ax)= \min_{Ax^{*}\in Q}\bar{\Im _{i}}(Ax^{*}) \}\) for all \(i=1,2,3\);
(ii) \(\alpha _{n}\in [c,d]\subset (0,1)\).
Then \(\{x_{n}\}\) converges weakly to \(x_{0}=\lim_{n\rightarrow \infty }P_{\Im }x_{n}\), where \((x_{0},y_{0},z_{0})\in \Psi ^{\nabla \Im _{1},\nabla \Im _{2}, \nabla \Im _{3}}_{\nabla \bar{\Im _{1}},\nabla \bar{\Im _{2}},\nabla \bar{\Im _{3}}}\), \(y_{0}=P_{C}(I-\zeta \nabla \Im _{2})(ax_{0}+(1-a)z_{0})\), and \(z_{0}=P_{C}(I-\zeta \nabla \Im _{3})x_{0}\) with \(\bar{x_{0}}=Ax_{0}\), \(\bar{y_{0}}=Ay_{0}\), and \(\bar{z_{0}}=Az_{0}\).
Proof
By using Theorem 3.1 and Lemma 4.2, we obtain the conclusion. □
5 Example and numerical results
In this section, we give the following example to support our main theorem.
Example 5.1
Let \(\mathbb{R}\) be the set of real numbers and \(H_{1}=H_{2}=\mathbb{R}^{2}\). Let \(C:=\{x\in H_{1}\mid 1\leq 2x_{1}+x_{2}\leq 7\}\) and \(Q:=\{x\in H_{2}\mid -10\leq 3x_{1}-x_{2}\leq 20\}\). Let \(D_{1},D_{2},D_{3}:C\rightarrow \mathbb{R}^{2}\) be defined by \(D_{1}(x_{1},x_{2})=(x_{1} -2,x_{2} +1)\), \(D_{2}(x_{1},x_{2})=(x_{1} -3,x_{2} -\frac{5}{2})\), and \(D_{3}(x_{1},x_{2})=(x_{1} +2,x_{2} -6)\) for all \((x_{1},x_{2})\in C\). Let \(\bar{D_{1}},\bar{D_{2}},\bar{D_{3}}:Q\rightarrow \mathbb{R}^{2}\) be defined by \(\bar{D_{1}}(\bar{x_{1}},\bar{x_{2}})=(\bar{x_{1}} -4,\bar{x_{2}} +8)\), \(\bar{D_{2}}(\bar{x_{1}},\bar{x_{2}})=(\bar{x_{1}} -12,\bar{x_{2}} -8)\), and \(\bar{D_{3}}(\bar{x_{1}},\bar{x_{2}})=(\bar{x_{1}} +16,\bar{x_{2}} -30)\) for all \((\bar{x_{1}},\bar{x_{2}})\in Q\). Let \(A:\mathbb{R}^{2}\rightarrow \mathbb{R}^{2}\) be defined by \(A(x_{1},x_{2})=(2x_{1},2x_{2})\), so that its adjoint \(A^{*}:\mathbb{R}^{2}\rightarrow \mathbb{R}^{2}\) is given by \(A^{*}(x_{1},x_{2})=(2x_{1},2x_{2})\). Define \(M_{C}:H_{1}\rightarrow C\) by \(M_{C}(x)=P_{C}(I-\frac{1}{2}D_{1})(\frac{1}{2}x+\frac{1}{2}P_{C}(I-\frac{1}{2}D_{2})(\frac{1}{2}x+\frac{1}{2}P_{C}(I-\frac{1}{2}D_{3})x))\), \(\forall x=(x_{1},x_{2})\in H_{1}\), define \(M_{Q}:H_{2}\rightarrow Q\) by \(M_{Q}(\hat{x})=P_{Q}(I-\frac{1}{5}\bar{D_{1}})(\frac{1}{2}\hat{x}+\frac{1}{2}P_{Q}(I-\frac{1}{5}\bar{D_{2}})(\frac{1}{2}\hat{x}+\frac{1}{2}P_{Q}(I-\frac{1}{5}\bar{D_{3}})\hat{x}))\), \(\forall \hat{x}=(\hat{x_{1}},\hat{x_{2}})\in H_{2}\), and define \(S:C\rightarrow C\) by \(S(x_{1},x_{2})=(\frac{x_{1}}{2}+1,\frac{x_{2}}{2})\). Let the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) be generated by \(x_{1}\in H_{1}\) and
where \(W_{n}=(I-\frac{1}{8}A^{*}(I-M_{Q})A)x_{n}\) and \(T_{n}=\frac{1}{2}W_{n}+\frac{1}{2}P_{C}(I-\frac{1}{2}D_{2})(\frac{1}{2}W_{n}+\frac{1}{2}P_{C}(I-\frac{1}{2}D_{3})W_{n})\),
and
where
for every \(x=(x_{1},x_{2})\in H_{1}\) and
for every \(\hat{x}=(\hat{x}_{1},\hat{x}_{2})\in H_{2}\). By the definition of \(S, D_{i}, \bar{D_{i}}, M_{C}, M_{Q}\) for every \(i=1,2,3\), we have \((2,0)\in F(M_{C}(I-\frac{1}{8}A^{*}(I-M_{Q})A))\). From Theorem 3.1, we can conclude that the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) converge strongly to \((2,0)\).
Table 1 and Fig. 1 show the numerical results of sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) where \(x_{1}=(-5,5)\) and \(n=N=30\).
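For intuition, the sets C and Q in Example 5.1 are slabs, whose metric projections have closed forms. The following simplified sketch (ours; it uses a plain averaged iteration with S rather than the paper's full algorithm) already recovers the common solution \((2,0)\):

```python
import numpy as np

def project_slab(x, a, lo, hi):
    # Euclidean projection onto the slab {x : lo <= <a, x> <= hi} (closed form)
    t = a @ x
    if t < lo:
        return x + ((lo - t) / (a @ a)) * a
    if t > hi:
        return x + ((hi - t) / (a @ a)) * a
    return x

a = np.array([2.0, 1.0])                    # C = {x : 1 <= 2*x1 + x2 <= 7}
S = lambda x: np.array([x[0] / 2 + 1, x[1] / 2])

# (2, 0) lies in C and is the unique fixed point of the nonexpansive map S
x_star = np.array([2.0, 0.0])
assert 1 <= a @ x_star <= 7 and np.allclose(S(x_star), x_star)

# a plain averaged (Krasnoselskii) iteration x_{n+1} = P_C((x_n + S(x_n))/2),
# started from x_1 = (-5, 5) as in the paper's numerical experiment
x = np.array([-5.0, 5.0])
for _ in range(100):
    x = project_slab((x + S(x)) / 2, a, 1.0, 7.0)
print(x)   # approaches (2, 0)
```

Since S here is a 1/2-contraction and the projection is nonexpansive and fixes \((2,0)\), this simplified iteration converges linearly; the paper's algorithm handles the full split structure with the operators \(D_{i}\), \(\bar{D_{i}}\).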
Availability of data and materials
All data generated or analyzed during this study are included in this published article.
References
Ceng, L.C., Li, X., Qin, X.: Parallel proximal point methods for systems of vector optimization problems on Hadamard manifolds without convexity. Optimization 69, 357–383 (2020)
Ceng, L.C., Petrusel, A., Qin, X., Yao, J.C.: A modified inertial subgradient extragradient method for solving pseudomonotone variational inequalities and common fixed point problems. Fixed Point Theory 21, 93–108 (2020)
Ceng, L.C., Petrusel, A., Qin, X., Yao, J.C.: Pseudomonotone variational inequalities and fixed points. Fixed Point Theory 22, 543–558 (2021)
Ceng, L.C., Petrusel, A., Qin, X., Yao, J.C.: Two inertial subgradient extragradient algorithms for variational inequalities with fixed-point constraints. Optimization 70, 1337–1358 (2021)
Ceng, L.C., Petrusel, A., Yao, J.C., Yao, Y.: Hybrid viscosity extragradient method for systems of variational inequalities, fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Fixed Point Theory 19, 487–501 (2018)
Ceng, L.C., Petrusel, A., Yao, J.C., Yao, Y.: Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions. Fixed Point Theory 20, 113–133 (2019)
Ceng, L.C., Shang, M.J.: Hybrid inertial subgradient extragradient methods for variational inequalities and fixed point problems involving asymptotically nonexpansive mappings. Optimization 70, 715–740 (2021)
Ceng, L.C., Wen, C.F.: Systems of variational inequalities with hierarchical variational inequality constraints for asymptotically nonexpansive and pseudocontractive mappings. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 113, 2431–2447 (2019)
Ceng, L.C., Yuan, Q.: Composite inertial subgradient extragradient methods for variational inequalities and fixed point problems. J. Inequal. Appl. 20, Article ID 274 (2019)
Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318–335 (2011)
Cui, H.H., Zhang, H.X., Ceng, L.C.: An inertial Censor–Segal algorithm for split common fixed-point problems. Fixed Point Theory 22, 93–103 (2021)
Dong, Q.L., Liu, L., Yao, Y.: Self-adaptive projection and contraction methods with alternated inertial terms for solving the split feasibility problem. J. Nonlinear Convex Anal. 23(3), 591–605 (2022)
Guan, J.L., Ceng, L.C., Hu, B.: Strong convergence theorem for split monotone variational inclusion with constraints of variational inequalities and fixed point problems. J. Inequal. Appl. 29, Article ID 311 (2018)
He, L., Cui, Y.L., Ceng, L.C., et al.: Strong convergence for monotone bilevel equilibria with constraints of variational inequalities and fixed points using subgradient extragradient implicit rule. J. Inequal. Appl. 37, Article ID 146 (2021)
He, S., Yang, C.: Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, Article ID 942315 (2013)
Iusem, A.N., Svaiter, B.F.: A variant of Korpelevich’s method for variational inequalities with a new search strategy. Optimization 42, 309–321 (1997)
Khobotov, E.N.: Modification of the extra-gradient method for solving variational inequalities and certain optimization problems. USSR Comput. Math. Math. Phys. 27, 120–127 (1989)
Kim, J.K., Salahuddin, Lim, W.H.: General nonconvex split variational inequality problems. Korean J. Math. 25(4), 469–481 (2017)
Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Ekon. Mat. Metody. 12, 747–756 (1976)
Noor, M.A., Noor, K.I.: Some aspects of variational inequalities. J. Comput. Appl. Math. 47, 285–312 (1993)
Siriyan, K., Kangtunyakarn, A.: Algorithm method for solving the split general system of variational inequalities problem and fixed point problem of nonexpansive mapping with application. Math. Methods Appl. Sci. 41, 7766–7788 (2018)
Solodov, M.V., Svaiter, B.F.: A new projection method for variational inequality problems. SIAM J. Control Optim. 37, 765–776 (1999)
Sripattanet, A., Kangtunyakarn, A.: Convergence theorem for solving a new concept of the split variational inequality problems and application. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 114, Article ID 177 (2020). https://doi.org/10.1007/s13398-020-00909-0
Stampacchia, G.: Formes bilineaires coercivites sur les ensembles convexes. C. R. Acad. Sci. Paris, Ser. I 258, 4413–4416 (1964)
Su, M., Xu, H.K.: Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 1(1), 35–43 (2010)
Taiwo, A., Mewomo, O.T., Gibali, A.: A simple strong convergent method for solving split common fixed point problems. J. Nonlinear Var. Anal. 5, 777–793 (2021)
Tan, B., Cho, S., Yao, J.C.: Accelerated inertial subgradient extragradient algorithms with non-monotonic step sizes for equilibrium problems and fixed point problems. J. Nonlinear Var. Anal. 6, 89–122 (2022)
Yao, Y., Iyiola, O.S., Shehu, Y.: Subgradient extragradient method with double inertial steps for variational inequalities. J. Sci. Comput. 90(2), Article ID 71 (2022)
Yao, Y., Li, H., Postolache, M.: Iterative algorithms for split equilibrium problems of monotone operators and fixed point problems of pseudo-contractions. Optimization (2020). https://doi.org/10.1080/02331934.2020.1857757
Yao, Y., Liou, Y.C., Yao, Y.C.: Iterative algorithms for the split variational inequality and fixed point problems under nonlinear transformations. J. Nonlinear Sci. Appl. 10, 843–854 (2017)
Zhao, T.Y., Wang, D.Q., Ceng, L.C., et al.: Quasi-inertial Tseng’s extragradient algorithms for pseudomonotone variational inequalities and fixed point problems of quasi-nonexpansive operators. Numer. Funct. Anal. Optim. 42, 69–90 (2020)
Zhao, X., Yao, J.C., Yao, Y.: A proximal algorithm for solving split monotone variational inclusions. UPB Sci. Bull., Ser. A 82(3), 43–52 (2020)
Acknowledgements
The authors would like to extend their sincere appreciation to the Research and Innovation Services of King Mongkut’s Institute of Technology Ladkrabang.
Funding
This research was supported by the Royal Golden Jubilee (RGJ) Ph.D. Programme, the National Research Council of Thailand (NRCT), under Grant No. PHD/0170/2561.
Contributions
AK dealt with the conceptualization, formal analysis, supervision, writing—review and editing. AS writing—original draft, formal analysis, writing—review and editing. Both authors have read and approved the manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Sripattanet, A., Kangtunyakarn, A. A new subgradient extragradient method for solving the split modified system of variational inequality problems and fixed point problem. J Inequal Appl 2022, 51 (2022). https://doi.org/10.1186/s13660-022-02787-z
DOI: https://doi.org/10.1186/s13660-022-02787-z
MSC
- 47H09
- 47H10
- 90C33
Keywords
- Fixed point
- Subgradient extragradient
- The split modified system of variational inequality problems
- Variational inequality