Triple-adaptive subgradient extragradient with extrapolation procedure for bilevel split variational inequality
Journal of Inequalities and Applications volume 2023, Article number: 14 (2023)
Abstract
This paper introduces a triple-adaptive subgradient extragradient process with extrapolation to solve a bilevel split pseudomonotone variational inequality problem (BSPVIP) with the common fixed point problem constraint of finitely many nonexpansive mappings. The problem under consideration is in real Hilbert spaces, where the BSPVIP involves a fixed point problem of demimetric mapping. The proposed rule exploits the strong monotonicity of one operator at the upper level and the pseudomonotonicity of another mapping at the lower level. The strong convergence result for the proposed algorithm is established under some suitable assumptions. In addition, a numerical example is given to demonstrate the viability of the proposed rule. Our results improve and extend some recent developments to a great extent.
1 Introduction
Suppose that \(\emptyset \neq C\subset{\mathcal {H}}\) with C being a closed convex set in a real Hilbert space \(\mathcal {H}\), and \(\langle \cdot ,\cdot \rangle \) and \(\|\cdot \|\) are the inner product and the induced norm in \(\mathcal {H}\), respectively. Let \(P_{C}\) be the metric projection of \(\mathcal {H}\) onto C, and for a given mapping \(S:C\to{\mathcal {H}}\), let its set of fixed points be denoted by \(\operatorname{Fix}(S)\).
Let \(A:{\mathcal {H}}\to{\mathcal {H}}\) be a Lipschitz continuous mapping with Lipschitz constant L, and consider the classical variational inequality problem (VIP) of finding \(x^{*}\in C\) such that \(\langle Ax^{*},x-x^{*}\rangle \geq 0 \ \forall x\in C\). We denote the solution set of the VIP by \(\operatorname{VI}(C,A)\). One of the most popular approaches for solving the VIP is the extragradient method introduced by Korpelevich [1] in 1976. For any given initial point \(p_{0}\in C\), the method of Korpelevich [1] generates a sequence \(\{p_{t}\}\) by alternating two projection steps.
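In its standard form, the iteration reads
$$ y_{t}=P_{C}(p_{t}-\ell Ap_{t}),\qquad p_{t+1}=P_{C}(p_{t}-\ell Ay_{t}),\quad t\geq 0, $$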
where the constant ℓ lies in \((0,\frac{1}{L} )\). The literature on the VIP is extensive, and Korpelevich’s extragradient method has received much attention from many scholars, who have enhanced it in various ways; see, for example, [2–26] and the references therein.
Thong and Hieu [26] put forward a subgradient extragradient process with extrapolation, which generates a sequence \(\{p_{t}\}\) for any given \(p_{1},p_{0}\in{\mathcal {H}}\) as follows:
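In generic form (the extrapolation parameter \(\theta _{t}\) and the half-space \(T_{t}\) below are written in the notation commonly used for such schemes), the iteration is
$$ \begin{cases} w_{t}=p_{t}+\theta _{t}(p_{t}-p_{t-1}), \\ y_{t}=P_{C}(w_{t}-\zeta Aw_{t}), \\ T_{t}:=\{x\in{\mathcal {H}}:\langle w_{t}-\zeta Aw_{t}-y_{t},x-y_{t}\rangle \leq 0\}, \\ p_{t+1}=P_{T_{t}}(w_{t}-\zeta Ay_{t}), \end{cases} $$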
where \(\zeta \in (0,\frac{1}{L} )\) and weak convergence is obtained. Given nonexpansive mappings \(S_{i}:{\mathcal {H}}\rightarrow {\mathcal {H}}\), \(i=1,2,\ldots , N\), Ceng and Shang [16] presented a subgradient extragradient-type process for computing a common element of the common fixed point set and \(\operatorname{VI}(C,A)\) when \(\bigcap^{N}_{i=1}\operatorname{Fix}(S_{i})\cap\operatorname{VI}(C,A)\neq \emptyset \).
Furthermore, the following strongly convergent algorithm was studied in [21] when \(\Omega :=\bigcap^{N}_{i=1} \operatorname{Fix}(S_{i})\cap\operatorname{VI}(C,A)\) is nonempty.
Algorithm 1.1
(See [21, Algorithm 3.1])
Modified inertial subgradient extragradient method.
Initialization
Let \(\lambda _{1}>0\), \(\alpha >0\), \(\mu \in (0,1)\), and \(x_{1},x_{0}\in{\mathcal {H}}\) be arbitrary.
Iterative steps
Calculate \(x_{t+1}\) as follows:
Step 1. Given the iterates \(x_{t}\) and \(x_{t-1}\) (\(t\geq 1\)), choose \(\alpha _{t}\) such that \(0\leq \alpha _{t}\leq \bar{\alpha}_{t}\), where
Step 2. Compute \(w_{t}=S_{t}x_{t}+\alpha _{t}(S_{t}x_{t}-S_{t}x_{t-1})\) and \(y_{t}=P_{C}(w_{t}-\lambda _{t}Aw_{t})\).
Step 3. Identify \(C_{t} = \{y\in{\mathcal {H}}:\langle w_{t}-\lambda _{t}Aw_{t}-y_{t},y_{t}-y \rangle \geq 0\}\), then calculate
Step 4. Update \(x_{t+1}=\beta _{t}f(x_{t})+\gamma _{t}x_{t}+((1-\gamma _{t})I-\beta _{t} \rho F)z_{t}\), where \(\rho \in (0, \frac{2\eta}{\kappa ^{2}} )\) and update
Set \(t:=t+1\) and return to Step 1, where f is a contraction (that is, \(f:{\mathcal {H}}\rightarrow { \mathcal {H}}\) satisfies \(\|f(x)-f(y)\| \leq \nu \|x-y\|\ \forall x,y \in {\mathcal {H}}\) for some \(\nu \in [0,1)\)), F is η-strongly monotone and κ-Lipschitz continuous (see Sect. 2 for the definitions), and \(\{\beta _{t}\},\{\gamma _{t}\}, \{\varepsilon _{t}\} \subset (0,1)\) fulfill suitable conditions.
Next, suppose that C and Q are nonempty, closed, and convex subsets of Hilbert spaces \({\mathcal {H}}_{1}\) and \({\mathcal {H}}_{2}\), respectively. Let \(T:{\mathcal {H}}_{1}\to{\mathcal {H}}_{2}\) denote a bounded linear operator and \(A,F:{\mathcal {H}}_{1}\to{\mathcal {H}}_{1}\) and \(B:{\mathcal {H}}_{2}\to{\mathcal {H}}_{2}\) be nonlinear mappings. Then, the bilevel split variational inequality problem (BSVIP) (see [27]) is as specified below:
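In its usual form (cf. [27]), problem (1.1) asks to find \(x^{*}\in\Lambda \) such that
$$ \bigl\langle Fx^{*},x-x^{*}\bigr\rangle \geq 0 \quad \forall x\in\Lambda , $$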
where \(\Lambda :=\{z\in\operatorname{VI}(C,A):Tz\in\operatorname{VI}(Q,B)\}\) is the solution set of the split variational inequality problem (SVIP), which was introduced by Censor et al. [28] and formulated as follows:
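find \(x^{*}\in C\) such that
$$ \bigl\langle Ax^{*},x-x^{*}\bigr\rangle \geq 0 \quad \forall x\in C, \qquad (1.2) $$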
and
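such that the image point \(y^{*}=Tx^{*}\in Q\) solves
$$ \bigl\langle By^{*},y-y^{*}\bigr\rangle \geq 0 \quad \forall y\in Q, \qquad (1.3) $$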
with \(\operatorname{VI}(C,A)\) and \(\operatorname{VI}(Q,B)\) representing the solution sets of variational inequalities (1.2) and (1.3), respectively. Note that the SVIP involves finding \(x^{*}\in\operatorname{VI}(C,A)\) such that \(Tx^{*}\in\operatorname{VI}(Q,B)\). Censor et al. [28] proposed a weakly convergent method for approximating the solution of (1.2)–(1.3): for any given initial \(x_{1}\in{\mathcal {H}}_{1}\), identify the sequence \(\{x_{t}\}\) generated by
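a scheme of the form (stated here with step sizes \(\lambda >0\) and \(\gamma \in (0,1/\|T\|^{2})\); the precise parameter restrictions are those imposed in [28])
$$ x_{t+1}=P_{C}(I-\lambda A) \bigl(x_{t}+\gamma T^{*}\bigl(P_{Q}(I-\lambda B)-I\bigr)Tx_{t}\bigr),\quad t\geq 1, $$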
where both A and B are inverse-strongly monotone and T is a bounded linear operator. Under appropriate assumptions, it was proven in [28] that the sequence \(\{x_{t}\}\) converges weakly to a solution of (1.2)–(1.3).
We note that the VIP can be expressed as the fixed point problem (FPP) \(Sz=P_{Q}(z-\mu Bz)\), \(\mu >0\), with \(\operatorname{VI}(Q,B)=\operatorname{Fix}(S)\). Consequently, we can reformulate the BSVIP in (1.1) as follows: Let \(A:{\mathcal {H}}_{1}\to{\mathcal {H}}_{1}\) be quasimonotone and L-Lipschitz continuous, \(F:{\mathcal {H}}_{1}\to{\mathcal {H}}_{1}\) be κ-Lipschitzian and η-strongly monotone, \(T:{\mathcal {H}}_{1}\to{\mathcal {H}}_{2}\) be a nonzero bounded linear operator, and \(S:{\mathcal {H}}_{2}\to{\mathcal {H}}_{2}\) be a τ-demimetric mapping with \(\tau \in (-\infty ,1)\); then,
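the reformulated problem is to find \(x^{*}\in\Omega \) such that
$$ \bigl\langle Fx^{*},x-x^{*}\bigr\rangle \geq 0 \quad \forall x\in\Omega , \qquad (1.5) $$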
where \(\Omega :=\{z\in\operatorname{VI}(C,A): Tz\in\operatorname{Fix}(S)\}\). In this case, such a problem is referred to as a bilevel split quasimonotone variational inequality problem (BSQVIP), and strong convergence results for it were obtained in [29].
Assume that \(f:{\mathcal {H}}_{1}\to {\mathcal {H}}_{1}\) is a contractive mapping with constant \(\nu \in [0,1)\) satisfying \(\nu <\zeta :=1-\sqrt{1-\rho (2\eta -\rho \kappa ^{2})}\) for \(\rho \in (0,\frac{2\eta}{\kappa ^{2}} )\), \(A:{\mathcal {H}}_{1}\to {\mathcal {H}}_{1}\) is pseudomonotone and L-Lipschitz continuous with \(\|Au\|\leq \liminf_{t\to \infty}\|Au_{t}\|\) for each \(\{u_{t}\}\subset C\) with \(u_{t}\rightharpoonup u\), \(\{S_{i}\}^{N}_{i=1}\) is a finite family of nonexpansive mappings on \({\mathcal {H}}_{1}\), and \(\Xi :=\bigcap^{N}_{i=1}\operatorname{Fix}(S_{i})\cap\Omega \neq \emptyset \). Then, the bilevel split pseudomonotone variational inequality problem (BSPVIP) with the common fixed point problem (CFPP) constraint is formulated as follows:
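find \(q^{*}\in\Xi \) such that
$$ \bigl\langle (\rho F-f)q^{*},p-q^{*}\bigr\rangle \geq 0 \quad \forall p\in\Xi . \qquad (1.6) $$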
We propose a triple-adaptive subgradient extragradient-type rule with inertial extrapolation to solve (1.6) in real Hilbert spaces, where the BSPVIP involves the FPP of a demimetric mapping S. The rule exploits the strong monotonicity of the operator F at the upper level and the pseudomonotonicity of the mapping A at the lower level. Consequently, we obtain a strong convergence result. In addition, a numerical test is provided to show the viability of the suggested rule.
The article is organized as follows: In Sect. 2, we provide some concepts and basic tools for further use. Section 3 gives the convergence analysis of the suggested algorithm. Lastly, Sect. 4 gives a numerical illustration. Our results improve and extend the corresponding ones in [21, 29], and the relevant explanatory remarks are given after the proof of the main convergence result in Sect. 3.
2 Preliminaries
A mapping \(S:C\to{\mathcal {H}}\) is (see [30]):
-
(i)
L-Lipschitz continuous or L-Lipschitzian if \(\exists L>0\) such that \(\|S\tilde{u}-S\bar{y}\|\leq L\|\tilde{u}-\bar{y}\| \ \forall \tilde{u}, \bar{y}\in C\). If \(L=1\), then S is nonexpansive;
-
(ii)
ς-strongly monotone if \(\exists \varsigma >0\) such that \(\langle S\tilde{u}-S\bar{y},\tilde{u}-\bar{y} \rangle \geq \varsigma \|\tilde{u}-\bar{y}\|^{2} \ \forall \tilde{u}\), \(\bar{y}\in C\);
-
(iii)
monotone if \(\langle S\tilde{u}-S\bar{y},\tilde{u}-\bar{y}\rangle \geq 0 \ \forall \tilde{u}\), \(\bar{y}\in C\);
-
(iv)
pseudomonotone if \(\langle S\tilde{u},\bar{y}-\tilde{u}\rangle \geq 0\Longrightarrow \langle S\bar{y},\bar{y}-\tilde{u}\rangle \geq 0 \ \forall \tilde{u}, \bar{y}\in C\);
-
(v)
quasimonotone if \(\langle S\tilde{u},\bar{y}-\tilde{u}\rangle >0\Longrightarrow \langle S\bar{y},\bar{y}-\tilde{u}\rangle \geq 0 \ \forall \tilde{u}, \bar{y}\in C\);
-
(vi)
τ-demicontractive if \(\exists \tau \in (0,1)\) such that
$$ \Vert S\tilde{u}-p \Vert ^{2}\leq \Vert \tilde{u}-p \Vert ^{2}+\tau \Vert \tilde{u}-S \tilde{u} \Vert ^{2} \quad \forall \tilde{u}\in C, p\in\operatorname{Fix}(S)\neq \emptyset ; $$
-
(vii)
τ-demimetric if \(\exists \tau \in (-\infty ,1)\) such that
$$ \langle \tilde{u}-S\tilde{u},\tilde{u}-p\rangle \geq \frac {1-\tau}{2} \Vert \tilde{u}-S\tilde{u} \Vert ^{2} \quad \forall \tilde{u}\in C, p \in\operatorname{Fix}(S)\neq \emptyset ; $$
-
(viii)
sequentially weakly continuous if \(\forall \{x_{t}\}\subset C\), \(x_{t}\rightharpoonup x\Longrightarrow Sx_{t}\rightharpoonup Sx\).
Given \(\grave{u}\in{\mathcal {H}}\), there exists a unique nearest point \(P_{C}\grave{u} \in C\). The metric projection \(P_{C}\) has the following properties.
Lemma 2.1
(See [31])
The following hold:
-
(i)
\(\langle \check{u}-\grave{v},P_{C} \check{u} - P_{C} \grave{v} \rangle \geq \|P_{C}\check{u}-P_{C}\grave{v}\|^{2} \ \forall \check{u}, \grave{v}\in{\mathcal {H}}\);
-
(ii)
\(w=P_{C}\check{u} \Longleftrightarrow \langle \check{u}-w,\grave{v}-w \rangle \leq 0 \ \forall \grave{v}\in C\), for every \(\check{u}\in{\mathcal {H}}\);
-
(iii)
\(\|\check{u}-\grave{v}\|^{2}\geq \|\check{u}-P_{C}\check{u}\|^{2}+\| \grave{v}-P_{C}\check{u}\|^{2} \ \forall \check{u}\in{\mathcal {H}}, \grave{v}\in C\);
-
(iv)
\(\|\check{u}-\grave{v}\|^{2}=\|\check{u}\|^{2}-\|\grave{v}\|^{2}-2 \langle \check{u}-\grave{v},\grave{v}\rangle \ \forall \check{u}, \grave{v}\in{\mathcal {H}}\);
-
(v)
\(\|\vartheta \check{u}+(1-\vartheta )\grave{v}\|^{2}=\vartheta \| \check{u}\|^{2}+(1-\vartheta )\|\grave{v}\|^{2}-\vartheta (1- \vartheta )\|\check{u}-\grave{v}\|^{2} \ \forall \check{u},\grave{v} \in{\mathcal {H}}, \vartheta \in {\mathbb{R}}\).
Clearly, for the mapping classes (ii)–(v) defined at the beginning of this section, ς-strong monotonicity implies monotonicity, monotonicity implies pseudomonotonicity, and pseudomonotonicity implies quasimonotonicity. However, the converse implications are not generally true.
Lemma 2.2
(See [32])
Let \(\varpi \in (0,1]\), \(S:C\to{\mathcal {H}}\) be nonexpansive and \(S^{\varpi}: C\to{\mathcal {H}}\) be defined by \(S^{\varpi }\acute{x}:=S\acute{x}-\varpi \rho F(S\acute{x}) \ \forall \acute{x}\in C\), where F is ϱ-Lipschitz continuous and ς-strongly monotone. Then \(S^{\varpi}\) is a contraction provided \(0<\rho <\frac{2\varsigma}{\varrho ^{2}}\), i.e., \(\|S^{\varpi }\acute{x}-S^{\varpi }\acute{y}\|\leq (1-\varpi \zeta ) \|\acute{x}-\acute{y}\| \ \forall \acute{x},\acute{y}\in C\), where \(\zeta =1-\sqrt{1-\rho (2\varsigma -\rho \varrho ^{2})}\in (0,1]\).
Lemma 2.3
If \(A:C\to{\mathcal {H}}\) is pseudomonotone and continuous, then \(u^{*}\in C\) solves VIP ⇔ \(\langle Av,v-u^{*}\rangle \geq 0 \ \forall v\in C\).
Proof
The proof is straightforward and thus we skip it. □
Lemma 2.4
(See [32])
Let \(\{a_{t}\} \subset (0,\infty )\) be a sequence satisfying \(a_{t+1} \leq (1-\lambda _{t})a_{t}+\lambda _{t}\gamma _{t} \ \forall t \geq 1\), where \(\{\lambda _{t}\}, \{\gamma _{t}\} \subset \mathbb{R}\), (i) \(\{\lambda _{t}\}\subset [0,1]\) and \(\sum^{\infty}_{t=1}\lambda _{t}=\infty \), and (ii) \(\limsup_{t\to \infty}\gamma _{t}\leq 0\) or \(\sum^{\infty}_{t=1}|\lambda _{t}\gamma _{t}|<\infty \). Then \(\lim_{t\to \infty}a_{t}=0\).
Lemma 2.5
(See [31, demiclosedness principle])
If S is nonexpansive with \(\operatorname{Fix} (S)\neq \emptyset \), then \(I-S\) is demiclosed at zero, i.e., if \(\{x_{t}\}\) is a sequence in C such that \(x_{t} \rightharpoonup x\in C\) and \((I-S)x_{t}\to 0\), then \((I-S)x=0\), where I is the identity mapping of \(\mathcal {H}\).
Lemma 2.6
(See [6])
Let \(\{{\boldsymbol{\Gamma}}_{s}\} \subset \mathbb{R}\) with \(\exists \{{\boldsymbol{\Gamma}}_{s_{k}}\}\subset \{{\boldsymbol{\Gamma}}_{s}\}\) such that \({\boldsymbol{\Gamma}}_{s_{k}}<{\boldsymbol{\Gamma}}_{s_{k}+1} \ \forall k\geq 1\). Let \(\{\phi (s)\}_{s\geq s_{0}}\) be formulated as
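$$ \phi (s):=\max \bigl\{ k\leq s:{\boldsymbol{\Gamma}}_{k}<{\boldsymbol{\Gamma}}_{k+1}\bigr\} , $$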
with \(s_{0}\geq 1\) satisfying \(\{k\leq s_{0}:{\boldsymbol{\Gamma}}_{k}<{\boldsymbol{\Gamma}}_{k+1}\}\neq \emptyset \). Then:
-
(i)
\(\phi (s_{0})\leq \phi (s_{0}+1)\leq \cdots \) and \(\phi (s)\to \infty \);
-
(ii)
\({\boldsymbol{\Gamma}}_{\phi (s)}\leq{\boldsymbol{\Gamma}}_{\phi (s)+1}\) and \({\boldsymbol{\Gamma}}_{s}\leq{\boldsymbol{\Gamma}}_{\phi (s)+1} \ \forall s\geq s_{0}\).
3 Convergence analysis
For the convergence analysis of our proposed rule for treating BSPVIP (1.6) with the CFPP constraint, we assume throughout that
-
\(T:{\mathcal {H}}_{1}\to{\mathcal {H}}_{2}\) is a nonzero bounded linear operator with the adjoint \(T^{*}\), and \(S:{\mathcal {H}}_{2}\to{\mathcal {H}}_{2}\) is τ-demimetric with \(I-S\) being demiclosed at zero, where \(\tau \in (-\infty , 1)\).
-
\(A:{\mathcal {H}}_{1}\to{\mathcal {H}}_{1}\) is a pseudomonotone and L-Lipschitz continuous mapping satisfying the condition: \(\|Au\|\leq \liminf_{t\to \infty}\|Au_{t}\|\) for each \(\{u_{t}\}\subset C\) with \(u_{t}\rightharpoonup u\).
-
\(\{S_{i}\}^{N}_{i=1}\) is a finite family of nonexpansive self-mappings on \({\mathcal {H}}_{1}\) such that \(\Xi :=\bigcap^{N}_{i=1} \operatorname{Fix}(S_{i})\cap\Omega \neq \emptyset \) with \(\Omega :=\{z\in\operatorname{VI}(C,A):Tz\in\operatorname{Fix}(S)\}\). In addition, when required, we write \(S_{t}:=S_{t \operatorname{mod} N}\), \(t = 1, 2, 3, \ldots \) .
-
\(f:{\mathcal {H}}_{1}\to{\mathcal {H}}_{1}\) is a contraction with constant \(\nu \in [0,1)\), and \(F:{\mathcal {H}}_{1}\to{\mathcal {H}}_{1}\) is η-strongly monotone and κ-Lipschitzian such that \(\nu <\zeta :=1-\sqrt{1-\rho (2\eta -\rho \kappa ^{2})}\) for \(\rho \in (0,\frac{2\eta}{\kappa ^{2}})\).
-
\(\{\beta _{t}\},\{\gamma _{t}\},\{\varepsilon _{t}\} \subset (0, \infty ) \) such that \(\beta _{t}+\gamma _{t}<1\), \(\sum^{\infty}_{t=1} \beta _{t}=\infty \), \(\lim_{t\to \infty}\beta _{t}=0\), \(0<\liminf_{t\to \infty}\gamma _{t} \leq \limsup_{t\to \infty}\gamma _{t}<1\) and \(\varepsilon _{t}=o(\beta _{t})\).
Algorithm 3.1
(Triple-adaptive inertial subgradient extragradient rule)
Initialization: Let \(\lambda _{1}>0\), \(\epsilon >0\), \(\sigma \geq 0\), \(\mu \in (0,1)\), \(\alpha \in [0,1)\), and \(x_{0},x_{1}\in{\mathcal {H}}_{1}\) be arbitrary.
Iterative steps: Calculate \(x_{t+1}\) as follows:
Step 1. Given the iterates \(x_{t-1}\) and \(x_{t}\) (\(t\geq 1\)), choose \(\alpha _{t}\) such that \(0\leq \alpha _{t}\leq \bar{\alpha}_{t}\), where
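in the standard form of such inertial rules (consistent with Remark 3.1 below),
$$ \bar{\alpha}_{t}= \begin{cases} \min \{\alpha ,\frac{\varepsilon _{t}}{ \Vert x_{t}-x_{t-1} \Vert }\}, & \text{if } x_{t}\neq x_{t-1}, \\ \alpha , & \text{otherwise}. \end{cases} $$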
Step 2. Compute \(w_{t}=S_{t}x_{t}+\alpha _{t}(S_{t}x_{t}-S_{t}x_{t-1})\) and \(y_{t}=P_{C}(w_{t}-\lambda _{t}Aw_{t})\).
Step 3. Construct \(C_{t}:=\{y\in{\mathcal {H}}_{1}:\langle w_{t}-\lambda _{t}Aw_{t}-y_{t},y_{t}-y \rangle \geq 0\}\), and compute \(v_{t}=P_{C_{t}}(w_{t}-\lambda _{t}Ay_{t})\) and \(z_{t}=v_{t}-\sigma _{t}T^{*}(I-S)Tv_{t}\).
Step 4. Calculate \(x_{t+1}=\beta _{t}f(x_{t})+\gamma _{t}x_{t}+((1-\gamma _{t})I-\beta _{t} \rho F)z_{t}\) and update
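the step size via the adaptive rule (3.2), which in its standard form reads
$$ \lambda _{t+1}= \begin{cases} \min \{\frac{\mu \Vert w_{t}-y_{t} \Vert }{ \Vert Aw_{t}-Ay_{t} \Vert },\lambda _{t}\}, & \text{if } Aw_{t}\neq Ay_{t}, \\ \lambda _{t}, & \text{otherwise}, \end{cases} $$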
and for any fixed \(\epsilon >0\), \(\sigma _{t}\) is chosen to be the bounded sequence satisfying
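(when \(Tv_{t}\neq STv_{t}\)) a two-sided bound of the standard type, consistent with Lemma 3.2 and the estimates (3.13)–(3.15) below,
$$ \epsilon \leq \sigma _{t}\leq \frac{(1-\tau ) \Vert (I-S)Tv_{t} \Vert ^{2}}{ \Vert T^{*}(I-S)Tv_{t} \Vert ^{2}}-\epsilon ; $$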
otherwise set \(\sigma _{t}=\sigma \geq 0\).
Set \(t:=t+1\) and go to Step 1.
Remark 3.1
We have from (3.1) that \(\lim_{t\to \infty}\frac{\alpha _{t}}{\beta _{t}}\|x_{t}-x_{t-1}\| =0\). Indeed, we have \(\alpha _{t}\|x_{t}-x_{t-1}\|\leq \varepsilon _{t} \ \forall t\geq 1\), which together with \(\lim_{t\to \infty} \frac{\varepsilon _{t}}{\beta _{t}}=0\) implies that \(\frac{\alpha _{t}}{\beta _{t}}\|x_{t}-x_{t-1}\|\leq \frac{\varepsilon _{t}}{\beta _{t}}\to 0\). It is easy to see that \(C_{t}\) is closed and convex. Furthermore, \(C_{t} \neq \emptyset \) since \(C \subset C_{t}\) and \(C\neq \emptyset \). Hence, \(\{v_{t}\}\) is well defined.
Lemma 3.1
The step-size sequence \(\{\lambda _{t}\}\) is nonincreasing and satisfies \(\lambda _{t}\geq \lambda :=\min \{\lambda _{1},\frac {\mu}{L}\} \ \forall t\geq 1\); consequently, \(\lim_{t\to \infty}\lambda _{t}\) exists and \(\lim_{t\to \infty}\lambda _{t}\geq \lambda \).
Proof
By (3.2), we get \(\lambda _{t}\geq \lambda _{t+1} \ \forall t\geq 1\). Now, observe that
□
We prove the following lemmas.
Lemma 3.2
The step size \(\sigma _{t}\) formulated in (3.3) is well defined.
Proof
It suffices to show that \(\|T^{*}(Tv_{t}-STv_{t})\|^{2}\neq 0\). Take \(p\in\Xi \) arbitrarily. Since S is a τ-demimetric mapping, we obtain
If \(Tv_{t}\neq STv_{t}\), then \(\|Tv_{t}-STv_{t}\|^{2}>0\), and the above inequality yields \(\langle T^{*}(Tv_{t}-STv_{t}),v_{t}-p\rangle \geq \frac{1-\tau}{2}\|Tv_{t}-STv_{t}\|^{2}>0\), so that \(T^{*}(Tv_{t}-STv_{t})\neq 0\). Thus, \(\|T^{*}(Tv_{t}-STv_{t})\|^{2}>0\). □
Lemma 3.3
The sequences \(\{w_{t}\}\), \(\{y_{t}\}\), \(\{v_{t}\}\) satisfy
Proof
Observe that
Note that (3.5) holds trivially when \(\langle Aw_{t}-Ay_{t},v_{t}-y_{t}\rangle \leq 0\); otherwise, (3.5) follows from (3.2). Also, \(\forall \hat{p}\in\Xi \subset C\subset C_{t}\),
which hence yields
Since \(\hat{p}\in\operatorname{VI}(C,A)\), we get \(\langle A\hat{p},\breve{x}-\hat{p}\rangle \geq 0 \ \forall \breve{x} \in C\). Pseudomonotonicity of A implies \(\langle Au,u-\hat{p}\rangle \geq 0 \ \forall u\in C\). Letting \(u:=y_{t}\in C\) gives \(\langle Ay_{t},\hat{p}-y_{t}\rangle \leq 0\). Thus,
Substituting (3.7) into (3.6), we obtain
Since \(v_{t}=P_{C_{t}}(w_{t}-\lambda _{t}Ay_{t})\), we have that \(v_{t}\in C_{t}\), and hence
which together with (3.5) implies that
Therefore, substituting (3.9) into (3.8), the result follows. □
Lemma 3.4
\(\{x_{t}\}\) is bounded.
Proof
First of all, we show that \(P_{\Xi}(f+I-\rho F)\) is a contraction. Indeed, for any \(x,y\in{\mathcal {H}}_{1}\), by Lemma 2.2, we have
which implies that \(P_{\Xi}(f+I-\rho F)\) is a contraction. Banach’s contraction mapping principle guarantees that \(P_{\Xi}(f+I-\rho F)\) has a unique fixed point, say \(q^{*}\in{\mathcal {H}}_{1}\), i.e., \(q^{*}=P_{\Xi}(f+I-\rho F) q^{*}\). Hence, there exists a unique \(q^{*}\in\Xi \) that solves
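the variational inequality
$$ \bigl\langle (\rho F-f)q^{*},p-q^{*}\bigr\rangle \geq 0 \quad \forall p\in\Xi . \qquad (3.10) $$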
This also means that there exists a unique solution \(q^{*}\in\Xi \) to BSPVIP (1.6) with the CFPP constraint.
Now, by the definition of \(w_{t}\) in Algorithm 3.1, we have
From Remark 3.1, we know that \(\lim_{t\to \infty}\frac{\alpha _{t}}{\beta _{t}}\|x_{t}-x_{t-1}\|=0\). This means that \(\{\frac{\alpha _{t}}{\beta _{t}}\|x_{t}-x_{t-1}\|\}\) is bounded. Thus, \(\exists M_{1}>0\) such that \(\frac{\alpha _{t}}{\beta _{t}}\|x_{t}-x_{t-1}\|\leq M_{1} \ \forall t \geq 1\). Hence,
From Step 3 of Algorithm 3.1, using the definition of \(z_{t}\), we get
Since the operator S is τ-demimetric, from (3.12), we get
However, from the step size \(\sigma _{t}\) in (3.3), we get
if and only if
Using \(0<\epsilon \leq \sigma _{t}\) in (3.3), we have that \(-\epsilon ^{2}\geq -\sigma _{t}\epsilon \), and hence
Combining (3.13), (3.14), and (3.15), we obtain
In addition, by Lemma 3.1, we have \(\lim_{t\to \infty}\lambda _{t}\geq \lambda :=\min \{\lambda _{1}, \frac{\mu}{L}\}\), which leads to \(\lim_{t\to \infty}(1-\mu \frac{\lambda _{t}}{\lambda _{t+1}})=1- \mu >0\). Without loss of generality, we may assume that \(1-\mu \frac{\lambda _{t}}{\lambda _{t+1}}>0 \ \forall t\geq 1\). Thus, by Lemma 3.3, we get
Combining (3.11), (3.16), and (3.17), we obtain
Since \(\beta _{t}+\gamma _{t}<1 \ \forall t\geq 1\), we get \(\frac{\beta _{t}}{1-\gamma _{t}}<1 \ \forall t\geq 1\). So, from Lemma 2.2 and (3.18) it follows that
Thus, \(\|x_{t}-q^{*}\|\leq \max \{\|x_{1}-q^{*}\|, \frac{M_{1}+\|f(q^{*})-q^{*}\|+\|(I-\rho F)q^{*}\|}{\zeta -\nu} \}\) for all \(t\geq 1\). Thus, \(\{x_{t}\}\) is bounded, and so are the sequences \(\{v_{t}\}\), \(\{w_{t}\}\), \(\{y_{t}\}\), \(\{z_{t}\}\), \(\{f(x_{t})\}\), \(\{Fz_{t}\}\), \(\{S_{t}x_{t}\}\). □
Lemma 3.5
Let \(\{v_{t}\}\), \(\{w_{t}\}\), \(\{x_{t}\}\), \(\{y_{t}\}\), \(\{z_{t}\}\) be the sequences generated by Algorithm 3.1. Suppose that \(x_{t}-x_{t+1}\to 0\), \(w_{t}-x_{t}\to 0\), \(w_{t}-y_{t}\to 0\), and \(v_{t}-z_{t}\to 0\). Then \(\omega _{w}(\{x_{t}\})\subset\Xi \) with \(\omega _{w}(\{x_{t}\})=\{z\in{\mathcal {H}}_{1}:x_{t_{k}} \rightharpoonup z\textit{ for some }\{x_{t_{k}}\}\subset \{x_{t}\}\}\).
Proof
Take an arbitrary fixed \(z\in \omega _{w}(\{x_{t}\})\). Then \(\exists \{x_{t_{k}}\}\subset \{x_{t}\}\) such that \(x_{t_{k}} \rightharpoonup z\in{\mathcal {H}}_{1}\). Since \(w_{t}-x_{t}\to 0\), it follows that \(w_{t_{k}}\rightharpoonup z\in{\mathcal {H}}_{1}\) as well. In what follows, we claim that \(z\in\Xi \). In fact, from Algorithm 3.1, we get \(w_{t}-x_{t}=S_{t}x_{t}-x_{t}+\alpha _{t}(S_{t}x_{t}-S_{t}x_{t-1}) \ \forall t\geq 1\), and hence
Using Remark 3.1 and the assumption \(w_{t}-x_{t}\to 0\), we have
Also, from \(y_{t}=P_{C}(w_{t}-\lambda _{t}Aw_{t})\), we have \(\langle w_{t}-\lambda _{t}Aw_{t}-y_{t},y_{t}-y\rangle \geq 0 \ \forall y\in C\), and hence
Observe that \(\lambda _{t}\geq \min \{\lambda _{1},\frac{\mu}{L}\}\). So, from (3.20), we get \(\liminf_{k\to \infty}\langle Aw_{t_{k}},y-w_{t_{k}}\rangle \geq 0 \ \forall y\in C\). Meanwhile, observe that \(\langle Ay_{t},y-y_{t}\rangle =\langle Ay_{t}-Aw_{t},y-w_{t}\rangle + \langle Aw_{t},y-w_{t}\rangle +\langle Ay_{t},w_{t}-y_{t}\rangle \). Since \(w_{t}-y_{t}\to 0\), we obtain \(Aw_{t}-Ay_{t}\to 0\), which together with (3.20) yields \(\liminf_{k\to \infty}\langle Ay_{t_{k}},v-y_{t_{k}}\rangle \geq 0 \ \forall v\in C\).
For \(i=1,2,\ldots ,N\),
Hence, from (3.19) and the assumption \(x_{t}-x_{t+1}\to 0\), we get \(\lim_{t\to \infty}\|x_{t}-S_{t+i}x_{t}\|=0\) for \(i=1,2,\ldots ,N\). This immediately implies that
Pick \(\{\varsigma _{k}\}\subset (0,1)\), \(\varsigma _{k}\downarrow 0\). For all \(k\geq 1\), let \(m_{k}\) be the smallest positive integer such that
Since \(\{\varsigma _{k}\}\) is nonincreasing, it is clear that \(\{m_{k}\}\) is nondecreasing.
Again from the assumption on A, we know that \(\liminf_{k\to \infty}\|Ay_{t_{k}}\|\geq \|Az\|\). If \(Az=0\), then z is a solution, i.e., \(z\in\operatorname{VI}(C,A)\). Let \(Az\neq 0\). Then we have \(0<\|Az\|\leq \liminf_{k\to \infty}\|Ay_{t_{k}}\|\). Without loss of generality, we may assume that \(Ay_{t_{k}}\neq 0 \ \forall k\geq 1\). Noticing \(\{y_{m_{k}}\}\subset \{y_{t_{k}}\}\) and \(Ay_{t_{k}}\neq 0 \ \forall k\geq 1\), set \(u_{m_{k}}=\frac{Ay_{m_{k}}}{\|Ay_{m_{k}}\|^{2}}\), and then \(\langle Ay_{m_{k}},u_{m_{k}}\rangle =1 \ \forall k\geq 1\). So, from (3.22), we get \(\langle Ay_{m_{k}},y+\varsigma _{k}u_{m_{k}}-y_{m_{k}}\rangle \geq 0 \ \forall k\geq 1\). By the pseudomonotonicity of A, we obtain \(\langle A(y+\varsigma _{k}u_{m_{k}}),y+\varsigma _{k}u_{m_{k}}-y_{m_{k}} \rangle \geq 0 \ \forall k\geq 1\). This immediately yields
From \(x_{t_{k}}\rightharpoonup z\) and \(x_{t}-y_{t}\to 0\) (due to \(w_{t}-x_{t}\to 0\) and \(w_{t}-y_{t}\to 0\)), we obtain \(y_{t_{k}}\rightharpoonup z\). So, \(\{y_{t}\}\subset C\) guarantees \(z\in C\). Since \(\{y_{m_{k}}\}\subset \{y_{t_{k}}\}\) and \(\varsigma _{k}\downarrow 0\), we have \(0\leq \limsup_{k\to \infty}\|\varsigma _{k}u_{m_{k}}\|=\limsup_{k \to \infty}\frac{\varsigma _{k}}{\|Ay_{m_{k}}\|} \leq \frac{\limsup_{k\to \infty}\varsigma _{k}}{\liminf_{k\to \infty}\|Ay_{t_{k}}\|}=0\). Hence, we get \(\varsigma _{k}u_{m_{k}}\to 0\).
Next, we show that \(z\in\Xi \). Indeed, using (3.21), we have \(x_{t_{k}}-S_{l}x_{t_{k}}\to 0\) for \(l=1,2,\ldots ,N\). By Lemma 2.5, \(I-S_{l}\) is demiclosed at zero for \(l=1,2,\ldots ,N\). Thus, from \(x_{t_{k}}\rightharpoonup z\), we get \(z\in\operatorname{Fix}(S_{l})\). Since l is an arbitrary element in the finite set \(\{1,2,\ldots ,N\}\), it follows that \(z\in \bigcap^{N}_{i=1}\operatorname{Fix}(S_{i})\). Also, letting \(k\to \infty \), we have that the right-hand side of (3.23) tends to zero. Thus, \(\langle A\vec{y},\vec{y}-z\rangle =\liminf_{k\to \infty}\langle A \vec{y},\vec{y}-y_{m_{k}}\rangle \geq 0 \ \forall \vec{y}\in C\). By Lemma 2.3 we have \(z\in\operatorname{VI}(C,A)\). Furthermore, we claim \(Tz\in\operatorname{Fix}(S)\). In fact, noticing \(z_{t}=v_{t}-\sigma _{t}T^{*}(I-S)Tv_{t}\), from \(0<\epsilon \leq \sigma _{t}\) and \(v_{t}-z_{t}\to 0\), we get
which together with the τ-demimetricness of S leads to
Noticing \(x_{t+1}=\beta _{t}f(x_{t})+\gamma _{t}x_{t}+((1-\gamma _{t})I-\beta _{t} \rho F)z_{t}\), we have
Since \(0<\liminf_{t\to \infty}(1-\gamma _{t})\), \(x_{t}-x_{t+1}\to 0\) and \(\beta _{t}\to 0\), from the boundedness of \(\{x_{t}\}\) and \(\{z_{t}\}\), we get \(\lim_{t\to \infty}\|z_{t}-x_{t}\|=0\), which hence yields
From \(x_{t_{k}}\rightharpoonup z\), we get \(v_{t_{k}}\rightharpoonup z\). It follows that \(Tv_{t_{k}}\rightharpoonup Tz\). From (3.24) one derives \(Tz\in\operatorname{Fix}(S)\). Therefore, \(z\in \bigcap^{N}_{i=1}\operatorname{Fix}(S_{i})\cap\Omega =\Xi \). This completes the proof. □
Theorem 3.1
The sequence \(\{x_{t}\}\) generated by Algorithm 3.1 converges strongly to the unique solution \(q^{*}\in\Xi \) of BSPVIP (1.6) with the CFPP constraint.
Proof
First of all, in terms of Lemma 3.4 we obtain that \(\{x_{t}\}\) is bounded. From its proof we know that there exists a unique solution \(q^{*}\in\Xi \) of BSPVIP (1.6) with the CFPP constraint, i.e., VIP (3.10) has a unique solution \(q^{*}\in\Xi \).
Step 1. We claim that
for some \(M_{4}>0\). Also
Using Lemma 2.2, we get
(due to \(\beta _{t}\nu +\gamma _{t}+(1-\beta _{t}\zeta -\gamma _{t})=1-\beta _{t}( \zeta -\nu )\leq 1\)), where \(\sup_{t\geq 1}2\|(f-\rho F)q^{*}\|\|x_{t}-q^{*}\|\leq M_{2}\) for some \(M_{2}>0\). Substituting (3.16) into (3.25), by Lemma 3.3 we get
Also, from (3.18) we have
where \(\sup_{t\geq 1}(2M_{1}\|x_{t}-q^{*}\|+\beta _{t}M^{2}_{1})\leq M_{3}\) for some \(M_{3}>0\). Combining (3.27) and (3.28), we obtain
where \(M_{4}:=M_{2}+M_{3}\). This immediately implies that
Step 2. We claim that
for some \(M>0\). Indeed, we have
Combining (3.18), (3.25), and (3.30), we have
where \(\sup_{t\geq 1}\{2\|x_{t}-q^{*}\|+\alpha _{t}\|x_{t}-x_{t-1}\|\} \leq M\).
Step 3. We show that \(\{x_{t}\}\) converges strongly to \(q^{*}\in\Xi \). Put \({\boldsymbol{\Gamma}}_{t}=\|x_{t}-q^{*}\|^{2}\).
Case 1. Assume that there exists an integer \(t_{0}\geq 1\) such that \(\{{\boldsymbol{\Gamma}}_{t}\}_{t\geq t_{0}}\) is nonincreasing. Then \(\lim_{t\to \infty}{\boldsymbol{\Gamma}}_{t}=d<+\infty \) and \(\lim_{t\to \infty}({\boldsymbol{\Gamma}}_{t}-{\boldsymbol{\Gamma}}_{t+1})=0\). By (3.29), one obtains
Since \(\lim_{t\to \infty}(1-\mu \frac{\lambda _{t}}{\lambda _{t+1}})=1- \mu >0\), \(\liminf_{t\to \infty}(1-\gamma _{t})>0\), \(\beta _{t}\to 0\), and \({\boldsymbol{\Gamma}}_{t}-{\boldsymbol{\Gamma}}_{t+1}\to 0\), one has
Noticing \(z_{t}=v_{t}-\sigma _{t}T^{*}(I-S)Tv_{t}\) and the boundedness of \(\{\sigma _{t}\}\), from (3.32) we get
and hence
Moreover, noticing \(x_{t+1}-q^{*}=\gamma _{t}(x_{t}-q^{*})+(1-\gamma _{t})(z_{t}-q^{*})+ \beta _{t}(f(x_{t})-\rho Fz_{t})\), we obtain from (3.18) that
which immediately leads to
Since \(0<\liminf_{t\to \infty}\gamma _{t}\leq \limsup_{t\to \infty} \gamma _{t}<1\), \(\beta _{t}\to 0\), \({\boldsymbol{\Gamma}}_{t}-{\boldsymbol{\Gamma}}_{t+1}\to 0\), and \(\lim_{t\to \infty}{\boldsymbol{\Gamma}}_{t}=d<+\infty \), from the boundedness of \(\{x_{t}\}\), \(\{z_{t}\}\), we infer that
So, it follows from (3.34) that
Also, from Algorithm 3.1 we obtain that
In addition, the boundedness of \(\{x_{t}\}\) means there is \(\{x_{t_{k}}\}\subset \{x_{t}\}\) such that
Since \(\{x_{t}\}\) is bounded, we may assume that \(x_{t_{k}}\rightharpoonup \widetilde{z}\). We get from (3.37)
Since \(x_{t}-x_{t+1}\to 0\), \(w_{t}-x_{t}\to 0\), \(w_{t}-y_{t}\to 0\), and \(v_{t}-z_{t}\to 0\), by Lemma 3.5 we deduce that \(\widetilde{z}\in \omega _{w}(\{x_{t}\})\subset\Xi \). Hence, from (3.10) and (3.38), one gets
which together with (3.36) leads to
Note that \(\{\beta _{t}(\zeta -\nu )\}\subset [0,1]\), \(\sum^{\infty}_{t=1}\beta _{t}( \zeta -\nu )=\infty \), and
By Lemma 2.4 and (3.31), \(\lim_{t\to \infty}\|x_{t}-q^{*}\|^{2}=0\).
Case 2. Suppose that \(\exists \{{\boldsymbol{\Gamma}}_{t_{k}}\}\subset \{{\boldsymbol{\Gamma}}_{t}\}\) such that \({\boldsymbol{\Gamma}}_{t_{k}}<{\boldsymbol{\Gamma}}_{t_{k}+1} \ \forall k\in{\mathcal {N}}\), where \({\mathcal {N}}\) is the set of all positive integers. Define the mapping \(\phi :{\mathcal {N}}\to{\mathcal {N}}\) by
By Lemma 2.6, we get
From (3.29) we have
which immediately yields
Similar to Case 1,
By (3.31),
and so
Thus, \(\lim_{t\to \infty}\|x_{\phi (t)}-q^{*}\|^{2}=0\). Also note that
Owing to \({\boldsymbol{\Gamma}}_{t}\leq{\boldsymbol{\Gamma}}_{\phi (t)+1}\), we get
i.e., \(x_{t}\to q^{*}\) as \(t\to \infty \). □
Remark 3.2
-
(i)
The results in [21] are extended to the setting of BSPVIP (1.6) with the CFPP constraint, i.e., the problem of finding \(q^{*}\in\Xi =\bigcap^{N}_{i=1}\operatorname{Fix}(S_{i})\cap\Omega \) such that \(\langle (\rho F-f)q^{*},p-q^{*}\rangle \geq 0 \ \forall p\in\Xi \), where \(\Omega =\{z\in\operatorname{VI}(C,A):Tz\in\operatorname{Fix}(S)\}\) with A being a pseudomonotone and Lipschitzian mapping. Moreover, the method of [21] is extended to our triple-adaptive inertial subgradient extragradient rule for settling BSPVIP (1.6) with the CFPP constraint, which is built on the subgradient extragradient method with adaptive step sizes, an accelerated inertial approach, the hybrid deepest-descent method, and the viscosity approximation technique. In [21] the following holds:
$$ x_{t}\to q^{*}\in\Omega =\bigcap ^{N}_{i=1}\operatorname{Fix}(S_{i})\cap\operatorname{VI}(C,A)\quad \Leftrightarrow\quad \Vert x_{t}-x_{t+1} \Vert \to 0 $$with \(q^{*}=P_{\Omega}(I-\rho F+f)q^{*}\). In our results, Lemma 2.6 implies that
$$ x_{t}\to q^{*}\in\Xi =\bigcap ^{N}_{i=1}\operatorname{Fix}(S_{i})\cap \bigl\{ z \in\operatorname{VI}(C,A):Tz\in\operatorname{Fix}(S)\bigr\} $$with \(q^{*} = P_{\Xi}(I-\rho F+f)q^{*}\).
-
(ii)
BSQVIP (1.5) (i.e., the problem of finding \(q^{*}\in\Omega \) such that \(\langle Fq^{*},p-q^{*}\rangle \geq 0 \ \forall p\in\Omega \), where \(\Omega =\{z\in\operatorname{VI}(C,A):Tz\in\operatorname{Fix}(S)\}\) with A being quasimonotone and Lipschitzian mapping) in [29] is extended to develop BSPVIP (1.6) with the CFPP constraint, i.e., the problem of finding \(q^{*}\in\Xi =\bigcap^{N}_{i=1}\operatorname{Fix}(S_{i})\cap\Omega \) such that \(\langle (\rho F-f)q^{*},p-q^{*}\rangle \geq 0 \ \forall p\in\Xi \), where \(\Omega =\{z\in\operatorname{VI}(C,A):Tz\in\operatorname{Fix}(S)\}\) with A being pseudomonotone and Lipschitzian mapping.
4 Numerical implementation
In this section, we compare our proposed Algorithm 3.1 with Algorithm 1 of [27] using the example below. All codes were written in MATLAB R2017a and run on a desktop PC with an Intel(R) Core(TM) i7-8700U CPU @ 3.20 GHz and 8.00 GB of RAM.
Suppose that \(H_{1}=H_{2}=L_{2}([0,1])\) is endowed with the inner product \(\langle x,y\rangle = \int _{0}^{1} x(t)y(t)\,dt\), \(\forall x, y \in L_{2}([0,1])\), and the induced norm \(\|x\|:= (\int _{0}^{1} |x(t)|^{2}\,dt )^{1/2}\), \(\forall x \in L_{2}([0,1])\). Let \(T: L_{2}([0,1])\rightarrow L_{2}([0,1])\) be defined by
Then T is a bounded linear operator with adjoint
Let \(C = \{x \in L_{2}([0,1]): \langle t+1,x\rangle \leq 1\}\). Then C is a nonempty closed and convex subset. The projection \(P_{C}\) is given as
Also, let \(Q= \{x \in L_{2}([0,1]): \|x\|\leq 2\}\). Then Q is a nonempty closed and convex subset. \(P_{Q}\) is
Let \(A:L_{2}([0,1])\rightarrow L_{2}([0,1])\) be defined by
Then A is pseudomonotone and Lipschitz continuous but not monotone. Also define \(B:L_{2}([0,1])\rightarrow L_{2}([0,1])\) by
Take \(f(x)=\frac{x}{2}\), \(x \in L_{2}([0,1])\), \(\beta _{t}=\frac{1}{t+1}\) and \(F=I\).
To test the algorithms, we choose the following parameters. For our algorithm, we used \(\lambda _{1} = 0.06\), \(\epsilon = 10^{-4}\), \(\sigma = 0.5\), \(\mu = 0.06\), \(\alpha = 10^{-3}\), \(\varepsilon _{t} = (t+1)^{-2}\), \(\beta _{t} = (t+1)^{-1}\), \(\gamma _{t} = 2t(5t+9)^{-1}\), \(\rho = 0.07\). For Anh’s algorithm [27], we choose \(\eta = 0.06\), \(\gamma = 0.05\), \(\mu = 0.07\), \(\delta _{t} = 10^{-3}\), \(\lambda _{t} = 2t(5t+1)^{-1}\), \(\alpha _{t} = (t+1)^{-1}\). We used \(Err = \|x_{t+1} - x_{t}\| < 10^{-4}\) as the stopping criterion for each algorithm. We test the algorithms using the following starting points:
Case I: \(x_{0} = 2t^{2} +1\), \(x_{1} = \exp (3t) \)
Case II: \(x_{0} = 2t^{2} -2t+1\), \(x_{1} = -4(t^{3} +2t -3)\);
Case III: \(x_{0} = t^{4} -1\), \(x_{1} = t^{5} -9 \);
Case IV: \(x_{0} = \frac{1}{4}t^{2} +2t\), \(x_{1} = \frac{1}{3}\cos (2t)\).
The numerical results are shown in Table 1 and Fig. 1.
Algorithm 4.1
Initialization: Let \(\lambda _{1}>0\), \(\epsilon >0\), \(\sigma \geq 0\), \(\mu \in (0,1)\), \(\alpha \in [0,1)\), and \(x_{0},x_{1}\in{\mathcal {H}}_{1}\) be arbitrary.
Iterative steps: Calculate \(x_{t+1}\) as follows:
Step 1. Given the iterates \(x_{t-1}\) and \(x_{t}\) (\(t\geq 1\)), choose \(\alpha _{t}\) such that \(0\leq \alpha _{t}\leq \bar{\alpha}_{t}\), where
Step 2. Compute \(w_{t}=x_{t}+\alpha _{t}(x_{t}-x_{t-1})\) and \(y_{t}=P_{C}(w_{t}-\lambda _{t}Aw_{t})\).
Step 3. Construct \(C_{t}:=\{y\in{\mathcal {H}}_{1}:\langle w_{t}-\lambda _{t}Aw_{t}-y_{t},y_{t}-y \rangle \geq 0\}\), and compute \(v_{t}=P_{C_{t}}(w_{t}-\lambda _{t}Ay_{t})\) and \(z_{t}=v_{t}-\sigma _{t}T^{*}(I-S)Tv_{t}\), where \(S=P_{Q}(I-\varphi B)-\varphi (B(P_{Q}(I-\varphi B))-B)\) and \(\varphi \in (0,1)\).
Step 4. Calculate \(x_{t+1}=\beta _{t}\frac{x_{t}}{2}+\gamma _{t}x_{t}+((1-\gamma _{t})I- \beta _{t}\rho )z_{t}\) and update
and for any fixed \(\epsilon >0\), \(\sigma _{t}\) is chosen to be the bounded sequence satisfying
otherwise set \(\sigma _{t}=\sigma \geq 0\).
Set \(t:=t+1\) and go to Step 1.
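For readers who wish to experiment with the scheme, the following MATLAB sketch implements the core loop of Algorithm 3.1 in a simplified finite-dimensional setting. It is illustrative only: the two-dimensional stand-ins for A, C, T, S, and \(S_{i}\) below are our own choices (not the \(L_{2}([0,1])\) data of this section), and the step size \(\sigma _{t}\) is fixed to the constant σ rather than chosen via (3.3). The numerical parameters mirror those listed above.

```matlab
% Minimal finite-dimensional sketch of Algorithm 3.1 (illustrative only).
% The choices of A, C, T, S below are simple stand-ins, not the paper's data.
n = 2;
A  = @(x) [x(1) + x(2); x(2) - x(1)];              % monotone, Lipschitz stand-in for the lower-level operator
PC = @(x) x - max(dot([1;1], x) - 1, 0)/2 * [1;1]; % projection onto C = {x : <[1;1],x> <= 1}
T  = eye(n);                                       % bounded linear operator (identity stand-in)
S  = @(y) 0.5*y;                                   % demimetric stand-in with Fix(S) = {0}
F  = @(x) x;  f = @(x) x/2;  rho = 0.07;           % upper-level operators, as in Section 4
lam = 0.06;  mu = 0.06;  alph = 1e-3;  sig = 0.5;  % lambda_1, mu, alpha, sigma
x_prev = [1; 2];  x = [0; -1];                     % x_0 and x_1
for t = 1:200
    bt = 1/(t + 1);  gt = 2*t/(5*t + 9);  et = 1/(t + 1)^2;   % beta_t, gamma_t, epsilon_t
    at = min(alph, et/max(norm(x - x_prev), eps));            % inertial parameter alpha_t (cf. Remark 3.1)
    w  = x + at*(x - x_prev);                                 % Step 2 with S_i = I
    y  = PC(w - lam*A(w));
    d  = w - lam*A(w) - y;                                    % Step 3: C_t = {z : <d, y - z> >= 0}
    u  = w - lam*A(y);
    v  = u - max(dot(d, u - y), 0)/max(dot(d, d), eps) * d;   % projection of u onto the half-space C_t
    r  = T*v - S(T*v);
    z  = v - sig*(T'*r);                                      % sigma_t fixed to sigma (simplification of (3.3))
    x_next = bt*f(x) + gt*x + (1 - gt)*z - bt*rho*F(z);       % Step 4
    if norm(A(w) - A(y)) > 0                                  % adaptive step size (cf. (3.2))
        lam = min(mu*norm(w - y)/norm(A(w) - A(y)), lam);
    end
    x_prev = x;  x = x_next;
end
disp(norm(x))   % distance of the final iterate from the origin
```

In this toy setting \(\Xi =\{0\}\) (since \(T=I\) and \(\operatorname{Fix}(S)=\{0\}\)), so the final iterate should be close to the origin.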
Availability of data and materials
Not applicable.
Abbreviations
- VIP:
-
Variational inequality problem
- BSPVIP:
-
Bilevel split pseudomonotone variational inequality problem
- CFPP:
-
Common fixed point problem
References
Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Ekon. Mat. Metody 12, 747–756 (1976)
Yao, Y., Liou, Y.C., Kang, S.M.: Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method. Comput. Math. Appl. 59, 3472–3480 (2010)
Shehu, Y., Iyiola, O.S.: Strong convergence result for monotone variational inequalities. Numer. Algorithms 76, 259–282 (2017)
Shehu, Y., Dong, Q.L., Jiang, D.: Single projection method for pseudo-monotone variational inequality in Hilbert spaces. Optimization 68, 385–409 (2019)
Ceng, L.C., Petrusel, A., Yao, J.C., Yao, Y.: Hybrid viscosity extragradient method for systems of variational inequalities, fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Fixed Point Theory 19(2), 487–501 (2018)
Maingé, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008)
Ceng, L.C., Liou, Y.C., Wen, C.F., Wu, Y.J.: Hybrid extragradient viscosity method for general system of variational inequalities. J. Inequal. Appl. 2015, 150 (2015)
Ceng, L.C., Wen, C.F.: Relaxed extragradient methods for systems of variational inequalities. J. Inequal. Appl. 2015, 140 (2015)
Ceng, L.C., Sahu, D.R., Yao, J.C.: A unified extragradient method for systems of hierarchical variational inequalities in a Hilbert space. J. Inequal. Appl. 2014, 460 (2014)
Ceng, L.C., Hadjisavvas, N., Wong, N.C.: Strong convergence theorem by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 46, 635–646 (2010)
Ceng, L.C., Ansari, Q.H., Schaible, S.: Hybrid extragradient-like methods for generalized mixed equilibrium problems, systems of generalized equilibrium problems and optimization problems. J. Glob. Optim. 53, 69–96 (2012)
Denisov, S.V., Semenov, V.V., Chabak, L.M.: Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators. Cybern. Syst. Anal. 51, 757–765 (2015)
Dong, Q.L., Lu, Y.Y., Yang, J.F.: The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 65, 2217–2226 (2016)
Ceng, L.C., Liou, Y.C., Wen, C.F.: Extragradient method for convex minimization problem. J. Inequal. Appl. 2014, 444 (2014)
Chen, J.Z., Ceng, L.C., Qiu, Y.Q., Kong, Z.R.: Extra-gradient methods for solving split feasibility and fixed point problems. Fixed Point Theory Appl. 2015, 192 (2015)
Ceng, L.C., Shang, M.: Hybrid inertial subgradient extragradient methods for variational inequalities and fixed point problems involving asymptotically nonexpansive mappings. Optimization 70, 715–740 (2021)
Ceng, L.C., Pang, C.T., Wen, C.F.: Multi-step extragradient method with regularization for triple hierarchical variational inequalities with variational inclusion and split feasibility constraints. J. Inequal. Appl. 2014, 492 (2014)
Ceng, L.C., Latif, A., Ansari, Q.H., Yao, J.C.: Hybrid extragradient method for hierarchical variational inequalities. Fixed Point Theory Appl. 2014, 222 (2014)
Kraikaew, R., Saejung, S.: Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 163, 399–412 (2014)
Ceng, L.C., Liou, Y.C., Wen, C.F.: Composite relaxed extragradient method for triple hierarchical variational inequalities with constraints of systems of variational inequalities. J. Nonlinear Sci. Appl. 10, 2018–2039 (2017)
Ceng, L.C., Petrusel, A., Qin, X., Yao, J.C.: A modified inertial subgradient extragradient method for solving pseudomonotone variational inequalities and common fixed point problems. Fixed Point Theory 21, 93–108 (2020)
Ceng, L.C., Liou, Y.C., Wen, C.F.: A hybrid extragradient method for bilevel pseudomonotone variational inequalities with multiple solutions. J. Nonlinear Sci. Appl. 9, 4052–4069 (2016)
Yang, J., Liu, H., Liu, Z.: Modified subgradient extragradient algorithms for solving monotone variational inequalities. Optimization 67, 2247–2258 (2018)
Vuong, P.T.: On the weak convergence of the extragradient method for solving pseudo-monotone variational inequalities. J. Optim. Theory Appl. 176, 399–409 (2018)
Thong, D.V., Hieu, D.V.: Modified Tseng’s extragradient algorithms for variational inequality problems. J. Fixed Point Theory Appl. 20(4), 152 (2018)
Thong, D.V., Hieu, D.V.: Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 79, 597–610 (2018)
Anh, T.V.: Line search methods for bilevel split pseudomonotone variational inequality problems. Numer. Algorithms 81, 1067–1087 (2019)
Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 59, 301–323 (2012)
Abuchu, J.A., Ugwunnadi, G.C., Darvish, V., Narain, O.K.: Accelerated hybrid subgradient extragradient methods for solving bilevel split quasimonotone variational inequality problems. Optimization (2022, in press)
Huy, P.V., Van, L.H.M., Hien, N.D., Anh, T.V.: Modified Tseng’s extragradient methods with self-adaptive step size for solving bilevel split variational inequality problems. Optimization 71(6), 1721–1748 (2022)
Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York (1984)
Xu, H.K., Kim, T.H.: Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 119, 185–201 (2003)
Funding
Lu-Chuan Ceng was supported by the 2020 Shanghai Leading Talents Program of the Shanghai Municipal Human Resources and Social Security Bureau (20LJ2006100), the Innovation Program of Shanghai Municipal Education Commission (15ZZ068), and the Program for Outstanding Academic Leaders in Shanghai City (15XD1503100). Jen-Chih Yao was partially supported by the grant MOST 111-2115-M-039-001-MY2 to carry out this research work.
Author information
Contributions
All authors contributed equally to this manuscript. All authors reviewed the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Ceng, LC., Ghosh, D., Shehu, Y. et al. Triple-adaptive subgradient extragradient with extrapolation procedure for bilevel split variational inequality. J Inequal Appl 2023, 14 (2023). https://doi.org/10.1186/s13660-023-02913-5
MSC
- 65Y05
- 65K15
- 68W10
- 47H05
- 47H10
Keywords
- Subgradient extragradient process
- Bilevel split pseudomonotone variational inequality problem
- Extrapolation step
- Demimetric mapping
- Fixed point
- Nonexpansive mapping