Outer approximated projection and contraction method for solving variational inequalities
Journal of Inequalities and Applications volume 2023, Article number: 141 (2023)
Abstract
In this paper we focus on solving the classical variational inequality (VI) problem. Most common methods for solving VIs use some kind of projection onto the associated feasible set. Thus, when the involved set is not simple to project onto, the applicability and computational cost of the proposed method become questionable. One such scenario is when the given set is represented as a finite intersection of sublevel sets of convex functions. In this work we develop an outer approximation method that replaces the projection onto the VI’s feasible set by a simple, closed-formula projection onto some “superset”. The proposed method also combines several known ideas, such as the inertial technique and a self-adaptive step size.
Under standard assumptions, strong convergence to the minimum-norm solution is proved, and several numerical experiments validate and exhibit the performance of our scheme.
1 Introduction
Let H be a real Hilbert space with a nonempty, closed, and convex set \(C\subseteq H\). Let \(\Vert \cdot \Vert \) and \(\langle \cdot , \cdot \rangle \) denote the induced norm and inner product on H, respectively, and let \(F:H\to H\) be a single-valued mapping. The variational inequality (VI) problem formulated by (1) is an age-old problem in mathematical analysis with present relevance. It was introduced independently by Fichera [15] and Stampacchia [40], and since then, numerous researchers have developed various methods for solving VIs, with applications in diverse fields such as the sciences, engineering, medicine, cryptography, image processing, signal processing, and optimal control; see [2, 3, 13, 14, 17, 18, 21, 22, 27, 29, 33] for more details. The VI, with solution set denoted by \(VI(C,F)\), is defined as finding a point \(p\in C\) such that
$$ \langle Fp, x-p \rangle \ge 0, \quad \forall x\in C. $$(1)
Two main known methods for solving VIs are projection methods and regularization methods. The foremost projection method is the gradient method (GM), which generates a sequence \(\{x_{n}\}\) according to the following rule:
$$ x_{n+1}=P_{C}(x_{n}-\gamma Fx_{n}), \quad \gamma >0, $$(2)
where \(P_{C}\) is the metric projection of H onto the feasible set C. Although the GM has a simple structure, it has two major drawbacks. The first is the quite strong monotonicity assumption required for its convergence and the second is the need for computing the projection onto the feasible set C, per iteration.
As a way to overcome the first GM’s monotonicity limitation, Korpelevich [31] (Antipin [5] independently) proposed the extragradient method (EGM) that, on the one hand, converges under a weaker monotonicity assumption but requires the evaluation of two projections onto C per iteration. Censor et al. [9] introduced the subgradient extragradient method (SEGM), in which one of the projections is replaced by an easy, closed-formula projection onto a “superset” containing C. Other modifications in this direction can be found, for example, in [32, 47].
Other relevant EGM extensions are Tseng’s extragradient method (TEGM) [49], and the projection and contraction method (PCM) [41], see also [10, 13, 19, 50]. Both methods use only one projection onto C per iteration. The PCM, for example, generates \(\{x_{n}\}\) according to the following rule:
$$ \begin{aligned} & y_{n}=P_{C}(x_{n}-\xi Fx_{n}), \\ & x_{n+1}=x_{n}-\rho \beta _{n} d(x_{n},y_{n}), \end{aligned} $$(3)
where \(\rho \in (0,2)\), \(\xi \in (0,\frac{1}{L})\), L is the Lipschitz constant of F, \(d(x_{n},y_{n}):=x_{n}-y_{n}-\xi (Fx_{n}-Fy_{n})\), \(\beta _{n}:=\frac{\alpha (x_{n},y_{n})}{ \Vert d(x_{n},y_{n}) \Vert ^{2}}\), and \(\alpha (x_{n},y_{n}):=\langle x_{n}-y_{n}, d(x_{n},y_{n}) \rangle \), \(\forall n\ge 1\).
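To make the PCM update rule just described concrete, the following Python sketch implements it for a toy monotone VI; the box feasible set, the operator F, and all parameter values here are our own illustrative choices, not taken from the literature cited above:

```python
import numpy as np

def pcm(F, proj, x0, xi, rho=1.5, iters=500, tol=1e-12):
    # Projection and contraction method: one projection onto C per iteration.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = proj(x - xi * F(x))                 # y_n = P_C(x_n - xi F x_n)
        d = x - y - xi * (F(x) - F(y))          # d(x_n, y_n)
        if np.linalg.norm(d) < tol:
            break
        beta = np.dot(x - y, d) / np.dot(d, d)  # beta_n = alpha / ||d||^2
        x = x - rho * beta * d                  # x_{n+1} = x_n - rho beta_n d
    return x

# Toy VI: F(x) = (A + I)x - b with skew-symmetric A, so F is monotone and
# sqrt(2)-Lipschitz; C is the box [0, 2]^2 and the solution is (0, 1), where F = 0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([1.0, 1.0])
F = lambda x: (A + np.eye(2)) @ x - b
proj_C = lambda z: np.clip(z, 0.0, 2.0)        # closed-form projection onto the box
sol = pcm(F, proj_C, np.array([2.0, 0.0]), xi=0.3)   # xi < 1/L = 1/sqrt(2)
```

Note that only the projection onto the box is needed per iteration; the contraction step is a closed formula in the already-computed quantities.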
As the implementation of the above methods (SEGM, TEGM, and PCM) still requires the computation of \(P_{C}\) at each iteration, the need for a “projection-free” method encouraged many researchers to come up with creative ideas. One such idea is the two-subgradient extragradient method (TSEGM) of Censor et al. [9]. Suppose that the closed and convex set C can be represented as a sublevel set of some convex function \(c:H\to \mathbb{R}\), that is,
$$ C=\bigl\{ x\in H : c(x)\le 0\bigr\} . $$(4)
Denote by \(\partial c(x)\) the subdifferential of the convex function \(c(\cdot )\) at x. The TSEGM generates \(\{x_{n}\}\) according to the following rule:
$$ \begin{aligned} & C_{n}=\bigl\{ w\in H : c(x_{n})+\langle \zeta _{n}, w-x_{n} \rangle \le 0\bigr\} , \\ & y_{n}=P_{C_{n}}(x_{n}-\tau Fx_{n}), \\ & x_{n+1}=P_{C_{n}}(x_{n}-\tau Fy_{n}), \end{aligned} $$(5)
where \(\zeta _{n}\in \partial c(x_{n})\). Observe that if \(\zeta _{n}=0\), then \(C_{n}=H\); otherwise, \(C_{n}\) is a halfspace containing the set C. The convergence of (5) was raised as an open problem in [9].
Recently, Cao and Guo [8] as well as Ma and Wang [34] partially answered this open question by proposing an inertial two-subgradient extragradient method (ITSEGM) and a self-adaptive TSEGM, respectively, for solving Lipschitz continuous and monotone variational inequality problems, with weak convergence guarantees.
Other relevant works related to the subgradient extragradient method include that of He and Wu [24], in which a line search is involved and the two projections onto the set C are replaced by projections onto two particular halfspaces. He et al. [23] proposed a relaxed projection and contraction method where, again, the projections onto the set C are replaced by a projection onto a particular constructible halfspace.
In this paper, we are interested in studying VIs where the feasible set C is given as a finite intersection of sublevel sets of convex functions, defined as follows:
$$ C:=\bigcap_{i=1}^{k}\bigl\{ x\in H : c_{i}(x)\le 0\bigr\} , $$(6)
where k is a positive integer and \(c_{i}:H\to \mathbb{R}\) for all \(i\in I:=\{1,2,\ldots, k\}\) are convex functions.
A very recent result for solving VIs defined over sets of the form (6) is the totally relaxed self-adaptive subgradient extragradient method of He et al. [25].
Remark 1.1
Although all the above results (He and Wu [24], He et al. [23], He et al. [25], Cao and Guo [8], and Ma and Wang [34]) successfully replaced the projections onto C by closed-formula projections onto certain halfspaces, some limitations remain. First, we note that their proposed methods either require knowledge of the Lipschitz constants of F and of the Gâteaux differential \(c'(\cdot )\) of \(c(\cdot )\) (which are often unknown or very difficult to estimate) or employ a line-search procedure, which is known to be time-consuming to implement. In addition, all these results obtain only weak convergence, which is known to be a drawback when solving optimization problems, see, e.g., Bauschke [6].
Following the above methods and results, in this paper we establish a totally relaxed, inertial, self-adaptive projection and contraction method (TRISPCM) for solving VI (1) defined over a finite intersection of closed, convex sublevel sets (as seen in (6)). Our method employs projections onto some constructible “supersets”, and inertial ([4, 10, 20, 37, 42, 50, 52]) and relaxation [28] techniques are incorporated to speed up its rate of convergence. Although we assume that F is Lipschitz continuous and that the Gâteaux differentials \(c_{i}' (\cdot )\) of \(c_{i}(\cdot )\) are Lipschitz continuous, our method does not require any line-search procedure; rather, we employ a more efficient self-adaptive step-size technique that generates a nonmonotonic sequence of step sizes. Moreover, under suitable conditions we prove strong convergence to the minimum-norm solution of the problem. Relevant numerical experiments at the end of this paper clearly display the efficiency of our method over those in the literature.
The remainder of this paper is organized as follows. Section 2 contains definitions and existing results relevant to our analysis. In Sect. 3, the proposed algorithm is presented and its strong convergence is established in Sect. 4. Numerical experiments and comparisons with related methods are given in Sect. 5, illustrating the performance of our scheme. Finally, some concluding remarks on our work are presented in Sect. 6.
2 Preliminaries
In this section, we review basic definitions and important lemmas, vital in proving our main results.
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Also, throughout this paper, we let the strong and weak convergence of a sequence \(\{x_{n}\}\) to a point \(x^{*} \in H\) be denoted by \(x_{n} \rightarrow x^{*}\) and \(x_{n} \rightharpoonup x^{*}\), respectively. The set of weak limits of \(\{x_{n}\}\), denoted by \(w_{\omega}(x_{n})\), is defined by
$$ w_{\omega}(x_{n}):=\bigl\{ x\in H : x_{n_{j}}\rightharpoonup x \text{ for some subsequence } \{x_{n_{j}}\} \text{ of } \{x_{n}\}\bigr\} . $$
The metric projection \(P_{C}: H\rightarrow C\) ([1]) is defined, for each \(x\in H\), as the unique element \(P_{C}x\in C\) such that
$$ \Vert x-P_{C}x \Vert =\min \bigl\{ \Vert x-y \Vert : y\in C\bigr\} . $$(7)
It is known that \(P_{C}\) is nonexpansive (see [4, 38]). For more interesting features of the metric projection, see Lemma 2.1.
Lemma 2.1
[25, 30] Let H be a real Hilbert space, and I be the identity map on H. Let C be a nonempty, closed, and convex subset of H. We have the following results for any \(x\in H\) and \(f,g\in C\):

(i)
\(g = P_{C}x \Longleftrightarrow \langle x-g, g-f\rangle \geq 0\);

(ii)
\(\langle (I-P_{C})x-(I-P_{C})f, x-f\rangle \ge \Vert (I-P_{C})x-(I-P_{C})f \Vert ^{2}\);

(iii)
\(\Vert f-P_{C}x \Vert ^{2}+ \Vert x-P_{C}x \Vert ^{2} \le \Vert x-f \Vert ^{2}\);

(iv)
\(\langle x-f,P_{C}x-P_{C}f \rangle \ge \Vert P_{C}x-P_{C}f \Vert ^{2}\);

(v)
Let \(D=\{u\in H : \langle x,u \rangle \le d\}\) be a halfspace, where \(x\ne 0\) and \(d\in \mathbb{R}\). Then, for \(a\in H\),
$$ P_{D} (a)= a-\max \biggl\{ 0, \frac{\langle x,a \rangle -d}{ \Vert x \Vert ^{2}} \biggr\} x. $$(8)
Note that (8) is the explicit formula for the orthogonal projection onto the halfspace D.
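Formula (8) translates directly into code. The following Python sketch (the function name and test data are our own illustrative choices) projects a point onto the halfspace \(D=\{u : \langle x,u \rangle \le d\}\):

```python
import numpy as np

def project_halfspace(a, x, d):
    # Projection of a onto D = {u : <x, u> <= d} via the closed formula (8);
    # requires x != 0.
    excess = np.dot(x, a) - d
    if excess <= 0.0:
        return np.asarray(a, dtype=float).copy()   # a already lies in D
    return a - (excess / np.dot(x, x)) * x

# D = {u in R^2 : u_1 <= 1}
x = np.array([1.0, 0.0])
p = project_halfspace(np.array([3.0, 4.0]), x, 1.0)   # -> (1, 4)
```

Points already inside D are returned unchanged, so the projection is exact in both branches of the max in (8).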
Definition 2.2
[44, 51] Let \(F: H\rightarrow H\) be a mapping defined on a real Hilbert space H. Then, F is said to be:

(i)
LLipschitz continuous, where \(L>0\), if
$$ \Vert Fu-Fv \Vert \leq L \Vert u-v \Vert , \quad \forall u,v\in H. $$F is a contraction if \(L\in [0,1)\), and nonexpansive if \(L=1\);

(ii)
λ-strongly monotone, if there exists \(\lambda >0\) such that
$$ \langle u-v, Fu-Fv \rangle \ge \lambda \Vert u-v \Vert ^{2}, \quad \forall u,v \in H ; $$ 
(iii)
monotone, if
$$ \langle Fu-Fv, u-v\rangle \geq 0, \quad \forall u,v\in H. $$
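As a quick numerical illustration of the gap between definitions (ii) and (iii), the following Python snippet (our own example, not from the paper) checks the classical fact that \(F(x)=Ax\) with a skew-symmetric A satisfies \(\langle Fu-Fv, u-v\rangle =0\), hence is monotone but not strongly monotone:

```python
import numpy as np

# Skew-symmetric A gives the classical example of an operator that is monotone
# (indeed <Fu - Fv, u - v> = 0 for all u, v) but not strongly monotone.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
F = lambda x: A @ x

rng = np.random.default_rng(2)
vals = []
for _ in range(100):
    u = rng.normal(size=2)
    v = rng.normal(size=2)
    vals.append(np.dot(F(u) - F(v), u - v))   # should vanish identically
```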
Lemma 2.3
[43, 50] Let H be a real Hilbert space. Then, the following results hold, for all \(x,y\in H\) and \(\zeta \in \mathbb{R}\):

(i)
\(\Vert x+y \Vert ^{2} \leq \Vert x \Vert ^{2} + 2\langle y, x+y \rangle \);

(ii)
\(\Vert x+y \Vert ^{2} = \Vert x \Vert ^{2} + 2\langle x, y \rangle + \Vert y \Vert ^{2}\);

(iii)
\(\Vert \zeta x + (1-\zeta ) y \Vert ^{2} = \zeta \Vert x \Vert ^{2} + (1-\zeta ) \Vert y \Vert ^{2} -\zeta (1-\zeta ) \Vert x-y \Vert ^{2}\).
Definition 2.4
[36] Let \(c:H\to \mathbb{R}\) be a realvalued function. Then,

(i)
c is said to be Gâteaux differentiable at \(z\in H\), if there exists an element in H, denoted by \(c'(z)\), such that
$$ \lim_{t\to 0} \frac{c(z+th)-c(z)}{t}=\bigl\langle h, c'(z) \bigr\rangle , \quad \forall h\in H, $$(9)where \(c'(z)\) (also written as \(\nabla c(z)\)), is known as the Gâteaux differential (or gradient) of c at z.

(ii)
If c is convex, then c is said to be subdifferentiable at point \(z\in H\), if \(\partial c(z)\) is nonempty, where \(\partial c(z)\) is defined as follows:
$$ \partial c(z):=\bigl\{ x\in H : c(y)\ge c(z)+\langle x,y-z \rangle \ \forall y\in H\bigr\} . $$(10)c is said to be subdifferentiable on H, if for each \(z\in H\), c is subdifferentiable at z.

(iii)
c is said to be weakly lower semicontinuous (wlsc) at \(z\in H\), if \(z_{n}\rightharpoonup z\) implies
$$ c(z)\le \liminf_{n\to \infty} c(z_{n}). $$(11)c is said to be wlsc on H if for each \(z\in H\), c is wlsc at z.
Remark 2.5
We note the following from Definition 2.4:

(i)
Each element in \(\partial c(z)\) is referred to as a subgradient of c at z. Also, (10) is said to be the subdifferential inequality of c at z, where \(\partial c(z)\) is the subdifferential of c at z.

(ii)
It is also known that if c is Gâteaux differentiable at z, then c is subdifferentiable at z, and \(\partial c(z)=\{c'(z)\}\), in particular, \(\partial c(z)\) is a singleton set (see [25]).
Lemma 2.6
[7] Let \(c:H\to \mathbb{R}\cup \{+\infty \}\) be convex. Then, the following results are equivalent:

(i)
c is weakly sequentially lower semicontinuous;

(ii)
c is lower semicontinuous.
Lemma 2.7
[45] Let \(\{\xi _{n}\}\) and \(\{\mu _{n}\}\) be two nonnegative real sequences such that
$$ \xi _{n+1}\le \xi _{n}+\mu _{n}, \quad \forall n\ge 1. $$
If \(\sum_{n=1}^{\infty}\mu _{n}<+\infty \), then \(\lim_{n\to \infty}\xi _{n}\) exists.
Lemma 2.8
[12] Suppose C is a nonempty, closed, and convex subset of H, and suppose \(F:C\to H\) is a continuous monotone mapping. Then, for \(x\in C\),
$$ x\in VI(C,F) \quad \Longleftrightarrow \quad \langle Fy, y-x \rangle \ge 0, \quad \forall y\in C. $$
Lemma 2.9
[39] Suppose \(\{x_{n}\}\) is a sequence of nonnegative real numbers, \(\{\alpha _{n}\}\) is a sequence in \((0, 1)\) with \(\sum_{n=1}^{\infty}\alpha _{n} = +\infty \) and \(\{z_{n}\}\) is a sequence of real numbers. Let
$$ x_{n+1}\le (1-\alpha _{n})x_{n}+\alpha _{n}z_{n}, \quad \forall n\ge 1; $$
if \(\limsup_{k\rightarrow \infty}z_{n_{k}}\leq 0\) for every subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) satisfying \(\liminf_{k\rightarrow \infty}(x_{n_{k}+1} - x_{n_{k}})\geq 0\), then \(\lim_{n\rightarrow \infty}x_{n} =0\).
Lemma 2.10
[26, 35] Let C be a set defined as in (6), and let \(F:H\to H\) be an operator. Suppose the solution set \(VI(C,F)\) is nonempty. Then, the following alternative theorem holds for the solution of the VI(C,F): given \(\hat{z}\in C\), we have \(\hat{z}\in VI(C,F)\) if and only if one of the following holds.

(i)
\(F\hat{z}=0\); or

(ii)
\(\hat{z}\in bd(C)\), and there exist \(\beta _{\hat{z}}>0\) (depending on the point ẑ) and \(\kappa \in \operatorname{conv}\{c'_{i}(\hat{z}) : i\in I_{\hat{z}}^{*}\}\) such that \(F(\hat{z})=-\beta _{\hat{z}}\kappa \), where \(bd(C)\) denotes the boundary of the set C, \(I_{\hat{z}}^{*}=\{i\in I : c_{i}(\hat{z})=0\}\), and \(\operatorname{conv}\{c'_{i}(\hat{z}) : i\in I_{\hat{z}}^{*}\}\) is the convex hull of the set \(\{c'_{i}(\hat{z}) : i\in I_{\hat{z}}^{*}\}\).
3 Proposed method
Here, we present our algorithm: a totally relaxed inertial self-adaptive projection and contraction method (TRISPCM) for solving the monotone variational inequality problem defined over the feasible set (6). Our results are based on the following assumptions:
Assumption A

(A1)
\(F:H\to H\) is monotone and \(\mathcal{J}\)-Lipschitz continuous on H.

(A2)
The solution set \(VI(C,F)\) is nonempty.

(A3)
For all \(i\in I\), the family of functions \(c_{i}:H\to \mathbb{R}\) satisfies the following conditions:

(i)
Each \(c_{i}\) (\(i\in I\)) is convex on H.

(ii)
Each \(c_{i}\) (\(i\in I\)) is weakly lower semicontinuous on H.

(iii)
Each \(c_{i}\) (\(i\in I\)) is Gâteaux differentiable, and \(c'_{i}\) (\(i\in I\)) is \(L_{i}\)-Lipschitz continuous on H.

(iv)
There exists a positive constant K such that for all \(\hat{z} \in bd(C)\), the following holds:
$$ \Vert F\hat{z} \Vert \le K\inf \bigl\{ \bigl\Vert m(\hat{z}) \bigr\Vert : m(\hat{z}) \in \operatorname{conv}\bigl\{ c'_{i}( \hat{z}) : i\in I_{\hat{z}}^{*}\bigr\} \bigr\} , $$where \(I_{\hat{z}}^{*}\) is defined as in Lemma 2.10.

Assumption B

(B1)
Let \(\tau >0\), \(\gamma _{1}>0\), \(\ell \in (0,2)\), and \(\delta \in (0, \frac{2-\ell}{2-\ell +2K} )\);

(B2)
\(\alpha _{n}\in (0,1)\), \(\lim_{n\rightarrow \infty}\alpha _{n}=0\), \(\sum_{n=1}^{\infty}\alpha _{n}=+\infty \), \(\{\theta _{n}\}\subset \mathbb{R}_{+}\) such that \(\lim_{n\rightarrow \infty}\frac{\theta _{n}}{\alpha _{n}}=0\);

(B3)
Let \(\mu _{n}\) be a nonnegative sequence such that \(\sum_{n=1}^{\infty}\mu _{n}<+\infty \).
We present our algorithm below:
Algorithm 3.1
 Step 0. :

Set \(n=1\), and let \(x_{0}, x_{1}\in H\) be two arbitrary initial points.
 Step 1. :

Given the \((n-1)\)th and \(n\)th iterates, choose \(\tau _{n}\) such that \(0\leq \tau _{n}\leq \hat{\tau}_{n}\), with \(\hat{\tau}_{n}\) defined by
$$ \hat{\tau}_{n} = \textstyle\begin{cases} \min \{\tau , \frac{\theta _{n}}{ \Vert x_{n}-x_{n-1} \Vert } \}, & \text{if } x_{n} \neq x_{n-1}, \\ \tau , & \text{otherwise.} \end{cases} $$(12)  Step 2. :

Compute
$$\begin{aligned} &w_{n} = (1-\alpha _{n}) \bigl(x_{n} + \tau _{n}(x_{n}-x_{n-1}) \bigr). \end{aligned}$$(13)  Step 3. :

Given the current iterate \(w_{n}\), construct the family of halfspaces
$$\begin{aligned} & C_{n}^{i}= \bigl\{ w\in H : c_{i}(w_{n})+\bigl\langle c'_{i}(w_{n}), w-w_{n} \bigr\rangle \le 0\bigr\} , \quad i\in I. \end{aligned}$$(14)Set,
$$\begin{aligned} C_{n}:=\bigcap_{i\in I}C_{n}^{i} \end{aligned}$$(15)and compute:
$$\begin{aligned} y_{n}:=P_{C_{n}}(w_{n}-\gamma _{n}Fw_{n}). \end{aligned}$$(16)If \(w_{n}=y_{n}\), then stop: \(w_{n}\in VI(C,F)\); otherwise, proceed to Step 4.
 Step 4. :

Compute:
$$\begin{aligned} & x_{n+1}=w_{n}-\ell \psi _{n} d_{n}, \\ & \text{where } d_{n}:= w_{n}-y_{n}-\gamma _{n}(Fw_{n}-Fy_{n}), \text{ and} \\ & \psi _{n}= \textstyle\begin{cases} \frac{\langle w_{n}-y_{n}, d_{n} \rangle}{ \Vert d_{n} \Vert ^{2}}, & \text{if } d_{n}\ne 0, \\ 0, & \text{otherwise.} \end{cases}\displaystyle \end{aligned}$$(17)Update:
$$ \gamma _{n+1} = \textstyle\begin{cases} \min \{ \frac{\delta \Vert w_{n}-y_{n} \Vert }{ \Vert Fw_{n}-Fy_{n} \Vert + \Vert c'_{i_{n}}(w_{n})-c'_{i_{n}}(y_{n}) \Vert }, \gamma _{n}+\mu _{n}\}, \\ \quad \text{if } \Vert Fw_{n}-Fy_{n} \Vert + \Vert c'_{i_{n}}(w_{n})-c'_{i_{n}}(y_{n}) \Vert \ne 0, \\ \gamma _{n}+\mu _{n}, \quad \text{otherwise,} \end{cases} $$(18)where \(\Vert c'_{i_{n}}(w_{n})-c'_{i_{n}}(y_{n}) \Vert =\max_{i\in I}\{ \Vert c'_{i}(w_{n})-c'_{i}(y_{n}) \Vert \}\). Set \(n:= n+1\) and return to Step 1.
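To illustrate how Steps 1–4 fit together, here is a simplified Python sketch of Algorithm 3.1 for a single constraint (\(k=1\)); the toy operator \(F(x)=x\), the unit-ball constraint, and all parameter sequences are our own illustrative choices and do not follow the settings recommended later in the paper:

```python
import numpy as np

def trispcm(F, c, dc, x0, x1, tau=0.5, gamma=0.5, delta=0.3, ell=1.0, iters=300):
    # Simplified sketch of Algorithm 3.1 with one constraint c(x) <= 0.
    # alpha_n, theta_n, mu_n below are toy choices in the spirit of (B1)-(B3).
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(1, iters + 1):
        alpha_n, theta_n, mu_n = 1.0 / (n + 1), 1.0 / n**2, 1.0 / n**2
        diff = np.linalg.norm(x - x_prev)
        tau_n = tau if diff == 0 else min(tau, theta_n / diff)      # Step 1, (12)
        w = (1 - alpha_n) * (x + tau_n * (x - x_prev))              # Step 2, (13)
        g = dc(w)                                                   # Step 3
        z = w - gamma * F(w)
        if np.dot(g, g) > 0:                                        # project z onto
            viol = c(w) + np.dot(g, z - w)                          # the halfspace C_n
            y = z - max(0.0, viol / np.dot(g, g)) * g               # via formula (8)
        else:
            y = z                                                   # C_n = H
        d = w - y - gamma * (F(w) - F(y))                           # Step 4, (17)
        psi = np.dot(w - y, d) / np.dot(d, d) if np.dot(d, d) > 0 else 0.0
        x_prev, x = x, w - ell * psi * d
        denom = np.linalg.norm(F(w) - F(y)) + np.linalg.norm(dc(w) - dc(y))
        gamma = min(delta * np.linalg.norm(w - y) / denom, gamma + mu_n) \
            if denom > 0 else gamma + mu_n                          # step size (18)
    return x

# Toy monotone VI: F(x) = x over the unit ball {x : ||x||^2 - 1 <= 0};
# the minimum-norm solution is the origin.
sol = trispcm(lambda x: x,
              lambda x: np.dot(x, x) - 1.0,
              lambda x: 2.0 * x,
              np.array([2.0, -1.0]), np.array([1.5, 0.5]))
```

The only projection per iteration is the closed-formula halfspace projection, and the step size adapts without any knowledge of the Lipschitz constants.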
Remark 3.2
We highlight below some of the key features of our proposed Algorithm 3.1.

(i)
Observe that the feasible set is constructed as a finite intersection of sublevel sets, as seen in (6), which is more general than the feasible set adopted in [8, 19, 23, 24, 34]. Also, observe that in (14), if \(c'_{i}(w_{n})=0\), then we have that \(C_{n}^{i}=H\).

(ii)
We note that our proposed algorithm completely avoids projection onto the feasible set, but rather allows only one projection onto some halfspace, as seen in (14)–(16). This obviously ensures easier computation, since projection onto halfspaces can be calculated using an explicit formula, see (8).

(iii)
We note that our Algorithm 3.1 employs the relaxation and inertial techniques to improve its rate of convergence. Also, we observe that Step 1 of our proposed algorithm is easily implemented, since the quantity \(\Vert x_{n}-x_{n-1} \Vert \) is known before \(\tau _{n}\) is chosen.

(iv)
We emphasize that while the cost operator is Lipschitz continuous, our algorithm does not require any line-search procedure (unlike the methods of [23–25]). Instead, we adopt a more efficient self-adaptive step-size technique that generates a nonmonotonic sequence of step sizes (as seen in (18)).

(v)
We also point out that, unlike the results in [8, 23–25, 34], our proposed algorithm generates a sequence that converges strongly to the minimum-norm solution of the VI.
Remark 3.3
By applying condition (B2), from (12) we have that
$$ \lim_{n\to \infty}\frac{\tau _{n}}{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert =0, $$since \(\tau _{n} \Vert x_{n}-x_{n-1} \Vert \le \theta _{n}\) for all \(n\ge 1\) and \(\lim_{n\to \infty}\frac{\theta _{n}}{\alpha _{n}}=0\).
4 Convergence analysis
In this section, we carry out the convergence analysis of our proposed algorithm. First, we establish some lemmas that are needed to prove the strong convergence theorem for the proposed algorithm.
Lemma 4.1
Let C and \(C_{n}\) be the sets defined by (6) and (15), respectively, then we have that \(C\subset C_{n}\), \(\forall n\ge 1\).
Proof
For all \(i\in I\), let \(C^{i}:=\{x\in H : c_{i}(x)\le 0\}\). Thus,
we see that \(C=\bigcap_{i\in I}C^{i}\). Then, for each \(i\in I\) and any \(x\in C^{i}\), by the subdifferential inequality, it follows that
$$ c_{i}(w_{n})+\bigl\langle c'_{i}(w_{n}), x-w_{n} \bigr\rangle \le c_{i}(x)\le 0. $$
By definition of the sets \(C_{n}^{i}\) (14), we see that \(x\in C_{n}^{i}\). It then follows that \(C^{i}\subset C_{n}^{i}\), \(\forall n\ge 1\), \(i\in I\). Hence, \(C\subset C_{n}\), \(\forall n\ge 1\) as required. □
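The inclusion \(C\subset C_{n}\) rests entirely on the subdifferential inequality, and it is easy to observe numerically. The following Python check (with the toy constraint \(c(x)= \Vert x \Vert ^{2}-1\), our own choice) verifies that every point of C satisfies the halfspace inequality defining (14) for arbitrary iterates \(w_{n}\):

```python
import numpy as np

# Toy constraint: c(x) = ||x||^2 - 1, so C is the closed unit ball and
# dc(x) = 2x is its Gateaux differential.
c = lambda x: np.dot(x, x) - 1.0
dc = lambda x: 2.0 * x

rng = np.random.default_rng(0)
inside = True
for _ in range(1000):
    w = 2.0 * rng.normal(size=2)                       # arbitrary iterate w_n
    u = rng.normal(size=2)
    x = u / np.linalg.norm(u) * rng.uniform(0.0, 1.0)  # a point of C
    # Halfspace inequality from the linearization (cf. (14)): holds for all x in C,
    # since convexity gives c(w) + <dc(w), x - w> <= c(x) <= 0.
    inside &= c(w) + np.dot(dc(w), x - w) <= 1e-12
```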
Lemma 4.2
If \(y_{n}=w_{n}\) for some \(n\ge 1\) in Algorithm 3.1, then \(w_{n}\in VI(C,F)\).
Proof
Suppose \(y_{n}=w_{n}\) for some \(n\ge 1\). Then, by (16), we obtain
$$ w_{n}=P_{C_{n}}(w_{n}-\gamma _{n}Fw_{n}). $$(20)
From (20), it follows that \(w_{n}\in C_{n}\); in particular, \(w_{n}\in C_{n}^{i}\) for each \(i\in I\) and \(n\ge 1\). Then, by the definition of \(C_{n}^{i}\), we have that \(c_{i}(w_{n})+\langle c'_{i}(w_{n}), w_{n}-w_{n} \rangle \le 0\). From this, we obtain \(c_{i}(w_{n})\le 0\) for each \(i\in I\). Thus, \(w_{n}\in C\).
Next, by (20) and (2), we have \(w_{n}\in VI(C_{n},F)\), which implies that
$$ \langle Fw_{n}, x-w_{n} \rangle \ge 0, \quad \forall x\in C_{n}. $$(21)
Therefore, from the fact that \(w_{n}\in C\subset C_{n}\) and (21), the conclusion follows. □
Lemma 4.3
Let \(\{\gamma _{n}\}\) be the sequence generated by (18). Then, \(\{\gamma _{n}\}\) is well defined and \(\lim_{n\rightarrow \infty}\gamma _{n}=\gamma \), where \(\gamma \in [\min \{\frac{\delta}{M}, \gamma _{1}\}, \gamma _{1}+ \Phi ]\), for some constant \(M>0\) and \(\Phi :=\sum_{n=1}^{\infty}\mu _{n}\).
Proof
Since \(c'_{i}\) and F are both Lipschitz continuous, considering the case \(\Vert Fw_{n}-Fy_{n} \Vert + \Vert c'_{i_{n}}(w_{n})-c'_{i_{n}}(y_{n}) \Vert \ne 0\) in (18), we obtain for all \(n\ge 1\), that
$$ \frac{\delta \Vert w_{n}-y_{n} \Vert }{ \Vert Fw_{n}-Fy_{n} \Vert + \Vert c'_{i_{n}}(w_{n})-c'_{i_{n}}(y_{n}) \Vert }\ge \frac{\delta \Vert w_{n}-y_{n} \Vert }{(\mathcal{J}+L) \Vert w_{n}-y_{n} \Vert }=\frac{\delta}{M}, $$
where \(M:=\mathcal{J}+L>0\), \(L =\max \{L_{i}: i\in I\}\). Thus, by the definition of \(\gamma _{n+1}\), it is clear that the sequence \(\{\gamma _{n}\}\) has upper bound \(\gamma _{1} + \Phi \) and lower bound \(\min \{\frac{\delta}{M},\gamma _{1}\}\). Hence, by Lemma 2.7, \(\lim_{n\to \infty}\gamma _{n}\) exists, and we denote \(\gamma :=\lim_{n\to \infty}\gamma _{n}\). Clearly, \(\gamma \in [\min \{\frac{\delta}{M},\gamma _{1}\},\gamma _{1}+ \Phi ]\). □
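The boundedness mechanism behind Lemma 4.3 can be observed numerically. In the Python simulation below, an update of the form (18) is run with a random quantity \(b_{n}\ge \delta /M\) standing in for the quotient and a summable \(\{\mu _{n}\}\); all numeric values are our own toy choices. The generated step sizes stay in the predicted interval and settle down:

```python
import numpy as np

# Simulate gamma_{n+1} = min(b_n, gamma_n + mu_n), where b_n >= delta/M plays the
# role of the quotient in (18) and {mu_n} is summable, cf. (B3).
rng = np.random.default_rng(1)
delta_over_M = 0.2
gamma = 1.0                       # gamma_1
gammas = [gamma]
for n in range(1, 20001):
    mu_n = 1.0 / n**2             # summable perturbation
    b_n = delta_over_M * (1.0 + rng.uniform(0.0, 2.0))
    gamma = min(b_n, gamma + mu_n)
    gammas.append(gamma)
# As in Lemma 4.3, every gamma_n lies in [min(delta/M, gamma_1), gamma_1 + sum(mu_n)],
# and the sequence settles just above delta/M = 0.2.
```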
Lemma 4.4
Let \(\{x_{n}\}\) be a sequence generated by Algorithm 3.1. Suppose Assumptions A and B are satisfied, then the following inequalities hold for all \(p\in VI(C,F)\).
and
Proof
From (18), we obtain
which implies that
Since both terms on the lefthand side of (25) are positive terms, then it implies that
Next, we proceed to prove the first inequality (22). From the definition of \(\psi _{n}\) with \(d_{n}\ne 0\), and by applying (25), we have
Also,
From (27), we obtain
From the last inequality and by applying (28), we obtain
Observe that (29) still holds when \(d_{n}=0\). Hence, from (29) and the definition of \(x_{n+1}\), we have that
Hence,
as required.
Next, we proceed to prove the second inequality (23). If there exists \(n^{*}\ge 1\) such that \(d_{n^{*}}=0\), then \(x_{n^{*}+1}=w_{n^{*}}\), and (23) holds. Thus, we consider the nontrivial case where \(d_{n}\ne 0\) for each \(n\ge 1\).
Let \(p\in VI(C,F)\). Then, from (17), we have that
By the definition of \(d_{n}\), we obtain
By the monotonicity of F, we have that \(\langle y_{n}p,Fy_{n}Fp \rangle \ge 0\), which implies that
Also, since \(y_{n}=P_{C_{n}}(w_{n}\gamma _{n}Fw_{n})\) and by Lemma 2.1, we have
By (31), (32), and (33), we obtain
Now, we consider the following two cases:
Case 1: \(Fp=0\). If \(Fp=0\), then from (34) we obtain
Then, it follows from (30), (35), the definition of \(\psi _{n}\), and the conditions imposed on the control parameter that
Hence, the desired inequality (23) follows from (36).
Case 2: \(Fp\ne 0\). By applying Lemma 2.10, we have that \(p\in bd(C)\) and we have
where \(\beta _{p}\) is some positive constant, \(I^{*}_{p}=\{i\in I : c_{i}(p)=0\}\), and \(\{\alpha _{i}\}_{i\in I^{*}_{p}}\) are nonnegative numbers satisfying \(\sum_{i\in I^{*}_{p}}\alpha _{i}=1\). Then, by the subdifferential inequality, we obtain
Since \(p\in bd(C)\), we have that \(c_{i}(p)=0\), for each \(i\in I^{*}_{p}\), and then
We have from (37) and (39) that
Since \(y_{n}\in C_{n}=\bigcap_{i\in I}C^{i}_{n}\), we have
Then, by the subdifferential inequality, we obtain
From (41) and (42), and by applying (26) we have
Observe that by condition \((A3)(iv)\), we have
Hence, from (40) and by applying (43), (44), and the condition on \(\gamma _{n}\), we obtain
By substituting (45) into (34), we obtain
Then, substituting (46) into (30), we have
From (27), we obtain
Hence, using (48) and the definition of \(x_{n+1}\), we have
By substituting (49) into (47), we obtain
which is the required inequality. □
Since the limit of \(\{\gamma _{n}\}\) exists, we have that \(\lim_{n\rightarrow \infty}\gamma _{n}=\lim_{n \rightarrow \infty}\gamma _{n+1}\). Hence, by the conditions imposed on the control parameters, we have that
Thus, there exists \(n_{0}\ge 1\) such that for all \(n\ge n_{0}\), we have
Hence, from (23), we have that for all \(n\ge n_{0}\),
Lemma 4.5
Let \(\{x_{n}\}\) be a sequence generated by Algorithm 3.1 such that Assumptions A and B hold. Then, \(\{x_{n}\}\) is bounded.
Proof
Let \(p\in VI(C,F)\). Then, by the definition of \(w_{n}\), we have
From Remark 3.3, we obtain that \(\lim_{n\rightarrow \infty} [(1-\alpha _{n})\frac{\tau _{n}}{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert + \Vert p \Vert ]= \Vert p \Vert \). Thus, there exists \(M_{1}>0\) such that
Combining (52) and (53), we obtain
Now, using (54) together with (51), we have
Therefore, we have that the sequence \(\{x_{n}\}\) is bounded. Consequently, \(\{w_{n}\}\) and \(\{y_{n}\}\) are both bounded. □
Lemma 4.6
Suppose \(\{x_{n}\}\) is a sequence generated by Algorithm 3.1 under Assumptions A and B. Then, for all \(p\in VI(C,F)\), the following inequality holds:
Proof
Let \(p\in VI(C,F)\). Then, we see from (54) that
where \(M_{2}=\sup_{n\in \mathbb{N}}\{2(1-\alpha _{n})M_{1} \Vert x_{n}-p \Vert + \alpha _{n}M_{1}^{2}\}>0\).
Then, by substituting (55) into (23), we have
The desired result follows from the last inequality. □
Lemma 4.7
Assume that \(\{x_{n}\}\) is a sequence generated by Algorithm 3.1 under Assumptions A and B. Then, for all \(p\in VI(C,F)\), the following inequality holds:
Proof
Using Lemma 2.3 and (51), we obtain
which is the required inequality. □
Lemma 4.8
Let \(\{w_{n}\}\) and \(\{y_{n}\}\) be two sequences generated by Algorithm 3.1 such that Assumptions A and B hold. If there exists a subsequence \(\{w_{n_{k}}\}\) of \(\{w_{n}\}\) such that \(w_{n_{k}}\rightharpoonup x^{*}\in H\) and \(\lim_{k\to \infty} \Vert w_{n_{k}}-y_{n_{k}} \Vert =0\), then \(x^{*}\in VI(C,F)\).
Proof
Assume that \(\{w_{n}\}\) and \(\{y_{n}\}\) are two sequences generated by Algorithm 3.1 with subsequences \(\{w_{n_{k}}\}\) and \(\{y_{n_{k}}\}\), respectively, such that \(w_{n_{k}}\rightharpoonup x^{*}\). By the hypothesis of the lemma, we have \(y_{n_{k}}\rightharpoonup x^{*}\). Since \(y_{n_{k}}\in C_{n_{k}}\) by the definition of \(C_{n}\), we have
By applying the Cauchy–Schwarz inequality, from (56) we obtain
By the Lipschitz continuity of \(c'_{i}(\cdot )\) and the boundedness of \(\{w_{n_{k}}\}\), the sequence \(\{c'_{i}(w_{n_{k}})\}\) is bounded. This implies that there exists a constant \(M_{4}>0\) such that \(\Vert c'_{i}(w_{n_{k}}) \Vert \le M_{4}\), \(\forall k\ge 0\). Hence, we see from (57) that
Since \(c_{i}(\cdot )\) is continuous, it is lower semicontinuous. Also, since \(c_{i}(\cdot )\) is convex, by Lemma 2.6, \(c_{i}(\cdot )\) is weakly lower semicontinuous. Then, we have from (58) and weak lower semicontinuity that
Hence, by the hypothesis of the lemma it follows from (59) that \(x^{*}\in C\). By the property of the projection map (see Lemma 2.1), we have
Then, by the monotonicity of F, we obtain
Since \(\lim_{k\rightarrow \infty} \Vert y_{n_{k}}-w_{n_{k}} \Vert =0\) and \(\lim_{k\rightarrow \infty}\gamma _{n_{k}}=\gamma >0\), letting \(k\to \infty \) in (60), we have
Hence, by Lemma 2.8, we have that \(x^{*}\in VI(C,F)\). □

We are now in a position to prove the strong convergence theorem for our proposed Algorithm 3.1.
Theorem 4.9
Let \(\{x_{n}\}\) be a sequence generated by Algorithm 3.1 under Assumptions A and B. Then, \(\{x_{n}\}\) converges strongly to an element \(\hat{x}\in VI(C,F)\), where \(\Vert \hat{x} \Vert =\min \{ \Vert p \Vert :p\in VI(C,F)\}\).
Proof
Since \(\Vert \hat{x} \Vert =\min \{ \Vert p \Vert :p\in VI(C,F)\}\), we have \(\hat{x}=P_{VI(C,F)}(0)\). From Lemma 4.7 we obtain
where \(d_{n}= [2(1-\alpha _{n}) \Vert x_{n}-\hat{x} \Vert \frac{\tau _{n}}{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert +\tau _{n} \Vert x_{n}-x_{n-1} \Vert \frac{\tau _{n}}{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert +2 \Vert \hat{x} \Vert \Vert w_{n}-x_{n+1} \Vert +2\langle \hat{x},\hat{x}-x_{n+1} \rangle ]\).
Now, we claim that \(\lim_{n\rightarrow \infty} \Vert x_{n}-\hat{x} \Vert =0\). To verify this claim, by Lemma 2.9 it suffices to show that \(\limsup_{k\rightarrow \infty}d_{n_{k}}\le 0\) for every subsequence \(\{ \Vert x_{n_{k}}-\hat{x} \Vert \}\) of \(\{ \Vert x_{n}-\hat{x} \Vert \}\) satisfying
We assume that \(\{ \Vert x_{n_{k}}-\hat{x} \Vert \}\) is a subsequence of \(\{ \Vert x_{n}-\hat{x} \Vert \}\) such that (63) holds. Then, from Lemma 4.6 we have
By applying (63) and the fact that \(\lim_{k\rightarrow \infty}\alpha _{n_{k}}=0\), we obtain
Hence, by (50) we obtain
Also, from (22) and (65), we obtain
By Remark 3.3, the definition of \(w_{n}\), and the fact that \(\lim_{k\rightarrow \infty}\alpha _{n_{k}}=0\), we have
To complete the proof, we need to show that \(w_{\omega}(x_{n})\subset VI(C,F)\). Since \(\{x_{n}\}\) is bounded, \(w_{\omega}(x_{n})\ne \emptyset \). Now, let \(x^{*}\in w_{\omega}(x_{n})\) be an arbitrary element. Then, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{k}}\rightharpoonup x^{*}\) as \(k\to \infty \). It follows from (67) that \(w_{n_{k}}\rightharpoonup x^{*}\) as \(k\to \infty \). Then, by Lemma 4.8 together with (66), we see that \(x^{*}\in VI(C,F)\). Hence, \(w_{\omega}(x_{n})\subset VI(C,F)\), since \(x^{*}\in w_{\omega}(x_{n})\) was chosen arbitrarily.
Again, since \(\{x_{n_{k}}\}\) is bounded, then there exists a subsequence \(\{x_{n_{k_{j}}}\}\) of \(\{x_{n_{k}}\}\), such that \(x_{n_{k_{j}}}\rightharpoonup z\), and
Since \(\hat{x}=P_{VI(C,F)}(0)\), then by the property of the projection map and (69), we obtain
By combining (68) and (70), we obtain
Using (65) and Remark 3.3 together with (71), we clearly see that \(\limsup_{k\rightarrow \infty}d_{n_{k}}\leq 0\). Then, by applying Lemma 2.9 to (61), we conclude that \(\lim_{n\rightarrow \infty} \Vert x_{n}-\hat{x} \Vert =0\), as required. This completes the proof. □
Remark 4.10
We note that our strong convergence analysis completely avoids the “two-cases” approach, which is usually employed by researchers in the proof of strong convergence theorems (see [32, 46]). Instead, we adopted a much simpler and more straightforward approach in our proofs.
5 Numerical examples
Here, we perform some numerical experiments to illustrate the performance of our method, Algorithm 3.1 (Proposed Alg.), in comparison with Algorithm A.1 proposed by Ma and Wang (Ma and Wang Alg.), Algorithm A.2 proposed by He et al. (He et al. Alg.), Algorithm A.3 by He, Wu et al. (He, Wu et al. Alg.), and Algorithms A.4 and A.5, both proposed by Thong and Gibali (Thong and Gibali Alg.). Our experiments were carried out in MATLAB R2021(b).
The parameter values are chosen as follows: in Algorithm 3.1, we chose \(\tau =0.88\), \(\theta _{n}=(\frac{2}{3n+1})^{2}\), \(\alpha _{n}= \frac{2}{3n+1}\), \(\gamma _{1}=0.99\), \(\mu _{n}=\frac{30}{(3n+4)^{2}}\), and \(\ell =0.25\). Also, we chose \(\gamma _{1}=0.0017\), \(\xi =0.76\), \(\nu =0.87\) in Algorithm A.1; \(\chi =0.97\) in Algorithm A.2; \(\sigma =0.02\), \(\omega =0.05\) in Algorithm A.3; \(g=0.66\), \(\lambda =1.2\), \(\delta _{n}=\frac{2}{3n+1}\), \(\beta _{n}= \frac{1-\delta _{n}}{2}\) in Algorithm A.4; and \(f(x)=\frac{1}{4}x\) in Algorithm A.5.
Our numerical experiments are conducted using the following examples:
Example 5.1
Let \(F:\mathbb{R}^{2}\to \mathbb{R}^{2}\) be defined by \(F(x_{1},x_{2})=(6h(x_{1}),3x_{1}+x_{2})\) on the feasible set \(C:=C^{1}\cap C^{2}\subseteq \mathbb{R}^{2}\), where
We see from Lemma 2.10 that \(VI(C,F)\) is nonempty; in particular, the solution set \(VI(C,F)\) of Example 5.1 is \(\{(1,1)\}\). We note that F is monotone and Lipschitz continuous. The functions \(c'_{i}(\cdot )\) are also Lipschitz continuous for \(i=1,2\), with constants \(L_{1}=L_{2}=2\), where \(L=\max \{L_{1},L_{2}\}\). Also, \(K=3\sqrt{e^{2}+1}\) and \(K'=6\sqrt{e^{2}+1}\); see [25]. For this example we chose \(\delta =0.068\) and set \(M=K\).
We test the algorithms for four different initial points as follows:
Case I: \(x_{0} = (5,5)\), \(x_{1} = (2,1)\);
Case II: \(x_{0} = (2, 6)\), \(x_{1} = (1,2)\);
Case III: \(x_{0} = (3,6)\), \(x_{1}= (1,0)\);
Case IV: \(x_{0} = (5, 5)\), \(x_{1} = (1, 1)\).
The stopping criterion used for this example is \(\Vert x_{n+1}-x_{n} \Vert < 10^{-3}\). We plot the graphs of errors against the number of iterations in each case. The numerical results are reported in Figs. 1–4 and Table 1.
Next, we provide an example in infinitedimensional spaces for the experiment of our strong convergence result.
Example 5.2
Let \(F(x)=3x\), \(\forall x\in H\), and let \(C\subset H\) be the closed, convex, feasible set defined as follows:
Also, note that F is monotone and 3-Lipschitz continuous, each \(c'_{i}\) is Lipschitz continuous, and \(K=1\). We chose \(\delta =0.14\) in this example.
We chose different initial values as follows:
Case I: \(x_{0} = (4, 1, \frac{1}{4}, \ldots )\), \(x_{1} = (\frac{1}{2}, \frac{1}{4}, \frac{1}{8},\ldots )\);
Case II: \(x_{0} = (3, 1, \frac{1}{3},\ldots )\), \(x_{1} = (1, 0.1, 0.01, \ldots )\);
Case III: \(x_{0} = (5, 1, \frac{1}{5}, \ldots )\), \(x_{1} = (1, 0.1, 0.01, \ldots )\);
Case IV: \(x_{0} = (4, 1, \frac{1}{4},\ldots )\), \(x_{1} = (\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \ldots )\).
The stopping criterion used for this example is \(\Vert x_{n+1}-x_{n}\Vert < 10^{-3}\). We plot the graphs of errors against the number of iterations in each case. The numerical results are reported in Figs. 5–8 and Table 2.
6 Conclusion
In this paper, we studied the classical variational inequality problem defined over a finite intersection of sublevel sets of convex functions. We proposed a new iterative method, the “Totally relaxed inertial self-adaptive projection and contraction method” (TRISPCM), in which the projection onto the feasible set is replaced with a projection onto a half-space. Our method does not require any line-search procedure; rather, it uses a more efficient self-adaptive step-size technique. We also employed relaxation and inertial techniques to speed up the rate of convergence of the proposed method. Moreover, under some mild conditions, we proved that the sequence generated by our algorithm converges strongly to a minimum-norm solution of the problem. Lastly, we conducted numerical experiments to showcase the computational advantage of our method over existing methods in the literature.
Availability of data and materials
Not applicable.
References
Alakoya, T.O., Mewomo, O.T.: Viscosity Siteration method with inertial technique and selfadaptive step size for split variational inclusion, equilibrium and fixed point problems. Comput. Appl. Math. 41(1), Paper No. 39, 31 pp. (2022).
Alakoya, T.O., Mewomo, O.T.: Siteration inertial subgradient extragradient method for variational inequality and fixed point problems. Optimization (2023). https://doi.org/10.1080/02331934.2023.2168482
Alakoya, T.O., Uzor, V.A., Mewomo, O.T.: A new projection and contraction method for solving split monotone variational inclusion, pseudomonotone variational inequality, and common fixed point problems. Comput. Appl. Math. 42(1), Paper No. 3, 33 pp. (2023)
Alakoya, T.O., Uzor, V.A., Mewomo, O.T., Yao, J.C.: On a system of monotone variational inclusion problems with fixedpoint constraint. J. Inequal. Appl. 2022, 47 (2022)
Antipin, A.S.: On a method for convex programs using a symmetrical modification of the Lagrange function. Ekon. Math. Meth. 12(6), 1164–1173 (1976)
Bauschke, H.H., Combettes, P.L.: A weaktostrong convergence principle for Féjermonotone methods in Hilbert spaces. Math. Oper. Res. 26(2), 248–264 (2001)
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. Springer, New York (2017)
Cao, Y., Guo, K.: On the convergence of inertial twosubgradient extragradient method for variational inequality problems. Optimization 69(6), 1237–1253 (2020)
Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 48, 318–335 (2011)
Cholamjiak, P., Thong, D.V., Cho, Y.J.: A novel inertial projection and contraction method for solving pseudomonotone variational inequality problems. Acta Appl. Math. 169, 217–245 (2020)
Denisov, S.V., Semenov, V.V., Chabak, L.M.: Convergence of the modified extragradient method for variational inequalities with nonLipschitz operators. Cybern. Syst. Anal. 51, 757–765 (2015)
Dong, Q.L., Cho, Y.J., Zhong, L.L., et al.: Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 70, 687–704 (2018)
Dong, Q.L., Gibali, A., Jiang, D., Ke, S.H.: Convergence of projection and contraction algorithms with outer perturbations and their applications to sparse signals recovery. J. Fixed Point Theory Appl. 20, 16 (2018)
Elliott, C.M.: Variational and quasivariational inequalities: applications to free boundary problems (Claudio Baiocchi and António Capelo). SIAM Rev. 29(2), 314–315 (1987)
Fichera, G.: Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei, Rend. Cl. Sci. Fis. Mat. Nat. 34(8), 138–142 (1963)
Fukushima, M.: A relaxed projection method for variational inequalities. Math. Program. 35, 58–70 (1986)
Gibali, A., Jolaoso, L.O., Mewomo, O.T., Taiwo, A.: Fast and simple Bregman projection methods for solving variational inequalities and related problems in Banach spaces. Results Math. 75(4), Paper No. 179, 36 pp. (2020)
Gibali, A., Reich, S., Zalas, R.: Outer approximation methods for solving variational inequalities in Hilbert space. Optimization 66, 417–437 (2017)
Godwin, E.C., Alakoya, T.O., Mewomo, O.T., Yao, J.C.: Relaxed inertial Tseng extragradient method for variational inequality and fixed point problems. Appl. Anal. (2022). https://doi.org/10.1080/00036811.2022.2107913
Godwin, E.C., Izuchukwu, C., Mewomo, O.T.: An inertial extrapolation method for solving generalized split feasibility problems in real Hilbert spaces. Boll. Unione Mat. Ital. 14(2), 379–401 (2021)
Godwin, E.C., Izuchukwu, C., Mewomo, O.T.: Image restoration using a modified relaxed inertial method for generalized split feasibility problems. Math. Methods Appl. Sci. 46(5), 5521–5544 (2023)
Godwin, E.C., Mewomo, O.T., Alakoya, O.T.: A strongly convergent algorithm for solving multiple set split equality equilibrium and fixed point problems in Banach spaces. Proc. Edinb. Math. Soc. (2) 66, 475–515 (2023)
He, S., Dong, Q.L., Tian, H.: Relaxed projection and contraction methods for solving Lipschitz continuous monotone variational inequalities. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 113, 2773–2791 (2019)
He, S., Wu, T.: A modified subgradient extragradient method for solving monotone variational inequalities. J. Inequal. Appl. 2017, 89 (2017)
He, S., Wu, T., Gibali, A., Dong, Q.L.: Totally relaxed selfadaptive algorithm for solving variational inequalities over the intersection of sublevel sets. Optimization 67(9), 1487–1504 (2018)
He, S., Xu, H.K.: Uniqueness of supporting hyperplanes and an alternative to solutions of variational inequalities. J. Glob. Optim. 57, 1375–1384 (2013)
Iiduka, H.: Fixed point optimization algorithm and its application to network bandwidth allocation. J. Comput. Appl. Math. 236(7), 1733–1742 (2012)
Iutzeler, F., Hendrickx, J.M.: A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optim. Methods Softw. 34(2), 383–405 (2019)
Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York (1980)
Kopecká, E., Reich, S.: A note on alternating projections in Hilbert space. J. Fixed Point Theory Appl. 12, 41–47 (2012)
Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Matecon 12, 747–756 (1976)
Kraikaew, R., Saejung, S.: Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 163, 399–412 (2014)
Liu, Z., Zeng, S., Motreanu, D.: Evolutionary problems driven by variational inequalities. J. Differ. Equ. 260(9), 6787–6799 (2016)
Ma, B., Wang, W.: Selfadaptive subgradient extragradienttype methods for solving variational inequalities. J. Inequal. Appl. 2022, 54 (2022)
Nguyen, H.Q., Xu, H.K.: The supporting hyperplane and an alternative to solutions of variational inequalities. J. Nonlinear Convex Anal. 16(11), 2323–2331 (2015)
Ogbuisi, F.U., Mewomo, O.T.: Convergence analysis of common solution of certain nonlinear problems. Fixed Point Theory 19(1), 335–358 (2018)
Ogwo, G.N., Alakoya, T.O., Mewomo, O.T.: Iterative algorithm with selfadaptive step size for approximating the common solution of variational inequality and fixed point problems. Optimization 72(3), 677–711 (2023)
Ogwo, G.N., Izuchukwu, C., Mewomo, O.T.: Relaxed inertial methods for solving split variational inequality problems without product space formulation. Acta Math. Sci. Ser. B Engl. Ed. 42(5), 1701–1733 (2022)
Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 75, 742–750 (2012)
Stampacchia, G.: Variational inequalities. In: Theory and Applications of Monotone Operators, Proceedings of the NATO Advanced Study Institute, Venice, Italy (Edizioni Odersi, Gubbio, Italy), pp. 102–192 (1968)
Sun, D.: A projection and contraction method for the nonlinear complementarity problems and its extensions. Math. Numer. Sin. 16, 183–194 (1994)
Taiwo, A., Jolaoso, L.O., Mewomo, O.T.: Inertialtype algorithm for solving split common fixed point problems in Banach spaces. J. Sci. Comput. 86, 12, 30 pp. (2021)
Taiwo, A., Jolaoso, L.O., Mewomo, O.T.: Viscosity approximation method for solving the multipleset split equality common fixed point problems for quasipseudocontractive mappings in Hilbert spaces. J. Ind. Manag. Optim. 17(5), 2733–2759 (2021)
Taiwo, A., Owolabi, A.O.E., Jolaoso, L.O., Mewomo, O.T., Gibali, A.: A new approximation scheme for solving various split inverse problems. Afr. Math. 32(3–4), 369–401 (2021)
Tan, K.K., Xu, H.K.: Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 178, 301–308 (1993)
Thong, D.V., Gibali, A.: Two strong convergence subgradient extragradient methods for solving variational inequalities in Hilbert spaces. Jpn. J. Ind. Appl. Math. 36, 299–321 (2019)
Thong, D.V., Hieu, D.V.: Modified subgradient extragradient algorithms for variational inequality problems and fixed point problems. Optimization 67(1), 83–102 (2018)
Thong, D.V., Shehu, Y., Iyiola, O.S.: A new iterative method for solving pseudomonotone variational inequalities with nonLipschitz operators. Comput. Appl. Math. 39, 108 (2020)
Tseng, P.: A modified forwardbackward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446 (2000)
Uzor, V.A., Alakoya, T.O., Mewomo, O.T.: Strong convergence of a selfadaptive inertial Tseng’s extragradient method for pseudomonotone variational inequalities and fixed point problems. Open Math. 20(1), 234–257 (2022)
Uzor, V.A., Alakoya, T.O., Mewomo, O.T.: On split monotone variational inclusion problem with multiple output sets with fixed point constraints. Comput. Methods Appl. Math. (2022). https://doi.org/10.1515/cmam20220199
Wickramasinghe, M.U., Mewomo, O.T., Alakoya, T.O., Iyiola, S.O.: Manntype approximation scheme for solving a new class of split inverse problems in Hilbert spaces. Appl. Anal. (2023). https://doi.org/10.1080/00036811.2023.2233977
Acknowledgements
The authors sincerely thank the anonymous referees for their careful reading, constructive comments, and useful suggestions that improved the manuscript. The first author acknowledges with thanks the International Mathematical Union Breakout Graduate Fellowship (IMUBGF) Award for his doctoral study. The second author is supported by the National Research Foundation (NRF) of South Africa Incentive Funding for Rated Researchers (Grant Number 119903) and DSINRF Centre of Excellence in Mathematical and Statistical Sciences (CoEMaSS), South Africa (Grant Number 2022087OPA). The third author is wholly supported by the University of KwaZuluNatal, Durban, South Africa Postdoctoral Fellowship. He is grateful for the funding and financial support. The opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the CoEMaSS and NRF.
Funding
The first author is funded by International Mathematical Union Breakout Graduate Fellowship (IMUBGF). The second author is supported by the National Research Foundation (NRF) of South Africa Incentive Funding for Rated Researchers (Grant Number 119903) and DSINRF Centre of Excellence in Mathematical and Statistical Sciences (CoEMaSS), South Africa (Grant Number 2022087OPA). The third author is funded by the University of KwaZuluNatal, Durban, South Africa Postdoctoral Fellowship.
Author information
Authors and Affiliations
Contributions
Conceptualization of the article was given by OTM, TOA, and AG; methodology by OTM and TOA; formal analysis, investigation, and writing–original draft preparation by VAU, OTM, and TOA; software and validation by OTM; writing–review and editing by VAU, OTM, TOA, and AG; project administration and supervision by OTM and AG. All authors have accepted responsibility for the entire content of this manuscript and approved its submission.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
Algorithm A.1
(Algorithm 2 in [34])
 Step 0.:

Choose \(x_{1},x_{0}, y_{1}\in H\); \(\xi ,\nu \in [a,b]\subset (0,1)\), \(\gamma _{1}\in (0,\frac{1-\xi ^{2}}{2\beta _{p}L_{1}} ]\). Set \(n=0\).
 Step 1.:

Given \(\gamma _{n-1}\), \(y_{n-1}\), and \(x_{n-1}\), let \(p_{n-1}=x_{n-1}-y_{n-1}\).
$$\begin{aligned} \gamma _{n}:= \textstyle\begin{cases} \gamma _{n-1}, & \gamma _{n-1} \Vert Fx_{n-1}-Fy_{n-1} \Vert \le \xi \Vert p_{n-1} \Vert , \\ \gamma _{n-1}\nu , & {\text{otherwise}.} \end{cases}\displaystyle \end{aligned}$$  Step 2.:

Compute
$$ y_{n}=P_{C_{n}}(x_{n}-\gamma _{n}Fx_{n}). $$  Step 3.:

Compute
$$ x_{n+1}=P_{C_{n}}\bigl(y_{n}-\gamma _{n}(Fy_{n}-Fx_{n})\bigr), $$where
$$ C_{n}:=\bigl\{ x\in H: c(x_{n})+\bigl\langle c'(x_{n}),x-x_{n} \bigr\rangle \le 0 \bigr\} . $$
Set \(n:=n+1\) and return to Step 1. Here \(F:H\to H\) is monotone and \(L_{2}\)-Lipschitz continuous, \(c'(\cdot )\) is Lipschitz continuous, and \(\beta _{p}\) is the parameter in Lemma 2.10.
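The self-adaptive step-size rule in Step 1 of Algorithm A.1 can be sketched as follows; the default values of ξ and ν are the ones used in the Sect. 5 experiments, and the function is an illustrative transcription of the update rule, not code from [34].

```python
import numpy as np

def update_gamma(gamma_prev, Fx_prev, Fy_prev, p_prev, xi=0.76, nu=0.87):
    """Self-adaptive rule of Algorithm A.1: keep the previous step size if
    gamma_{n-1}*||Fx_{n-1} - Fy_{n-1}|| <= xi*||p_{n-1}||,
    otherwise shrink it by the factor nu in (0, 1)."""
    if gamma_prev * np.linalg.norm(Fx_prev - Fy_prev) <= xi * np.linalg.norm(p_prev):
        return gamma_prev
    return gamma_prev * nu
```

Since \(\nu \in (0,1)\), the step size is nonincreasing and is reduced only when the local Lipschitz estimate \(\Vert Fx_{n-1}-Fy_{n-1}\Vert /\Vert p_{n-1}\Vert \) is too large, so no global Lipschitz constant of F is needed at run time.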
Algorithm A.2
(Algorithm 1 in [23])
where \(C_{n}\) and \(T_{n}\) are half-spaces given by \(C_{n}:=\{w\in H : c(x_{n})+\langle c'(x_{n}),w-x_{n} \rangle \le 0 \}\) and \(T_{n}:=\{w\in H : \langle x_{n}-y_{n},\gamma _{n}Fy_{n}-d(x_{n},y_{n}) \rangle \ge 0 \}\), respectively, and
for some constant \(M>0\) and \(L_{n}\) satisfying certain conditions.
Algorithm A.3
(Algorithm 3.1 in [25])
 Step 0.:

Set \(L=\max \{L_{i} : i\in I\}\) and \(K'=KL\), where K is defined in Lemma 2.10 and \(L_{i}\) \((i\in I)\) is the Lipschitz constant of \(c'_{i}\). Choose \(x_{0}\in H\) and \(\xi \in (0,1)\) arbitrarily, and set \(n=0\).
 Step 1.:

Given the current iterate \(x_{n}\), construct the family of half-spaces
$$\begin{aligned} & C_{n}^{i}= \bigl\{ w\in H : c_{i}(x_{n})+ \bigl\langle c'_{i}(x_{n}), w-x_{n} \bigr\rangle \le 0\bigr\} ,\quad i\in I. \end{aligned}$$Set
$$\begin{aligned} C_{n}:=\bigcap_{i\in I}C_{n}^{i} \end{aligned}$$and compute:
$$\begin{aligned} &y_{n}:=P_{C_{n}}(x_{n}-\gamma _{n}Fx_{n}); \\ &\text{where } \gamma _{n}=\sigma \omega ^{g_{n}}, \sigma >0, \omega \in (0,1), \\ &\text{and } g_{n} \text{ is the smallest integer, such that} \\ &\gamma _{n}^{2} \Vert Fx_{n}-Fy_{n} \Vert ^{2}+K'\gamma _{n} \Vert x_{n}-y_{n} \Vert ^{2} \le \xi \Vert x_{n}-y_{n} \Vert ^{2}. \end{aligned}$$  Step 2.:

If \(x_{n}=y_{n}\), then stop; \(x_{n}\in SOL(C,F)\). Otherwise, calculate the next iterate by:
$$\begin{aligned} & x_{n+1}=P_{C_{n}}(x_{n}-\gamma _{n}Fy_{n}), \\ & \text{or by} \\ & x_{n+1}=P_{T_{n}}(x_{n}-\gamma _{n}Fy_{n}), \\ &\text{where } T_{n}=\bigl\{ w\in H : \langle x_{n}-\gamma _{n}Fx_{n}-y_{n},w-y_{n} \rangle \le 0\bigr\} . \end{aligned}$$Set \(n:= n +1\) and return to Step 1.
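The backtracking rule in Step 1 of Algorithm A.3 can be sketched as follows: the trial step is shrunk geometrically until the displayed inequality holds, with y recomputed for each trial γ. The projection is passed in as a callable (a half-space projection in the algorithm itself); this is an illustrative transcription, not code from [25].

```python
import numpy as np

def armijo_gamma(x, F, proj_Cn, sigma, omega, xi, K_prime, max_tries=60):
    """Find gamma_n = sigma * omega**g_n, with g_n the smallest integer
    such that
        gamma^2 ||Fx - Fy||^2 + K' * gamma * ||x - y||^2 <= xi * ||x - y||^2,
    where y = P_{C_n}(x - gamma * F(x)) depends on the trial gamma."""
    Fx = F(x)
    for g in range(max_tries):
        gamma = sigma * omega ** g
        y = proj_Cn(x - gamma * Fx)
        lhs = (gamma ** 2) * np.linalg.norm(Fx - F(y)) ** 2 \
              + K_prime * gamma * np.linalg.norm(x - y) ** 2
        if lhs <= xi * np.linalg.norm(x - y) ** 2:
            return gamma, y
    return gamma, y          # fallback: smallest trial step after max_tries
```

For instance, with \(F(x)=3x\), the identity as projection, \(\sigma =\omega =\xi =0.5\), and \(K'=1\), the rule rejects \(\gamma =0.5\) and \(\gamma =0.25\) and accepts \(\gamma =0.125\).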
Algorithm A.4
(Algorithm 3.1 in [46])
 Step 0.:

Given \(\lambda >0\), \(g\in (0,1)\), \(\nu \in (0,1)\), \(\ell \in (0,2)\). Let \(x_{0}\in H\) be chosen arbitrarily.
Given the current iterate \(x_{n}\), calculate \(x_{n+1}\) as follows:
 Step 1.:

Compute:
$$ y_{n}=P_{C}(x_{n}-\gamma _{n}Fx_{n}), $$where \(\gamma _{n}\) is chosen to be the largest \(\eta _{n}\in \{\lambda ,\lambda g, \lambda g^{2},\ldots \}\) satisfying
$$ \eta _{n} \Vert Fx_{n}-Fy_{n} \Vert \le \nu \Vert x_{n}-y_{n} \Vert . $$If \(x_{n}=y_{n}\), then stop; \(x_{n}\in SOL(C,F)\). Otherwise,
 Step 2.:

Compute:
$$\begin{aligned} & z_{n}=P_{T_{n}}\bigl(x_{n}-\ell \gamma _{n}F(y_{n})\bigr), \\ & \text{where } T_{n}:=\bigl\{ x\in H : \langle x_{n}-\gamma _{n}Fx_{n}-y_{n},x-y_{n} \rangle \le 0 \bigr\} , \text{ and} \\ & d_{n}:=x_{n}-y_{n}-\gamma _{n}(Fx_{n}-Fy_{n}). \end{aligned}$$  Step 3.:

Compute:
$$ x_{n+1}=(1-\delta _{n}-\beta _{n})x_{n}+ \beta _{n}z_{n}. $$Set \(n:= n +1\) and return to Step 1.
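A toy instantiation of the three steps of Algorithm A.4 is sketched below, assuming \(F(x)=3x\) and C the closed unit ball, so that \(P_{C}(x)=x/\max \{1,\Vert x\Vert \}\) is available in closed form. The parameter values are the Sect. 5 choices, but the problem data are illustrative only and are not the paper's experiment.

```python
import numpy as np

def proj_ball(x):
    # closed-form projection onto the unit ball (illustrative choice of C)
    return x / max(1.0, np.linalg.norm(x))

def F(x):
    return 3.0 * x

def a4_step(x, n, lam=1.2, g=0.66, nu=0.87, ell=0.25):
    # Step 1: largest eta in {lam, lam*g, lam*g^2, ...} with
    # eta*||Fx - Fy|| <= nu*||x - y||, where y = P_C(x - eta*F(x))
    eta = lam
    y = proj_ball(x - eta * F(x))
    while eta * np.linalg.norm(F(x) - F(y)) > nu * np.linalg.norm(x - y):
        eta *= g
        y = proj_ball(x - eta * F(x))
    # Step 2: project onto the half-space T_n = {w : <a, w - y> <= 0}
    a = x - eta * F(x) - y
    p = x - ell * eta * F(y)
    viol = a @ (p - y)
    z = p - (viol / (a @ a)) * a if viol > 0 else p
    # Step 3: Mann-type combination x_{n+1} = (1 - delta - beta)x + beta*z
    delta = 2.0 / (3 * n + 1)
    beta = (1 - delta) / 2
    return (1 - delta - beta) * x + beta * z
```

Note that the \(\delta _{n}\)-weighted term carries no anchor point, which is what drives the iterates toward the minimum-norm solution; here the zero vector solves the toy VI and is a fixed point of `a4_step`.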
Algorithm A.5
(Algorithm 3.2 in [46])
 Step 0.:

Given \(\lambda >0\), \(g\in (0,1)\), \(\nu \in (0,1)\), \(\ell \in (0,2)\). Let \(x_{0}\in H\) be chosen arbitrarily.
Given the current iterate \(x_{n}\), calculate \(x_{n+1}\) as follows:
 Step 1.:

Compute:
$$ y_{n}=P_{C}(x_{n}-\gamma _{n}Fx_{n}), $$where \(\gamma _{n}\) is chosen to be the largest \(\eta _{n}\in \{\lambda ,\lambda g, \lambda g^{2},\ldots \}\) satisfying
$$ \eta _{n} \Vert Fx_{n}-Fy_{n} \Vert \le \nu \Vert x_{n}-y_{n} \Vert . $$If \(x_{n}=y_{n}\), then stop; \(x_{n}\in SOL(C,F)\). Otherwise,
 Step 2.:

Compute:
$$\begin{aligned} & z_{n}=P_{T_{n}}(x_{n}-\ell \gamma _{n}Fy_{n}), \\ & \text{where } T_{n}:=\bigl\{ x\in H : \langle x_{n}-\gamma _{n}Fx_{n}-y_{n},x-y_{n} \rangle \le 0 \bigr\} , \text{ and} \\ & d_{n}:=x_{n}-y_{n}-\gamma _{n}(Fx_{n}-Fy_{n}). \end{aligned}$$  Step 3.:

Compute:
$$ x_{n+1}=\alpha _{n}f(x_{n})+(1-\alpha _{n})z_{n}. $$Set \(n:= n +1\) and return to Step 1.
Here, \(f:H\to H\) is a contraction with constant \(\phi \in (0,1)\).
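The viscosity step of Algorithm A.5, with the Sect. 5 choice \(f(x)=\frac{1}{4}x\), can be sketched as follows. The weight \(\alpha _{n}=\frac{2}{3n+1}\) is borrowed from the Sect. 5 parameter list purely for illustration; its exact choice for Algorithm A.5 is not stated above.

```python
def viscosity_step(x, z, n, f=lambda v: 0.25 * v):
    """Step 3 of Algorithm A.5: x_{n+1} = alpha_n*f(x_n) + (1 - alpha_n)*z_n,
    with the contraction f(x) = x/4 (contraction constant phi = 1/4) and an
    assumed anchor weight alpha_n = 2/(3n+1)."""
    alpha = 2.0 / (3 * n + 1)
    return alpha * f(x) + (1 - alpha) * z
```

Because \(\alpha _{n}\to 0\), the contraction term fades out and the iterates follow the projection-and-contraction step \(z_{n}\) asymptotically, which is what forces strong (viscosity-type) convergence.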
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Uzor, V.A., Mewomo, O.T., Alakoya, T.O. et al. Outer approximated projection and contraction method for solving variational inequalities. J Inequal Appl 2023, 141 (2023). https://doi.org/10.1186/s13660023030438
Received:
Accepted:
Published:
DOI: https://doi.org/10.1186/s13660023030438