
Outer approximated projection and contraction method for solving variational inequalities

Abstract

In this paper we focus on solving the classical variational inequality (VI) problem. Most common methods for solving VIs use some kind of projection onto the associated feasible set. Thus, when the involved set is not simple to project onto, the applicability of such methods may be limited and their computational cost high. One such scenario arises when the given set is represented as a finite intersection of sublevel sets of convex functions. In this work we develop an outer approximation method that replaces the projection onto the VI’s feasible set by a simple, closed-formula projection onto some “superset”. The proposed method also combines several known ideas, such as the inertial technique and a self-adaptive step size.

Under standard assumptions, a strong minimum-norm convergence is proved and several numerical experiments validate and exhibit the performance of our scheme.

1 Introduction

Let H be a real Hilbert space with a nonempty, closed, and convex set \(C\subseteq H\). Let \(\|\cdot \|\) and \(\langle \cdot , \cdot \rangle \) denote the induced norm and inner product on H, respectively, and let \(F:H\to H\) be a single-valued mapping. The variational inequality (VI) problem, formulated in (1) below, is a classical problem in mathematical analysis that remains highly relevant today. It was introduced independently by Fichera [15] and Stampacchia [40], and since then numerous researchers have developed various methods for solving VIs, with applications in diverse fields such as the sciences, engineering, medicine, cryptography, image processing, signal processing, and optimal control; see [2, 3, 13, 14, 17, 18, 21, 22, 27, 29, 33] for more details. The VI, with solution set denoted by \(VI(C,F)\), consists of finding a point \(p\in C\) such that

$$ \langle Fp, z-p \rangle \ge 0, \quad \forall z\in C. $$
(1)

Two main known methods for solving VIs are the projection and regularization methods. The foremost projection method is the gradient method (GM) that generates a sequence \(\{x_{n}\}\) according to the following rule:

$$ x_{n+1}=P_{C}(x_{n}-\nu Fx_{n}), $$
(2)

where \(P_{C}\) is the metric projection of H onto the feasible set C. Although the GM has a simple structure, it has two major drawbacks. The first is the quite strong monotonicity assumption required for its convergence and the second is the need for computing the projection onto the feasible set C, per iteration.
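To make the iteration (2) concrete, here is a minimal numerical sketch in which the feasible set is taken, for illustration only, to be the Euclidean unit ball (so \(P_{C}\) has a closed form) and \(F(x)=x-a\) is strongly monotone; the test data and parameter values are ours, not from the literature cited above.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Closed-form projection onto the Euclidean ball of the given radius.
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def gradient_method(F, x0, nu=0.1, iters=500):
    # Iterate x_{n+1} = P_C(x_n - nu * F(x_n)) as in (2).
    x = x0
    for _ in range(iters):
        x = project_ball(x - nu * F(x))
    return x

# F(x) = x - a is 1-strongly monotone and 1-Lipschitz; the unique VI
# solution over the unit ball is the projection of a onto the ball.
a = np.array([2.0, 0.0])
x_star = gradient_method(lambda z: z - a, np.zeros(2))
```

For this instance the iterates settle at the projection of a onto the ball, i.e., \((1,0)\).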

As a way to overcome the first limitation of the GM, its monotonicity requirement, Korpelevich [31] (and, independently, Antipin [5]) proposed the extragradient method (EGM), which converges under a weaker monotonicity assumption but requires the evaluation of two projections onto C per iteration. Censor et al. [9] introduced the subgradient extragradient method (SEGM), in which one of the projections is replaced by an easy, closed-formula projection onto a “superset” containing C. Other modifications in this direction can be found, for example, in [32, 47].

Other relevant EGM extensions are Tseng’s extragradient method (TEGM) [49], and the projection and contraction method (PCM) [41], see also [10, 13, 19, 50]. Both methods use only one projection onto C, per iteration. The PCM, for example, generates \(\{x_{n}\}\) according to the following rule:

$$\begin{aligned} \textstyle\begin{cases} x_{1}\in H, \\ y_{n}=P_{C}(x_{n}-\xi Fx_{n}), \\ d(x_{n},y_{n}):= (x_{n}-y_{n})-\xi (Fx_{n}-Fy_{n}), \\ x_{n+1}=x_{n}-\rho \beta _{n} d(x_{n},y_{n}), \end{cases}\displaystyle \end{aligned}$$
(3)

where \(\rho \in (0,2)\), \(\xi \in (0,\frac{1}{L})\) with L the Lipschitz constant of F, \(\beta _{n}:=\frac{\alpha (x_{n},y_{n})}{\|d(x_{n},y_{n})\|^{2}}\), and \(\alpha (x_{n},y_{n}):=\langle x_{n}-y_{n}, d(x_{n},y_{n}) \rangle \), \(\forall n\ge 1\).
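The PCM update (3) can be sketched as follows; the test problem (unit-ball feasible set, \(F(x)=x-a\)) and all parameter values are illustrative choices of ours.

```python
import numpy as np

def pcm(F, project_C, x0, xi, rho=1.5, iters=500):
    # Projection and contraction method (3):
    #   y_n = P_C(x_n - xi * F(x_n))
    #   d_n = (x_n - y_n) - xi * (F(x_n) - F(y_n))
    #   x_{n+1} = x_n - rho * beta_n * d_n,
    #   beta_n = <x_n - y_n, d_n> / ||d_n||^2
    x = x0
    for _ in range(iters):
        y = project_C(x - xi * F(x))
        d = (x - y) - xi * (F(x) - F(y))
        nd2 = np.dot(d, d)
        if nd2 == 0.0:  # x_n = y_n: the current point solves the VI
            break
        x = x - rho * (np.dot(x - y, d) / nd2) * d
    return x

# F(x) = x - a is monotone and 1-Lipschitz, so any xi in (0, 1) is
# admissible; C is the unit ball, whose projection has a closed form.
a = np.array([2.0, 0.0])
unit_ball = lambda p: p if np.linalg.norm(p) <= 1.0 else p / np.linalg.norm(p)
x_star = pcm(lambda z: z - a, unit_ball, np.zeros(2), xi=0.5)
```

One projection onto C per iteration, instead of the EGM's two, is the design point of the PCM.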

As the implementation of the above methods (SEGM, TEGM, and PCM) still requires the computation of \(P_{C}\) at each iteration, the need for a “projection-free” method encouraged many researchers to come up with creative ideas. One such idea is the two-subgradient extragradient method (TSEGM) of Censor et al. [9]. Suppose that the closed and convex set C can be represented as a sublevel set of some convex function \(c:H\to \mathbb{R}\), that is

$$\begin{aligned} C=\bigl\{ x\in H : c(x)\leq 0\bigr\} . \end{aligned}$$
(4)

Denote by \(\partial c(x)\), the subdifferential of the convex function \(c(\cdot )\) at x. The TSEGM generates \(\{x_{n}\}\) according to the following rule:

$$\begin{aligned} \textstyle\begin{cases} x_{1}\in H, \\ y_{n} = P_{C_{n}}(x_{n} - \lambda Fx_{n}), \\ C_{n}:=\{x\in H: c(x_{n})+\langle \zeta _{n}, x-x_{n}\rangle \leq 0\}, \\ x_{n+1} = P_{C_{n}}(x_{n} - \lambda Fy_{n}), \end{cases}\displaystyle \end{aligned}$$
(5)

where \(\zeta _{n}\in \partial c(x_{n})\). Observe that if \(\zeta _{n}=0\), then \(C_{n}=H\); otherwise, \(C_{n}\) is a half-space containing the set C. The convergence of (5) was raised as an open problem in [9].
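As an illustration of the half-space \(C_{n}\) in (5), take the hypothetical single constraint \(c(x)=\|x\|^{2}-1\), so that C is the closed unit ball and \(\partial c(x_{n})=\{2x_{n}\}\):

```python
import numpy as np

def relaxed_halfspace(xn):
    # Linearize c(x) = ||x||^2 - 1 at x_n: zeta_n = 2 x_n, and
    # C_n = {x : c(x_n) + <zeta_n, x - x_n> <= 0} = {x : <zeta_n, x> <= rhs}.
    zeta = 2.0 * xn
    rhs = np.dot(zeta, xn) - (np.dot(xn, xn) - 1.0)
    return zeta, rhs

# Convexity of c guarantees C is contained in every C_n: for z in C,
# c(x_n) + <zeta_n, z - x_n> <= c(z) <= 0.
zeta, rhs = relaxed_halfspace(np.array([2.0, 0.0]))
```

Here \(C_{n}\) is the half-space \(\{x : 4x_{1}\le 5\}\), which indeed contains the unit ball.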

Recently, Cao and Guo [8], as well as Ma and Wang [34], partially answered this open question by proposing an inertial two-subgradient extragradient method (ITSEGM) and a self-adaptive TSEGM for solving Lipschitz continuous and monotone variational inequality problems, with weak convergence guarantees.

Other relevant works related to the subgradient extragradient method are those of He and Wu [24], in which a line search is involved and the two projections onto the set C are replaced by projections onto two particular half-spaces, and He et al. [23], who proposed a relaxed projection and contraction method in which, again, the projection onto the set C is replaced by a projection onto a particular constructible half-space.

In this paper, we are interested in studying VIs where the feasible set C is given as a finite intersection of sublevel sets of convex functions defined as follows:

$$ C:=\bigcap_{i=1}^{k} C^{i}, \qquad C^{i}:=\bigl\{ z\in H : c_{i}(z)\le 0\bigr\} , $$
(6)

where k is a positive integer and \(c_{i}:H\to \mathbb{R}\) for all \(i\in I:=\{1,2,\ldots, k\}\) are convex functions.

A very recent result for solving VIs defined over sets of the form (6) is the totally relaxed self-adaptive subgradient extragradient method of He et al. [25].

Remark 1.1

Although all the above results, He and Wu [24], He et al. [23], He et al. [25], Cao and Guo [8], and Ma and Wang [34], successfully replace the projections onto C by closed-formula projections onto certain supersets, some limitations remain. First, we note that their proposed methods either require knowledge of the Lipschitz constants of F and of the Gâteaux differential \(c'(\cdot )\) of \(c(\cdot )\) (which are often unknown or very difficult to estimate) or employ a line-search procedure, which is known to be time consuming. In addition, all of these results obtain only weak convergence, which is known to be a drawback when solving optimization problems, see, e.g., Bauschke [6].

Following the above methods and results, in this paper we establish a totally relaxed, inertial, self-adaptive projection and contraction method (TRISPCM) for solving the VI (1) defined over a finite intersection of closed, convex sublevel sets (as in (6)). Our method employs projections onto constructible “supersets”, and inertial [4, 10, 20, 37, 42, 50, 52] and relaxation [28] techniques are incorporated to speed up its convergence. Although we assume that F is Lipschitz continuous and that the Gâteaux differentials \(c_{i}' (\cdot )\) of \(c_{i}(\cdot )\) are Lipschitz continuous, our method does not require any line-search procedure; rather, we employ a more efficient self-adaptive step-size technique that generates a nonmonotonic sequence of step sizes. Moreover, under suitable conditions, we prove strong convergence to a minimum-norm solution of the problem. Relevant numerical experiments at the end of this paper clearly display the efficiency of our method over those in the literature.

The remainder of this paper is organized as follows. Section 2 contains definitions and existing results relevant to our analysis. In Sect. 3, the proposed algorithm is presented and its strong convergence is established in Sect. 4. Numerical experiments and comparisons with related methods are given in Sect. 5, illustrating the performance of our scheme. Finally, some concluding remarks on our work are presented in Sect. 6.

2 Preliminaries

In this section, we review basic definitions and key lemmas that are vital in proving our main results.

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Also, throughout this paper, we let the strong and weak convergence of a sequence \(\{x_{n}\}\) to a point \(x^{*} \in H\) be denoted by \(x_{n} \rightarrow x^{*}\) and \(x_{n} \rightharpoonup x^{*}\), respectively. The set of weak limits of \(\{x_{n}\}\), denoted by \(w_{\omega}(x_{n})\), is defined by

$$ w_{\omega}(x_{n}):= \bigl\{ x^{*}\in H: x_{n_{k}}\rightharpoonup x^{*} \text{ for some subsequence } \{x_{n_{k}}\} \text{ of }\{x_{n}\}\bigr\} . $$
(7)

The metric projection \(P_{C}: H\rightarrow C\) ([1]) is defined, for each \(x\in H\), as the unique element \(P_{C}x\in C\) such that

$$ \Vert x - P_{C}x \Vert = \inf \bigl\{ \Vert x-z \Vert : z\in C\bigr\} . $$

It is known that \(P_{C}\) is nonexpansive (see [4, 38]). Further properties of the metric projection are collected in Lemma 2.1.

Lemma 2.1

[25, 30] Let H be a real Hilbert space, and I be the identity map on H. Let C be a nonempty, closed, and convex subset of H. We have the following results for any \(x\in H\) and \(f,g\in C\):

  1. (i)

    \(g = P_{C}x \Longleftrightarrow \langle x - g, g - f\rangle \geq 0\);

  2. (ii)

    \(\langle (I-P_{C})x-(I-P_{C})f, x-f\rangle \ge \|(I-P_{C})x-(I-P_{C})f \|^{2}\);

  3. (iii)

    \(\|f-P_{C}x\|^{2}+\|x-P_{C}x\|^{2} \le \|x-f\|^{2}\);

  4. (iv)

    \(\langle x-f,P_{C}x-P_{C}f \rangle \ge \|P_{C}x-P_{C}f\|^{2}\);

  5. (v)

    Let \(D=\{u\in H : \langle x,u \rangle \le d\}\) be a half-space, where \(x\ne 0\) and \(d\in \mathbb{R}\). Then, for \(a\in H\),

    $$ P_{D} (a)= a-\max \biggl\{ 0, \frac{\langle x,a \rangle -d}{ \Vert x \Vert ^{2}} \biggr\} x. $$
    (8)

Note that (8) is the explicit formula for the orthogonal projection onto the half-space D.
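A minimal sketch of formula (8), assuming the half-space is written as \(D=\{u\in H : \langle x,u \rangle \le d\}\):

```python
import numpy as np

def project_halfspace(a, x, d):
    # P_D(a) for D = {u : <x, u> <= d}, x != 0, via the closed formula:
    # P_D(a) = a - max(0, (<x, a> - d) / ||x||^2) * x
    return a - max(0.0, (np.dot(x, a) - d) / np.dot(x, x)) * x
```

For example, projecting \((3,0)\) onto \(\{u : u_{1}\le 1\}\) gives \((1,0)\), while a point already in D is returned unchanged.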

Definition 2.2

[44, 51] Let \(F: H\rightarrow H\) be a mapping defined on a real Hilbert space H. Then, F is said to be:

  1. (i)

    L-Lipschitz continuous, where \(L>0\), if

    $$ \Vert Fu - Fv \Vert \leq L \Vert u-v \Vert , \quad \forall u,v\in H. $$

    F is a contraction if \(L\in [0,1)\), and nonexpansive if \(L=1\);

  2. (ii)

    λ-strongly monotone, if there exists \(\lambda >0\) such that

    $$ \langle u-v, Fu-Fv \rangle \ge \lambda \Vert u-v \Vert ^{2}, \quad \forall u,v \in H ; $$
  3. (iii)

    monotone, if

    $$ \langle Fu - Fv, u-v\rangle \geq 0, \quad \forall u,v\in H. $$

Lemma 2.3

[43, 50] Let H be a real Hilbert space. Then, the following results hold, for all \(x,y\in H\) and \(\zeta \in \mathbb{R}\):

  1. (i)

    \(\|x + y\|^{2} \leq \|x\|^{2} + 2\langle y, x + y \rangle \);

  2. (ii)

    \(\|x + y\|^{2} = \|x\|^{2} + 2\langle x, y \rangle + \|y\|^{2}\);

  3. (iii)

    \(\|\zeta x + (1-\zeta ) y\|^{2} = \zeta \|x\|^{2} + (1-\zeta )\|y\|^{2} -\zeta (1-\zeta )\|x-y\|^{2}\).
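Identities such as Lemma 2.3(iii) are easy to sanity-check numerically; the vectors below are arbitrary:

```python
import numpy as np

# Numerically verify Lemma 2.3(iii):
# ||z x + (1-z) y||^2 = z ||x||^2 + (1-z) ||y||^2 - z (1-z) ||x - y||^2
rng = np.random.default_rng(0)
x, y, zeta = rng.standard_normal(3), rng.standard_normal(3), 0.3
lhs = np.linalg.norm(zeta * x + (1 - zeta) * y) ** 2
rhs = (zeta * np.linalg.norm(x) ** 2 + (1 - zeta) * np.linalg.norm(y) ** 2
       - zeta * (1 - zeta) * np.linalg.norm(x - y) ** 2)
```

Both sides agree to machine precision, as the identity holds for any \(x,y\) and \(\zeta \in \mathbb{R}\).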

Definition 2.4

[36] Let \(c:H\to \mathbb{R}\) be a real-valued function. Then,

  1. (i)

    c is said to be Gâteaux differentiable at \(z\in H\), if there exists an element in H, denoted by \(c'(z)\), such that

    $$ \lim_{t\to 0} \frac{c(z+th)-c(z)}{t}=\bigl\langle h, c'(z) \bigr\rangle , \quad \forall h\in H, $$
    (9)

    where \(c'(z)\) (also written as \(\nabla c(z)\)), is known as the Gâteaux differential (or gradient) of c at z.

  2. (ii)

    If c is convex, then c is said to be subdifferentiable at point \(z\in H\), if \(\partial c(z)\) is nonempty, where \(\partial c(z)\) is defined as follows:

    $$ \partial c(z):=\bigl\{ x\in H : c(y)\ge c(z)+\langle x,y-z \rangle \ \forall y\in H\bigr\} . $$
    (10)

    c is said to be subdifferentiable on H, if for each \(z\in H\), c is subdifferentiable at z.

  3. (iii)

    c is said to be weakly lower semicontinuous (w-lsc) at \(z\in H\), if \(z_{n}\rightharpoonup z\) implies

    $$ c(z)\le \liminf_{n\to \infty} c(z_{n}). $$
    (11)

    c is said to be w-lsc on H if for each \(z\in H\), c is w-lsc at z.

Remark 2.5

We note the following from Definition 2.4:

  1. (i)

    Each element in \(\partial c(z)\) is referred to as a subgradient of c at z. Also, (10) is said to be the subdifferential inequality of c at z, where \(\partial c(z)\) is the subdifferential of c at z.

  2. (ii)

    It is also known that if c is Gâteaux differentiable at z, then c is subdifferentiable at z, and \(\partial c(z)=\{c'(z)\}\), in particular, \(\partial c(z)\) is a singleton set (see [25]).

Lemma 2.6

[7] Let \(c:H\to \mathbb{R}\cup \{+\infty \}\) be convex. Then, the following results are equivalent:

  1. (i)

    c is weakly sequentially lower semicontinuous;

  2. (ii)

    c is lower semicontinuous.

Lemma 2.7

[45] Let \(\{\xi _{n}\}\) and \(\{\mu _{n}\}\) be two nonnegative real sequences such that

$$ \xi _{n+1}\le \xi _{n} + \mu _{n},\quad \forall n \ge 1. $$

If \(\sum_{n=1}^{\infty}\mu _{n}<+\infty \), then \(\lim_{n\to \infty}\xi _{n}\) exists.

Lemma 2.8

[12] Suppose C is a nonempty, closed, and convex subset of H, and \(F:C\to H\) is a continuous monotone mapping. Then, for \(x\in C\),

$$ x\in VI(C,F)\quad \iff\quad \langle Fy, y-x \rangle \ge 0,\quad \forall y\in C. $$

Lemma 2.9

[39] Suppose \(\{x_{n}\}\) is a sequence of nonnegative real numbers, \(\{\alpha _{n}\}\) is a sequence in \((0, 1)\) with \(\sum_{n=1}^{\infty}\alpha _{n} = +\infty \), and \(\{z_{n}\}\) is a sequence of real numbers such that

$$ x_{n+1}\leq (1 - \alpha _{n})x_{n} + \alpha _{n}z_{n}, \quad \textit{for all } n\geq 1. $$

If \(\limsup_{k\rightarrow \infty}z_{n_{k}}\leq 0\) for every subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) satisfying \(\liminf_{k\rightarrow \infty}(x_{n_{k+1}} - x_{n_{k}})\geq 0\), then \(\lim_{n\rightarrow \infty}x_{n} =0\).

Lemma 2.10

[26, 35] Let C be a set defined as in (6), and let \(F:H\to H\) be an operator. Suppose the solution set \(VI(C,F)\) is nonempty. Then, the following alternative theorem holds for the solution of the \(VI(C,F)\): given \(\hat{z}\in C\), \(\hat{z}\in VI(C,F)\) if and only if one of the following holds.

  1. (i)

    \(F\hat{z}=0\); or

  2. (ii)

    \(\hat{z}\in bd(C)\), and there exist \(\beta _{\hat{z}}>0\) (depending on the point \(\hat{z}\)), and \(\kappa \in \operatorname{conv}\{c'_{i}(\hat{z}) : i\in I_{\hat{z}}^{*}\}\) such that \(F(\hat{z})=-\beta _{\hat{z}}\kappa \), where \(bd(C)\) denotes the boundary of the set C, \(I_{\hat{z}}^{*}=\{i\in I : c_{i}(\hat{z})=0\}\), and \(\operatorname{conv}\{c'_{i}(\hat{z}) : i\in I_{\hat{z}}^{*}\}\) is the convex hull of the set \(\{c'_{i}(\hat{z}) : i\in I_{\hat{z}}^{*}\}\).

3 Proposed method

Here, we present our algorithm: A totally relaxed inertial self-adaptive projection and contraction method (TRISPCM) for solving the monotone variational inequality problem defined over the feasible set (6). Our results are based on the following assumptions:

Assumption A

  1. (A1)

    \(F:H\to H\) is monotone and \(\mathcal{J}\)-Lipschitz continuous on H.

  2. (A2)

    The solution set \(VI(C,F)\) is nonempty.

  3. (A3)

    The functions \(c_{i}:H\to \mathbb{R}\), \(i\in I\), satisfy the following conditions:

    1. (i)

      Each \(c_{i}\) (\(i\in I\)) is convex on H.

    2. (ii)

      Each \(c_{i}\) (\(i\in I\)) is weakly lower semicontinuous on H.

    3. (iii)

      Each \(c_{i}\) (\(i\in I\)) is Gâteaux differentiable, and each \(c'_{i}\) is \(L_{i}\)-Lipschitz on H.

    4. (iv)

      There exists a positive constant K such that for all \(\hat{z} \in bd(C)\), the following holds:

      $$ \Vert F\hat{z} \Vert \le K\inf \bigl\{ \bigl\Vert m(\hat{z}) \bigr\Vert : m(\hat{z}) \in \operatorname{conv}\bigl\{ c'_{i}( \hat{z}) : i\in I_{\hat{z}}^{*}\bigr\} \bigr\} , $$

      where \(I_{\hat{z}}^{*}\) is defined as in Lemma 2.10.

Assumption B

  1. (B1)

    Let \(\tau >0\), \(\gamma _{1}>0\), \(\ell \in (0,2)\), \(\delta \in (0, \frac{2-\ell}{2-\ell +2K} )\);

  2. (B2)

    \(\alpha _{n}\in (0,1)\), \(\lim_{n\rightarrow \infty}\alpha _{n}=0\), \(\sum_{n=1}^{\infty}\alpha _{n}=+\infty \), \(\{\theta _{n}\}\subset \mathbb{R}_{+}\) such that \(\lim_{n\rightarrow \infty}\frac{\theta _{n}}{\alpha _{n}}=0\);

  3. (B3)

    Let \(\mu _{n}\) be a nonnegative sequence such that \(\sum_{n=1}^{\infty}\mu _{n}<+\infty \).

We show our algorithm below:

Algorithm 3.1

Step 0. :

Set \(n=1\), and let \(x_{0}, x_{1}\in H\) be two arbitrary initial points.

Step 1. :

Given the \((n-1)\)th and \(n\)th iterates, choose \(\tau _{n}\) such that \(0\leq \tau _{n}\leq \hat{\tau}_{n}\), with \(\hat{\tau}_{n}\) defined by

$$ \hat{\tau}_{n} = \textstyle\begin{cases} \min \{\tau , \frac{\theta _{n}}{ \Vert x_{n} - x_{n-1} \Vert } \}, & \text{if } x_{n} \neq x_{n-1}, \\ \tau , & \text{otherwise.} \end{cases} $$
(12)
Step 2. :

Compute

$$\begin{aligned} &w_{n} = (1-\alpha _{n}) \bigl(x_{n} + \tau _{n}(x_{n} - x_{n-1}) \bigr). \end{aligned}$$
(13)
Step 3. :

Given the current iterate \(w_{n}\), construct the family of half-spaces

$$\begin{aligned} & C_{n}^{i}= \bigl\{ w\in H : c_{i}(w_{n})+\bigl\langle c'_{i}(w_{n}), w-w_{n} \bigr\rangle \le 0\bigr\} , i\in I. \end{aligned}$$
(14)

Set,

$$\begin{aligned} C_{n}:=\bigcap_{i\in I}C_{n}^{i} \end{aligned}$$
(15)

and compute:

$$\begin{aligned} y_{n}:=P_{C_{n}}(w_{n}-\gamma _{n}Fw_{n}). \end{aligned}$$
(16)

If \(w_{n}=y_{n}\), then stop: \(w_{n}\in VI(C,F)\). Otherwise, proceed to Step 4.

Step 4. :

Compute:

$$\begin{aligned} & x_{n+1}=w_{n}-\ell \psi _{n} d_{n}, \\ & \text{where, } d_{n}:= w_{n}-y_{n}-\gamma _{n}(Fw_{n}-Fy_{n}), \text{ and} \\ & \psi _{n}= \textstyle\begin{cases} \frac{\langle w_{n}-y_{n}, d_{n} \rangle}{ \Vert d_{n} \Vert ^{2}}, & \text{if } d_{n}\ne 0, \\ 0, & \text{otherwise.} \end{cases}\displaystyle \end{aligned}$$
(17)

Update:

$$ \gamma _{n+1} = \textstyle\begin{cases} \min \{ \frac{\delta \Vert w_{n}-y_{n} \Vert }{ \Vert Fw_{n}-Fy_{n} \Vert + \Vert c'_{i_{n}}(w_{n})-c'_{i_{n}}(y_{n}) \Vert }, \gamma _{n}+\mu _{n}\}, \\ \quad \text{if } \Vert Fw_{n}-Fy_{n} \Vert + \Vert c'_{i_{n}}(w_{n})-c'_{i_{n}}(y_{n}) \Vert \ne 0, \\ \gamma _{n}+\mu _{n}, \quad \text{otherwise,} \end{cases} $$
(18)

where \(\|c'_{i_{n}}(w_{n})-c'_{i_{n}}(y_{n})\|=\max_{i\in I}\{\|c'_{i}(w_{n})-c'_{i}(y_{n}) \|\}\). Set \(n:= n +1\) and return to Step 1.
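A minimal numerical sketch of Algorithm 3.1, assuming a single constraint \(c(x)=\|x\|^{2}-1\) (so \(C_{n}\) is one half-space and the projection (16) reduces to formula (8)) and \(F(x)=x-a\). The parameter values, the choice \(\alpha _{n}=1/(n+1)\), and the test data are ours and purely illustrative:

```python
import numpy as np

# Sketch of Algorithm 3.1 (TRISPCM) with one constraint c(x) = ||x||^2 - 1
# (C is the unit ball, c'(x) = 2x) and the monotone, Lipschitz F(x) = x - a.
a = np.array([2.0, 0.0])
F = lambda z: z - a
c = lambda z: np.dot(z, z) - 1.0
dc = lambda z: 2.0 * z

def project_halfspace(p, g, rhs):
    # P onto {u : <g, u> <= rhs}; if g = 0 the "half-space" is all of H.
    gg = np.dot(g, g)
    if gg == 0.0 or np.dot(g, p) <= rhs:
        return p
    return p - ((np.dot(g, p) - rhs) / gg) * g

tau, theta, ell, delta, mu = 0.5, 1e-2, 1.0, 0.25, 0.0
x_prev = x = np.zeros(2)
gamma = 0.2
for n in range(1, 501):
    alpha = 1.0 / (n + 1)
    # Step 1: inertial parameter (12)
    diff = np.linalg.norm(x - x_prev)
    tau_n = tau if diff == 0.0 else min(tau, theta / diff)
    # Step 2: relaxed inertial point (13)
    w = (1 - alpha) * (x + tau_n * (x - x_prev))
    # Step 3: half-space C_n from (14) and the projection (16)
    g = dc(w)
    rhs = np.dot(g, w) - c(w)
    y = project_halfspace(w - gamma * F(w), g, rhs)
    # Step 4: projection-and-contraction update (17)
    d = (w - y) - gamma * (F(w) - F(y))
    nd2 = np.dot(d, d)
    psi = np.dot(w - y, d) / nd2 if nd2 > 0.0 else 0.0
    x_prev, x = x, w - ell * psi * d
    # Self-adaptive step size (18)
    denom = np.linalg.norm(F(w) - F(y)) + np.linalg.norm(dc(w) - dc(y))
    if denom == 0.0:
        gamma = gamma + mu
    else:
        gamma = min(delta * np.linalg.norm(w - y) / denom, gamma + mu)
```

For this instance \(VI(C,F)\) is the singleton \(\{(1,0)\}\), which is also the minimum-norm solution, and the iterates settle near it while the step sizes \(\gamma _{n}\) stay within the bounds of Lemma 4.3.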

Remark 3.2

We highlight below some of the key features of our proposed Algorithm 3.1.

  1. (i)

    Observe that the feasible set is constructed as a finite intersection of sublevel sets, as seen in (6), which is more general than the feasible set adopted in [8, 19, 23, 24, 34]. Also, observe that in (14), if \(c'_{i}(w_{n})=0\), then we have that \(C_{n}^{i}=H\).

  2. (ii)

    We note that our proposed algorithm completely avoids projection onto the feasible set, but rather allows only one projection onto some half-space, as seen in (14)–(16). This obviously ensures easier computation, since projection onto half-spaces can be calculated using an explicit formula, see (8).

  3. (iii)

    We note that our Algorithm 3.1 employs the relaxation and inertial techniques to improve its rate of convergence. Also, we observe that Step 1 of our proposed algorithm is easily implemented, since we have prior knowledge of the estimate \(\|x_{n}-x_{n-1}\|\) before choosing \(\tau _{n}\).

  4. (iv)

    We emphasize that while the cost operator is Lipschitz continuous, our algorithm does not require any line-search procedure (unlike the methods of [23–25]). Instead, we adopt a more efficient self-adaptive step-size technique that generates a nonmonotonic sequence of step sizes (as seen in (18)).

  5. (v)

    We also point out that, unlike the results in [8, 23–25, 34], our proposed algorithm generates a strongly convergent sequence, which converges to a minimum-norm solution of the VI.

Remark 3.3

By applying condition (B2), from (12) we have that

$$ \lim_{n\rightarrow \infty}\tau _{n} \Vert x_{n} - x_{n-1} \Vert = 0 \quad \text{and}\quad \lim_{n\rightarrow \infty} \frac{\tau _{n}}{\alpha _{n}} \Vert x_{n} - x_{n-1} \Vert = 0. $$
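Indeed, both limits follow directly from (12) and condition (B2): whenever \(x_{n}\neq x_{n-1}\),

$$ \tau _{n} \Vert x_{n}-x_{n-1} \Vert \le \hat{\tau}_{n} \Vert x_{n}-x_{n-1} \Vert \le \theta _{n}=\alpha _{n}\cdot \frac{\theta _{n}}{\alpha _{n}} \longrightarrow 0 \quad \text{and}\quad \frac{\tau _{n}}{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert \le \frac{\theta _{n}}{\alpha _{n}}\longrightarrow 0, $$

since \(\alpha _{n}\in (0,1)\) and \(\frac{\theta _{n}}{\alpha _{n}}\to 0\); the case \(x_{n}=x_{n-1}\) is trivial.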

4 Convergence analysis

In this section, we carry out the convergence analysis of our proposed algorithm. First, we establish some lemmas that are needed to prove the strong convergence theorem for the proposed algorithm.

Lemma 4.1

Let C and \(C_{n}\) be the sets defined by (6) and (15), respectively. Then \(C\subset C_{n}\), \(\forall n\ge 1\).

Proof

For all \(i\in I\), let \(C^{i}:=\{x\in H : c_{i}(x)\le 0\}\), so that \(C=\bigcap_{i\in I}C^{i}\). Then, for each \(i\in I\) and any \(x\in C^{i}\), by the subdifferential inequality, it follows that

$$ c_{i}(w_{n}) + \bigl\langle c'_{i}(w_{n}), x-w_{n} \bigr\rangle \le c_{i}(x) \le 0. $$
(19)

By definition of the sets \(C_{n}^{i}\) (14), we see that \(x\in C_{n}^{i}\). It then follows that \(C^{i}\subset C_{n}^{i}\), \(\forall n\ge 1\), \(i\in I\). Hence, \(C\subset C_{n}\), \(\forall n\ge 1\) as required. □

Lemma 4.2

If \(y_{n}=w_{n}\) for some \(n\ge 1\) in Algorithm 3.1, then \(w_{n}\in VI(C,F)\).

Proof

Suppose \(y_{n}=w_{n}\) for some \(n\ge 1\). Then, by (16), we obtain

$$ w_{n}=P_{C_{n}}(w_{n}-\gamma _{n}Fw_{n}). $$
(20)

From (20), it follows that \(w_{n}\in C_{n}\), in particular, \(w_{n}\in C_{n}^{i}\) for each \(i\in I\) and \(n\ge 1\). Then, we have that \(c_{i}(w_{n})+\langle c'_{i}(w_{n}), w_{n}-w_{n} \rangle \le 0\), by the definition of \(C_{n}^{i}\). From this, we obtain \(c_{i}(w_{n})\le 0\) for each \(i\in I\). Thus, \(w_{n}\in C\).

Next, by (20) and Lemma 2.1(i), we have \(w_{n}\in VI(C_{n},F)\), which implies that

$$ \langle Fw_{n}, z-w_{n} \rangle \ge 0, \quad \forall z\in C_{n}. $$
(21)

Therefore, from the fact that \(w_{n}\in C\subset C_{n}\) and (21), the conclusion follows. □

Lemma 4.3

Let \(\{\gamma _{n}\}\) be the sequence generated by (18). Then, \(\{\gamma _{n}\}\) is well defined and \(\lim_{n\rightarrow \infty}\gamma _{n}=\gamma \), where \(\gamma \in [\min \{\frac{\delta}{M}, \gamma _{1}\}, \gamma _{1}+ \Phi ]\), for some constant \(M>0\) and \(\Phi =\sum_{n=1}^{\infty}\mu _{n}\).

Proof

Since \(c'_{i}\) and F are both Lipschitz continuous, considering the case \(\|Fw_{n}-Fy_{n}\|+\|c'_{i_{n}}(w_{n})-c'_{i_{n}}(y_{n})\|\ne 0\) in (18), we obtain for all \(n\ge 1\), that

$$\begin{aligned} \frac{\delta \Vert w_{n}-y_{n} \Vert }{ \Vert Fw_{n}-Fy_{n} \Vert + \Vert c'_{i_{n}}(w_{n})-c'_{i_{n}}(y_{n}) \Vert }& \ge \frac{\delta \Vert w_{n}-y_{n} \Vert }{\mathcal{J} \Vert w_{n}-y_{n} \Vert + L \Vert w_{n}-y_{n} \Vert } \\ &\ge \frac{\delta \Vert w_{n}-y_{n} \Vert }{(\mathcal{J}+L) \Vert w_{n}-y_{n} \Vert } = \frac{\delta}{M}, \end{aligned}$$

where \(M:=\mathcal{J}+L>0\) and \(L =\max \{L_{i}: i\in I\}\). Thus, by the definition of \(\gamma _{n+1}\), the sequence \(\{\gamma _{n}\}\) is bounded above by \(\gamma _{1} + \Phi \) and below by \(\min \{\frac{\delta}{M},\gamma _{1}\}\). Hence, by Lemma 2.7, \(\lim_{n\to \infty}\gamma _{n}\) exists, and we denote \(\lim_{n\to \infty}\gamma _{n}=\gamma \). Clearly, \(\gamma \in [\min \{\frac{\delta}{M},\gamma _{1}\},\gamma _{1}+ \Phi ]\). □

Lemma 4.4

Let \(\{x_{n}\}\) be a sequence generated by Algorithm 3.1. Suppose Assumptions A and B are satisfied. Then, the following inequalities hold for all \(p\in VI(C,F)\):

$$ \Vert w_{n}-y_{n} \Vert ^{2} \le \frac{1}{\ell ^{2}} \biggl( \frac{\gamma _{n+1}+\delta \gamma _{n}}{\gamma _{n+1}-\delta \gamma _{n}} \biggr)^{2} \Vert w_{n}-x_{n+1} \Vert ^{2} $$
(22)

and

$$ \Vert x_{n+1}-p \Vert ^{2}\le \Vert w_{n}-p \Vert ^{2}- \biggl[\frac{2-\ell}{\ell}- \frac{2K\delta}{\ell} \frac{\gamma _{n}}{(\gamma _{n+1}-\delta \gamma _{n})} \biggr] \Vert w_{n}-x_{n+1} \Vert ^{2}. $$
(23)

Proof

From (18), we obtain

$$\begin{aligned} \gamma _{n+1}&= \min \biggl\{ \frac{\delta \Vert w_{n}-y_{n} \Vert }{ \Vert Fw_{n}-Fy_{n} \Vert + \Vert c'_{i_{n}}(w_{n})-c'_{i_{n}}(y_{n}) \Vert }, \gamma _{n}+\mu _{n} \biggr\} \\ &\le \frac{\delta \Vert w_{n}-y_{n} \Vert }{ \Vert Fw_{n}-Fy_{n} \Vert + \Vert c'_{i_{n}}(w_{n})-c'_{i_{n}}(y_{n}) \Vert } , \end{aligned}$$
(24)

which implies that

$$\begin{aligned} \Vert Fw_{n}-Fy_{n} \Vert + \bigl\Vert c'_{i_{n}}(w_{n})-c'_{i_{n}}(y_{n}) \bigr\Vert \le \frac{\delta}{\gamma _{n+1}} \Vert w_{n}-y_{n} \Vert , \quad \forall n\ge 1. \end{aligned}$$
(25)

Since both terms on the left-hand side of (25) are nonnegative, it follows that

$$\begin{aligned} \bigl\Vert c'_{i_{n}}(w_{n})-c'_{i_{n}}(y_{n}) \bigr\Vert \le \frac{\delta}{\gamma _{n+1}} \Vert w_{n}-y_{n} \Vert , \quad \forall n\ge 1. \end{aligned}$$
(26)

Next, we prove the first inequality (22). From the definition of \(\psi _{n}\), if \(d_{n}\ne 0\), then by the Cauchy–Schwarz inequality and (25), we have

$$\begin{aligned} \psi _{n} \Vert d_{n} \Vert ^{2}=\langle w_{n}-y_{n},d_{n} \rangle &\ge \Vert w_{n}-y_{n} \Vert ^{2}-\gamma _{n} \Vert Fw_{n}-Fy_{n} \Vert \Vert w_{n}-y_{n} \Vert \\ &\ge \Vert w_{n}-y_{n} \Vert ^{2}-\gamma _{n} \biggl(\frac{\delta}{\gamma _{n+1}} \biggr) \Vert w_{n}-y_{n} \Vert ^{2} \\ &= \biggl(1-\frac{\delta \gamma _{n}}{\gamma _{n+1}} \biggr) \Vert w_{n}-y_{n} \Vert ^{2}. \end{aligned}$$
(27)

Also,

$$\begin{aligned} \Vert d_{n} \Vert &\le \Vert w_{n}-y_{n} \Vert +\gamma _{n} \Vert Fw_{n}-Fy_{n} \Vert \\ &\le \Vert w_{n}-y_{n} \Vert +\frac{\delta \gamma _{n}}{\gamma _{n+1}} \Vert w_{n}-y_{n} \Vert \\ & = \biggl(1+\frac{\delta \gamma _{n}}{\gamma _{n+1}} \biggr) \Vert w_{n}-y_{n} \Vert . \end{aligned}$$
(28)

From (27), we obtain

$$\begin{aligned} \bigl(\psi _{n} \Vert d_{n} \Vert ^{2} \bigr)^{2}&\ge \biggl(1- \frac{\delta \gamma _{n}}{\gamma _{n+1}} \biggr)^{2} \Vert w_{n}-y_{n} \Vert ^{4}. \end{aligned}$$

From the last inequality and by applying (28), we obtain

$$\begin{aligned} \psi _{n}^{2} \Vert d_{n} \Vert ^{2}& \ge \biggl(1- \frac{\delta \gamma _{n}}{\gamma _{n+1}} \biggr)^{2} \frac{ \Vert w_{n}-y_{n} \Vert ^{4}}{ \Vert d_{n} \Vert ^{2}} \\ &\ge \frac{ (1-\frac{\delta \gamma _{n}}{\gamma _{n+1}} )^{2} \Vert w_{n}-y_{n} \Vert ^{4}}{ (1+\frac{\delta \gamma _{n}}{\gamma _{n+1}} )^{2} \Vert w_{n}-y_{n} \Vert ^{2}} \\ &= \frac{ (1-\frac{\delta \gamma _{n}}{\gamma _{n+1}} )^{2}}{ (1+\frac{\delta \gamma _{n}}{\gamma _{n+1}} )^{2}} \Vert w_{n}-y_{n} \Vert ^{2}. \end{aligned}$$
(29)

Observe that (29) still holds when \(d_{n}=0\). Hence, from (29) and the definition of \(x_{n+1}\), we have that

$$\begin{aligned} \Vert x_{n+1}-w_{n} \Vert ^{2}&= \Vert w_{n}-\ell \psi _{n}d_{n}-w_{n} \Vert ^{2}= \Vert \ell \psi _{n}d_{n} \Vert ^{2}=\ell ^{2}\psi _{n}^{2} \Vert d_{n} \Vert ^{2} \\ &\ge \ell ^{2} \frac{ (1-\frac{\delta \gamma _{n}}{\gamma _{n+1}} )^{2}}{ (1+\frac{\delta \gamma _{n}}{\gamma _{n+1}} )^{2}} \Vert w_{n}-y_{n} \Vert ^{2}. \end{aligned}$$

Hence,

$$\begin{aligned} \Vert w_{n}-y_{n} \Vert ^{2}\le \frac{1}{\ell ^{2}} \biggl( \frac{\gamma _{n+1}+\delta \gamma _{n}}{\gamma _{n+1}-\delta \gamma _{n}} \biggr)^{2} \Vert w_{n}-x_{n+1} \Vert ^{2}, \end{aligned}$$

as required.

Next, we prove the second inequality (23). If there exists \(n^{*}\ge 1\) such that \(d_{n^{*}}=0\), then \(x_{n^{*}+1}=w_{n^{*}}\), and (23) holds trivially. Hence, we consider the nontrivial case where \(d_{n}\ne 0\) for each \(n\ge 1\).

Let \(p\in VI(C,F)\). Then, from (17), we have that

$$\begin{aligned} \Vert x_{n+1}-p \Vert ^{2}&= \Vert w_{n}-\ell \psi _{n}d_{n}-p \Vert ^{2} \\ &= \Vert w_{n}-p \Vert ^{2}-2\ell \psi _{n} \langle w_{n}-p,d_{n} \rangle +\ell ^{2} \psi _{n}^{2} \Vert d_{n} \Vert ^{2}. \end{aligned}$$
(30)

By the definition of \(d_{n}\), we obtain

$$\begin{aligned} \langle w_{n}-p,d_{n} \rangle &=\langle w_{n}-y_{n},d_{n} \rangle + \langle y_{n}-p,d_{n} \rangle \\ &=\langle w_{n}-y_{n},d_{n} \rangle + \bigl\langle y_{n}-p,w_{n}-y_{n}- \gamma _{n}(Fw_{n}-Fy_{n}) \bigr\rangle \\ &=\langle w_{n}-y_{n},d_{n} \rangle + \langle y_{n}-p,w_{n}-y_{n}- \gamma _{n}Fw_{n} \rangle \\ &\quad{}+\gamma _{n}\langle y_{n}-p,Fy_{n} \rangle . \end{aligned}$$
(31)

By the monotonicity of F, we have that \(\langle y_{n}-p,Fy_{n}-Fp \rangle \ge 0\), which implies that

$$ \langle y_{n}-p,Fy_{n} \rangle \ge \langle y_{n}-p,Fp \rangle . $$
(32)

Also, since \(y_{n}=P_{C_{n}}(w_{n}-\gamma _{n}Fw_{n})\) and by Lemma 2.1, we have

$$ \langle w_{n}-y_{n}-\gamma _{n}Fw_{n},y_{n}-p \rangle \ge 0. $$
(33)

By (31), (32), and (33), we obtain

$$ \langle w_{n}-p,d_{n} \rangle \ge \langle w_{n}-y_{n},d_{n} \rangle + \gamma _{n} \langle y_{n}-p,Fp \rangle . $$
(34)

Now, we consider the following two cases:

Case 1: \(Fp=0\). In this case, from (34) we obtain

$$ \langle w_{n}-p,d_{n} \rangle \ge \langle w_{n}-y_{n},d_{n} \rangle . $$
(35)

Then, it follows from (30), (35), the definition of \(\psi _{n}\), and the conditions imposed on the control parameter that

$$\begin{aligned} \Vert x_{n+1}-p \Vert ^{2}&\le \Vert w_{n}-p \Vert ^{2}-2\ell \psi _{n}\langle w_{n}-y_{n},d_{n} \rangle + \ell ^{2}\psi _{n}^{2} \Vert d_{n} \Vert ^{2} \\ &= \Vert w_{n}-p \Vert ^{2}-2\ell \psi _{n}^{2} \Vert d_{n} \Vert ^{2}+ \ell ^{2}\psi _{n}^{2} \Vert d_{n} \Vert ^{2} \\ &= \Vert w_{n}-p \Vert ^{2}-\frac{2-\ell}{\ell} \Vert \ell \psi _{n}d_{n} \Vert ^{2} \\ &= \Vert w_{n}-p \Vert ^{2}-\frac{2-\ell}{\ell} \Vert w_{n}-x_{n+1} \Vert ^{2}. \end{aligned}$$
(36)

Hence, the desired inequality (23) follows from (36).

Case 2: \(Fp\ne 0\). By Lemma 2.10, we have that \(p\in bd(C)\) and

$$ Fp=-\beta _{p}\sum_{i\in I^{*}_{p}} \alpha _{i}c'_{i}(p), $$
(37)

where \(\beta _{p}\) is some positive constant, \(I^{*}_{p}=\{i\in I : c_{i}(p)=0\}\), and \(\{\alpha _{i}\}_{i\in I^{*}_{p}}\) are nonnegative numbers satisfying \(\sum_{i\in I^{*}_{p}}\alpha _{i}=1\). Then, by the subdifferential inequality, we obtain

$$ c_{i}(p)+\bigl\langle c'_{i}(p),y_{n}-p \bigr\rangle \le c_{i}(y_{n}), \quad \forall n\ge 0, i\in I^{*}_{p}. $$
(38)

Since \(p\in bd(C)\), we have that \(c_{i}(p)=0\), for each \(i\in I^{*}_{p}\), and then

$$ \bigl\langle c'_{i}(p),y_{n}-p \bigr\rangle \le c_{i}(y_{n}), \quad \forall n\ge 0, i\in I^{*}_{p}. $$
(39)

We have from (37) and (39) that

$$ \langle -Fp,y_{n}-p \rangle \le \beta _{p} \sum_{i\in I^{*}_{p}} \alpha _{i}c_{i}(y_{n}). $$
(40)

Since \(y_{n}\in C_{n}=\bigcap_{i\in I}C^{i}_{n}\), we have

$$ c_{i}(w_{n})+\bigl\langle c'_{i}(w_{n}),y_{n}-w_{n} \bigr\rangle \le 0. $$
(41)

Then, by the subdifferential inequality, we obtain

$$ c_{i}(y_{n})+\bigl\langle c'_{i}(y_{n}),w_{n}-y_{n} \bigr\rangle \le c_{i}(w_{n}), \quad \forall n\ge 0, i\in I^{*}_{p}. $$
(42)

From (41) and (42), and by applying (26) we have

$$\begin{aligned} &c_{i}(y_{n})+\bigl\langle c'_{i}(y_{n}),w_{n}-y_{n} \bigr\rangle \le -\bigl\langle c'_{i}(w_{n}),y_{n}-w_{n} \bigr\rangle \\ &\begin{aligned}[b] \quad \implies\quad c_{i}(y_{n})&\le \bigl\langle c'_{i}(w_{n}) - c'_{i}(y_{n}),w_{n}-y_{n} \bigr\rangle \\ & = \bigl\langle c'_{i}(y_{n}) - c'_{i}(w_{n}),y_{n}-w_{n} \bigr\rangle \\ & \le \bigl\Vert c'_{i}(y_{n}) - c'_{i}(w_{n}) \bigr\Vert \Vert y_{n}-w_{n} \Vert \\ & \le \frac{\delta}{\gamma _{n+1}} \Vert y_{n}-w_{n} \Vert ^{2}. \end{aligned} \end{aligned}$$
(43)

Observe that by condition \((A3)(iv)\), we have

$$ \beta _{p}\le K. $$
(44)

Hence, from (40) and by applying (43), (44), and the condition on \(\gamma _{n}\), we obtain

$$\begin{aligned} &\langle -Fp,y_{n}-p \rangle \le K\frac{\delta}{\gamma _{n+1}} \Vert y_{n}-w_{n} \Vert ^{2} \\ &\quad \implies\quad \langle Fp,y_{n}-p \rangle \ge -K \frac{\delta}{\gamma _{n+1}} \Vert y_{n}-w_{n} \Vert ^{2} \\ &\quad \implies\quad \gamma _{n} \langle Fp,y_{n}-p \rangle \ge -K \frac{\delta \gamma _{n}}{\gamma _{n+1}} \Vert y_{n}-w_{n} \Vert ^{2}. \end{aligned}$$
(45)

By substituting (45) into (34), we obtain

$$ \langle w_{n}-p, d_{n} \rangle \ge \langle w_{n}-y_{n}, d_{n} \rangle -K\frac{\delta \gamma _{n}}{\gamma _{n+1}} \Vert y_{n}-w_{n} \Vert ^{2}. $$
(46)

Then, substituting (46) into (30), we have

$$\begin{aligned} \Vert x_{n+1}-p \Vert ^{2}&\le \Vert w_{n}-p \Vert ^{2}-2\ell \psi _{n} \biggl[ \langle w_{n}-y_{n}, d_{n} \rangle -K \frac{\delta \gamma _{n}}{\gamma _{n+1}} \Vert y_{n}-w_{n} \Vert ^{2} \biggr] \\ &\quad{}+\ell ^{2}\psi _{n}^{2} \Vert d_{n} \Vert ^{2} \\ &= \Vert w_{n}-p \Vert ^{2}-2\ell \psi _{n} \langle w_{n}-y_{n}, d_{n} \rangle + \ell ^{2}\psi _{n}^{2} \Vert d_{n} \Vert ^{2} \\ &\quad{}+2\ell \psi _{n}K\frac{\delta \gamma _{n}}{\gamma _{n+1}} \Vert y_{n}-w_{n} \Vert ^{2} \\ &= \Vert w_{n}-p \Vert ^{2}-\frac{2-\ell}{\ell} \Vert \ell \psi _{n}d_{n} \Vert ^{2}+2 \ell \psi _{n}K\frac{\delta \gamma _{n}}{\gamma _{n+1}} \Vert y_{n}-w_{n} \Vert ^{2} \\ &= \Vert w_{n}-p \Vert ^{2}-\frac{2-\ell}{\ell} \Vert w_{n}-x_{n+1} \Vert ^{2}+2\ell \psi _{n}K\frac{\delta \gamma _{n}}{\gamma _{n+1}} \Vert y_{n}-w_{n} \Vert ^{2}. \end{aligned}$$
(47)

From (27), we obtain

$$\begin{aligned} \Vert w_{n}-y_{n} \Vert ^{2} \le \frac{\psi _{n} \Vert d_{n} \Vert ^{2}}{ (1-\frac{\delta \gamma _{n}}{\gamma _{n+1}} )}. \end{aligned}$$
(48)

Hence, using (48) and the definition of \(x_{n+1}\), we have

$$\begin{aligned} 2\ell \psi _{n}K\frac{\delta \gamma _{n}}{\gamma _{n+1}} \Vert y_{n}-w_{n} \Vert ^{2}&\le 2\ell \psi _{n}K\frac{\delta \gamma _{n}}{\gamma _{n+1}} \frac{\psi _{n} \Vert d_{n} \Vert ^{2}}{ (1-\frac{\delta \gamma _{n}}{\gamma _{n+1}} )} \\ & =\frac{2K\delta}{\ell} \frac{\gamma _{n}}{(\gamma _{n+1}-\delta \gamma _{n})} \Vert \ell \psi _{n}d_{n} \Vert ^{2} \\ & =\frac{2K\delta}{\ell} \frac{\gamma _{n}}{(\gamma _{n+1}-\delta \gamma _{n})} \Vert w_{n}-x_{n+1} \Vert ^{2}. \end{aligned}$$
(49)

By substituting (49) into (47), we obtain

$$\begin{aligned} & \Vert x_{n+1}-p \Vert ^{2} \le \Vert w_{n}-p \Vert ^{2}- \biggl[\frac{2-\ell}{\ell}- \frac{2K\delta}{\ell} \frac{\gamma _{n}}{(\gamma _{n+1}-\delta \gamma _{n})} \biggr] \Vert w_{n}-x_{n+1} \Vert ^{2}, \end{aligned}$$

which is the required inequality. □

Since the limit of \(\{\gamma _{n}\}\) exists, we have that \(\lim_{n\rightarrow \infty}\gamma _{n}=\lim_{n \rightarrow \infty}\gamma _{n+1}\). Hence, by the conditions imposed on the control parameters, we have that

$$ \lim_{n\rightarrow \infty} \biggl[\frac{2-\ell}{\ell}- \frac{2K\delta}{\ell} \frac{\gamma _{n}}{(\gamma _{n+1}-\delta \gamma _{n})} \biggr]= \biggl[ \frac{2-\ell}{\ell}-\frac{2K}{\ell} \frac{\delta}{(1-\delta )} \biggr]>0. $$
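Explicitly, the positivity of this limit determines the admissible range of δ: since \(\ell \in (0,2)\) and \(K>0\), rearranging gives

$$ \frac{2-\ell}{\ell}-\frac{2K}{\ell}\cdot \frac{\delta}{1-\delta}>0 \quad \Longleftrightarrow \quad \frac{\delta}{1-\delta}< \frac{2-\ell}{2K} \quad \Longleftrightarrow \quad \delta < \frac{2-\ell}{2K+2-\ell}. $$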

Thus, there exists \(n_{0}\ge 1\) such that for all \(n\ge n_{0}\), we have

$$ \biggl[\frac{2-\ell}{\ell}-\frac{2K\delta}{\ell} \frac{\gamma _{n}}{(\gamma _{n+1}-\delta \gamma _{n})} \biggr]>0. $$
(50)

Hence, from (23), we have that for all \(n\ge n_{0}\),

$$ \Vert x_{n+1}-p \Vert \le \Vert w_{n}-p \Vert . $$
(51)
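As a quick numerical sanity check of (50)–(51), the parameter values used later in Example 5.1 do make the limiting coefficient positive. The sketch below pairs those values ourselves as an illustration; it is not part of the proof:

```python
import math

# Constants as later chosen for Example 5.1: K = 3*sqrt(e^2 + 1),
# ell = 0.25, delta = 0.068.
K = 3 * math.sqrt(math.e ** 2 + 1)
ell = 0.25
delta = 0.068

# Limiting coefficient in (50) when gamma_n / gamma_{n+1} -> 1:
coeff = (2 - ell) / ell - (2 * K / ell) * delta / (1 - delta)
print(coeff > 0)   # True: the contraction estimate (51) eventually applies
```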

Lemma 4.5

Let \(\{x_{n}\}\) be a sequence generated by Algorithm 3.1such that Assumptions A and B hold. Then, \(\{x_{n}\}\) is bounded.

Proof

Let \(p\in VI(C,F)\). Then, by the definition of \(w_{n}\), we have

$$\begin{aligned} \Vert w_{n}-p \Vert &= \bigl\Vert (1-\alpha _{n}) \bigl(x_{n}+\tau _{n}(x_{n}-x_{n-1})-p \bigr) \bigr\Vert \\ &= \bigl\Vert (1-\alpha _{n}) (x_{n}-p)+(1-\alpha _{n})\tau _{n}(x_{n}-x_{n-1})- \alpha _{n}p \bigr\Vert \\ &\le (1-\alpha _{n}) \Vert x_{n}-p \Vert +(1-\alpha _{n})\tau _{n} \Vert x_{n}-x_{n-1} \Vert +\alpha _{n} \Vert p \Vert \\ &=(1-\alpha _{n}) \Vert x_{n}-p \Vert +\alpha _{n} \biggl[(1-\alpha _{n}) \frac{\tau _{n}}{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert + \Vert p \Vert \biggr]. \end{aligned}$$
(52)

From Remark 3.3, we obtain that \(\lim_{n\rightarrow \infty} [(1-\alpha _{n}) \frac{\tau _{n}}{\alpha _{n}}\|x_{n}-x_{n-1}\|+\|p\| ]=\|p\|\). Thus, there exists \(M_{1}>0\) such that

$$ (1-\alpha _{n})\frac{\tau _{n}}{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert + \Vert p \Vert \le M_{1},\quad \forall n\ge 1. $$
(53)

Combining (52) and (53), we obtain

$$\begin{aligned} \Vert w_{n}-p \Vert \le (1-\alpha _{n}) \Vert x_{n}-p \Vert +\alpha _{n}M_{1}. \end{aligned}$$
(54)

Now, using (54) together with (51), we have

$$\begin{aligned} \Vert x_{n+1}-p \Vert \le \Vert w_{n}-p \Vert &\le (1-\alpha _{n}) \Vert x_{n}-p \Vert +\alpha _{n}M_{1} \\ &\le \max \bigl\{ \Vert x_{n}-p \Vert ,M_{1}\bigr\} \\ &\quad \vdots \\ &\le \max \bigl\{ \Vert x_{n_{0}}-p \Vert ,M_{1}\bigr\} . \end{aligned}$$

Therefore, we have that the sequence \(\{x_{n}\}\) is bounded. Consequently, \(\{w_{n}\}\) and \(\{y_{n}\}\) are both bounded. □

Lemma 4.6

Suppose \(\{x_{n}\}\) is a sequence generated by Algorithm 3.1under Assumptions Aand B. Then, for all \(p\in VI(C,F)\) the following inequality holds:

$$ \biggl[\frac{2-\ell}{\ell}-\frac{2K\delta}{\ell} \frac{\gamma _{n}}{(\gamma _{n+1}-\delta \gamma _{n})} \biggr] \Vert w_{n}-x_{n+1} \Vert ^{2}\le \Vert x_{n}-p \Vert ^{2}- \Vert x_{n+1}-p \Vert ^{2}+\alpha _{n}M_{2}. $$

Proof

Let \(p\in VI(C,F)\). Then, we see from (54) that

$$\begin{aligned} \Vert w_{n}-p \Vert ^{2}&\le (1-\alpha _{n})^{2} \Vert x_{n}-p \Vert ^{2}+2\alpha _{n}(1- \alpha _{n})M_{1} \Vert x_{n}-p \Vert +\alpha _{n}^{2}M_{1}^{2} \\ &\le \Vert x_{n}-p \Vert ^{2}+\alpha _{n} \bigl(2(1-\alpha _{n})M_{1} \Vert x_{n}-p \Vert + \alpha _{n}M_{1}^{2}\bigr) \\ &\le \Vert x_{n}-p \Vert ^{2}+\alpha _{n}M_{2}, \end{aligned}$$
(55)

where \(M_{2}=\sup_{n\in \mathbb{N}}\{2(1-\alpha _{n})M_{1}\|x_{n}-p\|+ \alpha _{n}M_{1}^{2}\}>0\).

Then, by substituting (55) into (23), we have

$$\begin{aligned} \Vert x_{n+1}-p \Vert ^{2}&\le \Vert x_{n}-p \Vert ^{2}+\alpha _{n}M_{2} \\ &\quad{}- \biggl[\frac{2-\ell}{\ell}-\frac{2K\delta}{\ell} \frac{\gamma _{n}}{(\gamma _{n+1}-\delta \gamma _{n})} \biggr] \Vert w_{n}-x_{n+1} \Vert ^{2}. \end{aligned}$$

The desired result follows from the last inequality. □

Lemma 4.7

Assume that \(\{x_{n}\}\) is a sequence generated by Algorithm 3.1under Assumptions Aand B. Then, for all \(p\in VI(C,F)\), the following inequality holds:

$$\begin{aligned} \Vert x_{n+1}-p \Vert ^{2}&\le (1-\alpha _{n}) \Vert x_{n}-p \Vert ^{2} \\ &\quad{}+\alpha _{n} \biggl[2(1-\alpha _{n}) \Vert x_{n}-p \Vert \frac{\tau _{n}}{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert \\ &\quad{}+\tau _{n} \Vert x_{n}-x_{n-1} \Vert \frac{\tau _{n}}{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert \\ &\quad{}+2 \Vert p \Vert \Vert w_{n}-x_{n+1} \Vert +2 \langle p,p-x_{n+1} \rangle \biggr]. \end{aligned}$$

Proof

Using Lemma 2.3 and (51), we obtain

$$\begin{aligned} \Vert x_{n+1}-p \Vert ^{2}&\le \Vert w_{n}-p \Vert ^{2} \\ &= \bigl\Vert (1-\alpha _{n}) (x_{n}-p)+(1-\alpha _{n})\tau _{n}(x_{n}-x_{n-1})- \alpha _{n}p \bigr\Vert ^{2} \\ &\le \bigl\Vert (1-\alpha _{n}) (x_{n}-p)+(1-\alpha _{n})\tau _{n}(x_{n}-x_{n-1}) \bigr\Vert ^{2} \\ &\quad{}+2\alpha _{n}\langle -p, w_{n}-p \rangle \\ &\le (1-\alpha _{n})^{2} \Vert x_{n}-p \Vert ^{2}+2(1-\alpha _{n})\tau _{n} \Vert x_{n}-p \Vert \Vert x_{n}-x_{n-1} \Vert \\ &\quad{}+\tau _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2} +2\alpha _{n}\langle p, p-w_{n} \rangle \\ &=(1-\alpha _{n})^{2} \Vert x_{n}-p \Vert ^{2}+2(1-\alpha _{n})\tau _{n} \Vert x_{n}-p \Vert \Vert x_{n}-x_{n-1} \Vert \\ &\quad{}+\tau _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2} +2\alpha _{n}\langle p, x_{n+1}-w_{n} \rangle +2\alpha _{n}\langle p, p-x_{n+1} \rangle \\ &\le (1-\alpha _{n}) \Vert x_{n}-p \Vert ^{2} \\ &\quad{}+\alpha _{n} \biggl[2(1-\alpha _{n}) \Vert x_{n}-p \Vert \frac{\tau _{n}}{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert \\ &\quad{}+\tau _{n} \Vert x_{n}-x_{n-1} \Vert \frac{\tau _{n}}{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert \\ &\quad{}+2 \Vert p \Vert \Vert w_{n}-x_{n+1} \Vert +2 \langle p,p-x_{n+1} \rangle \biggr], \end{aligned}$$

which is the required inequality. □

Lemma 4.8

Let \(\{w_{n}\}\) and \(\{y_{n}\}\) be two sequences generated by Algorithm 3.1such that Assumptions Aand Bhold. If there exists a subsequence \(\{w_{n_{k}}\}\) of \(\{w_{n}\}\) such that \(w_{n_{k}}\rightharpoonup x^{*}\in H\) and \(\lim_{k\to \infty}\|w_{n_{k}}-y_{n_{k}}\|=0\), then \(x^{*}\in VI(C,F)\).

Proof

Assume that \(\{w_{n}\}\) and \(\{y_{n}\}\) are two sequences generated by Algorithm 3.1 with subsequences \(\{w_{n_{k}}\}\) and \(\{y_{n_{k}}\}\), respectively, such that \(w_{n_{k}}\rightharpoonup x^{*}\). By the hypothesis of the lemma, we have \(y_{n_{k}}\rightharpoonup x^{*}\). Since \(y_{n_{k}}\in C_{n_{k}}\) by the definition of \(C_{n}\), we have

$$ c_{i}(w_{n_{k}})+\bigl\langle c'_{i}(w_{n_{k}}),y_{n_{k}}-w_{n_{k}} \bigr\rangle \le 0. $$
(56)

By applying the Cauchy–Schwarz inequality, from (56) we obtain

$$ c_{i}(w_{n_{k}})\le \bigl\Vert c'_{i}(w_{n_{k}}) \bigr\Vert \Vert y_{n_{k}}-w_{n_{k}} \Vert . $$
(57)

By the Lipschitz continuity of \(c'_{i}(\cdot )\) and the fact that \(\{w_{n_{k}}\}\) is bounded, the sequence \(\{c'_{i}(w_{n_{k}})\}\) is bounded. This implies that there exists a constant \(M_{4}>0\) such that \(\|c'_{i}(w_{n_{k}})\|\le M_{4}\), \(\forall k\ge 0\). Hence, we see from (57) that

$$ c_{i}(w_{n_{k}})\le M_{4} \Vert y_{n_{k}}-w_{n_{k}} \Vert . $$
(58)

Since \(c_{i}(\cdot )\) is continuous, it is lower semicontinuous. Moreover, since \(c_{i}(\cdot )\) is convex, Lemma 2.6 implies that \(c_{i}(\cdot )\) is weakly lower semicontinuous. Hence, from (58) and the definition of weak lower semicontinuity, we have

$$ c_{i}\bigl(x^{*}\bigr)\le \liminf _{k\rightarrow \infty}c_{i}(w_{n_{k}})\le \lim _{k\rightarrow \infty} M_{4} \Vert y_{n_{k}}-w_{n_{k}} \Vert . $$
(59)

Hence, by the hypothesis of the lemma it follows from (59) that \(x^{*}\in C\). By the property of the projection map (see Lemma 2.1), we have

$$ \langle y_{n_{k}}-w_{n_{k}}+\gamma _{n_{k}}Fw_{n_{k}},p-y_{n_{k}} \rangle \ge 0, \quad \forall p\in C\subset C_{n_{k}}. $$

Then, by the monotonicity of F, we obtain

$$\begin{aligned} 0&\le \langle y_{n_{k}}-w_{n_{k}},p-y_{n_{k}} \rangle +\gamma _{n_{k}} \langle Fw_{n_{k}},p-y_{n_{k}} \rangle \\ &=\langle y_{n_{k}}-w_{n_{k}},p-y_{n_{k}} \rangle +\gamma _{n_{k}} \langle Fw_{n_{k}},p-w_{n_{k}} \rangle +\gamma _{n_{k}}\langle Fw_{n_{k}},w_{n_{k}}-y_{n_{k}} \rangle \\ &\le \langle y_{n_{k}}-w_{n_{k}},p-y_{n_{k}} \rangle + \gamma _{n_{k}} \langle Fp,p-w_{n_{k}} \rangle +\gamma _{n_{k}}\langle Fw_{n_{k}},w_{n_{k}}-y_{n_{k}} \rangle . \end{aligned}$$
(60)

Since \(\lim_{k\rightarrow \infty}\|y_{n_{k}}-w_{n_{k}}\|=0\) and \(\lim_{k\rightarrow \infty}\gamma _{n_{k}}=\gamma >0\), then by letting \(k\to \infty \) in (60), we have

$$ \bigl\langle Fp,p-x^{*} \bigr\rangle \ge 0, \quad \forall p\in C. $$

Hence, by Lemma 2.8, we have that \(x^{*}\in VI(C,F)\). □

At this juncture, we proceed to prove the strong convergence theorem for our proposed Algorithm 3.1.

Theorem 4.9

Let \(\{x_{n}\}\) be a sequence generated by Algorithm 3.1under Assumptions Aand B. Then, \(\{x_{n}\}\) converges strongly to an element \(\hat{x}\in VI(C,F)\), where \(\|\hat{x}\|=\min \{\|p\|:p\in VI(C,F)\}\).

Proof

Since \(\|\hat{x}\|=\min \{\|p\|:p\in VI(C,F)\}\), then we have \(\hat{x}=P_{VI(C,F)}(0)\). From Lemma 4.7 we obtain

$$\begin{aligned} \Vert x_{n+1}-\hat{x} \Vert ^{2}&\le (1- \alpha _{n}) \Vert x_{n}-\hat{x} \Vert ^{2} \end{aligned}$$
(61)
$$\begin{aligned} &\quad{}+\alpha _{n} \biggl[2(1-\alpha _{n}) \Vert x_{n}-\hat{x} \Vert \frac{\tau _{n}}{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert \\ &\quad{}+\tau _{n} \Vert x_{n}-x_{n-1} \Vert \frac{\tau _{n}}{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert \\ &\quad{}+2 \Vert \hat{x} \Vert \Vert w_{n}-x_{n+1} \Vert +2 \langle \hat{x},\hat{x}-x_{n+1} \rangle \biggr] \\ &= (1-\alpha _{n}) \Vert x_{n}-\hat{x} \Vert ^{2}+ \alpha _{n}d_{n}, \end{aligned}$$
(62)

where \(d_{n}= [2(1-\alpha _{n})\|x_{n}-\hat{x}\| \frac{\tau _{n}}{\alpha _{n}}\|x_{n}-x_{n-1}\|+\tau _{n}\|x_{n}-x_{n-1} \|\frac{\tau _{n}}{\alpha _{n}}\|x_{n}-x_{n-1}\|+2\|\hat{x}\|\|w_{n}-x_{n+1} \|+2\langle \hat{x},\hat{x}-x_{n+1} \rangle ]\).

Now, we claim that \(\lim_{n\rightarrow \infty}\|x_{n}-\hat{x}\|=0\). By Lemma 2.9, it suffices to show that \(\limsup_{k\rightarrow \infty}d_{n_{k}}\le 0\) for every subsequence \(\{\|x_{n_{k}}-\hat{x}\|\}\) of \(\{\|x_{n}-\hat{x}\|\}\) satisfying

$$ \liminf_{k\rightarrow \infty}\bigl( \Vert x_{n_{k}+1}-\hat{x} \Vert - \Vert x_{n_{k}}-\hat{x} \Vert \bigr) \ge 0. $$
(63)

We assume that \(\{\|x_{n_{k}}-\hat{x}\|\}\) is a subsequence of \(\{\|x_{n}-\hat{x}\|\}\) such that (63) holds. Then, from Lemma 4.6 we have

$$\begin{aligned} \biggl[\frac{2-\ell}{\ell}-\frac{2K\delta}{\ell} \frac{\gamma _{n_{k}}}{(\gamma _{{n_{k}}+1}-\delta \gamma _{n_{k}})} \biggr] \Vert w_{n_{k}}-x_{{n_{k}}+1} \Vert ^{2}&\le \Vert x_{n_{k}}-\hat{x} \Vert ^{2} \\ &\quad{}- \Vert x_{{n_{k}}+1}-\hat{x} \Vert ^{2}+\alpha _{n_{k}}M_{2}. \end{aligned}$$
(64)

By applying (63) and the fact that \(\lim_{k\rightarrow \infty}\alpha _{n_{k}}=0\), we obtain

$$\begin{aligned} \biggl[\frac{2-\ell}{\ell}-\frac{2K\delta}{\ell} \frac{\gamma _{n_{k}}}{(\gamma _{{n_{k}}+1}-\delta \gamma _{n_{k}})} \biggr] \Vert w_{n_{k}}-x_{{n_{k}}+1} \Vert ^{2}\to 0, \quad k\to \infty . \end{aligned}$$

Hence, by (50) we obtain

$$\begin{aligned} \Vert w_{n_{k}}-x_{{n_{k}}+1} \Vert \to 0, \quad k \to \infty . \end{aligned}$$
(65)

Also, from (22) and (65), we obtain

$$\begin{aligned} \Vert w_{n_{k}}-y_{n_{k}} \Vert \to 0, \quad k \to \infty . \end{aligned}$$
(66)

By Remark 3.3, the definition of \(w_{n}\), and the fact that \(\lim_{k\rightarrow \infty}\alpha _{n_{k}}=0\), we have

$$\begin{aligned} \Vert w_{n_{k}}-x_{n_{k}} \Vert &= \bigl\Vert (1-\alpha _{n_{k}}) \bigl(x_{n_{k}}+\tau _{n_{k}}(x_{n_{k}}-x_{n_{k}-1})-x_{n_{k}} \bigr) \bigr\Vert \\ &= \bigl\Vert (1-\alpha _{n_{k}}) (x_{n_{k}}-x_{n_{k}})+(1- \alpha _{n_{k}}) \tau _{n_{k}}(x_{n_{k}}-x_{n_{k}-1})- \alpha _{n_{k}}x_{n_{k}} \bigr\Vert \\ &\le (1-\alpha _{n_{k}}) \Vert x_{n_{k}}-x_{n_{k}} \Vert +(1-\alpha _{n_{k}}) \tau _{n_{k}} \Vert x_{n_{k}}-x_{n_{k}-1} \Vert +\alpha _{n_{k}} \Vert x_{n_{k}} \Vert \\ & \to 0,\quad k\to \infty . \end{aligned}$$
(67)

From (65) and (67), we obtain

$$\begin{aligned} \Vert x_{n_{k}}-x_{n_{k}+1} \Vert \le \Vert x_{n_{k}}-w_{n_{k}} \Vert + \Vert w_{n_{k}}-x_{n_{k}+1} \Vert \to 0, \quad k\to \infty . \end{aligned}$$
(68)

To complete the proof, we need to show that \(w_{\omega}(x_{n})\subset VI(C,F)\). Since \(\{x_{n}\}\) is bounded, \(w_{\omega}(x_{n})\ne \emptyset \). Let \(x^{*}\in w_{\omega}(x_{n})\) be arbitrary. Then, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{k}}\rightharpoonup x^{*}\) as \(k\to \infty \). It follows from (67) that \(w_{n_{k}}\rightharpoonup x^{*}\) as \(k\to \infty \). Then, by Lemma 4.8 together with (66), we see that \(x^{*}\in VI(C,F)\). Hence, \(w_{\omega}(x_{n})\subset VI(C,F)\) since \(x^{*}\) was chosen arbitrarily.

Again, since \(\{x_{n_{k}}\}\) is bounded, there exists a subsequence \(\{x_{n_{k_{j}}}\}\) of \(\{x_{n_{k}}\}\) such that \(x_{n_{k_{j}}}\rightharpoonup z\) and

$$ \lim_{j\rightarrow \infty}\langle \hat{x},\hat{x}-x_{n_{k_{j}}} \rangle = \limsup_{k\rightarrow \infty}\langle \hat{x}, \hat{x}-x_{n_{k}} \rangle . $$
(69)

Since \(\hat{x}=P_{VI(C,F)}(0)\), then by the property of the projection map and (69), we obtain

$$ \limsup_{k\rightarrow \infty}\langle \hat{x}, \hat{x}-x_{n_{k}} \rangle = \lim_{j\rightarrow \infty}\langle \hat{x}, \hat{x}-x_{n_{k_{j}}} \rangle = \langle \hat{x}, \hat{x}-z \rangle \leq 0. $$
(70)

By combining (68) and (70), we obtain

$$\begin{aligned} \limsup_{k\rightarrow \infty}\langle \hat{x}, \hat{x}- x_{n_{k}+1} \rangle = \limsup_{k\rightarrow \infty}\langle \hat{x}, \hat{x}- x_{n_{k}} \rangle = \langle \hat{x}, \hat{x}- z \rangle \leq 0. \end{aligned}$$
(71)

Using (65), Remark 3.3 together with (71), we clearly see that \(\limsup_{k\rightarrow \infty}d_{n_{k}}\leq 0\). Then, by applying Lemma 2.9 to (61), we easily conclude that \(\lim_{n\rightarrow \infty}\|x_{n} - \hat{x}\|=0\) as required. This completes the proof. □

Remark 4.10

We note that our strong convergence analysis completely avoids the “two-cases” approach usually employed in proofs of strong convergence theorems (see [32, 46]); instead, we adopt a much simpler and more direct approach.

5 Numerical examples

Here, we perform some numerical experiments to illustrate the performance of our method, Algorithm 3.1 (Proposed Alg.), in comparison with Algorithm A.1 proposed by Ma and Wang (Ma and Wang Alg.), Algorithm A.2 proposed by He et al. (He et al. Alg.), Algorithm A.3 by He, Wu et al. (He, Wu et al. Alg.), and Algorithms A.4 and A.5 by Thong and Gibali (Thong and Gibali Alg.). Our experiments were carried out in MATLAB R2021b.

Our parameter choices are as follows: In Algorithm 3.1, we chose \(\tau =0.88\), \(\theta _{n}=(\frac{2}{3n+1})^{2}\), \(\alpha _{n}= \frac{2}{3n+1}\), \(\gamma _{1}=0.99\), \(\mu _{n}=\frac{30}{(3n+4)^{2}}\), \(\ell =0.25\). Also, we chose \(\gamma _{-1}=0.0017\), \(\xi =0.76\), \(\nu =0.87\) in Algorithm A.1; \(\chi =0.97\) in Algorithm A.2; \(\sigma =0.02\), \(\omega =0.05\) in Algorithm A.3; \(g=0.66\), \(\lambda =1.2\), \(\delta _{n}=\frac{2}{3n+1}\), \(\beta _{n}= \frac{1-\delta _{n}}{2}\) in Algorithm A.4; and \(f(x)=\frac{1}{4}x\) in Algorithm A.5.

Our numerical experiments are conducted using the following examples:

Example 5.1

Let \(F:\mathbb{R}^{2}\to \mathbb{R}^{2}\) be defined by \(F(x_{1},x_{2})=(6h(x_{1}),3x_{1}+x_{2})\) on the feasible set \(C:=C^{1}\cap C^{2}\subseteq \mathbb{R}^{2}\), where

$$\begin{aligned} &C^{1}:=\bigl\{ (x_{1},x_{2})\in \mathbb{R}^{2} : c_{1}(x_{1},x_{2}):=x_{1}^{2}+x_{2}^{2}-2 \le 0\bigr\} , \\ &C^{2}:=\bigl\{ (x_{1},x_{2})\in \mathbb{R}^{2} : c_{2}(x_{1},x_{2}):=x_{1}^{2}-x_{2} \le 0\bigr\} , \quad \text{and} \\ &h(t):= \textstyle\begin{cases} e(t-1)+e & \text{if } t>1, \\ e^{t} & \text{if } -1\le t\le 1, \\ e^{-1}(t+1)+e^{-1} & \text{if } t< -1. \end{cases}\displaystyle \end{aligned}$$

We see from Lemma 2.10 that \(VI(C,F)\) is nonempty; in particular, the solution set of this example is \(VI(C,F)=\{(-1,1)\}\). We note that F is monotone and Lipschitz continuous. The functions \(c'_{i}(\cdot )\), \(i=1,2\), are also Lipschitz continuous with constants \(L_{1}=L_{2}=2\), where \(L=\max \{L_{1},L_{2}\}\). Also, \(K=3\sqrt{e^{2}+1}\) and \(K'=6\sqrt{e^{2}+1}\); see [25]. For this example we choose \(\delta =0.068\) and set \(M=K\).
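The data of this example are easy to code. The sketch below (written in Python for illustration, though the actual experiments used MATLAB) implements \(h\), \(F\), and the two constraints, and checks that the claimed solution \((-1,1)\) is feasible and lies on the boundary of both sublevel sets:

```python
import math

def h(t):
    # Piecewise C^1 extension of exp beyond [-1, 1], as in Example 5.1.
    if t > 1:
        return math.e * (t - 1) + math.e
    if t < -1:
        return math.exp(-1) * (t + 1) + math.exp(-1)
    return math.exp(t)

def F(x):
    x1, x2 = x
    return (6 * h(x1), 3 * x1 + x2)

def c1(x):  # disk constraint: x1^2 + x2^2 - 2 <= 0
    return x[0] ** 2 + x[1] ** 2 - 2

def c2(x):  # parabolic constraint: x1^2 - x2 <= 0
    return x[0] ** 2 - x[1]

p = (-1.0, 1.0)          # claimed solution of VI(C, F)
print(c1(p), c2(p))      # 0.0 0.0: p is feasible and lies on bd(C)
```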

We test the algorithms for four different initial points as follows:

Case I: \(x_{0} = (5,5)\), \(x_{1} = (-2,-1)\);

Case II: \(x_{0} = (2, 6)\), \(x_{1} = (1,-2)\);

Case III: \(x_{0} = (3,6)\), \(x_{1}= (1,0)\);

Case IV: \(x_{0} = (5, 5)\), \(x_{1} = (-1, 1)\).

The stopping criterion used for this example is \(\Vert x_{n+1}-x_{n}\Vert < 10^{-3}\). We plot the graphs of errors against the number of iterations in each case. The numerical results are reported in Figs. 1–4 and Table 1.

Figure 1: Example 5.1, Case I

Figure 2: Example 5.1, Case II

Figure 3: Example 5.1, Case III

Figure 4: Example 5.1, Case IV

Table 1 Numerical results for Example 5.1
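Both examples rely on replacing the projection onto C by the closed-form projection onto the half-spaces \(C^{i}_{n}\) obtained by linearizing each \(c_{i}\) at \(w_{n}\) (cf. (41)). A minimal Python sketch of that projection follows; the function and variable names are our own, not taken from Algorithm 3.1:

```python
def project_halfspace(x, w, c_w, grad_c_w):
    """Project x onto the half-space {y : c(w) + <c'(w), y - w> <= 0},

    i.e. onto the outer linearization of the convex constraint c at w.
    Here c_w = c(w) and grad_c_w = c'(w); vectors are plain tuples.
    """
    violation = c_w + sum(g * (xi - wi) for g, xi, wi in zip(grad_c_w, x, w))
    if violation <= 0:
        return tuple(x)                      # x already satisfies the cut
    norm_sq = sum(g * g for g in grad_c_w)
    return tuple(xi - (violation / norm_sq) * g for xi, g in zip(x, grad_c_w))

# Example: linearize c1(x) = x1^2 + x2^2 - 2 at w = (2, 0), so c'(w) = (4, 0):
w, x = (2.0, 0.0), (3.0, 1.0)
y = project_halfspace(x, w, c_w=2.0, grad_c_w=(4.0, 0.0))
print(y)   # (1.5, 1.0): the cut c(w) + <c'(w), y - w> <= 0 holds with equality
```

Unlike the exact projection onto C, this formula needs only one gradient evaluation per constraint, which is what makes the outer approximation cheap per iteration.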

Next, we provide an example in an infinite-dimensional space to test our strong convergence result.

Example 5.2

Let \(F(x)=3x\), \(\forall x\in H\), and let \(C\subset H\) be the closed, convex, feasible set defined as follows:

$$\begin{aligned} &C:=\bigcap_{i=1}^{m} C^{i}:= \bigcap_{i=1}^{m}\bigl\{ x\in H : c_{i}(x):= \Vert x_{i} \Vert ^{2}-2\le 0\bigr\} ,\quad \text{for each } i=1,\ldots,m, \\ & \text{where } H=\bigl(\ell _{2}( \mathbb{R} ), \Vert \cdot \Vert \bigr), \Vert x \Vert =\Biggl( \sum _{k=1}^{\infty} \vert x_{k} \vert ^{2}\Biggr)^{\frac{1}{2}}, \langle x,y \rangle = \sum _{k=1}^{\infty}x_{k}y_{k}, \ \forall x,y\in \ell _{2}( \mathbb{R}), \\ & \text{and } \ell _{2} ( \mathbb{R} ):=\Biggl\{ x=(x_{1},x_{2},\ldots,x_{n},\ldots),x_{k} \in \mathbb{R} :\sum_{k=1}^{\infty} \vert x_{k} \vert ^{2}< \infty \Biggr\} . \end{aligned}$$

Also, note that F is monotone and 3-Lipschitz continuous, \(c'_{i}\) is Lipschitz continuous, and \(K=1\). We chose \(\delta =0.14\) in this example.
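Since \(F(x)=3x\) and every constraint value at the origin is \(-2<0\), the origin is feasible and \(\langle F0, z-0 \rangle =0\ge 0\) for all \(z\in C\), so \(\hat{x}=0\) is the minimum-norm solution the algorithm should approach. A finite truncation of the \(\ell _{2}\) setting (our own sketch; the bound on δ below is obtained by rearranging the limit condition preceding (50), with \(\ell =0.25\) as chosen above) confirms this:

```python
import math

# Truncation of the zero sequence in l2(R); F(x) = 3x from Example 5.2.
x0 = [0.0] * 8
F = lambda x: [3.0 * t for t in x]

# The origin is feasible (each constraint value at 0 is -2 < 0) and
# F(0) = 0, so <F(0), z - 0> = 0 >= 0 for all z in C.
c_at_zero = sum(t * t for t in x0) - 2
residual = math.sqrt(sum(t * t for t in F(x0)))

# delta = 0.14 satisfies delta < (2 - ell) / (2K + 2 - ell) with K = 1:
K, ell, delta = 1.0, 0.25, 0.14
admissible = delta < (2 - ell) / (2 * K + 2 - ell)
print(c_at_zero, residual, admissible)   # -2.0 0.0 True
```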

We chose different initial values as follows:

Case I: \(x_{0} = (-4, 1, -\frac{1}{4}, \ldots )\), \(x_{1} = (\frac{1}{2}, \frac{1}{4}, \frac{1}{8},\ldots )\);

Case II: \(x_{0} = (3, 1, \frac{1}{3},\ldots )\), \(x_{1} = (1, 0.1, 0.01, \ldots )\);

Case III: \(x_{0} = (5, 1, \frac{1}{5}, \ldots )\), \(x_{1} = (1, -0.1, 0.01, \ldots )\);

Case IV: \(x_{0} = (4, 1, \frac{1}{4},\ldots )\), \(x_{1} = (-\frac{1}{2}, \frac{1}{4}, -\frac{1}{8}, \ldots )\).

The stopping criterion used for this example is \(\Vert x_{n+1}-x_{n}\Vert < 10^{-3}\). We plot the graphs of errors against the number of iterations in each case. The numerical results are reported in Figs. 5–8 and Table 2.

Figure 5: Example 5.2, Case I

Figure 6: Example 5.2, Case II

Figure 7: Example 5.2, Case III

Figure 8: Example 5.2, Case IV

Table 2 Numerical results for Example 5.2

6 Conclusion

In this paper, we studied the classical variational inequality problem defined over a finite intersection of sublevel sets of convex functions. We proposed a new iterative method, the “Totally relaxed inertial self-adaptive projection and contraction method” (TRISPCM), in which the projection onto the feasible set is replaced with a projection onto a half-space. Our method does not require any line-search procedure; instead, it uses a more efficient self-adaptive step-size technique. We also employed relaxation and inertial techniques to speed up the convergence of the proposed method. Moreover, under some mild conditions, we proved that the sequence generated by our algorithm converges strongly to the minimum-norm solution of the problem. Lastly, we conducted numerical experiments to showcase the computational advantage of our method over existing methods in the literature.

Availability of data and materials

Not applicable.

References

  1. Alakoya, T.O., Mewomo, O.T.: Viscosity S-iteration method with inertial technique and self-adaptive step size for split variational inclusion, equilibrium and fixed point problems. Comput. Appl. Math. 41(1), Paper No. 39, 31 pp. (2022).

  2. Alakoya, T.O., Mewomo, O.T.: S-iteration inertial subgradient extragradient method for variational inequality and fixed point problems. Optimization (2023). https://doi.org/10.1080/02331934.2023.2168482

  3. Alakoya, T.O., Uzor, V.A., Mewomo, O.T.: A new projection and contraction method for solving split monotone variational inclusion, pseudomonotone variational inequality, and common fixed point problems. Comput. Appl. Math. 42(1), Paper No. 3, 33 pp. (2023)

  4. Alakoya, T.O., Uzor, V.A., Mewomo, O.T., Yao, J.-C.: On a system of monotone variational inclusion problems with fixed-point constraint. J. Inequal. Appl. 2022, 47 (2022)

  5. Antipin, A.S.: On a method for convex programs using a symmetrical modification of the Lagrange function. Ekon. Math. Meth. 12(6), 1164–1173 (1976)

  6. Bauschke, H.H., Combettes, P.L.: A weak-to-strong convergence principle for Féjer-monotone methods in Hilbert spaces. Math. Oper. Res. 26(2), 248–264 (2001)

  7. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. Springer, New York (2017)

  8. Cao, Y., Guo, K.: On the convergence of inertial two-subgradient extragradient method for variational inequality problems. Optimization 69(6), 1237–1253 (2020)

  9. Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 148, 318–335 (2011)

  10. Cholamjiak, P., Thong, D.V., Cho, Y.J.: A novel inertial projection and contraction method for solving pseudomonotone variational inequality problems. Acta Appl. Math. 169, 217–245 (2020)

  11. Denisov, S.V., Semenov, V.V., Chabak, L.M.: Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators. Cybern. Syst. Anal. 51, 757–765 (2015)

  12. Dong, Q.L., Cho, Y.J., Zhong, L.L., et al.: Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 70, 687–704 (2018)

  13. Dong, Q.L., Gibali, A., Jiang, D., Ke, S.H.: Convergence of projection and contraction algorithms with outer perturbations and their applications to sparse signals recovery. J. Fixed Point Theory Appl. 20, 16 (2018)

  14. Elliot, C.M.: Variational and quasivariational inequalities: applications to free boundary problems (Claudio Baiocchi and António Capelo). SIAM Rev. 29(2), 314–315 (1987)

  15. Fichera, G.: Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei, Rend. Cl. Sci. Fis. Mat. Nat. 34(8), 138–142 (1963)

  16. Fukushima, M.: A relaxed projection method for variational inequalities. Math. Program. 35, 58–70 (1986)

  17. Gibali, A., Jolaoso, L.O., Mewomo, O.T., Taiwo, A.: Fast and simple Bregman projection methods for solving variational inequalities and related problems in Banach spaces. Results Math. 75(4), Paper No. 179, 36 pp. (2020)

  18. Gibali, A., Reich, S., Zalas, R.: Outer approximation methods for solving variational inequalities in Hilbert space. Optimization 66, 417–437 (2017)

  19. Godwin, E.C., Alakoya, T.O., Mewomo, O.T., Yao, J.-C.: Relaxed inertial Tseng extragradient method for variational inequality and fixed point problems. Appl. Anal. (2022). https://doi.org/10.1080/00036811.2022.2107913

  20. Godwin, E.C., Izuchukwu, C., Mewomo, O.T.: An inertial extrapolation method for solving generalized split feasibility problems in real Hilbert spaces. Boll. Unione Mat. Ital. 14(2), 379–401 (2021)

  21. Godwin, E.C., Izuchukwu, C., Mewomo, O.T.: Image restoration using a modified relaxed inertial method for generalized split feasibility problems. Math. Methods Appl. Sci. 46(5), 5521–5544 (2023)

  22. Godwin, E.C., Mewomo, O.T., Alakoya, O.T.: A strongly convergent algorithm for solving multiple set split equality equilibrium and fixed point problems in Banach spaces. Proc. Edinb. Math. Soc. (2) 66, 475–515 (2023)

  23. He, S., Dong, Q.L., Tian, H.: Relaxed projection and contraction methods for solving Lipschitz continuous monotone variational inequalities. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 113, 2773–2791 (2019)

  24. He, S., Wu, T.: A modified subgradient extragradient method for solving monotone variational inequalities. J. Inequal. Appl. 2017, 89 (2017)

  25. He, S., Wu, T., Gibali, A., Dong, Q.L.: Totally relaxed self-adaptive algorithm for solving variational inequalities over the intersection of sub-level sets. Optimization 67(9), 1487–1504 (2018)

  26. He, S., Xu, H.-K.: Uniqueness of supporting hyperplanes and an alternative to solutions of variational inequalities. J. Glob. Optim. 57, 1375–1384 (2013)

  27. Iiduka, H.: Fixed point optimization algorithm and its application to network bandwidth allocation. J. Comput. Appl. Math. 236(7), 1733–1742 (2012)

  28. Iutzeler, F., Hendrickx, J.M.: A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optim. Methods Softw. 34(2), 383–405 (2019)

  29. Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York (1980)

  30. Kopecká, E., Reich, S.: A note on alternating projections in Hilbert space. J. Fixed Point Theory Appl. 12, 41–47 (2012)

  31. Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Matecon 12, 747–756 (1976)

  32. Kraikaew, R., Saejung, S.: Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 163, 399–412 (2014)

  33. Liu, Z., Zeng, S., Motreanu, D.: Evolutionary problems driven by variational inequalities. J. Differ. Equ. 260(9), 6787–6799 (2016)

  34. Ma, B., Wang, W.: Self-adaptive subgradient extragradient-type methods for solving variational inequalities. J. Inequal. Appl. 2022, 54 (2022)

  35. Nguyen, H.Q., Xu, H.K.: The supporting hyperplane and an alternative to solutions of variational inequalities. J. Nonlinear Convex Anal. 16(11), 2323–2331 (2015)

  36. Ogbuisi, F.U., Mewomo, O.T.: Convergence analysis of common solution of certain nonlinear problems. Fixed Point Theory 19(1), 335–358 (2018)

  37. Ogwo, G.N., Alakoya, T.O., Mewomo, O.T.: Iterative algorithm with self-adaptive step size for approximating the common solution of variational inequality and fixed point problems. Optimization 72(3), 677–711 (2023)

  38. Ogwo, G.N., Izuchukwu, C., Mewomo, O.T.: Relaxed inertial methods for solving split variational inequality problems without product space formulation. Acta Math. Sci. Ser. B Engl. Ed. 42(5), 1701–1733 (2022)

  39. Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 75, 742–750 (2012)

  40. Stampacchia, G.: Variational inequalities. In: Theory and Applications of Monotone Operators, Proceedings of the NATO Advanced Study Institute, Venice, Italy (Edizioni Odersi, Gubbio, Italy), pp. 102–192 (1968)

  41. Sun, D.: A projection and contraction method for the nonlinear complementarity problems and its extensions. Math. Numer. Sin. 16, 183–194 (1994)

  42. Taiwo, A., Jolaoso, L.O., Mewomo, O.T.: Inertial-type algorithm for solving split common fixed point problems in Banach spaces. J. Sci. Comput. 86, 12, 30 pp. (2021)

  43. Taiwo, A., Jolaoso, L.O., Mewomo, O.T.: Viscosity approximation method for solving the multiple-set split equality common fixed point problems for quasi-pseudocontractive mappings in Hilbert spaces. J. Ind. Manag. Optim. 17(5), 2733–2759 (2021)

  44. Taiwo, A., Owolabi, A.O.-E., Jolaoso, L.O., Mewomo, O.T., Gibali, A.: A new approximation scheme for solving various split inverse problems. Afr. Math. 32(3–4), 369–401 (2021)

  45. Tan, K.K., Xu, H.K.: Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 178, 301–308 (1993)

  46. Thong, D.V., Gibali, A.: Two strong convergence subgradient extragradient methods for solving variational inequalities in Hilbert spaces. Jpn. J. Ind. Appl. Math. 36, 299–321 (2019)

  47. Thong, D.V., Hieu, D.V.: Modified subgradient extragradient algorithms for variational inequality problems and fixed point problems. Optimization 67(1), 83–102 (2018)

  48. Thong, D.V., Shehu, Y., Iyiola, O.S.: A new iterative method for solving pseudomonotone variational inequalities with non-Lipschitz operators. Comput. Appl. Math. 39, 108 (2020)

  49. Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446 (2000)

  50. Uzor, V.A., Alakoya, T.O., Mewomo, O.T.: Strong convergence of a self-adaptive inertial Tseng’s extragradient method for pseudomonotone variational inequalities and fixed point problems. Open Math. 20(1), 234–257 (2022)

  51. Uzor, V.A., Alakoya, T.O., Mewomo, O.T.: On split monotone variational inclusion problem with multiple output sets with fixed point constraints. Comput. Methods Appl. Math. (2022). https://doi.org/10.1515/cmam-2022-0199

  52. Wickramasinghe, M.U., Mewomo, O.T., Alakoya, T.O., Iyiola, S.O.: Mann-type approximation scheme for solving a new class of split inverse problems in Hilbert spaces. Appl. Anal. (2023). https://doi.org/10.1080/00036811.2023.2233977

Acknowledgements

The authors sincerely thank the anonymous referees for their careful reading, constructive comments, and useful suggestions that improved the manuscript. The first author acknowledges with thanks the International Mathematical Union Breakout Graduate Fellowship (IMU-BGF) Award for his doctoral study. The second author is supported by the National Research Foundation (NRF) of South Africa Incentive Funding for Rated Researchers (Grant Number 119903) and DSI-NRF Centre of Excellence in Mathematical and Statistical Sciences (CoE-MaSS), South Africa (Grant Number 2022-087-OPA). The third author is wholly supported by the University of KwaZulu-Natal, Durban, South Africa Postdoctoral Fellowship. He is grateful for the funding and financial support. The opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the CoE-MaSS and NRF.

Funding

The first author is funded by International Mathematical Union Breakout Graduate Fellowship (IMU-BGF). The second author is supported by the National Research Foundation (NRF) of South Africa Incentive Funding for Rated Researchers (Grant Number 119903) and DSI-NRF Centre of Excellence in Mathematical and Statistical Sciences (CoE-MaSS), South Africa (Grant Number 2022-087-OPA). The third author is funded by the University of KwaZulu-Natal, Durban, South Africa Postdoctoral Fellowship.

Author information

Contributions

Conceptualization of the article by OTM, TOA, and AG; methodology by OTM and TOA; formal analysis, investigation, and writing (original draft preparation) by VAU, OTM, and TOA; software and validation by OTM; writing (review and editing) by VAU, OTM, TOA, and AG; project administration and supervision by OTM and AG. All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

Corresponding author

Correspondence to O. T. Mewomo.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Algorithm A.1

(Algorithm 2 in [34])

Step 0.:

Choose \(x_{-1},x_{0}, y_{-1}\in H\); \(\xi ,\nu \in [a,b]\subset (0,1)\), \(\gamma _{-1}\in (0,\frac{1-\xi ^{2}}{2\beta _{p}L_{1}} ]\). Set \(n=0\).

Step 1.:

Given \(\gamma _{n-1}\), \(y_{n-1}\), and \(x_{n-1}\), let \(p_{n-1}=x_{n-1}-y_{n-1}\) and compute

$$\begin{aligned} \gamma _{n}:= \textstyle\begin{cases} \gamma _{n-1}, & \gamma _{n-1} \Vert Fx_{n-1}-Fy_{n-1} \Vert \le \xi \Vert p_{n-1} \Vert , \\ \gamma _{n-1}\nu , & {\text{otherwise}.} \end{cases}\displaystyle \end{aligned}$$
Step 2.:

Compute

$$ y_{n}=P_{C_{n}}(x_{n}-\gamma _{n}Fx_{n}). $$
Step 3.:

Compute

$$ x_{n+1}=P_{C_{n}}\bigl(y_{n}-\gamma _{n}(Fy_{n}-Fx_{n})\bigr), $$

where,

$$ C_{n}:=\bigl\{ x\in H: c(x_{n})+\bigl\langle c'(x_{n}),x-x_{n} \bigr\rangle \le 0 \bigr\} . $$

Set \(n:=n+1\) and return to Step 1. Here, \(F:H\to H\) is monotone and \(L_{2}\)-Lipschitz continuous, \(c'(\cdot )\) is Lipschitz continuous, and \(\beta _{p}\) is the parameter in Lemma 2.10.
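The self-adaptive step-size rule in Step 1 and the half-space projection used in Steps 2–3 both admit closed formulas. The following is a minimal NumPy sketch of these two ingredients; the function names and default parameters are our own illustration, not notation from [34]:

```python
import numpy as np

def project_halfspace(x, a, b):
    """Closed-formula projection of x onto the half-space {w : <a, w> <= b}."""
    viol = float(np.dot(a, x)) - b
    if viol <= 0:
        return x  # x already lies in the half-space
    return x - (viol / float(np.dot(a, a))) * a

def update_gamma(gamma_prev, F, x_prev, y_prev, xi=0.5, nu=0.5):
    """Self-adaptive rule: keep gamma_{n-1} while the test holds, else shrink by nu."""
    p_prev = x_prev - y_prev
    if gamma_prev * np.linalg.norm(F(x_prev) - F(y_prev)) <= xi * np.linalg.norm(p_prev):
        return gamma_prev
    return gamma_prev * nu
```

For instance, with \(F=\operatorname{id}\) (which is 1-Lipschitz) the test \(\gamma \|Fx-Fy\|\le \xi \|x-y\|\) holds exactly when \(\gamma \le \xi \), so the step size is kept or multiplied by ν accordingly.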

Algorithm A.2

(Algorithm 1 in [23])

$$ \textstyle\begin{cases} x_{0}\in H \text{ chosen arbitrarily}, \\ y_{n}=P_{C_{n}}(x_{n}-\gamma _{n} Fx_{n}), \\ \text{If } x_{n}=y_{n}, \text{ stop, } x_{n} \text{ is a solution. Otherwise,} \\ d(x_{n},y_{n}) =(x_{n}-y_{n})-\gamma _{n}(Fx_{n}-Fy_{n}), \\ x_{n+1}=x_{n}-\ell \psi _{n}d(x_{n},y_{n}), \\ \text{or}, \\ x_{n+1}^{*}=P_{T_{n}}[x_{n}-\ell \psi _{n}\gamma _{n}Fy_{n}], \end{cases} $$

where \(C_{n}\) and \(T_{n}\) are half-spaces given by \(C_{n}:=\{w\in H : c(x_{n})+\langle c'(x_{n}),w-x_{n} \rangle \le 0 \}\) and \(T_{n}:=\{w\in H : \langle x_{n}-y_{n},\gamma _{n}Fy_{n}-d(x_{n},y_{n}) \rangle \ge 0 \}\), respectively, and

$$\begin{aligned} &\psi _{n}:= \frac{\langle x_{n}-y_{n},d(x_{n},y_{n}) \rangle -M\gamma _{n} \Vert x_{n}-y_{n} \Vert ^{2}}{ \Vert d(x_{n},y_{n}) \Vert ^{2}}, \\ &\gamma _{n}=\frac{\chi}{M+\sqrt{M^{2}+L_{n}^{2}}}, \end{aligned}$$

for some constant \(M>0\) and \(L_{n}\) satisfying certain conditions.
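Once \(d(x_{n},y_{n})\) and \(\psi _{n}\) are formed, the contraction update \(x_{n+1}=x_{n}-\ell \psi _{n}d(x_{n},y_{n})\) is a one-line computation. A hedged NumPy sketch of this step (our own, with \(M\) and \(\ell \) left as free parameters):

```python
import numpy as np

def pc_update(x, y, gamma, F, M=0.0, ell=1.0):
    """One projection-contraction step: form d(x, y) and the ratio psi, then relax x."""
    d = (x - y) - gamma * (F(x) - F(y))  # d(x_n, y_n)
    psi = (np.dot(x - y, d) - M * gamma * np.dot(x - y, x - y)) / np.dot(d, d)
    return x - ell * psi * d
```

As a sanity check, for \(F=\operatorname{id}\) and \(M=0\) one gets \(d=(1-\gamma )(x-y)\) and \(\psi =1/(1-\gamma )\), so the choice \(\ell =1\) returns \(x_{n+1}=y_{n}\) exactly.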

Algorithm A.3

(Algorithm 3.1 in [25])

Step 0.:

Set \(L=\max \{L_{i} : i\in I\}\) and \(K'=KL\), where K is defined in Lemma 2.10 and \(L_{i}\) \((i\in I)\) are the Lipschitz constants. Choose \(x_{0}\in H\) and \(\xi \in (0,1)\) arbitrarily, and set \(n=0\).

Step 1.:

Given the current iterate \(x_{n}\), construct the family of half-spaces

$$\begin{aligned} & C_{n}^{i}= \bigl\{ w\in H : c_{i}(x_{n})+ \bigl\langle c'_{i}(x_{n}), w-x_{n} \bigr\rangle \le 0\bigr\} ,\quad i\in I. \end{aligned}$$

Set,

$$\begin{aligned} C_{n}:=\bigcap_{i\in I}C_{n}^{i} \end{aligned}$$

and compute:

$$\begin{aligned} &y_{n}:=P_{C_{n}}(x_{n}-\gamma _{n}Fx_{n}); \\ &\text{where, } \gamma _{n}=\sigma \omega ^{g_{n}}, \sigma >0, \omega \in (0,1), \\ &\text{and } g_{n} \text{ is the smallest integer, such that} \\ &\gamma _{n}^{2} \Vert Fx_{n}-Fy_{n} \Vert ^{2}+K'\gamma _{n} \Vert x_{n}-y_{n} \Vert ^{2} \le \xi \Vert x_{n}-y_{n} \Vert ^{2}. \end{aligned}$$
Step 2.:

If \(x_{n}=y_{n}\), then stop: \(x_{n}\in SOL(C,F)\). Otherwise, calculate the next iterate by:

$$\begin{aligned} & x_{n+1}=P_{C_{n}}(x_{n}-\gamma _{n}Fy_{n}), \\ & \text{or by,} \\ & x_{n+1}=P_{T_{n}}(x_{n}-\gamma _{n}Fy_{n}), \\ &\text{where } T_{n}=\bigl\{ w\in H : \langle x_{n}-\gamma _{n}Fx_{n}-y_{n},w-y_{n} \rangle \le 0\bigr\} . \end{aligned}$$

Set \(n:= n +1\) and return to Step 1.
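The Armijo-type search for \(\gamma _{n}=\sigma \omega ^{g_{n}}\) in Step 1 tries successively smaller powers of ω until the stated inequality holds. A sketch under illustrative default parameters of our own choosing, with `proj` standing in for \(P_{C_{n}}\):

```python
import numpy as np

def armijo_gamma(x, F, proj, sigma=1.0, omega=0.5, xi=0.9, K_prime=0.1, max_tries=50):
    """Return (gamma_n, y_n) with gamma_n = sigma * omega**g for the smallest g
    satisfying gamma^2 ||Fx - Fy||^2 + K' * gamma * ||x - y||^2 <= xi * ||x - y||^2."""
    gamma = sigma
    for _ in range(max_tries):
        y = proj(x - gamma * F(x))
        lhs = (gamma ** 2) * np.linalg.norm(F(x) - F(y)) ** 2 \
              + K_prime * gamma * np.linalg.norm(x - y) ** 2
        if lhs <= xi * np.linalg.norm(x - y) ** 2:
            return gamma, y
        gamma *= omega  # increase g_n by one
    return gamma, y
```

For \(F=\operatorname{id}\) and \(C_{n}=H\), the test reduces to \(\gamma ^{2}+K'\gamma \le \xi \), so the loop stops at the first power of ω below the positive root of this quadratic.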

Algorithm A.4

(Algorithm 3.1 in [46])

Step 0.:

Given \(\lambda >0\), \(g\in (0,1)\), \(\nu \in (0,1)\), \(\ell \in (0,2)\). Let \(x_{0}\in H\) be chosen arbitrarily.

Given the current iterate \(x_{n}\), calculate \(x_{n+1}\) as follows:

Step 1.:

Compute:

$$ y_{n}=P_{C}(x_{n}-\gamma _{n}Fx_{n}), $$

where \(\gamma _{n}\) is chosen to be the largest \(\eta _{n}\in \{\lambda ,\lambda g, \lambda g^{2},\ldots \}\) satisfying

$$ \eta _{n} \Vert Fx_{n}-Fy_{n} \Vert \le \nu \Vert x_{n}-y_{n} \Vert . $$

If \(x_{n}=y_{n}\), then stop: \(x_{n}\in SOL(C,F)\). Otherwise,

Step 2.:

Compute:

$$\begin{aligned} & z_{n}=P_{T_{n}}\bigl(x_{n}-\ell \gamma _{n}F(y_{n})\bigr), \\ & \text{where } T_{n}:=\bigl\{ x\in H : \langle x_{n}- \gamma _{n}Fx_{n}-y_{n},x-y_{n} \rangle \le 0 \bigr\} , \text{ and} \\ & d_{n}:=x_{n}-y_{n}-\gamma _{n}(Fx_{n}-Fy_{n}). \end{aligned}$$
Step 3.:

Compute:

$$ x_{n+1}=(1-\delta _{n}-\beta _{n})x_{n}+ \beta _{n}z_{n}. $$

Set \(n:= n +1\) and return to Step 1.

Algorithm A.5

(Algorithm 3.2 in [46])

Step 0.:

Given \(\lambda >0\), \(g\in (0,1)\), \(\nu \in (0,1)\), \(\ell \in (0,2)\). Let \(x_{0}\in H\) be chosen arbitrarily.

Given the current iterate \(x_{n}\), calculate \(x_{n+1}\) as follows:

Step 1.:

Compute:

$$ y_{n}=P_{C}(x_{n}-\gamma _{n}Fx_{n}), $$

where \(\gamma _{n}\) is chosen to be the largest \(\eta _{n}\in \{\lambda ,\lambda g, \lambda g^{2},\ldots \}\) satisfying

$$ \eta _{n} \Vert Fx_{n}-Fy_{n} \Vert \le \nu \Vert x_{n}-y_{n} \Vert . $$

If \(x_{n}=y_{n}\), then stop: \(x_{n}\in SOL(C,F)\). Otherwise,

Step 2.:

Compute:

$$\begin{aligned} & z_{n}=P_{T_{n}}(x_{n}-\ell \gamma _{n}Fy_{n}), \\ & \text{where } T_{n}:=\bigl\{ x\in H : \langle x_{n}- \gamma _{n}Fx_{n}-y_{n},x-y_{n} \rangle \le 0 \bigr\} , \text{ and} \\ & d_{n}:=x_{n}-y_{n}-\gamma _{n}(Fx_{n}-Fy_{n}). \end{aligned}$$
Step 3.:

Compute:

$$ x_{n+1}=\alpha _{n}f(x_{n})+(1-\alpha _{n})z_{n}. $$

Set \(n:= n +1\) and return to Step 1.

Here, \(f:H\to H\) is a contraction with constant \(\phi \in (0,1)\).
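A full iteration of this viscosity scheme combines the projection onto \(T_{n}\) (a half-space with a closed-formula projection) with the contraction mixing in Step 3. The following is our own sketch, with \(\gamma _{n}\) supplied directly rather than produced by the line search:

```python
import numpy as np

def halfspace_proj(x, a, b):
    """Projection onto {w : <a, w> <= b}; if a = 0 the set is the whole space."""
    viol = float(np.dot(a, x)) - b
    if viol <= 0:
        return x
    return x - (viol / float(np.dot(a, a))) * a

def viscosity_step(x, F, proj_C, f, gamma, ell=1.0, alpha=0.2):
    """y = P_C(x - gamma*Fx); z = P_{T_n}(x - ell*gamma*Fy); mix z with the contraction f."""
    y = proj_C(x - gamma * F(x))
    a = x - gamma * F(x) - y    # normal vector of T_n
    b = float(np.dot(a, y))     # T_n = {w : <a, w - y> <= 0}
    z = halfspace_proj(x - ell * gamma * F(y), a, b)
    return alpha * f(x) + (1 - alpha) * z
```

In a driver loop one would also let `alpha` decay to zero along the iterations, as the strong convergence analysis for viscosity methods requires.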

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Uzor, V.A., Mewomo, O.T., Alakoya, T.O. et al. Outer approximated projection and contraction method for solving variational inequalities. J Inequal Appl 2023, 141 (2023). https://doi.org/10.1186/s13660-023-03043-8

