
Improved inertial projection and contraction method for solving pseudomonotone variational inequality problems

Abstract

The objective of this article is to solve pseudomonotone variational inequality problems in a real Hilbert space. We introduce an inertial algorithm with a new self-adaptive step size rule, based on the projection and contraction method. The proposed algorithm requires only one projection step per iteration, and strong convergence of the iterative sequence is obtained under appropriate conditions. The main advantage of the algorithm is that its convergence is established without prior knowledge of the Lipschitz constant of the cost operator. Numerical experiments are presented to support the analysis and to provide comparisons with related algorithms.

1 Introduction

Let H be a real Hilbert space with the scalar product \(\langle \cdot ,\cdot \rangle \) and the induced norm \(\|\cdot \|\). Let Ω be a nonempty, closed, and convex subset of H and \(A:H\rightarrow H\) be a nonlinear operator. Let \(\mathbb{N}\) and \(\mathbb{R}\) be the sets of positive integers and real numbers, respectively.

The variational inequality problem (VIP) for A on Ω is to find a point \(x^{*}\in \Omega \) such that

$$ \bigl\langle Ax^{*},x-x^{*}\bigr\rangle \geq 0,\quad \forall x\in \Omega . $$
(1.1)

Problem (1.1) is an important problem in nonlinear analysis, with applications arising in diverse areas such as signal processing, transportation, machine learning, and medical imaging; see, e.g., [1–4]. From now on, the set of solutions of the variational inequality problem is denoted by \(VI(\Omega ,A)\).

Numerous methods have been developed in the literature for solving the variational inequality problem. As is well known, regularization methods and projection methods can be used to solve the problem under suitable conditions. In what follows, we focus on projection methods. In particular, a point \(x^{*}\in \Omega \) is a solution of \(VI(\Omega ,A)\) if and only if it is a solution of the following fixed point problem:

$$ x^{*}=P_{\Omega }(I-\lambda A)x^{*}, $$
(1.2)

where \(P_{\Omega }:H\rightarrow \Omega \) is called the metric projection and \(\lambda >0\). Thus, we consider the sequence \(\{x_{n}\}\) generated by the following iteration formula:

$$ x_{n+1}=P_{\Omega }(I-\lambda A)x_{n}, $$
(1.3)

where the operator \(A:H\rightarrow H\) is η-strongly monotone and L-Lipschitz continuous, and \(\lambda \in (0,2\eta /L^{2})\). Under these conditions, \(VI(\Omega ,A)\) has a unique solution, and the iteration (1.3) converges strongly to it. Besides, if A is inverse strongly monotone, weak convergence holds under certain conditions; see, e.g., [5].
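To make the scheme concrete, the following is a minimal Python/NumPy sketch of iteration (1.3); it is our own illustration, not code from the paper. The operator \(A(x)=x\) and the box \(\Omega =[-2,5]\) are hypothetical choices with \(\eta =L=1\), so any \(\lambda \in (0,2)\) is admissible.

```python
import numpy as np

def projected_iteration(A, proj, x0, lam, n_iters=200):
    """Iteration (1.3): x_{n+1} = P_Omega(x_n - lam * A(x_n))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = proj(x - lam * A(x))
    return x

# A(x) = x is 1-strongly monotone and 1-Lipschitz; proj is the box projection.
proj = lambda x: np.clip(x, -2.0, 5.0)
print(projected_iteration(lambda x: x, proj, np.array([3.0]), lam=0.5))  # -> ~0
```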

To avoid this strong assumption, Korpelevich [6] proposed the extragradient method in 1976:

$$ \textstyle\begin{cases} y_{n}=P_{\Omega }(x_{n}-\lambda Ax_{n}), \\ x_{n+1}=P_{\Omega }(x_{n}-\lambda Ay_{n}),\quad n\geq 1, \end{cases} $$
(1.4)

where \(\lambda \in (0,1/L)\) and the operator A is monotone and L-Lipschitz continuous in a Hilbert space.
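For comparison, here is a minimal sketch of the extragradient iteration (1.4) in the same style; the projection proj and the fixed step \(\lambda <1/L\) are assumptions supplied by the caller.

```python
import numpy as np

def extragradient(A, proj, x0, lam, n_iters=500):
    """Korpelevich's method (1.4): two projections per iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        y = proj(x - lam * A(x))  # first projection
        x = proj(x - lam * A(y))  # second projection
    return x
```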

Observe that the assumptions of the extragradient method are weaker, but the algorithm still requires two projections per iteration from H onto the closed convex set Ω. If the feasible set Ω is a general closed convex set with a complicated structure, computing these projections may require a major expenditure of time and effort, which can seriously affect the efficiency of the extragradient method.

One method that mitigates this obstacle is the modified forward-backward splitting method introduced by Tseng [7] in 2000, which requires only one projection per iteration onto the feasible set Ω. Given the current iterate \(x_{n}\), calculate the next iterate \(x_{n+1}\) via

$$ \textstyle\begin{cases} y_{n}=P_{\Omega }(x_{n}-\lambda Ax_{n}), \\ x_{n+1}=y_{n}-\lambda Ay_{n}+\lambda Ax_{n},\quad n\geq 1, \end{cases} $$
(1.5)

where \(\lambda \in (0,1/L)\) and the operator A is monotone and L-Lipschitz continuous. The weak convergence of the iterative sequence \(\{x_{n}\}\) generated by this algorithm was proved.
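A sketch of Tseng's iteration (1.5), highlighting that the second projection of the extragradient method is replaced by an explicit correction step (again an illustration of ours, under the same assumptions on proj and \(\lambda \)):

```python
import numpy as np

def tseng(A, proj, x0, lam, n_iters=500):
    """Tseng's method (1.5): only one projection per iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        y = proj(x - lam * A(x))
        x = y - lam * (A(y) - A(x))  # x_{n+1} = y_n - lam*(A(y_n) - A(x_n))
    return x
```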

A second method that overcomes this hindrance is the projection and contraction method (PCM) of He [8] and Sun [9]:

$$ \textstyle\begin{cases} y_{n}=P_{\Omega }(x_{n}-\lambda _{n}Ax_{n}), \\ d(x_{n},y_{n})=x_{n}-y_{n}-\lambda _{n}(Ax_{n}-Ay_{n}), \\ x_{n+1}=x_{n}-\alpha \eta _{n}d(x_{n},y_{n}),\quad n\geq 1, \end{cases} $$
(1.6)

where \(\alpha \in (0,2)\), \(\lambda _{n}\in (0,1/L)\) (or \(\lambda _{n}\) is updated by some self-adaptive rule),

$$ \eta _{n}=\textstyle\begin{cases} \varphi (x_{n},y_{n})/ \Vert d(x_{n},y_{n}) \Vert ^{2},& \text{if } d(x_{n},y_{n}) \neq 0, \\ 0, & \text{if } d(x_{n},y_{n})=0, \end{cases} $$

and

$$ \varphi (x_{n},y_{n})=\bigl\langle x_{n}-y_{n},d(x_{n},y_{n}) \bigr\rangle . $$

The PCM also requires only one projection onto the feasible set Ω per iteration, and the sequence \(\{x_{n}\}\) generated by the PCM converges weakly to a point in \(VI(\Omega ,A)\) under suitable conditions.
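A sketch of the PCM update (1.6) with a fixed step \(\lambda <1/L\); the relaxation parameter \(\alpha \in (0,2)\) and the safeguard for \(d(x_{n},y_{n})=0\) follow the definitions above.

```python
import numpy as np

def pcm(A, proj, x0, lam, alpha=1.8, n_iters=500):
    """Projection and contraction method (1.6)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        y = proj(x - lam * A(x))
        d = x - y - lam * (A(x) - A(y))        # d(x_n, y_n)
        nd2 = float(np.dot(d, d))
        eta = float(np.dot(x - y, d)) / nd2 if nd2 > 0 else 0.0
        x = x - alpha * eta * d                # contraction step
    return x
```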

Next, let us mention an algorithm of inertial form, which is based upon a time discretization of a second-order dissipative dynamical system. In [10], Alvarez and Attouch introduced the inertial proximal method (IPM) to find a zero of a maximal monotone operator. The method is of the form:

Find \(x_{n+1}\in H\) such that \(0\in \lambda _{n}A(x_{n+1})+x_{n+1}-x_{n}-\theta _{n}(x_{n}-x_{n-1})\), where \(x_{n-1}, x_{n}\in H\), \(\theta _{n}\in [0,1)\), and \(\lambda _{n}>0\). It can also be expressed in the following form:

$$ x_{n+1}=J_{\lambda _{n}}^{A}\bigl(x_{n}+\theta _{n}(x_{n}-x_{n-1})\bigr), $$

where \(J_{\lambda _{n}}^{A}\) is the resolvent of A with parameter \(\lambda _{n}\) and the inertia is produced by the term \(\theta _{n}(x_{n}-x_{n-1})\). It is worth emphasizing the advantage of the inertial method: the inertial term can speed up the convergence of the original algorithm.
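As a toy illustration of the inertial effect (our own sketch, not from [10]), take the maximal monotone operator \(A(x)=x\), whose resolvent is \(J_{\lambda }^{A}(x)=x/(1+\lambda )\); the extrapolated point \(w_{n}=x_{n}+\theta _{n}(x_{n}-x_{n-1})\) is then fed to the resolvent.

```python
import numpy as np

def inertial_proximal(x0, lam=1.0, theta=0.3, n_iters=50):
    """IPM for the toy operator A(x) = x with resolvent x / (1 + lam)."""
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        w = x + theta * (x - x_prev)    # inertial term theta*(x_n - x_{n-1})
        x_prev, x = x, w / (1.0 + lam)  # apply the resolvent J_lam^A to w
    return x

print(inertial_proximal(np.array([5.0])))  # tends to 0, the unique zero of A
```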

In 2017, Thong and Hieu [11] proposed a self-adaptive algorithm based on Tseng’s extragradient method [7]; the algorithm is described as follows.

Algorithm 2

(Tseng’s extragradient method)

Step 1:

Choose \(x_{0}\in H\), \(\gamma >0\), \(l\in (0,1)\), \(\mu \in (0,1)\).

Step 2:

Given the current iteration \(x_{n}\), compute

$$ y_{n}=P_{\Omega }(x_{n}-\lambda _{n} Ax_{n}), $$

where \(\lambda _{n}\) is chosen to be the largest \(\lambda \in \{\gamma ,\gamma l,\gamma l^{2},\ldots \}\) satisfying

$$ \lambda \Vert Ax_{n}-Ay_{n} \Vert \leq \mu \Vert x_{n}-y_{n} \Vert . $$

If \(y_{n}=x_{n}\), then stop and \(x_{n}\) is a solution of the variational inequality problem. Otherwise:

Step 3:

Compute the new iterate \(x_{n+1}\) via the following formula:

$$ x_{n+1}=y_{n}-\lambda _{n}(Ay_{n}-Ax_{n}). $$

Set \(n:=n+1\) and return to Step 2.

The advantage of this iterative algorithm is that it does not need knowledge of the Lipschitz constant of the operator A; the step size is determined by a new self-adaptive rule.
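The Armijo-like backtracking of Step 2 can be sketched as follows (an illustration of ours; proj, gamma, l, mu are as above, and max_back is a hypothetical safeguard on the number of backtracks):

```python
import numpy as np

def armijo_step(A, proj, x, gamma=1.0, l=0.5, mu=0.5, max_back=60):
    """Largest lam in {gamma, gamma*l, gamma*l**2, ...} with
    lam*||A(x)-A(y)|| <= mu*||x-y||, where y = proj(x - lam*A(x))."""
    Ax = A(x)
    lam = gamma
    for _ in range(max_back):
        y = proj(x - lam * Ax)
        if lam * np.linalg.norm(Ax - A(y)) <= mu * np.linalg.norm(x - y):
            break
        lam *= l  # backtrack
    return lam, y
```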

Under adequate conditions, the sequences \(\{x_{n}\}\) generated by Tseng’s extragradient method, the projection and contraction method (PCM), and Algorithm 2 all converge weakly to an element of \(VI(\Omega ,A)\). Since weak convergence is often not sufficient, many modifications have been proposed so that strong convergence is guaranteed.

In 2019, Thong and Hieu [12] proposed a self-adaptive algorithm based on a Mann-type Tseng’s extragradient method; the algorithm is as follows.

Algorithm 3.2

Initialization: Given \(\tau _{0}>0\), \(\mu \in (0,1)\). Let \(x_{0}\in H\) be arbitrary.

Iterative Steps: Calculate \(x_{n+1}\) as follows:

Step 1: Compute

$$ y_{n}=P_{\Omega }(x_{n}-\tau _{n}Ax_{n}). $$
(1.7)

If \(x_{n}=y_{n}\), then stop, and \(y_{n}\) is a solution of \(VI(\Omega ,A)\). Otherwise:

Step 2: Compute

$$ x_{n+1}=(1-\alpha _{n}-\beta _{n})x_{n}+ \beta _{n}z_{n}, $$
(1.8)

and

$$ \tau _{n+1}=\textstyle\begin{cases} \min \{\frac{\mu \Vert x_{n}-y_{n} \Vert }{ \Vert Ax_{n}-Ay_{n} \Vert },\tau _{n}\}, & \text{if } Ax_{n}-Ay_{n}\neq 0, \\ \tau _{n}, & \text{otherwise}, \end{cases} $$
(1.9)

where \(z_{n}=y_{n}-\tau _{n}(Ay_{n}-Ax_{n})\).

Set \(n:=n+1\) and go to Step 1.

Note that only one projection per iteration is required by the algorithm, and a strong convergence theorem was proved. Besides, the algorithm employs a self-adaptive technique so that the conditions imposed on the cost operator can be relaxed.
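The step-size update (1.9) itself amounts to a one-line rule; a sketch:

```python
import numpy as np

def update_tau(tau, x, y, Ax, Ay, mu=0.9):
    """Rule (1.9): tau is non-increasing and needs no Lipschitz constant."""
    r = np.linalg.norm(Ax - Ay)
    return min(mu * np.linalg.norm(x - y) / r, tau) if r > 0 else tau
```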

In this paper, motivated and inspired by the results of Tseng [7] and Thong and Hieu [11, 12], and by the ongoing research in these directions, we introduce a new algorithm for solving the (VIP) involving a pseudomonotone and Lipschitz continuous operator. The algorithm combines the inertial technique with the projection and contraction method (PCM). It uses a new step size rule, updated at each iteration, which allows the algorithm to work without knowledge of the Lipschitz constant of the cost operator. The rule requires only a simple computation, which may increase the efficiency of the algorithm. Under several appropriate conditions on the parameters, we prove that the sequence \(\{x_{n}\}\) generated by the new algorithm converges strongly to a minimum-norm solution.

In addition, several numerical examples are presented to illustrate the performance and accuracy of the new algorithm and to provide comparisons with previously known algorithms.

This paper is organized as follows: In Sect. 2, some definitions and lemmas are recalled for further use. In Sect. 3, the convergence of the proposed algorithm is proved. In Sect. 4, we consider some numerical examples and comparisons.

2 Preliminaries

Assume that H is a real Hilbert space and Ω is a nonempty closed convex subset of H. In this paper, we use the following notations:

  • → denotes strong convergence.

  • ⇀ denotes weak convergence.

  • \(\omega _{w}(x_{n}):=\{x| \text{ there exists } \{x_{n_{j}}\}_{j=0}^{ \infty }\subset \{x_{n}\}_{n=0}^{\infty } \text{ such that } x_{n_{j}} \rightharpoonup x\}\) denotes the weak cluster point set of \(\{x_{n}\}_{n=0}^{\infty }\).

Lemma 2.1

([13])

Let H be a real Hilbert space. For all \(x,y\in H\) and \(\lambda \in \mathbb{R}\), we have

  1. (i)

    \(\|x+y\|^{2}=\|x\|^{2}+\|y\|^{2}+2\langle x,y\rangle \);

  2. (ii)

    \(\|\lambda x+(1-\lambda )y\|^{2}=\lambda \|x\|^{2}+(1-\lambda )\|y\|^{2}- \lambda (1-\lambda )\| x-y\|^{2}\);

  3. (iii)

    \(\|x+y\|^{2}\leq \|x\|^{2}+2\langle y,x+y\rangle \).

Next, we recall some notions concerning operators.

Definition 2.2

([14])

An operator \(A:H\rightarrow H\) is said to be:

  1. (i)

    monotone if

    $$ \langle x-y,Ax-Ay\rangle \geq 0,\quad \forall x,y\in H; $$
  2. (ii)

    pseudomonotone if

    $$ \langle Ay,x-y\rangle \geq 0\quad \Rightarrow\quad \langle Ax,x-y\rangle \geq 0, \quad \forall x,y\in H; $$
  3. (iii)

    L-Lipschitz continuous with \(L>0\), if

    $$ \Vert Ax-Ay \Vert \leq L \Vert x-y \Vert ,\quad \forall x,y\in H; $$
  4. (iv)

    sequentially weakly continuous if, for each sequence \(\{x_{n}\}\) converging weakly to x, the sequence \(\{A(x_{n})\}\) converges weakly to Ax.

From Definition 2.2, we can see that every monotone operator A is pseudomonotone, but the converse is not true. Next, we present an example of the variational inequality problem in an infinite dimensional space.

Example 2.3

Let H be a Hilbert space,

$$ H=l_{2}:=\Biggl\{ u=(u_{1},u_{2},\ldots ,u_{i},\ldots ):\sum_{i=1}^{\infty } \vert u_{i} \vert ^{2}< \infty \Biggr\} . $$

The inner product and the norm on H are given as follows:

$$ \langle u,v\rangle =\sum_{i=1}^{\infty }u_{i}v_{i}, \qquad \Vert u \Vert =\sqrt{ \langle u,u\rangle } $$

for any \(u=(u_{1},u_{2},\ldots ,u_{i},\ldots )\), \(v=(v_{1},v_{2},\ldots ,v_{i}, \ldots )\in H\). Let \(\alpha ,\beta \in \mathbb{R}\) be such that \(\beta >\alpha >\frac{\beta }{2}>0\), and let

$$ \Omega =\biggl\{ u=(u_{1},u_{2},\ldots ,u_{i}, \ldots )\in H: \vert u_{i} \vert \leq \frac{1}{i}, \forall i \geq 1\biggr\} ,\qquad Au=\bigl(\beta - \Vert u \Vert \bigr)u. $$

Since \(0\in VI(\Omega ,A)\), we can see that \(VI(\Omega ,A)\neq \emptyset \).

Besides, we let

$$ \Omega _{\alpha }:=\bigl\{ u\in H: \Vert u \Vert \leq \alpha \bigr\} . $$

It is easy to see that A is pseudomonotone and \((\beta +2\alpha )\)-Lipschitz continuous on \(\Omega _{\alpha }\), and that A fails to be a monotone mapping on H (see [15], Example 4.1).

Next, we show that \(\Omega \subset \Omega _{\alpha }\). Let \(u=(u_{1},u_{2},\ldots ,u_{i},\ldots )\in \Omega \). Then we get

$$ \Vert u \Vert ^{2}=\sum_{i=1}^{\infty } \vert u_{i} \vert ^{2}\leq \sum _{i=1}^{\infty } \frac{1}{i^{2}}=1+\sum _{i=2}^{\infty }\frac{1}{i^{2}} \leq 1+\sum _{i=2}^{ \infty }\frac{1}{i^{2}-1}=1+\frac{3}{4}= \frac{7}{4}, $$

which implies that \(\|u\|\leq \sqrt{7}/2\). Hence, provided that α is chosen with \(\alpha \geq \sqrt{7}/2\) (which is compatible with \(\beta >\alpha >\frac{\beta }{2}>0\)), we have \(\|u\|\leq \alpha \), thus \(u\in \Omega _{\alpha }\) and \(\Omega \subset \Omega _{\alpha }\).

Moreover, since \(\Omega \subset \Omega _{\alpha }\), we know that A is pseudomonotone and \((\beta +2\alpha )\)-Lipschitz continuous on Ω. Besides, Ω is compact and A is continuous on H; thus A is sequentially weakly continuous on Ω (see [16], Example 1).
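As a quick numerical sanity check of the failure of monotonicity on H (our own illustration, not part of the example), embed \(u=(u_{1},0,0,\ldots )\) in \(l_{2}\), so that A acts as \(u_{1}\mapsto (\beta -|u_{1}|)u_{1}\):

```python
# With beta = 2, u = (1.5, 0, ...), v = (1, 0, ...):
beta = 2.0
A = lambda u: (beta - abs(u)) * u
u, v = 1.5, 1.0
print((A(u) - A(v)) * (u - v))  # -0.125 < 0, so A is not monotone on H
```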

In the following, we gather some characteristic properties of \(P_{\Omega }\).

Lemma 2.4

([17])

Let Ω be a closed convex subset in a real Hilbert space H, \(x\in H\). Then

  1. (i)

    \(\|P_{\Omega }x-P_{\Omega }y\|^{2}\leq \langle x-y,P_{\Omega }x-P_{ \Omega }y\rangle \), \(\forall y\in \Omega \);

  2. (ii)

    \(\|x-P_{\Omega }x\|^{2}+\|y-P_{\Omega }x\|^{2}\leq \| x-y\|^{2}\), \(\forall y\in \Omega \).

Lemma 2.5

([17])

Let H be a real Hilbert space and Ω be a nonempty closed convex subset of H. Given \(x\in H\) and \(z\in \Omega \), then \(z=P_{\Omega }x\) if and only if there holds the inequality \(\langle x-z,y-z\rangle \leq 0\), \(\forall y\in \Omega \).

Lemma 2.6

(Minty [18])

Consider problem \(VI(\Omega ,A)\) with Ω a nonempty, closed, convex subset of a real Hilbert space H and \(A:\Omega \rightarrow H\) pseudomonotone and continuous. Then \(x^{*}\) is a solution of \(VI(\Omega ,A)\) if and only if \(\langle x-x^{*},Ax\rangle \geq 0\), \(\forall x\in \Omega \).

Lemma 2.7

([19])

Let \(\{a_{n}\}\) be a sequence of nonnegative real numbers. Suppose that

$$ a_{n+1}\leq (1-\alpha _{n})a_{n}+\alpha _{n}\delta _{n} $$

for each \(n>0\), where \(\{\alpha _{n}\}\subset (0,1)\), \(\sum_{n=1}^{\infty }\alpha _{n}=\infty \), and \(\limsup_{n\rightarrow \infty }\delta _{n}\leq 0\).

Then \(\lim_{n\rightarrow \infty }a_{n}=0\).

Lemma 2.8

([20])

Let \(\{a_{n}\}\) be a sequence of nonnegative real numbers such that there exists a subsequence \(\{a_{n_{j}}\}\) of \(\{a_{n}\}\) with \(a_{n_{j}}< a_{n_{j}+1}\) for all \(j\in \mathbb{N}\). Then there exists a nondecreasing sequence \(\{m_{k}\}\) of \(\mathbb{N}\) such that \(\lim_{k\rightarrow \infty }m_{k}=\infty \) and the following properties are satisfied for all (sufficiently large) \(k\in \mathbb{N}\): \(a_{m_{k}}\leq a_{m_{k}+1}\) and \(a_{k}\leq a_{m_{k}+1}\). In fact, \(m_{k}\) is the largest number n in the set \(\{1,2,\ldots ,k\}\) such that \(a_{n}< a_{n+1}\).

3 Main results

In this section, we propose a strongly convergent algorithm for solving pseudomonotone variational inequality problems. Under mild assumptions, the sequence generated by the proposed method converges strongly to \(p\in VI(\Omega ,A)\), where \(\|p\|=\min \{\|z\|: z\in VI(\Omega ,A)\}\). Our algorithm is described as follows.

Algorithm 1

Given \(\gamma >0\), \(l\in (0,1)\), \(\mu \in (0,1)\), \(\alpha \in (0,2)\), \(\theta \in (0,1)\). Let \(x_{0},x_{1}\in \Omega \) be arbitrarily fixed. Calculate \(x_{n+1}\) as follows:

$$ \textstyle\begin{cases} w_{n}=x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ y_{n}=P_{\Omega }(w_{n}-\tau _{n}Aw_{n}), \\ d_{n}=w_{n}-y_{n}-\tau _{n}(Aw_{n}-Ay_{n}), \\ z_{n}=w_{n}-\alpha \eta _{n}d_{n}, \\ x_{n+1}=(1-\alpha _{n}-\beta _{n})w_{n}+\beta _{n}z_{n}, \end{cases} $$
(3.1)

where the sequences \(\{\epsilon _{n}\}\), \(\{\theta _{n}\}\), \(\{\tau _{n}\}\), \(\{\eta _{n}\}\), \(\{\alpha _{n}\}\), and \(\{\beta _{n}\}\) satisfy the following conditions:

  1. (a)

    \(\{\alpha _{n}\}\subset (0,1)\), \(\lim_{n\rightarrow \infty }\alpha _{n}=0\), \(\sum_{n=1}^{\infty }\alpha _{n}=\infty \);

  2. (b)

    \(\epsilon _{n}=o(\alpha _{n})\), i.e., \(\lim_{n\rightarrow \infty }( \epsilon _{n}/\alpha _{n})=0\);

  3. (c)

    Choose \(\theta _{n}\) such that \(0\leq \theta _{n}\leq \bar{\theta }_{n}\), where

    $$ \bar{\theta }_{n}=\textstyle\begin{cases} \min \{\theta ,\frac{\epsilon _{n}}{ \Vert x_{n}-x_{n-1} \Vert }\}, & \text{if } x_{n} \neq x_{n-1}, \\ \theta , & \text{otherwise}; \end{cases} $$
    (3.2)
  4. (d)

    \(\tau _{n}\) is chosen to be the largest \(\tau \in \{\gamma ,\gamma l,\gamma l^{2},\ldots \}\) satisfying

    $$ \tau \Vert Aw_{n}-Ay_{n} \Vert \leq \mu \Vert w_{n}-y_{n} \Vert ; $$
    (3.3)
  5. (e)
    $$\eta _{n}=\textstyle\begin{cases} \frac{\langle w_{n}-y_{n},d_{n}\rangle }{ \Vert d_{n} \Vert ^{2}}, & \text{if } d_{n} \neq 0, \\ 0, & \text{if } d_{n}=0; \end{cases}$$
  6. (f)

    \(\{\beta _{n}\}\subset [a,b]\subset (0,1)\).
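For the reader's convenience, here is a minimal Python/NumPy sketch of Algorithm 1 under the above conditions; it is an illustration of ours, not the authors' implementation (the paper's experiments use Matlab). The concrete sequences \(\alpha _{n}=1/(n+1)\), \(\epsilon _{n}=\alpha _{n}^{2}\), and \(\beta _{n}=(1-\alpha _{n})/2\) are hypothetical choices satisfying conditions (a), (b), and (f).

```python
import numpy as np

def algorithm1(A, proj, x0, x1, gamma=1.0, l=0.3, mu=0.3,
               alpha=1.8, theta=0.35, n_iters=500):
    """Sketch of Algorithm 1 (3.1) with the step rules (3.2)-(3.3)."""
    x_prev = np.asarray(x0, dtype=float)
    x = np.asarray(x1, dtype=float)
    for n in range(1, n_iters + 1):
        alpha_n = 1.0 / (n + 1)          # condition (a)
        eps_n = alpha_n ** 2             # eps_n = o(alpha_n), condition (b)
        beta_n = (1.0 - alpha_n) / 2.0   # condition (f)
        # Inertial extrapolation with theta_n chosen as in (3.2).
        diff = np.linalg.norm(x - x_prev)
        theta_n = min(theta, eps_n / diff) if diff > 0 else theta
        w = x + theta_n * (x - x_prev)
        # Armijo-like search (3.3); it terminates by Lemma 3.1, and the bound
        # on the number of backtracks is only a numerical safeguard.
        Aw = A(w)
        tau = gamma
        for _ in range(60):
            y = proj(w - tau * Aw)
            if tau * np.linalg.norm(Aw - A(y)) <= mu * np.linalg.norm(w - y):
                break
            tau *= l
        d = w - y - tau * (Aw - A(y))
        nd2 = float(np.dot(d, d))
        eta = float(np.dot(w - y, d)) / nd2 if nd2 > 0 else 0.0  # condition (e)
        z = w - alpha * eta * d
        x_prev, x = x, (1.0 - alpha_n - beta_n) * w + beta_n * z  # (3.1)
    return x
```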

The following lemmas are important for proving the convergence of the algorithm.

Lemma 3.1

([11])

The Armijo-like search rule is well defined and

$$ \min \biggl\{ \gamma ,\frac{\mu l}{L}\biggr\} \leq \tau _{n}\leq \gamma . $$

Lemma 3.2

Let \(\{z_{n}\}\) be a sequence generated by Algorithm 1. For all \(p\in VI(\Omega ,A)\), we have

$$ \Vert z_{n}-p \Vert ^{2}\leq \Vert w_{n}-p \Vert ^{2}-\frac{2-\alpha }{\alpha } \Vert w_{n}-z_{n} \Vert ^{2}. $$
(3.4)

Proof

If \(d_{n_{0}}=0\) for some \(n_{0}\), then \(z_{n_{0}}=w_{n_{0}}\) and inequality (3.4) holds.

Next, we consider the case \(d_{n}\neq 0\) for each \(n\geq 1\). Let \(p\in VI(\Omega ,A)\). We have

$$\begin{aligned} \Vert z_{n}-p \Vert ^{2} =& \Vert w_{n}- \alpha \eta _{n}d_{n}-p \Vert ^{2} \\ =& \Vert w_{n}-p \Vert ^{2}-2\alpha \eta _{n}\langle w_{n}-p,d_{n}\rangle + \alpha ^{2}\eta _{n}^{2} \Vert d_{n} \Vert ^{2}. \end{aligned}$$
(3.5)

By the definition of \(d_{n}\), we get

$$\begin{aligned} \langle w_{n}-p,d_{n}\rangle =&\langle w_{n}-y_{n},d_{n}\rangle + \langle y_{n}-p,d_{n}\rangle \\ =&\langle w_{n}-y_{n},d_{n}\rangle +\bigl\langle y_{n}-p,w_{n}-y_{n}- \tau _{n}(Aw_{n}-Ay_{n})\bigr\rangle . \end{aligned}$$
(3.6)

Since \(y_{n}=P_{\Omega }(w_{n}-\tau _{n} Aw_{n})\), by Lemma 2.5 we obtain

$$ \langle w_{n}-y_{n}-\tau _{n}Aw_{n},y_{n}-p \rangle \geq 0. $$
(3.7)

Besides, since \(p\in VI(\Omega ,A)\) and \(y_{n}\in \Omega \), we have

$$ \langle Ap,y_{n}-p\rangle \geq 0. $$

Then, by the pseudomonotonicity of A, we get

$$ \langle Ay_{n},y_{n}-p\rangle \geq 0. $$
(3.8)

By (3.6), (3.7), and (3.8), we have

$$ \langle w_{n}-p,d_{n}\rangle \geq \langle w_{n}-y_{n},d_{n}\rangle . $$
(3.9)

Combining (3.5) and (3.9), we obtain

$$ \Vert z_{n}-p \Vert ^{2}\leq \Vert w_{n}-p \Vert ^{2}-2\alpha \eta _{n}\langle w_{n}-y_{n},d_{n} \rangle +\alpha ^{2}\eta _{n}^{2} \Vert d_{n} \Vert ^{2}. $$
(3.10)

Since \(d_{n}\neq 0\), we have \(\eta _{n}=\frac{\langle w_{n}-y_{n},d_{n}\rangle }{\|d_{n}\|^{2}}\), which implies that \(\eta _{n}\|d_{n}\|^{2}=\langle w_{n}-y_{n},d_{n}\rangle \).

Thus, we obtain

$$\begin{aligned} \Vert z_{n}-p \Vert ^{2} \leq & \Vert w_{n}-p \Vert ^{2}-2\alpha \eta _{n}\langle w_{n}-y_{n},d_{n} \rangle +\alpha ^{2} \eta _{n}\langle w_{n}-y_{n},d_{n} \rangle \\ =& \Vert w_{n}-p \Vert ^{2}-(2-\alpha )\alpha \eta _{n}\langle w_{n}-y_{n},d_{n} \rangle \\ =& \Vert w_{n}-p \Vert ^{2}-(2-\alpha )\alpha \eta _{n}^{2} \Vert d_{n} \Vert ^{2} \\ =& \Vert w_{n}-p \Vert ^{2}-(2-\alpha )\alpha \Vert \eta _{n}d_{n} \Vert ^{2} \\ \leq & \Vert w_{n}-p \Vert ^{2}-\frac{2-\alpha }{\alpha } \Vert w_{n}-z_{n} \Vert ^{2}. \end{aligned}$$

This completes the proof. □

Lemma 3.3

Let \(\{w_{n}\}\) be a sequence generated by Algorithm 1, then there exists \(n_{0}\geq 1\) such that

$$ \Vert w_{n}-y_{n} \Vert ^{2}\leq \frac{(1+\mu )^{2}}{[(1-\mu )\alpha ]^{2}} \Vert z_{n}-w_{n} \Vert ^{2}, \quad \forall n\geq n_{0}. $$
(3.11)

Proof

Clearly, from the definition of \(d_{n}\), we have

$$\begin{aligned} \Vert d_{n} \Vert =& \bigl\Vert w_{n}-y_{n}- \tau _{n}(Aw_{n}-Ay_{n}) \bigr\Vert \\ \geq & \Vert w_{n}-y_{n} \Vert -\tau _{n} \Vert Aw_{n}-Ay_{n} \Vert \\ \geq & \Vert w_{n}-y_{n} \Vert -\mu \Vert w_{n}-y_{n} \Vert \\ =&(1-\mu ) \Vert w_{n}-y_{n} \Vert . \end{aligned}$$

On the one hand, by the triangle inequality and (3.3), we can obtain

$$ \Vert d_{n} \Vert \leq (1+\mu ) \Vert w_{n}-y_{n} \Vert , $$

thus, we have

$$ \frac{1}{ \Vert d_{n} \Vert ^{2}}\geq \frac{1}{(1+\mu )^{2} \Vert w_{n}-y_{n} \Vert ^{2}}. $$

Besides,

$$\begin{aligned} \langle w_{n}-y_{n},d_{n}\rangle =&\bigl\langle w_{n}-y_{n},w_{n}-y_{n}- \tau _{n}(Aw_{n}-Ay_{n})\bigr\rangle \\ =& \Vert w_{n}-y_{n} \Vert ^{2}-\tau _{n}\langle w_{n}-y_{n},Aw_{n}-Ay_{n} \rangle \\ \geq & \Vert w_{n}-y_{n} \Vert ^{2}-\tau _{n} \Vert w_{n}-y_{n} \Vert \Vert Aw_{n}-Ay_{n} \Vert \\ \geq & \Vert w_{n}-y_{n} \Vert ^{2}-\mu \Vert w_{n}-y_{n} \Vert \Vert w_{n}-y_{n} \Vert \\ =&(1-\mu ) \Vert w_{n}-y_{n} \Vert ^{2}. \end{aligned}$$

Thus, we obtain

$$ \eta _{n}=\frac{\langle w_{n}-y_{n},d_{n}\rangle }{ \Vert d_{n} \Vert ^{2}}\geq \frac{1-\mu }{(1+\mu )^{2}},\quad \forall n \geq n_{0}. $$
(3.12)

On the other hand, we have

$$ \eta _{n} \Vert d_{n} \Vert ^{2}=\langle w_{n}-y_{n},d_{n}\rangle \geq (1-\mu ) \Vert w_{n}-y_{n} \Vert ^{2}, \quad \forall n\geq n_{0}. $$

Thus,

$$\begin{aligned} \Vert w_{n}-y_{n} \Vert ^{2} \leq & \frac{1}{1-\mu }\eta _{n} \Vert d_{n} \Vert ^{2} \\ =&\frac{1}{1-\mu } \Vert \alpha \eta _{n}d_{n} \Vert ^{2}\frac{1}{\alpha ^{2}} \frac{1}{\eta _{n}} \\ =&\frac{1}{1-\mu } \Vert z_{n}-w_{n} \Vert ^{2}\frac{1}{\alpha ^{2}} \frac{1}{\eta _{n}}. \end{aligned}$$
(3.13)

Also, from (3.12) and (3.13), we can get

$$\begin{aligned} \Vert w_{n}-y_{n} \Vert ^{2} \leq & \frac{1}{1-\mu } \Vert z_{n}-w_{n} \Vert ^{2} \frac{1}{\alpha ^{2}}\frac{(1+\mu )^{2}}{1-\mu } \\ =&\frac{(1+\mu )^{2}}{[(1-\mu )\alpha ]^{2}} \Vert z_{n}-w_{n} \Vert ^{2}, \end{aligned}$$

which leads to the desired conclusion. □

Theorem 3.4

Let \(A:\Omega \rightarrow H\) be pseudomonotone, L-Lipschitz continuous, and sequentially weakly continuous on bounded subsets of H. Assume that \(VI(\Omega ,A)\neq \emptyset \). Then the sequence \(\{x_{n}\}\) generated by Algorithm 1 converges strongly to an element \(p\in VI(\Omega ,A)\), where \(\|p\|=\min \{\|z\|: z\in VI(\Omega ,A)\}\).

Proof

We divide the proof into several claims.

Claim 1. Prove that the sequence \(\{x_{n}\}\) is bounded. Let \(p\in VI(\Omega ,A)\). We have

$$\begin{aligned} \Vert x_{n+1}-p \Vert =& \bigl\Vert (1-\alpha _{n}- \beta _{n})w_{n}+\beta _{n}z_{n}-p \bigr\Vert \\ =& \bigl\Vert (1-\alpha _{n}-\beta _{n}) (w_{n}-p)+\beta _{n}(z_{n}-p)-\alpha _{n}p \bigr\Vert \\ \leq &(1-\alpha _{n}-\beta _{n}) \Vert w_{n}-p \Vert +\beta _{n} \Vert z_{n}-p \Vert + \alpha _{n} \Vert p \Vert . \end{aligned}$$

Note that, from Lemma 3.2, we have

$$\begin{aligned} \Vert x_{n+1}-p \Vert \leq &(1-\alpha _{n}-\beta _{n}) \Vert w_{n}-p \Vert +\beta _{n} \Vert w_{n}-p \Vert +\alpha _{n} \Vert p \Vert \\ =&(1-\alpha _{n}) \Vert w_{n}-p \Vert +\alpha _{n} \Vert p \Vert \\ =&(1-\alpha _{n}) \bigl\Vert x_{n}-p+\theta _{n}(x_{n}-x_{n-1}) \bigr\Vert +\alpha _{n} \Vert p \Vert \\ \leq &(1-\alpha _{n}) \Vert x_{n}-p \Vert +\theta _{n}(1-\alpha _{n}) \Vert x_{n}-x_{n-1} \Vert +\alpha _{n} \Vert p \Vert \\ =&(1-\alpha _{n}) \Vert x_{n}-p \Vert +\alpha _{n}\bigl(\sigma _{n}+ \Vert p \Vert \bigr), \end{aligned}$$
(3.14)

where

$$ \sigma _{n}=(1-\alpha _{n})\frac{\theta _{n}}{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert . $$

From \(\epsilon _{n}=o(\alpha _{n})\) and the definitions of \(\theta _{n}\), \(\bar{\theta }_{n}\) given in (3.2), we have

$$ \lim_{n\rightarrow \infty }\sigma _{n}=0. $$

Thus, the sequence \(\{\sigma _{n}\}\) is bounded. Setting \(M=\sup_{n\geq 1}\sigma _{n}+\|p\|\), by (3.14), we get

$$ \Vert x_{n+1}-p \Vert \leq (1-\alpha _{n}) \Vert x_{n}-p \Vert +\alpha _{n}M\leq \max \bigl\{ \Vert x_{n}-p \Vert , M\bigr\} $$

for each \(n\geq n_{0}\). By induction, we can obtain that

$$ \Vert x_{n}-p \Vert \leq \max \bigl\{ \Vert x_{n_{0}}-p \Vert , M\bigr\} . $$

Therefore, the sequence \(\{x_{n}\}\) is bounded.

Claim 2. For each \(p\in VI(\Omega ,A)\) and \(n\geq n_{0}\), prove

$$\begin{aligned} \beta _{n}\frac{2-\alpha }{\alpha } \Vert w_{n}-z_{n} \Vert ^{2} \leq & \Vert x_{n}-p \Vert ^{2}- \Vert x_{n+1}-p \Vert ^{2}+\theta _{n}(1-\alpha _{n}) \bigl( \Vert x_{n}-p \Vert ^{2}- \Vert x_{n-1}-p \Vert ^{2}\bigr) \\ &{}+2\theta _{n}(1-\alpha _{n}) \Vert x_{n}-x_{n-1} \Vert ^{2}+\alpha _{n} \Vert p \Vert ^{2}. \end{aligned}$$

From the definition of \(w_{n}\), we have

$$\begin{aligned} \Vert w_{n}-p \Vert ^{2} =& \bigl\Vert (1+\theta _{n}) (x_{n}-p)-\theta _{n}(x_{n-1}-p) \bigr\Vert ^{2} \\ =&(1+\theta _{n}) \Vert x_{n}-p \Vert ^{2}- \theta _{n} \Vert x_{n-1}-p \Vert ^{2}+ \theta _{n}(1+\theta _{n}) \Vert x_{n}-x_{n-1} \Vert ^{2} \\ \leq &(1+\theta _{n}) \Vert x_{n}-p \Vert ^{2}-\theta _{n} \Vert x_{n-1}-p \Vert ^{2}+2 \theta _{n} \Vert x_{n}-x_{n-1} \Vert ^{2} \\ =& \Vert x_{n}-p \Vert ^{2}+\theta _{n} \bigl( \Vert x_{n}-p \Vert ^{2}- \Vert x_{n-1}-p \Vert ^{2}\bigr)+2 \theta _{n} \Vert x_{n}-x_{n-1} \Vert ^{2}. \end{aligned}$$
(3.15)

Combining the convexity of \(\|\cdot \|^{2}\), Lemma 3.2, and (3.15), we obtain

$$\begin{aligned} \Vert x_{n+1}-p \Vert ^{2} =& \bigl\Vert (1-\alpha _{n}-\beta _{n}) (w_{n}-p)+\beta _{n}(z_{n}-p)+ \alpha _{n}(-p) \bigr\Vert ^{2} \\ \leq &(1-\alpha _{n}-\beta _{n}) \Vert w_{n}-p \Vert ^{2}+\beta _{n} \Vert z_{n}-p \Vert ^{2}+\alpha _{n} \Vert p \Vert ^{2} \\ \leq &(1-\alpha _{n}-\beta _{n}) \Vert w_{n}-p \Vert ^{2}+\alpha _{n} \Vert p \Vert ^{2}+ \beta _{n}\biggl( \Vert w_{n}-p \Vert ^{2}-\frac{2-\alpha }{\alpha } \Vert w_{n}-z_{n} \Vert ^{2}\biggr) \\ =&(1-\alpha _{n}) \Vert w_{n}-p \Vert ^{2}+ \alpha _{n} \Vert p \Vert ^{2}-\beta _{n} \frac{2-\alpha }{\alpha } \Vert w_{n}-z_{n} \Vert ^{2} \\ \leq &(1-\alpha _{n}) \Vert x_{n}-p \Vert ^{2}+(1-\alpha _{n})\theta _{n}\bigl( \Vert x_{n}-p \Vert ^{2}- \Vert x_{n-1}-p \Vert ^{2}\bigr) \\ &{}+2\theta _{n}(1-\alpha _{n}) \Vert x_{n}-x_{n-1} \Vert ^{2}+\alpha _{n} \Vert p \Vert ^{2}-\beta _{n}\frac{2-\alpha }{\alpha } \Vert w_{n}-z_{n} \Vert ^{2} \\ \leq & \Vert x_{n}-p \Vert ^{2}+\theta _{n}(1-\alpha _{n}) \bigl( \Vert x_{n}-p \Vert ^{2}- \Vert x_{n-1}-p \Vert ^{2}\bigr) \\ &{}+2\theta _{n}(1-\alpha _{n}) \Vert x_{n}-x_{n-1} \Vert ^{2}+\alpha _{n} \Vert p \Vert ^{2}-\beta _{n}\frac{2-\alpha }{\alpha } \Vert w_{n}-z_{n} \Vert ^{2}, \end{aligned}$$

which leads to the desired conclusion.

Claim 3. For each \(p\in VI(\Omega ,A)\) and \(n\geq n_{0}\), prove

$$ \Vert x_{n+1}-p \Vert ^{2}\leq (1-\alpha _{n}) \Vert w_{n}-p \Vert ^{2}+\alpha _{n}\bigl[2 \beta _{n} \Vert w_{n}-z_{n} \Vert \Vert x_{n+1}-p \Vert +2\langle p,p-x_{n+1}\rangle \bigr]. $$

Setting \(u_{n}=(1-\beta _{n})w_{n}+\beta _{n}z_{n}\), we have

$$ \Vert w_{n}-u_{n} \Vert =\beta _{n} \Vert w_{n}-z_{n} \Vert $$
(3.16)

and

$$\begin{aligned} \Vert u_{n}-p \Vert =& \bigl\Vert (1-\beta _{n}) (w_{n}-p)+\beta _{n}(z_{n}-p) \bigr\Vert \\ \leq &(1-\beta _{n}) \Vert w_{n}-p \Vert +\beta _{n} \Vert z_{n}-p \Vert \\ \leq & \Vert w_{n}-p \Vert . \end{aligned}$$
(3.17)

It follows from (3.16) and (3.17) that

$$\begin{aligned} \Vert x_{n+1}-p \Vert ^{2} =& \bigl\Vert (1-\alpha _{n}-\beta _{n})w_{n}+\beta _{n}z_{n}-p \bigr\Vert ^{2} \\ =& \bigl\Vert (1-\beta _{n})w_{n}+\beta _{n}z_{n}-\alpha _{n}w_{n}-p \bigr\Vert ^{2} \\ =& \Vert u_{n}-\alpha _{n}w_{n}-p \Vert ^{2} \\ =& \bigl\Vert (1-\alpha _{n}) (u_{n}-p)-\alpha _{n}(w_{n}-u_{n})-\alpha _{n}p \bigr\Vert ^{2} \\ \leq &(1-\alpha _{n})^{2} \Vert u_{n}-p \Vert ^{2}-2\bigl\langle x_{n+1}-p,\alpha _{n}(w_{n}-u_{n})+ \alpha _{n}p\bigr\rangle \\ =&(1-\alpha _{n})^{2} \Vert u_{n}-p \Vert ^{2}+2\alpha _{n}\langle w_{n}-u_{n},p-x_{n+1} \rangle +2\alpha _{n}\langle p,p-x_{n+1}\rangle \\ \leq &(1-\alpha _{n}) \Vert u_{n}-p \Vert ^{2}+2\alpha _{n} \Vert w_{n}-u_{n} \Vert \Vert x_{n+1}-p \Vert +2\alpha _{n}\langle p,p-x_{n+1}\rangle \\ \leq &(1-\alpha _{n}) \Vert w_{n}-p \Vert ^{2}+\alpha _{n}\bigl[2\beta _{n} \Vert w_{n}-z_{n} \Vert \Vert x_{n+1}-p \Vert +2 \langle p,p-x_{n+1}\rangle \bigr], \end{aligned}$$

which leads to the desired conclusion.

Claim 4. The sequence \(\{\|x_{n}-p\|^{2}\}\) converges to zero. We consider two possible cases for the sequence \(\{\|x_{n}-p\|^{2}\}\).

Case 1: Suppose that there exists \(N\in \mathbb{N}\) such that \(\|x_{n+1}-p\|^{2}\leq \|x_{n}-p\|^{2}\), \(\forall n>N\).

Then we know that \(\lim_{n\rightarrow \infty }\|x_{n}-p\|^{2}\) exists. From Claim 2, we can obtain

$$\begin{aligned} \beta _{n}\frac{2-\alpha }{\alpha } \Vert w_{n}-z_{n} \Vert ^{2} \leq & \Vert x_{n}-p \Vert ^{2}- \Vert x_{n+1}-p \Vert ^{2}+\theta _{n}(1-\alpha _{n}) \bigl( \Vert x_{n}-p \Vert ^{2}- \Vert x_{n-1}-p \Vert ^{2}\bigr) \\ &{}+2\theta _{n}(1-\alpha _{n}) \Vert x_{n}-x_{n-1} \Vert ^{2}+\alpha _{n} \Vert p \Vert ^{2}. \end{aligned}$$
(3.18)

From \(\alpha \in (0,2)\), \(\{\beta _{n}\}\subset [a,b]\subset (0,1)\), and the facts that \(\theta _{n}\Vert x_{n}-x_{n-1}\Vert \rightarrow 0\) and \(\lim_{n\rightarrow \infty }\alpha _{n}=0\), we have

$$ \Vert w_{n}-z_{n} \Vert \rightarrow 0,\quad n \rightarrow \infty , $$
(3.19)

and

$$ \Vert w_{n}-x_{n} \Vert \rightarrow 0,\quad n \rightarrow \infty . $$
(3.20)

Using Lemma 3.3 and (3.19), we can get

$$ \Vert w_{n}-y_{n} \Vert \rightarrow 0,\quad n \rightarrow \infty . $$
(3.21)

By the definition of \(w_{n}\), we obtain

$$\begin{aligned} \Vert w_{n}-p \Vert ^{2} \leq &\bigl( \Vert x_{n}-p \Vert +\theta _{n} \Vert x_{n}-x_{n-1} \Vert \bigr)^{2} \\ =& \Vert x_{n}-p \Vert ^{2}+\theta _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2}+2\theta _{n} \Vert x_{n}-p \Vert \Vert x_{n}-x_{n-1} \Vert \\ \leq & \Vert x_{n}-p \Vert ^{2}+\theta _{n} \Vert x_{n}-x_{n-1} \Vert ^{2}+2\theta _{n} \Vert x_{n}-p \Vert \Vert x_{n}-x_{n-1} \Vert \\ \leq & \Vert x_{n}-p \Vert ^{2}+3K\theta _{n} \Vert x_{n}-x_{n-1} \Vert , \end{aligned}$$
(3.22)

where \(K=\sup_{n\geq 1}\{\|x_{n}-x_{n-1}\|,\|x_{n}-p\|\}\).

Combining \(\lim_{n\rightarrow \infty }\alpha _{n}=0\) and (3.19), we obtain

$$ \Vert x_{n+1}-w_{n} \Vert \leq \alpha _{n} \Vert w_{n} \Vert +\beta _{n} \Vert z_{n}-w_{n} \Vert \rightarrow 0,\quad n\rightarrow \infty . $$
(3.23)

By (3.20) and (3.23), we get

$$ \Vert x_{n+1}-x_{n} \Vert \rightarrow 0,\quad n \rightarrow \infty . $$
(3.24)

From Claim 3 and (3.22), we have

$$\begin{aligned} \Vert x_{n+1}-p \Vert ^{2} \leq &(1-\alpha _{n}) \Vert x_{n}-p \Vert ^{2}+\alpha _{n}\biggl[3K(1- \alpha _{n})\frac{\theta _{n}}{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert \\ &{}+2\beta _{n} \Vert w_{n}-z_{n} \Vert \Vert x_{n+1}-p \Vert +2\langle p,p-x_{n+1} \rangle \biggr], \end{aligned}$$

or

$$ \Vert x_{n+1}-p \Vert ^{2}\leq (1-\alpha _{n}) \Vert x_{n}-p \Vert ^{2}+\alpha _{n} \delta _{n}, $$
(3.25)

where

$$ \delta _{n}=3K(1-\alpha _{n})\frac{\theta _{n}}{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert +2\beta _{n} \Vert w_{n}-z_{n} \Vert \Vert x_{n+1}-p \Vert +2 \langle p,p-x_{n+1} \rangle . $$

Since \(\{x_{n}\}\) is bounded and \(\|x_{n+1}-x_{n}\|\rightarrow 0\) as \(n\rightarrow \infty \), there exists a subsequence \(\{x_{n_{j}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{j}}\rightharpoonup q\) and

$$ \limsup_{n\rightarrow \infty }\langle p,p-x_{n+1}\rangle =\limsup _{n \rightarrow \infty }\langle p,p-x_{n}\rangle =\lim _{j\rightarrow \infty } \langle p,p-x_{n_{j}}\rangle =\langle p,p-q\rangle , $$

from (3.20), we get \(w_{n_{j}}\rightharpoonup q\).

Next, we show that \(q\in VI(\Omega ,A)\). By \(y_{n}=P_{\Omega }(w_{n}-\tau _{n}Aw_{n})\), we have

$$ \langle w_{n_{j}}-\tau _{n_{j}}Aw_{n_{j}}-y_{n_{j}},w-y_{n_{j}} \rangle \leq 0,\quad \forall w\in \Omega . $$

From that, we infer that

$$ \langle Aw_{n_{j}},w-w_{n_{j}}\rangle \geq \frac{1}{\tau _{n_{j}}} \langle w_{n_{j}}-y_{n_{j}},w-y_{n_{j}}\rangle +\langle Aw_{n_{j}},y_{n_{j}}-w_{n_{j}} \rangle ,\quad \forall w\in \Omega . $$

By (3.21) and \(\liminf_{j\rightarrow \infty }\tau _{n_{j}}>0\), taking the limit as \(j\rightarrow \infty \), we obtain

$$ \liminf_{j\rightarrow \infty }\langle Aw_{n_{j}},w-w_{n_{j}} \rangle \geq 0. $$
(3.26)

Next, we choose a decreasing sequence \(\{\epsilon _{j}\}\) of positive reals tending to 0. For each \(\epsilon _{j}\), we denote by \(m_{j}\) the smallest positive integer such that

$$ \langle Aw_{n_{i}},w-w_{n_{i}}\rangle +\epsilon _{j} \geq 0,\quad \forall i\geq m_{j}, $$
(3.27)

where the existence of \(m_{j}\) follows from (3.26). Since \(\{\epsilon _{j}\}\) is decreasing, the sequence \(\{m_{j}\}\) is increasing. For each j with \(A(w_{n_{m_{j}}})\neq 0\), set

$$ t_{n_{m_{j}}}=\frac{A(w_{n_{m_{j}}})}{ \Vert A(w_{n_{m_{j}}}) \Vert ^{2}}. $$

Note that \(\langle A(w_{n_{m_{j}}}),\epsilon _{j}t_{n_{m_{j}}}\rangle =\epsilon _{j}\), so (3.27) gives \(\langle A(w_{n_{m_{j}}}),w+\epsilon _{j}t_{n_{m_{j}}}-w_{n_{m_{j}}}\rangle \geq 0\). It then follows from the pseudomonotonicity of A that

$$ \bigl\langle A(w+\epsilon _{j}t_{n_{m_{j}}}),w+\epsilon _{j}t_{n_{m_{j}}}-w_{n_{m_{j}}} \bigr\rangle \geq 0. $$
(3.28)

Besides, \(\{w_{n_{j}}\}\) converges weakly to q as \(j\rightarrow \infty \). Since A is sequentially weakly continuous on Ω, \(\{A(w_{n_{j}})\}\) converges weakly to \(A(q)\). Assume \(\|A(q)\|\neq 0\) (otherwise, q is a solution). Since the norm mapping is sequentially weakly lower semicontinuous, we obtain

$$ \liminf_{j\rightarrow \infty } \bigl\Vert A(w_{n_{j}}) \bigr\Vert \geq \bigl\Vert A(q) \bigr\Vert . $$

From \(\{w_{n_{m_{j}}}\}\subset \{w_{n_{j}}\}\) and \(\epsilon _{j}\rightarrow 0\) as \(j\rightarrow \infty \), we have

$$ 0=\frac{0}{ \Vert A(q) \Vert }\geq \lim_{j\rightarrow \infty } \frac{\epsilon _{j}}{ \Vert A(w_{n_{m_{j}}}) \Vert }=\lim _{j\rightarrow \infty } \Vert \epsilon _{j}t_{n_{m_{j}}} \Vert \geq 0. $$

Furthermore, taking the limit as \(j\rightarrow \infty \) in (3.28), we have \(\langle A(w),w-q\rangle \geq 0\) for all \(w\in \Omega \).

Thus, by Lemma 2.6, we can obtain that \(q\in VI(\Omega ,A)\). Therefore, we have \(\omega _{w}(x_{n})\subset VI(\Omega ,A)\).

From the fact that \(p=P_{VI(\Omega ,A)}0\), \(q\in VI(\Omega ,A)\), and Lemma 2.5, we obtain \(\langle p,p-q\rangle \leq 0\); hence

$$ \limsup_{n\rightarrow \infty }\langle p,p-x_{n+1}\rangle \leq 0. $$

Thus, from (3.25) and Lemma 2.7, we have \(\lim_{n\rightarrow \infty }\|x_{n}-p\|^{2}=0\). This implies that the sequence \(\{x_{n}\}\) converges strongly to p.

Case 2: There exists a subsequence \(\{\|x_{n_{j}}-p\|^{2}\}\) of \(\{\|x_{n}-p\|^{2}\}\) such that \(\|x_{n_{j}}-p\|^{2}<\|x_{n_{j}+1}-p\|^{2}\) for all \(j\in \mathbb{N}\). From Lemma 2.8, there exists a nondecreasing sequence \(\{m_{k}\}\) of \(\mathbb{N}\) such that \(\lim_{k\rightarrow \infty }m_{k}=\infty \) and the following inequalities hold for all \(k\in \mathbb{N}\):

$$ \Vert x_{m_{k}}-p \Vert ^{2}\leq \Vert x_{m_{k}+1}-p \Vert ^{2} \quad \text{and} \quad \Vert x_{k}-p \Vert ^{2}\leq \Vert x_{m_{k}+1}-p \Vert ^{2}. $$
(3.29)

It follows from Claim 2 and (3.29) that

$$\begin{aligned}& \beta _{m_{k}}\frac{2-\alpha }{\alpha } \Vert w_{m_{k}}-z_{m_{k}} \Vert ^{2} \\& \quad \leq \Vert x_{m_{k}}-p \Vert ^{2}- \Vert x_{m_{k}+1}-p \Vert ^{2}+\theta _{m_{k}}(1- \alpha _{m_{k}}) \bigl( \Vert x_{m_{k}}-p \Vert ^{2}- \Vert x_{m_{k}-1}-p \Vert ^{2}\bigr) \\& \qquad {}+2\theta _{m_{k}}(1-\alpha _{m_{k}}) \Vert x_{m_{k}}-x_{m_{k}-1} \Vert ^{2}+ \alpha _{m_{k}} \Vert p \Vert ^{2} \\& \quad \leq \theta _{m_{k}}(1-\alpha _{m_{k}}) \bigl( \Vert x_{m_{k}}-p \Vert ^{2}- \Vert x_{m_{k}-1}-p \Vert ^{2}\bigr) \\& \qquad {}+2\theta _{m_{k}}(1-\alpha _{m_{k}}) \Vert x_{m_{k}}-x_{m_{k}-1} \Vert ^{2}+ \alpha _{m_{k}} \Vert p \Vert ^{2}. \end{aligned}$$

Besides, we have

$$\begin{aligned} \Vert x_{m_{k}}-p \Vert ^{2}- \Vert x_{m_{k}-1}-p \Vert ^{2} =&\bigl( \Vert x_{m_{k}}-p \Vert - \Vert x_{m_{k}-1}-p \Vert \bigr) \bigl( \Vert x_{m_{k}}-p \Vert + \Vert x_{m_{k}-1}-p \Vert \bigr) \\ \leq & \Vert x_{m_{k}}-x_{m_{k}-1} \Vert \bigl( \Vert x_{m_{k}}-p \Vert + \Vert x_{m_{k}-1}-p \Vert \bigr). \end{aligned}$$

Thus,

$$\begin{aligned} \beta _{m_{k}}\frac{2-\alpha }{\alpha } \Vert w_{m_{k}}-z_{m_{k}} \Vert ^{2} \leq &\theta _{m_{k}}(1-\alpha _{m_{k}}) \Vert x_{m_{k}}-x_{m_{k}-1} \Vert \bigl( \Vert x_{m_{k}}-p \Vert + \Vert x_{m_{k}-1}-p \Vert \bigr) \\ &{}+2\theta _{m_{k}}(1-\alpha _{m_{k}}) \Vert x_{m_{k}}-x_{m_{k}-1} \Vert ^{2}+ \alpha _{m_{k}} \Vert p \Vert ^{2}. \end{aligned}$$

Since \(\theta _{m_{k}}(1-\alpha _{m_{k}})\|x_{m_{k}}-x_{m_{k}-1}\| \rightarrow 0\), \(\lim_{k\rightarrow \infty }\alpha _{m_{k}}=0\), we get

$$ \Vert w_{m_{k}}-z_{m_{k}} \Vert \rightarrow 0,\quad k \rightarrow \infty . $$
(3.30)

By similar arguments as those in Case 1, we have

$$\begin{aligned} \Vert x_{m_{k}+1}-p \Vert ^{2} \leq &(1-\alpha _{m_{k}}) \Vert x_{m_{k}}-p \Vert ^{2}+ \alpha _{m_{k}}\biggl[3K(1-\alpha _{m_{k}}) \frac{\theta _{m_{k}}}{\alpha _{m_{k}}} \Vert x_{m_{k}}-x_{m_{k}-1} \Vert \\ &{}+2\beta _{m_{k}} \Vert w_{m_{k}}-z_{m_{k}} \Vert \Vert x_{m_{k}+1}-p \Vert +2 \langle p,p-x_{m_{k}+1}\rangle \biggr]. \end{aligned}$$
(3.31)

Since \(\|x_{m_{k}}-p\|^{2}\leq \|x_{m_{k}+1}-p\|^{2}\) and \(\alpha _{m_{k}}>0\), we get

$$\begin{aligned} \Vert x_{m_{k}}-p \Vert ^{2} \leq &3K(1-\alpha _{m_{k}}) \frac{\theta _{m_{k}}}{\alpha _{m_{k}}} \Vert x_{m_{k}}-x_{m_{k}-1} \Vert \\ &{}+2\beta _{m_{k}} \Vert w_{m_{k}}-z_{m_{k}} \Vert \Vert x_{m_{k}+1}-p \Vert +2 \langle p,p-x_{m_{k}+1}\rangle . \end{aligned}$$

As proved in the first case, we obtain

$$ \limsup_{k\rightarrow \infty }\langle p,p-x_{m_{k}+1}\rangle \leq 0. $$

Since \(\frac{\theta _{m_{k}}}{\alpha _{m_{k}}}\|x_{m_{k}}-x_{m_{k}-1}\| \rightarrow 0\), by (3.30) and (3.31) we have

$$ \limsup_{k\rightarrow \infty } \Vert x_{m_{k}}-p \Vert ^{2}\leq 0. $$

From (3.31) we get

$$ \limsup_{k\rightarrow \infty } \Vert x_{m_{k}+1}-p \Vert ^{2}\leq 0. $$

Since \(\|x_{k}-p\|^{2}\leq \|x_{m_{k}+1}-p\|^{2}\), we have \(\limsup_{k\rightarrow \infty }\|x_{k}-p\|^{2}\leq 0\), that is, \(x_{k}\rightarrow p\) as \(k\rightarrow \infty \).

The proof is completed. □

4 Numerical experiments

In this section, we consider some numerical examples to evaluate the efficiency and advantages of our proposed algorithm in comparison with the well-known Algorithm 2 [11] and Algorithm 3.2 [12]. The projections onto Ω are computed by the function quadprog in the Matlab 7.0 Optimization Toolbox. The specific examples are given below.

The choice of parameters for each algorithm is as follows:

  • Algorithm 2: \(\gamma =1\), \(l=\mu =0.5\);

  • Algorithm 3.2: \(\tau _{0}=1\), \(\mu =0.9\), \(\alpha _{n}=\frac{1}{(n+1)^{p}}\) (\(p=0.7\) or 1), \(\beta _{n}=\frac{1-\alpha _{n}}{2}\);

  • Algorithm 1: \(\gamma =1\), \(l=\mu =0.3\), \(\theta =0.35\), \(\theta _{n}=\bar{\theta }_{n}\), \(\alpha =1.8\), \(\alpha _{n}=\frac{1}{(n+1)^{p}}\) (\(p=0.7\) or 1), \(\beta _{n}=\frac{1-\alpha _{n}}{2}\).

Example 4.1

Let \(\Omega =[-2,5]\), \(H=\mathbb{R}\). We consider the problem for \(A:\Omega \rightarrow \mathbb{R}\) defined by

$$ Ax:=x+\sin x. $$

For all \(x,y\in \Omega \), we have

$$\begin{aligned}& \Vert Ax-Ay \Vert = \Vert x+\sin x-y-\sin y \Vert \leq \Vert x-y \Vert + \Vert \sin x-\sin y \Vert \leq 2 \Vert x-y \Vert , \\& \langle Ax-Ay,x-y\rangle =(x+\sin x-y-\sin y) (x-y)=(x-y)^{2}+(\sin x- \sin y) (x-y)\geq 0. \end{aligned}$$

Therefore, the operator A is monotone and 2-Lipschitz continuous, so A satisfies the assumptions of Theorem 3.4. Besides, \(VI(\Omega ,A)=\{0\}\neq \emptyset \). The starting points are \(x_{0}=x_{1}\in \{1,2,3\}\subset \Omega \); we denote \(x^{*}=0\) and take \(\|x_{n}-x^{*}\|\leq 10^{-i}\) (\(i=3,5,7\)) as the stopping criterion for Algorithm 1. The numerical results for the example are shown in Tables 1 and 2.
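A hypothetical driver for this example, reusing the algorithm1 sketch from Sect. 3 (again our own illustration; the paper's experiments use Matlab):

```python
import numpy as np

A = lambda x: x + np.sin(x)
proj = lambda x: np.clip(x, -2.0, 5.0)  # P_Omega for Omega = [-2, 5]

x = algorithm1(A, proj, x0=np.array([1.0]), x1=np.array([1.0]), n_iters=300)
print(np.linalg.norm(x))                # distance to x* = 0
```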

Table 1 Experiment with \(\alpha _{n}=\frac{1}{(n+1)^{0.7}}\) for Example 4.1
Table 2 Experiment with \(\alpha _{n}=\frac{1}{n+1}\) for Example 4.1

From Tables 1 and 2, we can easily observe that Algorithm 1 requires fewer iterations than the previously studied Algorithm 2 [11] and Algorithm 3.2 [12].

Example 4.2

We consider the linear operator \(A:\mathbb{R}^{m}\rightarrow \mathbb{R}^{m}\) (\(m=10,20,30\)) of the form \(A(x)=Mx+q\) [20, 21], where

$$ M=NN^{T}+S+D, $$

N is an \(m\times m\) matrix, S is an \(m\times m\) skew-symmetric matrix, D is an \(m\times m\) diagonal matrix whose diagonal entries are nonnegative, and \(q\in \mathbb{R}^{m}\) is a vector; therefore, M is positive definite. The feasible set is

$$ \Omega =\bigl\{ x=(x_{1},\ldots ,x_{m})\in \mathbb{R}^{m}:-2\leq x_{i} \leq 5, i=1,2,\ldots ,m\bigr\} . $$

Obviously, the operator A is monotone and Lipschitz continuous. For the experiments, q is the zero vector, all entries of N and S are generated randomly and uniformly in \([-2,2]\), and the diagonal entries of D are generated in \((0,2)\). We choose \(x_{0}=x_{1}=(1,1,\ldots ,1)\in \mathbb{R}^{m}\) and \(\alpha _{n}=\frac{1}{(n+1)^{p}}\) (\(p=0.7\)). Besides, it is easy to see that \(VI(\Omega ,A)=\{(0,0,\ldots ,0)^{T}\}\neq \emptyset \); we denote \(x^{*}=(0,0,\ldots ,0)^{T}\) and take \(\|x_{n}-x^{*}\|\leq 10^{-4}\) as the stopping criterion. The results are described in Figs. 1, 2, and 3.
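A sketch of the random test data (the distributions follow the text; the seed and the reuse of the algorithm1 sketch are our own hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 10
N = rng.uniform(-2.0, 2.0, (m, m))
U = np.triu(rng.uniform(-2.0, 2.0, (m, m)), k=1)
S = U - U.T                            # skew-symmetric, entries in [-2, 2]
D = np.diag(rng.uniform(0.0, 2.0, m))  # positive diagonal entries
M = N @ N.T + S + D                    # positive definite
q = np.zeros(m)

A = lambda x: M @ x + q
proj = lambda x: np.clip(x, -2.0, 5.0)  # projection onto the box Omega
x = algorithm1(A, proj, np.ones(m), np.ones(m), n_iters=1000)
print(np.linalg.norm(x))                # distance to x* = (0, ..., 0)
```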

Figure 1

Experiment with \(m=10\) for Example 4.2

Figure 2

Experiment with \(m=20\) for Example 4.2

Figure 3

Experiment with \(m=30\) for Example 4.2

Figures 1, 2, and 3 confirm that the proposed algorithm has a competitive advantage over the existing Algorithm 2 [11] and Algorithm 3.2 [12].

Example 4.3

Let H be the function space \(L^{2}([0,1])\) with the inner product \(\langle x,y\rangle :=\int _{0}^{1}x(t)y(t)\,dt\) and the induced norm \(\|x\|:=(\int _{0}^{1}|x(t)|^{2}\,dt)^{\frac{1}{2}}\). The mapping A is defined by

$$ (Ax) (t)=\max \bigl\{ 0,x(t)\bigr\} =\frac{x(t)+ \vert x(t) \vert }{2},\quad \forall x\in H. $$

It is easy to show that the operator A is monotone and 1-Lipschitz continuous:

$$\begin{aligned} \langle Ax-Ay,x-y\rangle =& \int _{0}^{1}\bigl(Ax(t)-Ay(t)\bigr) \bigl(x(t)-y(t)\bigr)\,dt \\ =& \int _{0}^{1}\frac{x(t)-y(t)+ \vert x(t) \vert - \vert y(t) \vert }{2}\bigl(x(t)-y(t) \bigr)\,dt \\ =& \int _{0}^{1}\frac{1}{2}\bigl[ \bigl(x(t)-y(t)\bigr)^{2}+\bigl( \bigl\vert x(t) \bigr\vert - \bigl\vert y(t) \bigr\vert \bigr) \bigl(x(t)-y(t)\bigr)\bigr]\,dt \\ \geq &0. \end{aligned}$$

Thus, the operator A is monotone.

$$\begin{aligned} \Vert Ax-Ay \Vert ^{2} =& \int _{0}^{1} \bigl\vert Ax(t)-Ay(t) \bigr\vert ^{2}\,dt \\ =& \int _{0}^{1} \biggl\vert \frac{x(t)-y(t)+ \vert x(t) \vert - \vert y(t) \vert }{2} \biggr\vert ^{2}\,dt \\ =&\frac{1}{4} \int _{0}^{1} \bigl\vert x(t)-y(t)+\bigl( \bigl\vert x(t) \bigr\vert - \bigl\vert y(t) \bigr\vert \bigr) \bigr\vert ^{2}\,dt \\ \leq &\frac{1}{4} \int _{0}^{1}\bigl(2 \bigl\vert x(t)-y(t) \bigr\vert \bigr)^{2}\,dt \\ =& \Vert x-y \Vert ^{2}, \end{aligned}$$

where we used \(\vert \vert x(t)\vert -\vert y(t)\vert \vert \leq \vert x(t)-y(t)\vert \). Therefore, the operator A is 1-Lipschitz continuous.

The feasible set is the unit ball \(\Omega :=\{x\in H:\|x\|\leq 1\}\). We choose \(x_{0}^{1}=x_{1}^{1}=t^{2}\), \(x_{0}^{2}=x_{1}^{2}=\frac{2^{t}}{16}\), \(x_{0}^{3}=x_{1}^{3}=e^{-t}\), \(x_{0}^{4}=x_{1}^{4}=t+0.5\cos t\) as initial values, and we use the condition \(\|x_{n}-x^{*}\|\leq 10^{-i}\) (\(i=2,3\)) as the stopping criterion, where \(x^{*}(t)=0\) is the solution found by the algorithm. The numerical results are presented in Tables 3 and 4; we mainly report the number of iterations and the execution time to verify effectiveness.
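A discretized sketch of this example (an assumption of ours: functions in \(L^{2}([0,1])\) are approximated on a uniform grid; the scaling factor \(\sqrt{h}\) relating the Euclidean and discrete \(L^{2}\) norms cancels in the ratios used by algorithm1):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
h = t[1] - t[0]
l2_norm = lambda x: np.sqrt(h * np.sum(x ** 2))  # discrete L^2([0,1]) norm

A = lambda x: np.maximum(0.0, x)                 # (Ax)(t) = max{0, x(t)}

def proj_ball(x):
    """Projection onto the unit ball {x : ||x|| <= 1} in L^2."""
    nx = l2_norm(x)
    return x if nx <= 1.0 else x / nx

x0 = t ** 2                                      # initial value x_0^1 = t^2
x = algorithm1(A, proj_ball, x0, x0, n_iters=300)
print(l2_norm(x))                                # should approach ||x*|| = 0
```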

Table 3 Experiment with \(\alpha _{n}=\frac{1}{(n+1)^{0.7}}\) for Example 4.3
Table 4 Experiment with \(\alpha _{n}=\frac{1}{n+1}\) for Example 4.3

5 Conclusion

In this paper, we presented a new algorithm based on the inertial projection and contraction method for solving pseudomonotone variational inequality problems in a real Hilbert space. Under suitable conditions, we proved that Algorithm 1, which employs a self-adaptive technique, converges strongly. Moreover, it is worth underlining that Algorithm 1 does not require knowledge of the Lipschitz constant of the operator A. Finally, some numerical experiments were given to illustrate the advantages of the proposed algorithm compared with previously known algorithms.

Availability of data and materials

Not applicable.

References

  1. An, N.T.: Solving k-center problems involving sets based on optimization techniques. J. Glob. Optim. 76, 189–209 (2020)

  2. Sahu, D.R., Yao, J.C., Verma, M., Shukla, K.K.: Convergence rate analysis of proximal gradient methods with applications to composite minimization problems. Optimization 70(1), 75–100 (2021)

  3. Cuong, T.H., Yao, J.C., Yen, N.D.: Qualitative properties of the minimum sum-of-squares clustering problem. Optimization 69(9), 2131–2154 (2020)

  4. Qin, X., An, N.T.: Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets. Comput. Optim. Appl. 74, 821–850 (2019)

  5. Xiu, N.H., Zhang, J.Z.: Some recent advances in projection-type methods for variational inequalities. J. Comput. Appl. Math. 152, 559–585 (2003)

6. Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Èkon. Mat. Metody 12, 747–756 (1976)

  7. Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446 (2000)

  8. He, B.: A class of projection and contraction methods for monotone variational inequalities. Appl. Math. Optim. 35, 69–76 (1997)

  9. Sun, D.F.: A class of iterative methods for solving nonlinear projection equations. J. Optim. Theory Appl. 91, 123–140 (1996)

  10. Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9, 3–11 (2001)

  11. Thong, D.V., Hieu, D.V.: Weak and strong convergence theorems for variational inequality problems. Numer. Algorithms 78, 1045–1060 (2017)

  12. Thong, D.V., Hieu, D.V.: Strong convergence of extragradient methods with a new step size for solving variational inequality problems. Comput. Appl. Math. 38, 136 (2019)

13. Takahashi, W.: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)

14. Maingé, P.E.: The viscosity approximation process for quasi-nonexpansive mappings in Hilbert space. Comput. Math. Appl. 59, 74–79 (2010)

  15. Khanh, P.D., Vuong, P.T.: Modified projection method for strongly pseudomonotone variational inequalities. J. Glob. Optim. 58, 341–350 (2014)

16. Thong, D.V., Yang, J., Cho, Y.J., Rassias, T.M.: Explicit extragradient-like method with adaptive stepsizes for pseudomonotone variational inequalities. Optim. Lett. (2021). https://doi.org/10.1007/s11590-020-01678-w

  17. Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York (1984)

  18. Cottle, R.W., Yao, J.C.: Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 75, 281–295 (1992)

  19. Xu, H.K.: Iterative algorithm for nonlinear operators. J. Lond. Math. Soc. 66, 240–256 (2002)

20. Maingé, P.E.: A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 47, 1499–1515 (2008)

  21. Harker, P.T., Pang, J.S.: A damped-Newton method for the linear complementarity problem. Lect. Appl. Math. 26, 265–284 (1990)

Acknowledgements

Not applicable.

Funding

This work was supported by Tianjin Key Lab for Advanced Signal Processing, Civil Aviation University of China (No. 2019ASP-TJ02).

Author information

Contributions

All the authors read and approved the final manuscript.

Corresponding author

Correspondence to Ming Tian.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

About this article

Cite this article

Tian, M., Xu, G. Improved inertial projection and contraction method for solving pseudomonotone variational inequality problems. J Inequal Appl 2021, 107 (2021). https://doi.org/10.1186/s13660-021-02643-6
