Research | Open access
Improved inertial projection and contraction method for solving pseudomonotone variational inequality problems
Journal of Inequalities and Applications volume 2021, Article number: 107 (2021)
Abstract
The objective of this article is to solve pseudomonotone variational inequality problems in a real Hilbert space. We introduce an inertial algorithm with a new self-adaptive step size rule, which is based on the projection and contraction method. Only one step projection is used to design the proposed algorithm, and the strong convergence of the iterative sequence is obtained under some appropriate conditions. The main advantage of the algorithm is that the proof of convergence of the algorithm is implemented without the prior knowledge of the Lipschitz constant of cost operator. Numerical experiments are also put forward to support the analysis of the theorem and provide comparisons with related algorithms.
1 Introduction
Let H be a real Hilbert space with the scalar product \(\langle \cdot ,\cdot \rangle \) and the induced norm \(\|\cdot \|\). Let Ω be a nonempty, closed, and convex subset of H and \(A:H\rightarrow H\) be a nonlinear operator. Let \(\mathbb{N}\) and \(\mathbb{R}\) be the sets of positive integers and real numbers, respectively.
The variational inequality problem (VIP) for A on Ω is to find a point \(x^{*}\in \Omega \) such that

$$ \bigl\langle Ax^{*},x-x^{*}\bigr\rangle \geq 0,\quad \forall x\in \Omega . $$
Problem (VIP) is an important problem of nonlinear analysis and captures multiple applications arising in diverse areas such as signal processing, transportation, machine learning, and medical imaging; see, e.g., [1–4]. From now on, the set of solutions of the variational inequality problem is denoted by \(VI(\Omega ,A)\).
Numerous methods have been developed in the literature for solving the variational inequality problem. Among them, the regularization method and the projection method can be used to solve the problem under suitable conditions; in the following, we focus on the projection method. In particular, a point \(x\in \Omega \) is a solution of \(VI(\Omega ,A)\) if and only if it solves the fixed point problem

$$ x=P_{\Omega }(x-\lambda Ax), $$

where \(P_{\Omega }:H\rightarrow \Omega \) is the metric projection and \(\lambda >0\). Thus, we consider the sequence \(\{x_{n}\}\) generated by the following iteration formula:

$$ x_{n+1}=P_{\Omega }(x_{n}-\lambda Ax_{n}), $$
where the operator \(A:H\rightarrow H\) is η-strongly monotone, L-Lipschitz continuous, and \(\lambda \in (0,2\eta /L^{2})\). Then the iteration formula has strong convergence results under appropriate conditions of parameters, and \(VI(\Omega ,A)\) has a unique solution. Besides, if A is inverse strongly monotone, it has weak convergence results under certain conditions, see, e.g., [5].
To avoid this strong assumption, Korpelevich [6] proposed the extragradient method in 1976:

$$ \textstyle\begin{cases} y_{n}=P_{\Omega }(x_{n}-\lambda Ax_{n}), \\ x_{n+1}=P_{\Omega }(x_{n}-\lambda Ay_{n}), \end{cases} $$

where \(\lambda \in (0,1/L)\) and the operator A is monotone and L-Lipschitz continuous in a Hilbert space.
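As a toy illustration (the operator, the ball constraint, and the step size below are our own choices, not from the paper), the two projections per iteration of the extragradient method can be coded as:

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Metric projection onto the closed ball {x : ||x|| <= r}."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

# A(x) = Mx; the symmetric part of M is 2I, so A is strongly monotone,
# and A is L-Lipschitz with L = ||M||_2 = sqrt(5).
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
A = lambda x: M @ x

lam = 0.3                       # must lie in (0, 1/L) = (0, 0.447...)
x = np.array([0.9, -0.6])
for _ in range(200):
    y = proj_ball(x - lam * A(x))    # extrapolation step (1st projection)
    x = proj_ball(x - lam * A(y))    # correction step (2nd projection)

print(np.linalg.norm(x))  # -> approaches 0, the unique solution
```

Since A here is strongly monotone, the iterates converge linearly to the unique solution \(x^{*}=0\) of the toy problem.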
Observe that the conditions of the extragradient method are weaker, but the algorithm still needs to compute two projections from H onto the closed convex set Ω per iteration. If the feasible set Ω is a general closed convex set with a complicated structure, a major expenditure of computation time and effort might be needed, which can severely degrade the efficiency of the extragradient method.
One of the methods which mitigates this obstacle is the modified forward-backward splitting method introduced by Tseng [7] in 2000, which requires only one projection per iteration onto the feasible set Ω. Given the current iteration \(x_{n}\), calculate the next iteration \(x_{n+1}\) via

$$ \textstyle\begin{cases} y_{n}=P_{\Omega }(x_{n}-\lambda Ax_{n}), \\ x_{n+1}=y_{n}-\lambda (Ay_{n}-Ax_{n}), \end{cases} $$

where \(\lambda \in (0,1/L)\) and the operator A is monotone and L-Lipschitz continuous. The weak convergence of the iterative sequence \(\{x_{n}\}\) generated by this algorithm was proved.
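Tseng's scheme needs only the single projection in the first line of the loop; a minimal sketch on a toy problem (the operator, constraint, and step size are illustrative assumptions, not from the paper):

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Metric projection onto the closed ball {x : ||x|| <= r}."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

M = np.array([[2.0, 1.0], [-1.0, 2.0]])   # monotone: symmetric part is 2I
A = lambda x: M @ x
lam = 0.3                                  # in (0, 1/L), L = ||M||_2 = sqrt(5)

x = np.array([0.9, -0.6])
for _ in range(200):
    y = proj_ball(x - lam * A(x))   # the only projection per iteration
    x = y - lam * (A(y) - A(x))     # explicit forward correction, no projection

print(np.linalg.norm(x))  # -> approaches 0
```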
The second method which conquers this hindrance is the projection and contraction method (PCM) of He [8] and Sun [9]:

$$ \textstyle\begin{cases} y_{n}=P_{\Omega }(x_{n}-\lambda _{n}Ax_{n}), \\ x_{n+1}=x_{n}-\alpha \eta _{n}d(x_{n},y_{n}), \end{cases} $$

where \(\alpha \in (0,2)\), \(\lambda _{n}\in (0,1/L)\) (or \(\lambda _{n}\) is updated by some self-adaptive rule),

$$ d(x_{n},y_{n})=x_{n}-y_{n}-\lambda _{n}(Ax_{n}-Ay_{n}), $$

and

$$ \eta _{n}=\frac{\langle x_{n}-y_{n},d(x_{n},y_{n})\rangle }{ \Vert d(x_{n},y_{n}) \Vert ^{2}}. $$
The method (PCM) also requires only one step projection onto the feasible set Ω in each iteration, and the sequence \(\{x_{n}\}\) generated by the PCM converges weakly to a point in \(VI(\Omega ,A)\) under suitable conditions.
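A minimal sketch of the PCM iteration, assuming a toy linear operator and a ball constraint (these choices, and the parameters λ = 0.3, α = 1.5, are ours for illustration):

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Metric projection onto the closed ball {x : ||x|| <= r}."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

M = np.array([[2.0, 1.0], [-1.0, 2.0]])
A = lambda x: M @ x
lam, alpha = 0.3, 1.5          # lam in (0, 1/L), alpha in (0, 2)

x = np.array([0.9, -0.6])
for _ in range(100):
    y = proj_ball(x - lam * A(x))          # one projection per iteration
    d = x - y - lam * (A(x) - A(y))        # d(x_n, y_n)
    nd2 = np.dot(d, d)
    if nd2 == 0:                           # x = y: x already solves the VI
        break
    eta = np.dot(x - y, d) / nd2           # contraction step length
    x = x - alpha * eta * d

print(np.linalg.norm(x))  # -> approaches 0
```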
Next, let us mention an algorithm of the inertial form, which is based upon a discrete version of a second order dissipative dynamical system in time. In [10], Alvarez and Attouch introduced the inertial proximal method (IPM) to find a zero of a maximal monotone operator. The method is of the form:
Find \(x_{n+1}\in H\) such that \(0\in \lambda _{n}A(x_{n+1})+x_{n+1}-x_{n}-\theta _{n}(x_{n}-x_{n-1})\), where \(x_{n-1}, x_{n}\in H\), \(\theta _{n}\in [0,1)\) and \(\lambda _{n}>0\). It can also be expressed in the following form:

$$ x_{n+1}=J_{\lambda _{n}}^{A}\bigl(x_{n}+\theta _{n}(x_{n}-x_{n-1})\bigr), $$
where \(J_{\lambda _{n}}^{A}\) is the resolvent of A with parameter \(\lambda _{n}\) and the inertia is produced by the term \(\theta _{n}(x_{n}-x_{n-1})\). It is worth emphasizing the advantage of the inertial method, which can speed up the convergence properties of the original algorithm.
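A toy illustration of the inertial proximal step: for the maximal monotone operator \(A(x)=x\) (the gradient of \(\|x\|^{2}/2\)) the resolvent has the closed form \(J_{\lambda }(x)=x/(1+\lambda )\); the operator and the parameters below are illustrative choices, not from the paper:

```python
import numpy as np

# Resolvent J_lam = (I + lam*A)^(-1) of A(x) = x has a closed form.
J = lambda x, lam: x / (1.0 + lam)

lam, theta = 1.0, 0.3
x_prev = x = np.array([5.0, -3.0])
for _ in range(100):
    w = x + theta * (x - x_prev)      # inertial extrapolation
    x_prev, x = x, J(w, lam)          # proximal (resolvent) step

print(np.linalg.norm(x))  # -> tends to 0, the unique zero of A
```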
In 2017, Thong and Hieu [11] proposed a self-adaptive algorithm based on Tseng’s extragradient method [7]; the algorithm is described as follows.
Algorithm 2
(Tseng’s extragradient method)
- Step 1: Choose \(x_{0}\in H\), \(\gamma >0\), \(l\in (0,1)\), \(\mu \in (0,1)\).
- Step 2: Given the current iteration \(x_{n}\), compute
  $$ y_{n}=P_{\Omega }(x_{n}-\lambda _{n} Ax_{n}), $$
  where \(\lambda _{n}\) is chosen to be the largest \(\lambda \in \{\gamma ,\gamma l,\gamma l^{2},\ldots \}\) satisfying
  $$ \lambda \Vert Ax_{n}-Ay_{n} \Vert \leq \mu \Vert x_{n}-y_{n} \Vert . $$
  If \(y_{n}=x_{n}\), then stop: \(x_{n}\) is a solution of the variational inequality problem. Otherwise:
- Step 3: Compute the new iteration \(x_{n+1}\) via the following iterate formula:
  $$ x_{n+1}=y_{n}-\lambda _{n}(Ay_{n}-Ax_{n}). $$
  Set \(n:=n+1\) and return to Step 2.
The advantage of this iterative algorithm is that it does not need knowledge of the Lipschitz constant of the operator A; it is a new self-adaptive method.
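The Armijo-type step size search of Step 2 can be sketched as follows; `proj_ball` and the linear operator are illustrative stand-ins for \(P_{\Omega }\) and A:

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Projection onto {x : ||x|| <= r}, a stand-in for P_Omega."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def armijo_step(A, x, gamma=1.0, l=0.5, mu=0.5, max_backtracks=60):
    """Largest lam in {gamma, gamma*l, gamma*l**2, ...} such that
    lam * ||A x - A y|| <= mu * ||x - y|| with y = P(x - lam * A x)."""
    lam = gamma
    for _ in range(max_backtracks):
        y = proj_ball(x - lam * A(x))
        if lam * np.linalg.norm(A(x) - A(y)) <= mu * np.linalg.norm(x - y):
            return lam, y
        lam *= l
    return lam, proj_ball(x - lam * A(x))

M = np.array([[2.0, 1.0], [-1.0, 2.0]])    # Lipschitz with L = sqrt(5)
A = lambda x: M @ x
lam, y = armijo_step(A, np.array([0.9, -0.6]))
print(lam)  # -> 0.125: the first lam in {1, 0.5, 0.25, ...} with lam*sqrt(5) <= 0.5
```

For this linear A, \(\|Ax-Ay\|=\sqrt{5}\,\|x-y\|\) exactly, so the backtracking stops as soon as \(\lambda \sqrt{5}\leq \mu \).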
Under adequate conditions, the sequences \(\{x_{n}\}\) generated by Tseng’s extragradient method, the projection and contraction method (PCM), and Algorithm 2 all converge weakly to an element of \(VI(\Omega ,A)\). Since weak convergence is often not sufficient in applications, many modifications have been proposed so that strong convergence is guaranteed.
In 2019, Thong and Hieu [12] proposed a self-adaptive algorithm based on a Mann-type Tseng’s extragradient method; the algorithm is as follows.
Algorithm 3.2
Initialization: Given \(\tau _{0}>0\), \(\mu \in (0,1)\). Let \(x_{0}\in H\) be arbitrary.
Iterative Steps: Calculate \(x_{n+1}\) as follows:
Step 1: Compute
If \(x_{n}=y_{n}\), then stop, and \(y_{n}\) is a solution of \(VI(\Omega ,A)\). Otherwise:
Step 2: Compute
and
where \(z_{n}=y_{n}-\tau _{n}(Ay_{n}-Ax_{n})\).
Set \(n:=n+1\) and go to Step 1.
Note that only one projection per iteration is required by the algorithm, and a strong convergence theorem is proved. Besides, this algorithm was studied with a self-adaptive technique so that the conditions imposed on the cost operator can be relaxed.
In this paper, motivated and inspired by the results of Tseng [7], Thong and Hieu [11, 12], and by the ongoing research in these directions, we introduce a new algorithm for solving the (VIP) involving a pseudomonotone and Lipschitz continuous operator. The algorithm combines the inertial technique with the projection and contraction method (PCM). It uses a new step size rule, updated at each iteration, which allows the algorithm to work without knowledge of the Lipschitz constant of the cost operator. The rule only needs a simple computation, which may increase the efficiency of the algorithm, and the strong convergence of the algorithm is established. Under several appropriate conditions on the parameters, we prove that the sequence \(\{x_{n}\}\) generated by the new algorithm converges strongly to a minimum-norm solution.
To this end, several numerical examples are presented to illustrate the performances and accuracies of our introduced new algorithm and provide comparisons with previously known algorithms.
This paper is organized as follows: In Sect. 2, some definitions and lemmas are recalled for further use. In Sect. 3, the convergence of the proposed algorithm is proved. In Sect. 4, we consider some numerical examples and comparisons.
2 Preliminaries
Assume that H is a real Hilbert space and Ω is a nonempty closed convex subset of H. In this paper, we use the following notations:
- → denotes strong convergence.
- ⇀ denotes weak convergence.
- \(\omega _{w}(x_{n}):=\{x \mid \text{there exists } \{x_{n_{j}}\}_{j=0}^{\infty }\subset \{x_{n}\}_{n=0}^{\infty } \text{ such that } x_{n_{j}}\rightharpoonup x\}\) denotes the set of weak cluster points of \(\{x_{n}\}_{n=0}^{\infty }\).
Lemma 2.1
([13])
Let H be a real Hilbert space. For all \(x,y\in H\) and \(\lambda \in \mathbb{R}\), we have
- (i) \(\|x+y\|^{2}=\|x\|^{2}+\|y\|^{2}+2\langle x,y\rangle \);
- (ii) \(\|\lambda x+(1-\lambda )y\|^{2}=\lambda \|x\|^{2}+(1-\lambda )\|y\|^{2}-\lambda (1-\lambda )\|x-y\|^{2}\);
- (iii) \(\|x+y\|^{2}\leq \|x\|^{2}+2\langle y,x+y\rangle \).
Next, we present some concepts of an operator.
Definition 2.2
([14])
An operator \(A:H\rightarrow H\) is said to be:
- (i) monotone if
  $$ \langle x-y,Ax-Ay\rangle \geq 0,\quad \forall x,y\in H; $$
- (ii) pseudomonotone if
  $$ \langle Ay,x-y\rangle \geq 0\quad \Rightarrow\quad \langle Ax,x-y\rangle \geq 0,\quad \forall x,y\in H; $$
- (iii) L-Lipschitz continuous with \(L>0\) if
  $$ \Vert Ax-Ay \Vert \leq L \Vert x-y \Vert ,\quad \forall x,y\in H; $$
- (iv) sequentially weakly continuous if, for each sequence \(\{x_{n}\}\), \(x_{n}\rightharpoonup x\) implies \(A(x_{n})\rightharpoonup Ax\).
From Definition 2.2, we can see that every monotone operator A is pseudomonotone, but the converse is not true. Next, we present an example of the variational inequality problem in an infinite dimensional space.
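Before the infinite-dimensional example, here is a quick finite-dimensional check of this gap; the operator \(A(x)=1/(1+x^{2})\) on \(\mathbb{R}\) is our own illustrative choice (it is everywhere positive, hence pseudomonotone, but strictly decreasing on \([0,\infty )\), hence not monotone):

```python
import numpy as np

# A(x) = 1/(1 + x^2): since A > 0 everywhere, <Ay, x - y> >= 0 iff x >= y,
# which implies <Ax, x - y> >= 0, so A is pseudomonotone.
A = lambda x: 1.0 / (1.0 + x * x)

# Monotonicity fails: pick 0 < y < x, where A is strictly decreasing.
x, y = 2.0, 1.0
print((A(x) - A(y)) * (x - y))   # -> negative, so A is not monotone

# Pseudomonotonicity (as an implication) holds on a grid of test pairs.
grid = np.linspace(-3, 3, 25)
ok = all((A(y) * (x - y) < 0) or (A(x) * (x - y) >= 0)
         for x in grid for y in grid)
print(ok)  # -> True
```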
Example 2.3
Let H be a Hilbert space,
The inner product and the norm on H are given as follows:
for any \(u=(u_{1},u_{2},\ldots ,u_{i},\ldots )\), \(v=(v_{1},v_{2},\ldots ,v_{i}, \ldots )\in H\). Let \(\alpha ,\beta \in \mathbb{R}\) be such that \(\beta >\alpha >\frac{\beta }{2}>0\), and let
Since \(0\in VI(\Omega ,A)\), we can see that \(VI(\Omega ,A)\neq \emptyset \).
Besides, we let
It is easy to see that A is pseudomonotone, \((\beta +2\alpha )\)-Lipschitz continuous on \(\Omega _{\alpha }\) and A fails to be a monotone mapping on H (see [15], Example 4.1).
Next, we show that \(\Omega \subset \Omega _{\alpha }\). Let \(u=(u_{1},u_{2},\ldots ,u_{i},\ldots )\in \Omega \). From that we can get
which implies that \(\|u\|\leq \alpha \), thus \(u\in \Omega _{\alpha }\) and \(\Omega \subset \Omega _{\alpha }\).
Moreover, since \(\Omega \subset \Omega _{\alpha }\), we know that A is pseudomonotone and \((\beta +2\alpha )\)-Lipschitz continuous on Ω. Besides, Ω is compact and A is continuous on H, thus we have that A is sequentially weakly continuous on Ω (see [16], Example 1).
In the following, we gather some characteristic properties of \(P_{\Omega }\).
Lemma 2.4
([17])
Let Ω be a closed convex subset in a real Hilbert space H, \(x\in H\). Then
- (i) \(\|P_{\Omega }x-P_{\Omega }y\|^{2}\leq \langle x-y,P_{\Omega }x-P_{\Omega }y\rangle \), \(\forall y\in \Omega \);
- (ii) \(\|x-P_{\Omega }x\|^{2}+\|y-P_{\Omega }x\|^{2}\leq \|x-y\|^{2}\), \(\forall y\in \Omega \).
Lemma 2.5
([17])
Let H be a real Hilbert space and Ω be a nonempty closed convex subset of H. Given \(x\in H\) and \(z\in \Omega \), then \(z=P_{\Omega }x\) if and only if there holds the inequality \(\langle x-z,y-z\rangle \leq 0\), \(\forall y\in \Omega \).
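The characterization in Lemma 2.5 can be checked numerically when Ω is the closed unit ball, whose projection has a closed form; the random test points below are an illustrative choice:

```python
import numpy as np

def proj_ball(x, r=1.0):
    """P_Omega for Omega = {x : ||x|| <= r}."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

rng = np.random.default_rng(0)
x = rng.normal(size=3) * 3.0
z = proj_ball(x)

# Lemma 2.5: z = P_Omega(x) iff <x - z, y - z> <= 0 for all y in Omega.
ys = [proj_ball(rng.normal(size=3) * 3.0) for _ in range(1000)]
print(max(np.dot(x - z, y - z) for y in ys))  # -> <= 0 (up to rounding)
```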
Lemma 2.6
(Minty [18])
Consider problem \(VI(\Omega ,A)\) with Ω a nonempty, closed, convex subset of a real Hilbert space H and \(A:\Omega \rightarrow H\) pseudomonotone and continuous. Then \(x^{*}\) is a solution of \(VI(\Omega ,A)\) if and only if \(\langle x-x^{*},Ax\rangle \geq 0\), \(\forall x\in \Omega \).
Lemma 2.7
([19])
Let \(\{a_{n}\}\) be a sequence of nonnegative real numbers. Suppose that

$$ a_{n+1}\leq (1-\alpha _{n})a_{n}+\alpha _{n}\delta _{n} $$

for each \(n>0\), where the sequence \(\{\alpha _{n}\}\subset (0,1)\), \(\sum_{n=1}^{\infty }\alpha _{n}=\infty \), and \(\limsup_{n\rightarrow \infty }\delta _{n}\leq 0\).
Then \(\lim_{n\rightarrow \infty }a_{n}=0\).
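A quick numerical illustration of Lemma 2.7, taking equality in the recursion \(a_{n+1}=(1-\alpha _{n})a_{n}+\alpha _{n}\delta _{n}\) with the illustrative choices \(\alpha _{n}=\delta _{n}=1/(n+1)\):

```python
# alpha_n = 1/(n+1) lies in (0,1) and sum alpha_n diverges;
# delta_n = 1/(n+1) -> 0, so limsup delta_n <= 0 holds.
a, N = 5.0, 100_000
for n in range(1, N):
    alpha = 1.0 / (n + 1)
    delta = 1.0 / (n + 1)
    a = (1 - alpha) * a + alpha * delta

print(a)  # -> a_n behaves like log(n)/n here, already well below 1e-3
```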
Lemma 2.8
([20])
Let \(\{a_{n}\}\) be a sequence of nonnegative real numbers such that there exists a subsequence \(\{a_{n_{j}}\}\) of \(\{a_{n}\}\) with \(a_{n_{j}}< a_{n_{j}+1}\) for all \(j\in \mathbb{N}\). Then there exists a nondecreasing sequence \(\{m_{k}\}\) of positive integers such that \(\lim_{k\rightarrow \infty }m_{k}=\infty \) and the following properties are satisfied for all sufficiently large \(k\in \mathbb{N}\): \(a_{m_{k}}\leq a_{m_{k}+1}\) and \(a_{k}\leq a_{m_{k}+1}\). In fact, \(m_{k}\) is the largest number n in the set \(\{1,2,\ldots ,k\}\) such that \(a_{n}< a_{n+1}\).
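The index sequence \(m_{k}\) of Lemma 2.8 can be computed directly; the oscillating nonnegative test sequence below is an illustrative choice:

```python
import math

# An illustrative nonnegative sequence with infinitely many "increase" indices.
a = [abs(math.sin(n)) / (1 + n // 5) for n in range(200)]

for k in range(1, len(a) - 1):
    inc = [n for n in range(1, k + 1) if a[n] < a[n + 1]]
    if not inc:                 # m_k is undefined until the first increase
        continue
    m = max(inc)                # m_k = largest n <= k with a_n < a_{n+1}
    assert a[m] <= a[m + 1] and a[k] <= a[m + 1]

print("properties of Lemma 2.8 verified on the test sequence")
```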
3 Main results
In this section, we propose a strongly convergent algorithm for solving pseudomonotone variational inequality problems. Under mild assumptions, the sequence generated by the proposed method converges strongly to \(p\in VI(\Omega ,A)\), where \(\|p\|=\min \{\|z\|: z\in VI(\Omega ,A)\}\). Our algorithm is described as follows.
Algorithm 1
Given \(\gamma >0\), \(l\in (0,1)\), \(\mu \in (0,1)\), \(\alpha \in (0,2)\), \(\theta \in (0,1)\). Let \(x_{0},x_{1}\in \Omega \) be arbitrarily fixed. Calculate \(x_{n+1}\) as follows:
where the sequences \(\{\epsilon _{n}\}\), \(\{\theta _{n}\}\), \(\{\tau _{n}\}\), \(\{\eta _{n}\}\), \(\{\alpha _{n}\}\), and \(\{\beta _{n}\}\) satisfy the following conditions:
- (a) \(\{\alpha _{n}\}\subset (0,1)\), \(\lim_{n\rightarrow \infty }\alpha _{n}=0\), \(\sum_{n=1}^{\infty }\alpha _{n}=\infty \);
- (b) \(\epsilon _{n}=o(\alpha _{n})\), i.e., \(\lim_{n\rightarrow \infty }(\epsilon _{n}/\alpha _{n})=0\);
- (c) \(\theta _{n}\) is chosen such that \(0\leq \theta _{n}\leq \bar{\theta }_{n}\), where
  $$ \bar{\theta }_{n}=\textstyle\begin{cases} \min \{\theta ,\frac{\epsilon _{n}}{ \Vert x_{n}-x_{n-1} \Vert }\}, & \text{if } x_{n}\neq x_{n-1}, \\ \theta , & \text{otherwise}; \end{cases} $$
  (3.2)
- (d) \(\tau _{n}\) is chosen to be the largest \(\tau \in \{\gamma ,\gamma l,\gamma l^{2},\ldots \}\) satisfying
  $$ \tau \Vert Aw_{n}-Ay_{n} \Vert \leq \mu \Vert w_{n}-y_{n} \Vert ; $$
  (3.3)
- (e)
  $$ \eta _{n}=\textstyle\begin{cases} \frac{\langle w_{n}-y_{n},d_{n}\rangle }{ \Vert d_{n} \Vert ^{2}}, & \text{if } d_{n}\neq 0, \\ 0, & \text{if } d_{n}=0; \end{cases} $$
- (f) \(\{\beta _{n}\}\subset [a,b]\subset (0,1)\).
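The following Python sketch assembles the iteration from conditions (c)-(f) and the quantities \(w_{n}\), \(y_{n}\), \(d_{n}\), \(z_{n}\) that appear in Lemma 3.2 and Claim 3. The final Halpern-type update \(x_{n+1}=(1-\alpha _{n})[(1-\beta _{n})w_{n}+\beta _{n}z_{n}]\) (anchor 0) is our assumption, consistent with convergence to the minimum-norm solution; the toy operator and feasible set are also illustrative choices:

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Illustrative P_Omega: projection onto the closed unit ball."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

M = np.array([[2.0, 1.0], [-1.0, 2.0]])    # toy pseudomonotone (monotone) A
A = lambda x: M @ x

gamma, l, mu, alpha, theta = 1.0, 0.3, 0.3, 1.8, 0.35
x_prev = x = np.array([0.9, -0.6])

for n in range(1, 200):
    alpha_n = 1.0 / (n + 1)
    eps_n = alpha_n ** 2                           # eps_n = o(alpha_n), (b)
    diff = np.linalg.norm(x - x_prev)
    theta_n = theta if diff == 0 else min(theta, eps_n / diff)   # rule (3.2)
    w = x + theta_n * (x - x_prev)                 # inertial step
    tau = gamma                                    # Armijo rule (3.3)
    y = proj_ball(w - tau * A(w))
    while tau * np.linalg.norm(A(w) - A(y)) > mu * np.linalg.norm(w - y):
        tau *= l
        y = proj_ball(w - tau * A(w))
    d = w - y - tau * (A(w) - A(y))                # d_n as in the PCM
    eta = 0.0 if not d.any() else np.dot(w - y, d) / np.dot(d, d)  # rule (e)
    z = w - alpha * eta * d                        # contraction step
    beta_n = (1 - alpha_n) / 2                     # in [a,b] subset (0,1), (f)
    u = (1 - beta_n) * w + beta_n * z              # u_n of Claim 3
    x_prev, x = x, (1 - alpha_n) * u               # assumed anchor-0 update

print(np.linalg.norm(x))  # -> tends to the minimum-norm solution 0
```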
The following lemmas are important to prove the convergence of the algorithm.
Lemma 3.1
([11])
The Armijo-like search rule (3.3) is well defined, and \(\min \{\gamma ,\frac{\mu l}{L}\}\leq \tau _{n}\leq \gamma \).
Lemma 3.2
Let \(\{z_{n}\}\) be a sequence generated by Algorithm 1. For all \(p\in VI(\Omega ,A)\), we have
Proof
If \(d_{n_{0}}=0\), then \(z_{n_{0}}=w_{n_{0}}\) and inequality (3.4) holds.
Next, we consider \(d_{n}\neq 0\) for each \(n\geq 1\). Let \(p\in VI(\Omega ,A)\), we have
By the definition of \(d_{n}\), we get
Since \(y_{n}=P_{\Omega }(w_{n}-\tau _{n} Aw_{n})\), we obtain
Besides, \(p\in VI(\Omega ,A)\), \(y_{n}\in \Omega \), we have
Then, by the pseudomonotonicity of A, we get
By (3.6), (3.7), and (3.8), we have
Combining (3.5) and (3.9), we obtain
Since \(d_{n}\neq 0\), we have \(\eta _{n}=\frac{\langle w_{n}-y_{n},d_{n}\rangle }{\|d_{n}\|^{2}}\), which implies that \(\eta _{n}\|d_{n}\|^{2}=\langle w_{n}-y_{n},d_{n}\rangle \).
Thus, we obtain
This completes the proof. □
Lemma 3.3
Let \(\{w_{n}\}\) be a sequence generated by Algorithm 1, then there exists \(n_{0}\geq 1\) such that
Proof
Clearly, from the definition of \(d_{n}\), we have
On the one hand, we can obtain
thus, we have
Besides,
Thus, we obtain
On the other hand, we have
Thus,
Also, from (3.12) and (3.13), we can get
which leads to the desired conclusion. □
Theorem 3.4
Let \(A:\Omega \rightarrow H\) be pseudomonotone, L-Lipschitz continuous, and sequentially weakly continuous on a bounded subset of H. Assume that \(VI(\Omega ,A)\neq \emptyset \). Then the sequence \(\{x_{n}\}\) generated by Algorithm 1 converges strongly to an element \(p\in VI(\Omega ,A)\), where \(\|p\|=\min \{\|z\|: z\in VI(\Omega ,A)\}\).
Proof
We divide the proof into several claims.
Claim 1. Prove that the sequence \(\{x_{n}\}\) is bounded. Let \(p\in VI(\Omega ,A)\), we have
Note that, from Lemma 3.2, we have
where
From \(\epsilon _{n}=o(\alpha _{n})\) and the definitions of \(\theta _{n}\) and \(\bar{\theta }_{n}\) given in (3.2), we have
Thus, the sequence \(\{\sigma _{n}\}\) is bounded. Setting \(M=\sup_{n\geq 1}\sigma _{n}+\|p\|\), by (3.14), we get
for each \(n\geq n_{0}\). By induction, we can obtain that
Therefore, the sequence \(\{x_{n}\}\) is bounded.
Claim 2. For each \(p\in VI(\Omega ,A)\) and \(n\geq n_{0}\), prove
From the definition of \(w_{n}\), we have
Combining the convexity of \(\|\cdot \|^{2}\), Lemma 3.2, and (3.15), we obtain
which leads to the desired conclusion.
Claim 3. For each \(p\in VI(\Omega ,A)\) and \(n\geq n_{0}\), prove
Setting \(u_{n}=(1-\beta _{n})w_{n}+\beta _{n}z_{n}\), we have
and
It follows from (3.16) and (3.17) that
which leads to the desired conclusion.
Claim 4. The sequence \(\{\|x_{n}-p\|^{2}\}\) converges to zero by considering two possible cases on the sequence \(\{\|x_{n}-p\|^{2}\}\).
Case 1: Suppose that there exists \(N\in \mathbb{N}\) such that \(\|x_{n+1}-p\|^{2}\leq \|x_{n}-p\|^{2}\), \(\forall n>N\).
Then we know that \(\lim_{n\rightarrow \infty }\|x_{n}-p\|^{2}\) exists. From Claim 2, we can obtain
From \(\alpha \in (0,2)\), \(\{\beta _{n}\}\subset [a,b]\subset (0,1)\), the facts that \(\theta _{n}\|x_{n}-x_{n-1}\|\rightarrow 0\) and \(\lim_{n\rightarrow \infty }\alpha _{n}=0\), we have
and
Using Lemma 3.3 and (3.19), we can get
By the definition of \(w_{n}\), we obtain
where \(K=\sup_{n\geq 1}\{\|x_{n}-x_{n-1}\|,\|x_{n}-p\|\}\).
Combining \(\lim_{n\rightarrow \infty }\alpha _{n}=0\) and (3.19), we obtain
From Claim 3 and (3.22), we have
or
where
Since \(\{x_{n}\}\) is bounded and \(\|x_{n+1}-x_{n}\|\rightarrow 0\) as \(n\rightarrow \infty \), there exists a subsequence \(\{x_{n_{j}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{j}}\rightharpoonup q\) and
from (3.20), we get \(w_{n_{j}}\rightharpoonup q\).
Next, we show that \(q\in VI(\Omega ,A)\). By \(y_{n}=P_{\Omega }(w_{n}-\tau _{n}Aw_{n})\), we have
From that, we infer that
By (3.21) and \(\liminf_{j\rightarrow \infty }\tau _{n_{j}}>0\), we take the limit as \(j\rightarrow \infty \), then we can obtain
Next, we choose a decreasing sequence \(\{\epsilon _{j}\}\) of positive reals tending to 0. For each \(\epsilon _{j}\), we denote by \(m_{j}\) the smallest positive integer such that
where the existence of \(m_{j}\) follows from (3.26). Since \(\{\epsilon _{j}\}\) is decreasing, we can get that sequence \(\{m_{j}\}\) is increasing. For each j, \(A(w_{n_{m_{j}}})\neq 0\), set
It follows from the pseudomonotonicity of A that
Besides, we have that \(\{w_{n_{j}}\}\) converges weakly to q as \(j\rightarrow \infty \). Since A is sequentially weakly continuous on Ω, we have \(\{A(w_{n_{j}})\}\) converges weakly to \(A(q)\). Assume \(\|A(q)\|\neq 0\) (otherwise, q is a solution). Since the norm mapping is sequentially weakly lower semicontinuous, thus we can obtain
From \(\{w_{n_{m_{j}}}\}\subset \{w_{n_{j}}\}\) and \(\epsilon _{j}\rightarrow 0\) as \(j\rightarrow \infty \), we have
Furthermore, taking the limit as \(j\rightarrow \infty \), then from (3.28) we have \(\langle A(w),w-q\rangle \geq 0\).
Thus, by Lemma 2.6, we can obtain that \(q\in VI(\Omega ,A)\). Therefore, we have \(\omega _{w}(x_{n})\subset VI(\Omega ,A)\).
From the fact that \(p=P_{VI(\Omega ,A)}0\), we obtain
Thus, from (3.25) and Lemma 2.7, we have \(\lim_{n\rightarrow \infty }\|x_{n}-p\|^{2}=0\). This implies that the sequence \(\{x_{n}\}\) converges strongly to p.
Case 2: There exists a subsequence \(\{\|x_{n_{j}}-p\|^{2}\}\) of \(\{\|x_{n}-p\|^{2}\}\) such that \(\|x_{n_{j}}-p\|^{2}<\|x_{n_{j}+1}-p\|^{2}\) for all \(j\in \mathbb{N}\). From Lemma 2.8, there exists a nondecreasing sequence \(\{m_{k}\}\) of \(\mathbb{N}\) such that \(\lim_{k\rightarrow \infty }m_{k}=\infty \) and the following inequalities hold for all \(k\in \mathbb{N}\):
It follows from Claim 2 and (3.29) that
Besides, we have
Thus,
Since \(\theta _{m_{k}}(1-\alpha _{m_{k}})\|x_{m_{k}}-x_{m_{k}-1}\| \rightarrow 0\), \(\lim_{k\rightarrow \infty }\alpha _{m_{k}}=0\), we get
By similar arguments as those in Case 1, we have
Since \(\|x_{m_{k}}-p\|^{2}\leq \|x_{m_{k}+1}-p\|^{2}\) and \(\alpha _{m_{k}}>0\), we get
As proved in the first case, we obtain
Since \(\frac{\theta _{m_{k}}}{\alpha _{m_{k}}}\|x_{m_{k}}-x_{m_{k}-1}\| \rightarrow 0\), thus by (3.30) and (3.31) we have
From (3.31) we get
Since \(\|x_{k}-p\|^{2}\leq \|x_{m_{k}+1}-p\|^{2}\), thus \(\limsup_{k\rightarrow \infty }\|x_{k}-p\|^{2}\leq 0\), that is, \(x_{k}\rightarrow p\).
The proof is completed. □
4 Numerical experiments
In this section, we consider some numerical examples to evaluate the efficiency and advantages of our proposed algorithm in comparison with the well-known Algorithm 2 [11] and Algorithm 3.2 [12]. The projections onto Ω are computed by the function quadprog in the Matlab 7.0 Optimization Toolbox. The specific examples are given below.
The choice of parameters for each algorithm is listed in the following:
- Algorithm 2: \(\gamma =1\), \(l=\mu =0.5\);
- Algorithm 3.2: \(\tau _{0}=1\), \(\mu =0.9\), \(\alpha _{n}=\frac{1}{(n+1)^{p}}\) (\(p=0.7\) or 1), \(\beta _{n}=\frac{1-\alpha _{n}}{2}\);
- Algorithm 1: \(\gamma =1\), \(l=\mu =0.3\), \(\theta =0.35\), \(\theta _{n}=\bar{\theta }_{n}\), \(\alpha =1.8\), \(\alpha _{n}=\frac{1}{(n+1)^{p}}\) (\(p=0.7\) or 1), \(\beta _{n}=\frac{1-\alpha _{n}}{2}\).
Example 4.1
Let \(\Omega =[-2,5]\), \(H=\mathbb{R}\). We consider the problem for \(A:\Omega \rightarrow \mathbb{R}\) defined by
For all \(x,y\in \Omega \), we have
Therefore, the operator A is monotone and 2-Lipschitz continuous, so A satisfies the required assumptions. Besides, \(VI(\Omega ,A)=\{0\}\neq \emptyset \). The starting points are \(x_{0}=x_{1}\in \{1,2,3\}\subset \Omega \); we denote \(x^{*}=0\) and take \(\|x_{n}-x^{*}\|\leq 10^{-i}\) (\(i=3,5,7\)) as the stopping criterion for Algorithm 1. The numerical results for the example are shown in Tables 1, 2.
From Tables 1, 2, we can easily observe that Algorithm 1 requires fewer iterations to converge than the previously studied Algorithm 2 [11] and Algorithm 3.2 [12].
Example 4.2
We consider the linear operator \(A:\mathfrak{R}^{m}\rightarrow \mathfrak{R}^{m}\) (\(m=10,20,30\)) in the form \(A(x)=Mx+q\) [20, 21], where
N is an \(m\times m\) matrix, S is an \(m\times m\) skew-symmetric matrix, D is an \(m\times m\) diagonal matrix whose diagonal entries are nonnegative, and \(q\in \mathfrak{R}^{m}\) is a vector, therefore M is positive definite. The feasible set is
Obviously, the operator A is monotone and Lipschitz continuous. For the experiments, q is the zero vector, all the entries of N and S are generated randomly and uniformly in \([-2,2]\), and the diagonal entries of D are in \((0,2)\). We choose \(x_{0}=x_{1}=(1,1,\ldots ,1)\in \mathfrak{R}^{m}\), \(\alpha _{n}=\frac{1}{(n+1)^{p}}\) (\(p=0.7\)). Besides, it is easy to see that \(VI(\Omega ,A)=\{(0,0,\ldots ,0)^{T}\}\neq \emptyset \); we denote \(x^{*}=(0,0,\ldots ,0)^{T}\) and take \(\|x_{n}-x^{*}\|\leq 10^{-4}\) as the stopping criterion. The results are described in Figs. 1, 2, 3.
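A sketch of this test setup, assuming the classical construction \(M=N^{T}N+S+D\) of [21] (the assembled form is our assumption, consistent with the stated properties of N, S, and D); the seed and size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 10
N = rng.uniform(-2.0, 2.0, (m, m))
B = rng.uniform(-2.0, 2.0, (m, m))
S = B - B.T                            # skew-symmetric: S^T = -S
D = np.diag(rng.uniform(0.0, 2.0, m))  # nonnegative diagonal entries
M = N.T @ N + S + D                    # assumed Harker-Pang construction [21]
q = np.zeros(m)
A = lambda x: M @ x + q

# Positive definiteness: x^T M x = ||N x||^2 + x^T D x > 0 for x != 0,
# since the skew part contributes nothing to the quadratic form.
x = rng.normal(size=m)
print(x @ M @ x > 0)  # -> True
```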
Figures 1, 2, 3 confirm that the proposed algorithm has a competitive advantage over the existing Algorithm 2 [11] and Algorithm 3.2 [12].
Example 4.3
Let H be a functional space \(L^{2}([0,1])\) with the inner product \(\langle x,y\rangle :=\int _{0}^{1}x(t)y(t)\,dt\) and the induced norm \(\|x\|:=(\int _{0}^{1}|x(t)|^{2}\,dt)^{\frac{1}{2}}\). The mapping A is defined by
It is easy to show that the operator A is monotone and 1-Lipschitz continuous:
Thus, the operator A is monotone.
Therefore, the operator A is 1-Lipschitz continuous.
The feasible set is the unit ball \(\Omega :=\{x\in H:\|x\|\leq 1\}\). We choose \(x_{0}^{1}=x_{1}^{1}=t^{2}\), \(x_{0}^{2}=x_{1}^{2}=\frac{2^{t}}{16}\), \(x_{0}^{3}=x_{1}^{3}=e^{-t}\), \(x_{0}^{4}=x_{1}^{4}=t+0.5\cos t\) as initial values, and we use the condition \(\|x_{n}-x^{*}\|\leq 10^{-i}\) (\(i=2,3\)) as the stopping criterion, where \(x^{*}(t)=0\) is the solution found by the algorithm. The numerical results are presented in Tables 3, 4. We mainly consider the number of iterations and the elapsed time to verify its effectiveness.
5 Conclusion
In this paper, we presented a new algorithm based on the inertial projection and contraction method for solving pseudomonotone variational inequality problems in Hilbert spaces. Under suitable conditions, we proved the strong convergence of Algorithm 1, a strongly convergent iterative method with a self-adaptive technique. Moreover, it is worth underlining that Algorithm 1 does not require knowledge of the Lipschitz constant of the operator A. Finally, some numerical experiments were given to illustrate the advantages of the proposed algorithm compared with previously known algorithms.
Availability of data and materials
Not applicable.
References
An, N.T.: Solving k-center problems involving sets based on optimization techniques. J. Glob. Optim. 76, 189–209 (2020)
Sahu, D.R., Yao, J.C., Verma, M., Shukla, K.K.: Convergence rate analysis of proximal gradient methods with applications to composite minimization problems. Optimization 70(1), 75–100 (2021)
Cuong, T.H., Yao, J.C., Yen, N.D.: Qualitative properties of the minimum sum-of-squares clustering problem. Optimization 69(9), 2131–2154 (2020)
Qin, X., An, N.T.: Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets. Comput. Optim. Appl. 74, 821–850 (2019)
Xiu, N.H., Zhang, J.Z.: Some recent advances in projection-type methods for variational inequalities. J. Comput. Appl. Math. 152, 559–585 (2003)
Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Èkon. Mat. Metody 12, 747–756 (1976)
Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446 (2000)
He, B.: A class of projection and contraction methods for monotone variational inequalities. Appl. Math. Optim. 35, 69–76 (1997)
Sun, D.F.: A class of iterative methods for solving nonlinear projection equations. J. Optim. Theory Appl. 91, 123–140 (1996)
Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9, 3–11 (2001)
Thong, D.V., Hieu, D.V.: Weak and strong convergence theorems for variational inequality problems. Numer. Algorithms 78, 1045–1060 (2017)
Thong, D.V., Hieu, D.V.: Strong convergence of extragradient methods with a new step size for solving variational inequality problems. Comput. Appl. Math. 38, 136 (2019)
Takahashi, W.: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)
Mainge, P.E.: The viscosity approximation process for quasi-nonexpansive mapping in Hilbert space. Comput. Math. Appl. 59, 74–79 (2010)
Khanh, P.D., Vuong, P.T.: Modified projection method for strongly pseudomonotone variational inequalities. J. Glob. Optim. 58, 341–350 (2014)
Thong, D.V., Jun, Y., Cho, Y.J., Rassias, T.M.: Explicit extragradient-like method with adaptive stepsizes for pseudomonotone variational inequalities. Optim. Lett. (2021). https://doi.org/10.1007/s11590-020-01678-w
Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York (1984)
Cottle, R.W., Yao, J.C.: Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 75, 281–295 (1992)
Xu, H.K.: Iterative algorithm for nonlinear operators. J. Lond. Math. Soc. 66, 240–256 (2002)
Mainge, P.E.: A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 47, 1499–1515 (2008)
Harker, P.T., Pang, J.S.: A damped-Newton method for the linear complementarity problem. Lect. Appl. Math. 26, 265–284 (1990)
Acknowledgements
Not applicable.
Funding
This work was supported by Tianjin Key Lab for Advanced Signal Processing, Civil Aviation University of China (No. 2019ASP-TJ02).
Contributions
All the authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Tian, M., Xu, G. Improved inertial projection and contraction method for solving pseudomonotone variational inequality problems. J Inequal Appl 2021, 107 (2021). https://doi.org/10.1186/s13660-021-02643-6