Self-adaptive subgradient extragradient method with inertial modification for solving monotone variational inequality problems and quasi-nonexpansive fixed point problems
Journal of Inequalities and Applications volume 2019, Article number: 7 (2019)
Abstract
In this paper, we introduce a new algorithm with a self-adaptive method for finding a solution of the variational inequality problem involving a monotone operator and of the fixed point problem of a quasi-nonexpansive mapping with a demiclosedness property in a real Hilbert space. The algorithm is based on the subgradient extragradient method and the inertial method. At the same time, it can be considered as an improvement, at each computational step, of the previously known inertial extragradient method. The weak convergence of the algorithm is studied under standard assumptions. It is worth emphasizing that the algorithm we propose does not require knowledge of the Lipschitz constant of the operator. Finally, we provide some numerical experiments to verify the effectiveness and advantage of the proposed algorithm.
Introduction
Throughout this paper, let H be a real Hilbert space with the inner product \(\langle \cdot ,\cdot \rangle \) and norm \(\Vert \cdot \Vert \). Let C be a nonempty, closed and convex subset of H. Let \(\mathbb{N}\) and \(\mathbb{R}\) be the sets of positive integers and real numbers, respectively.
The variational inequality problem (VIP) is the problem of finding a point \(x^{*}\in C\) such that
$$ \bigl\langle Ax^{*},x-x^{*}\bigr\rangle \geq 0,\quad \forall x\in C. \tag{1} $$
The solution set of the VIP is denoted by \(VI(C,A)\). The variational inequality problem is an important branch of nonlinear analysis, and it has received a lot of attention from many authors in recent years (see [1,2,3] and the references therein). Under appropriate conditions, there are two general approaches for solving the variational inequality problem: one is the regularization method and the other is the projection method. Here, we mainly study the projection method.
For every point \(x\in H\), there exists a unique point \(P_{C}x\in C\) such that
$$ \Vert x-P_{C}x \Vert \leq \Vert x-y \Vert ,\quad \forall y\in C, $$
where \(P_{C}:H\rightarrow C\) is called the metric projection of H onto C.
We know that the variational inequality problem can be turned into a fixed point problem, which means that (1) is equivalent to
$$ x^{*}=P_{C}\bigl(x^{*}-\lambda Ax^{*}\bigr), $$
where \(P_{C}:H\rightarrow C\) is the metric projection and \(\lambda >0\). Thus we generate \(\{x_{n}\}\) in the following manner:
$$ x_{n+1}=P_{C}(x_{n}-\lambda Ax_{n}). $$
This simple algorithm is an extension of the projection gradient method. However, the convergence of this method requires the rather restrictive assumption that the operator \(A:H\rightarrow H\) is strongly monotone or inverse strongly monotone.
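To make this concrete, here is a minimal Python sketch of the iteration on a toy problem of our own choosing; the operator \(A(x)=2x\), the box \(C=[1,5]^{2}\) and the step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def projected_gradient(A, proj_C, x0, lam, n_iters=200):
    """Iterate x_{n+1} = P_C(x_n - lam * A(x_n))."""
    x = x0
    for _ in range(n_iters):
        x = proj_C(x - lam * A(x))
    return x

# Toy data (ours, for illustration): A(x) = 2x is strongly monotone,
# and for the box C = [1, 5]^2 the projection P_C is a componentwise clip.
A = lambda x: 2.0 * x
proj_C = lambda x: np.clip(x, 1.0, 5.0)
print(projected_gradient(A, proj_C, np.array([4.0, 3.0]), lam=0.3))
# -> [1. 1.], which solves VI(C, A): <Ax*, x - x*> = 2(x_1 - 1) + 2(x_2 - 1) >= 0 on C
```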
To avoid this strong assumption, Korpelevich [4] proposed an algorithm which is called the extragradient method:
$$ \textstyle\begin{cases} y_{n}=P_{C}(x_{n}-\lambda Ax_{n}), \\ x_{n+1}=P_{C}(x_{n}-\lambda Ay_{n}), \end{cases} \tag{4} $$
for each \(n=1,2,\ldots \), where \(\lambda \in (0,1/L)\). At the beginning, this method was used to solve saddle point problems. Soon, it was extended to Euclidean spaces and Hilbert spaces. In particular, this method only requires that the operator A is monotone and L-Lipschitz continuous in a Hilbert space. If \(VI(C,A)\neq \emptyset \), the sequence \(\{x_{n}\}\) generated by (4) converges weakly to an element of \(VI(C,A)\).
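To see why the predictor step matters, consider an assumed toy example: a rotation of \(\mathbb{R}^{2}\) is monotone (indeed skew) and 1-Lipschitz but not strongly monotone, so the one-projection iteration diverges while the extragradient scheme (4) converges. A minimal sketch under these assumptions:

```python
import numpy as np

# Rotation by 90 degrees: monotone (skew-symmetric) and 1-Lipschitz,
# but not strongly monotone; the unique solution of VI(R^2, A) is x* = 0.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = lambda x: M @ x
proj_C = lambda x: x              # here C = R^2, so P_C is the identity

def extragradient(A, proj_C, x0, lam, n_iters=100):
    """Korpelevich's scheme (4): predictor y_n, then corrector x_{n+1}."""
    x = x0
    for _ in range(n_iters):
        y = proj_C(x - lam * A(x))    # y_n = P_C(x_n - lam * A x_n)
        x = proj_C(x - lam * A(y))    # x_{n+1} = P_C(x_n - lam * A y_n)
    return x

print(np.linalg.norm(extragradient(A, proj_C, np.ones(2), lam=0.5)))  # -> ~0
# By contrast, ||x - lam*A(x)||^2 = (1 + lam^2)||x||^2 for this A, so the
# one-projection iteration moves away from the solution at every step.
```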
However, the extragradient method needs to calculate two projections from H onto the closed convex set C per iteration, and it is applicable only when \(P_{C}\) has a closed form, that is, an explicit expression. In fact, in some cases, the projection onto the nonempty closed convex subset C might be difficult to calculate. To overcome this drawback, the method has received great attention from many authors, who have improved it in various ways.
To our knowledge, there have been four kinds of methods to overcome this drawback. The first one is the modification of the extragradient method proposed by Tseng [5] in 2000, with the following remarkable scheme:
$$ \textstyle\begin{cases} y_{n}=P_{C}(x_{n}-\lambda Ax_{n}), \\ x_{n+1}=y_{n}-\lambda (Ay_{n}-Ax_{n}), \end{cases} \tag{5} $$
where A is monotone, L-Lipschitz continuous and \(\lambda \in (0,1/L)\). From (5), we find that this method needs only one projection per iteration, which is simpler than (4). The second one is the subgradient extragradient method, which was proposed by Censor et al. [6] in 2011:
$$ \textstyle\begin{cases} y_{n}=P_{C}(x_{n}-\lambda Ax_{n}), \\ T_{n}=\{x\in H : \langle x_{n}-\lambda Ax_{n}-y_{n},x-y_{n}\rangle \leq 0\}, \\ x_{n+1}=P_{T_{n}}(x_{n}-\lambda Ay_{n}), \end{cases} \tag{6} $$
where A is monotone, L-Lipschitz continuous and \(\lambda \in (0,1/L)\). The key operation of the subgradient extragradient method is to replace the second projection onto C of the extragradient method by a projection onto a specially constructed half-space, which significantly reduces the difficulty of the calculation.
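Part of the appeal is that, unlike a general \(P_{C}\), the projection onto the half-space \(T_{n}\) has a closed form. A minimal sketch (the numeric example is our own):

```python
import numpy as np

def proj_halfspace(u, a, y):
    """Project u onto T = {x : <a, x - y> <= 0}.

    This is a closed-form O(m) update: if u violates the constraint,
    move it against the normal a just far enough to satisfy it.
    """
    viol = np.dot(a, u - y)
    if viol <= 0.0:                       # u already lies in T
        return u
    return u - (viol / np.dot(a, a)) * a

# e.g. project (2, 2) onto {x : <(1, 0), x - (1, 0)> <= 0} = {x : x_1 <= 1}
print(proj_halfspace(np.array([2.0, 2.0]),
                     np.array([1.0, 0.0]),
                     np.array([1.0, 0.0])))   # -> [1. 2.]
```

In the scheme (6) one takes \(a=x_{n}-\lambda Ax_{n}-y_{n}\) and \(y=y_{n}\), so the second projection costs only one inner product and one vector update.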
Before explaining the third method, let us take a look at the inertial method. In 2001, Alvarez and Attouch [7] applied the inertial technique to obtain an inertial proximal method for the problem of finding a zero of a maximal monotone operator A, which works as follows: find \(x_{n+1}\in H\) such that
$$ 0\in \lambda _{n}A(x_{n+1})+x_{n+1}-x_{n}-\theta _{n}(x_{n}-x_{n-1}), $$
where \(x_{n-1},x_{n}\in H\), \(\theta _{n}\in [0,1)\) and \(\lambda _{n}>0\). It can also be written in the following form:
$$ x_{n+1}=J^{A}_{\lambda _{n}}\bigl(x_{n}+\theta _{n}(x_{n}-x_{n-1})\bigr), $$
where \(J^{A}_{\lambda _{n}}\) is the resolvent of A with parameter \(\lambda _{n}\) and the inertia is induced by the term \(\theta _{n}(x_{n}-x_{n-1})\). Recently, considerable interest has been shown in the inertial method, and many authors have constructed fast iterative algorithms by using it. The third method was studied by Q.L. Dong et al. [8] in 2017:
for each \(k\geq 1\), where \(\gamma \in (0,2)\), \(\lambda >0\),
This algorithm incorporates inertial terms into the projection and contraction algorithm and does not need a summability condition on the sequence. The fourth one is a self-adaptive algorithm based on Tseng's extragradient method, which was proposed by Duong Viet Thong and Dang Van Hieu [9] in 2017. The algorithm is described as follows.
It is worth mentioning that Algorithm 1 does not require one to know the Lipschitz constant of the operator A, which distinguishes it from the other three algorithms. If \(VI(C,A)\neq \emptyset \), the sequences \(\{x_{n}\}\) generated by (5), (6), (8) and Algorithm 1 all converge weakly to an element of \(VI(C,A)\). However, although Algorithm 1 does not require knowledge of the Lipschitz constant, its step size may involve the computation of additional projections.
In 2016, Mainge and Gobinddass [10] obtained \(x_{n+1}\) by the following algorithm:
where \(\theta _{n}=\frac{\lambda _{n}}{\delta \lambda _{n-1}}\), \(\lambda _{n}k_{n}\leq \varepsilon \delta (\sqrt{2}-1)\), \(\lambda _{n}\leq k\lambda _{n-1}(\delta +\frac{\lambda _{n-1}}{\lambda _{n-2}})^{\frac{1}{2}}\), \(\{\lambda _{n}\}\subset [\overline{\mu },\overline{\nu }]\) and
This iterative algorithm requires neither an additional projection for the determination of the step sizes nor knowledge of the Lipschitz constant of the operator.
The fixed point problem is the problem of finding \(x^{*}\in H\) such that
$$ Tx^{*}=x^{*}, $$
where \(x^{*}\) is called a fixed point of \(T:H\rightarrow H\). The set of fixed points of T is denoted by \(\operatorname{Fix}(T)\). Recently, many iterative methods have been proposed (see [6, 11,12,13,14,15,16,17,18,19,20,21] and the references therein) for finding a common element of \(\operatorname{Fix}(T)\) and \(VI(C,A)\) in a real Hilbert space.
In this paper, motivated and inspired by the above results, we introduce a new algorithm with a self-adaptive subgradient extragradient method and an inertial modification for finding a solution of the variational inequality problem involving a monotone operator and of the fixed point problem of a quasi-nonexpansive mapping with a demiclosedness property in a real Hilbert space. The weak convergence theorem is proved in Sect. 3.
This paper is organized as follows. In Sect. 2, we list some lemmas which will be used in further proofs. In Sect. 3, we propose a new algorithm and analyze its weak convergence. In Sect. 4, we give some numerical examples to illustrate the efficiency and advantage of our algorithm.
Preliminaries
In this section, we introduce some lemmas which will be used in this paper. Assume H is a real Hilbert space and C is a nonempty closed convex subset of H. In the remainder of the paper, we use the symbol \(x_{n}\rightarrow x\) to denote the strong convergence of the sequence \(\{x_{n}\}\) to x as \(n\rightarrow \infty \) and the symbol \(x_{n}\rightharpoonup x\) to denote the weak convergence of the sequence \(\{x_{n}\}\) to x as \(n\rightarrow \infty \). If there exists a subsequence \(\{x_{n_{i}}\}\) of \(\{x_{n}\}\) converging weakly to a point z, then z is called a weak cluster point of \(\{x_{n}\}\), and the set of all weak cluster points of \(\{x_{n}\}\) is denoted by \(\omega _{w}(x_{n})\).
Lemma 2.1
([22])
Let H be a real Hilbert space. Then, for each \(x,y\in H\) and \(\lambda \in \mathbb{R}\), we have

(i)
\(\Vert x+y\Vert ^{2}=\Vert x\Vert ^{2}+\Vert y\Vert ^{2}+2\langle x,y\rangle \);

(ii)
\(\Vert \lambda x+(1-\lambda )y\Vert ^{2}=\lambda \Vert x\Vert ^{2}+(1-\lambda )\Vert y\Vert ^{2}-\lambda (1-\lambda )\Vert x-y\Vert ^{2}\).
In the following, we gather some characteristic properties of \(P_{C}\).
Lemma 2.2
([23])
Let H be a real Hilbert space and C be a nonempty closed convex subset of H. Then

(i)
\(\Vert P_{C}x-P_{C}y\Vert ^{2}\leq \langle x-y,P_{C}x-P_{C}y\rangle \), \(\forall x,y\in H\);

(ii)
\(\Vert x-P_{C}x\Vert ^{2}+\Vert y-P_{C}x\Vert ^{2}\leq \Vert x-y\Vert ^{2}\), \(\forall x\in H\), \(y\in C\).
Lemma 2.3
Let H be a real Hilbert space and C be a nonempty closed convex subset of H. Given \(x\in H\) and \(z\in C\), then \(z=P_{C}x\) if and only if the inequality \(\langle x-z,y-z\rangle \leq 0\) holds for all \(y\in C\).
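As a quick illustration of this characterization, the following sketch checks the inequality numerically for the box \(C=[0,1]^{3}\), where \(P_{C}\) is a componentwise clip; the setting is a toy example of our own.

```python
import numpy as np

rng = np.random.default_rng(0)
proj_C = lambda u: np.clip(u, 0.0, 1.0)   # P_C for the box C = [0, 1]^3

x = 3.0 * rng.normal(size=3)              # an arbitrary point of H = R^3
z = proj_C(x)
for _ in range(1000):
    y = rng.uniform(0.0, 1.0, size=3)     # an arbitrary point of C
    assert np.dot(x - z, y - z) <= 0.0    # Lemma 2.3: <x - z, y - z> <= 0
print("the variational characterization holds for z = P_C(x)")
```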
Next, we present some concepts of an operator.
Definition 2.4
([24])
An operator \(A:H\rightarrow H\) is said to be:

(i)
monotone, if
$$ \langle x-y,Ax-Ay\rangle \geq 0,\quad \forall x,y\in H; $$ 
(ii)
L-Lipschitz continuous with \(L>0\), if
$$ \Vert Ax-Ay \Vert \leq L \Vert x-y \Vert ,\quad \forall x,y\in H; $$ 
(iii)
nonexpansive, if
$$ \Vert Ax-Ay \Vert \leq \Vert x-y \Vert ,\quad \forall x,y\in H; $$ 
(iv)
quasi-nonexpansive, if
$$ \Vert Ax-p \Vert \leq \Vert x-p \Vert ,\quad \forall x\in H, p\in \operatorname{Fix}(A), $$ where \(\operatorname{Fix}(A)\neq \emptyset \).
Remark 2.5
([25])
It is well known that every nonexpansive mapping with a nonempty set of fixed points is quasi-nonexpansive. However, a quasi-nonexpansive mapping need not be nonexpansive.
Lemma 2.6
([23])
Assume that \(T:H\rightarrow H\) is a nonlinear operator with \(\operatorname{Fix}(T)\neq \emptyset \). Then \(I-T\) is said to be demiclosed at zero if, for any \(\{x_{n}\}\) in H, the following implication holds:
$$ x_{n}\rightharpoonup x \quad \text{and}\quad (I-T)x_{n}\rightarrow 0 \quad \Longrightarrow \quad x\in \operatorname{Fix}(T). $$
Remark 2.7
We know that Lemma 2.6 clearly holds when the operator T is nonexpansive. However, there exist quasi-nonexpansive mappings T for which \(I-T\) is not demiclosed at zero. Therefore, in this paper, we need to emphasize that \(T:H\rightarrow H\) is a quasi-nonexpansive mapping such that \(I-T\) is demiclosed at zero.
Example 1
Let H be the real line and \(C=[0,\frac{3}{2}]\). Define the operator T on C by
Indeed, it is easy to see that \(\operatorname{Fix}(T)=\{0\}\).
On the one hand, for any \(x\in [0,1]\), we have
On the other hand, for any \(x\in (1,\frac{3}{2}]\), we have
Thus, the operator T is quasinonexpansive.
By taking \(\{x_{n}\}\subset (1,\frac{3}{2}]\) and \(x_{n}\rightarrow 1\) as \(n\rightarrow \infty \), we have
But \(1\notin \operatorname{Fix}(T)\), so \(I-T\) is not demiclosed at zero.
Lemma 2.8
([7])
Let \(\{\varphi _{n}\}\), \(\{\delta _{n}\}\) and \(\{\alpha _{n}\}\) be sequences in \([0,+\infty )\) such that
$$ \varphi _{n+1}\leq \varphi _{n}+\alpha _{n}(\varphi _{n}-\varphi _{n-1})+\delta _{n},\quad \forall n\geq 1,\qquad \sum^{+\infty }_{n=1}\delta _{n}<+\infty , $$
and there exists a real number α with \(0\leq \alpha _{n}\leq \alpha <1\) for all \(n\in \mathbb{N}\). Then the following hold:

(i)
\(\sum^{+\infty }_{n=1}[\varphi _{n}-\varphi _{n-1}]_{+}<+\infty \), where \([t]_{+}:=\max \{t,0\}\);

(ii)
there exists \(\varphi ^{*}\in [0,+\infty )\) such that \(\lim_{n\rightarrow +\infty }\varphi _{n}=\varphi ^{*}\).
Lemma 2.9
([26])
Let \(A:H\rightarrow H\) be a monotone and L-Lipschitz continuous mapping on C. Let \(S=P_{C}(I-\mu A)\), where \(\mu >0\). If \(\{x_{n}\}\) is a sequence in H satisfying \(x_{n}\rightharpoonup q\) and \(x_{n}-Sx_{n}\rightarrow 0\), then \(q\in VI(C,A)=\operatorname{Fix}(S)\).
Lemma 2.10
([27])
Let C be a nonempty closed and convex subset of a real Hilbert space H and \(\{x_{n}\}\) be a sequence in H. Suppose that the following two conditions hold:

(i)
\(\lim_{n\rightarrow \infty }\Vert x_{n}-x\Vert \) exists for each \(x\in C\);

(ii)
\(\omega _{w}(x_{n})\subset C\).
Then the sequence \(\{x_{n}\}\) converges weakly to a point in C.
Main results
In this section, we propose a new iterative algorithm with a self-adaptive method for solving monotone variational inequality problems and quasi-nonexpansive fixed point problems in a Hilbert space. Meanwhile, we combine the subgradient extragradient method and an inertial modification in the algorithm. Under the assumption \(\operatorname{Fix}(T)\cap VI(C,A)\neq \emptyset \), we prove a weak convergence theorem. Let H be a real Hilbert space. Let C be a nonempty closed convex subset of H. Let \(A:H\rightarrow H\) be a monotone and L-Lipschitz continuous operator. In particular, the Lipschitz constant L is not required to be known. Let \(T:H\rightarrow H\) be a quasi-nonexpansive mapping such that \(I-T\) is demiclosed at zero. The algorithm is described as follows.
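For orientation, the following Python sketch shows one reading of a single iteration, assembled from the relations used in Lemmas 3.1-3.3: the inertial point \(w_{n}\), the projection step \(y_{n}=P_{C}(w_{n}-\lambda _{n}Aw_{n})\), the half-space step \(z_{n}=P_{T_{n}}(w_{n}-\lambda _{n}Ay_{n})\), the convex combination \(x_{n+1}=(1-\beta _{n})w_{n}+\beta _{n}Tz_{n}\), and a self-adaptive step size bounded below as in Lemma 3.1. It is a sketch under these assumptions, not the authors' formal statement of Algorithm 2; in particular, the exact form of the inertial and step-size updates here is our reconstruction.

```python
import numpy as np

def algorithm2_sketch(A, proj_C, T, x0, x1, lam0=1/7, mu=0.5,
                      alpha=0.25, beta=0.5, n_iters=500):
    """Hedged sketch of the self-adaptive inertial subgradient
    extragradient method (details reconstructed, not authoritative)."""
    x_prev, x, lam = x0, x1, lam0
    for _ in range(n_iters):
        w = x + alpha * (x - x_prev)          # inertial extrapolation w_n
        y = proj_C(w - lam * A(w))            # y_n = P_C(w_n - lam_n A w_n)
        a = w - lam * A(w) - y                # normal of the half-space T_n
        u = w - lam * A(y)
        viol = np.dot(a, u - y)               # z_n = P_{T_n}(u), closed form
        z = u - (viol / np.dot(a, a)) * a if viol > 0 else u
        x_prev, x = x, (1 - beta) * w + beta * T(z)   # x_{n+1}
        denom = np.linalg.norm(A(w) - A(y))   # step size update: no Lipschitz
        if denom > 0:                         # constant needed (cf. Lemma 3.1)
            lam = min(mu * np.linalg.norm(w - y) / denom, lam)
    return x
```

The default parameters mirror the choices used in the numerical section (\(\alpha _{n}=\frac{1}{4}\), \(\beta _{n}=\frac{1}{2}\), \(\mu =\frac{1}{2}\), \(\lambda _{0}=\frac{1}{7}\)).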
Before giving the theorem and its proof, we propose several useful lemmas firstly.
Lemma 3.1
The sequence \(\{\lambda _{n}\}\) generated by Algorithm 2 is monotonically decreasing and bounded below by \(\min \{\frac{\mu }{L},\lambda _{0}\}\).
Proof
It is obvious that the sequence \(\{\lambda _{n}\}\) is a monotonically decreasing sequence.
Since A is L-Lipschitz continuous with \(L>0\), we have
$$ \Vert Ax_{n}-Ay_{n} \Vert \leq L \Vert x_{n}-y_{n} \Vert . $$
In the case of \(Ax_{n}-Ay_{n}\neq 0\), we have
$$ \frac{\mu \Vert x_{n}-y_{n} \Vert }{ \Vert Ax_{n}-Ay_{n} \Vert }\geq \frac{\mu \Vert x_{n}-y_{n} \Vert }{L \Vert x_{n}-y_{n} \Vert }=\frac{\mu }{L}. $$
Clearly, the lower bound of the sequence \(\{\lambda _{n}\}\) is \(\min \{\frac{\mu }{L},\lambda _{0}\}\). □
Lemma 3.2
If \(w_{n}=y_{n}=x_{n+1}\), then \(w_{n} \in \operatorname{Fix}(T)\cap VI(C,A)\).
Proof
If \(w_{n}=y_{n}\), we have \(w_{n}\in VI(C,A)\).
Besides, since \(w_{n}=y_{n}\) and \(y_{n}=P_{C}(w_{n}-\lambda _{n} Aw_{n})\), according to Lemma 2.3 we have \(\langle w_{n}-\lambda _{n} Aw_{n}-y_{n},x-y_{n}\rangle \leq 0\), \(\forall x\in C\). Since \(w_{n}=y_{n}\) and \(z_{n}=P_{T_{n}}(w_{n}-\lambda _{n}Ay_{n})\), where \(T_{n}=\{x\in H : \langle w_{n}-\lambda _{n} Aw_{n}-y_{n},x-y_{n}\rangle \leq 0\}\), we have \(y_{n}=z_{n}\).
On the other hand, if \(w_{n}=y_{n}=x_{n+1}\), then by \(x_{n+1}=(1-\beta _{n})w_{n}+\beta _{n}Tz_{n}\) and \(z_{n}=w_{n}\) we have
$$ w_{n}=(1-\beta _{n})w_{n}+\beta _{n}Tw_{n}; $$
rearranging, we get \(Tw_{n}=w_{n}\), which means \(w_{n}\in \operatorname{Fix}(T)\).
Therefore, \(w_{n}\in \operatorname{Fix}(T)\cap VI(C,A)\). □
Lemma 3.3
Let \(\{z_{n}\}\) be the sequence generated by Algorithm 2. Then, for all \(p\in VI(C,A)\) and for n sufficiently large, we have
Proof
Since \(p\in VI(C,A)\) and \(VI(C,A)\subset C\subset T_{n}\), we have
This implies that
Because the operator A is monotone and \(\lambda _{n}>0\), we have
So
Since \(\{\lambda _{n}\}\) is a monotonically decreasing sequence, the limit of \(\{\lambda _{n}\}\) exists and \(\frac{\lambda _{n}}{\lambda _{n+1}}\geq 1\). Denote \(\lambda =\lim_{n\rightarrow \infty }\lambda _{n}\); then \(\lim_{n\rightarrow \infty }\frac{\lambda _{n}}{\lambda _{n+1}}=1\). Since \(\mu \in (0,1)\), we have \(\frac{1}{\mu }>1\). Let \(\gamma =\frac{1+\frac{1}{\mu }}{2}\), so that \(1<\gamma <\frac{1}{\mu }\). Therefore there exists \(N\in \mathbb{N}\) such that \(\frac{\lambda _{n}}{\lambda _{n+1}}<\gamma \) for all \(n>N\), and hence \(0<1-\mu \gamma <1\).
Since \(z_{n}\in T_{n}\), we have
So
Therefore
□
Theorem 3.4
Assume that the sequence \(\{\alpha _{n}\}\) is nondecreasing with \(0\leq \alpha _{n}\leq \alpha \leq \frac{1}{4}\) and that \(\{\beta _{n}\}\) is a sequence of real numbers with \(0<\beta \leq \beta _{n}\leq \frac{1}{2}\). Then the sequence \(\{x_{n}\}\) generated by Algorithm 2 converges weakly to an element of \(\operatorname{Fix}(T)\cap VI(C,A)\).
Proof
Let \(p\in \operatorname{Fix}(T)\cap VI(C,A)\).
From Lemma 3.3, there exists \(N\geq 0\) such that \(\Vert z_{n}-p\Vert \leq \Vert w_{n}-p\Vert \) for all \(n>N\).
Since T is quasi-nonexpansive, by Lemma 2.1 we have, for all \(n>N\),
Since \(x_{n+1}=(1-\beta _{n})w_{n}+\beta _{n}Tz_{n}\), we can write it as
Combining (12) and (13), with \(\beta _{n}\leq \frac{1}{2}\), we have
Besides,
and
Combining (14), (15) and (16) with the fact that \(\{\alpha _{n}\}\) is nondecreasing, we obtain
Therefore,
Put \(\varGamma _{n}:=\Vert x_{n}-p\Vert ^{2}-\alpha _{n}\Vert x_{n-1}-p\Vert ^{2}+2\alpha _{n}\Vert x_{n}-x_{n-1}\Vert ^{2}\).
From (18), we obtain
Since \(0\leq \alpha _{n}\leq \alpha \leq \frac{1}{4}\), we have \(1-2\alpha _{n+1}-\alpha _{n}\geq \frac{1}{4}\).
So \(\varGamma _{n+1}-\varGamma _{n}\leq -\delta \Vert x_{n+1}-x_{n}\Vert ^{2}\leq 0\), where \(\delta =\frac{1}{4}\), which implies that the sequence \(\{\varGamma _{n}\}\) is nonincreasing.
Besides,
and
Therefore,
This implies that the sequence \(\{x_{n}\}\) is bounded.
Combining (21) and (22), we have
From (19), we have
which means
and
Besides, since \(\alpha _{n}\leq \alpha \), we have
Therefore, by (26) and (27), we can obtain
We have
Therefore, by (29) and Lemma 2.8, we have
by (15), we have
and we also have
We have
which means
Combining (30), (31) and (33) with the boundedness of the sequence \(\{\beta _{n}\}\), we have
By (11), we have
Therefore, we have
From Lemma 3.3, we can obtain, for all \(n>N\),
and
and
So
which implies
Therefore, by (12), (30) and (31), we have
So
which implies
Since \(\{x_{n}\}\) is bounded, there exist a subsequence \(\{x_{n_{k}} \}\) of \(\{x_{n}\}\) and \(q\in H\) such that \(x_{n_{k}}\rightharpoonup q\).
So, by (32) we have \(w_{n_{k}}\rightharpoonup q\), and by (37) we have \(z_{n_{k}}\rightharpoonup q\).
Since \(z_{n_{k}}\rightharpoonup q\) and \(IT\) is demiclosed at zero, by Lemma 2.6, we have \(q\in \operatorname{Fix}(T)\).
On the other hand, for all \(n\in \mathbb{N}\) we have \(\lambda _{n}\geq \min \{\frac{\mu }{L},\lambda _{0}\}>0\). We have \(w_{n_{k}}\rightharpoonup q\) and
By Lemma 2.9, we have \(q\in VI(C,A)\).
Therefore, \(q\in \operatorname{Fix}(T)\cap VI(C,A)\).
By Lemma 2.10, we get the conclusion that the sequence \(\{x_{n}\}\) converges weakly to an element of \(\operatorname{Fix}(T)\cap VI(C,A)\).
This completes the proof. □
Numerical experiments
In this section, we give some numerical examples to illustrate the efficiency and advantage of our algorithm in comparison with a well-known algorithm. We compare Algorithm 2 with the weakly convergent Algorithm 1 [19].
We choose \(\alpha _{n}=\frac{1}{4}\), \(\beta _{n}=\frac{1}{2}\), \(\mu =\frac{1}{2}\), \(\lambda _{0}=\frac{1}{7}\). The starting point is \(x_{0}=x_{1}=(1,1,\ldots ,1)\in \mathbb{R}^{m}\). In order to show the convergence of the algorithm, we illustrate the behavior of the sequence \(D_{n}=\Vert x_{n}-x^{*}\Vert ^{2}\), \(n=0,1,2,\ldots \), as the execution time (in seconds) elapses, where \(x^{*}\) is the solution of the problem and \(\{x_{n}\}\) is the sequence generated by the algorithms. Now we introduce the examples in detail.
Example 2
Let A be a Lipschitz continuous and monotone mapping. Let T be a quasi-nonexpansive mapping. Assume \(\operatorname{Fix}(T)\cap VI(C,A)\neq \emptyset \) and \(C=[-2,5]\), \(H=\mathbb{R}\). Let A and T be given by
In the following, let us verify that A and T meet the requirements above.
First, for all \(x,y\in H\), we have
Therefore, \(\Vert Ax-Ay\Vert \leq L\Vert x-y\Vert \), where \(L=2\), and \(\langle Ax-Ay,x-y\rangle \geq 0\). Therefore, A is L-Lipschitz continuous and monotone.
Second, for \(Tx=\frac{x}{2}\sin x\): if \(x\neq 0\) and \(Tx=x\), then \(x=\frac{x}{2}\sin x\), so \(\sin x=2\), which is impossible. Therefore \(x=0\), which means \(\operatorname{Fix}(T)=\{0\}\).
For all \(x\in H\),
$$ \Vert Tx-0 \Vert = \biggl\vert \frac{x}{2}\sin x \biggr\vert \leq \vert x \vert = \Vert x-0 \Vert , $$
which means T is quasi-nonexpansive.
Besides, taking \(x=2\pi \) and \(y=\frac{3\pi }{2}\), we have
$$ \Vert Tx-Ty \Vert = \biggl\vert \pi \sin 2\pi -\frac{3\pi }{4}\sin \frac{3\pi }{2} \biggr\vert =\frac{3\pi }{4}>\frac{\pi }{2}= \Vert x-y \Vert , $$
which means T is not a nonexpansive mapping.
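Both facts are easy to confirm numerically; the grid and the sample pair below are our own illustrative choices.

```python
import numpy as np

T = lambda t: 0.5 * t * np.sin(t)
grid = np.linspace(-20.0, 20.0, 4001)

# quasi-nonexpansive about the fixed point p = 0: |Tx| <= |x| on the grid
assert np.all(np.abs(T(grid)) <= np.abs(grid) + 1e-12)

# not nonexpansive: the pair from the text violates |Tx - Ty| <= |x - y|
x, y = 2.0 * np.pi, 1.5 * np.pi
print(abs(T(x) - T(y)), abs(x - y))   # 2.356... = 3*pi/4 > pi/2 = 1.570...
```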
Therefore, A and T meet the requirements above. The numerical results for the example are shown in Fig. 1.
From Fig. 1, we can see that Algorithm 2 converges in a shorter time than the previously studied Algorithm 1 [19].
Example 3
We consider the operator \(T:H\rightarrow H\) with \(Tx=\frac{1}{2}x\) and a linear operator \(A:\mathbb{R}^{m}\rightarrow \mathbb{R}^{m}\) of the form \(A(x)=Mx+q\) [28, 29], where
$$ M=N^{T}N+S+D, $$
N is an \(m\times m\) matrix, S is an \(m\times m\) skew-symmetric matrix, D is an \(m\times m\) diagonal matrix whose diagonal entries are nonnegative, and \(q\in \mathbb{R}^{m}\) is a vector; therefore M is positive definite. The feasible set is
It is obvious that A is monotone and Lipschitz continuous. For the experiments, q is the zero vector, all the entries of N and S are generated randomly and uniformly in \([-2,2]\), and the diagonal entries of D are in \((0,2)\).
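A NumPy sketch of this construction; the formula \(M=N^{T}N+S+D\) and the exact sampling details follow our reading of the standard test problem in [28, 29].

```python
import numpy as np

def make_example3(m, seed=0):
    """Random monotone affine operator A(x) = Mx + q with
    M = N^T N + S + D positive definite (sketch of the test problem)."""
    rng = np.random.default_rng(seed)
    N = rng.uniform(-2.0, 2.0, (m, m))
    U = np.triu(rng.uniform(-2.0, 2.0, (m, m)), 1)
    S = U - U.T                              # skew-symmetric, entries in [-2, 2]
    D = np.diag(rng.uniform(0.0, 2.0, m))    # nonnegative diagonal
    M = N.T @ N + S + D
    q = np.zeros(m)                          # q = 0 as in the text
    return lambda x: M @ x + q

A = make_example3(20)                        # e.g. the m = 20 instance
print(A(np.ones(20)).shape)                  # -> (20,)
```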
We can easily see that the solution of the problem in this case is \(x^{*}=0\). In order to illustrate the effectiveness of the algorithm, we show the behavior of \(D_{n}\) as the execution time elapses (in seconds) in Figs. 2, 3 and 4 for \(\mathbb{R}^{20}\), \(\mathbb{R}^{50}\) and \(\mathbb{R}^{100}\), respectively.
According to Figs. 2, 3 and 4, we have confirmed that the proposed algorithm has a competitive advantage over the existing Algorithm 1 [19].
Conclusion
In this paper, we introduced a new algorithm with a self-adaptive method for finding a solution of the variational inequality problem involving a monotone operator and of the fixed point problem of a quasi-nonexpansive mapping with a demiclosedness property in a real Hilbert space. We combined the subgradient extragradient method and an inertial modification in the algorithm. Under some suitable conditions, we proved the weak convergence of the algorithm. In particular, it is worth emphasizing that the proposed algorithm needs neither additional projections nor knowledge of the Lipschitz constant. Finally, some numerical experiments were performed to verify the convergence of the algorithm and to compare it with the previously known Algorithm 1 [19].
References
Gibali, A.: Two simple relaxed perturbed extragradient methods for solving variational inequalities in Euclidean spaces. J. Nonlinear Var. Anal. 2, 49–61 (2018)
Yao, Y.H., Shahzad, N.: Strong convergence of a proximal point algorithm with general errors. Optim. Lett. 6, 621–628 (2012)
Yao, Y.H., Chen, R.D., Xu, H.K.: Schemes for finding minimumnorm solutions of variational inequalities. Nonlinear Anal. 72, 3447–3456 (2010)
Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Èkon. Mat. Metody 12, 747–756 (1976)
Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446 (2000)
Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318–335 (2011)
Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. SetValued Anal. 9, 3–11 (2001)
Dong, Q.L., Cho, Y.J., Zhong, L.L.: Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. (2017). https://doi.org/10.1007/s10898-017-0506-0
Thong, D.V., Hieu, D.V.: Weak and strong convergence theorems for variational inequality problems. Numer. Algorithms (2017). https://doi.org/10.1007/s11075-017-0412-z
Mainge, P.E., Gobinddass, M.L.: Convergence of one-step projected gradient methods for variational inequalities. J. Optim. Theory Appl. 171, 146–168 (2016)
Ceng, L.C., Yao, J.C.: Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwan. J. Math. 10, 1293–1303 (2006)
Iiduka, H., Takahashi, W.: Strong convergence theorems for nonexpansive mappings and inverse-strongly monotone mappings. Nonlinear Anal. 61, 341–350 (2005)
Nadezhkina, N., Takahashi, W.: Strong convergence theorem by a hybrid method for nonexpansive mappings and monotone mappings. SIAM J. Optim. 16, 1230–1241 (2006)
Yao, Y.H., Liou, Y.C., Yao, J.C.: Iterative algorithms for the split variational inequality and fixed point problems under nonlinear transformations. J. Nonlinear Sci. Appl. 10, 843–854 (2017)
Thong, D.V., Hieu, D.V.: An inertial method for solving split common fixed point problems. J. Fixed Point Theory Appl. 19, 3029–3051 (2017)
Qin, X.L., Yao, J.C.: Projection splitting algorithms for nonself operators. J. Nonlinear Convex Anal. 18, 925–935 (2017)
Yang, Y., Yuan, Q.: A hybrid descent iterative algorithm for a split inclusion problem. J. Nonlinear Funct. Anal. 2018, Article ID 42 (2018)
Zegeye, H., Shahzad, N., Yao, Y.H.: Minimumnorm solution of variational inequality and fixed point problem in Banach spaces. Optimization 64, 453–471 (2015)
Thong, D.V., Hieu, D.V.: Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems. Numer. Algorithms (2018). https://doi.org/10.1007/s11075-018-0527-x
Takahashi, W.: Nonlinear Functional Analysis. Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama (2000)
Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66(2), 240–256 (2002)
Takahashi, W.: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)
Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York (1984)
Mainge, P.E.: The viscosity approximation process for quasinonexpansive mapping in Hilbert space. Comput. Math. Appl. 59, 74–79 (2010)
Chidume, C.E.: Geometric Properties of Banach Spaces and Nonlinear Iterations. Lecture Notes in Mathematics. vol. 1965. Springer, Berlin (2009)
Kraikaew, R., Saejung, S.: Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 163, 399–412 (2014)
Xu, H.K.: Averaged mappings and the gradientprojection algorithm. J. Optim. Theory Appl. 150, 360–378 (2011)
Harker, P.T., Pang, J.S.: A damped-Newton method for the linear complementarity problem. Lect. Appl. Math. 26, 265–284 (1990)
Hieu, D.V., Anh, P.K., Muu, L.D.: Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput. Optim. Appl. 66, 75–96 (2017)
Funding
This work was supported by the Financial Funds for the Central Universities (No. 3122018L004) and Scientific research project of Tianjin Municipal Education Commission (No. 2018KJ253).
Author information
Authors and Affiliations
Contributions
All the authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Tian, M., Tong, M. Self-adaptive subgradient extragradient method with inertial modification for solving monotone variational inequality problems and quasi-nonexpansive fixed point problems. J Inequal Appl 2019, 7 (2019). https://doi.org/10.1186/s13660-019-1958-1
DOI: https://doi.org/10.1186/s13660-019-1958-1
Keywords
 Variational inequality problem
 Fixed point problem
 Extragradient method
 Subgradient extragradient method
 Inertial method
Self-adaptive method