Composite inertial subgradient extragradient methods for variational inequalities and fixed point problems
Journal of Inequalities and Applications volume 2019, Article number: 274 (2019)
Abstract
In this paper, we introduce and investigate composite inertial gradient-based algorithms with a line-search process for solving a variational inequality problem (VIP) with a pseudomonotone and Lipschitz continuous mapping and a common fixed-point problem (CFPP) of finitely many nonexpansive mappings and a strictly pseudocontractive mapping in the framework of infinite-dimensional Hilbert spaces. The proposed algorithms are based on an inertial subgradient–extragradient method with a line-search process, hybrid steepest-descent methods, viscosity approximation methods and Mann iteration methods. Under weak conditions, we prove strong convergence of the proposed algorithms to an element of the common solution set of the VIP and CFPP, which solves a certain hierarchical VIP defined on this common solution set.
1 Introduction
Throughout this work, H is assumed to be a real Hilbert space with norm \(\|\cdot \|\) and inner product \(\langle \cdot ,\cdot \rangle \). We suppose that C is a nonempty closed convex set in H and that \(P_{C}\) is the metric projection from H onto C. Let \(S:C\to H\) be a nonlinear operator. We denote by \(\operatorname{Fix}(S)\) the set of fixed points of S. A mapping \(T:C\to C\) is called strictly pseudocontractive if \(\|Tx-Ty\|^{2}\leq \|x-y\|^{2}+\zeta \|(I-T)x-(I-T)y\|^{2}\) \(\forall x,y\in C\),
where \(\zeta \in [0,1)\) is some constant. With \(\zeta =0\), T is a nonexpansive operator. It is well known that the class of strict pseudocontractions includes the class of nonexpansive mappings and that every strict pseudocontraction is Lipschitz continuous. Owing to their direct and indirect applications in other branches of mathematics and in engineering, the classes of nonexpansive operators and strictly pseudocontractive operators are now under the spotlight; see, e.g., [1,2,3,4,5,6,7,8] and the references therein.
Let \(A:H\to H\) be a mapping defined on the whole space. The well-known variational inequality problem (VIP), which finds many applications in sparse reconstruction, image reconstruction, and traffic and transportation systems, is to find \(x^{*}\in C\) with \(\langle Ax^{*},x-x ^{*}\rangle \geq 0\) \(\forall x\in C\). The solution set of the VIP is denoted by \(\operatorname{VI}(C,A)\). At present, one of the most effective methods for solving the VIP is the extragradient method introduced by Korpelevich [9] in 1976, that is, for any initial \(x_{0}\in C\), the sequence \(\{x_{n}\}\) is generated by the process (1.2): \(y_{n}=P_{C}(x_{n}-\tau Ax_{n})\), \(x_{n+1}=P_{C}(x_{n}-\tau Ay_{n})\), \(n\geq 0\),
where τ is such that \(L\tau <1\). If \(\operatorname{VI}(C,A)\neq \emptyset \), then the sequence \(\{x_{n}\}\) generated by process (1.2) converges weakly to a point of \(\operatorname{VI}(C,A)\). The literature on the VIP is quite vast, and Korpelevich’s extragradient method, which is projection-based, has received great attention from many researchers working on nonlinear programming, who improved it in various ways; see, e.g., [10,11,12,13,14,15,16,17] and the references therein. The extragradient method requires two nearest-point projections per iteration. Since the projection onto a closed convex feasible set amounts to solving a minimum-distance problem, it may require a prohibitive amount of computation time when the feasible set is general; for recent efforts on projection-based methods, see, e.g., [18,19,20,21,22]. In 2011, Censor et al. [21] studied Korpelevich’s extragradient method in depth and first introduced the subgradient–extragradient method, in which the second projection onto the set C is replaced by a projection onto a half-space:
where τ is a parameter such that \(L\tau <1\). In 2014, Kraikaew and Saejung [23] proposed the Halpern subgradient–extragradient method for solving the VIP
where τ is a parameter such that \(L\tau <1\), \(\{\alpha _{n}\}\) is a sequence in \((0,1)\) with \(\lim_{n\to \infty }\alpha _{n}=0\), and \(\sum^{\infty }_{n=1}\alpha _{n}= +\infty \). They proved the strong convergence of the sequence \(\{x_{n}\}\) generated by (1.4) to \(P_{\operatorname{VI}(C,A)}x_{0}\).
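In both the subgradient–extragradient method and its Halpern variant, the second projection is taken onto a half-space of the form \(\{x\in H:\langle a,x-y_{n}\rangle \leq 0\}\) (cf. the sets \(C_{n}\) appearing in the algorithms below), and such a projection has a simple closed form. The following short Python/NumPy sketch of this closed-form projection is added here purely as an illustration; the variable names are ours.

```python
import numpy as np

def project_halfspace(x, a, y):
    """Project x onto the half-space {z : <a, z - y> <= 0}.

    If x already satisfies the constraint it is returned unchanged; otherwise the
    excess <a, x - y> is removed along the normal direction a.
    """
    viol = np.dot(a, x - y)
    if viol <= 0.0:
        return x
    return x - (viol / np.dot(a, a)) * a

# Example: project (1, 1) onto {z : z_1 + z_2 <= 0}, i.e. take y = 0 and a = (1, 1).
print(project_halfspace(np.array([1.0, 1.0]), np.array([1.0, 1.0]), np.zeros(2)))  # [0. 0.]
```

By contrast, the projection onto a general closed convex set C usually has no closed form and must itself be computed by an inner optimization routine, which is precisely the computational burden the subgradient–extragradient method avoids.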
In 2018, by virtue of the inertial technique, Thong and Hieu [24] first proposed the inertial subgradient–extragradient method, that is, for any initial \(x_{0},x_{1}\in H\), the sequence \(\{x_{n}\}\) is generated by
where τ is a parameter such that \(L\tau <1\). Under suitable conditions, they proved the weak convergence of \(\{x_{n}\}\) to an element of \(\operatorname{VI}(C,A)\). Very recently, Thong and Hieu [25] introduced two inertial subgradient–extragradient algorithms with a line-search process for solving the VIP with a monotone and Lipschitz continuous mapping A and the fixed-point problem (FPP) of a quasi-nonexpansive mapping T with a demiclosedness property in a real Hilbert space.
Algorithm 1.1
(see [25, Algorithm 1])
Initialization: Let \(x_{0},x_{1}\in H\) be arbitrary. Given \(\gamma >0\), \(l\in (0,1)\), \(\mu \in (0,1)\).
Iterative Steps: Compute \(x_{n+1}\) as follows:
Step 1. Set \(w_{n}=\alpha _{n}(x_{n}-x_{n-1})+x_{n}\) and compute \(y_{n}=P_{C}(w_{n}-\tau _{n}Aw_{n})\), where \(\tau _{n}\) is chosen to be the largest \(\tau \in \{\gamma ,\gamma l,\gamma l^{2},\ldots\}\) satisfying \(\tau \|Aw_{n}-Ay_{n}\|\leq \mu \|w_{n}-y_{n}\|\).
Step 2. Compute \(z_{n}=P_{C_{n}}(w_{n}-\tau _{n}Ay_{n})\) with \(C_{n}:=\{x\in H:\langle w_{n}-\tau _{n}Aw_{n}-y_{n},x-y_{n}\rangle \leq 0\}\).
Step 3. Compute \(x_{n+1}=(1-\beta _{n})w_{n}+\beta _{n}Tz_{n}\). If \(w_{n}=z_{n}=x_{n+1}\) then \(w_{n}\in \operatorname{Fix}(T)\cap \operatorname{VI}(C,A)\). Set \(n:=n+1\) and go to Step 1.
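To make the structure of Algorithm 1.1 concrete, here is a minimal runnable Python/NumPy sketch on \(\mathbf{R}^{2}\). The feasible set, the operators A and T, and the sequences \(\alpha _{n}\), \(\beta _{n}\) are illustrative assumptions chosen only so that the code runs; they are not the choices prescribed in [25].

```python
import numpy as np

# A minimal sketch of Algorithm 1.1 on R^2 with C = [-1, 1]^2, A = identity (monotone,
# 1-Lipschitz) and T = identity (nonexpansive).  The concrete choices of C, A, T and of
# the sequences alpha_n, beta_n are illustrative assumptions, not taken from [25].

def proj_box(x, lo=-1.0, hi=1.0):            # P_C for the box C = [lo, hi]^2
    return np.clip(x, lo, hi)

def proj_halfspace(x, a, y):                 # P_{C_n} for C_n = {z : <a, z - y> <= 0}
    viol = np.dot(a, x - y)
    return x if viol <= 0 else x - (viol / np.dot(a, a)) * a

def A(x): return x                           # monotone and Lipschitz with L = 1
def T(x): return x                           # nonexpansive, Fix(T) = R^2

gamma, l, mu = 1.0, 0.5, 0.5
x_prev, x = np.array([2.0, -3.0]), np.array([1.5, 1.0])
for n in range(1, 200):
    alpha_n, beta_n = 1.0 / (n + 1) ** 2, 0.5
    w = x + alpha_n * (x - x_prev)           # Step 1: inertial extrapolation
    tau = gamma                              # Armijo-like backtracking for tau_n
    y = proj_box(w - tau * A(w))
    while tau * np.linalg.norm(A(w) - A(y)) > mu * np.linalg.norm(w - y) and tau > 1e-12:
        tau *= l
        y = proj_box(w - tau * A(w))
    a = w - tau * A(w) - y                   # normal vector of the half-space C_n
    z = proj_halfspace(w - tau * A(y), a, y)          # Step 2
    x_prev, x = x, (1 - beta_n) * w + beta_n * T(z)   # Step 3 (Mann step with w_n)

print(x)  # approaches 0, the unique point of VI(C, A) here (and trivially a fixed point of T)
```

Algorithm 1.2 below differs only in Step 3, where \(Tz_{n}\) is averaged with \(x_{n}\) instead of \(w_{n}\).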
Algorithm 1.2
(see [25, Algorithm 2])
Initialization: Let \(x_{0},x_{1}\in H\) be arbitrary. Given \(\gamma >0\), \(l\in (0,1)\), \(\mu \in (0,1)\).
Iterative Steps: Calculate \(x_{n+1}\) as follows:
Step 1. Set \(w_{n}=\alpha _{n}(x_{n}-x_{n-1})+x_{n}\) and compute \(y_{n}=P_{C}(w_{n}-\tau _{n}Aw_{n})\), where \(\tau _{n}\) is chosen to be the largest \(\tau \in \{\gamma ,\gamma l,\gamma l^{2},\ldots\}\) satisfying \(\tau \|Aw_{n}-Ay_{n}\|\leq \mu \|w_{n}-y_{n}\|\).
Step 2. Compute \(z_{n}=P_{C_{n}}(w_{n}-\tau _{n}Ay_{n})\) with \(C_{n}:=\{x\in H:\langle w_{n}-\tau _{n}Aw_{n}-y_{n},x-y_{n}\rangle \leq 0\}\).
Step 3. Compute \(x_{n+1}=(1-\beta _{n})x_{n}+\beta _{n}Tz_{n}\). If \(w_{n}=z_{n}=x_{n}=x_{n+1}\) then \(x_{n}\in \operatorname{Fix}(T)\cap \operatorname{VI}(C,A)\). Set \(n:=n+1\) and go to Step 1.
Under appropriate mild conditions, they proved the weak convergence of the proposed algorithms to an element of \(\operatorname{Fix}(T)\cap \operatorname{VI}(C,A)\). Inspired by the research work of [23,24,25,26,27,28], we introduce two composite inertial subgradient–extragradient algorithms with a line-search process for solving the VIP with a pseudomonotone and Lipschitz continuous mapping and the CFPP of finitely many nonexpansive mappings and a strictly pseudocontractive mapping in a real Hilbert space. The proposed algorithms are based on an inertial subgradient–extragradient method with a line-search process, hybrid steepest-descent methods, viscosity approximation methods and the Mann iteration method. We prove strong convergence of the proposed algorithms to an element of the common solution set of the VIP and CFPP, which solves a certain hierarchical VIP defined on this common solution set. Finally, our main results are applied to solve the VIP and CFPP in an illustrative example.
This paper is organized as follows: In Sect. 2, we recall some definitions and preliminary results for further use. Section 3 deals with the convergence analysis of the proposed algorithms. In Sect. 4, an example is presented to illustrate how the main results apply to the VIP and CFPP.
2 Preliminaries
Let C be a nonempty closed convex set in a real Hilbert space H. Given a sequence \(\{x_{n}\}\) in H, we denote by \(x_{n} \to x\) (resp., \(x_{n}\rightharpoonup x\)) the strong (resp., weak) convergence of \(\{x_{n}\}\) to x. A single-valued mapping \(T:C\to H\) is called
(i) L-Lipschitz continuous (or L-Lipschitzian) if \(\exists L>0\) such that \(\|Tx-Ty\|\leq L\|x-y\|\) \(\forall x, y\in C\);
(ii) monotone if \(\langle Tx-Ty,x-y\rangle \geq 0\) \(\forall x, y\in C\);
(iii) pseudomonotone if \(\langle Tx,y-x\rangle \geq 0\Rightarrow \langle Ty,y-x\rangle \geq 0\) \(\forall x, y\in C\);
(iv) α-strongly monotone if \(\exists \alpha >0\) such that \(\langle Tx-Ty,x-y\rangle \geq \alpha \|x-y\|^{2}\) \(\forall x, y\in C\);
(v) sequentially weakly continuous if \(\forall \{x_{n}\}\subset C\), the relation holds: \(x_{n}\rightharpoonup x\Rightarrow Tx_{n}\rightharpoonup Tx\).
It is easy to see that every monotone operator is pseudomonotone, but the converse is not true. Also, recall that the mapping \(T:C\to C\) is a ζ-strict pseudocontraction for some \(\zeta \in [0,1)\) if and only if \(\langle Tx-Ty,x-y\rangle \leq \|x-y\|^{2}-\frac{1- \zeta }{2}\|(I-T)x-(I-T)y\|^{2}\) \(\forall x, y\in C\). We know that if T is a ζ-strictly pseudocontractive mapping, then T satisfies the Lipschitz condition \(\|Tx-Ty\|\leq \frac{1+\zeta }{1-\zeta }\|x-y\|\) \(\forall x, y\in C\). For each point \(x\in H\), there exists a unique nearest point in C, denoted by \(P_{C}x\), such that \(\|x-P_{C}x\|\leq \|x-y\|\) \(\forall y\in C\). The mapping \(P_{C}\) is called the metric projection of H onto C; its complement \(I-P_{C}\) is also a monotone operator.
The following two lemmas are well known.
Lemma 2.1
The following hold:
(i) \(\langle x-y,P_{C}x-P_{C}y\rangle \geq \|P_{C}x-P_{C}y\|^{2}\) \(\forall x, y\in H\);
(ii) \(\langle P_{C}x-x,y-P_{C}x\rangle \geq 0\) \(\forall x\in H\), \(y\in C\);
(iii) \(\|x-y\|^{2}\geq \|x-P_{C}x\|^{2}+\|y-P_{C}x\|^{2}\) \(\forall x \in H\), \(y\in C\);
(iv) \(\|x-y\|^{2}+2\langle x-y,y\rangle =\|x\|^{2}-\|y\|^{2}\) \(\forall x, y\in H\);
(v) \(\|\lambda x+\mu y\|^{2}+\lambda \mu \|x-y\|^{2}=\lambda \|x\| ^{2}+\mu \|y\|^{2}\) \(\forall x, y\in H\), \(\forall \lambda ,\mu \in [0,1]\) with \(\lambda +\mu =1\).
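As an aside, the projection identities of Lemma 2.1 are easy to verify numerically. The following Python/NumPy script checks items (ii) and (iii) for the box \(C=[0,1]^{3}\); the set and the sampling are illustrative choices made here, not objects used in the paper.

```python
import numpy as np

# Numerical sanity check of Lemma 2.1(ii)-(iii) for the box C = [0, 1]^3 (an
# illustrative choice; any closed convex set with a computable projection would do).
rng = np.random.default_rng(0)
P = lambda x: np.clip(x, 0.0, 1.0)           # metric projection onto C

for _ in range(1000):
    x = 3.0 * rng.normal(size=3)             # arbitrary point of H = R^3
    y = rng.uniform(size=3)                  # arbitrary point of C
    px = P(x)
    assert np.dot(px - x, y - px) >= -1e-12                                                 # item (ii)
    assert np.dot(x - y, x - y) + 1e-12 >= np.dot(x - px, x - px) + np.dot(y - px, y - px)  # item (iii)
print("Lemma 2.1(ii)-(iii) verified on 1000 random samples")
```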
Lemma 2.2
We have the inequality \(\|x+y\|^{2}\leq \|x\|^{2}+2\langle y,x+y\rangle \) \(\forall x, y\in H\).
Lemma 2.3
([21])
Let \(A:C\to H\) be pseudomonotone and continuous. Then \(x^{*}\in C\) is a solution to the VIP \(\langle Ax ^{*},x-x^{*}\rangle \geq 0\) \(\forall x\in C\), if and only if \(\langle Ax,x-x ^{*}\rangle \geq 0\) \(\forall x\in C\).
Lemma 2.4
([29])
Let \(\{a_{n}\}\) be a sequence of nonnegative real numbers satisfying the conditions: \(a_{n+1} \leq \lambda _{n}\gamma _{n}+(1-\lambda _{n})a_{n}\) \(\forall n\geq 1\), where \(\{\lambda _{n}\}\) and \(\{\gamma _{n}\}\) are sequences of real numbers such that (i) \(\{\lambda _{n}\}\subset [0,1]\) and \(\sum^{\infty }_{n=1} \lambda _{n}=\infty \), and (ii) \(\limsup_{n\to \infty }\gamma _{n} \leq 0\) or \(\sum^{\infty }_{n=1}|\lambda _{n}\gamma _{n}|<\infty \). Then \(\lim_{n\to \infty }a_{n}=0\).
Lemma 2.5
([30])
Let \(T:C\to C\) be a ζ-strict pseudocontraction. Then \(I-T\) is demiclosed at zero, i.e., if \(\{x_{n}\}\) is a sequence in C such that \(x_{n}\rightharpoonup x \in C\) and \((T-I)x_{n}\to 0\), then \((I-T)x=0\), where I is the identity mapping of H.
Lemma 2.6
([31])
Let \(T:C\to C\) be a ζ-strictly pseudocontractive mapping. Let γ and δ be two nonnegative real numbers. Then \(\|\gamma (x-y)+\delta (Tx-Ty)\|\leq ( \gamma + \delta )\|x-y\|\) \(\forall x, y\in C\) provided \((\gamma +\delta )\zeta \leq \gamma \).
The following lemma is a direct consequence of Yamada [32].
Lemma 2.7
Let \(\lambda \in (0,1]\), let \(T:C\to H\) be a nonexpansive mapping, and let the mapping \(T^{\lambda }: C\to H\) be defined by \(T^{\lambda }x:=Tx-\lambda \mu F(Tx)\) \(\forall x\in C\), where \(F:H\to H\) is κ-Lipschitzian and η-strongly monotone. Then \(T^{\lambda }\) is a contraction with coefficient \(1-\lambda \tau \), where \(\tau :=1-\sqrt{1-\mu (2\eta -\mu \kappa ^{2})}\), provided \(0<\mu <\frac{2\eta }{\kappa ^{2}}\).
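To illustrate Lemma 2.7, the script below checks the contraction bound numerically for \(F(x)=Mx\) with a symmetric positive definite matrix M (so that F is κ-Lipschitzian and η-strongly monotone, with κ and η the largest and smallest eigenvalues of M) and with T the projection onto a box; the matrix, the step size μ and the value of λ are illustrative choices made here.

```python
import numpy as np

# Numerical check of Lemma 2.7 with F(x) = M x for a symmetric positive definite M,
# so that F is kappa-Lipschitzian and eta-strongly monotone with kappa, eta the largest
# and smallest eigenvalues of M.  T is the (nonexpansive) projection onto a box.
# The matrix M, the step size mu and lam are illustrative choices.
rng = np.random.default_rng(1)
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])
eigs = np.linalg.eigvalsh(M)
eta, kappa = eigs.min(), eigs.max()
mu = 1.0 / kappa**2                          # lies in (0, 2*eta/kappa^2) for this M
tau = 1.0 - np.sqrt(1.0 - mu * (2.0 * eta - mu * kappa**2))
lam = 0.7                                    # any lambda in (0, 1]

T = lambda x: np.clip(x, -1.0, 1.0)          # nonexpansive
T_lam = lambda x: T(x) - lam * mu * (M @ T(x))

worst = 0.0
for _ in range(2000):
    x, y = 5.0 * rng.normal(size=2), 5.0 * rng.normal(size=2)
    worst = max(worst, np.linalg.norm(T_lam(x) - T_lam(y)) / np.linalg.norm(x - y))
print(worst, "<=", 1.0 - lam * tau)          # the observed ratio stays below 1 - lam*tau
```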
3 Main results
In this section, we use C to denote the feasible set and we always assume that the following conditions hold.
\(T_{i}:H\to H\) is a nonexpansive mapping for \(i=1,\ldots,N\) and \(T:H\to H\) is a ζ-strictly pseudocontractive mapping.
\(A:H\to H\) is Lipschitz continuous with modulus L, pseudomonotone on H, and sequentially weakly continuous on C, such that \({\varOmega }=\bigcap ^{N}_{i=0}\operatorname{Fix}(T_{i})\cap \operatorname{VI}(C,A) \neq \emptyset \) with \(T_{0}:=T\).
\(f:H\to H\) is a contraction with constant \(\delta \in [0,1)\), and \(F:H\to H\) is η-strongly monotone and κ-Lipschitzian such that \(\delta <\tau :=1-\sqrt{1-\rho (2\eta -\rho \kappa ^{2})}\) for \(\rho \in (0,\frac{2\eta }{\kappa ^{2}})\).
\(\{\sigma _{n}\}\subset [0,1]\) and \(\{\alpha _{n}\},\{\beta _{n}\},\{ \gamma _{n}\},\{\delta _{n}\}\subset (0,1)\) such that
(i) \(\sup_{n\geq 1}\frac{\sigma _{n}}{\alpha _{n}}<\infty \) and \(\beta _{n}+\gamma _{n}+\delta _{n}=1\) \(\forall n\geq 1\);
(ii) \(\zeta (\gamma _{n}+\delta _{n})\leq \gamma _{n}\) \(\forall n\geq 1\);
(iii) \(\sum^{\infty }_{n=1}\alpha _{n}=\infty \) and \(\lim_{n\to \infty }\alpha _{n}=0\);
(iv) \(\limsup_{n\to \infty }\beta _{n}<1\), \(\liminf_{n\to \infty }\beta _{n}>0\) and \(\liminf_{n\to \infty }\delta _{n}>0\).
Following Xu and Kim [33], we write \(T_{n}:=T_{n~\mathrm{mod}~N}\) for integer \(n\geq 1\), with the mod function taking values in the set \(\{1,2,\ldots,N\}\); that is, if \(n=jN+q\) for some integers \(j\geq 0\) and \(0\leq q< N\), then \(T_{n}=T_{N}\) if \(q=0\) and \(T_{n}=T_{q}\) if \(0< q< N\).
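For instance, a direct transcription of this indexing convention in Python (given here only as an illustrative helper) reads:

```python
def cyclic_index(n: int, N: int) -> int:
    """Index of T_n under the convention T_n := T_{n mod N} with values in {1, ..., N}:
    writing n = j*N + q with 0 <= q < N, the index is N when q = 0 and q otherwise."""
    q = n % N
    return N if q == 0 else q

print([cyclic_index(n, 3) for n in range(1, 8)])   # [1, 2, 3, 1, 2, 3, 1]
```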
Algorithm 3.1
Initialization: Given \(l\in (0,1)\), \(\gamma >0\) and \(\mu \in (0,1)\). Let \(x_{0},x_{1}\in H\) be arbitrary.
Iterative Steps: Calculate \(x_{n+1}\) as follows:
Step 1. Set \(w_{n}=\sigma _{n}(T_{n}x_{n}-T_{n}x_{n-1})+T_{n}x_{n}\) and compute \(y_{n}=P_{C}(w_{n}-\tau _{n}Aw_{n})\), where \(\tau _{n}\) is chosen to be the largest \(\tau \in \{\gamma ,\gamma l,\gamma l^{2},\ldots\}\) satisfying the Armijo-like rule (3.1): \(\tau \|Aw_{n}-Ay_{n}\|\leq \mu \|w_{n}-y_{n}\|\).
Step 2. Compute \(z_{n}=(I-\alpha _{n}\rho F)P_{C_{n}}(w_{n}-\tau _{n}Ay _{n})+\alpha _{n}f(x_{n})\) with \(C_{n}:=\{x\in H:\langle w_{n}-\tau _{n}Aw_{n}-y_{n},x-y_{n}\rangle \leq 0\}\).
Step 3. Compute
Again set \(n:=n+1\) and go to Step 1.
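For readers who wish to experiment, the following Python function sketches one iteration of Algorithm 3.1 in generic form. The projections and the operators A, f, F, T, \(T_{n}\) are supplied by the caller, and the Step 3 combination \(x_{n+1}=\beta _{n}x_{n}+\gamma _{n}z_{n}+\delta _{n}Tz_{n}\) used below is an assumption inferred from the identity \(\beta _{n}(x_{n}-z_{n})+\delta _{n}(Tz_{n}-z_{n})=x_{n+1}-z_{n}\) employed in the proof of Lemma 3.3; it is a sketch, not a verbatim transcription of the displayed scheme.

```python
import numpy as np

def algorithm_3_1_step(x, x_prev, A, f, F, T, T_n, proj_C, proj_halfspace,
                       sigma_n, alpha_n, beta_n, gamma_n, delta_n, rho,
                       gamma=1.0, l=0.5, mu=0.5):
    """One iteration x_n -> x_{n+1} of Algorithm 3.1 (a sketch; caller supplies the data).

    proj_C(x) is P_C(x); proj_halfspace(x, a, y) is the projection of x onto
    {z : <a, z - y> <= 0}, i.e. onto the set C_n of Step 2.
    """
    # Step 1: inertial extrapolation through T_n and the Armijo-like choice of tau_n
    w = T_n(x) + sigma_n * (T_n(x) - T_n(x_prev))
    tau = gamma
    y = proj_C(w - tau * A(w))
    while tau * np.linalg.norm(A(w) - A(y)) > mu * np.linalg.norm(w - y) and tau > 1e-12:
        tau *= l
        y = proj_C(w - tau * A(w))
    # Step 2: subgradient-extragradient step onto C_n, then the viscosity /
    # hybrid steepest-descent correction
    u = proj_halfspace(w - tau * A(y), w - tau * A(w) - y, y)
    z = u - alpha_n * rho * F(u) + alpha_n * f(x)
    # Step 3: Mann-type combination (assumed form; beta_n + gamma_n + delta_n = 1)
    return beta_n * x + gamma_n * z + delta_n * T(z)
```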
Lemma 3.1
The Armijo-like search rule (3.1) is well defined, and \(\min \{\gamma ,\frac{\mu l}{L}\}\leq \tau _{n}\leq \gamma \).
Proof
From the L-Lipschitz continuity of A we get \(\mu \|w_{n}-P_{C}(w_{n}-\gamma l^{m} Aw_{n})\|\geq \frac{\mu }{L}\|Aw _{n}-AP_{C}(w_{n}-\gamma l^{m} Aw_{n})\| \). Thus, (3.1) holds for all \(\gamma l^{m}\leq \frac{\mu }{L}\), and so \(\tau _{n}\) is well defined. Obviously, \(\tau _{n}\leq \gamma \). If \(\tau _{n}=\gamma \), then the inequality is true. If \(\tau _{n}<\gamma \), then from (3.1) we get \(\|Aw_{n}-AP_{C}(w_{n}-\frac{\tau _{n}}{l}Aw_{n})\|>\frac{\mu }{\frac{ \tau _{n}}{l}}\|w_{n}-P_{C}(w_{n} -\frac{\tau _{n}}{l}Aw_{n})\|\). Again from the L-Lipschitz continuity of A we obtain \(\tau _{n}>\frac{ \mu l}{L}\). Hence the inequality is valid. □
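For completeness, here is a direct Python/NumPy implementation of this backtracking rule; it returns \(\tau _{n}\) together with the corresponding \(y_{n}\). The test instance at the end borrows the mapping A and the set \(C=[-1,0]\) from Sect. 4 purely as an illustration.

```python
import numpy as np

def armijo_tau(w, A, proj_C, gamma=0.5, l=0.5, mu=0.5, max_iter=60):
    """Largest tau in {gamma, gamma*l, gamma*l^2, ...} with
    tau*||A(w) - A(P_C(w - tau*A(w)))|| <= mu*||w - P_C(w - tau*A(w))|| (rule (3.1))."""
    tau = gamma
    for _ in range(max_iter):
        y = proj_C(w - tau * A(w))
        if tau * np.linalg.norm(A(w) - A(y)) <= mu * np.linalg.norm(w - y):
            return tau, y
        tau *= l
    return tau, proj_C(w - tau * A(w))   # not reached for Lipschitz A, by Lemma 3.1

# Illustrative instance: the 2-Lipschitz mapping A and the set C = [-1, 0] of Sect. 4.
A = lambda x: 1.0 / (1.0 + np.abs(np.sin(x))) - 1.0 / (1.0 + np.abs(x))
proj_C = lambda x: np.clip(x, -1.0, 0.0)
tau_n, y_n = armijo_tau(np.array([0.3]), A, proj_C)
print(tau_n, y_n, "lower bound min{gamma, mu*l/L}:", min(0.5, 0.5 * 0.5 / 2.0))
```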
Lemma 3.2
Let \(\{w_{n}\}\), \(\{y_{n}\}\), \(\{z_{n}\}\) be the sequences generated by Algorithm 3.1. Then
where \(u_{n}:=P_{C_{n}}(w_{n}-\tau _{n}Ay_{n})\) \(\forall n\geq 1\).
Proof
First, take an arbitrary \(p\in {\varOmega }\subset C \subset C_{n}\). We note that
So, it follows that \(\|u_{n}-p\|^{2}\leq \|w_{n}-p\|^{2}-\|u_{n}-w _{n}\|^{2}-2\langle u_{n}-p,\tau _{n}Ay_{n}\rangle \), from which, together with (3.1) and the pseudomonotonicity of A, we deduce that \(\langle Ay_{n},p-y_{n}\rangle \leq 0\) and
Since \(u_{n}=P_{C_{n}}(w_{n}-\tau _{n}Ay_{n})\) with \(C_{n}:=\{x\in H: \langle w_{n}-\tau _{n}Aw_{n}-y_{n},x-y_{n}\rangle \leq 0\}\), we have \(\langle w_{n}-\tau _{n}Aw_{n}-y_{n},u_{n}-y_{n}\rangle \leq 0\), which, together with (3.1), implies that
Therefore, substituting the last inequality into (3.4), we infer that
In addition, from Algorithm 3.1 we have
Using Lemma 2.2, Lemma 2.7, and the convexity of the function \(h(t)=t^{2}\) \(\forall t\in {\mathbf{R}}\), from (3.5) we get
This completes the proof. □
Lemma 3.3
Let \(\{w_{n}\}\), \(\{x_{n}\}\), \(\{y_{n}\}\), \(\{z_{n}\}\) be bounded sequences generated by Algorithm 3.1. If \(x_{n}-x_{n+1}\to 0\), \(w _{n}-x_{n}\to 0\), \(w_{n}-y_{n}\to 0\), \(w_{n}-z_{n}\to 0\) and \(\exists \{w _{n_{k}}\}\subset \{w_{n}\}\) such that \(w_{n_{k}}\rightharpoonup z \in H\), then \(z\in {\varOmega }\).
Proof
From Algorithm 3.1, we get \(T_{n}x_{n}-x_{n}+\sigma _{n}(T_{n}x_{n}-T_{n}x_{n-1})=w_{n}-x_{n}\) \(\forall n\geq 1\), and hence \(\|T_{n}x_{n}-x_{n}\|\leq \|w_{n}-x_{n}\|+\sigma _{n}\|T_{n}x_{n}-T _{n}x_{n-1}\|\leq \|w_{n}-x_{n}\|+\|x_{n}-x_{n-1}\|\). Utilizing the assumptions \(x_{n}-x_{n+1}\to 0\) and \(w_{n}-x_{n}\to 0\), we obtain
Combining the assumptions \(w_{n}-x_{n}\to 0\) and \(w_{n}-z_{n}\to 0\) implies that, as \(n\to \infty \),
Moreover, by Algorithm 3.1 we get \(z_{n}-x_{n}=u_{n}-x_{n}-\alpha _{n} \rho Fu_{n}+\alpha _{n}f(x_{n})\) with \(u_{n}:=P_{C_{n}} (w_{n}-\tau _{n}Ay_{n})\). So it follows from the boundedness of \(\{x_{n}\}\), \(\{u _{n}\}\) that, as \(n\to \infty \),
Also, by Algorithm 3.1 we get \(\beta _{n}(x_{n}-z_{n})+\delta _{n}(Tz _{n}-z_{n})=x_{n+1}-z_{n}\), which immediately yields
Since \(x_{n}-x_{n+1}\to 0\), \(z_{n}-x_{n}\to 0\) and \(\liminf_{n\to \infty }\delta _{n}>0\), we obtain
Noticing \(y_{n}=P_{C}(I-\tau _{n}A)w_{n}\), we have \(\langle w_{n}-\tau _{n}Aw_{n}-y_{n},x-y_{n}\rangle \leq 0\) \(\forall x\in C\), and hence
Being weakly convergent, \(\{w_{n_{k}}\}\) is bounded. Then, according to the Lipschitz continuity of A, \(\{Aw_{n_{k}}\}\) is bounded. Since \(w_{n}-y_{n}\to 0\), \(\{y_{n_{k}}\}\) is bounded as well. Note that \(L\tau _{n}\geq \min \{L\gamma ,\mu l \}\). So, from (3.8) we get \(\liminf_{k\to \infty }\langle Aw_{n_{k}},x-w_{n_{k}}\rangle \geq 0\) \(\forall x\in C\). Meanwhile, observe that \(\langle x-y_{n},Ay_{n} \rangle =\langle Ay_{n}-Aw_{n},x-w_{n}\rangle +\langle Aw_{n},x-w_{n} \rangle +\langle Ay_{n},w_{n}-y_{n} \rangle \). Since \(w_{n}-y_{n} \to 0\), from the L-Lipschitz continuity of A we obtain \(Ay_{n}-Aw_{n} \to 0\), which together with (3.8) yields \(\liminf_{k\to \infty } \langle Ay_{n_{k}},x-y_{n_{k}}\rangle \geq 0\) \(\forall x\in C\).
Next we show that \(\lim_{n\to \infty }\|x_{n}-T_{r}x_{n}\|=0\) for \(r=1,\ldots,N\). Indeed, note that, for \(i=1,\ldots,N\),
Hence from (3.6) and the assumption \(x_{n}-x_{n+1}\to 0\) we get \(\lim_{n\to \infty }\|x_{n}-T_{n+i}x_{n}\|=0\) for \(i=1,\ldots,N\). This immediately implies that
We now take a sequence \(\{\varepsilon _{k}\}\subset (0,1)\) satisfying \(\varepsilon _{k}\downarrow 0\) as \(k\to \infty \). For all \(k\geq 1\), we denote by \(m_{k}\) the smallest positive integer such that
Since \(\{\varepsilon _{k}\}\) is decreasing, it is clear that \(\{m_{k}\}\) is increasing. Noticing that \(\{y_{m_{k}}\}\subset C\) guarantees \(Ay_{m_{k}}\neq 0\) \(\forall k\geq 1\), and setting \(u_{m_{k}}=\frac{Ay _{m_{k}}}{\|Ay_{m_{k}}\|^{2}}\), we get \(\langle Ay_{m_{k}},u_{m_{k}} \rangle =1\) \(\forall k\geq 1\). So, from (3.10) we get \(\langle Ay_{m _{k}},x+\varepsilon _{k}u_{m_{k}}-y_{m_{k}} \rangle \geq 0\) \(\forall k \geq 1\). Again from the pseudomonotonicity of A we have \(\langle A(x+ \varepsilon _{k}u_{m_{k}}), x+\varepsilon _{k}u_{m_{k}}-y_{m_{k}}\rangle \geq 0\) \(\forall k\geq 1\). This immediately leads to
We claim that \(\lim_{k\to \infty }\varepsilon _{k}u_{m_{k}}=0\). In fact, from \(w_{n_{k}}\rightharpoonup z\) and \(w_{n}-y_{n}\to 0\), we obtain \(y_{n_{k}}\rightharpoonup z\). So, \(\{y_{n}\}\subset C\) guarantees \(z\in C\). Again from the sequential weak continuity of A, one knows that \(Ay_{n_{k}}\rightharpoonup Az\). Thus, one has \(Az\neq 0\) (otherwise, z is a solution). Taking into account the sequential weak lower semicontinuity of the norm \(\|\cdot \|\), one gets \(0<\|Az\|\leq \liminf_{k\to \infty }\|Ay_{n_{k}}\|\). Note that \(\{y_{m_{k}}\}\subset \{y_{n_{k}}\}\) and \(\varepsilon _{k}\downarrow 0\) as \(k\to \infty \). So it follows that \(0\leq \limsup_{k\to \infty }\|\varepsilon _{k}u_{m _{k}}\| =\limsup_{k\to \infty }\frac{\varepsilon _{k}}{\|Ay_{m_{k}}\|} \leq \frac{\limsup_{k\to \infty }\varepsilon _{k}}{\liminf_{k\to \infty }\|Ay_{n_{k}}\|}=0\). Hence \(\varepsilon _{k}u_{m_{k}}\to 0\).
Next one claims \(z\in {\varOmega }\). Indeed, from \(w_{n}-x_{n}\to 0\) and \(w_{n_{k}}\rightharpoonup z\), we get \(x_{n_{k}} \rightharpoonup z\). From (3.9) we have \((x_{n_{k}}-T_{r}x_{n_{k}})\to 0\) for \(r=1,\ldots,N\). Note that Lemma 2.5 guarantees the demiclosedness of \(I-T_{r}\) at zero for \(r=1,\ldots,N\). Thus \(z\in \operatorname{Fix}(T_{r})\). Since r is an arbitrary element in the finite set \(\{1,\ldots,N\}\), we get \(z\in \bigcap ^{N}_{r=1}\operatorname{Fix}(T_{r})\). Meanwhile, from \((w_{n}-z_{n}) \to 0\) and \(w_{n_{k}}\rightharpoonup z\), we get \(z_{n_{k}}\rightharpoonup z\). From (3.7) we have \((z_{n_{k}}-Tz_{n_{k}})\to 0\). From Lemma 2.5 it follows that \(I-T\) is demiclosed at zero, and hence we get \((I-T)z=0\), i.e., \(z\in \operatorname{Fix}(T)\). On the other hand, letting \(k\to \infty \), we deduce that the right hand side of (3.11) tends to zero by the uniform continuity of A, the boundedness of \(\{w_{m_{k}}\}\), \(\{u _{m_{k}}\}\) and the limit \(\lim_{k\to \infty }\varepsilon _{k}u_{m_{k}}=0\). Thus, we get \(\langle Ax,x-z\rangle =\liminf_{k\to \infty }\langle Ax,x-y_{m_{k}} \rangle \geq 0\) \(\forall x\in C\). By Lemma 2.3 we have \(z\in \operatorname{VI}(C,A)\). Therefore, \(z\in \bigcap ^{N}_{i=0}\operatorname{Fix}(T _{i})\cap \operatorname{VI}(C,A)={\varOmega }\). This completes the proof. □
Theorem 3.1
Let the sequence \(\{x_{n}\}\) be generated by Algorithm 3.1. Then
where \(x^{*}\in {\varOmega }\) is a unique solution to the VIP: \(\langle (\rho F-f)x^{*}, p-x^{*}\rangle \geq 0\) \(\forall p\in {\varOmega }\).
Proof
First of all, since \(\limsup_{n\to \infty }\beta _{n}<1\) and \(\liminf_{n\to \infty }\beta _{n}>0\), we may assume, without loss of generality, that \(\{\beta _{n}\}\subset [a,b]\subset (0,1)\). We claim that \(P_{\varOmega }(f+I-\rho F)\) is a contraction. Indeed, since \(P_{\varOmega }\) is nonexpansive (by Lemma 2.1(i)), f is a δ-contraction, and \(\|(I-\rho F)x-(I-\rho F)y\|\leq (1-\tau )\|x-y\|\) \(\forall x,y\in H\) by Lemma 2.7 (applied with \(T=I\), \(\lambda =1\) and μ replaced by ρ), we have \(\|P_{\varOmega }(f+I-\rho F)x-P_{\varOmega }(f+I-\rho F)y\|\leq [1-(\tau -\delta )]\|x-y\|\) \(\forall x,y\in H\), where \(\tau -\delta >0\). By the Banach contraction principle, \(P_{\varOmega }(f+I-\rho F)\) has a unique fixed point, say \(x^{*}\in H\), that is, \(x^{*}=P_{\varOmega }(f+I-\rho F) x^{*}\). Thus, there is a unique solution \(x^{*}\in {\varOmega }=\bigcap ^{N}_{i=0}\operatorname{Fix}(T_{i}) \cap \operatorname{VI}(C,A)\) to the VIP
It is now easy to see that the necessity of the theorem is valid. Indeed, if \(x_{n}\to x^{*}\in {\varOmega }=\bigcap ^{N}_{i=0} \operatorname{Fix}(T _{i})\cap \operatorname{VI}(C,A)\), then \(x^{*}=T_{i}x^{*}\) for \(i=0,1,\ldots,N\) and \(x^{*}=P_{C}(x^{*}-\tau _{n}Ax^{*})\), which, together with Algorithm 3.1, imply that
and hence
In addition, it is clear that
Next we show the sufficiency of the theorem. To this end, we assume \(\lim_{n\to \infty }(\|x_{n}-x_{n+1}\|+\|x_{n}-y_{n}\|)=0\) and divide the proof of the sufficiency into several steps.
Step 1. We show that \(\{x_{n}\}\) is bounded. Indeed, take an arbitrary \(p\in {\varOmega }=\bigcap ^{N}_{i=0}\operatorname{Fix}(T_{i})\cap \operatorname{VI}(C,A)\). Then \(Tp=p\), \(T_{n}p=p\) \(\forall n\geq 1\), and (3.5) holds, i.e.,
This immediately implies that
From the definition of \(w_{n}\), we get
Since \(\sup_{n\geq 1}\frac{\sigma _{n}}{\alpha _{n}}<\infty \) and \(\sup_{n\geq 1}\|x_{n}-x_{n-1}\|<\infty \), \(\sup_{n\geq 1}\frac{\sigma _{n}}{\alpha _{n}}\|x_{n}-x_{n-1}\|<\infty \), which hence guarantees that there is \(M_{1}>0\) such that
Combining (3.14), (3.15) and (3.16), we obtain
So, from Algorithm 3.1, Lemma 2.7 and (3.17) it follows that
which, together with Lemma 2.6 and \((\gamma _{n}+\delta _{n})\zeta \leq \gamma _{n}\), yields
By induction, we obtain \(\|x_{n}-p\|\leq \max \{\|x_{1}-p\|,\frac{M _{1}+\|(\rho F-f)p\|}{\tau -\delta }\}\) \(\forall n\geq 1\). Thus, \(\{x_{n}\}\) is bounded, and so are the sequences \(\{u_{n}\}\), \(\{w_{n}\}\), \(\{y_{n}\}\), \(\{z_{n}\}\), \(\{f(x_{n})\}\), \(\{Tz_{n}\}\), \(\{Fu _{n}\}\), \(\{T_{n}x_{n}\}\).
Step 2. We show that
for some \(M_{4}>0\). Indeed, utilizing Lemma 2.6, Lemma 3.2 and the convexity of \(\|\cdot \|^{2}\), from \((\gamma _{n}+\delta _{n}) \zeta \leq \gamma _{n}\) we get
where \(\sup_{n\geq 1}2\|(f-\rho F)p\|\|z_{n}-p\|\leq M_{2}\) for some \(M_{2}>0\). Also, from (3.17) we have
where \(\sup_{n\geq 1}(2M_{1}\|x_{n}-p\|+\beta _{n}M^{2}_{1})\leq M_{3}\) for some \(M_{3}>0\). Substituting (3.19) into (3.18), we obtain
where \(M_{4}:=M_{2}+M_{3}\). This immediately implies that
Step 3. We show that
for some \(M>0\). Indeed, we have
Combining (3.18) and (3.22), we have
where \(\sup_{n\geq 1}\{\|x_{n}-p\|,\alpha _{n}\|x_{n}-x_{n-1}\|\} \leq M\) for some \(M>0\).
Step 4. We show that \(\{x_{n}\}\) converges strongly to a unique solution \(x^{*}\in {\varOmega }\) to the VIP (3.12). Indeed, putting \(p=x^{*}\), we deduce from (3.23) that
By Lemma 2.4, it suffices to show that \(\limsup_{n\to \infty }\langle (f-\rho F)x^{*},z_{n}-x^{*}\rangle \leq 0\). From (3.21), \(x_{n}-x_{n+1} \to 0\), \(\alpha _{n}\to 0\) and \(\{\beta _{n}\}\subset [a,b]\subset (0,1)\), we obtain
This immediately implies that
Since \(z_{n}=(I-\alpha _{n}\rho F)u_{n}+\alpha _{n}f(x_{n})\) with \(u_{n}:=P_{C_{n}}(w_{n}-\tau _{n}Ay_{n})\), from (3.25) and the boundedness of \(\{x_{n}\}\), \(\{u_{n}\}\), we get
and hence
(due to the assumption \(\|x_{n}-y_{n}\|\to 0\)). Obviously, the assumption \(\|x_{n}-y_{n}\|\to 0\), together with (3.25) and (3.26), guarantees that, as \(n\to \infty \), \(\|w_{n}-x_{n}\|\leq \|w_{n}-y _{n}\|+\|y_{n}-x_{n}\|\to 0\) and \(\|w_{n}-z_{n}\|\leq \|w_{n}-y_{n}\|+ \|y_{n}-z_{n}\|\to 0\). Since \(\{z_{n}\}\) is bounded, we may choose a subsequence \(\{z_{n_{k}}\}\) of \(\{z_{n}\}\) such that
Since H is reflexive and \(\{z_{n}\}\) is bounded, we may assume, without loss of generality, that \(z_{n_{k}}\rightharpoonup \tilde{z}\). Hence from (3.28) one gets
It is easy to see from \(w_{n}-z_{n}\to 0\) and \(z_{n_{k}}\rightharpoonup \tilde{z}\) that \(w_{n_{k}}\rightharpoonup \tilde{z}\). Since \(x_{n}-x_{n+1}\to 0\), \(w_{n}-x_{n}\to 0\), \(w_{n}-y_{n}\to 0\), \(w_{n}-z_{n} \to 0\) and \(w_{n_{k}}\rightharpoonup \tilde{z}\), by Lemma 3.3 we infer that \(\tilde{z}\in {\varOmega }\). Therefore, from (3.12) and (3.29) we conclude that
Note that \(\{\beta _{n}\}\subset [a,b]\subset (0,1)\), \(\{\alpha _{n}(1- \beta _{n})(\tau -\delta )\}\subset [0,1]\), \(\sum^{\infty }_{n=1} \alpha _{n}(1-\beta _{n})(\tau -\delta )=\infty \), and
Consequently, applying Lemma 2.4 to (3.24), we have \(\lim_{n\to \infty }\|x _{n}-x^{*}\|=0\). This completes the proof. □
Next, we introduce another composite inertial subgradient–extragradient algorithm with a line-search process.
Algorithm 3.2
Initialization: Given \(\gamma >0\), \(l \in (0,1)\), \(\mu \in (0,1)\). Let \(x_{0},x_{1}\in H\) be arbitrary.
Iterative Steps: Calculate \(x_{n+1}\) as follows:
Step 1. Set \(w_{n}=\sigma _{n}(T_{n}x_{n}-T_{n}x_{n-1})+T_{n}x_{n}\) and compute \(y_{n}=P_{C}(w_{n}-\tau _{n}Aw_{n})\), where \(\tau _{n}\) is chosen to be the largest \(\tau \in \{\gamma ,\gamma l,\gamma l^{2},\ldots\}\) satisfying the Armijo-like rule \(\tau \|Aw_{n}-Ay_{n}\|\leq \mu \|w_{n}-y_{n}\|\).
Step 2. Compute \(z_{n}=(I-\alpha _{n}\rho F)P_{C_{n}}(w_{n}-\tau _{n}Ay _{n})+\alpha _{n}f(x_{n})\) with \(C_{n}:=\{x\in H:\langle w_{n}-\tau _{n}Aw_{n}-y_{n},x-y_{n}\rangle \leq 0\}\).
Step 3. Compute
Again set \(n:=n+1\) and go to Step 1.
It is worth pointing out that Lemmas 3.1, 3.2 and 3.3 are still valid for Algorithm 3.2.
Theorem 3.2
Let the sequence \(\{x_{n}\}\) be generated by Algorithm 3.2. Then
where \(x^{*}\in {\varOmega }\) is a unique solution to the VIP: \(\langle (\rho F-f)x^{*}, p-x^{*}\rangle \geq 0\) \(\forall p\in {\varOmega }\).
Proof
Utilizing the same arguments as in the proof of Theorem 3.1, we deduce that there exists a unique solution \(x^{*} \in {\varOmega }=\bigcap ^{N}_{i=0}\operatorname{Fix}(T_{i})\cap \operatorname{VI}(C,A)\) to the VIP (3.12), and that the necessity of the theorem is valid.
Next we show the sufficiency of the theorem. To this end, we assume \(\lim_{n\to \infty }(\|x_{n}-x_{n+1}\|+\|x_{n}-y_{n}\|)=0\) and divide the proof of the sufficiency into several steps.
Step 1. We show that \(\{x_{n}\}\) is bounded. Indeed, utilizing the same arguments as in Step 1 of the proof of Theorem 3.1, we obtain inequalities (3.13)–(3.17). So, from Algorithm 3.2, Lemma 2.7 and (3.17) it follows that
which, together with Lemma 2.6 and \((\gamma _{n}+\delta _{n})\zeta \leq \gamma _{n}\), yields
By induction, we obtain \(\|x_{n}-p\|\leq \max \{\frac{ \frac{M_{1}}{1-b}+\|(f-\rho F)p\|}{\tau -\delta },\|x_{1}-p\|\}\) \(\forall n\geq 1\). Thus, \(\{x_{n}\}\) is bounded, and hence the sequences \(\{u_{n}\}\), \(\{w_{n}\}\), \(\{y_{n}\}\), \(\{z_{n}\}\), \(\{f(x_{n})\}\), \(\{Tz _{n}\}\), \(\{Fu_{n}\}\), \(\{T_{n}x_{n}\}\) are also bounded.
Step 2. We show that
for some \(M_{4}>0\). Indeed, utilizing Lemma 2.6, Lemma 3.2 and the convexity of \(\|\cdot \|^{2}\), from \((\gamma _{n}+\delta _{n}) \zeta \leq \gamma _{n}\) we get
where \(\sup_{n\geq 1}2\|(f-\rho F)p\|\|z_{n}-p\|\leq M_{2}\) for some \(M_{2}>0\). Also, from (3.17) we have
where \(\sup_{n\geq 1}(2M_{1}\|x_{n}-p\|+\beta _{n}M^{2}_{1})\leq M_{3}\) for some \(M_{3}>0\). Substituting (3.35) into (3.34) guarantees
where \(M_{4}:=M_{2}+M_{3}\). This immediately implies that
Step 3. We show that
for some \(M>0\). Indeed, we have
Combining (3.34) and (3.38), we have
where \(\sup_{n\geq 1}\{\|x_{n}-p\|,\alpha _{n}\|x_{n}-x_{n-1}\|\} \leq M\) for some \(M>0\).
Step 4. We show that \(\{x_{n}\}\) converges strongly to a unique solution \(x^{*}\in {\varOmega }\) to the VIP (3.12). Indeed, utilizing the same argument as in Step 4 of the proof of Theorem 3.1, we obtain the desired assertion. This completes the proof. □
Remark 3.1
Compared with the corresponding results in Kraikaew and Saejung [23], Thong and Hieu [24, 25], our results improve and extend them in the following aspects.
(i) The problem of finding an element of \(\operatorname{VI}(C,A)\) in [23] is extended to develop our problem of finding an element of \(\bigcap ^{N}_{i=0}\operatorname{Fix}(T_{i})\cap \operatorname{VI}(C,A)\) where \(T_{i}\) is nonexpansive for \(i=1,\ldots,N\) and \(T_{0}=T\) is strictly pseudocontractive. The Halpern subgradient–extragradient method for solving the VIP in [23] is extended to develop our composite inertial subgradient–extragradient method with line-search process for solving the VIP and CFPP, which is based on an inertial subgradient–extragradient method with a line-search process, a hybrid steepest-descent method, a viscosity approximation method and a Mann iteration method.
(ii) The problem of finding an element of \(\operatorname{VI}(C,A)\) in [24] is extended to develop our problem of finding an element of \(\bigcap ^{N}_{i=0}\operatorname{Fix}(T_{i})\cap \operatorname{VI}(C,A)\) where \(T_{i}\) is nonexpansive for \(i=1,\ldots,N\) and \(T_{0}=T\) is strictly pseudocontractive. The inertial subgradient–extragradient method with weak convergence for solving the VIP in [24] is extended to develop our composite inertial subgradient–extragradient method with line-search process (which is strongly convergent) for solving the VIP and CFPP, which is based on an inertial subgradient–extragradient method with line-search process, a hybrid steepest-descent method, a viscosity approximation method and a Mann iteration method.
(iii) The problem of finding an element of \(\operatorname{VI}(C,A)\cap \operatorname{Fix}(T)\) (where A is monotone and T is quasi-nonexpansive) in [25] is extended to develop our problem of finding an element of \(\bigcap ^{N}_{i=0}\operatorname{Fix}(T_{i})\cap \operatorname{VI}(C,A)\), where A is pseudomonotone, \(T_{i}\) is nonexpansive for \(i=1,\ldots,N\) and \(T_{0}=T\) is strictly pseudocontractive. The inertial subgradient–extragradient method with a line-search process (which is weakly convergent) for solving the VIP and FPP in [25] is extended to develop our composite inertial subgradient–extragradient method with a line-search process (which is strongly convergent) for solving the VIP and CFPP, which is based on an inertial subgradient–extragradient method with a line-search process, a hybrid steepest-descent method, a viscosity approximation method and a Mann iteration method. It is worth pointing out that the inertial subgradient–extragradient method with a line-search process in [25] combines the inertial subgradient–extragradient method [24] with the Mann iteration method.
4 An example
In this section, our main results are applied to solve the VIP and CFPP in an illustrative example. The initial point \(x_{0}=x_{1}\) is randomly chosen in R. Take \(f(x)=F(x)=\frac{1}{2}x\), \(\gamma =l= \mu =\frac{1}{2}\), \(\sigma _{n}=\alpha _{n}=\frac{1}{n+1}\), \(\beta _{n}= \frac{1}{3}\), \(\gamma _{n}=\frac{1}{2}\), \(\delta _{n}=\frac{1}{6}\) and \(\rho =2\). Then we know that \(\delta =\kappa =\eta =\frac{1}{2}\), and \(\tau =1-\sqrt{1-\rho (2\eta -\rho \kappa ^{2})}=1-\sqrt{1-2(2\cdot \frac{1}{2}-2\cdot \frac{1}{4})}=1>\frac{1}{2}=\delta \).
We first provide an example of a Lipschitz continuous and pseudomonotone mapping A, a strictly pseudocontractive mapping T and a nonexpansive mapping \(T_{1}\) with \({\varOmega }=\operatorname{Fix}(T_{1})\cap \operatorname{Fix}(T) \cap \operatorname{VI}(C,A)\neq \emptyset \). Let \(C=[-1,0]\) and \(H=\mathbf{R}\) with the inner product \(\langle a,b\rangle =ab\) and induced norm \(\|\cdot \|=|\cdot |\). Let \(A,T,T_{1}:H\to H\) be defined as \(Ax:=\frac{1}{1+|\sin x|}-\frac{1}{1+|x|}\), \(Tx:=\frac{1}{2}x+ \frac{3}{8}\sin x\) and \(T_{1}x:=\sin x\) for all \(x\in H\). We now show that A is pseudomonotone and Lipschitz continuous with \(L=2\). Indeed, for all \(x,y\in H\) we have
This implies that A is Lipschitz continuous with \(L=2\). Next, we show that A is pseudomonotone. For any given \(x,y\in H\), it is clear that
Furthermore, it is easy to see that T is strictly pseudocontractive with constant \(\zeta =\frac{1}{2}\). Indeed, we observe that, for all \(x,y\in H\),
It is clear that \((\gamma _{n}+\delta _{n})\zeta =(\frac{1}{2}+ \frac{1}{6})\cdot \frac{1}{2}\leq \frac{1}{2}=\gamma _{n}\) for all \(n\geq 1\). In addition, it is clear that \(T_{1}\) is nonexpansive and \(\operatorname{Fix}(T_{1})=\{0\}\). Therefore, \({\varOmega }=\operatorname{Fix}(T_{1}) \cap \operatorname{Fix}(T)\cap \operatorname{VI}(C,A)=\{0\}\neq \emptyset \). In this case, Algorithm 3.1 can be rewritten as follows:
where for each \(n\geq 1\), \(C_{n}\) and \(\tau _{n}\) are chosen as in Algorithm 3.1. Then, by Theorem 3.1, we know that \(\{x_{n}\}\) converges to \(0\in {\varOmega }=\operatorname{Fix}(T_{1})\cap \operatorname{Fix}(T)\cap \operatorname{VI}(C,A)\) if and only if \(|x_{n}-x_{n+1}| +|x_{n}-y_{n}| \to 0\) as \(n\to \infty \).
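The following Python script is a runnable sketch of this instance of Algorithm 3.1 with the data stated above. As in the generic sketch after Algorithm 3.1, the Step 3 combination \(x_{n+1}=\beta _{n}x_{n}+\gamma _{n}z_{n}+\delta _{n}Tz_{n}\) is an assumption inferred from the proof of Lemma 3.3, and the initial point is fixed at 0.7 instead of being chosen randomly.

```python
import numpy as np

# Runnable sketch of the Sect. 4 instance (H = R, C = [-1, 0]) with the stated data:
# f(x) = F(x) = x/2, gamma = l = mu = 1/2, sigma_n = alpha_n = 1/(n+1), beta_n = 1/3,
# gamma_n = 1/2, delta_n = 1/6, rho = 2.  Step 3 uses the assumed Mann-type form
# x_{n+1} = beta_n*x_n + gamma_n*z_n + delta_n*T(z_n) (inferred from the proof of
# Lemma 3.3); the initial point is fixed at 0.7 instead of being chosen randomly.

A  = lambda x: 1.0 / (1.0 + abs(np.sin(x))) - 1.0 / (1.0 + abs(x))   # pseudomonotone, 2-Lipschitz
T  = lambda x: 0.5 * x + 0.375 * np.sin(x)                           # 1/2-strict pseudocontraction
T1 = lambda x: np.sin(x)                                             # nonexpansive
f  = lambda x: 0.5 * x
F  = lambda x: 0.5 * x
proj_C = lambda x: np.clip(x, -1.0, 0.0)

gamma, l, mu, rho = 0.5, 0.5, 0.5, 2.0
x_prev = x = 0.7                                     # x_0 = x_1
for n in range(1, 201):
    sigma_n = alpha_n = 1.0 / (n + 1)
    beta_n, gamma_n, delta_n = 1.0 / 3.0, 0.5, 1.0 / 6.0
    w = T1(x) + sigma_n * (T1(x) - T1(x_prev))       # N = 1, so T_n = T_1 for all n
    tau = gamma                                      # Armijo-like choice of tau_n
    y = proj_C(w - tau * A(w))
    while tau * abs(A(w) - A(y)) > mu * abs(w - y) and tau > 1e-12:
        tau *= l
        y = proj_C(w - tau * A(w))
    a = w - tau * A(w) - y                           # C_n = {z : a*(z - y) <= 0}
    v = w - tau * A(y)
    u = v if a * (v - y) <= 0 else v - a * (v - y) / (a * a)    # P_{C_n} in R
    z = u - alpha_n * rho * F(u) + alpha_n * f(x)
    x_prev, x = x, beta_n * x + gamma_n * z + delta_n * T(z)

print(x)   # expected to approach 0, the unique point of Omega
```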
On the other hand, Algorithm 3.2 can be rewritten as follows:
where for each \(n\geq 1\), \(C_{n}\) and \(\tau _{n}\) are chosen as in Algorithm 3.2. Then, by Theorem 3.2, we know that \(\{x_{n}\}\) converges to \(0\in {\varOmega }=\operatorname{Fix}(T_{1})\cap \operatorname{Fix}(T)\cap \operatorname{VI}(C,A)\) if and only if \(|x_{n}-x_{n+1}| +|x_{n}-y_{n}| \to 0\) as \(n\to \infty \).
References
Wang, Z.M.: Convergence theorems based on the shrinking projection method for hemi-relatively nonexpansive mappings, variational inequalities and equilibrium problems. Nonlinear Funct. Anal. Appl. 22, 459–483 (2017)
Qin, X., Yao, J.C.: Weak and strong convergence of splitting algorithms in Banach spaces. Optimization (2019). https://doi.org/10.1080/02331934.2019.1654475
Kim, J.K., Tuyen, T.M.: Alternating resolvent algorithms for finding a common zero of two accretive operators in Banach spaces. J. Korean Math. Soc. 54, 1905–1926 (2017)
Qin, X., Yao, J.C.: A viscosity iterative method for a split feasibility problem. J. Nonlinear Convex Anal. 20, 1497–1506 (2019)
Takahashi, W., Yao, J.C.: The split common fixed point problem for two finite families of nonlinear mappings in Hilbert spaces. J. Nonlinear Convex Anal. 20, 173–195 (2019)
Ceng, L.C., et al.: Hybrid viscosity extragradient method for systems of variational inequalities, fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Fixed Point Theory 19, 487–501 (2018)
Qin, X., An, N.T.: Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets. Comput. Optim. Appl. (2019). https://doi.org/10.1007/s10589-019-00124-7
Cho, S.Y.: Strong convergence analysis of a hybrid algorithm for nonlinear operators in a Banach space. J. Appl. Anal. Comput. 8, 19–31 (2018)
Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Èkon. Mat. Metody 12, 747–756 (1976)
Qin, X., Yao, J.C.: Weak convergence of a Mann-like algorithm for nonexpansive and accretive operators. J. Inequal. Appl. 2016, 232 (2016)
Qin, X., Petrusel, A., Yao, J.C.: CQ iterative algorithms for fixed points of nonexpansive mappings and split feasibility problems in Hilbert spaces. J. Nonlinear Convex Anal. 19, 157–165 (2018)
Bin Dehaish, B.A., et al.: Weak and strong convergence of algorithms for the sum of two accretive operators with applications. J. Nonlinear Convex Anal. 16, 1321–1336 (2015)
Dehaish, B.A.B.: Weak convergence of a splitting algorithm in Hilbert spaces. J. Appl. Anal. Comput. 7, 427–438 (2017)
Qin, X., Yao, J.C.: Projection splitting algorithms for nonself operators. J. Nonlinear Convex Anal. 18, 925–935 (2017)
Chang, S.S., Wen, C.F., Yao, J.C.: Common zero point for a finite family of inclusion problems of accretive mappings in Banach spaces. Optimization 67, 1183–1196 (2018)
Takahashi, W.: The shrinking projection method for a finite family of demimetric mappings with variational inequality problems in a Hilbert space. Fixed Point Theory 19, 407–419 (2018)
Qin, X., Cho, S.Y., Wang, L.: Strong convergence of an iterative algorithm involving nonlinear mappings of nonexpansive and accretive type. Optimization 67, 1377–1388 (2018)
Ansari, Q.H., Babu, F., Yao, J.C.: Regularization of proximal point algorithms in Hadamard manifolds. J. Fixed Point Theory Appl. 21, 25 (2019)
Zhao, X., et al.: Linear regularity and linear convergence of projection-based methods for solving convex feasibility problems. Appl. Math. Optim. 78, 613–641 (2018)
Nguyen, L.V., Qin, X.: Some results on strongly pseudomonotone quasi-variational inequalities. Set-Valued Var. Anal. (2019). https://doi.org/10.1007/s11228-019-00508-1
Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318–335 (2011)
Cho, S.Y., Qin, X.: On the strong convergence of an iterative process for asymptotically strict pseudocontractions and equilibrium problems. Appl. Math. Comput. 235, 430–438 (2014)
Kraikaew, R., Saejung, S.: Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 163, 399–412 (2014)
Thong, D.V., Hieu, D.V.: Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 79, 597–610 (2018)
Thong, D.V., Hieu, D.V.: Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems. Numer. Algorithms 80, 1283–1307 (2019)
Qin, X., Cho, S.Y., Wang, L.: Iterative algorithms with errors for zero points of m-accretive operators. Fixed Point Theory Appl. 2013, Article ID 148 (2013)
Ceng, L.C., Ansari, Q.H., Yao, J.C.: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 74, 5286–5302 (2011)
An, N.T., Nam, N.M., Qin, X.: Solving k-center problems involving sets based on optimization techniques. Comput. Optim. Appl. (2019). https://doi.org/10.1007/s10898-019-00834-6
Xue, Z., Zhou, H., Cho, Y.J.: Iterative solutions of nonlinear equations for m-accretive operators in Banach spaces. J. Nonlinear Convex Anal. 1, 313–320 (2000)
Zhou, H.: Convergence theorems of fixed points for κ-strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 69, 456–462 (2008)
Yao, Y., Liou, Y.C., Kang, S.M.: Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method. Comput. Math. Appl. 59, 3472–3480 (2010)
Yamada, I.: The hybrid steepest-descent method for variational inequality problems over the intersection of the fixed point sets of nonexpansive mappings. In: Butnariu, D., Censor, Y., Reich, S. (eds.) Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, pp. 473–504. North-Holland, Amsterdam (2001)
Xu, H.K., Kim, T.H.: Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 119, 185–201 (2003)
Acknowledgements
We thank the two anonymous referees for useful suggestions, which improved the presentation of this paper a lot.
Availability of data and materials
Not applicable.
Funding
This research was supported by the Natural Science Foundation of Shandong Province of China (ZR2017LA001) and Youth Foundation of Linyi University (LYDX2016BS023). The first author was partially supported by the Innovation Program of Shanghai Municipal Education Commission (15ZZ068), Ph.D. Program Foundation of Ministry of Education of China (20123127110002) and Program for Outstanding Academic Leaders in Shanghai City (15XD1503100).
Author information
Contributions
The two authors contributed equally. Both authors read and approved the final version of the manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Ceng, LC., Yuan, Q. Composite inertial subgradient extragradient methods for variational inequalities and fixed point problems. J Inequal Appl 2019, 274 (2019). https://doi.org/10.1186/s13660-019-2229-x